Science.gov

Sample records for adaptive predictive coding

  1. More About Vector Adaptive/Predictive Coding Of Speech

    NASA Technical Reports Server (NTRS)

    Jedrey, Thomas C.; Gersho, Allen

    1992-01-01

    Report presents additional information about digital speech-encoding and -decoding system described in "Vector Adaptive/Predictive Encoding of Speech" (NPO-17230). Summarizes development of vector adaptive/predictive coding (VAPC) system and describes basic functions of algorithm. Describes refinements introduced enabling receiver to cope with errors. VAPC algorithm implemented in integrated-circuit coding/decoding processors (codecs). VAPC and other codecs tested under variety of operating conditions. Tests designed to reveal effects of various background quiet and noisy environments and of poor telephone equipment. VAPC found competitive with and, in some respects, superior to other 4.8-kb/s codecs and other codecs of similar complexity.

  2. A trellis-searched APC (adaptive predictive coding) speech coder

    SciTech Connect

    Malone, K.T.; Fischer, T.R. (Dept. of Electrical and Computer Engineering)

    1990-01-01

    In this paper we formulate a speech coding system that incorporates trellis coded vector quantization (TCVQ) and adaptive predictive coding (APC). A method for "optimizing" the TCVQ codebooks is presented and experimental results concerning survivor path mergings are reported. Simulation results are given for encoding rates of 16 and 9.6 kbps for a variety of coder parameters. The quality of the encoded speech is deemed excellent at an encoding rate of 16 kbps and very good at 9.6 kbps. 13 refs., 2 figs., 4 tabs.

  3. Dopamine Modulates Adaptive Prediction Error Coding in the Human Midbrain and Striatum

    PubMed Central

    Ziauddeen, Hisham; Vestergaard, Martin D.; Spencer, Tom

    2017-01-01

    Learning to optimally predict rewards requires agents to account for fluctuations in reward value. Recent work suggests that individuals can efficiently learn about variable rewards through adaptation of the learning rate, and coding of prediction errors relative to reward variability. Such adaptive coding has been linked to midbrain dopamine neurons in nonhuman primates, and evidence in support of a similar role of the dopaminergic system in humans is emerging from fMRI data. Here, we sought to investigate the effect of dopaminergic perturbations on adaptive prediction error coding in humans, using a between-subject, placebo-controlled pharmacological fMRI study with a dopaminergic agonist (bromocriptine) and antagonist (sulpiride). Participants performed a previously validated task in which they predicted the magnitude of upcoming rewards drawn from distributions with varying SDs. After each prediction, participants received a reward, yielding trial-by-trial prediction errors. Under placebo, we replicated previous observations of adaptive coding in the midbrain and ventral striatum. Treatment with sulpiride attenuated adaptive coding in both midbrain and ventral striatum, and was associated with a decrease in performance, whereas bromocriptine did not have a significant impact. Although we observed no differential effect of SD on performance between the groups, computational modeling suggested decreased behavioral adaptation in the sulpiride group. These results suggest that normal dopaminergic function is critical for adaptive prediction error coding, a key property of the brain thought to facilitate efficient learning in variable environments. Crucially, these results also offer potential insights for understanding the impact of disrupted dopamine function in mental illness. SIGNIFICANCE STATEMENT To choose optimally, we have to learn what to expect. Humans dampen learning when there is a great deal of variability in reward outcome, and two brain regions that
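
    As a minimal illustrative sketch (not the authors' model), the adaptive coding idea above can be written as a value update whose prediction errors are rescaled by a running estimate of reward variability; all names and constants here are assumptions:

      import numpy as np

      def adaptive_rpe(rewards, alpha=0.1, eps=1e-6):
          """Value learning with SD-normalized (adaptive) prediction errors."""
          v, var = 0.0, 1.0                          # value and running variance
          for r in rewards:
              delta = r - v                          # raw prediction error
              scaled = delta / (np.sqrt(var) + eps)  # error relative to variability
              v += alpha * scaled                    # update dampened under high SD
              var += alpha * (delta ** 2 - var)      # track reward variability
          return v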

  4. Dopamine Modulates Adaptive Prediction Error Coding in the Human Midbrain and Striatum.

    PubMed

    Diederen, Kelly M J; Ziauddeen, Hisham; Vestergaard, Martin D; Spencer, Tom; Schultz, Wolfram; Fletcher, Paul C

    2017-02-15

    Learning to optimally predict rewards requires agents to account for fluctuations in reward value. Recent work suggests that individuals can efficiently learn about variable rewards through adaptation of the learning rate, and coding of prediction errors relative to reward variability. Such adaptive coding has been linked to midbrain dopamine neurons in nonhuman primates, and evidence in support of a similar role of the dopaminergic system in humans is emerging from fMRI data. Here, we sought to investigate the effect of dopaminergic perturbations on adaptive prediction error coding in humans, using a between-subject, placebo-controlled pharmacological fMRI study with a dopaminergic agonist (bromocriptine) and antagonist (sulpiride). Participants performed a previously validated task in which they predicted the magnitude of upcoming rewards drawn from distributions with varying SDs. After each prediction, participants received a reward, yielding trial-by-trial prediction errors. Under placebo, we replicated previous observations of adaptive coding in the midbrain and ventral striatum. Treatment with sulpiride attenuated adaptive coding in both midbrain and ventral striatum, and was associated with a decrease in performance, whereas bromocriptine did not have a significant impact. Although we observed no differential effect of SD on performance between the groups, computational modeling suggested decreased behavioral adaptation in the sulpiride group. These results suggest that normal dopaminergic function is critical for adaptive prediction error coding, a key property of the brain thought to facilitate efficient learning in variable environments. Crucially, these results also offer potential insights for understanding the impact of disrupted dopamine function in mental illness. SIGNIFICANCE STATEMENT To choose optimally, we have to learn what to expect. Humans dampen learning when there is a great deal of variability in reward outcome, and two brain regions that

  5. Sample-adaptive-prediction for HEVC SCC intra coding with ridge estimation from spatially neighboring samples

    NASA Astrophysics Data System (ADS)

    Kang, Je-Won; Ryu, Soo-Kyung

    2017-02-01

    In this paper, a sample-adaptive prediction technique is proposed to yield efficient intra-coding performance for screen content video coding. The sample-based prediction reduces spatial redundancies among neighboring samples. To this aim, the proposed technique uses a weighted linear combination of neighboring samples and applies a robust optimization technique, namely ridge estimation, to derive the weights at the decoder side. The ridge estimation uses an L2-norm-based regularization term, and thus the solution is more robust to high-variance samples, such as the sharp edges and high color contrasts exhibited in screen content videos. Experimental results demonstrate that the proposed technique provides an improved coding gain compared to the HEVC screen content coding reference software.
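
    The ridge estimation step described above can be sketched in a few lines; this is a hypothetical illustration, not the HEVC SCC reference implementation, and the toy arrays are invented:

      import numpy as np

      def ridge_weights(X, y, lam=1.0):
          """Ridge estimate: w = argmin ||Xw - y||^2 + lam * ||w||^2."""
          return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

      # Rows of X: neighbor patterns observed around already-decoded samples;
      # y: the co-located decoded samples those patterns should predict.
      X = np.array([[10., 12., 11.], [12., 13., 12.], [11., 12., 13.]])
      y = np.array([11., 12., 12.])
      w = ridge_weights(X, y, lam=0.5)
      prediction = np.array([12., 13., 12.]) @ w   # weighted neighbor combination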

  6. Object-adaptive depth compensated inter prediction for depth video coding in 3D video system

    NASA Astrophysics Data System (ADS)

    Kang, Min-Koo; Lee, Jaejoon; Lim, Ilsoon; Ho, Yo-Sung

    2011-01-01

    Nowadays, the 3D video system using the MVD (multi-view video plus depth) data format is being actively studied. The system has many advantages with respect to virtual view synthesis, such as auto-stereoscopic functionality, but compression of the huge input data remains a problem. Efficient 3D data compression is therefore extremely important in the system, and the problems of low temporal consistency and low inter-view correlation should be resolved for efficient depth video coding. In this paper, we propose an object-adaptive depth compensated inter prediction method to resolve these problems, in which the object-adaptive mean-depth difference between a current block to be coded and a reference block is compensated during inter prediction. In addition, unique properties of depth video are exploited to reduce the side information required for signaling the decoder to conduct the same process. To evaluate the coding performance, we implemented the proposed method in the MVC (multiview video coding) reference software, JMVC 8.2. Experimental results demonstrate that our proposed method is especially efficient for depth videos estimated by DERS (depth estimation reference software), discussed in the MPEG 3DV coding group. The coding gain was up to 11.69% bit saving, and it increased further when evaluated on synthesized views of virtual viewpoints.

  7. Long-range accelerated BOTDA sensor using adaptive linear prediction and cyclic coding.

    PubMed

    Muanenda, Yonas; Taki, Mohammad; Pasquale, Fabrizio Di

    2014-09-15

    We propose and experimentally demonstrate a long-range accelerated Brillouin optical time domain analysis (BOTDA) sensor that exploits the complementary noise-reduction benefits of adaptive linear prediction and optical pulse coding. The combined technique requires orders of magnitude fewer averages of the backscattered BOTDA traces than a standard single-pulse BOTDA, enabling distributed strain measurement over 10 km of standard single-mode fiber with meter-scale spatial resolution and 1.8 MHz Brillouin frequency shift resolution. By optimizing the system parameters, the measurement is achieved with only 20 averages for each scanned frequency of the Brillouin gain spectrum, allowing an eight-times-faster strain measurement compared to the use of cyclic pulse coding alone.
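
    A rough sketch of the adaptive linear prediction component (a generic normalized-LMS predictor, not the authors' exact filter; the order and step size are assumptions):

      import numpy as np

      def nlms_predict(trace, order=4, mu=0.5, eps=1e-9):
          """Denoise a BOTDA-like trace by adaptive linear prediction:
          each sample is predicted from the preceding `order` samples."""
          trace = np.asarray(trace, dtype=float)
          w = np.zeros(order)
          out = trace.copy()
          for n in range(order, len(trace)):
              x = trace[n - order:n][::-1]         # most recent sample first
              pred = w @ x
              err = trace[n] - pred
              w += mu * err * x / (x @ x + eps)    # normalized LMS update
              out[n] = pred                        # keep the predicted (smoothed) value
          return out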

  8. Lossless image compression based on optimal prediction, adaptive lifting, and conditional arithmetic coding.

    PubMed

    Boulgouris, N V; Tzovaras, D; Strintzis, M G

    2001-01-01

    The optimal predictors of a lifting scheme in the general n-dimensional case are obtained and applied for the lossless compression of still images using first quincunx sampling and then simple row-column sampling. In each case, the efficiency of the linear predictors is enhanced nonlinearly. Directional postprocessing is used in the quincunx case, and adaptive-length postprocessing in the row-column case. Both methods are seen to perform well. The resulting nonlinear interpolation schemes achieve extremely efficient image decorrelation. We further investigate context modeling and adaptive arithmetic coding of wavelet coefficients in a lossless compression framework. Special attention is given to the modeling contexts and the adaptation of the arithmetic coder to the actual data. Experimental evaluation shows that the best of the resulting coders produces better results than other known algorithms for multiresolution-based lossless image coding.
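
    A toy version of one predict/update lifting step (the coefficients shown give the familiar 5/3 wavelet; the paper derives optimal predictors instead, so p and u here are placeholders):

      import numpy as np

      def lifting_step(x, p=0.5, u=0.25):
          """Split an even-length signal into even/odd samples, predict the
          odds from the evens (detail d), then update the evens from the
          details (approximation s). np.roll gives a simple periodic
          boundary extension."""
          even, odd = x[0::2], x[1::2]
          d = odd - p * (even + np.roll(even, -1))   # prediction (high-pass)
          s = even + u * (d + np.roll(d, 1))         # update (low-pass)
          return s, d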

  9. Telescope Adaptive Optics Code

    SciTech Connect

    Phillion, D.

    2005-07-28

    The Telescope AO Code has general adaptive optics capabilities plus specialized models for three telescopes with either adaptive optics or active optics systems. It has the capability to generate either single-layer or distributed Kolmogorov turbulence phase screens using the FFT. Missing low-order spatial frequencies are added using the Karhunen-Loeve expansion. The phase structure curve is extremely close to the theoretical one. Secondly, it has the capability to simulate an adaptive optics control system. The default parameters are those of the Keck II adaptive optics system. Thirdly, it has a general wave optics capability to model the science camera halo due to scintillation from atmospheric turbulence and the telescope optics. Although this capability was implemented for the Gemini telescopes, the only default parameter specific to the Gemini telescopes is the primary mirror diameter. Finally, it has a model for the LSST active optics alignment strategy. This last model is highly specific to the LSST.

  10. Vector Adaptive/Predictive Encoding Of Speech

    NASA Technical Reports Server (NTRS)

    Chen, Juin-Hwey; Gersho, Allen

    1989-01-01

    Vector adaptive/predictive technique for digital encoding of speech signals yields decoded speech of very good quality after transmission at coding rate of 9.6 kb/s and of reasonably good quality at 4.8 kb/s. Requires 3 to 4 million multiplications and additions per second. Combines advantages of adaptive/predictive coding and of code-excited linear prediction, which yields speech of high quality but requires 600 million multiplications and additions per second at encoding rate of 4.8 kb/s. Vector adaptive/predictive coding technique bridges gaps in performance and complexity between adaptive/predictive coding and code-excited linear prediction.

  11. Predictive coding of multisensory timing

    PubMed Central

    Shi, Zhuanghua; Burr, David

    2016-01-01

    The sense of time is foundational for perception and action, yet it frequently departs significantly from physical time. In this paper we review recent progress on temporal contextual effects, multisensory temporal integration, temporal recalibration, and related computational models. We suggest that subjective time arises from minimizing prediction errors and adaptive recalibration, which can be unified in the framework of predictive coding, a framework rooted in Helmholtz’s ‘perception as inference’. PMID:27695705

  12. Aeroacoustic Prediction Codes

    NASA Technical Reports Server (NTRS)

    Gliebe, P; Mani, R.; Shin, H.; Mitchell, B.; Ashford, G.; Salamah, S.; Connell, S.; Huff, Dennis (Technical Monitor)

    2000-01-01

    This report describes work performed on Contract NAS3-27720, AoI 13, as part of the NASA Advanced Subsonic Transport (AST) Noise Reduction Technology effort. Computer codes were developed to provide quantitative prediction, design, and analysis capability for several aircraft engine noise sources. The objective was to provide improved, physics-based tools for exploration of noise-reduction concepts and understanding of experimental results. Methods and codes focused on fan broadband and 'buzz saw' noise and on low-emissions combustor noise, and complement work done by other contractors under the NASA AST program to develop methods and codes for fan harmonic tone noise and jet noise. The methods and codes developed and reported herein employ a wide range of approaches, from the strictly empirical to the completely computational, with some being semiempirical, analytical, and/or analytical/computational. Emphasis was on capturing the essential physics while still considering method or code utility as a practical design and analysis tool for everyday engineering use. Codes and prediction models were developed for: (1) an improved empirical correlation model for fan rotor exit flow mean and turbulence properties, for use in predicting broadband noise generated by rotor exit flow turbulence interaction with downstream stator vanes; (2) fan broadband noise models for rotor and stator/turbulence interaction sources, including 3D effects, noncompact-source effects, directivity modeling, and extensions to the rotor supersonic tip-speed regime; (3) fan multiple-pure-tone in-duct sound pressure prediction methodology based on computational fluid dynamics (CFD) analysis; and (4) low-emissions combustor prediction methodology and computer code based on CFD and actuator disk theory. In addition, the relative importance of dipole and quadrupole source mechanisms was studied using direct CFD source computation for a simple cascade/gust interaction problem, and an empirical combustor

  13. Driver Code for Adaptive Optics

    NASA Technical Reports Server (NTRS)

    Rao, Shanti

    2007-01-01

    A special-purpose computer code for a deformable-mirror adaptive-optics control system transmits pixel-registered control from (1) a personal computer running software that generates the control data to (2) a circuit board with 128 digital-to-analog converters (DACs) that generate voltages to drive the deformable-mirror actuators. This program reads control-voltage codes from a text file, then sends them, via the computer's parallel port, to a circuit board with four AD5535 (or equivalent) chips. Whereas a similar prior computer program was capable of transmitting data to only one chip at a time, this program can send data to four chips simultaneously. This program is in the form of C-language code that can be compiled and linked into an adaptive-optics software system. The program as supplied includes source code for integration into the adaptive-optics software, documentation, and a component that provides a demonstration of loading DAC codes from a text file. On a standard Windows desktop computer, the software can update 128 channels in 10 ms. On Real-Time Linux with a digital I/O card, the software can update 1024 channels (8 boards in parallel) every 8 ms.

  14. AEST: Adaptive Eigenvalue Stability Code

    NASA Astrophysics Data System (ADS)

    Zheng, L.-J.; Kotschenreuther, M.; Waelbroeck, F.; van Dam, J. W.; Berk, H.

    2002-11-01

    An adaptive eigenvalue linear stability code is developed. The aim is, on one hand, to include non-ideal MHD effects in the global MHD stability calculation for both low- and high-n modes and, on the other hand, to resolve the numerical difficulty involving the MHD singularity on rational surfaces at marginal stability. Our code follows parts of the philosophy of DCON by abandoning relaxation methods based on radial finite element expansion in favor of an efficient shooting procedure with adaptive gridding. The δW criterion is replaced by the shooting procedure and a subsequent matrix eigenvalue problem. Since the technique of expanding a general solution into a summation of independent solutions is employed, the rank of the matrices involved is just a few hundred. This makes it easier to solve the eigenvalue problem with non-ideal MHD effects, such as FLR or even full kinetic effects, as well as the plasma rotation effect, taken into account. To include kinetic effects, the approach of solving for the distribution function as a local eigenvalue ω problem, as in the GS2 code, will be employed in the future. Comparison of the ideal MHD version of the code with DCON, PEST, and GATO will be discussed. The non-ideal MHD version of the code will be employed to study, as an application, transport barrier physics in tokamak discharges.

  15. Predictive depth coding of wavelet transformed images

    NASA Astrophysics Data System (ADS)

    Lehtinen, Joonas

    1999-10-01

    In this paper, a new prediction-based method, predictive depth coding, for lossy wavelet image compression is presented. It compresses a wavelet pyramid decomposition by predicting the number of significant bits in each wavelet coefficient quantized by universal scalar quantization, and then coding the prediction error with arithmetic coding. The adaptively found linear prediction context covers spatial neighbors of the coefficient to be predicted and the corresponding coefficients at the lower scale and in the different orientation pyramids. In addition to the number of significant bits, the sign and the bits of non-zero coefficients are coded. The compression method is tested with a standard set of images and the results are compared with SFQ, SPIHT, EZW, and context-based algorithms. Even though the algorithm is very simple and does not require any extra memory, the compression results are relatively good.
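
    The significant-bit prediction can be sketched as follows (an illustrative reading of the scheme, not the author's code; the context weights are placeholders):

      import numpy as np

      def bit_depth(q):
          """Number of significant bits of a quantized coefficient."""
          return int(abs(int(q))).bit_length()

      def depth_residual(coeff, context, weights):
          """Predict the coefficient's bit depth from a linear combination of
          context depths; the small integer residual is what would then be
          sent to the arithmetic coder."""
          depths = [bit_depth(c) for c in context]
          pred = int(round(float(np.dot(weights, depths))))
          return bit_depth(coeff) - pred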

  16. Dopamine reward prediction error coding.

    PubMed

    Schultz, Wolfram

    2016-03-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards—an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.
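
    The signed error signal described here is captured by the classic delta-rule sketch below (illustrative, with an assumed fixed learning rate):

      def reward_prediction_errors(rewards, alpha=0.1):
          """Track a value estimate; delta > 0 mirrors dopamine activation,
          delta == 0 baseline firing, and delta < 0 depressed activity."""
          v, deltas = 0.0, []
          for r in rewards:
              delta = r - v          # reward prediction error
              deltas.append(delta)
              v += alpha * delta     # learn from the error
          return v, deltas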

  17. Dopamine reward prediction error coding

    PubMed Central

    Schultz, Wolfram

    2016-01-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards—an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware. PMID:27069377

  18. Code-excited linear predictive coding of multispectral MR images

    NASA Astrophysics Data System (ADS)

    Hu, Jian-Hong; Wang, Yao; Cahill, Patrick

    1996-02-01

    This paper reports a multispectral code-excited linear predictive coding method for the compression of well-registered multispectral MR images. Different linear prediction models and adaptation schemes have been compared. The method which uses a forward adaptive autoregressive (AR) model has proven to achieve a good compromise between performance, complexity, and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over non-overlapping square macroblocks. Each macroblock is further divided into several microblocks, and the best excitation signals for each microblock are determined through an analysis-by-synthesis procedure. To satisfy the high quality requirement for medical images, the error between the original images and the synthesized ones is further specified using a vector quantizer. The MFCELP method has been applied to 26 sets of clinical MR neuro images (20 slices/set, 3 spectral bands/slice, 256 by 256 pixels/image, 12 bits/pixel). It provides a significant improvement over the discrete cosine transform (DCT) based JPEG method, a wavelet transform based embedded zero-tree wavelet (EZW) coding method, as well as the MSARMA method we developed before.

  19. SAGE - MULTIDIMENSIONAL SELF-ADAPTIVE GRID CODE

    NASA Technical Reports Server (NTRS)

    Davies, C. B.

    1994-01-01

    SAGE, Self Adaptive Grid codE, is a flexible tool for adapting and restructuring both 2D and 3D grids. Solution-adaptive grid methods are useful tools for efficient and accurate flow predictions. In supersonic and hypersonic flows, strong gradient regions such as shocks, contact discontinuities, shear layers, etc., require careful distribution of grid points to minimize grid error and produce accurate flow-field predictions. SAGE helps the user obtain more accurate solutions by intelligently redistributing (i.e. adapting) the original grid points based on an initial or interim flow-field solution. The user then computes a new solution using the adapted grid as input to the flow solver. The adaptive-grid methodology poses the problem in an algebraic, unidirectional manner for multi-dimensional adaptations. The procedure is analogous to applying tension and torsion spring forces proportional to the local flow gradient at every grid point and finding the equilibrium position of the resulting system of grid points. The multi-dimensional problem of grid adaption is split into a series of one-dimensional problems along the computational coordinate lines. The reduced one-dimensional problem then requires a tridiagonal solver to find the location of grid points along a coordinate line. Multi-directional adaption is achieved by the sequential application of the method in each coordinate direction. The tension forces direct the redistribution of points to the strong gradient region. To maintain smoothness and a measure of orthogonality of grid lines, torsional forces are introduced that relate information between the family of lines adjacent to one another. The smoothness and orthogonality constraints are direction-dependent, since they relate only the coordinate lines that are being adapted to the neighboring lines that have already been adapted. Therefore the solutions are non-unique and depend on the order and direction of adaption. Non-uniqueness of the adapted grid is
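
    The one-dimensional subproblem reduces to a tridiagonal solve along each coordinate line; a standard Thomas-algorithm sketch (generic, not SAGE's own implementation) is:

      import numpy as np

      def thomas_solve(a, b, c, d):
          """Solve a tridiagonal system with sub-diagonal a, diagonal b,
          super-diagonal c, and right-hand side d (all length n; a[0] and
          c[-1] are unused)."""
          n = len(d)
          cp, dp = np.empty(n), np.empty(n)
          cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
          for i in range(1, n):
              m = b[i] - a[i] * cp[i - 1]
              cp[i] = c[i] / m if i < n - 1 else 0.0
              dp[i] = (d[i] - a[i] * dp[i - 1]) / m
          x = np.empty(n)
          x[-1] = dp[-1]
          for i in range(n - 2, -1, -1):
              x[i] = dp[i] - cp[i] * x[i + 1]
          return x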

  20. Neural Elements for Predictive Coding

    PubMed Central

    Shipp, Stewart

    2016-01-01

    Predictive coding theories of sensory brain function interpret the hierarchical construction of the cerebral cortex as a Bayesian, generative model capable of predicting the sensory data consistent with any given percept. Predictions are fed backward in the hierarchy and reciprocated by prediction error in the forward direction, acting to modify the representation of the outside world at increasing levels of abstraction, and so to optimize the nature of perception over a series of iterations. This accounts for many ‘illusory’ instances of perception where what is seen (heard, etc.) is unduly influenced by what is expected, based on past experience. This simple conception, the hierarchical exchange of prediction and prediction error, confronts a rich cortical microcircuitry that is yet to be fully documented. This article presents the view that, in the current state of theory and practice, it is profitable to begin a two-way exchange: that predictive coding theory can support an understanding of cortical microcircuit function, and prompt particular aspects of future investigation, whilst existing knowledge of microcircuitry can, in return, influence theoretical development. As an example, a neural inference arising from the earliest formulations of predictive coding is that the source populations of forward and backward pathways should be completely separate, given their functional distinction; this aspect of circuitry – that neurons with extrinsically bifurcating axons do not project in both directions – has only recently been confirmed. Here, the computational architecture prescribed by a generalized (free-energy) formulation of predictive coding is combined with the classic ‘canonical microcircuit’ and the laminar architecture of hierarchical extrinsic connectivity to produce a template schematic, that is further examined in the light of (a) updates in the microcircuitry of primate visual cortex, and (b) rapid technical advances made possible by

  1. ICAN Computer Code Adapted for Building Materials

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.

    1997-01-01

    The NASA Lewis Research Center has been involved in developing composite micromechanics and macromechanics theories over the last three decades. These activities have resulted in several composite mechanics theories and structural analysis codes whose applications range from material behavior design and analysis to structural component response. One of these computer codes, the Integrated Composite Analyzer (ICAN), is designed primarily to address issues related to designing polymer matrix composites and predicting their properties - including hygral, thermal, and mechanical load effects. Recently, under a cost-sharing cooperative agreement with a Fortune 500 corporation, Master Builders Inc., ICAN was adapted to analyze building materials. The high costs and technical difficulties involved with the fabrication of continuous-fiber-reinforced composites sometimes limit their use. Particulate-reinforced composites can be thought of as a viable alternative. They are as easily processed to near-net shape as monolithic materials, yet have the improved stiffness, strength, and fracture toughness that is characteristic of continuous-fiber-reinforced composites. For example, particle-reinforced metal-matrix composites show great potential for a variety of automotive applications, such as disk brake rotors, connecting rods, cylinder liners, and other high-temperature applications. Building materials, such as concrete, can be thought of as one of the oldest materials in this category of multiphase, particle-reinforced materials. The adaptation of ICAN to analyze particle-reinforced composite materials involved the development of new micromechanics-based theories. A derivative of the ICAN code, ICAN/PART, was developed and delivered to Master Builders Inc. as a part of the cooperative activity.

  2. Perceptually-Based Adaptive JPEG Coding

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Rosenholtz, Ruth; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    An extension to the JPEG standard (ISO/IEC DIS 10918-3) allows spatial adaptive coding of still images. As with baseline JPEG coding, one quantization matrix applies to an entire image channel, but in addition the user may specify a multiplier for each 8 x 8 block, which multiplies the quantization matrix, yielding the new matrix for the block. MPEG 1 and 2 use much the same scheme, except there the multiplier changes only on macroblock boundaries. We propose a method for perceptual optimization of the set of multipliers. We compute the perceptual error for each block based upon DCT quantization error adjusted according to contrast sensitivity, light adaptation, and contrast masking, and pick the set of multipliers which yield maximally flat perceptual error over the blocks of the image. We investigate the bitrate savings due to this adaptive coding scheme and the relative importance of the different sorts of masking on adaptive coding.
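
    The per-block quantization described above amounts to scaling one base matrix; a minimal sketch (generic JPEG-style quantization, with the multiplier assumed to come from the perceptual optimization):

      import numpy as np

      def quantize_block(dct_block, qmatrix, multiplier):
          """Quantize an 8x8 DCT block with the base quantization matrix
          scaled by the block's multiplier; larger multipliers quantize
          more coarsely."""
          return np.round(dct_block / (qmatrix * multiplier)).astype(int)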

  3. Lossless Video Sequence Compression Using Adaptive Prediction

    NASA Technical Reports Server (NTRS)

    Li, Ying; Sayood, Khalid

    2007-01-01

    We present an adaptive lossless video compression algorithm based on predictive coding. The proposed algorithm exploits temporal, spatial, and spectral redundancies in a backward adaptive fashion with extremely low side information. The computational complexity is further reduced by using a caching strategy. We also study the relationship between the operational domain for the coder (wavelet or spatial) and the amount of temporal and spatial redundancy in the sequence being encoded. Experimental results show that the proposed scheme provides significant improvements in compression efficiencies.

  4. Adapting hierarchical bidirectional inter prediction on a GPU-based platform for 2D and 3D H.264 video coding

    NASA Astrophysics Data System (ADS)

    Rodríguez-Sánchez, Rafael; Martínez, José Luis; Cock, Jan De; Fernández-Escribano, Gerardo; Pieters, Bart; Sánchez, José L.; Claver, José M.; de Walle, Rik Van

    2013-12-01

    The H.264/AVC video coding standard introduces some improved tools in order to increase compression efficiency. Moreover, the multi-view extension of H.264/AVC, called H.264/MVC, adopts many of them. Among the new features, variable block-size motion estimation is one which contributes to high coding efficiency. Furthermore, it defines a different prediction structure that includes hierarchical bidirectional pictures, outperforming traditional Group of Pictures patterns in both scenarios: single-view and multi-view. However, these video coding techniques have high computational complexity. Several techniques have been proposed in the literature over the last few years which are aimed at accelerating the inter prediction process, but there are no works focusing on bidirectional prediction or hierarchical prediction. In this article, with the emergence of many-core processors or accelerators, a step forward is taken towards an implementation of an H.264/AVC and H.264/MVC inter prediction algorithm on a graphics processing unit. The results show a negligible rate distortion drop with a time reduction of up to 98% for the complete H.264/AVC encoder.

  5. Canonical microcircuits for predictive coding

    PubMed Central

    Bastos, Andre M.; Usrey, W. Martin; Adams, Rick A.; Mangun, George R.; Fries, Pascal; Friston, Karl J.

    2013-01-01

    This review considers the influential notion of a canonical (cortical) microcircuit in light of recent theories about neuronal processing. Specifically, we conciliate quantitative studies of microcircuitry and the functional logic of neuronal computations. We revisit the established idea that message passing among hierarchical cortical areas implements a form of Bayesian inference – paying careful attention to the implications for intrinsic connections among neuronal populations. By deriving canonical forms for these computations, one can associate specific neuronal populations with specific computational roles. This analysis discloses a remarkable correspondence between the microcircuitry of the cortical column and the connectivity implied by predictive coding. Furthermore, it provides some intuitive insights into the functional asymmetries between feedforward and feedback connections and the characteristic frequencies over which they operate. PMID:23177956

  6. Adaptable recursive binary entropy coding technique

    NASA Astrophysics Data System (ADS)

    Kiely, Aaron B.; Klimesh, Matthew A.

    2002-07-01

    We present a novel data compression technique, called recursive interleaved entropy coding, that is based on recursive interleaving of variable-to-variable length binary source codes. A compression module implementing this technique has the same functionality as arithmetic coding and can be used as the engine in various data compression algorithms. The encoder compresses a bit sequence by recursively encoding groups of bits that have similar estimated statistics, ordering the output in a way that is suited to the decoder. As a result, the decoder has low complexity. The encoding process for our technique is adaptable in that each bit to be encoded has an associated probability-of-zero estimate that may depend on previously encoded bits; this adaptability allows more effective compression. Recursive interleaved entropy coding may have advantages over arithmetic coding, including most notably the admission of a simple and fast decoder. Much variation is possible in the choice of component codes and in the interleaving structure, yielding coder designs of varying complexity and compression efficiency; coder designs that achieve arbitrarily small redundancy can be produced. We discuss coder design and performance estimation methods. We present practical encoding and decoding algorithms, as well as measured performance results.
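
    The per-bit adaptability can be illustrated with a simple count-based probability-of-zero estimator (a generic model; the paper allows estimates that depend on richer context):

      def p0_estimates(bits, c0=1, c1=1):
          """Return the probability-of-zero estimate in effect before each
          bit is coded, updating zero/one counts as bits are seen."""
          out = []
          for b in bits:
              out.append(c0 / (c0 + c1))   # estimate used to code this bit
              if b == 0:
                  c0 += 1
              else:
                  c1 += 1
          return out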

  7. A review of predictive coding algorithms.

    PubMed

    Spratling, M W

    2017-03-01

    Predictive coding is a leading theory of how the brain performs probabilistic inference. However, there are a number of distinct algorithms which are described by the term "predictive coding". This article provides a concise review of these different predictive coding algorithms, highlighting their similarities and differences. Five algorithms are covered: linear predictive coding which has a long and influential history in the signal processing literature; the first neuroscience-related application of predictive coding to explaining the function of the retina; and three versions of predictive coding that have been proposed to model cortical function. While all these algorithms aim to fit a generative model to sensory data, they differ in the type of generative model they employ, in the process used to optimise the fit between the model and sensory data, and in the way that they are related to neurobiology.

  8. Adaptive down-sampling video coding

    NASA Astrophysics Data System (ADS)

    Wang, Ren-Jie; Chien, Ming-Chen; Chang, Pao-Chi

    2010-01-01

    Down-sampling coding, which sub-samples the image and encodes the smaller-sized image, is one solution for raising image quality when the rate is insufficiently high. In this work, we propose an Adaptive Down-Sampling (ADS) coding for H.264/AVC. The overall system distortion can be analyzed as the sum of the down-sampling distortion and the coding distortion. The down-sampling distortion is mainly the loss of high-frequency components and is highly dependent on the spatial difference. The coding distortion can be derived from classical rate-distortion theory. For a given rate and video sequence, the optimal down-sampling resolution ratio can be derived by using optimization theory to minimize the system distortion based on the two distortion models. This optimal resolution ratio is used in both the down-sampling and up-sampling processes of the ADS coding scheme. As a result, the rate-distortion performance of ADS coding is always higher than that of fixed-ratio coding or H.264/AVC by 2 to 4 dB at low to medium rates.
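
    A schematic of the ratio selection (the two distortion models below are invented stand-ins for the paper's fitted models of high-frequency loss and rate-distortion behavior):

      import numpy as np

      def best_ratio(spatial_diff, rate, ratios=np.linspace(0.4, 1.0, 13)):
          """Pick the down-sampling ratio minimizing modeled total distortion
          D(r) = D_down(r) + D_coding(r)."""
          d_down = spatial_diff * (1.0 - ratios) ** 2   # high-frequency loss
          d_code = 1.0 / (1.0 + rate / ratios ** 2)     # fewer pixels, more bits each
          return ratios[np.argmin(d_down + d_code)]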

  9. Two-layer and Adaptive Entropy Coding Algorithms for H.264-based Lossless Image Coding

    DTIC Science & Technology

    2008-04-01

    …context-based adaptive binary arithmetic coding (CABAC) [7] and context-based adaptive variable length coding (CAVLC) [3] should be adaptively adopted for advancing… [7] H. Schwarz, D. Marpe and T. Wiegand, Context-based adaptive binary arithmetic coding in the H.264/AVC video compression standard, IEEE…

  10. The cortical modulation of stimulus-specific adaptation in the auditory midbrain and thalamus: a potential neuronal correlate for predictive coding

    PubMed Central

    Malmierca, Manuel S.; Anderson, Lucy A.; Antunes, Flora M.

    2015-01-01

    To follow an ever-changing auditory scene, the auditory brain is continuously creating a representation of the past to form expectations about the future. Unexpected events will produce an error in the predictions that should “trigger” the network’s response. Indeed, neurons in the auditory midbrain, thalamus and cortex, respond to rarely occurring sounds while adapting to frequently repeated ones, i.e., they exhibit stimulus specific adaptation (SSA). SSA cannot be explained solely by intrinsic membrane properties, but likely involves the participation of the network. Thus, SSA is envisaged as a high order form of adaptation that requires the influence of cortical areas. However, present research supports the hypothesis that SSA, at least in its simplest form (i.e., to frequency deviants), can be transmitted in a bottom-up manner through the auditory pathway. Here, we briefly review the underlying neuroanatomy of the corticofugal projections before discussing state of the art studies which demonstrate that SSA present in the medial geniculate body (MGB) and inferior colliculus (IC) is not inherited from the cortex but can be modulated by the cortex via the corticofugal pathways. By modulating the gain of neurons in the thalamus and midbrain, the auditory cortex (AC) would refine SSA subcortically, preventing irrelevant information from reaching the cortex. PMID:25805974

  11. Adaptive discrete cosine transform based image coding

    NASA Astrophysics Data System (ADS)

    Hu, Neng-Chung; Luoh, Shyan-Wen

    1996-04-01

    In this discrete cosine transform (DCT) based image coding, the DCT kernel matrix is decomposed into a product of two matrices. The first matrix is called the discrete cosine preprocessing transform (DCPT), whose kernels are plus or minus 1 or plus or minus one-half. The second matrix is the postprocessing stage, treated as a correction stage that converts the DCPT to the DCT. On applying the DCPT to image coding, image blocks are processed by the DCPT, and a decision is then made as to whether the processed image blocks are inactive or active in the DCPT domain. If the processed image blocks are inactive, the compactness of the processed image blocks is the same as that of the image blocks processed by the DCT. However, if the processed image blocks are active, a correction process is required; this is achieved by multiplying the processed image block by the postprocessing stage. As a result, this adaptive image coding achieves the same performance as DCT image coding, and both the overall computation and the round-off error are reduced, because both the DCPT and the postprocessing stage can be implemented by distributed arithmetic or fast computation algorithms.

  12. Rate-distortion optimized adaptive transform coding

    NASA Astrophysics Data System (ADS)

    Lim, Sung-Chang; Kim, Dae-Yeon; Jeong, Seyoon; Choi, Jin Soo; Choi, Haechul; Lee, Yung-Lyul

    2009-08-01

    We propose a rate-distortion optimized transform coding method that adaptively employs either an integer cosine transform, an integer-approximated version of the discrete cosine transform (DCT), or an integer sine transform (IST), in a rate-distortion sense. The DCT, which has been adopted in most video-coding standards, is known to be a suboptimal substitute for the Karhunen-Loève transform. However, depending on the correlation of a signal, an alternative transform can achieve higher coding efficiency. We introduce a discrete sine transform (DST) that achieves high energy compactness in a correlation-coefficient range of -0.5 to 0.5 and apply it to the current design of H.264/AVC (advanced video coding). Moreover, to avoid encoder-decoder mismatch and keep the implementation simple, an IST that is an integer-approximated version of the DST is developed. The experimental results show that the proposed method achieves a Bjøntegaard delta-rate gain of up to 5.49% compared to Joint Model 11.0.
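
    The per-block transform choice can be sketched as a rate-distortion comparison (illustrative only: floating-point DCT/DST stand in for the integer transforms, rate is proxied by a nonzero count, and q and lam are assumed values):

      import numpy as np
      from scipy.fft import dct, dst

      def pick_transform(block, q=10.0, lam=0.85):
          """Return the transform name and levels minimizing J = D + lam * R."""
          best = None
          for name, fwd in (("dct", dct), ("dst", dst)):
              coeffs = fwd(np.asarray(block, dtype=float), norm="ortho")
              levels = np.round(coeffs / q)
              dist = float(np.sum((coeffs - levels * q) ** 2))  # quantization error
              rate = float(np.count_nonzero(levels))            # crude rate proxy
              cost = dist + lam * rate
              if best is None or cost < best[0]:
                  best = (cost, name, levels)
          return best[1], best[2]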

  13. Adaptive Dynamic Event Tree in RAVEN code

    SciTech Connect

    Alfonsi, Andrea; Rabiti, Cristian; Mandelli, Diego; Cogliati, Joshua Joseph; Kinoshita, Robert Arthur

    2014-11-01

    RAVEN is a software tool that is focused on performing statistical analysis of stochastic dynamic systems. RAVEN has been designed in a highly modular and pluggable way in order to enable easy integration of different programming languages (i.e., C++, Python) and coupling with other applications (system codes). Among the several capabilities currently present in RAVEN, there are five different sampling strategies: Monte Carlo, Latin Hypercube, Grid, Adaptive, and Dynamic Event Tree (DET) sampling methodologies. The scope of this paper is to present a new sampling approach, currently under definition and implementation: an evolution of the DET methodology.

  14. TranAir: A full-potential, solution-adaptive, rectangular grid code for predicting subsonic, transonic, and supersonic flows about arbitrary configurations. Theory document

    NASA Technical Reports Server (NTRS)

    Johnson, F. T.; Samant, S. S.; Bieterman, M. B.; Melvin, R. G.; Young, D. P.; Bussoletti, J. E.; Hilmes, C. L.

    1992-01-01

    A new computer program, called TranAir, for analyzing complex configurations in transonic flow (with subsonic or supersonic freestream) was developed. This program provides accurate and efficient simulations of nonlinear aerodynamic flows about arbitrary geometries with the ease and flexibility of a typical panel method program. The numerical method implemented in TranAir is described. The method solves the full potential equation subject to a set of general boundary conditions and can handle regions with differing total pressure and temperature. The boundary value problem is discretized using the finite element method on a locally refined rectangular grid. The grid is automatically constructed by the code and is superimposed on the boundary described by networks of panels; thus no surface fitted grid generation is required. The nonlinear discrete system arising from the finite element method is solved using a preconditioned Krylov subspace method embedded in an inexact Newton method. The solution is obtained on a sequence of successively refined grids which are either constructed adaptively based on estimated solution errors or are predetermined based on user inputs. Many results obtained by using TranAir to analyze aerodynamic configurations are presented.

  15. Adaptive directional lifting-based wavelet transform for image coding.

    PubMed

    Ding, Wenpeng; Wu, Feng; Wu, Xiaolin; Li, Shipeng; Li, Houqiang

    2007-02-01

    We present a novel 2-D wavelet transform scheme of adaptive directional lifting (ADL) in image coding. Instead of alternately applying horizontal and vertical lifting, as in present practice, ADL performs lifting-based prediction in local windows in the direction of high pixel correlation. Hence, it adapts far better to the image orientation features in local windows. The ADL transform is achieved by existing 1-D wavelets and is seamlessly integrated into the global wavelet transform. The predicting and updating signals of ADL can be derived even at the fractional pixel precision level to achieve high directional resolution, while still maintaining perfect reconstruction. To enhance the ADL performance, a rate-distortion optimized directional segmentation scheme is also proposed to form and code a hierarchical image partition adapting to local features. Experimental results show that the proposed ADL-based image coding technique outperforms JPEG 2000 in both PSNR and visual quality, with the improvement up to 2.0 dB on images with rich orientation features.

  16. Motion-compensated wavelet video coding using adaptive mode selection

    NASA Astrophysics Data System (ADS)

    Zhai, Fan; Pappas, Thrasyvoulos N.

    2004-01-01

    A motion-compensated wavelet video coder is presented that uses adaptive mode selection (AMS) for each macroblock (MB). The block-based motion estimation is performed in the spatial domain, and an embedded zerotree wavelet coder (EZW) is employed to encode the residue frame. In contrast to other motion-compensated wavelet video coders, where all the MBs are forced to be in INTER mode, we construct the residue frame by combining the prediction residual of the INTER MBs with the coding residual of the INTRA and INTER_ENCODE MBs. Different from INTER MBs that are not coded, the INTRA and INTER_ENCODE MBs are encoded separately by a DCT coder. By adaptively selecting the quantizers of the INTRA and INTER_ENCODE coded MBs, our goal is to equalize the characteristics of the residue frame in order to improve the overall coding efficiency of the wavelet coder. The mode selection is based on the variance of the MB, the variance of the prediction error, and the variance of the neighboring MBs' residual. Simulations show that the proposed motion-compensated wavelet video coder achieves a gain of around 0.7-0.8 dB PSNR over MPEG-2 TM5, and a comparable PSNR to other 2D motion-compensated wavelet-based video codecs. It also provides potential visual quality improvement.
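
    The variance-driven mode decision might look like the following sketch (the thresholding rule and constant are assumptions, not the paper's exact criterion):

      import numpy as np

      def select_mode(mb, pred_err, neighbor_resid_var, t=1.2):
          """Choose a macroblock mode from the three variances named above."""
          v_mb, v_err = float(np.var(mb)), float(np.var(pred_err))
          if v_err < t * neighbor_resid_var:
              return "INTER"                 # prediction residual left uncoded
          return "INTRA" if v_mb < v_err else "INTER_ENCODE"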

  17. Fast prediction algorithm for multiview video coding

    NASA Astrophysics Data System (ADS)

    Abdelazim, Abdelrahman; Mein, Stephen James; Varley, Martin Roy; Ait-Boudaoud, Djamel

    2013-03-01

    The H.264/multiview video coding (MVC) standard has been developed to enable efficient coding for three-dimensional and multiple viewpoint video sequences. The inter-view statistical dependencies are utilized and an inter-view prediction is employed to provide more efficient coding; however, this increases the overall encoding complexity. Motion homogeneity is exploited here to selectively enable inter-view prediction, and to reduce complexity in the motion estimation (ME) and the mode selection processes. This has been accomplished by defining situations that relate macro-blocks' motion characteristics to the mode selection and the inter-view prediction processes. When comparing the proposed algorithm to the H.264/MVC reference software and other recent work, the experimental results demonstrate a significant reduction in ME time while maintaining similar rate-distortion performance.
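
    The homogeneity test can be as simple as the sketch below (an illustrative stand-in; the paper defines several such situations relating motion characteristics to the decision):

      import numpy as np

      def use_interview_prediction(neighbor_mvs, var_thresh=4.0):
          """Enable the costly inter-view search only when the neighboring
          motion vectors (an (n, 2) array) are inhomogeneous."""
          mvs = np.asarray(neighbor_mvs, dtype=float)
          return bool(np.var(mvs, axis=0).sum() > var_thresh)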

  18. An Adaptive Motion Estimation Scheme for Video Coding

    PubMed Central

    Gao, Yuan; Jia, Kebin

    2014-01-01

    The unsymmetrical-cross multihexagon-grid search (UMHexagonS) is one of the best fast Motion Estimation (ME) algorithms in video encoding software. It achieves excellent coding performance by using a hybrid block-matching search pattern and multiple initial search point predictors, at the cost of increased ME computational complexity. Reducing the time consumed by ME is one of the key factors in improving video coding efficiency. In this paper, we propose an adaptive motion estimation scheme to further reduce the calculation redundancy of UMHexagonS. Firstly, new motion estimation search patterns are designed according to the statistical results of motion vector (MV) distribution information. Secondly, an MV distribution prediction method is designed, including prediction of the size and direction of the MV. Finally, according to the MV distribution prediction results, self-adaptive subregional searching is achieved with the new search patterns. Experimental results show that more than 50% of total search points are dramatically reduced compared to the UMHexagonS algorithm in JM 18.4 of H.264/AVC. As a result, the proposed scheme can save up to 20.86% of ME time while the rate-distortion performance is not compromised. PMID:24672313

  19. Adaptive neural coding: from biological to behavioral decision-making

    PubMed Central

    Louie, Kenway; Glimcher, Paul W.; Webb, Ryan

    2015-01-01

    Empirical decision-making in diverse species deviates from the predictions of normative choice theory, but why such suboptimal behavior occurs is unknown. Here, we propose that deviations from optimality arise from biological decision mechanisms that have evolved to maximize choice performance within intrinsic biophysical constraints. Sensory processing utilizes specific computations such as divisive normalization to maximize information coding in constrained neural circuits, and recent evidence suggests that analogous computations operate in decision-related brain areas. These adaptive computations implement a relative value code that may explain the characteristic context-dependent nature of behavioral violations of classical normative theory. Examining decision-making at the computational level thus provides a crucial link between the architecture of biological decision circuits and the form of empirical choice behavior. PMID:26722666
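
    Divisive normalization itself is a one-liner; in this hypothetical sketch each option's value is coded relative to the summed value of the choice set:

      import numpy as np

      def divisive_normalization(values, sigma=1.0):
          """Relative value code: v_i / (sigma + sum_j v_j), where sigma is
          a semisaturation constant."""
          v = np.asarray(values, dtype=float)
          return v / (sigma + v.sum())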

  20. Predictive coding of music--brain responses to rhythmic incongruity.

    PubMed

    Vuust, Peter; Ostergaard, Leif; Pallesen, Karen Johanne; Bailey, Christopher; Roepstorff, Andreas

    2009-01-01

    During the last decades, models of music processing in the brain have mainly discussed the specificity of brain modules involved in processing different musical components. We argue that predictive coding offers an explanatory framework for functional integration in musical processing. Further, we provide empirical evidence for such a network in the analysis of event-related MEG-components to rhythmic incongruence in the context of strong metric anticipation. This is seen in a mismatch negativity (MMNm) and a subsequent P3am component, which have the properties of an error term and a subsequent evaluation in a predictive coding framework. There were both quantitative and qualitative differences in the evoked responses in expert jazz musicians compared with rhythmically unskilled non-musicians. We propose that these differences trace a functional adaptation and/or a genetic pre-disposition in experts which allows for a more precise rhythmic prediction.

  1. GAMER: GPU-accelerated Adaptive MEsh Refinement code

    NASA Astrophysics Data System (ADS)

    Schive, Hsi-Yu; Tsai, Yu-Chih; Chiueh, Tzihong

    2016-12-01

    GAMER (GPU-accelerated Adaptive MEsh Refinement) serves as a general-purpose adaptive mesh refinement + GPU framework and solves hydrodynamics with self-gravity. The code supports adaptive mesh refinement (AMR), hydrodynamics with self-gravity, and a variety of GPU-accelerated hydrodynamic and Poisson solvers. It also supports hybrid OpenMP/MPI/GPU parallelization, concurrent CPU/GPU execution for performance optimization, and Hilbert space-filling curve for load balance. Although the code is designed for simulating galaxy formation, it can be easily modified to solve a variety of applications with different governing equations. All optimization strategies implemented in the code can be inherited straightforwardly.

  2. Multispectral code excited linear prediction coding and its application in magnetic resonance images.

    PubMed

    Hu, J H; Wang, Y; Cahill, P T

    1997-01-01

    This paper reports a multispectral code excited linear prediction (MCELP) method for the compression of multispectral images. Different linear prediction models and adaptation schemes have been compared. The method that uses a forward adaptive autoregressive (AR) model has been proven to achieve a good compromise between performance, complexity, and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over nonoverlapping three-dimensional (3-D) macroblocks. Each macroblock is further divided into several 3-D microblocks, and the best excitation signal for each microblock is determined through an analysis-by-synthesis procedure. The MFCELP method has been applied to multispectral magnetic resonance (MR) images. To satisfy the high quality requirement for medical images, the error between the original image set and the synthesized one is further specified using a vector quantizer. This method has been applied to images from 26 clinical MR neuro studies (20 slices/study, three spectral bands/slice, 256x256 pixels/band, 12 b/pixel). The MFCELP method provides a significant visual improvement over the discrete cosine transform (DCT) based Joint Photographic Experts Group (JPEG) method, the wavelet transform based embedded zero-tree wavelet (EZW) coding method, and the vector tree (VT) coding method, as well as the multispectral segmented autoregressive moving average (MSARMA) method we developed previously.
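
    The analysis-by-synthesis search reduces to running each candidate excitation through the synthesis filter and keeping the best match; a generic sketch (assumed AR model and codebook shapes, not the MFCELP code):

      import numpy as np
      from scipy.signal import lfilter

      def best_excitation(target, codebook, ar_coeffs):
          """Pick the codebook row whose synthesis through 1/A(z) minimizes
          the squared error against the target microblock signal."""
          a = np.concatenate(([1.0], -np.asarray(ar_coeffs, dtype=float)))
          errs = [np.sum((target - lfilter([1.0], a, e)) ** 2) for e in codebook]
          return int(np.argmin(errs))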

  3. Predictive Coding Strategies for Developmental Neurorobotics

    PubMed Central

    Park, Jun-Cheol; Lim, Jae Hyun; Choi, Hansol; Kim, Dae-Shik

    2012-01-01

    In recent years, predictive coding strategies have been proposed as a possible means by which the brain might make sense of the truly overwhelming amount of sensory data available to it at any given moment. Instead of the raw data, the brain is hypothesized to guide its actions by assigning causal beliefs to the observed error between what it expects to happen and what actually happens. In this paper, we present a variety of developmental neurorobotics experiments in which minimalist prediction error-based encoding strategies are utilized to elucidate the emergence of infant-like behavior in humanoid robotic platforms. Our approaches will be first naively Piagetian, then move on to more Vygotskian ideas. More specifically, we will investigate how simple forms of infant learning, such as motor sequence generation, object permanence, and imitation learning, may arise if the minimization of prediction errors is used as an objective function. PMID:22586416

  4. Generating code adapted for interlinking legacy scalar code and extended vector code

    DOEpatents

    Gschwind, Michael K

    2013-06-04

    Mechanisms for intermixing code are provided. Source code is received for compilation using an extended Application Binary Interface (ABI) that extends a legacy ABI and uses a different register configuration than the legacy ABI. First compiled code is generated based on the source code, the first compiled code comprising code for accommodating the difference in register configurations used by the extended ABI and the legacy ABI. The first compiled code and second compiled code are intermixed to generate intermixed code, the second compiled code being compiled code that uses the legacy ABI. The intermixed code comprises at least one call instruction that is one of a call from the first compiled code to the second compiled code or a call from the second compiled code to the first compiled code. The code for accommodating the difference in register configurations is associated with the at least one call instruction.

  5. Adaptive face coding and discrimination around the average face.

    PubMed

    Rhodes, Gillian; Maloney, Laurence T; Turner, Jenny; Ewing, Louise

    2007-03-01

    Adaptation paradigms highlight the dynamic nature of face coding and suggest that identity is coded relative to an average face that is tuned by experience. In low-level vision, adaptive coding can enhance sensitivity to differences around the adapted level. We investigated whether sensitivity to differences around the average face is similarly enhanced. Converging evidence from three paradigms showed no enhancement. Discrimination of small interocular spacing differences was not better for faces close to the average (Study 1). Nor was perceived similarity reduced for face pairs close to (spanning) the average (Study 2). On the contrary, these pairs were judged most similar. Maximum likelihood perceptual difference scaling (Studies 3 and 4) confirmed that sensitivity to differences was reduced, not enhanced, around the average. We conclude that adaptive face coding does not enhance discrimination around the average face.

  6. Adaptive Quantization Parameter Cascading in HEVC Hierarchical Coding.

    PubMed

    Zhao, Tiesong; Wang, Zhou; Chen, Chang Wen

    2016-04-20

    The state-of-the-art High Efficiency Video Coding (HEVC) standard adopts a hierarchical coding structure to improve its coding efficiency. This allows for the Quantization Parameter Cascading (QPC) scheme that assigns Quantization Parameters (Qps) to different hierarchical layers in order to further improve the Rate-Distortion (RD) performance. However, only static QPC schemes have been suggested in the HEVC test model (HM), which are unable to fully exploit the potential of QPC. In this paper, we propose an adaptive QPC scheme for the HEVC hierarchical structure to code natural video sequences characterized by diversified textures, motions and encoder configurations. We formulate the adaptive QPC scheme as a non-linear programming problem and solve it in a scientifically sound way with a manageably low computational overhead. The proposed model addresses a generic Qp assignment problem of video coding. Therefore, it also applies to Group-Of-Picture (GOP)-level, frame-level and Coding Unit (CU)-level Qp assignments. Comprehensive experiments have demonstrated that the proposed QPC scheme is able to adapt quickly to different video contents and coding configurations while achieving noticeable RD performance enhancement over all static and adaptive QPC schemes under comparison as well as HEVC default frame-level rate control. We have also made valuable observations on the distributions of adaptive QPC sets in videos with different types of content, which provide useful insights on how to further improve static QPC schemes.
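
    The cascading idea can be illustrated with a small sketch: deeper hierarchical layers receive larger Qps, with a simple content-dependent adjustment standing in for the paper's optimized, non-linear-programming-based assignment. The offsets and the activity rule are assumptions for illustration.

        # Illustrative Qp cascading for a GOP-8 hierarchical-B structure.
        BASE_QP = 27
        STATIC_OFFSETS = {0: 0, 1: 1, 2: 2, 3: 3}   # deeper layer -> larger Qp

        def layer_of(poc, gop=8):
            """Hierarchical layer of a picture within a dyadic GOP."""
            if poc % gop == 0:
                return 0
            level, step = 1, gop // 2
            while poc % step != 0:
                step //= 2
                level += 1
            return level

        def adaptive_qp(poc, activity):
            """Cascaded Qp with a toy content adaptation: slightly lower Qp for
            high-activity (hard-to-predict) content, higher for easy content."""
            qp = BASE_QP + STATIC_OFFSETS[layer_of(poc)]
            qp += -1 if activity > 0.7 else (1 if activity < 0.3 else 0)
            return max(0, min(51, qp))

        for poc in range(9):
            print(poc, layer_of(poc), adaptive_qp(poc, activity=0.5))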

  7. Adaptive Quantization Parameter Cascading in HEVC Hierarchical Coding.

    PubMed

    Zhao, Tiesong; Wang, Zhou; Chen, Chang Wen

    2016-07-01

    The state-of-the-art High Efficiency Video Coding (HEVC) standard adopts a hierarchical coding structure to improve its coding efficiency. This allows for the quantization parameter cascading (QPC) scheme that assigns quantization parameters (Qps) to different hierarchical layers in order to further improve the rate-distortion (RD) performance. However, only static QPC schemes have been suggested in the HEVC test model, which are unable to fully exploit the potential of QPC. In this paper, we propose an adaptive QPC scheme for an HEVC hierarchical structure to code natural video sequences characterized by diversified textures, motions, and encoder configurations. We formulate the adaptive QPC scheme as a non-linear programming problem and solve it in a scientifically sound way with a manageably low computational overhead. The proposed model addresses a generic Qp assignment problem of video coding. Therefore, it also applies to group-of-picture-level, frame-level, and coding-unit-level Qp assignments. Comprehensive experiments have demonstrated that the proposed QPC scheme is able to adapt quickly to different video contents and coding configurations while achieving noticeable RD performance enhancement over all static and adaptive QPC schemes under comparison as well as HEVC default frame-level rate control. We have also made valuable observations on the distributions of adaptive QPC sets in videos with different types of content, which provide useful insights on how to further improve static QPC schemes.

  8. Visual mismatch negativity: a predictive coding view

    PubMed Central

    Stefanics, Gábor; Kremláček, Jan; Czigler, István

    2014-01-01

    An increasing number of studies investigate the visual mismatch negativity (vMMN) or use the vMMN as a tool to probe various aspects of human cognition. This paper reviews the theoretical underpinnings of vMMN in the light of methodological considerations and provides recommendations for measuring and interpreting the vMMN. The following key issues are discussed from the experimentalist's point of view in a predictive coding framework: (1) experimental protocols and procedures to control “refractoriness” effects; (2) methods to control attention; (3) vMMN and veridical perception. PMID:25278859

  9. A novel bit-wise adaptable entropy coding technique

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Klimesh, M.

    2001-01-01

    We present a novel entropy coding technique which is adaptable in that each bit to be encoded may have an associated probability estimate which depends on previously encoded bits. The technique may have advantages over arithmetic coding. The technique can achieve arbitrarily small redundancy and admits a simple and fast decoder.
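
    A minimal sketch of bit-wise adaptive probability estimation of the kind such a coder relies on appears below: each bit's probability estimate is conditioned on previously encoded bits, and the ideal code length is accumulated. The context model and count-based estimator are illustrative assumptions, not the technique's actual codeword construction.

        import math

        def adaptive_code_length(bits, context_len=3):
            counts = {}                            # context -> [zeros, ones]
            total_bits = 0.0
            ctx = (0,) * context_len
            for b in bits:
                c0, c1 = counts.get(ctx, [1, 1])   # Laplace-smoothed counts
                p = (c1 if b else c0) / (c0 + c1)  # estimated probability of this bit
                total_bits += -math.log2(p)        # ideal (entropy) code length
                counts[ctx] = [c0 + (b == 0), c1 + (b == 1)]
                ctx = ctx[1:] + (b,)               # slide the bit context
            return total_bits

        data = [1, 1, 1, 0] * 50                   # strongly patterned bit stream
        print(adaptive_code_length(data))          # far fewer than 200 bits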

  10. A predictive coding account of MMN reduction in schizophrenia.

    PubMed

    Wacongne, Catherine

    2016-04-01

    The mismatch negativity (MMN) is thought to be an index of the automatic activation of a specialized network for active prediction and deviance detection in the auditory cortex. It is consistently reduced in schizophrenic patients and has received a lot of interest as a clinical and translational tool. The main neuronal hypothesis regarding the mechanisms leading to a reduced MMN in schizophrenic patients is a dysfunction of NMDA receptors (NMDA-R). However, this hypothesis has never been implemented in a neuronal model. In this paper, I examine the consequences of NMDA-R dysfunction in a neuronal model of MMN based on predictive coding principles. I also investigate how predictive processes may interact with synaptic adaptation in MMN generation and examine the consequences of this interaction for the use of MMN paradigms in schizophrenia research.

  11. Weighted adaptively grouped multilevel space time trellis codes

    NASA Astrophysics Data System (ADS)

    Jain, Dharmvir; Sharma, Sanjay

    2015-05-01

    In existing grouped multilevel space-time trellis codes (GMLSTTCs), the groups of transmit antennas are predefined, and the transmit power is equally distributed across all transmit antennas. When the channel parameters are perfectly known at the transmitter, an adaptive antenna grouping and beamforming scheme can achieve better performance through optimum grouping of transmit antennas and proper weighting of transmitted signals based on the available channel information. In this paper, we present a new code designed by combining GMLSTTCs, adaptive antenna grouping, and beamforming using the channel state information at the transmitter (CSIT), henceforth referred to as weighted adaptively grouped multilevel space time trellis codes (WAGMLSTTCs). The CSIT is used to adaptively group the transmitting antennas and provide a beamforming scheme by allocating different powers to the transmit antennas. Simulation results show that WAGMLSTTCs provide an improvement in error performance of 2.6 dB over GMLSTTCs.

  12. The multiform motor cortical output: Kinematic, predictive and response coding.

    PubMed

    Sartori, Luisa; Betti, Sonia; Chinellato, Eris; Castiello, Umberto

    2015-09-01

    Observing actions performed by others entails a subliminal activation of primary motor cortex reflecting the components encoded in the observed action. One of the most debated issues concerns the role of this output: Is it a mere replica of the incoming flow of information (kinematic coding), is it oriented to anticipate forthcoming events (predictive coding), or is it aimed at responding in a suitable fashion to the actions of others (response coding)? The aim of the present study was to disentangle the relative contributions of these three levels and unify them into an integrated view of cortical motor coding. We combined transcranial magnetic stimulation (TMS) and electromyography recordings at different timings to probe the excitability of corticospinal projections to upper and lower limb muscles of participants observing a soccer player performing: (i) a penalty kick straight in their direction and then coming to a full stop, (ii) a penalty kick straight in their direction and then continuing to run, (iii) a penalty kick to the side and then continuing to run. The results show a modulation of the observer's corticospinal excitability in different effectors at different times, reflecting a multiplicity of motor coding. The internal replica of the observed action, the predictive activation, and the adaptive integration of congruent and non-congruent responses to the actions of others can coexist in a way that is not mutually exclusive. Such a view offers reconciliation among different (and apparently divergent) frameworks in the action observation literature, and will promote a more complete and integrated understanding of recent findings on motor simulation, motor resonance and automatic imitation.

  13. Adaptive Modulation and Coding for LTE Wireless Communication

    NASA Astrophysics Data System (ADS)

    Hadi, S. S.; Tiong, T. C.

    2015-04-01

    Long Term Evolution (LTE) is the new upgrade path for carriers with both GSM/UMTS networks and CDMA2000 networks. LTE aims to become the first global mobile phone standard, despite the barrier posed by the different LTE frequencies and bands used in different countries. Adaptive Modulation and Coding (AMC) is used to increase network capacity or downlink data rates. Various modulation types are discussed, such as Quadrature Phase Shift Keying (QPSK) and Quadrature Amplitude Modulation (QAM). Spatial multiplexing techniques for a 4×4 MIMO antenna configuration are studied. With channel state information fed back from the mobile receiver to the base station transmitter, adaptive modulation and coding can be applied to adapt to the condition of the mobile wireless channel, increasing spectral efficiency without increasing the bit error rate in noisy channels. In High-Speed Downlink Packet Access (HSDPA) in the Universal Mobile Telecommunications System (UMTS), AMC can be used to choose the modulation type and forward error correction (FEC) coding rate.
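
    A toy sketch of the AMC decision follows: the transmitter maps a fed-back SNR estimate to the highest-rate modulation and coding pair expected to hold the target error rate. The thresholds and table entries are placeholders, not values from the LTE specification.

        # (min SNR dB, modulation, code rate); thresholds are made-up placeholders
        MCS_TABLE = [
            (1.0,  "QPSK",  1 / 3),
            (7.0,  "16QAM", 1 / 2),
            (13.0, "16QAM", 3 / 4),
            (18.0, "64QAM", 3 / 4),
            (24.0, "64QAM", 5 / 6),
        ]

        def select_mcs(snr_db):
            """Pick the highest-rate entry whose SNR threshold is met."""
            chosen = MCS_TABLE[0]
            for entry in MCS_TABLE:
                if snr_db >= entry[0]:
                    chosen = entry
            return chosen[1], chosen[2]

        for snr in (0.0, 10.0, 20.0, 30.0):
            print(snr, select_mcs(snr))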

  14. Adaptive image coding based on cubic-spline interpolation

    NASA Astrophysics Data System (ADS)

    Jiang, Jian-Xing; Hong, Shao-Hua; Lin, Tsung-Ching; Wang, Lin; Truong, Trieu-Kien

    2014-09-01

    It has been shown that, at low bit rates, downsampling prior to coding and upsampling after decoding can achieve better compression performance than standard coding algorithms, e.g., JPEG and H.264/AVC. However, at high bit rates, the sampling-based schemes generate more distortion. Additionally, the maximum bit rate at which the sampling-based scheme outperforms the standard algorithm is image-dependent. In this paper, a practical adaptive image coding algorithm based on cubic-spline interpolation (CSI) is proposed. The proposed algorithm adaptively selects the image coding method, between CSI-based modified JPEG and standard JPEG, under a given target bit rate utilizing so-called ρ-domain analysis. The experimental results indicate that, compared with standard JPEG, the proposed algorithm shows better performance at low bit rates and maintains the same performance at high bit rates.

  15. The multidimensional Self-Adaptive Grid code, SAGE, version 2

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.; Venkatapathy, Ethiraj

    1995-01-01

    This new report on Version 2 of the SAGE code includes all the information in the original publication plus all upgrades and changes to the SAGE code since that time. The two most significant upgrades are the inclusion of a finite-volume option and the ability to adapt and manipulate zonal-matching multiple-grid files. In addition, the original SAGE code has been upgraded to Version 1.1 and includes all options mentioned in this report, with the exception of the multiple grid option and its associated features. Since Version 2 is a larger and more complex code, it is suggested (but not required) that Version 1.1 be used for single-grid applications. This document contains all the information required to run both versions of SAGE. The formulation of the adaption method is described in the first section of this document. The second section is presented in the form of a user guide that explains the input and execution of the code. The third section provides many examples. Successful application of the SAGE code in both two and three dimensions for the solution of various flow problems has proven the code to be robust, portable, and simple to use. Although the basic formulation follows the method of Nakahashi and Deiwert, many modifications have been made to facilitate the use of the self-adaptive grid method for complex grid structures. Modifications to the method and the simple but extensive input options make this a flexible and user-friendly code. The SAGE code can accommodate two-dimensional and three-dimensional, finite-difference and finite-volume, single grid, and zonal-matching multiple grid flow problems.

  16. Adaptive feature extraction using sparse coding for machinery fault diagnosis

    NASA Astrophysics Data System (ADS)

    Liu, Haining; Liu, Chengliang; Huang, Yixiang

    2011-02-01

    In the signal processing domain, there has been growing interest in sparse coding with a learned dictionary instead of a predefined one, which is advocated as an effective mathematical description for the underlying principle of mammalian sensory systems in processing information. In this paper, sparse coding is introduced as a feature extraction technique for machinery fault diagnosis and an adaptive feature extraction scheme is proposed based on it. The two core problems of sparse coding, i.e., dictionary learning and coefficients solving, are discussed in detail. A natural extension of sparse coding, shift-invariant sparse coding, is also introduced. Then, the vibration signals of rolling element bearings are taken as the target signals to verify the proposed scheme, and shift-invariant sparse coding is used for vibration analysis. With the purpose of diagnosing the different fault conditions of bearings, features are extracted following the proposed scheme: basis functions are separately learned from each class of vibration signals trying to capture the defective impulses; a redundant dictionary is built by merging all the learned basis functions; based on the redundant dictionary, the diagnostic information is made explicit in the solved sparse representations of vibration signals; sparse features are formulated in terms of activations of atoms. The multiclass linear discriminant analysis (LDA) classifier is used to test the discriminability of the extracted sparse features and the adaptability of the learned atoms. The experiments show that sparse coding is an effective feature extraction technique for machinery fault diagnosis.
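
    The pipeline described above can be sketched on synthetic data: per-class dictionaries are learned, merged into a redundant dictionary, the sparse activations are used as features, and LDA tests their discriminability. The signal model, atom counts, and solver choices below are illustrative assumptions, with scikit-learn standing in for the paper's own dictionary learning.

        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning, SparseCoder
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(0)

        def make_class(freq, n=60, length=128):
            """Synthetic stand-in for one fault condition's vibration segments."""
            t = np.arange(length) / length
            return np.sin(2 * np.pi * freq * t) + 0.3 * rng.normal(size=(n, length))

        classes = [make_class(f) for f in (5, 11, 23)]   # three fault conditions
        labels = np.repeat([0, 1, 2], 60)

        # Learn a small dictionary from each class, then merge into a redundant one.
        dicts = []
        for X in classes:
            dl = MiniBatchDictionaryLearning(n_components=8, alpha=1.0, random_state=0)
            dicts.append(dl.fit(X).components_)
        redundant = np.vstack(dicts)                     # 24 atoms in total

        # Sparse features: activations of the redundant atoms for every segment.
        coder = SparseCoder(dictionary=redundant,
                            transform_algorithm="lasso_lars", transform_alpha=0.5)
        features = coder.transform(np.vstack(classes))

        # LDA tests the discriminability of the extracted sparse features.
        print(LinearDiscriminantAnalysis().fit(features, labels).score(features, labels))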

  17. The multidimensional self-adaptive grid code, SAGE

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.; Venkatapathy, Ethiraj

    1992-01-01

    This report describes the multidimensional self-adaptive grid code SAGE. A two-dimensional version of this code was described in an earlier report by the authors. The formulation of the multidimensional version is described in the first section of this document. The second section is presented in the form of a user guide that explains the input and execution of the code and provides many examples. Successful application of the SAGE code in both two and three dimensions for the solution of various flow problems has proven the code to be robust, portable, and simple to use. Although the basic formulation follows the method of Nakahashi and Deiwert, many modifications have been made to facilitate the use of the self-adaptive grid method for complex grid structures. Modifications to the method and the simplified input options make this a flexible and user-friendly code. The new SAGE code can accommodate both two-dimensional and three-dimensional flow problems.

  18. Adaptive λ estimation in Lagrangian rate-distortion optimization for video coding

    NASA Astrophysics Data System (ADS)

    Chen, Lulin; Garbacea, Ilie

    2006-01-01

    In this paper, adaptive Lagrangian multiplier λ estimation in Lagrangian R-D optimization for video coding is presented, based on the ρ-domain linear rate model and distortion model. It shows that λ is a function of rate, distortion, and coding input statistics, and can be written as λ(R, D, σ²) = β(ln(σ²/D) + δ)D/R + k₀, with β, δ, and k₀ as coding constants and σ² the variance of the prediction-error input. λ(R, D, σ²) describes its ubiquitous relationship with coding statistics and coding input in hybrid video coding such as H.263, MPEG-2/4, and H.264/AVC. The λ evaluation is decoupled from the quantization parameters. The proposed λ estimation enables fine encoder design and encoder control.
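
    The closed-form estimate transcribes directly into code; the constants β, δ, and k₀ are coder-specific, and the values below are placeholders.

        import math

        def estimate_lambda(rate, distortion, sigma2, beta=0.7, delta=1.0, k0=0.0):
            """lambda(R, D, sigma^2) = beta * (ln(sigma^2 / D) + delta) * D / R + k0"""
            return beta * (math.log(sigma2 / distortion) + delta) * distortion / rate + k0

        print(estimate_lambda(rate=0.5, distortion=4.0, sigma2=64.0))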

  19. Divide sampling-based hybrid temporal-spatial prediction coding for H.264/AVC

    NASA Astrophysics Data System (ADS)

    Li, Hongwei; Song, Rui; Wu, Chengke; Zhang, Jie

    2011-11-01

    A divide sampling-based hybrid temporal-spatial prediction coding algorithm is designed to further improve the coding performance of the conventional H.264/AVC coding. In the proposed algorithm, a frame is first divided into four equal-sized subframes, and the first subframe is coded using the rate distortion optimization model with inter- or intraprediction adaptively. Then, the optimal prediction method of the macroblock in other subframes is selected flexibly and reasonably from intraprediction, the fast interprediction, and the spatial interpolation prediction. The simulation results show that compared with the conventional H.264/AVC coding, the average bit rate is reduced by 6.15% under the same peak signal-to-noise ratio (PSNR), the average PSNR is increased by 0.22 dB under the same bit rate, and the average coding time is saved by 12.40% in the proposed algorithm.

  20. Peripheral adaptation codes for high odor concentration in glomeruli.

    PubMed

    Lecoq, Jérôme; Tiret, Pascale; Charpak, Serge

    2009-03-11

    Adaptation is a general property of sensory receptor neurons and has been extensively studied in isolated-cell preparations of olfactory receptor neurons. In contrast, little is known about the conditions under which peripheral adaptation occurs in the CNS during odorant stimulation. Here, we used two-photon laser-scanning microscopy and targeted extracellular recording in freely breathing anesthetized rats to investigate the correlate of peripheral adaptation at the first synapse of the olfactory pathway in olfactory bulb glomeruli. We find that during sustained stimulation at high concentration, odorants can evoke local field potential (LFP) postsynaptic responses that rapidly adapt with time, some within two inhalations. Simultaneous measurements of LFP and calcium influx at olfactory receptor neuron terminals reveal that postsynaptic adaptation is associated with a decrease in odorant-evoked calcium response, suggesting that it results from a decrease in glutamate release. This glomerular adaptation was concentration-dependent and did not change the glomerular input-output curve. In addition, in situ application of antagonists of either ionotropic glutamate receptors or metabotropic GABA(B) receptors did not affect this adaptation, thus discarding the involvement of local presynaptic inhibition. Glomerular adaptation, therefore, reflects the response decline of olfactory receptor neurons to sustained odorant stimulation. We postulate that peripheral fast adaptation is a means by which glomerular output codes for high concentration of odor.

  1. Adaptive EZW coding using a rate-distortion criterion

    NASA Astrophysics Data System (ADS)

    Yin, Che-Yi

    2001-07-01

    This work presents a new method that improves on the EZW image coding algorithm. The standard EZW image coder uses a uniform quantizer with a threshold (deadzone) that is identical in all subbands; the quantization step sizes are not optimized in the rate-distortion sense. We modify the EZW by applying the Lagrange multiplier method to search for the best step size for each subband and allocate the bit rate to each subband accordingly. We then implement the adaptive EZW codec to code the wavelet coefficients. Two coding environments, independent and dependent, are considered for the optimization process. The proposed image coder retains all the good features of the EZW, namely embedded coding, progressive transmission, and ordering of the important bits, and enhances it through rate-distortion optimization with respect to the step sizes.
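
    A sketch of the per-subband search in the independent-coding case follows: for each subband, candidate step sizes are swept and the one minimizing D + λR is kept, with rate approximated by the empirical entropy of the quantizer indices. The deadzone quantizer and candidate grid are illustrative assumptions.

        import numpy as np

        def entropy_bits(indices):
            """Empirical entropy of the quantizer indices, in total bits."""
            _, counts = np.unique(indices, return_counts=True)
            p = counts / counts.sum()
            return -np.sum(p * np.log2(p)) * len(indices)

        def best_step(coeffs, lam, candidates=np.linspace(0.25, 8.0, 32)):
            """Sweep step sizes and keep the one minimizing D + lambda * R."""
            best = (np.inf, None)
            for q in candidates:
                idx = np.sign(coeffs) * np.floor(np.abs(coeffs) / q)  # deadzone quantizer
                recon = np.sign(idx) * (np.abs(idx) + 0.5) * q * (idx != 0)
                cost = np.sum((coeffs - recon) ** 2) + lam * entropy_bits(idx)
                if cost < best[0]:
                    best = (cost, q)
            return best[1]

        rng = np.random.default_rng(0)
        subbands = [rng.laplace(scale=s, size=1024) for s in (8.0, 3.0, 1.0)]
        for band in subbands:
            print(best_step(band, lam=2.0))      # coarser bands get larger steps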

  2. Probability Distribution Estimation for Autoregressive Pixel-Predictive Image Coding.

    PubMed

    Weinlich, Andreas; Amon, Peter; Hutter, Andreas; Kaup, André

    2016-03-01

    Pixelwise linear prediction using backward-adaptive least-squares or weighted least-squares estimation of prediction coefficients is currently among the state-of-the-art methods for lossless image compression. While current research is focused on mean intensity prediction of the pixel to be transmitted, best compression requires occurrence probability estimates for all possible intensity values. Apart from common heuristic approaches, we show how prediction error variance estimates can be derived from the (weighted) least-squares training region and how a complete probability distribution can be built based on an autoregressive image model. The analysis of image stationarity properties further allows deriving a novel formula for weight computation in weighted least-squares, proving and generalizing ad hoc equations from the literature. For sparse intensity distributions in non-natural images, a modified image model is presented. Evaluations were done in the newly developed C++ framework Vanilc (volumetric, artificial, and natural image lossless coder), which can compress a wide range of images, including 16-bit medical 3D volumes and multichannel data. A comparison with several of the best available lossless image codecs proves that the method can achieve very competitive compression ratios. In terms of reproducible research, the source code of Vanilc has been made public.
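
    A minimal sketch of backward-adaptive weighted least-squares prediction with a variance estimate from the training region appears below; the causal window geometry and distance-based weights are illustrative assumptions, not Vanilc's derived weight formula.

        import numpy as np

        def wls_predict(img, y, x, radius=6):
            """Predict pixel (y, x) from a causal training window; return the
            predicted mean and a weighted residual variance estimate.
            Border handling is simplified for the sketch (y, x away from edges)."""
            rows, targets, weights = [], [], []
            for j in range(max(0, y - radius), y + 1):
                for i in range(max(0, x - radius), min(img.shape[1], x + radius + 1)):
                    if j == y and i >= x:            # causal region only
                        break
                    if j == 0 or i == 0 or i == img.shape[1] - 1:
                        continue
                    # four causal neighbors as regressors (left, up, up-left, up-right)
                    rows.append([img[j, i-1], img[j-1, i], img[j-1, i-1], img[j-1, i+1]])
                    targets.append(img[j, i])
                    weights.append(1.0 / (1.0 + (y - j) ** 2 + (x - i) ** 2))
            A = np.asarray(rows, dtype=float)
            t = np.asarray(targets, dtype=float)
            w = np.sqrt(np.asarray(weights))
            coef, *_ = np.linalg.lstsq(A * w[:, None], t * w, rcond=None)
            current = np.array([img[y, x-1], img[y-1, x], img[y-1, x-1], img[y-1, x+1]], float)
            resid = A @ coef - t
            var = np.average(resid ** 2, weights=np.asarray(weights))
            return float(current @ coef), var        # distribution mean and variance

        rng = np.random.default_rng(0)
        img = np.cumsum(rng.normal(size=(32, 32)), axis=1)   # smooth-ish test image
        print(wls_predict(img, y=16, x=16))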

  3. Link-Adaptive Distributed Coding for Multisource Cooperation

    NASA Astrophysics Data System (ADS)

    Cano, Alfonso; Wang, Tairan; Ribeiro, Alejandro; Giannakis, Georgios B.

    2007-12-01

    Combining multisource cooperation and link-adaptive regenerative techniques, a novel protocol is developed capable of achieving diversity order up to the number of cooperating users and large coding gains. The approach relies on a two-phase protocol. In Phase 1, cooperating sources exchange information-bearing blocks, while in Phase 2, they transmit reencoded versions of the original blocks. Different from existing approaches, participation in the second phase does not require correct decoding of Phase 1 packets. This allows relaying of soft information to the destination, thus increasing coding gains while retaining diversity properties. For any reencoding function the diversity order is expressed as a function of the rank properties of the distributed coding strategy employed. This result is analogous to the diversity properties of colocated multi-antenna systems. Particular cases include repetition coding, distributed complex field coding (DCFC), distributed space-time coding, and distributed error-control coding. Rate, diversity, complexity and synchronization issues are elaborated. DCFC emerges as an attractive choice because it offers high-rate, full spatial diversity, and relaxed synchronization requirements. Simulations confirm analytically established assessments.

  4. The Helicopter Antenna Radiation Prediction Code (HARP)

    NASA Technical Reports Server (NTRS)

    Klevenow, F. T.; Lynch, B. G.; Newman, E. H.; Rojas, R. G.; Scheick, J. T.; Shamansky, H. T.; Sze, K. Y.

    1990-01-01

    The first nine months' effort in the development of a user-oriented computer code, referred to as the HARP code, for analyzing the radiation from helicopter antennas is described. The HARP code uses modern computer graphics to aid in the description and display of the helicopter geometry. At low frequencies the helicopter is modeled by polygonal plates, and the method of moments is used to compute the desired patterns. At high frequencies the helicopter is modeled by a composite ellipsoid and flat plates, and computations are made using the geometrical theory of diffraction. The HARP code will provide a user-friendly interface, employing modern computer graphics, to aid the user in describing the helicopter geometry, selecting the method of computation, constructing the desired high or low frequency model, and displaying the results.

  5. Adaptive coded aperture imaging: progress and potential future applications

    NASA Astrophysics Data System (ADS)

    Gottesman, Stephen R.; Isser, Abraham; Gigioli, George W., Jr.

    2011-09-01

    Interest in Adaptive Coded Aperture Imaging (ACAI) continues to grow as the optical and systems engineering community becomes increasingly aware of ACAI's potential benefits in the design and performance of both imaging and non-imaging systems, such as good angular resolution (IFOV), wide distortion-free field of view (FOV), excellent image quality, and lightweight construction. In this presentation we first review the accomplishments made over the past five years, then expand on previously published work to show how replacement of conventional imaging optics with coded apertures can lead to a reduction in system size and weight. We also present a trade-space analysis of key design parameters of coded apertures and review potential applications as replacements for traditional imaging optics. Results are presented from last year's investigation into the trade space of IFOV, resolution, effective focal length, and wavelength of incident radiation for coded aperture architectures. Finally, we discuss the potential application of coded apertures for replacing the objective lenses of night vision goggles (NVGs).

  6. Picturewise inter-view prediction selection for multiview video coding

    NASA Astrophysics Data System (ADS)

    Huo, Junyan; Chang, Yilin; Li, Ming; Yang, Haitao

    2010-11-01

    Inter-view prediction is introduced in multiview video coding (MVC) to exploit the inter-view correlation. Statistical analyses show that the coding gain benefited from inter-view prediction is unequal among pictures. On the basis of this observation, a picturewise inter-view prediction selection scheme is proposed. This scheme employs a novel inter-view prediction selection criterion to determine whether it is necessary to apply inter-view prediction to the current coding picture. This criterion is derived from the available coding information of the temporal reference pictures. Experimental results show that the proposed scheme can improve the performance of MVC with a comprehensive consideration of compression efficiency, computational complexity, and random access ability.

  7. Adaptive Trajectory Prediction Algorithm for Climbing Flights

    NASA Technical Reports Server (NTRS)

    Schultz, Charles Alexander; Thipphavong, David P.; Erzberger, Heinz

    2012-01-01

    Aircraft climb trajectories are difficult to predict, and large errors in these predictions reduce the potential operational benefits of some advanced features for NextGen. The algorithm described in this paper improves climb trajectory prediction accuracy by adjusting trajectory predictions based on observed track data. It utilizes rate-of-climb and airspeed measurements derived from position data to dynamically adjust the aircraft weight modeled for trajectory predictions. In simulations with weight uncertainty, the algorithm is able to adapt to within 3 percent of the actual gross weight within two minutes of the initial adaptation. The root-mean-square of altitude errors for five-minute predictions was reduced by 73 percent. Conflict detection performance also improved, with a 15 percent reduction in missed alerts and a 10 percent reduction in false alerts. In a simulation with climb speed capture intent and weight uncertainty, the algorithm improved climb trajectory prediction accuracy by up to 30 percent and conflict detection performance, reducing missed and false alerts by up to 10 percent.
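
    The weight-adaptation idea can be illustrated with a toy point-mass model: the modeled weight is nudged until the modeled climb rate explains the observed track data. The climb-rate relation, gains, and numbers below are simplifying assumptions, not the fielded algorithm.

        import random

        def modeled_climb_rate(excess_power, weight):
            """Point-mass sketch: climb rate ~ excess power / weight."""
            return excess_power / weight

        true_weight = 700_000.0        # N (hypothetical actual gross weight)
        excess_power = 8.4e6           # W (hypothetical, held constant here)
        weight_est = 600_000.0         # initial guess from the flight plan
        gain = 0.15
        random.seed(0)

        for tick in range(60):         # one track update per tick
            observed = modeled_climb_rate(excess_power, true_weight) * random.gauss(1.0, 0.03)
            implied = excess_power / observed     # weight that explains the track
            weight_est += gain * (implied - weight_est)

        print(weight_est / true_weight)           # settles within a few percent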

  8. Cellular Adaptation Facilitates Sparse and Reliable Coding in Sensory Pathways

    PubMed Central

    Farkhooi, Farzad; Froese, Anja; Muller, Eilif; Menzel, Randolf; Nawrot, Martin P.

    2013-01-01

    Most neurons in peripheral sensory pathways initially respond vigorously when a preferred stimulus is presented, but adapt as stimulation continues. It is unclear how this phenomenon affects stimulus coding in the later stages of sensory processing. Here, we show that a temporally sparse and reliable stimulus representation develops naturally in sequential stages of a sensory network with adapting neurons. As a modeling framework we employ a mean-field approach together with an adaptive population density treatment, accompanied by numerical simulations of spiking neural networks. We find that cellular adaptation plays a critical role in the dynamic reduction of the trial-by-trial variability of cortical spike responses by transiently suppressing self-generated fast fluctuations in the cortical balanced network. This provides an explanation for a widespread cortical phenomenon by a simple mechanism. We further show that in the insect olfactory system cellular adaptation is sufficient to explain the emergence of the temporally sparse and reliable stimulus representation in the mushroom body. Our results reveal a generic, biophysically plausible mechanism that can explain the emergence of a temporally sparse and reliable stimulus representation within a sequential processing architecture. PMID:24098101

  9. Predicting life-history adaptations to pollutants

    SciTech Connect

    Maltby, L.

    1995-12-31

    Animals may adapt to pollutant stress such that individuals from polluted environments are less susceptible than those from unpolluted environments. In addition to such direct adaptations, animals may respond to pollutant stress by life-history modifications, so-called indirect adaptations. This paper will demonstrate how, by combining life-history theory and toxicological data, it is possible to predict stress-induced alterations in reproductive output and offspring size. Where pollutants alter age-specific survival in favor of adults and reduce juvenile growth, conditions are predicted to select for reduced investment in reproduction and for allocating this investment into fewer, larger offspring. Field observations on the freshwater crustaceans Asellus aquaticus and Gammarus pulex support these predictions. Females from metal-polluted sites had lower investment in reproduction and produced larger offspring than females of the same species from unpolluted sites. Moreover, interpopulation differences in reproductive biology persisted in laboratory cultures, indicating that they had a genetic basis and were therefore due to adaptation rather than acclimation. The general applicability of this approach will be considered.

  10. Predictive Bias and Sensitivity in NRC Fuel Performance Codes

    SciTech Connect

    Geelhood, Kenneth J.; Luscher, Walter G.; Senor, David J.; Cunningham, Mitchel E.; Lanning, Donald D.; Adkins, Harold E.

    2009-10-01

    The latest versions of the fuel performance codes, FRAPCON-3 and FRAPTRAN were examined to determine if the codes are intrinsically conservative. Each individual model and type of code prediction was examined and compared to the data that was used to develop the model. In addition, a brief literature search was performed to determine if more recent data have become available since the original model development for model comparison.

  11. An optimized context-based adaptive binary arithmetic coding algorithm in progressive H.264 encoder

    NASA Astrophysics Data System (ADS)

    Xiao, Guang; Shi, Xu-li; An, Ping; Zhang, Zhao-yang; Gao, Ge; Teng, Guo-wei

    2006-05-01

    Context-based Adaptive Binary Arithmetic Coding (CABAC) is a new entropy coding method introduced in H.264/AVC that is highly efficient in video coding. In the method, the probability of the current symbol is estimated using a carefully designed context model, which is adaptive and can approach the statistical characteristics of the source. An arithmetic coding mechanism then largely reduces the inter-symbol redundancy. Compared with the UVLC method in the prior standard, CABAC is complicated but efficiently reduces the bit rate. Based on a thorough analysis of the coding and decoding methods of CABAC, this paper proposes two methods, a sub-table method and a stream-reuse method, to improve the encoding efficiency as implemented in the H.264 JM code. In JM, the CABAC function produces the bits of every syntactic element one by one, and the repeated multiplication operations in the CABAC function make it inefficient; the proposed algorithm creates tables beforehand and then produces all bits of a syntactic element at once. Also in JM, the intra-prediction and inter-prediction mode selection algorithm is based on a rate-distortion optimization (RDO) model with different criteria, and one of the parameters of the RDO model is the bit rate produced by the CABAC operator. After intra- or inter-prediction mode selection, the CABAC stream is discarded and recalculated for the output stream. The proposed stream-reuse algorithm keeps the stream created during mode selection in memory and reuses it in the encoding function. Experimental results show that the proposed algorithm can on average achieve 17 to 78 MSEL higher speed for QCIF and CIF sequences, respectively, compared with the original JM algorithm, at the cost of only a little memory space. The CABAC was realized in our progressive H.264 encoder.

  12. Adaptive shape coding for perceptual decisions in the human brain

    PubMed Central

    Kourtzi, Zoe; Welchman, Andrew E.

    2015-01-01

    In its search for neural codes, the field of visual neuroscience has uncovered neural representations that reflect the structure of stimuli of variable complexity from simple features to object categories. However, accumulating evidence suggests an adaptive neural code that is dynamically shaped by experience to support flexible and efficient perceptual decisions. Here, we review work showing that experience plays a critical role in molding midlevel visual representations for perceptual decisions. Combining behavioral and brain imaging measurements, we demonstrate that learning optimizes feature binding for object recognition in cluttered scenes, and tunes the neural representations of informative image parts to support efficient categorical judgements. Our findings indicate that similar learning mechanisms may mediate long-term optimization through development, tune the visual system to fundamental principles of feature binding, and optimize feature templates for perceptual decisions. PMID:26024511

  13. Efficient Unstructured Grid Adaptation Methods for Sonic Boom Prediction

    NASA Technical Reports Server (NTRS)

    Campbell, Richard L.; Carter, Melissa B.; Deere, Karen A.; Waithe, Kenrick A.

    2008-01-01

    This paper examines the use of two grid adaptation methods to improve the accuracy of the near-to-mid field pressure signature prediction of supersonic aircraft computed using the USM3D unstructured grid flow solver. The first method (ADV) is an interactive adaptation process that uses grid movement rather than enrichment to more accurately resolve the expansion and compression waves. The second method (SSGRID) uses an a priori adaptation approach to stretch and shear the original unstructured grid to align the grid with the pressure waves and reduce the cell count required to achieve an accurate signature prediction at a given distance from the vehicle. Both methods initially create negative volume cells that are repaired in a module in the ADV code. While both approaches provide significant improvements in the near field signature (< 3 body lengths) relative to a baseline grid without increasing the number of grid points, only the SSGRID approach allows the details of the signature to be accurately computed at mid-field distances (3-10 body lengths) for direct use with mid-field-to-ground boom propagation codes.

  14. An assessment of the adaptive unstructured tetrahedral grid, Euler Flow Solver Code FELISA

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Erickson, Larry L.

    1994-01-01

    A three-dimensional solution-adaptive Euler flow solver for unstructured tetrahedral meshes is assessed, and the accuracy and efficiency of the method for predicting sonic boom pressure signatures about simple generic models are demonstrated. Comparison of computational and wind tunnel data and enhancement of numerical solutions by means of grid adaptivity are discussed. The mesh generation is based on the advancing front technique. The FELISA code consists of two solvers, the Taylor-Galerkin and the Runge-Kutta-Galerkin schemes, both of which are spatially discretized by the usual Galerkin weighted residual finite-element methods but with different explicit time-marching schemes to steady state. The solution-adaptive grid procedure is based on either remeshing or mesh refinement techniques. An alternative geometry adaptive procedure is also incorporated.

  15. SAGE: The Self-Adaptive Grid Code. 3

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.; Venkatapathy, Ethiraj

    1999-01-01

    The multi-dimensional self-adaptive grid code, SAGE, is an important tool in the field of computational fluid dynamics (CFD). It provides an efficient method to improve the accuracy of flow solutions while simultaneously reducing computer processing time. Briefly, SAGE enhances an initial computational grid by redistributing the mesh points into more appropriate locations. The movement of these points is driven by an equal-error-distribution algorithm that utilizes the relationship between high flow gradients and excessive solution errors. The method also provides a balance between clustering points in the high gradient regions and maintaining the smoothness and continuity of the adapted grid. The latest version, Version 3, includes the ability to change the boundaries of a given grid to more efficiently enclose flow structures and provides alternative redistribution algorithms.

  16. Adaptive zero-tree structure for curved wavelet image coding

    NASA Astrophysics Data System (ADS)

    Zhang, Liang; Wang, Demin; Vincent, André

    2006-02-01

    We investigate the issue of efficient data organization and representation of the curved wavelet coefficients [curved wavelet transform (WT)]. We present an adaptive zero-tree structure that exploits the cross-subband similarity of the curved wavelet transform. Whereas in the embedded zero-tree wavelet (EZW) coder and set partitioning in hierarchical trees (SPIHT) the parent-child relationship is defined in such a way that a parent has four children restricted to a square of 2×2 pixels, the parent-child relationship in the adaptive zero-tree structure varies according to the curves along which the curved WT is performed. Five child patterns were determined based on different combinations of curve orientation. A new image coder was then developed based on this adaptive zero-tree structure and the set-partitioning technique. Experimental results using synthetic and natural images showed the effectiveness of the proposed adaptive zero-tree structure for encoding of the curved wavelet coefficients. The coding gain of the proposed coder can be up to 1.2 dB in terms of peak SNR (PSNR) compared to the SPIHT coder. Subjective evaluation shows that the proposed coder preserves lines and edges better than the SPIHT coder.

  17. Efficient Coding of the Prediction Residual.

    DTIC Science & Technology

    1979-12-27

    [Abstract not available; the record contains scanned front-matter fragments. Recoverable items: a table of average fundamental and formant frequencies and formant amplitudes of vowels from 76 speakers; notation definitions including p, the order of the LPC filter, F1, F2, ..., the formant frequencies, and r_i(m), the outputs of the bandpass filters; and text noting that correlation coefficients and other parameters represent the formant frequency characteristics, with the prediction residual as the other waveform.]

  18. Cooperative solutions coupling a geometry engine and adaptive solver codes

    NASA Technical Reports Server (NTRS)

    Dickens, Thomas P.

    1995-01-01

    Follow-on work has progressed in using Aero Grid and Paneling System (AGPS), a geometry and visualization system, as a dynamic real time geometry monitor, manipulator, and interrogator for other codes. In particular, AGPS has been successfully coupled with adaptive flow solvers which iterate, refining the grid in areas of interest, and continuing on to a solution. With the coupling to the geometry engine, the new grids represent the actual geometry much more accurately since they are derived directly from the geometry and do not use refits to the first-cut grids. Additional work has been done with design runs where the geometric shape is modified to achieve a desired result. Various constraints are used to point the solution in a reasonable direction which also more closely satisfies the desired results. Concepts and techniques are presented, as well as examples of sample case studies. Issues such as distributed operation of the cooperative codes versus running all codes locally and pre-calculation for performance are discussed. Future directions are considered which will build on these techniques in light of changing computer environments.

  19. Piecewise Mapping in HEVC Lossless Intra-prediction Coding.

    PubMed

    Sanchez, Victor; Auli-Llinas, Francesc; Serra-Sagrista, Joan

    2016-05-19

    The lossless intra-prediction coding modality of the High Efficiency Video Coding (HEVC) standard provides high coding performance while allowing frame-by-frame basis access to the coded data. This is of interest in many professional applications such as medical imaging, automotive vision and digital preservation in libraries and archives. Various improvements to lossless intra-prediction coding have been proposed recently, most of them based on sample-wise prediction using Differential Pulse Code Modulation (DPCM). Other recent proposals aim at further reducing the energy of intra-predicted residual blocks. However, the energy reduction achieved is frequently minimal due to the difficulty of correctly predicting the sign and magnitude of residual values. In this paper, we pursue a novel approach to this energy-reduction problem using piecewise mapping (pwm) functions. Specifically, we analyze the range of values in residual blocks and apply accordingly a pwm function to map specific residual values to unique lower values. We encode appropriate parameters associated with the pwm functions at the encoder, so that the corresponding inverse pwm functions at the decoder can map values back to the same residual values. These residual values are then used to reconstruct the original signal. This mapping is, therefore, reversible and introduces no losses. We evaluate the pwm functions on 4×4 residual blocks computed after DPCM-based prediction for lossless coding of a variety of camera-captured and screen content sequences. Evaluation results show that the pwm functions can attain maximum bit-rate reductions of 5.54% and 28.33% for screen content material compared to DPCM-based and block-wise intra-prediction, respectively. Compared to Intra-Block Copy, piecewise mapping can attain maximum bit-rate reductions of 11.48% for camera-captured material.
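
    The reversibility of such a mapping can be illustrated with a toy rule that remaps the residual values occurring in a block onto the smallest-magnitude integers and signals the mapping as side information; the rank-by-frequency rule is an illustrative stand-in for the paper's pwm functions.

        from collections import Counter

        def build_pwm(block):
            """Map the values occurring in a block onto 0, -1, 1, -2, 2, ...
            (most frequent first); return forward and inverse mappings."""
            by_freq = [v for v, _ in Counter(block).most_common()]
            targets, k = [0], 1
            while len(targets) < len(by_freq):
                targets += [-k, k]
                k += 1
            fwd = dict(zip(by_freq, targets[:len(by_freq)]))
            return fwd, {c: v for v, c in fwd.items()}

        block = [5, 5, 5, -3, -3, 12, 0, 5, -3, 12]   # residuals after DPCM
        fwd, inv = build_pwm(block)
        mapped = [fwd[v] for v in block]              # lower-magnitude values
        restored = [inv[c] for c in mapped]
        assert restored == block                      # mapping is lossless
        print(mapped)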

  20. Roadmap Toward a Predictive Performance-based Commercial Energy Code

    SciTech Connect

    Rosenberg, Michael I.; Hart, Philip R.

    2014-10-01

    Energy codes have provided significant increases in building efficiency over the last 38 years, since the first national energy model code was published in late 1975. The most commonly used path in energy codes, the prescriptive path, appears to be reaching a point of diminishing returns. The current focus on prescriptive codes has limitations including significant variation in actual energy performance depending on which prescriptive options are chosen, a lack of flexibility for designers and developers, and the inability to handle control optimization that is specific to building type and use. This paper provides a high level review of different options for energy codes, including prescriptive, prescriptive packages, EUI Target, outcome-based, and predictive performance approaches. This paper also explores a next generation commercial energy code approach that places a greater emphasis on performance-based criteria. A vision is outlined to serve as a roadmap for future commercial code development. That vision is based on code development being led by a specific approach to predictive energy performance combined with building specific prescriptive packages that are designed to be both cost-effective and to achieve a desired level of performance. Compliance with this new approach can be achieved by either meeting the performance target as demonstrated by whole building energy modeling, or by choosing one of the prescriptive packages.

  1. Adaptive method for electron bunch profile prediction

    SciTech Connect

    Scheinker, Alexander; Gessner, Spencer

    2015-10-01

    We report on an experiment performed at the Facility for Advanced Accelerator Experimental Tests (FACET) at SLAC National Accelerator Laboratory, in which a new adaptive control algorithm, one with known, bounded update rates despite operating on analytically unknown cost functions, was utilized in order to provide quasi-real-time bunch property estimates of the electron beam. Multiple parameters, such as arbitrary rf phase settings and other time-varying accelerator properties, were simultaneously tuned in order to match a simulated bunch energy spectrum with a measured energy spectrum. The simple adaptive scheme was digitally implemented using matlab and the experimental physics and industrial control system. The main result is a nonintrusive, nondestructive, real-time diagnostic scheme for prediction of bunch profiles, as well as other beam parameters, the precise control of which is important for the plasma wakefield acceleration experiments being explored at FACET. © 2015 authors. Published by the American Physical Society.

  2. Adaptive method for electron bunch profile prediction

    NASA Astrophysics Data System (ADS)

    Scheinker, Alexander; Gessner, Spencer

    2015-10-01

    We report on an experiment performed at the Facility for Advanced Accelerator Experimental Tests (FACET) at SLAC National Accelerator Laboratory, in which a new adaptive control algorithm, one with known, bounded update rates despite operating on analytically unknown cost functions, was utilized in order to provide quasi-real-time bunch property estimates of the electron beam. Multiple parameters, such as arbitrary rf phase settings and other time-varying accelerator properties, were simultaneously tuned in order to match a simulated bunch energy spectrum with a measured energy spectrum. The simple adaptive scheme was digitally implemented using matlab and the experimental physics and industrial control system. The main result is a nonintrusive, nondestructive, real-time diagnostic scheme for prediction of bunch profiles, as well as other beam parameters, the precise control of which is important for the plasma wakefield acceleration experiments being explored at FACET.
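
    A sketch in the spirit of the bounded-update scheme follows: each simulated setting is dithered at its own frequency, the dither phase is modulated by the measured cost, and the per-step update stays bounded regardless of the cost's magnitude. The toy quadratic cost, gains, and frequencies are illustrative assumptions, not the experiment's configuration.

        import numpy as np

        def cost(params, target):
            """Stand-in for the simulated-vs-measured spectrum mismatch."""
            return float(np.sum((params - target) ** 2))

        rng = np.random.default_rng(0)
        target = rng.uniform(-1, 1, size=4)      # unknown "true" settings
        p = np.zeros(4)
        dt, k, alpha = 0.01, 2.0, 0.5
        omega = 20.0 * (1.0 + np.arange(4) / 10.0)   # distinct dither frequencies

        for step in range(20000):
            t = step * dt
            c = cost(p, target)
            # each parameter moves at a rate bounded by sqrt(alpha * omega_i),
            # no matter how large or irregular the measured cost is
            p += dt * np.sqrt(alpha * omega) * np.cos(omega * t + k * c)

        print(np.abs(p - target).max())          # near the optimum, up to residual dither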

  3. Long non-coding RNAs in innate and adaptive immunity

    PubMed Central

    Aune, Thomas M.; Spurlock, Charles F.

    2015-01-01

    Long noncoding RNAs (lncRNAs) represent a newly discovered class of regulatory molecules that impact a variety of biological processes in cells and organ systems. In humans, it is estimated that there may be more than twice as many lncRNA genes as protein-coding genes. However, only a handful of lncRNAs have been analyzed in detail. In this review, we describe expression and functions of lncRNAs that have been demonstrated to impact innate and adaptive immunity. These emerging paradigms illustrate remarkably diverse mechanisms that lncRNAs utilize to impact the transcriptional programs of immune cells required to fight against pathogens and maintain normal health and homeostasis. PMID:26166759

  4. Analysis of view synthesis prediction architectures in modern coding standards

    NASA Astrophysics Data System (ADS)

    Tian, Dong; Zou, Feng; Lee, Chris; Vetro, Anthony; Sun, Huifang

    2013-09-01

    Depth-based 3D formats are currently being developed as extensions to both AVC and HEVC standards. The availability of depth information facilitates the generation of intermediate views for advanced 3D applications and displays, and also enables more efficient coding of the multiview input data through view synthesis prediction techniques. This paper outlines several approaches that have been explored to realize view synthesis prediction in modern video coding standards such as AVC and HEVC. The benefits and drawbacks of various architectures are analyzed in terms of performance, complexity, and other design considerations. It is hence concluded that block-based VSP prediction for multiview video signals provides attractive coding gains with comparable complexity as traditional motion/disparity compensation.

  5. RAM: a Relativistic Adaptive Mesh Refinement Hydrodynamics Code

    SciTech Connect

    Zhang, Wei-Qun; MacFadyen, Andrew I.; /Princeton, Inst. Advanced Study

    2005-06-06

    The authors have developed a new computer code, RAM, to solve the conservative equations of special relativistic hydrodynamics (SRHD) using adaptive mesh refinement (AMR) on parallel computers. They have implemented a characteristic-wise, finite difference, weighted essentially non-oscillatory (WENO) scheme using the full characteristic decomposition of the SRHD equations to achieve fifth-order accuracy in space. For time integration they use the method of lines with a third-order total variation diminishing (TVD) Runge-Kutta scheme. They have also implemented fourth and fifth order Runge-Kutta time integration schemes for comparison. The implementation of AMR and parallelization is based on the FLASH code. RAM is modular and includes the capability to easily swap hydrodynamics solvers, reconstruction methods and physics modules. In addition to WENO they have implemented a finite volume module with the piecewise parabolic method (PPM) for reconstruction and the modified Marquina approximate Riemann solver to work with TVD Runge-Kutta time integration. They examine the difficulty of accurately simulating shear flows in numerical relativistic hydrodynamics codes. They show that under-resolved simulations of simple test problems with transverse velocity components produce incorrect results and demonstrate the ability of RAM to correctly solve these problems. RAM has been tested in one, two and three dimensions and in Cartesian, cylindrical and spherical coordinates. They have demonstrated fifth-order accuracy for WENO in one and two dimensions and performed detailed comparison with other schemes for which they show significantly lower convergence rates. Extensive testing is presented demonstrating the ability of RAM to address challenging open questions in relativistic astrophysics.

  6. Fast bi-directional prediction selection in H.264/MPEG-4 AVC temporal scalable video coding.

    PubMed

    Lin, Hung-Chih; Hang, Hsueh-Ming; Peng, Wen-Hsiao

    2011-12-01

    In this paper, we propose a fast algorithm that efficiently selects the temporal prediction type for the dyadic hierarchical-B prediction structure in H.264/MPEG-4 temporal scalable video coding (SVC). We make use of the strong correlations in prediction type inheritance to eliminate the superfluous computations for the bi-directional (BI) prediction in the finer partitions, 16×8/8×16/8×8, by referring to the best temporal prediction type of 16×16. In addition, we carefully examine the relationship in motion bit-rate costs and distortions between the BI and the uni-directional temporal prediction types. As a result, we construct a set of adaptive thresholds to remove the unnecessary BI calculations. Moreover, for the block partitions smaller than 8×8, either the forward prediction (FW) or the backward prediction (BW) is skipped based upon the information of their 8×8 partitions. Hence, the proposed schemes can efficiently reduce the extensive computational burden in calculating the BI prediction. As compared to the JSVM 9.11 software, our method reduces encoding time by 48% to 67% for a large variety of test videos over a wide range of coding bit-rates, with only a minor coding performance loss.

  7. Dream to Predict? REM Dreaming as Prospective Coding

    PubMed Central

    Llewellyn, Sue

    2016-01-01

    The dream as prediction seems inherently improbable. The bizarre occurrences in dreams never characterize everyday life. Dreams do not come true! But assuming that bizarreness negates expectations may rest on a misunderstanding of how the predictive brain works. In evolutionary terms, the ability to rapidly predict what sensory input implies—through expectations derived from discerning patterns in associated past experiences—would have enhanced fitness and survival. For example, food and water are essential for survival, associating past experiences (to identify location patterns) predicts where they can be found. Similarly, prediction may enable predator identification from what would have been only a fleeting and ambiguous stimulus—without prior expectations. To confront the many challenges associated with natural settings, visual perception is vital for humans (and most mammals) and often responses must be rapid. Predictive coding during wake may, therefore, be based on unconscious imagery so that visual perception is maintained and appropriate motor actions triggered quickly. Speed may also dictate the form of the imagery. Bizarreness, during REM dreaming, may result from a prospective code fusing phenomena with the same meaning—within a particular context. For example, if the context is possible predation, from the perspective of the prey two different predators can both mean the same (i.e., immediate danger) and require the same response (e.g., flight). Prospective coding may also prune redundancy from memories, to focus the image on the contextually-relevant elements only, thus, rendering the non-relevant phenomena indeterminate—another aspect of bizarreness. In sum, this paper offers an evolutionary take on REM dreaming as a form of prospective coding which identifies a probabilistic pattern in past events. This pattern is portrayed in an unconscious, associative, sensorimotor image which may support cognition in wake through being mobilized as a

  8. Fast coding unit selection method for high efficiency video coding intra prediction

    NASA Astrophysics Data System (ADS)

    Xiong, Jian

    2013-07-01

    The high efficiency video coding (HEVC) video coding standard under development can achieve higher compression performance than previous standards, such as MPEG-4, H.263, and H.264/AVC. To improve coding performance, a quad-tree coding structure and a robust rate-distortion (RD) optimization technique is used to select an optimum coding mode. Since the RD costs of all possible coding modes are computed to decide an optimum mode, high computational complexity is induced in the encoder. A fast learning-based coding unit (CU) size selection method is presented for HEVC intra prediction. The proposed algorithm is based on theoretical analysis that shows the non-normalized histogram of oriented gradient (n-HOG) can be used to help select CU size. A codebook is constructed offline by clustering n-HOGs of training sequences for each CU size. The optimum size is determined by comparing the n-HOG of the current CU with the learned codebooks. Experimental results show that the CU size selection scheme speeds up intra coding significantly with negligible loss of peak signal-to-noise ratio.
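
    The codebook comparison can be sketched directly: compute the non-normalized HOG of the current block and pick the CU size whose nearest learned codeword is closest. The bin count, distance metric, and random stand-in codebooks are illustrative assumptions.

        import numpy as np

        N_BINS = 8

        def n_hog(block):
            """Non-normalized histogram of oriented gradients of a block."""
            gy, gx = np.gradient(block.astype(float))
            mag = np.hypot(gx, gy)
            ang = np.mod(np.arctan2(gy, gx), np.pi)          # orientation in [0, pi)
            bins = np.minimum((ang / np.pi * N_BINS).astype(int), N_BINS - 1)
            return np.bincount(bins.ravel(), weights=mag.ravel(), minlength=N_BINS)

        def select_cu_size(block, codebooks):
            """Pick the CU size whose nearest codeword matches the block's n-HOG."""
            best_size, best_dist = None, np.inf
            h = n_hog(block)
            for size, codebook in codebooks.items():
                d = np.min(np.linalg.norm(codebook - h, axis=1))
                if d < best_dist:
                    best_size, best_dist = size, d
            return best_size

        rng = np.random.default_rng(0)
        # stand-ins for codebooks clustered offline from training sequences
        codebooks = {64: rng.uniform(0, 50, (16, N_BINS)),
                     32: rng.uniform(0, 200, (16, N_BINS)),
                     16: rng.uniform(0, 800, (16, N_BINS))}
        print(select_cu_size(rng.normal(size=(64, 64)), codebooks))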

  9. The NASA-LeRC wind turbine sound prediction code

    NASA Technical Reports Server (NTRS)

    Viterna, L. A.

    1981-01-01

    Since regular operation of the DOE/NASA MOD-1 wind turbine began in October 1979, about 10 nearby households have complained of noise from the machine. Development of the NASA-LeRC wind turbine sound prediction code began in May 1980 as part of an effort to understand and reduce the noise generated by MOD-1. Tone sound levels predicted with this code are in generally good agreement with measured data taken in the vicinity of the MOD-1 wind turbine (less than 2 rotor diameters). Comparison in the far field indicates that propagation effects due to terrain and atmospheric conditions may be amplifying the actual sound levels by about 6 dB. Parametric analysis using the code has shown that the predominant contributions to MOD-1 rotor noise are: (1) the velocity deficit in the wake of the support tower; (2) the high rotor speed; and (3) off-optimum operation.

  10. The NASA-LeRC wind turbine sound prediction code

    NASA Technical Reports Server (NTRS)

    Viterna, L. A.

    1981-01-01

    Development of the wind turbine sound prediction code began as part of an effort to understand and reduce the noise generated by Mod-1. Tone sound levels predicted with this code are in good agreement with measured data taken in the vicinity of the Mod-1 wind turbine (less than 2 rotor diameters). Comparison in the far field indicates that propagation effects due to terrain and atmospheric conditions may amplify the actual sound levels by 6 dB. Parametric analysis using the code shows that the predominant contributors to Mod-1 rotor noise are (1) the velocity deficit in the wake of the support tower, (2) the high rotor speed, and (3) off-optimum operation.

  11. Adaptive Distributed Video Coding with Correlation Estimation using Expectation Propagation

    PubMed Central

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2013-01-01

    Distributed video coding (DVC) is rapidly increasing in popularity because it shifts complexity from the encoder to the decoder while, at least in theory, sacrificing no compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder, based on the received syndromes of the Wyner-Ziv (WZ) frame and the side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, the existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where the estimate can be refined iteratively during decoding. As changes between frames might be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF, jointly with the decoding of the factor graph-based DVC code. Among different approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance with significantly lower complexity compared with sampling methods. PMID:23750314

  12. Adaptive Distributed Video Coding with Correlation Estimation using Expectation Propagation.

    PubMed

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2012-10-15

    Distributed video coding (DVC) is rapidly increasing in popularity because it shifts complexity from the encoder to the decoder while, at least in theory, sacrificing no compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder, based on the received syndromes of the Wyner-Ziv (WZ) frame and the side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, the existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where the estimate can be refined iteratively during decoding. As changes between frames might be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF, jointly with the decoding of the factor graph-based DVC code. Among different approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance with significantly lower complexity compared with sampling methods.

  13. PHURBAS: AN ADAPTIVE, LAGRANGIAN, MESHLESS, MAGNETOHYDRODYNAMICS CODE. II. IMPLEMENTATION AND TESTS

    SciTech Connect

    McNally, Colin P.; Mac Low, Mordecai-Mark; Maron, Jason L.

    2012-05-01

    We present an algorithm for simulating the equations of ideal magnetohydrodynamics and other systems of differential equations on an unstructured set of points represented by sample particles. The particles move with the fluid, so the time step is not limited by the Eulerian Courant-Friedrichs-Lewy condition. Full spatial adaptivity is required to ensure the particles fill the computational volume and gives the algorithm substantial flexibility and power. A target resolution is specified for each point in space, with particles being added and deleted as needed to meet this target. We have parallelized the code by adapting the framework provided by GADGET-2. A set of standard test problems, including 10⁻⁶ amplitude linear magnetohydrodynamics waves, magnetized shock tubes, and Kelvin-Helmholtz instabilities is presented. Finally, we demonstrate good agreement with analytic predictions of linear growth rates for magnetorotational instability in a cylindrical geometry. This paper documents the Phurbas algorithm as implemented in Phurbas version 1.1.

  14. Reflectance Prediction Modelling for Residual-Based Hyperspectral Image Coding

    PubMed Central

    Xiao, Rui; Gao, Junbin; Bossomaier, Terry

    2016-01-01

    A Hyperspectral (HS) image provides observational powers beyond human vision capability but represents more than 100 times the data of a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing “original pixel intensity”-based coding approaches using traditional image coders (e.g., JPEG2000) to “residual”-based approaches using a video coder, for better compression performance. Because HS images differ from traditional videos in their spectral characteristics and in the shape domain of their panchromatic imagery, a modified video coder is required to exploit spatial-spectral redundancy through pixel-level reflectance modelling. In this paper, a novel coding framework for HS images using Reflectance Prediction Modelling (RPM) within the latest video coding standard, High Efficiency Video Coding (HEVC), is proposed. An HS image presents a wealth of data where every pixel is considered a vector over the spectral bands. By quantitative comparison and analysis of the pixel vector distribution along spectral bands, we conclude that modelling can predict the distribution and correlation of the pixel vectors across bands. To exploit the distribution of the known pixel vectors, we estimate a predicted current spectral band from the previous bands using Gaussian mixture-based modelling. The predicted band is used as an additional reference band, together with the immediately previous band, when we apply HEVC. Every spectral band of an HS image is treated as an individual frame of a video. In this paper, we compare the proposed method with mainstream encoders. The experimental results are fully justified on three types of HS dataset with different wavelength ranges. The proposed method outperforms the existing mainstream HS encoders in terms of rate-distortion performance of HS image compression. PMID:27695102

  15. GenDecoder: genetic code prediction for metazoan mitochondria

    PubMed Central

    Abascal, Federico; Zardoya, Rafael; Posada, David

    2006-01-01

    Although the majority of organisms use the same genetic code to translate DNA, several variants have been described in a wide range of organisms, both in nuclear and organellar systems, many of them corresponding to metazoan mitochondria. These variants are usually found by comparative sequence analyses, conducted either manually or with the computer. Basically, when a particular codon in a query species is linked to positions for which a specific amino acid is consistently found in other species, then that particular codon is expected to translate as that specific amino acid. Importantly, and despite the simplicity of this approach, there are no available tools to help predict the genetic code of an organism. We present here GenDecoder, a web server for the characterization and prediction of mitochondrial genetic codes in animals. The analysis of automatic predictions for 681 metazoans allowed us to study some properties of the comparative method, in particular the relationship among sequence conservation, taxonomic sampling, and reliability of assignments. Overall, the method is highly precise (99%), although highly divergent organisms such as platyhelminths are more problematic. The GenDecoder web server is freely available from . PMID:16845034

  16. Adaptive, predictive controller for optimal process control

    SciTech Connect

    Brown, S.K.; Baum, C.C.; Bowling, P.S.; Buescher, K.L.; Hanagandi, V.M.; Hinde, R.F. Jr.; Jones, R.D.; Parkinson, W.J.

    1995-12-01

    One can derive a model for use in a Model Predictive Controller (MPC) from first principles or from experimental data. Until recently, both methods failed for all but the simplest processes. First principles are almost always incomplete, and fitting to experimental data fails for dimensions greater than one as well as for non-linear cases. Several authors have suggested the use of a neural network to fit the experimental data to a multi-dimensional and/or non-linear model. Most networks, however, use simple sigmoid functions and backpropagation for fitting. Training of these networks generally requires large amounts of data and, consequently, very long training times. In 1993 we reported on the tuning and optimization of a negative ion source using a special neural network [2]. One of the properties of this network (CNLSnet), a modified radial basis function network, is that it is able to fit data with few basis functions. Another is that its training is linear, resulting in guaranteed convergence and rapid training. We found the training to be rapid enough to support real-time control. This work has been extended to incorporate this network into an MPC, using the model built by the network for predictive control. This controller has shown some remarkable capabilities in such non-linear applications as continuous stirred exothermic tank reactors and high-purity fractional distillation columns [3]. The controller is able not only to build an appropriate model from operating data but also to thin the network continuously so that the model adapts to changing plant conditions. The controller is discussed, as well as its possible use in various difficult control problems facing this community.
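
    A toy sketch of the approach under stated assumptions: a radial-basis-function model whose weights are fitted by a linear least-squares solve (the property the abstract attributes to CNLSnet), wrapped in a one-step model predictive control loop. The plant, basis placement, and horizon are illustrative simplifications, not the controller described above.

```python
import numpy as np

def rbf_features(X, centers, width=1.0):
    """Gaussian RBF design matrix; weights are then linear in the features."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

rng = np.random.default_rng(1)
plant = lambda x, u: 0.8 * x + 0.4 * np.tanh(u)       # "true" process, unknown to the model

# Collect operating data and fit the model: one linear solve, guaranteed convergence.
X = rng.uniform(-1, 1, (200, 2))                      # columns: state x, input u
y = plant(X[:, 0], X[:, 1]) + 0.01 * rng.normal(size=200)
centers = rng.uniform(-1, 1, (25, 2))
w, *_ = np.linalg.lstsq(rbf_features(X, centers), y, rcond=None)

def predict(x, u):
    return rbf_features(np.array([[x, u]]), centers) @ w

# One-step MPC: pick the control that drives the predicted state to the setpoint.
def mpc_step(x, setpoint, candidates=np.linspace(-1, 1, 101)):
    preds = np.array([predict(x, u)[0] for u in candidates])
    return candidates[np.argmin((preds - setpoint) ** 2)]

x, setpoint = 0.0, 0.5
for _ in range(10):
    x = plant(x, mpc_step(x, setpoint))
print(f"state after 10 MPC steps: {x:.3f} (setpoint {setpoint})")
```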

  17. Adaptation of H.264/AVC predictions for enabling fast transrating

    NASA Astrophysics Data System (ADS)

    Bordes, Philippe; Cherigui, Safa

    2010-01-01

    Fast video transrating algorithms for DCT-based video coding standards have proven their efficiency in many applications and are widely used in the industry. However, they cannot be re-used for H.264/AVC because they introduce an unacceptable level of drift. To address this issue, this paper proposes to adapt the H.264/AVC predictions by processing the DC component separately from the other AC coefficients. This allows the drift to be removed from the requantization transrating algorithms. Experimental results show that the bit count in our prediction scheme increases by only 2.46% for CIF and 1.87% for 720p in intra coding, in comparison with the H.264/AVC codec at the same PSNR. The performance of fast transrating algorithms applied to streams generated with our method improves dramatically, allowing them to compete directly with the best-in-class, but computationally demanding, Cascaded Pixel Domain decode-and-recode Transcoding (CPDT) architecture. Additionally, one potential application enabled by this new prediction principle is the partial decoding of video streams to obtain reduced-size images.

  18. Efficient temporal and interlayer parameter prediction for weighted prediction in scalable high efficiency video coding

    NASA Astrophysics Data System (ADS)

    Tsang, Sik-Ho; Chan, Yui-Lam; Siu, Wan-Chi

    2017-01-01

    Weighted prediction (WP) is an efficient video coding tool introduced with the H.264/AVC video coding standard to compensate for temporal illumination changes in motion estimation and compensation. WP parameters, comprising a multiplicative weight and an additive offset for each reference frame, must be estimated and transmitted to the decoder in the slice header, which costs extra bits in the coded video bitstream. High efficiency video coding (HEVC) provides WP parameter prediction to reduce this overhead. WP parameter prediction is therefore crucial to research and applications related to WP. Prior work has further improved WP parameter prediction through implicit prediction of image characteristics and derivation of the parameters. By exploiting both temporal and interlayer redundancies, we propose three WP parameter prediction algorithms, enhanced implicit WP parameter, enhanced direct WP parameter derivation, and interlayer WP parameter, to further improve the coding efficiency of HEVC. Results show that our proposed algorithms can achieve up to 5.83% and 5.23% bitrate reduction compared to conventional scalable HEVC in the base layer for SNR scalability and 2× spatial scalability, respectively.
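
    As a concrete illustration of what WP parameters are, the sketch below fits a multiplicative weight w and additive offset o by least squares so that w·ref + o matches the current frame under a fade; real codecs quantize such parameters before signalling them in the slice header. The least-squares fit is a common estimation choice, not necessarily the method used in the paper.

```python
import numpy as np

def estimate_wp_params(cur, ref):
    """Fit cur ~ w * ref + o over all pixels (ordinary least squares)."""
    ref_flat = ref.astype(np.float64).ravel()
    cur_flat = cur.astype(np.float64).ravel()
    A = np.column_stack([ref_flat, np.ones_like(ref_flat)])
    (w, o), *_ = np.linalg.lstsq(A, cur_flat, rcond=None)
    return w, o

rng = np.random.default_rng(2)
ref = rng.integers(30, 200, (64, 64)).astype(np.float64)
cur = np.clip(1.1 * ref - 12 + rng.normal(0, 2, ref.shape), 0, 255)  # synthetic fade
w, o = estimate_wp_params(cur, ref)
print(f"w = {w:.3f}, o = {o:.2f}")                    # recovers roughly w=1.1, o=-12
```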

  19. Was Wright right? The canonical genetic code is an empirical example of an adaptive peak in nature; deviant genetic codes evolved using adaptive bridges.

    PubMed

    Seaborg, David M

    2010-08-01

    The canonical genetic code is on a sub-optimal adaptive peak with respect to its ability to minimize errors, and is close to, but not quite, optimal. This is demonstrated by the near-total adjacency of synonymous codons, the similarity of adjacent codons, and comparisons of frequency of amino acid usage with number of codons in the code for each amino acid. As a rare empirical example of an adaptive peak in nature, it shows adaptive peaks are real, not merely theoretical. The evolution of deviant genetic codes illustrates how populations move from a lower to a higher adaptive peak. This is done by the use of "adaptive bridges," neutral pathways that cross over maladaptive valleys by virtue of masking of the phenotypic expression of some maladaptive aspects in the genotype. This appears to be the general mechanism by which populations travel from one adaptive peak to another. There are multiple routes a population can follow to cross from one adaptive peak to another. These routes vary in the probability that they will be used, and this probability is determined by the number and nature of the mutations that happen along each of the routes. A modification of the depiction of adaptive landscapes showing genetic distances and probabilities of travel along their multiple possible routes would throw light on this important concept.

  20. Grid-Adapted FUN3D Computations for the Second High Lift Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Lee-Rausch, E. M.; Rumsey, C. L.; Park, M. A.

    2014-01-01

    Contributions of the unstructured Reynolds-averaged Navier-Stokes code FUN3D to the 2nd AIAA CFD High Lift Prediction Workshop are described, and detailed comparisons are made with experimental data. Using workshop-supplied grids, results for the clean wing configuration are compared with results from the structured code CFL3D. Using the same turbulence model, both codes compare reasonably well in terms of total forces and moments, and the maximum lift is similarly over-predicted for both codes compared to experiment. By including more representative geometry features such as slat and flap brackets and slat pressure tube bundles, FUN3D captures the general effects of the Reynolds number variation, but under-predicts maximum lift on workshop-supplied grids in comparison with the experimental data, due to excessive separation. However, when output-based, off-body grid adaptation in FUN3D is employed, results improve considerably. In particular, when the geometry includes both brackets and the pressure tube bundles, grid adaptation results in a more accurate prediction of lift near stall in comparison with the wind-tunnel data. Furthermore, a rotation-corrected turbulence model shows improved pressure predictions on the outboard span when using adapted grids.

  1. Efficient Prediction Structures for H.264 Multi View Coding Using Temporal Scalability

    NASA Astrophysics Data System (ADS)

    Guruvareddiar, Palanivel; Joseph, Biju K.

    2014-03-01

    Prediction structures with "disposable view components based" hierarchical coding have been proven to be efficient for H.264 multi view coding. Though these prediction structures, along with QP cascading schemes, provide superior compression efficiency when compared to the traditional IBBP coding scheme, the temporal scalability requirements of the bit stream cannot be met to the fullest. On the other hand, a fully scalable bit stream, obtained by "temporal identifier based" hierarchical coding, provides a number of advantages including bit rate adaptation and improved error resilience, but lacks compression efficiency when compared to the former scheme. In this paper it is proposed to combine the two approaches such that a fully scalable bit stream can be realized with minimal reduction in compression efficiency compared to state-of-the-art "disposable view components based" hierarchical coding. Simulation results show that the proposed method enables full temporal scalability with a maximum BDPSNR reduction of only 0.34 dB. A novel method has also been proposed for identifying the temporal identifier of legacy H.264/AVC base-layer packets. Simulation results also show that this enables the scenario where the enhancement views can be extracted at a lower frame rate (1/2 or 1/4 of the base view), with an average extraction time for a view component of only 0.38 ms.

  2. Biocomputational prediction of small non-coding RNAs in Streptomyces

    PubMed Central

    Pánek, Josef; Bobek, Jan; Mikulík, Karel; Basler, Marek; Vohradský, Jiří

    2008-01-01

    Background The first systematic study of small non-coding RNAs (sRNA, ncRNA) in Streptomyces is presented. With a few exceptions, the Streptomyces sRNAs, as well as the sRNAs in other genera of the Actinomyces group, have remained unstudied. This study was based on sequence conservation in intergenic regions of Streptomyces, localization of transcription termination factors, and the genomic arrangement of genes flanking the predicted sRNAs. Results Thirty-two potential sRNAs in Streptomyces were predicted. Of these, expression of 20 was detected by microarrays and RT-PCR. The prediction was validated by a structure-based computational approach. Two predicted sRNAs were found to be terminated by transcription termination factors different from the Rho-independent terminators. One predicted sRNA was identified computationally with high probability as a Streptomyces 6S RNA. Of the 32 predicted sRNAs, 24 were found to be structurally dissimilar from known sRNAs. Conclusion Streptomyces is the largest genus of Actinomyces, whose sRNAs have not been studied. The Actinomyces is a group of bacterial species with unique genomes and phenotypes. Therefore, in Actinomyces, new unique bacterial sRNAs may be identified. The sequence and structural dissimilarity of the predicted Streptomyces sRNAs demonstrated by this study serve as the first evidence of the uniqueness of Actinomyces sRNAs. PMID:18477385

  3. Sonic boom predictions using a modified Euler code

    NASA Technical Reports Server (NTRS)

    Siclari, Michael J.

    1992-01-01

    The environmental impact of a next generation fleet of high-speed civil transports (HSCT) is of great concern in the evaluation of the commercial development of such a transport. One of the potential environmental impacts of a high speed civilian transport is the sonic boom generated by the aircraft and its effects on the population, wildlife, and structures in the vicinity of its flight path. If an HSCT aircraft is restricted from flying overland routes due to excessive booms, the commercial feasibility of such a venture may be questionable. NASA has taken the lead in evaluating and resolving the issues surrounding the development of a high speed civilian transport through its High-Speed Research Program (HSRP). The present paper discusses the usage of a Computational Fluid Dynamics (CFD) nonlinear code in predicting the pressure signature and ultimately the sonic boom generated by a high speed civilian transport. NASA had designed, built, and wind tunnel tested two low boom configurations for flight at Mach 2 and Mach 3. Experimental data was taken at several distances from these models up to a body length from the axis of the aircraft. The near field experimental data serves as a test bed for computational fluid dynamic codes in evaluating their accuracy and reliability for predicting the behavior of future HSCT designs. Sonic boom prediction methodology exists which is based on modified linear theory. These methods can be used reliably if near field signatures are available at distances from the aircraft where nonlinear and three dimensional effects have diminished in importance. Up to the present time, the only reliable method to obtain this data was via the wind tunnel with costly model construction and testing. It is the intent of the present paper to apply a modified three dimensional Euler code to predict the near field signatures of the two low boom configurations recently tested by NASA.

  4. AGR-1 Safety Test Predictions using the PARFUME code

    SciTech Connect

    Blaise Collin

    2012-05-01

    The PARFUME modeling code was used to predict the failure probability of TRISO-coated fuel particles and the diffusion of fission products through these particles during safety tests following the first irradiation test of the Advanced Gas Reactor program (AGR-1). These calculations support the AGR-1 Safety Testing Experiment, which is part of the PIE effort on AGR-1. Modeling of the AGR-1 safety test predictions includes a 620-day irradiation followed by a 300-hour heat-up phase of selected AGR-1 compacts. Results include fuel failure probability, palladium penetration, and fractional release of fission products. Results show that no particle failure is predicted during irradiation or heat-up, and that fractional release of fission products is limited during irradiation but increases significantly during heat-up.

  5. Development of a massively parallel parachute performance prediction code

    SciTech Connect

    Peterson, C.W.; Strickland, J.H.; Wolfe, W.P.; Sundberg, W.D.; McBride, D.D.

    1997-04-01

    The Department of Energy has given Sandia full responsibility for the complete life cycle (cradle to grave) of all nuclear weapon parachutes. Sandia National Laboratories is initiating development of a complete numerical simulation of parachute performance, beginning with parachute deployment and continuing through inflation and steady state descent. The purpose of the parachute performance code is to predict the performance of stockpile weapon parachutes as these parachutes continue to age well beyond their intended service life. A new massively parallel computer will provide unprecedented speed and memory for solving this complex problem, and new software will be written to treat the coupled fluid, structure and trajectory calculations as part of a single code. Verification and validation experiments have been proposed to provide the necessary confidence in the computations.

  6. Interpersonal predictive coding, not action perception, is impaired in autism

    PubMed Central

    von der Lühe, T.; Manera, V.; Barisic, I.; Becchio, C.; Vogeley, K.

    2016-01-01

    This study was conducted to examine interpersonal predictive coding in individuals with high-functioning autism (HFA). Healthy and HFA participants observed point-light displays of two agents (A and B) performing separate actions. In the ‘communicative’ condition, the action performed by agent B responded to a communicative gesture performed by agent A. In the ‘individual’ condition, agent A's communicative action was substituted by a non-communicative action. Using a simultaneous masking-detection task, we demonstrate that observing agent A's communicative gesture enhanced visual discrimination of agent B for healthy controls, but not for participants with HFA. These results were not explained by differences in attentional factors as measured via eye-tracking, or by differences in the recognition of the point-light actions employed. Our findings, therefore, suggest that individuals with HFA are impaired in the use of social information to predict others' actions and provide behavioural evidence that such deficits could be closely related to impairments of predictive coding. PMID:27069050

  7. DCT/DST-based transform coding for intra prediction in image/video coding.

    PubMed

    Saxena, Ankur; Fernandes, Felix C

    2013-10-01

    In this paper, we present a DCT/DST-based transform scheme that applies either the conventional DCT or the type-7 DST for all the video-coding intra-prediction modes: vertical, horizontal, and oblique. Our approach is applicable to any block-based intra prediction scheme in a codec that applies transforms separably along the horizontal and vertical directions. Previously, Han, Saxena, and Rose showed that for the intra-predicted residuals of the horizontal and vertical modes, the DST is the optimal transform, with performance close to the KLT. Here, we prove that this is indeed the case for the other oblique modes. The optimal choice of DCT or DST is based on the intra-prediction mode and requires no additional signaling information or rate-distortion search. The DCT/DST scheme presented in this paper was adopted in the HEVC standardization in March 2011. Further simplifications, which remove the mode dependency between DCT and DST and simply always use the DST for 4 × 4 intra luma blocks to reduce implementation complexity, were adopted in the HEVC standard in July 2012. Simulation results for the DCT/DST algorithm, obtained with the reference software of the ongoing HEVC standardization, are presented. Our results show that the DCT/DST scheme provides significant BD-rate improvement over the conventional DCT-based scheme for intra prediction in video sequences.
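
    A hedged numerical sketch of mode-dependent transform selection: the DST-VII matrix below matches the HEVC 4×4 transform up to integer scaling, while the mode-to-transform mapping is a simplified stand-in for the standard's tables (DST-VII along the direction predicted from a nearby boundary, DCT along the other, DST both ways for oblique modes).

```python
import numpy as np
from scipy.fft import dct

def dst7_matrix(n):
    """DST-VII basis; for n=4 this matches the HEVC 4x4 matrix up to scaling."""
    k, m = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return 2 / np.sqrt(2 * n + 1) * np.sin(np.pi * (2 * k + 1) * (m + 1) / (2 * n + 1))

def transform_residual(res, mode):
    """DST-VII along a direction whose residual grows away from the prediction
    boundary, conventional DCT along the other; DST both ways for oblique."""
    S = dst7_matrix(res.shape[0])
    v_dst, h_dst = {"vertical": (True, False),
                    "horizontal": (False, True)}.get(mode, (True, True))
    tmp = S @ res if v_dst else dct(res, axis=0, norm="ortho")
    return tmp @ S.T if h_dst else dct(tmp, axis=1, norm="ortho")

res = np.arange(16, dtype=float).reshape(4, 4)   # ramp residual, as after vertical prediction
print(np.round(transform_residual(res, "vertical"), 2))
```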

  8. Structured Set Intra Prediction With Discriminative Learning in a Max-Margin Markov Network for High Efficiency Video Coding

    PubMed Central

    Dai, Wenrui; Xiong, Hongkai; Jiang, Xiaoqian; Chen, Chang Wen

    2014-01-01

    This paper proposes a novel model on intra coding for High Efficiency Video Coding (HEVC), which simultaneously predicts blocks of pixels with optimal rate distortion. It utilizes the spatial statistical correlation for the optimal prediction based on 2-D contexts, in addition to formulating the data-driven structural interdependences to make the prediction error coherent with the probability distribution, which is desirable for successful transform and coding. The structured set prediction model incorporates a max-margin Markov network (M3N) to regulate and optimize multiple block predictions. The model parameters are learned by discriminating the actual pixel value from other possible estimates to maximize the margin (i.e., decision boundary bandwidth). Compared to existing methods that focus on minimizing prediction error, the M3N-based model adaptively maintains the coherence for a set of predictions. Specifically, the proposed model concurrently optimizes a set of predictions by associating the loss for individual blocks to the joint distribution of succeeding discrete cosine transform coefficients. When the sample size grows, the prediction error is asymptotically upper bounded by the training error under the decomposable loss function. As an internal step, we optimize the underlying Markov network structure to find states that achieve the maximal energy using expectation propagation. For validation, we integrate the proposed model into HEVC for optimal mode selection on rate-distortion optimization. The proposed prediction model obtains up to 2.85% bit rate reduction and achieves better visual quality in comparison to the HEVC intra coding. PMID:25505829

  9. Arc Jet Facility Test Condition Predictions Using the ADSI Code

    NASA Technical Reports Server (NTRS)

    Palmer, Grant; Prabhu, Dinesh; Terrazas-Salinas, Imelda

    2015-01-01

    The Aerothermal Design Space Interpolation (ADSI) tool is used to interpolate databases of previously computed computational fluid dynamics solutions for test articles in a NASA Ames arc jet facility. The arc jet databases are generated with a Navier-Stokes flow solver using previously determined best practices. The arc jet mass flow rates and arc currents used to discretize the database are chosen to span the operating conditions possible in the arc jet, and are based on previous arc jet experimental conditions where possible. The ADSI code is a database interpolation, manipulation, and examination tool that can be used to estimate the stagnation point pressure and heating rate for user-specified values of arc jet mass flow rate and arc current. The interpolation can also be performed in the other direction (predicting the mass flow and current needed to achieve a desired stagnation point pressure and heating rate). ADSI is also used to generate 2-D response surfaces of stagnation point pressure and heating rate as a function of mass flow rate and arc current (or vice versa). Arc jet test data is used to assess the predictive capability of the ADSI code.
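
    The core database-interpolation idea can be sketched as follows; the grid ranges and the analytic "database" values are synthetic placeholders, not arc jet data, and the real tool interpolates precomputed CFD solutions.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Pretend database: CFD solutions precomputed on a grid of arc-jet settings.
mass_flow = np.linspace(0.1, 1.0, 10)                 # kg/s, hypothetical range
current = np.linspace(500, 2000, 16)                  # A, hypothetical range
M, I = np.meshgrid(mass_flow, current, indexing="ij")
heating = 50 * np.sqrt(M) * (I / 1000) ** 1.5         # placeholder q_dot [W/cm^2]
pressure = 30 * M * (I / 1000)                        # placeholder p_stag [kPa]

# Interpolators over the (mass flow, current) design space.
q_interp = RegularGridInterpolator((mass_flow, current), heating)
p_interp = RegularGridInterpolator((mass_flow, current), pressure)

setting = np.array([0.37, 1240.0])                    # user-specified (mdot, current)
print(f"q = {q_interp(setting)[0]:.1f} W/cm^2, p = {p_interp(setting)[0]:.1f} kPa")
```

    The inverse direction described in the abstract (finding the settings that achieve a target pressure and heating rate) would amount to a two-dimensional root find over these interpolants.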

  10. Customizing Countermeasure Prescriptions using Predictive Measures of Sensorimotor Adaptability

    NASA Technical Reports Server (NTRS)

    Bloomberg, J. J.; Peters, B. T.; Mulavara, A. P.; Miller, C. A.; Batson, C. D.; Wood, S. J.; Guined, J. R.; Cohen, H. S.; Buccello-Stout, R.; DeDios, Y. E.; Kofman, I. S.; Szecsy, D. L.; Erdeniz, B.; Koppelmans, V.; Seidler, R. D.

    2014-01-01

    Astronauts experience sensorimotor disturbances during the initial exposure to microgravity and during the readaptation phase following a return to a gravitational environment. These alterations may lead to disruption in the ability to perform mission-critical functional tasks during and after these gravitational transitions. Astronauts show significant inter-subject variation in adaptive capability following gravitational transitions. The ability to predict the manner and degree to which each individual astronaut will be affected would improve the effectiveness of a countermeasure comprised of a training program designed to enhance sensorimotor adaptability. Due to this inherent individual variability, we need to develop predictive measures of sensorimotor adaptability that will allow us to predict, before actual space flight, which crewmembers will experience challenges in adaptive capacity. Obtaining this information will allow us to design and implement better sensorimotor adaptability training countermeasures, customized for each crewmember's unique adaptive capabilities. Therefore the goals of this project are to: 1) develop a set of predictive measures capable of identifying individual differences in sensorimotor adaptability, and 2) use this information to design sensorimotor adaptability training countermeasures that are customized for each crewmember's individual sensorimotor adaptive characteristics. To achieve these goals we are currently pursuing the following specific aims: Aim 1: Determine whether behavioral metrics of individual sensory bias predict sensorimotor adaptability. For this aim, subjects perform tests that delineate individual sensory biases in tests of visual, vestibular, and proprioceptive function. Aim 2: Determine if individual capability for strategic and plastic-adaptive responses predicts sensorimotor adaptability. For this aim, each subject's strategic and plastic-adaptive motor learning abilities are assessed using

  11. A Programmable Liquid Collimator for Both Coded Aperture Adaptive Imaging and Multiplexed Compton Scatter Tomography

    DTIC Science & Technology

    2012-03-01

    Thesis by Jack G. M. FitzGerald: a programmable liquid collimator for both coded aperture adaptive imaging and multiplexed Compton scatter tomography. The record also cites the technical report "Assessment of COMSCAN, a Compton Backscatter Imaging Camera, for the One-Sided Non-Destructive Inspection of Aerospace Components."

  12. Application study of piecewise context-based adaptive binary arithmetic coding combined with modified LZC

    NASA Astrophysics Data System (ADS)

    Su, Yan; Jun, Xie Cheng

    2006-08-01

    An algorithm combining LZC and arithmetic coding for image compression is presented, and both theoretical deduction and simulation results confirm the correctness and feasibility of the algorithm. According to the characteristics of context-based adaptive binary arithmetic coding and entropy, LZC was modified to cooperate with the optimized piecewise arithmetic coding. This algorithm improves the compression ratio without any additional time consumption compared to the traditional method.

  13. Adaptive Mesh Refinement Algorithms for Parallel Unstructured Finite Element Codes

    SciTech Connect

    Parsons, I D; Solberg, J M

    2006-02-03

    This project produced algorithms for and software implementations of adaptive mesh refinement (AMR) methods for solving practical solid and thermal mechanics problems on multiprocessor parallel computers using unstructured finite element meshes. The overall goal is to provide computational solutions that are accurate to some prescribed tolerance, and adaptivity is the correct path toward this goal. These new tools will enable analysts to conduct more reliable simulations at reduced cost, both in terms of analyst and computer time. Previous academic research in the field of adaptive mesh refinement has produced a voluminous literature focused on error estimators and demonstration problems; relatively little progress has been made on producing efficient implementations suitable for large-scale problem solving on state-of-the-art computer systems. Research issues that were considered include: effective error estimators for nonlinear structural mechanics; local meshing at irregular geometric boundaries; and constructing efficient software for parallel computing environments.

  14. Genome-environment associations in sorghum landraces predict adaptive traits

    PubMed Central

    Lasky, Jesse R.; Upadhyaya, Hari D.; Ramu, Punna; Deshpande, Santosh; Hash, C. Tom; Bonnette, Jason; Juenger, Thomas E.; Hyma, Katie; Acharya, Charlotte; Mitchell, Sharon E.; Buckler, Edward S.; Brenton, Zachary; Kresovich, Stephen; Morris, Geoffrey P.

    2015-01-01

    Improving environmental adaptation in crops is essential for food security under global change, but phenotyping adaptive traits remains a major bottleneck. If associations between single-nucleotide polymorphism (SNP) alleles and environment of origin in crop landraces reflect adaptation, then these could be used to predict phenotypic variation for adaptive traits. We tested this proposition in the global food crop Sorghum bicolor, characterizing 1943 georeferenced landraces at 404,627 SNPs and quantifying allelic associations with bioclimatic and soil gradients. Environment explained a substantial portion of SNP variation, independent of geographical distance, and genic SNPs were enriched for environmental associations. Further, environment-associated SNPs predicted genotype-by-environment interactions under experimental drought stress and aluminum toxicity. Our results suggest that genomic signatures of environmental adaptation may be useful for crop improvement, enhancing germplasm identification and marker-assisted selection. Together, genome-environment associations and phenotypic analyses may reveal the basis of environmental adaptation. PMID:26601206

  15. Fast motion prediction algorithm for multiview video coding

    NASA Astrophysics Data System (ADS)

    Abdelazim, Abdelrahman; Zhang, Guang Y.; Mein, Stephen J.; Varley, Martin R.; Ait-Boudaoud, Djamel

    2011-06-01

    Multiview Video Coding (MVC) is an extension to the H.264/MPEG-4 AVC video compression standard, developed with joint efforts by MPEG/VCEG to enable efficient encoding of sequences captured simultaneously from multiple cameras using a single video stream. The design is therefore aimed at exploiting inter-view dependencies in addition to reducing temporal redundancies. However, this further increases the overall encoding complexity. In this paper, the high correlation between a macroblock and its enclosed partitions is utilised to estimate motion homogeneity, and based on the result inter-view prediction is selectively enabled or disabled. Moreover, if the MVC is divided into three layers in terms of motion prediction, the first being the full- and sub-pixel motion search, the second being the mode selection process, and the third being the repetition of the first and second for inter-view prediction, the proposed algorithm significantly reduces the complexity in all three layers. To assess the proposed algorithm, a comprehensive set of experiments was conducted. The results show that the proposed algorithm significantly reduces the motion estimation time whilst maintaining similar rate-distortion performance, when compared to both the H.264/MVC reference software and recently reported work.

  16. A CMOS Imager with Focal Plane Compression using Predictive Coding

    NASA Technical Reports Server (NTRS)

    Leon-Salas, Walter D.; Balkir, Sina; Sayood, Khalid; Schemm, Nathan; Hoffman, Michael W.

    2007-01-01

    This paper presents a CMOS image sensor with focal-plane compression. The design has a column-level architecture and is based on predictive coding techniques for image decorrelation. The prediction operations are performed in the analog domain to avoid quantization noise and to decrease the area complexity of the circuit. The prediction residuals are quantized and encoded by a joint quantizer/coder circuit. To save area resources, the joint quantizer/coder circuit exploits common circuitry between a single-slope analog-to-digital converter (ADC) and a Golomb-Rice entropy coder. This combination of ADC and encoder allows the integration of the entropy coder at the column level. A prototype chip was fabricated in a 0.35 μm CMOS process. The output of the chip is a compressed bit stream. The test chip occupies a silicon area of 2.60 mm × 5.96 mm, which includes an 80 × 44 APS array. Tests of the fabricated chip demonstrate the validity of the design.
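
    A software analogue of the signal chain, assuming a simple previous-pixel predictor per column and a fixed Golomb-Rice parameter k; on the chip, prediction happens in the analog domain and the coder shares circuitry with the single-slope ADC, so this is an illustration of the coding principle only.

```python
import numpy as np

def golomb_rice_encode(value, k):
    """Unary-coded quotient followed by a k-bit binary remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

def encode_image(img, k=3):
    bits = []
    for col in img.T:                                  # column-level architecture
        pred = 128                                     # fixed initial prediction
        for pixel in map(int, col):
            residual = pixel - pred
            mapped = 2 * abs(residual) - (residual < 0)  # fold signed to unsigned
            bits.append(golomb_rice_encode(mapped, k))
            pred = pixel                               # previous-pixel predictor
    return "".join(bits)

rng = np.random.default_rng(3)
img = np.clip(np.cumsum(rng.normal(0, 4, (44, 80)), axis=0) + 128, 0, 255).astype(int)
stream = encode_image(img)
print(f"{len(stream) / img.size:.2f} bits/pixel vs 8 raw")
```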

  17. A User's Guide to AMR1D: An Instructional Adaptive Mesh Refinement Code for Unstructured Grids

    NASA Technical Reports Server (NTRS)

    deFainchtein, Rosalinda

    1996-01-01

    This report documents the code AMR1D, which is currently posted on the World Wide Web (http://sdcd.gsfc.nasa.gov/ESS/exchange/contrib/de-fainchtein/adaptive_mesh_refinement.html). AMR1D is a one-dimensional finite element fluid-dynamics solver, capable of adaptive mesh refinement (AMR). It was written as an instructional tool for AMR in unstructured-mesh codes. It is meant to illustrate the minimum requirements for AMR in more than one dimension. For that purpose, it uses the same type of data structure that would be necessary in a two-dimensional AMR code (loosely following the algorithm described by Lohner).

  18. Numerical Prediction of SERN Performance using WIND code

    NASA Technical Reports Server (NTRS)

    Engblom, W. A.

    2003-01-01

    Computational results are presented for the performance and flow behavior of single-expansion ramp nozzles (SERNs) during overexpanded operation and transonic flight. Three-dimensional Reynolds-Averaged Navier Stokes (RANS) results are obtained for two vehicle configurations, including the NASP Model 5B and ISTAR RBCC (a variant of X-43B) using the WIND code. Numerical predictions for nozzle integrated forces and pitch moments are directly compared to experimental data for the NASP Model 5B, and adequate-to-excellent agreement is found. The sensitivity of SERN performance and separation phenomena to freestream static pressure and Mach number is demonstrated via a matrix of cases for both vehicles. 3-D separation regions are shown to be induced by either lateral (e.g., sidewall) shocks or vertical (e.g., cowl trailing edge) shocks. Finally, the implications of this work to future preliminary design efforts involving SERNs are discussed.

  19. Adaptive Coding and Modulation Experiment With NASA's Space Communication and Navigation Testbed

    NASA Technical Reports Server (NTRS)

    Downey, Joseph A.; Mortensen, Dale J.; Evans, Michael A.; Briones, Janette C.; Tollis, Nicholas

    2016-01-01

    National Aeronautics and Space Administration (NASA)'s Space Communication and Navigation Testbed is an advanced integrated communication payload on the International Space Station. This paper presents results from an adaptive coding and modulation (ACM) experiment over S-band using a direct-to-earth link between the SCaN Testbed and the Glenn Research Center. The testing leverages the established Digital Video Broadcasting Second Generation (DVB-S2) standard to provide various modulation and coding options, and uses the Space Data Link Protocol (Consultative Committee for Space Data Systems (CCSDS) standard) for the uplink and downlink data framing. The experiment was conducted in a challenging environment due to the multipath and shadowing caused by the International Space Station structure. Several approaches for improving the ACM system are presented, including predictive and learning techniques to accommodate signal fades. Performance of the system is evaluated as a function of end-to-end system latency (round-trip delay), and compared to the capacity of the link. Finally, improvements over standard NASA waveforms are presented.

  20. Prediction of the space adaptation syndrome

    NASA Technical Reports Server (NTRS)

    Reschke, M. F.; Homick, J. L.; Ryan, P.; Moseley, E. C.

    1984-01-01

    The univariate and multivariate relationships of provocative measures used to produce motion sickness symptoms were described. Normative subjects were used to develop and cross-validate sets of linear equations that optimally predict motion sickness in parabolic flights. The possibility of reducing the number of measurements required for prediction was assessed. After describing the variables verbally and statistically for 159 subjects, a factor analysis of 27 variables was completed to improve understanding of the relationships between variables and to reduce the number of measures for prediction purposes. The results of this analysis show that none of the variables are significantly related to the responses to parabolic flights. A set of variables was selected to predict responses to KC-135 flights. A series of discriminant analyses was completed. Results indicate that low, moderate, or severe susceptibility could be correctly predicted 64 percent and 53 percent of the time on the original and cross-validation samples, respectively. Both the factor analysis and the discriminant analysis provided no basis for reducing the number of tests.

  1. Adaptive face space coding in congenital prosopagnosia: typical figural aftereffects but abnormal identity aftereffects.

    PubMed

    Palermo, Romina; Rivolta, Davide; Wilson, C Ellie; Jeffery, Linda

    2011-12-01

    People with congenital prosopagnosia (CP) report difficulty recognising faces in everyday life and perform poorly on face recognition tests. Here, we investigate whether impaired adaptive face space coding might contribute to poor face recognition in CP. To pinpoint how adaptation may affect face processing, a group of CPs and matched controls completed two complementary face adaptation tasks: the figural aftereffect, which reflects adaptation to general distortions of shape, and the identity aftereffect, which directly taps the mechanisms involved in the discrimination of different face identities. CPs displayed a typical figural aftereffect, consistent with evidence that they are able to process some shape-based information from faces, e.g., cues to discriminate sex. CPs also demonstrated a significant identity aftereffect. However, unlike controls, CPs' impression of the identity of the neutral average face was not significantly shifted by adaptation, suggesting that adaptive coding of identity is abnormal in CP. In sum, CPs show reduced aftereffects, but only when the task directly taps the use of the face norms used to code individual identity. This finding of a reduced face identity aftereffect in individuals with severe face recognition problems is consistent with suggestions that adaptive coding may have a functional role in face recognition.

  2. AGR-2 safety test predictions using the PARFUME code

    SciTech Connect

    Collin, Blaise P.

    2014-09-01

    This report documents calculations performed to predict failure probability of TRISO-coated fuel particles and diffusion of fission products through these particles during safety tests following the second irradiation test of the Advanced Gas Reactor program (AGR-2). The calculations include the modeling of the AGR-2 irradiation that occurred from June 2010 to October 2013 in the Advanced Test Reactor (ATR) and the modeling of a safety testing phase to support safety tests planned at Oak Ridge National Laboratory and at Idaho National Laboratory (INL) for a selection of AGR-2 compacts. The heat-up of AGR-2 compacts is a critical component of the AGR-2 fuel performance evaluation, and its objectives are to identify the effect of accident test temperature, burnup, and irradiation temperature on the performance of the fuel at elevated temperature. Safety testing of compacts will be followed by detailed examinations of the fuel particles to further evaluate fission product retention and behavior of the kernel and coatings. The modeling was performed using the particle fuel model computer code PARFUME developed at INL. PARFUME is an advanced gas-cooled reactor fuel performance modeling and analysis code (Miller 2009). It has been developed as an integrated mechanistic code that evaluates the thermal, mechanical, and physico-chemical behavior of fuel particles during irradiation to determine the failure probability of a population of fuel particles given the particle-to-particle statistical variations in physical dimensions and material properties that arise from the fuel fabrication process, accounting for all viable mechanisms that can lead to particle failure. The code also determines the diffusion of fission products from the fuel through the particle coating layers, and through the fuel matrix to the coolant boundary. The subsequent release of fission products is calculated at the compact level (release of fission products from the compact). PARFUME calculates the

  3. Vector Sum Excited Linear Prediction (VSELP) speech coding at 4.8 kbps

    NASA Technical Reports Server (NTRS)

    Gerson, Ira A.; Jasiuk, Mark A.

    1990-01-01

    Code Excited Linear Prediction (CELP) speech coders exhibit good performance at data rates as low as 4800 bps. The major drawback of CELP-type coders is their large computational requirements. The Vector Sum Excited Linear Prediction (VSELP) speech coder utilizes a codebook with a structure that allows for a very efficient search procedure. Other advantages of the VSELP codebook structure are discussed, and a detailed description of a 4.8 kbps VSELP coder is given. This coder is an improved version of the VSELP algorithm, which finished first in the NSA's evaluation of 4.8 kbps speech coders. The coder uses a subsample-resolution single-tap long-term predictor, a single VSELP excitation codebook, a novel gain quantizer that is robust to channel errors, and a new adaptive pre/postfilter arrangement.
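
    The vector-sum structure can be illustrated in a few lines: each of the 2^M codevectors is a ±1-weighted sum of M basis vectors, so the whole codebook is represented by M vectors. The exhaustive search below is for clarity only; the structure also admits a fast search that updates one basis term per step (e.g., in Gray-code order). Dimensions and signals here are made up.

```python
import numpy as np
from itertools import product

def vselp_search(target, basis):
    """Return the sign combination whose codevector best matches the target
    (maximum normalized correlation, i.e., minimum gain-adjusted error)."""
    best, best_signs = -np.inf, None
    for signs in product((-1, 1), repeat=len(basis)):
        code = np.asarray(signs) @ basis              # +/-1-weighted sum of bases
        score = (target @ code) ** 2 / (code @ code)  # gain-optimized match
        if score > best:
            best, best_signs = score, signs
    return best_signs

rng = np.random.default_rng(4)
basis = rng.normal(size=(7, 40))                      # M=7 basis vectors, 40 samples
target = np.array([1, -1, 1, 1, -1, -1, 1]) @ basis + 0.1 * rng.normal(size=40)
# Recovers the signs up to overall polarity, which the gain sign absorbs.
print(vselp_search(target, basis))
```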

  4. Deficits in context-dependent adaptive coding of reward in schizophrenia

    PubMed Central

    Kirschner, Matthias; Hager, Oliver M; Bischof, Martin; Hartmann-Riemer, Matthias N; Kluge, Agne; Seifritz, Erich; Tobler, Philippe N; Kaiser, Stefan

    2016-01-01

    Theoretical principles of information processing and empirical findings suggest that to efficiently represent all possible rewards in the natural environment, reward-sensitive neurons have to adapt their coding range dynamically to the current reward context. Adaptation ensures that the reward system is most sensitive for the most likely rewards, enabling the system to efficiently represent a potentially infinite range of reward information. A deficit in neural adaptation would prevent precise representation of rewards and could have detrimental effects on an organism’s ability to optimally engage with its environment. In schizophrenia, reward processing is known to be impaired and has been linked to different symptom dimensions. However, despite the fundamental significance of coding reward adaptively, no study has elucidated whether adaptive reward processing is impaired in schizophrenia. We therefore studied patients with schizophrenia (n=27) and healthy controls (n=25), using functional magnetic resonance imaging in combination with a variant of the monetary incentive delay task. Compared with healthy controls, patients with schizophrenia showed less efficient neural adaptation to the current reward context, leading to an imprecise neural representation of reward. Importantly, the deficit correlated with total symptom severity. Our results suggest that some of the deficits in reward processing in schizophrenia might be due to inefficient neural adaptation to the current reward context. Furthermore, because adaptive coding is a ubiquitous feature of the brain, we believe that our findings provide an avenue for defining a general impairment in neural information processing underlying this debilitating disorder. PMID:27430009

  5. PHURBAS: AN ADAPTIVE, LAGRANGIAN, MESHLESS, MAGNETOHYDRODYNAMICS CODE. I. ALGORITHM

    SciTech Connect

    Maron, Jason L.; McNally, Colin P.; Mac Low, Mordecai-Mark

    2012-05-01

    We present an algorithm for simulating the equations of ideal magnetohydrodynamics and other systems of differential equations on an unstructured set of points represented by sample particles. Local, third-order, least-squares, polynomial interpolations (Moving Least Squares interpolations) are calculated from the field values of neighboring particles to obtain field values and spatial derivatives at the particle position. Field values and particle positions are advanced in time with a second-order predictor-corrector scheme. The particles move with the fluid, so the time step is not limited by the Eulerian Courant-Friedrichs-Lewy condition. Full spatial adaptivity is implemented to ensure the particles fill the computational volume, which gives the algorithm substantial flexibility and power. A target resolution is specified for each point in space, with particles being added and deleted as needed to meet this target. Particle addition and deletion is based on a local void and clump detection algorithm. Dynamic artificial viscosity fields provide stability to the integration. The resulting algorithm provides a robust solution for modeling flows that require Lagrangian or adaptive discretizations to resolve. This paper derives and documents the Phurbas algorithm as implemented in Phurbas version 1.1. A following paper presents the implementation and test problem results.

  6. Lossless compression of hyperspectral images using conventional recursive least-squares predictor with adaptive prediction bands

    NASA Astrophysics Data System (ADS)

    Gao, Fang; Guo, Shuxu

    2016-01-01

    An efficient lossless compression scheme for hyperspectral images using a conventional recursive least-squares (CRLS) predictor with adaptive prediction bands is proposed. The proposed scheme first calculates the preliminary estimates to form the input vector of the CRLS predictor. The number of bands used in prediction is then adaptively selected by an exhaustive search for the number that minimizes the prediction residual. Finally, after prediction, the prediction residuals are sent to an adaptive arithmetic coder. Experiments on the newer Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) images in the Consultative Committee for Space Data Systems (CCSDS) test set show that the proposed scheme yields an average compression performance of 3.29 bits/pixel, 5.57 bits/pixel, and 2.44 bits/pixel on the 16-bit calibrated images, the 16-bit uncalibrated images, and the 12-bit uncalibrated images, respectively. Experimental results demonstrate that the proposed scheme obtains compression results very close to clustered differential pulse code modulation with adaptive prediction length, which achieves the best lossless compression performance for AVIRIS images in the CCSDS test set, and outperforms other current state-of-the-art schemes with relatively low computational complexity.
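
    A minimal sketch of RLS-based inter-band prediction, with a fixed number of prediction bands P (the scheme above selects this number adaptively by exhaustive search): each pixel in the current band is predicted from co-located pixels of the P previous bands, and the weights are updated per pixel by the standard RLS recursions.

```python
import numpy as np

def rls_predict_band(prev_bands, cur_band, lam=0.999, delta=0.01):
    P = prev_bands.shape[0]
    w = np.zeros(P)
    Pmat = np.eye(P) / delta                          # inverse correlation estimate
    residuals = np.empty(cur_band.size)
    for i, (x, d) in enumerate(zip(prev_bands.reshape(P, -1).T, cur_band.ravel())):
        k = Pmat @ x / (lam + x @ Pmat @ x)           # RLS gain vector
        e = d - w @ x                                 # prediction residual (to be coded)
        w = w + k * e                                 # weight update
        Pmat = (Pmat - np.outer(k, x @ Pmat)) / lam   # inverse-correlation update
        residuals[i] = e
    return residuals.reshape(cur_band.shape)

rng = np.random.default_rng(5)
bands = rng.normal(size=(4, 32, 32)).cumsum(axis=0)   # correlated synthetic "bands"
res = rls_predict_band(bands[:3], bands[3])
print(f"residual std {res.std():.3f} vs band std {bands[3].std():.3f}")
```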

  7. Adaptive uniform grayscale coded aperture design for high dynamic range compressive spectral imaging

    NASA Astrophysics Data System (ADS)

    Diaz, Nelson; Rueda, Hoover; Arguello, Henry

    2016-05-01

    Imaging spectroscopy is an important area with many applications in surveillance, agriculture, and medicine. The disadvantage of conventional spectroscopy techniques is that they collect the whole datacube. In contrast, compressive spectral imaging systems capture snapshot compressive projections, which are the input of reconstruction algorithms that yield the underlying datacube. Common compressive spectral imagers use coded apertures to perform the coded projections. The coded apertures are the key elements in these imagers since they define the sensing matrix of the system. Proper design of the coded aperture entries leads to good reconstruction quality. In addition, the compressive measurements are prone to saturation due to the limited dynamic range of the sensor, hence the design of coded apertures must account for saturation. The saturation errors in compressive measurements are unbounded, and compressive sensing recovery algorithms only provide solutions for bounded noise or noise bounded with high probability. This paper proposes the design of uniform adaptive grayscale coded apertures (UAGCA) to improve the dynamic range of the estimated spectral images by reducing the saturation levels. The saturation is attenuated between snapshots using an adaptive filter which updates the entries of the grayscale coded aperture based on the previous snapshots. The coded apertures are optimized in terms of transmittance and number of grayscale levels. The advantage of the proposed method is the efficient use of the dynamic range of the image sensor. Extensive simulations show improvements in image reconstruction of the proposed method compared with grayscale coded apertures (UGCA) and adaptive block-unblock coded apertures (ABCA) of up to 10 dB.

  8. Sparse basis selection: new results and application to adaptive prediction of video source traffic.

    PubMed

    Atiya, Amir F; Aly, Mohamed A; Parlos, Alexander G

    2005-09-01

    Real-time prediction of video source traffic is an important step in many network management tasks such as dynamic bandwidth allocation and end-to-end quality-of-service (QoS) control strategies. In this paper, an adaptive prediction model for MPEG-coded traffic is developed. A novel technique, first developed in the signal processing community, called sparse basis selection, is used. It is based on selecting a small subset of inputs (basis) from among a large dictionary of possible inputs. A new sparse basis selection algorithm is developed that efficiently updates the input selection adaptively. When a new measurement is received, the proposed algorithm updates the selected inputs in a recursive manner. Thus, adaptability lies not only in the weight adjustment, but also in the dynamic update of the inputs. The algorithm is applied to the problem of single-step-ahead prediction of MPEG-coded video source traffic, and the developed method achieves improved results compared to those published in the literature. The present analysis indicates that the adaptive feature of the developed algorithm appears to add significant overall value.
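
    The recursive update machinery is the paper's contribution; the batch greedy core it builds on can be sketched as follows, where dictionary columns are selected one at a time by correlation with the residual and the fit is refreshed by least squares. The interface and stopping rule are illustrative.

        import numpy as np

        def greedy_basis_selection(D, y, n_select):
            """Batch greedy core (matching-pursuit flavour): pick the dictionary
            column most correlated with the residual, refit by least squares,
            repeat. The paper's algorithm additionally updates this selection
            recursively as each new traffic measurement arrives."""
            selected, residual = [], y.astype(float).copy()
            for _ in range(n_select):
                scores = np.abs(D.T @ residual)
                scores[selected] = -np.inf            # never re-pick a column
                selected.append(int(np.argmax(scores)))
                Ds = D[:, selected]
                coef, *_ = np.linalg.lstsq(Ds, y, rcond=None)
                residual = y - Ds @ coef              # shrink the unexplained part
            return selected, coef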

  9. Predictive Coding: A Fresh View of Inhibition in the Retina

    NASA Astrophysics Data System (ADS)

    Srinivasan, M. V.; Laughlin, S. B.; Dubs, A.

    1982-11-01

    Interneurons exhibiting centre-surround antagonism within their receptive fields are commonly found in peripheral visual pathways. We propose that this organization enables the visual system to encode spatial detail in a manner that minimizes the deleterious effects of intrinsic noise, by exploiting the spatial correlation that exists within natural scenes. The antagonistic surround takes a weighted mean of the signals in neighbouring receptors to generate a statistical prediction of the signal at the centre. The predicted value is subtracted from the actual centre signal, thus minimizing the range of outputs transmitted by the centre. In this way the entire dynamic range of the interneuron can be devoted to encoding a small range of intensities, thus rendering fine detail detectable against intrinsic noise injected at later stages in processing. This predictive encoding scheme also reduces spatial redundancy, thereby enabling the array of interneurons to transmit a larger number of distinguishable images, taking into account the expected structure of the visual world. The profile of the required inhibitory field is derived from statistical estimation theory. This profile depends strongly upon the signal-to-noise ratio and weakly upon the extent of lateral spatial correlation. The receptive fields that are quantitatively predicted by the theory resemble those of X-type retinal ganglion cells and show that the inhibitory surround should become weaker and more diffuse at low intensities. The latter property is unequivocally demonstrated in the first-order interneurons of the fly's compound eye. The theory is extended to the time domain to account for the phasic responses of fly interneurons. These comparisons suggest that, in the early stages of processing, the visual system is concerned primarily with coding the visual image to protect against subsequent intrinsic noise, rather than with reconstructing the scene or extracting specific features from it. The treatment…
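
    The proposed operation, subtracting a weighted mean of neighbouring receptor signals from the centre signal, can be sketched in a few lines for a 1D receptor array. The surround weight profile below is an arbitrary example; the paper derives it from the signal-to-noise ratio by estimation theory.

        import numpy as np

        def predictive_code(receptors, surround_profile):
            """Centre-surround sketch on a 1D receptor array: transmit the
            centre signal minus a weighted mean of its neighbours (the
            statistical prediction), compressing the output range."""
            k = np.asarray(surround_profile, dtype=float)
            k = k / k.sum()                                   # weights form a mean
            prediction = np.convolve(receptors, k, mode='same')
            return receptors - prediction                     # small-range output signal

        signal = np.array([10, 10, 11, 30, 31, 30, 10, 10], dtype=float)
        out = predictive_code(signal, [1, 2, 0, 2, 1])        # zero weight at the centre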

  10. Shape-adaptive discrete wavelet transform for coding arbitrarily shaped texture

    NASA Astrophysics Data System (ADS)

    Li, Shipeng; Li, Weiping

    1997-01-01

    This paper presents a shape-adaptive discrete wavelet transform (SA-DWT) scheme for coding arbitrarily shaped texture. The proposed SA-DWT can be used for object-oriented image coding. The number of coefficients after SA-DWT is identical to the number of pels contained in the arbitrarily shaped image objects. The locality property of the wavelet transform and the self-similarity among subbands are well preserved throughout this process. For a rectangular region, the SA-DWT is identical to a standard wavelet transform. With SA-DWT, conventional wavelet-based coding schemes can be readily extended to the coding of arbitrarily shaped objects. The proposed shape-adaptive wavelet transform is not unitary, but the small energy increase is restricted to the boundary of objects in subbands. Two approaches to using the SA-DWT algorithm for object-oriented image and video coding are presented. One is to combine scalar SA-DWT with the embedded zerotree wavelet (EZW) coding technique; the other is an extension of the normal vector wavelet coding (VWC) technique to arbitrarily shaped objects. Results of applying SA-VWC to real arbitrarily shaped texture coding are also given at the end of this paper.
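
    A toy one-level Haar version conveys the key counting property for a single arbitrary-length segment (the actual SA-DWT handles 2D regions and longer filters): pairs produce one low-band and one high-band coefficient each, and an odd leftover sample is carried into the low band, so outputs equal inputs in number.

        import numpy as np

        def sa_haar_1d(segment):
            """One-level Haar analysis of an arbitrary-length segment; a
            sketch of the coefficient-counting property, not the paper's
            filters."""
            x = np.asarray(segment, dtype=float)
            pairs = len(x) // 2
            a = (x[0:2 * pairs:2] + x[1:2 * pairs:2]) / np.sqrt(2.0)   # low band
            d = (x[0:2 * pairs:2] - x[1:2 * pairs:2]) / np.sqrt(2.0)   # high band
            if len(x) % 2:
                a = np.append(a, x[-1])          # leftover sample joins the low band
            return a, d                          # len(a) + len(d) == len(x)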

  11. Unsupervised learning approach to adaptive differential pulse code modulation.

    PubMed

    Griswold, N C; Sayood, K

    1982-04-01

    This research is concerned with investigating the problem of data compression utilizing an unsupervised estimation algorithm. This extends previous work utilizing a hybrid source coder which combines an orthogonal transformation with differential pulse code modulation (DPCM). The data compression is achieved in the DPCM loop, and it is the quantizer of this scheme which is approached from an unsupervised learning procedure. The distribution defining the quantizer is represented as a set of separable Laplacian mixture densities for two-dimensional images. The condition of identifiability is shown for the Laplacian case, and decision-directed estimates of both the active distribution parameters and the mixing parameters are discussed within a Bayesian structure. The decision-directed estimators, although not optimum, provide a realizable structure for estimating the parameters that define a distribution which has become active. These parameters are then used to scale the optimum (in the mean square error sense) Laplacian quantizer. The decision criterion is modified to prevent convergence to a single distribution, which in effect is the default condition for a variance estimator. This investigation was applied to a test image, and the resulting data demonstrate improvement over other techniques using fixed bit assignments and ideal channel conditions.
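
    A much-simplified DPCM loop in the same spirit, with a decision-directed variance estimate scaling the quantizer step; the first-order predictor and all constants are illustrative, not the paper's mixture-density estimator.

        import numpy as np

        def adaptive_dpcm(samples, a=0.95, step0=1.0, beta=0.1):
            """Toy DPCM loop with a variance-adaptive quantizer."""
            pred, var = 0.0, 1.0
            codes, recon = [], []
            for s in samples:
                e = s - pred
                step = step0 * np.sqrt(var)        # scale quantizer to estimated spread
                q = int(np.round(e / step))        # index that would be transmitted
                e_hat = q * step                   # dequantized error (decoder-visible)
                r = pred + e_hat                   # reconstruction
                var = (1 - beta) * var + beta * e_hat ** 2   # decision-directed update
                codes.append(q)
                recon.append(r)
                pred = a * r                       # predictor uses reconstructed value
            return codes, recon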

  12. Optimal coding of vectorcardiographic sequences using spatial prediction.

    PubMed

    Augustyniak, Piotr

    2007-05-01

    This paper discusses principles, implementation details, and advantages of a sequence coding algorithm applied to the compression of vectorcardiograms (VCG). The main novelty of the proposed method is the automatic management of distortion distribution controlled by the local signal contents in both technical and medical aspects. As in clinical practice, the VCG loops representing P, QRS, and T waves in the three-dimensional (3-D) space are considered here as three simultaneous sequences of objects. Because of the similarity of neighboring loops, encoding the values of prediction error significantly reduces the data set volume. The residual values are de-correlated with the discrete cosine transform (DCT) and truncated at a certain energy threshold. The presented method is based on the irregular temporal distribution of medical data in the signal and takes advantage of variable sampling frequency for automatically detected VCG loops. The features of the proposed algorithm are confirmed by the results of the numerical experiment carried out for a wide range of real records. The average data reduction ratio reaches a value of 8.15 while the percent root-mean-square difference (PRD) distortion ratio for the most important sections of signal does not exceed 1.1%.
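
    The predict-then-decorrelate-then-truncate pipeline can be sketched per loop as below; the energy threshold, the 1D treatment, and the interface are illustrative assumptions, not the paper's exact parameters.

        import numpy as np
        from scipy.fft import dct, idct

        def encode_loop(loop, ref_loop, keep_energy=0.99):
            """Per-loop sketch: predict the current VCG loop from the previous
            one, decorrelate the residual with a DCT, and keep the leading
            coefficients holding a fixed fraction of the residual energy."""
            r = np.asarray(loop, float) - np.asarray(ref_loop, float)   # prediction error
            c = dct(r, norm='ortho')
            frac = np.cumsum(c ** 2) / np.sum(c ** 2)
            k = int(np.searchsorted(frac, keep_energy)) + 1             # coefficients kept
            c[k:] = 0.0
            return c[:k], idct(c, norm='ortho') + ref_loop              # coded data, reconstruction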

  13. Modified linear predictive coding approach for moving target tracking by Doppler radar

    NASA Astrophysics Data System (ADS)

    Ding, Yipeng; Lin, Xiaoyi; Sun, Ke-Hui; Xu, Xue-Mei; Liu, Xi-Yao

    2016-07-01

    Doppler radar is a cost-effective tool for moving target tracking, which can support a large range of civilian and military applications. A modified linear predictive coding (LPC) approach is proposed to increase the target localization accuracy of the Doppler radar. Based on the time-frequency analysis of the received echo, the proposed approach first estimates the noise statistics in real time and constructs an adaptive filter to suppress the noise interference. Then, a linear predictive model is applied to extend the available data, which can help improve the resolution of the target localization result. Compared with the traditional LPC method, which decides the extension data length empirically, the proposed approach develops an error array to evaluate the prediction accuracy and thus adjusts the extension data length adaptively toward its optimum. Finally, the prediction error array is superimposed on the predictor output to correct the prediction error. A series of experiments are conducted to illustrate the validity and performance of the proposed techniques.
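
    A hedged sketch of the data-extension step: AR coefficients are fitted to the available echo samples by least squares and then used to extrapolate the record. The paper's adaptive choice of the extension length and its error-array correction are omitted here.

        import numpy as np

        def lpc_extend(x, order=8, n_extend=64):
            """Fit x[t] ~ sum_k a[k] * x[t-k] by least squares, then
            extrapolate n_extend samples past the end of the record.
            Order and extension length are illustrative defaults."""
            x = np.asarray(x, dtype=float)
            rows = [x[t - order:t][::-1] for t in range(order, len(x))]
            a, *_ = np.linalg.lstsq(np.array(rows), x[order:], rcond=None)
            out = list(x)
            for _ in range(n_extend):             # recursive extrapolation
                out.append(a @ np.array(out[-order:][::-1]))
            return np.array(out)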

  14. Context-adaptive binary arithmetic coding with precise probability estimation and complexity scalability for high-efficiency video coding

    NASA Astrophysics Data System (ADS)

    Karwowski, Damian; Domański, Marek

    2016-01-01

    An improved context-based adaptive binary arithmetic coding (CABAC) scheme is presented. The idea of the improvement is to use a more accurate mechanism for estimating symbol probabilities in the standard CABAC algorithm. The authors' proposal for such a mechanism is based on the context-tree weighting technique. In the framework of a high-efficiency video coding (HEVC) video encoder, the improved CABAC allows bitrate savings of 0.7% to 4.5% compared to the original CABAC algorithm. The application of the proposed algorithm marginally affects the complexity of the HEVC video encoder, but the complexity of the video decoder increases by 32% to 38%. In order to decrease the complexity of video decoding, a new tool has been proposed for the improved CABAC that enables scaling of the decoder complexity. Experiments show that this tool gives a 5% to 7.5% reduction of the decoding time while still maintaining high data compression efficiency.
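
    CABAC itself uses finite-state probability tables, and the paper replaces them with context-tree weighting; the toy count-based model below only illustrates what a per-context adaptive probability estimate feeding a binary arithmetic coder looks like.

        def adaptive_bit_probability():
            """Toy per-context model: estimate P(bit = 1) from Laplace-smoothed
            running counts; far cruder than CABAC's state machine or the
            paper's context-tree weighting."""
            c0, c1 = 1, 1
            def observe(bit):
                nonlocal c0, c1
                p1 = c1 / (c0 + c1)          # estimate handed to the arithmetic coder
                if bit:
                    c1 += 1
                else:
                    c0 += 1
                return p1
            return observe

        model = adaptive_bit_probability()   # one instance per context
        probs = [model(b) for b in [1, 1, 0, 1, 1, 1]]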

  15. Predicted functional RNAs within coding regions constrain evolutionary rates of yeast proteins.

    PubMed

    Warden, Charles D; Kim, Seong-Ho; Yi, Soojin V

    2008-02-13

    Functional RNAs (fRNAs) are being recognized as an important regulatory component in biological processes. Interestingly, recent computational studies suggest that the number and biological significance of functional RNAs within coding regions (coding fRNAs) may have been underestimated. We hypothesized that such coding fRNAs will impose additional constraint on sequence evolution because the DNA primary sequence has to simultaneously code for functional RNA secondary structures on the messenger RNA in addition to the amino acid codons for the protein sequence. To test this prediction, we first utilized computational methods to predict conserved fRNA secondary structures within multiple species alignments of Saccharomyces sensu stricto genomes. We predict that as much as 5% of the genes in the yeast genome contain at least one functional RNA secondary structure within their protein-coding region. We then analyzed the impact of coding fRNAs on the evolutionary rate of protein-coding genes because a decrease in evolutionary rate implies constraint due to biological functionality. We found that our predicted coding fRNAs have a significant influence on evolutionary rates (especially at synonymous sites), independent of other functional measures. Thus, coding fRNAs may play a role in sequence evolution. Given that coding regions of humans and flies contain many more predicted coding fRNAs than yeast, the impact of coding fRNAs on sequence evolution may be substantial in genomes of higher eukaryotes.

  16. 30 Mbit/s codec for the NTSC color TV signal using an interfield-intrafield adaptive prediction

    NASA Astrophysics Data System (ADS)

    Yamamoto, H.; Hatori, Y.; Murakami, H.

    1981-12-01

    This paper proposes a new approach to the composite coding of the NTSC color TV signal, i.e., an interfield-intrafield adaptive prediction. First, concerning prediction efficiency for various moving pictures, an advantage of this coding scheme over interframe coding is clarified theoretically and experimentally. This adaptive prediction gives very good and stable performance for pictures ranging from still to violently moving. A 30 Mbit/s codec based on this idea, and its performance, are presented. Field transmission testing through an Intelsat satellite using this codec is also described. The picture quality is satisfactory for practically all the pictures expected in broadcast TV programs, and it is subjectively estimated to be a little better than that of the half-transponder FM transmission now employed in the Intelsat system.
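
    The mode decision at the heart of such adaptive prediction can be sketched as picking, per block, whichever candidate predictor leaves the smaller residual energy; the two candidates and the block interface here are simplified stand-ins for the codec's actual NTSC predictors.

        import numpy as np

        def adaptive_predict(block, interfield_pred, intrafield_pred):
            """Per-block predictor selection sketch: keep the candidate
            prediction with the smaller residual energy."""
            candidates = {'interfield': interfield_pred,   # favoured for still areas
                          'intrafield': intrafield_pred}   # favoured under violent motion
            errs = {m: float(np.sum((block - p) ** 2)) for m, p in candidates.items()}
            mode = min(errs, key=errs.get)
            return mode, block - candidates[mode]          # mode flag + residual to code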

  17. Analytic solution to verify code predictions of two-phase flow in a boiling water reactor core channel. [CONDOR code

    SciTech Connect

    Chen, K.F.; Olson, C.A.

    1983-09-01

    One reliable method that can be used to verify the solution scheme of a computer code is to compare the code prediction to a simplified problem for which an analytic solution can be derived. An analytic solution for the axial pressure drop as a function of the flow was obtained for the simplified problem of homogeneous equilibrium two-phase flow in a vertical, heated channel with a cosine axial heat flux shape. This analytic solution was then used to verify the predictions of the CONDOR computer code, which is used to evaluate the thermal-hydraulic performance of boiling water reactors. The results show excellent agreement between the analytic solution and the CONDOR prediction.

  18. Individual differences in adaptive coding of face identity are linked to individual differences in face recognition ability.

    PubMed

    Rhodes, Gillian; Jeffery, Linda; Taylor, Libby; Hayward, William G; Ewing, Louise

    2014-06-01

    Despite their similarity as visual patterns, we can discriminate and recognize many thousands of faces. This expertise has been linked to 2 coding mechanisms: holistic integration of information across the face and adaptive coding of face identity using norms tuned by experience. Recently, individual differences in face recognition ability have been discovered and linked to differences in holistic coding. Here we show that they are also linked to individual differences in adaptive coding of face identity, measured using face identity aftereffects. Identity aftereffects correlated significantly with several measures of face-selective recognition ability. They also correlated marginally with own-race face recognition ability, suggesting a role for adaptive coding in the well-known other-race effect. More generally, these results highlight the important functional role of adaptive face-coding mechanisms in face expertise, taking us beyond the traditional focus on holistic coding mechanisms.

  19. Predicting foreign-accent adaptation in older adults.

    PubMed

    Janse, Esther; Adank, Patti

    2012-01-01

    We investigated comprehension of and adaptation to speech in an unfamiliar accent in older adults. Participants performed a speeded sentence verification task for accented sentences: one group upon auditory-only presentation, and the other group upon audiovisual presentation. Our questions were whether audiovisual presentation would facilitate adaptation to the novel accent, and which cognitive and linguistic measures would predict adaptation. Participants were therefore tested on a range of background tests: hearing acuity, auditory verbal short-term memory, working memory, attention-switching control, selective attention, and vocabulary knowledge. Both auditory-only and audiovisual groups showed improved accuracy and decreasing response times over the course of the experiment, effectively showing accent adaptation. Even though the total amount of improvement was similar for the auditory-only and audiovisual groups, initial rate of adaptation was faster in the audiovisual group. Hearing sensitivity and short-term and working memory measures were associated with efficient processing of the novel accent. Analysis of the relationship between accent comprehension and the background tests revealed furthermore that selective attention and vocabulary size predicted the amount of adaptation over the course of the experiment. These results suggest that vocabulary knowledge and attentional abilities facilitate the attention-shifting strategies proposed to be required for perceptual learning.

  20. QoS-Aware Error Recovery in Wireless Body Sensor Networks Using Adaptive Network Coding

    PubMed Central

    Razzaque, Mohammad Abdur; Javadi, Saeideh S.; Coulibaly, Yahaya; Hira, Muta Tah

    2015-01-01

    Wireless body sensor networks (WBSNs) for healthcare and medical applications are real-time and life-critical infrastructures, which require a strict guarantee of quality of service (QoS), in terms of latency, error rate and reliability. Considering the criticality of healthcare and medical applications, WBSNs need to fulfill users/applications and the corresponding network's QoS requirements. For instance, for a real-time application to support on-time data delivery, a WBSN needs to guarantee a constrained delay at the network level. A network coding-based error recovery mechanism is an emerging mechanism that can be used in these systems to support QoS at very low energy, memory and hardware cost. However, in dynamic network environments and user requirements, the original non-adaptive version of network coding fails to support some of the network and user QoS requirements. This work explores the QoS requirements of WBSNs in both perspectives of QoS. Based on these requirements, this paper proposes an adaptive network coding-based, QoS-aware error recovery mechanism for WBSNs. It utilizes network-level and user-/application-level information to make it adaptive in both contexts. Thus, it provides improved QoS support adaptively in terms of reliability, energy efficiency and delay. Simulation results show the potential of the proposed mechanism in terms of adaptability, reliability, real-time data delivery and network lifetime compared to its counterparts. PMID:25551485

  2. Predicting Adaptive Behavior from the Bayley Scales of Infant Development.

    ERIC Educational Resources Information Center

    Hotard, Stephen; McWhirter, Richard

    To examine the proportion of variance in adaptive functioning predictable from mental ability, chronological age, I.Q., evidence of brain malfunction, seizure medication, and receptive and expressive language scores, 25 severely and profoundly retarded institutionalized persons (2-19 years old) were administered the Bayley Infant Scale Mental…

  3. The predictive roles of neural oscillations in speech motor adaptability.

    PubMed

    Sengupta, Ranit; Nasir, Sazzad M

    2016-06-01

    The human speech system exhibits a remarkable flexibility by adapting to alterations in speaking environments. While it is believed that speech motor adaptation under altered sensory feedback involves rapid reorganization of speech motor networks, the mechanisms by which different brain regions communicate and coordinate their activity to mediate adaptation remain unknown, and explanations of outcome differences in adaptation remain largely elusive. In this study, under the paradigm of altered auditory feedback with continuous EEG recordings, the differential roles of oscillatory neural processes in motor speech adaptability were investigated. The predictive capacities of different EEG frequency bands were assessed, and it was found that theta-, beta-, and gamma-band activities during speech planning and production contained significant and reliable information about motor speech adaptability. It was further observed that these bands do not work independently but interact with each other, suggesting an underlying brain network operating across hierarchically organized frequency bands to support motor speech adaptation. These results provide novel insights into both learning and disorders of speech using time-frequency analysis of neural oscillations.

  4. Dynamic Forces in Spur Gears - Measurement, Prediction, and Code Validation

    NASA Technical Reports Server (NTRS)

    Oswald, Fred B.; Townsend, Dennis P.; Rebbechi, Brian; Lin, Hsiang Hsi

    1996-01-01

    Measured and computed values for dynamic loads in spur gears were compared to validate a new version of the NASA gear dynamics code DANST-PC. Strain gage data from six gear sets with different tooth profiles were processed to determine the dynamic forces acting between the gear teeth. Results demonstrate that the analysis code successfully simulates the dynamic behavior of the gears. Differences between analysis and experiment were less than 10 percent under most conditions.

  5. A Neural Mechanism for Time-Window Separation Resolves Ambiguity of Adaptive Coding

    PubMed Central

    Hildebrandt, K. Jannis; Ronacher, Bernhard; Hennig, R. Matthias; Benda, Jan

    2015-01-01

    The senses of animals are confronted with changing environments and different contexts. Neural adaptation is one important tool to adjust sensitivity to varying intensity ranges. For instance, in a quiet night outdoors, our hearing is more sensitive than when we are confronted with the plurality of sounds in a large city during the day. However, adaptation also removes available information on absolute sound levels and may thus cause ambiguity. Experimental data on the trade-off between benefits and loss through adaptation is scarce and very few mechanisms have been proposed to resolve it. We present an example where adaptation is beneficial for one task—namely, the reliable encoding of the pattern of an acoustic signal—but detrimental for another—the localization of the same acoustic stimulus. With a combination of neurophysiological data, modeling, and behavioral tests, we show that adaptation in the periphery of the auditory pathway of grasshoppers enables intensity-invariant coding of amplitude modulations, but at the same time, degrades information available for sound localization. We demonstrate how focusing the response of localization neurons to the onset of relevant signals separates processing of localization and pattern information temporally. In this way, the ambiguity of adaptive coding can be circumvented and both absolute and relative levels can be processed using the same set of peripheral neurons. PMID:25761097

  6. Application of Avco data analysis and prediction techniques (ADAPT) to prediction of sunspot activity

    NASA Technical Reports Server (NTRS)

    Hunter, H. E.; Amato, R. A.

    1972-01-01

    The results are presented of the application of Avco Data Analysis and Prediction Techniques (ADAPT) to the derivation of new algorithms for the prediction of future sunspot activity. The ADAPT-derived algorithms show a factor of 2 to 3 reduction in the expected 2-sigma errors in the estimates of the 81-day running average of the Zurich sunspot numbers. The report presents: (1) the best estimates for sunspot cycles 20 and 21, (2) a comparison of the ADAPT performance with conventional techniques, and (3) specific approaches to further reduction in the errors of estimated sunspot activity and to recovery of earlier sunspot historical data. The ADAPT programs are used both to derive regression algorithms for prediction of the entire 11-year sunspot cycle from the preceding two cycles and to derive extrapolation algorithms for extrapolating a given sunspot cycle based on any available portion of the cycle.

  7. Development of a shock noise prediction code for high-speed helicopters - The subsonically moving shock

    NASA Technical Reports Server (NTRS)

    Tadghighi, H.; Holz, R.; Farassat, F.; Lee, Yung-Jang

    1991-01-01

    A previously defined airfoil subsonic shock-noise prediction formula whose result depends on a mapping of the time-dependent shock surface to a time-independent computational domain is presently coded and incorporated in the NASA-Langley rotor-noise prediction code, WOPWOP. The structure and algorithms used in the shock-noise prediction code are presented; special care has been taken to reduce computation time while maintaining accuracy. Numerical examples of shock-noise prediction are presented for hover and forward flight. It is confirmed that shock noise is an important component of the quadrupole source.

  8. Adaptation of the Advanced Spray Combustion Code to Cavitating Flow Problems

    NASA Technical Reports Server (NTRS)

    Liang, Pak-Yan

    1993-01-01

    A very important consideration in turbopump design is the prediction and prevention of cavitation. Thus far conventional CFD codes have not been generally applicable to the treatment of cavitating flows. Taking advantage of its two-phase capability, the Advanced Spray Combustion Code is being modified to handle flows with transient as well as steady-state cavitation bubbles. The volume-of-fluid approach incorporated into the code is extended and augmented with a liquid phase energy equation and a simple evaporation model. The strategy adopted also successfully deals with the cavity closure issue. Simple test cases will be presented and remaining technical challenges will be discussed.

  9. The development and application of the self-adaptive grid code, SAGE

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.

    1993-01-01

    The multidimensional self-adaptive grid code, SAGE, has proven to be a flexible and useful tool in the solution of complex flow problems. Both 2- and 3-D examples given in this report show the code to be reliable and to substantially improve flowfield solutions. Since the adaptive procedure is a marching scheme, the code is extremely fast and uses insignificant CPU time compared to the corresponding flow solver. The SAGE program is also machine and flow solver independent. Significant effort was made to simplify user interaction, though some parameters still need to be chosen with care. It is also difficult to tell when the adaption process has provided its best possible solution. This is particularly true if no experimental data are available or if there is a lack of theoretical understanding of the flow. Another difficulty occurs if local features are important but missing in the original grid; the adaption to this solution will not result in any improvement, and only grid refinement can result in an improved solution. These are complex issues that need to be explored within the context of each specific problem.

  10. GAMER: A GRAPHIC PROCESSING UNIT ACCELERATED ADAPTIVE-MESH-REFINEMENT CODE FOR ASTROPHYSICS

    SciTech Connect

    Schive, H.-Y.; Tsai, Y.-C.; Chiueh Tzihong

    2010-02-01

    We present the newly developed code, GPU-accelerated Adaptive-MEsh-Refinement code (GAMER), which adopts a novel approach in improving the performance of adaptive-mesh-refinement (AMR) astrophysical simulations by a large factor with the use of the graphic processing unit (GPU). The AMR implementation is based on a hierarchy of grid patches with an oct-tree data structure. We adopt a three-dimensional relaxing total variation diminishing scheme for the hydrodynamic solver and a multi-level relaxation scheme for the Poisson solver. Both solvers have been implemented in GPU, by which hundreds of patches can be advanced in parallel. The computational overhead associated with the data transfer between the CPU and GPU is carefully reduced by utilizing the capability of asynchronous memory copies in GPU, and the computing time of the ghost-zone values for each patch is diminished by overlapping it with the GPU computations. We demonstrate the accuracy of the code by performing several standard test problems in astrophysics. GAMER is a parallel code that can be run in a multi-GPU cluster system. We measure the performance of the code by performing purely baryonic cosmological simulations in different hardware implementations, in which detailed timing analyses provide comparison between the computations with and without GPU(s) acceleration. Maximum speed-up factors of 12.19 and 10.47 are demonstrated using one GPU with 4096³ effective resolution and 16 GPUs with 8192³ effective resolution, respectively.

  11. Adaptive bandwidth measurements of importance functions for speech intelligibility prediction.

    PubMed

    Whitmal, Nathaniel A; DeRoy, Kristina

    2011-12-01

    The Articulation Index (AI) and Speech Intelligibility Index (SII) predict intelligibility scores from measurements of speech and hearing parameters. One component in the prediction is the "importance function," a weighting function that characterizes contributions of particular spectral regions of speech to speech intelligibility. Previous work with SII predictions for hearing-impaired subjects suggests that prediction accuracy might improve if importance functions for individual subjects were available. Unfortunately, previous importance function measurements have required extensive intelligibility testing with groups of subjects, using speech processed by various fixed-bandwidth low-pass and high-pass filters. A more efficient approach appropriate to individual subjects is desired. The purpose of this study was to evaluate the feasibility of measuring importance functions for individual subjects with adaptive-bandwidth filters. In two experiments, ten subjects with normal hearing listened to vowel-consonant-vowel (VCV) nonsense words processed by low-pass and high-pass filters whose bandwidths were varied adaptively to produce specified performance levels in accordance with the transformed up-down rules of Levitt [(1971). J. Acoust. Soc. Am. 49, 467-477]. Local linear psychometric functions were fit to the resulting data and used to generate an importance function for VCV words. Results indicate that the adaptive method is reliable and efficient, and produces importance function data consistent with that of the corresponding AI/SII importance function.
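
    The transformed up-down rule referenced above is easy to state in code; a 2-down/1-up track (converging near 70.7% correct) on filter bandwidth might look like the following, where respond(bw) is a hypothetical callable returning whether the listener identified the word correctly.

        def two_down_one_up(bw_start, step, n_trials, respond):
            """Levitt-style 2-down/1-up staircase on filter bandwidth."""
            bw, run, track = bw_start, 0, []
            for _ in range(n_trials):
                track.append(bw)
                if respond(bw):
                    run += 1
                    if run == 2:              # two consecutive correct: harder (narrower)
                        bw, run = bw - step, 0
                else:                         # any error: easier (wider)
                    bw, run = bw + step, 0
            return track                      # reversals of this track estimate threshold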

  12. Optical image compression based on adaptive directional prediction discrete wavelet transform

    NASA Astrophysics Data System (ADS)

    Zhang, Libao; Qiu, Bingchang

    2013-11-01

    The traditional lifting wavelet transform cannot effectively reconstruct the nonhorizontal and nonvertical high-frequency information of an image. In this paper, we present a new image compression method based on adaptive directional prediction discrete wavelet transform (ADP-DWT). We first design a directional prediction model to obtain the optimal transform direction of the lifting wavelet. Then, we execute the directional lifting transform along the optimal transform direction. The edge and texture energy can be reduced in the nonhorizontal and nonvertical directions of the high-frequency sub-bands. Finally, the wavelet coefficients are coded with the set partitioning in hierarchical trees (SPIHT) algorithm. The new method holds the advantages of both adaptive directional lifting (ADL) and direction-adaptive discrete wavelet transform (DA-DWT), and its computational complexity is far lower than that of these methods. For images containing regular and fine textures or edges, the coding performance of ADP-DWT is better than that of ADL and DA-DWT.

  13. Adaptive software-defined coded modulation for ultra-high-speed optical transport

    NASA Astrophysics Data System (ADS)

    Djordjevic, Ivan B.; Zhang, Yequn

    2013-10-01

    In optically-routed networks, different wavelength channels carrying the traffic to different destinations can have quite different optical signal-to-noise ratios (OSNRs), and the signal is differently impacted by various channel impairments. Regardless of the data destination, an optical transport system (OTS) must provide the target bit-error rate (BER) performance. To provide the target BER regardless of the data destination, we adjust the forward error correction (FEC) strength. Depending on the information obtained from the monitoring channels, we select the appropriate code rate matching the OSNR range into which the current channel OSNR falls. To avoid frame synchronization issues, we keep the codeword length fixed, independent of the FEC code being employed. The common denominator is the employment of quasi-cyclic (QC-) LDPC codes in FEC. For high-speed implementation, low-complexity LDPC decoding algorithms are needed, and some of them are described in this invited paper. Instead of conventional QAM-based modulation schemes, we employ signal constellations obtained by the optimum signal constellation design (OSCD) algorithm. To improve the spectral efficiency, we perform simultaneous rate adaptation and signal constellation size selection so that the product of the number of bits per symbol and the code rate is closest to the channel capacity. Further, we describe the advantages of using 4D signaling instead of polarization-division multiplexed (PDM) QAM, by using 4D MAP detection combined with LDPC coding in a turbo equalization fashion. Finally, to address the limited bandwidth of the information infrastructure, high energy consumption, and the heterogeneity of optical networks, we describe an adaptive energy-efficient hybrid coded-modulation scheme, which in addition to amplitude, phase, and polarization state employs spatial modes as additional basis functions for multidimensional coded modulation.
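
    The rate-adaptation step can be illustrated with a small lookup: given a monitored OSNR, choose the (constellation size, code rate) pair with the largest product of bits per symbol and code rate that the channel still supports. The threshold table below is invented for illustration, not measured.

        # Hypothetical operating table: (min OSNR in dB, bits/symbol, LDPC code rate).
        MODES = [(12.0, 2, 0.8), (16.0, 4, 0.8), (19.0, 4, 0.9), (23.0, 6, 0.9)]

        def select_mode(osnr_db):
            """Pick the feasible mode maximizing bits/symbol x code rate;
            fall back to the most protected mode if none qualifies."""
            feasible = [m for m in MODES if osnr_db >= m[0]]
            if not feasible:
                return MODES[0][1], MODES[0][2]
            best = max(feasible, key=lambda m: m[1] * m[2])
            return best[1], best[2]               # bits per symbol, FEC code rate

        bits, rate = select_mode(17.5)            # -> (4, 0.8) with this toy table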

  14. GERMINAL — A computer code for predicting fuel pin behaviour

    NASA Astrophysics Data System (ADS)

    Melis, J. C.; Roche, L.; Piron, J. P.; Truffert, J.

    1992-06-01

    In the framework of R and D on FBR fuels, CEA/DEC is developing the computer code GERMINAL to study fuel pin thermal-mechanical behaviour under steady-state and incidental conditions. The development of GERMINAL is foreseen in two steps: (1) the GERMINAL 1 code, designed as a "workhorse" for immediate applications. Version 1 of GERMINAL 1 is presently delivered, fully documented, with a physical qualification guaranteed up to 8 at%. (2) Version 2 of GERMINAL 1, which, in addition to what is presently treated in GERMINAL 1, includes the treatment of high-burnup effects on fission gas release and the fuel-clad joint. This version, GERMINAL 1.2, is presently under testing and will be completed by the end of 1991. The GERMINAL 2 code, designed as a reference code for future applications, will cover all the aspects of GERMINAL 1 (including high-burnup effects) with a more general mechanical treatment and a completely revised and modernized software structure.

  15. A Grid Sourcing and Adaptation Study Using Unstructured Grids for Supersonic Boom Prediction

    NASA Technical Reports Server (NTRS)

    Carter, Melissa B.; Deere, Karen A.

    2008-01-01

    NASA created the Supersonics Project as part of the NASA Fundamental Aeronautics Program to advance technology that will make supersonic flight over land viable. Computational flow solvers have lacked the ability to accurately predict sonic boom from the near field to the far field. The focus of this investigation was to establish gridding and adaptation techniques to predict near-to-mid-field (<10 body lengths below the aircraft) boom signatures at supersonic speeds using the USM3D unstructured grid flow solver. The study began by examining sources along the body of the aircraft, far-field sourcing, and far-field boundaries. The study then examined several techniques for grid adaptation. During the course of the study, volume sourcing was introduced as a new way to source grids using the grid generation code VGRID. Two different methods of using the volume sources were examined. The first method, based on manual insertion of the numerous volume sources, made great improvements in the prediction capability of USM3D for boom signatures. The second method (SSGRID), which uses an a priori adaptation approach to stretch and shear the original unstructured grid to align the grid with the pressure waves, showed similar results with a more automated approach. Due to SSGRID's results and ease of use, the rest of the study focused on developing a best practice using SSGRID. The best practice created by this study for boom predictions using the CFD code USM3D involved: 1) creating a small cylindrical outer boundary either 1 or 2 body lengths in diameter (depending on how far below the aircraft the boom prediction is required), 2) using a single volume source under the aircraft, and 3) using SSGRID to stretch and shear the grid to the desired length.

  16. An edge-based solution-adaptive method applied to the AIRPLANE code

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Thomas, Scott D.; Cliff, Susan E.

    1995-01-01

    Computational methods to solve large-scale realistic problems in fluid flow can be made more efficient and cost effective by using them in conjunction with dynamic mesh adaption procedures that perform simultaneous coarsening and refinement to capture flow features of interest. This work couples the tetrahedral mesh adaption scheme, 3D_TAG, with the AIRPLANE code to solve complete aircraft configuration problems in transonic and supersonic flow regimes. Results indicate that the near-field sonic boom pressure signature of a cone-cylinder is improved, the oblique and normal shocks are better resolved on a transonic wing, and the bow shock ahead of an unstarted inlet is better defined.

  18. Adaptive Data-based Predictive Control for Short Take-off and Landing (STOL) Aircraft

    NASA Technical Reports Server (NTRS)

    Barlow, Jonathan Spencer; Acosta, Diana Michelle; Phan, Minh Q.

    2010-01-01

    Data-based Predictive Control is an emerging control method that stems from Model Predictive Control (MPC). MPC computes current control action based on a prediction of the system output a number of time steps into the future and is generally derived from a known model of the system. Data-based predictive control has the advantage of deriving predictive models and controller gains from input-output data. Thus, a controller can be designed from the outputs of complex simulation code or a physical system where no explicit model exists. If the output data happens to be corrupted by periodic disturbances, the designed controller will also have the built-in ability to reject these disturbances without the need to know them. When data-based predictive control is implemented online, it becomes a version of adaptive control. The characteristics of adaptive data-based predictive control are particularly appropriate for the control of nonlinear and time-varying systems, such as Short Take-off and Landing (STOL) aircraft. STOL is a capability of interest to NASA because conceptual Cruise Efficient Short Take-off and Landing (CESTOL) transport aircraft offer the ability to reduce congestion in the terminal area by utilizing existing shorter runways at airports, as well as to lower community noise by flying steep approach and climb-out patterns that reduce the noise footprint of the aircraft. In this study, adaptive data-based predictive control is implemented as an integrated flight-propulsion controller for the outer-loop control of a CESTOL-type aircraft. Results show that the controller successfully tracks velocity while attempting to maintain a constant flight path angle, using longitudinal command, thrust and flap setting as the control inputs.
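
    The data-based starting point can be sketched as identifying an ARX predictor directly from input-output records; refitting it recursively online would give the adaptive variant described above. The model orders and interface are illustrative.

        import numpy as np

        def fit_arx(u, y, na=2, nb=2):
            """Identify an ARX one-step predictor
            y[t] ~ sum_i a_i*y[t-i] + sum_j b_j*u[t-j] from input/output
            data, the kind of data-derived model a data-based predictive
            controller uses in place of an explicit plant model."""
            u, y = np.asarray(u, float), np.asarray(y, float)
            start = max(na, nb)
            rows = [np.concatenate([y[t - na:t][::-1], u[t - nb:t][::-1]])
                    for t in range(start, len(y))]
            theta, *_ = np.linalg.lstsq(np.array(rows), y[start:], rcond=None)
            return theta        # predictor (and hence controller gains) derive from theta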

  19. Towards feasible and effective predictive wavefront control for adaptive optics

    SciTech Connect

    Poyneer, L A; Veran, J

    2008-06-04

    We have recently proposed Predictive Fourier Control, a computationally efficient and adaptive algorithm for predictive wavefront control that assumes frozen-flow turbulence. We summarize refinements to the state-space model that allow operation with arbitrary computational delays and reduce the computational cost of solving for new control. We present initial atmospheric characterization using observations with Gemini North's Altair AO system. These observations, taken over 1 year, indicate that frozen flow exists, contains substantial power, and is strongly detected 94% of the time.

  20. Assessment of 3D Codes for Predicting Liner Attenuation in Flow Ducts

    NASA Technical Reports Server (NTRS)

    Watson, W. R.; Nark, D. M.; Jones, M. G.

    2008-01-01

    This paper presents comparisons of seven propagation codes for predicting liner attenuation in ducts with flow. The selected codes span the spectrum of methods available (finite element, parabolic approximation, and pseudo-time domain) and are collectively representative of the state-of-the-art in the liner industry. These codes are included because they have two-dimensional and three-dimensional versions and can be exported to NASA's Columbia Supercomputer. The basic assumptions, governing differential equations, boundary conditions, and numerical methods underlying each code are briefly reviewed and an assessment is performed based on two predefined metrics. The two metrics used in the assessment are the accuracy of the predicted attenuation and the amount of wall clock time to predict the attenuation. The assessment is performed over a range of frequencies, mean flow rates, and grazing flow liner impedances commonly used in the liner industry. The primary conclusions of the study are (1) predicted attenuations are in good agreement for rigid wall ducts, (2) the majority of codes compare well to each other and to approximate results from mode theory for soft wall ducts, (3) most codes compare well to measured data on a statistical basis, (4) only the finite element codes with cubic Hermite polynomials capture extremely large attenuations, and (5) wall clock time increases by an order of magnitude or more are observed for a three-dimensional code relative to the corresponding two-dimensional version of the same code.

  1. FLAG: A multi-dimensional adaptive free-Lagrange code for fully unstructured grids

    SciTech Connect

    Burton, D.E.; Miller, D.S.; Palmer, T.

    1995-07-01

    The authors describe FLAG, a 3D adaptive free-Lagrange method for unstructured grids. The grid elements are 3D polygons, which move with the flow and are refined or reconnected as necessary to achieve uniform accuracy. The authors stressed that they were able to construct a 3D hydro version of this code in 3 months, using an object-oriented FORTRAN approach.

  2. Adapting a Navier-Stokes code to the ICL-DAP

    NASA Technical Reports Server (NTRS)

    Grosch, C. E.

    1985-01-01

    The results are reported of an experiment to adapt a Navier-Stokes code, originally developed on a serial computer, to concurrent processing on the ICL Distributed Array Processor (DAP). The algorithm used in solving the Navier-Stokes equations is briefly described. The architecture of the DAP and DAP FORTRAN are also described. The modifications of the algorithm to fit the DAP are given and discussed. Finally, performance results are given and conclusions are drawn.

  3. A predictive transport modeling code for ICRF-heated tokamaks

    SciTech Connect

    Phillips, C.K.; Hwang, D.Q. . Plasma Physics Lab.); Houlberg, W.; Attenberger, S.; Tolliver, J.; Hively, L. )

    1992-02-01

    In this report, a detailed description of the physics included in the WHIST/RAZE package, as well as a few illustrative examples of the capabilities of the package, will be presented. An in-depth analysis of ICRF heating experiments using WHIST/RAZE will be discussed in a forthcoming report. A general overview of the philosophy behind the structure of the WHIST/RAZE package, a summary of the features of the WHIST code, and a description of the interface to the RAZE subroutines are presented in section 2 of this report. Details of the physics contained in the RAZE code are examined in section 3. Sample results from the package follow in section 4, with concluding remarks and a discussion of possible improvements to the package in section 5.

  4. Curved Duct Noise Prediction Using the Fast Scattering Code

    NASA Technical Reports Server (NTRS)

    Dunn, M. H.; Tinetti, Ana F.; Farassat, F.

    2007-01-01

    Results of a study to validate the Fast Scattering Code (FSC) as a duct noise predictor, including the effects of curvature, finite impedance on the walls, and uniform background flow, are presented in this paper. Infinite duct theory was used to generate the modal content of the sound propagating within the duct. Liner effects were incorporated via a sound absorbing boundary condition on the scattering surfaces. Simulations for a rectangular duct of constant cross-sectional area have been compared to analytical solutions and experimental data. Comparisons with analytical results indicate that the code can properly calculate a given dominant mode for hardwall surfaces. Simulated acoustic behavior in the presence of lined walls (using hardwall duct modes as incident sound) is consistent with expected trends. Duct curvature was found to enhance weaker modes and reduce pressure amplitude. Agreement between simulated and experimental results for a straight duct with hard walls (no flow) was excellent.

  5. Effective wavelet-based compression method with adaptive quantization threshold and zerotree coding

    NASA Astrophysics Data System (ADS)

    Przelaskowski, Artur; Kazubek, Marian; Jamrogiewicz, Tomasz

    1997-10-01

    An efficient image compression technique, designed especially for medical applications, is presented. Dyadic wavelet decomposition using the Antonini and Villasenor filter banks is followed by adaptive space-frequency quantization and zerotree-based entropy coding of the wavelet coefficients. Threshold selection and uniform quantization are based on a spatial variance estimate built on the lowest-frequency subband data set. The threshold value for each coefficient is evaluated as a linear function of a 9th-order binary context. After quantization, zerotree construction, pruning, and arithmetic coding are applied for efficient lossless data coding. The presented compression method is less complex than the most effective EZW-based techniques but achieves comparable compression efficiency. Specifically, our method has efficiency similar to SPIHT for MR image compression, slightly better for CT images, and significantly better for US image compression. Thus the compression efficiency of the presented method is competitive with the best published algorithms in the literature across diverse classes of medical images.

  6. Operation of the helicopter antenna radiation prediction code

    NASA Technical Reports Server (NTRS)

    Braeden, E. W.; Klevenow, F. T.; Newman, E. H.; Rojas, R. G.; Sampath, K. S.; Scheik, J. T.; Shamansky, H. T.

    1993-01-01

    HARP is a front end as well as a back end for the AMC and NEWAIR computer codes. These codes use the Method of Moments (MM) and the Uniform Geometrical Theory of Diffraction (UTD), respectively, to calculate the electromagnetic radiation patterns for antennas on aircraft. The major difficulty in using these codes is in the creation of proper input files for particular aircraft and in verifying that these files are, in fact, what is intended. HARP creates these input files in a consistent manner and allows the user to verify them for correctness using sophisticated 2D and 3D graphics. After antenna field patterns are calculated using either MM or UTD, HARP can display the results on the user's screen or provide hardcopy output. Because the process of collecting data, building the 3D models, and obtaining the calculated field patterns was completely automated by HARP, the researcher's productivity can be many times what it could be if these operations had to be done by hand. A complete, step-by-step guide is provided so that the researcher can quickly learn to make use of all the capabilities of HARP.

  7. Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Rost, Martin Christopher

    1988-01-01

    Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image. This is accomplished by using a threshold-driven maximum-distortion criterion to select the specific coder used. The different coders are built using variable-blocksize transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed from which the study of these coders can be explored. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed; it can be used to achieve more accurate bit assignments than the algorithms currently used in the literature. Some upper and lower bounds for the bit-allocation distortion-rate function are developed. An obtainable distortion-rate function is developed for a particular scalar quantizer mixing method that can be used to code transform coefficients at any rate.
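
    The textbook starting point for such bit allocation assigns each coefficient a rate proportional to its log variance relative to the geometric mean; the dissertation's algorithm refines this, but the baseline rule is compact:

        import numpy as np

        def allocate_bits(variances, avg_rate):
            """Classical high-rate allocation: b_i = R + 0.5*log2(var_i / gm),
            where gm is the geometric mean of the coefficient variances and R
            the average rate, clipped to non-negative values."""
            v = np.asarray(variances, dtype=float)
            gm = np.exp(np.mean(np.log(v)))           # geometric mean
            bits = avg_rate + 0.5 * np.log2(v / gm)
            return np.maximum(bits, 0.0)

        # example: four coefficient variances, 2 bits/coefficient on average
        print(allocate_bits([16.0, 4.0, 1.0, 0.25], avg_rate=2.0))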

  8. An Adaptive Source-Channel Coding with Feedback for Progressive Transmission of Medical Images

    PubMed Central

    Lo, Jen-Lung; Sanei, Saeid; Nazarpour, Kianoush

    2009-01-01

    A novel adaptive source-channel coding with feedback for progressive transmission of medical images is proposed here. In the source coding part, the transmission starts from the region of interest (RoI). The parity length in the channel code varies with respect to both the proximity of the image subblock to the RoI and the channel noise, which is iteratively estimated in the receiver. The overall transmitted data can be controlled by the user (clinician). In the case of medical data transmission, it is vital to keep the distortion level under control as in most of the cases certain clinically important regions have to be transmitted without any visible error. The proposed system significantly reduces the transmission time and error. Moreover, the system is very user friendly since the selection of the RoI, its size, overall code rate, and a number of test features such as noise level can be set by the users in both ends. A MATLAB-based TCP/IP connection has been established to demonstrate the proposed interactive and adaptive progressive transmission system. The proposed system is simulated for both binary symmetric channel (BSC) and Rayleigh channel. The experimental results verify the effectiveness of the design. PMID:19190770

  9. ALEGRA -- A massively parallel h-adaptive code for solid dynamics

    SciTech Connect

    Summers, R.M.; Wong, M.K.; Boucheron, E.A.; Weatherby, J.R.

    1997-12-31

    ALEGRA is a multi-material, arbitrary-Lagrangian-Eulerian (ALE) code for solid dynamics designed to run on massively parallel (MP) computers. It combines the features of modern Eulerian shock codes, such as CTH, with modern Lagrangian structural analysis codes using an unstructured grid. ALEGRA is being developed for use on the teraflop supercomputers to conduct advanced three-dimensional (3D) simulations of shock phenomena important to a variety of systems. ALEGRA was designed with the Single Program Multiple Data (SPMD) paradigm, in which the mesh is decomposed into sub-meshes so that each processor gets a single sub-mesh with approximately the same number of elements. Using this approach the authors have been able to produce a single code that can scale from one processor to thousands of processors. A current major effort is to develop efficient, high precision simulation capabilities for ALEGRA, without the computational cost of using a global highly resolved mesh, through flexible, robust h-adaptivity of finite elements. H-adaptivity is the dynamic refinement of the mesh by subdividing elements, thus changing the characteristic element size and reducing numerical error. The authors are working on several major technical challenges that must be met to make effective use of HAMMER on MP computers.

  10. CRASH: A BLOCK-ADAPTIVE-MESH CODE FOR RADIATIVE SHOCK HYDRODYNAMICS-IMPLEMENTATION AND VERIFICATION

    SciTech Connect

    Van der Holst, B.; Toth, G.; Sokolov, I. V.; Myra, E. S.; Fryxell, B.; Drake, R. P.; Powell, K. G.; Holloway, J. P.; Stout, Q.; Adams, M. L.; Morel, J. E.; Karni, S.

    2011-06-01

    We describe the Center for Radiative Shock Hydrodynamics (CRASH) code, a block-adaptive-mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with a gray or multi-group method and uses a flux-limited diffusion approximation to recover the free-streaming limit. Electrons and ions are allowed to have different temperatures and we include flux-limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite-volume discretization in either one-, two-, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator-split method is used to solve these equations in three substeps: (1) an explicit step of a shock-capturing hydrodynamic solver; (2) a linear advection of the radiation in frequency-logarithm space; and (3) an implicit solution of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The applications are for astrophysics and laboratory astrophysics. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with a new radiation transfer and heat conduction library and equation-of-state and multi-group opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework.

  11. Impact of hierarchies of clinical codes on predicting future days in hospital.

    PubMed

    Yang Xie; Neubauer, Sandra; Schreier, Gunter; Redmond, Stephen J; Lovell, Nigel H

    2015-01-01

    Health insurance claims contain valuable information for predicting the future health of a population. With many mature machine learning algorithms now available, models can be built to predict future medical costs and hospitalizations. However, it is well known that the way in which data are represented significantly affects the performance of machine learning algorithms. In health insurance claims, key clinical information comes mainly from the associated clinical codes, such as diagnosis and procedure codes, which are hierarchically structured. This study investigates whether the hierarchies of such clinical codes can be exploited to improve predictive performance in the context of predicting future days in hospital. Empirical investigations were performed on data sets of different sizes, since the frequency of lower-level (more specific) clinical codes can vary significantly in populations of different sizes. Bagged trees were compared using feature sets comprising only basic demographic features; low-level, medium-level, or high-level clinical codes; and a full feature set. The main finding is that the different hierarchy levels of clinical codes do not have a significant impact on predictive power. Other findings include: 1) sample size greatly affects the predictive outcome (more observations yield more stable and more accurate results); and 2) the combined use of enriched demographic features and clinical features gives better performance than using either separately.
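
    The hierarchy idea can be sketched by truncating ICD-style diagnosis codes to different numbers of leading characters; the codes and level cut-offs below are illustrative, not taken from the paper's data.

        from collections import Counter

        def code_features(codes, level):
            """level: number of leading characters kept (shorter = higher in hierarchy)."""
            return Counter(code[:level] for code in codes)

        claims = ["E11.9", "E11.65", "I10", "I25.10"]   # one member's diagnosis codes
        print(code_features(claims, level=3))            # high level: E11 x2, I10, I25
        print(code_features(claims, level=6))            # low level: each code distinct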

  12. Euler Technology Assessment for Preliminary Aircraft Design: Compressibility Predictions by Employing the Cartesian Unstructured Grid SPLITFLOW Code

    NASA Technical Reports Server (NTRS)

    Finley, Dennis B.; Karman, Steve L., Jr.

    1996-01-01

    The objective of the second phase of the Euler Technology Assessment program was to evaluate the ability of Euler computational fluid dynamics codes to predict compressible flow effects over a generic fighter wind tunnel model. This portion of the study was conducted by Lockheed Martin Tactical Aircraft Systems, using an in-house Cartesian-grid code called SPLITFLOW. The Cartesian grid technique offers several advantages, including ease of volume grid generation and reduced number of cells compared to other grid schemes. SPLITFLOW also includes grid adaption of the volume grid during the solution to resolve high-gradient regions. The SPLITFLOW code predictions of configuration forces and moments are shown to be adequate for preliminary design, including predictions of sideslip effects and the effects of geometry variations at low and high angles-of-attack. The transonic pressure prediction capabilities of SPLITFLOW are shown to be improved over subsonic comparisons. The time required to generate the results from initial surface data is on the order of several hours, including grid generation, which is compatible with the needs of the design environment.

  13. An adaptive prediction and detection algorithm for multistream syndromic surveillance

    PubMed Central

    Najmi, Amir-Homayoon; Magruder, Steve F

    2005-01-01

    Background Surveillance of Over-the-Counter pharmaceutical (OTC) sales as a potential early indicator of developing public health conditions, in particular in cases of interest to biosurveillance, has been suggested in the literature. This paper is a continuation of a previous study in which we formulated the problem of estimating clinical data from OTC sales in terms of optimal LMS linear and Finite Impulse Response (FIR) filters. In this paper we extend our results to predict clinical data multiple steps ahead using OTC sales as well as the clinical data itself. Methods The OTC data are grouped into a few categories and we predict the clinical data using a multichannel filter that encompasses all the past OTC categories as well as the past clinical data itself. The prediction is performed using FIR filters and the recursive least squares method in order to adapt rapidly to nonstationary behaviour. In addition, we inject simulated events into both clinical and OTC data streams to evaluate the predictions by computing the Receiver Operating Characteristic curves of a threshold detector based on predicted outputs. Results We present all prediction results, showing the effectiveness of the combined filtering operation. In addition, we compute and present the performance of a detector using the prediction output. Conclusion Multichannel adaptive FIR least squares filtering provides a viable method of predicting public health conditions, as represented by clinical data, from OTC sales and/or the clinical data. The potential value to a biosurveillance system cannot, however, be determined without studying this approach in the presence of transient events (nonstationary events of relatively short duration and fast rise times). Our simulated events superimposed on actual OTC and clinical data allow us to provide an upper bound on that potential value under some restricted conditions. Based on our ROC curves we argue that a biosurveillance system can
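
    A minimal recursive-least-squares FIR predictor of the kind described here is sketched below; the regressor layout (stacked lags of the OTC categories plus the clinical series) and the parameter values are illustrative.

        import numpy as np

        class RLSPredictor:
            def __init__(self, n_taps, lam=0.99, delta=1e2):
                self.w = np.zeros(n_taps)             # FIR filter weights
                self.P = np.eye(n_taps) * delta       # inverse correlation matrix
                self.lam = lam                        # forgetting factor (<1 adapts faster)

            def update(self, x, d):
                """x: regressor (stacked lags of all channels), d: observed target."""
                Px = self.P @ x
                g = Px / (self.lam + x @ Px)          # gain vector
                e = d - self.w @ x                    # a priori prediction error
                self.w += g * e
                self.P = (self.P - np.outer(g, Px)) / self.lam
                return self.w @ x                     # a posteriori prediction

        # Usage: regressor stacks the last 3 samples of 4 OTC streams + the clinical series.
        rls = RLSPredictor(n_taps=3 * 5)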

  14. ICAN: A versatile code for predicting composite properties

    NASA Technical Reports Server (NTRS)

    Ginty, C. A.; Chamis, C. C.

    1986-01-01

    The Integrated Composites ANalyzer (ICAN), a stand-alone computer code, incorporates micromechanics equations and laminate theory to analyze/design multilayered fiber composite structures. Procedures for both the implementation of new data in ICAN and the selection of appropriate measured data are summarized for: (1) composite systems subject to severe thermal environments; (2) woven fabric/cloth composites; and (3) the selection of new composite systems including those made from high strain-to-fracture fibers. The comparisons demonstrate the versatility of ICAN as a reliable method for determining composite properties suitable for preliminary design.

  15. Predictive codes of familiarity and context during the perceptual learning of facial identities.

    PubMed

    Apps, Matthew A J; Tsakiris, Manos

    2013-01-01

    Face recognition is a key component of successful social behaviour. However, the computational processes that underpin perceptual learning and recognition as faces transition from unfamiliar to familiar are poorly understood. In predictive coding, learning occurs through prediction errors that update stimulus familiarity, but recognition is a function of both stimulus and contextual familiarity. Here we show that behavioural responses on a two-option face recognition task can be predicted by the level of contextual and facial familiarity in a computational model derived from predictive-coding principles. Using fMRI, we show that activity in the superior temporal sulcus varies with the contextual familiarity in the model, whereas activity in the fusiform face area covaries with the prediction error parameter that updated facial familiarity. Our results characterize the key computations underpinning the perceptual learning of faces, highlighting that the functional properties of face-processing areas conform to the principles of predictive coding.

  16. Monte Carlo Predictions of Prompt Fission Neutrons and Photons: a Code Comparison

    NASA Astrophysics Data System (ADS)

    Talou, P.; Kawano, T.; Stetcu, I.; Vogt, R.; Randrup, J.

    2014-04-01

    This paper reports on initial comparisons between the LANL CGMF and LBNL/LLNL FREYA codes, which both aim at computing prompt fission neutrons and gammas. While the methodologies used in both codes are somewhat similar, the detailed implementations and physical assumptions are different. We are investigating how some of these differences impact predictions.

  17. A New Adaptive Framework for Collaborative Filtering Prediction.

    PubMed

    Almosallam, Ibrahim A; Shang, Yi

    2008-06-01

    Collaborative filtering is one of the most successful techniques for recommendation systems and has been used in many commercial services provided by major companies including Amazon, TiVo and Netflix. In this paper we focus on memory-based collaborative filtering (CF). Existing CF techniques work well on dense data but poorly on sparse data. To address this weakness, we propose to use z-scores instead of explicit ratings and introduce a mechanism that adaptively combines global statistics with item-based values based on data density level. We present a new adaptive framework that encapsulates various CF algorithms and the relationships among them. An adaptive CF predictor is developed that can self-adapt from user-based to item-based to hybrid methods based on the amount of available ratings. Our experimental results show that the new predictor consistently obtained more accurate predictions than existing CF methods, with the most significant improvement on sparse data sets. When applied to the Netflix Challenge data set, our method performed better than existing CF and singular value decomposition (SVD) methods and achieved 4.67% improvement over Netflix's system.
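
    Two of the ingredients — per-user z-scores in place of raw ratings, and a blend of local and global statistics weighted by data density — can be sketched as follows; the shrinkage formula is an illustrative choice, not the paper's exact mechanism.

        import numpy as np

        def zscore(ratings):
            """Replace a user's raw ratings with z-scores."""
            mu, sigma = np.mean(ratings), np.std(ratings) or 1.0
            return (np.asarray(ratings) - mu) / sigma

        def blended_prediction(item_scores, global_mean, k=5.0):
            """Shrink the item mean toward the global mean when the item is sparse."""
            n = len(item_scores)
            alpha = n / (n + k)               # more ratings -> trust the item more
            item_mean = np.mean(item_scores) if n else 0.0
            return alpha * item_mean + (1 - alpha) * global_mean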

  18. Visual Bias Predicts Gait Adaptability in Novel Sensory Discordant Conditions

    NASA Technical Reports Server (NTRS)

    Brady, Rachel A.; Batson, Crystal D.; Peters, Brian T.; Mulavara, Ajitkumar P.; Bloomberg, Jacob J.

    2010-01-01

    We designed a gait training study that presented combinations of visual flow and support-surface manipulations to investigate the response of healthy adults to novel discordant sensorimotor conditions. We aimed to determine whether a relationship existed between subjects' visual dependence and their postural stability and cognitive performance in a new discordant environment presented at the conclusion of training (Transfer Test). Our training system comprised a treadmill placed on a motion base facing a virtual visual scene that provided a variety of sensory challenges. Ten healthy adults completed 3 training sessions during which they walked on a treadmill at 1.1 m/s while receiving discordant support-surface and visual manipulations. At the first visit, in an analysis of normalized torso translation measured in a scene-movement-only condition, 3 of 10 subjects were classified as visually dependent. During the Transfer Test, all participants received a 2-minute novel exposure. In a combined measure of stride frequency and reaction time, the non-visually dependent subjects showed improved adaptation on the Transfer Test compared to their visually dependent counterparts. This finding suggests that individual differences in the ability to adapt to new sensorimotor conditions may be explained by individuals' innate sensory biases. An accurate preflight assessment of crewmembers' biases for visual dependence could be used to predict their propensities to adapt to novel sensory conditions. It may also facilitate the development of customized training regimens that could expedite adaptation to alternate gravitational environments.

  19. Adaptive three-dimensional motion-compensated wavelet transform for image sequence coding

    NASA Astrophysics Data System (ADS)

    Leduc, Jean-Pierre

    1994-09-01

    This paper describes a 3D spatio-temporal coding algorithm for the bit-rate compression of digital image sequences. The coding scheme is built on several specific components: a motion representation based on a four-parameter affine model, a motion-adapted temporal wavelet decomposition along the motion trajectories, and a signal-adapted spatial wavelet transform. The motion estimation is performed with four-parameter affine transformation models, also called similitudes, which account for translations, rotations, and scalings. The temporal wavelet filter bank exploits bi-orthogonal linear-phase dyadic decompositions. The 2D spatial decomposition is based on dyadic signal-adaptive filter banks with either para-unitary or bi-orthogonal bases. The adaptive filtering is carried out according to a performance criterion optimized under constraints, in order to maximize the compression ratio at the expense of graceful degradations of the subjective image quality. The major principles of the technique are, in the analysis process, to extract and separate the motion contained in the sequences from the spatio-temporal redundancy and, in the compression process, to take the rate-distortion function into account on the basis of spatio-temporal psycho-visual properties so as to achieve the most graceful degradations. The compression procedure is therefore composed of scalar quantizers, which exploit the 3D spatio-temporal psycho-visual properties of the Human Visual System, and entropy coders, which finalize the bit-rate compression.

  20. Adaptive colour transformation of retinal images for stroke prediction.

    PubMed

    Unnikrishnan, Premith; Aliahmad, Behzad; Kawasaki, Ryo; Kumar, Dinesh

    2013-01-01

    Identifying lesions in the retinal vasculature from retinal imaging is most often done on the green channel. However, the effect of colour and of single-channel analysis on feature extraction has not yet been studied. In this paper an adaptive colour transformation is investigated and validated on retinal images associated with 10-year stroke prediction, using principal component analysis (PCA). Histogram analysis indicated that while each colour-channel image had a unimodal distribution, the second PCA component had a bimodal distribution and showed significantly improved separation between the retinal vasculature and the background. The experiments showed that with the adaptive colour transformation, sensitivity and specificity were both higher (AUC 0.73) than with the single green channel (AUC 0.63) for the same database and image features.
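
    The transformation can be sketched as a per-image PCA over RGB pixel values, taking the second principal component as the analysis channel in place of the fixed green channel; the shapes and data here are illustrative.

        import numpy as np

        def second_pc_channel(rgb_image):
            pixels = rgb_image.reshape(-1, 3).astype(float)
            pixels -= pixels.mean(axis=0)             # center the colour cloud
            _, _, Vt = np.linalg.svd(pixels, full_matrices=False)
            pc2 = pixels @ Vt[1]                      # project onto the 2nd component
            return pc2.reshape(rgb_image.shape[:2])   # per-image adaptive channel

        img = np.random.default_rng(3).random((64, 64, 3))   # placeholder image
        channel = second_pc_channel(img)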

  1. Pipelined recurrent fuzzy neural networks for nonlinear adaptive speech prediction.

    PubMed

    Stavrakoudis, Dimitris G; Theocharis, John B

    2007-10-01

    A class of pipelined recurrent fuzzy neural networks (PRFNNs) is proposed in this paper for nonlinear adaptive speech prediction. The PRFNNs are modular structures comprising a number of modules that are interconnected in a chained form. Each module is implemented by a small-scale recurrent fuzzy neural network (RFNN) with internal dynamics. Due to module nesting, the PRFNNs offer a number of desirable attributes, including decomposition of the modeling task, enhanced temporal processing capabilities, and multistage dynamic fuzzy inference. Tuning of the PRFNN adaptable parameters is accomplished by a series of gradient descent methods with different weighting of the modules and by the decoupled extended Kalman filter (DEKF) algorithm, based on weight grouping. Extensive experimentation is carried out to evaluate the performance of the PRFNNs on the speech prediction platform. Comparative analysis shows that the PRFNNs outperform single-RFNN models in terms of both the prediction gains obtained and computational efficiency. Furthermore, PRFNNs provide considerably better performance than pipelined recurrent neural networks of similar model complexity.

  2. Error correction, sensory prediction, and adaptation in motor control.

    PubMed

    Shadmehr, Reza; Smith, Maurice A; Krakauer, John W

    2010-01-01

    Motor control is the study of how organisms make accurate goal-directed movements. Here we consider two problems that the motor system must solve in order to achieve such control. The first problem is that sensory feedback is noisy and delayed, which can make movements inaccurate and unstable. The second problem is that the relationship between a motor command and the movement it produces is variable, as the body and the environment can both change. A solution is to build adaptive internal models of the body and the world. The predictions of these internal models, called forward models because they transform motor commands into sensory consequences, can be used both to produce a lifetime of calibrated movements and to improve the ability of the sensory system to estimate the state of the body and the world around it. Forward models are only useful if they produce unbiased predictions. Evidence shows that forward models remain calibrated through motor adaptation: learning driven by sensory prediction errors.
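
    The core idea — adaptation driven by sensory prediction errors — can be illustrated with a toy linear forward model whose gain is adjusted whenever prediction and feedback disagree (an LMS-style rule; the gain, noise level, and learning rate are invented for the example).

        import numpy as np

        # Forward model y = w*u maps a motor command u to a predicted sensory outcome.
        rng = np.random.default_rng(0)
        w_true, w_hat, eta = 1.5, 1.0, 0.05
        for trial in range(200):
            u = rng.uniform(-1, 1)                               # motor command
            y_pred = w_hat * u                                   # forward-model prediction
            y_obs = w_true * u + 0.05 * rng.standard_normal()    # noisy sensory feedback
            w_hat += eta * (y_obs - y_pred) * u                  # error-driven adaptation
        print(round(w_hat, 2))                                   # converges near 1.5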

  3. Effects of selective adaptation on coding sugar and salt tastes in mixtures.

    PubMed

    Frank, Marion E; Goyert, Holly F; Formaker, Bradley K; Hettinger, Thomas P

    2012-10-01

    Little is known about the coding of taste mixtures in complex dynamic stimulus environments. A protocol developed for odor stimuli was used to test whether rapid selective adaptation extracted sugar and salt component tastes from mixtures as it did component odors. Seventeen human subjects identified the taste components of "salt + sugar" mixtures. In 4 sessions, 16 adapt-test stimulus pairs were presented as atomized, 150-μL "taste puffs" to the tongue tip to simulate odor sniffs. Stimuli were NaCl, sucrose, "NaCl + sucrose," and water. In unadapted mixtures of two NaCl concentrations (0.1 or 0.05 M) with sucrose at three times those concentrations (0.3 or 0.15 M), the sugar was identified 98% of the time but the suppressed salt only 65% of the time. Rapid selective adaptation decreased identification of preadapted ambient sugar and salt components to 35%, well below the 74% self-adapted level, despite variation in stimulus concentration and adapting time (<5 or >10 s). The 96% identification of extra sugar and salt mixture components was as certain as the identification of single compounds. The results revealed that salt-sugar mixture suppression, which depended on relative mixture-component concentration, was mutual. Furthermore, as with odors, stronger and more recent tastes are emphasized in dynamic experimental conditions replicating natural situations.

  4. Evolutionary modeling and prediction of non-coding RNAs in Drosophila.

    PubMed

    Bradley, Robert K; Uzilov, Andrew V; Skinner, Mitchell E; Bendaña, Yuri R; Barquist, Lars; Holmes, Ian

    2009-08-11

    We performed benchmarks of phylogenetic grammar-based ncRNA gene prediction, experimenting with eight different models of structural evolution and two different programs for genome alignment. We evaluated our models using alignments of twelve Drosophila genomes. We find that ncRNA prediction performance can vary greatly between different gene predictors and subfamilies of ncRNA gene. Our estimates for false positive rates are based on simulations which preserve local islands of conservation; using these simulations, we predict a higher rate of false positives than previous computational ncRNA screens have reported. Using one of the tested prediction grammars, we provide an updated set of ncRNA predictions for D. melanogaster and compare them to previously-published predictions and experimental data. Many of our predictions show correlations with protein-coding genes. We found significant depletion of intergenic predictions near the 3' end of coding regions and furthermore depletion of predictions in the first intron of protein-coding genes. Some of our predictions are colocated with larger putative unannotated genes: for example, 17 of our predictions showing homology to the RFAM family snoR28 appear in a tandem array on the X chromosome; the 4.5 Kbp spanned by the predicted tandem array is contained within a FlyBase-annotated cDNA.

  5. PARC Navier-Stokes code upgrade and validation for high speed aeroheating predictions

    NASA Technical Reports Server (NTRS)

    Liver, Peter A.; Praharaj, Sarat C.; Seaford, C. Mark

    1990-01-01

    Applications of the PARC full Navier-Stokes code for hypersonic flowfield and aeroheating predictions around blunt bodies such as the Aeroassist Flight Experiment (AFE) and Aeroassisted Orbital Transfer Vehicle (AOTV) are evaluated. Two-dimensional/axisymmetric and three-dimensional perfect-gas versions of the code were upgraded and tested against benchmark wind tunnel cases of a hemisphere-cylinder, the three-dimensional AFE forebody, and axisymmetric AFE and AOTV aerobrake/wake flowfields. PARC calculations are in good agreement with experimental data and with the results of similar computer codes. Difficulties encountered in flowfield and heat-transfer predictions due to grid density, boundary conditions (such as the singular stagnation-line axis), and artificial dissipation terms are presented, together with the subsequent improvements made to the code. The experience gained with the perfect-gas code is currently being utilized in applications of an equilibrium-air real-gas PARC version developed at REMTECH.

  6. THE PLUTO CODE FOR ADAPTIVE MESH COMPUTATIONS IN ASTROPHYSICAL FLUID DYNAMICS

    SciTech Connect

    Mignone, A.; Tzeferacos, P.; Zanni, C.; Bodo, G.; Van Straalen, B.; Colella, P.

    2012-01-01

    We present a description of the adaptive mesh refinement (AMR) implementation of the PLUTO code for solving the equations of classical and special relativistic magnetohydrodynamics (MHD and RMHD). The current release exploits, in addition to the static grid version of the code, the distributed infrastructure of the CHOMBO library for multidimensional parallel computations over block-structured, adaptively refined grids. We employ a conservative finite-volume approach where primary flow quantities are discretized at the cell center in a dimensionally unsplit fashion using the Corner Transport Upwind method. Time stepping relies on a characteristic tracing step where piecewise parabolic method, weighted essentially non-oscillatory, or slope-limited linear interpolation schemes can be handily adopted. A characteristic decomposition-free version of the scheme is also illustrated. The solenoidal condition of the magnetic field is enforced by augmenting the equations with a generalized Lagrange multiplier providing propagation and damping of divergence errors through a mixed hyperbolic/parabolic explicit cleaning step. Among the novel features, we describe an extension of the scheme to include non-ideal dissipative processes, such as viscosity, resistivity, and anisotropic thermal conduction without operator splitting. Finally, we illustrate an efficient treatment of point-local, potentially stiff source terms over hierarchical nested grids by taking advantage of the adaptivity in time. Several multidimensional benchmarks and applications to problems of astrophysical relevance assess the potentiality of the AMR version of PLUTO in resolving flow features separated by large spatial and temporal disparities.

  7. Ducted-Fan Engine Acoustic Predictions Using a Navier-Stokes Code

    NASA Technical Reports Server (NTRS)

    Rumsey, C. L.; Biedron, R. T.; Farassat, F.; Spence, P. L.

    1998-01-01

    A Navier-Stokes computer code is used to predict one of the ducted-fan engine acoustic modes that results from rotor-wake/stator-blade interaction. A patched sliding-zone interface is employed to pass information between the moving rotor row and the stationary stator row. The code produces averaged aerodynamic results downstream of the rotor that agree well with a widely used average-passage code. The acoustic mode of interest is generated successfully by the code and is propagated well upstream of the rotor; temporal and spatial numerical resolution are fine enough that attenuation of the signal is small. Two acoustic codes are used to find the far-field noise. Near-field propagation is computed by using Eversman's wave envelope code, which is based on a finite-element model. Propagation to the far field is accomplished by using the Kirchhoff formula for moving surfaces with the results of the wave envelope code as input data. Comparison of measured and computed far-field noise levels shows fair agreement in the range of directivity angles where the peak radiation lobes from the inlet are observed. Although only a single acoustic mode is targeted in this study, the main conclusion is a proof-of-concept: Navier-Stokes codes can be used both to generate and to propagate rotor-stator acoustic modes forward through an engine, where the results can be coupled to other far-field noise prediction codes.

  9. Prediction and control of chaotic processes using nonlinear adaptive networks

    SciTech Connect

    Jones, R.D.; Barnes, C.W.; Flake, G.W.; Lee, K.; Lewis, P.S.; O'Rouke, M.K.; Qian, S.

    1990-01-01

    We present the theory of nonlinear adaptive networks and discuss a few applications. In particular, we review the theory of feedforward backpropagation networks. We then present the theory of the Connectionist Normalized Linear Spline network in both its feedforward and iterated modes. Also, we briefly discuss the theory of stochastic cellular automata. We then discuss applications to chaotic time series, tidal prediction in Venice lagoon, finite differencing, sonar transient detection, control of nonlinear processes, control of a negative ion source, balancing a double inverted pendulum and design advice for free electron lasers and laser fusion targets.

  10. Performance predictions for the Keck telescope adaptive optics system

    SciTech Connect

    Gavel, D.T.; Olivier, S.S.

    1995-08-07

    The second Keck ten-meter telescope (Keck-II) is slated to have an infrared-optimized adaptive optics system in the 1997-1998 time frame. This system will provide diffraction-limited images in the 1-3 micron region and the ability to use a diffraction-limited spectroscopy slit. The AO system is currently in the preliminary design phase and considerable analysis has been performed in order to predict its performance under various seeing conditions. In particular we have investigated the point-spread function, energy through a spectroscopy slit, crowded-field contrast, object limiting magnitude, field of view, and sky coverage with natural and laser guide stars.

  11. Tiltrotor Aeroacoustic Code (TRAC) Prediction Assessment and Initial Comparisons with Tram Test Data

    NASA Technical Reports Server (NTRS)

    Burley, Casey L.; Brooks, Thomas F.; Charles, Bruce D.; McCluer, Megan

    1999-01-01

    A prediction sensitivity assessment to inputs and blade modeling is presented for the TiltRotor Aeroacoustic Code (TRAC). For this study, the non-CFD prediction system option in TRAC is used. Here, the comprehensive rotorcraft code, CAMRAD.Mod1, coupled with the high-resolution sectional loads code HIRES, predicts unsteady blade loads to be used in the noise prediction code WOPWOP. The sensitivity of the predicted blade motions, blade airloads, wake geometry, and acoustics is examined with respect to rotor rpm, blade twist and chord, and to blade dynamic modeling. To accomplish this assessment, an interim input-deck for the TRAM test model and an input-deck for a reference test model are utilized in both rigid and elastic modes. Both of these test models are regarded as near scale models of the V-22 proprotor (tiltrotor). With basic TRAC sensitivities established, initial TRAC predictions are compared to results of an extensive test of an isolated model proprotor. The test was that of the TiltRotor Aeroacoustic Model (TRAM) conducted in the Duits-Nederlandse Windtunnel (DNW). Predictions are compared to measured noise for the proprotor operating over an extensive range of conditions. The variation of predictions demonstrates the great care that must be taken in defining the blade motion. However, even with this variability, the predictions using the different blade modeling successfully capture (bracket) the levels and trends of the noise for conditions ranging from descent to ascent.

  12. A new hybrid coding for protein secondary structure prediction based on primary structure similarity.

    PubMed

    Li, Zhong; Wang, Jing; Zhang, Shunpu; Zhang, Qifeng; Wu, Wuming

    2017-03-16

    The coding pattern of protein can greatly affect the prediction accuracy of protein secondary structure. In this paper, a novel hybrid coding method based on the physicochemical properties of amino acids and tendency factors is proposed for the prediction of protein secondary structure. The principal component analysis (PCA) is first applied to the physicochemical properties of amino acids to construct a 3-bit-code, and then the 3 tendency factors of amino acids are calculated to generate another 3-bit-code. Two 3-bit-codes are fused to form a novel hybrid 6-bit-code. Furthermore, we make a geometry-based similarity comparison of the protein primary structure between the reference set and the test set before the secondary structure prediction. We finally use the support vector machine (SVM) to predict those amino acids which are not detected by the primary structure similarity comparison. Experimental results show that our method achieves a satisfactory improvement in accuracy in the prediction of protein secondary structure.
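
    A sketch of the hybrid coding construction follows, using random placeholder matrices in place of the actual amino-acid property tables: each residue's physicochemical property vector is compressed to 3 PCA components and concatenated with its 3 tendency factors, giving a 6-dimensional code.

        import numpy as np

        rng = np.random.default_rng(1)
        props = rng.standard_normal((20, 10))     # 20 amino acids x 10 properties (placeholder)
        tendency = rng.random((20, 3))            # helix/sheet/coil tendency factors (placeholder)

        X = props - props.mean(axis=0)            # center before PCA
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        pca3 = X @ Vt[:3].T                       # first three principal components

        hybrid_code = np.hstack([pca3, tendency]) # 20 x 6 hybrid code table
        assert hybrid_code.shape == (20, 6)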

  13. High Speed Research Noise Prediction Code (HSRNOISE) User's and Theoretical Manual

    NASA Technical Reports Server (NTRS)

    Golub, Robert (Technical Monitor); Rawls, John W., Jr.; Yeager, Jessie C.

    2004-01-01

    This report describes a computer program, HSRNOISE, that predicts noise levels for a supersonic aircraft powered by mixed flow turbofan engines with rectangular mixer-ejector nozzles. It fully documents the noise prediction algorithms, provides instructions for executing the HSRNOISE code, and provides predicted noise levels for the High Speed Research (HSR) program Technology Concept (TC) aircraft. The component source noise prediction algorithms were developed jointly by Boeing, General Electric Aircraft Engines (GEAE), NASA and Pratt & Whitney during the course of the NASA HSR program. Modern Technologies Corporation developed an alternative mixer ejector jet noise prediction method under contract to GEAE that has also been incorporated into the HSRNOISE prediction code. Algorithms for determining propagation effects and calculating noise metrics were taken from the NASA Aircraft Noise Prediction Program.

  14. An adaptive multigrid model for hurricane track prediction

    NASA Technical Reports Server (NTRS)

    Fulton, Scott R.

    1993-01-01

    This paper describes a simple numerical model for hurricane track prediction which uses a multigrid method to adapt the model resolution as the vortex moves. The model is based on the modified barotropic vorticity equation, discretized in space by conservative finite differences and in time by a Runge-Kutta scheme. A multigrid method is used to solve an elliptic problem for the streamfunction at each time step. Nonuniform resolution is obtained by superimposing uniform grids of different spatial extent; these grids move with the vortex as it moves. Preliminary numerical results indicate that the local mesh refinement allows accurate prediction of the hurricane track with substantially less computer time than required on a single uniform grid.
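
    The elliptic step — solving ∇²ψ = ζ for the streamfunction — is the part the multigrid method accelerates. A minimal single-grid Jacobi relaxation of that problem is shown below for reference; multigrid reaches the same solution far faster by combining grids of different spatial extent, as the paper does.

        import numpy as np

        def solve_streamfunction(zeta, dx, n_iter=2000):
            psi = np.zeros_like(zeta)
            for _ in range(n_iter):               # Jacobi sweeps; psi = 0 on the boundary
                psi[1:-1, 1:-1] = 0.25 * (psi[2:, 1:-1] + psi[:-2, 1:-1] +
                                          psi[1:-1, 2:] + psi[1:-1, :-2] -
                                          dx**2 * zeta[1:-1, 1:-1])
            return psi

        zeta = np.zeros((65, 65)); zeta[32, 32] = 1.0   # point vortex
        psi = solve_streamfunction(zeta, dx=1.0)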

  15. Prediction of Conductivity by Adaptive Neuro-Fuzzy Model

    PubMed Central

    Akbarzadeh, S.; Arof, A. K.; Ramesh, S.; Khanmirzaei, M. H.; Nor, R. M.

    2014-01-01

    Electrochemical impedance spectroscopy (EIS) is a key method for characterizing the ionic and electronic conductivity of materials. One requirement of this technique is a model to forecast conductivity in preliminary experiments. The aim of this paper is to examine the prediction of conductivity by neuro-fuzzy inference from basic experimental factors such as temperature, frequency, film thickness, and weight percentage of salt. In order to provide optimal sets of fuzzy logic rule bases, the grid-partition fuzzy inference method was applied. The model was validated on four random data sets. To evaluate the validity of the model, eleven statistical features were examined. Statistical analysis of the results clearly shows that an adaptive neuro-fuzzy model is powerful enough for the prediction of conductivity. PMID:24658582

  16. Pilot-Assisted Adaptive Channel Estimation for Coded MC-CDMA with ICI Cancellation

    NASA Astrophysics Data System (ADS)

    Yui, Tatsunori; Tomeba, Hiromichi; Adachi, Fumiyuki

    One of the promising wireless access techniques for the next generation mobile communications systems is multi-carrier code division multiple access (MC-CDMA). MC-CDMA can provide good transmission performance owing to the frequency diversity effect in a severe frequency-selective fading channel. However, the bit error rate (BER) performance of coded MC-CDMA is inferior to that of orthogonal frequency division multiplexing (OFDM) due to the residual inter-code interference (ICI) after frequency-domain equalization (FDE). Recently, we proposed a frequency-domain soft interference cancellation (FDSIC) to reduce the residual ICI and confirmed by computer simulation that the MC-CDMA with FDSIC provides better BER performance than OFDM. However, ideal channel estimation was assumed. In this paper, we propose adaptive decision-feedback channel estimation (ADFCE) and evaluate by computer simulation the average BER and throughput performances of turbo-coded MC-CDMA with FDSIC. We show that even if a practical channel estimation is used, MC-CDMA with FDSIC can still provide better performance than OFDM.

  17. Contour coding based rotating adaptive model for human detection and tracking in thermal catadioptric omnidirectional vision.

    PubMed

    Tang, Yazhe; Li, Youfu

    2012-09-20

    In this paper, we introduce a novel surveillance system based on thermal catadioptric omnidirectional (TCO) vision. Conventional contour-based methods are difficult to apply to the TCO sensor for detection or tracking purposes due to the distortion of TCO vision. To solve this problem, we propose a contour coding based rotating adaptive model (RAM) that can extract the contour feature from TCO vision directly, as it takes advantage of the relative angle based on the characteristics of TCO vision to change the sampling sequence automatically. A series of experiments and quantitative analyses verify that the performance of the proposed RAM-based contour coding feature for human detection and tracking is satisfactory in TCO vision.

  18. Simulation of Supersonic Jet Noise with the Adaptation of Overflow CFD Code and Kirchhoff Surface Integral

    NASA Technical Reports Server (NTRS)

    Kandula, Max; Caimi, Raoul; Steinrock, T. (Technical Monitor)

    2001-01-01

    An acoustic prediction capability for supersonic axisymmetric jets was developed on the basis of the OVERFLOW Navier-Stokes CFD (Computational Fluid Dynamics) code of NASA Langley Research Center. Reynolds-averaged turbulent stresses in the flow field are modeled with the Spalart-Allmaras one-equation turbulence model. Appropriate acoustic and outflow boundary conditions were implemented to compute the time-dependent acoustic pressure in the nonlinear source field. From the acoustic pressure and its temporal and normal derivatives specified on the Kirchhoff surface, the near-field and far-field sound pressure levels are computed via the Kirchhoff surface integral, with the Kirchhoff surface chosen to enclose the nonlinear sound-source region described by the CFD code. The methods are validated by comparing the predicted sound pressure levels with available data for an axisymmetric turbulent supersonic (Mach 2) perfectly expanded jet.

  19. Predictive coding under the free-energy principle.

    PubMed

    Friston, Karl; Kiebel, Stefan

    2009-05-12

    This paper considers prediction and perceptual categorization as an inference problem that is solved by the brain. We assume that the brain models the world as a hierarchy or cascade of dynamical systems that encode causal structure in the sensorium. Perception is equated with the optimization or inversion of these internal models, to explain sensory data. Given a model of how sensory data are generated, we can invoke a generic approach to model inversion, based on a free energy bound on the model's evidence. The ensuing free-energy formulation furnishes equations that prescribe the process of recognition, i.e. the dynamics of neuronal activity that represent the causes of sensory input. Here, we focus on a very general model, whose hierarchical and dynamical structure enables simulated brains to recognize and predict trajectories or sequences of sensory states. We first review hierarchical dynamical models and their inversion. We then show that the brain has the necessary infrastructure to implement this inversion and illustrate this point using synthetic birds that can recognize and categorize birdsongs.

  20. Global error estimation based on the tolerance proportionality for some adaptive Runge-Kutta codes

    NASA Astrophysics Data System (ADS)

    Calvo, M.; González-Pinto, S.; Montijano, J. I.

    2008-09-01

    Modern codes for the numerical solution of Initial Value Problems (IVPs) in ODEs are based on adaptive methods that, for a user-supplied tolerance δ, attempt to advance the integration by selecting the size of each step so that some measure of the local error is ≃ δ. Although this policy does not ensure that the global errors are under the prescribed tolerance, after the early studies of Stetter [Considerations concerning a theory for ODE-solvers, in: R. Burlisch, R.D. Grigorieff, J. Schröder (Eds.), Numerical Treatment of Differential Equations, Proceedings of Oberwolfach, 1976, Lecture Notes in Mathematics, vol. 631, Springer, Berlin, 1978, pp. 188-200; Tolerance proportionality in ODE codes, in: R. März (Ed.), Proceedings of the Second Conference on Numerical Treatment of Ordinary Differential Equations, Humbold University, Berlin, 1980, pp. 109-123] and the extensions of Higham [Global error versus tolerance for explicit Runge-Kutta methods, IMA J. Numer. Anal. 11 (1991) 457-480; The tolerance proportionality of adaptive ODE solvers, J. Comput. Appl. Math. 45 (1993) 227-236; The reliability of standard local error control algorithms for initial value ordinary differential equations, in: Proceedings: The Quality of Numerical Software: Assessment and Enhancement, IFIP Series, Springer, Berlin, 1997], it has been proved that in many existing explicit Runge-Kutta codes the global errors behave asymptotically as some rational power of δ. This step-size policy, for a given IVP, determines at each grid point t_n a new step size h_{n+1} = h(t_n; δ), so that h(t; δ) is a continuous function of t. In this paper, the tolerance proportionality property is studied under a discontinuous step-size policy that does not allow the step size to change when the step-size ratio between two consecutive steps is close to unity. This theory is applied to obtain global error estimations in a few problems that have been solved with
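
    Schematically, the two step-size policies can be contrasted as follows; the freeze thresholds in the discontinuous variant are illustrative, not those of the paper.

        # Continuous rule: scale h so the next local error is about the tolerance delta.
        # Discontinuous variant: keep h unchanged when the ratio is close to unity.
        def next_step(h, err, delta, p, safety=0.9, discontinuous=True):
            factor = safety * (delta / err) ** (1.0 / (p + 1))   # p = method order
            if discontinuous and 0.9 < factor < 1.1:
                return h                      # freeze the step size near unit ratio
            return h * min(5.0, max(0.2, factor))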

  1. Real-time Adaptive Control Using Neural Generalized Predictive Control

    NASA Technical Reports Server (NTRS)

    Haley, Pam; Soloway, Don; Gold, Brian

    1999-01-01

    The objective of this paper is to demonstrate the feasibility of a Nonlinear Generalized Predictive Control algorithm by showing real-time adaptive control of a plant with relatively fast time constants. Generalized Predictive Control has classically been used in process control, where linear control laws were formulated for plants with relatively slow time constants. The plant of interest in this paper is a magnetic levitation device that is nonlinear and open-loop unstable. In this application, the reference model of the plant is a neural network with a nominal linear model embedded in the network weights. Control based on the linear model provides initial stability at the beginning of network training. With a neural network the control laws are nonlinear, and online adaptation of the model is possible to capture unmodeled or time-varying dynamics. Newton-Raphson is the minimization algorithm. Newton-Raphson requires the calculation of the Hessian, but even with this computational expense the low iteration rate makes this a viable algorithm for real-time control.
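
    The control computation can be sketched as follows, with a placeholder linear plant model standing in for the paper's neural network: at each sample, Newton-Raphson (here with finite-difference gradient and Hessian) minimizes a horizon cost over the control input.

        def f(y, u):                              # one-step model (placeholder dynamics)
            return 0.8 * y + 0.5 * u

        def cost(u, y0, ref, horizon=5, lam=0.01):
            J, y = 0.0, y0
            for _ in range(horizon):
                y = f(y, u)                       # constant control over the horizon
                J += (y - ref) ** 2 + lam * u ** 2
            return J

        def newton_control(y0, ref, u=0.0, eps=1e-4, iters=5):
            for _ in range(iters):
                g = (cost(u + eps, y0, ref) - cost(u - eps, y0, ref)) / (2 * eps)
                H = (cost(u + eps, y0, ref) - 2 * cost(u, y0, ref)
                     + cost(u - eps, y0, ref)) / eps ** 2
                u -= g / H                        # Newton-Raphson step
            return u

        print(newton_control(y0=0.0, ref=1.0))    # control driving y toward the reference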

  2. Generating Adaptive Behaviour within a Memory-Prediction Framework

    PubMed Central

    Rawlinson, David; Kowadlo, Gideon

    2012-01-01

    The Memory-Prediction Framework (MPF) and its Hierarchical-Temporal Memory implementation (HTM) have been widely applied to unsupervised learning problems, for both classification and prediction. To date, there has been no attempt to incorporate MPF/HTM in reinforcement learning or other adaptive systems; that is, to use knowledge embodied within the hierarchy to control a system, or to generate behaviour for an agent. This problem is interesting because the human neocortex is believed to play a vital role in the generation of behaviour, and the MPF is a model of the human neocortex. We propose some simple and biologically-plausible enhancements to the Memory-Prediction Framework. These cause it to explore and interact with an external world, while trying to maximize a continuous, time-varying reward function. All behaviour is generated and controlled within the MPF hierarchy. The hierarchy develops from a random initial configuration by interaction with the world and reinforcement learning only. Among other demonstrations, we show that a 2-node hierarchy can learn to successfully play “rocks, paper, scissors” against a predictable opponent. PMID:22272231

  3. Predictive Simulation Generates Human Adaptations during Loaded and Inclined Walking

    PubMed Central

    Hicks, Jennifer L.; Delp, Scott L.

    2015-01-01

    Predictive simulation is a powerful approach for analyzing human locomotion. Unlike techniques that track experimental data, predictive simulations synthesize gaits by minimizing a high-level objective such as metabolic energy expenditure while satisfying task requirements like achieving a target velocity. The fidelity of predictive gait simulations has only been systematically evaluated for locomotion data on flat ground. In this study, we construct a predictive simulation framework based on energy minimization and use it to generate normal walking, along with walking with a range of carried loads and up a range of inclines. The simulation is muscle-driven and includes controllers based on muscle force and stretch reflexes and contact state of the legs. We demonstrate how human-like locomotor strategies emerge from adapting the model to a range of environmental changes. Our simulation dynamics not only show good agreement with experimental data for normal walking on flat ground (92% of joint angle trajectories and 78% of joint torque trajectories lie within 1 standard deviation of experimental data), but also reproduce many of the salient changes in joint angles, joint moments, muscle coordination, and metabolic energy expenditure observed in experimental studies of loaded and inclined walking. PMID:25830913

  4. Validation of finite-element codes for prediction of machining distortions in forgings

    NASA Astrophysics Data System (ADS)

    Chandra, U.

    1993-06-01

    When a forging is machined to its net shape, unacceptably large distortions can occur if the final part shape is complex. Such distortions can be predicted with the application of the finite-element method. However, numerical errors associated with the finite element technique can render such predictions unreliable. This paper presents benchmark problems for verifying the accuracy of machining distortions predicted by any prospective finite-element code. Also, a comparison between two industry-standard general-purpose codes, ANSYS and ABAQUS, is presented.

  5. Maneuvering Rotorcraft Noise Prediction: A New Code for a New Problem

    NASA Technical Reports Server (NTRS)

    Brentner, Kenneth S.; Bres, Guillaume A.; Perez, Guillaume; Jones, Henry E.

    2002-01-01

    This paper presents the unique aspects of the development of an entirely new maneuver noise prediction code called PSU-WOPWOP. The main focus of the code is the aeroacoustic aspect of the maneuver noise problem, when the aeromechanical input data (namely aircraft and blade motion, and blade airloads) are provided. The PSU-WOPWOP noise prediction capability was developed for rotors in steady and transient maneuvering flight. Featuring an object-oriented design, the code allows great flexibility for complex rotor configurations and motion (including multiple rotors and full aircraft motion). The relative locations and numbers of hinges, flexures, and body motions can be arbitrarily specified to match any specific rotorcraft. An analysis of algorithm efficiency is performed for maneuver noise prediction, along with a description of the tradeoffs made specifically for the maneuvering noise problem. Noise predictions are demonstrated for the main rotor of a rotorcraft in steady descent, transient (arrested) descent, hover, and a mild "pop-up" maneuver.

  6. Orbitofrontal Cortex Signals Expected Outcomes with Predictive Codes When Stable Contingencies Promote the Integration of Reward History.

    PubMed

    Riceberg, Justin S; Shapiro, Matthew L

    2017-02-22

    Memory can inform goal-directed behavior by linking current opportunities to past outcomes. The orbitofrontal cortex (OFC) may guide value-based responses by integrating the history of stimulus-reward associations into expected outcomes, representations of predicted hedonic value and quality. Alternatively, the OFC may rapidly compute flexible "online" reward predictions by associating stimuli with the latest outcome. OFC neurons develop predictive codes when rats learn to associate arbitrary stimuli with outcomes, but the extent to which predictive coding depends on most recent events and the integrated history of rewards is unclear. To investigate how reward history modulates OFC activity, we recorded OFC ensembles as rats performed spatial discriminations that differed only in the number of rewarded trials between goal reversals. The firing rate of single OFC neurons distinguished identical behaviors guided by different goals. When >20 rewarded trials separated goal switches, OFC ensembles developed stable and anticorrelated population vectors that predicted overall choice accuracy and the goal selected in single trials. When <10 rewarded trials separated goal switches, OFC population vectors decorrelated rapidly after each switch, but did not develop anticorrelated firing patterns or predict choice accuracy. The results show that, whereas OFC signals respond rapidly to contingency changes, they predict choices only when reward history is relatively stable, suggesting that consecutive rewarded episodes are needed for OFC computations that integrate reward history into expected outcomes. SIGNIFICANCE STATEMENT Adapting to changing contingencies and making decisions engages the orbitofrontal cortex (OFC). Previous work shows that OFC function can either improve or impair learning depending on reward stability, suggesting that OFC guides behavior optimally when contingencies apply consistently. The mechanisms that link reward history to OFC computations remain

  7. Simultaneous learning and filtering without delusions: a Bayes-optimal combination of Predictive Inference and Adaptive Filtering.

    PubMed

    Kneissler, Jan; Drugowitsch, Jan; Friston, Karl; Butz, Martin V

    2015-01-01

    Predictive coding appears to be one of the fundamental working principles of brain processing. Amongst other aspects, brains often predict the sensory consequences of their own actions. Predictive coding resembles Kalman filtering, where incoming sensory information is filtered to produce prediction errors for subsequent adaptation and learning. However, to generate prediction errors given motor commands, a suitable temporal forward model is required to generate predictions. While in engineering applications, it is usually assumed that this forward model is known, the brain has to learn it. When filtering sensory input and learning from the residual signal in parallel, a fundamental problem arises: the system can enter a delusional loop when filtering the sensory information using an overly trusted forward model. In this case, learning stalls before accurate convergence because uncertainty about the forward model is not properly accommodated. We present a Bayes-optimal solution to this generic and pernicious problem for the case of linear forward models, which we call Predictive Inference and Adaptive Filtering (PIAF). PIAF filters incoming sensory information and learns the forward model simultaneously. We show that PIAF is formally related to Kalman filtering and to the Recursive Least Squares linear approximation method, but combines these procedures in a Bayes optimal fashion. Numerical evaluations confirm that the delusional loop is precluded and that the learning of the forward model is more than 10-times faster when compared to a naive combination of Kalman filtering and Recursive Least Squares.
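
    The problem PIAF addresses — filtering a latent state while simultaneously learning the forward model, without over-trusting either — can be illustrated with a standard stand-in (not the PIAF equations): augment the state with the unknown model gain and run an extended Kalman filter, so uncertainty about the forward model is carried alongside the state. All dynamics and noise levels below are invented for the example.

        import numpy as np

        rng = np.random.default_rng(2)
        a_true, q, r = 0.9, 0.01, 0.1         # true gain, process noise, sensor noise
        z = np.array([0.0, 0.5])              # joint estimate of [state x, gain a]
        P = np.diag([1.0, 1.0])               # joint uncertainty
        x = 0.0
        for t in range(300):
            u = np.sin(0.1 * t)               # known motor command
            x = a_true * x + u + np.sqrt(q) * rng.standard_normal()
            y = x + np.sqrt(r) * rng.standard_normal()
            # Predict: x' = a*x + u, a' = a; F is the Jacobian of the augmented dynamics.
            F = np.array([[z[1], z[0]], [0.0, 1.0]])
            z = np.array([z[1] * z[0] + u, z[1]])
            P = F @ P @ F.T + np.diag([q, 1e-6])
            # Update with observation y = x + noise.
            H = np.array([1.0, 0.0])
            K = P @ H / (H @ P @ H + r)
            z = z + K * (y - z[0])
            P = P - np.outer(K, H @ P)
        print(round(z[1], 2))                  # learned gain approaches the true value 0.9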

  8. A Predictive Approach to Eliminating Errors in Software Code

    NASA Technical Reports Server (NTRS)

    2006-01-01

    NASA's Metrics Data Program Data Repository is a database that stores problem, product, and metrics data. The primary goal of this data repository is to provide project data to the software community. In doing so, the Metrics Data Program collects artifacts from a large NASA dataset, generates metrics on the artifacts, and then generates reports that are made available to the public at no cost. The data that are made available to general users have been sanitized and authorized for publication through the Metrics Data Program Web site by officials representing the projects from which the data originated. The data repository is operated by NASA's Independent Verification and Validation (IV&V) Facility, which is located in Fairmont, West Virginia, a high-tech hub for emerging innovation in the Mountain State. The IV&V Facility was founded in 1993, under the NASA Office of Safety and Mission Assurance, as a direct result of recommendations made by the National Research Council and the Report of the Presidential Commission on the Space Shuttle Challenger Accident. Today, under the direction of Goddard Space Flight Center, the IV&V Facility continues its mission to provide the highest achievable levels of safety and cost-effectiveness for mission-critical software. By extending its data to public users, the facility has helped improve the safety, reliability, and quality of complex software systems throughout private industry and other government agencies. Integrated Software Metrics, Inc., is one of the organizations that has benefited from studying the metrics data. As a result, the company has evolved into a leading developer of innovative software-error prediction tools that help organizations deliver better software, on time and on budget.

  9. Dynamic optical aberration correction with adaptive coded apertures techniques in conformal imaging

    NASA Astrophysics Data System (ADS)

    Li, Yan; Hu, Bin; Zhang, Pengbin; Zhang, Binglong

    2015-02-01

    Conformal imaging systems are confronted with dynamic aberration during optical design. In classical optical design, meeting the combined requirements of field of view, optical speed, environmental adaptation, and imaging quality can be achieved only by introducing an increasingly complex aberration corrector. In recent computational imaging work, adaptive coded-aperture techniques, which have several potential advantages over more traditional optical systems, are particularly suitable for military infrared imaging systems. The merits of this new concept include low mass, volume, and moments of inertia, potentially lower costs, graceful failure modes, and steerable fields of regard with no macroscopic moving parts. An example application to conformal imaging system design, in which a set of binary coded-aperture masks is optimized, is presented in this paper; simulation results show that optical performance is closely related to the mask design and to the optimization of the reconstruction algorithm. As a dynamic aberration corrector, a binary-amplitude mask located at the aperture stop is optimized to mitigate dynamic optical aberrations as the field of regard changes, while allowing sufficient information to be recorded by the detector for recovery of a sharp image using digital image restoration in a conformal optical system.

  10. Flexible Coding of Task Rules in Frontoparietal Cortex: An Adaptive System for Flexible Cognitive Control.

    PubMed

    Woolgar, Alexandra; Afshar, Soheil; Williams, Mark A; Rich, Anina N

    2015-10-01

    How do our brains achieve the cognitive control that is required for flexible behavior? Several models of cognitive control propose a role for frontoparietal cortex in the structure and representation of task sets or rules. For behavior to be flexible, however, the system must also rapidly reorganize as mental focus changes. Here we used multivoxel pattern analysis of fMRI data to demonstrate adaptive reorganization of frontoparietal activity patterns following a change in the complexity of the task rules. When task rules were relatively simple, frontoparietal cortex did not hold detectable information about these rules. In contrast, when the rules were more complex, frontoparietal cortex showed clear and decodable rule discrimination. Our data demonstrate that frontoparietal activity adjusts to task complexity, with better discrimination of rules that are behaviorally more confusable. The change in coding was specific to the rule element of the task and was not mirrored in more specialized cortex (early visual cortex) where coding was independent of difficulty. In line with an adaptive view of frontoparietal function, the data suggest a system that rapidly reconfigures in accordance with the difficulty of a behavioral task. This system may provide a neural basis for the flexible control of human behavior.

  11. Predicting non-coding RNA genes in Escherichia coli with boosted genetic programming.

    PubMed

    Saetrom, Pål; Sneve, Ragnhild; Kristiansen, Knut I; Snøve, Ola; Grünfeld, Thomas; Rognes, Torbjørn; Seeberg, Erling

    2005-01-01

    Several methods exist for predicting non-coding RNA (ncRNA) genes in Escherichia coli (E.coli). In addition to about sixty known ncRNA genes (excluding tRNAs and rRNAs), various methods have predicted more than a thousand ncRNA genes, but only 95 of these candidates were confirmed by more than one study. Here, we introduce a new method that uses automatic discovery of sequence patterns to predict ncRNA genes. The method predicts 135 novel candidates. In addition, the method predicts 152 genes that overlap with predictions in the literature. We test sixteen predictions experimentally and show that twelve of these are actual ncRNA transcripts. Six of the twelve verified candidates were novel predictions. The relatively high confirmation rate indicates that many of the untested novel predictions are also ncRNAs, and we therefore speculate that E.coli contains more ncRNA genes than previously estimated.

  12. Complexity modeling for context-based adaptive binary arithmetic coding (CABAC) in H.264/AVC decoder

    NASA Astrophysics Data System (ADS)

    Lee, Szu-Wei; Kuo, C.-C. Jay

    2007-09-01

    One way to reduce power consumption in an H.264 decoder is for the H.264 encoder to generate decoder-friendly bit streams. Following this idea, a decoding complexity model of context-based adaptive binary arithmetic coding (CABAC) for H.264/AVC is investigated in this research. Since different coding modes affect the number of quantized transform coefficients (QTCs) and motion vectors (MVs), and consequently the complexity of entropy decoding, an encoder equipped with such a complexity model can estimate the entropy-decoding complexity and choose the coding mode that yields the best trade-off among rate, distortion, and decoding complexity. The complexity model consists of two parts: one for source data (i.e., QTCs) and the other for header data (i.e., the macroblock (MB) type and MVs). Thus, the proposed CABAC decoding complexity model of an MB is a function of its QTCs and associated MVs, which is verified experimentally. The model provides good estimation results across a variety of bit streams. Practical applications of this complexity model are also discussed.
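    As a rough illustration of how such a model can drive the mode decision (the linear form and all coefficients below are illustrative assumptions, not the paper's experimentally fitted model), per-macroblock complexity estimated from QTC and MV counts can be folded into a rate-distortion-complexity cost:

```python
# Sketch of a decoder-complexity-aware mode decision, assuming a linear
# complexity model C = c0 + c_qtc * num_QTC + c_mv * num_MV (coefficients
# are illustrative; the paper fits its own model experimentally).
def decoding_complexity(num_qtc, num_mv, c0=1.0, c_qtc=0.8, c_mv=2.5):
    return c0 + c_qtc * num_qtc + c_mv * num_mv

def best_mode(candidates, lam=0.85, mu=0.1):
    """Pick the coding mode minimizing J = D + lambda*R + mu*C.

    candidates -- list of dicts with per-mode distortion, rate, and counts
                  of quantized coefficients / motion vectors.
    """
    def cost(m):
        return (m["distortion"] + lam * m["rate"]
                + mu * decoding_complexity(m["num_qtc"], m["num_mv"]))
    return min(candidates, key=cost)

modes = [
    {"name": "inter16x16", "distortion": 40.0, "rate": 120, "num_qtc": 30, "num_mv": 1},
    {"name": "inter4x4",   "distortion": 25.0, "rate": 210, "num_qtc": 55, "num_mv": 16},
    {"name": "intra",      "distortion": 35.0, "rate": 150, "num_qtc": 48, "num_mv": 0},
]
print(best_mode(modes)["name"])
```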

  13. Parallelization of GeoClaw code for modeling geophysical flows with adaptive mesh refinement on many-core systems

    USGS Publications Warehouse

    Zhang, S.; Yuen, D.A.; Zhu, A.; Song, S.; George, D.L.

    2011-01-01

    We parallelized the GeoClaw code on a single-level grid using OpenMP in March 2011 to meet the urgent need of simulating the near-shore tsunami waves of the 2011 Tohoku event, achieving over 75% of the potential speed-up on an eight-core Dell Precision T7500 workstation [1]. After submitting that work to SC11 - the International Conference for High Performance Computing - we obtained an unreleased OpenMP version of GeoClaw from David George, who developed the GeoClaw code as part of his Ph.D. thesis. In this paper, we show the complementary characteristics of the two approaches used in parallelizing GeoClaw and the speed-up obtained by combining the advantages of the two individual approaches with adaptive mesh refinement (AMR), demonstrating that GeoClaw can run efficiently on many-core systems. We also show a novel simulation of the 2011 Tohoku tsunami waves inundating the Sendai airport and the Fukushima nuclear power plants, in which the finest grid spacing of 20 meters is achieved through 4-level AMR. This simulation yields quite good predictions of the wave heights and travel times of the tsunami waves. © 2011 IEEE.

  14. Predicting the influence of the electronic health record on clinical coding practice in hospitals.

    PubMed

    Robinson, Kerin; Shepheard, Jennie

    2004-01-01

    The key drivers of change to clinical coding practice are identified and examined, and a major shift is predicted. The traditional purposes of the coding function have been the provision of data for research and epidemiology, in morbidity data reporting and, latterly, for casemix-based funding. It is contended that, as the development of electronic health records progresses, the need for an embedded nomenclature will force major change in clinical coding practice. Clinical coders must become expert in information technology and analysis, change their work practices, and become an integral part of the clinical team.

  15. Evaluation of in-network adaptation of scalable high efficiency video coding (SHVC) in mobile environments

    NASA Astrophysics Data System (ADS)

    Nightingale, James; Wang, Qi; Grecos, Christos; Goma, Sergio

    2014-02-01

    High Efficiency Video Coding (HEVC), the latest video compression standard (also known as H.265), can deliver video streams of comparable quality to the current H.264 Advanced Video Coding (H.264/AVC) standard with a 50% reduction in bandwidth. Research into SHVC, the scalable extension of the HEVC standard, is still in its infancy. One important question is whether, given the greater compression ratio of HEVC (and SHVC), the loss of packets containing video content has a greater impact on the quality of delivered video than is the case with H.264/AVC or its scalable extension H.264/SVC. In this work we empirically evaluate the layer-based, in-network adaptation of SHVC-encoded video streams in situations where dynamically changing bandwidths and datagram loss ratios require real-time adaptation. Through extensive experimentation, we establish a comprehensive set of benchmarks for SHVC-based high-definition video streaming in loss-prone network environments such as those commonly found in mobile networks. Among other results, we highlight that packet losses of only 1% can lead to a substantial reduction in PSNR of over 3 dB and to error propagation across over 130 pictures following the one in which the loss occurred. This is among the earliest studies in this area to report benchmark evaluation results for the effects of datagram loss on SHVC picture quality and to offer empirical and analytical insights into SHVC adaptation to lossy, mobile networking conditions.
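    PSNR, the quality metric behind these benchmarks, is computed from the mean squared error between the original and decoded pictures; a minimal sketch for 8-bit frames:

```python
import numpy as np

def psnr(reference, decoded, peak=255.0):
    """Peak signal-to-noise ratio in dB between two 8-bit frames."""
    mse = np.mean((reference.astype(float) - decoded.astype(float)) ** 2)
    if mse == 0:
        return float("inf")          # identical pictures
    return 10.0 * np.log10(peak ** 2 / mse)

# A drop of ~3 dB, as reported above for 1% packet loss, corresponds to
# roughly doubling the MSE, since 10 * log10(2) ≈ 3.01 dB.
```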

  16. FPGA-based rate-adaptive LDPC-coded modulation for the next generation of optical communication systems.

    PubMed

    Zou, Ding; Djordjevic, Ivan B

    2016-09-05

    In this paper, we propose a rate-adaptive FEC scheme based on LDPC codes, together with its software-reconfigurable unified FPGA architecture. By FPGA emulation, we demonstrate that the proposed class of rate-adaptive LDPC codes, based on shortening with an overhead from 25% to 42.9%, provides a coding gain ranging from 13.08 dB to 14.28 dB at a post-FEC BER of 10^-15 for BPSK transmission. In addition, the proposed rate-adaptive LDPC coding has been demonstrated in combination with higher-order modulations, including QPSK, 8-QAM, 16-QAM, 32-QAM, and 64-QAM, covering a wide range of signal-to-noise ratios. Furthermore, we apply unequal error protection by employing different LDPC codes on different bits in 16-QAM and 64-QAM, which yields an additional 0.5 dB gain compared to conventional LDPC-coded modulation with the same code rate.
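    Rate adaptation by shortening keeps the parity structure of a mother code fixed and varies how many information bits are zeroed and left untransmitted; the resulting rate and overhead arithmetic is a one-liner. A sketch assuming a hypothetical (n, k) mother code:

```python
def shortened_code_params(n, k, s):
    """Code rate and FEC overhead of an (n, k) LDPC code shortened by s bits.

    Shortening fixes s information bits to zero; they are encoded but not
    transmitted, so the effective code is (n - s, k - s).
    """
    rate = (k - s) / (n - s)
    overhead = (n - k) / (k - s)   # parity bits per transmitted info bit
    return rate, overhead

# Sweeping s trades rate for protection; e.g., a hypothetical (8000, 6400)
# mother code starts at 25% overhead, as in the range quoted above.
for s in (0, 800, 1600, 2400):
    rate, oh = shortened_code_params(8000, 6400, s)
    print(f"s={s:4d}  rate={rate:.3f}  overhead={oh:.1%}")
```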

  17. EMdeCODE: a novel algorithm capable of reading words of epigenetic code to predict enhancers and retroviral integration sites and to identify H3R2me1 as a distinctive mark of coding versus non-coding genes

    PubMed Central

    Santoni, Federico Andrea

    2013-01-01

    The existence of extra-genetic (epigenetic) codes has been postulated since the discovery of the primary genetic code. The evident effects of histone post-translational modifications and DNA methylation on the efficiency and regulation of DNA processes support this postulation. EMdeCODE is an original algorithm that approximates the genomic distribution of given DNA features (e.g. promoters, enhancers, viral integration sites) by identifying relevant ChIP-Seq profiles of post-translational histone marks or DNA-binding proteins and combining them into a supermark. The EMdeCODE kernel is essentially a two-step procedure: (i) an expectation-maximization process calculates the mixture of epigenetic factors that maximizes the sensitivity (recall) of the association with the feature under study; (ii) the approximated density is then recursively trimmed with respect to a control dataset to increase precision by reducing the number of false positives. EMdeCODE densities significantly improve the prediction of enhancer loci and retroviral integration sites with respect to previous methods. Importantly, the algorithm can also be used to extract distinctive factors between two arbitrary conditions. Indeed, EMdeCODE identifies unexpected epigenetic profiles specific to coding versus non-coding RNA, pointing towards a new role for H3R2me1 in coding regions. PMID:23234700

  18. Neural markers of predictive coding under perceptual uncertainty revealed with Hierarchical Frequency Tagging

    PubMed Central

    Gordon, Noam; Koenig-Robert, Roger; Tsuchiya, Naotsugu; van Boxtel, Jeroen JA; Hohwy, Jakob

    2017-01-01

    There is a growing understanding that both top-down and bottom-up signals underlie perception. But it is not known how these signals integrate with each other and how this depends on the perceived stimuli’s predictability. ‘Predictive coding’ theories describe this integration in terms of how well top-down predictions fit with bottom-up sensory input. Identifying neural markers for such signal integration is therefore essential for the study of perception and predictive coding theories. To achieve this, we combined EEG methods that preferentially tag different levels in the visual hierarchy. Importantly, we examined intermodulation components as a measure of integration between these signals. Our results link the different signals to core aspects of predictive coding, and suggest that top-down predictions indeed integrate with bottom-up signals in a manner that is modulated by the predictability of the sensory input, providing evidence for predictive coding and opening new avenues to studying such interactions in perception. DOI: http://dx.doi.org/10.7554/eLife.22749.001 PMID:28244874
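    Frequency tagging labels different levels of the visual hierarchy with distinct presentation frequencies, and integration between levels then shows up at intermodulation frequencies such as f1 + f2. A minimal sketch of how such a component can be detected in a simulated signal (the frequencies and the multiplicative nonlinearity are illustrative assumptions, not the study's stimulation parameters):

```python
import numpy as np

fs, dur = 1000.0, 10.0                      # sampling rate (Hz), duration (s)
t = np.arange(0, dur, 1 / fs)
f1, f2 = 7.0, 11.0                          # two "tagging" frequencies

x1 = np.sin(2 * np.pi * f1 * t)             # lower-level (stimulus) drive
x2 = np.sin(2 * np.pi * f2 * t)             # higher-level (structure) drive
linear = x1 + x2                            # no integration: energy at f1, f2 only
nonlinear = linear + 0.3 * x1 * x2          # integration: adds f1±f2 components

def amplitude_at(signal, freq):
    spectrum = np.fft.rfft(signal) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    return 2 * np.abs(spectrum[np.argmin(np.abs(freqs - freq))])

for name, sig in (("linear", linear), ("nonlinear", nonlinear)):
    print(name, "amplitude at f1+f2 =", round(amplitude_at(sig, f1 + f2), 3))
# Only the nonlinear (integrated) signal shows energy at f1 + f2 = 18 Hz.
```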

  19. Adaptive model predictive process control using neural networks

    DOEpatents

    Buescher, Kevin L.; Baum, Christopher C.; Jones, Roger D.

    1997-01-01

    A control system for controlling the output of at least one plant process output parameter is implemented by adaptive model predictive control using a neural network. An improved method and apparatus provides for sampling plant output and control input at a first sampling rate to provide control inputs at the fast rate. The MPC system is, however, provided with a network state vector that is constructed at a second, slower rate so that the input control values used by the MPC system are averaged over a gapped time period. Another improvement is a provision for on-line training that may include difference training, curvature training, and basis center adjustment to maintain the weights and basis centers of the neural network in an updated state that can follow changes in the plant operation apart from initial off-line training data.

  20. Adaptive model predictive process control using neural networks

    DOEpatents

    Buescher, K.L.; Baum, C.C.; Jones, R.D.

    1997-08-19

    A control system for controlling the output of at least one plant process output parameter is implemented by adaptive model predictive control using a neural network. An improved method and apparatus provides for sampling plant output and control input at a first sampling rate to provide control inputs at the fast rate. The MPC system is, however, provided with a network state vector that is constructed at a second, slower rate so that the input control values used by the MPC system are averaged over a gapped time period. Another improvement is a provision for on-line training that may include difference training, curvature training, and basis center adjustment to maintain the weights and basis centers of the neural network in an updated state that can follow changes in the plant operation apart from initial off-line training data. 46 figs.
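    As a loose illustration of the receding-horizon idea behind such a controller (not the patented system: the plant, the stand-in "neural" model, and the averaging gap below are all invented for the sketch), one can pick, at each fast sampling step, the control input whose model-predicted output best tracks the setpoint, while a slower-rate average of recent inputs plays the role of the network state vector:

```python
import numpy as np

def plant(y, u):                      # hypothetical true plant dynamics
    return 0.9 * y + 0.5 * np.tanh(u) + 0.01 * np.random.randn()

def model(y, u_avg, u):               # stand-in for the trained neural model
    return 0.9 * y + 0.5 * np.tanh(u) + 0.02 * u_avg

setpoint, y = 1.0, 0.0
gap, history = 4, []                  # "gapped" averaging window (fast samples)
candidates = np.linspace(-2, 2, 41)   # coarse search over control inputs
for k in range(50):
    u_avg = np.mean(history[-gap:]) if history else 0.0
    u = min(candidates, key=lambda c: (model(y, u_avg, c) - setpoint) ** 2)
    history.append(u)
    y = plant(y, u)
print(f"final output ≈ {y:.2f} (setpoint {setpoint})")
```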

  1. Spatially adaptive bases in wavelet-based coding of semi-regular meshes

    NASA Astrophysics Data System (ADS)

    Denis, Leon; Florea, Ruxandra; Munteanu, Adrian; Schelkens, Peter

    2010-05-01

    In this paper we present a wavelet-based coding approach for semi-regular meshes which spatially adapts the wavelet basis employed in the wavelet transformation of the mesh. The spatially adaptive nature of the transform requires additional information to be stored in the bit-stream so that the transformed mesh can be reconstructed at the decoder side. To limit this overhead, the mesh is first segmented into regions of approximately equal size. For each spatial region, a predictor is selected in a rate-distortion-optimal manner using a Lagrangian rate-distortion optimization technique. When compared against the classical wavelet transform employing the butterfly subdivision filter, experiments reveal that the proposed spatially adaptive wavelet transform significantly decreases the energy of the wavelet coefficients in all subbands. Preliminary results also show that employing the proposed transform for the lowest-resolution subband systematically yields improved compression performance at low-to-medium bit rates. For the Venus and Rabbit test models the compression improvements reach 1.47 dB and 0.95 dB, respectively.
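    The per-region predictor selection follows the standard Lagrangian recipe: for each candidate predictor, measure the distortion of the resulting coefficients and the rate of coding them (including the side information signaling the choice), then keep the minimizer of D + λR. A schematic sketch with illustrative numbers:

```python
# Schematic rate-distortion-optimal predictor selection per mesh region
# (illustrative values; the paper measures D and R from actual transforms).
def select_predictor(region_stats, lam=0.02):
    """region_stats maps predictor name -> (distortion, rate_in_bits),
    where the rate includes the side information signaling the choice."""
    return min(region_stats,
               key=lambda p: region_stats[p][0] + lam * region_stats[p][1])

region = {
    "butterfly":   (12.4, 3100),   # classical subdivision-based predictor
    "average":     (15.0, 2600),
    "directional": (10.9, 3400),
}
print(select_predictor(region, lam=0.02))
```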

  2. Hyperspectral image compression: adapting SPIHT and EZW to anisotropic 3-D wavelet coding.

    PubMed

    Christophe, Emmanuel; Mailhes, Corinne; Duhamel, Pierre

    2008-12-01

    Hyperspectral images have specific characteristics that an efficient compression system should exploit. In compression, wavelets have shown good adaptability to a wide range of data while remaining of reasonable complexity, and wavelet-based compression algorithms have been used successfully on several hyperspectral space missions. This paper focuses on the optimization of a full wavelet compression system for hyperspectral images. Each step of the compression algorithm is studied and optimized. First, an algorithm is defined to find the optimal 3-D wavelet decomposition in a rate-distortion sense. Then, it is shown that a specific fixed decomposition has almost the same performance while being more practical in terms of complexity. This decomposition significantly improves on the classical isotropic decomposition, and one of its most useful properties is that it allows the use of zerotree algorithms. Various tree structures, creating relationships between coefficients, are compared. Two efficient compression methods based on zerotree coding (EZW and SPIHT) are adapted to this near-optimal decomposition with the best tree structure found. Performance is compared with an adaptation of JPEG 2000 for hyperspectral images on six areas with different statistical properties.

  3. Improving Inpatient Surveys: Web-Based Computer Adaptive Testing Accessed via Mobile Phone QR Codes

    PubMed Central

    2016-01-01

    Background: The National Health Service (NHS) 70-item inpatient questionnaire surveys inpatients on their perceptions of their hospitalization experience. However, it imposes more burden on the patient than other similar surveys. The literature shows that computerized adaptive testing (CAT) based on item response theory can help shorten the item length of a questionnaire without compromising its precision. Objective: Our aim was to investigate whether CAT can be (1) efficient with item reduction and (2) used with quick response (QR) codes scanned by mobile phones. Methods: After downloading the 2008 inpatient survey data from the Picker Institute Europe website and analyzing the difficulties of this 70-item questionnaire, we used an author-made Excel program based on the Rasch partial credit model to simulate 1000 patients' true scores following a standard normal distribution. The CAT was compared to two other scenarios, answering all items (AAI) and a randomized selection method (RSM), with respect to item length (efficiency) and measurement accuracy. The author-made Web-based CAT program for gathering patient feedback was effectively accessed from mobile phones by scanning the QR code. Results: We found that the CAT requires patients to answer fewer items than either AAI or RSM without compromising measurement accuracy. A Web-based CAT inpatient survey accessed by scanning a QR code on a mobile phone was viable for gathering inpatient satisfaction responses. Conclusions: With advances in technology, patients can now be offered alternatives for providing feedback about hospitalization satisfaction. This Web-based CAT is a possible option in health care settings for reducing the number of survey items, as well as offering innovative QR code access. PMID:26935793
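    The adaptive loop behind such a CAT is compact: re-estimate the respondent's latent trait after each response, then administer the unanswered item with maximum Fisher information at the current estimate. A minimal sketch using the dichotomous Rasch model for brevity (the study itself uses the partial credit model; the item bank and all numbers are illustrative):

```python
import numpy as np

def p_correct(theta, b):
    """Dichotomous Rasch model: P(response = 1 | trait theta, difficulty b)."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def estimate_theta(responses, bs, grid=np.linspace(-4, 4, 161)):
    """Maximum-likelihood trait estimate over a grid."""
    ll = np.zeros_like(grid)
    for r, b in zip(responses, bs):
        p = p_correct(grid, b)
        ll += np.log(p if r else 1 - p)
    return grid[np.argmax(ll)]

def next_item(theta, bs, asked):
    """Pick the unasked item with maximum information I = p * (1 - p)."""
    info = [p_correct(theta, b) * (1 - p_correct(theta, b)) if i not in asked else -1
            for i, b in enumerate(bs)]
    return int(np.argmax(info))

rng = np.random.default_rng(1)
bs = np.linspace(-2, 2, 20)          # illustrative item-difficulty bank
true_theta, asked, responses = 0.7, [], []
theta = 0.0                          # start at the population mean
for _ in range(8):                   # 8 adaptive items instead of all 20
    i = next_item(theta, bs, asked)
    asked.append(i)
    responses.append(rng.random() < p_correct(true_theta, bs[i]))
    theta = estimate_theta(responses, [bs[j] for j in asked])
print(f"estimated theta = {theta:.2f} (true {true_theta})")
```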

  4. TFaNS Tone Fan Noise Design/Prediction System. Volume 3; Evaluation of System Codes

    NASA Technical Reports Server (NTRS)

    Topol, David A.

    1999-01-01

    TFANS is the Tone Fan Noise Design/Prediction System developed by Pratt & Whitney under contract to NASA Lewis (presently NASA Glenn). The purpose of this system is to predict tone noise emanating from a fan stage, including the effects of reflection and transmission by the rotor and stator and by the duct inlet and nozzle. These effects have been added to an existing annular duct/isolated stator noise prediction capability. TFANS consists of: the codes that compute the acoustic properties (reflection and transmission coefficients) of the various elements and write them to files; CUP3D, the Fan Noise Coupling Code that reads these files, solves the coupling problem, and outputs the desired noise predictions; and AWAKEN, the CFD/Measured Wake Postprocessor, which reformats CFD wake predictions and/or measured wake data so they can be used by the system. This volume of the report evaluates TFANS against full-scale and ADP 22" rig data using the semi-empirical wake modeling in the system. The report is divided into three volumes: Volume I: System Description, CUP3D Technical Documentation, and Manual for Code Developers; Volume II: User's Manual, TFANS Version 1.4; Volume III: Evaluation of System Codes.

  5. Vector adaptive predictive coder for speech and audio

    NASA Technical Reports Server (NTRS)

    Chen, Juin-Hwey (Inventor); Gersho, Allen (Inventor)

    1990-01-01

    A real-time vector adaptive predictive coder which approximates each vector of K speech samples by using each of M fixed vectors in a first codebook to excite a time-varying synthesis filter and picking the vector that minimizes distortion. Predictive analysis for each frame determines parameters used for computing, from the vectors in the first codebook, zero-state response vectors that are stored at the same address (index) in a second codebook. Encoding of input speech vectors s_n is then carried out using the second codebook. When the vector that minimizes distortion is found, its index is transmitted to a decoder which has a codebook identical to the first codebook of the encoder. There the index is used to read out a vector that is used to synthesize an output speech vector s_n. The parameters used in the encoder are quantized, for example by using a table, and the indices are transmitted to the decoder where they are decoded to specify the transfer characteristics of the filters used in producing the vector s_n from the receiver codebook vector selected by the transmitted vector index.
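    The encoder's inner loop is an analysis-by-synthesis search: each candidate codebook vector is passed through the synthesis filter and the index minimizing the distortion is transmitted. A bare-bones sketch with random stand-ins for the trained codebook and filter; the patent's second codebook corresponds to precomputing the zero-state responses once per frame instead of filtering inside the loop:

```python
import numpy as np
from scipy.signal import lfilter

K, M = 8, 64                              # vector length, codebook size
rng = np.random.default_rng(0)
codebook = rng.standard_normal((M, K))    # stand-in for the trained codebook
a = [1.0, -0.9]                           # synthesis filter 1/A(z), illustrative

def search(target, zero_input_response):
    """Return the index of the codebook vector minimizing ||target - synth||^2."""
    best_i, best_err = -1, np.inf
    for i, cv in enumerate(codebook):
        # Zero-state response of the synthesis filter to this candidate,
        # plus the filter's ringing from previous vectors.
        synth = lfilter([1.0], a, cv) + zero_input_response
        err = np.sum((target - synth) ** 2)
        if err < best_err:
            best_i, best_err = i, err
    return best_i

target = rng.standard_normal(K)           # current input speech vector
idx = search(target, np.zeros(K))         # only this index is transmitted
```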

  6. Application of TURBO-AE to Flutter Prediction: Aeroelastic Code Development

    NASA Technical Reports Server (NTRS)

    Hoyniak, Daniel; Simons, Todd A.; Stefko, George (Technical Monitor)

    2001-01-01

    The TURBO-AE program has been evaluated by comparing its results to cascade rig data and to predictions made with various in-house programs. A high-speed fan cascade, a turbine cascade, and a fan geometry that showed flutter in the torsion mode were analyzed. The steady predictions for the high-speed fan cascade showed the TURBO-AE predictions to match those of the in-house codes. However, the predictions did not match the measured blade surface data; other researchers have also reported similar disagreement with this data set. Unsteady runs for the fan configuration were not successful using TURBO-AE.

  7. Coding and decoding with adapting neurons: a population approach to the peri-stimulus time histogram.

    PubMed

    Naud, Richard; Gerstner, Wulfram

    2012-01-01

    The response of a neuron to a time-dependent stimulus, as measured in a Peri-Stimulus-Time-Histogram (PSTH), exhibits an intricate temporal structure that reflects potential temporal coding principles. Here we analyze the encoding and decoding of PSTHs for spiking neurons with arbitrary refractoriness and adaptation. As a modeling framework, we use the spike response model, also known as the generalized linear neuron model. Because of refractoriness, the effect of the most recent spike on the spiking probability a few milliseconds later is very strong. The influence of the last spike needs therefore to be described with high precision, while the rest of the neuronal spiking history merely introduces an average self-inhibition or adaptation that depends on the expected number of past spikes but not on the exact spike timings. Based on these insights, we derive a 'quasi-renewal equation' which is shown to yield an excellent description of the firing rate of adapting neurons. We explore the domain of validity of the quasi-renewal equation and compare it with other rate equations for populations of spiking neurons. The problem of decoding the stimulus from the population response (or PSTH) is addressed analogously. We find that for small levels of activity and weak adaptation, a simple accumulator of the past activity is sufficient to decode the original input, but when refractory effects become large decoding becomes a non-linear function of the past activity. The results presented here can be applied to the mean-field analysis of coupled neuron networks, but also to arbitrary point processes with negative self-interaction.
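    The encoding half of this framework can be illustrated by simulating a population of such neurons directly: each spike feeds a history kernel (strong brief refractoriness plus slow adaptation) back into the membrane potential, and the PSTH is the population-averaged spike count. A rough sketch in the spirit of the spike response model, with all parameters illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T, N = 0.001, 2.0, 2000              # time step (s), duration (s), neurons
steps = int(T / dt)
t = np.arange(steps) * dt
stimulus = 2.0 + 1.5 * np.sin(2 * np.pi * 3 * t)   # time-dependent drive

eta_len = 300                            # spike-history kernel length (300 ms)
tau = np.arange(1, eta_len + 1) * dt
# Strong brief refractoriness plus a slow adaptation tail after each spike.
eta = -8.0 * np.exp(-tau / 0.005) - 1.0 * np.exp(-tau / 0.1)

spikes = np.zeros((N, steps), dtype=bool)
h = np.tile(stimulus, (N, 1))            # input potential per neuron
for k in range(steps):
    rate = np.exp(h[:, k])               # exponential link -> firing intensity
    spk = rng.random(N) < rate * dt
    spikes[:, k] = spk
    end = min(steps, k + 1 + eta_len)
    h[spk, k + 1:end] += eta[:end - k - 1]   # add the history kernel

psth = spikes.mean(axis=0) / dt          # population rate estimate (Hz)
```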

  8. The Cortical Organization of Speech Processing: Feedback Control and Predictive Coding in the Context of a Dual-Stream Model

    ERIC Educational Resources Information Center

    Hickok, Gregory

    2012-01-01

    Speech recognition is an active process that involves some form of predictive coding. This statement is relatively uncontroversial. What is less clear is the source of the prediction. The dual-stream model of speech processing suggests that there are two possible sources of predictive coding in speech perception: the motor speech system and the…

  9. Flexible Radiation Codes for Numerical Weather Prediction Across Space and Time Scales

    DTIC Science & Technology

    2013-09-30

    …time and space scales, especially from regional models to global models. OBJECTIVES: We are adapting radiation codes developed for climate … PSrad is now complete, thoroughly tested and debugged, and is functioning as the radiation scheme in the climate model ECHAM 6.2 developed at the Max Planck … statistically significant change at most stations, indicating that errors in most places are not primarily driven by radiation errors. We are working…

  10. Adaptive coded spreading OFDM signal for dynamic-λ optical access network

    NASA Astrophysics Data System (ADS)

    Liu, Bo; Zhang, Lijia; Xin, Xiangjun

    2015-12-01

    This paper proposes and experimentally demonstrates a novel adaptive coded spreading (ACS) orthogonal frequency division multiplexing (OFDM) signal for a dynamic distributed optical ring-based access network. The wavelength can be assigned to different remote nodes (RNs) according to the traffic demand of the optical network units (ONUs). The ACS can provide dynamic spreading gain to different signals according to the split ratio or transmission length, which offers a flexible power budget for the network. A 10×13.12 Gb/s OFDM access with ACS is successfully demonstrated over two RNs and 120 km of transmission in the experiment. The demonstrated method may be viewed as a promising candidate for future optical metro-access networks.

  11. Image sensor system with bio-inspired efficient coding and adaptation.

    PubMed

    Okuno, Hirotsugu; Yagi, Tetsuya

    2012-08-01

    We designed and implemented an image sensor system equipped with three bio-inspired coding and adaptation strategies: logarithmic transform, local average subtraction, and feedback gain control. The system comprises a field-programmable gate array (FPGA), a resistive network, and active pixel sensors (APS), whose light intensity-voltage characteristics are controllable. The system employs multiple time-varying reset voltage signals for APS in order to realize multiple logarithmic intensity-voltage characteristics, which are controlled so that the entropy of the output image is maximized. The system also employs local average subtraction and gain control in order to obtain images with an appropriate contrast. The local average is calculated by the resistive network instantaneously. The designed system was successfully used to obtain appropriate images of objects that were subjected to large changes in illumination.
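    The three strategies map onto a short processing chain: compress dynamic range logarithmically, subtract a local average to extract contrast, and scale the result by a feedback gain. A software analogue with illustrative parameters (the actual system realizes these stages in an FPGA, a resistive network, and the APS characteristics):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def bio_inspired_encode(image, window=15, target_rms=0.2, eps=1e-6):
    """Log transform + local average subtraction + feedback gain control."""
    log_img = np.log1p(image.astype(float))           # compress dynamic range
    local_avg = uniform_filter(log_img, size=window)  # resistive-network analogue
    contrast = log_img - local_avg                    # center-surround signal
    gain = target_rms / (contrast.std() + eps)        # feedback gain control
    return np.clip(0.5 + gain * contrast, 0.0, 1.0)

# Usage: a scene with a 1000:1 illumination gradient across the frame.
rng = np.random.default_rng(0)
scene = rng.random((64, 64)) * np.linspace(1, 1000, 64)[None, :]
encoded = bio_inspired_encode(scene)
```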

  12. MPI parallelization of full PIC simulation code with Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Matsui, Tatsuki; Nunami, Masanori; Usui, Hideyuki; Moritaka, Toseo

    2010-11-01

    A new parallelization technique developed for the PIC method with adaptive mesh refinement (AMR) is introduced. In the AMR technique, the complicated cell arrangements are organized and managed as interconnected pointers with multiple resolution levels, forming a fully threaded tree structure as a whole. In order to keep this tree structure distributed over multiple processes, remote memory access, an extended feature of the MPI-2 standard, is employed. Another important feature of the present simulation technique is domain decomposition according to a modified Morton ordering. This ordering groups together approximately equal numbers of particle-calculation loops, which allows for better load balance. Using this advanced simulation code, preliminary results for basic physical problems are presented as a validity check, together with benchmarks of performance and scalability.
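    Morton (Z-order) indices interleave the bits of cell coordinates so that nearby cells tend to receive nearby indices; cutting the resulting one-dimensional ordering into segments of roughly equal work yields the domain decomposition. A minimal 2-D sketch (the paper's modified ordering and the AMR tree are beyond this illustration):

```python
def morton_index(x, y, bits=16):
    """Interleave the bits of (x, y) into a Z-order (Morton) index."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)      # even bit positions from x
        z |= ((y >> i) & 1) << (2 * i + 1)  # odd bit positions from y
    return z

def decompose(cells, weights, nprocs):
    """Split cells, sorted along the Morton curve, into nprocs chunks of
    roughly equal total weight (e.g., particle counts per cell)."""
    order = sorted(range(len(cells)), key=lambda i: morton_index(*cells[i]))
    per_proc = sum(weights) / nprocs
    chunks, acc, current = [], 0.0, []
    for i in order:
        current.append(cells[i])
        acc += weights[i]
        if acc >= per_proc * (len(chunks) + 1) and len(chunks) < nprocs - 1:
            chunks.append(current)
            current = []
    chunks.append(current)
    return chunks

cells = [(x, y) for x in range(8) for y in range(8)]
weights = [1 + (x + y) % 3 for x, y in cells]       # uneven particle loads
parts = decompose(cells, weights, nprocs=4)
print([len(p) for p in parts])
```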

  13. TRAP/SEE Code Users Manual for Predicting Trapped Radiation Environments

    NASA Astrophysics Data System (ADS)

    Armstrong, T. W.; Colborn, B. L.

    2000-01-01

    TRAP/SEE is a PC-based computer code with a user-friendly interface which predicts the ionizing radiation exposure of spacecraft having orbits in the Earth's trapped radiation belts. The code incorporates the standard AP8 and AE8 trapped proton and electron models but also allows application of an improved database interpolation method. The code treats low-Earth as well as highly-elliptical Earth orbits, taking into account trajectory perturbations due to gravitational forces from the Moon and Sun, atmospheric drag, and solar radiation pressure. Orbit-average spectra, peak spectra per orbit, and instantaneous spectra at points along the orbit trajectory are calculated. Described in this report are the features, models, model limitations and uncertainties, input and output descriptions, and example calculations and applications for the TRAP/SEE code.

  14. Development of a framework and coding system for modifications and adaptations of evidence-based interventions

    PubMed Central

    2013-01-01

    Background Evidence-based interventions are frequently modified or adapted during the implementation process. Changes may be made to protocols to meet the needs of the target population or address differences between the context in which the intervention was originally designed and the one into which it is implemented [Addict Behav 2011, 36(6):630–635]. However, whether modification compromises or enhances the desired benefits of the intervention is not well understood. A challenge to understanding the impact of specific types of modifications is a lack of attention to characterizing the different types of changes that may occur. A system for classifying the types of modifications that are made when interventions and programs are implemented can facilitate efforts to understand the nature of modifications that are made in particular contexts as well as the impact of these modifications on outcomes of interest. Methods We developed a system for classifying modifications made to interventions and programs across a variety of fields and settings. We then coded 258 modifications identified in 32 published articles that described interventions implemented in routine care or community settings. Results We identified modifications made to the content of interventions, as well as to the context in which interventions are delivered. We identified 12 different types of content modifications, and our coding scheme also included ratings for the level at which these modifications were made (ranging from the individual patient level up to a hospital network or community). We identified five types of contextual modifications (changes to the format, setting, or patient population that do not in and of themselves alter the actual content of the intervention). We also developed codes to indicate who made the modifications and identified a smaller subset of modifications made to the ways that training or evaluations occur when evidence-based interventions are implemented. Rater

  15. User's manual for the ALS base heating prediction code, volume 2

    NASA Technical Reports Server (NTRS)

    Reardon, John E.; Fulton, Michael S.

    1992-01-01

    The Advanced Launch System (ALS) Base Heating Prediction Code is based on a generalization of first principles in the prediction of plume induced base convective heating and plume radiation. It should be considered to be an approximate method for evaluating trends as a function of configuration variables because the processes being modeled are too complex to allow an accurate generalization. The convective methodology is based upon generalizing trends from four nozzle configurations, so an extension to use the code with strap-on boosters, multiple nozzle sizes, and variations in the propellants and chamber pressure histories cannot be precisely treated. The plume radiation is more amenable to precise computer prediction, but simplified assumptions are required to model the various aspects of the candidate configurations. Perhaps the most difficult area to characterize is the variation of radiation with altitude. The theory in the radiation predictions is described in more detail. This report is intended to familiarize a user with the interface operation and options, to summarize the limitations and restrictions of the code, and to provide information to assist in installing the code.

  16. WHITE DWARF MERGERS ON ADAPTIVE MESHES. I. METHODOLOGY AND CODE VERIFICATION

    SciTech Connect

    Katz, Max P.; Zingale, Michael; Calder, Alan C.; Swesty, F. Douglas; Almgren, Ann S.; Zhang, Weiqun

    2016-03-10

    The Type Ia supernova (SN Ia) progenitor problem is one of the most perplexing and exciting problems in astrophysics, requiring detailed numerical modeling to complement observations of these explosions. One possible progenitor that has merited recent theoretical attention is the white dwarf (WD) merger scenario, which has the potential to naturally explain many of the observed characteristics of SNe Ia. To date there have been relatively few self-consistent simulations of merging WD systems using mesh-based hydrodynamics. This is the first paper in a series describing simulations of these systems using a hydrodynamics code with adaptive mesh refinement. In this paper we describe our numerical methodology and discuss our implementation in the compressible hydrodynamics code CASTRO, which solves the Euler equations, and the Poisson equation for self-gravity, and couples the gravitational and rotation forces to the hydrodynamics. Standard techniques for coupling gravitation and rotation forces to the hydrodynamics do not adequately conserve the total energy of the system for our problem, but recent advances in the literature allow progress and we discuss our implementation here. We present a set of test problems demonstrating the extent to which our software sufficiently models a system where large amounts of mass are advected on the computational domain over long timescales. Future papers in this series will describe our treatment of the initial conditions of these systems and will examine the early phases of the merger to determine its viability for triggering a thermonuclear detonation.

  17. White Dwarf Mergers on Adaptive Meshes. I. Methodology and Code Verification

    NASA Astrophysics Data System (ADS)

    Katz, Max P.; Zingale, Michael; Calder, Alan C.; Swesty, F. Douglas; Almgren, Ann S.; Zhang, Weiqun

    2016-03-01

    The Type Ia supernova (SN Ia) progenitor problem is one of the most perplexing and exciting problems in astrophysics, requiring detailed numerical modeling to complement observations of these explosions. One possible progenitor that has merited recent theoretical attention is the white dwarf (WD) merger scenario, which has the potential to naturally explain many of the observed characteristics of SNe Ia. To date there have been relatively few self-consistent simulations of merging WD systems using mesh-based hydrodynamics. This is the first paper in a series describing simulations of these systems using a hydrodynamics code with adaptive mesh refinement. In this paper we describe our numerical methodology and discuss our implementation in the compressible hydrodynamics code CASTRO, which solves the Euler equations, and the Poisson equation for self-gravity, and couples the gravitational and rotation forces to the hydrodynamics. Standard techniques for coupling gravitation and rotation forces to the hydrodynamics do not adequately conserve the total energy of the system for our problem, but recent advances in the literature allow progress and we discuss our implementation here. We present a set of test problems demonstrating the extent to which our software sufficiently models a system where large amounts of mass are advected on the computational domain over long timescales. Future papers in this series will describe our treatment of the initial conditions of these systems and will examine the early phases of the merger to determine its viability for triggering a thermonuclear detonation.

  18. Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding

    PubMed Central

    Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A.

    2016-01-01

    With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes with low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing for an arbitrary but no less than threshold-value number of classical participants with much lower bandwidth. In comparison with existing quantum secret sharing schemes, it also still works when there are dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications. PMID:27515908

  19. Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding.

    PubMed

    Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A

    2016-08-12

    With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes with low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing for an arbitrary but no less than threshold-value number of classical participants with much lower bandwidth. In comparison with existing quantum secret sharing schemes, it also still works when there are dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications.

  20. Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding

    NASA Astrophysics Data System (ADS)

    Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A.

    2016-08-01

    With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes with low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing for an arbitrary but no less than threshold-value number of classical participants with much lower bandwidth. In comparison with existing quantum secret sharing schemes, it also still works when there are dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications.
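    The classical backbone of such a scheme is threshold secret sharing via Lagrange interpolation: a random polynomial of degree t-1 hides the secret in its constant term, and any t shares recover it by interpolation at zero. A minimal sketch over a prime field (this deliberately omits the quantum OAM, m-bonacci, and Huffman-Fibonacci layers):

```python
import random

P = 2**127 - 1                      # a Mersenne prime as the field modulus

def make_shares(secret, t, n):
    """Split `secret` into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # Modular inverse via Fermat's little theorem (P is prime).
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(secret=123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789
assert reconstruct(shares[1:4]) == 123456789
```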

  1. Context adaptive binary arithmetic coding-based data hiding in partially encrypted H.264/AVC videos

    NASA Astrophysics Data System (ADS)

    Xu, Dawen; Wang, Rangding

    2015-05-01

    A scheme of data hiding directly in a partially encrypted version of H.264/AVC videos is proposed which includes three parts, i.e., selective encryption, data embedding and data extraction. Selective encryption is performed on context adaptive binary arithmetic coding (CABAC) bin-strings via stream ciphers. By careful selection of CABAC entropy coder syntax elements for selective encryption, the encrypted bitstream is format-compliant and has exactly the same bit rate. Then a data-hider embeds the additional data into partially encrypted H.264/AVC videos using a CABAC bin-string substitution technique without accessing the plaintext of the video content. Since bin-string substitution is carried out on those residual coefficients with approximately the same magnitude, the quality of the decrypted video is satisfactory. Video file size is strictly preserved even after data embedding. In order to adapt to different application scenarios, data extraction can be done either in the encrypted domain or in the decrypted domain. Experimental results have demonstrated the feasibility and efficiency of the proposed scheme.

  2. Coding and adaptation during mechanical stimulation in the leech nervous system.

    PubMed

    Pinato, G; Torre, V

    2000-12-15

    The experiments described here were designed to characterise sensory coding and adaptation during mechanical stimulation in the leech (Hirudo medicinalis). A chain of three ganglia and a segment of the body wall connected to the central ganglion were used. Eight extracellular suction pipettes and one or two intracellular electrodes were used to record action potentials from all mechanosensory neurones of the three ganglia. When the skin of the body wall was briefly touched with a filament exerting a force of about 2 mN, touch (T) cells in the central ganglion, but also those in adjacent ganglia (i.e. anterior and posterior), fired one or two action potentials. However, the threshold for action potential initiation was lower for T cells in the central ganglion than for those in adjacent ganglia. The timing of the first evoked action potential in a T cell was very reproducible, with a jitter often lower than 100 µs. Action potentials in T cells were not significantly correlated. When the force exerted by the filament was increased above 20 mN, pressure (P) cells in the central and neighbouring ganglia fired action potentials. Action potentials in P cells usually followed those evoked in T cells with a delay of about 20 ms and had a larger jitter of 0.5-10 ms. With stronger stimulations exceeding 50 mN, noxious (N) cells also fired action potentials. With such stimulations the majority of mechanosensory neurones in the three ganglia fired action potentials. The spatial properties of the whole receptive field of the mechanosensory neurones were explored by touching different parts of the skin. When the mechanical stimulation was applied for a longer time, i.e. 1 s, only P cells in the central ganglion continued to fire action potentials. P cells in neighbouring ganglia fully adapted after firing two or three action potentials. P cells in adjacent ganglia, having fully adapted to a steady mechanical stimulation of one part of the skin, fired action potentials following

  3. Genome-environment associations in sorghum landraces predict adaptive traits

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Improving environmental adaptation in crops is essential for food security under global change, but phenotyping adaptive traits remains a major bottleneck. If associations between single-nucleotide polymorphism (SNP) alleles and environment of origin in crop landraces reflect adaptation, then these ...

  4. Validation of Framework Code Approach to a Life Prediction System for Fiber Reinforced Composites

    NASA Technical Reports Server (NTRS)

    Gravett, Phillip

    1997-01-01

    The grant was conducted by the MMC Life Prediction Cooperative, an industry/government collaborative team; the Ohio Aerospace Institute (OAI) acted as the prime contractor on behalf of the Cooperative for this grant effort. See Figure I for the organization and responsibilities of team members. The technical effort was conducted during the period August 7, 1995 to June 30, 1996 in cooperation with Erwin Zaretsky, the LERC Program Monitor. Phil Gravett of Pratt & Whitney was the principal technical investigator. Table I documents all meeting-related coordination memos during this period. The effort under this grant was closely coordinated with an existing USAF-sponsored program focused on putting into practice a life prediction system for turbine engine components made of metal matrix composites (MMC). The overall architecture of the MMC life prediction system was defined in the USAF-sponsored program (prior to this grant). The efforts of this grant were focused on implementing and tailoring the life prediction system, the framework code within it, and the damage modules within it to meet the specific requirements of the Cooperative. The tailoring of the life prediction system provides the basis for pervasive and continued use of this capability by the industry/government cooperative. The outputs of this grant are: 1. Definition of the framework code to analysis modules interfaces, 2. Definition of the interface between the materials database and the finite element model, and 3. Definition of the integration of the framework code into an FEM design tool.

  5. Predictive Coding: A Possible Explanation of Filling-In at the Blind Spot

    PubMed Central

    Raman, Rajani; Sarkar, Sandip

    2016-01-01

    Filling-in at the blind spot is a perceptual phenomenon in which the visual system fills the informational void, which arises due to the absence of retinal input corresponding to the optic disc, with surrounding visual attributes. It is known that during filling-in, nonlinear neural responses that correlate with the percept are observed in early visual areas, but knowledge of the underlying neural mechanism for filling-in at the blind spot is far from complete. In this work, we present a fresh perspective on the computational mechanism of the filling-in process in the framework of hierarchical predictive coding, which provides a functional explanation for a range of neural responses in the cortex. We simulated a three-level hierarchical network and observed its response while stimulating the network with different bar stimuli across the blind spot. We find that the predictive-estimator neurons that represent the blind spot in primary visual cortex exhibit elevated nonlinear responses when the bar stimulates both sides of the blind spot. Using the generative model, we also show that these responses represent filling-in completion. All these results are consistent with the findings of psychophysical and physiological studies. We also demonstrate that the tolerance of filling-in qualitatively matches the experimental findings for non-aligned bars. We discuss this phenomenon in the predictive coding paradigm and show that all our results can be explained by taking into account the efficient coding of natural images along with feedback and feed-forward connections that allow priors and predictions to co-evolve to arrive at the best prediction. These results suggest that the filling-in process may be a manifestation of the general computational principle of hierarchical predictive coding of natural images. PMID:26959812
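    At its core, hierarchical predictive coding iterates a simple dynamic: each level's estimate is adjusted to reduce both the bottom-up error in predicting the level below and the error with which the level above predicts it. A toy two-level linear sketch of that loop (matrices, sizes, and rates are illustrative; the paper's three-level network and blind-spot geometry are not modeled):

```python
import numpy as np

rng = np.random.default_rng(0)
# Generative model: input ≈ U1 @ r1, and first-level causes ≈ U2 @ r2.
U1 = rng.standard_normal((16, 8)) * 0.3   # level-1 generative weights
U2 = rng.standard_normal((8, 4)) * 0.3    # level-2 generative weights
x = rng.standard_normal(16)               # sensory input (e.g., bar stimulus)

r1, r2 = np.zeros(8), np.zeros(4)         # predictive-estimator activities
lr = 0.1
for _ in range(200):
    e0 = x - U1 @ r1                      # bottom-up prediction error at input
    e1 = r1 - U2 @ r2                     # error of level 2 predicting level 1
    r1 += lr * (U1.T @ e0 - e1)           # reduce both errors r1 touches
    r2 += lr * (U2.T @ e1)
print("residual input error:", np.linalg.norm(x - U1 @ r1).round(3))
```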

  6. Prediction error as a linear function of reward probability is coded in human nucleus accumbens.

    PubMed

    Abler, Birgit; Walter, Henrik; Erk, Susanne; Kammerer, Hannes; Spitzer, Manfred

    2006-06-01

    Reward probability has been shown to be coded by dopamine neurons in monkeys. Phasic neuronal activation not only increased linearly with reward probability upon expectation of reward, but also varied monotonically across the range of probabilities upon omission or receipt of rewards, thereby modeling discrepancies between expected and received rewards. Such a discrete coding of prediction error has been suggested to be one of the basic principles of learning. We used functional magnetic resonance imaging (fMRI) to show that the human dopamine system codes reward probability and prediction error in a similar way. We used a simple delayed incentive task with a discrete range of reward probabilities from 0%-100%. Activity in the nucleus accumbens (NAc) of human subjects strongly resembled the phasic responses found in monkey neurons. First, during the expectation period of the task, the fMRI signal in the human NAc increased linearly with the probability of reward. Second, during the outcome phase, activity in the NAc coded the prediction error as a linear function of reward probability. Third, we found that the NAc signal was correlated with individual differences in sensation seeking and novelty seeking, indicating a link between individual fMRI activation of the dopamine system in a probabilistic paradigm and personality traits previously suggested to be linked with reward processing. We therefore identify two different covariates that model activity in the NAc: specific properties of a psychological task and individual character traits.
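    The reported coding scheme reduces to two linear signals: an expectation response proportional to the reward probability p, and an outcome response proportional to the prediction error r − p. A tiny worked example of those two quantities across the task's probability range:

```python
# Expected value and prediction error across a discrete range of reward
# probabilities (illustrative of the linear coding the study reports).
probabilities = [0.0, 0.25, 0.5, 0.75, 1.0]
for p in probabilities:
    expectation_signal = p            # ramps linearly with probability
    pe_if_rewarded = 1.0 - p          # positive surprise shrinks as p grows
    pe_if_omitted = 0.0 - p           # negative surprise grows with p
    print(f"p={p:.2f}  expect={expectation_signal:+.2f}  "
          f"PE(reward)={pe_if_rewarded:+.2f}  PE(omission)={pe_if_omitted:+.2f}")
```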

  7. A comparison of shuttle vernier engine plume contamination with CONTAM 3.4 code predictions

    NASA Technical Reports Server (NTRS)

    Maag, Carl R.; Jones, Thomas M.; Rao, Shankar M.; Linder, W. Kelly

    1992-01-01

    In 1985, using the CONTAM 3.2 code, it was predicted that the shuttle Primary Reaction Control System (PRCS) and Vernier Reaction Control System (VRCS) engines could be potential contamination sources to sensitive surfaces located within the shuttle payload bay. Spaceflight test data on these engines is quite limited. Shuttle mission STS-32, the Long Duration Exposure Facility retrieval mission, was instrumented with an experiment that provided the design engineer with evidence that contaminant species from the VRCS engines can enter the payload bay. More recently, the most recent version of the analysis code, CONTAM 3.4, has re-examined the contamination potential of these engines.

  8. Specification and Prediction of the Radiation Environment Using Data Assimilative VERB code

    NASA Astrophysics Data System (ADS)

    Shprits, Yuri; Kellerman, Adam

    2016-07-01

    We discuss how data assimilation can be used for the reconstruction of long-term evolution, for benchmarking of physics-based codes, and for improving nowcasting and forecasting of the radiation belts and ring current. We also discuss advanced data assimilation methods such as parameter estimation and smoothing. We present a number of data assimilation applications using the VERB 3D code. The 3D data-assimilative VERB allows us to blend together data from GOES, RBSP A and RBSP B. (1) The model with data assimilation allows us to propagate data to different pitch angles, energies, and L-shells and blends them together with the physics-based VERB code in an optimal way. We illustrate how to use this capability for the analysis of previous events and for obtaining a global and statistical view of the system. (2) Model predictions strongly depend on the initial conditions supplied to the model: the model is only as good as the initial conditions it uses. To produce the best possible initial conditions, data from different sources (GOES, RBSP A and B, and our empirical model predictions based on ACE) are blended together in an optimal way by means of data assimilation, as described above. The resulting initial conditions have no gaps, which allows us to make more accurate predictions. A real-time prediction framework operating on our website, based on GOES, RBSP A and B, and ACE data and on 3D VERB, is presented and discussed.
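    Blending a physics-based propagation with multi-satellite observations is, at heart, a Kalman update: propagate the state estimate with the model, then correct it toward each observation in proportion to the relative uncertainties. A one-dimensional sketch (the operational VERB assimilation is three-dimensional and far more elaborate; all dynamics and variances below are illustrative):

```python
import numpy as np

# 1-D data-assimilation sketch: model forecast + observational correction.
rng = np.random.default_rng(0)
x_est, P = 1.0, 0.5          # state estimate (e.g., log flux) and its variance
Q, R = 0.05, 0.2             # model-error and observation-error variances

def model_step(x):
    return 0.95 * x + 0.1    # stand-in for the physics-based propagation

truth = 1.3
for step in range(20):
    truth = model_step(truth) + rng.normal(0, np.sqrt(Q))
    obs = truth + rng.normal(0, np.sqrt(R))   # e.g., a GOES/RBSP measurement

    # Forecast step: propagate the estimate and inflate its uncertainty.
    x_pred = model_step(x_est)
    P_pred = 0.95**2 * P + Q

    # Analysis step: the Kalman gain weighs model vs. observation confidence.
    K = P_pred / (P_pred + R)
    x_est = x_pred + K * (obs - x_pred)
    P = (1 - K) * P_pred
print(f"final estimate {x_est:.2f}, truth {truth:.2f}")
```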

  9. The basal ganglia select the expected sensory input used for predictive coding.

    PubMed

    Colder, Brian

    2015-01-01

    While considerable evidence supports the notion that lower-level interpretation of incoming sensory information is guided by top-down sensory expectations, less is known about the source of the sensory expectations or the mechanisms by which they are spread. Predictive coding theory proposes that sensory expectations flow down from higher-level association areas to lower-level sensory cortex. A separate theory of the role of prediction in cognition describes "emulations" as linked representations of potential actions and their associated expected sensation that are hypothesized to play an important role in many aspects of cognition. The expected sensations in active emulations are proposed to be the top-down expectation used in predictive coding. Representations of the potential action and expected sensation in emulations are claimed to be instantiated in distributed cortical networks. Combining predictive coding with emulations thus provides a theoretical link between the top-down expectations that guide sensory expectations and the cortical networks representing potential actions. Now moving to theories of action selection, the basal ganglia has long been proposed to select between potential actions by reducing inhibition to the cortical network instantiating the desired action plan. Integration of these isolated theories leads to the novel hypothesis that reduction in inhibition from the basal ganglia selects not just action plans, but entire emulations, including the sensory input expected to result from the action. Basal ganglia disinhibition is hypothesized to both initiate an action and also allow propagation of the action's associated sensory expectation down towards primary sensory cortex. This is a novel proposal for the role of the basal ganglia in biasing perception by selecting the expected sensation, and initiating the top-down transmission of those expectations in predictive coding.

  10. Time-and-Spatially Adapting Simulations for Efficient Dynamic Stall Predictions

    DTIC Science & Technology

    2015-09-01

    The ability to accurately and efficiently predict the occurrence and severity of dynamic stall remains a major roadblock in the design and analysis… Marilyn J. Smith, Professor, Georgia Tech; Rohit Jain, Aerospace Engineer, US Army.

  11. Computationally Efficient Blind Code Synchronization for Asynchronous DS-CDMA Systems with Adaptive Antenna Arrays

    NASA Astrophysics Data System (ADS)

    Hu, Chia-Chang

    2005-12-01

    A novel space-time adaptive near-far robust code-synchronization array detector for asynchronous DS-CDMA systems is developed in this paper. It has the same basic requirements as the conventional matched filter of an asynchronous DS-CDMA system. For real-time applicability, a computationally efficient architecture of the proposed detector is developed, based on the concept of the multistage Wiener filter (MWF) of Goldstein and Reed. This multistage technique results in a self-synchronizing detection criterion that requires no inversion or eigendecomposition of a covariance matrix. As a consequence, the detector achieves a complexity that is only a linear function of the size of the antenna array, the rank of the MWF, the system processing gain, and the number of samples in a chip interval; the complexity of the equivalent detector based on the minimum mean-squared error (MMSE) criterion or on subspace-based eigenstructure analysis is, by contrast, a higher-order function of these quantities. Moreover, this multistage scheme provides rapid adaptive convergence under limited observation-data support. Simulations are conducted to evaluate the performance and convergence behavior of the proposed detector as functions of the size of the antenna array, the amount of sample support, and the rank of the multistage Wiener filter. The performance advantage of the proposed detector over other DS-CDMA detectors is investigated as well.

  12. Robust image transmission using a new joint source channel coding algorithm and dual adaptive OFDM

    NASA Astrophysics Data System (ADS)

    Farshchian, Masoud; Cho, Sungdae; Pearlman, William A.

    2004-01-01

    In this paper we consider the problem of robust image coding and packetization for communications over slow-fading, frequency-selective channels and channels with a shaped spectrum such as digital subscriber lines (DSL). Towards this end, a novel, analytically based joint source channel coding (JSCC) algorithm for assigning unequal error protection is presented. Under a block budget constraint, the image bitstream is de-multiplexed into two classes with different error responses. The algorithm assigns unequal error protection (UEP) so as to minimize the expected mean square error (MSE) at the receiver while minimizing the probability of catastrophic failure. To minimize the expected mean square error at the receiver, the algorithm assigns unequal protection to the value bit class (VBC) stream. To minimize the probability of catastrophic error, which is a characteristic of progressive image coders, the algorithm assigns more protection to the location bit class (LBC) stream than to the VBC stream. Besides being analytical and numerically solvable, the algorithm is based on a new formula developed to estimate the distortion-rate (D-R) curve for the VBC portion of SPIHT. The major advantage of our technique is that the worst-case instantaneous minimum peak signal-to-noise ratio (PSNR) does not differ greatly from the average MSE, which is not the case for the optimal single-stream (UEP) system. Although the average PSNR of our method and that of the optimal single-stream UEP are about the same, our scheme does not suffer erratic behavior because we have made the probability of catastrophic error arbitrarily small. The coded image is sent via orthogonal frequency division multiplexing (OFDM), a well-known and increasingly popular modulation scheme for combating ISI (intersymbol interference) and impulsive noise. Using dual adaptive energy OFDM, we use the minimum energy necessary to send each bit stream at a

  13. Hyperbolic Space Sparse Coding with Its Application on Prediction of Alzheimer's Disease in Mild Cognitive Impairment.

    PubMed

    Zhang, Jie; Shi, Jie; Stonnington, Cynthia; Li, Qingyang; Gutman, Boris A; Chen, Kewei; Reiman, Eric M; Caselli, Richard J; Thompson, Paul M; Ye, Jieping; Wang, Yalin

    2016-10-01

    Mild Cognitive Impairment (MCI) is a transitional stage between normal age-related cognitive decline and Alzheimer's disease (AD). Here we introduce a hyperbolic space sparse coding method to predict impending decline of MCI patients to dementia using surface measures of ventricular enlargement. First, we compute diffeomorphic mappings between ventricular surfaces using a canonical hyperbolic parameter space with consistent boundary conditions, and surface tensor-based morphometry (TBM) is computed to measure local surface deformations. Second, ring-shaped patches of TBM features are selected according to the geometric structure of the hyperbolic parameter space to initialize a dictionary. Sparse coding is then applied to the patch features to learn sparse codes and update the dictionary. Finally, we adopt max-pooling to reduce the feature dimensions and apply Adaboost to predict AD in MCI patients (N = 133) from the Alzheimer's Disease Neuroimaging Initiative baseline dataset. Our work achieved an accuracy rate of 96.7% and outperformed some other morphometry measures. The hyperbolic space sparse coding method may offer a more sensitive tool to study AD and its early symptoms.
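
    The patch-based pipeline (sparse coding, max-pooling, Adaboost) can be sketched with off-the-shelf tools. This Euclidean stand-in ignores the hyperbolic geometry that is central to the actual method, and all shapes and parameters are assumptions.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.ensemble import AdaBoostClassifier

def fit_pipeline(patches, labels, n_atoms=64):
    """Sparse-code patches, max-pool per subject, classify with AdaBoost.

    patches: array (n_subjects, n_patches, patch_dim), standing in for the
    ring-shaped TBM patches on the hyperbolic parameter grid.
    """
    n_subj, n_patch, dim = patches.shape
    flat = patches.reshape(-1, dim)
    coder = DictionaryLearning(n_components=n_atoms,
                               transform_algorithm='lasso_lars',
                               transform_alpha=0.1)
    codes = coder.fit(flat).transform(flat)        # sparse codes per patch
    codes = codes.reshape(n_subj, n_patch, n_atoms)
    pooled = np.abs(codes).max(axis=1)             # max-pooling over patches
    clf = AdaBoostClassifier(n_estimators=100).fit(pooled, labels)
    return coder, clf
```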

  14. Structural Life and Reliability Metrics: Benchmarking and Verification of Probabilistic Life Prediction Codes

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.; Soditus, Sherry; Hendricks, Robert C.; Zaretsky, Erwin V.

    2002-01-01

    Over the past two decades there has been considerable effort by NASA Glenn and others to develop probabilistic codes to predict, with reasonable engineering certainty, the life and reliability of critical components in rotating machinery and, more specifically, in the rotating sections of airbreathing and rocket engines. These codes have, to a very limited extent, been verified with relatively small bench-rig-type specimens under uniaxial loading. Because of the small and very narrow database, the acceptance of these codes within the aerospace community has been limited. An alternate approach to generating statistically significant data under complex loading and environments simulating aircraft and rocket engine conditions is to obtain, catalog, and statistically analyze actual field data. End users of the engines, such as commercial airlines and the military, record and store operational and maintenance information. This presentation describes a cooperative program between NASA GRC, United Airlines, the USAF Wright Laboratory, the U.S. Army Research Laboratory, and the Australian Aeronautical & Maritime Research Laboratory to obtain and analyze these airline data for selected components such as blades, disks, and combustors. These airline data will be used to benchmark and compare existing life prediction codes.

  15. Performance of an adaptive coding scheme in a fixed wireless cellular system working in millimeter-wave bands

    NASA Astrophysics Data System (ADS)

    Farahvash, Shayan; Akhavan, Koorosh; Kavehrad, Mohsen

    1999-12-01

    This paper presents a solution to the problem of providing bit-error-rate performance guarantees in a fixed millimeter-wave wireless system, such as a local multi-point distribution system in line-of-sight or nearly line-of-sight applications. The basic concept is to take advantage of the slow-fading behavior of the fixed wireless channel by changing the transmission code rate. Rate-compatible punctured convolutional codes are used to implement adaptive coding. Cochannel interference analysis is carried out for the downlink direction, from the base station to the subscriber premises. Cochannel interference is treated as a noise-like random process with a power equal to the sum of the powers from a finite number of interfering base stations. Two different cellular architectures, based on using single or dual polarizations, are investigated. The average spectral efficiency of the proposed adaptive-rate system is found to be at least 3 times larger than that of a fixed-rate system with similar outage requirements.
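
    The rate-switching idea reduces to a threshold rule on the slowly varying channel quality. The rate family and switching thresholds below are made-up placeholders for illustration; a real system would derive them from the target bit-error rate and the measured carrier-to-interference ratio.

```python
# Hypothetical RCPC rate family and SNR switching thresholds (dB).
RCPC_RATES = [1/2, 2/3, 3/4, 8/9]
THRESHOLDS_DB = [12.0, 16.0, 20.0]   # boundaries between adjacent rates

def select_code_rate(cir_db):
    """Pick the highest-rate (least redundant) puncturing pattern whose
    threshold the slowly fading carrier-to-interference ratio exceeds."""
    rate = RCPC_RATES[0]
    for threshold, candidate in zip(THRESHOLDS_DB, RCPC_RATES[1:]):
        if cir_db >= threshold:
            rate = candidate
    return rate
```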

  16. Prediction of material strength and fracture of glass using the SPHINX smooth particle hydrodynamics code

    SciTech Connect

    Mandell, D.A.; Wingate, C.A.

    1994-08-01

    The design of many military devices involves numerical predictions of the material strength and fracture of brittle materials. The materials of interest include ceramics, which are used in armor packages; glass, which is used in truck and jeep windshields and in helicopters; and rock and concrete, which are used in underground bunkers. As part of a program to develop advanced hydrocode design tools, the authors have implemented a brittle fracture model for glass into the SPHINX smooth particle hydrodynamics code. The authors have evaluated this model and the code by predicting data from one-dimensional flyer plate impacts into glass, and data from tungsten rods impacting glass. Since fractured glass properties, which are needed in the model, are not available, the authors did sensitivity studies of these properties, as well as sensitivity studies to determine the number of particles needed in the calculations. The numerical results are in good agreement with the data.

  17. A Cerebellar Framework for Predictive Coding and Homeostatic Regulation in Depressive Disorder.

    PubMed

    Schutter, Dennis J L G

    2016-02-01

    Depressive disorder is associated with abnormalities in the processing of reward and punishment signals and disturbances in homeostatic regulation. These abnormalities are proposed to impair error minimization routines for reducing uncertainty. Several lines of research point towards a role of the cerebellum in reward- and punishment-related predictive coding and homeostatic regulatory function in depressive disorder. Available functional and anatomical evidence suggests that, in addition to the cortico-limbic networks, the cerebellum is part of the dysfunctional brain circuit in depressive disorder as well. It is proposed that impaired cerebellar function contributes to abnormalities in predictive coding and homeostatic dysregulation in depressive disorder. Further research on the role of the cerebellum in depressive disorder may extend our knowledge of the functional and neural mechanisms of depressive disorder and support the development of novel antidepressant treatment strategies targeting the cerebellum.

  18. Severe accident source term characteristics for selected Peach Bottom sequences predicted by the MELCOR Code

    SciTech Connect

    Carbajo, J.J.

    1993-09-01

    The purpose of this report is to compare in-containment source terms developed for NUREG-1159, which used the Source Term Code Package (STCP), with those generated by MELCOR to identify significant differences. For this comparison, two short-term depressurized station blackout sequences (with a dry cavity and with a flooded cavity) and a Loss-of-Coolant Accident (LOCA) concurrent with complete loss of the Emergency Core Cooling System (ECCS) were analyzed for the Peach Bottom Atomic Power Station (a BWR-4 with a Mark I containment). The results indicate that for the sequences analyzed, the two codes predict similar total in-containment release fractions for each of the element groups. However, the MELCOR/CORBH Package predicts significantly longer times for vessel failure and reduced energy of the released material for the station blackout sequences (when compared to the STCP results). MELCOR also calculated smaller releases into the environment than STCP for the station blackout sequences.

  19. Beyond Reactive Planning: Self Adaptive Software and Self Modeling Software in Predictive Deliberation Management

    DTIC Science & Technology

    2008-06-01

    Presented at the 13th ICCRTS, "C2 for Complex Endeavors." We present the following hypothesis: predictive deliberation management using self-adapting and self-modeling software will be required to provide

  20. Users Manual for the NASA Lewis Ice Accretion Prediction Code (LEWICE)

    NASA Technical Reports Server (NTRS)

    Ruff, Gary A.; Berkowitz, Brian M.

    1990-01-01

    LEWICE is an ice accretion prediction code that applies a time-stepping procedure to calculate the shape of an ice accretion. The potential flow field is calculated in LEWICE using the Douglas Hess-Smith 2-D panel code (S24Y). This potential flow field is then used to calculate the trajectories of particles and the impingement points on the body. These calculations are performed to determine the distribution of liquid water impinging on the body, which then serves as input to the icing thermodynamic code. The icing thermodynamic model is based on the work of Messinger, but contains several major modifications and improvements. This model is used to calculate the ice growth rate at each point on the surface of the geometry. By specifying an icing time increment, the ice growth rate can be interpreted as an ice thickness which is added to the body, resulting in the generation of new coordinates. This procedure is repeated, beginning with the potential flow calculations, until the desired icing time is reached. The operation of LEWICE is illustrated through the use of five examples. These examples are representative of the types of applications expected for LEWICE. All input and output is discussed, along with many of the diagnostic messages contained in the code. Several error conditions that may occur in the code for certain icing conditions are identified, and a course of action is recommended. LEWICE has been used to calculate a variety of ice shapes, but should still be considered a research code. The code should be exercised further to identify any shortcomings and inadequacies. Any modifications identified as a result of these cases, or of additional experimental results, should be incorporated into the model. Using it as a test bed for improvements to the ice accretion model is one important application of LEWICE.
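
    The time-stepping procedure described above reads as a simple loop. The following Python-style pseudocode mirrors those steps; every helper function named here is a hypothetical stand-in, not the real LEWICE/S24Y routines.

```python
def accrete_ice(geometry, conditions, icing_time, dt):
    """Structural sketch of LEWICE's time-stepping procedure, with all
    helpers hypothetical: panel-method flow solve, droplet trajectories,
    Messinger-style thermodynamics, then geometry update, repeated until
    the specified icing time is reached."""
    t = 0.0
    while t < icing_time:
        flow = solve_potential_flow(geometry)            # Hess-Smith panel step
        impingement = trace_droplet_trajectories(flow, geometry, conditions)
        growth_rate = messinger_thermodynamics(impingement, conditions)
        geometry = add_ice_layer(geometry, growth_rate * dt)  # new coordinates
        t += dt
    return geometry
```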

  1. SIM_ADJUST -- A computer code that adjusts simulated equivalents for observations or predictions

    USGS Publications Warehouse

    Poeter, Eileen P.; Hill, Mary C.

    2008-01-01

    This report documents the SIM_ADJUST computer code. SIM_ADJUST surmounts an obstacle that is sometimes encountered when using universal model-analysis computer codes such as UCODE_2005 (Poeter and others, 2005), PEST (Doherty, 2004), and OSTRICH (Matott, 2005; Fredrick and others, 2007). These codes often read simulated equivalents from a list in a file produced by a process model, such as MODFLOW, that represents a system of interest. At times, values needed by the universal code are missing or assigned default values because the process model could not produce a useful solution. SIM_ADJUST can be used to (1) read a file that lists expected observation or prediction names and possible alternatives for the simulated values; (2) read a file produced by a process model that contains space- or tab-delimited columns, including a column of simulated values and a column of related observation or prediction names; (3) identify observations or predictions that have been omitted or assigned a default value by the process model; and (4) produce an adjusted file that contains a column of simulated values and a column of associated observation or prediction names. The user may provide alternatives that are constant values or that are alternative simulated values. The user may also provide a sequence of alternatives. For example, the heads from a series of cells may be specified to ensure that a meaningful value is available to compare with an observation located in a cell that may become dry. SIM_ADJUST is constructed using modules from the JUPITER API and is intended for use on any computer operating system. SIM_ADJUST consists of algorithms programmed in Fortran90, which efficiently perform numerical calculations.
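
    The adjustment logic itself is simple to picture. A minimal Python sketch of steps (1)-(4) follows; the real SIM_ADJUST is Fortran90 built on JUPITER API modules, and its file formats are not reproduced here.

```python
def adjust_simulated_values(expected, model_output, default=-999.0):
    """Sketch of SIM_ADJUST's role. expected maps each observation or
    prediction name to an ordered list of fallbacks, each either an
    alternative simulated-value name (str) or a constant value (float);
    model_output maps names to simulated values parsed from the
    process-model file, with `default` marking missing/failed values."""
    adjusted = {}
    for name, alternatives in expected.items():
        value = model_output.get(name, default)
        for alt in alternatives:            # walk the fallback sequence
            if value != default:
                break
            if isinstance(alt, str):        # alternative simulated value
                value = model_output.get(alt, default)
            else:                           # constant fallback value
                value = alt
        adjusted[name] = value
    return adjusted

# e.g. a head observation in a cell that may go dry, falling back to the
# head simulated in the cell below it:
# adjust_simulated_values({'h_obs1': ['h_cell_below', 0.0]},
#                         {'h_obs1': -999.0, 'h_cell_below': 12.3})
```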

  2. Results from baseline tests of the SPRE I and comparison with code model predictions

    SciTech Connect

    Cairelli, J.E.; Geng, S.M.; Skupinski, R.C.

    1994-09-01

    The Space Power Research Engine (SPRE), a free-piston Stirling engine with linear alternator, is being tested at the NASA Lewis Research Center as part of the Civil Space Technology Initiative (CSTI) as a candidate for high capacity space power. This paper presents results of base-line engine tests at design and off-design operating conditions. The test results are compared with code model predictions.

  3. Predicting multi-wall structural response to hypervelocity impact using the hull code

    NASA Technical Reports Server (NTRS)

    Schonberg, William P.

    1993-01-01

    Previously, multi-wall structures have been analyzed extensively, primarily through experiment, as a means of increasing the meteoroid/space debris impact protection of spacecraft. As structural configurations become more varied, the number of tests required to characterize their response increases dramatically. As an alternative to experimental testing, numerical modeling of high-speed impact phenomena is often used to predict the response of a variety of structural systems under different impact loading conditions. The results of comparing experimental tests to Hull Hydrodynamic Computer Code predictions are reported. Also, the results of a numerical parametric study of multi-wall structural response to hypervelocity cylindrical projectile impact are presented.

  4. Design of signal-adapted multidimensional lifting scheme for lossy coding.

    PubMed

    Gouze, Annabelle; Antonini, Marc; Barlaud, Michel; Macq, Benoît

    2004-12-01

    This paper proposes a new method for the design of lifting filters to compute a multidimensional nonseparable wavelet transform. Our approach is stated in the general case and is illustrated for the 2-D separable case and for quincunx images. Results are shown for the JPEG2000 database and for satellite images acquired on a quincunx sampling grid. The design of efficient quincunx filters is a difficult challenge which has already been addressed for specific cases. Our approach enables the design of less expensive filters adapted to the signal statistics to enhance the compression efficiency in a more general case. It is based on a two-step lifting scheme and joins the lifting theory with Wiener's optimization. The prediction step is designed in order to minimize the variance of the signal, and the update step is designed in order to minimize a reconstruction error. Application to lossy compression shows the performance of the method.
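
    A 1-D toy version of the two-step lifting scheme, with the prediction weights fitted to the signal by least squares in the spirit of the Wiener optimization, might look as follows. The fixed update step is a conventional choice, not the paper's optimized one, and the 1-D setting stands in for the multidimensional nonseparable case.

```python
import numpy as np

def signal_adapted_lifting(x):
    """One level of a two-step lifting scheme on a 1-D signal.

    The prediction filter is fitted by least squares to minimize the
    variance of the detail (prediction-error) signal; returns
    (approximation, detail) subbands."""
    x = np.asarray(x, float)
    even, odd = x[0::2], x[1::2]
    n = min(len(even) - 1, len(odd))
    # Predict each odd sample from its two even neighbours; fit the two
    # weights to the signal statistics (Wiener-style optimization).
    A = np.column_stack([even[:n], even[1:n + 1]])
    p = np.linalg.lstsq(A, odd[:n], rcond=None)[0]
    detail = odd[:n] - A @ p
    # Fixed update step, as in conventional 5/3 lifting; the paper instead
    # optimizes this step to minimize a reconstruction error.
    approx = even[:n] + 0.25 * detail
    return approx, detail
```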

  5. Memristor fabrication and characterization: an adaptive coded aperture imaging and sensing opportunity

    NASA Astrophysics Data System (ADS)

    Yakopcic, Chris; Taha, Tarek M.; Shin, Eunsung; Subramanyam, Guru; Murray, P. Terrence; Rogers, Stanley

    2010-08-01

    The memristor, experimentally verified for the first time in 2008, is one of four fundamental passive circuit elements (the others being resistors, capacitors, and inductors). Development and characterization of memristor devices and the design of novel computing architectures based on these devices can potentially provide significant advances in intelligence processing systems for a variety of applications including image processing, robotics, and machine learning. In particular, adaptive coded aperture (diffraction) sensing, an emerging technology enabling real-time, wide-area IR/visible sensing and imaging, could benefit from new high performance biologically inspired image processing architectures based on memristors. In this paper, we present results from the fabrication and characterization of memristor devices utilizing titanium oxide dielectric layers in a parallel-plate configuration. Two versions of memristor devices have been fabricated at the University of Dayton and the Air Force Research Laboratory utilizing varying thicknesses of the TiO2 dielectric layers. Our results show that the devices do exhibit the characteristic hysteresis loop in their I-V plots.

  6. Transforming the sensing and numerical prediction of high-impact local weather through dynamic adaptation.

    PubMed

    Droegemeier, Kelvin K

    2009-03-13

    Mesoscale weather, such as convective systems, intense local rainfall resulting in flash floods and lake effect snows, frequently is characterized by unpredictable rapid onset and evolution, heterogeneity and spatial and temporal intermittency. Ironically, most of the technologies used to observe the atmosphere, predict its evolution and compute, transmit or store information about it, operate in a static pre-scheduled framework that is fundamentally inconsistent with, and does not accommodate, the dynamic behaviour of mesoscale weather. As a result, today's weather technology is highly constrained and far from optimal when applied to any particular situation. This paper describes a new cyberinfrastructure framework, in which remote and in situ atmospheric sensors, data acquisition and storage systems, assimilation and prediction codes, data mining and visualization engines, and the information technology frameworks within which they operate, can change configuration automatically, in response to evolving weather. Such dynamic adaptation is designed to allow system components to achieve greater overall effectiveness, relative to their static counterparts, for any given situation. The associated service-oriented architecture, known as Linked Environments for Atmospheric Discovery (LEAD), makes advanced meteorological and cyber tools as easy to use as ordering a book on the web. LEAD has been applied in a variety of settings, including experimental forecasting by the US National Weather Service, and allows users to focus much more attention on the problem at hand and less on the nuances of data formats, communication protocols and job execution environments.

  7. Adaptation.

    PubMed

    Broom, Donald M

    2006-01-01

    The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms varies. Adaptive characters of organisms, including adaptive behaviours, increase fitness so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and

  8. A high temperature fatigue life prediction computer code based on the total strain version of StrainRange Partitioning (SRP)

    NASA Astrophysics Data System (ADS)

    McGaw, Michael A.; Saltsman, James F.

    1993-10-01

    A recently developed high-temperature fatigue life prediction computer code is presented and an example of its usage given. The code discussed is based on the Total Strain version of Strainrange Partitioning (TS-SRP). Included in this code are procedures for characterizing the creep-fatigue durability behavior of an alloy according to TS-SRP guidelines and predicting cyclic life for complex cycle types for both isothermal and thermomechanical conditions. A reasonably extensive materials properties database is included with the code.

  9. A high temperature fatigue life prediction computer code based on the total strain version of StrainRange Partitioning (SRP)

    NASA Technical Reports Server (NTRS)

    Mcgaw, Michael A.; Saltsman, James F.

    1993-01-01

    A recently developed high-temperature fatigue life prediction computer code is presented and an example of its usage given. The code discussed is based on the Total Strain version of Strainrange Partitioning (TS-SRP). Included in this code are procedures for characterizing the creep-fatigue durability behavior of an alloy according to TS-SRP guidelines and predicting cyclic life for complex cycle types for both isothermal and thermomechanical conditions. A reasonably extensive materials properties database is included with the code.

  10. Thought Insertion as a Self-Disturbance: An Integration of Predictive Coding and Phenomenological Approaches

    PubMed Central

    Sterzer, Philipp; Mishara, Aaron L.; Voss, Martin; Heinz, Andreas

    2016-01-01

    Current theories in the framework of hierarchical predictive coding propose that positive symptoms of schizophrenia, such as delusions and hallucinations, arise from an alteration in Bayesian inference, the term inference referring to a process by which learned predictions are used to infer probable causes of sensory data. However, for one particularly striking and frequent symptom of schizophrenia, thought insertion, no plausible account has been proposed in terms of the predictive-coding framework. Here we propose that thought insertion is due to an altered experience of thoughts as coming from “nowhere”, as is already indicated by the early 20th century phenomenological accounts by the early Heidelberg School of psychiatry. These accounts identified thought insertion as one of the self-disturbances (from German: “Ichstörungen”) of schizophrenia and used mescaline as a model-psychosis in healthy individuals to explore the possible mechanisms. The early Heidelberg School (Gruhle, Mayer-Gross, Beringer) first named and defined the self-disturbances, and proposed that thought insertion involves a disruption of the inner connectedness of thoughts and experiences, and a “becoming sensory” of those thoughts experienced as inserted. This account offers a novel way to integrate the phenomenology of thought insertion with the predictive coding framework. We argue that the altered experience of thoughts may be caused by a reduced precision of context-dependent predictions, relative to sensory precision. According to the principles of Bayesian inference, this reduced precision leads to increased prediction-error signals evoked by the neural activity that encodes thoughts. Thus, in analogy with the prediction-error related aberrant salience of external events that has been proposed previously, “internal” events such as thoughts (including volitions, emotions and memories) can also be associated with increased prediction-error signaling and are thus imbued

  11. Rotor Wake/Stator Interaction Noise Prediction Code Technical Documentation and User's Manual

    NASA Technical Reports Server (NTRS)

    Topol, David A.; Mathews, Douglas C.

    2010-01-01

    This report documents the improvements and enhancements made by Pratt & Whitney to two NASA programs which together calculate noise from a rotor wake/stator interaction. The code is a combination of subroutines from two NASA programs, with many new features added by Pratt & Whitney. To perform a calculation, V072 first uses a semi-empirical wake prediction to calculate the rotor wake characteristics at the stator leading edge. Results from the wake model are then automatically input into a rotor wake/stator interaction analytical noise prediction routine which calculates inlet and aft sound power levels for the blade-passage-frequency tones and their harmonics, along with the complex radial mode amplitudes. The code allows a noise calculation to be performed for a compressor rotor wake/stator interaction, a fan wake/FEGV interaction, or a fan wake/core stator interaction. This report is split into two parts: the first part discusses the technical documentation of the program as improved by Pratt & Whitney; the second part is a user's manual which describes how input files are created and how the code is run.

  12. Simulation study of HL-2A-like plasma using integrated predictive modeling code

    SciTech Connect

    Poolyarat, N.; Onjun, T.; Promping, J.

    2009-11-15

    Self-consistent simulations of HL-2A-like plasma are carried out using the 1.5D BALDUR integrated predictive modeling code. In these simulations, the core transport is predicted using a combination of the Multi-Mode (MMM95) anomalous core transport model and the NCLASS neoclassical transport model. The evolution of plasma current, temperature, and density is followed; consequently, the plasma current, temperature, and density profiles, as well as other plasma parameters, are obtained as the predictions of each simulation. It is found that the temperature and density profiles in these simulations are peaked near the plasma center. In addition, the sawtooth period is studied using the Porcelli model; it is found that before, during, and after electron cyclotron resonance heating (ECRH) operation the sawtooth period is approximately the same. It is also observed that the mixing radius of sawtooth crashes is reduced during ECRH operation.

  13. A modified prediction scheme of the H.264 multiview video coding to improve the decoder performance

    NASA Astrophysics Data System (ADS)

    Hamadan, Ayman M.; Aly, Hussein A.; Fouad, Mohamed M.; Dansereau, Richard M.

    2013-02-01

    In this paper, we present a modified inter-view prediction scheme for multiview video coding (MVC). With more inter-view prediction, the number of reference frames required to decode a single view increases. Consequently, the amount of data needed to decode a single view increases, impacting decoder performance. In this paper, we propose an MVC scheme that requires less inter-view prediction than the MVC standard scheme. The proposed scheme is implemented and tested on real multiview video sequences. Improvements are shown using the proposed scheme in terms of the average data size required either to decode a single view or to access any frame (i.e., random access), with comparable rate-distortion. It is compared to the MVC standard scheme and to another improved technique from the literature.

  14. Comparison of secondary flows predicted by a viscous code and an inviscid code with experimental data for a turning duct

    NASA Technical Reports Server (NTRS)

    Schwab, J. R.; Povinelli, L. A.

    1984-01-01

    A comparison of the secondary flows computed by the viscous Kreskovsky-Briley-McDonald code and the inviscid Denton code with benchmark experimental data for turning duct is presented. The viscous code is a fully parabolized space-marching Navier-Stokes solver while the inviscid code is a time-marching Euler solver. The experimental data were collected by Taylor, Whitelaw, and Yianneskis with a laser Doppler velocimeter system in a 90 deg turning duct of square cross-section. The agreement between the viscous and inviscid computations was generally very good for the streamwise primary velocity and the radial secondary velocity, except at the walls, where slip conditions were specified for the inviscid code. The agreement between both the computations and the experimental data was not as close, especially at the 60.0 deg and 77.5 deg angular positions within the duct. This disagreement was attributed to incomplete modelling of the vortex development near the suction surface.

  15. Comparison of secondary flows predicted by a viscous code and an inviscid code with experimental data for a turning duct

    NASA Technical Reports Server (NTRS)

    Schwab, J. R.; Povinelli, L. A.

    1983-01-01

    A comparison of the secondary flows computed by the viscous Kreskovsky-Briley-McDonald code and the inviscid Denton code with benchmark experimental data for turning duct is presented. The viscous code is a fully parabolized space-marching Navier-Stokes solver while the inviscid code is a time-marching Euler solver. The experimental data were collected by Taylor, Whitelaw, and Yianneskis with a laser Doppler velocimeter system in a 90 deg turning duct of square cross-section. The agreement between the viscous and inviscid computations was generally very good for the streamwise primary velocity and the radial secondary velocity, except at the walls, where slip conditions were specified for the inviscid code. The agreement between both the computations and the experimental data was not as close, especially at the 60.0 deg and 77.5 deg angular positions within the duct. This disagreement was attributed to incomplete modeling of the vortex development near the suction surface.

  16. Life Prediction for a CMC Component Using the NASALIFE Computer Code

    NASA Technical Reports Server (NTRS)

    Gyekenyesi, John Z.; Murthy, Pappu L. N.; Mital, Subodh K.

    2005-01-01

    The computer code, NASALIFE, was used to provide estimates for life of an SiC/SiC stator vane under varying thermomechanical loading conditions. The primary intention of this effort is to show how the computer code NASALIFE can be used to provide reasonable estimates of life for practical propulsion system components made of advanced ceramic matrix composites (CMC). Simple loading conditions provided readily observable and acceptable life predictions. Varying the loading conditions such that low cycle fatigue and creep were affected independently provided expected trends in the results for life due to varying loads and life due to creep. Analysis was based on idealized empirical data for the 9/99 Melt Infiltrated SiC fiber reinforced SiC.

  17. Adaptive evolution: evaluating empirical support for theoretical predictions

    PubMed Central

    Olson-Manning, Carrie F.; Wagner, Maggie R.; Mitchell-Olds, Thomas

    2013-01-01

    Adaptive evolution is shaped by the interaction of population genetics, natural selection and underlying network and biochemical constraints. Variation created by mutation, the raw material for evolutionary change, is translated into phenotypes by flux through metabolic pathways and by the topography and dynamics of molecular networks. Finally, the retention of genetic variation and the efficacy of selection depend on population genetics and demographic history. Emergent high-throughput experimental methods and sequencing technologies allow us to gather more evidence and to move beyond the theory in different systems and populations. Here we review the extent to which recent evidence supports long-established theoretical principles of adaptation. PMID:23154809

  18. Reduced adaptability, but no fundamental disruption, of norm-based face-coding mechanisms in cognitively able children and adolescents with autism.

    PubMed

    Rhodes, Gillian; Ewing, Louise; Jeffery, Linda; Avard, Eleni; Taylor, Libby

    2014-09-01

    Faces are adaptively coded relative to visual norms that are updated by experience. This coding is compromised in autism and the broader autism phenotype, suggesting that atypical adaptive coding of faces may be an endophenotype for autism. Here we investigate the nature of this atypicality, asking whether adaptive face-coding mechanisms are fundamentally altered, or simply less responsive to experience, in autism. We measured adaptive coding, using face identity aftereffects, in cognitively able children and adolescents with autism and neurotypical age- and ability-matched participants. We asked whether these aftereffects increase with adaptor identity strength as in neurotypical populations, or whether they show a different pattern indicating a more fundamental alteration in face-coding mechanisms. As expected, face identity aftereffects were reduced in the autism group, but they nevertheless increased with adaptor strength, like those of our neurotypical participants, consistent with norm-based coding of face identity. Moreover, their aftereffects correlated positively with face recognition ability, consistent with an intact functional role for adaptive coding in face recognition ability. We conclude that adaptive norm-based face-coding mechanisms are basically intact in autism, but are less readily calibrated by experience.

  19. Predicted effects of sensorineural hearing loss on across-fiber envelope coding in the auditory nerve

    PubMed Central

    Swaminathan, Jayaganesh; Heinz, Michael G.

    2011-01-01

    Cross-channel envelope correlations are hypothesized to influence speech intelligibility, particularly in adverse conditions. Acoustic analyses suggest speech envelope correlations differ for syllabic and phonemic ranges of modulation frequency. The influence of cochlear filtering was examined here by predicting cross-channel envelope correlations in different speech modulation ranges for normal and impaired auditory-nerve (AN) responses. Neural cross-correlation coefficients quantified across-fiber envelope coding in syllabic (0–5 Hz), phonemic (5–64 Hz), and periodicity (64–300 Hz) modulation ranges. Spike trains were generated from a physiologically based AN model. Correlations were also computed using the model with selective hair-cell damage. Neural predictions revealed that envelope cross-correlation decreased with increased characteristic-frequency separation for all modulation ranges (with greater syllabic-envelope correlation than phonemic or periodicity). Syllabic envelope was highly correlated across many spectral channels, whereas phonemic and periodicity envelopes were correlated mainly between adjacent channels. Outer-hair-cell impairment increased the degree of cross-channel correlation for phonemic and periodicity ranges for speech in quiet and in noise, thereby reducing the number of independent neural information channels for envelope coding. In contrast, outer-hair-cell impairment was predicted to decrease cross-channel correlation for syllabic envelopes in noise, which may partially account for the reduced ability of hearing-impaired listeners to segregate speech in complex backgrounds. PMID:21682421
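
    A simplified waveform-level analogue of the cross-channel envelope correlation can be sketched as follows; the study itself computes neural cross-correlation coefficients from model auditory-nerve spike trains, and the filter choices and parameters here are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def envelope_correlation(ch1, ch2, fs, band=(5.0, 64.0)):
    """Correlation between two frequency channels' envelopes restricted to
    one modulation range, e.g. (5, 64) Hz for the phonemic range.

    ch1, ch2: narrowband signals from two cochlear-like filters, sampled
    at fs Hz."""
    env1 = np.abs(hilbert(ch1))          # Hilbert envelopes
    env2 = np.abs(hilbert(ch2))
    b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype='band')
    m1 = filtfilt(b, a, env1)            # restrict to the modulation band
    m2 = filtfilt(b, a, env2)
    return np.corrcoef(m1, m2)[0, 1]
```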

  20. A Predictive Coding Perspective on Beta Oscillations during Sentence-Level Language Comprehension

    PubMed Central

    Lewis, Ashley G.; Schoffelen, Jan-Mathijs; Schriefers, Herbert; Bastiaansen, Marcel

    2016-01-01

    Oscillatory neural dynamics have been steadily receiving more attention as a robust and temporally precise signature of network activity related to language processing. We have recently proposed that oscillatory dynamics in the beta and gamma frequency ranges measured during sentence-level comprehension might be best explained from a predictive coding perspective. Under our proposal we related beta oscillations to both the maintenance/change of the neural network configuration responsible for the construction and representation of sentence-level meaning, and to top–down predictions about upcoming linguistic input based on that sentence-level meaning. Here we zoom in on these particular aspects of our proposal, and discuss both old and new supporting evidence. Finally, we present some preliminary magnetoencephalography data from an experiment comparing Dutch subject- and object-relative clauses that was specifically designed to test our predictive coding framework. Initial results support the first of the two suggested roles for beta oscillations in sentence-level language comprehension. PMID:26973500

  1. Predicting Adaptive Functioning of Mentally Retarded Persons in Community Settings.

    ERIC Educational Resources Information Center

    Hull, John T.; Thompson, Joy C.

    1980-01-01

    The impact of a variety of individual, residential, and community variables on adaptive functioning of 369 retarded persons (18 to 73 years old) was examined using a multiple regression analysis. Individual characteristics (especially IQ) accounted for 21 percent of the variance, while environmental variables, primarily those related to…

  2. Predicting Early Adolescent Gang Involvement from Middle School Adaptation

    ERIC Educational Resources Information Center

    Dishion, Thomas J.; Nelson, Sarah E.; Yasui, Miwa

    2005-01-01

    This study examined the role of adaptation in the first year of middle school (Grade 6, age 11) to affiliation with gangs by the last year of middle school (Grade 8, age 13). The sample consisted of 714 European American (EA) and African American (AA) boys and girls. Specifically, academic grades, reports of antisocial behavior, and peer relations…

  3. Genomic islands predict functional adaptation in marine actinobacteria

    SciTech Connect

    Penn, Kevin; Jenkins, Caroline; Nett, Markus; Udwary, Daniel; Gontang, Erin; McGlinchey, Ryan; Foster, Brian; Lapidus, Alla; Podell, Sheila; Allen, Eric; Moore, Bradley; Jensen, Paul

    2009-04-01

    Linking functional traits to bacterial phylogeny remains a fundamental but elusive goal of microbial ecology [1]. Without this information, it becomes impossible to resolve meaningful units of diversity and the mechanisms by which bacteria interact with each other and adapt to environmental change. Ecological adaptations among bacterial populations have been linked to genomic islands, strain-specific regions of DNA that house functionally adaptive traits [2]. In the case of environmental bacteria, these traits are largely inferred from bioinformatic or gene expression analyses [2], thus leaving few examples in which the functions of island genes have been experimentally characterized. Here we report the complete genome sequences of Salinispora tropica and S. arenicola, the first cultured, obligate marine Actinobacteria [3]. These two species inhabit benthic marine environments and dedicate 8-10 percent of their genomes to the biosynthesis of secondary metabolites. Despite a close phylogenetic relationship, 25 of 37 secondary metabolic pathways are species-specific and located within 21 genomic islands, thus providing new evidence linking secondary metabolism to ecological adaptation. Species-specific differences are also observed in CRISPR sequences, suggesting that variations in phage immunity provide fitness advantages that contribute to the cosmopolitan distribution of S. arenicola [4]. The two Salinispora genomes have evolved by complex processes that include the duplication and acquisition of secondary metabolite genes, the products of which provide immediate opportunities for molecular diversification and ecological adaptation. Evidence that secondary metabolic pathways are exchanged by horizontal gene transfer (HGT) yet are fixed among globally distributed populations [5] supports a functional role for their products and suggests that pathway acquisition represents a previously unrecognized force driving bacterial diversification.

  4. The prediction of EEG signals using a feedback-structured adaptive rational function filter.

    PubMed

    Kim, H S; Kim, T S; Choi, Y H; Park, S H

    2000-08-01

    In this article, we present a feedback-structured adaptive rational function filter based on a recursive modified Gram-Schmidt algorithm and apply it to the prediction of an EEG signal, which has nonlinear and nonstationary characteristics. For the evaluation of the prediction performance, the proposed filter is compared with other methods; both single-step and multi-step prediction are considered for short-term prediction, and the prediction performance is assessed in terms of normalized mean square error. The experimental results show that the proposed filter performs better than the other methods considered for short-term prediction of EEG signals.
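
    For illustration, single-step prediction and its normalized-mean-square-error score can be computed with a plain LMS linear predictor standing in for the paper's feedback-structured rational function filter; the filter order and step size are arbitrary choices.

```python
import numpy as np

def nmse(true, pred):
    """Normalized mean square error used above to score predictions."""
    true, pred = np.asarray(true, float), np.asarray(pred, float)
    return np.mean((true - pred) ** 2) / np.var(true)

def lms_single_step(x, order=6, mu=0.05):
    """Single-step prediction of x[n] from its `order` previous samples
    with an LMS linear predictor. Assumes x is roughly unit-variance
    (rescale first, or LMS may diverge)."""
    x = np.asarray(x, float)
    w = np.zeros(order)
    preds = np.zeros(len(x))
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]            # most recent sample first
        preds[n] = w @ u
        w += mu * (x[n] - preds[n]) * u     # LMS weight update
    return preds

# score only the samples that were actually predicted:
# nmse(x[order:], lms_single_step(x)[order:])
```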

  5. Phase-shifting profilometry combined with Gray-code patterns projection: unwrapping error removal by an adaptive median filter.

    PubMed

    Zheng, Dongliang; Da, Feipeng; Kemao, Qian; Seah, Hock Soon

    2017-03-06

    Phase-shifting profilometry combined with Gray-code patterns projection has been widely used for 3D measurement. In this technique, a phase-shifting algorithm is used to calculate the wrapped phase, and a set of Gray-code binary patterns is used to determine the unwrapped phase. In a real measurement, the captured Gray-code patterns are no longer binary, resulting in phase unwrapping errors at a large number of erroneous pixels. Although this problem has received attention and has been well resolved by a few methods, it remains challenging when a measured object has step-heights and the captured patterns contain invalid pixels. To remove unwrapping errors while preserving step-heights, this paper proposes a method based on an adaptive median filter. Both simulations and experiments demonstrate its effectiveness.
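
    One way to picture the correction step is an adaptive median filter applied to the Gray-code fringe-order map, growing the window only where disagreements persist so isolated unwrapping errors are removed while genuine step-heights survive. This is a simplified sketch, not the authors' exact rule.

```python
import numpy as np
from scipy.ndimage import median_filter

def correct_fringe_orders(k, mask, max_win=9):
    """Adaptive median filtering of a Gray-code fringe-order map.

    k: integer fringe-order map from the decoded Gray-code patterns.
    mask: boolean map of valid pixels; invalid pixels are left untouched.
    The window grows (3x3, 5x5, ...) until no valid pixel disagrees with
    its local median, replacing only the disagreeing pixels each pass."""
    out = k.astype(float).copy()
    for win in range(3, max_win + 1, 2):
        med = median_filter(out, size=win)
        bad = mask & (np.abs(out - med) >= 1)   # order differs from median
        if not bad.any():
            break
        out[bad] = med[bad]
    return np.rint(out).astype(int)
```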

  6. Fast Prediction of HCCI Combustion with an Artificial Neural Network Linked to a Fluid Mechanics Code

    SciTech Connect

    Aceves, S M; Flowers, D L; Chen, J; Babaimopoulos, A

    2006-08-29

    We have developed an artificial neural network (ANN) based combustion model and have integrated it into a fluid mechanics code (KIVA3V) to produce a new analysis tool (titled KIVA3V-ANN) that can yield accurate HCCI predictions at very low computational cost. The neural network predicts ignition delay as a function of operating parameters (temperature, pressure, equivalence ratio and residual gas fraction). KIVA3V-ANN keeps track of the time history of the ignition delay during the engine cycle to evaluate the ignition integral and predict ignition for each computational cell. After a cell ignites, chemistry becomes active, and a two-step chemical kinetic mechanism predicts composition and heat generation in the ignited cells. KIVA3V-ANN has been validated by comparison with isooctane HCCI experiments in two different engines. The neural network provides reasonable predictions for HCCI combustion and emissions that, although typically not as good as obtained with the more physically representative multi-zone model, are obtained at a much reduced computational cost. KIVA3V-ANN can perform reasonably accurate HCCI calculations while requiring only 10% more computational effort than a motored KIVA3V run. It is therefore considered a valuable tool for evaluation of engine maps or other performance analysis tasks requiring multiple individual runs.
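
    The per-cell ignition bookkeeping described above amounts to a Livengood-Wu-style ignition integral: accumulate the reciprocal of the predicted ignition delay over the cycle and declare ignition when the integral reaches one. In this sketch, tau_history stands for the ANN's predicted delays along the cycle and is simply an input array.

```python
import numpy as np

def ignition_reached(tau_history, dt):
    """Ignition-integral test for one computational cell.

    tau_history: ignition delays tau(T, p, phi, RGF) predicted at each
    time step (here assumed precomputed); dt: time-step size.
    Returns (ignited?, index of the first ignited step or None)."""
    integral = np.cumsum(dt / np.asarray(tau_history, float))
    ignited = integral >= 1.0                 # cell ignites when sum hits 1
    first = int(np.argmax(ignited)) if ignited.any() else None
    return ignited.any(), first
```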

  7. Adaptive remeshing for ductile fracture prediction in metal forming

    NASA Astrophysics Data System (ADS)

    Borouchaki, Houman; Cherouat, Abdelhakim; Laug, Patrick; Saanouni, Khemais

    2002-10-01

    The analysis of mechanical structures using the Finite Element Method in the framework of large elastoplastic strain, needs frequent remeshing of the deformed domain during computation. Indeed, the remeshing is due to the large geometrical distortion of finite elements and the adaptation to the physical behavior of the solution. This paper gives the necessary steps to remesh a mechanical structure during large elastoplastic deformations with damage. An important part of this process is constituted by geometrical and physical error estimates. The proposed method is integrated in a computational environment using the ABAQUS/Explicit solver and the BL2D-V2 adaptive mesher. To cite this article: H. Borouchaki et al., C. R. Mecanique 330 (2002) 709-716.

  8. Almost Sure Convergence of Adaptive Identification Prediction and Control Algorithms.

    DTIC Science & Technology

    1981-03-01

    Prediction and tracking errors are shown to converge, in the Cesaro sense, to the performance achievable with known plant parameters. An additional regularity assumption on the signal model establishes the convergence of these errors, and also that of the tracking error for the adaptive controller. In the references cited [7-10], convergence is established only in the Cesaro sense, leaving unanswered the question of almost sure convergence.

  9. Episodic memories predict adaptive value-based decision-making.

    PubMed

    Murty, Vishnu P; FeldmanHall, Oriel; Hunter, Lindsay E; Phelps, Elizabeth A; Davachi, Lila

    2016-05-01

    Prior research illustrates that memory can guide value-based decision-making. For example, previous work has implicated both working memory and procedural memory (i.e., reinforcement learning) in guiding choice. However, other types of memories, such as episodic memory, may also influence decision-making. Here we test the role of episodic memory, specifically item versus associative memory, in supporting value-based choice. Participants completed a task where they first learned the value associated with trial-unique lotteries. After a short delay, they completed a decision-making task where they could choose to reengage with previously encountered lotteries or with new, never-before-seen lotteries. Finally, participants completed a surprise memory test for the lotteries and their associated values. Results indicate that participants chose to reengage more often with lotteries that had resulted in high versus low rewards. Critically, participants not only formed detailed, associative memories for the reward values coupled with individual lotteries, but also exhibited adaptive decision-making only when they had intact associative memory. We further found that the relationship between adaptive choice and associative memory generalized to more complex, ecologically valid choice behavior, such as social decision-making. However, individuals more strongly encoded experiences of social violations, such as being treated unfairly, suggesting a bias in how individuals form associative memories within social contexts. Together, these findings provide an important integration of the episodic memory and decision-making literatures to better understand key mechanisms supporting adaptive behavior.

  10. Adaptive and predictive control of a simulated robot arm.

    PubMed

    Tolu, Silvia; Vanegas, Mauricio; Garrido, Jesús A; Luque, Niceto R; Ros, Eduardo

    2013-06-01

    In this work, a basic cerebellar neural layer and a machine learning engine are embedded in a recurrent loop which avoids dealing with the motor error or distal error problem. The presented approach learns motor control based on available sensor error estimates (position, velocity, and acceleration) without explicitly knowing the motor errors. The paper focuses on how to decompose the input into different components in order to facilitate the learning process using an automatic incremental learning model (the locally weighted projection regression (LWPR) algorithm). LWPR incrementally learns the forward model of the robot arm and provides the cerebellar module with optimal pre-processed signals. We present a recurrent adaptive control architecture in which an adaptive feedback (AF) controller guarantees precise, compliant, and stable control during the manipulation of objects. Therefore, this approach efficiently integrates a bio-inspired module (cerebellar circuitry) with a machine learning component (LWPR). The cerebellar-LWPR synergy makes the robot adaptable to changing conditions. We evaluate how this scheme scales to robot arms with a high number of degrees of freedom (DOFs) using a simulated model of a robot arm from the new generation of light weight robots (LWRs).

  11. CCFL in hot legs and steam generators and its prediction with the CATHARE code

    SciTech Connect

    Geffraye, G.; Bazin, P.; Pichon, P.

    1995-09-01

    This paper presents a study of Counter-Current Flow Limitation (CCFL) prediction in hot legs and steam generators (SG) in both system test facilities and pressurized water reactors. Experimental data are analyzed, particularly the recent MHYRESA test data. Geometrical and scale effects on the flooding behavior are shown. The CATHARE code modelling problems concerning CCFL prediction are discussed. A method is developed which gives the user the possibility of controlling the flooding limit at a given location. In order to minimize the user effect, a methodology is proposed for the case of a calculation with counter-current flow between the upper plenum and the SG U-tubes. The following questions have to be made clear for the user: when to use the CATHARE CCFL option, which correlation to use, and where to locate the flooding limit.

  12. Improved inter-layer prediction for light field content coding with display scalability

    NASA Astrophysics Data System (ADS)

    Conti, Caroline; Ducla Soares, Luís; Nunes, Paulo

    2016-09-01

    Light field imaging based on microlens arrays - also known as plenoptic, holoscopic, and integral imaging - has recently risen as a feasible and prospective technology due to its ability to support functionalities not straightforwardly available in conventional imaging systems, such as post-production refocusing and depth-of-field changing. However, to gradually reach the consumer market and to provide interoperability with current 2D and 3D representations, a display scalable coding solution is essential. In this context, this paper proposes an improved display scalable light field codec comprising a three-layer hierarchical coding architecture (previously proposed by the authors) that provides interoperability with 2D (Base Layer) and 3D stereo and multiview (First Layer) representations, while the Second Layer supports the complete light field content. To further improve the compression performance, novel exemplar-based inter-layer coding tools are proposed here for the Second Layer, namely: (i) an inter-layer reference picture construction relying on an exemplar-based optimization algorithm for texture synthesis, and (ii) a direct prediction mode based on exemplar texture samples from lower layers. Experimental results show that the proposed solution performs better than the tested benchmark solutions, including the authors' previous scalable codec.

  13. IN-MACA-MCC: Integrated Multiple Attractor Cellular Automata with Modified Clonal Classifier for Human Protein Coding and Promoter Prediction.

    PubMed

    Pokkuluri, Kiran Sree; Inampudi, Ramesh Babu; Nedunuri, S S S N Usha Devi

    2014-01-01

    Protein coding and promoter region predictions are very important challenges in bioinformatics (Attwood and Teresa, 2000). The identification of these regions plays a crucial role in understanding genes. Many novel computational and mathematical methods have been introduced, and existing methods are being refined, for predicting the two regions separately; still, there is scope for improvement. We propose a classifier built with MACA (multiple attractor cellular automata) and MCC (modified clonal classifier) to predict both regions with a single classifier. The proposed classifier is trained and tested with the Fickett and Tung (1992) datasets for protein coding region prediction for DNA sequences of lengths 54, 108, and 162, and with MMCRI datasets for DNA sequences of lengths 252 and 354. It is also trained and tested with promoter sequences from the DBTSS (Yamashita et al., 2006) dataset and non-promoters from the EID (Saxonov et al., 2000) and UTRdb (Pesole et al., 2002) datasets. The proposed model can predict both regions with an average accuracy of 90.5% for promoter and 89.6% for protein coding region predictions. The specificity and sensitivity values of promoter and protein coding region predictions are 0.89 and 0.92, respectively.
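
    The reported specificity and sensitivity follow from the usual confusion-matrix definitions; a minimal helper, with example counts chosen only to reproduce figures like those above, is:

```python
def binary_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity, and specificity of a two-class predictor
    from its confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return accuracy, sensitivity, specificity

# e.g. sensitivity 0.92 and specificity 0.89 arise from counts like:
# binary_metrics(tp=92, tn=89, fp=11, fn=8)
```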

  14. Predictability is necessary for closed-loop visual feedback delay adaptation.

    PubMed

    Rohde, Marieke; van Dam, Loes C J; Ernst, Marc O

    2014-03-05

    In the case of delayed visual feedback during visuomotor tasks, as in some sluggish computer games, humans can modulate their behavior to compensate for the delay. However, opinions on the nature of this compensation diverge. Some studies suggest that humans adapt to feedback delays with lasting changes in motor behavior (aftereffects) and a recalibration of time perception. Other studies have shown little or no evidence for such semipermanent recalibration in the temporal domain. We hypothesize that predictability of the reference signal (the target to be tracked) is necessary for semipermanent delay adaptation. To test this hypothesis, we trained participants with a 200 ms visual feedback delay in a visually guided manual tracking task, varying the predictability of the reference signal between conditions but keeping reference motion and feedback delay constant. In Experiment 1, we focused on motor behavior. Only training in the predictable condition brings about all of the adaptive changes and aftereffects expected from delay adaptation. In Experiment 2, we used a synchronization task to investigate perceived simultaneity (perceptuomotor learning). Supporting the hypothesis, participants recalibrated subjective visuomotor simultaneity only when trained in the predictable condition. Such a shift in perceived simultaneity was also observed in Experiment 3, using an interval estimation task. These results show that delay adaptation in motor control can modulate the perceived temporal alignment of vision and kinesthetically sensed movement. The coadaptation of motor prediction and target prediction (reference extrapolation) seems necessary for such genuine delay adaptation. This offers an explanation for divergent results in the literature.

  15. Neural evidence for predictive coding in auditory cortex during speech production.

    PubMed

    Okada, Kayoko; Matchin, William; Hickok, Gregory

    2017-04-10

    Recent models of speech production suggest that motor commands generate forward predictions of the auditory consequences of those commands, that these forward predictions can be used to monitor and correct speech output, and that this system is hierarchically organized (Hickok, Houde, & Rong, Neuron, 69(3), 407-422, 2011; Pickering & Garrod, Behavioral and Brain Sciences, 36(4), 329-347, 2013). Recent psycholinguistic research has shown that internally generated speech (i.e., imagined speech) produces different types of errors than does overt speech (Oppenheim & Dell, Cognition, 106(1), 528-537, 2008; Oppenheim & Dell, Memory & Cognition, 38(8), 1147-1160, 2010). These studies suggest that articulated speech might involve predictive coding at additional levels relative to imagined speech. The current fMRI experiment investigates neural evidence of predictive coding in speech production. Twenty-four participants from UC Irvine were recruited for the study. Participants were scanned while they were visually presented with a sequence of words that they reproduced in sync with a visual metronome. On each trial, they were cued either to silently articulate the sequence or to imagine the sequence without overt articulation. As expected, silent articulation and imagined speech both engaged a left-hemisphere network previously implicated in speech production. A contrast of silent articulation with imagined speech revealed greater activation for articulated speech in inferior frontal cortex, premotor cortex, and the insula in the left hemisphere, consistent with greater articulatory load. Although both conditions were silent, this contrast also produced significantly greater activation in auditory cortex in dorsal superior temporal gyrus in both hemispheres. We suggest that these activations reflect forward predictions arising from additional levels of the perceptual/motor hierarchy that are involved in monitoring the intended speech output.

  16. Predictive coding in autism spectrum disorder and attention deficit hyperactivity disorder

    PubMed Central

    Gonzalez-Gadea, Maria Luz; Chennu, Srivas; Bekinschtein, Tristan A.; Rattazzi, Alexia; Beraudi, Ana; Tripicchio, Paula; Moyano, Beatriz; Soffita, Yamila; Steinberg, Laura; Adolfi, Federico; Sigman, Mariano; Marino, Julian; Manes, Facundo

    2015-01-01

    Predictive coding has been proposed as a framework to understand neural processes in neuropsychiatric disorders. We used this approach to describe mechanisms responsible for attentional abnormalities in autism spectrum disorder (ASD) and attention deficit hyperactivity disorder (ADHD). We monitored brain dynamics of 59 children (8–15 yr old) who had ASD or ADHD or who were control participants via high-density electroencephalography. We performed analysis at the scalp and source-space levels while participants listened to standard and deviant tone sequences. Through task instructions, we manipulated top-down expectation by presenting expected and unexpected deviant sequences. Children with ASD showed reduced superior frontal cortex (FC) responses to unexpected events but increased dorsolateral prefrontal cortex (PFC) activation to expected events. In contrast, children with ADHD exhibited reduced cortical responses in superior FC to expected events but strong PFC activation to unexpected events. Moreover, neural abnormalities were associated with specific control mechanisms, namely, inhibitory control in ASD and set-shifting in ADHD. Based on the predictive coding account, top-down expectation abnormalities could be attributed to a disproportionate reliance (precision) allocated to prior beliefs in ASD and to sensory input in ADHD. PMID:26311184

  17. Predictive coding in autism spectrum disorder and attention deficit hyperactivity disorder.

    PubMed

    Gonzalez-Gadea, Maria Luz; Chennu, Srivas; Bekinschtein, Tristan A; Rattazzi, Alexia; Beraudi, Ana; Tripicchio, Paula; Moyano, Beatriz; Soffita, Yamila; Steinberg, Laura; Adolfi, Federico; Sigman, Mariano; Marino, Julian; Manes, Facundo; Ibanez, Agustin

    2015-11-01

    Predictive coding has been proposed as a framework to understand neural processes in neuropsychiatric disorders. We used this approach to describe mechanisms responsible for attentional abnormalities in autism spectrum disorder (ASD) and attention deficit hyperactivity disorder (ADHD). We monitored brain dynamics of 59 children (8-15 yr old) who had ASD or ADHD or who were control participants via high-density electroencephalography. We performed analysis at the scalp and source-space levels while participants listened to standard and deviant tone sequences. Through task instructions, we manipulated top-down expectation by presenting expected and unexpected deviant sequences. Children with ASD showed reduced superior frontal cortex (FC) responses to unexpected events but increased dorsolateral prefrontal cortex (PFC) activation to expected events. In contrast, children with ADHD exhibited reduced cortical responses in superior FC to expected events but strong PFC activation to unexpected events. Moreover, neural abnormalities were associated with specific control mechanisms, namely, inhibitory control in ASD and set-shifting in ADHD. Based on the predictive coding account, top-down expectation abnormalities could be attributed to a disproportionate reliance (precision) allocated to prior beliefs in ASD and to sensory input in ADHD.
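
    The precision account in the two records above can be made concrete with a toy Bayesian update: a posterior belief is a blend of the prior and the sensory input, weighted by their precisions (inverse variances). A minimal sketch; the precision values are hypothetical illustrations, not parameters fitted to the study's EEG data:

      # Posterior mean of a Gaussian belief: a precision-weighted average of the
      # prior belief and the sensory observation.
      def posterior_mean(prior, obs, pi_prior, pi_obs):
          return (pi_prior * prior + pi_obs * obs) / (pi_prior + pi_obs)

      prior, obs = 0.0, 1.0
      # ASD-like weighting: prior precision dominates, input barely updates beliefs.
      print(posterior_mean(prior, obs, pi_prior=4.0, pi_obs=1.0))  # 0.2
      # ADHD-like weighting: sensory precision dominates.
      print(posterior_mean(prior, obs, pi_prior=1.0, pi_obs=4.0))  # 0.8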

  18. Computer code to predict the heat of explosion of high energy materials.

    PubMed

    Muthurajan, H; Sivabalan, R; Pon Saravanan, N; Talawar, M B

    2009-01-30

    The computational approach to the thermochemical changes involved in the explosion of high energy materials (HEMs) vis-à-vis their molecular structure helps HEMs chemists and engineers predict important thermodynamic parameters such as the heat of explosion. Such computer-aided design is useful for predicting the performance of a given HEM as well as for conceiving futuristic high energy molecules with significant potential in the field of explosives and propellants. The software code LOTUSES, developed by the authors, predicts various characteristics of HEMs such as explosion products (including balanced explosion reactions), density, velocity of detonation, CJ pressure, etc. The new computational approach described in this paper allows the prediction of the heat of explosion (ΔHe) without any experimental data for different HEMs, with results comparable to experimental values reported in the literature. The new algorithm, which does not require any complex input parameters, is incorporated in LOTUSES (version 1.5) and the results are presented in this paper. Linear regression analysis of all data points yields the correlation coefficient R² = 0.9721 with the linear equation y = 0.9262x + 101.45. The correlation coefficient of 0.9721 shows that the computed values are in good agreement with experimental values and useful for rapid hazard assessment of energetic materials.
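
    The validation reported above reduces to an ordinary least-squares fit of experimental against computed heats of explosion. A minimal sketch of that step; the arrays are hypothetical placeholders, not the paper's dataset:

      import numpy as np

      # Hypothetical computed vs. experimental heats of explosion (kJ/kg).
      computed = np.array([4200.0, 5100.0, 5900.0, 6300.0])
      experimental = np.array([4000.0, 4850.0, 5600.0, 6000.0])

      slope, intercept = np.polyfit(computed, experimental, 1)   # fit y = ax + b
      r2 = np.corrcoef(computed, experimental)[0, 1] ** 2        # goodness of fit
      print(f"y = {slope:.4f}x + {intercept:.2f}, R^2 = {r2:.4f}")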

  19. Affinity regression predicts the recognition code of nucleic acid binding proteins

    PubMed Central

    Pelossof, Raphael; Singh, Irtisha; Yang, Julie L.; Weirauch, Matthew T.; Hughes, Timothy R.; Leslie, Christina S.

    2016-01-01

    Predicting the affinity profiles of nucleic acid-binding proteins directly from the protein sequence is a major unsolved problem. We present a statistical approach for learning the recognition code of a family of transcription factors (TFs) or RNA-binding proteins (RBPs) from high-throughput binding assays. Our method, called affinity regression, trains on protein binding microarray (PBM) or RNA compete experiments to learn an interaction model between proteins and nucleic acids, using only protein domain and probe sequences as inputs. By training on mouse homeodomain PBM profiles, our model correctly identifies residues that confer DNA-binding specificity and accurately predicts binding motifs for an independent set of divergent homeodomains. Similarly, learning from RNA compete profiles for diverse RBPs, our model can predict the binding affinities of held-out proteins and identify key RNA-binding residues. More broadly, we envision applying our method to model and predict biological interactions in any setting where there is a high-throughput ‘affinity’ readout. PMID:26571099
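
    The interaction model at the heart of affinity regression can be read as a bilinear form: binding intensities Y (probes × proteins) are approximated by D W Pᵀ, with D the probe sequence features, P the protein domain features, and W a learned interaction matrix. A minimal ridge-regularized sketch on random placeholder data; the dimensions and the way the regularization is folded into each Gram matrix are assumptions, not the authors' implementation:

      import numpy as np

      rng = np.random.default_rng(0)
      D = rng.normal(size=(2000, 64))   # probe sequence features (e.g., k-mer counts)
      P = rng.normal(size=(30, 40))     # protein domain sequence features
      Y = rng.normal(size=(2000, 30))   # binding intensities (probes x proteins)

      # Minimize ||D W P^T - Y||^2 with ridge terms folded into each Gram matrix,
      # giving the closed-form solve A W B = C.
      lam = 1.0
      A = D.T @ D + lam * np.eye(D.shape[1])
      B = P.T @ P + lam * np.eye(P.shape[1])
      C = D.T @ Y @ P
      W = np.linalg.solve(A, np.linalg.solve(B, C.T).T)   # W = A^-1 C B^-1

      y_new = D @ W @ P[0]   # predicted binding profile of one protein from features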

  20. Model-Biased, Data-Driven Adaptive Failure Prediction

    NASA Technical Reports Server (NTRS)

    Leen, Todd K.

    2004-01-01

    This final report, which contains a research summary and a viewgraph presentation, addresses clustering and data simulation techniques for failure prediction. The researchers applied their techniques to both helicopter gearbox anomaly detection and segmentation of Earth Observing System (EOS) satellite imagery.

  1. Reading the second code: mapping epigenomes to understand plant growth, development, and adaptation to the environment.

    PubMed

    2012-06-01

    We have entered a new era in agricultural and biomedical science made possible by remarkable advances in DNA sequencing technologies. The complete sequence of an individual's set of chromosomes (collectively, its genome) provides a primary genetic code for what makes that individual unique, just as the contents of every personal computer reflect the unique attributes of its owner. But a second code, composed of "epigenetic" layers of information, affects the accessibility of the stored information and the execution of specific tasks. Nature's second code is enigmatic and must be deciphered if we are to fully understand and optimize the genetic potential of crop plants. The goal of the Epigenomics of Plants International Consortium is to crack this second code, and ultimately master its control, to help catalyze a new green revolution.

  2. Development of an Adaptive Boundary-Fitted Coordinate Code for Use in Coastal and Estuarine Areas.

    DTIC Science & Technology

    1985-09-01

    Miscellaneous Paper HL-80-3, US Army Engineer Waterways Experiment Station, Vicksburg, Miss. Johnson, B. H., Thompson, J. F., and Baker, A. J. 1984. "A..." prepared for CERC, US Army Engineer Waterways Experiment Station, Vicksburg, Miss. Thompson, J. F. 1983. "A Boundary-Fitted Coordinate Code for..." Vol 1. Thompson, J. F., Thames, F. C., and Mastin, C. W. 1977. "TOMCAT - A Code for Numerical Generation Systems on Fields Containing Any Number of

  3. Adaptive reliance on the most stable sensory predictions enhances perceptual feature extraction of moving stimuli.

    PubMed

    Kumar, Neeraj; Mutha, Pratik K

    2016-03-01

    The prediction of the sensory outcomes of action is thought to be useful for distinguishing self- vs. externally generated sensations, correcting movements when sensory feedback is delayed, and learning predictive models for motor behavior. Here, we show that aspects of another fundamental function, perception, are enhanced when they entail the contribution of predicted sensory outcomes and that this enhancement relies on the adaptive use of the most stable predictions available. We combined a motor-learning paradigm that imposes new sensory predictions with a dynamic visual search task to first show that perceptual feature extraction of a moving stimulus is poorer when it is based on sensory feedback that is misaligned with those predictions. This was possible because our novel experimental design allowed us to override the "natural" sensory predictions present when any action is performed and separately examine the influence of these two sources on perceptual feature extraction. We then show that if the new predictions induced via motor learning are unreliable, subjects do not simply fall back on sensory information for perceptual judgments, as is conventionally thought, but adaptively transition to using other stable sensory predictions to maintain greater accuracy in those judgments. Finally, we show that when sensory predictions are not modified at all, these judgments are sharper when subjects combine their natural predictions with sensory feedback. Collectively, our results highlight the crucial contribution of sensory predictions to perception and also suggest that the brain intelligently integrates the most stable predictions available with sensory information to maintain high fidelity in perceptual decisions.

  4. Evaluation of damage-induced permeability using a three-dimensional Adaptive Continuum/Discontinuum Code (AC/DC)

    NASA Astrophysics Data System (ADS)

    Dedecker, Fabian; Cundall, Peter; Billaux, Daniel; Groeger, Torsten

    Digging a shaft or drift inside a rock mass is a common practice in civil engineering when a transport route, such as a motorway or railway tunnel, or a storage shaft is to be built. In most cases, the consequences of the disturbance on the medium must be known in order to estimate the behaviour of the disturbed rock mass. Indeed, excavating part of the rock causes a new distribution of the stress field around the excavation, which can lead to micro-cracking and even to the failure of some rock volume in the vicinity of the shaft. Consequently, the formed micro-cracks modify the mechanical and hydraulic properties of the rock. In this paper, we present an original method for the evaluation of damage-induced permeability. Itasca has developed and used discontinuum models to study rock damage by building particle assemblies and checking the breakage of bonds under stress. However, such models are limited in size by the very large number of particles needed to model even a comparatively small volume of rock. In fact, a large part of most models never experiences large strains and does not require the accurate description of large-strain/damage/post-peak behaviour afforded by a discontinuum model. Thus, a large model frequently can be separated into a strongly strained “core” area to be represented by a discontinuum and a peripheral area for which continuum zones would be adequate. Based on this observation, Itasca has developed a coupled, three-dimensional, continuum/discontinuum modelling approach. The new approach, termed Adaptive Continuum/Discontinuum Code (AC/DC), is based on the use of a periodic discontinuum “base brick” for which more or less simplified continuum equivalents are derived. Depending on the level of deformation in each part of the model, the AC/DC code can dynamically select the appropriate brick type to be used. In this paper, we apply the new approach to an excavation performed in the Bure site, at which the French nuclear waste agency

  5. Genetic algorithm based adaptive neural network ensemble and its application in predicting carbon flux

    USGS Publications Warehouse

    Xue, Y.; Liu, S.; Hu, Y.; Yang, J.; Chen, Q.

    2007-01-01

    To improve prediction accuracy, a Genetic Algorithm based Adaptive Neural Network Ensemble (GA-ANNE) is presented. Intersections are allowed between different training sets based on fuzzy clustering analysis, which ensures the diversity as well as the accuracy of the individual Neural Networks (NNs). Moreover, to improve the accuracy of the adaptive weights of individual NNs, a GA is used to optimize the cluster centers. Empirical results in predicting the carbon flux of Duke Forest reveal that GA-ANNE can predict carbon flux more accurately than a Radial Basis Function Neural Network (RBFNN), a bagging NN ensemble, and ANNE. © 2007 IEEE.
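
    The GA component can be sketched compactly. For brevity, the toy GA below tunes the ensemble's combination weights on a validation set rather than the cluster centers of the fuzzy clustering step; the member predictions, targets, and GA parameters are hypothetical placeholders:

      import numpy as np

      rng = np.random.default_rng(1)
      n_models, n_samples = 5, 200
      preds = rng.normal(size=(n_models, n_samples))  # member predictions (validation)
      y = rng.normal(size=n_samples)                  # validation targets

      def fitness(w):
          """Negative validation MSE of the weighted ensemble (higher is better)."""
          w = np.abs(w) / np.abs(w).sum()             # normalize to a convex combination
          return -np.mean((w @ preds - y) ** 2)

      # Minimal generational GA: tournament selection, blend crossover, Gaussian mutation.
      pop = rng.random(size=(40, n_models))
      for gen in range(100):
          scores = np.array([fitness(w) for w in pop])
          def pick():
              i, j = rng.integers(len(pop), size=2)
              return pop[i] if scores[i] > scores[j] else pop[j]
          pop = np.array([0.5 * (pick() + pick())
                          + rng.normal(scale=0.05, size=n_models)
                          for _ in range(len(pop))])

      best = max(pop, key=fitness)
      weights = np.abs(best) / np.abs(best).sum()     # final ensemble weights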

  6. Estimation and prediction of noise power based on variational Bayesian and adaptive ARMA time series

    NASA Astrophysics Data System (ADS)

    Zhang, Jingyi; Li, Yonggui; Zhu, Yonggang; Li, Binwu

    2014-04-01

    Estimation and prediction of noise power are very important for communication anti-jamming and efficient allocation of spectrum resources in adaptive wireless communication and cognitive radio. To estimate and predict the time-varying noise power caused by natural factors and jamming in the high frequency channel, a Variational Bayesian algorithm and an adaptive ARMA time series are proposed. By establishing a time-varying noise power model controlled by the noise variance rate, the noise power can be estimated with the Variational Bayesian algorithm, and the results show that the estimation error is related to the observation interval. Moreover, through analysis of the correlation characteristics of the estimated power, the noise power can be predicted with an adaptive ARMA time series, and the results show that the noise power in the next 5 intervals can be predicted with a proportional error of less than 0.2.
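
    The prediction step maps onto a standard ARMA fit-and-forecast loop. A minimal sketch using statsmodels; the synthetic power series and the (2, 0, 1) model order are assumptions, and an adaptive scheme would reselect the order (e.g., by AIC) as new estimates arrive:

      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(2)
      # Hypothetical estimated noise-power series (e.g., Bayesian estimator output).
      power = 1.0 + 0.5 * np.sin(np.linspace(0, 8, 200)) + 0.1 * rng.normal(size=200)

      res = ARIMA(power, order=(2, 0, 1)).fit()   # ARMA(2, 1); order is an assumption
      print(res.forecast(steps=5))                # predict the next 5 intervals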

  7. Rhythmic complexity and predictive coding: a novel approach to modeling rhythm and meter perception in music.

    PubMed

    Vuust, Peter; Witek, Maria A G

    2014-01-01

    Musical rhythm, consisting of apparently abstract intervals of accented temporal events, has a remarkable capacity to move our minds and bodies. How does the cognitive system enable our experiences of rhythmically complex music? In this paper, we describe some common forms of rhythmic complexity in music and propose the theory of predictive coding (PC) as a framework for understanding how rhythm and rhythmic complexity are processed in the brain. We also consider why we feel so compelled by rhythmic tension in music. First, we consider theories of rhythm and meter perception, which provide hierarchical and computational approaches to modeling. Second, we present the theory of PC, which posits a hierarchical organization of brain responses reflecting fundamental, survival-related mechanisms associated with predicting future events. According to this theory, perception and learning are manifested through the brain's Bayesian minimization of the error between the input to the brain and the brain's prior expectations. Third, we develop a PC model of musical rhythm, in which rhythm perception is conceptualized as an interaction between what is heard ("rhythm") and the brain's anticipatory structuring of music ("meter"). Finally, we review empirical studies of the neural and behavioral effects of syncopation, polyrhythm and groove, and propose how these studies can be seen as special cases of the PC theory. We argue that musical rhythm exploits the brain's general principles of prediction and propose that pleasure and desire for sensorimotor synchronization from musical rhythm may be a result of such mechanisms.

  8. Brain responses to regular and octave-scrambled melodies: A case of predictive-coding?

    PubMed

    Globerson, Eitan; Granot, Roni; Tal, Idan; Harpaz, Yuval; Zeev-Wolf, Maor; Golstein, Abraham

    2017-03-01

    Melody recognition is an online process of evaluating incoming information and comparing this information to an existing internal corpus, thereby reducing prediction error. The predictive-coding model postulates top-down control on sensory processing accompanying reduction in prediction error. To investigate the relevance of this model to melody processing, the current study examined early magnetoencephalography (MEG) auditory responses to familiar and unfamiliar melodies in 25 participants. The familiar melodies followed and primed an octave-scrambled version of the same melody. The retrograde version of these melodies served as the unfamiliar control condition. Octave-transposed melodies were included to examine the influence of pitch representation (pitch-height/pitch-chroma representation) on brain responses to melody recognition. Results demonstrate a reduction of the M100 auditory response to familiar, as compared with unfamiliar, melodies regardless of their form of presentation (condensed vs. octave-scrambled). This trend appeared to begin after the third tone of the melody. An additional behavioral study with the same melody corpus showed a similar trend, namely a significant difference between familiarity ratings for familiar and unfamiliar melodies, beginning with the third tone of the melody. These results may indicate a top-down inhibition of early auditory responses to melodies that is influenced by pitch representation.

  9. Frontal theta links prediction errors to behavioral adaptation in reinforcement learning.

    PubMed

    Cavanagh, James F; Frank, Michael J; Klein, Theresa J; Allen, John J B

    2010-02-15

    Investigations into action monitoring have consistently detailed a frontocentral voltage deflection in the event-related potential (ERP) following the presentation of negatively valenced feedback, sometimes termed the feedback-related negativity (FRN). The FRN has been proposed to reflect a neural response to prediction errors during reinforcement learning, yet the single-trial relationship between neural activity and the quanta of expectation violation remains untested. Although ERP methods are not well suited to single-trial analyses, the FRN has been associated with theta band oscillatory perturbations in the medial prefrontal cortex. Mediofrontal theta oscillations have been previously associated with expectation violation and behavioral adaptation and are well suited to single-trial analysis. Here, we recorded EEG activity during a probabilistic reinforcement learning task and fit the performance data to an abstract computational model (Q-learning) for calculation of single-trial reward prediction errors. Single-trial theta oscillatory activities following feedback were investigated within the context of expectation (prediction error) and adaptation (subsequent reaction time change). Results indicate that interactive medial and lateral frontal theta activities reflect the degree of negative and positive reward prediction error in the service of behavioral adaptation. These different brain areas use prediction error calculations for different behavioral adaptations, with medial frontal theta reflecting the utilization of prediction errors for reaction time slowing (specifically following errors), but lateral frontal theta reflecting prediction errors leading to working memory-related reaction time speeding for the correct choice.
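
    The model-derived quantity at the center of this analysis is easy to make concrete. In the minimal Q-learning sketch below, the scalar delta computed on each trial is a single-trial reward prediction error of the kind regressed against theta power; the task probabilities and learning parameters are hypothetical:

      import numpy as np

      rng = np.random.default_rng(3)
      p_reward = [0.8, 0.2]          # hypothetical reward probability of each action
      Q = np.zeros(2)                # learned action values
      alpha, beta = 0.1, 3.0         # learning rate, softmax inverse temperature

      for trial in range(200):
          probs = np.exp(beta * Q) / np.exp(beta * Q).sum()   # softmax choice rule
          a = rng.choice(2, p=probs)
          r = float(rng.random() < p_reward[a])
          delta = r - Q[a]           # single-trial reward prediction error
          Q[a] += alpha * delta      # value update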

  10. Transitional states in marine fisheries: adapting to predicted global change.

    PubMed

    MacNeil, M Aaron; Graham, Nicholas A J; Cinner, Joshua E; Dulvy, Nicholas K; Loring, Philip A; Jennings, Simon; Polunin, Nicholas V C; Fisk, Aaron T; McClanahan, Tim R

    2010-11-27

    Global climate change has the potential to substantially alter the production and community structure of marine fisheries and modify the ongoing impacts of fishing. Fish community composition is already changing in some tropical, temperate and polar ecosystems, where local combinations of warming trends and higher environmental variation anticipate the changes likely to occur more widely over coming decades. Using case studies from the Western Indian Ocean, the North Sea and the Bering Sea, we contextualize the direct and indirect effects of climate change on production and biodiversity and, in turn, on the social and economic aspects of marine fisheries. Climate warming is expected to lead to (i) yield and species losses in tropical reef fisheries, driven primarily by habitat loss; (ii) community turnover in temperate fisheries, owing to the arrival and increasing dominance of warm-water species as well as the reduced dominance and departure of cold-water species; and (iii) increased diversity and yield in Arctic fisheries, arising from invasions of southern species and increased primary production resulting from ice-free summer conditions. How societies deal with such changes will depend largely on their capacity to adapt--to plan and implement effective responses to change--a process heavily influenced by social, economic, political and cultural conditions.

  11. A 3D-CFD code for accurate prediction of fluid flows and fluid forces in seals

    NASA Technical Reports Server (NTRS)

    Athavale, M. M.; Przekwas, A. J.; Hendricks, R. C.

    1994-01-01

    Current and future turbomachinery requires advanced seal configurations to control leakage, inhibit mixing of incompatible fluids and to control the rotodynamic response. In recognition of a deficiency in the existing predictive methodology for seals, a seven year effort was established in 1990 by NASA's Office of Aeronautics Exploration and Technology, under the Earth-to-Orbit Propulsion program, to develop validated Computational Fluid Dynamics (CFD) concepts, codes and analyses for seals. The effort will provide NASA and the U.S. Aerospace Industry with advanced CFD scientific codes and industrial codes for analyzing and designing turbomachinery seals. An advanced 3D CFD cylindrical seal code has been developed, incorporating state-of-the-art computational methodology for flow analysis in straight, tapered and stepped seals. Relevant computational features of the code include: stationary/rotating coordinates, cylindrical and general Body Fitted Coordinates (BFC) systems, high order differencing schemes, colocated variable arrangement, advanced turbulence models, incompressible/compressible flows, and moving grids. This paper presents the current status of code development, code demonstration for predicting rotordynamic coefficients, numerical parametric study of entrance loss coefficients for generic annular seals, and plans for code extensions to labyrinth, damping, and other seal configurations.

  12. Circulating long non-coding RNAs NRON and MHRT as novel predictive biomarkers of heart failure.

    PubMed

    Xuan, Lina; Sun, Lihua; Zhang, Ying; Huang, Yuechao; Hou, Yan; Li, Qingqi; Guo, Ying; Feng, Bingbing; Cui, Lina; Wang, Xiaoxue; Wang, Zhiguo; Tian, Ye; Yu, Bo; Wang, Shu; Xu, Chaoqian; Zhang, Mingyu; Du, Zhimin; Lu, Yanjie; Yang, Bao Feng

    2017-03-14

    This study sought to evaluate the potential of circulating long non-coding RNAs (lncRNAs) as biomarkers for heart failure (HF). We measured the circulating levels of 13 individual lncRNAs which are known to be relevant to cardiovascular disease in the plasma samples from 72 HF patients and 60 non-HF control participants using real-time reverse transcription-polymerase chain reaction (real-time RT-PCR) methods. We found that out of the 13 lncRNAs tested, non-coding repressor of NFAT (NRON) and myosin heavy-chain-associated RNA transcripts (MHRT) had significantly higher plasma levels in HF than in non-HF subjects: 3.17 ± 0.30 versus 1.0 ± 0.07 for NRON (P < 0.0001) and 1.66 ± 0.14 versus 1.0 ± 0.12 for MHRT (P < 0.0001). The area under the ROC curve was 0.865 for NRON and 0.702 for MHRT. Univariate and multivariate analyses identified NRON and MHRT as independent predictors for HF. Spearman's rank correlation analysis showed that NRON was negatively correlated with HDL and positively correlated with LDH, whereas MHRT was positively correlated with AST and LDH. Hence, elevation of circulating NRON and MHRT predicts HF and may be considered as novel biomarkers of HF.

  13. Autistic traits are linked to reduced adaptive coding of face identity and selectively poorer face recognition in men but not women.

    PubMed

    Rhodes, Gillian; Jeffery, Linda; Taylor, Libby; Ewing, Louise

    2013-11-01

    Our ability to discriminate and recognize thousands of faces despite their similarity as visual patterns relies on adaptive, norm-based, coding mechanisms that are continuously updated by experience. Reduced adaptive coding of face identity has been proposed as a neurocognitive endophenotype for autism, because it is found in autism and in relatives of individuals with autism. Autistic traits can also extend continuously into the general population, raising the possibility that reduced adaptive coding of face identity may be more generally associated with autistic traits. In the present study, we investigated whether adaptive coding of face identity decreases as autistic traits increase in an undergraduate population. Adaptive coding was measured using face identity aftereffects, and autistic traits were measured using the Autism-Spectrum Quotient (AQ) and its subscales. We also measured face and car recognition ability to determine whether autistic traits are selectively related to face recognition difficulties. We found that men who scored higher on levels of autistic traits related to social interaction had reduced adaptive coding of face identity. This result is consistent with the idea that atypical adaptive face-coding mechanisms are an endophenotype for autism. Autistic traits were also linked with face-selective recognition difficulties in men. However, there were some unexpected sex differences. In women, autistic traits were linked positively, rather than negatively, with adaptive coding of identity, and were unrelated to face-selective recognition difficulties. These sex differences indicate that autistic traits can have different neurocognitive correlates in men and women and raise the intriguing possibility that endophenotypes of autism can differ in males and females.

  14. A computer code (SKINTEMP) for predicting transient missile and aircraft heat transfer characteristics

    NASA Astrophysics Data System (ADS)

    Cummings, Mary L.

    1994-09-01

    A FORTRAN computer code (SKINTEMP) has been developed to calculate transient missile/aircraft aerodynamic heating parameters utilizing basic flight parameters such as altitude, Mach number, and angle of attack. The insulated skin temperature of a vehicle surface on either the fuselage (axisymmetric body) or wing (two-dimensional body) is computed from a basic heat balance relationship throughout the entire spectrum (subsonic, transonic, supersonic, hypersonic) of flight. This calculation method employs a simple finite difference procedure which considers radiation, forced convection, and non-reactive chemistry. Surface pressure estimates are based on a modified Newtonian flow model. Eckert's reference temperature method is used as the forced convection heat transfer model. SKINTEMP predictions are compared with a limited number of test cases. SKINTEMP was developed as a tool to enhance the conceptual design process of high speed missiles and aircraft. Recommendations are made for possible future development of SKINTEMP to further support the design process.
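
    The basic heat balance such a code integrates can be sketched as a one-node explicit update with convective heating and radiative cooling. All property values below are hypothetical round numbers, not SKINTEMP inputs; in a trajectory code the film coefficient and recovery temperature would come from Eckert's reference temperature method at each flight condition:

      # One-node explicit heat balance for an insulated skin element:
      #   rho * c * thickness * dT/dt = h * (T_r - T_w) - eps * sigma * T_w**4
      sigma = 5.670e-8                        # Stefan-Boltzmann constant, W/(m^2 K^4)
      rho, c, thick = 2700.0, 900.0, 0.002    # aluminum-like skin: kg/m^3, J/(kg K), m
      eps, h = 0.8, 150.0                     # emissivity; film coefficient, W/(m^2 K)
      T_r, T_w = 600.0, 300.0                 # recovery and initial wall temperature, K

      dt = 0.01                               # time step, s
      for _ in range(int(60 / dt)):           # one minute at a fixed flight condition
          q_net = h * (T_r - T_w) - eps * sigma * T_w**4   # net flux into the skin
          T_w += dt * q_net / (rho * c * thick)
      print(f"wall temperature after 60 s: {T_w:.1f} K")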

  15. Prediction of explosive cylinder tests using equations of state from the PANDA code

    SciTech Connect

    Kerley, G.I.; Christian-Frear, T.L.

    1993-09-28

    The PANDA code is used to construct tabular equations of state (EOS) for the detonation products of 24 explosives having CHNO compositions. These EOS, together with a reactive burn model, are used in numerical hydrocode calculations of cylinder tests. The predicted detonation properties and cylinder wall velocities are found to give very good agreement with experimental data. Calculations of flat plate acceleration tests for the HMX-based explosive LX14 are also made and shown to agree well with the measurements. The effects of the reaction zone on both the cylinder and flat plate tests are discussed. For TATB-based explosives, the differences between experiment and theory are consistently larger than for other compositions and may be due to nonideal (finite diameter) behavior.

  16. Contribution to the Prediction of the Fold Code: Application to Immunoglobulin and Flavodoxin Cases

    PubMed Central

    Banach, Mateusz; Prudhomme, Nicolas; Carpentier, Mathilde; Duprat, Elodie; Papandreou, Nikolaos; Kalinowska, Barbara; Chomilier, Jacques; Roterman, Irena

    2015-01-01

    Background: Formation of the folding nucleus of globular proteins starts with the mutual interaction of a group of hydrophobic amino acids whose close contacts allow subsequent formation and stability of the 3D structure. These early steps can be predicted by simulation of the folding process through a Monte Carlo (MC) coarse grain model in a discrete space. We previously defined MIRs (Most Interacting Residues) as the set of residues presenting a large number of non-covalent neighbour interactions during such a simulation. MIRs are good candidates to define the minimal number of residues giving rise to a given fold instead of another one, although their proportion is rather high, typically 15-20% of the sequence. Motivated by experiments with two sequences of very high sequence identity (up to 90%) but different folds, we combined the MIR method, which takes the sequence as its single input, with the “fuzzy oil drop” (FOD) model, which requires a 3D structure, in order to estimate the residues coding for the fold. FOD assumes that a globular protein follows an idealised 3D Gaussian distribution of hydrophobicity density, with the maximum in the centre and minima at the surface of the “drop”. If the actual local density of hydrophobicity around a given amino acid is as high as the ideal one, then this amino acid is assigned to the core of the globular protein and is assumed to follow the FOD model. One thereby obtains a distribution of the amino acids of a protein according to their agreement with, or rejection of, the FOD model. Results: We compared and combined the MIR and FOD methods to define the minimal nucleus, or keystone, of two populated folds: immunoglobulin-like (Ig) and flavodoxin (Flav). The combination of these two approaches defines positions both predicted as a MIR and assigned as accordant with the FOD model. It is shown here that for these two folds, the intersection of the predicted sets of residues differs significantly from random selection

  17. The Cerebellum: Adaptive Prediction for Movement and Cognition.

    PubMed

    Sokolov, Arseny A; Miall, R Chris; Ivry, Richard B

    2017-04-03

    Over the past 30 years, cumulative evidence has indicated that cerebellar function extends beyond sensorimotor control. This view has emerged from studies of neuroanatomy, neuroimaging, neuropsychology, and brain stimulation, with the results implicating the cerebellum in domains as diverse as attention, language, executive function, and social cognition. Although the literature provides sophisticated models of how the cerebellum helps refine movements, it remains unclear how the core mechanisms of these models can be applied when considering a broader conceptualization of cerebellar function. In light of recent multidisciplinary findings, we examine how two key concepts that have been suggested as general computational principles of cerebellar function, prediction and error-based learning, might be relevant in the operation of cognitive cerebro-cerebellar loops.

  18. Adaptive DFT-based fringe tracking and prediction at IOTA

    NASA Astrophysics Data System (ADS)

    Wilson, Edward; Pedretti, Ettore; Bregman, Jesse; Mah, Robert W.; Traub, Wesley A.

    2004-10-01

    An automatic fringe tracking system has been developed and implemented at the Infrared Optical Telescope Array (IOTA). In testing during May 2002, the system successfully minimized the optical path differences (OPDs) for all three baselines at IOTA. Based on sliding window discrete Fourier transform (DFT) calculations that were optimized for computational efficiency and robustness to atmospheric disturbances, the algorithm has also been tested extensively on off-line data. Implemented in ANSI C on the 266 MHz PowerPC processor running the VxWorks real-time operating system, the algorithm runs in approximately 2.0 milliseconds per scan (including all three interferograms), using the science camera and piezo scanners to measure and correct the OPDs. Preliminary analysis on an extension of this algorithm indicates a potential for predictive tracking, although at present, real-time implementation of this extension would require significantly more computational capacity.
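
    The sliding-window DFT underlying the tracker updates a single frequency bin in O(1) per sample instead of recomputing a full transform each step. A minimal sketch; the signal, window length, and bin index are hypothetical, and the real system adds the robustness refinements described above:

      import numpy as np

      def sliding_dft(x, N, k):
          """Track DFT bin k of a length-N sliding window with an O(1) update per
          sample, instead of recomputing a full transform at every step."""
          twiddle = np.exp(2j * np.pi * k / N)
          S = np.fft.fft(x[:N])[k]                  # initialize from the first window
          out = [S]
          for n in range(N, len(x)):
              S = (S + x[n] - x[n - N]) * twiddle   # recursive sliding-window update
              out.append(S)
          return np.array(out)

      # The phase of the fringe-frequency bin tracks the OPD.
      rng = np.random.default_rng(4)
      t = np.arange(1024)
      x = np.cos(2 * np.pi * 0.05 * t) + 0.1 * rng.normal(size=t.size)
      bins = sliding_dft(x, N=256, k=13)            # 0.05 * 256 ~ bin 13
      phase = np.angle(bins)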

  19. Adaptive DFT-Based Fringe Tracking and Prediction at IOTA

    NASA Technical Reports Server (NTRS)

    Wilson, Edward; Pedretti, Ettore; Bregman, Jesse; Mah, Robert W.; Traub, Wesley A.

    2004-01-01

    An automatic fringe tracking system has been developed and implemented at the Infrared Optical Telescope Array (IOTA). In testing during May 2002, the system successfully minimized the optical path differences (OPDs) for all three baselines at IOTA. Based on sliding window discrete Fourier transform (DFT) calculations that were optimized for computational efficiency and robustness to atmospheric disturbances, the algorithm has also been tested extensively on off-line data. Implemented in ANSI C on the 266 MHz PowerPC processor running the VxWorks real-time operating system, the algorithm runs in approximately 2.0 milliseconds per scan (including all three interferograms), using the science camera and piezo scanners to measure and correct the OPDs. Preliminary analysis on an extension of this algorithm indicates a potential for predictive tracking, although at present, real-time implementation of this extension would require significantly more computational capacity.

  20. Clinal adaptation and adaptive plasticity in Artemisia californica: implications for the response of a foundation species to predicted climate change.

    PubMed

    Pratt, Jessica D; Mooney, Kailen A

    2013-08-01

    Local adaptation and plasticity pose significant obstacles to predicting plant responses to future climates. Although local adaptation and plasticity in plant functional traits have been documented for many species, less is known about population-level variation in plasticity and whether such variation is driven by adaptation to environmental variation. We examined clinal variation in traits and performance - and plastic responses to environmental change - for the shrub Artemisia californica along a 700 km gradient characterized (from south to north) by a fourfold increase in precipitation and a 61% decrease in interannual precipitation variation. Plants cloned from five populations along this gradient were grown for 3 years in treatments approximating the precipitation regimes of the north and south range margins. Most traits varying among populations did so clinally; northern populations (vs. southern) had higher water-use efficiencies and lower growth rates, C : N ratios and terpene concentrations. Notably, there was variation in plasticity for plant performance that was strongly correlated with source site interannual precipitation variability. The high-precipitation treatment (vs. low) increased growth and flower production more for plants from southern populations (181% and 279%, respectively) than northern populations (47% and 20%, respectively). Overall, precipitation variability at population source sites predicted 86% and 99% of variation in plasticity in growth and flowering, respectively. These striking, clinal patterns in plant traits and plasticity are indicative of adaptation to both the mean and variability of environmental conditions. Furthermore, our analysis of long-term coastal climate data in turn indicates an increase in interannual precipitation variation consistent with most global change models and, unexpectedly, this increased variation is especially pronounced at historically stable, northern sites. Our findings demonstrate the

  1. On the efficiency of image completion methods for intra prediction in video coding with large block structures

    NASA Astrophysics Data System (ADS)

    Doshkov, Dimitar; Jottrand, Oscar; Wiegand, Thomas; Ndjiki-Nya, Patrick

    2013-02-01

    Intra prediction is a fundamental tool in video coding with a hybrid block-based architecture. Recent investigations have shown that one of the most beneficial elements for higher compression performance in high-resolution videos is the incorporation of larger block structures. Thus, in this work, we investigate the performance of novel intra prediction modes based on different image completion techniques in a new video coding scheme with large block structures. Image completion methods exploit the fact that high frequency image regions yield high coding costs when using classical H.264/AVC prediction modes. This problem is tackled by investigating the incorporation of several intra predictors using the concept of the Laplace partial differential equation (PDE), Least Squares (LS) based linear prediction, and the Auto Regressive model. A major aspect of this article is the evaluation of the coding performance in a quantitative (i.e., coding efficiency) manner. Experimental results show significant improvements in compression (up to 7.41%) by integrating the LS-based linear intra prediction.
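
    The LS-based linear intra predictor can be sketched in a few lines: weights mapping each pixel's causal neighbors to the pixel itself are fit over already-reconstructed pixels, then applied recursively inside the block being predicted. The three-neighbor template, training window, and synthetic frame are simplifying assumptions, not the paper's exact configuration:

      import numpy as np

      def ls_intra_predict(rec, y0, x0, size):
          """Fit weights from causal neighbors (left, top, top-left) over the
          reconstructed region above-left of the block, then extrapolate."""
          A, b = [], []
          for y in range(1, y0):
              for x in range(1, x0):
                  A.append([rec[y, x - 1], rec[y - 1, x], rec[y - 1, x - 1]])
                  b.append(rec[y, x])
          w, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)

          # Predict the block recursively in raster order with the learned weights.
          pred = rec.copy()
          for y in range(y0, y0 + size):
              for x in range(x0, x0 + size):
                  pred[y, x] = w @ [pred[y, x - 1], pred[y - 1, x], pred[y - 1, x - 1]]
          return pred[y0:y0 + size, x0:x0 + size]

      # Hypothetical reconstructed frame region; a real codec uses decoded pixels.
      rec = np.linspace(0, 255, 64 * 64).reshape(64, 64)
      block = ls_intra_predict(rec, y0=32, x0=32, size=16)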

  2. The WISGSK: A computer code for the prediction of a multistage axial compressor performance with water ingestion

    NASA Technical Reports Server (NTRS)

    Tsuchiya, T.; Murthy, S. N. B.

    1982-01-01

    A computer code is presented for the prediction of off-design axial flow compressor performance with water ingestion. Four processes were considered to account for the aero-thermo-mechanical interactions during operation with air-water droplet mixture flow: (1) blade performance change, (2) centrifuging of water droplets, (3) heat and mass transfer between the gaseous and the liquid phases and (4) droplet size redistribution due to break-up. Stage and compressor performance are obtained by a stage stacking procedure using representative velocity diagrams at rotor inlet and outlet mean radii. The code has options for performance estimation with (1) mixtures of gases and (2) gas-water droplet mixtures, and can therefore take into account the humidity present in ambient conditions. A test case illustrates the method of using the code. The code follows closely the methodology and architecture of the NASA-STGSTK code for the estimation of axial-flow compressor performance with air flow.

  3. Predicting respiratory tumor motion with multi-dimensional adaptive filters and support vector regression.

    PubMed

    Riaz, Nadeem; Shanker, Piyush; Wiersma, Rodney; Gudmundsson, Olafur; Mao, Weihua; Widrow, Bernard; Xing, Lei

    2009-10-07

    Intra-fraction tumor tracking methods can improve radiation delivery during radiotherapy sessions. Image acquisition for tumor tracking and subsequent adjustment of the treatment beam with gating or beam tracking introduces time latency and necessitates predicting the future position of the tumor. This study evaluates the use of multi-dimensional linear adaptive filters and support vector regression to predict the motion of lung tumors tracked at 30 Hz. We expand on the prior work of other groups who have looked at adaptive filters by using a general framework of a multiple-input single-output (MISO) adaptive system that uses multiple correlated signals to predict the motion of a tumor. We compare the performance of these two novel methods to conventional methods like linear regression and single-input, single-output adaptive filters. At 400 ms latency the average root-mean-square-errors (RMSEs) for the 14 treatment sessions studied using no prediction, linear regression, single-output adaptive filter, MISO and support vector regression are 2.58, 1.60, 1.58, 1.71 and 1.26 mm, respectively. At 1 s, the RMSEs are 4.40, 2.61, 3.34, 2.66 and 1.93 mm, respectively. We find that support vector regression most accurately predicts the future tumor position of the methods studied and can provide a RMSE of less than 2 mm at 1 s latency. Also, a multi-dimensional adaptive filter framework provides improved performance over single-dimension adaptive filters. Work is underway to combine these two frameworks to improve performance.
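
    The SVR formulation can be sketched with lagged position samples as input and the position one system latency ahead as the regression target. The breathing-like trace, lag length, and hyperparameters below are hypothetical stand-ins for the clinical traces and tuned settings used in the study:

      import numpy as np
      from sklearn.svm import SVR

      # Hypothetical 30 Hz breathing-like tumor trace; 400 ms latency = 12 samples.
      rng = np.random.default_rng(5)
      t = np.arange(3000) / 30.0
      pos = 10 * np.sin(2 * np.pi * 0.25 * t) + 0.3 * rng.normal(size=t.size)

      lag, horizon = 30, 12   # one second of history, 400 ms prediction horizon
      X = np.array([pos[i - lag:i] for i in range(lag, len(pos) - horizon)])
      y = pos[lag + horizon:]

      n_train = 2000
      model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X[:n_train], y[:n_train])
      pred = model.predict(X[n_train:])
      rmse = np.sqrt(np.mean((pred - y[n_train:]) ** 2))
      print(f"RMSE at 400 ms latency: {rmse:.2f} mm")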

  4. Potential prognostic long non-coding RNA identification and their validation in predicting survival of patients with multiple myeloma.

    PubMed

    Hu, Ai-Xin; Huang, Zhi-Yong; Zhang, Lin; Shen, Jian

    2017-04-01

    Multiple myeloma, a typical hematological malignancy, is characterized by malignant proliferation of plasma cells. This study aimed to identify differentially expressed long non-coding RNAs that efficiently predict the survival of patients with multiple myeloma. Gene expression profiles of diagnosed patients with multiple myeloma, GSE24080 (559 samples) and GSE57317 (55 samples), were downloaded from the Gene Expression Omnibus database. After processing, survival-related long non-coding RNAs were identified by Cox regression analysis. The prognosis of multiple myeloma patients with differentially expressed long non-coding RNAs was predicted by Kaplan-Meier analysis. Meanwhile, stratified analysis was performed based on the concentrations of serum beta 2-microglobulin (S-beta 2m), albumin, and lactate dehydrogenase of multiple myeloma patients. Gene set enrichment analysis was performed to further explore the functions of the identified long non-coding RNAs. A total of 176 long non-coding RNAs significantly related to the survival of multiple myeloma patients (p < 0.05) were identified. In datasets GSE24080 and GSE57317, 558 and 55 patients, respectively, were clustered into two groups with significant differences. Stratified analysis indicated that prediction of prognosis with these long non-coding RNAs was independent of other clinical phenotypes of multiple myeloma. Pathways identified by gene set enrichment analysis, including cell cycle, focal adhesion, and the G2-M checkpoint, were associated with these long non-coding RNAs. A total of 176 long non-coding RNAs, especially RP1-286D6.1, AC008875.2, MTMR9L, AC069360.2, and AL512791.1, were potential biomarkers for evaluating the prognosis of multiple myeloma patients. These long non-coding RNAs participate in many pathways associated with the development of multiple myeloma; however, the molecular mechanisms need to be studied further.

  5. Near-fault earthquake ground motion prediction by a high-performance spectral element numerical code

    NASA Astrophysics Data System (ADS)

    Paolucci, Roberto; Stupazzini, Marco

    2008-07-01

    Near-fault effects have been widely recognised to produce specific features of earthquake ground motion, that cannot be reliably predicted by 1D seismic wave propagation modelling, used as a standard in engineering applications. These features may have a relevant impact on the structural response, especially in the nonlinear range, that is hard to predict and to be put in a design format, due to the scarcity of significant earthquake records and of reliable numerical simulations. In this contribution a pilot study is presented for the evaluation of seismic ground-motions in the near-fault region, based on a high-performance numerical code for 3D seismic wave propagation analyses, including the seismic fault, the wave propagation path and the near-surface geological or topographical irregularity. For this purpose, the software package GeoELSE is adopted, based on the spectral element method. The set-up of the numerical benchmark of 3D ground motion simulation in the valley of Grenoble (French Alps) is chosen to study the effect of the complex interaction between basin geometry and radiation mechanism on the variability of earthquake ground motion.

  6. From structure prediction to genomic screens for novel non-coding RNAs.

    PubMed

    Gorodkin, Jan; Hofacker, Ivo L

    2011-08-01

    Non-coding RNAs (ncRNAs) are receiving more and more attention not only as an abundant class of genes, but also as regulatory structural elements (some located in mRNAs). A key feature of RNA function is its structure. Computational methods were developed early for folding and prediction of RNA structure with the aim of assisting in functional analysis. With the discovery of more and more ncRNAs, it has become clear that a large fraction of these are highly structured. Interestingly, a large part of the structure is comprised of regular Watson-Crick and GU wobble base pairs. This and the increased amount of available genomes have made it possible to employ structure-based methods for genomic screens. The field has moved from folding prediction of single sequences to computational screens for ncRNAs in genomic sequence using the RNA structure as the main characteristic feature. Whereas early methods focused on energy-directed folding of single sequences, comparative analysis based on structure preserving changes of base pairs has been efficient in improving accuracy, and today this constitutes a key component in genomic screens. Here, we cover the basic principles of RNA folding and touch upon some of the concepts in current methods that have been applied in genomic screens for de novo RNA structures in searches for novel ncRNA genes and regulatory RNA structure on mRNAs. We discuss the strengths and weaknesses of the different strategies and how they can complement each other.
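
    Production screens use full thermodynamic energy models (and, increasingly, comparative information), but the dynamic-programming skeleton of single-sequence folding can be illustrated with the classic Nussinov base-pair maximization recursion. A sketch of the principle, not the algorithm of any specific tool discussed here:

      def nussinov(seq, min_loop=3):
          """Base-pair maximization DP: N[i][j] is the maximum number of nested
          Watson-Crick/GU pairs in seq[i..j], with a minimum hairpin loop size."""
          pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
                   ("G", "U"), ("U", "G")}
          n = len(seq)
          N = [[0] * n for _ in range(n)]
          for span in range(min_loop + 1, n):
              for i in range(n - span):
                  j = i + span
                  best = N[i][j - 1]                    # j left unpaired
                  for k in range(i, j - min_loop):      # j paired with k
                      if (seq[k], seq[j]) in pairs:
                          left = N[i][k - 1] if k > i else 0
                          best = max(best, left + 1 + N[k + 1][j - 1])
                  N[i][j] = best
          return N[0][n - 1]

      print(nussinov("GGGAAAUCC"))  # toy hairpin: 3 nested pairs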

  7. Near-fault earthquake ground motion prediction by a high-performance spectral element numerical code

    SciTech Connect

    Paolucci, Roberto; Stupazzini, Marco

    2008-07-08

    Near-fault effects have been widely recognised to produce specific features of earthquake ground motion, that cannot be reliably predicted by 1D seismic wave propagation modelling, used as a standard in engineering applications. These features may have a relevant impact on the structural response, especially in the nonlinear range, that is hard to predict and to be put in a design format, due to the scarcity of significant earthquake records and of reliable numerical simulations. In this contribution a pilot study is presented for the evaluation of seismic ground-motions in the near-fault region, based on a high-performance numerical code for 3D seismic wave propagation analyses, including the seismic fault, the wave propagation path and the near-surface geological or topographical irregularity. For this purpose, the software package GeoELSE is adopted, based on the spectral element method. The set-up of the numerical benchmark of 3D ground motion simulation in the valley of Grenoble (French Alps) is chosen to study the effect of the complex interaction between basin geometry and radiation mechanism on the variability of earthquake ground motion.

  8. Performance of a municipal solid waste (MSW) incinerator predicted with a computational fluid dynamics (CFD) code

    SciTech Connect

    Anglesio, P.; Negreanu, G.P.

    1998-07-01

    The purpose of this paper is to investigate by the means of numerical simulation the performance of the MSW incinerator with of Vercelli (Italy). FLUENT, a finite-volumes commercial code for Fluid Dynamics has been used to predict the 3-D reacting flows (gaseous phase) within the incinerator geometry, in order to estimate if the three conditions settled by the Italian law (P.D. 915 / 82) are respected: (a) Flue gas temperature at the input of the secondary combustion chamber must exceed 950 C. (b) Oxygen concentration in the same section must exceed 6 %. (c) Residence time for the flue gas in the secondary combustion chamber must exceed 2 seconds. The model of the incinerator has been created using the software pre-processing facilities (wall, input, outlet and live cells), together with the set-up of boundary conditions. There are also imposed the combustion constants (stoichiometry, heat of combustion, air excess). The solving procedure transforms at the level of each live cell the partial derivative equations in algebraic equations, computing the velocities field, the temperatures, gases concentration, etc. These predicted values were compared with the design properties, and the conclusion was that the conditions (a), (b), (c), are respected in normal operation. The powerful graphic interface helps the user to visualize the magnitude of the computed parameters. These results may be successfully used for the design and operation improvements for MSW incinerators. This fact will substantially increase the efficiency, reduce pollutant emissions and optimize the plant overall performance.

  9. Predicting cortical bone adaptation to axial loading in the mouse tibia

    PubMed Central

    Pereira, A. F.; Javaheri, B.; Pitsillides, A. A.; Shefelbine, S. J.

    2015-01-01

    The development of predictive mathematical models can contribute to a deeper understanding of the specific stages of bone mechanobiology and the process by which bone adapts to mechanical forces. The objective of this work was to predict, with spatial accuracy, cortical bone adaptation to mechanical load, in order to better understand the mechanical cues that might be driving adaptation. The axial tibial loading model was used to trigger cortical bone adaptation in C57BL/6 mice and provide relevant biological and biomechanical information. A method for mapping cortical thickness in the mouse tibia diaphysis was developed, allowing for a thorough spatial description of where bone adaptation occurs. Poroelastic finite-element (FE) models were used to determine the structural response of the tibia upon axial loading and interstitial fluid velocity as the mechanical stimulus. FE models were coupled with mechanobiological governing equations, which accounted for non-static loads and assumed that bone responds instantly to local mechanical cues in an on–off manner. The presented formulation was able to simulate the areas of adaptation and accurately reproduce the distributions of cortical thickening observed in the experimental data with a statistically significant positive correlation (Kendall's rank coefficient τ = 0.51, p < 0.001). This work demonstrates that computational models can spatially predict cortical bone mechanoadaptation to a time variant stimulus. Such models could be used in the design of more efficient loading protocols and drug therapies that target the relevant physiological mechanisms. PMID:26311315
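
    The governing rule described, an instantaneous on-off response to a local mechanical cue, reduces to a threshold update. A toy sketch in which a placeholder stimulus array stands in for the FE-computed interstitial fluid velocity; the threshold, apposition rate, and duration are hypothetical:

      import numpy as np

      rng = np.random.default_rng(6)
      stimulus = np.abs(rng.normal(size=100))   # stand-in for FE fluid velocity per site
      thickness = np.full(100, 0.2)             # initial cortical thickness, mm

      threshold, rate, dt, n_days = 0.8, 0.01, 1.0, 14
      for day in range(n_days):
          active = stimulus > threshold         # "on" wherever the local cue exceeds it
          thickness[active] += rate * dt        # constant apposition at active sites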

  10. Analytic solution to verify code predictions of two-phase flow in a boiling water reactor core channel

    SciTech Connect

    Chen, K.F.; Olson, C.A.

    1983-09-01

    One reliable method that can be used to verify the solution scheme of a computer code is to compare the code prediction to a simplified problem for which an analytic solution can be derived. An analytic solution for the axial pressure drop as a function of the flow was obtained for the simplified problem of homogeneous equilibrium two-phase flow in a vertical, heated channel with a cosine axial heat flux shape. This analytic solution was then used to verify the predictions of the CONDOR computer code, which is used to evaluate the thermal-hydraulic performance of boiling water reactors. The results show excellent agreement between the analytic solution and CONDOR prediction.
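
    The simplified problem can also be integrated numerically, which is the kind of result the analytic solution would be checked against: a homogeneous equilibrium mixture in a vertical heated channel with a cosine-shaped axial flux, with the total pressure drop as the sum of friction, gravity, and acceleration terms. All properties and operating conditions below are hypothetical round numbers, not CONDOR inputs:

      import numpy as np

      # Homogeneous equilibrium model (HEM): saturated liquid enters a vertical
      # heated tube with a cosine-shaped axial heat flux.
      L, D = 3.7, 0.012                         # heated length, hydraulic diameter (m)
      G = 1500.0                                # mass flux, kg/(m^2 s)
      rho_f, rho_g, h_fg = 740.0, 37.0, 1.5e6   # approximate saturated water at ~7 MPa
      q_avg, f, grav = 8.0e5, 0.02, 9.81        # mean wall flux (W/m^2), friction, gravity

      z = np.linspace(0.0, L, 2001)
      dz = z[1] - z[0]
      q = q_avg * (np.pi / 2) * np.sin(np.pi * z / L)     # cosine-symmetric flux shape
      x = np.cumsum(q) * dz * (4.0 / D) / (G * h_fg)      # quality from energy balance
      rho = 1.0 / (x / rho_g + (1.0 - x) / rho_f)         # homogeneous mixture density

      dp_fric = np.sum(f * G**2 / (2.0 * D * rho) * dz)   # friction term, Pa
      dp_grav = np.sum(rho * grav * dz)                   # gravity term, Pa
      dp_acc = G**2 * (1.0 / rho[-1] - 1.0 / rho[0])      # acceleration term, Pa
      print(f"total pressure drop: {(dp_fric + dp_grav + dp_acc) / 1e3:.1f} kPa")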

  11. Comparison of experimental pulse-height distributions in germanium detectors with integrated-tiger-series-code predictions

    SciTech Connect

    Beutler, D.E.; Halbleib, J.A.; Knott, D.P.

    1989-12-01

    This paper reports pulse-height distributions in two different types of Ge detectors measured for a variety of medium-energy x-ray bremsstrahlung spectra. These measurements have been compared to predictions using the integrated tiger series (ITS) Monte Carlo electron/photon transport code. In general, the authors find excellent agreement between experiments and predictions using no free parameters. These results demonstrate that the ITS codes can predict the combined bremsstrahlung production and energy deposition with good precision (within measurement uncertainties). The one region of disagreement observed occurs for low-energy (<50 keV) photons using low-energy bremsstrahlung spectra. In this case the ITS codes appear to underestimate the produced and/or absorbed radiation by almost an order of magnitude.

  12. Predictive coding accounts of shared representations in parieto-insular networks.

    PubMed

    Ishida, Hiroaki; Suzuki, Keisuke; Grandi, Laura Clara

    2015-04-01

    The discovery of mirror neurons in the ventral premotor cortex (area F5) and inferior parietal cortex (area PFG) in the macaque monkey brain has provided the physiological evidence for direct matching of the intrinsic motor representations of the self and the visual image of the actions of others. The existence of mirror neurons implies that the brain has mechanisms reflecting shared self and other action representations. This may further imply that the neural basis of self-body representations may also incorporate components that are shared with other-body representations. It is likely that such a mechanism is also involved in predicting others' touch sensations and emotions. However, the neural basis of shared body representations has remained unclear. Here, we propose a neural basis of body representation of the self and of others in both human and non-human primates. We review a series of behavioral and physiological findings which together paint a picture that the systems underlying such shared representations require integration of conscious exteroception and interoception subserved by a cortical sensory-motor network involving parieto-inner perisylvian circuits (the ventral intraparietal area [VIP]/inferior parietal area [PFG]-secondary somatosensory cortex [SII]/posterior insular cortex [pIC]/anterior insular cortex [aIC]). Based on these findings, we propose a computational mechanism of the shared body representation in the predictive coding (PC) framework. Our mechanism proposes that processes emerging from generative models embedded in these specific neuronal circuits play a pivotal role in distinguishing a self-specific body representation from a shared one. The model successfully accounts for normal and abnormal shared body phenomena such as mirror-touch synesthesia and somatoparaphrenia. In addition, it generates a set of testable experimental predictions.

  13. Rhythmic complexity and predictive coding: a novel approach to modeling rhythm and meter perception in music

    PubMed Central

    Vuust, Peter; Witek, Maria A. G.

    2014-01-01

    Musical rhythm, consisting of apparently abstract intervals of accented temporal events, has a remarkable capacity to move our minds and bodies. How does the cognitive system enable our experiences of rhythmically complex music? In this paper, we describe some common forms of rhythmic complexity in music and propose the theory of predictive coding (PC) as a framework for understanding how rhythm and rhythmic complexity are processed in the brain. We also consider why we feel so compelled by rhythmic tension in music. First, we consider theories of rhythm and meter perception, which provide hierarchical and computational approaches to modeling. Second, we present the theory of PC, which posits a hierarchical organization of brain responses reflecting fundamental, survival-related mechanisms associated with predicting future events. According to this theory, perception and learning are manifested through the brain’s Bayesian minimization of the error between the input to the brain and the brain’s prior expectations. Third, we develop a PC model of musical rhythm, in which rhythm perception is conceptualized as an interaction between what is heard (“rhythm”) and the brain’s anticipatory structuring of music (“meter”). Finally, we review empirical studies of the neural and behavioral effects of syncopation, polyrhythm and groove, and propose how these studies can be seen as special cases of the PC theory. We argue that musical rhythm exploits the brain’s general principles of prediction and propose that pleasure and desire for sensorimotor synchronization from musical rhythm may be a result of such mechanisms. PMID:25324813

  14. ER-Worker: A computer code to predict remediation worker exposure and safety hazards

    SciTech Connect

    Blaylock, B.P.; Campbell, A.C.; Hutchison, J.F.; Simek, M.A.P.; Sutherland, J.F.; Legg, J.L.

    1994-12-31

    The U.S. Department of Energy (DOE) has generated and disposed of large quantities of waste as a result of 50 years of nuclear weapons production. This waste has been disposed of in waste sites such as burial grounds, waste pits, holding ponds, and landfills. Many of these waste sites have begun to release contamination offsite and potentially pose risks to humans living or working in the vicinity of these sites. By 2019, DOE must meet its goals to achieve timely compliance with all applicable environmental requirements, clean up the 1989 inventory of hazardous and radioactive wastes at inactive sites and facilities, and safely and efficiently treat, store, and dispose of the waste generated by remediation and operating facilities. Remediation of DOE's 13,000 facilities, and management of the current and future waste streams, will require the effort of thousands of workers. Workers, as defined here, are persons who directly participate in the cleanup or remediation of DOE sites. Remediation activities include the use of remediation technologies such as bioremediation, surface water controls, and contaminated soil excavation. This document describes a worker health risk evaluation methodology and computer code designed to predict risks associated with Environmental Restoration (ER) activities that are yet to be undertaken. The computer code, designated ER-WORKER, can be used to estimate worker risks across the DOE complex on a site-specific, installation-wide, or programmatic level. This approach generally follows the guidance suggested in the Risk Assessment Guidance for Superfund (RAGS) (EPA 1989a). Key principles from other important Environmental Protection Agency (EPA) and DOE guidance documents are incorporated into the methodology.

  15. Integer-Linear-Programing Optimization in Scalable Video Multicast with Adaptive Modulation and Coding in Wireless Networks

    PubMed Central

    Lee, Chaewoo

    2014-01-01

    Advances in wideband wireless networks support real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC in the wireless multicast service is how to assign MCSs and the time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an IEEE 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm. PMID:25276862
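
    To make the ILP formulation concrete, here is a minimal sketch using the PuLP library. It is our illustration rather than the paper's exact model: binary variables pick one MCS per SVC layer, a single constraint budgets the shared time resource, and the objective is a simplified user-weighted utility. All rates, user counts, and layer sizes are invented placeholders.

      import pulp

      layers = [0, 1, 2]                        # SVC layers: base, enh1, enh2
      mcs_set = [0, 1, 2, 3]                    # candidate MCS indices
      rate = {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0}   # Mbit/s per MCS (illustrative)
      users = {0: 30, 1: 22, 2: 15, 3: 8}       # users able to decode each MCS
      layer_load = {0: 2.0, 1: 1.5, 2: 1.5}     # Mbit per GOP per layer

      prob = pulp.LpProblem("svc_mcs_assignment", pulp.LpMaximize)
      x = pulp.LpVariable.dicts("x", (layers, mcs_set), cat="Binary")

      for l in layers:                          # each layer gets exactly one MCS
          prob += pulp.lpSum(x[l][m] for m in mcs_set) == 1

      # shared time budget: sending layer l at MCS m takes layer_load/rate seconds
      prob += pulp.lpSum(layer_load[l] / rate[m] * x[l][m]
                         for l in layers for m in mcs_set) <= 3.0

      # simplified objective: total users reached, summed over layers
      prob += pulp.lpSum(users[m] * x[l][m] for l in layers for m in mcs_set)

      prob.solve(pulp.PULP_CBC_CMD(msg=0))
      for l in layers:
          m = next(m for m in mcs_set if pulp.value(x[l][m]) > 0.5)
          print(f"layer {l} -> MCS {m} ({users[m]} users can decode)")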

  16. Integer-linear-programing optimization in scalable video multicast with adaptive modulation and coding in wireless networks.

    PubMed

    Lee, Dongyul; Lee, Chaewoo

    2014-01-01

    Advances in wideband wireless networks support real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC in the wireless multicast service is how to assign MCSs and the time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an IEEE 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm.

  17. Resting-state functional connectivity predicts longitudinal change in autistic traits and adaptive functioning in autism

    PubMed Central

    Plitt, Mark; Barnes, Kelly Anne; Wallace, Gregory L.; Kenworthy, Lauren; Martin, Alex

    2015-01-01

    Although typically identified in early childhood, the social communication symptoms and adaptive behavior deficits that are characteristic of autism spectrum disorder (ASD) persist throughout the lifespan. Despite this persistence, even individuals without co-occurring intellectual disability show substantial heterogeneity in outcomes. Previous studies have found various behavioral assessments [such as intelligence quotient (IQ), early language ability, and baseline autistic traits and adaptive behavior scores] to be predictive of outcome, but most of the variance in functioning remains unexplained by such factors. In this study, we investigated to what extent functional brain connectivity measures obtained from resting-state functional connectivity MRI (rs-fcMRI) could predict the variance left unexplained by age and behavior (follow-up latency and baseline autistic traits and adaptive behavior scores) in two measures of outcome—adaptive behaviors and autistic traits at least 1 y postscan (mean follow-up latency = 2 y, 10 mo). We found that connectivity involving the so-called salience network (SN), default-mode network (DMN), and frontoparietal task control network (FPTCN) was highly predictive of future autistic traits and the change in autistic traits and adaptive behavior over the same time period. Furthermore, functional connectivity involving the SN, which is predominantly composed of the anterior insula and the dorsal anterior cingulate, predicted reliable improvement in adaptive behaviors with 100% sensitivity and 70.59% precision. From rs-fcMRI data, our study successfully predicted heterogeneity in outcomes for individuals with ASD that was unaccounted for by simple behavioral metrics and provides unique evidence for networks underlying long-term symptom abatement. PMID:26627261

  18. Resting-state functional connectivity predicts longitudinal change in autistic traits and adaptive functioning in autism.

    PubMed

    Plitt, Mark; Barnes, Kelly Anne; Wallace, Gregory L; Kenworthy, Lauren; Martin, Alex

    2015-12-01

    Although typically identified in early childhood, the social communication symptoms and adaptive behavior deficits that are characteristic of autism spectrum disorder (ASD) persist throughout the lifespan. Despite this persistence, even individuals without co-occurring intellectual disability show substantial heterogeneity in outcomes. Previous studies have found various behavioral assessments [such as intelligence quotient (IQ), early language ability, and baseline autistic traits and adaptive behavior scores] to be predictive of outcome, but most of the variance in functioning remains unexplained by such factors. In this study, we investigated to what extent functional brain connectivity measures obtained from resting-state functional connectivity MRI (rs-fcMRI) could predict the variance left unexplained by age and behavior (follow-up latency and baseline autistic traits and adaptive behavior scores) in two measures of outcome--adaptive behaviors and autistic traits at least 1 y postscan (mean follow-up latency = 2 y, 10 mo). We found that connectivity involving the so-called salience network (SN), default-mode network (DMN), and frontoparietal task control network (FPTCN) was highly predictive of future autistic traits and the change in autistic traits and adaptive behavior over the same time period. Furthermore, functional connectivity involving the SN, which is predominantly composed of the anterior insula and the dorsal anterior cingulate, predicted reliable improvement in adaptive behaviors with 100% sensitivity and 70.59% precision. From rs-fcMRI data, our study successfully predicted heterogeneity in outcomes for individuals with ASD that was unaccounted for by simple behavioral metrics and provides unique evidence for networks underlying long-term symptom abatement.

  19. User's guide for a flat wake rotor inflow/wake velocity prediction code, DOWN

    NASA Technical Reports Server (NTRS)

    Wilson, John C.

    1991-01-01

    A computer code named DOWN was created to implement a flat wake theory for the calculation of rotor inflow and wake velocities. A brief description of the code methodology and instructions for its use are given. The code will be available from NASA's Computer Software Management and Information Center (COSMIC).

  20. Adaptive reliance on the most stable sensory predictions enhances perceptual feature extraction of moving stimuli

    PubMed Central

    Kumar, Neeraj

    2016-01-01

    The prediction of the sensory outcomes of action is thought to be useful for distinguishing self- vs. externally generated sensations, correcting movements when sensory feedback is delayed, and learning predictive models for motor behavior. Here, we show that aspects of another fundamental function—perception—are enhanced when they entail the contribution of predicted sensory outcomes and that this enhancement relies on the adaptive use of the most stable predictions available. We combined a motor-learning paradigm that imposes new sensory predictions with a dynamic visual search task to first show that perceptual feature extraction of a moving stimulus is poorer when it is based on sensory feedback that is misaligned with those predictions. This was possible because our novel experimental design allowed us to override the “natural” sensory predictions present when any action is performed and separately examine the influence of these two sources on perceptual feature extraction. We then show that, when the new predictions induced via motor learning are unreliable, subjects do not simply fall back on sensory information for perceptual judgments, as is conventionally thought, but instead adaptively transition to using other stable sensory predictions to maintain greater accuracy in their perceptual judgments. Finally, we show that when sensory predictions are not modified at all, these judgments are sharper when subjects combine their natural predictions with sensory feedback. Collectively, our results highlight the crucial contribution of sensory predictions to perception and also suggest that the brain intelligently integrates the most stable predictions available with sensory information to maintain high fidelity in perceptual decisions. PMID:26823516
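
    The reliability-weighted integration suggested by these results has a standard minimum-variance form, sketched below under the assumption of Gaussian noise (our simplification, not the authors' model): each source is weighted by its inverse variance, so an unstable prediction is automatically discounted.

      # Inverse-variance (minimum-variance) fusion of a predicted and a sensed
      # feature estimate; all values are illustrative.

      def integrate(pred, pred_var, sens, sens_var):
          w = (1 / pred_var) / (1 / pred_var + 1 / sens_var)  # weight on prediction
          est = w * pred + (1 - w) * sens
          var = 1 / (1 / pred_var + 1 / sens_var)   # fused variance is always smaller
          return est, var

      # A stable prediction sharpens the judgment; an unreliable one barely helps.
      print(integrate(pred=10.0, pred_var=1.0, sens=12.0, sens_var=1.0))   # stable
      print(integrate(pred=10.0, pred_var=50.0, sens=12.0, sens_var=1.0))  # unreliable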

  1. A thermal NO(x) prediction model - Scalar computation module for CFD codes with fluid and kinetic effects

    NASA Technical Reports Server (NTRS)

    Mcbeath, Giorgio; Ghorashi, Bahman; Chun, Kue

    1993-01-01

    A thermal NO(x) prediction model is developed to interface with a CFD, k-epsilon based code. A converged solution from the CFD code is the input to the postprocessing model for prediction of thermal NO(x). The model uses a decoupled analysis to estimate the equilibrium level of (NO(x))e which is the constant rate limit. This value is used to estimate the flame (NO(x)) and in turn predict the rate of formation at each node using a two-step Zeldovich mechanism. The rate is fixed on the NO(x) production rate plot by estimating the time to reach equilibrium by a differential analysis based on the reaction: O + N2 = NO + N. The rate is integrated in the nonequilibrium time space based on the residence time at each node in the computational domain. The sum of all nodal predictions yields the total NO(x) level.
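
    As a rough illustration of the postprocessing strategy, the sketch below applies the rate-limiting Zeldovich step O + N2 -> NO + N to converged nodal fields and integrates over nodal residence times, capping at the equilibrium level. The rate constant is a commonly quoted literature value, and all field values are placeholders, not output of the actual model.

      import numpy as np

      def no_rate(T, O, N2):
          """d[NO]/dt ~ 2*k1*[O][N2]; k1 in cm3/(mol s), a commonly quoted value."""
          k1 = 1.8e14 * np.exp(-38370.0 / T)
          return 2.0 * k1 * O * N2

      T   = np.array([1900.0, 2100.0, 2300.0])   # nodal temperatures, K (placeholder)
      O   = np.array([1e-9, 3e-9, 6e-9])         # [O], mol/cm3 (placeholder)
      N2  = np.array([2.5e-5, 2.4e-5, 2.3e-5])   # [N2], mol/cm3 (placeholder)
      tau = np.array([1e-3, 2e-3, 1e-3])         # nodal residence times, s
      NO_eq = 1e-7                               # equilibrium cap (placeholder)

      # kinetically limited NO per node, never exceeding the equilibrium level
      NO_nodes = np.minimum(no_rate(T, O, N2) * tau, NO_eq)
      print(f"total NO estimate: {NO_nodes.sum():.3e} mol/cm3")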

  2. Time-of-day-dependent adaptation of the HPA axis to predictable social defeat stress.

    PubMed

    Koch, C E; Bartlang, M S; Kiehn, J T; Lucke, L; Naujokat, N; Helfrich-Förster, C; Reber, S O; Oster, H

    2016-12-01

    In modern societies, the risk of developing a whole array of affective and somatic disorders is associated with the prevalence of frequent psychosocial stress. Therefore, a better understanding of adaptive stress responses and their underlying molecular mechanisms is of high clinical interest. In response to an acute stressor, each organism can either show passive freezing or active fight-or-flight behaviour, with activation of the sympathetic nervous system and the hypothalamus-pituitary-adrenal (HPA) axis providing the necessary energy for the latter by releasing catecholamines and glucocorticoids (GC). Recent data suggest that stress responses are also regulated by the endogenous circadian clock. In consequence, the timing of stress may critically affect adaptive responses to and/or pathological effects of repetitive stressor exposure. In this article, we characterize the impact of predictable social defeat stress during daytime versus nighttime on bodyweight development and HPA axis activity in mice. While 19 days of social daytime stress led to a transient reduction in bodyweight without altering HPA axis activity at the predicted time of stressor exposure, more detrimental effects were seen in anticipation of nighttime stress. Repeated nighttime stressor exposure led to alterations in food metabolization and reduced HPA axis activity with lower circulating adrenocorticotropic hormone (ACTH) and GC concentrations at the time of predicted stressor exposure. Our data reveal a circadian gating of stress adaptation to predictable social defeat stress at the level of the HPA axis, with an impact on metabolic homeostasis, underscoring the importance of timing for the body's adaptability to repetitive stress.

  3. An Adaptive Handover Prediction Scheme for Seamless Mobility Based Wireless Networks

    PubMed Central

    Safa Sadiq, Ali; Fisal, Norsheila Binti; Ghafoor, Kayhan Zrar; Lloret, Jaime

    2014-01-01

    We propose an adaptive handover prediction (AHP) scheme for seamless mobility-based wireless networks. The AHP scheme incorporates fuzzy logic into the access point (AP) prediction process in order to lend cognitive capability to handover decision making. Selection metrics, including received signal strength, the mobile node's relative direction towards the access points in the vicinity, and access point load, are collected and used as inputs to the fuzzy decision-making system in order to select the most preferable AP among nearby WLANs. The handover decision, which is based on a quality cost calculated with the fuzzy inference system, also relies on adaptable rather than fixed coefficients. That is, the mean and the standard deviation of the normalized network prediction metrics of the fuzzy inference system, collected from the available WLANs, are obtained adaptively and applied as statistical information to adjust the coefficients of the membership functions. In addition, we propose an adjustable weight vector concept for the input metrics in order to cope with the continuous, unpredictable variation in their membership degrees. Furthermore, handover decisions are performed in each mobile node (MN) independently, given the RSS, direction toward APs, and AP load. Finally, performance evaluation of the proposed scheme shows its superiority compared with representative prediction approaches. PMID:25574490
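
    A minimal sketch of the adaptive-coefficient idea (our reconstruction, not the authors' code): each metric's membership function is parameterized by the running mean and standard deviation of that metric over the observed WLANs, and the AP maximizing the weighted fuzzy score is selected. Metric values and weights are illustrative.

      import numpy as np

      def sigmoid_mf(x, center, scale):
          """Membership in a 'high value' fuzzy set; center/scale come from data."""
          return 1.0 / (1.0 + np.exp(-(x - center) / (scale + 1e-9)))

      def quality_cost(metrics, stats, weights):
          rss, direction, load = metrics
          mu_rss  = sigmoid_mf(rss, *stats["rss"])          # strong signal is good
          mu_dir  = sigmoid_mf(direction, *stats["dir"])    # heading toward AP is good
          mu_load = 1.0 - sigmoid_mf(load, *stats["load"])  # light load is good
          return np.dot(weights, [mu_rss, mu_dir, mu_load])

      # membership parameters adapted from recent samples across nearby WLANs
      samples = {"rss": [-82, -70, -64], "dir": [0.1, 0.5, 0.9], "load": [0.2, 0.5, 0.9]}
      stats = {k: (np.mean(v), np.std(v)) for k, v in samples.items()}

      candidate_aps = [(-66.0, 0.8, 0.4), (-78.0, 0.3, 0.2)]  # (RSS, direction, load)
      best = max(candidate_aps, key=lambda m: quality_cost(m, stats, (0.5, 0.3, 0.2)))
      print("handover target AP:", best)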

  4. Starvation stress during larval development reveals predictive adaptive response in adult worker honey bees (Apis mellifera)

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A variety of organisms exhibit developmental plasticity that results in differences in adult morphology, physiology or behavior. This variation in the phenotype, called “Predictive Adaptive Response (PAR),” gives a selective advantage in an adult's environment if the adult experiences environments s...

  5. An adaptive handover prediction scheme for seamless mobility based wireless networks.

    PubMed

    Sadiq, Ali Safa; Fisal, Norsheila Binti; Ghafoor, Kayhan Zrar; Lloret, Jaime

    2014-01-01

    We propose an adaptive handover prediction (AHP) scheme for seamless mobility-based wireless networks. The AHP scheme incorporates fuzzy logic into the access point (AP) prediction process in order to lend cognitive capability to handover decision making. Selection metrics, including received signal strength, the mobile node's relative direction towards the access points in the vicinity, and access point load, are collected and used as inputs to the fuzzy decision-making system in order to select the most preferable AP among nearby WLANs. The handover decision, which is based on a quality cost calculated with the fuzzy inference system, also relies on adaptable rather than fixed coefficients. That is, the mean and the standard deviation of the normalized network prediction metrics of the fuzzy inference system, collected from the available WLANs, are obtained adaptively and applied as statistical information to adjust the coefficients of the membership functions. In addition, we propose an adjustable weight vector concept for the input metrics in order to cope with the continuous, unpredictable variation in their membership degrees. Furthermore, handover decisions are performed in each mobile node (MN) independently, given the RSS, direction toward APs, and AP load. Finally, performance evaluation of the proposed scheme shows its superiority compared with representative prediction approaches.

  6. Self-consistent modeling of DEMOs with 1.5D BALDUR integrated predictive modeling code

    NASA Astrophysics Data System (ADS)

    Wisitsorasak, A.; Somjinda, B.; Promping, J.; Onjun, T.

    2017-02-01

    Self-consistent simulations of four DEMO designs proposed by teams from China, Europe, India, and Korea are carried out using the BALDUR integrated predictive modeling code, in which theory-based models are used for both core transport and boundary conditions. In these simulations, a combination of the NCLASS neoclassical transport model and the multimode (MMM95) anomalous transport model is used to compute core transport. The boundary is taken to be at the top of the pedestal, where the pedestal values are described using a pedestal temperature model based on a combination of magnetic and flow shear stabilization, pedestal width scaling and an infinite-n ballooning pressure gradient model, and a pedestal density model based on a line average density. Even though an optimistic scenario is considered, the simulation results suggest that, with the exclusion of ELMs, the fusion gain Q obtained for these reactors is pessimistic compared to their original designs, i.e. 52% for the Chinese design, 63% for the European design, 22% for the Korean design, and 26% for the Indian design. In addition, the predicted bootstrap current fractions are also found to be lower than in the original designs, reaching only fractions of the design values, i.e. 0.49 (China), 0.66 (Europe), and 0.58 (India). Furthermore, in relation to sensitivity, it is found that increasing the values of the auxiliary heating power and the electron line average density beyond their design values yields an enhancement of fusion performance. In addition, the inclusion of sawtooth oscillation effects demonstrates positive impacts on the plasma and fusion performance in the European, Indian and Korean DEMOs, but degrades the performance in the Chinese DEMO.

  7. Large-scale prediction of long non-coding RNA functions in a coding–non-coding gene co-expression network

    PubMed Central

    Liao, Qi; Liu, Changning; Yuan, Xiongying; Kang, Shuli; Miao, Ruoyu; Xiao, Hui; Zhao, Guoguang; Luo, Haitao; Bu, Dechao; Zhao, Haitao; Skogerbø, Geir; Wu, Zhongdao; Zhao, Yi

    2011-01-01

    Although accumulating evidence has provided insight into the various functions of long non-coding RNAs (lncRNAs), the exact functions of the majority of such transcripts are still unknown. Here, we report the first computational annotation of lncRNA functions based on public microarray expression profiles. A coding–non-coding gene co-expression (CNC) network was constructed from re-annotated Affymetrix Mouse Genome Array data. Probable functions for a total of 340 lncRNAs were predicted based on topological or other network characteristics, such as module sharing, association with network hubs, and combinations of co-expression and genomic adjacency. The functions annotated to the lncRNAs mainly involve organ or tissue development (e.g. neuron, eye and muscle development), cellular transport (e.g. neuronal transport and sodium ion, acid or lipid transport) or metabolic processes (e.g. involving macromolecules, phosphocreatine and tyrosine). PMID:21247874
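
    The guilt-by-association logic of such networks fits in a dozen lines. Below is a synthetic sketch, not the authors' pipeline: correlate expression profiles, link genes whose Pearson correlation passes a threshold, and annotate an lncRNA with the function most common among its coding neighbors. Gene names, annotations, and expression data are all invented.

      import numpy as np

      rng = np.random.default_rng(0)
      expr = rng.normal(size=(6, 20))          # 6 genes x 20 arrays (synthetic)
      genes = ["lncA", "g1", "g2", "g3", "g4", "g5"]
      go = {"g1": "neuron development", "g2": "neuron development",
            "g3": "lipid transport", "g4": "lipid transport", "g5": "glycolysis"}
      expr[1] = expr[0] + 0.3 * rng.normal(size=20)   # g1, g2 co-express with lncA
      expr[2] = expr[0] + 0.3 * rng.normal(size=20)

      corr = np.corrcoef(expr)                 # Pearson correlations between genes
      neighbors = [genes[j] for j in range(1, len(genes)) if abs(corr[0, j]) > 0.8]
      terms, counts = np.unique([go[g] for g in neighbors], return_counts=True)
      print("predicted function of lncA:", terms[np.argmax(counts)])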

  8. Adaptive immunity increases the pace and predictability of evolutionary change in commensal gut bacteria

    PubMed Central

    Barroso-Batista, João; Demengeot, Jocelyne; Gordo, Isabel

    2015-01-01

    Co-evolution between the mammalian immune system and the gut microbiota is believed to have shaped the microbiota's astonishing diversity. Here we test the corollary hypothesis that the adaptive immune system, directly or indirectly, influences the evolution of commensal species. We compare the evolution of Escherichia coli upon colonization of the gut of wild-type and Rag2−/− mice, which lack lymphocytes. We show that bacterial adaptation is slower in immune-compromised animals, a phenomenon explained by differences in the action of natural selection within each host. Emerging mutations exhibit strong beneficial effects in healthy hosts but substantial antagonistic pleiotropy in immune-deficient mice. This feature is due to changes in the composition of the gut microbiota, which differs according to the immune status of the host. Our results indicate that the adaptive immune system influences the tempo and predictability of E. coli adaptation to the mouse gut. PMID:26615893

  9. Adaptive sensing of ECG signals using R-R interval prediction.

    PubMed

    Nakaya, Shogo; Nakamura, Yuichi

    2013-01-01

    There is growing demand for systems consisting of tiny sensor nodes powered by small batteries that acquire electrocardiogram (ECG) data and wirelessly transmit the data to remote base stations or mobile phones continuously over a long period. Conserving electric power in the wireless sensor nodes (WSNs) is essential in such systems. Adaptive sensing is promising for this purpose, since it can reduce the energy consumed not only for data transmission but also for sensing. However, the basic method of adaptive sensing, referred to here as "plain adaptive sensing," is not suitable for ECG signals because it sometimes captures the R waves defectively. We introduce an improved adaptive sensing method for ECG signals by incorporating R-R interval prediction. Our method improves the characteristics of ECG compression and drastically reduces the total energy consumption of the WSNs.
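
    A sketch of the improvement described (our reading of it, with invented parameters): rather than adapting the sampling rate to recent signal activity alone, the node predicts the next R wave from the R-R interval history and forces high-rate sensing inside a guard window around the predicted R time.

      import numpy as np

      def next_sampling_rate(t_now, r_peaks, lo_rate=50.0, hi_rate=500.0, guard=0.08):
          """Return the sampling rate (Hz) to use at time t_now (seconds)."""
          if len(r_peaks) < 2:
              return hi_rate                       # no history yet: stay safe
          rr = np.diff(r_peaks)
          rr_pred = np.mean(rr[-4:])               # predict the next R-R interval
          t_next_r = r_peaks[-1] + rr_pred         # predicted next R-peak time
          near_r = abs(t_now - t_next_r) < guard   # inside the guard window?
          return hi_rate if near_r else lo_rate

      r_history = [0.0, 0.82, 1.63, 2.45]          # detected R-peak times (s)
      for t in (2.9, 3.2, 3.27):
          print(f"t={t:.2f}s -> sample at {next_sampling_rate(t, r_history):.0f} Hz")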

  10. A 4.8 kbps code-excited linear predictive coder

    NASA Technical Reports Server (NTRS)

    Tremain, Thomas E.; Campbell, Joseph P., Jr.; Welch, Vanoy C.

    1988-01-01

    A secure voice system (STU-3) capable of providing end-to-end secure voice communications was developed in 1984. The terminal for the new system will be built around the standard LPC-10 voice processor algorithm. While the performance of the present STU-3 processor is considered to be good, its response to nonspeech sounds such as whistles, coughs and impulse-like noises may not be completely acceptable. Speech in noisy environments also causes problems with the LPC-10 voice algorithm. In addition, there is always a demand for something better. It is hoped that LPC-10's 2.4 kbps voice performance will be complemented with a very high quality speech coder operating at a higher data rate. This new coder is one of a number of candidate algorithms being considered for an upgraded version of the STU-3 in late 1989. The problems of designing a code-excited linear predictive (CELP) coder that provides very high quality speech at a 4.8 kbps data rate and can be implemented on today's hardware are considered.
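
    The analysis-by-synthesis loop at the heart of any CELP coder can be sketched briefly. The fragment below is a bare-bones illustration, not the candidate STU-3 algorithm: each stochastic codebook vector is passed through the LPC synthesis filter, the optimal gain is computed in closed form, and the index minimizing the squared error is kept. A real 4.8 kbps coder adds an adaptive codebook for pitch and perceptual weighting; the filter and codebook here are illustrative.

      import numpy as np
      from scipy.signal import lfilter

      rng = np.random.default_rng(1)
      a = np.array([1.0, -0.9])                 # A(z) coefficients (order-1 example)
      target = rng.normal(size=40)              # target subframe signal (synthetic)
      codebook = rng.normal(size=(128, 40))     # 128 stochastic excitation vectors

      best = None
      for idx, cv in enumerate(codebook):
          synth = lfilter([1.0], a, cv)         # pass code vector through 1/A(z)
          gain = np.dot(target, synth) / np.dot(synth, synth)   # closed-form gain
          err = np.sum((target - gain * synth) ** 2)
          if best is None or err < best[0]:
              best = (err, idx, gain)
      print(f"chosen index {best[1]}, gain {best[2]:.3f}, error {best[0]:.2f}")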

  11. Liner Optimization Studies Using the Ducted Fan Noise Prediction Code TBIEM3D

    NASA Technical Reports Server (NTRS)

    Dunn, M. H.; Farassat, F.

    1998-01-01

    In this paper we demonstrate the usefulness of the ducted fan noise prediction code TBIEM3D as a liner optimization design tool. Boundary conditions on the interior duct wall allow for hard walls or a locally reacting liner with axially segmented, circumferentially uniform impedance. Two liner optimization studies are considered in which farfield noise attenuation due to the presence of a liner is maximized by adjusting the liner impedance. In the first example, the dependence of optimal liner impedance on frequency and liner length is examined. Results show that both the optimal impedance and attenuation levels are significantly influenced by liner length and frequency. In the second example, TBIEM3D is used to compare radiated sound pressure levels between optimal and non-optimal liner cases at conditions designed to simulate take-off. It is shown that significant noise reduction is achieved for most of the sound field by selecting the optimal or near optimal liner impedance. Our results also indicate that there is a relatively large region of the impedance plane over which optimal or near optimal liner behavior is attainable. This is an important conclusion for the designer, since there are variations in liner characteristics due to manufacturing imprecision.

  12. An adaptive scan of high frequency subbands for dyadic intra frame in MPEG4-AVC/H.264 scalable video coding

    NASA Astrophysics Data System (ADS)

    Shahid, Z.; Chaumont, M.; Puech, W.

    2009-01-01

    This paper develops a new adaptive scanning methodology for an intra frame scalable coding framework based on a subband/wavelet (DWTSB) coding approach for MPEG-4 AVC/H.264 scalable video coding (SVC). It attempts to take advantage of prior knowledge of the frequencies present in the different higher-frequency subbands. We propose a dyadic intra frame coding method with adaptive scan (DWTSB-AS) for each subband, as the traditional zigzag scan is not suitable for high-frequency subbands. Thus, simply by modifying the scan order of the intra frame scalable coding framework of H.264, we can obtain better compression. The proposed algorithm has been theoretically justified and is thoroughly evaluated against the current SVC test model JSVM and against DWTSB through extensive coding experiments for scalable coding of intra frames. The simulation results show that the proposed scanning algorithm consistently outperforms JSVM and DWTSB in PSNR performance. This results in extra compression for intra frames, along with spatial scalability. Thus, image and video coding applications, traditionally serviced by separate coders, can be efficiently provided by an integrated coding system.
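
    The intuition behind subband-adaptive scanning can be shown with a toy block: a zigzag scan suits the LL subband, but the oriented high-frequency subbands concentrate energy along rows (LH) or columns (HL), so scanning in that orientation front-loads the significant coefficients for entropy coding. The 4x4 example below is ours, not the paper's actual scan tables.

      import numpy as np

      def zigzag(n):
          """A zigzag visiting order over an n x n block (anti-diagonal sweep)."""
          return sorted(((i, j) for i in range(n) for j in range(n)),
                        key=lambda p: (p[0] + p[1],
                                       p[1] if (p[0] + p[1]) % 2 else p[0]))

      def scan(block, order):
          return [block[i, j] for i, j in order]

      n = 4
      col_major = [(j, i) for i in range(n) for j in range(n)]  # suited to HL

      hl = np.zeros((n, n))
      hl[:, 0] = [9, 7, 5, 3]                  # column-oriented energy, as in HL
      print("zigzag   :", scan(hl, zigzag(n))) # significant coefficients scattered
      print("col-major:", scan(hl, col_major)) # significant coefficients up front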

  13. Modeling Light Adaptation in Circadian Clock: Prediction of the Response That Stabilizes Entrainment

    PubMed Central

    Yoshinaga, Tetsuya; Aihara, Kazuyuki

    2011-01-01

    Periods of biological clocks are close to but often different from the rotation period of the earth. Thus, the clocks of organisms must be adjusted to synchronize with day-night cycles. The primary signal that adjusts the clocks is light. In Neurospora, light transiently up-regulates the expression of specific clock genes. This molecular response to light is called light adaptation. Does light adaptation occur in other organisms? Using published experimental data, we first estimated the time course of the up-regulation rate of gene expression by light. Intriguingly, the estimated up-regulation rate was transient during the light period in mice as well as in Neurospora. Next, we constructed a computational model to consider how light adaptation affects the entrainment of circadian oscillation to 24-h light-dark cycles. We found that cellular oscillations are more likely to be destabilized without light adaptation, especially when light intensity is very high. From the present results, we predict that the instability of circadian oscillations under 24-h light-dark cycles can be experimentally observed if light adaptation is altered. We conclude that the functional consequence of light adaptation is to increase the adjustability to 24-h light-dark cycles and thus to adapt to fluctuating environments in nature. PMID:21698191

  14. Predicting local adaptation in fragmented plant populations: implications for restoration genetics

    PubMed Central

    Pickup, Melinda; Field, David L; Rowell, David M; Young, Andrew G

    2012-01-01

    Understanding patterns and correlates of local adaptation in heterogeneous landscapes can provide important information in the selection of appropriate seed sources for restoration. We assessed the extent of local adaptation of fitness components in 12 population pairs of the perennial herb Rutidosis leptorrhynchoides (Asteraceae) and examined whether spatial scale (0.7–600 km), environmental distance, quantitative (QST) and neutral (FST) genetic differentiation, and size of the local and foreign populations could predict patterns of adaptive differentiation. Local adaptation varied among populations and fitness components. Including all population pairs, local adaptation was observed for seedling survival, but not for biomass, while foreign genotype advantage was observed for reproduction (number of inflorescences). Among population pairs, local adaptation increased with QST and local population size for biomass. QST was associated with environmental distance, suggesting ecological selection for phenotypic divergence. However, low FST and variation in population structure in small populations demonstrates the interaction of gene flow and drift in constraining local adaptation in R. leptorrhynchoides. Our study indicates that for species in heterogeneous landscapes, collecting seed from large populations from similar environments to candidate sites is likely to provide the most appropriate seed sources for restoration. PMID:23346235

  15. Modelling and Bayesian adaptive prediction of individual patients' tumour volume change during radiotherapy.

    PubMed

    Tariq, Imran; Chen, Tao; Kirkby, Norman F; Jena, Rajesh

    2016-03-07

    The aim of this study is to develop a mathematical modelling method that can predict individual patients’ response to radiotherapy, in terms of tumour volume change during the treatment. The main concept is to start from a population-average model, which is subsequently updated from an individual’s tumour volume measurement. The model becomes increasingly personalized and so too does the prediction it produces. This idea of adaptive prediction was realised by using a Bayesian approach for updating the model parameters. The feasibility of the developed method was demonstrated on the data from 25 non-small cell lung cancer patients treated with helical tomotherapy, during which tumour volume was measured from daily imaging as part of the image-guided radiotherapy. The method could provide useful information for adaptive treatment planning and dose scheduling based on the patient’s personalised response.
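
    The population-to-individual updating can be sketched with a grid posterior. The regression model V(t) = V0*exp(-k*t), the prior on k, and the noise level below are all illustrative stand-ins for the paper's actual model; the point is that each new volume measurement reweights the posterior over the patient-specific parameter and so personalizes the prediction.

      import numpy as np

      ks = np.linspace(0.0, 0.1, 501)                    # grid over regression rate k
      prior = np.exp(-0.5 * ((ks - 0.03) / 0.02) ** 2)   # population prior (illustrative)
      prior /= prior.sum()

      V0, sigma = 100.0, 4.0                   # baseline volume, measurement noise
      measurements = [(5, 88.0), (10, 70.0)]   # (day, measured volume), synthetic

      post = prior.copy()
      for t, v in measurements:                # Bayesian update per measurement
          pred = V0 * np.exp(-ks * t)          # model prediction for each k
          post *= np.exp(-0.5 * ((v - pred) / sigma) ** 2)
          post /= post.sum()

      k_hat = np.sum(ks * post)                # posterior-mean, personalized rate
      print(f"personalized k = {k_hat:.4f}; predicted day-20 volume = "
            f"{V0 * np.exp(-k_hat * 20):.1f}")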

  16. N-body simulations for f(R) gravity using a self-adaptive particle-mesh code

    SciTech Connect

    Zhao Gongbo; Koyama, Kazuya; Li Baojiu

    2011-02-15

    We perform high-resolution N-body simulations for f(R) gravity based on a self-adaptive particle-mesh code MLAPM. The chameleon mechanism that recovers general relativity on small scales is fully taken into account by self-consistently solving the nonlinear equation for the scalar field. We independently confirm the previous simulation results, including the matter power spectrum, halo mass function, and density profiles, obtained by Oyaizu et al. [Phys. Rev. D 78, 123524 (2008)] and Schmidt et al. [Phys. Rev. D 79, 083518 (2009)], and extend the resolution up to k ≈ 20 h/Mpc for the measurement of the matter power spectrum. Based on our simulation results, we discuss how the chameleon mechanism affects the clustering of dark matter and halos on full nonlinear scales.

  17. A boosting approach for adapting the sparsity of risk prediction signatures based on different molecular levels.

    PubMed

    Sariyar, Murat; Schumacher, Martin; Binder, Harald

    2014-06-01

    Risk prediction models can link high-dimensional molecular measurements, such as DNA methylation, to clinical endpoints. For biological interpretation, a sparse fit is often desirable. Different molecular aggregation levels, such as considering DNA methylation at the CpG, gene, or chromosome level, might demand different degrees of sparsity. Hence, model building and estimation techniques should be able to adapt their sparsity according to the setting. Additionally, underestimation of coefficients, which is a typical problem of sparse techniques, should also be addressed. We propose a comprehensive approach, based on a boosting technique that allows a flexible adaptation of model sparsity and addresses these problems in an integrative way. The main motivation is to have an automatic sparsity adaptation. In a simulation study, we show that this approach reduces underestimation in sparse settings and selects more adequate model sizes than the corresponding non-adaptive boosting technique in non-sparse settings. Using different aggregation levels of DNA methylation data from a study in kidney carcinoma patients, we illustrate how automatically selected values of the sparsity tuning parameter can reflect the underlying structure of the data. In addition, prediction performance and variable selection stability are compared to those of the non-adaptive boosting approach.
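
    A minimal componentwise L2-boosting loop shows the mechanics such approaches build on (the paper's method is a likelihood-based extension with automatic sparsity tuning; this plain version on synthetic data is ours): at each step, every covariate is fit to the current residuals and only the best one receives a small shrunken update, so the number of steps controls the sparsity of the fit.

      import numpy as np

      rng = np.random.default_rng(2)
      n, p = 100, 50
      X = rng.normal(size=(n, p))
      beta_true = np.zeros(p); beta_true[:3] = [2.0, -1.5, 1.0]   # sparse truth
      y = X @ beta_true + rng.normal(size=n)

      beta = np.zeros(p)
      nu, steps = 0.1, 200                      # step size and number of steps
      for _ in range(steps):
          r = y - X @ beta                      # current residuals
          fits = (X * r[:, None]).sum(0) / (X ** 2).sum(0)  # per-covariate LS slope
          sse = ((r[:, None] - X * fits) ** 2).sum(0)       # residual SS per covariate
          j = np.argmin(sse)                    # best single covariate this step
          beta[j] += nu * fits[j]               # small shrunken update limits bias
      print("selected covariates:", np.flatnonzero(np.abs(beta) > 0.05))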

  18. Predicting the evolutionary dynamics of seasonal adaptation to novel climates in Arabidopsis thaliana.

    PubMed

    Fournier-Level, Alexandre; Perry, Emily O; Wang, Jonathan A; Braun, Peter T; Migneault, Andrew; Cooper, Martha D; Metcalf, C Jessica E; Schmitt, Johanna

    2016-05-17

    Predicting whether and how populations will adapt to rapid climate change is a critical goal for evolutionary biology. To examine the genetic basis of fitness and predict adaptive evolution in novel climates with seasonal variation, we grew a diverse panel of the annual plant Arabidopsis thaliana (multiparent advanced generation intercross lines) in controlled conditions simulating four climates: a present-day reference climate, an increased-temperature climate, a winter-warming only climate, and a poleward-migration climate with increased photoperiod amplitude. In each climate, four successive seasonal cohorts experienced dynamic daily temperature and photoperiod variation over a year. We measured 12 traits and developed a genomic prediction model for fitness evolution in each seasonal environment. This model was used to simulate evolutionary trajectories of the base population over 50 y in each climate, as well as 100-y scenarios of gradual climate change following adaptation to a reference climate. Patterns of plastic and evolutionary fitness response varied across seasons and climates. The increased-temperature climate promoted genetic divergence of subpopulations across seasons, whereas in the winter-warming and poleward-migration climates, seasonal genetic differentiation was reduced. In silico "resurrection experiments" showed limited evolutionary rescue compared with the plastic response of fitness to seasonal climate change. The genetic basis of adaptation and, consequently, the dynamics of evolutionary change differed qualitatively among scenarios. Populations with fewer founding genotypes and populations with genetic diversity reduced by prior selection adapted less well to novel conditions, demonstrating that adaptation to rapid climate change requires the maintenance of sufficient standing variation.

  19. Predicting the evolutionary dynamics of seasonal adaptation to novel climates in Arabidopsis thaliana

    PubMed Central

    Fournier-Level, Alexandre; Perry, Emily O.; Wang, Jonathan A.; Braun, Peter T.; Migneault, Andrew; Cooper, Martha D.; Metcalf, C. Jessica E.; Schmitt, Johanna

    2016-01-01

    Predicting whether and how populations will adapt to rapid climate change is a critical goal for evolutionary biology. To examine the genetic basis of fitness and predict adaptive evolution in novel climates with seasonal variation, we grew a diverse panel of the annual plant Arabidopsis thaliana (multiparent advanced generation intercross lines) in controlled conditions simulating four climates: a present-day reference climate, an increased-temperature climate, a winter-warming only climate, and a poleward-migration climate with increased photoperiod amplitude. In each climate, four successive seasonal cohorts experienced dynamic daily temperature and photoperiod variation over a year. We measured 12 traits and developed a genomic prediction model for fitness evolution in each seasonal environment. This model was used to simulate evolutionary trajectories of the base population over 50 y in each climate, as well as 100-y scenarios of gradual climate change following adaptation to a reference climate. Patterns of plastic and evolutionary fitness response varied across seasons and climates. The increased-temperature climate promoted genetic divergence of subpopulations across seasons, whereas in the winter-warming and poleward-migration climates, seasonal genetic differentiation was reduced. In silico “resurrection experiments” showed limited evolutionary rescue compared with the plastic response of fitness to seasonal climate change. The genetic basis of adaptation and, consequently, the dynamics of evolutionary change differed qualitatively among scenarios. Populations with fewer founding genotypes and populations with genetic diversity reduced by prior selection adapted less well to novel conditions, demonstrating that adaptation to rapid climate change requires the maintenance of sufficient standing variation. PMID:27140640

  20. Soliciting scientific information and beliefs in predictive modeling and adaptive management

    NASA Astrophysics Data System (ADS)

    Glynn, P. D.; Voinov, A. A.; Shapiro, C. D.

    2015-12-01

    Post-normal science requires public engagement and adaptive corrections in addressing issues with high complexity and uncertainty. An adaptive management framework is presented for the improved management of natural resources and environments through a public participation process. The framework solicits the gathering and transformation and/or modeling of scientific information but also explicitly solicits the expression of participant beliefs. Beliefs and information are compared, explicitly discussed for alignments or misalignments, and ultimately melded back together as a "knowledge" basis for making decisions. An effort is made to recognize the human or participant biases that may affect the information base and the potential decisions. In a separate step, an attempt is made to recognize and predict the potential "winners" and "losers" (perceived or real) of any decision or action. These "winners" and "losers" include present human communities with different spatial, demographic or socio-economic characteristics as well as more dispersed or more diffusely characterized regional or global communities. "Winners" and "losers" may also include future human communities as well as communities of other biotic species. As in any adaptive management framework, assessment of predictions, iterative follow-through and adaptation of policies or actions is essential, and commonly very difficult or impossible to achieve. Recognizing beforehand the limits of adaptive management is essential. More generally, knowledge of the behavioral and economic sciences and of ethics and sociology will be key to a successful implementation of this adaptive management framework. Knowledge of biogeophysical processes will also be essential, but by definition of the issues being addressed, will always be incomplete and highly uncertain. The human dimensions of the issues addressed and the participatory processes used carry their own complexities and uncertainties. Some ideas and principles are

  1. A new type of color-coded light structures for an adapted and rapid determination of point correspondences for 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Caulier, Yannick; Bernhard, Luc; Spinnler, Klaus

    2011-05-01

    This paper proposes a new type of color-coded light structures for the inspection of complex moving objects. The novelty of the method lies in the generation of free-form color patterns permitting the projection of color structures adapted to the geometry of the surfaces to be characterized. The point correspondence determination algorithm consists of a stepwise procedure involving simple and computationally fast methods. The algorithm is therefore robust against the varying recording conditions typically arising in real-time quality control environments and can be integrated for industrial inspection purposes. The proposed approach is validated and compared on the basis of different experiments concerning 3D surface reconstruction by projecting adapted spatial color-coded patterns. It is demonstrated that, for certain inspection requirements, the method permits coding more reference points than similar color-coded matrix methods.

  2. An Adaptive Data Gathering Scheme for Multi-Hop Wireless Sensor Networks Based on Compressed Sensing and Network Coding

    PubMed Central

    Yin, Jun; Yang, Yuwang; Wang, Lei

    2016-01-01

    Joint design of compressed sensing (CS) and network coding (NC) has been demonstrated to provide a new data gathering paradigm for multi-hop wireless sensor networks (WSNs). By exploiting the correlation of the network sensed data, a variety of data gathering schemes based on NC and CS (Compressed Data Gathering—CDG) have been proposed. However, these schemes assume that the sparsity of the network sensed data is constant and that the value of the sparsity is known before starting each data gathering epoch; thus they ignore the variation of the data observed by WSNs deployed in practical circumstances. In this paper, we present a complete design of a feedback CDG scheme in which the sink node adaptively queries the nodes of interest to acquire an appropriate number of measurements. The adaptive measurement-formation procedure and its termination rules are proposed and analyzed in detail. Moreover, in order to minimize the number of overall transmissions in the formation procedure of each measurement, we have developed an NP-complete model (Maximum Leaf Nodes Minimum Steiner Nodes—MLMS) and realized a scalable greedy algorithm to solve the problem. Experimental results show that the proposed measurement-formation method outperforms previous schemes, and experiments on datasets from both ocean temperature sensing and a practical network deployment also prove the effectiveness of the proposed feedback CDG scheme. PMID:27043574
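
    The feedback loop can be sketched end to end with synthetic data. In the fragment below (our illustration, not the authors' implementation), the sink keeps requesting batches of random-projection measurements, the kind that in-network coding can aggregate, until two successive reconstructions agree, so the measurement count adapts to the actual sparsity.

      import numpy as np
      from scipy.fftpack import idct
      from sklearn.linear_model import OrthogonalMatchingPursuit

      rng = np.random.default_rng(3)
      N = 128
      coef = np.zeros(N)
      coef[rng.choice(N, 5, replace=False)] = 5 * rng.normal(size=5)
      x = idct(coef, norm="ortho")                  # node readings, sparse in DCT
      Psi = idct(np.eye(N), norm="ortho", axis=0)   # sparsifying basis (x = Psi @ coef)

      Phi = rng.normal(size=(10, N))                # initial batch of measurements
      prev = None
      while Phi.shape[0] < N:
          y = Phi @ x                               # measurements gathered via NC
          omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5).fit(Phi @ Psi, y)
          x_hat = Psi @ omp.coef_
          if prev is not None and np.linalg.norm(x_hat - prev) < 1e-3 * np.linalg.norm(x_hat):
              break                                 # termination rule: stable recovery
          prev = x_hat
          Phi = np.vstack([Phi, rng.normal(size=(10, N))])  # sink requests 10 more
      print(f"stopped at M={Phi.shape[0]} measurements; "
            f"relative error={np.linalg.norm(x_hat - x) / np.linalg.norm(x):.2e}")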

  3. An Adaptive Data Gathering Scheme for Multi-Hop Wireless Sensor Networks Based on Compressed Sensing and Network Coding.

    PubMed

    Yin, Jun; Yang, Yuwang; Wang, Lei

    2016-04-01

    Joint design of compressed sensing (CS) and network coding (NC) has been demonstrated to provide a new data gathering paradigm for multi-hop wireless sensor networks (WSNs). By exploiting the correlation of the network sensed data, a variety of data gathering schemes based on NC and CS (Compressed Data Gathering--CDG) have been proposed. However, these schemes assume that the sparsity of the network sensed data is constant and that the value of the sparsity is known before starting each data gathering epoch; thus they ignore the variation of the data observed by WSNs deployed in practical circumstances. In this paper, we present a complete design of a feedback CDG scheme in which the sink node adaptively queries the nodes of interest to acquire an appropriate number of measurements. The adaptive measurement-formation procedure and its termination rules are proposed and analyzed in detail. Moreover, in order to minimize the number of overall transmissions in the formation procedure of each measurement, we have developed an NP-complete model (Maximum Leaf Nodes Minimum Steiner Nodes--MLMS) and realized a scalable greedy algorithm to solve the problem. Experimental results show that the proposed measurement-formation method outperforms previous schemes, and experiments on datasets from both ocean temperature sensing and a practical network deployment also prove the effectiveness of the proposed feedback CDG scheme.

  4. Verification of computational aerodynamic predictions for complex hypersonic vehicles using the INCA™ code

    SciTech Connect

    Payne, J.L.; Walker, M.A.

    1995-01-01

    This paper describes a process of combining two state-of-the-art CFD tools, SPRINT and INCA, in a manner which extends the utility of both codes beyond what is possible from either code alone. The speed and efficiency of the PNS code, SPRINT, have been combined with the capability of a Navier-Stokes code to model fully elliptic, viscous separated regions on high-performance, high-speed flight systems. The coupled SPRINT/INCA capability is applicable to the design and evaluation of high-speed flight vehicles in the supersonic to hypersonic speed regimes. This paper describes the codes involved, the interface process and a few selected test cases which illustrate the SPRINT/INCA coupling process. Results have shown that the combination of SPRINT and INCA produces correct results and can lead to improved computational analyses for complex, three-dimensional problems.

  5. Modification of the PARC Navier-Stokes Code to predict rocket engine nozzle performance

    NASA Technical Reports Server (NTRS)

    Collins, Frank G.; Myruski, Bryan; Orr, Joseph L.

    1990-01-01

    The PARC2D Navier-Stokes code was modified to compute the performance parameters for rocket engine nozzles. The perfect gas code was applied to the SSME engine nozzle for inviscid, laminar and turbulent flow. Inviscid computations compare well with Rocketdyne computations. Performance degradation due to the boundary layers is very reasonable. Application of the code to nontraditional nozzle geometries and to low Reynolds number nozzles is demonstrated. Modification of the code for equilibrium H2/O2 chemistry is described. Thermodynamic and equilibrium constants are determined from statistical mechanics and the transport properties from exact kinetic theory, using collision integrals determined from appropriate intermolecular potentials. The equilibrium code was used to compute the SSME flowfield. Modifications of the flowfield due to the change of composition are described.

  6. COSAL: A black-box compressible stability analysis code for transition prediction in three-dimensional boundary layers

    NASA Technical Reports Server (NTRS)

    Malik, M. R.

    1982-01-01

    A fast computer code, COSAL, for transition prediction in three-dimensional boundary layers using compressible stability analysis is described. The compressible stability eigenvalue problem is solved using a finite difference method, and the code is a black box in the sense that no guess of the eigenvalue is required from the user. Several optimization procedures were incorporated into COSAL to calculate integrated growth rates (N factors) for transition correlation on swept and tapered laminar flow control wings using the well-known e^N method. A user's guide to the program is provided.
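
    The e^N correlation itself is a one-line integral: the amplitude ratio of an instability wave is a/a0 = exp(N), where N(x) is the integral of the local spatial growth rate -alpha_i along the surface, and transition is correlated with N reaching an empirical threshold, commonly quoted around 9. In the sketch below, the growth rates standing in for COSAL's eigenvalue output are invented.

      import numpy as np

      x = np.linspace(0.0, 1.0, 200)                       # arc length, m
      alpha_i = -np.interp(x, [0, 0.2, 1.0], [0, 25, 5])   # -alpha_i > 0: amplified

      # trapezoidal integration of the growth rate: N(x) = integral of -alpha_i dx
      N = np.concatenate([[0.0], np.cumsum(0.5 * (-alpha_i[1:] - alpha_i[:-1])
                                           * np.diff(x))])

      N_crit = 9.0                                         # empirical threshold
      x_tr = x[np.argmax(N >= N_crit)] if N.max() >= N_crit else None
      print(f"max N = {N.max():.1f}; predicted transition at x = {x_tr}")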

  7. STGSTK: A computer code for predicting multistage axial flow compressor performance by a meanline stage stacking method

    NASA Technical Reports Server (NTRS)

    Steinke, R. J.

    1982-01-01

    A FORTRAN computer code is presented for off-design performance prediction of axial-flow compressors. Stage and compressor performance is obtained by a stage-stacking method that uses representative velocity diagrams at rotor inlet and outlet meanline radii. The code has options for: (1) direct user input or calculation of nondimensional stage characteristics; (2) adjustment of stage characteristics for off-design speed and blade setting angle; (3) adjustment of rotor deviation angle for off-design conditions; and (4) SI or U.S. customary units. Correlations from experimental data are used to model real flow conditions. Calculations are compared with experimental data.

  8. Reconfigurable mask for adaptive coded aperture imaging (ACAI) based on an addressable MOEMS microshutter array

    NASA Astrophysics Data System (ADS)

    McNie, Mark E.; Combes, David J.; Smith, Gilbert W.; Price, Nicola; Ridley, Kevin D.; Brunson, Kevin M.; Lewis, Keith L.; Slinger, Chris W.; Rogers, Stanley

    2007-09-01

    Coded aperture imaging has been used for astronomical applications for several years. Typical implementations use a fixed mask pattern and are designed to operate in the X-Ray or gamma ray bands. More recent applications have emerged in the visible and infra red bands for low cost lens-less imaging systems. System studies have shown that considerable advantages in image resolution may accrue from the use of multiple different images of the same scene - requiring a reconfigurable mask. We report on work to develop a novel, reconfigurable mask based on micro-opto-electro-mechanical systems (MOEMS) technology employing interference effects to modulate incident light in the mid-IR band (3-5μm). This is achieved by tuning a large array of asymmetric Fabry-Perot cavities by applying an electrostatic force to adjust the gap between a moveable upper polysilicon mirror plate supported on suspensions and underlying fixed (electrode) layers on a silicon substrate. A key advantage of the modulator technology developed is that it is transmissive and high speed (e.g. 100kHz) - allowing simpler imaging system configurations. It is also realised using a modified standard polysilicon surface micromachining process (i.e. MUMPS-like) that is widely available and hence should have a low production cost in volume. We have developed designs capable of operating across the entire mid-IR band with peak transmissions approaching 100% and high contrast. By using a pixelated array of small mirrors, a large area device comprising individually addressable elements may be realised that allows reconfiguring of the whole mask at speeds in excess of video frame rates.
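
    The modulation principle can be captured by the ideal Airy transmission function, T = 1 / (1 + F sin^2(2*pi*d/lambda)) with finesse coefficient F = 4R/(1-R)^2. The mirror reflectance and gap values below are illustrative, not the device's measured parameters, but they show how a sub-micron electrostatic change of gap switches a mid-IR pixel between pass and block.

      import numpy as np

      def transmission(d, wavelength, R=0.8):
          """Ideal lossless Fabry-Perot transmission vs. mirror gap d."""
          F = 4 * R / (1 - R) ** 2               # finesse coefficient
          return 1.0 / (1.0 + F * np.sin(2 * np.pi * d / wavelength) ** 2)

      lam = 4.0e-6                               # mid-IR wavelength, 4 um
      for d in (2.0e-6, 2.5e-6):                 # resonant vs. detuned gap
          print(f"gap {d * 1e6:.1f} um -> T = {transmission(d, lam):.3f}")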

  9. SWAT system performance predictions. Project report. [SWAT (Short-Wavelength Adaptive Techniques)

    SciTech Connect

    Parenti, R.R.; Sasiela, R.J.

    1993-03-10

    In the next phase of Lincoln Laboratory's SWAT (Short-Wavelength Adaptive Techniques) program, the performance of a 241-actuator adaptive-optics system will be measured using a variety of synthetic-beacon geometries. As an aid in this experimental investigation, a detailed set of theoretical predictions has also been assembled. The computational tools that have been applied in this study include a numerical approach in which Monte-Carlo ray-trace simulations of accumulated phase error are developed, and an analytical analysis of the expected system behavior. This report describes the basis of these two computational techniques and compares their estimates of overall system performance. Although their regions of applicability tend to be complementary rather than redundant, good agreement is usually obtained when both sets of results can be derived for the same engagement scenario. Keywords: adaptive optics; phase conjugation; atmospheric turbulence; synthetic beacon; laser guide star.

  10. The rhythms of predictive coding? Pre-stimulus phase modulates the influence of shape perception on luminance judgments

    PubMed Central

    Han, Biao; VanRullen, Rufin

    2017-01-01

    Predictive coding is an influential model emphasizing interactions between feedforward and feedback signals. Here, we investigated the temporal dynamics of these interactions. Two gray disks with different versions of the same stimulus, one enabling predictive feedback (a 3D-shape) and one impeding it (random-lines), were simultaneously presented on the left and right of fixation. Human subjects judged the luminance of the two disks while EEG was recorded. The choice of 3D-shape or random-lines as the brighter disk was used to assess the influence of feedback signals on sensory processing in each trial (i.e., as a measure of post-stimulus predictive coding efficiency). Independently of the spatial response (left/right), we found that this choice fluctuated along with the pre-stimulus phase of two spontaneous oscillations: a ~5 Hz oscillation in contralateral frontal electrodes and a ~16 Hz oscillation in contralateral occipital electrodes. This pattern of results demonstrates that predictive coding is a rhythmic process, and suggests that it could take advantage of faster oscillations in low-level areas and slower oscillations in high-level areas. PMID:28262824

  11. Using self-similarity compensation for improving inter-layer prediction in scalable 3D holoscopic video coding

    NASA Astrophysics Data System (ADS)

    Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.

    2013-09-01

    Holoscopic imaging, also known as integral imaging, has been recently attracting the attention of the research community, as a promising glassless 3D technology due to its ability to create a more realistic depth illusion than the current stereoscopic or multiview solutions. However, in order to gradually introduce this technology into the consumer market and to efficiently deliver 3D holoscopic content to end-users, backward compatibility with legacy displays is essential. Consequently, to enable 3D holoscopic content to be delivered and presented on legacy displays, a display scalable 3D holoscopic coding approach is required. Hence, this paper presents a display scalable architecture for 3D holoscopic video coding with a three-layer approach, where each layer represents a different level of display scalability: Layer 0 - a single 2D view; Layer 1 - 3D stereo or multiview; and Layer 2 - the full 3D holoscopic content. In this context, a prediction method is proposed, which combines inter-layer prediction, aiming to exploit the existing redundancy between the multiview and the 3D holoscopic layers, with self-similarity compensated prediction (previously proposed by the authors for non-scalable 3D holoscopic video coding), aiming to exploit the spatial redundancy inherent to the 3D holoscopic enhancement layer. Experimental results show that the proposed combined prediction can improve significantly the rate-distortion performance of scalable 3D holoscopic video coding with respect to the authors' previously proposed solutions, where only inter-layer or only self-similarity prediction is used.

  12. Assessing the Predictive Capability of the LIFEIV Nuclear Fuel Performance Code using Sequential Calibration

    SciTech Connect

    Stull, Christopher J.; Williams, Brian J.; Unal, Cetin

    2012-07-05

    This report considers the problem of calibrating a numerical model to data from an experimental campaign (or series of experimental tests). The issue is that when an experimental campaign is proposed, only the input parameters associated with each experiment are known (i.e. outputs are not known because the experiments have yet to be conducted). Faced with such a situation, it would be beneficial from the standpoint of resource management to carefully consider the sequence in which the experiments are conducted. In this way, the resources available for experimental tests may be allocated in a way that best 'informs' the calibration of the numerical model. To address this concern, the authors propose decomposing the input design space of the experimental campaign into its principal components. Subsequently, the utility (to be explained) of each experimental test to the principal components of the input design space is used to formulate the sequence in which the experimental tests will be used for model calibration purposes. The results reported herein build on those presented and discussed in [1,2] wherein Verification & Validation and Uncertainty Quantification (VU) capabilities were applied to the nuclear fuel performance code LIFEIV. In addition to the raw results from the sequential calibration studies derived from the above, a description of the data within the context of the Predictive Maturity Index (PMI) will also be provided. The PMI [3,4] is a metric initiated and developed at Los Alamos National Laboratory to quantitatively describe the ability of a numerical model to make predictions in the absence of experimental data, where it is noted that 'predictions in the absence of experimental data' is not synonymous with extrapolation. This simply reflects the fact that resources do not exist such that each and every execution of the numerical model can be compared against experimental data. If such resources existed, the justification for numerical models

  13. Adaptive Code Division Multiple Access Protocol for Wireless Network-on-Chip Architectures

    NASA Astrophysics Data System (ADS)

    Vijayakumaran, Vineeth

    Massive levels of integration following Moore's Law ushered in a paradigm shift in the way on-chip interconnections are designed. With ever higher numbers of cores on the same die, traditional bus-based interconnections are no longer a scalable communication infrastructure. On-chip networks were proposed to enable a scalable plug-and-play mechanism for interconnecting hundreds of cores on the same chip. Wired interconnects between the cores in a traditional Network-on-Chip (NoC) system become a bottleneck as the number of cores increases, raising the latency and energy needed to transmit signals over them. Hence, many alternative emerging interconnect technologies have been proposed, namely 3D, photonic and multi-band RF interconnects. Although they provide better connectivity, higher speed and higher bandwidth than wired interconnects, they also face challenges with heat dissipation and manufacturing difficulties. On-chip wireless interconnects are another proposed alternative, which need no physical interconnection layout, as data travel over the wireless medium. They are integrated into a hybrid NoC architecture consisting of both wired and wireless links, which provides higher bandwidth, lower latency, lower area overhead and reduced energy dissipation in communication. However, as the bandwidth of the wireless channels is limited, an efficient media access control (MAC) scheme is required to enhance the utilization of the available bandwidth. This thesis proposes using a multiple access mechanism such as Code Division Multiple Access (CDMA) to enable multiple transmitter-receiver pairs to send data over the wireless channel simultaneously. It will be shown that such a hybrid wireless NoC with an efficient CDMA-based MAC protocol can significantly increase the performance of the system while lowering the energy dissipation in data transfer. In this work it is shown that the wireless NoC with the proposed CDMA-based MAC protocol
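
    For intuition, the toy Python sketch below spreads two bitstreams with orthogonal Walsh (Hadamard) codes, sums them on a shared channel and recovers each stream by correlation, which is the core CDMA mechanism; noise, synchronization and the thesis's actual on-chip MAC protocol are ignored.

      import numpy as np
      from scipy.linalg import hadamard

      def cdma_roundtrip(bits_per_tx):
          # Assign each transmitter an orthogonal 8-chip Walsh code.
          codes = hadamard(8)[:len(bits_per_tx)]
          # Superpose all spread signals on the shared wireless channel.
          channel = sum(np.outer(b, c) for b, c in zip(bits_per_tx, codes))
          # Despread: correlating with each code isolates that stream.
          return [np.sign(channel @ c) for c in codes]

      streams = [np.array([1, -1, 1]), np.array([-1, -1, 1])]
      print(cdma_roundtrip(streams))  # each +/-1 stream is recovered exactly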

  14. An insula-frontostriatal network mediates flexible cognitive control by adaptively predicting changing control demands

    PubMed Central

    Jiang, Jiefeng; Beck, Jeffrey; Heller, Katherine; Egner, Tobias

    2015-01-01

    The anterior cingulate and lateral prefrontal cortices have been implicated in implementing context-appropriate attentional control, but the learning mechanisms underlying our ability to flexibly adapt the control settings to changing environments remain poorly understood. Here we show that human adjustments to varying control demands are captured by a reinforcement learner with a flexible, volatility-driven learning rate. Using model-based functional magnetic resonance imaging, we demonstrate that volatility of control demand is estimated by the anterior insula, which in turn optimizes the prediction of forthcoming demand in the caudate nucleus. The caudate's prediction of control demand subsequently guides the implementation of proactive and reactive attentional control in dorsal anterior cingulate and dorsolateral prefrontal cortices. These data enhance our understanding of the neuro-computational mechanisms of adaptive behaviour by connecting the classic cingulate-prefrontal cognitive control network to a subcortical control-learning mechanism that infers future demands by flexibly integrating remote and recent past experiences. PMID:26391305
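
    The computational idea can be reduced to a delta rule whose learning rate is gated by estimated volatility, sketched below in Python; the parameter names and the linear gain are assumptions for illustration, not the authors' Bayesian model.

      def volatility_gated_update(value, outcome, volatility,
                                  base_lr=0.1, gain=0.5):
          # Higher estimated volatility of control demand -> faster learning,
          # so recent experience dominates in unstable contexts.
          lr = base_lr + gain * volatility
          prediction_error = outcome - value
          return value + lr * prediction_error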

  15. LPTA: Location Predictive and Time Adaptive Data Gathering Scheme with Mobile Sink for Wireless Sensor Networks

    PubMed Central

    Rodrigues, Joel J. P. C.

    2014-01-01

    This paper exploits sink mobility to prolong the lifetime of sensor networks while keeping the data transmission delay relatively low. A location predictive and time adaptive data gathering scheme is proposed. In this paper, we introduce a sink location prediction principle based on loose time synchronization and deduce the time-location formulas of the mobile sink. According to local clocks and the time-location formulas of the mobile sink, nodes in the network are able to calculate the current location of the mobile sink accurately and route data packets in a timely manner toward the mobile sink by multihop relay. Considering that the data packets generated in different areas may differ greatly, an adaptive dwelling time adjustment method is also proposed to balance energy consumption among nodes in the network. Simulation results show that our data gathering scheme enables data routing with less data transmission delay and balances energy consumption among nodes. PMID:25302327

  16. Follow you, follow me: continuous mutual prediction and adaptation in joint tapping.

    PubMed

    Konvalinka, Ivana; Vuust, Peter; Roepstorff, Andreas; Frith, Chris D

    2010-11-01

    To study the mechanisms of coordination that are fundamental to successful interactions, we carried out a joint finger tapping experiment in which pairs of participants were asked to maintain a given beat while synchronizing to an auditory signal coming from the other person or the computer. When both were hearing each other, the pair became a coupled, mutually and continuously adaptive unit of two "hyper-followers", with their intertap intervals (ITIs) oscillating in opposite directions on a tap-to-tap basis. There was thus no evidence for the emergence of a leader-follower strategy. We also found that dyads were equally good at synchronizing with the irregular, but responsive other as with the predictable, unresponsive computer. However, they performed worse when the "other" was both irregular and unresponsive. We thus propose that interpersonal coordination is facilitated by the mutual abilities to (a) predict the other's subsequent action and (b) adapt accordingly on a millisecond timescale.

  17. From boys to men: predicting adult adaptation from middle childhood sociometric status.

    PubMed

    Nelson, Sarah E; Dishion, Thomas J

    2004-01-01

    This report examines the predictive validity of sociometric status at age 9-10 to young adult (age 23-24) antisocial behavior, work and school engagement, and arrests using Oregon Youth Study males (N = 206). A variety of analytic strategies included (a) multivariate analyses to examine the variation in adult adaptation as a function of sociometric classification at age 9-10, (b) regression analyses to evaluate the relative contribution of "liked most" and "liked least" peer nominations, and (c) structural equation modeling to predict the young adult outcome constructs from social preference at age 9-10. Contrary to expectation, when controlling for early antisocial behavior and academic skills, boys' social preference scores still predicted young adult outcomes. Longitudinal findings are discussed with respect to the salience of male peer rejection in middle childhood and the social developmental processes that may account for the predictive validity of peer rejection.

  18. Feasibility of using adaptive logic networks to predict compressor unit failure

    SciTech Connect

    Armstrong, W.W.; Chungying Chu; Thomas, M.M.

    1995-12-31

    In this feasibility study, an adaptive logic network (ALN) was trained to predict failures of turbine-driven compressor units using a large database of measurements. No expert knowledge about compressor systems was involved. The predictions used only the statistical properties of the measurements and the indications of failure types. A fuzzy set was used to model measurements typical of normal operation. It was constrained by a requirement imposed during ALN training, that it should have a shape similar to a Gaussian density, more precisely, that its logarithm should be convex-up. Initial results obtained using this approach to knowledge discovery in the database were encouraging.

  19. An Adaptive Prediction-Based Approach to Lossless Compression of Floating-Point Volume Data.

    PubMed

    Fout, N; Ma, Kwan-Liu

    2012-12-01

    In this work, we address the problem of lossless compression of scientific and medical floating-point volume data. We propose two prediction-based compression methods that share a common framework, which consists of a switched prediction scheme wherein the best predictor out of a preset group of linear predictors is selected. Such a scheme is able to adapt to different datasets as well as to varying statistics within the data. The first method, called APE (Adaptive Polynomial Encoder), uses a family of structured interpolating polynomials for prediction, while the second method, which we refer to as ACE (Adaptive Combined Encoder), combines predictors from previous work with the polynomial predictors to yield a more flexible, powerful encoder that is able to effectively decorrelate a wide range of data. In addition, in order to facilitate efficient visualization of compressed data, our scheme provides an option to partition floating-point values in such a way as to provide a progressive representation. We compare our two compressors to existing state-of-the-art lossless floating-point compressors for scientific data, with our data suite including both computer simulations and observational measurements. The results demonstrate that our polynomial predictor, APE, is comparable to previous approaches in terms of speed but achieves better compression rates on average. ACE, our combined predictor, while somewhat slower, is able to achieve the best compression rate on all datasets, with significantly better rates on most of the datasets.
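
    The switched-prediction framework can be sketched in a few lines of Python: per block, try each linear (polynomial) predictor and keep the one with the smallest residual cost. Real float compressors like APE and ACE predict in a way that keeps the residual losslessly codable and operate on multidimensional volumes; this value-domain, 1-D version is only illustrative.

      PREDICTORS = {  # simple interpolating-polynomial predictors
          "order0": lambda x, i: x[i-1],
          "order1": lambda x, i: 2*x[i-1] - x[i-2],
          "order2": lambda x, i: 3*x[i-1] - 3*x[i-2] + x[i-3],
      }

      def switched_prediction(x, block=64):
          # For each block, select the predictor with the lowest total
          # absolute residual; the decoder reruns the same predictor.
          out = []
          for start in range(3, len(x), block):
              idx = range(start, min(start + block, len(x)))
              best = min(PREDICTORS, key=lambda name: sum(
                  abs(x[i] - PREDICTORS[name](x, i)) for i in idx))
              out.append((best, [x[i] - PREDICTORS[best](x, i) for i in idx]))
          return out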

  20. Model-on-Demand Predictive Control for Nonlinear Hybrid Systems With Application to Adaptive Behavioral Interventions

    PubMed Central

    Nandola, Naresh N.; Rivera, Daniel E.

    2011-01-01

    This paper presents a data-centric modeling and predictive control approach for nonlinear hybrid systems. System identification of hybrid systems represents a challenging problem because model parameters depend on the mode or operating point of the system. The proposed algorithm applies Model-on-Demand (MoD) estimation to generate a local linear approximation of the nonlinear hybrid system at each time step, using a small subset of data selected by an adaptive bandwidth selector. The appeal of the MoD approach lies in the fact that model parameters are estimated based on a current operating point; hence estimation of locations or modes governed by autonomous discrete events is achieved automatically. The local MoD model is then converted into a mixed logical dynamical (MLD) system representation which can be used directly in a model predictive control (MPC) law for hybrid systems using multiple-degree-of-freedom tuning. The effectiveness of the proposed MoD predictive control algorithm for nonlinear hybrid systems is demonstrated on a hypothetical adaptive behavioral intervention problem inspired by Fast Track, a real-life preventive intervention for improving parental function and reducing conduct disorder in at-risk children. Simulation results demonstrate that the proposed algorithm can be useful for adaptive intervention problems exhibiting both nonlinear and hybrid character. PMID:21874087
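
    A minimal Python sketch of the Model-on-Demand estimation step, assuming a fixed bandwidth and tricube weights (the paper selects the bandwidth adaptively and feeds the result into an MLD/MPC formulation not shown here):

      import numpy as np

      def mod_local_model(X, y, x_query, bandwidth):
          # Weight stored data by distance to the current operating point.
          d = np.linalg.norm(X - x_query, axis=1)
          w = np.clip(1 - (d / bandwidth) ** 3, 0, None) ** 3  # tricube
          # Weighted least squares for a local affine (linear) model.
          A = np.hstack([X, np.ones((len(X), 1))])
          W = np.diag(np.sqrt(w))
          theta, *_ = np.linalg.lstsq(W @ A, W @ y, rcond=None)
          return theta  # local model used by the predictive controller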

  1. Prediction-based manufacturing center self-adaptive demand side energy optimization in cyber physical systems

    NASA Astrophysics Data System (ADS)

    Sun, Xinyao; Wang, Xue; Wu, Jiangwei; Liu, Youda

    2014-05-01

    Cyber physical systems (CPS) have recently emerged as a new technology which can provide promising approaches to demand side management (DSM), an important capability in industrial power systems. Meanwhile, the manufacturing center is a typical industrial power subsystem with dozens of high-energy-consumption devices which have complex physical dynamics. DSM, integrated with CPS, is an effective methodology for solving energy optimization problems in the manufacturing center. This paper presents a prediction-based manufacturing center self-adaptive energy optimization method for demand side management in cyber physical systems. To gain prior knowledge of DSM operating results, a sparse Bayesian learning based componential forecasting method is introduced to predict 24-hour electric load levels for specific industrial areas in China. From these data, a pricing strategy is designed based on the short-term load forecasting results. To minimize total energy costs while guaranteeing manufacturing center service quality, an adaptive demand side energy optimization algorithm is presented. The proposed scheme is tested in a machining center energy optimization experiment. An AMI sensing system is then used to measure the demand side energy consumption of the manufacturing center. Based on the data collected from the sensing system, the load prediction-based energy optimization scheme is implemented. By employing both the PSO and the CPSO method, the problem of DSM in the manufacturing center is solved. The results of the experiment show that the self-adaptive CPSO energy optimization method improves optimization by 5% compared with the traditional PSO optimization method.
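
    For reference, a bare-bones PSO loop in Python of the kind such a scheme builds on; the CPSO variant used in the paper adds mechanisms that this sketch does not reproduce, and the cost function stands in for the demand side energy cost.

      import numpy as np

      def pso(cost, dim, n=30, iters=100, w=0.7, c1=1.5, c2=1.5):
          rng = np.random.default_rng(0)
          x = rng.uniform(-1.0, 1.0, (n, dim))      # particle positions
          v = np.zeros((n, dim))                    # particle velocities
          pbest, pcost = x.copy(), np.array([cost(p) for p in x])
          for _ in range(iters):
              g = pbest[np.argmin(pcost)]           # global best so far
              r1, r2 = rng.random((2, n, dim))
              v = w*v + c1*r1*(pbest - x) + c2*r2*(g - x)
              x = x + v
              c = np.array([cost(p) for p in x])
              better = c < pcost
              pbest[better], pcost[better] = x[better], c[better]
          return pbest[np.argmin(pcost)]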

  2. Age-Related Changes in Predictive Capacity Versus Internal Model Adaptability: Electrophysiological Evidence that Individual Differences Outweigh Effects of Age

    PubMed Central

    Bornkessel-Schlesewsky, Ina; Philipp, Markus; Alday, Phillip M.; Kretzschmar, Franziska; Grewe, Tanja; Gumpert, Maike; Schumacher, Petra B.; Schlesewsky, Matthias

    2015-01-01

    Hierarchical predictive coding has been identified as a possible unifying principle of brain function, and recent work in cognitive neuroscience has examined how it may be affected by age-related changes. Using language comprehension as a test case, the present study aimed to dissociate age-related changes in prediction generation versus internal model adaptation following a prediction error. Event-related brain potentials (ERPs) were measured in a group of older adults (60–81 years; n = 40) as they read sentences of the form "The opposite of black is white/yellow/nice." Replicating previous work in young adults, results showed a target-related P300 for the expected antonym ("white"; an effect assumed to reflect a prediction match), and a graded N400 effect for the two incongruous conditions (i.e., a larger N400 amplitude for the incongruous continuation not related to the expected antonym, "nice," versus the incongruous associated condition, "yellow"). These effects were followed by a late positivity, again with a larger amplitude in the incongruous non-associated versus incongruous associated condition. Analyses using linear mixed-effects models showed that the target-related P300 effect and the N400 effect for the incongruous non-associated condition were both modulated by age, thus suggesting that age-related changes affect both prediction generation and model adaptation. However, effects of age were outweighed by the interindividual variability of ERP responses, as reflected in the high proportion of variance captured by the inclusion of by-condition random slopes for participants and items. We thus argue that, at both a neurophysiological and a functional level, the notion of general differences between language processing in young and older adults may only be of limited use, and that future research should seek to better understand the causes of interindividual variability in the ERP responses of older adults and its relation to

  3. Predicting adaptive phenotypes from multilocus genotypes in Sitka spruce (Picea sitchensis) using random forest.

    PubMed

    Holliday, Jason A; Wang, Tongli; Aitken, Sally

    2012-09-01

    Climate is the primary driver of the distribution of tree species worldwide, and the potential for adaptive evolution will be an important factor determining the response of forests to anthropogenic climate change. Although association mapping has the potential to improve our understanding of the genomic underpinnings of climatically relevant traits, the utility of adaptive polymorphisms uncovered by such studies would be greatly enhanced by the development of integrated models that account for the phenotypic effects of multiple single-nucleotide polymorphisms (SNPs) and their interactions simultaneously. We previously reported the results of association mapping in the widespread conifer Sitka spruce (Picea sitchensis). In the current study we used the recursive partitioning algorithm 'Random Forest' to identify optimized combinations of SNPs to predict adaptive phenotypes. After adjusting for population structure, we were able to explain 37% and 30% of the phenotypic variation, respectively, in two locally adaptive traits--autumn budset timing and cold hardiness. For each trait, the leading five SNPs captured much of the phenotypic variation. To determine the role of epistasis in shaping these phenotypes, we also used a novel approach to quantify the strength and direction of pairwise interactions between SNPs and found such interactions to be common. Our results demonstrate the power of Random Forest to identify subsets of markers that are most important to climatic adaptation, and suggest that interactions among these loci may be widespread.
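
    The modeling step can be approximated with an off-the-shelf random forest. The sketch below uses synthetic genotypes and a made-up phenotype purely to show the workflow (fit, out-of-bag accuracy, importance ranking); it is not the study's data, settings, or population structure correction.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(1)
      X = rng.integers(0, 3, size=(200, 500))   # SNP genotypes coded 0/1/2
      y = 0.6*X[:, 0] + 0.4*X[:, 1] + rng.normal(0, 1, 200)  # toy phenotype

      rf = RandomForestRegressor(n_estimators=500, oob_score=True,
                                 random_state=1).fit(X, y)
      ranking = np.argsort(rf.feature_importances_)[::-1]
      print("out-of-bag R^2:", rf.oob_score_)
      print("top candidate SNPs:", ranking[:5])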

  4. A Predictive Model of Fragmentation using Adaptive Mesh Refinement and a Hierarchical Material Model

    SciTech Connect

    Koniges, A E; Masters, N D; Fisher, A C; Anderson, R W; Eder, D C; Benson, D; Kaiser, T B; Gunney, B T; Wang, P; Maddox, B R; Hansen, J F; Kalantar, D H; Dixit, P; Jarmakani, H; Meyers, M A

    2009-03-03

    Fragmentation is a fundamental material process that naturally spans spatial scales from microscopic to macroscopic. We developed a mathematical framework using an innovative combination of hierarchical material modeling (HMM) and adaptive mesh refinement (AMR) to connect the continuum to microstructural regimes. This framework has been implemented in a new multi-physics, multi-scale, 3D simulation code, NIF ALE-AMR. New multi-material volume fraction and interface reconstruction algorithms were developed for this new code, which is leading the world effort in hydrodynamic simulations that combine AMR with ALE (Arbitrary Lagrangian-Eulerian) techniques. The interface reconstruction algorithm is also used to produce fragments following material failure. In general, the material strength and failure models have history vector components that must be advected along with other properties of the mesh during the remap stage of the ALE hydrodynamics. The fragmentation models are validated against an electromagnetically driven expanding ring experiment and dedicated laser-based fragmentation experiments conducted at the Jupiter Laser Facility. As part of the exit plan, the NIF ALE-AMR code was applied to a number of fragmentation problems of interest to the National Ignition Facility (NIF). One example shows the added benefit of multi-material ALE-AMR that relaxes the requirement that material boundaries must be along mesh boundaries.

  5. Fan Noise Prediction System Development: Source/Radiation Field Coupling and Workstation Conversion for the Acoustic Radiation Code

    NASA Technical Reports Server (NTRS)

    Meyer, H. D.

    1993-01-01

    The Acoustic Radiation Code (ARC) is a finite element program used on the IBM mainframe to predict far-field acoustic radiation from a turbofan engine inlet. In this report, requirements for developers of internal aerodynamic codes regarding use of their program output as input for the ARC are discussed. More specifically, the particular input needed from the Bolt, Beranek and Newman/Pratt and Whitney (turbofan source noise generation) Code (BBN/PWC) is described. In a separate analysis, a method of coupling the source and radiation models, one that recognizes waves crossing the interface in both directions, has been derived. A preliminary version of the coupled code has been developed and used for initial evaluation of coupling issues. Results thus far have shown that reflection from the inlet is sufficient to indicate that full coupling of the source and radiation fields is needed for accurate noise predictions. Also, for this contract, the ARC has been modified for use on the Sun and Silicon Graphics Iris UNIX workstations. Changes and additions involved in this effort are described in an appendix.

  6. The HART II International Workshop: An Assessment of the State-of-the-Art in Comprehensive Code Prediction

    NASA Technical Reports Server (NTRS)

    vanderWall, Berend G.; Lim, Joon W.; Smith, Marilyn J.; Jung, Sung N.; Bailly, Joelle; Baeder, James D.; Boyd, D. Douglas, Jr.

    2013-01-01

    Significant advancements in computational fluid dynamics (CFD) and their coupling with computational structural dynamics (CSD, or comprehensive codes) for rotorcraft applications have been achieved recently. Despite this, CSD codes with their engineering level of modeling the rotor blade dynamics, the unsteady sectional aerodynamics and the vortical wake are still the workhorse for the majority of applications. This is especially true when a large number of parameter variations is to be performed and their impact on performance, structural loads, vibration and noise is to be judged in an approximate yet reliable and as accurate as possible manner. In this article, the capabilities of such codes are evaluated using the HART II International Workshop database, focusing on a typical descent operating condition which includes strong blade-vortex interactions. A companion article addresses the CFD/CSD coupled approach. Three cases are of interest: the baseline case and two cases with 3/rev higher harmonic blade root pitch control (HHC) with different control phases employed. One setting is for minimum blade-vortex interaction noise radiation and the other one for minimum vibration generation. The challenge is to correctly predict the wake physics, especially for the cases with HHC, and all the dynamics, aerodynamics, modifications of the wake structure and the aero-acoustics coming with it. It is observed that the comprehensive codes used today have a surprisingly good predictive capability when they appropriately account for all of the physics involved. The minimum requirements to obtain these results are outlined.

  7. An Assessment of Comprehensive Code Prediction State-of-the-Art Using the HART II International Workshop Data

    NASA Technical Reports Server (NTRS)

    vanderWall, Berend G.; Lim, Joon W.; Smith, Marilyn J.; Jung, Sung N.; Bailly, Joelle; Baeder, James D.; Boyd, D. Douglas, Jr.

    2012-01-01

    Despite significant advancements in computational fluid dynamics and their coupling with computational structural dynamics (= CSD, or comprehensive codes) for rotorcraft applications, CSD codes with their engineering level of modeling the rotor blade dynamics, the unsteady sectional aerodynamics and the vortical wake are still the workhorse for the majority of applications. This is especially true when a large number of parameter variations is to be performed and their impact on performance, structural loads, vibration and noise is to be judged in an approximate yet reliable and as accurate as possible manner. In this paper, the capabilities of such codes are evaluated using the HART II International Workshop database, focusing on a typical descent operating condition which includes strong blade-vortex interactions. Three cases are of interest: the baseline case and two cases with 3/rev higher harmonic blade root pitch control (HHC) with different control phases employed. One setting is for minimum blade-vortex interaction noise radiation and the other one for minimum vibration generation. The challenge is to correctly predict the wake physics, especially for the cases with HHC, and all the dynamics, aerodynamics, modifications of the wake structure and the aero-acoustics coming with it. It is observed that the comprehensive codes used today have a surprisingly good predictive capability when they appropriately account for all of the physics involved. The minimum requirements to obtain these results are outlined.

  8. Hyperbolic Space Sparse Coding with Its Application on Prediction of Alzheimer’s Disease in Mild Cognitive Impairment

    PubMed Central

    Zhang, Jie; Shi, Jie; Stonnington, Cynthia; Li, Qingyang; Gutman, Boris A.; Chen, Kewei; Reiman, Eric M.; Caselli, Richard J.; Thompson, Paul M.; Ye, Jieping; Wang, Yalin

    2016-01-01

    Mild Cognitive Impairment (MCI) is a transitional stage between normal age-related cognitive decline and Alzheimer’s disease (AD). Here we introduce a hyperbolic space sparse coding method to predict impending decline of MCI patients to dementia using surface measures of ventricular enlargement. First, we compute diffeomorphic mappings between ventricular surfaces using a canonical hyperbolic parameter space with consistent boundary conditions, and surface tensor-based morphometry (TBM) is computed to measure local surface deformations. Second, ring-shaped patches of TBM features are selected according to the geometric structure of the hyperbolic parameter space to initialize a dictionary. Sparse coding is then applied on the patch features to learn sparse codes and update the dictionary. Finally, we adopt max-pooling to reduce the feature dimensions and apply Adaboost to predict AD in MCI patients (N = 133) from the Alzheimer’s Disease Neuroimaging Initiative baseline dataset. Our work achieved an accuracy rate of 96.7% and outperformed some other morphometry measures. The hyperbolic space sparse coding method may offer a more sensitive tool to study AD and its early symptoms. PMID:28066843
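
    A generic Euclidean sketch of the sparse coding and max-pooling steps, using plain ISTA for the lasso subproblem; the hyperbolic parameter space, ring-shaped patch selection, dictionary updates and AdaBoost stage of the paper are not reproduced.

      import numpy as np

      def sparse_pool(D, patches, lam=0.1, iters=100):
          # ISTA for min_A 0.5*||patches - D A||_F^2 + lam*||A||_1,
          # with dictionary D (dim x atoms), patches (dim x n_patches).
          L = np.linalg.norm(D, 2) ** 2         # Lipschitz constant
          A = np.zeros((D.shape[1], patches.shape[1]))
          for _ in range(iters):
              A -= D.T @ (D @ A - patches) / L  # gradient step
              A = np.sign(A) * np.maximum(np.abs(A) - lam / L, 0.0)
          return A.max(axis=1)  # max-pool codes into one feature vector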

  9. A Temporal Predictive Code for Voice Motor Control: Evidence from ERP and Behavioral Responses to Pitch-shifted Auditory Feedback

    PubMed Central

    Behroozmand, Roozbeh; Sangtian, Stacey; Korzyukov, Oleg; Larson, Charles R.

    2016-01-01

    The predictive coding model suggests that voice motor control is regulated by a process in which the mismatch (error) between feedforward predictions and sensory feedback is detected and used to correct vocal motor behavior. In this study, we investigated how predictions about the timing of pitch perturbations in voice auditory feedback would modulate ERP and behavioral responses during vocal production. We designed six counterbalanced blocks in which a +100 cents pitch-shift stimulus perturbed voice auditory feedback during vowel sound vocalizations. In three blocks, there was a fixed delay (500, 750 or 1000 ms) between voice and pitch-shift stimulus onset (predictable), whereas in the other three blocks, stimulus onset delay was randomized between 500, 750 and 1000 ms (unpredictable). We found that subjects produced compensatory (opposing) vocal responses that started 80 ms after the onset of the unpredictable stimuli. However, for predictable stimuli, subjects initiated vocal responses 20 ms before stimulus onset and followed the direction of pitch shifts in voice feedback. Analysis of ERPs showed that the amplitudes of the N1 and P2 components were significantly reduced in response to predictable compared with unpredictable stimuli. These findings indicate that predictions about temporal features of sensory feedback can modulate vocal motor behavior. In the context of the predictive coding model, temporally-predictable stimuli are learned and reinforced by the internal feedforward system, and as indexed by the ERP suppression, the sensory feedback contribution is reduced for their processing. These findings provide new insights into the neural mechanisms of vocal production and motor control. PMID:26835556

  10. A temporal predictive code for voice motor control: Evidence from ERP and behavioral responses to pitch-shifted auditory feedback.

    PubMed

    Behroozmand, Roozbeh; Sangtian, Stacey; Korzyukov, Oleg; Larson, Charles R

    2016-04-01

    The predictive coding model suggests that voice motor control is regulated by a process in which the mismatch (error) between feedforward predictions and sensory feedback is detected and used to correct vocal motor behavior. In this study, we investigated how predictions about the timing of pitch perturbations in voice auditory feedback would modulate ERP and behavioral responses during vocal production. We designed six counterbalanced blocks in which a +100 cents pitch-shift stimulus perturbed voice auditory feedback during vowel sound vocalizations. In three blocks, there was a fixed delay (500, 750 or 1000 ms) between voice and pitch-shift stimulus onset (predictable), whereas in the other three blocks, stimulus onset delay was randomized between 500, 750 and 1000 ms (unpredictable). We found that subjects produced compensatory (opposing) vocal responses that started 80 ms after the onset of the unpredictable stimuli. However, for predictable stimuli, subjects initiated vocal responses 20 ms before stimulus onset and followed the direction of pitch shifts in voice feedback. Analysis of ERPs showed that the amplitudes of the N1 and P2 components were significantly reduced in response to predictable compared with unpredictable stimuli. These findings indicate that predictions about temporal features of sensory feedback can modulate vocal motor behavior. In the context of the predictive coding model, temporally-predictable stimuli are learned and reinforced by the internal feedforward system, and as indexed by the ERP suppression, the sensory feedback contribution is reduced for their processing. These findings provide new insights into the neural mechanisms of vocal production and motor control.

  11. A Comparison of the Predictive Capabilities of Several Turbulence Models using Upwind and Central-Difference Computer Codes

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Vatsa, Veer N.

    1993-01-01

    Four turbulence models are described and evaluated for transonic flows using the upwind code CFL3D and the central-difference code TLNS3D. In particular, the effects of recent modifications to the half-equation model of Johnson-King are explored in detail, and different versions of the model are compared. This model can obtain good results for both two-dimensional (2D) and three-dimensional (3D) separated flows. The one-equation models of Baldwin-Barth and Spalart-Allmaras perform well for separated airfoil flows, but can predict the shock too far forward at the outboard stations of a separated wing. The equilibrium model of Baldwin-Lomax predicts the shock location too far aft for both 2D and 3D separated flows, as expected. In general, all models perform well for attached or mildly separated flows.

  12. A comparison of the predictive capabilities of several turbulence models using upwind and central-difference computer codes

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Vatsa, Veer N.

    1993-01-01

    Four turbulence models are described and evaluated for transonic flows using the upwind code CFL3D and the central-difference code TLNS3D. In particular, the effects of recent modifications to the half-equation model of Johnson-King are explored in detail, and different versions of the model are compared. This model can obtain good results for both two-dimensional (2D) and three-dimensional (3D) separated flows. The one-equation models of Baldwin-Barth and Spalart-Allmaras perform well for separated airfoil flows, but can predict the shock too far forward at the outboard stations of a separated wing. The equilibrium model of Baldwin-Lomax predicts the shock location too far aft for both 2D and 3D separated flows, as expected. In general, all models perform well for attached or mildly separated flows.

  13. Prediction accuracy in estimating joint angle trajectories using a video posture coding method for sagittal lifting tasks.

    PubMed

    Chang, Chien-Chi; McGorry, Raymond W; Lin, Jia-Hua; Xu, Xu; Hsiang, Simon M

    2010-08-01

    This study investigated the prediction accuracy of a video posture coding method for lifting joint trajectory estimation. From three filming angles, the coder selected four key snapshots, identified joint angles and then a prediction program estimated the joint trajectories over the course of a lift. Results revealed a limited range of differences in joint angles (elbow, shoulder, hip, knee, ankle) between the manual coding method and the electromagnetic motion tracking system approach. Lifting range significantly affected estimate accuracy for all joints, and camcorder filming angle had a significant effect on all joints but the hip. Joint trajectory predictions were more accurate for knuckle-to-shoulder lifts than for floor-to-shoulder or floor-to-knuckle lifts, with average root mean square errors (RMSE) of 8.65 degrees, 11.15 degrees and 11.93 degrees, respectively. Accuracy was also greater for the filming angles orthogonal to the participant's sagittal plane (RMSE = 9.97 degrees) as compared to filming angles of 45 degrees (RMSE = 11.01 degrees) or 135 degrees (RMSE = 10.71 degrees). The effects of lifting speed and loading conditions were minimal. To further increase prediction accuracy, improved prediction algorithms and/or better posture matching methods should be investigated. STATEMENT OF RELEVANCE: Observation and classification of postures are common steps in risk assessment of manual materials handling tasks. The ability to accurately predict lifting patterns through video coding can provide ergonomists with greater resolution in characterising or assessing lifting tasks than evaluation based solely on sampling a single lifting posture event.

  14. Development of a Mathematical Code to Predict Thermal Degradation of Fuel and Deposit Formation in a Fuel System

    DTIC Science & Technology

    1990-09-01

    AD-A225 415; WRDC-TR-90-2084. …R. (1983) "Studies of the Mechanism of Turbine Fuel Instability," Colorado School of Mines, NASA CR-167963. Daniel, S. R. (1985) "Jet Fuel Instability

  15. Predictive Calculation of Neutral Beam Heating Plasmas in EAST Tokamak by NUBEAM Code for Certain Parameter Ranges

    NASA Astrophysics Data System (ADS)

    Ni, Qionglin; Fan, Tieshuan; Zhang, Xing; Zhang, Cheng; Ren, Qilong; Hu, Chundong

    2010-12-01

    A predictive calculation is carried out for neutral beam heating of fusion plasmas in EAST by using the NUBEAM code under certain plasma conditions. The calculated results are analyzed for different plasma parameters. Relations between major plasma parameters, such as density and temperature, are obtained, and key physical processes in the neutral beam heating, including beam power deposition, trapped fraction, heating efficiency, and power loss, are simulated. Other physical processes, such as current drive, toroidal rotation and neutron emission, are also discussed.

  16. The expression level of small non-coding RNAs derived from the first exon of protein-coding genes is predictive of cancer status.

    PubMed

    Zovoilis, Athanasios; Mungall, Andrew J; Moore, Richard; Varhol, Richard; Chu, Andy; Wong, Tina; Marra, Marco; Jones, Steven J M

    2014-04-01

    Small non-coding RNAs (smRNAs) are known to be significantly enriched near the transcriptional start sites of genes. However, the functional relevance of these smRNAs remains unclear, and they have not been associated with human disease. Within the cancer genome atlas project (TCGA), we have generated small RNA datasets for many tumor types. In prior cancer studies, these RNAs have been regarded as transcriptional "noise," due to their apparent chaotic distribution. In contrast, we demonstrate their striking potential to distinguish efficiently between cancer and normal tissues and classify patients with cancer to subgroups of distinct survival outcomes. This potential to predict cancer status is restricted to a subset of these smRNAs, which is encoded within the first exon of genes, highly enriched within CpG islands and negatively correlated with DNA methylation levels. Thus, our data show that genome-wide changes in the expression levels of small non-coding RNAs within first exons are associated with cancer.

  17. Code requirements document: MODFLOW 2. 1: A program for predicting moderator flow patterns

    SciTech Connect

    Peterson, P.F. (Dept. of Nuclear Engineering); Paik, I.K.

    1992-03-01

    Sudden changes in the temperature of flowing liquids can result in transient buoyancy forces which strongly impact the flow hydrodynamics via flow stratification. These effects have been studied for the case of potential flow of stratified liquids to line sinks, but not for moderator flow in SRS reactors. Standard codes, such as TRAC and COMMIX, do not have the capability to capture the stratification effect, due to strong numerical diffusion which smears away the hot/cold fluid interface. A related problem with standard codes is the inability to track plumes injected into the liquid flow, again due to numerical diffusion. The combined effects of buoyant stratification and plume dispersion have been identified as being important in operation of the Supplementary Safety System which injects neutron-poison ink into SRS reactors to provide safe shutdown in the event of safety rod failure. The MODFLOW code discussed here provides transient moderator flow pattern information with stratification effects, and tracks the location of ink plumes in the reactor. The code, written in Fortran, is compiled for Macintosh II computers, and includes subroutines for interactive control and graphical output. Removing the graphics capabilities, the code can also be compiled on other computers. With graphics, in addition to the capability to perform safety related computations, MODFLOW also provides an easy tool for becoming familiar with flow distributions in SRS reactors.

  18. Code requirements document: MODFLOW 2.1: A program for predicting moderator flow patterns

    SciTech Connect

    Peterson, P.F.; Paik, I.K.

    1992-03-01

    Sudden changes in the temperature of flowing liquids can result in transient buoyancy forces which strongly impact the flow hydrodynamics via flow stratification. These effects have been studied for the case of potential flow of stratified liquids to line sinks, but not for moderator flow in SRS reactors. Standard codes, such as TRAC and COMMIX, do not have the capability to capture the stratification effect, due to strong numerical diffusion which smears away the hot/cold fluid interface. A related problem with standard codes is the inability to track plumes injected into the liquid flow, again due to numerical diffusion. The combined effects of buoyant stratification and plume dispersion have been identified as being important in operation of the Supplementary Safety System which injects neutron-poison ink into SRS reactors to provide safe shutdown in the event of safety rod failure. The MODFLOW code discussed here provides transient moderator flow pattern information with stratification effects, and tracks the location of ink plumes in the reactor. The code, written in Fortran, is compiled for Macintosh II computers, and includes subroutines for interactive control and graphical output. Removing the graphics capabilities, the code can also be compiled on other computers. With graphics, in addition to the capability to perform safety related computations, MODFLOW also provides an easy tool for becoming familiar with flow distributions in SRS reactors.

  19. Thermal treatments of foods: a predictive general-purpose code for heat and mass transfer

    NASA Astrophysics Data System (ADS)

    Barba, Anna Angela

    2005-05-01

    Thermal treatments of foods require accurate processing protocols. In this context, mathematical modeling of heat and mass transfer can play an important role in the control and definition of the process parameters as well as in the design of processing systems. In this work a code able to simulate heat and mass transfer phenomena within solid bodies has been developed. The code has been written with the ability to describe different geometries and it can account for any kind of initial/boundary conditions. Transport phenomena within multi-layer bodies can be described, and time/position dependent material parameters can be implemented. Finally, the code has been validated by comparison with a problem for which the analytical solution is known, and by comparison with a differential scanning calorimetry signal that describes the heating treatment of a raw potato (Solanum tuberosum).
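
    As a minimal example of the kind of kernel such a code builds on, the Python sketch below advances 1-D heat conduction with an explicit finite-difference scheme (single layer, constant properties, fixed surface temperature; the actual code handles multi-layer bodies, general geometries and coupled mass transfer).

      import numpy as np

      def heat_1d(T0, alpha, dx, dt, steps, T_surface):
          # Explicit scheme for T_t = alpha * T_xx; stable for r <= 0.5.
          r = alpha * dt / dx**2
          assert r <= 0.5, "reduce dt or refine dx"
          T = np.asarray(T0, dtype=float).copy()
          for _ in range(steps):
              T[1:-1] += r * (T[2:] - 2*T[1:-1] + T[:-2])
              T[0] = T[-1] = T_surface  # boundary held at medium temperature
          return T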

  20. The Predictive Utility of Narcissism among Children and Adolescents: Evidence for a Distinction between Adaptive and Maladaptive Narcissism

    ERIC Educational Resources Information Center

    Barry, Christopher T.; Frick, Paul J.; Adler, Kristy K.; Grafeman, Sarah J.

    2007-01-01

    We examined the predictive utility of narcissism among a community sample of children and adolescents (N=98) longitudinally. Analyses focused on the differential utility between maladaptive and adaptive narcissism for predicting later delinquency. Maladaptive narcissism significantly predicted self-reported delinquency at one-, two-, and…

  1. Adaptability and Prediction of Anticipatory Muscular Activity Parameters to Different Movements in the Sitting Position.

    PubMed

    Chikh, Soufien; Watelain, Eric; Faupin, Arnaud; Pinti, Antonio; Jarraya, Mohamed; Garnier, Cyril

    2016-08-01

    Voluntary movement often causes postural perturbation that requires an anticipatory postural adjustment to minimize perturbation and increase the efficiency and coordination during execution. This systematic review focuses specifically on the relationship between the parameters of anticipatory muscular activities and movement finality in the sitting position among adults, to study the adaptability and predictability of anticipatory muscular activity parameters for different movements and conditions in the sitting position in adults. A systematic literature search was performed using PubMed, Science Direct, Web of Science, Springer-Link, Engineering Village, and EbscoHost. Inclusion and exclusion criteria were applied to retain the most rigorous and specific studies, yielding 76 articles. Seventeen articles were excluded at first reading, and after the application of inclusion and exclusion criteria, 23 were retained. In a sitting position, central nervous system activity precedes movement by diverse anticipatory muscular activities and shows the ability to adapt anticipatory muscular activity parameters to the movement direction, postural stability, or charge weight. In addition, these parameters could be adapted to the speed of execution, as found for the standing position. Parameters of anticipatory muscular activities (duration, order, and amplitude of muscle contractions constituting the anticipatory muscular activity) could be used as a predictive indicator of forthcoming movement. In addition, this systematic review may improve methodology in empirical studies and assistive technology for people with disabilities.

  2. Image set based face recognition using self-regularized non-negative coding and adaptive distance metric learning.

    PubMed

    Mian, Ajmal; Hu, Yiqun; Hartley, Richard; Owens, Robyn

    2013-12-01

    Simple nearest neighbor classification fails to exploit the additional information in image sets. We propose self-regularized nonnegative coding to define between set distance for robust face recognition. Set distance is measured between the nearest set points (samples) that can be approximated from their orthogonal basis vectors as well as from the set samples under the respective constraints of self-regularization and nonnegativity. Self-regularization constrains the orthogonal basis vectors to be similar to the approximated nearest point. The nonnegativity constraint ensures that each nearest point is approximated from a positive linear combination of the set samples. Both constraints are formulated as a single convex optimization problem and the accelerated proximal gradient method with linear-time Euclidean projection is adapted to efficiently find the optimal nearest points between two image sets. Using the nearest points between a query set and all the gallery sets as well as the active samples used to approximate them, we learn a more discriminative Mahalanobis distance for robust face recognition. The proposed algorithm works independently of the chosen features and has been tested on gray pixel values and local binary patterns. Experiments on three standard data sets show that the proposed method consistently outperforms existing state-of-the-art methods.

  3. Validation of the ASSERT subchannel code: Prediction of critical heat flux in standard and nonstandard CANDU bundle geometries

    SciTech Connect

    Carver, M.B.; Kiteley, J.C.; Zhou, R.Q.N.; Junop, S.V.; Rowe, D.S.

    1995-12-01

    The ASSERT code has been developed to address the three-dimensional computation of flow and phase distribution and fuel element surface temperatures within the horizontal subchannels of Canada uranium deuterium (CANDU) pressurized heavy water reactor fuel channels and to provide a detailed prediction of critical heat flux (CHF) distribution throughout the bundle. The ASSERT subchannel code has been validated extensively against a wide repertoire of experiments; its combination of three-dimensional prediction of local flow conditions with a comprehensive method of predicting CHF at these local conditions makes it a unique tool for predicting CHF for situations outside the existing experimental database. In particular, ASSERT is an appropriate tool to systematically investigate CHF under conditions of local geometric variations, such as pressure tube creep and fuel element strain. The numerical methodology used in ASSERT, the constitutive relationships incorporated, and the CHF assessment methodology are discussed. The evolutionary validation plan is also discussed and early validation exercises are summarized. More recent validation exercises in standard and nonstandard geometries are emphasized.

  4. Connectivity Reveals Sources of Predictive Coding Signals in Early Visual Cortex During Processing of Visual Optic Flow.

    PubMed

    Schindler, Andreas; Bartels, Andreas

    2016-05-24

    Superimposed on the visual feed-forward pathway, feedback connections convey higher level information to cortical areas lower in the hierarchy. A prominent framework for these connections is the theory of predictive coding where high-level areas send stimulus interpretations to lower level areas that compare them with sensory input. Along these lines, a growing body of neuroimaging studies shows that predictable stimuli lead to reduced blood oxygen level-dependent (BOLD) responses compared with matched nonpredictable counterparts, especially in early visual cortex (EVC) including areas V1-V3. The sources of these modulatory feedback signals are largely unknown. Here, we re-examined the robust finding of relative BOLD suppression in EVC evident during processing of coherent compared with random motion. Using functional connectivity analysis, we show an optic flow-dependent increase of functional connectivity between BOLD suppressed EVC and a network of visual motion areas including MST, V3A, V6, the cingulate sulcus visual area (CSv), and precuneus (Pc). Connectivity decreased between EVC and 2 areas known to encode heading direction: entorhinal cortex (EC) and retrosplenial cortex (RSC). Our results provide the first evidence that BOLD suppression in EVC for predictable stimuli is indeed mediated by specific high-level areas, in accord with the theory of predictive coding.

  5. The neural code for taste in the nucleus of the solitary tract of the rat: effects of adaptation.

    PubMed

    Di Lorenzo, P M; Lemon, C H

    2000-01-10

    Adaptation of the tongue to NaCl, HCl, quinine or sucrose was used as a tool to study the stability and organization of response profiles in the nucleus of the solitary tract (NTS). Taste responses in the NTS were recorded in anesthetized rats before and after adaptation of the tongue to NaCl, HCl, sucrose or quinine. Results showed that the magnitude of response to test stimuli following adaptation was a function of the context, i.e., adaptation condition, in which the stimuli were presented. Over half of all taste responses were either attenuated or enhanced following the adaptation procedure: NaCl adaptation produced the most widespread, non-stimulus-selective cross-adaptation and sucrose adaptation produced the least frequent cross-adaptation and the most frequent enhancement of taste responses. Adaptation to quinine cross-adapted to sucrose and adaptation to HCl cross-adapted to quinine in over half of the units tested. The adaptation procedure sometimes unmasked taste responses where none were present beforehand and sometimes altered taste responses to test stimuli even though the adapting stimulus did not itself produce a response. These effects demonstrated a form of context-dependency of taste responsiveness in the NTS and further suggest a broad potentiality in the sensitivity of NTS units across taste stimuli. Across unit patterns of response remained distinct from each other under all adaptation conditions. Discriminability of these patterns may provide a neurophysiological basis for residual psychophysical abilities following adaptation.

  6. A new code for predicting the thermo-mechanical and irradiation behavior of metallic fuels in sodium fast reactors

    NASA Astrophysics Data System (ADS)

    Karahan, Aydın; Buongiorno, Jacopo

    2010-01-01

    An engineering code to predict the irradiation behavior of U-Zr and U-Pu-Zr metallic alloy fuel pins and UO2-PuO2 mixed oxide fuel pins in sodium-cooled fast reactors was developed. The code was named Fuel Engineering and Structural analysis Tool (FEAST). FEAST has several modules working in coupled form with an explicit numerical algorithm. These modules describe fission gas release and fuel swelling, fuel chemistry and restructuring, temperature distribution, fuel-clad chemical interaction, and fuel and clad mechanical analysis including transient creep-fracture for the clad. Given the fuel pin geometry, composition and irradiation history, FEAST can analyze fuel and clad thermo-mechanical behavior at both steady-state and design-basis (non-disruptive) transient scenarios. FEAST was written in FORTRAN-90 and has a simple input file similar to that of the LWR fuel code FRAPCON. The metal-fuel version is called FEAST-METAL, and is described in this paper. The oxide-fuel version, FEAST-OXIDE is described in a companion paper. With respect to the old Argonne National Laboratory code LIFE-METAL and other same-generation codes, FEAST-METAL emphasizes more mechanistic, less empirical models, whenever available. Specifically, fission gas release and swelling are modeled with the GRSIS algorithm, which is based on detailed tracking of fission gas bubbles within the metal fuel. Migration of the fuel constituents is modeled by means of thermo-transport theory. Fuel-clad chemical interaction models based on precipitation kinetics were developed for steady-state operation and transients. Finally, a transient intergranular creep-fracture model for the clad, which tracks the nucleation and growth of the cavities at the grain boundaries, was developed for and implemented in the code. Reducing the empiricism in the constitutive models should make it more acceptable to extrapolate FEAST-METAL to new fuel compositions and higher burnup, as envisioned in advanced sodium reactors

  7. Incremental Validity of Personality Measures in Predicting Underwater Performance and Adaptation.

    PubMed

    Colodro, Joaquín; Garcés-de-Los-Fayos, Enrique J; López-García, Juan J; Colodro-Conde, Lucía

    2015-03-17

    Intelligence and personality traits are currently considered effective predictors of human behavior and job performance. However, there are few studies about their relevance in the underwater environment. Data from a sample of military personnel performing scuba diving courses were analyzed with regression techniques, testing the contribution of individual differences and ascertaining the incremental validity of personality in an environment with extreme psychophysical demands. The results confirmed the incremental validity of personality traits (ΔR² = .20, f² = .25) over the predictive contribution of general mental ability (ΔR² = .07, f² = .08) in divers' performance. Moreover, personality (R_L² = .34) also showed higher validity than general mental ability (R_L² = .09) in predicting underwater adaptation. The ROC curve indicated 86% of the maximum possible discrimination power for the prediction of underwater adaptation, AUC = .86, p < .001, 95% CI (.82-.90). These findings confirm the shift and reversal of incremental validity of dispositional traits in the underwater environment and the relevance of personality traits as predictors of an effective response to the changing circumstances of military scuba diving. They also may improve the understanding of the behavioral effects and psychophysiological complications of diving and can also provide guidance for psychological intervention and prevention of risk in this extreme environment.

  8. Output-Adaptive Tetrahedral Cut-Cell Validation for Sonic Boom Prediction

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Darmofal, David L.

    2008-01-01

    A cut-cell approach to Computational Fluid Dynamics (CFD) that utilizes the median dual of a tetrahedral background grid is described. The discrete adjoint is also calculated, which permits adaptation based on improving the calculation of a specified output (off-body pressure signature) in supersonic inviscid flow. These predicted signatures are compared to wind tunnel measurements on and off the configuration centerline 10 body lengths below the model to validate the method for sonic boom prediction. Accurate mid-field sonic boom pressure signatures are calculated with the Euler equations without the use of hybrid grid or signature propagation methods. Highly-refined, shock-aligned anisotropic grids were produced by this method from coarse isotropic grids created without prior knowledge of shock locations. A heuristic reconstruction limiter provided stable flow and adjoint solution schemes while producing similar signatures to Barth-Jespersen and Venkatakrishnan limiters. The use of cut-cells with an output-based adaptive scheme completely automated this accurate prediction capability after a triangular mesh is generated for the cut surface. This automation drastically reduces the manual intervention required by existing methods.

  9. Protein Secondary Structure Prediction Using Local Adaptive Techniques in Training Neural Networks

    NASA Astrophysics Data System (ADS)

    Aik, Lim Eng; Zainuddin, Zarita; Joseph, Annie

    2008-01-01

    One of the most significant problems in computational molecular biology today is how to predict a protein's three-dimensional structure from its one-dimensional amino acid sequence, generally called the protein folding problem, which makes it difficult to determine the corresponding protein functions. Thus, this paper addresses protein secondary structure prediction using a neural network in order to help solve the protein folding problem. The neural network used for protein secondary structure prediction is a multilayer perceptron (MLP) of the feed-forward variety. The training set consists of 120 proteins taken from the protein data bank, while the testing set consists of 60 proteins chosen randomly from the protein data bank. Multiple sequence alignment (MSA) is used to obtain similar protein sequences, and a Position Specific Scoring Matrix (PSSM) is used for the network input. The training process of the neural network involves local adaptive techniques, comprising learning rate by sign changes, SuperSAB, Quickprop and RPROP. From the simulation, the performance of Rprop and Quickprop is superior to all other algorithms with respect to convergence time. However, the best result was obtained using the Rprop algorithm.
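
    A minimal sketch of one RPROP-style weight update in Python (a simplified variant; published RPROP versions differ in details such as backtracking on sign changes):

      import numpy as np

      def rprop_step(w, grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
                     step_min=1e-6, step_max=50.0):
          # Grow the per-weight step while the gradient keeps its sign,
          # shrink it on a sign change; only the gradient's sign is used.
          s = grad * prev_grad
          step = np.where(s > 0, np.minimum(step * eta_plus, step_max),
                 np.where(s < 0, np.maximum(step * eta_minus, step_min), step))
          return w - np.sign(grad) * step, step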

  10. Prediction of dryout performance for boiling water reactor fuel assemblies based on subchannel analysis with the RINGS code

    SciTech Connect

    Knabe, P.; Wehle, F.

    1995-12-01

    A fuel assembly with a large critical power margin introduces flexibility into reload fuel management. Therefore, optimization of the bundle and spacer geometry to maximize the bundle critical power is an important design objective. With a view to reducing the extent of the complex full-scale tests usually carried out to determine the thermal-hydraulic characteristics of various assembly geometries, the subchannel analysis method was further developed with the Siemens RINGS code. The annular flow code predicts dryout power and dryout location by calculating the conditions at which the liquid film flow rate is reduced to zero, allowing for evaporation, droplet entrainment, and droplet deposition. Appropriate attention is paid to the modeling of spacer effects. Comparison with experimental data of 3 x 3 and 4 x 4 tests shows the capability of RINGS to predict the flow quality and mass flux in subchannels under typical boiling water reactor operating conditions. By using the RINGS code, experimental critical power data for 3 x 3, 4 x 4, 5 x 5, 7 x 7, 8 x 8, 9 x 9, and 10 x 10 fuel assemblies were successfully postcalculated.

  11. Lifting scheme-based method for joint coding of 3D stereo digital cinema with luminance correction and optimized prediction

    NASA Astrophysics Data System (ADS)

    Darazi, R.; Gouze, A.; Macq, B.

    2009-01-01

    Reproducing natural, real scenes as we see them in the real world every day is becoming more and more popular, and stereoscopic and multi-view techniques are used to this end. However, because more information must be displayed, supporting technologies such as digital compression are required to ensure the storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed. The original left and right images are jointly coded. The main idea is to optimally exploit the existing correlation between the two images. This is done by designing an efficient transform that reduces the redundancy in the stereo image pair, an approach inspired by the Lifting Scheme (LS). The novelty in our work is that the prediction step is replaced by a hybrid step consisting of disparity compensation followed by luminance correction and an optimized prediction step. The proposed scheme can be used for lossless and for lossy coding. Experimental results show improvement in terms of performance and complexity compared to recently proposed methods.
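
    To make the hybrid lifting step concrete, here is a small sketch of a disparity-compensated predictor with a per-block luminance (gain) correction for 2D grayscale views; the block size, search range, and multiplicative gain model are assumptions for illustration, not the paper's actual design:

        import numpy as np

        def hybrid_predict(left, right, max_disp=16, block=8):
            """Predict the right view from the left view: block-wise disparity
            search, then a multiplicative luminance correction per block.
            The lifting 'detail' band is the residual right - prediction."""
            pred = np.zeros(right.shape, dtype=float)
            h, w = right.shape
            for y in range(0, h, block):
                for x in range(0, w, block):
                    tgt = right[y:y+block, x:x+block].astype(float)
                    best, best_err = None, np.inf
                    for d in range(0, min(max_disp, x) + 1):
                        ref = left[y:y+block, x-d:x-d+block].astype(float)
                        if ref.shape != tgt.shape:
                            continue
                        gain = tgt.mean() / ref.mean() if ref.mean() else 1.0
                        err = np.sum((tgt - gain * ref) ** 2)
                        if err < best_err:
                            best_err, best = err, gain * ref
                    pred[y:y+block, x:x+block] = best
            return pred, right - pred

    In a real codec the per-block disparities and gains would themselves be coded so the decoder can reproduce the prediction exactly, which is what enables the lossless mode.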

  12. Heart Motion Prediction Based on Adaptive Estimation Algorithms for Robotic Assisted Beating Heart Surgery

    PubMed Central

    Tuna, E. Erdem; Franke, Timothy J.; Bebek, Özkan; Shiose, Akira; Fukamachi, Kiyotaka; Çavuşoğlu, M. Cenk

    2013-01-01

    Robotic assisted beating heart surgery aims to allow surgeons to operate on a beating heart without stabilizers as if the heart were stationary. The robot actively cancels heart motion by closely following a point of interest (POI) on the heart surface, a process called Active Relative Motion Canceling (ARMC). Due to the high bandwidth of the POI motion, it is necessary to supply the controller with an estimate of the immediate future of the POI motion over a prediction horizon in order to achieve sufficient tracking accuracy. In this paper, two least-squares based prediction algorithms, using an adaptive filter to generate future position estimates, are implemented and studied. The first method assumes a linear system relation between the consecutive samples in the prediction horizon. In contrast, the second method performs this parametrization independently for each point over the whole horizon. The effects of predictor parameters and variations in heart rate on tracking performance are studied with constant and varying heart rate data. The predictors are evaluated using a 3 degrees of freedom test-bed and prerecorded in-vivo motion data. Then, the one-step prediction and tracking performances of the presented approaches are compared with an Extended Kalman Filter predictor. Finally, the essential features of the proposed prediction algorithms are summarized. PMID:23976889
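
    The flavor of least-squares prediction used here can be sketched as a linear autoregressive model fit by ordinary least squares and rolled forward over the horizon; the model order, horizon, and the sinusoidal stand-in signal below are illustrative choices, not the paper's:

        import numpy as np

        def fit_ar(signal, order=6):
            """Least-squares fit of x[t] ~ a . x[t-order:t] (linear AR model)."""
            X = np.array([signal[i:i + order] for i in range(len(signal) - order)])
            y = signal[order:]
            coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
            return coeffs

        def predict_horizon(signal, coeffs, horizon=10):
            """Roll the fitted model forward to estimate the POI's near future."""
            buf = list(signal[-len(coeffs):])
            out = []
            for _ in range(horizon):
                nxt = float(np.dot(coeffs, buf))
                out.append(nxt)
                buf = buf[1:] + [nxt]
            return np.array(out)

        t = np.arange(0, 10, 0.01)
        poi = 1.2 * np.sin(2 * np.pi * 1.5 * t)   # crude stand-in for heart motion
        print(predict_horizon(poi, fit_ar(poi), horizon=5))

    An adaptive version would refit (or recursively update) the coefficients at every sample, which is what lets such a predictor track changes in heart rate.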

  13. Nonlinear model identification and adaptive model predictive control using neural networks.

    PubMed

    Akpan, Vincent A; Hassapis, George D

    2011-04-01

    This paper presents two new adaptive model predictive control algorithms, both consisting of an on-line process identification part and a predictive control part. Both parts are executed at each sampling instant. The predictive control part of the first algorithm is the Nonlinear Model Predictive Control strategy and the control part of the second algorithm is the Generalized Predictive Control strategy. In the identification parts of both algorithms, the process model is approximated by a series-parallel neural network structure which is trained by an adaptive recursive least squares (ARLS) method. The two control algorithms have been applied to: 1) the temperature control of a fluidized bed furnace reactor (FBFR) of a pilot plant and 2) the auto-pilot control of an F-16 aircraft. The training and validation data of the neural network are obtained from the open-loop simulation of the FBFR and the nonlinear F-16 aircraft models. The identification and control simulation results show that the first algorithm outperforms the second one at the expense of extra computation time.

  14. Model predictive control with constraints for a nonlinear adaptive cruise control vehicle model in transition manoeuvres

    NASA Astrophysics Data System (ADS)

    Ali, Zeeshan; Popov, Atanas A.; Charles, Guy

    2013-06-01

    A vehicle following control law, based on the model predictive control method, to perform transition manoeuvres (TMs) for a nonlinear adaptive cruise control (ACC) vehicle is presented in this paper. The TM controller ultimately establishes a steady-state following distance behind a preceding vehicle to avoid collision, taking into account acceleration limits, safe distance, and state constraints. The vehicle dynamics model is formulated in the continuous-time domain and captures the real dynamics of the sub-vehicle models for steady-state and transient operations. The ACC vehicle can execute the TM successfully and achieve a steady state in the presence of complex dynamics within the constraint boundaries.
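
    A toy version of such a constrained MPC step, with hypothetical gap-error dynamics and a simple acceleration bound standing in for the paper's vehicle model and constraint set, might look like this:

        import numpy as np
        from scipy.optimize import minimize

        DT, N = 0.1, 20                        # sample time [s], prediction horizon
        A = np.array([[1.0, DT], [0.0, 1.0]])  # state: [gap error, relative speed]
        B = np.array([0.5 * DT**2, DT])        # input: host vehicle acceleration
        A_MAX = 2.0                            # acceleration limit [m/s^2]

        def mpc_step(x0, q=np.diag([1.0, 0.5]), r=0.1):
            """Minimize a quadratic tracking cost over the horizon subject to
            input bounds; apply only the first move (receding horizon)."""
            def cost(u):
                x, c = x0.copy(), 0.0
                for uk in u:
                    x = A @ x + B * uk
                    c += x @ q @ x + r * uk**2
                return c
            res = minimize(cost, np.zeros(N), bounds=[(-A_MAX, A_MAX)] * N,
                           method="L-BFGS-B")
            return res.x[0]

        print(mpc_step(np.array([5.0, -1.0])))  # 5 m gap error, closing at 1 m/s

    Hard state constraints such as a minimum safe gap would enter as additional penalty terms or via a dedicated QP solver in a production design.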

  15. A general code to predict the drug release kinetics from different shaped matrices.

    PubMed

    Barba, Anna Angela; d'Amore, Matteo; Chirico, Serafina; Lamberti, Gaetano; Titomanlio, Giuseppe

    2009-02-15

    This work deals with the modeling of drug release from solid pharmaceutical systems (matrices) for oral delivery. Attention was paid to the behavior of matrices made of hydrogels and drug, and the modeling was devoted to reproducing all the relevant phenomena (water up-take, gel swelling, diffusivity increase, drug diffusion and polymer erosion). Thus, the transient mass balances (for both drug and water), with the proper initial and boundary conditions, were written, and a generalized numerical code was formulated; it is able to describe several geometries (slab, sphere, infinite and finite cylinders; this latter is handled by an approximation which reduces the 2D problem to a 1D scheme). The main phenomena observed in drug delivery from hydrogel-based matrices, i.e. polymer swelling and erosion, were taken into account. The code was validated by comparison with analytical solutions, available for some simplified situations, and then it was tested against experimental data taken from the literature.
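
    The core of such a code is the transient diffusion balance; a stripped-down sketch for the slab geometry (pure Fickian diffusion toward a perfect-sink surface, ignoring the swelling, erosion, and concentration-dependent diffusivity terms the paper includes) is:

        import numpy as np

        def slab_release(D=1e-10, L=1e-3, nx=60, t_end=3600.0):
            """Explicit FD solution of dc/dt = D d2c/dx2 in a half-slab with a
            perfect-sink surface; returns fractional drug release over time."""
            dx = L / (nx - 1)
            dt = 0.4 * dx**2 / D                 # explicit stability limit
            c = np.ones(nx)                      # uniform initial drug loading
            m0, released, times = c.sum(), [], []
            t = 0.0
            while t < t_end:
                lap = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
                c[1:-1] += dt * D * lap
                c[0] = c[1]                      # symmetry plane (no flux)
                c[-1] = 0.0                      # sink at the release surface
                t += dt
                times.append(t)
                released.append(1.0 - c.sum() / m0)
            return np.array(times), np.array(released)

    A generalized code of the kind described would swap the Laplacian stencil per geometry (slab, cylinder, sphere) and move the outer boundary to represent swelling and erosion.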

  16. APPLYING SPARSE CODING TO SURFACE MULTIVARIATE TENSOR-BASED MORPHOMETRY TO PREDICT FUTURE COGNITIVE DECLINE

    PubMed Central

    Zhang, Jie; Stonnington, Cynthia; Li, Qingyang; Shi, Jie; Bauer, Robert J.; Gutman, Boris A.; Chen, Kewei; Reiman, Eric M.; Thompson, Paul M.; Ye, Jieping; Wang, Yalin

    2016-01-01

    Alzheimer's disease (AD) is a progressive brain disease. Accurate diagnosis of AD and its prodromal stage, mild cognitive impairment, is crucial for clinical trial design. There is also growing interest in identifying brain imaging biomarkers that help evaluate AD risk presymptomatically. Here, we applied a recently developed multivariate tensor-based morphometry (mTBM) method to extract features from hippocampal surfaces, derived from anatomical brain MRI. For such surface-based features, the feature dimension is usually much larger than the number of subjects. We used dictionary learning and sparse coding to effectively reduce the feature dimension. With the new features, an Adaboost classifier was employed for binary group classification. In tests on publicly available data from the Alzheimer's Disease Neuroimaging Initiative, the new framework outperformed several standard imaging measures in classifying different stages of AD. The new approach combines the efficiency of sparse coding with the sensitivity of surface mTBM, and boosts classification performance. PMID:27499829
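
    The dimension-reduction-then-boost pipeline can be sketched with standard scikit-learn components; all array sizes and data below are synthetic placeholders for the mTBM surface features, and the hyperparameters are illustrative:

        import numpy as np
        from sklearn.decomposition import DictionaryLearning
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(80, 500))     # stand-in for surface mTBM features
        y = rng.integers(0, 2, size=80)    # stand-in binary diagnosis labels

        # Dictionary learning + sparse coding shrinks the feature dimension
        # well below the number of subjects before classification.
        dico = DictionaryLearning(n_components=20, alpha=1.0, max_iter=50,
                                  transform_algorithm="omp",
                                  transform_n_nonzero_coefs=5, random_state=0)
        Z = dico.fit_transform(X)          # sparse codes are the new features

        clf = AdaBoostClassifier(n_estimators=100, random_state=0)
        print(cross_val_score(clf, Z, y, cv=5).mean())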

  17. WINCOF-I code for prediction of fan compressor unit with water ingestion

    NASA Technical Reports Server (NTRS)

    Murthy, S. N. B.; Mullican, A.

    1990-01-01

    The PURDUE-WINCOF code, which provides a numerical method of obtaining the performance of a fan-compressor unit of a jet engine with water ingestion into the inlet, was modified to take into account: (1) the scoop factor, (2) the time required for the setting-in of a quasi-steady distribution of water, and (3) the heat and mass transfer processes over the time scale calculated under (2). The modified code, named WINCOF-I, was utilized to obtain the performance of a fan-compressor unit of a generic jet engine. The results illustrate the manner in which quasi-equilibrium conditions become established in the machine and how the ingested water is redistributed in the various stages in the form of a film on the casing wall, droplets across the span, and vapor due to mass transfer.

  18. RCS Predictions From a Method of Moments and a Finite-Element Code for Several Targets

    DTIC Science & Technology

    2010-07-01

    This report presents results of radar cross section (RCS) calculations for several interesting targets using a method-of-moments code and a finite-element code. … RCS simulation that require an exact code for solution. In this report, we compare RCS calculations with two very different codes. Keywords: radar cross section, method of moments, finite element, modeling.

  19. Image compression with embedded multiwavelet coding

    NASA Astrophysics Data System (ADS)

    Liang, Kai-Chieh; Li, Jin; Kuo, C.-C. Jay

    1996-03-01

    An embedded image coding scheme using the multiwavelet transform and inter-subband prediction is proposed in this research. The new proposed coding scheme consists of the following building components: GHM multiwavelet transform, prediction across subbands, successive approximation quantization, and adaptive binary arithmetic coding. Our major contribution is the introduction of a set of prediction rules to fully exploit the correlations between multiwavelet coefficients in different frequency bands. The performance of the proposed new method is comparable to that of state-of-the-art wavelet compression methods.
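
    A minimal sketch of the successive approximation (bit-plane) component, leaving out the inter-subband prediction and the adaptive binary arithmetic coder, could look like this; the pass count is arbitrary and refinement bits are omitted:

        import numpy as np

        def saq_passes(coeffs, n_passes=6):
            """Successive approximation quantization: per pass, record which
            coefficients exceed the current threshold (with their signs),
            then halve the threshold for the next pass."""
            t = 2.0 ** np.floor(np.log2(np.max(np.abs(coeffs))))
            planes = []
            for _ in range(n_passes):
                planes.append((t, np.sign(coeffs) * (np.abs(coeffs) >= t)))
                t /= 2.0
            return planes

        c = np.array([34.5, -12.0, 3.2, -0.7])
        for t, plane in saq_passes(c, 4):
            print(t, plane)

    Because each pass refines the previous one, the resulting bitstream can be truncated anywhere and still decode to a coarser image, which is what makes the coding embedded.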

  20. Predictive Fallout Composition Modeling: Improvements and Applications of the Defense Land Fallout Interpretive Code

    SciTech Connect

    Hooper, David A; Jodoin, Vincent J; Lee, Ronald W; Monterial, Mateusz

    2012-01-01

    This paper outlines several improvements to the Particle Activity Module of the Defense Land Fallout Interpretive Code (DELFIC). The modeling of each phase of the fallout process is discussed within DELFIC to demonstrate the capabilities and limitations of the code for modeling and simulation. Expansion of the DELFIC isotopic library to include actinides and light elements is shown. Several key features of the new library are demonstrated, including compliance with ENDF/B-VII standards, augmentation of hardwired activated soil and actinide decay calculations with exact Bateman calculations, and full physical and chemical fractionation of all material inventories. Improvements to the radionuclide source term are demonstrated, including the ability to specify heterogeneous fission types and the ability to import source terms from irradiation calculations using the Oak Ridge Isotope Generation (ORIGEN) code. Additionally, the dose, kerma, and effective dose conversion factors are revised. Finally, the application of DELFIC for consequence management planning and forensic analysis is presented. For consequence management, DELFIC is shown to provide disaster recovery teams with simulations of real-time events, including the location, composition, time of arrival, activity rates, and dose rates of fallout, accounting for site-specific atmospheric effects. The results from DELFIC are also demonstrated for use by nuclear forensics teams to plan collection routes (including the determination of optimal collection locations), estimate dose rates to collectors, and anticipate the composition of material at collection sites. These capabilities give mission planners the ability to maximize their effectiveness in the field while minimizing risk to their collectors.

  1. Remaining useful life prediction for an adaptive skew-Wiener process model

    NASA Astrophysics Data System (ADS)

    Huang, Zeyi; Xu, Zhengguo; Ke, Xiaojie; Wang, Wenhai; Sun, Youxian

    2017-03-01

    Predicting the remaining useful life for operational devices plays a critical role in prognostics and health management. As models based on stochastic processes are widely used for characterizing the degradation trajectory, an adaptive skew-Wiener model, which is much more flexible than traditional stochastic process models, is proposed to model the degradation drift of industrial devices. To make full use of prior knowledge and historical information, an on-line filtering algorithm is proposed for state estimation, and a two-stage algorithm is adopted to estimate the unknown parameters. For remaining useful life prediction, a novel result is presented with an explicit form based on the closed skew normal distribution. Finally, extensive Monte Carlo simulations and an application to ball bearings in rotating electrical machines are used to validate our approach.
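
    The remaining-useful-life idea can be illustrated with a plain (symmetric, not skew) Wiener degradation model, estimating RUL as the first-passage time of a failure threshold by Monte Carlo; all parameter values below are invented for the example:

        import numpy as np

        def rul_monte_carlo(x0, threshold, drift, sigma, dt=1.0,
                            n_paths=5000, t_max=2000.0, seed=0):
            """Estimate remaining useful life as the first-passage time of a
            drifted Wiener degradation path X(t) = x0 + drift*t + sigma*W(t)
            across the failure threshold."""
            rng = np.random.default_rng(seed)
            x = np.full(n_paths, float(x0))
            alive = np.ones(n_paths, dtype=bool)
            hit = np.full(n_paths, t_max)
            for k in range(1, int(t_max / dt) + 1):
                x[alive] += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(alive.sum())
                crossed = alive & (x >= threshold)
                hit[crossed] = k * dt
                alive &= ~crossed
                if not alive.any():
                    break
            return hit.mean(), np.quantile(hit, [0.05, 0.95])

        print(rul_monte_carlo(x0=0.0, threshold=10.0, drift=0.02, sigma=0.5))

    The paper's contribution is the skewed increment distribution and the explicit closed skew normal RUL result it admits; the simulation above is only the baseline those results generalize.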

  2. GeneValidator: identify problems with protein-coding gene predictions

    PubMed Central

    Drăgan, Monica-Andreea; Moghul, Ismail; Priyam, Anurag; Bustos, Claudio; Wurm, Yannick

    2016-01-01

    Summary: Genomes of emerging model organisms are now being sequenced at very low cost. However, obtaining accurate gene predictions remains challenging: even the best gene prediction algorithms make substantial errors and can jeopardize subsequent analyses. Therefore, many predicted genes must be visually inspected and manually curated, which is time-consuming. We developed GeneValidator (GV) to automatically identify problematic gene predictions and to aid manual curation. For each gene, GV performs multiple analyses based on comparisons to gene sequences from large databases. The resulting report identifies problematic gene predictions and includes extensive statistics and graphs for each prediction to guide manual curation efforts. GV thus accelerates and enhances the work of biocurators and researchers who need accurate gene predictions from newly sequenced genomes. Availability and implementation: GV can be used through a web interface or in the command-line. GV is open-source (AGPL), available at https://wurmlab.github.io/tools/genevalidator. Contact: y.wurm@qmul.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26787666

  3. Improved NASA-ANOPP Noise Prediction Computer Code for Advanced Subsonic Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Kontos, K. B.; Janardan, B. A.; Gliebe, P. R.

    1996-01-01

    Recent experience using ANOPP to predict turbofan engine flyover noise suggests that it over-predicts overall EPNL by a significant amount. An improvement in this prediction method is desired for system optimization and assessment studies of advanced UHB engines. An assessment of the ANOPP fan inlet, fan exhaust, jet, combustor, and turbine noise prediction methods is made using static engine component noise data from the CF6-80C2, E(3), and QCSEE turbofan engines. It is shown that the ANOPP prediction results are generally higher than the measured GE data, and that the inlet noise prediction method (Heidmann method) is the most significant source of this overprediction. Fan noise spectral comparisons show that improvements to the fan tone, broadband, and combination tone noise models are required to yield results that more closely simulate the GE data. Suggested changes that yield improved fan noise predictions but preserve the Heidmann model structure are identified and described. These changes are based on the sets of engine data mentioned, as well as some CFM56 engine data that was used to expand the combination tone noise database. It should be noted that the recommended changes are based on an analysis of engines that are limited to single-stage fans with design tip relative Mach numbers greater than one.

  4. An overview of the activities of the OECD/NEA Task Force on adapting computer codes in nuclear applications to parallel architectures

    SciTech Connect

    Kirk, B.L.; Sartori, E.

    1997-06-01

    Subsequent to the introduction of High Performance Computing in the developed countries, the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) created the Task Force on Adapting Computer Codes in Nuclear Applications to Parallel Architectures (under the guidance of the Nuclear Science Committee's Working Party on Advanced Computing) to study the growth area in supercomputing and its applicability to the nuclear community's computer codes. The result has been four years of investigation for the Task Force in different subject fields - deterministic and Monte Carlo radiation transport, computational mechanics and fluid dynamics, nuclear safety, atmospheric models and waste management.

  5. A NEW SEMI-EMPIRICAL AMBIENT TO EFFECTIVE DOSE CONVERSION MODEL FOR THE PREDICTIVE CODE FOR AIRCREW RADIATION EXPOSURE (PCAIRE).

    PubMed

    Dumouchel, T; McCall, M; Lemay, F; Bennett, L; Lewis, B; Bean, M

    2016-12-01

    The Predictive Code for Aircrew Radiation Exposure (PCAIRE) is a semi-empirical code that estimates both ambient dose equivalent, based on years of on-board measurements, and effective dose to aircrew. Currently, PCAIRE estimates effective dose by converting the ambient dose equivalent to effective dose (E/H) using a model that is based on radiation transport calculations and on the radiation weighting factors recommended in International Commission on Radiological Protection (ICRP) 60. In this study, a new semi-empirical E/H model is proposed to replace the existing transport calculation models. The new model is based on flight data measured using a tissue-equivalent proportional counter (TEPC). The measured flight TEPC data are separated into a low- and a high-lineal-energy spectrum using an amplitude-weighted (137)Cs TEPC spectrum. The high-lineal-energy spectrum is determined by subtracting the low-lineal-energy spectrum from the measured flight TEPC spectrum. With knowledge of E/H for the low- and high-lineal-energy spectra, the total E/H is estimated for a given flight altitude and geographic location. The semi-empirical E/H model also uses new radiation weighting factors to align the model with the most recent ICRP 103 recommendations. The ICRP 103-based semi-empirical effective dose model predicts that there is a ∼30 % reduction in dose in comparison with the ICRP 60-based model. Furthermore, the ambient dose equivalent is now a more conservative dose estimate for jet aircraft altitudes in the range of 7-13 km (FL230-430). This new semi-empirical E/H model is validated against E/H predicted from a Monte Carlo N-Particle transport code simulation of cosmic ray propagation through the Earth's atmosphere. Its implementation allows PCAIRE to provide an accurate semi-empirical estimate of the effective dose.
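
    Once the measured TEPC spectrum has been split into low- and high-lineal-energy components (by subtracting the amplitude-weighted 137Cs spectrum), the total conversion reduces to a dose-weighted sum; the function and numbers below are purely illustrative of that bookkeeping, not PCAIRE's actual coefficients:

        def total_effective_dose(h_low, h_high, eh_low, eh_high):
            """E = E/H(low) * H(low) + E/H(high) * H(high), where H is the
            ambient dose equivalent carried by each lineal-energy component."""
            return eh_low * h_low + eh_high * h_high

        # hypothetical numbers for one flight segment (uSv, dimensionless E/H)
        print(total_effective_dose(h_low=12.0, h_high=8.0, eh_low=0.6, eh_high=0.4))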

  6. Presence of Motor-Intentional Aiming Deficit Predicts Functional Improvement of Spatial Neglect with Prism Adaptation

    PubMed Central

    Goedert, Kelly M.; Chen, Peii; Boston, Raymond C.; Foundas, Anne L.; Barrett, A. M.

    2013-01-01

    Spatial neglect is a debilitating disorder for which there is no agreed upon course of rehabilitation. The lack of consensus on treatment may result from systematic differences in the syndromes' characteristics, with spatial cognitive deficits potentially affecting perceptual-attentional Where or motor-intentional Aiming spatial processing. Heterogeneity of response to treatment might be explained by different treatment impact on these dissociated deficits: prism adaptation, for example, might reduce Aiming deficits without affecting Where spatial deficits. Here, we tested the hypothesis that classifying patients by their profile of Where-vs-Aiming spatial deficit would predict response to prism adaptation, and specifically that patients with Aiming bias would have better recovery than those with isolated Where bias. We classified the spatial errors of 24 sub-acute right-stroke survivors with left spatial neglect as: 1) isolated Where bias, 2) isolated Aiming bias, or 3) both. Participants then completed two weeks of prism adaptation treatment. They also completed the Behavioral Inattention Test (BIT) and Catherine Bergego Scale (CBS) tests of neglect recovery weekly for six weeks. As hypothesized, participants with only Aiming deficits improved on the CBS, whereas those with only Where deficits did not improve. Participants with both deficits demonstrated intermediate improvement. These results support behavioral classification of spatial neglect patients as a potentially valuable tool for assigning targeted, effective early rehabilitation. PMID:24376064

  7. A Novel Model Predictive Control Formulation for Hybrid Systems With Application to Adaptive Behavioral Interventions

    PubMed Central

    Nandola, Naresh N.; Rivera, Daniel E.

    2010-01-01

    This paper presents a novel model predictive control (MPC) formulation for linear hybrid systems. The algorithm relies on a multiple-degree-of-freedom formulation that enables the user to adjust the speed of setpoint tracking, measured disturbance rejection and unmeasured disturbance rejection independently in the closed-loop system. Consequently, controller tuning is more flexible and intuitive than relying on move suppression weights as traditionally used in MPC schemes. The formulation is motivated by the need to achieve robust performance in using the algorithm in emerging applications, for instance, as a decision policy for adaptive, time-varying interventions used in behavioral health. The proposed algorithm is demonstrated on a hypothetical adaptive intervention problem inspired by the Fast Track program, a real-life preventive intervention for improving parental function and reducing conduct disorder in at-risk children. Simulation results in the presence of simultaneous disturbances and significant plant-model mismatch are presented. These demonstrate that a hybrid MPC-based approach for this class of interventions can be tuned for desired performance under demanding conditions that resemble participant variability that is experienced in practice when applying an adaptive intervention to a population. PMID:20830213

  8. Predictive analytics of environmental adaptability in multi-omic network models.

    PubMed

    Angione, Claudio; Lió, Pietro

    2015-10-20

    Bacterial phenotypic traits and lifestyles in response to diverse environmental conditions depend on changes in the internal molecular environment. However, predicting bacterial adaptability is still difficult outside of laboratory controlled conditions. Many molecular levels can contribute to the adaptation to a changing environment: pathway structure, codon usage, metabolism. To measure adaptability to changing environmental conditions and over time, we develop a multi-omic model of Escherichia coli that accounts for metabolism, gene expression and codon usage at both transcription and translation levels. After the integration of multiple omics into the model, we propose a multiobjective optimization algorithm to find the allowable and optimal metabolic phenotypes through concurrent maximization or minimization of multiple metabolic markers. In the condition space, we propose Pareto hypervolume and spectral analysis as estimators of short term multi-omic (transcriptomic and metabolic) evolution, thus enabling comparative analysis of metabolic conditions. We therefore compare, evaluate and cluster different experimental conditions, models and bacterial strains according to their metabolic response in a multidimensional objective space, rather than in the original space of microarray data. We finally validate our methods on a phenomics dataset of growth conditions. Our framework, named METRADE, is freely available as a MATLAB toolbox.

  9. Predicting organismal vulnerability to climate warming: roles of behaviour, physiology and adaptation

    PubMed Central

    Huey, Raymond B.; Kearney, Michael R.; Krockenberger, Andrew; Holtum, Joseph A. M.; Jess, Mellissa; Williams, Stephen E.

    2012-01-01

    A recently developed integrative framework proposes that the vulnerability of a species to environmental change depends on the species' exposure and sensitivity to environmental change, its resilience to perturbations and its potential to adapt to change. These vulnerability criteria require behavioural, physiological and genetic data. With this information in hand, biologists can predict organisms most at risk from environmental change. Biologists and managers can then target organisms and habitats most at risk. Unfortunately, the required data (e.g. optimal physiological temperatures) are rarely available. Here, we evaluate the reliability of potential proxies (e.g. critical temperatures) that are often available for some groups. Several proxies for ectotherms are promising, but analogous ones for endotherms are lacking. We also develop a simple graphical model of how behavioural thermoregulation, acclimation and adaptation may interact to influence vulnerability over time. After considering this model together with the proxies available for physiological sensitivity to climate change, we conclude that ectotherms sharing vulnerability traits seem concentrated in lowland tropical forests. Their vulnerability may be exacerbated by negative biotic interactions. Whether tropical forest (or other) species can adapt to warming environments is unclear, as genetic and selective data are scant. Nevertheless, the prospects for tropical forest ectotherms appear grim. PMID:22566674

  10. Small Engine Technology (SET) Task 23 ANOPP Noise Prediction for Small Engines, Wing Reflection Code

    NASA Technical Reports Server (NTRS)

    Lieber, Lysbeth; Brown, Daniel; Golub, Robert A. (Technical Monitor)

    2000-01-01

    The work performed under Task 23 consisted of the development and demonstration of improvements for the NASA Aircraft Noise Prediction Program (ANOPP), specifically targeted to the modeling of engine noise enhancement due to wing reflection. This report focuses on development of the model and procedure to predict the effects of wing reflection, and the demonstration of the procedure, using a representative wing/engine configuration.

  11. Predicting coral bleaching hotspots: the role of regional variability in thermal stress and potential adaptation rates

    NASA Astrophysics Data System (ADS)

    Teneva, Lida; Karnauskas, Mandy; Logan, Cheryl A.; Bianucci, Laura; Currie, Jock C.; Kleypas, Joan A.

    2012-03-01

    Sea surface temperature fields (1870-2100) forced by CO2-induced climate change under the IPCC SRES A1B CO2 scenario, from three World Climate Research Programme Coupled Model Intercomparison Project Phase 3 (WCRP CMIP3) models (CCSM3, CSIRO MK 3.5, and GFDL CM 2.1), were used to examine how coral sensitivity to thermal stress and rates of adaptation affect global projections of coral-reef bleaching. The focus of this study was two-fold: (1) to assess how the choice of Degree-Heating-Month (DHM) thermal stress threshold affects potential bleaching predictions and (2) to examine the effect of hypothetical adaptation rates of corals to rising temperature. DHM values were estimated using a conventional threshold of 1°C and a variability-based threshold of 2σ above the climatological maximum. Coral adaptation rates were simulated as a function of historical 100-year exposure to maximum annual SSTs, with a dynamic rather than static climatological maximum based on the previous 100 years for a given reef cell. Within CCSM3 simulations, the 1°C threshold predicted later onset of mild bleaching every 5 years for the fraction of reef grid cells where 1°C > 2σ of the climatology time series of annual SST maxima (1961-1990). Alternatively, DHM values using both thresholds, with CSIRO MK 3.5 and GFDL CM 2.1 SSTs, did not produce drastically different onset timing for bleaching every 5 years. Across models, DHMs based on the 1°C thermal stress threshold show the most threatened reefs by 2100 could be in the Central and Western Equatorial Pacific, whereas use of the variability-based threshold for DHMs yields the Coral Triangle and parts of Micronesia and Melanesia as bleaching hotspots. Simulations that allow corals to adapt to increases in maximum SST drastically reduce the rates of bleaching. These findings highlight the importance of considering the thermal stress threshold in DHM estimates as well as potential adaptation models in future coral bleaching projections.
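
    A minimal sketch of the two DHM variants (accumulated exceedance of a threshold over a rolling window of months) might read as follows; the 4-month window, input layout, and all numbers are assumptions for illustration:

        import numpy as np

        def degree_heating_months(sst, clim_max, clim_sigma, window=4):
            """Rolling accumulation of SST excess over two alternative
            thresholds: climatological max + 1 degC, or + 2 sigma of the
            annual-maximum variability."""
            out = {}
            for name, thresh in (("1degC", clim_max + 1.0),
                                 ("2sigma", clim_max + 2.0 * clim_sigma)):
                excess = np.clip(sst - thresh, 0.0, None)
                out[name] = np.convolve(excess, np.ones(window), mode="valid")
            return out

        sst = 27.0 + 1.5 * np.sin(np.linspace(0, 6 * np.pi, 36)) + 0.02 * np.arange(36)
        dhm = degree_heating_months(sst, clim_max=28.0, clim_sigma=0.4)
        print(dhm["1degC"].max(), dhm["2sigma"].max())

    Where the local variability 2σ exceeds 1°C, the variability-based threshold is the more lenient of the two, which is exactly why the two choices flag different regions as hotspots.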

  12. The biology of developmental plasticity and the Predictive Adaptive Response hypothesis.

    PubMed

    Bateson, Patrick; Gluckman, Peter; Hanson, Mark

    2014-06-01

    Many forms of developmental plasticity have been observed and these are usually beneficial to the organism. The Predictive Adaptive Response (PAR) hypothesis refers to a form of developmental plasticity in which cues received in early life influence the development of a phenotype that is normally adapted to the environmental conditions of later life. When the predicted and actual environments differ, the mismatch between the individual's phenotype and the conditions in which it finds itself can have adverse consequences for Darwinian fitness and, later, for health. Numerous examples exist of the long-term effects of cues indicating a threatening environment affecting the subsequent phenotype of the individual organism. Other examples consist of the long-term effects of variations in environment within a normal range, particularly in the individual's nutritional environment. In mammals the cues to developing offspring are often provided by the mother's plane of nutrition, her body composition or stress levels. This hypothetical effect in humans is thought to be important by some scientists and controversial by others. In resolving the conflict, distinctions should be drawn between PARs induced by normative variations in the developmental environment and the ill effects on development of extremes in environment such as a very poor or very rich nutritional environment. Tests to distinguish between different developmental processes impacting on adult characteristics are proposed. Many of the mechanisms underlying developmental plasticity involve molecular epigenetic processes, and their elucidation in the context of PARs and more widely has implications for the revision of classical evolutionary theory.

  13. The biology of developmental plasticity and the Predictive Adaptive Response hypothesis

    PubMed Central

    Bateson, Patrick; Gluckman, Peter; Hanson, Mark

    2014-01-01

    Many forms of developmental plasticity have been observed and these are usually beneficial to the organism. The Predictive Adaptive Response (PAR) hypothesis refers to a form of developmental plasticity in which cues received in early life influence the development of a phenotype that is normally adapted to the environmental conditions of later life. When the predicted and actual environments differ, the mismatch between the individual's phenotype and the conditions in which it finds itself can have adverse consequences for Darwinian fitness and, later, for health. Numerous examples exist of the long-term effects of cues indicating a threatening environment affecting the subsequent phenotype of the individual organism. Other examples consist of the long-term effects of variations in environment within a normal range, particularly in the individual's nutritional environment. In mammals the cues to developing offspring are often provided by the mother's plane of nutrition, her body composition or stress levels. This hypothetical effect in humans is thought to be important by some scientists and controversial by others. In resolving the conflict, distinctions should be drawn between PARs induced by normative variations in the developmental environment and the ill effects on development of extremes in environment such as a very poor or very rich nutritional environment. Tests to distinguish between different developmental processes impacting on adult characteristics are proposed. Many of the mechanisms underlying developmental plasticity involve molecular epigenetic processes, and their elucidation in the context of PARs and more widely has implications for the revision of classical evolutionary theory. PMID:24882817

  14. Adapting Predictive Models for Cepheid Variable Star Classification Using Linear Regression and Maximum Likelihood

    NASA Astrophysics Data System (ADS)

    Gupta, Kinjal Dhar; Vilalta, Ricardo; Asadourian, Vicken; Macri, Lucas

    2014-05-01

    We describe an approach to automate the classification of Cepheid variable stars into two subtypes according to their pulsation mode. Automating such classification is relevant to obtain a precise determination of distances to nearby galaxies, which in addition helps reduce the uncertainty in the current expansion of the universe. One main difficulty lies in the compatibility of models trained using different galaxy datasets; a model trained using a training dataset may be ineffectual on a testing set. A solution to such difficulty is to adapt predictive models across domains; this is necessary when the training and testing sets do not follow the same distribution. The gist of our methodology is to train a predictive model on a nearby galaxy (e.g., Large Magellanic Cloud), followed by a model-adaptation step to make the model operable on other nearby galaxies. We follow a parametric approach to density estimation by modeling the training data (anchor galaxy) using a mixture of linear models. We then use maximum likelihood to compute the right amount of variable displacement, until the testing data closely overlaps the training data. At that point, the model can be directly used in the testing data (target galaxy).
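
    The displacement search can be illustrated in one dimension with a single Gaussian standing in for the paper's mixture of linear models: the maximum-likelihood shift is the one that best overlaps the test data onto the density fitted to the training (anchor-galaxy) data. All names and data here are synthetic:

        import numpy as np
        from scipy.optimize import minimize_scalar
        from scipy.stats import norm

        def best_shift(train, test):
            """Maximum-likelihood displacement of the test sample onto a
            Gaussian fitted to the training sample."""
            mu, sd = train.mean(), train.std()
            nll = lambda d: -norm.logpdf(test - d, mu, sd).sum()
            return minimize_scalar(nll).x

        rng = np.random.default_rng(0)
        train = rng.normal(5.0, 1.0, 400)   # anchor galaxy feature
        test = rng.normal(7.0, 1.0, 300)    # target galaxy, shifted distribution
        print(best_shift(train, test))      # recovers a shift near 2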

  15. Self-Adaptive MOEA Feature Selection for Classification of Bankruptcy Prediction Data

    PubMed Central

    Gaspar-Cunha, A.; Recio, G.; Costa, L.; Estébanez, C.

    2014-01-01

    Bankruptcy prediction is a vast area of finance and accounting whose importance lies in its relevance for creditors and investors in evaluating the likelihood of a company going bankrupt. As companies become complex, they develop sophisticated schemes to hide their real situation. In turn, estimating the credit risks associated with counterparts or predicting bankruptcy becomes harder. Evolutionary algorithms have been shown to be an excellent tool to deal with complex problems in finance and economics where a large number of irrelevant features are involved. This paper provides a methodology for feature selection in the classification of bankruptcy data sets using an evolutionary multiobjective approach that simultaneously minimises the number of features and maximises the classifier quality measure (e.g., accuracy). The proposed methodology makes use of self-adaptation by applying the feature selection algorithm while simultaneously optimising the parameters of the classifier used. The methodology was applied to four different sets of data. The obtained results showed the utility of using the self-adaptation of the classifier. PMID:24707201

  16. Self-adaptive MOEA feature selection for classification of bankruptcy prediction data.

    PubMed

    Gaspar-Cunha, A; Recio, G; Costa, L; Estébanez, C

    2014-01-01

    Bankruptcy prediction is a vast area of finance and accounting whose importance lies in its relevance for creditors and investors in evaluating the likelihood of a company going bankrupt. As companies become complex, they develop sophisticated schemes to hide their real situation. In turn, estimating the credit risks associated with counterparts or predicting bankruptcy becomes harder. Evolutionary algorithms have been shown to be an excellent tool to deal with complex problems in finance and economics where a large number of irrelevant features are involved. This paper provides a methodology for feature selection in the classification of bankruptcy data sets using an evolutionary multiobjective approach that simultaneously minimises the number of features and maximises the classifier quality measure (e.g., accuracy). The proposed methodology makes use of self-adaptation by applying the feature selection algorithm while simultaneously optimising the parameters of the classifier used. The methodology was applied to four different sets of data. The obtained results showed the utility of using the self-adaptation of the classifier.

  17. A predictive model to inform adaptive management of double-crested cormorants and fisheries in Michigan

    USGS Publications Warehouse

    Tsehaye, Iyob; Jones, Michael L.; Irwin, Brian J.; Fielder, David G.; Breck, James E.; Luukkonen, David R.

    2015-01-01

    The proliferation of double-crested cormorants (DCCOs; Phalacrocorax auritus) in North America has raised concerns over their potential negative impacts on game, cultured and forage fishes, island and terrestrial resources, and other colonial water birds, leading to increased public demands to reduce their abundance. By combining fish surplus production and bird functional feeding response models, we developed a deterministic predictive model representing bird–fish interactions to inform an adaptive management process for the control of DCCOs in multiple colonies in Michigan. Comparisons of model predictions with observations of changes in DCCO numbers under management measures implemented from 2004 to 2012 suggested that our relatively simple model was able to accurately reconstruct past DCCO population dynamics. These comparisons helped discriminate among alternative parameterizations of demographic processes that were poorly known, especially site fidelity. Using sensitivity analysis, we also identified remaining critical uncertainties (mainly in the spatial distributions of fish vs. DCCO feeding areas) that can be used to prioritize future research and monitoring needs. Model forecasts suggested that continuation of existing control efforts would be sufficient to achieve long-term DCCO control targets in Michigan and that DCCO control may be necessary to achieve management goals for some DCCO-impacted fisheries in the state. Finally, our model can be extended by accounting for parametric or ecological uncertainty and including more complex assumptions on DCCO–fish interactions as part of the adaptive management process.
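
    The bird-fish coupling the authors describe can be caricatured in a few lines: a logistic surplus-production fish stock with a Holling type II functional response for cormorant consumption; every parameter value here is invented for illustration, not taken from the Michigan model:

        def stock_step(B, P, r=0.4, K=1.0e6, a=2.0e-4, h=1.0e-3):
            """One year: logistic surplus production of fish biomass B minus
            consumption by P cormorants (Holling type II response)."""
            consumption = P * a * B / (1.0 + a * h * B)
            return max(B + r * B * (1.0 - B / K) - consumption, 0.0)

        B, P = 5.0e5, 2000.0
        for year in range(10):
            B = stock_step(B, P)
        print(B)

    In an adaptive management loop, model forecasts of B under alternative control levels P would be compared against monitoring data each year, and poorly supported parameterizations (e.g. of site fidelity) down-weighted.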

  18. Issues in Automatic Object Recognition: Linking Geometry and Material Data to Predictive Signature Codes

    DTIC Science & Technology

    1991-03-01

    dirbuild() opens the database file and builds the in-core database table of contents. rt_gettree() adds a database sub-tree to the active model space, and …

  19. Monitoring Cosmic Radiation Risk: Comparisons between Observations and Predictive Codes for Naval Aviation

    DTIC Science & Technology

    2009-01-01

    … various materials can be found by use of the SRIM code [17]. F. Pions/Muons: The pion, originally referred to as the π meson, was one of the earliest … These are the lightest mesons and have a very short half-life. In atmospheric interactions, they help produce muons and neutrinos [17].

  20. Monitoring Cosmic Radiation Risk: Comparisons Between Observations and Predictive Codes for Naval Aviation

    DTIC Science & Technology

    2009-07-05

    … materials can be found by use of the SRIM code [17]. F. Pions/Muons: The pion, originally referred to as the π meson, was one of the earliest … These are the lightest mesons and have a very short half-life. In atmospheric interactions, they help produce muons and neutrinos [17].

  1. Interest Level in 2-Year-Olds with Autism Spectrum Disorder Predicts Rate of Verbal, Nonverbal, and Adaptive Skill Acquisition

    ERIC Educational Resources Information Center

    Klintwall, Lars; Macari, Suzanne; Eikeseth, Svein; Chawarska, Katarzyna

    2015-01-01

    Recent studies have suggested that skill acquisition rates for children with autism spectrum disorders receiving early interventions can be predicted by child motivation. We examined whether level of interest during an Autism Diagnostic Observation Schedule assessment at 2 years predicts subsequent rates of verbal, nonverbal, and adaptive skill…

  2. Expression Quantitative Trait Loci Information Improves Predictive Modeling of Disease Relevance of Non-Coding Genetic Variation

    PubMed Central

    Raj, Towfique; McGeachie, Michael J.; Qiu, Weiliang; Ziniti, John P.; Stubbs, Benjamin J.; Liang, Liming; Martinez, Fernando D.; Strunk, Robert C.; Lemanske, Robert F.; Liu, Andrew H.; Stranger, Barbara E.; Carey, Vincent J.; Raby, Benjamin A.

    2015-01-01

    Disease-associated loci identified through genome-wide association studies (GWAS) frequently localize to non-coding sequence. We and others have demonstrated strong enrichment of such single nucleotide polymorphisms (SNPs) for expression quantitative trait loci (eQTLs), supporting an important role for regulatory genetic variation in complex disease pathogenesis. Herein we describe our initial efforts to develop a predictive model of disease-associated variants leveraging eQTL information. We first catalogued cis-acting eQTLs (SNPs within 100kb of target gene transcripts) by meta-analyzing four studies of three blood-derived tissues (n = 586). At a false discovery rate < 5%, we mapped eQTLs for 6,535 genes; these were enriched for disease-associated genes (P < 10−04), particularly those related to immune diseases and metabolic traits. Based on eQTL information and other variant annotations (distance from target gene transcript, minor allele frequency, and chromatin state), we created multivariate logistic regression models to predict SNP membership in reported GWAS. The complete model revealed independent contributions of specific annotations as strong predictors, including evidence for an eQTL (odds ratio (OR) = 1.2–2.0, P < 10−11) and the chromatin states of active promoters, different classes of strong or weak enhancers, or transcriptionally active regions (OR = 1.5–2.3, P < 10−11). This complete prediction model including eQTL association information ultimately allowed for better discrimination of SNPs with higher probabilities of GWAS membership (6.3–10.0%, compared to 3.5% for a random SNP) than the other two models excluding eQTL information. This eQTL-based prediction model of disease relevance can help systematically prioritize non-coding GWAS SNPs for further functional characterization. PMID:26474488
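
    The multivariate logistic model is straightforward to reproduce in outline; everything below (the annotation set, effect sizes, and data) is synthetic, standing in for the meta-analyzed eQTL and chromatin annotations:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        n = 5000
        # Stand-in annotations per SNP: eQTL evidence, distance to transcript (kb),
        # minor allele frequency, and an active-promoter chromatin state flag.
        X = np.column_stack([
            rng.integers(0, 2, n),          # is_eqtl
            rng.uniform(0, 100, n),         # distance_kb
            rng.uniform(0.01, 0.5, n),      # maf
            rng.integers(0, 2, n),          # active_promoter
        ])
        logit = -3.5 + 0.6 * X[:, 0] - 0.01 * X[:, 1] + 0.8 * X[:, 3]
        y = rng.random(n) < 1 / (1 + np.exp(-logit))   # synthetic GWAS membership

        model = LogisticRegression(max_iter=1000).fit(X, y)
        print(np.exp(model.coef_))          # per-annotation odds ratios

    Exponentiating the fitted coefficients recovers odds ratios directly comparable to those reported in the abstract.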

  3. Predicting CYP2C19 catalytic parameters for enantioselective oxidations using artificial neural networks and a chirality code.

    PubMed

    Hartman, Jessica H; Cothren, Steven D; Park, Sun-Ha; Yun, Chul-Ho; Darsey, Jerry A; Miller, Grover P

    2013-07-01

    Cytochromes P450 (CYP for isoforms) play a central role in biological processes, especially the metabolism of chiral molecules; thus, development of computational methods to predict parameters for chiral reactions is important for advancing this field. In this study, we identified the most optimal artificial neural networks using conformation-independent chirality codes to predict CYP2C19 catalytic parameters for enantioselective reactions. Optimization of the neural networks required identifying the most suitable representation of structure among a diverse array of training substrates, normalizing distribution of the corresponding catalytic parameters (k(cat), K(m), and k(cat)/K(m)), and determining the best topology for networks to make predictions. Among different structural descriptors, the use of partial atomic charges according to the CHelpG scheme and inclusion of hydrogens yielded the most optimal artificial neural networks. Their training also required resolution of poorly distributed output catalytic parameters using a Box-Cox transformation. End point leave-one-out cross correlations of the best neural networks revealed that predictions for individual catalytic parameters (k(cat) and K(m)) were more consistent with experimental values than those for catalytic efficiency (k(cat)/K(m)). Lastly, the neural networks correctly predicted enantioselectivity and catalytic parameters comparable to those measured in this study for the previously uncharacterized CYP2C19 substrates R- and S-propranolol. Taken together, these seminal computational studies for CYP2C19 are the first to predict all catalytic parameters for enantioselective reactions using artificial neural networks and thus provide a foundation for expanding the prediction of cytochrome P450 reactions to chiral drugs, pollutants, and other biologically active compounds.

  4. lncRScan-SVM: A Tool for Predicting Long Non-Coding RNAs Using Support Vector Machine.

    PubMed

    Sun, Lei; Liu, Hui; Zhang, Lin; Meng, Jia

    2015-01-01

    Functional long non-coding RNAs (lncRNAs) have been bringing novel insight into biological studies; however, it is still not trivial to accurately distinguish lncRNA transcripts (LNCTs) from protein-coding ones (PCTs). As various information and data about lncRNAs have been preserved by previous studies, it is appealing to develop novel methods to identify the lncRNAs more accurately. Our method lncRScan-SVM aims at classifying PCTs and LNCTs using a support vector machine (SVM). The gold-standard datasets for lncRScan-SVM model training, lncRNA prediction and method comparison were constructed according to the GENCODE gene annotations of human and mouse, respectively. By integrating features derived from gene structure, transcript sequence, potential codon sequence and conservation, lncRScan-SVM outperforms other approaches, as evaluated by several criteria such as sensitivity, specificity, accuracy, Matthews correlation coefficient (MCC) and area under curve (AUC). In addition, several known human lncRNA datasets were assessed using lncRScan-SVM. LncRScan-SVM is an efficient tool for predicting the lncRNAs, and it is quite useful for current lncRNA study.

  5. Computational prediction of over-annotated protein-coding genes in the genome of Agrobacterium tumefaciens strain C58

    NASA Astrophysics Data System (ADS)

    Yu, Jia-Feng; Sui, Tian-Xiang; Wang, Hong-Mei; Wang, Chun-Ling; Jing, Li; Wang, Ji-Hua

    2015-12-01

    Agrobacterium tumefaciens strain C58 is a type of pathogen that can cause tumors in some dicotyledonous plants. Ever since the genome of A. tumefaciens strain C58 was sequenced, the quality of annotation of its protein-coding genes has been queried continually, because the annotation varies greatly among different databases. In this paper, the questionable hypothetical genes were re-predicted by integrating the TN curve and Z curve methods. As a result, 30 genes originally annotated as “hypothetical” were discriminated as being non-coding sequences. By testing the re-prediction program 10 times on data sets composed of the function-known genes, the mean accuracy of 99.99% and mean Matthews correlation coefficient value of 0.9999 were obtained. Further sequence analysis and COG analysis showed that the re-annotation results were very reliable. This work can provide an efficient tool and data resources for future studies of A. tumefaciens strain C58. Project supported by the National Natural Science Foundation of China (Grant Nos. 61302186 and 61271378) and the Funding from the State Key Laboratory of Bioelectronics of Southeast University.

  6. Evaluation of MOSTAS computer code for predicting dynamic loads in two-bladed wind turbines

    NASA Technical Reports Server (NTRS)

    Kaza, K. R. V.; Janetzke, D. C.; Sullivan, T. L.

    1979-01-01

    Calculated dynamic blade loads are compared with measured loads over a range of yaw stiffnesses of the DOE/NASA Mod-0 wind turbine to evaluate the performance of two versions of the MOSTAS computer code. The first version uses a time-averaged coefficient approximation in conjunction with a multiblade coordinate transformation for two-bladed rotors to solve the equations of motion by standard eigenanalysis. The results obtained with this approximate analysis do not agree with measured dynamic blade load amplifications at or close to resonance conditions. The results of the second version, which accounts for periodic coefficients while solving the equations by a time history integration, compare well with the measured data.

  7. Evaluation of MOSTAS computer code for predicting dynamic loads in two bladed wind turbines

    NASA Technical Reports Server (NTRS)

    Kaza, K. R. V.; Janetzke, D. C.; Sullivan, T. L.

    1979-01-01

    Calculated dynamic blade loads were compared with measured loads over a range of yaw stiffnesses of the DOE/NASA Mod-O wind turbine to evaluate the performance of two versions of the MOSTAS computer code. The first version uses a time-averaged coefficient approximation in conjunction with a multi-blade coordinate transformation for two-bladed rotors to solve the equations of motion by standard eigenanalysis. The second version accounts for periodic coefficients while solving the equations by a time history integration. A hypothetical three-degree-of-freedom dynamic model was investigated. The exact equations of motion of this model were solved using the Floquet-Liapunov method. The equations with time-averaged coefficients were solved by standard eigenanalysis.

  8. Predicting animal δ18O: Accounting for diet and physiological adaptation

    NASA Astrophysics Data System (ADS)

    Kohn, Matthew J.

    1996-12-01

    Theoretical predictions and measured isotope variations indicate that diet and physiological adaptation have a significant impact on animal δ18O and cannot be ignored. A generalized model is therefore developed for the prediction of animal body water and phosphate δ18O to incorporate these factors quantitatively. Application of the model reproduces most published compositions and compositional trends for mammals and birds. A moderate dependence of animal δ18O on humidity is predicted for drought-tolerant animals, and the correlation between humidity and North American deer bone composition as corrected for local meteoric water is predicted within the scatter of the data. In contrast to an observed strong correlation between kangaroo δ18O and humidity (Δδ18O/Δh ∼ 2.5 ± 0.4‰/10% r.h.), the predicted humidity dependence is only 1.3-1.7‰/10% r.h., and it is inferred that drinking water in hot dry areas of Australia is enriched in 18O over rainwater. Differences in physiology and water turnover readily explain the observed differences in δ18O for several herbivore genera in East Africa, excepting antelopes. Antelope models are more sensitive to biological fractionations, and adjustments to the flux of transcutaneous water vapor within experimentally measured ranges allow their δ18O values to be matched. Models of the seasonal changes of forage composition for two regions with dissimilar climates show that significant seasonal variations in animal isotope composition are expected, and that animals with different physiologies and diets track climate differently. Analysis of different genera with disparate sensitivities to surface water and humidity will allow the most accurate quantification of past climate changes.

  9. Non-coding RNAs in crop genetic modification: considerations and predictable environmental risk assessments (ERA).

    PubMed

    Ramesh, S V

    2013-09-01

    Of late, non-coding RNA (ncRNA)-mediated gene silencing has become an influential tool deliberately deployed to negatively regulate the expression of targeted genes. In addition to the widely employed small interfering RNA (siRNA)-mediated gene silencing approach, other variants like artificial miRNA (amiRNA), miRNA mimics, and artificial transacting siRNAs (tasiRNAs) are being explored and successfully deployed in developing non-coding RNA-based genetically modified plants. The ncRNA-based gene manipulations are typified by the mobile nature of silencing signals, interference from viral genome-derived suppressor proteins, and an obligation for meticulous computational analysis to avoid any inadvertent effects. In a broad sense, risk assessment inquiries for genetically modified plants based on the expression of ncRNAs are competently addressed by the environmental risk assessment (ERA) models currently in vogue, designed for first-generation transgenic plants, which are based on the expression of heterologous proteins. Nevertheless, transgenic plants functioning on the foundation of ncRNAs warrant due attention with respect to their unique attributes like off-target or non-target gene silencing effects, small RNA (sRNA) persistence, food and feed safety assessments, problems in detection and tracking of sRNAs in food, impact of ncRNAs in plant protection measures, effect of mutations, etc. The role of recent developments in sequencing techniques like next generation sequencing (NGS) and the ERA paradigms of the different countries in vogue are also discussed in the context of ncRNA-based gene manipulations.

  10. Computational Account of Spontaneous Activity as a Signature of Predictive Coding

    PubMed Central

    Koren, Veronika

    2017-01-01

    Spontaneous activity is commonly observed in a variety of cortical states. Experimental evidence suggested that neural assemblies undergo slow oscillations with Up and Down states even when the network is isolated from the rest of the brain. Here we show that these spontaneous events can be generated by the recurrent connections within the network and understood as signatures of neural circuits that are correcting their internal representation. A noiseless spiking neural network can represent its input signals most accurately when excitatory and inhibitory currents are as strong and as tightly balanced as possible. However, in the presence of realistic neural noise and synaptic delays, this may result in prohibitively large spike counts. An optimal working regime can be found by considering terms that control firing rates in the objective function from which the network is derived and then minimizing simultaneously the coding error and the cost of neural activity. In biological terms, this is equivalent to tuning neural thresholds and after-spike hyperpolarization. In suboptimal working regimes, we observe spontaneous activity even in the absence of feed-forward inputs. In an all-to-all randomly connected network, the entire population is involved in Up states. In spatially organized networks with local connectivity, Up states spread through local connections between neurons of similar selectivity and take the form of a traveling wave. Up states are observed for a wide range of parameters and have similar statistical properties in both active and quiescent states. In the optimal working regime, Up states vanish, giving way to asynchronous activity, suggesting that this working regime is a signature of maximally efficient coding. Although they result in a massive increase in the firing activity, the read-out of spontaneous Up states is in fact orthogonal to the stimulus representation, therefore interfering minimally with the network function. PMID:28114353

  11. Positive predictive value of ICD-9th codes for upper gastrointestinal bleeding and perforation in the Sistema Informativo Sanitario Regionale database.

    PubMed

    Cattaruzzi, C; Troncon, M G; Agostinis, L; García Rodríguez, L A

    1999-06-01

    We identified patients whose records in the Sistema Informativo Sanitario Regionale database in the Italian region of Friuli-Venezia Giulia showed a code for upper gastrointestinal bleeding (UGIB) or perforation according to the International Classification of Diseases (ICD), 9th revision. The validity of site- and lesion-specific codes (531 to 534) and nonspecific codes (5780, 5781, and 5789) was ascertained through manual review of hospital clinical records. The initial group consisted of 1779 potential cases of UGIB identified by one of these recorded codes. First, positive predictive values (PPV) were calculated in a random sample. As a result of the observed high PPV of the 531 and 532 codes, additional hospital charts were requested only for the remaining potential cases with 533, 534, and 578 ICD-9 codes. The overall PPV reached a high of 97% for the 531 and 532 site-specific codes, 84% for the 534 site-specific codes, and 80% for the 533 lesion-specific codes, and a low of 59% for the nonspecific codes. These data suggest considerable research potential for this new computerized health care database in Southern Europe.
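
    As a reminder of what is being computed, PPV here is simply the share of code-flagged records confirmed on chart review; the counts below are illustrative, not the study's raw numbers:

        def positive_predictive_value(confirmed, flagged):
            """PPV = true positives / all records carrying the code."""
            return confirmed / flagged

        print(positive_predictive_value(97, 100))   # e.g. site-specific 531/532 codes
        print(positive_predictive_value(59, 100))   # e.g. nonspecific 578x codes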

  12. Predicting demographically sustainable rates of adaptation: can great tit breeding time keep pace with climate change?

    PubMed Central

    Gienapp, Phillip; Lof, Marjolein; Reed, Thomas E.; McNamara, John; Verhulst, Simon; Visser, Marcel E.

    2013-01-01

    Populations need to adapt to sustained climate change, which requires micro-evolutionary change in the long term. A key question is how the rate of this micro-evolutionary change compares with the rate of environmental change, given that theoretically there is a ‘critical rate of environmental change’ beyond which increased maladaptation leads to population extinction. Here, we parametrize two closely related models to predict this critical rate using data from a long-term study of great tits (Parus major). We used stochastic dynamic programming to predict changes in optimal breeding time under three different climate scenarios. Using these results, we parametrized two theoretical models to predict critical rates. Results from both models agreed qualitatively in that even ‘mild’ rates of climate change would be close to these critical rates with respect to great tit breeding time, while for scenarios close to the upper limit of IPCC climate projections the calculated critical rates would be clearly exceeded, with possible consequences for population persistence. We therefore tentatively conclude that micro-evolution, together with plasticity, would rescue the population only from mild rates of climate change, although the models make many simplifying assumptions that remain to be tested. PMID:23209174

  13. An adaptive distance-based group contribution method for thermodynamic property prediction.

    PubMed

    He, Tanjin; Li, Shuang; Chi, Yawei; Zhang, Hong-Bo; Wang, Zhi; Yang, Bin; He, Xin; You, Xiaoqing

    2016-09-14

    In the search for an accurate yet inexpensive method to predict thermodynamic properties of large hydrocarbon molecules, we have developed an automatic and adaptive distance-based group contribution (DBGC) method. The method characterizes the group interaction within a molecule with an exponential decay function of the group-to-group distance, defined as the number of bonds between the groups. A database containing the molecular bonding information and the standard enthalpy of formation (Hf,298K) for alkanes, alkenes, and their radicals at the M06-2X/def2-TZVP//B3LYP/6-31G(d) level of theory was constructed. Multiple linear regression (MLR) and artificial neural network (ANN) fitting were used to obtain the contributions from individual groups and group interactions for further predictions. Compared with the conventional group additivity (GA) method, the DBGC method predicts Hf,298K for alkanes more accurately using the same training sets. Particularly for some highly branched large hydrocarbons, the discrepancy with the literature data is smaller for the DBGC method than the conventional GA method. When extended to other molecular classes, including alkenes and radicals, the overall accuracy level of this new method is still satisfactory.
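
    To make the distance-based idea concrete, the sketch below combines additive group values with pairwise interaction terms damped by exp(-alpha*d), where d is the number of bonds between two groups. All numerical values (GROUP_VALUE, PAIR_VALUE, ALPHA) are hypothetical placeholders, not the paper's fitted coefficients, which come from MLR/ANN fits to quantum-chemistry data.

      import math
      from itertools import combinations

      # Hypothetical coefficients; the paper fits its values by MLR/ANN.
      GROUP_VALUE = {"CH3": -42.3, "CH2": -20.6}       # kJ/mol, illustrative
      PAIR_VALUE = {frozenset(["CH3", "CH3"]): 5.1}    # illustrative
      ALPHA = 0.8                                      # decay constant, illustrative

      def dbgc_enthalpy(groups, distances):
          """groups: list of group labels; distances[(i, j)]: number of bonds
          between groups i and j (i < j). Pairwise terms decay with distance."""
          h = sum(GROUP_VALUE[g] for g in groups)
          for i, j in combinations(range(len(groups)), 2):
              pair = frozenset([groups[i], groups[j]])
              if pair in PAIR_VALUE:
                  h += PAIR_VALUE[pair] * math.exp(-ALPHA * distances[(i, j)])
          return h

      # n-butane as CH3-CH2-CH2-CH3: the two CH3 groups are 3 bonds apart.
      print(dbgc_enthalpy(["CH3", "CH2", "CH2", "CH3"], {(0, 3): 3}))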

  14. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g., a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted

  15. Questions regarding the predictive value of one evolved complex adaptive system for a second: exemplified by the SOD1 mouse.

    PubMed

    Greek, Ray; Hansen, Lawrence A

    2013-11-01

    We surveyed the scientific literature regarding amyotrophic lateral sclerosis, the SOD1 mouse model, complex adaptive systems, evolution, drug development, animal models, and philosophy of science in an attempt to analyze the SOD1 mouse model of amyotrophic lateral sclerosis in the context of evolved complex adaptive systems. Humans and animals are examples of evolved complex adaptive systems. It is difficult to predict the outcome from perturbations to such systems because of the characteristics of complex systems. Modeling even one complex adaptive system in order to predict outcomes from perturbations is difficult. Predicting outcomes to one evolved complex adaptive system based on outcomes from a second, especially when the perturbation occurs at higher levels of organization, is even more problematic. Using animal models to predict human outcomes to perturbations such as disease and drugs should have a very low predictive value. We present empirical evidence confirming this and suggest a theory to explain this phenomenon. We analyze the SOD1 mouse model of amyotrophic lateral sclerosis in order to illustrate this position.

  16. Standard Deviation and Intra Prediction Mode Based Adaptive Spatial Error Concealment (SEC) in H.264/AVC

    NASA Astrophysics Data System (ADS)

    Wang, Jun; Wang, Lei; Ikenaga, Takeshi; Goto, Satoshi

    Transmission of compressed video over error-prone channels may result in packet losses or errors, which can significantly degrade the image quality. Therefore, an error concealment scheme is applied at the video receiver side to mask the damaged video. Considering that there are 3 types of MBs (macroblocks) in a natural video frame, i.e., textural, edged, and smooth MBs, this paper proposes an adaptive spatial error concealment scheme that chooses among 3 different methods for these 3 MB types. Two factors are taken into consideration as criteria for choosing the appropriate method. First, the standard deviation of our proposed edge statistical model is exploited. Second, a new feature of the latest video compression standard H.264/AVC, the intra prediction mode, is also considered in formulating the criterion. Compared with previous works, which are based only on deterministic measurements, the proposed method achieves the best image recovery. Subjective and objective image quality evaluations in experiments confirm this.
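
    A minimal sketch of the dispatch logic described above: the lost macroblock's neighbourhood is classified by the standard deviation of correctly received boundary pixels, and a concealment method is chosen per class. The thresholds are hypothetical, and the real scheme additionally consults the H.264 intra prediction modes of the neighbours.

      import numpy as np

      T_SMOOTH, T_EDGE = 8.0, 25.0   # hypothetical thresholds on the pixel std

      def choose_concealment(neighbor_pixels: np.ndarray) -> str:
          """Classify a lost macroblock from the spread of its correctly
          received neighbor pixels, then pick a concealment method."""
          sigma = float(neighbor_pixels.std())
          if sigma < T_SMOOTH:
              return "smooth MB: bilinear interpolation"
          if sigma < T_EDGE:
              return "edged MB: directional interpolation"
          return "textural MB: texture-oriented recovery"

      print(choose_concealment(np.array([118, 120, 119, 121, 120, 118])))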

  17. Prediction of Corrosion Resistance of Some Dental Metallic Materials with an Adaptive Regression Model

    NASA Astrophysics Data System (ADS)

    Chelariu, Romeu; Suditu, Gabriel Dan; Mareci, Daniel; Bolat, Georgiana; Cimpoesu, Nicanor; Leon, Florin; Curteanu, Silvia

    2015-04-01

    The aim of this study is to investigate the electrochemical behavior of some dental metallic materials in artificial saliva for different pH values (5.6 and 3.4), NaF contents (500 ppm, 1000 ppm, and 2000 ppm), and with albumin protein addition (0.6 wt.%) at pH 3.4. The corrosion resistance of the alloys was quantitatively evaluated by polarization resistance, estimated by the electrochemical impedance spectroscopy method. An adaptive k-nearest-neighbor regression method was applied to evaluate the corrosion resistance of the alloys by simulation, as a function of the operating conditions. The predictions provided by the model are useful for experimental practice, as they can replace or, at least, help to plan the experiments. The accurate results obtained show that the developed model is reliable and efficient.
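
    For illustration, here is a minimal distance-weighted k-nearest-neighbor regression over the operating conditions, in the spirit of (but not identical to) the adaptive variant the authors use. The training rows and target values are invented, and in practice the features should be normalized first, since pH, ppm, and wt.% live on very different scales.

      import numpy as np

      def knn_predict(X, y, x_query, k=3, eps=1e-9):
          """Distance-weighted k-nearest-neighbor regression."""
          d = np.linalg.norm(X - x_query, axis=1)
          idx = np.argsort(d)[:k]
          w = 1.0 / (d[idx] + eps)                 # closer neighbors weigh more
          return float(np.dot(w, y[idx]) / w.sum())

      # Hypothetical rows: [pH, NaF ppm, albumin wt.%] -> polarization resistance.
      # NOTE: without feature normalization the raw ppm scale dominates the
      # distance, as this toy example shows.
      X = np.array([[5.6, 500.0, 0.0], [5.6, 2000.0, 0.0], [3.4, 1000.0, 0.6]])
      y = np.array([120.0, 75.0, 40.0])
      print(knn_predict(X, y, np.array([3.4, 500.0, 0.6])))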

  18. Predicted performance benefits of an adaptive digital engine control system of an F-15 airplane

    NASA Technical Reports Server (NTRS)

    Burcham, F. W., Jr.; Myers, L. P.; Ray, R. J.

    1985-01-01

    The highly integrated digital electronic control (HIDEC) program will demonstrate and evaluate the improvements in performance and mission effectiveness that result from integrating engine-airframe control systems. Currently this is accomplished on the NASA Ames Research Center's F-15 airplane. The two control modes used to implement the systems are an integrated flightpath management mode and an integrated adaptive engine control system (ADECS) mode. The ADECS mode is a highly integrated mode in which the airplane flight conditions, the resulting inlet distortion, and the available engine stall margin are continually computed. The excess stall margin is traded for thrust. The predicted increase in engine performance due to the ADECS mode is presented in this report.

  19. Predictive wind turbine simulation with an adaptive lattice Boltzmann method for moving boundaries

    NASA Astrophysics Data System (ADS)

    Deiterding, Ralf; Wood, Stephen L.

    2016-09-01

    Operating horizontal axis wind turbines create large-scale turbulent wake structures that affect the power output of downwind turbines considerably. The computational prediction of this phenomenon is challenging as efficient low dissipation schemes are necessary that represent the vorticity production by the moving structures accurately and that are able to transport wakes without significant artificial decay over distances of several rotor diameters. We have developed a parallel adaptive lattice Boltzmann method for large eddy simulation of turbulent weakly compressible flows with embedded moving structures that considers these requirements rather naturally and enables first principle simulations of wake-turbine interaction phenomena at reasonable computational costs. The paper describes the employed computational techniques and presents validation simulations for the Mexnext benchmark experiments as well as simulations of the wake propagation in the Scaled Wind Farm Technology (SWIFT) array consisting of three Vestas V27 turbines in triangular arrangement.
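
    For orientation, the sketch below implements one collision-and-streaming step of a plain D2Q9 BGK lattice Boltzmann method on a periodic grid. It deliberately omits everything that makes the authors' solver distinctive (adaptivity, LES modeling, moving boundaries) and only shows the basic update such solvers build on; grid size and relaxation time are arbitrary.

      import numpy as np

      # D2Q9 lattice: discrete velocities and standard weights.
      E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                    [1, 1], [-1, 1], [-1, -1], [1, -1]])
      W = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

      def bgk_step(f, tau):
          """One collision + streaming step of plain BGK lattice Boltzmann
          on a periodic grid; f has shape (9, nx, ny)."""
          rho = f.sum(axis=0)
          u = np.tensordot(E.T, f, axes=1) / rho            # macroscopic velocity
          eu = np.tensordot(E, u, axes=([1], [0]))          # e_i . u per direction
          u2 = (u * u).sum(axis=0)
          feq = W[:, None, None] * rho * (1 + 3*eu + 4.5*eu**2 - 1.5*u2)
          f = f + (feq - f) / tau                           # BGK relaxation
          for i, (ex, ey) in enumerate(E):                  # periodic streaming
              f[i] = np.roll(np.roll(f[i], ex, axis=0), ey, axis=1)
          return f

      # Fluid at rest (rho = 1, u = 0); mass should be conserved by the step.
      f = np.tile(W[:, None, None], (1, 32, 32))
      print(bgk_step(f, tau=0.6).sum(axis=0).mean())        # -> 1.0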

  20. Predicted performance benefits of an adaptive digital engine control system on an F-15 airplane

    NASA Technical Reports Server (NTRS)

    Burcham, F. W., Jr.; Myers, L. P.; Ray, R. J.

    1985-01-01

    The highly integrated digital electronic control (HIDEC) program will demonstrate and evaluate the improvements in performance and mission effectiveness that result from integrating engine-airframe control systems. Currently this is accomplished on the NASA Ames Research Center's F-15 airplane. The two control modes used to implement the systems are an integrated flightpath management mode and an integrated adaptive engine control system (ADECS) mode. The ADECS mode is a highly integrated mode in which the airplane flight conditions, the resulting inlet distortion, and the available engine stall margin are continually computed. The excess stall margin is traded for thrust. The predicted increase in engine performance due to the ADECS mode is presented in this report.

  1. Physical modelling and adaptive predictive control of diffusion/LPCVD reactors

    NASA Astrophysics Data System (ADS)

    Dewaard, H.

    1992-12-01

    The aim of this study is to design a temperature controller for batch electric diffusion/low pressure chemical vapor deposition (LPCVD) furnaces, that complies with the increasingly more stringent requirements of VLSI processing. A mathematical model has been developed for batch electric diffusion/LPCVD reactors that are currently used in the semiconductor industry for the fabrication of micro-electronic devices. The model has been formulated in terms of partial integro-differential equations, which are derived from the basic energy conservation law of physics. The model takes into account the effects of radiation and conduction. Chapter 2 gives a detailed description of the furnace system and provides some insight into the processes that take place. In chapter 3, the model of the diffusion/LPCVD furnace is derived. Chapter 4 deals with the design of a temperature control system for the diffusion/LPCVD reactor, that makes use of the model as developed in chapter 3. Chapter 5 gives the results of the control designs, both of simulation and of application on a real furnace. Results of the linear quadratic Gaussian controller, the (non-adaptive) reduced order controller, and the adaptive predictive controller are presented. Finally, in chapter 6, some conclusions are drawn and suggestions for further research are given.

  2. Wind-US Code Contributions to the First AIAA Shock Boundary Layer Interaction Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Georgiadis, Nicholas J.; Vyas, Manan A.; Yoder, Dennis A.

    2013-01-01

    This report discusses the computations of a set of shock wave/turbulent boundary layer interaction (SWTBLI) test cases using the Wind-US code, as part of the 2010 American Institute of Aeronautics and Astronautics (AIAA) shock/boundary layer interaction workshop. The experiments involve supersonic flows in wind tunnels with a shock generator that directs an oblique shock wave toward the boundary layer along one of the walls of the wind tunnel. The Wind-US calculations utilized structured grid computations performed in Reynolds-averaged Navier-Stokes mode. Four turbulence models were investigated: the Spalart-Allmaras one-equation model, the Menter Baseline and Shear Stress Transport k-omega two-equation models, and an explicit algebraic stress k-omega formulation. Effects of grid resolution and upwinding scheme were also considered. The results from the CFD calculations are compared to particle image velocimetry (PIV) data from the experiments. As expected, turbulence model effects dominated the accuracy of the solutions with upwinding scheme selection indicating minimal effects.

  3. Organizational changes to thyroid regulation in Alligator mississippiensis: evidence for predictive adaptive responses.

    PubMed

    Boggs, Ashley S P; Lowers, Russell H; Cloy-McCoy, Jessica A; Guillette, Louis J

    2013-01-01

    During embryonic development, organisms are sensitive to changes in thyroid hormone signaling which can reset the hypothalamic-pituitary-thyroid axis. It has been hypothesized that this developmental programming is a 'predictive adaptive response', a physiological adjustment in accordance with the embryonic environment that will best aid an individual's survival in a similar postnatal environment. When the embryonic environment is a poor predictor of the external environment, the developmental changes are no longer adaptive and can result in disease states. We predicted that endocrine disrupting chemicals (EDCs) and environmentally-based iodide imbalance could lead to developmental changes to the thyroid axis. To explore whether iodide or EDCs could alter developmental programming, we collected American alligator eggs from an estuarine environment with high iodide availability and elevated thyroid-specific EDCs, a freshwater environment contaminated with elevated agriculturally derived EDCs, and a reference freshwater environment. We then incubated them under identical conditions. We examined plasma thyroxine and triiodothyronine concentrations, thyroid gland histology, plasma inorganic iodide, and somatic growth at one week (before external nutrition) and ten months after hatching (on identical diets). Neonates from the estuarine environment were thyrotoxic, expressing follicular cell hyperplasia (p = 0.01) and elevated plasma triiodothyronine concentrations (p = 0.0006) closely tied to plasma iodide concentrations (p = 0.003). Neonates from the freshwater contaminated site were hypothyroid, expressing thyroid follicular cell hyperplasia (p = 0.01) and depressed plasma thyroxine concentrations (p = 0.008). Following a ten month growth period under identical conditions, thyroid histology (hyperplasia p = 0.04; colloid depletion p = 0.01) and somatic growth (body mass p<0.0001; length p = 0.02) remained altered among the contaminated

  4. Adaptive Anchoring Model: How Static and Dynamic Presentations of Time Series Influence Judgments and Predictions.

    PubMed

    Kusev, Petko; van Schaik, Paul; Tsaneva-Atanasova, Krasimira; Juliusson, Asgeir; Chater, Nick

    2017-04-06

    When attempting to predict future events, people commonly rely on historical data. One psychological characteristic of judgmental forecasting of time series, established by research, is that when people make forecasts from a series, they tend to underestimate future values for upward trends and overestimate them for downward ones, so-called trend damping (modeled by anchoring on, and insufficient adjustment from, the average of recent time series values). Events in a time series can be experienced sequentially (dynamic mode) or viewed retrospectively and simultaneously (static mode), rather than experienced individually in real time. In one experiment, we studied the influence of presentation mode (dynamic and static) on two sorts of judgment: (a) predictions of the next event (forecast) and (b) estimation of the average value of all the events in the presented series (average estimation). Participants' responses in dynamic mode were anchored on more recent events than in static mode for all types of judgment, but with different consequences; dynamic presentation improved prediction accuracy but not estimation. These results are not anticipated by existing theoretical accounts; we develop and present an agent-based model, the adaptive anchoring model (ADAM), to account for the difference between processing sequences of dynamically and statically presented stimuli (visually presented data). ADAM captures how variation in presentation mode produces variation in responses (and the accuracy of these responses) in both forecasting and judgment tasks. ADAM's predictions for the forecasting and judgment tasks fit the response data better than a linear-regression time series model. Moreover, ADAM outperformed autoregressive-integrated-moving-average (ARIMA) and exponential-smoothing models, while neither of these models accounts for people's responses on the average estimation task.
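
    A toy anchor-and-adjust forecaster reproduces the trend damping described above: it anchors on a recency-weighted mean of the series and adjusts only partially toward the trend-extrapolated next value. The recency and adjust parameters are illustrative knobs, not ADAM's fitted parameters; higher recency loosely mimics dynamic presentation.

      import numpy as np

      def anchored_forecast(series, recency=0.6, adjust=0.7):
          """Anchor on a recency-weighted mean, then adjust insufficiently
          (adjust < 1) toward the trend-extrapolated next value."""
          x = np.asarray(series, dtype=float)
          w = recency ** np.arange(len(x) - 1, -1, -1)   # recent points weigh more
          anchor = np.average(x, weights=w)
          trend = x[-1] - x[-2]
          return anchor + adjust * (x[-1] + trend - anchor)

      print(anchored_forecast([10, 12, 14, 16, 18]))  # < 20: upward trend is damped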

  5. Memory-efficient table look-up optimized algorithm for context-based adaptive variable length decoding in H.264/advanced video coding

    NASA Astrophysics Data System (ADS)

    Wang, Jianhua; Cheng, Lianglun; Wang, Tao; Peng, Xiaodong

    2016-03-01

    Table look-up plays a very important role in the decoding process of context-based adaptive variable length decoding (CAVLD) in H.264/advanced video coding (AVC). However, frequent table look-up results in heavy table memory access and, in turn, high table power consumption. To address the heavy table memory access of current methods and thereby reduce power consumption, a memory-efficient table look-up optimized algorithm is presented for CAVLD. The contribution of this paper lies in introducing index search technology to reduce the memory access required by table look-up, and hence its power consumption. Specifically, our scheme uses index search to reduce memory access by cutting the searching and matching operations for code_word, taking advantage of the internal relationship among the length of zeros in code_prefix, the value of code_suffix, and code_length, thus saving the power consumption of table look-up. The experimental results show that our proposed index-search-based table look-up algorithm reduces memory access consumption by about 60% compared with a sequential-search table look-up scheme, and thereby saves considerable power consumption for CAVLD in H.264/AVC.
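
    The gain comes from exploiting codeword structure instead of scanning a table. As a structural illustration (using the simpler Exp-Golomb layout rather than the context-dependent CAVLC tables), a codeword can be decoded by counting leading zeros and then reading that many suffix bits, with no search at all:

      def decode_vlc(bits: str) -> tuple:
          """Decode one Exp-Golomb-style codeword: count leading zeros, then
          read that many suffix bits. Assumes a complete codeword is present.
          Structure only; real CAVLC tables are context-dependent."""
          zeros = 0
          while bits[zeros] == "0":
              zeros += 1
          suffix = bits[zeros + 1 : zeros + 1 + zeros]   # `zeros` suffix bits
          value = (1 << zeros) - 1 + (int(suffix, 2) if suffix else 0)
          length = 2 * zeros + 1
          return value, length

      print(decode_vlc("00111"))  # -> (6, 5): prefix 00, marker 1, suffix 11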

  6. A Genetic Variant (COMT) Coding Dopaminergic Activity Predicts Personality Traits in Healthy Elderly.

    PubMed

    Kotyuk, Eszter; Duchek, Janet; Head, Denise; Szekely, Anna; Goate, Alison M; Balota, David A

    2015-08-01

    Association studies between the NEO five factor personality inventory and COMT rs4680 have focused on young adults and the results have been inconsistent. However, personality and cortical changes with age may put older adults in a more sensitive range for detecting a relationship. The present study examined associations of COMT rs4680 and personality in older adults. Genetic association analyses were carried out between the NEO and the targeted COMT rs4680 in a large, well-characterized sample of healthy, cognitively normal older adults (N = 616, mean age = 69.26 years). Three significant associations were found: participants with GG genotype showed lower mean scores on Neuroticism (p = 0.039) and higher scores on Agreeableness (p = 0.020) and Conscientiousness (p = 0.006) than participants with AA or AG genotypes. These results suggest that older adults with higher COMT enzymatic activity (GG), therefore lower dopamine level, have lower Neuroticism scores, and higher Agreeableness and Conscientiousness scores. This is consistent with a recent model of phasic and tonic dopamine release suggesting that even though GG genotype is associated with lower tonic dopamine release, the phasic release of dopamine might be optimal for a more adaptive personality profile.

  7. A Genetic Variant (COMT) Coding Dopaminergic Activity Predicts Personality Traits in Healthy Elderly

    PubMed Central

    Kotyuk, Eszter; Duchek, Janet; Head, Denise; Szekely, Anna; Goate, Alison M.; Balota, David A.

    2015-01-01

    Association studies between the NEO five factor personality inventory and COMT rs4680 have focused on young adults and the results have been inconsistent. However, personality and cortical changes with age may put older adults in a more sensitive range for detecting a relationship. The present study examined associations of COMT rs4680 and personality in older adults. Genetic association analyses were carried out between the NEO and the targeted COMT rs4680 in a large, well-characterized sample of healthy, cognitively normal older adults (N = 616, mean age = 69.26 years). Three significant associations were found: participants with GG genotype showed lower mean scores on Neuroticism (p = 0.039) and higher scores on Agreeableness (p = 0.020) and Conscientiousness (p = 0.006) than participants with AA or AG genotypes. These results suggest that older adults with higher COMT enzymatic activity (GG), therefore lower dopamine level, have lower Neuroticism scores, and higher Agreeableness and Conscientiousness scores. This is consistent with a recent model of phasic and tonic dopamine release suggesting that even though GG genotype is associated with lower tonic dopamine release, the phasic release of dopamine might be optimal for a more adaptive personality profile. PMID:25960587

  8. Development of Computational Aeroacoustics Code for Jet Noise and Flow Prediction

    NASA Technical Reports Server (NTRS)

    Keith, Theo G., Jr.; Hixon, Duane R.

    2002-01-01

    Accurate prediction of jet fan and exhaust plume flow and noise generation and propagation is very important in developing advanced aircraft engines that will pass current and future noise regulations. In jet fan flows as well as exhaust plumes, two major sources of noise are present: large-scale, coherent instabilities and small-scale turbulent eddies. In previous work for the NASA Glenn Research Center, three strategies have been explored in an effort to computationally predict the noise radiation from supersonic jet exhaust plumes. In order from the least computationally expensive to the most, these are: 1) Linearized Euler equations (LEE). 2) Very Large Eddy Simulations (VLES). 3) Large Eddy Simulations (LES). The first method solves the linearized Euler equations (LEE). These equations are obtained by linearizing about a given mean flow and neglecting viscous effects. In this way, the noise from large-scale instabilities can be found for a given mean flow. The linearized Euler equations are computationally inexpensive, and have produced good noise results for supersonic jets where the large-scale instability noise dominates, as well as for the tone noise from a jet engine blade row. However, these linear equations do not predict the absolute magnitude of the noise; instead, only the relative magnitude is predicted. Also, the predicted disturbances do not modify the mean flow, removing a physical mechanism by which the amplitude of the disturbance may be controlled. Recent research on isolated airfoils indicates that this may not affect the solution greatly at low frequencies. The second method addresses some of the concerns raised by the LEE method. In this approach, called Very Large Eddy Simulation (VLES), the unsteady Reynolds-averaged Navier-Stokes equations are solved directly using a high-accuracy computational aeroacoustics numerical scheme. With the addition of a two-equation turbulence model and the use of a relatively

  9. Comparison of a Structured-LES and an Unstructured-DES Code for Predicting Combustion Instabilities in a Longitudinal Mode Rocket

    DTIC Science & Technology

    2014-12-01

    Harvazinski, Matt; Tally, Doug; Sankaran, Venke (Air Force Research Laboratory, Edwards)

  10. Predictions of bubbly flows in vertical pipes using two-fluid models in CFDS-FLOW3D code

    SciTech Connect

    Banas, A.O.; Carver, M.B.; Unrau, D.

    1995-09-01

    This paper reports the results of a preliminary study exploring the performance of two sets of two-fluid closure relationships applied to the simulation of turbulent air-water bubbly upflows through vertical pipes. Predictions obtained with the default CFDS-FLOW3D model for dispersed flows were compared with the predictions of a new model (based on the work of Lee), and with the experimental data of Liu. The new model, implemented in the CFDS-FLOW3D code, included additional source terms in the “standard” κ-ε transport equations for the liquid phase, as well as modified model coefficients and wall functions. All simulations were carried out in a 2-D axisymmetric format, collapsing the general multifluid framework of CFDS-FLOW3D to the two-fluid (air-water) case. The newly implemented model consistently improved predictions of radial-velocity profiles of both phases, but failed to accurately reproduce the experimental phase-distribution data. This shortcoming was traced to the neglect of anisotropic effects in the modelling of liquid-phase turbulence. In this sense, the present investigation should be considered the first step toward the ultimate goal of developing a theoretically sound and universal CFD-type two-fluid model for bubbly flows in channels.

  11. Prediction of Business Jet Airloads Using The Overflow Navier-Stokes Code

    NASA Technical Reports Server (NTRS)

    Bounajem, Elias; Buning, Pieter G.

    2001-01-01

    The objective of this work is to evaluate the application of Navier-Stokes computational fluid dynamics technology for the purpose of predicting off-design-condition airloads on a business jet configuration in the transonic regime. The NASA Navier-Stokes flow solver OVERFLOW, with its Chimera overset grid capability, choice of several numerical schemes, and convergence acceleration techniques, was selected for this work. A set of scripts compiled to reduce the time required for the grid generation process is described. Several turbulence models are evaluated in the presence of separated flow regions on the wing. Computed results are compared to available wind tunnel data for two Mach numbers and a range of angles of attack. Comparisons of wing surface pressure from numerical simulation and wind tunnel measurements show good agreement up to fairly high angles of attack.

  12. Facets and mechanisms of adaptive pain behavior: predictive regulation and action

    PubMed Central

    Morrison, India; Perini, Irene; Dunham, James

    2013-01-01

    Neural mechanisms underlying nociception and pain perception are considered to serve the ultimate goal of limiting tissue damage. However, since pain usually occurs in complex environments and situations that call for elaborate control over behavior, simple avoidance is insufficient to explain a range of mammalian pain responses, especially in the presence of competing goals. In this integrative review we propose a Predictive Regulation and Action (PRA) model of acute pain processing. It emphasizes evidence that the nervous system is organized to anticipate potential pain and to adjust behavior before the risk of tissue damage becomes critical. Regulatory processes occur on many levels, and can be dynamically influenced by local interactions or by modulation from other brain areas in the network. The PRA model centers on neural substrates supporting the predictive nature of pain processing, as well as on finely-calibrated yet versatile regulatory processes that ultimately affect behavior. We outline several operational categories of pain behavior, from spinally-mediated reflexes to adaptive voluntary action, situated at various neural levels. An implication is that neural processes that track potential tissue damage in terms of behavioral consequences are an integral part of pain perception. PMID:24348358

  13. Effects of protein conformation in docking: improved pose prediction through protein pocket adaptation

    NASA Astrophysics Data System (ADS)

    Jain, Ajay N.

    2009-06-01

    Computational methods for docking ligands have been shown to be remarkably dependent on precise protein conformation, where acceptable results in pose prediction have been generally possible only in the artificial case of re-docking a ligand into a protein binding site whose conformation was determined in the presence of the same ligand (the "cognate" docking problem). In such cases, on well curated protein/ligand complexes, accurate dockings can be returned as top-scoring over 75% of the time using tools such as Surflex-Dock. A critical application of docking in modeling for lead optimization requires accurate pose prediction for novel ligands, ranging from simple synthetic analogs to very different molecular scaffolds. Typical results for widely used programs in the "cross-docking case" (making use of a single fixed protein conformation) have rates closer to 20% success. By making use of protein conformations from multiple complexes, Surflex-Dock yields an average success rate of 61% across eight pharmaceutically relevant targets. Following docking, protein pocket adaptation and rescoring identifies single pose families that are correct an average of 67% of the time. Consideration of the best of two pose families (from alternate scoring regimes) yields a 75% mean success rate.

  14. Effects of protein conformation in docking: improved pose prediction through protein pocket adaptation.

    PubMed

    Jain, Ajay N

    2009-06-01

    Computational methods for docking ligands have been shown to be remarkably dependent on precise protein conformation, where acceptable results in pose prediction have been generally possible only in the artificial case of re-docking a ligand into a protein binding site whose conformation was determined in the presence of the same ligand (the "cognate" docking problem). In such cases, on well curated protein/ligand complexes, accurate dockings can be returned as top-scoring over 75% of the time using tools such as Surflex-Dock. A critical application of docking in modeling for lead optimization requires accurate pose prediction for novel ligands, ranging from simple synthetic analogs to very different molecular scaffolds. Typical results for widely used programs in the "cross-docking case" (making use of a single fixed protein conformation) have rates closer to 20% success. By making use of protein conformations from multiple complexes, Surflex-Dock yields an average success rate of 61% across eight pharmaceutically relevant targets. Following docking, protein pocket adaptation and rescoring identifies single pose families that are correct an average of 67% of the time. Consideration of the best of two pose families (from alternate scoring regimes) yields a 75% mean success rate.

  15. Eye-pupil displacement and prediction: effects on residual wavefront in adaptive optics retinal imaging

    PubMed Central

    Kulcsár, Caroline; Raynaud, Henri-François; Garcia-Rissmann, Aurea

    2016-01-01

    This paper studies the effect of pupil displacements on the best achievable performance of retinal imaging adaptive optics (AO) systems, using 52 trajectories of horizontal and vertical displacements sampled at 80 Hz by a pupil tracker (PT) device on 13 different subjects. This effect is quantified in the form of the minimal root mean square (rms) of the residual phase affecting image formation, as a function of the delay between PT measurement and wavefront correction. It is shown that simple dynamic models identified from data can be used to predict horizontal and vertical pupil displacements with greater accuracy (in terms of average rms) over short-term time horizons. The potential impact of these improvements on residual wavefront rms is investigated. These results make it possible to quantify the part of the disturbances corrected by retinal imaging AO systems that is caused by relative displacements of an otherwise fixed or slowly varying subject-dependent aberration. They also suggest that prediction has a limited impact on wavefront rms and that taking into account PT measurements in real time improves the performance of AO retinal imaging systems. PMID:27231607
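
    One simple flavor of a "dynamic model identified from data" is an autoregressive predictor fit by least squares. The sketch below fits an AR(2) model to a synthetic 80 Hz displacement trace and makes a one-step-ahead prediction; the trace and the model order are assumptions for illustration, not the authors' identified models.

      import numpy as np

      def fit_ar2(x):
          """Least-squares AR(2) fit: x[t] ~ a1*x[t-1] + a2*x[t-2]."""
          A = np.column_stack([x[1:-1], x[:-2]])
          return np.linalg.lstsq(A, x[2:], rcond=None)[0]

      def predict_next(x, coef):
          return coef[0] * x[-1] + coef[1] * x[-2]

      # Hypothetical 80 Hz horizontal pupil trace (mm); real traces come from a PT.
      rng = np.random.default_rng(0)
      t = np.arange(400) / 80.0
      x = 0.2 * np.sin(2 * np.pi * 0.5 * t) + 0.01 * rng.standard_normal(400)
      coef = fit_ar2(x)
      print(predict_next(x, coef), x[-1])   # predicted next sample vs last sample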

  16. Eye-pupil displacement and prediction: effects on residual wavefront in adaptive optics retinal imaging.

    PubMed

    Kulcsár, Caroline; Raynaud, Henri-François; Garcia-Rissmann, Aurea

    2016-03-01

    This paper studies the effect of pupil displacements on the best achievable performance of retinal imaging adaptive optics (AO) systems, using 52 trajectories of horizontal and vertical displacements sampled at 80 Hz by a pupil tracker (PT) device on 13 different subjects. This effect is quantified in the form of the minimal root mean square (rms) of the residual phase affecting image formation, as a function of the delay between PT measurement and wavefront correction. It is shown that simple dynamic models identified from data can be used to predict horizontal and vertical pupil displacements with greater accuracy (in terms of average rms) over short-term time horizons. The potential impact of these improvements on residual wavefront rms is investigated. These results make it possible to quantify the part of the disturbances corrected by retinal imaging AO systems that is caused by relative displacements of an otherwise fixed or slowly varying subject-dependent aberration. They also suggest that prediction has a limited impact on wavefront rms and that taking into account PT measurements in real time improves the performance of AO retinal imaging systems.

  17. Reading the Second Code: Mapping Epigenomes to Understand Plant Growth, Development, and Adaptation to the Environment

    PubMed Central

    2012-01-01

    We have entered a new era in agricultural and biomedical science made possible by remarkable advances in DNA sequencing technologies. The complete sequence of an individual’s set of chromosomes (collectively, its genome) provides a primary genetic code for what makes that individual unique, just as the contents of every personal computer reflect the unique attributes of its owner. But a second code, composed of “epigenetic” layers of information, affects the accessibility of the stored information and the execution of specific tasks. Nature’s second code is enigmatic and must be deciphered if we are to fully understand and optimize the genetic potential of crop plants. The goal of the Epigenomics of Plants International Consortium is to crack this second code, and ultimately master its control, to help catalyze a new green revolution. PMID:22751210

  18. Performance Analysis of MIMO-STBC Systems with Higher Coding Rate Using Adaptive Semiblind Channel Estimation Scheme

    PubMed Central

    Kumar, Ravi

    2014-01-01

    The semiblind channel estimation method provides the best trade-off in terms of bandwidth overhead, computational complexity, and latency. Multiple input multiple output (MIMO) systems yield higher data rates and longer transmit range without any requirement for additional bandwidth or transmit power. This paper presents a detailed analysis of diversity coding techniques using MIMO antenna systems. Different space time block code (STBC) schemes have been explored and analyzed with the proposed higher code rate. STBCs with higher code rates have been simulated for different modulation schemes in the MATLAB environment, and the simulated results have been compared in the semiblind environment, which shows improvement even in highly correlated antenna arrays and results very close to the case in which channel state information (CSI) is known. PMID:24688379

  19. Performance analysis of MIMO-STBC systems with higher coding rate using adaptive semiblind channel estimation scheme.

    PubMed

    Kumar, Ravi; Saxena, Rajiv

    2014-01-01

    The semiblind channel estimation method provides the best trade-off in terms of bandwidth overhead, computational complexity, and latency. Multiple input multiple output (MIMO) systems yield higher data rates and longer transmit range without any requirement for additional bandwidth or transmit power. This paper presents a detailed analysis of diversity coding techniques using MIMO antenna systems. Different space time block code (STBC) schemes have been explored and analyzed with the proposed higher code rate. STBCs with higher code rates have been simulated for different modulation schemes in the MATLAB environment, and the simulated results have been compared in the semiblind environment, which shows improvement even in highly correlated antenna arrays and results very close to the case in which channel state information (CSI) is known.

  20. Positive predictive value of diagnosis coding for hemolytic anemias in the Danish National Patient Register

    PubMed Central

    Hansen, Dennis Lund; Overgaard, Ulrik Malthe; Pedersen, Lars; Frederiksen, Henrik

    2016-01-01

    Purpose The nationwide public health registers in Denmark provide a unique opportunity for evaluation of disease-associated morbidity if the positive predictive values (PPVs) of the primary diagnosis are known. The aim of this study was to evaluate the predictive values of hemolytic anemias registered in the Danish National Patient Register. Patients and methods All patients with a first-ever diagnosis of hemolytic anemia from either specialist outpatient clinic contact or inpatient admission at Odense University Hospital from January 1994 through December 2011 were considered for inclusion. Patients with a mechanical reason for hemolysis, such as an artificial heart valve, and patients with vitamin-B12 or folic acid deficiency were excluded. Results We identified 412 eligible patients: 249 with a congenital hemolytic anemia diagnosis and 163 with an acquired hemolytic anemia diagnosis. In all, hemolysis was confirmed in 359 patients, yielding an overall PPV of 87.1% (95% confidence interval [CI]: 83.5%–90.2%). A diagnosis could be established in 392 patients, of whom 355 patients had a hemolytic diagnosis. Diagnosis was confirmed in 197 of the 249 patients with congenital hemolytic anemia, yielding a PPV of 79.1% (95% CI: 73.5%–84.0%). Diagnosis of acquired hemolytic anemia could be confirmed in 136 of the 163 patients, resulting in a PPV of 83.4% (95% CI: 76.8%–88.8%). For hemoglobinopathy the PPV was 84.1% (95% CI: 77.4%–89.4%), for hereditary spherocytosis the PPV was 80.6% (95% CI: 69.5%–88.9%), and for autoimmune hemolytic anemia the PPV was 78.4% (95% CI: 70.4%–85.0%). Conclusion The PPV of hemolytic anemias was moderately high. The PPVs were comparable across the three main categories: overall hemolysis, congenital hemolytic anemia, and acquired hemolytic anemia. PMID:27445504
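
    For reference, the overall PPV and an approximate confidence interval follow directly from the confirmed/total counts. The sketch below uses the abstract's overall figures with a normal-approximation interval, so its bounds differ slightly from the exact interval the paper reports.

      import math

      def ppv_with_ci(confirmed: int, total: int, z: float = 1.96):
          """PPV with a normal-approximation 95% CI (the paper likely used an
          exact binomial interval, so its bounds differ slightly)."""
          p = confirmed / total
          se = math.sqrt(p * (1 - p) / total)
          return p, p - z * se, p + z * se

      # Overall figures from the abstract: 359 of 412 cases confirmed.
      p, lo, hi = ppv_with_ci(359, 412)
      print(f"PPV = {p:.1%} (approx. 95% CI {lo:.1%}-{hi:.1%})")  # 87.1%, ~83.9%-90.4%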

  1. Signalign: An Ontology of DNA as Signal for Comparative Gene Structure Prediction Using Information-Coding-and-Processing Techniques.

    PubMed

    Yu, Ning; Guo, Xuan; Gu, Feng; Pan, Yi

    2016-03-01

    Conventional character-analysis-based techniques in genome analysis manifest three main shortcomings: inefficiency, inflexibility, and incompatibility. In our previous research, a general framework called DNA As X was proposed for character-analysis-free techniques to overcome these shortcomings, where X is an intermediate such as digit, code, signal, vector, tree, graph, or network. In this paper, we further implement an ontology of DNA As Signal by designing a tool named Signalign for comparative gene structure analysis, in which DNA sequences are converted into signal series, processed by a modified method of dynamic time warping, and measured by signal-to-noise ratio (SNR). The ontology of DNA As Signal integrates the principles and concepts of other disciplines, including information coding theory and signal processing, into sequence analysis and processing. Compared with conventional character-analysis-based methods, Signalign not only has equivalent or superior performance but also enriches the tools and knowledge library of computational biology by extending the domain from characters/strings to diverse areas. The evaluation results validate the success of the character-analysis-free technique for improved performance in comparative gene structure prediction.
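
    A minimal rendition of the "DNA as signal" idea: map bases to numeric levels and compare two sequences with classical dynamic time warping. The level mapping is an arbitrary placeholder, and the paper's modified DTW and SNR measure are not reproduced here.

      import numpy as np

      LEVEL = {"A": 0.0, "C": 1.0, "G": 2.0, "T": 3.0}   # illustrative mapping

      def dna_to_signal(seq):
          return np.array([LEVEL[b] for b in seq])

      def dtw_distance(s, t):
          """Classical dynamic time warping distance between two signals."""
          n, m = len(s), len(t)
          D = np.full((n + 1, m + 1), np.inf)
          D[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  cost = abs(s[i - 1] - t[j - 1])
                  D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
          return D[n, m]

      # The extra G aligns with the existing G at zero cost, so distance is 0.
      print(dtw_distance(dna_to_signal("ACGGT"), dna_to_signal("ACGT")))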

  2. How can a recurrent neurodynamic predictive coding model cope with fluctuation in temporal patterns? Robotic experiments on imitative interaction.

    PubMed

    Ahmadi, Ahmadreza; Tani, Jun

    2017-03-21

    The current paper examines how a recurrent neural network (RNN) model using a dynamic predictive coding scheme can cope with fluctuations in temporal patterns through generalization in learning. The conjecture driving the present inquiry is that an RNN model with multiple timescales (MTRNN) learns by extracting patterns of change from observed temporal patterns, developing an internal dynamic structure such that variance in initial internal states accounts for modulations in corresponding observed patterns. We trained an MTRNN with low-dimensional temporal patterns and assessed performance on an imitation task employing these patterns. Analysis reveals that imitating fluctuated patterns consists in inferring optimal internal states by error regression. The model was then tested through humanoid robotic experiments requiring imitative interaction with human subjects. Results show that spontaneous and lively interaction can be achieved as the model successfully copes with fluctuations naturally occurring in human movement patterns.
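
    The error regression step can be illustrated on a toy fixed recurrent network: given an observed output pattern, search for the initial internal state that makes the network reproduce it by descending the prediction error (here with a crude finite-difference gradient). The network, sizes, and learning rate are all invented for illustration, and convergence to the exact generating state is not guaranteed.

      import numpy as np

      rng = np.random.default_rng(1)
      W = 0.5 * rng.standard_normal((4, 4))   # fixed ("trained") recurrent weights
      U = rng.standard_normal((1, 4))         # fixed read-out weights

      def rollout(h0, steps=20):
          """Run the fixed network from initial state h0; return its outputs."""
          h, out = h0, []
          for _ in range(steps):
              h = np.tanh(W @ h)
              out.append(float(U @ h))
          return np.array(out)

      def infer_initial_state(target, iters=300, lr=0.5, eps=1e-4):
          """Error regression: adjust the initial state so the rollout
          matches an observed pattern (finite-difference gradient descent)."""
          h0 = np.zeros(4)
          for _ in range(iters):
              base = np.mean((rollout(h0) - target) ** 2)
              grad = np.zeros(4)
              for k in range(4):
                  d = np.zeros(4)
                  d[k] = eps
                  grad[k] = (np.mean((rollout(h0 + d) - target) ** 2) - base) / eps
              h0 -= lr * grad
          return h0

      target = rollout(rng.standard_normal(4))     # pattern from an unknown state
      h0 = infer_initial_state(target)
      print(np.mean((rollout(h0) - target) ** 2))  # error should shrink markedly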

  3. A computational model for the prediction of jet entrainment in the vicinity of nozzle boattails (the BOAT code)

    NASA Technical Reports Server (NTRS)

    Dash, S. M.; Pergament, H. S.

    1978-01-01

    The development of a computational model (BOAT) for calculating nearfield jet entrainment, and its incorporation in an existing methodology for the prediction of nozzle boattail pressures, is discussed. The model accounts for the detailed turbulence and thermochemical processes occurring in the mixing layer formed between a jet exhaust and surrounding external stream while interfacing with the inviscid exhaust and external flowfield regions in an overlaid, interactive manner. The ability of the BOAT model to analyze simple free shear flows is assessed by comparisons with fundamental laboratory data. The overlaid procedure for incorporating variable pressures into BOAT and the entrainment correction employed to yield an effective plume boundary for the inviscid external flow are demonstrated. This is accomplished via application of BOAT in conjunction with the codes comprising the NASA/LRC patched viscous/inviscid methodology for determining nozzle boattail drag for subsonic/transonic external flows.

  4. Adaptation of Sediment Connectivity Index for Swedish catchments and application for flood prediction of roads

    NASA Astrophysics Data System (ADS)

    Cantone, Carolina; Kalantari, Zahra; Cavalli, Marco; Crema, Stefano

    2016-04-01

    Climate change is predicted to increase precipitation intensities and the occurrence of extreme rainfall events in the near future. Scandinavia has been identified as one of the most sensitive regions in Europe to such changes; therefore, an increase in the risk of flooding, landslides, and soil erosion is to be expected also in Sweden. An increase in the occurrence of extreme weather events will impose greater strain on the built environment and major transport infrastructures such as roads and railways. This research aimed to identify the risk of flooding at road-stream intersections, crucial locations where water and debris can accumulate and cause failures of the existing drainage facilities. Two regions in southwest Sweden affected by an extreme rainfall event in August 2014 were used for calibrating and testing a statistical flood prediction model. A set of Physical Catchment Descriptors (PCDs) including road and catchment characteristics was identified for the modelling. Moreover, a GIS-based topographic Index of Sediment Connectivity (IC) was used as a PCD. The novelty of this study lies in the adaptation of IC to describe sediment connectivity in lowland areas, taking into account the contributions of soil type, land use, and different patterns of precipitation during the event. A weighting factor for IC was calculated by estimating runoff with the SCS Curve Number method, assuming a constant value of precipitation for a given time period corresponding to the critical event. The Digital Elevation Model of the study site was reconditioned at the drainage facility locations to account for the real flow path in the analysis. These modifications highlighted the role of rainfall patterns and surface runoff in modelling sediment delivery in lowland areas. Moreover, it was observed that integrating IC into the statistical prediction model increased its accuracy and performance. After the calibration procedure in one of the study areas, the model was
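
    The runoff used to weight IC follows the standard SCS Curve Number formulation, which is worth stating concretely; the study's adaptation lies in using this runoff as the weighting factor, not in the formula itself. A sketch in millimetre units:

      def scs_runoff_mm(p_mm: float, cn: float, ia_ratio: float = 0.2) -> float:
          """SCS Curve Number direct runoff (mm), standard formulation."""
          s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
          ia = ia_ratio * s                 # initial abstraction
          if p_mm <= ia:
              return 0.0
          return (p_mm - ia) ** 2 / (p_mm - ia + s)

      print(scs_runoff_mm(p_mm=60.0, cn=80.0))  # ~20 mm of direct runoff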

  5. TFaNS Tone Fan Noise Design/Prediction System. Volume 1; System Description, CUP3D Technical Documentation and Manual for Code Developers

    NASA Technical Reports Server (NTRS)

    Topol, David A.

    1999-01-01

    TFaNS is the Tone Fan Noise Design/Prediction System developed by Pratt & Whitney under contract to NASA Lewis (presently NASA Glenn). The purpose of this system is to predict tone noise emanating from a fan stage including the effects of reflection and transmission by the rotor and stator and by the duct inlet and nozzle. These effects have been added to an existing annular duct/isolated stator noise prediction capability. TFaNS consists of: The codes that compute the acoustic properties (reflection and transmission coefficients) of the various elements and write them to files. Cup3D: Fan Noise Coupling Code that reads these files, solves the coupling problem, and outputs the desired noise predictions. AWAKEN: CFD/Measured Wake Postprocessor which reformats CFD wake predictions and/or measured wake data so it can be used by the system. This volume of the report provides technical background for TFaNS including the organization of the system and CUP3D technical documentation. This document also provides information for code developers who must write Acoustic Property Files in the CUP3D format. This report is divided into three volumes: Volume I: System Description, CUP3D Technical Documentation, and Manual for Code Developers; Volume II: User's Manual, TFaNS Vers. 1.4; Volume III: Evaluation of System Codes.

  6. Dispositional Mindfulness Predicts Adaptive Affective Responses to Health Messages and Increased Exercise Motivation.

    PubMed

    Kang, Yoona; O'Donnell, Matthew Brook; Strecher, Victor J; Falk, Emily B

    2017-04-01

    Feelings can shape how people respond to persuasive messages. In health communication, adaptive affective responses to potentially threatening messages constitute one key to intervention success. The current study tested dispositional mindfulness, characterized by awareness of the present moment, as a predictor of adaptive affective responses to potentially threatening health messages and of desirable subsequent health outcomes. Both general and discrete negative affective states (i.e., shame) were examined in relation to mindfulness and intervention success. Individuals (n=67) who reported less than 195 weekly minutes of exercise were recruited. At baseline, participants' dispositional mindfulness and exercise outcomes were assessed, including self-reported exercise motivation and physical activity. A week later, all participants were presented with potentially threatening and self-relevant health messages encouraging physical activity and discouraging a sedentary lifestyle, and their subsequent affective responses and exercise motivation were assessed. Approximately one month later, changes in exercise motivation and physical activity were assessed again. In addition, participants' level of daily physical activity was monitored by a wrist-worn accelerometer throughout the entire duration of the study. Higher dispositional mindfulness predicted greater increases in exercise motivation one month after the intervention. Importantly, this effect was fully mediated by lower negative affect, and shame specifically, in response to potentially threatening health messages among highly mindful individuals. Baseline mindfulness was also associated with increased self-reported vigorous activity, but not with daily physical activity as assessed by accelerometers. These findings suggest potential benefits of considering mindfulness as an active individual difference variable in theories of affective processing and health communication.

  7. A Parallel Ocean Model With Adaptive Mesh Refinement Capability For Global Ocean Prediction

    SciTech Connect

    Herrnstein, Aaron R.

    2005-12-01

    An ocean model with adaptive mesh refinement (AMR) capability is presented for simulating ocean circulation on decade time scales. The model closely resembles the LLNL ocean general circulation model with some components incorporated from other well known ocean models when appropriate. Spatial components are discretized using finite differences on a staggered grid where tracer and pressure variables are defined at cell centers and velocities at cell vertices (B-grid). Horizontal motion is modeled explicitly with leapfrog and Euler forward-backward time integration, and vertical motion is modeled semi-implicitly. New AMR strategies are presented for horizontal refinement on a B-grid, leapfrog time integration, and time integration of coupled systems with unequal time steps. These AMR capabilities are added to the LLNL software package SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) and validated with standard benchmark tests. The ocean model is built on top of the amended SAMRAI library. The resulting model has the capability to dynamically increase resolution in localized areas of the domain. Limited basin tests are conducted using various refinement criteria and produce convergence trends in the model solution as refinement is increased. Carbon sequestration simulations are performed on decade time scales in domains the size of the North Atlantic and the global ocean. A suggestion is given for refinement criteria in such simulations. AMR predicts maximum pH changes and increases in CO2 concentration near the injection sites that are virtually unattainable with a uniform high resolution due to extremely long run times. Fine scale details near the injection sites are achieved by AMR with shorter run times than the finest uniform resolution tested despite the need for enhanced parallel performance. The North Atlantic simulations show a reduction in passive tracer errors when AMR is applied instead of a uniform coarse resolution. No

  8. Non-tables look-up search algorithm for efficient H.264/AVC context-based adaptive variable length coding decoding

    NASA Astrophysics Data System (ADS)

    Han, Yishi; Luo, Zhixiao; Wang, Jianhua; Min, Zhixuan; Qin, Xinyu; Sun, Yunlong

    2014-09-01

    In general, context-based adaptive variable length coding (CAVLC) decoding in the H.264/AVC standard requires frequent access to unstructured variable length coding tables (VLCTs), and significant memory access is consumed. Heavy memory access causes high power consumption and time delays, which are serious problems for applications in portable multimedia devices. We propose a method for high-efficiency CAVLC decoding that uses a program instead of all the VLCTs. The decoded codeword can be obtained without any table look-up or memory access. The experimental results show that the proposed algorithm achieves 100% memory access saving and 40% decoding time saving without degrading video quality. Additionally, the proposed algorithm shows better performance than conventional CAVLC decoding approaches, such as table look-up by sequential search, table look-up by binary search, Moon's method, and Kim's method.

  9. Organizational Changes to Thyroid Regulation in Alligator mississippiensis: Evidence for Predictive Adaptive Responses

    PubMed Central

    Boggs, Ashley S. P.; Lowers, Russell H.; Cloy-McCoy, Jessica A.; Guillette, Louis J.

    2013-01-01

    During embryonic development, organisms are sensitive to changes in thyroid hormone signaling which can reset the hypothalamic-pituitary-thyroid axis. It has been hypothesized that this developmental programming is a ‘predictive adaptive response’, a physiological adjustment in accordance with the embryonic environment that will best aid an individual's survival in a similar postnatal environment. When the embryonic environment is a poor predictor of the external environment, the developmental changes are no longer adaptive and can result in disease states. We predicted that endocrine disrupting chemicals (EDCs) and environmentally-based iodide imbalance could lead to developmental changes to the thyroid axis. To explore whether iodide or EDCs could alter developmental programming, we collected American alligator eggs from an estuarine environment with high iodide availability and elevated thyroid-specific EDCs, a freshwater environment contaminated with elevated agriculturally derived EDCs, and a reference freshwater environment. We then incubated them under identical conditions. We examined plasma thyroxine and triiodothyronine concentrations, thyroid gland histology, plasma inorganic iodide, and somatic growth at one week (before external nutrition) and ten months after hatching (on identical diets). Neonates from the estuarine environment were