Science.gov

Sample records for adaptive predictive coding

  1. Adaptive predictive image coding using local characteristics

    NASA Astrophysics Data System (ADS)

    Hsieh, C. H.; Lu, P. C.; Liou, W. G.

    1989-12-01

    The paper presents an efficient adaptive predictive coding method using the local characteristics of images. In this method, three coding schemes, namely, mean, subsampling combined with fixed DPCM, and ADPCM/PCM, are used, and one of these is chosen adaptively based on the local characteristics of images. The prediction parameters of the two-dimensional linear predictor in the ADPCM/PCM are extracted on a block-by-block basis. Simulation results show that the proposed method is effective in reducing the slope overload distortion and the granular noise at low bit rates, and thus it can improve the visual quality of reconstructed images.
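
    The block-adaptive two-dimensional prediction described above can be sketched in a few lines. The Python sketch below assumes a three-pixel causal support (west, north, and north-west neighbors) fitted per block by least squares; it illustrates the idea and is not the authors' implementation.

        import numpy as np

        def fit_block_predictor(block):
            # Least-squares fit of x[i,j] ~ a*x[i,j-1] + b*x[i-1,j] + c*x[i-1,j-1],
            # estimated independently for each image block.
            block = np.asarray(block, dtype=float)
            samples, targets = [], []
            for i in range(1, block.shape[0]):
                for j in range(1, block.shape[1]):
                    samples.append([block[i, j-1], block[i-1, j], block[i-1, j-1]])
                    targets.append(block[i, j])
            coef, *_ = np.linalg.lstsq(np.array(samples), np.array(targets), rcond=None)
            return coef

        def prediction_residual(block, coef):
            # Prediction error over the block interior: the signal a
            # DPCM stage would quantize and transmit.
            block = np.asarray(block, dtype=float)
            pred = (coef[0] * block[1:, :-1]      # west neighbor
                    + coef[1] * block[:-1, 1:]    # north neighbor
                    + coef[2] * block[:-1, :-1])  # north-west neighbor
            return block[1:, 1:] - pred

    In the spirit of the three-scheme switch described in the abstract, a mean or subsampling coder would be chosen instead for blocks whose residual energy indicates little local structure.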

  2. More About Vector Adaptive/Predictive Coding Of Speech

    NASA Technical Reports Server (NTRS)

    Jedrey, Thomas C.; Gersho, Allen

    1992-01-01

    Report presents additional information about digital speech-encoding and -decoding system described in "Vector Adaptive/Predictive Encoding of Speech" (NPO-17230). Summarizes development of vector adaptive/predictive coding (VAPC) system and describes basic functions of algorithm. Describes refinements introduced enabling receiver to cope with errors. VAPC algorithm implemented in integrated-circuit coding/decoding processors (codecs). VAPC and other codecs tested under variety of operating conditions. Tests designed to reveal effects of various background quiet and noisy environments and of poor telephone equipment. VAPC found competitive with and, in some respects, superior to other 4.8-kb/s codecs and other codecs of similar complexity.

  3. Hierarchical prediction and context adaptive coding for lossless color image compression.

    PubMed

    Kim, Seyun; Cho, Nam Ik

    2014-01-01

    This paper presents a new lossless color image compression algorithm based on hierarchical prediction and context-adaptive arithmetic coding. For the lossless compression of an RGB image, the image is first decorrelated by a reversible color transform, and the Y component is then encoded by a conventional lossless grayscale image compression method. For encoding the chrominance images, we develop a hierarchical scheme that enables the use of upper, left, and lower pixels for the pixel prediction, whereas the conventional raster-scan prediction methods use only upper and left pixels. An appropriate context model for the prediction error is also defined, and arithmetic coding is applied to the error signal corresponding to each context. For several sets of images, it is shown that the proposed method further reduces the bit rates compared with JPEG2000 and JPEG-XR.
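
    A sketch of the hierarchical idea: if the even rows of a chrominance plane are encoded first, a pixel on an odd row can be predicted from its upper, left, and lower neighbors, the last of which a raster scan never has available. The two-stage scan and the simple averaging predictor below are illustrative assumptions, not the paper's exact predictor.

        import numpy as np

        def predict_odd_rows(plane):
            # Predict interior pixels on odd rows from neighbors that are
            # already decoded in a two-stage scan: upper (i-1, j),
            # lower (i+1, j), and left (i, j-1).
            plane = np.asarray(plane, dtype=float)
            pred = np.zeros_like(plane)
            for i in range(1, plane.shape[0] - 1, 2):    # odd rows only
                for j in range(1, plane.shape[1]):
                    pred[i, j] = (plane[i-1, j] + plane[i+1, j]
                                  + plane[i, j-1]) / 3.0
            return pred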

  4. Adaptation and visual coding

    PubMed Central

    Webster, Michael A.

    2011-01-01

    Visual coding is a highly dynamic process that continuously adapts to the current viewing context. The perceptual changes that result from adaptation to recently viewed stimuli remain a powerful and popular tool for analyzing sensory mechanisms and plasticity. Over the last decade, the footprints of this adaptation have been tracked to both higher and lower levels of the visual pathway and over a wider range of timescales, revealing that visual processing is much more adaptable than previously thought. This work has also revealed that the pattern of aftereffects is similar across many stimulus dimensions, pointing to common coding principles in which adaptation plays a central role. However, why visual coding adapts has yet to be fully answered. PMID:21602298

  5. Adaptive differential pulse-code modulation with adaptive bit allocation

    NASA Astrophysics Data System (ADS)

    Frangoulis, E. D.; Yoshida, K.; Turner, L. F.

    1984-08-01

    Studies have been conducted on the possibility of obtaining good-quality speech at data rates in the range from 16 kbit/s to 32 kbit/s. The techniques considered are related to adaptive predictive coding (APC) and adaptive differential pulse-code modulation (ADPCM). At 16 kbit/s, adaptive transform coding (ATC) has also been used. The present investigation is concerned with a new method of speech coding. The described method employs adaptive bit allocation, similar to that used in adaptive transform coding, together with adaptive differential pulse-code modulation employing first-order prediction. The new method aims to improve speech quality over that obtainable with conventional ADPCM employing a fourth-order predictor. Attention is given to the ADPCM-AB system, the design of a subjective test, and the application of switched preemphasis to ADPCM.
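
    The bit-allocation rule borrowed from adaptive transform coding has a compact classical form: each band receives the average rate plus half the base-2 log of the ratio of its variance to the geometric mean of all variances. Below is a minimal sketch; the clipping and rounding policy is an assumption, and rounding means a final pass may be needed to hit the bit budget exactly.

        import numpy as np

        def allocate_bits(variances, total_bits):
            # Classic ATC-style rule: b_k = B/N + 0.5*log2(var_k / geometric mean).
            # Variances must be positive.
            v = np.asarray(variances, dtype=float)
            geo_mean = np.exp(np.mean(np.log(v)))
            bits = total_bits / len(v) + 0.5 * np.log2(v / geo_mean)
            return np.maximum(0, np.round(bits)).astype(int)

        print(allocate_bits([25.0, 4.0, 1.0, 0.25], 16))  # higher-variance bands get more bits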

  6. Telescope Adaptive Optics Code

    2005-07-28

    The Telescope AO Code has general adaptive optics capabilities plus specialized models for three telescopes with either adaptive optics or active optics systems. It has the capability to generate either single-layer or distributed Kolmogorov turbulence phase screens using the FFT. Missing low-order spatial frequencies are added using the Karhunen-Loeve expansion. The phase structure curve is extremely close to the theoretical. Secondly, it has the capability to simulate adaptive optics control systems. The default parameters are those of the Keck II adaptive optics system. Thirdly, it has a general wave optics capability to model the science camera halo due to scintillation from atmospheric turbulence and the telescope optics. Although this capability was implemented for the Gemini telescopes, the only default parameter specific to the Gemini telescopes is the primary mirror diameter. Finally, it has a model for the LSST active optics alignment strategy. This last model is highly specific to the LSST.

  7. Vector Adaptive/Predictive Encoding Of Speech

    NASA Technical Reports Server (NTRS)

    Chen, Juin-Hwey; Gersho, Allen

    1989-01-01

    Vector adaptive/predictive technique for digital encoding of speech signals yields decoded speech of very good quality after transmission at coding rate of 9.6 kb/s and of reasonably good quality at 4.8 kb/s. Requires 3 to 4 million multiplications and additions per second. Combines advantages of adaptive/predictive coding with those of code-excited linear prediction, which yields speech of high quality but requires 600 million multiplications and additions per second at encoding rate of 4.8 kb/s. Vector adaptive/predictive coding technique bridges gaps in performance and complexity between adaptive/predictive coding and code-excited linear prediction.

  8. Predictive coding of multisensory timing

    PubMed Central

    Shi, Zhuanghua; Burr, David

    2016-01-01

    The sense of time is foundational for perception and action, yet it frequently departs significantly from physical time. In the paper we review recent progress on temporal contextual effects, multisensory temporal integration, temporal recalibration, and related computational models. We suggest that subjective time arises from minimizing prediction errors and adaptive recalibration, which can be unified in the framework of predictive coding, a framework rooted in Helmholtz’s ‘perception as inference’. PMID:27695705

  9. Aeroacoustic Prediction Codes

    NASA Technical Reports Server (NTRS)

    Gliebe, P; Mani, R.; Shin, H.; Mitchell, B.; Ashford, G.; Salamah, S.; Connell, S.; Huff, Dennis (Technical Monitor)

    2000-01-01

    This report describes work performed on Contract NAS3-27720 AoI 13 as part of the NASA Advanced Subsonic Transport (AST) Noise Reduction Technology effort. Computer codes were developed to provide quantitative prediction, design, and analysis capability for several aircraft engine noise sources. The objective was to provide improved, physics-based tools for exploration of noise-reduction concepts and understanding of experimental results. Methods and codes focused on fan broadband and 'buzz saw' noise and on low-emissions combustor noise, and complement work done by other contractors under the NASA AST program to develop methods and codes for fan harmonic tone noise and jet noise. The methods and codes developed and reported herein employ a wide range of approaches, from the strictly empirical to the completely computational, with some being semiempirical, analytical, and/or analytical/computational. Emphasis was on capturing the essential physics while still considering method or code utility as a practical design and analysis tool for everyday engineering use. Codes and prediction models were developed for: (1) an improved empirical correlation model for fan rotor exit flow mean and turbulence properties, for use in predicting broadband noise generated by rotor exit flow turbulence interaction with downstream stator vanes; (2) fan broadband noise models for rotor and stator/turbulence interaction sources including 3D effects, noncompact-source effects, directivity modeling, and extensions to the rotor supersonic tip-speed regime; (3) fan multiple-pure-tone in-duct sound pressure prediction methodology based on computational fluid dynamics (CFD) analysis; and (4) low-emissions combustor prediction methodology and computer code based on CFD and actuator disk theory. In addition, the relative importance of dipole and quadrupole source mechanisms was studied using direct CFD source computation for a simple cascade/gust interaction problem, and an empirical combustor

  10. Driver Code for Adaptive Optics

    NASA Technical Reports Server (NTRS)

    Rao, Shanti

    2007-01-01

    A special-purpose computer code for a deformable-mirror adaptive-optics control system transmits pixel-registered control from (1) a personal computer running software that generates the control data to (2) a circuit board with 128 digital-to-analog converters (DACs) that generate voltages to drive the deformable-mirror actuators. This program reads control-voltage codes from a text file, then sends them, via the computer's parallel port, to a circuit board with four AD5535 (or equivalent) chips. Whereas a similar prior computer program was capable of transmitting data to only one chip at a time, this program can send data to four chips simultaneously. This program is in the form of C-language code that can be compiled and linked into an adaptive-optics software system. The program as supplied includes source code for integration into the adaptive-optics software, documentation, and a component that provides a demonstration of loading DAC codes from a text file. On a standard Windows desktop computer, the software can update 128 channels in 10 ms. On Real-Time Linux with a digital I/O card, the software can update 1024 channels (8 boards in parallel) every 8 ms.

  11. AEST: Adaptive Eigenvalue Stability Code

    NASA Astrophysics Data System (ADS)

    Zheng, L.-J.; Kotschenreuther, M.; Waelbroeck, F.; van Dam, J. W.; Berk, H.

    2002-11-01

    An adaptive eigenvalue linear stability code is developed. The aim is, on one hand, to include non-ideal MHD effects in the global MHD stability calculation for both low- and high-n modes and, on the other hand, to resolve the numerical difficulty involving the MHD singularity on the rational surfaces at marginal stability. Our code follows parts of the philosophy of DCON by abandoning relaxation methods based on radial finite element expansion in favor of an efficient shooting procedure with adaptive gridding. The δW criterion is replaced by the shooting procedure and a subsequent matrix eigenvalue problem. Since the technique of expanding a general solution into a summation of independent solutions is employed, the rank of the matrices involved is only a few hundred. This makes it easier to solve the eigenvalue problem with non-ideal MHD effects, such as FLR or even full kinetic effects, as well as plasma rotation, taken into account. To include kinetic effects, the approach of solving for the distribution function as a local eigenvalue ω problem, as in the GS2 code, will be employed in the future. Comparison of the ideal MHD version of the code with DCON, PEST, and GATO will be discussed. The non-ideal MHD version of the code will be employed, as an application, to study transport-barrier physics in tokamak discharges.

  12. Dopamine reward prediction error coding.

    PubMed

    Schultz, Wolfram

    2016-03-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards, an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.
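
    The error signal described here is the same quantity that drives temporal-difference learning, and a toy update makes the three cases (positive, zero, and negative error) concrete. The reward stream and learning rate below are invented for illustration:

        def reward_prediction_error(received, predicted):
            # Dopamine-like signal: positive when reward exceeds the
            # prediction, zero when fully predicted, negative otherwise.
            return received - predicted

        value, alpha = 0.0, 0.1          # hypothetical prediction and learning rate
        for reward in [1.0, 1.0, 0.0, 1.0]:
            delta = reward_prediction_error(reward, value)
            value += alpha * delta       # prediction drifts toward received rewards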

  13. Dopamine reward prediction error coding

    PubMed Central

    Schultz, Wolfram

    2016-01-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards, an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware. PMID:27069377

  14. ICAN Computer Code Adapted for Building Materials

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.

    1997-01-01

    The NASA Lewis Research Center has been involved in developing composite micromechanics and macromechanics theories over the last three decades. These activities have resulted in several composite mechanics theories and structural analysis codes whose applications range from material behavior design and analysis to structural component response. One of these computer codes, the Integrated Composite Analyzer (ICAN), is designed primarily to address issues related to designing polymer matrix composites and predicting their properties - including hygral, thermal, and mechanical load effects. Recently, under a cost-sharing cooperative agreement with a Fortune 500 corporation, Master Builders Inc., ICAN was adapted to analyze building materials. The high costs and technical difficulties involved with the fabrication of continuous-fiber-reinforced composites sometimes limit their use. Particulate-reinforced composites can be thought of as a viable alternative. They are as easily processed to near-net shape as monolithic materials, yet have the improved stiffness, strength, and fracture toughness that is characteristic of continuous-fiber-reinforced composites. For example, particle-reinforced metal-matrix composites show great potential for a variety of automotive applications, such as disk brake rotors, connecting rods, cylinder liners, and other high-temperature applications. Building materials, such as concrete, can be thought of as one of the oldest materials in this category of multiphase, particle-reinforced materials. The adaptation of ICAN to analyze particle-reinforced composite materials involved the development of new micromechanics-based theories. A derivative of the ICAN code, ICAN/PART, was developed and delivered to Master Builders Inc. as a part of the cooperative activity.

  15. Structural coding versus free-energy predictive coding.

    PubMed

    van der Helm, Peter A

    2016-06-01

    Focusing on visual perceptual organization, this article contrasts the free-energy (FE) version of predictive coding (a recent Bayesian approach) to structural coding (a long-standing representational approach). Both use free-energy minimization as metaphor for processing in the brain, but their formal elaborations of this metaphor are fundamentally different. FE predictive coding formalizes it by minimization of prediction errors, whereas structural coding formalizes it by minimization of the descriptive complexity of predictions. Here, both sides are evaluated. A conclusion regarding competence is that FE predictive coding uses a powerful modeling technique, but that structural coding has more explanatory power. A conclusion regarding performance is that FE predictive coding-though more detailed in its account of neurophysiological data-provides a less compelling cognitive architecture than that of structural coding, which, for instance, supplies formal support for the computationally powerful role it attributes to neuronal synchronization.

  16. Lossless Video Sequence Compression Using Adaptive Prediction

    NASA Technical Reports Server (NTRS)

    Li, Ying; Sayood, Khalid

    2007-01-01

    We present an adaptive lossless video compression algorithm based on predictive coding. The proposed algorithm exploits temporal, spatial, and spectral redundancies in a backward adaptive fashion with extremely low side information. The computational complexity is further reduced by using a caching strategy. We also study the relationship between the operational domain for the coder (wavelet or spatial) and the amount of temporal and spatial redundancy in the sequence being encoded. Experimental results show that the proposed scheme provides significant improvements in compression efficiencies.

  17. Disruption of hierarchical predictive coding during sleep

    PubMed Central

    Strauss, Melanie; Sitt, Jacobo D.; King, Jean-Remi; Elbaz, Maxime; Azizi, Leila; Buiatti, Marco; Naccache, Lionel; van Wassenhove, Virginie; Dehaene, Stanislas

    2015-01-01

    When presented with an auditory sequence, the brain acts as a predictive-coding device that extracts regularities in the transition probabilities between sounds and detects unexpected deviations from these regularities. Does such prediction require conscious vigilance, or does it continue to unfold automatically in the sleeping brain? The mismatch negativity and P300 components of the auditory event-related potential, reflecting two steps of auditory novelty detection, have been inconsistently observed in the various sleep stages. To clarify whether these steps remain during sleep, we recorded simultaneous electroencephalographic and magnetoencephalographic signals during wakefulness and during sleep in normal subjects listening to a hierarchical auditory paradigm including short-term (local) and long-term (global) regularities. The global response, reflected in the P300, vanished during sleep, in line with the hypothesis that it is a correlate of high-level conscious error detection. The local mismatch response remained across all sleep stages (N1, N2, and REM sleep), but with an incomplete structure; compared with wakefulness, a specific peak reflecting prediction error vanished during sleep. Those results indicate that sleep leaves initial auditory processing and passive sensory response adaptation intact, but specifically disrupts both short-term and long-term auditory predictive coding. PMID:25737555

  18. The cortical modulation of stimulus-specific adaptation in the auditory midbrain and thalamus: a potential neuronal correlate for predictive coding

    PubMed Central

    Malmierca, Manuel S.; Anderson, Lucy A.; Antunes, Flora M.

    2015-01-01

    To follow an ever-changing auditory scene, the auditory brain is continuously creating a representation of the past to form expectations about the future. Unexpected events will produce an error in the predictions that should “trigger” the network’s response. Indeed, neurons in the auditory midbrain, thalamus and cortex respond to rarely occurring sounds while adapting to frequently repeated ones, i.e., they exhibit stimulus-specific adaptation (SSA). SSA cannot be explained solely by intrinsic membrane properties, but likely involves the participation of the network. Thus, SSA is envisaged as a high-order form of adaptation that requires the influence of cortical areas. However, present research supports the hypothesis that SSA, at least in its simplest form (i.e., to frequency deviants), can be transmitted in a bottom-up manner through the auditory pathway. Here, we briefly review the underlying neuroanatomy of the corticofugal projections before discussing state-of-the-art studies which demonstrate that SSA present in the medial geniculate body (MGB) and inferior colliculus (IC) is not inherited from the cortex but can be modulated by the cortex via the corticofugal pathways. By modulating the gain of neurons in the thalamus and midbrain, the auditory cortex (AC) would refine SSA subcortically, preventing irrelevant information from reaching the cortex. PMID:25805974

  19. TranAir: A full-potential, solution-adaptive, rectangular grid code for predicting subsonic, transonic, and supersonic flows about arbitrary configurations. Theory document

    NASA Technical Reports Server (NTRS)

    Johnson, F. T.; Samant, S. S.; Bieterman, M. B.; Melvin, R. G.; Young, D. P.; Bussoletti, J. E.; Hilmes, C. L.

    1992-01-01

    A new computer program, called TranAir, for analyzing complex configurations in transonic flow (with subsonic or supersonic freestream) was developed. This program provides accurate and efficient simulations of nonlinear aerodynamic flows about arbitrary geometries with the ease and flexibility of a typical panel method program. The numerical method implemented in TranAir is described. The method solves the full potential equation subject to a set of general boundary conditions and can handle regions with differing total pressure and temperature. The boundary value problem is discretized using the finite element method on a locally refined rectangular grid. The grid is automatically constructed by the code and is superimposed on the boundary described by networks of panels; thus no surface fitted grid generation is required. The nonlinear discrete system arising from the finite element method is solved using a preconditioned Krylov subspace method embedded in an inexact Newton method. The solution is obtained on a sequence of successively refined grids which are either constructed adaptively based on estimated solution errors or are predetermined based on user inputs. Many results obtained by using TranAir to analyze aerodynamic configurations are presented.

  1. Adaptive Dynamic Event Tree in RAVEN code

    SciTech Connect

    Alfonsi, Andrea; Rabiti, Cristian; Mandelli, Diego; Cogliati, Joshua Joseph; Kinoshita, Robert Arthur

    2014-11-01

    RAVEN is a software tool that is focused on performing statistical analysis of stochastic dynamic systems. RAVEN has been designed in a highly modular and pluggable way in order to enable easy integration of different programming languages (i.e., C++, Python) and coupling with other applications (system codes). Among the several capabilities currently present in RAVEN, there are five different sampling strategies: Monte Carlo, Latin Hypercube, Grid, Adaptive and Dynamic Event Tree (DET) sampling methodologies. The scope of this paper is to present a new sampling approach, currently under definition and implementation: an evolution of the DET methodology.

  2. Predictive coding as a model of cognition.

    PubMed

    Spratling, M W

    2016-08-01

    Previous work has shown that predictive coding can provide a detailed explanation of a very wide range of low-level perceptual processes. It is also widely believed that predictive coding can account for high-level cognitive abilities. This article provides support for this view by showing that predictive coding can simulate phenomena such as categorisation, the influence of abstract knowledge on perception, recall and reasoning about conceptual knowledge, context-dependent behavioural control, and naive physics. The particular implementation of predictive coding used here (PC/BC-DIM) has previously been used to simulate low-level perceptual behaviour and the neural mechanisms that underlie it. This algorithm thus provides a single framework for modelling both perceptual and cognitive brain function. PMID:27118562

  3. Predictive properties of visual adaptation.

    PubMed

    Chopin, Adrien; Mamassian, Pascal

    2012-04-10

    What humans perceive depends in part on what they have previously experienced. After repeated exposure to one stimulus, adaptation takes place in the form of a negative correlation between the current percept and the last displayed stimuli. Previous work has shown that this negative dependence can extend to a few minutes in the past, but the precise extent and nature of the dependence in vision is still unknown. In two experiments based on orientation judgments, we reveal a positive dependence between a visual percept and stimuli presented remotely in the past, unexpectedly and in contrast to what is known for the recent past. Previous theories of adaptation have postulated that the visual system attempts to calibrate itself relative to an ideal norm or to the recent past. We propose instead that the remote past is used to estimate the world's statistics and that this estimate becomes the reference. According to this new framework, adaptation is predictive: the most likely forthcoming percept is the one that helps the statistics of the most recent percepts match those of the remote past.

  4. An Adaptive Motion Estimation Scheme for Video Coding

    PubMed Central

    Gao, Yuan; Jia, Kebin

    2014-01-01

    The unsymmetrical-cross multihexagon-grid search (UMHexagonS) is one of the best fast Motion Estimation (ME) algorithms in video encoding software. It achieves excellent coding performance by using a hybrid block-matching search pattern and multiple initial search point predictors, at the cost of increased ME computational complexity. Reducing the time consumed by ME is one of the key factors in improving video coding efficiency. In this paper, we propose an adaptive motion estimation scheme to further reduce the calculation redundancy of UMHexagonS. First, new motion estimation search patterns are designed according to the statistics of motion vector (MV) distribution information. Then, an MV distribution prediction method is designed, covering prediction of both the size and the direction of the MV. Finally, according to the MV distribution prediction results, self-adaptive subregional searching is achieved with the new search patterns. Experimental results show that more than 50% of total search points are eliminated compared to the UMHexagonS algorithm in JM 18.4 of H.264/AVC. As a result, the proposed scheme can reduce ME time by up to 20.86% while the rate-distortion performance is not compromised. PMID:24672313
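
    For context, the operation that UMHexagonS accelerates is block matching: finding the displacement within a search window that minimizes a distortion measure such as the sum of absolute differences (SAD). The exhaustive reference search below is the brute-force baseline that patterned searches are designed to avoid; it is a generic sketch, not JM encoder code.

        import numpy as np

        def full_search_sad(cur_block, ref_frame, top, left, search_range):
            # Exhaustive block matching: return the motion vector (dy, dx)
            # minimizing SAD over all candidates within +/- search_range.
            cur = np.asarray(cur_block, dtype=int)
            ref = np.asarray(ref_frame, dtype=int)
            h, w = cur.shape
            best_mv, best_sad = (0, 0), float("inf")
            for dy in range(-search_range, search_range + 1):
                for dx in range(-search_range, search_range + 1):
                    y, x = top + dy, left + dx
                    if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
                        continue                     # candidate leaves the frame
                    sad = int(np.abs(cur - ref[y:y+h, x:x+w]).sum())
                    if sad < best_sad:
                        best_sad, best_mv = sad, (dy, dx)
            return best_mv, best_sad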

  5. An adaptive motion estimation scheme for video coding.

    PubMed

    Liu, Pengyu; Gao, Yuan; Jia, Kebin

    2014-01-01

    The unsymmetrical-cross multihexagon-grid search (UMHexagonS) is one of the best fast Motion Estimation (ME) algorithms in video encoding software. It achieves excellent coding performance by using a hybrid block-matching search pattern and multiple initial search point predictors, at the cost of increased ME computational complexity. Reducing the time consumed by ME is one of the key factors in improving video coding efficiency. In this paper, we propose an adaptive motion estimation scheme to further reduce the calculation redundancy of UMHexagonS. First, new motion estimation search patterns are designed according to the statistics of motion vector (MV) distribution information. Then, an MV distribution prediction method is designed, covering prediction of both the size and the direction of the MV. Finally, according to the MV distribution prediction results, self-adaptive subregional searching is achieved with the new search patterns. Experimental results show that more than 50% of total search points are eliminated compared to the UMHexagonS algorithm in JM 18.4 of H.264/AVC. As a result, the proposed scheme can reduce ME time by up to 20.86% while the rate-distortion performance is not compromised.

  6. A Predictive Analysis Approach to Adaptive Testing.

    ERIC Educational Resources Information Center

    Kirisci, Levent; Hsu, Tse-Chi

    The predictive analysis approach to adaptive testing originated in the idea of statistical predictive analysis suggested by J. Aitchison and I.R. Dunsmore (1975). The adaptive testing model proposed is based on parameter-free predictive distribution. Aitchison and Dunsmore define statistical prediction analysis as the use of data obtained from an…

  7. Adaptive neural coding: from biological to behavioral decision-making

    PubMed Central

    Louie, Kenway; Glimcher, Paul W.; Webb, Ryan

    2015-01-01

    Empirical decision-making in diverse species deviates from the predictions of normative choice theory, but why such suboptimal behavior occurs is unknown. Here, we propose that deviations from optimality arise from biological decision mechanisms that have evolved to maximize choice performance within intrinsic biophysical constraints. Sensory processing utilizes specific computations such as divisive normalization to maximize information coding in constrained neural circuits, and recent evidence suggests that analogous computations operate in decision-related brain areas. These adaptive computations implement a relative value code that may explain the characteristic context-dependent nature of behavioral violations of classical normative theory. Examining decision-making at the computational level thus provides a crucial link between the architecture of biological decision circuits and the form of empirical choice behavior. PMID:26722666

  8. Multichannel linear predictive coding of color images

    NASA Astrophysics Data System (ADS)

    Maragos, P. A.; Mersereau, R. M.; Schafer, R. W.

    This paper reports on a preliminary study of applying single-channel (scalar) and multichannel (vector) 2-D linear prediction to color image modeling and coding. Also, the novel idea of a multi-input single-output 2-D ADPCM coder is introduced. The results of this study indicate that texture information in multispectral images can be represented by linear prediction coefficients or matrices, whereas the prediction error conveys edge information. Moreover, by using single-channel edge information, the investigators obtained, from original color images of 24 bits/pixel, reconstructed images of good quality at information rates of 1 bit/pixel or less.

  9. Adaptive vehicle motion estimation and prediction

    NASA Astrophysics Data System (ADS)

    Zhao, Liang; Thorpe, Chuck E.

    1999-01-01

    Accurate motion estimation and reliable maneuver prediction enable an automated car to react quickly and correctly to the rapid maneuvers of the other vehicles, and so allow safe and efficient navigation. In this paper, we present a car tracking system which provides motion estimation, maneuver prediction and detection of the tracked car. The three strategies employed - adaptive motion modeling, adaptive data sampling, and adaptive model switching probabilities - result in an adaptive interacting multiple model algorithm (AIMM). The experimental results on simulated and real data demonstrate that our tracking system is reliable, flexible, and robust. The adaptive tracking makes the system intelligent and useful in various autonomous driving tasks.

  10. Predictive Control of Speededness in Adaptive Testing

    ERIC Educational Resources Information Center

    van der Linden, Wim J.

    2009-01-01

    An adaptive testing method is presented that controls the speededness of a test using predictions of the test takers' response times on the candidate items in the pool. Two different types of predictions are investigated: posterior predictions given the actual response times on the items already administered and posterior predictions that use the…

  11. A novel bit-wise adaptable entropy coding technique

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Klimesh, M.

    2001-01-01

    We present a novel entropy coding technique which is adaptable in that each bit to be encoded may have an associated probability estimate which depends on previously encoded bits. The technique may have advantages over arithmetic coding. The technique can achieve arbitrarily small redundancy and admits a simple and fast decoder.
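
    One generic way to supply per-bit, context-dependent probability estimates (a sketch of the general idea, not the specific technique of this report) is to keep running counts of the bits seen so far in each context and update them after every symbol:

        class AdaptiveBitModel:
            # Probability estimate for binary symbols, updated after each
            # encoded bit; Laplace-style initial counts avoid zero probabilities.
            def __init__(self):
                self.counts = [1, 1]

            def prob_of_one(self):
                return self.counts[1] / sum(self.counts)

            def update(self, bit):
                self.counts[bit] += 1

        model = AdaptiveBitModel()
        for bit in [0, 1, 1, 1, 0, 1]:
            p1 = model.prob_of_one()     # estimate handed to the entropy coder
            model.update(bit)            # adapt on previously encoded bits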

  12. Generating code adapted for interlinking legacy scalar code and extended vector code

    DOEpatents

    Gschwind, Michael K

    2013-06-04

    Mechanisms for intermixing code are provided. Source code is received for compilation using an extended Application Binary Interface (ABI) that extends a legacy ABI and uses a different register configuration than the legacy ABI. First compiled code is generated based on the source code, the first compiled code comprising code for accommodating the difference in register configurations used by the extended ABI and the legacy ABI. The first compiled code and second compiled code are intermixed to generate intermixed code, the second compiled code being compiled code that uses the legacy ABI. The intermixed code comprises at least one call instruction that is one of a call from the first compiled code to the second compiled code or a call from the second compiled code to the first compiled code. The code for accommodating the difference in register configurations is associated with the at least one call instruction.

  13. Visual mismatch negativity: a predictive coding view

    PubMed Central

    Stefanics, Gábor; Kremláček, Jan; Czigler, István

    2014-01-01

    An increasing number of studies investigate the visual mismatch negativity (vMMN) or use the vMMN as a tool to probe various aspects of human cognition. This paper reviews the theoretical underpinnings of vMMN in the light of methodological considerations and provides recommendations for measuring and interpreting the vMMN. The following key issues are discussed from the experimentalist's point of view in a predictive coding framework: (1) experimental protocols and procedures to control “refractoriness” effects; (2) methods to control attention; (3) vMMN and veridical perception. PMID:25278859

  14. Simpler Adaptive Selection of Golomb Power-of-Two Codes

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron

    2007-01-01

    An alternative method of adaptive selection of Golomb power-of-two (GPO2) codes has been devised for use in efficient, lossless encoding of sequences of non-negative integers from discrete sources. The method is intended especially for use in compression of digital image data. This method is somewhat suboptimal, but offers the advantage that it involves significantly less computation than does a prior method of adaptive selection of optimum codes through brute-force application of all code options to every block of samples.
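
    For reference, a GPO2 (Rice) code with parameter k writes a non-negative integer n as a unary-coded quotient n >> k followed by the k low-order bits, so the code length is (n >> k) + 1 + k. A selector can therefore pick k cheaply from the block mean instead of trying every code option; the rule below is a generic illustration, not the specific method of this report.

        def rice_encode(n, k):
            # GPO2 code: q ones and a terminating zero, then the k LSBs of n.
            q, r = n >> k, n & ((1 << k) - 1)
            return "1" * q + "0" + (format(r, "0{}b".format(k)) if k > 0 else "")

        def select_k(block):
            # Heuristic parameter choice: the best k sits near log2 of the
            # mean sample value, so grow k until 2^(k+1) exceeds the mean.
            mean = sum(block) / len(block)
            k = 0
            while (1 << (k + 1)) <= mean + 1:
                k += 1
            return k

        block = [3, 0, 7, 2, 12, 1]
        bitstream = "".join(rice_encode(n, select_k(block)) for n in block)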

  15. Adaptive Modulation and Coding for LTE Wireless Communication

    NASA Astrophysics Data System (ADS)

    Hadi, S. S.; Tiong, T. C.

    2015-04-01

    Long Term Evolution (LTE) is the upgrade path for carriers with both GSM/UMTS networks and CDMA2000 networks. LTE is targeting to become the first global mobile phone standard, regardless of the different LTE frequencies and bands used in different countries. Adaptive Modulation and Coding (AMC) is used to increase the network capacity or downlink data rates. Various modulation types are discussed, such as Quadrature Phase Shift Keying (QPSK) and Quadrature Amplitude Modulation (QAM). Spatial multiplexing techniques for the 4×4 MIMO antenna configuration are studied. With channel state information feedback from the mobile receiver to the base station transmitter, adaptive modulation and coding can be applied to adapt to the mobile wireless channel conditions, increasing spectral efficiency without increasing the bit error rate in noisy channels. In High-Speed Downlink Packet Access (HSDPA) in the Universal Mobile Telecommunications System (UMTS), AMC can be used to choose modulation types and the forward error correction (FEC) coding rate.
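
    The core of AMC is a lookup from measured channel quality to the most aggressive modulation-and-coding scheme that still meets a target error rate. The thresholds and scheme list below are illustrative placeholders, not values from the LTE specification:

        # Hypothetical SNR thresholds (dB) mapped to (modulation, coding rate).
        # Real LTE signals channel quality via CQI tables; these numbers are
        # invented for illustration only.
        AMC_TABLE = [
            (1.0,  ("QPSK",  1 / 3)),
            (6.0,  ("QPSK",  2 / 3)),
            (10.0, ("16QAM", 1 / 2)),
            (14.0, ("16QAM", 3 / 4)),
            (18.0, ("64QAM", 2 / 3)),
            (22.0, ("64QAM", 5 / 6)),
        ]

        def select_mcs(snr_db):
            # Pick the highest-rate scheme whose SNR threshold is met.
            chosen = AMC_TABLE[0][1]
            for threshold, scheme in AMC_TABLE:
                if snr_db >= threshold:
                    chosen = scheme
            return chosen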

  16. The multiform motor cortical output: Kinematic, predictive and response coding.

    PubMed

    Sartori, Luisa; Betti, Sonia; Chinellato, Eris; Castiello, Umberto

    2015-09-01

    Observing actions performed by others entails a subliminal activation of primary motor cortex reflecting the components encoded in the observed action. One of the most debated issues concerns the role of this output: Is it a mere replica of the incoming flow of information (kinematic coding), is it oriented to anticipate the forthcoming events (predictive coding) or is it aimed at responding in a suitable fashion to the actions of others (response coding)? The aim of the present study was to disentangle the relative contribution of these three levels and unify them into an integrated view of cortical motor coding. We combined transcranial magnetic stimulation (TMS) and electromyography recordings at different timings to probe the excitability of corticospinal projections to upper and lower limb muscles of participants observing a soccer player performing: (i) a penalty kick straight in their direction and then coming to a full stop, (ii) a penalty kick straight in their direction and then continuing to run, (iii) a penalty kick to the side and then continuing to run. The results show a modulation of the observer's corticospinal excitability in different effectors at different times reflecting a multiplicity of motor coding. The internal replica of the observed action, the predictive activation, and the adaptive integration of congruent and non-congruent responses to the actions of others can coexist in a not mutually exclusive way. Such a view offers reconciliation among different (and apparently divergent) frameworks in action observation literature, and will promote a more complete and integrated understanding of recent findings on motor simulation, motor resonance and automatic imitation. PMID:25727547

  17. An adaptive algorithm for motion compensated color image coding

    NASA Technical Reports Server (NTRS)

    Kwatra, Subhash C.; Whyte, Wayne A.; Lin, Chow-Ming

    1987-01-01

    This paper presents an adaptive algorithm for motion compensated color image coding. The algorithm can be used for video teleconferencing or broadcast signals. Activity segmentation is used to reduce the bit rate and a variable stage search is conducted to save computations. The adaptive algorithm is compared with the nonadaptive algorithm and it is shown that with approximately 60 percent savings in computing the motion vector and 33 percent additional compression, the performance of the adaptive algorithm is similar to the nonadaptive algorithm. The adaptive algorithm results also show improvement of up to 1 bit/pel over interframe DPCM coding with nonuniform quantization. The test pictures used for this study were recorded directly from broadcast video in color.

  18. The multidimensional Self-Adaptive Grid code, SAGE, version 2

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.; Venkatapathy, Ethiraj

    1995-01-01

    This new report on Version 2 of the SAGE code includes all the information in the original publication plus all upgrades and changes to the SAGE code since that time. The two most significant upgrades are the inclusion of a finite-volume option and the ability to adapt and manipulate zonal-matching multiple-grid files. In addition, the original SAGE code has been upgraded to Version 1.1 and includes all options mentioned in this report, with the exception of the multiple grid option and its associated features. Since Version 2 is a larger and more complex code, it is suggested (but not required) that Version 1.1 be used for single-grid applications. This document contains all the information required to run both versions of SAGE. The formulation of the adaption method is described in the first section of this document. The second section is presented in the form of a user guide that explains the input and execution of the code. The third section provides many examples. Successful application of the SAGE code in both two and three dimensions for the solution of various flow problems has proven the code to be robust, portable, and simple to use. Although the basic formulation follows the method of Nakahashi and Deiwert, many modifications have been made to facilitate the use of the self-adaptive grid method for complex grid structures. Modifications to the method and the simple but extensive input options make this a flexible and user-friendly code. The SAGE code can accommodate two-dimensional and three-dimensional, finite-difference and finite-volume, single grid, and zonal-matching multiple grid flow problems.

  19. The multidimensional self-adaptive grid code, SAGE

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.; Venkatapathy, Ethiraj

    1992-01-01

    This report describes the multidimensional self-adaptive grid code SAGE. A two-dimensional version of this code was described in an earlier report by the authors. The formulation of the multidimensional version is described in the first section of this document. The second section is presented in the form of a user guide that explains the input and execution of the code and provides many examples. Successful application of the SAGE code in both two and three dimensions for the solution of various flow problems has proven the code to be robust, portable, and simple to use. Although the basic formulation follows the method of Nakahashi and Deiwert, many modifications have been made to facilitate the use of the self-adaptive grid method for complex grid structures. Modifications to the method and the simplified input options make this a flexible and user-friendly code. The new SAGE code can accommodate both two-dimensional and three-dimensional flow problems.

  20. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized, variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of the coder used for any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.

  1. Scalable hologram video coding for adaptive transmitting service.

    PubMed

    Seo, Young-Ho; Lee, Yoon-Hyuk; Yoo, Ji-Sang; Kim, Dong-Wook

    2013-01-01

    This paper discusses processing techniques for an adaptive digital holographic video service in various reconstruction environments, and proposes two new scalable coding schemes. The proposed schemes are constructed according to the hologram generation or acquisition schemes: hologram-based resolution-scalable coding (HRS) and light source-based signal-to-noise ratio scalable coding (LSS). HRS is applied to holograms that are already acquired or generated, while LSS is applied to the light sources before generating digital holograms. In the LSS scheme, the light source information is coded losslessly because it is too important to lose, while the HRS scheme adopts a lossy coding method. In an experiment, we provide eight stages of an HRS scheme whose data compression ratios range from 1:1 to 100:1 for each data layer. For LSS, four-layer and 16-layer scalable coding schemes are provided. We experimentally show that the proposed techniques make it possible to serve a digital hologram video adaptively to various displays with different resolutions, computation capabilities of the receiver side, or bandwidths of the network.

  2. ENZO: AN ADAPTIVE MESH REFINEMENT CODE FOR ASTROPHYSICS

    SciTech Connect

    Bryan, Greg L.; Turk, Matthew J.; Norman, Michael L.; Bordner, James; Xu, Hao; Kritsuk, Alexei G.; O'Shea, Brian W.; Smith, Britton; Abel, Tom; Wang, Peng; Skillman, Samuel W.; Wise, John H.; Reynolds, Daniel R.; Collins, David C.; Harkness, Robert P.; Kim, Ji-hoon; Kuhlen, Michael; Goldbaum, Nathan; Hummels, Cameron; Collaboration: Enzo Collaboration; and others

    2014-04-01

    This paper describes the open-source code Enzo, which uses block-structured adaptive mesh refinement to provide high spatial and temporal resolution for modeling astrophysical fluid flows. The code is Cartesian, can be run in one, two, and three dimensions, and supports a wide variety of physics including hydrodynamics, ideal and non-ideal magnetohydrodynamics, N-body dynamics (and, more broadly, self-gravity of fluids and particles), primordial gas chemistry, optically thin radiative cooling of primordial and metal-enriched plasmas (as well as some optically-thick cooling models), radiation transport, cosmological expansion, and models for star formation and feedback in a cosmological context. In addition to explaining the algorithms implemented, we present solutions for a wide range of test problems, demonstrate the code's parallel performance, and discuss the Enzo collaboration's code development methodology.

  3. Adaptive Trajectory Prediction Algorithm for Climbing Flights

    NASA Technical Reports Server (NTRS)

    Schultz, Charles Alexander; Thipphavong, David P.; Erzberger, Heinz

    2012-01-01

    Aircraft climb trajectories are difficult to predict, and large errors in these predictions reduce the potential operational benefits of some advanced features for NextGen. The algorithm described in this paper improves climb trajectory prediction accuracy by adjusting trajectory predictions based on observed track data. It utilizes rate-of-climb and airspeed measurements derived from position data to dynamically adjust the aircraft weight modeled for trajectory predictions. In simulations with weight uncertainty, the algorithm is able to adapt to within 3 percent of the actual gross weight within two minutes of the initial adaptation. The root-mean-square of altitude errors for five-minute predictions was reduced by 73 percent. Conflict detection performance also improved, with a 15 percent reduction in missed alerts and a 10 percent reduction in false alerts. In a simulation with climb speed capture intent and weight uncertainty, the algorithm improved climb trajectory prediction accuracy by up to 30 percent and conflict detection performance, reducing missed and false alerts by up to 10 percent.
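
    The adaptation idea, nudging the modeled aircraft weight until the predicted rate of climb matches the observed one, can be sketched as a simple multiplicative feedback update. The gain, the names, and the proportionality assumption below are hypothetical illustrations, not the algorithm in the paper:

        def update_weight(weight, observed_roc, predicted_roc, gain=0.05):
            # If the aircraft climbs slower than predicted, the modeled
            # weight is too low, so scale it up (and vice versa).
            ratio = predicted_roc / max(observed_roc, 1e-6)
            return weight * (1.0 + gain * (ratio - 1.0))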

  4. FLY: a Tree Code for Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Becciani, U.; Antonuccio-Delogu, V.; Costa, A.; Ferro, D.

    FLY is a public domain parallel treecode, which makes heavy use of the one-sided communication paradigm to handle the management of the tree structure. It implements the equations for cosmological evolution and can be run for different cosmological models. This paper shows an example of the integration of a tree N-body code with an adaptive mesh, following the PARAMESH scheme. This new implementation will allow the FLY output, and more generally any binary output, to be used with any hydrodynamics code that adopts the PARAMESH data structure, to study compressible flow problems.

  5. Adapting Price Predictions in TAC SCM

    NASA Astrophysics Data System (ADS)

    Pardoe, David; Stone, Peter

    In agent-based markets, adapting to the behavior of other agents is often necessary for success. When it is not possible to directly model individual competitors, an agent may instead model and adapt to the market conditions that result from competitor behavior. Such an agent could still benefit from reasoning about specific competitor strategies by considering how various combinations of these strategies would impact the conditions being modeled. We present an application of such an approach to a specific prediction problem faced by the agent TacTex-06 in the Trading Agent Competition's Supply Chain Management scenario (TAC SCM).

  6. Intelligent robots that adapt, learn, and predict

    NASA Astrophysics Data System (ADS)

    Hall, E. L.; Liao, X.; Ghaffari, M.; Alhaj Ali, S. M.

    2005-10-01

    The purpose of this paper is to describe the concept and architecture for an intelligent robot system that can adapt, learn and predict the future. This evolutionary approach to the design of intelligent robots is the result of several years of study on the design of intelligent machines that could adapt using computer vision or other sensory inputs, learn using artificial neural networks or genetic algorithms, exhibit semiotic closure with a creative controller and perceive present situations by interpretation of visual and voice commands. This information processing would then permit the robot to predict the future and plan its actions accordingly. In this paper we show that the capability to adapt, and learn naturally leads to the ability to predict the future state of the environment which is just another form of semiotic closure. That is, predicting a future state without knowledge of the future is similar to making a present action without knowledge of the present state. The theory will be illustrated by considering the situation of guiding a mobile robot through an unstructured environment for a rescue operation. The significance of this work is in providing a greater understanding of the applications of learning to mobile robots.

  7. Adaptive variable-length coding for efficient compression of spacecraft television data.

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Plaunt, J. R.

    1971-01-01

    An adaptive variable length coding system is presented. Although developed primarily for the proposed Grand Tour missions, many features of this system clearly indicate a much wider applicability. Using sample to sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the necessity of storing any code words. Performance improvements of 0.5 bit/pixel can be simply achieved by utilizing previous line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code modulation system.

  8. The Helicopter Antenna Radiation Prediction Code (HARP)

    NASA Technical Reports Server (NTRS)

    Klevenow, F. T.; Lynch, B. G.; Newman, E. H.; Rojas, R. G.; Scheick, J. T.; Shamansky, H. T.; Sze, K. Y.

    1990-01-01

    The first nine months' effort in the development of a user-oriented computer code, referred to as the HARP code, for analyzing the radiation from helicopter antennas is described. The HARP code uses modern computer graphics to aid in the description and display of the helicopter geometry. At low frequencies the helicopter is modeled by polygonal plates, and the method of moments is used to compute the desired patterns. At high frequencies the helicopter is modeled by a composite ellipsoid and flat plates, and computations are made using the geometrical theory of diffraction. The HARP code will provide a user-friendly interface, employing modern computer graphics, to aid the user in describing the helicopter geometry, selecting the method of computation, constructing the desired high- or low-frequency model, and displaying the results.

  9. Efficient Unstructured Grid Adaptation Methods for Sonic Boom Prediction

    NASA Technical Reports Server (NTRS)

    Campbell, Richard L.; Carter, Melissa B.; Deere, Karen A.; Waithe, Kenrick A.

    2008-01-01

    This paper examines the use of two grid adaptation methods to improve the accuracy of the near-to-mid field pressure signature prediction of supersonic aircraft computed using the USM3D unstructured grid flow solver. The first method (ADV) is an interactive adaptation process that uses grid movement rather than enrichment to more accurately resolve the expansion and compression waves. The second method (SSGRID) uses an a priori adaptation approach to stretch and shear the original unstructured grid to align the grid with the pressure waves and reduce the cell count required to achieve an accurate signature prediction at a given distance from the vehicle. Both methods initially create negative volume cells that are repaired in a module in the ADV code. While both approaches provide significant improvements in the near field signature (< 3 body lengths) relative to a baseline grid without increasing the number of grid points, only the SSGRID approach allows the details of the signature to be accurately computed at mid-field distances (3-10 body lengths) for direct use with mid-field-to-ground boom propagation codes.

  10. Cellular Adaptation Facilitates Sparse and Reliable Coding in Sensory Pathways

    PubMed Central

    Farkhooi, Farzad; Froese, Anja; Muller, Eilif; Menzel, Randolf; Nawrot, Martin P.

    2013-01-01

    Most neurons in peripheral sensory pathways initially respond vigorously when a preferred stimulus is presented, but adapt as stimulation continues. It is unclear how this phenomenon affects stimulus coding in the later stages of sensory processing. Here, we show that a temporally sparse and reliable stimulus representation develops naturally in sequential stages of a sensory network with adapting neurons. As a modeling framework we employ a mean-field approach together with an adaptive population density treatment, accompanied by numerical simulations of spiking neural networks. We find that cellular adaptation plays a critical role in the dynamic reduction of the trial-by-trial variability of cortical spike responses by transiently suppressing self-generated fast fluctuations in the cortical balanced network. This provides an explanation for a widespread cortical phenomenon by a simple mechanism. We further show that in the insect olfactory system cellular adaptation is sufficient to explain the emergence of the temporally sparse and reliable stimulus representation in the mushroom body. Our results reveal a generic, biophysically plausible mechanism that can explain the emergence of a temporally sparse and reliable stimulus representation within a sequential processing architecture. PMID:24098101

  11. An Online Adaptive Model for Location Prediction

    NASA Astrophysics Data System (ADS)

    Anagnostopoulos, Theodoros; Anagnostopoulos, Christos; Hadjiefthymiades, Stathes

    Context-awareness is viewed as one of the most important aspects in the emerging pervasive computing paradigm. Mobile context-aware applications are required to sense and react to changing environment conditions. Such applications usually need to recognize, classify, and predict context in order to act efficiently, beforehand, for the benefit of the user. In this paper, we propose a mobility prediction model, which deals with context representation and location prediction of moving users. Machine Learning (ML) techniques are used for trajectory classification. Spatial and temporal on-line clustering is adopted. We rely on Adaptive Resonance Theory (ART) for location prediction. Location prediction is treated as a context classification problem. We introduce a novel classifier that applies a Hausdorff-like distance over the extracted trajectories handling location prediction. Since our approach is time-sensitive, the Hausdorff distance is considered more advantageous than a simple Euclidean norm. A learning method is presented and evaluated. We compare ART with Offline kMeans and Online kMeans algorithms. Our findings are very promising for the use of the proposed model in mobile context-aware applications.
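
    The time-sensitive trajectory distance at the heart of such a classifier can be sketched as follows; the exact metric and weighting used by the authors are not specified here, so the point distance and its time penalty alpha are hypothetical.

    ```python
    # Hausdorff-like distance between trajectories given as (x, y, t) points.
    def hausdorff(traj_a, traj_b, dist):
        def directed(p_set, q_set):
            return max(min(dist(p, q) for q in q_set) for p in p_set)
        return max(directed(traj_a, traj_b), directed(traj_b, traj_a))

    # Time-sensitive point distance: spatial distance plus a penalty on the
    # time difference (the weight alpha is a hypothetical choice).
    def spacetime_dist(p, q, alpha=0.5):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5 + alpha * abs(p[2] - q[2])

    a = [(0, 0, 0), (1, 0, 1), (2, 1, 2)]
    b = [(0, 0, 0), (1, 1, 1), (2, 1, 3)]
    print(hausdorff(a, b, spacetime_dist))
    ```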

  12. Adaptive method for electron bunch profile prediction

    SciTech Connect

    Scheinker, Alexander; Gessner, Spencer

    2015-10-01

    We report on an experiment performed at the Facility for Advanced Accelerator Experimental Tests (FACET) at SLAC National Accelerator Laboratory, in which a new adaptive control algorithm, one with known, bounded update rates despite operating on analytically unknown cost functions, was utilized in order to provide quasi-real-time bunch property estimates of the electron beam. Multiple parameters, such as arbitrary rf phase settings and other time-varying accelerator properties, were simultaneously tuned in order to match a simulated bunch energy spectrum with a measured energy spectrum. The simple adaptive scheme was digitally implemented using MATLAB and the Experimental Physics and Industrial Control System (EPICS). The main result is a nonintrusive, nondestructive, real-time diagnostic scheme for prediction of bunch profiles, as well as other beam parameters, the precise control of which is important for the plasma wakefield acceleration experiments being explored at FACET. © 2015 authors. Published by the American Physical Society.
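
    The flavor of such a scheme can be sketched as bounded-rate extremum seeking on an analytically unknown cost: the per-step update magnitude is fixed by the gains, not by the cost value. The gains, dither frequencies, and quadratic stand-in cost below are illustrative, not the experiment's settings.

    ```python
    # Bounded-update-rate extremum seeking on an unknown cost (sketch).
    import math

    def cost(p):                        # stand-in for the spectrum mismatch,
        return (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2   # unknown to the tuner

    p = [0.0, 0.0]
    dt, alpha, k = 0.005, 1.0, 5.0
    omegas = [70.0, 90.0]               # distinct dither frequencies per knob
    for n in range(40000):
        t = n * dt
        C = cost(p)                     # only cost measurements are used
        for i, w in enumerate(omegas):
            # update rate is bounded by sqrt(alpha * w) regardless of C
            p[i] += dt * math.sqrt(alpha * w) * math.cos(w * t + k * C)
    print([round(v, 2) for v in p])     # settles near the minimizer (1, -2)
    ```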

  13. Adaptive shape coding for perceptual decisions in the human brain.

    PubMed

    Kourtzi, Zoe; Welchman, Andrew E

    2015-01-01

    In its search for neural codes, the field of visual neuroscience has uncovered neural representations that reflect the structure of stimuli of variable complexity from simple features to object categories. However, accumulating evidence suggests an adaptive neural code that is dynamically shaped by experience to support flexible and efficient perceptual decisions. Here, we review work showing that experience plays a critical role in molding midlevel visual representations for perceptual decisions. Combining behavioral and brain imaging measurements, we demonstrate that learning optimizes feature binding for object recognition in cluttered scenes, and tunes the neural representations of informative image parts to support efficient categorical judgements. Our findings indicate that similar learning mechanisms may mediate long-term optimization through development, tune the visual system to fundamental principles of feature binding, and optimize feature templates for perceptual decisions. PMID:26024511

  14. Adaptive shape coding for perceptual decisions in the human brain

    PubMed Central

    Kourtzi, Zoe; Welchman, Andrew E.

    2015-01-01

    In its search for neural codes, the field of visual neuroscience has uncovered neural representations that reflect the structure of stimuli of variable complexity from simple features to object categories. However, accumulating evidence suggests an adaptive neural code that is dynamically shaped by experience to support flexible and efficient perceptual decisions. Here, we review work showing that experience plays a critical role in molding midlevel visual representations for perceptual decisions. Combining behavioral and brain imaging measurements, we demonstrate that learning optimizes feature binding for object recognition in cluttered scenes, and tunes the neural representations of informative image parts to support efficient categorical judgements. Our findings indicate that similar learning mechanisms may mediate long-term optimization through development, tune the visual system to fundamental principles of feature binding, and optimize feature templates for perceptual decisions. PMID:26024511

  15. Predictive Bias and Sensitivity in NRC Fuel Performance Codes

    SciTech Connect

    Geelhood, Kenneth J.; Luscher, Walter G.; Senor, David J.; Cunningham, Mitchel E.; Lanning, Donald D.; Adkins, Harold E.

    2009-10-01

    The latest versions of the fuel performance codes, FRAPCON-3 and FRAPTRAN were examined to determine if the codes are intrinsically conservative. Each individual model and type of code prediction was examined and compared to the data that was used to develop the model. In addition, a brief literature search was performed to determine if more recent data have become available since the original model development for model comparison.

  16. An assessment of the adaptive unstructured tetrahedral grid, Euler Flow Solver Code FELISA

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Erickson, Larry L.

    1994-01-01

    A three-dimensional solution-adaptive Euler flow solver for unstructured tetrahedral meshes is assessed, and the accuracy and efficiency of the method for predicting sonic boom pressure signatures about simple generic models are demonstrated. Comparison of computational and wind tunnel data and enhancement of numerical solutions by means of grid adaptivity are discussed. The mesh generation is based on the advancing front technique. The FELISA code consists of two solvers, the Taylor-Galerkin and the Runge-Kutta-Galerkin schemes, both of which are spatially discretized by the usual Galerkin weighted residual finite-element methods but with different explicit time-marching schemes to steady state. The solution-adaptive grid procedure is based on either remeshing or mesh refinement techniques. An alternative geometry adaptive procedure is also incorporated.

  17. AMRA: An Adaptive Mesh Refinement hydrodynamic code for astrophysics

    NASA Astrophysics Data System (ADS)

    Plewa, T.; Müller, E.

    2001-08-01

    Implementation details and test cases of a newly developed hydrodynamic code, amra, are presented. The numerical scheme exploits the adaptive mesh refinement technique coupled to modern high-resolution schemes which are suitable for relativistic and non-relativistic flows. Various physical processes are incorporated using the operator splitting approach, and include self-gravity, nuclear burning, physical viscosity, implicit and explicit schemes for conductive transport, simplified photoionization, and radiative losses from an optically thin plasma. Several aspects related to the accuracy and stability of the scheme are discussed in the context of hydrodynamic and astrophysical flows.

  18. Adaptive Synaptogenesis Constructs Neural Codes That Benefit Discrimination

    PubMed Central

    Thomas, Blake T.; Blalock, Davis W.; Levy, William B.

    2015-01-01

    Intelligent organisms face a variety of tasks requiring the acquisition of expertise within a specific domain, including the ability to discriminate between a large number of similar patterns. From an energy-efficiency perspective, effective discrimination requires a prudent allocation of neural resources with more frequent patterns and their variants being represented with greater precision. In this work, we demonstrate a biologically plausible means of constructing a single-layer neural network that adaptively (i.e., without supervision) meets this criterion. Specifically, the adaptive algorithm includes synaptogenesis, synaptic shedding, and bi-directional synaptic weight modification to produce a network with outputs (i.e. neural codes) that represent input patterns proportional to the frequency of related patterns. In addition to pattern frequency, the correlational structure of the input environment also affects allocation of neural resources. The combined synaptic modification mechanisms provide an explanation of neuron allocation in the case of self-taught experts. PMID:26176744
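
    A toy sketch of the three mechanisms named above (synaptogenesis, synaptic shedding, and bidirectional weight change) acting on a single output neuron; every constant, rate, and threshold here is hypothetical.

    ```python
    # Toy single-output network with synaptogenesis, shedding, and
    # bidirectional weight modification.
    import random

    random.seed(0)
    n_in = 20
    w = {}                                   # synapses: input index -> weight
    for step in range(5000):
        # inputs 0-4 fire frequently; the rest fire rarely
        x = [1 if random.random() < (0.4 if i < 5 else 0.05) else 0
             for i in range(n_in)]
        y = sum(w.get(i, 0.0) * x[i] for i in range(n_in)) > 0.5
        if random.random() < 0.01:           # synaptogenesis: add a random synapse
            w.setdefault(random.randrange(n_in), 0.3)
        for i in list(w):
            if y:                            # associative, bidirectional change
                w[i] += 0.02 * (x[i] - w[i])
            if w[i] < 0.05:                  # shed persistently weak synapses
                del w[i]
    print(sorted(w))                         # mostly the frequent inputs 0..4 remain
    ```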

  19. SAGE: The Self-Adaptive Grid Code. 3

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.; Venkatapathy, Ethiraj

    1999-01-01

    The multi-dimensional self-adaptive grid code, SAGE, is an important tool in the field of computational fluid dynamics (CFD). It provides an efficient method to improve the accuracy of flow solutions while simultaneously reducing computer processing time. Briefly, SAGE enhances an initial computational grid by redistributing the mesh points into more appropriate locations. The movement of these points is driven by an equal-error-distribution algorithm that utilizes the relationship between high flow gradients and excessive solution errors. The method also provides a balance between clustering points in the high gradient regions and maintaining the smoothness and continuity of the adapted grid. The latest version, Version 3, includes the ability to change the boundaries of a given grid to more efficiently enclose flow structures and provides alternative redistribution algorithms.
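
    A one-dimensional analogue of equal-error redistribution can be sketched by equidistributing a gradient-based weight along the grid; this is a generic equidistribution scheme for illustration, not SAGE's actual algorithm.

    ```python
    # Generic 1-D equidistribution sketch: move mesh points so the integral of
    # a gradient-based weight is equal between neighboring points.
    import numpy as np

    f = lambda x: np.tanh(20.0 * (x - 0.5))       # solution with a sharp gradient
    x = np.linspace(0.0, 1.0, 41)                 # initial uniform grid
    for _ in range(50):                           # relax toward equidistribution
        w = 1.0 + np.abs(np.gradient(f(x), x))    # weight: large where gradient is
        cum = np.concatenate([[0.0],
                              np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))])
        # place the new points at equally spaced levels of the cumulative weight
        x = np.interp(np.linspace(0.0, cum[-1], x.size), cum, x)
    print(f"min spacing {np.diff(x).min():.4f}, max spacing {np.diff(x).max():.4f}")
    # spacing is finest near x = 0.5 where the gradient (the error proxy) peaks
    ```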

  20. Results and code predictions for ABCOVE aerosol code validation - Test AB5

    SciTech Connect

    Hilliard, R K; McCormack, J D; Postma, A K

    1983-11-01

    A program for aerosol behavior code validation and evaluation (ABCOVE) has been developed in accordance with the LMFBR Safety Program Plan. The ABCOVE program is a cooperative effort between the USDOE, the USNRC, and their contractor organizations currently involved in aerosol code development, testing or application. The first large-scale test in the ABCOVE program, AB5, was performed in the 850-m³ CSTF vessel using a sodium spray as the aerosol source. Seven organizations made pretest predictions of aerosol behavior using seven different computer codes (HAA-3, HAA-4, HAARM-3, QUICK, MSPEC, MAEROS and CONTAIN). Three of the codes were used by more than one user so that the effect of user input could be assessed, as well as the codes themselves. Detailed test results are presented and compared with the code predictions for eight key parameters.

  1. Cooperative solutions coupling a geometry engine and adaptive solver codes

    NASA Technical Reports Server (NTRS)

    Dickens, Thomas P.

    1995-01-01

    Follow-on work has progressed in using Aero Grid and Paneling System (AGPS), a geometry and visualization system, as a dynamic real time geometry monitor, manipulator, and interrogator for other codes. In particular, AGPS has been successfully coupled with adaptive flow solvers which iterate, refining the grid in areas of interest, and continuing on to a solution. With the coupling to the geometry engine, the new grids represent the actual geometry much more accurately since they are derived directly from the geometry and do not use refits to the first-cut grids. Additional work has been done with design runs where the geometric shape is modified to achieve a desired result. Various constraints are used to point the solution in a reasonable direction which also more closely satisfies the desired results. Concepts and techniques are presented, as well as examples of sample case studies. Issues such as distributed operation of the cooperative codes versus running all codes locally and pre-calculation for performance are discussed. Future directions are considered which will build on these techniques in light of changing computer environments.

  2. System code requirements for SBWR LOCA predictions

    SciTech Connect

    Rohatgi, U.S.; Slovik, G.; Kroeger, P.

    1994-12-31

    The simplified boiling water reactor (SBWR) is the latest design in the family of boiling water reactors (BWRs) from General Electric. The concept is based on many innovative, passive, safety systems that rely on naturally occurring phenomena, such as natural circulation, gravity flows, and condensation. Reliability has been improved by eliminating active systems such as pumps and valves. The reactor pressure vessel (RPV) is connected to heat exchangers submerged in individual water tanks, which are open to atmosphere. These heat exchangers, or isolation condensers (ICs), provide a heat sink to reduce the RPV pressure when isolated. The RPV is also connected to three elevated tanks of water called the gravity-driven cooling system (GDCS). During a loss-of-coolant accident (LOCA), the RPV is depressurized by the automatic depressurization system (ADS), allowing the gravity-driven flow from the GDCS tanks. The containment pressure is controlled by a passive containment cooling system (PCCS) and suppression pool. Similarly, there are new plant protection systems in the SBWR, such as fine-motion control rod drive, passive standby liquid control system, and the automatic feedwater runback system. These safety and plant protection systems respond to phenomena that are different from previous BWR designs. System codes must be upgraded to include models for the phenomena expected during transients for the SBWR.

  3. Olfactory coding in Drosophila larvae investigated by cross-adaptation.

    PubMed

    Boyle, Jennefer; Cobb, Matthew

    2005-09-01

    In order to reveal aspects of olfactory coding, the effects of sensory adaptation on the olfactory responses of first-instar Drosophila melanogaster larvae were tested. Larvae were pre-stimulated with a homologous series of acetic esters (C3-C9), and their responses to each of these odours were then measured. The overall patterns suggested that methyl acetate has no specific pathway but was detected by all the sensory pathways studied here, that butyl and pentyl acetate tended to have similar effects to each other and that hexyl acetate was processed separately from the other odours. In a number of cases, cross-adaptation transformed a control attractive response into a repulsive response; in no case was an increase in attractiveness observed. This was investigated by studying changes in dose-response curves following pre-stimulation. These findings are discussed in light of the possible intra- and intercellular mechanisms of adaptation and the advantage of altered sensitivity for the larva. PMID:16155221

  4. Roadmap Toward a Predictive Performance-based Commercial Energy Code

    SciTech Connect

    Rosenberg, Michael I.; Hart, Philip R.

    2014-10-01

    Energy codes have provided significant increases in building efficiency over the last 38 years, since the first national energy model code was published in late 1975. The most commonly used path in energy codes, the prescriptive path, appears to be reaching a point of diminishing returns. The current focus on prescriptive codes has limitations including significant variation in actual energy performance depending on which prescriptive options are chosen, a lack of flexibility for designers and developers, and the inability to handle control optimization that is specific to building type and use. This paper provides a high level review of different options for energy codes, including prescriptive, prescriptive packages, EUI Target, outcome-based, and predictive performance approaches. This paper also explores a next generation commercial energy code approach that places a greater emphasis on performance-based criteria. A vision is outlined to serve as a roadmap for future commercial code development. That vision is based on code development being led by a specific approach to predictive energy performance combined with building specific prescriptive packages that are designed to be both cost-effective and to achieve a desired level of performance. Compliance with this new approach can be achieved by either meeting the performance target as demonstrated by whole building energy modeling, or by choosing one of the prescriptive packages.

  5. N-Body Code with Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Yahagi, Hideki; Yoshii, Yuzuru

    2001-09-01

    We have developed a simulation code with the techniques that enhance both spatial and time resolution of the particle-mesh (PM) method, for which the spatial resolution is restricted by the spacing of structured mesh. The adaptive-mesh refinement (AMR) technique subdivides the cells that satisfy the refinement criterion recursively. The hierarchical meshes are maintained by the special data structure and are modified in accordance with the change of particle distribution. In general, as the resolution of the simulation increases, its time step must be shortened and more computational time is required to complete the simulation. Since the AMR enhances the spatial resolution locally, we reduce the time step locally also, instead of shortening it globally. For this purpose, we used a technique of hierarchical time steps (HTS), which changes the time step, from particle to particle, depending on the size of the cell in which particles reside. Some test calculations show that our implementation of AMR and HTS is successful. We have performed cosmological simulation runs based on our code and found that many of halo objects have density profiles that are well fitted to the universal profile proposed in 1996 by Navarro, Frenk, & White over the entire range of their radius.
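
    The hierarchical time step idea can be sketched as follows: a particle's step is the global step halved once per refinement level of the cell it occupies, so only particles in fine cells take many substeps. The doubling rule and level assignments below are illustrative.

    ```python
    # Hierarchical time steps (HTS) sketch: per-particle substeps by cell level.
    global_dt = 1.0
    particles = [{"id": 0, "level": 0}, {"id": 1, "level": 2}, {"id": 2, "level": 3}]
    for p in particles:
        n_sub = 2 ** p["level"]                 # level 0 = coarsest mesh
        local_dt = global_dt / n_sub
        for _ in range(n_sub):
            pass                                # advance the particle by local_dt
        print(f"particle {p['id']}: {n_sub} substeps of dt = {local_dt}")
    ```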

  6. 3D Finite Element Trajectory Code with Adaptive Meshing

    NASA Astrophysics Data System (ADS)

    Ives, Lawrence; Bui, Thuc; Vogler, William; Bauer, Andy; Shephard, Mark; Beal, Mark; Tran, Hien

    2004-11-01

    Beam Optics Analysis, a new, 3D charged particle program, is available and in use for the design of complex, 3D electron guns and charged particle devices. The code reads files directly from most CAD and solid modeling programs, includes an intuitive Graphical User Interface (GUI), and has a robust mesh generator that is fully automatic. Complex problems can be set up, and analysis initiated, in minutes. The program includes a user-friendly post processor for displaying field and trajectory data using 3D plots and images. The electrostatic solver is based on the standard nodal finite element method. The magnetostatic field solver is based on the vector finite element method and is also called during the trajectory simulation process to solve for self magnetic fields. The user imports the geometry from essentially any commercial CAD program and uses the GUI to assign parameters (voltages, currents, dielectric constant) and designate emitters (including work function, emitter temperature, and number of trajectories). The mesh is then generated automatically and analysis is performed, including mesh adaptation to improve accuracy and optimize computational resources. This presentation will provide information on the basic structure of the code, its operation, and its capabilities.

  7. RAM: a Relativistic Adaptive Mesh Refinement Hydrodynamics Code

    SciTech Connect

    Zhang, Wei-Qun; MacFadyen, Andrew I.

    2005-06-06

    The authors have developed a new computer code, RAM, to solve the conservative equations of special relativistic hydrodynamics (SRHD) using adaptive mesh refinement (AMR) on parallel computers. They have implemented a characteristic-wise, finite difference, weighted essentially non-oscillatory (WENO) scheme using the full characteristic decomposition of the SRHD equations to achieve fifth-order accuracy in space. For time integration they use the method of lines with a third-order total variation diminishing (TVD) Runge-Kutta scheme. They have also implemented fourth and fifth order Runge-Kutta time integration schemes for comparison. The implementation of AMR and parallelization is based on the FLASH code. RAM is modular and includes the capability to easily swap hydrodynamics solvers, reconstruction methods and physics modules. In addition to WENO they have implemented a finite volume module with the piecewise parabolic method (PPM) for reconstruction and the modified Marquina approximate Riemann solver to work with TVD Runge-Kutta time integration. They examine the difficulty of accurately simulating shear flows in numerical relativistic hydrodynamics codes. They show that under-resolved simulations of simple test problems with transverse velocity components produce incorrect results and demonstrate the ability of RAM to correctly solve these problems. RAM has been tested in one, two and three dimensions and in Cartesian, cylindrical and spherical coordinates. They have demonstrated fifth-order accuracy for WENO in one and two dimensions and performed detailed comparison with other schemes for which they show significantly lower convergence rates. Extensive testing is presented demonstrating the ability of RAM to address challenging open questions in relativistic astrophysics.
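
    The third-order TVD Runge-Kutta scheme named above is commonly written in the Shu-Osher form; a sketch on a generic semi-discrete system du/dt = L(u) follows, with a trivial stand-in for the hydrodynamic spatial operator.

    ```python
    # Shu-Osher form of third-order TVD Runge-Kutta time integration.
    import numpy as np

    def tvd_rk3_step(u, L, dt):
        u1 = u + dt * L(u)
        u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
        return u / 3.0 + (2.0 / 3.0) * (u2 + dt * L(u2))

    L = lambda u: -u                            # stand-in operator: du/dt = -u
    u, dt = np.array([1.0]), 0.1
    for _ in range(10):
        u = tvd_rk3_step(u, L, dt)
    print(u[0], np.exp(-1.0))                   # close to the exact decay e**-1
    ```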

  8. Efficient multiview depth video coding using depth synthesis prediction

    NASA Astrophysics Data System (ADS)

    Lee, Cheon; Choi, Byeongho; Ho, Yo-Sung

    2011-07-01

    The view synthesis prediction (VSP) method utilizes inter-view correlations between views by generating an additional reference frame in the multiview video coding. This paper describes a multiview depth video coding scheme that incorporates depth view synthesis and additional prediction modes. In the proposed scheme, we exploit the reconstructed neighboring depth frame to generate an additional reference depth image for the current viewpoint to be coded using the depth image-based-rendering technique. In order to generate high-quality reference depth images, we used pre-processing on depth, depth image warping, and two types of hole filling methods depending on the number of available reference views. After synthesizing the additional depth image, we encode the depth video using the proposed additional prediction modes named VSP modes; those additional modes refer to the synthesized depth image. In particular, the VSP_SKIP mode refers to the co-located block of the synthesized frame without coding motion vectors or residual data, which gives most of the coding gains. Experimental results demonstrate that the proposed depth view synthesis method provides high-quality depth images for the current view and the proposed VSP modes provide high coding gains, especially on the anchor frames.

  9. Dream to Predict? REM Dreaming as Prospective Coding

    PubMed Central

    Llewellyn, Sue

    2016-01-01

    The dream as prediction seems inherently improbable. The bizarre occurrences in dreams never characterize everyday life. Dreams do not come true! But assuming that bizarreness negates expectations may rest on a misunderstanding of how the predictive brain works. In evolutionary terms, the ability to rapidly predict what sensory input implies—through expectations derived from discerning patterns in associated past experiences—would have enhanced fitness and survival. For example, food and water are essential for survival, associating past experiences (to identify location patterns) predicts where they can be found. Similarly, prediction may enable predator identification from what would have been only a fleeting and ambiguous stimulus—without prior expectations. To confront the many challenges associated with natural settings, visual perception is vital for humans (and most mammals) and often responses must be rapid. Predictive coding during wake may, therefore, be based on unconscious imagery so that visual perception is maintained and appropriate motor actions triggered quickly. Speed may also dictate the form of the imagery. Bizarreness, during REM dreaming, may result from a prospective code fusing phenomena with the same meaning—within a particular context. For example, if the context is possible predation, from the perspective of the prey two different predators can both mean the same (i.e., immediate danger) and require the same response (e.g., flight). Prospective coding may also prune redundancy from memories, to focus the image on the contextually-relevant elements only, thus, rendering the non-relevant phenomena indeterminate—another aspect of bizarreness. In sum, this paper offers an evolutionary take on REM dreaming as a form of prospective coding which identifies a probabilistic pattern in past events. This pattern is portrayed in an unconscious, associative, sensorimotor image which may support cognition in wake through being mobilized as a

  10. Dream to Predict? REM Dreaming as Prospective Coding.

    PubMed

    Llewellyn, Sue

    2015-01-01

    The dream as prediction seems inherently improbable. The bizarre occurrences in dreams never characterize everyday life. Dreams do not come true! But assuming that bizarreness negates expectations may rest on a misunderstanding of how the predictive brain works. In evolutionary terms, the ability to rapidly predict what sensory input implies-through expectations derived from discerning patterns in associated past experiences-would have enhanced fitness and survival. For example, food and water are essential for survival, associating past experiences (to identify location patterns) predicts where they can be found. Similarly, prediction may enable predator identification from what would have been only a fleeting and ambiguous stimulus-without prior expectations. To confront the many challenges associated with natural settings, visual perception is vital for humans (and most mammals) and often responses must be rapid. Predictive coding during wake may, therefore, be based on unconscious imagery so that visual perception is maintained and appropriate motor actions triggered quickly. Speed may also dictate the form of the imagery. Bizarreness, during REM dreaming, may result from a prospective code fusing phenomena with the same meaning-within a particular context. For example, if the context is possible predation, from the perspective of the prey two different predators can both mean the same (i.e., immediate danger) and require the same response (e.g., flight). Prospective coding may also prune redundancy from memories, to focus the image on the contextually-relevant elements only, thus, rendering the non-relevant phenomena indeterminate-another aspect of bizarreness. In sum, this paper offers an evolutionary take on REM dreaming as a form of prospective coding which identifies a probabilistic pattern in past events. This pattern is portrayed in an unconscious, associative, sensorimotor image which may support cognition in wake through being mobilized as a predictive

  11. Adaptive, predictive controller for optimal process control

    SciTech Connect

    Brown, S.K.; Baum, C.C.; Bowling, P.S.; Buescher, K.L.; Hanagandi, V.M.; Hinde, R.F. Jr.; Jones, R.D.; Parkinson, W.J.

    1995-12-01

    One can derive a model for use in a Model Predictive Controller (MPC) from first principles or from experimental data. Until recently, both methods failed for all but the simplest processes. First principles are almost always incomplete and fitting to experimental data fails for dimensions greater than one as well as for non-linear cases. Several authors have suggested the use of a neural network to fit the experimental data to a multi-dimensional and/or non-linear model. Most networks, however, use simple sigmoid functions and backpropagation for fitting. Training of these networks generally requires large amounts of data and, consequently, very long training times. In 1993 we reported on the tuning and optimization of a negative ion source using a special neural network [2]. One of the properties of this network (CNLSnet), a modified radial basis function network, is that it is able to fit data with few basis functions. Another is that its training is linear, resulting in guaranteed convergence and rapid training. We found the training to be rapid enough to support real-time control. This work has been extended to incorporate this network into an MPC, using the model built by the network for predictive control. This controller has shown some remarkable capabilities in such non-linear applications as continuous stirred exothermic tank reactors and high-purity fractional distillation columns [3]. The controller is able not only to build an appropriate model from operating data but also to thin the network continuously so that the model adapts to changing plant conditions. The controller is discussed, as well as its possible use in various difficult control problems that face this community.
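
    The key property described, that training with fixed radial basis functions reduces to a linear least-squares solve, can be sketched generically; this is not CNLSnet itself, and the centers, width, and data below are illustrative.

    ```python
    # Radial basis function fit with a linear (least-squares) training step.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, 200)                 # plant operating data (1-D here)
    y = np.sin(X) + 0.05 * rng.normal(size=X.size)

    centers = np.linspace(-3, 3, 12)            # fixed basis centers and width
    width = 0.8
    Phi = np.exp(-((X[:, None] - centers[None, :]) / width) ** 2)
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # the linear training step

    x_test = np.array([0.5])
    phi = np.exp(-((x_test[:, None] - centers[None, :]) / width) ** 2)
    print((phi @ coef)[0], np.sin(0.5))         # model prediction vs. truth
    ```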

  12. The NASA-LeRC wind turbine sound prediction code

    NASA Technical Reports Server (NTRS)

    Viterna, L. A.

    1981-01-01

    Development of the wind turbine sound prediction code began as part of an effort to understand and reduce the noise generated by Mod-1. Tone sound levels predicted with this code are in good agreement with measured data taken in the vicinity of the Mod-1 wind turbine (less than 2 rotor diameters). Comparison in the far field indicates that propagation effects due to terrain and atmospheric conditions may amplify the actual sound levels by 6 dB. Parametric analysis using the code shows that the predominant contributors to Mod-1 rotor noise are (1) the velocity deficit in the wake of the support tower, (2) the high rotor speed, and (3) off-optimum operation.

  13. The NASA-LeRC wind turbine sound prediction code

    NASA Astrophysics Data System (ADS)

    Viterna, L. A.

    1981-05-01

    Since regular operation of the DOE/NASA MOD-1 wind turbine began in October 1979, about 10 nearby households have complained of noise from the machine. Development of the NASA-LeRC wind turbine sound prediction code began in May 1980 as part of an effort to understand and reduce the noise generated by MOD-1. Tone sound levels predicted with this code are in generally good agreement with measured data taken in the vicinity of the MOD-1 wind turbine (less than 2 rotor diameters). Comparison in the far field indicates that propagation effects due to terrain and atmospheric conditions may be amplifying the actual sound levels by about 6 dB. Parametric analysis using the code has shown that the predominant contributions to MOD-1 rotor noise are: (1) the velocity deficit in the wake of the support tower; (2) the high rotor speed; and (3) off-optimum operation.

  14. The NASA-LeRC wind turbine sound prediction code

    NASA Technical Reports Server (NTRS)

    Viterna, L. A.

    1981-01-01

    Since regular operation of the DOE/NASA MOD-1 wind turbine began in October 1979, about 10 nearby households have complained of noise from the machine. Development of the NASA-LeRC wind turbine sound prediction code began in May 1980 as part of an effort to understand and reduce the noise generated by MOD-1. Tone sound levels predicted with this code are in generally good agreement with measured data taken in the vicinity of the MOD-1 wind turbine (less than 2 rotor diameters). Comparison in the far field indicates that propagation effects due to terrain and atmospheric conditions may be amplifying the actual sound levels by about 6 dB. Parametric analysis using the code has shown that the predominant contributions to MOD-1 rotor noise are: (1) the velocity deficit in the wake of the support tower; (2) the high rotor speed; and (3) off-optimum operation.

  15. Adaptive distributed video coding with correlation estimation using expectation propagation

    NASA Astrophysics Data System (ADS)

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2012-10-01

    Distributed video coding (DVC) is rapidly increasing in popularity by way of shifting complexity from the encoder to the decoder while, at least in theory, sacrificing no compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder, based on the received syndromes of the Wyner-Ziv (WZ) frame and a side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, the existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where the estimate can be refined iteratively during decoding. As changes between frames may be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques, at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF as it is carried out jointly with decoding of the factor graph-based DVC code. Among approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance with significantly lower complexity than sampling methods.

  16. Adaptive Distributed Video Coding with Correlation Estimation using Expectation Propagation.

    PubMed

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2012-10-15

    Distributed video coding (DVC) is rapidly increasing in popularity by way of shifting complexity from the encoder to the decoder while, at least in theory, sacrificing no compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder, based on the received syndromes of the Wyner-Ziv (WZ) frame and a side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, the existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where the estimate can be refined iteratively during decoding. As changes between frames may be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques, at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF as it is carried out jointly with decoding of the factor graph-based DVC code. Among approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance with significantly lower complexity than sampling methods.

  17. Sparse/DCT (S/DCT) two-layered representation of prediction residuals for video coding.

    PubMed

    Kang, Je-Won; Gabbouj, Moncef; Kuo, C-C Jay

    2013-07-01

    In this paper, we propose a cascaded sparse/DCT (S/DCT) two-layer representation of prediction residuals, and implement this idea on top of the state-of-the-art high efficiency video coding (HEVC) standard. First, a dictionary is adaptively trained to contain featured patterns of residual signals so that a high portion of energy in a structured residual can be efficiently coded via sparse coding. It is observed that the sparse representation alone is less effective in the R-D performance due to the side information overhead at higher bit rates. To overcome this problem, the DCT representation is cascaded at the second stage. It is applied to the remaining signal to improve coding efficiency. The two representations successfully complement each other. It is demonstrated by experimental results that the proposed algorithm outperforms the HEVC reference codec HM5.0 in the Common Test Condition.
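
    A conceptual sketch of the two-layer idea: a few greedy atoms from a dictionary code the structured part of a residual, and a DCT codes the remainder. The dictionary, the "residual" signal, and the atom count are synthetic stand-ins, and simple matching pursuit substitutes for the paper's sparse coding stage.

    ```python
    # Two-layer sparse/DCT representation of a prediction residual (sketch).
    import numpy as np
    from scipy.fft import dct

    rng = np.random.default_rng(1)
    D = rng.normal(size=(64, 128))
    D /= np.linalg.norm(D, axis=0)              # unit-norm dictionary atoms
    r = 3.0 * D[:, 7] + 0.1 * rng.normal(size=64)   # stand-in structured residual

    remainder, atoms = r.copy(), []
    for _ in range(3):                          # layer 1: greedy sparse coding
        k = int(np.argmax(np.abs(D.T @ remainder)))
        c = float(D[:, k] @ remainder)
        atoms.append((k, c))
        remainder -= c * D[:, k]

    coeffs = dct(remainder, norm="ortho")       # layer 2: DCT of what is left
    print(atoms[0][0], round(float(remainder @ remainder), 3))  # atom 7 recovered
    ```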

  18. Channel Error Propagation In Predictor Adaptive Differential Pulse Code Modulation (DPCM) Coders

    NASA Astrophysics Data System (ADS)

    Devarajan, Venkat; Rao, K. R.

    1980-11-01

    New adaptive differential pulse code modulation (ADPCM) coders with adaptive prediction are proposed and compared with existing non-adaptive DPCM coders for processing composite National Television System Committee (NTSC) television signals. Comparisons are based on quantitative criteria as well as subjective evaluation of the processed still frames. The performance of the proposed predictors is shown to be independent of well-designed quantizers and better than existing predictors in such critical regions of the pictures as edges and contours. Test data consist of four color images with varying levels of activity, color and detail. The adaptive predictors, however, are sensitive to channel errors. Propagation of transmission noise is dependent on the type of prediction and on the location of the noise, i.e., whether in a uniform region or in an active region. The transmission error propagation for different predictors is investigated. By introducing leak in the predictor output and/or the predictor function, it is shown that this propagation can be significantly reduced. The combination predictors not only attenuate and/or terminate the channel error propagation but also improve the predictor performance based on quantitative evaluation such as essential peak value and mean square error between the original and reconstructed images.
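
    The effect of predictor leak can be sketched with a first-order DPCM loop: with leak factor a < 1, a single corrupted residual decays geometrically at the decoder instead of persisting indefinitely. The signal and leak value below are illustrative.

    ```python
    # Leaky first-order DPCM: channel-error propagation decays as a**n.
    def dpcm_decode(residuals, a):
        recon, pred = [], 0.0
        for e in residuals:
            s = a * pred + e            # leaky prediction plus received residual
            recon.append(s)
            pred = s
        return recon

    signal = [10.0] * 20
    a = 0.9
    residuals, pred = [], 0.0
    for s in signal:                    # encoder with the same leaky predictor
        residuals.append(s - a * pred)
        pred = s
    residuals[5] += 8.0                 # a channel error hits one residual
    err = [abs(x - 10.0) for x in dpcm_decode(residuals, a)]
    print([round(v, 2) for v in err[4:12]])   # error decays as 8 * 0.9**n
    ```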

  19. Adaptive phase-coded reconstruction for cardiac CT

    NASA Astrophysics Data System (ADS)

    Hsieh, Jiang; Mayo, John; Acharya, Kishor; Pan, Tin-Su

    2000-04-01

    Cardiac imaging with conventional computed tomography (CT) has gained significant attention in recent years. New hardware development enables a CT scanner to rotate at a faster speed so that less cardiac motion is present in acquired projection data. Many new tomographic reconstruction techniques have also been developed to reduce the artifacts induced by the cardiac motion. Most of the algorithms make use of the projection data collected over several cardiac cycles to formulate a single projection data set. Because the data set is formed with samples collected roughly in the same phase of a cardiac cycle, the temporal resolution of the newly formed data set is significantly improved compared with projections collected continuously. In this paper, we present an adaptive phase-coded reconstruction scheme (APR) for cardiac CT. Unlike the previously proposed schemes where the projection sector size is identical, APR determines each sector size based on the tomographic reconstruction algorithm. The newly proposed scheme ensures that the temporal resolution of each sector is substantially equal. In addition, the scan speed is selected based on the measured EKG signal of the patient.

  20. PHURBAS: AN ADAPTIVE, LAGRANGIAN, MESHLESS, MAGNETOHYDRODYNAMICS CODE. II. IMPLEMENTATION AND TESTS

    SciTech Connect

    McNally, Colin P.; Mac Low, Mordecai-Mark; Maron, Jason L.

    2012-05-01

    We present an algorithm for simulating the equations of ideal magnetohydrodynamics and other systems of differential equations on an unstructured set of points represented by sample particles. The particles move with the fluid, so the time step is not limited by the Eulerian Courant-Friedrichs-Lewy condition. Full spatial adaptivity is required to ensure the particles fill the computational volume and gives the algorithm substantial flexibility and power. A target resolution is specified for each point in space, with particles being added and deleted as needed to meet this target. We have parallelized the code by adapting the framework provided by GADGET-2. A set of standard test problems, including 10⁻⁶ amplitude linear magnetohydrodynamics waves, magnetized shock tubes, and Kelvin-Helmholtz instabilities, is presented. Finally, we demonstrate good agreement with analytic predictions of linear growth rates for magnetorotational instability in a cylindrical geometry. This paper documents the Phurbas algorithm as implemented in Phurbas version 1.1.

  1. Grid-Adapted FUN3D Computations for the Second High Lift Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Lee-Rausch, E. M.; Rumsey, C. L.; Park, M. A.

    2014-01-01

    Contributions of the unstructured Reynolds-averaged Navier-Stokes code FUN3D to the 2nd AIAA CFD High Lift Prediction Workshop are described, and detailed comparisons are made with experimental data. Using workshop-supplied grids, results for the clean wing configuration are compared with results from the structured code CFL3D. Using the same turbulence model, both codes compare reasonably well in terms of total forces and moments, and the maximum lift is similarly over-predicted for both codes compared to experiment. By including more representative geometry features such as slat and flap brackets and slat pressure tube bundles, FUN3D captures the general effects of the Reynolds number variation, but under-predicts maximum lift on workshop-supplied grids in comparison with the experimental data, due to excessive separation. However, when output-based, off-body grid adaptation in FUN3D is employed, results improve considerably. In particular, when the geometry includes both brackets and the pressure tube bundles, grid adaptation results in a more accurate prediction of lift near stall in comparison with the wind-tunnel data. Furthermore, a rotation-corrected turbulence model shows improved pressure predictions on the outboard span when using adapted grids.

  2. Adaptive Source Coding Schemes for Geometrically Distributed Integer Alphabets

    NASA Technical Reports Server (NTRS)

    Cheung, K-M.; Smyth, P.

    1993-01-01

    The Gallager and van Voorhis optimal source coding scheme for geometrically distributed non-negative integer alphabets is revisited, and it is shown that the various subcodes in the popular Rice algorithm can be derived from the Gallager and van Voorhis code.
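
    For reference, a Rice subcode with parameter k encodes a non-negative integer as a unary quotient, a terminating zero, and k binary remainder bits; the parameter m = 2**k links it to the Golomb construction. This sketch assumes k >= 1 and one common bit convention.

    ```python
    # Rice subcode sketch: unary quotient + "0" + k-bit binary remainder.
    def rice_encode(n, k):
        q, r = n >> k, n & ((1 << k) - 1)
        return "1" * q + "0" + format(r, f"0{k}b")

    for n in [0, 1, 5, 17]:
        print(n, rice_encode(n, 2))
    ```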

  3. Reward prediction error coding in dorsal striatal neurons.

    PubMed

    Oyama, Kei; Hernádi, István; Iijima, Toshio; Tsutsui, Ken-Ichiro

    2010-08-25

    In the current theory of learning, the reward prediction error (RPE), the difference between expected and received reward, is thought to be a key factor in reward-based learning, working as a teaching signal. The activity of dopamine neurons is known to code RPE, and the release of dopamine is known to modify the strength of synaptic connectivity in the target neurons. A fundamental interest in current neuroscience concerns the origin of RPE signals in the brain. Here, we show that a group of rat striatal neurons shows a clear parametric RPE coding similar to that of dopamine neurons when tested under probabilistic Pavlovian conditioning. Together with the fact that the striatum and dopamine neurons have strong direct and indirect fiber connections, the result suggests that the striatum plays an important role in coding the RPE signal by cooperating with dopamine neurons.
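
    The RPE logic can be sketched in a few lines for a probabilistic Pavlovian setting: the learned value climbs until the average prediction error is zero. The learning rate and reward probability below are illustrative.

    ```python
    # Reward-prediction-error learning under probabilistic conditioning.
    import random

    random.seed(3)
    V, alpha, p = 0.0, 0.1, 0.75
    for trial in range(2000):
        r = 1.0 if random.random() < p else 0.0
        rpe = r - V                    # reward prediction error
        V += alpha * rpe               # value update driven by the RPE
    print(round(V, 2))                 # approx 0.75, the true reward probability
    ```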

  4. Reflectance Prediction Modelling for Residual-Based Hyperspectral Image Coding

    PubMed Central

    Xiao, Rui; Gao, Junbin; Bossomaier, Terry

    2016-01-01

    A Hyperspectral (HS) image provides observational powers beyond human vision capability but represents more than 100 times the data compared to a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing “original pixel intensity”-based coding approaches using traditional image coders (e.g., JPEG2000) to the “residual”-based approaches using a video coder for better compression performance. A modified video coder is required to exploit spatial-spectral redundancy using pixel-level reflectance modelling, due to the different characteristics of HS images in the spectral domain and in the shape domain of panchromatic imagery compared to traditional videos. In this paper, a novel coding framework using Reflectance Prediction Modelling (RPM) within the latest video coding standard, High Efficiency Video Coding (HEVC), is proposed for HS images. An HS image presents a wealth of data where every pixel is considered a vector over the spectral bands. By quantitative comparison and analysis of the pixel vector distribution along spectral bands, we conclude that modelling can predict the distribution and correlation of the pixel vectors for different bands. To exploit the distribution of the known pixel vectors, we estimate a predicted current spectral band from the previous bands using Gaussian mixture-based modelling. The predicted band is used as an additional reference band, together with the immediately previous band, when we apply HEVC. Every spectral band of an HS image is treated as if it were an individual frame of a video. In this paper, we compare the proposed method with mainstream encoders. The experimental results are supported by three types of HS datasets with different wavelength ranges. The proposed method outperforms the existing mainstream HS encoders in terms of rate-distortion performance of HS image compression. PMID:27695102
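
    The band-prediction idea can be sketched with a Gaussian mixture over (previous band, current band) pixel pairs, predicting the current band as the mixture's conditional mean. This is only the modelling step, not the paper's RPM model or its HEVC integration; the data are synthetic and scikit-learn is assumed available.

    ```python
    # Gaussian mixture-based prediction of one spectral band from the previous.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    prev = rng.uniform(0, 1, 5000)                 # reflectance in band b-1
    cur = 0.8 * prev + 0.1 + 0.02 * rng.normal(size=prev.size)  # band b

    gm = GaussianMixture(n_components=3, random_state=0)
    gm.fit(np.column_stack([prev, cur]))

    def predict_band(x):
        # conditional mean of the mixture given the previous-band value x
        mu, cov, w = gm.means_, gm.covariances_, gm.weights_
        # responsibilities from the marginal over the previous band
        marg = [w[k] * np.exp(-0.5 * (x - mu[k, 0]) ** 2 / cov[k, 0, 0])
                / np.sqrt(cov[k, 0, 0]) for k in range(len(w))]
        r = np.array(marg) / np.sum(marg, axis=0)
        cond = [mu[k, 1] + cov[k, 1, 0] / cov[k, 0, 0] * (x - mu[k, 0])
                for k in range(len(w))]
        return np.sum(r * np.array(cond), axis=0)

    x = np.array([0.2, 0.5, 0.9])
    print(predict_band(x))                         # roughly 0.8 * x + 0.1
    ```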

  5. GenDecoder: genetic code prediction for metazoan mitochondria.

    PubMed

    Abascal, Federico; Zardoya, Rafael; Posada, David

    2006-07-01

    Although the majority of organisms use the same genetic code to translate DNA, several variants have been described in a wide range of organisms, both in nuclear and organellar systems, many of them corresponding to metazoan mitochondria. These variants are usually found by comparative sequence analyses, either conducted manually or with the computer. Basically, when a particular codon in a query species is linked to positions for which a specific amino acid is consistently found in other species, then that particular codon is expected to translate as that specific amino acid. Importantly, and despite the simplicity of this approach, there are no available tools to help predict the genetic code of an organism. We present here GenDecoder, a web server for the characterization and prediction of mitochondrial genetic codes in animals. The analysis of automatic predictions for 681 metazoans allowed us to study some properties of the comparative method, in particular, the relationship among sequence conservation, taxonomic sampling and reliability of assignments. Overall, the method is highly precise (99%), although highly divergent organisms such as platyhelminths are more problematic. The GenDecoder web server is freely available from http://darwin.uvigo.es/software/gendecoder.html.
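
    A toy version of the comparative rule just described: if the alignment columns matching a query codon consistently show one amino acid in other species, predict that translation. The data and the support threshold are hypothetical.

    ```python
    # Comparative prediction of codon meaning from aligned amino acids (toy).
    from collections import Counter

    # for each occurrence of a query codon, the amino acids observed in other
    # species at the same alignment column
    occurrences = {
        "AGA": [["S", "S", "S"], ["S", "S", "G"], ["S", "S", "S"]],
        "ATA": [["M", "M", "M"], ["M", "I", "M"]],
    }
    for codon, cols in occurrences.items():
        counts = Counter(aa for col in cols for aa in col)
        aa, n = counts.most_common(1)[0]
        support = n / sum(counts.values())
        if support >= 0.8:             # assign only with strong consensus
            print(f"{codon} -> {aa} (support {support:.2f})")
        else:
            print(f"{codon} -> unassigned (support {support:.2f})")
    ```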

  6. Was Wright right? The canonical genetic code is an empirical example of an adaptive peak in nature; deviant genetic codes evolved using adaptive bridges.

    PubMed

    Seaborg, David M

    2010-08-01

    The canonical genetic code is on a sub-optimal adaptive peak with respect to its ability to minimize errors, and is close to, but not quite, optimal. This is demonstrated by the near-total adjacency of synonymous codons, the similarity of adjacent codons, and comparisons of frequency of amino acid usage with number of codons in the code for each amino acid. As a rare empirical example of an adaptive peak in nature, it shows adaptive peaks are real, not merely theoretical. The evolution of deviant genetic codes illustrates how populations move from a lower to a higher adaptive peak. This is done by the use of "adaptive bridges," neutral pathways that cross over maladaptive valleys by virtue of masking of the phenotypic expression of some maladaptive aspects in the genotype. This appears to be the general mechanism by which populations travel from one adaptive peak to another. There are multiple routes a population can follow to cross from one adaptive peak to another. These routes vary in the probability that they will be used, and this probability is determined by the number and nature of the mutations that happen along each of the routes. A modification of the depiction of adaptive landscapes showing genetic distances and probabilities of travel along their multiple possible routes would throw light on this important concept.

  7. Customizing Countermeasure Prescriptions using Predictive Measures of Sensorimotor Adaptability

    NASA Technical Reports Server (NTRS)

    Bloomberg, J. J.; Peters, B. T.; Mulavara, A. P.; Miller, C. A.; Batson, C. D.; Wood, S. J.; Guined, J. R.; Cohen, H. S.; Buccello-Stout, R.; DeDios, Y. E.; Kofman, I. S.; Szecsy, D. L.; Erdeniz, B.; Koppelmans, V.; Seidler, R. D.

    2014-01-01

    Astronauts experience sensorimotor disturbances during the initial exposure to microgravity and during the readaptation phase following a return to a gravitational environment. These alterations may lead to disruption in the ability to perform mission critical functional tasks during and after these gravitational transitions. Astronauts show significant inter-subject variation in adaptive capability following gravitational transitions. The ability to predict the manner and degree to which each individual astronaut will be affected would improve the effectiveness of a countermeasure comprised of a training program designed to enhance sensorimotor adaptability. Due to this inherent individual variability we need to develop predictive measures of sensorimotor adaptability that will allow us to predict, before actual space flight, which crewmember will experience challenges in adaptive capacity. Thus, obtaining this information will allow us to design and implement better sensorimotor adaptability training countermeasures that will be customized for each crewmember's unique adaptive capabilities. Therefore the goals of this project are to: 1) develop a set of predictive measures capable of identifying individual differences in sensorimotor adaptability, and 2) use this information to design sensorimotor adaptability training countermeasures that are customized for each crewmember's individual sensorimotor adaptive characteristics. To achieve these goals we are currently pursuing the following specific aims: Aim 1: Determine whether behavioral metrics of individual sensory bias predict sensorimotor adaptability. For this aim, subjects perform tests that delineate individual sensory biases in tests of visual, vestibular, and proprioceptive function. Aim 2: Determine if individual capability for strategic and plastic-adaptive responses predicts sensorimotor adaptability. For this aim, each subject's strategic and plastic-adaptive motor learning abilities are assessed using

  8. Efficient Prediction Structures for H.264 Multi View Coding Using Temporal Scalability

    NASA Astrophysics Data System (ADS)

    Guruvareddiar, Palanivel; Joseph, Biju K.

    2014-03-01

    Prediction structures with "disposable view components based" hierarchical coding have been proven to be efficient for H.264 multiview coding. Though these prediction structures along with the QP cascading schemes provide superior compression efficiency when compared to the traditional IBBP coding scheme, the temporal scalability requirements of the bit stream could not be fully met. On the other hand, a fully scalable bit stream, obtained by "temporal identifier based" hierarchical coding, provides a number of advantages including bit rate adaptations and improved error resilience, but lacks in compression efficiency when compared to the former scheme. In this paper it is proposed to combine the two approaches such that a fully scalable bit stream could be realized with minimal reduction in compression efficiency when compared to state-of-the-art "disposable view components based" hierarchical coding. Simulation results show that the proposed method enables full temporal scalability with a maximum BDPSNR reduction of only 0.34 dB. A novel method has also been proposed for identifying the temporal identifier of legacy H.264/AVC base layer packets. Simulation results also show that this enables the scenario where the enhancement views could be extracted at a lower frame rate (1/2 or 1/4 of the base view) with an average extraction time for a view component of only 0.38 ms.
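
    In a dyadic hierarchical GOP, the temporal identifier follows directly from the frame index, and extracting a lower frame rate keeps only layers up to a chosen id; a sketch with an assumed GOP size of 8 follows (the assignment rule is the standard dyadic one, not the paper's specific method for legacy packets).

    ```python
    # Temporal-identifier assignment and sub-stream extraction in a dyadic GOP.
    def temporal_id(frame_idx, gop=8):
        if frame_idx % gop == 0:
            return 0                       # key frames form the base layer
        tid, step = 1, gop // 2
        while frame_idx % step:
            tid, step = tid + 1, step // 2
        return tid

    frames = list(range(9))
    for max_tid in range(4):               # extract progressively higher rates
        kept = [f for f in frames if temporal_id(f) <= max_tid]
        print(f"max_tid={max_tid}: frames {kept}")
    ```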

  9. AGR-1 Safety Test Predictions using the PARFUME code

    SciTech Connect

    Blaise Collin

    2012-05-01

    The PARFUME modeling code was used to predict failure probability of TRISO-coated fuel particles and diffusion of fission products through these particles during safety tests following the first irradiation test of the Advanced Gas Reactor program (AGR-1). These calculations support the AGR-1 Safety Testing Experiment, which is part of the PIE effort on AGR-1. Modeling of the AGR-1 Safety Test Predictions includes a 620-day irradiation followed by a 300-hour heat-up phase of selected AGR-1 compacts. Results include fuel failure probability, palladium penetration, and fractional release of fission products. Results show that no particle failure is predicted during irradiation or heat-up, and that fractional release of fission products is limited during irradiation but that it significantly increases during heat-up.

  10. Sonic boom predictions using a modified Euler code

    NASA Technical Reports Server (NTRS)

    Siclari, Michael J.

    1992-01-01

    The environmental impact of a next generation fleet of high-speed civil transports (HSCT) is of great concern in the evaluation of the commercial development of such a transport. One of the potential environmental impacts of a high speed civilian transport is the sonic boom generated by the aircraft and its effects on the population, wildlife, and structures in the vicinity of its flight path. If an HSCT aircraft is restricted from flying overland routes due to excessive booms, the commercial feasibility of such a venture may be questionable. NASA has taken the lead in evaluating and resolving the issues surrounding the development of a high speed civilian transport through its High-Speed Research Program (HSRP). The present paper discusses the usage of a Computational Fluid Dynamics (CFD) nonlinear code in predicting the pressure signature and ultimately the sonic boom generated by a high speed civilian transport. NASA had designed, built, and wind tunnel tested two low boom configurations for flight at Mach 2 and Mach 3. Experimental data was taken at several distances from these models up to a body length from the axis of the aircraft. The near field experimental data serves as a test bed for computational fluid dynamic codes in evaluating their accuracy and reliability for predicting the behavior of future HSCT designs. Sonic boom prediction methodology exists which is based on modified linear theory. These methods can be used reliably if near field signatures are available at distances from the aircraft where nonlinear and three dimensional effects have diminished in importance. Up to the present time, the only reliable method to obtain this data was via the wind tunnel with costly model construction and testing. It is the intent of the present paper to apply a modified three dimensional Euler code to predict the near field signatures of the two low boom configurations recently tested by NASA.

  11. Genome-environment associations in sorghum landraces predict adaptive traits

    PubMed Central

    Lasky, Jesse R.; Upadhyaya, Hari D.; Ramu, Punna; Deshpande, Santosh; Hash, C. Tom; Bonnette, Jason; Juenger, Thomas E.; Hyma, Katie; Acharya, Charlotte; Mitchell, Sharon E.; Buckler, Edward S.; Brenton, Zachary; Kresovich, Stephen; Morris, Geoffrey P.

    2015-01-01

    Improving environmental adaptation in crops is essential for food security under global change, but phenotyping adaptive traits remains a major bottleneck. If associations between single-nucleotide polymorphism (SNP) alleles and environment of origin in crop landraces reflect adaptation, then these could be used to predict phenotypic variation for adaptive traits. We tested this proposition in the global food crop Sorghum bicolor, characterizing 1943 georeferenced landraces at 404,627 SNPs and quantifying allelic associations with bioclimatic and soil gradients. Environment explained a substantial portion of SNP variation, independent of geographical distance, and genic SNPs were enriched for environmental associations. Further, environment-associated SNPs predicted genotype-by-environment interactions under experimental drought stress and aluminum toxicity. Our results suggest that genomic signatures of environmental adaptation may be useful for crop improvement, enhancing germplasm identification and marker-assisted selection. Together, genome-environment associations and phenotypic analyses may reveal the basis of environmental adaptation. PMID:26601206

  13. Development of a massively parallel parachute performance prediction code

    SciTech Connect

    Peterson, C.W.; Strickland, J.H.; Wolfe, W.P.; Sundberg, W.D.; McBride, D.D.

    1997-04-01

    The Department of Energy has given Sandia full responsibility for the complete life cycle (cradle to grave) of all nuclear weapon parachutes. Sandia National Laboratories is initiating development of a complete numerical simulation of parachute performance, beginning with parachute deployment and continuing through inflation and steady state descent. The purpose of the parachute performance code is to predict the performance of stockpile weapon parachutes as these parachutes continue to age well beyond their intended service life. A new massively parallel computer will provide unprecedented speed and memory for solving this complex problem, and new software will be written to treat the coupled fluid, structure and trajectory calculations as part of a single code. Verification and validation experiments have been proposed to provide the necessary confidence in the computations.

  14. Automatic network-adaptive ultra-low-bit-rate video coding

    NASA Astrophysics Data System (ADS)

    Chien, Wei-Jung; Lam, Tuyet-Trang; Abousleman, Glen P.; Karam, Lina J.

    2006-05-01

    This paper presents a software-only, real-time video coder/decoder (codec) for use with low-bandwidth channels where the bandwidth is unknown or varies with time. The codec incorporates a modified JPEG2000 core and interframe predictive coding, and can operate with network bandwidths of less than 1 kbit/second. The encoder and decoder establish two virtual connections over a single IP-based communications link. The first connection is a UDP/IP guaranteed-throughput link, used to transmit the compressed video stream in real time, while the second is a TCP/IP guaranteed-delivery link, used for two-way control and compression-parameter updating. The TCP/IP link serves as a virtual feedback channel and enables the decoder to instruct the encoder to throttle back the transmission bit rate in response to the measured packet loss ratio. It also enables either side to initiate on-the-fly parameter updates such as bit rate, frame rate, frame size, and correlation parameter, among others. The codec also incorporates frame-rate throttling whereby the number of frames decoded is adjusted based upon the available processing resources. Thus, the proposed codec is capable of automatically adjusting the transmission bit rate and decoding frame rate to adapt to any network scenario. Video coding results for a variety of network bandwidths and configurations are presented to illustrate the capabilities of the proposed video coding system.
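
    The abstract does not publish the exact throttling rule, so the following is only a hypothetical sketch of the feedback policy: the decoder reports the measured packet loss ratio over the TCP/IP control connection, and the encoder adjusts its target bit rate accordingly (all thresholds below are invented for illustration):

        # Hypothetical rate-throttling rule driven by the reported loss ratio.
        def next_bit_rate(current_bps, packet_loss_ratio,
                          min_bps=500, max_bps=64000):
            if packet_loss_ratio > 0.10:      # heavy loss: back off sharply
                target = current_bps * 0.5
            elif packet_loss_ratio > 0.02:    # mild loss: back off gently
                target = current_bps * 0.9
            else:                             # clean channel: probe upward
                target = current_bps * 1.05
            return max(min_bps, min(max_bps, target))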

  15. Prediction of the space adaptation syndrome

    NASA Technical Reports Server (NTRS)

    Reschke, M. F.; Homick, J. L.; Ryan, P.; Moseley, E. C.

    1984-01-01

    The univariate and multivariate relationships of provocative measures used to produce motion sickness symptoms were described. Normative subjects were used to develop and cross-validate sets of linear equations that optimally predict motion sickness in parabolic flights. The possibility of reducing the number of measurements required for prediction was assessed. After describing the variables verbally and statistically for 159 subjects, a factor analysis of 27 variables was completed to improve understanding of the relationships between variables and to reduce the number of measures for prediction purposes. The results of this analysis show that none of the variables are significantly related to the responses to parabolic flights. A set of variables was selected to predict responses to KC-135 flights. A series of discriminant analyses was completed. Results indicate that low, moderate, or severe susceptibility could be correctly predicted 64 percent and 53 percent of the time in the original and cross-validation samples, respectively. Both the factor analysis and the discriminant analysis provided no basis for reducing the number of tests.

  16. Interpersonal predictive coding, not action perception, is impaired in autism.

    PubMed

    von der Lühe, T; Manera, V; Barisic, I; Becchio, C; Vogeley, K; Schilbach, L

    2016-05-01

    This study was conducted to examine interpersonal predictive coding in individuals with high-functioning autism (HFA). Healthy and HFA participants observed point-light displays of two agents (A and B) performing separate actions. In the 'communicative' condition, the action performed by agent B responded to a communicative gesture performed by agent A. In the 'individual' condition, agent A's communicative action was substituted by a non-communicative action. Using a simultaneous masking-detection task, we demonstrate that observing agent A's communicative gesture enhanced visual discrimination of agent B for healthy controls, but not for participants with HFA. These results were not explained by differences in attentional factors as measured via eye-tracking, or by differences in the recognition of the point-light actions employed. Our findings, therefore, suggest that individuals with HFA are impaired in the use of social information to predict others' actions and provide behavioural evidence that such deficits could be closely related to impairments of predictive coding. PMID:27069050

  18. Capacity achieving nonbinary LDPC coded non-uniform shaping modulation for adaptive optical communications.

    PubMed

    Lin, Changyu; Zou, Ding; Liu, Tao; Djordjevic, Ivan B

    2016-08-01

    A mutual information inspired nonbinary coded modulation design with non-uniform shaping is proposed. Instead of traditional power-of-two signal constellation sizes, we design 5-QAM, 7-QAM and 9-QAM constellations, which can be used in adaptive optical networks. The non-uniform shaping and the LDPC code rate are jointly considered in the design, which yields a scheme with better performance at the same SNR. The matched nonbinary (NB) LDPC code is used for this scheme, which further improves the coding gain and the overall performance. We analyze both coding performance and system SNR performance. We show that the proposed NB LDPC-coded 9-QAM has more than 2 dB gain in symbol SNR compared to traditional LDPC-coded star-8-QAM. On the other hand, the proposed NB LDPC-coded 5-QAM and 7-QAM perform even better than LDPC-coded QPSK. PMID:27505775

  20. Spatiotemporal Spike Coding of Behavioral Adaptation in the Dorsal Anterior Cingulate Cortex.

    PubMed

    Logiaco, Laureline; Quilodran, René; Procyk, Emmanuel; Arleo, Angelo

    2015-08-01

    The frontal cortex controls behavioral adaptation in environments governed by complex rules. Many studies have established the relevance of firing rate modulation after informative events signaling whether and how to update the behavioral policy. However, whether the spatiotemporal features of these neuronal activities contribute to encoding imminent behavioral updates remains unclear. We investigated this issue in the dorsal anterior cingulate cortex (dACC) of monkeys while they adapted their behavior based on their memory of feedback from past choices. We analyzed spike trains of both single units and pairs of simultaneously recorded neurons using an algorithm that emulates different biologically plausible decoding circuits. This method permits the assessment of the performance of both spike-count and spike-timing sensitive decoders. In response to the feedback, single neurons emitted stereotypical spike trains whose temporal structure identified informative events with higher accuracy than mere spike count. The optimal decoding time scale was in the range of 70-200 ms, which is significantly shorter than the memory time scale required by the behavioral task. Importantly, the temporal spiking patterns of single units were predictive of the monkeys' behavioral response time. Furthermore, some features of these spiking patterns often varied between jointly recorded neurons. Altogether, our results suggest that dACC drives behavioral adaptation through complex spatiotemporal spike coding. They also indicate that downstream networks, which decode dACC feedback signals, are unlikely to act as mere neural integrators. PMID:26266537

  1. Development of a predictive code for aircrew radiation exposure.

    PubMed

    McCall, M J; Lemay, F; Bean, M R; Lewis, B J; Bennett, L G I

    2009-10-01

    Using the empirical data measured by the Royal Military College with a tissue equivalent proportional counter, a model was derived to allow for the interpolation of the dose rate for any global position, altitude and date. Through integration of the dose-rate function over a great circle flight path or between various waypoints, a Predictive Code for Aircrew Radiation Exposure (PCAire) was further developed to provide an estimate of the total dose equivalent on any route worldwide at any period in the solar cycle.
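
    A minimal sketch of the route-integration step is given below, assuming a dose_rate(lat, lon, altitude) function as a placeholder for PCAire's fitted model (whose form is not given in the abstract) and a great-circle path with distinct endpoints sampled at n points:

        import math

        def great_circle_points(lat1, lon1, lat2, lon2, n):
            # Spherical linear interpolation between two waypoints (degrees).
            la1, lo1, la2, lo2 = map(math.radians, (lat1, lon1, lat2, lon2))
            d = math.acos(math.sin(la1)*math.sin(la2) +
                          math.cos(la1)*math.cos(la2)*math.cos(lo2 - lo1))
            for i in range(n + 1):
                f = i / n
                a = math.sin((1 - f)*d) / math.sin(d)
                b = math.sin(f*d) / math.sin(d)
                x = a*math.cos(la1)*math.cos(lo1) + b*math.cos(la2)*math.cos(lo2)
                y = a*math.cos(la1)*math.sin(lo1) + b*math.cos(la2)*math.sin(lo2)
                z = a*math.sin(la1) + b*math.sin(la2)
                yield (math.degrees(math.atan2(z, math.hypot(x, y))),
                       math.degrees(math.atan2(y, x)))

        def route_dose(lat1, lon1, lat2, lon2, altitude, hours, dose_rate, n=100):
            # Trapezoidal integration of the dose rate over the flight time.
            pts = great_circle_points(lat1, lon1, lat2, lon2, n)
            rates = [dose_rate(lat, lon, altitude) for lat, lon in pts]
            dt = hours / n
            return sum((rates[i] + rates[i + 1]) * dt / 2 for i in range(n))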

  2. Structured Set Intra Prediction With Discriminative Learning in a Max-Margin Markov Network for High Efficiency Video Coding

    PubMed Central

    Dai, Wenrui; Xiong, Hongkai; Jiang, Xiaoqian; Chen, Chang Wen

    2014-01-01

    This paper proposes a novel model on intra coding for High Efficiency Video Coding (HEVC), which simultaneously predicts blocks of pixels with optimal rate distortion. It utilizes the spatial statistical correlation for the optimal prediction based on 2-D contexts, in addition to formulating the data-driven structural interdependences to make the prediction error coherent with the probability distribution, which is desirable for successful transform and coding. The structured set prediction model incorporates a max-margin Markov network (M3N) to regulate and optimize multiple block predictions. The model parameters are learned by discriminating the actual pixel value from other possible estimates to maximize the margin (i.e., decision boundary bandwidth). Compared to existing methods that focus on minimizing prediction error, the M3N-based model adaptively maintains the coherence for a set of predictions. Specifically, the proposed model concurrently optimizes a set of predictions by associating the loss for individual blocks to the joint distribution of succeeding discrete cosine transform coefficients. When the sample size grows, the prediction error is asymptotically upper bounded by the training error under the decomposable loss function. As an internal step, we optimize the underlying Markov network structure to find states that achieve the maximal energy using expectation propagation. For validation, we integrate the proposed model into HEVC for optimal mode selection on rate-distortion optimization. The proposed prediction model obtains up to 2.85% bit rate reduction and achieves better visual quality in comparison to the HEVC intra coding. PMID:25505829

  3. Arc Jet Facility Test Condition Predictions Using the ADSI Code

    NASA Technical Reports Server (NTRS)

    Palmer, Grant; Prabhu, Dinesh; Terrazas-Salinas, Imelda

    2015-01-01

    The Aerothermal Design Space Interpolation (ADSI) tool is used to interpolate databases of previously computed computational fluid dynamics solutions for test articles in a NASA Ames arc jet facility. The arc jet databases are generated with a Navier-Stokes flow solver using previously determined best practices. The arc jet mass flow rates and arc currents used to discretize the database are chosen to span the operating conditions possible in the arc jet, and are based on previous arc jet experimental conditions where possible. The ADSI code is a database interpolation, manipulation, and examination tool that can be used to estimate the stagnation-point pressure and heating rate for user-specified values of arc jet mass flow rate and arc current. The interpolation can also be performed in the other direction (predicting the mass flow and current needed to achieve a desired stagnation-point pressure and heating rate). ADSI is also used to generate 2-D response surfaces of stagnation-point pressure and heating rate as a function of mass flow rate and arc current (or vice versa). Arc jet test data are used to assess the predictive capability of the ADSI code.
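
    A minimal stand-in for the interpolation idea is a bilinear lookup of stagnation-point heat flux on a (mass flow, arc current) grid; the numbers below are invented for illustration, whereas ADSI's tables come from CFD solutions:

        import numpy as np

        mdot_grid = np.array([0.1, 0.2, 0.4])           # kg/s (illustrative)
        amps_grid = np.array([2000.0, 4000.0, 6000.0])  # A   (illustrative)
        heat_flux = np.array([[ 50.,  90., 120.],       # W/cm^2, rows: mdot
                              [ 80., 140., 190.],
                              [120., 210., 280.]])

        def interp2(grid_x, grid_y, table, x, y):
            # Bilinear interpolation inside the tabulated operating envelope.
            i = np.searchsorted(grid_x, x) - 1
            j = np.searchsorted(grid_y, y) - 1
            tx = (x - grid_x[i]) / (grid_x[i+1] - grid_x[i])
            ty = (y - grid_y[j]) / (grid_y[j+1] - grid_y[j])
            return ((1-tx)*(1-ty)*table[i, j]  + tx*(1-ty)*table[i+1, j] +
                    (1-tx)*ty*table[i, j+1]    + tx*ty*table[i+1, j+1])

        print(interp2(mdot_grid, amps_grid, heat_flux, 0.15, 3000.0))  # -> 90.0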

  4. DCT/DST-based transform coding for intra prediction in image/video coding.

    PubMed

    Saxena, Ankur; Fernandes, Felix C

    2013-10-01

    In this paper, we present a DCT/DST-based transform scheme that applies either the conventional DCT or the type-7 DST for all the video-coding intra-prediction modes: vertical, horizontal, and oblique. Our approach is applicable to any block-based intra-prediction scheme in a codec that employs transforms along the horizontal and vertical directions separably. Previously, Han, Saxena, and Rose showed that for the intra-predicted residuals of the horizontal and vertical modes, the DST is the optimal transform with performance close to the KLT. Here, we prove that this is indeed the case for the other oblique modes. The optimal choice between DCT and DST is based on the intra-prediction mode and requires no additional signaling information or rate-distortion search. The DCT/DST scheme presented in this paper was adopted in the HEVC standardization in March 2011. Further simplifications, especially to reduce implementation complexity, which remove the mode-dependency between DCT and DST and simply always use the DST for 4 × 4 intra luma blocks, were adopted in the HEVC standard in July 2012. Simulations of the DCT/DST algorithm were conducted in the reference software of the ongoing HEVC standardization. Our results show that the DCT/DST scheme provides significant BD-rate improvement over the conventional DCT-based scheme for intra prediction in video sequences.
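
    The mode-dependent transform choice can be sketched as follows. The matrices are the textbook real-valued DCT-II and DST-VII (HEVC uses scaled integer approximations), and the mapping from mode to transform shown in the comments conveys the idea rather than the standard's exact table:

        import numpy as np

        def dst7(N):
            # Type-7 DST basis: suits residuals that grow with distance from
            # the predicted edge.
            i, j = np.meshgrid(np.arange(N), np.arange(N), indexing='ij')
            return 2/np.sqrt(2*N + 1) * np.sin(np.pi*(i + 1)*(2*j + 1)/(2*N + 1))

        def dct2(N):
            # Orthonormal DCT-II basis.
            i, j = np.meshgrid(np.arange(N), np.arange(N), indexing='ij')
            m = np.sqrt(2.0/N) * np.cos(np.pi*i*(2*j + 1)/(2*N))
            m[0] /= np.sqrt(2)
            return m

        def transform_residual(block, vertical_dst, horizontal_dst):
            # Separable transform: each direction uses the DST when the
            # prediction along that direction comes from the adjacent edge.
            N = block.shape[0]
            tv = dst7(N) if vertical_dst else dct2(N)
            th = dst7(N) if horizontal_dst else dct2(N)
            return tv @ block @ th.T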

  5. Adaptations in a Community-Based Family Intervention: Replication of Two Coding Schemes.

    PubMed

    Cooper, Brittany Rhoades; Shrestha, Gitanjali; Hyman, Leah; Hill, Laura

    2016-02-01

    Although program adaptation is a reality in community-based implementations of evidence-based programs, much of the discussion about adaptation remains theoretical. The primary aim of this study was to replicate two coding systems to examine adaptations in large-scale, community-based disseminations of the Strengthening Families Program for Parents and Youth 10-14, a family-based substance use prevention program. Our second aim was to explore intersections between various dimensions of facilitator-reported adaptations from these two coding systems. Our results indicate that only a few types of adaptations and a few reasons accounted for a majority (over 70%) of all reported adaptations. We also found that most adaptations were logistical, reactive, and not aligned with the program's goals. In many ways, our findings replicate those of the original studies, suggesting the two coding systems are robust even when applied to self-reported data collected from community-based implementations. Our findings on the associations between adaptation dimensions can inform future studies assessing the relationship between adaptations and program outcomes. Studies of local adaptations, like the present one, should help researchers, program developers, and policymakers better understand the issues faced by implementers and guide efforts related to program development, transferability, and sustainability. PMID:26661413

  6. Adaptive Mesh Refinement Algorithms for Parallel Unstructured Finite Element Codes

    SciTech Connect

    Parsons, I D; Solberg, J M

    2006-02-03

    This project produced algorithms for and software implementations of adaptive mesh refinement (AMR) methods for solving practical solid and thermal mechanics problems on multiprocessor parallel computers using unstructured finite element meshes. The overall goal is to provide computational solutions that are accurate to some prescribed tolerance, and adaptivity is the correct path toward this goal. These new tools will enable analysts to conduct more reliable simulations at reduced cost, both in terms of analyst and computer time. Previous academic research in the field of adaptive mesh refinement has produced a voluminous literature focused on error estimators and demonstration problems; relatively little progress has been made on producing efficient implementations suitable for large-scale problem solving on state-of-the-art computer systems. Research issues that were considered include: effective error estimators for nonlinear structural mechanics; local meshing at irregular geometric boundaries; and constructing efficient software for parallel computing environments.

  7. Efficient Coding Theory Predicts a Tilt Aftereffect from Viewing Untilted Patterns.

    PubMed

    May, Keith A; Zhaoping, Li

    2016-06-20

    The brain is bombarded with a continuous stream of sensory information, but biological limitations on the data-transmission rate require this information to be encoded very efficiently [1]. Li and Atick [2] proposed that the two eyes' signals are coded efficiently in the brain using mutually decorrelated binocular summation and differencing channels; when a channel is strongly stimulated by the visual input, such that sensory noise is negligible, the channel should undergo temporary desensitization (known as adaptation). To date, the evidence for this theory has been limited [3, 4], and the binocular differencing channel is missing from many models of binocular integration [5-10]. Li and Atick's theory makes the remarkable prediction that perceived direction of tilt (clockwise or counterclockwise) of a test pattern can be controlled by pre-exposing observers to visual adaptation patterns that are untilted or even have no orientation signal. Here, we confirm this prediction. Each test pattern consisted of different images presented to the two eyes such that the binocular summation and difference signals were tilted in opposite directions, to give ambiguous information about tilt; by selectively desensitizing one or other of the binocular channels using untilted or non-oriented binocular adaptation patterns, we controlled the perceived tilt of the test pattern. Our results provide compelling evidence that the brain contains binocular summation and differencing channels that adapt to the prevailing binocular statistics.

  8. Adaptation reduces variability of the neuronal population code

    NASA Astrophysics Data System (ADS)

    Farkhooi, Farzad; Muller, Eilif; Nawrot, Martin P.

    2011-05-01

    Sequences of events in noise-driven excitable systems with slow variables often show serial correlations among their intervals of events. Here, we employ a master equation for generalized non-renewal processes to calculate the interval and count statistics of superimposed processes governed by a slow adaptation variable. For an ensemble of neurons with spike-frequency adaptation, this results in the regularization of the population activity and an enhanced postsynaptic signal decoding. We confirm our theoretical results in a population of cortical neurons recorded in vivo.

  9. Near-lossless image compression by adaptive prediction: new developments and comparison of algorithms

    NASA Astrophysics Data System (ADS)

    Aiazzi, Bruno; Alparone, Luciano; Baronti, Stefano

    2003-01-01

    This paper describes state-of-the-art approaches to near-lossless image compression by adaptive causal DPCM and presents two advanced schemes based on crisp and fuzzy switching of predictors, respectively. The former relies on a linear-regression prediction in which a different predictor is employed for each image block. Such block-representative predictors are calculated from the original data set through an iterative relaxation-labeling procedure. Coding times are affordable thanks to fast convergence of training. Decoding is always performed in real time. The latter is still based on adaptive MMSE prediction, in which a different predictor at each pixel position is achieved by blending a number of prototype predictors through adaptive weights calculated from the past decoded samples. Quantization error feedback loops are introduced into the basic lossless encoders to enable user-defined upper-bounded reconstruction errors. Both schemes exploit context modeling of prediction errors followed by arithmetic coding to enhance entropy coding performance. A thorough performance comparison on a wide test image set shows the superiority of the proposed schemes over both up-to-date encoders in the literature and new/upcoming standards.
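
    The quantization-error feedback loop that turns a lossless DPCM coder into a near-lossless one with a user-defined error bound can be sketched as below; a trivial left-neighbour predictor stands in for the paper's adaptive predictors:

        import numpy as np

        def near_lossless_dpcm_row(row, delta):
            # Uniform quantizer with step 2*delta + 1 inside the DPCM loop;
            # the predictor sees decoded samples, so errors cannot accumulate
            # and every reconstructed sample stays within +/- delta.
            recon = np.zeros(len(row), dtype=int)
            symbols = []
            prev = 0
            for n, x in enumerate(row):
                q = int(np.round((int(x) - prev) / (2*delta + 1)))
                symbols.append(q)               # entropy-coded in a real coder
                recon[n] = int(np.clip(prev + q*(2*delta + 1), 0, 255))
                prev = recon[n]
            return symbols, recon

        row = [100, 102, 105, 104, 200, 201]
        syms, rec = near_lossless_dpcm_row(row, delta=2)
        assert max(abs(r - x) for r, x in zip(rec, row)) <= 2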

  10. ADAPTION OF NONSTANDARD PIPING COMPONENTS INTO PRESENT DAY SEISMIC CODES

    SciTech Connect

    D. T. Clark; M. J. Russell; R. E. Spears; S. R. Jensen

    2009-07-01

    With spiraling energy demand and flat energy supply, there is a need to extend the life of older nuclear reactors. This sometimes requires that existing systems be evaluated to present day seismic codes. Older reactors built in the 1960s and early 1970s often used fabricated piping components that were code compliant during their initial construction time period, but are outside the standard parameters of present-day piping codes. There are several approaches available to the analyst in evaluating these non-standard components to modern codes. The simplest approach is to use the flexibility factors and stress indices for similar standard components with the assumption that the non-standard component’s flexibility factors and stress indices will be very similar. This approach can require significant engineering judgment. A more rational approach available in Section III of the ASME Boiler and Pressure Vessel Code, which is the subject of this paper, involves calculation of flexibility factors using finite element analysis of the non-standard component. Such analysis allows modeling of geometric and material nonlinearities. Flexibility factors based on these analyses are sensitive to the load magnitudes used in their calculation, load magnitudes that need to be consistent with those produced by the linear system analyses where the flexibility factors are applied. This can lead to iteration, since the magnitude of the loads produced by the linear system analysis depend on the magnitude of the flexibility factors. After the loading applied to the nonstandard component finite element model has been matched to loads produced by the associated linear system model, the component finite element model can then be used to evaluate the performance of the component under the loads with the nonlinear analysis provisions of the Code, should the load levels lead to calculated stresses in excess of Allowable stresses. This paper details the application of component-level finite

  11. A CMOS Imager with Focal Plane Compression using Predictive Coding

    NASA Technical Reports Server (NTRS)

    Leon-Salas, Walter D.; Balkir, Sina; Sayood, Khalid; Schemm, Nathan; Hoffman, Michael W.

    2007-01-01

    This paper presents a CMOS image sensor with focal-plane compression. The design has a column-level architecture and it is based on predictive coding techniques for image decorrelation. The prediction operations are performed in the analog domain to avoid quantization noise and to decrease the area complexity of the circuit. The prediction residuals are quantized and encoded by a joint quantizer/coder circuit. To save area resources, the joint quantizer/coder circuit exploits common circuitry between a single-slope analog-to-digital converter (ADC) and a Golomb-Rice entropy coder. This combination of ADC and encoder allows the integration of the entropy coder at the column level. A prototype chip was fabricated in a 0.35 µm CMOS process. The output of the chip is a compressed bit stream. The test chip occupies a silicon area of 2.60 mm × 5.96 mm, which includes an 80 × 44 APS array. Tests of the fabricated chip demonstrate the validity of the design.
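
    Golomb-Rice coding suits DPCM residuals because they are sharply peaked around zero. A software sketch of the encoder/decoder pair follows; the parameter k and the signed-to-unsigned mapping are standard practice, not details taken from the chip:

        def rice_encode(value, k):
            # Zigzag-map the signed residual, then emit a unary quotient
            # followed by k binary remainder bits.
            u = 2*value if value >= 0 else -2*value - 1
            q, r = u >> k, u & ((1 << k) - 1)
            return '1'*q + '0' + (format(r, '0{}b'.format(k)) if k else '')

        def rice_decode(bits, k):
            q = bits.index('0')                  # count the leading ones
            r = int(bits[q+1:q+1+k], 2) if k else 0
            u = (q << k) | r
            return u >> 1 if u % 2 == 0 else -((u + 1) >> 1)

        assert rice_encode(3, 2) == '1010'
        assert rice_decode(rice_encode(-5, 2), 2) == -5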

  12. Adaptive face space coding in congenital prosopagnosia: typical figural aftereffects but abnormal identity aftereffects.

    PubMed

    Palermo, Romina; Rivolta, Davide; Wilson, C Ellie; Jeffery, Linda

    2011-12-01

    People with congenital prosopagnosia (CP) report difficulty recognising faces in everyday life and perform poorly on face recognition tests. Here, we investigate whether impaired adaptive face space coding might contribute to poor face recognition in CP. To pinpoint how adaptation may affect face processing, a group of CPs and matched controls completed two complementary face adaptation tasks: the figural aftereffect, which reflects adaptation to general distortions of shape, and the identity aftereffect, which directly taps the mechanisms involved in the discrimination of different face identities. CPs displayed a typical figural aftereffect, consistent with evidence that they are able to process some shape-based information from faces, e.g., cues to discriminate sex. CPs also demonstrated a significant identity aftereffect. However, unlike that of controls, CPs' impression of the identity of the neutral average face was not significantly shifted by adaptation, suggesting that adaptive coding of identity is abnormal in CP. In sum, CPs show reduced aftereffects but only when the task directly taps the use of face norms used to code individual identity. This finding of a reduced face identity aftereffect in individuals with severe face recognition problems is consistent with suggestions that adaptive coding may have a functional role in face recognition. PMID:21986295

  14. Adaptive Zero-Coefficient Distribution Scan for Inter Block Mode Coding of H.264/AVC

    NASA Astrophysics Data System (ADS)

    Wang, Jing-Xin; Su, Alvin W. Y.

    Scanning quantized transform coefficients is an important tool for video coding. For example, the MPEG-4 video coder adopts three different scans to get better coding efficiency. This paper proposes an adaptive zero-coefficient distribution scan in inter block coding. The proposed method attempts to improve H.264/AVC zero coefficient coding by modifying the scan operation. Since the zero-coefficient distribution is changed by the proposed scan method, new VLC tables for syntax elements used in context-adaptive variable length coding (CAVLC) are also provided. The savings in bit-rate range from 2.2% to 5.1% in the high bit-rate cases, depending on different test sequences.
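
    The core idea can be sketched as follows: maintain, per position, a count of how often that coefficient was non-zero in previously coded blocks, and scan positions in decreasing order of that count so zeros cluster at the tail of the scan. The update rule here is illustrative, not the paper's exact table adaptation:

        import numpy as np

        class AdaptiveScan:
            def __init__(self, block_size=4):
                self.counts = np.zeros(block_size * block_size)

            def order(self):
                # Most-often-non-zero positions first; the stable sort keeps
                # raster order for ties (the initial state).
                return np.argsort(-self.counts, kind='stable')

            def scan(self, coeff_block):
                seq = coeff_block.ravel()[self.order()]
                self.counts += (coeff_block.ravel() != 0)
                return seq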

  15. Comparison of GLIMPS and HFAST Stirling engine code predictions with experimental data

    NASA Technical Reports Server (NTRS)

    Geng, Steven M.; Tew, Roy C.

    1992-01-01

    Predictions from GLIMPS and HFAST design codes are compared with experimental data for the RE-1000 and SPRE free piston Stirling engines. Engine performance and available power loss predictions are compared. Differences exist between GLIMPS and HFAST loss predictions. Both codes require engine specific calibration to bring predictions and experimental data into agreement.

  16. A framework for evaluating wavelet based watermarking for scalable coded digital item adaptation attacks

    NASA Astrophysics Data System (ADS)

    Bhowmik, Deepayan; Abhayaratne, Charith

    2009-02-01

    A framework for evaluating wavelet-based watermarking schemes against scalable coded visual media content adaptation attacks is presented. The framework, Watermark Evaluation Bench for Content Adaptation Modes (WEBCAM), aims to facilitate controlled evaluation of wavelet-based watermarking schemes under MPEG-21 part-7 digital item adaptations (DIA). WEBCAM accommodates all major wavelet-based watermarking schemes in a single generalised framework by considering a global parameter space, from which the optimum parameters for a specific algorithm may be chosen. WEBCAM considers the traversal of media content along various links and the required content adaptations at various nodes of media supply chains. In this paper, the content adaptation is emulated by JPEG2000 coded bit stream extraction for various spatial resolution and quality levels of the content. The proposed framework is beneficial not only as an evaluation tool but also as a design tool for new wavelet-based watermarking algorithms, by picking and mixing available tools and finding the optimum design parameters.

  17. Results and code predictions for ABCOVE (aerosol behavior code validation and evaluation) aerosol code validation: Test AB6 with two aerosol species. [LMFBR]

    SciTech Connect

    Hilliard, R K; McCormack, J C; Muhlestein, L D

    1984-12-01

    A program for aerosol behavior code validation and evaluation (ABCOVE) has been developed in accordance with the LMFBR Safety Program Plan. The ABCOVE program is a cooperative effort between the USDOE, the USNRC, and their contractor organizations currently involved in aerosol code development, testing or application. The second large-scale test in the ABCOVE program, AB6, was performed in the 850-m³ CSTF vessel with a two-species test aerosol. The test conditions simulated the release of a fission product aerosol, NaI, in the presence of a sodium spray fire. Five organizations made pretest predictions of aerosol behavior using seven computer codes. Three of the codes (QUICKM, MAEROS and CONTAIN) were discrete, multiple species codes, while four (HAA-3, HAA-4, HAARM-3 and SOFIA) were log-normal codes which assume uniform coagglomeration of different aerosol species. Detailed test results are presented and compared with the code predictions for seven key aerosol behavior parameters.

  18. Deficits in context-dependent adaptive coding of reward in schizophrenia.

    PubMed

    Kirschner, Matthias; Hager, Oliver M; Bischof, Martin; Hartmann-Riemer, Matthias N; Kluge, Agne; Seifritz, Erich; Tobler, Philippe N; Kaiser, Stefan

    2016-01-01

    Theoretical principles of information processing and empirical findings suggest that to efficiently represent all possible rewards in the natural environment, reward-sensitive neurons have to adapt their coding range dynamically to the current reward context. Adaptation ensures that the reward system is most sensitive for the most likely rewards, enabling the system to efficiently represent a potentially infinite range of reward information. A deficit in neural adaptation would prevent precise representation of rewards and could have detrimental effects for an organism's ability to optimally engage with its environment. In schizophrenia, reward processing is known to be impaired and has been linked to different symptom dimensions. However, despite the fundamental significance of coding reward adaptively, no study has elucidated whether adaptive reward processing is impaired in schizophrenia. We therefore studied patients with schizophrenia (n=27) and healthy controls (n=25), using functional magnetic resonance imaging in combination with a variant of the monetary incentive delay task. Compared with healthy controls, patients with schizophrenia showed less efficient neural adaptation to the current reward context, which leads to imprecise neural representation of reward. Importantly, the deficit correlated with total symptom severity. Our results suggest that some of the deficits in reward processing in schizophrenia might be due to inefficient neural adaptation to the current reward context. Furthermore, because adaptive coding is a ubiquitous feature of the brain, we believe that our findings provide an avenue in defining a general impairment in neural information processing underlying this debilitating disorder. PMID:27430009

  20. AGR-2 safety test predictions using the PARFUME code

    SciTech Connect

    Collin, Blaise P.

    2014-09-01

    This report documents calculations performed to predict failure probability of TRISO-coated fuel particles and diffusion of fission products through these particles during safety tests following the second irradiation test of the Advanced Gas Reactor program (AGR-2). The calculations include the modeling of the AGR-2 irradiation that occurred from June 2010 to October 2013 in the Advanced Test Reactor (ATR) and the modeling of a safety testing phase to support safety tests planned at Oak Ridge National Laboratory and at Idaho National Laboratory (INL) for a selection of AGR-2 compacts. The heat-up of AGR-2 compacts is a critical component of the AGR-2 fuel performance evaluation, and its objectives are to identify the effect of accident test temperature, burnup, and irradiation temperature on the performance of the fuel at elevated temperature. Safety testing of compacts will be followed by detailed examinations of the fuel particles to further evaluate fission product retention and behavior of the kernel and coatings. The modeling was performed using the particle fuel model computer code PARFUME developed at INL. PARFUME is an advanced gas-cooled reactor fuel performance modeling and analysis code (Miller 2009). It has been developed as an integrated mechanistic code that evaluates the thermal, mechanical, and physico-chemical behavior of fuel particles during irradiation to determine the failure probability of a population of fuel particles given the particle-to-particle statistical variations in physical dimensions and material properties that arise from the fuel fabrication process, accounting for all viable mechanisms that can lead to particle failure. The code also determines the diffusion of fission products from the fuel through the particle coating layers, and through the fuel matrix to the coolant boundary. The subsequent release of fission products is calculated at the compact level (release of fission products from the compact). PARFUME calculates the

  1. Vector Sum Excited Linear Prediction (VSELP) speech coding at 4.8 kbps

    NASA Technical Reports Server (NTRS)

    Gerson, Ira A.; Jasiuk, Mark A.

    1990-01-01

    Code Excited Linear Prediction (CELP) speech coders exhibit good performance at data rates as low as 4800 bps. The major drawback of CELP-type coders is their large computational requirements. The Vector Sum Excited Linear Prediction (VSELP) speech coder utilizes a codebook with a structure which allows for a very efficient search procedure. Other advantages of the VSELP codebook structure are discussed, and a detailed description of a 4.8 kbps VSELP coder is given. This coder is an improved version of the VSELP algorithm that finished first in the NSA's evaluation of 4.8 kbps speech coders. The coder uses a subsample-resolution, single-tap long-term predictor, a single VSELP excitation codebook, a novel gain quantizer that is robust to channel errors, and a new adaptive pre/postfilter arrangement.
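
    The vector-sum structure itself is simple to state: each of the 2^M codevectors is a +/-1 combination of M stored basis vectors, so neighbouring codewords differ by a single sign flip, which is what makes the efficient search possible. A sketch with illustrative dimensions:

        import numpy as np

        def vselp_codevector(basis, index):
            # basis: (M, L) array of basis vectors; index in [0, 2**M).
            # Bit m of the index selects the sign of basis vector m.
            M = basis.shape[0]
            signs = np.array([1.0 if (index >> m) & 1 else -1.0
                              for m in range(M)])
            return signs @ basis

        M, L = 7, 40                      # 128 codevectors of 40 samples each
        basis = np.random.default_rng(0).standard_normal((M, L))
        cv = vselp_codevector(basis, 5)
        # Flipping one index bit changes the codevector by +/- 2*basis[m],
        # so a Gray-code ordered search can update correlations incrementally.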

  2. PHURBAS: AN ADAPTIVE, LAGRANGIAN, MESHLESS, MAGNETOHYDRODYNAMICS CODE. I. ALGORITHM

    SciTech Connect

    Maron, Jason L.; McNally, Colin P.; Mac Low, Mordecai-Mark E-mail: cmcnally@amnh.org

    2012-05-01

    We present an algorithm for simulating the equations of ideal magnetohydrodynamics and other systems of differential equations on an unstructured set of points represented by sample particles. Local, third-order, least-squares, polynomial interpolations (Moving Least Squares interpolations) are calculated from the field values of neighboring particles to obtain field values and spatial derivatives at the particle position. Field values and particle positions are advanced in time with a second-order predictor-corrector scheme. The particles move with the fluid, so the time step is not limited by the Eulerian Courant-Friedrichs-Lewy condition. Full spatial adaptivity is implemented to ensure the particles fill the computational volume, which gives the algorithm substantial flexibility and power. A target resolution is specified for each point in space, with particles being added and deleted as needed to meet this target. Particle addition and deletion is based on a local void and clump detection algorithm. Dynamic artificial viscosity fields provide stability to the integration. The resulting algorithm provides a robust solution for modeling flows that require Lagrangian or adaptive discretizations to resolve. This paper derives and documents the Phurbas algorithm as implemented in Phurbas version 1.1. A following paper presents the implementation and test problem results.
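
    The moving-least-squares step at the heart of the method can be illustrated in one dimension (Phurbas itself fits third-order polynomials in 3-D; the Gaussian weight kernel below is an assumption made for the sketch):

        import numpy as np

        def mls_estimate(x_neighbors, f_neighbors, x0, h):
            # Weighted least-squares fit of a quadratic around x0; returns
            # the interpolated value and first derivative at x0.
            dx = np.asarray(x_neighbors) - x0
            w = np.sqrt(np.exp(-(dx / h)**2))        # distance-based weights
            A = np.vander(dx, 3, increasing=True)    # columns: 1, dx, dx^2
            coef, *_ = np.linalg.lstsq(A * w[:, None],
                                       np.asarray(f_neighbors) * w, rcond=None)
            return coef[0], coef[1]                  # f(x0), df/dx at x0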

  3. Predictive coding of depth images across multiple views

    NASA Astrophysics Data System (ADS)

    Morvan, Yannick; Farin, Dirk; de With, Peter H. N.

    2007-02-01

    A 3D video stream is typically obtained from a set of synchronized cameras, which are simultaneously capturing the same scene (multiview video). This technology enables applications such as free-viewpoint video, which allows the viewer to select his preferred viewpoint, or 3D TV, where the depth of the scene can be perceived using a special display. Because the user-selected view does not always correspond to a camera position, it may be necessary to synthesize a virtual camera view. To synthesize such a virtual view, we have adopted a depth image-based rendering technique that employs one depth map for each camera. Consequently, a remote rendering of the 3D video requires a compression technique for texture and depth data. This paper presents a predictive-coding algorithm for the compression of depth images across multiple views. The presented algorithm provides (a) improved coding efficiency for depth images over block-based motion-compensation encoders (H.264), and (b) random access to different views for fast rendering. The proposed depth-prediction technique works by synthesizing/computing the depth of 3D points based on the reference depth image. The attractiveness of the depth-prediction algorithm is that the prediction of depth data avoids an independent transmission of depth for each view, while simplifying the view interpolation by synthesizing depth images for arbitrary view points. We present experimental results for several multiview depth sequences, which show a quality improvement of up to 1.8 dB compared to H.264 compression.
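
    The prediction step can be sketched as a forward warp: every reference depth pixel is back-projected to a 3-D point and re-projected into the target view, keeping the nearest depth where several points land on the same pixel. Camera matrices K, R, t are assumed known, as in the paper's rendering setup:

        import numpy as np

        def predict_depth(depth_ref, K_ref, K_tgt, R, t):
            # Forward-warp a reference depth map into the target view.
            h, w = depth_ref.shape
            pred = np.full((h, w), np.inf)
            Kinv = np.linalg.inv(K_ref)
            for v in range(h):
                for u in range(w):
                    X = depth_ref[v, u] * (Kinv @ np.array([u, v, 1.0]))
                    p = K_tgt @ (R @ X + t)          # project into target view
                    if p[2] <= 0:
                        continue
                    ut, vt = int(round(p[0]/p[2])), int(round(p[1]/p[2]))
                    if 0 <= ut < w and 0 <= vt < h:
                        pred[vt, ut] = min(pred[vt, ut], p[2])  # z-buffer
            return pred                  # inf entries are holes to be filled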

  4. Predicting Adaptive Behavior from the Bayley Scales of Infant Development.

    ERIC Educational Resources Information Center

    Hotard, Stephen; McWhirter, Richard

    To examine the proportion of variance in adaptive functioning predictable from mental ability, chronological age, I.Q., evidence of brain malfunction, seizure medication, and receptive and expressive language scores, 25 severely and profoundly retarded institutionalized persons (2-19 years old) were administered the Bayley Infant Scale Mental…

  5. Effects of adaptation on neural coding by primary sensory interneurons in the cricket cercal system.

    PubMed

    Clague, H; Theunissen, F; Miller, J P

    1997-01-01

    Methods of stochastic systems analysis were applied to examine the effect of adaptation on frequency encoding by two functionally identical primary interneurons of the cricket cercal system. Stimulus reconstructions were obtained from a linear filtering transformation of spike trains elicited in response to bursts of broadband white noise air current stimuli (5-400 Hz). Each linear reconstruction was compared with the actual stimulus in the frequency domain to obtain a measure of waveform coding accuracy as a function of frequency. The term adaptation in this paper refers to the decrease in firing rate of a cell after the onset or increase in power of a white noise stimulus. The increase in firing rate after stimulus offset or decrease in stimulus power is assumed to be a complementary aspect of the same phenomenon. As the spike rate decreased during the course of adaptation, the total amount of information carried about the velocity waveform of the stimulus also decreased. The quality of coding of frequencies between 70 and 400 Hz decreased dramatically. The quality of coding of frequencies between 5 and 70 Hz decreased only slightly or even increased in some cases. The disproportionate loss of information about the higher frequencies could be attributed in part to the more rapid loss of spikes correlated with high-frequency stimulus components than of spikes correlated with low-frequency components. An increase in the responsiveness of a cell to frequencies > 70 Hz was correlated with a decrease in the ability of that cell to encode frequencies in the 5-70 Hz range. This nonlinear property could explain the improvement seen in some cases in the coding accuracy of frequencies between 5 and 70 Hz during the course of adaptation. Waveform coding properties were also characterized for fully adapted neurons at several stimulus intensities. The changes in coding observed through the course of adaptation were similar in nature to those found across stimulus powers.

  6. Image subband coding using context-based classification and adaptive quantization.

    PubMed

    Yoo, Y; Ortega, A; Yu, B

    1999-01-01

    Adaptive compression methods have been a key component of many proposed subband (or wavelet) image coding techniques. This paper deals with a particular type of adaptive subband image coding where we focus on the image coder's ability to adjust itself "on the fly" to the spatially varying statistical nature of image contents. This backward adaptation is distinguished from more frequently used forward adaptation in that forward adaptation selects the best operating parameters from a predesigned set and thus uses considerable amount of side information in order for the encoder and the decoder to operate with the same parameters. Specifically, we present backward adaptive quantization using a new context-based classification technique which classifies each subband coefficient based on the surrounding quantized coefficients. We couple this classification with online parametric adaptation of the quantizer applied to each class. A simple uniform threshold quantizer is employed as the baseline quantizer for which adaptation is achieved. Our subband image coder based on the proposed adaptive classification quantization idea exhibits excellent rate-distortion performance, in particular at very low rates. For popular test images, it is comparable or superior to most of the state-of-the-art coders in the literature.
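
    The essence of backward adaptation is that the classification uses only already-quantized neighbours, so the decoder can reproduce it without side information. A deliberately simplified sketch follows; the paper's context classifier and per-class parametric adaptation are richer than this:

        import numpy as np

        def classify(q_left, q_up, thresholds=(0, 2, 8)):
            # Context activity from previously quantized neighbours -> class.
            activity = abs(q_left) + abs(q_up)
            return sum(activity > t for t in thresholds)      # class 0..3

        def quantize_row(coeffs, steps=(8.0, 4.0, 2.0, 1.0)):
            # Busier contexts (higher class) get finer steps; encoder and
            # decoder derive the same class from the same quantized past.
            q = np.zeros(len(coeffs), dtype=int)
            for n, c in enumerate(coeffs):
                cls = classify(q[n-1] if n else 0, 0)  # 1-D context for brevity
                q[n] = int(np.round(c / steps[cls]))
            return q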

  7. Adaptive uniform grayscale coded aperture design for high dynamic range compressive spectral imaging

    NASA Astrophysics Data System (ADS)

    Diaz, Nelson; Rueda, Hoover; Arguello, Henry

    2016-05-01

    Imaging spectroscopy is an important area with many applications in surveillance, agriculture and medicine. The disadvantage of conventional spectroscopy techniques is that they collect the whole datacube. In contrast, compressive spectral imaging systems capture snapshot compressive projections, which are the input of reconstruction algorithms that yield the underlying datacube. Common compressive spectral imagers use coded apertures to perform the coded projections. The coded apertures are the key elements in these imagers since they define the sensing matrix of the system. The proper design of the coded aperture entries leads to good quality in the reconstruction. In addition, the compressive measurements are prone to saturation due to the limited dynamic range of the sensor, hence the design of coded apertures must take saturation into account. The saturation errors in compressive measurements are unbounded, and compressive sensing recovery algorithms only provide solutions for noise that is bounded, or bounded with high probability. This paper proposes the design of uniform adaptive grayscale coded apertures (UAGCA) to improve the dynamic range of the estimated spectral images by reducing the saturation levels. The saturation is attenuated between snapshots using an adaptive filter which updates the entries of the grayscale coded aperture based on the previous snapshots. The coded apertures are optimized in terms of transmittance and number of grayscale levels. The advantage of the proposed method is the efficient use of the dynamic range of the image sensor. Extensive simulations show improvements in the image reconstruction of the proposed method compared with uniform grayscale coded apertures (UGCA) and adaptive block-unblock coded apertures (ABCA) of up to 10 dB.

  8. Application of adaptive subband coding for noisy bandlimited ECG signal processing

    NASA Astrophysics Data System (ADS)

    Aditya, Krishna; Chu, Chee-Hung H.; Szu, Harold H.

    1996-03-01

    An approach to impulsive noise suppression and background normalization of digitized bandlimited electrocardiogram signals is presented. This approach uses adaptive wavelet filters that incorporate the band-limited a priori information and the shape information of a signal to decompose the data. Empirical results show that the new algorithm has good performance in wideband impulsive noise suppression and background normalization for subsequent wave detection, when compared with subband coding using the Daubechies D4 wavelet without the bandlimited adaptive wavelet transform.
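
    For contrast, the non-adaptive baseline mentioned above is easy to reproduce with the PyWavelets package (the 4-tap Daubechies D4 filter is named 'db2' in PyWavelets, which counts vanishing moments); this shows plain subband thresholding, not the paper's adaptive wavelets:

        import pywt

        def baseline_subband_denoise(signal, level=4, thresh=0.1):
            # Decompose with the 4-tap Daubechies filter, soft-threshold the
            # detail subbands, and reconstruct.
            coeffs = pywt.wavedec(signal, 'db2', level=level)
            coeffs[1:] = [pywt.threshold(c, thresh, mode='soft')
                          for c in coeffs[1:]]
            return pywt.waverec(coeffs, 'db2')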

  9. Selection, adaptation, and predictive information in changing environments

    NASA Astrophysics Data System (ADS)

    Feltgen, Quentin; Nemenman, Ilya

    2014-03-01

    Adaptation by means of natural selection is a key concept in evolutionary biology. Individuals better matched to the surrounding environment outcompete the others. This increases the fraction of the better adapted individuals in the population, and hence increases its collective fitness. Adaptation is also prominent on the physiological scale in neuroscience and cell biology. There each individual infers properties of the environment and changes to become individually better, improving the overall population as well. Traditionally, these two notions of adaption have been considered distinct. Here we argue that both types of adaptation result in the same population growth in a broad class of analytically tractable population dynamics models in temporally changing environments. In particular, both types of adaptation lead to subextensive corrections to the population growth rates. These corrections are nearly universal and are equal to the predictive information in the environment time series, which is also the characterization of the time series complexity. This work has been supported by the James S. McDonnell Foundation.

  10. Adaptive bandwidth measurements of importance functions for speech intelligibility prediction

    PubMed Central

    Whitmal, Nathaniel A.; DeRoy, Kristina

    2011-01-01

    The Articulation Index (AI) and Speech Intelligibility Index (SII) predict intelligibility scores from measurements of speech and hearing parameters. One component in the prediction is the “importance function,” a weighting function that characterizes contributions of particular spectral regions of speech to speech intelligibility. Previous work with SII predictions for hearing-impaired subjects suggests that prediction accuracy might improve if importance functions for individual subjects were available. Unfortunately, previous importance function measurements have required extensive intelligibility testing with groups of subjects, using speech processed by various fixed-bandwidth low-pass and high-pass filters. A more efficient approach appropriate to individual subjects is desired. The purpose of this study was to evaluate the feasibility of measuring importance functions for individual subjects with adaptive-bandwidth filters. In two experiments, ten subjects with normal hearing listened to vowel-consonant-vowel (VCV) nonsense words processed by low-pass and high-pass filters whose bandwidths were varied adaptively to produce specified performance levels in accordance with the transformed up-down rules of Levitt [(1971). J. Acoust. Soc. Am. 49, 467–477]. Local linear psychometric functions were fit to the resulting data and used to generate an importance function for VCV words. Results indicate that the adaptive method is reliable and efficient, and produces importance function data consistent with that of the corresponding AI/SII importance function. PMID:22225057

  11. Context-adaptive binary arithmetic coding with precise probability estimation and complexity scalability for high-efficiency video coding

    NASA Astrophysics Data System (ADS)

    Karwowski, Damian; Domański, Marek

    2016-01-01

    An improved context-based adaptive binary arithmetic coding (CABAC) is presented. The idea for the improvement is to use a more accurate mechanism for estimation of symbol probabilities in the standard CABAC algorithm. The authors' proposal of such a mechanism is based on the context-tree weighting technique. In the framework of a high-efficiency video coding (HEVC) video encoder, the improved CABAC allows 0.7% to 4.5% bitrate savings compared to the original CABAC algorithm. The application of the proposed algorithm marginally affects the complexity of the HEVC video encoder, but the complexity of the video decoder increases by 32% to 38%. In order to decrease the complexity of video decoding, a new tool has been proposed for the improved CABAC that enables scaling of the decoder complexity. Experiments show that this tool gives a 5% to 7.5% reduction of the decoding time while still maintaining high efficiency in data compression.
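
    To illustrate the principle of context adaptivity (not the HEVC state machine, nor the authors' context-tree weighting), a minimal Python sketch of per-context binary probability estimation with Krichevsky-Trofimov counts:

      import math
      from collections import defaultdict

      class ContextModel:
          """One adaptive binary probability per context (KT estimator)."""
          def __init__(self):
              self.counts = defaultdict(lambda: [1.0, 1.0])  # pseudo-counts for 0 and 1
          def p_one(self, ctx):
              c0, c1 = self.counts[ctx]
              return c1 / (c0 + c1)
          def update(self, ctx, bit):
              self.counts[ctx][bit] += 1.0

      model = ContextModel()
      bits = [0, 1, 1, 0, 1, 1, 1, 0] * 50                   # toy source with memory
      cost, prev = 0.0, 0
      for b in bits:
          p1 = model.p_one(prev)                             # context = previous bit
          cost -= math.log2(p1 if b else 1.0 - p1)           # ideal arithmetic-code length
          model.update(prev, b)
          prev = b
      print(f"{cost / len(bits):.3f} bits/symbol vs 1.000 without a model")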

  12. Predictive optimized adaptive PSS in a single machine infinite bus.

    PubMed

    Milla, Freddy; Duarte-Mermoud, Manuel A

    2016-07-01

    Power System Stabilizer (PSS) devices are responsible for providing a damping torque component to generators for reducing fluctuations in the system caused by small perturbations. A Predictive Optimized Adaptive PSS (POA-PSS) to improve the oscillations in a Single Machine Infinite Bus (SMIB) power system is discussed in this paper. POA-PSS provides the optimal design parameters for the classic PSS using an optimization predictive algorithm, which adapts to changes in the inputs of the system. This approach is part of small signal stability analysis, which uses equations in an incremental form around an operating point. Simulation studies on the SMIB power system illustrate that the proposed POA-PSS approach has better performance than the classical PSS. In addition, the effort in the control action of the POA-PSS is much less than that of other approaches considered for comparison.

  13. Towards feasible and effective predictive wavefront control for adaptive optics

    SciTech Connect

    Poyneer, L A; Veran, J

    2008-06-04

    We have recently proposed Predictive Fourier Control, a computationally efficient and adaptive algorithm for predictive wavefront control that assumes frozen flow turbulence. We summarize refinements to the state-space model that allow operation with arbitrary computational delays and reduce the computational cost of solving for new control. We present initial atmospheric characterization using observations with Gemini North's Altair AO system. These observations, taken over 1 year, indicate that frozen flow exists, contains substantial power, and is strongly detected 94% of the time.
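
    The frozen-flow assumption makes prediction cheap in the Fourier domain: a phase screen translating at the wind velocity only acquires a per-mode phase ramp. A minimal Python sketch; the wind velocity, latency, and random screen are illustrative stand-ins:

      import numpy as np

      n, dx = 64, 0.1                        # grid size, metres per sample
      rng = np.random.default_rng(0)
      screen = rng.standard_normal((n, n))   # stand-in for a measured phase screen

      vx, vy, dt = 5.0, 2.0, 0.02            # assumed wind (m/s) and servo latency (s)
      f = np.fft.fftfreq(n, d=dx)            # spatial frequencies (cycles/m)
      FX, FY = np.meshgrid(f, f)

      # Shift theorem: translation by (vx*dt, vy*dt) multiplies each mode by a phase ramp.
      ramp = np.exp(-2j * np.pi * (FX * vx + FY * vy) * dt)
      predicted = np.real(np.fft.ifft2(np.fft.fft2(screen) * ramp))
      print(predicted.shape)                 # the screen one latency step ahead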

  14. QoS-Aware Error Recovery in Wireless Body Sensor Networks Using Adaptive Network Coding

    PubMed Central

    Razzaque, Mohammad Abdur; Javadi, Saeideh S.; Coulibaly, Yahaya; Hira, Muta Tah

    2015-01-01

    Wireless body sensor networks (WBSNs) for healthcare and medical applications are real-time and life-critical infrastructures, which require a strict guarantee of quality of service (QoS), in terms of latency, error rate and reliability. Considering the criticality of healthcare and medical applications, WBSNs need to fulfill users'/applications' and the corresponding network's QoS requirements. For instance, for a real-time application to support on-time data delivery, a WBSN needs to guarantee a constrained delay at the network level. A network coding-based error recovery mechanism is an emerging mechanism that can be used in these systems to support QoS at very low energy, memory and hardware cost. However, under dynamic network conditions and changing user requirements, the original non-adaptive version of network coding fails to support some of the network and user QoS requirements. This work explores the QoS requirements of WBSNs from both the network and the user/application perspectives. Based on these requirements, this paper proposes an adaptive network coding-based, QoS-aware error recovery mechanism for WBSNs. It utilizes network-level and user-/application-level information to make it adaptive in both contexts. Thus, it provides improved QoS support adaptively in terms of reliability, energy efficiency and delay. Simulation results show the potential of the proposed mechanism in terms of adaptability, reliability, real-time data delivery and network lifetime compared to its counterparts. PMID:25551485
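
    As a minimal illustration of network-coding-based recovery (a single XOR parity packet per generation; the paper's adaptive scheme would vary the redundancy with observed loss and QoS requirements), in Python:

      from functools import reduce

      def xor(a: bytes, b: bytes) -> bytes:
          return bytes(x ^ y for x, y in zip(a, b))

      generation = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]   # k = 4 data packets
      parity = reduce(xor, generation)                    # one coded packet

      received = {0: b"pkt0", 1: b"pkt1", 3: b"pkt3"}     # packet 2 lost in transit
      missing = reduce(xor, received.values(), parity)    # XOR cancels the known packets
      assert missing == b"pkt2"
      print("recovered:", missing)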

  15. Individual differences in adaptive coding of face identity are linked to individual differences in face recognition ability.

    PubMed

    Rhodes, Gillian; Jeffery, Linda; Taylor, Libby; Hayward, William G; Ewing, Louise

    2014-06-01

    Despite their similarity as visual patterns, we can discriminate and recognize many thousands of faces. This expertise has been linked to 2 coding mechanisms: holistic integration of information across the face and adaptive coding of face identity using norms tuned by experience. Recently, individual differences in face recognition ability have been discovered and linked to differences in holistic coding. Here we show that they are also linked to individual differences in adaptive coding of face identity, measured using face identity aftereffects. Identity aftereffects correlated significantly with several measures of face-selective recognition ability. They also correlated marginally with own-race face recognition ability, suggesting a role for adaptive coding in the well-known other-race effect. More generally, these results highlight the important functional role of adaptive face-coding mechanisms in face expertise, taking us beyond the traditional focus on holistic coding mechanisms.

  16. Individual differences in adaptive coding of face identity are linked to individual differences in face recognition ability.

    PubMed

    Rhodes, Gillian; Jeffery, Linda; Taylor, Libby; Hayward, William G; Ewing, Louise

    2014-06-01

    Despite their similarity as visual patterns, we can discriminate and recognize many thousands of faces. This expertise has been linked to 2 coding mechanisms: holistic integration of information across the face and adaptive coding of face identity using norms tuned by experience. Recently, individual differences in face recognition ability have been discovered and linked to differences in holistic coding. Here we show that they are also linked to individual differences in adaptive coding of face identity, measured using face identity aftereffects. Identity aftereffects correlated significantly with several measures of face-selective recognition ability. They also correlated marginally with own-race face recognition ability, suggesting a role for adaptive coding in the well-known other-race effect. More generally, these results highlight the important functional role of adaptive face-coding mechanisms in face expertise, taking us beyond the traditional focus on holistic coding mechanisms. PMID:24684315

  17. A Grid Sourcing and Adaptation Study Using Unstructured Grids for Supersonic Boom Prediction

    NASA Technical Reports Server (NTRS)

    Carter, Melissa B.; Deere, Karen A.

    2008-01-01

    NASA created the Supersonics Project as part of the NASA Fundamental Aeronautics Program to advance technology that will make supersonic flight over land viable. Computational flow solvers have lacked the ability to accurately predict sonic boom from the near to far field. The focus of this investigation was to establish gridding and adaptation techniques to predict near-to-mid-field (<10 body lengths below the aircraft) boom signatures at supersonic speeds using the USM3D unstructured grid flow solver. The study began by examining sources along the body of the aircraft, far-field sourcing and far-field boundaries. The study then examined several techniques for grid adaptation. During the course of the study, volume sourcing was introduced as a new way to source grids using the grid generation code VGRID. Two different methods of using the volume sources were examined. The first method, based on manual insertion of the numerous volume sources, made great improvements in the prediction capability of USM3D for boom signatures. The second method (SSGRID), which uses an a priori adaptation approach to stretch and shear the original unstructured grid to align the grid and pressure waves, showed similar results with a more automated approach. Due to SSGRID's results and ease of use, the rest of the study focused on developing a best practice using SSGRID. The best practice created by this study for boom predictions using the CFD code USM3D involved: 1) creating a small cylindrical outer boundary either 1 or 2 body lengths in diameter (depending on how far below the aircraft the boom prediction is required), 2) using a single volume source under the aircraft, and 3) using SSGRID to stretch and shear the grid to the desired length.

  18. Gain-adaptive vector quantization for medium-rate speech coding

    NASA Astrophysics Data System (ADS)

    Chen, J.-H.; Gersho, A.

    A class of adaptive vector quantizers (VQs) that can dynamically adjust the 'gain' of codevectors according to the input signal level is introduced. The encoder uses a gain estimator to determine a suitable normalization of each input vector prior to VQ coding. The normalized vectors have reduced dynamic range and can then be more efficiently coded. At the receiver, the VQ decoder output is multiplied by the estimated gain. Both forward and backward adaptation are considered and several different gain estimators are compared and evaluated. An approach to optimizing the design of gain estimators is introduced. Some of the more obvious techniques for achieving gain adaptation are substantially less effective than the use of optimized gain estimators. A novel design technique that is needed to generate the appropriate gain-normalized codebook for the vector quantizer is introduced. Experimental results show that a significant gain in segmental SNR can be obtained over nonadaptive VQ with a negligible increase in complexity.
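
    A minimal Python sketch of the gain-normalization idea (a forward-adaptive variant with a toy codebook; the paper also studies backward adaptation and optimized gain estimators):

      import numpy as np

      rng = np.random.default_rng(1)
      codebook = rng.standard_normal((16, 8))             # 16 shape codevectors
      codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)

      def encode(x):
          gain = np.linalg.norm(x) + 1e-12                # forward gain estimate
          shape = x / gain                                # normalized, reduced dynamic range
          idx = int(np.argmin(np.linalg.norm(codebook - shape, axis=1)))
          return gain, idx

      def decode(gain, idx):
          return gain * codebook[idx]                     # receiver rescales by the gain

      x = 3.7 * rng.standard_normal(8)                    # high-level input frame
      g, i = encode(x)
      print("reconstruction error:", float(np.linalg.norm(x - decode(g, i))))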

  19. Gain-adaptive vector quantization for medium-rate speech coding

    NASA Technical Reports Server (NTRS)

    Chen, J.-H.; Gersho, A.

    1985-01-01

    A class of adaptive vector quantizers (VQs) that can dynamically adjust the 'gain' of codevectors according to the input signal level is introduced. The encoder uses a gain estimator to determine a suitable normalization of each input vector prior to VQ coding. The normalized vectors have reduced dynamic range and can then be more efficiently coded. At the receiver, the VQ decoder output is multiplied by the estimated gain. Both forward and backward adaptation are considered and several different gain estimators are compared and evaluated. An approach to optimizing the design of gain estimators is introduced. Some of the more obvious techniques for achieving gain adaptation are substantially less effective than the use of optimized gain estimators. A novel design technique that is needed to generate the appropriate gain-normalized codebook for the vector quantizer is introduced. Experimental results show that a significant gain in segmental SNR can be obtained over nonadaptive VQ with a negligible increase in complexity.

  20. Adaptive Data-based Predictive Control for Short Take-off and Landing (STOL) Aircraft

    NASA Technical Reports Server (NTRS)

    Barlow, Jonathan Spencer; Acosta, Diana Michelle; Phan, Minh Q.

    2010-01-01

    Data-based Predictive Control is an emerging control method that stems from Model Predictive Control (MPC). MPC computes current control action based on a prediction of the system output a number of time steps into the future and is generally derived from a known model of the system. Data-based predictive control has the advantage of deriving predictive models and controller gains from input-output data. Thus, a controller can be designed from the outputs of complex simulation code or a physical system where no explicit model exists. If the output data happens to be corrupted by periodic disturbances, the designed controller will also have the built-in ability to reject these disturbances without the need to know them. When data-based predictive control is implemented online, it becomes a version of adaptive control. The characteristics of adaptive data-based predictive control are particularly appropriate for the control of nonlinear and time-varying systems, such as Short Take-off and Landing (STOL) aircraft. STOL is a capability of interest to NASA because conceptual Cruise Efficient Short Take-off and Landing (CESTOL) transport aircraft offer the ability to reduce congestion in the terminal area by utilizing existing shorter runways at airports, as well as to lower community noise by flying steep approach and climb-out patterns that reduce the noise footprint of the aircraft. In this study, adaptive data-based predictive control is implemented as an integrated flight-propulsion controller for the outer-loop control of a CESTOL-type aircraft. Results show that the controller successfully tracks velocity while attempting to maintain a constant flight path angle, using longitudinal command, thrust and flap setting as the control inputs.

  1. Adaptive prediction of respiratory motion for motion compensation radiotherapy

    NASA Astrophysics Data System (ADS)

    Ren, Qing; Nishioka, Seiko; Shirato, Hiroki; Berbeco, Ross I.

    2007-11-01

    One potential application of image-guided radiotherapy is to track the target motion in real time, then deliver adaptive treatment to a dynamic target by dMLC tracking or respiratory gating. However, the existence of a finite time delay (or a system latency) between the image acquisition and the response of the treatment system to a change in tumour position implies that some kind of predictive ability should be included in the real-time dynamic target treatment. If diagnostic x-ray imaging is used for the tracking, the dose given over a whole image-guided radiotherapy course can be significant. Therefore, the x-ray beam used for motion tracking should be triggered at a relatively slow pulse frequency, and an interpolation between predictions can be used to provide a fast tracking rate. This study evaluates the performance of an autoregressive-moving average (ARMA) model-based prediction algorithm for reducing tumour localization error due to system latency and slow imaging rate. For this study, we use 3D motion data from ten lung tumour cases where the peak-to-peak motion is greater than 8 mm. Some strongly irregular traces with variation in amplitude and phase were included. To evaluate the prediction accuracy, the standard deviations between predicted and actual motion position are computed for three system latencies (0.1, 0.2 and 0.4 s) at several imaging rates (1.25-10 Hz), and compared against the situation of no prediction. The simulation results indicate that the implementation of the prediction algorithm in real-time target tracking can improve the localization precision for all latencies and imaging rates evaluated. From a common initial setting of model parameters, the predictor can quickly provide an accurate prediction of the position after collecting 20 initial data points. In this retrospective analysis, we calculate the standard deviation of the predicted position from the twentieth position data to the end of the session at 0.1 s interval. For both
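
    A minimal Python sketch of latency compensation with a linear autoregressive predictor, an all-pole special case of the ARMA approach evaluated here. The breathing trace, imaging rate, and latency are synthetic stand-ins:

      import numpy as np

      fs, latency_s, order = 10.0, 0.4, 6        # 10 Hz imaging, 0.4 s system latency
      steps = int(round(latency_s * fs))         # predict 4 samples ahead

      t = np.arange(0, 60, 1.0 / fs)
      pos = 4.0 * np.sin(2 * np.pi * 0.25 * t)   # ~8 mm peak-to-peak synthetic trace

      # Rows: the last `order` samples; targets: the sample `steps` ahead.
      X = np.array([pos[i - order:i] for i in range(order, len(pos) - steps + 1)])
      y = pos[order - 1 + steps:]
      a, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares AR fit

      rms = lambda e: float(np.sqrt(np.mean(e ** 2)))
      hold_last = pos[order - 1:len(pos) - steps]      # "no prediction" baseline
      print("AR RMS %.3f mm vs hold-last %.3f mm" % (rms(X @ a - y), rms(hold_last - y)))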

  2. Advanced turboprop noise prediction: Development of a code at NASA Langley based on recent theoretical results

    NASA Technical Reports Server (NTRS)

    Farassat, F.; Dunn, M. H.; Padula, S. L.

    1986-01-01

    The development of a high speed propeller noise prediction code at Langley Research Center is described. The code utilizes two recent acoustic formulations in the time domain for subsonic and supersonic sources. The structure and capabilities of the code are discussed. A grid size study for accuracy and speed of execution on a computer is also presented. The code is tested against an earlier Langley code. Considerable increases in accuracy and speed of execution are observed. Some examples of noise prediction of a high speed propeller for which acoustic test data are available are given. A brief derivation of the formulations used is given in an appendix.

  3. Analytic solution to verify code predictions of two-phase flow in a boiling water reactor core channel. [CONDOR code

    SciTech Connect

    Chen, K.F.; Olson, C.A.

    1983-09-01

    One reliable method that can be used to verify the solution scheme of a computer code is to compare the code prediction to a simplified problem for which an analytic solution can be derived. An analytic solution for the axial pressure drop as a function of the flow was obtained for the simplified problem of homogeneous equilibrium two-phase flow in a vertical, heated channel with a cosine axial heat flux shape. This analytic solution was then used to verify the predictions of the CONDOR computer code, which is used to evaluate the thermal-hydraulic performance of boiling water reactors. The results show excellent agreement between the analytic solution and CONDOR prediction.

  4. Context-Adaptive Arithmetic Coding Scheme for Lossless Bit Rate Reduction of MPEG Surround in USAC

    NASA Astrophysics Data System (ADS)

    Yoon, Sungyong; Pang, Hee-Suk; Sung, Koeng-Mo

    We propose a new coding scheme for lossless bit rate reduction of the MPEG Surround module in unified speech and audio coding (USAC). The proposed scheme is based on context-adaptive arithmetic coding for efficient bit stream composition of spatial parameters. Experiments show that it achieves a significant lossless bit reduction of 9.93% to 12.14% for spatial parameters and 8.64% to 8.96% for the overall MPEG Surround bit streams compared to the original scheme. The proposed scheme, which is not currently included in USAC, can be used to improve the coding efficiency of MPEG Surround in USAC, where the saved bits can be utilized by the other modules in USAC.

  5. Dynamic Forces in Spur Gears - Measurement, Prediction, and Code Validation

    NASA Technical Reports Server (NTRS)

    Oswald, Fred B.; Townsend, Dennis P.; Rebbechi, Brian; Lin, Hsiang Hsi

    1996-01-01

    Measured and computed values for dynamic loads in spur gears were compared to validate a new version of the NASA gear dynamics code DANST-PC. Strain gage data from six gear sets with different tooth profiles were processed to determine the dynamic forces acting between the gear teeth. Results demonstrate that the analysis code successfully simulates the dynamic behavior of the gears. Differences between analysis and experiment were less than 10 percent under most conditions.

  6. Seizure prediction using adaptive neuro-fuzzy inference system.

    PubMed

    Rabbi, Ahmed F; Azinfar, Leila; Fazel-Rezai, Reza

    2013-01-01

    In this study, we present a neuro-fuzzy approach to seizure prediction from invasive electroencephalogram (EEG) by applying an adaptive neuro-fuzzy inference system (ANFIS). Three nonlinear seizure-predictive features were extracted from a patient's data obtained from the European Epilepsy Database, one of the most comprehensive EEG databases for epilepsy research. A total of 36 hours of recordings including 7 seizures was used for analysis. The nonlinear features used in this study were similarity index, phase synchronization, and nonlinear interdependence. We designed an ANFIS classifier constructed based on these features as input. Fuzzy if-then rules were generated by the ANFIS classifier using the complex relationship of the feature space provided during training. The membership function optimization was conducted based on a hybrid learning algorithm. The proposed method achieved a maximum sensitivity of 80% with a false prediction rate as low as 0.46 per hour. PMID:24110134
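
    For orientation, a minimal Python sketch of the Sugeno-type forward pass underlying ANFIS, with two illustrative rules and Gaussian memberships; it omits the hybrid-learning step and is not the study's trained classifier:

      import numpy as np

      def gauss(x, c, s):
          return np.exp(-0.5 * ((x - c) / s) ** 2)

      def sugeno(x1, x2):
          # Rule firing strengths: product of antecedent memberships.
          w1 = gauss(x1, c=0.2, s=0.3) * gauss(x2, c=0.1, s=0.3)   # rule 1: "low, low"
          w2 = gauss(x1, c=0.8, s=0.3) * gauss(x2, c=0.9, s=0.3)   # rule 2: "high, high"
          # Rule consequents: linear functions of the inputs.
          f1 = 0.1 * x1 + 0.2 * x2
          f2 = 1.5 * x1 + 1.0 * x2 + 0.5
          # Normalized, weighted-average defuzzification.
          return (w1 * f1 + w2 * f2) / (w1 + w2)

      print(sugeno(0.15, 0.12), sugeno(0.85, 0.95))   # near f1 and near f2, respectively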

  7. The Pupillary Orienting Response Predicts Adaptive Behavioral Adjustment after Errors

    PubMed Central

    Murphy, Peter R.; van Moort, Marianne L.; Nieuwenhuis, Sander

    2016-01-01

    Reaction time (RT) is commonly observed to slow down after an error. This post-error slowing (PES) has been thought to arise from the strategic adoption of a more cautious response mode following deployment of cognitive control. Recently, an alternative account has suggested that PES results from interference due to an error-evoked orienting response. We investigated whether error-related orienting may in fact be a precursor to adaptive post-error behavioral adjustment when the orienting response resolves before subsequent trial onset. We measured pupil dilation, a prototypical measure of autonomic orienting, during performance of a choice RT task with long inter-stimulus intervals, and found that the trial-by-trial magnitude of the error-evoked pupil response positively predicted both PES magnitude and the likelihood that the following response would be correct. These combined findings suggest that the magnitude of the error-related orienting response predicts an adaptive change of response strategy following errors, and thereby promote a reconciliation of the orienting and adaptive control accounts of PES. PMID:27010472

  8. The development and application of the self-adaptive grid code, SAGE

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.

    1993-01-01

    The multidimensional self-adaptive grid code, SAGE, has proven to be a flexible and useful tool in the solution of complex flow problems. Both 2- and 3-D examples given in this report show the code to be reliable and to substantially improve flowfield solutions. Since the adaptive procedure is a marching scheme the code is extremely fast and uses insignificant CPU time compared to the corresponding flow solver. The SAGE program is also machine and flow solver independent. Significant effort was made to simplify user interaction, though some parameters still need to be chosen with care. It is also difficult to tell when the adaption process has provided its best possible solution. This is particularly true if no experimental data are available or if there is a lack of theoretical understanding of the flow. Another difficulty occurs if local features are important but missing in the original grid; the adaption to this solution will not result in any improvement, and only grid refinement can result in an improved solution. These are complex issues that need to be explored within the context of each specific problem.

  9. Visual Bias Predicts Gait Adaptability in Novel Sensory Discordant Conditions

    NASA Technical Reports Server (NTRS)

    Brady, Rachel A.; Batson, Crystal D.; Peters, Brian T.; Mulavara, Ajitkumar P.; Bloomberg, Jacob J.

    2010-01-01

    We designed a gait training study that presented combinations of visual flow and support-surface manipulations to investigate the response of healthy adults to novel discordant sensorimotor conditions. We aimed to determine whether a relationship existed between subjects' visual dependence and their postural stability and cognitive performance in a new discordant environment presented at the conclusion of training (Transfer Test). Our training system comprised a treadmill placed on a motion base facing a virtual visual scene that provided a variety of sensory challenges. Ten healthy adults completed 3 training sessions during which they walked on a treadmill at 1.1 m/s while receiving discordant support-surface and visual manipulations. At the first visit, in an analysis of normalized torso translation measured in a scene-movement-only condition, 3 of 10 subjects were classified as visually dependent. During the Transfer Test, all participants received a 2-minute novel exposure. In a combined measure of stride frequency and reaction time, the non-visually dependent subjects showed improved adaptation on the Transfer Test compared to their visually dependent counterparts. This finding suggests that individual differences in the ability to adapt to new sensorimotor conditions may be explained by individuals' innate sensory biases. An accurate preflight assessment of crewmembers' biases for visual dependence could be used to predict their propensities to adapt to novel sensory conditions. It may also facilitate the development of customized training regimens that could expedite adaptation to alternate gravitational environments.

  10. A New Adaptive Framework for Collaborative Filtering Prediction.

    PubMed

    Almosallam, Ibrahim A; Shang, Yi

    2008-06-01

    Collaborative filtering is one of the most successful techniques for recommendation systems and has been used in many commercial services provided by major companies including Amazon, TiVo and Netflix. In this paper we focus on memory-based collaborative filtering (CF). Existing CF techniques work well on dense data but poorly on sparse data. To address this weakness, we propose to use z-scores instead of explicit ratings and introduce a mechanism that adaptively combines global statistics with item-based values based on data density level. We present a new adaptive framework that encapsulates various CF algorithms and the relationships among them. An adaptive CF predictor is developed that can self adapt from user-based to item-based to hybrid methods based on the amount of available ratings. Our experimental results show that the new predictor consistently obtained more accurate predictions than existing CF methods, with the most significant improvement on sparse data sets. When applied to the Netflix Challenge data set, our method performed better than existing CF and singular value decomposition (SVD) methods and achieved 4.67% improvement over Netflix's system.
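
    A minimal Python sketch of the z-score idea: normalize each user's ratings, average neighbours' z-scores for the target item, and de-normalize. The density-dependent blend with a global statistic uses an illustrative weight, not the paper's adaptive framework:

      import numpy as np

      R = np.array([[5, 4, 0, 1],        # rows: users, cols: items, 0 = missing
                    [4, 5, 1, 2],
                    [1, 2, 5, 4],
                    [0, 1, 4, 5]], dtype=float)
      mask = R > 0

      def predict(u, i):
          mu = np.array([R[v][mask[v]].mean() for v in range(len(R))])
          sd = np.array([R[v][mask[v]].std() + 1e-9 for v in range(len(R))])
          raters = [v for v in range(len(R)) if mask[v, i] and v != u]
          z = np.mean([(R[v, i] - mu[v]) / sd[v] for v in raters])  # mean item z-score
          item_based = mu[u] + sd[u] * z                            # back to u's scale
          alpha = len(raters) / len(R)                              # crude density weight
          return alpha * item_based + (1 - alpha) * R[mask].mean()

      print(round(predict(0, 2), 2))     # user 0's predicted rating of item 2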

  11. A New Adaptive Framework for Collaborative Filtering Prediction

    PubMed Central

    Almosallam, Ibrahim A.; Shang, Yi

    2010-01-01

    Collaborative filtering is one of the most successful techniques for recommendation systems and has been used in many commercial services provided by major companies including Amazon, TiVo and Netflix. In this paper we focus on memory-based collaborative filtering (CF). Existing CF techniques work well on dense data but poorly on sparse data. To address this weakness, we propose to use z-scores instead of explicit ratings and introduce a mechanism that adaptively combines global statistics with item-based values based on data density level. We present a new adaptive framework that encapsulates various CF algorithms and the relationships among them. An adaptive CF predictor is developed that can self adapt from user-based to item-based to hybrid methods based on the amount of available ratings. Our experimental results show that the new predictor consistently obtained more accurate predictions than existing CF methods, with the most significant improvement on sparse data sets. When applied to the Netflix Challenge data set, our method performed better than existing CF and singular value decomposition (SVD) methods and achieved 4.67% improvement over Netflix’s system. PMID:21572924

  12. Development of code evaluation criteria for assessing predictive capability and performance

    NASA Technical Reports Server (NTRS)

    Lin, Shyi-Jang; Barson, S. L.; Sindir, M. M.; Prueger, G. H.

    1993-01-01

    Computational Fluid Dynamics (CFD), because of its unique ability to predict complex three-dimensional flows, is being applied with increasing frequency in the aerospace industry. Currently, no consistent code validation procedure is applied within the industry. Such a procedure is needed to increase confidence in CFD and reduce risk in the use of these codes as a design and analysis tool. This final contract report defines classifications for three levels of code validation, directly relating the use of CFD codes to the engineering design cycle. Evaluation criteria by which codes are measured and classified are recommended and discussed. Criteria for selecting experimental data against which CFD results can be compared are outlined. A four phase CFD code validation procedure is described in detail. Finally, the code validation procedure is demonstrated through application of the REACT CFD code to a series of cases culminating in a code to data comparison on the Space Shuttle Main Engine High Pressure Fuel Turbopump Impeller.

  13. IR performance study of an adaptive coded aperture "diffractive imaging" system employing MEMS "eyelid shutter" technologies

    NASA Astrophysics Data System (ADS)

    Mahalanobis, A.; Reyner, C.; Patel, H.; Haberfelde, T.; Brady, David; Neifeld, Mark; Kumar, B. V. K. Vijaya; Rogers, Stanley

    2007-09-01

    Adaptive coded aperture sensing is an emerging technology enabling real-time, wide-area IR/visible sensing and imaging. Exploiting unique imaging architectures, adaptive coded aperture sensors achieve wide field of view, near-instantaneous optical path repositioning, and high resolution while reducing the weight, power consumption and cost of air- and space-borne sensors. Such sensors may be used for military, civilian, or commercial applications in all optical bands, but there is special interest in diffraction imaging sensors for IR applications. Extension of coded apertures from the visible to the MWIR introduces the effects of diffraction and other distortions not observed in shorter wavelength systems. A new approach is being developed under the DARPA/SPO funded LACOSTE (Large Area Coverage Optical Search-while-Track and Engage) program, that addresses the effects of diffraction while gaining the benefits of coded apertures, thus providing flexibility to vary resolution, possess sufficient light gathering power, and achieve a wide field of view (WFOV). The photonic MEMS-Eyelid "sub-aperture" array technology is currently being instantiated in this DARPA program to be the heart of conducting the flow (heartbeat) of the incoming signal. However, packaging and scalability are critical factors for the MEMS "sub-aperture" technology which will determine system efficacy as well as military and commercial usefulness. As larger arrays with 1,000,000+ sub-apertures are produced for this LACOSTE effort, the available Degrees of Freedom (DOF) will enable better spatial resolution, control and refinement of the coding for the system. Studies (SNR simulations) will be performed (based on the Adaptive Coded Aperture algorithm implementation) to determine the efficacy of this diffractive MEMS approach and to determine the available system budget based on simulated bi-static shutter-element DOF degradation (1%, 5%, 10%, 20%, etc.) trials until the degradation level where it is

  14. GAMER: A GRAPHIC PROCESSING UNIT ACCELERATED ADAPTIVE-MESH-REFINEMENT CODE FOR ASTROPHYSICS

    SciTech Connect

    Schive, H.-Y.; Tsai, Y.-C.; Chiueh Tzihong

    2010-02-01

    We present the newly developed code, GPU-accelerated Adaptive-MEsh-Refinement code (GAMER), which adopts a novel approach in improving the performance of adaptive-mesh-refinement (AMR) astrophysical simulations by a large factor with the use of the graphic processing unit (GPU). The AMR implementation is based on a hierarchy of grid patches with an oct-tree data structure. We adopt a three-dimensional relaxing total variation diminishing scheme for the hydrodynamic solver and a multi-level relaxation scheme for the Poisson solver. Both solvers have been implemented in GPU, by which hundreds of patches can be advanced in parallel. The computational overhead associated with the data transfer between the CPU and GPU is carefully reduced by utilizing the capability of asynchronous memory copies in GPU, and the computing time of the ghost-zone values for each patch is diminished by overlapping it with the GPU computations. We demonstrate the accuracy of the code by performing several standard test problems in astrophysics. GAMER is a parallel code that can be run in a multi-GPU cluster system. We measure the performance of the code by performing purely baryonic cosmological simulations in different hardware implementations, in which detailed timing analyses provide comparison between the computations with and without GPU(s) acceleration. Maximum speed-up factors of 12.19 and 10.47 are demonstrated using one GPU with 4096³ effective resolution and 16 GPUs with 8192³ effective resolution, respectively.

  15. Development of a shock noise prediction code for high-speed helicopters - The subsonically moving shock

    NASA Technical Reports Server (NTRS)

    Tadghighi, H.; Holz, R.; Farassat, F.; Lee, Yung-Jang

    1991-01-01

    A previously defined airfoil subsonic shock-noise prediction formula whose result depends on a mapping of the time-dependent shock surface to a time-independent computational domain is presently coded and incorporated in the NASA-Langley rotor-noise prediction code, WOPWOP. The structure and algorithms used in the shock-noise prediction code are presented; special care has been taken to reduce computation time while maintaining accuracy. Numerical examples of shock-noise prediction are presented for hover and forward flight. It is confirmed that shock noise is an important component of the quadrupole source.

  16. Genetic code prediction for metazoan mitochondria with GenDecoder.

    PubMed

    Abascal, Federico; Zardoya, Rafael; Posada, David

    2009-01-01

    There is a standard genetic code that is used by most organisms, but exceptions exist in which particular codons are translated with a different meaning, i.e., as a different amino acid. The characterization of the genetic code of an organism is hence a key step for properly analyzing and translating its protein-coding genes. Such characterization is particularly important in the case of metazoan mitochondrial genomes for two reasons: first, many variant codes occur in them and second, mitochondrial data are frequently used for evolutionary studies. Variant codes are usually found by comparative sequence analyses. Given a protein alignment, if a particular codon for a given species occurs at positions in which a particular amino acid is frequently found in other species, then the most likely hypothesis is that the codon is translated as that particular amino acid in that species. Previously, we have shown that this method can be very reliable provided that enough taxa and positions are included in the comparisons and have implemented it in the web server GenDecoder (http://darwin.uvigo.es/software/gendecoder.html). In this chapter we describe the rationale of the method used by GenDecoder and its usage through worked examples, highlighting the potential problems that can arise during the analysis.
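
    A minimal Python sketch of the comparative rationale: tally which amino acids other taxa show at the aligned positions where a target codon occurs, and take the majority as the codon's inferred meaning. The alignment below is toy data:

      from collections import Counter, defaultdict

      # (target-species codon, amino acids of other taxa at the aligned position)
      columns = [
          ("AGA", ["G", "G", "G", "S"]),   # AGA: Arg in the standard code,
          ("AGA", ["G", "G", "S", "G"]),   # but reassigned in many mitochondria
          ("ATA", ["M", "M", "M", "I"]),
          ("ATA", ["M", "I", "M", "M"]),
      ]

      votes = defaultdict(Counter)
      for codon, residues in columns:
          votes[codon].update(residues)

      for codon, counter in votes.items():
          aa, n = counter.most_common(1)[0]
          print(f"{codon}: inferred {aa} ({n}/{sum(counter.values())} aligned residues)")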

  17. Development of three-dimensional hydrodynamical and MHD codes using Adaptive Mesh Refinement scheme with TVD

    NASA Astrophysics Data System (ADS)

    den, M.; Yamashita, K.; Ogawa, T.

    Three-dimensional (3D) hydrodynamical (HD) and magneto-hydrodynamical (MHD) simulation codes using an adaptive mesh refinement (AMR) scheme are developed. This method places fine grids over areas of interest, such as shock waves, in order to obtain high resolution, and places uniform grids with lower resolution in other areas. The AMR scheme can thus provide a combination of high solution accuracy and computational robustness. We demonstrate numerical results for a simplified model of shock propagation, which strongly indicate that the AMR techniques have the ability to resolve disturbances in interplanetary space. We also present simulation results for the MHD code.

  18. Adaptive software-defined coded modulation for ultra-high-speed optical transport

    NASA Astrophysics Data System (ADS)

    Djordjevic, Ivan B.; Zhang, Yequn

    2013-10-01

    In optically-routed networks, different wavelength channels carrying the traffic to different destinations can have quite different optical signal-to-noise ratios (OSNRs) and signal is differently impacted by various channel impairments. Regardless of the data destination, an optical transport system (OTS) must provide the target bit-error rate (BER) performance. To provide target BER regardless of the data destination we adjust the forward error correction (FEC) strength. Depending on the information obtained from the monitoring channels, we select the appropriate code rate matching to the OSNR range that current channel OSNR falls into. To avoid frame synchronization issues, we keep the codeword length fixed independent of the FEC code being employed. The common denominator is the employment of quasi-cyclic (QC-) LDPC codes in FEC. For high-speed implementation, low-complexity LDPC decoding algorithms are needed, and some of them will be described in this invited paper. Instead of conventional QAM based modulation schemes, we employ the signal constellations obtained by optimum signal constellation design (OSCD) algorithm. To improve the spectral efficiency, we perform the simultaneous rate adaptation and signal constellation size selection so that the product of number of bits per symbol × code rate is closest to the channel capacity. Further, we describe the advantages of using 4D signaling instead of polarization-division multiplexed (PDM) QAM, by using the 4D MAP detection, combined with LDPC coding, in a turbo equalization fashion. Finally, to solve the problems related to the limited bandwidth of information infrastructure, high energy consumption, and heterogeneity of optical networks, we describe an adaptive energy-efficient hybrid coded-modulation scheme, which in addition to amplitude, phase, and polarization state employs the spatial modes as additional basis functions for multidimensional coded-modulation.
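
    A minimal Python sketch of the rate-adaptation rule described above: from menus of code rates and constellation sizes, pick the pair whose spectral efficiency (bits per symbol times code rate) is highest while staying below an estimated channel capacity. The menus and the AWGN capacity proxy are illustrative:

      import math

      code_rates = [0.5, 0.6, 0.75, 0.8, 0.9]
      bits_per_symbol = [2, 3, 4, 6, 8]          # QPSK ... 256-QAM equivalents

      def select(snr_db):
          cap = math.log2(1.0 + 10 ** (snr_db / 10.0))   # AWGN capacity proxy
          return max(((m * r, m, r) for m in bits_per_symbol for r in code_rates
                      if m * r <= cap), default=None)

      for snr in (5, 12, 20):
          eff, m, r = select(snr)
          print(f"{snr} dB OSNR proxy: {m} bits/symbol, rate {r} -> {eff:.2f} b/s/Hz")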

  19. Context adaptive lossless and near-lossless coding for digital angiographies.

    PubMed

    dos Santos, Rafael A P; Scharcanski, Jacob

    2007-01-01

    This paper presents a context-adaptive coding method for image sequences in hemodynamics. The proposed method implements motion compensation through a two-stage context-adaptive linear predictor. It is robust to the local intensity changes and the noise that often degrades these image sequences, and provides lossless and near-lossless quality. Our preliminary experiments with lossless compression of 12 bits/pixel studies indicate that, potentially, our approach can perform 3.8%, 2% and 1.6% better than JPEG-2000, JPEG-LS and the method proposed in [1], respectively. The performance tends to improve for near-lossless compression.
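
    For context, a minimal Python sketch of a standard context-adaptive pixel predictor for lossless image coding, the JPEG-LS median edge detector (a well-known predictor, not the paper's two-stage motion-compensating design):

      import numpy as np

      def med_predict(img):
          h, w = img.shape
          pred = np.zeros_like(img)
          for y in range(h):
              for x in range(w):
                  a = int(img[y, x - 1]) if x else 0             # left neighbour
                  b = int(img[y - 1, x]) if y else 0             # above neighbour
                  c = int(img[y - 1, x - 1]) if x and y else 0   # above-left neighbour
                  if c >= max(a, b):
                      pred[y, x] = min(a, b)    # horizontal or vertical edge
                  elif c <= min(a, b):
                      pred[y, x] = max(a, b)
                  else:
                      pred[y, x] = a + b - c    # smooth region: planar fit
          return pred

      img = np.tile(np.arange(8, dtype=np.int32), (8, 1)) * 4    # toy gradient frame
      residual = img - med_predict(img)
      print("residual energy:", int((residual ** 2).sum()))      # small => cheap to code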

  20. An edge-based solution-adaptive method applied to the AIRPLANE code

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Thomas, Scott D.; Cliff, Susan E.

    1995-01-01

    Computational methods to solve large-scale realistic problems in fluid flow can be made more efficient and cost effective by using them in conjunction with dynamic mesh adaption procedures that perform simultaneous coarsening and refinement to capture flow features of interest. This work couples the tetrahedral mesh adaption scheme, 3D_TAG, with the AIRPLANE code to solve complete aircraft configuration problems in transonic and supersonic flow regimes. Results indicate that the near-field sonic boom pressure signature of a cone-cylinder is improved, the oblique and normal shocks are better resolved on a transonic wing, and the bow shock ahead of an unstarted inlet is better defined.

  1. Adaptive modelling of structured molecular representations for toxicity prediction

    NASA Astrophysics Data System (ADS)

    Bertinetto, Carlo; Duce, Celia; Micheli, Alessio; Solaro, Roberto; Tiné, Maria Rosaria

    2012-12-01

    We investigated the possibility of modelling structure-toxicity relationships by direct treatment of the molecular structure (without using descriptors) through an adaptive model able to retain the appropriate structural information. With respect to traditional descriptor-based approaches, this provides a more general and flexible way to tackle prediction problems that is particularly suitable when little or no background knowledge is available. Our method employs a tree-structured molecular representation, which is processed by a recursive neural network (RNN). To explore the realization of RNN modelling in toxicological problems, we employed a data set containing growth impairment concentrations (IGC50) for Tetrahymena pyriformis.

  2. Performance predictions for the Keck telescope adaptive optics system

    SciTech Connect

    Gavel, D.T.; Olivier, S.S.

    1995-08-07

    The second Keck ten meter telescope (Keck-II) is slated to have an infrared-optimized adaptive optics system in the 1997--1998 time frame. This system will provide diffraction-limited images in the 1--3 micron region and the ability to use a diffraction-limited spectroscopy slit. The AO system is currently in the preliminary design phase, and considerable analysis has been performed in order to predict its performance under various seeing conditions. In particular, we have investigated the point-spread function, energy through a spectroscopy slit, crowded field contrast, object limiting magnitude, field of view, and sky coverage with natural and laser guide stars.

  3. Prediction and control of chaotic processes using nonlinear adaptive networks

    SciTech Connect

    Jones, R.D.; Barnes, C.W.; Flake, G.W.; Lee, K.; Lewis, P.S.; O'Rouke, M.K.; Qian, S.

    1990-01-01

    We present the theory of nonlinear adaptive networks and discuss a few applications. In particular, we review the theory of feedforward backpropagation networks. We then present the theory of the Connectionist Normalized Linear Spline network in both its feedforward and iterated modes. Also, we briefly discuss the theory of stochastic cellular automata. We then discuss applications to chaotic time series, tidal prediction in Venice lagoon, finite differencing, sonar transient detection, control of nonlinear processes, control of a negative ion source, balancing a double inverted pendulum and design advice for free electron lasers and laser fusion targets.

  4. FLAG: A multi-dimensional adaptive free-Lagrange code for fully unstructured grids

    SciTech Connect

    Burton, D.E.; Miller, D.S.; Palmer, T.

    1995-07-01

    The authors describe FLAG, a 3D adaptive free-Lagrange method for unstructured grids. The grid elements were 3D polygons, which move with the flow, and are refined or reconnected as necessary to achieve uniform accuracy. The authors stressed that they were able to construct a 3D hydro version of this code in 3 months, using an object-oriented FORTRAN approach.

  5. Prediction of Conductivity by Adaptive Neuro-Fuzzy Model

    PubMed Central

    Akbarzadeh, S.; Arof, A. K.; Ramesh, S.; Khanmirzaei, M. H.; Nor, R. M.

    2014-01-01

    Electrochemical impedance spectroscopy (EIS) is a key method for characterizing the ionic and electronic conductivity of materials. One of the requirements of this technique is a model to forecast conductivity in preliminary experiments. The aim of this paper is to examine the prediction of conductivity by neuro-fuzzy inference with basic experimental factors such as temperature, frequency, thickness of the film and weight percentage of salt. In order to provide the optimal sets of fuzzy logic rule bases, the grid partition fuzzy inference method was applied. The validation of the model was tested by four random data sets. To evaluate the validity of the model, eleven statistical features were examined. Statistical analysis of the results clearly shows that modeling with an adaptive neuro-fuzzy model is powerful enough for the prediction of conductivity. PMID:24658582

  6. Vortical Flow Prediction Using an Adaptive Unstructured Grid Method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2003-01-01

    A computational fluid dynamics (CFD) method has been employed to compute vortical flows around slender wing/body configurations. The emphasis of the paper is on the effectiveness of an adaptive grid procedure in "capturing" concentrated vortices generated at sharp edges or flow separation lines of lifting surfaces flying at high angles of attack. The method is based on a tetrahedral unstructured grid technology developed at the NASA Langley Research Center. Two steady-state, subsonic, inviscid and Navier-Stokes flow test cases are presented to demonstrate the applicability of the method for solving practical vortical flow problems. The first test case concerns vortex flow over a simple 65° delta wing with different values of leading-edge radius. Although the geometry is quite simple, it poses a challenging problem for computing vortices originating from blunt leading edges. The second case is that of a more complex fighter configuration. The superiority of the adapted solutions in capturing the vortex flow structure over the conventional unadapted results is demonstrated by comparisons with the wind-tunnel experimental data. The study shows that numerical prediction of vortical flows is highly sensitive to the local grid resolution and that the implementation of grid adaptation is essential when applying CFD methods to such complicated flow problems.

  7. Vortical Flow Prediction Using an Adaptive Unstructured Grid Method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2001-01-01

    A computational fluid dynamics (CFD) method has been employed to compute vortical flows around slender wing/body configurations. The emphasis of the paper is on the effectiveness of an adaptive grid procedure in "capturing" concentrated vortices generated at sharp edges or flow separation lines of lifting surfaces flying at high angles of attack. The method is based on a tetrahedral unstructured grid technology developed at the NASA Langley Research Center. Two steady-state, subsonic, inviscid and Navier-Stokes flow test cases are presented to demonstrate the applicability of the method for solving practical vortical flow problems. The first test case concerns vortex flow over a simple 65° delta wing with different values of leading-edge bluntness, and the second case is that of a more complex fighter configuration. The superiority of the adapted solutions in capturing the vortex flow structure over the conventional unadapted results is demonstrated by comparisons with the wind-tunnel experimental data. The study shows that numerical prediction of vortical flows is highly sensitive to the local grid resolution and that the implementation of grid adaptation is essential when applying CFD methods to such complicated flow problems.

  8. CRASH: A BLOCK-ADAPTIVE-MESH CODE FOR RADIATIVE SHOCK HYDRODYNAMICS-IMPLEMENTATION AND VERIFICATION

    SciTech Connect

    Van der Holst, B.; Toth, G.; Sokolov, I. V.; Myra, E. S.; Fryxell, B.; Drake, R. P.; Powell, K. G.; Holloway, J. P.; Stout, Q.; Adams, M. L.; Morel, J. E.; Karni, S.

    2011-06-01

    We describe the Center for Radiative Shock Hydrodynamics (CRASH) code, a block-adaptive-mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with a gray or multi-group method and uses a flux-limited diffusion approximation to recover the free-streaming limit. Electrons and ions are allowed to have different temperatures and we include flux-limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite-volume discretization in either one-, two-, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator-split method is used to solve these equations in three substeps: (1) an explicit step of a shock-capturing hydrodynamic solver; (2) a linear advection of the radiation in frequency-logarithm space; and (3) an implicit solution of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The applications are for astrophysics and laboratory astrophysics. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with a new radiation transfer and heat conduction library and equation-of-state and multi-group opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework.

  9. An adaptive source-channel coding with feedback for progressive transmission of medical images.

    PubMed

    Lo, Jen-Lung; Sanei, Saeid; Nazarpour, Kianoush

    2009-01-01

    A novel adaptive source-channel coding with feedback for progressive transmission of medical images is proposed here. In the source coding part, the transmission starts from the region of interest (RoI). The parity length in the channel code varies with respect to both the proximity of the image subblock to the RoI and the channel noise, which is iteratively estimated in the receiver. The overall transmitted data can be controlled by the user (clinician). In the case of medical data transmission, it is vital to keep the distortion level under control as in most of the cases certain clinically important regions have to be transmitted without any visible error. The proposed system significantly reduces the transmission time and error. Moreover, the system is very user-friendly since the selection of the RoI, its size, overall code rate, and a number of test features such as noise level can be set by the users at both ends. A MATLAB-based TCP/IP connection has been established to demonstrate the proposed interactive and adaptive progressive transmission system. The proposed system is simulated for both binary symmetric channel (BSC) and Rayleigh channel. The experimental results verify the effectiveness of the design.

  10. An Adaptive Source-Channel Coding with Feedback for Progressive Transmission of Medical Images

    PubMed Central

    Lo, Jen-Lung; Sanei, Saeid; Nazarpour, Kianoush

    2009-01-01

    A novel adaptive source-channel coding with feedback for progressive transmission of medical images is proposed here. In the source coding part, the transmission starts from the region of interest (RoI). The parity length in the channel code varies with respect to both the proximity of the image subblock to the RoI and the channel noise, which is iteratively estimated in the receiver. The overall transmitted data can be controlled by the user (clinician). In the case of medical data transmission, it is vital to keep the distortion level under control as in most of the cases certain clinically important regions have to be transmitted without any visible error. The proposed system significantly reduces the transmission time and error. Moreover, the system is very user-friendly since the selection of the RoI, its size, overall code rate, and a number of test features such as noise level can be set by the users at both ends. A MATLAB-based TCP/IP connection has been established to demonstrate the proposed interactive and adaptive progressive transmission system. The proposed system is simulated for both binary symmetric channel (BSC) and Rayleigh channel. The experimental results verify the effectiveness of the design. PMID:19190770

  11. Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Rost, Martin Christopher

    1988-01-01

    Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image. This is accomplished by using a threshold-driven maximum distortion criterion to select the specific coder used. The different coders are built using variable-blocksize transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed from which the study of these coders can be explored. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed. The bit allocation algorithm is more fully developed, and can be used to achieve more accurate bit assignments than the algorithms currently used in the literature. Some upper and lower bounds for the bit-allocation distortion-rate function are developed. An obtainable distortion-rate function is developed for a particular scalar quantizer mixing method that can be used to code transform coefficients at any rate.
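
    For reference, the classical high-rate solution that optimal bit-allocation algorithms refine distributes a budget of \(\bar{b}\) bits per transform coefficient according to the coefficient variances (a standard result, stated here in our notation, not the dissertation's):

      \[
        b_i \;=\; \bar{b} \;+\; \frac{1}{2}\log_2
        \frac{\sigma_i^{2}}{\left(\prod_{j=1}^{N}\sigma_j^{2}\right)^{1/N}} ,
      \]

    so coefficients with above-average variance receive proportionally more bits.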

  12. ALEGRA -- A massively parallel h-adaptive code for solid dynamics

    SciTech Connect

    Summers, R.M.; Wong, M.K.; Boucheron, E.A.; Weatherby, J.R.

    1997-12-31

    ALEGRA is a multi-material, arbitrary-Lagrangian-Eulerian (ALE) code for solid dynamics designed to run on massively parallel (MP) computers. It combines the features of modern Eulerian shock codes, such as CTH, with modern Lagrangian structural analysis codes using an unstructured grid. ALEGRA is being developed for use on the teraflop supercomputers to conduct advanced three-dimensional (3D) simulations of shock phenomena important to a variety of systems. ALEGRA was designed with the Single Program Multiple Data (SPMD) paradigm, in which the mesh is decomposed into sub-meshes so that each processor gets a single sub-mesh with approximately the same number of elements. Using this approach the authors have been able to produce a single code that can scale from one processor to thousands of processors. A current major effort is to develop efficient, high precision simulation capabilities for ALEGRA, without the computational cost of using a global highly resolved mesh, through flexible, robust h-adaptivity of finite elements. H-adaptivity is the dynamic refinement of the mesh by subdividing elements, thus changing the characteristic element size and reducing numerical error. The authors are working on several major technical challenges that must be met to make effective use of HAMMER on MP computers.

  13. Less can be more: RNA-adapters may enhance coding capacity of replicators.

    PubMed

    de Boer, Folkert K; Hogeweg, Paulien

    2012-01-01

    It is still not clear how prebiotic replicators evolved towards the complexity found in present day organisms. Within the most realistic scenario for prebiotic evolution, known as the RNA world hypothesis, such complexity has arisen from replicators consisting solely of RNA. Within contemporary life, remarkably many RNAs are involved in modifying other RNAs. In hindsight, such RNA-RNA modification might have helped in alleviating the limits of complexity posed by the information threshold for RNA-only replicators. Here we study the possible role of such self-modification in early evolution, by modeling the evolution of protocells as evolving replicators, which have the opportunity to incorporate these mechanisms as a molecular tool. Evolution is studied towards a set of 25 arbitrary 'functional' structures, while avoiding all other (misfolded) structures, which are considered to be toxic and increase the death-rate of a protocell. The modeled protocells contain a genotype of different RNA-sequences while their phenotype is the ensemble of secondary structures they can potentially produce from these RNA-sequences. One of the secondary structures explicitly codes for a simple sequence-modification tool. This 'RNA-adapter' can block certain positions on other RNA-sequences through antisense base-pairing. The altered sequence can produce an alternative secondary structure, which may or may not be functional. We show that the modifying potential of interacting RNA-sequences enables these protocells to evolve high fitness under high mutation rates. Moreover, our model shows that because of toxicity of misfolded molecules, redundant coding impedes the evolution of self-modification machinery, in effect restraining the evolvability of coding structures. Hence, high mutation rates can actually promote the evolution of complex coding structures by reducing redundant coding. Protocells can successfully use RNA-adapters to modify their genotype-phenotype mapping in order to

  14. Real-time Adaptive Control Using Neural Generalized Predictive Control

    NASA Technical Reports Server (NTRS)

    Haley, Pam; Soloway, Don; Gold, Brian

    1999-01-01

    The objective of this paper is to demonstrate the feasibility of a Nonlinear Generalized Predictive Control algorithm by showing real-time adaptive control on a plant with relatively fast time-constants. Generalized Predictive Control has classically been used in process control where linear control laws were formulated for plants with relatively slow time-constants. The plant of interest for this paper is a magnetic levitation device that is nonlinear and open-loop unstable. In this application, the reference model of the plant is a neural network that has an embedded nominal linear model in the network weights. The control based on the linear model provides initial stability at the beginning of network training. In using a neural network the control laws are nonlinear and online adaptation of the model is possible to capture unmodeled or time-varying dynamics. Newton-Raphson is the minimization algorithm. Newton-Raphson requires the calculation of the Hessian, but even with this computational expense the low iteration rate makes this a viable algorithm for real-time control.
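
    A minimal Python sketch of the minimization step named above: Newton-Raphson on a one-dimensional predictive cost J(u) over the control input u, with derivatives taken numerically. The linear prediction model, horizon, and weights are toy stand-ins for the paper's neural network model and maglev plant:

      def cost(u, x0=0.0, setpoint=1.0, horizon=5):
          x, J = x0, 0.0
          for _ in range(horizon):
              x = 0.9 * x + 0.5 * u                  # assumed linear prediction model
              J += (setpoint - x) ** 2 + 0.01 * u ** 2
          return J

      def newton_raphson(u0=0.0, eps=1e-4, iters=10):
          u = u0
          for _ in range(iters):
              g = (cost(u + eps) - cost(u - eps)) / (2 * eps)               # gradient
              h = (cost(u + eps) - 2 * cost(u) + cost(u - eps)) / eps ** 2  # Hessian (1-D)
              u -= g / h
          return u

      print("control input minimizing the predictive cost ~", round(newton_raphson(), 4))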

  15. Generating Adaptive Behaviour within a Memory-Prediction Framework

    PubMed Central

    Rawlinson, David; Kowadlo, Gideon

    2012-01-01

    The Memory-Prediction Framework (MPF) and its Hierarchical-Temporal Memory implementation (HTM) have been widely applied to unsupervised learning problems, for both classification and prediction. To date, there has been no attempt to incorporate MPF/HTM in reinforcement learning or other adaptive systems; that is, to use knowledge embodied within the hierarchy to control a system, or to generate behaviour for an agent. This problem is interesting because the human neocortex is believed to play a vital role in the generation of behaviour, and the MPF is a model of the human neocortex. We propose some simple and biologically-plausible enhancements to the Memory-Prediction Framework. These cause it to explore and interact with an external world, while trying to maximize a continuous, time-varying reward function. All behaviour is generated and controlled within the MPF hierarchy. The hierarchy develops from a random initial configuration by interaction with the world and reinforcement learning only. Among other demonstrations, we show that a 2-node hierarchy can learn to successfully play “rocks, paper, scissors” against a predictable opponent. PMID:22272231

  16. Predictive Simulation Generates Human Adaptations during Loaded and Inclined Walking

    PubMed Central

    Hicks, Jennifer L.; Delp, Scott L.

    2015-01-01

    Predictive simulation is a powerful approach for analyzing human locomotion. Unlike techniques that track experimental data, predictive simulations synthesize gaits by minimizing a high-level objective such as metabolic energy expenditure while satisfying task requirements like achieving a target velocity. The fidelity of predictive gait simulations has only been systematically evaluated for locomotion data on flat ground. In this study, we construct a predictive simulation framework based on energy minimization and use it to generate normal walking, along with walking with a range of carried loads and up a range of inclines. The simulation is muscle-driven and includes controllers based on muscle force and stretch reflexes and contact state of the legs. We demonstrate how human-like locomotor strategies emerge from adapting the model to a range of environmental changes. Our simulation dynamics not only show good agreement with experimental data for normal walking on flat ground (92% of joint angle trajectories and 78% of joint torque trajectories lie within 1 standard deviation of experimental data), but also reproduce many of the salient changes in joint angles, joint moments, muscle coordination, and metabolic energy expenditure observed in experimental studies of loaded and inclined walking. PMID:25830913

  17. Computer code for the prediction of nozzle admittance

    NASA Technical Reports Server (NTRS)

    Nguyen, Thong V.

    1988-01-01

    A procedure which can accurately characterize injector designs for large-thrust (0.5 to 1.5 million pounds), high-pressure (500 to 3000 psia) LOX/hydrocarbon engines is currently under development. In this procedure, a rectangular cross-section combustion chamber is to be used to simulate the lower transverse frequency modes of the large-scale chamber. The chamber will be sized so that the first width mode of the rectangular chamber corresponds to the first tangential mode of the full-scale chamber. Test data to be obtained from the rectangular chamber will be used to assess full-scale engine stability. This requires the development of combustion stability models for rectangular chambers. As part of the combustion stability model development, a computer code, NOAD, based on existing theory, was developed to calculate the nozzle admittances for both rectangular and axisymmetric nozzles. This code is described in detail.

  18. A predictive transport modeling code for ICRF-heated tokamaks

    SciTech Connect

    Phillips, C.K.; Hwang, D.Q.; Houlberg, W.; Attenberger, S.; Tolliver, J.; Hively, L.

    1992-02-01

    In this report, a detailed description of the physics included in the WHIST/RAZE package, as well as a few illustrative examples of the capabilities of the package, will be presented. An in-depth analysis of ICRF heating experiments using WHIST/RAZE will be discussed in a forthcoming report. A general overview of the philosophy behind the structure of the WHIST/RAZE package, a summary of the features of the WHIST code, and a description of the interface to the RAZE subroutines are presented in section 2 of this report. Details of the physics contained in the RAZE code are examined in section 3. Sample results from the package follow in section 4, with concluding remarks and a discussion of possible improvements to the package in section 5.

  19. Operation of the helicopter antenna radiation prediction code

    NASA Technical Reports Server (NTRS)

    Braeden, E. W.; Klevenow, F. T.; Newman, E. H.; Rojas, R. G.; Sampath, K. S.; Scheik, J. T.; Shamansky, H. T.

    1993-01-01

    HARP is a front end as well as a back end for the AMC and NEWAIR computer codes. These codes use the Method of Moments (MM) and the Uniform Geometrical Theory of Diffraction (UTD), respectively, to calculate the electromagnetic radiation patterns for antennas on aircraft. The major difficulty in using these codes is in the creation of proper input files for particular aircraft and in verifying that these files are, in fact, what is intended. HARP creates these input files in a consistent manner and allows the user to verify them for correctness using sophisticated 2D and 3D graphics. After antenna field patterns are calculated using either MM or UTD, HARP can display the results on the user's screen or provide hardcopy output. Because the process of collecting data, building the 3D models, and obtaining the calculated field patterns has been completely automated by HARP, the researcher's productivity can be many times what it would be if these operations had to be done by hand. A complete, step-by-step guide is provided so that the researcher can quickly learn to make use of all the capabilities of HARP.

  20. Euler Technology Assessment for Preliminary Aircraft Design: Compressibility Predictions by Employing the Cartesian Unstructured Grid SPLITFLOW Code

    NASA Technical Reports Server (NTRS)

    Finley, Dennis B.; Karman, Steve L., Jr.

    1996-01-01

    The objective of the second phase of the Euler Technology Assessment program was to evaluate the ability of Euler computational fluid dynamics codes to predict compressible flow effects over a generic fighter wind tunnel model. This portion of the study was conducted by Lockheed Martin Tactical Aircraft Systems, using an in-house Cartesian-grid code called SPLITFLOW. The Cartesian grid technique offers several advantages, including ease of volume grid generation and a reduced number of cells compared to other grid schemes. SPLITFLOW also includes adaptation of the volume grid during the solution to resolve high-gradient regions. The SPLITFLOW code predictions of configuration forces and moments are shown to be adequate for preliminary design, including predictions of sideslip effects and the effects of geometry variations at low and high angles of attack. The transonic pressure prediction capabilities of SPLITFLOW are shown to be improved over subsonic comparisons. The time required to generate the results from initial surface data is on the order of several hours, including grid generation, which is compatible with the needs of the design environment.

  1. ICAN: A versatile code for predicting composite properties

    NASA Technical Reports Server (NTRS)

    Ginty, C. A.; Chamis, C. C.

    1986-01-01

    The Integrated Composites ANalyzer (ICAN), a stand-alone computer code, incorporates micromechanics equations and laminate theory to analyze/design multilayered fiber composite structures. Procedures for both the implementation of new data in ICAN and the selection of appropriate measured data are summarized for: (1) composite systems subject to severe thermal environments; (2) woven fabric/cloth composites; and (3) the selection of new composite systems including those made from high strain-to-fracture fibers. The comparisons demonstrate the versatility of ICAN as a reliable method for determining composite properties suitable for preliminary design.

  2. Predictive codes of familiarity and context during the perceptual learning of facial identities

    NASA Astrophysics Data System (ADS)

    Apps, Matthew A. J.; Tsakiris, Manos

    2013-11-01

    Face recognition is a key component of successful social behaviour. However, the computational processes that underpin perceptual learning and recognition as faces transition from unfamiliar to familiar are poorly understood. In predictive coding, learning occurs through prediction errors that update stimulus familiarity, but recognition is a function of both stimulus and contextual familiarity. Here we show that behavioural responses on a two-option face recognition task can be predicted by the level of contextual and facial familiarity in a computational model derived from predictive-coding principles. Using fMRI, we show that activity in the superior temporal sulcus varies with the contextual familiarity in the model, whereas activity in the fusiform face area covaries with the prediction error parameter that updated facial familiarity. Our results characterize the key computations underpinning the perceptual learning of faces, highlighting that the functional properties of face-processing areas conform to the principles of predictive coding.

  3. Predictive codes of familiarity and context during the perceptual learning of facial identities.

    PubMed

    Apps, Matthew A J; Tsakiris, Manos

    2013-01-01

    Face recognition is a key component of successful social behaviour. However, the computational processes that underpin perceptual learning and recognition as faces transition from unfamiliar to familiar are poorly understood. In predictive coding, learning occurs through prediction errors that update stimulus familiarity, but recognition is a function of both stimulus and contextual familiarity. Here we show that behavioural responses on a two-option face recognition task can be predicted by the level of contextual and facial familiarity in a computational model derived from predictive-coding principles. Using fMRI, we show that activity in the superior temporal sulcus varies with the contextual familiarity in the model, whereas activity in the fusiform face area covaries with the prediction error parameter that updated facial familiarity. Our results characterize the key computations underpinning the perceptual learning of faces, highlighting that the functional properties of face-processing areas conform to the principles of predictive coding.

  4. Predicting early adolescent gang involvement from middle school adaptation.

    PubMed

    Dishion, Thomas J; Nelson, Sarah E; Yasui, Miwa

    2005-03-01

    This study examined the role of adaptation in the first year of middle school (Grade 6, age 11) in affiliation with gangs by the last year of middle school (Grade 8, age 13). The sample consisted of 714 European American (EA) and African American (AA) boys and girls. Specifically, academic grades, reports of antisocial behavior, and peer relations in 6th grade were used to predict multiple measures of gang involvement by 8th grade. The multiple measures of gang involvement included self-, peer, teacher, and counselor reports. Unexpectedly, self-report measures of gang involvement did not correlate highly with peer and school staff reports. The results, however, were similar for other-report and self-report measures of gang involvement. Mean-level analyses revealed statistically reliable differences in 8th-grade gang involvement as a function of youth gender and ethnicity. Structural equation prediction models revealed that peer nominations of rejection, acceptance, academic failure, and antisocial behavior were predictive of gang involvement for most youth. These findings suggest that a youth's level of problem behavior and the school ecology (e.g., peer rejection, school failure) require attention in the design of interventions to prevent the formation of gangs among high-risk young adolescents.

  5. Fire aerosol experiment and comparisons with computer code predictions

    NASA Astrophysics Data System (ADS)

    Gregory, W. S.; Nichols, B. D.; White, B. W.; Smith, P. R.; Leslie, I. H.; Corkran, J. R.

    1988-08-01

    Los Alamos National Laboratory, in cooperation with New Mexico State University, has carried out a series of tests to provide experimental data on fire-generated aerosol transport. These data will be used to verify the aerosol transport capabilities of the FIRAC computer code. FIRAC was developed by Los Alamos for the U.S. Nuclear Regulatory Commission. It is intended to be used by safety analysts to evaluate the effects of hypothetical fires on nuclear plants. One of the most significant aspects of this analysis deals with smoke and radioactive material movement throughout the plant. The tests have been carried out using an industrial furnace that can generate gas temperatures up to 300 C. To date, we have used quartz aerosol with a median diameter of about 10 microns as the fire aerosol simulant. We also plan to use fire-generated aerosols of polystyrene and polymethyl methacrylate (PMMA). The test variables include two nominal gas flow rates (150 and 300 cu ft/min) and three nominal gas temperatures (ambient, 150 C, and 300 C). The test results are presented in the form of plots of aerosol deposition versus duct length. In addition, the mass of aerosol caught in a high-efficiency particulate air (HEPA) filter during the tests is reported. The tests are simulated with the FIRAC code, and the results are compared with the experimental data.

  6. Signature Product Code for Predicting Protein-Protein Interactions

    SciTech Connect

    Martin, Shawn B.; Brown, William M.

    2004-09-25

    The SigProdV1.0 software consists of four programs which together allow the prediction of protein-protein interactions using only amino acid sequences and experimental data. The software is based on the use of tensor products of amino acid trimers coupled with classifiers known as support vector machines. Essentially the program looks for amino acid trimer pairs which occur more frequently in protein pairs which are known to interact. These trimer pairs are then used to make predictions about unknown protein pairs. A detailed description of the method can be found in the paper: S. Martin, D. Roe, J.L. Faulon. "Predicting protein-protein interactions using signature products," Bioinformatics, available online from Advance Access, Aug. 19, 2004.
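
    A minimal sketch of the signature-product idea, under the assumption that pair features are formed as tensor (outer) products of per-sequence trimer counts and fed to a support vector machine; the 3-letter alphabet and toy labels below are illustrative stand-ins, not SigProdV1.0 itself.

```python
import numpy as np
from itertools import product
from sklearn.svm import SVC

ALPHABET = "ACD"   # toy stand-in for the 20 amino acids
TRIMERS = ["".join(t) for t in product(ALPHABET, repeat=3)]

def trimer_counts(seq):
    """Count occurrences of every possible trimer in the sequence."""
    return np.array([sum(seq[i:i + 3] == t for i in range(len(seq) - 2))
                     for t in TRIMERS], dtype=float)

def pair_feature(seq_a, seq_b):
    """Tensor (outer) product of the two trimer-count vectors, flattened."""
    return np.outer(trimer_counts(seq_a), trimer_counts(seq_b)).ravel()

# Toy training set: 1 = interacting pair, 0 = non-interacting (made up).
X = np.array([pair_feature("ACDACDA", "CADACDC"),
              pair_feature("AAACCCD", "DDDCCCA")])
y = np.array([1, 0])
clf = SVC(kernel="linear").fit(X, y)
```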

  7. Prediction of image partitions using Fourier descriptors: application to segmentation-based coding schemes.

    PubMed

    Marqués, F; Llorens, B; Gasull, A

    1998-01-01

    This paper presents a prediction technique for partition sequences. It uses a region-by-region approach that consists of four steps: region parameterization, region prediction, region ordering, and partition creation. The time evolution of each region is divided into two types: regular motion and shape deformation. Both types of evolution are parameterized by means of the Fourier descriptors and they are separately predicted in the Fourier domain. The final predicted partition is built from the ordered combination of the predicted regions, using morphological tools. With this prediction technique, two different applications are addressed in the context of segmentation-based coding approaches. Noncausal partition prediction is applied to partition interpolation, and examples using complete partitions are presented. In turn, causal partition prediction is applied to partition extrapolation for coding purposes, and examples using complete partitions as well as sequences of binary images--shape information in video object planes (VOPs)--are presented. PMID:18276271
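
    The region-parameterization and region-prediction steps can be illustrated with Fourier descriptors of a closed contour: take the FFT of the boundary treated as complex samples, extrapolate each descriptor across frames, and invert. The sketch below, with linear extrapolation from two frames, is an assumption-laden illustration rather than the paper's method.

```python
import numpy as np

def descriptors(contour_xy):
    """Fourier descriptors: FFT of the contour as complex samples."""
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]
    return np.fft.fft(z)

def predict_contour(desc_prev, desc_curr):
    """Linearly extrapolate each descriptor, then invert the FFT."""
    desc_next = 2 * desc_curr - desc_prev
    z = np.fft.ifft(desc_next)
    return np.stack([z.real, z.imag], axis=1)

# Toy frames: a unit circle that dilates; frame 3 is predicted.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
frame1 = np.stack([np.cos(t), np.sin(t)], axis=1)
frame2 = 1.1 * frame1
frame3 = predict_contour(descriptors(frame1), descriptors(frame2))
```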

  8. Adaptive model predictive process control using neural networks

    DOEpatents

    Buescher, Kevin L.; Baum, Christopher C.; Jones, Roger D.

    1997-01-01

    A control system for controlling the output of at least one plant process output parameter is implemented by adaptive model predictive control using a neural network. An improved method and apparatus provides for sampling plant output and control input at a first sampling rate to provide control inputs at the fast rate. The MPC system is, however, provided with a network state vector that is constructed at a second, slower rate so that the input control values used by the MPC system are averaged over a gapped time period. Another improvement is a provision for on-line training that may include difference training, curvature training, and basis center adjustment to maintain the weights and basis centers of the neural network in an updated state that can follow changes in the plant operation apart from initial off-line training data.
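
    The two-rate sampling arrangement described above can be sketched as follows: inputs and outputs are sampled at the fast rate, while the network state vector is assembled at the slower rate from values averaged over the gapped period. The rates and shapes below are illustrative assumptions, not the patent's specification.

```python
import numpy as np

FAST_PER_SLOW = 10   # fast-rate samples per slow-rate update (assumed)

def slow_state_vector(u_fast, y_fast):
    """Average fast-rate inputs/outputs over each slow ("gapped") interval."""
    u_avg = u_fast.reshape(-1, FAST_PER_SLOW).mean(axis=1)
    y_avg = y_fast.reshape(-1, FAST_PER_SLOW).mean(axis=1)
    return np.concatenate([u_avg, y_avg])

u = np.random.rand(50)        # 50 fast-rate control inputs
y = np.random.rand(50)        # 50 fast-rate plant outputs
x = slow_state_vector(u, y)   # 10-element slow-rate network state vector
```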

  9. Adaptive model predictive process control using neural networks

    DOEpatents

    Buescher, K.L.; Baum, C.C.; Jones, R.D.

    1997-08-19

    A control system for controlling the output of at least one plant process output parameter is implemented by adaptive model predictive control using a neural network. An improved method and apparatus provides for sampling plant output and control input at a first sampling rate to provide control inputs at the fast rate. The MPC system is, however, provided with a network state vector that is constructed at a second, slower rate so that the input control values used by the MPC system are averaged over a gapped time period. Another improvement is a provision for on-line training that may include difference training, curvature training, and basis center adjustment to maintain the weights and basis centers of the neural network in an updated state that can follow changes in the plant operation apart from initial off-line training data. 46 figs.

  10. Prediction of longitudinal dispersion coefficient using multivariate adaptive regression splines

    NASA Astrophysics Data System (ADS)

    Haghiabi, Amir Hamzeh

    2016-07-01

    In this paper, multivariate adaptive regression splines (MARS) is developed as a novel soft-computing technique for predicting the longitudinal dispersion coefficient (D_L) in rivers. An experimental dataset related to D_L, collected from the literature, was used to prepare the MARS model. Results of the MARS model were compared with a multi-layer neural network model and empirical formulas. To identify the most effective parameters on D_L, the Gamma test was used. Performance of the MARS model was assessed by calculating standard error indices. The error indices showed that the MARS model performs well and is more accurate than the multi-layer neural network model and the empirical formulas. Results of the Gamma test and the MARS model showed that flow depth (H) and the ratio of mean velocity to shear velocity (u/u*) were the most effective parameters on D_L.
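
    MARS builds its fit from piecewise-linear hinge functions max(0, x − t) and max(0, t − x) anchored at knots. The sketch below fits a fixed set of hinges by least squares to show the basis; a full MARS implementation would add and prune knots adaptively, so this is an illustration, not the model used in the paper.

```python
import numpy as np

def hinge_basis(x, knots):
    """Design matrix of an intercept plus paired hinges at each knot."""
    cols = [np.ones_like(x)]
    for t in knots:
        cols.append(np.maximum(0.0, x - t))
        cols.append(np.maximum(0.0, t - x))
    return np.column_stack(cols)

x = np.linspace(0, 10, 200)
y = np.sin(x) + 0.1 * np.random.randn(200)
B = hinge_basis(x, knots=[2.0, 4.0, 6.0, 8.0])   # knots fixed for illustration
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
y_hat = B @ coef
```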

  11. THE PLUTO CODE FOR ADAPTIVE MESH COMPUTATIONS IN ASTROPHYSICAL FLUID DYNAMICS

    SciTech Connect

    Mignone, A.; Tzeferacos, P.; Zanni, C.; Bodo, G.; Van Straalen, B.; Colella, P.

    2012-01-01

    We present a description of the adaptive mesh refinement (AMR) implementation of the PLUTO code for solving the equations of classical and special relativistic magnetohydrodynamics (MHD and RMHD). The current release exploits, in addition to the static grid version of the code, the distributed infrastructure of the CHOMBO library for multidimensional parallel computations over block-structured, adaptively refined grids. We employ a conservative finite-volume approach where primary flow quantities are discretized at the cell center in a dimensionally unsplit fashion using the Corner Transport Upwind method. Time stepping relies on a characteristic tracing step where piecewise parabolic method, weighted essentially non-oscillatory, or slope-limited linear interpolation schemes can be handily adopted. A characteristic decomposition-free version of the scheme is also illustrated. The solenoidal condition of the magnetic field is enforced by augmenting the equations with a generalized Lagrange multiplier providing propagation and damping of divergence errors through a mixed hyperbolic/parabolic explicit cleaning step. Among the novel features, we describe an extension of the scheme to include non-ideal dissipative processes, such as viscosity, resistivity, and anisotropic thermal conduction without operator splitting. Finally, we illustrate an efficient treatment of point-local, potentially stiff source terms over hierarchical nested grids by taking advantage of the adaptivity in time. Several multidimensional benchmarks and applications to problems of astrophysical relevance assess the potentiality of the AMR version of PLUTO in resolving flow features separated by large spatial and temporal disparities.
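
    For reference, the mixed hyperbolic/parabolic cleaning mentioned above is commonly written in the generalized Lagrange multiplier (GLM) form of Dedner et al.; a frequently quoted version of the coupled induction/psi equations is given below (a hedged, textbook form, not necessarily the exact discretization used in PLUTO).

```latex
% GLM divergence cleaning in the commonly quoted Dedner et al. form;
% c_h and c_p set the propagation and damping of divergence errors.
\begin{align}
  \partial_t \mathbf{B}
    + \nabla\cdot\left(\mathbf{v}\mathbf{B} - \mathbf{B}\mathbf{v}\right)
    + \nabla\psi &= 0, \\
  \partial_t \psi + c_h^{2}\,\nabla\cdot\mathbf{B}
    &= -\frac{c_h^{2}}{c_p^{2}}\,\psi .
\end{align}
```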

  12. Signature Product Code for Predicting Protein-Protein Interactions

    2004-09-25

    The SigProdV1.0 software consists of four programs which together allow the prediction of protein-protein interactions using only amino acid sequences and experimental data. The software is based on the use of tensor products of amino acid trimers coupled with classifiers known as support vector machines. Essentially the program looks for amino acid trimer pairs which occur more frequently in protein pairs which are known to interact. These trimer pairs are then used to make predictions about unknown protein pairs. A detailed description of the method can be found in the paper: S. Martin, D. Roe, J.L. Faulon. "Predicting protein-protein interactions using signature products," Bioinformatics, available online from Advance Access, Aug. 19, 2004.

  13. Less Can Be More: RNA-Adapters May Enhance Coding Capacity of Replicators

    PubMed Central

    de Boer, Folkert K.; Hogeweg, Paulien

    2012-01-01

    It is still not clear how prebiotic replicators evolved towards the complexity found in present day organisms. Within the most realistic scenario for prebiotic evolution, known as the RNA world hypothesis, such complexity has arisen from replicators consisting solely of RNA. Within contemporary life, remarkably many RNAs are involved in modifying other RNAs. In hindsight, such RNA-RNA modification might have helped in alleviating the limits of complexity posed by the information threshold for RNA-only replicators. Here we study the possible role of such self-modification in early evolution, by modeling the evolution of protocells as evolving replicators, which have the opportunity to incorporate these mechanisms as a molecular tool. Evolution is studied towards a set of 25 arbitrary ‘functional’ structures, while avoiding all other (misfolded) structures, which are considered to be toxic and increase the death-rate of a protocell. The modeled protocells contain a genotype of different RNA-sequences while their phenotype is the ensemble of secondary structures they can potentially produce from these RNA-sequences. One of the secondary structures explicitly codes for a simple sequence-modification tool. This ‘RNA-adapter’ can block certain positions on other RNA-sequences through antisense base-pairing. The altered sequence can produce an alternative secondary structure, which may or may not be functional. We show that the modifying potential of interacting RNA-sequences enables these protocells to evolve high fitness under high mutation rates. Moreover, our model shows that because of toxicity of misfolded molecules, redundant coding impedes the evolution of self-modification machinery, in effect restraining the evolvability of coding structures. Hence, high mutation rates can actually promote the evolution of complex coding structures by reducing redundant coding. Protocells can successfully use RNA-adapters to modify their genotype-phenotype mapping in

  14. PARC Navier-Stokes code upgrade and validation for high speed aeroheating predictions

    NASA Technical Reports Server (NTRS)

    Liver, Peter A.; Praharaj, Sarat C.; Seaford, C. Mark

    1990-01-01

    Applications of the PARC full Navier-Stokes code for hypersonic flowfield and aeroheating predictions around blunt bodies, such as the Aeroassist Flight Experiment (AFE) and the Aeroassisted Orbital Transfer Vehicle (AOTV), are evaluated. Two-dimensional/axisymmetric and three-dimensional perfect-gas versions of the code were upgraded and tested against benchmark wind tunnel cases of a hemisphere-cylinder, the three-dimensional AFE forebody, and axisymmetric AFE and AOTV aerobrake/wake flowfields. PARC calculations are in good agreement with experimental data and with the results of similar computer codes. Difficulties encountered in flowfield and heat transfer predictions due to the effects of grid density, boundary conditions (such as the singular stagnation-line axis), and artificial dissipation terms are presented, together with the subsequent improvements made to the code. The experience gained with the perfect-gas code is currently being utilized in applications of an equilibrium-air real-gas PARC version developed at REMTECH.

  15. Tiltrotor Aeroacoustic Code (TRAC) Prediction Assessment and Initial Comparisons with Tram Test Data

    NASA Technical Reports Server (NTRS)

    Burley, Casey L.; Brooks, Thomas F.; Charles, Bruce D.; McCluer, Megan

    1999-01-01

    A prediction sensitivity assessment to inputs and blade modeling is presented for the TiltRotor Aeroacoustic Code (TRAC). For this study, the non-CFD prediction system option in TRAC is used. Here, the comprehensive rotorcraft code, CAMRAD.Mod1, coupled with the high-resolution sectional loads code HIRES, predicts unsteady blade loads to be used in the noise prediction code WOPWOP. The sensitivity of the predicted blade motions, blade airloads, wake geometry, and acoustics is examined with respect to rotor rpm, blade twist and chord, and to blade dynamic modeling. To accomplish this assessment, an interim input-deck for the TRAM test model and an input-deck for a reference test model are utilized in both rigid and elastic modes. Both of these test models are regarded as near scale models of the V-22 proprotor (tiltrotor). With basic TRAC sensitivities established, initial TRAC predictions are compared to results of an extensive test of an isolated model proprotor. The test was that of the TiltRotor Aeroacoustic Model (TRAM) conducted in the Duits-Nederlandse Windtunnel (DNW). Predictions are compared to measured noise for the proprotor operating over an extensive range of conditions. The variation of predictions demonstrates the great care that must be taken in defining the blade motion. However, even with this variability, the predictions using the different blade modeling successfully capture (bracket) the levels and trends of the noise for conditions ranging from descent to ascent.

  16. Motion-adaptive model-assisted compatible coding with spatiotemporal scalability

    NASA Astrophysics Data System (ADS)

    Lee, JaeBeom; Eleftheriadis, Alexandros

    1997-01-01

    We introduce the concept of motion-adaptive spatio-temporal model-assisted compatible (MA-STMAC) coding, a technique to selectively encode areas of different importance to the human eye, in terms of space and time, in moving images, taking object motion into consideration. The previous STMAC approach was proposed based on the fact that human 'eye contact' and 'lip synchronization' are very important in person-to-person communication. Several areas, including the eyes and lips, need different types of quality, since different areas have different perceptual significance to human observers. The approach provides a better rate-distortion tradeoff than conventional image coding techniques based on MPEG-1, MPEG-2, H.261, as well as H.263. STMAC coding is applied on top of an encoder, taking full advantage of its core design. Model motion tracking in our previous STMAC approach was not automatic. The proposed MA-STMAC coding considers the motion of the human face within the STMAC concept using automatic area detection. Experimental results are given using ITU-T H.263, addressing very low bit-rate compression.

  17. Adaptation of TRIPND Field Line Tracing Code to a Shaped, Poloidal Divertor Geometry

    NASA Astrophysics Data System (ADS)

    Monat, P.; Moyer, R. A.; Evans, T. E.

    2001-10-01

    The magnetic field line tracing code TRIPND(T.E. Evans, Proc. 18th Conf. on Control. Fusion and Plasma Phys., Berlin, Germany, Vol. 15C, Part II (European Physical Society, 1991) p. 65.) has been modified to use the axisymmetric equilibrium magnetic fields from an EFIT reconstruction in place of circular equilibria with multi-filament current profile expansions. This adaptation provides realistic plasma current profiles in shaped geometries. A major advantage of this modification is that it allows investigation of magnetic field line trajectories in any device for which an EFIT reconstruction is available. The TRIPND code has been used to study the structure of the magnetic field line topology in circular, limiter tokamaks, including Tore Supra and TFTR and has been benchmarked against the GOURDON code used in Europe for magnetic field line tracing. The new version of the code, called TRIP3D, is used to investigate the sensitivity of various shaped equilibria to non-axisymmetric perturbations such as a shifted F coil or error field correction coils.

  18. Radiographic image sequence coding using adaptive finite-state vector quantization

    NASA Astrophysics Data System (ADS)

    Joo, Chang-Hee; Choi, Jong S.

    1990-11-01

    Vector quantization is an effective spatial-domain image coding technique at rates under 1.0 bits per pixel. To achieve this quality at lower rates, it is necessary to exploit spatial redundancy over a larger region of pixels than is possible with memoryless VQ. A finite-state vector quantizer can achieve the same performance as memoryless VQ at lower rates. This paper describes an adaptive finite-state vector quantization scheme for radiographic image sequence coding. A simulation experiment has been carried out with 4×4 blocks of pixels from a sequence of cardiac angiograms consisting of 40 frames of size 256×256 pixels each. At 0.45 bpp, the resulting adaptive FSVQ encoder achieves performance comparable to earlier memoryless VQs at 0.8 bpp.
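
    A finite-state VQ gains its rate advantage by letting an encoder state, tracked identically by the decoder, select a small per-state codebook. The following toy sketch (random codebooks, arbitrary next-state table) illustrates the mechanism only and is not the adaptive FSVQ of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N_STATES, CB_SIZE, DIM = 4, 8, 16   # e.g. DIM = 4x4 pixel blocks (assumed)
codebooks = rng.normal(size=(N_STATES, CB_SIZE, DIM))             # per-state codebooks
next_state = rng.integers(0, N_STATES, size=(N_STATES, CB_SIZE))  # state transitions

def encode(blocks):
    """Send only codeword indices; the decoder mirrors the state updates."""
    state, indices = 0, []
    for b in blocks:
        d = np.sum((codebooks[state] - b) ** 2, axis=1)
        i = int(np.argmin(d))            # nearest codeword in this state
        indices.append(i)
        state = next_state[state, i]     # no side information needed
    return indices

indices = encode(rng.normal(size=(100, DIM)))
```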

  19. Simulation of Supersonic Jet Noise with the Adaptation of Overflow CFD Code and Kirchhoff Surface Integral

    NASA Technical Reports Server (NTRS)

    Kandula, Max; Caimi, Raoul; Steinrock, T. (Technical Monitor)

    2001-01-01

    An acoustic prediction capability for supersonic axisymmetric jets was developed on the basis of the OVERFLOW Navier-Stokes CFD (Computational Fluid Dynamics) code of NASA Langley Research Center. Reynolds-averaged turbulent stresses in the flow field are modeled with the aid of the Spalart-Allmaras one-equation turbulence model. Appropriate acoustic and outflow boundary conditions were implemented to compute time-dependent acoustic pressure in the nonlinear source field. Based on the specification of the acoustic pressure and its temporal and normal derivatives on the Kirchhoff surface, the near-field and far-field sound pressure levels are computed via the Kirchhoff surface integral, with the Kirchhoff surface chosen to enclose the nonlinear sound source region described by the CFD code. The methods are validated by a comparison of the predicted sound pressure levels with the available data for an axisymmetric turbulent supersonic (Mach 2) perfectly expanded jet.
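
    For context, a commonly quoted Kirchhoff formula for a stationary surface S, which reconstructs the far-field pressure from the pressure and its temporal and normal derivatives evaluated at the retarded time, is given below; this is the classical form, not necessarily the exact expression implemented in the report.

```latex
% Classical Kirchhoff formula for a stationary surface S, with r the
% distance from the surface element to the observer, n the outward
% normal, and tau = t - r/c the retarded time (hedged form).
\begin{equation}
  p'(\mathbf{x},t) = \frac{1}{4\pi}\oint_S
  \left[
    \frac{p'}{r^{2}}\frac{\partial r}{\partial n}
    + \frac{1}{c\,r}\frac{\partial r}{\partial n}\frac{\partial p'}{\partial \tau}
    - \frac{1}{r}\frac{\partial p'}{\partial n}
  \right]_{\tau = t - r/c}\,\mathrm{d}S .
\end{equation}
```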

  20. Ducted-Fan Engine Acoustic Predictions using a Navier-Stokes Code

    NASA Technical Reports Server (NTRS)

    Rumsey, C. L.; Biedron, R. T.; Farassat, F.; Spence, P. L.

    1998-01-01

    A Navier-Stokes computer code is used to predict one of the ducted-fan engine acoustic modes that results from rotor-wake/stator-blade interaction. A patched sliding-zone interface is employed to pass information between the moving rotor row and the stationary stator row. The code produces averaged aerodynamic results downstream of the rotor that agree well with a widely used average-passage code. The acoustic mode of interest is generated successfully by the code and is propagated well upstream of the rotor; temporal and spatial numerical resolution are fine enough such that attenuation of the signal is small. Two acoustic codes are used to find the far-field noise. Near-field propagation is computed by using Eversman's wave envelope code, which is based on a finite-element model. Propagation to the far field is accomplished by using the Kirchhoff formula for moving surfaces with the results of the wave envelope code as input data. Comparison of measured and computed far-field noise levels show fair agreement in the range of directivity angles where the peak radiation lobes from the inlet are observed. Although only a single acoustic mode is targeted in this study, the main conclusion is a proof-of-concept: Navier-Stokes codes can be used both to generate and propagate rotor/stator acoustic modes forward through an engine, where the results can be coupled to other far-field noise prediction codes.

  1. Ducted-Fan Engine Acoustic Predictions Using a Navier-Stokes Code

    NASA Technical Reports Server (NTRS)

    Rumsey, C. L.; Biedron, R. T.; Farassat, F.; Spence, P. L.

    1998-01-01

    A Navier-Stokes computer code is used to predict one of the ducted-fan engine acoustic modes that results from rotor-wake/stator-blade interaction. A patched sliding-zone interface is employed to pass information between the moving rotor row and the stationary stator row. The code produces averaged aerodynamic results downstream of the rotor that agree well with a widely used average-passage code. The acoustic mode of interest is generated successfully by the code and is propagated well upstream of the rotor; temporal and spatial numerical resolution are fine enough such that attenuation of the signal is small. Two acoustic codes are used to find the far-field noise. Near-field propagation is computed by using Eversman's wave envelope code, which is based on a finite-element model. Propagation to the far field is accomplished by using the Kirchhoff formula for moving surfaces with the results of the wave envelope code as input data. Comparison of measured and computed far-field noise levels shows fair agreement in the range of directivity angles where the peak radiation lobes from the inlet are observed. Although only a single acoustic mode is targeted in this study, the main conclusion is a proof-of-concept: Navier-Stokes codes can be used both to generate and propagate rotor/stator acoustic modes forward through an engine, where the results can be coupled to other far-field noise prediction codes.

  2. Vector adaptive predictive coder for speech and audio

    NASA Technical Reports Server (NTRS)

    Chen, Juin-Hwey (Inventor); Gersho, Allen (Inventor)

    1990-01-01

    A real-time vector adaptive predictive coder which approximates each vector of K speech samples by using each of M fixed vectors in a first codebook to excite a time-varying synthesis filter and picking the vector that minimizes distortion. Predictive analysis for each frame determines parameters used for computing, from the vectors in the first codebook, zero-state response vectors that are stored at the same address (index) in a second codebook. Encoding of input speech vectors s_n is then carried out using the second codebook. When the vector that minimizes distortion is found, its index is transmitted to a decoder which has a codebook identical to the first codebook of the encoder. There the index is used to read out a vector that is used to synthesize an output speech vector s_n. The parameters used in the encoder are quantized, for example by using a table, and the indices are transmitted to the decoder, where they are decoded to specify the transfer characteristics of the filters used in producing the vector s_n from the receiver codebook vector selected by the transmitted vector index.
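
    The codebook search at the heart of this coder can be sketched as follows: filter each codebook vector through the current synthesis filter with zero initial state and pick the index with minimum distortion to the target. The filter coefficients and dimensions below are toy assumptions, not the patent's parameter values.

```python
import numpy as np
from scipy.signal import lfilter

K, M = 4, 64                         # vector length and codebook size (assumed)
rng = np.random.default_rng(1)
codebook = rng.normal(size=(M, K))   # "first codebook" of excitation vectors
a = np.array([1.0, -0.9])            # toy synthesis filter 1/A(z)

def best_index(target):
    # Zero-state responses; per frame these form the "second codebook".
    zsr = np.array([lfilter([1.0], a, c) for c in codebook])
    errs = np.sum((zsr - target) ** 2, axis=1)
    return int(np.argmin(errs))      # index transmitted to the decoder

idx = best_index(rng.normal(size=K))
```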

  3. High Speed Research Noise Prediction Code (HSRNOISE) User's and Theoretical Manual

    NASA Technical Reports Server (NTRS)

    Golub, Robert (Technical Monitor); Rawls, John W., Jr.; Yeager, Jessie C.

    2004-01-01

    This report describes a computer program, HSRNOISE, that predicts noise levels for a supersonic aircraft powered by mixed flow turbofan engines with rectangular mixer-ejector nozzles. It fully documents the noise prediction algorithms, provides instructions for executing the HSRNOISE code, and provides predicted noise levels for the High Speed Research (HSR) program Technology Concept (TC) aircraft. The component source noise prediction algorithms were developed jointly by Boeing, General Electric Aircraft Engines (GEAE), NASA and Pratt & Whitney during the course of the NASA HSR program. Modern Technologies Corporation developed an alternative mixer ejector jet noise prediction method under contract to GEAE that has also been incorporated into the HSRNOISE prediction code. Algorithms for determining propagation effects and calculating noise metrics were taken from the NASA Aircraft Noise Prediction Program.

  4. Predictive coding under the free-energy principle.

    PubMed

    Friston, Karl; Kiebel, Stefan

    2009-05-12

    This paper considers prediction and perceptual categorization as an inference problem that is solved by the brain. We assume that the brain models the world as a hierarchy or cascade of dynamical systems that encode causal structure in the sensorium. Perception is equated with the optimization or inversion of these internal models, to explain sensory data. Given a model of how sensory data are generated, we can invoke a generic approach to model inversion, based on a free energy bound on the model's evidence. The ensuing free-energy formulation furnishes equations that prescribe the process of recognition, i.e. the dynamics of neuronal activity that represent the causes of sensory input. Here, we focus on a very general model, whose hierarchical and dynamical structure enables simulated brains to recognize and predict trajectories or sequences of sensory states. We first review hierarchical dynamical models and their inversion. We then show that the brain has the necessary infrastructure to implement this inversion and illustrate this point using synthetic birds that can recognize and categorize birdsongs.

  5. Predictive coding under the free-energy principle.

    PubMed

    Friston, Karl; Kiebel, Stefan

    2009-05-12

    This paper considers prediction and perceptual categorization as an inference problem that is solved by the brain. We assume that the brain models the world as a hierarchy or cascade of dynamical systems that encode causal structure in the sensorium. Perception is equated with the optimization or inversion of these internal models, to explain sensory data. Given a model of how sensory data are generated, we can invoke a generic approach to model inversion, based on a free energy bound on the model's evidence. The ensuing free-energy formulation furnishes equations that prescribe the process of recognition, i.e. the dynamics of neuronal activity that represent the causes of sensory input. Here, we focus on a very general model, whose hierarchical and dynamical structure enables simulated brains to recognize and predict trajectories or sequences of sensory states. We first review hierarchical dynamical models and their inversion. We then show that the brain has the necessary infrastructure to implement this inversion and illustrate this point using synthetic birds that can recognize and categorize birdsongs. PMID:19528002

  6. Coded aperture imaging - Predicted performance of uniformly redundant arrays

    NASA Technical Reports Server (NTRS)

    Fenimore, E. E.

    1978-01-01

    It is noted that uniformly redundant arrays (URAs) have autocorrelation functions with perfectly flat sidelobes. A generalized signal-to-noise equation has been developed to predict URA performance. The signal-to-noise value is formulated as a function of aperture transmission or density, the ratio of the intensity of a resolution element to the integrated source intensity, and the ratio of detector background noise to the integrated intensity. It is shown that the only two-dimensional URAs known have a transmission of one half. This is not a great limitation, because a nonoptimum transmission of one half never reduces the signal-to-noise ratio by more than 30%. The reconstructed URA image contains practically uniform noise, regardless of the object structure. The URA's improvement over the single-pinhole camera is much larger for high-intensity points than for low-intensity points.

  7. Maneuvering Rotorcraft Noise Prediction: A New Code for a New Problem

    NASA Technical Reports Server (NTRS)

    Brentner, Kenneth S.; Bres, Guillaume A.; Perez, Guillaume; Jones, Henry E.

    2002-01-01

    This paper presents the unique aspects of the development of an entirely new maneuver noise prediction code called PSU-WOPWOP. The main focus of the code is the aeroacoustic aspects of the maneuver noise problem, when the aeromechanical input data are provided (namely aircraft and blade motion, blade airloads). The PSU-WOPWOP noise prediction capability was developed for rotors in steady and transient maneuvering flight. Featuring an object-oriented design, the code allows great flexibility for complex rotor configurations and motion (including multiple rotors and full aircraft motion). The relative locations and number of hinges, flexures, and body motions can be arbitrarily specified to match any specific rotorcraft. An analysis of algorithm efficiency is performed for maneuver noise prediction, along with a description of the tradeoffs made specifically for the maneuvering noise problem. Noise predictions for the main rotor of a rotorcraft in steady descent, transient (arrested) descent, hover, and a mild "pop-up" maneuver are demonstrated.

  8. Predicting Health Care Cost Transitions Using a Multidimensional Adaptive Prediction Process.

    PubMed

    Guo, Xiaobo; Gandy, William; Coberley, Carter; Pope, James; Rula, Elizabeth; Wells, Aaron

    2015-08-01

    Managing population health requires meeting individual care needs while striving for increased efficiency and quality of care. Predictive models can integrate diverse data to provide objective assessment of individual prospective risk to identify individuals requiring more intensive health management in the present. The purpose of this research was to develop and test a predictive modeling approach, Multidimensional Adaptive Prediction Process (MAPP). MAPP is predicated on dividing the population into cost cohorts and then utilizing a collection of models and covariates to optimize future cost prediction for individuals in each cohort. MAPP was tested on 3 years of administrative health care claims starting in 2009 for health plan members (average n=25,143) with evidence of coronary heart disease. A "status quo" reference modeling methodology applied to the total annual population was established for comparative purposes. Results showed that members identified by MAPP contributed $7.9 million and $9.7 million more in 2011 health care costs than the reference model for cohorts increasing in cost or remaining high cost, respectively. Across all cohorts, the additional accurate cost capture of MAPP translated to an annual difference of $1882 per member, a 21% improvement, relative to the reference model. The results demonstrate that improved future cost prediction is achievable using a novel adaptive multiple model approach. Through accurate prospective identification of individuals whose costs are expected to increase, MAPP can help health care entities achieve efficient resource allocation while improving care quality for emergent need individuals who are intermixed among a diverse set of health care consumers.
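
    The cohort-then-model structure of MAPP can be sketched generically: partition members by baseline cost, then fit a separate predictor per cohort. The cohort edges, synthetic data, and linear models below are illustrative assumptions, not the models used in the study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 5))                  # claims-derived features (toy)
base_cost = np.abs(rng.normal(5e3, 4e3, 1000))  # current-year cost
next_cost = base_cost * (1 + 0.2 * X[:, 0]) + rng.normal(0, 500, 1000)

edges = [0, 2e3, 8e3, np.inf]                   # low/medium/high cohorts (assumed)
cohort = np.digitize(base_cost, edges) - 1

# One model per cost cohort, as in the cohort-based approach described above.
models = {c: LinearRegression().fit(X[cohort == c], next_cost[cohort == c])
          for c in np.unique(cohort)}

pred = np.array([models[c].predict(x[None])[0] for c, x in zip(cohort, X)])
```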

  9. A Predictive Approach to Eliminating Errors in Software Code

    NASA Technical Reports Server (NTRS)

    2006-01-01

    NASA's Metrics Data Program Data Repository is a database that stores problem, product, and metrics data. The primary goal of this data repository is to provide project data to the software community. In doing so, the Metrics Data Program collects artifacts from a large NASA dataset, generates metrics on the artifacts, and then generates reports that are made available to the public at no cost. The data that are made available to general users have been sanitized and authorized for publication through the Metrics Data Program Web site by officials representing the projects from which the data originated. The data repository is operated by NASA's Independent Verification and Validation (IV&V) Facility, which is located in Fairmont, West Virginia, a high-tech hub for emerging innovation in the Mountain State. The IV&V Facility was founded in 1993, under the NASA Office of Safety and Mission Assurance, as a direct result of recommendations made by the National Research Council and the Report of the Presidential Commission on the Space Shuttle Challenger Accident. Today, under the direction of Goddard Space Flight Center, the IV&V Facility continues its mission to provide the highest achievable levels of safety and cost-effectiveness for mission-critical software. By extending its data to public users, the facility has helped improve the safety, reliability, and quality of complex software systems throughout private industry and other government agencies. Integrated Software Metrics, Inc., is one of the organizations that has benefited from studying the metrics data. As a result, the company has evolved into a leading developer of innovative software-error prediction tools that help organizations deliver better software, on time and on budget.

  10. Genome-environment associations in sorghum landraces predict adaptive traits

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Improving environmental adaptation in crops is essential for food security under global change, but phenotyping adaptive traits remains a major bottleneck. If associations between single-nucleotide polymorphism (SNP) alleles and environment of origin in crop landraces reflect adaptation, then these ...

  11. Optimal joint power-rate adaptation for error resilient video coding

    NASA Astrophysics Data System (ADS)

    Lin, Yuan; Gürses, Eren; Kim, Anna N.; Perkis, Andrew

    2008-01-01

    In recent years, digital imaging devices have become an integral part of our daily lives due to advancements in imaging, storage, and wireless communication technologies. Power-Rate-Distortion efficiency is the key factor common to all resource-constrained portable devices. In addition, especially in real-time wireless multimedia applications, channel-adaptive and error-resilient source coding techniques should be considered in conjunction with P-R-D efficiency, since most of the time Automatic Repeat-reQuest (ARQ) and Forward Error Correction (FEC) are either not feasible or costly in terms of bandwidth efficiency and delay. In this work, we focus on scenarios of real-time video communication for resource-constrained devices over bandwidth-limited and lossy channels, and propose an analytic Power-channel Error-Rate-Distortion (P-E-R-D) model. In particular, the probabilities of macroblock coding modes are intelligently controlled through an optimization process according to their distinct rate-distortion-complexity performance for a given channel error rate. The framework provides theoretical guidelines for the joint analysis of error-resilient source coding and resource allocation. Experimental results show that our optimal framework provides consistent rate-distortion performance gains under different power constraints.

  12. Parallelization of GeoClaw code for modeling geophysical flows with adaptive mesh refinement on many-core systems

    USGS Publications Warehouse

    Zhang, S.; Yuen, D.A.; Zhu, A.; Song, S.; George, D.L.

    2011-01-01

    We parallelized the GeoClaw code on a one-level grid using OpenMP in March 2011 to meet the urgent need of simulating the near-shore tsunami waves from Tohoku 2011, and achieved over 75% of the potential speed-up on an eight-core Dell Precision T7500 workstation [1]. After submitting that work to SC11 - the International Conference for High Performance Computing - we obtained an unreleased OpenMP version of GeoClaw from David George, who developed the GeoClaw code as part of his Ph.D. thesis. In this paper, we show the complementary characteristics of the two approaches used in parallelizing GeoClaw and the speed-up obtained by combining the advantages of the two individual approaches with adaptive mesh refinement (AMR), demonstrating the capability of running GeoClaw efficiently on many-core systems. We also show a novel simulation of the Tohoku 2011 tsunami waves inundating the Sendai airport and the Fukushima nuclear power plants, in which the finest grid distance of 20 meters is achieved through 4-level AMR. This simulation yields quite good predictions of the wave heights and travel times of the tsunami waves. © 2011 IEEE.

  13. Evaluation of in-network adaptation of scalable high efficiency video coding (SHVC) in mobile environments

    NASA Astrophysics Data System (ADS)

    Nightingale, James; Wang, Qi; Grecos, Christos; Goma, Sergio

    2014-02-01

    High Efficiency Video Coding (HEVC), the latest video compression standard (also known as H.265), can deliver video streams of comparable quality to the current H.264 Advanced Video Coding (H.264/AVC) standard with a 50% reduction in bandwidth. Research into SHVC, the scalable extension to the HEVC standard, is still in its infancy. One important area for investigation is whether, given the greater compression ratio of HEVC (and SHVC), the loss of packets containing video content will have a greater impact on the quality of delivered video than is the case with H.264/AVC or its scalable extension H.264/SVC. In this work we empirically evaluate the layer-based, in-network adaptation of video streams encoded using SHVC in situations where dynamically changing bandwidths and datagram loss ratios require the real-time adaptation of video streams. Through extensive experimentation, we establish a comprehensive set of benchmarks for SHVC-based high-definition video streaming in loss-prone network environments such as those commonly found in mobile networks. Among other results, we highlight that packet losses of only 1% can lead to a substantial reduction in PSNR of over 3 dB and error propagation in over 130 pictures following the one in which the loss occurred. This work would be one of the earliest studies in this cutting-edge area that reports benchmark evaluation results for the effects of datagram loss on SHVC picture quality and offers empirical and analytical insights into SHVC adaptation to lossy, mobile networking conditions.
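
    Layer-based in-network adaptation of a scalable stream reduces, in essence, to forwarding the largest prefix of layers that fits the measured bandwidth while always preserving the base layer. The sketch below illustrates this policy with assumed per-layer bitrates; a real adapter would parse SHVC NAL-unit layer identifiers.

```python
# Assumed per-layer bitrates: base layer + two enhancement layers (kbit/s).
LAYER_KBPS = [1500, 1200, 1800]

def layers_to_forward(available_kbps):
    """Forward the largest prefix of layers that fits the bandwidth."""
    total, keep = 0, 0
    for rate in LAYER_KBPS:
        if total + rate > available_kbps:
            break
        total += rate
        keep += 1
    return max(keep, 1)   # the base layer is always forwarded

assert layers_to_forward(3000) == 2   # drops the top enhancement layer
```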

  14. FPGA-based rate-adaptive LDPC-coded modulation for the next generation of optical communication systems.

    PubMed

    Zou, Ding; Djordjevic, Ivan B

    2016-09-01

    In this paper, we propose a rate-adaptive FEC scheme based on LDPC codes, together with its software-reconfigurable unified FPGA architecture. By FPGA emulation, we demonstrate that the proposed class of rate-adaptive LDPC codes based on shortening, with an overhead from 25% to 42.9%, provides a coding gain ranging from 13.08 dB to 14.28 dB at a post-FEC BER of 10^-15 for BPSK transmission. In addition, the proposed rate-adaptive LDPC coding combined with higher-order modulations has been demonstrated, including QPSK, 8-QAM, 16-QAM, 32-QAM, and 64-QAM, covering a wide range of signal-to-noise ratios. Furthermore, we apply unequal error protection by employing different LDPC codes on different bits in 16-QAM and 64-QAM, which results in an additional 0.5 dB gain compared to conventional LDPC-coded modulation with the same code rate. PMID:27607718

  15. FPGA-based rate-adaptive LDPC-coded modulation for the next generation of optical communication systems.

    PubMed

    Zou, Ding; Djordjevic, Ivan B

    2016-09-01

    In this paper, we propose a rate-adaptive FEC scheme based on LDPC codes, together with its software-reconfigurable unified FPGA architecture. By FPGA emulation, we demonstrate that the proposed class of rate-adaptive LDPC codes based on shortening, with an overhead from 25% to 42.9%, provides a coding gain ranging from 13.08 dB to 14.28 dB at a post-FEC BER of 10^-15 for BPSK transmission. In addition, the proposed rate-adaptive LDPC coding combined with higher-order modulations has been demonstrated, including QPSK, 8-QAM, 16-QAM, 32-QAM, and 64-QAM, covering a wide range of signal-to-noise ratios. Furthermore, we apply unequal error protection by employing different LDPC codes on different bits in 16-QAM and 64-QAM, which results in an additional 0.5 dB gain compared to conventional LDPC-coded modulation with the same code rate.
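
    The rate adaptation by shortening referred to in the two records above can be illustrated arithmetically: fixing s of the k message bits to known values (and not transmitting them) turns an (n, k) mother code into an (n − s, k − s) code of lower rate and higher overhead. The mother-code size below is an assumption chosen so the overheads span the quoted 25% to 42.9% range.

```python
# Illustrative mother code: rate 0.8, i.e. 25% overhead before shortening.
n, k = 25000, 20000

def shortened(n, k, s):
    """Rate and overhead after shortening s message bits."""
    rate = (k - s) / (n - s)
    overhead = (n - s) / (k - s) - 1.0
    return rate, overhead

for s in (0, 5000, 8345):
    rate, oh = shortened(n, k, s)
    print(f"s={s}: rate={rate:.3f}, overhead={100 * oh:.1f}%")
# overheads: 25.0%, 33.3%, 42.9% -- spanning the range quoted above
```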

  16. Improving Inpatient Surveys: Web-Based Computer Adaptive Testing Accessed via Mobile Phone QR Codes

    PubMed Central

    2016-01-01

    Background: The National Health Service (NHS) 70-item inpatient questionnaire surveys inpatients on their perceptions of their hospitalization experience. However, it imposes more burden on the patient than other similar surveys. The literature shows that computerized adaptive testing (CAT) based on item response theory can help shorten the item length of a questionnaire without compromising its precision. Objective: Our aim was to investigate whether CAT can be (1) efficient with item reduction and (2) used with quick response (QR) codes scanned by mobile phones. Methods: After downloading the 2008 inpatient survey data from the Picker Institute Europe website and analyzing the difficulties of this 70-item questionnaire, we used an author-made Excel program using the Rasch partial credit model to simulate 1000 patients’ true scores followed by a standard normal distribution. The CAT was compared to two other scenarios of answering all items (AAI) and the randomized selection method (RSM), as we investigated item length (efficiency) and measurement accuracy. The author-made Web-based CAT program for gathering patient feedback was effectively accessed from mobile phones by scanning the QR code. Results: We found that the CAT can be more efficient for patients answering questions (ie, fewer items to respond to) than either AAI or RSM without compromising its measurement accuracy. A Web-based CAT inpatient survey accessed by scanning a QR code on a mobile phone was viable for gathering inpatient satisfaction responses. Conclusions: With advances in technology, patients can now be offered alternatives for providing feedback about hospitalization satisfaction. This Web-based CAT is a possible option in health care settings for reducing the number of survey items, as well as offering an innovative QR code access. PMID:26935793
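
    A CAT loop of the kind evaluated here can be sketched under a Rasch model: repeatedly administer the unused item whose difficulty is closest to the current ability estimate (the maximum-information choice for Rasch items) and update the estimate from the responses. The item bank, response simulation, and Newton update below are illustrative assumptions, not the authors' Excel program.

```python
import numpy as np

rng = np.random.default_rng(3)
b = np.sort(rng.normal(size=70))   # difficulties of a 70-item bank (toy)

def run_cat(true_theta, n_items=15):
    """Administer n_items adaptively and return the ability estimate."""
    theta, used, resp = 0.0, [], []
    for _ in range(n_items):
        free = [i for i in range(len(b)) if i not in used]
        i = min(free, key=lambda j: abs(b[j] - theta))   # max-information item
        used.append(i)
        p_true = 1.0 / (1.0 + np.exp(-(true_theta - b[i])))
        resp.append(rng.random() < p_true)               # simulated response
        p = 1.0 / (1.0 + np.exp(-(theta - b[np.array(used)])))
        theta += np.sum(np.array(resp) - p) / np.sum(p * (1 - p))  # Newton step
    return theta

estimate = run_cat(true_theta=1.0)   # ~15 items instead of all 70
```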

  17. Characterizing Decision-Analysis Performances of Risk Prediction Models Using ADAPT Curves

    PubMed Central

    Lee, Wen-Chung; Wu, Yun-Chun

    2016-01-01

    The area under the receiver operating characteristic curve is a widely used index to characterize the performance of diagnostic tests and prediction models. However, the index does not explicitly acknowledge the utilities of risk predictions. Moreover, for most clinical settings, what counts is whether a prediction model can guide therapeutic decisions in a way that improves patient outcomes, rather than simply updating probabilities. Based on decision theory, the authors propose an alternative index, the “average deviation about the probability threshold” (ADAPT). An ADAPT curve (a plot of ADAPT value against the probability threshold) neatly characterizes the decision-analysis performances of a risk prediction model. Several prediction models can be compared for their ADAPT values at a chosen probability threshold, for a range of plausible threshold values, or for the whole ADAPT curves. This should greatly facilitate the selection of diagnostic tests and prediction models. PMID:26765451

  18. Optimizing color fidelity for display devices using contour phase predictive coding for text, graphics, and video content

    NASA Astrophysics Data System (ADS)

    Lebowsky, Fritz

    2013-02-01

    High-end monitors and TVs based on LCD technology continue to increase their native display resolution to 4k2k and beyond. Consequently, uncompressed pixel data transmission becomes costly when transmitting over cable or wireless communication channels. For motion video content, spatial preprocessing from YCbCr 444 to YCbCr 420 is widely accepted. However, due to spatial low-pass filtering in the horizontal and vertical directions, the quality and readability of small text and graphics content are heavily compromised when color contrast is high in the chrominance channels. On the other hand, straightforward YCbCr 444 compression based on mathematical error coding schemes quite often lacks optimal adaptation to visually significant image content. Therefore, we present the idea of detecting synthetic small text fonts and fine graphics and applying contour phase predictive coding for improved text and graphics rendering at the decoder side. Using a predictive parametric (text) contour model and transmitting correlated phase information in vector format across all three color channels, combined with foreground/background color vectors of a local color map, promises to overcome weaknesses in compression schemes that process luminance and chrominance channels separately. The residual error of the predictive model is minimized more easily because the decoder is an integral part of the encoder. A comparative analysis based on some competitive solutions highlights the effectiveness of our approach, discusses current limitations with regard to high quality color rendering, and identifies remaining visual artifacts.

  19. Overview of numerical codes developed for predicted electrothermal deicing of aircraft blades

    NASA Technical Reports Server (NTRS)

    Keith, Theo G.; De Witt, Kenneth J.; Wright, William B.; Masiulaniec, K. Cyril

    1988-01-01

    An overview of the deicing computer codes that have been developed at the University of Toledo under sponsorship of the NASA-Lewis Research Center is presented. These codes simulate the transient heat conduction and phase change occurring in an electrothermal deicier pad that has an arbitrary accreted ice shape on its surface. The codes are one-dimensional rectangular, two-dimensional rectangular, and two-dimensional with a coordinate transformation to model the true blade geometry. All modifications relating to the thermal physics of the deicing problem that have been incorporated into the codes will be discussed. Recent results of reformulating the codes using different numerical methods to increase program efficiency are described. In particular, this reformulation has enabled a more comprehensive two-dimensional code to run in much less CPU time than the original version. The code predictions are compared with experimental data obtained in the NASA-Lewis Icing Research Tunnel with a UH1H blade fitted with a B. F. Goodrich electrothermal deicer pad. Both continuous and cyclic heater firing cases are considered. The major objective in this comparison is to illustrate which codes give acceptable results in different regions of the airfoil for different heater firing sequences.
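
    The conduction core of such codes can be illustrated with a toy explicit finite-difference scheme; the sketch below is illustrative only, with made-up material constants, and omits the ice accretion and phase change that the Toledo codes actually model:

      import numpy as np

      # Hedged sketch: explicit (FTCS) 1-D transient conduction in a heated pad.
      L, nx = 0.01, 51                   # slab thickness [m], grid points
      alpha = 1e-6                       # thermal diffusivity [m^2/s]
      q, rho_cp = 2e5, 2e6               # heater flux [W/m^2], vol. heat capacity
      dx = L / (nx - 1)
      dt = 0.4 * dx**2 / alpha           # stable: dt <= dx^2 / (2 * alpha)
      T = np.full(nx, -10.0)             # initial temperature [C]

      for _ in range(2000):
          Tn = T.copy()
          T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2])
          T[0] = T[1] + q * dx / (rho_cp * alpha)   # imposed heater flux at inner face
          T[-1] = -10.0                             # outer face held at ambient

      print(f"inner-face temperature after {2000 * dt:.0f} s: {T[0]:.1f} C")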

  20. TFaNS Tone Fan Noise Design/Prediction System. Volume 3; Evaluation of System Codes

    NASA Technical Reports Server (NTRS)

    Topol, David A.

    1999-01-01

    TFANS is the Tone Fan Noise Design/Prediction System developed by Pratt & Whitney under contract to NASA Lewis (presently NASA Glenn). The purpose of this system is to predict tone noise emanating from a fan stage, including the effects of reflection and transmission by the rotor and stator and by the duct inlet and nozzle. These effects have been added to an existing annular duct/isolated stator noise prediction capability. TFANS consists of: the codes that compute the acoustic properties (reflection and transmission coefficients) of the various elements and write them to files; CUP3D, the fan noise coupling code that reads these files, solves the coupling problem, and outputs the desired noise predictions; and AWAKEN, the CFD/measured-wake postprocessor, which reformats CFD wake predictions and/or measured wake data so they can be used by the system. This volume of the report evaluates TFANS against full-scale and 22" ADP rig data using the semi-empirical wake modelling in the system. This report is divided into three volumes: Volume I: System Description, CUP3D Technical Documentation, and Manual for Code Developers; Volume II: User's Manual, TFANS Version 1.4; Volume III: Evaluation of System Codes.

  1. EMdeCODE: a novel algorithm capable of reading words of epigenetic code to predict enhancers and retroviral integration sites and to identify H3R2me1 as a distinctive mark of coding versus non-coding genes.

    PubMed

    Santoni, Federico Andrea

    2013-02-01

    Existence of some extra-genetic (epigenetic) codes has been postulated since the discovery of the primary genetic code. Evident effects of histone post-translational modifications or DNA methylation on the efficiency and regulation of DNA processes support this postulation. EMdeCODE is an original algorithm that approximates the genomic distribution of given DNA features (e.g. promoter, enhancer, viral integration) by identifying relevant ChIPSeq profiles of post-translational histone marks or DNA binding proteins and combining them into a supermark. The EMdeCODE kernel is essentially a two-step procedure: (i) an expectation-maximization process calculates the mixture of epigenetic factors that maximizes the sensitivity (recall) of the association with the feature under study; (ii) the approximated density is then recursively trimmed with respect to a control dataset to increase the precision by reducing the number of false positives. EMdeCODE densities significantly improve the prediction of enhancer loci and retroviral integration sites with respect to previous methods. Importantly, it can also be used to extract distinctive factors between two arbitrary conditions. Indeed, EMdeCODE identifies unexpected epigenetic profiles specific to coding versus non-coding RNA, pointing towards a new role for H3R2me1 in coding regions.
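
    The flavor of the two-step kernel can be sketched as follows; this is a loose illustration on synthetic data, with a crude random search standing in for the expectation-maximization step and all names hypothetical:

      import numpy as np

      rng = np.random.default_rng(0)
      n_bins, n_marks = 10000, 4
      marks = rng.random((n_marks, n_bins))      # synthetic mark densities per bin
      feature = rng.random(n_bins) < 0.02        # bins containing the studied feature
      control = rng.random(n_bins) < 0.02        # control bins

      def recall(weights):
          supermark = weights @ marks            # weighted "supermark" track
          hits = supermark > np.quantile(supermark, 0.9)
          return (hits & feature).sum() / feature.sum()

      # step (i): pick the mixture of marks that maximizes recall
      best_w = max((rng.dirichlet(np.ones(n_marks)) for _ in range(200)), key=recall)

      # step (ii): trim bins that also score high against the control set
      supermark = best_w @ marks
      keep = supermark > np.quantile(supermark[control], 0.95)
      print(f"recall={recall(best_w):.2f}, bins kept={int(keep.sum())}")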

  2. Application of TURBO-AE to Flutter Prediction: Aeroelastic Code Development

    NASA Technical Reports Server (NTRS)

    Hoyniak, Daniel; Simons, Todd A.; Stefko, George (Technical Monitor)

    2001-01-01

    The TURBO-AE program has been evaluated by comparing its results to cascade rig data and to predictions made with various in-house programs. A high-speed fan cascade, a turbine cascade, and a fan geometry that showed flutter in the torsion mode were analyzed. The steady predictions for the high-speed fan cascade showed the TURBO-AE predictions to match the in-house codes. However, the predictions did not match the measured blade surface data. Other researchers have also reported similar disagreement with this data set. Unsteady runs for the fan configuration were not successful using TURBO-AE.

  3. Adaptive coded spreading OFDM signal for dynamic-λ optical access network

    NASA Astrophysics Data System (ADS)

    Liu, Bo; Zhang, Lijia; Xin, Xiangjun

    2015-12-01

    This paper proposes and experimentally demonstrates a novel adaptive coded spreading (ACS) orthogonal frequency division multiplexing (OFDM) signal for a dynamic distributed optical ring-based access network. The wavelength can be assigned to different remote nodes (RNs) according to the traffic demand of the optical network units (ONUs). The ACS can provide a dynamic spreading gain to different signals according to the split ratio or transmission length, which offers a flexible power budget for the network. A 10×13.12 Gb/s OFDM access with ACS is successfully demonstrated over two RNs and 120 km transmission in the experiment. The demonstrated method is a promising candidate for future optical metro-access networks.
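
    The paper does not specify the spreading code family; assuming Walsh-Hadamard spreading, a minimal sketch of adapting the spreading gain per link might look like this:

      import numpy as np

      def walsh(n):
          """Sylvester construction of an n x n Walsh-Hadamard matrix (n = 2^k)."""
          H = np.array([[1.0]])
          while H.shape[0] < n:
              H = np.block([[H, H], [H, -H]])
          return H

      def spread(symbols, sf, idx=1):
          return np.outer(symbols, walsh(sf)[idx]).ravel()   # sf chips per symbol

      def despread(chips, sf, idx=1):
          return chips.reshape(-1, sf) @ walsh(sf)[idx] / sf

      rng = np.random.default_rng(3)
      data = np.array([1.0, -1.0, 1.0, 1.0])     # BPSK-mapped OFDM payload
      for sf in (2, 4, 8):                       # larger spreading gain for weaker links
          rx = spread(data, sf) + 0.5 * rng.standard_normal(4 * sf)
          print(sf, despread(rx, sf).round(2))

    Averaging over sf chips reduces the noise standard deviation by a factor of sqrt(sf), which is the power-budget flexibility the abstract refers to.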

  4. Non-parametric PCM to ADM conversion. [Pulse Code to Adaptive Delta Modulation

    NASA Technical Reports Server (NTRS)

    Locicero, J. L.; Schilling, D. L.

    1977-01-01

    An all-digital technique to convert pulse code modulated (PCM) signals into adaptive delta modulation (ADM) format is presented. The converter developed is shown to be independent of the statistical parameters of the encoded signal and can be constructed with only standard digital hardware. The structure of the converter is simple enough to be fabricated on a large scale integrated circuit where the advantages of reliability and cost can be optimized. A concise evaluation of this PCM to ADM translation technique is presented and several converters are simulated on a digital computer. A family of performance curves is given which displays the signal-to-noise ratio for sinusoidal test signals subjected to the conversion process, as a function of input signal power for several ratios of ADM rate to Nyquist rate.
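
    A minimal sketch of the ADM side of such a converter, assuming a Jayant-style step-size rule (the paper's adaptation logic may differ), is given below; encoder and decoder stay in lockstep because both adapt from the bit stream alone:

      import math

      def adm_step(bit, prev_bit, step, kmul=1.5, lo=0.5, hi=64.0):
          # grow the step on consecutive equal bits (slope overload),
          # shrink it otherwise (granular noise)
          return min(hi, step * kmul) if bit == prev_bit else max(lo, step / kmul)

      def adm_encode(samples, step0=1.0):
          bits, est, step, prev = [], 0.0, step0, 1
          for x in samples:
              bit = 1 if x >= est else 0
              step = adm_step(bit, prev, step)
              est += step if bit else -step
              bits.append(bit)
              prev = bit
          return bits

      def adm_decode(bits, step0=1.0):
          out, est, step, prev = [], 0.0, step0, 1
          for bit in bits:
              step = adm_step(bit, prev, step)
              est += step if bit else -step
              out.append(est)
              prev = bit
          return out

      pcm = [50 * math.sin(2 * math.pi * n / 64) for n in range(256)]
      rec = adm_decode(adm_encode(pcm))
      print("peak tracking error:", round(max(abs(a - b) for a, b in zip(pcm, rec)), 1))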

  5. The NASPE/BPEG generic pacemaker code for antibradyarrhythmia and adaptive-rate pacing and antitachyarrhythmia devices.

    PubMed

    Bernstein, A D; Camm, A J; Fletcher, R D; Gold, R D; Rickards, A F; Smyth, N P; Spielman, S R; Sutton, R

    1987-07-01

    A new generic pacemaker code, derived from and compatible with the Revised ICHD Code, was proposed jointly by the North American Society of Pacing and Electrophysiology (NASPE) Mode Code Committee and the British Pacing and Electrophysiology Group (BPEG), and has been adopted by the NASPE Board of Trustees. It is abbreviated as the NBG (for "NASPE/BPEG Generic") Code, and was developed to permit extension of the generic-code concept to pacemakers whose escape rate is continuously controlled by monitoring some physiologic variable, rather than determined by fixed escape intervals measured from stimuli or sensed depolarizations, and to antitachyarrhythmia devices including cardioverters and defibrillators. The NASPE/BPEG Code incorporates an "R" in the fourth position to signify rate modulation (adaptive-rate pacing), and one of four letters in the fifth position to indicate the presence of antitachyarrhythmia-pacing capability or of cardioversion or defibrillation functions. PMID:2441363
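
    For illustration only (the letters follow the NASPE/BPEG definitions summarized above; the lookup-table layout is ours), the five-position code can be captured in a small data structure:

      # Positions I-V of the NBG code and their defined letters.
      NBG_POSITIONS = [
          ("Chamber(s) paced",                   {"O", "A", "V", "D"}),
          ("Chamber(s) sensed",                  {"O", "A", "V", "D"}),
          ("Response to sensing",                {"O", "T", "I", "D"}),
          ("Programmability / rate modulation",  {"O", "P", "M", "C", "R"}),
          ("Antitachyarrhythmia function(s)",    {"O", "P", "S", "D"}),
      ]

      def describe(code: str):
          for letter, (name, valid) in zip(code.upper(), NBG_POSITIONS):
              note = "" if letter in valid else "  <- undefined letter"
              print(f"{name}: {letter}{note}")

      describe("DDDR")   # dual paced, dual sensed, dual response, rate-modulated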

  6. The Cortical Organization of Speech Processing: Feedback Control and Predictive Coding in the Context of a Dual-Stream Model

    ERIC Educational Resources Information Center

    Hickok, Gregory

    2012-01-01

    Speech recognition is an active process that involves some form of predictive coding. This statement is relatively uncontroversial. What is less clear is the source of the prediction. The dual-stream model of speech processing suggests that there are two possible sources of predictive coding in speech perception: the motor speech system and the…

  7. Cumulative Time Series Representation for Code Blue prediction in the Intensive Care Unit.

    PubMed

    Salas-Boni, Rebeca; Bai, Yong; Hu, Xiao

    2015-01-01

    Patient monitors in hospitals generate a high number of false alarms that compromise patient care and burden clinicians. In our previous work, we attempted to alleviate this problem by finding combinations of monitor alarms and laboratory tests, called SuperAlarms, that were predictive of code blue events. Our current work develops a novel time series representation that accounts for both cumulative effects and temporality, and applies it to code blue prediction in the intensive care unit (ICU). The health status of patients is represented both by a term frequency (TF) approach, often used in natural language processing, and by our novel cumulative approach, which we call the "weighted accumulated occurrence representation", or WAOR. These two representations are fed into an L1-regularized logistic regression classifier and are used to predict code blue events. Performance was assessed online on an independent set. We report the sensitivity of our algorithm at different time windows prior to the code blue event, as well as the work-up to detection ratio and the proportion of false code blue detections divided by the number of false monitor alarms. We obtained better performance with our cumulative representation, retaining a sensitivity close to our previous work while improving the other metrics. PMID:26306261
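
    A toy version of the cumulative representation and classifier, on synthetic alarm sequences and with an assumed exponential recency weighting (the paper's exact WAOR weighting differs), might read:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def waor_vector(events, n_types, decay=0.005):
          """Cumulative, recency-weighted alarm counts (cf. a plain TF count)."""
          v = np.zeros(n_types)
          for t, a in events:              # t <= 0: seconds before prediction point
              v[a] += np.exp(decay * t)    # older occurrences contribute less
          return v

      rng = np.random.default_rng(1)
      n_types, patients, labels = 20, [], []
      for i in range(200):
          label = i % 2                    # 1 = preceded a (synthetic) code blue
          k = int(rng.integers(5, 30))
          types = rng.integers(0, n_types, k)
          if label:
              types[: k // 2] = 0          # event patients get extra type-0 alarms
          times = -rng.uniform(0.0, 600.0, k)
          patients.append(list(zip(times, types)))
          labels.append(label)

      X = np.vstack([waor_vector(ev, n_types) for ev in patients])
      clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, labels)
      print(f"training accuracy: {clf.score(X, labels):.2f}")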

  8. GeneAlign: a coding exon prediction tool based on phylogenetical comparisons.

    PubMed

    Hsieh, Shu Ju; Lin, Chun Yuan; Liu, Ning Han; Chow, Wei Yuan; Tang, Chuan Yi

    2006-07-01

    GeneAlign is a coding exon prediction tool that predicts protein coding genes by measuring the homology between a genomic sequence and annotated sequences of related genomes. Identifying protein coding genes is one of the most important tasks for newly sequenced genomes. With increasing numbers of gene annotations verified by experiments, it is feasible to identify genes in newly sequenced genomes by comparing them with annotated genes of phylogenetically close organisms. GeneAlign applies CORAL, a heuristic linear-time alignment tool, to determine whether regions flanked by the candidate signals (initiation codon-GT, AG-GT and AG-STOP codon) are similar to annotated coding exons. Employing the conservation of gene structures and sequence homologies between protein coding regions increases the prediction accuracy. GeneAlign was tested on the Projector dataset of 491 human-mouse homologous sequence pairs. At the gene level, both the average sensitivity and the average specificity of GeneAlign are 81%, and they are greater than 96% at the exon level. The rates of missing exons and wrong exons are smaller than 1%. GeneAlign is a free tool available at http://genealign.hccvs.hc.edu.tw.

  9. Blind Adaptive Decorrelating RAKE (DRAKE) Downlink Receiver for Space-Time Block Coded Multipath CDMA

    NASA Astrophysics Data System (ADS)

    Jayaweera, Sudharman K.; Poor, H. Vincent

    2003-12-01

    A downlink receiver is proposed for space-time block coded CDMA systems operating in multipath channels. By combining the powerful RAKE receiver concept for a frequency selective channel with space-time decoding, it is shown that the performance of mobile receivers operating in the presence of channel fading can be improved significantly. The proposed receiver consists of a bank of decorrelating filters designed to suppress the multiple access interference embedded in the received signal before the space-time decoding. The new receiver performs the space-time decoding along each resolvable multipath component and then the outputs are diversity combined to obtain the final decision statistic. The proposed receiver relies on a key constraint imposed on the output of each filter in the bank of decorrelating filters in order to maintain the space-time block code structure embedded in the signal. The proposed receiver can easily be adapted blindly, requiring only the desired user's signature sequence, which is also attractive in the context of wireless mobile communications. Simulation results are provided to confirm the effectiveness of the proposed receiver in multipath CDMA systems.

  10. Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding.

    PubMed

    Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A

    2016-01-01

    With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes with low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing for an arbitrary, but no smaller than the threshold value, number of classical participants with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works when there are dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants, and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications. PMID:27515908
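
    The classical Lagrange-interpolation core of such threshold schemes can be sketched as standard (t, n) secret sharing over a prime field; the quantum/OAM eavesdropping-detection layer of the scheme is not modeled here:

      import random

      P = 2**61 - 1                      # prime modulus

      def make_shares(secret, t, n):
          coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
          def f(x):                      # evaluate the degree-(t-1) polynomial
              acc = 0
              for c in reversed(coeffs):
                  acc = (acc * x + c) % P
              return acc
          return [(x, f(x)) for x in range(1, n + 1)]

      def reconstruct(shares):           # Lagrange interpolation at x = 0
          secret = 0
          for i, (xi, yi) in enumerate(shares):
              num = den = 1
              for j, (xj, _) in enumerate(shares):
                  if i != j:
                      num = num * -xj % P
                      den = den * (xi - xj) % P
              secret = (secret + yi * num * pow(den, -1, P)) % P
          return secret

      shares = make_shares(123456789, t=3, n=5)
      assert reconstruct(shares[:3]) == 123456789   # any 3 of the 5 shares suffice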

  12. Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding

    PubMed Central

    Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A.

    2016-01-01

    With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes with low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing for an arbitrary, but no smaller than the threshold value, number of classical participants with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works when there are dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants, and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications. PMID:27515908

  13. Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding

    NASA Astrophysics Data System (ADS)

    Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A.

    2016-08-01

    With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes with low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing for an arbitrary, but no smaller than the threshold value, number of classical participants with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works when there are dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants, and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications.

  14. White Dwarf Mergers on Adaptive Meshes. I. Methodology and Code Verification

    NASA Astrophysics Data System (ADS)

    Katz, Max P.; Zingale, Michael; Calder, Alan C.; Swesty, F. Douglas; Almgren, Ann S.; Zhang, Weiqun

    2016-03-01

    The Type Ia supernova (SN Ia) progenitor problem is one of the most perplexing and exciting problems in astrophysics, requiring detailed numerical modeling to complement observations of these explosions. One possible progenitor that has merited recent theoretical attention is the white dwarf (WD) merger scenario, which has the potential to naturally explain many of the observed characteristics of SNe Ia. To date there have been relatively few self-consistent simulations of merging WD systems using mesh-based hydrodynamics. This is the first paper in a series describing simulations of these systems using a hydrodynamics code with adaptive mesh refinement. In this paper we describe our numerical methodology and discuss our implementation in the compressible hydrodynamics code CASTRO, which solves the Euler equations and the Poisson equation for self-gravity, and couples the gravitational and rotational forces to the hydrodynamics. Standard techniques for coupling the gravitational and rotational forces to the hydrodynamics do not adequately conserve the total energy of the system for our problem, but recent advances in the literature allow progress, and we discuss our implementation here. We present a set of test problems demonstrating the extent to which our software sufficiently models a system where large amounts of mass are advected on the computational domain over long timescales. Future papers in this series will describe our treatment of the initial conditions of these systems and will examine the early phases of the merger to determine its viability for triggering a thermonuclear detonation.

  15. SIMULATING MAGNETOHYDRODYNAMICAL FLOW WITH CONSTRAINED TRANSPORT AND ADAPTIVE MESH REFINEMENT: ALGORITHMS AND TESTS OF THE AstroBEAR CODE

    SciTech Connect

    Cunningham, Andrew J.; Frank, Adam; Varniere, Peggy; Mitran, Sorin; Jones, Thomas W.

    2009-06-15

    A description is given of the algorithms implemented in the AstroBEAR adaptive mesh-refinement code for ideal magnetohydrodynamics. The code provides several high-resolution shock-capturing schemes which are constructed to maintain conserved quantities of the flow in a finite-volume sense. Divergence-free magnetic field topologies are maintained to machine precision by collating the components of the magnetic field on a cell-interface staggered grid and utilizing the constrained transport approach for integrating the induction equations. The maintenance of magnetic field topologies on adaptive grids is achieved using prolongation and restriction operators which preserve the divergence and curl of the magnetic field across collocated grids of different resolutions. The robustness and correctness of the code are demonstrated by comparing the numerical solution of various tests with analytical solutions or previously published numerical solutions obtained by other codes.

  16. Context adaptive binary arithmetic coding-based data hiding in partially encrypted H.264/AVC videos

    NASA Astrophysics Data System (ADS)

    Xu, Dawen; Wang, Rangding

    2015-05-01

    A scheme for hiding data directly in a partially encrypted version of H.264/AVC video is proposed; it includes three parts: selective encryption, data embedding, and data extraction. Selective encryption is performed on context adaptive binary arithmetic coding (CABAC) bin-strings via stream ciphers. By careful selection of CABAC entropy coder syntax elements for selective encryption, the encrypted bitstream is format-compliant and has exactly the same bit rate. Then a data-hider embeds the additional data into the partially encrypted H.264/AVC video using a CABAC bin-string substitution technique, without accessing the plaintext of the video content. Since bin-string substitution is carried out on those residual coefficients with approximately the same magnitude, the quality of the decrypted video is satisfactory. The video file size is strictly preserved even after data embedding. In order to adapt to different application scenarios, data extraction can be done either in the encrypted domain or in the decrypted domain. Experimental results have demonstrated the feasibility and efficiency of the proposed scheme.

  17. A New Real-coded Genetic Algorithm with an Adaptive Mating Selection for UV-landscapes

    NASA Astrophysics Data System (ADS)

    Oshima, Dan; Miyamae, Atsushi; Nagata, Yuichi; Kobayashi, Shigenobu; Ono, Isao; Sakuma, Jun

    The purpose of this paper is to propose a new real-coded genetic algorithm (RCGA) named Networked Genetic Algorithm (NGA) that intends to find multiple optima simultaneously in deceptive globally multimodal landscapes. Most current techniques such as niching for finding multiple optima take into account big valley landscapes or non-deceptive globally multimodal landscapes but not deceptive ones called UV-landscapes. Adaptive Neighboring Search (ANS) is a promising approach for finding multiple optima in UV-landscapes. ANS utilizes a restricted mating scheme with a crossover-like mutation in order to find optima in deceptive globally multimodal landscapes. However, ANS has a fundamental problem that it does not find all the optima simultaneously in many cases. NGA overcomes the problem by an adaptive parent-selection scheme and an improved crossover-like mutation. We show the effectiveness of NGA over ANS in terms of the number of detected optima in a single run on Fletcher and Powell functions as benchmark problems that are known to have multiple optima, ill-scaledness, and UV-landscapes.

  18. TRAP/SEE Code Users Manual for Predicting Trapped Radiation Environments

    NASA Technical Reports Server (NTRS)

    Armstrong, T. W.; Colborn, B. L.

    2000-01-01

    TRAP/SEE is a PC-based computer code with a user-friendly interface which predicts the ionizing radiation exposure of spacecraft having orbits in the Earth's trapped radiation belts. The code incorporates the standard AP8 and AE8 trapped proton and electron models but also allows application of an improved database interpolation method. The code treats low-Earth as well as highly-elliptical Earth orbits, taking into account trajectory perturbations due to gravitational forces from the Moon and Sun, atmospheric drag, and solar radiation pressure. Orbit-average spectra, peak spectra per orbit, and instantaneous spectra at points along the orbit trajectory are calculated. Described in this report are the features, models, model limitations and uncertainties, input and output descriptions, and example calculations and applications for the TRAP/SEE code.

  19. Fast multiple run_before decoding method for efficient implementation of an H.264/advanced video coding context-adaptive variable length coding decoder

    NASA Astrophysics Data System (ADS)

    Ki, Dae Wook; Kim, Jae Ho

    2013-07-01

    We propose a fast new multiple run_before decoding method for context-adaptive variable length coding (CAVLC). The transform coefficients are coded using CAVLC, in which the run_before symbols are generated for a 4×4 block input. To speed up CAVLC decoding, the run_before symbols need to be decoded in parallel. We implemented a new CAVLC table for simultaneous decoding of up to three run_befores. The simulation results show a total speed-up factor of 144% to 205% across various resolutions and quantization steps.

  20. A 3-D Vortex Code for Parachute Flow Predictions: VIPAR Version 1.0

    SciTech Connect

    STRICKLAND, JAMES H.; HOMICZ, GREGORY F.; PORTER, VICKI L.; GOSSLER, ALBERT A.

    2002-07-01

    This report describes a 3-D fluid mechanics code for predicting flow past bluff bodies whose surfaces can be assumed to be made up of shell elements that are simply connected. Version 1.0 of the VIPAR code (Vortex Inflation PARachute code) is described herein. This version contains several first order algorithms that we are in the process of replacing with higher order ones. These enhancements will appear in the next version of VIPAR. The present code contains a motion generator that can be used to produce a large class of rigid body motions. The present code has also been fully coupled to a structural dynamics code in which the geometry undergoes large time dependent deformations. Initial surface geometry is generated from triangular shell elements using a code such as Patran and is written into an ExodusII database file for subsequent input into VIPAR. Surface and wake variable information is output into two ExodusII files that can be post-processed and viewed using software such as EnSight.

  1. User's manual for the ALS base heating prediction code, volume 2

    NASA Technical Reports Server (NTRS)

    Reardon, John E.; Fulton, Michael S.

    1992-01-01

    The Advanced Launch System (ALS) Base Heating Prediction Code is based on a generalization of first principles in the prediction of plume-induced base convective heating and plume radiation. It should be considered an approximate method for evaluating trends as a function of configuration variables, because the processes being modeled are too complex to allow an accurate generalization. The convective methodology is based upon generalizing trends from four nozzle configurations, so an extension to use the code with strap-on boosters, multiple nozzle sizes, and variations in the propellants and chamber pressure histories cannot be precisely treated. The plume radiation is more amenable to precise computer prediction, but simplified assumptions are required to model the various aspects of the candidate configurations. Perhaps the most difficult area to characterize is the variation of radiation with altitude. The theory underlying the radiation predictions is described in more detail. This report is intended to familiarize a user with the interface operation and options, to summarize the limitations and restrictions of the code, and to provide information to assist in installing the code.

  2. The role of prediction and outcomes in adaptive cognitive control.

    PubMed

    Schiffer, Anne-Marike; Waszak, Florian; Yeung, Nick

    2015-01-01

    Humans adaptively perform actions to achieve their goals. This flexible behaviour requires two core abilities: the ability to anticipate the outcomes of candidate actions and the ability to select and implement actions in a goal-directed manner. The ability to predict outcomes has been extensively researched in reinforcement learning paradigms, but this work has often focused on simple actions that are not embedded in hierarchical and sequential structures that are characteristic of goal-directed human behaviour. On the other hand, the ability to select actions in accordance with high-level task goals, particularly in the presence of alternative responses and salient distractors, has been widely researched in cognitive control paradigms. Cognitive control research, however, has often paid less attention to the role of action outcomes. The present review attempts to bridge these accounts by proposing an outcome-guided mechanism for selection of extended actions. Our proposal builds on constructs from the hierarchical reinforcement learning literature, which emphasises the concept of reaching and evaluating informative states, i.e., states that constitute subgoals in complex actions. We develop an account of the neural mechanisms that allow outcome-guided action selection to be achieved in a network that relies on projections from cortical areas to the basal ganglia and back-projections from the basal ganglia to the cortex. These cortico-basal ganglia-thalamo-cortical 'loops' allow convergence - and thus integration - of information from non-adjacent cortical areas (for example between sensory and motor representations). This integration is essential in action sequences, for which achieving an anticipated sensory state signals the successful completion of an action. We further describe how projection pathways within the basal ganglia allow selection between representations, which may pertain to movements, actions, or extended action plans. The model lastly envisages

  3. Transforming the sensing and numerical prediction of high-impact local weather through dynamic adaptation.

    PubMed

    Droegemeier, Kelvin K

    2009-03-13

    Mesoscale weather, such as convective systems, intense local rainfall resulting in flash floods and lake effect snows, frequently is characterized by unpredictable rapid onset and evolution, heterogeneity and spatial and temporal intermittency. Ironically, most of the technologies used to observe the atmosphere, predict its evolution and compute, transmit or store information about it, operate in a static pre-scheduled framework that is fundamentally inconsistent with, and does not accommodate, the dynamic behaviour of mesoscale weather. As a result, today's weather technology is highly constrained and far from optimal when applied to any particular situation. This paper describes a new cyberinfrastructure framework, in which remote and in situ atmospheric sensors, data acquisition and storage systems, assimilation and prediction codes, data mining and visualization engines, and the information technology frameworks within which they operate, can change configuration automatically, in response to evolving weather. Such dynamic adaptation is designed to allow system components to achieve greater overall effectiveness, relative to their static counterparts, for any given situation. The associated service-oriented architecture, known as Linked Environments for Atmospheric Discovery (LEAD), makes advanced meteorological and cyber tools as easy to use as ordering a book on the web. LEAD has been applied in a variety of settings, including experimental forecasting by the US National Weather Service, and allows users to focus much more attention on the problem at hand and less on the nuances of data formats, communication protocols and job execution environments.

  4. Validation of Framework Code Approach to a Life Prediction System for Fiber Reinforced Composites

    NASA Technical Reports Server (NTRS)

    Gravett, Phillip

    1997-01-01

    The grant was conducted by the MMC Life Prediction Cooperative, an industry/government collaborative team; Ohio Aerospace Institute (OAI) acted as the prime contractor on behalf of the Cooperative for this grant effort. See Figure I for the organization and responsibilities of team members. The technical effort was conducted during the period August 7, 1995 to June 30, 1996 in cooperation with Erwin Zaretsky, the LERC Program Monitor. Phil Gravett of Pratt & Whitney was the principal technical investigator. Table I documents all meeting-related coordination memos during this period. The effort under this grant was closely coordinated with an existing USAF-sponsored program focused on putting into practice a life prediction system for turbine engine components made of metal matrix composites (MMC). The overall architecture of the MMC life prediction system was defined in the USAF-sponsored program (prior to this grant). The efforts of this grant focused on implementing and tailoring the life prediction system, the framework code within it, and the damage modules within it to meet the specific requirements of the Cooperative. The tailoring of the life prediction system provides the basis for pervasive and continued use of this capability by the industry/government cooperative. The outputs of this grant are: 1. definition of the framework code to analysis modules interfaces; 2. definition of the interface between the materials database and the finite element model; and 3. definition of the integration of the framework code into an FEM design tool.

  5. Predictive Coding: A Possible Explanation of Filling-In at the Blind Spot.

    PubMed

    Raman, Rajani; Sarkar, Sandip

    2016-01-01

    Filling-in at the blind spot is a perceptual phenomenon in which the visual system fills the informational void, which arises due to the absence of retinal input corresponding to the optic disc, with surrounding visual attributes. It is known that during filling-in, nonlinear neural responses that correlate with the percept are observed in the early visual areas, but knowledge of the underlying neural mechanism for filling-in at the blind spot is far from complete. In this work, we attempt to present a fresh perspective on the computational mechanism of the filling-in process in the framework of hierarchical predictive coding, which provides a functional explanation for a range of neural responses in the cortex. We simulated a three-level hierarchical network and observed its response while stimulating the network with different bar stimuli across the blind spot. We find that the predictive-estimator neurons that represent the blind spot in primary visual cortex exhibit elevated nonlinear responses when the bar stimulated both sides of the blind spot. Using a generative model, we also show that these responses represent filling-in completion. All these results are consistent with the findings of psychophysical and physiological studies. In this study, we also demonstrate that the tolerance in filling-in qualitatively matches the experimental findings related to non-aligned bars. We discuss this phenomenon in the predictive coding paradigm and show that all our results can be explained by taking into account the efficient coding of natural images along with feedback and feed-forward connections that allow priors and predictions to co-evolve to arrive at the best prediction. These results suggest that the filling-in process could be a manifestation of the general computational principle of hierarchical predictive coding of natural images. PMID:26959812

  6. Predictive Coding: A Possible Explanation of Filling-In at the Blind Spot

    PubMed Central

    Raman, Rajani; Sarkar, Sandip

    2016-01-01

    Filling-in at the blind spot is a perceptual phenomenon in which the visual system fills the informational void, which arises due to the absence of retinal input corresponding to the optic disc, with surrounding visual attributes. It is known that during filling-in, nonlinear neural responses that correlate with the percept are observed in the early visual areas, but knowledge of the underlying neural mechanism for filling-in at the blind spot is far from complete. In this work, we attempt to present a fresh perspective on the computational mechanism of the filling-in process in the framework of hierarchical predictive coding, which provides a functional explanation for a range of neural responses in the cortex. We simulated a three-level hierarchical network and observed its response while stimulating the network with different bar stimuli across the blind spot. We find that the predictive-estimator neurons that represent the blind spot in primary visual cortex exhibit elevated nonlinear responses when the bar stimulated both sides of the blind spot. Using a generative model, we also show that these responses represent filling-in completion. All these results are consistent with the findings of psychophysical and physiological studies. In this study, we also demonstrate that the tolerance in filling-in qualitatively matches the experimental findings related to non-aligned bars. We discuss this phenomenon in the predictive coding paradigm and show that all our results can be explained by taking into account the efficient coding of natural images along with feedback and feed-forward connections that allow priors and predictions to co-evolve to arrive at the best prediction. These results suggest that the filling-in process could be a manifestation of the general computational principle of hierarchical predictive coding of natural images. PMID:26959812
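
    The gist of the hierarchical predictive coding invoked in the two records above can be condensed into a two-level, Rao-and-Ballard-style loop; this sketch runs on random synthetic inputs and does not reproduce the paper's three-level network or blind-spot geometry:

      import numpy as np

      rng = np.random.default_rng(0)
      n_in, n_lat = 16, 4
      U = 0.1 * rng.standard_normal((n_in, n_lat))   # top-down generative weights
      X = rng.standard_normal((50, n_in))            # synthetic input patches

      for x in X:
          r = np.zeros(n_lat)
          for _ in range(50):                        # inference: settle the latents
              err = x - U @ r                        # bottom-up prediction error
              r += 0.1 * (U.T @ err - 0.01 * r)
          U += 0.02 * np.outer(x - U @ r, r)         # learning: shrink future errors

      print("final residual:", float(np.mean((X[-1] - U @ r) ** 2)))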

  7. Adaptive evolution: evaluating empirical support for theoretical predictions.

    PubMed

    Olson-Manning, Carrie F; Wagner, Maggie R; Mitchell-Olds, Thomas

    2012-12-01

    Adaptive evolution is shaped by the interaction of population genetics, natural selection and underlying network and biochemical constraints. Variation created by mutation, the raw material for evolutionary change, is translated into phenotypes by flux through metabolic pathways and by the topography and dynamics of molecular networks. Finally, the retention of genetic variation and the efficacy of selection depend on population genetics and demographic history. Emergent high-throughput experimental methods and sequencing technologies allow us to gather more evidence and to move beyond the theory in different systems and populations. Here we review the extent to which recent evidence supports long-established theoretical principles of adaptation.

  8. Specification and Prediction of the Radiation Environment Using Data Assimilative VERB code

    NASA Astrophysics Data System (ADS)

    Shprits, Yuri; Kellerman, Adam

    2016-07-01

    We discuss how data assimilation can be used for the reconstruction of long-term evolution and benchmarking of the physics-based codes, and to improve nowcasting and forecasting of the radiation belts and ring current. We also discuss advanced data assimilation methods such as parameter estimation and smoothing. We present a number of data assimilation applications using the VERB 3D code. The 3D data-assimilative VERB allows us to blend together data from GOES, RBSP A, and RBSP B. (1) The model with data assimilation allows us to propagate data to different pitch angles, energies, and L-shells and blend them together with the physics-based VERB code in an optimal way. We illustrate how to use this capability for the analysis of previous events and for obtaining a global and statistical view of the system. (2) The model predictions strongly depend on the initial conditions that are set up for the model; therefore, the model is only as good as the initial conditions it uses. To produce the best possible initial conditions, data from different sources (GOES, RBSP A and B, and our empirical model predictions based on ACE) are blended together in an optimal way by means of data assimilation, as described above. The resulting initial conditions do not have gaps, which allows us to make more accurate predictions. A real-time prediction framework operating on our website, based on GOES, RBSP A and B, and ACE data and the 3D VERB code, is presented and discussed.
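
    The blending idea can be reduced to a scalar Kalman-filter sketch (VERB itself assimilates into a 3-D diffusion model; all numbers below are synthetic, and observation gaps are simply skipped):

      def kalman_step(x, P, obs, R, model_step, Q):
          x_f, P_f = model_step(x), P + Q     # forecast; model noise inflates uncertainty
          if obs is not None:                 # data gap: keep the pure forecast
              K = P_f / (P_f + R)             # Kalman gain
              x_f, P_f = x_f + K * (obs - x_f), (1 - K) * P_f
          return x_f, P_f

      x, P = 1.0, 1.0
      series = [1.2, None, 0.8, None, None, 1.1]   # observation stream with gaps
      for obs in series:
          x, P = kalman_step(x, P, obs, R=0.1, model_step=lambda s: 0.9 * s, Q=0.05)
          print(f"estimate={x:.3f}  variance={P:.3f}")

    The same gain-weighted update, applied per grid cell of a physics model, is what lets multi-satellite data fill each other's gaps while remaining consistent with the dynamics.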

  9. Robust image transmission using a new joint source channel coding algorithm and dual adaptive OFDM

    NASA Astrophysics Data System (ADS)

    Farshchian, Masoud; Cho, Sungdae; Pearlman, William A.

    2004-01-01

    In this paper we consider the problem of robust image coding and packetization for the purpose of communications over slow-fading frequency-selective channels and channels with a shaped spectrum like those of digital subscriber lines (DSL). Towards this end, a novel and analytically based joint source channel coding (JSCC) algorithm to assign unequal error protection is presented. Under a block budget constraint, the image bitstream is de-multiplexed into two classes with different error responses. The algorithm assigns unequal error protection (UEP) in a way that minimizes the expected mean square error (MSE) at the receiver while minimizing the probability of catastrophic failure. In order to minimize the expected mean square error at the receiver, the algorithm assigns unequal protection to the value bit class (VBC) stream. In order to minimize the probability of catastrophic error, which is a characteristic of progressive image coders, the algorithm assigns more protection to the location bit class (LBC) stream than to the VBC stream. Besides being analytical and numerically solvable, the algorithm is based on a new formula developed to estimate the distortion-rate (D-R) curve for the VBC portion of SPIHT. The major advantage of our technique is that the worst-case instantaneous minimum peak signal-to-noise ratio (PSNR) does not differ greatly from the average MSE performance, while this is not the case for the optimal single-stream (UEP) system. Although the average PSNR of our method and that of the optimal single-stream UEP are about the same, our scheme does not suffer erratic behavior because we have made the probability of catastrophic error arbitrarily small. The coded image is sent via orthogonal frequency division multiplexing (OFDM), which is a well-known and increasingly popular modulation scheme for combating ISI (intersymbol interference) and impulsive noise. Using dual adaptive energy OFDM, we use the minimum energy necessary to send each bit stream at a

  10. Predicting Early Adolescent Gang Involvement from Middle School Adaptation

    ERIC Educational Resources Information Center

    Dishion, Thomas J.; Nelson, Sarah E.; Yasui, Miwa

    2005-01-01

    This study examined the role of adaptation in the first year of middle school (Grade 6, age 11) to affiliation with gangs by the last year of middle school (Grade 8, age 13). The sample consisted of 714 European American (EA) and African American (AA) boys and girls. Specifically, academic grades, reports of antisocial behavior, and peer relations…

  11. Sequence Prediction With Sparse Distributed Hyperdimensional Coding Applied to the Analysis of Mobile Phone Use Patterns.

    PubMed

    Rasanen, Okko J; Saarinen, Jukka P

    2016-09-01

    Modeling and prediction of temporal sequences is central to many signal processing and machine learning applications. Prediction based on sequence history is typically performed using parametric models, such as fixed-order Markov chains (n-grams), approximations of high-order Markov processes, such as mixed-order Markov models or mixtures of lagged bigram models, or with other machine learning techniques. This paper presents a method for sequence prediction based on sparse hyperdimensional coding of the sequence structure and describes how higher order temporal structures can be utilized in sparse coding in a balanced manner. The method is purely incremental, allowing real-time online learning and prediction with limited computational resources. Experiments with prediction of mobile phone use patterns, including the prediction of the next launched application, the next GPS location of the user, and the next artist played with the phone media player, reveal that the proposed method is able to capture the relevant variable-order structure from the sequences. In comparison with the n-grams and the mixed-order Markov models, the sparse hyperdimensional predictor clearly outperforms its peers in terms of unweighted average recall and achieves an equal level of weighted average recall as the mixed-order Markov chain but without the batch training of the mixed-order model.
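
    A common hyperdimensional recipe for this kind of sequence prediction binds permuted symbol vectors into an n-gram context and accumulates context-to-successor bindings; the sketch below follows that generic recipe with dense bipolar vectors, whereas the paper's coding is sparse:

      import numpy as np

      rng = np.random.default_rng(0)
      D, symbols = 4096, "abc"
      V = {s: rng.choice([-1, 1], D) for s in symbols}   # random symbol hypervectors

      def encode(history):               # bind time-shifted (rolled) symbol vectors
          h = np.ones(D, dtype=int)
          for lag, s in enumerate(reversed(history)):
              h *= np.roll(V[s], lag + 1)
          return h

      seq, k = "abcabcabcabcabc", 2
      M = np.zeros(D)
      for t in range(k, len(seq)):       # accumulate (context -> next) bindings
          M += encode(seq[t - k:t]) * V[seq[t]]

      ctx = encode("bc")                 # query: unbind context, match each symbol
      scores = {s: float(V[s] @ (M * ctx)) for s in symbols}
      print(max(scores, key=scores.get))   # expected: 'a'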

  12. Genomic islands predict functional adaptation in marine actinobacteria

    SciTech Connect

    Penn, Kevin; Jenkins, Caroline; Nett, Markus; Udwary, Daniel; Gontang, Erin; McGlinchey, Ryan; Foster, Brian; Lapidus, Alla; Podell, Sheila; Allen, Eric; Moore, Bradley; Jensen, Paul

    2009-04-01

    Linking functional traits to bacterial phylogeny remains a fundamental but elusive goal of microbial ecology [1]. Without this information, it becomes impossible to resolve meaningful units of diversity and the mechanisms by which bacteria interact with each other and adapt to environmental change. Ecological adaptations among bacterial populations have been linked to genomic islands, strain-specific regions of DNA that house functionally adaptive traits [2]. In the case of environmental bacteria, these traits are largely inferred from bioinformatic or gene expression analyses [2], thus leaving few examples in which the functions of island genes have been experimentally characterized. Here we report the complete genome sequences of Salinispora tropica and S. arenicola, the first cultured, obligate marine Actinobacteria [3]. These two species inhabit benthic marine environments and dedicate 8-10% of their genomes to the biosynthesis of secondary metabolites. Despite a close phylogenetic relationship, 25 of 37 secondary metabolic pathways are species-specific and located within 21 genomic islands, thus providing new evidence linking secondary metabolism to ecological adaptation. Species-specific differences are also observed in CRISPR sequences, suggesting that variations in phage immunity provide fitness advantages that contribute to the cosmopolitan distribution of S. arenicola [4]. The two Salinispora genomes have evolved by complex processes that include the duplication and acquisition of secondary metabolite genes, the products of which provide immediate opportunities for molecular diversification and ecological adaptation. Evidence that secondary metabolic pathways are exchanged by Horizontal Gene Transfer (HGT) yet are fixed among globally distributed populations [5] supports a functional role for their products and suggests that pathway acquisition represents a previously unrecognized force driving bacterial diversification

  13. Validation of the ORIGEN-S code for predicting radionuclide inventories in used CANDU fuel

    NASA Astrophysics Data System (ADS)

    Tait, J. C.; Gauld, I.; Kerr, A. H.

    1995-05-01

    The safety assessment being conducted by AECL Research for the concept of deep geological disposal of used CANDU UO2 fuel requires the calculation of radionuclide inventories in the fuel to provide source terms for radionuclide release. This report discusses the validation of selected actinide and fission-product inventories calculated with the ORIGEN-S code coupled to the WIMS-AECL lattice code, against data from analytical measurements of radioisotope inventories in Pickering CANDU reactor fuel. The recent processing of new ENDF/B-VI cross-section data has allowed the ORIGEN-S calculations to be performed with the most up-to-date nuclear data available. The results indicate that the code reliably predicts actinide and the majority of fission-product inventories to within the analytical uncertainty.

  14. Coding of predicted reward omission by dopamine neurons in a conditioned inhibition paradigm.

    PubMed

    Tobler, Philippe N; Dickinson, Anthony; Schultz, Wolfram

    2003-11-12

    Animals learn not only about stimuli that predict reward but also about those that signal the omission of an expected reward. We used a conditioned inhibition paradigm derived from animal learning theory to train a discrimination between a visual stimulus that predicted reward (conditioned excitor) and a second stimulus that predicted the omission of reward (conditioned inhibitor). Performing the discrimination required attention to both the conditioned excitor and the inhibitor; however, dopamine neurons showed very different responses to the two classes of stimuli. Conditioned inhibitors elicited considerable depressions in 48 of 69 neurons (median of 35% below baseline) and minor activations in 29 of 69 neurons (69% above baseline), whereas reward-predicting excitors induced pure activations in all 69 neurons tested (242% above baseline), thereby demonstrating that the neurons discriminated between conditioned stimuli predicting reward versus nonreward. The discriminative responses to stimuli with differential reward-predicting but common attentional functions indicate differential neural coding of reward prediction and attention. The neuronal responses appear to reflect reward prediction errors, thus suggesting an extension of the correspondence between learning theory and activity of single dopamine neurons to the prediction of nonreward.

  15. A high temperature fatigue life prediction computer code based on the Total Strain Version of Strainrange Partitioning (SRP)

    NASA Technical Reports Server (NTRS)

    Mcgaw, Michael A.; Saltsman, James F.

    1991-01-01

    A recently developed high-temperature fatigue life prediction computer code is presented, based on the Total Strain version of Strainrange Partitioning (TS-SRP). Included in this code are procedures for characterizing the creep-fatigue durability behavior of an alloy according to TS-SRP guidelines and predicting cyclic life for complex cycle types for both isothermal and thermomechanical conditions. A reasonably extensive materials properties database is included with the code.

  16. Reconstruction for distributed video coding: a Markov random field approach with context-adaptive smoothness prior

    NASA Astrophysics Data System (ADS)

    Zhang, Yongsheng; Xiong, Hongkai; He, Zhihai; Yu, Songyu

    2010-07-01

    An important issue in Wyner-Ziv video coding is the reconstruction of Wyner-Ziv frames with decoded bit-planes. So far, there are two major approaches: the Maximum a Posteriori (MAP) reconstruction and the Minimum Mean Square Error (MMSE) reconstruction algorithms. However, these approaches do not exploit smoothness constraints in natural images. In this paper, we model a Wyner-Ziv frame by Markov random fields (MRFs), and produce reconstruction results by finding an MAP estimation of the MRF model. In the MRF model, the energy function consists of two terms: a data term, MSE distortion metric in this paper, measuring the statistical correlation between side-information and the source, and a smoothness term enforcing spatial coherence. In order to better describe the spatial constraints of images, we propose a context-adaptive smoothness term by analyzing the correspondence between the output of Slepian-Wolf decoding and successive frames available at decoders. The significance of the smoothness term varies in accordance with the spatial variation within different regions. To some extent, the proposed approach is an extension to the MAP and MMSE approaches by exploiting the intrinsic smoothness characteristic of natural images. Experimental results demonstrate a considerable performance gain compared with the MAP and MMSE approaches.
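
    One way to make the energy formulation concrete is the toy MAP reconstruction below, which combines a quadratic data term with a constant-weight smoothness prior minimized by iterated conditional modes; the paper's side-information model and context-adaptive weighting are richer than this sketch:

      import numpy as np

      rng = np.random.default_rng(0)
      H, W, q, lam = 32, 32, 16, 4.0
      side = rng.integers(0, 256, (H, W)).astype(float)   # side information Y
      bins = (side // q).astype(int)        # stand-in for decoded bin indices
      img = bins * q + q / 2.0              # start from bin centers
      offsets = np.arange(q, dtype=float)   # candidate values inside each bin

      def local_cost(vals, y, nbrs):
          # data term (MSE to side information) + smoothness term over neighbors
          return (vals - y) ** 2 + lam * sum(np.abs(vals - v) for v in nbrs)

      for _ in range(5):                    # ICM sweeps: greedy per-pixel MAP updates
          for r in range(H):
              for c in range(W):
                  nbrs = [img[rr, cc] for rr, cc in
                          ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                          if 0 <= rr < H and 0 <= cc < W]
                  vals = bins[r, c] * q + offsets   # stay inside the decoded bin
                  img[r, c] = vals[np.argmin(local_cost(vals, side[r, c], nbrs))]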

  17. An Euler code prediction of near field to midfield sonic boom pressure signatures

    NASA Technical Reports Server (NTRS)

    Siclari, M. J.; Darden, C. M.

    1990-01-01

    A new approach is presented for computing sonic boom pressure signatures in the near field to midfield that utilizes a fully three-dimensional Euler finite volume code capable of analyzing complex geometries. Both linear and nonlinear sonic boom methodologies exist, but for the most part they rely primarily on equivalent area distributions for the prediction of far field pressure signatures. This is due to the absence of a flexible nonlinear methodology that can predict near field pressure signatures generated by three-dimensional aircraft geometries. It is the intention of the present study to present a nonlinear Euler method that can fill this gap and supply the needed near field signature data for many of the existing sonic boom codes.

  18. A Cerebellar Framework for Predictive Coding and Homeostatic Regulation in Depressive Disorder.

    PubMed

    Schutter, Dennis J L G

    2016-02-01

    Depressive disorder is associated with abnormalities in the processing of reward and punishment signals and disturbances in homeostatic regulation. These abnormalities are proposed to impair error minimization routines for reducing uncertainty. Several lines of research point towards a role of the cerebellum in reward- and punishment-related predictive coding and homeostatic regulatory function in depressive disorder. Available functional and anatomical evidence suggests that in addition to the cortico-limbic networks, the cerebellum is part of the dysfunctional brain circuit in depressive disorder as well. It is proposed that impaired cerebellar function contributes to abnormalities in predictive coding and homeostatic dysregulation in depressive disorder. Further research on the role of the cerebellum in depressive disorder may extend our knowledge of the functional and neural mechanisms of depressive disorder and support the development of novel antidepressant treatment strategies targeting the cerebellum.

  19. Severe accident source term characteristics for selected Peach Bottom sequences predicted by the MELCOR Code

    SciTech Connect

    Carbajo, J.J.

    1993-09-01

    The purpose of this report is to compare in-containment source terms developed for NUREG-1159, which used the Source Term Code Package (STCP), with those generated by MELCOR to identify significant differences. For this comparison, two short-term depressurized station blackout sequences (with a dry cavity and with a flooded cavity) and a Loss-of-Coolant Accident (LOCA) concurrent with complete loss of the Emergency Core Cooling System (ECCS) were analyzed for the Peach Bottom Atomic Power Station (a BWR-4 with a Mark I containment). The results indicate that for the sequences analyzed, the two codes predict similar total in-containment release fractions for each of the element groups. However, the MELCOR/CORBH Package predicts significantly longer times for vessel failure and reduced energy of the released material for the station blackout sequences (when compared to the STCP results). MELCOR also calculated smaller releases into the environment than STCP for the station blackout sequences.

  20. Prediction of material strength and fracture of glass using the SPHINX smooth particle hydrodynamics code

    SciTech Connect

    Mandell, D.A.; Wingate, C.A.

    1994-08-01

    The design of many military devices involves numerical predictions of the material strength and fracture of brittle materials. The materials of interest include ceramics that are used in armor packages; glass that is used in truck and jeep windshields and in helicopters; and rock and concrete that are used in underground bunkers. As part of a program to develop advanced hydrocode design tools, the authors have implemented a brittle fracture model for glass into the SPHINX smooth particle hydrodynamics code. The authors have evaluated this model and the code by predicting data from one-dimensional flyer plate impacts into glass, and data from tungsten rods impacting glass. Since fractured glass properties, which are needed in the model, are not available, the authors did sensitivity studies of these properties, as well as sensitivity studies to determine the number of particles needed in the calculations. The numerical results are in good agreement with the data.

  1. Prediction of material strength and fracture of brittle materials using the SPHINX smooth particle hydrodynamics code

    SciTech Connect

    Mandell, D.A.; Wingate, C.A.; Stellingwerf, R.F.

    1995-12-31

    The design of many devices involves numerical predictions of the material strength and fracture of brittle materials. The materials of interest include ceramics that are used in armor packages; glass that is used in windshields; and rock and concrete that are used in oil wells. As part of a program to develop advanced hydrocode design tools, the authors have implemented a brittle fracture model for glass into the SPHINX smooth particle hydrodynamics code. The authors have evaluated this model and the code by predicting data from tungsten rods impacting glass. Since fractured glass properties, which are needed in the model, are not available, they did sensitivity studies of these properties, as well as sensitivity studies to determine the number of particles needed in the calculations. The numerical results are in good agreement with the data.

  2. Users manual for the NASA Lewis Ice Accretion Prediction Code (LEWICE)

    NASA Technical Reports Server (NTRS)

    Ruff, Gary A.; Berkowitz, Brian M.

    1990-01-01

    LEWICE is an ice accretion prediction code that applies a time-stepping procedure to calculate the shape of an ice accretion. The potential flow field is calculated in LEWICE using the Douglas Hess-Smith 2-D panel code (S24Y). This potential flow field is then used to calculate the trajectories of particles and the impingement points on the body. These calculations are performed to determine the distribution of liquid water impinging on the body, which then serves as input to the icing thermodynamic code. The icing thermodynamic model is based on the work of Messinger, but contains several major modifications and improvements. This model is used to calculate the ice growth rate at each point on the surface of the geometry. By specifying an icing time increment, the ice growth rate can be interpreted as an ice thickness which is added to the body, resulting in the generation of new coordinates. This procedure is repeated, beginning with the potential flow calculations, until the desired icing time is reached. The operation of LEWICE is illustrated through the use of five examples. These examples are representative of the types of applications expected for LEWICE. All input and output is discussed, along with many of the diagnostic messages contained in the code. Several error conditions that may occur in the code for certain icing conditions are identified, and a course of action is recommended. LEWICE has been used to calculate a variety of ice shapes, but should still be considered a research code. The code should be exercised further to identify any shortcomings and inadequacies. Any modifications identified as a result of these cases, or of additional experimental results, should be incorporated into the model. Using it as a test bed for improvements to the ice accretion model is one important application of LEWICE.
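
    The time-stepping procedure described above reduces to a simple outer loop; the toy sketch below shows only that control flow, with every physical model replaced by a stand-in (none of these functions are LEWICE's actual routines).

```python
import numpy as np

def solve_flow(surface):
    # toy stand-in for the panel-method potential flow solution
    return np.gradient(surface)

def droplet_impingement(flow):
    # toy collection efficiency: more water where the local flow is strong
    return np.abs(flow)

def ice_growth_rate(impingement, lwc=0.5):
    # toy Messinger-style model: a fixed fraction of impinging water freezes
    return lwc * impingement

def accrete(surface, icing_time, dt):
    """Toy version of LEWICE's outer loop: re-solve the flow, recompute
    impingement and growth, thicken the surface, and repeat until the
    requested icing time is reached."""
    t = 0.0
    while t < icing_time:
        growth = ice_growth_rate(droplet_impingement(solve_flow(surface)))
        surface = surface + growth * dt      # new surface coordinates
        t += dt
    return surface

print(accrete(np.linspace(0.0, 1.0, 50) ** 2, icing_time=10.0, dt=1.0)[:4].round(3))
```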

  3. SIM_ADJUST -- A computer code that adjusts simulated equivalents for observations or predictions

    USGS Publications Warehouse

    Poeter, Eileen P.; Hill, Mary C.

    2008-01-01

    This report documents the SIM_ADJUST computer code. SIM_ADJUST surmounts an obstacle that is sometimes encountered when using universal model analysis computer codes such as UCODE_2005 (Poeter and others, 2005), PEST (Doherty, 2004), and OSTRICH (Matott, 2005; Fredrick and others, 2007). These codes often read simulated equivalents from a list in a file produced by a process model such as MODFLOW that represents a system of interest. At times values needed by the universal code are missing or assigned default values because the process model could not produce a useful solution. SIM_ADJUST can be used to (1) read a file that lists expected observation or prediction names and possible alternatives for the simulated values; (2) read a file produced by a process model that contains space or tab delimited columns, including a column of simulated values and a column of related observation or prediction names; (3) identify observations or predictions that have been omitted or assigned a default value by the process model; and (4) produce an adjusted file that contains a column of simulated values and a column of associated observation or prediction names. The user may provide alternatives that are constant values or that are alternative simulated values. The user may also provide a sequence of alternatives. For example, the heads from a series of cells may be specified to ensure that a meaningful value is available to compare with an observation located in a cell that may become dry. SIM_ADJUST is constructed using modules from the JUPITER API, and is intended for use on any computer operating system. SIM_ADJUST consists of algorithms programmed in Fortran90, which efficiently perform numerical calculations.
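
    The substitution logic lends itself to a short sketch; the version below is a Python stand-in for the Fortran90 program (the dictionary interface and the -999 default flag are assumptions for illustration).

```python
def adjust(simulated, alternatives, default=-999.0):
    """Replace missing/default simulated values with user-supplied alternatives.

    simulated    -- dict: observation/prediction name -> simulated value
    alternatives -- dict: name -> ordered fallbacks, each a constant or the
                    name of another simulated value (e.g. a neighbouring cell)
    """
    adjusted = {}
    for name, value in simulated.items():
        if value != default:
            adjusted[name] = value
            continue
        for alt in alternatives.get(name, []):
            candidate = simulated.get(alt, default) if isinstance(alt, str) else alt
            if candidate != default:
                adjusted[name] = candidate
                break
        else:
            adjusted[name] = default    # no usable alternative; stays flagged
    return adjusted

# e.g. fall back to the head in a neighbouring cell if this cell went dry
print(adjust({"head_1": -999.0, "head_2": 12.3}, {"head_1": ["head_2", 0.0]}))
```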

  4. Results from baseline tests of the SPRE I and comparison with code model predictions

    SciTech Connect

    Cairelli, J.E.; Geng, S.M.; Skupinski, R.C.

    1994-09-01

    The Space Power Research Engine (SPRE), a free-piston Stirling engine with linear alternator, is being tested at the NASA Lewis Research Center as part of the Civil Space Technology Initiative (CSTI) as a candidate for high capacity space power. This paper presents results of baseline engine tests at design and off-design operating conditions. The test results are compared with code model predictions.

  5. Real-time speech encoding based on Code-Excited Linear Prediction (CELP)

    NASA Technical Reports Server (NTRS)

    Leblanc, Wilfrid P.; Mahmoud, S. A.

    1988-01-01

    This paper reports on ongoing work toward the development of a real-time voice codec for the terrestrial and satellite mobile radio environments. The codec is based on a complexity-reduced version of code-excited linear prediction (CELP). The codebook search complexity was reduced to only 0.5 million floating point operations per second (MFLOPS) while maintaining excellent speech quality. Novel methods to quantize the residual and the long and short term model filters are presented.
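
    For context, the heart of any CELP coder is the analysis-by-synthesis codebook search sketched below; this is the textbook exhaustive search, not the complexity-reduced search of the paper, and the codebook, filter and frame sizes are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def celp_search(target, codebook, h):
    """Analysis-by-synthesis search: filter each candidate excitation through
    the synthesis filter impulse response h, and pick the index and gain that
    minimize the squared reconstruction error."""
    best = (None, 0.0, np.inf)
    for idx, c in enumerate(codebook):
        y = np.convolve(c, h)[: len(target)]      # synthesized candidate
        g = y @ target / (y @ y + 1e-12)          # optimal gain in closed form
        err = np.sum((target - g * y) ** 2)
        if err < best[2]:
            best = (idx, g, err)
    return best

codebook = rng.standard_normal((64, 40))          # 64 stochastic codevectors
h = 0.9 ** np.arange(10)                          # toy synthesis filter
target = rng.standard_normal(40)
print(celp_search(target, codebook, h)[:2])       # (best index, best gain)
```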

  6. Episodic memories predict adaptive value-based decision-making.

    PubMed

    Murty, Vishnu P; FeldmanHall, Oriel; Hunter, Lindsay E; Phelps, Elizabeth A; Davachi, Lila

    2016-05-01

    Prior research illustrates that memory can guide value-based decision-making. For example, previous work has implicated both working memory and procedural memory (i.e., reinforcement learning) in guiding choice. However, other types of memories, such as episodic memory, may also influence decision-making. Here we test the role of episodic memory, specifically item versus associative memory, in supporting value-based choice. Participants completed a task where they first learned the value associated with trial-unique lotteries. After a short delay, they completed a decision-making task where they could choose to reengage with previously encountered lotteries or with new, never-before-seen lotteries. Finally, participants completed a surprise memory test for the lotteries and their associated values. Results indicate that participants chose to reengage more often with lotteries that resulted in high versus low rewards. Critically, participants not only formed detailed, associative memories for the reward values coupled with individual lotteries, but also exhibited adaptive decision-making only when they had intact associative memory. We further found that the relationship between adaptive choice and associative memory generalized to more complex, ecologically valid choice behavior, such as social decision-making. However, individuals more strongly encode experiences of social violations, such as being treated unfairly, suggesting a bias in how individuals form associative memories within social contexts. Together, these findings provide an important integration of the episodic memory and decision-making literatures to better understand key mechanisms supporting adaptive behavior. PMID:26999046

  7. Adaptive and predictive control of a simulated robot arm.

    PubMed

    Tolu, Silvia; Vanegas, Mauricio; Garrido, Jesús A; Luque, Niceto R; Ros, Eduardo

    2013-06-01

    In this work, a basic cerebellar neural layer and a machine learning engine are embedded in a recurrent loop which avoids dealing with the motor error or distal error problem. The presented approach learns the motor control based on available sensor error estimates (position, velocity, and acceleration) without explicitly knowing the motor errors. The paper focuses on how to decompose the input into different components in order to facilitate the learning process using an automatic incremental learning model (the locally weighted projection regression (LWPR) algorithm). LWPR incrementally learns the forward model of the robot arm and provides the cerebellar module with optimal pre-processed signals. We present a recurrent adaptive control architecture in which an adaptive feedback (AF) controller guarantees precise, compliant, and stable control during the manipulation of objects. Therefore, this approach efficiently integrates a bio-inspired module (cerebellar circuitry) with a machine learning component (LWPR). The cerebellar-LWPR synergy makes the robot adaptable to changing conditions. We evaluate how this scheme scales to robot arms with a high number of degrees of freedom (DOFs) using a simulated model of an arm from the new generation of lightweight robots (LWRs).

  8. Predicting multi-wall structural response to hypervelocity impact using the hull code

    NASA Technical Reports Server (NTRS)

    Schonberg, William P.

    1993-01-01

    Previously, multi-wall structures have been analyzed extensively, primarily through experiment, as a means of increasing the meteoroid/space debris impact protection of spacecraft. As structural configurations become more varied, the number of tests required to characterize their response increases dramatically. As an alternative to experimental testing, numerical modeling of high-speed impact phenomena is increasingly used to predict the response of a variety of structural systems under different impact loading conditions. The results of comparing experimental tests to Hull Hydrodynamic Computer Code predictions are reported. Also, the results of a numerical parametric study of multi-wall structural response to hypervelocity cylindrical projectile impact are presented.

  9. Why do you fear the bogeyman? An embodied predictive coding model of perceptual inference.

    PubMed

    Pezzulo, Giovanni

    2014-09-01

    Why are we scared by nonperceptual entities such as the bogeyman, and why does the bogeyman only visit us during the night? Why does hearing a window squeaking in the night suggest to us the unlikely idea of a thief or a killer? And why is this more likely to happen after watching a horror movie? To answer these and similar questions, we need to put mind and body together again and consider the embodied nature of perceptual and cognitive inference. Predictive coding provides a general framework for perceptual inference; I propose to extend it by including interoceptive and bodily information. The resulting embodied predictive coding inference permits one to compare alternative hypotheses (e.g., is the sound I hear generated by a thief or the wind?) using the same inferential scheme as in predictive coding, but using both sensory and interoceptive information as evidence, rather than just considering sensory events. If you hear a window squeaking in the night after watching a horror movie, you may consider plausible a very unlikely hypothesis (e.g., a thief, or even the bogeyman) because it explains both what you sense (e.g., the window squeaking in the night) and how you feel (e.g., your high heart rate). The good news is that the inference that I propose is fully rational and gives minds and bodies equal dignity. The bad news is that it also gives an embodiment to the bogeyman, and a reason to fear it. PMID:24307092

  11. Sensorimotor adaptation error signals are derived from realistic predictions of movement outcomes.

    PubMed

    Wong, Aaron L; Shelhamer, Mark

    2011-03-01

    Neural systems that control movement maintain accuracy by adaptively altering motor commands in response to errors. It is often assumed that the error signal that drives adaptation is equivalent to the sensory error observed at the conclusion of a movement; for saccades, this is typically the visual (retinal) error. However, we instead propose that the adaptation error signal is derived as the difference between the observed visual error and a realistic prediction of movement outcome. Using a modified saccade-adaptation task in human subjects, we precisely controlled the amount of error experienced at the conclusion of a movement by back-stepping the target so that the saccade is hypometric (positive retinal error), but less hypometric than if the target had not moved (smaller retinal error than expected). This separates prediction error from both visual errors and motor corrections. Despite positive visual errors and forward-directed motor corrections, we found an adaptive decrease in saccade amplitudes, a finding that is well-explained by the employment of a prediction-based error signal. Furthermore, adaptive changes in movement size were linearly correlated to the disparity between the predicted and observed movement outcomes, in agreement with the forward-model hypothesis of motor learning, which states that adaptation error signals incorporate predictions of motor outcomes computed using a copy of the motor command (efference copy).

  12. Adaptive and aberrant reward prediction signals in the human brain.

    PubMed

    Roiser, Jonathan P; Stephan, Klaas E; den Ouden, Hanneke E M; Friston, Karl J; Joyce, Eileen M

    2010-04-01

    Theories of the positive symptoms of schizophrenia hypothesize a role for aberrant reinforcement signaling driven by dysregulated dopamine transmission. Recently, we provided evidence of aberrant reward learning in symptomatic, but not asymptomatic patients with schizophrenia, using a novel paradigm, the Salience Attribution Test (SAT). The SAT is a probabilistic reward learning game that employs cues that vary across task-relevant and task-irrelevant dimensions; it provides behavioral indices of adaptive and aberrant reward learning. As an initial step prior to future clinical studies, here we used functional magnetic resonance imaging to examine the neural basis of adaptive and aberrant reward learning during the SAT in healthy volunteers. As expected, cues associated with high relative to low reward probabilities elicited robust hemodynamic responses in a network of structures previously implicated in motivational salience; the midbrain, in the vicinity of the ventral tegmental area, and regions targeted by its dopaminergic projections, i.e. medial dorsal thalamus, ventral striatum and prefrontal cortex (PFC). Responses in the medial dorsal thalamus and polar PFC were strongly correlated with the degree of adaptive reward learning across participants. Finally, and most importantly, differential dorsolateral PFC and middle temporal gyrus (MTG) responses to cues with identical reward probabilities were very strongly correlated with the degree of aberrant reward learning. Participants who showed greater aberrant learning exhibited greater dorsolateral PFC responses, and reduced MTG responses, to cues erroneously inferred to be less strongly associated with reward. The results are discussed in terms of their implications for different theories of associative learning. PMID:19969090

  13. Bandwidth reduction of high-frequency sonar imagery in shallow water using content-adaptive hybrid image coding

    NASA Astrophysics Data System (ADS)

    Shin, Frances B.; Kil, David H.

    1998-09-01

    One of the biggest challenges in distributed underwater mine warfare for area sanitization and safe power projection during regional conflicts is transmission of compressed raw imagery data to a central processing station via a limited bandwidth channel while preserving crucial target information for further detection and automatic target recognition processing. Moreover, operating in an extremely shallow water with fluctuating channels and numerous interfering sources makes it imperative that image compression algorithms effectively deal with background nonstationarity within an image as well as content variation between images. In this paper, we present a novel approach to lossy image compression that combines image- content classification, content-adaptive bit allocation, and hybrid wavelet tree-based coding for over 100:1 bandwidth reduction with little sacrifice in signal-to-noise ratio (SNR). Our algorithm comprises (1) content-adaptive coding that takes advantage of a classify-before-coding strategy to reduce data mismatch, (2) subimage transformation for energy compaction, and (3) a wavelet tree-based coding for efficient encoding of significant wavelet coefficients. Furthermore, instead of using the embedded zerotree coding with scalar quantization (SQ), we investigate the use of a hybrid coding strategy that combines SQ for high-magnitude outlier transform coefficients and classified vector quantization (CVQ) for compactly clustered coefficients. This approach helps us achieve reduced distortion error and robustness while achieving high compression ratio. Our analysis based on the high-frequency sonar real data that exhibit severe content variability and contain both mines and mine-like clutter indicates that we can achieve over 100:1 compression ratio without losing crucial signal attributes. In comparison, benchmarking of the same data set with the best still-picture compression algorithm called the set partitioning in hierarchical trees (SPIHT) reveals
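
    The hybrid quantization strategy (scalar quantization for high-magnitude outlier coefficients, a small learned codebook for the compact cluster) can be sketched as follows; 1-D k-means stands in for classified VQ, and the threshold, step size and codebook size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def kmeans1d(x, k, iters=20):
    """Tiny 1-D k-means used here as a stand-in for a trained VQ codebook."""
    centers = np.quantile(x, np.linspace(0.05, 0.95, k))
    for _ in range(iters):
        lab = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(lab == j):
                centers[j] = x[lab == j].mean()
    return centers

def hybrid_quantize(coeffs, thresh=3.0, step=0.5, k=8):
    out = np.abs(coeffs) >= thresh
    rec = np.empty_like(coeffs)
    rec[out] = np.round(coeffs[out] / step) * step     # SQ for outliers
    rest = coeffs[~out]
    centers = kmeans1d(rest, k)                        # codebook for the rest
    lab = np.argmin(np.abs(rest[:, None] - centers[None, :]), axis=1)
    rec[~out] = centers[lab]
    return rec

coeffs = np.concatenate([rng.normal(0, 0.7, 990), rng.normal(0, 8.0, 10)])
rec = hybrid_quantize(coeffs)
print(round(float(np.mean((coeffs - rec) ** 2)), 4))   # reconstruction MSE
```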

  14. Model-Biased, Data-Driven Adaptive Failure Prediction

    NASA Technical Reports Server (NTRS)

    Leen, Todd K.

    2004-01-01

    This final report, which contains a research summary and a viewgraph presentation, addresses clustering and data simulation techniques for failure prediction. The researchers applied their techniques to both helicopter gearbox anomaly detection and segmentation of Earth Observing System (EOS) satellite imagery.

  15. Interpreting functional effects of coding variants: challenges in proteome-scale prediction, annotation and assessment.

    PubMed

    Shameer, Khader; Tripathi, Lokesh P; Kalari, Krishna R; Dudley, Joel T; Sowdhamini, Ramanathan

    2016-09-01

    Accurate assessment of genetic variation in human DNA sequencing studies remains a nontrivial challenge in clinical genomics and genome informatics. Ascribing functional roles and/or clinical significance to single nucleotide variants identified from a next-generation sequencing study is an important step in genome interpretation. Experimental characterization of all the observed functional variants is as yet impractical; thus, the prediction of functional and/or regulatory impacts of the various mutations using in silico approaches is an important step toward the identification of functionally significant or clinically actionable variants. The relationships between genotypes and the expressed phenotypes are multilayered and biologically complex; such relationships present numerous challenges and at the same time offer various opportunities for the design of in silico variant assessment strategies. Over the past decade, many bioinformatics algorithms have been developed to predict functional consequences of single nucleotide variants in the protein coding regions. In this review, we provide an overview of the bioinformatics resources for the prediction, annotation and visualization of coding single nucleotide variants. We discuss the currently available approaches and major challenges from the perspective of protein sequence, structure, function and interactions that require consideration when interpreting the impact of putatively functional variants. We also discuss the relevance of incorporating integrated workflows for predicting the biomedical impact of the functionally important variations encoded in a genome, exome or transcriptome. Finally, we propose a framework to classify variant assessment approaches and strategies for incorporation of variant assessment within electronic health records.

  16. A high temperature fatigue life prediction computer code based on the total strain version of StrainRange Partitioning (SRP)

    NASA Technical Reports Server (NTRS)

    Mcgaw, Michael A.; Saltsman, James F.

    1993-01-01

    A recently developed high-temperature fatigue life prediction computer code is presented and an example of its usage given. The code discussed is based on the Total Strain version of Strainrange Partitioning (TS-SRP). Included in this code are procedures for characterizing the creep-fatigue durability behavior of an alloy according to TS-SRP guidelines and predicting cyclic life for complex cycle types for both isothermal and thermomechanical conditions. A reasonably extensive materials properties database is included with the code.

  17. Thought Insertion as a Self-Disturbance: An Integration of Predictive Coding and Phenomenological Approaches

    PubMed Central

    Sterzer, Philipp; Mishara, Aaron L.; Voss, Martin; Heinz, Andreas

    2016-01-01

    Current theories in the framework of hierarchical predictive coding propose that positive symptoms of schizophrenia, such as delusions and hallucinations, arise from an alteration in Bayesian inference, the term inference referring to a process by which learned predictions are used to infer probable causes of sensory data. However, for one particularly striking and frequent symptom of schizophrenia, thought insertion, no plausible account has been proposed in terms of the predictive-coding framework. Here we propose that thought insertion is due to an altered experience of thoughts as coming from “nowhere”, as is already indicated by the early 20th century phenomenological accounts by the early Heidelberg School of psychiatry. These accounts identified thought insertion as one of the self-disturbances (from German: “Ichstörungen”) of schizophrenia and used mescaline as a model-psychosis in healthy individuals to explore the possible mechanisms. The early Heidelberg School (Gruhle, Mayer-Gross, Beringer) first named and defined the self-disturbances, and proposed that thought insertion involves a disruption of the inner connectedness of thoughts and experiences, and a “becoming sensory” of those thoughts experienced as inserted. This account offers a novel way to integrate the phenomenology of thought insertion with the predictive coding framework. We argue that the altered experience of thoughts may be caused by a reduced precision of context-dependent predictions, relative to sensory precision. According to the principles of Bayesian inference, this reduced precision leads to increased prediction-error signals evoked by the neural activity that encodes thoughts. Thus, in analogy with the prediction-error related aberrant salience of external events that has been proposed previously, “internal” events such as thoughts (including volitions, emotions and memories) can also be associated with increased prediction-error signaling and are thus imbued

  18. In silico prediction of long intergenic non-coding RNAs in sheep.

    PubMed

    Bakhtiarizadeh, Mohammad Reza; Hosseinpour, Batool; Arefnezhad, Babak; Shamabadi, Narges; Salami, Seyed Alireza

    2016-04-01

    Long non-coding RNAs (lncRNAs) are transcribed RNA molecules >200 nucleotides in length that do not encode proteins and serve as key regulators of diverse biological processes. Recently, thousands of long intergenic non-coding RNAs (lincRNAs), a type of lncRNA, have been identified in mammals using massively parallel sequencing technologies. The availability of the genome sequence of sheep (Ovis aries) has enabled genomic prediction of non-coding RNAs. This is the first study to identify lincRNAs using RNA-seq data from eight different tissues of sheep, including brain, heart, kidney, liver, lung, ovary, skin, and white adipose. A computational pipeline was employed to characterize 325 putative lincRNAs with high confidence from these eight tissues using criteria such as GC content, exon number, gene length, co-expression analysis, stability, and tissue-specificity scores. Sixty-four putative lincRNAs displayed tissue-specific expression. The highest numbers of tissue-specific lincRNAs were found in skin and brain. All novel lincRNAs that aligned to human and mouse lincRNAs had conserved synteny. The closest protein-coding genes were enriched in 11 significant GO terms such as limb development, appendage development, striated muscle tissue development, and multicellular organismal development. The findings reported here have important implications for the study of the sheep genome.

  19. Rotor Wake/Stator Interaction Noise Prediction Code Technical Documentation and User's Manual

    NASA Technical Reports Server (NTRS)

    Topol, David A.; Mathews, Douglas C.

    2010-01-01

    This report documents the improvements and enhancements made by Pratt & Whitney to two NASA programs which together calculate noise from a rotor wake/stator interaction. The code is a combination of subroutines from two NASA programs with many new features added by Pratt & Whitney. To perform a calculation, V072 first uses a semi-empirical wake prediction to calculate the rotor wake characteristics at the stator leading edge. Results from the wake model are then automatically input into a rotor wake/stator interaction analytical noise prediction routine which calculates inlet and aft sound power levels for the blade-passage-frequency tones and their harmonics, along with the complex radial mode amplitudes. The code allows for a noise calculation to be performed for a compressor rotor wake/stator interaction, a fan wake/FEGV interaction, or a fan wake/core stator interaction. This report is split into two parts: the first part discusses the technical documentation of the program as improved by Pratt & Whitney; the second part is a user's manual which describes how input files are created and how the code is run.

  20. Fast intra-prediction algorithms for high efficiency video coding standard

    NASA Astrophysics Data System (ADS)

    Kibeya, Hassan; Belghith, Fatma; Ben Ayed, Mohammed Ali; Masmoudi, Nouri

    2016-01-01

    High efficiency video coding (HEVC) is the latest video compression standard and provides a significant improvement in compression ratio compared to all preceding video coding standards. The intra-prediction procedure plays an important role in the HEVC encoder; it provides up to 35 intra-modes with larger coding units, at the cost of a high computational complexity that needs to be alleviated. Toward this end, the paper proposes two fast intra-mode decision algorithms that exploit the features of video sequences. First, early detection of zero transformed-and-quantized coefficients is applied to generate threshold values employed for early termination of the intra-decision process, which accelerates the encoding procedure. The second fast intra-mode decision algorithm relies on a refinement technique: based on statistical analyses of frequently chosen modes, only a small subset of the candidate modes is evaluated in the intra-prediction process, which reduces the complexity of the intra-encoding procedure. The performance of the proposed algorithms is verified through comparative analysis of encoding time, visual image quality, and compression ratio. Compared to HM 10.0, the encoding time reduction can reach 69% with only a slight degradation of image quality and compression ratio.
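
    A two-stage fast mode decision of this general kind can be sketched as follows; SAD stands in for the encoder's rate-distortion cost, a simple cost threshold stands in for the zero-coefficient test, and the mode subset and threshold are assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(1)
block = rng.integers(0, 255, (8, 8)).astype(float)
# stand-in predicted blocks for each of the 35 HEVC intra modes
predictions = {m: np.full((8, 8), float(m * 7 % 255)) for m in range(35)}

def sad(a, b):
    return float(np.abs(a - b).sum())

def fast_intra_mode(block, predictions, rough_modes=(0, 1, 10, 26), t=2000.0):
    """Evaluate a few statistically frequent modes first, stop early if the
    best cost is already small, otherwise refine around the rough winner."""
    costs = {m: sad(block, predictions[m]) for m in rough_modes}
    best = min(costs, key=costs.get)
    if costs[best] < t:
        return best                        # early termination
    for m in (best - 1, best + 1):         # refinement step
        if 0 <= m <= 34:
            costs[m] = sad(block, predictions[m])
    return min(costs, key=costs.get)

print(fast_intra_mode(block, predictions))
```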

  1. Inter-bit prediction based on maximum likelihood estimate for distributed video coding

    NASA Astrophysics Data System (ADS)

    Klepko, Robert; Wang, Demin; Huchet, Grégory

    2010-01-01

    Distributed Video Coding (DVC) is an emerging video coding paradigm for systems that require low-complexity encoders supported by high-complexity decoders. A typical real-world application for a DVC system is mobile phones with video capture hardware that have a limited encoding capability supported by base stations with a high decoding capability. Generally speaking, a DVC system operates by dividing a source image sequence into two streams, key frames and Wyner-Ziv (W) frames: the key frames are used to represent the source and to produce an approximation to the W frames called S frames (where S stands for side information), while the W frames are used to correct the bit errors in the S frames. This paper presents an effective algorithm to reduce the bit errors in the side information of a DVC system. The algorithm is based on maximum likelihood estimation to help predict future bits to be decoded. The reduction in bit errors in turn reduces the number of parity bits needed for error correction. Thus, a higher coding efficiency is achieved since fewer parity bits need to be transmitted from the encoder to the decoder. The algorithm is called inter-bit prediction because it predicts the bit-plane to be decoded from previously decoded bit-planes, one bit-plane at a time, starting from the most significant bit-plane. Results from experiments using real-world image sequences show that the inter-bit prediction algorithm does indeed reduce the bit rate by up to 13% for our test sequences. This bit rate reduction corresponds to a PSNR gain of about 1.6 dB for the W frames.
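
    The maximum-likelihood idea can be illustrated on synthetic bit-planes: form a context from the bits already decoded at each pixel, estimate the conditional probability of the next bit per context, and take the ML decision. The context construction and training data below are illustrative, not the paper's exact model.

```python
import numpy as np

def contexts(planes):
    """Pack the already-decoded bits at each pixel into an integer context."""
    c = np.zeros(planes[0].shape, dtype=int)
    for p in planes:
        c = (c << 1) | p
    return c

def predict_bitplane(decoded, train_planes, train_next):
    """ML prediction of the next bit-plane from previously decoded ones."""
    ctx = contexts(train_planes).ravel()
    n_ctx = 1 << len(train_planes)
    ones = np.bincount(ctx, weights=train_next.ravel(), minlength=n_ctx)
    total = np.bincount(ctx, minlength=n_ctx).clip(min=1)
    ml_bit = (ones / total >= 0.5).astype(int)    # ML decision per context
    return ml_bit[contexts(decoded)]

rng = np.random.default_rng(2)
planes = [rng.integers(0, 2, (16, 16)) for _ in range(3)]
pred = predict_bitplane(planes[:2], planes[:2], planes[2])
print((pred == planes[2]).mean())   # ~0.5 on random bits; higher on real frames
```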

  2. Adaptive quarter-pel motion estimation and motion vector coding algorithm for the H.264/AVC standard

    NASA Astrophysics Data System (ADS)

    Jung, Seung-Won; Park, Chun-Su; Ha, Le Thanh; Ko, Sung-Jea

    2009-11-01

    We present an adaptive quarter-pel (Qpel) motion estimation (ME) method for H.264/AVC. Instead of applying Qpel ME to all macroblocks (MBs), the proposed method selectively performs Qpel ME at the MB level. In order to reduce the bit rate, we also propose a motion vector (MV) encoding technique that adaptively selects a different variable length coding (VLC) table according to the accuracy of the MV. Experimental results show that the proposed method achieves about a 3% average bit rate reduction.

  3. Do We Need Better Climate Predictions to Adapt to a Changing Climate? (Invited)

    NASA Astrophysics Data System (ADS)

    Dessai, S.; Hulme, M.; Lempert, R.; Pielke, R., Jr.

    2009-12-01

    Based on a series of international scientific assessments, climate change has been presented to society as a major problem that needs urgently to be tackled. The science that underpins these assessments has been predominantly from the realm of the natural sciences, and central to this framing have been ‘projections’ of future climate change (and its impacts on environment and society) under various greenhouse gas emissions scenarios and using a variety of climate model predictions with embedded assumptions. Central to much of the discussion surrounding adaptation to climate change is the claim - explicit or implicit - that decision makers need accurate and increasingly precise assessments of future impacts of climate change in order to adapt successfully. If true, this claim places a high premium on accurate and precise climate predictions at a range of geographical and temporal scales; such predictions therefore become indispensable, and indeed a prerequisite for, effective adaptation decision-making. But is effective adaptation tied to the ability of the scientific enterprise to predict future climate with accuracy and precision? If so, this may impose a serious and intractable limit on adaptation. This paper proceeds in three sections. It first gathers evidence of claims that climate prediction is necessary for adaptation decision-making. This evidence is drawn from peer-reviewed literature and from published science funding strategies and government policy in a number of different countries. The second part discusses the challenges of climate prediction and why science will consistently be unable to provide accurate and precise predictions of future climate relevant for adaptation (usually at the local/regional level). Section three discusses whether these limits to future foresight represent a limit to adaptation, arguing that effective adaptation need not be limited by a general inability to predict future climate. Given the deep uncertainties involved in

  4. Life Prediction for a CMC Component Using the NASALIFE Computer Code

    NASA Technical Reports Server (NTRS)

    Gyekenyesi, John Z.; Murthy, Pappu L. N.; Mital, Subodh K.

    2005-01-01

    The computer code, NASALIFE, was used to provide estimates for life of an SiC/SiC stator vane under varying thermomechanical loading conditions. The primary intention of this effort is to show how the computer code NASALIFE can be used to provide reasonable estimates of life for practical propulsion system components made of advanced ceramic matrix composites (CMC). Simple loading conditions provided readily observable and acceptable life predictions. Varying the loading conditions such that low cycle fatigue and creep were affected independently provided expected trends in the results for life due to varying loads and life due to creep. Analysis was based on idealized empirical data for the 9/99 Melt Infiltrated SiC fiber reinforced SiC.

  5. Genetic algorithm based adaptive neural network ensemble and its application in predicting carbon flux

    USGS Publications Warehouse

    Xue, Y.; Liu, S.; Hu, Y.; Yang, J.; Chen, Q.

    2007-01-01

    To improve prediction accuracy, a Genetic Algorithm based Adaptive Neural Network Ensemble (GA-ANNE) is presented. Intersections are allowed between different training sets based on the fuzzy clustering analysis, which ensures the diversity as well as the accuracy of individual Neural Networks (NNs). Moreover, to improve the accuracy of the adaptive weights of individual NNs, GA is used to optimize the cluster centers. Empirical results in predicting the carbon flux of Duke Forest reveal that GA-ANNE can predict the carbon flux more accurately than a Radial Basis Function Neural Network (RBFNN), a Bagging NN ensemble, and ANNE. © 2007 IEEE.
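
    A toy genetic algorithm illustrating the GA-plus-ensemble idea; note that the paper evolves fuzzy cluster centers, whereas this sketch evolves the ensemble combination weights to keep the example short, and all sizes and rates are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def ga_weights(preds, target, pop=40, gens=60, sigma=0.1):
    """Evolve nonnegative combination weights for an ensemble of predictors."""
    dim = preds.shape[0]
    population = rng.random((pop, dim))
    for _ in range(gens):
        w = population / population.sum(axis=1, keepdims=True)
        fitness = -((w @ preds - target) ** 2).sum(axis=1)    # negative SSE
        elite = population[np.argsort(fitness)[-pop // 2:]]   # selection
        children = elite[rng.integers(0, len(elite), pop - len(elite))]
        children = np.abs(children + rng.normal(0, sigma, children.shape))  # mutation
        population = np.vstack([elite, children])
    w = population / population.sum(axis=1, keepdims=True)
    best = np.argmin(((w @ preds - target) ** 2).sum(axis=1))
    return w[best]

# three toy "networks" predicting a noisy signal, with increasing noise
x = np.linspace(0, 1, 200)
target = np.sin(2 * np.pi * x)
preds = np.vstack([target + rng.normal(0, s, x.size) for s in (0.1, 0.3, 0.6)])
print(ga_weights(preds, target).round(2))   # should favor the low-noise member
```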

  6. Adaptive Pole Placement Controllers For Robotic Manipulators With Predictive Action

    NASA Astrophysics Data System (ADS)

    Kaynak, Okyay; Hoyer, Helmut

    1987-10-01

    This paper proposes two pole assignment control schemes for robotic manipulators, based on anticipatory action. In one, the control objective is for the velocity tracking error to decay with prespecified dynamics. In the other, a generalised cost function is minimized and the weighting factors in the cost function are determined so as to achieve desired closed-loop pole locations for the tracking error. The prediction scheme used ensures a high degree of robustness against system-model mismatch, as demonstrated by the simulation results presented.

  7. Transitional states in marine fisheries: adapting to predicted global change.

    PubMed

    MacNeil, M Aaron; Graham, Nicholas A J; Cinner, Joshua E; Dulvy, Nicholas K; Loring, Philip A; Jennings, Simon; Polunin, Nicholas V C; Fisk, Aaron T; McClanahan, Tim R

    2010-11-27

    Global climate change has the potential to substantially alter the production and community structure of marine fisheries and modify the ongoing impacts of fishing. Fish community composition is already changing in some tropical, temperate and polar ecosystems, where local combinations of warming trends and higher environmental variation anticipate the changes likely to occur more widely over coming decades. Using case studies from the Western Indian Ocean, the North Sea and the Bering Sea, we contextualize the direct and indirect effects of climate change on production and biodiversity and, in turn, on the social and economic aspects of marine fisheries. Climate warming is expected to lead to (i) yield and species losses in tropical reef fisheries, driven primarily by habitat loss; (ii) community turnover in temperate fisheries, owing to the arrival and increasing dominance of warm-water species as well as the reduced dominance and departure of cold-water species; and (iii) increased diversity and yield in Arctic fisheries, arising from invasions of southern species and increased primary production resulting from ice-free summer conditions. How societies deal with such changes will depend largely on their capacity to adapt--to plan and implement effective responses to change--a process heavily influenced by social, economic, political and cultural conditions.

  8. Aircrew dosimetry using the Predictive Code for Aircrew Radiation Exposure (PCAIRE).

    PubMed

    Lewis, B J; Bennett, L G I; Green, A R; Butler, A; Desormeaux, M; Kitching, F; McCall, M J; Ellaschuk, B; Pierre, M

    2005-01-01

    During 2003, a portable instrument suite was used to conduct cosmic radiation measurements on 49 jet-altitude flights, which brings the total number of in-flight measurements by this research group to over 160 flights since 1999. From previous measurements, correlations have been developed to allow for the interpolation of the dose-equivalent rate for any global position, altitude and date. The result was the Predictive Code for Aircrew Radiation Exposure (PCAIRE), which has since been improved. This version of PCAIRE has been validated against the integral route dose measurements made at commercial aircraft altitudes during the 49 flights. On most flights, the code gave predictions that agreed with the measured data (within ±25%), providing confidence in the use of PCAIRE to predict aircrew exposure to galactic cosmic radiation. An empirical correlation, based on ground-level neutron monitoring data, has also been developed for the estimation of aircrew exposure from solar energetic particle (SEP) events. This model has been used to determine the significance of SEP exposure on a theoretical jet-altitude flight during GLE 42.

  9. Predicted effects of sensorineural hearing loss on across-fiber envelope coding in the auditory nerve

    PubMed Central

    Swaminathan, Jayaganesh; Heinz, Michael G.

    2011-01-01

    Cross-channel envelope correlations are hypothesized to influence speech intelligibility, particularly in adverse conditions. Acoustic analyses suggest speech envelope correlations differ for syllabic and phonemic ranges of modulation frequency. The influence of cochlear filtering was examined here by predicting cross-channel envelope correlations in different speech modulation ranges for normal and impaired auditory-nerve (AN) responses. Neural cross-correlation coefficients quantified across-fiber envelope coding in syllabic (0–5 Hz), phonemic (5–64 Hz), and periodicity (64–300 Hz) modulation ranges. Spike trains were generated from a physiologically based AN model. Correlations were also computed using the model with selective hair-cell damage. Neural predictions revealed that envelope cross-correlation decreased with increased characteristic-frequency separation for all modulation ranges (with greater syllabic-envelope correlation than phonemic or periodicity). Syllabic envelope was highly correlated across many spectral channels, whereas phonemic and periodicity envelopes were correlated mainly between adjacent channels. Outer-hair-cell impairment increased the degree of cross-channel correlation for phonemic and periodicity ranges for speech in quiet and in noise, thereby reducing the number of independent neural information channels for envelope coding. In contrast, outer-hair-cell impairment was predicted to decrease cross-channel correlation for syllabic envelopes in noise, which may partially account for the reduced ability of hearing-impaired listeners to segregate speech in complex backgrounds. PMID:21682421

  10. Adaptive mesh simulations of astrophysical detonations using the ASCI flash code

    NASA Astrophysics Data System (ADS)

    Fryxell, B.; Calder, A. C.; Dursi, L. J.; Lamb, D. Q.; MacNeice, P.; Olson, K.; Ricker, P.; Rosner, R.; Timmes, F. X.; Truran, J. W.; Tufo, H. M.; Zingale, M.

    2001-08-01

    The Flash code was developed at the University of Chicago as part of the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). The code was designed specifically to simulate thermonuclear flashes in compact stars (white dwarfs and neutron stars). This paper will give a brief introduction to the astrophysics problems we wish to address, followed by a description of the current version of the Flash code. Finally, we discuss two simulations of astrophysical detonations that we have carried out with the code. The first is of a helium detonation in an X-ray burst. The other simulation models a carbon detonation in a Type Ia supernova explosion.

  11. De novo computational prediction of non-coding RNA genes in prokaryotic genomes

    PubMed Central

    Tran, Thao T.; Zhou, Fengfeng; Marshburn, Sarah; Stead, Mark; Kushner, Sidney R.; Xu, Ying

    2009-01-01

    Motivation: The computational identification of non-coding RNA (ncRNA) genes represents one of the most important and challenging problems in computational biology. Existing methods for ncRNA gene prediction rely mostly on homology information, thus limiting their applications to ncRNA genes with known homologues. Results: We present a novel de novo prediction algorithm for ncRNA genes using features derived from the sequences and structures of known ncRNA genes in comparison to decoys. Using these features, we have trained a neural network-based classifier and have applied it to Escherichia coli and Sulfolobus solfataricus for genome-wide prediction of ncRNAs. Our method has an average prediction sensitivity and specificity of 68% and 70%, respectively, for identifying windows with potential for ncRNA genes in E.coli. By combining windows of different sizes and using positional filtering strategies, we predicted 601 candidate ncRNAs and recovered 41% of known ncRNAs in E.coli. We experimentally investigated six novel candidates using Northern blot analysis and found expression of three candidates: one represents a potential new ncRNA, one is associated with stable mRNA decay intermediates and one is a case of either a potential riboswitch or transcription attenuator involved in the regulation of cell division. In general, our approach enables the identification of both cis- and trans-acting ncRNAs in partially or completely sequenced microbial genomes without requiring homology or structural conservation. Availability: The source code and results are available at http://csbl.bmb.uga.edu/publications/materials/tran/. Contact: xyn@bmb.uga.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19744996

  12. Non-coding RNAs Enabling Prognostic Stratification and Prediction of Therapeutic Response in Colorectal Cancer Patients.

    PubMed

    Perakis, Samantha O; Thomas, Joseph E; Pichler, Martin

    2016-01-01

    Colorectal cancer (CRC) is a heterogeneous disease and current treatment options for patients are associated with a wide range of outcomes and tumor responses. Although the traditional TNM staging system continues to serve as a crucial tool for estimating CRC prognosis and for stratification of treatment choices and long-term survival, it remains limited as it relies on macroscopic features and cases of surgical resection, fails to incorporate new molecular data and information, and cannot perfectly predict the variety of outcomes and responses to treatment associated with tumors of the same stage. Although additional histopathologic features have recently been applied in order to better classify individual tumors, the future might incorporate the use of novel molecular and genetic markers in order to maximize therapeutic outcome and to provide accurate prognosis. Such novel biomarkers, in addition to individual patient tumor phenotyping and other validated genetic markers, could facilitate the prediction of risk of progression in CRC patients and help assess overall survival. Recent findings point to the emerging role of non-protein-coding regions of the genome in their contribution to the progression of cancer and tumor formation. Two major subclasses of non-coding RNAs (ncRNAs), microRNAs and long non-coding RNAs, are often dysregulated in CRC and have demonstrated their diagnostic and prognostic potential as biomarkers. These ncRNAs are promising molecular classifiers and could assist in the stratification of patients into appropriate risk groups to guide therapeutic decisions and their expression patterns could help determine prognosis and predict therapeutic options in CRC. PMID:27573901

  14. A Peak Power Reduction Method with Adaptive Inversion of Clustered Parity-Carriers in BCH-Coded OFDM Systems

    NASA Astrophysics Data System (ADS)

    Muta, Osamu; Akaiwa, Yoshihiko

    In this paper, we propose a simple peak power reduction (PPR) method based on adaptive inversion of the parity-check block of the codeword in a BCH-coded OFDM system. In the proposed method, the entire parity-check block of the codeword is adaptively inverted by multiplying it by weighting factors (WFs) so as to minimize the PAPR of the OFDM signal, symbol by symbol. At the receiver, these WFs are estimated based on the properties of BCH decoding. When a primitive BCH code with single error correction, such as the (31,26) code, is used, the proposed method employs a significant-bit protection method, which assigns a significant bit to the best subcarrier selected among all possible subcarriers, to estimate the WFs. Computer simulations show that, when (31,26), (31,21) and (32,21) BCH codes are employed, the PAPR of the OFDM signal at a CCDF (Complementary Cumulative Distribution Function) of 10^-4 is reduced by about 1.9, 2.5 and 2.5 dB, respectively, by applying the PPR method, while achieving BER performance comparable to the case with perfect WF estimation in an exponentially decaying 12-path Rayleigh fading condition.
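
    The transmitter-side selection step can be sketched directly: build the OFDM symbol both ways and keep the lower-PAPR version. BPSK mapping and the 26/5 data/parity split are illustrative; the paper's more general weighting factors and their blind estimation at the receiver are not reproduced here.

```python
import numpy as np

def papr_db(sym):
    x = np.fft.ifft(sym)                     # OFDM time-domain signal
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def select_parity_inversion(data_bits, parity_bits):
    """Map the codeword to subcarriers twice, once with the parity block
    inverted, and keep whichever OFDM symbol has the lower PAPR."""
    def to_sym(bits):
        return 1.0 - 2.0 * np.asarray(bits, dtype=float)      # BPSK
    plain = np.concatenate([to_sym(data_bits), to_sym(parity_bits)])
    flipped = np.concatenate([to_sym(data_bits), -to_sym(parity_bits)])
    return min((plain, flipped), key=papr_db)

rng = np.random.default_rng(4)
data, parity = rng.integers(0, 2, 26), rng.integers(0, 2, 5)  # (31,26)-style split
best = select_parity_inversion(data, parity)
print(round(papr_db(best), 2))
```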

  15. Adaptive DFT-Based Fringe Tracking and Prediction at IOTA

    NASA Technical Reports Server (NTRS)

    Wilson, Edward; Pedretti, Ettore; Bregman, Jesse; Mah, Robert W.; Traub, Wesley A.

    2004-01-01

    An automatic fringe tracking system has been developed and implemented at the Infrared Optical Telescope Array (IOTA). In testing during May 2002, the system successfully minimized the optical path differences (OPDs) for all three baselines at IOTA. Based on sliding window discrete Fourier transform (DFT) calculations that were optimized for computational efficiency and robustness to atmospheric disturbances, the algorithm has also been tested extensively on off-line data. Implemented in ANSI C on the 266 MHz PowerPC processor running the VxWorks real-time operating system, the algorithm runs in approximately 2.0 milliseconds per scan (including all three interferograms), using the science camera and piezo scanners to measure and correct the OPDs. Preliminary analysis on an extension of this algorithm indicates a potential for predictive tracking, although at present, real-time implementation of this extension would require significantly more computational capacity.
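
    The computational trick behind sliding-window DFT tracking is the recursive per-sample update, which costs O(1) per bin instead of recomputing an O(N) transform. The sketch below illustrates that recursion on a synthetic tone; it is not the IOTA implementation.

```python
import numpy as np

def sliding_dft(x, n_window, k):
    """Recursive sliding-window DFT of bin k: each step removes the oldest
    sample, adds the newest, and rotates by one bin phasor."""
    w = np.exp(2j * np.pi * k / n_window)
    X = np.fft.fft(x[:n_window])[k]          # initialize on the first window
    out = [X]
    for n in range(n_window, len(x)):
        X = (X - x[n - n_window] + x[n]) * w
        out.append(X)
    return np.array(out)

t = np.arange(512)
x = np.cos(2 * np.pi * 8 * t / 64)           # tone at bin 8 of a 64-pt window
mag = np.abs(sliding_dft(x, 64, 8))
print(mag[:3].round(1))                      # stays near N/2 = 32 for a cosine
```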

  16. Cosmic ray measurements at aircraft altitudes and comparison with predictions of computer codes.

    PubMed

    Zhou, D; O'Sullivan, D; Xu, B; Flood, E

    2003-01-01

    Extensive measurements of dose exposure of aircrew have been carried out in recent years using passive detectors on subsonic and supersonic air routes by DIAS (Dublin Institute for Advanced Studies). Studies were based on measurement of LET spectra using nuclear recoils produced in CR-39 nuclear track detectors by high energy neutrons and protons. The detectors were calibrated using energetic heavy ions. Data obtained were compared with the predictions of the EPCARD and CARI-6 codes. Good agreement has been found between the experimental and theoretical values.

  17. Multisensor multipulse Linear Predictive Coding (LPC) analysis in noise for medium rate speech transmission

    NASA Astrophysics Data System (ADS)

    Preuss, R. D.

    1985-12-01

    The theory of multipulse linear predictive coding (LPC) analysis is extended to include the possible presence of acoustic noise, as for a telephone near a busy road. Models are developed assuming two signals are provided: the primary signal is the output of a microphone which samples the combined acoustic fields of the noise and the speech, while the secondary signal is the output of a microphone which samples the acoustic field of the noise alone. Analysis techniques to extract the multipulse LPC parameters from these two signals are developed; these techniques are developed as approximations to maximum likelihood analysis for the given model.
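
    The two-sensor idea can be sketched with ordinary autocorrelation-domain LPC: estimate the clean-speech autocorrelation by subtracting the reference microphone's noise autocorrelation, then solve the normal equations. This is a textbook simplification, not the paper's maximum-likelihood formulation, and the signals below are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def lpc_noise_compensated(primary, noise_ref, order=4):
    """LPC analysis with noise-compensated autocorrelation estimates."""
    def autocorr(x, lags):
        return np.array([x[: len(x) - k] @ x[k:] for k in range(lags + 1)])
    r = autocorr(primary, order) - autocorr(noise_ref, order)
    # symmetric Toeplitz normal equations R a = r[1:]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

# toy "speech": an AR(2) process; the same noise reaches both sensors
speech = np.zeros(4000)
e = rng.normal(0, 1.0, 4000)
for t in range(2, 4000):
    speech[t] = 1.3 * speech[t - 1] - 0.6 * speech[t - 2] + e[t]
noise = rng.normal(0, 0.5, 4000)
print(lpc_noise_compensated(speech + noise, noise).round(2))  # ~ [1.3, -0.6, 0, 0]
```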

  18. Fast Prediction of HCCI Combustion with an Artificial Neural Network Linked to a Fluid Mechanics Code

    SciTech Connect

    Aceves, S M; Flowers, D L; Chen, J; Babaimopoulos, A

    2006-08-29

    We have developed an artificial neural network (ANN) based combustion model and have integrated it into a fluid mechanics code (KIVA3V) to produce a new analysis tool (titled KIVA3V-ANN) that can yield accurate HCCI predictions at very low computational cost. The neural network predicts ignition delay as a function of operating parameters (temperature, pressure, equivalence ratio and residual gas fraction). KIVA3V-ANN keeps track of the time history of the ignition delay during the engine cycle to evaluate the ignition integral and predict ignition for each computational cell. After a cell ignites, chemistry becomes active, and a two-step chemical kinetic mechanism predicts composition and heat generation in the ignited cells. KIVA3V-ANN has been validated by comparison with isooctane HCCI experiments in two different engines. The neural network provides reasonable predictions for HCCI combustion and emissions that, although typically not as good as obtained with the more physically representative multi-zone model, are obtained at a much reduced computational cost. KIVA3V-ANN can perform reasonably accurate HCCI calculations while requiring only 10% more computational effort than a motored KIVA3V run. It is therefore considered a valuable tool for evaluation of engine maps or other performance analysis tasks requiring multiple individual runs.
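
    The per-cell bookkeeping described above is essentially a Livengood-Wu-style ignition integral with the delay supplied by the network; a minimal sketch follows, in which a made-up Arrhenius-style correlation stands in for the ANN and all constants are assumptions.

```python
import numpy as np

def ignition_time(times, temps, pressures, tau_model):
    """Accumulate dt/tau along a cell's history; ignite when it reaches 1."""
    integral, t_prev = 0.0, times[0]
    for t, T, p in zip(times[1:], temps[1:], pressures[1:]):
        integral += (t - t_prev) / tau_model(T, p)
        t_prev = t
        if integral >= 1.0:
            return t
    return None   # no ignition within the simulated interval

# toy Arrhenius-like ignition-delay surrogate (a stand-in, not the ANN)
tau = lambda T, p: 5e-3 * (p / 40.0) ** -1.0 * np.exp(2000.0 * (1.0 / T - 1.0 / 900.0))

t = np.linspace(0.0, 0.02, 400)              # seconds
T = 700.0 + 25000.0 * t                      # compression heating ramp (illustrative)
p = 20.0 + 2000.0 * t                        # bar (illustrative)
print(ignition_time(t, T, p, tau))
```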

  19. Clinal adaptation and adaptive plasticity in Artemisia californica: implications for the response of a foundation species to predicted climate change.

    PubMed

    Pratt, Jessica D; Mooney, Kailen A

    2013-08-01

    Local adaptation and plasticity pose significant obstacles to predicting plant responses to future climates. Although local adaptation and plasticity in plant functional traits have been documented for many species, less is known about population-level variation in plasticity and whether such variation is driven by adaptation to environmental variation. We examined clinal variation in traits and performance - and plastic responses to environmental change - for the shrub Artemisia californica along a 700 km gradient characterized (from south to north) by a fourfold increase in precipitation and a 61% decrease in interannual precipitation variation. Plants cloned from five populations along this gradient were grown for 3 years in treatments approximating the precipitation regimes of the north and south range margins. Most traits varying among populations did so clinally; northern populations (vs. southern) had higher water-use efficiencies and lower growth rates, C : N ratios and terpene concentrations. Notably, there was variation in plasticity for plant performance that was strongly correlated with source site interannual precipitation variability. The high-precipitation treatment (vs. low) increased growth and flower production more for plants from southern populations (181% and 279%, respectively) than northern populations (47% and 20%, respectively). Overall, precipitation variability at population source sites predicted 86% and 99% of variation in plasticity in growth and flowering, respectively. These striking, clinal patterns in plant traits and plasticity are indicative of adaptation to both the mean and variability of environmental conditions. Furthermore, our analysis of long-term coastal climate data in turn indicates an increase in interannual precipitation variation consistent with most global change models and, unexpectedly, this increased variation is especially pronounced at historically stable, northern sites. Our findings demonstrate the

  1. Techniques for the Enhancement of Linear Predictive Speech Coding in Adverse Conditions

    NASA Astrophysics Data System (ADS)

    Wrench, Alan A.

    Available from UMI in association with The British Library. Requires signed TDF. The Linear Prediction model was first applied to speech two and a half decades ago. Since then it has been the subject of intense research and continues to be one of the principal tools in the analysis of speech. Its mathematical tractability makes it a suitable subject for study, and its proven success in practical applications makes the study worthwhile. The model is known to be unsuited to speech corrupted by background noise. This has led many researchers to investigate ways of enhancing the speech signal prior to Linear Predictive analysis. In this thesis, this body of work is extended. The chosen application is low bit-rate (2.4 kbits/sec) speech coding. For this task the performance of the Linear Prediction algorithm is crucial, because there is insufficient bandwidth to encode the error between the modelled speech and the original input. A review of the fundamentals of Linear Prediction and an independent assessment of the relative performance of methods of Linear Prediction modelling are presented. A new method is proposed which is fast and facilitates stability checking; however, its stability is shown to be unacceptably poorer than that of existing methods. A novel supposition governing the positioning of the analysis frame relative to a voiced speech signal is proposed and supported by observation. The problem of coding noisy speech is examined. Four frequency-domain speech processing techniques are developed and tested. These are: (i) Combined Order Linear Prediction Spectral Estimation; (ii) Frequency Scaling According to an Aural Model; (iii) Amplitude Weighting Based on Perceived Loudness; (iv) Power Spectrum Squaring. These methods are compared with the Recursive Linearised Maximum a Posteriori method. Following on from work done in the frequency domain, a time domain implementation of spectrum squaring is developed. In addition, a new method of power spectrum estimation is
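
    Of the four frequency-domain techniques listed, power spectrum squaring is the easiest to sketch: raising the short-term power spectrum to a higher power sharpens formant peaks relative to a flat noise floor before the autocorrelation is formed for LPC analysis. The sketch below is one plausible reading of that step (frame length, window, and normalization are assumptions, not the thesis implementation); its output feeds a standard Levinson-Durbin recursion such as the one sketched earlier in this listing.

      import numpy as np

      def squared_spectrum_autocorr(frame, order):
          # Zero-pad so the circular inverse transform approximates a linear
          # autocorrelation (Wiener-Khinchin), then square the power spectrum.
          n = int(2 ** np.ceil(np.log2(2 * len(frame))))
          power = np.abs(np.fft.rfft(frame * np.hamming(len(frame)), n)) ** 2
          enhanced = power ** 2                       # the spectrum-squaring step
          enhanced *= power.sum() / enhanced.sum()    # keep energies comparable
          return np.fft.irfft(enhanced, n)[:order + 1]

      rng = np.random.default_rng(1)
      noisy_frame = np.sin(0.3 * np.arange(256)) + 0.5 * rng.standard_normal(256)
      print(squared_spectrum_autocorr(noisy_frame, 10))  # feed to Levinson-Durbin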

  2. CCFL in hot legs and steam generators and its prediction with the CATHARE code

    SciTech Connect

    Geffraye, G.; Bazin, P.; Pichon, P.

    1995-09-01

    This paper presents a study of Counter-Current Flow Limitation (CCFL) prediction in hot legs and steam generators (SG) in both system test facilities and pressurized water reactors. Experimental data are analyzed, particularly the recent MHYRESA test data. Geometrical and scale effects on the flooding behavior are shown. The CATHARE code modelling problems concerning CCFL prediction are discussed. A method is developed which gives the user the possibility of controlling the flooding limit at a given location. In order to minimize the user effect, a methodology is proposed for calculations involving counter-current flow between the upper plenum and the SG U-tubes. The following questions have to be made clear for the user: when to use the CATHARE CCFL option, which correlation to use, and where to locate the flooding limit.
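
    Flooding limits of this kind are conventionally written as a Wallis-type correlation in the non-dimensional superficial velocities. The abstract does not give the constants or locations CATHARE uses, so the slope m and intercept C below are placeholders chosen in the usual range; choosing them, and where to apply the limit, is exactly the user decision the paper discusses.

      import numpy as np

      def j_star(j, rho, rho_f, rho_g, D, g=9.81):
          # Wallis non-dimensional superficial velocity.
          return j * np.sqrt(rho / (g * D * (rho_f - rho_g)))

      def ccfl_violated(j_g, j_f, rho_g, rho_f, D, m=1.0, C=0.7):
          # Wallis-type flooding limit: sqrt(jg*) + m*sqrt(jf*) <= C.
          # m and C are placeholder constants, not the CATHARE values.
          jg = j_star(j_g, rho_g, rho_f, rho_g, D)
          jf = j_star(j_f, rho_f, rho_f, rho_g, D)
          return np.sqrt(jg) + m * np.sqrt(jf) > C

      # steam up / water down in a 0.7 m hot leg at ~70 bar (illustrative values)
      print(ccfl_violated(j_g=3.0, j_f=0.4, rho_g=36.5, rho_f=740.0, D=0.7))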

  3. Comparison of aerosol code predictions with experimental observations on the behavior of aerosols in steam

    SciTech Connect

    Tobias, M.L.

    1983-01-01

    Several computer codes have been developed to predict the behavior of aerosols in steam, a situation which is expected to occur in a light-water reactor accident. Among the codes with capabilities in this respect are MAEROS (Sandia), AEROMECH (University of Missouri), and NAUA-Mod4 (Karlsruhe). NAUA was specifically developed to model steam-aerosol behavior in hypothetical accidents in pressurized water reactors. It is based upon a series of experiments in which aerosols of uranium oxide, platinum oxide, and sodium nitrate were generated in a steam atmosphere. Several series of experiments have been conducted in the NSPP vessel using aerosols generated by plasma torch in a steam-air atmosphere. From these experiments we have selected those performed with iron oxide or uranium oxide for comparison with various computer codes, principally NAUA-Mod4. Comparisons of particle size are also displayed. Results of parameter studies to determine the sensitivity of the calculated results to the steam input level, initial particle size, deposition parameters, and assumed particle density are presented.

  4. FDNS code to predict wall heat fluxes or wall temperatures in rocket nozzles

    NASA Technical Reports Server (NTRS)

    Karr, Gerald R.

    1993-01-01

    This report summarizes the findings on the NASA contract NAG8-212, Task No. 3. The overall project consists of three tasks, all of which have been successfully completed. In addition, some supporting supplemental work, not required by the contract, has been performed and is documented herein. Task 1 involved the modification of the wall functions in the code FDNS to use a Reynolds Analogy-based method. Task 2 involved the verification of the code against experimentally available data. The data chosen for comparison were from an experiment involving the injection of helium from a wall jet. Results obtained in completing this task also show the sensitivity of the FDNS code to unknown conditions at the injection slot. Task 3 required computation of the flow of hot exhaust gases through the P&W 40K subscale nozzle. Computations were performed both with and without film coolant injection. The FDNS program tends to overpredict heat fluxes, but, with suitable modeling of backside cooling, may give reasonable wall temperature predictions. For film cooling in the P&W 40K calorimeter subscale nozzle, the average wall temperature is reduced from 1750 R to about 1050 R by the film cooling. The average wall heat flux is reduced by a factor of three.
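
    Task 1's Reynolds analogy-based wall treatment can be pictured with the textbook Chilton-Colburn form, which converts a skin-friction coefficient into a wall heat flux. This is a generic sketch with illustrative values, not the FDNS wall-function coding itself.

      def wall_heat_flux(rho, u_e, cp, cf, Pr, T_aw, T_w):
          # Chilton-Colburn extension of the Reynolds analogy:
          #   St = (C_f / 2) * Pr**(-2/3),  q_w = rho * u_e * cp * St * (T_aw - T_w)
          St = 0.5 * cf * Pr ** (-2.0 / 3.0)
          return rho * u_e * cp * St * (T_aw - T_w)

      # illustrative hot-gas-side values for a nozzle wall station
      q = wall_heat_flux(rho=1.2, u_e=2000.0, cp=2000.0, cf=0.002,
                         Pr=0.7, T_aw=3000.0, T_w=1000.0)
      print(round(q / 1e6, 2), "MW/m^2")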

  5. Status and Plans for the TRANSP Interpretive and Predictive Simulation Code

    NASA Astrophysics Data System (ADS)

    Kaye, Stanley; Andre, Robert; Gorelenkova, Marina; Yuan, Xingqui; Hawryluk, Richard; Jardin, Steven; Poli, Francesca

    2015-11-01

    TRANSP is an integrated interpretive and predictive transport analysis tool that incorporates state-of-the-art heating/current-drive sources and transport models. The treatments and transport solvers are becoming increasingly sophisticated and comprehensive. For instance, the ISOLVER component provides a free-boundary equilibrium solution, while the PT_SOLVER transport solver is especially suited for stiff transport models such as TGLF. TRANSP also incorporates such source models as NUBEAM for neutral beam injection, and GENRAY, TORAY, TORBEAM, TORIC and CQL3D for ICRH, LHCD, ECH and HHFW. The implementation of selected components makes efficient use of MPI to speed up code calculations. TRANSP has a wide international user base, and it is run on the FusionGrid to allow for timely support and quick turnaround by the PPPL Computational Plasma Physics Group. It is being used as a basis for both analysis and development of control algorithms and discharge operational scenarios, including simulation of ITER plasmas. This poster will describe present uses of the code worldwide, as well as plans for upgrading the physics modules and code framework. Progress on implementing TRANSP as a component in the ITER IMAS will also be described. This research was supported by the U.S. Department of Energy under contract DE-AC02-09CH11466.

  6. IN-MACA-MCC: Integrated Multiple Attractor Cellular Automata with Modified Clonal Classifier for Human Protein Coding and Promoter Prediction

    PubMed Central

    Pokkuluri, Kiran Sree; Inampudi, Ramesh Babu; Nedunuri, S. S. S. N. Usha Devi

    2014-01-01

    Protein coding and promoter region predictions are very important challenges of bioinformatics (Attwood and Teresa, 2000). The identification of these regions plays a crucial role in understanding genes. Many novel computational and mathematical methods have been introduced, and existing methods are being refined, for predicting each of the regions separately; still, there is scope for improvement. We propose a classifier that is built with MACA (multiple attractor cellular automata) and MCC (modified clonal classifier) to predict both regions with a single classifier. The proposed classifier is trained and tested with Fickett and Tung (1992) datasets for protein coding region prediction for DNA sequences of lengths 54, 108, and 162, and with MMCRI datasets for DNA sequences of lengths 252 and 354. The proposed classifier is trained and tested with promoter sequences from the DBTSS (Yamashita et al., 2006) dataset and nonpromoters from the EID (Saxonov et al., 2000) and UTRdb (Pesole et al., 2002) datasets. The proposed model can predict both regions with an average accuracy of 90.5% for promoter and 89.6% for protein coding region predictions. The specificity and sensitivity values of promoter and protein coding region predictions are 0.89 and 0.92, respectively. PMID:25132849
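
    For reference, the accuracy, sensitivity, and specificity quoted above come from ordinary confusion-matrix arithmetic. The counts below are invented to reproduce numbers of the quoted magnitude; the paper's raw counts are not given in the abstract.

      def binary_metrics(tp, fp, tn, fn):
          # Standard confusion-matrix metrics.
          return {
              "accuracy":    (tp + tn) / (tp + fp + tn + fn),
              "sensitivity": tp / (tp + fn),   # true-positive rate
              "specificity": tn / (tn + fp),   # true-negative rate
          }

      # illustrative counts only
      print(binary_metrics(tp=920, fp=110, tn=890, fn=80))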

  8. Prediction of contact forces of underactuated finger by adaptive neuro fuzzy approach

    NASA Astrophysics Data System (ADS)

    Petković, Dalibor; Shamshirband, Shahaboddin; Abbasi, Almas; Kiani, Kourosh; Al-Shammari, Eiman Tamah

    2015-12-01

    Passive underactuation can be used to obtain an adaptive finger: the underactuation principle allows the fingers to adapt their shape to the objects they grasp, without requiring a control algorithm. In this study a kinetostatic model of the underactuated finger mechanism was analyzed. The underactuation is achieved by adding compliance in every finger joint. Since the contact forces of the finger depend on the contact position of the finger and object, it is suitable to build a prediction model for the contact forces as a function of the contact positions of the finger and grasped objects. In this study the prediction of contact forces was established by a soft computing approach: an adaptive neuro-fuzzy inference system (ANFIS) was applied to perform the prediction of the finger contact forces.
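
    ANFIS combines Gaussian fuzzy memberships with rule consequents trained against data. The forward pass below is a zeroth-order Takagi-Sugeno sketch with two made-up rules mapping contact position to contact force; the actual model, its rule base, and its training (which ANFIS performs by hybrid least-squares/backpropagation) are not reproduced here.

      import numpy as np

      def gauss(x, c, s):
          return np.exp(-0.5 * ((x - c) / s) ** 2)

      def anfis_predict(x, rules):
          # Zeroth-order Takagi-Sugeno inference: each rule has one Gaussian
          # membership (centre c, width s) per input and a constant output w.
          firing = np.array([np.prod([gauss(xi, c, s)
                                      for xi, (c, s) in zip(x, r["mf"])])
                             for r in rules])
          weights = firing / firing.sum()
          return np.dot(weights, [r["w"] for r in rules])

      # two inputs (contact positions, normalized) -> contact force, made-up rules
      rules = [{"mf": [(0.2, 0.15), (0.3, 0.2)], "w": 2.0},
               {"mf": [(0.8, 0.15), (0.7, 0.2)], "w": 5.5}]
      print(anfis_predict([0.6, 0.5], rules))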

  9. Rate-adaptive FSO links over atmospheric turbulence channels by jointly using repetition coding and silence periods.

    PubMed

    García-Zambrana, Antonio; Castillo-Vázquez, Carmen; Castillo-Vázquez, Beatriz

    2010-11-22

    In this paper, a new and simple rate-adaptive transmission scheme for free-space optical (FSO) communication systems with intensity modulation and direct detection (IM/DD) over atmospheric turbulence channels is analyzed. This scheme is based on the joint use of repetition coding and variable silence periods, exploiting the potential time-diversity order (TDO) available in the turbulent channel as well as allowing an increase of the peak-to-average optical power ratio (PAOPR). Repetition coding is first used to adapt the transmission rate to the channel conditions until the whole time-diversity order made available in the turbulent channel by interleaving is exploited. Then, once no more diversity gain is available, the rate reduction can be increased by using variable silence periods in order to increase the PAOPR. Novel closed-form expressions for the average bit-error rate (BER), as well as their corresponding asymptotic expressions, are presented for transmitted optical beam irradiances following negative exponential and gamma-gamma distributions, covering a wide range of atmospheric turbulence conditions. The obtained results show a diversity order as in the corresponding rate-adaptive transmission scheme based only on repetition codes, but with a relevant improvement in coding gain. Simulation results are further demonstrated to confirm the analytical results. Not only rectangular pulses are considered but also OOK formats with any pulse shape, corroborating the advantage of using pulses with high PAOPR, such as Gaussian or squared hyperbolic secant pulses. We also determine the achievable information rate for the rate-adaptive transmission schemes analyzed here.
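
    The adaptation logic can be summarized arithmetically: repetition absorbs rate reduction up to the channel's time-diversity order, and any further reduction comes from silent slots, which raise the OOK peak-to-average optical power ratio under an average-power constraint. The base rate, the TDO value, and the PAOPR formula for equiprobable OOK below are assumptions for illustration, not the paper's analysis.

      def rate_plan(reduction, tdo, base_rate_bps=1e9):
          # Split a total rate reduction between repetition coding (diversity
          # gain, up to the TDO) and silence periods (PAOPR gain).
          n_rep = min(reduction, tdo)           # repetitions give diversity gain
          silence_factor = reduction // n_rep   # extra reduction via silent slots
          duty = 1.0 / silence_factor           # fraction of slot periods used
          paopr = 2.0 / duty                    # equiprobable OOK: peak/average
          return {"rate_bps": base_rate_bps / reduction,
                  "repetitions": n_rep, "duty_cycle": duty, "paopr": paopr}

      for reduction in (2, 4, 8, 16):           # assume a channel with TDO = 4
          print(rate_plan(reduction, tdo=4))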

  10. Predictive coding in autism spectrum disorder and attention deficit hyperactivity disorder.

    PubMed

    Gonzalez-Gadea, Maria Luz; Chennu, Srivas; Bekinschtein, Tristan A; Rattazzi, Alexia; Beraudi, Ana; Tripicchio, Paula; Moyano, Beatriz; Soffita, Yamila; Steinberg, Laura; Adolfi, Federico; Sigman, Mariano; Marino, Julian; Manes, Facundo; Ibanez, Agustin

    2015-11-01

    Predictive coding has been proposed as a framework to understand neural processes in neuropsychiatric disorders. We used this approach to describe mechanisms responsible for attentional abnormalities in autism spectrum disorder (ASD) and attention deficit hyperactivity disorder (ADHD). We monitored brain dynamics of 59 children (8-15 yr old) who had ASD or ADHD or who were control participants via high-density electroencephalography. We performed analysis at the scalp and source-space levels while participants listened to standard and deviant tone sequences. Through task instructions, we manipulated top-down expectation by presenting expected and unexpected deviant sequences. Children with ASD showed reduced superior frontal cortex (FC) responses to unexpected events but increased dorsolateral prefrontal cortex (PFC) activation to expected events. In contrast, children with ADHD exhibited reduced cortical responses in superior FC to expected events but strong PFC activation to unexpected events. Moreover, neural abnormalities were associated with specific control mechanisms, namely, inhibitory control in ASD and set-shifting in ADHD. Based on the predictive coding account, top-down expectation abnormalities could be attributed to a disproportionate reliance (precision) allocated to prior beliefs in ASD and to sensory input in ADHD. PMID:26311184

  12. Multi-level adaptive particle mesh (MLAPM): a c code for cosmological simulations

    NASA Astrophysics Data System (ADS)

    Knebe, Alexander; Green, Andrew; Binney, James

    2001-08-01

    We present a computer code written in c that is designed to simulate structure formation from collisionless matter. The code is purely grid-based and uses a recursively refined Cartesian grid to solve Poisson's equation for the potential, rather than obtaining the potential from a Green's function. Refinements can have arbitrary shapes and in practice closely follow the complex morphology of the density field that evolves. The time-step shortens by a factor of 2 with each successive refinement. Competing approaches to N-body simulation are discussed from the point of view of the basic theory of N-body simulation. It is argued that an appropriate choice of softening length ɛ is of great importance and that ɛ should be at all points an appropriate multiple of the local interparticle separation. Unlike tree and P3M codes, multigrid codes automatically satisfy this requirement. We show that at early times and low densities in cosmological simulations, ɛ needs to be significantly smaller relative to the interparticle separation than in virialized regions. Tests of the ability of the code's Poisson solver to recover the gravitational fields of both virialized haloes and Zel'dovich waves are presented, as are tests of the code's ability to reproduce analytic solutions for plane-wave evolution. The times required to conduct a ΛCDM cosmological simulation for various configurations are compared with the times required to complete the same simulation with the ART, AP3M and GADGET codes. The power spectra, halo mass functions and halo-halo correlation functions of simulations conducted with different codes are compared. The code is available from http://www-thphys.physics.ox.ac.uk/users/MLAPM.
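
    At the bottom of any such grid code sits a relaxation kernel for the discrete Poisson equation. The sketch below shows plain Jacobi sweeps on a single uniform grid; MLAPM instead recurses over adaptively refined grids and cycles between levels, converging far faster, but the per-level operation looks like this. Grid size and source are toy values.

      import numpy as np

      def relax_poisson(rho, h, sweeps=2000):
          # Jacobi relaxation for laplacian(phi) = rho with phi = 0 on the
          # boundary: phi = (sum of 4 neighbours - h^2 * rho) / 4 per sweep.
          phi = np.zeros_like(rho)
          for _ in range(sweeps):
              phi[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                                        phi[1:-1, :-2] + phi[1:-1, 2:] -
                                        h * h * rho[1:-1, 1:-1])
          return phi

      # unit point mass in the middle of a 65x65 grid
      h = 1.0 / 64
      rho = np.zeros((65, 65)); rho[32, 32] = 1.0 / h**2
      phi = relax_poisson(rho, h)
      print(phi[32, 32], phi[32, 48])   # potential is most negative at the source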

  13. Predicting cortical bone adaptation to axial loading in the mouse tibia

    PubMed Central

    Pereira, A. F.; Javaheri, B.; Pitsillides, A. A.; Shefelbine, S. J.

    2015-01-01

    The development of predictive mathematical models can contribute to a deeper understanding of the specific stages of bone mechanobiology and the process by which bone adapts to mechanical forces. The objective of this work was to predict, with spatial accuracy, cortical bone adaptation to mechanical load, in order to better understand the mechanical cues that might be driving adaptation. The axial tibial loading model was used to trigger cortical bone adaptation in C57BL/6 mice and provide relevant biological and biomechanical information. A method for mapping cortical thickness in the mouse tibia diaphysis was developed, allowing for a thorough spatial description of where bone adaptation occurs. Poroelastic finite-element (FE) models were used to determine the structural response of the tibia upon axial loading and interstitial fluid velocity as the mechanical stimulus. FE models were coupled with mechanobiological governing equations, which accounted for non-static loads and assumed that bone responds instantly to local mechanical cues in an on–off manner. The presented formulation was able to simulate the areas of adaptation and accurately reproduce the distributions of cortical thickening observed in the experimental data with a statistically significant positive correlation (Kendall's τ rank coefficient τ = 0.51, p < 0.001). This work demonstrates that computational models can spatially predict cortical bone mechanoadaptation to a time variant stimulus. Such models could be used in the design of more efficient loading protocols and drug therapies that target the relevant physiological mechanisms. PMID:26311315
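
    The paper's governing rule (bone responds instantly, in an on-off manner, wherever the local mechanical cue exceeds a threshold) reduces to a one-line update per surface point. The threshold, rate, and toy stimulus below are placeholders; in the study the stimulus is interstitial fluid velocity from a poroelastic FE solution, not an invented array.

      import numpy as np

      def update_thickness(thickness, stimulus, threshold, rate, dt):
          # On-off mechanobiological rule: thicken at a constant rate where the
          # stimulus exceeds the threshold, do nothing elsewhere.
          return thickness + rate * dt * (stimulus > threshold)

      thickness = np.full(360, 0.2)      # mm, sampled around the cortex perimeter
      stimulus = 0.5 + 0.5 * np.cos(np.linspace(0, 2 * np.pi, 360))  # toy cue
      for _ in range(14):                # two weeks of daily loading bouts
          thickness = update_thickness(thickness, stimulus,
                                       threshold=0.8, rate=0.005, dt=1.0)
      print(thickness.max(), thickness.min())   # thickening only where cue > threshold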

  14. Age-Related Changes in Predictive Capacity Versus Internal Model Adaptability: Electrophysiological Evidence that Individual Differences Outweigh Effects of Age.

    PubMed

    Bornkessel-Schlesewsky, Ina; Philipp, Markus; Alday, Phillip M; Kretzschmar, Franziska; Grewe, Tanja; Gumpert, Maike; Schumacher, Petra B; Schlesewsky, Matthias

    2015-01-01

    Hierarchical predictive coding has been identified as a possible unifying principle of brain function, and recent work in cognitive neuroscience has examined how it may be affected by age-related changes. Using language comprehension as a test case, the present study aimed to dissociate age-related changes in prediction generation versus internal model adaptation following a prediction error. Event-related brain potentials (ERPs) were measured in a group of older adults (60-81 years; n = 40) as they read sentences of the form "The opposite of black is white/yellow/nice." Replicating previous work in young adults, results showed a target-related P300 for the expected antonym ("white"; an effect assumed to reflect a prediction match), and a graded N400 effect for the two incongruous conditions (i.e. a larger N400 amplitude for the incongruous continuation not related to the expected antonym, "nice," versus the incongruous associated condition, "yellow"). These effects were followed by a late positivity, again with a larger amplitude in the incongruous non-associated versus incongruous associated condition. Analyses using linear mixed-effects models showed that the target-related P300 effect and the N400 effect for the incongruous non-associated condition were both modulated by age, thus suggesting that age-related changes affect both prediction generation and model adaptation. However, effects of age were outweighed by the interindividual variability of ERP responses, as reflected in the high proportion of variance captured by the inclusion of by-condition random slopes for participants and items. We thus argue that - at both a neurophysiological and a functional level - the notion of general differences between language processing in young and older adults may only be of limited use, and that future research should seek to better understand the causes of interindividual variability in the ERP responses of older adults and its relation to cognitive performance. PMID

  15. Anti-Voice Adaptation Suggests Prototype-Based Coding of Voice Identity

    PubMed Central

    Latinus, Marianne; Belin, Pascal

    2011-01-01

    We used perceptual aftereffects induced by adaptation with anti-voice stimuli to investigate voice identity representations. Participants learned a set of voices then were tested on a voice identification task with vowel stimuli morphed between identities, after different conditions of adaptation. In Experiment 1, participants chose the identity opposite to the adapting anti-voice significantly more often than the other two identities (e.g., after being adapted to anti-A, they identified the average voice as A). In Experiment 2, participants showed a bias for identities opposite to the adaptor specifically for anti-voice, but not for non-anti-voice adaptors. These results are strikingly similar to adaptation aftereffects observed for facial identity. They are compatible with a representation of individual voice identities in a multidimensional perceptual voice space referenced on a voice prototype. PMID:21847384

  16. Affinity regression predicts the recognition code of nucleic acid binding proteins

    PubMed Central

    Pelossof, Raphael; Singh, Irtisha; Yang, Julie L.; Weirauch, Matthew T.; Hughes, Timothy R.; Leslie, Christina S.

    2016-01-01

    Predicting the affinity profiles of nucleic acid-binding proteins directly from the protein sequence is a major unsolved problem. We present a statistical approach for learning the recognition code of a family of transcription factors (TFs) or RNA-binding proteins (RBPs) from high-throughput binding assays. Our method, called affinity regression, trains on protein binding microarray (PBM) or RNAcompete experiments to learn an interaction model between proteins and nucleic acids, using only protein domain and probe sequences as inputs. By training on mouse homeodomain PBM profiles, our model correctly identifies residues that confer DNA-binding specificity and accurately predicts binding motifs for an independent set of divergent homeodomains. Similarly, learning from RNAcompete profiles for diverse RBPs, our model can predict the binding affinities of held-out proteins and identify key RNA-binding residues. More broadly, we envision applying our method to model and predict biological interactions in any setting where there is a high-throughput ‘affinity’ readout. PMID:26571099
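
    The heart of affinity regression is a bilinear interaction model: binding profiles Y are explained by probe features D, protein features P, and a learned interaction matrix W. The sketch below solves the plain least-squares version of Y ~ D W P^T via the identity vec(Y) = kron(P, D) vec(W). The published method additionally works in a reduced output space with regularization, which is omitted here; all dimensions and data are synthetic.

      import numpy as np

      rng = np.random.default_rng(1)
      # toy sizes: 200 probes x 30 k-mer features, 20 proteins x 12 domain features
      D = rng.standard_normal((200, 30))    # probe sequence features
      P = rng.standard_normal((20, 12))     # protein domain features
      W_true = rng.standard_normal((30, 12))
      Y = D @ W_true @ P.T                  # binding profiles (probes x proteins)

      # learn W from (D, P, Y) using vec(Y) = kron(P, D) vec(W)
      A = np.kron(P, D)
      w, *_ = np.linalg.lstsq(A, Y.reshape(-1, order="F"), rcond=None)
      W = w.reshape(30, 12, order="F")

      # predict the profile of a held-out protein from its domain features alone
      p_new = rng.standard_normal(12)
      print(np.allclose(D @ W @ p_new, D @ W_true @ p_new, atol=1e-6))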

  17. Reading the second code: mapping epigenomes to understand plant growth, development, and adaptation to the environment.

    PubMed

    2012-06-01

    We have entered a new era in agricultural and biomedical science made possible by remarkable advances in DNA sequencing technologies. The complete sequence of an individual's set of chromosomes (collectively, its genome) provides a primary genetic code for what makes that individual unique, just as the contents of every personal computer reflect the unique attributes of its owner. But a second code, composed of "epigenetic" layers of information, affects the accessibility of the stored information and the execution of specific tasks. Nature's second code is enigmatic and must be deciphered if we are to fully understand and optimize the genetic potential of crop plants. The goal of the Epigenomics of Plants International Consortium is to crack this second code, and ultimately master its control, to help catalyze a new green revolution.

  18. Evaluation of damage-induced permeability using a three-dimensional Adaptive Continuum/Discontinuum Code (AC/DC)

    NASA Astrophysics Data System (ADS)

    Dedecker, Fabian; Cundall, Peter; Billaux, Daniel; Groeger, Torsten

    Digging a shaft or drift inside a rock mass is a common practice in civil engineering when a transportation way, such as a motorway, railway tunnel or storage shaft is to be built. In most cases, the consequences of the disturbance on the medium must be known in order to estimate the behaviour of the disturbed rock mass. Indeed, excavating part of the rock causes a new distribution of the stress field around the excavation that can lead to micro-cracking and even to the failure of some rock volume in the vicinity of the shaft. Consequently, the formed micro-cracks modify the mechanical and hydraulic properties of the rock. In this paper, we present an original method for the evaluation of damage-induced permeability. ITASCA has developed and used discontinuum models to study rock damage by building particle assemblies and checking the breakage of bonds under stress. However, such models are limited in size by the very large number of particles needed to model even a comparatively small volume of rock. In fact, a large part of most models never experiences large strains and does not require the accurate description of large-strain/damage/post-peak behaviour afforded by a discontinuum model. Thus, a large model frequently can be separated into a strongly strained “core” area to be represented by a Discontinuum and a peripheral area for which continuum zones would be adequate. Based on this observation, Itasca has developed a coupled, three-dimensional, continuum/discontinuum modelling approach. The new approach, termed Adaptive Continuum/Discontinuum Code (AC/DC), is based on the use of a periodic discontinuum “base brick” for which more or less simplified continuum equivalents are derived. Depending on the level of deformation in each part of the model, the AC/DC code can dynamically select the appropriate brick type to be used. In this paper, we apply the new approach to an excavation performed in the Bure site, at which the French nuclear waste agency

  19. Resting-state functional connectivity predicts longitudinal change in autistic traits and adaptive functioning in autism.

    PubMed

    Plitt, Mark; Barnes, Kelly Anne; Wallace, Gregory L; Kenworthy, Lauren; Martin, Alex

    2015-12-01

    Although typically identified in early childhood, the social communication symptoms and adaptive behavior deficits that are characteristic of autism spectrum disorder (ASD) persist throughout the lifespan. Despite this persistence, even individuals without cooccurring intellectual disability show substantial heterogeneity in outcomes. Previous studies have found various behavioral assessments [such as intelligence quotient (IQ), early language ability, and baseline autistic traits and adaptive behavior scores] to be predictive of outcome, but most of the variance in functioning remains unexplained by such factors. In this study, we investigated to what extent functional brain connectivity measures obtained from resting-state functional connectivity MRI (rs-fcMRI) could predict the variance left unexplained by age and behavior (follow-up latency and baseline autistic traits and adaptive behavior scores) in two measures of outcome--adaptive behaviors and autistic traits at least 1 y postscan (mean follow-up latency = 2 y, 10 mo). We found that connectivity involving the so-called salience network (SN), default-mode network (DMN), and frontoparietal task control network (FPTCN) was highly predictive of future autistic traits and the change in autistic traits and adaptive behavior over the same time period. Furthermore, functional connectivity involving the SN, which is predominantly composed of the anterior insula and the dorsal anterior cingulate, predicted reliable improvement in adaptive behaviors with 100% sensitivity and 70.59% precision. From rs-fcMRI data, our study successfully predicted heterogeneity in outcomes for individuals with ASD that was unaccounted for by simple behavioral metrics and provides unique evidence for networks underlying long-term symptom abatement.

  20. Adaptive reliance on the most stable sensory predictions enhances perceptual feature extraction of moving stimuli

    PubMed Central

    Kumar, Neeraj

    2016-01-01

    The prediction of the sensory outcomes of action is thought to be useful for distinguishing self- vs. externally generated sensations, correcting movements when sensory feedback is delayed, and learning predictive models for motor behavior. Here, we show that aspects of another fundamental function—perception—are enhanced when they entail the contribution of predicted sensory outcomes and that this enhancement relies on the adaptive use of the most stable predictions available. We combined a motor-learning paradigm that imposes new sensory predictions with a dynamic visual search task to first show that perceptual feature extraction of a moving stimulus is poorer when it is based on sensory feedback that is misaligned with those predictions. This was possible because our novel experimental design allowed us to override the “natural” sensory predictions present when any action is performed and separately examine the influence of these two sources on perceptual feature extraction. We then show that if the new predictions induced via motor learning are unreliable, rather than just relying on sensory information for perceptual judgments, as is conventionally thought, then subjects adaptively transition to using other stable sensory predictions to maintain greater accuracy in their perceptual judgments. Finally, we show that when sensory predictions are not modified at all, these judgments are sharper when subjects combine their natural predictions with sensory feedback. Collectively, our results highlight the crucial contribution of sensory predictions to perception and also suggest that the brain intelligently integrates the most stable predictions available with sensory information to maintain high fidelity in perceptual decisions. PMID:26823516

  1. Rhythmic complexity and predictive coding: a novel approach to modeling rhythm and meter perception in music.

    PubMed

    Vuust, Peter; Witek, Maria A G

    2014-01-01

    Musical rhythm, consisting of apparently abstract intervals of accented temporal events, has a remarkable capacity to move our minds and bodies. How does the cognitive system enable our experiences of rhythmically complex music? In this paper, we describe some common forms of rhythmic complexity in music and propose the theory of predictive coding (PC) as a framework for understanding how rhythm and rhythmic complexity are processed in the brain. We also consider why we feel so compelled by rhythmic tension in music. First, we consider theories of rhythm and meter perception, which provide hierarchical and computational approaches to modeling. Second, we present the theory of PC, which posits a hierarchical organization of brain responses reflecting fundamental, survival-related mechanisms associated with predicting future events. According to this theory, perception and learning are manifested through the brain's Bayesian minimization of the error between the input to the brain and the brain's prior expectations. Third, we develop a PC model of musical rhythm, in which rhythm perception is conceptualized as an interaction between what is heard ("rhythm") and the brain's anticipatory structuring of music ("meter"). Finally, we review empirical studies of the neural and behavioral effects of syncopation, polyrhythm and groove, and propose how these studies can be seen as special cases of the PC theory. We argue that musical rhythm exploits the brain's general principles of prediction and propose that pleasure and desire for sensorimotor synchronization from musical rhythm may be a result of such mechanisms.

  2. Autistic traits are linked to reduced adaptive coding of face identity and selectively poorer face recognition in men but not women.

    PubMed

    Rhodes, Gillian; Jeffery, Linda; Taylor, Libby; Ewing, Louise

    2013-11-01

    Our ability to discriminate and recognize thousands of faces despite their similarity as visual patterns relies on adaptive, norm-based, coding mechanisms that are continuously updated by experience. Reduced adaptive coding of face identity has been proposed as a neurocognitive endophenotype for autism, because it is found in autism and in relatives of individuals with autism. Autistic traits can also extend continuously into the general population, raising the possibility that reduced adaptive coding of face identity may be more generally associated with autistic traits. In the present study, we investigated whether adaptive coding of face identity decreases as autistic traits increase in an undergraduate population. Adaptive coding was measured using face identity aftereffects, and autistic traits were measured using the Autism-Spectrum Quotient (AQ) and its subscales. We also measured face and car recognition ability to determine whether autistic traits are selectively related to face recognition difficulties. We found that men who scored higher on levels of autistic traits related to social interaction had reduced adaptive coding of face identity. This result is consistent with the idea that atypical adaptive face-coding mechanisms are an endophenotype for autism. Autistic traits were also linked with face-selective recognition difficulties in men. However, there were some unexpected sex differences. In women, autistic traits were linked positively, rather than negatively, with adaptive coding of identity, and were unrelated to face-selective recognition difficulties. These sex differences indicate that autistic traits can have different neurocognitive correlates in men and women and raise the intriguing possibility that endophenotypes of autism can differ in males and females.

  3. DNApi: A De Novo Adapter Prediction Algorithm for Small RNA Sequencing Data

    PubMed Central

    Tsuji, Junko; Weng, Zhiping

    2016-01-01

    With the rapid accumulation of publicly available small RNA sequencing datasets, third-party meta-analysis across many datasets is becoming increasingly powerful. Although removing the 3´ adapter is an essential step for small RNA sequencing analysis, the adapter sequence information is not always available in the metadata, and can be erroneous even when it is available. In this study, we developed DNApi, a lightweight Python software package that predicts the 3´ adapter sequence de novo and provides the user with cleansed small RNA sequences ready for downstream analysis. Tested on 539 publicly available small RNA libraries accompanied by 3´ adapter sequences in their metadata, DNApi shows near-perfect accuracy (98.5%) with fast runtime (~2.85 seconds per library) and efficient memory usage (~43 MB on average). In addition to 3´ adapter prediction, it is also important to classify whether the input small RNA libraries were already processed, i.e. whether the 3´ adapters were removed. Given another batch of datasets, DNApi correctly judged that all 192 publicly available processed libraries were “ready-to-map” small RNA sequences. DNApi is compatible with Python 2 and 3, and is available at https://github.com/jnktsj/DNApi. The 731 small RNA libraries used for DNApi evaluation were from human tissues and were carefully and manually collected, so this study also provides readers with curated datasets that can be integrated into their studies.
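
    A minimal version of de novo 3´ adapter inference: in untrimmed reads the adapter dominates the read tails, so frequent tail k-mers point at it. DNApi's actual algorithm is more elaborate (k-mer assembly over several values of k, filtering, and the processed-vs-unprocessed call); the function and toy reads below are illustrative only.

      from collections import Counter

      def predict_adapter(reads, k=9, tail=12):
          # Count k-mers in the last `tail` bases of every read; in untrimmed
          # libraries the top tail k-mer is a strong adapter candidate.
          counts = Counter()
          for r in reads:
              t = r[-tail:]
              for i in range(len(t) - k + 1):
                  counts[t[i:i + k]] += 1
          return counts.most_common(1)[0][0]

      # toy reads: ~22 nt miRNA followed by the start of an adapter sequence
      adapter = "TGGAATTCTCGGGTGCCAAGG"
      mirnas = ["TAGCTTATCAGACTGATGTTGA", "ACTGGACTTGGAGTCAGAAGGC"]
      reads = [(m + adapter)[:36] for m in mirnas for _ in range(50)]
      print(predict_adapter(reads))   # a 9-mer from inside the adapter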

  4. Starvation stress during larval development reveals predictive adaptive response in adult worker honey bees (Apis mellifera)

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A variety of organisms exhibit developmental plasticity that results in differences in adult morphology, physiology or behavior. This variation in the phenotype, called “Predictive Adaptive Response (PAR),” gives a selective advantage in an adult's environment if the adult experiences environments s...

  5. An adaptive handover prediction scheme for seamless mobility based wireless networks.

    PubMed

    Sadiq, Ali Safa; Fisal, Norsheila Binti; Ghafoor, Kayhan Zrar; Lloret, Jaime

    2014-01-01

    We propose an adaptive handover prediction (AHP) scheme for seamless mobility based wireless networks. That is, the AHP scheme incorporates fuzzy logic with AP prediction process in order to lend cognitive capability to handover decision making. Selection metrics, including received signal strength, mobile node relative direction towards the access points in the vicinity, and access point load, are collected and considered inputs of the fuzzy decision making system in order to select the best preferable AP around WLANs. The obtained handover decision which is based on the calculated quality cost using fuzzy inference system is also based on adaptable coefficients instead of fixed coefficients. In other words, the mean and the standard deviation of the normalized network prediction metrics of fuzzy inference system, which are collected from available WLANs are obtained adaptively. Accordingly, they are applied as statistical information to adjust or adapt the coefficients of membership functions. In addition, we propose an adjustable weight vector concept for input metrics in order to cope with the continuous, unpredictable variation in their membership degrees. Furthermore, handover decisions are performed in each MN independently after knowing RSS, direction toward APs, and AP load. Finally, performance evaluation of the proposed scheme shows its superiority compared with representatives of the prediction approaches. PMID:25574490
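
    The "adaptable coefficients" idea, setting membership-function parameters from the running mean and standard deviation of each normalized metric rather than fixing them, can be sketched in a few lines. The fuzzifier below is an assumption-level stand-in, not the paper's full inference system, and the sample values are invented.

      import numpy as np

      def adaptive_membership(samples):
          # Fit a Gaussian membership function to a metric's observed statistics
          # (mean -> centre, standard deviation -> width), so the coefficients
          # track the environment instead of being fixed at design time.
          mu, sigma = np.mean(samples), np.std(samples)
          return lambda x: np.exp(-0.5 * ((x - mu) / max(sigma, 1e-9)) ** 2)

      # RSS samples (dBm) observed from one candidate AP, normalized to [0, 1]
      rss = np.array([-72.0, -70.5, -68.0, -74.0, -69.5])
      norm = (rss - rss.min()) / (rss.max() - rss.min())
      mf = adaptive_membership(norm)
      print(mf(0.5))   # membership degree of a mid-range reading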

  7. Prognostic and predictive values of long non-coding RNA LINC00472 in breast cancer.

    PubMed

    Shen, Yi; Katsaros, Dionyssios; Loo, Lenora W M; Hernandez, Brenda Y; Chong, Clayton; Canuto, Emilie Marion; Biglia, Nicoletta; Lu, Lingeng; Risch, Harvey; Chu, Wen-Ming; Yu, Herbert

    2015-04-20

    LINC00472 is a novel long intergenic non-coding RNA. We evaluated LINC00472 expression in breast tumor samples using RT-qPCR, performed a meta-analysis of over 20 microarray datasets from the Gene Expression Omnibus (GEO) database, and investigated the effect of LINC00472 expression on cell proliferation and migration in breast cancer cells transfected with a LINC00472-expressing vector. Our qPCR results showed that high LINC00472 expression was associated with less aggressive breast tumors and more favorable disease outcomes. Patients with high expression of LINC00472 had significantly reduced risk of relapse and death compared to those with low expression. Patients with high LINC00472 expression also had better responses to adjuvant chemo- or hormonal therapy than did patients with low expression. Results of meta-analysis on multiple studies from the GEO database were in agreement with the findings of our study. High LINC00472 was also associated with favorable molecular subtypes, Luminal A or normal-like tumors. Cell culture experiments showed that up-regulation of LINC00472 expression could suppress breast cancer cell proliferation and migration. Collectively, our clinical and in vitro studies suggest that LINC00472 is a tumor suppressor in breast cancer. Evaluating this long non-coding RNA in breast tumors may have prognostic and predictive value in the clinical management of breast cancer. PMID:25865225

  9. Predictive coding and multisensory integration: an attentional account of the multisensory mind

    PubMed Central

    Talsma, Durk

    2015-01-01

    Multisensory integration involves a host of different cognitive processes, occurring at different stages of sensory processing. Here I argue that, despite recent insights suggesting that multisensory interactions can occur at very early latencies, the actual integration of individual sensory traces into an internally consistent mental representation is dependent on both top-down and bottom-up processes. Moreover, I argue that this integration is not limited to just sensory inputs, but that internal cognitive processes also shape the resulting mental representation. Studies showing that memory recall is affected by the initial multisensory context in which the stimuli were presented will be discussed, as well as several studies showing that mental imagery can affect multisensory illusions. This empirical evidence will be discussed from a predictive coding perspective, in which a central top-down attentional process is proposed to play a central role in coordinating the integration of all these inputs into a coherent mental representation. PMID:25859192

  10. Prediction of explosive cylinder tests using equations of state from the PANDA code

    SciTech Connect

    Kerley, G.I.; Christian-Frear, T.L.

    1993-09-28

    The PANDA code is used to construct tabular equations of state (EOS) for the detonation products of 24 explosives having CHNO compositions. These EOS, together with a reactive burn model, are used in numerical hydrocode calculations of cylinder tests. The predicted detonation properties and cylinder wall velocities are found to give very good agreement with experimental data. Calculations of flat plate acceleration tests for the HMX-based explosive LX14 are also made and shown to agree well with the measurements. The effects of the reaction zone on both the cylinder and flat plate tests are discussed. For TATB-based explosives, the differences between experiment and theory are consistently larger than for other compositions and may be due to nonideal (finite diameter) behavior.

  11. Discrete coding of stimulus value, reward expectation, and reward prediction error in the dorsal striatum.

    PubMed

    Oyama, Kei; Tateyama, Yukina; Hernádi, István; Tobler, Philippe N; Iijima, Toshio; Tsutsui, Ken-Ichiro

    2015-11-01

    To investigate how the striatum integrates sensory information with reward information for behavioral guidance, we recorded single-unit activity in the dorsal striatum of head-fixed rats participating in a probabilistic Pavlovian conditioning task with auditory conditioned stimuli (CSs) in which reward probability was fixed for each CS but parametrically varied across CSs. We found that the activity of many neurons was linearly correlated with the reward probability indicated by the CSs. The recorded neurons could be classified according to their firing patterns into functional subtypes coding reward probability in different forms such as stimulus value, reward expectation, and reward prediction error. These results suggest that several functional subgroups of dorsal striatal neurons represent different kinds of information formed through extensive prior exposure to CS-reward contingencies. PMID:26378201
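
    The three quantities the neurons are said to encode fall out of standard delta-rule bookkeeping for a probabilistic Pavlovian task: the learned value of each CS, the expectation it evokes at stimulus onset, and the prediction error at the outcome. The simulation below is generic reinforcement-learning arithmetic with invented probabilities, not the paper's analysis.

      import numpy as np

      rng = np.random.default_rng(0)
      p_reward = {"CS_A": 0.25, "CS_B": 0.5, "CS_C": 0.75}  # fixed per CS
      value = {cs: 0.0 for cs in p_reward}                  # learned stimulus value
      alpha = 0.1                                           # learning rate

      for _ in range(2000):
          cs = rng.choice(list(p_reward))
          expectation = value[cs]                # "reward expectation" at CS onset
          reward = float(rng.random() < p_reward[cs])
          delta = reward - expectation           # "reward prediction error"
          value[cs] += alpha * delta             # delta-rule update of value

      print({cs: round(v, 2) for cs, v in value.items()})  # ~ the programmed probabilities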

  12. An optimal decision population code that accounts for correlated variability unambiguously predicts a subject's choice.

    PubMed

    Carnevale, Federico; de Lafuente, Victor; Romo, Ranulfo; Parga, Néstor

    2013-12-18

    Decisions emerge from the concerted activity of neuronal populations distributed across brain circuits. However, the analytical tools best suited to decode decision signals from neuronal populations remain unknown. Here we show that knowledge of correlated variability between pairs of cortical neurons allows perfect decoding of decisions from population firing rates. We recorded pairs of neurons from secondary somatosensory (S2) and premotor (PM) cortices while monkeys reported the presence or absence of a tactile stimulus. We found that while populations of S2 and sensory-like PM neurons are only partially correlated with behavior, those PM neurons active during a delay period preceding the motor report predict unequivocally the animal's decision report. Thus, a population rate code that optimally reveals a subject's perceptual decisions can be implemented just by knowing the correlations of PM neurons representing decision variables.
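
    Why correlated variability matters for decoding can be seen with a two-neuron toy: weighting the rate difference by the inverse covariance (Fisher weights) outperforms a decoder that ignores the correlations. All numbers below are invented for illustration and have no connection to the recorded S2/PM data.

      import numpy as np

      rng = np.random.default_rng(2)
      # two-neuron firing rates on "no"/"yes" decision trials, correlated noise
      mu_no, mu_yes = np.array([10.0, 10.0]), np.array([12.0, 11.0])
      cov = np.array([[4.0, 3.2], [3.2, 4.0]])   # strongly correlated variability
      L = np.linalg.cholesky(cov)
      rates = lambda mu, n: mu + rng.standard_normal((n, 2)) @ L.T
      X = np.vstack([rates(mu_no, 500), rates(mu_yes, 500)])
      y = np.repeat([0, 1], 500)

      # decoder that uses the correlations (Fisher weights) vs one that ignores them
      w_corr = np.linalg.solve(cov, mu_yes - mu_no)
      w_diag = (mu_yes - mu_no) / np.diag(cov)
      for name, w in [("with correlations", w_corr), ("ignoring them", w_diag)]:
          s = X @ w
          thr = 0.5 * (s[y == 0].mean() + s[y == 1].mean())
          print(name, "accuracy:", ((s > thr) == y).mean())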

  13. A computer code (SKINTEMP) for predicting transient missile and aircraft heat transfer characteristics

    NASA Astrophysics Data System (ADS)

    Cummings, Mary L.

    1994-09-01

    A FORTRAN computer code (SKINTEMP) has been developed to calculate transient missile/aircraft aerodynamic heating parameters utilizing basic flight parameters such as altitude, Mach number, and angle of attack. The insulated skin temperature of a vehicle surface on either the fuselage (axisymmetric body) or wing (two-dimensional body) is computed from a basic heat balance relationship throughout the entire spectrum (subsonic, transonic, supersonic, hypersonic) of flight. This calculation method employs a simple finite difference procedure which considers radiation, forced convection, and non-reactive chemistry. Surface pressure estimates are based on a modified Newtonian flow model. Eckert's reference temperature method is used as the forced convection heat transfer model. SKINTEMP predictions are compared with a limited number of test cases. SKINTEMP was developed as a tool to enhance the conceptual design process of high speed missiles and aircraft. Recommendations are made for possible future development of SKINTEMP to further support the design process.
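
    The basic heat balance SKINTEMP marches in time can be sketched as an explicit finite-difference update in which convection drives the insulated skin element toward the recovery temperature while the surface reradiates. The coefficients below are illustrative assumptions; the real code recomputes the convection coefficient and recovery temperature at every step from altitude, Mach number, and Eckert's reference temperature method.

      import numpy as np

      SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

      def skin_temp_history(T0, h, T_rec, eps, rho_c_delta, dt, steps):
          # Explicit update of rho*c*delta * dT/dt = h*(T_rec - T) - eps*sigma*T^4
          # with h and T_rec held constant (the real code updates them per step).
          T = np.empty(steps + 1); T[0] = T0
          for i in range(steps):
              q_net = h * (T_rec - T[i]) - eps * SIGMA * T[i] ** 4
              T[i + 1] = T[i] + dt * q_net / rho_c_delta
          return T

      # illustrative dash condition: recovery temperature ~600 K, 2 mm aluminium
      T = skin_temp_history(T0=300.0, h=150.0, T_rec=600.0, eps=0.8,
                            rho_c_delta=2700.0 * 900.0 * 0.002, dt=0.05, steps=2400)
      print(T[0], T[-1])   # skin heats toward radiative-convective equilibrium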

  14. Adaptive immunity increases the pace and predictability of evolutionary change in commensal gut bacteria

    PubMed Central

    Barroso-Batista, João; Demengeot, Jocelyne; Gordo, Isabel

    2015-01-01

    Co-evolution between the mammalian immune system and the gut microbiota is believed to have shaped the microbiota's astonishing diversity. Here we test the corollary hypothesis that the adaptive immune system, directly or indirectly, influences the evolution of commensal species. We compare the evolution of Escherichia coli upon colonization of the gut of wild-type and Rag2−/− mice, which lack lymphocytes. We show that bacterial adaptation is slower in immune-compromised animals, a phenomenon explained by differences in the action of natural selection within each host. Emerging mutations exhibit strong beneficial effects in healthy hosts but substantial antagonistic pleiotropy in immune-deficient mice. This feature is due to changes in the composition of the gut microbiota, which differs according to the immune status of the host. Our results indicate that the adaptive immune system influences the tempo and predictability of E. coli adaptation to the mouse gut. PMID:26615893

  15. A 3D-CFD code for accurate prediction of fluid flows and fluid forces in seals

    NASA Technical Reports Server (NTRS)

    Athavale, M. M.; Przekwas, A. J.; Hendricks, R. C.

    1994-01-01

    Current and future turbomachinery requires advanced seal configurations to control leakage, inhibit mixing of incompatible fluids and to control the rotordynamic response. In recognition of a deficiency in the existing predictive methodology for seals, a seven-year effort was established in 1990 by NASA's Office of Aeronautics, Exploration and Technology, under the Earth-to-Orbit Propulsion program, to develop validated Computational Fluid Dynamics (CFD) concepts, codes and analyses for seals. The effort will provide NASA and the U.S. aerospace industry with advanced CFD scientific codes and industrial codes for analyzing and designing turbomachinery seals. An advanced 3D CFD cylindrical seal code has been developed, incorporating state-of-the-art computational methodology for flow analysis in straight, tapered and stepped seals. Relevant computational features of the code include: stationary/rotating coordinates, cylindrical and general Body Fitted Coordinates (BFC) systems, high order differencing schemes, colocated variable arrangement, advanced turbulence models, incompressible/compressible flows, and moving grids. This paper presents the current status of code development, code demonstration for predicting rotordynamic coefficients, numerical parametric study of entrance loss coefficients for generic annular seals, and plans for code extensions to labyrinth, damping, and other seal configurations.

  16. Contribution to the Prediction of the Fold Code: Application to Immunoglobulin and Flavodoxin Cases

    PubMed Central

    Banach, Mateusz; Prudhomme, Nicolas; Carpentier, Mathilde; Duprat, Elodie; Papandreou, Nikolaos; Kalinowska, Barbara; Chomilier, Jacques; Roterman, Irena

    2015-01-01

    Background: Formation of the folding nucleus of globular proteins starts with the mutual interaction of a group of hydrophobic amino acids whose close contacts allow subsequent formation and stability of the 3D structure. These early steps can be predicted by simulation of the folding process through a Monte Carlo (MC) coarse grain model in a discrete space. We previously defined MIRs (Most Interacting Residues) as the set of residues presenting a large number of non-covalent neighbour interactions during such simulation. MIRs are good candidates to define the minimal number of residues giving rise to a given fold instead of another one, although their proportion is rather high, typically [15-20]% of the sequences. Having in mind experiments with two sequences of very high levels of sequence identity (up to 90%) but different folds, we combined the MIR method, which takes sequence as single input, with the “fuzzy oil drop” (FOD) model that requires a 3D structure, in order to estimate the residues coding for the fold. FOD assumes that a globular protein follows an idealised 3D Gaussian distribution of hydrophobicity density, with the maximum in the centre and minima at the surface of the “drop”. If the actual local density of hydrophobicity around a given amino acid is as high as the ideal one, then this amino acid is assigned to the core of the globular protein, and it is assumed to follow the FOD model. One thus obtains a distribution of the amino acids of a protein according to their agreement with or rejection of the FOD model. Results: We compared and combined the MIR and FOD methods to define the minimal nucleus, or keystone, of two populated folds: immunoglobulin-like (Ig) and flavodoxins (Flav). The combination of these two approaches defines some positions both predicted as a MIR and assigned as accordant with the FOD model. It is shown here that for these two folds, the intersection of the predicted sets of residues significantly differs from random selection.

  17. Modeling light adaptation in circadian clock: prediction of the response that stabilizes entrainment.

    PubMed

    Tsumoto, Kunichika; Kurosawa, Gen; Yoshinaga, Tetsuya; Aihara, Kazuyuki

    2011-01-01

    Periods of biological clocks are close to but often different from the rotation period of the earth. Thus, the clocks of organisms must be adjusted to synchronize with day-night cycles. The primary signal that adjusts the clocks is light. In Neurospora, light transiently up-regulates the expression of specific clock genes. This molecular response to light is called light adaptation. Does light adaptation occur in other organisms? Using published experimental data, we first estimated the time course of the up-regulation rate of gene expression by light. Intriguingly, the estimated up-regulation rate was transient during the light period in mice as well as in Neurospora. Next, we constructed a computational model to consider how light adaptation affects the entrainment of circadian oscillation to 24-h light-dark cycles. We found that cellular oscillations are more likely to be destabilized without light adaptation, especially when light intensity is very high. From the present results, we predict that the instability of circadian oscillations under 24-h light-dark cycles can be experimentally observed if light adaptation is altered. We conclude that the functional consequence of light adaptation is to increase the adjustability to 24-h light-dark cycles and thus to adapt to fluctuating environments in nature.

  18. Respiratory motion prediction by using the adaptive neuro fuzzy inference system (ANFIS)

    NASA Astrophysics Data System (ADS)

    Kakar, Manish; Nyström, Håkan; Rye Aarup, Lasse; Jakobi Nøttrup, Trine; Rune Olsen, Dag

    2005-10-01

    The quality of radiation therapy delivered for treating cancer patients is related to set-up errors and organ motion. Due to the margins needed to ensure adequate target coverage, many breast cancer patients have been shown to develop late side effects such as pneumonitis and cardiac damage. Breathing-adapted radiation therapy offers the potential for precise radiation dose delivery to a moving target and thereby reduces the side effects substantially. However, the basic requirement for breathing-adapted radiation therapy is to track and predict the target as precisely as possible. Recent studies have addressed the problem of organ motion prediction by using different methods including artificial neural network and model based approaches. In this study, we propose to use a hybrid intelligent system called ANFIS (the adaptive neuro fuzzy inference system) for predicting respiratory motion in breast cancer patients. In ANFIS, we combine both the learning capabilities of a neural network and reasoning capabilities of fuzzy logic in order to give enhanced prediction capabilities, as compared to using a single methodology alone. After training ANFIS and checking for prediction accuracy on 11 breast cancer patients, it was found that the RMSE (root-mean-square error) can be reduced to sub-millimetre accuracy over a period of 20 s provided the patient is assisted with coaching. The average RMSE for the un-coached patients was 35% of the respiratory amplitude and for the coached patients 6% of the respiratory amplitude.
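
    As a concrete reading of the reported metric, the sketch below computes RMSE as a percentage of peak-to-peak respiratory amplitude on a synthetic breathing trace. The 0.25 Hz trace and the 200 ms prediction lag are invented for illustration and have no connection to the patient data or to ANFIS itself.

```python
import numpy as np

def rmse_percent_of_amplitude(actual, predicted):
    """RMSE between motion traces, as a percentage of peak-to-peak amplitude."""
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    rmse = np.sqrt(np.mean((actual - predicted) ** 2))
    return 100.0 * rmse / np.ptp(actual)   # ptp = peak-to-peak amplitude

# Synthetic example: a 0.25 Hz breathing trace and a prediction lagging 200 ms
t = np.arange(0.0, 20.0, 0.04)                        # 20 s sampled at 25 Hz
actual = 10.0 * np.sin(2 * np.pi * 0.25 * t)          # displacement, mm
predicted = 10.0 * np.sin(2 * np.pi * 0.25 * (t - 0.2))
print(f"RMSE = {rmse_percent_of_amplitude(actual, predicted):.1f}% of amplitude")
```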

  19. Genome-Wide Association Analysis of Adaptation Using Environmentally Predicted Traits

    PubMed Central

    van Zanten, Martijn

    2015-01-01

    Current methods for studying the genetic basis of adaptation evaluate genetic associations with ecologically relevant traits or single environmental variables, under the implicit assumption that natural selection imposes correlations between phenotypes, environments and genotypes. In practice, observed trait and environmental data are manifestations of unknown selective forces and are only indirectly associated with adaptive genetic variation. In theory, improved estimation of these forces could enable more powerful detection of loci under selection. Here we present an approach in which we approximate adaptive variation by modeling phenotypes as a function of the environment and using the predicted trait in multivariate and univariate genome-wide association analysis (GWAS). Based on computer simulations and published flowering time data from the model plant Arabidopsis thaliana, we find that environmentally predicted traits lead to higher recovery of functional loci in multivariate GWAS and are more strongly correlated to allele frequencies at adaptive loci than individual environmental variables. Our results provide an example of the use of environmental data to obtain independent and meaningful information on adaptive genetic variation. PMID:26496492

  20. Genome-Wide Association Analysis of Adaptation Using Environmentally Predicted Traits.

    PubMed

    van Heerwaarden, Joost; van Zanten, Martijn; Kruijer, Willem

    2015-10-01

    Current methods for studying the genetic basis of adaptation evaluate genetic associations with ecologically relevant traits or single environmental variables, under the implicit assumption that natural selection imposes correlations between phenotypes, environments and genotypes. In practice, observed trait and environmental data are manifestations of unknown selective forces and are only indirectly associated with adaptive genetic variation. In theory, improved estimation of these forces could enable more powerful detection of loci under selection. Here we present an approach in which we approximate adaptive variation by modeling phenotypes as a function of the environment and using the predicted trait in multivariate and univariate genome-wide association analysis (GWAS). Based on computer simulations and published flowering time data from the model plant Arabidopsis thaliana, we find that environmentally predicted traits lead to higher recovery of functional loci in multivariate GWAS and are more strongly correlated to allele frequencies at adaptive loci than individual environmental variables. Our results provide an example of the use of environmental data to obtain independent and meaningful information on adaptive genetic variation.
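
    A minimal sketch of the two-step idea on simulated data: regress the phenotype on environmental variables, then scan SNPs against the environmentally predicted trait. The simulated local adaptation at SNP 0 and all effect sizes are assumptions for illustration; the paper's multivariate GWAS machinery and structure correction are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 1000                       # individuals, SNPs
env = rng.normal(size=(n, 3))          # e.g. temperature, precipitation, latitude
snps = rng.integers(0, 3, size=(n, p)).astype(float)   # 0/1/2 allele counts
# toy local adaptation: allele frequency at SNP 0 tracks the first environment axis
snps[:, 0] = rng.binomial(2, 1.0 / (1.0 + np.exp(-env[:, 0])))
# phenotype shaped by the same selective gradient, plus noise
phenotype = 2.0 * env[:, 0] - 0.5 * env[:, 1] + rng.normal(size=n)

# Step 1: approximate adaptive variation by regressing phenotype on environment
X = np.column_stack([np.ones(n), env])
beta, *_ = np.linalg.lstsq(X, phenotype, rcond=None)
predicted_trait = X @ beta

# Step 2: univariate association scan of each SNP against the predicted trait
z = (snps - snps.mean(0)) / snps.std(0)
y = (predicted_trait - predicted_trait.mean()) / predicted_trait.std()
scores = np.abs(z.T @ y) / n           # |correlation| per SNP
print("top-ranked SNP:", int(np.argmax(scores)))   # expected: 0
```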

  1. Near-fault earthquake ground motion prediction by a high-performance spectral element numerical code

    SciTech Connect

    Paolucci, Roberto; Stupazzini, Marco

    2008-07-08

    Near-fault effects have been widely recognised to produce specific features of earthquake ground motion that cannot be reliably predicted by 1D seismic wave propagation modelling, used as a standard in engineering applications. These features may have a relevant impact on the structural response, especially in the nonlinear range, and they are hard to predict and to cast in a design format, owing to the scarcity of significant earthquake records and of reliable numerical simulations. In this contribution a pilot study is presented for the evaluation of seismic ground motion in the near-fault region, based on a high-performance numerical code for 3D seismic wave propagation analyses, including the seismic fault, the wave propagation path and the near-surface geological or topographical irregularity. For this purpose, the software package GeoELSE is adopted, based on the spectral element method. The set-up of the numerical benchmark of 3D ground motion simulation in the valley of Grenoble (French Alps) is chosen to study the effect of the complex interaction between basin geometry and radiation mechanism on the variability of earthquake ground motion.

  2. Performance of a municipal solid waste (MSW) incinerator predicted with a computational fluid dynamics (CFD) code

    SciTech Connect

    Anglesio, P.; Negreanu, G.P.

    1998-07-01

    The purpose of this paper is to investigate, by means of numerical simulation, the performance of the MSW incinerator of Vercelli (Italy). FLUENT, a finite-volume commercial code for fluid dynamics, has been used to predict the 3-D reacting flows (gaseous phase) within the incinerator geometry, in order to verify whether the three conditions set by Italian law (P.D. 915/82) are respected: (a) flue gas temperature at the inlet of the secondary combustion chamber must exceed 950 C; (b) oxygen concentration in the same section must exceed 6%; (c) residence time of the flue gas in the secondary combustion chamber must exceed 2 seconds. The model of the incinerator has been created using the software's pre-processing facilities (wall, input, outlet and live cells), together with the set-up of boundary conditions. The combustion constants (stoichiometry, heat of combustion, air excess) are also imposed. The solving procedure transforms the partial differential equations into algebraic equations at each live cell, computing the velocity field, temperatures, gas concentrations, etc. These predicted values were compared with the design properties, and the conclusion was that conditions (a), (b) and (c) are respected in normal operation. The powerful graphic interface helps the user to visualize the magnitude of the computed parameters. These results may be successfully used for design and operation improvements of MSW incinerators, which will substantially increase efficiency, reduce pollutant emissions and optimize overall plant performance.

  3. The WISGSK: A computer code for the prediction of a multistage axial compressor performance with water ingestion

    NASA Technical Reports Server (NTRS)

    Tsuchiya, T.; Murthy, S. N. B.

    1982-01-01

    A computer code is presented for the prediction of off-design axial flow compressor performance with water ingestion. Four processes were considered to account for the aero-thermo-mechanical interactions during operation with air-water droplet mixture flow: (1) blade performance change, (2) centrifuging of water droplets, (3) heat and mass transfer between the gaseous and the liquid phases and (4) droplet size redistribution due to break-up. Stage and compressor performance are obtained by a stage stacking procedure using representative velocity diagrams at rotor inlet and outlet mean radii. The Code has options for performance estimation with (1) mixtures of gases and (2) gas-water droplet mixtures, and therefore can take into account the humidity present in ambient conditions. A test case illustrates the method of using the Code. The Code follows closely the methodology and architecture of the NASA-STGSTK Code for the estimation of axial-flow compressor performance with air flow.

  4. Crater lake habitat predicts morphological diversity in adaptive radiations of cichlid fishes.

    PubMed

    Recknagel, Hans; Elmer, Kathryn R; Meyer, Axel

    2014-07-01

    Adaptive radiations provide an excellent opportunity for studying the correlates and causes for the origin of biodiversity. In these radiations, species diversity may be influenced by either the ecological and physical environment, intrinsic lineage effects, or both. Disentangling the relative contributions of these factors in generating biodiversity remains a major challenge in understanding why a lineage does or does not radiate. Here, we examined morphological variation in body shape for replicate flocks of Nicaraguan Midas cichlid fishes and tested its association with biological and physical characteristics of their crater lakes. We found that variability of body elongation, an adaptive trait in freshwater fishes, is mainly predicted by average lake depth (N = 6, P < 0.001, R^2 = 0.96). Other factors considered, including lake age, surface area, littoral zone area, number of co-occurring fish species, and genetic diversity of the Midas flock, did not significantly predict morphological variability. We also showed that lakes with a larger littoral zone have on average higher bodied Midas cichlids, indicating that Midas cichlid flocks are locally adapted to their crater lake habitats. In conclusion, we found that a lake's habitat predicts the magnitude and the diversity of body elongation in repeated cichlid adaptive radiations.

  5. Efficient, Adaptive Clinical Validation of Predictive Biomarkers in Cancer Therapeutic Development.

    PubMed

    Beckman, Robert A; Chen, Cong

    2015-01-01

    Predictive biomarkers, defined as biomarkers that can be used to identify patient populations who will optimally benefit from therapy, are an important part of the future of oncology. They have the potential to reduce the size and cost of clinical development programs for oncology therapy, while increasing their probability of success and the ultimate value of cancer medicines. But predictive biomarkers do not always work, and under these circumstances they add cost, complexity, and time to drug development. This chapter describes Phase 2 and 3 development methods which efficiently and adaptively evaluate the ability of the biomarker to predict clinical outcomes. In the end, the biomarker is emphasized to the extent that it is actually predictive. This allows clinical cancer drug developers to manage uncertainty in the validity of biomarkers, leading to maximal value for predictive biomarkers and their associated oncology therapies.

  6. Predicting the evolutionary dynamics of seasonal adaptation to novel climates in Arabidopsis thaliana.

    PubMed

    Fournier-Level, Alexandre; Perry, Emily O; Wang, Jonathan A; Braun, Peter T; Migneault, Andrew; Cooper, Martha D; Metcalf, C Jessica E; Schmitt, Johanna

    2016-05-17

    Predicting whether and how populations will adapt to rapid climate change is a critical goal for evolutionary biology. To examine the genetic basis of fitness and predict adaptive evolution in novel climates with seasonal variation, we grew a diverse panel of the annual plant Arabidopsis thaliana (multiparent advanced generation intercross lines) in controlled conditions simulating four climates: a present-day reference climate, an increased-temperature climate, a winter-warming only climate, and a poleward-migration climate with increased photoperiod amplitude. In each climate, four successive seasonal cohorts experienced dynamic daily temperature and photoperiod variation over a year. We measured 12 traits and developed a genomic prediction model for fitness evolution in each seasonal environment. This model was used to simulate evolutionary trajectories of the base population over 50 y in each climate, as well as 100-y scenarios of gradual climate change following adaptation to a reference climate. Patterns of plastic and evolutionary fitness response varied across seasons and climates. The increased-temperature climate promoted genetic divergence of subpopulations across seasons, whereas in the winter-warming and poleward-migration climates, seasonal genetic differentiation was reduced. In silico "resurrection experiments" showed limited evolutionary rescue compared with the plastic response of fitness to seasonal climate change. The genetic basis of adaptation and, consequently, the dynamics of evolutionary change differed qualitatively among scenarios. Populations with fewer founding genotypes and populations with genetic diversity reduced by prior selection adapted less well to novel conditions, demonstrating that adaptation to rapid climate change requires the maintenance of sufficient standing variation. PMID:27140640

  7. Soliciting scientific information and beliefs in predictive modeling and adaptive management

    NASA Astrophysics Data System (ADS)

    Glynn, P. D.; Voinov, A. A.; Shapiro, C. D.

    2015-12-01

    Post-normal science requires public engagement and adaptive corrections in addressing issues with high complexity and uncertainty. An adaptive management framework is presented for the improved management of natural resources and environments through a public participation process. The framework solicits the gathering and transformation and/or modeling of scientific information, but also explicitly solicits the expression of participant beliefs. Beliefs and information are compared, explicitly discussed for alignments or misalignments, and ultimately melded back together as a "knowledge" basis for making decisions. An effort is made to recognize the human or participant biases that may affect the information base and the potential decisions. In a separate step, an attempt is made to recognize and predict the potential "winners" and "losers" (perceived or real) of any decision or action. These "winners" and "losers" include present human communities with different spatial, demographic or socio-economic characteristics, as well as more dispersed or more diffusely characterized regional or global communities. "Winners" and "losers" may also include future human communities as well as communities of other biotic species. As in any adaptive management framework, assessment of predictions, iterative follow-through and adaptation of policies or actions is essential, and commonly very difficult or impossible to achieve. Recognizing beforehand the limits of adaptive management is essential. More generally, knowledge of the behavioral and economic sciences and of ethics and sociology will be key to a successful implementation of this adaptive management framework. Knowledge of biogeophysical processes will also be essential, but by definition of the issues being addressed, will always be incomplete and highly uncertain. The human dimensions of the issues addressed and the participatory processes used carry their own complexities and uncertainties.

  8. Benchmarking and qualification of the NUFREQ-NPW code for best estimate prediction of multi-channel core stability margins

    SciTech Connect

    Taleyarkhan, R.; Lahey, R.T. Jr.; McFarlane, A.F.; Podowski, M.Z.

    1988-01-01

    The NUFREQ-NP code was modified and set up at Westinghouse, USA for mixed fuel type multi-channel core-wide stability analysis. The resulting code, NUFREQ-NPW, allows for variable axial power profiles between channel groups and can handle mixed fuel types. Various models incorporated into NUFREQ-NPW were systematically compared against the Westinghouse channel stability analysis code MAZDA-NF, for which the mathematical model was developed in an entirely different manner. Excellent agreement was obtained, which verified the thermal-hydraulic modeling and coding aspects. Detailed comparisons were also performed against nuclear-coupled reactor core stability data. All thirteen Peach Bottom-2 EOC-2/3 low flow stability tests were simulated. A key aspect of code qualification involved the development of a physically based empirical algorithm to correct for the effect of core inlet flow development on subcooled boiling. Various other modeling assumptions were tested and sensitivity studies performed. Good agreement was obtained between NUFREQ-NPW predictions and data. Moreover, predictions were generally on the conservative side. The results of detailed direct comparisons with experimental data using the NUFREQ-NPW code have demonstrated that BWR core stability margins are conservatively predicted, and all data trends are captured with good accuracy. The methodology is thus suitable for BWR design and licensing purposes. 11 refs., 12 figs., 2 tabs.

  9. An assessment of hydrogeochemical computer codes applied toward modeling and predicting post-mining pit water geochemistry

    SciTech Connect

    Bird, D.A.; Lyons, W.B. (Hydrology Program); Miller, G.C. (Dept. of Environmental Resource Sciences)

    1993-04-01

    Geochemists for the mining industry utilize a variety of computer codes to model and predict post-mining pit water chemogenesis. This study surveys several of the PC-supported hydrogeochemical codes, applies them to specific open pit mine scenarios, and evaluates their suitability for predicting post-mining pit and groundwater hydrogeochemistry. The prediction of pit water geochemistry is important because of the potential adverse effects of mine drainage, which include acidity, trace metal contamination, pit water stratification, and sludge accumulation. The WATEQ codes of the USGS can calculate speciation and saturation states of a pit water or groundwater sample, but are not designed to model forward rock/water reactions. NETPATH can calculate the chemical mass transfer (inverse modeling) that has occurred during rock/water interaction, but again is not designed to model forward pit water chemogenesis. Several mining industry modelers use EPA's MINTEQA2 code, which has been shown to be very flexible with its large database and ability to model adsorption. Reaction path codes, like PHREEQE and EQ3/6, can model reactions on an incremental basis as the pit fills over time, but may also require much user manipulation. New coupled codes like PHREEQM and HYDROGEOCHEM can simulate movement and reaction of groundwater through the aquifer as it approaches and inundates the pit. One aspect of post-mining hydrogeochemical modeling that has received little attention is the effect groundwater will have downgradient after it flows from the pit into the aquifer.

  10. Analytic solution to verify code predictions of two-phase flow in a boiling water reactor core channel

    SciTech Connect

    Chen, K.F.; Olson, C.A.

    1983-09-01

    One reliable method that can be used to verify the solution scheme of a computer code is to compare the code prediction to a simplified problem for which an analytic solution can be derived. An analytic solution for the axial pressure drop as a function of the flow was obtained for the simplified problem of homogeneous equilibrium two-phase flow in a vertical, heated channel with a cosine axial heat flux shape. This analytic solution was then used to verify the predictions of the CONDOR computer code, which is used to evaluate the thermal-hydraulic performance of boiling water reactors. The results show excellent agreement between the analytic solution and CONDOR prediction.
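
    The simplified problem can be evaluated numerically in a few lines. The sketch below is neither the paper's analytic solution nor the CONDOR model; it integrates a homogeneous-equilibrium two-phase pressure gradient (friction, gravity, acceleration) along a heated channel with a chopped-cosine axial heat flux. The geometry, property values, and constant friction factor are illustrative BWR-like assumptions.

```python
import numpy as np

# Illustrative channel and saturated-water properties (roughly 7 MPa)
L, D = 3.66, 0.012                     # heated length, hydraulic diameter (m)
A, Ph = np.pi * D**2 / 4.0, np.pi * D  # flow area, heated perimeter
G, q0, f, g = 1500.0, 8.0e5, 0.02, 9.81  # mass flux, peak heat flux, friction, gravity
h_in, h_f, h_fg = 1.10e6, 1.26e6, 1.50e6  # inlet / sat-liquid / latent enthalpy (J/kg)
rho_f, rho_g = 740.0, 37.0             # saturated phase densities (kg/m^3)

z = np.linspace(0.0, L, 2001)
q = q0 * np.cos(np.pi * (z / L - 0.5))     # chopped-cosine axial heat flux
dz = np.diff(z)
# energy balance: enthalpy rise from the integrated wall heat flux
h = h_in + np.concatenate([[0.0], np.cumsum(0.5 * (q[1:] + q[:-1]) * dz)]) * Ph / (G * A)
x = np.clip((h - h_f) / h_fg, 0.0, 1.0)    # equilibrium quality
v = (1.0 - x) / rho_f + x / rho_g          # homogeneous specific volume
# pressure gradient: friction + gravity + acceleration (homogeneous model)
dpdz = f * G**2 * v / (2.0 * D) + g / v + G**2 * np.gradient(v, z)
dp = np.sum(0.5 * (dpdz[1:] + dpdz[:-1]) * dz)
print(f"exit quality = {x[-1]:.2f}, total pressure drop = {dp / 1e3:.1f} kPa")
```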

  11. Integration of Expressed Sequence Tag Data Flanking Predicted RNA Secondary Structures Facilitates Novel Non-Coding RNA Discovery

    PubMed Central

    Krzyzanowski, Paul M.; Price, Feodor D.; Muro, Enrique M.; Rudnicki, Michael A.; Andrade-Navarro, Miguel A.

    2011-01-01

    Many computational methods have been used to predict novel non-coding RNAs (ncRNAs), but none, to our knowledge, have explicitly investigated the impact of integrating existing cDNA-based Expressed Sequence Tag (EST) data that flank structural RNA predictions. To determine whether flanking EST data can assist in microRNA (miRNA) prediction, we identified genomic sites encoding putative miRNAs by combining functional RNA predictions with flanking EST data in a model consistent with miRNAs undergoing cleavage during maturation. In both human and mouse genomes, we observed that the inclusion of flanking ESTs adjacent to and not overlapping predicted miRNAs significantly improved the performance of various methods of miRNA prediction, including direct high-throughput sequencing of small RNA libraries. We analyzed the expression of hundreds of miRNAs predicted to be expressed during myogenic differentiation using a customized microarray and identified several known and predicted myogenic miRNA hairpins. Our results indicate that integrating ESTs flanking structural RNA predictions improves the quality of cleaved miRNA predictions and suggest that this strategy can be used to predict other non-coding RNAs undergoing cleavage during maturation. PMID:21698286
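
    The flanking filter itself reduces to interval logic. The sketch below keeps a predicted hairpin only if some EST lies adjacent to it without overlapping it, within a window; the coordinates and the 200-bp window are illustrative assumptions, not the paper's parameters.

```python
# Keep predictions with EST evidence adjacent to, but not overlapping, the
# predicted hairpin (consistent with cleavage during maturation).
def has_flanking_est(prediction, ests, window=200):
    p_start, p_end = prediction
    for e_start, e_end in ests:
        overlaps = e_start < p_end and p_start < e_end
        upstream = 0 < p_start - e_end <= window      # EST ends just before
        downstream = 0 < e_start - p_end <= window    # EST starts just after
        if not overlaps and (upstream or downstream):
            return True
    return False

predicted_hairpin = (10_000, 10_085)
ests = [(9_700, 9_950), (10_120, 10_600)]   # both flank, neither overlaps
print(has_flanking_est(predicted_hairpin, ests))   # True
```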

  12. Rhythmic complexity and predictive coding: a novel approach to modeling rhythm and meter perception in music

    PubMed Central

    Vuust, Peter; Witek, Maria A. G.

    2014-01-01

    Musical rhythm, consisting of apparently abstract intervals of accented temporal events, has a remarkable capacity to move our minds and bodies. How does the cognitive system enable our experiences of rhythmically complex music? In this paper, we describe some common forms of rhythmic complexity in music and propose the theory of predictive coding (PC) as a framework for understanding how rhythm and rhythmic complexity are processed in the brain. We also consider why we feel so compelled by rhythmic tension in music. First, we consider theories of rhythm and meter perception, which provide hierarchical and computational approaches to modeling. Second, we present the theory of PC, which posits a hierarchical organization of brain responses reflecting fundamental, survival-related mechanisms associated with predicting future events. According to this theory, perception and learning is manifested through the brain’s Bayesian minimization of the error between the input to the brain and the brain’s prior expectations. Third, we develop a PC model of musical rhythm, in which rhythm perception is conceptualized as an interaction between what is heard (“rhythm”) and the brain’s anticipatory structuring of music (“meter”). Finally, we review empirical studies of the neural and behavioral effects of syncopation, polyrhythm and groove, and propose how these studies can be seen as special cases of the PC theory. We argue that musical rhythm exploits the brain’s general principles of prediction and propose that pleasure and desire for sensorimotor synchronization from musical rhythm may be a result of such mechanisms. PMID:25324813

  13. Predictive coding accounts of shared representations in parieto-insular networks.

    PubMed

    Ishida, Hiroaki; Suzuki, Keisuke; Grandi, Laura Clara

    2015-04-01

    The discovery of mirror neurons in the ventral premotor cortex (area F5) and inferior parietal cortex (area PFG) in the macaque monkey brain has provided physiological evidence for direct matching of the intrinsic motor representations of the self and the visual image of the actions of others. The existence of mirror neurons implies that the brain has mechanisms reflecting shared self and other action representations. This may further imply that the neural basis of self-body representations also incorporates components that are shared with other-body representations. It is likely that such a mechanism is also involved in predicting others' touch sensations and emotions. However, the neural basis of shared body representations has remained unclear. Here, we propose a neural basis of body representation of the self and of others in both human and non-human primates. We review a series of behavioral and physiological findings which together paint a picture that the systems underlying such shared representations require integration of conscious exteroception and interoception subserved by a cortical sensory-motor network involving parieto-inner perisylvian circuits (the ventral intraparietal area [VIP]/inferior parietal area [PFG]-secondary somatosensory cortex [SII]/posterior insular cortex [pIC]/anterior insular cortex [aIC]). Based on these findings, we propose a computational mechanism of the shared body representation in the predictive coding (PC) framework. Our mechanism proposes that processes emerging from generative models embedded in these specific neuronal circuits play a pivotal role in distinguishing a self-specific body representation from a shared one. The model successfully accounts for normal and abnormal shared body phenomena such as mirror-touch synesthesia and somatoparaphrenia. In addition, it generates a set of testable experimental predictions. PMID:25447372

  14. Integer-linear-programing optimization in scalable video multicast with adaptive modulation and coding in wireless networks.

    PubMed

    Lee, Dongyul; Lee, Chaewoo

    2014-01-01

    The advancement of wideband wireless networks supports real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC in the wireless multicast service is how to assign MCSs and the time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm.

  15. Integer-Linear-Programing Optimization in Scalable Video Multicast with Adaptive Modulation and Coding in Wireless Networks

    PubMed Central

    Lee, Chaewoo

    2014-01-01

    The advancement of wideband wireless networks supports real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC in the wireless multicast service is how to assign MCSs and the time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm. PMID:25276862
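
    To make the optimization concrete, the toy below assigns an MCS to each SVC layer to maximize the number of layer-deliveries across users, subject to an airtime budget and the usual constraint that the base layer uses the most robust MCS. The papers formulate this as an ILP; for a toy instance of this size, exhaustive search finds the same optimum. All rates, layer sizes, and user capabilities are invented.

```python
import itertools

# Illustrative parameters (not from the papers): 3 SVC layers, 4 MCS levels
layer_bits = [300, 200, 200]        # bits per frame for base/enh1/enh2
mcs_rate   = [1.0, 2.0, 3.0, 4.5]   # bits per time unit for MCS 0..3
user_cap   = [0, 0, 1, 2, 3, 3]     # highest MCS index each user can decode
airtime_budget = 500.0

def users_served(assign):
    """Utility: each user counts one unit per consecutively decodable layer."""
    total = 0
    for cap in user_cap:
        for mcs in assign:
            if mcs > cap:
                break               # this and all higher layers are lost
            total += 1
    return total

best = None
for assign in itertools.product(range(len(mcs_rate)), repeat=len(layer_bits)):
    if list(assign) != sorted(assign):
        continue                    # base layer must use the most robust MCS
    airtime = sum(b / mcs_rate[m] for b, m in zip(layer_bits, assign))
    if airtime > airtime_budget:
        continue                    # time-resource constraint
    score = users_served(assign)
    if best is None or score > best[0]:
        best = (score, assign, airtime)

print("best MCS per layer:", best[1], "utility:", best[0], f"airtime: {best[2]:.0f}")
```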

  16. Remembering forward: Neural correlates of memory and prediction in human motor adaptation

    PubMed Central

    Scheidt, Robert A; Zimbelman, Janice L; Salowitz, Nicole M G; Suminski, Aaron J; Mosier, Kristine M; Houk, James; Simo, Lucia

    2011-01-01

    We used functional MR imaging (FMRI), a robotic manipulandum and systems identification techniques to examine neural correlates of predictive compensation for spring-like loads during goal-directed wrist movements in neurologically-intact humans. Although load changed unpredictably from one trial to the next, subjects nevertheless used sensorimotor memories from recent movements to predict and compensate upcoming loads. Prediction enabled subjects to adapt performance so that the task was accomplished with minimum effort. Population analyses of functional images revealed a distributed, bilateral network of cortical and subcortical activity supporting predictive load compensation during visual target capture. Cortical regions - including prefrontal, parietal and hippocampal cortices - exhibited trial-by-trial fluctuations in BOLD signal consistent with the storage and recall of sensorimotor memories or “states” important for spatial working memory. Bilateral activations in associative regions of the striatum demonstrated temporal correlation with the magnitude of kinematic performance error (a signal that could drive reward-optimizing reinforcement learning and the prospective scaling of previously learned motor programs). BOLD signal correlations with load prediction were observed in the cerebellar cortex and red nuclei (consistent with the idea that these structures generate adaptive fusimotor signals facilitating cancellation of expected proprioceptive feedback, as required for conditional feedback adjustments to ongoing motor commands and feedback error learning). Analysis of single subject images revealed that predictive activity was at least as likely to be observed in more than one of these neural systems as in just one. We conclude therefore that motor adaptation is mediated by predictive compensations supported by multiple, distributed, cortical and subcortical structures. PMID:21840405

  17. ER-Worker: A computer code to predict remediation worker exposure and safety hazards

    SciTech Connect

    Blaylock, B.P.; Campbell, A.C.; Hutchison, J.F.; Simek, M.A.P.; Sutherland, J.F.; Legg, J.L.

    1994-12-31

    The U.S. Department of Energy (DOE) has generated and disposed of large quantities of waste as a result of 50 years of nuclear weapons production. This waste has been disposed of in waste sites such as burial grounds, waste pits, holding ponds, and landfills. Many of these waste sites have begun to release contamination offsite and potentially pose risks to humans living or working in the vicinity of these sites. By 2019, DOE must meet its goals to achieve timely compliance with all applicable environmental requirements, clean up the 1989 inventory of hazardous and radioactive wastes at inactive sites and facilities, and safely and efficiently treat, store, and dispose of the waste generated by remediation and operating facilities. Remediation of DOE's 13,000 facilities, and management of the current and future waste streams, will require the effort of thousands of workers. Workers, as defined here, are persons who directly participate in the cleanup or remediation of DOE sites. Remediation activities include the use of remediation technologies such as bioremediation, surface water controls, and contaminated soil excavation. This document describes a worker health risk evaluation methodology and computer code designed to predict risks associated with Environmental Restoration (ER) activities that are yet to be undertaken. The computer code, designated ER-WORKER, can be used to estimate worker risks across the DOE complex on a site-specific, installation-wide, or programmatic level. This approach generally follows the guidance suggested in the Risk Assessment Guidance for Superfund (RAGS) (EPA 1989a). Key principles from other important Environmental Protection Agency (EPA) and DOE guidance documents are incorporated into the methodology.

  18. Probability of coding of a DNA sequence: an algorithm to predict translated reading frames from their thermodynamic characteristics.

    PubMed Central

    Tramontano, A; Macchiato, M F

    1986-01-01

    An algorithm to determine the probability that a reading frame codes for a protein is presented. It is based on the results of our previous studies on the thermodynamic characteristics of a translated reading frame. We also develop a prediction procedure to distinguish between coding and non-coding reading frames. The procedure is based on the characteristics of the putative product of the DNA sequence and not on periodicity characteristics of the sequence, so the prediction is not biased by the presence of overlapping translated reading frames or by the presence of translated reading frames on the complementary DNA strand. PMID:3753761

  19. Adaptive cluster expansion approach for predicting the structure evolution of graphene oxide

    SciTech Connect

    Li, Xi-Bo; Guo, Pan; Wang, D.; Liu, Li-Min; Zhang, Yongsheng

    2014-12-14

    An adaptive cluster expansion (CE) method is used to explore surface adsorption and growth processes. Unlike the traditional CE method, suitable effective cluster interaction (ECI) parameters are first determined, and the selected fixed number of ECIs is then continually optimized to predict the stable configurations as the adatom coverage gradually increases. Compared with the traditional CE method, the efficiency of the adaptive CE method is greatly enhanced. As an application, the adsorption and growth of oxygen atoms on one side of pristine graphene was carefully investigated using this method in combination with first-principles calculations. The calculated results successfully uncover the structural evolution of graphene oxide for different numbers of oxygen adatoms on graphene, revealing the aggregation behavior of the stable configurations as the oxygen coverage increases. As a targeted method, adaptive CE can also be applied to understand the evolution of other surface adsorption and growth processes.
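
    The fitting step behind any CE variant can be sketched compactly: configuration energies are modeled as a linear combination of cluster correlation functions with ECI coefficients, fitted by least squares, and a fixed number of the strongest ECIs is retained. The data, cluster count, and selection rule below are illustrative stand-ins, not the authors' adaptive scheme.

```python
import numpy as np

# Toy cluster-expansion fit: E(config) ~ sum_i J_i * phi_i(config), where
# phi_i are cluster correlation functions and J_i the ECIs.
rng = np.random.default_rng(1)
n_configs, n_clusters = 40, 6
phi = rng.uniform(-1.0, 1.0, size=(n_configs, n_clusters))   # correlations
J_true = np.array([-2.1, 0.8, 0.3, -0.15, 0.05, 0.0])        # "true" ECIs, eV
energies = phi @ J_true + rng.normal(scale=0.01, size=n_configs)  # DFT-like data

# Fit ECIs by least squares, then keep a fixed number of the strongest ones
# (re-optimized as new configurations are added, in the spirit of the method)
J_fit, *_ = np.linalg.lstsq(phi, energies, rcond=None)
keep = np.argsort(-np.abs(J_fit))[:4]
J_sel, *_ = np.linalg.lstsq(phi[:, keep], energies, rcond=None)

rmse = np.sqrt(np.mean((phi[:, keep] @ J_sel - energies) ** 2))
print("selected clusters:", keep, f"fit RMSE = {rmse * 1000:.1f} meV")
```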

  20. SWAT system performance predictions. Project report. [SWAT (Short-Wavelength Adaptive Techniques)

    SciTech Connect

    Parenti, R.R.; Sasiela, R.J.

    1993-03-10

    In the next phase of Lincoln Laboratory's SWAT (Short-Wavelength Adaptive Techniques) program, the performance of a 241-actuator adaptive-optics system will be measured using a variety of synthetic-beacon geometries. As an aid in this experimental investigation, a detailed set of theoretical predictions has also been assembled. The computational tools that have been applied in this study include a numerical approach, in which Monte-Carlo ray-trace simulations of accumulated phase error are developed, and an analytical analysis of the expected system behavior. This report describes the basis of these two computational techniques and compares their estimates of overall system performance. Although their regions of applicability tend to be complementary rather than redundant, good agreement is usually obtained when both sets of results can be derived for the same engagement scenario. Keywords: adaptive optics, phase conjugation, atmospheric turbulence, synthetic beacon, laser guide star.

  1. An insula-frontostriatal network mediates flexible cognitive control by adaptively predicting changing control demands

    PubMed Central

    Jiang, Jiefeng; Beck, Jeffrey; Heller, Katherine; Egner, Tobias

    2015-01-01

    The anterior cingulate and lateral prefrontal cortices have been implicated in implementing context-appropriate attentional control, but the learning mechanisms underlying our ability to flexibly adapt the control settings to changing environments remain poorly understood. Here we show that human adjustments to varying control demands are captured by a reinforcement learner with a flexible, volatility-driven learning rate. Using model-based functional magnetic resonance imaging, we demonstrate that volatility of control demand is estimated by the anterior insula, which in turn optimizes the prediction of forthcoming demand in the caudate nucleus. The caudate's prediction of control demand subsequently guides the implementation of proactive and reactive attentional control in dorsal anterior cingulate and dorsolateral prefrontal cortices. These data enhance our understanding of the neuro-computational mechanisms of adaptive behaviour by connecting the classic cingulate-prefrontal cognitive control network to a subcortical control-learning mechanism that infers future demands by flexibly integrating remote and recent past experiences. PMID:26391305
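
    A minimal sketch of the core computational claim, a delta-rule learner whose learning rate is driven by an estimate of volatility: surprise is tracked as a running average of squared prediction error, and the learning rate grows when the environment becomes volatile. The block structure and all constants are invented; this is not the authors' fitted model.

```python
import numpy as np

rng = np.random.default_rng(2)
# demand is stable (20% conflict trials) for 200 trials, then volatile:
# blocks alternating between 80% and 20% conflict every 25 trials
p_conflict = np.r_[np.full(200, 0.2),
                   np.tile(np.r_[np.full(25, 0.8), np.full(25, 0.2)], 4)]
trials = (rng.random(400) < p_conflict).astype(float)   # 1 = high-demand trial

belief, volatility = 0.5, 0.1
for outcome in trials:
    error = outcome - belief
    volatility += 0.05 * (error ** 2 - volatility)        # running surprise
    alpha = float(np.clip(0.05 + volatility, 0.05, 0.9))  # volatile -> faster
    belief += alpha * error                               # predicted demand

print(f"final predicted demand = {belief:.2f}, learning rate = {alpha:.2f}")
```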

  2. LPTA: location predictive and time adaptive data gathering scheme with mobile sink for wireless sensor networks.

    PubMed

    Zhu, Chuan; Wang, Yao; Han, Guangjie; Rodrigues, Joel J P C; Lloret, Jaime

    2014-01-01

    This paper exploits sink mobility to prolong the lifetime of sensor networks while keeping the data transmission delay relatively low. A location predictive and time adaptive data gathering scheme is proposed. In this paper, we introduce a sink location prediction principle based on loose time synchronization and deduce the time-location formulas of the mobile sink. According to local clocks and the time-location formulas of the mobile sink, nodes in the network are able to calculate the current location of the mobile sink accurately and route data packets toward the mobile sink in a timely manner by multihop relay. Considering that the volumes of data packets generated in different areas may differ greatly, an adaptive dwelling time adjustment method is also proposed to balance energy consumption among nodes in the network. Simulation results show that our data gathering scheme enables data routing with less data transmission delay and balances energy consumption among nodes. PMID:25302327
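
    The time-location idea can be sketched with a toy trajectory: if the sink patrols a known path at a known speed, any loosely synchronized node maps its local clock to the sink's current position. The square path and speed below are invented; the paper's actual formulas and dwelling-time adaptation are not reproduced.

```python
# Toy time-location mapping: sink patrols the perimeter of a square field.
def sink_position(t, side=100.0, speed=2.0):
    """Return (x, y) of the sink at local time t (seconds)."""
    d = (t * speed) % (4 * side)        # distance travelled along the square
    edge, off = divmod(d, side)         # which edge, and how far along it
    return [(off, 0.0),                 # bottom edge, moving right
            (side, off),                # right edge, moving up
            (side - off, side),         # top edge, moving left
            (0.0, side - off)][int(edge)]  # left edge, moving down

for t in (0, 30, 60, 130):
    x, y = sink_position(t)
    print(f"t={t:3d}s -> sink at ({x:.0f}, {y:.0f})")
```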

  3. Feasibility of using adaptive logic networks to predict compressor unit failure

    SciTech Connect

    Armstrong, W.W.; Chungying Chu; Thomas, M.M.

    1995-12-31

    In this feasibility study, an adaptive logic network (ALN) was trained to predict failures of turbine-driven compressor units using a large database of measurements. No expert knowledge about compressor systems was involved. The predictions used only the statistical properties of the measurements and the indications of failure types. A fuzzy set was used to model measurements typical of normal operation. It was constrained by a requirement imposed during ALN training, that it should have a shape similar to a Gaussian density, more precisely, that its logarithm should be convex-up. Initial results obtained using this approach to knowledge discovery in the database were encouraging.

  4. A thermal NO(x) prediction model - Scalar computation module for CFD codes with fluid and kinetic effects

    NASA Technical Reports Server (NTRS)

    Mcbeath, Giorgio; Ghorashi, Bahman; Chun, Kue

    1993-01-01

    A thermal NO(x) prediction model is developed to interface with a CFD, k-epsilon based code. A converged solution from the CFD code is the input to the postprocessing model for prediction of thermal NO(x). The model uses a decoupled analysis to estimate the equilibrium level of (NO(x))e which is the constant rate limit. This value is used to estimate the flame (NO(x)) and in turn predict the rate of formation at each node using a two-step Zeldovich mechanism. The rate is fixed on the NO(x) production rate plot by estimating the time to reach equilibrium by a differential analysis based on the reaction: O + N2 = NO + N. The rate is integrated in the nonequilibrium time space based on the residence time at each node in the computational domain. The sum of all nodal predictions yields the total NO(x) level.
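
    The rate-limiting step can be sketched as a post-processing function applied at one CFD node. The Arrhenius expression below uses the rate constant commonly quoted for the first Zeldovich reaction; the node temperature, concentrations, and residence time are illustrative, and the model's full two-step and equilibrium bookkeeping is not reproduced.

```python
import numpy as np

# Rate-limiting Zeldovich step at one CFD node:
#   O + N2 -> NO + N, with d[NO]/dt ~ 2 * k1 * [O] * [N2]
def thermal_no_rate(T, conc_O, conc_N2):
    """Thermal NO formation rate, mol/cm^3/s, two-step Zeldovich closure."""
    k1 = 1.8e14 * np.exp(-38370.0 / T)   # cm^3/(mol*s), first-step constant
    return 2.0 * k1 * conc_O * conc_N2

# Example node: 2100 K flame zone with illustrative concentrations
T = 2100.0                               # K
conc_N2 = 3.0e-6                         # mol/cm^3 (illustrative)
conc_O = 1.0e-9                          # mol/cm^3 (illustrative)
rate = thermal_no_rate(T, conc_O, conc_N2)
residence_time = 5e-3                    # s spent at this node (illustrative)
print(f"d[NO]/dt = {rate:.2e} mol/cm^3/s -> "
      f"NO after dt: {rate * residence_time:.2e} mol/cm^3")
```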

  5. Bulbar Microcircuit Model Predicts Connectivity and Roles of Interneurons in Odor Coding

    PubMed Central

    Gilra, Aditya; Bhalla, Upinder S.

    2015-01-01

    Stimulus encoding by primary sensory brain areas provides a data-rich context for understanding their circuit mechanisms. The vertebrate olfactory bulb is an input area having unusual two-layer dendro-dendritic connections whose roles in odor coding are unclear. To clarify these roles, we built a detailed compartmental model of the rat olfactory bulb that synthesizes a much wider range of experimental observations on bulbar physiology and response dynamics than has hitherto been modeled. We predict that superficial-layer inhibitory interneurons (periglomerular cells) linearize the input-output transformation of the principal neurons (mitral cells), unlike previous models of contrast enhancement. The linearization is required to replicate observed linear summation of mitral odor responses. Further, in our model, action-potentials back-propagate along lateral dendrites of mitral cells and activate deep-layer inhibitory interneurons (granule cells). Using this, we propose sparse, long-range inhibition between mitral cells, mediated by granule cells, to explain how the respiratory phases of odor responses of sister mitral cells can be sometimes decorrelated as observed, despite receiving similar receptor input. We also rule out some alternative mechanisms. In our mechanism, we predict that a few distant mitral cells receiving input from different receptors, inhibit sister mitral cells differentially, by activating disjoint subsets of granule cells. This differential inhibition is strong enough to decorrelate their firing rate phases, and not merely modulate their spike timing. Thus our well-constrained model suggests novel computational roles for the two most numerous classes of interneurons in the bulb. PMID:25942312

  6. Bulbar microcircuit model predicts connectivity and roles of interneurons in odor coding.

    PubMed

    Gilra, Aditya; Bhalla, Upinder S

    2015-01-01

    Stimulus encoding by primary sensory brain areas provides a data-rich context for understanding their circuit mechanisms. The vertebrate olfactory bulb is an input area having unusual two-layer dendro-dendritic connections whose roles in odor coding are unclear. To clarify these roles, we built a detailed compartmental model of the rat olfactory bulb that synthesizes a much wider range of experimental observations on bulbar physiology and response dynamics than has hitherto been modeled. We predict that superficial-layer inhibitory interneurons (periglomerular cells) linearize the input-output transformation of the principal neurons (mitral cells), unlike previous models of contrast enhancement. The linearization is required to replicate observed linear summation of mitral odor responses. Further, in our model, action-potentials back-propagate along lateral dendrites of mitral cells and activate deep-layer inhibitory interneurons (granule cells). Using this, we propose sparse, long-range inhibition between mitral cells, mediated by granule cells, to explain how the respiratory phases of odor responses of sister mitral cells can be sometimes decorrelated as observed, despite receiving similar receptor input. We also rule out some alternative mechanisms. In our mechanism, we predict that a few distant mitral cells receiving input from different receptors, inhibit sister mitral cells differentially, by activating disjoint subsets of granule cells. This differential inhibition is strong enough to decorrelate their firing rate phases, and not merely modulate their spike timing. Thus our well-constrained model suggests novel computational roles for the two most numerous classes of interneurons in the bulb.

  7. fMR-Adaptation Reveals Invariant Coding of Biological Motion on the Human STS

    PubMed Central

    Grossman, Emily D.; Jardine, Nicole L.; Pyles, John A.

    2009-01-01

    Neuroimaging studies of biological motion perception have found a network of coordinated brain areas, the hub of which appears to be the human posterior superior temporal sulcus (STSp). Understanding the functional role of the STSp requires characterizing the response tuning of neuronal populations underlying the BOLD response. Thus far our understanding of these response properties comes from single-unit studies of the monkey anterior STS, which has individual neurons tuned to body actions, with a small population invariant to changes in viewpoint, position and size of the action being viewed. To measure for homologous functional properties on the human STS, we used fMR-adaptation to investigate action, position and size invariance. Observers viewed pairs of point-light animations depicting human actions that were either identical, differed in the action depicted, locally scrambled, or differed in the viewing perspective, the position or the size. While extrastriate hMT+ had neural signals indicative of viewpoint specificity, the human STS adapted for all of these changes, as compared to viewing two different actions. Similar findings were observed in more posterior brain areas also implicated in action recognition. Our findings are evidence for viewpoint invariance in the human STS and related brain areas, with the implication that actions are abstracted into object-centered representations during visual analysis. PMID:20431723

  8. Query-Adaptive Hash Code Ranking for Large-Scale Multi-View Visual Search.

    PubMed

    Liu, Xianglong; Huang, Lei; Deng, Cheng; Lang, Bo; Tao, Dacheng

    2016-10-01

    Hash-based nearest neighbor search has become attractive in many applications. However, the quantization in hashing usually degenerates the discriminative power when using Hamming distance ranking. Besides, for large-scale visual search, existing hashing methods cannot directly support efficient search over data with multiple sources, even though the literature has shown that adaptively incorporating complementary information from diverse sources or views can significantly boost search performance. To address these problems, this paper proposes a novel and generic approach to building multiple hash tables with multiple views and generating fine-grained ranking results at bitwise and tablewise levels. For each hash table, a query-adaptive bitwise weighting is introduced to alleviate the quantization loss by simultaneously exploiting the quality of hash functions and their complement for nearest neighbor search. From the tablewise aspect, multiple hash tables are built for different data views as a joint index, over which a query-specific rank fusion is proposed to rerank all results from the bitwise ranking by diffusing in a graph. Comprehensive experiments on image search over three well-known benchmarks show that the proposed method achieves up to 17.11% and 20.28% performance gains on single and multiple table search over the state-of-the-art methods. PMID:27448359

  9. Query-Adaptive Hash Code Ranking for Large-Scale Multi-View Visual Search.

    PubMed

    Liu, Xianglong; Huang, Lei; Deng, Cheng; Lang, Bo; Tao, Dacheng

    2016-10-01

    Hash-based nearest neighbor search has become attractive in many applications. However, the quantization in hashing usually degenerates the discriminative power when using Hamming distance ranking. Besides, for large-scale visual search, existing hashing methods cannot directly support efficient search over data with multiple sources, even though the literature has shown that adaptively incorporating complementary information from diverse sources or views can significantly boost search performance. To address these problems, this paper proposes a novel and generic approach to building multiple hash tables with multiple views and generating fine-grained ranking results at bitwise and tablewise levels. For each hash table, a query-adaptive bitwise weighting is introduced to alleviate the quantization loss by simultaneously exploiting the quality of hash functions and their complement for nearest neighbor search. From the tablewise aspect, multiple hash tables are built for different data views as a joint index, over which a query-specific rank fusion is proposed to rerank all results from the bitwise ranking by diffusing in a graph. Comprehensive experiments on image search over three well-known benchmarks show that the proposed method achieves up to 17.11% and 20.28% performance gains on single and multiple table search over the state-of-the-art methods.
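
    The bitwise-weighting idea is easy to isolate: plain Hamming distance produces many ties, while a per-query weight vector over bits yields a fine-grained ranking. The sketch below uses random codes and random weights purely for illustration; learning the weights from hash-function quality, and the tablewise rank fusion, are not shown.

```python
import numpy as np

rng = np.random.default_rng(3)
n_db, n_bits = 10_000, 32
db_codes = rng.integers(0, 2, size=(n_db, n_bits), dtype=np.uint8)
query = rng.integers(0, 2, size=n_bits, dtype=np.uint8)
weights = rng.uniform(0.5, 1.5, size=n_bits)   # per-bit reliability (illustrative)

xor = db_codes ^ query                 # 1 where a bit disagrees with the query
plain_hamming = xor.sum(axis=1)        # coarse: many ties at each distance
weighted = xor @ weights               # fine-grained, query-adaptive ranking

top10 = np.argsort(weighted)[:10]
print("ties at best plain distance:",
      int((plain_hamming == plain_hamming.min()).sum()))
print("top-10 by weighted Hamming:", top10)
```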

  10. A novel pseudoderivative-based mutation operator for real-coded adaptive genetic algorithms

    PubMed Central

    Kanwal, Maxinder S; Ramesh, Avinash S; Huang, Lauren A

    2013-01-01

    Recent development of large databases, especially those in genetics and proteomics, is pushing the development of novel computational algorithms that implement rapid and accurate search strategies. One successful approach has been to use artificial intelligence methods, including pattern recognition (e.g. neural networks) and optimization techniques (e.g. genetic algorithms). The focus of this paper is on optimizing the design of genetic algorithms by using an adaptive mutation rate that is derived from comparing the fitness values of successive generations. We propose a novel pseudoderivative-based mutation rate operator designed to allow a genetic algorithm to escape local optima and successfully continue to the global optimum. Once proven successful, this algorithm can be implemented to solve real problems in neurology and bioinformatics. As a first step towards this goal, we tested our algorithm on two 3-dimensional surfaces with multiple local optima, but only one global optimum, as well as on the N-queens problem, an applied problem in which the function that maps the curve is implicit. For all tests, the adaptive mutation rate allowed the genetic algorithm to find the global optimal solution, performing significantly better than other search methods, including genetic algorithms that implement fixed mutation rates. PMID:24627784
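
    A sketch of the general idea, not the paper's exact operator: the mutation rate is updated from the sign of the change in best fitness between successive generations (a pseudo-derivative), rising under stagnation and falling under improvement, inside a toy real-coded GA. The update gain, bounds, and test function are assumptions.

```python
import random

def adapt_mutation_rate(rate, prev_best, best, gain=0.5, lo=0.001, hi=0.5):
    """Pseudo-derivative update: compare best fitness across generations."""
    delta = best - prev_best        # maximizing, so delta <= 0 means stagnation
    rate = rate * (1.0 + gain) if delta <= 0 else rate / (1.0 + gain)
    return min(max(rate, lo), hi)

# Tiny real-coded GA on f(x) = -(x - 3)^2
def fitness(x):
    return -(x - 3.0) ** 2

pop = [random.uniform(-10, 10) for _ in range(30)]
rate, prev_best = 0.1, fitness(max(pop, key=fitness))
for gen in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                       # truncation selection
    pop = [p + (random.gauss(0, 1.0) if random.random() < rate else 0.0)
           for p in random.choices(parents, k=30)]
    best = fitness(max(pop, key=fitness))
    rate = adapt_mutation_rate(rate, prev_best, best)
    prev_best = best

print(f"best x = {max(pop, key=fitness):.3f}, final mutation rate = {rate:.3f}")
```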

  11. How do different aspects of self-regulation predict successful adaptation to school?

    PubMed

    Neuenschwander, Regula; Röthlisberger, Marianne; Cimeli, Patrizia; Roebers, Claudia M

    2012-11-01

    Self-regulation plays an important role in successful adaptation to preschool and school contexts as well as in later academic achievement. The current study relates different aspects of self-regulation such as temperamental effortful control and executive functions (updating, inhibition, and shifting) to different aspects of adaptation to school such as learning-related behavior, school grades, and performance in standardized achievement tests. The relationship between executive functions/effortful control and academic achievement has been established in previous studies; however, little is known about their unique contributions to different aspects of adaptation to school and the interplay of these factors in young school children. Results of a 1-year longitudinal study (N=459) revealed that unique contributions of effortful control (parental report) to school grades were fully mediated by children's learning-related behavior. On the other hand, the unique contributions of executive functions (performance on tasks) to school grades were only partially mediated by children's learning-related behavior. Moreover, executive functions predicted performance in standardized achievement tests exclusively, with comparable predictive power for mathematical and reading/writing skills. Controlling for fluid intelligence did not change the pattern of prediction substantially, and fluid intelligence did not explain any variance above that of the two included aspects of self-regulation. Although effortful control and executive functions were not significantly related to each other, both aspects of self-regulation were shown to be important for fostering early learning and good classroom adjustment in children around transition to school. PMID:22920433

  12. Efficient reversible watermarking based on adaptive prediction-error expansion and pixel selection.

    PubMed

    Li, Xiaolong; Yang, Bin; Zeng, Tieyong

    2011-12-01

    Prediction-error expansion (PEE) is an important technique of reversible watermarking which can embed large payloads into digital images with low distortion. In this paper, the PEE technique is further investigated and an efficient reversible watermarking scheme is proposed by incorporating two new strategies into PEE, namely, adaptive embedding and pixel selection. Unlike conventional PEE, which embeds data uniformly, we propose to adaptively embed 1 or 2 bits into each expandable pixel according to the local complexity. This avoids expanding pixels with large prediction errors and thus reduces the embedding impact by decreasing the maximum modification to pixel values. Meanwhile, adaptive PEE allows a very large payload in a single embedding pass, and it improves the capacity limit of conventional PEE. We also propose to select pixels in smooth areas for data embedding and leave rough pixels unchanged. In this way, compared with conventional PEE, a more sharply distributed prediction-error histogram is obtained and a better visual quality of the watermarked image is observed. With these improvements, our method outperforms conventional PEE. Its superiority over other state-of-the-art methods is also demonstrated experimentally.
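
    A minimal sketch of plain prediction-error expansion for a single pixel (illustrative only; the paper's scheme adds adaptive 1- or 2-bit embedding, pixel selection, and overflow handling on top of this): the prediction error is doubled and the payload bit is placed in its least significant bit, which the decoder inverts exactly.

```python
def pee_embed(pixel, predicted, bit):
    """Embed one bit by expanding the prediction error e -> 2e + bit."""
    e = pixel - predicted
    return predicted + 2 * e + bit

def pee_extract(marked, predicted):
    """Recover the payload bit and the original pixel from a marked pixel."""
    e2 = marked - predicted
    bit = e2 & 1                 # LSB of the expanded error carries the payload
    e = (e2 - bit) // 2          # undo the expansion
    return predicted + e, bit

# Round trip: embed a 1 into a pixel of value 130 predicted as 128.
marked = pee_embed(130, 128, 1)            # error 2 -> expanded to 5, marked = 133
restored, bit = pee_extract(marked, 128)
assert (restored, bit) == (130, 1)
```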

  13. Prediction-based manufacturing center self-adaptive demand side energy optimization in cyber physical systems

    NASA Astrophysics Data System (ADS)

    Sun, Xinyao; Wang, Xue; Wu, Jiangwei; Liu, Youda

    2014-05-01

    Cyber physical systems (CPS) have recently emerged as a new technology which can provide promising approaches to demand side management (DSM), an important capability in industrial power systems. Meanwhile, the manufacturing center is a typical industrial power subsystem with dozens of high energy consumption devices which have complex physical dynamics. DSM, integrated with CPS, is an effective methodology for solving energy optimization problems in the manufacturing center. This paper presents a prediction-based manufacturing center self-adaptive energy optimization method for demand side management in cyber physical systems. To gain prior knowledge of DSM operating results, a sparse Bayesian learning based componential forecasting method is introduced to predict 24-hour electric load levels for specific industrial areas in China. From these data, a pricing strategy is designed based on the short-term load forecasting results. To minimize total energy costs while guaranteeing manufacturing center service quality, an adaptive demand side energy optimization algorithm is presented. The proposed scheme is tested in a machining center energy optimization experiment. An AMI sensing system is then used to measure the demand side energy consumption of the manufacturing center. Based on the data collected from the sensing system, the load prediction-based energy optimization scheme is implemented. By employing both the PSO and the CPSO methods, the problem of DSM in the manufacturing center is solved. The results of the experiment show that the self-adaptive CPSO energy optimization method improves optimization by 5% compared with the traditional PSO optimization method.

  14. Predicting adaptive phenotypes from multilocus genotypes in Sitka spruce (Picea sitchensis) using random forest.

    PubMed

    Holliday, Jason A; Wang, Tongli; Aitken, Sally

    2012-09-01

    Climate is the primary driver of the distribution of tree species worldwide, and the potential for adaptive evolution will be an important factor determining the response of forests to anthropogenic climate change. Although association mapping has the potential to improve our understanding of the genomic underpinnings of climatically relevant traits, the utility of adaptive polymorphisms uncovered by such studies would be greatly enhanced by the development of integrated models that account for the phenotypic effects of multiple single-nucleotide polymorphisms (SNPs) and their interactions simultaneously. We previously reported the results of association mapping in the widespread conifer Sitka spruce (Picea sitchensis). In the current study we used the recursive partitioning algorithm 'Random Forest' to identify optimized combinations of SNPs to predict adaptive phenotypes. After adjusting for population structure, we were able to explain 37% and 30% of the phenotypic variation, respectively, in two locally adaptive traits--autumn budset timing and cold hardiness. For each trait, the leading five SNPs captured much of the phenotypic variation. To determine the role of epistasis in shaping these phenotypes, we also used a novel approach to quantify the strength and direction of pairwise interactions between SNPs and found such interactions to be common. Our results demonstrate the power of Random Forest to identify subsets of markers that are most important to climatic adaptation, and suggest that interactions among these loci may be widespread.
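
    A minimal sketch of the general approach on synthetic data (hypothetical shapes and effect sizes; the study's pipeline additionally adjusts for population structure and optimizes SNP subsets): a Random Forest regressor maps a genotype matrix to a quantitative trait and ranks SNPs by importance.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_ind, n_snp = 200, 100
X = rng.integers(0, 3, size=(n_ind, n_snp)).astype(float)   # genotypes coded 0/1/2
beta = np.zeros(n_snp)
beta[:5] = [0.8, 0.6, 0.5, 0.4, 0.3]                        # five causal SNPs
y = X @ beta + rng.normal(0, 1.0, n_ind)                    # toy budset-like trait

rf = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X, y)
print("out-of-bag R^2:", round(rf.oob_score_, 2))           # variance explained
top = np.argsort(rf.feature_importances_)[::-1][:5]
print("top SNPs by importance:", top)                       # should recover SNPs 0-4
```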

  15. Spike-Threshold Adaptation Predicted by Membrane Potential Dynamics In Vivo

    PubMed Central

    Fontaine, Bertrand; Peña, José Luis; Brette, Romain

    2014-01-01

    Neurons encode information in sequences of spikes, which are triggered when their membrane potential crosses a threshold. In vivo, the spiking threshold displays large variability, suggesting that threshold dynamics have a profound influence on how the combined input of a neuron is encoded in its spiking. Threshold variability could be explained by adaptation to the membrane potential. However, it could also be the case that most threshold variability reflects noise and processes other than threshold adaptation. Here, we investigated threshold variation in auditory neuron responses recorded in vivo in barn owls. We found that the spike threshold is quantitatively predicted by a model in which the threshold adapts, tracking the membrane potential on a short timescale. As a result, in these neurons, slow voltage fluctuations do not contribute to spiking because they are filtered out by threshold adaptation. More importantly, these neurons can only respond to input spikes arriving together on a millisecond timescale. These results demonstrate that fast adaptation to the membrane potential captures spike threshold variability in vivo. PMID:24722397
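
    A minimal sketch of a threshold that tracks the membrane potential on a short timescale (parameter values are illustrative, not the fitted values from the study): slow depolarizations drag the threshold up with them and are filtered out, while a brief coincident input still crosses it.

```python
import numpy as np

def adaptive_threshold_spikes(v, dt=0.1, tau=5.0, theta0=-50.0, alpha=0.6):
    """Return spike times (ms) for a voltage trace v (mV).

    The threshold relaxes toward theta_inf(v) = theta0 + alpha*(v - theta0)
    with a short time constant tau (ms), i.e. it tracks the membrane potential.
    """
    theta = theta0 + alpha * (v[0] - theta0)
    spikes = []
    for i, vi in enumerate(v):
        if vi >= theta:
            spikes.append(i * dt)
        theta += dt * ((theta0 + alpha * (vi - theta0)) - theta) / tau
    return np.array(spikes)

t = np.arange(0, 200, 0.1)                        # ms
slow = -65 + 0.05 * t                             # slow depolarizing ramp
fast = np.where((t > 150) & (t < 152), 12.0, 0.0) # brief coincident input
print(adaptive_threshold_spikes(slow)[:3])        # empty: slow drift is filtered
print(adaptive_threshold_spikes(slow + fast)[:3]) # spikes near t = 150 ms
```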

  16. Improving the Salammbo code modelling and using it to better predict radiation belts dynamics

    NASA Astrophysics Data System (ADS)

    Maget, Vincent; Sicard-Piet, Angelica; Grimald, Sandrine Rochel; Boscher, Daniel

    2016-07-01

    In the framework of the FP7-SPACESTORM project, one objective is to improve the reliability of the model-based predictions of radiation belt dynamics (first developed during the FP7-SPACECAST project). To this end, we have analyzed and improved the way simulations with the ONERA Salammbô code are performed, in particular by: - better controlling the driving parameters of the simulation; - improving the initialization of the simulation so that it is more accurate at most energies for L values between 4 and 6; - improving the physics of the model. For the first point, a statistical analysis of the accuracy of the Kp index has been conducted. For the second point, we based our method on a long-duration simulation from which typical radiation belt states were extracted as a function of solar wind stress and geomagnetic activity. For the last point, we first improved separately the modelling of the different processes acting in the radiation belts, and then analyzed the global improvement obtained when simulating them together. We discuss all these points, and the balance that must be struck between modelled processes to improve radiation belt modelling globally.

  17. A 4.8 kbps code-excited linear predictive coder

    NASA Technical Reports Server (NTRS)

    Tremain, Thomas E.; Campbell, Joseph P., Jr.; Welch, Vanoy C.

    1988-01-01

    A secure voice system, STU-3, capable of providing end-to-end secure voice communications was developed in 1984. The terminal for the new system will be built around the standard LPC-10 voice processor algorithm. Although the performance of the present STU-3 processor is considered good, its response to nonspeech sounds such as whistles, coughs, and impulse-like noises may not be completely acceptable. Speech in noisy environments also causes problems with the LPC-10 voice algorithm. In addition, there is always a demand for something better. It is hoped that LPC-10's 2.4 kbps voice performance will be complemented with a very high quality speech coder operating at a higher data rate. This new coder is one of a number of candidate algorithms being considered for an upgraded version of the STU-3 in late 1989. The paper considers the problems of designing a code-excited linear predictive (CELP) coder that provides very high quality speech at a 4.8 kbps data rate and can be implemented on today's hardware.

  18. Liner Optimization Studies Using the Ducted Fan Noise Prediction Code TBIEM3D

    NASA Technical Reports Server (NTRS)

    Dunn, M. H.; Farassat, F.

    1998-01-01

    In this paper we demonstrate the usefulness of the ducted fan noise prediction code TBIEM3D as a liner optimization design tool. Boundary conditions on the interior duct wall allow for hard walls or a locally reacting liner with axially segmented, circumferentially uniform impedance. Two liner optimization studies are considered in which farfield noise attenuation due to the presence of a liner is maximized by adjusting the liner impedance. In the first example, the dependence of optimal liner impedance on frequency and liner length is examined. Results show that both the optimal impedance and attenuation levels are significantly influenced by liner length and frequency. In the second example, TBIEM3D is used to compare radiated sound pressure levels between optimal and non-optimal liner cases at conditions designed to simulate take-off. It is shown that significant noise reduction is achieved for most of the sound field by selecting the optimal or near-optimal liner impedance. Our results also indicate that there is a relatively large region of the impedance plane over which optimal or near-optimal liner behavior is attainable. This is an important conclusion for the designer, since there are variations in liner characteristics due to manufacturing imprecision.

  19. Temporal integration of multisensory stimuli in autism spectrum disorder: a predictive coding perspective.

    PubMed

    Chan, Jason S; Langer, Anne; Kaiser, Jochen

    2016-08-01

    Recently, a growing number of studies have examined the role of multisensory temporal integration in people with autism spectrum disorder (ASD). Some studies have used temporal order judgments or simultaneity judgments to examine the temporal binding window, while others have employed multisensory illusions, such as the sound-induced flash illusion (SiFi). The SiFi is an illusion created by presenting two beeps along with one flash. Participants perceive two flashes if the stimulus-onset asynchrony (SOA) between the two beeps is brief. The temporal binding window can be measured by modulating the SOA between the beeps. Each of these tasks has been used to compare the temporal binding window in people with ASD and typically developing individuals; however, the results have been mixed. While temporal order and simultaneity judgment tasks have shown little difference in the temporal binding window between groups, studies using the SiFi have found a wider temporal binding window in ASD compared to controls. In this paper, we discuss these seemingly contradictory findings and suggest that predictive coding may be able to explain the differences between these tasks. PMID:27324803

  20. The effect of LPC (Linear Predictive Coding) processing on the recognition of unfamiliar speakers

    NASA Astrophysics Data System (ADS)

    Schmidt-Nielsen, A.; Stern, K. R.

    1985-09-01

    The effect of narrowband digital processing, using a linear predictive coding (LPC) algorithm at 2400 bits/s, on the recognition of previously unfamiliar speakers was investigated. Three sets of five speakers each (two sets of males differing in rated voice distinctiveness and one set of females) were tested for speaker recognition in two separate experiments using a familiarization-test procedure. In the first experiment, three groups of listeners each heard a single set of speakers in both voice processing conditions, and in the second, two groups of listeners each heard all three sets of speakers in a single voice processing condition. There were significant differences among speaker sets both with and without LPC processing, with the low-distinctiveness males generally more poorly recognized than the other groups. There was also an interaction of speaker set and voice processing condition; the low-distinctiveness males were no less recognizable over LPC than they were unprocessed, and one speaker in particular was actually better recognized over LPC. Although it seems that on the whole LPC processing reduces speaker recognition, the reverse may be the case for some speakers in some contexts. This suggests that one should be cautious about comparing speaker recognition for different voice systems on the basis of a single set of speakers. It also presents a serious obstacle to the development of a reliable standardized test of speaker recognizability.
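
    For context, a minimal sketch of the LPC analysis underlying such vocoders (the textbook autocorrelation method with the Levinson-Durbin recursion, not the specific 2400 bit/s LPC-10 implementation): each sample is predicted as a weighted sum of the previous p samples, so only predictor coefficients and excitation parameters need to be transmitted.

```python
import numpy as np

def lpc(frame, order=10):
    """LPC coefficients a (a[0] = 1) via the Levinson-Durbin recursion.

    The analysis filter A(z) = 1 + a[1] z^-1 + ... + a[p] z^-p whitens the frame.
    """
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]  # autocorrelation
    a = np.zeros(order + 1)
    a[0], err = 1.0, r[0]
    for i in range(1, order + 1):
        k = -(r[i] + a[1:i] @ r[i - 1:0:-1]) / err   # reflection coefficient
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]          # update previous coefficients
        a[i] = k
        err *= 1 - k * k                             # remaining prediction error
    return a

# Analyze a synthetic vowel-like frame; the residual should be much smaller.
fs, n = 8000, np.arange(240)
x = np.sin(2 * np.pi * 120 * n / fs) + 0.4 * np.sin(2 * np.pi * 240 * n / fs)
a = lpc(x, order=10)
resid = np.convolve(a, x)[:len(x)]                   # e[n] = sum_k a[k] x[n-k]
gain_db = 10 * np.log10(np.sum(x**2) / np.sum(resid**2))
print(f"prediction gain: {gain_db:.1f} dB")
```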

  1. A Predictive Model of Fragmentation using Adaptive Mesh Refinement and a Hierarchical Material Model

    SciTech Connect

    Koniges, A E; Masters, N D; Fisher, A C; Anderson, R W; Eder, D C; Benson, D; Kaiser, T B; Gunney, B T; Wang, P; Maddox, B R; Hansen, J F; Kalantar, D H; Dixit, P; Jarmakani, H; Meyers, M A

    2009-03-03

    Fragmentation is a fundamental material process that naturally spans spatial scales from microscopic to macroscopic. We developed a mathematical framework using an innovative combination of hierarchical material modeling (HMM) and adaptive mesh refinement (AMR) to connect the continuum to microstructural regimes. This framework has been implemented in a new multi-physics, multi-scale, 3D simulation code, NIF ALE-AMR. New multi-material volume fraction and interface reconstruction algorithms were developed for this new code, which is leading the world effort in hydrodynamic simulations that combine AMR with ALE (Arbitrary Lagrangian-Eulerian) techniques. The interface reconstruction algorithm is also used to produce fragments following material failure. In general, the material strength and failure models have history vector components that must be advected along with other properties of the mesh during the remap stage of the ALE hydrodynamics. The fragmentation models are validated against an electromagnetically driven expanding ring experiment and dedicated laser-based fragmentation experiments conducted at the Jupiter Laser Facility. As part of the exit plan, the NIF ALE-AMR code was applied to a number of fragmentation problems of interest to the National Ignition Facility (NIF). One example shows the added benefit of multi-material ALE-AMR that relaxes the requirement that material boundaries must be along mesh boundaries.

  2. Age-Related Changes in Predictive Capacity Versus Internal Model Adaptability: Electrophysiological Evidence that Individual Differences Outweigh Effects of Age

    PubMed Central

    Bornkessel-Schlesewsky, Ina; Philipp, Markus; Alday, Phillip M.; Kretzschmar, Franziska; Grewe, Tanja; Gumpert, Maike; Schumacher, Petra B.; Schlesewsky, Matthias

    2015-01-01

    Hierarchical predictive coding has been identified as a possible unifying principle of brain function, and recent work in cognitive neuroscience has examined how it may be affected by age–related changes. Using language comprehension as a test case, the present study aimed to dissociate age-related changes in prediction generation versus internal model adaptation following a prediction error. Event-related brain potentials (ERPs) were measured in a group of older adults (60–81 years; n = 40) as they read sentences of the form “The opposite of black is white/yellow/nice.” Replicating previous work in young adults, results showed a target-related P300 for the expected antonym (“white”; an effect assumed to reflect a prediction match), and a graded N400 effect for the two incongruous conditions (i.e. a larger N400 amplitude for the incongruous continuation not related to the expected antonym, “nice,” versus the incongruous associated condition, “yellow”). These effects were followed by a late positivity, again with a larger amplitude in the incongruous non-associated versus incongruous associated condition. Analyses using linear mixed-effects models showed that the target-related P300 effect and the N400 effect for the incongruous non-associated condition were both modulated by age, thus suggesting that age-related changes affect both prediction generation and model adaptation. However, effects of age were outweighed by the interindividual variability of ERP responses, as reflected in the high proportion of variance captured by the inclusion of by-condition random slopes for participants and items. We thus argue that – at both a neurophysiological and a functional level – the notion of general differences between language processing in young and older adults may only be of limited use, and that future research should seek to better understand the causes of interindividual variability in the ERP responses of older adults and its relation to

  3. Simulation of transport in the ignited ITER with 1.5-D predictive code

    NASA Astrophysics Data System (ADS)

    Becker, G.

    1995-01-01

    The confinement in the bulk and scrape-off layer plasmas of the ITER EDA and CDA is investigated with special versions of the 1.5-D BALDUR predictive transport code for the case of peaked density profiles (Cu=1.0). The code self-consistently computes 2-D equilibria and solves 1-D transport equations with empirical transport coefficients for the ohmic, L and ELMy H mode regimes. Self-sustained steady state thermonuclear burn is demonstrated for up to 500 s. It is shown to be compatible with the strong radiation losses for divertor heat load reduction caused by the seeded impurities iron, neon and argon. The corresponding global and local energy and particle transport are presented. The required radiation-corrected energy confinement times of the EDA and CDA are found to be close to 4 s, which is attainable according to the ITER ELMy H mode scalings. In the reference cases, the steady state helium fraction is 7%, which already causes significant dilution of the DT fuel. The fractions of iron, neon and argon needed for the prescribed radiative power loss are given. It is shown that high radiative losses from the confinement zone, mainly by bremsstrahlung, cannot be avoided. The radiation profiles of iron and argon are found to be the same, with two thirds of the total radiation being emitted from closed flux surfaces. Fuel dilution due to iron and argon is small. The neon radiation is more peripheral, since only half of the total radiative power is lost within the separatrix. But neon is found to cause high fuel dilution. The combined dilution effect of helium and neon conflicts with burn control, self-sustained burn and divertor power reduction. Raising the helium fraction above 10% leads to the same difficulties owing to fuel dilution. The high helium levels of the present EDA design are thus unacceptable. For the reference EDA case, the self-consistent electron density and temperature at the separatrix are 5.6×10^19 m^-3 and 130 eV, respectively. The bootstrap

  4. Adapting SOYGRO V5.42 for prediction under climate change conditions

    SciTech Connect

    Pickering, N.B.; Jones, J.W.; Boote, K.J.

    1995-12-31

    In some studies of the impacts of climate change on global crop production, crop growth models were empirically adapted to improve their response to increased CO{sub 2} concentration and air temperature. This chapter evaluates the empirical adaptations of the photosynthesis and evapotranspiration (ET) algorithms used in the soybean [Glycine max (L.) Merr.] model, SOYGRO V5.42, by comparing it with a new model that includes mechanistic approaches for these two processes. The new evapotranspiration-photosynthesis sub-model (ETPHOT) uses a hedgerow light interception algorithm and a C{sub 3}-leaf biochemical photosynthesis submodel, and predicts canopy ET and temperatures using a three-zone energy balance. ETPHOT uses daily weather data, has an internal hourly time step, and sums hourly predictions to obtain daily gross photosynthesis and ET. The empirical ET and photosynthesis curves included in SOYGRO V5.42 for climate change prediction were similar to those predicted by the ETPHOT model. Under extreme conditions that promote high leaf temperatures, such as in the humid tropics, SOYGRO V5.42 overestimated the daily gross photosynthesis response to CO{sub 2} compared with the ETPHOT model. SOYGRO V5.42 also slightly overestimated daily gross photosynthesis at intermediate air temperatures and ambient CO{sub 2} concentrations. 80 refs., 12 figs.

  5. Adaptive neuro-fuzzy and expert systems for power quality analysis and prediction of abnormal operation

    NASA Astrophysics Data System (ADS)

    Ibrahim, Wael Refaat Anis

    The present research involves the development of several fuzzy expert systems for power quality analysis and diagnosis. Intelligent systems for the prediction of abnormal system operation were also developed. All of the intelligent modules developed were either enhanced by, or built entirely through, adaptive fuzzy learning techniques; neuro-fuzzy learning is the main adaptive technique utilized. The work presents a novel approach to the interpretation of power quality from the perspective of the continuous operation of a single system. The research includes an extensive literature review pertaining to the applications of intelligent systems to power quality analysis. Basic definitions and signature events related to power quality are introduced. In addition, detailed discussions of various artificial intelligence paradigms as well as wavelet theory are included. A fuzzy-based intelligent system capable of distinguishing normal from abnormal operation for a given system was developed. Adaptive neuro-fuzzy learning was applied to enhance its performance. A group of fuzzy expert systems that could perform full operational diagnosis were also developed successfully. The developed systems were applied to the operational diagnosis of 3-phase induction motors and rectifier bridges. A novel approach for learning power quality waveforms and trends was developed. The technique, which is adaptive neuro-fuzzy based, learned, compressed, and stored the waveform data. The new technique was successfully tested using a wide variety of power quality signature waveforms and real site data. The trend-learning technique was incorporated into a fuzzy expert system that was designed to predict abnormal operation of a monitored system. The intelligent system learns and stores, in compressed format, trends leading to abnormal operation. The system then compares incoming data to the retained trends continuously. If the incoming data matches any of the learned trends, an

  6. An Adaptive Data Gathering Scheme for Multi-Hop Wireless Sensor Networks Based on Compressed Sensing and Network Coding

    PubMed Central

    Yin, Jun; Yang, Yuwang; Wang, Lei

    2016-01-01

    Joint design of compressed sensing (CS) and network coding (NC) has been demonstrated to provide a new data gathering paradigm for multi-hop wireless sensor networks (WSNs). By exploiting the correlation of the network sensed data, a variety of data gathering schemes based on NC and CS (Compressed Data Gathering—CDG) have been proposed. However, these schemes assume that the sparsity of the network sensed data is constant and that the value of the sparsity is known before starting each data gathering epoch; they thus ignore the variation of the data observed by WSNs deployed in practical circumstances. In this paper, we present a complete design of a feedback CDG scheme in which the sink node adaptively queries the interested nodes to acquire an appropriate number of measurements. The adaptive measurement-formation procedure and its termination rules are proposed and analyzed in detail. Moreover, in order to minimize the number of overall transmissions in the formation procedure of each measurement, we have developed an NP-complete model (Maximum Leaf Nodes Minimum Steiner Nodes—MLMS) and realized a scalable greedy algorithm to solve the problem. Experimental results show that the proposed measurement-formation method outperforms previous schemes, and experiments on both datasets from ocean temperature and a practical network deployment also prove the effectiveness of our proposed feedback CDG scheme. PMID:27043574
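
    A minimal sketch of the compressed-sensing core (generic orthogonal matching pursuit on random Gaussian measurements; the adaptive, feedback-driven choice of the number of measurements that the paper contributes is not shown): the sink recovers a sparse reading vector from far fewer measurements than sensors.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = Phi @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))   # most correlated column
        support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef          # project out chosen columns
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, m, k = 100, 30, 4                     # 100 sensors, 30 measurements, sparsity 4
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
Phi = rng.normal(0, 1 / np.sqrt(m), (m, n))            # random measurement matrix
x_hat = omp(Phi, Phi @ x, k)
print("max reconstruction error:", float(np.max(np.abs(x - x_hat))))
```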

  7. The Predictive Utility of Narcissism among Children and Adolescents: Evidence for a Distinction between Adaptive and Maladaptive Narcissism

    ERIC Educational Resources Information Center

    Barry, Christopher T.; Frick, Paul J.; Adler, Kristy K.; Grafeman, Sarah J.

    2007-01-01

    We examined the predictive utility of narcissism among a community sample of children and adolescents (N=98) longitudinally. Analyses focused on the differential utility between maladaptive and adaptive narcissism for predicting later delinquency. Maladaptive narcissism significantly predicted self-reported delinquency at one-, two-, and…

  8. Adaptability and Prediction of Anticipatory Muscular Activity Parameters to Different Movements in the Sitting Position.

    PubMed

    Chikh, Soufien; Watelain, Eric; Faupin, Arnaud; Pinti, Antonio; Jarraya, Mohamed; Garnier, Cyril

    2016-08-01

    Voluntary movement often causes postural perturbation that requires an anticipatory postural adjustment to minimize perturbation and increase the efficiency and coordination of execution. This systematic review focuses specifically on the relationship between the parameters of anticipatory muscular activities and movement finality in the sitting position among adults, in order to study the adaptability and predictability of anticipatory muscular activity parameters for different movements and conditions in the sitting position. A systematic literature search was performed using PubMed, Science Direct, Web of Science, Springer-Link, Engineering Village, and EbscoHost. Inclusion and exclusion criteria were applied to retain the most rigorous and specific studies, yielding 76 articles. Seventeen articles were excluded at first reading, and after the application of the inclusion and exclusion criteria, 23 were retained. In the sitting position, central nervous system activity precedes movement through diverse anticipatory muscular activities and shows the ability to adapt anticipatory muscular activity parameters to the movement direction, postural stability, or charge weight. In addition, these parameters can be adapted to the speed of execution, as found for the standing position. Parameters of anticipatory muscular activities (duration, order, and amplitude of the muscle contractions constituting the anticipatory muscular activity) could be used as a predictive indicator of forthcoming movement. In addition, this systematic review may improve methodology in empirical studies and assistive technology for people with disabilities. PMID:27440765

  10. Relapse-related long non-coding RNA signature to improve prognosis prediction of lung adenocarcinoma

    PubMed Central

    Zhao, Hengqiang; Wang, Zhenzhen; Shi, Hongbo; Cheng, Liang; Sun, Jie

    2016-01-01

    Increasing evidence has highlighted the important roles of dysregulated long non-coding RNA (lncRNA) expression in tumorigenesis, tumor progression and metastasis. However, lncRNA expression patterns and their prognostic value for tumor relapse in lung adenocarcinoma (LUAD) patients have not been systematically elucidated. In this study, we evaluated lncRNA expression profiles by repurposing publicly available microarray expression profiles from a large cohort of LUAD patients, and identified from the significantly altered lncRNAs a specific lncRNA signature closely associated with tumor relapse in LUAD, using a weighted voting algorithm and a cross-validation strategy. This signature was able to discriminate between relapsed and non-relapsed LUAD patients with a sensitivity of 90.9% and a specificity of 81.8%. From the discovery dataset, we developed a risk score model represented by the nine relapse-related lncRNAs for prognosis prediction, which classified patients into high-risk and low-risk subgroups with significantly different recurrence-free survival (HR=45.728, 95% CI=6.241-335.1; p=1.69e-04). The prognostic value of this relapse-related lncRNA signature was confirmed in the testing dataset and in two other independent datasets. Multivariable Cox regression analysis and stratified analysis showed that the relapse-related lncRNA signature was independent of other clinical variables. Integrative in silico functional analysis suggested that these nine relapse-related lncRNAs are biologically relevant to disease relapse, implicating processes such as the cell cycle, DNA damage and repair, and cell death. Our study demonstrates that the relapse-related lncRNA signature may not only help to identify LUAD patients at high risk of relapse who would benefit from adjuvant therapy, but could also provide novel insights into the molecular mechanisms of recurrent disease. PMID:27105492
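
    A minimal sketch of the generic risk-score construction (hypothetical coefficients and data; the paper derives its nine lncRNA weights from the discovery cohort): the score is a weighted sum of signature-lncRNA expression values, and the cohort median splits patients into high- and low-risk groups.

```python
import numpy as np

def lncrna_risk_groups(expr, weights):
    """expr: (n_patients, n_lncRNAs) expression matrix; weights: per-lncRNA coefficients.

    Returns risk scores and a boolean high-risk mask (score above the cohort median).
    """
    scores = expr @ weights
    return scores, scores > np.median(scores)

rng = np.random.default_rng(0)
expr = rng.normal(0, 1, (120, 9))        # 9 signature lncRNAs, 120 patients
weights = rng.normal(0, 0.5, 9)          # stand-in regression coefficients
scores, high_risk = lncrna_risk_groups(expr, weights)
print(f"{high_risk.sum()} high-risk vs {(~high_risk).sum()} low-risk patients")
```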

  11. Exhaustive prediction of disease susceptibility to coding base changes in the human genome

    PubMed Central

    Kulkarni, Vinayak; Errami, Mounir; Barber, Robert; Garner, Harold R

    2008-01-01

    Background Single Nucleotide Polymorphisms (SNPs) are the most abundant form of genomic variation and can cause phenotypic differences between individuals, including diseases. Bases are subject to various levels of selection pressure, reflected in their inter-species conservation. Results We propose a method that is not dependent on transcription information to score each coding base in the human genome, reflecting the disease probability associated with its mutation. Twelve factors likely to be associated with disease alleles were chosen as the input for a support vector machine prediction algorithm. The analysis yielded 83% sensitivity and 84% specificity in segregating disease-like alleles as found in the Human Gene Mutation Database from non-disease-like alleles as found in the Database of Single Nucleotide Polymorphisms. This algorithm was subsequently applied to each base within all known human genes, exhaustively confirming that interspecies conservation is the strongest factor for disease association. For each gene, the length-normalized average disease potential score was calculated. Of the 30 genes with the highest scores, 21 are directly associated with a disease. In contrast, of the 30 genes with the lowest scores, only one is associated with a disease as found in published literature. The results strongly suggest that the highest scoring genes are enriched for those that might contribute to disease, if mutated. Conclusion This method provides valuable information to researchers to identify sensitive positions in genes that have a high disease probability, enabling them to optimize experimental designs and interpret data emerging from genetic and epidemiological studies. PMID:18793467
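
    A minimal sketch of the classification setup on synthetic stand-in data (the study's twelve real factors, such as interspecies conservation, are replaced by random features here): an SVM is trained to separate disease-like from non-disease-like alleles, and sensitivity and specificity are read off held-out predictions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, n_factors = 2000, 12
X = rng.normal(0, 1, (n, n_factors))            # 12 per-base factors (stand-ins)
w = rng.normal(0, 1, n_factors)
y = (X @ w + rng.normal(0, 1.5, n) > 0).astype(int)   # 1 = disease-like allele

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(Xtr, ytr)
pred = clf.predict(Xte)
sens = np.mean(pred[yte == 1] == 1)             # true positive rate
spec = np.mean(pred[yte == 0] == 0)             # true negative rate
print(f"sensitivity {sens:.2f}, specificity {spec:.2f}")
```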

  12. Nucleotide Sequence Analyses and Predicted Coding of Bunyavirus Genome RNA Species

    PubMed Central

    Clerx-van Haaster, Corrie M.; Akashi, Hiroomi; Auperin, David D.; Bishop, David H. L.

    1982-01-01

    We performed 3′ RNA sequence analyses of [32P]pCp-end-labeled La Crosse (LAC) virus, alternate LAC virus isolate L74, and snowshoe hare bunyavirus large (L), medium (M), and small (S) negative-stranded viral RNA species to determine the coding capabilities of these species. These analyses were confirmed by dideoxy primer extension studies in which we used a synthetic oligodeoxynucleotide primer complementary to the conserved 3′-terminal decanucleotide of the three viral RNA species (Clerx-van Haaster and Bishop, Virology 105:564-574, 1980). The deduced sequences predicted translation of two S-RNA gene products that were read in overlapping reading frames. So far, only single contiguous open reading frames have been identified for the viral M- and L-RNA species. For the negative-stranded M-RNA species of all three viruses, the single reading frame developed from the first 3′-proximal UAC triplet. Likewise, for the L-RNA of the alternate LAC isolate, a single open reading frame developed from the first 3′-proximal UAC triplet. The corresponding L-RNA sequences of prototype LAC and snowshoe hare viruses initiated open reading frames; however, for both viral L-RNA species there was a preceding 3′-proximal UAC triplet in another reading frame that was followed shortly afterward by a termination codon. A comparison of the sequence data obtained for snowshoe hare virus, LAC virus, and the alternate LAC virus isolate showed that the identified nucleotide substitutions were sufficient to account for some of the fingerprint differences in the L-, M-, and S-RNA species of the three viruses. Unlike the distribution of the L- and M-RNA substitutions, significantly fewer nucleotide substitutions occurred after the initial UAC triplet of the S-RNA species than before this triplet, implying that the overlapping genes of the S RNA provided a constraint against evolution by point mutation. The comparative sequence analyses predicted amino acid differences among the

  13. Genomic Measures to Predict Adaptation to Novel Sensorimotor Environments and Improve Personalization of Countermeasure Design

    NASA Technical Reports Server (NTRS)

    Kreutzberg, G. A.; Zanello, S.; Seidler, R. D.; Peters, B.; De Dios, Y. E.; Gadd, N. E.; Bloomberg, J. J.; Mulavara, A. P.

    2016-01-01

    Introduction. Astronauts experience sensorimotor disturbances during their initial exposure to microgravity and during the re-adaptation phase following a return to an Earth-gravitational environment. These alterations may affect crewmembers' ability to perform mission-critical functional tasks. Interestingly, astronauts have shown significant inter-subject variation in adaptive capability during gravitational transitions. The ability to predict the manner and degree to which individual astronauts would be affected would improve the efficacy of personalized countermeasure training programs designed to enhance sensorimotor adaptability. The success of such an approach depends on the development of predictive measures of sensorimotor adaptation, which would ascertain each crewmember's adaptive capacity. The goal of this study is to determine whether specific genetic polymorphisms have significant influence on sensorimotor adaptability, which can help inform the design of personalized training countermeasures. Methods. Subjects (n=15) were tested on their ability to negotiate a complex obstacle course for ten test trials while wearing up-down vision-displacing goggles, presenting a visuomotor challenge during a full-body task. The first test trial time and the recovery rate over the ten trials were used as adaptability performance metrics. Four single nucleotide polymorphisms (SNPs) were selected for their role in neural pathways underlying sensorimotor adaptation and were identified in subjects' DNA extracted from saliva samples: catechol-O-methyl transferase (COMT, rs4680), dopamine receptor D2 (DRD2, rs1076560), the brain-derived neurotrophic factor gene (BDNF, rs6265), and the DraI polymorphism of the alpha-2 adrenergic receptor. The relationship between the SNPs and test performance was assessed by assigning subjects a rank score based on their adaptability performance metrics and comparing gene expression between the top-half and bottom-half performers

  14. Verification of computational aerodynamic predictions for complex hypersonic vehicles using the INCA{trademark} code

    SciTech Connect

    Payne, J.L.; Walker, M.A.

    1995-01-01

    This paper describes a process of combining two state-of-the-art CFD tools, SPRINT and INCA, in a manner which extends the utility of both codes beyond what is possible from either code alone. The speed and efficiency of the PNS code, SPRINT, have been combined with the capability of a Navier-Stokes code to model fully elliptic, viscous separated regions on high performance, high speed flight systems. The coupled SPRINT/INCA capability is applicable to the design and evaluation of high speed flight vehicles in the supersonic to hypersonic speed regimes. This paper describes the codes involved, the interface process and a few selected test cases which illustrate the SPRINT/INCA coupling process. Results have shown that the combination of SPRINT and INCA produces correct results and can lead to improved computational analyses for complex, three-dimensional problems.

  15. COSAL: A black-box compressible stability analysis code for transition prediction in three-dimensional boundary layers

    NASA Technical Reports Server (NTRS)

    Malik, M. R.

    1982-01-01

    A fast computer code, COSAL, for transition prediction in three-dimensional boundary layers using compressible stability analysis is described. The compressible stability eigenvalue problem is solved using a finite difference method, and the code is a black box in the sense that no guess of the eigenvalue is required from the user. Several optimization procedures were incorporated into COSAL to calculate integrated growth rates (N factor) for transition correlation on swept and tapered laminar flow control wings using the well-known e^N method. A user's guide to the program is provided.

  16. Adaptive Code Division Multiple Access Protocol for Wireless Network-on-Chip Architectures

    NASA Astrophysics Data System (ADS)

    Vijayakumaran, Vineeth

    Massive levels of integration following Moore's Law have ushered in a paradigm shift in the way on-chip interconnections are designed. With ever-higher numbers of cores on the same die, traditional bus-based interconnections are no longer a scalable communication infrastructure. On-chip networks were proposed to enable a scalable plug-and-play mechanism for interconnecting hundreds of cores on the same chip. Wired interconnects between the cores in a traditional Network-on-Chip (NoC) system become a bottleneck as the number of cores increases, raising the latency and energy required to transmit signals over them. Hence, many alternative emerging interconnect technologies have been proposed, namely 3D, photonic and multi-band RF interconnects. Although they provide better connectivity, higher speed and higher bandwidth compared to wired interconnects, they also face challenges with heat dissipation and manufacturing difficulties. On-chip wireless interconnects are another proposed alternative that needs no physical interconnection layout, as data travels over the wireless medium. They are integrated into a hybrid NoC architecture consisting of both wired and wireless links, which provides higher bandwidth, lower latency, smaller area overhead and reduced energy dissipation in communication. However, as the bandwidth of the wireless channels is limited, an efficient media access control (MAC) scheme is required to enhance the utilization of the available bandwidth. This thesis proposes using a multiple access mechanism such as Code Division Multiple Access (CDMA) to enable multiple transmitter-receiver pairs to send data over the wireless channel simultaneously. It will be shown that such a hybrid wireless NoC with an efficient CDMA-based MAC protocol can significantly increase the performance of the system while lowering the energy dissipation in data transfer. In this work it is shown that the wireless NoC with the proposed CDMA based MAC protocol
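
    A minimal sketch of the CDMA idea (generic Walsh-Hadamard spreading; the thesis's actual MAC protocol and code assignment are not reproduced): orthogonal spreading codes let several transmitter-receiver pairs share the wireless channel simultaneously, and each receiver recovers its bit by correlating the superposed signal with its own code.

```python
import numpy as np
from scipy.linalg import hadamard

chips = 8
codes = hadamard(chips)                 # rows are orthogonal Walsh codes (+1/-1)

# Three transmitter-receiver pairs send one bit each over the shared channel.
bits = np.array([1, 0, 1])
symbols = 2 * bits - 1                  # map {0,1} -> {-1,+1}
channel = symbols @ codes[1:4]          # superposition of the spread signals

# Each receiver despreads with its own code; the sign recovers the bit.
recovered = (channel @ codes[1:4].T) / chips
print((recovered > 0).astype(int))      # -> [1 0 1]
```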

  17. CSI feedback-based CS for underwater acoustic adaptive modulation OFDM system with channel prediction

    NASA Astrophysics Data System (ADS)

    Kuai, Xiao-yan; Sun, Hai-xin; Qi, Jie; Cheng, En; Xu, Xiao-ka; Guo, Yu-hui; Chen, You-gan

    2014-06-01

    In this paper, we investigate the performance of an adaptive modulation (AM) orthogonal frequency division multiplexing (OFDM) system for underwater acoustic (UWA) communications. The aim is to solve the problem of the large feedback overhead for channel state information (CSI) on every subcarrier. A novel CSI feedback scheme is proposed based on the theory of compressed sensing (CS): the receiver feeds back only the sparse channel parameters. Additionally, the channel state is predicted every few symbols to make AM realizable in practice. We describe a linear channel prediction algorithm which is used in adaptive transmission. The system has been tested over a real underwater acoustic channel. The linear channel prediction makes AM transmission techniques more feasible for acoustic channel communications. Simulation and experiment show that significant improvements can be obtained in both bit error rate (BER) and throughput with the AM scheme compared with a fixed Quadrature Phase Shift Keying (QPSK) modulation scheme.
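
    A minimal sketch of linear channel prediction (a generic order-p least-squares predictor on a toy fading time series; the paper's algorithm and parameters may differ): the next channel gain is extrapolated from the last p estimates, so the transmitter can choose a modulation scheme before fresh CSI arrives.

```python
import numpy as np

def fit_linear_predictor(h, p=4):
    """Least-squares coefficients w so that h[n] ~= w @ h[n-p:n]."""
    X = np.array([h[i:i + p] for i in range(len(h) - p)])
    y = h[p:]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

# Toy slowly fading channel magnitude (Doppler-like sinusoids plus noise).
rng = np.random.default_rng(0)
n = np.arange(400)
h = 1 + 0.5 * np.sin(2 * np.pi * 0.01 * n) + 0.2 * np.sin(2 * np.pi * 0.023 * n)
h += rng.normal(0, 0.02, len(n))

w = fit_linear_predictor(h[:300], p=4)                  # train on the past
pred = np.array([w @ h[i - 4:i] for i in range(300, 400)])
rms = float(np.sqrt(np.mean((pred - h[300:400]) ** 2)))
print("RMS one-step prediction error:", round(rms, 4))
```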

  18. Incremental Validity of Personality Measures in Predicting Underwater Performance and Adaptation.

    PubMed

    Colodro, Joaquín; Garcés-de-Los-Fayos, Enrique J; López-García, Juan J; Colodro-Conde, Lucía

    2015-01-01

    Intelligence and personality traits are currently considered effective predictors of human behavior and job performance. However, there are few studies about their relevance in the underwater environment. Data from a sample of military personnel performing scuba diving courses were analyzed with regression techniques, testing the contribution of individual differences and ascertaining the incremental validity of personality in an environment with extreme psychophysical demands. The results confirmed the incremental validity of personality traits (ΔR² = .20, f² = .25) over the predictive contribution of general mental ability (ΔR² = .07, f² = .08) in divers' performance. Moreover, personality (R_L² = .34) also showed higher validity for predicting underwater adaptation than general mental ability (R_L² = .09). The ROC curve indicated 86% of the maximum possible discrimination power for the prediction of underwater adaptation, AUC = .86, p < .001, 95% CI (.82-.90). These findings confirm the shift and reversal of incremental validity of dispositional traits in the underwater environment and the relevance of personality traits as predictors of an effective response to the changing circumstances of military scuba diving. They also may improve the understanding of the behavioral effects and psychophysiological complications of diving and can provide guidance for psychological intervention and prevention of risk in this extreme environment. PMID:26055931

  19. Protein Secondary Structure Prediction Using Local Adaptive Techniques in Training Neural Networks

    NASA Astrophysics Data System (ADS)

    Aik, Lim Eng; Zainuddin, Zarita; Joseph, Annie

    2008-01-01

    One of the most significant problems in computational molecular biology today is how to predict a protein's three-dimensional structure from its one-dimensional amino acid sequence, generally called the protein folding problem, which makes it difficult to determine the corresponding protein functions. Thus, this paper addresses protein secondary structure prediction using a neural network, as a step towards solving the protein folding problem. The neural network used for protein secondary structure prediction is a multilayer perceptron (MLP) of the feed-forward variety. The training set consists of 120 proteins taken from the Protein Data Bank, while the testing set consists of 60 proteins chosen randomly from the Protein Data Bank. Multiple sequence alignment (MSA) is used to obtain similar protein sequences, and a position-specific scoring matrix (PSSM) is used as the network input. The training of the neural network employs local adaptive techniques, comprising learning rate adaptation by sign changes, SuperSAB, Quickprop and Rprop. In the simulations, Rprop and Quickprop were superior to all other algorithms with respect to convergence time, with the best result obtained using the Rprop algorithm.
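
    A minimal sketch of the Rprop update rule (the standard Rprop- variant with conventional constants, not the paper's exact training setup): each weight keeps its own step size, which grows while the gradient sign is stable and shrinks when the sign flips, so no global learning rate is needed.

```python
import numpy as np

def rprop_step(w, grad, prev_grad, step,
               eta_plus=1.2, eta_minus=0.5, step_min=1e-6, step_max=50.0):
    """One Rprop- update. All arrays share the shape of w."""
    sign_change = grad * prev_grad
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    grad = np.where(sign_change < 0, 0.0, grad)    # skip update after a sign flip
    w = w - np.sign(grad) * step
    return w, grad, step

# Minimize f(w) = sum(w^2) from a random start; the gradient is 2w.
rng = np.random.default_rng(0)
w = rng.normal(0, 5, 10)
prev_grad, step = np.zeros_like(w), np.full_like(w, 0.1)
for _ in range(100):
    grad = 2 * w
    w, prev_grad, step = rprop_step(w, grad, prev_grad, step)
print("final loss:", round(float(np.sum(w**2)), 6))
```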

  20. Output-Adaptive Tetrahedral Cut-Cell Validation for Sonic Boom Prediction

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Darmofal, David L.

    2008-01-01

    A cut-cell approach to Computational Fluid Dynamics (CFD) that utilizes the median dual of a tetrahedral background grid is described. The discrete adjoint is also calculated, which permits adaptation based on improving the calculation of a specified output (off-body pressure signature) in supersonic inviscid flow. These predicted signatures are compared to wind tunnel measurements on and off the configuration centerline 10 body lengths below the model to validate the method for sonic boom prediction. Accurate mid-field sonic boom pressure signatures are calculated with the Euler equations without the use of hybrid grid or signature propagation methods. Highly-refined, shock-aligned anisotropic grids were produced by this method from coarse isotropic grids created without prior knowledge of shock locations. A heuristic reconstruction limiter provided stable flow and adjoint solution schemes while producing similar signatures to Barth-Jespersen and Venkatakrishnan limiters. The use of cut-cells with an output-based adaptive scheme completely automated this accurate prediction capability after a triangular mesh is generated for the cut surface. This automation drastically reduces the manual intervention required by existing methods.

  2. Assessing the Predictive Capability of the LIFEIV Nuclear Fuel Performance Code using Sequential Calibration

    SciTech Connect

    Stull, Christopher J.; Williams, Brian J.; Unal, Cetin

    2012-07-05

    This report considers the problem of calibrating a numerical model to data from an experimental campaign (or series of experimental tests). The issue is that when an experimental campaign is proposed, only the input parameters associated with each experiment are known (i.e. outputs are not known because the experiments have yet to be conducted). Faced with such a situation, it would be beneficial from the standpoint of resource management to carefully consider the sequence in which the experiments are conducted. In this way, the resources available for experimental tests may be allocated in a way that best 'informs' the calibration of the numerical model. To address this concern, the authors propose decomposing the input design space of the experimental campaign into its principal components. Subsequently, the utility (to be explained) of each experimental test to the principal components of the input design space is used to formulate the sequence in which the experimental tests will be used for model calibration purposes. The results reported herein build on those presented and discussed in [1,2] wherein Verification & Validation and Uncertainty Quantification (VU) capabilities were applied to the nuclear fuel performance code LIFEIV. In addition to the raw results from the sequential calibration studies derived from the above, a description of the data within the context of the Predictive Maturity Index (PMI) will also be provided. The PMI [3,4] is a metric initiated and developed at Los Alamos National Laboratory to quantitatively describe the ability of a numerical model to make predictions in the absence of experimental data, where it is noted that 'predictions in the absence of experimental data' is not synonymous with extrapolation. This simply reflects the fact that resources do not exist such that each and every execution of the numerical model can be compared against experimental data. If such resources existed, the justification for numerical models

  3. Heart Motion Prediction Based on Adaptive Estimation Algorithms for Robotic Assisted Beating Heart Surgery

    PubMed Central

    Tuna, E. Erdem; Franke, Timothy J.; Bebek, Özkan; Shiose, Akira; Fukamachi, Kiyotaka; Çavuşoğlu, M. Cenk

    2013-01-01

    Robotic assisted beating heart surgery aims to allow surgeons to operate on a beating heart without stabilizers, as if the heart were stationary. The robot actively cancels heart motion by closely following a point of interest (POI) on the heart surface, a process called Active Relative Motion Canceling (ARMC). Due to the high bandwidth of the POI motion, it is necessary to supply the controller with an estimate of the immediate future of the POI motion over a prediction horizon in order to achieve sufficient tracking accuracy. In this paper, two least-squares based prediction algorithms, using an adaptive filter to generate future position estimates, are implemented and studied. The first method assumes a linear system relation between the consecutive samples in the prediction horizon. In contrast, the second method performs this parametrization independently for each point over the whole horizon. The effects of predictor parameters and variations in heart rate on tracking performance are studied with constant and varying heart rate data. The predictors are evaluated using a 3-degrees-of-freedom test bed and prerecorded in-vivo motion data. Then, the one-step prediction and tracking performances of the presented approaches are compared with an Extended Kalman Filter predictor. Finally, the essential features of the proposed prediction algorithms are summarized. PMID:23976889
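
    A minimal sketch in the spirit of the first approach (a generic normalized-LMS adaptive one-step predictor iterated forward over the horizon; the paper's least-squares formulation and parameters differ): the filter weights adapt each sample and are then rolled forward to produce multi-step-ahead POI estimates.

```python
import numpy as np

def nlms_predict(x, p=8, mu=0.5, horizon=5):
    """Adaptive one-step NLMS predictor, iterated to predict `horizon` steps ahead."""
    w = np.zeros(p)
    preds = np.full(len(x), np.nan)
    for n in range(p, len(x) - horizon):
        past = x[n - p:n]
        err = x[n] - w @ past
        w += mu * err * past / (past @ past + 1e-8)   # NLMS weight update
        buf = list(x[n - p + 1:n + 1])                # roll the predictor forward
        for _ in range(horizon):
            buf.append(w @ np.array(buf[-p:]))
        preds[n + horizon] = buf[-1]
    return preds

# Quasi-periodic heartbeat-like POI trace (mm) with slight rate variation.
t = np.linspace(0, 10, 2000)
x = 3 * np.sin(2 * np.pi * 1.2 * t + 0.3 * np.sin(2 * np.pi * 0.1 * t))
preds = nlms_predict(x, horizon=5)
mask = ~np.isnan(preds)
rms = float(np.sqrt(np.mean((preds[mask] - x[mask]) ** 2)))
print("RMS 5-step-ahead error (mm):", round(rms, 3))
```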

  4. Bioinformatics Approach for Prediction of Functional Coding/Noncoding Simple Polymorphisms (SNPs/Indels) in Human BRAF Gene.

    PubMed

    Hassan, Mohamed M; Omer, Shaza E; Khalf-Allah, Rahma M; Mustafa, Razaz Y; Ali, Isra S; Mohamed, Sofia B

    2016-01-01

    This study was carried out on Homo sapiens single variations (SNPs/indels) in the BRAF gene across coding and non-coding regions. Variant data were obtained from the dbSNP database as of its last update of November 2015. Many bioinformatics tools were used to identify functional SNPs and indels affecting protein function, structure and expression. For coding polymorphisms, the results showed 111 SNPs predicted as highly damaging and six others as less damaging. For the UTRs, five SNPs and one indel altered microRNA binding sites (3' UTR), while no SNP or indel produced functional alterations in transcription factor binding sites (5' UTR). In addition, for the 5'/3' splice sites, the analysis showed that one SNP within the 5' splice site and one indel in the 3' splice site showed potential alteration of splicing. In conclusion, these functionally identified SNPs and indels could lead to gene alteration, which may directly or indirectly contribute to the occurrence of many diseases. PMID:27478437

  5. A Predictive Approach to Nonparametric Inference for Adaptive Sequential Sampling of Psychophysical Experiments

    PubMed Central

    Benner, Philipp; Elze, Tobias

    2012-01-01

    We present a predictive account of adaptive sequential sampling of stimulus-response relations in psychophysical experiments. Our discussion applies to experimental situations with ordinal stimuli when only weak structural knowledge is available, such that parametric modeling is not an option. By introducing a certain form of partial exchangeability, we successively develop a hierarchical Bayesian model based on a mixture of Pólya urn processes. Suitable utility measures permit us to optimize the overall experimental sampling process. We provide several measures that are either based on simple count statistics or on more elaborate information theoretic quantities. The actual computation of information theoretic utilities often turns out to be infeasible. This is not the case with our sampling method, which relies on an efficient algorithm to compute exact solutions of our posterior predictions and utility measures. Finally, we demonstrate the advantages of our framework on a hypothetical sampling problem. PMID:22822269
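
    For intuition, a minimal sketch of a single Pólya urn posterior predictive (the paper uses a hierarchical mixture of such urns with utility-driven stimulus selection, which is not shown): each observed response adds a ball of its own color back to the urn, so predictive probabilities concentrate on frequently observed outcomes.

```python
import numpy as np

def polya_urn_predictive(counts, alpha):
    """Posterior predictive probabilities for a Polya urn with prior weights alpha."""
    w = np.asarray(counts, dtype=float) + np.asarray(alpha, dtype=float)
    return w / w.sum()

rng = np.random.default_rng(0)
alpha = np.ones(2)              # uniform prior over two responses (e.g. yes/no)
counts = np.zeros(2)
for _ in range(20):             # simulate sequential sampling at one stimulus level
    p = polya_urn_predictive(counts, alpha)
    response = rng.choice(2, p=p)
    counts[response] += 1       # returned ball plus one extra of the same color
print(polya_urn_predictive(counts, alpha))
```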

  6. Prediction of ultrasonic pulse velocity for enhanced peat bricks using adaptive neuro-fuzzy methodology.

    PubMed

    Motamedi, Shervin; Roy, Chandrabhushan; Shamshirband, Shahaboddin; Hashim, Roslan; Petković, Dalibor; Song, Ki-Il

    2015-08-01

    Ultrasonic pulse velocity is affected by defects in material structure. This study applied soft computing techniques to predict the ultrasonic pulse velocity for various peat and cement content mixtures over several curing periods. First, a process was constructed to simulate the ultrasonic pulse velocity with an adaptive neuro-fuzzy inference system (ANFIS). An ANFIS network was then developed whose input and output layers consisted of four and one neurons, respectively; the four inputs were cement, peat and sand content (%) and the curing period (days). The simulation results showed efficient performance of the proposed system. The ANFIS and experimental results were compared through the coefficient of determination and the root-mean-square error. In conclusion, the use of an ANFIS network enhances the prediction of strength, and the simulation results confirmed the effectiveness of the suggested strategies. PMID:25957464

  8. Optimality Of Variable-Length Codes

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Miller, Warner H.; Rice, Robert F.

    1994-01-01

    Report presents analysis of performances of conceptual Rice universal noiseless coders designed to provide efficient compression of data over wide range of source-data entropies. Includes predictive preprocessor that maps source data into sequence of nonnegative integers and variable-length-coding processor, which adapts to varying entropy of source data by selecting whichever one of number of optional codes yields shortest codeword.
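
    For readers unfamiliar with the two-stage structure described above, the sketch below (an illustration under simplifying assumptions, not the actual Rice coder) pairs a previous-sample predictor and zigzag mapping to nonnegative integers with a per-block search over Rice parameters, mimicking how the coder adapts to varying source entropy.

```python
def zigzag(d):
    # Map a signed prediction residual to a nonnegative integer:
    # 0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ...
    return 2 * d if d >= 0 else -2 * d - 1

def rice_encode(n, k):
    # Rice code with parameter k: unary quotient, then k-bit remainder.
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b") if k else "1" * q + "0"

def encode_block(samples, k_options=(0, 1, 2, 3, 4)):
    # Predictive preprocessor: previous-sample predictor, then zigzag.
    residuals = [zigzag(s - p) for p, s in zip([0] + samples[:-1], samples)]
    # Adapt by choosing the option that yields the shortest total length.
    best_k = min(k_options,
                 key=lambda k: sum(len(rice_encode(n, k)) for n in residuals))
    return best_k, "".join(rice_encode(n, best_k) for n in residuals)

k, bits = encode_block([10, 11, 13, 12, 12, 15])  # hypothetical samples
print(k, bits)
```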

  9. Fan Noise Prediction System Development: Source/Radiation Field Coupling and Workstation Conversion for the Acoustic Radiation Code

    NASA Technical Reports Server (NTRS)

    Meyer, H. D.

    1993-01-01

    The Acoustic Radiation Code (ARC) is a finite element program used on the IBM mainframe to predict far-field acoustic radiation from a turbofan engine inlet. In this report, requirements for developers of internal aerodynamic codes regarding use of their program output as input for the ARC are discussed. More specifically, the particular input needed from the Bolt, Beranek and Newman/Pratt and Whitney (turbofan source noise generation) Code (BBN/PWC) is described. In a separate analysis, a method of coupling the source and radiation models, one that recognizes waves crossing the interface in both directions, has been derived. A preliminary version of the coupled code has been developed and used for initial evaluation of coupling issues. Results thus far have shown that reflection from the inlet is sufficient to indicate that full coupling of the source and radiation fields is needed for accurate noise predictions. Also, for this contract, the ARC has been modified for use on the Sun and Silicon Graphics Iris UNIX workstations. Changes and additions involved in this effort are described in an appendix.

  10. An Assessment of Comprehensive Code Prediction State-of-the-Art Using the HART II International Workshop Data

    NASA Technical Reports Server (NTRS)

    vanderWall, Berend G.; Lim, Joon W.; Smith, Marilyn J.; Jung, Sung N.; Bailly, Joelle; Baeder, James D.; Boyd, D. Douglas, Jr.

    2012-01-01

    Despite significant advancements in computational fluid dynamics and their coupling with computational structural dynamics (CSD, or comprehensive codes) for rotorcraft applications, CSD codes with their engineering level of modeling the rotor blade dynamics, the unsteady sectional aerodynamics and the vortical wake are still the workhorse for the majority of applications. This is especially true when a large number of parameter variations is to be performed and their impact on performance, structural loads, vibration and noise is to be judged in an approximate yet reliable and as accurate as possible manner. In this paper, the capabilities of such codes are evaluated using the HART II International Workshop database, focusing on a typical descent operating condition which includes strong blade-vortex interactions. Three cases are of interest: the baseline case and two cases with 3/rev higher harmonic blade root pitch control (HHC) with different control phases employed. One setting is for minimum blade-vortex interaction noise radiation and the other one for minimum vibration generation. The challenge is to correctly predict the wake physics, especially for the cases with HHC, and all the dynamics, aerodynamics, modifications of the wake structure and the aero-acoustics coming with it. It is observed that the comprehensive codes used today have a surprisingly good predictive capability when they appropriately account for all of the physics involved. The minimum requirements to obtain these results are outlined.

  11. The HART II International Workshop: An Assessment of the State-of-the-Art in Comprehensive Code Prediction

    NASA Technical Reports Server (NTRS)

    vanderWall, Berend G.; Lim, Joon W.; Smith, Marilyn J.; Jung, Sung N.; Bailly, Joelle; Baeder, James D.; Boyd, D. Douglas, Jr.

    2013-01-01

    Significant advancements in computational fluid dynamics (CFD) and their coupling with computational structural dynamics (CSD, or comprehensive codes) for rotorcraft applications have been achieved recently. Despite this, CSD codes with their engineering level of modeling the rotor blade dynamics, the unsteady sectional aerodynamics and the vortical wake are still the workhorse for the majority of applications. This is especially true when a large number of parameter variations is to be performed and their impact on performance, structural loads, vibration and noise is to be judged in an approximate yet reliable and as accurate as possible manner. In this article, the capabilities of such codes are evaluated using the HART II International Workshop database, focusing on a typical descent operating condition which includes strong blade-vortex interactions. A companion article addresses the CFD/CSD coupled approach. Three cases are of interest: the baseline case and two cases with 3/rev higher harmonic blade root pitch control (HHC) with different control phases employed. One setting is for minimum blade-vortex interaction noise radiation and the other one for minimum vibration generation. The challenge is to correctly predict the wake physics, especially for the cases with HHC, and all the dynamics, aerodynamics, modifications of the wake structure and the aero-acoustics coming with it. It is observed that the comprehensive codes used today have a surprisingly good predictive capability when they appropriately account for all of the physics involved. The minimum requirements to obtain these results are outlined.

  12. Vortical Flow Prediction using an Adaptive Unstructured Grid Method. Chapter 11

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2009-01-01

    A computational fluid dynamics (CFD) method has been employed to compute vortical flows around slender wing/body configurations. The emphasis of the paper is on the effectiveness of an adaptive grid procedure in "capturing" concentrated vortices generated at sharp edges or flow separation lines of lifting surfaces flying at high angles of attack. The method is based on a tetrahedral unstructured grid technology developed at the NASA Langley Research Center. Two steady-state, subsonic, inviscid and Navier-Stokes flow test cases are presented to demonstrate the applicability of the method for solving vortical flow problems. The first test case concerns vortex flow over a simple 65° delta wing with different values of leading-edge radius. Although the geometry is quite simple, it poses a challenging problem for computing vortices originating from blunt leading edges. The second case is that of a more complex fighter configuration. The superiority of the adapted solutions in capturing the vortex flow structure over the conventional unadapted results is demonstrated by comparisons with the wind-tunnel experimental data. The study shows that numerical prediction of vortical flows is highly sensitive to the local grid resolution and that the implementation of grid adaptation is essential when applying CFD methods to such complicated flow problems.

  13. Predicting organismal vulnerability to climate warming: roles of behaviour, physiology and adaptation.

    PubMed

    Huey, Raymond B; Kearney, Michael R; Krockenberger, Andrew; Holtum, Joseph A M; Jess, Mellissa; Williams, Stephen E

    2012-06-19

    A recently developed integrative framework proposes that the vulnerability of a species to environmental change depends on the species' exposure and sensitivity to environmental change, its resilience to perturbations and its potential to adapt to change. These vulnerability criteria require behavioural, physiological and genetic data. With this information in hand, biologists can predict organisms most at risk from environmental change. Biologists and managers can then target organisms and habitats most at risk. Unfortunately, the required data (e.g. optimal physiological temperatures) are rarely available. Here, we evaluate the reliability of potential proxies (e.g. critical temperatures) that are often available for some groups. Several proxies for ectotherms are promising, but analogous ones for endotherms are lacking. We also develop a simple graphical model of how behavioural thermoregulation, acclimation and adaptation may interact to influence vulnerability over time. After considering this model together with the proxies available for physiological sensitivity to climate change, we conclude that ectotherms sharing vulnerability traits seem concentrated in lowland tropical forests. Their vulnerability may be exacerbated by negative biotic interactions. Whether tropical forest (or other) species can adapt to warming environments is unclear, as genetic and selective data are scant. Nevertheless, the prospects for tropical forest ectotherms appear grim.

  14. Predictive analytics of environmental adaptability in multi-omic network models.

    PubMed

    Angione, Claudio; Lió, Pietro

    2015-01-01

    Bacterial phenotypic traits and lifestyles in response to diverse environmental conditions depend on changes in the internal molecular environment. However, predicting bacterial adaptability is still difficult outside of laboratory controlled conditions. Many molecular levels can contribute to the adaptation to a changing environment: pathway structure, codon usage, metabolism. To measure adaptability to changing environmental conditions and over time, we develop a multi-omic model of Escherichia coli that accounts for metabolism, gene expression and codon usage at both transcription and translation levels. After the integration of multiple omics into the model, we propose a multiobjective optimization algorithm to find the allowable and optimal metabolic phenotypes through concurrent maximization or minimization of multiple metabolic markers. In the condition space, we propose Pareto hypervolume and spectral analysis as estimators of short term multi-omic (transcriptomic and metabolic) evolution, thus enabling comparative analysis of metabolic conditions. We therefore compare, evaluate and cluster different experimental conditions, models and bacterial strains according to their metabolic response in a multidimensional objective space, rather than in the original space of microarray data. We finally validate our methods on a phenomics dataset of growth conditions. Our framework, named METRADE, is freely available as a MATLAB toolbox.
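
    To make the Pareto machinery concrete, the following sketch computes a 2-D Pareto front and its hypervolume with respect to a reference point, the kind of scalar summary the abstract uses to compare conditions. The objective values and condition names are hypothetical, and METRADE itself is a MATLAB toolbox rather than the Python below.

```python
import numpy as np

def pareto_front(points):
    # Keep points not dominated by any other (maximization in both objectives).
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = any(np.all(q >= p) and np.any(q > p)
                        for j, q in enumerate(pts) if j != i)
        if not dominated:
            keep.append(p)
    return np.array(sorted(keep, key=lambda p: p[0]))

def hypervolume_2d(front, ref=(0.0, 0.0)):
    # Area dominated by a 2-D maximization front relative to a reference point.
    # Sweeping in increasing x, the strip height is the current point's y,
    # since y decreases along a Pareto front.
    hv, prev_x = 0.0, ref[0]
    for x, y in sorted(front, key=lambda p: p[0]):
        hv += (x - prev_x) * (y - ref[1])
        prev_x = x
    return hv

# Hypothetical (growth, metabolic-marker) trade-off points per condition.
conditions = {"glucose": [(0.6, 0.2), (0.4, 0.5), (0.2, 0.7)],
              "acetate": [(0.3, 0.3), (0.2, 0.4)]}
scores = {c: hypervolume_2d(pareto_front(pts)) for c, pts in conditions.items()}
print(scores)
```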

  15. Predicting organismal vulnerability to climate warming: roles of behaviour, physiology and adaptation

    PubMed Central

    Huey, Raymond B.; Kearney, Michael R.; Krockenberger, Andrew; Holtum, Joseph A. M.; Jess, Mellissa; Williams, Stephen E.

    2012-01-01

    A recently developed integrative framework proposes that the vulnerability of a species to environmental change depends on the species' exposure and sensitivity to environmental change, its resilience to perturbations and its potential to adapt to change. These vulnerability criteria require behavioural, physiological and genetic data. With this information in hand, biologists can predict organisms most at risk from environmental change. Biologists and managers can then target organisms and habitats most at risk. Unfortunately, the required data (e.g. optimal physiological temperatures) are rarely available. Here, we evaluate the reliability of potential proxies (e.g. critical temperatures) that are often available for some groups. Several proxies for ectotherms are promising, but analogous ones for endotherms are lacking. We also develop a simple graphical model of how behavioural thermoregulation, acclimation and adaptation may interact to influence vulnerability over time. After considering this model together with the proxies available for physiological sensitivity to climate change, we conclude that ectotherms sharing vulnerability traits seem concentrated in lowland tropical forests. Their vulnerability may be exacerbated by negative biotic interactions. Whether tropical forest (or other) species can adapt to warming environments is unclear, as genetic and selective data are scant. Nevertheless, the prospects for tropical forest ectotherms appear grim. PMID:22566674

  16. Predictive analytics of environmental adaptability in multi-omic network models

    PubMed Central

    Angione, Claudio; Lió, Pietro

    2015-01-01

    Bacterial phenotypic traits and lifestyles in response to diverse environmental conditions depend on changes in the internal molecular environment. However, predicting bacterial adaptability is still difficult outside of laboratory controlled conditions. Many molecular levels can contribute to the adaptation to a changing environment: pathway structure, codon usage, metabolism. To measure adaptability to changing environmental conditions and over time, we develop a multi-omic model of Escherichia coli that accounts for metabolism, gene expression and codon usage at both transcription and translation levels. After the integration of multiple omics into the model, we propose a multiobjective optimization algorithm to find the allowable and optimal metabolic phenotypes through concurrent maximization or minimization of multiple metabolic markers. In the condition space, we propose Pareto hypervolume and spectral analysis as estimators of short term multi-omic (transcriptomic and metabolic) evolution, thus enabling comparative analysis of metabolic conditions. We therefore compare, evaluate and cluster different experimental conditions, models and bacterial strains according to their metabolic response in a multidimensional objective space, rather than in the original space of microarray data. We finally validate our methods on a phenomics dataset of growth conditions. Our framework, named METRADE, is freely available as a MATLAB toolbox. PMID:26482106

  17. A Risk-based Model Predictive Control Approach to Adaptive Interventions in Behavioral Health

    PubMed Central

    Zafra-Cabeza, Ascensión; Rivera, Daniel E.; Collins, Linda M.; Ridao, Miguel A.; Camacho, Eduardo F.

    2010-01-01

    This paper examines how control engineering and risk management techniques can be applied in the field of behavioral health through their use in the design and implementation of adaptive behavioral interventions. Adaptive interventions are gaining increasing acceptance as a means to improve prevention and treatment of chronic, relapsing disorders, such as abuse of alcohol, tobacco, and other drugs, mental illness, and obesity. A risk-based Model Predictive Control (MPC) algorithm is developed for a hypothetical intervention inspired by Fast Track, a real-life program whose long-term goal is the prevention of conduct disorders in at-risk children. The MPC-based algorithm decides on the appropriate frequency of counselor home visits, mentoring sessions, and the availability of after-school recreation activities by relying on a model that includes identifiable risks, their costs, and the cost/benefit assessment of mitigating actions. MPC is particularly suited for the problem because of its constraint-handling capabilities, and its ability to scale to interventions involving multiple tailoring variables. By systematically accounting for risks and adapting treatment components over time, an MPC approach as described in this paper can increase intervention effectiveness and adherence while reducing waste, resulting in advantages over conventional fixed treatment. A series of simulations are conducted under varying conditions to demonstrate the effectiveness of the algorithm. PMID:21643450
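
    As a schematic of receding-horizon decision-making in this setting (not the authors' risk model), the toy sketch below assumes a scalar risk state with linear dynamics, enumerates short dosage sequences by brute force, and applies only the first move of the cheapest sequence each week; all parameters and the cost weighting are hypothetical.

```python
import itertools

def simulate(risk, actions, a=0.95, b=0.30):
    # Hypothetical linear risk model: risk decays by factor a and is
    # reduced in proportion to the intervention dosage u.
    traj = []
    for u in actions:
        risk = a * risk - b * u
        traj.append(risk)
    return traj

def mpc_step(risk, horizon=4, dosages=(0.0, 0.5, 1.0), weight=0.2):
    # Brute-force receding-horizon step: minimize predicted squared risk
    # plus intervention cost, then return only the first move.
    def cost(seq):
        return sum(r * r for r in simulate(risk, seq)) + weight * sum(seq)
    best = min(itertools.product(dosages, repeat=horizon), key=cost)
    return best[0]

risk = 2.0
for week in range(6):
    u = mpc_step(risk)
    risk = 0.95 * risk - 0.30 * u  # same hypothetical dynamics as simulate()
    print(f"week {week}: dosage={u:.1f}, risk={risk:.2f}")
```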

  18. A Novel Model Predictive Control Formulation for Hybrid Systems With Application to Adaptive Behavioral Interventions

    PubMed Central

    Nandola, Naresh N.; Rivera, Daniel E.

    2010-01-01

    This paper presents a novel model predictive control (MPC) formulation for linear hybrid systems. The algorithm relies on a multiple-degree-of-freedom formulation that enables the user to adjust the speed of setpoint tracking, measured disturbance rejection and unmeasured disturbance rejection independently in the closed-loop system. Consequently, controller tuning is more flexible and intuitive than relying on move suppression weights as traditionally used in MPC schemes. The formulation is motivated by the need to achieve robust performance in using the algorithm in emerging applications, for instance, as a decision policy for adaptive, time-varying interventions used in behavioral health. The proposed algorithm is demonstrated on a hypothetical adaptive intervention problem inspired by the Fast Track program, a real-life preventive intervention for improving parental function and reducing conduct disorder in at-risk children. Simulation results in the presence of simultaneous disturbances and significant plant-model mismatch are presented. These demonstrate that a hybrid MPC-based approach for this class of interventions can be tuned for desired performance under demanding conditions that resemble participant variability that is experienced in practice when applying an adaptive intervention to a population. PMID:20830213

  19. Presence of Motor-Intentional Aiming Deficit Predicts Functional Improvement of Spatial Neglect with Prism Adaptation

    PubMed Central

    Goedert, Kelly M.; Chen, Peii; Boston, Raymond C.; Foundas, Anne L.; Barrett, A. M.

    2013-01-01

    Spatial neglect is a debilitating disorder for which there is no agreed upon course of rehabilitation. The lack of consensus on treatment may result from systematic differences in the syndromes’ characteristics, with spatial cognitive deficits potentially affecting perceptual-attentional Where or motor-intentional Aiming spatial processing. Heterogeneity of response to treatment might be explained by different treatment impact on these dissociated deficits: prism adaptation, for example, might reduce Aiming deficits without affecting Where spatial deficits. Here, we tested the hypothesis that classifying patients by their profile of Where-vs-Aiming spatial deficit would predict response to prism adaptation, and specifically that patients with Aiming bias would have better recovery than those with isolated Where bias. We classified the spatial errors of 24 sub-acute right-stroke survivors with left spatial neglect as: 1) isolated Where bias, 2) isolated Aiming bias or 3) both. Participants then completed two weeks of prism adaptation treatment. They also completed the Behavioral Inattention Test (BIT) and Catherine Bergego Scale (CBS) tests of neglect recovery weekly for six weeks. As hypothesized, participants with only Aiming deficits improved on the CBS, whereas, those with only Where deficits did not improve. Participants with both deficits demonstrated intermediate improvement. These results support behavioral classification of spatial neglect patients as a potential valuable tool for assigning targeted, effective early rehabilitation. PMID:24376064

  20. Predicting coral bleaching hotspots: the role of regional variability in thermal stress and potential adaptation rates

    NASA Astrophysics Data System (ADS)

    Teneva, Lida; Karnauskas, Mandy; Logan, Cheryl A.; Bianucci, Laura; Currie, Jock C.; Kleypas, Joan A.

    2012-03-01

    Sea surface temperature fields (1870-2100) forced by CO2-induced climate change under the IPCC SRES A1B CO2 scenario, from three World Climate Research Programme Coupled Model Intercomparison Project Phase 3 (WCRP CMIP3) models (CCSM3, CSIRO MK 3.5, and GFDL CM 2.1), were used to examine how coral sensitivity to thermal stress and rates of adaptation affect global projections of coral-reef bleaching. The focus of this study was two-fold: (1) to assess how the choice of Degree-Heating-Month (DHM) thermal stress threshold affects potential bleaching predictions and (2) to examine the effect of hypothetical adaptation rates of corals to rising temperature. DHM values were estimated using a conventional threshold of 1°C and a variability-based threshold of 2σ above the climatological maximum. Coral adaptation rates were simulated as a function of historical 100-year exposure to maximum annual SSTs with a dynamic rather than static climatological maximum based on the previous 100 years, for a given reef cell. Within CCSM3 simulations, the 1°C threshold predicted later onset of mild bleaching every 5 years for the fraction of reef grid cells where 1°C > 2σ of the climatology time series of annual SST maxima (1961-1990). Alternatively, DHM values using both thresholds, with CSIRO MK 3.5 and GFDL CM 2.1 SSTs, did not produce drastically different onset timing for bleaching every 5 years. Across models, DHMs based on the 1°C thermal stress threshold show the most threatened reefs by 2100 could be in the Central and Western Equatorial Pacific, whereas use of the variability-based threshold for DHMs yields the Coral Triangle and parts of Micronesia and Melanesia as bleaching hotspots. Simulations that allow corals to adapt to increases in maximum SST drastically reduce the rates of bleaching. These findings highlight the importance of considering the thermal stress threshold in DHM estimates as well as potential adaptation models in future coral bleaching projections.
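
    A minimal sketch of the threshold comparison is given below: it accumulates monthly SST exceedances above the climatological maximum plus either a fixed 1°C offset or a 2σ offset. The rolling 4-month accumulation window and all SST values are assumptions for illustration, not taken from the study.

```python
import numpy as np

def degree_heating_months(sst, clim_max, threshold, window=4):
    # Accumulate monthly SST exceedances above (clim_max + threshold)
    # over a rolling window (window length assumed here: 4 months).
    excess = np.maximum(sst - (clim_max + threshold), 0.0)
    return np.array([excess[max(0, i - window + 1): i + 1].sum()
                     for i in range(len(sst))])

rng = np.random.default_rng(0)
sst = 28.0 + 1.5 * rng.standard_normal(120)  # hypothetical monthly SSTs (°C)
clim_max, sigma = 29.0, 0.8                  # climatological max and its s.d.
dhm_fixed = degree_heating_months(sst, clim_max, threshold=1.0)
dhm_var = degree_heating_months(sst, clim_max, threshold=2.0 * sigma)
print((dhm_fixed > 0).sum(), (dhm_var > 0).sum())  # months flagged by each rule
```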

  1. A predictive model to inform adaptive management of double-crested cormorants and fisheries in Michigan

    USGS Publications Warehouse

    Tsehaye, Iyob; Jones, Michael L.; Irwin, Brian J.; Fielder, David G.; Breck, James E.; Luukkonen, David R.

    2015-01-01

    The proliferation of double-crested cormorants (DCCOs; Phalacrocorax auritus) in North America has raised concerns over their potential negative impacts on game, cultured and forage fishes, island and terrestrial resources, and other colonial water birds, leading to increased public demands to reduce their abundance. By combining fish surplus production and bird functional feeding response models, we developed a deterministic predictive model representing bird–fish interactions to inform an adaptive management process for the control of DCCOs in multiple colonies in Michigan. Comparisons of model predictions with observations of changes in DCCO numbers under management measures implemented from 2004 to 2012 suggested that our relatively simple model was able to accurately reconstruct past DCCO population dynamics. These comparisons helped discriminate among alternative parameterizations of demographic processes that were poorly known, especially site fidelity. Using sensitivity analysis, we also identified remaining critical uncertainties (mainly in the spatial distributions of fish vs. DCCO feeding areas) that can be used to prioritize future research and monitoring needs. Model forecasts suggested that continuation of existing control efforts would be sufficient to achieve long-term DCCO control targets in Michigan and that DCCO control may be necessary to achieve management goals for some DCCO-impacted fisheries in the state. Finally, our model can be extended by accounting for parametric or ecological uncertainty and including more complex assumptions on DCCO–fish interactions as part of the adaptive management process.
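
    The model couples fish surplus production with a bird functional feeding response. The sketch below shows one way such a coupling can look, using logistic surplus production and a Holling type-II response with hypothetical parameter values; it is not the published parameterization.

```python
def step(fish, birds, r=0.4, K=1_000_000, a=1e-4, h=0.01, cull=0.10):
    # One annual update: logistic surplus production for the fish stock,
    # a Holling type-II per-bird consumption rate, and bird growth
    # followed by a proportional management cull. Parameters hypothetical.
    eaten_per_bird = a * fish / (1.0 + a * h * fish)
    fish_next = fish + r * fish * (1.0 - fish / K) - eaten_per_bird * birds
    birds_next = birds * 1.05 * (1.0 - cull)
    return max(fish_next, 0.0), max(birds_next, 0.0)

fish, birds = 600_000.0, 5_000.0
for year in range(10):
    fish, birds = step(fish, birds)
    print(f"year {year}: fish={fish:,.0f}, birds={birds:,.0f}")
```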

  2. The biology of developmental plasticity and the Predictive Adaptive Response hypothesis

    PubMed Central

    Bateson, Patrick; Gluckman, Peter; Hanson, Mark

    2014-01-01

    Many forms of developmental plasticity have been observed and these are usually beneficial to the organism. The Predictive Adaptive Response (PAR) hypothesis refers to a form of developmental plasticity in which cues received in early life influence the development of a phenotype that is normally adapted to the environmental conditions of later life. When the predicted and actual environments differ, the mismatch between the individual's phenotype and the conditions in which it finds itself can have adverse consequences for Darwinian fitness and, later, for health. Numerous examples exist of the long-term effects of cues indicating a threatening environment affecting the subsequent phenotype of the individual organism. Other examples consist of the long-term effects of variations in environment within a normal range, particularly in the individual's nutritional environment. In mammals the cues to developing offspring are often provided by the mother's plane of nutrition, her body composition or stress levels. This hypothetical effect in humans is thought to be important by some scientists and controversial by others. In resolving the conflict, distinctions should be drawn between PARs induced by normative variations in the developmental environment and the ill effects on development of extremes in environment such as a very poor or very rich nutritional environment. Tests to distinguish between different developmental processes impacting on adult characteristics are proposed. Many of the mechanisms underlying developmental plasticity involve molecular epigenetic processes, and their elucidation in the context of PARs and more widely has implications for the revision of classical evolutionary theory. PMID:24882817

  3. Predicting neutron diffusion eigenvalues with a query-based adaptive neural architecture.

    PubMed

    Lysenko, M G; Wong, H I; Maldonado, G I

    1999-01-01

    A query-based approach for adaptively retraining and restructuring a two-hidden-layer artificial neural network (ANN) has been developed for the speedy prediction of the fundamental mode eigenvalue of the neutron diffusion equation, a standard nuclear reactor core design calculation which normally requires the iterative solution of a large-scale system of nonlinear partial differential equations (PDE's). The approach developed focuses primarily upon the adaptive selection of training and cross-validation data and on artificial neural-network (ANN) architecture adjustments, with the objective of improving the accuracy and generalization properties of ANN-based neutron diffusion eigenvalue predictions. For illustration, the performance of a "bare bones" feedforward multilayer perceptron (MLP) is upgraded through a variety of techniques; namely, nonrandom initial training set selection, adjoint function input weighting, teacher-student membership and equivalence queries for generation of appropriate training data, and a dynamic node architecture (DNA) implementation. The global methodology is flexible in that it can "wrap around" any specific training algorithm selected for the static calculations (i.e., training iterations with a fixed training set and architecture). Finally, the improvements obtained are carefully contrasted against past works reported in the literature.

  4. Self-Adaptive MOEA Feature Selection for Classification of Bankruptcy Prediction Data

    PubMed Central

    Gaspar-Cunha, A.; Recio, G.; Costa, L.; Estébanez, C.

    2014-01-01

    Bankruptcy prediction is a vast area of finance and accounting whose importance lies in its relevance for creditors and investors in evaluating the likelihood of bankruptcy. As companies become complex, they develop sophisticated schemes to hide their real situation. In turn, estimating the credit risks associated with counterparts or predicting bankruptcy becomes harder. Evolutionary algorithms have shown themselves to be an excellent tool for dealing with complex problems in finance and economics where a large number of irrelevant features are involved. This paper provides a methodology for feature selection in the classification of bankruptcy data sets using an evolutionary multiobjective approach that simultaneously minimises the number of features and maximises the classifier quality measure (e.g., accuracy). The proposed methodology makes use of self-adaptation by applying the feature selection algorithm while simultaneously optimising the parameters of the classifier used. The methodology was applied to four different sets of data. The obtained results showed the utility of using the self-adaptation of the classifier. PMID:24707201

  5. Self-adaptive MOEA feature selection for classification of bankruptcy prediction data.

    PubMed

    Gaspar-Cunha, A; Recio, G; Costa, L; Estébanez, C

    2014-01-01

    Bankruptcy prediction is a vast area of finance and accounting whose importance lies in its relevance for creditors and investors in evaluating the likelihood of bankruptcy. As companies become complex, they develop sophisticated schemes to hide their real situation. In turn, estimating the credit risks associated with counterparts or predicting bankruptcy becomes harder. Evolutionary algorithms have shown themselves to be an excellent tool for dealing with complex problems in finance and economics where a large number of irrelevant features are involved. This paper provides a methodology for feature selection in the classification of bankruptcy data sets using an evolutionary multiobjective approach that simultaneously minimises the number of features and maximises the classifier quality measure (e.g., accuracy). The proposed methodology makes use of self-adaptation by applying the feature selection algorithm while simultaneously optimising the parameters of the classifier used. The methodology was applied to four different sets of data. The obtained results showed the utility of using the self-adaptation of the classifier. PMID:24707201

  6. Adaptive network based on fuzzy inference system for equilibrated urea concentration prediction.

    PubMed

    Azar, Ahmad Taher

    2013-09-01

    Post-dialysis urea rebound (PDUR) has been attributed mostly to redistribution of urea from different compartments, which is determined by variations in regional blood flows and transcellular urea mass transfer coefficients. PDUR occurs 30-90 min after short or standard hemodialysis (HD) sessions and 60 min after long 8-h HD sessions, which is inconvenient. This paper presents an adaptive network-based fuzzy inference system (ANFIS) for predicting intradialytic (Cint) and post-dialysis urea concentrations (Cpost) in order to predict the equilibrated (Ceq) urea concentrations without any blood sampling from dialysis patients. The accuracy of the developed system was prospectively compared with other traditional methods for predicting equilibrated urea (Ceq), post-dialysis urea rebound (PDUR), and equilibrated dialysis dose (eKt/V). This comparison is based on root-mean-square error (RMSE), normalized root-mean-square error (NRMSE), and mean absolute percentage error (MAPE). The ANFIS predictor for Ceq achieved mean RMSE values of 0.3654 and 0.4920 for training and testing, respectively. The statistical analysis demonstrated that no statistically significant difference was found between the predicted and the measured values. The MAPE and RMSE for the testing phase are 0.63% and 0.96%, respectively. PMID:23806679

  7. Prediction of antimicrobial peptides based on the adaptive neuro-fuzzy inference system application.

    PubMed

    Fernandes, Fabiano C; Rigden, Daniel J; Franco, Octavio L

    2012-01-01

    Antimicrobial peptides (AMPs) are widely distributed defense molecules and represent a promising alternative for solving the problem of antibiotic resistance. Nevertheless, the experimental time required to screen putative AMPs makes computational simulations based on peptide sequence analysis and/or molecular modeling extremely attractive. Artificial intelligence methods acting as simulation and prediction tools are of great importance in helping to efficiently discover and design novel AMPs. In the present study, state-of-the-art published outcomes using different prediction methods and databases were compared to an adaptive neuro-fuzzy inference system (ANFIS) model. Data from our study showed that ANFIS obtained an accuracy of 96.7% and a Matthews correlation coefficient (MCC) of 0.936, which proved it to be an efficient model for pattern recognition in antimicrobial peptide prediction. Furthermore, a lower number of input parameters was needed for the ANFIS model, improving the speed and ease of prediction. In summary, due to the fuzzy nature of AMP physicochemical properties, the ANFIS approach presented here can provide an efficient solution for screening putative AMP sequences and for exploration of properties characteristic of AMPs. PMID:23193592

  9. Code requirements document: MODFLOW 2.1: A program for predicting moderator flow patterns

    SciTech Connect

    Peterson, P.F.; Paik, I.K.

    1992-03-01

    Sudden changes in the temperature of flowing liquids can result in transient buoyancy forces which strongly impact the flow hydrodynamics via flow stratification. These effects have been studied for the case of potential flow of stratified liquids to line sinks, but not for moderator flow in SRS reactors. Standard codes, such as TRAC and COMMIX, do not have the capability to capture the stratification effect, due to strong numerical diffusion which smears away the hot/cold fluid interface. A related problem with standard codes is the inability to track plumes injected into the liquid flow, again due to numerical diffusion. The combined effects of buoyant stratification and plume dispersion have been identified as being important in operation of the Supplementary Safety System which injects neutron-poison ink into SRS reactors to provide safe shutdown in the event of safety rod failure. The MODFLOW code discussed here provides transient moderator flow pattern information with stratification effects, and tracks the location of ink plumes in the reactor. The code, written in Fortran, is compiled for Macintosh II computers, and includes subroutines for interactive control and graphical output. Removing the graphics capabilities, the code can also be compiled on other computers. With graphics, in addition to the capability to perform safety related computations, MODFLOW also provides an easy tool for becoming familiar with flow distributions in SRS reactors.

  10. Fast subpel motion estimation for H.264/advanced video coding with an adaptive motion vector accuracy decision

    NASA Astrophysics Data System (ADS)

    Lee, Hoyoung; Jung, Bongsoo; Jung, Jooyoung; Jeon, Byeungwoo

    2012-11-01

    The quarter-pel motion vector accuracy supported by H.264/advanced video coding (AVC) in motion estimation (ME) and compensation (MC) provides high compression efficiency. However, it also increases the computational complexity. While various well-known fast integer-pel ME methods are already available, lack of a good, fast subpel ME method results in problems associated with relatively high computational complexity. This paper presents one way of solving the complexity problem of subpel ME by making adaptive motion vector (MV) accuracy decisions in inter-mode selection. The proposed MV accuracy decision is made using inter-mode selection of a macroblock with two decision criteria. Pixels are classified as stationary (and/or homogeneous) or nonstationary (and/or nonhomogeneous). In order to avoid unnecessary interpolation and processing, a proper subpel ME level is chosen among four different combinations, each of which has a different MV accuracy and number of subpel ME iterations based on the classification. Simulation results using an open source x264 software encoder show that without any noticeable degradation (by -0.07 dB on average), the proposed method reduces total encoding time and subpel ME time, respectively, by 51.78% and by 76.49% on average, as compared to the conventional full-pel pixel search.
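
    The decision logic described above can be sketched as follows; the variance and SAD thresholds, and the mapping from the stationary/homogeneous classification to one of four subpel levels, are hypothetical stand-ins rather than the rule used in the paper or in x264.

```python
import numpy as np

def subpel_level(block, best_int_sad, var_thresh=200.0, sad_thresh=500.0):
    # Choose among four motion-vector accuracy settings:
    # 0 = integer-pel only ... 3 = full quarter-pel refinement.
    # Stationary and/or homogeneous blocks get cheaper settings.
    homogeneous = np.var(block) < var_thresh   # simple texture measure
    stationary = best_int_sad < sad_thresh     # good integer-pel match
    if homogeneous and stationary:
        return 0
    if homogeneous or stationary:
        return 1
    return 3 if np.var(block) > 4 * var_thresh else 2

rng = np.random.default_rng(1)
block = rng.integers(0, 255, (16, 16)).astype(float)  # hypothetical macroblock
print(subpel_level(block, best_int_sad=420.0))
```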

  11. Performance Improvement of the Goertzel Algorithm in Estimating of Protein Coding Regions Using Modified Anti-notch Filter and Linear Predictive Coding Model

    PubMed Central

    Farsani, Mahsa Saffari; Sahhaf, Masoud Reza Aghabozorgi; Abootalebi, Vahid

    2016-01-01

    The aim of this paper is to improve the performance of the conventional Goertzel algorithm in determining the protein coding regions in deoxyribonucleic acid (DNA) sequences. First, the symbolic DNA sequences are converted into numerical signals using the electron ion interaction potential method. Then, by combining the modified anti-notch filter and a linear predictive coding model, we propose an efficient algorithm to improve the Goertzel algorithm's estimation of genetic regions. Finally, a thresholding method is applied to precisely identify the exon and intron regions. The proposed algorithm is applied to several genes, including genes available in the BG570 and HMR195 databases, and the results are compared to other methods based on nucleotide-level evaluation criteria. Results demonstrate that our proposed method reduces the number of nucleotides incorrectly estimated to be in the noncoding region. In addition, the area under the receiver operating characteristic curve improved by factors of 1.35 and 1.12 on the HMR195 and BG570 datasets, respectively, in comparison with the conventional Goertzel algorithm. PMID:27563569

  12. Performance Improvement of the Goertzel Algorithm in Estimating of Protein Coding Regions Using Modified Anti-notch Filter and Linear Predictive Coding Model.

    PubMed

    Farsani, Mahsa Saffari; Sahhaf, Masoud Reza Aghabozorgi; Abootalebi, Vahid

    2016-01-01

    The aim of this paper is to improve the performance of the conventional Goertzel algorithm in determining the protein coding regions in deoxyribonucleic acid (DNA) sequences. First, the symbolic DNA sequences are converted into numerical signals using the electron ion interaction potential method. Then, by combining the modified anti-notch filter and a linear predictive coding model, we propose an efficient algorithm to improve the Goertzel algorithm's estimation of genetic regions. Finally, a thresholding method is applied to precisely identify the exon and intron regions. The proposed algorithm is applied to several genes, including genes available in the BG570 and HMR195 databases, and the results are compared to other methods based on nucleotide-level evaluation criteria. Results demonstrate that our proposed method reduces the number of nucleotides incorrectly estimated to be in the noncoding region. In addition, the area under the receiver operating characteristic curve improved by factors of 1.35 and 1.12 on the HMR195 and BG570 datasets, respectively, in comparison with the conventional Goertzel algorithm. PMID:27563569
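
    The core of both records is the Goertzel evaluation of the period-3 component of an EIIP-mapped DNA signal. The sketch below shows that stage alone (the anti-notch filter, LPC model, and thresholding steps are omitted); the 351-sample window is a common choice assumed here, not taken from the paper.

```python
import math

EIIP = {"A": 0.1260, "C": 0.1340, "G": 0.0806, "T": 0.1335}

def goertzel_power(x, freq):
    # Goertzel recursion: spectral power of sequence x at normalized
    # frequency freq (cycles per sample).
    coeff = 2.0 * math.cos(2.0 * math.pi * freq)
    s1 = s2 = 0.0
    for sample in x:
        s1, s2 = sample + coeff * s1 - s2, s1
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def period3_profile(dna, window=351, step=3):
    # Slide a window along the numeric (EIIP) signal and evaluate the
    # power at frequency 1/3, where protein-coding regions tend to peak.
    x = [EIIP[b] for b in dna.upper()]
    return [goertzel_power(x[i:i + window], 1.0 / 3.0)
            for i in range(0, len(x) - window + 1, step)]

profile = period3_profile("ATGGCC" * 120)  # hypothetical sequence
print(max(profile))
```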

  13. Thermal treatments of foods: a predictive general-purpose code for heat and mass transfer

    NASA Astrophysics Data System (ADS)

    Barba, Anna Angela

    2005-05-01

    Thermal treatments of foods require accurate processing protocols. In this context, mathematical modeling of heat and mass transfer can play an important role in the control and definition of process parameters as well as in the design of processing systems. In this work, a code able to simulate heat and mass transfer phenomena within solid bodies has been developed. The code has been written with the ability to describe different geometries, and it can account for many different kinds of initial/boundary conditions. Transport phenomena within multi-layer bodies can be described, and time- and position-dependent material parameters can be implemented. Finally, the code has been validated by comparison with a problem for which the analytical solution is known, and by comparison with a differential scanning calorimetry signal that described the heating treatment of a raw potato (Solanum tuberosum).

  14. Interest Level in 2-Year-Olds with Autism Spectrum Disorder Predicts Rate of Verbal, Nonverbal, and Adaptive Skill Acquisition

    ERIC Educational Resources Information Center

    Klintwall, Lars; Macari, Suzanne; Eikeseth, Svein; Chawarska, Katarzyna

    2015-01-01

    Recent studies have suggested that skill acquisition rates for children with autism spectrum disorders receiving early interventions can be predicted by child motivation. We examined whether level of interest during an Autism Diagnostic Observation Schedule assessment at 2 years predicts subsequent rates of verbal, nonverbal, and adaptive skill…

  15. Predicting animal δ18O: Accounting for diet and physiological adaptation

    NASA Astrophysics Data System (ADS)

    Kohn, Matthew J.

    1996-12-01

    Theoretical predictions and measured isotope variations indicate that diet and physiological adaptation have a significant impact on animal δ18O and cannot be ignored. A generalized model is therefore developed for the prediction of animal body water and phosphate δ18O to incorporate these factors quantitatively. Application of the model reproduces most published compositions and compositional trends for mammals and birds. A moderate dependence of animal δ18O on humidity is predicted for drought-tolerant animals, and the correlation between humidity and North American deer bone composition as corrected for local meteoric water is predicted within the scatter of the data. In contrast to an observed strong correlation between kangaroo δ18O and humidity (Δδ18O/Δh ∼ 2.5 ± 0.4‰ per 10% r.h.), the predicted humidity dependence is only 1.3-1.7‰ per 10% r.h., and it is inferred that drinking water in hot dry areas of Australia is enriched in 18O over rainwater. Differences in physiology and water turnover readily explain the observed differences in δ18O for several herbivore genera in East Africa, excepting antelopes. Antelope models are more sensitive to biological fractionations, and adjustments to the flux of transcutaneous water vapor within experimentally measured ranges allow their δ18O values to be matched. Models of the seasonal changes of forage composition for two regions with dissimilar climates show that significant seasonal variations in animal isotope composition are expected, and that animals with different physiologies and diets track climate differently. Analysis of different genera with disparate sensitivities to surface water and humidity will allow the most accurate quantification of past climate changes.

  16. Vision: Efficient Adaptive Coding

    PubMed Central

    Burr, David; Cicchini, Guido Marco

    2016-01-01

    Recent studies show that perception is driven not only by the stimuli currently impinging on our senses, but also by the immediate past history. The influence of recent perceptual history on the present reflects the action of efficient mechanisms that exploit temporal redundancies in natural scenes. PMID:25458222

  17. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted files.

  18. An adaptive distance-based group contribution method for thermodynamic property prediction.

    PubMed

    He, Tanjin; Li, Shuang; Chi, Yawei; Zhang, Hong-Bo; Wang, Zhi; Yang, Bin; He, Xin; You, Xiaoqing

    2016-09-14

    In the search for an accurate yet inexpensive method to predict thermodynamic properties of large hydrocarbon molecules, we have developed an automatic and adaptive distance-based group contribution (DBGC) method. The method characterizes the group interaction within a molecule with an exponential decay function of the group-to-group distance, defined as the number of bonds between the groups. A database containing the molecular bonding information and the standard enthalpy of formation (Hf,298K) for alkanes, alkenes, and their radicals at the M06-2X/def2-TZVP//B3LYP/6-31G(d) level of theory was constructed. Multiple linear regression (MLR) and artificial neural network (ANN) fitting were used to obtain the contributions from individual groups and group interactions for further predictions. Compared with the conventional group additivity (GA) method, the DBGC method predicts Hf,298K for alkanes more accurately using the same training sets. Particularly for some highly branched large hydrocarbons, the discrepancy with the literature data is smaller for the DBGC method than the conventional GA method. When extended to other molecular classes, including alkenes and radicals, the overall accuracy level of this new method is still satisfactory. PMID:27522953
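
    A minimal sketch of the distance-based idea follows: group-pair features are accumulated as exp(-decay·d), with d the number of bonds between two groups, and group-interaction contributions would then come from a least-squares fit over many molecules. The molecule encoding, pair binning, and decay constant are all assumptions for illustration.

```python
import itertools
import numpy as np

def interaction_features(group_labels, dist, pair_index, decay=0.5):
    # Sum exp(-decay * d_ij) over all group pairs, binned by pair type,
    # where d_ij is the number of bonds between groups i and j.
    feats = np.zeros(len(pair_index))
    for i, j in itertools.combinations(range(len(group_labels)), 2):
        pair = tuple(sorted((group_labels[i], group_labels[j])))
        feats[pair_index[pair]] += np.exp(-decay * dist[i][j])
    return feats

# Hypothetical molecule: n-butane as four CH3/CH2 groups on a chain.
labels = ["CH3", "CH2", "CH2", "CH3"]
dist = [[abs(i - j) for j in range(4)] for i in range(4)]  # bonds along chain
pairs = {("CH2", "CH2"): 0, ("CH2", "CH3"): 1, ("CH3", "CH3"): 2}
X = interaction_features(labels, dist, pairs)
print(X)
# With features X_all from many molecules and reference enthalpies Hf_298K,
# the contributions would follow from, e.g.:
# coefs, *_ = np.linalg.lstsq(X_all, Hf_298K, rcond=None)
```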

  19. Predicting demographically sustainable rates of adaptation: can great tit breeding time keep pace with climate change?

    PubMed

    Gienapp, Phillip; Lof, Marjolein; Reed, Thomas E; McNamara, John; Verhulst, Simon; Visser, Marcel E

    2013-01-19

    Populations need to adapt to sustained climate change, which requires micro-evolutionary change in the long term. A key question is how the rate of this micro-evolutionary change compares with the rate of environmental change, given that theoretically there is a 'critical rate of environmental change' beyond which increased maladaptation leads to population extinction. Here, we parametrize two closely related models to predict this critical rate using data from a long-term study of great tits (Parus major). We used stochastic dynamic programming to predict changes in optimal breeding time under three different climate scenarios. Using these results we parametrized two theoretical models to predict critical rates. Results from both models agreed qualitatively in that even 'mild' rates of climate change would be close to these critical rates with respect to great tit breeding time, while for scenarios close to the upper limit of IPCC climate projections the calculated critical rates would be clearly exceeded with possible consequences for population persistence. We therefore tentatively conclude that micro-evolution, together with plasticity, would rescue only the population from mild rates of climate change, although the models make many simplifying assumptions that remain to be tested.

  20. A new code for predicting the thermo-mechanical and irradiation behavior of metallic fuels in sodium fast reactors

    NASA Astrophysics Data System (ADS)

    Karahan, Aydın; Buongiorno, Jacopo

    2010-01-01

    An engineering code to predict the irradiation behavior of U-Zr and U-Pu-Zr metallic alloy fuel pins and UO2-PuO2 mixed oxide fuel pins in sodium-cooled fast reactors was developed. The code was named Fuel Engineering and Structural analysis Tool (FEAST). FEAST has several modules working in coupled form with an explicit numerical algorithm. These modules describe fission gas release and fuel swelling, fuel chemistry and restructuring, temperature distribution, fuel-clad chemical interaction, and fuel and clad mechanical analysis including transient creep-fracture for the clad. Given the fuel pin geometry, composition and irradiation history, FEAST can analyze fuel and clad thermo-mechanical behavior at both steady-state and design-basis (non-disruptive) transient scenarios. FEAST was written in FORTRAN-90 and has a simple input file similar to that of the LWR fuel code FRAPCON. The metal-fuel version is called FEAST-METAL, and is described in this paper. The oxide-fuel version, FEAST-OXIDE, is described in a companion paper. With respect to the old Argonne National Laboratory code LIFE-METAL and other same-generation codes, FEAST-METAL emphasizes more mechanistic, less empirical models, whenever available. Specifically, fission gas release and swelling are modeled with the GRSIS algorithm, which is based on detailed tracking of fission gas bubbles within the metal fuel. Migration of the fuel constituents is modeled by means of thermo-transport theory. Fuel-clad chemical interaction models based on precipitation kinetics were developed for steady-state operation and transients. Finally, a transient intergranular creep-fracture model for the clad, which tracks the nucleation and growth of the cavities at the grain boundaries, was developed for and implemented in the code. Reducing the empiricism in the constitutive models should make it more acceptable to extrapolate FEAST-METAL to new fuel compositions and higher burnup, as envisioned in advanced sodium reactors.

  1. Prediction of Corrosion Resistance of Some Dental Metallic Materials with an Adaptive Regression Model

    NASA Astrophysics Data System (ADS)

    Chelariu, Romeu; Suditu, Gabriel Dan; Mareci, Daniel; Bolat, Georgiana; Cimpoesu, Nicanor; Leon, Florin; Curteanu, Silvia

    2015-04-01

    The aim of this study is to investigate the electrochemical behavior of some dental metallic materials in artificial saliva for different pH (5.6 and 3.4), NaF content (500 ppm, 1000 ppm, and 2000 ppm), and with albumin protein addition (0.6 wt.%) for pH 3.4. The corrosion resistance of the alloys was quantitatively evaluated by polarization resistance, estimated by electrochemical impedance spectroscopy method. An adaptive k-nearest-neighbor regression method was applied for evaluating the corrosion resistance of the alloys by simulation, depending on the operation conditions. The predictions provided by the model are useful for experimental practice, as they can replace or, at least, help to plan the experiments. The accurate results obtained prove that the developed model is reliable and efficient.
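
    As an illustration of the regression step (hypothetical data, and a plain distance-weighted variant rather than the adaptive method of the paper), the sketch below predicts polarization resistance from pH, NaF content, and albumin addition:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3, eps=1e-9):
    # Distance-weighted k-nearest-neighbor regression.
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + eps)
    return np.sum(w * y_train[idx]) / np.sum(w)

# Hypothetical training set: [pH, NaF (ppm), albumin (wt.%)] -> Rp (kOhm cm^2)
X = np.array([[5.6, 500, 0.0], [5.6, 1000, 0.0], [5.6, 2000, 0.0],
              [3.4, 500, 0.0], [3.4, 1000, 0.6], [3.4, 2000, 0.6]], dtype=float)
y = np.array([120.0, 95.0, 60.0, 70.0, 55.0, 30.0])
# Standardize features so pH and ppm contribute comparably to distances.
mu, sd = X.mean(axis=0), X.std(axis=0)
query = (np.array([3.4, 1500, 0.6]) - mu) / sd
print(knn_predict((X - mu) / sd, y, query))
```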

  2. Predictive wind turbine simulation with an adaptive lattice Boltzmann method for moving boundaries

    NASA Astrophysics Data System (ADS)

    Deiterding, Ralf; Wood, Stephen L.

    2016-09-01

    Operating horizontal axis wind turbines create large-scale turbulent wake structures that affect the power output of downwind turbines considerably. The computational prediction of this phenomenon is challenging as efficient low dissipation schemes are necessary that represent the vorticity production by the moving structures accurately and that are able to transport wakes without significant artificial decay over distances of several rotor diameters. We have developed a parallel adaptive lattice Boltzmann method for large eddy simulation of turbulent weakly compressible flows with embedded moving structures that considers these requirements rather naturally and enables first principle simulations of wake-turbine interaction phenomena at reasonable computational costs. The paper describes the employed computational techniques and presents validation simulations for the Mexnext benchmark experiments as well as simulations of the wake propagation in the Scaled Wind Farm Technology (SWIFT) array consisting of three Vestas V27 turbines in triangular arrangement.

  3. Predicted performance benefits of an adaptive digital engine control system of an F-15 airplane

    NASA Technical Reports Server (NTRS)

    Burcham, F. W., Jr.; Myers, L. P.; Ray, R. J.

    1985-01-01

    The highly integrated digital electronic control (HIDEC) program will demonstrate and evaluate the improvements in performance and mission effectiveness that result from integrating engine-airframe control systems. Currently this is accomplished on the NASA Ames Research Center's F-15 airplane. The two control modes used to implement the systems are an integrated flightpath management mode and an integrated adaptive engine control system (ADECS) mode. The ADECS mode is a highly integrated mode in which the airplane flight conditions, the resulting inlet distortion, and the available engine stall margin are continually computed. The excess stall margin is traded for thrust. The predicted increase in engine performance due to the ADECS mode is presented in this report.

  4. Adaptive Scheduling for QoS Virtual Machines under Different Resource Allocation - Performance Effects and Predictability

    NASA Astrophysics Data System (ADS)

    Sodan, Angela C.

    Virtual machines have become an important approach to provide performance isolation and performance guarantees (QoS) on cluster servers and on many-core SMP servers. Many-core CPUs are a current trend in CPU design and require jobs to be parallel for exploitation of the performance potential. Very promising for batch job scheduling with virtual machines on both cluster servers and many-core SMP servers is adaptive scheduling, which can adjust sizes of parallel jobs to consider different load situations and different resource availability. Then, the resource allocation and resource partitioning can be determined at virtual-machine level and be propagated down to the job sizes. The paper investigates job re-sizing and virtual-machine resizing, and the effects which the efficiency curve of the jobs has on the resulting performance. Additionally, the paper presents a simple, yet effective queuing-model approach for predicting performance under different resource allocation.

  5. Adaptive neuro-fuzzy prediction of modulation transfer function of optical lens system

    NASA Astrophysics Data System (ADS)

    Petković, Dalibor; Shamshirband, Shahaboddin; Anuar, Nor Badrul; Md Nasir, Mohd Hairul Nizam; Pavlović, Nenad T.; Akib, Shatirah

    2014-07-01

    The quantitative assessment of image quality is an important consideration in any type of imaging system. The modulation transfer function (MTF) is a graphical description of the sharpness and contrast of an imaging system or of its individual components; it is also known as the spatial frequency response. The MTF curve has different meanings according to the corresponding frequency. The MTF of an optical system specifies the contrast transmitted by the system as a function of image size, and is determined by the inherent optical properties of the system. In this study, an adaptive neuro-fuzzy inference system (ANFIS) estimator is designed and adapted to predict the MTF value of the actual optical system. The neural network in ANFIS adjusts the parameters of the membership functions in the fuzzy inference system. The backpropagation learning algorithm is used to train this network. This intelligent estimator is implemented using MATLAB/Simulink, and its performance is investigated. The simulation results presented in this paper show the effectiveness of the developed method.
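
    For readers unfamiliar with ANFIS, the inference half of such an estimator is small. A first-order Sugeno forward pass with two Gaussian-membership rules over one input, as a sketch only; every parameter value below is invented, and in ANFIS these would be tuned by backpropagation:

    ```python
    import numpy as np

    def gauss(x, c, s):
        return np.exp(-0.5 * ((x - c) / s) ** 2)

    # Two illustrative rules over one input (spatial frequency, hypothetical units):
    # rule i: IF freq IS A_i THEN mtf = p_i*freq + r_i (first-order Sugeno consequent)
    centers, sigmas = np.array([20.0, 80.0]), np.array([15.0, 25.0])  # membership params
    p, r = np.array([-0.004, -0.008]), np.array([0.95, 0.75])         # consequent params

    def anfis_forward(freq):
        wgt = gauss(freq, centers, sigmas)           # layers 1-2: rule firing strengths
        wbar = wgt / wgt.sum()                       # layer 3: normalization
        return float((wbar * (p * freq + r)).sum())  # layers 4-5: weighted consequents

    print(anfis_forward(50.0))
    ```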

  6. A simple computational principle predicts vocal adaptation dynamics across age and error size

    PubMed Central

    Kelly, Conor W.; Sober, Samuel J.

    2014-01-01

    The brain uses sensory feedback to correct errors in behavior. Songbirds and humans acquire vocal behaviors by imitating the sounds produced by adults and rely on auditory feedback to correct vocal errors throughout their lifetimes. In both birds and humans, acoustic variability decreases steadily with age following the acquisition of vocal behavior. Prior studies in adults have shown that while sensory errors that fall within the limits of vocal variability evoke robust motor corrections, larger errors do not induce learning. Although such results suggest that younger animals, which have greater vocal variability, might correct large errors more readily than older individuals, it is unknown whether age-dependent changes in variability are accompanied by changes in the speed or magnitude of vocal error correction. We tested the hypothesis that auditory errors evoke greater vocal changes in younger animals and that a common computation determines how sensory information drives motor learning across different ages and error sizes. Consistent with our hypothesis, we found that in songbirds the speed and extent of error correction changes dramatically with age and that age-dependent differences in learning were predicted by a model in which the overlap between sensory errors and the distribution of prior sensory feedback determines the dynamics of adaptation. Our results suggest that the brain employs a simple and robust computational principle to calibrate the rate and magnitude of vocal adaptation across age-dependent changes in behavioral performance and in response to different sensory errors. PMID:25324740
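
    A hedged sketch of the principle described (not the authors' model code): treat prior sensory feedback as a Gaussian around the production target and scale each trial's corrective change by the likelihood of the shifted feedback under that prior, so a wider (younger) feedback distribution yields faster, larger correction of the same error. Widths and gain are invented:

    ```python
    import numpy as np
    from scipy.stats import norm

    def simulate_adaptation(error, feedback_sd, gain=0.3, trials=60):
        pitch = 0.0                            # produced pitch, in semitones
        traj = []
        for _ in range(trials):
            heard = pitch + error              # experimentally shifted feedback
            # overlap of the heard error with the prior feedback distribution
            weight = norm.pdf(heard, 0.0, feedback_sd) / norm.pdf(0.0, 0.0, feedback_sd)
            pitch -= gain * weight * heard     # corrective change opposes the error
            traj.append(pitch)
        return np.array(traj)

    # Wider feedback distribution (young) corrects a 1-semitone shift far more.
    print(simulate_adaptation(1.0, feedback_sd=1.0)[-1])   # young: large correction
    print(simulate_adaptation(1.0, feedback_sd=0.3)[-1])   # adult: small correction
    ```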

  7. Computer code for predicting coolant flow and heat transfer in turbomachinery

    NASA Technical Reports Server (NTRS)

    Meitner, Peter L.

    1990-01-01

    A computer code was developed to analyze any turbomachinery coolant flow path geometry that consists of a single flow passage with a unique inlet and exit. Flow can be bled off for tip-cap impingement cooling, and a flow bypass can be specified in which coolant flow is taken off at one point in the flow channel and reintroduced at a point farther downstream in the same channel. The user may either choose the coolant flow rate or let the program determine the flow rate from specified inlet and exit conditions. The computer code integrates the 1-D momentum and energy equations along a defined flow path and calculates the coolant's flow rate, temperature, pressure, and velocity and the heat transfer coefficients along the passage. The equations account for area change, mass addition or subtraction, pumping, friction, and heat transfer.
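
    A much-reduced illustration of such a marching integration (constant area and properties, wall heat transfer and friction only; every value is invented, and the real code also handles area change, pumping, and mass addition):

    ```python
    import math

    L, n = 0.2, 200                    # passage length (m), integration steps
    dx = L / n
    mdot, cp = 0.005, 1100.0           # coolant flow rate (kg/s), cp (J/kg-K)
    D = 0.004                          # hydraulic diameter (m), circular passage
    A = math.pi * D**2 / 4.0           # flow area (m^2)
    h, Tw = 2500.0, 900.0              # heat transfer coeff (W/m^2-K), wall temp (K)
    fric, rho = 0.02, 5.0              # Darcy friction factor, gas density (kg/m^3)

    perim = math.pi * D                # heated perimeter
    v = mdot / (rho * A)               # bulk velocity
    T, p = 600.0, 2.0e6                # inlet temperature (K) and pressure (Pa)
    for _ in range(n):
        T += h * perim * (Tw - T) / (mdot * cp) * dx   # 1-D energy equation
        p -= fric * rho * v**2 / (2.0 * D) * dx        # momentum (friction only)
    print(round(T, 1), round(p))
    ```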

  8. VISA: a computer code for predicting the probability of reactor pressure-vessel failure. [PWR

    SciTech Connect

    Stevens, D.L.; Simonen, F.A.; Strosnider, J. Jr.; Klecker, R.W.; Engel, D.W.; Johnson, K.I.

    1983-09-01

    The VISA (Vessel Integrity Simulation Analysis) code was developed as part of the NRC staff evaluation of pressurized thermal shock. VISA uses Monte Carlo simulation to evaluate the failure probability of a pressurized water reactor (PWR) pressure vessel subjected to a pressure and thermal transient specified by the user. Linear elastic fracture mechanics is used to model crack initiation and propagation. Parameters for initial crack size, copper content, initial RT/sub NDT/, fluence, crack-initiation fracture toughness, and arrest fracture toughness are treated as random variables. This report documents the version of VISA used in the NRC staff report (Policy Issue from J.W. Dircks to NRC Commissioners, Enclosure A: NRC Staff Evaluation of Pressurized Thermal Shock, November 1982, SECY-82-465) and includes a user's guide for the code.
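
    The structure of such a Monte Carlo failure-probability estimate is simple; a schematic sketch in the same spirit (the limit state and all distributions below are stand-ins, not VISA's fracture-mechanics model):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N = 200_000

    # Sample uncertain inputs, apply a limit-state check, count failures.
    crack_depth = rng.lognormal(mean=np.log(3.0), sigma=0.5, size=N)  # mm (invented)
    toughness   = rng.normal(loc=60.0, scale=15.0, size=N)            # MPa*sqrt(m)
    stress      = rng.normal(loc=300.0, scale=30.0, size=N)           # MPa

    # K_I = Y * stress * sqrt(pi * a), with a in meters and Y a geometry factor
    K_applied = 1.12 * stress * np.sqrt(np.pi * crack_depth / 1000.0)
    p_fail = np.mean(K_applied > toughness)
    print(f"estimated failure probability: {p_fail:.2e}")
    ```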

  9. WINCOF-I code for prediction of fan compressor unit with water ingestion

    NASA Technical Reports Server (NTRS)

    Murthy, S. N. B.; Mullican, A.

    1990-01-01

    The PURDUE-WINCOF code, which provides a numerical method of obtaining the performance of a fan-compressor unit of a jet engine with water ingestion into the inlet, was modified to take into account: (1) the scoop factor, (2) the time required for the setting-in of a quasi-steady distribution of water, and (3) the heat and mass transfer processes over the time calculated under (2). The modified code, named WINCOF-I, was utilized to obtain the performance of a fan-compressor unit of a generic jet engine. The results illustrate the manner in which quasi-equilibrium conditions become established in the machine and the redistribution of ingested water in various stages in the form of a film on the casing wall, droplets across the span, and vapor due to mass transfer.

  10. APPLYING SPARSE CODING TO SURFACE MULTIVARIATE TENSOR-BASED MORPHOMETRY TO PREDICT FUTURE COGNITIVE DECLINE

    PubMed Central

    Zhang, Jie; Stonnington, Cynthia; Li, Qingyang; Shi, Jie; Bauer, Robert J.; Gutman, Boris A.; Chen, Kewei; Reiman, Eric M.; Thompson, Paul M.; Ye, Jieping; Wang, Yalin

    2016-01-01

    Alzheimer’s disease (AD) is a progressive brain disease. Accurate diagnosis of AD and its prodromal stage, mild cognitive impairment, is crucial for clinical trial design. There is also growing interest in identifying brain imaging biomarkers that help evaluate AD risk presymptomatically. Here, we applied a recently developed multivariate tensor-based morphometry (mTBM) method to extract features from hippocampal surfaces, derived from anatomical brain MRI. For such surface-based features, the feature dimension is usually much larger than the number of subjects. We used dictionary learning and sparse coding to effectively reduce the feature dimensions. With the new features, an Adaboost classifier was employed for binary group classification. In tests on publicly available data from the Alzheimer's Disease Neuroimaging Initiative, the new framework outperformed several standard imaging measures in classifying different stages of AD. The new approach combines the efficiency of sparse coding with the sensitivity of surface mTBM, and boosts classification performance. PMID:27499829
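
    A sketch of the pipeline's shape using standard library pieces (synthetic data; scikit-learn's dictionary learner and AdaBoost stand in for the paper's exact implementations):

    ```python
    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import cross_val_score

    # Stand-in for surface mTBM features: many more features than subjects.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(80, 3000))        # 80 subjects, 3000 surface features (synthetic)
    y = rng.integers(0, 2, size=80)        # binary diagnostic labels (synthetic)

    # Dictionary learning + sparse coding reduce each subject to sparse codes.
    dico = MiniBatchDictionaryLearning(n_components=50, alpha=1.0, random_state=0)
    codes = dico.fit_transform(X)          # (80, 50) sparse representation

    clf = AdaBoostClassifier(n_estimators=100, random_state=0)
    print(cross_val_score(clf, codes, y, cv=5).mean())
    ```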

  11. Stability of executive function and predictions to adaptive behavior from middle childhood to pre-adolescence

    PubMed Central

    Harms, Madeline B.; Zayas, Vivian; Meltzoff, Andrew N.; Carlson, Stephanie M.

    2014-01-01

    The shift from childhood to adolescence is characterized by rapid remodeling of the brain and increased risk-taking behaviors. Current theories hypothesize that developmental enhancements in sensitivity to affective environmental cues in adolescence may undermine executive function (EF) and increase the likelihood of problematic behaviors. In the current study, we examined the extent to which EF in childhood predicts EF in early adolescence. We also tested whether individual differences in neural responses to affective cues (rewards/punishments) in childhood serve as a biological marker for EF, sensation-seeking, academic performance, and social skills in early adolescence. At age 8, 84 children completed a gambling task while event-related potentials (ERPs) were recorded. We examined the extent to which selections resulting in rewards or losses in this task elicited (i) the P300, a post-stimulus waveform reflecting the allocation of attentional resources toward a stimulus, and (ii) the SPN, a pre-stimulus anticipatory waveform reflecting a neural representation of a “hunch” about an outcome that originates in insula and ventromedial PFC. Children also completed a Dimensional Change Card-Sort (DCCS) and Flanker task to measure EF. At age 12, 78 children repeated the DCCS and Flanker and completed a battery of questionnaires. Flanker and DCCS accuracy at age 8 predicted Flanker and DCCS performance at age 12, respectively. Individual differences in the magnitude of P300 (to losses vs. rewards) and SPN (preceding outcomes with a high probability of punishment) at age 8 predicted self-reported sensation seeking (lower) and teacher-rated academic performance (higher) at age 12. We suggest there is stability in EF from age 8 to 12, and that childhood neural sensitivity to reward and punishment predicts individual differences in sensation seeking and adaptive behaviors in children entering adolescence. PMID:24795680

  12. Prediction of Scour Depth around Bridge Piers using Adaptive Neuro-Fuzzy Inference Systems (ANFIS)

    NASA Astrophysics Data System (ADS)

    Valyrakis, Manousos; Zhang, Hanqing

    2014-05-01

    Earth's surface is continuously shaped by the action of geophysical flows. Erosion due to the flow of water in river systems has been identified as a key problem in preserving the ecological health of river systems, but also as a threat to our built environment and critical infrastructure, worldwide. As an example, it has been estimated that a major reason for bridge failure is scour. Even though the flow past bridge piers has been investigated both experimentally and numerically, and the mechanisms of scouring are relatively well understood, a tool that can offer fast and reliable predictions is still lacking. Most of the existing formulas for prediction of bridge pier scour depth are empirical in nature, based on a limited range of data or on piers of specific shape. In this work, the application of a machine learning model that has been successfully employed in water engineering, namely an Adaptive Neuro-Fuzzy Inference System (ANFIS), is proposed to estimate the scour depth around bridge piers. In particular, architectures of varying complexity are built sequentially, in order to identify the optimal one for scour depth prediction, using appropriate training and validation subsets obtained from the USGS database (pre-processed to remove incomplete records). The model has five input variables, namely the effective pier width (b), the approach velocity (v), the approach depth (y), the mean grain diameter (D50), and the skew to flow. Simulations are conducted with data groups (bed material type, pier type and shape) and different numbers of input variables, to produce reduced-complexity and easily interpretable models. Analysis and comparison of the results indicate that the developed ANFIS model has high accuracy and outstanding generalization ability for the prediction of scour parameters. The effective pier width (as opposed to skew to flow) is among the most relevant input parameters for the estimation.

  13. Predictive Fallout Composition Modeling: Improvements and Applications of the Defense Land Fallout Interpretive Code

    SciTech Connect

    Hooper, David A; Jodoin, Vincent J; Lee, Ronald W; Monterial, Mateusz

    2012-01-01

    This paper outlines several improvements to the Particle Activity Module of the Defense Land Fallout Interpretive Code (DELFIC). The modeling of each phase of the fallout process is discussed within DELFIC to demonstrate the capabilities and limitations with the code for modeling and simulation. Expansion of the DELFIC isotopic library to include actinides and light elements is shown. Several key features of the new library are demonstrated, including compliance with ENDF/B-VII standards, augmentation of hardwired activated soil and actinide decay calculations with exact Bateman calculations, and full physical and chemical fractionation of all material inventories. Improvements to the radionuclide source term are demonstrated, including the ability to specify heterogeneous fission types and the ability to import source terms from irradiation calculations using the Oak Ridge Isotope Generation (ORIGEN) code. Additionally, the dose, kerma, and effective dose conversion factors are revised. Finally, the application of DELFIC for consequence management planning and forensic analysis is presented. For consequence management, DELFIC is shown to provide disaster recovery teams with simulations of real-time events, including the location, composition, time of arrival, activity rates, and dose rates of fallout, accounting for site-specific atmospheric effects. The results from DELFIC are also demonstrated for use by nuclear forensics teams to plan collection routes (including the determination of optimal collection locations), estimate dose rates to collectors, and anticipate the composition of material at collection sites. These capabilities give mission planners the ability to maximize their effectiveness in the field while minimizing risk to their collectors.

  14. An overview of the activities of the OECD/NEA Task Force on adapting computer codes in nuclear applications to parallel architectures

    SciTech Connect

    Kirk, B.L.; Sartori, E.

    1997-06-01

    Subsequent to the introduction of High Performance Computing in the developed countries, the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) created the Task Force on Adapting Computer Codes in Nuclear Applications to Parallel Architectures (under the guidance of the Nuclear Science Committee's Working Party on Advanced Computing) to study the growth area in supercomputing and its applicability to the nuclear community's computer codes. The result has been four years of investigation for the Task Force in different subject fields - deterministic and Monte Carlo radiation transport, computational mechanics and fluid dynamics, nuclear safety, atmospheric models and waste management.

  15. Improved NASA-ANOPP Noise Prediction Computer Code for Advanced Subsonic Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Kontos, K. B.; Janardan, B. A.; Gliebe, P. R.

    1996-01-01

    Recent experience using ANOPP to predict turbofan engine flyover noise suggests that it over-predicts overall EPNL by a significant amount. An improvement in this prediction method is desired for system optimization and assessment studies of advanced UHB engines. An assessment of the ANOPP fan inlet, fan exhaust, jet, combustor, and turbine noise prediction methods is made using static engine component noise data from the CF6-80C2, E(3), and QCSEE turbofan engines. It is shown that the ANOPP prediction results are generally higher than the measured GE data, and that the inlet noise prediction method (Heidmann method) is the most significant source of this overprediction. Fan noise spectral comparisons show that improvements to the fan tone, broadband, and combination tone noise models are required to yield results that more closely match the GE data. Suggested changes that yield improved fan noise predictions but preserve the Heidmann model structure are identified and described. These changes are based on the sets of engine data mentioned, as well as some CFM56 engine data that were used to expand the combination tone noise database. It should be noted that the recommended changes are based on an analysis of engines that are limited to single-stage fans with design tip relative Mach numbers greater than one.

  16. GeneValidator: identify problems with protein-coding gene predictions

    PubMed Central

    Drăgan, Monica-Andreea; Moghul, Ismail; Priyam, Anurag; Bustos, Claudio; Wurm, Yannick

    2016-01-01

    Summary: Genomes of emerging model organisms are now being sequenced at very low cost. However, obtaining accurate gene predictions remains challenging: even the best gene prediction algorithms make substantial errors and can jeopardize subsequent analyses. Therefore, many predicted genes must be visually inspected and manually curated, which is time-consuming. We developed GeneValidator (GV) to automatically identify problematic gene predictions and to aid manual curation. For each gene, GV performs multiple analyses based on comparisons to gene sequences from large databases. The resulting report identifies problematic gene predictions and includes extensive statistics and graphs for each prediction to guide manual curation efforts. GV thus accelerates and enhances the work of biocurators and researchers who need accurate gene predictions from newly sequenced genomes. Availability and implementation: GV can be used through a web interface or on the command line. GV is open-source (AGPL), available at https://wurmlab.github.io/tools/genevalidator. Contact: y.wurm@qmul.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26787666

  17. Small Engine Technology (SET) Task 23 ANOPP Noise Prediction for Small Engines, Wing Reflection Code

    NASA Technical Reports Server (NTRS)

    Lieber, Lysbeth; Brown, Daniel; Golub, Robert A. (Technical Monitor)

    2000-01-01

    The work performed under Task 23 consisted of the development and demonstration of improvements for the NASA Aircraft Noise Prediction Program (ANOPP), specifically targeted to the modeling of engine noise enhancement due to wing reflection. This report focuses on development of the model and procedure to predict the effects of wing reflection, and the demonstration of the procedure, using a representative wing/engine configuration.

  18. Method for predicting pump-induced acoustic pressures in fluid-handling systems. [ACSTIC code

    SciTech Connect

    Schwirian, R.E.; Shockling, L.A.; Singleton, N.R.; Riddell, R.A.

    1982-01-01

    A method is described for predicting the amplitudes of pump-induced acoustic pressures in fluid-handling systems using a node-flow path discretization methodology and a harmonic analysis algorithm. A computer model of a Westinghouse test loop using the volumetric forcing function model of the pump is presented. Comparisons of measured pressure amplitude profiles in the loop with model prediction are shown to be in good agreement for both the first and second pump blade-passing frequencies. 10 refs.

  19. Maps, codes, and sequence elements: can we predict the protein output from an alternatively spliced locus?

    PubMed

    Sharma, Shalini; Black, Douglas L

    2006-11-22

    Alternative splicing choices are governed by splicing regulatory protein interactions with splicing silencer and enhancer elements present in the pre-mRNA. However, the prediction of these choices from genomic sequence is difficult, in part because the regulators can act as either enhancers or silencers. A recent study describes how for a particular neuronal splicing regulatory protein, Nova, the location of its binding sites is highly predictive of the protein's effect on an exon's splicing.

  20. Effects of Protein Conformation in Docking: Improved Pose Prediction through Protein Pocket Adaptation

    PubMed Central

    Jain, Ajay N.

    2009-01-01

    Computational methods for docking ligands have been shown to be remarkably dependent on precise protein conformation, where acceptable results in pose prediction have been generally possible only in the artificial case of re-docking a ligand into a protein binding site whose conformation was determined in the presence of the same ligand (the “cognate” docking problem). In such cases, on well curated protein/ligand complexes, accurate dockings can be returned as top-scoring over 75% of the time using tools such as Surflex-Dock. A critical application of docking in modeling for lead optimization requires accurate pose prediction for novel ligands, ranging from simple synthetic analogs to very different molecular scaffolds. Typical results for widely used programs in the “cross-docking case” (making use of a single fixed protein conformation) have rates closer to 20% success. By making use of protein conformations from multiple complexes, Surflex-Dock yields an average success rate of 61% across eight pharmaceutically relevant targets. Following docking, protein pocket adaptation and rescoring identifies single pose families that are correct an average of 67% of the time. Consideration of the best of two pose families (from alternate scoring regimes) yields a 75% mean success rate. PMID:19340588

  1. Eye-pupil displacement and prediction: effects on residual wavefront in adaptive optics retinal imaging.

    PubMed

    Kulcsár, Caroline; Raynaud, Henri-François; Garcia-Rissmann, Aurea

    2016-03-01

    This paper studies the effect of pupil displacements on the best achievable performance of retinal imaging adaptive optics (AO) systems, using 52 trajectories of horizontal and vertical displacements sampled at 80 Hz by a pupil tracker (PT) device on 13 different subjects. This effect is quantified in the form of the minimal root mean square (rms) of the residual phase affecting image formation, as a function of the delay between PT measurement and wavefront correction. It is shown that simple dynamic models identified from data can be used to predict horizontal and vertical pupil displacements with greater accuracy (in terms of average rms) over short-term time horizons. The potential impact of these improvements on residual wavefront rms is investigated. These results allow quantification of the part of the disturbances corrected by retinal imaging systems that is caused by relative displacements of an otherwise fixed or slowly varying subject-dependent aberration. They also suggest that prediction has a limited impact on wavefront rms and that taking PT measurements into account in real time improves the performance of AO retinal imaging systems.
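
    The "simple dynamic models identified from data" are not specified in the abstract; one common choice is a low-order autoregressive (AR) model fitted by least squares. A minimal sketch, with a synthetic displacement trace standing in for pupil-tracker data:

    ```python
    import numpy as np

    # Synthetic 80 Hz trace: slow drift plus measurement noise (not real PT data).
    rng = np.random.default_rng(1)
    t = np.arange(800) / 80.0
    x = 0.2 * np.sin(2 * np.pi * 0.5 * t) + 0.01 * rng.normal(size=t.size)

    def fit_ar(x, p=2):
        # Rows of p lagged samples -> next sample; least-squares AR coefficients.
        X = np.column_stack([x[p - k: len(x) - k] for k in range(1, p + 1)])
        return np.linalg.lstsq(X, x[p:], rcond=None)[0]

    a = fit_ar(x, p=2)
    pred = a @ x[-1:-3:-1]     # one-step-ahead prediction from the last two samples
    print(pred)
    ```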

  2. Eye-pupil displacement and prediction: effects on residual wavefront in adaptive optics retinal imaging

    PubMed Central

    Kulcsár, Caroline; Raynaud, Henri-François; Garcia-Rissmann, Aurea

    2016-01-01

    This paper studies the effect of pupil displacements on the best achievable performance of retinal imaging adaptive optics (AO) systems, using 52 trajectories of horizontal and vertical displacements sampled at 80 Hz by a pupil tracker (PT) device on 13 different subjects. This effect is quantified in the form of the minimal root mean square (rms) of the residual phase affecting image formation, as a function of the delay between PT measurement and wavefront correction. It is shown that simple dynamic models identified from data can be used to predict horizontal and vertical pupil displacements with greater accuracy (in terms of average rms) over short-term time horizons. The potential impact of these improvements on residual wavefront rms is investigated. These results allow quantification of the part of the disturbances corrected by retinal imaging systems that is caused by relative displacements of an otherwise fixed or slowly varying subject-dependent aberration. They also suggest that prediction has a limited impact on wavefront rms and that taking PT measurements into account in real time improves the performance of AO retinal imaging systems. PMID:27231607

  3. Predictive simulation of wind turbine wake interaction with an adaptive lattice Boltzmann method for moving boundaries

    NASA Astrophysics Data System (ADS)

    Deiterding, Ralf; Wood, Stephen L.

    2015-11-01

    Operating horizontal axis wind turbines create large-scale turbulent wake structures that affect the power output of downwind turbines considerably. The computational prediction of this phenomenon is challenging as efficient low dissipation schemes are necessary that represent the vorticity production by the moving structures accurately and are able to transport wakes without significant artificial decay over distances of several rotor diameters. We have developed the first version of a parallel adaptive lattice Boltzmann method for large eddy simulation of turbulent weakly compressible flows with embedded moving structures that considers these requirements rather naturally and enables first principle simulations of wake-turbine interaction phenomena at reasonable computational costs. The presentation will describe the employed algorithms and present relevant verification and validation computations. For instance, power and thrust coefficients of a Vestas V27 turbine are predicted within 5% of the manufacturer's specifications. Simulations of three Vestas V27-225kW turbines in triangular arrangement analyze the reduction in power production due to upstream wake generation for different inflow conditions.

  4. Test results of a 40-kW Stirling engine and comparison with the NASA Lewis computer code predictions

    NASA Technical Reports Server (NTRS)

    Allen, David J.; Cairelli, James E.

    1988-01-01

    A Stirling engine was tested without auxiliaries at NASA Lewis. Three different regenerator configurations were tested with hydrogen. The test objectives were: (1) to obtain steady-state and dynamic engine data, including indicated power, for validation of an existing computer model for this engine; and (2) to evaluate structurally the use of silicon carbide regenerators. This paper presents comparisons of the measured brake performance, indicated mean effective pressure, and cyclic pressure variations with those predicted by the code. The silicon carbide foam regenerators appear to be structurally suitable, but the foam matrix showed severely reduced performance.

  5. Development of a prediction model for radiosensitivity using the expression values of genes and long non-coding RNAs.

    PubMed

    Wang, Wei-An; Lai, Liang-Chuan; Tsai, Mong-Hsun; Lu, Tzu-Pin; Chuang, Eric Y

    2016-05-01

    Radiotherapy has become a popular and standard approach for treating cancer patients because it greatly improves patient survival. However, some of the patients receiving radiotherapy suffer from adverse effects and do not obtain survival benefits. This may be attributed to the fact that most radiation treatment plans are designed based on cancer type, without consideration of each individual's radiosensitivity. A model for predicting radiosensitivity would help to address this issue. In this study, the expression levels of both genes and long non-coding RNAs (lncRNAs) were used to build such a prediction model. Analysis of variance and Tukey's honest significant difference tests (P < 0.001) were utilized in immortalized B cells (GSE26835) to identify differentially expressed genes and lncRNAs after irradiation. A total of 41 genes and lncRNAs associated with radiation exposure were revealed by a network analysis algorithm. To develop a predictive model for radiosensitivity, the expression profiles of NCI-60 cell lines, along with their radiation parameters, were analyzed. A genetic algorithm was proposed to identify 20 predictors, and the support vector machine algorithm was used to evaluate their prediction performance. The model was applied to 2 datasets of glioblastoma, The Cancer Genome Atlas and GSE16011, and significantly better survival was observed in patients with greater predicted radiosensitivity.
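
    A toy stand-in for the selection step described above: a small genetic algorithm searches binary feature masks, with cross-validated SVM accuracy as the fitness. Data, sizes, and GA settings below are invented for illustration, not taken from the paper:

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=120, n_features=60, n_informative=8,
                               random_state=0)
    n_feat, pop_size, n_gen, n_keep = X.shape[1], 20, 15, 20

    def fitness(mask):
        if mask.sum() == 0:
            return 0.0
        return cross_val_score(SVC(), X[:, mask], y, cv=3).mean()

    pop = rng.random((pop_size, n_feat)) < (n_keep / n_feat)    # sparse initial masks
    for _ in range(n_gen):
        scores = np.array([fitness(m) for m in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]      # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = np.where(rng.random(n_feat) < 0.5, a, b)    # uniform crossover
            child ^= rng.random(n_feat) < 0.02                  # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents, children])

    best = pop[np.argmax([fitness(m) for m in pop])]
    print(best.sum(), "features selected")
    ```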

  6. Development of a prediction model for radiosensitivity using the expression values of genes and long non-coding RNAs

    PubMed Central

    Wang, Wei-An; Lai, Liang-Chuan; Tsai, Mong-Hsun; Lu, Tzu-Pin; Chuang, Eric Y.

    2016-01-01

    Radiotherapy has become a popular and standard approach for treating cancer patients because it greatly improves patient survival. However, some of the patients receiving radiotherapy suffer from adverse effects and do not obtain survival benefits. This may be attributed to the fact that most radiation treatment plans are designed based on cancer type, without consideration of each individual's radiosensitivity. A model for predicting radiosensitivity would help to address this issue. In this study, the expression levels of both genes and long non-coding RNAs (lncRNAs) were used to build such a prediction model. Analysis of variance and Tukey's honest significant difference tests (P < 0.001) were utilized in immortalized B cells (GSE26835) to identify differentially expressed genes and lncRNAs after irradiation. A total of 41 genes and lncRNAs associated with radiation exposure were revealed by a network analysis algorithm. To develop a predictive model for radiosensitivity, the expression profiles of NCI-60 cell lines, along with their radiation parameters, were analyzed. A genetic algorithm was proposed to identify 20 predictors, and the support vector machine algorithm was used to evaluate their prediction performance. The model was applied to 2 datasets of glioblastoma, The Cancer Genome Atlas and GSE16011, and significantly better survival was observed in patients with greater predicted radiosensitivity. PMID:27050376

  7. Adaptation of Sediment Connectivity Index for Swedish catchments and application for flood prediction of roads

    NASA Astrophysics Data System (ADS)

    Cantone, Carolina; Kalantari, Zahra; Cavalli, Marco; Crema, Stefano

    2016-04-01

    Climate changes are predicted to increase precipitation intensities and the occurrence of extreme rainfall events in the near future. Scandinavia has been identified as one of the most sensitive regions in Europe to such changes; therefore, an increase in the risk of flooding, landslides, and soil erosion is to be expected also in Sweden. An increase in the occurrence of extreme weather events will impose greater strain on the built environment and major transport infrastructure such as roads and railways. This research aimed to identify the risk of flooding at road-stream intersections, crucial locations where water and debris can accumulate and cause failures of the existing drainage facilities. Two regions in southwest Sweden affected by an extreme rainfall event in August 2014 were used for calibrating and testing a statistical flood prediction model. A set of Physical Catchment Descriptors (PCDs) including road and catchment characteristics was identified for the modelling. Moreover, a GIS-based topographic Index of Sediment Connectivity (IC) was used as a PCD. The novelty of this study lies in the adaptation of IC for describing sediment connectivity in lowland areas, taking into account the contribution of soil type, land use, and different patterns of precipitation during the event. A weighting factor for IC was calculated by estimating runoff with the SCS Curve Number method, assuming a constant value of precipitation for a given time period corresponding to the critical event. The Digital Elevation Model of the study site was reconditioned at the drainage facility locations to consider the real flow path in the analysis. These modifications served to highlight the role of rainfall patterns and surface runoff in modelling sediment delivery in lowland areas. Moreover, it was observed that integrating IC into the statistical prediction model increased its accuracy and performance. After the calibration procedure in one of the study areas, the model was

  8. Structure-aided prediction of mammalian transcription factor complexes in conserved non-coding elements.

    PubMed

    Guturu, Harendra; Doxey, Andrew C; Wenger, Aaron M; Bejerano, Gill

    2013-12-19

    Mapping the DNA-binding preferences of transcription factor (TF) complexes is critical for deciphering the functions of cis-regulatory elements. Here, we developed a computational method that compares co-occurring motif spacings in conserved versus unconserved regions of the human genome to detect evolutionarily constrained binding sites of rigid TF complexes. Structural data were used to estimate TF complex physical plausibility, explore overlapping motif arrangements seldom tackled by non-structure-aware methods, and generate and analyse three-dimensional models of the predicted complexes bound to DNA. Using this approach, we predicted 422 physically realistic TF complex motifs at 18% false discovery rate, the majority of which (326, 77%) contain some sequence overlap between binding sites. The set of mostly novel complexes is enriched in known composite motifs, predictive of binding site configurations in TF-TF-DNA crystal structures, and supported by ChIP-seq datasets. Structural modelling revealed three cooperativity mechanisms: direct protein-protein interactions, potentially indirect interactions and 'through-DNA' interactions. Indeed, 38% of the predicted complexes were found to contain four or more bases in which TF pairs appear to synergize through overlapping binding to the same DNA base pairs in opposite grooves or strands. Our TF complex and associated binding site predictions are available as a web resource at http://bejerano.stanford.edu/complex.

  9. Efficient visual coding and the predictability of eye movements on natural movies.

    PubMed

    Vig, Eleonora; Dorr, Michael; Barth, Erhardt

    2009-01-01

    We deal with the analysis of eye movements made on natural movies in free-viewing conditions. Saccades are detected and used to label two classes of movie patches as attended and non-attended. Machine learning techniques are then used to determine how well the two classes can be separated, i.e., how predictable saccade targets are. Although very simple saliency measures are used and then averaged to obtain just one average value per scale, the two classes can be separated with an ROC score of around 0.7, which is higher than previously reported results. Moreover, predictability is analysed for different representations to obtain indirect evidence for the likelihood of a particular representation. It is shown that the predictability correlates with the local intrinsic dimension in a movie. PMID:19814903
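
    The core of the evaluation is an ROC comparison of one scalar saliency value per patch across the two patch classes. A schematic sketch with synthetic values (the separation is chosen so the score lands near the 0.7 reported above):

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    # Each movie patch gets one scalar saliency value; the ROC score measures
    # how well that value separates saccade targets from control patches.
    rng = np.random.default_rng(0)
    saliency_attended = rng.normal(1.0, 1.0, size=500)   # saccade-target patches
    saliency_control  = rng.normal(0.3, 1.0, size=500)   # non-attended patches

    scores = np.concatenate([saliency_attended, saliency_control])
    labels = np.concatenate([np.ones(500), np.zeros(500)])
    print(roc_auc_score(labels, scores))                 # ~0.69 for this separation
    ```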

  10. New evidence-based adaptive clinical trial methods for optimally integrating predictive biomarkers into oncology clinical development programs

    PubMed Central

    Beckman, Robert A.; Chen, Cong

    2013-01-01

    Predictive biomarkers are important to the future of oncology; they can be used to identify patient populations who will benefit from therapy, increase the value of cancer medicines, and decrease the size and cost of clinical trials while increasing their chance of success. But predictive biomarkers do not always work. When unsuccessful, they add cost, complexity, and time to drug development. This perspective describes phases 2 and 3 development methods that efficiently and adaptively check the ability of a biomarker to predict clinical outcomes. In the end, the biomarker is emphasized to the extent that it can actually predict. PMID:23489587

  11. Bioinformatics Approach for Prediction of Functional Coding/Noncoding Simple Polymorphisms (SNPs/Indels) in Human BRAF Gene

    PubMed Central

    Omer, Shaza E.; Khalf-allah, Rahma M.; Mustafa, Razaz Y.; Ali, Isra S.; Mohamed, Sofia B.

    2016-01-01

    This study was carried out for Homo sapiens single variations (SNPs/indels) in the BRAF gene through coding and non-coding regions. Variant data were obtained from the SNP database (dbSNP) as of its last update of November 2015. Many bioinformatics tools were used to identify SNPs and indels that are functional in terms of protein function, structure, and expression. The results showed that, for coding polymorphisms, 111 SNPs were predicted as highly damaging and six others as less damaging. For the UTRs, five SNPs and one indel were predicted to alter microRNA binding sites (3′ UTR), while no SNP or indel was predicted to functionally alter transcription factor binding sites (5′ UTR). In addition, for the 5′/3′ splice sites, the analysis showed that one SNP within a 5′ splice site and one indel in a 3′ splice site showed potential alteration of splicing. In conclusion, these functionally identified SNPs and indels could lead to gene alteration, which may directly or indirectly contribute to the occurrence of many diseases. PMID:27478437

  12. Predicting CYP2C19 catalytic parameters for enantioselective oxidations using artificial neural networks and a chirality code.

    PubMed

    Hartman, Jessica H; Cothren, Steven D; Park, Sun-Ha; Yun, Chul-Ho; Darsey, Jerry A; Miller, Grover P

    2013-07-01

    Cytochromes P450 (CYP for isoforms) play a central role in biological processes, especially the metabolism of chiral molecules; thus, development of computational methods to predict parameters for chiral reactions is important for advancing this field. In this study, we identified the most optimal artificial neural networks using conformation-independent chirality codes to predict CYP2C19 catalytic parameters for enantioselective reactions. Optimization of the neural networks required identifying the most suitable representation of structure among a diverse array of training substrates, normalizing the distribution of the corresponding catalytic parameters (k(cat), K(m), and k(cat)/K(m)), and determining the best topology for the networks to make predictions. Among different structural descriptors, the use of partial atomic charges according to the CHelpG scheme and the inclusion of hydrogens yielded the most optimal artificial neural networks. Their training also required resolution of the poorly distributed output catalytic parameters using a Box-Cox transformation. End-point leave-one-out cross correlations of the best neural networks revealed that predictions for the individual catalytic parameters (k(cat) and K(m)) were more consistent with experimental values than those for catalytic efficiency (k(cat)/K(m)). Lastly, the neural networks correctly predicted the enantioselectivity, and catalytic parameters comparable with those measured in this study, for the previously uncharacterized CYP2C19 substrates R- and S-propranolol. Taken together, these seminal computational studies for CYP2C19 are the first to predict all catalytic parameters for enantioselective reactions using artificial neural networks and thus provide a foundation for expanding the prediction of cytochrome P450 reactions to chiral drugs, pollutants, and other biologically active compounds.
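
    The Box-Cox step mentioned above, shown in isolation: transform a skewed, strictly positive set of catalytic parameters toward normality before network training, then invert predictions back to the original scale. The kcat values below are invented:

    ```python
    import numpy as np
    from scipy import stats
    from scipy.special import inv_boxcox

    # Skewed, strictly positive outputs (e.g., kcat values; invented numbers)
    kcat = np.array([0.4, 0.9, 1.1, 2.5, 3.0, 7.8, 12.0, 30.0, 55.0, 140.0])

    transformed, lam = stats.boxcox(kcat)    # fits lambda by maximum likelihood
    print(f"fitted lambda = {lam:.2f}")      # lambda near 0 behaves like a log

    # After training on `transformed`, map network outputs back:
    recovered = inv_boxcox(transformed, lam)
    ```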

  13. A Parallel Ocean Model With Adaptive Mesh Refinement Capability For Global Ocean Prediction

    SciTech Connect

    Herrnstein, Aaron R.

    2005-12-01

    An ocean model with adaptive mesh refinement (AMR) capability is presented for simulating ocean circulation on decade time scales. The model closely resembles the LLNL ocean general circulation model with some components incorporated from other well known ocean models when appropriate. Spatial components are discretized using finite differences on a staggered grid where tracer and pressure variables are defined at cell centers and velocities at cell vertices (B-grid). Horizontal motion is modeled explicitly with leapfrog and Euler forward-backward time integration, and vertical motion is modeled semi-implicitly. New AMR strategies are presented for horizontal refinement on a B-grid, leapfrog time integration, and time integration of coupled systems with unequal time steps. These AMR capabilities are added to the LLNL software package SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) and validated with standard benchmark tests. The ocean model is built on top of the amended SAMRAI library. The resulting model has the capability to dynamically increase resolution in localized areas of the domain. Limited basin tests are conducted using various refinement criteria and produce convergence trends in the model solution as refinement is increased. Carbon sequestration simulations are performed on decade time scales in domains the size of the North Atlantic and the global ocean. A suggestion is given for refinement criteria in such simulations. AMR predicts maximum pH changes and increases in CO2 concentration near the injection sites that are virtually unattainable with a uniform high resolution due to extremely long run times. Fine scale details near the injection sites are achieved by AMR with shorter run times than the finest uniform resolution tested despite the need for enhanced parallel performance. The North Atlantic simulations show a reduction in passive tracer errors when AMR is applied instead of a uniform coarse resolution. No

  14. Computational prediction of over-annotated protein-coding genes in the genome of Agrobacterium tumefaciens strain C58

    NASA Astrophysics Data System (ADS)

    Yu, Jia-Feng; Sui, Tian-Xiang; Wang, Hong-Mei; Wang, Chun-Ling; Jing, Li; Wang, Ji-Hua

    2015-12-01

    Agrobacterium tumefaciens strain C58 is a type of pathogen that can cause tumors in some dicotyledonous plants. Ever since the genome of A. tumefaciens strain C58 was sequenced, the quality of the annotation of its protein-coding genes has been questioned continually, because the annotation varies greatly among different databases. In this paper, the questionable hypothetical genes were re-predicted by integrating the TN curve and Z curve methods. As a result, 30 genes originally annotated as “hypothetical” were discriminated as being non-coding sequences. By testing the re-prediction program 10 times on data sets composed of genes of known function, a mean accuracy of 99.99% and a mean Matthews correlation coefficient of 0.9999 were obtained. Further sequence analysis and COG analysis showed that the re-annotation results are very reliable. This work provides an efficient tool and data resources for future studies of A. tumefaciens strain C58. Project supported by the National Natural Science Foundation of China (Grant Nos. 61302186 and 61271378) and the Funding from the State Key Laboratory of Bioelectronics of Southeast University.
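
    The Z curve itself is a standard, well-defined transform: it maps a DNA sequence to three cumulative coordinates separating purine/pyrimidine, amino/keto, and weak/strong hydrogen-bond content, and features derived from such curves feed coding/non-coding classifiers. A minimal computation:

    ```python
    import numpy as np

    def z_curve(seq):
        seq = seq.upper()
        counts = {b: 0 for b in "ACGT"}
        xs, ys, zs = [], [], []
        for base in seq:
            if base in counts:
                counts[base] += 1
            A, C, G, T = (counts[b] for b in "ACGT")
            xs.append((A + G) - (C + T))   # purine vs. pyrimidine
            ys.append((A + C) - (G + T))   # amino vs. keto
            zs.append((A + T) - (G + C))   # weak vs. strong hydrogen bonds
        return np.array([xs, ys, zs])

    print(z_curve("ATGGCGAATCTG")[:, -1])  # end-point of the curve
    ```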

  15. Evaluation of MOSTAS computer code for predicting dynamic loads in two bladed wind turbines

    NASA Technical Reports Server (NTRS)

    Kaza, K. R. V.; Janetzke, D. C.; Sullivan, T. L.

    1979-01-01

    Calculated dynamic blade loads were compared with measured loads over a range of yaw stiffnesses of the DOE/NASA Mod-0 wind turbine to evaluate the performance of two versions of the MOSTAS computer code. The first version uses a time-averaged coefficient approximation in conjunction with a multi-blade coordinate transformation for two-bladed rotors to solve the equations of motion by standard eigenanalysis. The second version accounts for periodic coefficients while solving the equations by time history integration. A hypothetical three-degree-of-freedom dynamic model was also investigated. The exact equations of motion of this model were solved using the Floquet-Lyapunov method. The equations with time-averaged coefficients were solved by standard eigenanalysis.

  16. Evaluation of MOSTAS computer code for predicting dynamic loads in two-bladed wind turbines

    NASA Technical Reports Server (NTRS)

    Kaza, K. R. V.; Janetzke, D. C.; Sullivan, T. L.

    1979-01-01

    Calculated dynamic blade loads are compared with measured loads over a range of yaw stiffnesses of the DOE/NASA Mod-0 wind turbine to evaluate the performance of two versions of the MOSTAS computer code. The first version uses a time-averaged coefficient approximation in conjunction with a multiblade coordinate transformation for two-bladed rotors to solve the equations of motion by standard eigenanalysis. The results obtained with this approximate analysis do not agree with dynamic blade load amplifications at or close to resonance conditions. The results of the second version, which accounts for periodic coefficients while solving the equations by a time history integration, compare well with the measured data.

  17. Non-coding RNAs in crop genetic modification: considerations and predictable environmental risk assessments (ERA).

    PubMed

    Ramesh, S V

    2013-09-01

    Of late, non-coding RNA (ncRNA)-mediated gene silencing has become an influential tool deliberately deployed to negatively regulate the expression of targeted genes. In addition to the widely employed small interfering RNA (siRNA)-mediated gene silencing approach, other variants like artificial miRNA (amiRNA), miRNA mimics, and artificial trans-acting siRNAs (tasiRNAs) are being explored and successfully deployed in developing non-coding RNA-based genetically modified plants. The ncRNA-based gene manipulations are typified by the mobile nature of silencing signals, interference from viral genome-derived suppressor proteins, and an obligation for meticulous computational analysis to avoid any inadvertent effects. In a broad sense, risk assessment inquiries for genetically modified plants based on the expression of ncRNAs are competently addressed by the environmental risk assessment (ERA) models currently in vogue, designed for the first generation of transgenic plants, which are based on the expression of heterologous proteins. Nevertheless, transgenic plants functioning on the foundation of ncRNAs warrant due attention with respect to their unique attributes, such as off-target or non-target gene silencing effects, small RNA (sRNA) persistence, food and feed safety assessments, problems in the detection and tracking of sRNAs in food, the impact of ncRNAs in plant protection measures, the effect of mutations, etc. The role of recent developments in sequencing techniques like next generation sequencing (NGS), and the ERA paradigms in vogue in different countries, are also discussed in the context of ncRNA-based gene manipulations.

  18. Adapting WEPP (Water Erosion Prediction Project) for Forest Watershed Erosion Modeling

    NASA Astrophysics Data System (ADS)

    Dun, S.; Wu, J. Q.; Elliot, W. J.; Robichaud, P. R.; Flanagan, D. C.

    2006-12-01

    There has been increasing public concern over forest stream pollution by excessive sedimentation resulting from human activities. Adequate and reliable erosion simulation tools are urgently needed for sound forest resources management. Computer models for predicting watershed runoff and erosion have been developed over the past decades. These models, however, are often limited in their applications due to inappropriate representation of the hydrological processes involved. The Water Erosion Prediction Project (WEPP) watershed model has proved useful in certain forest applications, such as modeling erosion from a segment of insloped or outsloped road, harvested units, and burned units. Nevertheless, when used for modeling water flow and sediment discharge from a forest watershed of complex topography and channel systems, WEPP consistently underestimates these quantities, in particular the water flow at the watershed outlet. The main purpose of this study was to improve the WEPP watershed model such that it can be applied to adequately simulate forest watershed hydrology and erosion. The specific objectives were to: (1) identify and correct WEPP algorithms and subroutines that inappropriately represent forest watershed hydrologic processes; and (2) assess the performance of the modified model by applying it to a real forested watershed in the Pacific Northwest, USA. In modifying the WEPP model, changes were primarily made in the approach to, and algorithms for, modeling deep percolation of soil water and subsurface lateral flow. The modified codes were subsequently applied to the Hermada watershed, a small watershed located in the Boise National Forest in northern Idaho. The modeling results were compared with those obtained using the original WEPP and with the field-observed runoff and erosion data. Conclusions of this study include: (1) compared to the original model, the modified WEPP more realistically and properly represents the hydrologic processes in a forest setting; and

  19. Organizational Changes to Thyroid Regulation in Alligator mississippiensis: Evidence for Predictive Adaptive Responses

    PubMed Central

    Boggs, Ashley S. P.; Lowers, Russell H.; Cloy-McCoy, Jessica A.; Guillette, Louis J.

    2013-01-01

    During embryonic development, organisms are sensitive to changes in thyroid hormone signaling which can reset the hypothalamic-pituitary-thyroid axis. It has been hypothesized that this developmental programming is a ‘predictive adaptive response’, a physiological adjustment in accordance with the embryonic environment that will best aid an individual's survival in a similar postnatal environment. When the embryonic environment is a poor predictor of the external environment, the developmental changes are no longer adaptive and can result in disease states. We predicted that endocrine disrupting chemicals (EDCs) and environmentally-based iodide imbalance could lead to developmental changes to the thyroid axis. To explore whether iodide or EDCs could alter developmental programming, we collected American alligator eggs from an estuarine environment with high iodide availability and elevated thyroid-specific EDCs, a freshwater environment contaminated with elevated agriculturally derived EDCs, and a reference freshwater environment. We then incubated them under identical conditions. We examined plasma thyroxine and triiodothyronine concentrations, thyroid gland histology, plasma inorganic iodide, and somatic growth at one week (before external nutrition) and ten months after hatching (on identical diets). Neonates from the estuarine environment were thyrotoxic, expressing follicular cell hyperplasia (p = 0.01) and elevated plasma triiodothyronine concentrations (p = 0.0006) closely tied to plasma iodide concentrations (p = 0.003). Neonates from the freshwater contaminated site were hypothyroid, expressing thyroid follicular cell hyperplasia (p = 0.01) and depressed plasma thyroxine concentrations (p = 0.008). Following a ten month growth period under identical conditions, thyroid histology (hyperplasia p = 0.04; colloid depletion p = 0.01) and somatic growth (body mass p<0.0001; length p = 0.02) remained altered among the

  20. Direct-heating containment vessel interaction code (DHCVIC) and prediction of SNL SURTSEY test DCH-1

    SciTech Connect

    Ginsberg, T.; Tutu, N.

    1986-01-01

    High-pressure melt ejection from pressurized water reactor (PWR) vessels has been identified as a severe core-accident scenario which could potentially lead to early containment failure. Melt ejection, followed by dispersal of the melt by high-velocity steam in the cavity beneath the PWR vessel could, according to this scenario, lead to rapid transfer of energy from the melt droplets to the containment atmosphere. This paper describes DHCVIC, an integrated model of the thermal, chemical, and hydrodynamic interactions which are postulated to take place during high-pressure melt ejection sequences. The model, which characterizes interactions occurring within the reactor cavity, as well as in the containment vessel (or building), is applied to prediction of the Sandia National Laboratory (SNL) SURTSEY Test DCH-1 and a (post-test) prediction of that test is made.

  1. Direct heating containment vessel interactions code (DHCVIC) and prediction of SNL ''SURTSEY'' test DCH-1

    SciTech Connect

    Ginsberg, T.; Tutu, N.

    1986-01-01

    High-pressure melt ejection from PWR vessels has been identified as a severe core accident scenario which could potentially lead to "early" containment failure. Melt ejection, followed by dispersal of the melt by high-velocity steam in the cavity beneath the PWR vessel could, according to this scenario, lead to rapid transfer of energy from the melt droplets to the containment atmosphere. This paper describes DHCVIC, an integrated model of the thermal, chemical, and hydrodynamic interactions which are postulated to take place during high-pressure melt ejection sequences. The model, which characterizes interactions occurring within the reactor cavity, as well as in the containment vessel (or building), is applied to prediction of the Sandia National Laboratory "SURTSEY" Test DCH-1 and a (post-test) prediction of that test is made.

  2. Environmental prediction, risk assessment and extreme events: adaptation strategies for the developing world.

    PubMed

    Webster, Peter J; Jian, Jun

    2011-12-13

    The uncertainty associated with predicting extreme weather events has serious implications for the developing world, owing to the greater societal vulnerability to such events. Continual exposure to unanticipated extreme events is a contributing factor for the descent into perpetual and structural rural poverty. We provide two examples of how probabilistic environmental prediction of extreme weather events can support dynamic adaptation. In the current climate era, we describe how short-term flood forecasts have been developed and implemented in Bangladesh. Forecasts of impending floods with horizons of 10 days are used to change agricultural practices and planning, store food and household items and evacuate those in peril. For the first time in Bangladesh, floods were anticipated in 2007 and 2008, with broad actions taking place in advance of the floods, grossing agricultural and household savings measured in units of annual income. We argue that probabilistic environmental forecasts disseminated to an informed user community can reduce poverty caused by exposure to unanticipated extreme events. Second, it is also realized that not all decisions in the future can be made at the village level and that grand plans for water resource management require extensive planning and funding. Based on imperfect models and scenarios of economic and population growth, we further suggest that flood frequency and intensity will increase in the Ganges, Brahmaputra and Yangtze catchments as greenhouse-gas concentrations increase. However, irrespective of the climate-change scenario chosen, the availability of fresh water in the latter half of the twenty-first century seems to be dominated by population increases that far outweigh climate-change effects. Paradoxically, fresh water availability may become more critical if there is no climate change.

  3. Environmental prediction, risk assessment and extreme events: adaptation strategies for the developing world

    PubMed Central

    Webster, Peter J.; Jian, Jun

    2011-01-01

    The uncertainty associated with predicting extreme weather events has serious implications for the developing world, owing to the greater societal vulnerability to such events. Continual exposure to unanticipated extreme events is a contributing factor for the descent into perpetual and structural rural poverty. We provide two examples of how probabilistic environmental prediction of extreme weather events can support dynamic adaptation. In the current climate era, we describe how short-term flood forecasts have been developed and implemented in Bangladesh. Forecasts of impending floods with horizons of 10 days are used to change agricultural practices and planning, store food and household items and evacuate those in peril. For the first time in Bangladesh, floods were anticipated in 2007 and 2008, with broad actions taking place in advance of the floods, grossing agricultural and household savings measured in units of annual income. We argue that probabilistic environmental forecasts disseminated to an informed user community can reduce poverty caused by exposure to unanticipated extreme events. Second, it is also realized that not all decisions in the future can be made at the village level and that grand plans for water resource management require extensive planning and funding. Based on imperfect models and scenarios of economic and population growth, we further suggest that flood frequency and intensity will increase in the Ganges, Brahmaputra and Yangtze catchments as greenhouse-gas concentrations increase. However, irrespective of the climate-change scenario chosen, the availability of fresh water in the latter half of the twenty-first century seems to be dominated by population increases that far outweigh climate-change effects. Paradoxically, fresh water availability may become more critical if there is no climate change. PMID:22042897

  4. Adaptability: How Students' Responses to Uncertainty and Novelty Predict Their Academic and Non-Academic Outcomes

    ERIC Educational Resources Information Center

    Martin, Andrew J.; Nejad, Harry G.; Colmar, Susan; Liem, Gregory Arief D.

    2013-01-01

    Adaptability is defined as appropriate cognitive, behavioral, and/or affective adjustment in the face of uncertainty and novelty. Building on prior measurement work demonstrating the psychometric properties of an adaptability construct, the present study investigates dispositional predictors (personality, implicit theories) of adaptability, and…

  5. Memory-efficient table look-up optimized algorithm for context-based adaptive variable length decoding in H.264/advanced video coding

    NASA Astrophysics Data System (ADS)

    Wang, Jianhua; Cheng, Lianglun; Wang, Tao; Peng, Xiaodong

    2016-03-01

    Table look-up plays a very important role in the decoding process of context-based adaptive variable length decoding (CAVLD) in H.264/advanced video coding (AVC). However, frequent table look-ups produce a large number of table memory accesses and, in turn, high table power consumption. To reduce the memory accesses, and hence the power consumption, incurred by current methods, a memory-efficient table look-up optimized algorithm is presented for CAVLD. The contribution of this paper is the introduction of an index search technique that reduces the memory accesses required for table look-up. Specifically, the scheme exploits the internal relationship among the number of leading zeros in code_prefix, the value of code_suffix, and code_length to reduce the searching and matching operations for code_word, thereby saving table look-up power. Experimental results show that the proposed index-search table look-up algorithm reduces memory accesses by about 60% compared with a sequential-search scheme, and thus saves considerable power for CAVLD in H.264/AVC.
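
    The index-search idea can be illustrated on a toy variable-length-code table: rather than scanning codewords sequentially, the decoder counts the leading zeros of the prefix and uses that count to jump straight to the few candidate codewords sharing that prefix. The table, bit layout and symbols below are invented for illustration; they are not the actual H.264 CAVLC tables.

      # Toy VLC table: (leading_zeros, suffix_bits, suffix_value, symbol).
      # Invented codewords for illustration; not the real CAVLC tables.
      TABLE = [
          (0, 0, 0, "A"),   # codeword: 1
          (1, 1, 0, "B"),   # codeword: 010
          (1, 1, 1, "C"),   # codeword: 011
          (2, 2, 0, "D"),   # codeword: 00100
          (2, 2, 1, "E"),   # codeword: 00101
      ]

      # Index keyed by leading-zero count: each bucket maps suffix -> symbol,
      # replacing a sequential scan of the whole table with a direct jump.
      INDEX = {}
      for zeros, nbits, val, sym in TABLE:
          INDEX.setdefault(zeros, {})[val] = (nbits, sym)

      def decode(bits, pos):
          """Decode one symbol from a '0'/'1' string starting at pos."""
          zeros = 0
          while bits[pos + zeros] == "0":       # count prefix zeros
              zeros += 1
          pos += zeros + 1                      # skip prefix and the marker '1'
          bucket = INDEX[zeros]
          nbits = next(iter(bucket.values()))[0]  # suffix width fixed per bucket
          val = int(bits[pos:pos + nbits], 2) if nbits else 0
          return bucket[val][1], pos + nbits

      bits = "1" + "010" + "00101"              # encodes A, B, E
      pos, out = 0, []
      while pos < len(bits):
          sym, pos = decode(bits, pos)
          out.append(sym)
      print(out)                                # ['A', 'B', 'E']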

  6. Goal orientation and work role performance: predicting adaptive and proactive work role performance through self-leadership strategies.

    PubMed

    Marques-Quinteiro, Pedro; Curral, Luís Alberto

    2012-01-01

    This article explores the relationship between goal orientation, self-leadership dimensions, and adaptive and proactive work role performance. The authors hypothesize that learning orientation, in contrast to performance orientation, positively predicts proactive and adaptive work role performance and that this relationship is mediated by self-leadership behavior-focused strategies. Self-leadership natural reward strategies and thought pattern strategies are posited to moderate this relationship. Workers (N = 108) from a software company participated in this study. As expected, learning orientation predicted adaptive and proactive work role performance. Moreover, in the relationship between learning orientation and proactive work role performance through self-leadership behavior-focused strategies, a moderated mediation effect was found for self-leadership natural reward and thought pattern strategies. Finally, the results and their implications are discussed and future research directions are proposed.

  7. Goal orientation and work role performance: predicting adaptive and proactive work role performance through self-leadership strategies.

    PubMed

    Marques-Quinteiro, Pedro; Curral, Luís Alberto

    2012-01-01

    This article explores the relationship between goal orientation, self-leadership dimensions, and adaptive and proactive work role performance. The authors hypothesize that learning orientation, in contrast to performance orientation, positively predicts proactive and adaptive work role performance and that this relationship is mediated by self-leadership behavior-focused strategies. Self-leadership natural reward strategies and thought pattern strategies are posited to moderate this relationship. Workers (N = 108) from a software company participated in this study. As expected, learning orientation predicted adaptive and proactive work role performance. Moreover, in the relationship between learning orientation and proactive work role performance through self-leadership behavior-focused strategies, a moderated mediation effect was found for self-leadership natural reward and thought pattern strategies. Finally, the results and their implications are discussed and future research directions are proposed. PMID:23094471
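
    A minimal way to probe such a mediation hypothesis is to estimate the indirect effect a*b from two regressions (X -> M, then Y ~ X + M) and bootstrap its confidence interval. The sketch below does this on synthetic standardized scores; the variable names mirror the constructs above, but the data, effect sizes and the simplification to plain (unmoderated) mediation are all assumptions.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 108                                   # sample size matching the study

      # Synthetic standardized scores (illustration only, not the study's data):
      learning = rng.normal(size=n)                      # learning orientation (X)
      strategies = 0.5 * learning + rng.normal(size=n)   # behavior-focused strategies (M)
      proactive = 0.4 * strategies + 0.2 * learning + rng.normal(size=n)  # outcome (Y)

      def slopes(y, X):
          """OLS coefficients for y ~ 1 + X, intercept dropped."""
          A = np.column_stack([np.ones(len(y)), X])
          return np.linalg.lstsq(A, y, rcond=None)[0][1:]

      a = slopes(strategies, learning)[0]                # X -> M path
      b = slopes(proactive, np.column_stack([learning, strategies]))[1]  # M -> Y | X
      print(f"indirect effect a*b = {a * b:.3f}")

      # Percentile bootstrap CI for the indirect effect.
      boots = []
      for _ in range(2000):
          i = rng.integers(0, n, n)
          a_i = slopes(strategies[i], learning[i])[0]
          b_i = slopes(proactive[i], np.column_stack([learning[i], strategies[i]]))[1]
          boots.append(a_i * b_i)
      lo, hi = np.percentile(boots, [2.5, 97.5])
      print(f"95% bootstrap CI: [{lo:.3f}, {hi:.3f}]")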

  8. Does a Measure of Support Needs Predict Funding Need Better Than a Measure of Adaptive and Maladaptive Behavior?

    PubMed

    Arnold, Samuel R C; Riches, Vivienne C; Stancliffe, Roger J

    2015-09-01

    Internationally, various approaches are used for the allocation of individualized funding. When using a data-based approach, a key question is the predictive validity of adaptive behavior versus support needs assessment. This article reports on a subset of data from a larger project that allowed a comparison of support needs and adaptive behavior assessments in predicting person-centered funding allocation. The first phase of the project involved a trial of the Inventory for Client and Agency Planning (ICAP) adaptive behavior assessment and the Instrument for the Classification and Assessment of Support Needs (I-CAN) Brief Research version support needs assessment. Participants were in receipt of an individual support package allocated using a person-centered planning process, and were stable in their support arrangements. Regression analysis showed that the most useful items in predicting funding allocation came from the I-CAN Brief Research. No additional variance could be explained by adding the ICAP, or by using the ICAP alone. A further unique approach, including only those I-CAN Brief Research items marked as funded supports, showed high predictive validity. It appears that support need is more effective than adaptive behavior at determining resource need. PMID:26322387
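
    The head-to-head comparison of two assessments reduces, in its simplest form, to comparing cross-validated R^2 for models built on each instrument's items and on their union. A sketch under synthetic data follows; the item counts and the generated relationship (the support-needs items carrying nearly all of the signal) are assumptions chosen to mimic the reported pattern.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(2)
      n = 200                                   # hypothetical sample size

      # Synthetic item scores for two instruments (illustration only):
      ican = rng.normal(size=(n, 8))            # stand-in for I-CAN support-need items
      icap = rng.normal(size=(n, 8))            # stand-in for ICAP adaptive-behavior items
      funding = ican @ rng.uniform(0.5, 1.5, 8) + 0.1 * icap[:, 0] + rng.normal(0, 1, n)

      for name, X in [("I-CAN alone", ican),
                      ("ICAP alone", icap),
                      ("I-CAN + ICAP", np.hstack([ican, icap]))]:
          r2 = cross_val_score(LinearRegression(), X, funding, cv=5, scoring="r2")
          print(f"{name:12s} cross-validated R^2 = {r2.mean():.2f}")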

  9. Interest level in 2-year-olds with autism spectrum disorder predicts rate of verbal, nonverbal, and adaptive skill acquisition

    PubMed Central

    Klintwall, Lars; Macari, Suzanne; Eikeseth, Svein; Chawarska, Katarzyna

    2016-01-01

    Recent studies have suggested that skill acquisition rates for children with autism spectrum disorders receiving early interventions can be predicted by child motivation. We examined whether level of interest during an Autism Diagnostic Observation Schedule assessment at 2 years predicts subsequent rates of verbal, nonverbal, and adaptive skill acquisition to the age of 3 years. A total of 70 toddlers with autism spectrum disorder, mean age of 21.9 months, were scored using Interest Level Scoring for Autism, quantifying toddlers’ interest in toys, social routines, and activities that could serve as reinforcers in an intervention. Adaptive level and mental age were measured concurrently (Time 1) and again after a mean of 16.3 months of treatment (Time 2). Interest Level Scoring for Autism score, Autism Diagnostic Observation Schedule score, adaptive age equivalent, verbal and nonverbal mental age, and intensity of intervention were entered into regression models to predict rates of skill acquisition. Interest level at Time 1 predicted subsequent acquisition rate of adaptive skills (R2 = 0.36) and verbal mental age (R2 = 0.30), above and beyond the effects of Time 1 verbal and nonverbal mental ages and Autism Diagnostic Observation Schedule scores. Together with treatment intensity, interest level at Time 1 also contributed (R2 = 0.30) to variance in the development of nonverbal mental age. PMID:25398893
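
    The "above and beyond" claim corresponds to a hierarchical regression: fit the baseline covariates first, add interest level, and report the increment in R^2. Below is a minimal numpy sketch on synthetic data; the variable names follow the record, but all values are simulated.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 70                                    # sample size matching the study

      # Synthetic standardized predictors (illustration only):
      verbal_ma = rng.normal(size=n)            # Time 1 verbal mental age
      nonverbal_ma = rng.normal(size=n)         # Time 1 nonverbal mental age
      ados = rng.normal(size=n)                 # ADOS severity score
      interest = rng.normal(size=n)             # Interest Level Scoring for Autism
      gain = 0.3 * verbal_ma + 0.5 * interest + rng.normal(size=n)  # skill gain rate

      def r_squared(y, *predictors):
          X = np.column_stack([np.ones(len(y))] + list(predictors))
          yhat = X @ np.linalg.lstsq(X, y, rcond=None)[0]
          return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

      base = r_squared(gain, verbal_ma, nonverbal_ma, ados)
      full = r_squared(gain, verbal_ma, nonverbal_ma, ados, interest)
      print(f"baseline R^2 = {base:.2f}, + interest R^2 = {full:.2f}, "
            f"delta R^2 = {full - base:.2f}")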

  10. Integrity of medial temporal structures may predict better improvement of spatial neglect with prism adaptation treatment

    PubMed Central

    Goedert, Kelly M.; Shah, Priyanka; Foundas, Anne L.; Barrett, A. M.

    2013-01-01

    Prism adaptation treatment (PAT) is a promising rehabilitative method for functional recovery in persons with spatial neglect. Previous research suggests that PAT improves motor-intentional “aiming” deficits that frequently occur with frontal lesions. To test whether presence of frontal lesions predicted better improvement of spatial neglect after PAT, the current study evaluated neglect-specific improvement in functional activities (assessment with the Catherine Bergego Scale) over time in 21 right-brain-damaged stroke survivors with left-sided spatial neglect. The results demonstrated that neglect patients' functional activities improved after two weeks of PAT and continued improving for four weeks. Such functional improvement did not occur equally in all of the participants: Neglect patients with lesions involving the frontal cortex (n=13) experienced significantly better functional improvement than did those without frontal lesions (n=8). More importantly, voxel-based lesion-behavior mapping (VLBM) revealed that in comparison to the group of patients without frontal lesions, the frontal-lesioned neglect patients had intact regions in the medial temporal areas, the superior temporal areas, and the inferior longitudinal fasciculus. The medial cortical and subcortical areas in the temporal lobe were especially distinguished in the “frontal lesion” group. The findings suggest that the integrity of medial temporal structures may play an important role in supporting functional improvement after PAT. PMID:22941243

  11. Prediction of grain size of nanocrystalline nickel coatings using adaptive neuro-fuzzy inference system

    NASA Astrophysics Data System (ADS)

    Hayati, M.; Rashidi, A. M.; Rezaei, A.

    2011-01-01

    This paper presents an application of an adaptive neuro-fuzzy inference system (ANFIS) for predicting the grain size of nanocrystalline nickel coatings as a function of current density, saccharin concentration, and bath temperature. In the ANFIS model, the current density, saccharin concentration, and bath temperature are taken as inputs, and the resulting grain size of the nanocrystalline coating as the output. To provide a consistent set of experimental data, the nanocrystalline nickel coatings were deposited from a Watts-type bath using direct-current electroplating over a wide range of these process parameters. The variation of the grain size with the electroplating parameters has been modeled using ANFIS, and the experimental results and theoretical approaches have been compared with each other. The proposed ANFIS model has also been compared with an artificial neural network (ANN) approach. The results show that the ANFIS model is more accurate and reliable than the ANN approach.
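
    The consequent half of an ANFIS-style model can be sketched compactly: fix Gaussian rule memberships, compute normalized firing strengths, and fit one linear consequent per rule by least squares. Full ANFIS would additionally tune the membership centres and widths, typically by gradient descent in a hybrid loop. Everything below, including the synthetic grain-size relationship, is illustrative.

      import numpy as np

      rng = np.random.default_rng(4)

      # Synthetic stand-in for (current density, saccharin conc., bath temp.)
      # -> grain size; the true data are in the paper, not reproduced here.
      X = rng.uniform(0.0, 1.0, (200, 3))
      y = 40 - 25 * X[:, 1] + 10 * X[:, 0] * X[:, 2] + rng.normal(0, 1, 200)

      # Premise part: R rules, one Gaussian membership per input (fixed here).
      R = 4
      centres = rng.uniform(0.0, 1.0, (R, 3))
      sigma = 0.5

      def firing(X):
          d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
          w = np.exp(-d2 / (2 * sigma ** 2))    # rule firing strengths
          return w / w.sum(axis=1, keepdims=True)

      # Consequent part: one linear model per rule, fit jointly by least squares.
      W = firing(X)                             # (n, R) normalized strengths
      aug = np.hstack([X, np.ones((len(X), 1))])
      Phi = (W[:, :, None] * aug[:, None, :]).reshape(len(X), -1)
      theta = np.linalg.lstsq(Phi, y, rcond=None)[0]

      def predict(Xq):
          augq = np.hstack([Xq, np.ones((len(Xq), 1))])
          return (firing(Xq)[:, :, None] * augq[:, None, :]).reshape(len(Xq), -1) @ theta

      print("train RMSE:", np.sqrt(np.mean((predict(X) - y) ** 2)))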

  12. Modelling and control issues of dynamically substructured systems: adaptive forward prediction taken as an example

    PubMed Central

    Tu, Jia-Ying; Hsiao, Wei-De; Chen, Chih-Ying

    2014-01-01

    Testing techniques for dynamically substructured systems dissect an entire engineering system into parts. Components can be tested via numerical simulation or physical experiments and run synchronously. Additional actuator systems, which interface the numerical and physical parts, are required within the physical substructure. A high-quality controller, designed to cancel unwanted dynamics introduced by the actuators, is important in order to synchronize the numerical and physical outputs and ensure successful tests. An adaptive forward prediction (AFP) algorithm based on delay compensation concepts has been proposed to deal with substructuring control issues. Although the settling performance and numerical conditions of the AFP controller are improved using new direct-compensation and singular value decomposition methods, the experimental results show that a linear dynamics-based controller still outperforms the AFP controller. Based on experimental observations, the least-squares fitting technique, the effectiveness of the AFP compensation, and the differences between delay and ordinary differential equations are discussed herein, in order to reflect the fundamental issues of actuator modelling in the relevant literature and, more specifically, to show that the actuator and numerical substructure are heterogeneous dynamic components and should not be collectively modelled as a homogeneous delay differential equation. PMID:25104902
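
    Forward prediction of the kind used in AFP can be sketched as least-squares polynomial extrapolation: fit a low-order polynomial to the most recent samples of the numerical-substructure output and evaluate it at the actuator delay ahead. The window length, polynomial order, delay and test signal below are illustrative choices, not values from the paper.

      import numpy as np

      def forward_predict(history, delay, window=12, order=2):
          """Fit a polynomial to the last `window` samples (in sample units)
          and extrapolate `delay` samples ahead."""
          t = np.arange(-window + 1, 1)         # ..., -2, -1, 0 (newest sample)
          coeffs = np.polyfit(t, history[-window:], order)
          return np.polyval(coeffs, delay)

      dt, delay_steps = 0.001, 8                # e.g. an 8 ms actuator delay
      t = np.arange(0, 1, dt)
      signal = np.sin(2 * np.pi * 3 * t)        # numerical-substructure output

      pred = np.array([forward_predict(signal[:k + 1], delay_steps)
                       for k in range(20, len(t) - delay_steps)])
      true = signal[20 + delay_steps:]
      print("RMS compensation error:", np.sqrt(np.mean((pred - true) ** 2)))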

  13. Measured and predicted pressure distributions on the AFTI/F-111 mission adaptive wing

    NASA Technical Reports Server (NTRS)

    Webb, Lannie D.; McCain, William E.; Rose, Lucinda A.

    1988-01-01

    Flight tests have been conducted using an F-111 aircraft modified with a mission adaptive wing (MAW). The MAW has variable-camber leading and trailing edge surfaces that can change the wing camber in flight, while preserving smooth upper surface contours. This paper contains wing surface pressure measurements obtained during flight tests at Dryden Flight Research Facility of NASA Ames Research Center. Upper and lower surface steady pressure distributions were measured along four streamwise rows of static pressure orifices on the right wing for a leading-edge sweep angle of 26 deg. The airplane, wing, instrumentation, and test conditions are discussed. Steady pressure results are presented for selected wing camber deflections flown at subsonic Mach numbers up to 0.90 and an angle-of-attack range of 5 to 12 deg. The Reynolds number was 26 million, based on the mean aerodynamic chord. The MAW flight data are compared to MAW wind tunnel data, transonic aircraft technology (TACT) flight data, and predicted pressure distributions. The results provide a unique database for a smooth, variable-camber, advanced supercritical wing.
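
    Surface-pressure measurements of this kind are normally reduced to pressure coefficients before comparison; for compressible flow, Cp follows from the local-to-freestream pressure ratio and the freestream Mach number via q_inf = 0.5*gamma*p_inf*M^2. The orifice readings below are hypothetical; only the M = 0.90 condition echoes the record.

      import numpy as np

      GAMMA = 1.4   # ratio of specific heats for air

      def cp_compressible(p, p_inf, mach_inf):
          """Pressure coefficient using q_inf = 0.5 * gamma * p_inf * M_inf**2."""
          return (p / p_inf - 1.0) / (0.5 * GAMMA * mach_inf ** 2)

      # Hypothetical orifice readings along one chordwise row (kPa); the
      # real MAW data are in the NASA report, not reproduced here.
      x_over_c = np.array([0.05, 0.20, 0.40, 0.60, 0.80])
      p_upper = np.array([24.0, 22.5, 23.0, 24.5, 26.0])
      p_inf = 26.5                              # freestream static pressure, kPa
      for x, p in zip(x_over_c, p_upper):
          print(f"x/c = {x:.2f}  Cp = {cp_compressible(p, p_inf, 0.90):+.3f}")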

  14. Integrity of medial temporal structures may predict better improvement of spatial neglect with prism adaptation treatment.

    PubMed

    Chen, Peii; Goedert, Kelly M; Shah, Priyanka; Foundas, Anne L; Barrett, A M

    2014-09-01

    Prism adaptation treatment (PAT) is a promising rehabilitative method for functional recovery in persons with spatial neglect. Previous research suggests that PAT improves motor-intentional "aiming" deficits that frequently occur with frontal lesions. To test whether presence of frontal lesions predicted better improvement of spatial neglect after PAT, the current study evaluated neglect-specific improvement in functional activities (assessment with the Catherine Bergego Scale) over time in 21 right-brain-damaged stroke survivors with left-sided spatial neglect. The results demonstrated that neglect patients' functional activities improved after two weeks of PAT and continued improving for four weeks. Such functional improvement did not occur equally in all of the participants: Neglect patients with lesions involving the frontal cortex (n = 13) experienced significantly better functional improvement than did those without frontal lesions (n = 8). More importantly, voxel-based lesion-behavior mapping (VLBM) revealed that in comparison to the group of patients without frontal lesions, the frontal-lesioned neglect patients had intact regions in the medial temporal areas, the superior temporal areas, and the inferior longitudinal fasciculus. The medial cortical and subcortical areas in the temporal lobe were especially distinguished in the "frontal lesion" group. The findings suggest that the integrity of medial temporal structures may play an important role in supporting functional improvement after PAT.
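
    At its core, VLBM asks, voxel by voxel, whether patients whose lesions cover that voxel improved differently from patients whose lesions spare it. The sketch below runs that comparison as a mass of two-sample t-tests on synthetic lesion masks and improvement scores; real VLBM additionally handles image registration, minimum-lesion-count thresholds and multiple-comparison correction.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)
      n_patients, n_voxels = 21, 500            # 21 patients, toy voxel grid

      # Synthetic binary lesion maps and functional-improvement scores
      # (illustration only; real VLBM uses registered 3-D lesion masks).
      lesion = rng.random((n_patients, n_voxels)) < 0.3
      improve = rng.normal(10, 3, n_patients)
      improve[lesion[:, 42]] += 4               # plant an effect at voxel 42

      t_map = np.full(n_voxels, np.nan)
      for v in range(n_voxels):
          grp1, grp0 = improve[lesion[:, v]], improve[~lesion[:, v]]
          if min(len(grp1), len(grp0)) >= 5:    # skip rarely lesioned voxels
              t_map[v] = stats.ttest_ind(grp1, grp0).statistic

      print("voxel with strongest lesion-behavior association:",
            np.nanargmax(t_map))                # often voxel 42 with this seeding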

  15. Wind-US Code Contributions to the First AIAA Shock Boundary Layer Interaction Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Georgiadis, Nicholas J.; Vyas, Manan A.; Yoder, Dennis A.

    2013-01-01

    This report discusses the computations of a set of shock wave/turbulent boundary layer interaction (SWTBLI) test cases using the Wind-US code, as part of the 2010 American Institute of Aeronautics and Astronautics (AIAA) shock/boundary layer interaction workshop. The experiments involve supersonic flows in wind tunnels with a shock generator that directs an oblique shock wave toward the boundary layer along one of the walls of the wind tunnel. The Wind-US calculations utilized structured grid computations performed in Reynolds-averaged Navier-Stokes mode. Four turbulence models were investigated: the Spalart-Allmaras one-equation model, the Menter Baseline and Shear Stress Transport k-omega two-equation models, and an explicit algebraic stress k-omega formulation. Effects of grid resolution and upwinding scheme were also considered. The results from the CFD calculations are compared to particle image velocimetry (PIV) data from the experiments. As expected, turbulence model effects dominated the accuracy of the solutions, while the choice of upwinding scheme had minimal effect.
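
    A routine step in such validation studies is interpolating each model's computed profile onto the PIV measurement locations and summarizing the mismatch with an error norm. The sketch below does this for two stand-in "model" profiles against a synthetic noisy PIV profile; the profile shapes and model labels are placeholders, not Wind-US output.

      import numpy as np

      rng = np.random.default_rng(6)

      # Synthetic boundary-layer velocity profiles (illustration only): PIV
      # measurements and two "turbulence model" predictions on a finer grid.
      y_piv = np.linspace(0.05, 1.0, 15)
      u_piv = y_piv ** (1 / 7) + rng.normal(0, 0.01, 15)   # noisy power-law profile

      y_cfd = np.linspace(0.0, 1.0, 200)
      models = {"SST": y_cfd ** (1 / 7.2),                 # stand-in predictions
                "SA":  y_cfd ** (1 / 6.0)}

      for name, u_cfd in models.items():
          u_on_piv = np.interp(y_piv, y_cfd, u_cfd)        # sample CFD at PIV points
          rms = np.sqrt(np.mean((u_on_piv - u_piv) ** 2))
          print(f"{name}: RMS error vs PIV = {rms:.4f}")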

  16. Statistical method for sparse coding of speech including a linear predictive model

    NASA Astrophysics Data System (ADS)

    Rufiner, Hugo L.; Goddard, John; Rocha, Luis F.; Torres, María E.

    2006-07-01

    Recently, different methods for obtaining sparse representations of a signal using dictionaries of waveforms have been studied. They are often motivated by the way the brain seems to process certain sensory signals. Algorithms have been developed using a specific criterion to choose the waveforms occurring in the representation. The waveforms are chosen from a fixed dictionary, and some algorithms also construct them as part of the method. In the case of speech signals, most approaches do not take into consideration the important temporal correlations that these signals exhibit. It is known that these correlations are well approximated by linear models. Incorporating this a priori knowledge of the signal can facilitate the search for a suitable representation and can also help with its interpretation. Lewicki proposed a method to solve the noisy and overcomplete independent component analysis problem. In the present paper we propose a modification of this statistical technique for obtaining a sparse representation using a generative parametric model. The representations obtained with the proposed method and with other techniques are applied to artificial data and real speech signals, and compared using different coding costs and sparsity measures. The results show that the proposed method achieves more efficient representations of these signals than the others. A qualitative analysis of these results is also presented, which suggests that the restriction imposed by the parametric model is helpful in discovering meaningful characteristics of the signals.
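
    The flavor of pairing a linear predictive model with sparse coding can be sketched in two steps: fit autoregressive (AR) coefficients by least squares, then sparse-code the prediction residual with a greedy pursuit. This is a drastic simplification of the Lewicki-style inference the paper builds on, with an impulse dictionary standing in for learned waveforms; the signal and all parameters are synthetic.

      import numpy as np

      rng = np.random.default_rng(7)

      # Synthetic "speech-like" signal: an AR(2) resonance driven by sparse pulses.
      n = 400
      drive = np.zeros(n)
      drive[rng.choice(n, 8, replace=False)] = rng.normal(0, 1, 8)
      x = np.zeros(n)
      for t in range(2, n):
          x[t] = 1.6 * x[t - 1] - 0.81 * x[t - 2] + drive[t]

      # 1) Fit AR(p) coefficients by least squares (the linear predictive model).
      p = 2
      A = np.column_stack([x[p - k - 1:n - k - 1] for k in range(p)])
      a = np.linalg.lstsq(A, x[p:], rcond=None)[0]
      resid = x[p:] - A @ a                     # prediction residual

      # 2) Greedy matching pursuit on the residual with a unit-impulse
      # dictionary (a stand-in for learned waveforms).
      def matching_pursuit(r, n_atoms):
          r, code = r.copy(), {}
          for _ in range(n_atoms):
              k = np.argmax(np.abs(r))          # best-matching impulse atom
              code[k] = r[k]
              r[k] = 0.0                        # remove the atom's contribution
          return code, r

      code, r_final = matching_pursuit(resid, 8)
      print("AR coefficients:", np.round(a, 2))           # close to [1.6, -0.81]
      print("active atoms:", sorted(code))
      print("residual energy left:", float(np.sum(r_final ** 2) / np.sum(resid ** 2)))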

  17. A trait-based framework for predicting when and where microbial adaptation to climate change will affect ecosystem functioning

    USGS Publications Warehouse

    Wallenstein, Matthew D.; Hall, Edward K.

    2012-01-01

    As the earth system changes in response to human activities, a critical objective is to predict how biogeochemical process rates (e.g. nitrification, decomposition) and ecosystem function (e.g. net ecosystem productivity) will change under future conditions. A particular challenge is that the microbial communities that drive many of these processes are capable of adapting to environmental change in ways that alter ecosystem functioning. Despite evidence that microbes can adapt to temperature, precipitation regimes, and redox fluctuations, microbial communities are typically not optimally adapted to their local environment. For example, temperature optima for growth and enzyme activity are often greater than the in situ temperatures of their environment. Here we discuss fundamental constraints on microbial adaptation and suggest specific environments where microbial adaptation to climate change (or lack thereof) is most likely to alter ecosystem functioning. Our framework is based on two principal assumptions. First, there are fundamental ecological trade-offs in microbial community traits across environmental gradients (in time and space). These trade-offs result in shifts in one microbial function (e.g. the ability to take up resources at low temperature) in response to adaptation of another trait (e.g. limiting maintenance respiration at high temperature). Second, the mechanism and level of microbial community adaptation to changing environmental parameters are a function of the potential rate of change in community composition relative to the rate of environmental change. Together, this framework provides a basis for developing testable predictions about how the rate and degree of microbial adaptation to climate change will alter biogeochemical processes in aquatic and terrestrial ecosystems across the planet.

  18. Development of Computational Aeroacoustics Code for Jet Noise and Flow Prediction

    NASA Technical Reports Server (NTRS)

    Keith, Theo G., Jr.; Hixon, Duane R.

    2002-01-01

    Accurate prediction of jet fan and exhaust plume flow and noise generation and propagation is very important in developing advanced aircraft engines that will pass current and future noise regulations. In jet fan flows as well as exhaust plumes, two major sources of noise are present: large-scale, coherent instabilities and small-scale turbulent eddies. In previous work for the NASA Glenn Research Center, three strategies have been explored in an effort to computationally predict the noise radiation from supersonic jet exhaust plumes. In order from the least expensive computationally to the most expensive computationally, these are: 1) Linearized Euler equations (LEE). 2) Very Large Eddy Simulations (VLES). 3) Large Eddy Simulations (LES). The first method solves the linearized Euler equations (LEE). These equations are obtained by linearizing about a given mean flow and neglecting viscous effects. In this way, the noise from large-scale instabilities can be found for a given mean flow. The linearized Euler equations are computationally inexpensive, and have produced good noise results for supersonic jets where the large-scale instability noise dominates, as well as for the tone noise from a jet engine blade row. However, these linear equations do not predict the absolute magnitude of the noise; instead, only the relative magnitude is predicted. Also, the predicted disturbances do not modify the mean flow, removing a physical mechanism by which the amplitude of the disturbance may be controlled. Recent research for isolated airfoils indicates that this may not affect the solution greatly at low frequencies. The second method addresses some of the concerns raised by the LEE method. In this approach, called Very Large Eddy Simulation (VLES), the unsteady Reynolds averaged Navier-Stokes equations are solved directly using a high-accuracy computational aeroacoustics numerical scheme. With the addition of a two-equation turbulence model and the use of a relatively

  19. Development of Computational Aeroacoustics Code for Jet Noise and Flow Prediction

    NASA Astrophysics Data System (ADS)

    Keith, Theo G., Jr.; Hixon, Duane R.

    2002-07-01

    Accurate prediction of jet fan and exhaust plume flow and noise generation and propagation is very important in developing advanced aircraft engines that will pass current and future noise regulations. In jet fan flows as well as exhaust plumes, two major sources of noise are present: large-scale, coherent instabilities and small-scale turbulent eddies. In previous work for the NASA Glenn Research Center, three strategies have been explored in an effort to computationally predict the noise radiation from supersonic jet exhaust plumes. In order from the least expensive computationally to the most expensive computationally, these are: 1) Linearized Euler equations (LEE). 2) Very Large Eddy Simulations (VLES). 3) Large Eddy Simulations (LES). The first method solves the linearized Euler equations (LEE). These equations are obtained by linearizing about a given mean flow and neglecting viscous effects. In this way, the noise from large-scale instabilities can be found for a given mean flow. The linearized Euler equations are computationally inexpensive, and have produced good noise results for supersonic jets where the large-scale instability noise dominates, as well as for the tone noise from a jet engine blade row. However, these linear equations do not predict the absolute magnitude of the noise; instead, only the relative magnitude is predicted. Also, the predicted disturbances do not modify the mean flow, removing a physical mechanism by which the amplitude of the disturbance may be controlled. Recent research for isolated airfoils indicates that this may not affect the solution greatly at low frequencies. The second method addresses some of the concerns raised by the LEE method. In this approach, called Very Large Eddy Simulation (VLES), the unsteady Reynolds averaged Navier-Stokes equations are solved directly using a high-accuracy computational aeroacoustics numerical scheme. With the addition of a two-equation turbulence model and the use of a relatively
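
    The LEE strategy is easy to demonstrate in one dimension: linearizing about a uniform mean flow (rho, U, c) turns the equations into two decoupled advection equations for the acoustic characteristics R+/- = p' +/- rho*c*u', travelling at U + c and U - c. The sketch below propagates a Gaussian pressure pulse with first-order upwind differencing on a periodic domain; the mean state and grid are illustrative choices, and this is not the NASA solver.

      import numpy as np

      # 1-D linearized Euler about a uniform mean flow (rho, U, c): the
      # characteristics R+/- = p' +/- rho*c*u' advect at speeds U +/- c.
      rho, U, c = 1.2, 100.0, 340.0             # illustrative mean state (SI)
      nx, L = 400, 10.0
      dx = L / nx
      x = np.arange(nx) * dx
      dt = 0.4 * dx / (abs(U) + c)              # CFL-limited time step

      def advect(f, a):
          """First-order upwind advection of f at speed a (periodic domain)."""
          if a >= 0:
              return f - a * dt / dx * (f - np.roll(f, 1))
          return f - a * dt / dx * (np.roll(f, -1) - f)

      # Initial Gaussian pressure pulse, zero initial velocity perturbation.
      p = np.exp(-((x - L / 2) ** 2) / 0.05)
      u = np.zeros(nx)
      Rp, Rm = p + rho * c * u, p - rho * c * u

      for _ in range(300):
          Rp = advect(Rp, U + c)                # downstream-running wave
          Rm = advect(Rm, U - c)                # upstream-running wave
      p = 0.5 * (Rp + Rm)                       # reconstruct p' and u'
      u = (Rp - Rm) / (2 * rho * c)
      print("R+ pulse now near x =", x[np.argmax(np.abs(Rp))])
      print("R- pulse now near x =", x[np.argmax(np.abs(Rm))])

    The first-order upwind scheme is deliberately simple and quite diffusive; as the abstract notes, production aeroacoustics codes use high-accuracy schemes instead. The sketch only exposes the structure of the linearized problem and why it is computationally cheap: no turbulence model and no nonlinear terms.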

  20. Reading the Second Code: Mapping Epigenomes to Understand Plant Growth, Development, and Adaptation to the Environment

    PubMed Central

    2012-01-01

    We have entered a new era in agricultural and biomedical science made possible by remarkable advances in DNA sequencing technologies. The complete sequence of an individual’s set of chromosomes (collectively, its genome) provides a primary genetic code for what makes that individual unique, just as the contents of every personal computer reflect the unique attributes of its owner. But a second code, composed of “epigenetic” layers of information, affects the accessibility of the stored information and the execution of specific tasks. Nature’s second code is enigmatic and must be deciphered if we are to fully understand and optimize the genetic potential of crop plants. The goal of the Epigenomics of Plants International Consortium is to crack this second code, and ultimately master its control, to help catalyze a new green revolution. PMID:22751210