Science.gov

Sample records for adaptive predictive coding

  1. More About Vector Adaptive/Predictive Coding Of Speech

    NASA Technical Reports Server (NTRS)

    Jedrey, Thomas C.; Gersho, Allen

    1992-01-01

    Report presents additional information about digital speech-encoding and -decoding system described in "Vector Adaptive/Predictive Encoding of Speech" (NPO-17230). Summarizes development of vector adaptive/predictive coding (VAPC) system and describes basic functions of algorithm. Describes refinements introduced enabling receiver to cope with errors. VAPC algorithm implemented in integrated-circuit coding/decoding processors (codecs). VAPC and other codecs tested under variety of operating conditions. Tests designed to reveal effects of various background quiet and noisy environments and of poor telephone equipment. VAPC found competitive with and, in some respects, superior to other 4.8-kb/s codecs and other codecs of similar complexity.
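
    The closed-loop predict-and-quantize structure shared by APC-family coders such as VAPC can be sketched as follows. This is an illustrative toy (order-2 autocorrelation predictor, uniform residual quantizer), not the VAPC algorithm itself; all names and parameters are hypothetical.

```python
def autocorr(x, k):
    """Lag-k autocorrelation sum of a frame."""
    return sum(x[i] * x[i + k] for i in range(len(x) - k))

def lpc2(frame):
    """Order-2 predictor coefficients from the Yule-Walker equations,
    solved in closed form with Cramer's rule on the 2x2 system."""
    r0, r1, r2 = (autocorr(frame, k) for k in range(3))
    det = r0 * r0 - r1 * r1
    a1 = (r1 * r0 - r2 * r1) / det
    a2 = (r0 * r2 - r1 * r1) / det
    return a1, a2

def apc_encode(frame, a, step=0.05):
    """Closed-loop predictive coding: predict, quantize the residual, and
    update the predictor memory with the *reconstructed* sample so the
    decoder can stay in sync."""
    h = [0.0] * len(a)
    codes = []
    for x in frame:
        pred = sum(c * s for c, s in zip(a, h))
        q = round((x - pred) / step)
        codes.append(q)
        h = [pred + q * step] + h[:-1]
    return codes

def apc_decode(codes, a, step=0.05):
    """Mirror of the encoder loop: same predictor, same memory update."""
    h = [0.0] * len(a)
    out = []
    for q in codes:
        recon = sum(c * s for c, s in zip(a, h)) + q * step
        out.append(recon)
        h = [recon] + h[:-1]
    return out
```

    Because the encoder feeds reconstructed (not original) samples back into its predictor, encoder and decoder stay in lockstep and the per-sample reconstruction error is bounded by half the quantizer step, however good or bad the prediction is.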

  2. A trellis-searched APC (adaptive predictive coding) speech coder

    SciTech Connect

    Malone, K.T.; Fischer, T.R. (Dept. of Electrical and Computer Engineering)


    1990-01-01

    In this paper we formulate a speech coding system that incorporates trellis coded vector quantization (TCVQ) and adaptive predictive coding (APC). A method for optimizing the TCVQ codebooks is presented and experimental results concerning survivor path mergings are reported. Simulation results are given for encoding rates of 16 and 9.6 kbps for a variety of coder parameters. The quality of the encoded speech is deemed excellent at an encoding rate of 16 kbps and very good at 9.6 kbps. 13 refs., 2 figs., 4 tabs.

  3. Adaptive Prediction Error Coding in the Human Midbrain and Striatum Facilitates Behavioral Adaptation and Learning Efficiency.

    PubMed

    Diederen, Kelly M J; Spencer, Tom; Vestergaard, Martin D; Fletcher, Paul C; Schultz, Wolfram

    2016-06-01

    Effective error-driven learning benefits from scaling of prediction errors to reward variability. Such behavioral adaptation may be facilitated by neurons coding prediction errors relative to the standard deviation (SD) of reward distributions. To investigate this hypothesis, we required participants to predict the magnitude of upcoming reward drawn from distributions with different SDs. After each prediction, participants received a reward, yielding trial-by-trial prediction errors. In line with the notion of adaptive coding, BOLD response slopes in the Substantia Nigra/Ventral Tegmental Area (SN/VTA) and ventral striatum were steeper for prediction errors occurring in distributions with smaller SDs. SN/VTA adaptation was not instantaneous but developed across trials. Adaptive prediction error coding was paralleled by behavioral adaptation, as reflected by SD-dependent changes in learning rate. Crucially, increased SN/VTA and ventral striatal adaptation was related to improved task performance. These results suggest that adaptive coding facilitates behavioral adaptation and supports efficient learning. PMID:27181060
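
    The behavioral hypothesis, prediction errors scaled to reward SD, can be illustrated with a minimal delta-rule learner. This is a sketch under simplified assumptions, not the authors' analysis; the rule and parameters are hypothetical.

```python
def sd_scaled_learner(rewards, sd, lr=0.3):
    """Delta-rule value learner whose prediction error is divided by the
    reward SD, so the effective step size adapts to reward variability."""
    v = 0.0
    history = []
    for r in rewards:
        delta = (r - v) / sd   # SD-normalized (adaptive) prediction error
        v += lr * delta
        history.append(v)
    return history
```

    With a small SD the normalized error is larger, so the estimate converges faster, mirroring the SD-dependent learning rates reported in the abstract.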

  4. Adaptive inter color residual prediction for efficient red-green-blue intra coding

    NASA Astrophysics Data System (ADS)

    Jeong, Jinwoo; Choe, Yoonsik; Kim, Yong-Goo

    2011-07-01

    Intra coding of RGB video is important to many high-fidelity multimedia applications because video acquisition is mostly done in RGB space, and coding in a decorrelated color space loses its advantage in the high-quality range. To improve the compression performance of RGB video, this paper proposes an inter color prediction using adaptive weights. To make full use of the spatial as well as inter color correlation of an RGB video, the proposed scheme is based on a residual prediction approach: the prediction is performed on the transformed frequency components of the spatially predicted residual data of each color plane. With the aid of efficient prediction exploiting frequency-domain inter color residual correlation, the proposed scheme achieves up to 24.3% bitrate reduction compared to the common mode of the H.264/AVC High 4:4:4 Intra profile.
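
    The residual-prediction idea can be sketched with a single least-squares weight per block: predict one color plane's spatial-prediction residual from the G plane's residual and code only the remainder. This scalar weight is a stand-in for the paper's frequency-domain adaptive weights; the function name is hypothetical.

```python
def predict_residual(g_res, c_res):
    """Fit a least-squares weight w mapping the G-plane residual onto
    another plane's residual, and return w plus the remainder that
    would actually be coded."""
    num = sum(g * c for g, c in zip(g_res, c_res))
    den = sum(g * g for g in g_res)
    w = num / den if den else 0.0
    return w, [c - w * g for g, c in zip(g_res, c_res)]
```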

  5. Object-adaptive depth compensated inter prediction for depth video coding in 3D video system

    NASA Astrophysics Data System (ADS)

    Kang, Min-Koo; Lee, Jaejoon; Lim, Ilsoon; Ho, Yo-Sung

    2011-01-01

    Nowadays, the 3D video system using the MVD (multi-view video plus depth) data format is being actively studied. The system has many advantages with respect to virtual view synthesis, such as auto-stereoscopic functionality, but compression of the huge input data remains a problem. Efficient 3D data compression is therefore extremely important in the system, and the problems of low temporal consistency and low inter-view correlation should be resolved for efficient depth video coding. In this paper, we propose an object-adaptive depth-compensated inter prediction method to resolve these problems, in which the object-adaptive mean-depth difference between a current block, to be coded, and a reference block is compensated during inter prediction. In addition, unique properties of depth video are exploited to reduce the side information required for signaling the decoder to conduct the same process. To evaluate the coding performance, we implemented the proposed method in the MVC (multiview video coding) reference software, JMVC 8.2. Experimental results demonstrate that our method is especially efficient for depth videos estimated by DERS (depth estimation reference software), discussed in the MPEG 3DV coding group. The coding gain was up to 11.69% bit saving, and it increased further when evaluated on synthesized views of virtual viewpoints.
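
    The mean-depth compensation can be sketched as follows: before computing a block-matching cost, remove the mean offset between the current and reference blocks, and code only that offset as side information. This is an illustrative reduction of the method, with hypothetical names.

```python
def depth_compensated_sad(cur, ref):
    """SAD between two (flattened) depth blocks after compensating their
    mean-depth difference: the offset that object-adaptive depth
    compensation removes before inter prediction."""
    offset = (sum(cur) - sum(ref)) / len(cur)
    sad = sum(abs(c - (r + offset)) for c, r in zip(cur, ref))
    return sad, offset
```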

  6. A video coding scheme based on joint spatiotemporal and adaptive prediction.

    PubMed

    Jiang, Wenfei; Latecki, Longin Jan; Liu, Wenyu; Liang, Hui; Gorman, Ken

    2009-05-01

    We propose a video coding scheme that departs from traditional Motion Estimation/DCT frameworks and instead uses Karhunen-Loeve Transform (KLT)/Joint Spatiotemporal Prediction framework. In particular, a novel approach that performs joint spatial and temporal prediction simultaneously is introduced. It bypasses the complex H.26x interframe techniques and it is less computationally intensive. Because of the advantage of the effective joint prediction and the image-dependent color space transformation (KLT), the proposed approach is demonstrated experimentally to consistently lead to improved video quality, and in many cases to better compression rates and improved computational speed. PMID:19342337
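
    The image-dependent color transform at the heart of this scheme, a KLT computed from the data's own covariance, can be sketched for two channels, where the 2x2 covariance matrix diagonalizes in closed form via a rotation angle. This is an illustrative sketch, not the paper's implementation.

```python
import math

def klt_2ch(xs, ys):
    """Image-dependent KLT for two channels: diagonalize the 2x2
    covariance matrix in closed form and project the centred data onto
    its eigenvectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cxx = sum((x - mx) ** 2 for x in xs) / n
    cyy = sum((y - my) ** 2 for y in ys) / n
    cxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    # Rotation angle that zeroes the off-diagonal covariance term.
    theta = 0.5 * math.atan2(2 * cxy, cxx - cyy)
    c, s = math.cos(theta), math.sin(theta)
    u = [c * (x - mx) + s * (y - my) for x, y in zip(xs, ys)]
    v = [-s * (x - mx) + c * (y - my) for x, y in zip(xs, ys)]
    return u, v
```

    After the rotation the output channels are decorrelated, which is what makes the subsequent prediction and coding steps more effective than a fixed color transform.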

  7. A 2-D orientation-adaptive prediction filter in lifting structures for image coding.

    PubMed

    Gerek, Omer N; Cetin, A Enis

    2006-01-01

    Lifting-style implementations of wavelets are widely used in image coders. A two-dimensional (2-D) edge adaptive lifting structure, which is similar to Daubechies 5/3 wavelet, is presented. The 2-D prediction filter predicts the value of the next polyphase component according to an edge orientation estimator of the image. Consequently, the prediction domain is allowed to rotate +/-45 degrees in regions with diagonal gradient. The gradient estimator is computationally inexpensive with additional costs of only six subtractions per lifting instruction, and no multiplications are required. PMID:16435541
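
    The orientation-adaptive predict step can be sketched as follows: each odd-row pixel is predicted from the pair of even-row neighbours, vertical or at +/-45 degrees, whose difference (a cheap gradient estimate) is smallest. This is a sketch of the idea only, not the authors' lifting filter.

```python
def adaptive_lift_predict(img):
    """One vertical lifting 'predict' pass: for each odd-row pixel, pick
    the even-row neighbour pair (vertical, or +/-45 degrees) with the
    smallest absolute difference, then emit the prediction residual."""
    h, w = len(img), len(img[0])
    detail = []
    for r in range(1, h, 2):
        row = []
        for c in range(w):
            cands = []
            for dc in (-1, 0, 1):          # -45, vertical, +45 degrees
                cl, cr = c + dc, c - dc
                if r + 1 < h and 0 <= cl < w and 0 <= cr < w:
                    a, b = img[r - 1][cl], img[r + 1][cr]
                    cands.append((abs(a - b), (a + b) / 2.0))
            pred = min(cands)[1] if cands else img[r - 1][c]
            row.append(img[r][c] - pred)
        detail.append(row)
    return detail
```

    The direction choice costs only a few subtractions per pixel, matching the abstract's point that the gradient estimator needs no multiplications.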

  8. Simplified APC for Space Shuttle applications. [Adaptive Predictive Coding for speech transmission

    NASA Technical Reports Server (NTRS)

    Hutchins, S. E.; Batson, B. H.

    1975-01-01

    This paper describes an 8 kbps adaptive predictive digital speech transmission system which was designed for potential use in the Space Shuttle Program. The system was designed to provide good voice quality in the presence of both cabin noise on board the Shuttle and the anticipated bursty channel. Minimal increase in size, weight, and power over the current high data rate system was also a design objective.

  9. Telescope Adaptive Optics Code

    SciTech Connect

    Phillion, D.

    2005-07-28

    The Telescope AO Code has general adaptive optics capabilities plus specialized models for three telescopes with either adaptive optics or active optics systems. It has the capability to generate either single-layer or distributed Kolmogorov turbulence phase screens using the FFT. Missing low-order spatial frequencies are added using the Karhunen-Loeve expansion. The phase structure curve is extremely close to the theoretical. Secondly, it has the capability to simulate an adaptive optics control system. The default parameters are those of the Keck II adaptive optics system. Thirdly, it has a general wave optics capability to model the science camera halo due to scintillation from atmospheric turbulence and the telescope optics. Although this capability was implemented for the Gemini telescopes, the only default parameter specific to the Gemini telescopes is the primary mirror diameter. Finally, it has a model for the LSST active optics alignment strategy. This last model is highly specific to the LSST.

  10. Vector Adaptive/Predictive Encoding Of Speech

    NASA Technical Reports Server (NTRS)

    Chen, Juin-Hwey; Gersho, Allen

    1989-01-01

    Vector adaptive/predictive technique for digital encoding of speech signals yields decoded speech of very good quality after transmission at coding rate of 9.6 kb/s and of reasonably good quality at 4.8 kb/s. Requires 3 to 4 million multiplications and additions per second. Combines advantages of adaptive/predictive coding and of code-excited linear prediction, which yields speech of high quality but requires 600 million multiplications and additions per second at encoding rate of 4.8 kb/s. Vector adaptive/predictive coding technique bridges gaps in performance and complexity between adaptive/predictive coding and code-excited linear prediction.

  11. Adaptive differential pulse-code modulation with adaptive bit allocation

    NASA Astrophysics Data System (ADS)

    Frangoulis, E. D.; Yoshida, K.; Turner, L. F.

    1984-08-01

    Studies have been conducted regarding the possibility of obtaining good-quality speech at data rates in the range from 16 kbit/s to 32 kbit/s. The techniques considered are related to adaptive predictive coding (APC) and adaptive differential pulse-code modulation (ADPCM). At 16 kbit/s, adaptive transform coding (ATC) has also been used. The present investigation is concerned with a new method of speech coding. The described method employs adaptive bit allocation, similar to that used in adaptive transform coding, together with adaptive differential pulse-code modulation employing first-order prediction. The new method aims to improve speech quality over that obtainable with conventional ADPCM employing a fourth-order predictor. Attention is given to the ADPCM-AB system, the design of a subjective test, and the application of switched preemphasis to ADPCM.
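
    The bit-allocation idea can be sketched with a standard greedy rule that repeatedly grants one bit to the band with the largest remaining quantizer distortion (each added bit cuts a band's distortion by a factor of 4). This is a generic stand-in for the allocation used in ATC-style coders, not the paper's exact procedure.

```python
def allocate_bits(variances, total_bits):
    """Greedy adaptive bit allocation: at each step, give one bit to the
    band whose modelled distortion (variance / 4**bits) is largest."""
    bits = [0] * len(variances)
    for _ in range(total_bits):
        d = [v / (4 ** b) for v, b in zip(variances, bits)]
        bits[d.index(max(d))] += 1
    return bits
```

    High-variance bands receive more bits, approximating the log-variance allocation rule in closed form.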

  12. Telescope Adaptive Optics Code

    2005-07-28

    The Telescope AO Code has general adaptive optics capabilities plus specialized models for three telescopes with either adaptive optics or active optics systems. It has the capability to generate either single-layer or distributed Kolmogorov turbulence phase screens using the FFT. Missing low-order spatial frequencies are added using the Karhunen-Loeve expansion. The phase structure curve is extremely close to the theoretical. Secondly, it has the capability to simulate an adaptive optics control system. The default parameters are those of the Keck II adaptive optics system. Thirdly, it has a general wave optics capability to model the science camera halo due to scintillation from atmospheric turbulence and the telescope optics. Although this capability was implemented for the Gemini telescopes, the only default parameter specific to the Gemini telescopes is the primary mirror diameter. Finally, it has a model for the LSST active optics alignment strategy. This last model is highly specific to the LSST.

  13. Motion-adaptive compressive coded apertures

    NASA Astrophysics Data System (ADS)

    Harmany, Zachary T.; Oh, Albert; Marcia, Roummel; Willett, Rebecca

    2011-09-01

    This paper describes an adaptive compressive coded aperture imaging system for video based on motion-compensated video sparsity models. In particular, motion models based on optical flow and sparse deviations from optical flow (i.e. salient motion) can be used to (a) predict future video frames from previous compressive measurements, (b) perform reconstruction using efficient online convex programming techniques, and (c) adapt the coded aperture to yield higher reconstruction fidelity in the vicinity of this salient motion.

  14. Adaptive entropy coded subband coding of images.

    PubMed

    Kim, Y H; Modestino, J W

    1992-01-01

    The authors describe a design approach, called 2-D entropy-constrained subband coding (ECSBC), based upon recently developed 2-D entropy-constrained vector quantization (ECVQ) schemes. The output indexes of the embedded quantizers are further compressed by use of noiseless entropy coding schemes, such as Huffman or arithmetic codes, resulting in variable-rate outputs. Depending upon the specific configurations of the ECVQ and the ECPVQ over the subbands, many different types of SBC schemes can be derived within the generic 2-D ECSBC framework. Among these, the authors concentrate on three representative types of 2-D ECSBC schemes and provide relative performance evaluations. They also describe an adaptive buffer instrumented version of 2-D ECSBC, called 2-D ECSBC/AEC, for use with fixed-rate channels which completely eliminates buffer overflow/underflow problems. This adaptive scheme achieves performance quite close to the corresponding ideal 2-D ECSBC system. PMID:18296138

  15. Aeroacoustic Prediction Codes

    NASA Technical Reports Server (NTRS)

    Gliebe, P.; Mani, R.; Shin, H.; Mitchell, B.; Ashford, G.; Salamah, S.; Connell, S.; Huff, Dennis (Technical Monitor)

    2000-01-01

    This report describes work performed on Contract NAS3-27720, AoI 13, as part of the NASA Advanced Subsonic Transport (AST) Noise Reduction Technology effort. Computer codes were developed to provide quantitative prediction, design, and analysis capability for several aircraft engine noise sources. The objective was to provide improved, physics-based tools for exploration of noise-reduction concepts and understanding of experimental results. Methods and codes focused on fan broadband and 'buzz saw' noise and on low-emissions combustor noise, and complement work done by other contractors under the NASA AST program to develop methods and codes for fan harmonic tone noise and jet noise. The methods and codes developed and reported herein employ a wide range of approaches, from the strictly empirical to the completely computational, with some being semiempirical, analytical, and/or analytical/computational. Emphasis was on capturing the essential physics while still considering method or code utility as a practical design and analysis tool for everyday engineering use. Codes and prediction models were developed for: (1) an improved empirical correlation model for fan rotor exit flow mean and turbulence properties, for use in predicting broadband noise generated by rotor exit flow turbulence interaction with downstream stator vanes; (2) fan broadband noise models for rotor and stator/turbulence interaction sources including 3D effects, noncompact-source effects, directivity modeling, and extensions to the rotor supersonic tip-speed regime; (3) fan multiple-pure-tone in-duct sound pressure prediction methodology based on computational fluid dynamics (CFD) analysis; and (4) low-emissions combustor prediction methodology and computer code based on CFD and actuator disk theory. In addition, the relative importance of dipole and quadrupole source mechanisms was studied using direct CFD source computation for a simple cascade/gust interaction problem, and an empirical combustor

  16. Driver Code for Adaptive Optics

    NASA Technical Reports Server (NTRS)

    Rao, Shanti

    2007-01-01

    A special-purpose computer code for a deformable-mirror adaptive-optics control system transmits pixel-registered control from (1) a personal computer running software that generates the control data to (2) a circuit board with 128 digital-to-analog converters (DACs) that generate voltages to drive the deformable-mirror actuators. This program reads control-voltage codes from a text file, then sends them, via the computer's parallel port, to a circuit board with four AD5535 (or equivalent) chips. Whereas a similar prior computer program was capable of transmitting data to only one chip at a time, this program can send data to four chips simultaneously. This program is in the form of C-language code that can be compiled and linked into an adaptive-optics software system. The program as supplied includes source code for integration into the adaptive-optics software, documentation, and a component that provides a demonstration of loading DAC codes from a text file. On a standard Windows desktop computer, the software can update 128 channels in 10 ms. On Real-Time Linux with a digital I/O card, the software can update 1024 channels (8 boards in parallel) every 8 ms.

  17. Adaptive decoding of convolutional codes

    NASA Astrophysics Data System (ADS)

    Hueske, K.; Geldmacher, J.; Götze, J.

    2007-06-01

    Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi Decoder. On the one hand the Viterbi Decoder is an optimum maximum likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand the mathematical complexity of the algorithm only depends on the used code, not on the number of transmission errors. To reduce the complexity of the decoding process for good transmission conditions, an alternative syndrome based decoder is presented. The reduction of complexity is realized by two different approaches, the syndrome zero sequence deactivation and the path metric equalization. The two approaches enable an easy adaptation of the decoding complexity for different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.
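
    The syndrome idea can be sketched for a standard rate-1/2 convolutional code (generators 5 and 7 octal, not necessarily those used in the paper): a valid received sequence always produces an all-zero syndrome, so decoding effort can be skipped or reduced when the channel is good.

```python
def conv_encode(bits, g1=(1, 0, 1), g2=(1, 1, 1)):
    """Rate-1/2 convolutional encoder; interleaves the two output streams.
    Append two zero (flush) bits to the input to terminate the trellis."""
    mem = [0, 0]
    out = []
    for b in bits:
        state = [b] + mem
        out.append(sum(x * y for x, y in zip(g1, state)) % 2)
        out.append(sum(x * y for x, y in zip(g2, state)) % 2)
        mem = state[:2]
    return out

def syndrome(received, g1=(1, 0, 1), g2=(1, 1, 1)):
    """Syndrome former s = v1*g2 + v2*g1 (mod 2): identically zero iff
    the de-interleaved streams form a valid codeword."""
    v1, v2 = received[0::2], received[1::2]
    def conv(v, g):
        return [sum(g[j] * v[i - j] for j in range(len(g)) if 0 <= i - j < len(v)) % 2
                for i in range(len(v) + len(g) - 1)]
    return [(x + y) % 2 for x, y in zip(conv(v1, g2), conv(v2, g1))]
```

    The syndrome is zero because v1*g2 + v2*g1 = u*g1*g2 + u*g2*g1 = 0 (mod 2) for any information sequence u; any channel error leaves a nonzero fingerprint that a syndrome decoder can then work on.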

  18. AEST: Adaptive Eigenvalue Stability Code

    NASA Astrophysics Data System (ADS)

    Zheng, L.-J.; Kotschenreuther, M.; Waelbroeck, F.; van Dam, J. W.; Berk, H.

    2002-11-01

    An adaptive eigenvalue linear stability code is developed. The aim is, on one hand, to include non-ideal MHD effects in the global MHD stability calculation for both low- and high-n modes and, on the other hand, to resolve the numerical difficulty involving the MHD singularity on the rational surfaces at marginal stability. Our code follows, in part, the philosophy of DCON by abandoning relaxation methods based on radial finite element expansion in favor of an efficient shooting procedure with adaptive gridding. The δW criterion is replaced by the shooting procedure and a subsequent matrix eigenvalue problem. Since the technique of expanding a general solution into a summation of independent solutions is employed, the rank of the matrices involved is only a few hundred. This makes it easier to solve the eigenvalue problem with non-ideal MHD effects, such as FLR or even full kinetic effects, as well as plasma rotation, taken into account. To include kinetic effects, the approach of solving for the distribution function as a local eigenvalue ω problem, as in the GS2 code, will be employed in the future. Comparison of the ideal MHD version of the code with DCON, PEST, and GATO will be discussed. The non-ideal MHD version of the code will be employed to study, as an application, transport-barrier physics in tokamak discharges.

  19. Local intensity adaptive image coding

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.

    1989-01-01

    The objective of preprocessing for machine vision is to extract intrinsic target properties. The most important properties ordinarily are structure and reflectance. Illumination in space, however, is a significant problem as the extreme range of light intensity, stretching from deep shadow to highly reflective surfaces in direct sunlight, impairs the effectiveness of standard approaches to machine vision. To overcome this critical constraint, an image coding scheme is being investigated which combines local intensity adaptivity, image enhancement, and data compression. It is very effective under the highly variant illumination that can exist within a single frame or field of view, and it is very robust to noise at low illuminations. Some of the theory and salient features of the coding scheme are reviewed. Its performance is characterized in a simulated space application, and the research and development activities are described.

  20. Dopamine reward prediction error coding

    PubMed Central

    Schultz, Wolfram

    2016-01-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards—an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware. PMID:27069377

  1. SAGE - MULTIDIMENSIONAL SELF-ADAPTIVE GRID CODE

    NASA Technical Reports Server (NTRS)

    Davies, C. B.

    1994-01-01

    SAGE, Self Adaptive Grid codE, is a flexible tool for adapting and restructuring both 2D and 3D grids. Solution-adaptive grid methods are useful tools for efficient and accurate flow predictions. In supersonic and hypersonic flows, strong gradient regions such as shocks, contact discontinuities, shear layers, etc., require careful distribution of grid points to minimize grid error and produce accurate flow-field predictions. SAGE helps the user obtain more accurate solutions by intelligently redistributing (i.e. adapting) the original grid points based on an initial or interim flow-field solution. The user then computes a new solution using the adapted grid as input to the flow solver. The adaptive-grid methodology poses the problem in an algebraic, unidirectional manner for multi-dimensional adaptations. The procedure is analogous to applying tension and torsion spring forces proportional to the local flow gradient at every grid point and finding the equilibrium position of the resulting system of grid points. The multi-dimensional problem of grid adaption is split into a series of one-dimensional problems along the computational coordinate lines. The reduced one dimensional problem then requires a tridiagonal solver to find the location of grid points along a coordinate line. Multi-directional adaption is achieved by the sequential application of the method in each coordinate direction. The tension forces direct the redistribution of points to the strong gradient region. To maintain smoothness and a measure of orthogonality of grid lines, torsional forces are introduced that relate information between the family of lines adjacent to one another. The smoothness and orthogonality constraints are direction-dependent, since they relate only the coordinate lines that are being adapted to the neighboring lines that have already been adapted. Therefore the solutions are non-unique and depend on the order and direction of adaption. 
Non-uniqueness of the adapted grid is
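
    The one-dimensional redistribution step can be sketched as simple equidistribution against a gradient-based weight, a scalar stand-in for SAGE's tension-spring formulation; the weight form and names here are illustrative only.

```python
def adapt_grid(x, f, alpha=1.0):
    """Equidistribute a 1D grid against the weight w = 1 + alpha*|df/dx|
    (the 'tension' of the spring analogy): new points are placed where
    the cumulative weighted arc length is uniform."""
    # Piecewise-constant weight on each interval of the old grid.
    w = [1.0 + alpha * abs((f[i + 1] - f[i]) / (x[i + 1] - x[i]))
         for i in range(len(x) - 1)]
    cum = [0.0]
    for i, wi in enumerate(w):
        cum.append(cum[-1] + wi * (x[i + 1] - x[i]))
    n = len(x)
    targets = [cum[-1] * k / (n - 1) for k in range(n)]
    new_x, j = [], 0
    for t in targets:
        while j < len(w) - 1 and cum[j + 1] < t:
            j += 1
        frac = (t - cum[j]) / (cum[j + 1] - cum[j])
        new_x.append(x[j] + frac * (x[j + 1] - x[j]))
    return new_x
```

    Points migrate toward the strong-gradient region while the endpoints stay fixed, which is the behavior the tension forces produce in each coordinate direction of the full method.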

  2. Lossless Video Sequence Compression Using Adaptive Prediction

    NASA Technical Reports Server (NTRS)

    Li, Ying; Sayood, Khalid

    2007-01-01

    We present an adaptive lossless video compression algorithm based on predictive coding. The proposed algorithm exploits temporal, spatial, and spectral redundancies in a backward adaptive fashion with extremely low side information. The computational complexity is further reduced by using a caching strategy. We also study the relationship between the operational domain for the coder (wavelet or spatial) and the amount of temporal and spatial redundancy in the sequence being encoded. Experimental results show that the proposed scheme provides significant improvements in compression efficiencies.

  3. ICAN Computer Code Adapted for Building Materials

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.

    1997-01-01

    The NASA Lewis Research Center has been involved in developing composite micromechanics and macromechanics theories over the last three decades. These activities have resulted in several composite mechanics theories and structural analysis codes whose applications range from material behavior design and analysis to structural component response. One of these computer codes, the Integrated Composite Analyzer (ICAN), is designed primarily to address issues related to designing polymer matrix composites and predicting their properties - including hygral, thermal, and mechanical load effects. Recently, under a cost-sharing cooperative agreement with a Fortune 500 corporation, Master Builders Inc., ICAN was adapted to analyze building materials. The high costs and technical difficulties involved with the fabrication of continuous-fiber-reinforced composites sometimes limit their use. Particulate-reinforced composites can be thought of as a viable alternative. They are as easily processed to near-net shape as monolithic materials, yet have the improved stiffness, strength, and fracture toughness that is characteristic of continuous-fiber-reinforced composites. For example, particle-reinforced metal-matrix composites show great potential for a variety of automotive applications, such as disk brake rotors, connecting rods, cylinder liners, and other high-temperature applications. Building materials, such as concrete, can be thought of as one of the oldest materials in this category of multiphase, particle-reinforced materials. The adaptation of ICAN to analyze particle-reinforced composite materials involved the development of new micromechanics-based theories. A derivative of the ICAN code, ICAN/PART, was developed and delivered to Master Builders Inc. as a part of the cooperative activity.

  4. Perceptually-Based Adaptive JPEG Coding

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Rosenholtz, Ruth; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    An extension to the JPEG standard (ISO/IEC DIS 10918-3) allows spatial adaptive coding of still images. As with baseline JPEG coding, one quantization matrix applies to an entire image channel, but in addition the user may specify a multiplier for each 8 x 8 block, which multiplies the quantization matrix, yielding the new matrix for the block. MPEG 1 and 2 use much the same scheme, except there the multiplier changes only on macroblock boundaries. We propose a method for perceptual optimization of the set of multipliers. We compute the perceptual error for each block based upon DCT quantization error adjusted according to contrast sensitivity, light adaptation, and contrast masking, and pick the set of multipliers which yield maximally flat perceptual error over the blocks of the image. We investigate the bitrate savings due to this adaptive coding scheme and the relative importance of the different sorts of masking on adaptive coding.
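
    Under a toy error model in which a block's perceptual error grows linearly with its quantizer step, the "maximally flat perceptual error" criterion reduces to choosing each block's multiplier inversely proportional to its sensitivity. This is an illustrative sketch; the paper's DCT-domain model with light adaptation and contrast masking is far richer, and all names here are hypothetical.

```python
def flat_multipliers(sensitivities, target_err, qstep=16.0):
    """Per-block quantizer multipliers that equalize the modelled
    perceptual error err_i = s_i * (m_i * qstep) / 2 across blocks."""
    return [2 * target_err / (s * qstep) for s in sensitivities]

def quantize_block(block, qstep, mult):
    """Scalar-quantize a block of DCT coefficients with the block's
    multiplied step size, as in the JPEG Part 3 extension."""
    return [[round(c / (qstep * mult)) for c in row] for row in block]
```

    Blocks with strong masking (low sensitivity) get large multipliers, so bits flow to the blocks where errors would be most visible.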

  5. Adaptive prediction trees for image compression.

    PubMed

    Robinson, John A

    2006-08-01

    This paper presents a complete general-purpose method for still-image compression called adaptive prediction trees. Efficient lossy and lossless compression of photographs, graphics, textual, and mixed images is achieved by ordering the data in a multicomponent binary pyramid, applying an empirically optimized nonlinear predictor, exploiting structural redundancies between color components, then coding with hex-trees and adaptive runlength/Huffman coders. Color palettization and order statistics prefiltering are applied adaptively as appropriate. Over a diverse image test set, the method outperforms standard lossless and lossy alternatives. The competing lossy alternatives use block transforms and wavelets in well-studied configurations. A major result of this paper is that predictive coding is a viable and sometimes preferable alternative to these methods. PMID:16900671

  6. Structural coding versus free-energy predictive coding.

    PubMed

    van der Helm, Peter A

    2016-06-01

    Focusing on visual perceptual organization, this article contrasts the free-energy (FE) version of predictive coding (a recent Bayesian approach) to structural coding (a long-standing representational approach). Both use free-energy minimization as metaphor for processing in the brain, but their formal elaborations of this metaphor are fundamentally different. FE predictive coding formalizes it by minimization of prediction errors, whereas structural coding formalizes it by minimization of the descriptive complexity of predictions. Here, both sides are evaluated. A conclusion regarding competence is that FE predictive coding uses a powerful modeling technique, but that structural coding has more explanatory power. A conclusion regarding performance is that FE predictive coding, though more detailed in its account of neurophysiological data, provides a less compelling cognitive architecture than that of structural coding, which, for instance, supplies formal support for the computationally powerful role it attributes to neuronal synchronization. PMID:26407895

  7. Adapting hierarchical bidirectional inter prediction on a GPU-based platform for 2D and 3D H.264 video coding

    NASA Astrophysics Data System (ADS)

    Rodríguez-Sánchez, Rafael; Martínez, José Luis; Cock, Jan De; Fernández-Escribano, Gerardo; Pieters, Bart; Sánchez, José L.; Claver, José M.; Van de Walle, Rik

    2013-12-01

    The H.264/AVC video coding standard introduces some improved tools in order to increase compression efficiency. Moreover, the multi-view extension of H.264/AVC, called H.264/MVC, adopts many of them. Among the new features, variable block-size motion estimation is one which contributes to high coding efficiency. Furthermore, it defines a different prediction structure that includes hierarchical bidirectional pictures, outperforming traditional Group of Pictures patterns in both scenarios: single-view and multi-view. However, these video coding techniques have high computational complexity. Several techniques have been proposed in the literature over the last few years which are aimed at accelerating the inter prediction process, but there are no works focusing on bidirectional prediction or hierarchical prediction. In this article, with the emergence of many-core processors or accelerators, a step forward is taken towards an implementation of an H.264/AVC and H.264/MVC inter prediction algorithm on a graphics processing unit. The results show a negligible rate distortion drop with a time reduction of up to 98% for the complete H.264/AVC encoder.

  8. Adaptive predictive multiplicative autoregressive model for medical image compression.

    PubMed

    Chen, Z D; Chang, R F; Kuo, W J

    1999-02-01

    In this paper, an adaptive predictive multiplicative autoregressive (APMAR) method is proposed for lossless medical image coding. The adaptive predictor is used for improving the prediction accuracy of encoded image blocks in our proposed method. Each block is first adaptively predicted by one of the seven predictors of the JPEG lossless mode and a local mean predictor. It is clear that the prediction accuracy of an adaptive predictor is better than that of a fixed predictor. Then the residual values are processed by the MAR model with Huffman coding. Comparisons with other methods [MAR, SMAR, adaptive JPEG (AJPEG)] on a series of test images show that our method is suitable for reversible medical image compression. PMID:10232675
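
    The per-block selection step can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the exact form of the local mean predictor is an assumption.

    ```python
    # Illustrative sketch of per-block adaptive predictor selection among the
    # seven JPEG lossless-mode predictors plus a local mean predictor.
    # The local-mean form (average of the three causal neighbours) is an assumption.

    def predictors(a, b, c):
        """a = left, b = above, c = upper-left neighbour of the current pixel."""
        return [
            a,                 # P1
            b,                 # P2
            c,                 # P3
            a + b - c,         # P4 (planar)
            a + (b - c) // 2,  # P5
            b + (a - c) // 2,  # P6
            (a + b) // 2,      # P7
            (a + b + c) // 3,  # local mean (assumed form)
        ]

    def best_predictor(block):
        """Index of the predictor minimising the sum of absolute residuals.
        `block` is a 2-D list of pixel values; the first row/column are skipped."""
        costs = [0] * 8
        for i in range(1, len(block)):
            for j in range(1, len(block[0])):
                a, b, c = block[i][j-1], block[i-1][j], block[i-1][j-1]
                x = block[i][j]
                for k, p in enumerate(predictors(a, b, c)):
                    costs[k] += abs(x - p)
        return min(range(8), key=lambda k: costs[k])
    ```

    On a smooth planar block, the predictor a + b - c reproduces every pixel exactly, which is why per-block adaptive selection can beat any single fixed predictor.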

  9. TranAir: A full-potential, solution-adaptive, rectangular grid code for predicting subsonic, transonic, and supersonic flows about arbitrary configurations. Theory document

    NASA Technical Reports Server (NTRS)

    Johnson, F. T.; Samant, S. S.; Bieterman, M. B.; Melvin, R. G.; Young, D. P.; Bussoletti, J. E.; Hilmes, C. L.

    1992-01-01

    A new computer program, called TranAir, for analyzing complex configurations in transonic flow (with subsonic or supersonic freestream) was developed. This program provides accurate and efficient simulations of nonlinear aerodynamic flows about arbitrary geometries with the ease and flexibility of a typical panel method program. The numerical method implemented in TranAir is described. The method solves the full potential equation subject to a set of general boundary conditions and can handle regions with differing total pressure and temperature. The boundary value problem is discretized using the finite element method on a locally refined rectangular grid. The grid is automatically constructed by the code and is superimposed on the boundary described by networks of panels; thus no surface fitted grid generation is required. The nonlinear discrete system arising from the finite element method is solved using a preconditioned Krylov subspace method embedded in an inexact Newton method. The solution is obtained on a sequence of successively refined grids which are either constructed adaptively based on estimated solution errors or are predetermined based on user inputs. Many results obtained by using TranAir to analyze aerodynamic configurations are presented.

  10. Adaptive Dynamic Event Tree in RAVEN code

    SciTech Connect

    Alfonsi, Andrea; Rabiti, Cristian; Mandelli, Diego; Cogliati, Joshua Joseph; Kinoshita, Robert Arthur

    2014-11-01

    RAVEN is a software tool focused on performing statistical analysis of stochastic dynamic systems. RAVEN has been designed in a highly modular and pluggable way in order to enable easy integration of different programming languages (i.e., C++, Python) and coupling with other applications (system codes). Among the several capabilities currently present in RAVEN are five different sampling strategies: Monte Carlo, Latin Hypercube, Grid, Adaptive, and Dynamic Event Tree (DET) sampling methodologies. The scope of this paper is to present a new sampling approach, currently under definition and implementation: an evolution of the DET methodology.

  11. Repetition suppression and its contextual determinants in predictive coding.

    PubMed

    Auksztulewicz, Ryszard; Friston, Karl

    2016-07-01

    This paper presents a review of theoretical and empirical work on repetition suppression in the context of predictive coding. Predictive coding is a neurobiologically plausible scheme explaining how biological systems might perform perceptual inference and learning. From this perspective, repetition suppression is a manifestation of minimising prediction error through adaptive changes in predictions about the content and precision of sensory inputs. Simulations of artificial neural hierarchies provide a principled way of understanding how repetition suppression - at different time scales - can be explained in terms of inference and learning implemented under predictive coding. This formulation of repetition suppression is supported by results of numerous empirical studies of repetition suppression and its contextual determinants. PMID:26861557

  12. A Predictive Analysis Approach to Adaptive Testing.

    ERIC Educational Resources Information Center

    Kirisci, Levent; Hsu, Tse-Chi

    The predictive analysis approach to adaptive testing originated in the idea of statistical predictive analysis suggested by J. Aitchison and I.R. Dunsmore (1975). The adaptive testing model proposed is based on parameter-free predictive distribution. Aitchison and Dunsmore define statistical prediction analysis as the use of data obtained from an…

  13. Predictive coding as a model of cognition.

    PubMed

    Spratling, M W

    2016-08-01

    Previous work has shown that predictive coding can provide a detailed explanation of a very wide range of low-level perceptual processes. It is also widely believed that predictive coding can account for high-level, cognitive, abilities. This article provides support for this view by showing that predictive coding can simulate phenomena such as categorisation, the influence of abstract knowledge on perception, recall and reasoning about conceptual knowledge, context-dependent behavioural control, and naive physics. The particular implementation of predictive coding used here (PC/BC-DIM) has previously been used to simulate low-level perceptual behaviour and the neural mechanisms that underlie them. This algorithm thus provides a single framework for modelling both perceptual and cognitive brain function. PMID:27118562

  14. Design of Pel Adaptive DPCM coding based upon image partition

    NASA Astrophysics Data System (ADS)

    Saitoh, T.; Harashima, H.; Miyakawa, H.

    1982-01-01

    A Pel Adaptive DPCM coding system based on image partition is developed which possesses coding characteristics superior to those of the Block Adaptive DPCM coding system. This method uses multiple DPCM coding loops and nonhierarchical cluster analysis. It is found that the coding performances of the Pel Adaptive DPCM coding method differ depending on the subject images. The Pel Adaptive DPCM designed using these methods is shown to yield a maximum performance advantage of 2.9 dB for the Girl and Couple images and 1.5 dB for the Aerial image, although no advantage was obtained for the moon image. These results show an improvement over the optimally designed Block Adaptive DPCM coding method proposed by Saito et al. (1981).
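
    As background, the DPCM coding loop that both the Block Adaptive and Pel Adaptive variants build on can be sketched as follows; this is a minimal single-predictor illustration (previous-sample prediction, uniform quantizer), whereas the paper switches among multiple such loops per pel.

    ```python
    # Minimal DPCM encode/decode sketch (illustrative, not the paper's system):
    # previous-sample prediction with a uniform quantizer inside the coding loop,
    # so encoder and decoder predict from the same reconstructed values.

    def dpcm_encode(samples, step=4):
        """Return quantizer indices of the prediction errors."""
        recon_prev = 0
        indices = []
        for x in samples:
            e = x - recon_prev                  # prediction error
            q = round(e / step)                 # uniform quantization
            indices.append(q)
            recon_prev = recon_prev + q * step  # decoder-side reconstruction
        return indices

    def dpcm_decode(indices, step=4):
        recon, prev = [], 0
        for q in indices:
            prev = prev + q * step
            recon.append(prev)
        return recon
    ```

    The reconstruction error stays bounded by roughly half the quantizer step because the encoder predicts from its own reconstructed signal rather than from the original.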

  15. An Adaptive Motion Estimation Scheme for Video Coding

    PubMed Central

    Gao, Yuan; Jia, Kebin

    2014-01-01

    The unsymmetrical-cross multihexagon-grid search (UMHexagonS) is one of the best fast Motion Estimation (ME) algorithms in video encoding software. It achieves excellent coding performance by using a hybrid block matching search pattern and multiple initial search point predictors, at the cost of increased ME computational complexity. Reducing the time consumed by ME is one of the key factors in improving video coding efficiency. In this paper, we propose an adaptive motion estimation scheme to further reduce the calculation redundancy of UMHexagonS. First, new motion estimation search patterns are designed according to the statistical results of motion vector (MV) distribution information. Then, an MV distribution prediction method is designed, covering prediction of both the size and the direction of the MV. Finally, according to the MV distribution prediction results, self-adaptive subregional searching is achieved with the new search patterns. Experimental results show that more than 50% of total search points are eliminated compared to the UMHexagonS algorithm in JM 18.4 of H.264/AVC. As a result, the proposed scheme can save up to 20.86% of ME time while the rate-distortion performance is not compromised. PMID:24672313
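
    For reference, the exhaustive block-matching search that fast ME algorithms such as UMHexagonS approximate can be sketched as follows; the SAD cost is the quantity the hybrid search patterns evaluate at far fewer candidate points (illustrative code, not taken from the JM reference software).

    ```python
    # Baseline full-search block matching (illustrative). Fast algorithms such
    # as UMHexagonS prune this candidate set; the SAD cost function is the same.

    def sad(frame_a, frame_b, ax, ay, bx, by, n):
        """Sum of absolute differences between the n-by-n block of frame_a at
        (ay, ax) and the n-by-n block of frame_b at (by, bx)."""
        return sum(abs(frame_a[ay+i][ax+j] - frame_b[by+i][bx+j])
                   for i in range(n) for j in range(n))

    def full_search(cur, ref, bx, by, n, r):
        """Best motion vector (dx, dy) within a +/-r search window for the
        n-by-n block of `cur` at (by, bx), matched against `ref`."""
        best, best_mv = None, (0, 0)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                x, y = bx + dx, by + dy
                if 0 <= x <= len(ref[0]) - n and 0 <= y <= len(ref) - n:
                    cost = sad(cur, ref, bx, by, x, y, n)
                    if best is None or cost < best:
                        best, best_mv = cost, (dx, dy)
        return best_mv, best
    ```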

  16. Fast prediction algorithm for multiview video coding

    NASA Astrophysics Data System (ADS)

    Abdelazim, Abdelrahman; Mein, Stephen James; Varley, Martin Roy; Ait-Boudaoud, Djamel

    2013-03-01

    The H.264/multiview video coding (MVC) standard has been developed to enable efficient coding for three-dimensional and multiple viewpoint video sequences. The inter-view statistical dependencies are utilized and an inter-view prediction is employed to provide more efficient coding; however, this increases the overall encoding complexity. Motion homogeneity is exploited here to selectively enable inter-view prediction, and to reduce complexity in the motion estimation (ME) and the mode selection processes. This has been accomplished by defining situations that relate macro-blocks' motion characteristics to the mode selection and the inter-view prediction processes. When comparing the proposed algorithm to the H.264/MVC reference software and other recent work, the experimental results demonstrate a significant reduction in ME time while maintaining similar rate-distortion performance.

  17. Adaptive lifting scheme with sparse criteria for image coding

    NASA Astrophysics Data System (ADS)

    Kaaniche, Mounir; Pesquet-Popescu, Béatrice; Benazza-Benyahia, Amel; Pesquet, Jean-Christophe

    2012-12-01

    Lifting schemes (LS) were found to be efficient tools for image coding purposes. Since LS-based decompositions depend on the choice of the prediction/update operators, many research efforts have been devoted to the design of adaptive structures. The most commonly used approaches optimize the prediction filters by minimizing the variance of the detail coefficients. In this article, we investigate techniques for optimizing sparsity criteria by focusing on the use of an ℓ1 criterion instead of an ℓ2 one. Since the output of a prediction filter may be used as an input for the other prediction filters, we then propose to optimize such a filter by minimizing a weighted ℓ1 criterion related to the global rate-distortion performance. More specifically, it will be shown that the optimization of the diagonal prediction filter depends on the optimization of the other prediction filters and vice versa. Related to this fact, we propose to jointly optimize the prediction filters by using an algorithm that alternates between the optimization of the filters and the computation of the weights. Experimental results show the benefits which can be drawn from the proposed optimization of the lifting operators.
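
    The switch from an ℓ2 to an ℓ1 design criterion can be illustrated on a one-dimensional lifting step; the two-tap predict filter and the grid search below are assumptions for illustration, not the authors' 2-D scheme.

    ```python
    # One-dimensional lifting sketch (illustrative): odd samples are predicted
    # from their even neighbours, and the predict weight is chosen to minimise
    # an l1 criterion on the detail coefficients instead of the classical
    # l2 (variance) criterion.

    def details(signal, alpha):
        """Details of one predict step: d[k] = x[2k+1] - alpha*(x[2k] + x[2k+2])."""
        return [signal[2*k+1] - alpha * (signal[2*k] + signal[2*k+2])
                for k in range((len(signal) - 1) // 2)]

    def l1_optimal_alpha(signal, grid=None):
        """Grid-search the predict weight minimising the l1 norm of the details."""
        if grid is None:
            grid = [i / 100 for i in range(101)]  # alpha in [0, 1]
        return min(grid, key=lambda a: sum(abs(d) for d in details(signal, a)))
    ```

    On a locally linear signal the ℓ1-optimal weight is 0.5, recovering the classical 5/3 predict step; on sparse or impulsive signals the ℓ1 criterion departs from the variance-optimal choice, which is the effect the paper exploits.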

  18. Adaptive vehicle motion estimation and prediction

    NASA Astrophysics Data System (ADS)

    Zhao, Liang; Thorpe, Chuck E.

    1999-01-01

    Accurate motion estimation and reliable maneuver prediction enable an automated car to react quickly and correctly to the rapid maneuvers of the other vehicles, and so allow safe and efficient navigation. In this paper, we present a car tracking system which provides motion estimation, maneuver prediction and detection of the tracked car. The three strategies employed - adaptive motion modeling, adaptive data sampling, and adaptive model switching probabilities - result in an adaptive interacting multiple model algorithm (AIMM). The experimental results on simulated and real data demonstrate that our tracking system is reliable, flexible, and robust. The adaptive tracking makes the system intelligent and useful in various autonomous driving tasks.

  19. Adaptation of gasdynamical codes to the modern supercomputers

    NASA Astrophysics Data System (ADS)

    Kaygorodov, P. V.

    2016-02-01

    During the last decades, supercomputer architecture has changed significantly, and now it is impossible to achieve peak performance without adapting numerical codes to modern supercomputer architectures. In this paper, I want to share my experience in adapting astrophysical gasdynamical numerical codes to multi-node computing clusters with multi-CPU and multi-GPU nodes.

  20. Geometric prediction structure for multiview video coding

    NASA Astrophysics Data System (ADS)

    Lee, Seok; Wey, Ho-Cheon; Park, Du-Sik

    2010-02-01

    One of the critical issues for successful 3D video service is how to compress the huge amount of multi-view video data efficiently. In this paper, we describe a geometric prediction structure for multi-view video coding. By exploiting the geometric relations between camera poses, we can form prediction pairs which maximize the spatial correlation of each view. To analyze the relationship of the camera poses, we defined the mathematical view center and view distance in 3D space. We calculated the virtual center pose by computing the mean rotation matrix and mean translation vector. We propose an algorithm for establishing the geometric prediction structure based on view center and view distance. Using this prediction structure, inter-view prediction is performed on the camera pairs of maximum spatial correlation. Our prediction structure also considers scalability in coding and transmitting the multi-view videos. Experiments are done using JMVC (Joint Multiview Video Coding) software on MPEG-FTV test sequences. Overall performance of the proposed prediction structure is measured in PSNR and subjective image quality measures such as PSPNR.
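
    The view-centre and view-distance computation can be sketched as follows, using camera translation vectors only; the paper additionally averages rotation matrices, which is omitted here.

    ```python
    # Illustrative sketch of the view-centre idea: with each camera pose reduced
    # to a 3-D translation vector, the virtual centre is the mean translation,
    # and views are ranked for inter-view prediction by distance to that centre.

    def view_center(translations):
        """Mean translation vector over all camera poses."""
        n = len(translations)
        return tuple(sum(t[i] for t in translations) / n for i in range(3))

    def view_distance(t, center):
        """Euclidean distance from one camera translation to the view centre."""
        return sum((a - b) ** 2 for a, b in zip(t, center)) ** 0.5
    ```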

  1. Predictive Control of Speededness in Adaptive Testing

    ERIC Educational Resources Information Center

    van der Linden, Wim J.

    2009-01-01

    An adaptive testing method is presented that controls the speededness of a test using predictions of the test takers' response times on the candidate items in the pool. Two different types of predictions are investigated: posterior predictions given the actual response times on the items already administered and posterior predictions that use the…

  2. Adaptation of bit error rate by coding

    NASA Astrophysics Data System (ADS)

    Marguinaud, A.; Sorton, G.

    1984-07-01

    The use of coding in spacecraft wideband communication to reduce transmission power, save bandwidth, and relax antenna specifications was studied. The feasibility of a coder/decoder operating at a bit rate of 10 Mb/sec with a raw bit error rate (BER) of 0.001 and an output BER of 0.000000001 is demonstrated. Single-level block code protection and two-level coding protection are examined. A single-level BCH code with a 5-error correction capacity, 16% redundancy, and interleaving depth 4, giving a coded block of 1020 bits, is simple to implement, but has BER = 0.000000007. A single-level BCH code with a 7-error correction capacity and 12% redundancy meets specifications, but is more difficult to implement. Two-level protection with 9% BCH outer and 10% BCH inner codes, both levels with a 3-error correction capacity and 8% redundancy for a coded block of 7050 bits, is the most complex, but offers performance advantages.
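
    Under the idealised assumption of independent bit errors (the independence that interleaving is designed to approximate on bursty channels), the block-failure probability behind these trade-offs is a binomial tail; the exact output BER also depends on how decoder failures map to bit errors.

    ```python
    # Back-of-the-envelope block-decoding check (not the paper's exact model):
    # probability that more than t channel errors fall in an n-bit code block,
    # assuming independent bit errors at rate p. A t-error-correcting BCH code
    # fails exactly when this event occurs.

    from math import comb

    def block_error_prob(n, t, p):
        """P(more than t errors in n bits) for i.i.d. bit error rate p."""
        ok = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1))
        return 1.0 - ok
    ```

    For n = 1020 and p = 0.001, raising the correction capacity from t = 5 to t = 7 shrinks the block-failure probability by roughly an order of magnitude, which is the direction of the trade-off the abstract describes.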

  3. Predictive coding of music--brain responses to rhythmic incongruity.

    PubMed

    Vuust, Peter; Ostergaard, Leif; Pallesen, Karen Johanne; Bailey, Christopher; Roepstorff, Andreas

    2009-01-01

    During the last decades, models of music processing in the brain have mainly discussed the specificity of brain modules involved in processing different musical components. We argue that predictive coding offers an explanatory framework for functional integration in musical processing. Further, we provide empirical evidence for such a network in the analysis of event-related MEG-components to rhythmic incongruence in the context of strong metric anticipation. This is seen in a mismatch negativity (MMNm) and a subsequent P3am component, which have the properties of an error term and a subsequent evaluation in a predictive coding framework. There were both quantitative and qualitative differences in the evoked responses in expert jazz musicians compared with rhythmically unskilled non-musicians. We propose that these differences trace a functional adaptation and/or a genetic pre-disposition in experts which allows for a more precise rhythmic prediction. PMID:19054506

  4. An Adaptive Code for Radial Stellar Model Pulsations

    NASA Astrophysics Data System (ADS)

    Buchler, J. Robert; Kolláth, Zoltán; Marom, Ariel

    1997-09-01

    We describe an implicit 1-D adaptive mesh hydrodynamics code that is specially tailored for radial stellar pulsations. In the Lagrangian limit the code reduces to the well-tested Fraley scheme. The code has the useful feature that unwanted, long lasting transients can be avoided by smoothly switching on the adaptive mesh features starting from the Lagrangian code. Thus, a limit cycle pulsation that can readily be computed with the relaxation method of Stellingwerf will converge in a few tens of pulsation cycles when put into the adaptive mesh code. The code has been checked with two shock problems, viz. Noh and Sedov, for which analytical solutions are known, and it has been found to be both accurate and stable. Superior results were obtained through the solution of the total energy (gravitational + kinetic + internal) equation rather than that of the internal energy only.

  5. Results of investigation of adaptive speech codes

    NASA Astrophysics Data System (ADS)

    Nekhayev, A. L.; Pertseva, V. A.; Sitnyakovskiy, I. V.

    1984-06-01

    A search for ways of increasing the effectiveness of transmitting speech signals in digital form led to the appearance of various encoding methods that reduce the redundancy of specific properties of the speech signal. It is customary to divide speech codes into two large classes: codes of signal parameters (or vocoders), and codes of the signal form (CSF). In telephony, preference is given to the second class of systems, which maintains naturalness of sound. The class of CSF expanded considerably because of the development of codes based on the frequency representation of a signal. The greatest interest is given to such methods of encoding as pulse-code modulation (PCM), differential PCM (DPCM), and delta modulation (DM). However, developers of digital transmission systems find it difficult to compile a complete pattern of the applicability of specific types of codes. The best known versions of the codes are evaluated by means of subjective-statistical measurements of their characteristics. The results obtained help developers to draw conclusions regarding the applicability of the codes considered in various communication systems.

  6. A novel bit-wise adaptable entropy coding technique

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Klimesh, M.

    2001-01-01

    We present a novel entropy coding technique which is adaptable in that each bit to be encoded may have an associated probability estimate which depends on previously encoded bits. The technique may have advantages over arithmetic coding. The technique can achieve arbitrarily small redundancy and admits a simple and fast decoder.
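
    The kind of adaptivity described, a probability estimate per bit conditioned on previously encoded bits, can be illustrated by measuring the ideal adaptive code length of a bit sequence; this sketches only the modelling side, not the coding technique itself.

    ```python
    # Minimal adaptive bit model (illustrative): each bit's probability estimate
    # is conditioned on the preceding bits via per-context counts, and the ideal
    # code length -log2(p) is accumulated. Any entropy coder that approaches
    # this length (the abstract's technique, or arithmetic coding) pays roughly
    # this many bits in total.

    from math import log2

    def code_length(bits, order=1):
        """Ideal adaptive code length with Laplace-smoothed per-context counts."""
        counts = {}
        total = 0.0
        ctx = (0,) * order
        for b in bits:
            zeros, ones = counts.get(ctx, (1, 1))          # Laplace prior
            p = (ones if b else zeros) / (zeros + ones)    # estimate for this bit
            total += -log2(p)
            counts[ctx] = (zeros + (b == 0), ones + (b == 1))
            ctx = ctx[1:] + (b,)                           # slide the context
        return total
    ```

    A strictly alternating 100-bit sequence costs well under 100 bits once the model adapts, whereas a memoryless coder at p = 0.5 would spend exactly 1 bit per bit.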

  7. JPEG 2000 coding of image data over adaptive refinement grids

    NASA Astrophysics Data System (ADS)

    Gamito, Manuel N.; Dias, Miguel S.

    2003-06-01

    An extension of the JPEG 2000 standard is presented for non-conventional images resulting from an adaptive subdivision process. Samples, generated through adaptive subdivision, can have different sizes, depending on the amount of subdivision that was locally introduced in each region of the image. The subdivision principle allows each individual sample to be recursively subdivided into sets of four progressively smaller samples. Image datasets generated through adaptive subdivision find application in Computational Physics where simulations of natural processes are often performed over adaptive grids. It is also found that compression gains can be achieved for non-natural imagery, like text or graphics, if they first undergo an adaptive subdivision process. The representation of adaptive subdivision images is performed by first coding the subdivision structure into the JPEG 2000 bitstream, in a lossless manner, followed by the entropy-coded and quantized transform coefficients. Due to the irregular distribution of sample sizes across the image, the wavelet transform must be applied on irregular image subsets that are nested across all the resolution levels. Using the conventional JPEG 2000 coding standard, adaptive subdivision images would first have to be upsampled to the smallest sample size in order to attain a uniform resolution. The proposed method for coding adaptive subdivision images is shown to perform better than conventional JPEG 2000 for medium to high bitrates.

  8. Adaptive Coding and Modulation Scheme for Ka Band Space Communications

    NASA Astrophysics Data System (ADS)

    Lee, Jaeyoon; Yoon, Dongweon; Lee, Wooju

    2010-06-01

    Rain attenuation can seriously reduce the availability of Ka-band space communication links. To reduce the effect of rain attenuation on the error performance of space communications in Ka band, an adaptive coding and modulation (ACM) scheme is required. In this paper, to achieve reliable telemetry data transmission, we propose adaptive coding and modulation levels using the turbo code recommended by the Consultative Committee for Space Data Systems (CCSDS) and various modulation methods (QPSK, 8PSK, 4+12 APSK, and 4+12+16 APSK) adopted in Digital Video Broadcasting - Satellite 2 (DVB-S2).
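
    The selection logic of an ACM scheme can be sketched as a threshold lookup. The SNR thresholds and spectral efficiencies below are placeholders, not CCSDS or DVB-S2 figures.

    ```python
    # Illustrative ACM lookup (assumed thresholds, not standard values):
    # pick the highest-throughput modulation/coding pair whose required SNR is
    # met, falling back to the most robust pair under heavy rain attenuation.

    # (modulation, code_rate, required_snr_dB, spectral_efficiency)
    ACM_TABLE = [
        ("QPSK",          1/2,  1.0, 1.0),
        ("8PSK",          2/3,  6.5, 2.0),
        ("4+12 APSK",     3/4, 10.0, 3.0),
        ("4+12+16 APSK",  5/6, 13.5, 10/3),
    ]

    def select_mode(snr_db):
        """Highest-efficiency mode supported at the current link SNR."""
        feasible = [m for m in ACM_TABLE if snr_db >= m[2]]
        if not feasible:
            return ACM_TABLE[0]  # most robust mode as a floor
        return max(feasible, key=lambda m: m[3])
    ```

    As rain attenuation pushes the link SNR down, the lookup steps back through progressively more robust modes, trading throughput for availability.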

  9. Generating code adapted for interlinking legacy scalar code and extended vector code

    DOEpatents

    Gschwind, Michael K

    2013-06-04

    Mechanisms for intermixing code are provided. Source code is received for compilation using an extended Application Binary Interface (ABI) that extends a legacy ABI and uses a different register configuration than the legacy ABI. First compiled code is generated based on the source code, the first compiled code comprising code for accommodating the difference in register configurations used by the extended ABI and the legacy ABI. The first compiled code and second compiled code are intermixed to generate intermixed code, the second compiled code being compiled code that uses the legacy ABI. The intermixed code comprises at least one call instruction that is one of a call from the first compiled code to the second compiled code or a call from the second compiled code to the first compiled code. The code for accommodating the difference in register configurations is associated with the at least one call instruction.

  10. Visual mismatch negativity: a predictive coding view.

    PubMed

    Stefanics, Gábor; Kremláček, Jan; Czigler, István

    2014-01-01

    An increasing number of studies investigate the visual mismatch negativity (vMMN) or use the vMMN as a tool to probe various aspects of human cognition. This paper reviews the theoretical underpinnings of vMMN in the light of methodological considerations and provides recommendations for measuring and interpreting the vMMN. The following key issues are discussed from the experimentalist's point of view in a predictive coding framework: (1) experimental protocols and procedures to control "refractoriness" effects; (2) methods to control attention; (3) vMMN and veridical perception. PMID:25278859

  12. Adaptive Modulation and Coding for LTE Wireless Communication

    NASA Astrophysics Data System (ADS)

    Hadi, S. S.; Tiong, T. C.

    2015-04-01

    Long Term Evolution (LTE) is the upgrade path for carriers with both GSM/UMTS networks and CDMA2000 networks. LTE is targeted to become the first global mobile phone standard, despite the barrier of different LTE frequencies and bands being used in different countries. Adaptive Modulation and Coding (AMC) is used to increase the network capacity or downlink data rates. Various modulation types are discussed, such as Quadrature Phase Shift Keying (QPSK) and Quadrature Amplitude Modulation (QAM). Spatial multiplexing techniques for the 4×4 MIMO antenna configuration are studied. With channel state information fed back from the mobile receiver to the base station transmitter, adaptive modulation and coding can be applied to adapt to the mobile wireless channel condition, increasing spectral efficiency without increasing the bit error rate in noisy channels. In High-Speed Downlink Packet Access (HSDPA) in the Universal Mobile Telecommunications System (UMTS), AMC can be used to choose the modulation type and forward error correction (FEC) coding rate.

  13. Adaptive error correction codes for face identification

    NASA Astrophysics Data System (ADS)

    Hussein, Wafaa R.; Sellahewa, Harin; Jassim, Sabah A.

    2012-06-01

    Face recognition in uncontrolled environments is greatly affected by fuzziness of face feature vectors as a result of extreme variation in recording conditions (e.g. illumination, poses or expressions) in different sessions. Many techniques have been developed to deal with these variations, resulting in improved performance. This paper aims to model template fuzziness as errors and to investigate the use of error detection/correction techniques for face recognition in uncontrolled environments. Error correction codes (ECC) have recently been used for biometric key generation but not on biometric templates. We have investigated error patterns in binary face feature vectors extracted from image windows of differing sizes and for different recording conditions. By estimating statistical parameters for the intra-class and inter-class distributions of Hamming distances in each window, we encode with appropriate ECCs. The proposed approach is tested for binarised wavelet templates using two face databases: Extended Yale-B and Yale. We demonstrate that using different combinations of BCH-based ECCs for different blocks and different recording conditions leads to different accuracy rates, and that using ECCs yields significantly improved recognition results.
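
    The matching side of the scheme can be sketched as follows: a t-error-correcting block code accepts a probe whenever its Hamming distance to the enrolled template is at most t, and t is tuned from the estimated intra-class and inter-class distance statistics. The midpoint rule below is an assumed simplification.

    ```python
    # Sketch of ECC-style template matching (illustrative, not the paper's
    # BCH construction): accept a probe template when its Hamming distance to
    # the enrolled template is within the code's correction capacity t.

    def hamming(u, v):
        """Hamming distance between two equal-length binary templates."""
        return sum(a != b for a, b in zip(u, v))

    def choose_t(intra_mean, inter_mean):
        """Assumed rule: place t midway between the genuine (intra-class) and
        impostor (inter-class) mean distances estimated for a window."""
        return int((intra_mean + inter_mean) / 2)

    def accept(enrolled, probe, t):
        return hamming(enrolled, probe) <= t
    ```

    Choosing t per window and per recording condition is what lets the scheme absorb genuine within-class fuzziness while still rejecting impostors.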

  14. An adaptive algorithm for motion compensated color image coding

    NASA Technical Reports Server (NTRS)

    Kwatra, Subhash C.; Whyte, Wayne A.; Lin, Chow-Ming

    1987-01-01

    This paper presents an adaptive algorithm for motion compensated color image coding. The algorithm can be used for video teleconferencing or broadcast signals. Activity segmentation is used to reduce the bit rate and a variable stage search is conducted to save computations. The adaptive algorithm is compared with the nonadaptive algorithm and it is shown that with approximately 60 percent savings in computing the motion vector and 33 percent additional compression, the performance of the adaptive algorithm is similar to the nonadaptive algorithm. The adaptive algorithm results also show improvement of up to 1 bit/pel over interframe DPCM coding with nonuniform quantization. The test pictures used for this study were recorded directly from broadcast video in color.

  15. Can predictive coding explain repetition suppression?

    PubMed

    Grotheer, Mareike; Kovács, Gyula

    2016-07-01

    While in earlier work various local or bottom-up neural mechanisms were proposed to give rise to repetition suppression (RS), current theories suggest that top-down processes play a role in determining the repetition related reduction of the neural responses. In the current review we summarise the results that support the role of these top-down processes, concentrating on the Bayesian models of predictive coding (PC). Such models assume that RS is related to the statistical probabilities of prior stimulus occurrences and to the future predictability of these stimuli. Here we review the current results that support or argue against this explanation. We point out that the experimental manipulations thought to reflect predictive processes are heterogeneous and likely measure different processing steps, making their direct comparison difficult. In addition we emphasize the importance of identifying these sub-processes and clarifying their role in explaining RS. Finally, we propose a two-stage model for explaining the relationships of repetition and expectation phenomena in the human cortex. PMID:26861559

  16. The multidimensional Self-Adaptive Grid code, SAGE, version 2

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.; Venkatapathy, Ethiraj

    1995-01-01

    This new report on Version 2 of the SAGE code includes all the information in the original publication plus all upgrades and changes to the SAGE code since that time. The two most significant upgrades are the inclusion of a finite-volume option and the ability to adapt and manipulate zonal-matching multiple-grid files. In addition, the original SAGE code has been upgraded to Version 1.1 and includes all options mentioned in this report, with the exception of the multiple grid option and its associated features. Since Version 2 is a larger and more complex code, it is suggested (but not required) that Version 1.1 be used for single-grid applications. This document contains all the information required to run both versions of SAGE. The formulation of the adaption method is described in the first section of this document. The second section is presented in the form of a user guide that explains the input and execution of the code. The third section provides many examples. Successful application of the SAGE code in both two and three dimensions for the solution of various flow problems has proven the code to be robust, portable, and simple to use. Although the basic formulation follows the method of Nakahashi and Deiwert, many modifications have been made to facilitate the use of the self-adaptive grid method for complex grid structures. Modifications to the method and the simple but extensive input options make this a flexible and user-friendly code. The SAGE code can accommodate two-dimensional and three-dimensional, finite-difference and finite-volume, single grid, and zonal-matching multiple grid flow problems.

  17. The multiform motor cortical output: Kinematic, predictive and response coding.

    PubMed

    Sartori, Luisa; Betti, Sonia; Chinellato, Eris; Castiello, Umberto

    2015-09-01

    Observing actions performed by others entails a subliminal activation of primary motor cortex reflecting the components encoded in the observed action. One of the most debated issues concerns the role of this output: Is it a mere replica of the incoming flow of information (kinematic coding), is it oriented to anticipate the forthcoming events (predictive coding) or is it aimed at responding in a suitable fashion to the actions of others (response coding)? The aim of the present study was to disentangle the relative contribution of these three levels and unify them into an integrated view of cortical motor coding. We combined transcranial magnetic stimulation (TMS) and electromyography recordings at different timings to probe the excitability of corticospinal projections to upper and lower limb muscles of participants observing a soccer player performing: (i) a penalty kick straight in their direction and then coming to a full stop, (ii) a penalty kick straight in their direction and then continuing to run, (iii) a penalty kick to the side and then continuing to run. The results show a modulation of the observer's corticospinal excitability in different effectors at different times reflecting a multiplicity of motor coding. The internal replica of the observed action, the predictive activation, and the adaptive integration of congruent and non-congruent responses to the actions of others can coexist in a not mutually exclusive way. Such a view offers reconciliation among different (and apparently divergent) frameworks in action observation literature, and will promote a more complete and integrated understanding of recent findings on motor simulation, motor resonance and automatic imitation. PMID:25727547

  18. Adaptive Trajectory Prediction Algorithm for Climbing Flights

    NASA Technical Reports Server (NTRS)

    Schultz, Charles Alexander; Thipphavong, David P.; Erzberger, Heinz

    2012-01-01

    Aircraft climb trajectories are difficult to predict, and large errors in these predictions reduce the potential operational benefits of some advanced features for NextGen. The algorithm described in this paper improves climb trajectory prediction accuracy by adjusting trajectory predictions based on observed track data. It utilizes rate-of-climb and airspeed measurements derived from position data to dynamically adjust the aircraft weight modeled for trajectory predictions. In simulations with weight uncertainty, the algorithm is able to adapt to within 3 percent of the actual gross weight within two minutes of the initial adaptation. The root-mean-square of altitude errors for five-minute predictions was reduced by 73 percent. Conflict detection performance also improved, with a 15 percent reduction in missed alerts and a 10 percent reduction in false alerts. In a simulation with climb speed capture intent and weight uncertainty, the algorithm improved climb trajectory prediction accuracy by up to 30 percent and conflict detection performance, reducing missed and false alerts by up to 10 percent.
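
The weight-adjustment idea above can be sketched in a few lines. This is a hypothetical illustration, not the paper's algorithm: the function name, the 1/weight climb-rate model, and the gain value are all assumptions made for the example.

```python
# Hypothetical sketch: adjust the modeled aircraft weight from observed climb
# performance. For a fixed excess-power budget, rate of climb scales roughly
# as 1/weight, so the observed/predicted climb-rate ratio implies a correction.

def adapt_weight(modeled_weight, predicted_rocd, observed_rocd, gain=0.5):
    """Return an updated weight estimate; `gain` damps each correction."""
    if observed_rocd <= 0 or predicted_rocd <= 0:
        return modeled_weight  # no usable climb measurement this update
    ratio = observed_rocd / predicted_rocd  # >1: climbs better than modeled
    target = modeled_weight / ratio         # a lighter aircraft climbs faster
    return modeled_weight + gain * (target - modeled_weight)

# Toy track updates: the estimate converges toward the true gross weight.
true_w, w = 60000.0, 70000.0
for _ in range(20):
    observed = 2000.0 * (60000.0 / true_w)   # toy dynamics: rocd ~ 1/weight
    predicted = 2000.0 * (60000.0 / w)
    w = adapt_weight(w, predicted, observed)
```

With a damping gain of 0.5, each track update halves the weight error, which is consistent in spirit with the abstract's convergence within a few minutes of track data.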

  19. The multidimensional self-adaptive grid code, SAGE

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.; Venkatapathy, Ethiraj

    1992-01-01

    This report describes the multidimensional self-adaptive grid code SAGE. A two-dimensional version of this code was described in an earlier report by the authors. The formulation of the multidimensional version is described in the first section of this document. The second section is presented in the form of a user guide that explains the input and execution of the code and provides many examples. Successful application of the SAGE code in both two and three dimensions for the solution of various flow problems has proven the code to be robust, portable, and simple to use. Although the basic formulation follows the method of Nakahashi and Deiwert, many modifications have been made to facilitate the use of the self-adaptive grid method for complex grid structures. Modifications to the method and the simplified input options make this a flexible and user-friendly code. The new SAGE code can accommodate both two-dimensional and three-dimensional flow problems.

  20. Intelligent robots that adapt, learn, and predict

    NASA Astrophysics Data System (ADS)

    Hall, E. L.; Liao, X.; Ghaffari, M.; Alhaj Ali, S. M.

    2005-10-01

    The purpose of this paper is to describe the concept and architecture for an intelligent robot system that can adapt, learn, and predict the future. This evolutionary approach to the design of intelligent robots is the result of several years of study on the design of intelligent machines that could adapt using computer vision or other sensory inputs, learn using artificial neural networks or genetic algorithms, exhibit semiotic closure with a creative controller, and perceive present situations by interpretation of visual and voice commands. This information processing would then permit the robot to predict the future and plan its actions accordingly. In this paper we show that the capability to adapt and learn naturally leads to the ability to predict the future state of the environment, which is just another form of semiotic closure. That is, predicting a future state without knowledge of the future is similar to making a present action without knowledge of the present state. The theory will be illustrated by considering the situation of guiding a mobile robot through an unstructured environment for a rescue operation. The significance of this work is in providing a greater understanding of the applications of learning to mobile robots.

  1. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized, variable-block-size transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of a coder for any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incurring extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.

  2. Efficient Unstructured Grid Adaptation Methods for Sonic Boom Prediction

    NASA Technical Reports Server (NTRS)

    Campbell, Richard L.; Carter, Melissa B.; Deere, Karen A.; Waithe, Kenrick A.

    2008-01-01

    This paper examines the use of two grid adaptation methods to improve the accuracy of the near-to-mid field pressure signature prediction of supersonic aircraft computed using the USM3D unstructured grid flow solver. The first method (ADV) is an interactive adaptation process that uses grid movement rather than enrichment to more accurately resolve the expansion and compression waves. The second method (SSGRID) uses an a priori adaptation approach to stretch and shear the original unstructured grid to align the grid with the pressure waves and reduce the cell count required to achieve an accurate signature prediction at a given distance from the vehicle. Both methods initially create negative volume cells that are repaired in a module in the ADV code. While both approaches provide significant improvements in the near field signature (< 3 body lengths) relative to a baseline grid without increasing the number of grid points, only the SSGRID approach allows the details of the signature to be accurately computed at mid-field distances (3-10 body lengths) for direct use with mid-field-to-ground boom propagation codes.

  3. Adaptive variable-length coding for efficient compression of spacecraft television data.

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Plaunt, J. R.

    1971-01-01

    An adaptive variable length coding system is presented. Although developed primarily for the proposed Grand Tour missions, many features of this system clearly indicate a much wider applicability. Using sample to sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the necessity of storing any code words. Performance improvements of 0.5 bit/pixel can be simply achieved by utilizing previous line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code modulation system.
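
The per-block code selection can be sketched as follows. This is a hedged illustration in the spirit of the system, not the flight algorithm itself: Golomb-Rice options stand in for the three concatenated codes, and the cost comparison is the generic one.

```python
# Hypothetical sketch of per-block code selection in a Rice-style adaptive
# coder: nonnegative mapped prediction residuals are coded with the option
# (here, the Golomb-Rice parameter k) whose total bit cost for the block is
# smallest, so the coder tracks changing source statistics block by block.

BLOCK = 21  # the abstract's block size

def rice_length(value, k):
    """Bits to code `value` as a unary quotient plus k remainder bits."""
    return (value >> k) + 1 + k

def select_option(block, k_options=(0, 1, 2, 3)):
    """Return (best_k, total_bits) minimizing the block's coded length."""
    costs = {k: sum(rice_length(v, k) for v in block) for k in k_options}
    best_k = min(costs, key=costs.get)
    return best_k, costs[best_k]

low_activity = [0, 1, 0, 2, 1, 0, 0] * 3    # 21 samples, small residuals
high_activity = [5, 9, 7, 12, 8, 6, 10] * 3  # 21 samples, large residuals
```

A quiet block selects the low-rate option, a busy block a higher-rate one, mirroring the automatic per-block code switching described in the abstract.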

  4. Adaptive method for electron bunch profile prediction

    NASA Astrophysics Data System (ADS)

    Scheinker, Alexander; Gessner, Spencer

    2015-10-01

    We report on an experiment performed at the Facility for Advanced Accelerator Experimental Tests (FACET) at SLAC National Accelerator Laboratory, in which a new adaptive control algorithm, one with known, bounded update rates, despite operating on analytically unknown cost functions, was utilized in order to provide quasi-real-time bunch property estimates of the electron beam. Multiple parameters, such as arbitrary rf phase settings and other time-varying accelerator properties, were simultaneously tuned in order to match a simulated bunch energy spectrum with a measured energy spectrum. The simple adaptive scheme was digitally implemented using matlab and the experimental physics and industrial control system. The main result is a nonintrusive, nondestructive, real-time diagnostic scheme for prediction of bunch profiles, as well as other beam parameters, the precise control of which are important for the plasma wakefield acceleration experiments being explored at FACET.
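
A minimal sketch of an update law with this bounded-rate property, in the spirit of extremum seeking. The parameter values and the quadratic cost are invented for illustration, and this is not claimed to be the experiment's implementation.

```python
import math

# Sketch of an extremum-seeking update with a bounded step size: the parameter
# moves at a fixed maximum rate whose *phase* depends on the measured cost, so
# the update stays bounded even though the cost function is analytically
# unknown to the controller.

def es_minimize(cost, p0, steps=20000, dt=0.002, omega=100.0, k=2.0, alpha=1.0):
    p = p0
    for i in range(steps):
        t = i * dt
        # |dp| <= dt*sqrt(alpha*omega) per step, no matter how large cost(p) is
        p += dt * math.sqrt(alpha * omega) * math.cos(omega * t + k * cost(p))
    return p

# Tune one "rf phase"-like parameter until a simulated spectrum matches a
# measured one (both collapsed to scalars here).
measured = 3.0
p_final = es_minimize(lambda p: (p - measured) ** 2, p0=0.0)
```

On average this dithered update descends the cost gradient, so `p_final` settles near the measured value to within the small dither amplitude.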

  5. Adaptive method for electron bunch profile prediction

    SciTech Connect

    Scheinker, Alexander; Gessner, Spencer

    2015-10-01

    We report on an experiment performed at the Facility for Advanced Accelerator Experimental Tests (FACET) at SLAC National Accelerator Laboratory, in which a new adaptive control algorithm, one with known, bounded update rates, despite operating on analytically unknown cost functions, was utilized in order to provide quasi-real-time bunch property estimates of the electron beam. Multiple parameters, such as arbitrary rf phase settings and other time-varying accelerator properties, were simultaneously tuned in order to match a simulated bunch energy spectrum with a measured energy spectrum. The simple adaptive scheme was digitally implemented using matlab and the experimental physics and industrial control system. The main result is a nonintrusive, nondestructive, real-time diagnostic scheme for prediction of bunch profiles, as well as other beam parameters, the precise control of which are important for the plasma wakefield acceleration experiments being explored at FACET. © 2015 authors. Published by the American Physical Society.

  6. Adaptation improves neural coding efficiency despite increasing correlations in variability.

    PubMed

    Adibi, Mehdi; McDonald, James S; Clifford, Colin W G; Arabzadeh, Ehsan

    2013-01-30

    Exposure of cortical cells to sustained sensory stimuli results in changes in the neuronal response function. This phenomenon, known as adaptation, is a common feature across sensory modalities. Here, we quantified the functional effect of adaptation on the ensemble activity of cortical neurons in the rat whisker-barrel system. A multishank array of electrodes was used to allow simultaneous sampling of neuronal activity. We characterized the response of neurons to sinusoidal whisker vibrations of varying amplitude in three states of adaptation. The adaptors produced a systematic rightward shift in the neuronal response function. Consistently, mutual information revealed that peak discrimination performance was not aligned to the adaptor but to test amplitudes 3-9 μm higher. Stimulus presentation reduced single-neuron trial-to-trial response variability (captured by the Fano factor) and correlations in the population response variability (noise correlation). We found that these two types of variability were inversely proportional to the average firing rate regardless of the adaptation state. Adaptation transferred the neuronal operating regime to lower rates with a higher Fano factor and noise correlations. Noise correlations were positive and in the direction of the signal, and thus detrimental to coding efficiency. Interestingly, across all population sizes, the net effect of adaptation was to increase the total information despite increasing the noise correlation between neurons. PMID:23365247
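
The two variability measures named above are easy to state concretely. A minimal sketch with made-up spike counts follows; the function names and data are assumptions for illustration.

```python
from statistics import mean, pvariance

# Sketch of the two variability measures in the abstract: the Fano factor
# (trial-to-trial variance over mean of one neuron's spike counts) and the
# noise correlation (correlation of two neurons' fluctuations around their
# mean responses to a repeated stimulus).

def fano_factor(counts):
    return pvariance(counts) / mean(counts)

def noise_correlation(counts_a, counts_b):
    ma, mb = mean(counts_a), mean(counts_b)
    da = [x - ma for x in counts_a]
    db = [x - mb for x in counts_b]
    cov = sum(x * y for x, y in zip(da, db)) / len(da)
    sa = pvariance(counts_a) ** 0.5
    sb = pvariance(counts_b) ** 0.5
    return cov / (sa * sb)

# Spike counts across repeated presentations of the same stimulus.
neuron_a = [4, 6, 5, 7, 3, 5]
neuron_b = [5, 7, 6, 8, 4, 6]  # fluctuates with neuron_a: correlation 1.0
```

A Poisson-like neuron has a Fano factor near 1; shared trial-to-trial fluctuations (as in the toy pair above) show up as positive noise correlation.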

  7. Adaptive norm-based coding of facial identity.

    PubMed

    Rhodes, Gillian; Jeffery, Linda

    2006-09-01

    Identification of a face is facilitated by adapting to its computationally opposite identity, suggesting that the average face functions as a norm for coding identity [Leopold, D. A., O'Toole, A. J., Vetter, T., & Blanz, V. (2001). Prototype-referenced shape encoding revealed by high-level aftereffects. Nature Neuroscience, 4, 89-94; Leopold, D. A., Rhodes, G., Müller, K. -M., & Jeffery, L. (2005). The dynamics of visual adaptation to faces. Proceedings of the Royal Society of London, Series B, 272, 897-904]. Crucially, this interpretation requires that the aftereffect is selective for the opposite identity, but this has not been convincingly demonstrated. We demonstrate such selectivity, observing a larger aftereffect for opposite than non-opposite adapt-test pairs that are matched on perceptual contrast (dissimilarity). Component identities were also harder to detect in morphs of opposite than non-opposite face pairs. We propose an adaptive norm-based coding model of face identity. PMID:16647736

  8. The Helicopter Antenna Radiation Prediction Code (HARP)

    NASA Technical Reports Server (NTRS)

    Klevenow, F. T.; Lynch, B. G.; Newman, E. H.; Rojas, R. G.; Scheick, J. T.; Shamansky, H. T.; Sze, K. Y.

    1990-01-01

    The first nine months' effort in the development of a user-oriented computer code, referred to as the HARP code, for analyzing the radiation from helicopter antennas is described. The HARP code uses modern computer graphics to aid in the description and display of the helicopter geometry. At low frequencies the helicopter is modeled by polygonal plates, and the method of moments is used to compute the desired patterns. At high frequencies the helicopter is modeled by a composite ellipsoid and flat plates, and computations are made using the geometrical theory of diffraction. The HARP code will provide a user-friendly interface, employing modern computer graphics, to aid the user to describe the helicopter geometry, select the method of computation, construct the desired high or low frequency model, and display the results.

  9. Adaptive coded aperture imaging: progress and potential future applications

    NASA Astrophysics Data System (ADS)

    Gottesman, Stephen R.; Isser, Abraham; Gigioli, George W., Jr.

    2011-09-01

    Interest in Adaptive Coded Aperture Imaging (ACAI) continues to grow as the optical and systems engineering community becomes increasingly aware of ACAI's potential benefits in the design and performance of both imaging and non-imaging systems, such as good angular resolution (IFOV), wide distortion-free field of view (FOV), excellent image quality, and lightweight construction. In this presentation we first review the accomplishments made over the past five years, then expand on previously published work to show how replacement of conventional imaging optics with coded apertures can lead to a reduction in system size and weight. We also present a trade space analysis of key design parameters of coded apertures and review potential applications as replacements for traditional imaging optics. Results are presented from last year's investigation into the trade space of IFOV, resolution, effective focal length, and wavelength of incident radiation for coded aperture architectures. Finally, we discuss the potential application of coded apertures for replacing the objective lenses of night vision goggles (NVGs).

  10. An assessment of the adaptive unstructured tetrahedral grid, Euler Flow Solver Code FELISA

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Erickson, Larry L.

    1994-01-01

    A three-dimensional solution-adaptive Euler flow solver for unstructured tetrahedral meshes is assessed, and the accuracy and efficiency of the method for predicting sonic boom pressure signatures about simple generic models are demonstrated. Comparison of computational and wind tunnel data and enhancement of numerical solutions by means of grid adaptivity are discussed. The mesh generation is based on the advancing front technique. The FELISA code consists of two solvers, the Taylor-Galerkin and the Runge-Kutta-Galerkin schemes, both of which are spatially discretized by the usual Galerkin weighted-residual finite-element method but with different explicit time-marching schemes to steady state. The solution-adaptive grid procedure is based on either remeshing or mesh refinement techniques. An alternative geometry-adaptive procedure is also incorporated.

  11. Adaptive shape coding for perceptual decisions in the human brain

    PubMed Central

    Kourtzi, Zoe; Welchman, Andrew E.

    2015-01-01

    In its search for neural codes, the field of visual neuroscience has uncovered neural representations that reflect the structure of stimuli of variable complexity from simple features to object categories. However, accumulating evidence suggests an adaptive neural code that is dynamically shaped by experience to support flexible and efficient perceptual decisions. Here, we review work showing that experience plays a critical role in molding midlevel visual representations for perceptual decisions. Combining behavioral and brain imaging measurements, we demonstrate that learning optimizes feature binding for object recognition in cluttered scenes, and tunes the neural representations of informative image parts to support efficient categorical judgements. Our findings indicate that similar learning mechanisms may mediate long-term optimization through development, tune the visual system to fundamental principles of feature binding, and optimize feature templates for perceptual decisions. PMID:26024511

  12. Picturewise inter-view prediction selection for multiview video coding

    NASA Astrophysics Data System (ADS)

    Huo, Junyan; Chang, Yilin; Li, Ming; Yang, Haitao

    2010-11-01

    Inter-view prediction is introduced in multiview video coding (MVC) to exploit the inter-view correlation. Statistical analyses show that the coding gain from inter-view prediction is unequal among pictures. On the basis of this observation, a picturewise inter-view prediction selection scheme is proposed. This scheme employs a novel inter-view prediction selection criterion to determine whether it is necessary to apply inter-view prediction to the current coding picture. This criterion is derived from the available coding information of the temporal reference pictures. Experimental results show that the proposed scheme can improve the performance of MVC with a comprehensive consideration of compression efficiency, computational complexity, and random access ability.

  13. Adaptive rezoner in a two-dimensional Lagrangian hydrodynamic code

    SciTech Connect

    Pyun, J.J.; Saltzman, J.S.; Scannapieco, A.J.; Carroll, D.

    1985-01-01

    In an effort to increase spatial resolution without adding additional mesh points, an adaptive mesh was incorporated into a two-dimensional Lagrangian hydrodynamics code along with a two-dimensional flux-corrected transport (FCT) remapper. The adaptive rezoner automatically generates a mesh based on smoothness and orthogonality, and at the same time tracks physical conditions of interest by focusing mesh points in regions that exhibit those conditions; this is done by defining a weighting function associated with the physical conditions to be tracked. The FCT remapper calculates the net transportive fluxes based on a weighted average of two fluxes computed by a low-order scheme and a high-order scheme. This averaging procedure produces solutions which are conservative and nondiffusive, and maintains positivity. 10 refs., 12 figs.

  14. SAGE: The Self-Adaptive Grid Code. 3

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.; Venkatapathy, Ethiraj

    1999-01-01

    The multi-dimensional self-adaptive grid code, SAGE, is an important tool in the field of computational fluid dynamics (CFD). It provides an efficient method to improve the accuracy of flow solutions while simultaneously reducing computer processing time. Briefly, SAGE enhances an initial computational grid by redistributing the mesh points into more appropriate locations. The movement of these points is driven by an equal-error-distribution algorithm that utilizes the relationship between high flow gradients and excessive solution errors. The method also provides a balance between clustering points in the high-gradient regions and maintaining the smoothness and continuity of the adapted grid. The latest version, Version 3, includes the ability to change the boundaries of a given grid to more efficiently enclose flow structures and provides alternative redistribution algorithms.
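
The equal-error-distribution idea can be sketched in one dimension. This is a generic illustration of gradient-weighted point redistribution with invented names and weights, not the SAGE implementation.

```python
import math

# Sketch of equal-error-distribution grid adaptation in 1D: points are
# redistributed so each interval holds an equal share of a weight function
# that grows with the local solution gradient, clustering points where the
# flow changes fastest.

def adapt_grid(x, f, clustering=20.0):
    n = len(x)
    # weight per interval: 1 + clustering * |gradient|
    w = [1.0 + clustering * abs((f[i + 1] - f[i]) / (x[i + 1] - x[i]))
         for i in range(n - 1)]
    # cumulative weight along the old grid
    cum = [0.0]
    for i in range(n - 1):
        cum.append(cum[-1] + w[i] * (x[i + 1] - x[i]))
    # invert: place new points at equal increments of cumulative weight
    new_x = [x[0]]
    for j in range(1, n - 1):
        target = cum[-1] * j / (n - 1)
        i = max(k for k in range(n) if cum[k] <= target)
        frac = (target - cum[i]) / (cum[i + 1] - cum[i])
        new_x.append(x[i] + frac * (x[i + 1] - x[i]))
    new_x.append(x[-1])
    return new_x

# Cluster points around the steep "shock" of a tanh profile at x = 0.5.
xs = [i / 20 for i in range(21)]
fs = [math.tanh(20 * (v - 0.5)) for v in xs]
adapted = adapt_grid(xs, fs)
```

The additive constant in the weight plays the smoothness role: it keeps some points in low-gradient regions instead of letting the high-gradient zone absorb them all.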

  15. Adaptive Synaptogenesis Constructs Neural Codes That Benefit Discrimination.

    PubMed

    Thomas, Blake T; Blalock, Davis W; Levy, William B

    2015-07-01

    Intelligent organisms face a variety of tasks requiring the acquisition of expertise within a specific domain, including the ability to discriminate between a large number of similar patterns. From an energy-efficiency perspective, effective discrimination requires a prudent allocation of neural resources with more frequent patterns and their variants being represented with greater precision. In this work, we demonstrate a biologically plausible means of constructing a single-layer neural network that adaptively (i.e., without supervision) meets this criterion. Specifically, the adaptive algorithm includes synaptogenesis, synaptic shedding, and bi-directional synaptic weight modification to produce a network with outputs (i.e. neural codes) that represent input patterns proportional to the frequency of related patterns. In addition to pattern frequency, the correlational structure of the input environment also affects allocation of neural resources. The combined synaptic modification mechanisms provide an explanation of neuron allocation in the case of self-taught experts. PMID:26176744

  16. Improving Intra Prediction in High-Efficiency Video Coding.

    PubMed

    Chen, Haoming; Zhang, Tao; Sun, Ming-Ting; Saxena, Ankur; Budagavi, Madhukar

    2016-08-01

    Intra prediction is an important tool in intra-frame video coding to reduce the spatial redundancy. In the current coding standard H.265/high-efficiency video coding (HEVC), a copying-based method based on the boundary (or interpolated boundary) reference pixels is used to predict each pixel in the coding block to remove the spatial redundancy. We find that the conventional copying-based method can be further improved in two cases: 1) the boundary has an inhomogeneous region and 2) the predicted pixel is so far from the boundary that the correlation between it and the reference pixels is relatively weak. This paper performs a theoretical analysis of the optimal weights based on a first-order Gaussian Markov model, and of the effects when the pixel values deviate from the model or the predicted pixel is far from the reference pixels. Based on this analysis, it also proposes a novel intra prediction scheme in which smoothing the copying-based prediction yields a better prediction block. Both the theoretical analysis and the experimental results show the effectiveness of the proposed intra prediction method. An average gain of 2.3% on all-intra coding can be achieved with the HEVC reference software. PMID:27249831
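
The flavor of the proposal can be illustrated with a toy one-directional example. This is not the paper's scheme (the weights there come from the first-order Gauss-Markov analysis); the linear distance-based blend below is an invented stand-in.

```python
# Illustrative sketch: a plain copying-based vertical intra prediction, then a
# distance-dependent blend toward the mean of the reference row, weakening the
# copied value for pixels far from the boundary (where the copy is least
# reliable). The blend weights here are invented, not the paper's.

def predict_vertical(top_refs, height):
    """HEVC-style vertical prediction: copy the row above downward."""
    return [list(top_refs) for _ in range(height)]

def predict_vertical_smoothed(top_refs, height, strength=0.1):
    ref_mean = sum(top_refs) / len(top_refs)
    block = []
    for y in range(height):
        w = min(1.0, strength * (y + 1))  # rely less on the copy as y grows
        block.append([(1 - w) * v + w * ref_mean for v in top_refs])
    return block

top = [10, 20, 30, 40]
plain = predict_vertical(top, 4)
smooth = predict_vertical_smoothed(top, 4)
```

In the smoothed block, rows far from the boundary drift toward the reference mean, which is the qualitative effect the paper motivates for weakly correlated distant pixels.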

  17. Prediction of long saphenous vein graft adaptation.

    PubMed

    Davies, A H; Magee, T R; Hayward, J K; Baird, R N; Horrocks, M

    1994-07-01

    The ability of vein to dilate may allow smaller veins to be used for bypass if this change could be predicted. Sixty patients undergoing femorodistal popliteal or infrapopliteal bypass have had their long saphenous vein studied. Diameter measurements of the long saphenous vein were performed using an ATL Duplex scanner at the groin, mid-thigh and knee. Measurements were performed preoperatively, both at rest and with a venous occlusion cuff to dilate the vein, and subsequently at 7 days and 3, 6, 9 and 12 months after implantation. The mean diameter of the vein at the mid-thigh was 4.2 mm non-dilated, 5.1 mm with occlusion, 5.4 mm 7 days postoperatively and 5.5 mm at 12 months (p < 0.01 ANOVA). The mean diameter of the vein at the knee was 3.8 mm non-dilated, 4.8 mm with occlusion, 4.8 mm at 7 days and 5.0 mm at 12 months after operation (p < 0.01 ANOVA). If the minimum resting internal diameter of vein regarded as suitable for bypass was 3 mm, this technique would have increased the vein utilisation rate by 22%. These results show that by using a technique of venous occlusion at the time of preoperative vein mapping the adaptive response of the vein can be predicted, and this can result in an increased rate of vein utilisation. PMID:8088400

  18. Predictive Bias and Sensitivity in NRC Fuel Performance Codes

    SciTech Connect

    Geelhood, Kenneth J.; Luscher, Walter G.; Senor, David J.; Cunningham, Mitchel E.; Lanning, Donald D.; Adkins, Harold E.

    2009-10-01

    The latest versions of the fuel performance codes, FRAPCON-3 and FRAPTRAN, were examined to determine whether the codes are intrinsically conservative. Each individual model and type of code prediction was examined and compared to the data that were used to develop the model. In addition, a brief literature search was performed to determine whether more recent data suitable for model comparison had become available since the original model development.

  19. Cooperative solutions coupling a geometry engine and adaptive solver codes

    NASA Technical Reports Server (NTRS)

    Dickens, Thomas P.

    1995-01-01

    Follow-on work has progressed in using Aero Grid and Paneling System (AGPS), a geometry and visualization system, as a dynamic real time geometry monitor, manipulator, and interrogator for other codes. In particular, AGPS has been successfully coupled with adaptive flow solvers which iterate, refining the grid in areas of interest, and continuing on to a solution. With the coupling to the geometry engine, the new grids represent the actual geometry much more accurately since they are derived directly from the geometry and do not use refits to the first-cut grids. Additional work has been done with design runs where the geometric shape is modified to achieve a desired result. Various constraints are used to point the solution in a reasonable direction which also more closely satisfies the desired results. Concepts and techniques are presented, as well as examples of sample case studies. Issues such as distributed operation of the cooperative codes versus running all codes locally and pre-calculation for performance are discussed. Future directions are considered which will build on these techniques in light of changing computer environments.

  20. Results and code predictions for ABCOVE aerosol code validation - Test AB5

    SciTech Connect

    Hilliard, R K; McCormack, J D; Postma, A K

    1983-11-01

    A program for aerosol behavior code validation and evaluation (ABCOVE) has been developed in accordance with the LMFBR Safety Program Plan. The ABCOVE program is a cooperative effort between the USDOE, the USNRC, and their contractor organizations currently involved in aerosol code development, testing or application. The first large-scale test in the ABCOVE program, AB5, was performed in the 850-m³ CSTF vessel using a sodium spray as the aerosol source. Seven organizations made pretest predictions of aerosol behavior using seven different computer codes (HAA-3, HAA-4, HAARM-3, QUICK, MSPEC, MAEROS and CONTAIN). Three of the codes were used by more than one user so that the effect of user input could be assessed, as well as the codes themselves. Detailed test results are presented and compared with the code predictions for eight key parameters.

  1. Adaptive, predictive controller for optimal process control

    SciTech Connect

    Brown, S.K.; Baum, C.C.; Bowling, P.S.; Buescher, K.L.; Hanagandi, V.M.; Hinde, R.F. Jr.; Jones, R.D.; Parkinson, W.J.

    1995-12-01

    One can derive a model for use in a Model Predictive Controller (MPC) from first principles or from experimental data. Until recently, both methods failed for all but the simplest processes. First-principles models are almost always incomplete, and fitting to experimental data fails for dimensions greater than one as well as for non-linear cases. Several authors have suggested the use of a neural network to fit the experimental data to a multi-dimensional and/or non-linear model. Most networks, however, use simple sigmoid functions and backpropagation for fitting. Training of these networks generally requires large amounts of data and, consequently, very long training times. In 1993 we reported on the tuning and optimization of a negative ion source using a special neural network[2]. One of the properties of this network (CNLSnet), a modified radial basis function network, is that it is able to fit data with few basis functions. Another is that its training is linear, resulting in guaranteed convergence and rapid training. We found the training to be rapid enough to support real-time control. This work has been extended to incorporate this network into an MPC using the model built by the network for predictive control. This controller has shown some remarkable capabilities in such non-linear applications as continuous stirred exothermic tank reactors and high-purity fractional distillation columns[3]. The controller is able not only to build an appropriate model from operating data but also to thin the network continuously so that the model adapts to changing plant conditions. The controller is discussed, as well as its possible use in several of the difficult control problems that face this community.
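
The key property claimed for the network, linear and therefore rapid, guaranteed training, holds for any radial-basis-function model whose centers and widths are fixed. A generic sketch follows (not CNLSnet itself; all names and values are invented):

```python
import math

# Sketch of linear RBF training: with Gaussian centers and widths fixed, the
# output weights enter linearly, so a least-squares fit (here via the normal
# equations) converges in one solve rather than by iterative backpropagation.

def design_matrix(xs, centers, width):
    return [[math.exp(-((x - c) / width) ** 2) for c in centers] for x in xs]

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (M[r][n] - sum(M[r][c] * w[c] for c in range(r + 1, n))) / M[r][r]
    return w

def fit_rbf(xs, ys, centers, width=0.5):
    Phi = design_matrix(xs, centers, width)
    # normal equations: (Phi^T Phi) w = Phi^T y
    PtP = [[sum(Phi[k][i] * Phi[k][j] for k in range(len(xs)))
            for j in range(len(centers))] for i in range(len(centers))]
    Pty = [sum(Phi[k][i] * ys[k] for k in range(len(xs)))
           for i in range(len(centers))]
    return solve(PtP, Pty)

# Fit a smooth non-linear response with a handful of basis functions.
xs = [i / 10 for i in range(11)]
ys = [math.sin(2 * x) for x in xs]
centers = [0.0, 0.25, 0.5, 0.75, 1.0]
w = fit_rbf(xs, ys, centers)
pred = lambda x: sum(wi * math.exp(-((x - c) / 0.5) ** 2)
                     for wi, c in zip(w, centers))
```

Because the solve is a single linear-algebra step, refitting as new operating data arrive is cheap, which is what makes this style of model plausible for real-time adaptive MPC.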

  2. System code requirements for SBWR LOCA predictions

    SciTech Connect

    Rohatgi, U.S.; Slovik, G.; Kroeger, P.

    1994-12-31

    The simplified boiling water reactor (SBWR) is the latest design in the family of boiling water reactors (BWRs) from General Electric. The concept is based on many innovative, passive safety systems that rely on naturally occurring phenomena, such as natural circulation, gravity flows, and condensation. Reliability has been improved by eliminating active systems such as pumps and valves. The reactor pressure vessel (RPV) is connected to heat exchangers submerged in individual water tanks, which are open to atmosphere. These heat exchangers, or isolation condensers (ICs), provide a heat sink to reduce the RPV pressure when isolated. The RPV is also connected to three elevated tanks of water called the gravity-driven cooling system (GDCS). During a loss-of-coolant accident (LOCA), the RPV is depressurized by the automatic depressurization system (ADS), allowing the gravity-driven flow from the GDCS tanks. The containment pressure is controlled by a passive containment cooling system (PCCS) and suppression pool. Similarly, there are new plant protection systems in the SBWR, such as fine-motion control rod drive, a passive standby liquid control system, and an automatic feedwater runback system. These safety and plant protection systems respond to phenomena that are different from those in previous BWR designs. System codes must be upgraded to include models for the phenomena expected during transients for the SBWR.

  3. Lossless compression of hyperspectral images using adaptive edge-based prediction

    NASA Astrophysics Data System (ADS)

    Wang, Keyan; Wang, Liping; Liao, Huilin; Song, Juan; Li, Yunsong

    2013-09-01

    By fully exploiting the high correlation of the pixels along an edge, a new lossless compression algorithm for hyperspectral images using adaptive edge-based prediction is presented in order to improve compression performance. The proposed algorithm contains three modes in prediction: intraband prediction, interband prediction, and no prediction. An improved median predictor (IMP) with diagonal edge detection is adopted in the intraband mode. And in the interband mode, an adaptive edge-based predictor (AEP) is utilized to exploit the spectral redundancy. The AEP, which is driven by the strong interband structural similarity, applies an edge detection first to the reference band, and performs a local edge analysis to adaptively determine the optimal prediction context of the pixel to be predicted in the current band, and then calculates the prediction coefficients by least-squares optimization. After intra/inter prediction, all predicted residuals are finally entropy coded. For a band with no prediction mode, all the pixels are directly entropy coded. Experimental results show that the proposed algorithm improves the lossless compression ratio for both standard AVIRIS 1997 hyperspectral images and the newer CCSDS test images.
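
The improved median predictor builds on the classic median edge detector, which (as used in LOCO-I/JPEG-LS) is short enough to sketch here; the paper's diagonal-edge refinement and least-squares interband coefficients are not reproduced.

```python
# Sketch of the classic median edge detector that median predictors build on:
# the west (a), north (b), and northwest (c) neighbors select a horizontal,
# vertical, or planar prediction depending on the local edge orientation.

def median_predict(a, b, c):
    if c >= max(a, b):
        return min(a, b)   # edge detected: predict from the far side of it
    if c <= min(a, b):
        return max(a, b)   # edge in the other direction
    return a + b - c       # smooth region: planar (gradient) prediction

# A vertical edge between the west and north neighbors picks the north value;
# a smooth gradient falls through to the planar case.
edge_pred = median_predict(100, 50, 100)
smooth_pred = median_predict(20, 30, 25)
```

The entropy coding of the residuals then exploits how small this predictor leaves them in both smooth and edge regions.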

  4. 3D Finite Element Trajectory Code with Adaptive Meshing

    NASA Astrophysics Data System (ADS)

    Ives, Lawrence; Bui, Thuc; Vogler, William; Bauer, Andy; Shephard, Mark; Beal, Mark; Tran, Hien

    2004-11-01

    Beam Optics Analysis, a new 3D charged-particle program, is available and in use for the design of complex, 3D electron guns and charged-particle devices. The code reads files directly from most CAD and solid-modeling programs, and includes an intuitive graphical user interface (GUI) and a fully automatic, robust mesh generator. Complex problems can be set up and analysis initiated in minutes. The program includes a user-friendly post-processor for displaying field and trajectory data using 3D plots and images. The electrostatic solver is based on the standard nodal finite element method. The magnetostatic field solver is based on the vector finite element method and is also called during the trajectory simulation process to solve for self magnetic fields. The user imports the geometry from essentially any commercial CAD program and uses the GUI to assign parameters (voltages, currents, dielectric constants) and designate emitters (including work function, emitter temperature, and number of trajectories). The mesh is then generated automatically and the analysis is performed, including mesh adaptation to improve accuracy and optimize computational resources. This presentation will provide information on the basic structure of the code, its operation, and its capabilities.

  5. Roadmap Toward a Predictive Performance-based Commercial Energy Code

    SciTech Connect

    Rosenberg, Michael I.; Hart, Philip R.

    2014-10-01

    Energy codes have provided significant increases in building efficiency over the last 38 years, since the first national energy model code was published in late 1975. The most commonly used path in energy codes, the prescriptive path, appears to be reaching a point of diminishing returns. The current focus on prescriptive codes has limitations including significant variation in actual energy performance depending on which prescriptive options are chosen, a lack of flexibility for designers and developers, and the inability to handle control optimization that is specific to building type and use. This paper provides a high level review of different options for energy codes, including prescriptive, prescriptive packages, EUI Target, outcome-based, and predictive performance approaches. This paper also explores a next generation commercial energy code approach that places a greater emphasis on performance-based criteria. A vision is outlined to serve as a roadmap for future commercial code development. That vision is based on code development being led by a specific approach to predictive energy performance combined with building specific prescriptive packages that are designed to be both cost-effective and to achieve a desired level of performance. Compliance with this new approach can be achieved by either meeting the performance target as demonstrated by whole building energy modeling, or by choosing one of the prescriptive packages.

  6. RAM: a Relativistic Adaptive Mesh Refinement Hydrodynamics Code

    SciTech Connect

    Zhang, Wei-Qun; MacFadyen, Andrew I.; /Princeton, Inst. Advanced Study

    2005-06-06

    The authors have developed a new computer code, RAM, to solve the conservative equations of special relativistic hydrodynamics (SRHD) using adaptive mesh refinement (AMR) on parallel computers. They have implemented a characteristic-wise, finite difference, weighted essentially non-oscillatory (WENO) scheme using the full characteristic decomposition of the SRHD equations to achieve fifth-order accuracy in space. For time integration they use the method of lines with a third-order total variation diminishing (TVD) Runge-Kutta scheme. They have also implemented fourth- and fifth-order Runge-Kutta time integration schemes for comparison. The implementation of AMR and parallelization is based on the FLASH code. RAM is modular and includes the capability to easily swap hydrodynamics solvers, reconstruction methods, and physics modules. In addition to WENO, they have implemented a finite volume module with the piecewise parabolic method (PPM) for reconstruction and the modified Marquina approximate Riemann solver to work with TVD Runge-Kutta time integration. They examine the difficulty of accurately simulating shear flows in numerical relativistic hydrodynamics codes. They show that under-resolved simulations of simple test problems with transverse velocity components produce incorrect results and demonstrate the ability of RAM to correctly solve these problems. RAM has been tested in one, two and three dimensions and in Cartesian, cylindrical and spherical coordinates. They have demonstrated fifth-order accuracy for WENO in one and two dimensions and performed detailed comparisons with other schemes, for which they show significantly lower convergence rates. Extensive testing is presented demonstrating the ability of RAM to address challenging open questions in relativistic astrophysics.
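    The third-order TVD Runge-Kutta integrator named here is the standard Shu-Osher scheme, a convex combination of forward-Euler steps that preserves the TVD property of the spatial operator. A minimal sketch:

    ```python
    def tvd_rk3_step(u, dt, L):
        """One step of the third-order TVD Runge-Kutta scheme of Shu & Osher.

        L(u) is the spatial operator (e.g. a WENO discretization of the flux
        divergence); u may be a scalar or a NumPy array of conserved variables.
        """
        u1 = u + dt * L(u)                           # first Euler stage
        u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))     # second stage, convex blend
        return u / 3.0 + (2.0 / 3.0) * (u2 + dt * L(u2))  # final stage
    ```

    For the test problem du/dt = -u, one step from u = 1 with dt = 0.1 reproduces exp(-0.1) to within the scheme's O(dt^4) local error.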

  7. Grid-Adapted FUN3D Computations for the Second High Lift Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Lee-Rausch, E. M.; Rumsey, C. L.; Park, M. A.

    2014-01-01

    Contributions of the unstructured Reynolds-averaged Navier-Stokes code FUN3D to the 2nd AIAA CFD High Lift Prediction Workshop are described, and detailed comparisons are made with experimental data. Using workshop-supplied grids, results for the clean wing configuration are compared with results from the structured code CFL3D. Using the same turbulence model, both codes compare reasonably well in terms of total forces and moments, and the maximum lift is similarly over-predicted for both codes compared to experiment. By including more representative geometry features such as slat and flap brackets and slat pressure tube bundles, FUN3D captures the general effects of the Reynolds number variation, but under-predicts maximum lift on workshop-supplied grids in comparison with the experimental data, due to excessive separation. However, when output-based, off-body grid adaptation in FUN3D is employed, results improve considerably. In particular, when the geometry includes both brackets and the pressure tube bundles, grid adaptation results in a more accurate prediction of lift near stall in comparison with the wind-tunnel data. Furthermore, a rotation-corrected turbulence model shows improved pressure predictions on the outboard span when using adapted grids.

  8. Dream to Predict? REM Dreaming as Prospective Coding

    PubMed Central

    Llewellyn, Sue

    2016-01-01

    The dream as prediction seems inherently improbable. The bizarre occurrences in dreams never characterize everyday life. Dreams do not come true! But assuming that bizarreness negates expectations may rest on a misunderstanding of how the predictive brain works. In evolutionary terms, the ability to rapidly predict what sensory input implies—through expectations derived from discerning patterns in associated past experiences—would have enhanced fitness and survival. For example, food and water are essential for survival, associating past experiences (to identify location patterns) predicts where they can be found. Similarly, prediction may enable predator identification from what would have been only a fleeting and ambiguous stimulus—without prior expectations. To confront the many challenges associated with natural settings, visual perception is vital for humans (and most mammals) and often responses must be rapid. Predictive coding during wake may, therefore, be based on unconscious imagery so that visual perception is maintained and appropriate motor actions triggered quickly. Speed may also dictate the form of the imagery. Bizarreness, during REM dreaming, may result from a prospective code fusing phenomena with the same meaning—within a particular context. For example, if the context is possible predation, from the perspective of the prey two different predators can both mean the same (i.e., immediate danger) and require the same response (e.g., flight). Prospective coding may also prune redundancy from memories, to focus the image on the contextually-relevant elements only, thus, rendering the non-relevant phenomena indeterminate—another aspect of bizarreness. In sum, this paper offers an evolutionary take on REM dreaming as a form of prospective coding which identifies a probabilistic pattern in past events. This pattern is portrayed in an unconscious, associative, sensorimotor image which may support cognition in wake through being mobilized as a

  9. Dream to Predict? REM Dreaming as Prospective Coding.

    PubMed

    Llewellyn, Sue

    2015-01-01

    The dream as prediction seems inherently improbable. The bizarre occurrences in dreams never characterize everyday life. Dreams do not come true! But assuming that bizarreness negates expectations may rest on a misunderstanding of how the predictive brain works. In evolutionary terms, the ability to rapidly predict what sensory input implies-through expectations derived from discerning patterns in associated past experiences-would have enhanced fitness and survival. For example, food and water are essential for survival, associating past experiences (to identify location patterns) predicts where they can be found. Similarly, prediction may enable predator identification from what would have been only a fleeting and ambiguous stimulus-without prior expectations. To confront the many challenges associated with natural settings, visual perception is vital for humans (and most mammals) and often responses must be rapid. Predictive coding during wake may, therefore, be based on unconscious imagery so that visual perception is maintained and appropriate motor actions triggered quickly. Speed may also dictate the form of the imagery. Bizarreness, during REM dreaming, may result from a prospective code fusing phenomena with the same meaning-within a particular context. For example, if the context is possible predation, from the perspective of the prey two different predators can both mean the same (i.e., immediate danger) and require the same response (e.g., flight). Prospective coding may also prune redundancy from memories, to focus the image on the contextually-relevant elements only, thus, rendering the non-relevant phenomena indeterminate-another aspect of bizarreness. In sum, this paper offers an evolutionary take on REM dreaming as a form of prospective coding which identifies a probabilistic pattern in past events. 
This pattern is portrayed in an unconscious, associative, sensorimotor image which may support cognition in wake through being mobilized as a predictive

  10. Adaptive distributed video coding with correlation estimation using expectation propagation

    NASA Astrophysics Data System (ADS)

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2012-10-01

    Distributed video coding (DVC) is rapidly gaining popularity by shifting complexity from the encoder to the decoder while, at least in theory, sacrificing no compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder, based on the received syndromes of the Wyner-Ziv (WZ) frame and the side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, existing correlation estimation methods in DVC fall into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where the estimate can be refined iteratively during decoding. Because changes between frames may be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF jointly with decoding of the factor graph-based DVC code. Among approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance at significantly lower complexity compared with sampling methods.

  11. Customizing Countermeasure Prescriptions using Predictive Measures of Sensorimotor Adaptability

    NASA Technical Reports Server (NTRS)

    Bloomberg, J. J.; Peters, B. T.; Mulavara, A. P.; Miller, C. A.; Batson, C. D.; Wood, S. J.; Guined, J. R.; Cohen, H. S.; Buccello-Stout, R.; DeDios, Y. E.; Kofman, I. S.; Szecsy, D. L.; Erdeniz, B.; Koppelmans, V.; Seidler, R. D.

    2014-01-01

    Astronauts experience sensorimotor disturbances during the initial exposure to microgravity and during the readaptation phase following a return to a gravitational environment. These alterations may lead to disruption in the ability to perform mission-critical functional tasks during and after these gravitational transitions. Astronauts show significant inter-subject variation in adaptive capability following gravitational transitions. The ability to predict the manner and degree to which each individual astronaut will be affected would improve the effectiveness of a countermeasure comprised of a training program designed to enhance sensorimotor adaptability. Because of this inherent individual variability, we need to develop predictive measures of sensorimotor adaptability that will allow us to predict, before actual space flight, which crewmember will experience challenges in adaptive capacity. Obtaining this information will allow us to design and implement better sensorimotor adaptability training countermeasures that are customized for each crewmember's unique adaptive capabilities. Therefore the goals of this project are to: 1) develop a set of predictive measures capable of identifying individual differences in sensorimotor adaptability, and 2) use this information to design sensorimotor adaptability training countermeasures that are customized for each crewmember's individual sensorimotor adaptive characteristics. To achieve these goals we are currently pursuing the following specific aims: Aim 1: Determine whether behavioral metrics of individual sensory bias predict sensorimotor adaptability. For this aim, subjects perform tests that delineate individual sensory biases in tests of visual, vestibular, and proprioceptive function. Aim 2: Determine if individual capability for strategic and plastic-adaptive responses predicts sensorimotor adaptability. 
For this aim, each subject's strategic and plastic-adaptive motor learning abilities are assessed using

  12. PHURBAS: AN ADAPTIVE, LAGRANGIAN, MESHLESS, MAGNETOHYDRODYNAMICS CODE. II. IMPLEMENTATION AND TESTS

    SciTech Connect

    McNally, Colin P.; Mac Low, Mordecai-Mark; Maron, Jason L. E-mail: jmaron@amnh.org

    2012-05-01

    We present an algorithm for simulating the equations of ideal magnetohydrodynamics and other systems of differential equations on an unstructured set of points represented by sample particles. The particles move with the fluid, so the time step is not limited by the Eulerian Courant-Friedrichs-Lewy condition. Full spatial adaptivity is required to ensure the particles fill the computational volume and gives the algorithm substantial flexibility and power. A target resolution is specified for each point in space, with particles being added and deleted as needed to meet this target. We have parallelized the code by adapting the framework provided by GADGET-2. A set of standard test problems, including 10^-6 amplitude linear magnetohydrodynamics waves, magnetized shock tubes, and Kelvin-Helmholtz instabilities is presented. Finally, we demonstrate good agreement with analytic predictions of linear growth rates for magnetorotational instability in a cylindrical geometry. This paper documents the Phurbas algorithm as implemented in Phurbas version 1.1.

  13. Adaptive phase-coded reconstruction for cardiac CT

    NASA Astrophysics Data System (ADS)

    Hsieh, Jiang; Mayo, John; Acharya, Kishor; Pan, Tin-Su

    2000-04-01

    Cardiac imaging with conventional computed tomography (CT) has gained significant attention in recent years. New hardware development enables a CT scanner to rotate at a faster speed so that less cardiac motion is present in acquired projection data. Many new tomographic reconstruction techniques have also been developed to reduce the artifacts induced by the cardiac motion. Most of the algorithms make use of the projection data collected over several cardiac cycles to formulate a single projection data set. Because the data set is formed with samples collected roughly in the same phase of a cardiac cycle, the temporal resolution of the newly formed data set is significantly improved compared with projections collected continuously. In this paper, we present an adaptive phase-coded reconstruction scheme (APR) for cardiac CT. Unlike the previously proposed schemes where the projection sector size is identical, APR determines each sector size based on the tomographic reconstruction algorithm. The newly proposed scheme ensures that the temporal resolution of each sector is substantially equal. In addition, the scan speed is selected based on the measured EKG signal of the patient.
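    The phase-coded idea of pooling projections acquired at the same cardiac phase across several cycles can be illustrated with a simple sketch (hypothetical helper, not the authors' APR implementation): given EKG R-peak timestamps, each projection's phase is its fractional position in the enclosing R-R interval, and only projections near a target phase are kept.

    ```python
    import numpy as np

    def select_phase_window(proj_times, r_peaks, target_phase, half_width):
        """Return indices of projections whose cardiac phase lies within
        +/- half_width of target_phase (phase in [0, 1) within the R-R
        interval). Illustrative sketch of phase-coded gating only."""
        proj_times = np.asarray(proj_times, dtype=float)
        r_peaks = np.asarray(r_peaks, dtype=float)
        # Index of the R peak that starts each projection's cardiac cycle.
        idx = np.searchsorted(r_peaks, proj_times, side='right') - 1
        valid = (idx >= 0) & (idx < len(r_peaks) - 1)
        phase = np.zeros(len(proj_times))
        rr = r_peaks[idx[valid] + 1] - r_peaks[idx[valid]]   # R-R durations
        phase[valid] = (proj_times[valid] - r_peaks[idx[valid]]) / rr
        keep = valid & (np.abs(phase - target_phase) <= half_width)
        return np.nonzero(keep)[0]
    ```

    With R peaks at t = 0, 1, 2, 3 s and projections at 0.5, 1.5, 1.9 and 2.5 s, gating on phase 0.5 with a 0.1 window keeps the three mid-diastole projections and rejects the one at phase 0.9.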

  14. The NASA-LeRC wind turbine sound prediction code

    NASA Technical Reports Server (NTRS)

    Viterna, L. A.

    1981-01-01

    Since regular operation of the DOE/NASA MOD-1 wind turbine began in October 1979, about 10 nearby households have complained of noise from the machine. Development of the NASA-LeRC wind turbine sound prediction code began in May 1980 as part of an effort to understand and reduce the noise generated by MOD-1. Tone sound levels predicted with this code are in generally good agreement with measured data taken in the vicinity of the MOD-1 wind turbine (less than 2 rotor diameters). Comparison in the far field indicates that propagation effects due to terrain and atmospheric conditions may be amplifying the actual sound levels by about 6 dB. Parametric analysis using the code has shown that the predominant contributions to MOD-1 rotor noise are: (1) the velocity deficit in the wake of the support tower; (2) the high rotor speed; and (3) off-optimum operation.

  15. The NASA-LeRC wind turbine sound prediction code

    NASA Technical Reports Server (NTRS)

    Viterna, L. A.

    1981-01-01

    Development of the wind turbine sound prediction code began as part of an effort to understand and reduce the noise generated by Mod-1. Tone sound levels predicted with this code are in good agreement with measured data taken in the vicinity of the Mod-1 wind turbine (less than 2 rotor diameters). Comparison in the far field indicates that propagation effects due to terrain and atmospheric conditions may amplify the actual sound levels by 6 dB. Parametric analysis using the code shows that the predominant contributors to Mod-1 rotor noise are (1) the velocity deficit in the wake of the support tower, (2) the high rotor speed, and (3) off-optimum operation.

  16. Fast coding unit selection method for high efficiency video coding intra prediction

    NASA Astrophysics Data System (ADS)

    Xiong, Jian

    2013-07-01

    The high efficiency video coding (HEVC) video coding standard under development can achieve higher compression performance than previous standards, such as MPEG-4, H.263, and H.264/AVC. To improve coding performance, a quad-tree coding structure and a robust rate-distortion (RD) optimization technique is used to select an optimum coding mode. Since the RD costs of all possible coding modes are computed to decide an optimum mode, high computational complexity is induced in the encoder. A fast learning-based coding unit (CU) size selection method is presented for HEVC intra prediction. The proposed algorithm is based on theoretical analysis that shows the non-normalized histogram of oriented gradient (n-HOG) can be used to help select CU size. A codebook is constructed offline by clustering n-HOGs of training sequences for each CU size. The optimum size is determined by comparing the n-HOG of the current CU with the learned codebooks. Experimental results show that the CU size selection scheme speeds up intra coding significantly with negligible loss of peak signal-to-noise ratio.
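    The n-HOG feature and codebook comparison described here can be sketched as follows (an illustrative reconstruction under stated assumptions: simple central-difference gradients, Euclidean nearest-centroid matching, and a `codebooks` dict mapping CU size to learned centroids; the paper's exact features and distance are not given in the abstract).

    ```python
    import numpy as np

    def n_hog(block, n_bins=8):
        """Non-normalized histogram of oriented gradients for one coding unit:
        bin pixels by gradient orientation in [0, pi), weight by gradient
        magnitude, and skip the usual normalization step (hence 'n-HOG')."""
        gy, gx = np.gradient(block.astype(np.float64))
        mag = np.hypot(gx, gy)
        ang = np.mod(np.arctan2(gy, gx), np.pi)
        bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
        return np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)

    def choose_cu_size(block, codebooks):
        """Pick the CU size whose offline-learned codebook holds the nearest
        centroid to the block's n-HOG (codebooks: {size: centroid array})."""
        h = n_hog(block)
        best_size, best_d = None, np.inf
        for size, centroids in codebooks.items():
            d = np.min(np.linalg.norm(np.asarray(centroids) - h, axis=1))
            if d < best_d:
                best_size, best_d = size, d
        return best_size
    ```

    A flat (textureless) block has an all-zero n-HOG, so it matches whichever codebook contains a near-zero centroid, which in training would correspond to large, smooth CUs.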

  17. Speech coding

    NASA Astrophysics Data System (ADS)

    Gersho, Allen

    1990-05-01

    Recent advances in algorithms and techniques for speech coding now permit high quality voice reproduction at remarkably low bit rates. The advent of powerful single-chip signal processors has made it cost effective to implement these new and sophisticated speech coding algorithms for many important applications in voice communication and storage. Some of the main ideas underlying the algorithms of major interest today are reviewed. The concept of removing redundancy by linear prediction is reviewed, first in the context of predictive quantization or DPCM. Then linear predictive coding, adaptive predictive coding, and vector quantization are discussed. The concepts of excitation coding via analysis-by-synthesis, vector sum excitation codebooks, and adaptive postfiltering are explained. The main ideas of vector excitation coding (VXC), or code-excited linear prediction (CELP), are presented. Finally, low-delay VXC coding and phonetic segmentation for VXC are described.
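    The predictive quantization (DPCM) concept reviewed here can be shown with a toy first-order closed-loop coder (a minimal sketch; the predictor coefficient `a` and step size are illustrative, not from the source): the encoder predicts each sample from the previous reconstruction, quantizes only the prediction error, and feeds the quantized error back so encoder and decoder predictors stay in lockstep.

    ```python
    def dpcm_encode(x, step, a=0.9):
        """Toy first-order DPCM encoder: quantize prediction errors against
        a * (previous reconstruction), closed-loop so errors cannot drift."""
        recon_prev = 0.0
        codes = []
        for s in x:
            pred = a * recon_prev
            q = int(round((s - pred) / step))   # quantized prediction error
            codes.append(q)
            recon_prev = pred + q * step        # decoder-matched reconstruction
        return codes

    def dpcm_decode(codes, step, a=0.9):
        """Matching decoder: run the same predictor on the quantized errors."""
        recon_prev = 0.0
        out = []
        for q in codes:
            recon_prev = a * recon_prev + q * step
            out.append(recon_prev)
        return out
    ```

    Because the loop is closed around the quantizer, each reconstructed sample differs from the input by at most half a quantization step, rather than accumulating error over time.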

  18. Genome-environment associations in sorghum landraces predict adaptive traits.

    PubMed

    Lasky, Jesse R; Upadhyaya, Hari D; Ramu, Punna; Deshpande, Santosh; Hash, C Tom; Bonnette, Jason; Juenger, Thomas E; Hyma, Katie; Acharya, Charlotte; Mitchell, Sharon E; Buckler, Edward S; Brenton, Zachary; Kresovich, Stephen; Morris, Geoffrey P

    2015-07-01

    Improving environmental adaptation in crops is essential for food security under global change, but phenotyping adaptive traits remains a major bottleneck. If associations between single-nucleotide polymorphism (SNP) alleles and environment of origin in crop landraces reflect adaptation, then these could be used to predict phenotypic variation for adaptive traits. We tested this proposition in the global food crop Sorghum bicolor, characterizing 1943 georeferenced landraces at 404,627 SNPs and quantifying allelic associations with bioclimatic and soil gradients. Environment explained a substantial portion of SNP variation, independent of geographical distance, and genic SNPs were enriched for environmental associations. Further, environment-associated SNPs predicted genotype-by-environment interactions under experimental drought stress and aluminum toxicity. Our results suggest that genomic signatures of environmental adaptation may be useful for crop improvement, enhancing germplasm identification and marker-assisted selection. Together, genome-environment associations and phenotypic analyses may reveal the basis of environmental adaptation. PMID:26601206

  19. Genome-environment associations in sorghum landraces predict adaptive traits

    PubMed Central

    Lasky, Jesse R.; Upadhyaya, Hari D.; Ramu, Punna; Deshpande, Santosh; Hash, C. Tom; Bonnette, Jason; Juenger, Thomas E.; Hyma, Katie; Acharya, Charlotte; Mitchell, Sharon E.; Buckler, Edward S.; Brenton, Zachary; Kresovich, Stephen; Morris, Geoffrey P.

    2015-01-01

    Improving environmental adaptation in crops is essential for food security under global change, but phenotyping adaptive traits remains a major bottleneck. If associations between single-nucleotide polymorphism (SNP) alleles and environment of origin in crop landraces reflect adaptation, then these could be used to predict phenotypic variation for adaptive traits. We tested this proposition in the global food crop Sorghum bicolor, characterizing 1943 georeferenced landraces at 404,627 SNPs and quantifying allelic associations with bioclimatic and soil gradients. Environment explained a substantial portion of SNP variation, independent of geographical distance, and genic SNPs were enriched for environmental associations. Further, environment-associated SNPs predicted genotype-by-environment interactions under experimental drought stress and aluminum toxicity. Our results suggest that genomic signatures of environmental adaptation may be useful for crop improvement, enhancing germplasm identification and marker-assisted selection. Together, genome-environment associations and phenotypic analyses may reveal the basis of environmental adaptation. PMID:26601206

  20. Reflections on agranular architecture: predictive coding in the motor cortex

    PubMed Central

    Shipp, Stewart; Adams, Rick A.; Friston, Karl J.

    2013-01-01

    The agranular architecture of motor cortex lacks a functional interpretation. Here, we consider a ‘predictive coding’ account of this unique feature based on asymmetries in hierarchical cortical connections. In sensory cortex, layer 4 (the granular layer) is the target of ascending pathways. We theorise that the operation of predictive coding in the motor system (a process termed ‘active inference’) provides a principled rationale for the apparent recession of the ascending pathway in motor cortex. The extension of this theory to interlaminar circuitry also accounts for a sub-class of ‘mirror neuron’ in motor cortex – whose activity is suppressed when observing an action – explaining how predictive coding can gate hierarchical processing to switch between perception and action. PMID:24157198

  1. Adaptive Source Coding Schemes for Geometrically Distributed Integer Alphabets

    NASA Technical Reports Server (NTRS)

    Cheung, K-M.; Smyth, P.

    1993-01-01

    We revisit the Gallager and van Voorhis optimal source coding scheme for geometrically distributed non-negative integer alphabets and show that the various subcodes in the popular Rice algorithm can be derived from the Gallager and van Voorhis code.
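    The Rice subcodes referred to are the Golomb codes with power-of-two parameter m = 2^k: a codeword is the unary-coded quotient n >> k followed by the k low-order bits of n. A minimal sketch:

    ```python
    def rice_encode(n, k):
        """Rice code of parameter k for a non-negative integer n: unary
        quotient (terminated by '0') followed by the k-bit remainder."""
        q, r = n >> k, n & ((1 << k) - 1)
        bits = '1' * q + '0'
        if k:
            bits += format(r, '0%db' % k)
        return bits

    def rice_decode(bits, k):
        """Invert rice_encode for a single codeword."""
        q = 0
        while bits[q] == '1':      # count unary ones
            q += 1
        i = q + 1                  # skip the terminating '0'
        r = int(bits[i:i + k], 2) if k else 0
        return (q << k) | r
    ```

    For example, n = 9 with k = 2 splits into quotient 2 and remainder 1, giving the codeword '11001'. Because codeword lengths fall off geometrically in n, Rice codes are optimal for geometric sources when k is matched to the source mean.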

  2. TAS: A Transonic Aircraft/Store flow field prediction code

    NASA Technical Reports Server (NTRS)

    Thompson, D. S.

    1983-01-01

    A numerical procedure has been developed that has the capability to predict the transonic flow field around an aircraft with an arbitrarily located, separated store. The TAS code, the product of a joint General Dynamics/NASA ARC/AFWAL research and development program, will serve as the basis for a comprehensive predictive method for aircraft with arbitrary store loadings. This report describes the numerical procedures employed to simulate the flow field around a configuration of this type. The validity of TAS code predictions is established by comparison with existing experimental data. In addition, future areas of development of the code are outlined. A brief description of code utilization is also given in the Appendix. The aircraft/store configuration is simulated using a mesh embedding approach. The computational domain is discretized by three meshes: (1) a planform-oriented wing/body fine mesh, (2) a cylindrical store mesh, and (3) a global Cartesian crude mesh. This embedded mesh scheme enables simulation of stores with fins of arbitrary angular orientation.

  3. An integrative approach to predicting the functional effects of non-coding and coding sequence variation

    PubMed Central

    Shihab, Hashem A.; Rogers, Mark F.; Gough, Julian; Mort, Matthew; Cooper, David N.; Day, Ian N. M.; Gaunt, Tom R.; Campbell, Colin

    2015-01-01

    Motivation: Technological advances have enabled the identification of an increasingly large spectrum of single nucleotide variants within the human genome, many of which may be associated with monogenic disease or complex traits. Here, we propose an integrative approach, named FATHMM-MKL, to predict the functional consequences of both coding and non-coding sequence variants. Our method utilizes various genomic annotations, which have recently become available, and learns to weight the significance of each component annotation source. Results: We show that our method outperforms current state-of-the-art algorithms, CADD and GWAVA, when predicting the functional consequences of non-coding variants. In addition, FATHMM-MKL is comparable to the best of these algorithms when predicting the impact of coding variants. The method includes a confidence measure to rank order predictions. Availability and implementation: The FATHMM-MKL webserver is available at: http://fathmm.biocompute.org.uk Contact: H.Shihab@bristol.ac.uk or Mark.Rogers@bristol.ac.uk or C.Campbell@bristol.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25583119

  4. Prediction of the space adaptation syndrome

    NASA Technical Reports Server (NTRS)

    Reschke, M. F.; Homick, J. L.; Ryan, P.; Moseley, E. C.

    1984-01-01

    The univariate and multivariate relationships of provocative measures used to produce motion sickness symptoms were described. Normative subjects were used to develop and cross-validate sets of linear equations that optimally predict motion sickness in parabolic flights. The possibility of reducing the number of measurements required for prediction was assessed. After describing the variables verbally and statistically for 159 subjects, a factor analysis of 27 variables was completed to improve understanding of the relationships between variables and to reduce the number of measures for prediction purposes. The results of this analysis show that none of the variables is significantly related to the responses to parabolic flights. A set of variables was selected to predict responses to KC-135 flights. A series of discriminant analyses was completed. Results indicate that low, moderate, or severe susceptibility could be correctly predicted 64 percent and 53 percent of the time on the original and cross-validation samples, respectively. Both the factor analysis and the discriminant analysis provided no basis for reducing the number of tests.

  5. Efficient Prediction Structures for H.264 Multi View Coding Using Temporal Scalability

    NASA Astrophysics Data System (ADS)

    Guruvareddiar, Palanivel; Joseph, Biju K.

    2014-03-01

    Prediction structures with "disposable view components based" hierarchical coding have been proven to be efficient for H.264 multi view coding. Though these prediction structures along with the QP cascading schemes provide superior compression efficiency when compared to the traditional IBBP coding scheme, the temporal scalability requirements of the bit stream could not be met to the fullest. On the other hand, a fully scalable bit stream, obtained by "temporal identifier based" hierarchical coding, provides a number of advantages including bit rate adaptation and improved error resilience, but falls short in compression efficiency when compared to the former scheme. In this paper it is proposed to combine the two approaches such that a fully scalable bit stream can be realized with minimal reduction in compression efficiency when compared to state-of-the-art "disposable view components based" hierarchical coding. Simulation results show that the proposed method enables full temporal scalability with a maximum BDPSNR reduction of only 0.34 dB. A novel method has also been proposed for the identification of the temporal identifier for legacy H.264/AVC base layer packets. Simulation results also show that this enables a scenario where the enhancement views can be extracted at a lower frame rate (one-half or one-quarter of the base view) with an average extraction time per view component of only 0.38 ms.

  6. Deficits in Predictive Coding Underlie Hallucinations in Schizophrenia

    PubMed Central

    Schatz, Kelly C.; Abi-Dargham, Anissa

    2014-01-01

    The neural mechanisms that produce hallucinations and other psychotic symptoms remain unclear. Previous research suggests that deficits in predictive signals for learning, such as prediction error signals, may underlie psychotic symptoms, but the mechanism by which such deficits produce psychotic symptoms remains to be established. We used model-based fMRI to study sensory prediction errors in human patients with schizophrenia who report daily auditory verbal hallucinations (AVHs) and sociodemographically matched healthy control subjects. We manipulated participants' expectations for hearing speech at different periods within a speech decision-making task. Patients activated a voice-sensitive region of the auditory cortex while they experienced AVHs in the scanner and displayed a concomitant deficit in prediction error signals in a similar portion of auditory cortex. This prediction error deficit correlated strongly with increased activity during silence and with reduced volumes of the auditory cortex, two established neural phenotypes of AVHs. Furthermore, patients with more severe AVHs had more deficient prediction error signals and greater activity during silence within the region of auditory cortex where groups differed, regardless of the severity of psychotic symptoms other than AVHs. Our findings suggest that deficient predictive coding accounts for the resting hyperactivity in sensory cortex that leads to hallucinations. PMID:24920613

  7. Biocomputational prediction of small non-coding RNAs in Streptomyces

    PubMed Central

    Pánek, Josef; Bobek, Jan; Mikulík, Karel; Basler, Marek; Vohradský, Jiří

    2008-01-01

    Background The first systematic study of small non-coding RNAs (sRNA, ncRNA) in Streptomyces is presented. With a few exceptions, the Streptomyces sRNAs, as well as the sRNAs in other genera of the Actinomyces group, have remained unstudied. This study was based on sequence conservation in intergenic regions of Streptomyces, localization of transcription termination factors, and genomic arrangement of genes flanking the predicted sRNAs. Results Thirty-two potential sRNAs in Streptomyces were predicted. Of these, expression of 20 was detected by microarrays and RT-PCR. The prediction was validated by a structure based computational approach. Two predicted sRNAs were found to be terminated by transcription termination factors different from the Rho-independent terminators. One predicted sRNA was identified computationally with high probability as a Streptomyces 6S RNA. Out of the 32 predicted sRNAs, 24 were found to be structurally dissimilar from known sRNAs. Conclusion Streptomyces is the largest genus of Actinomyces, whose sRNAs have not been studied. The Actinomyces is a group of bacterial species with unique genomes and phenotypes. Therefore, in Actinomyces, new unique bacterial sRNAs may be identified. The sequence and structural dissimilarity of the predicted Streptomyces sRNAs demonstrated by this study serve as the first evidence of the uniqueness of Actinomyces sRNAs. PMID:18477385

  8. Spatiotemporal Spike Coding of Behavioral Adaptation in the Dorsal Anterior Cingulate Cortex

    PubMed Central

    Logiaco, Laureline; Quilodran, René; Procyk, Emmanuel; Arleo, Angelo

    2015-01-01

    The frontal cortex controls behavioral adaptation in environments governed by complex rules. Many studies have established the relevance of firing rate modulation after informative events signaling whether and how to update the behavioral policy. However, whether the spatiotemporal features of these neuronal activities contribute to encoding imminent behavioral updates remains unclear. We investigated this issue in the dorsal anterior cingulate cortex (dACC) of monkeys while they adapted their behavior based on their memory of feedback from past choices. We analyzed spike trains of both single units and pairs of simultaneously recorded neurons using an algorithm that emulates different biologically plausible decoding circuits. This method permits the assessment of the performance of both spike-count and spike-timing sensitive decoders. In response to the feedback, single neurons emitted stereotypical spike trains whose temporal structure identified informative events with higher accuracy than mere spike count. The optimal decoding time scale was in the range of 70–200 ms, which is significantly shorter than the memory time scale required by the behavioral task. Importantly, the temporal spiking patterns of single units were predictive of the monkeys’ behavioral response time. Furthermore, some features of these spiking patterns often varied between jointly recorded neurons. Altogether, our results suggest that dACC drives behavioral adaptation through complex spatiotemporal spike coding. They also indicate that downstream networks, which decode dACC feedback signals, are unlikely to act as mere neural integrators. PMID:26266537

  10. AGR-1 Safety Test Predictions using the PARFUME code

    SciTech Connect

    Blaise Collin

    2012-05-01

    The PARFUME modeling code was used to predict failure probability of TRISO-coated fuel particles and diffusion of fission products through these particles during safety tests following the first irradiation test of the Advanced Gas Reactor program (AGR-1). These calculations support the AGR-1 Safety Testing Experiment, which is part of the PIE effort on AGR-1. Modeling of the AGR-1 Safety Test Predictions includes a 620-day irradiation followed by a 300-hour heat-up phase of selected AGR-1 compacts. Results include fuel failure probability, palladium penetration, and fractional release of fission products. Results show that no particle failure is predicted during irradiation or heat-up, and that fractional release of fission products is limited during irradiation but increases significantly during heat-up.

  11. Sonic boom predictions using a modified Euler code

    NASA Astrophysics Data System (ADS)

    Siclari, Michael J.

    1992-04-01

    The environmental impact of a next generation fleet of high-speed civil transports (HSCT) is of great concern in the evaluation of the commercial development of such a transport. One of the potential environmental impacts of a high speed civilian transport is the sonic boom generated by the aircraft and its effects on the population, wildlife, and structures in the vicinity of its flight path. If an HSCT aircraft is restricted from flying overland routes due to excessive booms, the commercial feasibility of such a venture may be questionable. NASA has taken the lead in evaluating and resolving the issues surrounding the development of a high speed civilian transport through its High-Speed Research Program (HSRP). The present paper discusses the use of a Computational Fluid Dynamics (CFD) nonlinear code in predicting the pressure signature and ultimately the sonic boom generated by a high speed civilian transport. NASA had designed, built, and wind tunnel tested two low boom configurations for flight at Mach 2 and Mach 3. Experimental data were taken at several distances from these models up to a body length from the axis of the aircraft. The near field experimental data serve as a test bed for computational fluid dynamic codes in evaluating their accuracy and reliability for predicting the behavior of future HSCT designs. Sonic boom prediction methodology exists which is based on modified linear theory. These methods can be used reliably if near field signatures are available at distances from the aircraft where nonlinear and three dimensional effects have diminished in importance. Up to the present time, the only reliable method to obtain this data was via the wind tunnel with costly model construction and testing. It is the intent of the present paper to apply a modified three dimensional Euler code to predict the near field signatures of the two low boom configurations recently tested by NASA.

  13. Lossless compression of hyperspectral images using conventional recursive least-squares predictor with adaptive prediction bands

    NASA Astrophysics Data System (ADS)

    Gao, Fang; Guo, Shuxu

    2016-01-01

    An efficient lossless compression scheme for hyperspectral images using conventional recursive least-squares (CRLS) predictor with adaptive prediction bands is proposed. The proposed scheme first calculates the preliminary estimates to form the input vector of the CRLS predictor. Then the number of bands used in prediction is adaptively selected by an exhaustive search for the number that minimizes the prediction residual. Finally, after prediction, the prediction residuals are sent to an adaptive arithmetic coder. Experiments on the newer airborne visible/infrared imaging spectrometer (AVIRIS) images in the consultative committee for space data systems (CCSDS) test set show that the proposed scheme yields an average compression performance of 3.29 (bits/pixel), 5.57 (bits/pixel), and 2.44 (bits/pixel) on the 16-bit calibrated images, the 16-bit uncalibrated images, and the 12-bit uncalibrated images, respectively. Experimental results demonstrate that the proposed scheme obtains compression results very close to clustered differential pulse code modulation with adaptive prediction length (C-DPCM-APL), which achieves the best lossless compression performance for AVIRIS images in the CCSDS test set, and outperforms other current state-of-the-art schemes with relatively low computational complexity.
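
    The recursive least-squares recursion at the heart of such predictors can be sketched in pure Python. This is a minimal sketch of the standard RLS update only, not the paper's CRLS scheme: the preliminary-estimate stage, the exhaustive band-count search, and the arithmetic coder are omitted, and the dimensions, forgetting factor, and synthetic inter-band relation below are illustrative assumptions.

```python
import random

def make_rls(dim, lam=0.98, delta=100.0):
    """Standard recursive least-squares: maintains a weight vector w and
    an inverse-correlation matrix P; step() consumes one (x, d) pair and
    returns the a-priori prediction error (the residual to be coded)."""
    w = [0.0] * dim
    P = [[delta if i == j else 0.0 for j in range(dim)] for i in range(dim)]

    def step(x, d):
        # gain vector k = P x / (lam + x^T P x)
        Px = [sum(P[i][j] * x[j] for j in range(dim)) for i in range(dim)]
        denom = lam + sum(x[i] * Px[i] for i in range(dim))
        k = [v / denom for v in Px]
        e = d - sum(w[i] * x[i] for i in range(dim))  # prediction residual
        for i in range(dim):
            w[i] += k[i] * e
        # P <- (P - k x^T P) / lam  (x^T P equals Px since P is symmetric)
        for i in range(dim):
            for j in range(dim):
                P[i][j] = (P[i][j] - k[i] * Px[j]) / lam
        return e

    return step, w

# Toy usage: predict a pixel from its co-located values in 2 previous bands.
random.seed(0)
step, w = make_rls(dim=2)
for _ in range(300):
    x = [random.random(), random.random()]
    d = 0.6 * x[0] + 0.3 * x[1]   # synthetic, noiseless inter-band relation
    e = step(x, d)
```

    After a few hundred samples the weights converge to the underlying relation and the residuals shrink toward zero, which is what makes the residual stream cheap to entropy-code.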

  14. Near-lossless image compression by adaptive prediction: new developments and comparison of algorithms

    NASA Astrophysics Data System (ADS)

    Aiazzi, Bruno; Alparone, Luciano; Baronti, Stefano

    2003-01-01

    This paper describes state-of-the-art approaches to near-lossless image compression by adaptive causal DPCM and presents two advanced schemes based on crisp and fuzzy switching of predictors, respectively. The former relies on a linear-regression prediction in which a different predictor is employed for each image block. Such block-representative predictors are calculated from the original data set through an iterative relaxation-labeling procedure. Coding times are affordable thanks to the fast convergence of training. Decoding is always performed in real time. The latter is still based on adaptive MMSE prediction in which a different predictor at each pixel position is achieved by blending a number of prototype predictors through adaptive weights calculated from the past decoded samples. Quantization error feedback loops are introduced into the basic lossless encoders to enable user-defined upper-bounded reconstruction errors. Both schemes exploit context modeling of prediction errors followed by arithmetic coding to enhance entropy coding performance. A thorough performance comparison on a wide test image set shows the superiority of the proposed schemes over both up-to-date encoders in the literature and new/upcoming standards.
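
    The quantization-error feedback loop that yields a user-defined error bound can be sketched in a few lines: the encoder predicts from previously *decoded* samples rather than the originals, and a uniform quantizer with step 2δ+1 guarantees |x − x̂| ≤ δ for every sample. This is a minimal 1-D sketch with a previous-sample predictor; the adaptive block/pixel predictors and context modeling of the actual schemes are omitted.

```python
def encode(signal, delta):
    """Near-lossless DPCM: quantize residuals with step 2*delta + 1 and
    feed the decoded value back into the predictor, so quantization
    errors cannot accumulate and |x - x_rec| <= delta for every sample."""
    step = 2 * delta + 1
    indices, prev = [], 0
    for x in signal:
        e = x - prev                       # residual against decoded past
        # round residual to the nearest multiple of `step`
        q = (e + delta) // step if e >= 0 else -((-e + delta) // step)
        indices.append(q)
        prev = prev + q * step             # decoder-side reconstruction
    return indices

def decode(indices, delta):
    step, prev, out = 2 * delta + 1, 0, []
    for q in indices:
        prev += q * step
        out.append(prev)
    return out
```

    With delta = 0 the step is 1 and the scheme degenerates to lossless DPCM; any larger delta trades a strictly bounded per-sample error for smaller residual magnitudes.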

  15. Capacity achieving nonbinary LDPC coded non-uniform shaping modulation for adaptive optical communications.

    PubMed

    Lin, Changyu; Zou, Ding; Liu, Tao; Djordjevic, Ivan B

    2016-08-01

    A mutual information inspired nonbinary coded modulation design with non-uniform shaping is proposed. Instead of traditional power-of-two signal constellation sizes, we design 5-QAM, 7-QAM and 9-QAM constellations, which can be used in adaptive optical networks. The non-uniform shaping and LDPC code rate are jointly considered in the design, which yields a better-performing scheme at the same SNR values. The matched nonbinary (NB) LDPC code is used for this scheme, which further improves the coding gain and the overall performance. We analyze both coding performance and system SNR performance. We show that the proposed NB LDPC-coded 9-QAM has more than 2 dB gain in symbol SNR compared to traditional LDPC-coded star-8-QAM. On the other hand, the proposed NB LDPC-coded 5-QAM and 7-QAM have even better performance than LDPC-coded QPSK. PMID:27505775

  16. Development of a massively parallel parachute performance prediction code

    SciTech Connect

    Peterson, C.W.; Strickland, J.H.; Wolfe, W.P.; Sundberg, W.D.; McBride, D.D.

    1997-04-01

    The Department of Energy has given Sandia full responsibility for the complete life cycle (cradle to grave) of all nuclear weapon parachutes. Sandia National Laboratories is initiating development of a complete numerical simulation of parachute performance, beginning with parachute deployment and continuing through inflation and steady state descent. The purpose of the parachute performance code is to predict the performance of stockpile weapon parachutes as these parachutes continue to age well beyond their intended service life. A new massively parallel computer will provide unprecedented speed and memory for solving this complex problem, and new software will be written to treat the coupled fluid, structure and trajectory calculations as part of a single code. Verification and validation experiments have been proposed to provide the necessary confidence in the computations.

  17. Interpersonal predictive coding, not action perception, is impaired in autism

    PubMed Central

    von der Lühe, T.; Manera, V.; Barisic, I.; Becchio, C.; Vogeley, K.

    2016-01-01

    This study was conducted to examine interpersonal predictive coding in individuals with high-functioning autism (HFA). Healthy and HFA participants observed point-light displays of two agents (A and B) performing separate actions. In the ‘communicative’ condition, the action performed by agent B responded to a communicative gesture performed by agent A. In the ‘individual’ condition, agent A's communicative action was substituted by a non-communicative action. Using a simultaneous masking-detection task, we demonstrate that observing agent A's communicative gesture enhanced visual discrimination of agent B for healthy controls, but not for participants with HFA. These results were not explained by differences in attentional factors as measured via eye-tracking, or by differences in the recognition of the point-light actions employed. Our findings, therefore, suggest that individuals with HFA are impaired in the use of social information to predict others' actions and provide behavioural evidence that such deficits could be closely related to impairments of predictive coding. PMID:27069050

  18. Adaptations in a Community-Based Family Intervention: Replication of Two Coding Schemes.

    PubMed

    Cooper, Brittany Rhoades; Shrestha, Gitanjali; Hyman, Leah; Hill, Laura

    2016-02-01

    Although program adaptation is a reality in community-based implementations of evidence-based programs, much of the discussion about adaptation remains theoretical. The primary aim of this study was to replicate two coding systems to examine adaptations in large-scale, community-based disseminations of the Strengthening Families Program for Parents and Youth 10-14, a family-based substance use prevention program. Our second aim was to explore intersections between various dimensions of facilitator-reported adaptations from these two coding systems. Our results indicate that only a few types of adaptations and a few reasons accounted for a majority (over 70%) of all reported adaptations. We also found that most adaptations were logistical, reactive, and not aligned with the program's goals. In many ways, our findings replicate those of the original studies, suggesting the two coding systems are robust even when applied to self-reported data collected from community-based implementations. Our findings on the associations between adaptation dimensions can inform future studies assessing the relationship between adaptations and program outcomes. Studies of local adaptations, like the present one, should help researchers, program developers, and policymakers better understand the issues faced by implementers and guide efforts related to program development, transferability, and sustainability. PMID:26661413

  19. Structured Set Intra Prediction With Discriminative Learning in a Max-Margin Markov Network for High Efficiency Video Coding

    PubMed Central

    Dai, Wenrui; Xiong, Hongkai; Jiang, Xiaoqian; Chen, Chang Wen

    2014-01-01

    This paper proposes a novel model on intra coding for High Efficiency Video Coding (HEVC), which simultaneously predicts blocks of pixels with optimal rate distortion. It utilizes the spatial statistical correlation for the optimal prediction based on 2-D contexts, in addition to formulating the data-driven structural interdependences to make the prediction error coherent with the probability distribution, which is desirable for successful transform and coding. The structured set prediction model incorporates a max-margin Markov network (M3N) to regulate and optimize multiple block predictions. The model parameters are learned by discriminating the actual pixel value from other possible estimates to maximize the margin (i.e., decision boundary bandwidth). Compared to existing methods that focus on minimizing prediction error, the M3N-based model adaptively maintains the coherence for a set of predictions. Specifically, the proposed model concurrently optimizes a set of predictions by associating the loss for individual blocks to the joint distribution of succeeding discrete cosine transform coefficients. When the sample size grows, the prediction error is asymptotically upper bounded by the training error under the decomposable loss function. As an internal step, we optimize the underlying Markov network structure to find states that achieve the maximal energy using expectation propagation. For validation, we integrate the proposed model into HEVC for optimal mode selection on rate-distortion optimization. The proposed prediction model obtains up to 2.85% bit rate reduction and achieves better visual quality in comparison to the HEVC intra coding. PMID:25505829

  20. Predicting foreign-accent adaptation in older adults.

    PubMed

    Janse, Esther; Adank, Patti

    2012-01-01

    We investigated comprehension of and adaptation to speech in an unfamiliar accent in older adults. Participants performed a speeded sentence verification task for accented sentences: one group upon auditory-only presentation, and the other group upon audiovisual presentation. Our questions were whether audiovisual presentation would facilitate adaptation to the novel accent, and which cognitive and linguistic measures would predict adaptation. Participants were therefore tested on a range of background tests: hearing acuity, auditory verbal short-term memory, working memory, attention-switching control, selective attention, and vocabulary knowledge. Both auditory-only and audiovisual groups showed improved accuracy and decreasing response times over the course of the experiment, effectively showing accent adaptation. Even though the total amount of improvement was similar for the auditory-only and audiovisual groups, the initial rate of adaptation was faster in the audiovisual group. Hearing sensitivity and short-term and working memory measures were associated with efficient processing of the novel accent. Analysis of the relationship between accent comprehension and the background tests further revealed that selective attention and vocabulary size predicted the amount of adaptation over the course of the experiment. These results suggest that vocabulary knowledge and attentional abilities facilitate the attention-shifting strategies proposed to be required for perceptual learning. PMID:22530648

  1. Predicting Adaptive Behavior from the Bayley Scales of Infant Development.

    ERIC Educational Resources Information Center

    Hotard, Stephen; McWhirter, Richard

    To examine the proportion of variance in adaptive functioning predictable from mental ability, chronological age, I.Q., evidence of brain malfunction, seizure medication, and receptive and expressive language scores, 25 severely and profoundly retarded institutionalized persons (2-19 years old) were administered the Bayley Infant Scale Mental…

  2. ADAPTION OF NONSTANDARD PIPING COMPONENTS INTO PRESENT DAY SEISMIC CODES

    SciTech Connect

    D. T. Clark; M. J. Russell; R. E. Spears; S. R. Jensen

    2009-07-01

    With spiraling energy demand and flat energy supply, there is a need to extend the life of older nuclear reactors. This sometimes requires that existing systems be evaluated to present day seismic codes. Older reactors built in the 1960s and early 1970s often used fabricated piping components that were code compliant during their initial construction time period, but are outside the standard parameters of present-day piping codes. There are several approaches available to the analyst in evaluating these non-standard components to modern codes. The simplest approach is to use the flexibility factors and stress indices for similar standard components with the assumption that the non-standard component’s flexibility factors and stress indices will be very similar. This approach can require significant engineering judgment. A more rational approach available in Section III of the ASME Boiler and Pressure Vessel Code, which is the subject of this paper, involves calculation of flexibility factors using finite element analysis of the non-standard component. Such analysis allows modeling of geometric and material nonlinearities. Flexibility factors based on these analyses are sensitive to the load magnitudes used in their calculation, load magnitudes that need to be consistent with those produced by the linear system analyses where the flexibility factors are applied. This can lead to iteration, since the magnitude of the loads produced by the linear system analysis depend on the magnitude of the flexibility factors. After the loading applied to the nonstandard component finite element model has been matched to loads produced by the associated linear system model, the component finite element model can then be used to evaluate the performance of the component under the loads with the nonlinear analysis provisions of the Code, should the load levels lead to calculated stresses in excess of Allowable stresses. This paper details the application of component-level finite

  3. The predictive roles of neural oscillations in speech motor adaptability.

    PubMed

    Sengupta, Ranit; Nasir, Sazzad M

    2016-06-01

    The human speech system exhibits a remarkable flexibility by adapting to alterations in speaking environments. While it is believed that speech motor adaptation under altered sensory feedback involves rapid reorganization of speech motor networks, the mechanisms by which different brain regions communicate and coordinate their activity to mediate adaptation remain unknown, and explanations of outcome differences in adaptation remain largely elusive. In this study, under the paradigm of altered auditory feedback with continuous EEG recordings, the differential roles of oscillatory neural processes in motor speech adaptability were investigated. The predictive capacities of different EEG frequency bands were assessed, and it was found that theta-, beta-, and gamma-band activities during speech planning and production contained significant and reliable information about motor speech adaptability. It was further observed that these bands do not work independently but interact with each other, suggesting an underlying brain network operating across hierarchically organized frequency bands to support motor speech adaptation. These results provide novel insights into both learning and disorders of speech using time-frequency analysis of neural oscillations. PMID:26936976

  4. DCT/DST-based transform coding for intra prediction in image/video coding.

    PubMed

    Saxena, Ankur; Fernandes, Felix C

    2013-10-01

    In this paper, we present a DCT/DST-based transform scheme that applies either the conventional DCT or type-7 DST for all the video-coding intra-prediction modes: vertical, horizontal, and oblique. Our approach is applicable to any block-based intra prediction scheme in a codec that employs transforms along the horizontal and vertical direction separably. Previously, Han, Saxena, and Rose showed that for the intra-predicted residuals of horizontal and vertical modes, the DST is the optimal transform with performance close to the KLT. Here, we prove that this is indeed the case for the other oblique modes. The optimal choice of using DCT or DST is based on intra-prediction modes and requires no additional signaling information or rate-distortion search. The DCT/DST scheme presented in this paper was adopted in the HEVC standardization in March 2011. Further simplifications, especially to reduce implementation complexity, which remove the mode-dependency between DCT and DST, and simply always use DST for the 4 × 4 intra luma blocks, were adopted in the HEVC standard in July 2012. Simulations of the DCT/DST algorithm were conducted in the reference software for the ongoing HEVC standardization. Our results show that the DCT/DST scheme provides significant BD-rate improvement over the conventional DCT-based scheme for intra prediction in video sequences. PMID:23744679
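
    The mode-dependent transform choice can be illustrated with the textbook floating-point DCT-II and DST-VII matrices (HEVC uses integer approximations of these). The mapping below follows the intuition the paper builds on: prediction from the top row leaves a residual that grows downward, which DST-VII basis functions (small at the predicted boundary, growing away from it) match along the columns. The function names and the mode labels are illustrative, not taken from the standard.

```python
import math

def dct2_matrix(n):
    """Orthonormal DCT-II basis; row k is the k-th basis vector."""
    m = []
    for k in range(n):
        c = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        m.append([c * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                  for i in range(n)])
    return m

def dst7_matrix(n):
    """Orthonormal DST-VII basis (float version of HEVC's 4x4 DST)."""
    s = 2.0 / math.sqrt(2 * n + 1)
    return [[s * math.sin(math.pi * (i + 1) * (2 * k + 1) / (2 * n + 1))
             for i in range(n)] for k in range(n)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(r) for r in zip(*a)]

def intra_transform(block, mode):
    """Mode-dependent separable transform Y = V X H^T: the direction
    along which the residual grows away from the predictor gets DST-VII,
    the other direction the conventional DCT-II."""
    n = len(block)
    dct, dst = dct2_matrix(n), dst7_matrix(n)
    v = dst if mode == 'vertical' else dct     # column transform
    h = dst if mode == 'horizontal' else dct   # row transform
    return matmul(matmul(v, block), transpose(h))
```

    Because both matrices are orthonormal, the inverse is just the transposes applied in reverse order, so no signaling beyond the intra mode itself is needed to pick the right transform at the decoder.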

  5. Wavelet based ECG compression with adaptive thresholding and efficient coding.

    PubMed

    Alshamali, A

    2010-01-01

    This paper proposes a new wavelet-based ECG compression technique. It is based on optimized thresholds to determine significant wavelet coefficients and an efficient coding for their positions. Huffman encoding is used to enhance the compression ratio. The proposed technique is tested using several records taken from the MIT-BIH arrhythmia database. Simulation results show that the proposed technique outperforms others obtained by previously published schemes. PMID:20608811
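
    The pipeline the abstract describes, wavelet transform, thresholding of insignificant coefficients, then Huffman coding, can be sketched with a one-level Haar transform standing in for the wavelet actually used (the paper's optimized thresholds and position-coding scheme are not reproduced; the threshold value and symbol alphabet below are illustrative).

```python
import heapq

def haar_forward(x):
    """One level of the orthonormal Haar transform; returns
    (approximation, detail) coefficient lists for an even-length signal."""
    r = 2 ** -0.5
    a = [r * (x[2 * i] + x[2 * i + 1]) for i in range(len(x) // 2)]
    d = [r * (x[2 * i] - x[2 * i + 1]) for i in range(len(x) // 2)]
    return a, d

def haar_inverse(a, d):
    r = 2 ** -0.5
    out = []
    for ai, di in zip(a, d):
        out += [r * (ai + di), r * (ai - di)]
    return out

def threshold(coeffs, t):
    """Hard thresholding: zero the insignificant detail coefficients."""
    return [c if abs(c) >= t else 0.0 for c in coeffs]

def huffman_lengths(freqs):
    """Huffman code lengths for a symbol->frequency dict, computed by
    repeatedly merging the two lightest subtrees on a heap."""
    if len(freqs) == 1:
        return {s: 1 for s in freqs}
    lengths = {s: 0 for s in freqs}
    heap = [(f, i, [s]) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, s1 = heapq.heappop(heap)
        f2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:          # every symbol in a merged subtree
            lengths[s] += 1        # moves one level deeper in the tree
        tie += 1
        heapq.heappush(heap, (f1 + f2, tie, s1 + s2))
    return lengths
```

    Zeroing a Haar detail coefficient of magnitude below t perturbs each of its two samples by at most t/√2, so the reconstruction error stays bounded by the threshold while the run of zeros compresses cheaply.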

  6. Adaptive face space coding in congenital prosopagnosia: typical figural aftereffects but abnormal identity aftereffects.

    PubMed

    Palermo, Romina; Rivolta, Davide; Wilson, C Ellie; Jeffery, Linda

    2011-12-01

    People with congenital prosopagnosia (CP) report difficulty recognising faces in everyday life and perform poorly on face recognition tests. Here, we investigate whether impaired adaptive face space coding might contribute to poor face recognition in CP. To pinpoint how adaptation may affect face processing, a group of CPs and matched controls completed two complementary face adaptation tasks: the figural aftereffect, which reflects adaptation to general distortions of shape, and the identity aftereffect, which directly taps the mechanisms involved in the discrimination of different face identities. CPs displayed a typical figural aftereffect, consistent with evidence that they are able to process some shape-based information from faces, e.g., cues to discriminate sex. CPs also demonstrated a significant identity aftereffect. However, unlike controls, CPs' impression of the identity of the neutral average face was not significantly shifted by adaptation, suggesting that adaptive coding of identity is abnormal in CP. In sum, CPs show reduced aftereffects but only when the task directly taps the use of face norms used to code individual identity. This finding of a reduced face identity aftereffect in individuals with severe face recognition problems is consistent with suggestions that adaptive coding may have a functional role in face recognition. PMID:21986295

  7. 30 Mbit/s codec for the NTSC color TV signal using an interfield-intrafield adaptive prediction

    NASA Astrophysics Data System (ADS)

    Yamamoto, H.; Hatori, Y.; Murakami, H.

    1981-12-01

    This paper proposes a new approach to the composite coding of the NTSC color TV signal, i.e., an interfield-intrafield adaptive prediction. First, concerning prediction efficiency for various moving pictures, an advantage of this coding scheme over interframe coding is clarified theoretically and experimentally. This adaptive prediction gives very good and stable performance for pictures ranging from still to violently moving. A 30 Mbit/s codec, based on this idea, and its performance are presented. Field transmission testing through an Intelsat satellite using this codec is also described. The picture quality is satisfactory for practically all the pictures expected in broadcast TV programs, and it is subjectively estimated to be a little better than that of the half-transponder FM transmission now employed in the Intelsat system.
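
    The adaptive switch between the two predictors can be illustrated with a toy per-block rule: compare the residual energy of interfield prediction (co-located samples from the previous field) against intrafield prediction (the previous sample on the same line) and keep the cheaper one. This is a crude stand-in for the codec's actual adaptation logic; the block size, energy criterion, and function names are assumptions.

```python
def residual_energy(block, preds):
    return sum((x - p) ** 2 for x, p in zip(block, preds))

def choose_predictor(cur, prev_field, left):
    """Per-block interfield/intrafield switch: interfield prediction wins
    on still content, where the previous field matches the current one;
    intrafield (previous-sample) prediction wins once motion breaks the
    field-to-field correlation."""
    inter = prev_field             # co-located previous-field samples
    intra = [left] + cur[:-1]      # previous sample on the same line
    e_inter = residual_energy(cur, inter)
    e_intra = residual_energy(cur, intra)
    if e_inter <= e_intra:
        return 'interfield', e_inter
    return 'intrafield', e_intra
```

    On a static block the interfield residual is zero, so that mode is chosen; on a block that has moved since the previous field, the spatial (intrafield) predictor takes over, which is exactly the still-to-moving robustness the abstract reports.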

  8. Deficits in context-dependent adaptive coding of reward in schizophrenia

    PubMed Central

    Kirschner, Matthias; Hager, Oliver M; Bischof, Martin; Hartmann-Riemer, Matthias N; Kluge, Agne; Seifritz, Erich; Tobler, Philippe N; Kaiser, Stefan

    2016-01-01

    Theoretical principles of information processing and empirical findings suggest that to efficiently represent all possible rewards in the natural environment, reward-sensitive neurons have to adapt their coding range dynamically to the current reward context. Adaptation ensures that the reward system is most sensitive for the most likely rewards, enabling the system to efficiently represent a potentially infinite range of reward information. A deficit in neural adaptation would prevent precise representation of rewards and could have detrimental effects for an organism’s ability to optimally engage with its environment. In schizophrenia, reward processing is known to be impaired and has been linked to different symptom dimensions. However, despite the fundamental significance of coding reward adaptively, no study has elucidated whether adaptive reward processing is impaired in schizophrenia. We therefore studied patients with schizophrenia (n=27) and healthy controls (n=25), using functional magnetic resonance imaging in combination with a variant of the monetary incentive delay task. Compared with healthy controls, patients with schizophrenia showed less efficient neural adaptation to the current reward context, which leads to imprecise neural representation of reward. Importantly, the deficit correlated with total symptom severity. Our results suggest that some of the deficits in reward processing in schizophrenia might be due to inefficient neural adaptation to the current reward context. Furthermore, because adaptive coding is a ubiquitous feature of the brain, we believe that our findings provide an avenue in defining a general impairment in neural information processing underlying this debilitating disorder. PMID:27430009
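    The core idea of adaptive coding, scaling prediction errors to the spread of the current reward context, can be illustrated with a toy calculation. The division-by-SD rule below is an assumption for illustration, not the paper's fitted model:

```python
def adaptive_prediction_error(reward, expected, context_sd):
    """Prediction error divided by the SD of the current reward context.
    Hypothetical scaling rule illustrating adaptive coding: the same
    absolute error yields a larger scaled response in a narrow context."""
    return (reward - expected) / context_sd

# The same +2 reward surprise in a narrow vs. a wide reward distribution.
narrow = adaptive_prediction_error(12.0, 10.0, context_sd=1.0)
wide = adaptive_prediction_error(12.0, 10.0, context_sd=4.0)
```

    This mirrors the reported finding that BOLD response slopes are steeper for prediction errors occurring in distributions with smaller SDs.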

  10. Olfactory Predictive Codes and Stimulus Templates in Piriform Cortex

    PubMed Central

    Zelano, Christina; Mohanty, Aprajita; Gottfried, Jay A.

    2011-01-01

    Neuroscientific models of sensory perception suggest that the brain utilizes predictive codes in advance of a stimulus encounter, enabling organisms to infer forthcoming sensory events. However, it is poorly understood how such mechanisms are implemented in the olfactory system. Combining high-resolution functional magnetic resonance imaging with multivariate (pattern-based) analyses, we examined the spatiotemporal evolution of odor perception in the human brain during an olfactory search task. Ensemble activity patterns in anterior piriform cortex (APC) and orbitofrontal cortex (OFC) reflected the attended odor target both before and after stimulus onset. In contrast, pre-stimulus ensemble representations of the odor target in posterior piriform cortex (PPC) gave way to post-stimulus representations of the odor itself. Critically, the robustness of target-related patterns in PPC predicted subsequent behavioral performance. Our findings directly show that the brain generates predictive templates or “search images” in PPC, with physical correspondence to odor-specific pattern representations, to augment olfactory perception. PMID:21982378

  11. Fast motion prediction algorithm for multiview video coding

    NASA Astrophysics Data System (ADS)

    Abdelazim, Abdelrahman; Zhang, Guang Y.; Mein, Stephen J.; Varley, Martin R.; Ait-Boudaoud, Djamel

    2011-06-01

    Multiview Video Coding (MVC) is an extension to the H.264/MPEG-4 AVC video compression standard developed with joint efforts by MPEG/VCEG to enable efficient encoding of sequences captured simultaneously from multiple cameras using a single video stream. The design is therefore aimed at exploiting inter-view dependencies in addition to reducing temporal redundancies. However, this further increases the overall encoding complexity. In this paper, the high correlation between a macroblock and its enclosed partitions is utilised to estimate motion homogeneity, and based on the result inter-view prediction is selectively enabled or disabled. Moreover, if the MVC is divided into three layers in terms of motion prediction (the first being the full and sub-pixel motion search, the second the mode selection process, and the third the repetition of the first and second for inter-view prediction), the proposed algorithm significantly reduces the complexity in all three layers. To assess the proposed algorithm, a comprehensive set of experiments was conducted. The results show that the proposed algorithm significantly reduces the motion estimation time whilst maintaining similar rate-distortion performance, when compared to both the H.264/MVC reference software and recently reported work.
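    A minimal sketch of the selective-enabling idea, assuming a simple spread-based homogeneity measure over the partitions' motion vectors (the paper's actual criterion is not given in the abstract):

```python
def motion_is_homogeneous(partition_mvs, threshold=1.0):
    """Return True when a macroblock's partition motion vectors are tightly
    clustered, in which case the costly inter-view prediction stage could be
    skipped. The spread measure and threshold are illustrative assumptions."""
    xs = [mv[0] for mv in partition_mvs]
    ys = [mv[1] for mv in partition_mvs]
    spread = (max(xs) - min(xs)) + (max(ys) - min(ys))
    return spread <= threshold

# Nearly identical partition motion -> homogeneous -> disable inter-view search.
homogeneous = motion_is_homogeneous([(2, 1), (2, 1), (2, 2), (2, 1)])
```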

  12. A CMOS Imager with Focal Plane Compression using Predictive Coding

    NASA Technical Reports Server (NTRS)

    Leon-Salas, Walter D.; Balkir, Sina; Sayood, Khalid; Schemm, Nathan; Hoffman, Michael W.

    2007-01-01

    This paper presents a CMOS image sensor with focal-plane compression. The design has a column-level architecture and is based on predictive coding techniques for image decorrelation. The prediction operations are performed in the analog domain to avoid quantization noise and to decrease the area complexity of the circuit. The prediction residuals are quantized and encoded by a joint quantizer/coder circuit. To save area resources, the joint quantizer/coder circuit exploits common circuitry between a single-slope analog-to-digital converter (ADC) and a Golomb-Rice entropy coder. This combination of ADC and encoder allows the integration of the entropy coder at the column level. A prototype chip was fabricated in a 0.35 μm CMOS process. The output of the chip is a compressed bit stream. The test chip occupies a silicon area of 2.60 mm x 5.96 mm, which includes an 80 x 44 APS array. Tests of the fabricated chip demonstrate the validity of the design.
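    The Golomb-Rice entropy-coding stage mentioned above can be sketched in software as follows. The signed-residual mapping (`zigzag`) and the parameter choice are illustrative assumptions, not the chip's documented bitstream format:

```python
def zigzag(residual):
    """Map a signed prediction residual to a non-negative integer."""
    return 2 * residual if residual >= 0 else -2 * residual - 1

def golomb_rice_encode(value, k):
    """Golomb-Rice codeword for a non-negative integer with divisor 2**k:
    a unary quotient (q ones plus a terminating zero) followed by the
    k-bit binary remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    bits = "1" * q + "0"             # unary-coded quotient
    if k:
        bits += format(r, f"0{k}b")  # fixed-width binary remainder
    return bits

# Encode a signed residual of -3 with k = 2; zigzag(-3) gives 5.
codeword = golomb_rice_encode(zigzag(-3), 2)
```

    Because small residuals dominate after good prediction, such codes keep codewords short on average, which is why the coder fits at the column level.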

  13. Application of Avco data analysis and prediction techniques (ADAPT) to prediction of sunspot activity

    NASA Technical Reports Server (NTRS)

    Hunter, H. E.; Amato, R. A.

    1972-01-01

    The results are presented of the application of Avco Data Analysis and Prediction Techniques (ADAPT) to derivation of new algorithms for the prediction of future sunspot activity. The ADAPT derived algorithms show a factor of 2 to 3 reduction in the expected 2-sigma errors in the estimates of the 81-day running average of the Zurich sunspot numbers. The report presents: (1) the best estimates for sunspot cycles 20 and 21, (2) a comparison of the ADAPT performance with conventional techniques, and (3) specific approaches to further reduction in the errors of estimated sunspot activity and to recovery of earlier sunspot historical data. The ADAPT programs are used both to derive regression algorithms for prediction of the entire 11-year sunspot cycle from the preceding two cycles and to derive extrapolation algorithms for extrapolating a given sunspot cycle based on any available portion of the cycle.

  14. Numerical Prediction of SERN Performance using WIND code

    NASA Technical Reports Server (NTRS)

    Engblom, W. A.

    2003-01-01

    Computational results are presented for the performance and flow behavior of single-expansion ramp nozzles (SERNs) during overexpanded operation and transonic flight. Three-dimensional Reynolds-Averaged Navier Stokes (RANS) results are obtained for two vehicle configurations, including the NASP Model 5B and ISTAR RBCC (a variant of X-43B) using the WIND code. Numerical predictions for nozzle integrated forces and pitch moments are directly compared to experimental data for the NASP Model 5B, and adequate-to-excellent agreement is found. The sensitivity of SERN performance and separation phenomena to freestream static pressure and Mach number is demonstrated via a matrix of cases for both vehicles. 3-D separation regions are shown to be induced by either lateral (e.g., sidewall) shocks or vertical (e.g., cowl trailing edge) shocks. Finally, the implications of this work to future preliminary design efforts involving SERNs are discussed.

  15. PHURBAS: AN ADAPTIVE, LAGRANGIAN, MESHLESS, MAGNETOHYDRODYNAMICS CODE. I. ALGORITHM

    SciTech Connect

    Maron, Jason L.; McNally, Colin P.; Mac Low, Mordecai-Mark

    2012-05-01

    We present an algorithm for simulating the equations of ideal magnetohydrodynamics and other systems of differential equations on an unstructured set of points represented by sample particles. Local, third-order, least-squares, polynomial interpolations (Moving Least Squares interpolations) are calculated from the field values of neighboring particles to obtain field values and spatial derivatives at the particle position. Field values and particle positions are advanced in time with a second-order predictor-corrector scheme. The particles move with the fluid, so the time step is not limited by the Eulerian Courant-Friedrichs-Lewy condition. Full spatial adaptivity is implemented to ensure the particles fill the computational volume, which gives the algorithm substantial flexibility and power. A target resolution is specified for each point in space, with particles being added and deleted as needed to meet this target. Particle addition and deletion is based on a local void and clump detection algorithm. Dynamic artificial viscosity fields provide stability to the integration. The resulting algorithm provides a robust solution for modeling flows that require Lagrangian or adaptive discretizations to resolve. This paper derives and documents the Phurbas algorithm as implemented in Phurbas version 1.1. A following paper presents the implementation and test problem results.
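    The second-order predictor-corrector time advance named above corresponds, in its simplest scalar form, to a Heun-type step. This is a generic sketch of the scheme; Phurbas's actual update advances particle positions and field values together:

```python
def predictor_corrector_step(f, t, y, dt):
    """One second-order predictor-corrector (Heun) step for dy/dt = f(t, y):
    an explicit Euler predictor followed by a trapezoidal corrector."""
    y_pred = y + dt * f(t, y)                            # predictor
    return y + 0.5 * dt * (f(t, y) + f(t + dt, y_pred))  # corrector

# dy/dt = -y from y(0) = 1; one step of size 0.1 approximates exp(-0.1).
y1 = predictor_corrector_step(lambda t, y: -y, 0.0, 1.0, 0.1)
```

    Because the particles move with the fluid, the step size in the actual code is set by accuracy considerations rather than the Eulerian CFL condition, as the abstract notes.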

  16. Comparison of GLIMPS and HFAST Stirling engine code predictions with experimental data

    NASA Technical Reports Server (NTRS)

    Geng, Steven M.; Tew, Roy C.

    1992-01-01

    Predictions from GLIMPS and HFAST design codes are compared with experimental data for the RE-1000 and SPRE free piston Stirling engines. Engine performance and available power loss predictions are compared. Differences exist between GLIMPS and HFAST loss predictions. Both codes require engine specific calibration to bring predictions and experimental data into agreement.

  17. Comparison of GLIMPS and HFAST Stirling engine code predictions with experimental data

    SciTech Connect

    Geng, S.M.; Tew, R.C.

    1994-09-01

    Predictions from GLIMPS and HFAST design codes are compared with experimental data for the RE-1000 and SPRE free-piston Stirling engines. Engine performance and available power loss predictions are compared. Differences exist between GLIMPS and HFAST loss predictions. Both codes require engine-specific calibration to bring predictions and experimental data into agreement.

  18. Vector Sum Excited Linear Prediction (VSELP) speech coding at 4.8 kbps

    NASA Technical Reports Server (NTRS)

    Gerson, Ira A.; Jasiuk, Mark A.

    1990-01-01

    Code Excited Linear Prediction (CELP) speech coders exhibit good performance at data rates as low as 4800 bps. The major drawback to CELP type coders is their larger computational requirements. The Vector Sum Excited Linear Prediction (VSELP) speech coder utilizes a codebook with a structure which allows for a very efficient search procedure. Other advantages of the VSELP codebook structure is discussed and a detailed description of a 4.8 kbps VSELP coder is given. This coder is an improved version of the VSELP algorithm, which finished first in the NSA's evaluation of the 4.8 kbps speech coders. The coder uses a subsample resolution single tap long term predictor, a single VSELP excitation codebook, a novel gain quantizer which is robust to channel errors, and a new adaptive pre/postfilter arrangement.
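    The vector-sum codebook structure that makes the VSELP search efficient can be illustrated by brute-force enumeration: each codeword is a +/-1-weighted sum of M basis vectors, giving 2**M codewords. The basis values below are assumed toy data; the real coder exploits the structure precisely to avoid this enumeration:

```python
from itertools import product

import numpy as np

def vselp_codebook(basis):
    """Enumerate the vector-sum codebook: every +/-1 sign combination of the
    M basis vectors, i.e. 2**M codewords. Brute force for illustration only."""
    M = len(basis)
    return np.array([sum(s * b for s, b in zip(signs, basis))
                     for signs in product((1.0, -1.0), repeat=M)])

# Two toy basis vectors yield a four-codeword codebook in dimension 3.
basis = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
cb = vselp_codebook(basis)
```

    Flipping one sign changes the codeword by a single basis-vector contribution, which is what allows correlations to be updated incrementally during the search.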

  19. Adaptation of a neutron diffraction detector to coded aperture imaging

    SciTech Connect

    Vanier, P.E.; Forman, L.

    1997-02-01

    A coded aperture neutron imaging system developed at Brookhaven National Laboratory (BNL) has demonstrated that it is possible to record not only a flux of thermal neutrons at some position, but also the directions from whence they came. This realization of an idea which defied the conventional wisdom has provided a device which has never before been available to the nuclear physics community. A number of potential applications have been explored, including (1) counting warheads on a bus or in a storage area, (2) investigating inhomogeneities in drums of Pu-containing waste to facilitate non-destructive assays, (3) monitoring of vaults containing accountable materials, (4) detection of buried land mines, and (5) locating solid deposits of nuclear material held up in gaseous diffusion plants.

  20. AGR-2 Safety Test Predictions Using the PARFUME Code

    SciTech Connect

    Blaise Collin

    2014-09-01

    This report documents calculations performed to predict failure probability of TRISO-coated fuel particles and diffusion of fission products through these particles during safety tests following the second irradiation test of the Advanced Gas Reactor program (AGR-2). The calculations include the modeling of the AGR-2 irradiation that occurred from June 2010 to October 2013 in the Advanced Test Reactor (ATR) and the modeling of a safety testing phase to support safety tests planned at Oak Ridge National Laboratory and at Idaho National Laboratory (INL) for a selection of AGR-2 compacts. The heat-up of AGR-2 compacts is a critical component of the AGR-2 fuel performance evaluation, and its objectives are to identify the effect of accident test temperature, burnup, and irradiation temperature on the performance of the fuel at elevated temperature. Safety testing of compacts will be followed by detailed examinations of the fuel particles to further evaluate fission product retention and behavior of the kernel and coatings. The modeling was performed using the particle fuel model computer code PARFUME developed at INL. PARFUME is an advanced gas-cooled reactor fuel performance modeling and analysis code (Miller 2009). It has been developed as an integrated mechanistic code that evaluates the thermal, mechanical, and physico-chemical behavior of fuel particles during irradiation to determine the failure probability of a population of fuel particles given the particle-to-particle statistical variations in physical dimensions and material properties that arise from the fuel fabrication process, accounting for all viable mechanisms that can lead to particle failure. The code also determines the diffusion of fission products from the fuel through the particle coating layers, and through the fuel matrix to the coolant boundary. The subsequent release of fission products is calculated at the compact level (release of fission products from the compact). 

  1. Towards feasible and effective predictive wavefront control for adaptive optics

    SciTech Connect

    Poyneer, L A; Veran, J

    2008-06-04

    We have recently proposed Predictive Fourier Control, a computationally efficient and adaptive algorithm for predictive wavefront control that assumes frozen-flow turbulence. We summarize refinements to the state-space model that allow operation with arbitrary computational delays and reduce the computational cost of solving for new control. We present initial atmospheric characterization using observations with Gemini North's Altair AO system. These observations, taken over 1 year, indicate that frozen flow exists, contains substantial power, and is strongly detected 94% of the time.

  2. Adaptive uniform grayscale coded aperture design for high dynamic range compressive spectral imaging

    NASA Astrophysics Data System (ADS)

    Diaz, Nelson; Rueda, Hoover; Arguello, Henry

    2016-05-01

    Imaging spectroscopy is an important area with many applications in surveillance, agriculture and medicine. The disadvantage of conventional spectroscopy techniques is that they collect the whole datacube. In contrast, compressive spectral imaging systems capture snapshot compressive projections, which are the input of reconstruction algorithms to yield the underlying datacube. Common compressive spectral imagers use coded apertures to perform the coded projections. The coded apertures are the key elements in these imagers since they define the sensing matrix of the system. The proper design of the coded aperture entries leads to a good quality in the reconstruction. In addition, the compressive measurements are prone to saturation due to the limited dynamic range of the sensor, hence the design of coded apertures must consider saturation. The saturation errors in compressive measurements are unbounded, and compressive sensing recovery algorithms only provide solutions for noise that is bounded, or bounded with high probability. This paper proposes the design of uniform adaptive grayscale coded apertures (UAGCA) to improve the dynamic range of the estimated spectral images by reducing the saturation levels. The saturation is attenuated between snapshots using an adaptive filter which updates the entries of the grayscale coded aperture based on the previous snapshots. The coded apertures are optimized in terms of transmittance and number of grayscale levels. The advantage of the proposed method is the efficient use of the dynamic range of the image sensor. Extensive simulations show improvements in the image reconstruction of the proposed method, by up to 10 dB, compared with uniform grayscale coded apertures (UGCA) and adaptive block-unblock coded apertures (ABCA).
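    A toy version of the adaptive update, assuming a simple rule that lowers the transmittance of aperture entries whose previous measurement saturated and then re-quantizes to a few grayscale levels. The step size and level count are assumed values; the paper's actual adaptive filter is not reproduced here:

```python
import numpy as np

def update_aperture(aperture, prev_measurement, sat_level, step=0.25, levels=4):
    """Lower the transmittance of aperture entries whose previous compressive
    measurement saturated, then re-quantize to the allowed grayscale levels."""
    saturated = prev_measurement >= sat_level
    new_ap = np.clip(np.where(saturated, aperture - step, aperture), 0.0, 1.0)
    return np.round(new_ap * (levels - 1)) / (levels - 1)

# Entries that saturated in the previous snapshot get dimmed for the next one.
aperture = np.ones((2, 2))
measurement = np.array([[300.0, 10.0], [255.0, 20.0]])
next_aperture = update_aperture(aperture, measurement, sat_level=255.0)
```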

  3. Results and code predictions for ABCOVE (aerosol behavior code validation and evaluation) aerosol code validation: Test AB6 with two aerosol species

    SciTech Connect

    Hilliard, R K; McCormack, J C; Muhlestein, L D

    1984-12-01

    A program for aerosol behavior code validation and evaluation (ABCOVE) has been developed in accordance with the LMFBR Safety Program Plan. The ABCOVE program is a cooperative effort between the USDOE, the USNRC, and their contractor organizations currently involved in aerosol code development, testing or application. The second large-scale test in the ABCOVE program, AB6, was performed in the 850-m³ CSTF vessel with a two-species test aerosol. The test conditions simulated the release of a fission product aerosol, NaI, in the presence of a sodium spray fire. Five organizations made pretest predictions of aerosol behavior using seven computer codes. Three of the codes (QUICKM, MAEROS and CONTAIN) were discrete, multiple species codes, while four (HAA-3, HAA-4, HAARM-3 and SOFIA) were log-normal codes which assume uniform coagglomeration of different aerosol species. Detailed test results are presented and compared with the code predictions for seven key aerosol behavior parameters.

  4. Predictive coding of depth images across multiple views

    NASA Astrophysics Data System (ADS)

    Morvan, Yannick; Farin, Dirk; de With, Peter H. N.

    2007-02-01

    A 3D video stream is typically obtained from a set of synchronized cameras, which are simultaneously capturing the same scene (multiview video). This technology enables applications such as free-viewpoint video, which allows the viewer to select his preferred viewpoint, or 3D TV, where the depth of the scene can be perceived using a special display. Because the user-selected view does not always correspond to a camera position, it may be necessary to synthesize a virtual camera view. To synthesize such a virtual view, we have adopted a depth image-based rendering technique that employs one depth map for each camera. Consequently, a remote rendering of the 3D video requires a compression technique for texture and depth data. This paper presents a predictive-coding algorithm for the compression of depth images across multiple views. The presented algorithm provides (a) an improved coding efficiency for depth images over block-based motion-compensation encoders (H.264), and (b) random access to different views for fast rendering. The proposed depth-prediction technique works by synthesizing/computing the depth of 3D points based on the reference depth image. The attractiveness of the depth-prediction algorithm is that the prediction of depth data avoids an independent transmission of depth for each view, while simplifying the view interpolation by synthesizing depth images for arbitrary viewpoints. We present experimental results for several multiview depth sequences that show a quality improvement of up to 1.8 dB compared to H.264 compression.

  5. Correctable noise of quantum-error-correcting codes under adaptive concatenation

    NASA Astrophysics Data System (ADS)

    Fern, Jesse

    2008-01-01

    We examine the transformation of noise under a quantum-error-correcting code (QECC) concatenated repeatedly with itself, by analyzing the effects of a quantum channel after each level of concatenation using recovery operators that are optimally adapted to use error syndrome information from the previous levels of the code. We use the Shannon entropy of these channels to estimate the thresholds of correctable noise for QECCs and find considerable improvements under this adaptive concatenation. Similar methods could be used to increase quantum-fault-tolerant thresholds.

  6. Nanoparticle-dispersed metamaterial sensors for adaptive coded aperture imaging applications

    NASA Astrophysics Data System (ADS)

    Nehmetallah, Georges; Banerjee, Partha; Aylo, Rola; Rogers, Stanley

    2011-09-01

    We propose tunable single-layer and multi-layer (periodic and with defect) structures comprising nanoparticle dispersed metamaterials in suitable hosts, including adaptive coded aperture constructs, for possible Adaptive Coded Aperture Imaging (ACAI) applications such as in microbolometry, pressure/temperature sensors, and directed energy transfer, over a wide frequency range, from visible to terahertz. These structures are easy to fabricate, are low-cost and tunable, and offer enhanced functionality, such as perfect absorption (in the case of bolometry) and low cross-talk (for sensors). Properties of the nanoparticle dispersed metamaterial are determined using effective medium theory.
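    Effective-medium properties of nanoparticle-dispersed materials like those above are commonly computed with the Maxwell Garnett mixing rule for spherical inclusions; this is one plausible choice of formula, since the abstract does not state which mixing rule the authors use:

```python
def maxwell_garnett(eps_host, eps_particle, fill_fraction):
    """Maxwell Garnett effective permittivity for spherical nanoparticles
    dispersed at a given volume fill fraction in a host medium."""
    beta = (eps_particle - eps_host) / (eps_particle + 2 * eps_host)
    f = fill_fraction * beta
    return eps_host * (1 + 2 * f) / (1 - f)

# Dilute limit: a zero fill fraction recovers the host permittivity.
# Permittivity values here are assumed for illustration.
eps_dilute = maxwell_garnett(2.25, -10 + 1j, 0.0)
eps_mixed = maxwell_garnett(2.25, -10 + 1j, 0.1)
```

    Tuning the fill fraction (or the particle material) shifts the effective permittivity, which is the handle such designs use to tailor absorption across the band.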

  7. A Grid Sourcing and Adaptation Study Using Unstructured Grids for Supersonic Boom Prediction

    NASA Technical Reports Server (NTRS)

    Carter, Melissa B.; Deere, Karen A.

    2008-01-01

    NASA created the Supersonics Project as part of the NASA Fundamental Aeronautics Program to advance technology that will make supersonic flight over land viable. Computational flow solvers have lacked the ability to accurately predict sonic boom from the near to far field. The focus of this investigation was to establish gridding and adaptation techniques to predict near-to-mid-field (<10 body lengths below the aircraft) boom signatures at supersonic speeds using the USM3D unstructured grid flow solver. The study began by examining sources along the body of the aircraft, far-field sourcing, and far-field boundaries. The study then examined several techniques for grid adaptation. During the course of the study, volume sourcing was introduced as a new way to source grids using the grid generation code VGRID. Two different methods of using the volume sources were examined. The first method, based on manual insertion of the numerous volume sources, made great improvements in the prediction capability of USM3D for boom signatures. The second method (SSGRID), which uses an a priori adaptation approach to stretch and shear the original unstructured grid to align the grid with the pressure waves, showed similar results with a more automated approach. Due to SSGRID's results and ease of use, the rest of the study focused on developing a best practice using SSGRID. The best practice created by this study for boom predictions using the CFD code USM3D involved: 1) creating a small cylindrical outer boundary either 1 or 2 body lengths in diameter (depending on how far below the aircraft the boom prediction is required), 2) using a single volume source under the aircraft, and 3) using SSGRID to stretch and shear the grid to the desired length.

  8. Application of adaptive subband coding for noisy bandlimited ECG signal processing

    NASA Astrophysics Data System (ADS)

    Aditya, Krishna; Chu, Chee-Hung H.; Szu, Harold H.

    1996-03-01

    An approach to impulsive noise suppression and background normalization of digitized bandlimited electrocardiogram signals is presented. This approach uses adaptive wavelet filters that incorporate the band-limited a priori information and the shape information of a signal to decompose the data. Empirical results show that the new algorithm has good performance in wideband impulsive noise suppression and background normalization for subsequent wave detection, when compared with subband coding using the Daubechies D4 wavelet without the bandlimited adaptive wavelet transform.

  9. Adaptive Data-based Predictive Control for Short Take-off and Landing (STOL) Aircraft

    NASA Technical Reports Server (NTRS)

    Barlow, Jonathan Spencer; Acosta, Diana Michelle; Phan, Minh Q.

    2010-01-01

    Data-based Predictive Control is an emerging control method that stems from Model Predictive Control (MPC). MPC computes current control action based on a prediction of the system output a number of time steps into the future and is generally derived from a known model of the system. Data-based predictive control has the advantage of deriving predictive models and controller gains from input-output data. Thus, a controller can be designed from the outputs of complex simulation code or a physical system where no explicit model exists. If the output data happens to be corrupted by periodic disturbances, the designed controller will also have the built-in ability to reject these disturbances without the need to know them. When data-based predictive control is implemented online, it becomes a version of adaptive control. The characteristics of adaptive data-based predictive control are particularly appropriate for the control of nonlinear and time-varying systems, such as Short Take-off and Landing (STOL) aircraft. STOL is a capability of interest to NASA because conceptual Cruise Efficient Short Take-off and Landing (CESTOL) transport aircraft offer the ability to reduce congestion in the terminal area by utilizing existing shorter runways at airports, as well as to lower community noise by flying steep approach and climb-out patterns that reduce the noise footprint of the aircraft. In this study, adaptive data-based predictive control is implemented as an integrated flight-propulsion controller for the outer-loop control of a CESTOL-type aircraft. Results show that the controller successfully tracks velocity while attempting to maintain a constant flight path angle, using longitudinal command, thrust and flap setting as the control inputs.

  10. Code division controlled-MAC in wireless sensor network by adaptive binary signature design

    NASA Astrophysics Data System (ADS)

    Wei, Lili; Batalama, Stella N.; Pados, Dimitris A.; Suter, Bruce

    2007-04-01

    We consider the problem of signature waveform design for code division medium-access-control (MAC) of wireless sensor networks (WSN). In contrast to conventional randomly chosen orthogonal codes, an adaptive signature design strategy is developed under the maximum pre-detection SINR (signal-to-interference-plus-noise ratio) criterion. The proposed algorithm utilizes slowest descent cords of the optimization surface to move toward the optimum solution and exhibits, upon eigenvector decomposition, linear computational complexity with respect to signature length. Numerical and simulation studies demonstrate the performance of the proposed method and offer comparisons with conventional signature code sets.
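    For intuition, a simplified stand-in for the design criterion: over unit-norm signatures, matched-filter SINR is maximized by the minimum-eigenvalue eigenvector of the interference-plus-noise covariance, which can then be quantized to a binary code. The paper instead optimizes directly over binary signatures via slowest-descent cords, so this is only a sketch:

```python
import numpy as np

def max_sinr_binary_signature(R):
    """Take the minimum-eigenvalue eigenvector of the interference-plus-noise
    covariance R (the unit-norm signature maximizing matched-filter SINR) and
    quantize it to a +/-1 binary code."""
    _, v = np.linalg.eigh(R)   # eigenvalues returned in ascending order
    s = v[:, 0]                # eigenvector of the smallest eigenvalue
    return np.where(s >= 0, 1.0, -1.0)

# Assumed interference covariance with strong power on the first chip.
R = np.diag([10.0, 1.0, 1.0, 1.0])
signature = max_sinr_binary_signature(R)
```

    The +/-1 quantization is lossy, which is one reason direct optimization over binary codes, as in the paper, can do better.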

  11. Seizure prediction using adaptive neuro-fuzzy inference system.

    PubMed

    Rabbi, Ahmed F; Azinfar, Leila; Fazel-Rezai, Reza

    2013-01-01

    In this study, we present a neuro-fuzzy approach to seizure prediction from invasive electroencephalogram (EEG) recordings by applying an adaptive neuro-fuzzy inference system (ANFIS). Three nonlinear seizure-predictive features were extracted from a patient's data obtained from the European Epilepsy Database, one of the most comprehensive EEG databases for epilepsy research. A total of 36 hours of recordings, including 7 seizures, was used for analysis. The nonlinear features used in this study were similarity index, phase synchronization, and nonlinear interdependence. We designed an ANFIS classifier constructed with these features as input. Fuzzy if-then rules were generated by the ANFIS classifier using the complex relationship of the feature space provided during training. The membership function optimization was conducted based on a hybrid learning algorithm. The proposed method achieved a maximum sensitivity of 80% with a false prediction rate as low as 0.46 per hour. PMID:24110134

  12. The Pupillary Orienting Response Predicts Adaptive Behavioral Adjustment after Errors

    PubMed Central

    Murphy, Peter R.; van Moort, Marianne L.; Nieuwenhuis, Sander

    2016-01-01

    Reaction time (RT) is commonly observed to slow down after an error. This post-error slowing (PES) has been thought to arise from the strategic adoption of a more cautious response mode following deployment of cognitive control. Recently, an alternative account has suggested that PES results from interference due to an error-evoked orienting response. We investigated whether error-related orienting may in fact be a precursor to adaptive post-error behavioral adjustment when the orienting response resolves before subsequent trial onset. We measured pupil dilation, a prototypical measure of autonomic orienting, during performance of a choice RT task with long inter-stimulus intervals, and found that the trial-by-trial magnitude of the error-evoked pupil response positively predicted both PES magnitude and the likelihood that the following response would be correct. These combined findings suggest that the magnitude of the error-related orienting response predicts an adaptive change of response strategy following errors, and thereby promote a reconciliation of the orienting and adaptive control accounts of PES. PMID:27010472

  13. A New Adaptive Framework for Collaborative Filtering Prediction.

    PubMed

    Almosallam, Ibrahim A; Shang, Yi

    2008-06-01

    Collaborative filtering is one of the most successful techniques for recommendation systems and has been used in many commercial services provided by major companies including Amazon, TiVo and Netflix. In this paper we focus on memory-based collaborative filtering (CF). Existing CF techniques work well on dense data but poorly on sparse data. To address this weakness, we propose to use z-scores instead of explicit ratings and introduce a mechanism that adaptively combines global statistics with item-based values based on data density level. We present a new adaptive framework that encapsulates various CF algorithms and the relationships among them. An adaptive CF predictor is developed that can self-adapt from user-based to item-based to hybrid methods based on the amount of available ratings. Our experimental results show that the new predictor consistently obtained more accurate predictions than existing CF methods, with the most significant improvement on sparse data sets. When applied to the Netflix Challenge data set, our method performed better than existing CF and singular value decomposition (SVD) methods and achieved 4.67% improvement over Netflix's system. PMID:21572924
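
    The z-score idea can be sketched as follows. The density-weighted blend and the helper name `predict_rating` are illustrative assumptions, not the authors' exact formulation; the point is that sparse items fall back toward a global statistic while dense items trust item-level data.

```python
import numpy as np

# Hypothetical sketch: ratings are z-scored per user, and a prediction for an
# item blends an item-based estimate with a global fallback according to how
# densely the item is rated. The blending rule is an assumption for
# illustration, not the paper's formula.

def predict_rating(R, user, item):
    """R: ratings matrix with np.nan marking missing entries."""
    mu = np.nanmean(R, axis=1)               # per-user mean rating
    sigma = np.nanstd(R, axis=1)             # per-user rating spread
    Z = (R - mu[:, None]) / sigma[:, None]   # z-scored ratings

    item_z = Z[:, item]
    known = ~np.isnan(item_z)
    density = known.mean()                   # fraction of users rating this item
    item_est = np.nanmean(item_z) if known.any() else 0.0

    # Dense items trust the item-based estimate; sparse items fall back
    # toward the global mean of z-scores, which is 0 by construction.
    z_pred = density * item_est
    return mu[user] + sigma[user] * z_pred

R = np.array([[5.0, 3.0, np.nan],
              [4.0, np.nan, 2.0],
              [1.0, 2.0, 5.0]])
print(round(predict_rating(R, 0, 2), 2))     # predicted rating for user 0, item 2
```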

  14. Visual Bias Predicts Gait Adaptability in Novel Sensory Discordant Conditions

    NASA Technical Reports Server (NTRS)

    Brady, Rachel A.; Batson, Crystal D.; Peters, Brian T.; Mulavara, Ajitkumar P.; Bloomberg, Jacob J.

    2010-01-01

    We designed a gait training study that presented combinations of visual flow and support-surface manipulations to investigate the response of healthy adults to novel discordant sensorimotor conditions. We aimed to determine whether a relationship existed between subjects' visual dependence and their postural stability and cognitive performance in a new discordant environment presented at the conclusion of training (Transfer Test). Our training system comprised a treadmill placed on a motion base facing a virtual visual scene that provided a variety of sensory challenges. Ten healthy adults completed 3 training sessions during which they walked on a treadmill at 1.1 m/s while receiving discordant support-surface and visual manipulations. At the first visit, in an analysis of normalized torso translation measured in a scene-movement-only condition, 3 of 10 subjects were classified as visually dependent. During the Transfer Test, all participants received a 2-minute novel exposure. In a combined measure of stride frequency and reaction time, the non-visually dependent subjects showed improved adaptation on the Transfer Test compared to their visually dependent counterparts. This finding suggests that individual differences in the ability to adapt to new sensorimotor conditions may be explained by individuals' innate sensory biases. An accurate preflight assessment of crewmembers' biases for visual dependence could be used to predict their propensities to adapt to novel sensory conditions. It may also facilitate the development of customized training regimens that could expedite adaptation to alternate gravitational environments.

  15. Direct social perception, mindreading and Bayesian predictive coding.

    PubMed

    de Bruin, Leon; Strijbos, Derek

    2015-11-01

    Mindreading accounts of social cognition typically claim that we cannot directly perceive the mental states of other agents and therefore have to exercise certain cognitive capacities in order to infer them. In recent years this view has been challenged by proponents of the direct social perception (DSP) thesis, who argue that the mental states of other agents can be directly perceived. In this paper we show, first, that the main disagreement between proponents of DSP and mindreading accounts has to do with the so-called 'sandwich model' of social cognition. Although proponents of DSP are critical of this model, we argue that they still seem to accept the distinction between perception, cognition and action that underlies it. Second, we contrast the sandwich model of social cognition with an alternative theoretical framework that is becoming increasingly popular in the cognitive neurosciences: Bayesian Predictive Coding (BPC). We show that the BPC framework renders a principled distinction between perception, cognition and action obsolete, and can accommodate elements of both DSP and mindreading accounts. PMID:25959592

  16. Context-adaptive binary arithmetic coding with precise probability estimation and complexity scalability for high-efficiency video coding

    NASA Astrophysics Data System (ADS)

    Karwowski, Damian; Domański, Marek

    2016-01-01

    An improved context-based adaptive binary arithmetic coding (CABAC) is presented. The idea for the improvement is to use a more accurate mechanism for estimation of symbol probabilities in the standard CABAC algorithm. The authors' proposal of such a mechanism is based on the context-tree weighting technique. In the framework of a high-efficiency video coding (HEVC) video encoder, the improved CABAC allows 0.7% to 4.5% bitrate saving compared to the original CABAC algorithm. The application of the proposed algorithm marginally affects the complexity of HEVC video encoder, but the complexity of video decoder increases by 32% to 38%. In order to decrease the complexity of video decoding, a new tool has been proposed for the improved CABAC that enables scaling of the decoder complexity. Experiments show that this tool gives 5% to 7.5% reduction of the decoding time while still maintaining high efficiency in the data compression.
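
    The adaptive probability-estimation loop that CABAC-style coders refine can be illustrated with a simple context model. The sketch below uses a Krichevsky-Trofimov count estimator per context rather than the paper's context-tree weighting; only the overall structure is shared: estimate the next bit's probability, charge its ideal code length, then update the context.

```python
import math
from collections import defaultdict

# Sketch of adaptive binary probability estimation for arithmetic coding.
# A per-context Krichevsky-Trofimov estimator stands in for the paper's
# context-tree-weighting mechanism.

class ContextModel:
    def __init__(self):
        self.counts = defaultdict(lambda: [0, 0])   # context -> [zeros, ones]

    def p_one(self, ctx):
        n0, n1 = self.counts[ctx]
        return (n1 + 0.5) / (n0 + n1 + 1.0)         # KT probability estimate

    def update(self, ctx, bit):
        self.counts[ctx][bit] += 1

def code_length(bits, order=2):
    """Ideal arithmetic-coded length (in bits) under an order-k context model."""
    model, total = ContextModel(), 0.0
    for i, b in enumerate(bits):
        ctx = tuple(bits[max(0, i - order):i])      # preceding bits as context
        p = model.p_one(ctx)
        total += -math.log2(p if b else 1.0 - p)    # cost of coding this bit
        model.update(ctx, b)                        # adapt after coding
    return total

bits = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0]
print(f"{code_length(bits):.2f} coded bits for {len(bits)} input bits")
```

    As the counts accumulate, the predictable periodic pattern is coded in well under one bit per symbol; a more accurate estimator (such as context-tree weighting) shortens this further at the cost of decoder complexity, which is exactly the trade-off the paper's scalability tool addresses.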

  17. Incorporating spike-rate adaptation into a rate code in mathematical and biological neurons.

    PubMed

    Ralston, Bridget N; Flagg, Lucas Q; Faggin, Eric; Birmingham, John T

    2016-06-01

    For a slowly varying stimulus, the simplest relationship between a neuron's input and output is a rate code, in which the spike rate is a unique function of the stimulus at that instant. In the case of spike-rate adaptation, there is no unique relationship between input and output, because the spike rate at any time depends both on the instantaneous stimulus and on prior spiking (the "history"). To improve the decoding of spike trains produced by neurons that show spike-rate adaptation, we developed a simple scheme that incorporates "history" into a rate code. We utilized this rate-history code successfully to decode spike trains produced by 1) mathematical models of a neuron in which the mechanism for adaptation (IAHP) is specified, and 2) the gastropyloric receptor (GPR2), a stretch-sensitive neuron in the stomatogastric nervous system of the crab Cancer borealis, that exhibits long-lasting adaptation of unknown origin. Moreover, when we modified the spike rate either mathematically in a model system or by applying neuromodulatory agents to the experimental system, we found that changes in the rate-history code could be related to the biophysical mechanisms responsible for altering the spiking. PMID:26888106
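
    A minimal version of such a rate-history decode, under assumed dynamics: the model neuron's rate is stimulus drive minus an adaptation variable that leakily integrates past firing (a stand-in for an IAHP-like mechanism), and the decoder rebuilds the adaptation state from the observed rate history to invert the encoding. All constants are illustrative.

```python
import numpy as np

# Toy rate-history scheme: because the decoder can reconstruct the
# adaptation state from the spike-rate history alone, the stimulus can be
# recovered even though rate alone is ambiguous under adaptation.

dt, tau, k, gain = 0.01, 0.5, 0.8, 10.0

def encode(stimulus):
    rates, a = [], 0.0
    for s in stimulus:
        r = max(gain * s - k * a, 0.0)   # rate = drive minus adaptation
        rates.append(r)
        a += dt * (r - a / tau)          # adaptation integrates the firing
    return np.array(rates)

def decode(rates):
    s_hat, a = [], 0.0
    for r in rates:
        s_hat.append((r + k * a) / gain)  # invert using the history term
        a += dt * (r - a / tau)           # same recursion as the encoder
    return np.array(s_hat)

stim = 0.5 + 0.3 * np.sin(np.linspace(0, 4 * np.pi, 400))
recovered = decode(encode(stim))
print(np.max(np.abs(recovered - stim)))   # essentially zero reconstruction error
```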

  18. QOS-aware error recovery in wireless body sensor networks using adaptive network coding.

    PubMed

    Razzaque, Mohammad Abdur; Javadi, Saeideh S; Coulibaly, Yahaya; Hira, Muta Tah

    2015-01-01

    Wireless body sensor networks (WBSNs) for healthcare and medical applications are real-time and life-critical infrastructures, which require a strict guarantee of quality of service (QoS), in terms of latency, error rate and reliability. Considering the criticality of healthcare and medical applications, WBSNs need to fulfill users/applications and the corresponding network's QoS requirements. For instance, for a real-time application to support on-time data delivery, a WBSN needs to guarantee a constrained delay at the network level. A network coding-based error recovery mechanism is an emerging mechanism that can be used in these systems to support QoS at very low energy, memory and hardware cost. However, in dynamic network environments and user requirements, the original non-adaptive version of network coding fails to support some of the network and user QoS requirements. This work explores the QoS requirements of WBSNs in both perspectives of QoS. Based on these requirements, this paper proposes an adaptive network coding-based, QoS-aware error recovery mechanism for WBSNs. It utilizes network-level and user-/application-level information to make it adaptive in both contexts. Thus, it provides improved QoS support adaptively in terms of reliability, energy efficiency and delay. Simulation results show the potential of the proposed mechanism in terms of adaptability, reliability, real-time data delivery and network lifetime compared to its counterparts. PMID:25551485
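
    The GF(2) network-coding recovery such a mechanism builds on can be sketched as follows. The adaptive part of the paper, choosing the redundancy from network and application QoS state, is a policy layered on top and is not modeled here; the fixed coefficient masks are illustrative (in practice they would be drawn randomly).

```python
# k source packets are mixed into coded packets by XORing subsets (coefficient
# bitmasks); any k linearly independent coded packets reconstruct the originals
# by Gaussian elimination over GF(2).

def combine(mask, packets):
    payload = 0
    for i, p in enumerate(packets):
        if mask >> i & 1:
            payload ^= p
    return payload

def recover(received, k):
    """received: list of (mask, payload); returns source packets or None."""
    pivots = {}
    for mask, payload in received:
        row = [mask, payload]
        for col in range(k):                   # online forward elimination
            if row[0] >> col & 1:
                if col in pivots:
                    row[0] ^= pivots[col][0]
                    row[1] ^= pivots[col][1]
                else:
                    pivots[col] = row
                    break
    if len(pivots) < k:
        return None                            # not enough independent packets
    for col in range(k - 1, -1, -1):           # back-substitution
        for c2 in range(col):
            if pivots[c2][0] >> col & 1:
                pivots[c2][0] ^= pivots[col][0]
                pivots[c2][1] ^= pivots[col][1]
    return [pivots[c][1] for c in range(k)]

data = [0xDE, 0xAD, 0xBE, 0xEF]                       # four sensor readings
masks = [0b0001, 0b0010, 0b0100, 0b1000, 0b1111, 0b0101]
coded = [(m, combine(m, data)) for m in masks]
lossy = coded[2:]                                     # first two packets lost
print(recover(lossy, 4) == data)                      # True: data recovered
```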

  19. QoS-Aware Error Recovery in Wireless Body Sensor Networks Using Adaptive Network Coding

    PubMed Central

    Razzaque, Mohammad Abdur; Javadi, Saeideh S.; Coulibaly, Yahaya; Hira, Muta Tah

    2015-01-01

    Wireless body sensor networks (WBSNs) for healthcare and medical applications are real-time and life-critical infrastructures, which require a strict guarantee of quality of service (QoS), in terms of latency, error rate and reliability. Considering the criticality of healthcare and medical applications, WBSNs need to fulfill users/applications and the corresponding network's QoS requirements. For instance, for a real-time application to support on-time data delivery, a WBSN needs to guarantee a constrained delay at the network level. A network coding-based error recovery mechanism is an emerging mechanism that can be used in these systems to support QoS at very low energy, memory and hardware cost. However, in dynamic network environments and user requirements, the original non-adaptive version of network coding fails to support some of the network and user QoS requirements. This work explores the QoS requirements of WBSNs in both perspectives of QoS. Based on these requirements, this paper proposes an adaptive network coding-based, QoS-aware error recovery mechanism for WBSNs. It utilizes network-level and user-/application-level information to make it adaptive in both contexts. Thus, it provides improved QoS support adaptively in terms of reliability, energy efficiency and delay. Simulation results show the potential of the proposed mechanism in terms of adaptability, reliability, real-time data delivery and network lifetime compared to its counterparts. PMID:25551485

  20. Gain-adaptive vector quantization for medium-rate speech coding

    NASA Technical Reports Server (NTRS)

    Chen, J.-H.; Gersho, A.

    1985-01-01

    A class of adaptive vector quantizers (VQs) that can dynamically adjust the 'gain' of codevectors according to the input signal level is introduced. The encoder uses a gain estimator to determine a suitable normalization of each input vector prior to VQ coding. The normalized vectors have reduced dynamic range and can then be more efficiently coded. At the receiver, the VQ decoder output is multiplied by the estimated gain. Both forward and backward adaptation are considered and several different gain estimators are compared and evaluated. An approach to optimizing the design of gain estimators is introduced. Some of the more obvious techniques for achieving gain adaptation are substantially less effective than the use of optimized gain estimators. A novel design technique that is needed to generate the appropriate gain-normalized codebook for the vector quantizer is introduced. Experimental results show that a significant gain in segmental SNR can be obtained over nonadaptive VQ with a negligible increase in complexity.
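
    A minimal sketch of forward gain adaptation, assuming an RMS gain estimator and a random unit-gain codebook (toy stand-ins for the optimized estimators and codebook design discussed in the paper):

```python
import numpy as np

# Each input vector is normalized by an estimated gain, matched against a
# gain-normalized codebook, and rescaled at the decoder.

rng = np.random.default_rng(0)
codebook = rng.standard_normal((16, 4))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)  # unit-gain codevectors

def encode(x):
    g = np.linalg.norm(x) / np.sqrt(len(x))  # gain estimate (RMS of the vector)
    xn = x / (g * np.sqrt(len(x)))           # normalize to a unit-gain shape
    idx = int(np.argmin(np.linalg.norm(codebook - xn, axis=1)))
    return idx, g                            # transmit codebook index and gain

def decode(idx, g):
    return codebook[idx] * g * np.sqrt(codebook.shape[1])  # restore the gain

x = np.array([2.0, -1.0, 0.5, 3.0])
idx, g = encode(x)
xhat = decode(idx, g)
print(np.allclose(np.linalg.norm(xhat), np.linalg.norm(x)))  # gain preserved
```

    Because the codebook only has to cover gain-normalized shapes, its codevectors are spent on waveform detail rather than dynamic range, which is the source of the SNR gain reported above.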

  1. Adaptive modelling of structured molecular representations for toxicity prediction

    NASA Astrophysics Data System (ADS)

    Bertinetto, Carlo; Duce, Celia; Micheli, Alessio; Solaro, Roberto; Tiné, Maria Rosaria

    2012-12-01

    We investigated the possibility of modelling structure-toxicity relationships by direct treatment of the molecular structure (without using descriptors) through an adaptive model able to retain the appropriate structural information. With respect to traditional descriptor-based approaches, this provides a more general and flexible way to tackle prediction problems that is particularly suitable when little or no background knowledge is available. Our method employs a tree-structured molecular representation, which is processed by a recursive neural network (RNN). To explore the realization of RNN modelling in toxicological problems, we employed a data set containing growth impairment concentrations (IGC50) for Tetrahymena pyriformis.
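
    The descriptor-free idea can be sketched with a toy recursive network that folds a tree bottom-up into a fixed-size state. The random weights and the 4-component "atom label" vectors are hypothetical placeholders; in the paper they would be trained on the IGC50 data.

```python
import numpy as np

# A recursive neural network over a tree-structured molecular representation:
# each node's state combines its own label with the states of its subtrees.

rng = np.random.default_rng(1)
DIM = 8
W_label = rng.standard_normal((DIM, 4)) * 0.3   # embeds a node's label
W_child = rng.standard_normal((DIM, DIM)) * 0.3 # transforms a child's state
w_out = rng.standard_normal(DIM)                # linear readout

def encode(node):
    """node = (label_vector, [children]); returns the node's hidden state."""
    label, children = node
    h = W_label @ label
    for child in children:
        h = h + W_child @ encode(child)         # recursive bottom-up fold
    return np.tanh(h)

def predict_toxicity(tree):
    return float(w_out @ encode(tree))

# toy "molecule": a root atom bearing two substituent subtrees
leaf = (np.array([0.0, 0.0, 1.0, 0.0]), [])
tree = (np.array([1.0, 0.0, 0.0, 0.0]),
        [leaf, (np.array([0.0, 1.0, 0.0, 0.0]), [leaf])])
print(predict_toxicity(tree))
```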

  2. Prediction and control of chaotic processes using nonlinear adaptive networks

    SciTech Connect

    Jones, R.D.; Barnes, C.W.; Flake, G.W.; Lee, K.; Lewis, P.S.; O'Rouke, M.K.; Qian, S.

    1990-01-01

    We present the theory of nonlinear adaptive networks and discuss a few applications. In particular, we review the theory of feedforward backpropagation networks. We then present the theory of the Connectionist Normalized Linear Spline network in both its feedforward and iterated modes. Also, we briefly discuss the theory of stochastic cellular automata. We then discuss applications to chaotic time series, tidal prediction in Venice lagoon, finite differencing, sonar transient detection, control of nonlinear processes, control of a negative ion source, balancing a double inverted pendulum and design advice for free electron lasers and laser fusion targets.
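
    A minimal feedforward backpropagation network of the kind reviewed, fit to one step of the logistic map as a stand-in chaotic time series; the layer sizes, learning rate, and iteration count are arbitrary choices.

```python
import numpy as np

# One hidden tanh layer trained by full-batch gradient descent to learn the
# one-step map of a chaotic system from input/output samples.

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (200, 1))
Y = 4 * X * (1 - X)                      # one step of the chaotic logistic map

W1 = rng.standard_normal((1, 8)) * 0.5   # input-to-hidden weights
b1 = np.zeros(8)
W2 = rng.standard_normal((8, 1)) * 0.5   # hidden-to-output weights
b2 = np.zeros(1)

lr = 0.05
for _ in range(4000):
    H = np.tanh(X @ W1 + b1)             # forward pass
    P = H @ W2 + b2
    dP = 2 * (P - Y) / len(X)            # gradient of mean squared error
    dH = (dP @ W2.T) * (1 - H ** 2)      # backpropagate through tanh
    W2 -= lr * (H.T @ dP); b2 -= lr * dP.sum(0)
    W1 -= lr * (X.T @ dH); b1 -= lr * dH.sum(0)

print(float(np.mean((P - Y) ** 2)))      # training MSE after 4000 steps
```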

  3. Performance predictions for the Keck telescope adaptive optics system

    SciTech Connect

    Gavel, D.T.; Olivier, S.S.

    1995-08-07

    The second Keck ten meter telescope (Keck-11) is slated to have an infrared-optimized adaptive optics system in the 1997--1998 time frame. This system will provide diffraction-limited images in the 1--3 micron region and the ability to use a diffraction-limited spectroscopy slit. The AO system is currently in the preliminary design phase and considerable analysis has been performed in order to predict its performance under various seeing conditions. In particular we have investigated the point-spread function, energy through a spectroscopy slit, crowded field contrast, object limiting magnitude, field of view, and sky coverage with natural and laser guide stars.

  4. Performance of Adaptive Trellis Coded Modulation Applied to MC-CDMA with Bi-orthogonal Keying

    NASA Astrophysics Data System (ADS)

    Tanaka, Hirokazu; Yamasaki, Shoichiro; Haseyama, Miki

    The performance of Generalized Symbol-rate-increased (GSRI) Pragmatic Adaptive Trellis Coded Modulation (ATCM) applied to a Multi-carrier CDMA (MC-CDMA) system with bi-orthogonal keying is analyzed. In the MC-CDMA system considered in this paper, the input sequence of the bi-orthogonal modulator consists of a code selection bit sequence and a sign bit sequence. In [9], an efficient error correction scheme using a Reed-Solomon (RS) code for the code selection bit sequence was proposed. However, since BPSK is employed for the sign bit modulation, no error correction code is applied to it. In order to realize a high-speed wireless system, a multi-level modulation scheme (e.g., MPSK, MQAM, etc.) is desired. In this paper, we investigate the performance of the MC-CDMA system with bi-orthogonal keying employing GSRI ATCM. GSRI TC-MPSK can set the bandwidth expansion ratio arbitrarily while keeping a higher coding gain than the conventional pragmatic TCM scheme. By changing the modulation scheme and the bandwidth expansion ratio (coding rate), this scheme can optimize the performance according to the channel conditions. Performance evaluations by simulation on an AWGN channel and multi-path fading channels are presented. It is shown that the proposed scheme achieves markedly better throughput than the conventional scheme.

  5. A Neural Mechanism for Time-Window Separation Resolves Ambiguity of Adaptive Coding

    PubMed Central

    Hildebrandt, K. Jannis; Ronacher, Bernhard; Hennig, R. Matthias; Benda, Jan

    2015-01-01

    The senses of animals are confronted with changing environments and different contexts. Neural adaptation is one important tool to adjust sensitivity to varying intensity ranges. For instance, in a quiet night outdoors, our hearing is more sensitive than when we are confronted with the plurality of sounds in a large city during the day. However, adaptation also removes available information on absolute sound levels and may thus cause ambiguity. Experimental data on the trade-off between benefits and loss through adaptation is scarce and very few mechanisms have been proposed to resolve it. We present an example where adaptation is beneficial for one task—namely, the reliable encoding of the pattern of an acoustic signal—but detrimental for another—the localization of the same acoustic stimulus. With a combination of neurophysiological data, modeling, and behavioral tests, we show that adaptation in the periphery of the auditory pathway of grasshoppers enables intensity-invariant coding of amplitude modulations, but at the same time, degrades information available for sound localization. We demonstrate how focusing the response of localization neurons to the onset of relevant signals separates processing of localization and pattern information temporally. In this way, the ambiguity of adaptive coding can be circumvented and both absolute and relative levels can be processed using the same set of peripheral neurons. PMID:25761097

  6. A neural mechanism for time-window separation resolves ambiguity of adaptive coding.

    PubMed

    Hildebrandt, K Jannis; Ronacher, Bernhard; Hennig, R Matthias; Benda, Jan

    2015-03-01

    The senses of animals are confronted with changing environments and different contexts. Neural adaptation is one important tool to adjust sensitivity to varying intensity ranges. For instance, in a quiet night outdoors, our hearing is more sensitive than when we are confronted with the plurality of sounds in a large city during the day. However, adaptation also removes available information on absolute sound levels and may thus cause ambiguity. Experimental data on the trade-off between benefits and loss through adaptation is scarce and very few mechanisms have been proposed to resolve it. We present an example where adaptation is beneficial for one task--namely, the reliable encoding of the pattern of an acoustic signal--but detrimental for another--the localization of the same acoustic stimulus. With a combination of neurophysiological data, modeling, and behavioral tests, we show that adaptation in the periphery of the auditory pathway of grasshoppers enables intensity-invariant coding of amplitude modulations, but at the same time, degrades information available for sound localization. We demonstrate how focusing the response of localization neurons to the onset of relevant signals separates processing of localization and pattern information temporally. In this way, the ambiguity of adaptive coding can be circumvented and both absolute and relative levels can be processed using the same set of peripheral neurons. PMID:25761097

  7. An adaptive multigrid model for hurricane track prediction

    NASA Technical Reports Server (NTRS)

    Fulton, Scott R.

    1993-01-01

    This paper describes a simple numerical model for hurricane track prediction which uses a multigrid method to adapt the model resolution as the vortex moves. The model is based on the modified barotropic vorticity equation, discretized in space by conservative finite differences and in time by a Runge-Kutta scheme. A multigrid method is used to solve an elliptic problem for the streamfunction at each time step. Nonuniform resolution is obtained by superimposing uniform grids of different spatial extent; these grids move with the vortex as it moves. Preliminary numerical results indicate that the local mesh refinement allows accurate prediction of the hurricane track with substantially less computer time than required on a single uniform grid.

  8. Prediction of conductivity by adaptive neuro-fuzzy model.

    PubMed

    Akbarzadeh, S; Arof, A K; Ramesh, S; Khanmirzaei, M H; Nor, R M

    2014-01-01

    Electrochemical impedance spectroscopy (EIS) is a key method for characterizing the ionic and electronic conductivity of materials. One of the requirements of this technique is a model to forecast conductivity in preliminary experiments. The aim of this paper is to examine the prediction of conductivity by neuro-fuzzy inference from basic experimental factors such as temperature, frequency, thickness of the film and weight percentage of salt. In order to provide the optimal sets of fuzzy logic rule bases, the grid partition fuzzy inference method was applied. The model was validated on four random data sets. To evaluate the validity of the model, eleven statistical features were examined. Statistical analysis of the results clearly shows that an adaptive neuro-fuzzy model is powerful enough for the prediction of conductivity. PMID:24658582

  9. Prediction of Conductivity by Adaptive Neuro-Fuzzy Model

    PubMed Central

    Akbarzadeh, S.; Arof, A. K.; Ramesh, S.; Khanmirzaei, M. H.; Nor, R. M.

    2014-01-01

    Electrochemical impedance spectroscopy (EIS) is a key method for characterizing the ionic and electronic conductivity of materials. One of the requirements of this technique is a model to forecast conductivity in preliminary experiments. The aim of this paper is to examine the prediction of conductivity by neuro-fuzzy inference from basic experimental factors such as temperature, frequency, thickness of the film and weight percentage of salt. In order to provide the optimal sets of fuzzy logic rule bases, the grid partition fuzzy inference method was applied. The model was validated on four random data sets. To evaluate the validity of the model, eleven statistical features were examined. Statistical analysis of the results clearly shows that an adaptive neuro-fuzzy model is powerful enough for the prediction of conductivity. PMID:24658582

  10. Vortical Flow Prediction Using an Adaptive Unstructured Grid Method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2003-01-01

    A computational fluid dynamics (CFD) method has been employed to compute vortical flows around slender wing/body configurations. The emphasis of the paper is on the effectiveness of an adaptive grid procedure in "capturing" concentrated vortices generated at sharp edges or flow separation lines of lifting surfaces flying at high angles of attack. The method is based on a tetrahedral unstructured grid technology developed at the NASA Langley Research Center. Two steady-state, subsonic, inviscid and Navier-Stokes flow test cases are presented to demonstrate the applicability of the method for solving practical vortical flow problems. The first test case concerns vortex flow over a simple 65 delta wing with different values of leading-edge radius. Although the geometry is quite simple, it poses a challenging problem for computing vortices originating from blunt leading edges. The second case is that of a more complex fighter configuration. The superiority of the adapted solutions in capturing the vortex flow structure over the conventional unadapted results is demonstrated by comparisons with the wind-tunnel experimental data. The study shows that numerical prediction of vortical flows is highly sensitive to the local grid resolution and that the implementation of grid adaptation is essential when applying CFD methods to such complicated flow problems.

  11. Vortical Flow Prediction Using an Adaptive Unstructured Grid Method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2001-01-01

    A computational fluid dynamics (CFD) method has been employed to compute vortical flows around slender wing/body configurations. The emphasis of the paper is on the effectiveness of an adaptive grid procedure in "capturing" concentrated vortices generated at sharp edges or flow separation lines of lifting surfaces flying at high angles of attack. The method is based on a tetrahedral unstructured grid technology developed at the NASA Langley Research Center. Two steady-state, subsonic, inviscid and Navier-Stokes flow test cases are presented to demonstrate the applicability of the method for solving practical vortical flow problems. The first test case concerns vortex flow over a simple 65deg delta wing with different values of leading-edge bluntness, and the second case is that of a more complex fighter configuration. The superiority of the adapted solutions in capturing the vortex flow structure over the conventional unadapted results is demonstrated by comparisons with the windtunnel experimental data. The study shows that numerical prediction of vortical flows is highly sensitive to the local grid resolution and that the implementation of grid adaptation is essential when applying CFD methods to such complicated flow problems.

  12. Real-time Adaptive Control Using Neural Generalized Predictive Control

    NASA Technical Reports Server (NTRS)

    Haley, Pam; Soloway, Don; Gold, Brian

    1999-01-01

    The objective of this paper is to demonstrate the feasibility of a Nonlinear Generalized Predictive Control algorithm by showing real-time adaptive control on a plant with relatively fast time-constants. Generalized Predictive Control has classically been used in process control, where linear control laws were formulated for plants with relatively slow time-constants. The plant of interest for this paper is a magnetic levitation device that is nonlinear and open-loop unstable. In this application, the reference model of the plant is a neural network that has an embedded nominal linear model in the network weights. The control based on the linear model provides initial stability at the beginning of network training. In using a neural network, the control laws are nonlinear and online adaptation of the model is possible to capture unmodeled or time-varying dynamics. Newton-Raphson is the minimization algorithm. Newton-Raphson requires the calculation of the Hessian, but even with this computational expense, the low iteration rate makes this a viable algorithm for real-time control.
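
    The Newton-Raphson step at the heart of such a predictive controller can be sketched with a scalar toy model; the one-step-ahead plant model below is a hypothetical stand-in for the paper's neural network model, and the horizon, cost weight, and constants are illustrative assumptions.

```python
import math

# Choose the control input u minimizing a cost on predicted tracking error
# plus control effort, via Newton-Raphson with numerical derivatives.

a, b, lam = 0.9, 1.5, 0.01           # toy model and control-effort weight

def predict(y, u):
    """Model's one-step-ahead prediction of the plant output."""
    return a * y + b * math.tanh(u)

def cost(y, u, r):
    e = predict(y, u) - r
    return e * e + lam * u * u       # tracking error plus effort penalty

def newton_control(y, r, u=0.0, iters=10, h=1e-5):
    for _ in range(iters):
        j1 = (cost(y, u + h, r) - cost(y, u - h, r)) / (2 * h)
        j2 = (cost(y, u + h, r) - 2 * cost(y, u, r) + cost(y, u - h, r)) / (h * h)
        if j2 <= 0:
            break                    # bail out if local curvature is not convex
        u -= j1 / j2                 # Newton-Raphson update
    return u

y, r = 0.0, 1.0
u = newton_control(y, r)
print(predict(y, u))                 # predicted output close to the target r
```

    The second derivative here plays the role of the Hessian mentioned above; it is the extra cost that the low iteration rate makes affordable in real time.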

  13. Predictive Simulation Generates Human Adaptations during Loaded and Inclined Walking

    PubMed Central

    Hicks, Jennifer L.; Delp, Scott L.

    2015-01-01

    Predictive simulation is a powerful approach for analyzing human locomotion. Unlike techniques that track experimental data, predictive simulations synthesize gaits by minimizing a high-level objective such as metabolic energy expenditure while satisfying task requirements like achieving a target velocity. The fidelity of predictive gait simulations has only been systematically evaluated for locomotion data on flat ground. In this study, we construct a predictive simulation framework based on energy minimization and use it to generate normal walking, along with walking with a range of carried loads and up a range of inclines. The simulation is muscle-driven and includes controllers based on muscle force and stretch reflexes and contact state of the legs. We demonstrate how human-like locomotor strategies emerge from adapting the model to a range of environmental changes. Our simulation dynamics not only show good agreement with experimental data for normal walking on flat ground (92% of joint angle trajectories and 78% of joint torque trajectories lie within 1 standard deviation of experimental data), but also reproduce many of the salient changes in joint angles, joint moments, muscle coordination, and metabolic energy expenditure observed in experimental studies of loaded and inclined walking. PMID:25830913

  14. The evolution of predictive adaptive responses in human life history

    PubMed Central

    Nettle, Daniel; Frankenhuis, Willem E.; Rickard, Ian J.

    2013-01-01

    Many studies in humans have shown that adverse experience in early life is associated with accelerated reproductive timing, and there is comparative evidence for similar effects in other animals. There are two different classes of adaptive explanation for associations between early-life adversity and accelerated reproduction, both based on the idea of predictive adaptive responses (PARs). According to external PAR hypotheses, early-life adversity provides a ‘weather forecast’ of the environmental conditions into which the individual will mature, and it is adaptive for the individual to develop an appropriate phenotype for this anticipated environment. In internal PAR hypotheses, early-life adversity has a lasting negative impact on the individual's somatic state, such that her health is likely to fail more rapidly as she gets older, and there is an advantage to adjusting her reproductive schedule accordingly. We use a model of fluctuating environments to derive evolvability conditions for acceleration of reproductive timing in response to early-life adversity in a long-lived organism. For acceleration to evolve via the external PAR process, early-life cues must have a high degree of validity and the level of annual autocorrelation in the individual's environment must be almost perfect. For acceleration to evolve via the internal PAR process requires that early-life experience must determine a significant fraction of the variance in survival prospects in adulthood. The two processes are not mutually exclusive, and mechanisms for calibrating reproductive timing on the basis of early experience could evolve through a combination of the predictive value of early-life adversity for the later environment and its negative impact on somatic state. PMID:23843395

  15. The development and application of the self-adaptive grid code, SAGE

    NASA Astrophysics Data System (ADS)

    Davies, Carol B.

    The multidimensional self-adaptive grid code, SAGE, has proven to be a flexible and useful tool in the solution of complex flow problems. Both 2- and 3-D examples given in this report show the code to be reliable and to substantially improve flowfield solutions. Since the adaptive procedure is a marching scheme, the code is extremely fast and uses insignificant CPU time compared to the corresponding flow solver. The SAGE program is also machine and flow solver independent. Significant effort was made to simplify user interaction, though some parameters still need to be chosen with care. It is also difficult to tell when the adaption process has provided its best possible solution. This is particularly true if no experimental data are available or if there is a lack of theoretical understanding of the flow. Another difficulty occurs if local features are important but missing in the original grid; the adaption to this solution will not result in any improvement, and only grid refinement can result in an improved solution. These are complex issues that need to be explored within the context of each specific problem.

  16. The development and application of the self-adaptive grid code, SAGE

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.

    1993-01-01

    The multidimensional self-adaptive grid code, SAGE, has proven to be a flexible and useful tool in the solution of complex flow problems. Both 2- and 3-D examples given in this report show the code to be reliable and to substantially improve flowfield solutions. Since the adaptive procedure is a marching scheme, the code is extremely fast and uses insignificant CPU time compared to the corresponding flow solver. The SAGE program is also machine and flow solver independent. Significant effort was made to simplify user interaction, though some parameters still need to be chosen with care. It is also difficult to tell when the adaption process has provided its best possible solution. This is particularly true if no experimental data are available or if there is a lack of theoretical understanding of the flow. Another difficulty occurs if local features are important but missing in the original grid; the adaption to this solution will not result in any improvement, and only grid refinement can result in an improved solution. These are complex issues that need to be explored within the context of each specific problem.

  17. Asynchrony adaptation reveals neural population code for audio-visual timing

    PubMed Central

    Roach, Neil W.; Heron, James; Whitaker, David; McGraw, Paul V.

    2011-01-01

    The relative timing of auditory and visual stimuli is a critical cue for determining whether sensory signals relate to a common source and for making inferences about causality. However, the way in which the brain represents temporal relationships remains poorly understood. Recent studies indicate that our perception of multisensory timing is flexible—adaptation to a regular inter-modal delay alters the point at which subsequent stimuli are judged to be simultaneous. Here, we measure the effect of audio-visual asynchrony adaptation on the perception of a wide range of sub-second temporal relationships. We find distinctive patterns of induced biases that are inconsistent with the previous explanations based on changes in perceptual latency. Instead, our results can be well accounted for by a neural population coding model in which: (i) relative audio-visual timing is represented by the distributed activity across a relatively small number of neurons tuned to different delays; (ii) the algorithm for reading out this population code is efficient, but subject to biases owing to under-sampling; and (iii) the effect of adaptation is to modify neuronal response gain. These results suggest that multisensory timing information is represented by a dedicated population code and that shifts in perceived simultaneity following asynchrony adaptation arise from analogous neural processes to well-known perceptual after-effects. PMID:20961905

  18. Advanced turboprop noise prediction: Development of a code at NASA Langley based on recent theoretical results

    NASA Technical Reports Server (NTRS)

    Farassat, F.; Dunn, M. H.; Padula, S. L.

    1986-01-01

    The development of a high speed propeller noise prediction code at Langley Research Center is described. The code utilizes two recent acoustic formulations in the time domain for subsonic and supersonic sources. The structure and capabilities of the code are discussed. A grid size study for accuracy and speed of execution on a computer is also presented. The code is tested against an earlier Langley code. Considerable increases in accuracy and speed of execution are observed. Some examples of noise prediction for a high speed propeller for which acoustic test data are available are given. A brief derivation of the formulations used is given in an appendix.

  19. Rate-adaptive modulation and coding for optical fiber transmission systems

    NASA Astrophysics Data System (ADS)

    Gho, Gwang-Hyun; Kahn, Joseph M.

    2011-01-01

    Rate-adaptive optical transmission techniques adjust information bit rate based on transmission distance and other factors affecting signal quality. These techniques enable increased bit rates over shorter links, while enabling transmission over longer links when regeneration is not available. They are likely to become more important with increasing network traffic and a continuing evolution toward optically switched mesh networks, which make signal quality more variable. We propose a rate-adaptive scheme using variable-rate forward error correction (FEC) codes and variable constellations with a fixed symbol rate, quantifying how achievable bit rates vary with distance. The scheme uses serially concatenated Reed-Solomon codes and an inner repetition code to vary the code rate, combined with single-carrier polarization-multiplexed M-ary quadrature amplitude modulation (PM-M-QAM) with variable M and digital coherent detection. A rate adaptation algorithm uses the signal-to-noise ratio (SNR) or the FEC decoder input bit-error ratio (BER) estimated by the receiver to determine the FEC code rate and constellation size that maximize the information bit rate while satisfying a target FEC decoder output BER and an SNR margin, yielding a peak rate of 200 Gbit/s in a nominal 50-GHz channel bandwidth. We simulate single-channel transmission through a long-haul fiber system incorporating numerous optical switches, evaluating the impact of fiber nonlinearity and bandwidth narrowing. With zero SNR margin, we achieve bit rates of 200/100/50 Gbit/s over distances of 650/2000/3000 km. Compared to an ideal coding scheme, the proposed scheme exhibits a performance gap ranging from about 6.4 dB at 650 km to 7.5 dB at 5000 km.
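
    The rate-adaptation step described in this abstract amounts to a mode selection: given the receiver's SNR estimate and a required margin, pick the constellation size and code rate that maximize net bit rate. The sketch below illustrates that idea only; the mode table, symbol rate, and SNR thresholds are hypothetical placeholders, not values from the paper.

```python
# Illustrative mode table: (bits/symbol per polarization, code rate,
# required SNR in dB). All threshold values are made up for the sketch.
SYMBOL_RATE_GBAUD = 28.0  # assumed fixed symbol rate

MODES = [
    (2, 0.50, 5.0),   # PM-QPSK, rate-1/2
    (2, 0.80, 7.0),   # PM-QPSK, rate-4/5
    (4, 0.80, 13.0),  # PM-16QAM, rate-4/5
    (4, 0.93, 15.5),  # PM-16QAM, rate-14/15
]

def select_mode(est_snr_db, margin_db=0.0):
    """Return (net Gbit/s, bits/symbol, code rate) for the highest-rate
    mode whose SNR requirement is met with the requested margin, or None."""
    best = None
    for bits, rate, req_snr in MODES:
        if est_snr_db - margin_db >= req_snr:
            net_gbps = 2 * bits * rate * SYMBOL_RATE_GBAUD  # x2 polarizations
            if best is None or net_gbps > best[0]:
                best = (net_gbps, bits, rate)
    return best

print(select_mode(16.0))  # high SNR: densest constellation, weakest code
print(select_mode(6.0))   # low SNR: QPSK with strong coding
```

    In a real system the thresholds would come from the measured FEC decoder input BER, as the abstract describes, rather than from a static table.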

  20. Multi-criterial coding sequence prediction. Combination of GeneMark with two novel, coding-character specific quantities.

    PubMed

    Almirantis, Yannis; Nikolaou, Christoforos

    2005-10-01

    This work applies two recently formulated quantities, strongly correlated with the coding character of a sequence, as an additional "module" on GeneMark, in a three-criteria method. The difference between the statistical approaches employed by the combined methods is expected to contribute to an efficient assignment of functionality to unannotated genomic sequences. The developed combined algorithm is used to fractionate a collection of GeneMark-predicted exons into sub-collections with different expectations of being coding. A further modification of the algorithm allows an improved estimate of the probability of being coding to be assigned to GeneMark-predicted exons, on the basis of a suitable training set of GeneMark-predicted exons of known functionality. PMID:15809100

  1. Adaptation of the Advanced Spray Combustion Code to Cavitating Flow Problems

    NASA Technical Reports Server (NTRS)

    Liang, Pak-Yan

    1993-01-01

    A very important consideration in turbopump design is the prediction and prevention of cavitation. Thus far conventional CFD codes have not been generally applicable to the treatment of cavitating flows. Taking advantage of its two-phase capability, the Advanced Spray Combustion Code is being modified to handle flows with transient as well as steady-state cavitation bubbles. The volume-of-fluid approach incorporated into the code is extended and augmented with a liquid phase energy equation and a simple evaporation model. The strategy adopted also successfully deals with the cavity closure issue. Simple test cases will be presented and remaining technical challenges will be discussed.

  2. An edge-based solution-adaptive method applied to the AIRPLANE code

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Thomas, Scott D.; Cliff, Susan E.

    1995-01-01

    Computational methods to solve large-scale realistic problems in fluid flow can be made more efficient and cost effective by using them in conjunction with dynamic mesh adaption procedures that perform simultaneous coarsening and refinement to capture flow features of interest. This work couples the tetrahedral mesh adaption scheme, 3D_TAG, with the AIRPLANE code to solve complete aircraft configuration problems in transonic and supersonic flow regimes. Results indicate that the near-field sonic boom pressure signature of a cone-cylinder is improved, the oblique and normal shocks are better resolved on a transonic wing, and the bow shock ahead of an unstarted inlet is better defined.

  3. Dynamic Forces in Spur Gears - Measurement, Prediction, and Code Validation

    NASA Technical Reports Server (NTRS)

    Oswald, Fred B.; Townsend, Dennis P.; Rebbechi, Brian; Lin, Hsiang Hsi

    1996-01-01

    Measured and computed values for dynamic loads in spur gears were compared to validate a new version of the NASA gear dynamics code DANST-PC. Strain gage data from six gear sets with different tooth profiles were processed to determine the dynamic forces acting between the gear teeth. Results demonstrate that the analysis code successfully simulates the dynamic behavior of the gears. Differences between analysis and experiment were less than 10 percent under most conditions.

  4. Development of a shock noise prediction code for high-speed helicopters - The subsonically moving shock

    NASA Technical Reports Server (NTRS)

    Tadghighi, H.; Holz, R.; Farassat, F.; Lee, Yung-Jang

    1991-01-01

    A previously defined airfoil subsonic shock-noise prediction formula whose result depends on a mapping of the time-dependent shock surface to a time-independent computational domain is presently coded and incorporated in the NASA-Langley rotor-noise prediction code, WOPWOP. The structure and algorithms used in the shock-noise prediction code are presented; special care has been taken to reduce computation time while maintaining accuracy. Numerical examples of shock-noise prediction are presented for hover and forward flight. It is confirmed that shock noise is an important component of the quadrupole source.

  5. Development of code evaluation criteria for assessing predictive capability and performance

    NASA Technical Reports Server (NTRS)

    Lin, Shyi-Jang; Barson, S. L.; Sindir, M. M.; Prueger, G. H.

    1993-01-01

    Computational Fluid Dynamics (CFD), because of its unique ability to predict complex three-dimensional flows, is being applied with increasing frequency in the aerospace industry. Currently, no consistent code validation procedure is applied within the industry. Such a procedure is needed to increase confidence in CFD and reduce risk in the use of these codes as a design and analysis tool. This final contract report defines classifications for three levels of code validation, directly relating the use of CFD codes to the engineering design cycle. Evaluation criteria by which codes are measured and classified are recommended and discussed. Criteria for selecting experimental data against which CFD results can be compared are outlined. A four phase CFD code validation procedure is described in detail. Finally, the code validation procedure is demonstrated through application of the REACT CFD code to a series of cases culminating in a code to data comparison on the Space Shuttle Main Engine High Pressure Fuel Turbopump Impeller.

  6. FLAG: A multi-dimensional adaptive free-Lagrange code for fully unstructured grids

    SciTech Connect

    Burton, D.E.; Miller, D.S.; Palmer, T.

    1995-07-01

    The authors describe FLAG, a 3D adaptive free-Lagrange method for unstructured grids. The grid elements are 3D polygons that move with the flow and are refined or reconnected as necessary to achieve uniform accuracy. The authors stress that they were able to construct a 3D hydro version of this code in 3 months, using an object-oriented FORTRAN approach.

  7. Adapting a Navier-Stokes code to the ICL-DAP

    NASA Technical Reports Server (NTRS)

    Grosch, C. E.

    1985-01-01

    The results of an experiment to adapt a Navier-Stokes code, originally developed on a serial computer, to concurrent processing on the ICL Distributed Array Processor (DAP) are reported. The algorithm used in solving the Navier-Stokes equations is briefly described. The architecture of the DAP and DAP FORTRAN are also described. The modifications of the algorithm needed to fit the DAP are given and discussed. Finally, performance results are given and conclusions are drawn.

  8. Domain Adaptation for Pedestrian Detection Based on Prediction Consistency

    PubMed Central

    Huan-ling, Tang; Zhi-yong, An

    2014-01-01

    Pedestrian detection is an active area of research in computer vision. It remains a quite challenging problem in many applications where many factors cause a mismatch between the source dataset used to train the pedestrian detector and samples in the target scene. In this paper, we propose a novel domain adaptation model for merging plentiful source domain samples with scarce target domain samples to create a scene-specific pedestrian detector that performs as well as if rich target domain samples were present. Our approach combines a boosting-based learning algorithm with an entropy-based transferability measure, derived from the consistency of predictions with the source classifications, to selectively choose the source domain samples showing positive transferability to the target domain. Experimental results show that our approach can improve the detection rate, especially when labeled data in the target scene are insufficient. PMID:25013850
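
    The entropy-based selection idea can be sketched as follows: a source sample whose prediction is confident (low entropy) is treated as transferable to the target scene, while uncertain samples are discarded. The probabilities and the entropy threshold below are hypothetical, used only to illustrate the mechanism.

```python
import math

def prediction_entropy(p_positive):
    """Binary entropy (bits) of a classifier's positive-class probability."""
    p = min(max(p_positive, 1e-12), 1.0 - 1e-12)  # guard log(0)
    return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))

def select_transferable(samples, max_entropy=0.5):
    """Keep source samples with confident (low-entropy) predictions.

    samples: list of (sample_id, predicted positive-class probability)
    """
    return [s for s, p in samples if prediction_entropy(p) <= max_entropy]

samples = [("s1", 0.95), ("s2", 0.55), ("s3", 0.02)]
print(select_transferable(samples))  # the uncertain sample "s2" is dropped
```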

  9. Prediction of longitudinal dispersion coefficient using multivariate adaptive regression splines

    NASA Astrophysics Data System (ADS)

    Haghiabi, Amir Hamzeh

    2016-07-01

    In this paper, multivariate adaptive regression splines (MARS) was developed as a novel soft-computing technique for predicting the longitudinal dispersion coefficient (D_L) in rivers. An experimental dataset related to D_L was collected from the literature and used for preparing the MARS model. Results of the MARS model were compared with a multi-layer neural network model and empirical formulas. To identify the most effective parameters on D_L, the Gamma test was used. Performance of the MARS model was assessed by calculating standard error indices. The error indices showed that the MARS model performs well and is more accurate than the multi-layer neural network model and empirical formulas. Results of the Gamma test and the MARS model showed that flow depth (H) and the ratio of mean velocity to shear velocity (u/u*) were the most effective parameters on D_L.
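
    A fitted MARS model predicts with a weighted sum of hinge basis functions of the form max(0, ±(x − knot)). The toy surface below, in flow depth H and velocity ratio u/u*, illustrates only the functional form; the knots and coefficients are invented, not the ones fitted in the paper.

```python
def hinge(x, knot, sign=+1):
    """MARS basis function: max(0, sign * (x - knot))."""
    return max(0.0, sign * (x - knot))

def mars_predict(h, u_ratio):
    """Toy MARS surface: intercept plus three hinge terms."""
    return (1.5
            + 2.0 * hinge(h, 0.5)          # active when depth exceeds 0.5
            - 0.8 * hinge(h, 0.5, -1)      # active when depth is below 0.5
            + 1.2 * hinge(u_ratio, 1.0))   # active when u/u* exceeds 1

print(mars_predict(0.8, 1.5))
print(mars_predict(0.3, 0.5))
```

    The piecewise-linear hinges are what let MARS capture interactions and regime changes that a single global regression formula misses.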

  10. Adaptive model predictive process control using neural networks

    DOEpatents

    Buescher, K.L.; Baum, C.C.; Jones, R.D.

    1997-08-19

    A control system for controlling the output of at least one plant process output parameter is implemented by adaptive model predictive control using a neural network. An improved method and apparatus provides for sampling plant output and control input at a first sampling rate to provide control inputs at the fast rate. The MPC system is, however, provided with a network state vector that is constructed at a second, slower rate so that the input control values used by the MPC system are averaged over a gapped time period. Another improvement is a provision for on-line training that may include difference training, curvature training, and basis center adjustment to maintain the weights and basis centers of the neural network in an updated state that can follow changes in the plant operation apart from initial off-line training data. 46 figs.

  11. Adaptive model predictive process control using neural networks

    DOEpatents

    Buescher, Kevin L.; Baum, Christopher C.; Jones, Roger D.

    1997-01-01

    A control system for controlling the output of at least one plant process output parameter is implemented by adaptive model predictive control using a neural network. An improved method and apparatus provides for sampling plant output and control input at a first sampling rate to provide control inputs at the fast rate. The MPC system is, however, provided with a network state vector that is constructed at a second, slower rate so that the input control values used by the MPC system are averaged over a gapped time period. Another improvement is a provision for on-line training that may include difference training, curvature training, and basis center adjustment to maintain the weights and basis centers of the neural network in an updated state that can follow changes in the plant operation apart from initial off-line training data.
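
    The two-rate sampling scheme in this patent abstract can be sketched as a simple reduction: control inputs are sampled at a fast rate, and the slower network state vector is built from averages over each gapped period. The rates and data below are illustrative only.

```python
def gapped_averages(fast_samples, gap):
    """Average consecutive groups of `gap` fast-rate samples to form the
    slow-rate values used in the network state vector."""
    return [sum(fast_samples[i:i + gap]) / gap
            for i in range(0, len(fast_samples) - gap + 1, gap)]

# Twelve fast-rate control samples reduced to a three-element slow-rate vector.
u_fast = [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6]
print(gapped_averages(u_fast, gap=4))
```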

  12. Prediction of longitudinal dispersion coefficient using multivariate adaptive regression splines

    NASA Astrophysics Data System (ADS)

    Haghiabi, Amir Hamzeh

    2016-07-01

    In this paper, multivariate adaptive regression splines (MARS) was developed as a novel soft-computing technique for predicting the longitudinal dispersion coefficient (D_L) in rivers. An experimental dataset related to D_L was collected from the literature and used for preparing the MARS model. Results of the MARS model were compared with a multi-layer neural network model and empirical formulas. To identify the most effective parameters on D_L, the Gamma test was used. Performance of the MARS model was assessed by calculating standard error indices. The error indices showed that the MARS model performs well and is more accurate than the multi-layer neural network model and empirical formulas. Results of the Gamma test and the MARS model showed that flow depth (H) and the ratio of mean velocity to shear velocity (u/u*) were the most effective parameters on D_L.

  13. CRASH: A BLOCK-ADAPTIVE-MESH CODE FOR RADIATIVE SHOCK HYDRODYNAMICS-IMPLEMENTATION AND VERIFICATION

    SciTech Connect

    Van der Holst, B.; Toth, G.; Sokolov, I. V.; Myra, E. S.; Fryxell, B.; Drake, R. P.; Powell, K. G.; Holloway, J. P.; Stout, Q.; Adams, M. L.; Morel, J. E.; Karni, S.

    2011-06-01

    We describe the Center for Radiative Shock Hydrodynamics (CRASH) code, a block-adaptive-mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with a gray or multi-group method and uses a flux-limited diffusion approximation to recover the free-streaming limit. Electrons and ions are allowed to have different temperatures and we include flux-limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite-volume discretization in either one-, two-, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator-split method is used to solve these equations in three substeps: (1) an explicit step of a shock-capturing hydrodynamic solver; (2) a linear advection of the radiation in frequency-logarithm space; and (3) an implicit solution of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The applications are for astrophysics and laboratory astrophysics. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with a new radiation transfer and heat conduction library and equation-of-state and multi-group opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework.
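
    The three-substep operator splitting described in this abstract can be sketched schematically. The state and update rules below are toy stand-ins (scalar energies, an implicit-Euler relaxation for the stiff exchange term), not the actual CRASH discretization; they only show how the substeps compose.

```python
def explicit_hydro_step(state, dt):
    # (1) explicit shock-capturing hydro step (toy: deposit some ion energy)
    state["e_ion"] += 0.1 * dt
    return state

def advect_radiation_groups(state, dt):
    # (2) linear advection of radiation in frequency-logarithm space
    # (no-op in this scalar toy model)
    return state

def implicit_radiation_solve(state, dt):
    # (3) implicit relaxation of electron and radiation energies toward
    # each other, standing in for the stiff diffusion/exchange solve;
    # the implicit-Euler weight is stable for any dt
    avg = 0.5 * (state["e_ele"] + state["e_rad"])
    w = dt / (1.0 + dt)
    state["e_ele"] += w * (avg - state["e_ele"])
    state["e_rad"] += w * (avg - state["e_rad"])
    return state

def advance_one_step(state, dt):
    for substep in (explicit_hydro_step, advect_radiation_groups,
                    implicit_radiation_solve):
        state = substep(state, dt)
    return state

s = {"e_ion": 1.0, "e_ele": 2.0, "e_rad": 0.0}
print(advance_one_step(s, dt=1.0))
```

    Splitting this way lets the non-stiff hydro and advection parts use cheap explicit updates while only the stiff coupling requires an implicit solve.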

  14. CRASH: A Block-adaptive-mesh Code for Radiative Shock Hydrodynamics—Implementation and Verification

    NASA Astrophysics Data System (ADS)

    van der Holst, B.; Tóth, G.; Sokolov, I. V.; Powell, K. G.; Holloway, J. P.; Myra, E. S.; Stout, Q.; Adams, M. L.; Morel, J. E.; Karni, S.; Fryxell, B.; Drake, R. P.

    2011-06-01

    We describe the Center for Radiative Shock Hydrodynamics (CRASH) code, a block-adaptive-mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with a gray or multi-group method and uses a flux-limited diffusion approximation to recover the free-streaming limit. Electrons and ions are allowed to have different temperatures and we include flux-limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite-volume discretization in either one-, two-, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator-split method is used to solve these equations in three substeps: (1) an explicit step of a shock-capturing hydrodynamic solver; (2) a linear advection of the radiation in frequency-logarithm space; and (3) an implicit solution of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The applications are for astrophysics and laboratory astrophysics. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with a new radiation transfer and heat conduction library and equation-of-state and multi-group opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework.

  15. CRASH: A Block-Adaptive-Mesh Code for Radiative Shock Hydrodynamics

    NASA Astrophysics Data System (ADS)

    van der Holst, B.; Toth, G.; Sokolov, I. V.; Powell, K. G.; Holloway, J. P.; Myra, E. S.; Stout, Q.; Adams, M. L.; Morel, J. E.; Drake, R. P.

    2011-01-01

    We describe the CRASH (Center for Radiative Shock Hydrodynamics) code, a block adaptive mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with the gray or multigroup method and uses a flux limited diffusion approximation to recover the free-streaming limit. The electrons and ions are allowed to have different temperatures and we include a flux limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite volume discretization in either one, two, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator split method is used to solve these equations in three substeps: (1) solve the hydrodynamic equations with shock-capturing schemes, (2) a linear advection of the radiation in frequency-logarithm space, and (3) an implicit solve of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with this new radiation transfer and heat conduction library and equation-of-state and multigroup opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework (SWMF).

  16. An Adaptive Source-Channel Coding with Feedback for Progressive Transmission of Medical Images

    PubMed Central

    Lo, Jen-Lung; Sanei, Saeid; Nazarpour, Kianoush

    2009-01-01

    A novel adaptive source-channel coding with feedback for progressive transmission of medical images is proposed here. In the source coding part, the transmission starts from the region of interest (RoI). The parity length in the channel code varies with respect to both the proximity of the image subblock to the RoI and the channel noise, which is iteratively estimated in the receiver. The overall transmitted data can be controlled by the user (clinician). In the case of medical data transmission, it is vital to keep the distortion level under control, as in most cases certain clinically important regions have to be transmitted without any visible error. The proposed system significantly reduces the transmission time and error. Moreover, the system is very user friendly since the selection of the RoI, its size, overall code rate, and a number of test features such as noise level can be set by the users at both ends. A MATLAB-based TCP/IP connection has been established to demonstrate the proposed interactive and adaptive progressive transmission system. The proposed system is simulated for both binary symmetric channel (BSC) and Rayleigh channel. The experimental results verify the effectiveness of the design. PMID:19190770
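
    The adaptive-parity rule described here, in which blocks nearer the RoI or sent over a noisier channel receive more protection, can be sketched as a small function. The scaling constants, parity bounds, and BER saturation point are hypothetical choices for illustration.

```python
def parity_length(dist_to_roi, est_ber, max_parity=32, min_parity=4):
    """More parity symbols close to the RoI and on noisy channels.

    dist_to_roi: subblock distance from the RoI, in blocks (0 = inside RoI)
    est_ber: channel bit-error ratio estimated at the receiver
    """
    proximity = 1.0 / (1.0 + dist_to_roi)   # 1 at the RoI, falls off outward
    noise = min(1.0, est_ber / 1e-2)        # saturate at BER 1e-2 (assumed)
    p = min_parity + (max_parity - min_parity) * proximity * (0.5 + 0.5 * noise)
    return int(round(p))

print(parity_length(0, 1e-2))   # inside RoI, noisy channel: maximum protection
print(parity_length(8, 1e-4))   # far from RoI, clean channel: light protection
```

    The receiver's iterative noise estimate feeds back into `est_ber`, closing the adaptation loop the abstract describes.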

  17. An adaptive source-channel coding with feedback for progressive transmission of medical images.

    PubMed

    Lo, Jen-Lung; Sanei, Saeid; Nazarpour, Kianoush

    2009-01-01

    A novel adaptive source-channel coding with feedback for progressive transmission of medical images is proposed here. In the source coding part, the transmission starts from the region of interest (RoI). The parity length in the channel code varies with respect to both the proximity of the image subblock to the RoI and the channel noise, which is iteratively estimated in the receiver. The overall transmitted data can be controlled by the user (clinician). In the case of medical data transmission, it is vital to keep the distortion level under control, as in most cases certain clinically important regions have to be transmitted without any visible error. The proposed system significantly reduces the transmission time and error. Moreover, the system is very user friendly since the selection of the RoI, its size, overall code rate, and a number of test features such as noise level can be set by the users at both ends. A MATLAB-based TCP/IP connection has been established to demonstrate the proposed interactive and adaptive progressive transmission system. The proposed system is simulated for both binary symmetric channel (BSC) and Rayleigh channel. The experimental results verify the effectiveness of the design. PMID:19190770

  18. ALEGRA -- A massively parallel h-adaptive code for solid dynamics

    SciTech Connect

    Summers, R.M.; Wong, M.K.; Boucheron, E.A.; Weatherby, J.R.

    1997-12-31

    ALEGRA is a multi-material, arbitrary-Lagrangian-Eulerian (ALE) code for solid dynamics designed to run on massively parallel (MP) computers. It combines the features of modern Eulerian shock codes, such as CTH, with modern Lagrangian structural analysis codes using an unstructured grid. ALEGRA is being developed for use on the teraflop supercomputers to conduct advanced three-dimensional (3D) simulations of shock phenomena important to a variety of systems. ALEGRA was designed with the Single Program Multiple Data (SPMD) paradigm, in which the mesh is decomposed into sub-meshes so that each processor gets a single sub-mesh with approximately the same number of elements. Using this approach the authors have been able to produce a single code that can scale from one processor to thousands of processors. A current major effort is to develop efficient, high precision simulation capabilities for ALEGRA, without the computational cost of using a global highly resolved mesh, through flexible, robust h-adaptivity of finite elements. H-adaptivity is the dynamic refinement of the mesh by subdividing elements, thus changing the characteristic element size and reducing numerical error. The authors are working on several major technical challenges that must be met to make effective use of HAMMER on MP computers.

  19. Adaptive coded aperture imaging in the infrared: towards a practical implementation

    NASA Astrophysics Data System (ADS)

    Slinger, Chris W.; Gilholm, Kevin; Gordon, Neil; McNie, Mark; Payne, Doug; Ridley, Kevin; Strens, Malcolm; Todd, Mike; De Villiers, Geoff; Watson, Philip; Wilson, Rebecca; Dyer, Gavin; Eismann, Mike; Meola, Joe; Rogers, Stanley

    2008-08-01

    An earlier paper [1] discussed the merits of adaptive coded apertures for use as lensless imaging systems in the thermal infrared and visible. It was shown how diffractive (rather than the more conventional geometric) coding could be used, and that 2D intensity measurements from multiple mask patterns could be combined and decoded to yield enhanced imagery. Initial experimental results in the visible band were presented. Unfortunately, radiosity calculations, also presented in that paper, indicated that the signal to noise performance of systems using this approach was likely to be compromised, especially in the infrared. This paper will discuss how such limitations can be overcome, and some of the tradeoffs involved. Experimental results showing tracking and imaging performance of these modified, diffractive, adaptive coded aperture systems in the visible and infrared will be presented. The subpixel imaging and tracking performance is compared to that of conventional imaging systems and shown to be superior. System size, weight and cost calculations indicate that the coded aperture approach, employing novel photonic MOEMS micro-shutter architectures, has significant merits for a given level of performance in the MWIR when compared to more conventional imaging approaches.

  20. Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Rost, Martin Christopher

    1988-01-01

    Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image. This is accomplished by using a threshold-driven maximum-distortion criterion to select the specific coder used. The different coders are built using variable-blocksize transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed from which the study of these coders can be explored. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed; it achieves more accurate bit assignments than the algorithms currently used in the literature. Some upper and lower bounds for the bit-allocation distortion-rate function are developed. An obtainable distortion-rate function is developed for a particular scalar quantizer mixing method that can be used to code transform coefficients at any rate.
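
    The threshold-driven block selection described in this abstract can be sketched as a recursive split: start with a large transform block, and if its coding difficulty exceeds a threshold, split it and code the quarters with the smaller-block coder. Here local variance stands in for the actual transform-coding distortion, which is an assumption of the sketch, not the dissertation's criterion.

```python
def block_variance(img, r0, c0, size):
    """Variance of a square block, used as a stand-in difficulty measure."""
    vals = [img[r][c] for r in range(r0, r0 + size)
                      for c in range(c0, c0 + size)]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def choose_blocks(img, r0, c0, size, threshold, min_size=2):
    """Return a list of (row, col, size) blocks, splitting difficult regions."""
    if size > min_size and block_variance(img, r0, c0, size) > threshold:
        half = size // 2
        blocks = []
        for dr in (0, half):
            for dc in (0, half):
                blocks += choose_blocks(img, r0 + dr, c0 + dc,
                                        half, threshold, min_size)
        return blocks
    return [(r0, c0, size)]

flat = [[5] * 4 for _ in range(4)]          # smooth region: one large block
edge = [[0, 0, 9, 9] for _ in range(4)]     # an edge forces splitting
print(choose_blocks(flat, 0, 0, 4, threshold=1.0))
print(choose_blocks(edge, 0, 0, 4, threshold=1.0))
```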

  1. Less can be more: RNA-adapters may enhance coding capacity of replicators.

    PubMed

    de Boer, Folkert K; Hogeweg, Paulien

    2012-01-01

    It is still not clear how prebiotic replicators evolved towards the complexity found in present day organisms. Within the most realistic scenario for prebiotic evolution, known as the RNA world hypothesis, such complexity has arisen from replicators consisting solely of RNA. Within contemporary life, remarkably many RNAs are involved in modifying other RNAs. In hindsight, such RNA-RNA modification might have helped in alleviating the limits of complexity posed by the information threshold for RNA-only replicators. Here we study the possible role of such self-modification in early evolution, by modeling the evolution of protocells as evolving replicators, which have the opportunity to incorporate these mechanisms as a molecular tool. Evolution is studied towards a set of 25 arbitrary 'functional' structures, while avoiding all other (misfolded) structures, which are considered to be toxic and increase the death-rate of a protocell. The modeled protocells contain a genotype of different RNA-sequences while their phenotype is the ensemble of secondary structures they can potentially produce from these RNA-sequences. One of the secondary structures explicitly codes for a simple sequence-modification tool. This 'RNA-adapter' can block certain positions on other RNA-sequences through antisense base-pairing. The altered sequence can produce an alternative secondary structure, which may or may not be functional. We show that the modifying potential of interacting RNA-sequences enables these protocells to evolve high fitness under high mutation rates. Moreover, our model shows that because of toxicity of misfolded molecules, redundant coding impedes the evolution of self-modification machinery, in effect restraining the evolvability of coding structures. Hence, high mutation rates can actually promote the evolution of complex coding structures by reducing redundant coding. Protocells can successfully use RNA-adapters to modify their genotype-phenotype mapping in order to

  2. Simultaneous learning and filtering without delusions: a Bayes-optimal combination of Predictive Inference and Adaptive Filtering.

    PubMed

    Kneissler, Jan; Drugowitsch, Jan; Friston, Karl; Butz, Martin V

    2015-01-01

    Predictive coding appears to be one of the fundamental working principles of brain processing. Amongst other aspects, brains often predict the sensory consequences of their own actions. Predictive coding resembles Kalman filtering, where incoming sensory information is filtered to produce prediction errors for subsequent adaptation and learning. However, to generate prediction errors given motor commands, a suitable temporal forward model is required to generate predictions. While in engineering applications, it is usually assumed that this forward model is known, the brain has to learn it. When filtering sensory input and learning from the residual signal in parallel, a fundamental problem arises: the system can enter a delusional loop when filtering the sensory information using an overly trusted forward model. In this case, learning stalls before accurate convergence because uncertainty about the forward model is not properly accommodated. We present a Bayes-optimal solution to this generic and pernicious problem for the case of linear forward models, which we call Predictive Inference and Adaptive Filtering (PIAF). PIAF filters incoming sensory information and learns the forward model simultaneously. We show that PIAF is formally related to Kalman filtering and to the Recursive Least Squares linear approximation method, but combines these procedures in a Bayes-optimal fashion. Numerical evaluations confirm that the delusional loop is precluded and that the learning of the forward model is more than 10 times faster when compared to a naive combination of Kalman filtering and Recursive Least Squares. PMID:25983690
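
    The Kalman-filtering view of predictive coding that this abstract builds on can be illustrated with a scalar sketch: predict the next observation with a forward model, measure the prediction error (innovation), and correct the state estimate by a gain-weighted fraction of that error. The model coefficients and noise variances are illustrative, and unlike PIAF, this sketch takes the forward model as known rather than learning it.

```python
def kalman_step(x_est, p_est, u, y, a=1.0, b=0.5, q=0.01, r=0.1):
    """One scalar predict/update cycle.

    x_est, p_est: prior state estimate and its variance
    u: motor command (control input); y: new sensory observation
    a, b: assumed forward-model coefficients; q, r: process/observation noise
    Returns (new estimate, new variance, prediction error).
    """
    x_pred = a * x_est + b * u        # forward-model prediction
    p_pred = a * a * p_est + q
    err = y - x_pred                  # prediction error (innovation)
    k = p_pred / (p_pred + r)         # Kalman gain
    return x_pred + k * err, (1.0 - k) * p_pred, err

x, p = 0.0, 1.0
for u, y in [(1.0, 0.6), (1.0, 1.2), (0.0, 1.1)]:
    x, p, e = kalman_step(x, p, u, y)
    print(round(x, 3), round(e, 3))
```

    The delusional loop the paper warns about corresponds to trusting `a` and `b` too much while also trying to learn them from `err`; PIAF resolves that by tracking uncertainty over the forward model itself.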

  3. Vector adaptive predictive coder for speech and audio

    NASA Technical Reports Server (NTRS)

    Chen, Juin-Hwey (Inventor); Gersho, Allen (Inventor)

    1990-01-01

    A real-time vector adaptive predictive coder which approximates each vector of K speech samples by using each of M fixed vectors in a first codebook to excite a time-varying synthesis filter and picking the vector that minimizes distortion. Predictive analysis for each frame determines parameters used for computing, from vectors in the first codebook, zero-state response vectors that are stored at the same address (index) in a second codebook. Encoding of input speech vectors s_n is then carried out using the second codebook. When the vector that minimizes distortion is found, its index is transmitted to a decoder which has a codebook identical to the first codebook of the encoder. There the index is used to read out a vector that is used to synthesize an output speech vector s_n. The parameters used in the encoder are quantized, for example by using a table, and the indices are transmitted to the decoder, where they are decoded to specify the transfer characteristics of the filters used in producing the vector s_n from the receiver codebook vector selected by the transmitted index.
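
The two-codebook search described above can be sketched in a few lines. This is a hedged toy (illustrative codebook size, vector length, and synthesis filter, not the patented coder's): excitation vectors are filtered once per frame into zero-state responses, and the encoder transmits only the index of the response closest to the input frame.

```python
import numpy as np

# Toy sketch of vector adaptive predictive coding's codebook search.
# All sizes and the filter impulse response are made-up illustrations.

rng = np.random.default_rng(1)
K, M = 4, 16                              # samples per vector, codebook size
codebook = rng.normal(size=(M, K))        # first (excitation) codebook
h = np.array([1.0, 0.6, 0.3])             # toy synthesis-filter impulse response

# Second codebook: zero-state responses, precomputed once per frame
second = np.array([np.convolve(c, h)[:K] for c in codebook])

def encode(frame):
    """Return the index of the synthesized vector minimizing squared error."""
    return int(np.argmin(((second - frame) ** 2).sum(axis=1)))

def decode(index):
    """Decoder side: look up the same vector by its transmitted index."""
    return second[index]

frame = second[7] + rng.normal(scale=0.01, size=K)  # input near codevector 7
idx = encode(frame)
```

Only `idx` needs to cross the channel; the decoder reconstructs the frame from its own copy of the codebook, which is the source of the bit-rate savings.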

  4. Euler Technology Assessment for Preliminary Aircraft Design: Compressibility Predictions by Employing the Cartesian Unstructured Grid SPLITFLOW Code

    NASA Technical Reports Server (NTRS)

    Finley, Dennis B.; Karman, Steve L., Jr.

    1996-01-01

    The objective of the second phase of the Euler Technology Assessment program was to evaluate the ability of Euler computational fluid dynamics codes to predict compressible flow effects over a generic fighter wind tunnel model. This portion of the study was conducted by Lockheed Martin Tactical Aircraft Systems, using an in-house Cartesian-grid code called SPLITFLOW. The Cartesian grid technique offers several advantages, including ease of volume grid generation and a reduced number of cells compared to other grid schemes. SPLITFLOW also includes adaptation of the volume grid during the solution to resolve high-gradient regions. The SPLITFLOW code predictions of configuration forces and moments are shown to be adequate for preliminary design, including predictions of sideslip effects and the effects of geometry variations at low and high angles of attack. The transonic pressure prediction capabilities of SPLITFLOW are shown to be improved over subsonic comparisons. The time required to generate the results from initial surface data is on the order of several hours, including grid generation, which is compatible with the needs of the design environment.

  5. Assessment of 3D Codes for Predicting Liner Attenuation in Flow Ducts

    NASA Technical Reports Server (NTRS)

    Watson, W. R.; Nark, D. M.; Jones, M. G.

    2008-01-01

    This paper presents comparisons of seven propagation codes for predicting liner attenuation in ducts with flow. The selected codes span the spectrum of methods available (finite element, parabolic approximation, and pseudo-time domain) and are collectively representative of the state of the art in the liner industry. These codes are included because they have two-dimensional and three-dimensional versions and can be exported to NASA's Columbia Supercomputer. The basic assumptions, governing differential equations, boundary conditions, and numerical methods underlying each code are briefly reviewed, and an assessment is performed based on two predefined metrics: the accuracy of the predicted attenuation and the wall clock time required to predict it. The assessment is performed over a range of frequencies, mean flow rates, and grazing flow liner impedances commonly used in the liner industry. The primary conclusions of the study are (1) predicted attenuations are in good agreement for rigid wall ducts, (2) the majority of codes compare well to each other and to approximate results from mode theory for soft wall ducts, (3) most codes compare well to measured data on a statistical basis, (4) only the finite element codes with cubic Hermite polynomials capture extremely large attenuations, and (5) wall clock time increases by an order of magnitude or more are observed for a three-dimensional code relative to the corresponding two-dimensional version of the same code.

  6. Computer code for the prediction of nozzle admittance

    NASA Technical Reports Server (NTRS)

    Nguyen, Thong V.

    1988-01-01

    A procedure which can accurately characterize injector designs for large thrust (0.5 to 1.5 million pounds), high pressure (500 to 3000 psia) LOX/hydrocarbon engines is currently under development. In this procedure, a rectangular cross-section combustion chamber is to be used to simulate the lower transverse frequency modes of the large-scale chamber. The chamber will be sized so that the first width mode of the rectangular chamber corresponds to the first tangential mode of the full-scale chamber. Test data to be obtained from the rectangular chamber will be used to assess full-scale engine stability. This requires the development of combustion stability models for rectangular chambers. As part of the combustion stability model development, a computer code, NOAD, was developed based on existing theory to calculate the nozzle admittances for both rectangular and axisymmetric nozzles. This code is described in detail.

  7. Curved Duct Noise Prediction Using the Fast Scattering Code

    NASA Technical Reports Server (NTRS)

    Dunn, M. H.; Tinetti, Ana F.; Farassat, F.

    2007-01-01

    Results of a study to validate the Fast Scattering Code (FSC) as a duct noise predictor, including the effects of curvature, finite impedance on the walls, and uniform background flow, are presented in this paper. Infinite duct theory was used to generate the modal content of the sound propagating within the duct. Liner effects were incorporated via a sound absorbing boundary condition on the scattering surfaces. Simulations for a rectangular duct of constant cross-sectional area have been compared to analytical solutions and experimental data. Comparisons with analytical results indicate that the code can properly calculate a given dominant mode for hardwall surfaces. Simulated acoustic behavior in the presence of lined walls (using hardwall duct modes as incident sound) is consistent with expected trends. Duct curvature was found to enhance weaker modes and reduce pressure amplitude. Agreement between simulated and experimental results for a straight duct with hard walls (no flow) was excellent.

  8. A predictive transport modeling code for ICRF-heated tokamaks

    SciTech Connect

    Phillips, C.K.; Hwang, D.Q. (Plasma Physics Lab.); Houlberg, W.; Attenberger, S.; Tolliver, J.; Hively, L.

    1992-02-01

    In this report, a detailed description of the physics included in the WHIST/RAZE package as well as a few illustrative examples of the capabilities of the package will be presented. An in-depth analysis of ICRF heating experiments using WHIST/RAZE will be discussed in a forthcoming report. A general overview of the philosophy behind the structure of the WHIST/RAZE package, a summary of the features of the WHIST code, and a description of the interface to the RAZE subroutines are presented in section 2 of this report. Details of the physics contained in the RAZE code are examined in section 3. Sample results from the package follow in section 4, with concluding remarks and a discussion of possible improvements to the package in section 5.

  9. Operation of the helicopter antenna radiation prediction code

    NASA Technical Reports Server (NTRS)

    Braeden, E. W.; Klevenow, F. T.; Newman, E. H.; Rojas, R. G.; Sampath, K. S.; Scheik, J. T.; Shamansky, H. T.

    1993-01-01

    HARP is a front end as well as a back end for the AMC and NEWAIR computer codes. These codes use the Method of Moments (MM) and the Uniform Geometrical Theory of Diffraction (UTD), respectively, to calculate the electromagnetic radiation patterns for antennas on aircraft. The major difficulty in using these codes is in the creation of proper input files for particular aircraft and in verifying that these files are, in fact, what is intended. HARP creates these input files in a consistent manner and allows the user to verify them for correctness using sophisticated 2D and 3D graphics. After antenna field patterns are calculated using either MM or UTD, HARP can display the results on the user's screen or provide hardcopy output. Because the process of collecting data, building the 3D models, and obtaining the calculated field patterns was completely automated by HARP, the researcher's productivity can be many times what it could be if these operations had to be done by hand. A complete, step-by-step guide is provided so that the researcher can quickly learn to make use of all the capabilities of HARP.

  10. Predicting Health Care Cost Transitions Using a Multidimensional Adaptive Prediction Process.

    PubMed

    Guo, Xiaobo; Gandy, William; Coberley, Carter; Pope, James; Rula, Elizabeth; Wells, Aaron

    2015-08-01

    Managing population health requires meeting individual care needs while striving for increased efficiency and quality of care. Predictive models can integrate diverse data to provide objective assessment of individual prospective risk to identify individuals requiring more intensive health management in the present. The purpose of this research was to develop and test a predictive modeling approach, Multidimensional Adaptive Prediction Process (MAPP). MAPP is predicated on dividing the population into cost cohorts and then utilizing a collection of models and covariates to optimize future cost prediction for individuals in each cohort. MAPP was tested on 3 years of administrative health care claims starting in 2009 for health plan members (average n=25,143) with evidence of coronary heart disease. A "status quo" reference modeling methodology applied to the total annual population was established for comparative purposes. Results showed that members identified by MAPP contributed $7.9 million and $9.7 million more in 2011 health care costs than the reference model for cohorts increasing in cost or remaining high cost, respectively. Across all cohorts, the additional accurate cost capture of MAPP translated to an annual difference of $1882 per member, a 21% improvement, relative to the reference model. The results demonstrate that improved future cost prediction is achievable using a novel adaptive multiple model approach. Through accurate prospective identification of individuals whose costs are expected to increase, MAPP can help health care entities achieve efficient resource allocation while improving care quality for emergent need individuals who are intermixed among a diverse set of health care consumers. PMID:25607816
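
The cohort idea behind MAPP can be sketched with a toy: divide members into cost cohorts by prior-year cost, then fit a separate predictor per cohort rather than one model for the whole population. The synthetic costs, the 3000-dollar cutoff, and the one-parameter per-cohort models below are our illustrative assumptions, not MAPP's actual covariates or model collection.

```python
import numpy as np

# Toy sketch of cohort-wise cost modeling (illustrative numbers only).

rng = np.random.default_rng(2)
prior_cost = rng.lognormal(mean=8.0, sigma=1.0, size=1000)
# Toy "truth": low-cost members tend to rise, high-cost members to regress down
future_cost = np.where(prior_cost < 3000, prior_cost * 1.5, prior_cost * 0.8)
future_cost *= rng.lognormal(sigma=0.1, size=1000)   # multiplicative noise

cohorts = {"low": prior_cost < 3000, "high": prior_cost >= 3000}
models = {}
for name, mask in cohorts.items():
    # One-parameter model per cohort: future ≈ slope * prior (least squares)
    slope = (prior_cost[mask] @ future_cost[mask]) / (prior_cost[mask] @ prior_cost[mask])
    models[name] = slope
```

A single population-wide slope would average the two regimes together; fitting per cohort recovers the distinct cost dynamics, which is the intuition behind the reported accuracy gains.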

  11. The PLUTO Code for Adaptive Mesh Computations in Astrophysical Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Mignone, A.; Zanni, C.; Tzeferacos, P.; van Straalen, B.; Colella, P.; Bodo, G.

    2012-01-01

    We present a description of the adaptive mesh refinement (AMR) implementation of the PLUTO code for solving the equations of classical and special relativistic magnetohydrodynamics (MHD and RMHD). The current release exploits, in addition to the static grid version of the code, the distributed infrastructure of the CHOMBO library for multidimensional parallel computations over block-structured, adaptively refined grids. We employ a conservative finite-volume approach where primary flow quantities are discretized at the cell center in a dimensionally unsplit fashion using the Corner Transport Upwind method. Time stepping relies on a characteristic tracing step where piecewise parabolic method, weighted essentially non-oscillatory, or slope-limited linear interpolation schemes can be handily adopted. A characteristic decomposition-free version of the scheme is also illustrated. The solenoidal condition of the magnetic field is enforced by augmenting the equations with a generalized Lagrange multiplier providing propagation and damping of divergence errors through a mixed hyperbolic/parabolic explicit cleaning step. Among the novel features, we describe an extension of the scheme to include non-ideal dissipative processes, such as viscosity, resistivity, and anisotropic thermal conduction without operator splitting. Finally, we illustrate an efficient treatment of point-local, potentially stiff source terms over hierarchical nested grids by taking advantage of the adaptivity in time. Several multidimensional benchmarks and applications to problems of astrophysical relevance assess the potentiality of the AMR version of PLUTO in resolving flow features separated by large spatial and temporal disparities.
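
One building block of any block-structured AMR scheme like the one described above is a criterion for flagging cells that need refinement. The sketch below is our illustration, not PLUTO's actual criterion: it flags 1D cells whose normalized jump relative to a neighbour exceeds a threshold, which is where a refined grid patch would be placed.

```python
import numpy as np

# Toy AMR ingredient: flag cells near steep gradients for refinement.
# Criterion and threshold are illustrative, not PLUTO's defaults.

def flag_for_refinement(q, threshold=0.1):
    """Boolean mask of cells whose normalized neighbour jump exceeds threshold."""
    jump = np.abs(np.diff(q))
    scale = np.abs(q[:-1]) + np.abs(q[1:]) + 1e-12   # avoid divide-by-zero
    err = np.zeros_like(q)
    err[:-1] = np.maximum(err[:-1], jump / scale)    # jump flags its left cell...
    err[1:] = np.maximum(err[1:], jump / scale)      # ...and its right cell
    return err > threshold

# A smooth background with one sharp discontinuity (a 1D "shock")
x = np.linspace(0.0, 1.0, 200)
q = np.where(x < 0.5, 1.0, 0.1) + 0.01 * np.sin(2 * np.pi * x)
flags = flag_for_refinement(q)
```

Only the two cells straddling the discontinuity are flagged; the smooth sinusoidal variation stays below threshold, so the fine grid is spent where the flow features actually are.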

  13. Predictive codes of familiarity and context during the perceptual learning of facial identities

    NASA Astrophysics Data System (ADS)

    Apps, Matthew A. J.; Tsakiris, Manos

    2013-11-01

    Face recognition is a key component of successful social behaviour. However, the computational processes that underpin perceptual learning and recognition as faces transition from unfamiliar to familiar are poorly understood. In predictive coding, learning occurs through prediction errors that update stimulus familiarity, but recognition is a function of both stimulus and contextual familiarity. Here we show that behavioural responses on a two-option face recognition task can be predicted by the level of contextual and facial familiarity in a computational model derived from predictive-coding principles. Using fMRI, we show that activity in the superior temporal sulcus varies with the contextual familiarity in the model, whereas activity in the fusiform face area covaries with the prediction error parameter that updated facial familiarity. Our results characterize the key computations underpinning the perceptual learning of faces, highlighting that the functional properties of face-processing areas conform to the principles of predictive coding.
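
The core computation, stimulus familiarity updated by a prediction error, can be sketched as a delta rule. The learning rate and the coding of each exposure as outcome 1.0 are our assumptions for illustration, not the paper's fitted model parameters.

```python
# Toy delta-rule sketch of familiarity learning via prediction errors.

def update_familiarity(familiarity, outcome, lr=0.3):
    """One trial: move familiarity toward the outcome by lr * prediction error."""
    prediction_error = outcome - familiarity
    return familiarity + lr * prediction_error, prediction_error

familiarity = 0.0          # face starts fully unfamiliar
errors = []
for _ in range(10):        # ten exposures to the same face (outcome = 1.0)
    familiarity, pe = update_familiarity(familiarity, 1.0)
    errors.append(pe)
```

Familiarity climbs toward 1 while the prediction errors shrink trial by trial, matching the pattern the fusiform face area signal is reported to track.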

  14. ICAN: A versatile code for predicting composite properties

    NASA Technical Reports Server (NTRS)

    Ginty, C. A.; Chamis, C. C.

    1986-01-01

    The Integrated Composites ANalyzer (ICAN), a stand-alone computer code, incorporates micromechanics equations and laminate theory to analyze/design multilayered fiber composite structures. Procedures for both the implementation of new data in ICAN and the selection of appropriate measured data are summarized for: (1) composite systems subject to severe thermal environments; (2) woven fabric/cloth composites; and (3) the selection of new composite systems including those made from high strain-to-fracture fibers. The comparisons demonstrate the versatility of ICAN as a reliable method for determining composite properties suitable for preliminary design.

  15. Genome-environment associations in sorghum landraces predict adaptive traits

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Improving environmental adaptation in crops is essential for food security under global change, but phenotyping adaptive traits remains a major bottleneck. If associations between single-nucleotide polymorphism (SNP) alleles and environment of origin in crop landraces reflect adaptation, then these ...

  16. Monte Carlo Predictions of Prompt Fission Neutrons and Photons: a Code Comparison

    NASA Astrophysics Data System (ADS)

    Talou, P.; Kawano, T.; Stetcu, I.; Vogt, R.; Randrup, J.

    2014-04-01

    This paper reports on initial comparisons between the LANL CGMF and LBNL/LLNL FREYA codes, which both aim at computing prompt fission neutrons and gammas. While the methodologies used in both codes are somewhat similar, the detailed implementations and physical assumptions are different. We are investigating how some of these differences impact predictions.

  17. Signature Product Code for Predicting Protein-Protein Interactions

    SciTech Connect

    Martin, Shawn B.; Brown, William M.

    2004-09-25

    The SigProdV1.0 software consists of four programs which together allow the prediction of protein-protein interactions using only amino acid sequences and experimental data. The software is based on the use of tensor products of amino acid trimers coupled with classifiers known as support vector machines. Essentially the program looks for amino acid trimer pairs which occur more frequently in protein pairs which are known to interact. These trimer pairs are then used to make predictions about unknown protein pairs. A detailed description of the method can be found in the paper: S. Martin, D. Roe, J.L. Faulon. "Predicting protein-protein interactions using signature products," Bioinformatics, available online from Advance Access, Aug. 19, 2004.
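
The trimer-pair ("signature product") feature construction can be sketched as follows. The sequences are made up for illustration; the real software feeds such features to support vector machines rather than stopping at the counts.

```python
from collections import Counter
from itertools import product

# Toy sketch of signature-product features: a protein is a bag of amino
# acid trimers, and a protein *pair* is the tensor product of the two
# trimer-count vectors, so each feature is a trimer pair.

def trimers(seq):
    """Count overlapping length-3 substrings of an amino acid sequence."""
    return Counter(seq[i:i + 3] for i in range(len(seq) - 2))

def signature_product(seq_a, seq_b):
    """Map each trimer pair to the product of its counts in the two proteins."""
    ta, tb = trimers(seq_a), trimers(seq_b)
    return {(x, y): ca * cb for (x, ca), (y, cb) in product(ta.items(), tb.items())}

feats = signature_product("MKVLAAMKV", "GAVGAV")   # hypothetical sequences
```

Trimer pairs that occur disproportionately often across known interacting pairs end up with large weights in the downstream classifier, which is what drives the predictions.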

  18. Fire aerosol experiment and comparisons with computer code predictions

    NASA Astrophysics Data System (ADS)

    Gregory, W. S.; Nichols, B. D.; White, B. W.; Smith, P. R.; Leslie, I. H.; Corkran, J. R.

    1988-08-01

    Los Alamos National Laboratory, in cooperation with New Mexico State University, has carried out a series of tests to provide experimental data on fire-generated aerosol transport. These data will be used to verify the aerosol transport capabilities of the FIRAC computer code. FIRAC was developed by Los Alamos for the U.S. Nuclear Regulatory Commission. It is intended to be used by safety analysts to evaluate the effects of hypothetical fires on nuclear plants. One of the most significant aspects of this analysis deals with smoke and radioactive material movement throughout the plant. The tests have been carried out using an industrial furnace that can generate gas temperatures up to 300 C. To date, we have used quartz aerosol with a median diameter of about 10 microns as the fire aerosol simulant. We also plan to use fire-generated aerosols of polystyrene and polymethyl methacrylate (PMMA). The test variables include two nominal gas flow rates (150 and 300 cu ft/min) and three nominal gas temperatures (ambient, 150 C, and 300 C). The test results are presented in the form of plots of aerosol deposition vs length of duct. In addition, the mass of aerosol caught in a high-efficiency particulate air (HEPA) filter during the tests is reported. The tests are simulated with the FIRAC code, and the results are compared with the experimental data.

  19. Similarities in error processing establish a link between saccade prediction at baseline and adaptation performance

    PubMed Central

    Shelhamer, Mark

    2014-01-01

    Adaptive processes are crucial in maintaining the accuracy of body movements and rely on error storage and processing mechanisms. Although classically studied with adaptation paradigms, evidence of these ongoing error-correction mechanisms should also be detectable in other movements. Despite this connection, current adaptation models are challenged when forecasting adaptation ability with measures of baseline behavior. On the other hand, we have previously identified an error-correction process present in a particular form of baseline behavior, the generation of predictive saccades. This process exhibits long-term intertrial correlations that decay gradually (as a power law) and are best characterized with the tools of fractal time series analysis. Since this baseline task and adaptation both involve error storage and processing, we sought to find a link between the intertrial correlations of the error-correction process in predictive saccades and the ability of subjects to alter their saccade amplitudes during an adaptation task. Here we find just such a relationship: the stronger the intertrial correlations during prediction, the more rapid the acquisition of adaptation. This reinforces the links found previously between prediction and adaptation in motor control and suggests that current adaptation models are inadequate to capture the complete dynamics of these error-correction processes. A better understanding of the similarities in error processing between prediction and adaptation might provide the means to forecast adaptation ability with a baseline task. This would have many potential uses in physical therapy and the general design of paradigms of motor adaptation. PMID:24598520

  20. Less Can Be More: RNA-Adapters May Enhance Coding Capacity of Replicators

    PubMed Central

    de Boer, Folkert K.; Hogeweg, Paulien

    2012-01-01

    It is still not clear how prebiotic replicators evolved towards the complexity found in present day organisms. Within the most realistic scenario for prebiotic evolution, known as the RNA world hypothesis, such complexity has arisen from replicators consisting solely of RNA. Within contemporary life, remarkably many RNAs are involved in modifying other RNAs. In hindsight, such RNA-RNA modification might have helped in alleviating the limits of complexity posed by the information threshold for RNA-only replicators. Here we study the possible role of such self-modification in early evolution, by modeling the evolution of protocells as evolving replicators, which have the opportunity to incorporate these mechanisms as a molecular tool. Evolution is studied towards a set of 25 arbitrary ‘functional’ structures, while avoiding all other (misfolded) structures, which are considered to be toxic and increase the death-rate of a protocell. The modeled protocells contain a genotype of different RNA-sequences while their phenotype is the ensemble of secondary structures they can potentially produce from these RNA-sequences. One of the secondary structures explicitly codes for a simple sequence-modification tool. This ‘RNA-adapter’ can block certain positions on other RNA-sequences through antisense base-pairing. The altered sequence can produce an alternative secondary structure, which may or may not be functional. We show that the modifying potential of interacting RNA-sequences enables these protocells to evolve high fitness under high mutation rates. Moreover, our model shows that because of toxicity of misfolded molecules, redundant coding impedes the evolution of self-modification machinery, in effect restraining the evolvability of coding structures. Hence, high mutation rates can actually promote the evolution of complex coding structures by reducing redundant coding. Protocells can successfully use RNA-adapters to modify their genotype-phenotype mapping in
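
The antisense-blocking mechanism of the RNA-adapter can be illustrated with a toy. This is our simplification: the adapter binds only where it is an exact reverse complement of a site on the target, and blocked bases are written as N so that downstream folding effectively sees an altered sequence.

```python
# Toy sketch of antisense blocking by an RNA-adapter (our simplification:
# perfect-match binding only, masked bases written as "N").

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna):
    """Sequence an adapter would base-pair with, read 5'->3'."""
    return "".join(COMPLEMENT[b] for b in reversed(rna))

def block_site(target, adapter):
    """Mask the first site on `target` that the adapter can base-pair with."""
    site = reverse_complement(adapter)
    i = target.find(site)
    if i == -1:
        return target                      # adapter does not bind
    return target[:i] + "N" * len(site) + target[i + len(site):]

masked = block_site("GGGAUCCCAUGC", "GGAU")  # hypothetical target and adapter
```

The masked sequence can fold into a different secondary structure than the unmodified one, which is how the model lets one genotype map to alternative phenotypes.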

  1. Precise minds in uncertain worlds: predictive coding in autism.

    PubMed

    Van de Cruys, Sander; Evers, Kris; Van der Hallen, Ruth; Van Eylen, Lien; Boets, Bart; de-Wit, Lee; Wagemans, Johan

    2014-10-01

    There have been numerous attempts to explain the enigma of autism, but existing neurocognitive theories often provide merely a refined description of 1 cluster of symptoms. Here we argue that deficits in executive functioning, theory of mind, and central coherence can all be understood as the consequence of a core deficit in the flexibility with which people with autism spectrum disorder can process violations to their expectations. More formally we argue that the human mind processes information by making and testing predictions and that the errors resulting from violations to these predictions are given a uniform, inflexibly high weight in autism spectrum disorder. The complex, fluctuating nature of regularities in the world and the stochastic and noisy biological system through which people experience it require that, in the real world, people not only learn from their errors but also need to (meta-)learn to sometimes ignore errors. Especially when situations (e.g., social) or stimuli (e.g., faces) become too complex or dynamic, people need to tolerate a certain degree of error in order to develop a more abstract level of representation. Starting from an inability to flexibly process prediction errors, a number of seemingly core deficits become logically secondary symptoms. Moreover, an insistence on sameness or the acting out of stereotyped and repetitive behaviors can be understood as attempts to provide a reassuring sense of predictive success in a world otherwise filled with error. (PsycINFO Database Record (c) 2014 APA, all rights reserved). PMID:25347312

  2. AstroBEAR: Adaptive Mesh Refinement Code for Ideal Hydrodynamics & Magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Cunningham, Andrew J.; Frank, Adam; Varniere, Peggy; Mitran, Sorin; Jones, Thomas W.

    2011-04-01

    AstroBEAR is a modular hydrodynamic & magnetohydrodynamic code environment designed for a variety of astrophysical applications. It uses the BEARCLAW package, a multidimensional, Eulerian computational code used to solve hyperbolic systems of equations. AstroBEAR allows adaptive-mesh-refinement (AMR) simulations in 2, 2.5 (i.e., cylindrical), and 3 dimensions, in either Cartesian or curvilinear coordinates. Parallel applications are supported through the MPI architecture. AstroBEAR is written in Fortran 90/95 using standard libraries. AstroBEAR supports hydrodynamic (HD) and magnetohydrodynamic (MHD) applications using a variety of spatial and temporal methods. MHD simulations are kept divergence-free via the constrained transport (CT) methods of Balsara & Spicer. Three different equation of state environments are available: ideal gas, gas with differing isentropic γ, and the analytic Thomas-Fermi formulation of A.R. Bell [2]. Current work is being done to develop a more advanced real gas equation of state.

  3. Pilot-Assisted Adaptive Channel Estimation for Coded MC-CDMA with ICI Cancellation

    NASA Astrophysics Data System (ADS)

    Yui, Tatsunori; Tomeba, Hiromichi; Adachi, Fumiyuki

    One of the promising wireless access techniques for the next generation mobile communications systems is multi-carrier code division multiple access (MC-CDMA). MC-CDMA can provide good transmission performance owing to the frequency diversity effect in a severe frequency-selective fading channel. However, the bit error rate (BER) performance of coded MC-CDMA is inferior to that of orthogonal frequency division multiplexing (OFDM) due to the residual inter-code interference (ICI) after frequency-domain equalization (FDE). Recently, we proposed a frequency-domain soft interference cancellation (FDSIC) to reduce the residual ICI and confirmed by computer simulation that the MC-CDMA with FDSIC provides better BER performance than OFDM. However, ideal channel estimation was assumed. In this paper, we propose adaptive decision-feedback channel estimation (ADFCE) and evaluate by computer simulation the average BER and throughput performances of turbo-coded MC-CDMA with FDSIC. We show that even if a practical channel estimation is used, MC-CDMA with FDSIC can still provide better performance than OFDM.
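
The frequency-domain equalization step whose residual interference FDSIC is designed to cancel can be sketched for a single BPSK block. The one-tap MMSE weights are standard; the channel taps, block size, and SNR below are illustrative assumptions, and spreading/despreading is omitted for brevity.

```python
import numpy as np

# Toy sketch of one-tap MMSE frequency-domain equalization (FDE) over a
# frequency-selective channel. Parameters are illustrative only.

rng = np.random.default_rng(3)
N = 64
h = np.array([1.0, 0.5, 0.25])                   # toy multipath channel taps
H = np.fft.fft(h, N)                             # per-subcarrier channel gains
snr = 100.0

bits = rng.integers(0, 2, N)
x = 2.0 * bits - 1.0                             # BPSK block
# Circular convolution models transmission with a cyclic prefix
y = np.fft.ifft(np.fft.fft(x) * H)
y += (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2 * snr)

W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)    # one-tap MMSE weights
x_hat = np.fft.ifft(np.fft.fft(y) * W).real
recovered = (x_hat > 0).astype(int)
```

Because the MMSE weights only approximately invert the channel, code orthogonality is not perfectly restored after despreading in MC-CDMA, leaving the residual inter-code interference that motivates the cancellation scheme above.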

  4. An experimental infrared sensor using adaptive coded apertures for enhanced resolution

    NASA Astrophysics Data System (ADS)

    Gordon, Neil T.; de Villiers, Geoffrey D.; Ridley, Kevin D.; Bennett, Charlotte R.; McNie, Mark E.; Proudler, Ian K.; Russell, Lee; Slinger, Christopher W.; Gilholm, Kevin

    2010-08-01

    Adaptive coded aperture imaging (ACAI) has the potential to greatly enhance the performance of sensing systems by allowing sub-detector-pixel image and tracking resolution. A small experimental system has been set up to allow practical demonstration of these benefits in the mid-infrared, as well as investigation of the calibration and stability of the system. The system can also be used to test modeling of similar ACAI systems in the infrared. The demonstrator can use either a set of fixed masks or a novel MOEMS adaptive transmissive spatial light modulator. This paper discusses the design and testing of the system, including the development of novel decoding algorithms, and some initial imaging results are presented.

  5. Simulation of Supersonic Jet Noise with the Adaptation of Overflow CFD Code and Kirchhoff Surface Integral

    NASA Technical Reports Server (NTRS)

    Kandula, Max; Caimi, Raoul; Steinrock, T. (Technical Monitor)

    2001-01-01

    An acoustic prediction capability for supersonic axisymmetric jets was developed on the basis of OVERFLOW Navier-Stokes CFD (Computational Fluid Dynamics) code of NASA Langley Research Center. Reynolds-averaged turbulent stresses in the flow field are modeled with the aid of Spalart-Allmaras one-equation turbulence model. Appropriate acoustic and outflow boundary conditions were implemented to compute time-dependent acoustic pressure in the nonlinear source-field. Based on the specification of acoustic pressure, its temporal and normal derivatives on the Kirchhoff surface, the near-field and the far-field sound pressure levels are computed via Kirchhoff surface integral, with the Kirchhoff surface chosen to enclose the nonlinear sound source region described by the CFD code. The methods are validated by a comparison of the predictions of sound pressure levels with the available data for an axisymmetric turbulent supersonic (Mach 2) perfectly expanded jet.

  6. Effects of Selective Adaptation on Coding Sugar and Salt Tastes in Mixtures

    PubMed Central

    Goyert, Holly F.; Formaker, Bradley K.; Hettinger, Thomas P.

    2012-01-01

    Little is known about coding of taste mixtures in complex dynamic stimulus environments. A protocol developed for odor stimuli was used to test whether rapid selective adaptation extracted sugar and salt component tastes from mixtures as it did component odors. Seventeen human subjects identified taste components of “salt + sugar” mixtures. In 4 sessions, 16 adapt–test stimulus pairs were presented as atomized, 150-μL “taste puffs” to the tongue tip to simulate odor sniffs. Stimuli were NaCl, sucrose, “NaCl + sucrose,” and water. In unadapted mixtures of 2 concentrations of NaCl (0.1 or 0.05 M) with sucrose at 3 times those concentrations (0.3 or 0.15 M), the sugar was identified 98% of the time but the suppressed salt only 65% of the time. Rapid selective adaptation decreased identification of sugar and salt preadapted ambient components to 35%, well below the 74% self-adapted level, despite variation in stimulus concentration and adapting time (<5 or >10 s). Identification of sugar and salt extra mixture components, at 96%, was as certain as identification of single compounds. The results revealed that salt–sugar mixture suppression, dependent on relative mixture-component concentration, was mutual. Furthermore, like odors, stronger and recent tastes are emphasized in dynamic experimental conditions replicating natural situations. PMID:22562765

  7. Signature Product Code for Predicting Protein-Protein Interactions

    2004-09-25

    The SigProdV1.0 software consists of four programs which together allow the prediction of protein-protein interactions using only amino acid sequences and experimental data. The software is based on the use of tensor products of amino acid trimers coupled with classifiers known as support vector machines. Essentially, the program looks for amino acid trimer pairs which occur more frequently in protein pairs that are known to interact. These trimer pairs are then used to make predictions about unknown protein pairs. A detailed description of the method can be found in the paper: S. Martin, D. Roe, J.L. Faulon, "Predicting protein-protein interactions using signature products," Bioinformatics, available online from Advance Access, Aug. 19, 2004.
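
The trimer-pair idea in the abstract can be illustrated with a toy sketch (this is not the SigProdV1.0 implementation; function names here are hypothetical). Each protein is reduced to counts of its amino acid 3-mers, and the tensor (outer) product of two such signatures yields one feature per trimer pair:

```python
def trimers(seq):
    """Count amino acid trimers (3-mers) in a protein sequence."""
    counts = {}
    for i in range(len(seq) - 2):
        t = seq[i:i + 3]
        counts[t] = counts.get(t, 0) + 1
    return counts

def signature_product(seq_a, seq_b):
    """Tensor product of the two trimer signatures: one feature per
    trimer pair, with value = product of the pair's counts."""
    ca, cb = trimers(seq_a), trimers(seq_b)
    return {(ta, tb): na * nb
            for ta, na in ca.items()
            for tb, nb in cb.items()}

# Toy protein pair (sequences are illustrative only)
features = signature_product("MKVLA", "GAVLK")
```

In the actual method, such sparse feature vectors are fed to a support vector machine trained on known interacting and non-interacting protein pairs.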

  8. The Use of Data-Containing Codes for Prediction of Yields of Radioactive Isotopes

    NASA Astrophysics Data System (ADS)

    Chechenin, N. G.; Chuvilskaya, T. V.; Kadmenskii, A. G.; Shirokova, A. A.

    2015-11-01

    The capability of modern nuclear reaction codes to predict the yields of exotic nuclei in various reactions is discussed. The advanced data-containing codes EMPIRE and TALYS are considered. The yields of radioactive isotopes in high-energy p + 27Al and p + 183W collisions are calculated to illustrate the properties of the codes, their common elements, and their particular features. The calculations confirm the potential of the codes for estimating the yields of various isotopes in reactions induced by high-energy protons.

  9. PARC Navier-Stokes code upgrade and validation for high speed aeroheating predictions

    NASA Technical Reports Server (NTRS)

    Liver, Peter A.; Praharaj, Sarat C.; Seaford, C. Mark

    1990-01-01

    Applications of the PARC full Navier-Stokes code for hypersonic flowfield and aeroheating predictions around blunt bodies such as the Aeroassist Flight Experiment (AFE) and Aeroassisted Orbital Transfer Vehicle (AOTV) are evaluated. Two-dimensional/axisymmetric and three-dimensional perfect-gas versions of the code were upgraded and tested against benchmark wind tunnel cases of hemisphere-cylinder, three-dimensional AFE forebody, and axisymmetric AFE and AOTV aerobrake/wake flowfields. PARC calculations are in good agreement with experimental data and with the results of similar computer codes. Difficulties encountered in flowfield and heat transfer predictions due to the effects of grid density, boundary conditions (such as the singular stagnation-line axis), and artificial dissipation terms are presented, together with the subsequent improvements made to the code. The experience gained with the perfect-gas code is currently being utilized in applications of an equilibrium-air real-gas PARC version developed at REMTECH.

  10. High Speed Research Noise Prediction Code (HSRNOISE) User's and Theoretical Manual

    NASA Technical Reports Server (NTRS)

    Golub, Robert (Technical Monitor); Rawls, John W., Jr.; Yeager, Jessie C.

    2004-01-01

    This report describes a computer program, HSRNOISE, that predicts noise levels for a supersonic aircraft powered by mixed flow turbofan engines with rectangular mixer-ejector nozzles. It fully documents the noise prediction algorithms, provides instructions for executing the HSRNOISE code, and provides predicted noise levels for the High Speed Research (HSR) program Technology Concept (TC) aircraft. The component source noise prediction algorithms were developed jointly by Boeing, General Electric Aircraft Engines (GEAE), NASA and Pratt & Whitney during the course of the NASA HSR program. Modern Technologies Corporation developed an alternative mixer ejector jet noise prediction method under contract to GEAE that has also been incorporated into the HSRNOISE prediction code. Algorithms for determining propagation effects and calculating noise metrics were taken from the NASA Aircraft Noise Prediction Program.

  11. Ducted-Fan Engine Acoustic Predictions using a Navier-Stokes Code

    NASA Technical Reports Server (NTRS)

    Rumsey, C. L.; Biedron, R. T.; Farassat, F.; Spence, P. L.

    1998-01-01

    A Navier-Stokes computer code is used to predict one of the ducted-fan engine acoustic modes that results from rotor-wake/stator-blade interaction. A patched sliding-zone interface is employed to pass information between the moving rotor row and the stationary stator row. The code produces averaged aerodynamic results downstream of the rotor that agree well with a widely used average-passage code. The acoustic mode of interest is generated successfully by the code and is propagated well upstream of the rotor; temporal and spatial numerical resolution are fine enough such that attenuation of the signal is small. Two acoustic codes are used to find the far-field noise. Near-field propagation is computed by using Eversman's wave envelope code, which is based on a finite-element model. Propagation to the far field is accomplished by using the Kirchhoff formula for moving surfaces with the results of the wave envelope code as input data. Comparison of measured and computed far-field noise levels shows fair agreement in the range of directivity angles where the peak radiation lobes from the inlet are observed. Although only a single acoustic mode is targeted in this study, the main conclusion is a proof-of-concept: Navier-Stokes codes can be used both to generate and propagate rotor/stator acoustic modes forward through an engine, where the results can be coupled to other far-field noise prediction codes.

  12. OrfPredictor: predicting protein-coding regions in EST-derived sequences.

    PubMed

    Min, Xiang Jia; Butler, Gregory; Storms, Reginald; Tsang, Adrian

    2005-07-01

    OrfPredictor is a web server designed for identifying protein-coding regions in expressed sequence tag (EST)-derived sequences. For query sequences with a hit in BLASTX, the program predicts the coding regions based on the translation reading frames identified in the BLASTX alignments; otherwise, it predicts the most probable coding region based on the intrinsic signals of the query sequences. The output is the predicted peptide sequences in FASTA format, with a definition line that includes the query ID, the translation reading frame, and the nucleotide positions where the coding region begins and ends. OrfPredictor facilitates the annotation of EST-derived sequences, particularly for large-scale EST projects. OrfPredictor is available at https://fungalgenome.concordia.ca/tools/OrfPredictor.html. PMID:15980561
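
For queries without a BLASTX hit, prediction falls back to intrinsic signals. A heavily simplified stand-in for that idea is a longest-ORF scan of the forward strand; OrfPredictor's actual model is more sophisticated (and also handles the reverse strand and frameshifts), so this sketch is illustrative only:

```python
def longest_orf(dna):
    """Return (start, end, frame) of the longest ATG..stop open reading
    frame on the forward strand, end-exclusive and including the stop
    codon; None if no complete ORF exists."""
    stops = {"TAA", "TAG", "TGA"}
    best = None
    for frame in range(3):
        i = frame
        while i + 3 <= len(dna):
            if dna[i:i + 3] == "ATG":
                j = i + 3
                while j + 3 <= len(dna) and dna[j:j + 3] not in stops:
                    j += 3
                if j + 3 <= len(dna):  # found an in-frame stop codon
                    if best is None or j + 3 - i > best[1] - best[0]:
                        best = (i, j + 3, frame)
                    i = j + 3
                    continue
            i += 3
    return best
```

For example, `longest_orf("CCATGAAATTTTAGGG")` locates the ORF `ATG AAA TTT TAG` starting at position 2 in frame 2.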

  13. Dynamic optical aberration correction with adaptive coded apertures techniques in conformal imaging

    NASA Astrophysics Data System (ADS)

    Li, Yan; Hu, Bin; Zhang, Pengbin; Zhang, Binglong

    2015-02-01

    Conformal imaging systems must contend with dynamic aberration during optical design. In classical optical design, meeting demanding combinations of field of view, optical speed, environmental adaptation, and imaging quality can be achieved only by introducing increasingly complex aberration correctors. In recent years, computational imaging based on adaptive coded-aperture techniques, which offer several potential advantages over more traditional optical systems, has proven particularly suitable for military infrared imaging systems. The merits of this new concept include low mass, volume, and moments of inertia; potentially lower costs; graceful failure modes; and steerable fields of regard with no macroscopic moving parts. An example conformal imaging system design in which a set of binary coded-aperture masks is optimized is presented in this paper. Simulation results show that optical performance is closely related to the mask design and to the optimization of the reconstruction algorithm. Used as a dynamic aberration corrector, a binary-amplitude mask located at the aperture stop is optimized to mitigate dynamic optical aberrations as the field of regard changes, while allowing sufficient information to be recorded by the detector for recovery of a sharp image by digital image restoration in the conformal optical system.

  14. A Longitudinal Focus on Fathers: Predicting Toddler Adaptation.

    ERIC Educational Resources Information Center

    Grossman, Frances Kaplan

    The purposes of this longitudinal study were: (1) to see whether it was possible to discern direct or indirect effects of mothers' and fathers' functioning on their 2-year-olds' cognitive, social, and psychomotor adaptations, and (2) to examine separately the relationships between parent functioning and the adaptations of first and later born…

  15. Hierarchical Novelty-Familiarity Representation in the Visual System by Modular Predictive Coding

    PubMed Central

    Vladimirskiy, Boris; Urbanczik, Robert; Senn, Walter

    2015-01-01

    Predictive coding has been previously introduced as a hierarchical coding framework for the visual system. At each level, activity predicted by the higher level is dynamically subtracted from the input, while the difference in activity continuously propagates further. Here we introduce modular predictive coding as a feedforward hierarchy of prediction modules without back-projections from higher to lower levels. Within each level, recurrent dynamics optimally segregates the input into novelty and familiarity components. Although the anatomical feedforward connectivity passes through the novelty-representing neurons, it is nevertheless the familiarity information which is propagated to higher levels. This modularity results in a twofold advantage compared to the original predictive coding scheme: the familiarity-novelty representation forms quickly, and at each level the full representational power is exploited for an optimized readout. As we show, natural images are successfully compressed and can be reconstructed by the familiarity neurons at each level. Missing information on different spatial scales is identified by novelty neurons and complements the familiarity representation. Furthermore, by virtue of the recurrent connectivity within each level, non-classical receptive field properties still emerge. Hence, modular predictive coding is a biologically realistic metaphor for the visual system that dynamically extracts novelty at various scales while propagating the familiarity information. PMID:26670700
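
The within-level segregation into familiarity and novelty can be caricatured as a projection onto a learned basis: familiarity is the part of the input the basis can reconstruct, and novelty is the residual. A minimal numpy sketch, with the caveat that the paper's recurrent dynamics are replaced here by a direct least-squares solve (assumed to reach the same fixed point) and the basis is random rather than learned:

```python
import numpy as np

def split_level(x, W):
    """One level of a modular predictive-coding-like split: familiarity
    is the best reconstruction of input x from the columns of basis W;
    novelty is the residual the basis cannot explain."""
    coeffs, *_ = np.linalg.lstsq(W, x, rcond=None)
    familiarity = W @ coeffs
    novelty = x - familiarity
    return familiarity, novelty

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 3))          # 3 "learned" features in an 8-d input space
x = W @ np.array([1.0, -2.0, 0.5])   # an input fully explained by the basis
fam, nov = split_level(x, W)         # here novelty is ~0 by construction
```

Stacking such levels, with each level receiving the previous level's familiarity component, gives the feedforward hierarchy described in the abstract.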

  16. Parallelization of GeoClaw code for modeling geophysical flows with adaptive mesh refinement on many-core systems

    USGS Publications Warehouse

    Zhang, S.; Yuen, D.A.; Zhu, A.; Song, S.; George, D.L.

    2011-01-01

    We parallelized the GeoClaw code on a one-level grid using OpenMP in March 2011 to meet the urgent need of simulating tsunami waves near shore from the Tohoku 2011 event, achieving over 75% of the potential speed-up on an eight-core Dell Precision T7500 workstation [1]. After submitting that work to SC11 (the International Conference for High Performance Computing), we obtained an unreleased OpenMP version of GeoClaw from David George, who developed the GeoClaw code as part of his Ph.D. thesis. In this paper, we show the complementary characteristics of the two approaches used in parallelizing GeoClaw and the speed-up obtained by combining the advantages of each approach with adaptive mesh refinement (AMR), demonstrating that GeoClaw can run efficiently on many-core systems. We also show a novel simulation of the Tohoku 2011 tsunami waves inundating the Sendai airport and the Fukushima nuclear power plants, over which a finest grid spacing of 20 meters is achieved through 4-level AMR. This simulation yields quite good predictions of the wave heights and travel times of the tsunami waves. © 2011 IEEE.
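
The 20-meter finest grid quoted above arises from successive refinement of a coarse grid. A one-line sketch of that arithmetic, with hypothetical refinement ratios (the paper's actual ratios are not stated here):

```python
from functools import reduce

def finest_spacing(dx_coarse, ratios):
    """Grid spacing on the finest AMR level after applying the given
    per-level refinement ratios to a coarse spacing (meters)."""
    return reduce(lambda dx, r: dx / r, ratios, dx_coarse)

# e.g. a 160 m coarse grid refined 2x at each of three level interfaces
dx_fine = finest_spacing(160.0, [2, 2, 2])  # 20.0 m
```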

  17. Maneuvering Rotorcraft Noise Prediction: A New Code for a New Problem

    NASA Technical Reports Server (NTRS)

    Brentner, Kenneth S.; Bres, Guillaume A.; Perez, Guillaume; Jones, Henry E.

    2002-01-01

    This paper presents the unique aspects of the development of an entirely new maneuver noise prediction code called PSU-WOPWOP. The main focus of the code is the aeroacoustic aspects of the maneuver noise problem, when the aeromechanical input data are provided (namely aircraft and blade motion, and blade airloads). The PSU-WOPWOP noise prediction capability was developed for rotors in steady and transient maneuvering flight. Featuring an object-oriented design, the code allows great flexibility for complex rotor configurations and motion (including multiple rotors and full aircraft motion). The relative locations and number of hinges, flexures, and body motions can be arbitrarily specified to match any specific rotorcraft. An analysis of algorithm efficiency is performed for maneuver noise prediction, along with a description of the tradeoffs made specifically for the maneuvering noise problem. Noise predictions for the main rotor of a rotorcraft in steady descent, transient (arrested) descent, hover, and a mild "pop-up" maneuver are demonstrated.

  18. An adaptive coded aperture imager: building, testing and trialing a super-resolving terrestrial demonstrator

    NASA Astrophysics Data System (ADS)

    Slinger, Christopher W.; Bennett, Charlotte R.; Dyer, Gavin; Gilholm, Kevin; Gordon, Neil; Huckridge, David; McNie, Mark; Penney, Richard W.; Proudler, Ian K.; Rice, Kevin; Ridley, Kevin D.; Russell, Lee; de Villiers, Geoffrey D.; Watson, Philip J.

    2011-09-01

    There is an increasingly important requirement for day-and-night, wide-field-of-view imaging and tracking in both imaging and sensing applications, including military, security, and remote sensing. We describe the development of a proof-of-concept demonstrator of an adaptive coded-aperture imager operating in the mid-wave infrared to address these requirements. It consists of a coded-aperture mask, a set of optics, and a 4k x 4k focal plane array (FPA). This system can produce images with a resolution better than that achieved by the detector itself (i.e., superresolution) by combining multiple frames of data recorded with different coded-aperture mask patterns. This superresolution capability has been demonstrated both in the laboratory and in imaging of real-world scenes, the highest resolution achieved being ½ the FPA pixel pitch. The resolution of this configuration is currently limited by vibration; theoretically, ¼ pixel pitch should be possible. Comparisons between conventional and ACAI solutions to these requirements show significant advantages in size, weight, and cost for the ACAI approach.

  19. Spatially adaptive bases in wavelet-based coding of semi-regular meshes

    NASA Astrophysics Data System (ADS)

    Denis, Leon; Florea, Ruxandra; Munteanu, Adrian; Schelkens, Peter

    2010-05-01

    In this paper we present a wavelet-based coding approach for semi-regular meshes, which spatially adapts the employed wavelet basis in the wavelet transformation of the mesh. The spatially-adaptive nature of the transform requires additional information to be stored in the bit-stream in order to allow the reconstruction of the transformed mesh at the decoder side. In order to limit this overhead, the mesh is first segmented into regions of approximately equal size. For each spatial region, a predictor is selected in a rate-distortion optimal manner by using a Lagrangian rate-distortion optimization technique. When compared against the classical wavelet transform employing the butterfly subdivision filter, experiments reveal that the proposed spatially-adaptive wavelet transform significantly decreases the energy of the wavelet coefficients for all subbands. Preliminary results show also that employing the proposed transform for the lowest-resolution subband systematically yields improved compression performance at low-to-medium bit-rates. For the Venus and Rabbit test models the compression improvements add up to 1.47 dB and 0.95 dB, respectively.
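
The per-region predictor selection described above minimizes a Lagrangian rate-distortion cost J = D + λR. A small sketch of that selection step, with hypothetical distortion and rate figures (not taken from the paper):

```python
def select_predictor(candidates, lam):
    """Pick, for one mesh region, the wavelet predictor minimizing the
    Lagrangian cost J = D + lambda * R (distortion plus weighted rate)."""
    return min(candidates, key=lambda c: c["D"] + lam * c["R"])

# Hypothetical per-region candidates: distortion D and side-info rate R
candidates = [
    {"name": "butterfly", "D": 4.0, "R": 10.0},
    {"name": "average",   "D": 6.5, "R": 4.0},
]
low = select_predictor(candidates, lam=0.1)["name"]   # distortion dominates
high = select_predictor(candidates, lam=2.0)["name"]  # rate dominates
```

Sweeping λ trades reconstruction quality against the overhead of signaling the chosen basis per region, which is exactly the tension the segmentation into equal-sized regions is meant to limit.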

  20. Adaptation.

    PubMed

    Broom, Donald M

    2006-01-01

    The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells, helps in communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms vary. Adaptive characters of organisms, including adaptive behaviours, increase fitness, so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationship to welfare. In complex animals, feed-forward control is widely used: individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control, and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources, but also for the actions and stimuli which are part of the mechanism that has evolved to obtain those resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms, including feelings and those which cope with disease. The part of welfare concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and

  1. FPGA-based rate-adaptive LDPC-coded modulation for the next generation of optical communication systems.

    PubMed

    Zou, Ding; Djordjevic, Ivan B

    2016-09-01

    In this paper, we propose a rate-adaptive FEC scheme based on LDPC codes, together with its software-reconfigurable unified FPGA architecture. By FPGA emulation, we demonstrate that the proposed class of rate-adaptive LDPC codes based on shortening, with overheads from 25% to 42.9%, provides a coding gain ranging from 13.08 dB to 14.28 dB at a post-FEC BER of 10^-15 for BPSK transmission. In addition, the proposed rate-adaptive LDPC coding has been demonstrated in combination with higher-order modulations, including QPSK, 8-QAM, 16-QAM, 32-QAM, and 64-QAM, covering a wide range of signal-to-noise ratios. Furthermore, we apply unequal error protection by employing different LDPC codes on different bits in 16-QAM and 64-QAM, which yields an additional 0.5 dB gain compared to conventional LDPC-coded modulation at the same code rate. PMID:27607718
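
Rate adaptation by shortening fixes some information bits of a mother code to zero and does not transmit them, which lowers the code rate and raises the FEC overhead. A sketch of that bookkeeping with illustrative mother-code parameters (not the paper's actual codes):

```python
def shortened_rate(n, k, s):
    """Effective code rate and FEC overhead of an (n, k) LDPC mother
    code after shortening s information bits (fixed to zero and not
    transmitted). Overhead = parity bits per information bit."""
    k_eff, n_eff = k - s, n - s
    return k_eff / n_eff, (n_eff - k_eff) / k_eff

rate0, oh0 = shortened_rate(5000, 4000, 0)     # unshortened: rate 0.8, 25% overhead
rate1, oh1 = shortened_rate(5000, 4000, 1669)  # shortened: rate ~0.70, ~42.9% overhead
```

The parity-bit count n - k is unchanged by shortening; spreading it over fewer information bits is what lets a single decoder architecture cover a range of overheads.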

  2. A Predictive Approach to Eliminating Errors in Software Code

    NASA Technical Reports Server (NTRS)

    2006-01-01

    NASA's Metrics Data Program Data Repository is a database that stores problem, product, and metrics data. The primary goal of this data repository is to provide project data to the software community. In doing so, the Metrics Data Program collects artifacts from a large NASA dataset, generates metrics on the artifacts, and then generates reports that are made available to the public at no cost. The data that are made available to general users have been sanitized and authorized for publication through the Metrics Data Program Web site by officials representing the projects from which the data originated. The data repository is operated by NASA's Independent Verification and Validation (IV&V) Facility, which is located in Fairmont, West Virginia, a high-tech hub for emerging innovation in the Mountain State. The IV&V Facility was founded in 1993, under the NASA Office of Safety and Mission Assurance, as a direct result of recommendations made by the National Research Council and the Report of the Presidential Commission on the Space Shuttle Challenger Accident. Today, under the direction of Goddard Space Flight Center, the IV&V Facility continues its mission to provide the highest achievable levels of safety and cost-effectiveness for mission-critical software. By extending its data to public users, the facility has helped improve the safety, reliability, and quality of complex software systems throughout private industry and other government agencies. Integrated Software Metrics, Inc., is one of the organizations that has benefited from studying the metrics data. As a result, the company has evolved into a leading developer of innovative software-error prediction tools that help organizations deliver better software, on time and on budget.

  3. Improving Inpatient Surveys: Web-Based Computer Adaptive Testing Accessed via Mobile Phone QR Codes

    PubMed Central

    2016-01-01

    Background: The National Health Service (NHS) 70-item inpatient questionnaire surveys inpatients on their perceptions of their hospitalization experience. However, it imposes more burden on the patient than other similar surveys. The literature shows that computerized adaptive testing (CAT) based on item response theory can help shorten the item length of a questionnaire without compromising its precision. Objective: Our aim was to investigate whether CAT can be (1) efficient with item reduction and (2) used with quick response (QR) codes scanned by mobile phones. Methods: After downloading the 2008 inpatient survey data from the Picker Institute Europe website and analyzing the difficulties of this 70-item questionnaire, we used an author-made Excel program using the Rasch partial credit model to simulate 1000 patients’ true scores following a standard normal distribution. The CAT was compared to two other scenarios, answering all items (AAI) and the randomized selection method (RSM), as we investigated item length (efficiency) and measurement accuracy. The author-made Web-based CAT program for gathering patient feedback was effectively accessed from mobile phones by scanning the QR code. Results: We found that the CAT can be more efficient for patients answering questions (ie, fewer items to respond to) than either AAI or RSM without compromising its measurement accuracy. A Web-based CAT inpatient survey accessed by scanning a QR code on a mobile phone was viable for gathering inpatient satisfaction responses. Conclusions: With advances in technology, patients can now be offered alternatives for providing feedback about hospitalization satisfaction. This Web-based CAT is a possible option in health care settings for reducing the number of survey items, as well as offering an innovative QR code access. PMID:26935793
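
The core of a CAT loop is picking, at each step, the unanswered item that is most informative at the current ability estimate. A sketch using the dichotomous Rasch model for simplicity (the study used the partial credit model; item difficulties below are hypothetical):

```python
import math

def rasch_prob(theta, b):
    """Rasch model: probability of endorsing an item of difficulty b
    at ability theta."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def next_item(theta, item_bank, administered):
    """CAT item selection: choose the unused item with maximum Fisher
    information I(theta) = p * (1 - p) at the current ability estimate."""
    def info(i):
        p = rasch_prob(theta, item_bank[i])
        return p * (1.0 - p)
    unused = [i for i in range(len(item_bank)) if i not in administered]
    return max(unused, key=info)

bank = [-2.0, -0.5, 0.1, 1.5]  # hypothetical item difficulties (logits)
best = next_item(theta=0.0, item_bank=bank, administered=set())
```

Information peaks where difficulty matches ability, so the selection above picks the item with difficulty closest to θ; administering only such items is what shortens the survey without losing precision.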

  4. Amino acids and our genetic code: a highly adaptive and interacting defense system.

    PubMed

    Verheesen, R H; Schweitzer, C M

    2012-04-01

    Since the discovery of the genetic code, Mendel's theory of heredity, and Darwin's theory of evolution, science has held that adaptation to the environment is a process in which genetic change is a matter of probability, with the species that happens to evolve favorably by chance ultimately surviving. We hypothesize that evolution and the adaptation of the genes form a well-organized, fully adaptive system in which there is no rigidity of the genes. The division of the genes takes place in line with the environment to be expected, sensed through the mother. Encoding triplets can encode more than one amino acid, depending on the availability of the amino acids and the needed micronutrients. These nutrients can cause disease but can also prevent diseases, even cancer and autoimmunity. In fact, we hypothesize that autoimmunity is an effective process by which the organism clears suboptimal proteins formed due to amino acid and micronutrient deficiencies. Only when deficiencies persist will disease develop; otherwise the autoantibodies function as all antibodies do, in a protective way. Furthermore, we hypothesize that essential amino acids are less important than nonessential amino acids (NEA). Species developed the ability to produce the nonessential amino acids themselves because food did not provide them in sufficient amounts; in contrast, essential amino acids are widely available, without any evolutionary pressure. Since we can produce only small amounts of NEA, and their availability in food can be reasoned to be too low, they remain our main concern in amino acid availability. In conclusion, we hypothesize that improving health will be possible only by improving our natural environment and living circumstances, not by changing the genes, since the genes are our last line of defense in surviving environmental change. PMID:22289341

  5. Transforming the sensing and numerical prediction of high-impact local weather through dynamic adaptation.

    PubMed

    Droegemeier, Kelvin K

    2009-03-13

    Mesoscale weather, such as convective systems, intense local rainfall resulting in flash floods and lake effect snows, frequently is characterized by unpredictable rapid onset and evolution, heterogeneity and spatial and temporal intermittency. Ironically, most of the technologies used to observe the atmosphere, predict its evolution and compute, transmit or store information about it, operate in a static pre-scheduled framework that is fundamentally inconsistent with, and does not accommodate, the dynamic behaviour of mesoscale weather. As a result, today's weather technology is highly constrained and far from optimal when applied to any particular situation. This paper describes a new cyberinfrastructure framework, in which remote and in situ atmospheric sensors, data acquisition and storage systems, assimilation and prediction codes, data mining and visualization engines, and the information technology frameworks within which they operate, can change configuration automatically, in response to evolving weather. Such dynamic adaptation is designed to allow system components to achieve greater overall effectiveness, relative to their static counterparts, for any given situation. The associated service-oriented architecture, known as Linked Environments for Atmospheric Discovery (LEAD), makes advanced meteorological and cyber tools as easy to use as ordering a book on the web. LEAD has been applied in a variety of settings, including experimental forecasting by the US National Weather Service, and allows users to focus much more attention on the problem at hand and less on the nuances of data formats, communication protocols and job execution environments. PMID:19087934

  6. Beta- and gamma-band activity reflect predictive coding in the processing of causal events.

    PubMed

    van Pelt, Stan; Heil, Lieke; Kwisthout, Johan; Ondobaka, Sasha; van Rooij, Iris; Bekkering, Harold

    2016-06-01

    In daily life, complex events are perceived in a causal manner, suggesting that the brain relies on predictive processes to model them. Within predictive coding theory, oscillatory beta-band activity has been linked to top-down predictive signals and gamma-band activity to bottom-up prediction errors. However, neurocognitive evidence for predictive coding outside lower-level sensory areas is scarce. We used magnetoencephalography to investigate neural activity during probability-dependent action perception in three areas pivotal for causal inference, the superior temporal sulcus (STS), temporoparietal junction (TPJ) and medial prefrontal cortex (MPFC), using bowling-action animations. Within this network, Granger-causal connectivity in the beta band was found to be strongest for backward top-down connections, and in the gamma band for feed-forward bottom-up connections. Moreover, beta-band power in the TPJ increased parametrically with the predictability of the action kinematics-outcome sequences. Conversely, gamma-band power in the TPJ and MPFC increased with prediction error. These findings suggest that the brain utilizes predictive-coding-like computations for higher-order cognition such as the perception of causal events. PMID:26873806

  7. Optimizing color fidelity for display devices using contour phase predictive coding for text, graphics, and video content

    NASA Astrophysics Data System (ADS)

    Lebowsky, Fritz

    2013-02-01

    High-end monitors and TVs based on LCD technology continue to increase their native display resolution to 4k2k and beyond. Consequently, transmission of uncompressed pixel data becomes costly over cable or wireless communication channels. For motion video content, spatial preprocessing from YCbCr 444 to YCbCr 420 is widely accepted. However, due to spatial low-pass filtering in the horizontal and vertical directions, the quality and readability of small text and graphics content is heavily compromised when color contrast is high in the chrominance channels. On the other hand, straightforward YCbCr 444 compression based on mathematical error-coding schemes quite often lacks optimal adaptation to visually significant image content. We therefore present the idea of detecting small synthetic text fonts and fine graphics and applying contour phase predictive coding for improved text and graphics rendering at the decoder side. Using a predictive parametric (text) contour model and transmitting correlated phase information in vector format across all three color channels, combined with foreground/background color vectors of a local color map, promises to overcome weaknesses of compression schemes that process luminance and chrominance channels separately. The residual error of the predictive model is minimized more easily since the decoder is an integral part of the encoder. A comparative analysis against some competitive solutions highlights the effectiveness of our approach, discusses current limitations with regard to high-quality color rendering, and identifies remaining visual artifacts.

  8. Adaptive coded spreading OFDM signal for dynamic-λ optical access network

    NASA Astrophysics Data System (ADS)

    Liu, Bo; Zhang, Lijia; Xin, Xiangjun

    2015-12-01

    This paper proposes and experimentally demonstrates a novel adaptive coded spreading (ACS) orthogonal frequency division multiplexing (OFDM) signal for a dynamic distributed optical ring-based access network. Wavelengths can be assigned to different remote nodes (RNs) according to the traffic demand of the optical network units (ONUs). The ACS can provide a dynamic spreading gain to different signals according to the split ratio or transmission length, which offers a flexible power budget for the network. A 10×13.12 Gb/s OFDM access with ACS is successfully demonstrated over two RNs and 120 km of transmission in the experiment. The demonstrated method may be viewed as a promising candidate for future optical metro-access networks.
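
Code spreading repeats each data symbol across the chips of a spreading code; correlating against the same code at the receiver recovers the symbol with a gain that grows with code length, which is the knob an adaptive scheme can turn per signal. A minimal sketch (the actual ACS code design is not reproduced here):

```python
import numpy as np

def spread(symbols, code):
    """Spread each data symbol across the chips of the spreading code."""
    return np.kron(symbols, code)

def despread(chips, code):
    """Correlate each chip block against the code; averaging over the
    chips provides the spreading gain against uncorrelated noise."""
    return chips.reshape(-1, len(code)) @ code / len(code)

walsh = np.array([1, -1, 1, -1])           # length-4 spreading code
tx = spread(np.array([1.0, -1.0]), walsh)  # 2 symbols -> 8 chips
rx = despread(tx, walsh)                   # recovers the 2 symbols
```

A longer code trades throughput for robustness, which is how a network can grant a larger power budget to ONUs behind a higher split ratio or longer fiber.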

  9. The NASPE/BPEG generic pacemaker code for antibradyarrhythmia and adaptive-rate pacing and antitachyarrhythmia devices.

    PubMed

    Bernstein, A D; Camm, A J; Fletcher, R D; Gold, R D; Rickards, A F; Smyth, N P; Spielman, S R; Sutton, R

    1987-07-01

    A new generic pacemaker code, derived from and compatible with the Revised ICHD Code, was proposed jointly by the North American Society of Pacing and Electrophysiology (NASPE) Mode Code Committee and the British Pacing and Electrophysiology Group (BPEG), and has been adopted by the NASPE Board of Trustees. It is abbreviated as the NBG (for "NASPE/BPEG Generic") Code, and was developed to permit extension of the generic-code concept to pacemakers whose escape rate is continuously controlled by monitoring some physiologic variable, rather than determined by fixed escape intervals measured from stimuli or sensed depolarizations, and to antitachyarrhythmia devices including cardioverters and defibrillators. The NASPE/BPEG Code incorporates an "R" in the fourth position to signify rate modulation (adaptive-rate pacing), and one of four letters in the fifth position to indicate the presence of antitachyarrhythmia-pacing capability or of cardioversion or defibrillation functions. PMID:2441363
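
Since the NBG code is a fixed positional scheme, expanding it is a simple table lookup. A sketch of a position-by-position parser (the letter sets follow the scheme summarized in the abstract; the dictionary keys are illustrative labels, not official terminology):

```python
# Letter sets per NBG position; position IV's 'R' denotes rate
# modulation (adaptive-rate pacing), position V the
# antitachyarrhythmia/cardioversion/defibrillation functions.
NBG_POSITIONS = [
    ("chamber paced",                    {"O", "A", "V", "D"}),
    ("chamber sensed",                   {"O", "A", "V", "D"}),
    ("response to sensing",              {"O", "T", "I", "D"}),
    ("programmability/rate modulation",  {"O", "P", "M", "C", "R"}),
    ("antitachyarrhythmia function",     {"O", "P", "S", "D"}),
]

def parse_nbg(code):
    """Expand a generic NBG pacemaker code such as 'VVIR' position by
    position, validating each letter against its allowed set."""
    out = {}
    for ch, (name, allowed) in zip(code.upper(), NBG_POSITIONS):
        if ch not in allowed:
            raise ValueError(f"invalid letter {ch!r} in position {name!r}")
        out[name] = ch
    return out

info = parse_nbg("VVIR")  # ventricular paced/sensed, inhibited, rate-modulated
```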

  10. Application of TURBO-AE to Flutter Prediction: Aeroelastic Code Development

    NASA Technical Reports Server (NTRS)

    Hoyniak, Daniel; Simons, Todd A.; Stefko, George (Technical Monitor)

    2001-01-01

    The TURBO-AE program has been evaluated by comparing the obtained results to cascade rig data and to predictions made by various in-house programs. A high-speed fan cascade, a turbine cascade, and a fan geometry that showed flutter in the torsion mode were analyzed. The steady predictions for the high-speed fan cascade showed the TURBO-AE predictions to match those of the in-house codes. However, the predictions did not match the measured blade surface data. Other researchers have also reported similar disagreement with this data set. Unsteady runs for the fan configuration were not successful using TURBO-AE.

  11. White Dwarf Mergers on Adaptive Meshes. I. Methodology and Code Verification

    NASA Astrophysics Data System (ADS)

    Katz, Max P.; Zingale, Michael; Calder, Alan C.; Swesty, F. Douglas; Almgren, Ann S.; Zhang, Weiqun

    2016-03-01

    The Type Ia supernova (SN Ia) progenitor problem is one of the most perplexing and exciting problems in astrophysics, requiring detailed numerical modeling to complement observations of these explosions. One possible progenitor that has merited recent theoretical attention is the white dwarf (WD) merger scenario, which has the potential to naturally explain many of the observed characteristics of SNe Ia. To date there have been relatively few self-consistent simulations of merging WD systems using mesh-based hydrodynamics. This is the first paper in a series describing simulations of these systems using a hydrodynamics code with adaptive mesh refinement. In this paper we describe our numerical methodology and discuss our implementation in the compressible hydrodynamics code CASTRO, which solves the Euler equations, and the Poisson equation for self-gravity, and couples the gravitational and rotation forces to the hydrodynamics. Standard techniques for coupling gravitation and rotation forces to the hydrodynamics do not adequately conserve the total energy of the system for our problem, but recent advances in the literature allow progress and we discuss our implementation here. We present a set of test problems demonstrating the extent to which our software sufficiently models a system where large amounts of mass are advected on the computational domain over long timescales. Future papers in this series will describe our treatment of the initial conditions of these systems and will examine the early phases of the merger to determine its viability for triggering a thermonuclear detonation.

  12. Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding.

    PubMed

    Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A

    2016-01-01

    With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes at low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing for an arbitrary (but no smaller than the threshold) number of classical participants with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works when there are dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants and the departure of participants. Finally, we provide security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications. PMID:27515908
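
    The classical backbone of the threshold mechanism — shares generated from Lagrange interpolation polynomials so that any threshold-sized subset reconstructs the secret — is Shamir-style secret sharing. A plain classical sketch (the quantum layer, the m-bonacci encoding, and the OAM pump are omitted; the prime field is an illustrative choice):

```python
import random

P = 2 ** 61 - 1   # Mersenne prime field (illustrative choice)

def make_shares(secret, threshold, n):
    """Pick a random polynomial of degree threshold-1 with f(0) = secret;
    share i is the point (i, f(i))."""
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):       # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any
    threshold-sized subset of shares."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789, threshold=3, n=5)
recovered = reconstruct(shares[:3])      # any 3 of the 5 shares suffice
```

    Fewer than threshold shares reveal nothing about f(0), which is the property the hybrid scheme inherits for its classical participants.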

  14. EMMA: an adaptive mesh refinement cosmological simulation code with radiative transfer

    NASA Astrophysics Data System (ADS)

    Aubert, Dominique; Deparis, Nicolas; Ocvirk, Pierre

    2015-11-01

    EMMA is a cosmological simulation code aimed at investigating the reionization epoch. It handles simultaneously collisionless and gas dynamics, as well as radiative transfer physics using a moment-based description with the M1 approximation. Field quantities are stored and computed on an adaptive three-dimensional mesh and the spatial resolution can be dynamically modified based on physically motivated criteria. Physical processes can be coupled at all spatial and temporal scales. We also introduce a new and optional approximation to handle radiation: the light is transported at the resolution of the non-refined grid and only once the dynamics has been fully updated, whereas thermo-chemical processes are still tracked on the refined elements. Such an approximation reduces the overheads induced by the treatment of radiation physics. A suite of standard tests is presented and passed by EMMA, providing a validation for its future use in studies of the reionization epoch. The code is parallel and is able to use graphics processing units (GPUs) to accelerate hydrodynamics and radiative transfer calculations. Depending on the optimizations and the compilers used to generate the CPU reference, global GPU acceleration factors between ×3.9 and ×16.9 can be obtained. Vectorization and transfer operations currently prevent better GPU performance and we expect that future optimizations and hardware evolution will lead to greater accelerations.

  15. Context adaptive binary arithmetic coding-based data hiding in partially encrypted H.264/AVC videos

    NASA Astrophysics Data System (ADS)

    Xu, Dawen; Wang, Rangding

    2015-05-01

    A scheme of data hiding directly in a partially encrypted version of H.264/AVC videos is proposed which includes three parts, i.e., selective encryption, data embedding and data extraction. Selective encryption is performed on context adaptive binary arithmetic coding (CABAC) bin-strings via stream ciphers. By careful selection of CABAC entropy coder syntax elements for selective encryption, the encrypted bitstream is format-compliant and has exactly the same bit rate. Then a data-hider embeds the additional data into partially encrypted H.264/AVC videos using a CABAC bin-string substitution technique without accessing the plaintext of the video content. Since bin-string substitution is carried out on those residual coefficients with approximately the same magnitude, the quality of the decrypted video is satisfactory. Video file size is strictly preserved even after data embedding. In order to adapt to different application scenarios, data extraction can be done either in the encrypted domain or in the decrypted domain. Experimental results have demonstrated the feasibility and efficiency of the proposed scheme.

  16. A New Real-coded Genetic Algorithm with an Adaptive Mating Selection for UV-landscapes

    NASA Astrophysics Data System (ADS)

    Oshima, Dan; Miyamae, Atsushi; Nagata, Yuichi; Kobayashi, Shigenobu; Ono, Isao; Sakuma, Jun

    The purpose of this paper is to propose a new real-coded genetic algorithm (RCGA) named Networked Genetic Algorithm (NGA) that intends to find multiple optima simultaneously in deceptive globally multimodal landscapes. Most current techniques such as niching for finding multiple optima take into account big valley landscapes or non-deceptive globally multimodal landscapes but not deceptive ones called UV-landscapes. Adaptive Neighboring Search (ANS) is a promising approach for finding multiple optima in UV-landscapes. ANS utilizes a restricted mating scheme with a crossover-like mutation in order to find optima in deceptive globally multimodal landscapes. However, ANS has a fundamental problem that it does not find all the optima simultaneously in many cases. NGA overcomes the problem by an adaptive parent-selection scheme and an improved crossover-like mutation. We show the effectiveness of NGA over ANS in terms of the number of detected optima in a single run on Fletcher and Powell functions as benchmark problems that are known to have multiple optima, ill-scaledness, and UV-landscapes.

  17. Low Complex Forward Adaptive Loss Compression Algorithm and Its Application in Speech Coding

    NASA Astrophysics Data System (ADS)

    Nikolić, Jelena; Perić, Zoran; Antić, Dragan; Jovanović, Aleksandra; Denić, Dragan

    2011-01-01

    This paper proposes a low-complexity forward adaptive loss compression algorithm that works on a frame-by-frame basis. In particular, the proposed algorithm performs frame-by-frame analysis of the input speech signal, estimating and quantizing the gain within each frame in order to enable quantization by a forward adaptive piecewise linear optimal compandor. In comparison to the solution designed according to the G.711 standard, our algorithm not only provides a higher average signal-to-quantization-noise ratio, but also reduces the PCM bit rate by about 1 bit/sample. Moreover, the algorithm fully satisfies the G.712 standard, since it exceeds the curve defined by G.712 over the whole variance range. Accordingly, we can reasonably expect that our algorithm will find practical application in the high-quality coding of signals represented with fewer than 8 bits/sample which, like speech signals, follow a Laplacian distribution and have time-varying variances.
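
    The forward-adaptive principle described here — estimate and quantize a per-frame gain as side information, then quantize the normalized frame with a fixed compandor — can be sketched as follows. This is a toy model: the μ-law compandor, log-uniform gain codebook, and all bit allocations are illustrative assumptions, not the paper's optimal piecewise linear compandor design.

```python
import numpy as np

def mu_law_quantize(x, bits=7, mu=255.0):
    """Fixed compandor: mu-law compress, quantize uniformly, expand (x in [-1, 1])."""
    y = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    half = 2 ** bits / 2 - 0.5
    yq = np.round(y * half) / half
    return np.sign(yq) * np.expm1(np.abs(yq) * np.log1p(mu)) / mu

def forward_adaptive_code(signal, frame=160, gain_bits=5):
    """Per frame: estimate the peak gain, quantize it log-uniformly over a
    60 dB range (transmitted as side information), normalize the frame,
    and apply the fixed compandor."""
    out = np.empty_like(signal)
    levels = 2 ** gain_bits - 1
    for start in range(0, len(signal), frame):
        blk = signal[start:start + frame]
        g = max(np.abs(blk).max(), 1e-6)
        lg = np.clip(np.log2(g), -10.0, 0.0)            # gain in [-60 dB, 0 dB]
        gq = 2.0 ** (np.round((lg + 10) / 10 * levels) / levels * 10 - 10)
        out[start:start + frame] = gq * mu_law_quantize(np.clip(blk / gq, -1, 1))
    return out

rng = np.random.default_rng(1)
x = 0.05 * rng.laplace(size=1600)   # Laplacian source, like speech residuals
xq = forward_adaptive_code(x)
snr_db = 10 * np.log10(np.mean(x ** 2) / np.mean((x - xq) ** 2))
```

    Because the compandor always sees a frame normalized by its quantized gain, the achieved SNR stays roughly flat as the input variance changes, which is the point of forward adaptation.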

  18. SIMULATING MAGNETOHYDRODYNAMICAL FLOW WITH CONSTRAINED TRANSPORT AND ADAPTIVE MESH REFINEMENT: ALGORITHMS AND TESTS OF THE AstroBEAR CODE

    SciTech Connect

    Cunningham, Andrew J.; Frank, Adam; Varniere, Peggy; Mitran, Sorin; Jones, Thomas W.

    2009-06-15

    A description is given of the algorithms implemented in the AstroBEAR adaptive mesh-refinement code for ideal magnetohydrodynamics. The code provides several high-resolution shock-capturing schemes which are constructed to maintain conserved quantities of the flow in a finite-volume sense. Divergence-free magnetic field topologies are maintained to machine precision by collating the components of the magnetic field on a cell-interface staggered grid and utilizing the constrained transport approach for integrating the induction equations. The maintenance of magnetic field topologies on adaptive grids is achieved using prolongation and restriction operators which preserve the divergence and curl of the magnetic field across collocated grids of different resolutions. The robustness and correctness of the code is demonstrated by comparing the numerical solution of various tests with analytical solutions or previously published numerical solutions obtained by other codes.
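
    The divergence-free property described above rests on defining face-centered fields as differences of an edge-centered potential (in 2-D, a corner-centered Az), so that the discrete divergence telescopes to zero. A minimal illustration of that invariant (not AstroBEAR's actual data structures; grid size and the choice of Az are arbitrary):

```python
import numpy as np

n = 32
x = np.linspace(0.0, 2.0 * np.pi, n + 1)
X, Y = np.meshgrid(x, x, indexing="ij")
Az = np.sin(X) * np.cos(Y)                  # vector potential at cell corners
dx = x[1] - x[0]

# Face-centered fields: finite differences of the corner potential
Bx = (Az[:, 1:] - Az[:, :-1]) / dx          # Bx = dAz/dy on x-faces
By = -(Az[1:, :] - Az[:-1, :]) / dx         # By = -dAz/dx on y-faces

# Cell-centered discrete divergence cancels term by term
divB = (Bx[1:, :] - Bx[:-1, :]) / dx + (By[:, 1:] - By[:, :-1]) / dx
max_divB = np.abs(divB).max()
```

    Constrained transport updates B through edge-centered electric fields, which is algebraically equivalent to updating Az, so this cancellation is preserved to machine precision at every step.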

  19. Can Toddler Temperament Characteristics Predict Later School Adaptation?

    ERIC Educational Resources Information Center

    Cooney, Ramie Robeson; Holmes, Deborah L.

    A study investigated how children's constitutional temperament (Easy, Difficult, or Intermediate), measured at 18 months of age, influences their adaptation to formal schooling. Information was collected from 35 children and their parents who were part of a longitudinal study of development. All participants were first-born children from…

  20. Genomic islands predict functional adaptation in marine actinobacteria

    SciTech Connect

    Penn, Kevin; Jenkins, Caroline; Nett, Markus; Udwary, Daniel; Gontang, Erin; McGlinchey, Ryan; Foster, Brian; Lapidus, Alla; Podell, Sheila; Allen, Eric; Moore, Bradley; Jensen, Paul

    2009-04-01

    Linking functional traits to bacterial phylogeny remains a fundamental but elusive goal of microbial ecology 1. Without this information, it becomes impossible to resolve meaningful units of diversity and the mechanisms by which bacteria interact with each other and adapt to environmental change. Ecological adaptations among bacterial populations have been linked to genomic islands, strain-specific regions of DNA that house functionally adaptive traits 2. In the case of environmental bacteria, these traits are largely inferred from bioinformatic or gene expression analyses 2, thus leaving few examples in which the functions of island genes have been experimentally characterized. Here we report the complete genome sequences of Salinispora tropica and S. arenicola, the first cultured, obligate marine Actinobacteria 3. These two species inhabit benthic marine environments and dedicate 8-10 percent of their genomes to the biosynthesis of secondary metabolites. Despite a close phylogenetic relationship, 25 of 37 secondary metabolic pathways are species-specific and located within 21 genomic islands, thus providing new evidence linking secondary metabolism to ecological adaptation. Species-specific differences are also observed in CRISPR sequences, suggesting that variations in phage immunity provide fitness advantages that contribute to the cosmopolitan distribution of S. arenicola 4. The two Salinispora genomes have evolved by complex processes that include the duplication and acquisition of secondary metabolite genes, the products of which provide immediate opportunities for molecular diversification and ecological adaptation. Evidence that secondary metabolic pathways are exchanged by horizontal gene transfer (HGT) yet are fixed among globally distributed populations 5 supports a functional role for their products and suggests that pathway acquisition represents a previously unrecognized force driving bacterial diversification.

  1. The Cortical Organization of Speech Processing: Feedback Control and Predictive Coding the Context of a Dual-Stream Model

    ERIC Educational Resources Information Center

    Hickok, Gregory

    2012-01-01

    Speech recognition is an active process that involves some form of predictive coding. This statement is relatively uncontroversial. What is less clear is the source of the prediction. The dual-stream model of speech processing suggests that there are two possible sources of predictive coding in speech perception: the motor speech system and the…

  2. Virtual Simulator: An infrastructure for design and performance-prediction of massively parallel codes

    NASA Astrophysics Data System (ADS)

    Perumalla, K.; Fujimoto, R.; Pande, S.; Karimabadi, H.; Driscoll, J.; Omelchenko, Y.

    2005-12-01

    Large parallel/distributed scientific simulations are very complex, and their dynamic behavior is hard to predict. Efficient development of massively parallel codes remains a computational challenge. For example, almost none of the kinetic codes in use in space physics today have dynamic load balancing capability. Here we present a new infrastructure for the design and prediction of parallel codes. Performance prediction is useful for analyzing, understanding and experimenting with different partitioning schemes, multiple modeling alternatives and so on, without having to run the application on supercomputers. Instrumentation of the model (with the least perturbation to performance) is useful to glean key metrics and understand application-level behavior. Unfortunately, traditional approaches to virtual execution and instrumentation are limited by either slow execution speed or low resolution or both. We present a new high-resolution framework that provides a virtual CPU abstraction (with a full thread context per CPU), yet scales to thousands of virtual CPUs. The tool, called PDES2, presents different levels of modeling interfaces, from general-purpose parallel simulations to parallel grid-based particle-in-cell (PIC) codes. The tool itself runs on multiple processors in order to accommodate the high resolution by distributing the virtual execution across processors. Validation experiments of PIC models in the framework using a 1-D hybrid shock application show close agreement of results from virtual executions with results from actual supercomputer runs. The utility of this tool is further illustrated through an application to a parallel global hybrid code.

  3. A predictive coding framework for rapid neural dynamics during sentence-level language comprehension.

    PubMed

    Lewis, Ashley G; Bastiaansen, Marcel

    2015-07-01

    There is a growing literature investigating the relationship between oscillatory neural dynamics measured using electroencephalography (EEG) and/or magnetoencephalography (MEG), and sentence-level language comprehension. Recent proposals have suggested a strong link between predictive coding accounts of the hierarchical flow of information in the brain, and oscillatory neural dynamics in the beta and gamma frequency ranges. We propose that findings relating beta and gamma oscillations to sentence-level language comprehension might be unified under such a predictive coding account. Our suggestion is that oscillatory activity in the beta frequency range may reflect both the active maintenance of the current network configuration responsible for representing the sentence-level meaning under construction, and the top-down propagation of predictions to hierarchically lower processing levels based on that representation. In addition, we suggest that oscillatory activity in the low and middle gamma range reflects the matching of top-down predictions with bottom-up linguistic input, while evoked high gamma might reflect the propagation of bottom-up prediction errors to higher levels of the processing hierarchy. We also discuss some of the implications of this predictive coding framework, and we outline ideas for how these might be tested experimentally. PMID:25840879

  4. Adaptive and predictive control of a simulated robot arm.

    PubMed

    Tolu, Silvia; Vanegas, Mauricio; Garrido, Jesús A; Luque, Niceto R; Ros, Eduardo

    2013-06-01

    In this work, a basic cerebellar neural layer and a machine learning engine are embedded in a recurrent loop which avoids dealing with the motor error or distal error problem. The presented approach learns the motor control based on available sensor error estimates (position, velocity, and acceleration) without explicitly knowing the motor errors. The paper focuses on how to decompose the input into different components in order to facilitate the learning process using an automatic incremental learning model (locally weighted projection regression (LWPR) algorithm). LWPR incrementally learns the forward model of the robot arm and provides the cerebellar module with optimal pre-processed signals. We present a recurrent adaptive control architecture in which an adaptive feedback (AF) controller guarantees a precise, compliant, and stable control during the manipulation of objects. Therefore, this approach efficiently integrates a bio-inspired module (cerebellar circuitry) with a machine learning component (LWPR). The cerebellar-LWPR synergy makes the robot adaptable to changing conditions. We evaluate how this scheme scales for robot-arms of a high number of degrees of freedom (DOFs) using a simulated model of a robot arm of the new generation of light weight robots (LWRs). PMID:23627657

  5. TRAP/SEE Code Users Manual for Predicting Trapped Radiation Environments

    NASA Technical Reports Server (NTRS)

    Armstrong, T. W.; Colborn, B. L.

    2000-01-01

    TRAP/SEE is a PC-based computer code with a user-friendly interface which predicts the ionizing radiation exposure of spacecraft having orbits in the Earth's trapped radiation belts. The code incorporates the standard AP8 and AE8 trapped proton and electron models but also allows application of an improved database interpolation method. The code treats low-Earth as well as highly-elliptical Earth orbits, taking into account trajectory perturbations due to gravitational forces from the Moon and Sun, atmospheric drag, and solar radiation pressure. Orbit-average spectra, peak spectra per orbit, and instantaneous spectra at points along the orbit trajectory are calculated. Described in this report are the features, models, model limitations and uncertainties, input and output descriptions, and example calculations and applications for the TRAP/SEE code.

  6. User's manual for the ALS base heating prediction code, volume 2

    NASA Technical Reports Server (NTRS)

    Reardon, John E.; Fulton, Michael S.

    1992-01-01

    The Advanced Launch System (ALS) Base Heating Prediction Code is based on a generalization of first principles in the prediction of plume induced base convective heating and plume radiation. It should be considered to be an approximate method for evaluating trends as a function of configuration variables because the processes being modeled are too complex to allow an accurate generalization. The convective methodology is based upon generalizing trends from four nozzle configurations, so an extension to use the code with strap-on boosters, multiple nozzle sizes, and variations in the propellants and chamber pressure histories cannot be precisely treated. The plume radiation is more amenable to precise computer prediction, but simplified assumptions are required to model the various aspects of the candidate configurations. Perhaps the most difficult area to characterize is the variation of radiation with altitude. The theory in the radiation predictions is described in more detail. This report is intended to familiarize a user with the interface operation and options, to summarize the limitations and restrictions of the code, and to provide information to assist in installing the code.

  7. A 3-D Vortex Code for Parachute Flow Predictions: VIPAR Version 1.0

    SciTech Connect

    STRICKLAND, JAMES H.; HOMICZ, GREGORY F.; PORTER, VICKI L.; GOSSLER, ALBERT A.

    2002-07-01

    This report describes a 3-D fluid mechanics code for predicting flow past bluff bodies whose surfaces can be assumed to be made up of shell elements that are simply connected. Version 1.0 of the VIPAR code (Vortex Inflation PARachute code) is described herein. This version contains several first-order algorithms that we are in the process of replacing with higher-order ones. These enhancements will appear in the next version of VIPAR. The present code contains a motion generator that can be used to produce a large class of rigid body motions. The present code has also been fully coupled to a structural dynamics code in which the geometry undergoes large time-dependent deformations. Initial surface geometry is generated from triangular shell elements using a code such as Patran and is written into an ExodusII database file for subsequent input into VIPAR. Surface and wake variable information is output into two ExodusII files that can be post-processed and viewed using software such as EnSight™.

  8. Adaptive plasticity in speech perception: Effects of external information and internal predictions.

    PubMed

    Guediche, Sara; Fiez, Julie A; Holt, Lori L

    2016-07-01

    When listeners encounter speech under adverse listening conditions, adaptive adjustments in perception can improve comprehension over time. In some cases, these adaptive changes require the presence of external information that disambiguates the distorted speech signals, whereas in other cases mere exposure is sufficient. Both external (e.g., written feedback) and internal (e.g., prior word knowledge) sources of information can be used to generate predictions about the correct mapping of a distorted speech signal. We hypothesize that these predictions provide a basis for determining the discrepancy between the expected and actual speech signal that can be used to guide adaptive changes in perception. This study provides the first empirical investigation that manipulates external and internal factors through (a) the availability of explicit external disambiguating information via the presence or absence of postresponse orthographic information paired with a repetition of the degraded stimulus, and (b) the accuracy of internally generated predictions; an acoustic distortion is introduced either abruptly or incrementally. The results demonstrate that the impact of external information on adaptive plasticity is contingent upon whether the intelligibility of the stimuli permits accurate internally generated predictions during exposure. External information sources enhance adaptive plasticity only when input signals are severely degraded and cannot reliably access internal predictions. This is consistent with a computational framework for adaptive plasticity in which error-driven supervised learning relies on the ability to compute sensory prediction error signals from both internal and external sources of information. (PsycINFO Database Record) PMID:26854531
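
    The computational framework invoked here — error-driven supervised learning from internally or externally supplied predictions — can be caricatured with a hypothetical one-dimensional toy model: a decoding gain w adapts from the prediction error between the expected cue and the decoded, distorted input, under either an abrupt or an incremental distortion schedule. Everything below (the linear decoder, learning rate, and distortion values) is an illustrative assumption, not the study's actual model:

```python
import numpy as np

rng = np.random.default_rng(3)

def adapt(gains, lr=0.2, trials_per_gain=100):
    """Delta-rule adaptation of a decoding gain w: on each trial the
    prediction error between the expected cue and the decoded, distorted
    input drives the update."""
    w, abs_errors = 1.0, []
    for g in gains:                      # distortion schedule
        for _ in range(trials_per_gain):
            cue = rng.uniform(0.5, 1.5)  # intended signal (the prediction)
            heard = g * cue              # channel applies distortion g
            err = cue - w * heard        # supervised prediction error
            w += lr * err * heard        # error-driven update
            abs_errors.append(abs(err))
    return w, abs_errors

abrupt_w, _ = adapt([0.5])                          # sudden distortion
gradual_w, _ = adapt(np.linspace(1.0, 0.5, 10))     # incremental distortion
```

    In both schedules the gain converges toward the inverse of the distortion (w ≈ 1/g = 2 here); the schedules differ in the size of the transient errors, mirroring the abrupt-versus-incremental manipulation.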

  9. Validation of Framework Code Approach to a Life Prediction System for Fiber Reinforced Composites

    NASA Technical Reports Server (NTRS)

    Gravett, Phillip

    1997-01-01

    The grant was conducted by the MMC Life Prediction Cooperative, an industry/government collaborative team; Ohio Aerospace Institute (OAI) acted as the prime contractor on behalf of the Cooperative for this grant effort. See Figure I for the organization and responsibilities of team members. The technical effort was conducted during the period August 7, 1995 to June 30, 1996 in cooperation with Erwin Zaretsky, the LERC Program Monitor. Phil Gravett of Pratt & Whitney was the principal technical investigator. Table I documents all meeting-related coordination memos during this period. The effort under this grant was closely coordinated with an existing USAF-sponsored program focused on putting into practice a life prediction system for turbine engine components made of metal matrix composites (MMC). The overall architecture of the MMC life prediction system was defined in the USAF-sponsored program (prior to this grant). The efforts of this grant were focused on implementing and tailoring the life prediction system, the framework code within it, and the damage modules within it to meet the specific requirements of the Cooperative. The tailoring of the life prediction system provides the basis for pervasive and continued use of this capability by the industry/government cooperative. The outputs of this grant are: 1. Definition of the framework code to analysis modules interfaces, 2. Definition of the interface between the materials database and the finite element model, and 3. Definition of the integration of the framework code into an FEM design tool.

  10. Adaptive filtering and prediction of the Southern Oscillation index

    NASA Astrophysics Data System (ADS)

    Keppenne, Christian L.; Ghil, Michael

    1992-12-01

    Singular spectrum analysis (SSA), a variant of principal component analysis, is applied to a time series of the Southern Oscillation index (SOI). The analysis filters out variability unrelated to the Southern Oscillation and separates the high-frequency, 2- to 3-year variability, including the quasi-biennial oscillation, from the lower-frequency 4- to 6-year El Niño cycle. The maximum entropy method (MEM) is applied to forecasting the prefiltered SOI. Prediction based on MEM-associated autoregressive models has useful skill for 30-36 months. A 1993-1994 La Niña event is predicted based on data through February 1992.
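
    The MEM prediction step corresponds to fitting an autoregressive model to the prefiltered series and iterating it forward. A minimal sketch (a Yule-Walker fit on a toy oscillatory series standing in for an SSA-filtered SOI component; the SSA prefiltering itself is omitted):

```python
import numpy as np

def yule_walker(x, order):
    """Fit AR(p) coefficients a[1..p] with x[t] ~ sum_k a[k] * x[t-k]."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    r = np.array([x[: n - k] @ x[k:] / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:])

def ar_forecast(x, coeffs, steps):
    """Iterate the fitted AR recursion forward to extend the series."""
    mean = np.mean(x)
    hist = list(np.asarray(x, float) - mean)
    preds = []
    for _ in range(steps):
        nxt = sum(c * hist[-(k + 1)] for k, c in enumerate(coeffs))
        hist.append(nxt)
        preds.append(nxt + mean)
    return np.array(preds)

# Toy oscillatory series standing in for a prefiltered SOI component
t = np.arange(240)
series = np.sin(2 * np.pi * t / 48) + 0.05 * np.random.default_rng(0).standard_normal(240)
coeffs = yule_walker(series, order=10)
forecast = ar_forecast(series, coeffs, steps=36)   # 36-month outlook
```

    Because the biased autocovariance estimates yield a stable AR model, the forecast remains bounded over the 30-36 month horizon where the method shows skill.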

  11. Adaptive filtering and prediction of the Southern Oscillation index

    NASA Technical Reports Server (NTRS)

    Keppenne, Christian L.; Ghil, Michael

    1992-01-01

    Singular spectrum analysis (SSA), a variant of principal component analysis, is applied to a time series of the Southern Oscillation index (SOI). The analysis filters out variability unrelated to the Southern Oscillation and separates the high-frequency, 2- to 3-year variability, including the quasi-biennial oscillation, from the lower-frequency 4- to 6-year El Nino cycle. The maximum entropy method (MEM) is applied to forecasting the prefiltered SOI. Prediction based on MEM-associated autoregressive models has useful skill for 30-36 months. A 1993-1994 La Nina event is predicted based on data through February 1992.

  12. Adaptive filtering and prediction of the Southern Oscillation index

    SciTech Connect

    Keppenne, C.L. California Inst. of Technology, Pasadena ); Ghil, M. )

    1992-12-20

    Singular spectrum analysis (SSA), a variant of principal component analysis, is applied to a time series of the Southern Oscillation index (SOI). The analysis filters out variability unrelated to the Southern Oscillation and separates the high-frequency, 2- to 3-year variability, including the quasi-biennial oscillation, from the lower-frequency 4- to 6-year El Nino cycle. The maximum entropy method (MEM) is applied to forecasting the prefiltered SOI. Prediction based on MEM-associated autoregressive models has useful skill for 30-36 months. A 1993-1994 La Nina event is predicted based on data through February 1992. 52 refs., 4 figs.

  13. Predictive Coding: A Possible Explanation of Filling-In at the Blind Spot

    PubMed Central

    Raman, Rajani; Sarkar, Sandip

    2016-01-01

    Filling-in at the blind spot is a perceptual phenomenon in which the visual system fills the informational void, which arises due to the absence of retinal input corresponding to the optic disc, with surrounding visual attributes. It is known that during filling-in, nonlinear neural responses that correlate with the percept are observed in the early visual areas, but knowledge of the underlying neural mechanism for filling-in at the blind spot is far from complete. In this work, we present a fresh perspective on the computational mechanism of the filling-in process in the framework of hierarchical predictive coding, which provides a functional explanation for a range of neural responses in the cortex. We simulated a three-level hierarchical network and observed its response while stimulating the network with different bar stimuli across the blind spot. We find that the predictive-estimator neurons that represent the blind spot in primary visual cortex exhibit an elevated non-linear response when the bar stimulates both sides of the blind spot. Using a generative model, we also show that these responses represent filling-in completion. All these results are consistent with the findings of psychophysical and physiological studies. In this study, we also demonstrate that the tolerance in filling-in qualitatively matches the experimental findings for non-aligned bars. We discuss this phenomenon in the predictive coding paradigm and show that all our results can be explained by taking into account the efficient coding of natural images along with the feedback and feed-forward connections that allow priors and predictions to co-evolve to arrive at the best prediction. These results suggest that the filling-in process could be a manifestation of the general computational principle of hierarchical predictive coding of natural images. PMID:26959812
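
    The hierarchical predictive coding machinery referred to here (in the Rao-Ballard formulation) can be reduced to a single-level sketch: estimator units r are driven by the bottom-up prediction error between the input and its top-down reconstruction U r, plus a pull toward a higher-level prior. The basis U, dimensions, and parameters below are arbitrary illustrative choices, not the paper's three-level network:

```python
import numpy as np

rng = np.random.default_rng(0)

U = 0.5 * rng.standard_normal((16, 4))    # generative weights: input = U @ r
r_true = np.array([1.0, -0.5, 0.0, 2.0])
image = U @ r_true                        # noiseless "stimulus"

def settle(image, U, prior=0.0, steps=500, lr=0.05, alpha=0.05):
    """Gradient dynamics for the estimator r, minimizing the bottom-up
    prediction error ||image - U r||^2 plus a quadratic pull toward the
    top-down prior."""
    r = np.zeros(U.shape[1])
    for _ in range(steps):
        err = image - U @ r               # error units: residual sent upward
        r += lr * (U.T @ err - alpha * (r - prior))
    return r, err

r_hat, residual = settle(image, U)
```

    When the input is predictable from the basis, the estimator converges and the error units fall nearly silent; an unpredictable input (such as a discontinuity at a represented blind spot) leaves a persistently elevated error response, which is the signature exploited in the paper.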

  14. Model-Biased, Data-Driven Adaptive Failure Prediction

    NASA Technical Reports Server (NTRS)

    Leen, Todd K.

    2004-01-01

    This final report, which contains a research summary and a viewgraph presentation, addresses clustering and data simulation techniques for failure prediction. The researchers applied their techniques to both helicopter gearbox anomaly detection and segmentation of Earth Observing System (EOS) satellite imagery.

  15. Prediction of stochastic blade responses using a filtered noise turbulence model in the FLAP (Force and Loads Analysis Program) code

    SciTech Connect

    Thresher, R.W.; Holley, W.E.; Wright, A.D.

    1988-11-01

    Accurately predicting wind turbine blade loads and resulting stresses is important for predicting the fatigue life of components. There is a clear need within the wind industry for validated codes that can predict not only the deterministic loads from the mean wind velocity, wind shear, and gravity, but also the stochastic loads from turbulent inflow. The FLAP code has already been validated for predicting deterministic loads. This paper concentrates on validating the FLAP code for predicting stochastic turbulence loads using the filtered-noise turbulence model as input. 26 refs., 13 figs., 2 tabs.

  16. Saccadic gain adaptation is predicted by the statistics of natural fluctuations in oculomotor function

    PubMed Central

    Albert, Mark V.; Catz, Nicolas; Thier, Peter; Kording, Konrad

    2012-01-01

    Due to multiple factors such as fatigue, muscle strengthening, and neural plasticity, the responsiveness of the motor apparatus to neural commands changes over time. To enable precise movements, the nervous system must adapt to compensate for these changes. Recent models of motor adaptation derive from assumptions about the way the motor apparatus changes. Characterizing these changes is difficult because motor adaptation happens at the same time, masking most of the effects of ongoing changes. Here, we analyze eye movements of monkeys with lesions of the posterior cerebellar vermis that impair adaptation. Their fluctuations better reveal the underlying changes of the motor system over time. When these measured, unadapted changes are used to derive optimal motor adaptation rules, prediction precision significantly improves. Among three models that similarly fit single-day adaptation results, the model that also matches the temporal correlations of the non-adapting saccades most accurately predicts multiple-day adaptation. Saccadic gain adaptation is well matched to the natural statistics of fluctuations of the oculomotor plant. PMID:23230397
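
    The idea of deriving an adaptation rule from the statistics of motor fluctuations can be sketched with a toy Kalman-filter tracker; the random-walk drift model and all noise parameters below are hypothetical stand-ins, not the study's fitted models:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical setup: the oculomotor gain drifts as a random walk (the
# "natural fluctuations"); an adapter that knows the drift statistics can
# use the Kalman-optimal learning rate, unlike a fixed-rate learner.
q, r = 1e-4, 1e-2   # drift variance per trial, observation noise variance
T = 2000
true_gain = 1.0 + np.cumsum(rng.normal(0.0, np.sqrt(q), T))
obs = true_gain + rng.normal(0.0, np.sqrt(r), T)  # noisy per-saccade readout

est, P = 1.0, 1.0
fixed, alpha = 1.0, 0.01
se_kf, se_fixed = [], []
for t in range(T):
    P += q                      # predict step: drift adds uncertainty
    K = P / (P + r)             # Kalman gain = statistics-derived learning rate
    est += K * (obs[t] - est)
    P *= (1.0 - K)
    se_kf.append((est - true_gain[t]) ** 2)
    fixed += alpha * (obs[t] - fixed)   # naive fixed learning rate
    se_fixed.append((fixed - true_gain[t]) ** 2)

mse_kf, mse_fixed = float(np.mean(se_kf)), float(np.mean(se_fixed))
```

    The statistics-matched learning rate tracks the drifting gain with substantially lower mean squared error than the arbitrary fixed rate, which is the qualitative point of matching adaptation rules to plant fluctuations.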

  17. An adaptive genetic algorithm for crystal structure prediction

    SciTech Connect

    Wu, Shunqing; Ji, Min; Wang, Cai-Zhuang; Nguyen, Manh Cuong; Zhao, Xin; Umemoto, K.; Wentzcovitch, R. M.; Ho, Kai-Ming

    2013-12-18

    We present a genetic algorithm (GA) for structural search that combines the speed of structure exploration by classical potentials with the accuracy of density functional theory (DFT) calculations in an adaptive and iterative way. This strategy increases the efficiency of the DFT-based GA by several orders of magnitude. This gain allows a considerable increase in the size and complexity of systems that can be studied by first principles. The performance of the method is illustrated by successful structure identifications of complex binary and ternary intermetallic compounds with 36 and 54 atoms per cell, respectively. The discovery of a multi-TPa Mg-silicate phase with a unit cell containing up to 56 atoms is also reported. Such a phase is likely to be an essential component of terrestrial exoplanetary mantles.
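
    The adaptive loop, cheap-potential GA exploration alternating with expensive first-principles corrections, can be caricatured as follows; the "energy" functions and the crude refit rule are toy stand-ins for classical potentials and DFT, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def expensive_energy(x):          # stand-in for a DFT calculation
    return float(np.sum((x - 1.0) ** 2) + 0.1 * np.sum(np.sin(5 * x)))

class Surrogate:                  # stand-in for an adjustable classical potential
    def __init__(self):
        self.shift = np.zeros(3)
    def __call__(self, x):
        return float(np.sum((x - self.shift) ** 2))
    def refit(self, pop, energies):
        # crude "refit": move the potential minimum toward the best structure
        self.shift = pop[int(np.argmin(energies))]

surrogate = Surrogate()
pop = rng.uniform(-2, 2, size=(20, 3))   # candidate "structures"

for generation in range(10):
    # inner GA generations guided only by the cheap surrogate
    for _ in range(50):
        fit = np.array([surrogate(x) for x in pop])
        parents = pop[np.argsort(fit)[:10]]
        pop = parents[rng.integers(0, 10, 20)] + rng.normal(0, 0.1, (20, 3))
    # adaptive step: a few expensive evaluations correct the surrogate
    energies = np.array([expensive_energy(x) for x in pop])
    surrogate.refit(pop, energies)

best = float(energies.min())
```

    The expensive function is called only 20 times per outer generation, while the surrogate absorbs the thousands of inner evaluations, which is the source of the claimed orders-of-magnitude speedup.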

  18. Do We Need Better Climate Predictions to Adapt to a Changing Climate? (Invited)

    NASA Astrophysics Data System (ADS)

    Dessai, S.; Hulme, M.; Lempert, R.; Pielke, R., Jr.

    2009-12-01

    Based on a series of international scientific assessments, climate change has been presented to society as a major problem that needs urgently to be tackled. The science that underpins these assessments has been predominantly from the realm of the natural sciences, and central to this framing have been ‘projections’ of future climate change (and its impacts on environment and society) under various greenhouse gas emissions scenarios and using a variety of climate model predictions with embedded assumptions. Central to much of the discussion surrounding adaptation to climate change is the claim - explicit or implicit - that decision makers need accurate and increasingly precise assessments of future impacts of climate change in order to adapt successfully. If true, this claim places a high premium on accurate and precise climate predictions at a range of geographical and temporal scales; such predictions therefore become indispensable to, and indeed a prerequisite for, effective adaptation decision-making. But is effective adaptation tied to the ability of the scientific enterprise to predict future climate with accuracy and precision? If so, this may impose a serious and intractable limit on adaptation. This paper proceeds in three sections. It first gathers evidence of claims that climate prediction is necessary for adaptation decision-making. This evidence is drawn from peer-reviewed literature and from published science funding strategies and government policy in a number of different countries. The second part discusses the challenges of climate prediction and why science will consistently be unable to provide accurate and precise predictions of future climate relevant for adaptation (usually at the local/regional level). Section three discusses whether these limits to future foresight represent a limit to adaptation, arguing that effective adaptation need not be limited by a general inability to predict future climate. Given the deep uncertainties involved in

  19. Smoothed reference inter-layer texture prediction for bit depth scalable video coding

    NASA Astrophysics Data System (ADS)

    Ma, Zhan; Luo, Jiancong; Yin, Peng; Gomila, Cristina; Wang, Yao

    2010-01-01

    We present a smoothed reference inter-layer texture prediction mode for bit depth scalability based on the Scalable Video Coding extension of the H.264/MPEG-4 AVC standard. In our approach, the base layer encodes an 8-bit signal that can be decoded by any existing H.264/MPEG-4 AVC decoder, and the enhancement layer encodes a higher bit depth signal (e.g. 10/12-bit) which requires a bit depth scalable decoder. The approach presented uses base layer motion vectors to conduct motion compensation upon enhancement layer reference frames. Then, the motion compensated block is tone mapped and summed with the co-located base layer residue block prior to being inverse tone mapped to obtain a smoothed reference predictor. In addition to the original inter-/intra-layer prediction modes, the smoothed reference prediction mode enables inter-layer texture prediction for blocks whose co-located block is inter-coded. The proposed method is designed to improve the coding efficiency for sequences with non-linear tone mapping, in which case we obtain gains of up to 0.4 dB over the CGS-based BDS framework.
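
    The smoothed-reference construction (tone map the motion-compensated enhancement-layer block, add the co-located base-layer residue, then inverse tone map) can be sketched per block as below; a simple bit-shift tone mapping is assumed here for illustration, whereas the paper targets non-linear mappings:

```python
import numpy as np

def tone_map(block10):         # 10-bit -> 8-bit (hypothetical linear mapping)
    return np.clip(block10 >> 2, 0, 255)

def inverse_tone_map(block8):  # 8-bit -> 10-bit
    return np.clip(block8.astype(np.int32) << 2, 0, 1023)

def smoothed_reference(mc_block10, base_residue8):
    """Smoothed reference predictor for one block.

    mc_block10   : enhancement-layer reference block, motion compensated
                   with the base-layer motion vector (10-bit samples)
    base_residue8: co-located base-layer residue block (8-bit domain)
    """
    # tone map the motion-compensated block, add the base residue,
    # then map the sum back up to the enhancement-layer bit depth
    tm = tone_map(mc_block10).astype(np.int32) + base_residue8
    return inverse_tone_map(tm)

mc = np.full((4, 4), 512, dtype=np.int32)     # flat 10-bit reference block
residue = np.full((4, 4), 3, dtype=np.int32)  # base-layer residue
pred = smoothed_reference(mc, residue)        # each sample: ((512>>2)+3)<<2 = 524
```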

  20. The basal ganglia select the expected sensory input used for predictive coding

    PubMed Central

    Colder, Brian

    2015-01-01

    While considerable evidence supports the notion that lower-level interpretation of incoming sensory information is guided by top-down sensory expectations, less is known about the source of the sensory expectations or the mechanisms by which they are spread. Predictive coding theory proposes that sensory expectations flow down from higher-level association areas to lower-level sensory cortex. A separate theory of the role of prediction in cognition describes “emulations” as linked representations of potential actions and their associated expected sensation that are hypothesized to play an important role in many aspects of cognition. The expected sensations in active emulations are proposed to be the top-down expectation used in predictive coding. Representations of the potential action and expected sensation in emulations are claimed to be instantiated in distributed cortical networks. Combining predictive coding with emulations thus provides a theoretical link between the top-down expectations that guide sensory expectations and the cortical networks representing potential actions. Now moving to theories of action selection, the basal ganglia has long been proposed to select between potential actions by reducing inhibition to the cortical network instantiating the desired action plan. Integration of these isolated theories leads to the novel hypothesis that reduction in inhibition from the basal ganglia selects not just action plans, but entire emulations, including the sensory input expected to result from the action. Basal ganglia disinhibition is hypothesized to both initiate an action and also allow propagation of the action’s associated sensory expectation down towards primary sensory cortex. This is a novel proposal for the role of the basal ganglia in biasing perception by selecting the expected sensation, and initiating the top-down transmission of those expectations in predictive coding. PMID:26441627

  1. Sequence Prediction With Sparse Distributed Hyperdimensional Coding Applied to the Analysis of Mobile Phone Use Patterns.

    PubMed

    Rasanen, Okko J; Saarinen, Jukka P

    2016-09-01

    Modeling and prediction of temporal sequences is central to many signal processing and machine learning applications. Prediction based on sequence history is typically performed using parametric models, such as fixed-order Markov chains (n-grams), approximations of high-order Markov processes, such as mixed-order Markov models or mixtures of lagged bigram models, or with other machine learning techniques. This paper presents a method for sequence prediction based on sparse hyperdimensional coding of the sequence structure and describes how higher order temporal structures can be utilized in sparse coding in a balanced manner. The method is purely incremental, allowing real-time online learning and prediction with limited computational resources. Experiments with prediction of mobile phone use patterns, including the prediction of the next launched application, the next GPS location of the user, and the next artist played with the phone media player, reveal that the proposed method is able to capture the relevant variable-order structure from the sequences. In comparison with the n-grams and the mixed-order Markov models, the sparse hyperdimensional predictor clearly outperforms its peers in terms of unweighted average recall and achieves an equal level of weighted average recall as the mixed-order Markov chain but without the batch training of the mixed-order model. PMID:26285224
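
    A minimal version of hyperdimensional sequence prediction is sketched below, using dense bipolar vectors rather than the paper's sparse codes, with lag permutations (vector rolls) encoding order; the alphabet, dimensionality, and prototype-accumulation scheme are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
D = 2048                                       # hyperdimensional vector width
symbols = list("abc")
item = {s: rng.choice([-1.0, 1.0], size=D) for s in symbols}

def encode_history(hist):
    # represent an ordered history by summing lag-permuted item vectors;
    # the permutation (roll) makes "ab" distinct from "ba"
    h = np.zeros(D)
    for lag, s in enumerate(reversed(hist)):
        h += np.roll(item[s], lag + 1)
    return h

# purely incremental training: one prototype per next-symbol, updated online
proto = {s: np.zeros(D) for s in symbols}
seq = "abc" * 20
order = 2
for t in range(order, len(seq)):
    proto[seq[t]] += encode_history(seq[t - order:t])

def predict(hist):
    h = encode_history(hist)
    return max(symbols, key=lambda s: float(proto[s] @ h))
```

    In high dimensions the encodings of distinct histories are nearly orthogonal, so the dot product against each prototype acts as an approximate match count, and `predict("ab")` recovers `"c"` for the repeating training sequence.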

  2. Genetic algorithm based adaptive neural network ensemble and its application in predicting carbon flux

    USGS Publications Warehouse

    Xue, Y.; Liu, S.; Hu, Y.; Yang, J.; Chen, Q.

    2007-01-01

    To improve the accuracy in prediction, Genetic Algorithm based Adaptive Neural Network Ensemble (GA-ANNE) is presented. Intersections are allowed between different training sets based on the fuzzy clustering analysis, which ensures the diversity as well as the accuracy of individual Neural Networks (NNs). Moreover, to improve the accuracy of the adaptive weights of individual NNs, GA is used to optimize the cluster centers. Empirical results in predicting the carbon flux of Duke Forest reveal that GA-ANNE can predict the carbon flux more accurately than Radial Basis Function Neural Network (RBFNN), Bagging NN ensemble, and ANNE. © 2007 IEEE.

  3. [Prediction of litter moisture content in Tahe Forestry Bureau of Northeast China based on FWI moisture codes].

    PubMed

    Zhang, Heng; Jin, Sen; Di, Xue-Ying

    2014-07-01

    The Canadian fire weather index system (FWI) is the most widely used fire weather index system in the world, and its fuel moisture prediction is an important research tool. In this paper, litter moisture contents of typical forest types in Tahe Forestry Bureau of Northeast China were successively observed, and the relationships between the FWI codes (fine fuel moisture code FFMC, duff moisture code DMC, and drought code DC) and fuel moisture were analyzed. Results showed that the mean absolute error and the mean relative error of the model established using the FFMC were 14.9% and 70.7%, respectively, lower than those of the regression model based on meteorological elements, which indicated that the FWI codes have some advantage in predicting litter moisture contents and could be used to predict fuel moisture contents. However, the advantage was limited, and further calibration is still needed, especially modification of the FWI codes after rainfall. PMID:25345057

  4. Predicting Hyper-Chaotic Time Series Using Adaptive Higher-Order Nonlinear Filter

    NASA Astrophysics Data System (ADS)

    Zhang, Jia-Shu; Xiao, Xian-Ci

    2001-03-01

    A newly proposed method, the adaptive higher-order nonlinear finite impulse response (HONFIR) filter based on higher-order sparse Volterra series expansions, is introduced to predict hyper-chaotic time series. The effectiveness of using the adaptive HONFIR filter for making one-step and multi-step predictions is tested using very few data points from computer-generated hyper-chaotic time series, including the Mackey-Glass equation and a four-dimensional nonlinear dynamical system. A comparison is made with some neural networks for predicting the Mackey-Glass hyper-chaotic time series. Numerical simulation results show that the adaptive HONFIR filter proposed here is a powerful tool for predicting hyper-chaotic time series.
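
    A minimal adaptive second-order Volterra (HONFIR-style) one-step predictor trained with normalized LMS is sketched below; the logistic map stands in for the hyper-chaotic series (the Mackey-Glass equation requires delay-differential integration), and the memory length and step size are illustrative choices:

```python
import numpy as np

# logistic map as a simple chaotic stand-in for a hyper-chaotic series
x = np.empty(600)
x[0] = 0.4
for n in range(599):
    x[n + 1] = 3.9 * x[n] * (1 - x[n])

N = 3                                 # filter memory length
def features(u):
    # first-order taps plus second-order (Volterra) cross products
    quad = np.outer(u, u)[np.triu_indices(N)]
    return np.concatenate(([1.0], u, quad))

w = np.zeros(1 + N + N * (N + 1) // 2)
mu = 0.5
sq_errs = []
for n in range(N, 599):
    phi = features(x[n - N:n][::-1])
    e = x[n] - w @ phi                # one-step prediction error
    w += mu * e * phi / (phi @ phi)   # normalized LMS update
    sq_errs.append(e * e)

early = float(np.mean(sq_errs[:100]))
late = float(np.mean(sq_errs[-100:]))
```

    Because the logistic map is exactly quadratic in the previous sample, it lies in the span of the second-order features, and the squared prediction error collapses as the adaptive weights converge; a purely linear tap set could not achieve this.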

  5. Identification-free adaptive optimal control based on switching predictive models

    NASA Astrophysics Data System (ADS)

    Luo, Wenguang; Pan, Shenghui; Ma, Zhaomin; Lan, Hongli

    2008-10-01

    An identification-free adaptive optimal control scheme based on switching predictive models is proposed for systems with large inertia, long time delays, and multiple operating modes. Multiple predictive models are maintained within the identification-free adaptive predictive controller and are switched in real time at optimal switching instants, governed by a switching law that tracks the system's operating conditions. The switching law is designed around the most important characteristic parameter of the system, and the optimal switching instants are computed using optimal control theory for switched systems. Simulation results show that the proposed method is well suited to such systems, for example superheated steam temperature control in electric power plants: it provides excellent control performance, improves disturbance rejection and self-adaptability, and places lower demands on predictive model precision.
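
    The switch-by-prediction-error idea can be sketched with a toy scalar plant and a bank of two candidate models; the plant, the model bank, and the one-step control law are invented for illustration and are far simpler than the switched-systems optimization the abstract describes:

```python
# Hedged sketch: a bank of candidate predictive models; the controller
# switches to whichever model best explains recent plant output, then
# uses that model for one-step-ahead control toward the setpoint.
models = {"fast": 0.5, "slow": 0.9}   # candidate a's in y[t+1] = a*y[t] + u[t]
true_a = 0.9                          # the plant actually matches "slow"
setpoint = 1.0
y, u = 0.0, 0.0
history = {name: [] for name in models}

for t in range(50):
    y_next = true_a * y + u
    for name, a in models.items():
        history[name].append((a * y + u - y_next) ** 2)
    # switching law: pick the model with the lowest recent prediction error
    active = min(models, key=lambda n: sum(history[n][-5:]))
    y = y_next
    u = setpoint - models[active] * y   # one-step-ahead control with active model
```

    Once the "slow" model's recent prediction error undercuts the "fast" model's, the controller switches and the output settles on the setpoint without ever running an explicit identification stage.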

  6. Ply level failure prediction of carbon fibre reinforced laminated composite panels subjected to low velocity drop-weight impact using adaptive meshing techniques

    NASA Astrophysics Data System (ADS)

    Farooq, Umar; Myler, Peter

    2014-09-01

    This work is concerned with physical testing and numerical simulations of flat and round nose drop-weight impact of carbon fibre-reinforced laminate composite panels to predict ply level failure. The majority of existing studies on the impact of composites by spherical nose impactors are experimental; computational models are simplified and based on classical laminated plate theories, in which contributions of through-thickness stresses are neglected. The present work considers flat nose impact, accounts for contributions from through-thickness stresses, and is mainly simulation based. A computational model was developed in ABAQUS™ software using adaptive meshing techniques. Simulation-produced (2D model) stresses were numerically integrated using MATLAB™ code to predict through-thickness (3D) stresses. The through-thickness stresses were then utilised in advanced failure criteria coded in MATLAB™ software to predict ply level failures. The simulation results demonstrate that the computational model can efficiently and effectively predict the ply-by-ply failure status of relatively thick laminates.

  7. Severe accident source term characteristics for selected Peach Bottom sequences predicted by the MELCOR Code

    SciTech Connect

    Carbajo, J.J.

    1993-09-01

    The purpose of this report is to compare in-containment source terms developed for NUREG-1159, which used the Source Term Code Package (STCP), with those generated by MELCOR to identify significant differences. For this comparison, two short-term depressurized station blackout sequences (with a dry cavity and with a flooded cavity) and a Loss-of-Coolant Accident (LOCA) concurrent with complete loss of the Emergency Core Cooling System (ECCS) were analyzed for the Peach Bottom Atomic Power Station (a BWR-4 with a Mark I containment). The results indicate that for the sequences analyzed, the two codes predict similar total in-containment release fractions for each of the element groups. However, the MELCOR/CORBH Package predicts significantly longer times for vessel failure and reduced energy of the released material for the station blackout sequences (when compared to the STCP results). MELCOR also calculated smaller releases into the environment than STCP for the station blackout sequences.

  8. Prediction of material strength and fracture of glass using the SPHINX smooth particle hydrodynamics code

    SciTech Connect

    Mandell, D.A.; Wingate, C.A.

    1994-08-01

    The design of many military devices involves numerical predictions of the material strength and fracture of brittle materials. The materials of interest include ceramics, that are used in armor packages; glass that is used in truck and jeep windshields and in helicopters; and rock and concrete that are used in underground bunkers. As part of a program to develop advanced hydrocode design tools, the authors have implemented a brittle fracture model for glass into the SPHINX smooth particle hydrodynamics code. The authors have evaluated this model and the code by predicting data from one-dimensional flyer plate impacts into glass, and data from tungsten rods impacting glass. Since fractured glass properties, which are needed in the model, are not available, the authors did sensitivity studies of these properties, as well as sensitivity studies to determine the number of particles needed in the calculations. The numerical results are in good agreement with the data.

  9. Prediction of material strength and fracture of brittle materials using the SPHINX smooth particle hydrodynamics code

    SciTech Connect

    Mandell, D.A.; Wingate, C.A.; Stellingwerf, R.F.

    1995-12-31

    The design of many devices involves numerical predictions of the material strength and fracture of brittle materials. The materials of interest include ceramics that are used in armor packages; glass that is used in windshields; and rock and concrete that are used in oil wells. As part of a program to develop advanced hydrocode design tools, the authors have implemented a brittle fracture model for glass into the SPHINX smooth particle hydrodynamics code. The authors have evaluated this model and the code by predicting data from tungsten rods impacting glass. Since fractured glass properties, which are needed in the model, are not available, they did sensitivity studies of these properties, as well as sensitivity studies to determine the number of particles needed in the calculations. The numerical results are in good agreement with the data.

  10. Dynamic Divisive Normalization Predicts Time-Varying Value Coding in Decision-Related Circuits

    PubMed Central

    LoFaro, Thomas; Webb, Ryan; Glimcher, Paul W.

    2014-01-01

    Normalization is a widespread neural computation, mediating divisive gain control in sensory processing and implementing a context-dependent value code in decision-related frontal and parietal cortices. Although decision-making is a dynamic process with complex temporal characteristics, most models of normalization are time-independent and little is known about the dynamic interaction of normalization and choice. Here, we show that a simple differential equation model of normalization explains the characteristic phasic-sustained pattern of cortical decision activity and predicts specific normalization dynamics: value coding during initial transients, time-varying value modulation, and delayed onset of contextual information. Empirically, we observe these predicted dynamics in saccade-related neurons in monkey lateral intraparietal cortex. Furthermore, such models naturally incorporate a time-weighted average of past activity, implementing an intrinsic reference-dependence in value coding. These results suggest that a single network mechanism can explain both transient and sustained decision activity, emphasizing the importance of a dynamic view of normalization in neural coding. PMID:25429145
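
    A simplified two-variable version of such normalization dynamics reproduces the phasic-sustained pattern: the response variable chases the input divided by a pooled gain signal that itself lags the responses. The time constants, inputs, and pooled-gain form below are illustrative, not the paper's fitted model:

```python
import numpy as np

# Simplified dynamic divisive normalization (hedged sketch):
#   tau * dR/dt = -R + V / (sigma + G)     (normalized responses)
#   tau * dG/dt = -G + sum(R)              (pooled gain signal)
tau, sigma, dt = 0.05, 1.0, 0.001
V = np.array([10.0, 5.0])        # input values to two neurons
R = np.zeros(2)
G = 0.0
trace = []
for _ in range(5000):            # 5 s of simulated time (forward Euler)
    dR = (-R + V / (sigma + G)) / tau
    dG = (-G + R.sum()) / tau
    R += dt * dR
    G += dt * dG
    trace.append(R.copy())
trace = np.asarray(trace)

peak = float(trace[:, 0].max())     # phasic transient before G catches up
final = float(trace[-1, 0])         # sustained, context-normalized level
```

    Because the gain pool starts low and lags the responses, the neuron overshoots before settling: value coding is strongest during the initial transient and the contextual (divisive) influence arrives with a delay, the signature dynamics the abstract describes.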

  11. Structural Life and Reliability Metrics: Benchmarking and Verification of Probabilistic Life Prediction Codes

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.; Soditus, Sherry; Hendricks, Robert C.; Zaretsky, Erwin V.

    2002-01-01

    Over the past two decades there has been considerable effort by NASA Glenn and others to develop probabilistic codes to predict with reasonable engineering certainty the life and reliability of critical components in rotating machinery and, more specifically, in the rotating sections of airbreathing and rocket engines. These codes have, to a very limited extent, been verified with relatively small bench rig type specimens under uniaxial loading. Because of the small and very narrow database the acceptance of these codes within the aerospace community has been limited. An alternate approach to generating statistically significant data under complex loading and environments simulating aircraft and rocket engine conditions is to obtain, catalog and statistically analyze actual field data. End users of the engines, such as commercial airlines and the military, record and store operational and maintenance information. This presentation describes a cooperative program between the NASA GRC, United Airlines, USAF Wright Laboratory, U.S. Army Research Laboratory and Australian Aeronautical & Maritime Research Laboratory to obtain and analyze these airline data for selected components such as blades, disks and combustors. These airline data will be used to benchmark and compare existing life prediction codes.

  12. Structural Life and Reliability Metrics: Benchmarking and Verification of Probabilistic Life Prediction Codes

    NASA Astrophysics Data System (ADS)

    Litt, Jonathan S.; Soditus, Sherry; Hendricks, Robert C.; Zaretsky, Erwin V.

    2002-10-01

    Over the past two decades there has been considerable effort by NASA Glenn and others to develop probabilistic codes to predict with reasonable engineering certainty the life and reliability of critical components in rotating machinery and, more specifically, in the rotating sections of airbreathing and rocket engines. These codes have, to a very limited extent, been verified with relatively small bench rig type specimens under uniaxial loading. Because of the small and very narrow database the acceptance of these codes within the aerospace community has been limited. An alternate approach to generating statistically significant data under complex loading and environments simulating aircraft and rocket engine conditions is to obtain, catalog and statistically analyze actual field data. End users of the engines, such as commercial airlines and the military, record and store operational and maintenance information. This presentation describes a cooperative program between the NASA GRC, United Airlines, USAF Wright Laboratory, U.S. Army Research Laboratory and Australian Aeronautical & Maritime Research Laboratory to obtain and analyze these airline data for selected components such as blades, disks and combustors. These airline data will be used to benchmark and compare existing life prediction codes.

  13. Dynamic divisive normalization predicts time-varying value coding in decision-related circuits.

    PubMed

    Louie, Kenway; LoFaro, Thomas; Webb, Ryan; Glimcher, Paul W

    2014-11-26

    Normalization is a widespread neural computation, mediating divisive gain control in sensory processing and implementing a context-dependent value code in decision-related frontal and parietal cortices. Although decision-making is a dynamic process with complex temporal characteristics, most models of normalization are time-independent and little is known about the dynamic interaction of normalization and choice. Here, we show that a simple differential equation model of normalization explains the characteristic phasic-sustained pattern of cortical decision activity and predicts specific normalization dynamics: value coding during initial transients, time-varying value modulation, and delayed onset of contextual information. Empirically, we observe these predicted dynamics in saccade-related neurons in monkey lateral intraparietal cortex. Furthermore, such models naturally incorporate a time-weighted average of past activity, implementing an intrinsic reference-dependence in value coding. These results suggest that a single network mechanism can explain both transient and sustained decision activity, emphasizing the importance of a dynamic view of normalization in neural coding. PMID:25429145

  14. Utilizing micro-electro-mechanical systems (MEMS) micro-shutter designs for adaptive coded aperture imaging (ACAI) technologies

    NASA Astrophysics Data System (ADS)

    Ledet, Mary M.; Starman, LaVern A.; Coutu, Ronald A., Jr.; Rogers, Stanley

    2009-08-01

    Coded aperture imaging (CAI) has been used in both the astronomical and medical communities for years due to its ability to image light at short wavelengths and thus replacing conventional lenses. Where CAI is limited, adaptive coded aperture imaging (ACAI) can recover what is lost. The use of photonic micro-electro-mechanical-systems (MEMS) for creating adaptive coded apertures has been gaining momentum since 2007. Successful implementation of micro-shutter technologies would potentially enable the use of adaptive coded aperture imaging and non-imaging systems in current and future military surveillance and intelligence programs. In this effort, a prototype of MEMS micro-shutters has been designed and fabricated onto a 3 mm x 3 mm square of silicon substrate using the PolyMUMPs™ process. This prototype is a line-drivable array using thin flaps of polysilicon to cover and uncover an 8 x 8 array of 20 μm apertures. A characterization of the micro-shutters to include mechanical, electrical and optical properties is provided. This prototype, its actuation scheme, and other designs for individual micro-shutters have been modeled and studied for feasibility purposes. In addition, micro-shutters fabricated from an Al-Au alloy on a quartz wafer were optically tested and characterized with a 632 nm HeNe laser.

  15. Real-time speech encoding based on Code-Excited Linear Prediction (CELP)

    NASA Technical Reports Server (NTRS)

    Leblanc, Wilfrid P.; Mahmoud, S. A.

    1988-01-01

    This paper reports on ongoing work toward the development of a real-time voice codec for the terrestrial and satellite mobile radio environments. The codec is based on a complexity-reduced version of code-excited linear prediction (CELP). The codebook search complexity was reduced to only 0.5 million floating point operations per second (MFLOPS) while maintaining excellent speech quality. Novel methods to quantize the residual and the long and short term model filters are presented.
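
    The analysis-by-synthesis codebook search at the heart of CELP can be sketched as follows; a one-tap synthesis filter and a random codebook are assumed here, whereas real coders use LPC filters, perceptual weighting, and fast search structures such as those the paper develops:

```python
import numpy as np

rng = np.random.default_rng(3)
L, K = 40, 64                        # subframe length, codebook size
codebook = rng.normal(size=(K, L))   # stochastic excitation codebook

def synthesize(exc):
    # run the excitation through a one-tap synthesis filter 1/(1 - 0.9 z^-1)
    y = np.empty_like(exc)
    prev = 0.0
    for n, e in enumerate(exc):
        prev = e + 0.9 * prev
        y[n] = prev
    return y

def celp_search(target):
    # analysis-by-synthesis: synthesize every codevector, apply its
    # closed-form optimal gain, and keep the minimum-error entry
    best = (-1, 0.0, np.inf)
    for k in range(K):
        y = synthesize(codebook[k])
        g = float(target @ y) / float(y @ y)   # optimal gain
        err = float(np.sum((target - g * y) ** 2))
        if err < best[2]:
            best = (k, g, err)
    return best

# encode a target built from a known codevector: the search should recover it
target = synthesize(1.7 * codebook[5])
idx, gain, err = celp_search(target)
```

    Exhaustive search like this is what made early CELP expensive; the complexity reductions the paper reports come from restructuring exactly this loop.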

  16. Results from baseline tests of the SPRE I and comparison with code model predictions

    SciTech Connect

    Cairelli, J.E.; Geng, S.M.; Skupinski, R.C.

    1994-09-01

    The Space Power Research Engine (SPRE), a free-piston Stirling engine with linear alternator, is being tested at the NASA Lewis Research Center as part of the Civil Space Technology Initiative (CSTI) as a candidate for high capacity space power. This paper presents results of base-line engine tests at design and off-design operating conditions. The test results are compared with code model predictions.

  17. Linking pattern completion in the hippocampus to predictive coding in visual cortex.

    PubMed

    Hindy, Nicholas C; Ng, Felicia Y; Turk-Browne, Nicholas B

    2016-05-01

    Models of predictive coding frame perception as a generative process in which expectations constrain sensory representations. These models account for expectations about how a stimulus will move or change from moment to moment, but do not address expectations about what other, distinct stimuli are likely to appear based on prior experience. We show that such memory-based expectations in human visual cortex are related to the hippocampal mechanism of pattern completion. PMID:27065363

  18. Improvement of the predicted aural detection code ICHIN (I Can Hear It Now)

    NASA Astrophysics Data System (ADS)

    Mueller, Arnold W.; Smith, Charles D.; Lemasurier, Phillip

    Acoustic tests were conducted to study the far-field sound pressure levels and aural detection ranges associated with a Sikorsky S-76A helicopter in straight and level flight at various advancing blade tip Mach numbers. The flight altitude was nominally 150 meters above ground level. This paper compares the normalized predicted aural detection distances, based on the measured far-field sound pressure levels, to the normalized measured aural detection distances obtained from sound jury responses recorded during the same test. Both unmodified and modified versions of the prediction code ICHIN-6 (I Can Hear It Now) were used to produce the results for this study.

  19. Predicting multi-wall structural response to hypervelocity impact using the hull code

    NASA Technical Reports Server (NTRS)

    Schonberg, William P.

    1993-01-01

    Previously, multi-wall structures have been analyzed extensively, primarily through experiment, as a means of increasing the meteoroid/space debris impact protection of spacecraft. As structural configurations become more varied, the number of tests required to characterize their response increases dramatically. As an alternative to experimental testing, numerical modeling of high-speed impact phenomena is often being used to predict the response of a variety of structural systems under different impact loading conditions. The results of comparing experimental tests to Hull Hydrodynamic Computer Code predictions are reported. Also, the results of a numerical parametric study of multi-wall structural response to hypervelocity cylindrical projectile impact are presented.

  20. Adaptive DFT-based fringe tracking and prediction at IOTA

    NASA Astrophysics Data System (ADS)

    Wilson, Edward; Pedretti, Ettore; Bregman, Jesse; Mah, Robert W.; Traub, Wesley A.

    2004-10-01

    An automatic fringe tracking system has been developed and implemented at the Infrared Optical Telescope Array (IOTA). In testing during May 2002, the system successfully minimized the optical path differences (OPDs) for all three baselines at IOTA. Based on sliding window discrete Fourier transform (DFT) calculations that were optimized for computational efficiency and robustness to atmospheric disturbances, the algorithm has also been tested extensively on off-line data. Implemented in ANSI C on the 266 MHz PowerPC processor running the VxWorks real-time operating system, the algorithm runs in approximately 2.0 milliseconds per scan (including all three interferograms), using the science camera and piezo scanners to measure and correct the OPDs. Preliminary analysis on an extension of this algorithm indicates a potential for predictive tracking, although at present, real-time implementation of this extension would require significantly more computational capacity.
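
    The sliding-window DFT idea can be illustrated on a synthetic interferogram: the window position that maximizes fringe power at the fringe frequency estimates the packet center, i.e. the optical path difference. All signal parameters below (fringe frequency, envelope width, window length) are invented for the illustration, not IOTA's values:

```python
import numpy as np

# synthetic interferogram: a fringe packet centered at the true OPD
N, f, opd = 256, 0.1, 60.0        # samples, fringe freq (cycles/sample), center
x = np.arange(N)
env = np.exp(-((x - opd) / 12.0) ** 2)
scan = env * np.cos(2 * np.pi * f * (x - opd))

W = 32                            # sliding DFT window length
kernel = np.exp(-2j * np.pi * f * np.arange(W))
# fringe power at frequency f for every window position
power = np.array([abs(scan[i:i + W] @ kernel) for i in range(N - W)])
est = float(np.argmax(power)) + W / 2   # center of the best-matching window
```

    In a real tracker each scan would update a running (sliding) DFT incrementally instead of recomputing every window, which is what keeps the per-scan cost in the millisecond range.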

  1. Adaptive DFT-Based Fringe Tracking and Prediction at IOTA

    NASA Technical Reports Server (NTRS)

    Wilson, Edward; Pedretti, Ettore; Bregman, Jesse; Mah, Robert W.; Traub, Wesley A.

    2004-01-01

    An automatic fringe tracking system has been developed and implemented at the Infrared Optical Telescope Array (IOTA). In testing during May 2002, the system successfully minimized the optical path differences (OPDs) for all three baselines at IOTA. Based on sliding window discrete Fourier transform (DFT) calculations that were optimized for computational efficiency and robustness to atmospheric disturbances, the algorithm has also been tested extensively on off-line data. Implemented in ANSI C on the 266 MHz PowerPC processor running the VxWorks real-time operating system, the algorithm runs in approximately 2.0 milliseconds per scan (including all three interferograms), using the science camera and piezo scanners to measure and correct the OPDs. Preliminary analysis on an extension of this algorithm indicates a potential for predictive tracking, although at present, real-time implementation of this extension would require significantly more computational capacity.

  2. Users manual for the NASA Lewis Ice Accretion Prediction Code (LEWICE)

    NASA Technical Reports Server (NTRS)

    Ruff, Gary A.; Berkowitz, Brian M.

    1990-01-01

    LEWICE is an ice accretion prediction code that applies a time-stepping procedure to calculate the shape of an ice accretion. The potential flow field is calculated in LEWICE using the Douglas Hess-Smith 2-D panel code (S24Y). This potential flow field is then used to calculate the trajectories of particles and the impingement points on the body. These calculations are performed to determine the distribution of liquid water impinging on the body, which then serves as input to the icing thermodynamic code. The icing thermodynamic model is based on the work of Messinger, but contains several major modifications and improvements. This model is used to calculate the ice growth rate at each point on the surface of the geometry. By specifying an icing time increment, the ice growth rate can be interpreted as an ice thickness which is added to the body, resulting in the generation of new coordinates. This procedure is repeated, beginning with the potential flow calculations, until the desired icing time is reached. The operation of LEWICE is illustrated through the use of five examples. These examples are representative of the types of applications expected for LEWICE. All input and output is discussed, along with many of the diagnostic messages contained in the code. Several error conditions that may occur in the code for certain icing conditions are identified, and a course of action is recommended. LEWICE has been used to calculate a variety of ice shapes, but should still be considered a research code. The code should be exercised further to identify any shortcomings and inadequacies. Any modifications identified as a result of these cases, or of additional experimental results, should be incorporated into the model. Using it as a test bed for improvements to the ice accretion model is one important application of LEWICE.
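
The time-stepping procedure described above can be illustrated with a minimal sketch. The growth-rate callback stands in for the potential-flow, trajectory, and thermodynamic calculations, and all names are hypothetical:

```python
def accrete_ice(thickness, icing_time, dt, growth_rate):
    """LEWICE-style time-stepping sketch: at each step a growth-rate model
    (standing in for the flow, impingement, and thermodynamic codes) gives
    a rate per surface point, which is integrated over dt and added to the
    local ice thickness before the next pass."""
    t = 0.0
    while t < icing_time:
        rates = growth_rate(thickness)   # recomputed from current geometry
        thickness = [h + r * dt for h, r in zip(thickness, rates)]
        t += dt
    return thickness

# constant-rate toy model over four 0.5 s steps
final = accrete_ice([0.0, 0.0, 0.0], 2.0, 0.5, lambda surf: [0.1, 0.2, 0.0])
print([round(h, 3) for h in final])
```

The key design point the abstract describes is that the rates are recomputed from the updated geometry each step, so the ice shape feeds back into the flow solution.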

  3. Why do you fear the bogeyman? An embodied predictive coding model of perceptual inference.

    PubMed

    Pezzulo, Giovanni

    2014-09-01

    Why are we scared by nonperceptual entities such as the bogeyman, and why does the bogeyman only visit us during the night? Why does hearing a window squeaking in the night suggest to us the unlikely idea of a thief or a killer? And why is this more likely to happen after watching a horror movie? To answer these and similar questions, we need to put mind and body together again and consider the embodied nature of perceptual and cognitive inference. Predictive coding provides a general framework for perceptual inference; I propose to extend it by including interoceptive and bodily information. The resulting embodied predictive coding inference permits one to compare alternative hypotheses (e.g., is the sound I hear generated by a thief or the wind?) using the same inferential scheme as in predictive coding, but using both sensory and interoceptive information as evidence, rather than just considering sensory events. If you hear a window squeaking in the night after watching a horror movie, you may consider plausible a very unlikely hypothesis (e.g., a thief, or even the bogeyman) because it explains both what you sense (e.g., the window squeaking in the night) and how you feel (e.g., your high heart rate). The good news is that the inference that I propose is fully rational and gives minds and bodies equal dignity. The bad news is that it also gives an embodiment to the bogeyman, and a reason to fear it. PMID:24307092

  4. Bandwidth reduction of high-frequency sonar imagery in shallow water using content-adaptive hybrid image coding

    NASA Astrophysics Data System (ADS)

    Shin, Frances B.; Kil, David H.

    1998-09-01

    One of the biggest challenges in distributed underwater mine warfare for area sanitization and safe power projection during regional conflicts is transmission of compressed raw imagery data to a central processing station via a limited bandwidth channel while preserving crucial target information for further detection and automatic target recognition processing. Moreover, operating in extremely shallow water with fluctuating channels and numerous interfering sources makes it imperative that image compression algorithms effectively deal with background nonstationarity within an image as well as content variation between images. In this paper, we present a novel approach to lossy image compression that combines image-content classification, content-adaptive bit allocation, and hybrid wavelet tree-based coding for over 100:1 bandwidth reduction with little sacrifice in signal-to-noise ratio (SNR). Our algorithm comprises (1) content-adaptive coding that takes advantage of a classify-before-coding strategy to reduce data mismatch, (2) subimage transformation for energy compaction, and (3) wavelet tree-based coding for efficient encoding of significant wavelet coefficients. Furthermore, instead of using the embedded zerotree coding with scalar quantization (SQ), we investigate the use of a hybrid coding strategy that combines SQ for high-magnitude outlier transform coefficients and classified vector quantization (CVQ) for compactly clustered coefficients. This approach helps us achieve reduced distortion error and robustness while achieving a high compression ratio. Our analysis, based on high-frequency sonar real data that exhibit severe content variability and contain both mines and mine-like clutter, indicates that we can achieve over 100:1 compression ratio without losing crucial signal attributes. In comparison, benchmarking of the same data set with the best still-picture compression algorithm called the set partitioning in hierarchical trees (SPIHT) reveals
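
The hybrid SQ/CVQ split described above can be sketched as follows; a 1-D codebook stands in for the classified vector quantizer, and the threshold, step size, and codebook values are invented for illustration:

```python
import numpy as np

def hybrid_quantize(coeffs, threshold, sq_step, codebook):
    """Hybrid strategy sketch: uniform scalar quantization (SQ) for
    high-magnitude outlier coefficients, nearest-codeword lookup for the
    compactly clustered remainder (a 1-D proxy for CVQ)."""
    out = np.empty_like(coeffs, dtype=float)
    outliers = np.abs(coeffs) > threshold
    # uniform-step SQ for outliers
    out[outliers] = np.round(coeffs[outliers] / sq_step) * sq_step
    # nearest-codeword quantization for the cluster around zero
    rest = coeffs[~outliers]
    nearest = np.argmin(np.abs(rest[:, None] - codebook[None, :]), axis=1)
    out[~outliers] = codebook[nearest]
    return out

coeffs = np.array([0.1, -0.2, 5.3, 0.05, -4.9])
codebook = np.array([-0.25, 0.0, 0.25])
print(hybrid_quantize(coeffs, 1.0, 0.5, codebook))
```

The rationale mirrors the abstract: outliers carry most of the signal energy and are cheap to code with SQ, while the dense cluster near zero is coded compactly with a small codebook.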

  5. Validation of annual average air concentration predictions from the AIRDOS-EPA computer code

    SciTech Connect

    Miller, C.W.; Fields, D.E.; Cotter, S.J.

    1981-01-01

    The AIRDOS-EPA computer code is used to assess the annual doses to the general public resulting from releases of radionuclides to the atmosphere by Oak Ridge National Laboratory (ORNL) facilities. This code uses a modified Gaussian plume equation to estimate air concentrations resulting from the release of a maximum of 36 radionuclides. Radionuclide concentrations in food products are estimated from the output of the atmospheric transport model using the terrestrial transport model described in US Nuclear Regulatory Commission Regulatory Guide 1.109. Doses to man at each distance and direction specified are estimated for up to eleven organs and five exposure modes. To properly use any environmental transport model, some estimate of the model's predictive accuracy must be obtained. Because of a lack of sufficient data for the ORNL site, one year of weekly average (85)Kr concentrations observed at 13 stations located 30 to 150 km distant from an assumed-continuous point source at the Savannah River Plant, Aiken, South Carolina, have been used in a validation study of the atmospheric transport portion of AIRDOS-EPA. The predicted annual average concentration at each station exceeded the observed value in every case. The overprediction factor ranged from 1.4 to 3.4 with an average value of 2.4. Pearson's correlation between pairs of logarithms of observed and predicted values was r = 0.93. Based on a one-tailed Student's t-test, we can be 98% confident that for this site under similar meteorological, release, and monitoring conditions no annual average air concentrations will be observed at the sampling stations in excess of those predicted by the code. As the averaging time of the prediction decreases, however, the uncertainty in the prediction increases.
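
The style of comparison quoted above (per-station overprediction factors, Pearson's r on log-transformed pairs) is easy to reproduce mechanically. The observed/predicted values below are invented for illustration, not the actual (85)Kr monitoring data:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# invented (observed, predicted) annual-average pairs -- NOT the study's data
obs = [1.0, 2.0, 4.0, 8.0]
pred = [2.4, 4.6, 9.8, 18.0]

factors = [p / o for o, p in zip(obs, pred)]   # per-station overprediction
r = pearson([math.log(v) for v in obs], [math.log(v) for v in pred])
print(min(factors) > 1.0, r > 0.99)
```

Taking logarithms before correlating, as the study does, makes the comparison sensitive to multiplicative rather than additive agreement, which suits concentrations spanning orders of magnitude.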

  6. Prediction of contact forces of underactuated finger by adaptive neuro fuzzy approach

    NASA Astrophysics Data System (ADS)

    Petković, Dalibor; Shamshirband, Shahaboddin; Abbasi, Almas; Kiani, Kourosh; Al-Shammari, Eiman Tamah

    2015-12-01

    Passive underactuation can be used to obtain an adaptive finger: the underactuation principle allows the fingers to adapt their shape to the objects they grasp, and underactuated fingers do not require a control algorithm. In this study a kinetostatic model of the underactuated finger mechanism was analyzed. The underactuation is achieved by adding compliance in every finger joint. Since the contact forces of the finger depend on the contact position between the finger and the object, it is suitable to build a prediction model that gives the contact forces as a function of the contact positions between the finger and the grasped object. In this study the prediction of the contact forces was established by a soft computing approach: an adaptive neuro-fuzzy inference system (ANFIS) was applied to predict the finger contact forces.

  7. Predicting cortical bone adaptation to axial loading in the mouse tibia

    PubMed Central

    Pereira, A. F.; Javaheri, B.; Pitsillides, A. A.; Shefelbine, S. J.

    2015-01-01

    The development of predictive mathematical models can contribute to a deeper understanding of the specific stages of bone mechanobiology and the process by which bone adapts to mechanical forces. The objective of this work was to predict, with spatial accuracy, cortical bone adaptation to mechanical load, in order to better understand the mechanical cues that might be driving adaptation. The axial tibial loading model was used to trigger cortical bone adaptation in C57BL/6 mice and provide relevant biological and biomechanical information. A method for mapping cortical thickness in the mouse tibia diaphysis was developed, allowing for a thorough spatial description of where bone adaptation occurs. Poroelastic finite-element (FE) models were used to determine the structural response of the tibia upon axial loading and interstitial fluid velocity as the mechanical stimulus. FE models were coupled with mechanobiological governing equations, which accounted for non-static loads and assumed that bone responds instantly to local mechanical cues in an on–off manner. The presented formulation was able to simulate the areas of adaptation and accurately reproduce the distributions of cortical thickening observed in the experimental data with a statistically significant positive correlation (Kendall's τ rank coefficient τ = 0.51, p < 0.001). This work demonstrates that computational models can spatially predict cortical bone mechanoadaptation to a time variant stimulus. Such models could be used in the design of more efficient loading protocols and drug therapies that target the relevant physiological mechanisms. PMID:26311315
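
The on-off adaptation rule described above can be reduced to a minimal sketch: bone is added wherever the local stimulus exceeds a threshold, and nothing happens elsewhere. The names, the linear deposition rate, and the absence of resorption are illustrative simplifications of the paper's poroelastic FE formulation:

```python
def adapt_cortex(thickness, stimulus, threshold, rate, dt, steps):
    """On-off mechanobiological rule sketch: thickness grows at a fixed
    rate at surface points whose stimulus (e.g. interstitial fluid
    velocity) exceeds the threshold; below threshold there is no change."""
    for _ in range(steps):
        thickness = [h + rate * dt if s > threshold else h
                     for h, s in zip(thickness, stimulus)]
    return thickness

# three surface points; only the two above-threshold points thicken
print(adapt_cortex([0.2, 0.2, 0.2], [0.5, 1.5, 2.0], 1.0, 0.01, 1.0, 10))
```

In the full model the stimulus field would itself be recomputed from the FE solution as the geometry changes; here it is held fixed to keep the rule visible.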

  8. Comparison of Code Predictions to Test Measurements for Two Orifice Compensated Hydrostatic Bearings at High Reynolds Numbers

    NASA Technical Reports Server (NTRS)

    Keba, John E.

    1996-01-01

    Rotordynamic coefficients obtained from testing two different hydrostatic bearings are compared to values predicted by two different computer programs. The first set of test data is from a relatively long (L/D=1) orifice compensated hydrostatic bearing tested in water by Texas A&M University (TAMU Bearing No.9). The second bearing is a shorter (L/D=.37) bearing and was tested in a lower viscosity fluid by Rocketdyne Division of Rockwell (Rocketdyne 'Generic' Bearing) at similar rotating speeds and pressures. Computed predictions of bearing rotordynamic coefficients were obtained from the cylindrical seal code 'ICYL', one of the industrial seal codes developed for NASA-LeRC by Mechanical Technology Inc., and from the hydrodynamic bearing code 'HYDROPAD'. The comparison highlights the effect the bearing has on the accuracy of the predictions. The TAMU Bearing No. 9 test data is closely matched by the predictions obtained from the HYDROPAD code (except for added mass terms), whereas significant differences exist between the data from the Rocketdyne 'Generic' bearing and the code predictions. The results suggest that some aspects of the fluid behavior in the shorter, higher Reynolds Number 'Generic' bearing may not be modeled accurately in the codes. The ICYL code predictions for flowrate and direct stiffness approximately equal those of HYDROPAD. Significant differences in cross-coupled stiffness and the damping terms were obtained relative to HYDROPAD and both sets of test data. Several observations are included concerning application of the ICYL code.

  9. Age-Related Changes in Predictive Capacity Versus Internal Model Adaptability: Electrophysiological Evidence that Individual Differences Outweigh Effects of Age.

    PubMed

    Bornkessel-Schlesewsky, Ina; Philipp, Markus; Alday, Phillip M; Kretzschmar, Franziska; Grewe, Tanja; Gumpert, Maike; Schumacher, Petra B; Schlesewsky, Matthias

    2015-01-01

    Hierarchical predictive coding has been identified as a possible unifying principle of brain function, and recent work in cognitive neuroscience has examined how it may be affected by age-related changes. Using language comprehension as a test case, the present study aimed to dissociate age-related changes in prediction generation versus internal model adaptation following a prediction error. Event-related brain potentials (ERPs) were measured in a group of older adults (60-81 years; n = 40) as they read sentences of the form "The opposite of black is white/yellow/nice." Replicating previous work in young adults, results showed a target-related P300 for the expected antonym ("white"; an effect assumed to reflect a prediction match), and a graded N400 effect for the two incongruous conditions (i.e. a larger N400 amplitude for the incongruous continuation not related to the expected antonym, "nice," versus the incongruous associated condition, "yellow"). These effects were followed by a late positivity, again with a larger amplitude in the incongruous non-associated versus incongruous associated condition. Analyses using linear mixed-effects models showed that the target-related P300 effect and the N400 effect for the incongruous non-associated condition were both modulated by age, thus suggesting that age-related changes affect both prediction generation and model adaptation. However, effects of age were outweighed by the interindividual variability of ERP responses, as reflected in the high proportion of variance captured by the inclusion of by-condition random slopes for participants and items. We thus argue that - at both a neurophysiological and a functional level - the notion of general differences between language processing in young and older adults may only be of limited use, and that future research should seek to better understand the causes of interindividual variability in the ERP responses of older adults and its relation to cognitive performance. 

  10. A high temperature fatigue life prediction computer code based on the total strain version of StrainRange Partitioning (SRP)

    NASA Technical Reports Server (NTRS)

    Mcgaw, Michael A.; Saltsman, James F.

    1993-01-01

    A recently developed high-temperature fatigue life prediction computer code is presented and an example of its usage given. The code discussed is based on the Total Strain version of Strainrange Partitioning (TS-SRP). Included in this code are procedures for characterizing the creep-fatigue durability behavior of an alloy according to TS-SRP guidelines and predicting cyclic life for complex cycle types for both isothermal and thermomechanical conditions. A reasonably extensive materials properties database is included with the code.

  11. Simulation study of HL-2A-like plasma using integrated predictive modeling code

    SciTech Connect

    Poolyarat, N.; Onjun, T.; Promping, J.

    2009-11-15

    Self-consistent simulations of HL-2A-like plasma are carried out using the 1.5D BALDUR integrated predictive modeling code. In these simulations, the core transport is predicted using the combination of the Multi-mode (MMM95) anomalous core transport model and the NCLASS neoclassical transport model. The evolution of plasma current, temperature and density is carried out. Consequently, the plasma current, temperature and density profiles, as well as other plasma parameters, are obtained as the predictions in each simulation. It is found that the temperature and density profiles in these simulations are peaked near the plasma center. In addition, the sawtooth period is studied using the Porcelli model; it is found that the sawtooth period is approximately the same before, during, and after electron cyclotron resonance heating (ECRH) operation. It is also observed that the mixing radius of sawtooth crashes is reduced during ECRH operation.

  12. A modified prediction scheme of the H.264 multiview video coding to improve the decoder performance

    NASA Astrophysics Data System (ADS)

    Hamadan, Ayman M.; Aly, Hussein A.; Fouad, Mohamed M.; Dansereau, Richard M.

    2013-02-01

    In this paper, we present a modified inter-view prediction scheme for multiview video coding (MVC). With more inter-view prediction, the number of reference frames required to decode a single view increases. Consequently, the amount of data needed to decode a single view grows, degrading decoder performance. In this paper, we propose an MVC scheme that requires less inter-view prediction than the MVC standard scheme. The proposed scheme is implemented and tested on real multiview video sequences. Improvements are shown using the proposed scheme in terms of the average data size required either to decode a single view or to access any frame (i.e., random access), with comparable rate-distortion. It is compared to the MVC standard scheme and to other improved techniques from the literature.

  13. Inter-bit prediction based on maximum likelihood estimate for distributed video coding

    NASA Astrophysics Data System (ADS)

    Klepko, Robert; Wang, Demin; Huchet, Grégory

    2010-01-01

    Distributed Video Coding (DVC) is an emerging video coding paradigm for systems that require low complexity encoders supported by high complexity decoders. A typical real world application for a DVC system is mobile phones with video capture hardware that have a limited encoding capability supported by base-stations with a high decoding capability. Generally speaking, a DVC system operates by dividing a source image sequence into two streams, key frames and Wyner-Ziv (W) frames, with the key frames being used to represent the source plus an approximation to the W frames called S frames (where S stands for side information), while the W frames are used to correct the bit errors in the S frames. This paper presents an effective algorithm to reduce the bit errors in the side information of a DVC system. The algorithm is based on maximum likelihood estimation to help predict future bits to be decoded. The reduction in bit errors in turn reduces the number of parity bits needed for error correction. Thus, a higher coding efficiency is achieved since fewer parity bits need to be transmitted from the encoder to the decoder. The algorithm is called inter-bit prediction because it predicts the bit-plane to be decoded from previously decoded bit-planes, one bit-plane at a time, starting from the most significant bit-plane. Results provided from experiments using real-world image sequences show that the inter-bit prediction algorithm does indeed reduce the bit rate by up to 13% for our test sequences. This bit rate reduction corresponds to a PSNR gain of about 1.6 dB for the W frames.
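
A per-context maximum-likelihood bit predictor of this general kind can be sketched in a few lines. The contexts, training pairs, and zero fallback below are illustrative assumptions, not the paper's algorithm:

```python
from collections import Counter

def predict_bitplane(decoded_planes, training_pairs):
    """Predict the next bit-plane from already-decoded higher planes.
    For each context (tuple of higher-plane bits at a pixel), choose the
    bit observed most often for that context in the training data: the
    maximum-likelihood estimate under a per-context model."""
    counts = {}
    for ctx, bit in training_pairs:
        counts.setdefault(ctx, Counter())[bit] += 1
    prediction = []
    for ctx in zip(*decoded_planes):
        if ctx in counts:
            prediction.append(counts[ctx].most_common(1)[0][0])
        else:
            prediction.append(0)   # fall back when the context is unseen
    return prediction

# two decoded higher bit-planes for 4 pixels, plus toy training statistics
planes = [[1, 0, 1, 0],
          [1, 1, 0, 0]]
train = [((1, 1), 1), ((1, 1), 1), ((1, 1), 0),
         ((0, 1), 0), ((1, 0), 1), ((0, 0), 0)]
print(predict_bitplane(planes, train))
```

Better bit predictions mean fewer residual errors in the side information, and hence fewer parity bits to transmit, which is the coding gain the abstract reports.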

  14. Rotor Wake/Stator Interaction Noise Prediction Code Technical Documentation and User's Manual

    NASA Technical Reports Server (NTRS)

    Topol, David A.; Mathews, Douglas C.

    2010-01-01

    This report documents the improvements and enhancements made by Pratt & Whitney to two NASA programs which together will calculate noise from a rotor wake/stator interaction. The code is a combination of subroutines from two NASA programs with many new features added by Pratt & Whitney. To do a calculation, V072 first uses a semi-empirical wake prediction to calculate the rotor wake characteristics at the stator leading edge. Results from the wake model are then automatically input into a rotor wake/stator interaction analytical noise prediction routine which calculates inlet and aft sound power levels for the blade-passage-frequency tones and their harmonics, along with the complex radial mode amplitudes. The code allows for a noise calculation to be performed for a compressor rotor wake/stator interaction, a fan wake/FEGV interaction, or a fan wake/core stator interaction. This report is split into two parts, the first part discusses the technical documentation of the program as improved by Pratt & Whitney. The second part is a user's manual which describes how input files are created and how the code is run.

  15. In silico prediction of long intergenic non-coding RNAs in sheep.

    PubMed

    Bakhtiarizadeh, Mohammad Reza; Hosseinpour, Batool; Arefnezhad, Babak; Shamabadi, Narges; Salami, Seyed Alireza

    2016-04-01

    Long non-coding RNAs (lncRNAs) are transcribed RNA molecules >200 nucleotides in length that do not encode proteins and serve as key regulators of diverse biological processes. Recently, thousands of long intergenic non-coding RNAs (lincRNAs), a type of lncRNA, have been identified in mammals using massively parallel sequencing technologies. The availability of the genome sequence of sheep (Ovis aries) has enabled the genomic prediction of non-coding RNAs. This is the first study to identify lincRNAs using RNA-seq data from eight different tissues of sheep, including brain, heart, kidney, liver, lung, ovary, skin, and white adipose. A computational pipeline was employed to characterize 325 putative lincRNAs with high confidence from eight important tissues of sheep using different criteria such as GC content, exon number, gene length, co-expression analysis, stability, and tissue-specific scores. Sixty-four putative lincRNAs displayed tissue-specific expression. The highest number of tissue-specific lincRNAs was found in skin and brain. All novel lincRNAs that aligned to the human and mouse lincRNAs had conserved synteny. The closest protein-coding genes were enriched in 11 significant GO terms such as limb development, appendage development, striated muscle tissue development, and multicellular organismal development. The findings reported here have important implications for the study of the sheep genome. PMID:27002388
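
Two of the screening criteria, the >200 nt length cutoff and the absence of a substantial open reading frame, can be sketched as a toy filter. The ORF cutoff, sequences, and names below are illustrative; the paper's full pipeline also uses GC content, expression, and stability criteria:

```python
STOP_CODONS = {"TAA", "TAG", "TGA"}

def longest_orf_nt(seq):
    """Length (nt) of the longest open reading frame, ATG up to a stop."""
    best = 0
    for frame in range(3):
        i = frame
        while i + 3 <= len(seq):
            if seq[i:i + 3] == "ATG":
                j = i
                while j + 3 <= len(seq) and seq[j:j + 3] not in STOP_CODONS:
                    j += 3
                best = max(best, j - i)
                i = j
            i += 3
    return best

def lincrna_candidates(transcripts, min_len=200, max_orf_nt=100):
    """Keep transcripts longer than min_len with no substantial ORF."""
    return [name for name, seq in transcripts
            if len(seq) > min_len and longest_orf_nt(seq) < max_orf_nt]

transcripts = [
    ("t1", "A" * 250),                               # long, no ORF: candidate
    ("t2", "ATG" + "AAA" * 80 + "TAA" + "C" * 10),   # long ORF: likely coding
    ("t3", "G" * 150),                               # too short
]
print(lincrna_candidates(transcripts))
```

Real pipelines replace the naive ORF scan with coding-potential tools, but the two-gate structure (length, then coding evidence) is the same.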

  16. Resting-state functional connectivity predicts longitudinal change in autistic traits and adaptive functioning in autism.

    PubMed

    Plitt, Mark; Barnes, Kelly Anne; Wallace, Gregory L; Kenworthy, Lauren; Martin, Alex

    2015-12-01

    Although typically identified in early childhood, the social communication symptoms and adaptive behavior deficits that are characteristic of autism spectrum disorder (ASD) persist throughout the lifespan. Despite this persistence, even individuals without cooccurring intellectual disability show substantial heterogeneity in outcomes. Previous studies have found various behavioral assessments [such as intelligence quotient (IQ), early language ability, and baseline autistic traits and adaptive behavior scores] to be predictive of outcome, but most of the variance in functioning remains unexplained by such factors. In this study, we investigated to what extent functional brain connectivity measures obtained from resting-state functional connectivity MRI (rs-fcMRI) could predict the variance left unexplained by age and behavior (follow-up latency and baseline autistic traits and adaptive behavior scores) in two measures of outcome--adaptive behaviors and autistic traits at least 1 y postscan (mean follow-up latency = 2 y, 10 mo). We found that connectivity involving the so-called salience network (SN), default-mode network (DMN), and frontoparietal task control network (FPTCN) was highly predictive of future autistic traits and the change in autistic traits and adaptive behavior over the same time period. Furthermore, functional connectivity involving the SN, which is predominantly composed of the anterior insula and the dorsal anterior cingulate, predicted reliable improvement in adaptive behaviors with 100% sensitivity and 70.59% precision. From rs-fcMRI data, our study successfully predicted heterogeneity in outcomes for individuals with ASD that was unaccounted for by simple behavioral metrics and provides unique evidence for networks underlying long-term symptom abatement. PMID:26627261
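
The reported 100% sensitivity and 70.59% precision pin down a plausible confusion matrix. The counts below (12 true improvers all flagged, plus 5 false positives) are a hypothetical reconstruction consistent with those figures, not the study's actual data:

```python
def sensitivity_precision(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); precision = TP/(TP+FP)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tp / (tp + fp)

# hypothetical labels: 12 improvers, 5 non-improvers, all 17 flagged
y_true = [1] * 12 + [0] * 5
y_pred = [1] * 17
sens, prec = sensitivity_precision(y_true, y_pred)
print(sens, round(100 * prec, 2))
```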

  17. Resting-state functional connectivity predicts longitudinal change in autistic traits and adaptive functioning in autism

    PubMed Central

    Plitt, Mark; Barnes, Kelly Anne; Wallace, Gregory L.; Kenworthy, Lauren; Martin, Alex

    2015-01-01

    Although typically identified in early childhood, the social communication symptoms and adaptive behavior deficits that are characteristic of autism spectrum disorder (ASD) persist throughout the lifespan. Despite this persistence, even individuals without cooccurring intellectual disability show substantial heterogeneity in outcomes. Previous studies have found various behavioral assessments [such as intelligence quotient (IQ), early language ability, and baseline autistic traits and adaptive behavior scores] to be predictive of outcome, but most of the variance in functioning remains unexplained by such factors. In this study, we investigated to what extent functional brain connectivity measures obtained from resting-state functional connectivity MRI (rs-fcMRI) could predict the variance left unexplained by age and behavior (follow-up latency and baseline autistic traits and adaptive behavior scores) in two measures of outcome—adaptive behaviors and autistic traits at least 1 y postscan (mean follow-up latency = 2 y, 10 mo). We found that connectivity involving the so-called salience network (SN), default-mode network (DMN), and frontoparietal task control network (FPTCN) was highly predictive of future autistic traits and the change in autistic traits and adaptive behavior over the same time period. Furthermore, functional connectivity involving the SN, which is predominantly composed of the anterior insula and the dorsal anterior cingulate, predicted reliable improvement in adaptive behaviors with 100% sensitivity and 70.59% precision. From rs-fcMRI data, our study successfully predicted heterogeneity in outcomes for individuals with ASD that was unaccounted for by simple behavioral metrics and provides unique evidence for networks underlying long-term symptom abatement. PMID:26627261

  18. A Peak Power Reduction Method with Adaptive Inversion of Clustered Parity-Carriers in BCH-Coded OFDM Systems

    NASA Astrophysics Data System (ADS)

    Muta, Osamu; Akaiwa, Yoshihiko

    In this paper, we propose a simple peak power reduction (PPR) method based on adaptive inversion of the parity-check block of the codeword in a BCH-coded OFDM system. In the proposed method, the entire parity-check block of the codeword is adaptively inversed by multiplying weighting factors (WFs) so as to minimize PAPR of the OFDM signal, symbol-by-symbol. At the receiver, these WFs are estimated based on the property of BCH decoding. When a primitive BCH code with single error correction such as the (31,26) code is used, to estimate the WFs, the proposed method employs a significant bit protection method which assigns a significant bit to the best subcarrier selected among all possible subcarriers. With computer simulation, when (31,26), (31,21), and (32,21) BCH codes are employed, PAPR of the OFDM signal at the CCDF (Complementary Cumulative Distribution Function) of 10-4 is reduced by about 1.9, 2.5, and 2.5 dB, respectively, by applying the PPR method, while achieving BER performance comparable to the case with perfect WF estimation in an exponentially decaying 12-path Rayleigh fading condition.
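
The core selection step, trying the parity block as-is and inverted and keeping whichever codeword yields the lower PAPR, can be sketched as a toy binary version. The BPSK mapping, codeword lengths, and bit patterns below are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def papr_db(freq_symbols):
    """PAPR of one OFDM symbol: peak-to-mean power of the IFFT output, in dB."""
    x = np.fft.ifft(freq_symbols)
    p = np.abs(x) ** 2
    return 10.0 * np.log10(p.max() / p.mean())

def choose_parity_inversion(data_bits, parity_bits):
    """Evaluate the codeword with the parity block unchanged (wf=0) and
    fully inverted (wf=1) under BPSK mapping 0->+1, 1->-1; return the
    weighting factor giving the lower PAPR."""
    best_wf, best_papr = None, None
    for wf in (0, 1):
        bits = data_bits + [b ^ wf for b in parity_bits]
        symbols = np.array([1.0 - 2.0 * b for b in bits])
        val = papr_db(symbols)
        if best_papr is None or val < best_papr:
            best_wf, best_papr = wf, val
    return best_wf, best_papr

data = [0, 1, 1, 0, 1, 0, 0, 1]
parity = [1, 1, 0, 1]
wf, papr = choose_parity_inversion(data, parity)
baseline = papr_db(np.array([1.0 - 2.0 * b for b in data + parity]))
print(wf in (0, 1), papr <= baseline + 1e-9)
```

Because inverting the whole parity block still yields a valid-looking codeword structure, the receiver can recover the WF from decoding behavior rather than side information, which is the point of the scheme.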

  19. Accurate rotor loads prediction using the FLAP (Force and Loads Analysis Program) dynamics code

    SciTech Connect

    Wright, A.D.; Thresher, R.W.

    1987-10-01

    Accurately predicting wind turbine blade loads and response is very important in predicting the fatigue life of wind turbines. There is a clear need in the wind turbine community for validated and user-friendly structural dynamics codes for predicting blade loads and response. At the Solar Energy Research Institute (SERI), a Force and Loads Analysis Program (FLAP) has been refined and validated and is ready for general use. Currently, FLAP is operational on an IBM-PC compatible computer and can be used to analyze both rigid- and teetering-hub configurations. The results of this paper show that FLAP can be used to accurately predict the deterministic loads for rigid-hub rotors. This paper compares analytical predictions to field test measurements for a three-bladed, upwind turbine with a rigid-hub configuration. The deterministic loads predicted by FLAP are compared with 10-min azimuth averages of blade root flapwise bending moments for different wind speeds. 6 refs., 12 figs., 3 tabs.
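
The azimuth-averaging used to extract deterministic loads from measured time series can be sketched as a simple binning average; the bin width and sample values below are illustrative:

```python
def azimuth_average(azimuths_deg, moments, bin_width=30):
    """Bin flapwise bending moments by rotor azimuth and average each bin,
    yielding the deterministic (azimuth-locked) load component."""
    bins = {}
    for az, m in zip(azimuths_deg, moments):
        key = int(az % 360) // bin_width
        bins.setdefault(key, []).append(m)
    return {k * bin_width: sum(v) / len(v) for k, v in sorted(bins.items())}

print(azimuth_average([10, 15, 100, 110, 350], [1.0, 2.0, 3.0, 5.0, 4.0]))
```

Averaging over many rotations cancels the stochastic (turbulence-driven) part of the load, leaving the deterministic profile that a code like FLAP predicts.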

  20. Life Prediction for a CMC Component Using the NASALIFE Computer Code

    NASA Technical Reports Server (NTRS)

    Gyekenyesi, John Z.; Murthy, Pappu L. N.; Mital, Subodh K.

    2005-01-01

    The computer code, NASALIFE, was used to provide estimates for life of an SiC/SiC stator vane under varying thermomechanical loading conditions. The primary intention of this effort is to show how the computer code NASALIFE can be used to provide reasonable estimates of life for practical propulsion system components made of advanced ceramic matrix composites (CMC). Simple loading conditions provided readily observable and acceptable life predictions. Varying the loading conditions such that low cycle fatigue and creep were affected independently provided expected trends in the results for life due to varying loads and life due to creep. Analysis was based on idealized empirical data for the 9/99 Melt Infiltrated SiC fiber reinforced SiC.

  1. Natural Circulation of Lead-Bismuth in a One-Dimensional Loop: Experiments and Code Predictions

    SciTech Connect

    Agostini, P.; Bertacci, G.; Gherardi, G.; Bianchi, F.; Meloni, P.; Nicolini, D.; Ambrosini, W.; Forgione, F.; Fruttuoso, G.; Oriolo, F.

    2002-07-01

    The paper summarizes the results obtained by an experimental and computational study jointly performed by ENEA and University of Pisa. The study is aimed at assessing the capabilities of an available thermal-hydraulic system code in simulating natural circulation in a loop in which the working fluid is the eutectic lead-bismuth alloy as in the Italian proposal for Accelerator Driven System (ADS) reactor concepts. Experiments were performed in the CHEOPE facility installed at the ENEA Brasimone Research Centre and pre- and post-test calculations were run using a version of the RELAP5/Mod.3.2, purposely modified to account for Pb-Bi liquid alloy properties and behavior. The main results obtained by the experimental tests and by the code analyses are presented in the paper providing material to discuss the present predictive capabilities of transient and steady-state behavior in liquid Pb-Bi systems. (authors)

  2. Visuomotor adaptation needs a validation of prediction error by feedback error

    PubMed Central

    Gaveau, Valérie; Prablanc, Claude; Laurent, Damien; Rossetti, Yves; Priot, Anne-Emmanuelle

    2014-01-01

    The processes underlying short-term plasticity induced by visuomotor adaptation to a shifted visual field are still debated. Two main sources of error can induce motor adaptation: reaching feedback errors, which correspond to visually perceived discrepancies between hand and target positions, and errors between predicted and actual visual reafferences of the moving hand. These two sources of error are closely intertwined and difficult to disentangle, as both the target and the reaching limb are simultaneously visible. Accordingly, the goal of the present study was to clarify the relative contributions of these two types of errors during a pointing task under prism-displaced vision. In the “terminal feedback error” condition, subjects were allowed to view their hand only at movement end, simultaneously with the target. In the “movement prediction error” condition, viewing of the hand was limited to movement duration, in the absence of any visual target, and error signals arose solely from comparisons between predicted and actual reafferences of the hand. In order to prevent intentional corrections of errors, a subthreshold, progressive stepwise increase in prism deviation was used, so that subjects remained unaware of the visual deviation applied in both conditions. An adaptive aftereffect was observed in the “terminal feedback error” condition only. As long as subjects remained unaware of the optical deviation and self-assigned pointing errors, prediction error alone was insufficient to induce adaptation. These results indicate a critical role of hand-to-target feedback error signals in visuomotor adaptation; consistent with recent neurophysiological findings, they suggest that a combination of feedback and prediction error signals is necessary for eliciting aftereffects. They also suggest that feedback error updates the prediction of reafferences when a visual perturbation is introduced gradually and cognitive factors are eliminated or strongly

  3. Predicted effects of sensorineural hearing loss on across-fiber envelope coding in the auditory nerve

    PubMed Central

    Swaminathan, Jayaganesh; Heinz, Michael G.

    2011-01-01

    Cross-channel envelope correlations are hypothesized to influence speech intelligibility, particularly in adverse conditions. Acoustic analyses suggest speech envelope correlations differ for syllabic and phonemic ranges of modulation frequency. The influence of cochlear filtering was examined here by predicting cross-channel envelope correlations in different speech modulation ranges for normal and impaired auditory-nerve (AN) responses. Neural cross-correlation coefficients quantified across-fiber envelope coding in syllabic (0–5 Hz), phonemic (5–64 Hz), and periodicity (64–300 Hz) modulation ranges. Spike trains were generated from a physiologically based AN model. Correlations were also computed using the model with selective hair-cell damage. Neural predictions revealed that envelope cross-correlation decreased with increased characteristic-frequency separation for all modulation ranges (with greater syllabic-envelope correlation than phonemic or periodicity). Syllabic envelope was highly correlated across many spectral channels, whereas phonemic and periodicity envelopes were correlated mainly between adjacent channels. Outer-hair-cell impairment increased the degree of cross-channel correlation for phonemic and periodicity ranges for speech in quiet and in noise, thereby reducing the number of independent neural information channels for envelope coding. In contrast, outer-hair-cell impairment was predicted to decrease cross-channel correlation for syllabic envelopes in noise, which may partially account for the reduced ability of hearing-impaired listeners to segregate speech in complex backgrounds. PMID:21682421
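
    The neural cross-correlation coefficient used above can be illustrated with a simpler signal-level analogue: band-limit two channel envelopes to one modulation range and take their normalized correlation. This is a hedged sketch only; the paper derives its coefficients from auditory-nerve-model spike trains, and the function names and FFT-mask bandpass here are assumptions.

    ```python
    import numpy as np

    def band_envelope_corr(env_a, env_b, fs, band):
        """Normalized correlation of two envelopes within one modulation band
        (illustrative analogue of the paper's neural metric)."""
        n = len(env_a)
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        mask = (freqs >= band[0]) & (freqs <= band[1])  # keep one modulation range
        fa = np.fft.irfft(np.fft.rfft(env_a) * mask, n)
        fb = np.fft.irfft(np.fft.rfft(env_b) * mask, n)
        fa -= fa.mean()
        fb -= fb.mean()
        return float(fa @ fb / np.sqrt((fa @ fa) * (fb @ fb)))

    # Identical envelopes are perfectly correlated in the phonemic (5-64 Hz)
    # range; independent noise envelopes are only weakly correlated.
    rng = np.random.default_rng(0)
    x = rng.standard_normal(2000)
    y = rng.standard_normal(2000)
    same = band_envelope_corr(x, x, 1000.0, (5.0, 64.0))
    diff = band_envelope_corr(x, y, 1000.0, (5.0, 64.0))
    ```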

  4. A Predictive Coding Perspective on Beta Oscillations during Sentence-Level Language Comprehension

    PubMed Central

    Lewis, Ashley G.; Schoffelen, Jan-Mathijs; Schriefers, Herbert; Bastiaansen, Marcel

    2016-01-01

    Oscillatory neural dynamics have been steadily receiving more attention as a robust and temporally precise signature of network activity related to language processing. We have recently proposed that oscillatory dynamics in the beta and gamma frequency ranges measured during sentence-level comprehension might be best explained from a predictive coding perspective. Under our proposal we related beta oscillations to both the maintenance/change of the neural network configuration responsible for the construction and representation of sentence-level meaning, and to top–down predictions about upcoming linguistic input based on that sentence-level meaning. Here we zoom in on these particular aspects of our proposal, and discuss both old and new supporting evidence. Finally, we present some preliminary magnetoencephalography data from an experiment comparing Dutch subject- and object-relative clauses that was specifically designed to test our predictive coding framework. Initial results support the first of the two suggested roles for beta oscillations in sentence-level language comprehension. PMID:26973500

  5. A Predictive Coding Perspective on Beta Oscillations during Sentence-Level Language Comprehension.

    PubMed

    Lewis, Ashley G; Schoffelen, Jan-Mathijs; Schriefers, Herbert; Bastiaansen, Marcel

    2016-01-01

    Oscillatory neural dynamics have been steadily receiving more attention as a robust and temporally precise signature of network activity related to language processing. We have recently proposed that oscillatory dynamics in the beta and gamma frequency ranges measured during sentence-level comprehension might be best explained from a predictive coding perspective. Under our proposal we related beta oscillations to both the maintenance/change of the neural network configuration responsible for the construction and representation of sentence-level meaning, and to top-down predictions about upcoming linguistic input based on that sentence-level meaning. Here we zoom in on these particular aspects of our proposal, and discuss both old and new supporting evidence. Finally, we present some preliminary magnetoencephalography data from an experiment comparing Dutch subject- and object-relative clauses that was specifically designed to test our predictive coding framework. Initial results support the first of the two suggested roles for beta oscillations in sentence-level language comprehension. PMID:26973500

  6. Adaptive reliance on the most stable sensory predictions enhances perceptual feature extraction of moving stimuli.

    PubMed

    Kumar, Neeraj; Mutha, Pratik K

    2016-03-01

    The prediction of the sensory outcomes of action is thought to be useful for distinguishing self- vs. externally generated sensations, correcting movements when sensory feedback is delayed, and learning predictive models for motor behavior. Here, we show that aspects of another fundamental function-perception-are enhanced when they entail the contribution of predicted sensory outcomes and that this enhancement relies on the adaptive use of the most stable predictions available. We combined a motor-learning paradigm that imposes new sensory predictions with a dynamic visual search task to first show that perceptual feature extraction of a moving stimulus is poorer when it is based on sensory feedback that is misaligned with those predictions. This was possible because our novel experimental design allowed us to override the "natural" sensory predictions present when any action is performed and separately examine the influence of these two sources on perceptual feature extraction. We then show that if the new predictions induced via motor learning are unreliable, rather than just relying on sensory information for perceptual judgments, as is conventionally thought, then subjects adaptively transition to using other stable sensory predictions to maintain greater accuracy in their perceptual judgments. Finally, we show that when sensory predictions are not modified at all, these judgments are sharper when subjects combine their natural predictions with sensory feedback. Collectively, our results highlight the crucial contribution of sensory predictions to perception and also suggest that the brain intelligently integrates the most stable predictions available with sensory information to maintain high fidelity in perceptual decisions. PMID:26823516

  7. Adaptive reliance on the most stable sensory predictions enhances perceptual feature extraction of moving stimuli

    PubMed Central

    Kumar, Neeraj

    2016-01-01

    The prediction of the sensory outcomes of action is thought to be useful for distinguishing self- vs. externally generated sensations, correcting movements when sensory feedback is delayed, and learning predictive models for motor behavior. Here, we show that aspects of another fundamental function—perception—are enhanced when they entail the contribution of predicted sensory outcomes and that this enhancement relies on the adaptive use of the most stable predictions available. We combined a motor-learning paradigm that imposes new sensory predictions with a dynamic visual search task to first show that perceptual feature extraction of a moving stimulus is poorer when it is based on sensory feedback that is misaligned with those predictions. This was possible because our novel experimental design allowed us to override the “natural” sensory predictions present when any action is performed and separately examine the influence of these two sources on perceptual feature extraction. We then show that if the new predictions induced via motor learning are unreliable, rather than just relying on sensory information for perceptual judgments, as is conventionally thought, then subjects adaptively transition to using other stable sensory predictions to maintain greater accuracy in their perceptual judgments. Finally, we show that when sensory predictions are not modified at all, these judgments are sharper when subjects combine their natural predictions with sensory feedback. Collectively, our results highlight the crucial contribution of sensory predictions to perception and also suggest that the brain intelligently integrates the most stable predictions available with sensory information to maintain high fidelity in perceptual decisions. PMID:26823516

  8. Non-coding RNAs Enabling Prognostic Stratification and Prediction of Therapeutic Response in Colorectal Cancer Patients.

    PubMed

    Perakis, Samantha O; Thomas, Joseph E; Pichler, Martin

    2016-01-01

    Colorectal cancer (CRC) is a heterogeneous disease and current treatment options for patients are associated with a wide range of outcomes and tumor responses. Although the traditional TNM staging system continues to serve as a crucial tool for estimating CRC prognosis and for stratification of treatment choices and long-term survival, it remains limited as it relies on macroscopic features and cases of surgical resection, fails to incorporate new molecular data and information, and cannot perfectly predict the variety of outcomes and responses to treatment associated with tumors of the same stage. Although additional histopathologic features have recently been applied in order to better classify individual tumors, the future might incorporate the use of novel molecular and genetic markers in order to maximize therapeutic outcome and to provide accurate prognosis. Such novel biomarkers, in addition to individual patient tumor phenotyping and other validated genetic markers, could facilitate the prediction of risk of progression in CRC patients and help assess overall survival. Recent findings point to the emerging role of non-protein-coding regions of the genome in their contribution to the progression of cancer and tumor formation. Two major subclasses of non-coding RNAs (ncRNAs), microRNAs and long non-coding RNAs, are often dysregulated in CRC and have demonstrated their diagnostic and prognostic potential as biomarkers. These ncRNAs are promising molecular classifiers and could assist in the stratification of patients into appropriate risk groups to guide therapeutic decisions and their expression patterns could help determine prognosis and predict therapeutic options in CRC. PMID:27573901

  9. Starvation stress during larval development reveals predictive adaptive response in adult worker honey bees (Apis mellifera)

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A variety of organisms exhibit developmental plasticity that results in differences in adult morphology, physiology or behavior. This variation in the phenotype, called “Predictive Adaptive Response (PAR),” gives a selective advantage in an adult's environment if the adult experiences environments s...

  10. Validity of the Vocational Adaptation Rating Scale: Prediction of Mentally Retarded Workers' Placement in Sheltered Workshops.

    ERIC Educational Resources Information Center

    Malgady, Robert G.; And Others

    1980-01-01

    The validity of the Vocational Adaptation Rating Scale (VARS) for predicting placement of 125 mentally retarded workers in sheltered workshop settings was investigated. Results indicated low to moderate significant partial correlations with concurrent placement and one year follow-up placement (controlling IQ, age, and sex). (Author)

  11. Weighted Structural Regression: A Broad Class of Adaptive Methods for Improving Linear Prediction.

    ERIC Educational Resources Information Center

    Pruzek, Robert M.; Lepak, Greg M.

    1992-01-01

    Adaptive forms of weighted structural regression are developed and discussed. Bootstrapping studies indicate that the new methods have potential to recover known population regression weights and predict criterion score values routinely better than do ordinary least squares methods. The new methods are scale free and simple to compute. (SLD)

  12. An Adaptive Handover Prediction Scheme for Seamless Mobility Based Wireless Networks

    PubMed Central

    Safa Sadiq, Ali; Fisal, Norsheila Binti; Ghafoor, Kayhan Zrar; Lloret, Jaime

    2014-01-01

    We propose an adaptive handover prediction (AHP) scheme for seamless mobility based wireless networks. That is, the AHP scheme incorporates fuzzy logic into the AP prediction process in order to lend cognitive capability to handover decision making. Selection metrics, including received signal strength, mobile node relative direction towards the access points in the vicinity, and access point load, are collected and considered as inputs of the fuzzy decision making system in order to select the most preferable AP among the surrounding WLANs. The handover decision, which is based on the quality cost calculated by the fuzzy inference system, relies on adaptable rather than fixed coefficients. In other words, the mean and the standard deviation of the normalized network prediction metrics of the fuzzy inference system, collected from available WLANs, are obtained adaptively. Accordingly, they are applied as statistical information to adjust or adapt the coefficients of the membership functions. In addition, we propose an adjustable weight vector concept for input metrics in order to cope with the continuous, unpredictable variation in their membership degrees. Furthermore, handover decisions are performed independently in each mobile node after the RSS, direction toward APs, and AP load are known. Finally, performance evaluation of the proposed scheme shows its superiority compared with representatives of the prediction approaches. PMID:25574490
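
    The adaptive-coefficient idea can be sketched with a crisp stand-in for the fuzzy decision step: standardize each selection metric with its observed mean and standard deviation (echoing the scheme's adaptive coefficients), then rank candidate APs by a weighted score. The real scheme uses a fuzzy inference system with adaptive membership functions; the metric names, signs, and weights below are assumptions.

    ```python
    import statistics

    def pick_ap(aps, weights=(0.5, 0.3, 0.2)):
        """Rank candidate APs by a weighted score over standardized metrics
        (crisp illustrative stand-in for the AHP fuzzy decision step)."""
        # Higher is better: strong RSS, small relative direction, low load.
        raw = [(ap["rss"], -ap["direction"], -ap["load"]) for ap in aps]
        cols = list(zip(*raw))
        z = []
        for col in cols:
            mu = statistics.fmean(col)
            sd = statistics.pstdev(col) or 1.0  # guard against zero spread
            z.append([(v - mu) / sd for v in col])
        scores = [sum(w * z[m][i] for m, w in enumerate(weights))
                  for i in range(len(aps))]
        return max(range(len(aps)), key=scores.__getitem__)

    # The AP with the strongest signal, best alignment, and lightest load wins.
    aps = [{"rss": -70, "direction": 90, "load": 0.8},
           {"rss": -50, "direction": 10, "load": 0.2},
           {"rss": -60, "direction": 45, "load": 0.5}]
    best = pick_ap(aps)
    ```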

  13. Adaptive immunity increases the pace and predictability of evolutionary change in commensal gut bacteria

    PubMed Central

    Barroso-Batista, João; Demengeot, Jocelyne; Gordo, Isabel

    2015-01-01

    Co-evolution between the mammalian immune system and the gut microbiota is believed to have shaped the microbiota's astonishing diversity. Here we test the corollary hypothesis that the adaptive immune system, directly or indirectly, influences the evolution of commensal species. We compare the evolution of Escherichia coli upon colonization of the gut of wild-type and Rag2−/− mice, which lack lymphocytes. We show that bacterial adaptation is slower in immune-compromised animals, a phenomenon explained by differences in the action of natural selection within each host. Emerging mutations exhibit strong beneficial effects in healthy hosts but substantial antagonistic pleiotropy in immune-deficient mice. This feature is due to changes in the composition of the gut microbiota, which differs according to the immune status of the host. Our results indicate that the adaptive immune system influences the tempo and predictability of E. coli adaptation to the mouse gut. PMID:26615893

  14. Comparison of secondary flows predicted by a viscous code and an inviscid code with experimental data for a turning duct

    NASA Technical Reports Server (NTRS)

    Schwab, J. R.; Povinelli, L. A.

    1984-01-01

    A comparison of the secondary flows computed by the viscous Kreskovsky-Briley-McDonald code and the inviscid Denton code with benchmark experimental data for a turning duct is presented. The viscous code is a fully parabolized space-marching Navier-Stokes solver, while the inviscid code is a time-marching Euler solver. The experimental data were collected by Taylor, Whitelaw, and Yianneskis with a laser Doppler velocimeter system in a 90 deg turning duct of square cross-section. The agreement between the viscous and inviscid computations was generally very good for the streamwise primary velocity and the radial secondary velocity, except at the walls, where slip conditions were specified for the inviscid code. The agreement between both the computations and the experimental data was not as close, especially at the 60.0 deg and 77.5 deg angular positions within the duct. This disagreement was attributed to incomplete modelling of the vortex development near the suction surface.

  15. Multisensor multipulse Linear Predictive Coding (LPC) analysis in noise for medium rate speech transmission

    NASA Astrophysics Data System (ADS)

    Preuss, R. D.

    1985-12-01

    The theory of multipulse linear predictive coding (LPC) analysis is extended to include the possible presence of acoustic noise, as for a telephone near a busy road. Models are developed assuming two signals are provided: the primary signal is the output of a microphone which samples the combined acoustic fields of the noise and the speech, while the secondary signal is the output of a microphone which samples the acoustic field of the noise alone. Analysis techniques to extract the multipulse LPC parameters from these two signals are developed; these techniques are developed as approximations to maximum likelihood analysis for the given model.
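
    Standard single-sensor LPC analysis, which the report extends to a two-microphone noise model, rests on the autocorrelation method and the Levinson-Durbin recursion. A minimal textbook sketch follows; it omits the multipulse excitation search and the noise-reference channel that are the report's actual contribution.

    ```python
    import random

    def lpc(x, order):
        """LPC coefficients via the autocorrelation method and the
        Levinson-Durbin recursion (textbook sketch)."""
        r = [sum(x[t] * x[t - k] for t in range(k, len(x)))
             for k in range(order + 1)]
        a = [1.0] + [0.0] * order   # a[0] = 1 by convention
        err = r[0]
        for i in range(1, order + 1):
            acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
            k = -acc / err          # reflection coefficient
            new_a = a[:]
            new_a[i] = k
            for j in range(1, i):
                new_a[j] = a[j] + k * a[i - j]
            a = new_a
            err *= 1.0 - k * k      # residual prediction-error energy
        return a, err

    # An AR(1) process x[n] = 0.9*x[n-1] + e[n] should be recovered with
    # a[1] close to -0.9.
    random.seed(1)
    x = [0.0]
    for _ in range(5000):
        x.append(0.9 * x[-1] + random.gauss(0.0, 1.0))
    coeffs, _ = lpc(x, 1)
    ```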

  16. A novel feature extraction scheme with ensemble coding for protein-protein interaction prediction.

    PubMed

    Du, Xiuquan; Cheng, Jiaxing; Zheng, Tingting; Duan, Zheng; Qian, Fulan

    2014-01-01

    Protein-protein interactions (PPIs) play key roles in most cellular processes, such as cell metabolism, immune response, endocrine function, DNA replication, and transcription regulation. PPI prediction is one of the most challenging problems in functional genomics. Although PPI data have been increasing because of the development of high-throughput technologies and computational methods, many problems are still far from being solved. In this study, a novel predictor was designed by using the Random Forest (RF) algorithm with the ensemble coding (EC) method. To reduce computational time, a feature selection method (DX) was adopted to rank the features and search the optimal feature combination. The DXEC method integrates many features and physicochemical/biochemical properties to predict PPIs. On the Gold Yeast dataset, the DXEC method achieves 67.2% overall precision, 80.74% recall, and 70.67% accuracy. On the Silver Yeast dataset, the DXEC method achieves 76.93% precision, 77.98% recall, and 77.27% accuracy. On the human dataset, the prediction accuracy reaches 80% for the DXEC-RF method. We extended the experiment to a bigger and more realistic dataset that maintains 50% recall on the Yeast All dataset and 80% recall on the Human All dataset. These results show that the DXEC method is suitable for performing PPI prediction. The prediction service of the DXEC-RF classifier is available at http://ailab.ahu.edu.cn:8087/DXECPPI/index.jsp. PMID:25046746

  17. Fast Prediction of HCCI Combustion with an Artificial Neural Network Linked to a Fluid Mechanics Code

    SciTech Connect

    Aceves, S M; Flowers, D L; Chen, J; Babaimopoulos, A

    2006-08-29

    We have developed an artificial neural network (ANN) based combustion model and have integrated it into a fluid mechanics code (KIVA3V) to produce a new analysis tool (titled KIVA3V-ANN) that can yield accurate HCCI predictions at very low computational cost. The neural network predicts ignition delay as a function of operating parameters (temperature, pressure, equivalence ratio and residual gas fraction). KIVA3V-ANN keeps track of the time history of the ignition delay during the engine cycle to evaluate the ignition integral and predict ignition for each computational cell. After a cell ignites, chemistry becomes active, and a two-step chemical kinetic mechanism predicts composition and heat generation in the ignited cells. KIVA3V-ANN has been validated by comparison with isooctane HCCI experiments in two different engines. The neural network provides reasonable predictions for HCCI combustion and emissions that, although typically not as good as obtained with the more physically representative multi-zone model, are obtained at a much reduced computational cost. KIVA3V-ANN can perform reasonably accurate HCCI calculations while requiring only 10% more computational effort than a motored KIVA3V run. It is therefore considered a valuable tool for evaluation of engine maps or other performance analysis tasks requiring multiple individual runs.
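
    The per-cell ignition bookkeeping described above is a Livengood-Wu-type integral accumulated from the predicted ignition delay: the cell ignites when the integral of dt/tau reaches 1. In the sketch below, the ANN is stood in for by any callable mapping a cell state to an ignition delay; all names are assumptions, not the KIVA3V-ANN interface.

    ```python
    def time_of_ignition(tau_of_state, states, dt):
        """Accumulate a Livengood-Wu-type ignition integral over a cell's
        state history; ignition occurs when the integral reaches 1.
        `tau_of_state` stands in for the ANN's ignition-delay prediction."""
        integral = 0.0
        for step, state in enumerate(states):
            integral += dt / tau_of_state(state)
            if integral >= 1.0:
                return step * dt  # ignition at this time step
        return None  # no ignition within the simulated history

    # A constant 0.2 ms delay sampled every 0.1 ms ignites on the second step.
    t_ign = time_of_ignition(lambda s: 2e-4, range(20), 1e-4)
    ```

    After ignition is flagged for a cell, the full code switches that cell over to its two-step chemical kinetic mechanism; this sketch covers only the delay-tracking part.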

  18. The cortical organization of speech processing: feedback control and predictive coding in the context of a dual-stream model.

    PubMed

    Hickok, Gregory

    2012-01-01

    Speech recognition is an active process that involves some form of predictive coding. This statement is relatively uncontroversial. What is less clear is the source of the prediction. The dual-stream model of speech processing suggests that there are two possible sources of predictive coding in speech perception: the motor speech system and the lexical-conceptual system. Here I provide an overview of the dual-stream model of speech processing and then discuss evidence concerning the source of predictive coding during speech recognition. I conclude that, in contrast to recent theoretical trends, the dorsal sensory-motor stream is not a source of forward prediction that can facilitate speech recognition. Rather, it is forward prediction coming out of the ventral stream that serves this function. PMID:22766458

  19. Techniques for the Enhancement of Linear Predictive Speech Coding in Adverse Conditions

    NASA Astrophysics Data System (ADS)

    Wrench, Alan A.

    Available from UMI in association with The British Library. Requires signed TDF. The Linear Prediction model was first applied to speech two and a half decades ago. Since then it has been the subject of intense research and continues to be one of the principal tools in the analysis of speech. Its mathematical tractability makes it a suitable subject for study and its proven success in practical applications makes the study worthwhile. The model is known to be unsuited to speech corrupted by background noise. This has led many researchers to investigate ways of enhancing the speech signal prior to Linear Predictive analysis. In this thesis this body of work is extended. The chosen application is low bit-rate (2.4 kbits/sec) speech coding. For this task the performance of the Linear Prediction algorithm is crucial because there is insufficient bandwidth to encode the error between the modelled speech and the original input. A review of the fundamentals of Linear Prediction and an independent assessment of the relative performance of methods of Linear Prediction modelling are presented. A new method is proposed which is fast and facilitates stability checking; however, its stability is shown to be unacceptably poor compared with existing methods. A novel supposition governing the positioning of the analysis frame relative to a voiced speech signal is proposed and supported by observation. The problem of coding noisy speech is examined. Four frequency domain speech processing techniques are developed and tested. These are: (i) Combined Order Linear Prediction Spectral Estimation; (ii) Frequency Scaling According to an Aural Model; (iii) Amplitude Weighting Based on Perceived Loudness; (iv) Power Spectrum Squaring. These methods are compared with the Recursive Linearised Maximum a Posteriori method. Following on from work done in the frequency domain, a time domain implementation of spectrum squaring is developed. In addition, a new method of power spectrum estimation is

  20. Anti-Voice Adaptation Suggests Prototype-Based Coding of Voice Identity

    PubMed Central

    Latinus, Marianne; Belin, Pascal

    2011-01-01

    We used perceptual aftereffects induced by adaptation with anti-voice stimuli to investigate voice identity representations. Participants learned a set of voices then were tested on a voice identification task with vowel stimuli morphed between identities, after different conditions of adaptation. In Experiment 1, participants chose the identity opposite to the adapting anti-voice significantly more often than the other two identities (e.g., after being adapted to anti-A, they identified the average voice as A). In Experiment 2, participants showed a bias for identities opposite to the adaptor specifically for anti-voice, but not for non-anti-voice adaptors. These results are strikingly similar to adaptation aftereffects observed for facial identity. They are compatible with a representation of individual voice identities in a multidimensional perceptual voice space referenced on a voice prototype. PMID:21847384

  1. CCFL in hot legs and steam generators and its prediction with the CATHARE code

    SciTech Connect

    Geffraye, G.; Bazin, P.; Pichon, P.

    1995-09-01

    This paper presents a study of Counter-Current Flow Limitation (CCFL) prediction in hot legs and steam generators (SG) in both system test facilities and pressurized water reactors. Experimental data are analyzed, particularly the recent MHYRESA test data. Geometrical and scale effects on the flooding behavior are shown. The CATHARE code modelling problems concerning CCFL prediction are discussed. A method which gives the user the possibility of controlling the flooding limit at a given location is developed. In order to minimize the user effect, a methodology is proposed for calculations with a counter-current flow between the upper plenum and the SG U-tubes. The following questions have to be made clear for the user: when to use the CATHARE CCFL option, which correlation to use, and where to locate the flooding limit.
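
    Flooding limits of the kind a user can impose in a system code are often written in Wallis form, sqrt(jg*) + m*sqrt(jf*) = C, relating dimensionless gas upflow and counter-current liquid downflow. A hedged sketch of that correlation follows; m and C here are common textbook values, not CATHARE's or MHYRESA-fitted constants.

    ```python
    import math

    def wallis_liquid_limit(jg_star, m=1.0, c=0.725):
        """Maximum dimensionless counter-current liquid flux jf* allowed by
        a Wallis-type CCFL correlation sqrt(jg*) + m*sqrt(jf*) = c
        (illustrative constants only)."""
        residual = c - math.sqrt(jg_star)
        # Above jg* = c**2 the liquid downflow is completely blocked.
        return (max(residual, 0.0) / m) ** 2

    # With no gas upflow the liquid limit is (c/m)^2; at jg* = c^2 it is zero.
    no_gas = wallis_liquid_limit(0.0)
    blocked = wallis_liquid_limit(0.725 ** 2)
    ```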

  2. Respiratory motion prediction by using the adaptive neuro fuzzy inference system (ANFIS)

    NASA Astrophysics Data System (ADS)

    Kakar, Manish; Nyström, Håkan; Rye Aarup, Lasse; Jakobi Nøttrup, Trine; Rune Olsen, Dag

    2005-10-01

    The quality of radiation therapy delivered for treating cancer patients is related to set-up errors and organ motion. Due to the margins needed to ensure adequate target coverage, many breast cancer patients have been shown to develop late side effects such as pneumonitis and cardiac damage. Breathing-adapted radiation therapy offers the potential for precise radiation dose delivery to a moving target and thereby reduces the side effects substantially. However, the basic requirement for breathing-adapted radiation therapy is to track and predict the target as precisely as possible. Recent studies have addressed the problem of organ motion prediction by using different methods including artificial neural network and model based approaches. In this study, we propose to use a hybrid intelligent system called ANFIS (the adaptive neuro fuzzy inference system) for predicting respiratory motion in breast cancer patients. In ANFIS, we combine both the learning capabilities of a neural network and reasoning capabilities of fuzzy logic in order to give enhanced prediction capabilities, as compared to using a single methodology alone. After training ANFIS and checking for prediction accuracy on 11 breast cancer patients, it was found that the RMSE (root-mean-square error) can be reduced to sub-millimetre accuracy over a period of 20 s provided the patient is assisted with coaching. The average RMSE for the un-coached patients was 35% of the respiratory amplitude and for the coached patients 6% of the respiratory amplitude.
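
    The error figures quoted above (35% vs. 6% of the respiratory amplitude) normalize the prediction RMSE by the breathing trace's peak-to-peak amplitude. A minimal helper showing that normalization; the function name and interface are assumptions, not the study's analysis code.

    ```python
    def rmse_percent_of_amplitude(actual, predicted):
        """RMSE of a motion prediction expressed as a percentage of the
        peak-to-peak respiratory amplitude (illustrative helper for the
        normalization behind the reported 35%/6% figures)."""
        n = len(actual)
        rmse = (sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n) ** 0.5
        amplitude = max(actual) - min(actual)
        return 100.0 * rmse / amplitude

    # A constant 0.2 mm error on a 2 mm peak-to-peak trace is 10%.
    pct = rmse_percent_of_amplitude([0.0, 1.0, 2.0, 1.0, 0.0],
                                    [0.2, 1.2, 2.2, 1.2, 0.2])
    ```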

  3. IN-MACA-MCC: Integrated Multiple Attractor Cellular Automata with Modified Clonal Classifier for Human Protein Coding and Promoter Prediction.

    PubMed

    Pokkuluri, Kiran Sree; Inampudi, Ramesh Babu; Nedunuri, S S S N Usha Devi

    2014-01-01

    Protein coding and promoter region predictions are very important challenges of bioinformatics (Attwood and Teresa, 2000). The identification of these regions plays a crucial role in understanding genes. Many novel computational and mathematical methods have been introduced, and existing methods continue to be refined, for predicting the two regions separately; still, there is scope for improvement. We propose a classifier that is built with MACA (multiple attractor cellular automata) and MCC (modified clonal classifier) to predict both regions with a single classifier. The proposed classifier is trained and tested with Fickett and Tung (1992) datasets for protein coding region prediction for DNA sequences of lengths 54, 108, and 162. This classifier is trained and tested with MMCRI datasets for protein coding region prediction for DNA sequences of lengths 252 and 354. The proposed classifier is trained and tested with promoter sequences from DBTSS (Yamashita et al., 2006) dataset and nonpromoters from EID (Saxonov et al., 2000) and UTRdb (Pesole et al., 2002) datasets. The proposed model can predict both regions with an average accuracy of 90.5% for promoter and 89.6% for protein coding region predictions. The specificity and sensitivity values of promoter and protein coding region predictions are 0.89 and 0.92, respectively. PMID:25132849
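
    The reported figures are standard confusion-matrix measures. For reference, their generic definitions (illustrative only, not the paper's evaluation code):

    ```python
    def classifier_stats(tp, tn, fp, fn):
        """Sensitivity, specificity, and accuracy from confusion-matrix
        counts: the three measures quoted for the classifier
        (generic textbook definitions)."""
        sensitivity = tp / (tp + fn)              # true-positive rate
        specificity = tn / (tn + fp)              # true-negative rate
        accuracy = (tp + tn) / (tp + tn + fp + fn)
        return sensitivity, specificity, accuracy

    # e.g. 92/100 positives and 89/100 negatives correctly labelled.
    sens, spec, acc = classifier_stats(tp=92, tn=89, fp=11, fn=8)
    ```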

  4. FDNS code to predict wall heat fluxes or wall temperatures in rocket nozzles

    NASA Technical Reports Server (NTRS)

    Karr, Gerald R.

    1993-01-01

    This report summarizes the findings on the NASA contract NAG8-212, Task No. 3. The overall project consists of three tasks, all of which have been successfully completed. In addition, some supporting supplemental work, not required by the contract, has been performed and is documented herein. Task 1 involved the modification of the wall functions in the code FDNS to use a Reynolds Analogy-based method. Task 2 involved the verification of the code against experimentally available data. The data chosen for comparison was from an experiment involving the injection of helium from a wall jet. Results obtained in completing this task also show the sensitivity of the FDNS code to unknown conditions at the injection slot. Task 3 required computation of the flow of hot exhaust gases through the P&W 40K subscale nozzle. Computations were performed both with and without film coolant injection. The FDNS program tends to overpredict heat fluxes, but, with suitable modeling of backside cooling, may give reasonable wall temperature predictions. For film cooling in the P&W 40K calorimeter subscale nozzle, the average wall temperature is reduced from 1750 R to about 1050 R by the film cooling. The average wall heat flux is reduced by a factor of three.
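
    The Reynolds-analogy wall treatment mentioned in Task 1 relates wall heat flux to skin friction. A hedged one-line form with the common Prandtl-number correction is sketched below; this is a generic textbook estimate, not the FDNS wall-function implementation, and all parameter names are assumptions.

    ```python
    def reynolds_analogy_heat_flux(cf, rho, u_e, cp, t_aw, t_wall, pr=0.9):
        """Wall heat flux from the Reynolds analogy with a Prandtl-number
        correction: St = (Cf/2) / Pr^(2/3), q_w = St*rho*u*cp*(Taw - Tw).
        Generic textbook estimate, not the FDNS wall functions."""
        stanton = (cf / 2.0) / pr ** (2.0 / 3.0)
        return stanton * rho * u_e * cp * (t_aw - t_wall)

    # e.g. Cf = 0.005, unit-density gas at 100 m/s, cp = 1000 J/(kg K),
    # 1000 K adiabatic-wall-to-wall difference, Pr taken as 1 for clarity.
    q = reynolds_analogy_heat_flux(0.005, 1.0, 100.0, 1000.0,
                                   2000.0, 1000.0, pr=1.0)
    ```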

  5. Status and Plans for the TRANSP Interpretive and Predictive Simulation Code

    NASA Astrophysics Data System (ADS)

    Kaye, Stanley; Andre, Robert; Gorelenkova, Marina; Yuan, Xingqui; Hawryluk, Richard; Jardin, Steven; Poli, Francesca

    2015-11-01

    TRANSP is an integrated interpretive and predictive transport analysis tool that incorporates state-of-the-art heating/current-drive sources and transport models. The treatments and transport solvers are becoming increasingly sophisticated and comprehensive. For instance, the ISOLVER component provides a free-boundary equilibrium solution, while the PT_SOLVER transport solver is especially suited to stiff transport models such as TGLF. TRANSP also incorporates source models such as NUBEAM for neutral beam injection, and GENRAY, TORAY, TORBEAM, TORIC and CQL3D for ICRH, LHCD, ECH and HHFW. The implementation of selected components makes efficient use of MPI to speed up code calculations. TRANSP has a wide international user base, and it is run on the FusionGrid to allow for timely support and quick turnaround by the PPPL Computational Plasma Physics Group. It is being used as a basis for both analysis and development of control algorithms and discharge operational scenarios, including simulation of ITER plasmas. This poster will describe present uses of the code worldwide, as well as plans for upgrading the physics modules and code framework. Progress on implementing TRANSP as a component in the ITER IMAS will also be described. This research was supported by the U.S. Department of Energy under contract DE-AC02-09CH11466.

  6. Evaluation of damage-induced permeability using a three-dimensional Adaptive Continuum/Discontinuum Code (AC/DC)

    NASA Astrophysics Data System (ADS)

    Dedecker, Fabian; Cundall, Peter; Billaux, Daniel; Groeger, Torsten

    Digging a shaft or drift inside a rock mass is a common practice in civil engineering when a transportation way, such as a motorway, railway tunnel or storage shaft is to be built. In most cases, the consequences of the disturbance on the medium must be known in order to estimate the behaviour of the disturbed rock mass. Indeed, excavating part of the rock causes a new distribution of the stress field around the excavation that can lead to micro-cracking and even to the failure of some rock volume in the vicinity of the shaft. Consequently, the formed micro-cracks modify the mechanical and hydraulic properties of the rock. In this paper, we present an original method for the evaluation of damage-induced permeability. ITASCA has developed and used discontinuum models to study rock damage by building particle assemblies and checking the breakage of bonds under stress. However, such models are limited in size by the very large number of particles needed to model even a comparatively small volume of rock. In fact, a large part of most models never experiences large strains and does not require the accurate description of large-strain/damage/post-peak behaviour afforded by a discontinuum model. Thus, a large model frequently can be separated into a strongly strained “core” area to be represented by a Discontinuum and a peripheral area for which continuum zones would be adequate. Based on this observation, Itasca has developed a coupled, three-dimensional, continuum/discontinuum modelling approach. The new approach, termed Adaptive Continuum/Discontinuum Code (AC/DC), is based on the use of a periodic discontinuum “base brick” for which more or less simplified continuum equivalents are derived. Depending on the level of deformation in each part of the model, the AC/DC code can dynamically select the appropriate brick type to be used. In this paper, we apply the new approach to an excavation performed in the Bure site, at which the French nuclear waste agency

  7. Genome-Wide Association Analysis of Adaptation Using Environmentally Predicted Traits.

    PubMed

    van Heerwaarden, Joost; van Zanten, Martijn; Kruijer, Willem

    2015-10-01

    Current methods for studying the genetic basis of adaptation evaluate genetic associations with ecologically relevant traits or single environmental variables, under the implicit assumption that natural selection imposes correlations between phenotypes, environments and genotypes. In practice, observed trait and environmental data are manifestations of unknown selective forces and are only indirectly associated with adaptive genetic variation. In theory, improved estimation of these forces could enable more powerful detection of loci under selection. Here we present an approach in which we approximate adaptive variation by modeling phenotypes as a function of the environment and using the predicted trait in multivariate and univariate genome-wide association analysis (GWAS). Based on computer simulations and published flowering time data from the model plant Arabidopsis thaliana, we find that environmentally predicted traits lead to higher recovery of functional loci in multivariate GWAS and are more strongly correlated to allele frequencies at adaptive loci than individual environmental variables. Our results provide an example of the use of environmental data to obtain independent and meaningful information on adaptive genetic variation. PMID:26496492
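
    The core idea above — regress a phenotype on the environment, then run GWAS on the *predicted* trait — can be sketched with a simulation of our own construction (not the authors' code). Allele counts at one SNP track the environment, mimicking local adaptation, and that SNP shows the strongest association with the environmentally predicted trait:

```python
# Simulated illustration (not the authors' implementation) of GWAS on an
# environmentally predicted trait. SNP 0 is the adaptive locus: its allele
# counts track the environment, as under local adaptation.
import math
import random

random.seed(0)
n, n_snps = 300, 50
env = [random.gauss(0, 1) for _ in range(n)]

def binom2(p):
    """Draw an allele count in {0, 1, 2} with per-allele frequency p."""
    return (random.random() < p) + (random.random() < p)

geno = [[binom2(0.5) for _ in range(n_snps)] for _ in range(n)]
for i in range(n):
    geno[i][0] = binom2(1.0 / (1.0 + math.exp(-2.0 * env[i])))

# Phenotype shaped by the environment, plus noise.
pheno = [2.0 * e + random.gauss(0, 0.5) for e in env]

# Step 1: environmentally predicted trait = fitted values of pheno ~ env.
mean_e, mean_p = sum(env) / n, sum(pheno) / n
slope = (sum((e - mean_e) * (p - mean_p) for e, p in zip(env, pheno))
         / sum((e - mean_e) ** 2 for e in env))
pred_trait = [mean_p + slope * (e - mean_e) for e in env]

def corr(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Step 2: univariate association of each SNP with the predicted trait.
r = [abs(corr([g[j] for g in geno], pred_trait)) for j in range(n_snps)]
top = max(range(n_snps), key=lambda j: r[j])
print(top)  # the adaptive locus (SNP 0) should rank first
```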

  8. Genome-Wide Association Analysis of Adaptation Using Environmentally Predicted Traits

    PubMed Central

    van Zanten, Martijn

    2015-01-01

    Current methods for studying the genetic basis of adaptation evaluate genetic associations with ecologically relevant traits or single environmental variables, under the implicit assumption that natural selection imposes correlations between phenotypes, environments and genotypes. In practice, observed trait and environmental data are manifestations of unknown selective forces and are only indirectly associated with adaptive genetic variation. In theory, improved estimation of these forces could enable more powerful detection of loci under selection. Here we present an approach in which we approximate adaptive variation by modeling phenotypes as a function of the environment and using the predicted trait in multivariate and univariate genome-wide association analysis (GWAS). Based on computer simulations and published flowering time data from the model plant Arabidopsis thaliana, we find that environmentally predicted traits lead to higher recovery of functional loci in multivariate GWAS and are more strongly correlated to allele frequencies at adaptive loci than individual environmental variables. Our results provide an example of the use of environmental data to obtain independent and meaningful information on adaptive genetic variation. PMID:26496492

  9. Reading the second code: mapping epigenomes to understand plant growth, development, and adaptation to the environment.

    PubMed

    2012-06-01

    We have entered a new era in agricultural and biomedical science made possible by remarkable advances in DNA sequencing technologies. The complete sequence of an individual's set of chromosomes (collectively, its genome) provides a primary genetic code for what makes that individual unique, just as the contents of every personal computer reflect the unique attributes of its owner. But a second code, composed of "epigenetic" layers of information, affects the accessibility of the stored information and the execution of specific tasks. Nature's second code is enigmatic and must be deciphered if we are to fully understand and optimize the genetic potential of crop plants. The goal of the Epigenomics of Plants International Consortium is to crack this second code, and ultimately master its control, to help catalyze a new green revolution. PMID:22751210

  10. Predictive coding in autism spectrum disorder and attention deficit hyperactivity disorder.

    PubMed

    Gonzalez-Gadea, Maria Luz; Chennu, Srivas; Bekinschtein, Tristan A; Rattazzi, Alexia; Beraudi, Ana; Tripicchio, Paula; Moyano, Beatriz; Soffita, Yamila; Steinberg, Laura; Adolfi, Federico; Sigman, Mariano; Marino, Julian; Manes, Facundo; Ibanez, Agustin

    2015-11-01

    Predictive coding has been proposed as a framework to understand neural processes in neuropsychiatric disorders. We used this approach to describe mechanisms responsible for attentional abnormalities in autism spectrum disorder (ASD) and attention deficit hyperactivity disorder (ADHD). We monitored brain dynamics of 59 children (8-15 yr old) who had ASD or ADHD or who were control participants via high-density electroencephalography. We performed analysis at the scalp and source-space levels while participants listened to standard and deviant tone sequences. Through task instructions, we manipulated top-down expectation by presenting expected and unexpected deviant sequences. Children with ASD showed reduced superior frontal cortex (FC) responses to unexpected events but increased dorsolateral prefrontal cortex (PFC) activation to expected events. In contrast, children with ADHD exhibited reduced cortical responses in superior FC to expected events but strong PFC activation to unexpected events. Moreover, neural abnormalities were associated with specific control mechanisms, namely, inhibitory control in ASD and set-shifting in ADHD. Based on the predictive coding account, top-down expectation abnormalities could be attributed to a disproportionate reliance (precision) allocated to prior beliefs in ASD and to sensory input in ADHD. PMID:26311184

  11. Crater lake habitat predicts morphological diversity in adaptive radiations of cichlid fishes.

    PubMed

    Recknagel, Hans; Elmer, Kathryn R; Meyer, Axel

    2014-07-01

    Adaptive radiations provide an excellent opportunity for studying the correlates and causes for the origin of biodiversity. In these radiations, species diversity may be influenced by either the ecological and physical environment, intrinsic lineage effects, or both. Disentangling the relative contributions of these factors in generating biodiversity remains a major challenge in understanding why a lineage does or does not radiate. Here, we examined morphological variation in body shape for replicate flocks of Nicaraguan Midas cichlid fishes and tested its association with biological and physical characteristics of their crater lakes. We found that variability of body elongation, an adaptive trait in freshwater fishes, is mainly predicted by average lake depth (N = 6, P < 0.001, R² = 0.96). Other factors considered, including lake age, surface area, littoral zone area, number of co-occurring fish species, and genetic diversity of the Midas flock, did not significantly predict morphological variability. We also showed that lakes with a larger littoral zone have on average higher bodied Midas cichlids, indicating that Midas cichlid flocks are locally adapted to their crater lake habitats. In conclusion, we found that a lake's habitat predicts the magnitude and the diversity of body elongation in repeated cichlid adaptive radiations. PMID:24660780

  12. Predicting the evolutionary dynamics of seasonal adaptation to novel climates in Arabidopsis thaliana.

    PubMed

    Fournier-Level, Alexandre; Perry, Emily O; Wang, Jonathan A; Braun, Peter T; Migneault, Andrew; Cooper, Martha D; Metcalf, C Jessica E; Schmitt, Johanna

    2016-05-17

    Predicting whether and how populations will adapt to rapid climate change is a critical goal for evolutionary biology. To examine the genetic basis of fitness and predict adaptive evolution in novel climates with seasonal variation, we grew a diverse panel of the annual plant Arabidopsis thaliana (multiparent advanced generation intercross lines) in controlled conditions simulating four climates: a present-day reference climate, an increased-temperature climate, a winter-warming only climate, and a poleward-migration climate with increased photoperiod amplitude. In each climate, four successive seasonal cohorts experienced dynamic daily temperature and photoperiod variation over a year. We measured 12 traits and developed a genomic prediction model for fitness evolution in each seasonal environment. This model was used to simulate evolutionary trajectories of the base population over 50 y in each climate, as well as 100-y scenarios of gradual climate change following adaptation to a reference climate. Patterns of plastic and evolutionary fitness response varied across seasons and climates. The increased-temperature climate promoted genetic divergence of subpopulations across seasons, whereas in the winter-warming and poleward-migration climates, seasonal genetic differentiation was reduced. In silico "resurrection experiments" showed limited evolutionary rescue compared with the plastic response of fitness to seasonal climate change. The genetic basis of adaptation and, consequently, the dynamics of evolutionary change differed qualitatively among scenarios. Populations with fewer founding genotypes and populations with genetic diversity reduced by prior selection adapted less well to novel conditions, demonstrating that adaptation to rapid climate change requires the maintenance of sufficient standing variation. PMID:27140640

  13. Affinity regression predicts the recognition code of nucleic acid binding proteins

    PubMed Central

    Pelossof, Raphael; Singh, Irtisha; Yang, Julie L.; Weirauch, Matthew T.; Hughes, Timothy R.; Leslie, Christina S.

    2016-01-01

    Predicting the affinity profiles of nucleic acid-binding proteins directly from the protein sequence is a major unsolved problem. We present a statistical approach for learning the recognition code of a family of transcription factors (TFs) or RNA-binding proteins (RBPs) from high-throughput binding assays. Our method, called affinity regression, trains on protein binding microarray (PBM) or RNAcompete experiments to learn an interaction model between proteins and nucleic acids, using only protein domain and probe sequences as inputs. By training on mouse homeodomain PBM profiles, our model correctly identifies residues that confer DNA-binding specificity and accurately predicts binding motifs for an independent set of divergent homeodomains. Similarly, learning from RNAcompete profiles for diverse RBPs, our model can predict the binding affinities of held-out proteins and identify key RNA-binding residues. More broadly, we envision applying our method to model and predict biological interactions in any setting where there is a high-throughput ‘affinity’ readout. PMID:26571099
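
    The interaction model described above can be illustrated with a toy bilinear regression of our own construction (synthetic, noiseless data; not the published implementation): given probe features D, protein domain features P, and binding profiles Y, learn an interaction matrix W with D·W·Pᵀ ≈ Y, then predict the profile of a held-out protein from its domain features alone.

```python
# Toy affinity-regression-style bilinear model on synthetic, noiseless data.
# The identity vec(D W P^T) = (P ⊗ D) vec(W) (column-major vec) lets us
# solve for W by ordinary least squares.
import numpy as np

rng = np.random.default_rng(1)
n_probes, n_prot, d_probe, d_prot = 60, 12, 8, 5
D = rng.normal(size=(n_probes, d_probe))   # probe sequence features
P = rng.normal(size=(n_prot, d_prot))      # protein domain features
W_true = rng.normal(size=(d_probe, d_prot))
Y = D @ W_true @ P.T                       # binding profiles (probes x proteins)

# Train on all proteins except the last one.
X = np.kron(P[:-1], D)
y = Y[:, :-1].flatten(order="F")
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
W = w_hat.reshape(d_probe, d_prot, order="F")

# Predict the held-out protein's profile from its domain features alone.
pred = D @ W @ P[-1]
err = float(np.max(np.abs(pred - Y[:, -1])))
print(err < 1e-6)  # True on this noiseless toy: W is recovered exactly
```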

  14. Computer code to predict the heat of explosion of high energy materials.

    PubMed

    Muthurajan, H; Sivabalan, R; Pon Saravanan, N; Talawar, M B

    2009-01-30

    The computational approach to the thermochemical changes involved in the explosion of a high energy material (HEM), vis-à-vis its molecular structure, aids HEMs chemists and engineers in predicting important thermodynamic parameters such as the heat of explosion. Such computer-aided design is useful in predicting the performance of a given HEM as well as in conceiving futuristic high energy molecules with significant potential in the field of explosives and propellants. The software code LOTUSES, developed by the authors, predicts various characteristics of HEMs such as explosion products (including balanced explosion reactions), density, velocity of detonation, CJ pressure, etc. The new computational approach described in this paper allows the prediction of the heat of explosion (ΔHe) without any experimental data for different HEMs, and the predictions are comparable with experimental results reported in the literature. The new algorithm, which does not require any complex input parameters, is incorporated in LOTUSES (version 1.5) and the results are presented in this paper. Linear regression analysis of all data points yields R² = 0.9721 with the linear equation y = 0.9262x + 101.45. The value R² = 0.9721 reveals that the computed values are in good agreement with experimental values and useful for rapid hazard assessment of energetic materials. PMID:18513863
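
    The fit reported above (slope, intercept, R²) is a standard least-squares agreement check between computed and experimental values. A sketch with hypothetical placeholder data (not the paper's measurements):

```python
# Hypothetical computed vs. experimental heats of explosion (kcal/kg);
# illustrative placeholders only, not values from the paper.
computed = [900.0, 1100.0, 1250.0, 1400.0, 1600.0]
experimental = [940.0, 1120.0, 1270.0, 1390.0, 1630.0]

n = len(computed)
mx, my = sum(computed) / n, sum(experimental) / n
sxx = sum((x - mx) ** 2 for x in computed)
sxy = sum((x - mx) * (y - my) for x, y in zip(computed, experimental))
slope = sxy / sxx
intercept = my - slope * mx

pred = [slope * x + intercept for x in computed]
ss_res = sum((y - p) ** 2 for y, p in zip(experimental, pred))
ss_tot = sum((y - my) ** 2 for y in experimental)
r2 = 1.0 - ss_res / ss_tot
print(f"y = {slope:.4f}x + {intercept:.2f}, R^2 = {r2:.4f}")
```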

  15. Soliciting scientific information and beliefs in predictive modeling and adaptive management

    NASA Astrophysics Data System (ADS)

    Glynn, P. D.; Voinov, A. A.; Shapiro, C. D.

    2015-12-01

    Post-normal science requires public engagement and adaptive corrections in addressing issues with high complexity and uncertainty. An adaptive management framework is presented for the improved management of natural resources and environments through a public participation process. The framework solicits the gathering and transformation and/or modeling of scientific information but also explicitly solicits the expression of participant beliefs. Beliefs and information are compared, explicitly discussed for alignments or misalignments, and ultimately melded back together as a "knowledge" basis for making decisions. An effort is made to recognize the human or participant biases that may affect the information base and the potential decisions. In a separate step, an attempt is made to recognize and predict the potential "winners" and "losers" (perceived or real) of any decision or action. These "winners" and "losers" include present human communities with different spatial, demographic or socio-economic characteristics as well as more dispersed or more diffusely characterized regional or global communities. "Winners" and "losers" may also include future human communities as well as communities of other biotic species. As in any adaptive management framework, assessment of predictions, iterative follow-through and adaptation of policies or actions is essential, and commonly very difficult or impossible to achieve. Recognizing beforehand the limits of adaptive management is essential. More generally, knowledge of the behavioral and economic sciences and of ethics and sociology will be key to a successful implementation of this adaptive management framework. Knowledge of biogeophysical processes will also be essential, but by definition of the issues being addressed, will always be incomplete and highly uncertain. The human dimensions of the issues addressed and the participatory processes used carry their own complexities and uncertainties. Some ideas and principles are

  16. Bipartite geminivirus host adaptation determined cooperatively by coding and noncoding sequences of the genome.

    PubMed

    Petty, I T; Carter, S C; Morra, M R; Jeffrey, J L; Olivey, H E

    2000-11-25

    Bipartite geminiviruses are small, plant-infecting viruses with genomes composed of circular, single-stranded DNA molecules, designated A and B. Although they are closely related genetically, individual bipartite geminiviruses frequently exhibit host-specific adaptation. Two such viruses are bean golden mosaic virus (BGMV) and tomato golden mosaic virus (TGMV), which are well adapted to common bean (Phaseolus vulgaris) and Nicotiana benthamiana, respectively. In previous studies, partial host adaptation was conferred on BGMV-based or TGMV-based hybrid viruses by separately exchanging open reading frames (ORFs) on DNA A or DNA B. Here we analyzed hybrid viruses in which all of the ORFs on both DNAs were exchanged except for AL1, which encodes a protein with strictly virus-specific activity. These hybrid viruses exhibited partial transfer of host-adapted phenotypes. In contrast, exchange of noncoding regions (NCRs) upstream from the AR1 and BR1 ORFs did not confer any host-specific gain of function on hybrid viruses. However, when the exchangeable ORFs and NCRs from TGMV were combined in a single BGMV-based hybrid virus, complete transfer of TGMV-like adaptation to N. benthamiana was achieved. Interestingly, the reciprocal TGMV-based hybrid virus displayed only partial gain of function in bean. This may be, in part, the result of defective virus-specific interactions between TGMV and BGMV sequences present in the hybrid, although a potential role in adaptation to bean for additional regions of the BGMV genome cannot be ruled out. PMID:11080490

  17. Remembering forward: Neural correlates of memory and prediction in human motor adaptation

    PubMed Central

    Scheidt, Robert A; Zimbelman, Janice L; Salowitz, Nicole M G; Suminski, Aaron J; Mosier, Kristine M; Houk, James; Simo, Lucia

    2011-01-01

    We used functional MR imaging (FMRI), a robotic manipulandum and systems identification techniques to examine neural correlates of predictive compensation for spring-like loads during goal-directed wrist movements in neurologically-intact humans. Although load changed unpredictably from one trial to the next, subjects nevertheless used sensorimotor memories from recent movements to predict and compensate upcoming loads. Prediction enabled subjects to adapt performance so that the task was accomplished with minimum effort. Population analyses of functional images revealed a distributed, bilateral network of cortical and subcortical activity supporting predictive load compensation during visual target capture. Cortical regions - including prefrontal, parietal and hippocampal cortices - exhibited trial-by-trial fluctuations in BOLD signal consistent with the storage and recall of sensorimotor memories or “states” important for spatial working memory. Bilateral activations in associative regions of the striatum demonstrated temporal correlation with the magnitude of kinematic performance error (a signal that could drive reward-optimizing reinforcement learning and the prospective scaling of previously learned motor programs). BOLD signal correlations with load prediction were observed in the cerebellar cortex and red nuclei (consistent with the idea that these structures generate adaptive fusimotor signals facilitating cancellation of expected proprioceptive feedback, as required for conditional feedback adjustments to ongoing motor commands and feedback error learning). Analysis of single subject images revealed that predictive activity was at least as likely to be observed in more than one of these neural systems as in just one. We conclude therefore that motor adaptation is mediated by predictive compensations supported by multiple, distributed, cortical and subcortical structures. PMID:21840405

  18. Analysis of prediction algorithms for residual compression in a lossy to lossless scalable video coding system based on HEVC

    NASA Astrophysics Data System (ADS)

    Heindel, Andreas; Wige, Eugen; Kaup, André

    2014-09-01

    Lossless image and video compression is required in many professional applications. However, lossless coding results in a high data rate, which leads to long waits for the user when channel capacity is limited. Scalable lossless coding is an elegant solution to this problem: it provides a quickly accessible preview via a lossy compressed base layer, which is refined to a lossless output once the enhancement layer is received. This paper therefore presents a lossy to lossless scalable coding system in which the enhancement layer is coded by means of intra prediction and entropy coding. Several algorithms for the prediction step are evaluated. Sample-based Weighted Prediction turns out to be a reasonable choice for typical consumer video sequences, while the Median Edge Detection algorithm is better suited to medical content from computed tomography. For both types of sequences, efficiency may be improved further by the much more complex Edge-Directed Prediction algorithm. In the best case, only about 2.7% additional data rate in total has to be invested for scalable coding compared with single-layer JPEG-LS compression of typical consumer video sequences. For the medical sequences, scalable coding is even more efficient than JPEG-LS compression for certain QP values.
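
    The Median Edge Detection (MED) predictor mentioned above is the nonlinear predictor used by JPEG-LS; it switches between horizontal, vertical, and planar prediction depending on the causal neighbourhood. A minimal sketch:

```python
def med_predict(a, b, c):
    """MED/LOCO-I predictor: a = left, b = above, c = above-left neighbour."""
    if c >= max(a, b):
        return min(a, b)   # edge detected: predict along it
    if c <= min(a, b):
        return max(a, b)
    return a + b - c       # smooth region: planar prediction

# A vertical edge: bright left neighbour, dark top and corner.
print(med_predict(a=200, b=50, c=40))  # 200
```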

  19. Adaptive prediction of human eye pupil position and effects on wavefront errors

    NASA Astrophysics Data System (ADS)

    Garcia-Rissmann, Aurea; Kulcsár, Caroline; Raynaud, Henri-François; El Mrabet, Yamina; Sahin, Betul; Lamory, Barbara

    2011-03-01

    The effects of pupil motion on retinal imaging are studied in this paper. Involuntary eye or head movements are always present during image acquisition, degrading output quality and preventing more detailed diagnostics. When images are acquired with an adaptive optics (AO) system, substantial gain is foreseen if pupil motion is accounted for. This can be achieved using a pupil tracker such as the one developed by Imagine Eyes®, which provides pupil position measurements at an 80 Hz sampling rate. In any AO loop there is inevitably a delay between the wavefront measurement and the correction applied to the deformable mirror, meaning that optimal compensation requires prediction. We investigate several ways of predicting pupil movement: either retaining the last value given by the pupil tracker, which is close to the optimal solution in the case of a pure random walk, or predicting position with auto-regressive (AR) models whose parameters are updated in real time. We show that adaptive AR modeling yields a small improvement in prediction over simply using the latest measured value. We evaluate the wavefront errors by computing the root mean square of the difference between a wavefront displaced by the assumed true position and one displaced by the predicted position, as seen by the imaging system. The results confirm that pupil movements must be compensated in order to minimize wavefront errors.
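
    One way to realize the adaptive AR prediction described above (a sketch under our own assumptions, not the authors' algorithm) is an AR(2) predictor whose coefficients are updated online by normalized LMS, compared against simply holding the last measured position:

```python
# Sketch: online AR(2) prediction of pupil position with a normalized-LMS
# coefficient update (our assumption, not the authors' algorithm), compared
# against holding the last measured value.
import math

def nlms_ar_predict(samples, order=2, mu=0.5, eps=1e-6):
    """Return final AR weights plus squared-error totals for the adaptive
    AR predictor and for the hold-last-value predictor."""
    w = [0.0] * order
    ar_err2 = last_err2 = 0.0
    for t in range(order, len(samples)):
        x = samples[t - order:t][::-1]              # most recent sample first
        y_hat = sum(wi * xi for wi, xi in zip(w, x))
        e = samples[t] - y_hat
        ar_err2 += e * e
        last_err2 += (samples[t] - samples[t - 1]) ** 2
        norm = eps + sum(xi * xi for xi in x)
        w = [wi + mu * e * xi / norm for wi, xi in zip(w, x)]  # NLMS update
    return w, ar_err2, last_err2

# Smooth oscillatory drift as a stand-in for slow pupil motion.
data = [math.sin(0.2 * t) for t in range(500)]
w, ar_err, last_err = nlms_ar_predict(data)
print(ar_err < last_err)  # adaptive AR should beat hold-last-value here
```

On smooth, oscillatory motion the adaptive AR predictor tracks the signal and outperforms hold-last-value; on a pure random walk the last value is already near-optimal, as the abstract notes.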

  20. Fine-Granularity Loading Schemes Using Adaptive Reed-Solomon Coding for xDSL-DMT Systems

    NASA Astrophysics Data System (ADS)

    Panigrahi, Saswat; Le-Ngoc, Tho

    2006-12-01

    While most existing loading algorithms for xDSL-DMT systems strive for the optimal energy distribution to maximize rate, the numbers of bits loaded onto subcarriers are constrained to be integers, and the associated granularity losses can represent a significant percentage of the achievable data rate, especially in the presence of a peak-power constraint. To recover these losses, we propose a fine-granularity loading scheme that jointly optimizes adaptive modulation and flexible coding parameters based on programmable Reed-Solomon (RS) codes and a bit-error probability criterion. Illustrative examples of applications to VDSL-DMT systems indicate that the proposed scheme can offer a rate increase in most cases as compared with various existing integer-bit-loading algorithms. This improvement is in good agreement with the theoretical estimates developed to quantify the granularity loss.
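
    The granularity loss that the scheme recovers can be illustrated with a toy bit-loading computation (hypothetical SNRs and SNR gap, not values from the paper): integer bit-loading discards the fractional part of each subcarrier's capacity.

```python
# Toy bit-loading computation with hypothetical per-subcarrier SNRs and SNR
# gap (not values from the paper): integer loading discards the fractional
# part of each subcarrier's capacity, the "granularity loss".
import math

snr_db = [8, 12, 15, 21, 24, 27, 30]   # per-subcarrier SNRs (dB), hypothetical
gamma_db = 9.8                          # SNR gap for the target error rate

frac_bits = [math.log2(1 + 10 ** ((s - gamma_db) / 10)) for s in snr_db]
int_bits = [math.floor(b) for b in frac_bits]

loss = (sum(frac_bits) - sum(int_bits)) / sum(frac_bits)
print(f"granularity loss: {100 * loss:.1f}% of the achievable rate")
```

Flexible RS code rates let the system carry a non-integer effective number of bits per subcarrier, recovering much of this loss.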

  1. Noise Estimation and Adaptive Encoding for Asymmetric Quantum Error Correcting Codes

    NASA Astrophysics Data System (ADS)

    Florjanczyk, Jan; Brun, Todd; Center for Quantum Information Science and Technology Team

    We present a technique that improves the performance of asymmetric quantum error correcting codes in the presence of biased qubit noise channels. Our study is motivated by considering what useful information can be learned from the statistics of syndrome measurements in stabilizer quantum error correcting codes (QECCs). We consider the case of a qubit dephasing channel whose dephasing axis is unknown and time-varying. We are able to estimate the dephasing angle from the statistics of the standard syndrome measurements used in stabilizer QECCs. We use this estimate to rotate the computational basis of the code in such a way that the most likely type of error is covered by the highest distance of the asymmetric code. In particular, we use the [[15,1,3]] shortened Reed-Muller code, which can correct one phase-flip error but up to three bit-flip errors. In our simulations, we tune the computational basis to match the estimated dephasing axis, which in turn leads to a decrease in the probability of a phase-flip error. With a sufficiently accurate estimate of the dephasing axis, our memory's effective error is dominated by the much lower probability of four bit-flips. ARO MURI Grant No. W911NF-11-1-0268.

  2. SWAT system performance predictions. Project report. [SWAT (Short-Wavelength Adaptive Techniques)

    SciTech Connect

    Parenti, R.R.; Sasiela, R.J.

    1993-03-10

    In the next phase of Lincoln Laboratory's SWAT (Short-Wavelength Adaptive Techniques) program, the performance of a 241-actuator adaptive-optics system will be measured using a variety of synthetic-beacon geometries. As an aid in this experimental investigation, a detailed set of theoretical predictions has also been assembled. The computational tools that have been applied in this study include a numerical approach in which Monte-Carlo ray-trace simulations of accumulated phase error are developed, and an analytical analysis of the expected system behavior. This report describes the basis of these two computational techniques and compares their estimates of overall system performance. Although their regions of applicability tend to be complementary rather than redundant, good agreement is usually obtained when both sets of results can be derived for the same engagement scenario. Keywords: adaptive optics, phase conjugation, atmospheric turbulence, synthetic beacon, laser guide star.

  3. An insula-frontostriatal network mediates flexible cognitive control by adaptively predicting changing control demands

    PubMed Central

    Jiang, Jiefeng; Beck, Jeffrey; Heller, Katherine; Egner, Tobias

    2015-01-01

    The anterior cingulate and lateral prefrontal cortices have been implicated in implementing context-appropriate attentional control, but the learning mechanisms underlying our ability to flexibly adapt the control settings to changing environments remain poorly understood. Here we show that human adjustments to varying control demands are captured by a reinforcement learner with a flexible, volatility-driven learning rate. Using model-based functional magnetic resonance imaging, we demonstrate that volatility of control demand is estimated by the anterior insula, which in turn optimizes the prediction of forthcoming demand in the caudate nucleus. The caudate's prediction of control demand subsequently guides the implementation of proactive and reactive attentional control in dorsal anterior cingulate and dorsolateral prefrontal cortices. These data enhance our understanding of the neuro-computational mechanisms of adaptive behaviour by connecting the classic cingulate-prefrontal cognitive control network to a subcortical control-learning mechanism that infers future demands by flexibly integrating remote and recent past experiences. PMID:26391305
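
    The flexible, volatility-driven learning rate can be caricatured with a simple delta-rule learner (our illustrative assumption, not the paper's fitted model): a running estimate of unsigned prediction error stands in for volatility and scales the learning rate, so re-learning speeds up when control demand becomes unstable.

```python
# Delta-rule learner with a volatility-driven learning rate (illustrative
# assumption, not the paper's fitted model): a running estimate of unsigned
# prediction error stands in for volatility and scales the learning rate.
def volatile_rw(outcomes, base_lr=0.1, vol_lr=0.1):
    v, volatility = 0.5, 0.0
    estimates = []
    for o in outcomes:
        pe = o - v                                     # prediction error
        volatility += vol_lr * (abs(pe) - volatility)  # running |PE| estimate
        lr = min(1.0, base_lr + volatility)            # volatility boosts lr
        v += lr * pe
        estimates.append(v)
    return estimates

# Control demand flips from mostly-low to mostly-high halfway through.
est = volatile_rw([0] * 20 + [1] * 20)
print(round(est[19], 2), round(est[-1], 2))
```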

  4. Follow you, follow me: continuous mutual prediction and adaptation in joint tapping.

    PubMed

    Konvalinka, Ivana; Vuust, Peter; Roepstorff, Andreas; Frith, Chris D

    2010-11-01

    To study the mechanisms of coordination that are fundamental to successful interactions we carried out a joint finger tapping experiment in which pairs of participants were asked to maintain a given beat while synchronizing to an auditory signal coming from the other person or the computer. When both were hearing each other, the pair became a coupled, mutually and continuously adaptive unit of two "hyper-followers", with their intertap intervals (ITIs) oscillating in opposite directions on a tap-to-tap basis. There was thus no evidence for the emergence of a leader-follower strategy. We also found that dyads were equally good at synchronizing with the irregular, but responsive other as with the predictable, unresponsive computer. However, they performed worse when the "other" was both irregular and unresponsive. We thus propose that interpersonal coordination is facilitated by the mutual abilities to (a) predict the other's subsequent action and (b) adapt accordingly on a millisecond timescale. PMID:20694920
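
    The mutually adaptive "hyper-follower" dynamics can be reproduced in a toy model (our illustration, not the paper's analysis): if each tapper corrects its next intertap interval (ITI) toward the other's last ITI, the two ITIs oscillate in opposite directions on a tap-to-tap basis.

```python
# Toy model of two mutually adaptive tappers ("hyper-followers"): each
# corrects its next intertap interval (ITI) toward the other's last ITI.
def mutual_tapping(iti_a, iti_b, alpha=0.8, taps=10):
    a_hist, b_hist = [iti_a], [iti_b]
    for _ in range(taps):
        a, b = a_hist[-1], b_hist[-1]
        a_hist.append(a + alpha * (b - a))  # A follows B
        b_hist.append(b + alpha * (a - b))  # B follows A
    return a_hist, b_hist

a, b = mutual_tapping(0.48, 0.52)
# Tap-to-tap ITI changes of the two tappers have opposite signs:
deltas = [(a[i + 1] - a[i]) * (b[i + 1] - b[i]) for i in range(len(a) - 1)]
print(all(d < 0 for d in deltas))  # True: ITIs oscillate in opposite directions
```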

  5. LPTA: Location Predictive and Time Adaptive Data Gathering Scheme with Mobile Sink for Wireless Sensor Networks

    PubMed Central

    Rodrigues, Joel J. P. C.

    2014-01-01

    This paper exploits sink mobility to prolong the lifetime of sensor networks while keeping the data transmission delay relatively low. A location predictive and time adaptive data gathering scheme is proposed. In this paper, we introduce a sink location prediction principle based on loose time synchronization and derive the time-location formulas of the mobile sink. Using local clocks and the time-location formulas of the mobile sink, nodes in the network are able to calculate the current location of the mobile sink accurately and route data packets toward it in a timely manner by multihop relay. Considering that the data packets generated in different areas may differ greatly in volume, an adaptive dwelling time adjustment method is also proposed to balance energy consumption among nodes in the network. Simulation results show that our data gathering scheme enables data routing with lower transmission delay and balances energy consumption among nodes. PMID:25302327
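
The abstract does not give the actual time-location formulas, so the sketch below assumes a concrete trajectory: the sink patrols the perimeter of a square field at constant speed, which lets any loosely synchronized node map its local clock to a predicted sink position.

```python
# Assumed trajectory (not from the paper): the sink patrols the perimeter
# of a square field of side `side` at constant `speed`, so a loosely
# synchronized node can map its local clock t to a sink position.
def sink_location(t, side=100.0, speed=5.0):
    d = (t * speed) % (4 * side)       # distance travelled along perimeter
    if d < side:
        return (d, 0.0)                # bottom edge, moving right
    if d < 2 * side:
        return (side, d - side)        # right edge, moving up
    if d < 3 * side:
        return (side - (d - 2 * side), side)   # top edge, moving left
    return (0.0, side - (d - 3 * side))        # left edge, moving down
```

Under such a formula, a node routes packets toward the position predicted for the packet's expected arrival time instead of flooding queries for the sink's whereabouts.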

  6. DEMOCRITUS: An adaptive particle in cell (PIC) code for object-plasma interactions

    NASA Astrophysics Data System (ADS)

    Lapenta, Giovanni

    2011-06-01

    A new method for the simulation of plasma-material interactions is presented. The method is based on the particle in cell technique for the description of the plasma and on the immersed boundary method for the description of the interactions between materials and plasma particles. Local particle-number adaptation and grid adaptation are used to reduce the truncation error and the noise of the simulations, increasing the accuracy per unit cost. In the present work, the computational method is verified against known results. Finally, the simulation method is applied to a number of specific examples of practical scientific and engineering interest.

  7. Feasibility of using adaptive logic networks to predict compressor unit failure

    SciTech Connect

    Armstrong, W.W.; Chungying Chu; Thomas, M.M.

    1995-12-31

    In this feasibility study, an adaptive logic network (ALN) was trained to predict failures of turbine-driven compressor units using a large database of measurements. No expert knowledge about compressor systems was involved. The predictions used only the statistical properties of the measurements and the indications of failure types. A fuzzy set was used to model measurements typical of normal operation. It was constrained by a requirement imposed during ALN training, that it should have a shape similar to a Gaussian density, more precisely, that its logarithm should be convex-up. Initial results obtained using this approach to knowledge discovery in the database were encouraging.

  8. Ensemble framework based real-time respiratory motion prediction for adaptive radiotherapy applications.

    PubMed

    Tatinati, Sivanagaraja; Nazarpour, Kianoush; Tech Ang, Wei; Veluvolu, Kalyana C

    2016-08-01

    Successful treatment of tumors with motion-adaptive radiotherapy requires accurate prediction of respiratory motion, ideally with a prediction horizon larger than the latency of the radiotherapy system. Accurate prediction of respiratory motion is however a non-trivial task due to the presence of irregularities and intra-trace variabilities, such as baseline drift and temporal changes in fundamental frequency pattern. In this paper, to enhance the accuracy of respiratory motion prediction, we propose a stacked regression ensemble framework that integrates heterogeneous respiratory motion prediction algorithms. We further address two crucial issues for developing a successful ensemble framework: (1) selection of appropriate prediction methods to ensemble (level-0 methods) among the best existing prediction methods; and (2) finding a suitable generalization approach that can successfully exploit the relative advantages of the chosen level-0 methods. The efficacy of the developed ensemble framework is assessed with real respiratory motion traces acquired from 31 patients undergoing treatment. Results show that the developed ensemble framework improves the prediction performance significantly compared to the best existing methods. PMID:27238760
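
A minimal stacking sketch under assumed choices: two toy level-0 predictors (persistence and linear extrapolation, which are not the paper's level-0 methods) and a least-squares level-1 generalizer fitted on their predictions of a synthetic respiratory trace.

```python
import numpy as np

# Minimal stacking sketch with assumed level-0 choices (persistence and
# linear extrapolation -- not the paper's methods) and a least-squares
# level-1 generalizer, applied to a synthetic respiratory trace.
def persistence(x):
    return x[-1]                        # level-0 #1: last observed value

def linear_extrap(x):
    return x[-1] + (x[-1] - x[-2])      # level-0 #2: extend the last slope

trace = np.sin(2 * np.pi * np.arange(300) / 40)   # breathing-like signal

window = 5
X, y = [], []
for i in range(window, len(trace) - 1):
    seg = trace[i - window:i + 1]
    X.append([persistence(seg), linear_extrap(seg)])  # level-0 predictions
    y.append(trace[i + 1])
X, y = np.array(X), np.array(y)

w, *_ = np.linalg.lstsq(X, y, rcond=None)   # level-1 combiner weights
err_stacked = float(np.mean((X @ w - y) ** 2))
err_persist = float(np.mean((X[:, 0] - y) ** 2))
```

Because the level-1 weights can always reproduce any single level-0 method, the stacked error is never worse than the best constituent on the fitting data; the gain on held-out traces is what the paper evaluates.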

  9. Space Weather Prediction Error Bounding for Real-Time Ionospheric Threat Adaptation of GNSS Augmentation Systems

    NASA Astrophysics Data System (ADS)

    Lee, J.; Yoon, M.; Lee, J.

    2014-12-01

    Current Global Navigation Satellite Systems (GNSS) augmentation systems attempt to account for all possible ionospheric events in their computation of worst-case error corrections. This conservatism can be mitigated by subdividing anomalous conditions and using different ionospheric threat-model bounds for each class. A new concept of 'real-time ionospheric threat adaptation', which adjusts the threat model in real time instead of always using the same 'worst-case' model, was introduced in my previous research. The concept utilizes predicted values of space weather indices to select the corresponding threat model, based on the pre-defined worst-case threat expressed as a function of space weather indices. Since space weather prediction is not fully reliable, prediction errors must be bounded to the level of integrity required by the system being supported. The previous research bounded prediction errors using the disturbance storm time (Dst) index: the distribution of Dst prediction error over 15 years of data was bounded by applying 'inflated-probability density function (pdf) Gaussian bounding'. Since the error distribution has thick, non-Gaussian tails, statistical distributions that describe heavy tails with less conservatism must be investigated to improve system performance. This paper suggests two approaches for improving space weather prediction error bounding. First, we suggest fitting the error distribution with different statistical models, such as the Laplacian distribution, which has fat tails, and the folded Gaussian cumulative distribution function (cdf). The second approach is to bound the error distribution after segregating the data by the overall level of solar activity. Bounding errors using only solar-minimum-period data involves less uncertainty, and it may allow the use of the 'solar cycle prediction' provided by NASA when implementing real-time threat adaptation.
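
The inflated-pdf Gaussian bounding step can be sketched as follows, under simplifying assumptions of my own (zero-mean errors, a fixed grid of thresholds, and synthetic Laplacian-distributed residuals standing in for Dst prediction errors): the standard deviation is inflated until the Gaussian tail over-bounds the empirical exceedance probability at every checked threshold.

```python
import math
import random

# Sketch of "inflated-pdf Gaussian bounding" under simplifying assumptions
# (zero-mean errors, a fixed threshold grid, synthetic Laplacian residuals
# standing in for Dst prediction errors -- none of this is from the paper).
def gauss_tail(x, sigma):
    """P(|e| > x) for a zero-mean Gaussian with standard deviation sigma."""
    return math.erfc(x / (sigma * math.sqrt(2)))

def inflation_factor(errors, sigma0, max_k=12.0, step=0.05):
    """Smallest sigma multiplier whose Gaussian tail bounds the empirical tail."""
    n = len(errors)
    thresholds = [0.5 * j * sigma0 for j in range(1, 9)]
    emp = [sum(abs(e) > x for e in errors) / n for x in thresholds]
    k = 1.0
    while k <= max_k:
        if all(gauss_tail(x, k * sigma0) >= p for x, p in zip(thresholds, emp)):
            return k
        k += step
    return None

random.seed(0)
# heavy-tailed (Laplacian) errors, mimicking the fat tails of Dst residuals
errs = [random.expovariate(1.0) * random.choice([-1, 1]) for _ in range(5000)]
sigma0 = (sum(e * e for e in errs) / len(errs)) ** 0.5
k = inflation_factor(errs, sigma0)
```

The fat Laplacian tails force a sigma inflation well above 1, which is exactly the conservatism the paper's Laplacian and folded-Gaussian-cdf fits aim to reduce.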

  10. Spike-Threshold Adaptation Predicted by Membrane Potential Dynamics In Vivo

    PubMed Central

    Fontaine, Bertrand; Peña, José Luis; Brette, Romain

    2014-01-01

    Neurons encode information in sequences of spikes, which are triggered when their membrane potential crosses a threshold. In vivo, the spiking threshold displays large variability suggesting that threshold dynamics have a profound influence on how the combined input of a neuron is encoded in the spiking. Threshold variability could be explained by adaptation to the membrane potential. However, it could also be the case that most threshold variability reflects noise and processes other than threshold adaptation. Here, we investigated threshold variation in auditory neurons responses recorded in vivo in barn owls. We found that spike threshold is quantitatively predicted by a model in which the threshold adapts, tracking the membrane potential at a short timescale. As a result, in these neurons, slow voltage fluctuations do not contribute to spiking because they are filtered by threshold adaptation. More importantly, these neurons can only respond to input spikes arriving together on a millisecond timescale. These results demonstrate that fast adaptation to the membrane potential captures spike threshold variability in vivo. PMID:24722397
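
The adaptive-threshold idea can be illustrated with a toy discrete-time model (parameters are arbitrary, not fitted to the owl recordings): the threshold relaxes toward a linear function of the membrane potential on a short timescale, so a slow depolarizing ramp never crosses threshold while a fast step of the same amplitude does.

```python
# Toy discrete-time version of threshold adaptation (parameters are
# arbitrary, not fitted to the owl recordings): the threshold relaxes toward
# a linear function of the membrane potential on a short timescale.
def run(vs, dt=0.1, tau=5.0, theta0=-50.0, v_rest=-70.0, gain=0.8):
    theta, spikes = theta0, []
    for i, v in enumerate(vs):
        target = theta0 + gain * (v - v_rest)  # threshold tracks V
        theta += dt / tau * (target - theta)   # short adaptation timescale
        if v >= theta:
            spikes.append(i)                   # spike when V crosses theta
    return spikes

# 100 ms at dt = 0.1 ms: a slow 25 mV ramp vs. a fast 25 mV step (2 ms)
slow = [-70 + 25 * t / 1000 for t in range(1000)]
fast = [-70 + (25 if 500 <= t < 520 else 0) for t in range(1000)]
slow_spikes, fast_spikes = run(slow), run(fast)
```

The slow ramp is filtered out because the threshold keeps pace with it, while the fast step outruns the threshold, matching the paper's point that only inputs arriving together on a millisecond timescale drive spiking.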

  11. How do different aspects of self-regulation predict successful adaptation to school?

    PubMed

    Neuenschwander, Regula; Röthlisberger, Marianne; Cimeli, Patrizia; Roebers, Claudia M

    2012-11-01

    Self-regulation plays an important role in successful adaptation to preschool and school contexts as well as in later academic achievement. The current study relates different aspects of self-regulation such as temperamental effortful control and executive functions (updating, inhibition, and shifting) to different aspects of adaptation to school such as learning-related behavior, school grades, and performance in standardized achievement tests. The relationship between executive functions/effortful control and academic achievement has been established in previous studies; however, little is known about their unique contributions to different aspects of adaptation to school and the interplay of these factors in young school children. Results of a 1-year longitudinal study (N=459) revealed that unique contributions of effortful control (parental report) to school grades were fully mediated by children's learning-related behavior. On the other hand, the unique contributions of executive functions (performance on tasks) to school grades were only partially mediated by children's learning-related behavior. Moreover, executive functions predicted performance in standardized achievement tests exclusively, with comparable predictive power for mathematical and reading/writing skills. Controlling for fluid intelligence did not change the pattern of prediction substantially, and fluid intelligence did not explain any variance above that of the two included aspects of self-regulation. Although effortful control and executive functions were not significantly related to each other, both aspects of self-regulation were shown to be important for fostering early learning and good classroom adjustment in children around transition to school. PMID:22920433

  12. Prediction-based manufacturing center self-adaptive demand side energy optimization in cyber physical systems

    NASA Astrophysics Data System (ADS)

    Sun, Xinyao; Wang, Xue; Wu, Jiangwei; Liu, Youda

    2014-05-01

    Cyber physical systems (CPS) have recently emerged as a new technology which can provide promising approaches to demand side management (DSM), an important capability in industrial power systems. Meanwhile, the manufacturing center is a typical industrial power subsystem with dozens of high energy consumption devices which have complex physical dynamics. DSM, integrated with CPS, is an effective methodology for solving energy optimization problems in the manufacturing center. This paper presents a prediction-based manufacturing center self-adaptive energy optimization method for demand side management in cyber physical systems. To gain prior knowledge of DSM operating results, a sparse Bayesian learning based componential forecasting method is introduced to predict 24-hour electric load levels for specific industrial areas in China. From these data, a pricing strategy is designed based on short-term load forecasting results. To minimize total energy costs while guaranteeing manufacturing center service quality, an adaptive demand side energy optimization algorithm is presented. The proposed scheme is tested in a machining center energy optimization experiment. An AMI sensing system is then used to measure the demand side energy consumption of the manufacturing center. Based on the data collected from the sensing system, the load prediction-based energy optimization scheme is implemented. By employing both the PSO and the CPSO methods, the problem of DSM in the manufacturing center is solved. The results of the experiment show that the self-adaptive CPSO energy optimization method improves optimization by 5% compared with the traditional PSO method.
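
The paper's CPSO variant is not specified in the abstract; a generic particle swarm optimizer, shown below on a toy quadratic "energy cost" with a known optimum, illustrates the baseline PSO step (inertia plus personal-best and global-best attraction).

```python
import random

# Generic particle swarm optimization sketch (the paper's CPSO variant is
# not specified in the abstract); minimizes a toy quadratic "energy cost".
def pso(cost, dim=2, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                   # personal bests
    pbest_val = [cost(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = cost(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# hypothetical quadratic energy-cost surface with optimum at (1, 2)
best, val = pso(lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2)
```

In the paper's setting the cost function would encode the manufacturing center's total energy cost under the forecast-driven pricing strategy rather than this placeholder quadratic.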

  13. Discrete coding of stimulus value, reward expectation, and reward prediction error in the dorsal striatum.

    PubMed

    Oyama, Kei; Tateyama, Yukina; Hernádi, István; Tobler, Philippe N; Iijima, Toshio; Tsutsui, Ken-Ichiro

    2015-11-01

    To investigate how the striatum integrates sensory information with reward information for behavioral guidance, we recorded single-unit activity in the dorsal striatum of head-fixed rats participating in a probabilistic Pavlovian conditioning task with auditory conditioned stimuli (CSs) in which reward probability was fixed for each CS but parametrically varied across CSs. We found that the activity of many neurons was linearly correlated with the reward probability indicated by the CSs. The recorded neurons could be classified according to their firing patterns into functional subtypes coding reward probability in different forms such as stimulus value, reward expectation, and reward prediction error. These results suggest that several functional subgroups of dorsal striatal neurons represent different kinds of information formed through extensive prior exposure to CS-reward contingencies. PMID:26378201
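
The three coded quantities map onto the standard Rescorla-Wagner formulation (textbook definitions, not quantities fitted to the recordings): the learned stimulus value converges to the CS's reward probability, and the trial-by-trial reward prediction error is the obtained reward minus that value.

```python
import random

# Standard Rescorla-Wagner learner (textbook definitions, not quantities
# fitted to the recordings): the learned stimulus value tracks the CS's
# reward probability, and each trial's reward prediction error (RPE) is the
# obtained reward minus the current value.
def rescorla_wagner(rewards, alpha=0.1):
    v, values, rpes = 0.0, [], []
    for r in rewards:
        delta = r - v            # reward prediction error
        v += alpha * delta       # value moves toward reward probability
        values.append(v)
        rpes.append(delta)
    return values, rpes

rng = random.Random(0)
rewards = [1 if rng.random() < 0.75 else 0 for _ in range(500)]  # p = 0.75 CS
values, rpes = rescorla_wagner(rewards)
```

Units coding "stimulus value" would track `values`, "reward expectation" would be active between CS and outcome at the level of `values`, and RPE units would fire with `rpes` at outcome time.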

  14. Predictive coding and multisensory integration: an attentional account of the multisensory mind

    PubMed Central

    Talsma, Durk

    2015-01-01

    Multisensory integration involves a host of different cognitive processes, occurring at different stages of sensory processing. Here I argue that, despite recent insights suggesting that multisensory interactions can occur at very early latencies, the actual integration of individual sensory traces into an internally consistent mental representation is dependent on both top-down and bottom-up processes. Moreover, I argue that this integration is not limited to just sensory inputs, but that internal cognitive processes also shape the resulting mental representation. Studies showing that memory recall is affected by the initial multisensory context in which the stimuli were presented will be discussed, as well as several studies showing that mental imagery can affect multisensory illusions. This empirical evidence will be discussed from a predictive coding perspective, in which a central top-down attentional process is proposed to play a central role in coordinating the integration of all these inputs into a coherent mental representation. PMID:25859192

  15. Prediction of explosive cylinder tests using equations of state from the PANDA code

    SciTech Connect

    Kerley, G.I.; Christian-Frear, T.L.

    1993-09-28

    The PANDA code is used to construct tabular equations of state (EOS) for the detonation products of 24 explosives having CHNO compositions. These EOS, together with a reactive burn model, are used in numerical hydrocode calculations of cylinder tests. The predicted detonation properties and cylinder wall velocities are found to give very good agreement with experimental data. Calculations of flat plate acceleration tests for the HMX-based explosive LX14 are also made and shown to agree well with the measurements. The effects of the reaction zone on both the cylinder and flat plate tests are discussed. For TATB-based explosives, the differences between experiment and theory are consistently larger than for other compositions and may be due to nonideal (finite diameter) behavior.

  16. A computer code (SKINTEMP) for predicting transient missile and aircraft heat transfer characteristics

    NASA Astrophysics Data System (ADS)

    Cummings, Mary L.

    1994-09-01

    A FORTRAN computer code (SKINTEMP) has been developed to calculate transient missile/aircraft aerodynamic heating parameters utilizing basic flight parameters such as altitude, Mach number, and angle of attack. The insulated skin temperature of a vehicle surface on either the fuselage (axisymmetric body) or wing (two-dimensional body) is computed from a basic heat balance relationship throughout the entire spectrum (subsonic, transonic, supersonic, hypersonic) of flight. This calculation method employs a simple finite difference procedure which considers radiation, forced convection, and non-reactive chemistry. Surface pressure estimates are based on a modified Newtonian flow model. Eckert's reference temperature method is used as the forced convection heat transfer model. SKINTEMP predictions are compared with a limited number of test cases. SKINTEMP was developed as a tool to enhance the conceptual design process of high speed missiles and aircraft. Recommendations are made for possible future development of SKINTEMP to further support the design process.
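
The heat-balance update at the core of such a code can be sketched with an explicit finite-difference step (all coefficients below are made-up placeholders, not SKINTEMP's models): forced convection drives the wall toward the recovery temperature while the surface reradiates.

```python
# Simplified insulated-skin heat balance in the spirit of SKINTEMP (all
# coefficients are made-up placeholders, not the code's models): forced
# convection drives the wall toward the recovery temperature while the
# surface reradiates; the balance is stepped with an explicit finite
# difference.
def skin_temp(t_rec, h=50.0, eps=0.8, rho_c_th=8000.0, t0=300.0,
              dt=0.1, steps=6000):
    sigma = 5.67e-8                # Stefan-Boltzmann constant, W/(m^2 K^4)
    tw = t0                        # wall temperature, K
    for _ in range(steps):
        q = h * (t_rec - tw) - eps * sigma * tw ** 4   # net flux, W/m^2
        tw += dt * q / rho_c_th    # rho*c*thickness = areal heat capacity
    return tw

tw = skin_temp(t_rec=600.0)        # radiation holds tw below t_rec
```

In the real code the convection coefficient would come from Eckert's reference temperature method and the recovery temperature from the trajectory's Mach number and altitude, rather than being passed in as constants.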

  17. Prognostic and predictive values of long non-coding RNA LINC00472 in breast cancer.

    PubMed

    Shen, Yi; Katsaros, Dionyssios; Loo, Lenora W M; Hernandez, Brenda Y; Chong, Clayton; Canuto, Emilie Marion; Biglia, Nicoletta; Lu, Lingeng; Risch, Harvey; Chu, Wen-Ming; Yu, Herbert

    2015-04-20

    LINC00472 is a novel long intergenic non-coding RNA. We evaluated LINC00472 expression in breast tumor samples using RT-qPCR, performed a meta-analysis of over 20 microarray datasets from the Gene Expression Omnibus (GEO) database, and investigated the effect of LINC00472 expression on cell proliferation and migration in breast cancer cells transfected with a LINC00472-expressing vector. Our qPCR results showed that high LINC00472 expression was associated with less aggressive breast tumors and more favorable disease outcomes. Patients with high expression of LINC00472 had significantly reduced risk of relapse and death compared to those with low expression. Patients with high LINC00472 expression also had better responses to adjuvant chemo- or hormonal therapy than did patients with low expression. Results of meta-analysis on multiple studies from the GEO database were in agreement with the findings of our study. High LINC00472 was also associated with favorable molecular subtypes, Luminal A or normal-like tumors. Cell culture experiments showed that up-regulation of LINC00472 expression could suppress breast cancer cell proliferation and migration. Collectively, our clinical and in vitro studies suggest that LINC00472 is a tumor suppressor in breast cancer. Evaluating this long non-coding RNA in breast tumors may have prognostic and predictive value in the clinical management of breast cancer. PMID:25865225

  18. Prognostic and predictive values of long non-coding RNA LINC00472 in breast cancer

    PubMed Central

    Shen, Yi; Katsaros, Dionyssios; Loo, Lenora W. M.; Hernandez, Brenda Y.; Chong, Clayton; Canuto, Emilie Marion; Biglia, Nicoletta; Lu, Lingeng; Risch, Harvey; Chu, Wen-Ming; Yu, Herbert

    2015-01-01

    LINC00472 is a novel long intergenic non-coding RNA. We evaluated LINC00472 expression in breast tumor samples using RT-qPCR, performed a meta-analysis of over 20 microarray datasets from the Gene Expression Omnibus (GEO) database, and investigated the effect of LINC00472 expression on cell proliferation and migration in breast cancer cells transfected with a LINC00472-expressing vector. Our qPCR results showed that high LINC00472 expression was associated with less aggressive breast tumors and more favorable disease outcomes. Patients with high expression of LINC00472 had significantly reduced risk of relapse and death compared to those with low expression. Patients with high LINC00472 expression also had better responses to adjuvant chemo- or hormonal therapy than did patients with low expression. Results of meta-analysis on multiple studies from the GEO database were in agreement with the findings of our study. High LINC00472 was also associated with favorable molecular subtypes, Luminal A or normal-like tumors. Cell culture experiments showed that up-regulation of LINC00472 expression could suppress breast cancer cell proliferation and migration. Collectively, our clinical and in vitro studies suggest that LINC00472 is a tumor suppressor in breast cancer. Evaluating this long non-coding RNA in breast tumors may have prognostic and predictive value in the clinical management of breast cancer. PMID:25865225

  19. Prediction and characterization of small non-coding RNAs related to secondary metabolites in Saccharopolyspora erythraea.

    PubMed

    Liu, Wei-Bing; Shi, Yang; Yao, Li-Li; Zhou, Ying; Ye, Bang-Ce

    2013-01-01

    Saccharopolyspora erythraea produces a large number of secondary metabolites with biological activities, including erythromycin. Elucidation of the mechanisms through which the production of these secondary metabolites is regulated may help to identify new strategies for improved biosynthesis of erythromycin. In this paper, we describe the systematic prediction and analysis of small non-coding RNAs (sRNAs) in S. erythraea, with the aim to elucidate sRNA-mediated regulation of secondary metabolite biosynthesis. In silico and deep-sequencing technologies were applied to predict sRNAs in S. erythraea. Six hundred and forty-seven potential sRNA loci were identified, of which 382 cis-encoded antisense RNA are complementary to protein-coding regions and 265 predicted transcripts are located in intergenic regions. Six candidate sRNAs (sernc292, sernc293, sernc350, sernc351, sernc361, and sernc389) belong to four gene clusters (tpc3, pke, pks6, and nrps5) that are involved in secondary metabolite biosynthesis. Deep-sequencing data showed that the expression of all sRNAs in the strain HL3168 E3 (E3) was higher than that in NRRL23338 (M), except for sernc292 and sernc361 expression. The relative expression of six sRNAs in strain M and E3 were validated by qRT-PCR at three different time points (24, 48, and 72 h). The results showed that, at each time point, the transcription levels of sernc293, sernc350, sernc351, and sernc389 were higher in E3 than in M, with the largest difference observed at 72 h, whereas no signals for sernc292 and sernc361 were detected. sernc293, sernc350, sernc351, and sernc389 probably regulate iron transport, terpene metabolism, geosmin synthesis, and polyketide biosynthesis, respectively. The major significance of this study is the successful prediction and identification of sRNAs in genomic regions close to the secondary metabolism-related genes in S. erythraea. A better understanding of the sRNA-target interaction would help to elucidate the

  20. Contribution to the Prediction of the Fold Code: Application to Immunoglobulin and Flavodoxin Cases

    PubMed Central

    Banach, Mateusz; Prudhomme, Nicolas; Carpentier, Mathilde; Duprat, Elodie; Papandreou, Nikolaos; Kalinowska, Barbara; Chomilier, Jacques; Roterman, Irena

    2015-01-01

    Background: The folding nucleus of globular proteins forms through the mutual interaction of a group of hydrophobic amino acids whose close contacts allow subsequent formation and stability of the 3D structure. These early steps can be predicted by simulation of the folding process through a Monte Carlo (MC) coarse grain model in a discrete space. We previously defined MIRs (Most Interacting Residues), as the set of residues presenting a large number of non-covalent neighbour interactions during such simulation. MIRs are good candidates to define the minimal number of residues giving rise to a given fold instead of another one, although their proportion is rather high, typically [15-20]% of the sequences. Having in mind experiments with two sequences of very high levels of sequence identity (up to 90%) but different folds, we combined the MIR method, which takes sequence as single input, with the "fuzzy oil drop" (FOD) model that requires a 3D structure, in order to estimate the residues coding for the fold. FOD assumes that a globular protein follows an idealised 3D Gaussian distribution of hydrophobicity density, with the maximum in the centre and minima at the surface of the "drop". If the actual local density of hydrophobicity around a given amino acid is as high as the ideal one, then this amino acid is assigned to the core of the globular protein, and it is assumed to follow the FOD model. Therefore one obtains a distribution of the amino acids of a protein according to their agreement or rejection with the FOD model. Results: We compared and combined MIR and FOD methods to define the minimal nucleus, or keystone, of two populated folds: immunoglobulin-like (Ig) and flavodoxins (Flav). The combination of these two approaches defines some positions both predicted as a MIR and assigned as accordant with the FOD model. It is shown here that for these two folds, the intersection of the predicted sets of residues significantly differs from random selection
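
A toy 1D version of the FOD accordance test (hypothetical hydrophobicity values; the real model uses a 3D Gaussian over the protein body): residues whose normalized observed hydrophobicity density stays within a tolerance of the idealized Gaussian profile are marked accordant.

```python
import math

# Toy 1D version of the "fuzzy oil drop" accordance test (hypothetical
# hydrophobicity values; the real model uses a 3D Gaussian over the protein
# body): a residue is accordant if its normalized observed hydrophobicity
# density stays within `tol` of the idealized Gaussian profile.
def fod_accordance(positions, observed, sigma, tol=0.1):
    ideal = [math.exp(-(x ** 2) / (2 * sigma ** 2)) for x in positions]
    si, so = sum(ideal), sum(observed)
    return [abs(i / si - o / so) < tol for i, o in zip(ideal, observed)]

pos = [-2, -1, 0, 1, 2]                 # distance from the drop's centre
obs = [0.05, 0.6, 1.0, 0.6, 0.5]        # last residue: hydrophobic at surface
acc = fod_accordance(pos, obs, sigma=1.0)
```

The misplaced hydrophobic residue at the surface is flagged as discordant; intersecting such discordance/accordance labels with MIR predictions is the paper's route to the fold keystone.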

  1. A 3D-CFD code for accurate prediction of fluid flows and fluid forces in seals

    NASA Astrophysics Data System (ADS)

    Athavale, M. M.; Przekwas, A. J.; Hendricks, R. C.

    1994-01-01

    Current and future turbomachinery requires advanced seal configurations to control leakage, inhibit mixing of incompatible fluids and to control the rotodynamic response. In recognition of a deficiency in the existing predictive methodology for seals, a seven year effort was established in 1990 by NASA's Office of Aeronautics Exploration and Technology, under the Earth-to-Orbit Propulsion program, to develop validated Computational Fluid Dynamics (CFD) concepts, codes and analyses for seals. The effort will provide NASA and the U.S. Aerospace Industry with advanced CFD scientific codes and industrial codes for analyzing and designing turbomachinery seals. An advanced 3D CFD cylindrical seal code has been developed, incorporating state-of-the-art computational methodology for flow analysis in straight, tapered and stepped seals. Relevant computational features of the code include: stationary/rotating coordinates, cylindrical and general Body Fitted Coordinates (BFC) systems, high order differencing schemes, colocated variable arrangement, advanced turbulence models, incompressible/compressible flows, and moving grids. This paper presents the current status of code development, code demonstration for predicting rotordynamic coefficients, numerical parametric study of entrance loss coefficients for generic annular seals, and plans for code extensions to labyrinth, damping, and other seal configurations.

  2. Adaptive coding of orofacial and speech actions in motor and somatosensory spaces with and without overt motor behavior.

    PubMed

    Sato, Marc; Vilain, Coriandre; Lamalle, Laurent; Grabski, Krystyna

    2015-02-01

    Studies of speech motor control suggest that articulatory and phonemic goals are defined in multidimensional motor, somatosensory, and auditory spaces. To test whether motor simulation might rely on sensory-motor coding common with those for motor execution, we used a repetition suppression (RS) paradigm while measuring neural activity with sparse sampling fMRI during repeated overt and covert orofacial and speech actions. RS refers to the phenomenon that repeated stimuli or motor acts lead to decreased activity in specific neural populations and are associated with enhanced adaptive learning related to the repeated stimulus attributes. Common suppressed neural responses were observed in motor and posterior parietal regions in the achievement of both repeated overt and covert orofacial and speech actions, including the left premotor cortex and inferior frontal gyrus, the superior parietal cortex and adjacent intraparietal sulcus, and the left IC and the SMA. Interestingly, reduced activity of the auditory cortex was observed during overt but not covert speech production, a finding likely reflecting a motor rather than an auditory imagery strategy by the participants. By providing evidence for adaptive changes in premotor and associative somatosensory brain areas, the observed RS suggests online state coding of both orofacial and speech actions in somatosensory and motor spaces with and without motor behavior and sensory feedback. PMID:25203272

  3. On the efficiency of image completion methods for intra prediction in video coding with large block structures

    NASA Astrophysics Data System (ADS)

    Doshkov, Dimitar; Jottrand, Oscar; Wiegand, Thomas; Ndjiki-Nya, Patrick

    2013-02-01

    Intra prediction is a fundamental tool in video coding with hybrid block-based architecture. Recent investigations have shown that one of the most beneficial elements for a higher compression performance in high-resolution videos is the incorporation of larger block structures. Thus in this work, we investigate the performance of novel intra prediction modes based on different image completion techniques in a new video coding scheme with large block structures. Image completion methods exploit the fact that high frequency image regions yield high coding costs when using classical H.264/AVC prediction modes. This problem is tackled by investigating the incorporation of several intra predictors using the concept of Laplace partial differential equation (PDE), Least Square (LS) based linear prediction and the Auto Regressive model. A major aspect of this article is the evaluation of the coding performance in a qualitative (i.e. coding efficiency) manner. Experimental results show significant improvements in compression (up to 7.41 %) by integrating the LS-based linear intra prediction.
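
LS-based linear intra prediction can be sketched in 1D (the coder operates on 2D spatial neighbourhoods of large blocks; the 1D simplification is mine): predictor weights are fit by least squares on already-reconstructed samples and then applied to predict upcoming ones.

```python
import numpy as np

# LS-based linear intra prediction, simplified to a 1D causal predictor
# (the paper works on 2D spatial neighbourhoods of large blocks): weights
# are fit by least squares on already-reconstructed samples, then used to
# predict the not-yet-coded ones.
def ls_predictor(signal, order=2):
    X = np.array([signal[i - order:i] for i in range(order, len(signal))])
    y = signal[order:]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

sig = np.sin(0.3 * np.arange(80))       # smooth, highly predictable texture
w = ls_predictor(sig[:64])              # train on "reconstructed" samples
pred = np.array([sig[i - 2:i] @ w for i in range(64, 80)])
mse = float(np.mean((pred - sig[64:]) ** 2))
```

Because the weights adapt to the local texture, the prediction residual (and hence the coding cost) drops on structured regions where fixed H.264/AVC directional modes do poorly.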

  4. Age-Related Changes in Predictive Capacity Versus Internal Model Adaptability: Electrophysiological Evidence that Individual Differences Outweigh Effects of Age

    PubMed Central

    Bornkessel-Schlesewsky, Ina; Philipp, Markus; Alday, Phillip M.; Kretzschmar, Franziska; Grewe, Tanja; Gumpert, Maike; Schumacher, Petra B.; Schlesewsky, Matthias

    2015-01-01

    Hierarchical predictive coding has been identified as a possible unifying principle of brain function, and recent work in cognitive neuroscience has examined how it may be affected by age-related changes. Using language comprehension as a test case, the present study aimed to dissociate age-related changes in prediction generation versus internal model adaptation following a prediction error. Event-related brain potentials (ERPs) were measured in a group of older adults (60-81 years; n = 40) as they read sentences of the form "The opposite of black is white/yellow/nice." Replicating previous work in young adults, results showed a target-related P300 for the expected antonym ("white"; an effect assumed to reflect a prediction match), and a graded N400 effect for the two incongruous conditions (i.e. a larger N400 amplitude for the incongruous continuation not related to the expected antonym, "nice," versus the incongruous associated condition, "yellow"). These effects were followed by a late positivity, again with a larger amplitude in the incongruous non-associated versus incongruous associated condition. Analyses using linear mixed-effects models showed that the target-related P300 effect and the N400 effect for the incongruous non-associated condition were both modulated by age, thus suggesting that age-related changes affect both prediction generation and model adaptation. However, effects of age were outweighed by the interindividual variability of ERP responses, as reflected in the high proportion of variance captured by the inclusion of by-condition random slopes for participants and items. We thus argue that - at both a neurophysiological and a functional level - the notion of general differences between language processing in young and older adults may only be of limited use, and that future research should seek to better understand the causes of interindividual variability in the ERP responses of older adults and its relation to

  5. Integer-linear-programing optimization in scalable video multicast with adaptive modulation and coding in wireless networks.

    PubMed

    Lee, Dongyul; Lee, Chaewoo

    2014-01-01

    The advancement in wideband wireless network supports real time services such as IPTV and live video streaming. However, because of the sharing nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC in the wireless multicast service is how to assign MCSs and the time resources to each SVC layer in the heterogeneous channel condition. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance under 802.16 m environment. The result shows that our methodology enhances the overall system throughput compared to an existing algorithm. PMID:25276862
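
A toy stand-in for the MCS-assignment problem (all rates, user counts, and the budget below are invented, and exhaustive search replaces a real ILP solver): choose one MCS per SVC layer to maximize served layer-receptions under a shared airtime budget, where a user benefits from a layer only if it can also decode all lower layers.

```python
from itertools import product

# Toy stand-in for the MCS-assignment ILP: all rates, user counts, and the
# airtime budget are invented, and brute force replaces a real ILP solver.
rate = {0: 1.0, 1: 2.0, 2: 4.0}    # bits per slot delivered by MCS m
users = {0: 10, 1: 7, 2: 3}        # receivers whose channel supports MCS m
layer_bits = [4.0, 2.0]            # base layer, enhancement layer (bits)
budget = 4.0                       # airtime slots available per frame

best = None
for assign in product(rate, repeat=len(layer_bits)):
    airtime = sum(b / rate[m] for b, m in zip(layer_bits, assign))
    if airtime > budget:
        continue                    # violates the shared time-resource budget
    served, reach = 0, float("inf")
    for m in assign:                # decoding layer l requires layers 0..l
        reach = min(reach, users[m])
        served += reach
    if best is None or served > best[1]:
        best = (assign, served)     # maximize served layer-receptions
```

The exhaustive loop mirrors the ILP's decision variables (one MCS choice per layer) and constraints (airtime budget, layer dependency); a real deployment would hand the same formulation to an ILP solver instead.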

  6. Integer-Linear-Programing Optimization in Scalable Video Multicast with Adaptive Modulation and Coding in Wireless Networks

    PubMed Central

    Lee, Chaewoo

    2014-01-01

    The advancement in wideband wireless network supports real time services such as IPTV and live video streaming. However, because of the sharing nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC in the wireless multicast service is how to assign MCSs and the time resources to each SVC layer in the heterogeneous channel condition. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance under an 802.16m environment. The result shows that our methodology enhances the overall system throughput compared to an existing algorithm. PMID:25276862

  7. Adaptive neuro-fuzzy and expert systems for power quality analysis and prediction of abnormal operation

    NASA Astrophysics Data System (ADS)

    Ibrahim, Wael Refaat Anis

    The present research involves the development of several fuzzy expert systems for power quality analysis and diagnosis. Intelligent systems for the prediction of abnormal system operation were also developed. The performance of all intelligent modules developed was either enhanced or completely produced through adaptive fuzzy learning techniques. Neuro-fuzzy learning is the main adaptive technique utilized. The work presents a novel approach to the interpretation of power quality from the perspective of the continuous operation of a single system. The research includes an extensive literature review pertaining to the applications of intelligent systems to power quality analysis. Basic definitions and signature events related to power quality are introduced. In addition, detailed discussions of various artificial intelligence paradigms as well as wavelet theory are included. A fuzzy-based intelligent system capable of distinguishing normal from abnormal operation for a given system was developed. Adaptive neuro-fuzzy learning was applied to enhance its performance. A group of fuzzy expert systems that could perform full operational diagnosis were also developed successfully. The developed systems were applied to the operational diagnosis of 3-phase induction motors and rectifier bridges. A novel approach for learning power quality waveforms and trends was developed. The technique, which is adaptive neuro fuzzy-based, learned, compressed, and stored the waveform data. The new technique was successfully tested using a wide variety of power quality signature waveforms, and using real site data. The trend-learning technique was incorporated into a fuzzy expert system that was designed to predict abnormal operation of a monitored system. The intelligent system learns and stores, in compressed format, trends leading to abnormal operation. The system then compares incoming data to the retained trends continuously. If the incoming data matches any of the learned trends, an

  8. A Predictive Model of Fragmentation using Adaptive Mesh Refinement and a Hierarchical Material Model

    SciTech Connect

    Koniges, A E; Masters, N D; Fisher, A C; Anderson, R W; Eder, D C; Benson, D; Kaiser, T B; Gunney, B T; Wang, P; Maddox, B R; Hansen, J F; Kalantar, D H; Dixit, P; Jarmakani, H; Meyers, M A

    2009-03-03

    Fragmentation is a fundamental material process that naturally spans spatial scales from microscopic to macroscopic. We developed a mathematical framework using an innovative combination of hierarchical material modeling (HMM) and adaptive mesh refinement (AMR) to connect the continuum to microstructural regimes. This framework has been implemented in a new multi-physics, multi-scale, 3D simulation code, NIF ALE-AMR. New multi-material volume fraction and interface reconstruction algorithms were developed for this new code, which is leading the world effort in hydrodynamic simulations that combine AMR with ALE (Arbitrary Lagrangian-Eulerian) techniques. The interface reconstruction algorithm is also used to produce fragments following material failure. In general, the material strength and failure models have history vector components that must be advected along with other properties of the mesh during remap stage of the ALE hydrodynamics. The fragmentation models are validated against an electromagnetically driven expanding ring experiment and dedicated laser-based fragmentation experiments conducted at the Jupiter Laser Facility. As part of the exit plan, the NIF ALE-AMR code was applied to a number of fragmentation problems of interest to the National Ignition Facility (NIF). One example shows the added benefit of multi-material ALE-AMR that relaxes the requirement that material boundaries must be along mesh boundaries.

  9. Adapting SOYGRO V5.42 for prediction under climate change conditions

    SciTech Connect

    Pickering, N.B.; Jones, J.W.; Boote, K.J.

    1995-12-31

    In some studies of the impacts of climate change on global crop production, crop growth models were empirically adapted to improve their response to increased CO{sub 2} concentration and air temperature. This chapter evaluates the empirical adaptations of the photosynthesis and evapotranspiration (ET) algorithms used in the soybean [Glycine max (L.) Merr.] model, SOYGRO V5.42, by comparing it with a new model that includes mechanistic approaches for these two processes. The new evapotranspiration-photosynthesis sub-model (ETPHOT) uses a hedgerow light interception algorithm, a C{sub 3}-leaf biochemical photosynthesis submodel, and predicts canopy ET and temperatures using a three-zone energy balance. ETPHOT uses daily weather data, has an internal hourly time step, and sums hourly predictions to obtain daily gross photosynthesis and ET. The empirical ET and photosynthesis curves included in SOYGRO V5.42 for climate change prediction were similar to those predicted by the ETPHOT model. Under extreme conditions that promote high leaf temperatures, such as in the humid tropics, SOYGRO V5.42 overestimated daily gross photosynthesis response to CO{sub 2} compared with the ETPHOT model. SOYGRO V5.42 also slightly overestimated daily gross photosynthesis at intermediate air temperatures and ambient CO{sub 2} concentrations. 80 refs., 12 figs.

  10. Environmentally adaptive acoustic transmission loss prediction in turbulent and nonturbulent atmospheres.

    PubMed

    Wichern, Gordon; Azimi-Sadjadi, Mahmood R; Mungiole, Michael

    2007-05-01

    An environmentally adaptive system for prediction of acoustic transmission loss (TL) in the atmosphere is developed in this paper. This system uses several back propagation neural network predictors, each corresponding to a specific environmental condition. The outputs of the expert predictors are combined using a fuzzy confidence measure and a nonlinear fusion system. Using this prediction methodology the computational intractability of traditional acoustic model-based approaches is eliminated. The proposed TL prediction system is tested on two synthetic acoustic data sets for a wide range of geometrical, source and environmental conditions including both nonturbulent and turbulent atmospheres. Test results of the system showed root mean square (RMS) errors of 1.84 dB for the nonturbulent and 1.36 dB for the turbulent conditions, respectively, which are acceptable levels for near real-time performance. Additionally, the environmentally adaptive system demonstrated improved TL prediction accuracy at high frequencies and large values of horizontal separation between source and receiver. PMID:17521880
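The fusion stage described above can be illustrated with a minimal confidence-weighted combination of expert outputs. This is a simplified stand-in for the paper's fuzzy confidence measure and nonlinear fusion system, and all numbers are invented:

```python
def fuse_tl_predictions(expert_preds, confidences):
    """Combine per-expert transmission-loss predictions (dB) into a
    single estimate, weighting each environment-specific expert by its
    confidence for the current input conditions."""
    total = sum(confidences)
    if total == 0:
        raise ValueError("at least one expert must have nonzero confidence")
    return sum(p * c for p, c in zip(expert_preds, confidences)) / total


# Example: three environment-specific experts, with the turbulent-
# atmosphere expert trusted most for this particular input.
estimate = fuse_tl_predictions([52.0, 55.0, 61.0], [0.1, 0.2, 0.7])
```

The weighted average degrades gracefully: when one expert's confidence dominates, the fused estimate tracks that expert; when confidences are equal, it reduces to a plain mean.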

  11. Adaptive Colour Contrast Coding in the Salamander Retina Efficiently Matches Natural Scene Statistics

    PubMed Central

    Vasserman, Genadiy; Schneidman, Elad; Segev, Ronen

    2013-01-01

    The visual system continually adjusts its sensitivity to the statistical properties of the environment through an adaptation process that starts in the retina. Colour perception and processing is commonly thought to occur mainly in high visual areas, and indeed most evidence for chromatic colour contrast adaptation comes from cortical studies. We show that colour contrast adaptation starts in the retina where ganglion cells adjust their responses to the spectral properties of the environment. We demonstrate that the ganglion cells match their responses to red-blue stimulus combinations according to the relative contrast of each of the input channels by rotating their functional response properties in colour space. Using measurements of the chromatic statistics of natural environments, we show that the retina balances inputs from the two (red and blue) stimulated colour channels, as would be expected from theoretical optimal behaviour. Our results suggest that colour is encoded in the retina based on the efficient processing of spectral information that matches spectral combinations in natural scenes on the colour processing level. PMID:24205373

  12. Near-fault earthquake ground motion prediction by a high-performance spectral element numerical code

    SciTech Connect

    Paolucci, Roberto; Stupazzini, Marco

    2008-07-08

    Near-fault effects have been widely recognised to produce specific features of earthquake ground motion, that cannot be reliably predicted by 1D seismic wave propagation modelling, used as a standard in engineering applications. These features may have a relevant impact on the structural response, especially in the nonlinear range, that is hard to predict and to be put in a design format, due to the scarcity of significant earthquake records and of reliable numerical simulations. In this contribution a pilot study is presented for the evaluation of seismic ground-motions in the near-fault region, based on a high-performance numerical code for 3D seismic wave propagation analyses, including the seismic fault, the wave propagation path and the near-surface geological or topographical irregularity. For this purpose, the software package GeoELSE is adopted, based on the spectral element method. The set-up of the numerical benchmark of 3D ground motion simulation in the valley of Grenoble (French Alps) is chosen to study the effect of the complex interaction between basin geometry and radiation mechanism on the variability of earthquake ground motion.

  13. The WISGSK: A computer code for the prediction of a multistage axial compressor performance with water ingestion

    NASA Technical Reports Server (NTRS)

    Tsuchiya, T.; Murthy, S. N. B.

    1982-01-01

    A computer code is presented for the prediction of off-design axial flow compressor performance with water ingestion. Four processes were considered to account for the aero-thermo-mechanical interactions during operation with air-water droplet mixture flow: (1) blade performance change, (2) centrifuging of water droplets, (3) heat and mass transfer process between the gaseous and the liquid phases and (4) droplet size redistribution due to break-up. Stage and compressor performance are obtained by a stage stacking procedure using representative velocity diagrams at rotor inlet and outlet mean radii. The Code has options for performance estimation with (1) mixtures of gas and (2) gas-water droplet mixtures, and therefore can take into account the humidity present in ambient conditions. A test case illustrates the method of using the Code. The Code follows closely the methodology and architecture of the NASA-STGSTK Code for the estimation of axial-flow compressor performance with air flow.

  14. Integration of Expressed Sequence Tag Data Flanking Predicted RNA Secondary Structures Facilitates Novel Non-Coding RNA Discovery

    PubMed Central

    Krzyzanowski, Paul M.; Price, Feodor D.; Muro, Enrique M.; Rudnicki, Michael A.; Andrade-Navarro, Miguel A.

    2011-01-01

    Many computational methods have been used to predict novel non-coding RNAs (ncRNAs), but none, to our knowledge, have explicitly investigated the impact of integrating existing cDNA-based Expressed Sequence Tag (EST) data that flank structural RNA predictions. To determine whether flanking EST data can assist in microRNA (miRNA) prediction, we identified genomic sites encoding putative miRNAs by combining functional RNA predictions with flanking EST data in a model consistent with miRNAs undergoing cleavage during maturation. In both human and mouse genomes, we observed that the inclusion of flanking ESTs adjacent to and not overlapping predicted miRNAs significantly improved the performance of various methods of miRNA prediction, including direct high-throughput sequencing of small RNA libraries. We analyzed the expression of hundreds of miRNAs predicted to be expressed during myogenic differentiation using a customized microarray and identified several known and predicted myogenic miRNA hairpins. Our results indicate that integrating ESTs flanking structural RNA predictions improves the quality of cleaved miRNA predictions and suggest that this strategy can be used to predict other non-coding RNAs undergoing cleavage during maturation. PMID:21698286

  15. Integration of expressed sequence tag data flanking predicted RNA secondary structures facilitates novel non-coding RNA discovery.

    PubMed

    Krzyzanowski, Paul M; Price, Feodor D; Muro, Enrique M; Rudnicki, Michael A; Andrade-Navarro, Miguel A

    2011-01-01

    Many computational methods have been used to predict novel non-coding RNAs (ncRNAs), but none, to our knowledge, have explicitly investigated the impact of integrating existing cDNA-based Expressed Sequence Tag (EST) data that flank structural RNA predictions. To determine whether flanking EST data can assist in microRNA (miRNA) prediction, we identified genomic sites encoding putative miRNAs by combining functional RNA predictions with flanking EST data in a model consistent with miRNAs undergoing cleavage during maturation. In both human and mouse genomes, we observed that the inclusion of flanking ESTs adjacent to and not overlapping predicted miRNAs significantly improved the performance of various methods of miRNA prediction, including direct high-throughput sequencing of small RNA libraries. We analyzed the expression of hundreds of miRNAs predicted to be expressed during myogenic differentiation using a customized microarray and identified several known and predicted myogenic miRNA hairpins. Our results indicate that integrating ESTs flanking structural RNA predictions improves the quality of cleaved miRNA predictions and suggest that this strategy can be used to predict other non-coding RNAs undergoing cleavage during maturation. PMID:21698286

  16. Benchmarking and qualification of the NUFREQ-NPW code for best estimate prediction of multi-channel core stability margins

    SciTech Connect

    Taleyarkhan, R.; Lahey, R.T. Jr.; McFarlane, A.F.; Podowski, M.Z.

    1988-01-01

    The NUFREQ-NP code was modified and set up at Westinghouse, USA for mixed fuel type multi-channel core-wide stability analysis. The resulting code, NUFREQ-NPW, allows for variable axial power profiles between channel groups and can handle mixed fuel types. Various models incorporated into NUFREQ-NPW were systematically compared against the Westinghouse channel stability analysis code MAZDA-NF, for which the mathematical model was developed in an entirely different manner. Excellent agreement was obtained which verified the thermal-hydraulic modeling and coding aspects. Detailed comparisons were also performed against nuclear-coupled reactor core stability data. All thirteen Peach Bottom-2 EOC-2/3 low flow stability tests were simulated. A key aspect for code qualification involved the development of a physically based empirical algorithm to correct for the effect of core inlet flow development on subcooled boiling. Various other modeling assumptions were tested and sensitivity studies performed. Good agreement was obtained between NUFREQ-NPW predictions and data. Moreover, predictions were generally on the conservative side. The results of detailed direct comparisons with experimental data using the NUFREQ-NPW code have demonstrated that BWR core stability margins are conservatively predicted, and all data trends are captured with good accuracy. The methodology is thus suitable for BWR design and licensing purposes. 11 refs., 12 figs., 2 tabs.

  17. Predictive coding accounts of shared representations in parieto-insular networks.

    PubMed

    Ishida, Hiroaki; Suzuki, Keisuke; Grandi, Laura Clara

    2015-04-01

    The discovery of mirror neurons in the ventral premotor cortex (area F5) and inferior parietal cortex (area PFG) in the macaque monkey brain has provided the physiological evidence for direct matching of the intrinsic motor representations of the self and the visual image of the actions of others. The existence of mirror neurons implies that the brain has mechanisms reflecting shared self and other action representations. This may further imply that the neural basis of self-body representations may also incorporate components that are shared with other-body representations. It is likely that such a mechanism is also involved in predicting others' touch sensations and emotions. However, the neural basis of shared body representations has remained unclear. Here, we propose a neural basis of body representation of the self and of others in both human and non-human primates. We review a series of behavioral and physiological findings which together paint a picture that the systems underlying such shared representations require integration of conscious exteroception and interoception subserved by a cortical sensory-motor network involving parieto-inner perisylvian circuits (the ventral intraparietal area [VIP]/inferior parietal area [PFG]-secondary somatosensory cortex [SII]/posterior insular cortex [pIC]/anterior insular cortex [aIC]). Based on these findings, we propose a computational mechanism of the shared body representation in the predictive coding (PC) framework. Our mechanism proposes that processes emerging from generative models embedded in these specific neuronal circuits play a pivotal role in distinguishing a self-specific body representation from a shared one. The model successfully accounts for normal and abnormal shared body phenomena such as mirror-touch synesthesia and somatoparaphrenia. In addition, it generates a set of testable experimental predictions. PMID:25447372

  18. Rhythmic complexity and predictive coding: a novel approach to modeling rhythm and meter perception in music

    PubMed Central

    Vuust, Peter; Witek, Maria A. G.

    2014-01-01

    Musical rhythm, consisting of apparently abstract intervals of accented temporal events, has a remarkable capacity to move our minds and bodies. How does the cognitive system enable our experiences of rhythmically complex music? In this paper, we describe some common forms of rhythmic complexity in music and propose the theory of predictive coding (PC) as a framework for understanding how rhythm and rhythmic complexity are processed in the brain. We also consider why we feel so compelled by rhythmic tension in music. First, we consider theories of rhythm and meter perception, which provide hierarchical and computational approaches to modeling. Second, we present the theory of PC, which posits a hierarchical organization of brain responses reflecting fundamental, survival-related mechanisms associated with predicting future events. According to this theory, perception and learning are manifested through the brain’s Bayesian minimization of the error between the input to the brain and the brain’s prior expectations. Third, we develop a PC model of musical rhythm, in which rhythm perception is conceptualized as an interaction between what is heard (“rhythm”) and the brain’s anticipatory structuring of music (“meter”). Finally, we review empirical studies of the neural and behavioral effects of syncopation, polyrhythm and groove, and propose how these studies can be seen as special cases of the PC theory. We argue that musical rhythm exploits the brain’s general principles of prediction and propose that pleasure and desire for sensorimotor synchronization from musical rhythm may be a result of such mechanisms. PMID:25324813
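The core idea, perception as precision-weighted minimization of the error between input and expectation, can be sketched in a single update rule. This is a generic Gaussian predictive-coding illustration, not the authors' model of rhythm and meter; the precisions and stimulus values are invented:

```python
def pc_update(prior_mean, observation, prior_precision, obs_precision):
    """One predictive-coding step: the updated belief is the prior
    shifted by the prediction error, scaled by the relative precision
    (inverse variance) of the sensory input versus the prior."""
    gain = obs_precision / (prior_precision + obs_precision)
    prediction_error = observation - prior_mean
    return prior_mean + gain * prediction_error


# A repeatedly surprising onset (e.g., a syncopated accent at t = 1.0
# against an expectation of t = 0.0) gradually reshapes the expectation:
belief = 0.0
for _ in range(5):
    belief = pc_update(belief, 1.0, prior_precision=4.0, obs_precision=1.0)
```

With a strong prior (precision 4 vs. 1), each step closes only 20% of the remaining error, so the expectation converges toward the input over repeated exposures rather than jumping immediately, which is one way to read the gradual resolution of rhythmic tension.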

  19. The Predictive Utility of Narcissism among Children and Adolescents: Evidence for a Distinction between Adaptive and Maladaptive Narcissism

    ERIC Educational Resources Information Center

    Barry, Christopher T.; Frick, Paul J.; Adler, Kristy K.; Grafeman, Sarah J.

    2007-01-01

    We examined the predictive utility of narcissism among a community sample of children and adolescents (N=98) longitudinally. Analyses focused on the differential utility between maladaptive and adaptive narcissism for predicting later delinquency. Maladaptive narcissism significantly predicted self-reported delinquency at one-, two-, and…

  20. Adaptability and Prediction of Anticipatory Muscular Activity Parameters to Different Movements in the Sitting Position.

    PubMed

    Chikh, Soufien; Watelain, Eric; Faupin, Arnaud; Pinti, Antonio; Jarraya, Mohamed; Garnier, Cyril

    2016-08-01

    Voluntary movement often causes postural perturbation that requires an anticipatory postural adjustment to minimize perturbation and increase the efficiency and coordination during execution. This systematic review focuses specifically on the relationship between the parameters of anticipatory muscular activities and movement finality in sitting position among adults, to study the adaptability and predictability of anticipatory muscular activities parameters to different movements and conditions in sitting position in adults. A systematic literature search was performed using PubMed, Science Direct, Web of Science, Springer-Link, Engineering Village, and EbscoHost. Inclusion and exclusion criteria were applied to retain the most rigorous and specific studies, yielding 76 articles. Seventeen articles were excluded at first reading, and after the application of inclusion and exclusion criteria, 23 were retained. In a sitting position, central nervous system activity precedes movement by diverse anticipatory muscular activities and shows the ability to adapt anticipatory muscular activity parameters to the movement direction, postural stability, or charge weight. In addition, these parameters could be adapted to the speed of execution, as found for the standing position. Parameters of anticipatory muscular activities (duration, order, and amplitude of muscle contractions constituting the anticipatory muscular activity) could be used as a predictive indicator of forthcoming movement. In addition, this systematic review may improve methodology in empirical studies and assistive technology for people with disabilities. PMID:27440765

  1. Adaptation of distortion product otoacoustic emissions predicts susceptibility to acoustic over-exposure in alert rabbits.

    PubMed

    Luebke, Anne E; Stagner, Barden B; Martin, Glen K; Lonsbury-Martin, Brenda L

    2014-04-01

    A noninvasive test was developed in rabbits based on fast adaptation measures for 2f1-f2 distortion-product otoacoustic emissions (DPOAEs). The goal was to evaluate the effective reflex activation, i.e., "functional strength," of both the descending medial olivocochlear efferent reflex (MOC-R) and the middle-ear muscle reflex (MEM-R) through sound activation. Classically, it is assumed that both reflexes contribute toward protecting the inner ear from cochlear damage caused by noise exposure. The DP-gram method described here evaluated the MOC-R effect on DPOAE levels over a two-octave (oct) frequency range. To estimate the related activation of the middle-ear muscles (MEMs), the MEM-R was measured by monitoring the level of the f1-primary tone throughout its duration. Following baseline measures, rabbits were subjected to noise over-exposure. A main finding was that the measured adaptive activity was highly variable between rabbits but less so between the ears of the same animal. Also, together, the MOC-R and MEM-R tests showed that, on average, DPOAE adaptation consisted of a combined contribution from both systems. Despite this shared involvement, the amount of DPOAE adaptation measured for a particular animal's ear predicted that ear's subsequent susceptibility to the noise over-exposure for alert but not for deeply anesthetized rabbits. PMID:25234992

  2. Genomic Measures to Predict Adaptation to Novel Sensorimotor Environments and Improve Personalization of Countermeasure Design

    NASA Technical Reports Server (NTRS)

    Kreutzberg, G. A.; Zanello, S.; Seidler, R. D.; Peters, B.; De Dios, Y. E.; Gadd, N. E.; Bloomberg, J. J.; Mulavara, A. P.

    2016-01-01

    Introduction. Astronauts experience sensorimotor disturbances during their initial exposure to microgravity and during the re-adaptation phase following a return to an Earth-gravitational environment. These alterations may affect crewmembers' ability to perform mission-critical functional tasks. Interestingly, astronauts have shown significant inter-subject variation in adaptive capability during gravitational transitions. The ability to predict the manner and degree to which individual astronauts would be affected would improve the efficacy of personalized countermeasure training programs designed to enhance sensorimotor adaptability. The success of such an approach depends on the development of predictive measures of sensorimotor adaptation, which would ascertain each crewmember's adaptive capacity. The goal of this study is to determine whether specific genetic polymorphisms have significant influence on sensorimotor adaptability, which can help inform the design of personalized training countermeasures. Methods. Subjects (n=15) were tested on their ability to negotiate a complex obstacle course for ten test trials while wearing up-down vision-displacing goggles. This presented a visuomotor challenge while doing a full body task. The first test trial time and the recovery rate over the ten trials were used as adaptability performance metrics. Four single nucleotide polymorphisms (SNPs) were selected for their role in neural pathways underlying sensorimotor adaptation and were identified in subjects' DNA extracted from saliva samples: catechol-O-methyl transferase (COMT, rs4680), dopamine receptor D2 (DRD2, rs1076560), brain-derived neurotrophic factor genes (BDNF, rs6265), and the DraI polymorphism of the alpha-2 adrenergic receptor. The relationship between the SNPs and test performance was assessed by assigning subjects a rank score based on their adaptability performance metrics and comparing gene expression between the top half and bottom half performers

  3. Adaptive neuro fuzzy inference system for compressional wave velocity prediction in a carbonate reservoir

    NASA Astrophysics Data System (ADS)

    Zoveidavianpoor, Mansoor; Samsuri, Ariffin; Shadizadeh, Seyed Reza

    2013-02-01

    Compressional-wave (Vp) data are key information for estimation of rock physical properties and formation evaluation in hydrocarbon reservoirs. However, the absence of Vp will significantly delay the application of specific risk-assessment approaches for reservoir exploration and development procedures. Since Vp is affected by several factors such as lithology, porosity, density, etc., it is difficult to model their non-linear relationships using conventional approaches. In addition, currently available techniques are not efficient for Vp prediction, especially in carbonates. There is a growing interest in incorporating advanced technologies for an accurate prediction of lacking data in wells. The objectives of this study, therefore, are to analyze and predict Vp as a function of some conventional well logs by two approaches: Adaptive Neuro-Fuzzy Inference System (ANFIS) and Multiple Linear Regression (MLR). Also, the significant impact of selected input parameters on the response variable will be investigated. A total of 2156 data points from a giant Middle Eastern carbonate reservoir, derived from conventional well logs and Dipole Sonic Imager (DSI) log were utilized in this study. The quality of the prediction was quantified in terms of the mean squared error (MSE), correlation coefficient (R-square), and prediction efficiency error (PEE). Results show that the ANFIS outperforms MLR with MSE of 0.0552, R-square of 0.964, and PEE of 2%. It is posited that porosity has a significant impact in predicting Vp in the investigated carbonate reservoir.

  4. Test results of a 40 kW Stirling engine and comparison with the NASA-Lewis computer code predictions

    NASA Astrophysics Data System (ADS)

    Allen, D.; Cairelli, J.

    1985-12-01

    A Stirling engine was tested without auxiliaries at NASA-Lewis. Three different regenerator configurations were tested with hydrogen. The test objectives were (1) to obtain steady-state and dynamic engine data, including indicated power, for validation of an existing computer model for this engine; and (2) to evaluate structurally the use of silicon carbide regenerators. This paper presents comparisons of the measured brake performance, indicated mean effective pressure, and cyclic pressure variations with those predicted by the code. The measured data tended to be lower than the computer code predictions. The silicon carbide foam regenerators appear to be structurally suitable, but the foam matrix tested severely reduced performance.

  5. Test results of a 40 kW Stirling engine and comparison with the NASA-Lewis computer code predictions

    NASA Technical Reports Server (NTRS)

    Allen, D.; Cairelli, J.

    1985-01-01

    A Stirling engine was tested without auxiliaries at NASA-Lewis. Three different regenerator configurations were tested with hydrogen. The test objectives were (1) to obtain steady-state and dynamic engine data, including indicated power, for validation of an existing computer model for this engine; and (2) to evaluate structurally the use of silicon carbide regenerators. This paper presents comparisons of the measured brake performance, indicated mean effective pressure, and cyclic pressure variations with those predicted by the code. The measured data tended to be lower than the computer code predictions. The silicon carbide foam regenerators appear to be structurally suitable, but the foam matrix tested severely reduced performance.

  6. Query-Adaptive Hash Code Ranking for Large-Scale Multi-View Visual Search.

    PubMed

    Liu, Xianglong; Huang, Lei; Deng, Cheng; Lang, Bo; Tao, Dacheng

    2016-10-01

    Hash-based nearest neighbor search has become attractive in many applications. However, the quantization in hashing usually degenerates the discriminative power when using Hamming distance ranking. Besides, for large-scale visual search, existing hashing methods cannot directly support the efficient search over the data with multiple sources, while the literature has shown that adaptively incorporating complementary information from diverse sources or views can significantly boost the search performance. To address the problems, this paper proposes a novel and generic approach to building multiple hash tables with multiple views and generating fine-grained ranking results at bitwise and tablewise levels. For each hash table, a query-adaptive bitwise weighting is introduced to alleviate the quantization loss by simultaneously exploiting the quality of hash functions and their complement for nearest neighbor search. From the tablewise aspect, multiple hash tables are built for different data views as a joint index, over which a query-specific rank fusion is proposed to rerank all results from the bitwise ranking by diffusing in a graph. Comprehensive experiments on image search over three well-known benchmarks show that the proposed method achieves up to 17.11% and 20.28% performance gains on single and multiple table search over the state-of-the-art methods. PMID:27448359
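The bitwise-weighting idea can be illustrated with a weighted Hamming distance, where each bit of a hash code contributes its own query-adaptive weight instead of a flat count of 1. This is a minimal sketch: the codes and weights are invented, and the paper's tablewise graph-diffusion reranking is omitted:

```python
def weighted_hamming(query_code, db_code, bit_weights):
    """Distance = sum of the weights of the bits on which the two
    binary codes disagree (reduces to plain Hamming distance when
    every weight is 1)."""
    diff = query_code ^ db_code
    return sum(w for i, w in enumerate(bit_weights) if (diff >> i) & 1)


def rank_candidates(query_code, db_codes, bit_weights):
    """Order database codes by query-adaptive weighted distance,
    yielding a finer-grained ranking than integer Hamming distance."""
    return sorted(db_codes,
                  key=lambda c: weighted_hamming(query_code, c, bit_weights))


# 4-bit toy example: bit 3 is judged most reliable for this query, so a
# disagreement there costs more than disagreements on the other bits.
order = rank_candidates(0b1010, [0b0010, 0b1011, 0b0101],
                        bit_weights=[1.0, 1.0, 1.0, 4.0])
```

Note that under plain Hamming distance the first two candidates would tie at distance 1; the per-bit weights break such ties, which is the quantization-loss problem the abstract raises.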

  7. A novel pseudoderivative-based mutation operator for real-coded adaptive genetic algorithms

    PubMed Central

    Kanwal, Maxinder S; Ramesh, Avinash S; Huang, Lauren A

    2013-01-01

    Recent development of large databases, especially those in genetics and proteomics, is pushing the development of novel computational algorithms that implement rapid and accurate search strategies. One successful approach has been to use artificial intelligence methods, including pattern recognition (e.g. neural networks) and optimization techniques (e.g. genetic algorithms). The focus of this paper is on optimizing the design of genetic algorithms by using an adaptive mutation rate that is derived from comparing the fitness values of successive generations. We propose a novel pseudoderivative-based mutation rate operator designed to allow a genetic algorithm to escape local optima and successfully continue to the global optimum. Once proven successful, this algorithm can be implemented to solve real problems in neurology and bioinformatics. As a first step towards this goal, we tested our algorithm on two 3-dimensional surfaces with multiple local optima, but only one global optimum, as well as on the N-queens problem, an applied problem in which the function that maps the curve is implicit. For all tests, the adaptive mutation rate allowed the genetic algorithm to find the global optimal solution, performing significantly better than other search methods, including genetic algorithms that implement fixed mutation rates. PMID:24627784
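
    The adaptive-mutation idea can be sketched as follows: compare best fitness across successive generations and raise the mutation rate when progress stalls, lowering it when progress resumes. The update constants, population scheme, and the simple real-coded GA below are illustrative assumptions, not the authors' exact operator.

```python
import random

def adaptive_mutation_rate(prev_best, curr_best, rate, lo=0.01, hi=0.5):
    """Pseudoderivative-style update (illustrative): if best fitness has
    stalled between generations, raise the mutation rate to help escape a
    local optimum; if it is improving, lower it to refine the search."""
    if curr_best - prev_best <= 1e-12:   # no improvement
        return min(hi, rate * 2.0)
    return max(lo, rate * 0.5)

def evolve(fitness, pop, generations=50, rate=0.05):
    """Tiny real-coded GA: keep the fitter half, add mutated children."""
    best_hist = [max(fitness(x) for x in pop)]
    for _ in range(generations):
        pop = sorted(pop, key=fitness, reverse=True)[: len(pop) // 2]
        children = [x + random.gauss(0, 1) * rate for x in pop]  # mutation
        pop = pop + children
        best_hist.append(max(fitness(x) for x in pop))
        rate = adaptive_mutation_rate(best_hist[-2], best_hist[-1], rate)
    return max(pop, key=fitness), rate

best, final_rate = evolve(lambda x: -(x - 3) ** 2, [0.0, 1.0, 5.0, 8.0])
```

    Because survivors are retained each generation, the best fitness is non-decreasing; the mutation rate oscillates between refinement and exploration depending on the fitness trend.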

  8. Output-Adaptive Tetrahedral Cut-Cell Validation for Sonic Boom Prediction

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Darmofal, David L.

    2008-01-01

    A cut-cell approach to Computational Fluid Dynamics (CFD) that utilizes the median dual of a tetrahedral background grid is described. The discrete adjoint is also calculated, which permits adaptation based on improving the calculation of a specified output (off-body pressure signature) in supersonic inviscid flow. These predicted signatures are compared to wind tunnel measurements on and off the configuration centerline 10 body lengths below the model to validate the method for sonic boom prediction. Accurate mid-field sonic boom pressure signatures are calculated with the Euler equations without the use of hybrid grid or signature propagation methods. Highly-refined, shock-aligned anisotropic grids were produced by this method from coarse isotropic grids created without prior knowledge of shock locations. A heuristic reconstruction limiter provided stable flow and adjoint solution schemes while producing similar signatures to Barth-Jespersen and Venkatakrishnan limiters. The use of cut-cells with an output-based adaptive scheme completely automated this accurate prediction capability after a triangular mesh is generated for the cut surface. This automation drastically reduces the manual intervention required by existing methods.

  9. Incremental Validity of Personality Measures in Predicting Underwater Performance and Adaptation.

    PubMed

    Colodro, Joaquín; Garcés-de-Los-Fayos, Enrique J; López-García, Juan J; Colodro-Conde, Lucía

    2015-01-01

    Intelligence and personality traits are currently considered effective predictors of human behavior and job performance. However, there are few studies about their relevance in the underwater environment. Data from a sample of military personnel performing scuba diving courses were analyzed with regression techniques, testing the contribution of individual differences and ascertaining the incremental validity of personality in an environment with extreme psychophysical demands. The results confirmed the incremental validity of personality traits (ΔR² = .20, f² = .25) over the predictive contribution of general mental ability (ΔR² = .07, f² = .08) in divers' performance. Moreover, personality (R_L² = .34) also showed higher validity than general mental ability (R_L² = .09) in predicting underwater adaptation. The ROC curve indicated 86% of the maximum possible discrimination power for the prediction of underwater adaptation, AUC = .86, p < .001, 95% CI (.82-.90). These findings confirm the shift and reversal of incremental validity of dispositional traits in the underwater environment and the relevance of personality traits as predictors of an effective response to the changing circumstances of military scuba diving. They may also improve the understanding of the behavioral effects and psychophysiological complications of diving and provide guidance for psychological intervention and risk prevention in this extreme environment. PMID:26055931
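
    The hierarchical-regression arithmetic behind ΔR² and Cohen's f² can be reproduced on synthetic data. The variable names and effect sizes below are invented; only the standard formulas ΔR² = R²_full − R²_base and f² = ΔR² / (1 − R²_full) are taken as given.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with an intercept column."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    tss = (y - y.mean()) @ (y - y.mean())
    return 1.0 - (resid @ resid) / tss

def incremental_validity(gma, personality, y):
    """Delta-R^2 of personality over GMA alone, plus Cohen's f^2."""
    r2_base = r_squared(gma.reshape(-1, 1), y)
    r2_full = r_squared(np.column_stack([gma, personality]), y)
    dr2 = r2_full - r2_base
    return dr2, dr2 / (1.0 - r2_full)

# Synthetic scores: performance depends on both predictors (invented effects).
rng = np.random.default_rng(42)
gma = rng.normal(size=300)
pers = rng.normal(size=300)
perf = 0.3 * gma + 0.6 * pers + rng.normal(size=300)
dr2, f2 = incremental_validity(gma, pers, perf)
```

    As in the paper, the block added second (personality) is credited only with variance not already explained by the block entered first.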

  10. Better prediction by use of co-data: adaptive group-regularized ridge regression.

    PubMed

    van de Wiel, Mark A; Lien, Tonje G; Verlaat, Wina; van Wieringen, Wessel N; Wilting, Saskia M

    2016-02-10

    For many high-dimensional studies, additional information on the variables, like (genomic) annotation or external p-values, is available. In the context of binary and continuous prediction, we develop a method for adaptive group-regularized (logistic) ridge regression, which makes structural use of such 'co-data'. Here, 'groups' refer to a partition of the variables according to the co-data. We derive empirical Bayes estimates of group-specific penalties, which possess several nice properties: (i) They are analytical. (ii) They adapt to the informativeness of the co-data for the data at hand. (iii) Only one global penalty parameter requires tuning by cross-validation. In addition, the method allows use of multiple types of co-data at little extra computational effort. We show that the group-specific penalties may lead to a larger distinction between 'near-zero' and relatively large regression parameters, which facilitates post hoc variable selection. The method, termed GRridge, is implemented in an easy-to-use R-package. It is demonstrated on two cancer genomics studies, which both concern the discrimination of precancerous cervical lesions from normal cervix tissues using methylation microarray data. For both examples, GRridge clearly improves the predictive performances of ordinary logistic ridge regression and the group lasso. In addition, we show that for the second study, the relatively good predictive performance is maintained when selecting only 42 variables. PMID:26365903
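
    A minimal linear analogue of group-wise ridge penalties is shown below; GRridge itself estimates the penalties by empirical Bayes and supports logistic models, so the fixed penalties, grouping, and data here are purely illustrative.

```python
import numpy as np

def group_ridge(X, y, groups, penalties):
    """Ridge regression with a group-specific penalty per variable:
    beta = (X'X + diag(lambda_g))^{-1} X'y, where each variable's
    penalty lambda_g is shared by its co-data group (illustrative
    linear analogue of GRridge)."""
    lam = np.array([penalties[g] for g in groups], dtype=float)
    return np.linalg.solve(X.T @ X + np.diag(lam), X.T @ y)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))
# First two variables informative, last two pure noise.
y = X[:, 0] + X[:, 1] + 0.1 * rng.normal(size=100)
groups = [0, 0, 1, 1]                    # partition given by co-data
beta = group_ridge(X, y, groups, {0: 1.0, 1: 100.0})
```

    Giving the uninformative group a much larger penalty shrinks its coefficients toward zero while leaving the informative group nearly unshrunk, which is the sharpened near-zero/large distinction the abstract describes.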

  11. On using an adaptive neural network to predict lung tumor motion during respiration for radiotherapy applications

    SciTech Connect

    Isaksson, Marcus; Jalden, Joakim; Murphy, Martin J.

    2005-12-15

    In this study we address the problem of predicting the position of a moving lung tumor during respiration on the basis of external breathing signals--a technique used for beam gating, tracking, and other dynamic motion management techniques in radiation therapy. We demonstrate the use of neural network filters to correlate tumor position with external surrogate markers while simultaneously predicting the motion ahead in time, for situations in which neither the breathing pattern nor the correlation between moving anatomical elements is constant in time. One pancreatic cancer patient and two lung cancer patients with mid/upper lobe tumors were fluoroscopically imaged to observe tumor motion synchronously with the movement of external chest markers during free breathing. The external marker position was provided as input to a feed-forward neural network that correlated the marker and tumor movement to predict the tumor position up to 800 ms in advance. The predicted tumor position was compared to its observed position to establish the accuracy with which the filter could dynamically track tumor motion under nonstationary conditions. These results were compared to simplified linear versions of the filter. The two lung cancer patients exhibited complex respiratory behavior in which the correlation between surrogate marker and tumor position changed with each cycle of breathing. By automatically and continuously adjusting its parameters to the observations, the neural network achieved better tracking accuracy than the fixed and adaptive linear filters. Variability and instability in human respiration complicate the task of predicting tumor position from surrogate breathing signals. Our results show that adaptive signal-processing filters can provide more accurate tumor position estimates than simpler stationary filters when presented with nonstationary breathing motion.
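
    A simplified linear stand-in for such an adaptive correlation filter is an LMS predictor that maps recent external-marker samples to the target position a few samples ahead, updating its weights from each observed error. The tap count, prediction lead, step size, and synthetic breathing signal below are arbitrary choices, not the paper's neural configuration.

```python
import math

def adaptive_predict(marker, target, taps=4, lead=2, mu=0.05):
    """Predict target[t + lead] from the last `taps` marker samples,
    updating the filter weights online from each prediction error (LMS)."""
    w = [0.0] * taps
    preds, errs = [], []
    for t in range(taps - 1, len(marker) - lead):
        x = marker[t - taps + 1 : t + 1]
        yhat = sum(wi * xi for wi, xi in zip(w, x))
        err = target[t + lead] - yhat
        w = [wi + mu * err * xi for wi, xi in zip(w, x)]  # LMS update
        preds.append(yhat)
        errs.append(err)
    return preds, errs

# Synthetic breathing-like motion: target is a scaled copy of the marker.
marker = [math.sin(0.2 * t) for t in range(500)]
target = [2.0 * m for m in marker]
preds, errs = adaptive_predict(marker, target)
```

    Because the weights keep adapting, the same loop also tracks a marker-target relation that drifts over time, which is the nonstationary situation the study addresses.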

  12. User's guide for a flat wake rotor inflow/wake velocity prediction code, DOWN

    NASA Technical Reports Server (NTRS)

    Wilson, John C.

    1991-01-01

    A computer code named DOWN was created to implement a flat wake theory for the calculation of rotor inflow and wake velocities. A brief description of the code methodology and instructions for its use are given. The code will be available from NASA's Computer Software Management and Information Center (COSMIC).

  13. An adaptive algorithm for removing the blocking artifacts in block-transform coded images

    NASA Astrophysics Data System (ADS)

    Yang, Jingzhong; Ma, Zheng

    2005-11-01

    JPEG and MPEG compression standards adopt a macroblock encoding approach, but this method can lead to annoying blocking effects: artificial rectangular discontinuities in the decoded images. Many powerful postprocessing algorithms have been developed to remove the blocking effects. However, all but the simplest algorithms can be too complex for real-time applications such as video decoding. We propose an adaptive and easy-to-implement algorithm that removes these artificial discontinuities. The algorithm has two steps: first, a fast linear smoothing of the block-edge pixels using an average-value replacement strategy; second, a check that compares the variance of the difference between the processed and original images against a threshold to decide whether to stop. Experiments have shown that this algorithm quickly removes the artificial discontinuities without destroying the key information of the decoded images, and that it is robust across different images and transform strategies.
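
    The two-step procedure might be sketched as follows; the block size, threshold, and exact averaging rule are assumptions based only on the abstract.

```python
def smooth_block_edges(img, block=8):
    """Step 1 (illustrative): replace each pixel pair straddling a block
    boundary with their average."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):                          # vertical boundaries
        for x in range(block, w, block):
            avg = (out[y][x - 1] + out[y][x]) / 2
            out[y][x - 1] = out[y][x] = avg
    for x in range(w):                          # horizontal boundaries
        for y in range(block, h, block):
            avg = (out[y - 1][x] + out[y][x]) / 2
            out[y - 1][x] = out[y][x] = avg
    return out

def variance_of_change(a, b):
    """Variance of the pixelwise difference between two images."""
    diffs = [pa - pb for ra, rb in zip(a, b) for pa, pb in zip(ra, rb)]
    m = sum(diffs) / len(diffs)
    return sum((d - m) ** 2 for d in diffs) / len(diffs)

def deblock(img, threshold=0.5, max_iter=10):
    """Step 2 (illustrative): repeat smoothing until the change variance
    drops below a threshold."""
    for _ in range(max_iter):
        new = smooth_block_edges(img)
        if variance_of_change(new, img) < threshold:
            return new
        img = new
    return img
```

    On a toy image with a single hard 8x8 block boundary, one pass flattens the edge and the variance test then halts the loop.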

  14. Nonlinear model identification and adaptive model predictive control using neural networks.

    PubMed

    Akpan, Vincent A; Hassapis, George D

    2011-04-01

    This paper presents two new adaptive model predictive control algorithms, both consisting of an on-line process identification part and a predictive control part. Both parts are executed at each sampling instant. The predictive control part of the first algorithm is the Nonlinear Model Predictive Control strategy and the control part of the second algorithm is the Generalized Predictive Control strategy. In the identification parts of both algorithms the process model is approximated by a series-parallel neural network structure which is trained by an adaptive recursive least squares (ARLS) method. The two control algorithms have been applied to: 1) the temperature control of a fluidized bed furnace reactor (FBFR) of a pilot plant and 2) the auto-pilot control of an F-16 aircraft. The training and validation data of the neural network are obtained from the open-loop simulation of the FBFR and the nonlinear F-16 aircraft models. The identification and control simulation results show that the first algorithm outperforms the second one at the expense of extra computation time. PMID:21281932
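
    The identification half of such a scheme rests on recursive least squares. A generic RLS update with a forgetting factor (not necessarily the paper's ARLS variant, and with an invented linear test system) looks like this:

```python
import numpy as np

def rls_update(theta, P, x, y, lam=0.99):
    """One recursive-least-squares step: update the parameter estimate
    `theta` and inverse-correlation matrix `P` from regressor x and
    observation y; `lam` is the forgetting factor."""
    x = x.reshape(-1, 1)
    k = P @ x / (lam + x.T @ P @ x)          # gain vector
    err = y - (x.T @ theta).item()           # a priori prediction error
    theta = theta + k * err
    P = (P - k @ x.T @ P) / lam
    return theta, P

# Identify y = 2*u1 - 1*u2 online from noisy samples (illustrative system).
rng = np.random.default_rng(0)
theta = np.zeros((2, 1))
P = np.eye(2) * 100.0
for _ in range(200):
    x = rng.normal(size=2)
    y = 2.0 * x[0] - 1.0 * x[1] + 0.01 * rng.normal()
    theta, P = rls_update(theta, P, x, y)
```

    Running this update at every sampling instant is what lets the model, and hence the predictive controller built on it, track a slowly changing process.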

  15. Healthy individuals maintain adaptive stimulus evaluation under predictable and unpredictable threat.

    PubMed

    Klinkenberg, Isabelle A G; Rehbein, Maimu A; Steinberg, Christian; Klahn, Anna Luisa; Zwanzger, Peter; Zwitserlood, Pienie; Junghöfer, Markus

    2016-08-01

    Anxiety-inducing paradigms such as the threat-of-shock paradigm have provided ample data on the emotional processing of predictable and unpredictable threat, but little is known about the processing of aversive, threat-irrelevant stimuli in these paradigms. We investigated how the predictability of threat influences the neural visual processing of threat-irrelevant fearful and neutral faces. Thirty-two healthy individuals participated in an NPU-threat test, consisting of a safe or neutral condition (N) and a predictable (P) as well as an unpredictable (U) threat condition, using audio-visual threat stimuli. In all NPU-conditions, we registered participants' brain responses to threat-irrelevant faces via magnetoencephalography. The data showed that increasing unpredictability of threat evoked increasing emotion regulation during face processing predominantly in dorsolateral prefrontal cortex regions during an early to mid-latency time interval. Importantly, we obtained only main effects but no significant interaction of facial expression and conditions of different threat predictability, neither in behavioral nor in neural data. Healthy individuals with average trait anxiety are thus able to maintain adaptive stimulus evaluation processes under predictable and unpredictable threat conditions. PMID:27208859

  16. Bulbar Microcircuit Model Predicts Connectivity and Roles of Interneurons in Odor Coding

    PubMed Central

    Gilra, Aditya; Bhalla, Upinder S.

    2015-01-01

    Stimulus encoding by primary sensory brain areas provides a data-rich context for understanding their circuit mechanisms. The vertebrate olfactory bulb is an input area having unusual two-layer dendro-dendritic connections whose roles in odor coding are unclear. To clarify these roles, we built a detailed compartmental model of the rat olfactory bulb that synthesizes a much wider range of experimental observations on bulbar physiology and response dynamics than has hitherto been modeled. We predict that superficial-layer inhibitory interneurons (periglomerular cells) linearize the input-output transformation of the principal neurons (mitral cells), unlike previous models of contrast enhancement. The linearization is required to replicate observed linear summation of mitral odor responses. Further, in our model, action-potentials back-propagate along lateral dendrites of mitral cells and activate deep-layer inhibitory interneurons (granule cells). Using this, we propose sparse, long-range inhibition between mitral cells, mediated by granule cells, to explain how the respiratory phases of odor responses of sister mitral cells can be sometimes decorrelated as observed, despite receiving similar receptor input. We also rule out some alternative mechanisms. In our mechanism, we predict that a few distant mitral cells receiving input from different receptors, inhibit sister mitral cells differentially, by activating disjoint subsets of granule cells. This differential inhibition is strong enough to decorrelate their firing rate phases, and not merely modulate their spike timing. Thus our well-constrained model suggests novel computational roles for the two most numerous classes of interneurons in the bulb. PMID:25942312

  17. A 4.8 kbps code-excited linear predictive coder

    NASA Technical Reports Server (NTRS)

    Tremain, Thomas E.; Campbell, Joseph P., Jr.; Welch, Vanoy C.

    1988-01-01

    The STU-3 secure voice system, capable of providing end-to-end secure voice communications, was developed in 1984. The terminal for the new system will be built around the standard LPC-10 voice processor algorithm. Although the performance of the present STU-3 processor is considered good, its response to nonspeech sounds such as whistles, coughs, and impulse-like noises may not be completely acceptable. Speech in noisy environments also causes problems for the LPC-10 voice algorithm. In addition, there is always a demand for something better. It is hoped that LPC-10's 2.4 kbps voice performance will be complemented by a very high quality speech coder operating at a higher data rate. This new coder is one of a number of candidate algorithms being considered for an upgraded version of the STU-3 in late 1989. This paper considers the problems of designing a code-excited linear predictive (CELP) coder that provides very high quality speech at a 4.8 kbps data rate and can be implemented on today's hardware.

  18. Improving the Salammbo code modelling and using it to better predict radiation belts dynamics

    NASA Astrophysics Data System (ADS)

    Maget, Vincent; Sicard-Piet, Angelica; Grimald, Sandrine Rochel; Boscher, Daniel

    2016-07-01

    In the framework of the FP7-SPACESTORM project, one objective is to improve the reliability of the model-based predictions of radiation belt dynamics first developed during the FP7-SPACECAST project. To this end, we have analyzed and improved the way simulations using the ONERA Salammbô code are performed, in particular by: better controlling the driving parameters of the simulation; improving the initialization of the simulation in order to be more accurate at most energies for L values between 4 and 6; and improving the physics of the model. For the first point, a statistical analysis of the accuracy of the Kp index was conducted. For the second, we based our method on a long-duration simulation in order to extract typical radiation belt states depending on solar wind stress and geomagnetic activity. For the third, we first improved separately the modelling of different processes acting in the radiation belts and then analyzed the global improvement obtained when simulating them together. We discuss all these points and the balance that must be struck between modeled processes to improve radiation belt modelling as a whole.

  19. Liner Optimization Studies Using the Ducted Fan Noise Prediction Code TBIEM3D

    NASA Technical Reports Server (NTRS)

    Dunn, M. H.; Farassat, F.

    1998-01-01

    In this paper we demonstrate the usefulness of the ducted fan noise prediction code TBIEM3D as a liner optimization design tool. Boundary conditions on the interior duct wall allow for hard walls or a locally reacting liner with axially segmented, circumferentially uniform impedance. Two liner optimization studies are considered in which farfield noise attenuation due to the presence of a liner is maximized by adjusting the liner impedance. In the first example, the dependence of optimal liner impedance on frequency and liner length is examined. Results show that both the optimal impedance and attenuation levels are significantly influenced by liner length and frequency. In the second example, TBIEM3D is used to compare radiated sound pressure levels between optimal and non-optimal liner cases at conditions designed to simulate take-off. It is shown that significant noise reduction is achieved for most of the sound field by selecting the optimal or near-optimal liner impedance. Our results also indicate that there is a relatively large region of the impedance plane over which optimal or near-optimal liner behavior is attainable. This is an important conclusion for the designer since there are variations in liner characteristics due to manufacturing imprecision.

  20. CoRAL: predicting non-coding RNAs from small RNA-sequencing data.

    PubMed

    Leung, Yuk Yee; Ryvkin, Paul; Ungar, Lyle H; Gregory, Brian D; Wang, Li-San

    2013-08-01

    The surprising observation that virtually the entire human genome is transcribed means we know little about the function of many emerging classes of RNAs, except their astounding diversity. Traditional RNA function prediction methods rely on sequence or alignment information, which are limited in their abilities to classify the various collections of non-coding RNAs (ncRNAs). To address this, we developed Classification of RNAs by Analysis of Length (CoRAL), a machine learning-based approach for classification of RNA molecules. CoRAL uses biologically interpretable features including fragment length and cleavage specificity to distinguish between different ncRNA populations. We evaluated CoRAL using genome-wide small RNA sequencing data sets from four human tissue types and were able to classify six different types of RNAs with ∼80% cross-validation accuracy. Analysis by CoRAL revealed that microRNAs, small nucleolar and transposon-derived RNAs are highly discernible and consistent across all human tissue types assessed, whereas long intergenic ncRNAs, small cytoplasmic RNAs and small nuclear RNAs show less consistent patterns. The ability to reliably annotate loci across tissue types demonstrates the potential of CoRAL to characterize ncRNAs using small RNA sequencing data in less well-characterized organisms. PMID:23700308
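
    In the same spirit, a toy nearest-centroid classifier over read-length histograms shows how fragment-length profiles alone can separate RNA classes. CoRAL uses richer features and a more capable learner; the class names and profiles below are invented for illustration.

```python
# Toy nearest-centroid classifier on fragment-length profiles (illustrative).

def centroid(profiles):
    """Mean length profile of a class."""
    n = len(profiles)
    return [sum(p[i] for p in profiles) / n for i in range(len(profiles[0]))]

def classify(profile, centroids):
    """Assign the class whose mean length profile is closest (squared L2)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda c: dist(profile, centroids[c]))

# Hypothetical read-length histograms (fraction of reads per length bin).
train = {
    "miRNA":  [[0.1, 0.8, 0.1], [0.2, 0.7, 0.1]],   # sharp ~22 nt peak
    "snoRNA": [[0.6, 0.2, 0.2], [0.5, 0.3, 0.2]],   # broader profile
}
cents = {k: centroid(v) for k, v in train.items()}
label = classify([0.15, 0.75, 0.10], cents)
```

    A locus with a sharp length peak lands in the miRNA-like class, mirroring the abstract's point that length alone is already highly discriminative for some ncRNA types.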

  1. Temporal integration of multisensory stimuli in autism spectrum disorder: a predictive coding perspective.

    PubMed

    Chan, Jason S; Langer, Anne; Kaiser, Jochen

    2016-08-01

    Recently, a growing number of studies have examined the role of multisensory temporal integration in people with autism spectrum disorder (ASD). Some studies have used temporal order judgments or simultaneity judgments to examine the temporal binding window, while others have employed multisensory illusions, such as the sound-induced flash illusion (SiFi). The SiFi is an illusion created by presenting two beeps along with one flash. Participants perceive two flashes if the stimulus-onset asynchrony (SOA) between the two flashes is brief. The temporal binding window can be measured by modulating the SOA between the beeps. Each of these tasks has been used to compare the temporal binding window in people with ASD and typically developing individuals; however, the results have been mixed. While temporal order and simultaneity judgment tasks have shown little temporal binding window differences between groups, studies using the SiFi have found a wider temporal binding window in ASD compared to controls. In this paper, we discuss these seemingly contradictory findings and suggest that predictive coding may be able to explain the differences between these tasks. PMID:27324803

  2. The effect of LPC (Linear Predictive Coding) processing on the recognition of unfamiliar speakers

    NASA Astrophysics Data System (ADS)

    Schmidt-Nielsen, A.; Stern, K. R.

    1985-09-01

    The effect of narrowband digital processing, using a linear predictive coding (LPC) algorithm at 2400 bits/s, on the recognition of previously unfamiliar speakers was investigated. Three sets of five speakers each (two sets of males differing in rated voice distinctiveness and one set of females) were tested for speaker recognition in two separate experiments using a familiarization-test procedure. In the first experiment three groups of listeners each heard a single set of speakers in both voice processing conditions, and in the second two groups of listeners each heard all three sets of speakers in a single voice processing condition. There were significant differences among speaker sets both with and without LPC processing, with the low-distinctiveness males generally more poorly recognized than the other groups. There was also an interaction of speaker set and voice processing condition; the low-distinctiveness males were no less recognizable over LPC than they were unprocessed, and one speaker in particular was actually better recognized over LPC. Although it seems that on the whole LPC processing reduces speaker recognition, the reverse may be the case for some speakers in some contexts. This suggests that one should be cautious about comparing speaker recognition for different voice systems on the basis of a single set of speakers. It also presents a serious obstacle to the development of a reliable standardized test of speaker recognizability.
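
    The analysis stage of an LPC coder computes predictor coefficients from frame autocorrelations via the Levinson-Durbin recursion. Below is a textbook sketch of that stage; the frame (a pure sinusoid), frame length, and order are chosen only so the result is easy to check, and are not the 2400 bit/s LPC configuration used in the study.

```python
import math

def autocorr(frame, order):
    """Short-term autocorrelations r[0..order] of one analysis frame."""
    return [sum(frame[n] * frame[n - k] for n in range(k, len(frame)))
            for k in range(order + 1)]

def levinson_durbin(r, order):
    """Solve the normal equations for LPC coefficients a[1..order]
    (prediction x[n] ~ sum_j a[j] * x[n-j]) and return (coeffs, error)."""
    a = [0.0] * (order + 1)
    e = r[0]
    for i in range(1, order + 1):
        acc = r[i] - sum(a[j] * r[i - j] for j in range(1, i))
        k = acc / e                      # reflection coefficient
        new_a = a[:]
        new_a[i] = k
        for j in range(1, i):
            new_a[j] = a[j] - k * a[i - j]
        a = new_a
        e *= (1.0 - k * k)               # remaining prediction error power
    return a[1:], e

# A pure sinusoid obeys x[n] = 2*cos(w)*x[n-1] - x[n-2] exactly,
# so an order-2 LPC fit should recover those coefficients.
frame = [math.sin(0.3 * n) for n in range(2000)]
coeffs, err = levinson_durbin(autocorr(frame, 2), 2)
```

    A real vocoder runs this per frame (with windowing and a higher order) and transmits the quantized coefficients plus excitation parameters, which is where the speaker-specific detail discussed in the study is lost.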

  3. Prediction of ultrasonic pulse velocity for enhanced peat bricks using adaptive neuro-fuzzy methodology.

    PubMed

    Motamedi, Shervin; Roy, Chandrabhushan; Shamshirband, Shahaboddin; Hashim, Roslan; Petković, Dalibor; Song, Ki-Il

    2015-08-01

    Ultrasonic pulse velocity is affected by defects in material structure. This study applied soft computing techniques to predict the ultrasonic pulse velocity of various peat and cement content mixtures over several curing periods. First, this investigation constructed a process to simulate the ultrasonic pulse velocity with an adaptive neuro-fuzzy inference system (ANFIS). An ANFIS network was then developed whose input and output layers consisted of four and one neurons, respectively. The four inputs were cement, peat and sand content (%) and curing period (days). The simulation results showed efficient performance of the proposed system. The ANFIS and experimental results were compared through the coefficient of determination and root-mean-square error. In conclusion, use of an ANFIS network enhances the prediction of strength. The simulation results confirmed the effectiveness of the suggested strategies. PMID:25957464
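
    The forward pass of a first-order Sugeno fuzzy system, the model family that ANFIS tunes, can be sketched with two Gaussian-membership rules. The rule parameters and the single-input setup below are invented for illustration, not fitted to the peat-brick data.

```python
import math

def gauss(x, c, s):
    """Gaussian membership degree of x for a fuzzy set centered at c."""
    return math.exp(-((x - c) ** 2) / (2 * s * s))

def sugeno_predict(x, rules):
    """First-order Sugeno inference: weight each rule's linear output
    a*x + b by its membership degree and take the weighted average.
    ANFIS learns (c, s, a, b) by hybrid gradient/least-squares training."""
    ws = [gauss(x, c, s) for c, s, _, _ in rules]
    ys = [a * x + b for _, _, a, b in rules]
    return sum(w * y for w, y in zip(ws, ys)) / sum(ws)

# Two illustrative rules on cement content (%): "low" and "high".
rules = [
    (10.0, 5.0, 20.0, 900.0),    # low cement  -> slower pulse velocity
    (30.0, 5.0, 35.0, 1200.0),   # high cement -> faster pulse velocity
]
v = sugeno_predict(20.0, rules)
```

    Midway between the two rule centers, both rules fire equally and the output is the plain average of their linear consequents; elsewhere the nearer rule dominates, giving the smooth interpolation that makes ANFIS an effective regressor.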

  4. A Predictive Approach to Nonparametric Inference for Adaptive Sequential Sampling of Psychophysical Experiments

    PubMed Central

    Benner, Philipp; Elze, Tobias

    2012-01-01

    We present a predictive account on adaptive sequential sampling of stimulus-response relations in psychophysical experiments. Our discussion applies to experimental situations with ordinal stimuli when there is only weak structural knowledge available such that parametric modeling is no option. By introducing a certain form of partial exchangeability, we successively develop a hierarchical Bayesian model based on a mixture of Pólya urn processes. Suitable utility measures permit us to optimize the overall experimental sampling process. We provide several measures that are either based on simple count statistics or more elaborate information theoretic quantities. The actual computation of information theoretic utilities often turns out to be infeasible. This is not the case with our sampling method, which relies on an efficient algorithm to compute exact solutions of our posterior predictions and utility measures. Finally, we demonstrate the advantages of our framework on a hypothetical sampling problem. PMID:22822269

  5. An Adaptive Data Gathering Scheme for Multi-Hop Wireless Sensor Networks Based on Compressed Sensing and Network Coding.

    PubMed

    Yin, Jun; Yang, Yuwang; Wang, Lei

    2016-01-01

    Joint design of compressed sensing (CS) and network coding (NC) has been demonstrated to provide a new data gathering paradigm for multi-hop wireless sensor networks (WSNs). By exploiting the correlation of the network sensed data, a variety of data gathering schemes based on NC and CS (Compressed Data Gathering-CDG) have been proposed. However, these schemes assume that the sparsity of the network sensed data is constant and that its value is known before each data gathering epoch starts; they thus ignore the variation of the data observed by WSNs deployed in practical circumstances. In this paper, we present a complete design of a feedback CDG scheme in which the sink node adaptively queries the nodes of interest to acquire an appropriate number of measurements. The adaptive measurement-formation procedure and its termination rules are proposed and analyzed in detail. Moreover, in order to minimize the number of overall transmissions in the formation of each measurement, we have developed an NP-complete model (Maximum Leaf Nodes Minimum Steiner Nodes-MLMS) and realized a scalable greedy algorithm to solve the problem. Experimental results show that the proposed measurement-formation method outperforms previous schemes, and experiments on datasets from both ocean temperature sensing and a practical network deployment also prove the effectiveness of the proposed feedback CDG scheme. PMID:27043574
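
    The adaptive measurement-formation loop can be caricatured at the sink as: query a new batch of random-projection measurements, re-estimate the sensed field, and stop once successive estimates agree. A minimum-norm least-squares decoder stands in for a real CS solver here, and the field size, batch size, and tolerance are all illustrative.

```python
import numpy as np

def adaptive_gather(signal, batch=5, tol=1e-3, rng=None):
    """Sink-side sketch of adaptive measurement formation: accumulate
    batches of random-projection measurements until two successive
    reconstructions agree (termination rule), then stop querying."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n = len(signal)
    Phi = np.empty((0, n))
    y = np.empty(0)
    prev = np.zeros(n)
    while Phi.shape[0] < n:
        A = rng.normal(size=(batch, n))          # one more query round
        Phi = np.vstack([Phi, A])
        y = np.concatenate([y, A @ signal])
        est, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        if np.linalg.norm(est - prev) < tol:     # estimates stabilized
            return est, Phi.shape[0]
        prev = est
    return prev, Phi.shape[0]

x = np.zeros(20)
x[[3, 11]] = [1.0, -2.0]                         # sparse sensed field
est, m = adaptive_gather(x)
```

    With a genuine sparse decoder the loop would typically terminate after far fewer than n measurements; the point of the sketch is only the feedback structure, where the number of queried measurements is decided at run time rather than fixed in advance.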

  7. Predictive analytics of environmental adaptability in multi-omic network models

    PubMed Central

    Angione, Claudio; Lió, Pietro

    2015-01-01

    Bacterial phenotypic traits and lifestyles in response to diverse environmental conditions depend on changes in the internal molecular environment. However, predicting bacterial adaptability is still difficult outside of laboratory controlled conditions. Many molecular levels can contribute to the adaptation to a changing environment: pathway structure, codon usage, metabolism. To measure adaptability to changing environmental conditions and over time, we develop a multi-omic model of Escherichia coli that accounts for metabolism, gene expression and codon usage at both transcription and translation levels. After the integration of multiple omics into the model, we propose a multiobjective optimization algorithm to find the allowable and optimal metabolic phenotypes through concurrent maximization or minimization of multiple metabolic markers. In the condition space, we propose Pareto hypervolume and spectral analysis as estimators of short term multi-omic (transcriptomic and metabolic) evolution, thus enabling comparative analysis of metabolic conditions. We therefore compare, evaluate and cluster different experimental conditions, models and bacterial strains according to their metabolic response in a multidimensional objective space, rather than in the original space of microarray data. We finally validate our methods on a phenomics dataset of growth conditions. Our framework, named METRADE, is freely available as a MATLAB toolbox. PMID:26482106
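
    The Pareto hypervolume used to compare conditions can be computed directly in two objectives; the sketch below assumes maximization of both objectives relative to a reference point, a deliberate simplification of METRADE's multi-objective setting.

```python
def hypervolume_2d(points, ref):
    """Area dominated by a 2-D Pareto front (maximization) relative to a
    reference point `ref`: keep the non-dominated points, then sum the
    rectangular slices between consecutive front points."""
    front = []
    for p in sorted(points, reverse=True):       # descending 1st objective
        if not front or p[1] > front[-1][1]:     # keep only non-dominated
            front.append(p)
    area, prev_y = 0.0, ref[1]
    for x, y in front:
        area += (x - ref[0]) * (y - prev_y)
        prev_y = y
    return area
```

    A larger hypervolume means the condition's attainable phenotypes dominate more of the objective space, which is how such an indicator lets different conditions, models, or strains be ranked on one scalar.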

  9. Predicting organismal vulnerability to climate warming: roles of behaviour, physiology and adaptation

    PubMed Central

    Huey, Raymond B.; Kearney, Michael R.; Krockenberger, Andrew; Holtum, Joseph A. M.; Jess, Mellissa; Williams, Stephen E.

    2012-01-01

    A recently developed integrative framework proposes that the vulnerability of a species to environmental change depends on the species' exposure and sensitivity to environmental change, its resilience to perturbations and its potential to adapt to change. These vulnerability criteria require behavioural, physiological and genetic data. With this information in hand, biologists can predict organisms most at risk from environmental change. Biologists and managers can then target organisms and habitats most at risk. Unfortunately, the required data (e.g. optimal physiological temperatures) are rarely available. Here, we evaluate the reliability of potential proxies (e.g. critical temperatures) that are often available for some groups. Several proxies for ectotherms are promising, but analogous ones for endotherms are lacking. We also develop a simple graphical model of how behavioural thermoregulation, acclimation and adaptation may interact to influence vulnerability over time. After considering this model together with the proxies available for physiological sensitivity to climate change, we conclude that ectotherms sharing vulnerability traits seem concentrated in lowland tropical forests. Their vulnerability may be exacerbated by negative biotic interactions. Whether tropical forest (or other) species can adapt to warming environments is unclear, as genetic and selective data are scant. Nevertheless, the prospects for tropical forest ectotherms appear grim. PMID:22566674

  10. Presence of Motor-Intentional Aiming Deficit Predicts Functional Improvement of Spatial Neglect with Prism Adaptation

    PubMed Central

    Goedert, Kelly M.; Chen, Peii; Boston, Raymond C.; Foundas, Anne L.; Barrett, A. M.

    2013-01-01

    Spatial neglect is a debilitating disorder for which there is no agreed upon course of rehabilitation. The lack of consensus on treatment may result from systematic differences in the syndrome's characteristics, with spatial cognitive deficits potentially affecting perceptual-attentional Where or motor-intentional Aiming spatial processing. Heterogeneity of response to treatment might be explained by differing treatment impact on these dissociated deficits: prism adaptation, for example, might reduce Aiming deficits without affecting Where spatial deficits. Here, we tested the hypothesis that classifying patients by their profile of Where-vs-Aiming spatial deficit would predict response to prism adaptation, and specifically that patients with Aiming bias would have better recovery than those with isolated Where bias. We classified the spatial errors of 24 sub-acute right-stroke survivors with left spatial neglect as: 1) isolated Where bias, 2) isolated Aiming bias, or 3) both. Participants then completed two weeks of prism adaptation treatment. They also completed the Behavioral Inattention Test (BIT) and Catherine Bergego Scale (CBS) tests of neglect recovery weekly for six weeks. As hypothesized, participants with only Aiming deficits improved on the CBS, whereas those with only Where deficits did not improve. Participants with both deficits demonstrated intermediate improvement. These results support behavioral classification of spatial neglect patients as a potentially valuable tool for assigning targeted, effective early rehabilitation. PMID:24376064

  11. Hybrid Model Predictive Control for Sequential Decision Policies in Adaptive Behavioral Interventions

    PubMed Central

    Dong, Yuwen; Deshpande, Sunil; Rivera, Daniel E.; Downs, Danielle S.; Savage, Jennifer S.

    2015-01-01

    Control engineering offers a systematic and efficient method to optimize the effectiveness of individually tailored treatment and prevention policies known as adaptive or “just-in-time” behavioral interventions. The nature of these interventions requires assigning dosages at categorical levels, which has been addressed in prior work using Mixed Logical Dynamical (MLD)-based hybrid model predictive control (HMPC) schemes. However, certain requirements of adaptive behavioral interventions that involve sequential decision making have not been comprehensively explored in the literature. This paper presents an extension of the traditional MLD framework for HMPC by representing the requirements of sequential decision policies as mixed-integer linear constraints. This is accomplished with user-specified dosage sequence tables, manipulation of one input at a time, and a switching time strategy for assigning dosages at time intervals less frequent than the measurement sampling interval. A model developed for a gestational weight gain (GWG) intervention is used to illustrate the generation of these sequential decision policies and their effectiveness for implementing adaptive behavioral interventions involving multiple components. PMID:25635157

  12. Vortical Flow Prediction using an Adaptive Unstructured Grid Method. Chapter 11

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2009-01-01

    A computational fluid dynamics (CFD) method has been employed to compute vortical flows around slender wing/body configurations. The emphasis of the paper is on the effectiveness of an adaptive grid procedure in "capturing" concentrated vortices generated at sharp edges or flow separation lines of lifting surfaces flying at high angles of attack. The method is based on a tetrahedral unstructured grid technology developed at the NASA Langley Research Center. Two steady-state, subsonic, inviscid and Navier-Stokes flow test cases are presented to demonstrate the applicability of the method for solving vortical flow problems. The first test case concerns vortex flow over a simple 65° delta wing with different values of leading-edge radius. Although the geometry is quite simple, it poses a challenging problem for computing vortices originating from blunt leading edges. The second case is that of a more complex fighter configuration. The superiority of the adapted solutions in capturing the vortex flow structure over the conventional unadapted results is demonstrated by comparisons with the wind-tunnel experimental data. The study shows that numerical prediction of vortical flows is highly sensitive to the local grid resolution and that the implementation of grid adaptation is essential when applying CFD methods to such complicated flow problems.

  13. A Risk-based Model Predictive Control Approach to Adaptive Interventions in Behavioral Health.

    PubMed

    Zafra-Cabeza, Ascensión; Rivera, Daniel E; Collins, Linda M; Ridao, Miguel A; Camacho, Eduardo F

    2011-07-01

    This paper examines how control engineering and risk management techniques can be applied in the field of behavioral health through their use in the design and implementation of adaptive behavioral interventions. Adaptive interventions are gaining increasing acceptance as a means to improve prevention and treatment of chronic, relapsing disorders, such as abuse of alcohol, tobacco, and other drugs, mental illness, and obesity. A risk-based Model Predictive Control (MPC) algorithm is developed for a hypothetical intervention inspired by Fast Track, a real-life program whose long-term goal is the prevention of conduct disorders in at-risk children. The MPC-based algorithm decides on the appropriate frequency of counselor home visits, mentoring sessions, and the availability of after-school recreation activities by relying on a model that includes identifiable risks, their costs, and the cost/benefit assessment of mitigating actions. MPC is particularly suited for the problem because of its constraint-handling capabilities, and its ability to scale to interventions involving multiple tailoring variables. By systematically accounting for risks and adapting treatment components over time, an MPC approach as described in this paper can increase intervention effectiveness and adherence while reducing waste, resulting in advantages over conventional fixed treatment. A series of simulations are conducted under varying conditions to demonstrate the effectiveness of the algorithm. PMID:21643450

  14. Motion-vector-based adaptive quantization in MPEG-4 fine granular scalable coding

    NASA Astrophysics Data System (ADS)

    Yang, Shuping; Lin, Xinggang; Wang, Guijin

    2003-05-01

    The selective enhancement mechanism of Fine-Granular-Scalability (FGS) in MPEG-4 is able to enhance specific objects under bandwidth variation. A novel technique is proposed for self-adaptive enhancement of regions of interest based on Motion Vectors (MVs) of the base layer, suitable for video sequences with a still background in which only the moving objects are of interest, such as news broadcasting, video surveillance, Internet education, etc. Motion vectors generated during base-layer encoding are obtained and analyzed. A Gaussian model is introduced to describe non-moving macroblocks, which may have non-zero MVs caused by random noise or luminance variation; the MVs of these macroblocks are set to zero to prevent them from being enhanced. A region-growth segmentation algorithm based on MV values is exploited to separate foreground from background. Post-processing is needed to reduce the influence of burst noise so that only the moving regions of interest remain. Applying the result in selective enhancement during enhancement-layer encoding significantly improves the visual quality of the regions of interest in such videos transmitted at different bit rates in our experiments.
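
    The noise-suppression step above can be approximated with a simple magnitude threshold on the base-layer motion vectors. A sketch assuming NumPy; the mean + k·std rule is an illustrative stand-in for the paper's fitted Gaussian model:

```python
import numpy as np

def suppress_noise_mvs(mv, k=2.0):
    """Zero out macroblock motion vectors consistent with background noise.

    mv : (H, W, 2) array of per-macroblock motion vectors from the base layer.
    Blocks whose MV magnitude stays below mean + k*std are treated as still
    background (random noise / luminance flicker) and their MVs are zeroed,
    so they will not be selected for enhancement."""
    mag = np.hypot(mv[..., 0], mv[..., 1])
    threshold = mag.mean() + k * mag.std()
    moving = mag > threshold                     # likely genuine motion
    return np.where(moving[..., None], mv, 0.0), moving
```

    The boolean `moving` mask is what a region-growth pass would then expand into connected foreground regions.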

  15. Optimal Multitrial Prediction Combination and Subject-Specific Adaptation for Minimal Training Brain Switch Designs.

    PubMed

    Spyrou, Loukianos; Blokland, Yvonne; Farquhar, Jason; Bruhn, Jorgen

    2016-06-01

    Brain-Computer Interface (BCI) systems are traditionally designed by taking into account user-specific data to enable practical use. More recently, subject-independent (SI) classification algorithms have been developed which bypass the subject-specific adaptation and enable rapid use of the system. A brain switch is a particular BCI system where the system is required to distinguish between two separate mental tasks corresponding to the on-off commands of a switch. Such applications require a low false positive rate (FPR) while having an acceptable response time (RT) until the switch is activated. In this work, we develop a methodology that produces optimal brain switch behavior through subject-specific (SS) adaptation of: a) a multitrial prediction combination model and b) an SI classification model. We propose a statistical model of combining classifier predictions that enables optimal FPR calibration through a short calibration session. We trained an SI classifier on a training synchronous dataset and tested our method on separate holdout synchronous and asynchronous brain switch experiments. Although our SI model obtained similar performance between training and holdout datasets (86% and 85% for the synchronous, and 69% and 66% for the asynchronous), the between-subject FPR and TPR variability was high (up to 62%). The short calibration session was then employed to alleviate that problem and provide decision thresholds that achieve, when possible, a target FPR = 1% with good accuracy for both datasets. PMID:26529768
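
    The calibration idea above, choosing a decision threshold from a short session so that the false positive rate meets a target, can be sketched as an empirical quantile over negative-class ("switch off") scores. The statistical combination model in the paper is more elaborate; names below are illustrative:

```python
import numpy as np

def calibrate_threshold(neg_scores, target_fpr=0.01):
    """Pick a decision threshold so that at most `target_fpr` of the
    negative-class calibration scores fall above it."""
    s = np.sort(np.asarray(neg_scores))
    idx = min(int(np.ceil((1.0 - target_fpr) * len(s))), len(s) - 1)
    return s[idx]

def combine_trials(scores):
    """Multitrial prediction combination, here a plain average of the
    per-trial classifier scores before thresholding."""
    return float(np.mean(scores))
```

    Averaging several consecutive trial scores before comparing against the calibrated threshold trades response time for a lower false positive rate, which is the core tension in brain switch design.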

  16. Self-Adaptive MOEA Feature Selection for Classification of Bankruptcy Prediction Data

    PubMed Central

    Gaspar-Cunha, A.; Recio, G.; Costa, L.; Estébanez, C.

    2014-01-01

    Bankruptcy prediction is a vast area of finance and accounting whose importance lies in its relevance to creditors and investors in evaluating the likelihood of a company going bankrupt. As companies become complex, they develop sophisticated schemes to hide their real situation. In turn, estimating the credit risks associated with counterparts, or predicting bankruptcy, becomes harder. Evolutionary algorithms have shown themselves to be an excellent tool for dealing with complex problems in finance and economics where a large number of irrelevant features are involved. This paper provides a methodology for feature selection in the classification of bankruptcy data sets using an evolutionary multiobjective approach that simultaneously minimises the number of features and maximises the classifier quality measure (e.g., accuracy). The proposed methodology makes use of self-adaptation by applying the feature selection algorithm while simultaneously optimising the parameters of the classifier used. The methodology was applied to four different sets of data. The obtained results showed the utility of using the self-adaptation of the classifier. PMID:24707201
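
    The two objectives above, minimising feature count while maximising classifier accuracy, define a Pareto front over candidate feature subsets. A sketch of the non-dominance filter at the core of any such multiobjective approach (the evolutionary search itself is omitted):

```python
def pareto_front(candidates):
    """Non-dominated (n_features, accuracy) pairs: fewer features and
    higher accuracy are both preferred. Returns the front sorted by
    feature count."""
    front = []
    for nf, acc in candidates:
        dominated = any(nf2 <= nf and acc2 >= acc and (nf2, acc2) != (nf, acc)
                        for nf2, acc2 in candidates)
        if not dominated:
            front.append((nf, acc))
    return sorted(front)
```

    An MOEA evolves feature masks (and, with self-adaptation, classifier parameters) and keeps exactly this kind of front as its archive of trade-off solutions.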

  17. A predictive model to inform adaptive management of double-crested cormorants and fisheries in Michigan

    USGS Publications Warehouse

    Tsehaye, Iyob; Jones, Michael L.; Irwin, Brian J.; Fielder, David G.; Breck, James E.; Luukkonen, David R.

    2015-01-01

    The proliferation of double-crested cormorants (DCCOs; Phalacrocorax auritus) in North America has raised concerns over their potential negative impacts on game, cultured and forage fishes, island and terrestrial resources, and other colonial water birds, leading to increased public demands to reduce their abundance. By combining fish surplus production and bird functional feeding response models, we developed a deterministic predictive model representing bird–fish interactions to inform an adaptive management process for the control of DCCOs in multiple colonies in Michigan. Comparisons of model predictions with observations of changes in DCCO numbers under management measures implemented from 2004 to 2012 suggested that our relatively simple model was able to accurately reconstruct past DCCO population dynamics. These comparisons helped discriminate among alternative parameterizations of demographic processes that were poorly known, especially site fidelity. Using sensitivity analysis, we also identified remaining critical uncertainties (mainly in the spatial distributions of fish vs. DCCO feeding areas) that can be used to prioritize future research and monitoring needs. Model forecasts suggested that continuation of existing control efforts would be sufficient to achieve long-term DCCO control targets in Michigan and that DCCO control may be necessary to achieve management goals for some DCCO-impacted fisheries in the state. Finally, our model can be extended by accounting for parametric or ecological uncertainty and including more complex assumptions on DCCO–fish interactions as part of the adaptive management process.
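
    The model structure described, fish surplus production coupled to a bird functional feeding response, can be sketched as a one-year update rule. The Holling type-II response and all parameter values below are illustrative placeholders, not those fitted for the Michigan colonies:

```python
def step(biomass, birds, r=0.3, K=1000.0, a=0.002, h=0.1):
    """One year of a coupled bird-fish model: logistic surplus production
    for the fish stock minus a Holling type-II functional response to
    cormorant predation. `a` is attack rate, `h` handling time."""
    consumption = birds * a * biomass / (1.0 + a * h * biomass)
    growth = r * biomass * (1.0 - biomass / K)
    return max(biomass + growth - consumption, 0.0)
```

    Iterating this rule under alternative DCCO control scenarios is the kind of forward simulation an adaptive management process uses to compare predicted fishery outcomes against monitoring data.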

  18. Self-adaptive MOEA feature selection for classification of bankruptcy prediction data.

    PubMed

    Gaspar-Cunha, A; Recio, G; Costa, L; Estébanez, C

    2014-01-01

    Bankruptcy prediction is a vast area of finance and accounting whose importance lies in the relevance for creditors and investors in evaluating the likelihood of getting into bankrupt. As companies become complex, they develop sophisticated schemes to hide their real situation. In turn, making an estimation of the credit risks associated with counterparts or predicting bankruptcy becomes harder. Evolutionary algorithms have shown to be an excellent tool to deal with complex problems in finances and economics where a large number of irrelevant features are involved. This paper provides a methodology for feature selection in classification of bankruptcy data sets using an evolutionary multiobjective approach that simultaneously minimise the number of features and maximise the classifier quality measure (e.g., accuracy). The proposed methodology makes use of self-adaptation by applying the feature selection algorithm while simultaneously optimising the parameters of the classifier used. The methodology was applied to four different sets of data. The obtained results showed the utility of using the self-adaptation of the classifier. PMID:24707201

  19. Predicting neutron diffusion eigenvalues with a query-based adaptive neural architecture.

    PubMed

    Lysenko, M G; Wong, H I; Maldonado, G I

    1999-01-01

    A query-based approach for adaptively retraining and restructuring a two-hidden-layer artificial neural network (ANN) has been developed for the speedy prediction of the fundamental mode eigenvalue of the neutron diffusion equation, a standard nuclear reactor core design calculation which normally requires the iterative solution of a large-scale system of nonlinear partial differential equations (PDE's). The approach developed focuses primarily upon the adaptive selection of training and cross-validation data and on artificial neural-network (ANN) architecture adjustments, with the objective of improving the accuracy and generalization properties of ANN-based neutron diffusion eigenvalue predictions. For illustration, the performance of a "bare bones" feedforward multilayer perceptron (MLP) is upgraded through a variety of techniques; namely, nonrandom initial training set selection, adjoint function input weighting, teacher-student membership and equivalence queries for generation of appropriate training data, and a dynamic node architecture (DNA) implementation. The global methodology is flexible in that it can "wrap around" any specific training algorithm selected for the static calculations (i.e., training iterations with a fixed training set and architecture). Finally, the improvements obtained are carefully contrasted against past works reported in the literature. PMID:18252578
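
    The query-based retraining idea above hinges on selecting new training data where the current network performs worst. A minimal sketch of that selection step, assuming NumPy; the paper's teacher-student and equivalence queries are considerably richer than this:

```python
import numpy as np

def query_samples(model_predict, pool_x, pool_y, n_queries):
    """Query-based training-set growth: return the indices of the candidate
    inputs where the current network errs most, worst first. These samples
    would be added to the training set before the next retraining pass."""
    err = np.abs(model_predict(pool_x) - pool_y)
    return np.argsort(err)[-n_queries:][::-1]
```

    Alternating this selection with retraining (and, in the paper, with dynamic node-architecture adjustments) concentrates training effort on the regions of input space the network generalizes worst.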

  20. The biology of developmental plasticity and the Predictive Adaptive Response hypothesis

    PubMed Central

    Bateson, Patrick; Gluckman, Peter; Hanson, Mark

    2014-01-01

    Many forms of developmental plasticity have been observed and these are usually beneficial to the organism. The Predictive Adaptive Response (PAR) hypothesis refers to a form of developmental plasticity in which cues received in early life influence the development of a phenotype that is normally adapted to the environmental conditions of later life. When the predicted and actual environments differ, the mismatch between the individual's phenotype and the conditions in which it finds itself can have adverse consequences for Darwinian fitness and, later, for health. Numerous examples exist of the long-term effects of cues indicating a threatening environment affecting the subsequent phenotype of the individual organism. Other examples consist of the long-term effects of variations in environment within a normal range, particularly in the individual's nutritional environment. In mammals the cues to developing offspring are often provided by the mother's plane of nutrition, her body composition or stress levels. This hypothetical effect in humans is thought to be important by some scientists and controversial by others. In resolving the conflict, distinctions should be drawn between PARs induced by normative variations in the developmental environment and the ill effects on development of extremes in environment such as a very poor or very rich nutritional environment. Tests to distinguish between different developmental processes impacting on adult characteristics are proposed. Many of the mechanisms underlying developmental plasticity involve molecular epigenetic processes, and their elucidation in the context of PARs and more widely has implications for the revision of classical evolutionary theory. PMID:24882817

  1. Simulation of transport in the ignited ITER with 1.5-D predictive code

    NASA Astrophysics Data System (ADS)

    Becker, G.

    1995-01-01

    The confinement in the bulk and scrape-off layer plasmas of the ITER EDA and CDA is investigated with special versions of the 1.5-D BALDUR predictive transport code for the case of peaked density profiles (Cu=1.0). The code self-consistently computes 2-D equilibria and solves 1-D transport equations with empirical transport coefficients for the ohmic, L and ELMy H mode regimes. Self-sustained steady state thermonuclear burn is demonstrated for up to 500 s. It is shown to be compatible with the strong radiation losses for divertor heat load reduction caused by the seeded impurities iron, neon and argon. The corresponding global and local energy and particle transport are presented. The required radiation corrected energy confinement times of the EDA and CDA are found to be close to 4 s, which is attainable according to the ITER ELMy H mode scalings. In the reference cases, the steady state helium fraction is 7%, which already causes significant dilution of the DT fuel. The fractions of iron, neon and argon needed for the prescribed radiative power loss are given. It is shown that high radiative losses from the confinement zone, mainly by bremsstrahlung, cannot be avoided. The radiation profiles of iron and argon are found to be the same, with two thirds of the total radiation being emitted from closed flux surfaces. Fuel dilution due to iron and argon is small. The neon radiation is more peripheral, since only half of the total radiative power is lost within the separatrix. But neon is found to cause high fuel dilution. The combined dilution effect by helium and neon conflicts with burn control, self-sustained burn and divertor power reduction. Raising the helium fraction above 10% leads to the same difficulties owing to fuel dilution. The high helium levels of the present EDA design are thus unacceptable. For the reference EDA case, the self-consistent electron density and temperature at the separatrix are 5.6 x 10^19 m^-3 and 130 eV, respectively. The bootstrap

  2. Adaptive network based on fuzzy inference system for equilibrated urea concentration prediction.

    PubMed

    Azar, Ahmad Taher

    2013-09-01

    Post-dialysis urea rebound (PDUR) has been attributed mostly to redistribution of urea from different compartments, which is determined by variations in regional blood flows and transcellular urea mass transfer coefficients. PDUR occurs after 30-90 min of short or standard hemodialysis (HD) sessions and after 60 min in long 8-h HD sessions, which is inconvenient. This paper presents an adaptive network based on fuzzy inference system (ANFIS) for predicting intradialytic (Cint) and post-dialysis urea concentrations (Cpost) in order to predict the equilibrated (Ceq) urea concentrations without any blood sampling from dialysis patients. The accuracy of the developed system was prospectively compared with other traditional methods for predicting equilibrated urea (Ceq), post-dialysis urea rebound (PDUR) and equilibrated dialysis dose (eKt/V). This comparison is based on root mean square error (RMSE), normalized root mean square error (NRMSE), and mean absolute percentage error (MAPE). The ANFIS predictor for Ceq achieved mean RMSE values of 0.3654 and 0.4920 for training and testing, respectively. The statistical analysis demonstrated that there is no statistically significant difference between the predicted and the measured values. The MAPE and RMSE percentages for the testing phase are 0.63% and 0.96%, respectively. PMID:23806679
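
    The comparison metrics named above are standard. A sketch of their usual definitions, assuming NumPy (note the paper may normalize NRMSE differently, e.g., by the mean rather than the observed range):

```python
import numpy as np

def rmse(y, yhat):
    """Root mean square error."""
    return float(np.sqrt(np.mean((np.asarray(y) - np.asarray(yhat)) ** 2)))

def nrmse(y, yhat):
    """RMSE normalized by the observed range of y."""
    y = np.asarray(y, dtype=float)
    return rmse(y, yhat) / (y.max() - y.min())

def mape(y, yhat):
    """Mean absolute percentage error, in percent."""
    y, yhat = np.asarray(y, dtype=float), np.asarray(yhat, dtype=float)
    return float(np.mean(np.abs((y - yhat) / y)) * 100.0)
```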

  3. Prediction of antimicrobial peptides based on the adaptive neuro-fuzzy inference system application.

    PubMed

    Fernandes, Fabiano C; Rigden, Daniel J; Franco, Octavio L

    2012-01-01

    Antimicrobial peptides (AMPs) are widely distributed defense molecules and represent a promising alternative for solving the problem of antibiotic resistance. Nevertheless, the experimental time required to screen putative AMPs makes computational simulations based on peptide sequence analysis and/or molecular modeling extremely attractive. Artificial intelligence methods acting as simulation and prediction tools are of great importance in helping to efficiently discover and design novel AMPs. In the present study, state-of-the-art published outcomes using different prediction methods and databases were compared to an adaptive neuro-fuzzy inference system (ANFIS) model. Data from our study showed that ANFIS obtained an accuracy of 96.7% and a Matthew's Correlation Coefficient (MCC) of0.936, which proved it to be an efficient model for pattern recognition in antimicrobial peptide prediction. Furthermore, a lower number of input parameters were needed for the ANFIS model, improving the speed and ease of prediction. In summary, due to the fuzzy nature ofAMP physicochemical properties, the ANFIS approach presented here can provide an efficient solution for screening putative AMP sequences and for exploration of properties characteristic of AMPs. PMID:23193592
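
    The Matthews Correlation Coefficient reported above is computed from the binary confusion matrix; unlike raw accuracy it stays informative on imbalanced screens. A minimal reference implementation:

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from binary confusion-matrix counts.
    Ranges from -1 (total disagreement) through 0 (chance) to +1 (perfect)."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```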

  4. Prediction of stochastic blade responses using measured wind-speed data as input to the FLAP code

    SciTech Connect

    Wright, A.D.; Thresher, R.W.

    1988-11-01

    Accurately predicting wind turbine blade loads and response is important in predicting the fatigue life of wind turbines. The necessity of including turbulent wind effects in structural dynamics models has long been recognized. At SERI, the structural dynamics model, or FLAP (Force and Loads Analysis Program), is being modified to include turbulent wind fluctuations in predicting rotor blade forces and moments. The objective of this paper is to show FLAP code predictions compared to measured blade loads, using actual anemometer array data and a curve-fitting routine to form series expansion coefficients as the turbulence input to FLAP. The predictions are performed for a three-bladed upwind field test turbine. An array of nine anemometers was located 0.8 rotor diameters (D) upwind of the turbine, and data from each anemometer are used in a least-squares curve-fitting routine to obtain a series expansion of the turbulence field over the rotor disk. Three 10-min data cases are used to compare FLAP predictions to measured results. Each case represents a different mean wind speed and turbulence intensity. The time series of coefficients in the expansion of the turbulent velocity field are input to the FLAP code. Time series of predicted flap-bending moments at two blade radial stations are obtained, and power spectra of the predictions are then compared to power spectra of the measured blade bending moments. Conclusions are then drawn about the FLAP code's ability to predict the blade loads for these three data cases. Recommendations for future work are also made. 9 refs., 12 figs., 4 tabs.

  5. Prediction of stochastic blade responses using measured wind-speed data as input to the FLAP code

    SciTech Connect

    Wright, A.D.; Thresher, R.W. )

    1990-11-01

    Accurately predicting wind turbine blade loads and response is important in predicting the fatigue life of wind turbines. The necessity of including turbulent wind effects in structural dynamics models has long been recognized. At SERI, the structural dynamics model, or FLAP (Force and Loads Analysis Program), is being modified to include turbulent wind fluctuations in predicting rotor blade forces and moments. The objective of this paper is to show FLAP code predictions compared to measured blade loads using actual anemometer array data and a curve-fitting routine to form series expansion coefficients as the turbulence input to FLAP. The predictions are performed for a three-bladed upwind field test turbine. An array of nine anemometers was located 0.8 rotor diameters (D) upwind of the turbine, and data from each anemometer are used in a least-squares curve-fitting routine to obtain a series expansion of the turbulence field over the rotor disk. Three 10-min data cases are used to compare FLAP predictions to measured results. Each case represents a different mean wind speed and turbulence intensity. The time series of coefficients in the expansion of the turbulent velocity field are input to the FLAP code. Time series of predicted flap-bending moments at two blade radial stations are obtained, and power spectra of the predictions are then compared to power spectra of the measured blade bending moments. Conclusions are then drawn about the FLAP code's ability to predict the blade loads for these three data cases. Recommendations for future work are also made.
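
    The least-squares step described above, fitting the nine instantaneous anemometer readings to a series expansion over the rotor disk, can be sketched with an ordinary polynomial basis. The basis FLAP actually uses is not specified here, so the one below is illustrative:

```python
import numpy as np

def expansion_coeffs(positions, speeds, order=1):
    """Least-squares fit of instantaneous anemometer readings to a low-order
    polynomial expansion u(y, z) over the rotor disk.

    positions : (N, 2) array of anemometer (y, z) coordinates
    speeds    : (N,) array of simultaneous wind-speed samples"""
    y, z = positions[:, 0], positions[:, 1]
    cols = [np.ones_like(y), y, z]            # basis: 1, y, z
    if order >= 2:
        cols += [y * z, y ** 2, z ** 2]       # optional quadratic terms
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, speeds, rcond=None)
    return coeffs
```

    Repeating the fit at every sample instant yields the time series of expansion coefficients that the papers feed into FLAP as the turbulence input.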

  6. Reconfigurable mask for adaptive coded aperture imaging (ACAI) based on an addressable MOEMS microshutter array

    NASA Astrophysics Data System (ADS)

    McNie, Mark E.; Combes, David J.; Smith, Gilbert W.; Price, Nicola; Ridley, Kevin D.; Brunson, Kevin M.; Lewis, Keith L.; Slinger, Chris W.; Rogers, Stanley

    2007-09-01

    Coded aperture imaging has been used for astronomical applications for several years. Typical implementations use a fixed mask pattern and are designed to operate in the X-Ray or gamma ray bands. More recent applications have emerged in the visible and infra red bands for low cost lens-less imaging systems. System studies have shown that considerable advantages in image resolution may accrue from the use of multiple different images of the same scene - requiring a reconfigurable mask. We report on work to develop a novel, reconfigurable mask based on micro-opto-electro-mechanical systems (MOEMS) technology employing interference effects to modulate incident light in the mid-IR band (3-5μm). This is achieved by tuning a large array of asymmetric Fabry-Perot cavities by applying an electrostatic force to adjust the gap between a moveable upper polysilicon mirror plate supported on suspensions and underlying fixed (electrode) layers on a silicon substrate. A key advantage of the modulator technology developed is that it is transmissive and high speed (e.g. 100kHz) - allowing simpler imaging system configurations. It is also realised using a modified standard polysilicon surface micromachining process (i.e. MUMPS-like) that is widely available and hence should have a low production cost in volume. We have developed designs capable of operating across the entire mid-IR band with peak transmissions approaching 100% and high contrast. By using a pixelated array of small mirrors, a large area device comprising individually addressable elements may be realised that allows reconfiguring of the whole mask at speeds in excess of video frame rates.
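
    The modulation principle above, tuning an asymmetric Fabry-Perot cavity by electrostatically changing the mirror gap, follows the Airy transmission function. A sketch for an idealized lossless cavity at normal incidence; the real asymmetric MOEMS cavity and its finesse differ in detail:

```python
import math

def fp_transmission(gap_um, wavelength_um, finesse_coeff=50.0):
    """Airy transmission of an ideal lossless Fabry-Perot cavity versus
    mirror gap (air gap, normal incidence). Peaks occur whenever the gap
    equals an integer number of half wavelengths."""
    delta = 4.0 * math.pi * gap_um / wavelength_um   # round-trip phase
    return 1.0 / (1.0 + finesse_coeff * math.sin(delta / 2.0) ** 2)
```

    Moving the gap by a quarter wavelength (here, under a micron in the 3-5 micron band) switches a pixel between near-full transmission and strong rejection, which is what makes per-pixel electrostatic tuning usable as a reconfigurable mask element.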

  7. Frequency-domain stress prediction algorithm for the LIFE2 fatigue analysis code

    SciTech Connect

    Sutherland, H.J.

    1992-01-01

    The LIFE2 computer code is a fatigue/fracture analysis code that is specialized to the analysis of wind turbine components. The numerical formulation of the code uses a series of cycle count matrices to describe the cyclic stress states imposed upon the turbine. However, many structural analysis techniques yield frequency-domain stress spectra, and a large body of experimental loads (stress) data is reported in the frequency domain. To permit the analysis of this class of data, a Fourier analysis module has been added to the code. The module transforms the frequency spectrum to an equivalent time series suitable for rainflow counting by other modules in the code. This paper describes the algorithms incorporated into the code and uses experimental data to illustrate their use. 10 refs., 11 figs.
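
    The transformation described, turning a frequency-domain stress spectrum into an equivalent time series for rainflow counting, is commonly implemented by assigning a random phase to each spectral bin and summing cosines. A sketch assuming NumPy; the one-sided PSD amplitude convention is an assumption, not taken from the LIFE2 documentation:

```python
import numpy as np

def spectrum_to_timeseries(freqs, psd, df, t, seed=0):
    """Synthesize a stationary time series from a one-sided stress PSD.

    Each bin contributes a cosine of amplitude sqrt(2*S(f)*df) with a
    uniformly random phase, so the variance of the series equals the
    integral of the spectrum, sum(S(f))*df."""
    rng = np.random.default_rng(seed)
    amps = np.sqrt(2.0 * psd * df)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=len(freqs))
    return sum(a * np.cos(2.0 * np.pi * f * t + p)
               for a, f, p in zip(amps, freqs, phases))
```

    The resulting series can then be handed to a rainflow-counting routine to build the cycle count matrices the rest of the code expects.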

  8. Adaptive Code Division Multiple Access Protocol for Wireless Network-on-Chip Architectures

    NASA Astrophysics Data System (ADS)

    Vijayakumaran, Vineeth

    Massive levels of integration following Moore's Law ushered in a paradigm shift in the way on-chip interconnections were designed. With higher and higher numbers of cores on the same die, traditional bus-based interconnections are no longer a scalable communication infrastructure. On-chip networks were proposed, enabling a scalable plug-and-play mechanism for interconnecting hundreds of cores on the same chip. Wired interconnects between the cores in a traditional Network-on-Chip (NoC) system become a bottleneck as the number of cores increases, raising the latency and energy required to transmit signals over them. Hence, many alternative emerging interconnect technologies have been proposed, namely 3D, photonic and multi-band RF interconnects. Although they provide better connectivity, higher speed and higher bandwidth compared to wired interconnects, they also face challenges with heat dissipation and manufacturing difficulties. On-chip wireless interconnects are another proposed alternative, which do not need a physical interconnection layout, as data travel over the wireless medium. They are integrated into a hybrid NoC architecture consisting of both wired and wireless links, which provides higher bandwidth, lower latency, less area overhead and reduced energy dissipation in communication. However, as the bandwidth of the wireless channels is limited, an efficient media access control (MAC) scheme is required to enhance the utilization of the available bandwidth. This thesis proposes using a multiple access mechanism such as Code Division Multiple Access (CDMA) to enable multiple transmitter-receiver pairs to send data over the wireless channel simultaneously. It will be shown that such a hybrid wireless NoC with an efficient CDMA-based MAC protocol can significantly increase the performance of the system while lowering the energy dissipation in data transfer. In this work it is shown that the wireless NoC with the proposed CDMA based MAC protocol
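
    The CDMA mechanism proposed above lets several transmitter-receiver pairs share one wireless channel through orthogonal spreading codes. A minimal Walsh-code sketch of spreading and despreading; the thesis's actual MAC protocol adds code assignment and arbitration on top of this:

```python
import numpy as np

def walsh_codes(n):
    """Hadamard construction of n orthogonal Walsh codes (n a power of two)."""
    h = np.array([[1]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

def spread(bits, code):
    """Map bits to BPSK symbols (+1/-1) and spread each over the code chips."""
    return np.outer(2 * np.asarray(bits) - 1, code).ravel()

def despread(chips, code):
    """Correlate each symbol-length chip block with the code; the sign of
    the correlation recovers the transmitted bit."""
    corr = chips.reshape(-1, len(code)) @ code
    return (corr > 0).astype(int)
```

    Because distinct Walsh codes are orthogonal, each receiver's correlation cancels the other users' chips exactly in the noiseless case, which is what allows simultaneous transmissions over the shared wireless medium.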

  9. Interest Level in 2-Year-Olds with Autism Spectrum Disorder Predicts Rate of Verbal, Nonverbal, and Adaptive Skill Acquisition

    ERIC Educational Resources Information Center

    Klintwall, Lars; Macari, Suzanne; Eikeseth, Svein; Chawarska, Katarzyna

    2015-01-01

    Recent studies have suggested that skill acquisition rates for children with autism spectrum disorders receiving early interventions can be predicted by child motivation. We examined whether level of interest during an Autism Diagnostic Observation Schedule assessment at 2 years predicts subsequent rates of verbal, nonverbal, and adaptive skill…

  10. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted

  11. Multi-peaked adaptive landscape for chikungunya virus evolution predicts continued fitness optimization in Aedes albopictus mosquitoes.

    PubMed

    Tsetsarkin, Konstantin A; Chen, Rubing; Yun, Ruimei; Rossi, Shannan L; Plante, Kenneth S; Guerbois, Mathilde; Forrester, Naomi; Perng, Guey Chuen; Sreekumar, Easwaran; Leal, Grace; Huang, Jing; Mukhopadhyay, Suchetana; Weaver, Scott C

    2014-01-01

    Host species-specific fitness landscapes largely determine the outcome of host switching during pathogen emergence. Using chikungunya virus (CHIKV) to study adaptation to a mosquito vector, we evaluated mutations associated with recently evolved sub-lineages. Multiple Aedes albopictus-adaptive fitness peaks became available after CHIKV acquired an initial adaptive (E1-A226V) substitution, permitting rapid lineage diversification observed in nature. All second-step mutations involved replacements by glutamine or glutamic acid of E2 glycoprotein amino acids in the acid-sensitive region, providing a framework to anticipate additional A. albopictus-adaptive mutations. The combination of second-step adaptive mutations into a single, 'super-adaptive' fitness peak also predicted the future emergence of CHIKV strains with even greater transmission efficiency in some current regions of endemic circulation, followed by their likely global spread. PMID:24933611

  12. Polarization-multiplexed rate-adaptive non-binary-quasi-cyclic-LDPC-coded multilevel modulation with coherent detection for optical transport networks.

    PubMed

    Arabaci, Murat; Djordjevic, Ivan B; Saunders, Ross; Marcoccia, Roberto M

    2010-02-01

    In order to achieve high-speed transmission over optical transport networks (OTNs) and maximize their throughput, we propose using rate-adaptive polarization-multiplexed coded multilevel modulation with coherent detection based on component non-binary quasi-cyclic (QC) LDPC codes. Compared to the prior-art bit-interleaved LDPC-coded modulation (BI-LDPC-CM) scheme, the proposed non-binary LDPC-coded modulation (NB-LDPC-CM) scheme not only reduces latency, due to symbol-level instead of bit-level processing, but also provides either an impressive reduction in computational complexity or striking improvements in coding gain, depending on the constellation size. As the paper shows, the proposed NB-LDPC-CM scheme thus better addresses the needs of future OTNs than its prior-art binary counterpart, namely achieving the target BER performance and providing the maximum possible throughput over the entire lifetime of the OTN. PMID:20174010

  13. COSAL: A black-box compressible stability analysis code for transition prediction in three-dimensional boundary layers

    NASA Technical Reports Server (NTRS)

    Malik, M. R.

    1982-01-01

    A fast computer code COSAL for transition prediction in three dimensional boundary layers using compressible stability analysis is described. The compressible stability eigenvalue problem is solved using a finite difference method, and the code is a black box in the sense that no guess of the eigenvalue is required from the user. Several optimization procedures were incorporated into COSAL to calculate integrated growth rates (N factor) for transition correlation for swept and tapered laminar flow control wings using the well-known e^N method. A user's guide to the program is provided.
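
    The e^N method referenced above correlates transition with an integrated disturbance growth rate. A minimal sketch of the N-factor bookkeeping, with invented growth-rate numbers and the customary transition threshold of N ≈ 9:

```python
# Sketch of the e^N transition-correlation bookkeeping: integrate the local
# disturbance amplification rate along the surface and flag transition where
# the N factor first reaches ~9. Growth-rate values here are invented.

def n_factor(x, growth_rate):
    """Trapezoidal integration of the amplification rate over x."""
    n = [0.0]
    for i in range(1, len(x)):
        step = 0.5 * (growth_rate[i] + growth_rate[i - 1]) * (x[i] - x[i - 1])
        n.append(n[-1] + step)
    return n

x = [0.0, 0.1, 0.2, 0.3, 0.4]            # streamwise stations (hypothetical)
sigma = [0.0, 10.0, 20.0, 30.0, 80.0]    # local growth rates (hypothetical)
N = n_factor(x, sigma)
transition = next((xi for xi, ni in zip(x, N) if ni >= 9.0), None)
```

    With these made-up numbers the N factor reaches 10 at the last station, so transition is flagged at x = 0.4; a stability code such as COSAL supplies the actual growth rates from eigenvalue computations.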

  14. STGSTK: A computer code for predicting multistage axial flow compressor performance by a meanline stage stacking method

    NASA Technical Reports Server (NTRS)

    Steinke, R. J.

    1982-01-01

    A FORTRAN computer code is presented for off-design performance prediction of axial-flow compressors. Stage and compressor performance is obtained by a stage-stacking method that uses representative velocity diagrams at rotor inlet and outlet meanline radii. The code has options for: (1) direct user input or calculation of nondimensional stage characteristics; (2) adjustment of stage characteristics for off-design speed and blade setting angle; (3) adjustment of rotor deviation angle for off-design conditions; and (4) SI or U.S. customary units. Correlations from experimental data are used to model real flow conditions. Calculations are compared with experimental data.

  15. Predicting animal δ18O: Accounting for diet and physiological adaptation

    NASA Astrophysics Data System (ADS)

    Kohn, Matthew J.

    1996-12-01

    Theoretical predictions and measured isotope variations indicate that diet and physiological adaptation have a significant impact on animal δ18O and cannot be ignored. A generalized model is therefore developed for the prediction of animal body water and phosphate δ18O that incorporates these factors quantitatively. Application of the model reproduces most published compositions and compositional trends for mammals and birds. A moderate dependence of animal δ18O on humidity is predicted for drought-tolerant animals, and the correlation between humidity and North American deer bone composition, as corrected for local meteoric water, is predicted within the scatter of the data. In contrast to an observed strong correlation between kangaroo δ18O and humidity (Δδ18O/Δh ≈ 2.5 ± 0.4‰ per 10% r.h.), the predicted humidity dependence is only 1.3-1.7‰ per 10% r.h., and it is inferred that drinking water in hot dry areas of Australia is enriched in 18O over rainwater. Differences in physiology and water turnover readily explain the observed differences in δ18O for several herbivore genera in East Africa, excepting antelopes. Antelope models are more sensitive to biological fractionations, and adjustments to the flux of transcutaneous water vapor within experimentally measured ranges allow their δ18O values to be matched. Models of the seasonal changes of forage composition for two regions with dissimilar climates show that significant seasonal variations in animal isotope composition are expected, and that animals with different physiologies and diets track climate differently. Analysis of different genera with disparate sensitivities to surface water and humidity will allow the most accurate quantification of past climate changes.

  17. Verification of computational aerodynamic predictions for complex hypersonic vehicles using the INCA{trademark} code

    SciTech Connect

    Payne, J.L.; Walker, M.A.

    1995-01-01

    This paper describes a process of combining two state-of-the-art CFD tools, SPRINT and INCA, in a manner which extends the utility of both codes beyond what is possible from either code alone. The speed and efficiency of the PNS code, SPRINT, has been combined with the capability of a Navier-Stokes code to model fully elliptic, viscous separated regions on high performance, high speed flight systems. The coupled SPRINT/INCA capability is applicable for design and evaluation of high speed flight vehicles in the supersonic to hypersonic speed regimes. This paper describes the codes involved, the interface process and a few selected test cases which illustrate the SPRINT/INCA coupling process. Results have shown that the combination of SPRINT and INCA produces correct results and can lead to improved computational analyses for complex, three-dimensional problems.

  18. Static and predictive tomographic reconstruction for wide-field multi-object adaptive optics systems.

    PubMed

    Correia, C; Jackson, K; Véran, J-P; Andersen, D; Lardière, O; Bradley, C

    2014-01-01

    Multi-object adaptive optics (MOAO) systems are still in their infancy: their complex optical designs for tomographic, wide-field wavefront sensing, coupled with open-loop (OL) correction, make their calibration a challenge. The correction of a discrete number of specific directions in the field allows for streamlined application of a general class of spatio-angular algorithms, initially proposed in Whiteley et al. [J. Opt. Soc. Am. A15, 2097 (1998)], which is compatible with partial on-line calibration. The recent Learn & Apply algorithm from Vidal et al. [J. Opt. Soc. Am. A27, A253 (2010)] can then be reinterpreted in a broader framework of tomographic algorithms and is shown to be a special case that exploits the particulars of OL and aperture-plane phase conjugation. An extension to embed a temporal prediction step to tackle sky-coverage limitations is discussed. The trade-off between lengthening the camera integration period, therefore increasing system lag error, and the resulting improvement in SNR can be shifted to higher guide-star magnitudes by introducing temporal prediction. The derivation of the optimal predictor and a comparison to suboptimal autoregressive models is provided using temporal structure functions. It is shown using end-to-end simulations of Raven, the MOAO science, and technology demonstrator for the 8 m Subaru telescope that prediction allows by itself the use of 1-magnitude-fainter guide stars. PMID:24561945

  19. Predicting demographically sustainable rates of adaptation: can great tit breeding time keep pace with climate change?

    PubMed Central

    Gienapp, Phillip; Lof, Marjolein; Reed, Thomas E.; McNamara, John; Verhulst, Simon; Visser, Marcel E.

    2013-01-01

    Populations need to adapt to sustained climate change, which requires micro-evolutionary change in the long term. A key question is how the rate of this micro-evolutionary change compares with the rate of environmental change, given that theoretically there is a ‘critical rate of environmental change’ beyond which increased maladaptation leads to population extinction. Here, we parametrize two closely related models to predict this critical rate using data from a long-term study of great tits (Parus major). We used stochastic dynamic programming to predict changes in optimal breeding time under three different climate scenarios. Using these results we parametrized two theoretical models to predict critical rates. Results from both models agreed qualitatively in that even ‘mild’ rates of climate change would be close to these critical rates with respect to great tit breeding time, while for scenarios close to the upper limit of IPCC climate projections the calculated critical rates would be clearly exceeded with possible consequences for population persistence. We therefore tentatively conclude that micro-evolution, together with plasticity, would rescue the population only from mild rates of climate change, although the models make many simplifying assumptions that remain to be tested. PMID:23209174

  20. An adaptive distance-based group contribution method for thermodynamic property prediction.

    PubMed

    He, Tanjin; Li, Shuang; Chi, Yawei; Zhang, Hong-Bo; Wang, Zhi; Yang, Bin; He, Xin; You, Xiaoqing

    2016-09-14

    In the search for an accurate yet inexpensive method to predict thermodynamic properties of large hydrocarbon molecules, we have developed an automatic and adaptive distance-based group contribution (DBGC) method. The method characterizes the group interaction within a molecule with an exponential decay function of the group-to-group distance, defined as the number of bonds between the groups. A database containing the molecular bonding information and the standard enthalpy of formation (Hf,298K) for alkanes, alkenes, and their radicals at the M06-2X/def2-TZVP//B3LYP/6-31G(d) level of theory was constructed. Multiple linear regression (MLR) and artificial neural network (ANN) fitting were used to obtain the contributions from individual groups and group interactions for further predictions. Compared with the conventional group additivity (GA) method, the DBGC method predicts Hf,298K for alkanes more accurately using the same training sets. Particularly for some highly branched large hydrocarbons, the discrepancy with the literature data is smaller for the DBGC method than the conventional GA method. When extended to other molecular classes, including alkenes and radicals, the overall accuracy level of this new method is still satisfactory. PMID:27522953
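
    A hedged sketch of the DBGC idea described above: per-group contributions plus pairwise interaction terms that decay exponentially with the bond distance between groups. All parameter values below are invented for illustration, not fitted values from the paper:

```python
# Hedged DBGC sketch: estimate = sum of group contributions plus pairwise
# interaction terms weighted by exp(-alpha * bond_distance). All parameter
# values are invented for illustration, not fitted DBGC parameters.
import math

def dbgc_estimate(groups, distances, contrib, interact, alpha=1.0):
    """groups: group labels; distances[(i, j)]: bonds between groups i and j."""
    total = sum(contrib[g] for g in groups)
    for (i, j), d in distances.items():
        total += interact[(groups[i], groups[j])] * math.exp(-alpha * d)
    return total

# Toy three-group molecule (propane-like): CH3-CH2-CH3
groups = ["CH3", "CH2", "CH3"]
distances = {(0, 1): 1, (1, 2): 1, (0, 2): 2}
contrib = {"CH3": -42.0, "CH2": -21.0}                     # invented values
interact = {("CH3", "CH2"): 2.0, ("CH2", "CH3"): 2.0,
            ("CH3", "CH3"): 1.0}                           # invented values
hf_est = dbgc_estimate(groups, distances, contrib, interact)
```

    Distant group pairs contribute exponentially less than bonded neighbors, which is how the method captures the branching effects that plain group additivity misses.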

  1. Assessing the Predictive Capability of the LIFEIV Nuclear Fuel Performance Code using Sequential Calibration

    SciTech Connect

    Stull, Christopher J.; Williams, Brian J.; Unal, Cetin

    2012-07-05

    This report considers the problem of calibrating a numerical model to data from an experimental campaign (or series of experimental tests). The issue is that when an experimental campaign is proposed, only the input parameters associated with each experiment are known (i.e. outputs are not known because the experiments have yet to be conducted). Faced with such a situation, it would be beneficial from the standpoint of resource management to carefully consider the sequence in which the experiments are conducted. In this way, the resources available for experimental tests may be allocated in a way that best 'informs' the calibration of the numerical model. To address this concern, the authors propose decomposing the input design space of the experimental campaign into its principal components. Subsequently, the utility (to be explained) of each experimental test to the principal components of the input design space is used to formulate the sequence in which the experimental tests will be used for model calibration purposes. The results reported herein build on those presented and discussed in [1,2] wherein Verification & Validation and Uncertainty Quantification (VU) capabilities were applied to the nuclear fuel performance code LIFEIV. In addition to the raw results from the sequential calibration studies derived from the above, a description of the data within the context of the Predictive Maturity Index (PMI) will also be provided. The PMI [3,4] is a metric initiated and developed at Los Alamos National Laboratory to quantitatively describe the ability of a numerical model to make predictions in the absence of experimental data, where it is noted that 'predictions in the absence of experimental data' is not synonymous with extrapolation. This simply reflects the fact that resources do not exist such that each and every execution of the numerical model can be compared against experimental data. If such resources existed, the justification for numerical models

  2. Prediction of flood abnormalities for improved public safety using a modified adaptive neuro-fuzzy inference system.

    PubMed

    Aqil, M; Kita, I; Yano, A; Nishiyama, S

    2006-01-01

    It is widely accepted that an efficient flood alarm system may significantly improve public safety and mitigate economical damages caused by inundations. In this paper, a modified adaptive neuro-fuzzy system is proposed to modify the traditional neuro-fuzzy model. This new method employs a rule-correction based algorithm to replace the error back propagation algorithm that is employed by the traditional neuro-fuzzy method in backward pass calculation. The final value obtained during the backward pass calculation using the rule-correction algorithm is then considered as a mapping function of the learning mechanism of the modified neuro-fuzzy system. Effectiveness of the proposed identification technique is demonstrated through a simulation study on the flood series of the Citarum River in Indonesia. The first four-year data (1987 to 1990) was used for model training/calibration, while the other remaining data (1991 to 2002) was used for testing the model. The number of antecedent flows that should be included in the input variables was determined by two statistical methods, i.e. autocorrelation and partial autocorrelation between the variables. Performance accuracy of the model was evaluated in terms of two statistical indices, i.e. mean average percentage error and root mean square error. The algorithm was developed in a decision support system environment in order to enable users to process the data. The decision support system is found to be useful due to its interactive nature, flexibility in approach, and evolving graphical features, and can be adopted for any similar situation to predict the streamflow. The main data processing includes gauging station selection, input generation, lead-time selection/generation, and length of prediction. This program enables users to process the flood data, to train/test the model using various input options, and to visualize results. The program code consists of a set of files, which can be modified as well to match other

  3. Classification of Arabidopsis thaliana gene sequences: clustering of coding sequences into two groups according to codon usage improves gene prediction.

    PubMed

    Mathé, C; Peresetsky, A; Déhais, P; Van Montagu, M; Rouzé, P

    1999-02-01

    While genomic sequences are accumulating, finding the location of the genes remains a major issue that can be solved only for about a half of them by homology searches. Prediction methods are thus required, but unfortunately are not fully satisfying. Most prediction methods implicitly assume a unique model for genes. This is an oversimplification as demonstrated by the possibility to group coding sequences into several classes in Escherichia coli and other genomes. As no classification existed for Arabidopsis thaliana, we classified genes according to the statistical features of their coding sequences. A clustering algorithm using a codon usage model was developed and applied to coding sequences from A. thaliana, E. coli, and a mixture of both. By using it, Arabidopsis sequences were clustered into two classes. The CU1 and CU2 classes differed essentially by the choice of pyrimidine bases at the codon silent sites: CU2 genes often use C whereas CU1 genes prefer T. This classification discriminated the Arabidopsis genes according to their expressiveness, highly expressed genes being clustered in CU2 and genes expected to have a lower expression, such as the regulatory genes, in CU1. The algorithm separated the sequences of the Escherichia-Arabidopsis mixed data set into five classes according to the species, except for one class. This mixed class contained 89 % Arabidopsis genes from CU1 and 11 % E. coli genes, mostly horizontally transferred. Interestingly, most genes encoding organelle-targeted proteins, except the photosynthetic and photoassimilatory ones, were clustered in CU1. By tailoring the GeneMark CDS prediction algorithm to the observed coding sequence classes, its quality of prediction was greatly improved. Similar improvement can be expected with other prediction systems. PMID:9925779
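
    The CU1/CU2 distinction above rests on the pyrimidine chosen at the silent third codon position. A deliberately crude caricature of that representation (the paper's actual clustering uses full codon-usage models, not this two-number rule):

```python
# Crude caricature of the CU1/CU2 split: classify a coding sequence by
# whether T or C dominates at the silent third codon position. The paper's
# clustering algorithm uses full codon-usage models, not this simple rule.

def third_position_counts(cds):
    """Count codons ending in T and in C."""
    codons = [cds[i:i + 3] for i in range(0, len(cds) - 2, 3)]
    t = sum(cod[2] == "T" for cod in codons)
    c = sum(cod[2] == "C" for cod in codons)
    return t, c

def crude_class(cds):
    t, c = third_position_counts(cds)
    return "CU1-like (T-ending)" if t > c else "CU2-like (C-ending)"
```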

  4. Using self-similarity compensation for improving inter-layer prediction in scalable 3D holoscopic video coding

    NASA Astrophysics Data System (ADS)

    Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.

    2013-09-01

    Holoscopic imaging, also known as integral imaging, has been recently attracting the attention of the research community, as a promising glassless 3D technology due to its ability to create a more realistic depth illusion than the current stereoscopic or multiview solutions. However, in order to gradually introduce this technology into the consumer market and to efficiently deliver 3D holoscopic content to end-users, backward compatibility with legacy displays is essential. Consequently, to enable 3D holoscopic content to be delivered and presented on legacy displays, a display scalable 3D holoscopic coding approach is required. Hence, this paper presents a display scalable architecture for 3D holoscopic video coding with a three-layer approach, where each layer represents a different level of display scalability: Layer 0 - a single 2D view; Layer 1 - 3D stereo or multiview; and Layer 2 - the full 3D holoscopic content. In this context, a prediction method is proposed, which combines inter-layer prediction, aiming to exploit the existing redundancy between the multiview and the 3D holoscopic layers, with self-similarity compensated prediction (previously proposed by the authors for non-scalable 3D holoscopic video coding), aiming to exploit the spatial redundancy inherent to the 3D holoscopic enhancement layer. Experimental results show that the proposed combined prediction can improve significantly the rate-distortion performance of scalable 3D holoscopic video coding with respect to the authors' previously proposed solutions, where only inter-layer or only self-similarity prediction is used.

  5. Age of first words predicts cognitive ability and adaptive skills in children with ASD

    PubMed Central

    Mayo, Jessica; Chlebowski, Colby; Fein, Deborah A.; Eigsti, Inge-Marie

    2015-01-01

    Acquiring useful language by age 5 has been identified as a strong predictor of positive outcomes in individuals with ASD. This study examined the relationship between age of language acquisition and later functioning in children with ASD (n = 119). First word acquisition at a range of ages was probed for its relationship to cognitive ability and adaptive behaviors at 52 months. Results indicated that although producing first words predicted better outcome at every age examined, producing first words by 24 months was a particularly strong predictor of better outcomes. This finding suggests that the historic criterion for positive prognosis (i.e., “useful language by age 5”) can be updated to a more specific criterion with an earlier developmental time point. PMID:22673858

  6. Predicted performance benefits of an adaptive digital engine control system of an F-15 airplane

    NASA Technical Reports Server (NTRS)

    Burcham, F. W., Jr.; Myers, L. P.; Ray, R. J.

    1985-01-01

    The highly integrated digital electronic control (HIDEC) program will demonstrate and evaluate the improvements in performance and mission effectiveness that result from integrating engine-airframe control systems. Currently this is accomplished on the NASA Ames Research Center's F-15 airplane. The two control modes used to implement the systems are an integrated flightpath management mode and an integrated adaptive engine control system (ADECS) mode. The ADECS mode is a highly integrated mode in which the airplane flight conditions, the resulting inlet distortion, and the available engine stall margin are continually computed. The excess stall margin is traded for thrust. The predicted increase in engine performance due to the ADECS mode is presented in this report.

  7. Adaptive Scheduling for QoS Virtual Machines under Different Resource Allocation - Performance Effects and Predictability

    NASA Astrophysics Data System (ADS)

    Sodan, Angela C.

    Virtual machines have become an important approach to provide performance isolation and performance guarantees (QoS) on cluster servers and on many-core SMP servers. Many-core CPUs are a current trend in CPU design and require jobs to be parallel for exploitation of the performance potential. Very promising for batch job scheduling with virtual machines on both cluster servers and many-core SMP servers is adaptive scheduling which can adjust sizes of parallel jobs to consider different load situations and different resource availability. Then, the resource allocation and resource partitioning can be determined at virtual-machine level and be propagated down to the job sizes. The paper investigates job re-sizing and virtual-machine resizing, and the effects which the efficiency curve of the jobs has on the resulting performance. Additionally, the paper presents a simple, yet effective queuing-model approach for predicting performance under different resource allocation.

  8. Predicted performance benefits of an adaptive digital engine control system on an F-15 airplane

    NASA Technical Reports Server (NTRS)

    Burcham, F. W., Jr.; Myers, L. P.; Ray, R. J.

    1985-01-01

    The highly integrated digital electronic control (HIDEC) program will demonstrate and evaluate the improvements in performance and mission effectiveness that result from integrating engine-airframe control systems. Currently this is accomplished on the NASA Ames Research Center's F-15 airplane. The two control modes used to implement the systems are an integrated flightpath management mode and an integrated adaptive engine control system (ADECS) mode. The ADECS mode is a highly integrated mode in which the airplane flight conditions, the resulting inlet distortion, and the available engine stall margin are continually computed. The excess stall margin is traded for thrust. The predicted increase in engine performance due to the ADECS mode is presented in this report.

  9. Adaptive neuro-fuzzy prediction of modulation transfer function of optical lens system

    NASA Astrophysics Data System (ADS)

    Petković, Dalibor; Shamshirband, Shahaboddin; Anuar, Nor Badrul; Md Nasir, Mohd Hairul Nizam; Pavlović, Nenad T.; Akib, Shatirah

    2014-07-01

    The quantitative assessment of image quality is an important consideration in any type of imaging system. The modulation transfer function (MTF) is a graphical description of the sharpness and contrast of an imaging system or of its individual components. The MTF is also known as the spatial frequency response. The MTF curve has different meanings according to the corresponding frequency. The MTF of an optical system specifies the contrast transmitted by the system as a function of image size, and is determined by the inherent optical properties of the system. In this study, an adaptive neuro-fuzzy inference system (ANFIS) estimator is designed and adapted to predict the MTF value of the actual optical system. The neural network in ANFIS adjusts the parameters of the membership functions in the fuzzy logic of the fuzzy inference system. The back propagation learning algorithm is used for training this network. This intelligent estimator is implemented using MATLAB/Simulink and its performance is investigated. The simulation results presented in this paper show the effectiveness of the developed method.
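
    As a small illustration of the definition used above, the MTF at one spatial frequency can be computed as the ratio of image contrast to object contrast; the sample intensity values below are hypothetical:

```python
# Illustrative MTF computation at a single spatial frequency: the ratio of
# image contrast to object contrast, with contrast defined as
# (I_max - I_min) / (I_max + I_min). Sample intensities are hypothetical.

def contrast(intensities):
    hi, lo = max(intensities), min(intensities)
    return (hi - lo) / (hi + lo)

def mtf(object_signal, image_signal):
    return contrast(image_signal) / contrast(object_signal)

obj = [0.0, 1.0, 2.0, 1.0]    # sinusoidal object pattern, contrast 1.0
img = [0.5, 1.0, 1.5, 1.0]    # blurred image of it, contrast 0.5
```

    An estimator like the ANFIS model above predicts such MTF values directly from system parameters instead of measuring them pattern by pattern.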

  10. Bioinformatics Approach for Prediction of Functional Coding/Noncoding Simple Polymorphisms (SNPs/Indels) in Human BRAF Gene.

    PubMed

    Hassan, Mohamed M; Omer, Shaza E; Khalf-Allah, Rahma M; Mustafa, Razaz Y; Ali, Isra S; Mohamed, Sofia B

    2016-01-01

    This study examined Homo sapiens single variations (SNPs/indels) in the BRAF gene across coding and non-coding regions. Variant data were obtained from the SNP database (dbSNP) as of its last update of November 2015. Many bioinformatics tools were used to identify the effects of functional SNPs and indels on protein function, structure, and expression. For coding polymorphisms, 111 SNPs were predicted as highly damaging and six others as less damaging. For the UTRs, five SNPs and one indel altered microRNA binding sites (3' UTR), while no SNP or indel functionally altered transcription factor binding sites (5' UTR). In addition, for the 5'/3' splice sites, analysis showed that one SNP within the 5' splice site and one indel in the 3' splice site showed potential alteration of splicing. In conclusion, these functionally identified SNPs and indels could lead to gene alteration, which may directly or indirectly contribute to the occurrence of many diseases. PMID:27478437

  11. Optimality Of Variable-Length Codes

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Miller, Warner H.; Rice, Robert F.

    1994-01-01

    Report presents analysis of performances of conceptual Rice universal noiseless coders designed to provide efficient compression of data over wide range of source-data entropies. Includes predictive preprocessor that maps source data into sequence of nonnegative integers and variable-length-coding processor, which adapts to varying entropy of source data by selecting whichever one of number of optional codes yields shortest codeword.
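
    To make the Rice-coder idea concrete, here is a minimal sketch of one Rice code option of order k and of the adaptive selection among options; it illustrates the principle only, not the exact code options of the report:

```python
# Minimal Rice code of order k: a nonnegative integer n is sent as a unary
# quotient (n >> k) followed by the k-bit remainder; an adaptive coder picks
# the option k giving the shortest total length for each block of samples.

def rice_encode(n, k):
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, "b").zfill(k) if k else "")

def rice_decode(bits, k):
    q = bits.index("0")                      # unary quotient ends at first 0
    r = int(bits[q + 1:q + 1 + k] or "0", 2)
    return (q << k) | r

def best_k(block, k_options=range(4)):
    """Adaptive option selection: the k minimizing total coded length."""
    return min(k_options, key=lambda k: sum(len(rice_encode(n, k)) for n in block))
```

    Blocks of small integers (low source entropy) favor small k, larger values favor larger k, which is exactly the adaptation the variable-length-coding processor performs after the predictive preprocessor maps the data to nonnegative integers.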

  12. A temporal predictive code for voice motor control: Evidence from ERP and behavioral responses to pitch-shifted auditory feedback.

    PubMed

    Behroozmand, Roozbeh; Sangtian, Stacey; Korzyukov, Oleg; Larson, Charles R

    2016-04-01

    The predictive coding model suggests that voice motor control is regulated by a process in which the mismatch (error) between feedforward predictions and sensory feedback is detected and used to correct vocal motor behavior. In this study, we investigated how predictions about the timing of pitch perturbations in voice auditory feedback modulate ERP and behavioral responses during vocal production. We designed six counterbalanced blocks in which a +100 cents pitch-shift stimulus perturbed voice auditory feedback during vowel sound vocalizations. In three blocks, there was a fixed delay (500, 750, or 1000 ms) between voice and pitch-shift stimulus onset (predictable), whereas in the other three blocks, stimulus onset delay was randomized between 500, 750, and 1000 ms (unpredictable). We found that subjects produced compensatory (opposing) vocal responses that started 80 ms after the onset of the unpredictable stimuli. However, for predictable stimuli, subjects initiated vocal responses 20 ms before stimulus onset and followed the direction of the pitch shifts in voice feedback. Analysis of ERPs showed that the amplitudes of the N1 and P2 components were significantly reduced in response to predictable compared with unpredictable stimuli. These findings indicate that predictions about temporal features of sensory feedback can modulate vocal motor behavior. In the context of the predictive coding model, temporally predictable stimuli are learned and reinforced by the internal feedforward system and, as indexed by the ERP suppression, the sensory feedback contribution to their processing is reduced. These findings provide new insights into the neural mechanisms of vocal production and motor control. PMID:26835556

  13. Stability of executive function and predictions to adaptive behavior from middle childhood to pre-adolescence

    PubMed Central

    Harms, Madeline B.; Zayas, Vivian; Meltzoff, Andrew N.; Carlson, Stephanie M.

    2014-01-01

    The shift from childhood to adolescence is characterized by rapid remodeling of the brain and increased risk-taking behaviors. Current theories hypothesize that developmental enhancements in sensitivity to affective environmental cues in adolescence may undermine executive function (EF) and increase the likelihood of problematic behaviors. In the current study, we examined the extent to which EF in childhood predicts EF in early adolescence. We also tested whether individual differences in neural responses to affective cues (rewards/punishments) in childhood serve as a biological marker for EF, sensation-seeking, academic performance, and social skills in early adolescence. At age 8, 84 children completed a gambling task while event-related potentials (ERPs) were recorded. We examined the extent to which selections resulting in rewards or losses in this task elicited (i) the P300, a post-stimulus waveform reflecting the allocation of attentional resources toward a stimulus, and (ii) the SPN, a pre-stimulus anticipatory waveform reflecting a neural representation of a “hunch” about an outcome that originates in insula and ventromedial PFC. Children also completed a Dimensional Change Card-Sort (DCCS) and Flanker task to measure EF. At age 12, 78 children repeated the DCCS and Flanker and completed a battery of questionnaires. Flanker and DCCS accuracy at age 8 predicted Flanker and DCCS performance at age 12, respectively. Individual differences in the magnitude of P300 (to losses vs. rewards) and SPN (preceding outcomes with a high probability of punishment) at age 8 predicted self-reported sensation seeking (lower) and teacher-rated academic performance (higher) at age 12. We suggest there is stability in EF from age 8 to 12, and that childhood neural sensitivity to reward and punishment predicts individual differences in sensation seeking and adaptive behaviors in children entering adolescence. PMID:24795680

  14. Prediction of Scour Depth around Bridge Piers using Adaptive Neuro-Fuzzy Inference Systems (ANFIS)

    NASA Astrophysics Data System (ADS)

    Valyrakis, Manousos; Zhang, Hanqing

    2014-05-01

    Earth's surface is continuously shaped by the action of geophysical flows. Erosion due to the flow of water in river systems has been identified as a key problem in preserving the ecological health of river systems, but it is also a threat to our built environment and critical infrastructure worldwide. As an example, scour has been estimated to be a major cause of bridge failure. Even though the flow past bridge piers has been investigated both experimentally and numerically, and the mechanisms of scouring are relatively well understood, a tool that can offer fast and reliable predictions is still lacking. Most of the existing formulas for prediction of bridge pier scour depth are empirical in nature, based on a limited range of data or on piers of a specific shape. In this work, the application of a machine learning model that has been successfully employed in water engineering, namely an Adaptive Neuro-Fuzzy Inference System (ANFIS), is proposed to estimate the scour depth around bridge piers. In particular, architectures of varying complexity are sequentially built in order to identify the optimal one for scour depth prediction, using appropriate training and validation subsets obtained from the USGS database (pre-processed to remove incomplete records). The model has five input variables: the effective pier width (b), the approach velocity (v), the approach depth (y), the mean grain diameter (D50), and the skew to flow. Simulations are conducted with different data groups (bed material type, pier type and shape) and different numbers of input variables, to produce reduced-complexity and easily interpretable models. Analysis and comparison of the results indicate that the developed ANFIS model has high accuracy and outstanding generalization ability for prediction of scour parameters. The effective pier width (as opposed to skew to flow) is amongst the most relevant input parameters for the estimation.

  15. Inferring the Frequency Spectrum of Derived Variants to Quantify Adaptive Molecular Evolution in Protein-Coding Genes of Drosophila melanogaster.

    PubMed

    Keightley, Peter D; Campos, José L; Booker, Tom R; Charlesworth, Brian

    2016-06-01

    Many approaches for inferring adaptive molecular evolution analyze the unfolded site frequency spectrum (SFS), a vector of counts of sites with different numbers of copies of derived alleles in a sample of alleles from a population. Accurate inference of the high-copy-number elements of the SFS is difficult, however, because of misassignment of alleles as derived vs. ancestral. This is a known problem with parsimony using outgroup species. Here we show that the problem is particularly serious if there is variation in the substitution rate among sites brought about by variation in selective constraint levels. We present a new method for inferring the SFS using one or two outgroups that attempts to overcome the problem of misassignment. We show that two outgroups are required for accurate estimation of the SFS if there is substantial variation in selective constraints, which is expected to be the case for nonsynonymous sites in protein-coding genes. We apply the method to estimate unfolded SFSs for synonymous and nonsynonymous sites in a population of Drosophila melanogaster from phase 2 of the Drosophila Population Genomics Project. We use the unfolded spectra to estimate the frequency and strength of advantageous and deleterious mutations and estimate that ∼50% of amino acid substitutions are positively selected but that <0.5% of new amino acid mutations are beneficial, with a scaled selection strength of Nes ≈ 12. PMID:27098912
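
    The unfolded SFS analyzed above is, once alleles are polarized, a simple tally of segregating sites by derived-allele copy number. A minimal sketch, assuming the derived allele at each site has already been correctly assigned (the misassignment problem is precisely what the paper's method addresses):

```python
from collections import Counter

def unfolded_sfs(derived_counts, n):
    """Unfolded site frequency spectrum: entry i-1 is the number of sites
    where the derived allele appears in i of the n sampled alleles
    (i = 1 .. n-1; monomorphic classes 0 and n are excluded)."""
    tally = Counter(derived_counts)
    return [tally.get(i, 0) for i in range(1, n)]

# e.g. derived-allele copy numbers at 7 segregating sites, sample of n = 6
sfs = unfolded_sfs([1, 1, 2, 1, 5, 3, 2], n=6)   # [3, 2, 1, 0, 1]
```

    Misassignment inflates the high-copy-number tail of this vector (a site with i derived copies is recorded as n - i), which is why the paper's inference with one or two outgroups is needed in practice.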

  16. The HART II International Workshop: An Assessment of the State-of-the-Art in Comprehensive Code Prediction

    NASA Technical Reports Server (NTRS)

    vanderWall, Berend G.; Lim, Joon W.; Smith, Marilyn J.; Jung, Sung N.; Bailly, Joelle; Baeder, James D.; Boyd, D. Douglas, Jr.

    2013-01-01

    Significant advancements in computational fluid dynamics (CFD) and their coupling with computational structural dynamics (CSD, or comprehensive codes) for rotorcraft applications have been achieved recently. Despite this, CSD codes with their engineering level of modeling the rotor blade dynamics, the unsteady sectional aerodynamics and the vortical wake are still the workhorse for the majority of applications. This is especially true when a large number of parameter variations is to be performed and their impact on performance, structural loads, vibration and noise is to be judged in an approximate yet reliable and as accurate as possible manner. In this article, the capabilities of such codes are evaluated using the HART II International Workshop database, focusing on a typical descent operating condition which includes strong blade-vortex interactions. A companion article addresses the CFD/CSD coupled approach. Three cases are of interest: the baseline case and two cases with 3/rev higher harmonic blade root pitch control (HHC) with different control phases employed. One setting is for minimum blade-vortex interaction noise radiation and the other one for minimum vibration generation. The challenge is to correctly predict the wake physics, especially for the cases with HHC, and all the dynamics, aerodynamics, modifications of the wake structure and the aero-acoustics coming with it. It is observed that the comprehensive codes used today have a surprisingly good predictive capability when they appropriately account for all of the physics involved. The minimum requirements to obtain these results are outlined.

  17. An Assessment of Comprehensive Code Prediction State-of-the-Art Using the HART II International Workshop Data

    NASA Technical Reports Server (NTRS)

    vanderWall, Berend G.; Lim, Joon W.; Smith, Marilyn J.; Jung, Sung N.; Bailly, Joelle; Baeder, James D.; Boyd, D. Douglas, Jr.

    2012-01-01

    Despite significant advancements in computational fluid dynamics and their coupling with computational structural dynamics (CSD, or comprehensive codes) for rotorcraft applications, CSD codes with their engineering level of modeling the rotor blade dynamics, the unsteady sectional aerodynamics and the vortical wake are still the workhorse for the majority of applications. This is especially true when a large number of parameter variations is to be performed and their impact on performance, structural loads, vibration and noise is to be judged in an approximate yet reliable and as accurate as possible manner. In this paper, the capabilities of such codes are evaluated using the HART II International Workshop database, focusing on a typical descent operating condition which includes strong blade-vortex interactions. Three cases are of interest: the baseline case and two cases with 3/rev higher harmonic blade root pitch control (HHC) with different control phases employed. One setting is for minimum blade-vortex interaction noise radiation and the other one for minimum vibration generation. The challenge is to correctly predict the wake physics, especially for the cases with HHC, and all the dynamics, aerodynamics, modifications of the wake structure and the aero-acoustics coming with it. It is observed that the comprehensive codes used today have a surprisingly good predictive capability when they appropriately account for all of the physics involved. The minimum requirements to obtain these results are outlined.

  18. Fan Noise Prediction System Development: Source/Radiation Field Coupling and Workstation Conversion for the Acoustic Radiation Code

    NASA Technical Reports Server (NTRS)

    Meyer, H. D.

    1993-01-01

    The Acoustic Radiation Code (ARC) is a finite element program used on the IBM mainframe to predict far-field acoustic radiation from a turbofan engine inlet. In this report, requirements for developers of internal aerodynamic codes regarding use of their program output as input for the ARC are discussed. More specifically, the particular input needed from the Bolt, Beranek and Newman/Pratt and Whitney (turbofan source noise generation) Code (BBN/PWC) is described. In a separate analysis, a method of coupling the source and radiation models that recognizes waves crossing the interface in both directions has been derived. A preliminary version of the coupled code has been developed and used for initial evaluation of coupling issues. Results thus far have shown that reflection from the inlet is sufficient to indicate that full coupling of the source and radiation fields is needed for accurate noise predictions. Also, for this contract, the ARC has been modified for use on the Sun and Silicon Graphics Iris UNIX workstations. Changes and additions involved in this effort are described in an appendix.

  19. Executive Function Predicts Adaptive Behavior in Children with Histories of Heavy Prenatal Alcohol Exposure and Attention Deficit/Hyperactivity Disorder

    PubMed Central

    Ware, Ashley L.; Crocker, Nicole; O’Brien, Jessica W.; Deweese, Benjamin N.; Roesch, Scott C.; Coles, Claire D.; Kable, Julie A.; May, Philip A.; Kalberg, Wendy O.; Sowell, Elizabeth R.; Jones, Kenneth Lyons; Riley, Edward P.; Mattson, Sarah N.

    2011-01-01

    Purpose of Study Prenatal exposure to alcohol often results in disruption to discrete cognitive and behavioral domains, including executive function (EF) and adaptive functioning. In the current study, the relation between these two domains was examined in children with histories of heavy prenatal alcohol exposure, non-exposed children with a diagnosis of attention-deficit/hyperactivity disorder (ADHD), and typically developing controls. Methods As part of a multisite study, three groups of children (8-18y, M = 12.10) were tested: children with histories of heavy prenatal alcohol exposure (ALC, N=142), non-exposed children with ADHD (ADHD, N=82), and typically developing controls (CON, N=133) who did not have ADHD or a history of prenatal alcohol exposure. Children completed subtests of the Delis-Kaplan Executive Function System (D-KEFS) and their primary caregivers completed the Vineland Adaptive Behavior Scales-II (VABS). Data were analyzed using regression analyses. Results Analyses showed that EF measures were predictive of adaptive abilities and significant interactions between D-KEFS measures and group were present. For the ADHD group, the relation between adaptive abilities and EF was more general, with three of the four EF measures showing a significant relation with adaptive score. In contrast, for the ALC group, this relation was specific to the nonverbal EF measures. In the CON group, performance on EF tasks did not predict adaptive scores over the influence of age. Conclusion These results support prior research in ADHD suggesting that EF deficits are predictive of poorer adaptive behavior and extend this finding to include children with heavy prenatal exposure to alcohol. However, the relation between EF and adaptive ability differed by group, suggesting unique patterns of abilities in these children. These results provide enhanced understanding of adaptive deficits in these populations, as well as demonstrate the ecological validity of laboratory

  20. Positive predictive values of the coding for bisphosphonate therapy among cancer patients in the Danish National Patient Registry

    PubMed Central

    Nielsson, Malene Schou; Erichsen, Rune; Frøslev, Trine; Taylor, Aliki; Acquavella, John; Ehrenstein, Vera

    2012-01-01

    Background The purpose of this study was to estimate the positive predictive value (PPV) of the coding for bisphosphonate treatment in selected cancer patients from the Danish National Patient Registry (DNPR). Methods Through the DNPR, we identified all patients with recorded cancer of the breast, prostate, lung, and kidney, and with multiple myeloma. We restricted the study sample to patients with bisphosphonate treatment recorded during an admission to Aalborg Hospital, Denmark, from 2005 through 2009. We retrieved and reviewed the medical records of these patients from the initial cancer diagnosis onwards to confirm or rule out bisphosphonate therapy. We calculated the PPV of the treatment coding as the proportion of patients with confirmed bisphosphonate treatment. Results We retrieved and reviewed the medical records of 60 cancer patients with treatment codes corresponding to bisphosphonate therapy. The recorded code corresponded to intravenously administered treatment for 59 of 60 patients, corresponding to a PPV of 98.3% (95% confidence interval 92.5–99.8). In the remaining patient, bisphosphonate treatment was also confirmed, but with an orally administered bisphosphonate; thus, treatment with any bisphosphonate, regardless of administration route, was confirmed for all 60 patients (PPV of 100%, 95% confidence interval 95.9–100.0). Conclusion The PPV of bisphosphonate treatment coding among cancer patients in the DNPR is very high, and the recorded treatment nearly always corresponds to intravenous administration. PMID:22977313
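
    The PPV reported above is a simple proportion with a binomial confidence interval. A minimal sketch using the Wilson score interval; the paper does not state which interval method it used, so the bounds here will not exactly match the published 92.5–99.8:

```python
from math import sqrt

def ppv_wilson(confirmed, total, z=1.96):
    """Positive predictive value with a Wilson score interval
    (z = 1.96 gives an approximate 95% interval)."""
    p = confirmed / total
    denom = 1 + z**2 / total
    center = (p + z**2 / (2 * total)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return p, center - half, center + half

# the study's headline figure: 59 of 60 codes confirmed as intravenous
ppv, lo, hi = ppv_wilson(59, 60)   # PPV ≈ 0.983
```

    For proportions near 1, as here, the Wilson (or exact Clopper-Pearson) interval is preferred over the normal-approximation interval, which can exceed 100%.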

  1. A multi-timescale adaptive threshold model for the SAI tactile afferent to predict response to mechanical vibration

    PubMed Central

    Jahangiri, Anila F.; Gerling, Gregory J.

    2011-01-01

    The Leaky Integrate and Fire (LIF) model of a neuron is one of the best known models for a spiking neuron. A current limitation of the LIF model is that it may not accurately reproduce the dynamics of an action potential. Recent studies suggest that a LIF coupled with a multi-timescale adaptive threshold (MAT) may increase the LIF model's accuracy in predicting spikes in cortical neurons. We propose a mechanotransduction process coupled with a LIF model with a multi-timescale adaptive threshold to model the slowly adapting type I (SAI) mechanoreceptor in the monkey's glabrous skin. In order to test the performance of the model, the spike timings predicted by this MAT model are compared with neural data. We also test a fixed-threshold variant of the model by comparing its outcome with the neural data. Initial results indicate that the MAT model predicts spike timings better than a fixed-threshold LIF model. PMID:21814636
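
    The LIF-with-MAT idea can be sketched as follows. Each spike raises the threshold by one increment per timescale, and each component decays with its own time constant. The parameter values and constant input are illustrative placeholders, not the fitted SAI-afferent values, and the mechanotransduction front end is omitted:

```python
import numpy as np

def lif_mat(current, dt=0.1, tau_m=10.0, R=50.0,
            omega=15.0, alphas=(20.0, 2.0), taus=(10.0, 200.0)):
    """Leaky integrate-and-fire neuron with a multi-timescale adaptive
    threshold (MAT). The membrane potential is not reset at spikes,
    following MAT-style models; only the threshold adapts.
    Times are in ms; current is one input value per time step.
    Returns spike times in ms."""
    alphas = np.asarray(alphas)
    decay = np.exp(-dt / np.asarray(taus))
    v, theta_parts, spikes = 0.0, np.zeros_like(alphas), []
    for step, i_in in enumerate(current):
        v += dt * (-v + R * i_in) / tau_m   # forward-Euler leaky integration
        theta_parts *= decay                # threshold components relax
        if v >= omega + theta_parts.sum():
            spikes.append(step * dt)
            theta_parts += alphas           # raise the adaptive threshold
    return spikes

# constant input: interspike intervals lengthen as the threshold adapts,
# a qualitative signature of slowly adapting afferents
spike_times = lif_mat(np.full(5000, 1.0))
```

    A fixed-threshold variant is recovered by setting both alphas to zero, which is one way to reproduce the paper's comparison.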

  2. An adaptive lattice Boltzmann method for predicting turbulent wake fields in wind parks

    NASA Astrophysics Data System (ADS)

    Deiterding, Ralf; Wood, Stephen L.

    2014-11-01

    Wind turbines create large-scale wake structures that can affect downstream turbines considerably. Numerical simulation of the turbulent flow field is a viable approach in order to obtain a better understanding of these interactions and to optimize the turbine placement in wind parks. Yet, the development of effective computational methods for predictive wind farm simulation is challenging. As an alternative approach to presently employed vortex and actuator-based methods, we are currently developing a parallel adaptive lattice Boltzmann method for large eddy simulation of turbulent weakly compressible flows with embedded moving structures that shows good potential for effective wind turbine wake prediction. Since the method is formulated in an Eulerian frame of reference and on a dynamically changing nonuniform Cartesian grid, even moving boundaries can be considered rather easily. The presentation will describe all crucial components of the numerical method and discuss first verification computations. Among other configurations, simulations of the wake fields created by multiple Vestas V27 turbines will be shown.

  3. Eye-pupil displacement and prediction: effects on residual wavefront in adaptive optics retinal imaging

    PubMed Central

    Kulcsár, Caroline; Raynaud, Henri-François; Garcia-Rissmann, Aurea

    2016-01-01

    This paper studies the effect of pupil displacements on the best achievable performance of retinal imaging adaptive optics (AO) systems, using 52 trajectories of horizontal and vertical displacements sampled at 80 Hz by a pupil tracker (PT) device on 13 different subjects. This effect is quantified in the form of the minimal root mean square (rms) of the residual phase affecting image formation, as a function of the delay between PT measurement and wavefront correction. It is shown that simple dynamic models identified from data can be used to predict horizontal and vertical pupil displacements with greater accuracy (in terms of average rms) over short-term time horizons. The potential impact of these improvements on residual wavefront rms is investigated. These results make it possible to quantify the part of the disturbances corrected by retinal imaging systems that is caused by relative displacements of an otherwise fixed or slowly varying subject-dependent aberration. They also suggest that prediction has a limited impact on wavefront rms and that taking PT measurements into account in real time improves the performance of AO retinal imaging systems. PMID:27231607
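
    The "simple dynamic models identified from data" can be illustrated with a least-squares autoregressive fit and a one-step-ahead prediction. The AR order, the 80 Hz synthetic trace, and the damped-oscillation form below are illustrative assumptions, not the paper's actual model structure:

```python
import numpy as np

def fit_ar(x, p=2):
    """Least-squares AR(p) fit: x[t] ≈ a1*x[t-1] + ... + ap*x[t-p]."""
    x = np.asarray(x, dtype=float)
    y = x[p:]                                   # targets
    X = np.column_stack([x[p - k - 1: -k - 1]   # lag-(k+1) predictors
                         for k in range(p)])
    return np.linalg.lstsq(X, y, rcond=None)[0]

def predict_next(x, coeffs):
    """One-step-ahead prediction from the most recent samples."""
    p = len(coeffs)
    return float(np.dot(coeffs, x[-1:-p - 1:-1]))

# synthetic 80 Hz displacement trace (damped oscillation + noise)
rng = np.random.default_rng(0)
t = np.arange(400) / 80.0
trace = (0.2 * np.exp(-0.5 * t) * np.cos(2 * np.pi * 1.5 * t)
         + 0.002 * rng.standard_normal(400))
a = fit_ar(trace, p=2)
x_hat = predict_next(trace, a)   # predicted displacement one sample ahead
```

    In an AO loop, the prediction horizon would be chosen to match the measured PT-to-correction delay, with one model each for the horizontal and vertical components.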

  4. Facets and mechanisms of adaptive pain behavior: predictive regulation and action

    PubMed Central

    Morrison, India; Perini, Irene; Dunham, James

    2013-01-01

    Neural mechanisms underlying nociception and pain perception are considered to serve the ultimate goal of limiting tissue damage. However, since pain usually occurs in complex environments and situations that call for elaborate control over behavior, simple avoidance is insufficient to explain a range of mammalian pain responses, especially in the presence of competing goals. In this integrative review we propose a Predictive Regulation and Action (PRA) model of acute pain processing. It emphasizes evidence that the nervous system is organized to anticipate potential pain and to adjust behavior before the risk of tissue damage becomes critical. Regulatory processes occur on many levels, and can be dynamically influenced by local interactions or by modulation from other brain areas in the network. The PRA model centers on neural substrates supporting the predictive nature of pain processing, as well as on finely-calibrated yet versatile regulatory processes that ultimately affect behavior. We outline several operational categories of pain behavior, from spinally-mediated reflexes to adaptive voluntary action, situated at various neural levels. An implication is that neural processes that track potential tissue damage in terms of behavioral consequences are an integral part of pain perception. PMID:24348358

  5. Predictive simulation of wind turbine wake interaction with an adaptive lattice Boltzmann method for moving boundaries

    NASA Astrophysics Data System (ADS)

    Deiterding, Ralf; Wood, Stephen L.

    2015-11-01

    Operating horizontal axis wind turbines create large-scale turbulent wake structures that affect the power output of downwind turbines considerably. The computational prediction of this phenomenon is challenging as efficient low dissipation schemes are necessary that represent the vorticity production by the moving structures accurately and are able to transport wakes without significant artificial decay over distances of several rotor diameters. We have developed the first version of a parallel adaptive lattice Boltzmann method for large eddy simulation of turbulent weakly compressible flows with embedded moving structures that considers these requirements rather naturally and enables first principle simulations of wake-turbine interaction phenomena at reasonable computational costs. The presentation will describe the employed algorithms and present relevant verification and validation computations. For instance, power and thrust coefficients of a Vestas V27 turbine are predicted within 5% of the manufacturer's specifications. Simulations of three Vestas V27-225kW turbines in triangular arrangement analyze the reduction in power production due to upstream wake generation for different inflow conditions.

  6. Effects of Protein Conformation in Docking: Improved Pose Prediction through Protein Pocket Adaptation

    PubMed Central

    Jain, Ajay N.

    2009-01-01

    Computational methods for docking ligands have been shown to be remarkably dependent on precise protein conformation, where acceptable results in pose prediction have been generally possible only in the artificial case of re-docking a ligand into a protein binding site whose conformation was determined in the presence of the same ligand (the “cognate” docking problem). In such cases, on well curated protein/ligand complexes, accurate dockings can be returned as top-scoring over 75% of the time using tools such as Surflex-Dock. A critical application of docking in modeling for lead optimization requires accurate pose prediction for novel ligands, ranging from simple synthetic analogs to very different molecular scaffolds. Typical results for widely used programs in the “cross-docking case” (making use of a single fixed protein conformation) have rates closer to 20% success. By making use of protein conformations from multiple complexes, Surflex-Dock yields an average success rate of 61% across eight pharmaceutically relevant targets. Following docking, protein pocket adaptation and rescoring identifies single pose families that are correct an average of 67% of the time. Consideration of the best of two pose families (from alternate scoring regimes) yields a 75% mean success rate. PMID:19340588

  7. Eye-pupil displacement and prediction: effects on residual wavefront in adaptive optics retinal imaging.

    PubMed

    Kulcsár, Caroline; Raynaud, Henri-François; Garcia-Rissmann, Aurea

    2016-03-01

    This paper studies the effect of pupil displacements on the best achievable performance of retinal imaging adaptive optics (AO) systems, using 52 trajectories of horizontal and vertical displacements sampled at 80 Hz by a pupil tracker (PT) device on 13 different subjects. This effect is quantified in the form of the minimal root mean square (rms) of the residual phase affecting image formation, as a function of the delay between PT measurement and wavefront correction. It is shown that simple dynamic models identified from data can be used to predict horizontal and vertical pupil displacements with greater accuracy (in terms of average rms) over short-term time horizons. The potential impact of these improvements on residual wavefront rms is investigated. These results make it possible to quantify the part of the disturbances corrected by retinal imaging systems that is caused by relative displacements of an otherwise fixed or slowly varying subject-dependent aberration. They also suggest that prediction has a limited impact on wavefront rms and that taking PT measurements into account in real time improves the performance of AO retinal imaging systems. PMID:27231607

  8. A Parallel Implicit Adaptive Mesh Refinement Algorithm for Predicting Unsteady Fully-Compressible Reactive Flows

    NASA Astrophysics Data System (ADS)

    Northrup, Scott A.

    A new parallel implicit adaptive mesh refinement (AMR) algorithm is developed for the prediction of unsteady behaviour of laminar flames. The scheme is applied to the solution of the system of partial-differential equations governing time-dependent, two- and three-dimensional, compressible laminar flows for reactive thermally perfect gaseous mixtures. A high-resolution finite-volume spatial discretization procedure is used to solve the conservation form of these equations on body-fitted multi-block hexahedral meshes. A local preconditioning technique is used to remove numerical stiffness and maintain solution accuracy for low-Mach-number, nearly incompressible flows. A flexible block-based octree data structure has been developed and is used to facilitate automatic solution-directed mesh adaptation according to physics-based refinement criteria. The data structure also enables an efficient and scalable parallel implementation via domain decomposition. The parallel implicit formulation makes use of a dual-time-stepping like approach with an implicit second-order backward discretization of the physical time, in which a Jacobian-free inexact Newton method with a preconditioned generalized minimal residual (GMRES) algorithm is used to solve the system of nonlinear algebraic equations arising from the temporal and spatial discretization procedures. An additive Schwarz global preconditioner is used in conjunction with block incomplete LU type local preconditioners for each sub-domain. The Schwarz preconditioning and block-based data structure readily allow efficient and scalable parallel implementations of the implicit AMR approach on distributed-memory multi-processor architectures. The scheme was applied to solutions of steady and unsteady laminar diffusion and premixed methane-air combustion and was found to accurately predict key flame characteristics. For a premixed flame under terrestrial gravity, the scheme accurately predicted the frequency of the natural

  9. Code requirements document: MODFLOW 2.1: A program for predicting moderator flow patterns

    SciTech Connect

    Peterson, P.F.; Paik, I.K.

    1992-03-01

    Sudden changes in the temperature of flowing liquids can result in transient buoyancy forces which strongly impact the flow hydrodynamics via flow stratification. These effects have been studied for the case of potential flow of stratified liquids to line sinks, but not for moderator flow in SRS reactors. Standard codes, such as TRAC and COMMIX, do not have the capability to capture the stratification effect, due to strong numerical diffusion which smears away the hot/cold fluid interface. A related problem with standard codes is the inability to track plumes injected into the liquid flow, again due to numerical diffusion. The combined effects of buoyant stratification and plume dispersion have been identified as being important in operation of the Supplementary Safety System which injects neutron-poison ink into SRS reactors to provide safe shutdown in the event of safety rod failure. The MODFLOW code discussed here provides transient moderator flow pattern information with stratification effects, and tracks the location of ink plumes in the reactor. The code, written in Fortran, is compiled for Macintosh II computers, and includes subroutines for interactive control and graphical output. Removing the graphics capabilities, the code can also be compiled on other computers. With graphics, in addition to the capability to perform safety related computations, MODFLOW also provides an easy tool for becoming familiar with flow distributions in SRS reactors.

  10. Code requirements document: MODFLOW 2. 1: A program for predicting moderator flow patterns

    SciTech Connect

    Peterson, P.F. . Dept. of Nuclear Engineering); Paik, I.K. )

    1992-03-01

    Sudden changes in the temperature of flowing liquids can result in transient buoyancy forces which strongly impact the flow hydrodynamics via flow stratification. These effects have been studied for the case of potential flow of stratified liquids to line sinks, but not for moderator flow in SRS reactors. Standard codes, such as TRAC and COMMIX, do not have the capability to capture the stratification effect, due to strong numerical diffusion which smears away the hot/cold fluid interface. A related problem with standard codes is the inability to track plumes injected into the liquid flow, again due to numerical diffusion. The combined effects of buoyant stratification and plume dispersion have been identified as being important in operation of the Supplementary Safety System which injects neutron-poison ink into SRS reactors to provide safe shutdown in the event of safety rod failure. The MODFLOW code discussed here provides transient moderator flow pattern information with stratification effects, and tracks the location of ink plumes in the reactor. The code, written in Fortran, is compiled for Macintosh II computers, and includes subroutines for interactive control and graphical output. Removing the graphics capabilities, the code can also be compiled on other computers. With graphics, in addition to the capability to perform safety related computations, MODFLOW also provides an easy tool for becoming familiar with flow distributions in SRS reactors.

  11. Adaptation of Sediment Connectivity Index for Swedish catchments and application for flood prediction of roads

    NASA Astrophysics Data System (ADS)

    Cantone, Carolina; Kalantari, Zahra; Cavalli, Marco; Crema, Stefano

    2016-04-01

    Climate changes are predicted to increase precipitation intensities and the occurrence of extreme rainfall events in the near future. Scandinavia has been identified as one of the most sensitive regions in Europe to such changes; therefore, an increase in the risk of flooding, landslides and soil erosion is to be expected in Sweden as well. An increase in the occurrence of extreme weather events will impose greater strain on the built environment and on major transport infrastructure such as roads and railways. This research aimed to identify the risk of flooding at road-stream intersections, crucial locations where water and debris can accumulate and cause failure of the existing drainage facilities. Two regions in southwest Sweden, affected by an extreme rainfall event in August 2014, were used for calibrating and testing a statistical flood prediction model. A set of Physical Catchment Descriptors (PCDs) including road and catchment characteristics was identified for the modelling. Moreover, a GIS-based topographic Index of Sediment Connectivity (IC) was used as a PCD. The novelty of this study lies in the adaptation of IC to describe sediment connectivity in lowland areas, taking into account the contribution of soil type, land use and different patterns of precipitation during the event. A weighting factor for IC was derived from runoff estimated with the SCS Curve Number method, assuming a constant value of precipitation for a given time period corresponding to the critical event. The Digital Elevation Model of the study site was reconditioned at the drainage facility locations to consider the real flow path in the analysis. These modifications highlighted the role of rainfall patterns and surface runoff in modelling sediment delivery in lowland areas. Moreover, it was observed that integrating IC into the statistical prediction model increased its accuracy and performance.
After the calibration procedure in one of the study areas, the model was
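
    The weighting step described above rests on the standard SCS Curve Number relation, which is compact enough to state directly. In the hedged sketch below, the curve numbers and storm depth are invented values for illustration, and the 0.2 initial-abstraction ratio is the customary default, not a value taken from the study.

```python
def scs_runoff_mm(precip_mm, curve_number, lambda_ia=0.2):
    """Direct runoff depth Q (mm) by the SCS Curve Number method.

    S  = 25400/CN - 254            potential maximum retention (mm)
    Ia = lambda_ia * S             initial abstraction (0.2*S is customary)
    Q  = (P - Ia)^2 / (P - Ia + S) for P > Ia, else 0
    """
    s = 25400.0 / curve_number - 254.0
    ia = lambda_ia * s
    if precip_mm <= ia:
        return 0.0
    return (precip_mm - ia) ** 2 / (precip_mm - ia + s)

# For the same storm depth, a less permeable catchment (higher CN)
# produces far more runoff -- the basis of a runoff-based IC weighting.
q_high_cn = scs_runoff_mm(60.0, 85)   # roughly 27 mm
q_low_cn = scs_runoff_mm(60.0, 60)    # roughly 3.5 mm
```

    Weighting IC by such runoff estimates lets soil type and land use (via CN) modulate connectivity, which is the adaptation the abstract describes for lowland catchments.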

  12. Thermal treatments of foods: a predictive general-purpose code for heat and mass transfer

    NASA Astrophysics Data System (ADS)

    Barba, Anna Angela

    2005-05-01

    Thermal treatments of foods require accurate processing protocols. In this context, mathematical modeling of heat and mass transfer can play an important role in the control and definition of process parameters as well as in the design of processing systems. In this work a code able to simulate heat and mass transfer phenomena within solid bodies has been developed. The code has been written with the ability to describe different geometries, and it can account for many different kinds of initial/boundary conditions. Transport phenomena within multi-layer bodies can be described, and time/position-dependent material parameters can be implemented. Finally, the code has been validated by comparison with a problem for which the analytical solution is known, and by comparison with a differential scanning calorimetry signal describing the heating treatment of a raw potato (Solanum tuberosum).
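
    A minimal sketch of the kind of conduction calculation such a code performs: explicit (FTCS) finite differences on a 1D slab with fixed surface temperatures, a crude analogue of immersing a food item in hot water. All geometry and property values below are invented for illustration; a real food-process code would add mass transfer, multiple layers, and temperature-dependent properties.

```python
def heat_step(temps, fourier):
    """One explicit (FTCS) update of 1D heat conduction with
    fixed-temperature boundary nodes.

    fourier = alpha * dt / dx**2 must be <= 0.5 for stability.
    """
    new = temps[:]
    for i in range(1, len(temps) - 1):
        new[i] = temps[i] + fourier * (
            temps[i + 1] - 2.0 * temps[i] + temps[i - 1]
        )
    return new

# Slab initially at 20 C with both surfaces suddenly held at 100 C.
T = [100.0] + [20.0] * 9 + [100.0]
for _ in range(200):
    T = heat_step(T, fourier=0.4)
centre = T[len(T) // 2]   # approaches 100 C as the slab equilibrates
```

    Validation against a known analytical solution, as the abstract describes, would compare this centre-node history with the classical series solution for a suddenly heated slab.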

  13. Performance Improvement of the Goertzel Algorithm in Estimating of Protein Coding Regions Using Modified Anti-notch Filter and Linear Predictive Coding Model

    PubMed Central

    Farsani, Mahsa Saffari; Sahhaf, Masoud Reza Aghabozorgi; Abootalebi, Vahid

    2016-01-01

    The aim of this paper is to improve the performance of the conventional Goertzel algorithm in determining the protein coding regions in deoxyribonucleic acid (DNA) sequences. First, the symbolic DNA sequences are converted into numerical signals using the electron ion interaction potential method. Then, by combining the modified anti-notch filter and a linear predictive coding model, we propose an efficient algorithm to achieve a performance improvement over the Goertzel algorithm for estimating genetic regions. Finally, a thresholding method is applied to precisely identify the exon and intron regions. The proposed algorithm is applied to several genes, including genes available in the BG570 and HMR195 databases, and the results are compared to other methods based on nucleotide-level evaluation criteria. Results demonstrate that our proposed method reduces the number of nucleotides incorrectly estimated to be in the noncoding region. In addition, the area under the receiver operating characteristic curve improved by factors of 1.35 and 1.12 on the HMR195 and BG570 datasets respectively, in comparison with the conventional Goertzel algorithm. PMID:27563569
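
    The Goertzel recurrence at a single frequency bin is simple enough to sketch. The hedged example below is not the authors' code: the EIIP values are the commonly quoted ones, the test sequences are invented, and the anti-notch filtering and LPC refinements of the paper are not reproduced. It evaluates spectral power at the period-3 frequency, the classic signature of protein-coding regions.

```python
import math

def goertzel_power(signal, k):
    """Spectral power |X[k]|**2 of `signal` at DFT bin k, via the
    Goertzel second-order recurrence (O(N) per bin)."""
    n = len(signal)
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for x in signal:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

# Electron ion interaction potential (EIIP) mapping, as commonly quoted.
EIIP = {"A": 0.1260, "C": 0.1340, "G": 0.0806, "T": 0.1335}

def period3_power(dna):
    """Power at bin N/3 (the period-3 frequency), a classic exon marker."""
    x = [EIIP[b] for b in dna]
    mean = sum(x) / len(x)
    x = [v - mean for v in x]          # remove the DC component
    return goertzel_power(x, len(x) / 3.0)

coding_like = "ATGGCC" * 30             # strong 3-base periodicity
background = "ATGCGTACGTTAGCCAGT" * 10  # same length, weaker periodicity
```

    Thresholding this power along a sliding window is the basic exon/intron discriminator that the paper's modifications improve upon.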

  14. Performance Improvement of the Goertzel Algorithm in Estimating of Protein Coding Regions Using Modified Anti-notch Filter and Linear Predictive Coding Model.

    PubMed

    Farsani, Mahsa Saffari; Sahhaf, Masoud Reza Aghabozorgi; Abootalebi, Vahid

    2016-01-01

    The aim of this paper is to improve the performance of the conventional Goertzel algorithm in determining the protein coding regions in deoxyribonucleic acid (DNA) sequences. First, the symbolic DNA sequences are converted into numerical signals using the electron ion interaction potential method. Then, by combining the modified anti-notch filter and a linear predictive coding model, we propose an efficient algorithm to achieve a performance improvement over the Goertzel algorithm for estimating genetic regions. Finally, a thresholding method is applied to precisely identify the exon and intron regions. The proposed algorithm is applied to several genes, including genes available in the BG570 and HMR195 databases, and the results are compared to other methods based on nucleotide-level evaluation criteria. Results demonstrate that our proposed method reduces the number of nucleotides incorrectly estimated to be in the noncoding region. In addition, the area under the receiver operating characteristic curve improved by factors of 1.35 and 1.12 on the HMR195 and BG570 datasets respectively, in comparison with the conventional Goertzel algorithm. PMID:27563569

  15. The expression level of small non-coding RNAs derived from the first exon of protein-coding genes is predictive of cancer status

    PubMed Central

    Zovoilis, Athanasios; Mungall, Andrew J; Moore, Richard; Varhol, Richard; Chu, Andy; Wong, Tina; Marra, Marco; Jones, Steven JM

    2014-01-01

    Small non-coding RNAs (smRNAs) are known to be significantly enriched near the transcriptional start sites of genes. However, the functional relevance of these smRNAs remains unclear, and they have not been associated with human disease. Within The Cancer Genome Atlas (TCGA) project, we have generated small RNA datasets for many tumor types. In prior cancer studies, these RNAs have been regarded as transcriptional “noise,” due to their apparent chaotic distribution. In contrast, we demonstrate their striking potential to distinguish efficiently between cancer and normal tissues and to classify patients with cancer into subgroups with distinct survival outcomes. This potential to predict cancer status is restricted to a subset of these smRNAs, which is encoded within the first exon of genes, highly enriched within CpG islands, and negatively correlated with DNA methylation levels. Thus, our data show that genome-wide changes in the expression levels of small non-coding RNAs within first exons are associated with cancer. PMID:24534129

  16. A Parallel Ocean Model With Adaptive Mesh Refinement Capability For Global Ocean Prediction

    SciTech Connect

    Herrnstein, A

    2005-09-08

    An ocean model with adaptive mesh refinement (AMR) capability is presented for simulating ocean circulation on decade time scales. The model closely resembles the LLNL ocean general circulation model with some components incorporated from other well known ocean models when appropriate. Spatial components are discretized using finite differences on a staggered grid where tracer and pressure variables are defined at cell centers and velocities at cell vertices (B-grid). Horizontal motion is modeled explicitly with leapfrog and Euler forward-backward time integration, and vertical motion is modeled semi-implicitly. New AMR strategies are presented for horizontal refinement on a B-grid, leapfrog time integration, and time integration of coupled systems with unequal time steps. These AMR capabilities are added to the LLNL software package SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) and validated with standard benchmark tests. The ocean model is built on top of the amended SAMRAI library. The resulting model has the capability to dynamically increase resolution in localized areas of the domain. Limited basin tests are conducted using various refinement criteria and produce convergence trends in the model solution as refinement is increased. Carbon sequestration simulations are performed on decade time scales in domains the size of the North Atlantic and the global ocean. A suggestion is given for refinement criteria in such simulations. AMR predicts maximum pH changes and increases in CO2 concentration near the injection sites that are virtually unattainable with a uniform high resolution due to extremely long run times. Fine scale details near the injection sites are achieved by AMR with shorter run times than the finest uniform resolution tested despite the need for enhanced parallel performance. The North Atlantic simulations show a reduction in passive tracer errors when AMR is applied instead of a uniform coarse resolution. No
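
    The regrid cycle at the heart of such a model is conceptually small: flag cells that meet a refinement criterion, then subdivide them. The 1D sketch below is a toy illustration only; SAMRAI's actual patch-based machinery is far more involved, and the gradient criterion used here is an assumption, not one taken from the paper.

```python
def flag_cells(values, dx, grad_tol):
    """Indices of cells whose local gradient exceeds grad_tol -- the kind
    of feature-based criterion applied each regrid cycle."""
    return [
        i
        for i in range(len(values) - 1)
        if abs(values[i + 1] - values[i]) / dx > grad_tol
    ]

def refine(values, flags):
    """Insert a linearly interpolated midpoint after each flagged cell
    (a 1D stand-in for subdividing a flagged patch)."""
    flagged = set(flags)
    out = []
    for i, v in enumerate(values):
        out.append(v)
        if i in flagged and i + 1 < len(values):
            out.append(0.5 * (v + values[i + 1]))
    return out

# Tracer concentration with a sharp plume near an "injection site":
conc = [0.0, 0.0, 0.0, 1.0, 5.0, 1.0, 0.0, 0.0]
flags = flag_cells(conc, dx=1.0, grad_tol=2.0)   # -> [3, 4]
fine = refine(conc, flags)   # resolution doubled only around the plume
```

    Concentrating resolution around the injection site in this way is what lets AMR resolve fine plume details at a fraction of the cost of a uniformly fine grid.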

  17. Organizational Changes to Thyroid Regulation in Alligator mississippiensis: Evidence for Predictive Adaptive Responses

    PubMed Central

    Boggs, Ashley S. P.; Lowers, Russell H.; Cloy-McCoy, Jessica A.; Guillette, Louis J.

    2013-01-01

    During embryonic development, organisms are sensitive to changes in thyroid hormone signaling which can reset the hypothalamic-pituitary-thyroid axis. It has been hypothesized that this developmental programming is a ‘predictive adaptive response’, a physiological adjustment in accordance with the embryonic environment that will best aid an individual's survival in a similar postnatal environment. When the embryonic environment is a poor predictor of the external environment, the developmental changes are no longer adaptive and can result in disease states. We predicted that endocrine disrupting chemicals (EDCs) and environmentally-based iodide imbalance could lead to developmental changes to the thyroid axis. To explore whether iodide or EDCs could alter developmental programming, we collected American alligator eggs from an estuarine environment with high iodide availability and elevated thyroid-specific EDCs, a freshwater environment contaminated with elevated agriculturally derived EDCs, and a reference freshwater environment. We then incubated them under identical conditions. We examined plasma thyroxine and triiodothyronine concentrations, thyroid gland histology, plasma inorganic iodide, and somatic growth at one week (before external nutrition) and ten months after hatching (on identical diets). Neonates from the estuarine environment were thyrotoxic, expressing follicular cell hyperplasia (p = 0.01) and elevated plasma triiodothyronine concentrations (p = 0.0006) closely tied to plasma iodide concentrations (p = 0.003). Neonates from the freshwater contaminated site were hypothyroid, expressing thyroid follicular cell hyperplasia (p = 0.01) and depressed plasma thyroxine concentrations (p = 0.008). Following a ten month growth period under identical conditions, thyroid histology (hyperplasia p = 0.04; colloid depletion p = 0.01) and somatic growth (body mass p<0.0001; length p = 0.02) remained altered among the

  18. An overview of the activities of the OECD/NEA Task Force on adapting computer codes in nuclear applications to parallel architectures

    SciTech Connect

    Kirk, B.L.; Sartori, E.

    1997-06-01

    Subsequent to the introduction of High Performance Computing in the developed countries, the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) created the Task Force on Adapting Computer Codes in Nuclear Applications to Parallel Architectures (under the guidance of the Nuclear Science Committee's Working Party on Advanced Computing) to study the growth area in supercomputing and its applicability to the nuclear community's computer codes. The result has been four years of investigation for the Task Force in different subject fields - deterministic and Monte Carlo radiation transport, computational mechanics and fluid dynamics, nuclear safety, atmospheric models and waste management.

  19. A new code for predicting the thermo-mechanical and irradiation behavior of metallic fuels in sodium fast reactors

    NASA Astrophysics Data System (ADS)

    Karahan, Aydın; Buongiorno, Jacopo

    2010-01-01

    An engineering code to predict the irradiation behavior of U-Zr and U-Pu-Zr metallic alloy fuel pins and UO2-PuO2 mixed oxide fuel pins in sodium-cooled fast reactors was developed. The code was named Fuel Engineering and Structural analysis Tool (FEAST). FEAST has several modules working in coupled form with an explicit numerical algorithm. These modules describe fission gas release and fuel swelling, fuel chemistry and restructuring, temperature distribution, fuel-clad chemical interaction, and fuel and clad mechanical analysis including transient creep-fracture for the clad. Given the fuel pin geometry, composition and irradiation history, FEAST can analyze fuel and clad thermo-mechanical behavior in both steady-state and design-basis (non-disruptive) transient scenarios. FEAST was written in FORTRAN-90 and has a simple input file similar to that of the LWR fuel code FRAPCON. The metal-fuel version is called FEAST-METAL and is described in this paper; the oxide-fuel version, FEAST-OXIDE, is described in a companion paper. Compared with the old Argonne National Laboratory code LIFE-METAL and other codes of the same generation, FEAST-METAL emphasizes more mechanistic, less empirical models whenever available. Specifically, fission gas release and swelling are modeled with the GRSIS algorithm, which is based on detailed tracking of fission gas bubbles within the metal fuel. Migration of the fuel constituents is modeled by means of thermo-transport theory. Fuel-clad chemical interaction models based on precipitation kinetics were developed for steady-state operation and transients. Finally, a transient intergranular creep-fracture model for the clad, which tracks the nucleation and growth of cavities at the grain boundaries, was developed and implemented in the code. Reducing the empiricism in the constitutive models should make it more acceptable to extrapolate FEAST-METAL to new fuel compositions and higher burnup, as envisioned in advanced sodium reactors

  20. Lifting scheme-based method for joint coding 3D stereo digital cinema with luminance correction and optimized prediction

    NASA Astrophysics Data System (ADS)

    Darazi, R.; Gouze, A.; Macq, B.

    2009-01-01

    Reproducing a natural and real scene as we see it in the real world every day is becoming more and more popular. Stereoscopic and multi-view techniques are used to this end. However, because more information is displayed, supporting technologies such as digital compression are required to ensure the storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed. The original left and right images are jointly coded. The main idea is to optimally exploit the existing correlation between the two images. This is done by the design of an efficient transform that reduces the existing redundancy in the stereo image pair. This approach was inspired by the Lifting Scheme (LS). The novelty in our work is that the prediction step has been replaced by a hybrid step that consists of disparity compensation followed by luminance correction and an optimized prediction step. The proposed scheme can be used for lossless and lossy coding. Experimental results show improvement in terms of performance and complexity compared to recently proposed methods.
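
    The lifting structure the authors build on splits a signal into even and odd samples, predicts the odds from the evens, and updates the evens with the residuals; invertibility is automatic because each step is undone exactly in reverse order. The 1D sketch below uses a plain linear predictor; the paper's contribution is precisely to replace this predict step with disparity compensation plus luminance correction, which is not reproduced here.

```python
def lifting_forward(signal):
    """One level of a linear (5/3-style) lifting transform.

    split -> predict (odd samples predicted from even neighbours)
          -> update (even samples adjusted by the prediction residuals).
    Assumes an even-length signal; boundaries are handled by clamping.
    """
    even = signal[0::2]
    odd = signal[1::2]
    # Predict: each odd sample from the average of its even neighbours.
    detail = [
        o - 0.5 * (even[i] + even[min(i + 1, len(even) - 1)])
        for i, o in enumerate(odd)
    ]
    # Update: smooth the even samples with the residuals.
    approx = [
        e + 0.25 * (detail[max(i - 1, 0)] + detail[i])
        for i, e in enumerate(even)
    ]
    return approx, detail

def lifting_inverse(approx, detail):
    """Exact inverse: undo the update, undo the predict, interleave."""
    even = [
        a - 0.25 * (detail[max(i - 1, 0)] + detail[i])
        for i, a in enumerate(approx)
    ]
    odd = [
        d + 0.5 * (even[i] + even[min(i + 1, len(even) - 1)])
        for i, d in enumerate(detail)
    ]
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])
    return out
```

    Because each lifting step is subtracted out exactly on inversion, any predictor, including a disparity-compensated one operating across the stereo pair, preserves the lossless property the abstract claims.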