Science.gov

Sample records for dynamic neural fields

  1. Metastable dynamics in heterogeneous neural fields

    PubMed Central

    Schwappach, Cordula; Hutt, Axel; beim Graben, Peter

    2015-01-01

    We present numerical simulations of metastable states in heterogeneous neural fields that are connected along heteroclinic orbits. Such trajectories are possible representations of transient neural activity as observed, for example, in the electroencephalogram. Based on previous theoretical findings on learning algorithms for neural fields, we directly construct synaptic weight kernels from Lotka-Volterra neural population dynamics without supervised training approaches. We deliver a MATLAB neural field toolbox validated by two examples of one- and two-dimensional neural fields. We demonstrate trial-to-trial variability and distributed representations in our simulations, which might therefore be regarded as a proof of concept for more advanced neural field models of metastable dynamics in neurophysiological data. PMID:26175671
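
    The kernel construction above rests on Lotka-Volterra population dynamics whose saddle states are visited in sequence. A minimal numpy sketch of such winnerless-competition dynamics (not the authors' MATLAB toolbox; the growth rates, inhibition matrix and small noise floor are illustrative assumptions) is:

      # Minimal sketch (not the authors' toolbox): a generalized Lotka-Volterra
      # model whose asymmetric inhibition matrix produces a sequence of
      # metastable states (winnerless competition).
      import numpy as np

      rng = np.random.default_rng(0)
      sigma = np.ones(3)                         # growth rates
      rho = np.array([[1.0, 0.5, 2.0],           # asymmetric inhibition:
                      [2.0, 1.0, 0.5],           # each saddle is unstable to
                      [0.5, 2.0, 1.0]])          # exactly one competitor
      xi = np.array([0.9, 0.05, 0.05])           # start near the first state
      dt, steps = 0.01, 60000
      traj = np.empty((steps, 3))
      for t in range(steps):
          # dxi_i/dt = xi_i * (sigma_i - sum_j rho_ij xi_j), plus a tiny
          # positive noise floor that keeps the heteroclinic switching going
          xi = xi + dt * xi * (sigma - rho @ xi) + dt * 1e-6 * rng.random(3)
          traj[t] = xi
      # Inspecting traj shows each population transiently dominating in turn.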

  2. Neural field theory with variance dynamics.

    PubMed

    Robinson, P A

    2013-06-01

    Previous neural field models have mostly been concerned with prediction of mean neural activity and with second order quantities such as its variance, but without feedback of second order quantities on the dynamics. Here the effects of feedback of the variance on the steady states and adiabatic dynamics of neural systems are calculated using linear neural field theory to estimate the neural voltage variance, then including this quantity in the total variance parameter of the nonlinear firing rate-voltage response function, and thus in the determination of the fixed points and of the variance itself. The general results further clarify the limits of validity of approaches with and without inclusion of variance dynamics. Specific applications show that stability against a saddle-node bifurcation is reduced in a purely cortical system, but can be either increased or decreased in the corticothalamic case, depending on the initial state. Estimates of critical variance scalings near the saddle-node bifurcation are also found, including physiologically based normalizations and new scalings for the mean firing rate and the position of the bifurcation.
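
    One plausible way to write the feedback loop sketched above, using the standard sigmoidal firing rate-voltage response of neural field theory (the notation and the additive decomposition are generic assumptions, not necessarily the paper's exact formulation):

      \[
        Q(V) \;=\; \frac{Q_{\max}}{1 + \exp\!\bigl[-(V-\theta)/\sigma_{\mathrm{tot}}\bigr]},
        \qquad
        \sigma_{\mathrm{tot}}^{2} \;=\; \sigma_{0}^{2} + \sigma_{V}^{2},
      \]

    where sigma_V^2 is the voltage variance estimated from the linearized field theory. Because sigma_V^2 itself depends on the operating point, the fixed points and the variance must be determined self-consistently, which is how the variance feeds back onto the steady states and bifurcations discussed above.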

  3. Investigation on Amari's dynamical neural field with global constant inhibition.

    PubMed

    Jin, Dequan; Peng, Jigen

    2015-11-01

    In this paper, the properties of Amari's dynamical neural field with global constant inhibition induced by its kernel are investigated. Amari's dynamical neural field successfully illustrates many neurophysiological phenomena and has in recent years been applied to unsupervised learning tasks such as data clustering. In these applications, the stationary solution to Amari's dynamical neural field plays an important role: the underlying patterns being perceived are usually represented by its excited region. However, the type of stationary solution obtained with a typical kernel is often sensitive to the kernel parameters, which limits the field's range of application. In contrast to fields with typical kernels, which have been discussed extensively, there are few theoretical results on dynamical neural fields with a global constant inhibitory kernel, even though such fields have already shown better performance in practice. In this paper, several results on the existence and stability of stationary solutions to dynamical neural fields with a global constant inhibitory kernel are obtained. These results show that this kind of dynamical neural field has greater potential for tasks such as data clustering than fields with typical kernels, which provides a theoretical basis for its further application.
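
    For reference, the field discussed above is commonly written in the following Amari form; the split of the kernel into a local excitatory part and a global constant inhibition follows the abstract, while the remaining symbols are generic:

      \[
        \tau\,\frac{\partial u(x,t)}{\partial t}
        \;=\; -\,u(x,t) \;+\; \int_{\Omega} w(x-y)\, f\bigl(u(y,t)\bigr)\,dy \;+\; h \;+\; s(x,t),
        \qquad
        w(x) \;=\; k(x) - H,
      \]

    with f a Heaviside (or steep sigmoid) firing function, h a resting level, s an external input, k a non-negative local excitatory kernel and H > 0 the global constant inhibition; the "excited region" mentioned above is the set of positions where u(x,t) > 0.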

  4. Neural Population Dynamics Modeled by Mean-Field Graphs

    NASA Astrophysics Data System (ADS)

    Kozma, Robert; Puljic, Marko

    2011-09-01

    In this work we apply a random graph theory approach to describe neural population dynamics. Such an approach has important advantages that complement ordinary and partial differential equations. The mathematical theory of large-scale random graphs provides an efficient tool to describe transitions between high- and low-dimensional spaces. Recent advances in studying neural correlates of higher cognition indicate the significance of sudden changes in space-time neurodynamics, which can be efficiently described as phase transitions in the neuropil medium. Phase transitions are rigorously defined mathematically on random graph sequences, and they can be naturally generalized to a class of percolation processes called neuropercolation. In this work we employ mean-field graphs with given vertex degree and edge strength distributions. We demonstrate the emergence of collective oscillations reminiscent of those observed in brains.

  5. Dynamic neural fields as a step toward cognitive neuromorphic architectures

    PubMed Central

    Sandamirskaya, Yulia

    2014-01-01

    Dynamic Field Theory (DFT) is an established framework for modeling embodied cognition. In DFT, elementary cognitive functions such as memory formation, formation of grounded representations, attentional processes, decision making, adaptation, and learning emerge from neuronal dynamics. The basic computational element of this framework is a Dynamic Neural Field (DNF). Under constraints on the time-scale of the dynamics, the DNF is computationally equivalent to a soft winner-take-all (WTA) network, which is considered one of the basic computational units in neuronal processing. Recently, it has been shown how a WTA network may be implemented in neuromorphic hardware, such as an analog Very Large Scale Integration (VLSI) device. This paper leverages the relationship between DFT and soft WTA networks to systematically revise and integrate established DFT mechanisms that have previously been spread among different architectures. I also identify some novel computational and architectural mechanisms of DFT which may be implemented in neuromorphic VLSI devices using WTA networks as an intermediate computational layer. These specific mechanisms include the stabilization of working memory, the coupling of sensory systems to motor dynamics, intentionality, and autonomous learning. I further demonstrate how all these elements may be integrated into a unified architecture to generate behavior and autonomous learning. PMID:24478620
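
    As a concrete illustration of the DNF/soft-WTA correspondence described above, the following numpy sketch lets a one-dimensional field with local excitation and global inhibition select between two localized inputs. The kernel shape, sigmoid and all parameter values are illustrative assumptions rather than values from the paper:

      # Illustrative sketch of a 1-D dynamic neural field acting as a soft
      # winner-take-all; kernel shape, gains and inputs are assumed values.
      import numpy as np

      n, dt, tau, h = 101, 1.0, 10.0, -5.0
      x = np.arange(n)
      f = lambda u: 1.0 / (1.0 + np.exp(-4.0 * u))     # firing-rate nonlinearity

      def gauss(center, width, amp):
          return amp * np.exp(-0.5 * ((x - center) / width) ** 2)

      w = gauss(n // 2, 3.0, 4.0) - 1.5                # local excitation, global inhibition
      inp = gauss(30, 4.0, 6.0) + gauss(70, 4.0, 5.0)  # two competing localized inputs

      u = np.full(n, h)
      for _ in range(600):
          # circular convolution of the interaction kernel with the field output
          conv = np.array([np.dot(np.roll(w, c - n // 2), f(u)) for c in range(n)])
          u += dt / tau * (-u + h + inp + conv)
      # With these settings a single self-stabilized peak should survive,
      # typically near x = 30 where the input is stronger (the WTA decision).
      print("active sites:", np.flatnonzero(f(u) > 0.5))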

  6. The dynamic neural field approach to cognitive robotics.

    PubMed

    Erlhagen, Wolfram; Bicho, Estela

    2006-09-01

    This tutorial presents an architecture for autonomous robots to generate behavior in joint action tasks. To efficiently interact with another agent in solving a mutual task, a robot should be endowed with cognitive skills such as memory, decision making, action understanding and prediction. The proposed architecture is strongly inspired by our current understanding of the processing principles and the neuronal circuitry underlying these functionalities in the primate brain. As a mathematical framework, we use a coupled system of dynamic neural fields, each representing the basic functionality of neuronal populations in different brain areas. It implements goal-directed behavior in joint action as a continuous process that builds on the interpretation of observed movements in terms of the partner's action goal. We validate the architecture in two experimental paradigms: (1) a joint search task; (2) a reproduction of an observed or inferred end state of a grasping-placing sequence. We also review some of the mathematical results about dynamic neural fields that are important for the implementation work.

  7. TUTORIAL: The dynamic neural field approach to cognitive robotics

    NASA Astrophysics Data System (ADS)

    Erlhagen, Wolfram; Bicho, Estela

    2006-09-01

    This tutorial presents an architecture for autonomous robots to generate behavior in joint action tasks. To efficiently interact with another agent in solving a mutual task, a robot should be endowed with cognitive skills such as memory, decision making, action understanding and prediction. The proposed architecture is strongly inspired by our current understanding of the processing principles and the neuronal circuitry underlying these functionalities in the primate brain. As a mathematical framework, we use a coupled system of dynamic neural fields, each representing the basic functionality of neuronal populations in different brain areas. It implements goal-directed behavior in joint action as a continuous process that builds on the interpretation of observed movements in terms of the partner's action goal. We validate the architecture in two experimental paradigms: (1) a joint search task; (2) a reproduction of an observed or inferred end state of a grasping-placing sequence. We also review some of the mathematical results about dynamic neural fields that are important for the implementation work.

  8. Neural masses and fields in dynamic causal modeling

    PubMed Central

    Moran, Rosalyn; Pinotsis, Dimitris A.; Friston, Karl

    2013-01-01

    Dynamic causal modeling (DCM) provides a framework for the analysis of effective connectivity among neuronal subpopulations that subtend invasive (electrocorticograms and local field potentials) and non-invasive (electroencephalography and magnetoencephalography) electrophysiological responses. This paper reviews the suite of neuronal population models including neural masses, fields and conductance-based models that are used in DCM. These models are expressed in terms of sets of differential equations that allow one to model the synaptic underpinnings of connectivity. We describe early developments using neural mass models, where convolution-based dynamics are used to generate responses in laminar-specific populations of excitatory and inhibitory cells. We show that these models, though resting on only two simple transforms, can recapitulate the characteristics of both evoked and spectral responses observed empirically. Using an identical neuronal architecture, we show that a set of conductance based models—that consider the dynamics of specific ion-channels—present a richer space of responses; owing to non-linear interactions between conductances and membrane potentials. We propose that conductance-based models may be more appropriate when spectra present with multiple resonances. Finally, we outline a third class of models, where each neuronal subpopulation is treated as a field; in other words, as a manifold on the cortical surface. By explicitly accounting for the spatial propagation of cortical activity through partial differential equations (PDEs), we show that the topology of connectivity—through local lateral interactions among cortical layers—may be inferred, even in the absence of spatially resolved data. We also show that these models allow for a detailed analysis of structure–function relationships in the cortex. Our review highlights the relationship among these models and how the hypothesis asked of empirical data suggests an appropriate

  9. Behavioral dynamics and neural grounding of a dynamic field theory of multi-object tracking.

    PubMed

    Spencer, J P; Barich, K; Goldberg, J; Perone, S

    2012-09-01

    The ability to dynamically track moving objects in the environment is crucial for efficient interaction with the local surrounds. Here, we examined this ability in the context of the multi-object tracking (MOT) task. Several theories have been proposed to explain how people track moving objects; however, only one of these previous theories is implemented in a real-time process model, and there has been no direct contact between theories of object tracking and the growing neural literature using ERPs and fMRI. Here, we present a neural process model of object tracking that builds from a Dynamic Field Theory of spatial cognition. Simulations reveal that our dynamic field model captures recent behavioral data examining the impact of speed and tracking duration on MOT performance. Moreover, we show that the same model with the same trajectories and parameters can shed light on recent ERP results probing how people distribute attentional resources to targets vs. distractors. We conclude by comparing this new theory of object tracking to other recent accounts, and discuss how the neural grounding of the theory might be effectively explored in future work.

  10. Neural field simulator: two-dimensional spatio-temporal dynamics involving finite transmission speed

    PubMed Central

    Nichols, Eric J.; Hutt, Axel

    2015-01-01

    Neural Field models (NFM) play an important role in the understanding of neural population dynamics on a mesoscopic spatial and temporal scale. Their numerical simulation is an essential element in the analysis of their spatio-temporal dynamics. The simulation tool described in this work considers scalar spatially homogeneous neural fields taking into account a finite axonal transmission speed and synaptic temporal derivatives of first and second order. A text-based interface offers complete control of field parameters and several approaches are used to accelerate simulations. A graphical output utilizes video hardware acceleration to display running output with reduced computational hindrance compared to simulators that are exclusively software-based. Diverse applications of the tool demonstrate breather oscillations, static and dynamic Turing patterns and activity spreading with finite propagation speed. The simulator is open source to allow tailoring of the code, and this is illustrated with an extension use case. PMID:26539105
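
    A generic form of the model class the simulator covers (scalar, spatially homogeneous field, finite axonal speed, first- or second-order synaptic temporal dynamics) can be written as follows; the notation is assumed and may differ from the simulator's documentation:

      \[
        \Bigl(\tfrac{1}{\alpha}\,\partial_t + 1\Bigr)^{\!n}\, V(\mathbf{x},t)
        \;=\; \int_{\Omega} K(\mathbf{x}-\mathbf{y})\,
              S\!\Bigl(V\bigl(\mathbf{y},\, t - \tfrac{|\mathbf{x}-\mathbf{y}|}{c}\bigr)\Bigr)\, d\mathbf{y}
              \;+\; I(\mathbf{x},t),
      \]

    with n = 1 or 2 the order of the synaptic temporal operator, c the finite axonal transmission speed, K the homogeneous spatial kernel, S the firing-rate function and I an external input.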

  11. The Dynamic Brain: From Spiking Neurons to Neural Masses and Cortical Fields

    PubMed Central

    Deco, Gustavo; Jirsa, Viktor K.; Robinson, Peter A.; Breakspear, Michael; Friston, Karl

    2008-01-01

    The cortex is a complex system, characterized by its dynamics and architecture, which underlie many functions such as action, perception, learning, language, and cognition. Its structural architecture has been studied for more than a hundred years; however, its dynamics have been addressed much less thoroughly. In this paper, we review and integrate, in a unifying framework, a variety of computational approaches that have been used to characterize the dynamics of the cortex, as evidenced at different levels of measurement. Computational models at different space–time scales help us understand the fundamental mechanisms that underpin neural processes and relate these processes to neuroscience data. Modeling at the single neuron level is necessary because this is the level at which information is exchanged between the computing elements of the brain; the neurons. Mesoscopic models tell us how neural elements interact to yield emergent behavior at the level of microcolumns and cortical columns. Macroscopic models can inform us about whole brain dynamics and interactions between large-scale neural systems such as cortical regions, the thalamus, and brain stem. Each level of description relates uniquely to neuroscience data, from single-unit recordings, through local field potentials to functional magnetic resonance imaging (fMRI), electroencephalogram (EEG), and magnetoencephalogram (MEG). Models of the cortex can establish which types of large-scale neuronal networks can perform computations and characterize their emergent properties. Mean-field and related formulations of dynamics also play an essential and complementary role as forward models that can be inverted given empirical data. This makes dynamic models critical in integrating theory and experiments. We argue that elaborating principled and informed models is a prerequisite for grounding empirical neuroscience in a cogent theoretical framework, commensurate with the achievements in the physical sciences

  12. Integrating verbal and nonverbal communication in a dynamic neural field architecture for human-robot interaction.

    PubMed

    Bicho, Estela; Louro, Luís; Erlhagen, Wolfram

    2010-01-01

    How do humans coordinate their intentions, goals and motor behaviors when performing joint action tasks? Recent experimental evidence suggests that resonance processes in the observer's motor system are crucially involved in our ability to understand the actions of others, to infer their goals and even to comprehend their action-related language. In this paper, we present a control architecture for human-robot collaboration that exploits this close perception-action linkage as a means to achieve more natural and efficient communication grounded in sensorimotor experiences. The architecture is formalized by a coupled system of dynamic neural fields representing a distributed network of neural populations that encode in their activation patterns goals, actions and shared task knowledge. We validate the verbal and nonverbal communication skills of the robot in a joint assembly task in which the human-robot team has to construct toy objects from their components. The experiments focus on the robot's capacity to anticipate the user's needs and to detect and communicate unexpected events that may occur during joint task execution.

  13. Dynamics of neural cryptography

    NASA Astrophysics Data System (ADS)

    Ruttor, Andreas; Kinzel, Wolfgang; Kanter, Ido

    2007-05-01

    Synchronization of neural networks has been used for public channel protocols in cryptography. In the case of tree parity machines the dynamics of both bidirectional synchronization and unidirectional learning is driven by attractive and repulsive stochastic forces. Thus it can be described well by a random walk model for the overlap between participating neural networks. For that purpose transition probabilities and scaling laws for the step sizes are derived analytically. Both these calculations and numerical simulations show that bidirectional interaction leads to full synchronization on average. In contrast, successful learning is only possible by means of fluctuations. Consequently, synchronization is much faster than learning, which is essential for the security of the neural key-exchange protocol. However, this qualitative difference between bidirectional and unidirectional interaction vanishes if tree parity machines with more than three hidden units are used, so that those neural networks are not suitable for neural cryptography. In addition, the effective number of keys which can be generated by the neural key-exchange protocol is calculated using the entropy of the weight distribution. As this quantity increases exponentially with the system size, brute-force attacks on neural cryptography can easily be made unfeasible.
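
    A toy-scale numpy sketch of the bidirectional synchronization described above, using tree parity machines with the Hebbian rule (the sizes K, N, L are illustrative; a real key exchange uses larger values and an agreed public input stream):

      # Two tree parity machines synchronize over a public channel by
      # exchanging only their outputs; their weights become the shared key.
      import numpy as np

      K, N, L = 3, 10, 3
      rng = np.random.default_rng(1)

      def output(w, x):
          sigma = np.sign(np.einsum('kn,kn->k', w, x))
          sigma[sigma == 0] = -1                      # break ties consistently
          return sigma, int(np.prod(sigma))

      def hebbian(w, x, sigma, tau):
          for k in range(K):
              if sigma[k] == tau:                     # only agreeing hidden units learn
                  w[k] = np.clip(w[k] + tau * x[k], -L, L)

      wA = rng.integers(-L, L + 1, (K, N))
      wB = rng.integers(-L, L + 1, (K, N))
      steps = 0
      while not np.array_equal(wA, wB):
          x = rng.choice([-1, 1], (K, N))             # public common input
          sA, tauA = output(wA, x)
          sB, tauB = output(wB, x)
          if tauA == tauB:                            # attractive step for both parties
              hebbian(wA, x, sA, tauA)
              hebbian(wB, x, sB, tauB)
          steps += 1
      print("synchronized after", steps, "inputs")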

  14. Stochastic mean-field formulation of the dynamics of diluted neural networks

    NASA Astrophysics Data System (ADS)

    Angulo-Garcia, D.; Torcini, A.

    2015-02-01

    We consider pulse-coupled leaky integrate-and-fire neural networks with randomly distributed synaptic couplings. This random dilution induces fluctuations in the evolution of the macroscopic variables and deterministic chaos at the microscopic level. Our main aim is to mimic the effect of the dilution as a noise source acting on the dynamics of a globally coupled nonchaotic system. Indeed, the evolution of a diluted neural network can be well approximated by that of a fully pulse-coupled network, in which each neuron is driven by a mean synaptic current plus additive noise. These terms represent the average and the fluctuations of the synaptic currents acting on the single neurons in the diluted system. The main microscopic and macroscopic dynamical features can be retrieved with this stochastic approximation. Furthermore, the microscopic stability of the diluted network can also be reproduced, as demonstrated by the near coincidence of the measured Lyapunov exponents in the deterministic and stochastic cases over an ample range of system sizes. Our results strongly suggest that the fluctuations in the synaptic currents are responsible for the emergence of chaos in this class of pulse-coupled networks.

  15. Stochastic mean-field formulation of the dynamics of diluted neural networks.

    PubMed

    Angulo-Garcia, D; Torcini, A

    2015-02-01

    We consider pulse-coupled leaky integrate-and-fire neural networks with randomly distributed synaptic couplings. This random dilution induces fluctuations in the evolution of the macroscopic variables and deterministic chaos at the microscopic level. Our main aim is to mimic the effect of the dilution as a noise source acting on the dynamics of a globally coupled nonchaotic system. Indeed, the evolution of a diluted neural network can be well approximated by that of a fully pulse-coupled network, in which each neuron is driven by a mean synaptic current plus additive noise. These terms represent the average and the fluctuations of the synaptic currents acting on the single neurons in the diluted system. The main microscopic and macroscopic dynamical features can be retrieved with this stochastic approximation. Furthermore, the microscopic stability of the diluted network can also be reproduced, as demonstrated by the near coincidence of the measured Lyapunov exponents in the deterministic and stochastic cases over an ample range of system sizes. Our results strongly suggest that the fluctuations in the synaptic currents are responsible for the emergence of chaos in this class of pulse-coupled networks.
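
    The stochastic mean-field idea above reduces each neuron to a leaky integrate-and-fire unit driven by a mean current plus additive noise. A minimal sketch of that reduced single-neuron model follows; the drive, noise strength and membrane constants are illustrative assumptions, not the paper's calibrated values:

      # Single LIF neuron driven by mean current mu plus additive white noise.
      import numpy as np

      dt, tau_m, v_th, v_reset = 1e-4, 20e-3, 1.0, 0.0
      mu, sigma = 1.2, 0.5                  # mean drive and fluctuation strength
      rng = np.random.default_rng(0)

      v, spikes = 0.0, []
      for step in range(int(2.0 / dt)):     # simulate 2 seconds
          dW = np.sqrt(dt) * rng.standard_normal()
          v += dt / tau_m * (mu - v) + sigma * dW / np.sqrt(tau_m)
          if v >= v_th:
              spikes.append(step * dt)
              v = v_reset
      print("mean rate ~", len(spikes) / 2.0, "spikes/s")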

  16. Learning to recognize objects on the fly: a neurally based dynamic field approach.

    PubMed

    Faubel, Christian; Schöner, Gregor

    2008-05-01

    Autonomous robots interacting with human users need to build and continuously update scene representations. This entails the problem of rapidly learning to recognize new objects under user guidance. Based on analogies with human visual working memory, we propose a dynamical field architecture, in which localized peaks of activation represent objects over a small number of simple feature dimensions. Learning consists of laying down memory traces of such peaks. We implement the dynamical field model on a service robot and demonstrate how it learns 30 objects from a very small number of views (about 5 per object are sufficient). We also illustrate how properties of feature binding emerge from this framework.
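
    The "laying down memory traces of such peaks" mechanism mentioned above is commonly formalized in dynamic field models by coupling the field to a slow trace variable; one generic way to write it (symbols assumed, not necessarily the authors' exact formulation) is:

      \[
        \tau\,\dot u(x,t) \;=\; -\,u + h + s(x,t) + \int w(x-x')\, f\bigl(u(x',t)\bigr)\,dx' + c_{m}\, m(x,t),
        \qquad
        \tau_{m}\,\dot m(x,t) \;=\; \bigl(-\,m + f(u)\bigr)\,
          \Theta\!\Bigl[\textstyle\int f\bigl(u(x',t)\bigr)\,dx'\Bigr],
      \]

    so the trace m builds up only while a peak is present and preactivates the field when the same feature values recur on later views of the object.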

  17. Dynamical Mean-Field Equations for a Neural Network with Spike Timing Dependent Plasticity

    NASA Astrophysics Data System (ADS)

    Mayer, Jörg; Ngo, Hong-Viet V.; Schuster, Heinz Georg

    2012-09-01

    We study the discrete dynamics of a fully connected network of threshold elements interacting via dynamically evolving synapses displaying spike timing dependent plasticity. Dynamical mean-field equations, which become exact in the thermodynamic limit, are derived to study the behavior of the system driven with uncorrelated and correlated Gaussian noise input. We use correlated noise to verify that our model accounts for the fact that correlated noise provides a stronger drive for synaptic modification. Further, we find that stochastically independent input leads to a noise-dependent transition to the coherent state in which all neurons fire together; most notably, there exists an optimal noise level for the enhancement of synaptic potentiation in our model.

  18. Dynamic interactions in neural networks

    SciTech Connect

    Arbib, M.A.; Amari, S.

    1989-01-01

    The study of neural networks is enjoying a great renaissance, both in computational neuroscience, the development of information processing models of living brains, and in neural computing, the use of neurally inspired concepts in the construction of intelligent machines. This volume presents models and data on the dynamic interactions occurring in the brain, and exhibits the dynamic interactions between research in computational neuroscience and in neural computing. The authors present current research, future trends and open problems.

  19. A dynamic neural field model of visual working memory and change detection.

    PubMed

    Johnson, Jeffrey S; Spencer, John P; Luck, Steven J; Schöner, Gregor

    2009-05-01

    Efficient visually guided behavior depends on the ability to form, retain, and compare visual representations for objects that may be separated in space and time. This ability relies on a short-term form of memory known as visual working memory. Although a considerable body of research has begun to shed light on the neurocognitive systems subserving this form of memory, few theories have addressed these processes in an integrated, neurally plausible framework. We describe a layered neural architecture that implements encoding and maintenance, and links these processes to a plausible comparison process. In addition, the model makes the novel prediction that change detection will be enhanced when metrically similar features are remembered. Results from experiments probing memory for color and for orientation were consistent with this novel prediction. These findings place strong constraints on models addressing the nature of visual working memory and its underlying mechanisms.

  20. Neural response dynamics of spiking and local field potential activity depend on CRT monitor refresh rate in the tree shrew primary visual cortex.

    PubMed

    Veit, Julia; Bhattacharyya, Anwesha; Kretz, Robert; Rainer, Gregor

    2011-11-01

    Entrainment of neural activity to luminance impulses during the refresh of cathode ray tube monitor displays has been observed in the primary visual cortex (V1) of humans and macaque monkeys. This entrainment is of interest because it tends to temporally align and thus synchronize neural responses at the millisecond timescale. Here we show that, in tree shrew V1, both spiking and local field potential activity are also entrained at cathode ray tube refresh rates of 120, 90, and 60 Hz, with weakest but still significant entrainment even at 120 Hz, and strongest entrainment occurring in cortical input layer IV. For both luminance increments ("white" stimuli) and decrements ("black" stimuli), refresh rate had a strong impact on the temporal dynamics of the neural response for subsequent luminance impulses. Whereas there was rapid, strong attenuation of spikes and local field potential to prolonged visual stimuli composed of luminance impulses presented at 120 Hz, attenuation was nearly absent at 60-Hz refresh rate. In addition, neural onset latencies were shortest at 120 Hz and substantially increased, by ∼15 ms, at 60 Hz. In terms of neural response amplitude, black responses dominated white responses at all three refresh rates. However, black/white differences were much larger at 60 Hz than at higher refresh rates, suggesting a mechanism that is sensitive to stimulus timing. Taken together, our findings reveal many similarities between V1 of macaque and tree shrew, while underscoring a greater temporal sensitivity of the tree shrew visual system.

  1. Creative-Dynamics Approach To Neural Intelligence

    NASA Technical Reports Server (NTRS)

    Zak, Michail A.

    1992-01-01

    Paper discusses approach to mathematical modeling of artificial neural networks exhibiting complicated behaviors reminiscent of creativity and intelligence of biological neural networks. Neural network treated as non-Lipschitzian dynamical system - as described in "Non-Lipschitzian Dynamics For Modeling Neural Networks" (NPO-17814). System serves as tool for modeling of temporal-pattern memories and recognition of complicated spatial patterns.

  2. Dynamical systems, attractors, and neural circuits

    PubMed Central

    Miller, Paul

    2016-01-01

    Biology is the study of dynamical systems. Yet most of us working in biology have limited pedagogical training in the theory of dynamical systems, an unfortunate historical fact that can be remedied for future generations of life scientists. In my particular field of systems neuroscience, neural circuits are rife with nonlinearities at all levels of description, rendering simple methodologies and our own intuition unreliable. Therefore, our ideas are likely to be wrong unless informed by good models. These models should be based on the mathematical theories of dynamical systems since functioning neurons are dynamic—they change their membrane potential and firing rates with time. Thus, selecting the appropriate type of dynamical system upon which to base a model is an important first step in the modeling process. This step all too easily goes awry, in part because there are many frameworks to choose from, in part because the sparsely sampled data can be consistent with a variety of dynamical processes, and in part because each modeler has a preferred modeling approach that is difficult to move away from. This brief review summarizes some of the main dynamical paradigms that can arise in neural circuits, with comments on what they can achieve computationally and what signatures might reveal their presence within empirical data. I provide examples of different dynamical systems using simple circuits of two or three cells, emphasizing that any one connectivity pattern is compatible with multiple, diverse functions. PMID:27408709

  3. Neural dynamics based on the recognition of neural fingerprints

    PubMed Central

    Carrillo-Medina, José Luis; Latorre, Roberto

    2015-01-01

    Experimental evidence has revealed the existence of characteristic spiking features in different neural signals, e.g., individual neural signatures identifying the emitter or functional signatures characterizing specific tasks. These neural fingerprints may play a critical role in neural information processing, since they allow receptors to discriminate or contextualize incoming stimuli. This could be a powerful strategy for neural systems that greatly enhances the encoding and processing capacity of these networks. Nevertheless, the study of information processing based on the identification of specific neural fingerprints has attracted little attention. In this work, we study (i) the emerging collective dynamics of a network of neurons that communicate with each other by exchange of neural fingerprints and (ii) the influence of the network topology on the self-organizing properties within the network. Complex collective dynamics emerge in the network in the presence of stimuli. Predefined inputs, i.e., specific neural fingerprints, are detected and encoded into coexisting patterns of activity that propagate throughout the network with different spatial organization. The patterns evoked by a stimulus can survive after the stimulation is over, which provides memory mechanisms to the network. The results presented in this paper suggest that neural information processing based on neural fingerprints can be a plausible, flexible, and powerful strategy. PMID:25852531

  4. Model Of Neural Network With Creative Dynamics

    NASA Technical Reports Server (NTRS)

    Zak, Michail; Barhen, Jacob

    1993-01-01

    Paper presents analysis of mathematical model of one-neuron/one-synapse neural network featuring coupled activation and learning dynamics and parametrical periodic excitation. Demonstrates self-programming, partly random behavior of suitably designed neural network; believed to be related to spontaneity and creativity of biological neural networks.

  5. Non-Lipschitzian neural dynamics

    NASA Technical Reports Server (NTRS)

    Barhen, Jacob; Zak, Michail; Toomarian, Nikzad

    1990-01-01

    A novel approach is presented which is motivated by an attempt to remove one of the most fundamental limitations of artificial neural networks: their rigid behavior as compared with even the simplest biological systems. It is demonstrated that non-Lipschitzian dynamics, based on the failure of the Lipschitz conditions at repellers, displays a new qualitative effect, i.e., a multichoice response to periodic external excitations. This makes it possible to construct unpredictable systems, represented in the form of coupled activation and learning dynamical equations. It is shown that unpredictable systems can be controlled by sign strings which uniquely define the system behavior by specifying the direction of the motions at the critical points. Unpredictable systems driven by sign strings are extremely flexible and can serve as a powerful tool for complex pattern recognition.

  6. Coupling layers regularizes wave propagation in stochastic neural fields

    NASA Astrophysics Data System (ADS)

    Kilpatrick, Zachary P.

    2014-02-01

    We explore how layered architectures influence the dynamics of stochastic neural field models. Our main focus is how the propagation of waves of neural activity in each layer is affected by interlaminar coupling. Synaptic connectivities within and between each layer are determined by integral kernels of an integrodifferential equation describing the temporal evolution of neural activity. Excitatory neural fields, with purely positive connectivities, support traveling fronts in each layer, whose speeds are increased when coupling between layers is considered. Studying the effects of noise, we find coupling reduces the variance in the position of traveling fronts, as long as the noise sources to each layer are not completely correlated. Neural fields with asymmetric connectivity support traveling pulses whose speeds are decreased by interlaminar coupling. Again, coupling reduces the variance in traveling pulse position. Asymptotic analysis is performed using a small-noise expansion, assuming interlaminar connectivity scales similarly.
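
    A generic form of the layered stochastic neural field studied above can be written as follows, with within-layer and interlaminar kernels and (possibly correlated) spatially extended noise; the notation is assumed rather than quoted from the paper:

      \[
        du_{j}(x,t) \;=\; \Bigl[\, -\,u_{j}
          + \sum_{k}\int_{\Omega} w_{jk}(x-y)\, f\bigl(u_{k}(y,t)\bigr)\,dy \,\Bigr]dt
          \;+\; \varepsilon\, dW_{j}(x,t), \qquad j = 1,\dots,M,
      \]

    with w_jj the within-layer kernels, w_jk (j not equal to k) the interlaminar coupling, and dW_j spatially extended Wiener processes that may be only partially correlated across layers, which is the regime in which coupling reduces the variance of front and pulse positions.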

  7. Dynamic Alignment Models for Neural Coding

    PubMed Central

    Kollmorgen, Sepp; Hahnloser, Richard H. R.

    2014-01-01

    Recently, there have been remarkable advances in modeling the relationships between the sensory environment, neuronal responses, and behavior. However, most models cannot encompass variable stimulus-response relationships such as varying response latencies and state or context dependence of the neural code. Here, we consider response modeling as a dynamic alignment problem and model stimulus and response jointly by a mixed pair hidden Markov model (MPH). In MPHs, multiple stimulus-response relationships (e.g., receptive fields) are represented by different states or groups of states in a Markov chain. Each stimulus-response relationship features temporal flexibility, allowing modeling of variable response latencies, including noisy ones. We derive algorithms for learning of MPH parameters and for inference of spike response probabilities. We show that some linear-nonlinear Poisson cascade (LNP) models are a special case of MPHs. We demonstrate the efficiency and usefulness of MPHs in simulations of both jittered and switching spike responses to white noise and natural stimuli. Furthermore, we apply MPHs to extracellular single and multi-unit data recorded in cortical brain areas of singing birds to showcase a novel method for estimating response lag distributions. MPHs allow simultaneous estimation of receptive fields, latency statistics, and hidden state dynamics and so can help to uncover complex stimulus response relationships that are subject to variable timing and involve diverse neural codes. PMID:24625448

  8. The Complexity of Dynamics in Small Neural Circuits

    PubMed Central

    Panzeri, Stefano

    2016-01-01

    Mean-field approximations are a powerful tool for studying large neural networks. However, they do not describe well the behavior of networks composed of a small number of neurons. In this case, major differences between the mean-field approximation and the real behavior of the network can arise. Yet, many interesting problems in neuroscience involve the study of mesoscopic networks composed of a few tens of neurons. Nonetheless, mathematical methods that correctly describe networks of small size are still rare, and this prevents us from making progress in understanding neural dynamics at these intermediate scales. Here we develop a novel systematic analysis of the dynamics of arbitrarily small networks composed of homogeneous populations of excitatory and inhibitory firing-rate neurons. We study the local bifurcations of their neural activity with an approach that is largely analytically tractable, and we numerically determine the global bifurcations. We find that for strong inhibition these networks give rise to very complex dynamics, caused by the formation of multiple branching solutions of the neural dynamics equations that emerge through spontaneous symmetry-breaking. This qualitative change of the neural dynamics is a finite-size effect of the network that reveals qualitative and previously unexplored differences between mesoscopic cortical circuits and their mean-field approximation. The most important consequence of spontaneous symmetry-breaking is the ability of mesoscopic networks to regulate their degree of functional heterogeneity, which is thought to help reduce the detrimental effect of noise correlations on cortical information processing. PMID:27494737
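
    The kind of local analysis described above can be prototyped numerically. The sketch below builds a two-population excitatory-inhibitory firing-rate circuit, finds a fixed point, and checks its linear stability; all weights, gains and the nonlinearity are illustrative assumptions, and the paper's networks are larger and treated analytically:

      # Find a fixed point of a small E-I firing-rate circuit and inspect the
      # eigenvalues of its Jacobian (negative real parts imply local stability).
      import numpy as np
      from scipy.optimize import fsolve

      wEE, wEI, wIE, wII = 10.0, 8.0, 12.0, 3.0
      hE, hI, tau = -2.0, -4.0, 1.0
      f = lambda x: 1.0 / (1.0 + np.exp(-x))          # firing-rate nonlinearity

      def rhs(r):
          rE, rI = r
          drE = (-rE + f(wEE * rE - wEI * rI + hE)) / tau
          drI = (-rI + f(wIE * rE - wII * rI + hI)) / tau
          return [drE, drI]

      r_star = fsolve(rhs, [0.1, 0.1])
      eps = 1e-6
      J = np.array([(np.array(rhs(r_star + eps * e)) - np.array(rhs(r_star))) / eps
                    for e in np.eye(2)]).T             # finite-difference Jacobian
      print("fixed point:", r_star, "eigenvalues:", np.linalg.eigvals(J))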

  9. Global rhythmic activities in hippocampal neural fields and neural coding.

    PubMed

    Ventriglia, Francesco

    2006-01-01

    Global oscillations of the neural field represent some of the most interesting expressions of the hippocampal activity, being related also to learning and memory. To study oscillatory activities of the CA3 field in the theta range, a model of this sub-field of the hippocampus has been formulated. The model describes the firing activity of CA3 neuronal populations within the framework of a kinetic theory of neural systems, and it has been used for computer simulations. The results show that the propagation of activities induced in the neural field by hippocampal afferents occurs only in narrow time windows confined by inhibitory barrages, whose time-course follows the theta rhythm. Moreover, during each period of a theta wave, the entire CA3 field bears a firing activity with peculiar space-time patterns, a sort of specific imprint, which can induce effects with similar patterns on brain regions driven by the hippocampal formation. The simulation has also demonstrated the ability of the medial septum to influence the global activity of the CA3 pyramidal population through the control of the population of inhibitory interneurons. Finally, the possible involvement of global population oscillations in neural coding has been discussed.

  10. A Neural Dynamic Model Generates Descriptions of Object-Oriented Actions.

    PubMed

    Richter, Mathis; Lins, Jonas; Schöner, Gregor

    2017-01-01

    Describing actions entails that relations between objects are discovered. A pervasively neural account of this process requires that fundamental problems are solved: the neural pointer problem, the binding problem, and the problem of generating discrete processing steps from time-continuous neural processes. We present a prototypical solution to these problems in a neural dynamic model that comprises dynamic neural fields holding representations close to sensorimotor surfaces as well as dynamic neural nodes holding discrete, language-like representations. Making the connection between these two types of representations enables the model to describe actions as well as to perceptually ground movement phrases, all based on real visual input. We demonstrate how the dynamic neural processes autonomously generate the processing steps required to describe or ground object-oriented actions. By solving the fundamental problems of neural pointing, binding, and emergent discrete processing, the model may be a first but critical step toward a systematic neural processing account of higher cognition.

  11. Neural network with formed dynamics of activity

    SciTech Connect

    Dunin-Barkovskii, V.L.; Osovets, N.B.

    1995-03-01

    The problem of developing a neural network with a given pattern of the state sequence is considered. A neural network structure and an algorithm for forming its bond matrix, which lead to an approximate but robust solution of the problem, are proposed and discussed. Limiting characteristics of the serviceability of the proposed structure are studied. Various methods of visualizing dynamic processes in a neural network are compared. Possible applications of the results obtained for interpretation of neurophysiological data and in neuroinformatics systems are discussed.

  12. On lateral competition in dynamic neural networks

    SciTech Connect

    Bellyustin, N.S.

    1995-02-01

    Artificial neural networks connected homogeneously, which use retinal image processing methods, are considered. We point out that there are probably two different types of lateral inhibition of each neural element by its neighbors: one due to negative connection coefficients between elements, and one due to the decreasing response of a neuron to an excessively high input signal. The first case is characterized by stable dynamics, governed by a Lyapunov function, while in the second case stability is absent and two-dimensional dynamic chaos occurs if the time step in the integration of the model equations is large enough. The continuous neural medium approximation is used for analytical estimation in both cases. The result is the partition of the parameter space into domains with qualitatively different dynamic modes. Computer simulations confirm the estimates and show that joining two-dimensional chaos with symmetries provided by the initial and boundary conditions may produce patterns which are genuine pieces of art.

  13. Dynamics and kinematics of simple neural systems

    SciTech Connect

    Rabinovich, M.; Selverston, A.; Rubchinsky, L.; Huerta, R.

    1996-09-01

    The dynamics of simple neural systems is of interest to both biologists and physicists. One of the possible roles of such systems is the production of rhythmic patterns, and their alterations (modification of behavior, processing of sensory information, adaptation, control). In this paper, the neural systems are considered as a subject of modeling by the dynamical systems approach. In particular, we analyze how a stable, ordinary behavior of a small neural system can be described by simple finite automata models, and how more complicated dynamical systems modeling can be used. The approach is illustrated by biological and numerical examples: experiments with, and numerical simulations of, the stomatogastric central pattern generator network of the California spiny lobster. © 1996 American Institute of Physics.

  14. Neural dynamics of object-based multifocal visual spatial attention and priming: object cueing, useful-field-of-view, and crowding.

    PubMed

    Foley, Nicholas C; Grossberg, Stephen; Mingolla, Ennio

    2012-08-01

    How are spatial and object attention coordinated to achieve rapid object learning and recognition during eye movement search? How do prefrontal priming and parietal spatial mechanisms interact to determine the reaction time costs of intra-object attention shifts, inter-object attention shifts, and shifts between visible objects and covertly cued locations? What factors underlie individual differences in the timing and frequency of such attentional shifts? How do transient and sustained spatial attentional mechanisms work and interact? How can volition, mediated via the basal ganglia, influence the span of spatial attention? A neural model is developed of how spatial attention in the where cortical stream coordinates view-invariant object category learning in the what cortical stream under free viewing conditions. The model simulates psychological data about the dynamics of covert attention priming and switching requiring multifocal attention without eye movements. The model predicts how "attentional shrouds" are formed when surface representations in cortical area V4 resonate with spatial attention in posterior parietal cortex (PPC) and prefrontal cortex (PFC), while shrouds compete among themselves for dominance. Winning shrouds support invariant object category learning, and active surface-shroud resonances support conscious surface perception and recognition. Attentive competition between multiple objects and cues simulates reaction-time data from the two-object cueing paradigm. The relative strength of sustained surface-driven and fast-transient motion-driven spatial attention controls individual differences in reaction time for invalid cues. Competition between surface-driven attentional shrouds controls individual differences in detection rate of peripheral targets in useful-field-of-view tasks. The model proposes how the strength of competition can be mediated, though learning or momentary changes in volition, by the basal ganglia. A new explanation of

  15. Large deviations for nonlocal stochastic neural fields.

    PubMed

    Kuehn, Christian; Riedler, Martin G

    2014-04-17

    We study the effect of additive noise on integro-differential neural field equations. In particular, we analyze an Amari-type model driven by a Q-Wiener process, and focus on noise-induced transitions and escape. We argue that proving a sharp Kramers' law for neural fields poses substantial difficulties, but that one may transfer techniques from stochastic partial differential equations to establish a large deviation principle (LDP). Then we demonstrate that an efficient finite-dimensional approximation of the stochastic neural field equation can be achieved using a Galerkin method and that the resulting finite-dimensional rate function for the LDP can have a multiscale structure in certain cases. These results form the starting point for an efficient practical computation of the LDP. Our approach also provides the technical basis for further rigorous study of noise-induced transitions in neural fields based on Galerkin approximations. Mathematics Subject Classification (2000): 60F10, 60H15, 65M60, 92C20.

  16. Large Deviations for Nonlocal Stochastic Neural Fields

    PubMed Central

    2014-01-01

    We study the effect of additive noise on integro-differential neural field equations. In particular, we analyze an Amari-type model driven by a Q-Wiener process, and focus on noise-induced transitions and escape. We argue that proving a sharp Kramers’ law for neural fields poses substantial difficulties, but that one may transfer techniques from stochastic partial differential equations to establish a large deviation principle (LDP). Then we demonstrate that an efficient finite-dimensional approximation of the stochastic neural field equation can be achieved using a Galerkin method and that the resulting finite-dimensional rate function for the LDP can have a multiscale structure in certain cases. These results form the starting point for an efficient practical computation of the LDP. Our approach also provides the technical basis for further rigorous study of noise-induced transitions in neural fields based on Galerkin approximations. Mathematics Subject Classification (2000): 60F10, 60H15, 65M60, 92C20. PMID:24742297
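
    The Galerkin strategy described above can be illustrated with a small numpy sketch: an Amari-type field on the circle is truncated to a few cosine modes, each driven by an independent noise amplitude mimicking a Q-Wiener process. The kernel spectrum, nonlinearity, noise amplitudes and mode count are illustrative assumptions:

      # Fourier (cosine) Galerkin truncation of a stochastic Amari-type field.
      import numpy as np

      n_modes, n_x = 8, 256
      x = np.linspace(0.0, 2 * np.pi, n_x, endpoint=False)
      basis = np.array([np.cos(k * x) for k in range(n_modes)])
      w_hat = 1.5 * np.exp(-0.5 * np.arange(n_modes) ** 2)   # kernel acts diagonally on modes
      q = 0.05 * np.exp(-0.3 * np.arange(n_modes))           # Q-Wiener mode amplitudes
      f = np.tanh

      dt, rng = 1e-2, np.random.default_rng(0)
      a = np.zeros(n_modes)                                  # Galerkin coefficients
      for _ in range(5000):
          u = a @ basis                                      # field on the grid
          fu_hat = (basis @ f(u)) * (2.0 / n_x)              # project f(u) onto the modes
          fu_hat[0] *= 0.5                                   # cosine-series normalization
          a += dt * (-a + w_hat * fu_hat) + np.sqrt(dt) * q * rng.standard_normal(n_modes)
      # `a` now follows a finite-dimensional SDE approximating the stochastic field.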

  17. Axonal Velocity Distributions in Neural Field Equations

    PubMed Central

    Bojak, Ingo; Liley, David T. J.

    2010-01-01

    By modelling the average activity of large neuronal populations, continuum mean field models (MFMs) have become an increasingly important theoretical tool for understanding the emergent activity of cortical tissue. In order to be computationally tractable, long-range propagation of activity in MFMs is often approximated with partial differential equations (PDEs). However, PDE approximations in current use correspond to underlying axonal velocity distributions incompatible with experimental measurements. In order to rectify this deficiency, we here introduce novel propagation PDEs that give rise to smooth unimodal distributions of axonal conduction velocities. We also argue that velocities estimated from fibre diameters in slice and from latency measurements, respectively, relate quite differently to such distributions, a significant point for any phenomenological description. Our PDEs are then successfully fit to fibre diameter data from human corpus callosum and rat subcortical white matter. This makes it possible, for the first time, to simulate long-range conduction in the mammalian brain with realistic, convenient PDEs. Furthermore, the obtained results suggest that the propagation of activity in rat and human differs significantly beyond mere scaling. The dynamical consequences of our new formulation are investigated in the context of a well known neural field model. On the basis of Turing instability analyses, we conclude that pattern formation is more easily initiated using our more realistic propagator. By increasing characteristic conduction velocities, a smooth transition can occur from self-sustaining bulk oscillations to travelling waves of various wavelengths, which may influence axonal growth during development. Our analytic results are also corroborated numerically using simulations on a large spatial grid. Thus we provide here a comprehensive analysis of empirically constrained activity propagation in the context of MFMs, which will allow more realistic studies
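
    For context, the propagator most commonly used in such mean field models is the damped wave equation, written here in a generic notation; this is the kind of PDE approximation whose implied velocity distribution the authors argue against and improve upon:

      \[
        \Bigl[\Bigl(\tfrac{1}{\gamma}\,\partial_t + 1\Bigr)^{2} - r_{e}^{2}\,\nabla^{2}\Bigr]\,\phi(\mathbf{x},t)
        \;=\; Q(\mathbf{x},t),
      \]

    where phi is the propagating axonal pulse field, Q the local firing rate, r_e the characteristic axonal range and gamma = v / r_e the temporal damping rate set by the conduction velocity v.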

  18. Nonlinear dynamics of neural delayed feedback

    SciTech Connect

    Longtin, A.

    1990-01-01

    Neural delayed feedback is a property shared by many circuits in the central and peripheral nervous systems. The evolution of the neural activity in these circuits depends on their present state as well as on their past states, due to finite propagation time of neural activity along the feedback loop. These systems are often seen to undergo a change from a quiescent state characterized by low level fluctuations to an oscillatory state. We discuss the problem of analyzing this transition using techniques from nonlinear dynamics and stochastic processes. Our main goal is to characterize the nonlinearities which enable autonomous oscillations to occur and to uncover the properties of the noise sources these circuits interact with. The concepts are illustrated on the human pupil light reflex (PLR) which has been studied both theoretically and experimentally using this approach. 5 refs., 3 figs.

  19. Waves, bumps, and patterns in neural field theories.

    PubMed

    Coombes, S

    2005-08-01

    Neural field models of firing rate activity have had a major impact in helping to develop an understanding of the dynamics seen in brain slice preparations. These models typically take the form of integro-differential equations. Their non-local nature has led to the development of a set of analytical and numerical tools for the study of waves, bumps and patterns, based around natural extensions of those used for local differential equation models. In this paper we present a review of such techniques and show how recent advances have opened the way for future studies of neural fields in both one and two dimensions that can incorporate realistic forms of axo-dendritic interactions and the slow intrinsic currents that underlie bursting behaviour in single neurons.
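
    A standard worked example of the front construction reviewed here (a textbook Amari-type calculation, not a result specific to this record): the scalar model

      \[
        \partial_t u \;=\; -\,u + \int_{\mathbb{R}} w(x-y)\, H\bigl(u(y,t)-\theta\bigr)\,dy,
        \qquad
        w(x) \;=\; \tfrac12 e^{-|x|},
      \]

    admits a travelling front u(x,t) = U(x - ct) with U(0) = theta, and matching the solution across the threshold crossing gives the classical speed c = (1 - 2 theta) / (2 theta): fronts advance for theta < 1/2, are stationary at theta = 1/2, and retreat for theta > 1/2.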

  20. The neural dynamics of sensory focus

    PubMed Central

    Clarke, Stephen E.; Longtin, André; Maler, Leonard

    2015-01-01

    Coordinated sensory and motor system activity leads to efficient localization behaviours; but what neural dynamics enable object tracking and what are the underlying coding principles? Here we show that optimized distance estimation from motion-sensitive neurons underlies object tracking performance in weakly electric fish. First, a relationship is presented for determining the distance that maximizes the Fisher information of a neuron's response to object motion. When applied to our data, the theory correctly predicts the distance chosen by an electric fish engaged in a tracking behaviour, which is associated with a bifurcation between tonic and burst modes of spiking. Although object distance, size and velocity alter the neural response, the location of the Fisher information maximum remains invariant, demonstrating that the circuitry must actively adapt to maintain 'focus' during relative motion. PMID:26549346

  1. Dynamic Attractors and Basin Class Capacity in Binary Neural Networks

    DTIC Science & Technology

    1994-12-21

    The wide repertoire of attractors and basins of attraction that appear in dynamic neural networks not only serve as models of brain activity patterns...limitations of static neural networks by use of dynamic attractors and their basins. The results show that dynamic networks have a high capacity for

  2. Beyond mean field theory: statistical field theory for neural networks

    PubMed Central

    Buice, Michael A; Chow, Carson C

    2014-01-01

    Mean field theories have been a stalwart for studying the dynamics of networks of coupled neurons. They are convenient because they are relatively simple and possible to analyze. However, classical mean field theory neglects the effects of fluctuations and correlations due to single neuron effects. Here, we consider various possible approaches for going beyond mean field theory and incorporating correlation effects. Statistical field theory methods, in particular the Doi–Peliti–Janssen formalism, are particularly useful in this regard. PMID:25243014

  3. Dynamical system modeling via signal reduction and neural network simulation

    SciTech Connect

    Paez, T.L.; Hunter, N.F.

    1997-11-01

    Many dynamical systems tested in the field and the laboratory display significant nonlinear behavior. Accurate characterization of such systems requires modeling in a nonlinear framework. One construct forming a basis for nonlinear modeling is that of the artificial neural network (ANN). However, when system behavior is complex, the amount of data required to perform training can become unreasonable. The authors reduce the complexity of information present in system response measurements using decomposition via canonical variate analysis. They describe a method for decomposing system responses, then modeling the components with ANNs. A numerical example is presented, along with conclusions and recommendations.

  4. Electronic neural network for dynamic resource allocation

    NASA Technical Reports Server (NTRS)

    Thakoor, A. P.; Eberhardt, S. P.; Daud, T.

    1991-01-01

    A VLSI implementable neural network architecture for dynamic assignment is presented. The resource allocation problems involve assigning members of one set (e.g. resources) to those of another (e.g. consumers) such that the global 'cost' of the associations is minimized. The network consists of a matrix of sigmoidal processing elements (neurons), where the rows of the matrix represent resources and columns represent consumers. Unlike previous neural implementations, however, association costs are applied directly to the neurons, reducing connectivity of the network to a VLSI-compatible O(number of neurons). Each row (and column) has an additional neuron associated with it to independently oversee activations of all the neurons in each row (and each column), providing a programmable 'k-winner-take-all' function. This function simultaneously enforces blocking (excitatory/inhibitory) constraints during convergence to control the number of active elements in each row and column within desired boundary conditions. Simulations show that the network, when implemented in fully parallel VLSI hardware, offers optimal (or near-optimal) solutions within only a fraction of a millisecond, for problems up to 128 resources and 128 consumers, orders of magnitude faster than conventional computing or heuristic search methods.
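
    A software analogue of the described network can be sketched in a few lines: sigmoidal units arranged as a resource-by-consumer matrix relax under row and column constraint feedback, a soft version of the k-winner-take-all control described above. Gains, penalties and the greedy read-out are illustrative assumptions; the original is an analog VLSI design, not this digital relaxation:

      # Soft assignment by constrained relaxation of a neuron matrix.
      import numpy as np

      rng = np.random.default_rng(2)
      n = 6
      cost = rng.random((n, n))                        # association costs
      beta, lam, dt, tau = 8.0, 4.0, 0.1, 1.0

      u = 0.01 * rng.standard_normal((n, n))           # neuron input potentials
      for _ in range(3000):
          v = 1.0 / (1.0 + np.exp(-beta * u))          # sigmoidal activations
          row_err = v.sum(axis=1, keepdims=True) - 1.0 # one-winner feedback per row
          col_err = v.sum(axis=0, keepdims=True) - 1.0 # ... and per column
          u += dt / tau * (-u - cost - lam * (row_err + col_err))
      assignment = v.argmax(axis=1)                    # greedy read-out (may need annealing)
      print("assignment:", assignment, "total cost:", cost[np.arange(n), assignment].sum())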

  5. Fractional Hopfield Neural Networks: Fractional Dynamic Associative Recurrent Neural Networks.

    PubMed

    Pu, Yi-Fei; Yi, Zhang; Zhou, Ji-Liu

    2016-07-14

    This paper mainly discusses a novel conceptual framework: fractional Hopfield neural networks (FHNN). As is commonly known, fractional calculus has been incorporated into artificial neural networks, mainly because of its long-term memory and nonlocality. Some researchers have made interesting attempts at fractional neural networks and gained competitive advantages over integer-order neural networks. Therefore, it naturally makes one ponder how to generalize first-order Hopfield neural networks to fractional-order ones, and how to implement FHNN by means of fractional calculus. We propose to implement FHNN by introducing a novel mathematical tool: fractional calculus. First, we implement a fractor in the form of an analog circuit. Second, we implement FHNN by utilizing the fractor and the fractional steepest descent approach, construct its Lyapunov function, and further analyze its attractors. Third, we perform experiments to analyze the stability and convergence of FHNN, and further discuss its applications to the defense against chip cloning attacks for anticounterfeiting. The main contribution of our work is to propose FHNN in the form of an analog circuit by utilizing a fractor and the fractional steepest descent approach, construct its Lyapunov function, prove its Lyapunov stability, analyze its attractors, and apply FHNN to the defense against chip cloning attacks for anticounterfeiting. A significant advantage of FHNN is that its attractors essentially relate to the neuron's fractional order. FHNN possesses fractional-order stability and fractional-order sensitivity characteristics.
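
    One conventional way to realize a fractional-order update numerically (distinct from the authors' analog fractor circuit) is the Grunwald-Letnikov discretization, which makes the long-memory property explicit because every past state enters each step. The network size, weights and order alpha below are illustrative assumptions:

      # Tiny fractional-order Hopfield-style network integrated with the
      # explicit Grunwald-Letnikov scheme (all past states enter each update).
      import numpy as np

      alpha, h, steps = 0.8, 0.01, 2000
      W = np.array([[0.0, 1.2, -0.8],
                    [1.2, 0.0, 0.5],
                    [-0.8, 0.5, 0.0]])                 # symmetric couplings
      b = np.array([0.1, -0.2, 0.05])

      # Grunwald-Letnikov weights w_k = (-1)^k * binom(alpha, k)
      w = np.empty(steps + 1)
      w[0] = 1.0
      for k in range(1, steps + 1):
          w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)

      u_hist = [np.array([0.5, -0.3, 0.1])]            # initial state
      for n in range(1, steps + 1):
          u_prev = u_hist[-1]
          rhs = -u_prev + W @ np.tanh(u_prev) + b      # D^alpha u = rhs(u)
          memory = sum(w[k] * u_hist[n - k] for k in range(1, n + 1))
          u_hist.append(h ** alpha * rhs - memory)
      print("final state:", u_hist[-1])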

  6. Two-photon imaging and analysis of neural network dynamics

    NASA Astrophysics Data System (ADS)

    Lütcke, Henry; Helmchen, Fritjof

    2011-08-01

    The glow of a starry night sky, the smell of a freshly brewed cup of coffee or the sound of ocean waves breaking on the beach are representations of the physical world that have been created by the dynamic interactions of thousands of neurons in our brains. How the brain mediates perceptions, creates thoughts, stores memories and initiates actions remains one of the most profound puzzles in biology, if not all of science. A key to a mechanistic understanding of how the nervous system works is the ability to measure and analyze the dynamics of neuronal networks in the living organism in the context of sensory stimulation and behavior. Dynamic brain properties have been fairly well characterized on the microscopic level of individual neurons and on the macroscopic level of whole brain areas largely with the help of various electrophysiological techniques. However, our understanding of the mesoscopic level comprising local populations of hundreds to thousands of neurons (so-called 'microcircuits') remains comparably poor. Predominantly, this has been due to the technical difficulties involved in recording from large networks of neurons with single-cell spatial resolution and near-millisecond temporal resolution in the brain of living animals. In recent years, two-photon microscopy has emerged as a technique which meets many of these requirements and thus has become the method of choice for the interrogation of local neural circuits. Here, we review the state of research in the field of two-photon imaging of neuronal populations, covering the topics of microscope technology, suitable fluorescent indicator dyes, staining techniques, and in particular analysis techniques for extracting relevant information from the fluorescence data. We expect that functional analysis of neural networks using two-photon imaging will help to decipher fundamental operational principles of neural microcircuits.

  7. EDITORIAL: Special issue on applied neurodynamics: from neural dynamics to neural engineering Special issue on applied neurodynamics: from neural dynamics to neural engineering

    NASA Astrophysics Data System (ADS)

    Chiel, Hillel J.; Thomas, Peter J.

    2011-12-01

    , the sun, earth and moon) proved to be far more difficult. In the late nineteenth century, Poincaré made significant progress on this problem, introducing a geometric method of reasoning about solutions to differential equations (Diacu and Holmes 1996). This work had a powerful impact on mathematicians and physicists, and also began to influence biology. In his 1925 book, based on his work starting in 1907, and that of others, Lotka used nonlinear differential equations and concepts from dynamical systems theory to analyze a wide variety of biological problems, including oscillations in the numbers of predators and prey (Lotka 1925). Although little was known in detail about the function of the nervous system, Lotka concluded his book with speculations about consciousness and the implications this might have for creating a mathematical formulation of biological systems. Much experimental work in the 1930s and 1940s focused on the biophysical mechanisms of excitability in neural tissue, and Rashevsky and others continued to apply tools and concepts from nonlinear dynamical systems theory as a means of providing a more general framework for understanding these results (Rashevsky 1960, Landahl and Podolsky 1949). The publication of Hodgkin and Huxley's classic quantitative model of the action potential in 1952 created a new impetus for these studies (Hodgkin and Huxley 1952). In 1955, FitzHugh published an important paper that summarized much of the earlier literature, and used concepts from phase plane analysis such as asymptotic stability, saddle points, separatrices and the role of noise to provide a deeper theoretical and conceptual understanding of threshold phenomena (FitzHugh 1955, Izhikevich and FitzHugh 2006). The FitzHugh-Nagumo equations constituted an important two-dimensional simplification of the four-dimensional Hodgkin and Huxley equations, and gave rise to an extensive literature of analysis. Many of the papers in this special issue build on tools

  8. Monitoring Scientific Developments from a Dynamic Perspective: Self-Organized Structuring To Map Neural Network Research.

    ERIC Educational Resources Information Center

    Noyons, E. C. M.; van Raan, A. F. J.

    1998-01-01

    Using bibliometric mapping techniques, authors developed a methodology of self-organized structuring of scientific fields which was applied to neural network research. Explores the evolution of a data generated field structure by monitoring the interrelationships between subfields, the internal structure of subfields, and the dynamic features of…

  9. Neural dynamics during repetitive visual stimulation

    NASA Astrophysics Data System (ADS)

    Tsoneva, Tsvetomira; Garcia-Molina, Gary; Desain, Peter

    2015-12-01

    Objective. Steady-state visual evoked potentials (SSVEPs), the brain responses to repetitive visual stimulation (RVS), are widely utilized in neuroscience. Their high signal-to-noise ratio and ability to entrain oscillatory brain activity are beneficial for their applications in brain-computer interfaces, investigation of neural processes underlying brain rhythmic activity (steady-state topography) and probing the causal role of brain rhythms in cognition and emotion. This paper aims at analyzing the space and time EEG dynamics in response to RVS at the frequency of stimulation and ongoing rhythms in the delta, theta, alpha, beta, and gamma bands. Approach. We used electroencephalography (EEG) to study the oscillatory brain dynamics during RVS at 10 frequencies in the gamma band (40-60 Hz). We collected an extensive EEG data set from 32 participants and analyzed the RVS evoked and induced responses in the time-frequency domain. Main results. Stable SSVEP over parieto-occipital sites was observed at each of the fundamental frequencies and their harmonics and sub-harmonics. Both the strength and the spatial propagation of the SSVEP response seem sensitive to stimulus frequency. The SSVEP was more localized around the parieto-occipital sites for higher frequencies (>54 Hz) and spread to fronto-central locations for lower frequencies. We observed a strong negative correlation between stimulation frequency and relative power change at that frequency, the first harmonic and the sub-harmonic components over occipital sites. Interestingly, over parietal sites for sub-harmonics a positive correlation of relative power change and stimulation frequency was found. A number of distinct patterns in delta (1-4 Hz), theta (4-8 Hz), alpha (8-12 Hz) and beta (15-30 Hz) bands were also observed. The transient response, from 0 to about 300 ms after stimulation onset, was accompanied by an increase in delta and theta power over fronto-central and occipital sites, which returned to baseline
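
    As an illustration of the 'relative power change at the stimulation frequency, the first harmonic and the sub-harmonic' analyzed above, the following sketch compares Welch power estimates during stimulation against a pre-stimulus baseline; the sampling rate, band half-width and signals are illustrative assumptions, not the study's pipeline.

      import numpy as np
      from scipy.signal import welch

      def relative_power_change(baseline, stim, fs, f_stim):
          """Relative power change (stim vs. baseline) at f_stim, 2*f_stim, f_stim/2."""
          def band_power(x, f0, half_bw=0.5):
              f, p = welch(x, fs=fs, nperseg=int(2 * fs))
              band = (f >= f0 - half_bw) & (f <= f0 + half_bw)
              return p[band].mean()
          out = {}
          for label, f0 in [("fundamental", f_stim),
                            ("harmonic", 2 * f_stim),
                            ("subharmonic", f_stim / 2)]:
              pb, ps = band_power(baseline, f0), band_power(stim, f0)
              out[label] = (ps - pb) / pb          # relative change
          return out

      fs, f_stim = 512, 40.0                       # illustrative values
      t = np.arange(0, 4, 1 / fs)
      baseline = np.random.randn(t.size)
      stim = baseline + 0.5 * np.sin(2 * np.pi * f_stim * t)
      print(relative_power_change(baseline, stim, fs, f_stim))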

  10. Using neural networks for dynamic light scattering time series processing

    NASA Astrophysics Data System (ADS)

    Chicea, Dan

    2017-04-01

    A basic experiment to record dynamic light scattering (DLS) time series was assembled using basic components. The DLS time series processing using the Lorentzian function fit was considered as reference. A Neural Network was designed and trained using simulated frequency spectra for spherical particles in the range 0–350 nm, assumed to be scattering centers, and the neural network design and training procedure are described in detail. The neural network output accuracy was tested both on simulated and on experimental time series. The match with the DLS results, considered as reference, was good, serving as a proof of concept for using neural networks in fast DLS time series processing.
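
    For context on the Lorentzian fit used as the reference method, a minimal sketch is shown below: the power spectrum of the intensity fluctuations is fitted with a Lorentzian whose half-width relates to the diffusion coefficient and hence, via Stokes-Einstein, to particle diameter. All numerical values (wavelength, angle, temperature, viscosity) and the heterodyne convention are illustrative assumptions, not the paper's setup.

      import numpy as np
      from scipy.optimize import curve_fit

      def lorentzian(f, a, gamma):
          """Lorentzian power spectrum with half-width gamma (Hz)."""
          return a * gamma / (gamma**2 + f**2)

      def diameter_from_gamma(gamma, wavelength=633e-9, n=1.33, theta=np.pi / 2,
                              T=293.0, eta=1.0e-3):
          """Heterodyne convention: gamma = D*q**2 / (2*pi); D = kB*T/(3*pi*eta*d)."""
          kB = 1.380649e-23
          q = 4 * np.pi * n / wavelength * np.sin(theta / 2)    # scattering vector
          D = 2 * np.pi * gamma / q**2                          # diffusion coefficient
          return kB * T / (3 * np.pi * eta * D)                 # hydrodynamic diameter

      # toy spectrum: recover the Lorentzian width by fitting, then convert to a size
      f = np.linspace(1, 5000, 2000)
      spectrum = lorentzian(f, 1.0, 800.0) + 1e-5 * np.random.rand(f.size)
      (_, gamma_fit), _ = curve_fit(lorentzian, f, spectrum, p0=[1.0, 500.0])
      print(diameter_from_gamma(gamma_fit))                     # diameter in metres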

  11. Neural Networks for Dynamic Flight Control

    DTIC Science & Technology

    1993-12-01

    uses the Adaline (22) model for development of the neural networks. Neural Graphics and other AFIT applications use a slightly different model. The...primary difference in the Nguyen application is that the Adaline uses the nonlinear function f(a) = tanh(a) where standard backprop uses the sigmoid

  12. Shaping the learning curve: epigenetic dynamics in neural plasticity

    PubMed Central

    Bronfman, Zohar Z.; Ginsburg, Simona; Jablonka, Eva

    2014-01-01

    A key characteristic of learning and neural plasticity is state-dependent acquisition dynamics reflected by the non-linear learning curve that links increase in learning with practice. Here we propose that the manner by which epigenetic states of individual cells change during learning contributes to the shape of the neural and behavioral learning curve. We base our suggestion on recent studies showing that epigenetic mechanisms such as DNA methylation, histone acetylation, and RNA-mediated gene regulation are intimately involved in the establishment and maintenance of long-term neural plasticity, reflecting specific learning-histories and influencing future learning. Our model, which is the first to suggest a dynamic molecular account of the shape of the learning curve, leads to several testable predictions regarding the link between epigenetic dynamics at the promoter, gene-network, and neural-network levels. This perspective opens up new avenues for therapeutic interventions in neurological pathologies. PMID:25071483

  13. Absolute stability and synchronization in neural field models with transmission delays

    NASA Astrophysics Data System (ADS)

    Kao, Chiu-Yen; Shih, Chih-Wen; Wu, Chang-Hong

    2016-08-01

    Neural fields model macroscopic parts of the cortex that involve several populations of neurons. We consider a class of neural field models represented by integro-differential equations with space-dependent transmission delays. The spatial domains underlying the systems can be bounded or unbounded. A new approach, called sequential contracting, is employed instead of the conventional Lyapunov functional technique to investigate the global dynamics of such systems. Sufficient conditions for the absolute stability and synchronization of the systems are established. Several numerical examples are presented to demonstrate the theoretical results.
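
    The abstract does not reproduce the model equations; a generic member of the class it describes (an integro-differential neural field with space-dependent transmission delays) can be written, in placeholder notation, as

      \frac{\partial u_i(x,t)}{\partial t} = -u_i(x,t)
        + \sum_{j=1}^{N} \int_{\Omega} w_{ij}(x,y)\, f_j\big(u_j(y,\, t - \tau_{ij}(x,y))\big)\, dy
        + I_i(x,t),

    where u_i(x,t) is the activity of population i at position x, w_{ij} are the connectivity kernels, f_j the firing-rate functions, \tau_{ij}(x,y) the space-dependent transmission delays, I_i external inputs, and \Omega a bounded or unbounded spatial domain.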

  14. Neural network based dynamic controllers for industrial robots.

    PubMed

    Oh, S Y; Shin, W C; Kim, H G

    1995-09-01

    An industrial robot's dynamic performance is frequently measured by positioning accuracy at high speeds, so a good dynamic controller that can accurately compute robot dynamics at a servo rate high enough to ensure system stability is essential. A real-time dynamic controller for an industrial robot is developed here using neural networks. First, an efficient time-selectable hidden layer architecture has been developed based on system dynamics localized in time, which lends itself to real-time learning and control along with enhanced mapping accuracy. Second, the neural network architecture has also been specially tuned to accommodate servo dynamics. This not only facilitates the system design through reduced sensing requirements for the controller but also enhances the control performance over the control architecture neglecting servo dynamics. Experimental results demonstrate the controller's excellent learning and control performance compared with a conventional controller, indicating good potential for practical use in industrial robots.

  15. Pulsating fronts in periodically modulated neural field models

    NASA Astrophysics Data System (ADS)

    Coombes, S.; Laing, C. R.

    2011-01-01

    We consider a coarse-grained neural field model for synaptic activity in spatially extended cortical tissue that possesses an underlying periodicity in its microstructure. The model is written as an integrodifferential equation with periodic modulation of a translationally invariant spatial kernel. This modulation can have a strong effect on wave propagation through the tissue, including the creation of pulsating fronts with widely varying speeds and wave-propagation failure. Here we develop a new analysis for the study of such phenomena, using two complementary techniques. The first uses linearized information from the leading edge of a traveling periodic wave to obtain wave speed estimates for pulsating fronts, and the second develops an interface description for waves in the full nonlinear model. For weak modulation and a Heaviside firing rate function the interface dynamics can be analyzed exactly and gives predictions that are in excellent agreement with direct numerical simulations. Importantly, the interface dynamics description improves on the standard homogenization calculation, which is restricted to modulation that is both fast and weak.
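
    A hedged sketch of the type of equation studied above (a translationally invariant kernel multiplied by a periodic modulation reflecting the microstructure; the notation is illustrative rather than the authors') is

      \frac{\partial u(x,t)}{\partial t} = -u(x,t)
        + \int_{-\infty}^{\infty} w(x-y)\, A(y/\varepsilon)\, f\big(u(y,t)\big)\, dy,

    with w a homogeneous synaptic kernel, A a periodic modulation of period \varepsilon, and f the firing-rate nonlinearity (a Heaviside function in the exactly solvable interface case mentioned in the abstract).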

  16. Saccadic reaction times in gap/overlap paradigms: a model based on integration of intentional and visual information on neural, dynamic fields.

    PubMed

    Kopecz, K

    1995-10-01

    The systematic variations of regular saccadic reaction times induced in gap/overlap paradigms are addressed by a quantitative model. Intentional and visual information are integrated on a retinotopic representation of visual space, on which activity dynamics is related to movement initiation. Using a specific conception of "motor preparation", known effects of general warnings and fixation point on- and offsets are reproduced. Results of new experiments are predicted and the extent to which fixation point offsets are specific to ocular responses is analyzed in the light of the exposed model architecture. Relations of the theoretical framework to neurophysiological findings are discussed.

  17. Gap junctions: their importance for the dynamics of neural circuits.

    PubMed

    Rela, Lorena; Szczupak, Lidia

    2004-12-01

    Electrical coupling through gap junctions constitutes a mode of signal transmission between neurons (electrical synaptic transmission). Originally discovered in invertebrates and in lower vertebrates, electrical synapses have recently been reported in immature and adult mammalian nervous systems. This has renewed the interest in understanding the role of electrical synapses in neural circuit function and signal processing. The present review focuses on the role of gap junctions in shaping the dynamics of neural networks by forming electrical synapses between neurons. Electrical synapses have been shown to be important elements in coincidence detection mechanisms and they can produce complex input-output functions when arranged in combination with chemical synapses. We postulate that these synapses may also be important in redefining neuronal compartments, associating anatomically distinct cellular structures into functional units. The original view of electrical synapses as static connecting elements in neural circuits has been revised and a considerable amount of evidence suggests that electrical synapses substantially affect the dynamics of neural circuits.

  18. Exploring the neural dynamics underpinning individual differences in sentence comprehension.

    PubMed

    Prat, Chantel S; Just, Marcel Adam

    2011-08-01

    This study used functional magnetic resonance imaging to investigate individual differences in the neural underpinnings of sentence comprehension, with a focus on neural adaptability (dynamic configuration of neural networks with changing task demands). Twenty-seven undergraduates, with varying working memory capacities and vocabularies, read sentences that were either syntactically simple or complex under conditions of varying extrinsic working memory demands (sentences alone or preceded by to-be-remembered words or nonwords). All readers showed greater neural adaptability when extrinsic working memory demands were low, suggesting that adaptability is related to resource availability. Higher capacity readers showed greater neural adaptability (greater increase in activation with increasing syntactic complexity) across conditions than did lower capacity readers. Higher capacity readers also showed better maintenance of or increase in synchronization of activation between brain regions as tasks became more demanding. Larger vocabulary was associated with more efficient use of cortical resources (reduced activation in frontal regions) in all conditions but was not associated with greater neural adaptability or synchronization. The distinct characterizations of verbal working memory capacity and vocabulary suggest that dynamic facets of brain function such as adaptability and synchronization may underlie individual differences in more general information processing abilities, whereas neural efficiency may more specifically reflect individual differences in language experience.

  19. Measuring Whole-Brain Neural Dynamics and Behavior of Freely-Moving C. elegans

    NASA Astrophysics Data System (ADS)

    Shipley, Frederick; Nguyen, Jeffrey; Plummer, George; Shaevitz, Joshua; Leifer, Andrew

    2015-03-01

    Bridging the gap between an organism's neural dynamics and its ultimate behavior is the fundamental goal of neuroscience. Previously, to probe neural dynamics, measurements have been restricted to a small number of neurons, whether by electrodes or optogenetics. Here we present an instrument to simultaneously monitor neural activity from every neuron in the head of a freely moving Caenorhabditis elegans while recording its behavior. Previously, whole-brain imaging has been demonstrated in C. elegans, but only in restrained and anesthetized animals (1). For studying neural coding of behavior it is crucial to study neural activity in freely behaving animals. Neural activity is recorded optically from cells expressing a calcium indicator, GCaMP6. Real-time computer vision tracks the worm's position in x-y, while a piezo stage sweeps through the brain in z, yielding five brain-volumes per second. Behavior is recorded under infrared, dark-field imaging. This tool will allow us to directly correlate neural activity with behavior and we will present progress toward this goal. Thank you to the Simons Foundation and Princeton University for supporting this research.

  20. Neural Dynamics Underlying Event-Related Potentials

    NASA Technical Reports Server (NTRS)

    Shah, Ankoor S.; Bressler, Steven L.; Knuth, Kevin H.; Ding, Ming-Zhou; Mehta, Ashesh D.; Ulbert, Istvan; Schroeder, Charles E.

    2003-01-01

    There are two opposing hypotheses about the brain mechanisms underlying sensory event-related potentials (ERPs). One holds that sensory ERPs are generated by phase resetting of ongoing electroencephalographic (EEG) activity, and the other that they result from signal averaging of stimulus-evoked neural responses. We tested several contrasting predictions of these hypotheses by direct intracortical analysis of neural activity in monkeys. Our findings clearly demonstrate evoked response contributions to the sensory ERP in the monkey, and they suggest the likelihood that a mixed (Evoked/Phase Resetting) model may account for the generation of scalp ERPs in humans.

  1. Neural Computations in a Dynamical System with Multiple Time Scales

    PubMed Central

    Mi, Yuanyuan; Lin, Xiaohan; Wu, Si

    2016-01-01

    Neural systems display rich short-term dynamics at various levels, e.g., spike-frequency adaptation (SFA) at the single-neuron level, and short-term facilitation (STF) and depression (STD) at the synapse level. These dynamical features typically cover a broad range of time scales and exhibit large diversity in different brain regions. It remains unclear what computational benefit the brain derives from having such variability in short-term dynamics. In this study, we propose that the brain can exploit such dynamical features to implement multiple seemingly contradictory computations in a single neural circuit. To demonstrate this idea, we use a continuous attractor neural network (CANN) as a working model and include STF, SFA and STD with increasing time constants in its dynamics. Three computational tasks are considered, which are persistent activity, adaptation, and anticipative tracking. These tasks require conflicting neural mechanisms, and hence cannot be implemented by a single dynamical feature or any combination with similar time constants. However, with properly coordinated STF, SFA and STD, we show that the network is able to implement the three computational tasks concurrently. We hope this study will shed light on the understanding of how the brain orchestrates its rich dynamics at various levels to realize diverse cognitive functions. PMID:27679569

  2. Discriminating lysosomal membrane protein types using dynamic neural network.

    PubMed

    Tripathi, Vijay; Gupta, Dwijendra Kumar

    2014-01-01

    This work presents a dynamic artificial neural network methodology that classifies proteins from their sequences alone into the lysosomal membrane protein class and various other membrane protein classes. A neural network-based lysosomal-associated membrane protein type prediction system is proposed. Different protein sequence representations are fused to extract the features of a protein sequence, comprising seven feature sets: amino acid (AA) composition, sequence length, hydrophobic group, electronic group, sum of hydrophobicity, R-group, and dipeptide composition. To reduce the dimensionality of the large feature vector, we applied principal component analysis. The probabilistic neural network, generalized regression neural network, and Elman regression neural network (RNN) are used as classifiers and compared with the layer recurrent network (LRN), a dynamic network. Dynamic networks have memory, i.e., their output depends not only on the current input but also on previous outputs. The accuracy of the LRN classifier turns out to be the highest among all the artificial neural networks considered. The overall accuracy of jackknife cross-validation is 93.2% for the data set. These results suggest that the method can be effectively applied to discriminate lysosomal-associated membrane proteins from other membrane proteins (Type-I, outer membrane proteins, GPI-anchored) and globular proteins, and they also indicate that this protein sequence representation can better reflect the core features of membrane proteins than the classical AA composition alone.
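
    A minimal sketch of the front end of such a pipeline (amino-acid composition features followed by principal component analysis) is shown below; the feature set here is deliberately reduced to composition and sequence length, and the sequences are placeholders, not the paper's data set.

      import numpy as np
      from sklearn.decomposition import PCA

      AA = "ACDEFGHIKLMNPQRSTVWY"

      def features(seq):
          """20-dim amino-acid composition plus sequence length."""
          seq = seq.upper()
          comp = [seq.count(a) / len(seq) for a in AA]
          return np.array(comp + [len(seq)], dtype=float)

      # placeholder sequences; in practice these come from a curated protein database
      seqs = ["MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
              "MGLSDGEWQLVLNVWGKVEADIPGHGQEVLIRL",
              "MSHHWGYGKHNGPEHWHKDFPIAKGERQSPVDI"]
      X = np.vstack([features(s) for s in seqs])
      X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)   # standardize
      Z = PCA(n_components=2).fit_transform(X)             # reduced feature vectors
      print(Z.shape)                                        # (3, 2) -> fed to a classifier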

  3. Scale-Free Neural and Physiological Dynamics in Naturalistic Stimuli Processing

    PubMed Central

    Lin, Amy

    2016-01-01

    Neural activity recorded at multiple spatiotemporal scales is dominated by arrhythmic fluctuations without a characteristic temporal periodicity. Such activity often exhibits a 1/f-type power spectrum, in which power falls off with increasing frequency following a power-law function: P(f)∝1/fβ, which is indicative of scale-free dynamics. Two extensively studied forms of scale-free neural dynamics in the human brain are slow cortical potentials (SCPs)—the low-frequency (<5 Hz) component of brain field potentials—and the amplitude fluctuations of α oscillations, both of which have been shown to carry important functional roles. In addition, scale-free dynamics characterize normal human physiology such as heartbeat dynamics. However, the exact relationships among these scale-free neural and physiological dynamics remain unclear. We recorded simultaneous magnetoencephalography and electrocardiography in healthy subjects in the resting state and while performing a discrimination task on scale-free dynamical auditory stimuli that followed different scale-free statistics. We observed that long-range temporal correlation (captured by the power-law exponent β) in SCPs positively correlated with that of heartbeat dynamics across time within an individual and negatively correlated with that of α-amplitude fluctuations across individuals. In addition, across individuals, long-range temporal correlation of both SCP and α-oscillation amplitude predicted subjects’ discrimination performance in the auditory task, albeit through antagonistic relationships. These findings reveal interrelations among different scale-free neural and physiological dynamics and initial evidence for the involvement of scale-free neural dynamics in the processing of natural stimuli, which often exhibit scale-free dynamics. PMID:27822495
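
    The exponent β in the 1/f-type spectrum above is commonly estimated by a straight-line fit to the power spectrum on log-log axes; a minimal sketch (Welch spectra, ordinary least squares, illustrative fitting range) follows.

      import numpy as np
      from scipy.signal import welch

      def powerlaw_exponent(x, fs, fmin=0.01, fmax=5.0):
          """Estimate beta in P(f) ~ 1/f**beta by a log-log linear fit."""
          f, p = welch(x, fs=fs, nperseg=min(len(x), 4096))
          keep = (f >= fmin) & (f <= fmax) & (p > 0)
          slope, _ = np.polyfit(np.log10(f[keep]), np.log10(p[keep]), 1)
          return -slope                                    # beta = negative slope

      # synthetic 1/f signal via spectral shaping (target beta = 1)
      fs, n = 100, 2**16
      spec = np.fft.rfft(np.random.randn(n))
      freqs = np.fft.rfftfreq(n, 1 / fs)
      spec[1:] /= np.sqrt(freqs[1:])                       # amplitude ~ f^(-1/2) -> power ~ 1/f
      x = np.fft.irfft(spec, n)
      print(powerlaw_exponent(x, fs))                      # approximately 1.0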

  4. Scale-Free Neural and Physiological Dynamics in Naturalistic Stimuli Processing.

    PubMed

    Lin, Amy; Maniscalco, Brian; He, Biyu J

    2016-01-01

    Neural activity recorded at multiple spatiotemporal scales is dominated by arrhythmic fluctuations without a characteristic temporal periodicity. Such activity often exhibits a 1/f-type power spectrum, in which power falls off with increasing frequency following a power-law function: P(f) ∝ 1/f^β, which is indicative of scale-free dynamics. Two extensively studied forms of scale-free neural dynamics in the human brain are slow cortical potentials (SCPs), the low-frequency (<5 Hz) component of brain field potentials, and the amplitude fluctuations of α oscillations, both of which have been shown to carry important functional roles. In addition, scale-free dynamics characterize normal human physiology such as heartbeat dynamics. However, the exact relationships among these scale-free neural and physiological dynamics remain unclear. We recorded simultaneous magnetoencephalography and electrocardiography in healthy subjects in the resting state and while performing a discrimination task on scale-free dynamical auditory stimuli that followed different scale-free statistics. We observed that long-range temporal correlation (captured by the power-law exponent β) in SCPs positively correlated with that of heartbeat dynamics across time within an individual and negatively correlated with that of α-amplitude fluctuations across individuals. In addition, across individuals, long-range temporal correlation of both SCP and α-oscillation amplitude predicted subjects' discrimination performance in the auditory task, albeit through antagonistic relationships. These findings reveal interrelations among different scale-free neural and physiological dynamics and initial evidence for the involvement of scale-free neural dynamics in the processing of natural stimuli, which often exhibit scale-free dynamics.

  5. Neural network approaches to dynamic collision-free trajectory generation.

    PubMed

    Yang, S X; Meng, M

    2001-01-01

    In this paper, dynamic collision-free trajectory generation in a nonstationary environment is studied using biologically inspired neural network approaches. The proposed neural network is topologically organized, where the dynamics of each neuron is characterized by a shunting equation or an additive equation. The state space of the neural network can be either the Cartesian workspace or the joint space of multi-joint robot manipulators. There are only local lateral connections among neurons. The real-time optimal trajectory is generated through the dynamic activity landscape of the neural network without explicitly searching over either the free space or the collision paths, without explicitly optimizing any global cost functions, without any prior knowledge of the dynamic environment, and without any learning procedures. Therefore, the algorithm is computationally efficient. The stability of the neural network system is guaranteed by the existence of a Lyapunov function candidate. In addition, this model is not very sensitive to the model parameters. Several model variations are presented and the differences are discussed. As examples, the proposed models are applied to generate collision-free trajectories for a mobile robot to solve a maze-type problem, to avoid concave U-shaped obstacles, to track a moving target while avoiding varying obstacles, and to generate a trajectory for a two-link planar robot with two targets. The effectiveness and efficiency of the proposed approaches are demonstrated through simulation and comparison studies.
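
    The 'shunting equation' characterizing each neuron in this family of models takes, in a generic form (notation is illustrative and may differ from the authors'),

      \frac{dx_i}{dt} = -A x_i + (B - x_i)\big([I_i]^+ + \sum_j w_{ij}[x_j]^+\big) - (D + x_i)[I_i]^-,

    where x_i is the activity of the i-th neuron, A the passive decay rate, B and D the upper and lower activity bounds, [I_i]^+ and [I_i]^- the excitatory (target) and inhibitory (obstacle) inputs, w_{ij} the local lateral connection weights, and [x]^+ = \max(x, 0). The resulting activity landscape steers the trajectory toward targets and around obstacles without any global search.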

  6. A theory of neural dimensionality, dynamics, and measurement

    NASA Astrophysics Data System (ADS)

    Ganguli, Surya

    In many experiments, neuroscientists tightly control behavior, record many trials, and obtain trial-averaged firing rates from hundreds of neurons in circuits containing millions of behaviorally relevant neurons. Dimensionality reduction has often shown that such datasets are strikingly simple; they can be described using a much smaller number of dimensions than the number of recorded neurons, and the resulting projections onto these dimensions yield a remarkably insightful dynamical portrait of circuit computation. This ubiquitous simplicity raises several profound and timely conceptual questions. What is the origin of this simplicity and its implications for the complexity of brain dynamics? Would neuronal datasets become more complex if we recorded more neurons? How and when can we trust dynamical portraits obtained from only hundreds of neurons in circuits containing millions of neurons? We present a theory that answers these questions, and test it using neural data recorded from reaching monkeys. Overall, this theory yields a picture of the neural measurement process as a random projection of neural dynamics, conceptual insights into how we can reliably recover dynamical portraits in such under-sampled measurement regimes, and quantitative guidelines for the design of future experiments. Moreover, it reveals the existence of phase transition boundaries in our ability to successfully decode cognition and behavior as a function of the number of recorded neurons, the complexity of the task, and the smoothness of neural dynamics.

  7. Dynamic properties of force fields

    NASA Astrophysics Data System (ADS)

    Vitalini, F.; Mey, A. S. J. S.; Noé, F.; Keller, B. G.

    2015-02-01

    Molecular-dynamics simulations are increasingly used to study dynamic properties of biological systems. With this development, the ability of force fields to successfully predict relaxation timescales and the associated conformational exchange processes moves into focus. We assess to what extent the dynamic properties of model peptides (Ac-A-NHMe, Ac-V-NHMe, AVAVA, A10) differ when simulated with different force fields (AMBER ff99SB-ILDN, AMBER ff03, OPLS-AA/L, CHARMM27, and GROMOS43a1). The dynamic properties are extracted using Markov state models. For single-residue models (Ac-A-NHMe, Ac-V-NHMe), the slow conformational exchange processes are similar in all force fields, but the associated relaxation timescales differ by up to an order of magnitude. For the peptide systems, not only the relaxation timescales, but also the conformational exchange processes differ considerably across force fields. This finding calls the significance of dynamic interpretations of molecular-dynamics simulations into question.
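
    For reference, the relaxation timescales extracted from a Markov state model follow from the eigenvalues λ_i of the transition matrix estimated at lag time τ:

      t_i = -\frac{\tau}{\ln |\lambda_i|}, \qquad i = 2, 3, \ldots,

    so differences between force fields appear directly as shifts in these implied timescales and in the eigenvectors describing the associated conformational exchange processes.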

  8. Biologically plausible learning in recurrent neural networks reproduces neural dynamics observed during cognitive tasks.

    PubMed

    Miconi, Thomas

    2017-02-23

    Neural activity during cognitive tasks exhibits complex dynamics that flexibly encode task-relevant variables. Chaotic recurrent networks, which spontaneously generate rich dynamics, have been proposed as a model of cortical computation during cognitive tasks. However, existing methods for training these networks are biologically implausible and/or require a continuous, real-time error signal to guide learning. Here we show that a biologically plausible learning rule can train such recurrent networks, guided solely by delayed, phasic rewards at the end of each trial. Networks endowed with this learning rule can successfully learn nontrivial tasks requiring flexible (context-dependent) associations, memory maintenance, nonlinear mixed selectivities, and coordination among multiple outputs. The resulting networks replicate complex dynamics previously observed in animal cortex, such as dynamic encoding of task features and selective integration of sensory inputs. We conclude that recurrent neural networks offer a plausible model of cortical dynamics during both learning and performance of flexible behavior.
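
    A generic form of the reward-gated plasticity described above (a sketch of the idea, not necessarily the paper's exact rule) accumulates a Hebbian eligibility trace during the trial and applies it only when the delayed, phasic reward arrives:

      \Delta W_{ij} = \eta\, (R - \bar{R})\, e_{ij}, \qquad
      e_{ij} = \sum_{t \in \text{trial}} x_j(t)\, \phi\big(r_i(t)\big),

    where e_{ij} is an eligibility trace built from presynaptic activity x_j and a function \phi of postsynaptic fluctuations r_i, R is the reward delivered at the end of the trial, and \bar{R} is a running average of recent rewards.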

  9. Spontaneous Neural Dynamics and Multi-scale Network Organization

    PubMed Central

    Foster, Brett L.; He, Biyu J.; Honey, Christopher J.; Jerbi, Karim; Maier, Alexander; Saalmann, Yuri B.

    2016-01-01

    Spontaneous neural activity has historically been viewed as task-irrelevant noise that should be controlled for via experimental design, and removed through data analysis. However, electrophysiology and functional MRI studies of spontaneous activity patterns, which have greatly increased in number over the past decade, have revealed a close correspondence between these intrinsic patterns and the structural network architecture of functional brain circuits. In particular, by analyzing the large-scale covariation of spontaneous hemodynamics, researchers are able to reliably identify functional networks in the human brain. Subsequent work has sought to identify the corresponding neural signatures via electrophysiological measurements, as this would elucidate the neural origin of spontaneous hemodynamics and would reveal the temporal dynamics of these processes across slower and faster timescales. Here we survey common approaches to quantifying spontaneous neural activity, reviewing their empirical success, and their correspondence with the findings of neuroimaging. We emphasize invasive electrophysiological measurements, which are amenable to amplitude- and phase-based analyses, and which can report variations in connectivity with high spatiotemporal precision. After summarizing key findings from the human brain, we survey work in animal models that display similar multi-scale properties. We highlight that, across many spatiotemporal scales, the covariance structure of spontaneous neural activity reflects structural properties of neural networks and dynamically tracks their functional repertoire. PMID:26903823

  10. Neural network with dynamically adaptable neurons

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul (Inventor)

    1994-01-01

    This invention is an adaptive neuron for use in neural network processors. The adaptive neuron participates in the supervised learning phase of operation on a co-equal basis with the synapse matrix elements by adaptively changing its gain in a similar manner to the change of weights in the synapse matrix elements. In this manner, training time is decreased by as much as three orders of magnitude.

  11. Logic Dynamics for Deductive Inference -- Its Stability and Neural Basis

    NASA Astrophysics Data System (ADS)

    Tsuda, Ichiro

    2014-12-01

    We propose a dynamical model that represents a process of deductive inference. We discuss the stability of logic dynamics and a neural basis for the dynamics. We propose a new concept of descriptive stability, thereby enabling a structure of stable descriptions of mathematical models concerning dynamic phenomena to be clarified. The present theory is based on the wider and deeper thoughts of John S. Nicolis. In particular, it is based on our joint paper on the chaos theory of human short-term memories with a magic number of seven plus or minus two.

  12. Dynamic Adaptive Neural Network Arrays: A Neuromorphic Architecture

    SciTech Connect

    Disney, Adam; Reynolds, John

    2015-01-01

    Dynamic Adaptive Neural Network Array (DANNA) is a neuromorphic hardware implementation. It differs from most other neuromorphic projects in that it allows for programmability of structure, and it is trained or designed using evolutionary optimization. This paper describes the DANNA structure, how DANNA is trained using evolutionary optimization, and an application of DANNA to a very simple classification task.

  13. Non-Lipschitzian dynamics for neural net modelling

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    1989-01-01

    Failure of the Lipschitz condition in unstable equilibrium points of dynamical systems leads to a multiple-choice response to an initial deterministic input. The evolution of such systems is characterized by a special type of unpredictability measured by unbounded Liapunov exponents. Possible relation of these systems to future neural networks is discussed.
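
    A standard textbook example of this behavior is the scalar system

      \dot{x} = x^{1/3}, \qquad x(0) = 0,

    which fails the Lipschitz condition at the equilibrium x = 0 and admits infinitely many solutions, including x(t) \equiv 0 and x(t) = (2t/3)^{3/2}; a deterministic initial condition can therefore evolve along multiple trajectories, and perturbations leave the equilibrium faster than any exponential, consistent with unbounded Liapunov exponents.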

  14. Dynamics of a neural system with a multiscale architecture

    PubMed Central

    Breakspear, Michael; Stam, Cornelis J

    2005-01-01

    The architecture of the brain is characterized by a modular organization repeated across a hierarchy of spatial scales—neurons, minicolumns, cortical columns, functional brain regions, and so on. It is important to consider that the processes governing neural dynamics at any given scale are not only determined by the behaviour of other neural structures at that scale, but also by the emergent behaviour of smaller scales, and the constraining influence of activity at larger scales. In this paper, we introduce a theoretical framework for neural systems in which the dynamics are nested within a multiscale architecture. In essence, the dynamics at each scale are determined by a coupled ensemble of nonlinear oscillators, which embody the principal scale-specific neurobiological processes. The dynamics at larger scales are ‘slaved’ to the emergent behaviour of smaller scales through a coupling function that depends on a multiscale wavelet decomposition. The approach is first explicated mathematically. Numerical examples are then given to illustrate phenomena such as between-scale bifurcations, and how synchronization in small-scale structures influences the dynamics in larger structures in an intuitive manner that cannot be captured by existing modelling approaches. A framework for relating the dynamical behaviour of the system to measured observables is presented and further extensions to capture wave phenomena and mode coupling are suggested. PMID:16087448

  15. 3-D flame temperature field reconstruction with multiobjective neural network

    NASA Astrophysics Data System (ADS)

    Wan, Xiong; Gao, Yiqing; Wang, Yuanmei

    2003-02-01

    A novel 3-D temperature field reconstruction method is proposed in this paper, which is based on multiwavelength thermometry and Hopfield neural network computed tomography. A mathematical model of multiwavelength thermometry is established, and a neural network algorithm based on multiobjective optimization is developed. Through computer simulation and comparison with the algebraic reconstruction technique (ART) and the filtered back-projection algorithm (FBP), the reconstruction result of the new method is discussed in detail. The study shows that the new method always gives the best reconstruction results. Finally, the temperature distribution over a cross-section of a four-peak candle flame is reconstructed with this novel method.

  16. Dynamic behaviors of the non-neural ectoderm during mammalian cranial neural tube closure.

    PubMed

    Ray, Heather J; Niswander, Lee A

    2016-08-15

    The embryonic brain and spinal cord initially form through the process of neural tube closure (NTC). NTC is thought to be highly similar between rodents and humans, and studies of mouse genetic mutants have greatly increased our understanding of the molecular basis of NTC with relevance for human neural tube defects. In addition, studies using amphibian and chick embryos have shed light into the cellular and tissue dynamics underlying NTC. However, the dynamics of mammalian NTC has been difficult to study due to in utero development until recently when advances in mouse embryo ex vivo culture techniques along with confocal microscopy have allowed for imaging of mouse NTC in real time. Here, we have performed live imaging of mouse embryos with a particular focus on the non-neural ectoderm (NNE). Previous studies in multiple model systems have found that the NNE is important for proper NTC, but little is known about the behavior of these cells during mammalian NTC. Here we utilized a NNE-specific genetic labeling system to assess NNE dynamics during murine NTC and identified different NNE cell behaviors as the cranial region undergoes NTC. These results bring valuable new insight into regional differences in cellular behavior during NTC that may be driven by different molecular regulators and which may underlie the various positional disruptions of NTC observed in humans with neural tube defects.

  17. Can Neural Activity Propagate by Endogenous Electrical Field?

    PubMed Central

    Qiu, Chen; Shivacharan, Rajat S.; Zhang, Mingming

    2015-01-01

    It is widely accepted that synaptic transmissions and gap junctions are the major governing mechanisms for signal traveling in the neural system. Yet, a group of neural waves, either physiological or pathological, share the same speed of ∼0.1 m/s without synaptic transmission or gap junctions, and this speed is not consistent with axonal conduction or ionic diffusion. The only explanation left is an electrical field effect. We tested the hypothesis that endogenous electric fields are sufficient to explain the propagation with in silico and in vitro experiments. Simulation results show that field effects alone can indeed mediate propagation across layers of neurons with speeds of 0.12 ± 0.09 m/s with pathological kinetics, and 0.11 ± 0.03 m/s with physiologic kinetics, both generating weak field amplitudes of ∼2–6 mV/mm. Further, the model predicted that propagation speed values are inversely proportional to the cell-to-cell distances, but do not significantly change with extracellular resistivity, membrane capacitance, or membrane resistance. In vitro recordings in mice hippocampi produced similar speeds (0.10 ± 0.03 m/s) and field amplitudes (2.5–5 mV/mm), and by applying a blocking field, the propagation speed was greatly reduced. Finally, osmolarity experiments confirmed the model's prediction that cell-to-cell distance inversely affects propagation speed. Together, these results show that despite their weak amplitude, electric fields can be solely responsible for spike propagation at ∼0.1 m/s. This phenomenon could be important to explain the slow propagation of epileptic activity and other normal propagations at similar speeds. SIGNIFICANCE STATEMENT Neural activity (waves or spikes) can propagate using well documented mechanisms such as synaptic transmission, gap junctions, or diffusion. However, the purpose of this paper is to provide an explanation for experimental data showing that neural signals can propagate by means other than synaptic

  18. Framing effects: behavioral dynamics and neural basis.

    PubMed

    Zheng, Hongming; Wang, X T; Zhu, Liqi

    2010-09-01

    This study examined the neural basis of framing effects using life-death decision problems framed either positively in terms of lives saved or negatively in terms of lives lost in large group and small group contexts. Using functional MRI we found differential brain activations to the verbal and social cues embedded in the choice problems. In large group contexts, framing effects were significant where participants were more risk seeking under the negative (loss) framing than under the positive (gain) framing. This behavioral difference in risk preference was mainly regulated by the activation in the right inferior frontal gyrus, including the homologue of the Broca's area. In contrast, framing effects diminished in small group contexts while the insula and parietal lobe in the right hemisphere were distinctively activated, suggesting an important role of emotion in switching choice preference from an indecisive mode to a more consistent risk-taking inclination, governed by a kith-and-kin decision rationality.

  19. Effects of cellular homeostatic intrinsic plasticity on dynamical and computational properties of biological recurrent neural networks.

    PubMed

    Naudé, Jérémie; Cessac, Bruno; Berry, Hugues; Delord, Bruno

    2013-09-18

    Homeostatic intrinsic plasticity (HIP) is a ubiquitous cellular mechanism regulating neuronal activity, cardinal for the proper functioning of nervous systems. In invertebrates, HIP is critical for orchestrating stereotyped activity patterns. The functional impact of HIP remains more obscure in vertebrate networks, where higher order cognitive processes rely on complex neural dynamics. The hypothesis has emerged that HIP might control the complexity of activity dynamics in recurrent networks, with important computational consequences. However, conflicting results about the causal relationships between cellular HIP, network dynamics, and computational performance have arisen from machine-learning studies. Here, we assess how cellular HIP effects translate into collective dynamics and computational properties in biological recurrent networks. We develop a realistic multiscale model including a generic HIP rule regulating the neuronal threshold with actual molecular signaling pathways kinetics, Dale's principle, sparse connectivity, synaptic balance, and Hebbian synaptic plasticity (SP). Dynamic mean-field analysis and simulations unravel that HIP sets a working point at which inputs are transduced by large derivative ranges of the transfer function. This cellular mechanism ensures increased network dynamics complexity, robust balance with SP at the edge of chaos, and improved input separability. Although critically dependent upon balanced excitatory and inhibitory drives, these effects display striking robustness to changes in network architecture, learning rates, and input features. Thus, the mechanism we unveil might represent a ubiquitous cellular basis for complex dynamics in neural networks. Understanding this robustness is an important challenge to unraveling principles underlying self-organization around criticality in biological recurrent neural networks.

  20. Visual field interpretation with a personal computer based neural network.

    PubMed

    Mutlukan, E; Keating, D

    1994-01-01

    The Computer Assisted Touch Screen (CATS) and Computer Assisted Moving Eye Campimeter (CAMEC) are personal computer (PC)-based video-campimeters which employ multiple and single static stimuli on a cathode ray tube respectively. Clinical studies show that CATS and CAMEC provide comparable results to more expensive conventional visual field test devices. A neural network has been designed to classify visual field data from PC-based video-campimeters to facilitate diagnostic interpretation of visual field test results by non-experts. A three-layer back propagation network was designed, with 110 units in the input layer (each unit corresponding to a test point on the visual field test grid), a hidden layer of 40 processing units, and an output layer of 27 units (each one corresponding to a particular type of visual field pattern). The network was trained on a training set of 540 simulated visual field test result patterns, including normal, glaucomatous and neuro-ophthalmic defects, for up to 20,000 cycles. The classification accuracy of the network was initially measured with a previously unseen test set of 135 simulated fields and further tested with a genuine test result set of 100 neurological and 200 glaucomatous fields. Classification accuracies of 91-97% with simulated field results and 65-100% with genuine field results were achieved. This suggests that neural networks incorporated into PC-based video-campimeters may enable correct interpretation of results in non-specialist clinics or in the community.
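
    The architecture described (110 input units, 40 hidden units, 27 output classes, trained by back propagation) can be sketched with a modern library as below; this is a generic reconstruction with random placeholder data, not the original implementation.

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      # placeholder training set: 540 simulated visual fields, 110 test points each,
      # labelled with one of 27 field-defect patterns (cf. the abstract)
      rng = np.random.default_rng(0)
      X = rng.random((540, 110))            # sensitivity at each test location
      y = rng.integers(0, 27, size=540)     # defect-pattern class labels

      net = MLPClassifier(hidden_layer_sizes=(40,), activation="logistic",
                          solver="sgd", learning_rate_init=0.1, max_iter=2000)
      net.fit(X, y)
      print(net.predict(X[:5]))             # predicted defect classes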

  1. Dynamic Pricing in Electronic Commerce Using Neural Network

    NASA Astrophysics Data System (ADS)

    Ghose, Tapu Kumar; Tran, Thomas T.

    In this paper, we propose an approach in which a feed-forward neural network is used for dynamically calculating a competitive price of a product in order to maximize sellers' revenue. In the approach, we considered that, along with product price, other attributes such as product quality, delivery time, after-sales service and seller's reputation contribute to consumers' purchase decisions. We showed that once the sellers, using their limited prior knowledge, set an initial price for a product, our model adjusts the price automatically with the help of the neural network so that sellers' revenue is maximized.

  2. Fluctuation-Driven Neural Dynamics Reproduce Drosophila Locomotor Patterns

    PubMed Central

    Cruchet, Steeve; Gustafson, Kyle; Benton, Richard; Floreano, Dario

    2015-01-01

    The neural mechanisms determining the timing of even simple actions, such as when to walk or rest, are largely mysterious. One intriguing, but untested, hypothesis posits a role for ongoing activity fluctuations in neurons of central action selection circuits that drive animal behavior from moment to moment. To examine how fluctuating activity can contribute to action timing, we paired high-resolution measurements of freely walking Drosophila melanogaster with data-driven neural network modeling and dynamical systems analysis. We generated fluctuation-driven network models whose outputs—locomotor bouts—matched those measured from sensory-deprived Drosophila. From these models, we identified those that could also reproduce a second, unrelated dataset: the complex time-course of odor-evoked walking for genetically diverse Drosophila strains. Dynamical models that best reproduced both Drosophila basal and odor-evoked locomotor patterns exhibited specific characteristics. First, ongoing fluctuations were required. In a stochastic resonance-like manner, these fluctuations allowed neural activity to escape stable equilibria and to exceed a threshold for locomotion. Second, odor-induced shifts of equilibria in these models caused a depression in locomotor frequency following olfactory stimulation. Our models predict that activity fluctuations in action selection circuits cause behavioral output to more closely match sensory drive and may therefore enhance navigation in complex sensory environments. Together these data reveal how simple neural dynamics, when coupled with activity fluctuations, can give rise to complex patterns of animal behavior. PMID:26600381

  3. A solution to neural field equations by a recurrent neural network method

    NASA Astrophysics Data System (ADS)

    Alharbi, Abir

    2012-09-01

    Neural field equations (NFE) are used to model the activity of neurons in the brain; they are derived starting from a single-neuron 'integrate-and-fire' model. The neural continuum is spatially discretized for numerical studies, and the governing equations are modeled as a system of ordinary differential equations. In this article the recurrent neural network approach is used to solve this system of ODEs. This consists of a technique developed by combining the standard numerical method of finite-differences with the Hopfield neural network. The architecture of the net, energy function, updating equations, and algorithms are developed for the NFE model. A Hopfield Neural Network is then designed to minimize the energy function modeling the NFE. Results obtained from the Hopfield-finite-differences net show excellent performance in terms of accuracy and speed. The parallel nature of the Hopfield approaches may make them easier to implement on fast parallel computers and give them a speed advantage over the traditional methods.
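
    Independent of the Hopfield-based solver proposed in the paper, the spatial-discretization step it relies on can be illustrated directly: sample the field on a grid, approximate the integral term by a weighted sum, and integrate the resulting ODE system with a standard solver. The kernel, firing-rate function and parameters below are illustrative assumptions.

      import numpy as np
      from scipy.integrate import solve_ivp

      # discretize an Amari-type field  du/dt = -u + (w * f(u)) + I  on [-10, 10]
      x = np.linspace(-10, 10, 201)
      dx = x[1] - x[0]
      W = (np.exp(-np.abs(x[:, None] - x[None, :])) -
           0.5 * np.exp(-np.abs(x[:, None] - x[None, :]) / 2))   # Mexican-hat kernel
      I = 2.0 * np.exp(-x**2)                                     # localized input

      def f(u, threshold=0.1, steepness=10.0):
          return 1.0 / (1.0 + np.exp(-steepness * (u - threshold)))

      def rhs(t, u):
          return -u + dx * W @ f(u) + I

      sol = solve_ivp(rhs, (0.0, 20.0), np.zeros_like(x), t_eval=[20.0])
      print(sol.y[:, -1].max())      # peak field activity at t = 20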

  4. Wide-field optical mapping of neural activity and brain haemodynamics: considerations and novel approaches

    PubMed Central

    Ma, Ying; Shaik, Mohammed A.; Kozberg, Mariel G.; Thibodeaux, David N.; Zhao, Hanzhi T.; Yu, Hang

    2016-01-01

    Although modern techniques such as two-photon microscopy can now provide cellular-level three-dimensional imaging of the intact living brain, the speed and fields of view of these techniques remain limited. Conversely, two-dimensional wide-field optical mapping (WFOM), a simpler technique that uses a camera to observe large areas of the exposed cortex under visible light, can detect changes in both neural activity and haemodynamics at very high speeds. Although WFOM may not provide single-neuron or capillary-level resolution, it is an attractive and accessible approach to imaging large areas of the brain in awake, behaving mammals at speeds fast enough to observe widespread neural firing events, as well as their dynamic coupling to haemodynamics. Although such wide-field optical imaging techniques have a long history, the advent of genetically encoded fluorophores that can report neural activity with high sensitivity, as well as modern technologies such as light emitting diodes and sensitive and high-speed digital cameras have driven renewed interest in WFOM. To facilitate the wider adoption and standardization of WFOM approaches for neuroscience and neurovascular coupling research, we provide here an overview of the basic principles of WFOM, considerations for implementation of wide-field fluorescence imaging of neural activity, spectroscopic analysis and interpretation of results. This article is part of the themed issue ‘Interpreting BOLD: a dialogue between cognitive and cellular neuroscience’. PMID:27574312

  5. Approximate Inference for Time-Varying Interactions and Macroscopic Dynamics of Neural Populations

    PubMed Central

    Obermayer, Klaus

    2017-01-01

    Models from statistical physics, such as the Ising model, offer a convenient way to characterize stationary activity of neural populations. Such stationary activity of neurons may be expected for recordings from in vitro slices or anesthetized animals. However, modeling activity of cortical circuitries of awake animals has been more challenging because both spike-rates and interactions can change according to sensory stimulation, behavior, or an internal state of the brain. Previous approaches modeling the dynamics of neural interactions suffer from computational cost; therefore, their application was limited to only a dozen neurons. Here, by introducing multiple analytic approximation methods to a state-space model of neural population activity, we make it possible to estimate dynamic pairwise interactions of up to 60 neurons. More specifically, we applied the pseudolikelihood approximation to the state-space model, and combined it with the Bethe or TAP mean-field approximation to make the sequential Bayesian estimation of the model parameters possible. The large-scale analysis allows us to investigate dynamics of macroscopic properties of neural circuitries underlying stimulus processing and behavior. We show that the model accurately estimates dynamics of network properties such as sparseness, entropy, and heat capacity on simulated data, and demonstrate the utility of these measures by analyzing activity of monkey V4 neurons as well as a simulated balanced network of spiking neurons. PMID:28095421

  6. Nonlinear dynamical system approaches towards neural prosthesis

    SciTech Connect

    Torikai, Hiroyuki; Hashimoto, Sho

    2011-04-19

    An asynchronous discrete-state spiking neuron is a wired system of shift registers that can mimic the nonlinear dynamics of an ODE-based neuron model. The control parameter of the neuron is the wiring pattern among the registers, and thus such neurons are suitable for on-chip learning. In this paper an asynchronous discrete-state spiking neuron is introduced and its typical nonlinear phenomena are demonstrated. Also, a learning algorithm for a set of neurons is presented and it is demonstrated that the algorithm enables the set of neurons to reconstruct nonlinear dynamics of another set of neurons with unknown parameter values. The learning function is validated by FPGA experiments.

  7. Dynamics of gauge field inflation

    SciTech Connect

    Alexander, Stephon; Jyoti, Dhrubo; Kosowsky, Arthur; Marcianò, Antonino

    2015-05-05

    We analyze the existence and stability of dynamical attractor solutions for cosmological inflation driven by the coupling between fermions and a gauge field. Assuming a spatially homogeneous and isotropic gauge field and fermion current, the interacting fermion equation of motion reduces to that of a free fermion up to a phase shift. Consistency of the model is ensured via the Stückelberg mechanism. We prove the existence of exactly one stable solution, and demonstrate the stability numerically. Inflation arises without fine tuning, and does not require postulating any effective potential or non-standard coupling.

  8. Dynamical analysis of uncertain neural networks with multiple time delays

    NASA Astrophysics Data System (ADS)

    Arik, Sabri

    2016-02-01

    This paper investigates the robust stability problem for dynamical neural networks in the presence of time delays and norm-bounded parameter uncertainties with respect to the class of non-decreasing, non-linear activation functions. By employing the Lyapunov stability and homeomorphism mapping theorems together, a new delay-independent sufficient condition is obtained for the existence, uniqueness and global asymptotic stability of the equilibrium point for the delayed uncertain neural networks. The condition obtained for robust stability establishes a matrix-norm relationship between the network parameters of the neural system, which can be easily verified using properties of positive definite matrices. Some constructive numerical examples are presented to show the applicability of the obtained result and its advantages over previously published results.
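
    The paper's own matrix-norm criterion is not reproduced here. As an illustrative stand-in of the same flavor, the sketch below checks a classical delay-independent sufficient condition for a Hopfield-type network with one delay and slope-bounded activations; the model form, the condition L*(||A||_2 + ||B||_2) < min_i C_ii, and all numbers are assumptions used only for demonstration.

```python
import numpy as np

def delay_independent_stability_check(C, A, B, L):
    """Generic matrix-norm check (an illustrative stand-in, not the paper's criterion).

    Assumed model:  dx/dt = -C x + A f(x(t)) + B f(x(t - tau)) + u,
    with C diagonal positive and activations f slope-bounded by L.
    A classical delay-independent sufficient condition for a unique, globally
    asymptotically stable equilibrium is  L * (||A||_2 + ||B||_2) < min_i C_ii.
    """
    c_min = float(np.min(np.diag(C)))
    lhs = L * (np.linalg.norm(A, 2) + np.linalg.norm(B, 2))
    return lhs < c_min, lhs, c_min

# Hypothetical three-neuron example
C = np.diag([2.0, 2.5, 3.0])
A = np.array([[0.15, -0.06, 0.03],
              [0.00, 0.12, -0.09],
              [0.06, 0.03, 0.15]])
B = 0.2 * np.eye(3)
ok, lhs, c_min = delay_independent_stability_check(C, A, B, L=1.0)
print(f"condition satisfied: {ok}  ({lhs:.3f} < {c_min:.3f})")
```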

  9. On neural networks in identification and control of dynamic systems

    NASA Technical Reports Server (NTRS)

    Phan, Minh; Juang, Jer-Nan; Hyland, David C.

    1993-01-01

    This paper presents a discussion of the applicability of neural networks in the identification and control of dynamic systems. Emphasis is placed on understanding how neural networks handle linear systems and how the new approach is related to conventional system identification and control methods. Extensions of the approach to nonlinear systems are then made. The paper explains the fundamental concepts of neural networks in their simplest terms. Among the topics discussed are feedforward and recurrent networks in relation to the standard state-space and observer models, linear and nonlinear auto-regressive models, linear predictors, one-step-ahead control, and model reference adaptive control for linear and nonlinear systems. Numerical examples are presented to illustrate the application of these important concepts.
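
    As a point of reference for the conventional identification methods the abstract relates neural networks to, the following sketch fits a one-step-ahead ARX predictor to a simulated second-order linear plant by least squares. The plant, noise level, and regressor choice are illustrative assumptions; no neural network from the paper is implemented.

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated second-order linear plant (invented coefficients):
#   y[k] = 1.5 y[k-1] - 0.7 y[k-2] + 0.5 u[k-1] + noise
N = 500
u = rng.normal(size=N)
y = np.zeros(N)
for k in range(2, N):
    y[k] = 1.5 * y[k - 1] - 0.7 * y[k - 2] + 0.5 * u[k - 1] + 0.01 * rng.normal()

# One-step-ahead ARX predictor: regress y[k] on [y[k-1], y[k-2], u[k-1]]
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
target = y[2:]
theta, *_ = np.linalg.lstsq(Phi, target, rcond=None)
y_pred = Phi @ theta

print("estimated parameters:", np.round(theta, 3), "(true values: 1.5, -0.7, 0.5)")
print("one-step-ahead RMSE :", np.round(np.sqrt(np.mean((target - y_pred) ** 2)), 4))
```

    A feedforward or recurrent network trained for one-step-ahead prediction plays the same role as the regression above, with the linear regressor replaced by a learned nonlinear map.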

  10. Dynamic neural architecture for social knowledge retrieval.

    PubMed

    Wang, Yin; Collins, Jessica A; Koski, Jessica; Nugiel, Tehila; Metoki, Athanasia; Olson, Ingrid R

    2017-03-13

    Social behavior is often shaped by the rich storehouse of biographical information that we hold for other people. In our daily life, we rapidly and flexibly retrieve a host of biographical details about individuals in our social network, which often guide our decisions as we navigate complex social interactions. Even abstract traits associated with an individual, such as their political affiliation, can cue a rich cascade of person-specific knowledge. Here, we asked whether the anterior temporal lobe (ATL) serves as a hub for a distributed neural circuit that represents person knowledge. Fifty participants across two studies learned biographical information about fictitious people in a two-day training paradigm. On day 3, they retrieved this biographical information while undergoing an fMRI scan. A series of multivariate and connectivity analyses suggest that the ATL stores abstract person identity representations. Moreover, this region coordinates interactions with a distributed network to support the flexible retrieval of person attributes. Together, our results suggest that the ATL is a central hub for representing and retrieving person knowledge.

  11. Transient dynamics for sequence processing neural networks

    NASA Astrophysics Data System (ADS)

    Kawamura, Masaki; Okada, Masato

    2002-01-01

    An exact solution of the transient dynamics for a sequential associative memory model is discussed through both the path-integral method and the statistical neurodynamics. Although the path-integral method has the ability to give an exact solution of the transient dynamics, only stationary properties have been discussed for the sequential associative memory. We have succeeded in deriving an exact macroscopic description of the transient dynamics by analysing the correlation of crosstalk noise. Surprisingly, the order parameter equations of this exact solution are completely equivalent to those of the statistical neurodynamics, which is an approximation theory that assumes crosstalk noise to obey the Gaussian distribution. In order to examine our theoretical findings, we numerically obtain cumulants of the crosstalk noise. We verify that the third- and fourth-order cumulants are equal to zero, and that the crosstalk noise is normally distributed even in the non-retrieval case. We show that the results obtained by our theory agree with those obtained by computer simulations. We have also found that the macroscopic unstable state completely coincides with the separatrix.

  12. A mean field neural network for hierarchical module placement

    NASA Technical Reports Server (NTRS)

    Unaltuna, M. Kemal; Pitchumani, Vijay

    1992-01-01

    This paper proposes a mean field neural network for the two-dimensional module placement problem. An efficient coding scheme with only O(N log N) neurons is employed where N is the number of modules. The neurons are evolved in groups of N in log N iteration steps such that the circuit is recursively partitioned in alternating vertical and horizontal directions. In our simulations, the network was able to find optimal solutions to all test problems with up to 128 modules.

  13. Simulating dynamic plastic continuous neural networks by finite elements.

    PubMed

    Joghataie, Abdolreza; Torghabehi, Omid Oliyan

    2014-08-01

    We introduce the dynamic plastic continuous neural network (DPCNN), which is composed of neurons distributed in a nonlinear plastic medium, where the wire-like connections of conventional neural networks are replaced with the continuous medium. We use the finite element method to model the dynamic phenomenon of information processing within DPCNNs. During training, instead of weights, the properties of the continuous material at its different locations and some properties of the neurons are modified. Input and output can be vectors and/or continuous functions over lines and/or areas. Delay and feedback from neurons to themselves and from outputs occur in the DPCNNs. We model a simple form of the DPCNN in which the medium is a rectangular plate of bilinear material and the neurons continuously fire a signal that is a function of the horizontal displacement.

  14. Response of traveling waves to transient inputs in neural fields.

    PubMed

    Kilpatrick, Zachary P; Ermentrout, Bard

    2012-02-01

    We analyze the effects of transient stimulation on traveling waves in neural field equations. Neural fields are modeled as integro-differential equations whose convolution term represents the synaptic connections of a spatially extended neuronal network. The adjoint of the linearized wave equation can be used to identify how a particular input will shift the location of a traveling wave. This wave response function is analogous to the phase response curve of limit cycle oscillators. For traveling fronts in an excitatory network, the sign of the shift depends solely on the sign of the transient input. A complementary estimate of the effective shift is derived using an equation for the time-dependent speed of the perturbed front. Traveling pulses are analyzed in an asymmetric lateral inhibitory network and they can be advanced or delayed, depending on the position of spatially localized transient inputs. We also develop bounds on the amplitude of transient input necessary to terminate traveling pulses, based on the global bifurcation structure of the neural field.
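
    As a concrete (and deliberately simplified) illustration, the sketch below integrates a one-dimensional neural field with a purely excitatory kernel, prepares a near-stationary front, applies a brief localized excitatory input just ahead of the interface, and reports the front position before and after the perturbation. The kernel, threshold, and input are invented parameters, and the paper's adjoint-based wave response function is not computed.

```python
import numpy as np

# Minimal 1-D neural field:  u_t = -u + w * f(u) + I(x, t)   (illustrative parameters)
N = 401
x = np.linspace(-20.0, 20.0, N)
dx = x[1] - x[0]
dt, T = 0.01, 15.0
theta = 0.5                                   # firing threshold = half the kernel mass
w = 0.5 * np.exp(-np.abs(x)) * dx             # purely excitatory kernel, integral ~ 1

def f(u):                                     # steep sigmoid approximating a Heaviside
    return 1.0 / (1.0 + np.exp(-50.0 * (u - theta)))

u = np.where(x < 0.0, 1.0, 0.0)               # initial condition: a front at x = 0
front = []
for step in range(int(T / dt)):
    t = step * dt
    # brief, spatially localized excitatory input just ahead of the interface
    I = 2.0 * np.exp(-(x - 1.0) ** 2) if 5.0 <= t <= 6.0 else 0.0
    u += dt * (-u + np.convolve(f(u), w, mode="same") + I)
    active = np.flatnonzero(u > theta)        # track the rightmost super-threshold point
    front.append(x[active.max()] if active.size else np.nan)

print("front position just before the input:", round(front[int(4.9 / dt)], 2))
print("front position at the end of the run:", round(front[-1], 2))
```

    Because the front position is neutrally stable in a homogeneous field, the transient input leaves a permanent shift whose sign matches the sign of the input, consistent with the result quoted above for excitatory fronts.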

  15. Response of traveling waves to transient inputs in neural fields

    NASA Astrophysics Data System (ADS)

    Kilpatrick, Zachary P.; Ermentrout, Bard

    2012-02-01

    We analyze the effects of transient stimulation on traveling waves in neural field equations. Neural fields are modeled as integro-differential equations whose convolution term represents the synaptic connections of a spatially extended neuronal network. The adjoint of the linearized wave equation can be used to identify how a particular input will shift the location of a traveling wave. This wave response function is analogous to the phase response curve of limit cycle oscillators. For traveling fronts in an excitatory network, the sign of the shift depends solely on the sign of the transient input. A complementary estimate of the effective shift is derived using an equation for the time-dependent speed of the perturbed front. Traveling pulses are analyzed in an asymmetric lateral inhibitory network and they can be advanced or delayed, depending on the position of spatially localized transient inputs. We also develop bounds on the amplitude of transient input necessary to terminate traveling pulses, based on the global bifurcation structure of the neural field.

  16. Spatiotemporal Dynamics and Reliable Computations in Recurrent Spiking Neural Networks

    NASA Astrophysics Data System (ADS)

    Pyle, Ryan; Rosenbaum, Robert

    2017-01-01

    Randomly connected networks of excitatory and inhibitory spiking neurons provide a parsimonious model of neural variability, but are notoriously unreliable for performing computations. We show that this difficulty is overcome by incorporating the well-documented dependence of connection probability on distance. Spatially extended spiking networks exhibit symmetry-breaking bifurcations and generate spatiotemporal patterns that can be trained to perform dynamical computations under a reservoir computing framework.

  17. Speech Recognition Using Neural Nets and Dynamic Time Warping

    DTIC Science & Technology

    1988-12-01

    AFIT/GEO/ENG/88D-1. Thesis presented to the Faculty of the School of Engineering, Air Force Institute of Technology. Gary Dean Barmore, B.S., B.S.E.E., Capt, USAF, December 1988. Approved for public release; distribution unlimited. Only fragments of the scanned abstract are recoverable; they describe a winner selection rule in which, for each input vector in a sequence, the node whose weight vector is closest to that input vector is chosen.

  18. Generalized neural networks for spectral analysis: dynamics and Liapunov functions.

    PubMed

    Vegas, José M; Zufiria, Pedro J

    2004-03-01

    This paper analyzes the local and global behavior of several dynamical systems which generalize some artificial neural network (ANN) semilinear models originally designed for principal component analysis (PCA) in the characterization of random vectors. These systems implicitly perform the spectral analysis of correlation (i.e., symmetric positive definite) matrices. Here, the proposed generalizations cover both nonsymmetric matrices and fully nonlinear models. Local stability analysis is performed via linearization, and global behavior is analyzed by constructing several Liapunov functions.
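
    For orientation, the snippet below implements the classical Oja-type PCA dynamics that this family of models generalizes: a single linear unit whose Hebbian-plus-decay rule converges to the leading eigenvector of a symmetric positive definite covariance matrix. The nonsymmetric and fully nonlinear generalizations analyzed in the paper are not reproduced, and the data and learning rate are synthetic choices.

```python
import numpy as np

rng = np.random.default_rng(2)
# Invented correlated 3-D data whose leading principal component we want to extract
Cov = np.array([[3.0, 1.0, 0.5],
                [1.0, 2.0, 0.3],
                [0.5, 0.3, 1.0]])
X = rng.multivariate_normal(np.zeros(3), Cov, size=5000)

w = rng.normal(size=3)
w /= np.linalg.norm(w)
eta = 0.01
for xt in X:
    y = w @ xt
    w += eta * y * (xt - y * w)      # Oja's rule: Hebbian term plus implicit normalization

# Compare with the leading eigenvector of the covariance matrix
evals, evecs = np.linalg.eigh(Cov)
pc1 = evecs[:, -1]
print("alignment |cos angle| =", abs(w @ pc1) / np.linalg.norm(w))
```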

  19. Shaping the Dynamics of a Bidirectional Neural Interface

    PubMed Central

    Vato, Alessandro; Semprini, Marianna; Maggiolini, Emma; Szymanski, Francois D.; Fadiga, Luciano; Panzeri, Stefano; Mussa-Ivaldi, Ferdinando A.

    2012-01-01

    Progress in decoding neural signals has enabled the development of interfaces that translate cortical brain activities into commands for operating robotic arms and other devices. The electrical stimulation of sensory areas provides a means to create artificial sensory information about the state of a device. Taken together, neural activity recording and microstimulation techniques allow us to embed a portion of the central nervous system within a closed-loop system, whose behavior emerges from the combined dynamical properties of its neural and artificial components. In this study we asked if it is possible to concurrently regulate this bidirectional brain-machine interaction so as to shape a desired dynamical behavior of the combined system. To this end, we followed a well-known biological pathway. In vertebrates, the communications between brain and limb mechanics are mediated by the spinal cord, which combines brain instructions with sensory information and organizes coordinated patterns of muscle forces driving the limbs along dynamically stable trajectories. We report the creation and testing of the first neural interface that emulates this sensory-motor interaction. The interface organizes a bidirectional communication between sensory and motor areas of the brain of anaesthetized rats and an external dynamical object with programmable properties. The system includes (a) a motor interface decoding signals from a motor cortical area, and (b) a sensory interface encoding the state of the external object into electrical stimuli to a somatosensory area. The interactions between brain activities and the state of the external object generate a family of trajectories converging upon a selected equilibrium point from arbitrary starting locations. Thus, the bidirectional interface establishes the possibility to specify not only a particular movement trajectory but an entire family of motions, which includes the prescribed reactions to unexpected perturbations. PMID

  20. Some new results on system identification with dynamic neural networks.

    PubMed

    Yu, W; Li, X

    2001-01-01

    Nonlinear system online identification via dynamic neural networks is studied in this paper. The main contribution of the paper is that the passivity approach is applied to establish several new stability properties of neuro-identification. The conditions for passivity, stability, asymptotic stability, and input-to-state stability are established in certain senses. We conclude that the gradient descent algorithm for weight adjustment is stable in an L(infinity) sense and robust to any bounded uncertainties.

  1. Dynamic neural activity during stress signals resilient coping

    PubMed Central

    Sinha, Rajita; Lacadie, Cheryl M.; Constable, R. Todd; Seo, Dongju

    2016-01-01

    Active coping underlies a healthy stress response, but neural processes supporting such resilient coping are not well-known. Using a brief, sustained exposure paradigm contrasting highly stressful, threatening, and violent stimuli versus nonaversive neutral visual stimuli in a functional magnetic resonance imaging (fMRI) study, we show significant subjective, physiologic, and endocrine increases and temporally related dynamically distinct patterns of neural activation in brain circuits underlying the stress response. First, stress-specific sustained increases in the amygdala, striatum, hypothalamus, midbrain, right insula, and right dorsolateral prefrontal cortex (DLPFC) regions supported the stress processing and reactivity circuit. Second, dynamic neural activation during stress versus neutral runs, showing early increases followed by later reduced activation in the ventrolateral prefrontal cortex (VLPFC), dorsal anterior cingulate cortex (dACC), left DLPFC, hippocampus, and left insula, suggested a stress adaptation response network. Finally, dynamic stress-specific mobilization of the ventromedial prefrontal cortex (VmPFC), marked by initial hypoactivity followed by increased VmPFC activation, pointed to the VmPFC as a key locus of the emotional and behavioral control network. Consistent with this finding, greater neural flexibility signals in the VmPFC during stress correlated with active coping ratings whereas lower dynamic activity in the VmPFC also predicted a higher level of maladaptive coping behaviors in real life, including binge alcohol intake, emotional eating, and frequency of arguments and fights. These findings demonstrate acute functional neuroplasticity during stress, with distinct and separable brain networks that underlie critical components of the stress response, and a specific role for VmPFC neuroflexibility in stress-resilient coping. PMID:27432990

  2. Deep Dynamic Neural Networks for Multimodal Gesture Segmentation and Recognition.

    PubMed

    Wu, Di; Pigou, Lionel; Kindermans, Pieter-Jan; Le, Nam Do-Hoang; Shao, Ling; Dambre, Joni; Odobez, Jean-Marc

    2016-08-01

    This paper describes a novel method called Deep Dynamic Neural Networks (DDNN) for multimodal gesture recognition. A semi-supervised hierarchical dynamic framework based on a Hidden Markov Model (HMM) is proposed for simultaneous gesture segmentation and recognition, where skeleton joint information, depth, and RGB images are the multimodal input observations. Unlike most traditional approaches that rely on the construction of complex handcrafted features, our approach learns high-level spatio-temporal representations using deep neural networks suited to the input modality: a Gaussian-Bernoulli Deep Belief Network (DBN) to handle skeletal dynamics, and a 3D Convolutional Neural Network (3DCNN) to manage and fuse batches of depth and RGB images. This is achieved through the modeling and learning of the emission probabilities of the HMM required to infer the gesture sequence. This purely data-driven approach achieves a Jaccard index score of 0.81 in the ChaLearn LAP gesture spotting challenge. The performance is on par with a variety of state-of-the-art hand-tuned feature-based approaches and other learning-based methods, therefore opening the door to the use of deep learning techniques in order to further explore multimodal time series data.

  3. Perspective: network-guided pattern formation of neural dynamics.

    PubMed

    Hütt, Marc-Thorsten; Kaiser, Marcus; Hilgetag, Claus C

    2014-10-05

    The understanding of neural activity patterns is fundamentally linked to an understanding of how the brain's network architecture shapes dynamical processes. Established approaches rely mostly on deviations of a given network from certain classes of random graphs. Hypotheses about the supposed role of prominent topological features (for instance, the roles of modularity, network motifs or hierarchical network organization) are derived from these deviations. An alternative strategy could be to study deviations of network architectures from regular graphs (rings and lattices) and consider the implications of such deviations for self-organized dynamic patterns on the network. Following this strategy, we draw on the theory of spatio-temporal pattern formation and propose a novel perspective for analysing dynamics on networks, by evaluating how the self-organized dynamics are confined by network architecture to a small set of permissible collective states. In particular, we discuss the role of prominent topological features of brain connectivity, such as hubs, modules and hierarchy, in shaping activity patterns. We illustrate the notion of network-guided pattern formation with numerical simulations and outline how it can facilitate the understanding of neural dynamics.

  4. Predicting physical time series using dynamic ridge polynomial neural networks.

    PubMed

    Al-Jumeily, Dhiya; Ghazali, Rozaida; Hussain, Abir

    2014-01-01

    Forecasting naturally occurring phenomena is a common problem in many domains of science, and this has been addressed and investigated by many scientists. The importance of time series prediction stems from the fact that it has a wide range of applications, including control systems, engineering processes, environmental systems and economics. From the knowledge of some aspects of the previous behaviour of the system, the aim of the prediction process is to determine or predict its future behaviour. In this paper, we consider a novel application of a higher order polynomial neural network architecture called the Dynamic Ridge Polynomial Neural Network, which combines the properties of higher order and recurrent neural networks, to the prediction of physical time series. In this study, four types of signals have been used: the Lorenz attractor, the mean value of the AE index, the sunspot number, and heat wave temperature. The simulation results showed good improvements in terms of signal-to-noise ratio in comparison to a number of benchmarked higher order and feedforward neural networks.

  5. Predicting Physical Time Series Using Dynamic Ridge Polynomial Neural Networks

    PubMed Central

    Al-Jumeily, Dhiya; Ghazali, Rozaida; Hussain, Abir

    2014-01-01

    Forecasting naturally occurring phenomena is a common problem in many domains of science, and this has been addressed and investigated by many scientists. The importance of time series prediction stems from the fact that it has a wide range of applications, including control systems, engineering processes, environmental systems and economics. From the knowledge of some aspects of the previous behaviour of the system, the aim of the prediction process is to determine or predict its future behaviour. In this paper, we consider a novel application of a higher order polynomial neural network architecture called the Dynamic Ridge Polynomial Neural Network, which combines the properties of higher order and recurrent neural networks, to the prediction of physical time series. In this study, four types of signals have been used: the Lorenz attractor, the mean value of the AE index, the sunspot number, and heat wave temperature. The simulation results showed good improvements in terms of signal-to-noise ratio in comparison to a number of benchmarked higher order and feedforward neural networks. PMID:25157950

  6. Adaptive pedestrian detection using convolutional neural network with dynamically adjusted classifier

    NASA Astrophysics Data System (ADS)

    Tang, Song; Ye, Mao; Zhu, Ce; Liu, Yiguang

    2017-01-01

    How to transfer a trained detector to target scenarios has long been an important topic in the field of computer vision. Unfortunately, most existing transfer methods need to keep source samples or label target samples in the detection phase. Therefore, they are difficult to apply in real applications. To address this problem, we propose a framework that consists of a controlled convolutional neural network (CCNN) and a modulating neural network (MNN). In a CCNN, the parameters of the last layer, i.e., the classifier, are dynamically adjusted by an MNN. For each target sample, the CCNN adaptively generates a proprietary classifier. Our contributions include (1) the first detector-based unsupervised transfer method that is well suited to real applications and (2) a new scheme for dynamically adjusting the classifier, in which a new objective function is introduced. Experimental results confirm that our method can achieve state-of-the-art results on two pedestrian datasets.

  7. Bio-Inspired Neural Model for Learning Dynamic Models

    NASA Technical Reports Server (NTRS)

    Duong, Tuan; Duong, Vu; Suri, Ronald

    2009-01-01

    A neural-network mathematical model that, relative to prior such models, places greater emphasis on some of the temporal aspects of real neural physical processes, has been proposed as a basis for massively parallel, distributed algorithms that learn dynamic models of possibly complex external processes by means of learning rules that are local in space and time. The algorithms could be made to perform such functions as recognition and prediction of words in speech and of objects depicted in video images. The approach embodied in this model is said to be "hardware-friendly" in the following sense: The algorithms would be amenable to execution by special-purpose computers implemented as very-large-scale integrated (VLSI) circuits that would operate at relatively high speeds and low power demands.

  8. Dynamical criticality in the collective activity of a neural population

    NASA Astrophysics Data System (ADS)

    Mora, Thierry

    The past decade has seen a wealth of physiological data suggesting that neural networks may behave like critical branching processes. Concurrently, the collective activity of neurons has been studied using explicit mappings to classic statistical mechanics models such as disordered Ising models, allowing for the study of their thermodynamics, but these efforts have ignored the dynamical nature of neural activity. I will show how to reconcile these two approaches by learning effective statistical mechanics models of the full history of the collective activity of a neuron population directly from physiological data, treating time as an additional dimension. Applying this technique to multi-electrode recordings from retinal ganglion cells, and studying the thermodynamics of the inferred model, reveals a peak in specific heat reminiscent of a second-order phase transition.
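
    The specific heat mentioned above is obtained from energy fluctuations of the inferred model. As a minimal stand-alone illustration, the sketch below computes the heat capacity of a small pairwise (Ising-like) model by exact enumeration over a temperature sweep; the couplings are random stand-ins, and the inference from multi-electrode data and the treatment of time as an extra dimension are not reproduced.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(7)
N = 10                                               # small enough for exact enumeration
J = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
J = np.triu(J, 1)
J = J + J.T                                          # random symmetric couplings, zero diagonal
h = rng.normal(0.0, 0.1, size=N)

states = np.array(list(product([-1, 1], repeat=N)))  # all 2^N spin configurations
E = -0.5 * np.einsum('si,ij,sj->s', states, J, states) - states @ h

for T in [0.5, 1.0, 1.5, 2.0, 3.0]:
    w = np.exp(-E / T)
    p = w / w.sum()                                  # Boltzmann distribution at temperature T
    mean_E = p @ E
    var_E = p @ (E ** 2) - mean_E ** 2
    C = var_E / T ** 2                               # heat capacity from energy fluctuations
    print(f"T = {T:3.1f}   heat capacity C = {C:.3f}")
```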

  9. A neural network approach to dynamic task assignment of multirobots.

    PubMed

    Zhu, Anmin; Yang, Simon X

    2006-09-01

    In this paper, a neural network approach to task assignment, based on a self-organizing map (SOM), is proposed for a multirobot system in dynamic environments subject to uncertainties. It is capable of dynamically controlling a group of mobile robots to achieve multiple tasks at different locations, so that the desired number of robots will arrive at every target location from arbitrary initial locations. In the proposed approach, the robot motion planning is integrated with the task assignment, thus the robots start to move once the overall task is given. The robot navigation can be dynamically adjusted to guarantee that each target location has the desired number of robots, even under uncertainties such as when some robots break down. The proposed approach is capable of dealing with changing environments. The effectiveness and efficiency of the proposed approach are demonstrated by simulation studies.
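
    As a stripped-down illustration of the self-organizing map idea behind this approach, the sketch below treats robot positions as SOM weight vectors and repeatedly moves the winning robot, together with its neighbors in a simplified one-dimensional index topology, toward each target. Workload constraints, robot dynamics, breakdown handling, and the paper's actual neighborhood and navigation rules are omitted; all coordinates and learning parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
robots = rng.uniform(0, 10, size=(6, 2))           # initial robot positions act as SOM weights
targets = np.array([[2.0, 8.0], [8.0, 2.0], [5.0, 5.0]])

eta, sigma = 0.5, 1.0
for epoch in range(200):
    for tgt in targets:
        d = np.linalg.norm(robots - tgt, axis=1)
        winner = np.argmin(d)                       # closest robot wins this target
        # neighborhood function over robot indices (simplified 1-D topology)
        hfun = np.exp(-((np.arange(len(robots)) - winner) ** 2) / (2 * sigma ** 2))
        robots += eta * hfun[:, None] * (tgt - robots)   # winner and neighbors move toward target
    eta *= 0.98                                     # shrink learning rate and neighborhood
    sigma *= 0.98

for tgt in targets:
    i = np.argmin(np.linalg.norm(robots - tgt, axis=1))
    print(f"target {tgt} <- robot {i} at {np.round(robots[i], 2)}")
```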

  10. Renormalization of Collective Modes in Large-Scale Neural Dynamics

    NASA Astrophysics Data System (ADS)

    Moirogiannis, Dimitrios; Piro, Oreste; Magnasco, Marcelo O.

    2017-03-01

    The bulk of studies of coupled oscillators use, as is appropriate in Physics, a global coupling constant controlling all individual interactions. However, because as the coupling is increased, the number of relevant degrees of freedom also increases, this setting conflates the strength of the coupling with the effective dimensionality of the resulting dynamics. We propose a coupling more appropriate to neural circuitry, where synaptic strengths are under biological, activity-dependent control and where the coupling strength and the dimensionality can be controlled separately. Here we study a set of N→ ∞ strongly- and nonsymmetrically-coupled, dissipative, powered, rotational dynamical systems, and derive the equations of motion of the reduced system for dimensions 2 and 4. Our setting highlights the statistical structure of the eigenvectors of the connectivity matrix as the fundamental determinant of collective behavior, inheriting from this structure symmetries and singularities absent from the original microscopic dynamics.

  11. The Hamiltonian Brain: Efficient Probabilistic Inference with Excitatory-Inhibitory Neural Circuit Dynamics

    PubMed Central

    Lengyel, Máté

    2016-01-01

    Probabilistic inference offers a principled framework for understanding both behaviour and cortical computation. However, two basic and ubiquitous properties of cortical responses seem difficult to reconcile with probabilistic inference: neural activity displays prominent oscillations in response to constant input, and large transient changes in response to stimulus onset. Indeed, cortical models of probabilistic inference have typically either concentrated on tuning curve or receptive field properties and remained agnostic as to the underlying circuit dynamics, or had simplistic dynamics that gave neither oscillations nor transients. Here we show that these dynamical behaviours may in fact be understood as hallmarks of the specific representation and algorithm that the cortex employs to perform probabilistic inference. We demonstrate that a particular family of probabilistic inference algorithms, Hamiltonian Monte Carlo (HMC), naturally maps onto the dynamics of excitatory-inhibitory neural networks. Specifically, we constructed a model of an excitatory-inhibitory circuit in primary visual cortex that performed HMC inference, and thus inherently gave rise to oscillations and transients. These oscillations were not mere epiphenomena but served an important functional role: speeding up inference by rapidly spanning a large volume of state space. Inference thus became an order of magnitude more efficient than in a non-oscillatory variant of the model. In addition, the network matched two specific properties of observed neural dynamics that would otherwise be difficult to account for using probabilistic inference. First, the frequency of oscillations as well as the magnitude of transients increased with the contrast of the image stimulus. Second, excitation and inhibition were balanced, and inhibition lagged excitation. These results suggest a new functional role for the separation of cortical populations into excitatory and inhibitory neurons, and for the neural
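
    For readers unfamiliar with the sampler itself, the following is a minimal stand-alone Hamiltonian Monte Carlo sketch targeting a toy two-dimensional Gaussian. It shows the momentum and leapfrog mechanics that the paper maps onto excitatory-inhibitory dynamics; the cortical circuit model, the oscillation analysis, and the V1 results are not reproduced, and the target distribution is an assumption for illustration.

```python
import numpy as np

# Target: zero-mean bivariate Gaussian with covariance S (toy stand-in for a posterior)
S = np.array([[1.0, 0.8], [0.8, 1.0]])
S_inv = np.linalg.inv(S)

def grad_U(q):                                   # gradient of potential U(q) = 0.5 q^T S^{-1} q
    return S_inv @ q

def hmc_step(q, rng, eps=0.1, n_leap=20):
    p = rng.normal(size=q.shape)                 # resample auxiliary momentum
    q_new, p_new = q.copy(), p.copy()
    p_new -= 0.5 * eps * grad_U(q_new)           # leapfrog integration
    for _ in range(n_leap - 1):
        q_new += eps * p_new
        p_new -= eps * grad_U(q_new)
    q_new += eps * p_new
    p_new -= 0.5 * eps * grad_U(q_new)
    # Metropolis accept/reject on the Hamiltonian
    H_old = 0.5 * q @ S_inv @ q + 0.5 * p @ p
    H_new = 0.5 * q_new @ S_inv @ q_new + 0.5 * p_new @ p_new
    return q_new if rng.random() < np.exp(H_old - H_new) else q

rng = np.random.default_rng(4)
q, samples = np.zeros(2), []
for _ in range(5000):
    q = hmc_step(q, rng)
    samples.append(q)
print("sample covariance:\n", np.round(np.cov(np.array(samples).T), 2))
```

    The auxiliary momentum is what lets each proposal traverse a large region of state space before the accept/reject step, the same property the abstract links to oscillations speeding up inference.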

  12. Endothelial cells regulate neural crest and second heart field morphogenesis

    PubMed Central

    Milgrom-Hoffman, Michal; Michailovici, Inbal; Ferrara, Napoleone; Zelzer, Elazar; Tzahor, Eldad

    2014-01-01

    ABSTRACT Cardiac and craniofacial developmental programs are intricately linked during early embryogenesis, which is also reflected by a high frequency of birth defects affecting both regions. The molecular nature of the crosstalk between mesoderm and neural crest progenitors and the involvement of endothelial cells within the cardio–craniofacial field are largely unclear. Here we show in the mouse that genetic ablation of vascular endothelial growth factor receptor 2 (Flk1) in the mesoderm results in early embryonic lethality, severe deformation of the cardio–craniofacial field, lack of endothelial cells and a poorly formed vascular system. We provide evidence that endothelial cells are required for migration and survival of cranial neural crest cells and consequently for the deployment of second heart field progenitors into the cardiac outflow tract. Insights into the molecular mechanisms reveal marked reduction in Transforming growth factor beta 1 (Tgfb1) along with changes in the extracellular matrix (ECM) composition. Our collective findings in both mouse and avian models suggest that endothelial cells coordinate cardio–craniofacial morphogenesis, in part via a conserved signaling circuit regulating ECM remodeling by Tgfb1. PMID:24996922

  13. Endothelial cells regulate neural crest and second heart field morphogenesis.

    PubMed

    Milgrom-Hoffman, Michal; Michailovici, Inbal; Ferrara, Napoleone; Zelzer, Elazar; Tzahor, Eldad

    2014-07-04

    Cardiac and craniofacial developmental programs are intricately linked during early embryogenesis, which is also reflected by a high frequency of birth defects affecting both regions. The molecular nature of the crosstalk between mesoderm and neural crest progenitors and the involvement of endothelial cells within the cardio-craniofacial field are largely unclear. Here we show in the mouse that genetic ablation of vascular endothelial growth factor receptor 2 (Flk1) in the mesoderm results in early embryonic lethality, severe deformation of the cardio-craniofacial field, lack of endothelial cells and a poorly formed vascular system. We provide evidence that endothelial cells are required for migration and survival of cranial neural crest cells and consequently for the deployment of second heart field progenitors into the cardiac outflow tract. Insights into the molecular mechanisms reveal marked reduction in Transforming growth factor beta 1 (Tgfb1) along with changes in the extracellular matrix (ECM) composition. Our collective findings in both mouse and avian models suggest that endothelial cells coordinate cardio-craniofacial morphogenesis, in part via a conserved signaling circuit regulating ECM remodeling by Tgfb1.

  14. dNSP: a biologically inspired dynamic Neural network approach to Signal Processing.

    PubMed

    Cano-Izquierdo, José Manuel; Ibarrola, Julio; Pinzolas, Miguel; Almonacid, Miguel

    2008-09-01

    The arriving order of data is one of the intrinsic properties of a signal. Therefore, techniques dealing with this temporal relation are required for identification and signal processing tasks. To classify a signal according to its temporal characteristics, it would be useful to find a feature vector in which the temporal attributes are embedded. The correlation and power density spectrum functions are suitable tools for this purpose, and they are usually defined with a statistical formulation. On the other hand, biology offers numerous processes in which signals are processed to give a feature vector; for example, the processing of sound by the auditory system. In this work, the dNSP (dynamic Neural Signal Processing) architecture is proposed. This architecture allows a time-varying signal to be represented by a spatial (thus static) vector. Inspired by the aforementioned biological processes, the dNSP performs frequency decomposition using an analog parallel algorithm carried out by simple processing units. The architecture has been developed under the paradigm of a multilayer neural network, where the different layers are composed of units whose activation functions have been extracted from the theory of neural dynamics [Grossberg, S. (1988). Nonlinear neural networks: principles, mechanisms and architectures. Neural Networks, 1, 17-61]. A theoretical study of the behavior of the dynamic equations of the units and their relationship with some statistical functions establishes a parallelism between the unit activations and the correlation and power density spectrum functions. To test the capabilities of the proposed approach, several testbeds have been employed, such as the frequency analysis of mathematical functions. As a possible application of the architecture, a highly interesting problem in the field of automatic control is addressed: recognizing the operating state of a controlled DC motor.

  15. Neural Population Dynamics during Reaching Are Better Explained by a Dynamical System than Representational Tuning

    PubMed Central

    Dann, Benjamin

    2016-01-01

    Recent models of movement generation in motor cortex have sought to explain neural activity not as a function of movement parameters, known as representational models, but as a dynamical system acting at the level of the population. Despite evidence supporting this framework, the evaluation of representational models and their integration with dynamical systems is incomplete in the literature. Using a representational velocity-tuning based simulation of center-out reaching, we show that incorporating variable latency offsets between neural activity and kinematics is sufficient to generate rotational dynamics at the level of neural populations, a phenomenon observed in motor cortex. However, we developed a covariance-matched permutation test (CMPT) that reassigns neural data between task conditions independently for each neuron while maintaining overall neuron-to-neuron relationships, revealing that rotations based on the representational model did not uniquely depend on the underlying condition structure. In contrast, rotations based on either a dynamical model or motor cortex data depend on this relationship, providing evidence that the dynamical model more readily explains motor cortex activity. Importantly, implementing a recurrent neural network we demonstrate that both representational tuning properties and rotational dynamics emerge, providing evidence that a dynamical system can reproduce previous findings of representational tuning. Finally, using motor cortex data in combination with the CMPT, we show that results based on small numbers of neurons or conditions should be interpreted cautiously, potentially informing future experimental design. Together, our findings reinforce the view that representational models lack the explanatory power to describe complex aspects of single neuron and population level activity. PMID:27814352

  16. Forecasting financial asset processes: stochastic dynamics via learning neural networks.

    PubMed

    Giebel, S; Rainer, M

    2010-01-01

    Models for financial asset dynamics usually take into account their inherent unpredictable nature by including a suitable stochastic component in their process. Unknown (forward) values of financial assets (at a given time in the future) are usually estimated as expectations of the stochastic asset under a suitable risk-neutral measure. This estimation requires the stochastic model to be calibrated to some history of sufficient length in the past. Apart from inherent limitations due to the stochastic nature of the process, the predictive power is also limited by the simplifying assumptions of the common calibration methods, such as maximum likelihood estimation and regression methods, often performed without weights on the historic time series, or with static weights only. Here we propose a novel method of "intelligent" calibration, using learning neural networks in order to dynamically adapt the parameters of the stochastic model. Hence we have a stochastic process with time-dependent parameters, the dynamics of the parameters being themselves learned continuously by a neural network. Backpropagation in training the previous weights is limited to a certain memory length (in the examples we consider the 10 previous business days), which is similar to the maximal time lag of autoregressive processes. We demonstrate the learning efficiency of the new algorithm by tracking next-day forecasts for the EUR-TRY and EUR-HUF exchange rates.

  17. Autonomic neural control of dynamic cerebral autoregulation in humans

    NASA Technical Reports Server (NTRS)

    Zhang, Rong; Zuckerman, Julie H.; Iwasaki, Kenichi; Wilson, Thad E.; Crandall, Craig G.; Levine, Benjamin D.

    2002-01-01

    BACKGROUND: The purpose of the present study was to determine the role of autonomic neural control of dynamic cerebral autoregulation in humans. METHODS AND RESULTS: We measured arterial pressure and cerebral blood flow (CBF) velocity in 12 healthy subjects (aged 29+/-6 years) before and after ganglion blockade with trimethaphan. CBF velocity was measured in the middle cerebral artery using transcranial Doppler. The magnitude of spontaneous changes in mean blood pressure and CBF velocity were quantified by spectral analysis. The transfer function gain, phase, and coherence between these variables were estimated to quantify dynamic cerebral autoregulation. After ganglion blockade, systolic and pulse pressure decreased significantly by 13% and 26%, respectively. CBF velocity decreased by 6% (P<0.05). In the very low frequency range (0.02 to 0.07 Hz), mean blood pressure variability decreased significantly (by 82%), while CBF velocity variability persisted. Thus, transfer function gain increased by 81%. In addition, the phase lead of CBF velocity to arterial pressure diminished. These changes in transfer function gain and phase persisted despite restoration of arterial pressure by infusion of phenylephrine and normalization of mean blood pressure variability by oscillatory lower body negative pressure. CONCLUSIONS: These data suggest that dynamic cerebral autoregulation is altered by ganglion blockade. We speculate that autonomic neural control of the cerebral circulation is tonically active and likely plays a significant role in the regulation of beat-to-beat CBF in humans.
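
    The transfer function analysis referred to above can be outlined with standard spectral estimators. The sketch below computes gain, phase, and coherence between synthetic mean arterial pressure and CBF velocity series and averages them over the very-low-frequency band; the signals, sampling rate, and windowing are illustrative assumptions, not the study's recordings or exact pipeline.

```python
import numpy as np
from scipy.signal import welch, csd

fs = 1.0                       # beat-to-beat series resampled at 1 Hz (illustrative)
rng = np.random.default_rng(5)
t = np.arange(1024) / fs
abp = 90 + 5 * np.sin(2 * np.pi * 0.05 * t) + rng.normal(0, 1, t.size)          # mean ABP (mmHg)
cbfv = 60 + 2 * np.sin(2 * np.pi * 0.05 * t + 0.8) + rng.normal(0, 1, t.size)   # CBF velocity (cm/s)

nperseg = 256
f, Pxx = welch(abp, fs=fs, nperseg=nperseg)
_, Pyy = welch(cbfv, fs=fs, nperseg=nperseg)
_, Pxy = csd(abp, cbfv, fs=fs, nperseg=nperseg)

gain = np.abs(Pxy) / Pxx                 # cm/s per mmHg
phase = np.angle(Pxy)                    # radians; sign convention depends on the estimator
coherence = np.abs(Pxy) ** 2 / (Pxx * Pyy)

band = (f >= 0.02) & (f <= 0.07)         # very-low-frequency range used in the study
print("VLF gain     :", np.round(gain[band].mean(), 3))
print("VLF phase    :", np.round(phase[band].mean(), 3))
print("VLF coherence:", np.round(coherence[band].mean(), 3))
```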

  18. Neural dynamics of reward probability coding: a Magnetoencephalographic study in humans.

    PubMed

    Thomas, Julie; Vanni-Mercier, Giovanna; Dreher, Jean-Claude

    2013-01-01

    Prediction of future rewards and discrepancy between actual and expected outcomes (prediction error) are crucial signals for adaptive behavior. In humans, a number of fMRI studies demonstrated that reward probability modulates these two signals in a large brain network. Yet, the spatio-temporal dynamics underlying the neural coding of reward probability remain unknown. Here, using magnetoencephalography, we investigated the neural dynamics of prediction and reward prediction error computations while subjects learned to associate slot machine cues with monetary rewards of different probabilities. We showed that event-related magnetic fields (ERFs) arising from the visual cortex coded the expected reward value 155 ms after the cue, demonstrating that reward value signals emerge early in the visual stream. Moreover, a prediction error was reflected in an ERF peaking 300 ms after the rewarded outcome and showing decreasing amplitude with higher reward probability. This prediction error signal was generated in a network including the anterior and posterior cingulate cortex. These findings pinpoint the spatio-temporal characteristics underlying reward probability coding. Together, our results provide insights into the neural dynamics underlying the ability to learn probabilistic stimuli-reward contingencies.

  19. Memory formation: from network structure to neural dynamics.

    PubMed

    Feldt, Sarah; Wang, Jane X; Hetrick, Vaughn L; Berke, Joshua D; Zochowski, Michal

    2010-05-13

    Understanding the neural correlates of brain function is an extremely challenging task, since any cognitive process is distributed over a complex and evolving network of neurons that comprise the brain. In order to quantify observed changes in neuronal dynamics during hippocampal memory formation, we present metrics designed to detect directional interactions and the formation of functional neuronal ensembles. We apply these metrics to both experimental and model-derived data in an attempt to link anatomical network changes with observed changes in neuronal dynamics during hippocampal memory formation processes. We show that the developed model provides a consistent explanation of the anatomical network modifications that underlie the activity changes observed in the experimental data.

  20. Derivation of a neural field model from a network of theta neurons.

    PubMed

    Laing, Carlo R

    2014-07-01

    Neural field models are used to study macroscopic spatiotemporal patterns in the cortex. Their derivation from networks of model neurons normally involves a number of assumptions, which may not be correct. Here we present an exact derivation of a neural field model from an infinite network of theta neurons, the canonical form of a type I neuron. We demonstrate the existence of a "bump" solution in both a discrete network of neurons and in the corresponding neural field model.
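
    The building block of this derivation is the theta neuron. As a self-contained check of its canonical type I behavior, the sketch below integrates a single theta neuron at several constant drives and compares the simulated firing rate with the known closed-form rate sqrt(I)/pi; the drive values are arbitrary, and the network-to-field derivation and the bump analysis are not reproduced.

```python
import numpy as np

def theta_neuron_rate(I, T_sim=200.0, dt=1e-3):
    """Simulate one theta neuron, dtheta/dt = 1 - cos(theta) + (1 + cos(theta)) * I,
    and return its firing rate (a spike is a crossing of theta = pi)."""
    theta, spikes = -np.pi, 0
    for _ in range(int(T_sim / dt)):
        theta_new = theta + dt * (1 - np.cos(theta) + (1 + np.cos(theta)) * I)
        if theta < np.pi <= theta_new:                          # crossed pi: one spike
            spikes += 1
        theta = np.mod(theta_new + np.pi, 2 * np.pi) - np.pi    # keep theta in (-pi, pi]
    return spikes / T_sim

for I in [0.05, 0.1, 0.2, 0.4]:
    print(f"I = {I:4.2f}  simulated rate = {theta_neuron_rate(I):.3f}  "
          f"closed form sqrt(I)/pi = {np.sqrt(I) / np.pi:.3f}")
```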

  1. Neural network potentials for dynamics and thermodynamics of gold nanoparticles

    NASA Astrophysics Data System (ADS)

    Chiriki, Siva; Jindal, Shweta; Bulusu, Satya S.

    2017-02-01

    For understanding the dynamical and thermodynamical properties of metal nanoparticles, one has to go beyond static and structural predictions of a nanoparticle. Accurate description of dynamical properties may be computationally intensive depending on the size of the nanoparticle. Herein, we demonstrate the use of atomistic neural network potentials, obtained by fitting quantum mechanical data, for extensive molecular dynamics simulations of gold nanoparticles. The fitted potential was tested by performing global optimizations of size-selected gold nanoparticles (Au_n, 17 ≤ n ≤ 58). We performed molecular dynamics simulations in canonical (NVT) and microcanonical (NVE) ensembles on Au17, Au34, and Au58 for a total simulation time of around 3 ns for each nanoparticle. Our study based on both NVT and NVE ensembles indicates that there is a dynamical coexistence of solid-like and liquid-like phases near the melting transition. We estimate the probability at finite temperatures for a set of isomers lying below 0.5 eV from the global minimum structure. In the case of Au17 and Au58, the properties can be estimated using the global minimum structure at room temperature, while for Au34, the global minimum structure is not a dominant structure even at low temperatures.

  2. Optimal path-finding through mental exploration based on neural energy field gradients.

    PubMed

    Wang, Yihong; Wang, Rubin; Zhu, Yating

    2017-02-01

    Rodents can accomplish self-locating and path-finding tasks by forming a cognitive map in the hippocampus that represents the environment. In the classical model of the cognitive map, the system (an artificial animal) needs large amounts of physical exploration of the spatial environment to solve path-finding problems, which costs too much time and energy. Although Hopfield's mental exploration model makes up for this deficiency, the resulting path is still not efficient enough. Moreover, his model mainly focused on the artificial neural network, and its clear physiological meaning was not addressed. In this work, based on the concept of mental exploration, neural energy coding theory is applied to a novel computational model to solve the path-finding problem. An energy field is constructed on the basis of the firing power of place cell clusters, and the energy field gradient can be used during mental exploration to solve path-finding problems. The study shows that the new mental exploration model can efficiently find the optimal path and presents the learning process with biophysical meaning as well. We also analyzed the parameters of the model that affect path efficiency. This new idea verifies the importance of place cells and synapses in spatial memory and shows that energy coding is effective for studying cognitive activities. This may provide a theoretical basis for the neural dynamics mechanism of spatial memory.

  3. Dynamic analysis of a general class of winner-take-all competitive neural networks.

    PubMed

    Fang, Yuguang; Cohen, Michael A; Kincaid, Thomas G

    2010-05-01

    This paper studies a general class of dynamical neural networks with lateral inhibition, exhibiting winner-take-all (WTA) behavior. These networks are motivated by a metal-oxide-semiconductor field-effect transistor (MOSFET) implementation of neural networks, in which mutual competition plays a very important role. We show that for a fairly general class of competitive neural networks, WTA behavior exists. Sufficient conditions for the network to have a WTA equilibrium are obtained, and rigorous convergence analysis is carried out. The conditions for the network to have the WTA behavior obtained in this paper provide design guidelines for the network implementation and fabrication. We also demonstrate that whenever the network gets into the WTA region, it will stay in that region and settle down exponentially fast to the WTA point. This provides a way to speed up decision making: as soon as the network enters that region, the winner can be declared. Finally, we show that this WTA neural network has a self-resetting property, and a resetting principle is proposed.
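
    As a toy illustration of lateral-inhibition WTA dynamics (not the MOSFET model or the convergence analysis of the paper), the sketch below integrates a small additive network in which each unit weakly excites itself and inhibits all others. With the invented parameters used here, the unit receiving the largest external input suppresses its competitors.

```python
import numpy as np

def wta(b, a_self=1.2, c_inh=1.0, dt=0.01, T=50.0):
    """Simple additive WTA network with lateral inhibition (illustrative parameters only).

    du_i/dt = -u_i + a_self * g(u_i) - c_inh * sum_{j != i} g(u_j) + b_i,
    with g(u) = clip(u, 0, 1).
    """
    u = np.zeros(len(b))
    for _ in range(int(T / dt)):
        g = np.clip(u, 0.0, 1.0)
        u += dt * (-u + a_self * g - c_inh * (g.sum() - g) + b)
    return u

b = np.array([0.2, 0.6, 0.3, 0.4])     # external inputs; unit 1 has the largest
u_final = wta(b)
print("final activities:", np.round(u_final, 2))
print("winner:", int(np.argmax(u_final)), "(expected 1)")
```

    The idea of declaring the winner as soon as the trajectory enters the WTA region corresponds, in this toy setting, to stopping the integration once only one unit remains above zero.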

  4. Sensorimotor learning biases choice behavior: a learning neural field model for decision making.

    PubMed

    Klaes, Christian; Schneegans, Sebastian; Schöner, Gregor; Gail, Alexander

    2012-01-01

    According to a prominent view of sensorimotor processing in primates, selection and specification of possible actions are not sequential operations. Rather, a decision for an action emerges from competition between different movement plans, which are specified and selected in parallel. For action choices which are based on ambiguous sensory input, the frontoparietal sensorimotor areas are considered part of the common underlying neural substrate for selection and specification of action. These areas have been shown capable of encoding alternative spatial motor goals in parallel during movement planning, and show signatures of competitive value-based selection among these goals. Since the same network is also involved in learning sensorimotor associations, competitive action selection (decision making) should not only be driven by the sensory evidence and expected reward in favor of either action, but also by the subject's learning history of different sensorimotor associations. Previous computational models of competitive neural decision making used predefined associations between sensory input and corresponding motor output. Such hard-wiring does not allow modeling of how decisions are influenced by sensorimotor learning or by changing reward contingencies. We present a dynamic neural field model which learns arbitrary sensorimotor associations with a reward-driven Hebbian learning algorithm. We show that the model accurately simulates the dynamics of action selection with different reward contingencies, as observed in monkey cortical recordings, and that it correctly predicted the pattern of choice errors in a control experiment. With our adaptive model we demonstrate how network plasticity, which is required for association learning and adaptation to new reward contingencies, can influence choice behavior. The field model provides an integrated and dynamic account for the operations of sensorimotor integration, working memory and action selection required for

  5. Reservoir Computing Properties of Neural Dynamics in Prefrontal Cortex

    PubMed Central

    Procyk, Emmanuel; Dominey, Peter Ford

    2016-01-01

    Primates display a remarkable ability to adapt to novel situations. Determining what is most pertinent in these situations is not always possible based only on the current sensory inputs, and often also depends on recent inputs and behavioral outputs that contribute to internal states. Thus, one can ask how cortical dynamics generate representations of these complex situations. It has been observed that mixed selectivity in cortical neurons contributes to represent diverse situations defined by a combination of the current stimuli, and that mixed selectivity is readily obtained in randomly connected recurrent networks. In this context, these reservoir networks reproduce the highly recurrent nature of local cortical connectivity. Recombining present and past inputs, random recurrent networks from the reservoir computing framework generate mixed selectivity which provides pre-coded representations of an essentially universal set of contexts. These representations can then be selectively amplified through learning to solve the task at hand. We thus explored their representational power and dynamical properties after training a reservoir to perform a complex cognitive task initially developed for monkeys. The reservoir model inherently displayed a dynamic form of mixed selectivity, key to the representation of the behavioral context over time. The pre-coded representation of context was amplified by training a feedback neuron to explicitly represent this context, thereby reproducing the effect of learning and allowing the model to perform more robustly. This second version of the model demonstrates how a hybrid dynamical regime combining spatio-temporal processing of reservoirs, and input driven attracting dynamics generated by the feedback neuron, can be used to solve a complex cognitive task. We compared reservoir activity to neural activity of dorsal anterior cingulate cortex of monkeys which revealed similar network dynamics. We argue that reservoir computing is a

  6. Reservoir Computing Properties of Neural Dynamics in Prefrontal Cortex.

    PubMed

    Enel, Pierre; Procyk, Emmanuel; Quilodran, René; Dominey, Peter Ford

    2016-06-01

    Primates display a remarkable ability to adapt to novel situations. Determining what is most pertinent in these situations is not always possible based only on the current sensory inputs, and often also depends on recent inputs and behavioral outputs that contribute to internal states. Thus, one can ask how cortical dynamics generate representations of these complex situations. It has been observed that mixed selectivity in cortical neurons contributes to represent diverse situations defined by a combination of the current stimuli, and that mixed selectivity is readily obtained in randomly connected recurrent networks. In this context, these reservoir networks reproduce the highly recurrent nature of local cortical connectivity. Recombining present and past inputs, random recurrent networks from the reservoir computing framework generate mixed selectivity which provides pre-coded representations of an essentially universal set of contexts. These representations can then be selectively amplified through learning to solve the task at hand. We thus explored their representational power and dynamical properties after training a reservoir to perform a complex cognitive task initially developed for monkeys. The reservoir model inherently displayed a dynamic form of mixed selectivity, key to the representation of the behavioral context over time. The pre-coded representation of context was amplified by training a feedback neuron to explicitly represent this context, thereby reproducing the effect of learning and allowing the model to perform more robustly. This second version of the model demonstrates how a hybrid dynamical regime combining spatio-temporal processing of reservoirs, and input driven attracting dynamics generated by the feedback neuron, can be used to solve a complex cognitive task. We compared reservoir activity to neural activity of dorsal anterior cingulate cortex of monkeys which revealed similar network dynamics. We argue that reservoir computing is a

  7. A Neural Dynamic Architecture for Reaching and Grasping Integrates Perception and Movement Generation and Enables On-Line Updating

    PubMed Central

    Knips, Guido; Zibner, Stephan K. U.; Reimann, Hendrik; Schöner, Gregor

    2017-01-01

    Reaching for objects and grasping them is a fundamental skill for any autonomous robot that interacts with its environment. Although this skill seems trivial to adults, who effortlessly pick up even objects they have never seen before, it is hard for other animals, for human infants, and for most autonomous robots. Any time during movement preparation and execution, human reaching movements are updated if the visual scene changes (with a delay of about 100 ms). The capability for online updating highlights how tightly perception, movement planning, and movement generation are integrated in humans. Here, we report on an effort to reproduce this tight integration in a neural dynamic process model of reaching and grasping that covers the complete path from visual perception to movement generation within a unified modeling framework, Dynamic Field Theory. All requisite processes are realized as time-continuous dynamical systems that model the evolution in time of neural population activation. Population level neural processes bring about the attentional selection of objects, the estimation of object shape and pose, and the mapping of pose parameters to suitable movement parameters. Once a target object has been selected, its pose parameters couple into the neural dynamics of movement generation so that changes of pose are propagated through the architecture to update the performed movement online. Implementing the neural architecture on an anthropomorphic robot arm equipped with a Kinect sensor, we evaluate the model by grasping wooden objects. Their size, shape, and pose are estimated from a neural model of scene perception that is based on feature fields. The sequential organization of a reach and grasp act emerges from a sequence of dynamic instabilities within a neural dynamics of behavioral organization that effectively switches the neural controllers from one phase of the action to the next. Trajectory formation itself is driven by a dynamical systems version of

  8. A Neural Dynamic Architecture for Reaching and Grasping Integrates Perception and Movement Generation and Enables On-Line Updating.

    PubMed

    Knips, Guido; Zibner, Stephan K U; Reimann, Hendrik; Schöner, Gregor

    2017-01-01

    Reaching for objects and grasping them is a fundamental skill for any autonomous robot that interacts with its environment. Although this skill seems trivial to adults, who effortlessly pick up even objects they have never seen before, it is hard for other animals, for human infants, and for most autonomous robots. Any time during movement preparation and execution, human reaching movements are updated if the visual scene changes (with a delay of about 100 ms). The capability for online updating highlights how tightly perception, movement planning, and movement generation are integrated in humans. Here, we report on an effort to reproduce this tight integration in a neural dynamic process model of reaching and grasping that covers the complete path from visual perception to movement generation within a unified modeling framework, Dynamic Field Theory. All requisite processes are realized as time-continuous dynamical systems that model the evolution in time of neural population activation. Population level neural processes bring about the attentional selection of objects, the estimation of object shape and pose, and the mapping of pose parameters to suitable movement parameters. Once a target object has been selected, its pose parameters couple into the neural dynamics of movement generation so that changes of pose are propagated through the architecture to update the performed movement online. Implementing the neural architecture on an anthropomorphic robot arm equipped with a Kinect sensor, we evaluate the model by grasping wooden objects. Their size, shape, and pose are estimated from a neural model of scene perception that is based on feature fields. The sequential organization of a reach and grasp act emerges from a sequence of dynamic instabilities within a neural dynamics of behavioral organization that effectively switches the neural controllers from one phase of the action to the next. Trajectory formation itself is driven by a dynamical systems version of

  9. Multiplex visibility graphs to investigate recurrent neural network dynamics

    NASA Astrophysics Data System (ADS)

    Bianchi, Filippo Maria; Livi, Lorenzo; Alippi, Cesare; Jenssen, Robert

    2017-03-01

    A recurrent neural network (RNN) is a universal approximator of dynamical systems, whose performance often depends on sensitive hyperparameters. Tuning them properly may be difficult and, typically, based on a trial-and-error approach. In this work, we adopt a graph-based framework to interpret and characterize internal dynamics of a class of RNNs called echo state networks (ESNs). We design principled unsupervised methods to derive hyperparameter configurations yielding maximal ESN performance, expressed in terms of prediction error and memory capacity. In particular, we propose to model the time series of activations generated by each neuron with a horizontal visibility graph, whose topological properties have been shown to be related to the underlying system dynamics. Subsequently, horizontal visibility graphs associated with all neurons become layers of a larger structure called a multiplex. We show that topological properties of such a multiplex reflect important features of ESN dynamics that can be used to guide the tuning of its hyperparameters. Results obtained on several benchmarks and a real-world dataset of telephone call data records show the effectiveness of the proposed methods.
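
    As an illustration of the horizontal visibility construction mentioned above, the sketch below builds the horizontal visibility graph of a short time series in Python. It is a generic reference implementation written for this summary, not the authors' code; the function name and the O(n^2) strategy are choices made here for clarity.

```python
# Minimal sketch: build the horizontal visibility graph (HVG) of a time series.
# Two samples x[i] and x[j] (i < j) are linked iff every intermediate sample
# is strictly smaller than min(x[i], x[j]).  Degree statistics of such graphs
# are among the topological features a multiplex of HVGs can aggregate.

def horizontal_visibility_edges(x):
    """Return the edge list of the HVG of the sequence x (O(n^2) reference version)."""
    edges = []
    n = len(x)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if all(x[k] < min(x[i], x[j]) for k in range(i + 1, j)):
                edges.append((i, j))
            # once a later sample reaches x[i], no sample beyond it can see i
            if x[j] >= x[i]:
                break
    return edges

if __name__ == "__main__":
    series = [0.8, 0.3, 0.5, 0.1, 0.9, 0.4]
    print(horizontal_visibility_edges(series))
```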

  10. Multiplex visibility graphs to investigate recurrent neural network dynamics

    PubMed Central

    Bianchi, Filippo Maria; Livi, Lorenzo; Alippi, Cesare; Jenssen, Robert

    2017-01-01

    A recurrent neural network (RNN) is a universal approximator of dynamical systems, whose performance often depends on sensitive hyperparameters. Tuning them properly may be difficult and, typically, based on a trial-and-error approach. In this work, we adopt a graph-based framework to interpret and characterize internal dynamics of a class of RNNs called echo state networks (ESNs). We design principled unsupervised methods to derive hyperparameter configurations yielding maximal ESN performance, expressed in terms of prediction error and memory capacity. In particular, we propose to model the time series of activations generated by each neuron with a horizontal visibility graph, whose topological properties have been shown to be related to the underlying system dynamics. Subsequently, horizontal visibility graphs associated with all neurons become layers of a larger structure called a multiplex. We show that topological properties of such a multiplex reflect important features of ESN dynamics that can be used to guide the tuning of its hyperparameters. Results obtained on several benchmarks and a real-world dataset of telephone call data records show the effectiveness of the proposed methods. PMID:28281563

  11. Neural dynamics of image representation in the primary visual cortex.

    PubMed

    Yan, Xiaogang; Khambhati, Ankit; Liu, Lei; Lee, Tai Sing

    2012-01-01

    Horizontal connections in the primary visual cortex have been hypothesized to play a number of computational roles: association field for contour completion, surface interpolation, surround suppression, and saliency computation. Here, we argue that horizontal connections might also serve a critical role for computing the appropriate codes for image representation. That the early visual cortex or V1 explicitly represents the image we perceive has been a common assumption in computational theories of efficient coding (Olshausen and Field (1996)), yet such a framework for understanding the circuitry in V1 has not been seriously entertained in the neurophysiological community. In fact, a number of recent fMRI and neurophysiological studies cast doubt on the neural validity of such an isomorphic representation (Cornelissen et al., 2006; von der Heydt et al., 2003). In this study, we investigated, neurophysiologically, how V1 neurons respond to uniform color surfaces and show that spiking activities of neurons can be decomposed into three components: a bottom-up feedforward input, an articulation of color tuning and a contextual modulation signal that is inversely proportional to the distance away from the bounding contrast border. We demonstrate through computational simulations that the behaviors of a model for image representation are consistent with many aspects of our neural observations. We conclude that the hypothesis of isomorphic representation of images in V1 remains viable and this hypothesis suggests an additional new interpretation of the functional roles of horizontal connections in the primary visual cortex.

  12. Track and Field Dynamics. Second Edition.

    ERIC Educational Resources Information Center

    Ecker, Tom

    Track and field coaching is considered an art embodying three sciences--physiology, psychology, and dynamics. It is the area of dynamics, the branch of physics that deals with the action of force on bodies, that is central to this book. Although the book does not cover the entire realm of dynamics, the laws and principles that relate directly to…

  13. Emergence of spatially heterogeneous burst suppression in a neural field model of electrocortical activity

    PubMed Central

    Bojak, Ingo; Stoyanov, Zhivko V.; Liley, David T. J.

    2015-01-01

    Burst suppression in the electroencephalogram (EEG) is a well-described phenomenon that occurs during deep anesthesia, as well as in a variety of congenital and acquired brain insults. Classically it is thought of as spatially synchronous, quasi-periodic bursts of high amplitude EEG separated by low amplitude activity. However, its characterization as a “global brain state” has been challenged by recent results obtained with intracranial electrocorticography. Not only does it appear that burst suppression activity is highly asynchronous across cortex, but also that it may occur in isolated regions of circumscribed spatial extent. Here we outline a realistic neural field model for burst suppression by adding a slow process of synaptic resource depletion and recovery, which is able to reproduce qualitatively the empirically observed features during general anesthesia at the whole cortex level. Simulations reveal heterogeneous bursting over the model cortex and complex spatiotemporal dynamics during simulated anesthetic action, and provide forward predictions of neuroimaging signals for subsequent empirical comparisons and more detailed characterization. Because burst suppression corresponds to a dynamical end-point of brain activity, theoretically accounting for its spatiotemporal emergence will vitally contribute to efforts aimed at clarifying whether a common physiological trajectory is induced by the actions of general anesthetic agents. We have taken a first step in this direction by showing that a neural field model can qualitatively match recent experimental data that indicate spatial differentiation of burst suppression activity across cortex. PMID:25767438

  14. Social decisions affect neural activity to perceived dynamic gaze

    PubMed Central

    Latinus, Marianne; Love, Scott A.; Rossi, Alejandra; Parada, Francisco J.; Huang, Lisa; Conty, Laurence; George, Nathalie; James, Karin

    2015-01-01

    Gaze direction, a cue of both social and spatial attention, is known to modulate early neural responses to faces e.g. N170. However, findings in the literature have been inconsistent, likely reflecting differences in stimulus characteristics and task requirements. Here, we investigated the effect of task on neural responses to dynamic gaze changes: away and toward transitions (resulting or not in eye contact). Subjects performed, in random order, social (away/toward them) and non-social (left/right) judgment tasks on these stimuli. Overall, in the non-social task, results showed a larger N170 to gaze aversion than gaze motion toward the observer. In the social task, however, this difference was no longer present in the right hemisphere, likely reflecting an enhanced N170 to gaze motion toward the observer. Our behavioral and event-related potential data indicate that performing social judgments enhances saliency of gaze motion toward the observer, even those that did not result in gaze contact. These data and that of previous studies suggest two modes of processing visual information: a ‘default mode’ that may focus on spatial information; a ‘socially aware mode’ that might be activated when subjects are required to make social judgments. The exact mechanism that allows switching from one mode to the other remains to be clarified. PMID:25925272

  15. Social decisions affect neural activity to perceived dynamic gaze.

    PubMed

    Latinus, Marianne; Love, Scott A; Rossi, Alejandra; Parada, Francisco J; Huang, Lisa; Conty, Laurence; George, Nathalie; James, Karin; Puce, Aina

    2015-11-01

    Gaze direction, a cue of both social and spatial attention, is known to modulate early neural responses to faces e.g. N170. However, findings in the literature have been inconsistent, likely reflecting differences in stimulus characteristics and task requirements. Here, we investigated the effect of task on neural responses to dynamic gaze changes: away and toward transitions (resulting or not in eye contact). Subjects performed, in random order, social (away/toward them) and non-social (left/right) judgment tasks on these stimuli. Overall, in the non-social task, results showed a larger N170 to gaze aversion than gaze motion toward the observer. In the social task, however, this difference was no longer present in the right hemisphere, likely reflecting an enhanced N170 to gaze motion toward the observer. Our behavioral and event-related potential data indicate that performing social judgments enhances saliency of gaze motion toward the observer, even those that did not result in gaze contact. These data and that of previous studies suggest two modes of processing visual information: a 'default mode' that may focus on spatial information; a 'socially aware mode' that might be activated when subjects are required to make social judgments. The exact mechanism that allows switching from one mode to the other remains to be clarified.

  16. Nonlinear adaptive trajectory tracking using dynamic neural networks.

    PubMed

    Poznyak, A S; Yu, W; Sanchez, E N; Perez, J P

    1999-01-01

    In this paper the adaptive nonlinear identification and trajectory tracking are discussed via dynamic neural networks. By means of a Lyapunov-like analysis we determine stability conditions for the identification error. Then we analyze the trajectory tracking error by a local optimal controller. An algebraic Riccati equation and a differential one are used for the identification and the tracking error analysis. As our main original contributions, we establish two theorems: the first one gives a bound for the identification error and the second one establishes a bound for the tracking error. We illustrate the effectiveness of these results by two examples: the second-order relay system with multiple isolated equilibrium points and the chaotic system given by the Duffing equation.

  17. Direct Adaptive Aircraft Control Using Dynamic Cell Structure Neural Networks

    NASA Technical Reports Server (NTRS)

    Jorgensen, Charles C.

    1997-01-01

    A Dynamic Cell Structure (DCS) Neural Network was developed which learns topology-representing networks (TRNs) of F-15 aircraft aerodynamic stability and control derivatives. The network is integrated into a direct adaptive tracking controller. The combination produces a robust adaptive architecture capable of handling multiple accident and off-nominal flight scenarios. This paper describes the DCS network and modifications to the parameter estimation procedure. The work represents one step towards an integrated real-time reconfiguration control architecture for rapid prototyping of new aircraft designs. Performance was evaluated using three off-line benchmarks and on-line nonlinear Virtual Reality simulation. Flight control was evaluated under scenarios including differential stabilator lock, soft sensor failure, control and stability derivative variations, and air turbulence.

  18. Nonlinear dynamics of direction-selective recurrent neural media.

    PubMed

    Xie, Xiaohui; Giese, Martin A

    2002-05-01

    The direction selectivity of cortical neurons can be accounted for by asymmetric lateral connections. Such lateral connectivity leads to a network dynamics with characteristic properties that can be exploited for distinguishing in neurophysiological experiments this mechanism for direction selectivity from other possible mechanisms. We present a mathematical analysis for a class of direction-selective neural models with asymmetric lateral connections. Contrasting with earlier theoretical studies that have analyzed approximations of the network dynamics by neglecting nonlinearities using methods from linear systems theory, we study the network dynamics with nonlinearity taken into consideration. We show that asymmetrically coupled networks can stabilize stimulus-locked traveling pulse solutions that are appropriate for the modeling of the responses of direction-selective neurons. In addition, our analysis shows that outside a certain regime of stimulus speeds the stability of these solutions breaks down, giving rise to lurching activity waves with specific spatiotemporal periodicity. These solutions, and the bifurcation by which they arise, cannot be easily accounted for by classical models for direction selectivity.

  19. Random dynamics of the Morris-Lecar neural model.

    PubMed

    Tateno, Takashi; Pakdaman, Khashayar

    2004-09-01

    Determining the response characteristics of neurons to fluctuating noise-like inputs similar to realistic stimuli is essential for understanding neuronal coding. This study addresses this issue by providing a random dynamical system analysis of the Morris-Lecar neural model driven by a white Gaussian noise current. Depending on parameter selections, the deterministic Morris-Lecar model can be considered as a canonical prototype for widely encountered classes of neuronal membranes, referred to as class I and class II membranes. In both classes, the transitions from excitable to oscillating regimes are associated with different bifurcation scenarios. This work examines how random perturbations affect these two bifurcation scenarios. It is first numerically shown that the Morris-Lecar model driven by white Gaussian noise current tends to have a unique stationary distribution in the phase space. Numerical evaluations also reveal quantitative and qualitative changes in this distribution in the vicinity of the bifurcations of the deterministic system. However, these changes notwithstanding, our numerical simulations show that the Lyapunov exponents of the system remain negative in these parameter regions, indicating that no dynamical stochastic bifurcations take place. Moreover, our numerical simulations confirm that, regardless of the asymptotic dynamics of the deterministic system, the random Morris-Lecar model stabilizes at a unique stationary stochastic process. In terms of random dynamical system theory, our analysis shows that additive noise destroys the above-mentioned bifurcation sequences that characterize class I and class II regimes in the Morris-Lecar model. The interpretation of this result in terms of neuronal coding is that, despite the differences in the deterministic dynamics of class I and class II membranes, their responses to noise-like stimuli present a reliable feature.
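
    To make the setup concrete, the following sketch integrates the Morris-Lecar equations with an additive white Gaussian noise current using the Euler-Maruyama scheme. The parameter values are commonly quoted class II ("Hopf") values and the noise strength sigma is an assumption made here for illustration; they are not necessarily those used in the study.

```python
# Minimal Euler-Maruyama sketch of the Morris-Lecar model driven by a white
# Gaussian noise current.  All parameter values are illustrative.
import numpy as np

C, g_L, g_Ca, g_K = 20.0, 2.0, 4.4, 8.0        # capacitance and conductances
V_L, V_Ca, V_K = -60.0, 120.0, -84.0           # reversal potentials (mV)
V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04
I_ext, sigma = 90.0, 5.0                       # mean input and noise strength (assumed)

def m_inf(V):  return 0.5 * (1.0 + np.tanh((V - V1) / V2))
def w_inf(V):  return 0.5 * (1.0 + np.tanh((V - V3) / V4))
def tau_w(V):  return 1.0 / np.cosh((V - V3) / (2.0 * V4))

def simulate(T=1000.0, dt=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    V, w = np.empty(n), np.empty(n)
    V[0], w[0] = -60.0, 0.0
    for k in range(n - 1):
        I_ion = (g_L * (V[k] - V_L) + g_Ca * m_inf(V[k]) * (V[k] - V_Ca)
                 + g_K * w[k] * (V[k] - V_K))
        V[k + 1] = (V[k] + dt * (I_ext - I_ion) / C
                    + (sigma / C) * np.sqrt(dt) * rng.standard_normal())
        w[k + 1] = w[k] + dt * phi * (w_inf(V[k]) - w[k]) / tau_w(V[k])
    return V, w

V, w = simulate()
print("voltage range: %.1f to %.1f mV" % (V.min(), V.max()))
```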

  20. Spatiotemporal neural network dynamics for the processing of dynamic facial expressions

    PubMed Central

    Sato, Wataru; Kochiyama, Takanori; Uono, Shota

    2015-01-01

    The dynamic facial expressions of emotion automatically elicit multifaceted psychological activities; however, the temporal profiles and dynamic interaction patterns of brain activities remain unknown. We investigated these issues using magnetoencephalography. Participants passively observed dynamic facial expressions of fear and happiness, or dynamic mosaics. Source-reconstruction analyses utilizing functional magnetic-resonance imaging data revealed higher activation in broad regions of the bilateral occipital and temporal cortices in response to dynamic facial expressions than in response to dynamic mosaics at 150–200 ms and some later time points. The right inferior frontal gyrus exhibited higher activity for dynamic faces versus mosaics at 300–350 ms. Dynamic causal-modeling analyses revealed that dynamic faces activated the dual visual routes and visual–motor route. Superior influences of feedforward and feedback connections were identified before and after 200 ms, respectively. These results indicate that hierarchical, bidirectional neural network dynamics within a few hundred milliseconds implement the processing of dynamic facial expressions. PMID:26206708

  1. Hidden Conditional Neural Fields for Continuous Phoneme Speech Recognition

    NASA Astrophysics Data System (ADS)

    Fujii, Yasuhisa; Yamamoto, Kazumasa; Nakagawa, Seiichi

    In this paper, we propose Hidden Conditional Neural Fields (HCNF) for continuous phoneme speech recognition, which are a combination of Hidden Conditional Random Fields (HCRF) and a Multi-Layer Perceptron (MLP), and inherit their merits, namely, the discriminative property for sequences from HCRF and the ability to extract non-linear features from an MLP. HCNF can incorporate many types of features from which non-linear features can be extracted, and is trained by sequential criteria. We first present the formulation of HCNF and then examine three methods to further improve automatic speech recognition using HCNF: an objective function that explicitly considers training errors, a hierarchical tandem-style feature, and a deep non-linear feature extractor for the observation function. We show that HCNF can be trained realistically without any initial model and outperforms HCRF and the triphone hidden Markov model trained in the minimum phone error (MPE) manner, using experimental results for continuous English phoneme recognition on the TIMIT core test set and Japanese phoneme recognition on the IPA 100 test set.

  2. Neural networks with excitatory and inhibitory components: Direct and inverse problems by a mean-field approach

    NASA Astrophysics Data System (ADS)

    di Volo, Matteo; Burioni, Raffaella; Casartelli, Mario; Livi, Roberto; Vezzani, Alessandro

    2016-01-01

    We study the dynamics of networks with inhibitory and excitatory leak-integrate-and-fire neurons with short-term synaptic plasticity in the presence of depressive and facilitating mechanisms. The dynamics is analyzed by a heterogeneous mean-field approximation, which allows us to keep track of the effects of structural disorder in the network. We describe the complex behavior of different classes of excitatory and inhibitory components, which give rise to a rich dynamical phase diagram as a function of the fraction of inhibitory neurons. Using the same mean-field approach, we study and solve a global inverse problem: reconstructing the degree probability distributions of the inhibitory and excitatory components and the fraction of inhibitory neurons from the knowledge of the average synaptic activity field. This approach unveils new perspectives on the numerical study of neural network dynamics and the possibility of using these models as a test bed for the analysis of experimental data.

  3. Control of Complex Dynamic Systems by Neural Networks

    NASA Technical Reports Server (NTRS)

    Spall, James C.; Cristion, John A.

    1993-01-01

    This paper considers the use of neural networks (NN's) in controlling a nonlinear, stochastic system with unknown process equations. The NN is used to model the resulting unknown control law. The approach here is based on using the output error of the system to train the NN controller without the need to construct a separate model (NN or other type) for the unknown process dynamics. To implement such a direct adaptive control approach, it is required that connection weights in the NN be estimated while the system is being controlled. As a result of the feedback of the unknown process dynamics, however, it is not possible to determine the gradient of the loss function for use in standard (back-propagation-type) weight estimation algorithms. Therefore, this paper considers the use of a new stochastic approximation algorithm for this weight estimation, which is based on a 'simultaneous perturbation' gradient approximation that only requires the system output error. It is shown that this algorithm can greatly enhance the efficiency over more standard stochastic approximation algorithms based on finite-difference gradient approximations.
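
    The core of the approach is the simultaneous perturbation gradient approximation, which estimates a gradient from two evaluations of the output error alone. The sketch below shows one such update on a toy quadratic loss; the gain values and the stand-in loss are illustrative placeholders chosen here, not the controller setup of the paper.

```python
# Minimal sketch of a simultaneous-perturbation (SPSA-style) update: the
# gradient of a loss is approximated from two loss evaluations only, so no
# analytic gradient of the unknown process dynamics is needed.  The quadratic
# toy loss merely stands in for the system output error of a controlled plant.
import numpy as np

def spsa_step(theta, loss, a=0.05, c=0.1, rng=None):
    """One SPSA iteration on the parameter vector theta."""
    if rng is None:
        rng = np.random.default_rng()
    delta = rng.choice([-1.0, 1.0], size=theta.shape)   # Bernoulli +-1 perturbation
    g_hat = (loss(theta + c * delta) - loss(theta - c * delta)) / (2.0 * c * delta)
    return theta - a * g_hat

if __name__ == "__main__":
    target = np.array([1.0, -2.0, 0.5])
    loss = lambda th: float(np.sum((th - target) ** 2))  # stand-in for output error
    theta = np.zeros(3)
    rng = np.random.default_rng(0)
    for _ in range(200):
        theta = spsa_step(theta, loss, rng=rng)
    print("estimated parameters:", np.round(theta, 2))
```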

  4. Advanced Neural Network Modeling of Synthetic Jet Flow Fields

    DTIC Science & Technology

    2006-03-01

    The purpose of this research was to continue development of a neural network-based, lumped deterministic source term (LDST) approximation module for... The main exploration involved the grid sensitivity of the neural network model. A second task was originally planned on the portability of the approach to

  5. Moving to higher ground: The dynamic field theory and the dynamics of visual cognition.

    PubMed

    Johnson, Jeffrey S; Spencer, John P; Schöner, Gregor

    2008-08-01

    In the present report, we describe a new dynamic field theory that captures the dynamics of visuo-spatial cognition. This theory grew out of the dynamic systems approach to motor control and development, and is grounded in neural principles. The initial application of dynamic field theory to issues in visuo-spatial cognition extended concepts of the motor approach to decision making in a sensori-motor context, and, more recently, to the dynamics of spatial cognition. Here we extend these concepts still further to address topics in visual cognition, including visual working memory for non-spatial object properties, the processes that underlie change detection, and the 'binding problem' in vision. In each case, we demonstrate that the general principles of the dynamic field approach can unify findings in the literature and generate novel predictions. We contend that the application of these concepts to visual cognition avoids the pitfalls of reductionist approaches in cognitive science, and points toward a formal integration of brains, bodies, and behavior.
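
    For readers unfamiliar with the formalism, the sketch below integrates a one-dimensional dynamic neural field with local excitation and global inhibition, the kind of dynamics from which the selection and working-memory behaviors described above emerge. All parameter values, the Gaussian kernel, and the sigmoid nonlinearity are illustrative choices made here; this is not the specific model of the paper.

```python
# Minimal sketch of a one-dimensional dynamic neural field of the Amari type,
#   tau * du/dt = -u + h + S(x, t) + conv(kernel, f(u)) - global inhibition,
# showing a self-sustained peak that persists after the input is removed.
import numpy as np

n, dx, dt, tau, h = 200, 0.5, 1.0, 10.0, -5.0
x = np.arange(n) * dx

def gauss(center, amp, width):
    return amp * np.exp(-(x - center) ** 2 / (2.0 * width ** 2))

w_exc, w_inh = 5.0, 2.0                        # excitation amplitude, global inhibition (assumed)
kernel = w_exc * np.exp(-(x - x.mean()) ** 2 / (2.0 * 4.0 ** 2))

def f(u, beta=1.5):                            # sigmoidal firing-rate function
    return 1.0 / (1.0 + np.exp(-beta * u))

u = np.full(n, h)
stimulus = gauss(center=40.0, amp=6.0, width=3.0)
for step in range(300):
    s = stimulus if step < 150 else 0.0        # remove the localized input halfway through
    lateral = dx * (np.convolve(f(u), kernel, mode="same") - w_inh * f(u).sum())
    u += dt / tau * (-u + h + s + lateral)

print("self-sustained peak after stimulus removal:", u.max() > 0)
```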

  6. Traveling waves and breathers in an excitatory-inhibitory neural field

    NASA Astrophysics Data System (ADS)

    Folias, Stefanos E.

    2017-03-01

    We study existence and stability of traveling activity bump solutions in an excitatory-inhibitory (E-I) neural field with Heaviside firing rate functions by deriving existence conditions for traveling bumps and an Evans function to analyze their spectral stability. Subsequently, we show that these existence and stability results reduce, in the limit of wave speed c → 0, to the equivalent conditions developed for the stationary bump case. Using the results for the stationary bump case, we show that drift bifurcations of stationary bumps serve as a mechanism for generating traveling bump solutions in the E-I neural field as parameters are varied. Furthermore, we explore the interrelations between stationary and traveling types of bumps and breathers (time-periodic oscillatory bumps) by bridging together analytical and simulation results for stationary and traveling bumps and their bifurcations in a region of parameter space. Interestingly, we find evidence for a codimension-2 drift-Hopf bifurcation occurring as two parameters, the inhibitory time constant τ and the I-to-I synaptic connection strength w̄_ii, are varied, and show that the codimension-2 point serves as an organizing center for the dynamics of these four types of spatially localized solutions. Additionally, we describe a case involving subcritical bifurcations that lead to traveling waves and breathers as τ is varied.
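
    For orientation, the classic single-population analogue of the stationary bump condition can be checked numerically: with a Heaviside firing rate of threshold theta, a stationary bump of width a must satisfy W(a) = theta, where W(a) is the integral of the coupling kernel from 0 to a. The sketch below finds such widths for a Mexican-hat kernel; the kernel parameters and threshold are illustrative, and the two-population E-I conditions derived in the paper are more involved than this analogue.

```python
# Minimal sketch of the classic single-population stationary bump condition:
# with a Heaviside firing rate of threshold theta, a bump of width a satisfies
# W(a) = integral_0^a w(x) dx = theta.  Kernel parameters are illustrative.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def w(x, A_e=1.0, s_e=1.0, A_i=0.5, s_i=2.0):          # Mexican-hat kernel
    return (A_e * np.exp(-x**2 / (2 * s_e**2))
            - A_i * np.exp(-x**2 / (2 * s_i**2)))

def W(a):
    return quad(w, 0.0, a)[0]

theta = 0.3
# W(a) - theta changes sign twice: a narrow (unstable) and a wide (stable) bump.
widths = [brentq(lambda a: W(a) - theta, lo, hi) for lo, hi in [(0.01, 1.0), (1.0, 6.0)]]
print("bump widths satisfying W(a) = theta:", np.round(widths, 3))
```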

  7. Track and Field: Technique Through Dynamics.

    ERIC Educational Resources Information Center

    Ecker, Tom

    This book was designed to aid in applying the laws of dynamics to the sport of track and field, event by event. It begins by tracing the history of the discoveries of the laws of motion and the principles of dynamics, with explanations of commonly used terms derived from the vocabularies of the physical sciences. The principles and laws of…

  8. Generalized activity equations for spiking neural network dynamics

    PubMed Central

    Buice, Michael A.; Chow, Carson C.

    2013-01-01

    Much progress has been made in uncovering the computational capabilities of spiking neural networks. However, spiking neurons will always be more expensive to simulate compared to rate neurons because of the inherent disparity in time scales—the spike duration time is much shorter than the inter-spike time, which is much shorter than any learning time scale. In numerical analysis, this is a classic stiff problem. Spiking neurons are also much more difficult to study analytically. One possible approach to making spiking networks more tractable is to augment mean field activity models with some information about spiking correlations. For example, such a generalized activity model could carry information about spiking rates and correlations between spikes self-consistently. Here, we will show how this can be accomplished by constructing a complete formal probabilistic description of the network and then expanding around a small parameter such as the inverse of the number of neurons in the network. The mean field theory of the system gives a rate-like description. The first order terms in the perturbation expansion keep track of covariances. PMID:24298252

  9. Heart fields: spatial polarity and temporal dynamics.

    PubMed

    Abu-Issa, Radwan

    2014-02-01

    In chick and mouse, heart fields undergo dynamic morphological spatiotemporal changes during heart tube formation. Here, the dynamic change in spatial polarity of such fields is discussed and a new perspective on the heart fields is proposed. The heart progenitor cells delaminate through the primitive streak and migrate in a semicircular trajectory craniolaterally forming the bilateral heart fields as part of the splanchnic mesoderm. They switch their polarity from anteroposterior to mediolateral. The anterior intestinal portal posterior descent inverts the newly formed heart field mediolateral polarity into lateromedial by 125° bending. The heart fields revert back to their original anteroposterior polarity and fuse at the midline forming a semi heart tube by completing their half circle movement. Several names and roles were assigned to different portions of the heart fields: posterior versus anterior, first versus second, and primary versus secondary heart field. The posterior and anterior heart fields define basically physical fields that form the inflow-outflow axis of the heart tube. The first and second heart fields are, in contrast, temporal fields of differentiating cardiomyocytes expressing myosin light chain 2a and undifferentiated and proliferating precardiac mesoderm expressing Isl1 gene, respectively. The two markers present a complementary pattern and are expressed transiently in all myocardial lineages. Thus, Isl1 is not restricted to a portion of the heart field or one of the two heart lineages as has been often assumed.

  10. Exploring neural cell dynamics with digital holographic microscopy.

    PubMed

    Marquet, P; Depeursinge, C; Magistretti, P J

    2013-01-01

    In this review, we summarize how the new concept of digital optics applied to the field of holographic microscopy has allowed the development of a reliable and flexible digital holographic quantitative phase microscopy (DH-QPM) technique at the nanoscale particularly suitable for cell imaging. Particular emphasis is placed on the original biological information provided by the quantitative phase signal. We present the most relevant DH-QPM applications in the field of cell biology, including automated cell counts, recognition, classification, three-dimensional tracking, discrimination between physiological and pathophysiological states, and the study of cell membrane fluctuations at the nanoscale. In the last part, original results show how DH-QPM can address two important issues in the field of neurobiology, namely, multiple-site optical recording of neuronal activity and noninvasive visualization of dendritic spine dynamics resulting from a full digital holographic microscopy tomographic approach.

  11. Filling the Gap on Developmental Change: Tests of a Dynamic Field Theory of Spatial Cognition

    ERIC Educational Resources Information Center

    Schutte, Anne R.; Spencer, John P.

    2010-01-01

    In early childhood, there is a developmental transition in spatial memory biases. Before the transition, children's memory responses are biased toward the midline of a space, while after the transition responses are biased away from midline. The Dynamic Field Theory (DFT) posits that changes in neural interaction and changes in how children…

  12. Neural RNA as a principal dynamic information carrier in a neuron

    NASA Astrophysics Data System (ADS)

    Berezin, Andrey A.

    1999-11-01

    A quantum mechanical approach has been used to develop a model of the neural ribonucleic acid molecule dynamics. Macro and micro Fermi-Pasta-Ulam recurrence has been considered as a principal information carrier in a neuron.

  13. The influence of electric fields on hippocampal neural progenitor cells.

    PubMed

    Ariza, Carlos Atico; Fleury, Asha T; Tormos, Christian J; Petruk, Vadim; Chawla, Sagar; Oh, Jisun; Sakaguchi, Donald S; Mallapragada, Surya K

    2010-12-01

    The differentiation and proliferation of neural stem/progenitor cells (NPCs) depend on various in vivo environmental factors or cues, which may include an endogenous electrical field (EF), as observed during nervous system development and repair. In this study, we investigate the morphologic, phenotypic, and mitotic alterations of adult hippocampal NPCs that occur when exposed to two EFs of estimated endogenous strengths. NPCs treated with a 437 mV/mm direct current (DC) EF aligned perpendicularly to the EF vector and had a greater tendency to differentiate into neurons, but not into oligodendrocytes or astrocytes, compared to controls. Furthermore, NPC process growth was promoted perpendicularly and inhibited anodally in the 437 mV/mm DC EF. Yet fewer cells were observed in the DC EF, which in part was due to a decrease in cell viability. The other EF applied was a 46 mV/mm alternating current (AC) EF. However, the 46 mV/mm AC EF showed no major differences in alignment or differentiation, compared to control conditions. For both EF treatments, the percent of mitotic cells during the last 14 h of the experiment were statistically similar to controls. Reported here, to our knowledge, is the first evidence of adult NPC differentiation affected in an EF in vitro. Further investigation and application of EFs on stem cells is warranted to elucidate the utility of EFs to control phenotypic behavior. With progress, the use of EFs may be engineered to control differentiation and target the growth of transplanted cells in a stem cell-based therapy to treat nervous system disorders.

  14. Islet1 derivatives in the heart are of both neural crest and second heart field origin

    PubMed Central

    Engleka, Kurt A.; Manderfield, Lauren J.; Brust, Rachael D.; Li, Li; Cohen, Ashley; Dymecki, Susan M.; Epstein, Jonathan A.

    2012-01-01

    Rationale: Islet1 (Isl1) has been proposed as a marker of cardiac progenitor cells derived from the second heart field and is utilized to identify and purify cardiac progenitors from murine and human specimens for ex vivo expansion. The use of Isl1 as a specific second heart field marker is dependent on its exclusion from other cardiac lineages such as neural crest. Objective: Determine if Isl1 is expressed by cardiac neural crest. Methods and Results: We used an intersectional fate-mapping system employing the RC::FrePe allele, which reports dual Flpe and Cre recombination. Combining Isl1Cre/+, a second heart field (SHF) driver, and Wnt1::Flpe, a neural crest driver, with RC::FrePe reveals that some Isl1 derivatives in the cardiac outflow tract derive from Wnt1-expressing neural crest progenitors. In contrast, no overlap was observed between Wnt1-derived neural crest and an alternative second heart field driver, Mef2c-AHF-Cre. Conclusions: Isl1 is not restricted to second heart field progenitors in the developing heart but also labels cardiac neural crest. The intersection of Isl1 and Wnt1 lineages within the heart provides a caveat to using Isl1 as an exclusive second heart field cardiac progenitor marker and suggests that some Isl1-expressing progenitor cells derived from embryos, ES or iPS cultures may be of neural crest lineage. PMID:22394517

  15. Cognitive Flexibility through Metastable Neural Dynamics Is Disrupted by Damage to the Structural Connectome.

    PubMed

    Hellyer, Peter J; Scott, Gregory; Shanahan, Murray; Sharp, David J; Leech, Robert

    2015-06-17

    Current theory proposes that healthy neural dynamics operate in a metastable regime, where brain regions interact to simultaneously maximize integration and segregation. Metastability may confer important behavioral properties, such as cognitive flexibility. It is increasingly recognized that neural dynamics are constrained by the underlying structural connections between brain regions. An important challenge is, therefore, to relate structural connectivity, neural dynamics, and behavior. Traumatic brain injury (TBI) is a pre-eminent structural disconnection disorder whereby traumatic axonal injury damages large-scale connectivity, producing characteristic cognitive impairments, including slowed information processing speed and reduced cognitive flexibility, that may be a result of disrupted metastable dynamics. Therefore, TBI provides an experimental and theoretical model to examine how metastable dynamics relate to structural connectivity and cognition. Here, we use complementary empirical and computational approaches to investigate how metastability arises from the healthy structural connectome and relates to cognitive performance. We found reduced metastability in large-scale neural dynamics after TBI, measured with resting-state functional MRI. This reduction in metastability was associated with damage to the connectome, measured using diffusion MRI. Furthermore, decreased metastability was associated with reduced cognitive flexibility and information processing. A computational model, defined by empirically derived connectivity data, demonstrates how behaviorally relevant changes in neural dynamics result from structural disconnection. Our findings suggest how metastable dynamics are important for normal brain function and contingent on the structure of the human connectome.
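
    A metastability index of the kind used in this literature is the standard deviation over time of the Kuramoto order parameter computed from regional phase time series. The sketch below computes it for synthetic oscillators; the data generation is purely illustrative and is not the empirical or modeling pipeline of the study.

```python
# Minimal sketch of a metastability index: the standard deviation over time of
# the Kuramoto order parameter R(t) computed from instantaneous phases.
# The synthetic oscillators below only illustrate the calculation.
import numpy as np

def kuramoto_order_parameter(phases):
    """phases: array of shape (time, regions) in radians -> R(t)."""
    return np.abs(np.exp(1j * phases).mean(axis=1))

def metastability(phases):
    """Temporal variability of global synchrony."""
    return kuramoto_order_parameter(phases).std()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    t = np.arange(0, 60.0, 0.01)[:, None]                  # 60 s sampled at 100 Hz
    freqs = 2 * np.pi * rng.uniform(0.04, 0.08, size=10)   # 10 regions, slow rhythms
    drifting = freqs * t + 0.5 * rng.standard_normal((t.size, 10))
    locked = 2 * np.pi * 0.05 * t + np.zeros((t.size, 10))
    print("metastability, drifting oscillators:", round(metastability(drifting), 3))
    print("metastability, fully locked:", round(metastability(locked), 3))
```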

  16. The relevance of network micro-structure for neural dynamics

    PubMed Central

    Pernice, Volker; Deger, Moritz; Cardanobile, Stefano; Rotter, Stefan

    2013-01-01

    The activity of cortical neurons is determined by the input they receive from presynaptic neurons. Many previous studies have investigated how specific aspects of the statistics of the input affect the spike trains of single neurons and neurons in recurrent networks. However, typically very simple random network models are considered in such studies. Here we use a recently developed algorithm to construct networks based on a quasi-fractal probability measure which are much more variable than commonly used network models, and which therefore promise to sample the space of recurrent networks in a more exhaustive fashion than previously possible. We use the generated graphs as the underlying network topology in simulations of networks of integrate-and-fire neurons in an asynchronous and irregular state. Based on an extensive dataset of networks and neuronal simulations we assess statistical relations between features of the network structure and the spiking activity. Our results highlight the strong influence that some details of the network structure have on the activity dynamics of both single neurons and populations, even if some global network parameters are kept fixed. We observe specific and consistent relations between activity characteristics like spike-train irregularity or correlations and network properties, for example the distributions of the numbers of in- and outgoing connections or clustering. Exploiting these relations, we demonstrate that it is possible to estimate structural characteristics of the network from activity data. We also assess higher order correlations of spiking activity in the various networks considered here, and find that their occurrence strongly depends on the network structure. These results provide directions for further theoretical studies on recurrent networks, as well as new ways to interpret spike train recordings from neural circuits. PMID:23761758
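
    Two of the activity characteristics mentioned above, spike-train irregularity and pairwise correlations, can be computed from spike times as in the sketch below. The Poisson surrogate data only exercise the functions; the rate, duration, and bin size are arbitrary choices made here.

```python
# Minimal sketch of two spiking-activity statistics: the coefficient of
# variation (CV) of interspike intervals per neuron, and pairwise Pearson
# correlations of binned spike counts.  Poisson spike trains serve as a demo.
import numpy as np

def isi_cv(spike_times):
    """CV of interspike intervals of one spike train (1-D array of times)."""
    isi = np.diff(np.sort(spike_times))
    return isi.std() / isi.mean()

def count_correlations(spike_trains, t_max, bin_size):
    """Pearson correlations of binned spike counts between all pairs."""
    edges = np.arange(0.0, t_max + bin_size, bin_size)
    counts = np.array([np.histogram(st, bins=edges)[0] for st in spike_trains])
    return np.corrcoef(counts)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    rate, t_max = 5.0, 100.0                                   # Hz, seconds
    trains = [np.cumsum(rng.exponential(1.0 / rate, size=int(rate * t_max)))
              for _ in range(4)]
    print("CVs:", [round(isi_cv(st), 2) for st in trains])     # close to 1 for Poisson
    print("count correlations:\n", np.round(count_correlations(trains, t_max, 0.1), 2))
```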

  17. The neural dynamics of song syntax in songbirds

    NASA Astrophysics Data System (ADS)

    Jin, Dezhe

    2010-03-01

    The songbird is "the hydrogen atom" of the neuroscience of complex, learned vocalizations such as human speech. Songs of the Bengalese finch consist of sequences of syllables. While syllables are temporally stereotypical, syllable sequences can vary and follow complex, probabilistic syntactic rules, which are rudimentarily similar to grammars in human language. The songbird brain is accessible to experimental probes, and is understood well enough to construct biologically constrained, predictive computational models. In this talk, I will discuss the structure and dynamics of neural networks underlying the stereotypy of the birdsong syllables and the flexibility of syllable sequences. Recent experiments and computational models suggest that a syllable is encoded in a chain network of projection neurons in premotor nucleus HVC (proper name). Precisely timed spikes propagate along the chain, driving vocalization of the syllable through downstream nuclei. Through a computational model, I show that variable syllable sequences can be generated through spike propagations in a network in HVC in which the syllable-encoding chain networks are connected into a branching chain pattern. The neurons mutually inhibit each other through the inhibitory HVC interneurons, and are driven by external inputs from nuclei upstream of HVC. At a branching point that connects the final group of a chain to the first groups of several chains, the spike activity selects one branch to continue the propagation. The selection is probabilistic, and is due to the winner-take-all mechanism mediated by the inhibition and noise. The model predicts that the syllable sequences statistically follow partially observable Markov models. Experimental results supporting this and other predictions of the model will be presented. We suggest that the syntax of birdsong syllable sequences is embedded in the connection patterns of HVC projection neurons.
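
    The statistical picture predicted by the model, syllable sequences following a (partially observable) Markov process, can be illustrated with a small first-order Markov chain whose branching probabilities stand in for the winner-take-all selection at branching points. The syllable labels and transition probabilities below are invented for illustration and are not taken from the work itself.

```python
# Minimal sketch: sample syllable sequences from a first-order Markov chain
# whose branching probabilities mimic probabilistic branch selection.
import numpy as np

transitions = {
    "start": (["a"], [1.0]),
    "a":     (["b", "c"], [0.7, 0.3]),    # branching point: two possible continuations
    "b":     (["a", "end"], [0.4, 0.6]),
    "c":     (["c", "end"], [0.5, 0.5]),  # repetition of a syllable
}

def sample_song(rng):
    syllable, song = "start", []
    while syllable != "end":
        options, probs = transitions[syllable]
        syllable = rng.choice(options, p=probs)
        if syllable != "end":
            song.append(str(syllable))
    return "".join(song)

rng = np.random.default_rng(2)
print([sample_song(rng) for _ in range(5)])
```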

  18. Whole-Brain Neural Dynamics of Probabilistic Reward Prediction.

    PubMed

    Bach, Dominik R; Symmonds, Mkael; Barnes, Gareth; Dolan, Raymond J

    2017-04-05

    Predicting future reward is paramount to performing an optimal action. Although a number of brain areas are known to encode such predictions, a detailed account of how the associated representations evolve over time is lacking. Here, we address this question using human magnetoencephalography (MEG) and multivariate analyses of instantaneous activity in reconstructed sources. We overtrained participants on a simple instrumental reward learning task where geometric cues predicted a distribution of possible rewards, from which a sample was revealed 2000 ms later. We show that predicted mean reward (i.e., expected value), and predicted reward variability (i.e., economic risk), are encoded distinctly. Early on, representations of mean reward are seen in parietal and visual areas, and later in frontal regions with orbitofrontal cortex emerging last. Strikingly, an encoding of reward variability emerges simultaneously in parietal/sensory and frontal sources and later than mean reward encoding. An orbitofrontal variability encoding emerged around the same time as that seen for mean reward. Crucially, cross-prediction showed that mean reward and variability representations are distinct and also revealed that instantaneous representations become more stable over time. Across sources, the best fitting metric for variability signals was coefficient of variation (rather than SD or variance), but distinct best metrics were seen for individual brain regions. Our data demonstrate how a dynamic encoding of probabilistic reward prediction unfolds in the brain both in time and space. SIGNIFICANCE STATEMENT: Predicting future reward is paramount to optimal behavior. To gain insight into the underlying neural computations, we investigate how reward representations in the brain arise over time. Using magnetoencephalography, we show that a representation of predicted mean reward emerges early in parietal/sensory regions and later in frontal cortex. In contrast, predicted reward variability

  19. Imaging electric field dynamics with graphene optoelectronics

    NASA Astrophysics Data System (ADS)

    Horng, Jason; Balch, Halleh B.; McGuire, Allister F.; Tsai, Hsin-Zon; Forrester, Patrick R.; Crommie, Michael F.; Cui, Bianxiao; Wang, Feng

    2016-12-01

    The use of electric fields for signalling and control in liquids is widespread, spanning bioelectric activity in cells to electrical manipulation of microstructures in lab-on-a-chip devices. However, an appropriate tool to resolve the spatio-temporal distribution of electric fields over a large dynamic range has yet to be developed. Here we present a label-free method to image local electric fields in real time and under ambient conditions. Our technique combines the unique gate-variable optical transitions of graphene with a critically coupled planar waveguide platform that enables highly sensitive detection of local electric fields with a voltage sensitivity of a few microvolts, a spatial resolution of tens of micrometres and a frequency response over tens of kilohertz. Our imaging platform enables parallel detection of electric fields over a large field of view and can be tailored to broad applications spanning lab-on-a-chip device engineering to analysis of bioelectric phenomena.

  20. Imaging electric field dynamics with graphene optoelectronics

    PubMed Central

    Horng, Jason; Balch, Halleh B.; McGuire, Allister F.; Tsai, Hsin-Zon; Forrester, Patrick R.; Crommie, Michael F.; Cui, Bianxiao; Wang, Feng

    2016-01-01

    The use of electric fields for signalling and control in liquids is widespread, spanning bioelectric activity in cells to electrical manipulation of microstructures in lab-on-a-chip devices. However, an appropriate tool to resolve the spatio-temporal distribution of electric fields over a large dynamic range has yet to be developed. Here we present a label-free method to image local electric fields in real time and under ambient conditions. Our technique combines the unique gate-variable optical transitions of graphene with a critically coupled planar waveguide platform that enables highly sensitive detection of local electric fields with a voltage sensitivity of a few microvolts, a spatial resolution of tens of micrometres and a frequency response over tens of kilohertz. Our imaging platform enables parallel detection of electric fields over a large field of view and can be tailored to broad applications spanning lab-on-a-chip device engineering to analysis of bioelectric phenomena. PMID:27982125

  1. A hardware implementation of artificial neural networks using field programmable gate arrays

    NASA Astrophysics Data System (ADS)

    Won, E.

    2007-11-01

    An artificial neural network algorithm is implemented using a low-cost field programmable gate array hardware. One hidden layer is used in the feed-forward neural network structure in order to discriminate one class of patterns from the other class in real time. In this work, the training of the network is performed in the off-line computing environment and the results of the training are configured to the hardware in order to minimize the latency of the neural computation. With five 8-bit input patterns, six hidden nodes, and one 8-bit output, the implemented hardware neural network makes decisions on a set of input patterns in 11 clock cycles, or less than 200 ns with a 60 MHz clock. The result from the hardware neural computation is well predictable based on the off-line computation. This implementation may be used in level 1 hardware triggers in high energy physics experiments.
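
    The inference stage described above, five 8-bit inputs, six hidden nodes, and one 8-bit output, can be sketched in integer arithmetic as below. The weights, scaling shifts, and sigmoid lookup table are placeholders invented for illustration; the actual quantization and datapath of the FPGA design are not specified here.

```python
# Minimal sketch of fixed-point inference for a feed-forward network with five
# 8-bit inputs, six hidden nodes and one 8-bit output, as a software stand-in
# for an FPGA datapath.  All weights and scale factors are illustrative.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.integers(-64, 64, size=(6, 5))          # hidden-layer weights, signed 8-bit range
b1 = rng.integers(-64, 64, size=6)
W2 = rng.integers(-64, 64, size=(1, 6))
b2 = rng.integers(-64, 64, size=1)

# Pre-computed sigmoid lookup table, as an off-line training step might provide.
table_in = np.arange(-128, 128)
sigmoid_table = (255.0 / (1.0 + np.exp(-table_in / 32.0))).astype(np.int64)

def activation(v, shift=7):
    """Scale the accumulator down to the table's index range and look it up."""
    idx = np.clip(v >> shift, -128, 127) + 128
    return sigmoid_table[idx]

def forward(x):
    """x: five unsigned 8-bit inputs -> one 8-bit output."""
    h = activation(W1 @ x + b1)                  # six hidden node outputs in [0, 255]
    y = activation(W2 @ h + b2)
    return int(y[0])

print(forward(np.array([10, 200, 32, 77, 150])))
```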

  2. Hybrid computing using a neural network with dynamic external memory.

    PubMed

    Graves, Alex; Wayne, Greg; Reynolds, Malcolm; Harley, Tim; Danihelka, Ivo; Grabska-Barwińska, Agnieszka; Colmenarejo, Sergio Gómez; Grefenstette, Edward; Ramalho, Tiago; Agapiou, John; Badia, Adrià Puigdomènech; Hermann, Karl Moritz; Zwols, Yori; Ostrovski, Georg; Cain, Adam; King, Helen; Summerfield, Christopher; Blunsom, Phil; Kavukcuoglu, Koray; Hassabis, Demis

    2016-10-27

    Artificial neural networks are remarkably adept at sensory processing, sequence learning and reinforcement learning, but are limited in their ability to represent variables and data structures and to store data over long timescales, owing to the lack of an external memory. Here we introduce a machine learning model called a differentiable neural computer (DNC), which consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer. Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data. When trained with supervised learning, we demonstrate that a DNC can successfully answer synthetic questions designed to emulate reasoning and inference problems in natural language. We show that it can learn tasks such as finding the shortest path between specified points and inferring the missing links in randomly generated graphs, and then generalize these tasks to specific graphs such as transport networks and family trees. When trained with reinforcement learning, a DNC can complete a moving blocks puzzle in which changing goals are specified by sequences of symbols. Taken together, our results demonstrate that DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read-write memory.

  3. Topological field theory of dynamical systems

    SciTech Connect

    Ovchinnikov, Igor V.

    2012-09-15

    Here, it is shown that the path-integral representation of any stochastic or deterministic continuous-time dynamical model is a cohomological or Witten-type topological field theory, i.e., a model with global topological supersymmetry (Q-symmetry). As many other supersymmetries, Q-symmetry must be perturbatively stable due to what is generically known as non-renormalization theorems. As a result, all (equilibrium) dynamical models are divided into three major categories: Markovian models with unbroken Q-symmetry, chaotic models with Q-symmetry spontaneously broken on the mean-field level by, e.g., fractal invariant sets (e.g., strange attractors), and intermittent or self-organized critical (SOC) models with Q-symmetry dynamically broken by the condensation of instanton-antiinstanton configurations (earthquakes, avalanches, etc.) SOC is a full-dimensional phase separating chaos and Markovian dynamics. In the deterministic limit, however, antiinstantons disappear and SOC collapses into the 'edge of chaos.' Goldstone theorem stands behind spatio-temporal self-similarity of Q-broken phases known under such names as algebraic statistics of avalanches, 1/f noise, sensitivity to initial conditions, etc. Other fundamental differences of Q-broken phases is that they can be effectively viewed as quantum dynamics and that they must also have time-reversal symmetry spontaneously broken. Q-symmetry breaking in non-equilibrium situations (quenches, Barkhausen effect, etc.) is also briefly discussed.

  4. Topological field theory of dynamical systems.

    PubMed

    Ovchinnikov, Igor V

    2012-09-01

    Here, it is shown that the path-integral representation of any stochastic or deterministic continuous-time dynamical model is a cohomological or Witten-type topological field theory, i.e., a model with global topological supersymmetry (Q-symmetry). As many other supersymmetries, Q-symmetry must be perturbatively stable due to what is generically known as non-renormalization theorems. As a result, all (equilibrium) dynamical models are divided into three major categories: Markovian models with unbroken Q-symmetry, chaotic models with Q-symmetry spontaneously broken on the mean-field level by, e.g., fractal invariant sets (e.g., strange attractors), and intermittent or self-organized critical (SOC) models with Q-symmetry dynamically broken by the condensation of instanton-antiinstanton configurations (earthquakes, avalanches, etc.) SOC is a full-dimensional phase separating chaos and Markovian dynamics. In the deterministic limit, however, antiinstantons disappear and SOC collapses into the "edge of chaos." Goldstone theorem stands behind spatio-temporal self-similarity of Q-broken phases known under such names as algebraic statistics of avalanches, 1/f noise, sensitivity to initial conditions, etc. Other fundamental differences of Q-broken phases is that they can be effectively viewed as quantum dynamics and that they must also have time-reversal symmetry spontaneously broken. Q-symmetry breaking in non-equilibrium situations (quenches, Barkhausen effect, etc.) is also briefly discussed.

  5. Topological field theory of dynamical systems

    NASA Astrophysics Data System (ADS)

    Ovchinnikov, Igor V.

    2012-09-01

    Here, it is shown that the path-integral representation of any stochastic or deterministic continuous-time dynamical model is a cohomological or Witten-type topological field theory, i.e., a model with global topological supersymmetry (Q-symmetry). As many other supersymmetries, Q-symmetry must be perturbatively stable due to what is generically known as non-renormalization theorems. As a result, all (equilibrium) dynamical models are divided into three major categories: Markovian models with unbroken Q-symmetry, chaotic models with Q-symmetry spontaneously broken on the mean-field level by, e.g., fractal invariant sets (e.g., strange attractors), and intermittent or self-organized critical (SOC) models with Q-symmetry dynamically broken by the condensation of instanton-antiinstanton configurations (earthquakes, avalanches, etc.) SOC is a full-dimensional phase separating chaos and Markovian dynamics. In the deterministic limit, however, antiinstantons disappear and SOC collapses into the "edge of chaos." Goldstone theorem stands behind spatio-temporal self-similarity of Q-broken phases known under such names as algebraic statistics of avalanches, 1/f noise, sensitivity to initial conditions, etc. Other fundamental differences of Q-broken phases is that they can be effectively viewed as quantum dynamics and that they must also have time-reversal symmetry spontaneously broken. Q-symmetry breaking in non-equilibrium situations (quenches, Barkhausen effect, etc.) is also briefly discussed.

  6. Gradient calculations for dynamic recurrent neural networks: a survey.

    PubMed

    Pearlmutter, B A

    1995-01-01

    Surveys learning algorithms for recurrent neural networks with hidden units and puts the various techniques into a common framework. The author discusses fixed point learning algorithms, namely recurrent backpropagation and deterministic Boltzmann machines, and nonfixed point algorithms, namely backpropagation through time, Elman's history cutoff, and Jordan's output feedback architecture. Forward propagation, an on-line technique that uses adjoint equations, and variations thereof, are also discussed. In many cases, the unified presentation leads to generalizations of various sorts. The author discusses advantages and disadvantages of temporally continuous neural networks in contrast to clocked ones and continues with some "tricks of the trade" for training, using, and simulating continuous time and recurrent neural networks. The author presents some simulations and, at the end, addresses issues of computational complexity and learning speed.

  7. Developing Dynamic Field Theory Architectures for Embodied Cognitive Systems with cedar

    PubMed Central

    Lomp, Oliver; Richter, Mathis; Zibner, Stephan K. U.; Schöner, Gregor

    2016-01-01

    Embodied artificial cognitive systems, such as autonomous robots or intelligent observers, connect cognitive processes to sensory and effector systems in real time. Prime candidates for such embodied intelligence are neurally inspired architectures. While components such as forward neural networks are well established, designing pervasively autonomous neural architectures remains a challenge. This includes the problem of tuning the parameters of such architectures so that they deliver specified functionality under variable environmental conditions and retain these functions as the architectures are expanded. The scaling and autonomy problems are solved, in part, by dynamic field theory (DFT), a theoretical framework for the neural grounding of sensorimotor and cognitive processes. In this paper, we address how to efficiently build DFT architectures that control embodied agents and how to tune their parameters so that the desired cognitive functions emerge while such agents are situated in real environments. In DFT architectures, dynamic neural fields or nodes are assigned dynamic regimes, that is, attractor states and their instabilities, from which cognitive function emerges. Tuning thus amounts to determining values of the dynamic parameters for which the components of a DFT architecture are in the specified dynamic regime under the appropriate environmental conditions. The process of tuning is facilitated by the software framework cedar, which provides a graphical interface to build and execute DFT architectures. It makes it possible to change dynamic parameters online and to visualize the activation states of any component while the agent is receiving sensory inputs in real time. Using a simple example, we take the reader through the workflow of conceiving of DFT architectures, implementing them on embodied agents, tuning their parameters, and assessing performance while the system is coupled to real sensory inputs. PMID:27853431

  8. Developing Dynamic Field Theory Architectures for Embodied Cognitive Systems with cedar.

    PubMed

    Lomp, Oliver; Richter, Mathis; Zibner, Stephan K U; Schöner, Gregor

    2016-01-01

    Embodied artificial cognitive systems, such as autonomous robots or intelligent observers, connect cognitive processes to sensory and effector systems in real time. Prime candidates for such embodied intelligence are neurally inspired architectures. While components such as forward neural networks are well established, designing pervasively autonomous neural architectures remains a challenge. This includes the problem of tuning the parameters of such architectures so that they deliver specified functionality under variable environmental conditions and retain these functions as the architectures are expanded. The scaling and autonomy problems are solved, in part, by dynamic field theory (DFT), a theoretical framework for the neural grounding of sensorimotor and cognitive processes. In this paper, we address how to efficiently build DFT architectures that control embodied agents and how to tune their parameters so that the desired cognitive functions emerge while such agents are situated in real environments. In DFT architectures, dynamic neural fields or nodes are assigned dynamic regimes, that is, attractor states and their instabilities, from which cognitive function emerges. Tuning thus amounts to determining values of the dynamic parameters for which the components of a DFT architecture are in the specified dynamic regime under the appropriate environmental conditions. The process of tuning is facilitated by the software framework cedar, which provides a graphical interface to build and execute DFT architectures. It makes it possible to change dynamic parameters online and to visualize the activation states of any component while the agent is receiving sensory inputs in real time. Using a simple example, we take the reader through the workflow of conceiving of DFT architectures, implementing them on embodied agents, tuning their parameters, and assessing performance while the system is coupled to real sensory inputs.

  9. Neural field theory of nonlinear wave-wave and wave-neuron processes

    NASA Astrophysics Data System (ADS)

    Robinson, P. A.; Roy, N.

    2015-06-01

    Systematic expansion of neural field theory equations in terms of nonlinear response functions is carried out to enable a wide variety of nonlinear wave-wave and wave-neuron processes to be treated systematically in systems involving multiple neural populations. The results are illustrated by analyzing second-harmonic generation, and they can also be applied to wave-wave coalescence, multiharmonic generation, facilitation, depression, refractoriness, and other nonlinear processes.

  10. Phase field approximation of dynamic brittle fracture

    NASA Astrophysics Data System (ADS)

    Schlüter, Alexander; Willenbücher, Adrian; Kuhn, Charlotte; Müller, Ralf

    2014-11-01

    Numerical methods that are able to predict the failure of technical structures due to fracture are important in many engineering applications. One of these approaches, the so-called phase field method, represents cracks by means of an additional continuous field variable. This strategy avoids some of the main drawbacks of a sharp interface description of cracks. For example, it is not necessary to track or model crack faces explicitly, which allows a simple algorithmic treatment. The phase field model for brittle fracture presented in Kuhn and Müller (Eng Fract Mech 77(18):3625-3634, 2010) assumes quasi-static loading conditions. However dynamic effects have a great impact on the crack growth in many practical applications. Therefore this investigation presents an extension of the quasi-static phase field model for fracture from Kuhn and Müller (Eng Fract Mech 77(18):3625-3634, 2010) to the dynamic case. First of all Hamilton's principle is applied to derive a coupled set of Euler-Lagrange equations that govern the mechanical behaviour of the body as well as the crack growth. Subsequently the model is implemented in a finite element scheme which allows to solve several test problems numerically. The numerical examples illustrate the capabilities of the developed approach to dynamic fracture in brittle materials.
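
    For orientation, a schematic regularized energy of the kind used in phase-field fracture models is given below (s = 1 intact, s = 0 fully broken), together with the Lagrangian used for the dynamic extension. This is a hedged sketch; the exact functional and normalization in the cited formulation may differ.

```latex
% Schematic phase-field fracture energy and Lagrangian (illustrative notation):
\[
E(\mathbf{u}, s) = \int_\Omega \Big[ \left(s^2 + \eta\right)
    \psi\!\big(\boldsymbol{\varepsilon}(\mathbf{u})\big)
    + \mathcal{G}_c \Big( \tfrac{(1-s)^2}{4\epsilon}
    + \epsilon\, |\nabla s|^2 \Big) \Big] \,\mathrm{d}V ,
\qquad
L = \int_\Omega \tfrac{\rho}{2}\, |\dot{\mathbf{u}}|^2 \,\mathrm{d}V - E(\mathbf{u}, s).
\]
```

    Applying Hamilton's principle to the action built from L then yields a momentum balance for the displacement field coupled to an evolution equation for the crack field s, which is the structure the abstract describes.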

  11. Mean-field behavior of cluster dynamics

    NASA Astrophysics Data System (ADS)

    Persky, N.; Ben-Av, R.; Kanter, I.; Domany, E.

    1996-09-01

    The dynamic behavior of cluster algorithms is analyzed in the classical mean-field limit. Rigorous analytical results below Tc establish that the dynamic exponent has the value zSW=1 for the Swendsen-Wang algorithm and zW=0 for the Wolff algorithm. An efficient Monte Carlo implementation is introduced, adapted for using these algorithms for fully connected graphs. Extensive simulations both above and below Tc demonstrate scaling and evaluate the finite-size scaling function by means of a rather impressive collapse of the data.
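
    As a concrete reference point, a direct (unoptimized) Wolff cluster update for the fully connected Ising model with coupling J/N is sketched below; the paper's efficient implementation for complete graphs is more sophisticated, and the inverse temperature used here is an arbitrary illustrative value.

```python
import numpy as np

# Direct Wolff cluster update for the fully connected Ising model with
# Hamiltonian H = -(J/N) * sum_{i<j} s_i s_j. Illustrative sketch only.
def wolff_step(spins, beta, J=1.0, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    n = spins.size
    p_add = 1.0 - np.exp(-2.0 * beta * J / n)    # bond-activation probability
    seed = rng.integers(n)
    in_cluster = np.zeros(n, dtype=bool)
    in_cluster[seed] = True
    stack = [seed]
    while stack:
        i = stack.pop()
        # on the complete graph, every aligned spin outside the cluster is a candidate
        candidates = np.where((spins == spins[i]) & ~in_cluster)[0]
        joined = candidates[rng.random(candidates.size) < p_add]
        in_cluster[joined] = True
        stack.extend(joined.tolist())
    spins[in_cluster] *= -1                       # flip the whole cluster
    return in_cluster.sum()

spins = np.ones(1024, dtype=int)
for _ in range(200):
    wolff_step(spins, beta=1.2)    # beta*J = 1.2 > 1, i.e. below T_c in mean field
print("magnetization per spin:", spins.mean())
```

    On the complete graph the bond-activation probability 1 - exp(-2*beta*J/N) is tiny, but every site has N - 1 candidate bonds, so clusters still become macroscopic below the critical point.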

  12. Dynamic Magnetic Field Applications for Materials Processing

    NASA Technical Reports Server (NTRS)

    Mazuruk, K.; Grugel, Richard N.; Motakef, S.; Whitaker, Ann F. (Technical Monitor)

    2001-01-01

    Magnetic fields, variable in time and space, can be used to control convection in electrically conducting melts. Flow induced by these fields has been found to be beneficial for crystal growth applications. It allows increased crystal growth rates, and improves homogeneity and quality. Particularly beneficial is the natural convection damping capability of alternating magnetic fields. One well-known example is the rotating magnetic field (RMF) configuration. RMF induces liquid motion consisting of a swirling basic flow and a meridional secondary flow. In addition to crystal growth applications, RMF can also be used for mixing non-homogeneous melts in continuous metal castings. These applied aspects have stimulated increasing research on RMF-induced fluid dynamics. A novel type of magnetic field configuration consisting of an axisymmetric magnetostatic wave, designated the traveling magnetic field (TMF), has been recently proposed. It induces a basic flow in the form of a single vortex. TMF may find use in crystal growth techniques such as the vertical Bridgman (VB), float zone (FZ), and the traveling heater method. In this review, both methods, RMF and TMF, are presented. Our recent theoretical and experimental results include such topics as localized TMF, natural convection damping using TMF in a vertical Bridgman configuration, the traveling heater method, and the Lorentz force induced by TMF as a function of frequency. Experimentally, alloy mixing results, with and without applied TMF, will be presented. Finally, advantages of the traveling magnetic field, in comparison to the more mature rotating magnetic field method, will be discussed.

  13. Nonequilibrium dynamics of emergent field configurations

    NASA Astrophysics Data System (ADS)

    Howell, Rafael Cassidy

    The processes by which nonlinear physical systems approach thermal equilibrium are of great importance in many areas of science. Central to this is the mechanism by which energy is transferred between the many degrees of freedom comprising these systems. With this in mind, in this research the nonequilibrium dynamics of nonperturbative fluctuations within Ginzburg-Landau models are investigated. In particular, two questions are addressed. In both cases the system is initially prepared in one of two minima of a double-well potential. First, within the context of a (2 + 1) dimensional field theory, we investigate whether emergent spatio-temporal coherent structures play a dynamical role in the equilibration of the field. We find that the answer is sensitive to the initial temperature of the system. At low initial temperatures, the dynamics are well approximated with a time-dependent mean-field theory. For higher temperatures, the strong nonlinear coupling between the modes in the field does give rise to the synchronized emergence of coherent spatio-temporal configurations, identified with oscillons. These are long-lived coherent field configurations characterized by their persistent oscillatory behavior at their core. This initial global emergence is seen to be a consequence of resonant behavior in the long wavelength modes in the system. A second question concerns the emergence of disorder in a highly viscous system modeled by a (3 + 1) dimensional field theory. An integro-differential Boltzmann equation is derived to model the thermal nucleation of precursors of one phase within the homogeneous background. The fraction of the volume populated by these precursors is computed as a function of temperature. This model is capable of describing the onset of percolation, characterizing the approach to criticality (i.e. disorder). It also provides a nonperturbative correction to the critical temperature based on the nonequilibrium dynamics of the system.

  14. Dynamic social power modulates neural basis of math calculation.

    PubMed

    Harada, Tokiko; Bridge, Donna J; Chiao, Joan Y

    2012-01-01

    Both situational (e.g., perceived power) and sustained social factors (e.g., cultural stereotypes) are known to affect how people academically perform, particularly in the domain of mathematics. The ability to compute even simple mathematics, such as addition, relies on distinct neural circuitry within the inferior parietal and inferior frontal lobes, brain regions where magnitude representation and addition are performed. Despite prior behavioral evidence of social influence on academic performance, little is known about whether or not temporarily heightening a person's sense of power may influence the neural bases of math calculation. Here we primed female participants with either high or low power (LP) and then measured neural response while they performed exact and approximate math problems. We found that priming power affected math performance; specifically, females primed with high power (HP) performed better on approximate math calculation compared to females primed with LP. Furthermore, neural response within the left inferior frontal gyrus (IFG), a region previously associated with cognitive interference, was reduced for females in the HP compared to LP group. Taken together, these results indicate that even temporarily heightening a person's sense of social power can increase their math performance, possibly by reducing cognitive interference during math performance.

  15. Dynamic social power modulates neural basis of math calculation

    PubMed Central

    Harada, Tokiko; Bridge, Donna J.; Chiao, Joan Y.

    2013-01-01

    Both situational (e.g., perceived power) and sustained social factors (e.g., cultural stereotypes) are known to affect how people academically perform, particularly in the domain of mathematics. The ability to compute even simple mathematics, such as addition, relies on distinct neural circuitry within the inferior parietal and inferior frontal lobes, brain regions where magnitude representation and addition are performed. Despite prior behavioral evidence of social influence on academic performance, little is known about whether or not temporarily heightening a person's sense of power may influence the neural bases of math calculation. Here we primed female participants with either high or low power (LP) and then measured neural response while they performed exact and approximate math problems. We found that priming power affected math performance; specifically, females primed with high power (HP) performed better on approximate math calculation compared to females primed with LP. Furthermore, neural response within the left inferior frontal gyrus (IFG), a region previously associated with cognitive interference, was reduced for females in the HP compared to LP group. Taken together, these results indicate that even temporarily heightening a person's sense of social power can increase their math performance, possibly by reducing cognitive interference during math performance. PMID:23390415

  16. The earth's gravity field and ocean dynamics

    NASA Technical Reports Server (NTRS)

    Mather, R. S.

    1978-01-01

    An analysis of the signal-to-noise ratio of the best gravity field available shows that a basis exists for the recovery of the dominant parameters of the quasi-stationary sea surface topography. Results obtained from the analysis of GEOS-3 show that it is feasible to recover the quasi-stationary dynamic sea surface topography as a function of wavelength. The gravity field models required for synoptic ocean circulation modeling are less exacting in that constituents affecting radial components of orbital position need not be known through shorter wavelengths.

  17. Modeling emotional dynamics : currency versus field.

    SciTech Connect

    Sallach, D .L.; Decision and Information Sciences; Univ. of Chicago

    2008-08-01

    Randall Collins has introduced a simplified model of emotional dynamics in which emotional energy, heightened and focused by interaction rituals, serves as a common denominator for social exchange: a generic form of currency, except that it is active in a far broader range of social transactions. While the scope of this theory is attractive, the specifics of the model remain unconvincing. After a critical assessment of the currency theory of emotion, a field model of emotion is introduced that adds expressiveness by locating emotional valence within its cognitive context, thereby creating an integrated orientation field. The result is a model which claims less in the way of motivational specificity, but is more satisfactory in modeling the dynamic interaction between cognitive and emotional orientations at both individual and social levels.

  18. Dynamically orthogonal field equations for stochastic flows and particle dynamics

    DTIC Science & Technology

    2011-02-01

    where uncertainty ‘lives’ as well as a system of Stochastic Differential Equations that defines how the uncertainty evolves in the time varying stochastic ... stochastic dynamical component that are both time and space dependent, we derive a system of field equations consisting of a Partial Differential Equation...a system of Stochastic Differential Equations that defines how the stochasticity evolves in the time varying stochastic subspace. These new
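
    The fragmentary abstract above refers to the dynamically orthogonal (DO) decomposition of a stochastic field; a commonly cited schematic form of that representation is reproduced below as a hedged reconstruction (the report's exact notation may differ).

```latex
% Schematic dynamically orthogonal (DO) representation of a stochastic field:
\[
u(\mathbf{x}, t; \omega) \;=\; \bar{u}(\mathbf{x}, t)
  \;+\; \sum_{i=1}^{s} Y_i(t; \omega)\, u_i(\mathbf{x}, t),
\qquad
\Big\langle \tfrac{\partial u_i}{\partial t},\, u_j \Big\rangle = 0 \quad \forall\, i, j .
\]
```

    Substituting this ansatz into the governing equation and imposing the dynamically orthogonal condition yields a PDE for the mean field, PDEs for the modes spanning the time-varying stochastic subspace, and SDEs for the coefficients Y_i, which is the structure the excerpt alludes to.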

  19. Parametric models to relate spike train and LFP dynamics with neural information processing

    PubMed Central

    Banerjee, Arpan; Dean, Heather L.; Pesaran, Bijan

    2012-01-01

    Spike trains and local field potentials (LFPs) resulting from extracellular current flows provide a substrate for neural information processing. Understanding the neural code from simultaneous spike-field recordings and subsequent decoding of information processing events will have widespread applications. One way to demonstrate an understanding of the neural code, with particular advantages for the development of applications, is to formulate a parametric statistical model of neural activity and its covariates. Here, we propose a set of parametric spike-field models (unified models) that can be used with existing decoding algorithms to reveal the timing of task or stimulus specific processing. Our proposed unified modeling framework captures the effects of two important features of information processing: time-varying stimulus-driven inputs and ongoing background activity that occurs even in the absence of environmental inputs. We have applied this framework for decoding neural latencies in simulated and experimentally recorded spike-field sessions obtained from the lateral intraparietal area (LIP) of awake, behaving monkeys performing cued look-and-reach movements to spatial targets. Using both simulated and experimental data, we find that estimates of trial-by-trial parameters are not significantly affected by the presence of ongoing background activity. However, including background activity in the unified model improves goodness of fit for predicting individual spiking events. Uncovering the relationship between the model parameters and the timing of movements offers new ways to test hypotheses about the relationship between neural activity and behavior. We obtained significant spike-field onset time correlations from single trials using a previously published data set where significantly strong correlation was only obtained through trial averaging. We also found that unified models extracted a stronger relationship between neural response latency and trial

  20. Field measurements and neural network modeling of water quality parameters

    NASA Astrophysics Data System (ADS)

    Qishlaqi, Afishin; Kordian, Sediqeh; Parsaie, Abbas

    2017-01-01

    Rivers are one of the main resources supplying water for agricultural, industrial, and urban use; therefore, continuous monitoring of their quality is necessary. Recently, artificial neural networks have been proposed as a powerful tool for modeling and predicting water quality parameters in natural streams. In this paper, a multilayer perceptron neural network model (MLP) was developed to predict water quality parameters of the Tireh River, located in the southwest of Iran. TDS, EC, pH, HCO3, Cl, Na, SO4, Mg, and Ca, as the main water quality parameters, were measured and predicted using the MLP model. The architecture of the proposed MLP model included two hidden layers, with eight and six neurons in the first and second hidden layers, respectively. The tangent sigmoid and pure-line functions were selected as transfer functions for the neurons in the hidden and output layers, respectively. The results showed that the MLP model performs well in predicting the water quality parameters of the Tireh River. To assess the performance of the MLP model in water quality prediction along the studied area, another 14 stations were considered by the authors in addition to the existing sampling stations. Evaluating the performance of the developed MLP model in mapping the relation between the water quality parameters along the studied area showed that it has suitable accuracy; the minimum correlation between the results of the MLP model and the measured data was 0.85.
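
    A hedged sketch of the network architecture described above (two hidden layers with eight and six tanh neurons and a linear output) is given below using scikit-learn; the data are random placeholders, not the Tireh River measurements.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Two hidden layers with 8 and 6 neurons, tanh ("tangent sigmoid") activations,
# and a linear output layer. Data below are random placeholders.
rng = np.random.default_rng(0)
X_train = rng.random((100, 4))     # hypothetical predictor variables
y_train = rng.random(100)          # hypothetical target water quality parameter

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8, 6), activation="tanh",
                 solver="lbfgs", max_iter=5000, random_state=0),
)
model.fit(X_train, y_train)
print("training R^2:", model.score(X_train, y_train))
```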

  1. Quantum perceptron over a field and neural network architecture selection in a quantum computer.

    PubMed

    da Silva, Adenilton José; Ludermir, Teresa Bernarda; de Oliveira, Wilson Rosa

    2016-04-01

    In this work, we propose a quantum neural network named quantum perceptron over a field (QPF). Quantum computers are not yet a reality and the models and algorithms proposed in this work cannot be simulated in actual (or classical) computers. QPF is a direct generalization of a classical perceptron and solves some drawbacks found in previous models of quantum perceptrons. We also present a learning algorithm named Superposition based Architecture Learning algorithm (SAL) that optimizes the neural network weights and architectures. SAL searches for the best architecture in a finite set of neural network architectures with linear time over the number of patterns in the training set. SAL is the first learning algorithm to determine neural network architectures in polynomial time. This speedup is obtained by the use of quantum parallelism and a non-linear quantum operator.

  2. Mixing Dynamics Induced by Traveling Magnetic Fields

    NASA Technical Reports Server (NTRS)

    Grugel, Richard N.; Mazuruk, Konstantin; Rose, M. Franklin (Technical Monitor)

    2001-01-01

    Microstructural and compositional homogeneity in metals and alloys can only be achieved if the initial melt is homogeneous prior to the onset of solidification processing. Naturally induced convection may initially facilitate this requirement but upon the onset of solidification significant compositional variations generally arise leading to undesired segregation. Application of alternating magnetic fields to promote a uniform bulk liquid concentration during solidification processing has been suggested. To investigate such possibilities an initial study of using traveling magnetic fields (TMF) to promote melt homogenization is reported in this work. Theoretically, the effect of TMF-induced convection on mixing phenomena is studied in the laminar regime of flow. Experimentally, with and without applied fields, both 1) mixing dynamics by optically monitoring the spreading of an initially localized dye in transparent fluids and, 2) compositional variations in metal alloys have been investigated.

  3. Point process modeling and estimation: Advances in the analysis of dynamic neural spiking data

    NASA Astrophysics Data System (ADS)

    Deng, Xinyi

    A common interest of scientists in many fields is to understand the relationship between the dynamics of a physical system and the occurrences of discrete events within such physical system. Seismologists study the connection between mechanical vibrations of the Earth and the occurrences of earthquakes so that future earthquakes can be better predicted. Astrophysicists study the association between the oscillating energy of celestial regions and the emission of photons to learn the Universe's various objects and their interactions. Neuroscientists study the link between behavior and the millisecond-timescale spike patterns of neurons to understand higher brain functions. Such relationships can often be formulated within the framework of state-space models with point process observations. The basic idea is that the dynamics of the physical systems are driven by the dynamics of some stochastic state variables and the discrete events we observe in an interval are noisy observations with distributions determined by the state variables. This thesis proposes several new methodological developments that advance the framework of state-space models with point process observations at the intersection of statistics and neuroscience. In particular, we develop new methods 1) to characterize the rhythmic spiking activity using history-dependent structure, 2) to model population spike activity using marked point process models, 3) to allow for real-time decision making, and 4) to take into account the need for dimensionality reduction for high-dimensional state and observation processes. We applied these methods to a novel problem of tracking rhythmic dynamics in the spiking of neurons in the subthalamic nucleus of Parkinson's patients with the goal of optimizing placement of deep brain stimulation electrodes. We developed a decoding algorithm that can make decision in real-time (for example, to stimulate the neurons or not) based on various sources of information present in

  4. Optical sensed image fusion with dynamic neural networks

    NASA Astrophysics Data System (ADS)

    Shkvarko, Yuri V.; Ibarra-Manzano, Oscar G.; Jaime-Rivas, Rene; Andrade-Lucio, Jose A.; Alvarado-Mendez, Edgar; Rojas-Laguna, R.; Torres-Cisneros, Miguel; Alvarez-Jaime, J. A.

    2001-08-01

    A neural network-based technique for improving the quality of image fusion is proposed, as required for remote sensing (RS) imagery. We propose to exploit information about the point spread functions of the corresponding RS imaging systems, combining it with prior realistic knowledge about the properties of the scene contained in the maximum entropy (ME) a priori image model. The problem is solved by applying the aggregate regularization method to the fusion tasks, aimed at achieving the best resolution and noise suppression performance of the overall resulting image. The proposed fusion method assumes the ability to control the design parameters that influence the overall restoration performance. Computationally, the fusion method is implemented using a maximum entropy Hopfield-type neural network with adjustable parameters. Simulations illustrate the improved performance of the developed MENN-based image fusion method.

  5. Establishing a Dynamic Self-Adaptation Learning Algorithm of the BP Neural Network and Its Applications

    NASA Astrophysics Data System (ADS)

    Li, Xiaofeng; Xiang, Suying; Zhu, Pengfei; Wu, Min

    2015-12-01

    In order to avoid the inherent deficiencies of the traditional BP neural network, such as slow convergence, a tendency to become trapped in local minima, poor generalization ability and difficulty in determining the network structure, a dynamic self-adaptive learning algorithm for the BP neural network is put forward to improve its function. The new algorithm combines the merits of principal component analysis, particle swarm optimization, correlation analysis and a self-adaptive model, and hence can effectively solve the problems of selecting the structural parameters, initial connection weights and thresholds, and learning rates of the BP neural network. The new algorithm not only reduces human intervention, optimizes the topological structure of BP neural networks and improves the network generalization ability, but also accelerates the convergence speed of the network, avoids becoming trapped in local minima, and enhances network adaptation and prediction ability. The dynamic self-adaptive learning algorithm of the BP neural network is used to forecast the total retail sales of consumer goods of Sichuan Province, China. Empirical results indicate that the new algorithm is superior to the traditional BP network algorithm in prediction accuracy and time consumption, which shows the feasibility and effectiveness of the new algorithm.

  6. Active Control of Complex Systems via Dynamic (Recurrent) Neural Networks

    DTIC Science & Technology

    1992-05-30

    course, to on-going changes brought about by learning processes. As research in neurodynamics proceeded, the concept of reverberatory information flows... Microstructure of Cognition. Vol. 1: Foundations, M.I.T. Press, Cambridge, Massachusetts, pp. 354-361, 1986. Schwarz, G., "Estimating the dimension of a... Continually Running Fully Recurrent Neural Networks, ICS Report 8805, Institute of Cognitive Science, University of California at San Diego, 1988.

  7. Spatiotemporal multi-resolution approximation of the Amari type neural field model.

    PubMed

    Aram, P; Freestone, D R; Dewar, M; Scerri, K; Jirsa, V; Grayden, D B; Kadirkamanathan, V

    2013-02-01

    Neural fields are spatially continuous state variables described by integro-differential equations, which are well suited to describe the spatiotemporal evolution of cortical activations on multiple scales. Here we develop a multi-resolution approximation (MRA) framework for the integro-difference equation (IDE) neural field model based on semi-orthogonal cardinal B-spline wavelets. In this way, a flexible framework is created, whereby both macroscopic and microscopic behavior of the system can be represented simultaneously. State and parameter estimation is performed using the expectation maximization (EM) algorithm. A synthetic example is provided to demonstrate the framework.
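
    For context, a commonly used discrete-time integro-difference equation (IDE) neural field and its basis-function expansion are sketched below; the symbols are generic, and the paper's exact formulation (including the wavelet construction) may differ.

```latex
% Schematic IDE neural field and basis-function (here wavelet) decomposition:
\[
v_{t+1}(\mathbf{x}) \;=\; \xi\, v_t(\mathbf{x})
  \;+\; T_s \int_\Omega w(\mathbf{x} - \mathbf{x}')\, f\!\big(v_t(\mathbf{x}')\big)\,\mathrm{d}\mathbf{x}'
  \;+\; e_t(\mathbf{x}),
\qquad
v_t(\mathbf{x}) \;\approx\; \sum_{i=1}^{n} \phi_i(\mathbf{x})\, x_{t,i} .
\]
```

    Projecting the IDE onto the basis functions (here semi-orthogonal cardinal B-spline wavelets at several resolutions) gives a finite-dimensional state-space model whose states and connectivity-kernel parameters can then be estimated with the EM algorithm, as described in the abstract.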

  8. A neural network dynamics that resembles protein evolution

    NASA Astrophysics Data System (ADS)

    Ferrán, Edgardo A.; Ferrara, Pascual

    1992-06-01

    We use neural networks to classify proteins according to their sequence similarities. A network composed of 7 × 7 neurons was trained with the Kohonen unsupervised learning algorithm using, as inputs, matrix patterns derived from the bipeptide composition of cytochrome c proteins belonging to 76 different species. As a result of the training, the network self-organized the activation of its neurons into topologically ordered maps, wherein phylogenetically related sequences were positioned close to each other. The evolution of the topological map during learning, in a representative computational experiment, roughly resembles the way in which one species evolves into several others. For instance, sequences corresponding to vertebrates, initially grouped together into one neuron, were placed in a contiguous zone of the final neural map, with sequences of fishes, amphibia, reptiles, birds and mammals associated with different neurons. Some apparently wrong classifications are due to the fact that some proteins have a greater degree of sequence identity than the one expected by phylogenetics. In the final neural map, each synaptic vector may be considered as the pattern corresponding to the ancestor of all the proteins that are attached to that neuron. Although it may also be tempting to link real time with learning epochs and to use this relationship to calibrate the molecular evolutionary clock, this is not correct because the evolutionary time schedule obtained with the neural network depends highly on the discrete way in which the winner neighborhood is decreased during learning.
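
    The following is a minimal sketch of the kind of Kohonen self-organizing map training described above (a 7 × 7 grid of neurons, winner-take-all updates with a shrinking neighbourhood); the input dimensionality and the learning schedules are assumptions, and random vectors stand in for the bipeptide-composition patterns.

```python
import numpy as np

# Minimal Kohonen self-organizing map on a 7 x 7 grid of neurons. Random vectors
# stand in for the (flattened) bipeptide-composition patterns of 76 sequences.
rng = np.random.default_rng(1)
X = rng.random((76, 400))                       # 76 hypothetical input patterns
grid = np.array([(i, j) for i in range(7) for j in range(7)], dtype=float)
W = rng.random((49, 400))                       # one synaptic vector per neuron

n_epochs = 200
for epoch in range(n_epochs):
    lr = 0.5 * (1.0 - epoch / n_epochs)                 # decaying learning rate
    radius = 3.0 * (1.0 - epoch / n_epochs) + 0.5       # shrinking neighbourhood
    for x in rng.permutation(X):
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))     # best-matching (winner) unit
        d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)      # squared grid distance to winner
        h = np.exp(-d2 / (2 * radius ** 2))             # neighbourhood function
        W += lr * h[:, None] * (x - W)                  # Kohonen update

# Each row of W is now the synaptic vector of one map neuron; inputs are assigned
# to the neuron whose synaptic vector lies closest.
```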

  9. Regional estimation of groundwater arsenic concentrations through systematical dynamic-neural modeling

    NASA Astrophysics Data System (ADS)

    Chang, Fi-John; Chen, Pin-An; Liu, Chen-Wuing; Liao, Vivian Hsiu-Chuan; Liao, Chung-Min

    2013-08-01

    Arsenic (As) is an odorless semi-metal that occurs naturally in rock and soil, and As contamination in groundwater resources has become a serious threat to human health. Thus, assessing the spatial and temporal variability of As concentration is highly desirable, particularly in heavily As-contaminated areas. However, various difficulties may be encountered in the regional estimation of As concentration such as cost-intensive field monitoring, scarcity of field data, identification of important factors affecting As, over-fitting or poor estimation accuracy. This study develops a novel systematical dynamic-neural modeling (SDM) for effectively estimating regional As-contaminated water quality by using easily-measured water quality variables. To tackle the difficulties commonly encountered in regional estimation, the SDM comprises a neural network and four statistical techniques: the Nonlinear Autoregressive with eXogenous input (NARX) network, Gamma test, cross-validation, Bayesian regularization method and indicator kriging (IK). For practical application, this study investigated a heavily As-contaminated area in Taiwan. The backpropagation neural network (BPNN) is adopted for comparison purposes. The results demonstrate that the NARX network (Root mean square error (RMSE): 95.11 μg l-1 for training; 106.13 μg l-1 for validation) outperforms the BPNN (RMSE: 121.54 μg l-1 for training; 143.37 μg l-1 for validation). The constructed SDM can provide reliable estimation (R2 > 0.89) of As concentration at ungauged sites based merely on three easily-measured water quality variables (Alk, Ca2+ and pH). In addition, risk maps under the threshold of the WHO drinking water standard (10 μg l-1) are derived by the IK to visually display the spatial and temporal variation of the As concentration in the whole study area at different time spans. The proposed SDM can be practically applied with satisfaction to the regional estimation in study areas of interest and the

  10. DYNAMICAL FIELD LINE CONNECTIVITY IN MAGNETIC TURBULENCE

    SciTech Connect

    Ruffolo, D.; Matthaeus, W. H.

    2015-06-20

    Point-to-point magnetic connectivity has a stochastic character whenever magnetic fluctuations cause a field line random walk, but this can also change due to dynamical activity. Comparing the instantaneous magnetic connectivity from the same point at two different times, we provide a nonperturbative analytic theory for the ensemble average perpendicular displacement of the magnetic field line, given the power spectrum of magnetic fluctuations. For simplicity, the theory is developed in the context of transverse turbulence, and is numerically evaluated for the noisy reduced MHD model. Our formalism accounts for the dynamical decorrelation of magnetic fluctuations due to wave propagation, local nonlinear distortion, random sweeping, and convection by a bulk wind flow relative to the observer. The diffusion coefficient D{sub X} of the time-differenced displacement becomes twice the usual field line diffusion coefficient D{sub x} at large time displacement t or large distance z along the mean field (corresponding to a pair of uncorrelated random walks), though for a low Kubo number (in the quasilinear regime) it can oscillate at intermediate values of t and z. At high Kubo number the dynamical decorrelation decays mainly from the nonlinear term and D{sub X} tends monotonically toward 2D{sub x} with increasing t and z. The formalism and results presented here are relevant to a variety of astrophysical processes, such as electron transport and heating patterns in coronal loops and the solar transition region, changing magnetic connection to particle sources near the Sun or at a planetary bow shock, and thickening of coronal hole boundaries.

  11. Robustness analysis of uncertain dynamical neural networks with multiple time delays.

    PubMed

    Senan, Sibel

    2015-10-01

    This paper studies the problem of global robust asymptotic stability of the equilibrium point for the class of dynamical neural networks with multiple time delays with respect to the class of slope-bounded activation functions and in the presence of the uncertainties of system parameters of the considered neural network model. By using an appropriate Lyapunov functional and exploiting the properties of the homeomorphism mapping theorem, we derive a new sufficient condition for the existence, uniqueness and global robust asymptotic stability of the equilibrium point for the class of neural networks with multiple time delays. The obtained stability condition basically relies on testing some relationships imposed on the interconnection matrices of the neural system, which can be easily verified by using some certain properties of matrices. An instructive numerical example is also given to illustrate the applicability of our result and show the advantages of this new condition over the previously reported corresponding results.
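
    For reference, the class of models analysed in such robustness studies is typically written as the delayed additive neural network below; this is a standard textbook form and may differ in detail from the paper.

```latex
% Standard multiple-delay neural network model with slope-bounded activations:
\[
\dot{x}_i(t) = -c_i\, x_i(t) + \sum_{j=1}^{n} a_{ij}\, f_j\!\big(x_j(t)\big)
  + \sum_{j=1}^{n} b_{ij}\, f_j\!\big(x_j(t - \tau_{ij})\big) + u_i,
\qquad
0 \le \frac{f_j(x) - f_j(y)}{x - y} \le \ell_j .
\]
```

    Robustness is then studied with the interconnection matrices A = (a_ij) and B = (b_ij) known only to lie within given intervals, and the sufficient condition mentioned in the abstract imposes testable relationships on these interval matrices.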

  12. Quantum dynamics in strong fluctuating fields

    NASA Astrophysics Data System (ADS)

    Goychuk, Igor; Hänggi, Peter

    A large number of multifaceted quantum transport processes in molecular systems and physical nanosystems, such as e.g. nonadiabatic electron transfer in proteins, can be treated in terms of quantum relaxation processes which couple to one or several fluctuating environments. A thermal equilibrium environment can conveniently be modelled by a thermal bath of harmonic oscillators. An archetype situation provides a two-state dissipative quantum dynamics, commonly known under the label of a spin-boson dynamics. An interesting and nontrivial physical situation emerges, however, when the quantum dynamics evolves far away from thermal equilibrium. This occurs, for example, when a charge transferring medium possesses nonequilibrium degrees of freedom, or when a strong time-dependent control field is applied externally. Accordingly, certain parameters of underlying quantum subsystem acquire stochastic character. This may occur, for example, for the tunnelling coupling between the donor and acceptor states of the transferring electron, or for the corresponding energy difference between electronic states which assume via the coupling to the fluctuating environment an explicit stochastic or deterministic time-dependence. Here, we review the general theoretical framework which is based on the method of projector operators, yielding the quantum master equations for systems that are exposed to strong external fields. This allows one to investigate on a common basis, the influence of nonequilibrium fluctuations and periodic electrical fields on those already mentioned dynamics and related quantum transport processes. Most importantly, such strong fluctuating fields induce a whole variety of nonlinear and nonequilibrium phenomena. A characteristic feature of such dynamics is the absence of thermal (quantum) detailed balance. [Contents: 1. Introduction (p. 526); 2. Quantum dynamics in stochastic fields (p. 531); 2.1. Stochastic Liouville equation (p. 531); 2.2. Non-Markovian vs. Markovian discrete ...]

  13. Dynamic Encoding of Acoustic Features in Neural Responses to Continuous Speech.

    PubMed

    Khalighinejad, Bahar; Cruzatto da Silva, Guilherme; Mesgarani, Nima

    2017-02-22

    Humans are unique in their ability to communicate using spoken language. However, it remains unclear how the speech signal is transformed and represented in the brain at different stages of the auditory pathway. In this study, we characterized electroencephalography responses to continuous speech by obtaining the time-locked responses to phoneme instances (phoneme-related potential). We showed that responses to different phoneme categories are organized by phonetic features. We found that each instance of a phoneme in continuous speech produces multiple distinguishable neural responses occurring as early as 50 ms and as late as 400 ms after the phoneme onset. Comparing the patterns of phoneme similarity in the neural responses and the acoustic signals confirms a repetitive appearance of acoustic distinctions of phonemes in the neural data. Analysis of the phonetic and speaker information in neural activations revealed that different time intervals jointly encode the acoustic similarity of both phonetic and speaker categories. These findings provide evidence for a dynamic neural transformation of low-level speech features as they propagate along the auditory pathway, and form an empirical framework to study the representational changes in learning, attention, and speech disorders.SIGNIFICANCE STATEMENT We characterized the properties of evoked neural responses to phoneme instances in continuous speech. We show that each instance of a phoneme in continuous speech produces several observable neural responses at different times occurring as early as 50 ms and as late as 400 ms after the phoneme onset. Each temporal event explicitly encodes the acoustic similarity of phonemes, and linguistic and nonlinguistic information are best represented at different time intervals. Finally, we show a joint encoding of phonetic and speaker information, where the neural representation of speakers is dependent on phoneme category. These findings provide compelling new evidence for

  14. Dynamic Encoding of Acoustic Features in Neural Responses to Continuous Speech

    PubMed Central

    Khalighinejad, Bahar; Cruzatto da Silva, Guilherme

    2017-01-01

    Humans are unique in their ability to communicate using spoken language. However, it remains unclear how the speech signal is transformed and represented in the brain at different stages of the auditory pathway. In this study, we characterized electroencephalography responses to continuous speech by obtaining the time-locked responses to phoneme instances (phoneme-related potential). We showed that responses to different phoneme categories are organized by phonetic features. We found that each instance of a phoneme in continuous speech produces multiple distinguishable neural responses occurring as early as 50 ms and as late as 400 ms after the phoneme onset. Comparing the patterns of phoneme similarity in the neural responses and the acoustic signals confirms a repetitive appearance of acoustic distinctions of phonemes in the neural data. Analysis of the phonetic and speaker information in neural activations revealed that different time intervals jointly encode the acoustic similarity of both phonetic and speaker categories. These findings provide evidence for a dynamic neural transformation of low-level speech features as they propagate along the auditory pathway, and form an empirical framework to study the representational changes in learning, attention, and speech disorders. SIGNIFICANCE STATEMENT We characterized the properties of evoked neural responses to phoneme instances in continuous speech. We show that each instance of a phoneme in continuous speech produces several observable neural responses at different times occurring as early as 50 ms and as late as 400 ms after the phoneme onset. Each temporal event explicitly encodes the acoustic similarity of phonemes, and linguistic and nonlinguistic information are best represented at different time intervals. Finally, we show a joint encoding of phonetic and speaker information, where the neural representation of speakers is dependent on phoneme category. These findings provide compelling new evidence for

  15. Direct imaging of neural currents using ultra-low field magnetic resonance techniques

    DOEpatents

    Volegov, Petr L.; Matlashov, Andrei N.; Mosher, John C.; Espy, Michelle A.; Kraus, Jr., Robert H.

    2009-08-11

    Using resonant interactions to directly and tomographically image neural activity in the human brain using magnetic resonance imaging (MRI) techniques at ultra-low field (ULF), the present inventors have established an approach that is sensitive to magnetic field distributions local to the spin population in cortex at the Larmor frequency of the measurement field. Because the Larmor frequency can be readily manipulated (through varying B.sub.m), one can also envision using ULF-DNI to image the frequency distribution of the local fields in cortex. Such information, taken together with simultaneous acquisition of MEG and ULF-NMR signals, enables non-invasive exploration of the correlation between local fields induced by neural activity in cortex and more `distant` measures of brain activity such as MEG and EEG.

  16. Parameter estimation of breast tumour using dynamic neural network from thermal pattern.

    PubMed

    Saniei, Elham; Setayeshi, Saeed; Akbari, Mohammad Esmaeil; Navid, Mitra

    2016-11-01

    This article presents a new approach for estimating the depth, size, and metabolic heat generation rate of a tumour. For this purpose, the surface temperature distribution of a breast thermal image and the dynamic neural network was used. The research consisted of two steps: forward and inverse. For the forward section, a finite element model was created. The Pennes bio-heat equation was solved to find surface and depth temperature distributions. Data from the analysis, then, were used to train the dynamic neural network model (DNN). Results from the DNN training/testing confirmed those of the finite element model. For the inverse section, the trained neural network was applied to estimate the depth temperature distribution (tumour position) from the surface temperature profile, extracted from the thermal image. Finally, tumour parameters were obtained from the depth temperature distribution. Experimental findings (20 patients) were promising in terms of the model's potential for retrieving tumour parameters.
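
    The forward model referred to above is built on the Pennes bio-heat equation; its standard form is recalled below for orientation (boundary conditions and the specific tumour source term used in the paper are not reproduced here).

```latex
% Pennes bio-heat equation (standard form):
\[
\rho c \,\frac{\partial T}{\partial t}
  \;=\; \nabla \cdot \big(k\, \nabla T\big)
  \;+\; \rho_b c_b \omega_b \,\big(T_a - T\big)
  \;+\; Q_m .
\]
```

    Here ρ, c and k are the tissue density, specific heat and thermal conductivity, ω_b is the blood perfusion rate, T_a the arterial blood temperature, and Q_m the metabolic heat generation, which is elevated inside the tumour; solving this forward problem yields the surface and depth temperature distributions used to train the dynamic neural network.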

  17. Classification data mining method based on dynamic RBF neural networks

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Xu, Min; Zhang, Zhang; Duan, Luping

    2009-04-01

    With the widespread application of databases and the rapid development of the Internet, the capacity to generate and collect data with information technology has improved greatly. Mining useful information or knowledge from large databases or data warehouses is therefore an urgent problem, and data mining technology has developed rapidly to meet this need. However, DM (data mining) often faces data that are noisy, disordered and nonlinear. Fortunately, ANNs (Artificial Neural Networks) are well suited to these problems because they offer good robustness, adaptability, parallel processing, distributed memory and high error tolerance. This paper gives a detailed discussion of the application of ANN methods in DM based on an analysis of data mining technologies, and lays particular stress on classification data mining based on RBF neural networks. Pattern classification is an important part of RBF neural network applications. In an on-line environment the training dataset is variable, so a batch learning algorithm (e.g. OLS), which generates plenty of unnecessary retraining, has lower efficiency. This paper derives an incremental learning algorithm (ILA) from the gradient descent algorithm to remove this bottleneck. ILA can adaptively adjust the parameters of RBF networks by minimizing the error cost, without any redundant retraining. Using the proposed method, an on-line classification system was constructed to solve the IRIS classification problem. Experimental results show that the algorithm has a fast convergence rate and excellent on-line classification performance.
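
    A hedged sketch of an incremental (sample-by-sample) RBF update of the kind contrasted above with batch OLS training is given below; it only adapts the output weights by gradient descent, whereas the paper's ILA also adjusts other RBF parameters, and all hyperparameters and data here are placeholders.

```python
import numpy as np

# Stochastic-gradient training of an RBF network's output weights, illustrating
# incremental learning as opposed to batch OLS. Sketch only.
class IncrementalRBF:
    def __init__(self, centers, sigma=1.0, lr=0.05):
        self.centers = np.asarray(centers, dtype=float)
        self.sigma = sigma
        self.lr = lr
        self.w = np.zeros(len(self.centers))

    def _phi(self, x):
        d2 = ((self.centers - x) ** 2).sum(axis=1)      # squared distances to centres
        return np.exp(-d2 / (2 * self.sigma ** 2))      # Gaussian basis responses

    def predict(self, x):
        return self._phi(x) @ self.w

    def partial_fit(self, x, y):
        """Update the output weights from a single new sample (no retraining)."""
        phi = self._phi(x)
        err = y - phi @ self.w
        self.w += self.lr * err * phi                   # gradient step on squared error
        return err

# Example: learn y = sin(x) from streaming samples.
rng = np.random.default_rng(0)
centers = np.linspace(0.0, 2.0 * np.pi, 12).reshape(-1, 1)
net = IncrementalRBF(centers, sigma=0.6)
for _ in range(2000):
    x = rng.uniform(0.0, 2.0 * np.pi, size=1)
    net.partial_fit(x, np.sin(x[0]))
print("prediction at pi/2:", net.predict(np.array([np.pi / 2])))
```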

  18. Intersubject variability and induced gamma in the visual cortex: DCM with empirical Bayes and neural fields

    PubMed Central

    Perry, Gavin; Litvak, Vladimir; Singh, Krish D.; Friston, Karl J.

    2016-01-01

    This article describes the first application of a generic (empirical) Bayesian analysis of between‐subject effects in the dynamic causal modeling (DCM) of electrophysiological (MEG) data. It shows that (i) non‐invasive (MEG) data can be used to characterize subject‐specific differences in cortical microcircuitry and (ii) presents a validation of DCM with neural fields that exploits intersubject variability in gamma oscillations. We find that intersubject variability in visually induced gamma responses reflects changes in the excitation‐inhibition balance in a canonical cortical circuit. Crucially, this variability can be explained by subject‐specific differences in intrinsic connections to and from inhibitory interneurons that form a pyramidal‐interneuron gamma network. Our approach uses Bayesian model reduction to evaluate the evidence for (large sets of) nested models—and optimize the corresponding connectivity estimates at the within and between‐subject level. We also consider Bayesian cross‐validation to obtain predictive estimates for gamma‐response phenotypes, using a leave‐one‐out procedure. Hum Brain Mapp 37:4597–4614, 2016. © The Authors Human Brain Mapping Published by Wiley Periodicals, Inc. PMID:27593199

  19. From Behavior to Neural Dynamics: An Integrated Theory of Attention.

    PubMed

    Buschman, Timothy J; Kastner, Sabine

    2015-10-07

    The brain has a limited capacity and therefore needs mechanisms to selectively enhance the information most relevant to one's current behavior. We refer to these mechanisms as "attention." Attention acts by increasing the strength of selected neural representations and preferentially routing them through the brain's large-scale network. This is a critical component of cognition and therefore has been a central topic in cognitive neuroscience. Here we review a diverse literature that has studied attention at the level of behavior, networks, circuits, and neurons. We then integrate these disparate results into a unified theory of attention.

  20. From behavior to neural dynamics: An integrated theory of attention

    PubMed Central

    Buschman, Timothy J.; Kastner, Sabine

    2015-01-01

    The brain has a limited capacity and therefore needs mechanisms to selectively enhance the information most relevant to one’s current behavior. We refer to these mechanisms as ‘attention’. Attention acts by increasing the strength of selected neural representations and preferentially routing them through the brain’s large-scale network. This is a critical component of cognition and therefore has been a central topic in cognitive neuroscience. Here we review a diverse literature that has studied attention at the level of behavior, networks, circuits and neurons. We then integrate these disparate results into a unified theory of attention. PMID:26447577

  1. A comparison between wavelet based static and dynamic neural network approaches for runoff prediction

    NASA Astrophysics Data System (ADS)

    Shoaib, Muhammad; Shamseldin, Asaad Y.; Melville, Bruce W.; Khan, Mudasser Muneer

    2016-04-01

    In order to predict runoff accurately from a rainfall event, multilayer perceptron neural network models are commonly used in hydrology. Furthermore, wavelet-coupled multilayer perceptron neural network (MLPNN) models have also been found superior to simple neural network models that are not coupled with wavelets. However, MLPNN models are static, memoryless networks and lack the ability to examine the temporal dimension of the data. Recurrent neural network models, on the other hand, have the ability to learn from the preceding conditions of the system and are hence considered dynamic models. This study for the first time explores the potential of wavelet-coupled time lagged recurrent neural network (TLRNN) models for runoff prediction using rainfall data. The Discrete Wavelet Transformation (DWT) is employed in this study to decompose the input rainfall data using six of the most commonly used wavelet functions. The performance of the simple and the wavelet-coupled static MLPNN models is compared with that of their counterpart dynamic TLRNN models. The study found that the dynamic wavelet-coupled TLRNN models can be considered an alternative to the static wavelet MLPNN models. The study also investigated the effect of memory depth on the performance of the static and dynamic neural network models; the memory depth refers to how much past information (lagged data) is required, which is not known a priori. The db8 wavelet function is found to yield the best results with the static MLPNN models and with TLRNN models having small memory depths. The performance of wavelet-coupled TLRNN models with large memory depths is found to be insensitive to the selection of the wavelet function, as all wavelet functions have similar performance.
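
    As an illustration of the wavelet-coupled input preparation described above, the sketch below decomposes a synthetic rainfall series with the db8 wavelet using the PyWavelets package and assembles lagged sub-series up to a chosen memory depth; the series, decomposition level, and memory depth are assumptions and may differ from the study.

```python
import numpy as np
import pywt  # PyWavelets

# Decompose a synthetic rainfall series with db8 and build lagged sub-series.
rng = np.random.default_rng(0)
rainfall = rng.gamma(shape=0.5, scale=5.0, size=512)    # hypothetical rainfall record

coeffs = pywt.wavedec(rainfall, "db8", level=3)         # discrete wavelet transform
# Reconstruct one sub-series per decomposition level (approximation + details).
subseries = []
for i in range(len(coeffs)):
    kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    subseries.append(pywt.waverec(kept, "db8")[: len(rainfall)])

memory_depth = 4                                        # lagged inputs per sub-series
X = np.column_stack([np.roll(s, lag) for s in subseries
                     for lag in range(memory_depth)])[memory_depth:]
print("network input matrix:", X.shape)
```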

  2. Neural Dynamics Associated with Semantic and Episodic Memory for Faces: Evidence from Multiple Frequency Bands

    ERIC Educational Resources Information Center

    Zion-Golumbic, Elana; Kutas, Marta; Bentin, Shlomo

    2010-01-01

    Prior semantic knowledge facilitates episodic recognition memory for faces. To examine the neural manifestation of the interplay between semantic and episodic memory, we investigated neuroelectric dynamics during the creation (study) and the retrieval (test) of episodic memories for famous and nonfamous faces. Episodic memory effects were evident…

  3. A Neural Network Model of the Structure and Dynamics of Human Personality

    ERIC Educational Resources Information Center

    Read, Stephen J.; Monroe, Brian M.; Brownstein, Aaron L.; Yang, Yu; Chopra, Gurveen; Miller, Lynn C.

    2010-01-01

    We present a neural network model that aims to bridge the historical gap between dynamic and structural approaches to personality. The model integrates work on the structure of the trait lexicon, the neurobiology of personality, temperament, goal-based models of personality, and an evolutionary analysis of motives. It is organized in terms of two…

  4. Feasibility Study of Extended-Gate-Type Silicon Nanowire Field-Effect Transistors for Neural Recording.

    PubMed

    Kang, Hongki; Kim, Jee-Yeon; Choi, Yang-Kyu; Nam, Yoonkey

    2017-03-28

    In this research, a high performance silicon nanowire field-effect transistor (transconductance as high as 34 µS and sensitivity of 84 nS/mV) is extensively studied and directly compared with planar passive microelectrode arrays for neural recording applications. Electrical and electrochemical characteristics are carefully characterized in a very well-controlled manner. We especially focused on the signal amplification capability and intrinsic noise of the transistors. Neural recording systems using both a silicon nanowire field-effect transistor-based active-type microelectrode array and a platinum black microelectrode-based passive-type microelectrode array are implemented and compared. An artificial neural spike signal is supplied as input to both arrays through a buffer solution and recorded simultaneously. The signal intensity recorded by the silicon nanowire transistor was precisely determined by an electrical characteristic of the transistor, its transconductance. The signal-to-noise ratio was found to depend strongly on the intrinsic 1/f noise of the silicon nanowire transistor. We found how signal strength is determined and how the intrinsic noise of the transistor determines the signal-to-noise ratio of the recorded neural signals. This study provides in-depth understanding of the overall neural recording mechanism using silicon nanowire transistors and solid design guidelines for further improvement and development.
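
    To make the role of transconductance concrete, a small-signal back-of-the-envelope estimate is given below; the 1 mV input amplitude is an illustrative assumption, not a value from the study.

```latex
% Small-signal relation between gate-voltage signal and recorded drain-current signal:
\[
\Delta I_D \;\approx\; g_m \,\Delta V_{gs},
\qquad
g_m = 34\ \mu\mathrm{S},\ \ \Delta V_{gs} = 1\ \mathrm{mV}
\;\Rightarrow\;
\Delta I_D \approx 34\ \mathrm{nA}.
\]
```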

  5. Optical Trapping Dynamics in Interference Field

    NASA Astrophysics Data System (ADS)

    Viera, Luis Alfredo; Lira, Ignacio; Soto, Leopoldo; Pavez, Cristián

    2008-04-01

    A model that predicts the particle trapping time in a two-beam laser interference field is proposed. The interference consists of a sinusoidal intensity pattern, which is used to translate the particle from the dark fringes to the bright ones. The particle is submerged in a viscous fluid. The model takes into account the irradiance, the wavelength, the fringe width, the medium viscosity and the size and approximate shape of the particle. From the classical separation of the optical trapping force into gradient and scattering forces, only the gradient force is considered, expressed in terms of the electric field. Opposing this force, the drag force is described by the Stokes force. The expression for the gradient force is the solution of Maxwell's equations for a homogeneous dielectric dipole in an electric field. For the Stokes force, the RBC is considered an oblate spheroid flowing edgewise. An experimental setup has been designed for the displacement of a single RBC (Red Blood Cell) in blood plasma due to an interference laser field produced by an argon-ion laser, using several irradiances. To the best of our knowledge, this is the only dynamic model of optical trapping that predicts the particle trapping time and position without relying on experimental results, and it does so in a simple analytical way. This analysis can be extended to other particles of arbitrary shape and to other trap configurations.

  6. Disappearing inflaton potential via heavy field dynamics

    SciTech Connect

    Kitajima, Naoya; Takahashi, Fuminobu E-mail: fumi@tuhep.phys.tohoku.ac.jp

    2016-02-01

    We propose a possibility that the inflaton potential is significantly modified after inflation due to heavy field dynamics. During inflation such a heavy scalar field may be stabilized at a value deviated from the low-energy minimum. In extreme cases, the inflaton potential vanishes and the inflaton becomes almost massless at some time after inflation. Such transition of the inflaton potential has interesting implications for primordial density perturbations, reheating, creation of unwanted relics, dark radiation, and experimental search for light degrees of freedom. To be concrete, we consider a chaotic inflation in supergravity where the inflaton mass parameter is promoted to a modulus field, finding that the inflaton becomes stable after the transition and contributes to dark matter. Another example is a hilltop inflation (also called new inflation) by the MSSM Higgs field which acquires a large expectation value just after inflation, but it returns to the origin after the transition and finally rolls down to the electroweak vacuum. Interestingly, the smallness of the electroweak scale compared to the Planck scale is directly related to the flatness of the inflaton potential.

  7. Neural network integration of field observations for soil endocrine disruptor characterisation.

    PubMed

    Aitkenhead, M J; Rhind, S M; Zhang, Z L; Kyle, C E; Coull, M C

    2014-01-15

    A neural network approach was used to predict the presence and concentration of a range of endocrine disrupting compounds (EDCs), based on field observations. Soil sample concentrations of endocrine disrupting compounds (EDCs) and site environmental characteristics, drawn from the National Soil Inventory of Scotland (NSIS) database, were used. Neural network models were trained to predict soil EDC concentrations using field observations for 184 sites. The results showed that presence/absence and concentration of several of the EDCs, mostly no longer in production, could be predicted with some accuracy. We were able to predict concentrations of seven of 31 compounds with r(2) values greater than 0.25 for log-normalised values and of eight with log-normalised predictions converted to a linear scale. Additional statistical analyses were carried out, including Root Mean Square Error (RMSE), Mean Error (ME), Willmott's index of agreement, Percent Bias (PBIAS) and ratio of root mean square to standard deviation (RSR). These analyses allowed us to demonstrate that the neural network models were making meaningful predictions of EDC concentration. We identified the main predictive input parameters in each case, based on a sensitivity analysis of the trained neural network model. We also demonstrated the capacity of the method for predicting the presence and level of EDC concentration in the field, identified further developments required to make this process as rapid and operator-friendly as possible and discussed the potential value of a system for field surveys of soil composition.
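
    The goodness-of-fit statistics listed above have standard textbook definitions; a short sketch computing them is given below (the paper may use slightly different variants, and the sample arrays are placeholders).

```python
import numpy as np

# Standard definitions of RMSE, mean error, Willmott's d, PBIAS and RSR.
def rmse(obs, pred):
    return np.sqrt(np.mean((pred - obs) ** 2))

def mean_error(obs, pred):
    return np.mean(pred - obs)

def willmott_d(obs, pred):
    om = obs.mean()
    return 1.0 - np.sum((pred - obs) ** 2) / np.sum((np.abs(pred - om) + np.abs(obs - om)) ** 2)

def pbias(obs, pred):
    return 100.0 * np.sum(obs - pred) / np.sum(obs)

def rsr(obs, pred):
    return rmse(obs, pred) / np.std(obs)

obs = np.array([1.2, 0.8, 2.5, 1.9, 0.4])      # hypothetical log-normalised concentrations
pred = np.array([1.0, 1.1, 2.2, 2.0, 0.6])
print(rmse(obs, pred), mean_error(obs, pred), willmott_d(obs, pred), pbias(obs, pred), rsr(obs, pred))
```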

  8. Biophysical Neural Spiking, Bursting, and Excitability Dynamics in Reconfigurable Analog VLSI

    PubMed Central

    Yu, Theodore; Sejnowski, Terrence J.; Cauwenberghs, Gert

    2011-01-01

    We study a range of neural dynamics under variations in biophysical parameters underlying extended Morris–Lecar and Hodgkin–Huxley models in three gating variables. The extended models are implemented in NeuroDyn, a four neuron, twelve synapse continuous-time analog VLSI programmable neural emulation platform with generalized channel kinetics and biophysical membrane dynamics. The dynamics exhibit a wide range of time scales extending beyond 100 ms neglected in typical silicon models of tonic spiking neurons. Circuit simulations and measurements show transition from tonic spiking to tonic bursting dynamics through variation of a single conductance parameter governing calcium recovery. We similarly demonstrate transition from graded to all-or-none neural excitability in the onset of spiking dynamics through the variation of channel kinetic parameters governing the speed of potassium activation. Other combinations of variations in conductance and channel kinetic parameters give rise to phasic spiking and spike frequency adaptation dynamics. The NeuroDyn chip consumes 1.29 mW and occupies 3 mm × 3 mm in 0.5 μm CMOS, supporting emerging developments in neuromorphic silicon-neuron interfaces. PMID:22227949
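
    For readers unfamiliar with the underlying neuron model, a plain Morris-Lecar integration is sketched below with a common textbook parameter set; these values are assumptions and are not the NeuroDyn chip settings or the paper's extended three-gating-variable models.

```python
import numpy as np

# Classical two-variable Morris-Lecar neuron integrated with forward Euler,
# using a common textbook (Hopf-regime) parameter set.
C, g_L, g_Ca, g_K = 20.0, 2.0, 4.4, 8.0           # uF/cm^2 and mS/cm^2
E_L, E_Ca, E_K = -60.0, 120.0, -84.0              # reversal potentials, mV
V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04
I_ext, dt = 100.0, 0.05                           # uA/cm^2, ms

def m_inf(V): return 0.5 * (1 + np.tanh((V - V1) / V2))   # instantaneous Ca activation
def w_inf(V): return 0.5 * (1 + np.tanh((V - V3) / V4))   # steady-state K activation
def tau_w(V): return 1.0 / np.cosh((V - V3) / (2 * V4))   # K activation time scale

V, w, trace = -60.0, 0.0, []
for _ in range(int(1000 / dt)):                   # simulate 1 s
    dV = (I_ext - g_L * (V - E_L) - g_Ca * m_inf(V) * (V - E_Ca) - g_K * w * (V - E_K)) / C
    dw = phi * (w_inf(V) - w) / tau_w(V)
    V, w = V + dt * dV, w + dt * dw
    trace.append(V)

vt = np.array(trace)
cycles = int(np.sum((vt[:-1] < vt.mean()) & (vt[1:] >= vt.mean())))
print("oscillation cycles in 1 s:", cycles,
      "| V range (mV):", round(vt.min(), 1), "to", round(vt.max(), 1))
```

    Varying the conductance and kinetic parameters of such a model (for example the calcium recovery or potassium activation kinetics) is what moves the dynamics between the tonic spiking, bursting, and graded or all-or-none excitability regimes measured on the chip.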

  9. Travelling waves in a neural field model with refractoriness.

    PubMed

    Meijer, Hil G E; Coombes, Stephen

    2014-04-01

    At one level of abstraction neural tissue can be regarded as a medium for turning local synaptic activity into output signals that propagate over large distances via axons to generate further synaptic activity that can cause reverberant activity in networks that possess a mixture of excitatory and inhibitory connections. This output is often taken to be a firing rate, and the mathematical form for the evolution equation of activity depends upon a spatial convolution of this rate with a fixed anatomical connectivity pattern. Such formulations often neglect the metabolic processes that would ultimately limit synaptic activity. Here we reinstate such a process, in the spirit of an original prescription by Wilson and Cowan (Biophys J 12:1-24, 1972), using a term that multiplies the usual spatial convolution with a moving time average of local activity over some refractory time-scale. This modulation can substantially affect network behaviour, and in particular give rise to periodic travelling waves in a purely excitatory network (with exponentially decaying anatomical connectivity), which in the absence of refractoriness would only support travelling fronts. We construct these solutions numerically as stationary periodic solutions in a co-moving frame (of both an equivalent delay differential model as well as the original delay integro-differential model). Continuation methods are used to obtain the dispersion curve for periodic travelling waves (speed as a function of period), which is found to be reminiscent of those for spatially extended models of excitable tissue. A kinematic analysis (based on the dispersion curve) predicts the onset of wave instabilities, which are confirmed numerically.
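
    A minimal numerical sketch of the ingredient highlighted above, a spatial convolution of the firing rate with an exponentially decaying kernel that is multiplied by a refractory factor built from a moving time average of local activity, is given below. The discretisation, parameters, and firing-rate function are illustrative choices rather than the model of Meijer and Coombes.

    ```python
    import numpy as np

    L, N, dt, T = 100.0, 400, 0.05, 60.0
    x = np.linspace(0, L, N)
    dx = x[1] - x[0]

    sigma = 2.0
    w = np.exp(-np.abs(x - L / 2) / sigma) / (2 * sigma)   # exponentially decaying kernel

    def f(u, theta=0.2, beta=20.0):
        return 1.0 / (1.0 + np.exp(-beta * (u - theta)))    # sigmoidal firing-rate function

    u = np.exp(-((x - 10.0) ** 2) / 4.0)   # localized initial activity
    a = np.zeros(N)                        # moving time average of local activity
    tau_r = 8.0                            # refractory time scale

    for _ in range(int(T / dt)):
        rate = f(u)
        conv = np.convolve(rate, w, mode="same") * dx      # spatial convolution
        u += dt * (-u + (1.0 - a) * conv)                  # refractoriness multiplies the input
        a += dt * (rate - a) / tau_r                       # slow average of recent activity

    print("final activity peak located near x =", x[np.argmax(u)])
    ```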

  10. Internal Representation of Task Rules by Recurrent Dynamics: The Importance of the Diversity of Neural Responses

    PubMed Central

    Rigotti, Mattia; Rubin, Daniel Ben Dayan; Wang, Xiao-Jing; Fusi, Stefano

    2010-01-01

    Neural activity of behaving animals, especially in the prefrontal cortex, is highly heterogeneous, with selective responses to diverse aspects of the executed task. We propose a general model of recurrent neural networks that perform complex rule-based tasks, and we show that the diversity of neuronal responses plays a fundamental role when the behavioral responses are context-dependent. Specifically, we found that when the inner mental states encoding the task rules are represented by stable patterns of neural activity (attractors of the neural dynamics), the neurons must be selective for combinations of sensory stimuli and inner mental states. Such mixed selectivity is easily obtained by neurons that connect with random synaptic strengths both to the recurrent network and to neurons encoding sensory inputs. The number of randomly connected neurons needed to solve a task is on average only three times as large as the number of neurons needed in a network designed ad hoc. Moreover, the number of needed neurons grows only linearly with the number of task-relevant events and mental states, provided that each neuron responds to a large proportion of events (dense/distributed coding). A biologically realistic implementation of the model captures several aspects of the activity recorded from monkeys performing context-dependent tasks. Our findings explain the importance of the diversity of neural responses and provide us with simple and general principles for designing attractor neural networks that perform complex computation. PMID:21048899

  11. Imaging second messenger dynamics in developing neural circuits

    PubMed Central

    Dunn, Timothy A.; Feller, Marla B.

    2010-01-01

    A characteristic feature of developing neural circuits is that they are spontaneously active. There are several examples, including the retina, spinal cord and hippocampus, where spontaneous activity is highly correlated amongst neighboring cells, with large depolarizing events occurring with a periodicity on the order of minutes. One likely mechanism by which neurons can “decode” these slow oscillations is through activation of second messenger cascades that either influence transcriptional activity or drive posttranslational modifications. Here we describe recent experiments where imaging has been used to characterize slow oscillations in the cAMP/PKA second messenger cascade in retinal neurons. We review the latest techniques in imaging this specific second messenger cascade, its intimate relationship with changes in intracellular calcium concentration, and several hypotheses regarding its role in neurodevelopment. PMID:18383551

  12. Dynamic deep temperature recovery by acoustic thermography using neural networks

    NASA Astrophysics Data System (ADS)

    Anosov, A. A.; Belyaev, R. V.; Vilkov, V. A.; Kazanskii, A. S.; Mansfel'd, A. D.; Subochev, P. V.

    2013-11-01

    In an experiment, the deep temperature, which changed with time, was recovered for a model object, bovine liver. The liver was heated for 6 min by laser radiation (810 nm), transmitted via a light guide to a depth of 1 cm. During heating and subsequent cooling, the deep temperature was measured by acoustic thermography. For independent control, we used three electronic telemeters, the indications of which were also subsequently recovered. Deep temperature was recovered using a neural network with a time delay. During the last 2 min of heating, the mean square error of recovery for an averaging time of 50 s did not exceed 0.5°C. Such a result makes it possible to use this method for solving a number of medical problems.

  13. An ensemble of dynamic neural network identifiers for fault detection and isolation of gas turbine engines.

    PubMed

    Amozegar, M; Khorasani, K

    2016-04-01

    In this paper, a new approach for Fault Detection and Isolation (FDI) of gas turbine engines is proposed by developing an ensemble of dynamic neural network identifiers. For health monitoring of the gas turbine engine, its dynamics is first identified by constructing three separate or individual dynamic neural network architectures. Specifically, a dynamic multi-layer perceptron (MLP), a dynamic radial-basis function (RBF) neural network, and a dynamic support vector machine (SVM) are trained to individually identify and represent the gas turbine engine dynamics. Next, three ensemble-based techniques are developed to represent the gas turbine engine dynamics, namely, two heterogeneous ensemble models and one homogeneous ensemble model. It is first shown that all ensemble approaches do significantly improve the overall performance and accuracy of the developed system identification scheme when compared to each of the stand-alone solutions. The best selected stand-alone model (i.e., the dynamic RBF network) and the best selected ensemble architecture (i.e., the heterogeneous ensemble) in terms of their performances in achieving an accurate system identification are then selected for solving the FDI task. The required residual signals are generated by using both a single model-based solution and an ensemble-based solution under various gas turbine engine health conditions. Our extensive simulation studies demonstrate that the fault detection and isolation task achieved by using the residuals that are obtained from the dynamic ensemble scheme results in a significantly more accurate and reliable performance as illustrated through detailed quantitative confusion matrix analysis and comparative studies.
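
    The sketch below illustrates only the residual-generation idea described above: several regressors identify nominal dynamics, their averaged prediction forms a heterogeneous ensemble, and a threshold on the residual between measurements and the ensemble prediction flags a fault. The toy "engine", the model types, and the threshold are placeholders for the paper's MLP/RBF/SVM identifiers and gas turbine data.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)

    def engine(u, fault=0.0):
        """Toy one-step dynamics: output from input, plus a fault bias and sensor noise."""
        return np.tanh(1.5 * u) + fault + rng.normal(0, 0.02, size=np.shape(u))

    # Identify nominal dynamics from healthy data with two heterogeneous members
    u_train = rng.uniform(-2, 2, 500)
    y_train = engine(u_train)
    members = [MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=1),
               SVR(C=10.0)]
    for m in members:
        m.fit(u_train.reshape(-1, 1), y_train)

    def ensemble_predict(u):
        return np.mean([m.predict(u.reshape(-1, 1)) for m in members], axis=0)

    # Residual generation under healthy and faulty conditions
    u_test = rng.uniform(-2, 2, 200)
    for fault in (0.0, 0.4):
        residual = np.abs(engine(u_test, fault) - ensemble_predict(u_test))
        print(f"fault={fault}: mean residual {residual.mean():.3f}, "
              f"alarm rate {(residual > 0.15).mean():.0%}")
    ```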

  14. Multiple-scale dynamics in neural systems: learning, synchronization and network oscillations

    NASA Astrophysics Data System (ADS)

    Zhigulin, Valentin P.

    Many dynamical processes that take place in neural systems involve interactions between multiple temporal and/or spatial scales which lead to the emergence of new dynamical phenomena. Two of them are studied in this thesis: learning-induced robustness and enhancement of synchronization in small neural circuits; and emergence of global spatio-temporal dynamics from local interactions in neural networks. Chapter 2 presents the study of synchronization of two model neurons coupled through a synapse with spike-timing dependent plasticity (STDP). It shows that this form of learning leads to the enlargement of frequency locking zones and makes synchronization much more robust to noise than classical synchronization mediated by non-plastic synapses. A simple discrete-time map model is presented that enables deep understanding of this phenomenon and demonstrates its generality. Chapter 3 extends these results by demonstrating enhancement of synchronization in a hybrid circuit with a living postsynaptic neuron. The robustness of STDP-mediated synchronization is further confirmed with simulations of stochastic plasticity. Chapter 4 studies the entrainment of a heterogeneous network of electrically coupled neurons by periodic stimulation. It demonstrates that, when compared to the case of non-plastic input synapses, inputs with STDP enhance coherence of network oscillations and improve robustness of synchronization to the variability of network properties. The observed mechanism may play a role in synchronization of hippocampal neural ensembles. Chapter 5 proposes a new type of artificial synaptic connection that combines the fast reaction of an electrical synapse with the plasticity of a chemical synapse. It shows that such a synapse mediates regularization of chaos in a circuit of two chaotic bursting neurons and leads to structural stability of the regularized state. Such a plastic electrical synapse may be used in the development of robust neural prosthetics. Chapter 6 suggests a new

  15. Dynamic Neural Processing of Linguistic Cues Related to Death

    PubMed Central

    Ma, Yina; Qin, Jungang; Han, Shihui

    2013-01-01

    Behavioral studies suggest that humans evolve the capacity to cope with anxiety induced by the awareness of death’s inevitability. However, the neurocognitive processes that underlie online death-related thoughts remain unclear. Our recent functional MRI study found that the processing of linguistic cues related to death was characterized by decreased neural activity in human insular cortex. The current study further investigated the time course of neural processing of death-related linguistic cues. We recorded event-related potentials (ERP) to death-related, life-related, negative-valence, and neutral-valence words in a modified Stroop task that required color naming of words. We found that the amplitude of an early frontal/central negativity at 84–120 ms (N1) decreased to death-related words but increased to life-related words relative to neutral-valence words. The N1 effect associated with death-related and life-related words was correlated respectively with individuals’ pessimistic and optimistic attitudes toward life. Death-related words also increased the amplitude of a frontal/central positivity at 124–300 ms (P2) and of a frontal/central positivity at 300–500 ms (P3). However, the P2 and P3 modulations were observed for both death-related and negative-valence words but not for life-related words. The ERP results suggest an early inverse coding of linguistic cues related to life and death, which is followed by negative emotional responses to death-related information. PMID:23840787

  16. Dynamic nuclear polarization at high magnetic fields

    PubMed Central

    Maly, Thorsten; Debelouchina, Galia T.; Bajaj, Vikram S.; Hu, Kan-Nian; Joo, Chan-Gyu; Mak–Jurkauskas, Melody L.; Sirigiri, Jagadishwar R.; van der Wel, Patrick C. A.; Herzfeld, Judith; Temkin, Richard J.; Griffin, Robert G.

    2009-01-01

    Dynamic nuclear polarization (DNP) is a method that permits NMR signal intensities of solids and liquids to be enhanced significantly, and is therefore potentially an important tool in structural and mechanistic studies of biologically relevant molecules. During a DNP experiment, the large polarization of an exogenous or endogenous unpaired electron is transferred to the nuclei of interest (I) by microwave (μw) irradiation of the sample. The maximum theoretical enhancement achievable is given by the ratio of the gyromagnetic ratios (γe/γI), being ∼660 for protons. In the early 1950s, the DNP phenomenon was demonstrated experimentally, and intensively investigated in the following four decades, primarily at low magnetic fields. This review focuses on recent developments in the field of DNP with a special emphasis on work done at high magnetic fields (≥5 T), the regime where contemporary NMR experiments are performed. After a brief historical survey, we present a review of the classical continuous wave (cw) DNP mechanisms—the Overhauser effect, the solid effect, the cross effect, and thermal mixing. A special section is devoted to the theory of coherent polarization transfer mechanisms, since they are potentially more efficient at high fields than classical polarization schemes. The implementation of DNP at high magnetic fields has required the development and improvement of new and existing instrumentation. Therefore, we also review some recent developments in μw and probe technology, followed by an overview of DNP applications in biological solids and liquids. Finally, we outline some possible areas for future developments. PMID:18266416

  17. Dynamics on Networks: The Role of Local Dynamics and Global Networks on the Emergence of Hypersynchronous Neural Activity

    PubMed Central

    Schmidt, Helmut; Petkov, George; Richardson, Mark P.; Terry, John R.

    2014-01-01

    Graph theory has evolved into a useful tool for studying complex brain networks inferred from a variety of measures of neural activity, including fMRI, DTI, MEG and EEG. In the study of neurological disorders, recent work has discovered differences in the structure of graphs inferred from patient and control cohorts. However, most of these studies pursue a purely observational approach; identifying correlations between properties of graphs and the cohort which they describe, without consideration of the underlying mechanisms. To move beyond this necessitates the development of computational modeling approaches to appropriately interpret network interactions and the alterations in brain dynamics they permit, which in the field of complexity sciences is known as dynamics on networks. In this study we describe the development and application of this framework using modular networks of Kuramoto oscillators. We use this framework to understand functional networks inferred from resting state EEG recordings of a cohort of 35 adults with heterogeneous idiopathic generalized epilepsies and 40 healthy adult controls. Taking emergent synchrony across the global network as a proxy for seizures, our study finds that the critical strength of coupling required to synchronize the global network is significantly decreased for the epilepsy cohort for functional networks inferred from both theta (3–6 Hz) and low-alpha (6–9 Hz) bands. We further identify left frontal regions as a potential driver of seizure activity within these networks. We also explore the ability of our method to identify individuals with epilepsy, observing up to 80% predictive power through use of receiver operating characteristic analysis. Collectively these findings demonstrate that a computer model based analysis of routine clinical EEG provides significant additional information beyond standard clinical interpretation, which should ultimately enable a more appropriate mechanistic stratification of people
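
    As a toy version of the dynamics-on-networks computation described above (not the authors' EEG-derived functional networks), the following sketch integrates a modular Kuramoto network and tracks the global order parameter as the coupling strength increases, giving a crude estimate of the critical coupling used as a seizure proxy.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_mod, per_mod = 4, 16
    N = n_mod * per_mod

    # Block-structured adjacency: dense within modules, sparse between them
    A = (rng.random((N, N)) < 0.05).astype(float)
    for m in range(n_mod):
        s = slice(m * per_mod, (m + 1) * per_mod)
        A[s, s] = rng.random((per_mod, per_mod)) < 0.6
    np.fill_diagonal(A, 0)

    omega = rng.normal(0, 0.5, N)          # natural frequencies

    def order_parameter(K, steps=2000, dt=0.05):
        theta = rng.uniform(0, 2 * np.pi, N)
        for _ in range(steps):
            coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
            theta += dt * (omega + K / N * coupling)
        return np.abs(np.exp(1j * theta).mean())

    for K in np.linspace(0.5, 6.0, 12):
        print(f"K={K:4.1f}  global order parameter r={order_parameter(K):.2f}")
    ```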

  18. A point process approach to identifying and tracking transitions in neural spiking dynamics in the subthalamic nucleus of Parkinson's patients

    NASA Astrophysics Data System (ADS)

    Deng, Xinyi; Eskandar, Emad N.; Eden, Uri T.

    2013-12-01

    Understanding the role of rhythmic dynamics in normal and diseased brain function is an important area of research in neural electrophysiology. Identifying and tracking changes in rhythms associated with spike trains present an additional challenge, because standard approaches for continuous-valued neural recordings—such as local field potential, magnetoencephalography, and electroencephalography data—require assumptions that do not typically hold for point process data. Additionally, subtle changes in the history dependent structure of a spike train have been shown to lead to robust changes in rhythmic firing patterns. Here, we propose a point process modeling framework to characterize the rhythmic spiking dynamics in spike trains, test for statistically significant changes to those dynamics, and track the temporal evolution of such changes. We first construct a two-state point process model incorporating spiking history and develop a likelihood ratio test to detect changes in the firing structure. We then apply adaptive state-space filters and smoothers to track these changes through time. We illustrate our approach with a simulation study as well as with experimental data recorded in the subthalamic nucleus of Parkinson's patients performing an arm movement task. Our analyses show that during the arm movement task, neurons underwent a complex pattern of modulation of spiking intensity characterized initially by a release of inhibitory control at 20-40 ms after a spike, followed by a decrease in excitatory influence at 40-60 ms after a spike.
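
    A conceptual sketch of the history-dependent point process idea follows: a Bernoulli GLM with spike-history covariates is fit separately to two epochs of a synthetic spike train and compared with a single shared model through a likelihood ratio test. The lag structure, simulated data, and test details are stand-ins for illustration, not the paper's adaptive state-space implementation or the STN recordings.

    ```python
    import numpy as np
    from scipy.stats import chi2
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    lags = 5  # number of 1-ms history bins used as covariates

    def simulate(n, hist_weights, base=-3.0):
        spikes = np.zeros(n)
        for t in range(lags, n):
            eta = base + hist_weights @ spikes[t - lags:t][::-1]
            spikes[t] = rng.random() < 1 / (1 + np.exp(-eta))
        return spikes

    def design(spikes):
        X = np.column_stack([np.roll(spikes, k)[lags:] for k in range(1, lags + 1)])
        return X, spikes[lags:]

    def loglik(spikes):
        X, y = design(spikes)
        m = LogisticRegression(C=1e6, max_iter=1000).fit(X, y)   # nearly unpenalized GLM
        p = m.predict_proba(X)[:, 1]
        return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

    # Epoch 1: strong post-spike inhibition; epoch 2: inhibition partly released
    s1 = simulate(4000, np.array([-4.0, -2.0, -1.0, 0.0, 0.0]))
    s2 = simulate(4000, np.array([-1.0, -0.5, 0.0, 0.5, 0.5]))

    ll_null = loglik(np.concatenate([s1, s2]))   # one shared history model
    ll_alt = loglik(s1) + loglik(s2)             # separate models per epoch
    stat = 2 * (ll_alt - ll_null)
    print("LR statistic:", round(stat, 1), " p =", chi2.sf(stat, df=lags + 1))
    ```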

  19. Some new stability properties of dynamic neural networks with different time-scales.

    PubMed

    Yu, Wen; Sandoval, Alejandro Cruz

    2006-06-01

    Dynamic neural networks with different time-scales include both fast and slow phenomena. Some applications require the equilibrium points of these networks to be stable. The main contribution of the paper is that a Lyapunov function and the singular perturbation technique are combined to derive several new stability properties of different time-scale neural networks. Exponential stability and asymptotic stability are obtained under sector and bound conditions. Compared to other papers, these conditions are simpler. Numerical examples are given to demonstrate the effectiveness of the theoretical results.

  20. Study of the neural dynamics for understanding communication in terms of complex hetero systems.

    PubMed

    Tsuda, Ichiro; Yamaguchi, Yoko; Hashimoto, Takashi; Okuda, Jiro; Kawasaki, Masahiro; Nagasaka, Yasuo

    2015-01-01

    The purpose of the research project was to establish a new research area named "neural information science for communication" by elucidating its neural mechanism. The research was performed in collaboration with applied mathematicians in complex-systems science and experimental researchers in neuroscience. The project included measurements of brain activity during communication with or without languages and analyses performed with the help of extended theories for dynamical systems and stochastic systems. The communication paradigm was extended to the interactions between human and human, human and animal, human and robot, human and materials, and even animal and animal.

  1. Field-driven dynamics of nematic microcapillaries

    NASA Astrophysics Data System (ADS)

    Khayyatzadeh, Pouya; Fu, Fred; Abukhdeir, Nasser Mohieddin

    2015-12-01

    Polymer-dispersed liquid-crystal (PDLC) composites have long been a focus of study for their unique electro-optical properties which have resulted in various applications such as switchable (transparent or translucent) windows. These composites are manufactured using desirable "bottom-up" techniques, such as phase separation of a liquid-crystal-polymer mixture, which enable production of PDLC films at very large scales. LC domains within PDLCs are typically spheroidal, as opposed to rectangular for an LCD panel, and thus exhibit substantially different behavior in the presence of an external field. The fundamental difference between spheroidal and rectangular nematic domains is that the former results in the presence of nanoscale orientational defects in LC order while the latter does not. Development and optimization of PDLC electro-optical properties has progressed at a relatively slow pace due to this increased complexity. In this work, continuum simulations are performed in order to capture the complex formation and electric field-driven switching dynamics of approximations of PDLC domains. Using a simplified elliptic cylinder (microcapillary) geometry as an approximation of spheroidal PDLC domains, the effects of geometry (aspect ratio), surface anchoring, and external field strength are studied through the use of the Landau-de Gennes model of the nematic LC phase.

  2. Neural population dynamics in human motor cortex during movements in people with ALS.

    PubMed

    Pandarinath, Chethan; Gilja, Vikash; Blabe, Christine H; Nuyujukian, Paul; Sarma, Anish A; Sorice, Brittany L; Eskandar, Emad N; Hochberg, Leigh R; Henderson, Jaimie M; Shenoy, Krishna V

    2015-06-23

    The prevailing view of motor cortex holds that motor cortical neural activity represents muscle or movement parameters. However, recent studies in non-human primates have shown that neural activity does not simply represent muscle or movement parameters; instead, its temporal structure is well-described by a dynamical system where activity during movement evolves lawfully from an initial pre-movement state. In this study, we analyze neuronal ensemble activity in motor cortex in two clinical trial participants diagnosed with Amyotrophic Lateral Sclerosis (ALS). We find that activity in human motor cortex has similar dynamical structure to that of non-human primates, indicating that human motor cortex contains a similar underlying dynamical system for movement generation.

  3. A neural network model of the structure and dynamics of human personality.

    PubMed

    Read, Stephen J; Monroe, Brian M; Brownstein, Aaron L; Yang, Yu; Chopra, Gurveen; Miller, Lynn C

    2010-01-01

    We present a neural network model that aims to bridge the historical gap between dynamic and structural approaches to personality. The model integrates work on the structure of the trait lexicon, the neurobiology of personality, temperament, goal-based models of personality, and an evolutionary analysis of motives. It is organized in terms of two overarching motivational systems, an approach and an avoidance system, as well as a general disinhibition and constraint system. Each overarching motivational system influences more specific motives. Traits are modeled in terms of differences in the sensitivities of the motivational systems, the baseline activation of specific motives, and inhibitory strength. The result is a motive-based neural network model of personality based on research about the structure and neurobiology of human personality. The model provides an account of personality dynamics and person-situation interactions and suggests how dynamic processing approaches and dispositional, structural approaches can be integrated in a common framework.

  4. Degradation prediction model based on a neural network with dynamic windows.

    PubMed

    Zhang, Xinghui; Xiao, Lei; Kang, Jianshe

    2015-03-23

    Tracking degradation of mechanical components is very critical for effective maintenance decision making. Remaining useful life (RUL) estimation is a widely used form of degradation prediction. RUL prediction methods that can draw on ample run-to-failure condition monitoring data have been researched extensively, but for some high-reliability components it is very difficult to collect run-to-failure condition monitoring data, i.e., data spanning normal operation through failure. Only a limited number of condition indicators over a certain period can be used to estimate RUL. In addition, some existing prediction methods suffer from poor extrapolation, which hinders RUL estimation: the predicted value converges to a constant or fluctuates within a certain range. Moreover, fluctuating condition features also degrade prediction. To address these difficulties, this paper proposes a RUL prediction model based on a neural network with dynamic windows. The model consists of three main steps: window size determination based on the rate of increase, change point detection, and rolling prediction. The proposed method has two dominant strengths. One is that it does not need to assume that the degradation trajectory follows a particular distribution. The other is that it can adapt to variation in degradation indicators, which greatly benefits RUL prediction. Finally, the performance of the proposed RUL prediction model is validated with real field data and simulation data.

  5. Degradation Prediction Model Based on a Neural Network with Dynamic Windows

    PubMed Central

    Zhang, Xinghui; Xiao, Lei; Kang, Jianshe

    2015-01-01

    Tracking degradation of mechanical components is very critical for effective maintenance decision making. Remaining useful life (RUL) estimation is a widely used form of degradation prediction. RUL prediction methods that can draw on ample run-to-failure condition monitoring data have been researched extensively, but for some high-reliability components it is very difficult to collect run-to-failure condition monitoring data, i.e., data spanning normal operation through failure. Only a limited number of condition indicators over a certain period can be used to estimate RUL. In addition, some existing prediction methods suffer from poor extrapolation, which hinders RUL estimation: the predicted value converges to a constant or fluctuates within a certain range. Moreover, fluctuating condition features also degrade prediction. To address these difficulties, this paper proposes a RUL prediction model based on a neural network with dynamic windows. The model consists of three main steps: window size determination based on the rate of increase, change point detection, and rolling prediction. The proposed method has two dominant strengths. One is that it does not need to assume that the degradation trajectory follows a particular distribution. The other is that it can adapt to variation in degradation indicators, which greatly benefits RUL prediction. Finally, the performance of the proposed RUL prediction model is validated with real field data and simulation data. PMID:25806873
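
    The sketch below is a much simplified illustration of the rolling "dynamic window" idea in the two records above: the training window shrinks as the degradation indicator rises faster, and a local model fit on that window is extrapolated to a failure threshold to estimate RUL. The window rule, thresholds, linear extrapolation, and synthetic degradation signal are placeholders rather than the paper's algorithm (its change point detection step is not reproduced here).

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.arange(300)
    # Synthetic degradation indicator: slow wear followed by accelerating damage
    health = 0.02 * t + 0.0015 * np.maximum(t - 150, 0) ** 1.6 + rng.normal(0, 0.2, t.size)
    failure_level = 20.0

    def window_size(series, base=60, k=100):
        rate = max(np.polyfit(np.arange(20), series[-20:], 1)[0], 1e-3)
        return int(np.clip(base / (1 + k * rate), 10, base))   # faster growth -> shorter window

    def predict_rul(series):
        w = window_size(series)
        idx = np.arange(len(series) - w, len(series))
        slope = np.polyfit(idx, series[-w:], 1)[0]             # rolling local trend model
        return np.inf if slope <= 0 else (failure_level - series[-1]) / slope

    for now in (120, 200, 260):
        window = window_size(health[:now])
        rul = predict_rul(health[:now])
        print(f"t={now:3d}  window={window:2d}  predicted RUL ~ {rul:6.1f}")
    ```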

  6. Distributed Nonlocal Feedback Delays May Destabilize Fronts in Neural Fields, Distributed Transmission Delays Do Not

    PubMed Central

    2013-01-01

    The spread of activity in neural populations is a well-known phenomenon. To understand the propagation speed and the stability of stationary fronts in neural populations, the present work considers a neural field model that involves intracortical and cortico-cortical synaptic interactions. This includes distributions of axonal transmission speeds and nonlocal feedback delays as well as general classes of synaptic interactions. The work proves the spectral stability of standing and traveling fronts subject to general transmission speeds for large classes of spatial interactions and derives conditions for front instabilities subject to nonlocal feedback delays. Moreover, it turns out that the uniqueness of the stationary traveling front guarantees its exponential stability for vanishing feedback delay. Numerical simulations complement the analytical findings. PMID:23899051

  7. Dynamic Baysesian state-space model with a neural network for an online river flow prediction

    NASA Astrophysics Data System (ADS)

    Ham, Jonghwa; Hong, Yoon-Seok

    2013-04-01

    The usefulness of artificial neural networks in complex hydrological modeling has been demonstrated by successful applications. Several different types of neural network have been used for the hydrological modeling task, but the multi-layer perceptron (MLP) neural network (also known as the feed-forward neural network) has enjoyed a predominant position because of its simplicity and its ability to provide good approximations. In many hydrological applications of MLP neural networks, gradient descent-based batch learning algorithms such as back-propagation, quasi-Newton, Levenberg-Marquardt, and conjugate gradient methods have been used to optimize the cost function (usually by minimizing the error function in the prediction) by updating the parameters and structure of a neural network defined using a set of input-output training examples. Hydrological systems are highly dynamic, with time-varying inputs and outputs, and are characterized by data that arrive sequentially. The gradient descent-based batch learning approaches that are implemented in MLP neural networks have significant disadvantages for online dynamic hydrological modeling because they cannot update the model structure and parameters when a new set of hydrological measurement data becomes available. In addition, a large amount of training data is always required off-line, with a long model training time. In this work, a dynamic nonlinear Bayesian state-space model with a multi-layer perceptron (MLP) neural network via a sequential Monte Carlo (SMC) learning algorithm is proposed for online dynamic hydrological modeling. This proposed new method of modeling is herein known as MLP-SMC. The sequential Monte Carlo learning algorithm in the MLP-SMC is designed to evolve and adapt the weights of an MLP neural network sequentially in time on the arrival of each new item of hydrological data. The weights of an MLP neural network are treated as the unknown dynamic state variable in the dynamic Bayesian state
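
    A rough sketch of the MLP-SMC idea follows: the weights of a tiny MLP are treated as the latent state of a state-space model and updated by a particle filter as each new (input, output) pair arrives. The network size, noise levels, resampling rule, and streaming data are illustrative assumptions, not the configuration used by the authors.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_particles, n_hidden = 500, 6
    dim_w = n_hidden * 3 + 1                        # weights of a 1-input, 1-output MLP

    def mlp(w, x):
        W1 = w[:n_hidden]; b1 = w[n_hidden:2 * n_hidden]
        W2 = w[2 * n_hidden:3 * n_hidden]; b2 = w[-1]
        return np.tanh(np.outer(x, W1) + b1) @ W2 + b2

    particles = rng.normal(0, 0.5, (n_particles, dim_w))
    log_w = np.zeros(n_particles)
    sigma_obs, sigma_drift = 0.1, 0.02

    def step(x, y):
        """One SMC update on the arrival of a new observation (x, y)."""
        global particles, log_w
        particles += rng.normal(0, sigma_drift, particles.shape)   # random-walk state model
        pred = np.array([mlp(p, np.atleast_1d(x))[0] for p in particles])
        log_w += -0.5 * ((y - pred) / sigma_obs) ** 2               # Gaussian likelihood
        w = np.exp(log_w - log_w.max()); w /= w.sum()
        if 1.0 / np.sum(w ** 2) < n_particles / 2:                  # resample if degenerate
            idx = rng.choice(n_particles, n_particles, p=w)
            particles, log_w = particles[idx], np.zeros(n_particles)
        return np.sum(w * pred)                                     # filtered prediction

    # Streaming data from an unknown nonlinear "rainfall -> flow" relation
    for t in range(200):
        x = rng.uniform(-2, 2)
        y = np.sin(x) + 0.5 * x + rng.normal(0, 0.1)
        y_hat = step(x, y)
    print("last filtered prediction vs observation:", round(y_hat, 3), round(y, 3))
    ```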

  8. Dynamical complexity in the C.elegans neural network

    NASA Astrophysics Data System (ADS)

    Antonopoulos, C. G.; Fokas, A. S.; Bountis, T. C.

    2016-09-01

    We model the neuronal circuit of the C. elegans soil worm in terms of a Hindmarsh-Rose system of ordinary differential equations, dividing its circuit into six communities which are determined via the Walktrap and Louvain methods. Using the numerical solution of these equations, we analyze important measures of dynamical complexity, namely synchronicity, the largest Lyapunov exponent, and the ΦAR auto-regressive integrated information theory measure. We show that ΦAR provides a useful measure of the information contained in the C. elegans brain dynamic network. Our analysis reveals that the C. elegans brain dynamic network generates more information than the sum of its constituent parts, and that it attains higher levels of integrated information for couplings for which either all its communities are highly synchronized, or there is a mixed state of highly synchronized and desynchronized communities.
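
    For illustration of the building blocks mentioned above, the sketch below integrates two electrically coupled Hindmarsh-Rose neurons with SciPy and reports a simple synchrony measure; the two-cell topology and parameter values are toy choices, whereas the paper works with the full C. elegans connectome partitioned into communities.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Standard Hindmarsh-Rose parameters (illustrative values)
    a, b, c, d, r, s, x0, I = 1.0, 3.0, 1.0, 5.0, 0.006, 4.0, -1.6, 3.0
    g = 0.3   # electrical (gap-junction) coupling strength

    def hr_pair(t, y):
        x1, y1, z1, x2, y2, z2 = y
        def cell(x, yv, z, x_other):
            dx = yv - a * x**3 + b * x**2 - z + I + g * (x_other - x)
            dy = c - d * x**2 - yv
            dz = r * (s * (x - x0) - z)
            return dx, dy, dz
        return (*cell(x1, y1, z1, x2), *cell(x2, y2, z2, x1))

    y_init = [-1.0, 0.0, 2.0, -1.2, 0.1, 2.1]
    sol = solve_ivp(hr_pair, (0, 2000), y_init, max_step=0.05)

    x1, x2 = sol.y[0], sol.y[3]
    sync_error = np.mean(np.abs(x1 - x2)[len(x1) // 2:])   # discard the transient
    print("mean |x1 - x2| after transient:", round(sync_error, 3))
    ```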

  9. Machine Learning for Dynamical Mean Field Theory

    NASA Astrophysics Data System (ADS)

    Arsenault, Louis-Francois; Lopez-Bezanilla, Alejandro; von Lilienfeld, O. Anatole; Littlewood, P. B.; Millis, Andy

    2014-03-01

    Machine Learning (ML), an approach that infers new results from accumulated knowledge, is in use for a variety of tasks ranging from face and voice recognition to internet searching and has recently been gaining increasing importance in chemistry and physics. In this talk, we investigate the possibility of using ML to solve the equations of dynamical mean field theory, which otherwise requires the (numerically very expensive) solution of a quantum impurity model. Our ML scheme requires learning the relation between two functions: the hybridization function, describing the bare (local) electronic structure of a material, and the self-energy, describing the many-body physics. We discuss the parameterization of the two functions for the exact diagonalization solver and present examples, beginning with the Anderson Impurity model with a fixed bath density of states, demonstrating the advantages and the pitfalls of the method. DOE contract DE-AC02-06CH11357.

  10. Subduction dynamics: Constraints from gravity field observations

    NASA Technical Reports Server (NTRS)

    Mcadoo, D. C.

    1985-01-01

    Satellite systems do the best job of resolving the long wavelength components of the Earth's gravity field. Over the oceans, satellite-borne radar altimeters such as SEASAT provide the best resolution observations of the intermediate wavelength components. Satellite observations of gravity have contributed to the understanding of the dynamics of subduction. Large, long wavelength geoidal highs generally occur over subduction zones. These highs are attributed to the superposition of two effects of subduction: (1) the positive mass anomalies of subducting slabs themselves; and (2) the surface deformations, such as the trenches, convectively induced by these slabs as they sink into the mantle. Models of this subduction process suggest that the mantle behaves as a non-Newtonian fluid, that its effective viscosity increases significantly with depth, and that large positive mass anomalies may occur beneath the seismically defined Benioff zones.

  11. Gradient Learning in Spiking Neural Networks by Dynamic Perturbation of Conductances

    SciTech Connect

    Fiete, Ila R.; Seung, H. Sebastian

    2006-07-28

    We present a method of estimating the gradient of an objective function with respect to the synaptic weights of a spiking neural network. The method works by measuring the fluctuations in the objective function in response to dynamic perturbation of the membrane conductances of the neurons. It is compatible with recurrent networks of conductance-based model neurons with dynamic synapses. The method can be interpreted as a biologically plausible synaptic learning rule, if the dynamic perturbations are generated by a special class of 'empiric' synapses driven by random spike trains from an external source.
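
    A toy numerical sketch of the perturbation principle follows: the gradient of an objective with respect to a weight vector is estimated by correlating small random perturbations with the resulting fluctuations of the objective, then compared against the exact gradient. The quadratic objective stands in for a network's performance measure; conductances and spiking are not modelled here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    dim = 10
    A = rng.normal(size=(dim, dim)); A = A @ A.T / dim      # random positive-definite quadratic
    w = rng.normal(size=dim)

    def objective(weights):
        return 0.5 * weights @ A @ weights

    true_grad = A @ w
    sigma, n_trials = 0.01, 5000
    est = np.zeros(dim)
    for _ in range(n_trials):
        xi = rng.normal(0, sigma, dim)                       # random perturbation
        delta = objective(w + xi) - objective(w)             # fluctuation of the objective
        est += delta * xi                                     # correlate perturbation and outcome
    est /= n_trials * sigma**2                               # E[delta * xi] ~ sigma^2 * gradient

    cos = est @ true_grad / (np.linalg.norm(est) * np.linalg.norm(true_grad))
    print("cosine similarity between estimated and true gradient:", round(cos, 3))
    ```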

  12. Dynamic neural network-based robust observers for uncertain nonlinear systems.

    PubMed

    Dinh, H T; Kamalapurkar, R; Bhasin, S; Dixon, W E

    2014-12-01

    A dynamic neural network (DNN) based robust observer for uncertain nonlinear systems is developed. The observer structure consists of a DNN to estimate the system dynamics on-line, a dynamic filter to estimate the unmeasurable state and a sliding mode feedback term to account for modeling errors and exogenous disturbances. The observed states are proven to asymptotically converge to the system states of high-order uncertain nonlinear systems through Lyapunov-based analysis. Simulations and experiments on a two-link robot manipulator are performed to show the effectiveness of the proposed method in comparison to several other state estimation methods.

  13. Gradient Learning in Spiking Neural Networks by Dynamic Perturbation of Conductances

    NASA Astrophysics Data System (ADS)

    Fiete, Ila R.; Seung, H. Sebastian

    2006-07-01

    We present a method of estimating the gradient of an objective function with respect to the synaptic weights of a spiking neural network. The method works by measuring the fluctuations in the objective function in response to dynamic perturbation of the membrane conductances of the neurons. It is compatible with recurrent networks of conductance-based model neurons with dynamic synapses. The method can be interpreted as a biologically plausible synaptic learning rule, if the dynamic perturbations are generated by a special class of “empiric” synapses driven by random spike trains from an external source.

  14. Global neural dynamic surface tracking control of strict-feedback systems with application to hypersonic flight vehicle.

    PubMed

    Xu, Bin; Yang, Chenguang; Pan, Yongping

    2015-10-01

    This paper studies both indirect and direct global neural control of strict-feedback systems in the presence of unknown dynamics, using the dynamic surface control (DSC) technique in a novel manner. A new switching mechanism is designed to combine an adaptive neural controller in the neural approximation domain, together with the robust controller that pulls the transient states back into the neural approximation domain from the outside. In comparison with the conventional control techniques, which could only achieve semiglobally uniformly ultimately bounded stability, the proposed control scheme guarantees all the signals in the closed-loop system are globally uniformly ultimately bounded, such that the conventional constraints on initial conditions of the neural control system can be relaxed. The simulation studies of hypersonic flight vehicle (HFV) are performed to demonstrate the effectiveness of the proposed global neural DSC design.

  15. Neural Network Assisted Inverse Dynamic Guidance for Terminally Constrained Entry Flight

    PubMed Central

    Chen, Wanchun

    2014-01-01

    This paper presents a neural network assisted entry guidance law that is designed by applying Bézier approximation. It is shown that a fully constrained approximation of a reference trajectory can be made by using the Bézier curve. Applying this approximation, an inverse dynamic system for an entry flight is solved to generate the guidance command. The guidance solution thus obtained satisfies the terminal constraints for position, flight path, and azimuth angle. In order to satisfy the terminal velocity constraint as well, a prediction of the terminal velocity is required, based on which the approximated Bézier curve is adjusted. An artificial neural network is used for this prediction of the terminal velocity. The method enables faster implementation in achieving fully constrained entry flight. Results from simulations indicate improved performance of the neural network assisted method. The scheme is expected to have prospects for further research on automated onboard control of terminal velocity for both reentry and terminal guidance laws. PMID:24723821
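
    The sketch below illustrates only the Bézier-approximation ingredient: a cubic Bézier curve whose end control points are pinned by terminal constraints while the interior control points are fit to a reference trajectory by least squares. The reference trajectory is synthetic, and the inverse dynamics, terminal-velocity prediction, and neural network of the paper are not reproduced.

    ```python
    import numpy as np

    def bernstein(s):
        """Cubic Bernstein basis evaluated at parameter values s in [0, 1]."""
        s = s[:, None]
        return np.hstack([(1 - s)**3, 3 * s * (1 - s)**2, 3 * s**2 * (1 - s), s**3])

    # Synthetic reference trajectory (e.g., altitude vs normalized range)
    s_ref = np.linspace(0, 1, 50)
    traj = np.column_stack([s_ref, 1.0 - s_ref**2 + 0.1 * np.sin(3 * s_ref)])

    B = bernstein(s_ref)
    p0, p3 = traj[0], traj[-1]                       # endpoints fixed by terminal constraints
    rhs = traj - np.outer(B[:, 0], p0) - np.outer(B[:, 3], p3)
    p12, *_ = np.linalg.lstsq(B[:, 1:3], rhs, rcond=None)   # fit interior control points
    ctrl = np.vstack([p0, p12, p3])

    err = np.max(np.abs(B @ ctrl - traj))
    print("max approximation error of the constrained Bézier fit:", round(err, 4))
    ```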

  16. Neural network-based adaptive dynamic surface control for permanent magnet synchronous motors.

    PubMed

    Yu, Jinpeng; Shi, Peng; Dong, Wenjie; Chen, Bing; Lin, Chong

    2015-03-01

    This brief considers the problem of neural network (NN)-based adaptive dynamic surface control (DSC) for permanent magnet synchronous motors (PMSMs) with parameter uncertainties and load torque disturbance. First, NNs are used to approximate the unknown and nonlinear functions of the PMSM drive system and a novel adaptive DSC is constructed to avoid the explosion of complexity in the backstepping design. Next, under the proposed adaptive neural DSC, the number of adaptive parameters required is reduced to only one, and the designed neural controller structure is much simpler than those in some existing results in the literature, which can guarantee that the tracking error converges to a small neighborhood of the origin. Then, simulations are given to illustrate the effectiveness and potential of the new design technique.

  17. Field-induced superdiffusion and dynamical heterogeneity

    NASA Astrophysics Data System (ADS)

    Gradenigo, Giacomo; Bertin, Eric; Biroli, Giulio

    2016-06-01

    By analyzing two kinetically constrained models of supercooled liquids we show that the anomalous transport of a driven tracer observed in supercooled liquids is another facet of the phenomenon of dynamical heterogeneity. We focus on the Fredrickson-Andersen and the Bertin-Bouchaud-Lequeux models. By numerical simulations and analytical arguments we demonstrate that the violation of the Stokes-Einstein relation and the field-induced superdiffusion observed during a long preasymptotic regime have the same physical origin: while a fraction of probes do not move, others jump repeatedly because they are close to local mobile regions. The anomalous fluctuations observed out of equilibrium in the presence of a pulling force ε, σx²(t) ∼ t^(3/2), which are accompanied by the asymptotic decay αε(t) ∼ t^(-1/2) of the non-Gaussian parameter from nontrivial values to zero, are due to the splitting of the probe population into the two (mobile and immobile) groups and to dynamical correlations, a mechanism expected to happen generically in supercooled liquids.

  18. Nonlinear dynamics in a neural network (parallel) processor

    NASA Astrophysics Data System (ADS)

    Perera, A. G. Unil; Matsik, S. G.; Betarbet, S. R.

    1995-06-01

    We consider an iterative map derived from the device equations for a silicon p+-n-n+ diode, which simulates a biological neuron. This map has been extended to a coupled neuron circuit consisting of two of these artificial neurons connected by a filter circuit, which could be used as a single channel of a parallel asynchronous processor. The extended map output is studied under different conditions to determine the effect of various parameters on the pulsing pattern. As the control parameter is increased, fixed points (both stable and unstable) as well as a limit cycle appear. On further increase, a Hopf bifurcation is seen, causing the disappearance of the limit cycle. The increasing control parameter, which is related to a decrease in the bias applied to the circuit, also causes variation in the location of the fixed points. This variation could be important in applications to neural networks. The control parameter values at which the fixed points appear and the bifurcation occurs can be varied by changing the weighting of the filter circuit. The modeling outputs are compared with the experimental outputs.

  19. REVIEWS OF TOPICAL PROBLEMS: Models of neural dynamics in brain information processing — the developments of 'the decade'

    NASA Astrophysics Data System (ADS)

    Borisyuk, G. N.; Borisyuk, R. M.; Kazanovich, Yakov B.; Ivanitskii, Genrikh R.

    2002-10-01

    Neural network models are discussed that have been developed during the last decade with the purpose of reproducing spatio-temporal patterns of neural activity in different brain structures. The main goal of the modeling was to test hypotheses of synchronization, temporal and phase relations in brain information processing. The models being considered are those of temporal structure of spike sequences, of neural activity dynamics, and oscillatory models of attention and feature integration.

  20. Magnetic field perturbation of neural recording and stimulating microelectrodes

    NASA Astrophysics Data System (ADS)

    Martinez-Santiesteban, Francisco M.; Swanson, Scott D.; Noll, Douglas C.; Anderson, David J.

    2007-04-01

    To improve the overall temporal and spatial resolution of brain mapping techniques, in animal models, some attempts have been reported to join electrophysiological methods with functional magnetic resonance imaging (fMRI). However, little attention has been paid to the image artefacts produced by the microelectrodes that compromise the anatomical or functional information of those studies. This work presents a group of simulations and MR images that show the limitations of wire microelectrodes and the potential advantages of silicon technology, in terms of image quality, in MRI environments. Magnetic field perturbations are calculated using a Fourier-based method for platinum (Pt) and tungsten (W) microwires as well as two different silicon technologies. We conclude that image artefacts produced by microelectrodes are highly dependent not only on the magnetic susceptibility of the materials used but also on the size, shape and orientation of the electrodes with respect to the main magnetic field. In addition, silicon microelectrodes present better MRI characteristics than metallic microelectrodes. However, metallization layers added to silicon materials can adversely affect the quality of MR images. Therefore only those silicon microelectrodes that minimize the amount of metallic material can be considered MR-compatible and therefore suitable for possible simultaneous fMRI and electrophysiological studies. High resolution gradient echo images acquired at 2 T (TR/TE = 100/15 ms, voxel size = 100 × 100 × 100 µm³) of platinum-iridium (Pt-Ir, 90%-10%) and tungsten microwires show a complete signal loss that covers a volume significantly larger than the actual volume occupied by the microelectrodes: roughly 400 times larger for Pt-Ir and 180 for W, at the tip of the microelectrodes. Similar MR images of a single-shank silicon microelectrode only produce a partial volume effect on the voxels occupied by the probe with less than 50% of signal loss.

  1. Application of chaotic dynamics in a recurrent neural network to control: hardware implementation into a novel autonomous roving robot.

    PubMed

    Li, Yongtao; Kurata, Shuhei; Morita, Shogo; Shimizu, So; Munetaka, Daigo; Nara, Shigetoshi

    2008-09-01

    Originating from the viewpoint that complex/chaotic dynamics plays an important role in biological systems, including brains, chaotic dynamics introduced in a recurrent neural network was applied to control. The results of computer experiments were successfully implemented in a novel autonomous roving robot, which can capture only rough target information, with uncertainty, from a few sensors. It was employed to solve practical two-dimensional mazes using adaptive neural dynamics generated by the recurrent neural network, in which four prototype simple motions are embedded. Adaptive switching of a system parameter in the neural network results in stationary motion or chaotic motion depending on the dynamical situation. The results of hardware implementation and practical experiments show that, in given two-dimensional mazes, the robot can successfully avoid obstacles and reach the target. We therefore believe that chaotic dynamics has novel potential for control and could be utilized in practical engineering applications.

  2. Dynamically partitionable autoassociative networks as a solution to the neural binding problem.

    PubMed

    Hayworth, Kenneth J

    2012-01-01

    An outstanding question in theoretical neuroscience is how the brain solves the neural binding problem. In vision, binding can be summarized as the ability to represent that certain properties belong to one object while other properties belong to a different object. I review the binding problem in visual and other domains, and review its simplest proposed solution - the anatomical binding hypothesis. This hypothesis has traditionally been rejected as a true solution because it seems to require a type of one-to-one wiring of neurons that would be impossible in a biological system (as opposed to an engineered system like a computer). I show that this requirement for one-to-one wiring can be loosened by carefully considering how the neural representation is actually put to use by the rest of the brain. This leads to a solution where a symbol is represented not as a particular pattern of neural activation but instead as a piece of a global stable attractor state. I introduce the Dynamically Partitionable AutoAssociative Network (DPAAN) as an implementation of this solution and show how DPAANs can be used in systems which perform perceptual binding and in systems that implement syntax-sensitive rules. Finally I show how the core parts of the cognitive architecture ACT-R can be neurally implemented using a DPAAN as ACT-R's global workspace. Because the DPAAN solution to the binding problem requires only "flat" neural representations (as opposed to the phase encoded representation hypothesized in neural synchrony solutions) it is directly compatible with the most well developed neural models of learning, memory, and pattern recognition.

  3. Robust fault detection of wind energy conversion systems based on dynamic neural networks.

    PubMed

    Talebi, Nasser; Sadrnia, Mohammad Ali; Darabi, Ahmad

    2014-01-01

    Occurrence of faults in wind energy conversion systems (WECSs) is inevitable. In order to detect faults at the appropriate time, avoid heavy economic losses, ensure safe system operation, prevent damage to adjacent relevant systems, and facilitate timely repair of failed components, a fault detection system (FDS) is required. Recurrent neural networks (RNNs) have gained a noticeable position in FDSs and have been widely used for modeling complex dynamical systems. One method for designing an FDS is to prepare a dynamic neural model emulating the normal system behavior. By comparing the outputs of the real system and the neural model, the incidence of faults can be identified. In this paper, by utilizing a comprehensive dynamic model which contains both mechanical and electrical components of the WECS, an FDS based on dynamic RNNs is suggested. The presented FDS detects faults of the generator's angular velocity sensor, the pitch angle sensors, and the pitch actuators. Robustness of the FDS is achieved by employing an adaptive threshold. Simulation results show that the proposed scheme is capable of detecting faults promptly and has very low false and missed alarm rates.

  4. Robust Fault Detection of Wind Energy Conversion Systems Based on Dynamic Neural Networks

    PubMed Central

    Talebi, Nasser; Sadrnia, Mohammad Ali; Darabi, Ahmad

    2014-01-01

    Occurrence of faults in wind energy conversion systems (WECSs) is inevitable. In order to detect faults at the appropriate time, avoid heavy economic losses, ensure safe system operation, prevent damage to adjacent relevant systems, and facilitate timely repair of failed components, a fault detection system (FDS) is required. Recurrent neural networks (RNNs) have gained a noticeable position in FDSs and have been widely used for modeling complex dynamical systems. One method for designing an FDS is to prepare a dynamic neural model emulating the normal system behavior. By comparing the outputs of the real system and the neural model, the incidence of faults can be identified. In this paper, by utilizing a comprehensive dynamic model which contains both mechanical and electrical components of the WECS, an FDS based on dynamic RNNs is suggested. The presented FDS detects faults of the generator's angular velocity sensor, the pitch angle sensors, and the pitch actuators. Robustness of the FDS is achieved by employing an adaptive threshold. Simulation results show that the proposed scheme is capable of detecting faults promptly and has very low false and missed alarm rates. PMID:24744774

  5. Non-linear dynamics in recurrently connected neural circuits implement Bayesian inference by sampling

    NASA Astrophysics Data System (ADS)

    Ticchi, Alessandro; Faisal, Aldo A.; Brain; Behaviour Lab Team

    2015-03-01

    Experimental evidence at the behavioural level shows that the brain is able to make Bayes-optimal inferences and decisions (Kording and Wolpert 2004, Nature; Ernst and Banks, 2002, Nature), yet at the circuit level little is known about how neural circuits may implement Bayesian learning and inference (but see Ma et al. 2006, Nat Neurosci). Molecular sources of noise are clearly established to be powerful enough to pose limits on neural function and structure in the brain (Faisal et al. 2008, Nat Rev Neurosci; Faisal et al. 2005, Curr Biol). We propose a spiking neuron model in which we exploit molecular noise as a useful resource to implement close-to-optimal inference by sampling. Specifically, we derive a synaptic plasticity rule which, coupled with integrate-and-fire neural dynamics and recurrent inhibitory connections, enables a neural population to learn the statistical properties of the received sensory input (prior). Moreover, the proposed model allows prior knowledge to be combined with additional sources of information (likelihood) from another neural population, and implements in spiking neurons a Markov chain Monte Carlo algorithm which generates samples from the inferred posterior distribution.

  6. Dynamical Behaviors of Multiple Equilibria in Competitive Neural Networks With Discontinuous Nonmonotonic Piecewise Linear Activation Functions.

    PubMed

    Nie, Xiaobing; Zheng, Wei Xing

    2016-03-01

    This paper addresses the problem of coexistence and dynamical behaviors of multiple equilibria for competitive neural networks. First, a general class of discontinuous nonmonotonic piecewise linear activation functions is introduced for competitive neural networks. Then based on the fixed point theorem and theory of strict diagonal dominance matrix, it is shown that under some conditions, such n-neuron competitive neural networks can have 5^n equilibria, among which 3^n equilibria are locally stable and the others are unstable. More importantly, it is revealed that the neural networks with the discontinuous activation functions introduced in this paper can have both more total equilibria and locally stable equilibria than the ones with other activation functions, such as the continuous Mexican-hat-type activation function and discontinuous two-level activation function. Furthermore, the 3^n locally stable equilibria given in this paper are located in not only saturated regions, but also unsaturated regions, which is different from the existing results on multistability of neural networks with multiple level activation functions. A simulation example is provided to illustrate and validate the theoretical findings.

  7. Automatic Classification of Volcanic Earthquakes Using Multi-Station Waveforms and Dynamic Neural Networks

    NASA Astrophysics Data System (ADS)

    Bruton, C. P.; West, M. E.

    2013-12-01

    Earthquakes and seismicity have long been used to monitor volcanoes. In addition to time, location, and magnitude of an earthquake, the characteristics of the waveform itself are important. For example, low-frequency or hybrid type events could be generated by magma rising toward the surface. A rockfall event could indicate a growing lava dome. Classification of earthquake waveforms is thus a useful tool in volcano monitoring. A procedure to perform such classification automatically could flag certain event types immediately, instead of waiting for a human analyst's review. Inspired by speech recognition techniques, we have developed a procedure to classify earthquake waveforms using artificial neural networks. A neural network can be "trained" with an existing set of input and desired output data; in this case, we use a set of earthquake waveforms (input) that has been classified by a human analyst (desired output). After training the neural network, new waveforms can be classified automatically as they are presented. Our procedure uses waveforms from multiple stations, making it robust to seismic network changes and outages. The use of a dynamic time-delay neural network allows waveforms to be presented without precise alignment in time, and thus could be applied to continuous data or to seismic events without clear start and end times. We have evaluated several different training algorithms and neural network structures to determine their effects on classification performance. We apply this procedure to earthquakes recorded at Mount Spurr and Katmai in Alaska, and Uturuncu Volcano in Bolivia.
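
    As a hedged sketch of the classification step only, the code below trains a small neural network on coarse spectral features of labelled synthetic waveforms and classifies held-out events; the two synthetic event types, the feature choice, and the network size stand in for the paper's multi-station, time-delay network procedure and real analyst-labelled data.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    fs, n_samp = 100, 512                      # 100 Hz sampling, ~5 s windows

    def synth_event(kind):
        t = np.arange(n_samp) / fs
        f0 = 8.0 if kind == "VT" else 1.5      # dominant frequency distinguishes classes
        env = np.exp(-t / (1.0 if kind == "VT" else 2.5))
        return env * np.sin(2 * np.pi * f0 * t) + rng.normal(0, 0.3, n_samp)

    def features(wave):
        spec = np.abs(np.fft.rfft(wave))
        bands = np.array_split(spec, 8)        # coarse band energies as features
        return np.log([b.sum() + 1e-9 for b in bands])

    labels = rng.choice(["VT", "LF"], size=400)
    X = np.array([features(synth_event(k)) for k in labels])
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=1)

    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=1)
    clf.fit(X_tr, y_tr)
    print("held-out classification accuracy:", clf.score(X_te, y_te))
    ```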

  8. Autonomous dynamics in neural networks: the dHAN concept and associative thought processes

    NASA Astrophysics Data System (ADS)

    Gros, Claudius

    2007-02-01

    The neural activity of the human brain is dominated by self-sustained activities. External sensory stimuli influence this autonomous activity but they do not drive the brain directly. Most standard artificial neural network models are however input driven and do not show spontaneous activities. It constitutes a challenge to develop organizational principles for controlled, self-sustained activity in artificial neural networks. Here we propose and examine the dHAN concept for autonomous associative thought processes in dense and homogeneous associative networks. An associative thought process is characterized, within this approach, by a time-series of transient attractors. Each transient state corresponds to a stored piece of information, a memory. The subsequent transient states are characterized by large associative overlaps, which are identical to acquired patterns. Memory states, the acquired patterns, have such a dual functionality. In this approach the self-sustained neural activity has a central functional role. The network acquires a discrimination capability, as external stimuli need to compete with the autonomous activity. Noise in the input is readily filtered out. Hebbian learning of external patterns occurs simultaneously with the ongoing associative thought process. The autonomous dynamics needs a long-term working-point optimization which acquires within the dHAN concept a dual functionality: it stabilizes the time development of the associative thought process and limits runaway synaptic growth, which generically occurs otherwise in neural networks with self-induced activities and Hebbian-type learning rules.

  9. Network burst dynamics under heterogeneous cholinergic modulation of neural firing properties and heterogeneous synaptic connectivity

    PubMed Central

    Knudstrup, Scott; Zochowski, Michal; Booth, Victoria

    2016-01-01

    The characteristics of neural network activity depend on intrinsic neural properties and synaptic connectivity in the network. In brain networks, both of these properties are critically affected by the type and levels of neuromodulators present. The expression of many of the most powerful neuromodulators, including acetylcholine (ACh), varies tonically and phasically with behavioural state, leading to dynamic, heterogeneous changes in intrinsic neural properties and synaptic connectivity properties. Namely, ACh significantly alters neural firing properties as measured by the phase response curve in a manner that has been shown to alter the propensity for network synchronization. The aim of this simulation study was to build an understanding of how heterogeneity in cholinergic modulation of neural firing properties and heterogeneity in synaptic connectivity affect the initiation and maintenance of synchronous network bursting in excitatory networks. We show that cells that display different levels of ACh modulation have differential roles in generating network activity: weakly modulated cells are necessary for burst initiation and provide synchronizing drive to the rest of the network, whereas strongly modulated cells provide the overall activity level necessary to sustain burst firing. By applying several quantitative measures of network activity, we further show that the existence of network bursting and its characteristics, such as burst duration and intraburst synchrony, are dependent on the fraction of cell types providing the synaptic connections in the network. These results suggest mechanisms underlying ACh modulation of brain oscillations and the modulation of seizure activity during sleep states. PMID:26869313

  10. Different dynamic resting state fMRI patterns are linked to different frequencies of neural activity.

    PubMed

    Thompson, Garth John; Pan, Wen-Ju; Keilholz, Shella Dawn

    2015-07-01

    Resting state functional magnetic resonance imaging (rsfMRI) results have indicated that network mapping can contribute to understanding behavior and disease, but it has been difficult to translate the maps created with rsfMRI to neuroelectrical states in the brain. Recently, dynamic analyses have revealed multiple patterns in the rsfMRI signal that are strongly associated with particular bands of neural activity. To further investigate these findings, simultaneously recorded invasive electrophysiology and rsfMRI from rats were used to examine two types of electrical activity (directly measured low-frequency/infraslow activity and band-limited power of higher frequencies) and two types of dynamic rsfMRI (quasi-periodic patterns or QPP, and sliding window correlation or SWC). The relationship between neural activity and dynamic rsfMRI was tested under three anesthetic states in rats: dexmedetomidine and high and low doses of isoflurane. Under dexmedetomidine, the lightest anesthetic, infraslow electrophysiology correlated with QPP but not SWC, whereas band-limited power in higher frequencies correlated with SWC but not QPP. Results were similar under isoflurane; however, the QPP was also correlated to band-limited power, possibly due to the burst-suppression state induced by the anesthetic agent. The results provide additional support for the hypothesis that the two types of dynamic rsfMRI are linked to different frequencies of neural activity, but isoflurane anesthesia may make this relationship more complicated. Understanding which neural frequency bands appear as particular dynamic patterns in rsfMRI may ultimately help isolate components of the rsfMRI signal that are of interest to disorders such as schizophrenia and attention deficit disorder.
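
    Of the two dynamic rsfMRI analyses contrasted here, sliding window correlation is the simpler to state precisely. The following sketch is illustrative only; the window length and the toy signals are assumptions, not the study's acquisition or analysis parameters. It computes SWC between two region time courses:

    ```python
    import numpy as np

    def sliding_window_correlation(x, y, win=60, step=1):
        """Pearson correlation between two region time courses in sliding windows.

        x, y : 1-D arrays of equal length (e.g. fMRI signals of two regions).
        Returns one correlation value per window position.
        """
        starts = range(0, len(x) - win + 1, step)
        return np.array([np.corrcoef(x[s:s + win], y[s:s + win])[0, 1] for s in starts])

    # Toy usage: two noisy signals whose coupling flips sign halfway through.
    rng = np.random.default_rng(0)
    t = np.arange(600)
    common = np.sin(2 * np.pi * t / 40.0)
    x = common + 0.5 * rng.standard_normal(t.size)
    y = np.where(t < 300, common, -common) + 0.5 * rng.standard_normal(t.size)
    swc = sliding_window_correlation(x, y)
    print(swc[:3], swc[-3:])   # strongly positive early, strongly negative late
    ```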

  11. Different dynamic resting state fMRI patterns are linked to different frequencies of neural activity

    PubMed Central

    Thompson, Garth John; Pan, Wen-Ju

    2015-01-01

    Resting state functional magnetic resonance imaging (rsfMRI) results have indicated that network mapping can contribute to understanding behavior and disease, but it has been difficult to translate the maps created with rsfMRI to neuroelectrical states in the brain. Recently, dynamic analyses have revealed multiple patterns in the rsfMRI signal that are strongly associated with particular bands of neural activity. To further investigate these findings, simultaneously recorded invasive electrophysiology and rsfMRI from rats were used to examine two types of electrical activity (directly measured low-frequency/infraslow activity and band-limited power of higher frequencies) and two types of dynamic rsfMRI (quasi-periodic patterns or QPP, and sliding window correlation or SWC). The relationship between neural activity and dynamic rsfMRI was tested under three anesthetic states in rats: dexmedetomidine and high and low doses of isoflurane. Under dexmedetomidine, the lightest anesthetic, infraslow electrophysiology correlated with QPP but not SWC, whereas band-limited power in higher frequencies correlated with SWC but not QPP. Results were similar under isoflurane; however, the QPP was also correlated to band-limited power, possibly due to the burst-suppression state induced by the anesthetic agent. The results provide additional support for the hypothesis that the two types of dynamic rsfMRI are linked to different frequencies of neural activity, but isoflurane anesthesia may make this relationship more complicated. Understanding which neural frequency bands appear as particular dynamic patterns in rsfMRI may ultimately help isolate components of the rsfMRI signal that are of interest to disorders such as schizophrenia and attention deficit disorder. PMID:26041826

  12. Neural network architecture for cognitive navigation in dynamic environments.

    PubMed

    Villacorta-Atienza, José Antonio; Makarov, Valeri A

    2013-12-01

    Navigation in time-evolving environments with moving targets and obstacles requires cognitive abilities widely demonstrated by even the simplest animals. However, it is a long-standing challenging problem for artificial agents. Cognitive autonomous robots coping with this problem must solve two essential tasks: 1) understand the environment in terms of what may happen and how the agent can deal with it, and 2) learn successful experiences for their further use in an automatic subconscious way. The recently introduced concept of compact internal representation (CIR) provides the ground for both tasks. CIR is a specific cognitive map that compacts time-evolving situations into static structures containing information necessary for navigation. It belongs to the class of global approaches, i.e., it finds trajectories to a target when they exist but also detects situations when no solution can be found. Here we extend the concept to situations with mobile targets. Then, using CIR as a core, we propose a closed-loop neural network architecture consisting of conscious and subconscious pathways for efficient decision-making. The conscious pathway provides solutions to novel situations if the default subconscious pathway fails to guide the agent to a target. Employing experiments with roving robots and numerical simulations, we show that the proposed architecture provides the robot with cognitive abilities and enables reliable and flexible navigation in realistic time-evolving environments. We prove that the subconscious pathway is robust against uncertainty in the sensory information. Thus, if a novel situation is similar but not identical to previous experience (because of, e.g., noisy perception), then the subconscious pathway is able to provide an effective solution.

  13. Dynamic modeling of physical phenomena for PRAs using neural networks

    SciTech Connect

    Benjamin, A.S.; Brown, N.N.; Paez, T.L.

    1998-04-01

    In most probabilistic risk assessments, there is a set of accident scenarios that involves the physical responses of a system to environmental challenges. Examples include the effects of earthquakes and fires on the operability of a nuclear reactor safety system, the effects of fires and impacts on the safety integrity of a nuclear weapon, and the effects of human intrusions on the transport of radionuclides from an underground waste facility. The physical responses of the system to these challenges can be quite complex, and their evaluation may require the use of detailed computer codes that are very time consuming to execute. Yet, to perform meaningful probabilistic analyses, it is necessary to evaluate the responses for a large number of variations in the input parameters that describe the initial state of the system, the environments to which it is exposed, and the effects of human interaction. Because the uncertainties of the system response may be very large, it may also be necessary to perform these evaluations for various values of modeling parameters that have high uncertainties, such as material stiffnesses, surface emissivities, and ground permeabilities. The authors have been exploring the use of artificial neural networks (ANNs) as a means for estimating the physical responses of complex systems to phenomenological events such as those cited above. These networks are designed as mathematical constructs with adjustable parameters that can be trained so that the results obtained from the networks will simulate the results obtained from the detailed computer codes. The intent is for the networks to provide an adequate simulation of the detailed codes over a significant range of variables while requiring only a small fraction of the computer processing time required by the detailed codes. This enables the authors to integrate the physical response analyses into the probabilistic models in order to estimate the probabilities of various responses.
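
    The workflow described here is to train a network on a limited number of expensive code runs and then evaluate the cheap surrogate inside the probabilistic analysis. A minimal sketch of that idea, with a purely analytic stand-in for the detailed physics code and an arbitrary one-hidden-layer network (none of this is the authors' implementation or their input parameterization), could look as follows:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def detailed_code(x):
        """Placeholder for the expensive physics code (purely analytic stand-in)."""
        return np.sin(x[:, 0]) + x[:, 1] * x[:, 2]

    # A modest number of "expensive" runs provides the training data.
    X = rng.uniform(-1, 1, size=(200, 3))      # e.g. stiffness, emissivity, load level
    y = detailed_code(X)[:, None]

    # One-hidden-layer surrogate trained by plain gradient descent on the MSE.
    W1, b1 = 0.3 * rng.standard_normal((3, 20)), np.zeros(20)
    W2, b2 = 0.3 * rng.standard_normal((20, 1)), np.zeros(1)
    lr = 0.05
    for _ in range(3000):
        H = np.tanh(X @ W1 + b1)
        err = H @ W2 + b2 - y                  # (n, 1) residuals
        gW2, gb2 = H.T @ err / len(X), err.mean(axis=0)
        dH = (err @ W2.T) * (1 - H ** 2)       # backpropagated error
        gW1, gb1 = X.T @ dH / len(X), dH.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

    # The cheap surrogate can now be sampled heavily inside the probabilistic model.
    X_mc = rng.uniform(-1, 1, size=(100_000, 3))
    response = np.tanh(X_mc @ W1 + b1) @ W2 + b2
    print("estimated P(response > 0.5):", (response > 0.5).mean())
    ```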

  14. Modeling the influences of nanoparticles on neural field oscillations in thalamocortical networks.

    PubMed

    Busse, Michael; Kraegeloh, Annette; Arzt, Eduard; Strauss, Daniel J

    2012-01-01

    The purpose of this study is twofold. First, we present a simplified multiscale modeling approach integrating activity on the scale of ionic channels into the spatiotemporal scale of neural field potentials: resting upon a Hodgkin-Huxley based single cell model, we introduced a neuronal feedback circuit based on the Llinás model of thalamocortical activity and binding, where all cell-specific intrinsic properties were adopted from patch-clamp measurements. In this paper, we expand this existing model by integrating the output to the spatiotemporal scale of field potentials. Those are supposed to originate from the parallel activity of a variety of synchronized thalamocortical columns at the quasi-microscopic level, where the involved neurons are gathered together in units. Second, and more important, we study the possible effects of nanoparticles (NPs) that are supposed to interact with thalamic cells of our network model. In two preliminary studies we demonstrated in vitro and in vivo effects of NPs on the ionic channels of single neurons and thereafter on neuronal feedback circuits. By means of our new model, we then assumed NP-induced changes in the ionic currents of the involved thalamic neurons. Here we found extensive, diversified pattern formation of neural field potentials when compared to the modeled activity without the addition of neuromodulating NPs. This model provides predictions about the influences of NPs on spatiotemporal neural field oscillations in thalamocortical networks. These predictions can be validated by high spatiotemporal resolution electrophysiological measurements such as voltage-sensitive dyes and multiarray recordings.

  15. Altered temporal dynamics of neural adaptation in the aging human auditory cortex.

    PubMed

    Herrmann, Björn; Henry, Molly J; Johnsrude, Ingrid S; Obleser, Jonas

    2016-09-01

    Neural response adaptation plays an important role in perception and cognition. Here, we used electroencephalography to investigate how aging affects the temporal dynamics of neural adaptation in human auditory cortex. Younger (18-31 years) and older (51-70 years) normal hearing adults listened to tone sequences with varying onset-to-onset intervals. Our results show long-lasting neural adaptation such that the response to a particular tone is a nonlinear function of the extended temporal history of sound events. Most important, aging is associated with multiple changes in auditory cortex; older adults exhibit larger and less variable response magnitudes, a larger dynamic response range, and a reduced sensitivity to temporal context. Computational modeling suggests that reduced adaptation recovery times underlie these changes in the aging auditory cortex and that the extended temporal stimulation has less influence on the neural response to the current sound in older compared with younger individuals. Our human electroencephalography results critically narrow the gap to animal electrophysiology work suggesting a compensatory release from cortical inhibition accompanying hearing loss and aging.
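
    To make the modeling claim concrete, the sketch below implements one simple, assumed form of such an adaptation model: each tone depletes a response "resource" that recovers exponentially, so the amplitude evoked by a tone depends on the full history of onset-to-onset intervals. The time constants and the depletion fraction are illustrative values, not the fitted parameters from the study.

    ```python
    import numpy as np

    def tone_responses(onsets, tau=2.5, depletion=0.6):
        """Response amplitudes for a tone sequence under exponential adaptation.

        Each tone consumes a fraction `depletion` of an available response
        'resource', which then recovers toward 1 with time constant `tau`
        (seconds), so the response to a tone depends on the extended history
        of onset-to-onset intervals, not just the previous interval.
        """
        a, last, out = 1.0, None, []
        for t in onsets:
            if last is not None:
                a = 1.0 - (1.0 - a) * np.exp(-(t - last) / tau)   # recovery since the last tone
            out.append(a)                 # amplitude evoked by this tone
            a *= 1.0 - depletion          # adaptation caused by this tone
            last = t
        return np.array(out)

    onsets = np.cumsum(np.full(10, 0.5))            # ten tones, 500 ms onset-to-onset
    print(tone_responses(onsets, tau=2.5))          # long recovery time: history strongly suppresses responses
    print(tone_responses(onsets, tau=0.8))          # reduced recovery time: history has less influence
    ```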

  16. Research of Recurrent Dynamic Neural Networks for Adaptive Control of Complex Dynamic Systems

    DTIC Science & Technology

    2010-07-08

    Fragmentary excerpt (table-of-contents and text snippets only): the report covers experiments on gesture recognition, including systems based on a single recurrent neural network and on multi-modular recurrent neural networks, and notes that the presence of non-linear effects increases the advantage of neurocontrol over linear control methods.

  17. Calcium imaging of neural circuits with extended depth-of-field light-sheet microscopy.

    PubMed

    Quirin, Sean; Vladimirov, Nikita; Yang, Chao-Tsung; Peterka, Darcy S; Yuste, Rafael; Ahrens, Misha B

    2016-03-01

    Increasing the volumetric imaging speed of light-sheet microscopy will improve its ability to detect fast changes in neural activity. Here, a system is introduced for brain-wide imaging of neural activity in the larval zebrafish by coupling structured illumination with cubic phase extended depth-of-field (EDoF) pupil encoding. This microscope enables faster light-sheet imaging and facilitates arbitrary plane scanning, removing constraints on acquisition speed, alignment tolerances, and physical motion near the sample. The usefulness of this method is demonstrated by performing multi-plane calcium imaging in the fish brain with a 416×832×160 μm field of view at 33 Hz. The optomotor response behavior of the zebrafish is monitored at high speeds, and time-locked correlations of neuronal activity are resolved across its brain.

  18. Dynamic transitions among multiple oscillators of synchronized bursts in cultured neural networks

    NASA Astrophysics Data System (ADS)

    Hoan Kim, June; Heo, Ryoun; Choi, Joon Ho; Lee, Kyoung J.

    2014-04-01

    Synchronized neural bursts are a salient dynamic feature of biological neural networks, having important roles in brain functions. This report investigates the deterministic nature behind seemingly random temporal sequences of inter-burst intervals generated by cultured networks of cortical cells. We found that the complex sequences were an intricate patchwork of several noisy ‘burst oscillators’, whose periods covered a wide dynamic range, from a few tens of milliseconds to tens of seconds. The transition from one type of oscillator to another favored a particular passage, while the dwelling time between two neighboring transitions followed an exponential distribution showing no memory. With different amounts of bicuculline or picrotoxin application, we could also terminate the oscillators, generate new ones or tune their periods.

  19. Dynamic changes in neural circuitry during adolescence are associated with persistent attenuation of fear memories

    PubMed Central

    Pattwell, Siobhan S.; Liston, Conor; Jing, Deqiang; Ninan, Ipe; Yang, Rui R.; Witztum, Jonathan; Murdock, Mitchell H.; Dincheva, Iva; Bath, Kevin G.; Casey, B. J.; Deisseroth, Karl; Lee, Francis S.

    2016-01-01

    Fear can be highly adaptive in promoting survival, yet it can also be detrimental when it persists long after a threat has passed. Flexibility of the fear response may be most advantageous during adolescence when animals are prone to explore novel, potentially threatening environments. Two opposing adolescent fear-related behaviours—diminished extinction of cued fear and suppressed expression of contextual fear—may serve this purpose, but the neural basis underlying these changes is unknown. Using microprisms to image prefrontal cortical spine maturation across development, we identify dynamic BLA-hippocampal-mPFC circuit reorganization associated with these behavioural shifts. Exploiting this sensitive period of neural development, we modified existing behavioural interventions in an age-specific manner to attenuate adolescent fear memories persistently into adulthood. These findings identify novel strategies that leverage dynamic neurodevelopmental changes during adolescence with the potential to extinguish pathological fears implicated in anxiety and stress-related disorders. PMID:27215672

  20. Neural Dynamic Logic of Consciousness: The Knowledge Instinct

    DTIC Science & Technology

    2007-09-07

    Fragmentary excerpt: interaction between language and cognition is an active field of study (see [xii] for the neurodynamics of this interaction and for more references). The work addresses mechanisms of cognition, emotion, and language, and studies multi-agent systems in which each agent possesses complex neurodynamics of interaction, asking to what extent the hierarchy is inborn versus adaptively learned.

  1. Prototype extraction in material attractor neural networks with stochastic dynamic learning

    NASA Astrophysics Data System (ADS)

    Fusi, Stefano

    1995-04-01

    Dynamic learning of random stimuli can be described as a random walk among the stable synaptic values. It is shown that prototype extraction can take place in material attractor neural networks when the stimuli are correlated and hierarchically organized. The network learns a set of attractors representing the prototypes in a completely unsupervised fashion and is able to modify its attractors when the input statistics change. Learning and forgetting rates are computed.
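
    A toy version of the "random walk among the stable synaptic values" picture can be written with binary synapses that make stochastic Hebbian transitions each time a noisy exemplar of a prototype is presented; over many presentations the synaptic matrix drifts toward the prototype structure. The transition probabilities, network size, and noise level below are arbitrary choices for illustration and are not the paper's model.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    N = 200                                     # neurons; synapses J are binary (0 or 1)
    J = rng.integers(0, 2, size=(N, N)).astype(float)

    prototype = rng.integers(0, 2, size=N)      # the underlying class prototype
    def noisy_exemplar(flip=0.1):
        """A stimulus of the class: the prototype with a fraction of bits flipped."""
        xi = prototype.copy()
        mask = rng.random(N) < flip
        xi[mask] = 1 - xi[mask]
        return xi

    q_pot = q_dep = 0.02                        # stochastic transition probabilities
    for _ in range(2000):
        xi = noisy_exemplar()
        pre, post = np.meshgrid(xi, xi)         # pre[i, j] = xi[j], post[i, j] = xi[i]
        # Hebbian stochastic transitions: potentiate when pre and post are both
        # active, depress when pre is active and post is silent.
        potentiate = (pre == 1) & (post == 1) & (rng.random((N, N)) < q_pot)
        depress    = (pre == 1) & (post == 0) & (rng.random((N, N)) < q_dep)
        J[potentiate] = 1.0
        J[depress] = 0.0

    # Synapses between neurons that are co-active in the prototype end up mostly
    # potentiated: the prototype has been extracted from noisy exemplars alone.
    proto_pairs = np.outer(prototype, prototype).astype(bool)
    print("mean J on prototype pairs:", J[proto_pairs].mean())
    print("mean J elsewhere:         ", J[~proto_pairs].mean())
    ```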

  2. Neural network interpolation of the magnetic field for the LISA Pathfinder Diagnostics Subsystem

    NASA Astrophysics Data System (ADS)

    Diaz-Aguilo, Marc; Lobo, Alberto; García-Berro, Enrique

    2011-05-01

    LISA Pathfinder is a science and technology demonstrator of the European Space Agency within the framework of its LISA mission, which aims to be the first space-borne gravitational wave observatory. The payload of LISA Pathfinder is the so-called LISA Technology Package, which is designed to measure relative accelerations between two test masses in nominal free fall. Its disturbances are monitored and dealt with by the diagnostics subsystem. This subsystem consists of several modules, and one of these is the magnetic diagnostics system, which includes a set of four tri-axial fluxgate magnetometers, intended to measure with high precision the magnetic field at the positions of the test masses. However, since the magnetometers are located far from the positions of the test masses, the magnetic field at their positions must be interpolated. It has been recently shown that because there are not enough magnetic channels, classical interpolation methods fail to derive reliable measurements at the positions of the test masses, while neural network interpolation can provide the required measurements at the desired accuracy. In this paper we expand these studies and we assess the reliability and robustness of the neural network interpolation scheme for variations of the locations and possible offsets of the magnetometers, as well as for changes in environmental conditions. We find that neural networks are robust enough to derive accurate measurements of the magnetic field at the positions of the test masses in most circumstances.

  3. The effects of noise on binocular rivalry waves: a stochastic neural field model

    NASA Astrophysics Data System (ADS)

    Webber, Matthew A.; Bressloff, Paul C.

    2013-03-01

    We analyze the effects of extrinsic noise on traveling waves of visual perception in a competitive neural field model of binocular rivalry. The model consists of two one-dimensional excitatory neural fields, whose activity variables represent the responses to left-eye and right-eye stimuli, respectively. The two networks mutually inhibit each other, and slow adaptation is incorporated into the model by taking the network connections to exhibit synaptic depression. We first show how, in the absence of any noise, the system supports a propagating composite wave consisting of an invading activity front in one network co-moving with a retreating front in the other network. Using a separation of time scales and perturbation methods previously developed for stochastic reaction-diffusion equations, we then show how extrinsic noise in the activity variables leads to a diffusive-like displacement (wandering) of the composite wave from its uniformly translating position at long time scales, and fluctuations in the wave profile around its instantaneous position at short time scales. We use our analysis to calculate the first-passage-time distribution for a stochastic rivalry wave to travel a fixed distance, which we find to be given by an inverse Gaussian. Finally, we investigate the effects of noise in the depression variables, which under an adiabatic approximation lead to quenched disorder in the neural fields during propagation of a wave.
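
    The deterministic skeleton of this class of model, two one-dimensional excitatory fields that mutually inhibit each other, with synaptic depression and additive noise, can be integrated with a simple Euler-Maruyama scheme. The sketch below shows the structure only; the kernel, gain, depression, and noise parameters are invented and are not tuned to reproduce the paper's traveling-wave results.

    ```python
    import numpy as np

    # Euler-Maruyama integration of two mutually inhibiting 1-D fields (left/right
    # eye) with synaptic depression and additive noise in the activity variables.
    rng = np.random.default_rng(3)
    nx, L = 200, 10.0
    x = np.linspace(0, L, nx)
    dt, nsteps = 0.01, 2000

    def gauss_rows(sigma):
        d = np.abs(x[:, None] - x[None, :])
        w = np.exp(-d ** 2 / (2 * sigma ** 2))
        return w / w.sum(axis=1, keepdims=True)     # row-normalised excitatory kernel

    W = 2.0 * gauss_rows(0.5)                        # within-field excitation
    f = lambda u: 1.0 / (1.0 + np.exp(-8.0 * (u - 0.3)))   # firing-rate function

    uL = np.where(x < L / 2, 1.0, 0.0)               # left-eye activity dominates on the left
    uR = 1.0 - uL
    qL, qR = np.ones(nx), np.ones(nx)                # synaptic depression variables
    beta, tau_q, inh, eps = 0.8, 50.0, 1.2, 0.05

    for _ in range(nsteps):
        fL, fR = f(uL), f(uR)
        uL += dt * (-uL + W @ (qL * fL) - inh * qR * fR + 0.5) \
              + eps * np.sqrt(dt) * rng.standard_normal(nx)
        uR += dt * (-uR + W @ (qR * fR) - inh * qL * fL + 0.5) \
              + eps * np.sqrt(dt) * rng.standard_normal(nx)
        qL += dt * ((1 - qL) / tau_q - beta * qL * fL)
        qR += dt * ((1 - qR) / tau_q - beta * qR * fR)

    # Location of the interface between the left- and right-dominant regions.
    print("interface near x =", x[np.argmin(np.abs(uL - uR))])
    ```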

  4. Dynamic network communication as a unifying neural basis for cognition, development, aging, and disease.

    PubMed

    Voytek, Bradley; Knight, Robert T

    2015-06-15

    Perception, cognition, and social interaction depend upon coordinated neural activity. This coordination operates within noisy, overlapping, and distributed neural networks operating at multiple timescales. These networks are built upon a structural scaffolding with intrinsic neuroplasticity that changes with development, aging, disease, and personal experience. In this article, we begin from the perspective that successful interregional communication relies upon the transient synchronization between distinct low-frequency (<80 Hz) oscillations, allowing for brief windows of communication via phase-coordinated local neuronal spiking. From this, we construct a theoretical framework for dynamic network communication, arguing that these networks reflect a balance between oscillatory coupling and local population spiking activity and that these two levels of activity interact. We theorize that when oscillatory coupling is too strong, spike timing within the local neuronal population becomes too synchronous; when oscillatory coupling is too weak, spike timing is too disorganized. Each results in specific disruptions to neural communication. These alterations in communication dynamics may underlie cognitive changes associated with healthy development and aging, in addition to neurological and psychiatric disorders. A number of neurological and psychiatric disorders-including Parkinson's disease, autism, depression, schizophrenia, and anxiety-are associated with abnormalities in oscillatory activity. Although aging, psychiatric and neurological disease, and experience differ in the biological changes to structural gray or white matter, neurotransmission, and gene expression, our framework suggests that any resultant cognitive and behavioral changes in normal or disordered states or their treatment are a product of how these physical processes affect dynamic network communication.

  5. Neural dynamics of error processing in medial frontal cortex.

    PubMed

    Mars, Rogier B; Coles, Michael G H; Grol, Meike J; Holroyd, Clay B; Nieuwenhuis, Sander; Hulstijn, Wouter; Toni, Ivan

    2005-12-01

    Adaptive behavior requires an organism to evaluate the outcome of its actions, such that future behavior can be adjusted accordingly and the appropriate response selected. During associative learning, the time at which such evaluative information is available changes as learning progresses, from the delivery of performance feedback early in learning to the execution of the response itself during learned performance. Here, we report a learning-dependent shift in the timing of activation in the rostral cingulate zone of the anterior cingulate cortex from external error feedback to internal error detection. This pattern of activity is seen only in the anterior cingulate, not in the pre-supplementary motor area. The dynamics of these reciprocal changes are consistent with the claim that the rostral cingulate zone is involved in response selection on the basis of the expected outcome of an action. Specifically, these data illustrate how the anterior cingulate receives evaluative information, indicating that an action has not produced the desired result.

  6. The simplest problem in the collective dynamics of neural networks: is synchrony stable?

    NASA Astrophysics Data System (ADS)

    Timme, Marc; Wolf, Fred

    2008-07-01

    For spiking neural networks we consider the stability problem of global synchrony, arguably the simplest non-trivial collective dynamics in such networks. We find that even this simplest dynamical problem—local stability of synchrony—is non-trivial to solve and requires novel methods for its solution. In particular, the discrete mode of pulsed communication together with the complicated connectivity of neural interaction networks requires a non-standard approach. The dynamics in the vicinity of the synchronous state is determined by a multitude of linear operators, in contrast to a single stability matrix in conventional linear stability theory. This unusual property qualitatively depends on network topology and may be neglected for globally coupled homogeneous networks. For generic networks, however, the number of operators increases exponentially with the size of the network. We present methods to treat this multi-operator problem exactly. First, based on the Gershgorin and Perron-Frobenius theorems, we derive bounds on the eigenvalues that provide important information about the synchronization process but are not sufficient to establish the asymptotic stability or instability of the synchronous state. We then present a complete analysis of asymptotic stability for topologically strongly connected networks using simple graph-theoretical considerations. For inhibitory interactions between dissipative (leaky) oscillatory neurons the synchronous state is stable, independent of the parameters and the network connectivity. These results indicate that pulse-like interactions play a profound role in network dynamical systems, and in particular in the dynamics of biological synchronization, unless the coupling is homogeneous and all-to-all. The concepts introduced here are expected to also facilitate the exact analysis of more complicated dynamical network states, for instance the irregular balanced activity in cortical neural networks.

  7. Using recurrent neural networks to optimize dynamical decoupling for quantum memory

    NASA Astrophysics Data System (ADS)

    August, Moritz; Ni, Xiaotong

    2017-01-01

    We utilize machine learning models that are based on recurrent neural networks to optimize dynamical decoupling (DD) sequences. Dynamical decoupling is a relatively simple technique for suppressing the errors in quantum memory for certain noise models. In numerical simulations, we show that with minimum use of prior knowledge and starting from random sequences, the models are able to improve over time and eventually output DD sequences with performance better than that of the well known DD families. Furthermore, our algorithm is easy to implement in experiments to find solutions tailored to the specific hardware, as it treats the figure of merit as a black box.
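
    The key point that the figure of merit is treated as a black box can be illustrated without any machine learning at all. In the toy sketch below, a crude dephasing simulation (instantaneous pi-pulses that flip the sign with which an Ornstein-Uhlenbeck phase noise accumulates) supplies the score, and plain random search stands in for the recurrent-network sequence generator. All parameter values and the noise model are invented for illustration; this is not the authors' method, noise model, or figure of merit.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def coherence(pulse_times, T=1.0, n_noise=200, n_steps=400):
        """Black-box figure of merit: mean coherence of a dephasing qubit at time T.

        Instantaneous pi-pulses at `pulse_times` flip the sign with which a toy
        Ornstein-Uhlenbeck noise field is accumulated into the qubit's phase.
        """
        dt = T / n_steps
        t = np.arange(n_steps) * dt
        sign = np.ones(n_steps)
        for tp in pulse_times:                         # toggle the sign after each pulse
            sign *= np.where(t >= tp, -1.0, 1.0)
        phase = np.zeros(n_noise)
        b = np.zeros(n_noise)
        for k in range(n_steps):                       # Euler step of the OU noise
            b += -b * dt + 3.0 * np.sqrt(dt) * rng.standard_normal(n_noise)
            phase += dt * sign[k] * b
        return np.cos(phase).mean()

    # Plain random search stands in for the learned sequence generator: propose
    # pulse-time sequences and keep the best one according to the black-box score.
    best_times, best_score = None, -np.inf
    for _ in range(200):
        times = np.sort(rng.uniform(0.0, 1.0, size=8))
        score = coherence(times)
        if score > best_score:
            best_times, best_score = times, score

    print("free evolution:          ", coherence([]))
    print("equidistant (CPMG-like): ", coherence(np.linspace(0.0625, 0.9375, 8)))
    print("best random sequence:    ", best_score, np.round(best_times, 3))
    ```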

  8. Dynamic imaging and quantitative analysis of cranial neural tube closure in the mouse embryo using optical coherence tomography

    PubMed Central

    Wang, Shang; Garcia, Monica D.; Lopez, Andrew L.; Overbeek, Paul A.; Larin, Kirill V.; Larina, Irina V.

    2016-01-01

    Neural tube closure is a critical feature of central nervous system morphogenesis during embryonic development. Failure of this process leads to neural tube defects, one of the most common forms of human congenital defects. Although molecular and genetic studies in model organisms have provided insights into the genes and proteins that are required for normal neural tube development, complications associated with live imaging of neural tube closure in mammals limit efficient morphological analyses. Here, we report the use of optical coherence tomography (OCT) for dynamic imaging and quantitative assessment of cranial neural tube closure in live mouse embryos in culture. Through time-lapse imaging, we captured two neural tube closure mechanisms in different cranial regions, zipper-like closure of the hindbrain region and button-like closure of the midbrain region. We also used OCT imaging for phenotypic characterization of a neural tube defect in a mouse mutant. These results suggest that the described approach is a useful tool for live dynamic analysis of normal neural tube closure and neural tube defects in the mouse model. PMID:28101427

  9. Dynamic imaging and quantitative analysis of cranial neural tube closure in the mouse embryo using optical coherence tomography.

    PubMed

    Wang, Shang; Garcia, Monica D; Lopez, Andrew L; Overbeek, Paul A; Larin, Kirill V; Larina, Irina V

    2017-01-01

    Neural tube closure is a critical feature of central nervous system morphogenesis during embryonic development. Failure of this process leads to neural tube defects, one of the most common forms of human congenital defects. Although molecular and genetic studies in model organisms have provided insights into the genes and proteins that are required for normal neural tube development, complications associated with live imaging of neural tube closure in mammals limit efficient morphological analyses. Here, we report the use of optical coherence tomography (OCT) for dynamic imaging and quantitative assessment of cranial neural tube closure in live mouse embryos in culture. Through time-lapse imaging, we captured two neural tube closure mechanisms in different cranial regions, zipper-like closure of the hindbrain region and button-like closure of the midbrain region. We also used OCT imaging for phenotypic characterization of a neural tube defect in a mouse mutant. These results suggest that the described approach is a useful tool for live dynamic analysis of normal neural tube closure and neural tube defects in the mouse model.

  10. Neural processing of dynamic emotional facial expressions in psychopaths

    PubMed Central

    Decety, Jean; Skelly, Laurie; Yoder, Keith J.; Kiehl, Kent A.

    2014-01-01

    Facial expressions play a critical role in social interactions by eliciting rapid responses in the observer. Failure to perceive and experience a normal range and depth of emotion seriously impact interpersonal communication and relationships. As has been demonstrated across a number of domains, abnormal emotion processing in individuals with psychopathy plays a key role in their lack of empathy. However, the neuroimaging literature is unclear as to whether deficits are specific to particular emotions such as fear and perhaps sadness. Moreover, findings are inconsistent across studies. In the current experiment, eighty adult incarcerated males scoring high, medium, and low on the Hare Psychopathy Checklist-Revised (PCL-R) underwent fMRI scanning while viewing dynamic facial expressions of fear, sadness, happiness and pain. Participants who scored high on the PCL-R showed a reduction in neuro-hemodynamic response to all four categories of facial expressions in the face processing network (inferior occipital gyrus, fusiform gyrus, STS) as well as the extended network (inferior frontal gyrus and orbitofrontal cortex), which supports a pervasive deficit across emotion domains. Unexpectedly, the response in dorsal insula to fear, sadness and pain was greater in psychopaths than non-psychopaths. Importantly, the orbitofrontal cortex and ventromedial prefrontal cortex, regions critically implicated in affective and motivated behaviors, were significantly less active in individuals with psychopathy during the perception of all four emotional expressions. PMID:24359488

  11. Neural processing of dynamic emotional facial expressions in psychopaths.

    PubMed

    Decety, Jean; Skelly, Laurie; Yoder, Keith J; Kiehl, Kent A

    2014-02-01

    Facial expressions play a critical role in social interactions by eliciting rapid responses in the observer. Failure to perceive and experience a normal range and depth of emotion seriously impact interpersonal communication and relationships. As has been demonstrated across a number of domains, abnormal emotion processing in individuals with psychopathy plays a key role in their lack of empathy. However, the neuroimaging literature is unclear as to whether deficits are specific to particular emotions such as fear and perhaps sadness. Moreover, findings are inconsistent across studies. In the current experiment, 80 incarcerated adult males scoring high, medium, and low on the Hare Psychopathy Checklist-Revised (PCL-R) underwent functional magnetic resonance imaging (fMRI) scanning while viewing dynamic facial expressions of fear, sadness, happiness, and pain. Participants who scored high on the PCL-R showed a reduction in neuro-hemodynamic response to all four categories of facial expressions in the face processing network (inferior occipital gyrus, fusiform gyrus, and superior temporal sulcus (STS)) as well as the extended network (inferior frontal gyrus and orbitofrontal cortex (OFC)), which supports a pervasive deficit across emotion domains. Unexpectedly, the response in dorsal insula to fear, sadness, and pain was greater in psychopaths than non-psychopaths. Importantly, the orbitofrontal cortex and ventromedial prefrontal cortex (vmPFC), regions critically implicated in affective and motivated behaviors, were significantly less active in individuals with psychopathy during the perception of all four emotional expressions.

  12. Neural dynamics underlying target detection in the human brain.

    PubMed

    Bansal, Arjun K; Madhavan, Radhika; Agam, Yigal; Golby, Alexandra; Madsen, Joseph R; Kreiman, Gabriel

    2014-02-19

    Sensory signals must be interpreted in the context of goals and tasks. To detect a target in an image, the brain compares input signals and goals to elicit the correct behavior. We examined how target detection modulates visual recognition signals by recording intracranial field potential responses from 776 electrodes in 10 epileptic human subjects. We observed reliable differences in the physiological responses to stimuli when a cued target was present versus absent. Goal-related modulation was particularly strong in the inferior temporal and fusiform gyri, two areas important for object recognition. Target modulation started after 250 ms post stimulus, considerably after the onset of visual recognition signals. While broadband signals exhibited increased or decreased power, gamma frequency power showed predominantly increases during target presence. These observations support models where task goals interact with sensory inputs via top-down signals that influence the highest echelons of visual processing after the onset of selective responses.

  13. Neural Dynamics Underlying Target Detection in the Human Brain

    PubMed Central

    Bansal, Arjun K.; Madhavan, Radhika; Agam, Yigal; Golby, Alexandra; Madsen, Joseph R.

    2014-01-01

    Sensory signals must be interpreted in the context of goals and tasks. To detect a target in an image, the brain compares input signals and goals to elicit the correct behavior. We examined how target detection modulates visual recognition signals by recording intracranial field potential responses from 776 electrodes in 10 epileptic human subjects. We observed reliable differences in the physiological responses to stimuli when a cued target was present versus absent. Goal-related modulation was particularly strong in the inferior temporal and fusiform gyri, two areas important for object recognition. Target modulation started after 250 ms post stimulus, considerably after the onset of visual recognition signals. While broadband signals exhibited increased or decreased power, gamma frequency power showed predominantly increases during target presence. These observations support models where task goals interact with sensory inputs via top-down signals that influence the highest echelons of visual processing after the onset of selective responses. PMID:24553944

  14. Dynamic and interactive generation of object handling behaviors by a small humanoid robot using a dynamic neural network model.

    PubMed

    Ito, Masato; Noda, Kuniaki; Hoshino, Yukiko; Tani, Jun

    2006-04-01

    This study presents experiments on the learning of object handling behaviors by a small humanoid robot using a dynamic neural network model, the recurrent neural network with parametric bias (RNNPB). The first experiment showed that after the robot learned different types of ball handling behaviors through direct human teaching, the robot was able to generate adequate ball handling motor sequences situated to the relative position between the robot's hands and the ball. The same scheme was applied to a block handling learning task, where it was shown that the robot can switch among the different learned block handling sequences, situated to the ways the human supporters interact with it. Our analysis showed that entrainment of the internal memory structures of the RNNPB through the interactions with the objects and the human supporters is the essential mechanism for the observed situated behaviors of the robot.

  15. Memory, sleep, and dynamic stabilization of neural circuitry: evolutionary perspectives.

    PubMed

    Kavanau, J L

    1996-01-01

    Some aspects of the evolution of mechanisms for enhancement and maintenance of synaptic efficacy are treated. After the origin of use-dependent synaptic plasticity, frequent synaptic activation (dynamic stabilization, DS) probably prolonged transient efficacy enhancements induced by single activations. In many "primitive" invertebrates inhabiting essentially unvarying aqueous environments, DS of synapses occurs primarily in the course of frequent functional use. In advanced locomoting ectotherms encountering highly varied environments, DS is thought to occur both through frequent functional use and by spontaneous "non-utilitarian" activations that occur primarily during rest. Non-utilitarian activations are induced by endogenous oscillatory neuronal activity, the need for which might have been one of the sources of selective pressure for the evolution of neurons with oscillatory firing capacities. As non-sleeping animals evolved increasingly complex brains, ever greater amounts of circuitry encoding inherited and experiential information (memories) required maintenance. The selective pressure for the evolution of sleep may have been the need to depress perception and processing of sensory inputs to minimize interference with DS of this circuitry. As the higher body temperatures and metabolic rates of endothermy evolved, mere skeletal muscle hypotonia evidently did not suffice to prevent sleep-disrupting skeletal muscle contractions during DS of motor circuitry. Selection against sleep disruption may have led to the evolution of further decreases in muscle tone, paralleling the increase in metabolic rate, and culminating in the postural atonia of REM (rapid eye movement) sleep. Phasic variations in heart and respiratory rates during REM sleep may result from superposition of activations accomplishing non-utilitarian DS of redundant and modulatory motor circuitry on the rhythmic autonomic control mechanisms. Accompanying non-utilitarian DS of circuitry during sleep

  16. A new training algorithm using artificial neural networks to classify gender-specific dynamic gait patterns.

    PubMed

    Andrade, Andre; Costa, Marcelo; Paolucci, Leopoldo; Braga, Antônio; Pires, Flavio; Ugrinowitsch, Herbert; Menzel, Hans-Joachim

    2015-01-01

    The aim of this study was to present a new training algorithm for artificial neural networks, the multi-objective least absolute shrinkage and selection operator (MOBJ-LASSO), applied to the classification of dynamic gait patterns. The movement pattern is identified by 20 characteristics of the three components of the ground reaction force, which are used as input information for the neural networks in gender-specific gait classification. The classification performance of MOBJ-LASSO (97.4%) and of the multi-objective algorithm (MOBJ) (97.1%) is similar, but the MOBJ-LASSO algorithm achieved better results than MOBJ because it is able to eliminate inputs and automatically select the parameters of the neural network. Thus, it is an effective tool for data mining using neural networks. From the 20 inputs used for training, MOBJ-LASSO selected the first and second peaks of the vertical force and the force peak in the antero-posterior direction as the variables that classify the gait patterns of the different genders.
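
    The practically useful property described here, that LASSO-style training drives the weights of uninformative inputs exactly to zero, can be demonstrated with ordinary L1-regularised logistic regression on synthetic "gait" features. This is not the MOBJ-LASSO algorithm itself; the data, the informative feature indices, and the regularisation strength below are placeholders chosen only to show the selection effect.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Synthetic stand-in for the gait data: 20 ground-reaction-force features per
    # trial, only three of which actually carry the class (gender) information.
    n, p = 200, 20
    X = rng.standard_normal((n, p))
    true_w = np.zeros(p)
    true_w[[0, 1, 7]] = [2.0, -2.0, 1.5]            # hypothetical informative features
    y = (1.0 / (1.0 + np.exp(-X @ true_w)) > rng.random(n)).astype(float)

    # L1-regularised logistic regression via proximal gradient descent: the soft
    # threshold drives the weights of uninformative inputs exactly to zero,
    # mimicking the automatic input selection of LASSO-style training.
    w, b, lr, lam = np.zeros(p), 0.0, 0.1, 0.08
    for _ in range(3000):
        prob = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (prob - y) / n)
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)   # soft threshold (prox of L1)
        b -= lr * (prob - y).mean()

    prob = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    print("selected features: ", np.flatnonzero(w != 0))
    print("training accuracy: ", ((prob > 0.5) == y).mean())
    ```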

  17. Distributed dynamical computation in neural circuits with propagating coherent activity patterns.

    PubMed

    Gong, Pulin; van Leeuwen, Cees

    2009-12-01

    Activity in neural circuits is spatiotemporally organized. Its spatial organization consists of multiple, localized coherent patterns, or patchy clusters. These patterns propagate across the circuits over time. This type of collective behavior has ubiquitously been observed, both in spontaneous activity and evoked responses; its function, however, has remained unclear. We construct a spatially extended, spiking neural circuit that generates emergent spatiotemporal activity patterns, thereby capturing some of the complexities of the patterns observed empirically. We elucidate what kind of fundamental function these patterns can serve by showing how they process information. As self-sustained objects, localized coherent patterns can signal information by propagating across the neural circuit. Computational operations occur when these emergent patterns interact, or collide with each other. The ongoing behaviors of these patterns naturally embody both distributed, parallel computation and cascaded logical operations. Such distributed computations enable the system to work in an inherently flexible and efficient way. Our work leads us to propose that propagating coherent activity patterns are the underlying primitives with which neural circuits carry out distributed dynamical computation.

  18. Neural regeneration dynamics of Xenopus laevis olfactory epithelium after zinc sulfate-induced damage.

    PubMed

    Frontera, J L; Raices, M; Cervino, A S; Pozzi, A G; Paz, D A

    2016-11-01

    Neural stem cells (NSCs) of the olfactory epithelium (OE) are responsible for tissue maintenance and the neural regeneration after severe damage of the tissue. In the normal OE, NSCs are located in the basal layer, olfactory receptor neurons (ORNs) mainly in the middle layer, and sustentacular (SUS) cells in the most apical olfactory layer. In this work, we induced severe damage of the OE through treatment with a zinc sulfate (ZnSO4) solution directly in the medium, which resulted in the loss of ORNs and SUS cells, but retention of the basal layer. During recovery following injury, the OE exhibited increased proliferation of NSCs and rapid neural regeneration. After 24h of recovery, new ORNs and SUS cells were observed. Normal morphology and olfactory function were reached after 168h (7 days) of recovery after ZnSO4 treatment. Taken together, these data support the hypothesis that NSCs in the basal layer activate after OE injury and that these are sufficient for complete neural regeneration and olfactory function restoration. Our analysis provides histological and functional insights into the dynamics between olfactory neurogenesis and the neuronal integration into the neuronal circuitry of the olfactory bulb that restores the function of the olfactory system.

  19. Caldesmon regulates actin dynamics to influence cranial neural crest migration in Xenopus

    PubMed Central

    Nie, Shuyi; Kee, Yun; Bronner-Fraser, Marianne

    2011-01-01

    Caldesmon (CaD) is an important actin modulator that associates with actin filaments to regulate cell morphology and motility. Although extensively studied in cultured cells, there is little functional information regarding the role of CaD in migrating cells in vivo. Here we show that nonmuscle CaD is highly expressed in both premigratory and migrating cranial neural crest cells of Xenopus embryos. Depletion of CaD with antisense morpholino oligonucleotides causes cranial neural crest cells to migrate a significantly shorter distance, prevents their segregation into distinct migratory streams, and later results in severe defects in cartilage formation. Demonstrating specificity, these effects are rescued by adding back exogenous CaD. Interestingly, CaD proteins with mutations in the Ca2+-calmodulin–binding sites or ErK/Cdk1 phosphorylation sites fail to rescue the knockdown phenotypes, whereas mutation of the PAK phosphorylation site is able to rescue them. Analysis of neural crest explants reveals that CaD is required for the dynamic arrangements of actin and, thus, for cell shape changes and process formation. Taken together, these results suggest that the actin-modulating activity of CaD may underlie its critical function and is regulated by distinct signaling pathways during normal neural crest migration. PMID:21795398

  20. Temporal and spatial neural dynamics in the perception of basic emotions from complex scenes.

    PubMed

    Costa, Tommaso; Cauda, Franco; Crini, Manuella; Tatu, Mona-Karina; Celeghin, Alessia; de Gelder, Beatrice; Tamietto, Marco

    2014-11-01

    The different temporal dynamics of emotions are critical to understand their evolutionary role in the regulation of interactions with the surrounding environment. Here, we investigated the temporal dynamics underlying the perception of four basic emotions from complex scenes varying in valence and arousal (fear, disgust, happiness and sadness) with the millisecond time resolution of Electroencephalography (EEG). Event-related potentials were computed and each emotion showed a specific temporal profile, as revealed by distinct time segments of significant differences from the neutral scenes. Fear perception elicited significant activity at the earliest time segments, followed by disgust, happiness and sadness. Moreover, fear, disgust and happiness were characterized by two time segments of significant activity, whereas sadness showed only one long-latency time segment of activity. Multidimensional scaling was used to assess the correspondence between neural temporal dynamics and the subjective experience elicited by the four emotions in a subsequent behavioral task. We found a high coherence between these two classes of data, indicating that psychological categories defining emotions have a close correspondence at the brain level in terms of neural temporal dynamics. Finally, we localized the brain regions of time-dependent activity for each emotion and time segment with the low-resolution brain electromagnetic tomography. Fear and disgust showed widely distributed activations, predominantly in the right hemisphere. Happiness activated a number of areas mostly in the left hemisphere, whereas sadness showed a limited number of active areas at late latency. The present findings indicate that the neural signature of basic emotions can emerge as the byproduct of dynamic spatiotemporal brain networks as investigated with millisecond-range resolution, rather than in time-independent areas involved uniquely in the processing of one specific emotion.

  1. An Implantable Wireless Neural Interface for Recording Cortical Circuit Dynamics in Moving Primates

    PubMed Central

    Borton, David A.; Yin, Ming; Aceros, Juan; Nurmikko, Arto

    2013-01-01

    Objective Neural interface technology suitable for clinical translation has the potential to significantly impact the lives of amputees, spinal cord injury victims, and those living with severe neuromotor disease. Such systems must be chronically safe, durable, and effective. Approach We have designed and implemented a neural interface microsystem, housed in a compact, subcutaneous, and hermetically sealed titanium enclosure. The implanted device interfaces the brain with a 510k-approved, 100-element silicon-based MEA via a custom hermetic feedthrough design. Full spectrum neural signals were amplified (0.1 Hz to 7.8 kHz, ×200 gain) and multiplexed by a custom application specific integrated circuit, digitized, and then packaged for transmission. The neural data (24 Mbps) were transmitted by a wireless data link carried on a frequency-shift-key-modulated signal at 3.2 GHz and 3.8 GHz to a receiver 1 meter away by design as a point-to-point communication link for human clinical use. The system was powered by an embedded medical grade rechargeable Li-ion battery for 7-hour continuous operation between recharge via an inductive transcutaneous wireless power link at 2 MHz. Main results Device verification and early validation were performed in both swine and non-human primate freely-moving animal models and showed that the wireless implant was electrically stable, effective in capturing and delivering broadband neural data, and safe for over one year of testing. In addition, we have used the multichannel data from these mobile animal models to demonstrate the ability to decode neural population dynamics associated with motor activity. Significance We have developed an implanted wireless broadband neural recording device evaluated in non-human primate and swine. The use of this new implantable neural interface technology can provide insight on how to advance human neuroprostheses beyond the present early clinical trials. Further, such tools enable mobile patient use, have

  2. An implantable wireless neural interface for recording cortical circuit dynamics in moving primates

    NASA Astrophysics Data System (ADS)

    Borton, David A.; Yin, Ming; Aceros, Juan; Nurmikko, Arto

    2013-04-01

    Objective. Neural interface technology suitable for clinical translation has the potential to significantly impact the lives of amputees, spinal cord injury victims and those living with severe neuromotor disease. Such systems must be chronically safe, durable and effective. Approach. We have designed and implemented a neural interface microsystem, housed in a compact, subcutaneous and hermetically sealed titanium enclosure. The implanted device interfaces the brain with a 510k-approved, 100-element silicon-based microelectrode array via a custom hermetic feedthrough design. Full spectrum neural signals were amplified (0.1 Hz to 7.8 kHz, 200× gain) and multiplexed by a custom application specific integrated circuit, digitized and then packaged for transmission. The neural data (24 Mbps) were transmitted by a wireless data link carried on a frequency-shift-key-modulated signal at 3.2 and 3.8 GHz to a receiver 1 m away by design as a point-to-point communication link for human clinical use. The system was powered by an embedded medical grade rechargeable Li-ion battery for 7 h continuous operation between recharge via an inductive transcutaneous wireless power link at 2 MHz. Main results. Device verification and early validation were performed in both swine and non-human primate freely-moving animal models and showed that the wireless implant was electrically stable, effective in capturing and delivering broadband neural data, and safe for over one year of testing. In addition, we have used the multichannel data from these mobile animal models to demonstrate the ability to decode neural population dynamics associated with motor activity. Significance. We have developed an implanted wireless broadband neural recording device evaluated in non-human primate and swine. The use of this new implantable neural interface technology can provide insight into how to advance human neuroprostheses beyond the present early clinical trials. Further, such tools enable mobile

  3. Population coding and decoding in a neural field: a computational study.

    PubMed

    Wu, Si; Amari, Shun-Ichi; Nakahara, Hiroyuki

    2002-05-01

    This study uses a neural field model to investigate computational aspects of population coding and decoding when the stimulus is a single variable. A general prototype model for the encoding process is proposed, in which neural responses are correlated, with strength specified by a gaussian function of their difference in preferred stimuli. Based on the model, we study the effect of correlation on the Fisher information, compare the performances of three decoding methods that differ in the amount of encoding information being used, and investigate the implementation of the three methods by using a recurrent network. This study not only rediscovers the main results in the existing literature in a unified way, but also reveals important new features, especially when the neural correlation is strong. As the neural correlation of firing becomes larger, the Fisher information decreases drastically. We confirm that as the width of correlation increases, the Fisher information saturates and no longer increases in proportion to the number of neurons. However, we prove that as the width increases further, becoming wider than √2 times the effective width of the tuning function, the Fisher information increases again, and it increases without limit in proportion to the number of neurons. Furthermore, we clarify the asymptotic efficiency of the maximum likelihood inference (MLI) type of decoding methods for correlated neural signals. It shows that when the correlation covers a nonlocal range of population (except for the cases of uniform correlation and extremely small noise), the MLI type of method, whose decoding error satisfies the Cauchy-type distribution, is not asymptotically efficient. This implies that the variance is no longer adequate to measure decoding accuracy.
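
    The encoding model described here is easy to probe numerically: Gaussian tuning curves, correlations that decay as a Gaussian function of the difference in preferred stimuli, and Fisher information computed under an additive-Gaussian-noise approximation as f'(theta)^T C^{-1} f'(theta). The sketch below uses that approximation (which ignores the stimulus dependence of the covariance) with arbitrary parameter values, so it illustrates the scaling question rather than reproducing the paper's quantitative results.

    ```python
    import numpy as np

    def fisher_information(n_neurons, theta=0.0, a=0.2, rho=0.5,
                           tuning_width=1.0, corr_width=1.0):
        """Fisher information of a population with Gaussian tuning curves and
        correlations decaying as a Gaussian of the preferred-stimulus difference
        (additive-noise approximation: I = f'(theta)^T C^{-1} f'(theta))."""
        prefs = np.linspace(-5.0, 5.0, n_neurons)            # preferred stimuli
        f = np.exp(-(theta - prefs) ** 2 / (2 * tuning_width ** 2))
        fprime = f * (prefs - theta) / tuning_width ** 2      # df/dtheta
        d = prefs[:, None] - prefs[None, :]
        C = a * ((1 - rho) * np.eye(n_neurons)
                 + rho * np.exp(-d ** 2 / (2 * corr_width ** 2)))
        return fprime @ np.linalg.solve(C, fprime)

    # How the information grows with population size for a narrow and a wide
    # correlation structure (relative to a tuning width of 1).
    for N in (50, 100, 200, 400, 800):
        print(N,
              fisher_information(N, corr_width=0.5),
              fisher_information(N, corr_width=2.0))
    ```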

  4. Neural network simulation of soil NO3 dynamic under potato crop system

    NASA Astrophysics Data System (ADS)

    Goulet-Fortin, Jérôme; Morais, Anne; Anctil, François; Parent, Léon-Étienne; Bolinder, Martin

    2013-04-01

    Nitrate leaching is a major issue in sandy soils intensively cropped to potato. Modelling could test and improve management practices, particularly as regard to the optimal N application rates. Lack of input data is an important barrier for the application of classical process-based models to predict soil NO3 content (SNOC) and NO3 leaching (NOL). Alternatively, data driven models such as neural networks (NN) could better take into account indicators of spatial soil heterogeneity and plant growth pattern such as the leaf area index (LAI), hence reducing the amount of soil information required. The first objective of this study was to evaluate NN and hybrid models to simulate SNOC in the 0-40 cm soil layer considering inter-annual variations, spatial soil heterogeneity and differential N application rates. The second objective was to evaluate the same methodology to simulate seasonal NOL dynamic at 1 m deep. To this aim, multilayer perceptrons with different combinations of driving meteorological variables, functions of the LAI and state variables of external deterministic models have been trained and evaluated. The state variables from external models were: drainage estimated by the CLASS model and the soil temperature estimated by an ICBM subroutine. Results of SNOC simulations were compared to field data collected between 2004 and 2011 at several experimental plots under potato cropping systems in Québec, Eastern Canada. Results of NOL simulation were compared to data obtained in 2012 from 11 suction lysimeters installed in 2 experimental plots under potato cropping systems in the same region. The most performing model for SNOC simulation was obtained using a 4-input hybrid model composed of 1) cumulative LAI, 2) cumulative drainage, 3) soil temperature and 4) day of year. The most performing model for NOL simulation was obtained using a 5-input NN model composed of 1) N fertilization rate at spring, 2) LAI, 3) cumulative rainfall, 4) the day of year and 5) the

  5. The Emergent Executive: A Dynamic Field Theory of the Development of Executive Function

    PubMed Central

    Buss, Aaron T.; Spencer, John P.

    2015-01-01

    A dynamic neural field (DNF) model is presented which provides a process-based account of behavior and developmental change in a key task used to probe the early development of executive function—the Dimensional Change Card Sort (DCCS) task. In the DCCS, children must flexibly switch from sorting cards either by shape or color to sorting by the other dimension. Typically, 3-year-olds, but not 4-year-olds, lack the flexibility to do so and perseverate on the first set of rules when instructed to switch. In the DNF model, rule-use and behavioral flexibility come about through a form of dimensional attention which modulates activity within different cortical fields tuned to specific feature dimensions. In particular, we capture developmental change by increasing the strength of excitatory and inhibitory neural interactions in the dimensional attention system as well as refining the connectivity between this system and the feature-specific cortical fields. Note that although this enables the model to effectively switch tasks, the dimensional attention system does not ‘know’ the details of task-specific performance. Rather, correct performance emerges as a property of system-wide neural interactions. We show how this captures children's behavior in quantitative detail across 12 versions of the DCCS task. Moreover, we successfully test a set of novel predictions with 3-year-old children from a version of the task not explained by other theories. PMID:24818836
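
    For readers unfamiliar with the dynamic neural field formalism underlying this account, the sketch below simulates a single one-dimensional field of the Amari type with local excitation and lateral inhibition. The kernel, resting level and stimulus are illustrative assumptions; the multi-field, dimensional-attention architecture of the DCCS model is not reproduced here.

```python
import numpy as np

# Minimal 1-D dynamic neural field (Amari-type) with local excitation and
# lateral inhibition; parameters are illustrative. The DCCS model couples
# several such feature fields through a dimensional-attention system.
def simulate_dnf(n=181, dt=1.0, tau=10.0, h=-5.0, steps=400):
    x = np.linspace(-90.0, 90.0, n)             # feature dimension (e.g. hue)
    dx = x[1] - x[0]
    d = x[:, None] - x[None, :]
    # interaction kernel: narrow excitation minus broader inhibition
    w = 0.15 * np.exp(-d**2 / (2 * 5.0**2)) - 0.05 * np.exp(-d**2 / (2 * 15.0**2))
    stim = 8.0 * np.exp(-(x - 20.0)**2 / (2 * 5.0**2))   # localized input
    u = np.full(n, h)                           # field activation at rest
    for _ in range(steps):
        rate = 1.0 / (1.0 + np.exp(-u))         # sigmoidal output nonlinearity
        u += dt / tau * (-u + h + stim + (w @ rate) * dx)
    return x, u

if __name__ == "__main__":
    x, u = simulate_dnf()
    print(f"peak at x = {x[np.argmax(u)]:.1f}, peak activation = {u.max():.2f}")
```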

  6. Learning by stimulation avoidance: A principle to control spiking neural networks dynamics.

    PubMed

    Sinapayen, Lana; Masumori, Atsushi; Ikegami, Takashi

    2017-01-01

    Learning based on networks of real neurons, and learning based on biologically inspired models of neural networks, have yet to find general learning rules leading to widespread applications. In this paper, we argue for the existence of a principle that allows steering the dynamics of a biologically inspired neural network. Using carefully timed external stimulation, the network can be driven towards a desired dynamical state. We term this principle "Learning by Stimulation Avoidance" (LSA). We demonstrate through simulation that the minimal sufficient conditions leading to LSA in artificial networks are also sufficient to reproduce learning results similar to those obtained in biological neurons by Shahaf and Marom, and, in addition, explain synaptic pruning. We examined the underlying mechanism by simulating a small network of 3 neurons, then scaled it up to a hundred neurons. We show that LSA has a higher explanatory power than existing hypotheses about the response of biological neural networks to external stimulation, and can be used as a learning rule for an embodied application: learning of wall avoidance by a simulated robot. In other work, reinforcement learning with spiking networks can be obtained through global reward signals akin to simulating the dopamine system; we believe that this is the first project demonstrating sensory-motor learning with random spiking networks through Hebbian learning relying on environmental conditions without a separate reward system.

  7. Learning by stimulation avoidance: A principle to control spiking neural networks dynamics

    PubMed Central

    Sinapayen, Lana; Ikegami, Takashi

    2017-01-01

    Learning based on networks of real neurons, and learning based on biologically inspired models of neural networks, have yet to find general learning rules leading to widespread applications. In this paper, we argue for the existence of a principle that allows steering the dynamics of a biologically inspired neural network. Using carefully timed external stimulation, the network can be driven towards a desired dynamical state. We term this principle “Learning by Stimulation Avoidance” (LSA). We demonstrate through simulation that the minimal sufficient conditions leading to LSA in artificial networks are also sufficient to reproduce learning results similar to those obtained in biological neurons by Shahaf and Marom, and, in addition, explain synaptic pruning. We examined the underlying mechanism by simulating a small network of 3 neurons, then scaled it up to a hundred neurons. We show that LSA has a higher explanatory power than existing hypotheses about the response of biological neural networks to external stimulation, and can be used as a learning rule for an embodied application: learning of wall avoidance by a simulated robot. In other work, reinforcement learning with spiking networks can be obtained through global reward signals akin to simulating the dopamine system; we believe that this is the first project demonstrating sensory-motor learning with random spiking networks through Hebbian learning relying on environmental conditions without a separate reward system. PMID:28158309

  8. Prune-able fuzzy ART neural architecture for robot map learning and navigation in dynamic environments.

    PubMed

    Araújo, Rui

    2006-09-01

    Mobile robots must be able to build their own maps to navigate in unknown worlds. Expanding a previously proposed method based on the fuzzy ART neural architecture (FARTNA), this paper introduces a new online method for learning maps of unknown dynamic worlds. For this purpose the new Prune-able fuzzy adaptive resonance theory neural architecture (PAFARTNA) is introduced. It extends the FARTNA self-organizing neural network with novel mechanisms that provide important dynamic adaptation capabilities. Relevant PAFARTNA properties are formulated and demonstrated. A method is proposed for the perception of object removals, and then integrated with PAFARTNA. The proposed methods are integrated into a navigation architecture. With the new navigation architecture the mobile robot is able to navigate in changing worlds, and a degree of optimality is maintained, associated with a shortest-path planning approach implemented in real time over the underlying global world model. Experimental results obtained with a Nomad 200 robot are presented demonstrating the feasibility and effectiveness of the proposed methods.

  9. Diagonal recurrent neural network based adaptive control of nonlinear dynamical systems using Lyapunov stability criterion.

    PubMed

    Kumar, Rajesh; Srivastava, Smriti; Gupta, J R P

    2017-03-01

    In this paper adaptive control of nonlinear dynamical systems using a diagonal recurrent neural network (DRNN) is proposed. The structure of the DRNN is a modification of the fully connected recurrent neural network (FCRNN). The presence of self-recurrent neurons in the hidden layer of the DRNN gives it the ability to capture the dynamic behaviour of the nonlinear plant under consideration (to be controlled). To ensure stability, update rules are developed using the Lyapunov stability criterion. These rules are then used for adjusting the various parameters of the DRNN. The responses of plants obtained with the DRNN are compared with those obtained when a multi-layer feedforward neural network (MLFFNN) is used as the controller. In example 4, the FCRNN is also investigated and compared with the DRNN and MLFFNN. Robustness of the proposed control scheme is also tested against parameter variations and disturbance signals. Four simulation examples, including a one-link robotic manipulator and an inverted pendulum, are considered, and the proposed controller is applied to them. The results so obtained show the superiority of the DRNN over the MLFFNN as a controller.
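
    The diagonal recurrent structure itself is simple to sketch: each hidden neuron feeds back only onto itself through a single scalar weight, rather than onto every hidden unit as in a fully connected recurrent layer. The sizes, initialisation and the toy input below are illustrative assumptions, and the Lyapunov-derived update rules of the paper are not shown.

```python
import numpy as np

# Sketch of a diagonal recurrent neural network (DRNN): hidden neurons carry
# self-recurrent connections only (one scalar per neuron). Sizes and the toy
# input sequence are illustrative.
class DRNN:
    def __init__(self, n_in, n_hidden, rng=np.random.default_rng(0)):
        self.Wi = 0.1 * rng.standard_normal((n_hidden, n_in))  # input weights
        self.wd = 0.1 * rng.standard_normal(n_hidden)          # diagonal recurrent weights
        self.Wo = 0.1 * rng.standard_normal(n_hidden)          # output weights
        self.h = np.zeros(n_hidden)                            # hidden state

    def step(self, x):
        # each hidden unit sees the input plus its own previous activation only
        self.h = np.tanh(self.Wi @ x + self.wd * self.h)
        return self.Wo @ self.h

if __name__ == "__main__":
    net = DRNN(n_in=2, n_hidden=8)
    for k in range(5):
        u = np.array([np.sin(0.1 * k), np.cos(0.1 * k)])  # toy plant signals
        print(f"k={k}  y_hat={net.step(u):+.4f}")
```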

  10. Neural Dynamics as Sampling: A Model for Stochastic Computation in Recurrent Networks of Spiking Neurons

    PubMed Central

    Buesing, Lars; Bill, Johannes; Nessler, Bernhard; Maass, Wolfgang

    2011-01-01

    The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons. PMID:22096452

  11. Design of Neural Networks for Fast Convergence and Accuracy: Dynamics and Control

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Sparks, Dean W., Jr.

    1997-01-01

    A procedure for the design and training of artificial neural networks, used for rapid and efficient controls and dynamics design and analysis for flexible space systems, has been developed. Artificial neural networks are employed, such that once properly trained, they provide a means of evaluating the impact of design changes rapidly. Specifically, two-layer feedforward neural networks are designed to approximate the functional relationship between the component/spacecraft design changes and measures of its performance or nonlinear dynamics of the system/components. A training algorithm, based on statistical sampling theory, is presented, which guarantees that the trained networks provide a designer-specified degree of accuracy in mapping the functional relationship. Within each iteration of this statistical-based algorithm, a sequential design algorithm is used for the design and training of the feedforward network to provide rapid convergence to the network goals. Here, at each sequence a new network is trained to minimize the error of the previous network. The proposed method should work for applications wherein an arbitrarily large source of training data can be generated. Two numerical examples are performed on a spacecraft application in order to demonstrate the feasibility of the proposed approach.

  12. A modified dynamic evolving neural-fuzzy approach to modeling customer satisfaction for affective design.

    PubMed

    Kwong, C K; Fung, K Y; Jiang, Huimin; Chan, K Y; Siu, Kin Wai Michael

    2013-01-01

    Affective design is an important aspect of product development to achieve a competitive edge in the marketplace. A neural-fuzzy network approach has been attempted recently to model customer satisfaction for affective design and has proved effective in dealing with the fuzziness and non-linearity of the modeling as well as in generating explicit customer satisfaction models. However, such an approach to modeling customer satisfaction has two limitations. First, it is not suitable for modeling problems that involve a large number of inputs. Second, it cannot adapt to new data sets, given that its structure is fixed once it has been developed. In this paper, a modified dynamic evolving neural-fuzzy approach is proposed to address the above-mentioned limitations. A case study on the affective design of mobile phones was conducted to illustrate the effectiveness of the proposed methodology. Validation tests were conducted and the test results indicated that: (1) the conventional Adaptive Neuro-Fuzzy Inference System (ANFIS) failed to run due to a large number of inputs; (2) the proposed dynamic neural-fuzzy model outperforms the subtractive clustering-based ANFIS model and fuzzy c-means clustering-based ANFIS model in terms of their modeling accuracy and computational effort.

  13. A Modified Dynamic Evolving Neural-Fuzzy Approach to Modeling Customer Satisfaction for Affective Design

    PubMed Central

    Kwong, C. K.; Fung, K. Y.; Jiang, Huimin; Chan, K. Y.

    2013-01-01

    Affective design is an important aspect of product development to achieve a competitive edge in the marketplace. A neural-fuzzy network approach has been attempted recently to model customer satisfaction for affective design and has proved effective in dealing with the fuzziness and non-linearity of the modeling as well as in generating explicit customer satisfaction models. However, such an approach to modeling customer satisfaction has two limitations. First, it is not suitable for modeling problems that involve a large number of inputs. Second, it cannot adapt to new data sets, given that its structure is fixed once it has been developed. In this paper, a modified dynamic evolving neural-fuzzy approach is proposed to address the above-mentioned limitations. A case study on the affective design of mobile phones was conducted to illustrate the effectiveness of the proposed methodology. Validation tests were conducted and the test results indicated that: (1) the conventional Adaptive Neuro-Fuzzy Inference System (ANFIS) failed to run due to a large number of inputs; (2) the proposed dynamic neural-fuzzy model outperforms the subtractive clustering-based ANFIS model and fuzzy c-means clustering-based ANFIS model in terms of their modeling accuracy and computational effort. PMID:24385884

  14. A neural-network-based method of model reduction for the dynamic simulation of MEMS

    NASA Astrophysics Data System (ADS)

    Liang, Y. C.; Lin, W. Z.; Lee, H. P.; Lim, S. P.; Lee, K. H.; Feng, D. P.

    2001-05-01

    This paper proposes a neural-network-based method for model reduction that combines the generalized Hebbian algorithm (GHA) with the Galerkin procedure to perform the dynamic simulation and analysis of nonlinear microelectromechanical systems (MEMS). An unsupervised neural network is adopted to find the principal eigenvectors of a correlation matrix of snapshots. Extensive computations show that principal component analysis using the GHA neural network can extract an empirical basis from numerical or experimental data, which can be used to convert the original system into a lumped low-order macromodel. The macromodel can be employed to carry out the dynamic simulation of the original system, resulting in a dramatic reduction in computation time without losing flexibility or accuracy. Compared with other existing model reduction methods for the dynamic simulation of MEMS, the present method does not need to compute the input correlation matrix in advance. It needs only to find very few required basis functions, which can be learned directly from the input data, and this means that the method possesses potential advantages when the measured data set is large. The method is evaluated by simulating the pull-in dynamics of a doubly-clamped microbeam subjected to different input voltage spectra of electrostatic actuation. The efficiency and the flexibility of the proposed method are examined by comparing the results with those of the fully meshed finite-difference method.
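
    The generalized Hebbian algorithm (Sanger's rule) at the heart of this approach can be sketched compactly: it learns the leading principal directions of the snapshot data without ever forming the correlation matrix. The synthetic data, learning rate and sizes below are illustrative assumptions, and the Galerkin projection onto the learned basis is not shown.

```python
import numpy as np

# Minimal generalized Hebbian algorithm (Sanger's rule).
def gha(snapshots, n_components=3, lr=0.01, epochs=100, seed=0):
    rng = np.random.default_rng(seed)
    W = 0.01 * rng.standard_normal((n_components, snapshots.shape[1]))
    for _ in range(epochs):
        for x in snapshots:
            y = W @ x
            # Sanger's rule: Hebbian term minus Gram-Schmidt-like decorrelation
            W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # synthetic "snapshots": three unit-norm modes plus a little noise
    modes = rng.standard_normal((3, 50))
    modes /= np.linalg.norm(modes, axis=1, keepdims=True)
    coeffs = rng.standard_normal((500, 3)) * np.array([2.0, 1.0, 0.5])
    X = coeffs @ modes + 0.01 * rng.standard_normal((500, 50))
    X -= X.mean(axis=0)
    W = gha(X)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)   # reference PCA directions
    for i in range(3):
        w = W[i] / np.linalg.norm(W[i])
        print(f"component {i}: best |cosine| with an SVD direction = "
              f"{max(abs(w @ Vt[j]) for j in range(3)):.3f}")
```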

  15. Self-organised transients in a neural mass model of epileptogenic tissue dynamics.

    PubMed

    Goodfellow, Marc; Schindler, Kaspar; Baier, Gerold

    2012-02-01

    Stimulation of human epileptic tissue can induce rhythmic, self-terminating responses on the EEG or ECoG. These responses play a potentially important role in localising tissue involved in the generation of seizure activity, yet the underlying mechanisms are unknown. However, in vitro evidence suggests that self-terminating oscillations in nervous tissue are underpinned by non-trivial spatio-temporal dynamics in an excitable medium. In this study, we investigate this hypothesis in spatial extensions to a neural mass model for epileptiform dynamics. We demonstrate that spatial extensions of this model in one and two dimensions display propagating travelling waves but also more complex transient dynamics in response to local perturbations. The neural mass formulation, with local excitatory and inhibitory circuits, allows the direct incorporation of spatially distributed, functional heterogeneities into the model. We show that such heterogeneities can lead to prolonged reverberating responses to a single pulse perturbation, depending upon the location at which the stimulus is delivered. This leads to the hypothesis that prolonged rhythmic responses to local stimulation in epileptogenic tissue result from repeated self-excitation of regions of tissue with diminished inhibitory capabilities. Combined with previous models of the dynamics of focal seizures, this macroscopic framework is a first step towards an explicit spatial formulation of the concept of the epileptogenic zone. Ultimately, an improved understanding of the pathophysiologic mechanisms of the epileptogenic zone will help to improve diagnostic and therapeutic measures for treating epilepsy.

  16. Direct field measurement of the dynamic amplification in a bridge

    NASA Astrophysics Data System (ADS)

    Carey, Ciarán; OBrien, Eugene J.; Malekjafarian, Abdollah; Lydon, Myra; Taylor, Su

    2017-02-01

    In this paper, the level of dynamics, as described by the Assessment Dynamic Ratio (ADR), is measured directly through a field test on a bridge in the United Kingdom. The bridge was instrumented using fiber optic strain sensors and piezo-polymer weigh-in-motion sensors were installed in the pavement on the approach road. Field measurements of static and static-plus-dynamic strains were taken over 45 days. The results show that, while dynamic amplification is large for many loading events, these tend not to be the critical events. ADR, the allowance that should be made for dynamics in an assessment of safety, is small.

  17. Recovery of Dynamics and Function in Spiking Neural Networks with Closed-Loop Control

    PubMed Central

    Vlachos, Ioannis; Deniz, Taşkin; Aertsen, Ad; Kumar, Arvind

    2016-01-01

    There is a growing interest in developing novel brain stimulation methods to control disease-related aberrant neural activity and to address basic neuroscience questions. Conventional methods for manipulating brain activity rely on open-loop approaches that usually lead to excessive stimulation and, crucially, do not restore the original computations performed by the network. Thus, they are often accompanied by undesired side-effects. Here, we introduce delayed feedback control (DFC), a conceptually simple but effective method, to control pathological oscillations in spiking neural networks (SNNs). Using mathematical analysis and numerical simulations we show that DFC can restore a wide range of aberrant network dynamics either by suppressing or enhancing synchronous irregular activity. Importantly, DFC, besides steering the system back to a healthy state, also recovers the computations performed by the underlying network. Finally, using our theory we identify the role of single neuron and synapse properties in determining the stability of the closed-loop system. PMID:26829673
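
    A toy, rate-level caricature of the delayed feedback idea is sketched below; it is not the paper's spiking-network implementation. A noise-driven, lightly damped oscillator stands in for the pathological population oscillation, and the control signal is proportional to the activity a quarter-period earlier, which for a quasi-harmonic oscillation acts like additional damping. The gain, delay and all constants are hand-tuned illustrative assumptions.

```python
import numpy as np

# Toy delayed feedback control (DFC): a noise-driven, lightly damped oscillator
# stands in for a pathologically oscillating population activity. Feeding back
# the activity delayed by a quarter period adds effective damping and shrinks
# the oscillation amplitude. All constants are illustrative.
def simulate(control_gain=0.0, f0=5.0, gamma=0.2, dt=1e-3, T=30.0, seed=0):
    rng = np.random.default_rng(seed)
    w0 = 2 * np.pi * f0
    delay_steps = int(round((1.0 / f0) / 4 / dt))   # quarter of the oscillation period
    n = int(T / dt)
    x = np.zeros(n)          # population-activity deviation from baseline
    v = 0.0
    for k in range(1, n):
        u = control_gain * x[k - 1 - delay_steps] if k - 1 >= delay_steps else 0.0
        a = -gamma * v - w0**2 * x[k - 1] + u + 50.0 * rng.standard_normal() / np.sqrt(dt)
        v += a * dt
        x[k] = x[k - 1] + v * dt
    return x[n // 2:]        # discard the build-up transient

if __name__ == "__main__":
    for gain in (0.0, 2 * 2 * np.pi * 5.0):          # control off vs. DFC on
        print(f"gain={gain:7.1f}  oscillation std={simulate(gain).std():.3f}")
```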

  18. Modeling the Dynamics of Human Brain Activity with Recurrent Neural Networks

    PubMed Central

    Güçlü, Umut; van Gerven, Marcel A. J.

    2017-01-01

    Encoding models are used for predicting brain activity in response to sensory stimuli with the objective of elucidating how sensory information is represented in the brain. Encoding models typically comprise a nonlinear transformation of stimuli to features (feature model) and a linear convolution of features to responses (response model). While there has been extensive work on developing better feature models, the work on developing better response models has been rather limited. Here, we investigate the extent to which recurrent neural network models can use their internal memories for nonlinear processing of arbitrary feature sequences to predict feature-evoked response sequences as measured by functional magnetic resonance imaging. We show that the proposed recurrent neural network models can significantly outperform established response models by accurately estimating long-term dependencies that drive hemodynamic responses. The results open a new window into modeling the dynamics of brain activity in response to sensory stimuli. PMID:28232797

  19. Recovery of Dynamics and Function in Spiking Neural Networks with Closed-Loop Control.

    PubMed

    Vlachos, Ioannis; Deniz, Taşkin; Aertsen, Ad; Kumar, Arvind

    2016-02-01

    There is a growing interest in developing novel brain stimulation methods to control disease-related aberrant neural activity and to address basic neuroscience questions. Conventional methods for manipulating brain activity rely on open-loop approaches that usually lead to excessive stimulation and, crucially, do not restore the original computations performed by the network. Thus, they are often accompanied by undesired side-effects. Here, we introduce delayed feedback control (DFC), a conceptually simple but effective method, to control pathological oscillations in spiking neural networks (SNNs). Using mathematical analysis and numerical simulations we show that DFC can restore a wide range of aberrant network dynamics either by suppressing or enhancing synchronous irregular activity. Importantly, DFC, besides steering the system back to a healthy state, also recovers the computations performed by the underlying network. Finally, using our theory we identify the role of single neuron and synapse properties in determining the stability of the closed-loop system.

  20. Automatic classification of volcanic earthquakes using multi-station waveforms and dynamic neural networks

    NASA Astrophysics Data System (ADS)

    Bruton, Christopher Patrick

    Earthquakes and seismicity have long been used to monitor volcanoes. In addition to the time, location, and magnitude of an earthquake, the characteristics of the waveform itself are important. For example, low-frequency or hybrid type events could be generated by magma rising toward the surface. A rockfall event could indicate a growing lava dome. Classification of earthquake waveforms is thus a useful tool in volcano monitoring. A procedure to perform such classification automatically could flag certain event types immediately, instead of waiting for a human analyst's review. Inspired by speech recognition techniques, we have developed a procedure to classify earthquake waveforms using artificial neural networks. A neural network can be "trained" with an existing set of input and desired output data; in this case, we use a set of earthquake waveforms (input) that has been classified by a human analyst (desired output). After training the neural network, new sets of waveforms can be classified automatically as they are presented. Our procedure uses waveforms from multiple stations, making it robust to seismic network changes and outages. The use of a dynamic time-delay neural network allows waveforms to be presented without precise alignment in time, and thus could be applied to continuous data or to seismic events without clear start and end times. We have evaluated several different training algorithms and neural network structures to determine their effects on classification performance. We apply this procedure to earthquakes recorded at Mount Spurr and Katmai in Alaska, and Uturuncu Volcano in Bolivia. The procedure can successfully distinguish between slab and volcanic events at Uturuncu, between events from four different volcanoes in the Katmai region, and between volcano-tectonic and long-period events at Spurr. Average recall and overall accuracy were greater than 80% in all three cases.

  1. Unified description of the dynamics of quintessential scalar fields

    SciTech Connect

    Ureña-López, L. Arturo

    2012-03-01

    Using the dynamical system approach, we describe the general dynamics of cosmological scalar fields in terms of critical points and heteroclinic lines. It is found that critical points describe the initial and final states of the scalar field dynamics, but that heteroclinic lines give a more complete description of the evolution in between the critical points. In particular, the heteroclinic line that departs from the (saddle) critical point of perfect fluid-domination is the representative path in phase space of quintessence fields that may be viable dark energy candidates. We also discuss the attractor properties of the heteroclinic lines, and their importance for the description of thawing and freezing fields.

  2. Adaptive dynamic surface control of flexible-joint robots using self-recurrent wavelet neural networks.

    PubMed

    Yoo, Sung Jin; Park, Jin Bae; Choi, Yoon Ho

    2006-12-01

    A new method for the robust control of flexible-joint (FJ) robots with model uncertainties in both robot dynamics and actuator dynamics is proposed. The proposed control system is a combination of the adaptive dynamic surface control (DSC) technique and the self-recurrent wavelet neural network (SRWNN). The adaptive DSC technique provides the ability to overcome the "explosion of complexity" problem in backstepping controllers. The SRWNNs are used to observe the arbitrary model uncertainties of FJ robots, and all their weights are trained online. From the Lyapunov stability analysis, their adaptation laws are derived, and the uniform ultimate boundedness of all signals in the closed-loop adaptive system is proved. Finally, simulation results for a three-link FJ robot are utilized to validate the good position tracking performance and robustness against payload uncertainties and external disturbances of the proposed control system.

  3. Optimal system size for complex dynamics in random neural networks near criticality

    SciTech Connect

    Wainrib, Gilles; García del Molino, Luis Carlos

    2013-12-15

    In this article, we consider a model of dynamical agents coupled through a random connectivity matrix, as introduced by Sompolinsky et al. [Phys. Rev. Lett. 61(3), 259–262 (1988)] in the context of random neural networks. When system size is infinite, it is known that increasing the disorder parameter induces a phase transition leading to chaotic dynamics. We observe and investigate here a novel phenomenon in the sub-critical regime for finite size systems: the probability of observing complex dynamics is maximal for an intermediate system size when the disorder is close enough to criticality. We give a more general explanation of this type of system size resonance in the framework of extreme values theory for eigenvalues of random matrices.

  4. Gas dynamics in strong centrifugal fields

    SciTech Connect

    Bogovalov, S.V.; Kislov, V.A.; Tronin, I.V.

    2015-03-10

    The dynamics of waves generated by scoops in gas centrifuges (GC) for isotope separation is considered. The centrifugal acceleration in the GC reaches values of the order of 10^6 g. The centrifugal and Coriolis forces substantially modify the conventional sound waves. Three families of waves with different polarisation and dispersion exist under these conditions. Dynamics of the flow in the model GC Iguasu is investigated numerically. Comparison of the results of the numerical modelling of the wave dynamics with the analytical predictions is performed. A new resonance phenomenon in the GC is found. The resonances occur for the waves polarised along the rotational axis, which have the smallest damping due to viscosity.

  5. Segregated and overlapping neural circuits exist for the production of static and dynamic precision grip force.

    PubMed

    Neely, Kristina A; Coombes, Stephen A; Planetta, Peggy J; Vaillancourt, David E

    2013-03-01

    A central topic in sensorimotor neuroscience is the static-dynamic dichotomy that exists throughout the nervous system. Previous work examining motor unit synchronization reports that the activation strategy and timing of motor units differ for static and dynamic tasks. However, it remains unclear whether segregated or overlapping blood-oxygen-level-dependent (BOLD) activity exists in the brain for static and dynamic motor control. This study compared the neural circuits associated with the production of static force to those associated with the production of dynamic force pulses. To that end, healthy young adults (n = 17) completed static and dynamic precision grip force tasks during functional magnetic resonance imaging (fMRI). Both tasks activated core regions within the visuomotor network, including primary and sensory motor cortices, premotor cortices, multiple visual areas, putamen, and cerebellum. Static force was associated with unique activity in a right-lateralized cortical network including inferior parietal lobe, ventral premotor cortex, and dorsolateral prefrontal cortex. In contrast, dynamic force was associated with unique activity in left-lateralized and midline cortical regions, including supplementary motor area, superior parietal lobe, fusiform gyrus, and visual area V3. These findings provide the first neuroimaging evidence supporting a lateralized pattern of brain activity for the production of static and dynamic precision grip force.

  6. Quantum analysis applied to thermo field dynamics on dissipative systems

    SciTech Connect

    Hashizume, Yoichiro; Okamura, Soichiro; Suzuki, Masuo

    2015-03-10

    Thermo field dynamics is one of the formulations useful for treating statistical mechanics within the framework of field theory. In the present study, we discuss dissipative thermo field dynamics of quantum damped harmonic oscillators. To treat the effective renormalization of quantum dissipation, we use the Suzuki-Takano approximation. Finally, we derive a dissipative von Neumann equation in the Lindblad form. In the present treatment, we can easily obtain the initial damping shown previously by Kubo.

  7. Artificial neural network analysis of noisy visual field data in glaucoma.

    PubMed

    Henson, D B; Spenceley, S E; Bull, D R

    1997-06-01

    This paper reports on the application of an artificial neural network to the clinical analysis of ophthalmological data. In particular a 2-dimensional Kohonen self-organising feature map (SOM) is used to analyse visual field data from glaucoma patients. Importantly, the paper addresses the problem of how the SOM can be utilised to accommodate the noise within the data. This is a particularly important problem within longitudinal assessment, where detecting significant change is the crux of the problem in clinical diagnosis. Data from 737 glaucomatous visual field records (Humphrey Visual Field Analyzer, program 24-2) are used to train a SOM with 25 nodes organised on a square grid. The SOM clusters the data organising the output map such that fields with early and advanced loss are at extreme positions, with a continuum of change in place and extent of loss represented by the intervening nodes. For each SOM node 100 variants, generated by a computer simulation modelling the variability that might be expected in a glaucomatous eye, are also classified by the network to establish the extent of noise upon classification. Field change is then measured with respect to classification of a subsequent field, outside the area defined by the original field and its variants. The significant contribution of this paper is that the spatial analysis of the field data, which is provided by the SOM, has been augmented with noise analysis enhancing the visual representation of longitudinal data and enabling quantification of significant class change.
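
    The clustering step can be illustrated with a minimal Kohonen self-organising map on a 5x5 grid, as used in the study. The synthetic "visual field" vectors, training schedule and all parameters below are illustrative assumptions; the noise-variant simulation used to quantify significant change is not reproduced.

```python
import numpy as np

# Minimal Kohonen self-organising map on a 5x5 grid (25 nodes), trained on
# synthetic stand-ins for visual-field sensitivity vectors.
def train_som(data, grid=5, epochs=30, seed=0):
    rng = np.random.default_rng(seed)
    n_nodes, dim = grid * grid, data.shape[1]
    W = rng.uniform(data.min(), data.max(), size=(n_nodes, dim))
    coords = np.array([(i, j) for i in range(grid) for j in range(grid)], float)
    for epoch in range(epochs):
        lr = 0.5 * (1 - epoch / epochs) + 0.01          # decaying learning rate
        sigma = 2.0 * (1 - epoch / epochs) + 0.5        # decaying neighbourhood radius
        for x in rng.permutation(data):
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))  # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))           # neighbourhood function
            W += lr * h[:, None] * (x - W)
    return W

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # synthetic "fields": 54-point sensitivity vectors spanning mild to severe loss
    severity = rng.uniform(0, 30, size=(400, 1))
    fields = 35 - severity * rng.uniform(0.2, 1.0, size=(400, 54)) + rng.normal(0, 2, (400, 54))
    W = train_som(fields, grid=5)
    print("node mean sensitivities (dB), reshaped to the 5x5 map:")
    print(W.mean(axis=1).reshape(5, 5).round(1))
```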

  8. Autonomous and Decentralized Optimization of Large-Scale Heterogeneous Wireless Networks by Neural Network Dynamics

    NASA Astrophysics Data System (ADS)

    Hasegawa, Mikio; Tran, Ha Nguyen; Miyamoto, Goh; Murata, Yoshitoshi; Harada, Hiroshi; Kato, Shuzo

    We propose a neurodynamical approach to a large-scale optimization problem in Cognitive Wireless Clouds, in which a huge number of mobile terminals with multiple different air interfaces autonomously utilize the most appropriate infrastructure wireless networks, by sensing available wireless networks, selecting the most appropriate one, and reconfiguring themselves with seamless handover to the target networks. To deal with such a cognitive radio network, game theory has been applied in order to analyze the stability of the dynamical systems consisting of the mobile terminals' distributed behaviors, but it is not a tool for globally optimizing the state of the network. As a natural optimization dynamical system model suitable for large-scale complex systems, we introduce the neural network dynamics which converges to an optimal state since its property is to continually decrease its energy function. In this paper, we apply such neurodynamics to the optimization problem of radio access technology selection. We compose a neural network that solves the problem, and we show that it is possible to improve total average throughput simply by using distributed and autonomous neuron updates on the terminal side.
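
    The energy-descent property exploited here can be illustrated with a generic Hopfield-type network: with symmetric weights, zero self-coupling and asynchronous updates, each unit update never increases the network energy, so distributed autonomous updates drive the state towards a local minimum. The random weight matrix below is purely illustrative; encoding the radio access selection problem into the weights is not shown.

```python
import numpy as np

# Hopfield-type energy descent with asynchronous binary updates: the energy
# E(s) = -0.5 s^T W s - b^T s is non-increasing under each single-unit update
# when W is symmetric with zero diagonal. Weights here are random.
rng = np.random.default_rng(0)
n = 30
W = rng.standard_normal((n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)                       # symmetric, zero self-coupling
b = rng.standard_normal(n)
s = rng.choice([-1.0, 1.0], size=n)            # initial binary state

def energy(s):
    return -0.5 * s @ W @ s - b @ s

for sweep in range(10):
    for i in rng.permutation(n):               # asynchronous unit updates
        s[i] = 1.0 if W[i] @ s + b[i] >= 0 else -1.0
    print(f"sweep {sweep}: energy = {energy(s):.3f}")
```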

  9. Dynamic Changes in Amygdala Psychophysiological Connectivity Reveal Distinct Neural Networks for Facial Expressions of Basic Emotions

    PubMed Central

    Diano, Matteo; Tamietto, Marco; Celeghin, Alessia; Weiskrantz, Lawrence; Tatu, Mona-Karina; Bagnis, Arianna; Duca, Sergio; Geminiani, Giuliano; Cauda, Franco; Costa, Tommaso

    2017-01-01

    The quest to characterize the neural signature distinctive of different basic emotions has recently come under renewed scrutiny. Here we investigated whether facial expressions of different basic emotions modulate the functional connectivity of the amygdala with the rest of the brain. To this end, we presented seventeen healthy participants (8 females) with facial expressions of anger, disgust, fear, happiness, sadness and emotional neutrality and analyzed amygdala’s psychophysiological interaction (PPI). In fact, PPI can reveal how inter-regional amygdala communications change dynamically depending on perception of various emotional expressions to recruit different brain networks, compared to the functional interactions it entertains during perception of neutral expressions. We found that for each emotion the amygdala recruited a distinctive and spatially distributed set of structures to interact with. These changes in amygdala connectional patterns characterize the dynamic signature prototypical of individual emotion processing, and seemingly represent a neural mechanism that serves to implement the distinctive influence that each emotion exerts on perceptual, cognitive, and motor responses. Besides these differences, all emotions enhanced amygdala functional integration with premotor cortices compared to neutral faces. The present findings thus concur to reconceptualise the structure-function relation between brain-emotion from the traditional one-to-one mapping toward a network-based and dynamic perspective. PMID:28345642

  10. Neural control of cardiovascular responses and of ventilation during dynamic exercise in man.

    PubMed Central

    Strange, S; Secher, N H; Pawelczyk, J A; Karpakka, J; Christensen, N J; Mitchell, J H; Saltin, B

    1993-01-01

    1. Nine subjects performed dynamic knee extension by voluntary muscle contractions and by evoked contractions with and without epidural anaesthesia. Four exercise bouts of 10 min each were performed: three of one-legged knee extension (10, 20 and 30 W) and one of two-legged knee extension at 2 x 20 W. Epidural anaesthesia was induced with 0.5% bupivacaine or 2% lidocaine. Presence of neural blockade was verified by cutaneous sensory anaesthesia below T8-T10 and complete paralysis of both legs. 2. Compared to voluntary exercise, control electrically induced exercise resulted in normal or enhanced cardiovascular, metabolic and ventilatory responses. However, during epidural anaesthesia the increase in blood pressure with exercise was abolished. Furthermore, the increases in heart rate, cardiac output and leg blood flow were reduced. In contrast, plasma catecholamines, leg glucose uptake and leg lactate release, arterial carbon dioxide tension and pulmonary ventilation were not affected. Arterial and venous plasma potassium concentrations became elevated but leg potassium release was not increased. 3. The results conform to the idea that a reflex originating in contracting muscle is essential for the normal blood pressure response to dynamic exercise, and that other neural, humoral and haemodynamic mechanisms cannot govern this response. However, control mechanisms other than central command and the exercise pressor reflex can influence heart rate, cardiac output, muscle blood flow and ventilation during dynamic exercise in man. PMID:8308750

  11. Dynamic Changes in Amygdala Psychophysiological Connectivity Reveal Distinct Neural Networks for Facial Expressions of Basic Emotions.

    PubMed

    Diano, Matteo; Tamietto, Marco; Celeghin, Alessia; Weiskrantz, Lawrence; Tatu, Mona-Karina; Bagnis, Arianna; Duca, Sergio; Geminiani, Giuliano; Cauda, Franco; Costa, Tommaso

    2017-03-27

    The quest to characterize the neural signature distinctive of different basic emotions has recently come under renewed scrutiny. Here we investigated whether facial expressions of different basic emotions modulate the functional connectivity of the amygdala with the rest of the brain. To this end, we presented seventeen healthy participants (8 females) with facial expressions of anger, disgust, fear, happiness, sadness and emotional neutrality and analyzed amygdala's psychophysiological interaction (PPI). In fact, PPI can reveal how inter-regional amygdala communications change dynamically depending on perception of various emotional expressions to recruit different brain networks, compared to the functional interactions it entertains during perception of neutral expressions. We found that for each emotion the amygdala recruited a distinctive and spatially distributed set of structures to interact with. These changes in amygdala connectional patterns characterize the dynamic signature prototypical of individual emotion processing, and seemingly represent a neural mechanism that serves to implement the distinctive influence that each emotion exerts on perceptual, cognitive, and motor responses. Besides these differences, all emotions enhanced amygdala functional integration with premotor cortices compared to neutral faces. The present findings thus concur to reconceptualise the structure-function relation between brain-emotion from the traditional one-to-one mapping toward a network-based and dynamic perspective.

  12. Analysis of neural dynamics in mild cognitive impairment and Alzheimer's disease using wavelet turbulence

    NASA Astrophysics Data System (ADS)

    Poza, Jesús; Gómez, Carlos; García, María; Corralejo, Rebeca; Fernández, Alberto; Hornero, Roberto

    2014-04-01

    Objective. Current diagnostic guidelines encourage further research for the development of novel Alzheimer's disease (AD) biomarkers, especially in its prodromal form (i.e. mild cognitive impairment, MCI). Magnetoencephalography (MEG) can provide essential information about AD brain dynamics; however, only a few studies have addressed the characterization of MEG in incipient AD. Approach. We analyzed MEG rhythms from 36 AD patients, 18 MCI subjects and 27 controls, introducing a new wavelet-based parameter to quantify their dynamical properties: the wavelet turbulence. Main results. Our results suggest that AD progression elicits statistically significant regional-dependent patterns of abnormalities in the neural activity (p < 0.05), including a progressive loss of irregularity, variability, symmetry and Gaussianity. Furthermore, the highest accuracies to discriminate AD and MCI subjects from controls were 79.4% and 68.9%, whereas, in the three-class setting, the accuracy reached 67.9%. Significance. Our findings provide an original description of several dynamical properties of neural activity in early AD and offer preliminary evidence that the proposed methodology is a promising tool for assessing brain changes at different stages of dementia.

  13. Frequency decomposition of conditional Granger causality and application to multivariate neural field potential data.

    PubMed

    Chen, Yonghong; Bressler, Steven L; Ding, Mingzhou

    2006-01-30

    It is often useful in multivariate time series analysis to determine statistical causal relations between different time series. Granger causality is a fundamental measure for this purpose. Yet the traditional pairwise approach to Granger causality analysis may not clearly distinguish between direct causal influences from one time series to another and indirect ones acting through a third time series. In order to differentiate direct from indirect Granger causality, a conditional Granger causality measure in the frequency domain is derived based on a partition matrix technique. Simulations and an application to neural field potential time series are demonstrated to validate the method.
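
    The basic idea can be sketched with a minimal pairwise, time-domain Granger causality check: Y "Granger-causes" X if lagged values of Y reduce the residual variance of an autoregressive model for X beyond X's own past. This is only the underlying idea; the conditional, frequency-domain measure derived in the paper is not reproduced, and the model order and toy VAR process below are illustrative.

```python
import numpy as np

# Minimal pairwise, time-domain Granger causality sketch (Geweke-style measure).
def lagged_design(series_list, order):
    n = len(series_list[0])
    cols = [np.ones(n - order)]
    for s in series_list:
        cols += [s[order - k:n - k] for k in range(1, order + 1)]
    return np.column_stack(cols)

def residual_variance(target, design):
    beta, *_ = np.linalg.lstsq(design, target, rcond=None)
    return np.var(target - design @ beta)

def granger(x, y, order=2):
    """Measure of y -> x: ln(restricted / full residual variance)."""
    target = x[order:]
    v_restricted = residual_variance(target, lagged_design([x], order))
    v_full = residual_variance(target, lagged_design([x, y], order))
    return np.log(v_restricted / v_full)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 5000
    x, y = np.zeros(n), np.zeros(n)
    for t in range(1, n):              # toy VAR(1): y drives x, not vice versa
        y[t] = 0.6 * y[t - 1] + rng.standard_normal()
        x[t] = 0.4 * x[t - 1] + 0.5 * y[t - 1] + rng.standard_normal()
    print(f"GC y -> x: {granger(x, y):.3f}   GC x -> y: {granger(y, x):.3f}")
```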

  14. Stable population coding for working memory coexists with heterogeneous neural dynamics in prefrontal cortex.

    PubMed

    Murray, John D; Bernacchia, Alberto; Roy, Nicholas A; Constantinidis, Christos; Romo, Ranulfo; Wang, Xiao-Jing

    2017-01-10

    Working memory (WM) is a cognitive function for temporary maintenance and manipulation of information, which requires conversion of stimulus-driven signals into internal representations that are maintained across seconds-long mnemonic delays. Within primate prefrontal cortex (PFC), a critical node of the brain's WM network, neurons show stimulus-selective persistent activity during WM, but many of them exhibit strong temporal dynamics and heterogeneity, raising the questions of whether, and how, neuronal populations in PFC maintain stable mnemonic representations of stimuli during WM. Here we show that despite complex and heterogeneous temporal dynamics in single-neuron activity, PFC activity is endowed with a population-level coding of the mnemonic stimulus that is stable and robust throughout WM maintenance. We applied population-level analyses to hundreds of recorded single neurons from lateral PFC of monkeys performing two seminal tasks that demand parametric WM: oculomotor delayed response and vibrotactile delayed discrimination. We found that the high-dimensional state space of PFC population activity contains a low-dimensional subspace in which stimulus representations are stable across time during the cue and delay epochs, enabling robust and generalizable decoding compared with time-optimized subspaces. To explore potential mechanisms, we applied these same population-level analyses to theoretical neural circuit models of WM activity. Three previously proposed models failed to capture the key population-level features observed empirically. We propose network connectivity properties, implemented in a linear network model, which can underlie these features. This work uncovers stable population-level WM representations in PFC, despite strong temporal neural dynamics, thereby providing insights into neural circuit mechanisms supporting WM.

  15. On the periodic dynamics of a class of time-varying delayed neural networks via differential inclusions.

    PubMed

    Cai, Zuowei; Huang, Lihong; Guo, Zhenyuan; Chen, Xiaoyan

    2012-09-01

    This paper investigates the periodic dynamics of a general class of time-varying delayed neural networks with discontinuous right-hand sides. By employing the topological degree theory in set-valued analysis, differential inclusions theory and Lyapunov-like approach, we perform a thorough analysis of the existence, uniqueness and global exponential stability of the periodic solution for the neural networks. Especially, some sufficient conditions are derived to guarantee the existence, uniqueness and global exponential stability of the equilibrium point for the autonomous systems corresponding to the non-autonomous neural networks. Furthermore, the global convergence of the output and the convergence in finite time of the state are also discussed. Without assuming the boundedness or monotonicity of the discontinuous neuron activation functions, the obtained results improve and extend previous works on discontinuous or continuous neural network dynamical systems. Finally, two numerical examples are provided to show the applicability and effectiveness of our main results.

  16. Relationship between neural activation and electric field distribution during deep brain stimulation.

    PubMed

    Åström, Mattias; Diczfalusy, Elin; Martens, Hubert; Wårdell, Karin

    2015-02-01

    Models and simulations are commonly used to study deep brain stimulation (DBS). Simulated stimulation fields are often defined and visualized by electric field isolevels or volumes of tissue activated (VTA). The aim of the present study was to evaluate the relationship between stimulation field strength, as defined by the electric potential V, the electric field E, and the divergence of the electric field ∇²V, and neural activation. Axon cable models were developed and coupled to finite-element DBS models in three dimensions (3-D). Field thresholds (VT, ET, and ∇²VT) were derived at the location of activation for various stimulation amplitudes (1 to 5 V), pulse widths (30 to 120 μs), and axon diameters (2.0 to 7.5 μm). Results showed that thresholds for VT and ∇²VT were highly dependent on the stimulation amplitude, while ET was approximately independent of the amplitude for large axons. The activation field strength thresholds presented in this study may be used in future studies to approximate the VTA during model-based investigations of DBS without the need for computational axon models.
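
    An illustrative point-source sketch of the three field quantities is given below: the potential of a monopole electrode in a homogeneous medium is evaluated along a straight axon, together with the axial electric field and the second spatial derivative of the potential (the classical "activating function"). The geometry, current and conductivity are arbitrary illustration values; no finite-element or axon cable model is involved.

```python
import numpy as np

# Point-source sketch of the field quantities compared in the study: V, the
# axial electric field dV/dz, and d2V/dz2 (the classical activating function).
# A monopole electrode in a homogeneous medium is assumed; all values are
# arbitrary illustration values, not the paper's finite-element model.
sigma = 0.2          # tissue conductivity (S/m)
I = -1e-3            # cathodic stimulus current (A)
r_perp = 1e-3        # perpendicular electrode-axon distance (m)

z = np.linspace(-5e-3, 5e-3, 2001)                 # position along the axon (m)
dz = z[1] - z[0]
r = np.sqrt(z**2 + r_perp**2)                      # electrode-to-node distance
V = I / (4 * np.pi * sigma * r)                    # point-source extracellular potential

E_z = -np.gradient(V, dz)                          # axial electric field component
activating = np.gradient(np.gradient(V, dz), dz)   # d2V/dz2, drives (de)polarisation

i_peak = np.argmax(activating)
print(f"peak activating function {activating[i_peak]:.2e} V/m^2 at z = {z[i_peak]*1e3:.2f} mm")
print(f"|E| there: {abs(E_z[i_peak]):.2f} V/m, V there: {V[i_peak]*1e3:.2f} mV")
```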

  17. The neural basis of responsive caregiving behaviour: Investigating temporal dynamics within the parental brain.

    PubMed

    Young, Katherine S; Parsons, Christine E; Stein, Alan; Vuust, Peter; Craske, Michelle G; Kringelbach, Morten L

    2016-09-06

    Whether it is the sound of a distressed cry or the image of a cute face, infants capture our attention. Parents and other adults alike are drawn into interactions to engage in play, nurturance and caregiving. Responsive caregiving behaviour is a key feature of the parent-infant relationship, forming the foundation upon which attachment is built. Infant cues are considered to be 'innate releasers' or 'motivational entities' eliciting responses in nearby adults (Lorenz 1943; Murray, 1979) [42,43]. Through the advent of modern neuroimaging, we are beginning to understand the initiation of this motivational state at the neurobiological level. In this review, we first describe a current model of the 'parental brain', based on functional MRI studies assessing neural responses to infant cues. Next, we discuss recent findings from temporally sensitive techniques (magneto- and electroencephalography) that illuminate the temporal dynamics of this neural network. We focus on converging evidence highlighting a specific role for the orbitofrontal cortex in supporting rapid orienting responses to infant cues. In addition, we consider to what extent these neural processes are tied to parenthood, or whether they might be present universally in all adults. We highlight important avenues for future research, including utilizing multiple levels of analysis for a comprehensive understanding of adaptive caregiving behaviour. Finally, we discuss how this research can help us understand disrupted parent-infant relationships, such as when parents' contingent responding to infant cues is impaired, for example in parental depression or anxiety, where cognitive attentional processes are disrupted.

  18. Neural correlates of the perception of dynamic versus static facial expressions of emotion

    PubMed Central

    Kessler, Henrik; Doyen-Waldecker, Cornelia; Hofer, Christian; Hoffmann, Holger; Traue, Harald C.; Abler, Birgit

    2011-01-01

    Aim: This study investigated brain areas involved in the perception of dynamic facial expressions of emotion. Methods: A group of 30 healthy subjects was measured with fMRI when passively viewing prototypical facial expressions of fear, disgust, sadness and happiness. Using morphing techniques, all faces were displayed as still images and also dynamically as a film clip with the expressions evolving from neutral to emotional. Results: Irrespective of a specific emotion, dynamic stimuli selectively activated bilateral superior temporal sulcus, visual area V5, fusiform gyrus, thalamus and other frontal and parietal areas. Interaction effects of emotion and mode of presentation (static/dynamic) were only found for the expression of happiness, where static faces evoked greater activity in the medial prefrontal cortex. Conclusions: Our results confirm previous findings on neural correlates of the perception of dynamic facial expressions and are in line with studies showing the importance of the superior temporal sulcus and V5 in the perception of biological motion. Differential activation in the fusiform gyrus for dynamic stimuli stands in contrast to classical models of face perception but is coherent with new findings arguing for a more general role of the fusiform gyrus in the processing of socially relevant stimuli. PMID:21522486

  19. Localized states in an unbounded neural field equation with smooth firing rate function: a multi-parameter analysis.

    PubMed

    Faye, Grégory; Rankin, James; Chossat, Pascal

    2013-05-01

    The existence of spatially localized solutions in neural networks is an important topic in neuroscience as these solutions are considered to characterize working (short-term) memory. We work with an unbounded neural network represented by the neural field equation with smooth firing rate function and a wizard hat spatial connectivity. Noting that stationary solutions of our neural field equation are equivalent to homoclinic orbits in a related fourth order ordinary differential equation, we apply normal form theory for a reversible Hopf bifurcation to prove the existence of localized solutions; further, we present results concerning their stability. Numerical continuation is used to compute branches of localized solution that exhibit snaking-type behaviour. We describe in terms of three parameters the exact regions for which localized solutions persist.

  20. Auditory-induced neural dynamics in sensory-motor circuitry predict learned temporal and sequential statistics of birdsong

    PubMed Central

    Bouchard, Kristofer E.; Brainard, Michael S.

    2016-01-01

    Predicting future events is a critical computation for both perception and behavior. Despite the essential nature of this computation, there are few studies demonstrating neural activity that predicts specific events in learned, probabilistic sequences. Here, we test the hypotheses that the dynamics of internally generated neural activity are predictive of future events and are structured by the learned temporal–sequential statistics of those events. We recorded neural activity in Bengalese finch sensory-motor area HVC in response to playback of sequences from individuals’ songs, and examined the neural activity that continued after stimulus offset. We found that the strength of response to a syllable in the sequence depended on the delay at which that syllable was played, with a maximal response when the delay matched the intersyllable gap normally present for that specific syllable during song production. Furthermore, poststimulus neural activity induced by sequence playback resembled the neural response to the next syllable in the sequence when that syllable was predictable, but not when the next syllable was uncertain. Our results demonstrate that the dynamics of internally generated HVC neural activity are predictive of the learned temporal–sequential structure of produced song and that the strength of this prediction is modulated by uncertainty. PMID:27506786

  1. Force fields for classical molecular dynamics.

    PubMed

    Monticelli, Luca; Tieleman, D Peter

    2013-01-01

    In this chapter we review the basic features and the principles underlying molecular mechanics force fields commonly used in molecular modeling of biological macromolecules. We start by summarizing the historical background and then describe classical pairwise additive potential energy functions. We introduce the problem of the calculation of nonbonded interactions, of particular importance for charged macromolecules. Different parameterization philosophies are then presented, followed by a section on force field validation. We conclude with a brief overview on future perspectives for the development of classical force fields.

  2. Optical vortex behavior in dynamic speckle fields.

    PubMed

    Kirkpatrick, Sean J; Khaksari, Kosar; Thomas, Dennis; Duncan, Donald D

    2012-05-01

    The dynamic behavior of phase singularities, or optical vortices, in the pseudo-phase representation of dynamic speckle patterns is investigated. Sequences of band-limited, dynamic speckle patterns with predetermined Gaussian decorrelation behavior were generated, and the pseudo-phase realizations of the individual speckle patterns were calculated via a two-dimensional Hilbert transform algorithm. Singular points in the pseudo-phase representation are identified by calculating the local topological charge as determined by convolution of the pseudo-phase representations with a series of 2×2 nabla filters. The spatial locations of the phase singularities are tracked over all frames of the speckle sequences, and recorded in three-dimensional space (x,y,f), where f is frame number in the sequence. The behavior of the phase singularities traces 'vortex trails' which are representative of the speckle dynamics. Slowly decorrelating speckle patterns result in long, relatively straight vortex trails, while rapidly decorrelating speckle patterns result in tortuous, relatively short vortex trails. Optical vortex analysis such as that described herein can be used as a descriptor of biological activity, flow, and motion.
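
    A simplified sketch of the detection step is given below. It assumes a synthetic band-limited speckle intensity, a pseudo-phase obtained from a row-wise one-dimensional analytic signal (a stand-in for the two-dimensional Hilbert transform used in the paper), and singular points located by the 2π winding of the phase around each 2x2 plaquette rather than by the nabla-filter convolution described above; all parameters are illustrative.

```python
import numpy as np
from scipy.signal import hilbert
from scipy.ndimage import gaussian_filter

# Vortex (phase-singularity) detection in a synthetic speckle pattern: build a
# pseudo-phase from a row-wise analytic signal, then locate singular points by
# the 2*pi winding of the phase around each 2x2 plaquette.
rng = np.random.default_rng(0)
field = gaussian_filter(rng.standard_normal((256, 256)), 3) \
      + 1j * gaussian_filter(rng.standard_normal((256, 256)), 3)
intensity = np.abs(field) ** 2                       # synthetic speckle pattern

analytic = hilbert(intensity - intensity.mean(axis=1, keepdims=True), axis=1)
phase = np.angle(analytic)                           # pseudo-phase

def wrap(a):
    return (a + np.pi) % (2 * np.pi) - np.pi

# circulation of the wrapped phase differences around each 2x2 plaquette
c = (wrap(phase[:-1, 1:] - phase[:-1, :-1])
   + wrap(phase[1:, 1:] - phase[:-1, 1:])
   + wrap(phase[1:, :-1] - phase[1:, 1:])
   + wrap(phase[:-1, :-1] - phase[1:, :-1]))
charge = np.rint(c / (2 * np.pi)).astype(int)

print("positive vortices:", int((charge > 0).sum()),
      " negative vortices:", int((charge < 0).sum()))
```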

  3. Static and dynamical Meissner force fields

    NASA Technical Reports Server (NTRS)

    Weinberger, B. R.; Lynds, L.; Hull, J. R.; Mulcahy, T. M.

    1991-01-01

    The coupling between copper-based high temperature superconductors (HTS) and magnets is represented by a force field. Zero-field cooled experiments were performed with several forms of superconductors: 1) cold-pressed sintered cylindrical disks; 2) small particles fixed in epoxy polymers; and 3) small particles suspended in hydrocarbon waxes. Using magnets with axial field symmetries, direct spatial force measurements in the range of 0.1 to 10^4 dynes were performed with an analytical balance and force constants were obtained from mechanical vibrational resonances. Force constants increase dramatically with decreasing spatial displacement. The force field displays a strong temperature dependence between 20 and 90 K and decreases exponentially with increasing distance of separation. Distinct slope changes suggest the presence of B-field and temperature-activated processes that define the forces. Hysteresis measurements indicated that the magnitude of force scales roughly with the volume fraction of HTS in composite structures. Thus, the net force resulting from the field interaction appears to arise from regions as small or smaller than the grain size and does not depend on contiguous electron transport over large areas. Results of these experiments are discussed.

  4. Laws of Large Numbers and Langevin Approximations for Stochastic Neural Field Equations

    PubMed Central

    2013-01-01

    In this study, we consider limit theorems for microscopic stochastic models of neural fields. We show that the Wilson–Cowan equation can be obtained as the limit in uniform convergence on compacts in probability for a sequence of microscopic models when the number of neuron populations distributed in space and the number of neurons per population tend to infinity. This result also allows one to obtain limits for qualitatively different stochastic convergence concepts, e.g., convergence in the mean. Further, we present a central limit theorem for the martingale part of the microscopic models which, suitably re-scaled, converges to a centred Gaussian process with independent increments. These two results provide the basis for presenting the neural field Langevin equation, a stochastic differential equation taking values in a Hilbert space, which is the infinite-dimensional analogue of the chemical Langevin equation in the present setting. On a technical level, we apply recently developed law of large numbers and central limit theorems for piecewise deterministic processes taking values in Hilbert spaces to a master equation formulation of stochastic neuronal network models. These theorems are valid for processes taking values in Hilbert spaces, and are thereby able to incorporate spatial structures of the underlying model. Mathematics Subject Classification (2000): 60F05, 60J25, 60J75, 92C20. PMID:23343328
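
    A rough Euler-Maruyama sketch of a spatially discretised stochastic neural field of Wilson-Cowan type is given below, with a noise amplitude that shrinks like 1/√N to mimic finite-population fluctuations, loosely in the spirit of the Langevin approximation discussed above. The kernel, gain, noise scaling and all parameters are illustrative assumptions, not the paper's rigorous construction.

```python
import numpy as np

# Euler-Maruyama sketch of a discretised stochastic neural field of
# Wilson-Cowan type with 1/sqrt(N)-scaled fluctuations. Illustrative only.
def simulate(N=1000, n=100, L=10.0, dt=0.01, T=20.0, seed=0):
    rng = np.random.default_rng(seed)
    x = np.linspace(0, L, n, endpoint=False)
    dx = L / n
    d = np.abs(x[:, None] - x[None, :])
    d = np.minimum(d, L - d)                          # periodic domain
    w = 2.0 * np.exp(-d)                              # excitatory kernel
    f = lambda u: 1.0 / (1.0 + np.exp(-4.0 * (u - 1.0)))
    u = 0.5 + 0.1 * rng.standard_normal(n)
    for _ in range(int(T / dt)):
        drift = -u + (w @ f(u)) * dx
        noise = np.sqrt(f(u) / N)                     # demographic-style noise amplitude
        u = u + drift * dt + noise * np.sqrt(dt) * rng.standard_normal(n)
    return x, u

if __name__ == "__main__":
    for N in (100, 10000):
        _, u = simulate(N=N)
        print(f"N={N:6d}  mean activity={u.mean():.3f}  spatial std={u.std():.3f}")
```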

  5. The Neural Basis of the Right Visual Field Advantage in Reading: An MEG Analysis Using Virtual Electrodes

    ERIC Educational Resources Information Center

    Barca, Laura; Cornelissen, Piers; Simpson, Michael; Urooj, Uzma; Woods, Will; Ellis, Andrew W.

    2011-01-01

    Right-handed participants respond more quickly and more accurately to written words presented in the right visual field (RVF) than in the left visual field (LVF). Previous attempts to identify the neural basis of the RVF advantage have had limited success. Experiment 1 was a behavioral study of lateralized word naming which established that the…

  6. Utilizing neural networks in magnetic media modeling and field computation: A review

    PubMed Central

    Adly, Amr A.; Abd-El-Hafiz, Salwa K.

    2013-01-01

    Magnetic materials are considered as crucial components for a wide range of products and devices. Usually, complexity of such materials is defined by their permeability classification and coupling extent to non-magnetic properties. Hence, development of models that could accurately simulate the complex nature of these materials becomes crucial to the multi-dimensional field-media interactions and computations. In the past few decades, artificial neural networks (ANNs) have been utilized in many applications to perform miscellaneous tasks such as identification, approximation, optimization, classification and forecasting. The purpose of this review article is to give an account of the utilization of ANNs in modeling as well as field computation involving complex magnetic materials. Mostly used ANN types in magnetics, advantages of this usage, detailed implementation methodologies as well as numerical examples are given in the paper. PMID:25685531

  7. Optimal field-scale groundwater remediation using neural networks and the genetic algorithm

    SciTech Connect

    Rogers, L.L.; Dowla, F.U.; Johnson, V.M.

    1993-05-01

    We present a new approach for field-scale nonlinear management of groundwater remediation. First, an artificial neural network (ANN) is trained to predict the outcome of a groundwater transport simulation. Then a genetic algorithm (GA) searches through possible pumping realizations, evaluating the fitness of each with a prediction from the trained ANN. Traditional approaches rely on optimization algorithms requiring sequential calls of the groundwater transport simulation. Our approach processes the transport simulations in parallel and "recycles" the knowledge base of these simulations, greatly reducing the computational and real-time burden, often the primary impediment to developing field-scale management models. We present results from a Superfund site suggesting that such management techniques can reduce cleanup costs by over a hundred million dollars.
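
    The surrogate-plus-search pattern described in the abstract can be sketched schematically as follows. This is not the authors' code: the pumping realization is reduced to a binary on/off vector per well, a stand-in function plays the role of the groundwater transport simulation, and scikit-learn's MLPRegressor serves as the ANN whose predictions supply the GA fitness (selection and mutation only, for brevity).

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    n_wells = 8

    def transport_simulation(pumping):
        # Stand-in for the expensive groundwater transport model: returns a cleanup-cost proxy.
        return pumping.sum() + rng.normal(0.0, 0.1)

    # 1) Train the ANN surrogate on a batch of pre-computed simulations ("recycled" knowledge).
    X = rng.integers(0, 2, size=(200, n_wells)).astype(float)
    y = np.array([transport_simulation(x) for x in X])
    surrogate = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)

    # 2) A simple genetic algorithm searches pumping patterns, scoring each with the surrogate.
    pop = rng.integers(0, 2, size=(50, n_wells))
    for _ in range(100):
        cost = surrogate.predict(pop.astype(float))
        parents = pop[np.argsort(cost)[:25]]                 # keep the lowest predicted cost half
        children = parents[rng.integers(0, 25, size=50)].copy()
        flip = rng.random(children.shape) < 0.05             # bit-flip mutation
        children[flip] = 1 - children[flip]
        pop = children
    best = pop[np.argmin(surrogate.predict(pop.astype(float)))]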

  8. Utilizing neural networks in magnetic media modeling and field computation: A review.

    PubMed

    Adly, Amr A; Abd-El-Hafiz, Salwa K

    2014-11-01

    Magnetic materials are considered as crucial components for a wide range of products and devices. Usually, complexity of such materials is defined by their permeability classification and coupling extent to non-magnetic properties. Hence, development of models that could accurately simulate the complex nature of these materials becomes crucial to the multi-dimensional field-media interactions and computations. In the past few decades, artificial neural networks (ANNs) have been utilized in many applications to perform miscellaneous tasks such as identification, approximation, optimization, classification and forecasting. The purpose of this review article is to give an account of the utilization of ANNs in modeling as well as field computation involving complex magnetic materials. Mostly used ANN types in magnetics, advantages of this usage, detailed implementation methodologies as well as numerical examples are given in the paper.

  9. LETTER TO THE EDITOR: Retrieval dynamics of neural networks for sparsely coded sequential patterns

    NASA Astrophysics Data System (ADS)

    Kitano, Katsunori; Aoyagi, Toshio

    1998-09-01

    It is well known that a sparsely coded network in which the activity level is extremely low has intriguing equilibrium properties. In this work, we study the dynamical properties of a neural network designed to store sparsely coded sequential patterns rather than static ones. Applying the theory of statistical neurodynamics, we derive the dynamical equations governing the retrieval process which are described by some macroscopic order parameters such as the overlap. It is found that our theory provides good predictions for the storage capacity and the basin of attraction obtained through numerical simulations. The results indicate that the nature of the basin of attraction depends on the methods of activity control employed. Furthermore, it is found that robustness against random synaptic dilution slightly deteriorates with the degree of sparseness.

  10. Investigation of neural-net based control strategies for improved power system dynamic performance

    SciTech Connect

    Sobajic, D.J.

    1995-12-31

    The ability to accurately predict the behavior of a dynamic system is of essential importance in the monitoring and control of complex processes. In this regard, recent advances in neural-net-based system identification represent a significant step toward the development and design of a new generation of control tools for increased system performance and reliability. The enabling functionality is the accurate representation of a model of a nonlinear and nonstationary dynamic system. This functionality provides valuable new opportunities, including: (1) the ability to predict future system behavior on the basis of actual system observations, (2) on-line evaluation and display of system performance and design of early warning systems, and (3) controller optimization for improved system performance. In this presentation, we discuss the issues involved in the definition and design of learning control systems and their impact on power system control. Several numerical examples are provided for illustrative purposes.

  11. Implementation of recurrent artificial neural networks for nonlinear dynamic modeling in biomedical applications.

    PubMed

    Stošovic, Miona V Andrejevic; Litovski, Vanco B

    2013-11-01

    Simulation is indispensable during the design of many biomedical prostheses that are based on fundamental electrical and electronic actions. However, simulation necessitates the use of adequate models. The main difficulties related to the modeling of such devices are their nonlinearity and dynamic behavior. Here we report the application of recurrent artificial neural networks for modeling of a nonlinear, two-terminal circuit equivalent to a specific implantable hearing device. The method is general in the sense that any nonlinear dynamic two-terminal device or circuit may be modeled in the same way. The model generated was successfully used for simulation and optimization of a driver (operational amplifier)-transducer ensemble. This confirms our claim that in addition to the proper design and optimization of the hearing actuator, optimization in the electronic domain, at the electronic driver circuit-to-actuator interface, should take place in order to achieve best performance of the complete hearing aid.

  12. Measuring Process Dynamics and Nuclear Migration for Clones of Neural Progenitor Cells

    PubMed Central

    De La Hoz, Edgar Cardenas; Winter, Mark R.; Apostolopoulou, Maria; Temple, Sally

    2016-01-01

    Neural stem and progenitor cells (NPCs) generate processes that extend from the cell body in a dynamic manner. The NPC nucleus migrates along these processes with patterns believed to be tightly coupled to mechanisms of cell cycle regulation and cell fate determination. Here, we describe a new segmentation and tracking approach that allows NPC processes and nuclei to be reliably tracked across multiple rounds of cell division in phase-contrast microscopy images. Results are presented for mouse adult and embryonic NPCs from hundreds of clones, or lineage trees, containing tens of thousands of cells and millions of segmentations. New visualization approaches allow the NPC nuclear and process features to be effectively visualized for an entire clone. Significant differences in process and nuclear dynamics were found among type A and type C adult NPCs, and also between embryonic NPCs cultured from the anterior and posterior cerebral cortex. PMID:27878138

  13. Complex dynamics of a delayed discrete neural network of two nonidentical neurons

    SciTech Connect

    Chen, Yuanlong; Huang, Tingwen; Huang, Yu

    2014-03-15

    In this paper, we discover that a delayed discrete Hopfield neural network of two nonidentical neurons with self-connections and no self-connections can demonstrate chaotic behaviors. To this end, we first transform the model, by a novel way, into an equivalent system which has some interesting properties. Then, we identify the chaotic invariant set for this system and show that the dynamics of this system within this set is topologically conjugate to the dynamics of the full shift map with two symbols. This confirms chaos in the sense of Devaney. Our main results generalize the relevant results of Huang and Zou [J. Nonlinear Sci. 15, 291–303 (2005)], Kaslik and Balint [J. Nonlinear Sci. 18, 415–432 (2008)] and Chen et al. [Sci. China Math. 56(9), 1869–1878 (2013)]. We also give some numeric simulations to verify our theoretical results.

  14. Brain Dynamics in Predicting Driving Fatigue Using a Recurrent Self-Evolving Fuzzy Neural Network.

    PubMed

    Liu, Yu-Ting; Lin, Yang-Yin; Wu, Shang-Lin; Chuang, Chun-Hsiang; Lin, Chin-Teng

    2016-02-01

    This paper proposes a generalized prediction system called a recurrent self-evolving fuzzy neural network (RSEFNN) that employs an on-line gradient descent learning rule to address the electroencephalography (EEG) regression problem in brain dynamics for driving fatigue. The cognitive states of drivers significantly affect driving safety; in particular, fatigue driving, or drowsy driving, endangers both the individual and the public. For this reason, the development of brain-computer interfaces (BCIs) that can identify drowsy driving states is a crucial and urgent topic of study. Many EEG-based BCIs have been developed as artificial auxiliary systems for use in various practical applications because of the benefits of measuring EEG signals. In the literature, the efficacy of EEG-based BCIs in recognition tasks has been limited by low resolutions. The system proposed in this paper represents the first attempt to use the recurrent fuzzy neural network (RFNN) architecture to increase adaptability in realistic EEG applications to overcome this bottleneck. This paper further analyzes brain dynamics in a simulated car driving task in a virtual-reality environment. The proposed RSEFNN model is evaluated using the generalized cross-subject approach, and the results indicate that the RSEFNN is superior to competing models regardless of the use of recurrent or nonrecurrent structures.

  15. Classification of multispectral imagery using wavelet transform and dynamic learning neural network

    NASA Astrophysics Data System (ADS)

    Chen, H. C.; Tzeng, Yu-Chang

    1994-12-01

    A recently developed dynamic learning neural network (DL) has been successfully applied to multispectral imagery classification and parameter inversion. For multispectral imagery classification, noise and edges, such as streets in urban areas and ridges in mountain areas, cause misclassification or unclassification, which reduces the classification rate. From the image-spectrum point of view, noise and edges are the high-frequency components of an image. Therefore, edge detection and noise reduction can be performed by extracting the high-frequency parts of an image, improving the classification rate. Although both noise and edges are high-frequency components, edges carry useful information while noise should be removed. Thus, edges and noise must be separated when the high-frequency parts are extracted. Conventional edge detection or noise reduction methods cannot distinguish edges from noise. A new approach, the wavelet transform, is selected to fulfill this requirement. Edge detection and noise reduction pre-processing using the wavelet transform, and image classification using the dynamic learning neural network, are presented in this paper. The experimental results indicate that this pre-processing improves the classification rate.

  16. Micromotion-induced dynamic effects from a neural probe and brain tissue interface

    NASA Astrophysics Data System (ADS)

    Polanco, Michael; Yoon, Hargsoon; Bawab, Sebastian

    2014-04-01

    Neural probes have the potential to cause injury to surrounding neural cells due to a discrepancy in stiffness values between them and the surrounding brain tissue when subjected to mechanical micromotion of the brain. To evaluate the effects of the mechanical mismatch, a series of dynamic simulations are conducted to better understand the design enhancements required to improve the feasibility of the neural probe. The simulations use a nonlinear transient explicit finite element code, LS-DYNA. A three-dimensional quarter-symmetry finite element model is utilized for the transient analysis to capture the time-dependent dynamic deformations on the brain tissue from the implant as a function of different frequency shapes and stiffness values. When micromotion-induced pulses are applied, reducing the neural probe stiffness by three orders of magnitude leads to up to a 41.6% reduction in stress and a 39.1% reduction in strain. The simulation conditions assume a case where sheath bonding has begun to take place around the probe implantation site, but no full bond to the probe has occurred. The analyses can provide guidance on the materials necessary to design a probe for injury reduction.

  17. Dynamic neural networks based on-line identification and control of high performance motor drives

    NASA Technical Reports Server (NTRS)

    Rubaai, Ahmed; Kotaru, Raj

    1995-01-01

    In the automated and high-tech industries of the future, there will be a need for high-performance motor drives in both the low-power and the high-power range. To meet very stringent tracking and regulation demands in the two quadrants of operation, advanced control technologies are of considerable interest and need to be developed. In response, a dynamic learning control architecture with simultaneous on-line identification and control is developed. A feature of the proposed approach is that it efficiently combines the dual tasks of system identification (learning) and adaptive control of nonlinear motor drives into a single operation. This approach therefore not only adapts to uncertainties in the dynamic parameters of the motor drives but also learns about their inherent nonlinearities. In fact, most neural-network-based adaptive control approaches in use have an identification phase entirely separate from the control phase. Because these approaches separate the identification and control modes, they cannot cope with dynamic changes in a controlled process. Extensive simulation studies have been conducted and good performance was observed. The robustness of the neuro-controllers, allowing them to perform efficiently in a noisy environment, is also demonstrated. With this initial success, the principal investigator believes that the proposed approach with the suggested neural structure can be used successfully for the control of high-performance motor drives. Two identification and control topologies based on the model reference adaptive control technique are used in the present analysis. No prior knowledge of load dynamics is assumed in either topology, while the second topology also assumes no knowledge of the motor parameters.

  18. Characterization of the disruption of neural control strategies for dynamic fingertip forces from attractor reconstruction

    PubMed Central

    Valero-Cuevas, Francisco J.

    2017-01-01

    The Strength-Dexterity (SD) test measures the ability of the pulps of the thumb and index finger to compress a compliant and slender spring prone to buckling at low forces (<3N). We know that factors such as aging and neurodegenerative conditions bring deteriorating physiological changes (e.g., at the level of motor cortex, cerebellum, and basal ganglia), which lead to an overall loss of dexterous ability. However, little is known about how these changes reflect upon the dynamics of the underlying biological system. The spring-hand system exhibits nonlinear dynamical behavior and here we characterize the dynamical behavior of the phase portraits using attractor reconstruction. Thirty participants performed the SD test: 10 young adults, 10 older adults, and 10 older adults with Parkinson’s disease (PD). We used delayed embedding of the applied force to reconstruct its attractor. We characterized the distribution of points of the phase portraits by their density (number of distant points and interquartile range) and geometric features (trajectory length and size). We find phase portraits from older adults exhibit more distant points (p = 0.028) than young adults and participants with PD have larger interquartile ranges (p = 0.001), trajectory lengths (p = 0.005), and size (p = 0.003) than their healthy counterparts. The increased size of the phase portraits with healthy aging suggests a change in the dynamical properties of the system, which may represent a weakening of the neural control strategy. In contrast, the distortion of the attractor in PD suggests a fundamental change in the underlying biological system, and disruption of the neural control strategy. This ability to detect differences in the biological mechanisms of dexterity in healthy and pathological aging provides a simple means to assess their disruption in neurodegenerative conditions and justifies further studies to understand the link with the physiological changes. PMID:28192482
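
    A minimal sketch of the delayed-embedding step and two of the phase-portrait descriptors mentioned above is given below; the delay, embedding dimension, and exact metric definitions are illustrative assumptions rather than the study's parameters.

    import numpy as np

    def delay_embed(force, delay=10, dim=3):
        # Reconstruct an attractor from a scalar force recording by delayed embedding.
        n = len(force) - (dim - 1) * delay
        return np.column_stack([force[i * delay : i * delay + n] for i in range(dim)])

    def portrait_descriptors(points):
        # Trajectory length, plus interquartile range of distances from the portrait centroid.
        length = np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1))
        dist = np.linalg.norm(points - points.mean(axis=0), axis=1)
        return length, np.percentile(dist, 75) - np.percentile(dist, 25)

    # Example: a synthetic, noisy, slowly oscillating fingertip-force record.
    t = np.linspace(0.0, 30.0, 3000)
    force = 2.0 + 0.3 * np.sin(2 * np.pi * 0.8 * t) + 0.05 * np.random.randn(t.size)
    print(portrait_descriptors(delay_embed(force)))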

  19. Dynamics of coupled vortices in perpendicular field

    SciTech Connect

    Jain, Shikha; Novosad, Valentyn Fradin, Frank Y.; Pearson, John E.; Bader, Samuel D.

    2014-02-24

    We explore the coupling mechanism of two magnetic vortices in the presence of a perpendicular bias field by pre-selecting the polarity combinations using the resonant-spin-ordering approach. First, out of the four vortex polarity combinations (two of which are degenerate), three stable core polarity states are achieved by lifting the degeneracy of one of the states. Second, the response of the stiffness constant for the vortex pair (similar polarity) in perpendicular bias is found to be asymmetric around the zero field, in contrast to the response obtained from a single vortex core. Finally, the collective response of the system for antiparallel core polarities is symmetric around zero bias. The vortex core whose polarization is opposite to the bias field dominates the response.

  20. Exploring scalar field dynamics with Gaussian processes

    SciTech Connect

    Nair, Remya; Jhingan, Sanjay; Jain, Deepak E-mail: sanjay.jhingan@gmail.com

    2014-01-01

    The origin of the accelerated expansion of the Universe remains an unsolved mystery in Cosmology. In this work we consider a spatially flat Friedmann-Robertson-Walker (FRW) Universe with non-relativistic matter and a single scalar field contributing to the energy density of the Universe. Properties of this scalar field, such as the potential, kinetic energy, and equation of state, are reconstructed from Supernovae and BAO data using Gaussian processes. We also reconstruct energy conditions and kinematic variables of expansion, such as the jerk and the slow-roll parameter. We find that the reconstructed scalar field variables and the kinematic quantities are consistent with a flat ΛCDM Universe. Further, we find that the null energy condition is satisfied for the redshift range of the Supernovae data considered in the paper, but the strong energy condition is violated.
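
    A toy version of the Gaussian-process reconstruction step might look like the sketch below; the synthetic Hubble-diagram data, kernel choice, and scikit-learn API are stand-ins, and the actual analysis additionally propagates GP derivatives to obtain the scalar-field quantities.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Synthetic stand-in for a supernova Hubble diagram: distance modulus mu(z) with noise.
    rng = np.random.default_rng(1)
    z = np.sort(rng.uniform(0.01, 1.5, 80))
    mu = 5 * np.log10((1 + z) * z * 4285.0) + 25 + rng.normal(0, 0.15, z.size)  # crude toy model

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5) + WhiteKernel(0.02),
                                  normalize_y=True).fit(z[:, None], mu)
    z_grid = np.linspace(0.01, 1.5, 200)
    mu_mean, mu_std = gp.predict(z_grid[:, None], return_std=True)
    # Derivatives of the reconstructed mu(z), needed for H(z) and the scalar-field quantities,
    # can then be taken from the smooth mean curve, e.g. with np.gradient(mu_mean, z_grid).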

  1. Dynamical mean field solution of the Bose-Hubbard model.

    PubMed

    Anders, Peter; Gull, Emanuel; Pollet, Lode; Troyer, Matthias; Werner, Philipp

    2010-08-27

    We present the effective action and self-consistency equations for the bosonic dynamical mean field approximation to the bosonic Hubbard model and show that it provides remarkably accurate phase diagrams and correlation functions. To solve the bosonic dynamical mean field equations, we use a continuous-time Monte Carlo method for bosonic impurity models based on a diagrammatic expansion in the hybridization and condensate coupling. This method is readily generalized to bosonic mixtures, spinful bosons, and Bose-Fermi mixtures.

  2. Brownian dynamics of charged particles in a constant magnetic field

    SciTech Connect

    Hou, L. J.; Piel, A.; Miskovic, Z. L.; Shukla, P. K.

    2009-05-15

    Numerical algorithms are proposed for simulating the Brownian dynamics of charged particles in an external magnetic field, taking into account the Brownian motion of charged particles, the damping effect, and the effect of the magnetic field self-consistently. Performance of these algorithms is tested in terms of their accuracy and long-time stability by using a three-dimensional Brownian oscillator model with a constant magnetic field. Step-by-step recipes for implementing these algorithms are given in detail. It is expected that these algorithms can be directly used to study particle dynamics in various dispersed systems in the presence of a magnetic field, including polymer solutions, colloidal suspensions, and, particularly, complex (dusty) plasmas. The proposed algorithms can also be used as a thermostat in conventional molecular dynamics simulations in the presence of a magnetic field.
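
    The abstract does not reproduce the algorithms themselves, so the sketch below only fixes the physical setup with a naive Euler-Maruyama Langevin step that includes the Lorentz force; the proposed algorithms are designed precisely to improve on the accuracy and long-time stability of such a simple scheme, and all parameter values here are arbitrary.

    import numpy as np

    def langevin_magnetic_step(x, v, dt, q=1.0, m=1.0, gamma=0.5, kT=1.0,
                               B=np.array([0.0, 0.0, 1.0]), rng=np.random.default_rng()):
        # One naive Euler-Maruyama step for a charged Brownian particle in a constant B field:
        # Lorentz force + friction + thermal kicks, all per unit mass.
        noise = np.sqrt(2.0 * gamma * kT / m / dt) * rng.standard_normal(3)
        a = (q / m) * np.cross(v, B) - gamma * v + noise
        v_new = v + a * dt
        x_new = x + v_new * dt
        return x_new, v_new

    # Example: trace a single particle for a few gyro-periods.
    x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
    trajectory = []
    for _ in range(2000):
        x, v = langevin_magnetic_step(x, v, dt=0.01)
        trajectory.append(x.copy())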

  3. Controlled Payload Release by Magnetic Field Triggered Neural Stem Cell Destruction for Malignant Glioma Treatment

    PubMed Central

    Muroski, Megan E.; Morshed, Ramin A.; Cheng, Yu; Vemulkar, Tarun; Mansell, Rhodri; Han, Yu; Zhang, Lingjiao; Aboody, Karen S.; Cowburn, Russell P.; Lesniak, Maciej S.

    2016-01-01

    Stem cells have recently garnered attention as drug and particle carriers to sites of tumors, due to their natural ability to track to the site of interest. Specifically, neural stem cells (NSCs) have been shown to be a promising candidate for delivering therapeutics to malignant glioma, a primary brain tumor that is not curable by current treatments and is inevitably fatal. In this article, we demonstrate that NSCs are able to internalize 2 μm magnetic discs (SD) without affecting the health of the cells. The SD can then be remotely triggered in an applied 1 T rotating magnetic field to deliver a payload. Furthermore, we use this NSC-SD delivery system to deliver the SD themselves as a therapeutic agent to mechanically destroy glioma cells. NSCs were incubated with the SD overnight before treatment with a 1 T rotating magnetic field to trigger the SD release. The potential timed release effects of the magnetic particles were tested with migration assays, confocal microscopy and immunohistochemistry for apoptosis. After the magnetic field triggered SD release, glioma cells were added and allowed to internalize the particles. Once internalized, another dose of the magnetic field treatment was administered to trigger mechanically induced apoptotic cell death of the glioma cells by the rotating SD. We are able to determine that NSC-SD and magnetic field treatment can achieve over 50% glioma cell death when loaded at 50 SD/cell, making this a promising therapeutic for the treatment of glioma. PMID:26734932

  4. Controlled Payload Release by Magnetic Field Triggered Neural Stem Cell Destruction for Malignant Glioma Treatment.

    PubMed

    Muroski, Megan E; Morshed, Ramin A; Cheng, Yu; Vemulkar, Tarun; Mansell, Rhodri; Han, Yu; Zhang, Lingjiao; Aboody, Karen S; Cowburn, Russell P; Lesniak, Maciej S

    2016-01-01

    Stem cells have recently garnered attention as drug and particle carriers to sites of tumors, due to their natural ability to track to the site of interest. Specifically, neural stem cells (NSCs) have been shown to be a promising candidate for delivering therapeutics to malignant glioma, a primary brain tumor that is not curable by current treatments and is inevitably fatal. In this article, we demonstrate that NSCs are able to internalize 2 μm magnetic discs (SD) without affecting the health of the cells. The SD can then be remotely triggered in an applied 1 T rotating magnetic field to deliver a payload. Furthermore, we use this NSC-SD delivery system to deliver the SD themselves as a therapeutic agent to mechanically destroy glioma cells. NSCs were incubated with the SD overnight before treatment with a 1 T rotating magnetic field to trigger the SD release. The potential timed release effects of the magnetic particles were tested with migration assays, confocal microscopy and immunohistochemistry for apoptosis. After the magnetic field triggered SD release, glioma cells were added and allowed to internalize the particles. Once internalized, another dose of the magnetic field treatment was administered to trigger mechanically induced apoptotic cell death of the glioma cells by the rotating SD. We are able to determine that NSC-SD and magnetic field treatment can achieve over 50% glioma cell death when loaded at 50 SD/cell, making this a promising therapeutic for the treatment of glioma.

  5. Geometric properties-dependent neural synchrony modulated by extracellular subthreshold electric field

    NASA Astrophysics Data System (ADS)

    Wei, Xile; Si, Kaili; Yi, Guosheng; Wang, Jiang; Lu, Meili

    2016-07-01

    In this paper, we use a reduced two-compartment neuron model to investigate the interaction between extracellular subthreshold electric field and synchrony in small world networks. It is observed that network synchronization is closely related to the strength of electric field and geometric properties of the two-compartment model. Specifically, increasing the electric field induces a gradual improvement in network synchrony, while increasing the geometric factor results in an abrupt decrease in synchronization of network. In addition, increasing electric field can make the network become synchronous from asynchronous when the geometric parameter is set to a given value. Furthermore, it is demonstrated that network synchrony can also be affected by the firing frequency and dynamical bifurcation feature of single neuron. These results highlight the effect of weak field on network synchrony from the view of biophysical model, which may contribute to further understanding the effect of electric field on network activity.

  6. dFasArt: dynamic neural processing in FasArt model.

    PubMed

    Cano-Izquierdo, Jose-Manuel; Almonacid, Miguel; Pinzolas, Miguel; Ibarrola, Julio

    2009-05-01

    The temporal character of the input is, generally, not taken into account in the neural models. This paper presents an extension of the FasArt model focused on the treatment of temporal signals. FasArt model is proposed as an integration of the characteristic elements of the Fuzzy System Theory in an ART architecture. A duality between the activation concept and membership function is established. FasArt maintains the structure of the Fuzzy ARTMAP architecture, implying a static character since the dynamic response of the input is not considered. The proposed novel model, dynamic FasArt (dFasArt), uses dynamic equations for the processing stages of FasArt: activation, matching and learning. The new formulation of dFasArt includes time as another characteristic of the input. This allows the activation of the units to have a history-dependent character instead of being only a function of the last input value. Therefore, dFasArt model is robust to spurious values and noisy inputs. As experimental work, some cases have been used to check the robustness of dFasArt. A possible application has been proposed for the detection of variations in the system dynamics.

  7. Reconstructing Protein Structures by Neural Network Pairwise Interaction Fields and Iterative Decoy Set Construction

    PubMed Central

    Mirabello, Claudio; Adelfio, Alessandro; Pollastri, Gianluca

    2014-01-01

    Predicting the fold of a protein from its amino acid sequence is one of the grand problems in computational biology. While there has been progress towards a solution, especially when a protein can be modelled based on one or more known structures (templates), in the absence of templates, even the best predictions are generally much less reliable. In this paper, we present an approach for predicting the three-dimensional structure of a protein from the sequence alone, when templates of known structure are not available. This approach relies on a simple reconstruction procedure guided by a novel knowledge-based evaluation function implemented as a class of artificial neural networks that we have designed: Neural Network Pairwise Interaction Fields (NNPIF). This evaluation function takes into account the contextual information for each residue and is trained to identify native-like conformations from non-native-like ones by using large sets of decoys as a training set. The training set is generated and then iteratively expanded during successive folding simulations. As NNPIF are fast at evaluating conformations, thousands of models can be processed in a short amount of time, and clustering techniques can be adopted for model selection. Although the results we present here are very preliminary, we consider them to be promising, with predictions being generated at state-of-the-art levels in some of the cases. PMID:24970210

  8. Multi-bump solutions in a neural field model with external inputs

    NASA Astrophysics Data System (ADS)

    Ferreira, Flora; Erlhagen, Wolfram; Bicho, Estela

    2016-07-01

    We study the conditions for the formation of multiple regions of high activity or "bumps" in a one-dimensional, homogeneous neural field with localized inputs. Stable multi-bump solutions of the integro-differential equation have been proposed as a model of a neural population representation of remembered external stimuli. We apply a class of oscillatory coupling functions and first derive criteria to the input width and distance, which relate to the synaptic couplings that guarantee the existence and stability of one and two regions of high activity. These input-induced patterns are attracted by the corresponding stable one-bump and two-bump solutions when the input is removed. We then extend our analytical and numerical investigation to N-bump solutions showing that the constraints on the input shape derived for the two-bump case can be exploited to generate a memory of N > 2 localized inputs. We discuss the pattern formation process when either the conditions on the input shape are violated or when the spatial ranges of the excitatory and inhibitory connections are changed. An important aspect for applications is that the theoretical findings allow us to determine for a given coupling function the maximum number of localized inputs that can be stored in a given finite interval.
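
    A minimal numerical sketch of such a field, with an oscillatory coupling kernel and two transient localized inputs, is shown below; the kernel form, parameters, and input shapes are illustrative assumptions, not the values derived in the paper.

    import numpy as np

    L, n, dt = 40.0, 800, 0.05
    x = np.linspace(-L / 2, L / 2, n)
    dx = x[1] - x[0]

    def kernel(d, A=1.0, b=0.3, alpha=1.0):
        # Oscillatory coupling: an exponentially decaying, oscillating function of distance.
        return A * np.exp(-b * np.abs(d)) * (b * np.sin(np.abs(alpha * d)) + np.cos(alpha * d))

    w = kernel(x[:, None] - x[None, :])
    f = lambda u: 1.0 / (1.0 + np.exp(-5.0 * (u - 1.0)))            # firing-rate nonlinearity
    inputs = 3.0 * (np.exp(-(x + 8) ** 2) + np.exp(-(x - 8) ** 2))  # two localized inputs

    u = -1.0 * np.ones(n)                                           # resting state
    for step in range(4000):
        I = inputs if step < 1000 else 0.0                          # inputs removed midway
        u += dt * (-u + w @ f(u) * dx + I)
    # With suitable parameters, two self-sustained bumps near x = -8 and x = +8 remain
    # after the input is removed, which is the memory mechanism discussed in the abstract.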

  9. Neural dynamics in Parkinsonian brain: The boundary between synchronized and nonsynchronized dynamics

    NASA Astrophysics Data System (ADS)

    Park, Choongseok; Worth, Robert M.; Rubchinsky, Leonid L.

    2011-04-01

    Synchronous oscillatory dynamics is frequently observed in the human brain. We analyze the fine temporal structure of phase-locking in a realistic network model and match it with the experimental data from Parkinsonian patients. We show that the experimentally observed intermittent synchrony can be generated just by moderately increased coupling strength in the basal ganglia circuits due to the lack of dopamine. Comparison of the experimental and modeling data suggests that brain activity in Parkinson's disease resides in the large boundary region between synchronized and nonsynchronized dynamics. Being on the edge of synchrony may allow for easy formation of transient neuronal assemblies.

  10. Dynamically important magnetic fields near accreting supermassive black holes.

    PubMed

    Zamaninasab, M; Clausen-Brown, E; Savolainen, T; Tchekhovskoy, A

    2014-06-05

    Accreting supermassive black holes at the centres of active galaxies often produce 'jets'--collimated bipolar outflows of relativistic particles. Magnetic fields probably play a critical role in jet formation and in accretion disk physics. A dynamically important magnetic field was recently found near the Galactic Centre black hole. If this is common and if the field continues to near the black hole event horizon, disk structures will be affected, invalidating assumptions made in standard models. Here we report that jet magnetic field and accretion disk luminosity are tightly correlated over seven orders of magnitude for a sample of 76 radio-loud active galaxies. We conclude that the jet-launching regions of these radio-loud galaxies are threaded by dynamically important fields, which will affect the disk properties. These fields obstruct gas infall, compress the accretion disk vertically, slow down the disk rotation by carrying away its angular momentum in an outflow and determine the directionality of jets.

  11. 3-D components of a biological neural network visualized in computer generated imagery. I - Macular receptive field organization

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.; Cutler, Lynn; Meyer, Glenn; Lam, Tony; Vaziri, Parshaw

    1990-01-01

    Computer-assisted, 3-dimensional reconstructions of macular receptive fields and of their linkages into a neural network have revealed new information about macular functional organization. Both type I and type II hair cells are included in the receptive fields. The fields are rounded, oblong, or elongated, but gradations between categories are common. Cell polarizations are divergent. Morphologically, each calyx of oblong and elongated fields appears to be an information processing site. Intrinsic modulation of information processing is extensive and varies with the kind of field. Each reconstructed field differs in detail from every other, suggesting that an element of randomness is introduced developmentally and contributes to endorgan adaptability.

  12. A Novel Nonparametric Approach for Neural Encoding and Decoding Models of Multimodal Receptive Fields.

    PubMed

    Agarwal, Rahul; Chen, Zhe; Kloosterman, Fabian; Wilson, Matthew A; Sarma, Sridevi V

    2016-07-01

    Pyramidal neurons recorded from the rat hippocampus and entorhinal cortex, such as place and grid cells, have diverse receptive fields, which are either unimodal or multimodal. Spiking activity from these cells encodes information about the spatial position of a freely foraging rat. At fine timescales, a neuron's spike activity also depends significantly on its own spike history. However, due to limitations of current parametric modeling approaches, it remains a challenge to estimate complex, multimodal neuronal receptive fields while incorporating spike history dependence. Furthermore, efforts to decode the rat's trajectory in one- or two-dimensional space from hippocampal ensemble spiking activity have mainly focused on spike history-independent neuronal encoding models. In this letter, we address these two important issues by extending a recently introduced nonparametric neural encoding framework that allows modeling both complex spatial receptive fields and spike history dependencies. Using this extended nonparametric approach, we develop novel algorithms for decoding a rat's trajectory based on recordings of hippocampal place cells and entorhinal grid cells. Results show that both encoding and decoding models derived from our new method performed significantly better than state-of-the-art encoding and decoding models on 6 minutes of test data. In addition, our model's performance remains invariant to the apparent modality of the neuron's receptive field.

  13. Neural spike-timing patterns vary with sound shape and periodicity in three auditory cortical fields.

    PubMed

    Lee, Christopher M; Osman, Ahmad F; Volgushev, Maxim; Escabí, Monty A; Read, Heather L

    2016-04-01

    Mammals perceive a wide range of temporal cues in natural sounds, and the auditory cortex is essential for their detection and discrimination. The rat primary (A1), ventral (VAF), and caudal suprarhinal (cSRAF) auditory cortical fields have separate thalamocortical pathways that may support unique temporal cue sensitivities. To explore this, we record responses of single neurons in the three fields to variations in envelope shape and modulation frequency of periodic noise sequences. Spike rate, relative synchrony, and first-spike latency metrics have previously been used to quantify neural sensitivities to temporal sound cues; however, such metrics do not measure absolute spike timing of sustained responses to sound shape. To address this, in this study we quantify two forms of spike-timing precision, jitter, and reliability. In all three fields, we find that jitter decreases logarithmically with increase in the basis spline (B-spline) cutoff frequency used to shape the sound envelope. In contrast, reliability decreases logarithmically with increase in sound envelope modulation frequency. In A1, jitter and reliability vary independently, whereas in ventral cortical fields, jitter and reliability covary. Jitter time scales increase (A1 < VAF < cSRAF) and modulation frequency upper cutoffs decrease (A1 > VAF > cSRAF) with ventral progression from A1. These results suggest a transition from independent encoding of shape and periodicity sound cues on short time scales in A1 to a joint encoding of these same cues on longer time scales in ventral nonprimary cortices.
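
    The two precision measures can be illustrated with a toy computation; the definitions below (trial-averaged standard deviation of matched spike times for jitter, and mean pairwise correlation of binned spike counts for reliability) are common choices and are assumptions about, not reproductions of, the metrics used in the study.

    import numpy as np

    def jitter(spike_times_per_trial):
        # SD of the k-th spike time across trials, averaged over k: a simple jitter estimate.
        n_spikes = min(len(t) for t in spike_times_per_trial)
        matched = np.array([t[:n_spikes] for t in spike_times_per_trial])
        return matched.std(axis=0).mean()

    def reliability(spike_counts):
        # Mean pairwise correlation between binned spike-count vectors from different trials.
        c = np.corrcoef(spike_counts)
        return np.nanmean(c[np.triu_indices_from(c, k=1)])

    # Toy data: 10 trials of spikes locked to a 5 Hz envelope with small Gaussian timing noise.
    rng = np.random.default_rng(2)
    trials = [np.sort(np.arange(0.05, 1.0, 0.2) + rng.normal(0, 0.005, 5)) for _ in range(10)]
    counts = np.array([np.histogram(t, bins=np.arange(0, 1.01, 0.01))[0] for t in trials])
    print(jitter(trials), reliability(counts.astype(float)))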

  14. Phase reduction analysis of coupled neural oscillators: application to epileptic seizure dynamics

    NASA Astrophysics Data System (ADS)

    Takeshita, Daisuke; Sato, Yasuomi; Bahar, Sonya

    2006-03-01

    Epileptic seizures are generally held to result from excessive, synchronized neural activity. To investigate how seizures initiate, we develop a model of a neocortical network based on a model suggested by Wilson [1]. We simulate the effect of the potassium channel blocker 4-aminopyridine, which is often used in experiments to induce epileptic seizures, by decreasing the conductance of the potassium channels (gK) in neurons in our model. We applied phase reduction to the Wilson model to study how gK in the model affects the stability of the phase difference. At a normal value of gK, the stable phase difference is small, but the neurons are not exactly in phase. At low gK, in-phase and out-of-phase firing patterns become simultaneously stable. We constructed a network of 20 by 20 neurons. When gK was decreased to zero, a dramatic increase in the amplitude of the mean field was observed. This is due to the fact that in-phase firing becomes stable at low gK. The pattern was similar to the local field potential in 4-aminopyridine-induced seizures. Therefore, it was concluded that the neural activity in drug-induced seizures may be caused by a bifurcation in the stable phase differences between neurons. [1] Wilson H.R., J. Theor. Biol. (1999) 200, 375-388 [2] Ermentrout, G.B. and Kopell, N., SIAM J. Math. Anal. (1984), 215-237

  15. Dynamic transcriptional signature and cell fate analysis reveals plasticity of individual neural plate border cells.

    PubMed

    Roellig, Daniela; Tan-Cabugao, Johanna; Esaian, Sevan; Bronner, Marianne E

    2017-03-29

    The 'neural plate border' of vertebrate embryos contains precursors of neural crest and placode cells, both defining vertebrate characteristics. How these lineages segregate from neural and epidermal fates has been a matter of debate. We address this by performing a fine-scale quantitative temporal analysis of transcription factor expression in the neural plate border of chick embryos. The results reveal significant overlap of transcription factors characteristic of multiple lineages in individual border cells from gastrula through neurula stages. Cell fate analysis using a Sox2 (neural) enhancer reveals that cells that are initially Sox2+ can contribute not only to the neural tube but also to neural crest and epidermis. Moreover, modulating levels of Sox2 or Pax7 alters the apportionment of neural tube versus neural crest fates. Our results resolve a long-standing question and suggest that many individual border cells maintain the ability to contribute to multiple ectodermal lineages until or beyond neural tube closure.

  16. Monitoring the Earth's Dynamic Magnetic Field

    USGS Publications Warehouse

    Love, Jeffrey J.; Applegate, David; Townshend, John B.

    2008-01-01

    The mission of the U.S. Geological Survey's Geomagnetism Program is to monitor the Earth's magnetic field. Using ground-based observatories, the Program provides continuous records of magnetic field variations covering long timescales; disseminates magnetic data to various governmental, academic, and private institutions; and conducts research into the nature of geomagnetic variations for purposes of scientific understanding and hazard mitigation. The program is an integral part of the U.S. Government's National Space Weather Program (NSWP), which also includes programs in the National Aeronautics and Space Administration (NASA), the Department of Defense (DOD), the National Oceanic and Atmospheric Administration (NOAA), and the National Science Foundation (NSF). The NSWP works to provide timely, accurate, and reliable space weather warnings, observations, specifications, and forecasts, and its work is important for the U.S. economy and national security. Please visit the National Geomagnetism Program's website, http://geomag.usgs.gov, where you can learn more about the Program and the science of geomagnetism. You can find additional related information at the Intermagnet website, http://www.intermagnet.org.

  17. Laminar Neural Field Model of Laterally Propagating Waves of Orientation Selectivity

    PubMed Central

    2015-01-01

    We construct a laminar neural-field model of primary visual cortex (V1) consisting of a superficial layer of neurons that encode the spatial location and orientation of a local visual stimulus coupled to a deep layer of neurons that only encode spatial location. The spatially-structured connections in the deep layer support the propagation of a traveling front, which then drives propagating orientation-dependent activity in the superficial layer. Using a combination of mathematical analysis and numerical simulations, we establish that the existence of a coherent orientation-selective wave relies on the presence of weak, long-range connections in the superficial layer that couple cells of similar orientation preference. Moreover, the wave persists in the presence of feedback from the superficial layer to the deep layer. Our results are consistent with recent experimental studies that indicate that deep and superficial layers work in tandem to determine the patterns of cortical activity observed in vivo. PMID:26491877

  18. A dynamic model of Venus's gravity field

    NASA Technical Reports Server (NTRS)

    Kiefer, W. S.; Richards, M. A.; Hager, B. H.; Bills, B. G.

    1984-01-01

    Unlike Earth, long wavelength gravity anomalies and topography correlate well on Venus. Venus's admittance curve from spherical harmonic degree 2 to 18 is inconsistent with either Airy or Pratt isostasy, but is consistent with dynamic support from mantle convection. A model using whole mantle flow and a high viscosity near surface layer overlying a constant viscosity mantle reproduces this admittance curve. On Earth, the effective viscosity deduced from geoid modeling increases by a factor of 300 from the asthenosphere to the lower mantle. These viscosity estimates may be biased by the neglect of lateral variations in mantle viscosity associated with hot plumes and cold subducted slabs. The different effective viscosity profiles for Earth and Venus may reflect their convective styles, with tectonism and mantle heat transport dominated by hot plumes on Venus and by subducted slabs on Earth. Convection at degree 2 appears much stronger on Earth than on Venus. A degree 2 convective structure may be unstable on Venus, but may have been stabilized on Earth by the insulating effects of the Pangean supercontinental assemblage.

  19. Lattice dynamical wavelet neural networks implemented using particle swarm optimization for spatio-temporal system identification.

    PubMed

    Wei, Hua-Liang; Billings, Stephen A; Zhao, Yifan; Guo, Lingzhong

    2009-01-01

    In this brief, by combining an efficient wavelet representation with a coupled map lattice model, a new family of adaptive wavelet neural networks, called lattice dynamical wavelet neural networks (LDWNNs), is introduced for spatio-temporal system identification. A new orthogonal projection pursuit (OPP) method, coupled with a particle swarm optimization (PSO) algorithm, is proposed for augmenting the proposed network. A novel two-stage hybrid training scheme is developed for constructing a parsimonious network model. In the first stage, by applying the OPP algorithm, significant wavelet neurons are adaptively and successively recruited into the network, where adjustable parameters of the associated wavelet neurons are optimized using a particle swarm optimizer. The resultant network model, obtained in the first stage, however, may be redundant. In the second stage, an orthogonal least squares algorithm is then applied to refine and improve the initially trained network by removing redundant wavelet neurons from the network. An example for a real spatio-temporal system identification problem is presented to demonstrate the performance of the proposed new modeling framework.

  20. Neural substrates and behavioral profiles of romantic jealousy and its temporal dynamics

    PubMed Central

    Sun, Yan; Yu, Hongbo; Chen, Jie; Liang, Jie; Lu, Lin; Zhou, Xiaolin; Shi, Jie

    2016-01-01

    Jealousy is not only a way of experiencing love but also a stabilizer of romantic relationships, although morbid romantic jealousy is maladaptive. Being engaged in a formal romantic relationship can tune one’s romantic jealousy towards a specific target. To date, little is known about how the human brain processes romantic jealousy. Here, by combining scenario-based imagination and functional MRI, we investigated the behavioral and neural correlates of romantic jealousy and their development across stages (before vs. after being in a formal relationship). Romantic jealousy scenarios elicited activations primarily in the basal ganglia (BG) across stages, and were significantly higher after the relationship was established in both the behavioral rating and BG activation. The intensity of romantic jealousy was related to the intensity of romantic happiness, which mainly correlated with ventral medial prefrontal cortex activation. The increase in jealousy across stages was associated with the tendency for interpersonal aggression. These results bridge the gap between the theoretical conceptualization of romantic jealousy and its neural correlates and shed light on the dynamic changes in jealousy. PMID:27273024

  1. The neural circuit and synaptic dynamics underlying perceptual decision-making

    NASA Astrophysics Data System (ADS)

    Liu, Feng

    2015-03-01

    Decision-making with several choice options is central to cognition. To elucidate the neural mechanisms of multiple-choice motion discrimination, we built a continuous recurrent network model to represent a local circuit in the lateral intraparietal area (LIP). The network is composed of pyramidal cells and interneurons, which are directionally tuned. All neurons are reciprocally connected, and the synaptic connectivity strength is heterogeneous. Specifically, we assume two types of inhibitory connectivity to pyramidal cells: opposite-feature and similar-feature inhibition. The model accounted for both physiological and behavioral data from monkey experiments. The network is endowed with slow excitatory reverberation, which subserves the buildup and maintenance of persistent neural activity, and predominant feedback inhibition, which underlies the winner-take-all competition and attractor dynamics. The opposite-feature and similar-feature inhibition have different effects on decision-making, and only their combination allows for a categorical choice among 12 alternatives. Together, our work highlights the importance of structured synaptic inhibition in multiple-choice decision-making processes.

  2. A Dynamic Bioinspired Neural Network Based Real-Time Path Planning Method for Autonomous Underwater Vehicles

    PubMed Central

    2017-01-01

    Real-time path planning for autonomous underwater vehicle (AUV) is a very difficult and challenging task. Bioinspired neural network (BINN) has been used to deal with this problem for its many distinct advantages: that is, no learning process is needed and realization is also easy. However, there are some shortcomings when BINN is applied to AUV path planning in a three-dimensional (3D) unknown environment, including complex computing problem when the environment is very large and repeated path problem when the size of obstacles is bigger than the detection range of sensors. To deal with these problems, an improved dynamic BINN is proposed in this paper. In this proposed method, the AUV is regarded as the core of the BINN and the size of the BINN is based on the detection range of sensors. Then the BINN will move with the AUV and the computing could be reduced. A virtual target is proposed in the path planning method to ensure that the AUV can move to the real target effectively and avoid big-size obstacles automatically. Furthermore, a target attractor concept is introduced to improve the computing efficiency of neural activities. Finally, some experiments are conducted under various 3D underwater environments. The experimental results show that the proposed BINN based method can deal with the real-time path planning problem for AUV efficiently. PMID:28255297
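
    Models in this family typically build on a shunting (Grossberg-type) neural-activity equation defined over a grid of neurons, with the target exciting and obstacles inhibiting the landscape; the sketch below shows one generic update of that kind and is an assumption about the general approach, not the exact equations or parameters of this paper.

    import numpy as np

    A, B, D = 10.0, 1.0, 1.0           # passive decay, upper and lower activity bounds
    grid = np.zeros((20, 20))          # neural activity landscape, one neuron per cell
    external = np.zeros_like(grid)
    external[15, 15] = 20.0            # target neuron receives a large positive input
    external[8:12, 8:12] = -20.0       # obstacle neurons receive large negative input

    def shunting_step(x, I, dt=0.01):
        # One Euler step of a shunting (Grossberg-type) activity equation on the grid;
        # lateral excitation comes from the four neighbours (toroidal wrap for brevity).
        pos = np.maximum(x, 0.0)
        lateral = sum(np.roll(pos, s, axis=a) for a in (0, 1) for s in (-1, 1))
        return x + dt * (-A * x + (B - x) * (np.maximum(I, 0.0) + lateral)
                         - (D + x) * np.maximum(-I, 0.0))

    for _ in range(1000):
        grid = shunting_step(grid, external)
    # A vehicle at any cell then moves to the neighbouring cell with the highest activity,
    # yielding a path that climbs toward the target while being repelled from obstacles.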

  3. Dynamic Surface Control Using Neural Networks for a Class of Uncertain Nonlinear Systems With Input Saturation.

    PubMed

    Chen, Mou; Tao, Gang; Jiang, Bin

    2015-09-01

    In this paper, a dynamic surface control (DSC) scheme is proposed for a class of uncertain strict-feedback nonlinear systems in the presence of input saturation and unknown external disturbance. The radial basis function neural network (RBFNN) is employed to approximate the unknown system function. To efficiently tackle the unknown external disturbance, a nonlinear disturbance observer (NDO) is developed. The developed NDO can relax the known boundary requirement of the unknown disturbance and can guarantee the disturbance estimation error converge to a bounded compact set. Using NDO and RBFNN, the DSC scheme is developed for uncertain nonlinear systems based on a backstepping method. Using a DSC technique, the problem of explosion of complexity inherent in the conventional backstepping method is avoided, which is specially important for designs using neural network approximations. Under the proposed DSC scheme, the ultimately bounded convergence of all closed-loop signals is guaranteed via Lyapunov analysis. Simulation results are given to show the effectiveness of the proposed DSC design using NDO and RBFNN.

  4. A Dynamic Bioinspired Neural Network Based Real-Time Path Planning Method for Autonomous Underwater Vehicles.

    PubMed

    Ni, Jianjun; Wu, Liuying; Shi, Pengfei; Yang, Simon X

    2017-01-01

    Real-time path planning for autonomous underwater vehicle (AUV) is a very difficult and challenging task. Bioinspired neural network (BINN) has been used to deal with this problem for its many distinct advantages: that is, no learning process is needed and realization is also easy. However, there are some shortcomings when BINN is applied to AUV path planning in a three-dimensional (3D) unknown environment, including complex computing problem when the environment is very large and repeated path problem when the size of obstacles is bigger than the detection range of sensors. To deal with these problems, an improved dynamic BINN is proposed in this paper. In this proposed method, the AUV is regarded as the core of the BINN and the size of the BINN is based on the detection range of sensors. Then the BINN will move with the AUV and the computing could be reduced. A virtual target is proposed in the path planning method to ensure that the AUV can move to the real target effectively and avoid big-size obstacles automatically. Furthermore, a target attractor concept is introduced to improve the computing efficiency of neural activities. Finally, some experiments are conducted under various 3D underwater environments. The experimental results show that the proposed BINN based method can deal with the real-time path planning problem for AUV efficiently.

  5. Self: an adaptive pressure arising from self-organization, chaotic dynamics, and neural Darwinism.

    PubMed

    Bruzzo, Angela Alessia; Vimal, Ram Lakhan Pandey

    2007-12-01

    In this article, we establish a model to delineate the emergence of "self" in the brain making recourse to the theory of chaos. Self is considered as the subjective experience of a subject. As essential ingredients of subjective experiences, our model includes wakefulness, re-entry, attention, memory, and proto-experiences. The stability as stated by chaos theory can potentially describe the non-linear function of "self" as sensitive to initial conditions and can characterize it as underlying order from apparently random signals. Self-similarity is discussed as a latent menace of a pathological confusion between "self" and "others". Our test hypothesis is that (1) consciousness might have emerged and evolved from a primordial potential or proto-experience in matter, such as the physical attractions and repulsions experienced by electrons, and (2) "self" arises from chaotic dynamics, self-organization and selective mechanisms during ontogenesis, while emerging post-ontogenically as an adaptive pressure driven by both volume and synaptic-neural transmission and influencing the functional connectivity of neural nets (structure).

  6. Learning Depth from Single Monocular Images Using Deep Convolutional Neural Fields.

    PubMed

    Liu, Fayao; Shen, Chunhua; Lin, Guosheng; Reid, Ian

    2016-10-01

    In this article, we tackle the problem of depth estimation from single monocular images. Compared with depth estimation using multiple images such as stereo depth perception, depth from monocular images is much more challenging. Prior work typically focuses on exploiting geometric priors or additional sources of information, most using hand-crafted features. Recently, there is mounting evidence that features from deep convolutional neural networks (CNN) set new records for various vision applications. On the other hand, considering the continuous characteristic of the depth values, depth estimation can be naturally formulated as a continuous conditional random field (CRF) learning problem. Therefore, here we present a deep convolutional neural field model for estimating depths from single monocular images, aiming to jointly explore the capacity of deep CNN and continuous CRF. In particular, we propose a deep structured learning scheme which learns the unary and pairwise potentials of continuous CRF in a unified deep CNN framework. We then further propose an equally effective model based on fully convolutional networks and a novel superpixel pooling method, which is about 10 times faster, to speedup the patch-wise convolutions in the deep model. With this more efficient model, we are able to design deeper networks to pursue better performance. Our proposed method can be used for depth estimation of general scenes with no geometric priors nor any extra information injected. In our case, the integral of the partition function can be calculated in a closed form such that we can exactly solve the log-likelihood maximization. Moreover, solving the inference problem for predicting depths of a test image is highly efficient as closed-form solutions exist. Experiments on both indoor and outdoor scene datasets demonstrate that the proposed method outperforms state-of-the-art depth estimation approaches.
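
    Schematically, a continuous CRF of this kind combines CNN-predicted unary terms with pairwise smoothness terms. One common form of the energy, written here as indicative notation rather than as the paper's exact formulation, is

        E(\mathbf{y}, \mathbf{x}) = \sum_{p} \big( y_p - z_p(\mathbf{x};\theta) \big)^2
            + \sum_{(p,q) \in \mathcal{N}} \tfrac{1}{2} R_{pq}(\mathbf{x};\beta)\, (y_p - y_q)^2 ,

    where y_p is the depth of superpixel p, z_p is the CNN unary prediction, and R_{pq} >= 0 is a learned pairwise similarity. Because E is quadratic in \mathbf{y}, the partition function \int \exp(-E)\,\mathrm{d}\mathbf{y} is a Gaussian integral with a closed form, which is what makes exact log-likelihood maximization and closed-form inference possible.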

  7. Quantum electron-vibrational dynamics at finite temperature: Thermo field dynamics approach.

    PubMed

    Borrelli, Raffaele; Gelin, Maxim F

    2016-12-14

    Quantum electron-vibrational dynamics in molecular systems at finite temperature is described using an approach based on the thermo field dynamics theory. This formulation treats temperature effects in the Hilbert space without introducing the Liouville space. A comparison with the theoretically equivalent density matrix formulation shows the key numerical advantages of the present approach. The solution of thermo field dynamics equations with a novel technique for the propagation of tensor trains (matrix product states) is discussed. Numerical applications to model spin-boson systems show that the present approach is a promising tool for the description of quantum dynamics of complex molecular systems at finite temperature.

  8. Neural Dynamics of Emotional Salience Processing in Response to Voices during the Stages of Sleep.

    PubMed

    Chen, Chenyi; Sung, Jia-Ying; Cheng, Yawei

    2016-01-01

    Sleep has been related to emotional functioning. However, the extent to which emotional salience is processed during sleep is unknown. To address this concern, we investigated night sleep in healthy adults regarding brain reactivity to the emotionally (happily, fearfully) spoken meaningless syllables "dada", along with correspondingly synthesized nonvocal sounds. Electroencephalogram (EEG) signals were continuously acquired during an entire night of sleep while we applied a passive auditory oddball paradigm. During all stages of sleep, mismatch negativity (MMN) in response to emotional syllables, which is an index of emotional salience processing of voices, was detected. In contrast, MMN to acoustically matching nonvocal sounds was undetected during Sleep Stages 2 and 3 as well as rapid eye movement (REM) sleep. Post-MMN positivity (PMP) was identified with larger amplitudes during Stage 3, and at earlier latencies during REM sleep, relative to wakefulness. These findings clearly demonstrate the neural dynamics of emotional salience processing during the stages of sleep.

  9. Neural Dynamics of Emotional Salience Processing in Response to Voices during the Stages of Sleep

    PubMed Central

    Chen, Chenyi; Sung, Jia-Ying; Cheng, Yawei

    2016-01-01

    Sleep has been related to emotional functioning. However, the extent to which emotional salience is processed during sleep is unknown. To address this concern, we investigated night sleep in healthy adults regarding brain reactivity to the emotionally (happily, fearfully) spoken meaningless syllables "dada", along with correspondingly synthesized nonvocal sounds. Electroencephalogram (EEG) signals were continuously acquired during an entire night of sleep while we applied a passive auditory oddball paradigm. During all stages of sleep, mismatch negativity (MMN) in response to emotional syllables, which is an index of emotional salience processing of voices, was detected. In contrast, MMN to acoustically matching nonvocal sounds was undetected during Sleep Stages 2 and 3 as well as rapid eye movement (REM) sleep. Post-MMN positivity (PMP) was identified with larger amplitudes during Stage 3, and at earlier latencies during REM sleep, relative to wakefulness. These findings clearly demonstrate the neural dynamics of emotional salience processing during the stages of sleep. PMID:27378870

  10. Strong geomagnetic activity forecast by neural networks under dominant southern orientation of the interplanetary magnetic field

    NASA Astrophysics Data System (ADS)

    Valach, Fridrich; Bochníček, Josef; Hejda, Pavel; Revallo, Miloš

    2014-02-01

    The paper deals with the relation of the southern orientation of the north-south component Bz of the interplanetary magnetic field to geomagnetic activity (GA), and a method is then suggested that uses these findings to forecast potentially dangerous high GA. We have found that, on a day with very high GA, hourly averages of Bz with a negative sign typically occur at least 16 times. Since it is very difficult to estimate the orientation of Bz in the immediate vicinity of the Earth one day or even a few days in advance, we suggest using a neural-network model that assumes the worse of the two possibilities when forecasting the danger of high GA - the dominant southern orientation of the interplanetary magnetic field. The input quantities of the proposed model were information about X-ray flares and type II and IV radio bursts, as well as information about coronal mass ejections (CMEs). In comparing the GA forecasts with observations, we obtain values of the Hanssen-Kuiper skill score ranging from 0.463 to 0.727, which are typical values for similar space-weather forecasts. The proposed model provides forecasts of potentially dangerous high geomagnetic activity should the interplanetary CME (ICME), the originator of geomagnetic storms, hit the Earth under the most unfavorable configuration of cosmic magnetic fields. We cannot know in advance whether the unfavorable configuration is going to occur or not; we only know that it will occur with a probability of 31%.
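    For reference, the Hanssen-Kuiper skill score quoted above (also known as the true skill statistic) is computed from the 2x2 contingency table of forecasts versus observed events:

    ```latex
    % Hanssen-Kuiper skill score (true skill statistic) from a 2x2 contingency table:
    % a = hits, b = false alarms, c = misses, d = correct rejections.
    \mathrm{HK} \;=\; \frac{a}{a + c} \;-\; \frac{b}{b + d}
    \;=\; \text{(hit rate)} \;-\; \text{(false-alarm rate)} .
    ```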

  11. Neuroplasticity in dynamic neural networks comprised of neurons attached to adaptive base plate.

    PubMed

    Joghataie, Abdolreza; Shafiei Dizaji, Mehrdad

    2016-03-01

    In this paper, a learning algorithm is developed for Dynamic Plastic Continuous Neural Networks (DPCNNs) to improve their learning of highly nonlinear, time-dependent problems. A DPCNN is comprised of a base medium, which is nonlinear and plastic, and a number of neurons that are attached to the base by wire-like connections similar to perceptrons. Information is distributed within a DPCNN gradually, through a wave-propagation mechanism. While a DPCNN is adaptive through its connection weights, the material properties of its base medium can also be adjusted to improve its learning. The material of the medium is plastic and can contribute to memorizing the input-response history, similar to neuroplasticity in the natural brain. The results obtained from numerical simulation of DPCNNs have been encouraging. Nonlinear plastic finite element modeling has been used for numerical simulation of the dynamic behavior and wave propagation in the medium. Two significant differences between DPCNNs and other types of neural networks are that: (1) there is a medium to which the neurons are attached, and this medium can contribute to the learning; (2) the input layer is not made of nodes but is an edge terminal capable of receiving a continuous function over the input edge, though it is discretized in the finite element model. A DPCNN reduces to a perceptron if the medium is removed and the neurons are connected to each other only by wires. Continuity of the input lets the discretization of data take place intrinsically within the DPCNN instead of being applied by the user.

  12. Neural network modelling and dynamical system theory: are they relevant to study the governing dynamics of association football players?

    PubMed

    Dutt-Mazumder, Aviroop; Button, Chris; Robins, Anthony; Bartlett, Roger

    2011-12-01

    Recent studies have explored the organization of player movements in team sports using a range of statistical tools. However, the factors that best explain the performance of association football teams remain elusive. Arguably, this is due to the high-dimensional behavioural outputs that illustrate the complex, evolving configurations typical of team games. According to dynamical systems analysts, movement patterns in team sports exhibit nonlinear self-organizing features. Nonlinear processing tools (i.e., Artificial Neural Networks, ANNs) are becoming increasingly popular for investigating the coordination of participants in sports competitions. ANNs are well suited to describing high-dimensional data sets with nonlinear attributes; however, limited information exists concerning the processes required to apply ANNs. This review investigates the relative value of various ANN learning approaches used in performance analysis of team sports, focusing on potential applications for association football. Sixty-two research sources were summarized and reviewed from electronic literature search engines such as SPORTDiscus, Google Scholar, IEEE Xplore, Scirus, ScienceDirect and Elsevier. Typical ANN learning algorithms can be adapted to perform pattern recognition and pattern classification. In particular, dimensionality reduction by a Kohonen feature map (KFM) can compress chaotic, high-dimensional datasets into low-dimensional, relevant information. Such information would be useful for developing effective training drills that should enhance self-organizing coordination among players. We conclude that ANN-based qualitative analysis is a promising approach to understanding the dynamical attributes of association football players.
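    As an illustration of the dimensionality-reduction step mentioned above, a Kohonen feature map (self-organising map) compresses high-dimensional configuration vectors onto a small grid of prototype units. The sketch below is a generic SOM update rule with assumed dimensions and learning parameters; it is not an implementation from any of the reviewed studies.

    ```python
    import numpy as np

    # Generic Kohonen feature map (SOM) sketch: maps D-dimensional samples onto
    # a small 2D grid of prototype vectors (illustrative parameters only).
    rng = np.random.default_rng(0)
    D = 22 * 2                         # e.g. assumed (x, y) positions of 22 players
    grid_h, grid_w = 10, 10
    weights = rng.normal(size=(grid_h, grid_w, D))
    grid_coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                                       indexing="ij"), axis=-1)

    def train(samples, epochs=20, eta0=0.5, sigma0=3.0):
        """Classic SOM training: find the best-matching unit, pull neighbours towards the sample."""
        global weights
        for epoch in range(epochs):
            eta = eta0 * np.exp(-epoch / epochs)        # decaying learning rate
            sigma = sigma0 * np.exp(-epoch / epochs)    # shrinking neighbourhood
            for x in samples:
                dists = np.linalg.norm(weights - x, axis=-1)
                bmu = np.unravel_index(np.argmin(dists), dists.shape)
                # Gaussian neighbourhood around the best-matching unit on the grid
                g = np.exp(-np.sum((grid_coords - np.array(bmu)) ** 2, axis=-1)
                           / (2.0 * sigma ** 2))
                weights += eta * g[..., None] * (x - weights)

    # Example with synthetic 'player configuration' vectors:
    train(rng.normal(size=(500, D)))
    ```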

  13. Acceleration of adiabatic quantum dynamics in electromagnetic fields

    SciTech Connect

    Masuda, Shumpei; Nakamura, Katsuhiro

    2011-10-15

    We show a method to accelerate quantum adiabatic dynamics of wave functions under electromagnetic field (EMF) by developing the preceding theory [Masuda and Nakamura, Proc. R. Soc. London Ser. A 466, 1135 (2010)]. Treating the orbital dynamics of a charged particle in EMF, we derive the driving field which accelerates quantum adiabatic dynamics in order to obtain the final adiabatic states in any desired short time. The scheme is consolidated by describing a way to overcome possible singularities in both the additional phase and driving potential due to nodes proper to wave functions under EMF. As explicit examples, we exhibit the fast forward of adiabatic squeezing and transport of excited Landau states with nonzero angular momentum, obtaining the result consistent with the transitionless quantum driving applied to the orbital dynamics in EMF.

  14. Electron Dynamics in Nanostructures in Strong Laser Fields

    SciTech Connect

    Kling, Matthias

    2014-09-11

    The goal of our research was to gain deeper insight into the collective electron dynamics in nanosystems in strong, ultrashort laser fields. The laser field strengths were strong enough to extract and accelerate electrons from the nanoparticles and to transiently modify the material's electronic properties. We aimed to observe, with sub-cycle resolution reaching the attosecond time domain, how collective electronic excitations in nanoparticles are formed, how the strong field influences the optical and electrical properties of the nanomaterial, and how the excitations decay in the presence of strong fields.

  15. Nonlinear System Identification and Forecasting of Earthquake Fault Dynamics Using Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Anghel, M.; Ben-Zion, Y.

    2001-12-01

    We analyze quantitatively the dynamic behavior of a class of models (Ben-Zion and Rice [1]) of a discrete heterogeneous strike-slip fault system in a 3D elastic half-space, using Artificial Neural Networks (ANNs). A given model realization is characterized by a set of parameters that describe the dynamics, rheology, property disorder and fault geometry. The experimental data from the system come in the form of a time series of observations {O(x, t_i)}, i = 1, ..., N, where x represents the spatial coordinates of the observation points. The observable O can measure the magnitude of earthquake events or, as chosen in our work, the surface deformations u_i(x), i = 1, 2, 3, that accompany strike-slip faulting accumulated over a time interval T. The fault dynamics is dissipative and, as a result, the actual dimension of the attractor can be expected to be significantly smaller than the space in which the dynamics takes place. Our strategy of system reduction is to search for a few coherent structures, φ_k(x), that dominate the dynamics and to capture the interaction between these coherent structures [2]. The identification of the basic interacting structures is obtained by using the Proper Orthogonal Decomposition (POD) of the average covariance matrix R(x, x') [3]. The eigenvectors of this decomposition provide the coherent structures, while the corresponding eigenvalues measure the average "energy" in each mode. As a measure of the dimensionality of the system we count the number of eigenfunctions that capture 90% of the total "energy". We investigate how this measure depends on the model parameters or the time interval T. The construction of a discrete-time model for the dynamics employs the evolution of the time-dependent modal coefficients a_k(t) in a representation of the form O(x, t) = Σ_k a_k(t) φ_k(x), restricted to a reduced number, K, of coherent structures. The set of values a_k(t), k = 1, ..., K, approximates the state of the system at time t. We use a feed
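    As a generic illustration of the decomposition described above, snapshot POD can be computed from a set of observation snapshots via an eigendecomposition of their covariance, counting the modes needed to capture 90% of the total energy. The array sizes and threshold below are illustrative assumptions, not the data of the study.

    ```python
    import numpy as np

    # Generic snapshot-POD sketch: snapshots[i] is the observable O(x, t_i)
    # flattened over the observation points (illustrative sizes only).
    rng = np.random.default_rng(1)
    n_snapshots, n_points = 200, 500
    snapshots = rng.normal(size=(n_snapshots, n_points))

    mean_field = snapshots.mean(axis=0)
    fluct = snapshots - mean_field                 # remove the mean field

    # Average covariance matrix R(x, x') estimated from the snapshots.
    R = fluct.T @ fluct / n_snapshots

    eigvals, eigvecs = np.linalg.eigh(R)           # eigenvalues in ascending order
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]

    # Number of coherent structures capturing 90% of the total "energy".
    energy_fraction = np.cumsum(eigvals) / eigvals.sum()
    K = int(np.searchsorted(energy_fraction, 0.90) + 1)

    # Modal coefficients a_k(t): projection of each snapshot onto the first K modes.
    a = fluct @ eigvecs[:, :K]
    print(f"{K} modes capture 90% of the energy")
    ```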

  16. Effect of Thermal Gradients Created by Electromagnetic Fields on Cell-Membrane Electroporation Probed by Molecular-Dynamics Simulations

    NASA Astrophysics Data System (ADS)

    Song, J.; Garner, A. L.; Joshi, R. P.

    2017-02-01

    The use of nanosecond-duration pulsed voltages with high-intensity electric fields (~100 kV/cm) is a promising development with many biomedical applications. Electroporation occurs in this regime, and has been attributed to the high fields. However, here we focus on temperature gradients. Our numerical simulations based on molecular dynamics predict the formation of nanopores and water nanowires, but only in the presence of a temperature gradient. Our results suggest a far greater role of temperature gradients in enhancing biophysical responses, including possible neural stimulation by infrared lasers.

  17. A comparative study between nonlinear regression and artificial neural network approaches for modelling wild oat (Avena fatua) field emergence

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Non-linear regression techniques are used widely to fit weed field emergence patterns to soil microclimatic indices using S-type functions. Artificial neural networks present interesting and alternative features for such modeling purposes. In this work, a univariate hydrothermal-time based Weibull m...

  18. Data-driven inference of network connectivity for modeling the dynamics of neural codes in the insect antennal lobe

    PubMed Central

    Shlizerman, Eli; Riffell, Jeffrey A.; Kutz, J. Nathan

    2014-01-01

    The antennal lobe (AL), the olfactory processing center in insects, is able to process stimuli into distinct neural activity patterns, called olfactory neural codes. To model their dynamics, we perform multichannel recordings from the projection neurons in the AL driven by different odorants. We then derive a dynamic neuronal network from the electrophysiological data. The network consists of lateral-inhibitory neurons and excitatory neurons (modeled as firing-rate units), and is capable of producing unique olfactory neural codes for the tested odorants. To construct the network, we (1) design a projection, an odor space, for the neural recordings from the AL, which discriminates between the trajectories of distinct odorants; (2) characterize scent recognition, i.e., decision-making based on olfactory signals; and (3) infer the wiring of the neural circuit, the connectome of the AL. We show that the constructed model is consistent with biological observations, such as contrast enhancement and robustness to noise. The study suggests a data-driven approach to answer a key biological question in identifying how lateral inhibitory neurons can be wired to excitatory neurons to permit robust activity patterns. PMID:25165442
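    As a generic illustration of the class of model described (excitatory projection neurons plus lateral-inhibitory neurons treated as firing-rate units), the sketch below integrates a standard rate equation, τ dr/dt = -r + f(W r + I). The connectivity and sizes are placeholders, not the connectome inferred in the study.

    ```python
    import numpy as np

    # Generic firing-rate network sketch: excitatory and lateral-inhibitory units
    # driven by an odor-dependent input (placeholder wiring, not the inferred AL connectome).
    rng = np.random.default_rng(2)
    n_exc, n_inh = 40, 10
    n = n_exc + n_inh
    tau, dt, t_steps = 20.0, 1.0, 500   # ms, ms, number of integration steps

    W = np.abs(rng.normal(size=(n, n))) / n
    W[:, n_exc:] *= -1.0                # connections from inhibitory units are negative

    def f(u):
        """Threshold-linear rate nonlinearity."""
        return np.maximum(u, 0.0)

    def simulate(odor_input):
        r = np.zeros(n)
        rates = np.empty((t_steps, n))
        for t in range(t_steps):
            r += dt / tau * (-r + f(W @ r + odor_input))
            rates[t] = r
        return rates

    # Two assumed odors drive different random subsets of the excitatory units.
    odor_a = np.concatenate([rng.binomial(1, 0.3, n_exc) * 1.0, np.zeros(n_inh)])
    odor_b = np.concatenate([rng.binomial(1, 0.3, n_exc) * 1.0, np.zeros(n_inh)])
    codes = {name: simulate(inp)[-1] for name, inp in [("A", odor_a), ("B", odor_b)]}
    ```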

  19. Fractional dynamics of charged particles in magnetic fields

    NASA Astrophysics Data System (ADS)

    Coronel-Escamilla, A.; Gómez-Aguilar, J. F.; Alvarado-Méndez, E.; Guerrero-Ramírez, G. V.; Escobar-Jiménez, R. F.

    2016-02-01

    In many physical applications electrons play a relevant role. For example, when a beam of electrons accelerated to relativistic velocities is used as an active medium to generate Free Electron Lasers (FELs), the electrons are not bound to atoms but move freely in a magnetic field. The relaxation time, longitudinal effects, and transverse variations of the optical field are parameters that play an important role in the efficiency of this laser. The electron dynamics in a magnetic field serves as the radiation source coupling to the electric field. The transverse motion of the electrons causes them to either gain energy from or lose energy to the field, depending on the position of the particle with respect to the phase of the external radiation field. Because it is important to know with great certainty the displacement of charged particles in a magnetic field, in this work we study the fractional dynamics of charged particles in magnetic fields. Newton's second law is considered, and the order of the fractional differential equation lies in (0, 1]. Based on the Grünwald-Letnikov (GL) definition, the discretization of fractional differential equations is used to obtain numerical simulations. A comparison between the numerical solutions obtained with Euler's method for the classical case and with the GL definition in the fractional approach demonstrates the good performance of the numerical scheme. Three application examples are shown: a constant magnetic field, a ramp magnetic field, and a harmonic magnetic field. In the first example the results show bistability. Dissipative effects are observed in the system, and the standard dynamics is recovered when the order of the fractional derivative is 1.
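    For reference, the Grünwald-Letnikov definition discretizes a fractional derivative of order α as a weighted sum over the past history of the solution, with binomial weights generated by a simple recursion. The sketch below shows that scheme on an assumed toy forcing term, standing in for the magnetic-field models studied in the paper.

    ```python
    import numpy as np

    # Grünwald-Letnikov discretization of a fractional differential equation
    #   D^alpha x(t) = f(t, x),  0 < alpha <= 1,
    # using the standard recursion for the binomial weights
    #   c_0 = 1,  c_j = c_{j-1} * (1 - (1 + alpha) / j).
    # The right-hand side below is an assumed toy force, not the paper's models.
    alpha, h, n_steps = 0.9, 0.01, 2000

    c = np.empty(n_steps + 1)
    c[0] = 1.0
    for j in range(1, n_steps + 1):
        c[j] = c[j - 1] * (1.0 - (1.0 + alpha) / j)

    def f(t, x):
        return -x + np.cos(t)          # toy forcing term (assumption)

    x = np.zeros(n_steps + 1)
    for k in range(1, n_steps + 1):
        history = np.dot(c[1:k + 1], x[k - 1::-1])   # sum_{j=1}^{k} c_j * x_{k-j}
        x[k] = h ** alpha * f(k * h, x[k - 1]) - history
    ```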

  20. Coherent Dynamics Following Strong Field Ionization of Polyatomic Molecules

    NASA Astrophysics Data System (ADS)

    Konar, Arkaprabha; Shu, Yinan; Lozovoy, Vadim; Jackson, James; Levine, Benjamin; Dantus, Marcos

    2015-03-01

    Molecules, as opposed to atoms, present confounding possibilities of nuclear and electronic motion upon strong field ionization. The dynamics and fragmentation patterns in response to the laser field are structure sensitive; therefore, a molecule cannot simply be treated as a ``bag of atoms'' during field induced ionization. We consider here to what extent molecules retain their molecular identity and properties under strong laser fields. Using time-of-flight mass spectrometry in conjunction with pump-probe techniques we study the dynamical behavior of these molecules, monitoring ion yield modulation caused by intramolecular motions post ionization. The delay scans show that among positional isomers the variations in relative energies, amounting to only a few hundred meVs, influence the dynamical behavior of the molecules despite their having experienced such high fields (V/Å). Ab initio calculations were performed to predict dynamics along with single and multiphoton resonances in the neutral and ionic states. We propose that single electron ionization occurs within an optical cycle with the electron carrying away essentially all of the energy, leaving behind little internal energy in the cation. Evidence for this observation comes from coherent vibrational motion governed by the potential energy surface of the ground state of the cation. Subsequent fragmentation of the cation takes place as a result of further photon absorption modulated by one- and two-photon resonances, which provide sufficient energy to overcome the dissociation energy.

  1. Interrelating anatomical, effective, and functional brain connectivity using propagators and neural field theory

    NASA Astrophysics Data System (ADS)

    Robinson, P. A.

    2012-01-01

    It is shown how to compute effective and functional connection matrices (eCMs and fCMs) from anatomical CMs (aCMs) and corresponding strength-of-connection matrices (sCMs) using propagator methods in which neural interactions play the role of scatterings. This analysis demonstrates how network effects dress the bare propagators (the sCMs) to yield effective propagators (the eCMs) that can be used to compute the covariances customarily used to define fCMs. The results incorporate excitatory and inhibitory connections, multiple structures and populations, asymmetries, time delays, and measurement effects. They can also be postprocessed in the same manner as experimental measurements for direct comparison with data and thereby give insights into the role of coarse-graining, thresholding, and other effects in determining the structure of CMs. The spatiotemporal results show how to generalize CMs to include time delays and how natural network modes give rise to long-range coherence at resonant frequencies. The results are demonstrated using tractable analytic cases via neural field theory of cortical and corticothalamic systems. These also demonstrate close connections between the structure of CMs and proximity to critical points of the system, highlight the importance of indirect links between brain regions and raise the possibility of imaging specific levels of indirect connectivity. Aside from the results presented explicitly here, the expression of the connections among aCMs, sCMs, eCMs, and fCMs in terms of propagators opens the way for propagator theory to be further applied to analysis of connectivity.

  2. A nonlinear dynamics for the scalar field in Randers spacetime

    NASA Astrophysics Data System (ADS)

    Silva, J. E. G.; Maluf, R. V.; Almeida, C. A. S.

    2017-03-01

    We investigate the properties of a real scalar field in the Finslerian Randers spacetime, where the local Lorentz violation is driven by a geometrical background vector. We propose a dynamics for the scalar field by a minimal coupling of the scalar field and the Finsler metric. The coupling is intrinsically defined on the Randers spacetime, and it leads to a non-canonical kinetic term for the scalar field. The nonlinear dynamics can be split into linear and nonlinear regimes, which depend perturbatively on the even and odd powers of the Lorentz-violating parameter, respectively. We analyze the plane-wave solutions and the modified dispersion relations, and it turns out that the spectrum is free of tachyons up to second order.

  3. Dynamical Integration of Language and Behavior in a Recurrent Neural Network for Human-Robot Interaction.

    PubMed

    Yamada, Tatsuro; Murata, Shingo; Arie, Hiroaki; Ogata, Tetsuya

    2016-01-01

    To work cooperatively with humans by using language, robots must not only acquire a mapping between language and their behavior but also autonomously utilize the mapping in appropriate contexts of interactive tasks online. To this end, we propose a novel learning method linking language to robot behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of language and behavior but as sequential data constructed from the actual temporal flow of the task. By doing this, the internal dynamics of the network models both language-behavior relationships and the temporal patterns of interaction. Here, "internal dynamics" refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing its position in the interaction context as its current state, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate behavior in response to a human's linguistic instruction. After learning, the network indeed formed an attractor structure representing both the language-behavior relationships and the task's temporal pattern in its internal dynamics. In the dynamics, language-behavior mapping was achieved by a branching structure. Repetition of the human's instruction and the robot's behavioral response was represented as a cyclic structure, and waiting for a subsequent instruction was represented as a fixed-point attractor. Thanks to this structure, the robot was able to interact online with a human concerning the given task by autonomously switching phases.

  4. Localization and Classification of Paddy Field Pests using a Saliency Map and Deep Convolutional Neural Network

    NASA Astrophysics Data System (ADS)

    Liu, Ziyi; Gao, Junfeng; Yang, Guoguo; Zhang, Huan; He, Yong

    2016-02-01

    We present a pipeline for the visual localization and classification of agricultural pest insects by computing a saliency map and applying deep convolutional neural network (DCNN) learning. First, we used a global contrast region-based approach to compute a saliency map for localizing pest insect objects. Bounding squares containing targets were then extracted, resized to a fixed size, and used to construct a large standard database called Pest ID. This database was then utilized for self-learning of local image features which were, in turn, used for classification by the DCNN. DCNN learning optimized the critical parameters, including the size, number and convolutional stride of the local receptive fields, the dropout ratio and the final loss function. To demonstrate the practical utility of using DCNNs, we explored different architectures by shrinking depth and width, and found effective sizes that can act as alternatives for practical applications. On the test set of paddy field images, our architectures achieved a mean Average Precision (mAP) of 0.951, a significant improvement over previous methods.
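    As a rough illustration of the localization step, a global-contrast saliency map scores each pixel (or region) by its color distance to the rest of the image. The sketch below uses a simple pixel-versus-mean contrast, which is a crude simplification of the region-based contrast approach named in the abstract.

    ```python
    import numpy as np

    def global_contrast_saliency(image):
        """Crude global-contrast saliency: distance of each pixel's color to the image mean.

        `image` is an H x W x 3 float array in [0, 1]. Region-based contrast methods
        refine this idea by comparing segmented regions rather than single pixels.
        """
        mean_color = image.reshape(-1, 3).mean(axis=0)
        saliency = np.linalg.norm(image - mean_color, axis=-1)
        return (saliency - saliency.min()) / (np.ptp(saliency) + 1e-8)

    # Toy usage: a bright square (the 'pest') on a darker background stands out.
    img = np.full((64, 64, 3), 0.2)
    img[20:40, 20:40] = [0.9, 0.8, 0.3]
    sal = global_contrast_saliency(img)
    box = np.argwhere(sal > 0.5)        # candidate bounding region for cropping
    ```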

  5. Biosensing with microbial fuel cells and artificial neural networks: laboratory and field investigations.

    PubMed

    Feng, Yinghua; Harper, Willie F

    2013-11-30

    In this study, microbial fuel cell (MFC)-based biosensing was integrated with artificial neural networks (ANNs) in laboratory and field testing of water samples. Inoculation revealed two types of anode-respiring bacteria (ARB) induction profiles: a relatively slow, gradual profile and a faster profile preceded by a significant lag time. During laboratory testing, the MFCs generated well-organized, normally distributed profiles, but during field experiments the peaks had irregular shapes and were smaller in magnitude. Generally, the COD concentration correlated better with peak area than with peak height. The ANN predicted the COD concentration (R² = 0.99) with one layer of hidden neurons and for concentrations as low as 5 mg acetate-COD/L. Adding 50 mM of 2-bromoethanesulfonate amplified the electrical signals when glucose was the substrate. This report is the first to identify two types of ARB induction profiles and to demonstrate the power of ANNs for interpreting a wide variety of electrical response peaks.
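    As a generic illustration of the kind of ANN regression described (a single hidden layer mapping peak features to COD concentration), a minimal sketch using scikit-learn might look as follows. The features and synthetic data are assumptions for illustration, not the authors' pipeline or measurements.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in data: peak area and peak height as features, COD as target
    # (assumed relationship, for illustration only).
    rng = np.random.default_rng(3)
    peak_area = rng.uniform(1.0, 100.0, size=300)
    peak_height = 0.2 * peak_area + rng.normal(0.0, 2.0, size=300)
    cod = 5.0 + 0.8 * peak_area + rng.normal(0.0, 3.0, size=300)

    X = np.column_stack([peak_area, peak_height])
    X_train, X_test, y_train, y_test = train_test_split(X, cod, random_state=0)

    # One hidden layer of neurons, echoing the single-hidden-layer architecture reported.
    model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
    model.fit(X_train, y_train)
    print("R^2 on held-out data:", model.score(X_test, y_test))
    ```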

  6. Effects of reconstructed magnetic field from sparse noisy boundary measurements on localization of active neural source.

    PubMed

    Shen, Hui-min; Lee, Kok-Meng; Hu, Liang; Foong, Shaohui; Fu, Xin

    2016-01-01

    Localization of an active neural source (ANS) from measurements on the head surface is vital in magnetoencephalography. As neuron-generated magnetic fields are extremely weak, significant uncertainties caused by stochastic measurement interference complicate its localization. This paper presents a novel computational method based on a magnetic field reconstructed from sparse, noisy measurements for enhanced ANS localization by suppressing the effects of unrelated noise. In this approach, the magnetic flux density (MFD) in the nearby current-free space outside the head is reconstructed from measurements by formulating the infinite series solution of Laplace's equation, where boundary condition (BC) integrals over the entire measurement set provide a "smooth" reconstructed MFD with reduced unrelated noise. Using a gradient-based method, reconstructed MFDs with good fidelity are selected for enhanced ANS localization. The reconstruction model, the spatial interpolation of the BC, the parametric equivalent-current-dipole-based inverse estimation algorithm using the reconstruction, and the gradient-based selection are detailed and validated. The influences of various source depths and measurement signal-to-noise ratio levels on the estimated ANS location are analyzed numerically and compared with a traditional method (where measurements are used directly), and it is demonstrated that gradient-selected, high-fidelity reconstructed data can effectively improve the accuracy of ANS localization.
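    For context, in a current-free region the magnetic flux density can be written as the gradient of a scalar potential satisfying Laplace's equation. The general series solution in spherical harmonics, shown below as a generic textbook form (not the specific reconstruction model of the paper), is what such boundary-measurement fits expand in.

    ```latex
    % Generic series solution of Laplace's equation in a current-free region:
    % \mathbf{B} = -\nabla \psi with \nabla^2 \psi = 0, expanded in spherical harmonics Y_{lm}.
    \psi(r, \theta, \varphi) \;=\; \sum_{l=0}^{\infty} \sum_{m=-l}^{l}
    \Bigl( a_{lm}\, r^{l} + b_{lm}\, r^{-(l+1)} \Bigr) Y_{lm}(\theta, \varphi),
    \qquad \mathbf{B} \;=\; -\nabla \psi ,
    ```

    with the coefficients a_{lm}, b_{lm} determined from the boundary measurements.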

  7. Dynamical localization: Hydrogen atoms in magnetic and microwave fields

    SciTech Connect

    Benvenuto, F.; Casati, G.; Shepelyansky, D.L.

    1997-03-01

    We show that dynamical localization for excited hydrogen atoms in magnetic and microwave fields takes place at quite low microwave frequency (ωn³ < 1). Estimates of the localization length are given for different parameter regimes, showing that the quantum delocalization border drops significantly as compared to the case of zero magnetic field. This opens up broad possibilities for laboratory investigations. © 1997 The American Physical Society.

  8. The Effect of Varying Magnetic Field Gradient on Combustion Dynamics

    NASA Astrophysics Data System (ADS)

    Suzdalenko, Vera; Zake, Maija; Barmina, Inesa; Gedrovics, Martins

    2011-01-01

    The focus of the recent experimental research is to provide control of the combustion dynamics and complex measurements (flame temperature, heat production rate, and composition of polluting emissions) for pelletized wood biomass, using a non-uniform magnetic field that produces a magnetic force interacting with the magnetic moment of paramagnetic oxygen. The experimental results have shown that a gradient magnetic field provides enhanced mixing of the flame compounds, increasing combustion efficiency and enhancing the burnout of volatiles.

  9. Combined effects of flow-induced shear stress and electromagnetic field on neural differentiation of mesenchymal stem cells.

    PubMed

    Mascotte-Cruz, Juan Uriel; Ríos, Amelia; Escalante, Bruno

    2016-01-01

    Differentiation of bone marrow-derived mesenchymal stem cells (MSCs) into a neural phenotype has been induced by either flow-induced shear stress (FSS) or electromagnetic fields (EMF). However, procedures are still expensive and time consuming. In the present work, induction for 1 h with the combination of both forces showed the presence of the neural precursor nestin as early as 9 h in culture after treatment, and this result lasted for the following 6 d. In conclusion, a short-time combination of FSS and EMF yields neurite-like cells, although further investigation is required to analyze cell functionality.

  10. Hysteretic dynamics of active particles in a periodic orienting field

    PubMed Central

    Romensky, Maksym; Scholz, Dimitri; Lobaskin, Vladimir

    2015-01-01

    Active motion of living organisms and artificial self-propelling particles has been an area of intense research at the interface of biology, chemistry and physics. Significant progress in understanding these phenomena has been related to the observation that dynamic self-organization in active systems has much in common with ordering in equilibrium condensed matter such as spontaneous magnetization in ferromagnets. The velocities of active particles may behave similar to magnetic dipoles and develop global alignment, although interactions between the individuals might be completely different. In this work, we show that the dynamics of active particles in external fields can also be described in a way that resembles equilibrium condensed matter. It follows simple general laws, which are independent of the microscopic details of the system. The dynamics is revealed through hysteresis of the mean velocity of active particles subjected to a periodic orienting field. The hysteresis is measured in computer simulations and experiments on unicellular organisms. We find that the ability of the particles to follow the field scales with the ratio of the field variation period to the particles' orientational relaxation time, which, in turn, is related to the particle self-propulsion power and the energy dissipation rate. The collective behaviour of the particles due to aligning interactions manifests itself at low frequencies via increased persistence of the swarm motion when compared with motion of an individual. By contrast, at high field frequencies, the active group fails to develop the alignment and tends to behave like a set of independent individuals even in the presence of interactions. We also report on asymptotic laws for the hysteretic dynamics of active particles, which resemble those in magnetic systems. The generality of the assumptions in the underlying model suggests that the observed laws might apply to a variety of dynamic phenomena from the motion of

  11. Neural crest induction at the neural plate border in vertebrates.

    PubMed

    Milet, Cécile; Monsoro-Burq, Anne H

    2012-06-01

    The neural crest is a transient and multipotent cell population arising at the edge of the neural plate in vertebrates. Recent findings highlight that neural crest patterning is initiated during gastrulation, i.e. earlier than classically described, in a progenitor domain named the neural border. This chapter reviews the dynamic and complex molecular interactions underlying neural border formation and neural crest emergence.

  12. DYNAMICS OF CHROMOSPHERIC UPFLOWS AND UNDERLYING MAGNETIC FIELDS

    SciTech Connect

    Yurchyshyn, V.; Abramenko, V.; Goode, P.

    2013-04-10

    We used Hα − 0.1 nm and magnetic field (at 1.56 μm) data obtained with the New Solar Telescope to study the origin of the disk counterparts to type II spicules, so-called rapid blueshifted excursions (RBEs). The high time cadence of our chromospheric (10 s) and magnetic field (45 s) data allowed us to generate x-t plots using slits parallel to the spines of the RBEs. These plots, along with potential field extrapolation, led us to suggest that the occurrence of RBEs is generally correlated with the appearance of new, mixed, or unipolar fields in close proximity to network fields. RBEs show a tendency to occur at the interface between large-scale fields and small-scale dynamic magnetic loops and thus are likely to be associated with the existence of a magnetic canopy. Detection of kinked and/or inverse Y-shaped RBEs further confirms this conclusion.

  13. EEG neural oscillatory dynamics reveal semantic and response conflict at different levels of conflict awareness.

    PubMed

    Jiang, Jun; Zhang, Qinglin; Van Gaal, Simon

    2015-07-14

    Although previous work has shown that conflict can be detected in the absence of awareness, it is unknown how different sources of conflict (i.e., semantic, response) are processed in the human brain and whether these processes are differently modulated by conflict awareness. To explore this issue, we extracted oscillatory power dynamics from electroencephalographic (EEG) data recorded while human participants performed a modified version of the Stroop task. Crucially, in this task conflict awareness was manipulated by masking a conflict-inducing color word preceding a color patch target. We isolated semantic from response conflict by introducing four color words/patches, of which two were matched to the same response. We observed that both semantic as well as response conflict were associated with mid-frontal theta-band and parietal alpha-band power modulations, irrespective of the level of conflict awareness (high vs. low), although awareness of conflict increased these conflict-related power dynamics. These results show that both semantic and response conflict can be processed in the human brain and suggest that the neural oscillatory mechanisms in EEG reflect mainly "domain general" conflict processing mechanisms, instead of conflict source specific effects.

  14. Modeling neural correlates of auditory attention in evoked potentials using corticothalamic feedback dynamics.

    PubMed

    Trenado, Carlos; Haab, Lars; Strauss, Daniel J

    2007-01-01

    Auditory evoked cortical potentials (AECPs) are well established as a diagnostic tool in audiology and are gaining more and more impact in experimental neuropsychology, neuroscience, and psychiatry, e.g., for attention deficit disorder, schizophrenia, or for studying tinnitus decompensation. The modulation of AECPs due to exogenous and endogenous attention plays a major role in many clinical applications and has been studied experimentally in neuropsychology. However, the relation of corticothalamic feedback dynamics to focal and non-focal attention, and its large-scale effect reflected in AECPs, is far from being understood. In this paper, we model neural correlates of auditory attention reflected in AECPs using corticothalamic feedback dynamics. We present a mapping of a recently developed multiscale model of evoked potentials to the hearing path and discuss for the first time its neurofunctionality in terms of corticothalamic feedback loops related to focal and non-focal attention. Our model reinforces recent experimental results related to online attention monitoring using AECPs, with application as an objective tinnitus decompensation measure. It is concluded that our model presents a promising approach to gain a deeper understanding of the neurodynamics of auditory attention and might be used as an efficient forward model to reinforce hypotheses obtained from experimental paradigms involving AECPs.

  15. Dark energy parametrization motivated by scalar field dynamics

    NASA Astrophysics Data System (ADS)

    de la Macorra, Axel

    2016-05-01

    We propose a new dark energy (DE) parametrization motivated by the dynamics of a scalar field ϕ. We use an equation of state w parametrized in terms of two functions L and y, closely related to the dynamics of scalar fields, which is exact and has no approximation. By choosing an appropriate ansatz for L we obtain a wide class of behavior for the evolution of DE without the need to specify the scalar potential V. We parametrize L and y in terms of only four parameters, giving w a rich structure and allowing for a wide class of DE dynamics. Our w can either grow and later decrease, or it can happen the other way around; the steepness of the transition is not fixed and it contains the ansatz w = w_0 + w_a(1 − a). Our parametrization follows closely the dynamics of a scalar field, and the function L allows us to connect it with the scalar potential V(φ). While the Universe is accelerating and the slow roll approximation is valid, we get L ≃ (V′/V)². To determine the dynamics of DE we also calculate the background evolution and its perturbations, since they are important to discriminate between different DE models.
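    For reference, the w_0, w_a ansatz contained in this parametrization corresponds, for a spatially flat FRW background, to the standard dark-energy density evolution (a textbook result quoted here for context, not derived in the abstract):

    ```latex
    % Dark-energy density evolution for w(a) = w_0 + w_a (1 - a),
    % obtained from rho'_DE / rho_DE = -3 (1 + w(a)) / a.
    \rho_{\mathrm{DE}}(a) \;=\; \rho_{\mathrm{DE},0}\;
    a^{-3\,(1 + w_0 + w_a)}\; e^{-3 w_a (1 - a)} .
    ```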

  16. Effective electric fields along realistic DTI-based neural trajectories for modelling the stimulation mechanisms of TMS.

    PubMed

    De Geeter, N; Crevecoeur, G; Leemans, A; Dupré, L

    2015-01-21

    In transcranial magnetic stimulation (TMS), an applied alternating magnetic field induces an electric field in the brain that can interact with the neural system. It is generally assumed that this induced electric field is the crucial effect exciting a certain region of the brain. More specifically, it is the component of this field parallel to the neuron's local orientation, the so-called effective electric field, that can initiate neuronal stimulation. Deeper insights into the stimulation mechanisms can be acquired through extensive TMS modelling. Most models study simple representations of neurons with assumed geometries, whereas we embed realistic neural trajectories computed using tractography based on diffusion tensor images. This way of modelling ensures a more accurate spatial distribution of the effective electric field that is, in addition, patient- and case-specific. The case study of this paper focuses on single-pulse stimulation of the left primary motor cortex with a standard figure-of-eight coil. Including realistic neural geometry in the model demonstrates the strong and localized variations of the effective electric field between the tracts themselves and along them, due to the interplay of factors such as the tract's position and orientation in relation to the TMS coil, the neural trajectory and its course along the white and grey matter interface. Furthermore, the influence of changes in the coil orientation is studied. Investigating the impact of tissue anisotropy confirms that its contribution is not negligible. Moreover, assuming isotropic tissues leads to errors of the same size as rotating or tilting the coil by 10 degrees. In contrast, the model proves to be less sensitive to the poorly known tissue conductivity values.
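    In the notation of the abstract, the effective electric field along a tract is the projection of the induced field onto the local fiber direction; a schematic expression consistent with that description is:

    ```latex
    % Effective electric field along a neural trajectory r(s), parametrized by
    % arc length s, with local unit tangent \hat{t}(s).
    E_{\mathrm{eff}}(s) \;=\; \mathbf{E}\bigl(\mathbf{r}(s)\bigr) \cdot \hat{\mathbf{t}}(s),
    \qquad
    \hat{\mathbf{t}}(s) \;=\; \frac{d\mathbf{r}/ds}{\lVert d\mathbf{r}/ds \rVert} .
    ```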

  17. Effective electric fields along realistic DTI-based neural trajectories for modelling the stimulation mechanisms of TMS

    NASA Astrophysics Data System (ADS)

    De Geeter, N.; Crevecoeur, G.; Leemans, A.; Dupré, L.

    2015-01-01

    In transcranial magnetic stimulation (TMS), an applied alternating magnetic field induces an electric field in the brain that can interact with the neural system. It is generally assumed that this induced electric field is the crucial effect exciting a certain region of the brain. More specifically, it is the component of this field parallel to the neuron’s local orientation, the so-called effective electric field, that can initiate neuronal stimulation. Deeper insights into the stimulation mechanisms can be acquired through extensive TMS modelling. Most models study simple representations of neurons with assumed geometries, whereas we embed realistic neural trajectories computed using tractography based on diffusion tensor images. This way of modelling ensures a more accurate spatial distribution of the effective electric field that is, in addition, patient- and case-specific. The case study of this paper focuses on single-pulse stimulation of the left primary motor cortex with a standard figure-of-eight coil. Including realistic neural geometry in the model demonstrates the strong and localized variations of the effective electric field between the tracts themselves and along them, due to the interplay of factors such as the tract’s position and orientation in relation to the TMS coil, the neural trajectory and its course along the white and grey matter interface. Furthermore, the influence of changes in the coil orientation is studied. Investigating the impact of tissue anisotropy confirms that its contribution is not negligible. Moreover, assuming isotropic tissues leads to errors of the same size as rotating or tilting the coil by 10 degrees. In contrast, the model proves to be less sensitive to the poorly known tissue conductivity values.

  18. OLD-FIELD SUCCESSIONAL DYNAMICS FOLLOWING INTENSIVE HERBIVORY

    EPA Science Inventory

    Community composition and successional patterns can be altered by disturbance and exotic species invasions. Our objective was to describe vegetation dynamics following cessation of severe disturbance, which was heavy grazing by cattle, in an old-field grassland subject to invasi...

  19. Using Dynamic Field Theory to Rethink Infant Habituation

    ERIC Educational Resources Information Center

    Schoner, Gregor; Thelen, Esther

    2006-01-01

    Much of what psychologists know about infant perception and cognition is based on habituation, but the process itself is still poorly understood. Here the authors offer a dynamic field model of infant visual habituation, which simulates the known features of habituation, including familiarity and novelty effects, stimulus intensity effects, and…

  20. Dynamical mean-field theory from a quantum chemical perspective.

    PubMed

    Zgid, Dominika; Chan, Garnet Kin-Lic

    2011-03-07

    We investigate the dynamical mean-field theory (DMFT) from a quantum chemical perspective. Dynamical mean-field theory offers a formalism to extend quantum chemical methods for finite systems to infinite periodic problems within a local correlation approximation. In addition, quantum chemical techniques can be used to construct new ab initio Hamiltonians and impurity solvers for DMFT. Here, we explore some ways in which these things may be achieved. First, we present an informal overview of dynamical mean-field theory to connect to quantum chemical language. Next, we describe an implementation of dynamical mean-field theory where we start from an ab initio Hartree-Fock Hamiltonian that avoids double counting issues present in many applications of DMFT. We then explore the use of the configuration interaction hierarchy in DMFT as an approximate solver for the impurity problem. We also investigate some numerical issues of convergence within DMFT. Our studies are carried out in the context of the cubic hydrogen model, a simple but challenging test for correlation methods. Finally, we finish with some conclusions for future directions.

  1. The effective field theorist's approach to gravitational dynamics

    NASA Astrophysics Data System (ADS)

    Porto, Rafael A.

    2016-05-01

    We review the effective field theory (EFT) approach to gravitational dynamics. We focus on extended objects in long-wavelength backgrounds and gravitational wave emission from spinning binary systems. We conclude with an introduction to EFT methods for the study of cosmological large scale structures.

  2. Book review: old fields: dynamics and restoration of abandoned farmland

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The 2007 volume, “Old Fields: Dynamics and Restoration of Abandoned Farmland”, edited by VA Cramer and RJ Hobbs and published by the Society for Ecological Restoration International (Island Press), is a valuable attempt to synthesize a dozen case studies on agricultural abandonment from all of the ...

  3. Striatal Activity and Reward Relativity: Neural Signals Encoding Dynamic Outcome Valuation

    PubMed Central

    Webber, Emily S.; Mankin, David E.

    2016-01-01

    The striatum is a key brain region involved in reward processing. Striatal activity has been linked to encoding reward magnitude and integrating diverse reward outcome information. Recent work has supported the involvement of striatum in the valuation of outcomes. The present work extends this idea by examining striatal activity during dynamic shifts in value that include different levels and directions of magnitude disparity. A novel task was used to produce diverse relative reward effects on a chain of instrumental action. Rats (Rattus norvegicus) were trained to respond to cues associated with specific outcomes varying by food pellet magnitude. Animals were exposed to single-outcome sessions followed by mixed-outcome sessions, and neural activity was compared among identical outcome trials from the different behavioral contexts. Results from striatal activity recordings show that neural responses to different task elements reflect incentive contrast as well as other relative effects that involve generalization between outcomes or possible influences of outcome variety. The activity that was most prevalent was linked to food consumption and post-food consumption periods. Relative encoding was sensitive to magnitude disparity. A within-session analysis showed strong contrast effects that were dependent upon the outcome received in the immediately preceding trial. Significantly higher numbers of responses were found in ventral striatum linked to relative outcome effects. Our results support the idea that relative value can incorporate diverse relationships, including comparisons from specific individual outcomes to general behavioral contexts. The striatum contains these diverse relative processes, possibly enabling both a higher information yield concerning value shifts and a greater behavioral flexibility. PMID:27822506

  4. Dynamic changes in connexin expression following engraftment of neural stem cells to striatal tissue

    SciTech Connect

    Jaederstad, Johan Jaederstad, Linda Maria; Herlenius, Eric

    2011-01-01

    Gap-junctional intercellular communication between grafted neural stem cells (NSCs) and host cells seems to be essential for many of the beneficial effects associated with NSC engraftment. Utilizing murine NSCs (mNSCs) grafted into an organotypic ex vivo model system for striatal tissue, we examined the prerequisites for formation of gap-junctional couplings between graft and host cells at different time points following implantation. To this end, we utilized flow cytometry (to quantify the proportion of connexin (Cx) 26- and 43-expressing cells), immunohistochemistry (to localize the gap-junctional proteins in graft and host cells), dye-transfer studies with and without pharmacological gap-junctional blockers (to assay the functionality of the formed gap-junctional couplings), and proliferation assays (to estimate the role of gap junctions for NSC well-being). Immunohistochemical staining and dye-transfer studies revealed that the NSCs already form functional gap junctions prior to engraftment, thereby creating a substrate for subsequent graft-host communication. The expression of Cx43 by grafted NSCs was decreased by neurotrophin-3 overexpression in NSCs and by culturing the grafted tissue in serum-free Neurobasal B27 medium. Cx43 expression in NSC-derived cells also changed significantly following engraftment. In host cells, the expression of Cx43 peaked following traumatic stimulation and then declined within two weeks, suggesting a window of opportunity for successful host cell rescue by NSC engraftment. Further investigation of the dynamic changes in gap junction expression in graft and host cells, and the associated variations in intercellular communication between implanted and endogenous cells, might help to understand and control the early positive and negative effects evident following neural stem cell transplantation and thereby optimize the outcome of future clinical NSC transplantation therapies.

  5. Striatal Activity and Reward Relativity: Neural Signals Encoding Dynamic Outcome Valuation.

    PubMed

    Webber, Emily S; Mankin, David E; Cromwell, Howard C

    2016-01-01

    The striatum is a key brain region involved in reward processing. Striatal activity has been linked to encoding reward magnitude and integrating diverse reward outcome information. Recent work has supported the involvement of striatum in the valuation of outcomes. The present work extends this idea by examining striatal activity during dynamic shifts in value that include different levels and directions of magnitude disparity. A novel task was used to produce diverse relative reward effects on a chain of instrumental action. Rats (Rattus norvegicus) were trained to respond to cues associated with specific outcomes varying by food pellet magnitude. Animals were exposed to single-outcome sessions followed by mixed-outcome sessions, and neural activity was compared among identical outcome trials from the different behavioral contexts. Results from striatal activity recordings show that neural responses to different task elements reflect incentive contrast as well as other relative effects that involve generalization between outcomes or possible influences of outcome variety. The activity that was most prevalent was linked to food consumption and post-food consumption periods. Relative encoding was sensitive to magnitude disparity. A within-session analysis showed strong contrast effects that were dependent upon the outcome received in the immediately preceding trial. Significantly higher numbers of responses were found in ventral striatum linked to relative outcome effects. Our results support the idea that relative value can incorporate diverse relationships, including comparisons from specific individual outcomes to general behavioral contexts. The striatum contains these diverse relative processes, possibly enabling both a higher information yield concerning value shifts and a greater behavioral flexibility.

  6. Tunable nonequilibrium dynamics of field quenches in spin ice

    PubMed Central

    Mostame, Sarah; Castelnovo, Claudio; Moessner, Roderich; Sondhi, Shivaji L.

    2014-01-01

    We present nonequilibrium physics in spin ice as a unique setting that combines kinematic constraints, emergent topological defects, and magnetic long-range Coulomb interactions. In spin ice, magnetic frustration leads to highly degenerate yet locally constrained ground states. Together, they form a highly unusual magnetic state—a “Coulomb phase”—whose excitations are point-like defects—magnetic monopoles—in the absence of which effectively no dynamics is possible. Hence, when they are sparse at low temperature, dynamics becomes very sluggish. When quenching the system from a monopole-rich to a monopole-poor state, a wealth of dynamical phenomena occur, the exposition of which is the subject of this article. Most notably, we find reaction diffusion behavior, slow dynamics owing to kinematic constraints, as well as a regime corresponding to the deposition of interacting dimers on a honeycomb lattice. We also identify potential avenues for detecting the magnetic monopoles in a regime of slow-moving monopoles. The interest in this model system is further enhanced by its large degree of tunability and the ease of probing it in experiment: With varying magnetic fields at different temperatures, geometric properties—including even the effective dimensionality of the system—can be varied. By monitoring magnetization, spin correlations or zero-field NMR, the dynamical properties of the system can be extracted in considerable detail. This establishes spin ice as a laboratory of choice for the study of tunable, slow dynamics. PMID:24379372

  7. Tunable nonequilibrium dynamics of field quenches in spin ice.

    PubMed

    Mostame, Sarah; Castelnovo, Claudio; Moessner, Roderich; Sondhi, Shivaji L

    2014-01-14

    We present nonequilibrium physics in spin ice as a unique setting that combines kinematic constraints, emergent topological defects, and magnetic long-range Coulomb interactions. In spin ice, magnetic frustration leads to highly degenerate yet locally constrained ground states. Together, they form a highly unusual magnetic state--a "Coulomb phase"--whose excitations are point-like defects--magnetic monopoles--in the absence of which effectively no dynamics is possible. Hence, when they are sparse at low temperature, dynamics becomes very sluggish. When quenching the system from a monopole-rich to a monopole-poor state, a wealth of dynamical phenomena occur, the exposition of which is the subject of this article. Most notably, we find reaction diffusion behavior, slow dynamics owing to kinematic constraints, as well as a regime corresponding to the deposition of interacting dimers on a honeycomb lattice. We also identify potential avenues for detecting the magnetic monopoles in a regime of slow-moving monopoles. The interest in this model system is further enhanced by its large degree of tunability and the ease of probing it in experiment: With varying magnetic fields at different temperatures, geometric properties--including even the effective dimensionality of the system--can be varied. By monitoring magnetization, spin correlations or zero-field NMR, the dynamical properties of the system can be extracted in considerable detail. This establishes spin ice as a laboratory of choice for the study of tunable, slow dynamics.

  8. Ab initio molecular dynamics of hydrogen dissociation on metal surfaces using neural networks and novelty sampling.

    PubMed

    Ludwig, Jeffery; Vlachos, Dionisios G

    2007-10-21

    We outline a hybrid multiscale approach for the construction of ab initio potential energy surfaces (PESs) useful for performing six-dimensional (6D) classical or quantum mechanical molecular dynamics (MD) simulations of diatomic molecules reacting at single crystal surfaces. The algorithm implements concepts from the corrugation reduction procedure, which reduces energetic variation in the PES, and uses neural networks for interpolation of smoothed ab initio data. A novelty sampling scheme is implemented and used to identify configurations that are most likely to be predicted inaccurately by the neural network. This hybrid multiscale approach, which couples PES construction at the electronic structure level to MD simulations at the atomistic scale, reduces the number of density functional theory (DFT) calculations needed to specify an accurate PES. Due to the iterative nature of the novelty sampling algorithm, it is possible to obtain a quantitative measure of the convergence of the PES with respect to the number of ab initio calculations used to train the neural network. We demonstrate the algorithm by first applying it to two analytic potentials, which model the H2/Pt(111) and H2/Cu(111) systems. These potentials are of the corrugated London-Eyring-Polanyi-Sato form, which are based on DFT calculations, but are not globally accurate. After demonstrating the convergence of the PES using these simple potentials, we use DFT calculations directly and obtain converged semiclassical trajectories for the H2/Pt(111) system at the PW91/generalized gradient approximation level. We obtain a converged PES for a 6D hydrogen-surface dissociation reaction using novelty sampling coupled directly to DFT. These results, in excellent agreement with experiments and previous theoretical work, are compared to previous simulations in order to explore the sensitivity of the PES (and therefore MD) to the choice of exchange and correlation functional. Despite having a lower energetic
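    As a generic illustration of novelty sampling for potential-energy-surface fitting, one common variant trains a small committee of networks and flags configurations where the committee members disagree, so that new reference calculations are added only where the fit is unreliable. The sketch below uses scikit-learn regressors on an assumed toy 1D potential; it is not the 6D corrugation-reduced scheme of the paper.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Toy 'ab initio' potential standing in for a DFT calculation (assumption).
    def reference_energy(x):
        return np.sin(3.0 * x) + 0.5 * x ** 2

    rng = np.random.default_rng(4)
    train_x = rng.uniform(-2.0, 2.0, size=20)          # initial training configurations
    train_y = reference_energy(train_x)

    for iteration in range(5):
        # Train a small committee of networks on the current data set.
        committee = [MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000,
                                  random_state=seed).fit(train_x[:, None], train_y)
                     for seed in range(4)]

        # 'Molecular dynamics' stand-in: probe many candidate configurations.
        candidates = rng.uniform(-2.0, 2.0, size=500)
        preds = np.stack([m.predict(candidates[:, None]) for m in committee])
        disagreement = preds.std(axis=0)

        # Novelty sampling: add reference calculations where the committee disagrees most.
        novel = candidates[np.argsort(disagreement)[-5:]]
        train_x = np.concatenate([train_x, novel])
        train_y = np.concatenate([train_y, reference_energy(novel)])

    print(f"final training set size: {train_x.size}")
    ```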

  9. Quantum emitters dynamically coupled to a quantum field

    NASA Astrophysics Data System (ADS)

    Acevedo, O. L.; Quiroga, L.; Rodríguez, F. J.; Johnson, N. F.

    2013-12-01

    We study theoretically the dynamical response of a set of solid-state quantum emitters arbitrarily coupled to a single-mode microcavity system. Ramping the matter-field coupling strength in round trips, we quantify the hysteresis or irreversible quantum dynamics. The matter-field system is modeled as a finite-size Dicke model which has previously been used to describe equilibrium (including quantum phase transition) properties of systems such as quantum dots in a microcavity. Here we extend this model to address non-equilibrium situations. Analyzing the system's quantum fidelity, we find that the near-adiabatic regime exhibits the richest phenomena, with a strong asymmetry in the internal collective dynamics depending on which phase is chosen as the starting point. We also explore signatures of the crossing of the critical points on the radiation subsystem by monitoring its Wigner function; then, the subsystem can exhibit the emergence of non-classicality and complexity.

  10. Approximate photochemical dynamics of azobenzene with reactive force fields

    SciTech Connect

    Li, Yan; Hartke, Bernd

    2013-12-14

    We have fitted reactive force fields of the ReaxFF type to the ground and first excited electronic states of azobenzene, using global parameter optimization by genetic algorithms. Upon coupling with a simple energy-gap transition probability model, this setup allows for completely force-field-based simulations of photochemical cis→trans- and trans→cis-isomerizations of azobenzene, with qualitatively acceptable quantum yields. This paves the way towards large-scale dynamics simulations of molecular machines, including bond breaking and formation (via the reactive force field) as well as photochemical engines (presented in this work).

  11. Approximate photochemical dynamics of azobenzene with reactive force fields

    NASA Astrophysics Data System (ADS)

    Li, Yan; Hartke, Bernd

    2013-12-01

    We have fitted reactive force fields of the ReaxFF type to the ground and first excited electronic states of azobenzene, using global parameter optimization by genetic algorithms. Upon coupling with a simple energy-gap transition probability model, this setup allows for completely force-field-based simulations of photochemical cis→trans- and trans→cis-isomerizations of azobenzene, with qualitatively acceptable quantum yields. This paves the way towards large-scale dynamics simulations of molecular machines, including bond breaking and formation (via the reactive force field) as well as photochemical engines (presented in this work).

  12. Born in weak fields: below-threshold photoelectron dynamics

    NASA Astrophysics Data System (ADS)

    Williams, J. B.; Saalmann, U.; Trinter, F.; Schöffler, M. S.; Weller, M.; Burzynski, P.; Goihl, C.; Henrichs, K.; Janke, C.; Griffin, B.; Kastirke, G.; Neff, J.; Pitzer, M.; Waitz, M.; Yang, Y.; Schiwietz, G.; Zeller, S.; Jahnke, T.; Dörner, R.

    2017-02-01

    We investigate the dynamics of ultra-low kinetic energy photoelectrons. Many experimental techniques employed for the detection of photoelectrons require the presence of (more or less) weak electric extraction fields in order to perform the measurement. Our studies show that ultra-low energy photoelectrons exhibit a characteristic shift in their apparent measured momentum when the target system is exposed to such static electric fields. Fields as weak as 1 V cm⁻¹ already have an observable influence on the detected electron momentum. This apparent shift is demonstrated by an experiment on zero energy photoelectrons emitted from He and explained through theoretical model calculations.

  13. Guided migration of neural stem cells derived from human embryonic stem cells by an electric field.

    PubMed

    Feng, Jun-Feng; Liu, Jing; Zhang, Xiu-Zhen; Zhang, Lei; Jiang, Ji-Yao; Nolta, Jan; Zhao, Min

    2012-02-01

    Small direct current (DC) electric fields (EFs) guide neurite growth and migration of rodent neural stem cells (NSCs). However, this could be species dependent. Therefore, it is critical to investigate how human NSCs (hNSCs) respond to EFs before any clinical application is attempted. Aiming to characterize the EF-stimulated and guided migration of hNSCs, we derived hNSCs from the well-established human embryonic stem cell line H9. Small applied DC EFs, as low as 16 mV/mm, induced significant directional migration toward the cathode. Reversal of the field polarity reversed the migration of hNSCs. The galvanotactic/electrotactic response was both time and voltage dependent. The migration directedness and distance to the cathode increased with increasing field strength. The Rho-kinase inhibitor Y27632 is used to enhance viability of stem cells and has previously been reported to inhibit EF-guided directional migration in induced pluripotent stem cells and neurons. However, its presence did not significantly affect the directionality of hNSC migration in an EF. The cytokine receptor C-X-C chemokine receptor type 4 (CXCR4) is important for chemotaxis of NSCs in the brain. Blockage of CXCR4 did not affect the electrotaxis of hNSCs. We conclude that hNSCs respond to a small EF by directional migration. Applied EFs could potentially be further exploited to guide hNSCs to injured sites in the central nervous system to improve the outcome of various diseases.

  14. Complex Rotation Quantum Dynamic Neural Networks (CRQDNN) using Complex Quantum Neuron (CQN): Applications to time series prediction.

    PubMed

    Cui, Yiqian; Shi, Junyou; Wang, Zili

    2015-11-01

    Quantum Neural Network (QNN) models have attracted great attention since they introduce a new neural computing paradigm based on quantum entanglement. However, the existing QNN models are mainly based on real quantum operations, and the potential of quantum entanglement is not fully exploited. In this paper, we propose a novel quantum neuron model called the Complex Quantum Neuron (CQN) that realizes deeper quantum entanglement. We also propose a novel hybrid network model, Complex Rotation Quantum Dynamic Neural Networks (CRQDNN), based on the CQN. CRQDNN is a three-layer model with both CQN and classical neurons. An infinite impulse response (IIR) filter is embedded in the network model to provide the memory needed to process time series inputs. The Levenberg-Marquardt (LM) algorithm is used for fast parameter learning. The network model is developed to conduct time series prediction. Two application studies are presented, including chaotic time series prediction and electronic remaining useful life (RUL) prediction.
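
    As a toy illustration of the memory mechanism only (the CRQDNN itself and its quantum neurons are not reproduced here), the sketch below shows how an embedded IIR filter turns a time series into a state that retains past inputs; the filter coefficients are arbitrary illustrative choices, not those of the paper.

```python
# Toy sketch: a first-order IIR filter as a memory stage for time-series input.
import numpy as np
from scipy.signal import lfilter

x = np.sin(np.linspace(0, 6 * np.pi, 300))   # input time series
b, a = [0.2], [1.0, -0.8]                    # y[t] = 0.2*x[t] + 0.8*y[t-1]
memory_state = lfilter(b, a, x)              # state fed to downstream neurons
print(memory_state[:5])
```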

  15. Obtaining Arbitrary Prescribed Mean Field Dynamics for Recurrently Coupled Networks of Type-I Spiking Neurons with Analytically Determined Weights

    PubMed Central

    Nicola, Wilten; Tripp, Bryan; Scott, Matthew

    2016-01-01

    A fundamental question in computational neuroscience is how to connect a network of spiking neurons to produce desired macroscopic or mean field dynamics. One possible approach is through the Neural Engineering Framework (NEF). The NEF approach requires quantities called decoders which are solved through an optimization problem requiring large matrix inversion. Here, we show how a decoder can be obtained analytically for type I and certain type II firing rates as a function of the heterogeneity of its associated neuron. These decoders generate approximants for functions that converge to the desired function in mean-squared error like 1/N, where N is the number of neurons in the network. We refer to these decoders as scale-invariant decoders due to their structure. These decoders generate weights for a network of neurons through the NEF formula for weights. These weights force the spiking network to have arbitrary and prescribed mean field dynamics. The weights generated with scale-invariant decoders all lie on low dimensional hypersurfaces asymptotically. We demonstrate the applicability of these scale-invariant decoders and weight surfaces by constructing networks of spiking theta neurons that replicate the dynamics of various well known dynamical systems such as the neural integrator, Van der Pol system and the Lorenz system. As these decoders are analytically determined and non-unique, the weights are also analytically determined and non-unique. We discuss the implications for measured weights of neuronal networks. PMID:26973503
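
    The abstract's analytic, scale-invariant decoders are not reproduced here, but the sketch below illustrates the standard NEF recipe they plug into: solve a regularized least-squares problem for decoders over heterogeneous tuning curves, then form connection weights from encoders and decoders. The tuning-curve form and all parameters are illustrative assumptions.

```python
# Minimal NEF-style sketch: least-squares decoders and outer-product weights.
import numpy as np

rng = np.random.default_rng(1)
N = 200                                    # neurons
x = np.linspace(-1, 1, 101)                # represented variable

# Heterogeneous rectified-linear tuning curves a_i(x) = max(0, alpha_i e_i x + b_i)
alpha = rng.uniform(0.5, 2.0, N)
bias = rng.uniform(-1.0, 1.0, N)
enc = rng.choice([-1.0, 1.0], N)           # encoders
A = np.maximum(0.0, alpha[:, None] * enc[:, None] * x[None, :] + bias[:, None])

# Decoders for a target function f(x) (here simply the identity)
f_target = x
reg = 0.1 * N
D = np.linalg.solve(A @ A.T + reg * np.eye(N), A @ f_target)   # shape (N,)

# Recurrent weight matrix: project the decoded estimate back through the encoders
W = (alpha * enc)[:, None] * D[None, :]    # W_ij = alpha_i * e_i * d_j

x_hat = D @ A                              # decoded estimate of f(x)
print("decoding RMSE:", np.sqrt(np.mean((x_hat - f_target) ** 2)))
```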

  16. Neural dynamics of feedforward and feedback processing in figure-ground segregation.

    PubMed

    Layton, Oliver W; Mingolla, Ennio; Yazdanbakhsh, Arash

    2014-01-01

    Determining whether a region belongs to the interior or exterior of a shape (figure-ground segregation) is a core competency of the primate brain, yet the underlying mechanisms are not well understood. Many models assume that figure-ground segregation occurs by assembling progressively more complex representations through feedforward connections, with feedback playing only a modulatory role. We present a dynamical model of figure-ground segregation in the primate ventral stream wherein feedback plays a crucial role in disambiguating a figure's interior and exterior. We introduce a processing strategy whereby jitter in receptive field (RF) center locations and variation in RF sizes are exploited to enhance and suppress neural activity inside and outside of figures, respectively. Feedforward projections emanate from units that model cells in V4 known to respond to the curvature of boundary contours (curved contour cells), and feedback projections emanate from units predicted to exist in IT that strategically group neurons with different RF sizes and RF center locations (teardrop cells). Neurons (convex cells) that preferentially respond when centered on a figure dynamically balance feedforward (bottom-up) information and feedback from higher visual areas. The activation is enhanced when an interior portion of a figure is in the RF, via feedback from units that detect closure in the boundary contours of a figure. Our model produces maximal activity along the medial axis of well-known figures with and without concavities, and inside algorithmically generated shapes. Our results suggest that the dynamic balancing of feedforward signals with the specific feedback mechanisms proposed by the model is crucial for figure-ground segregation.

  17. Dynamical Integration of Language and Behavior in a Recurrent Neural Network for Human–Robot Interaction

    PubMed Central

    Yamada, Tatsuro; Murata, Shingo; Arie, Hiroaki; Ogata, Tetsuya

    2016-01-01

    To work cooperatively with humans by using language, robots must not only acquire a mapping between language and their behavior but also autonomously utilize the mapping in appropriate contexts of interactive tasks online. To this end, we propose a novel learning method linking language to robot behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of language and behavior but as sequential data constructed from the actual temporal flow of the task. By doing this, the internal dynamics of the network models both language–behavior relationships and the temporal patterns of interaction. Here, “internal dynamics” refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing where in the interaction context it is as its current state, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate behavior in response to a human’s linguistic instruction. After learning, the network indeed formed an attractor structure representing both language–behavior relationships and the task’s temporal pattern in its internal dynamics. In this dynamics, language–behavior mapping was achieved by a branching structure. Repetition of the human’s instruction and the robot’s behavioral response was represented as a cyclic structure, while waiting for a subsequent instruction was represented as a fixed-point attractor. Thanks to this structure, the robot was able to interact online with a human concerning the given task by autonomously switching phases. PMID:27471463

  18. Phase-space dynamics of runaway electrons in magnetic fields

    NASA Astrophysics Data System (ADS)

    Guo, Zehua; McDevitt, Christopher J.; Tang, Xian-Zhu

    2017-04-01

    Dynamics of runaway electrons in magnetic fields are governed by the competition of three dominant physics: parallel electric field acceleration, Coulomb collision, and synchrotron radiation. Examination of the energy and pitch-angle flows reveals that the presence of local vortex structure and global circulation is crucial to the saturation of primary runaway electrons. Models for the vortex structure, which has an O-point to X-point connection, and the bump of runaway electron distribution in energy space have been developed and compared against the simulation data. Identification of these velocity-space structures opens a new venue to re-examine the conventional understanding of runaway electron dynamics in magnetic fields.

  19. Quantum dynamics of charge state in silicon field evaporation

    NASA Astrophysics Data System (ADS)

    Silaeva, Elena P.; Uchida, Kazuki; Watanabe, Kazuyuki

    2016-08-01

    The charge state of an ion field-evaporating from a silicon-atom cluster is analyzed using time-dependent density functional theory coupled to molecular dynamics. The final charge state of the ion is shown to increase gradually with increasing external electrostatic field in agreement with the average charge state of silicon ions detected experimentally. When field evaporation is triggered by laser-induced electronic excitations the charge state also increases with increasing intensity of the laser pulse. At the evaporation threshold, the charge state of the evaporating ion does not depend on the electrostatic field due to the strong contribution of laser excitations to the ionization process both at low and high laser energies. A neutral silicon atom escaping the cluster due to its high initial kinetic energy is shown to be eventually ionized by external electrostatic field.

  20. TOPICAL REVIEW: Electron dynamics in inhomogeneous magnetic fields

    NASA Astrophysics Data System (ADS)

    Nogaret, Alain

    2010-06-01

    This review explores the dynamics of two-dimensional electrons in magnetic potentials that vary on scales smaller than the mean free path. The physics of microscopically inhomogeneous magnetic fields relates to important fundamental problems in the fractional quantum Hall effect, superconductivity, spintronics and graphene physics and spins out promising applications which will be described here. After introducing the initial work done on electron localization in random magnetic fields, the experimental methods for fabricating magnetic potentials are presented. Drift-diffusion phenomena are then described, which include commensurability oscillations, magnetic channelling, resistance resonance effects and magnetic dots. We then review quantum phenomena in magnetic potentials including magnetic quantum wires, magnetic minibands in superlattices, rectification by snake states, quantum tunnelling and Klein tunnelling. The third part is devoted to spintronics in inhomogeneous magnetic fields. This covers spin filtering by magnetic field gradients and circular magnetic fields, electrically induced spin resonance, spin resonance fluorescence and coherent spin manipulation.

  1. First principles molecular dynamics without self-consistent field optimization

    SciTech Connect

    Souvatzis, Petros; Niklasson, Anders M. N.

    2014-01-28

    We present a first principles molecular dynamics approach that is based on time-reversible extended Lagrangian Born-Oppenheimer molecular dynamics [A. M. N. Niklasson, Phys. Rev. Lett. 100, 123004 (2008)] in the limit of vanishing self-consistent field optimization. The optimization-free dynamics keeps the computational cost to a minimum and typically provides molecular trajectories that closely follow the exact Born-Oppenheimer potential energy surface. Only one single diagonalization and Hamiltonian (or Fockian) construction are required in each integration time step. The proposed dynamics is derived for a general free-energy potential surface valid at finite electronic temperatures within hybrid density functional theory. Even in the event of irregular functional behavior that may cause a dynamical instability, the optimization-free limit represents a natural starting guess for force calculations that may require a more elaborate iterative electronic ground state optimization. Our optimization-free dynamics thus represents a flexible theoretical framework for a broad and general class of ab initio molecular dynamics simulations.
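
    Schematically (simplified notation, following the extended Lagrangian Born-Oppenheimer molecular dynamics literature rather than a complete statement of this paper's method), the auxiliary electronic degrees of freedom P are propagated alongside the nuclei with a time-reversible Verlet step,

\[
P(t+\delta t) \;=\; 2P(t) \;-\; P(t-\delta t) \;+\; \delta t^{2}\,\omega^{2}\bigl[\rho(t) - P(t)\bigr],
\]

    and in the optimization-free limit the density \(\rho(t)\) is obtained from a single Hamiltonian (or Fockian) construction and diagonalization at P(t), without self-consistency iterations.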

  2. First principles molecular dynamics without self-consistent field optimization.

    PubMed

    Souvatzis, Petros; Niklasson, Anders M N

    2014-01-28

    We present a first principles molecular dynamics approach that is based on time-reversible extended Lagrangian Born-Oppenheimer molecular dynamics [A. M. N. Niklasson, Phys. Rev. Lett. 100, 123004 (2008)] in the limit of vanishing self-consistent field optimization. The optimization-free dynamics keeps the computational cost to a minimum and typically provides molecular trajectories that closely follow the exact Born-Oppenheimer potential energy surface. Only one single diagonalization and Hamiltonian (or Fockian) construction are required in each integration time step. The proposed dynamics is derived for a general free-energy potential surface valid at finite electronic temperatures within hybrid density functional theory. Even in the event of irregular functional behavior that may cause a dynamical instability, the optimization-free limit represents a natural starting guess for force calculations that may require a more elaborate iterative electronic ground state optimization. Our optimization-free dynamics thus represents a flexible theoretical framework for a broad and general class of ab initio molecular dynamics simulations.

  3. Advances in neural networks research: an introduction.

    PubMed

    Kozma, Robert; Bressler, Steven; Perlovsky, Leonid; Venayagamoorthy, Ganesh Kumar

    2009-01-01

    The present Special Issue "Advances in Neural Networks Research: IJCNN2009" provides a state-of-art overview of the field of neural networks. It includes 39 papers from selected areas of the 2009 International Joint Conference on Neural Networks (IJCNN2009). IJCNN2009 took place on June 14-19, 2009 in Atlanta, Georgia, USA, and it represents an exemplary collaboration between the International Neural Networks Society and the IEEE Computational Intelligence Society. Topics in this issue include neuroscience and cognitive science, computational intelligence and machine learning, hybrid techniques, nonlinear dynamics and chaos, various soft computing technologies, intelligent signal processing and pattern recognition, bioinformatics and biomedicine, and engineering applications.

  4. A system of recurrent neural networks for modularising, parameterising and dynamic analysis of cell signalling networks.

    PubMed

    Samarasinghe, S; Ling, H

    2017-02-04

    In this paper, we show how to extend our previously proposed novel continuous time Recurrent Neural Networks (RNN) approach that retains the advantage of continuous dynamics offered by Ordinary Differential Equations (ODE) while enabling parameter estimation through adaptation, to larger signalling networks using a modular approach. Specifically, the signalling network is decomposed into several sub-models based on important temporal events in the network. Each sub-model is represented by the proposed RNN and trained using data generated from the corresponding ODE model. Trained sub-models are assembled into a whole-system RNN which is then subjected to systems dynamics and sensitivity analyses. The concept is illustrated by application to the G1/S transition in the cell cycle using the Iwamoto et al. (2008) ODE model. We decomposed the G1/S network into 3 sub-models: (i) E2F transcription factor release; (ii) E2F and CycE positive feedback loop for elevating cyclin levels; and (iii) E2F and CycA negative feedback to degrade E2F. The trained sub-models accurately represented system dynamics and parameters were in good agreement with the ODE model. The whole-system RNN, however, revealed a couple of parameters contributing to compounding errors due to feedback and required refinement of sub-model 2. These related to the reversible reaction between CycE/CDK2 and p27, its inhibitor. The revised whole-system RNN model very accurately matched the dynamics of the ODE system. Local sensitivity analysis of the whole-system model further revealed the most dominant influence of the above two parameters in perturbing the G1/S transition, giving support to a recent hypothesis that the release of inhibitor p27 from the Cyc/CDK complex triggers cell cycle stage transition. To make the model useful in a practical setting, we modified each RNN sub-model with a time relay switch to facilitate larger-interval input data (≈20 min) (the original model used data at 30 s or less) and retrained them that produced

  5. Dynamic Structure of Neural Variability in the Cortical Representation of Speech Sounds

    PubMed Central

    Dichter, Benjamin K.; Bouchard, Kristofer E.

    2016-01-01

    Accurate sensory discrimination is commonly believed to require precise representations in the nervous system; however, neural stimulus responses can be highly variable, even to identical stimuli. Recent studies suggest that cortical response variability decreases during stimulus processing, but the implications of such effects on stimulus discrimination are unclear. To address this, we examined electrocorticographic cortical field potential recordings from the human nonprimary auditory cortex (superior temporal gyrus) while subjects listened to speech syllables. Compared with a prestimulus baseline, activation variability decreased upon stimulus onset, similar to findings from microelectrode recordings in animal studies. We found that this decrease was simultaneous with encoding and spatially specific for those electrodes that most strongly discriminated speech sounds. We also found that variability was predominantly reduced in a correlated subspace across electrodes. We then compared signal and variability (noise) correlations and found that noise correlations reduce more for electrodes with strong signal correlations. Furthermore, we found that this decrease in variability is strongest in the high gamma band, which correlates with firing rate response. Together, these findings indicate that the structure of single-trial response variability is shaped to enhance discriminability despite non–stimulus-related noise. SIGNIFICANCE STATEMENT Cortical responses can be highly variable to auditory speech sounds. Despite this, sensory perception can be remarkably stable. Here, we recorded from the human superior temporal gyrus, a high-order auditory cortex, and studied the changes in the cortical representation of speech stimuli across multiple repetitions. We found that neural variability is reduced upon stimulus onset across electrodes that encode speech sounds. PMID:27413155

  6. Single image depth estimation based on convolutional neural network and sparse connected conditional random field

    NASA Astrophysics Data System (ADS)

    Zhu, Leqing; Wang, Xun; Wang, Dadong; Wang, Huiyan

    2016-10-01

    Deep convolutional neural networks (DCNNs) have attracted significant interest in the computer vision community in the recent years and have exhibited high performance in resolving many computer vision problems, such as image classification. We address the pixel-level depth prediction from a single image by combining DCNN and sparse connected conditional random field (CRF). Owing to the invariance properties of DCNNs that make them suitable for high-level tasks, their outputs are generally not localized enough for detailed pixel-level regression. A multiscale DCNN and sparse connected CRF are combined to overcome this localization weakness. We have evaluated our framework using the well-known NYU V2 depth dataset, and the results show that the proposed method can improve the depth prediction accuracy both qualitatively and quantitatively, as compared to previous works. This finding shows the potential use of the proposed method in three-dimensional (3-D) modeling or 3-D video production from the given two-dimensional (2-D) images or 2-D videos.

  7. Disease named entity recognition by combining conditional random fields and bidirectional recurrent neural networks.

    PubMed

    Wei, Qikang; Chen, Tao; Xu, Ruifeng; He, Yulan; Gui, Lin

    2016-01-01

    The recognition of disease and chemical named entities in scientific articles is a very important subtask in information extraction in the biomedical domain. Due to the diversity and complexity of disease names, the recognition of disease named entities is considerably more difficult than that of chemical names. Although there are some remarkable chemical named entity recognition systems available online such as ChemSpot and tmChem, publicly available recognition systems for disease named entities are rare. This article presents a system for disease named entity recognition (DNER) and normalization. First, two separate DNER models are developed. One is based on a conditional random fields model with a rule-based post-processing module. The other is based on bidirectional recurrent neural networks. Then the named entities recognized by each DNER model are fed into a support vector machine classifier to combine the results. Finally, each recognized disease named entity is normalized to a medical subject heading (MeSH) disease name by using a vector-space-model-based method. Experimental results show that using 1000 PubMed abstracts for training, our proposed system achieves an F1-measure of 0.8428 at the mention level and 0.7804 at the concept level on the testing data of the chemical-disease relation task in BioCreative V. Database URL: http://219.223.252.210:8080/SS/cdr.html.
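
    A minimal sketch of the final normalization step only, assuming a TF-IDF character n-gram vector space and a placeholder MeSH lexicon (the paper's actual lexicon and similarity settings are not specified here):

```python
# Hedged sketch: map a recognized disease mention to the closest MeSH heading
# by TF-IDF cosine similarity. The MeSH names below are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

mesh_names = [
    "diabetes mellitus type 2",
    "alzheimer disease",
    "parkinson disease",
    "breast neoplasms",
]

vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
mesh_vectors = vectorizer.fit_transform(mesh_names)

def normalize(mention: str) -> str:
    """Return the MeSH heading most similar to the recognized mention."""
    sims = cosine_similarity(vectorizer.transform([mention]), mesh_vectors)
    return mesh_names[sims.argmax()]

print(normalize("type II diabetes"))      # -> diabetes mellitus type 2
print(normalize("Alzheimers"))            # -> alzheimer disease
```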

  8. Nanosecond pulsed electric field thresholds for nanopore formation in neural cells

    NASA Astrophysics Data System (ADS)

    Roth, Caleb C.; Tolstykh, Gleb P.; Payne, Jason A.; Kuipers, Marjorie A.; Thompson, Gary L.; DeSilva, Mauris N.; Ibey, Bennett L.

    2013-03-01

    The persistent influx of ions through nanopores created upon cellular exposure to nanosecond pulse electric fields (nsPEF) could be used to modulate neuronal function. One ion, calcium (Ca), is important to action potential firing and regulates many ion channels. However, uncontrolled hyper-excitability of neurons leads to Ca overload and neurodegeneration. Thus, to prevent unintended consequences of nsPEF-induced neural stimulation, knowledge of optimum exposure parameters is required. We determined the relationship between nsPEF exposure parameters (pulse width and amplitude) and nanopore formation in two cell types: rodent neuroblastoma (NG108) and mouse primary hippocampal neurons (PHN). We identified thresholds for nanoporation using Annexin V and FM1-43, to detect changes in membrane asymmetry, and through Ca influx using Calcium Green. The ED50 for a single 600 ns pulse, necessary to cause uptake of extracellular Ca, was 1.76 kV/cm for NG108 and 0.84 kV/cm for PHN. At 16.2 kV/cm, the ED50 for pulse width was 95 ns for both cell lines. Cadmium, a nonspecific Ca channel blocker, failed to prevent Ca uptake suggesting that observed influx is likely due to nanoporation. These data demonstrate that moderate amplitude single nsPEF exposures result in rapid Ca influx that may be capable of controllably modulating neurological function.
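
    For illustration, an ED50 of this kind can be estimated by fitting a sigmoidal dose-response curve to the fraction of responding cells; the field strengths, response fractions and Hill-type form below are hypothetical, not the study's data.

```python
# Hypothetical dose-response fit to extract an ED50 (field strength at which
# half of the cells show Ca influx).
import numpy as np
from scipy.optimize import curve_fit

field = np.array([0.25, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0])        # kV/cm (made up)
responding = np.array([0.02, 0.08, 0.25, 0.45, 0.62, 0.85, 0.95])

def hill(x, ed50, slope):
    return 1.0 / (1.0 + (ed50 / x) ** slope)

(ed50, slope), _ = curve_fit(hill, field, responding, p0=(1.5, 2.0))
print(f"fitted ED50 ≈ {ed50:.2f} kV/cm, Hill slope ≈ {slope:.2f}")
```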

  9. Disease named entity recognition by combining conditional random fields and bidirectional recurrent neural networks

    PubMed Central

    Wei, Qikang; Chen, Tao; Xu, Ruifeng; He, Yulan; Gui, Lin

    2016-01-01

    The recognition of disease and chemical named entities in scientific articles is a very important subtask in information extraction in the biomedical domain. Due to the diversity and complexity of disease names, the recognition of disease named entities is considerably more difficult than that of chemical names. Although there are some remarkable chemical named entity recognition systems available online such as ChemSpot and tmChem, publicly available recognition systems for disease named entities are rare. This article presents a system for disease named entity recognition (DNER) and normalization. First, two separate DNER models are developed. One is based on a conditional random fields model with a rule-based post-processing module. The other is based on bidirectional recurrent neural networks. Then the named entities recognized by each DNER model are fed into a support vector machine classifier to combine the results. Finally, each recognized disease named entity is normalized to a medical subject heading (MeSH) disease name by using a vector-space-model-based method. Experimental results show that using 1000 PubMed abstracts for training, our proposed system achieves an F1-measure of 0.8428 at the mention level and 0.7804 at the concept level on the testing data of the chemical-disease relation task in BioCreative V. Database URL: http://219.223.252.210:8080/SS/cdr.html PMID:27777244

  10. Residual Separation of Magnetic Fields Using a Cellular Neural Network Approach

    NASA Astrophysics Data System (ADS)

    Albora, A. M.; Özmen, A.; Uçan, O. N.

    In this paper, a Cellular Neural Network (CNN) has been applied to a magnetic regional/residual anomaly separation problem. CNN is an analog parallel computing paradigm defined in space and characterized by the locality of connections between processing neurons. The behavior of the CNN is defined by the template matrices A, B and the template vector I. We have optimized the weight coefficients of these templates using the Recurrent Perceptron Learning Algorithm (RPLA). The advantages of CNN as a real-time stochastic method are that it introduces little distortion to the shape of the original image and that it is not affected significantly by factors such as the overlap of power spectra of residual fields. The proposed method is tested using synthetic examples, and the average depth of the buried objects has been estimated by power spectrum analysis. Next, the CNN approach is applied to magnetic data over the Golalan chromite mine in Elazig, in eastern Turkey. This area is among the largest and richest chromite masses in the world. We compared the performance of the CNN to classical derivative approaches.

  11. Dynamic Decision Making in Complex Task Environments: Principles and Neural Mechanisms

    DTIC Science & Technology

    2013-03-01

    Subject terms: dynamical models of cognition; mathematical models of mental processes; human performance optimization. ...we have continued to develop a neurodynamic theory of decision making, using a combination of computational and experimental approaches, to address...a long history in the field of human cognitive psychology. The theoretical foundations of this research can be traced back to signal detection

  12. Regional neural response differences in the determination of faces or houses positioned in a wide visual field.

    PubMed

    Wang, Bin; Yan, Tianyi; Wu, Jinglong; Chen, Kewei; Imajyo, Satoshi; Ohno, Seiichiro; Kanazawa, Susumu

    2013-01-01

    In the human visual cortex, the primary visual cortex (V1) is considered essential for visual information processing; the fusiform face area (FFA) and parahippocampal place area (PPA) are considered face-selective and place-selective regions, respectively. Recently, a functional magnetic resonance imaging (fMRI) study showed that the neural activity ratios between V1 and FFA remained constant with increasing eccentricity in the central visual field. However, in the wide visual field, the neural activity relationships between V1 and FFA or V1 and PPA are still unclear. In this work, using fMRI and a wide-view presentation system, we addressed this issue by measuring neural activities in V1, FFA and PPA for images of faces and houses presented at 4 eccentricities and 4 meridians. We then calculated the ratio relative to V1 (RRV1) by comparing the neural response amplitudes in FFA or PPA with those in V1. We found that V1, FFA, and PPA showed significantly different neural activities to faces and houses along the 3 dimensions of eccentricity, meridian, and region. Most importantly, the RRV1s in FFA and PPA also exhibited significant differences along these 3 dimensions. In the eccentricity dimension, both FFA and PPA showed smaller RRV1s at the central position than at peripheral positions. In the meridian dimension, both FFA and PPA showed larger RRV1s at upper vertical positions than at lower vertical positions. In the region dimension, FFA had larger RRV1s than PPA. We propose that these differential RRV1s indicate that FFA and PPA might have different processing strategies for encoding wide-field visual information from V1. These different processing strategies might depend on the retinal position at which faces or houses are typically observed in daily life. We posit a role of experience in shaping the information processing strategies of the ventral visual cortex.

  13. Dynamic transcriptional signature and cell fate analysis reveals plasticity of individual neural plate border cells

    PubMed Central

    Roellig, Daniela; Tan-Cabugao, Johanna; Esaian, Sevan; Bronner, Marianne E

    2017-01-01

    The ‘neural plate border’ of vertebrate embryos contains precursors of neural crest and placode cells, both defining vertebrate characteristics. How these lineages segregate from neural and epidermal fates has been a matter of debate. We address this by performing a fine-scale quantitative temporal analysis of transcription factor expression in the neural plate border of chick embryos. The results reveal significant overlap of transcription factors characteristic of multiple lineages in individual border cells from gastrula through neurula stages. Cell fate analysis using a Sox2 (neural) enhancer reveals that cells that are initially Sox2+ cells can contribute not only to neural tube but also to neural crest and epidermis. Moreover, modulating levels of Sox2 or Pax7 alters the apportionment of neural tube versus neural crest fates. Our results resolve a long-standing question and suggest that many individual border cells maintain ability to contribute to multiple ectodermal lineages until or beyond neural tube closure. DOI: http://dx.doi.org/10.7554/eLife.21620.001 PMID:28355135

  14. Dynamics of Compact Binaries in Effective Field Theory Formalism

    NASA Astrophysics Data System (ADS)

    Perrodin, Delphine

    2010-02-01

    Coalescing compact binaries are predicted to be powerful emitters of gravitational waves, and provide a strong-gravity environment ideal for testing gravity theories. We study the gravitational dynamics in the early inspiral phase of coalescing compact binaries using Non-Relativistic General Relativity (NRGR) - an effective field theory formalism based on the post-Newtonian approximation to General Relativity, which provides a consistent Lagrangian framework and a systematic way to study binary dynamics and gravitational wave emission. Within this framework we calculate the spin-orbit correction to the Newtonian potential at 2.5 PN.

  15. Mean field theory for U(n) dynamical groups

    NASA Astrophysics Data System (ADS)

    Rosensteel, G.

    2011-04-01

    Algebraic mean field theory (AMFT) is a many-body physics modeling tool which firstly, is a generalization of Hartree-Fock mean field theory, and secondly, an application of the orbit method from Lie representation theory. The AMFT ansatz is that the physical system enjoys a dynamical group, which may be either a strong or a weak dynamical Lie group G. When G is a strong dynamical group, the quantum states are, by definition, vectors in one irreducible unitary representation (irrep) space, and AMFT is equivalent to the Kirillov orbit method for deducing properties of a representation from a direct geometrical analysis of the associated integral co-adjoint orbit. AMFT can be the only tractable method for analyzing some complex many-body systems when the dimension of the irrep space of the strong dynamical group is very large or infinite. When G is a weak dynamical group, the quantum states are not vectors in one irrep space, but AMFT applies if the densities of the states lie on one non-integral co-adjoint orbit. The computational simplicity of AMFT is the same for both strong and weak dynamical groups. This paper formulates AMFT explicitly for unitary Lie algebras, and applies the general method to the Lipkin-Meshkov-Glick su(2) model and the Elliott su(3) model. When the energy in the su(3) theory is a rotational scalar function, Marsden-Weinstein reduction simplifies AMFT dynamics to a two-dimensional phase space.

  16. Emergence of Coordinated Neural Dynamics Underlies Neuroprosthetic Learning and Skillful Control.

    PubMed

    Athalye, Vivek R; Ganguly, Karunesh; Costa, Rui M; Carmena, Jose M

    2017-02-22

    During motor learning, movements and underlying neural activity initially exhibit large trial-to-trial variability that decreases over learning. However, it is unclear how task-relevant neural populations coordinate to explore and consolidate activity patterns. Exploration and consolidation could happen for each neuron independently, across the population jointly, or both. We disambiguated among these possibilities by investigating how subjects learned de novo to control a brain-machine interface using neurons from motor cortex. We decomposed population activity into the sum of private and shared signals, which produce uncorrelated and correlated neural variance, respectively, and examined how these signals' evolution causally shapes behavior. We found that initially large trial-to-trial movement and private neural variability reduce over learning. Concomitantly, task-relevant shared variance increases, consolidating a manifold containing consistent neural trajectories that generate refined control. These results suggest that motor cortex acquires skillful control by leveraging both independent and coordinated variance to explore and consolidate neural patterns.
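
    One common way to separate shared from private neural variance is factor analysis; the sketch below illustrates that decomposition on synthetic population data and is only an assumed stand-in for the paper's exact procedure.

```python
# Illustrative decomposition of population activity into shared and private
# variance with factor analysis (synthetic data, generic settings).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(5)
trials, neurons = 400, 30
latent = rng.standard_normal((trials, 2))               # shared signals
loading = rng.standard_normal((2, neurons))
private = 0.5 * rng.standard_normal((trials, neurons))  # independent noise
activity = latent @ loading + private

fa = FactorAnalysis(n_components=2).fit(activity)
shared_var = np.sum(fa.components_ ** 2, axis=0)        # per-neuron shared variance
private_var = fa.noise_variance_                        # per-neuron private variance
ratio = shared_var.sum() / (shared_var.sum() + private_var.sum())
print("fraction of variance that is shared:", round(ratio, 3))
```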

  17. Potential dynamics of the human striate cortex cerebrum realistic neural network under the influence of an external signal

    NASA Astrophysics Data System (ADS)

    Melnikov, Leonid A.; Novosselova, Anna V.; Blinova, Nadejda V.; Vinitsky, Sergey I.; Serov, Vladislav V.; Bakutkin, Valery V.; Camenskich, T. G.; Guileva, E. V.

    2000-03-01

    In this work, we numerically investigate the potential dynamics of a neural network treated as a nonlinear system, together with the dynamics of the visual nerve connecting the retinal receptors to the striate cortex, in response to transcutaneous electrical stimulation of the retina. The visual evoked potential constitutes this response and characterizes the state of the human brain, the state of the retinal structures, and the conduction of the visual nerve fibers. The results of these investigations are presented. Specific features of the neural network, such as excitation and depression, are also taken into account. The model parameters used in the numerical investigation are discussed, and a comparative analysis is made between the retinal potential data and the recordings of the external signal by the visual centers of the brain hemispheres.

  18. Dynamic evolving spiking neural networks for on-line spatio- and spectro-temporal pattern recognition.

    PubMed

    Kasabov, Nikola; Dhoble, Kshitij; Nuntalid, Nuttapod; Indiveri, Giacomo

    2013-05-01

    On-line learning and recognition of spatio- and spectro-temporal data (SSTD) is a very challenging task and an important one for the future development of autonomous machine learning systems with broad applications. Models based on spiking neural networks (SNN) have already proved their potential in capturing spatial and temporal data. One class of them, the evolving SNN (eSNN), uses a one-pass rank-order learning mechanism and a strategy to evolve a new spiking neuron and new connections to learn new patterns from incoming data. So far these networks have been mainly used for fast image and speech frame-based recognition. Alternative spike-time learning methods, such as Spike-Timing Dependent Plasticity (STDP) and its variant Spike Driven Synaptic Plasticity (SDSP), can also be used to learn spatio-temporal representations, but they usually require many iterations in an unsupervised or semi-supervised mode of learning. This paper introduces a new class of eSNN, dynamic eSNN, that utilise both rank-order learning and dynamic synapses to learn SSTD in a fast, on-line mode. The paper also introduces a new model called deSNN, that utilises rank-order learning and SDSP spike-time learning in unsupervised, supervised, or semi-supervised modes. The SDSP learning is used to evolve dynamically the network changing connection weights that capture spatio-temporal spike data clusters both during training and during recall. The new deSNN model is first illustrated on simple examples and then applied on two case study applications: (1) moving object recognition using address-event representation (AER) with data collected using a silicon retina device; (2) EEG SSTD recognition for brain-computer interfaces. The deSNN models resulted in a superior performance in terms of accuracy and speed when compared with other SNN models that use either rank-order or STDP learning. The reason is that the deSNN makes use of both the information contained in the order of the first input spikes

  19. Anomaly-Induced Dynamical Refringence in Strong-Field QED.

    PubMed

    Mueller, N; Hebenstreit, F; Berges, J

    2016-08-05

    We investigate the impact of the Adler-Bell-Jackiw anomaly on the nonequilibrium evolution of strong-field quantum electrodynamics (QED) using real-time lattice gauge theory techniques. For field strengths exceeding the Schwinger limit for pair production, we encounter a highly absorptive medium with anomaly induced dynamical refractive properties. In contrast to earlier expectations based on equilibrium properties, where net anomalous effects vanish because of the trivial vacuum structure, we find that out-of-equilibrium conditions can have dramatic consequences for the presence of quantum currents with distinctive macroscopic signatures. We observe an intriguing tracking behavior, where the system spends longest times near collinear field configurations with maximum anomalous current. Apart from the potential relevance of our findings for future laser experiments, similar phenomena related to the chiral magnetic effect are expected to play an important role for strong QED fields during initial stages of heavy-ion collision experiments.

  20. Effect of random synaptic dilution on recalling dynamics in an oscillator neural network

    NASA Astrophysics Data System (ADS)

    Kitano, Katsunori; Aoyagi, Toshio

    1998-05-01

    In the present paper, we study the effect of random synaptic dilution in an oscillator neural network in which information is encoded by the relative timing of neuronal firing. In order to analyze the recalling process in this oscillator network, we apply the method of statistical neurodynamics. The results show that the dynamical equations are described by some macroscopic order parameters, such as that representing the overlap with the retrieved pattern. We also present the phase diagram showing both the basin of attraction and the equilibrium overlap in the retrieval state. Our results are supported by numerical simulation. Consequently, it is found that both the attractor and the basin are preserved even though dilution is promoted. Moreover, as compared with the basin of attraction in the traditional binary model, it is suggested that the oscillator model is more robust against the synaptic dilution. Taking into account the fact that oscillator networks contain more detailed information than binary networks, the obtained results constitute significant support for the plausibility of temporal coding.

  1. Local community detection as pattern restoration by attractor dynamics of recurrent neural networks.

    PubMed

    Okamoto, Hiroshi

    2016-08-01

    Densely connected parts in networks are referred to as "communities". Community structure is a hallmark of a variety of real-world networks. Individual communities in networks form functional modules of complex systems described by networks. Therefore, finding communities in networks is essential to approaching and understanding complex systems described by networks. In fact, network science has made a great deal of effort to develop effective and efficient methods for detecting communities in networks. Here we put forward a type of community detection, which has been little examined so far but will be practically useful. Suppose that we are given a set of source nodes that includes some (but not all) of "true" members of a particular community; suppose also that the set includes some nodes that are not the members of this community (i.e., "false" members of the community). We propose to detect the community from this "imperfect" and "inaccurate" set of source nodes using attractor dynamics of recurrent neural networks. Community detection by the proposed method can be viewed as restoration of the original pattern from a deteriorated pattern, which is analogous to cue-triggered recall of short-term memory in the brain. We demonstrate the effectiveness of the proposed method using synthetic networks and real social networks for which correct communities are known.
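
    A minimal sketch of the restoration idea, assuming a simple discrete-time sigmoidal update on the adjacency matrix (the paper's exact recurrent dynamics may differ): seed the network with an imperfect source set and let the dynamics settle on the community.

```python
# Toy "pattern restoration" community detection on a planted two-community graph.
import numpy as np

rng = np.random.default_rng(2)

# Two planted communities of 20 nodes each (dense within, sparse between)
n = 40
A = (rng.random((n, n)) < 0.05).astype(float)
A[:20, :20] = (rng.random((20, 20)) < 0.4)
A[20:, 20:] = (rng.random((20, 20)) < 0.4)
A = np.triu(A, 1); A = A + A.T

# Imperfect source set: 5 true members of community 0 plus 2 false members
x = np.zeros(n)
x[[0, 3, 7, 11, 15, 25, 31]] = 1.0

for _ in range(30):
    drive = A @ x
    drive = drive / (drive.max() + 1e-12)            # normalize recurrent input
    x = 1.0 / (1.0 + np.exp(-10 * (drive - 0.5)))    # sigmoidal update

detected = np.where(x > 0.5)[0]
print("recovered community:", detected)              # should be close to nodes 0..19
```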

  2. Modified neural dynamic surface approach to output feedback of MIMO nonlinear systems.

    PubMed

    Sun, Guofa; Li, Dongwu; Ren, Xuemei

    2015-02-01

    We report an adaptive output feedback dynamic surface control (DSC) scheme, maintaining prescribed performance, for a class of uncertain nonlinear systems with multiple inputs and multiple outputs. Designing neural network observers and modifying the DSC method achieves several control objectives. First, to achieve output feedback control, a finite-time echo state network (ESN) observer with fast convergence is designed to obtain the system states online. Thus, the states that are immeasurable in traditional state feedback control are estimated, and the unknown functions are approximated by the ESN. Then, a modified DSC approach is developed by introducing a high-order sliding mode differentiator to replace the first-order filter in each step. Thus, the effect of filter performance on closed-loop stability is reduced. Furthermore, input-to-state stability guarantees that all signals of the whole closed-loop system are semiglobally uniformly ultimately bounded. Specifically, the performance functions make the tracking errors converge to a compact set around the equilibrium. Two numerical examples illustrate the proposed control scheme with satisfactory results.

  3. Incorporating Artificial Neural Networks in the dynamic thermal-hydraulic model of a controlled cryogenic circuit

    NASA Astrophysics Data System (ADS)

    Carli, S.; Bonifetto, R.; Savoldi, L.; Zanino, R.

    2015-09-01

    A model based on Artificial Neural Networks (ANNs) is developed for the heated line portion of a cryogenic circuit in which supercritical helium (SHe) flows and which also includes a cold circulator, valves, pipes/cryolines and heat exchangers between the main loop and a saturated liquid helium (LHe) bath. The heated line mimics the heat load coming from the superconducting magnets to their cryogenic cooling circuits during the operation of a tokamak fusion reactor. An ANN is trained, using the output from simulations of the circuit performed with the 4C thermal-hydraulic (TH) code, to reproduce the dynamic behavior of the heated line, including, for the first time, scenarios where different types of controls act on the circuit. The ANN is then implemented in the 4C circuit model as a new component, which substitutes the original 4C heated line model. For different operational scenarios and control strategies, a good agreement is shown between the simplified ANN model results and the original 4C results, as well as with experimental data from the HELIOS facility, confirming the suitability of this new approach which, extended to entire magnet systems, can lead to real-time control of the cooling loops and fast assessment of control strategies for heat load smoothing to the cryoplant.

  4. Task-dependent neural representations of salient events in dynamic auditory scenes

    PubMed Central

    Shuai, Lan; Elhilali, Mounya

    2014-01-01

    Selecting pertinent events in the cacophony of sounds that impinge on our ears every day is regulated by the acoustic salience of sounds in the scene as well as their behavioral relevance as dictated by top-down task-dependent demands. The current study aims to explore the neural signature of both facets of attention, as well as their possible interactions in the context of auditory scenes. Using a paradigm with dynamic auditory streams with occasional salient events, we recorded neurophysiological responses of human listeners using EEG while manipulating the subjects' attentional state as well as the presence or absence of a competing auditory stream. Our results showed that salient events caused an increase in the auditory steady-state response (ASSR) irrespective of attentional state or complexity of the scene. Such increase supplemented ASSR increases due to task-driven attention. Salient events also evoked a strong N1 peak in the ERP response when listeners were attending to the target sound stream, accompanied by an MMN-like component in some cases and changes in the P1 and P300 components under all listening conditions. Overall, bottom-up attention induced by a salient change in the auditory stream appears to mostly modulate the amplitude of the steady-state response and certain event-related potentials to salient sound events; though this modulation is affected by top-down attentional processes and the prominence of these events in the auditory scene as well. PMID:25100934

  5. Principal dynamic mode analysis of neural mass model for the identification of epileptic states

    NASA Astrophysics Data System (ADS)

    Cao, Yuzhen; Jin, Liu; Su, Fei; Wang, Jiang; Deng, Bin

    2016-11-01

    The detection of epileptic seizures in Electroencephalography (EEG) signals is significant for the diagnosis and treatment of epilepsy. In this paper, in order to obtain characteristics of various epileptiform EEGs that may differentiate different states of epilepsy, the concept of Principal Dynamic Modes (PDMs) was incorporated to an autoregressive model framework. First, the neural mass model was used to simulate the required intracerebral EEG signals of various epileptiform activities. Then, the PDMs estimated from the nonlinear autoregressive Volterra models, as well as the corresponding Associated Nonlinear Functions (ANFs), were used for the modeling of epileptic EEGs. The efficient PDM modeling approach provided physiological interpretation of the system. Results revealed that the ANFs of the 1st and 2nd PDMs for the auto-regressive input exhibited evident differences among different states of epilepsy, where the ANFs of the sustained spikes' activity encountered at seizure onset or during a seizure were the most differentiable from that of the normal state. Therefore, the ANFs may be characteristics for the classification of normal and seizure states in the clinical detection of seizures and thus provide assistance for the diagnosis of epilepsy.

  6. MEG analysis of neural dynamics in attention-deficit/hyperactivity disorder with fuzzy entropy.

    PubMed

    Monge, Jesús; Gómez, Carlos; Poza, Jesús; Fernández, Alberto; Quintero, Javier; Hornero, Roberto

    2015-04-01

    The aim of this study was to analyze the neural dynamics in attention-deficit/hyperactivity disorder (ADHD). For this purpose, magnetoencephalographic (MEG) background activity was analyzed using fuzzy entropy (FuzzyEn), an entropy measure that quantifies signal irregularity, in 13 ADHD patients and 14 control children. Additionally, relative power (RP) was computed in conventional frequency bands (delta, theta, alpha, beta and gamma). FuzzyEn results showed that MEG activity was more regular in ADHD patients than in controls. Moreover, we found an increase of power in delta band and a decrease in the remaining frequency bands. Statistically significant differences (p-values <0.05; nonparametric permutation test for multiple comparisons) were detected for FuzzyEn in the posterior and left temporal regions, and for RP in the posterior, anterior and left temporal regions. Our results support the hypothesis that ADHD involves widespread functional brain abnormalities, affecting more areas than fronto-striatal circuits, such as the left temporal and posterior regions.
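
    For reference, fuzzy entropy can be computed as follows; this follows the commonly used definition (embedding dimension m, tolerance r, fuzzy exponent n), with generic parameter choices rather than those of the study.

```python
# Illustrative fuzzy entropy (FuzzyEn) implementation: lower values = more regular.
import numpy as np

def fuzzy_entropy(x, m=2, r=0.2, n=2):
    x = np.asarray(x, dtype=float)
    r = r * x.std()

    def phi(dim):
        # Zero-mean templates of length `dim`
        templates = np.array([x[i:i + dim] - x[i:i + dim].mean()
                              for i in range(len(x) - m)])
        # Chebyshev distances between all pairs of templates
        d = np.abs(templates[:, None, :] - templates[None, :, :]).max(axis=2)
        sim = np.exp(-(d ** n) / r)                # fuzzy membership degree
        np.fill_diagonal(sim, 0.0)                 # exclude self-matches
        return sim.sum() / (len(templates) * (len(templates) - 1))

    return np.log(phi(m)) - np.log(phi(m + 1))

rng = np.random.default_rng(3)
regular = np.sin(np.linspace(0, 20 * np.pi, 1000))
irregular = rng.standard_normal(1000)
print(fuzzy_entropy(regular), fuzzy_entropy(irregular))
```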

  7. Neural Dynamics of Audiovisual Synchrony and Asynchrony Perception in 6-Month-Old Infants

    PubMed Central

    Kopp, Franziska; Dietrich, Claudia

    2013-01-01

    Young infants are sensitive to multisensory temporal synchrony relations, but the neural dynamics of temporal interactions between vision and audition in infancy are not well understood. We investigated audiovisual synchrony and asynchrony perception in 6-month-old infants using event-related brain potentials (ERP). In a prior behavioral experiment (n = 45), infants were habituated to an audiovisual synchronous stimulus and tested for recovery of interest by presenting an asynchronous test stimulus in which the visual stream was delayed with respect to the auditory stream by 400 ms. Infants who behaviorally discriminated the change in temporal alignment were included in further analyses. In the EEG experiment (final sample: n = 15), synchronous and asynchronous stimuli (visual delay of 400 ms) were presented in random order. Results show latency shifts in the auditory ERP components N1 and P2 as well as the infant ERP component Nc. Latencies in the asynchronous condition were significantly longer than in the synchronous condition. After video onset but preceding the auditory onset, amplitude modulations propagating from posterior to anterior sites and related to the Pb component of infants’ ERP were observed. Results suggest temporal interactions between the two modalities. Specifically, they point to the significance of anticipatory visual motion for auditory processing, and indicate young infants’ predictive capacities for audiovisual temporal synchrony relations. PMID:23346071

  8. Speech recognition using Kohonen neural networks, dynamic programming, and multi-feature fusion

    NASA Astrophysics Data System (ADS)

    Stowe, Francis S.

    1990-12-01

    The purpose of this thesis was to develop and evaluate the performance of a three-feature speech recognition system. The three features used were LPC spectrum, formants (F1/F2), and cepstrum. The system uses Kohonen neural networks, dynamic programming, and a rule-based, feature-fusion process which integrates the three input features into one output result. The first half of this research involved evaluating the system in a speaker-dependent atmosphere. For this, the 70 word F-16 cockpit command vocabulary was used and both isolated and connected speech was tested. Results obtained are compared to a two-feature system with the same system configuration. Isolated-speech testing yielded 98.7 percent accuracy. Connected-speech testing yielded 75.0 percent accuracy. The three-feature system performed an average of 1.7 percent better than the two-feature system for isolated speech. The second half of this research was concerned with the speaker-independent performance of the system. First, cross-speaker testing was performed using an updated 86 word library. In general, this testing yielded less than 50 percent accuracy. Then, testing was performed using averaged templates. This testing yielded an overall average in-template recognition rate of approximately 90 percent and an out-of-template recognition rate of approximately 75 percent.
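
    A sketch of the dynamic-programming matching step only (classic dynamic time warping); the actual system aligns Kohonen-map output sequences and fuses three features, which is not reproduced here.

```python
# Classic dynamic time warping (DTW) distance between two feature sequences.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

template = np.sin(np.linspace(0, 3, 50))[:, None]       # stored word template
utterance = np.sin(np.linspace(0, 3, 65))[:, None]      # same word, spoken slower
other = np.cos(np.linspace(0, 3, 50))[:, None]          # a different word
print(dtw_distance(template, utterance), dtw_distance(template, other))
```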

  9. Surface electrocardiogram reconstruction from intracardiac electrograms using a dynamic time delay artificial neural network

    PubMed Central

    Porée, Fabienne; Kachenoura, Amar; Carrault, Guy; Dal Molin, Renzo; Mabo, Philippe; Hernandez, Alfredo I.

    2013-01-01

    The study proposes a method to facilitate the remote follow-up of patients suffering from cardiac pathologies and treated with an implantable device, by synthesizing a 12-lead surface ECG from the intracardiac electrograms (EGM) recorded by the device. Two methods (direct and indirect), based on dynamic time-delay artificial neural networks (TDNN), are proposed and compared with classical linear approaches. The direct method aims to estimate 12 different transfer functions between the EGM and each surface ECG signal. The indirect method is based on a preliminary orthogonalization phase of the available EGM and ECG signals, and the application of the TDNN between these orthogonalized signals, using only three transfer functions. These methods are evaluated on a dataset acquired from 15 patients. Correlation coefficients calculated between the synthesized and the real ECG show that the proposed TDNN methods represent an efficient way to synthesize the 12-lead ECG from two or four EGM signals and perform better than the linear ones. We also evaluate the results as a function of the EGM configuration. Results are also supported by the comparison of extracted features and a qualitative analysis performed by a cardiologist. PMID:23086502
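
    The direct and indirect TDNN architectures of the study are not reproduced here. The sketch below only illustrates the general time-delay idea, regressing a surface signal on current and delayed samples of intracardiac channels, using synthetic stand-in signals and an off-the-shelf network; every name, dimension and delay count is a placeholder.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def lagged_matrix(signals, n_delays):
    """Stack current and delayed samples of each input channel column-wise."""
    n, n_ch = signals.shape
    cols = [np.roll(signals[:, c], d) for c in range(n_ch) for d in range(n_delays)]
    X = np.column_stack(cols)
    return X[n_delays:]          # drop rows contaminated by the roll wrap-around

# Synthetic stand-in data: 2 EGM channels, 1 surface ECG lead, 5000 samples
rng = np.random.default_rng(0)
egm = rng.standard_normal((5000, 2))
ecg = 0.6 * egm[:, 0] - 0.3 * np.roll(egm[:, 1], 3) + 0.05 * rng.standard_normal(5000)

n_delays = 8
X = lagged_matrix(egm, n_delays)
y = ecg[n_delays:]

# A small nonlinear FIR-style model: delayed inputs feeding one hidden layer
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
model.fit(X[:4000], y[:4000])
pred = model.predict(X[4000:])
print(np.corrcoef(pred, y[4000:])[0, 1])   # correlation on held-out samples
```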

  10. Switching dynamics of single and coupled VO2-based oscillators as elements of neural networks

    NASA Astrophysics Data System (ADS)

    Velichko, Andrey; Belyaev, Maksim; Putrolaynen, Vadim; Pergament, Alexander; Perminov, Valentin

    2017-01-01

    In the present paper, we report on the switching dynamics of both single and coupled VO2-based oscillators, with resistive and capacitive coupling, and explore the capability of their application in oscillatory neural networks. Based on these results, we further select an adequate SPICE model to describe the modes of operation of coupled oscillator circuits. Physical mechanisms influencing the time of forward and reverse electrical switching, that determine the applicability limits of the proposed model, are identified. For the resistive coupling, it is shown that synchronization takes place at a certain value of the coupling resistance, though it is unstable and a synchronization failure occurs periodically. For the capacitive coupling, two synchronization modes, with weak and strong coupling, are found. The transition between these modes is accompanied by chaotic oscillations. A decrease in the width of the spectrum harmonics in the weak-coupling mode, and its increase in the strong-coupling one, is detected. The dependences of frequencies and phase differences of the coupled oscillatory circuits on the coupling capacitance are found. Examples of operation of coupled VO2 oscillators as a central pattern generator are demonstrated.

  11. Principal dynamic mode analysis of neural mass model for the identification of epileptic states.

    PubMed

    Cao, Yuzhen; Jin, Liu; Su, Fei; Wang, Jiang; Deng, Bin

    2016-11-01

    The detection of epileptic seizures in electroencephalography (EEG) signals is significant for the diagnosis and treatment of epilepsy. In this paper, in order to obtain characteristics of various epileptiform EEGs that may differentiate different states of epilepsy, the concept of Principal Dynamic Modes (PDMs) was incorporated into an autoregressive model framework. First, the neural mass model was used to simulate the required intracerebral EEG signals of various epileptiform activities. Then, the PDMs estimated from the nonlinear autoregressive Volterra models, as well as the corresponding Associated Nonlinear Functions (ANFs), were used for the modeling of epileptic EEGs. The efficient PDM modeling approach provided a physiological interpretation of the system. Results revealed that the ANFs of the 1st and 2nd PDMs for the auto-regressive input exhibited evident differences among different states of epilepsy, where the ANFs of the sustained spiking activity encountered at seizure onset or during a seizure were the most clearly differentiated from those of the normal state. Therefore, the ANFs may serve as characteristic features for the classification of normal and seizure states in the clinical detection of seizures and thus provide assistance for the diagnosis of epilepsy.

  12. Dynamics of Dual Prism Adaptation: Relating Novel Experimental Results to a Minimalistic Neural Model

    PubMed Central

    Arévalo, Orlando; Bornschlegl, Mona A.; Eberhardt, Sven; Ernst, Udo; Pawelzik, Klaus; Fahle, Manfred

    2013-01-01

    In everyday life, humans interact with a dynamic environment often requiring rapid adaptation of visual perception and motor control. In particular, new visuo–motor mappings must be learned while old skills have to be kept, such that after adaptation, subjects may be able to quickly change between two different modes of generating movements (‘dual–adaptation’). A fundamental question is how the adaptation schedule determines the acquisition speed of new skills. Given a fixed number of movements in two different environments, will dual–adaptation be faster if switches (‘phase changes’) between the environments occur more frequently? We investigated the dynamics of dual–adaptation under different training schedules in a virtual pointing experiment. Surprisingly, we found that acquisition speed of dual visuo–motor mappings in a pointing task is largely independent of the number of phase changes. Next, we studied the neuronal mechanisms underlying this result and other key phenomena of dual–adaptation by relating model simulations to experimental data. We propose a simple and yet biologically plausible neural model consisting of a spatial mapping from an input layer to a pointing angle which is subjected to a global gain modulation. Adaptation is performed by reinforcement learning on the model parameters. Despite its simplicity, the model provides a unifying account for a broad range of experimental data: It quantitatively reproduced the learning rates in dual–adaptation experiments for both direct effect, i.e. adaptation to prisms, and aftereffect, i.e. behavior after removal of prisms, and their independence on the number of phase changes. Several other phenomena, e.g. initial pointing errors that are far smaller than the induced optical shift, were also captured. Moreover, the underlying mechanisms, a local adaptation of a spatial mapping and a global adaptation of a gain factor, explained asymmetric spatial transfer and generalization of

  13. Physiological modules for generating discrete and rhythmic movements: action identification by a dynamic recurrent neural network.

    PubMed

    Bengoetxea, Ana; Leurs, Françoise; Hoellinger, Thomas; Cebolla, Ana M; Dan, Bernard; McIntyre, Joseph; Cheron, Guy

    2014-01-01

    In this study we employed a dynamic recurrent neural network (DRNN) in a novel fashion to reveal characteristics of control modules underlying the generation of muscle activations when drawing figures with the outstretched arm. We asked healthy human subjects to perform four different figure-eight movements in each of two workspaces (frontal plane and sagittal plane). We then trained a DRNN to predict the movement of the wrist from information in the EMG signals from seven different muscles. We trained different instances of the same network on a single movement direction, on all four movement directions in a single movement plane, or on all eight possible movement patterns and looked at the ability of the DRNN to generalize and predict movements for trials that were not included in the training set. Within a single movement plane, a DRNN trained on one movement direction was not able to predict movements of the hand for trials in the other three directions, but a DRNN trained simultaneously on all four movement directions could generalize across movement directions within the same plane. Similarly, the DRNN was able to reproduce the kinematics of the hand for both movement planes, but only if it was trained on examples performed in each one. As we will discuss, these results indicate that there are important dynamical constraints on the mapping of EMG to hand movement that depend on both the time sequence of the movement and on the anatomical constraints of the musculoskeletal system. In a second step, we injected EMG signals constructed from different synergies derived by PCA in order to identify the mechanical significance of each of these components. From these results, one can surmise that discrete-rhythmic movements may be constructed from three different fundamental modules, one regulating the co-activation of all muscles over the time span of the movement and two others eliciting patterns of reciprocal activation operating in orthogonal directions.
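
    As a minimal sketch of the synergy-extraction step mentioned above (PCA applied to multi-muscle EMG envelopes, whose components can then be injected back into a trained network), the following decomposes synthetic envelopes from seven hypothetical muscles; the data, component count and mixing are all fabricated for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in for rectified, low-pass-filtered EMG envelopes:
# 7 muscles x 2000 time samples, built from 3 underlying activation patterns
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 2000)
patterns = np.vstack([np.abs(np.sin(t)), np.abs(np.cos(t)), np.abs(np.sin(2 * t))])
mixing = rng.random((7, 3))
emg = mixing @ patterns + 0.05 * rng.random((7, 2000))

# PCA across muscles: each component is a candidate synergy
# (a fixed weighting over the 7 muscles with its own time course)
pca = PCA(n_components=3)
scores = pca.fit_transform(emg.T)        # time courses of the components
weights = pca.components_                # muscle weightings of each component
print(pca.explained_variance_ratio_)

# EMG reconstructed from the three components could then be injected back
# into a trained network to probe the mechanical role of each synergy
emg_hat = (scores @ weights + pca.mean_).T
print(emg_hat.shape)                     # (7, 2000), same layout as the input
```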

  14. Learning from adaptive neural dynamic surface control of strict-feedback systems.

    PubMed

    Wang, Min; Wang, Cong

    2015-06-01

    Learning plays an essential role in autonomous control systems. However, how to achieve learning in the nonstationary environment for nonlinear systems is a challenging problem. In this paper, we present a learning method for a class of nth-order strict-feedback systems by adaptive dynamic surface control (DSC) technology, which achieves the human-like ability of learning by doing and doing with learned knowledge. To achieve the learning, this paper first proposes stable adaptive DSC with auxiliary first-order filters, which ensures the boundedness of all the signals in the closed-loop system and the convergence of tracking errors in a finite time. With the help of DSC, the derivative of the filter output variable is used as the neural network (NN) input instead of traditional intermediate variables. As a result, the proposed adaptive DSC method greatly reduces the dimension of NN inputs, especially for high-order systems. After the stable DSC design, we decompose the stable closed-loop system into a series of linear time-varying perturbed subsystems. Using a recursive design, the recurrent property of NN input variables is easily verified since the complexity is overcome using DSC. Subsequently, the partial persistent excitation condition of the radial basis function NN is satisfied. By combining a state transformation, accurate approximations of the closed-loop system dynamics are recursively achieved in a local region along recurrent orbits. Then, the learning control method using the learned knowledge is proposed to achieve closed-loop stability and improved control performance. Simulation studies are performed to demonstrate that the proposed scheme can not only reuse the learned knowledge to achieve better control performance, with a faster tracking convergence rate and a smaller tracking error, but also greatly alleviate the computational burden by reducing the number and complexity of NN input variables.
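
    The adaptive weight-update laws and the dynamic surface design itself are beyond a short sketch. The fragment below only illustrates the final ingredient named above, a radial basis function network storing knowledge of an unknown nonlinearity sampled along a recurrent orbit, and it uses a batch least-squares fit in place of the paper's online adaptive law; the target function, centers and widths are assumptions.

```python
import numpy as np

def rbf_features(x, centers, width):
    """Gaussian radial basis functions evaluated at scalar inputs x."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

# Approximate an unknown scalar nonlinearity f(x) from samples collected
# along a recurrent (periodic) trajectory, as a stand-in for the learned
# knowledge reused after the adaptive phase
rng = np.random.default_rng(0)
x_train = np.sin(np.linspace(0, 20 * np.pi, 2000))      # recurrent orbit
f = lambda x: x ** 2 + 0.5 * np.sin(3 * x)              # "unknown" dynamics
centers = np.linspace(-1.2, 1.2, 25)
Phi = rbf_features(x_train, centers, width=0.15)
w, *_ = np.linalg.lstsq(Phi, f(x_train), rcond=None)    # batch fit of weights

# Approximation error at a few points inside the visited region
x_test = np.linspace(-1, 1, 5)
print(np.round(rbf_features(x_test, centers, 0.15) @ w - f(x_test), 3))
```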

  15. Cosmological dynamics of a Dirac-Born-Infeld field

    SciTech Connect

    Copeland, Edmund J.; Mizuno, Shuntaro; Shaeri, Maryam

    2010-06-15

    We analyze the dynamics of a Dirac-Born-Infeld (DBI) field in a cosmological setup which includes a perfect fluid. Introducing convenient dynamical variables, we show that the evolution equations form an autonomous system when the potential and the brane tension of the DBI field are arbitrary power law or exponential functions of the DBI field. In particular we find scaling solutions can exist when powers of the field in the potential and warp factor satisfy specific relations. A new class of fixed-point solutions are obtained corresponding to points which initially appear singular in the evolution equations, but on closer inspection are actually well defined. In all cases, we perform a phase-space analysis and obtain the late-time attractor structure of the system. Of particular note when considering cosmological perturbations in DBI inflation is a fixed-point solution where the Lorentz factor is a finite large constant and the equation of state parameter of the DBI field is w=-1. Since in this case the speed of sound c_s becomes constant, the solution can be thought to serve as a good background to perturb about.

  16. Dynamics of a scalar field in Robertson-Walker spacetimes

    NASA Astrophysics Data System (ADS)

    Copeland, Edmund J.; Mizuno, Shuntaro; Shaeri, Maryam

    2009-05-01

    We analyze the dynamics of a single scalar field in Friedmann-Robertson-Walker universes with spatial curvature. We obtain the fixed point solutions which are shown to be late time attractors. In particular, we determine the corresponding scalar field potentials which correspond to these stable solutions. The analysis is quite general and incorporates expanding and contracting universes with both positive and negative scalar potentials. We demonstrate that the known power law, exponential, and de Sitter solutions are certain limits of our general set of solutions.

  17. A sensor fusion field experiment in forest ecosystem dynamics

    NASA Technical Reports Server (NTRS)

    Smith, James A.; Ranson, K. Jon; Williams, Darrel L.; Levine, Elissa R.; Goltz, Stewart M.

    1990-01-01

    The background of the Forest Ecosystem Dynamics field campaign is presented, a progress report on the analysis of the collected data and related modeling activities is provided, and plans for future experiments at different points in the phenological cycle are outlined. The ecological overview of the study site is presented, and attention is focused on forest stands, needles, and atmospheric measurements. Sensor deployment and thermal and microwave observations are discussed, along with two examples of the optical radiation measurements obtained during the experiment in support of radiative transfer modeling. Future activities pertaining to an archival system, synthetic aperture radar, carbon acquisition modeling, and upcoming field experiments are considered.

  18. Post-Newtonian celestial dynamics in cosmology: Field equations

    NASA Astrophysics Data System (ADS)

    Kopeikin, Sergei M.; Petrov, Alexander N.

    2013-02-01

    Post-Newtonian celestial dynamics is a relativistic theory of motion of massive bodies and test particles under the influence of relatively weak gravitational forces. The standard approach for development of this theory relies upon the key concept of the isolated astronomical system supplemented by the assumption that the background spacetime is flat. The standard post-Newtonian theory of motion was instrumental in the explanation of the existing experimental data on binary pulsars, satellite, and lunar laser ranging, and in building precise ephemerides of planets in the Solar System. Recent studies of the formation of large-scale structures in our Universe indicate that the standard post-Newtonian mechanics fails to describe more subtle dynamical effects in motion of the bodies comprising the astronomical systems of larger size—galaxies and clusters of galaxies—where the Riemann curvature of the expanding Friedmann-Lemaître-Robertson-Walker universe interacts with the local gravitational field of the astronomical system and, as such, cannot be ignored. The present paper outlines theoretical principles of the post-Newtonian mechanics in the expanding Universe. It is based upon the gauge-invariant theory of the Lagrangian perturbations of cosmological manifold caused by an isolated astronomical N-body system (the Solar System, a binary star, a galaxy, and a cluster of galaxies). We postulate that the geometric properties of the background manifold are described by a homogeneous and isotropic Friedmann-Lemaître-Robertson-Walker metric governed by two primary components—the dark matter and the dark energy. The dark matter is treated as an ideal fluid with the Lagrangian taken in the form of pressure along with the scalar Clebsch potential as a dynamic variable. The dark energy is associated with a single scalar field with a potential which is held unspecified as long as the theory permits. Both the Lagrangians of the dark matter and the scalar field are

  19. Dissociation dynamics of diatomic molecules in intense fields

    NASA Astrophysics Data System (ADS)

    Magrakvelidze, Maia

    We study the dynamics of diatomic molecules (dimers) in intense IR and XUV laser fields theoretically and compare the results with measured data in collaboration with different experimental groups worldwide. The first three chapters of the thesis cover the introduction and the background on solving the time-independent and time-dependent Schrödinger equations. The numerical results in this thesis are presented in four chapters, three of which are focused on diatomic molecules in IR fields. The last one concentrates on diatomic molecules in XUV pulses. The study of nuclear dynamics of H2 or D2 molecules in IR pulses is given in Chapter 4. First, we investigate the optimal laser parameters for observing field-induced bond softening and bond hardening in D2+. Next, the nuclear dynamics of H2+ molecular ions in intense laser fields are investigated by analyzing their fragment kinetic-energy release (KER) spectra as a function of the pump-probe delay τ. Lastly, the electron localization is studied for long circularly polarized laser pulses. Chapter 5 covers the dissociation dynamics of O2+ in an IR laser field. The fragment KER spectra are analyzed as a function of the pump-probe delay τ. Within the Born-Oppenheimer approximation, we calculate ab-initio adiabatic potential-energy curves and their electric dipole couplings, using the quantum chemistry code GAMESS. In Chapter 6, the dissociation dynamics of the noble gas dimer ions He2+, Ne2+, Ar2+, Kr2+, and Xe2+ is investigated in ultrashort pump and probe laser pulses of different wavelengths. We observe a striking "delay gap" in the pump-probe-delay-dependent KER spectrum only if the probe-pulse wavelength exceeds the pump-pulse wavelength. Comparing pump-probe-pulse-delay dependent KER spectra for different noble gas dimer cations, we quantitatively discuss quantum-mechanical versus classical aspects of the nuclear vibrational motion as a function of the nuclear mass. Chapter 7 focuses on diatomic molecules in XUV

  20. Dynamic boundary layer based neural network quasi-sliding mode control for soft touching down on asteroid

    NASA Astrophysics Data System (ADS)

    Liu, Xiaosong; Shan, Zebiao; Li, Yuanchun

    2017-04-01

    Pinpoint landing is a critical step in some asteroid exploration missions. This paper is concerned with the descent trajectory control for soft touching down on a small irregularly-shaped asteroid. A dynamic boundary layer based neural network quasi-sliding mode control law is proposed to track a desired descending path. The asteroid's gravitational acceleration acting on the spacecraft is described by the polyhedron method. Considering the presence of input constraints and unmodeled acceleration, the dynamic equation of relative motion is presented first. The desired descending path is planned using a cubic polynomial method, and a collision detection algorithm is designed. To perform trajectory tracking, a neural network sliding mode control law is given first, where the sliding mode control is used to ensure the convergence of system states. Two radial basis function neural networks (RBFNNs) are respectively used as an approximator for the unmodeled term and a compensator for the difference between the actual magnitude-constrained control input and the nominal control. To reduce the chattering induced by the traditional sliding mode control and guarantee the reachability of the system, a specific saturation function with a dynamic boundary layer is proposed to replace the sign function in the preceding control law. Through the Lyapunov approach, the reachability condition of the control system is given. The improved control law can guarantee that the system state moves within a gradually shrinking quasi-sliding mode band. Numerical simulation results demonstrate the effectiveness of the proposed control strategy.
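
    A minimal sketch of the chattering-reduction idea named above: replacing the sign function with a boundary-layer saturation function in a sliding-mode law. The toy first-order plant, the gains and the fixed (rather than dynamic) boundary layer below are illustrative assumptions, not the paper's spacecraft model.

```python
import numpy as np

def sat(s, phi):
    """Boundary-layer saturation: linear inside |s| <= phi, +/-1 outside."""
    return np.clip(s / phi, -1.0, 1.0)

# Toy comparison on a first-order plant x' = -k*sw(x) + d with a bounded
# disturbance d: the sign-based law chatters around the sliding surface x = 0,
# while the saturation-based law settles smoothly inside the boundary layer.
dt, k, steps = 0.01, 2.0, 4000
for sw, label in [(np.sign, "sign"), (lambda s: sat(s, 0.05), "sat ")]:
    x, traj = 1.0, []
    for step in range(steps):
        d = 0.3 * np.sin(0.05 * step)          # bounded unmodeled disturbance
        x += dt * (-k * sw(x) + d)
        traj.append(x)
    tail = np.array(traj[-1000:])
    flips = int(np.sum(np.diff(np.sign(-k * sw(tail))) != 0))
    print(f"{label}: mean |x| = {np.abs(tail).mean():.4f}, control sign flips = {flips}")
```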

  1. Hybrid bright-field and hologram imaging of cell dynamics

    PubMed Central

    Byeon, Hyeokjun; Lee, Jaehyun; Doh, Junsang; Lee, Sang Joon

    2016-01-01

    Volumetric observation is essential for understanding the details of complex biological phenomena. In this study, a bright-field microscope, which provides information on a specific 2D plane, and a holographic microscope, which provides information spread over 3D volumes, are integrated to acquire two complementary images simultaneously. The developed system was successfully applied to capture distinct T-cell adhesion dynamics on inflamed endothelial layers, including capture, rolling, crawling, transendothelial migration, and subendothelial migration. PMID:27640337

  2. Stochastic Mean-Field Dynamics For Nuclear Collisions

    SciTech Connect

    Ayik, Sakir

    2008-11-11

    We discuss a stochastic approach to improve description of nuclear dynamics beyond the mean-field approximation at low energies. For small amplitude fluctuations, this approach gives a result for the dispersion of a one-body observable that is identical to the result obtained previously through a variational approach. Furthermore, it incorporates one-body dissipation and fluctuation mechanisms in accordance with quantal fluctuation-dissipation relation.

  3. Hybrid bright-field and hologram imaging of cell dynamics

    NASA Astrophysics Data System (ADS)

    Byeon, Hyeokjun; Lee, Jaehyun; Doh, Junsang; Lee, Sang Joon

    2016-09-01

    Volumetric observation is essential for understanding the details of complex biological phenomena. In this study, a bright-field microscope, which provides information on a specific 2D plane, and a holographic microscope, which provides information spread over 3D volumes, are integrated to acquire two complementary images simultaneously. The developed system was successfully applied to capture distinct T-cell adhesion dynamics on inflamed endothelial layers, including capture, rolling, crawling, transendothelial migration, and subendothelial migration.

  4. Laser Velocimetry Measurements of Oscillating Airfoil Dynamic Stall Flow Field

    DTIC Science & Technology

    1991-06-01

    Laser velocimetry measurements of the oscillating airfoil dynamic stall flow field, by M. S. Chandrasekhara, Navy-NASA Joint Institute of Aeronautics and Fluid Mechanics. The measurements were carried out in a wind tunnel of the Fluid Mechanics Laboratory (FML) at NASA Ames Research Center (ARC), whose throat is always kept choked so that no disturbances can propagate upstream.

  5. Downscaling Transpiration from the Field to the Tree Scale using the Neural Network Approach

    NASA Astrophysics Data System (ADS)

    Hopmans, J. W.

    2015-12-01

    Estimating actual evapotranspiration (ETa) spatial variability in orchards is key when trying to quantify water (and associated nutrients) leaching, both with the mass balance and inverse modeling methods. ETa measurements, however, generally occur at larger scales (e.g. Eddy-covariance method) or have a limited quantitative accuracy. In this study we propose to establish a statistical relation between field ETa and field-averaged variables known to be closely related to it, such as stem water potential (WP), soil water storage (WS) and ETc. To that end, we use 4 years of soil and almond tree water status data to train artificial neural networks (ANNs) predicting field-scale ETa and downscale the relation to the individual tree scale. ANNs composed of only two neurons in a hidden layer (11 parameters in total) proved to be the most accurate (overall RMSE = 0.0246 mm/h, R2 = 0.944), seemingly because adding more neurons generated overfitting of noise in the training dataset. According to the optimized weights in the best ANNs, the first hidden neuron could be considered in charge of relaying the ETc information while the other one would deal with the water stress response to stem WP, soil WS, and ETc. As individual trees had specific signatures for combinations of these variables, variability was generated in their ETa responses. The relative canopy cover was the main source of variability of ETa while stem WP was the most influential factor for the ETa / ETc ratio. Trees on the drip-irrigated side of the orchard appeared to be less affected by low estimated soil WS in the root zone than on the fanjet micro-sprinklers side, possibly due to a combination of (i) more substantial root biomass increasing the plant hydraulic conductance, (ii) bias in the soil WS estimation due to soil moisture heterogeneity on the drip-side, and (iii) access to deeper water resources. Tree-scale ETa responses are in good agreement with soil-plant water relations reported in the literature, and
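
    A minimal sketch of a two-hidden-neuron regression network of the size quoted above, assuming three field-averaged inputs (ETc, stem WP, soil WS) and one ETa output, which is consistent with the stated 11-parameter count; the data below are synthetic and the scikit-learn settings are illustrative, not the study's training procedure.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for field-averaged predictors (columns: ETc, stem WP,
# soil WS) and field ETa as the target
rng = np.random.default_rng(0)
X = rng.random((500, 3))
y = 0.8 * X[:, 0] * (0.5 + 0.5 * X[:, 1]) + 0.02 * rng.standard_normal(500)

# Two hidden neurons, as in the study: with 3 inputs and 1 output this gives
# 3*2 + 2 (hidden weights + biases) + 2*1 + 1 (output weights + bias)
# = 11 free parameters in total
ann = MLPRegressor(hidden_layer_sizes=(2,), activation="tanh",
                   max_iter=5000, random_state=0)
ann.fit(X[:400], y[:400])
print("held-out R^2:", ann.score(X[400:], y[400:]))
print("n parameters:", sum(w.size for w in ann.coefs_) +
      sum(b.size for b in ann.intercepts_))
```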

  6. Dynamical mean-field theory for flat-band ferromagnetism

    NASA Astrophysics Data System (ADS)

    Nguyen, Hong-Son; Tran, Minh-Tien

    2016-09-01

    The magnetically ordered phase in the Hubbard model on the infinite-dimensional hyper-perovskite lattice is investigated within dynamical mean-field theory. It turns out for the infinite-dimensional hyper-perovskite lattice the self-consistent equations of dynamical mean-field theory are exactly solved, and this makes the Hubbard model exactly solvable. We find electron spins are aligned in the ferromagnetic or ferrimagnetic configuration at zero temperature and half filling of the edge-centered sites of the hyper-perovskite lattice. A ferromagnetic-ferrimagnetic phase transition driven by the energy level splitting is found and it occurs through a phase separation. The origin of ferromagnetism and ferrimagnetism arises from the band flatness and the virtual hybridization between macroscopically degenerate flat bands and dispersive ones. Based on the exact solution in the infinite-dimensional limit, a modified exact diagonalization as the impurity solver for dynamical mean-field theory on finite-dimensional perovskite lattices is also proposed and examined.

  7. Dynamics of a polyelectrolyte under a constant electric field

    NASA Astrophysics Data System (ADS)

    Park, Pyeong Jun

    2015-11-01

    We perform a molecular dynamics simulation of a polyelectrolyte in a viscous fluid under an external electric field to study the dynamics of gel-free electrophoresis. To incorporate the hydrodynamic effects, we employ a coarse-grained description of water by using multiparticle collision dynamics. We use a screened Coulomb interaction among the monomers and explicit monovalent counterions to model the electrostatic interactions in an ionic solution. The mobility of the polyelectrolyte µ is obtained as a function of the molecular weight N, the electric field strength E, and the Debye screening length of the solvent λ. The mobility is found to be independent of N for large N and to exhibit a maximum at a certain N for a large λ, both in agreement with experimental results. The dependence of µ on E is also examined and discussed by considering the effects of an electric field on counterion condensation. The dependence of µ on λ shows a discrepancy between our simulation and experiments, which implies that the added salts not only screen out the Coulomb interaction but also participate in the counterion condensation significantly.

  8. Inflationary dynamics of kinetically-coupled gauge fields

    SciTech Connect

    Ferreira, Ricardo Z.; Ganc, Jonathan E-mail: ganc@cp3.dias.sdu.dk

    2015-04-01

    We investigate the inflationary dynamics of two kinetically-coupled massless U(1) gauge fields with time-varying kinetic-term coefficients. Ensuring that the system does not have strongly coupled regimes shrinks the parameter space. Also, we further restrict ourselves to systems that can be quantized using the standard creation, annihilation operator algebra. This second constraint limits us to scenarios where the system can be diagonalized into the sum of two decoupled, massless, vector fields with a varying kinetic-term coefficient. Such a system might be interesting for magnetogenesis because of how the strong coupling problem generalizes. We explore this idea by assuming that one of the gauge fields is the Standard Model U(1) field and that the other dark gauge field has no particles charged under its gauge group. We consider whether it would be possible to transfer a magnetic field from the dark sector, generated perhaps before the coupling was turned on, to the visible sector. We also investigate whether the simple existence of the mixing provides more opportunities to generate magnetic fields. We find that neither possibility works efficiently, consistent with the well-known difficulties in inflationary magnetogenesis.

  9. Dynamical quark mass generation in a strong external magnetic field

    NASA Astrophysics Data System (ADS)

    Mueller, Niklas; Bonnet, Jacqueline A.; Fischer, Christian S.

    2014-05-01

    We investigate the effect of a strong magnetic field on dynamical chiral symmetry breaking in quenched and unquenched QCD. To this end we apply the Ritus formalism to the coupled set of (truncated) Dyson-Schwinger equations for the quark and gluon propagator under the presence of an external constant Abelian magnetic field. We work with an approximation that is trustworthy for large fields eH > Λ_QCD^2 but is not restricted to the lowest Landau level. We confirm the linear rise of the quark condensate with a large external field previously found in other studies and observe the transition to the asymptotic power law at extremely large fields. We furthermore quantify the validity of the lowest Landau level approximation and find substantial quantitative differences to the full calculation even at very large fields. We discuss unquenching effects in the strong field propagators, condensate and the magnetic polarization of the vacuum. We find a significant weakening of magnetic catalysis caused by the backreaction of quarks on the Yang-Mills sector. Our results support explanations of the inverse magnetic catalysis found in recent lattice studies due to unquenching effects.

  10. Multi-field open inflation model and multi-field dynamics in tunneling

    SciTech Connect

    Sugimura, Kazuyuki; Yamauchi, Daisuke; Sasaki, Misao E-mail: yamauchi@icrr.u-tokyo.ac.jp

    2012-01-01

    We consider a multi-field open inflation model, in which one of the fields dominates quantum tunneling from a false vacuum while the other field governs slow-roll inflation within the bubble nucleated from false vacuum decay. We call the former the tunneling field and the latter the inflaton field. In the limit of a negligible interaction between the two fields, the false vacuum decay is described by a Coleman-De Luccia instanton. Here we take into account the coupling between the two fields and construct explicitly a multi-field instanton for a simple quartic potential model. We also solve the evolution of the scalar fields within the bubble. We find our model realizes open inflation successfully. This is the first concrete, viable model of open inflation realized with a simple potential. We then study the effect of the multi-field dynamics on the false vacuum decay, specifically on the tunneling rate. We find the tunneling rate increases in general provided that the multi-field effect can be treated perturbatively.

  11. Low-Dimensional Models of “Neuro-Glio-Vascular Unit” for Describing Neural Dynamics under Normal and Energy-Starved Conditions

    PubMed Central

    Chhabria, Karishma; Chakravarthy, V. Srinivasa

    2016-01-01

    The motivation of developing simple minimal models for neuro-glio-vascular (NGV) system arises from a recent modeling study elucidating the bidirectional information flow within the NGV system having 89 dynamic equations (1). While this was one of the first attempts at formulating a comprehensive model for neuro-glio-vascular system, it poses severe restrictions in scaling up to network levels. On the contrary, low-dimensional models are convenient devices in simulating large networks that also provide an intuitive understanding of the complex interactions occurring within the NGV system. The key idea underlying the proposed models is to describe the glio-vascular system as a lumped system, which takes neural firing rate as input and returns an “energy” variable (analogous to ATP) as output. To this end, we present two models: biophysical neuro-energy (Model 1 with five variables), comprising K_ATP channel activity governed by neuronal ATP dynamics, and the dynamic threshold (Model 2 with three variables), depicting the dependence of neural firing threshold on the ATP dynamics. Both the models show different firing regimes, such as continuous spiking, phasic, and tonic bursting depending on the ATP production coefficient, ε_p, and external current. We then demonstrate that in a network comprising such energy-dependent neuron units, ε_p could modulate the local field potential (LFP) frequency and amplitude. Interestingly, low-frequency LFP dominates under low ε_p conditions, which is thought to be reminiscent of seizure-like activity observed in epilepsy. The proposed “neuron-energy” unit may be implemented in building models of NGV networks to simulate data obtained from multimodal neuroimaging systems, such as functional near infrared spectroscopy coupled to electroencephalogram and functional magnetic resonance imaging coupled to electroencephalogram. Such models could also provide a theoretical basis for devising optimal neurorehabilitation strategies

  12. The role of membrane dynamics in electrical and infrared neural stimulation

    NASA Astrophysics Data System (ADS)

    Moen, Erick K.; Beier, Hope T.; Ibey, Bennett L.; Armani, Andrea M.

    2016-03-01

    We recently developed a nonlinear optical imaging technique based on second harmonic generation (SHG) to identify membrane disruption events in live cells. This technique was used to detect nanoporation in the plasma membrane following nanosecond pulsed electric field (nsPEF) exposure. It has been hypothesized that similar poration events could be induced by the thermal gradients generated by infrared (IR) laser energy. Optical pulses are a highly desirable stimulus for the nervous system, as they are capable of inhibiting and producing action potentials in a highly localized but non-contact fashion. However, the underlying mechanisms involved with infrared neural stimulation (INS) are not well understood. The ability of our method to non-invasively measure membrane structure and transmembrane potential via two-photon fluorescence (TPF) makes it uniquely suited to neurological research. In this work, we leverage our technique to understand what role membrane structure plays during INS and contrast it with nsPEF stimulation. We begin by examining the effect of IR pulses on CHO-K1 cells before progressing to primary hippocampal neurons. The use of these two cell lines allows us to directly compare poration as a result of IR pulses to nsPEF exposure in both a neuron-derived cell line, and one likely lacking native channels sensitive to thermal stimuli.

  13. Neural Network Prediction of Failure of Damaged Composite Pressure Vessels from Strain Field Data Acquired by a Computer Vision Method

    NASA Technical Reports Server (NTRS)

    Russell, Samuel S.; Lansing, Matthew D.

    1997-01-01

    This effort used a novel method of acquiring strains called Sub-pixel Digital Video Image Correlation (SDVIC) on impact-damaged Kevlar/epoxy filament wound pressure vessels during a proof test. To predict the burst pressure, the hoop strain field distribution around the impact location from three vessels was used to train a neural network. The network was then tested on additional pressure vessels. Several variations on the network were tried. The best results were obtained using a single hidden layer. SDVIC is a full-field non-contact computer vision technique which provides in-plane deformation and strain data over a load differential. This method was used to determine hoop and axial displacements, hoop and axial linear strains, the in-plane shear strains and rotations in the regions surrounding impact sites in filament wound pressure vessels (FWPV) during proof loading by internal pressurization. The relationship between these deformation measurement values and the remaining life of the pressure vessels, however, requires a complex theoretical model or numerical simulation. Both of these techniques are time consuming and complicated. Previous results using neural network methods had been successful in predicting the burst pressure for graphite/epoxy pressure vessels based upon acoustic emission (AE) measurements in similar tests. The neural network associates the character of the AE amplitude distribution, which depends upon the extent of impact damage, with the burst pressure. Similarly, higher amounts of impact damage are theorized to cause a higher amount of strain concentration in the damage-affected zone at a given pressure and result in lower burst pressures. This relationship suggests that a neural network might be able to find an empirical relationship between the SDVIC strain field data and the burst pressure, analogous to the AE method, with greater speed and simplicity than theoretical or finite element modeling. The process of testing SDVIC

  14. Neural Architectures for Control

    NASA Technical Reports Server (NTRS)

    Peterson, James K.

    1991-01-01

    The cerebellar model articulated controller (CMAC) neural architectures are shown to be viable for the purposes of real-time learning and control. Software tools for the exploration of CMAC performance are developed for three hardware platforms, the Macintosh, the IBM PC, and the SUN workstation. All algorithm development was done using the C programming language. These software tools were then used to implement an adaptive critic neuro-control design that learns in real-time how to back up a trailer truck. The truck backer-upper experiment is a standard performance measure in the neural network literature, but previously the training of the controllers was done off-line. With the CMAC neural architectures, it was possible to train the neuro-controllers on-line in real-time on an MS-DOS PC 386. CMAC neural architectures are also used in conjunction with a hierarchical planning approach to find collision-free paths over 2-D analog-valued obstacle fields. The method constructs a coarse-resolution version of the original problem and then finds the corresponding coarse optimal path using multipass dynamic programming. CMAC artificial neural architectures are used to estimate the analog transition costs that dynamic programming requires. The CMAC architectures are trained in real-time for each obstacle field presented. The coarse optimal path is then used as a baseline for the construction of a fine-scale optimal path through the original obstacle array. These results are a very good indication of the potential power of the neural architectures in control design. In order to reach as wide an audience as possible, we have run a seminar on neuro-control that has met once per week since 20 May 1991. This seminar has thoroughly discussed the CMAC architecture, relevant portions of classical control, back propagation through time, and adaptive critic designs.
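
    A minimal 1-D illustration of the CMAC idea referenced above (overlapping tilings, a handful of active weights per input, and an error-driven update spread over the active cells); the tiling counts, learning rate and the toy target below are assumptions, not the report's implementation.

```python
import numpy as np

class CMAC:
    """Minimal 1-D CMAC over inputs in [0, 1].

    Each input activates one cell per tiling; the output is the sum of the
    active weights, and learning spreads the output error equally over them.
    """
    def __init__(self, n_tilings=8, n_cells=32, lr=0.1):
        self.n_tilings, self.n_cells, self.lr = n_tilings, n_cells, lr
        self.w = np.zeros((n_tilings, n_cells))

    def _active(self, x):
        # Each tiling is offset by a fraction of a cell width
        return [int(x * (self.n_cells - 1) + t / self.n_tilings) % self.n_cells
                for t in range(self.n_tilings)]

    def predict(self, x):
        return sum(self.w[t, c] for t, c in enumerate(self._active(x)))

    def train(self, x, target):
        cells = self._active(x)
        err = target - self.predict(x)
        for t, c in enumerate(cells):
            self.w[t, c] += self.lr * err / self.n_tilings

# Toy usage: learn a 1-D profile on-line, as a CMAC would learn transition
# costs or control corrections in real time
net = CMAC()
rng = np.random.default_rng(0)
for _ in range(5000):
    x = rng.random()
    net.train(x, np.sin(2 * np.pi * x))
print(round(net.predict(0.25), 3), round(net.predict(0.75), 3))  # roughly +1 and -1
```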

  15. A Novel Variable Field System for Field-Cycled Dynamic Nuclear Polarization Spectroscopy

    PubMed Central

    Shet, Keerthi; Caia, George L.; Kesselring, Eric; Samouilov, Alexandre; Petryakov, Sergey; Lurie, David J.; Zweier, Jay L.

    2014-01-01

    Dynamic nuclear polarization (DNP) is an NMR-based technique which enables detection and spectral characterization of endogenous and exogenous paramagnetic substances measured via transfer of polarization from the saturated unpaired electron spin system to the NMR active nuclei. A variable field system capable of performing DNP spectroscopy with NMR detection at any magnetic field in the range 0–0.38 T is described. The system is built around a clinical open-MRI system. To obtain EPR spectra via DNP, partial cancellation of the detection field B0,NMR is required to alter the evolution field B0,EPR at which the EPR excitation is achieved. The addition of resistive actively shielded field cancellation coils in the gap of the primary magnet provides this field offset in the range of 0–100 mT. A description of the primary magnet, cancellation coils, power supplies, interfacing hardware, RF electronics and console is included. Performance of the instrument has been evaluated by acquiring DNP spectra of phantoms with aqueous nitroxide solutions (TEMPOL) at three NMR detection fields of 97 G, 200 G and 587 G, corresponding to 413 kHz, 851.6 kHz and 2.5 MHz respectively, and a fixed EPR evolution field of 100 G corresponding to an irradiation frequency of 282.3 MHz. This variable field DNP system offers great flexibility for the performance of DNP spectroscopy with independent optimum choice of EPR excitation and NMR detection fields. PMID:20570197
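
    The field-to-frequency correspondences quoted above are consistent with the standard Larmor relations (which the record itself does not state); assuming a proton gyromagnetic ratio of about 4.26 kHz/G and an electron value of roughly 2.8 MHz/G, the quoted numbers follow to within rounding:

```latex
% Assumed Larmor relations connecting the quoted fields and frequencies
\nu_{\mathrm{NMR}} = \frac{\gamma_{\mathrm{H}}}{2\pi}\,B_{0,\mathrm{NMR}},
\qquad \frac{\gamma_{\mathrm{H}}}{2\pi} \approx 4.26\ \mathrm{kHz/G}
\;\Rightarrow\;
97\,\mathrm{G} \to 413\,\mathrm{kHz},\quad
200\,\mathrm{G} \to 852\,\mathrm{kHz},\quad
587\,\mathrm{G} \to 2.5\,\mathrm{MHz};
\qquad
\nu_{\mathrm{EPR}} \approx 2.8\ \mathrm{MHz/G} \times 100\,\mathrm{G} \approx 280\,\mathrm{MHz}.
```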

  16. Dynamical properties of random-field Ising model.

    PubMed

    Sinha, Suman; Mandal, Pradipta Kumar

    2013-02-01

    Extensive Monte Carlo simulations are performed on a two-dimensional random field Ising model. The purpose of the present work is to study the disorder-induced changes in the properties of disordered spin systems. The time evolution of the domain growth, the order parameter, and the spin-spin correlation functions are studied in the nonequilibrium regime. The dynamical evolution of the order parameter and the domain growth shows a power law scaling with disorder-dependent exponents. It is observed that for weak random fields, the two-dimensional random field Ising model possesses long-range order. Except for weak disorder, exchange interaction never wins over pinning interaction to establish long-range order in the system.
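
    A minimal Metropolis sketch of the two-dimensional random-field Ising model studied above, with quenched Gaussian fields; the lattice size, temperature, disorder strengths and sweep count are illustrative, far smaller than an extensive simulation, and only meant to show the disorder-dependent relaxation qualitatively.

```python
import numpy as np

def metropolis_rfim(L=32, T=1.5, h_sigma=1.0, sweeps=200, seed=0):
    """Metropolis dynamics for a 2-D random-field Ising model.

    H = -J * sum_<ij> s_i s_j - sum_i h_i s_i, with quenched Gaussian random
    fields h_i of width h_sigma; returns the magnetization trajectory, whose
    relaxation reflects the disorder-dependent domain growth.
    """
    rng = np.random.default_rng(seed)
    J = 1.0
    spins = rng.choice([-1, 1], size=(L, L))
    h = h_sigma * rng.standard_normal((L, L))
    mags = []
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.integers(L, size=2)
            nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
                  spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2 * spins[i, j] * (J * nb + h[i, j])   # cost of flipping s_ij
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[i, j] *= -1
        mags.append(spins.mean())
    return np.array(mags)

# Weak disorder typically relaxes toward larger |m|; strong disorder stays
# pinned near m = 0
print(abs(metropolis_rfim(h_sigma=0.5)[-1]), abs(metropolis_rfim(h_sigma=3.0)[-1]))
```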

  17. Dynamic-local-field approximation for the quantum solids

    NASA Technical Reports Server (NTRS)

    Etters, R. D.; Danilowicz, R. L.

    1974-01-01

    A local-molecular-field description for the ground-state properties of the quantum solids is presented. The dynamical behavior of atoms contributing to the local field, which acts on an arbitrary pair of test particles, is incorporated by decoupling the pair correlations between these field atoms. The energy, pressure, compressibility, single-particle-distribution function, and the rms atomic deviations about the equilibrium lattice sites are calculated for H2, He-3, and He-4 over the volume range from 5 to 24.5 cu cm/mole. The results are in close agreement with existing Monte Carlo calculations wherever comparisons are possible. At very high pressure, the results agree with simplified descriptions which depend on negligible overlap of the system wave function between neighboring lattice sites.

  18. Dynamical State Transition by Neuromodulation Due to Acetylcholine in Neural Network Model for Oscillatory Phenomena in Thalamus

    NASA Astrophysics Data System (ADS)

    Omori, Toshiaki; Horiguchi, Tsuyoshi

    2004-12-01

    We propose a two-layered neural network model for oscillatory phenomena in the thalamic system and investigate the effect of neuromodulation due to acetylcholine on these oscillatory phenomena by numerical simulations. The proposed model consists of a layer of thalamic reticular neurons and a layer of cholinergic neurons. We introduce dynamics for the acetylcholine concentration that depend on the state of the cholinergic neurons, and assume that the conductance of the thalamic reticular neurons is dynamically regulated by acetylcholine. From the results obtained by numerical simulations, we find that a dynamical transition between a bursting state and a resting state occurs successively in the layer of thalamic reticular neurons due to acetylcholine. Therefore, neuromodulation due to acetylcholine turns out to be important for the dynamical state transition in the thalamic system.

  19. Quantitative Live Imaging of Human Embryonic Stem Cell Derived Neural Rosettes Reveals Structure-Function Dynamics Coupled to Cortical Development.

    PubMed

    Ziv, Omer; Zaritsky, Assaf; Yaffe, Yakey; Mutukula, Naresh; Edri, Reuven; Elkabetz, Yechiel

    2015-10-01

    Neural stem cells (NSCs) are progenitor cells for brain development, where cellular spatial composition (cytoarchitecture) and dynamics are hypothesized to be linked to critical NSC capabilities. However, understanding cytoarchitectural dynamics of this process has been limited by the difficulty of quantitatively imaging brain development in vivo. Here, we study NSC dynamics within Neural Rosettes--highly organized multicellular structures derived from human pluripotent stem cells. Neural rosettes contain NSCs with strong epithelial polarity and are expected to perform apical-basal interkinetic nuclear migration (INM)--a hallmark of cortical radial glial cell development. We developed a quantitative live imaging framework to characterize INM dynamics within rosettes. We first show that the tendency of cells to follow the INM orientation--a phenomenon we refer to as radial organization--is associated with rosette size, presumably via mechanical constraints of the confining structure. Second, early forming rosettes, which are abundant with founder NSCs and correspond to the early proliferative developing cortex, show fast motions and enhanced radial organization. In contrast, later derived rosettes, which are characterized by reduced NSC capacity and elevated numbers of differentiated neurons, and thus correspond to the neurogenesis mode of the developing cortex, exhibit slower motions and decreased radial organization. Third, later derived rosettes are characterized by temporal instability in INM measures, in agreement with progressive loss in rosette integrity at later developmental stages. Finally, molecular perturbations of INM by inhibition of actin or non-muscle myosin-II (NMII) reduced INM measures. Our framework enables quantification of NSC cytoarchitectural dynamics and may have implications in functional molecular studies, drug screening, and iPS cell-based platforms for disease modeling.

  20. Characterization of voltage degradation in dynamic field gradient focusing

    PubMed Central

    Burke, Jeffrey M.; Ivory, Cornelius F.

    2010-01-01

    Dynamic field gradient focusing (DFGF) is an equilibrium gradient method that utilizes an electric field gradient to simultaneously separate and concentrate charged analytes based on their individual electrophoretic mobilities. This work describes the use of a 2-D nonlinear, numerical simulation to examine the impact of voltage loss from the electrodes to the separation channel, termed voltage degradation, and distortions in the electric field on the performance of DFGF. One of the design parameters that has a large impact on the degree of voltage degradation is the placement of the electrodes in relation to the separation channel. The simulation shows that a distance of about 3 mm from the electrodes to the separation channel gives the electric field profile with the least voltage degradation. The simulation was also used to describe the elution of focused protein peaks. The simulation shows that elution under a constant electric field gradient gives better performance than elution through shallowing of the electric field. Qualitative agreement between the numerical simulation and experimental results is shown. The simulation also illustrates that the presence of a defocusing region at the cathodic end of the separation channel causes peak dispersion during elution. The numerical model is then used to design a system that does not suffer from a defocusing region. Peaks eluted under this design experienced no band broadening in our simulations. Preliminary experimental results using the redesigned chamber are shown. PMID:18306183

  1. The Dynamics of Ultrasonically Levitated Drops in an Electric Field

    NASA Technical Reports Server (NTRS)

    Trinh, E. H.; Holt, R. G.; Thiessen, D. B.

    1996-01-01

    Ultrasonic and electrostatic levitation techniques have allowed the experimental investigation of the nonlinear oscillatory dynamics of free droplets with diameter between 0.1 and 0.4 cm. The measurement of the resonance frequencies of the first three normal modes of large amplitude shape oscillations in an electric field of varying magnitude has been carried out with and without surface charges for weakly conducting liquids in air. These oscillations of nonspherical levitated drops have been driven by either modulating the ultrasonic field or by using a time-varying electric field, and the free decay from the oscillatory state has been recorded. A decrease in the resonance frequency of the driven fundamental quadrupole mode has been measured for increasing oblate deformation in the absence of an electric field. Similarly, a decrease in this frequency has also been found for increasing DC electric field magnitude. A soft nonlinearity exists in the amplitude dependence of the resonant mode frequencies for freely decaying as well as ultrasonically and electrically driven uncharged drops. This decrease in resonance frequency is accentuated by the presence of free surface charge on the drop. Subharmonic resonance excitation has been observed for drops in a time-varying electric field, and hysteresis exists for resonant modes driven to large amplitude. Mode coupling from lower-order resonances to higher-order modes has been found to be very weak, even for fairly large amplitude shape oscillations. Most of these results are in general agreement with predictions from recent analytical and numerical investigations.

  2. Dynamic diagnostics of the error fields in tokamaks

    NASA Astrophysics Data System (ADS)

    Pustovitov, V. D.

    2007-07-01

    The error field diagnostics based on magnetic measurements outside the plasma is discussed. The analysed methods rely on measuring the plasma dynamic response to the finite-amplitude external magnetic perturbations, which are the error fields and the pre-programmed probing pulses. Such pulses can be created by the coils designed for static error field correction and for stabilization of the resistive wall modes, the technique developed and applied in several tokamaks, including DIII-D and JET. Here analysis is based on the theory predictions for the resonant field amplification (RFA). To achieve the desired level of the error field correction in tokamaks, the diagnostics must be sensitive to signals of several Gauss. Therefore, part of the measurements should be performed near the plasma stability boundary, where the RFA effect is stronger. While the proximity to the marginal stability is important, the absolute values of plasma parameters are not. This means that the necessary measurements can be done in the diagnostic discharges with parameters below the nominal operating regimes, with the stability boundary intentionally lowered. The estimates for ITER are presented. The discussed diagnostics can be tested in dedicated experiments in existing tokamaks. The diagnostics can be considered as an extension of the 'active MHD spectroscopy' used recently in the DIII-D tokamak and the EXTRAP T2R reversed field pinch.

  3. Ionization and dissociation dynamics of molecules in strong laser fields

    NASA Astrophysics Data System (ADS)

    Lai, Wei

    The fast advancement of ultrashort-pulsed high-intensity laser technology allows for generating an electric field equivalent to the Coulomb field inside an atom or a molecule (e.g., E_C = 5.14 × 10^9 V/cm at the 1s orbit radius a_0 = 0.0529 nm of the hydrogen atom, which corresponds to an intensity of 3.54 × 10^16 W/cm^2). Atoms and molecules exposed to such a field will easily be ionized, as the external field is strong enough to remove the electrons from the core. This regime is usually referred to as the "strong field" regime. Strong fields provide a new tool for studying the interaction of atoms and molecules with light in the nonlinear nonperturbative regime. During the past three decades, significant progress has been made in strong-field science. Today, most phenomena involving atoms in strong fields have been relatively well understood by the single-active-electron (SAE) approximation. However, the interpretation of these responses in molecules has encountered great difficulties. Unlike atoms, which only undergo excitation and ionization, molecules can undergo various dissociation channels accompanying excitation and ionization during the laser pulse interaction, which imparts further complexity to the study of molecules in strong fields. Previous studies have shown that molecules can behave significantly differently from rare gas atoms in phenomena as simple as single and double ionization. Molecular dissociation following ionization also presents challenges in strong fields compared to what we have learned in the weak-field regime. This dissertation focuses on experimental studies on ionization and dissociation of some commonly-seen small molecules in strong laser fields. Previous work of molecules in strong fields will be briefly reviewed, particularly on some open questions about multiple dissociation channels, nonsequential double ionization, enhanced ionization and molecular alignment. The identification of various molecular dissociation channels by recent experimental technical

  4. Molecular dynamics simulations of ice nucleation by electric fields.

    PubMed

    Yan, J Y; Patey, G N

    2012-07-05

    Molecular dynamics simulations are used to investigate heterogeneous ice nucleation in model systems where an electric field acts on water molecules within 10-20 Å of a surface. Two different water models (the six-site and TIP4P/Ice models) are considered, and in both cases, it is shown that a surface field can serve as a very effective ice nucleation catalyst in supercooled water. Ice with a ferroelectric cubic structure nucleates near the surface, and dipole disordered cubic ice grows outward from the surface layer. We examine the influences of temperature and two important field parameters, the field strength and distance from the surface over which it acts, on the ice nucleation process. For the six-site model, the highest temperature where we observe field-induced ice nucleation is 280 K, and for TIP4P/Ice 270 K (note that the estimated normal freezing points of the six-site and TIP4P/Ice models are ∼289 and ∼270 K, respectively). The minimum electric field strength required to nucleate ice depends a little on how far the field extends from the surface. If it extends 20 Å, then a field strength of 1.5 × 10^9 V/m is effective for both models. If the field extent is 10 Å, then stronger fields are required (2.5 × 10^9 V/m for TIP4P/Ice and 3.5 × 10^9 V/m for the six-site model). Our results demonstrate that fields of realistic strength, that act only over a narrow surface region, can effectively nucleate ice at temperatures not far below the freezing point. This further supports the possibility that local electric fields can be a significant factor influencing heterogeneous ice nucleation in physical situations. We would expect this to be especially relevant for ice nuclei with very rough surfaces where one would expect local fields of varying strength and direction.

  5. Neural network prediction of carbonate lithofacies from well logs, Big Bow and Sand Arroyo Creek fields, Southwest Kansas

    USGS Publications Warehouse

    Qi, L.; Carr, T.R.

    2006-01-01

    In the Hugoton Embayment of southwestern Kansas, St. Louis Limestone reservoirs have relatively low recovery efficiencies, attributed to the heterogeneous nature of the oolitic deposits. This study establishes quantitative relationships between digital well logs and core description data, and applies these relationships in a probabilistic sense to predict lithofacies in 90 uncored wells across the Big Bow and Sand Arroyo Creek fields. In 10 wells, a single hidden-layer neural network based on digital well logs and core-described lithofacies of the limestone depositional texture was used to train and establish a non-linear relationship between lithofacies assignments from detailed core descriptions and selected log curves. Neural network models were optimized by selecting six predictor variables and automated cross-validation of neural network parameters and then used to predict lithofacies on the whole data set of the 2023 half-foot intervals from the 10 cored wells with the selected network size of 35 and a damping parameter of 0.01. Predicted lithofacies compared to actual lithofacies display absolute accuracies of 70.37-90.82%. Incorporating adjoining (within-one) lithofacies improves accuracy slightly (93.72%). Digital logs from uncored wells were batch processed to predict lithofacies and probabilities related to each lithofacies at half-foot resolution corresponding to log units. The results were used to construct interpolated cross-sections and useful depositional patterns of St. Louis lithofacies were illustrated, e.g., the concentration of oolitic deposits (including lithofacies 5 and 6) along local highs and the relative dominance of quartz-rich carbonate grainstone (lithofacies 1) in the zones A and B of the St. Louis Limestone. Neural network techniques are applicable to other complex reservoirs, in which facies geometry and distribution are the key factors controlling heterogeneity and distribution of rock properties. Future work
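
    A rough sketch of the prediction step described above, reading the quoted "network size of 35" loosely as 35 hidden units and the "damping parameter of 0.01" loosely as an L2 weight penalty; the six predictor variables and lithofacies labels below are synthetic stand-ins, and the data split and settings are not those of the study.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in: six log-derived predictor variables per half-foot
# interval and one of a few lithofacies codes as the label
rng = np.random.default_rng(0)
X = rng.standard_normal((2023, 6))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int) + 2 * (X[:, 2] > 0.5).astype(int)

clf = MLPClassifier(hidden_layer_sizes=(35,), alpha=0.01,
                    max_iter=2000, random_state=0)
clf.fit(X[:1500], y[:1500])

# Absolute accuracy on held-out intervals, plus per-class probabilities of the
# kind that can be carried into uncored wells
print("accuracy:", clf.score(X[1500:], y[1500:]))
print("class probabilities, first held-out interval:",
      clf.predict_proba(X[1500:1501]).round(2))
```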

  6. A Neural Theory of Visual Attention: Bridging Cognition and Neurophysiology

    ERIC Educational Resources Information Center

    Bundesen, Claus; Habekost, Thomas; Kyllingsbaek, Soren

    2005-01-01

    A neural theory of visual attention (NTVA) is presented. NTVA is a neural interpretation of C. Bundesen's (1990) theory of visual attention (TVA). In NTVA, visual processing capacity is distributed across stimuli by dynamic remapping of receptive fields of cortical cells such that more processing resources (cells) are devoted to behaviorally…

  7. Event-related potential study of dynamic neural mechanisms of semantic organizational strategies in verbal learning.

    PubMed

    Blanchet, Sophie; Gagnon, Geneviève; Bastien, Célyne

    2007-09-19

    Neuroimaging and neuropsychological data indicate that the frontal regions are implicated in semantic organizational strategies in verbal learning. Whereas these approaches tend to adopt a localizationist view, we used event-related potentials (ERPs) to investigate the dynamic neural mechanisms involved in these strategies. We recorded ERPs using a 128-channel system in 12 young adults (23.75+/-3.02 years) during 3 encoding conditions that manipulated the level of semantic organization demands. In the Unrelated condition, the words to encode did not share any semantic attributes. For both the Spontaneous and Guided conditions, the words in each list were drawn from four semantic categories. In the Spontaneous condition, participants were not informed about the semantic relationship between items. In contrast, in the Guided condition, participants were instructed to improve their subsequent recall by mentally regrouping related items with the aid of category labels. Results indicated that the P200 amplitude increased with the greater organizational demand of semantic strategies. In contrast, the late positive component (LPC) amplitude was larger in both encoding conditions with semantically related words, regardless of instructions, as compared to the Unrelated condition. Finally, there was greater right frontal sustained activity in the Spontaneous condition than in the Unrelated condition. Thus, our data indicate that the P200 is sensitive to attentional processes that increase with the organizational semantic demand. The LPC indexes associative processes voluntarily involved in linking related items together. Finally, the right frontal region appears to play an important role in the self-initiation of semantic organizational strategies.