Spatiotemporal dynamics of continuum neural fields
NASA Astrophysics Data System (ADS)
Bressloff, Paul C.
2012-01-01
We survey recent analytical approaches to studying the spatiotemporal dynamics of continuum neural fields. Neural fields model the large-scale dynamics of spatially structured biological neural networks in terms of nonlinear integrodifferential equations whose associated integral kernels represent the spatial distribution of neuronal synaptic connections. They provide an important example of spatially extended excitable systems with nonlocal interactions and exhibit a wide range of spatially coherent dynamics, including traveling waves, oscillations, and Turing-like patterns.
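The class of equations this survey covers can be illustrated with a minimal numerical sketch of the classic Amari-type field, tau * du/dt = -u + integral w(x - x') f(u(x', t)) dx', discretized here on a periodic 1-D grid with a difference-of-Gaussians kernel and a sigmoidal firing rate. All parameter values below are illustrative, not taken from the survey.

```python
import numpy as np

# Amari-type neural field on a periodic 1-D grid:
#   tau * du/dt = -u + integral w(x - x') f(u(x', t)) dx'
# Kernel: difference of Gaussians (local excitation, broader inhibition).

def mexican_hat(x, a_e=1.0, s_e=1.0, a_i=0.5, s_i=2.0):
    return (a_e * np.exp(-x**2 / (2 * s_e**2))
            - a_i * np.exp(-x**2 / (2 * s_i**2)))

def simulate(n=256, length=40.0, dt=0.05, steps=400, tau=1.0,
             theta=0.3, beta=5.0):
    x = np.linspace(-length / 2, length / 2, n, endpoint=False)
    dx = x[1] - x[0]
    # ifftshift centers the kernel so that FFT multiplication is a true
    # circular convolution with w(x - x')
    w_hat = np.fft.fft(np.fft.ifftshift(mexican_hat(x)))
    u = 0.5 * np.exp(-x**2)                      # localized initial bump
    for _ in range(steps):
        rate = 1.0 / (1.0 + np.exp(-beta * (u - theta)))   # sigmoid f(u)
        conv = dx * np.real(np.fft.ifft(w_hat * np.fft.fft(rate)))
        u += dt / tau * (-u + conv)              # explicit Euler step
    return x, u

x, u = simulate()
```

The FFT-based convolution implements the nonlocal integral term in O(n log n) per step; varying the kernel balance or the threshold explores the pattern-forming regimes the survey discusses.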
Metastable dynamics in heterogeneous neural fields
Schwappach, Cordula; Hutt, Axel; beim Graben, Peter
2015-01-01
We present numerical simulations of metastable states in heterogeneous neural fields that are connected along heteroclinic orbits. Such trajectories are possible representations of transient neural activity as observed, for example, in the electroencephalogram. Based on previous theoretical findings on learning algorithms for neural fields, we directly construct synaptic weight kernels from Lotka-Volterra neural population dynamics without supervised training approaches. We deliver a MATLAB neural field toolbox validated by two examples of one- and two-dimensional neural fields. We demonstrate trial-to-trial variability and distributed representations in our simulations, which might therefore be regarded as a proof-of-concept for more advanced neural field models of metastable dynamics in neurophysiological data. PMID:26175671
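The Lotka-Volterra population dynamics on which such kernel constructions are based can be sketched directly: a three-population generalized Lotka-Volterra system with asymmetric ("winnerless") competition produces metastable, sequential dynamics in which trajectories linger near one saddle state before moving to the next. The parameter values below are illustrative and are not taken from the paper or its MATLAB toolbox.

```python
import numpy as np

# Generalized Lotka-Volterra dynamics with asymmetric competition:
#   x_i' = x_i * (sigma_i - sum_j rho_ij x_j)
# The cyclic competition matrix rho makes dominance pass from one
# population to the next along a heteroclinic-like sequence.

sigma = np.ones(3)
rho = np.array([[1.0, 0.5, 2.0],
                [2.0, 1.0, 0.5],
                [0.5, 2.0, 1.0]])

def simulate(steps=20000, dt=0.01, x0=(0.6, 0.3, 0.1)):
    x = np.array(x0, dtype=float)
    traj = np.empty((steps, 3))
    for t in range(steps):
        traj[t] = x
        x = x + dt * x * (sigma - rho @ x)
        x = np.maximum(x, 1e-12)      # keep populations nonnegative
    return traj

traj = simulate()
```

Plotting the three components of `traj` over time shows the characteristic alternation of dominance that makes such dynamics a candidate substrate for transient, metastable neural activity.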
Neural Field Dynamics with Heterogeneous Connection Topology
NASA Astrophysics Data System (ADS)
Qubbaj, Murad R.; Jirsa, Viktor K.
2007-06-01
Neural fields receive inputs from local and nonlocal sources. Notably, in a biologically realistic architecture the latter vary under spatial translations (heterogeneous), whereas the former do not (homogeneous). To understand the mutual effects of homogeneous and heterogeneous connectivity, we study the stability of the steady state activity of a neural field as a function of its connectivity and transmission speed. We show that myelination, a developmentally relevant change of the heterogeneous connectivity, always results in the stabilization of the steady state via oscillatory instabilities, independent of the local connectivity. Nonoscillatory instabilities are shown to be independent of any influences of time delay.
Fluctuation-response relation unifies dynamical behaviors in neural fields
NASA Astrophysics Data System (ADS)
Fung, C. C. Alan; Wong, K. Y. Michael; Mao, Hongzi; Wu, Si
2015-08-01
Anticipation is a strategy used by neural fields to compensate for transmission and processing delays during the tracking of dynamical information and can be achieved by slow, localized, inhibitory feedback mechanisms such as short-term synaptic depression, spike-frequency adaptation, or inhibitory feedback from other layers. Based on the translational symmetry of the mobile network states, we derive generic fluctuation-response relations, providing unified predictions that link their tracking behaviors in the presence of external stimuli to the intrinsic dynamics of the neural fields in their absence.
Conditions of activity bubble uniqueness in dynamic neural fields.
Mikhailova, Inna; Goerick, Christian
2005-02-01
Dynamic neural fields (DNFs) offer a rich spectrum of dynamic properties like hysteresis, spatiotemporal information integration, and coexistence of multiple attractors. These properties make DNFs more and more popular in implementations of sensorimotor loops for autonomous systems. Applications often imply that DNFs should have only one compact region of firing neurons (activity bubble), whereas the rest of the field should not fire (e.g., if the field represents motor commands). In this article we prove the conditions of activity bubble uniqueness in the case of locally symmetric input bubbles. The qualitative condition on inhomogeneous inputs used in earlier work on DNFs is transferred to a quantitative condition of a balance between the internal dynamics and the input. The mathematical analysis is carried out for the two-dimensional case with methods that can be extended to more than two dimensions. The article concludes with an example of how our theoretical results facilitate the practical use of DNFs. PMID:15685393
Dynamic neural fields as a step toward cognitive neuromorphic architectures.
Sandamirskaya, Yulia
2013-01-01
Dynamic Field Theory (DFT) is an established framework for modeling embodied cognition. In DFT, elementary cognitive functions such as memory formation, formation of grounded representations, attentional processes, decision making, adaptation, and learning emerge from neuronal dynamics. The basic computational element of this framework is a Dynamic Neural Field (DNF). Under constraints on the time-scale of the dynamics, the DNF is computationally equivalent to a soft winner-take-all (WTA) network, which is considered one of the basic computational units in neuronal processing. Recently, it has been shown how a WTA network may be implemented in neuromorphic hardware, such as an analog Very Large Scale Integration (VLSI) device. This paper leverages the relationship between DFT and soft WTA networks to systematically revise and integrate established DFT mechanisms that have previously been spread among different architectures. In addition, I identify some novel computational and architectural mechanisms of DFT which may be implemented in neuromorphic VLSI devices using WTA networks as an intermediate computational layer. These specific mechanisms include the stabilization of working memory, the coupling of sensory systems to motor dynamics, intentionality, and autonomous learning. I further demonstrate how all these elements may be integrated into a unified architecture to generate behavior and autonomous learning. PMID:24478620
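The computational equivalence invoked here can be illustrated with a minimal soft WTA rate network: units share a common inhibitory pool, and the steady state amplifies the contrast between inputs without silencing the losers entirely. The dynamics and parameters below are a generic sketch, not the specific DNF formulation used in the paper.

```python
import numpy as np

# Soft winner-take-all: rate units with weak self-excitation and a
# shared inhibitory pool,
#   tau * u_i' = -u_i + I_i + alpha * f(u_i) - beta * sum_j f(u_j),
# with f a rectified-linear rate. Parameters are illustrative.

def soft_wta(I, steps=2000, dt=0.01, tau=0.1, alpha=0.5, beta=0.6):
    u = np.zeros_like(I)
    for _ in range(steps):
        r = np.maximum(u, 0.0)
        u += dt / tau * (-u + I + alpha * r - beta * r.sum())
    return np.maximum(u, 0.0)

r = soft_wta(np.array([0.9, 1.0, 0.8]))
# "soft" selection: the input ordering is preserved, but the contrast
# between winner and losers is amplified relative to the inputs
```

Raising `alpha` toward the leak strength or increasing `beta` pushes the circuit toward hard selection, which is the regime DNFs exploit for decision making.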
Validating a model for detecting magnetic field intensity using dynamic neural fields.
Taylor, Brian K
2016-11-01
Several animals use properties of Earth's magnetic field as a part of their navigation toolkit to accomplish tasks ranging from local homing to continental migration. Studying these behaviors has led to the postulation of both a magnetite-based sense and a chemically based radical-pair mechanism. Several researchers have proposed models aimed at both understanding these mechanisms and offering insights into future physiological experiments. The present work mathematically implements a previously developed conceptual model for sensing and processing magnetite-based magnetosensory feedback by using dynamic neural fields, a computational neuroscience tool for modeling nervous system dynamics and processing. Results demonstrate the plausibility of the conceptual model's predictions. Specifically, a population of magnetoreceptors in which each individual can only sense directional information can encode magnetic intensity en masse. Multiple populations can encode both magnetic direction and intensity, two parameters that several animals use in their navigational toolkits. This work can be expanded to test other magnetoreceptor models. PMID:27521527
Neural field simulator: two-dimensional spatio-temporal dynamics involving finite transmission speed
Nichols, Eric J.; Hutt, Axel
2015-01-01
Neural Field models (NFM) play an important role in the understanding of neural population dynamics on a mesoscopic spatial and temporal scale. Their numerical simulation is an essential element in the analysis of their spatio-temporal dynamics. The simulation tool described in this work considers scalar spatially homogeneous neural fields taking into account a finite axonal transmission speed and synaptic temporal derivatives of first and second order. A text-based interface offers complete control of field parameters and several approaches are used to accelerate simulations. A graphical output utilizes video hardware acceleration to display running output with reduced computational hindrance compared to simulators that are exclusively software-based. Diverse applications of the tool demonstrate breather oscillations, static and dynamic Turing patterns and activity spreading with finite propagation speed. The simulator is open source to allow tailoring of the code, and this is presented with an extension use case. PMID:26539105
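The distinguishing feature of this simulator, finite axonal transmission speed, can be sketched in a few lines: each pairwise interaction reads the presynaptic rate from a history buffer at a distance-dependent lag. The toy implementation below uses an illustrative excitatory kernel and parameters, treats history before t = 0 as zero, and is not the simulator's own (accelerated) scheme.

```python
import numpy as np

# Neural field with finite axonal transmission speed c:
#   u_i' = -u_i + sum_j w_ij * f(u_j(t - d_ij / c)),
# where d_ij is the (periodic) distance between grid points. Delayed
# rates are read from a ring buffer of past activity.

def simulate(n=64, length=20.0, c=2.0, dt=0.05, steps=300):
    x = np.linspace(-length / 2, length / 2, n, endpoint=False)
    dx = x[1] - x[0]
    dist = np.abs(x[:, None] - x[None, :])
    dist = np.minimum(dist, length - dist)        # periodic distances
    w = np.exp(-dist) * dx                        # excitatory kernel
    lag = np.rint(dist / (c * dt)).astype(int)    # delay in time steps
    m = lag.max() + 1
    hist = np.zeros((m, n))                       # ring buffer of rates
    u = np.exp(-x**2)                             # localized initial bump
    for t in range(steps):
        hist[t % m] = np.tanh(np.maximum(u, 0.0))
        # delayed[i, j] = rate of unit j at time t - lag[i, j]
        delayed = hist[(t - lag) % m, np.arange(n)[None, :]]
        u += dt * (-u + (w * delayed).sum(axis=1))
    return x, u

x, u = simulate()
```

Because activity from distance d only arrives after d / c time units, the initial bump spreads outward at a finite speed rather than instantaneously, which is the phenomenon the simulator is built to display.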
The Dynamic Brain: From Spiking Neurons to Neural Masses and Cortical Fields
Deco, Gustavo; Jirsa, Viktor K.; Robinson, Peter A.; Breakspear, Michael; Friston, Karl
2008-01-01
The cortex is a complex system, characterized by its dynamics and architecture, which underlie many functions such as action, perception, learning, language, and cognition. Its structural architecture has been studied for more than a hundred years; however, its dynamics have been addressed much less thoroughly. In this paper, we review and integrate, in a unifying framework, a variety of computational approaches that have been used to characterize the dynamics of the cortex, as evidenced at different levels of measurement. Computational models at different space–time scales help us understand the fundamental mechanisms that underpin neural processes and relate these processes to neuroscience data. Modeling at the single neuron level is necessary because this is the level at which information is exchanged between the computing elements of the brain; the neurons. Mesoscopic models tell us how neural elements interact to yield emergent behavior at the level of microcolumns and cortical columns. Macroscopic models can inform us about whole brain dynamics and interactions between large-scale neural systems such as cortical regions, the thalamus, and brain stem. Each level of description relates uniquely to neuroscience data, from single-unit recordings, through local field potentials to functional magnetic resonance imaging (fMRI), electroencephalogram (EEG), and magnetoencephalogram (MEG). Models of the cortex can establish which types of large-scale neuronal networks can perform computations and characterize their emergent properties. Mean-field and related formulations of dynamics also play an essential and complementary role as forward models that can be inverted given empirical data. This makes dynamic models critical in integrating theory and experiments. We argue that elaborating principled and informed models is a prerequisite for grounding empirical neuroscience in a cogent theoretical framework, commensurate with the achievements in the physical sciences.
Mean-field theory of globally coupled integrate-and-fire neural oscillators with dynamic synapses.
Bressloff, P C
1999-08-01
We analyze the effects of synaptic depression or facilitation on the existence and stability of the splay or asynchronous state in a population of all-to-all, pulse-coupled neural oscillators. We use mean-field techniques to derive conditions for the local stability of the splay state and determine how stability depends on the degree of synaptic depression or facilitation. We also consider the effects of noise. Extensions of the mean-field results to finite networks are developed in terms of the nonlinear firing time map.
Dynamic recurrent neural networks: a dynamical analysis.
Draye, J S; Pavisic, D A; Cheron, G A; Libert, G A
1996-01-01
In this paper, we explore the dynamical features of a neural network model which presents two types of adaptive parameters: the classical weights between the units and the time constants associated with each artificial neuron. The purpose of this study is to provide a strong theoretical basis for modeling and simulating dynamic recurrent neural networks. In order to achieve this, we study the effect of the statistical distribution of the weights and of the time constants on the network dynamics and we make a statistical analysis of the neural transformation. We examine the network power spectra (to draw some conclusions about the frequency-domain behaviour of the network) and we compute the stability regions to explore the stability of the model. We show that the network is sensitive to the variations of the mean values of the weights and the time constants (because of the temporal aspects of the learned tasks). Nevertheless, our results highlight the improvements in the network dynamics due to the introduction of adaptive time constants and indicate that dynamic recurrent neural networks can bring powerful new features to the field of neural computing.
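The model class studied here, a continuous-time recurrent network with a distinct time constant per unit, can be sketched as follows. The weight and time-constant distributions are illustrative placeholders, not the ones analyzed in the paper.

```python
import numpy as np

# Continuous-time recurrent network with per-unit time constants:
#   T_i * dy_i/dt = -y_i + tanh(sum_j W_ij y_j + b_i)
# Heterogeneous T_i gives each unit its own intrinsic timescale,
# the second type of adaptive parameter discussed in the paper.

rng = np.random.default_rng(0)
n = 10
W = rng.normal(0.0, 1.0 / np.sqrt(n), (n, n))   # random recurrent weights
b = rng.normal(0.0, 0.1, n)                     # biases
T = rng.uniform(0.5, 3.0, n)                    # heterogeneous time constants

y = np.zeros(n)
dt = 0.05
for _ in range(2000):
    y += dt / T * (-y + np.tanh(W @ y + b))     # explicit Euler step
```

Because the per-unit division by `T` rescales each unit's relaxation rate independently, varying the distribution of `T` changes the network's frequency content, which is the effect the paper examines through power spectra.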
Neural field dynamics under variation of local and global connectivity and finite transmission speed
NASA Astrophysics Data System (ADS)
Qubbaj, Murad R.; Jirsa, Viktor K.
2009-12-01
Spatially continuous networks with heterogeneous connections are ubiquitous in biological systems, in particular neural systems. To understand the mutual effects of locally homogeneous and globally heterogeneous connectivity, we investigate the stability of the steady state activity of a neural field as a function of its connectivity. The variation of the connectivity is implemented through manipulation of a heterogeneous two-point connection embedded into the otherwise homogeneous connectivity matrix and by variation of the connectivity strength and transmission speed. Detailed examples including the Ginzburg-Landau equation and various other local architectures are discussed. Our analysis shows that developmental changes such as the myelination of the cortical large-scale fiber system generally result in the stabilization of steady state activity independent of the local connectivity. Non-oscillatory instabilities are shown to be independent of any influences of time delay.
Dynamics of neural cryptography.
Ruttor, Andreas; Kinzel, Wolfgang; Kanter, Ido
2007-05-01
Synchronization of neural networks has been used for public channel protocols in cryptography. In the case of tree parity machines the dynamics of both bidirectional synchronization and unidirectional learning is driven by attractive and repulsive stochastic forces. Thus it can be described well by a random walk model for the overlap between participating neural networks. For that purpose transition probabilities and scaling laws for the step sizes are derived analytically. Both these calculations as well as numerical simulations show that bidirectional interaction leads to full synchronization on average. In contrast, successful learning is only possible by means of fluctuations. Consequently, synchronization is much faster than learning, which is essential for the security of the neural key-exchange protocol. However, this qualitative difference between bidirectional and unidirectional interaction vanishes if tree parity machines with more than three hidden units are used, so that those neural networks are not suitable for neural cryptography. In addition, the effective number of keys which can be generated by the neural key-exchange protocol is calculated using the entropy of the weight distribution. As this quantity increases exponentially with the system size, brute-force attacks on neural cryptography can easily be made unfeasible.
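The tree parity machine protocol referred to above can be sketched directly: two machines receive the same public random inputs and apply a Hebbian update only when their public outputs agree, and the attractive steps dominate until the weight matrices coincide, forming the shared key. The parameters (K = 3, N = 100, L = 3) are typical values from the literature; the iteration cap is a safety bound added for this sketch.

```python
import numpy as np

# Tree parity machine: K hidden units, N inputs each, integer weights
# in [-L, L]. Output is the product of the hidden-unit signs.

K, N, L = 3, 100, 3
rng = np.random.default_rng(1)

def output(w, x):
    sigma = np.sign((w * x).sum(axis=1))
    sigma[sigma == 0] = -1                 # break ties deterministically
    return sigma, sigma.prod()

def update(w, x, sigma, tau):
    for k in range(K):
        if sigma[k] == tau:                # only agreeing hidden units move
            w[k] = np.clip(w[k] + tau * x[k], -L, L)

wA = rng.integers(-L, L + 1, (K, N))
wB = rng.integers(-L, L + 1, (K, N))
steps = 0
while steps < 100000 and not np.array_equal(wA, wB):
    x = rng.choice([-1, 1], (K, N))        # shared public input
    sA, tA = output(wA, x)
    sB, tB = output(wB, x)
    if tA == tB:                           # public outputs agree
        update(wA, x, sA, tA)
        update(wB, x, sB, tB)
    steps += 1
```

A passive attacker sees only the inputs and the two output bits; as the abstract notes, learning from that one-way information is driven by rare fluctuations and is therefore much slower than the bidirectional synchronization shown here.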
Rich spectrum of neural field dynamics in the presence of short-term synaptic depression.
Wang, He; Lam, Kin; Fung, C C Alan; Wong, K Y Michael; Wu, Si
2015-09-01
In continuous attractor neural networks (CANNs), spatially continuous information such as orientation, head direction, and spatial location is represented by Gaussian-like tuning curves that can be displaced continuously in the space of the preferred stimuli of the neurons. We investigate how short-term synaptic depression (STD) can reshape the intrinsic dynamics of the CANN model and its responses to a single static input. In particular, CANNs with STD can support various complex firing patterns and chaotic behaviors. These chaotic behaviors have the potential to encode various stimuli in the neuronal system. PMID:26465541
Perone, Sammy; Spencer, John P.
2013-01-01
Looking is a fundamental exploratory behavior by which infants acquire knowledge about the world. In theories of infant habituation, however, looking as an exploratory behavior has been deemphasized relative to the reliable nature with which looking indexes active cognitive processing. We present a new theory that connects looking to the dynamics of memory formation and formally implement this theory in a Dynamic Neural Field model that learns autonomously as it actively looks and looks away from a stimulus. We situate this model in a habituation task and illustrate the mechanisms by which looking, encoding, working memory formation, and long-term memory formation give rise to habituation across multiple stimulus and task contexts. We also illustrate how the act of looking and the temporal dynamics of learning affect each other. Finally, we test a new hypothesis about the sources of developmental differences in looking. PMID:23136815
Hou, Saing Paul; Haddad, Wassim M; Meskin, Nader; Bailey, James M
2015-12-01
With the advances in biochemistry, molecular biology, and neurochemistry there has been impressive progress in understanding the molecular properties of anesthetic agents. However, there has been little focus on how the molecular properties of anesthetic agents lead to the observed macroscopic property that defines the anesthetic state, that is, lack of responsiveness to noxious stimuli. In this paper, we use dynamical system theory to develop a mechanistic mean field model for neural activity to study the abrupt transition from consciousness to unconsciousness as the concentration of the anesthetic agent increases. The proposed synaptic drive firing-rate model predicts the conscious-unconscious transition as the applied anesthetic concentration increases, where excitatory neural activity is characterized by a Poincaré-Andronov-Hopf bifurcation with the awake state transitioning to a stable limit cycle and then subsequently to an asymptotically stable unconscious equilibrium state. Furthermore, we address the more general question of synchronization and partial state equipartitioning of neural activity without mean field assumptions. This is done by focusing on a postulated subset of inhibitory neurons that are not themselves connected to other inhibitory neurons. Finally, several numerical experiments are presented to illustrate the different aspects of the proposed theory. PMID:26438186
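The Poincaré-Andronov-Hopf bifurcation invoked in this model can be illustrated with its normal form, z' = (mu + i omega) z - |z|^2 z: below the bifurcation (mu < 0) trajectories decay to the origin, and above it (mu > 0) they settle onto a stable limit cycle of radius near sqrt(mu). This is the generic local picture, not the paper's specific synaptic drive firing-rate model.

```python
# Hopf normal form: z' = (mu + 1j*omega) * z - |z|^2 * z
# Integrated with an explicit Euler step; values are illustrative.

def final_radius(mu, omega=2.0, dt=0.01, steps=20000, z0=0.1 + 0.0j):
    z = z0
    for _ in range(steps):
        z += dt * ((mu + 1j * omega) * z - abs(z) ** 2 * z)
    return abs(z)

r_sub = final_radius(-0.5)   # below the bifurcation: decays to the origin
r_sup = final_radius(0.25)   # above: settles near a cycle of radius ~0.5
```

Sweeping `mu` through zero reproduces the qualitative transition the paper describes: a stable equilibrium giving way to sustained oscillation as a control parameter (there, anesthetic concentration) crosses a critical value.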
Dynamic interactions in neural networks
Arbib, M. A.; Amari, S.
1989-01-01
The study of neural networks is enjoying a great renaissance, both in computational neuroscience, the development of information processing models of living brains, and in neural computing, the use of neurally inspired concepts in the construction of intelligent machines. This volume presents models and data on the dynamic interactions occurring in the brain, and exhibits the dynamic interactions between research in computational neuroscience and in neural computing. The authors present current research, future trends and open problems.
BINOCULAR RIVALRY AND NEURAL DYNAMICS.
Blake, Randolph; Lee, Sang-Hun; Heeger, David
2008-06-01
The Gestalt psychologists were fascinated with dynamics evident in visual perception, and they theorized that these dynamics were attributable to ever-changing electrical potentials within topographically organized brain fields. Dynamic field theory, as it was called, was subsequently discredited on grounds that the brain does not comprise a unitary electrical field but, instead, a richly interconnected network of discrete computing elements. Still, this modern conceptualization of brain function faces the challenge of explaining the fact that perception is dynamic in space and in time. To pursue the question of visual perception and cortical dynamics, we have focused on spatio-temporal transitions in dominance during binocular rivalry. We have developed techniques for initiating and measuring these transitions psychophysically and for measuring their neural concomitants using functional magnetic resonance imaging (fMRI). Our findings disclose the existence of waves of cortical activity that travel across the retinotopic maps that define primary and secondary visual areas within occipital cortex, in correspondence with the subjective perception of spreading waves of dominance during binocular rivalry. This paper reviews the results from those studies.
Creative-Dynamics Approach To Neural Intelligence
NASA Technical Reports Server (NTRS)
Zak, Michail A.
1992-01-01
Paper discusses approach to mathematical modeling of artificial neural networks exhibiting complicated behaviors reminiscent of creativity and intelligence of biological neural networks. Neural network treated as non-Lipschitzian dynamical system - as described in "Non-Lipschitzian Dynamics For Modeling Neural Networks" (NPO-17814). System serves as tool for modeling of temporal-pattern memories and recognition of complicated spatial patterns.
Dynamical systems, attractors, and neural circuits.
Miller, Paul
2016-01-01
Biology is the study of dynamical systems. Yet most of us working in biology have limited pedagogical training in the theory of dynamical systems, an unfortunate historical fact that can be remedied for future generations of life scientists. In my particular field of systems neuroscience, neural circuits are rife with nonlinearities at all levels of description, rendering simple methodologies and our own intuition unreliable. Therefore, our ideas are likely to be wrong unless informed by good models. These models should be based on the mathematical theories of dynamical systems since functioning neurons are dynamic: they change their membrane potential and firing rates with time. Thus, selecting the appropriate type of dynamical system upon which to base a model is an important first step in the modeling process. This step all too easily goes awry, in part because there are many frameworks to choose from, in part because the sparsely sampled data can be consistent with a variety of dynamical processes, and in part because each modeler has a preferred modeling approach that is difficult to move away from. This brief review summarizes some of the main dynamical paradigms that can arise in neural circuits, with comments on what they can achieve computationally and what signatures might reveal their presence within empirical data. I provide examples of different dynamical systems using simple circuits of two or three cells, emphasizing that any one connectivity pattern is compatible with multiple, diverse functions. PMID:27408709
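The point that a single connectivity pattern supports multiple dynamical regimes is easiest to see in the smallest example the review mentions: two mutually inhibitory rate units. With the illustrative parameters below the circuit is bistable, and the initial condition selects which attractor (which winning unit) it settles into.

```python
import numpy as np

# Two mutually inhibitory rate units:
#   tau * r_i' = -r_i + f(I - w * r_j),  f = rectified linear.
# With cross-inhibition w > 1 the symmetric state is unstable and the
# circuit is bistable. Parameters are illustrative.

def settle(r0, I=1.0, w=2.0, tau=0.1, dt=0.01, steps=5000):
    r = np.array(r0, dtype=float)
    for _ in range(steps):
        drive = np.maximum(I - w * r[::-1], 0.0)   # cross-inhibition
        r += dt / tau * (-r + drive)
    return r

a = settle([0.8, 0.1])   # unit 0 starts ahead and wins
b = settle([0.1, 0.8])   # same circuit, same inputs: unit 1 wins
```

The same two-cell wiring can instead produce oscillation (with adaptation) or a line of slow states (with tuned gain), which is exactly the multiplicity of functions per connectivity pattern the review emphasizes.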
On conductance-based neural field models
Pinotsis, Dimitris A.; Leite, Marco; Friston, Karl J.
2013-01-01
This technical note introduces a conductance-based neural field model that combines biologically realistic synaptic dynamics—based on transmembrane currents—with neural field equations, describing the propagation of spikes over the cortical surface. This model allows for fairly realistic inter- and intra-laminar intrinsic connections that underlie spatiotemporal neuronal dynamics. We focus on the response functions of expected neuronal states (such as depolarization) that generate observed electrophysiological signals (like LFP recordings and EEG). These response functions characterize the model's transfer functions and implicit spectral responses to (uncorrelated) input. Our main finding is that both the evoked responses (impulse response functions) and induced responses (transfer functions) show qualitative differences depending upon whether one uses a neural mass or field model. Furthermore, there are differences between the equivalent convolution and conductance models. Overall, all models reproduce a characteristic increase in frequency when inhibition is increased by increasing the rate constants of inhibitory populations. However, convolution and conductance-based models showed qualitatively different changes in power, with convolution models showing decreases with increasing inhibition, while conductance models show the opposite effect. These differences suggest that conductance based field models may be important in empirical studies of cortical gain control or pharmacological manipulations. PMID:24273508
Model Of Neural Network With Creative Dynamics
NASA Technical Reports Server (NTRS)
Zak, Michail; Barhen, Jacob
1993-01-01
Paper presents analysis of mathematical model of one-neuron/one-synapse neural network featuring coupled activation and learning dynamics and parametrical periodic excitation. Demonstrates self-programming, partly random behavior of suitably designed neural network; believed to be related to spontaneity and creativity of biological neural networks.
Dynamic alignment models for neural coding.
Kollmorgen, Sepp; Hahnloser, Richard H R
2014-03-01
Recently, there have been remarkable advances in modeling the relationships between the sensory environment, neuronal responses, and behavior. However, most models cannot encompass variable stimulus-response relationships such as varying response latencies and state or context dependence of the neural code. Here, we consider response modeling as a dynamic alignment problem and model stimulus and response jointly by a mixed pair hidden Markov model (MPH). In MPHs, multiple stimulus-response relationships (e.g., receptive fields) are represented by different states or groups of states in a Markov chain. Each stimulus-response relationship features temporal flexibility, allowing modeling of variable response latencies, including noisy ones. We derive algorithms for learning of MPH parameters and for inference of spike response probabilities. We show that some linear-nonlinear Poisson cascade (LNP) models are a special case of MPHs. We demonstrate the efficiency and usefulness of MPHs in simulations of both jittered and switching spike responses to white noise and natural stimuli. Furthermore, we apply MPHs to extracellular single and multi-unit data recorded in cortical brain areas of singing birds to showcase a novel method for estimating response lag distributions. MPHs allow simultaneous estimation of receptive fields, latency statistics, and hidden state dynamics and so can help to uncover complex stimulus response relationships that are subject to variable timing and involve diverse neural codes. PMID:24625448
The Complexity of Dynamics in Small Neural Circuits
Fasoli, Diego; Cattani, Anna; Panzeri, Stefano
2016-01-01
Mean-field approximations are a powerful tool for studying large neural networks. However, they do not describe well the behavior of networks composed of a small number of neurons. In this case, major differences between the mean-field approximation and the real behavior of the network can arise. Yet, many interesting problems in neuroscience involve the study of mesoscopic networks composed of a few tens of neurons. Nonetheless, mathematical methods that correctly describe networks of small size are still rare, and this prevents us from making progress in understanding neural dynamics at these intermediate scales. Here we develop a novel systematic analysis of the dynamics of arbitrarily small networks composed of homogeneous populations of excitatory and inhibitory firing-rate neurons. We study the local bifurcations of their neural activity with an approach that is largely analytically tractable, and we numerically determine the global bifurcations. We find that for strong inhibition these networks give rise to very complex dynamics, caused by the formation of multiple branching solutions of the neural dynamics equations that emerge through spontaneous symmetry-breaking. This qualitative change of the neural dynamics is a finite-size effect of the network that reveals qualitative and previously unexplored differences between mesoscopic cortical circuits and their mean-field approximation. The most important consequence of spontaneous symmetry-breaking is the ability of mesoscopic networks to regulate their degree of functional heterogeneity, which is thought to help reduce the detrimental effect of noise correlations on cortical information processing. PMID:27494737
Neural network with formed dynamics of activity
Dunin-Barkovskii, V.L.; Osovets, N.B.
1995-03-01
The problem of developing a neural network with a given pattern of the state sequence is considered. A neural network structure and an algorithm for forming its bond matrix, which lead to an approximate but robust solution of the problem, are proposed and discussed. Limiting characteristics of the serviceability of the proposed structure are studied. Various methods of visualizing dynamic processes in a neural network are compared. Possible applications of the results obtained for interpretation of neurophysiological data and in neuroinformatics systems are discussed.
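One standard construction for forming a bond (weight) matrix that drives a network through a prescribed state sequence is the asymmetric Hebbian rule; the sketch below uses it purely as an illustration of the problem setup, not as the authors' algorithm or their robustness analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

def sequence_weights(patterns):
    # Asymmetric Hebbian rule W = (1/N) sum_mu xi[mu+1] xi[mu]^T: a synchronous
    # update then maps each stored state (approximately) onto its successor.
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for mu in range(len(patterns) - 1):
        w += np.outer(patterns[mu + 1], patterns[mu]) / n
    return w

def step(w, s):
    # Threshold update; ties resolved to +1 to avoid zero states.
    return np.where(w @ s >= 0, 1.0, -1.0)

n, p = 200, 4
xi = rng.choice([-1.0, 1.0], size=(p, n))   # random +/-1 state sequence to store
W = sequence_weights(xi)

s = xi[0].copy()
for _ in range(p - 1):                      # replay the stored sequence
    s = step(W, s)
```

For random patterns with N much larger than the sequence length, the replayed trajectory reproduces the stored state sequence up to a small fraction of flipped units.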
On lateral competition in dynamic neural networks
Bellyustin, N.S.
1995-02-01
Artificial neural networks with homogeneous connectivity, which use retinal image processing methods, are considered. We point out that there are probably two different types of lateral inhibition of each neural element by its neighbors: one due to negative connection coefficients between elements, and one due to the decreasing response of a neuron to an excessively high input signal. The first case is characterized by stable dynamics, which is given by the Lyapunov function, while in the second case stability is absent and two-dimensional dynamic chaos occurs if the time step in the integration of the model equations is large enough. The continuous neural medium approximation is used for analytical estimation in both cases. The result is the partition of the parameter space into domains with qualitatively different dynamic modes. Computer simulations confirm the estimates and show that joining two-dimensional chaos with symmetries provided by the initial and boundary conditions may produce patterns which are genuine pieces of art.
Dynamics and kinematics of simple neural systems
Rabinovich, M.; Selverston, A.; Rubchinsky, L.; Huerta, R.
1996-09-01
The dynamics of simple neural systems is of interest to both biologists and physicists. One of the possible roles of such systems is the production of rhythmic patterns and their alterations (modification of behavior, processing of sensory information, adaptation, control). In this paper, the neural systems are considered as a subject of modeling by the dynamical systems approach. In particular, we analyze how a stable, ordinary behavior of a small neural system can be described by simple finite automata models, and how more complicated dynamical systems modeling can be used. The approach is illustrated by biological and numerical examples: experiments with, and numerical simulations of, the stomatogastric central pattern generator network of the California spiny lobster. © 1996 American Institute of Physics.
Kubota, Michinori; Miyamoto, Akihiro; Hosokawa, Yutaka; Sugimoto, Shunji; Horikawa, Junsei
2012-05-30
Auditory induction is a continuity illusion in which missing sounds are perceived under appropriate conditions, for example, when noise is inserted during silent gaps in the sound. To elucidate the neural mechanisms underlying auditory induction, neural responses to tones interrupted by a silent gap or noise were examined in the core and belt fields of the auditory cortex using real-time optical imaging with a voltage-sensitive dye. Tone stimuli interrupted by a silent gap elicited responses to the second tone following the gap as well as early phasic responses to the first tone. Tone stimuli interrupted by broad-band noise (BN), considered to cause auditory induction, considerably reduced or eliminated responses to the tone following the noise. This reduction was stronger in the dorsocaudal field (field DC) and belt fields compared with the anterior field (the primary auditory cortex of guinea pig). Tone stimuli interrupted by notched (band-stopped) noise centered at the tone frequency, considered to decrease the strength of auditory induction, partially restored the second responses from the suppression caused by BN. These results suggest that substantial changes between responses to silent gap-inserted tones and those to BN-inserted tones emerged in field DC and belt fields. Moreover, the findings indicate that field DC is the first area in which these changes emerge, suggesting that it may be an important region for auditory induction of simple sounds. PMID:22473291
Foley, Nicholas C; Grossberg, Stephen; Mingolla, Ennio
2012-08-01
How are spatial and object attention coordinated to achieve rapid object learning and recognition during eye movement search? How do prefrontal priming and parietal spatial mechanisms interact to determine the reaction time costs of intra-object attention shifts, inter-object attention shifts, and shifts between visible objects and covertly cued locations? What factors underlie individual differences in the timing and frequency of such attentional shifts? How do transient and sustained spatial attentional mechanisms work and interact? How can volition, mediated via the basal ganglia, influence the span of spatial attention? A neural model is developed of how spatial attention in the where cortical stream coordinates view-invariant object category learning in the what cortical stream under free viewing conditions. The model simulates psychological data about the dynamics of covert attention priming and switching requiring multifocal attention without eye movements. The model predicts how "attentional shrouds" are formed when surface representations in cortical area V4 resonate with spatial attention in posterior parietal cortex (PPC) and prefrontal cortex (PFC), while shrouds compete among themselves for dominance. Winning shrouds support invariant object category learning, and active surface-shroud resonances support conscious surface perception and recognition. Attentive competition between multiple objects and cues simulates reaction-time data from the two-object cueing paradigm. The relative strength of sustained surface-driven and fast-transient motion-driven spatial attention controls individual differences in reaction time for invalid cues. Competition between surface-driven attentional shrouds controls individual differences in detection rate of peripheral targets in useful-field-of-view tasks. The model proposes how the strength of competition can be mediated, though learning or momentary changes in volition, by the basal ganglia. A new explanation of
Large Deviations for Nonlocal Stochastic Neural Fields
2014-01-01
We study the effect of additive noise on integro-differential neural field equations. In particular, we analyze an Amari-type model driven by a Q-Wiener process, and focus on noise-induced transitions and escape. We argue that proving a sharp Kramers’ law for neural fields poses substantial difficulties, but that one may transfer techniques from stochastic partial differential equations to establish a large deviation principle (LDP). Then we demonstrate that an efficient finite-dimensional approximation of the stochastic neural field equation can be achieved using a Galerkin method and that the resulting finite-dimensional rate function for the LDP can have a multiscale structure in certain cases. These results form the starting point for an efficient practical computation of the LDP. Our approach also provides the technical basis for further rigorous study of noise-induced transitions in neural fields based on Galerkin approximations. Mathematics Subject Classification (2000): 60F10, 60H15, 65M60, 92C20. PMID:24742297
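A minimal sketch of the ingredients discussed above is an Amari-type field driven by additive noise from a truncated Karhunen-Loeve expansion of a Q-Wiener process, which is exactly the kind of finite-dimensional spectral (Galerkin-style) approximation the paper builds on. The kernel, eigenvalue decay, and noise amplitude below are illustrative assumptions, not the paper's choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_stochastic_field(n=128, length=20.0, dt=0.01, steps=200,
                              n_modes=16, decay=2.0, sigma=0.1):
    # Euler-Maruyama for du = (-u + W f(u)) dt + sigma dW^Q, with the Q-Wiener
    # increment truncated to n_modes sine modes (eigenvalues lambda_k ~ k^-decay).
    x = np.linspace(0.0, length, n, endpoint=False)
    dx = x[1] - x[0]
    diff = x[:, None] - x[None, :]
    K = dx * (np.exp(-diff**2 / 2.0) - 0.5 * np.exp(-diff**2 / 8.0))  # dense kernel
    f = lambda v: 1.0 / (1.0 + np.exp(-10.0 * (v - 0.3)))
    modes = np.array([np.sqrt(2.0 / length) * np.sin((k + 1) * np.pi * x / length)
                      for k in range(n_modes)])
    lams = np.arange(1.0, n_modes + 1.0) ** (-decay)
    u = np.exp(-(x - length / 2) ** 2)          # localized initial bump
    for _ in range(steps):
        dW = (np.sqrt(lams * dt) * rng.normal(size=n_modes)) @ modes
        u = u + dt * (-u + K @ f(u)) + sigma * dW
    return x, u

x, u = simulate_stochastic_field()
```

Sampling many such trajectories and recording bump escapes is the brute-force counterpart of the large deviation estimates studied in the paper.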
Synthesis of recurrent neural networks for dynamical system simulation.
Trischler, Adam P; D'Eleuterio, Gabriele M T
2016-08-01
We review several of the most widely used techniques for training recurrent neural networks to approximate dynamical systems, then describe a novel algorithm for this task. The algorithm is based on an earlier theoretical result that guarantees the quality of the network approximation. We show that a feedforward neural network can be trained on the vector-field representation of a given dynamical system using backpropagation, then recast it as a recurrent network that replicates the original system's dynamics. After detailing this algorithm and its relation to earlier approaches, we present numerical examples that demonstrate its capabilities. One of the distinguishing features of our approach is that both the original dynamical systems and the recurrent networks that simulate them operate in continuous time.
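The core recipe, learning the vector field with a feedforward approximator and then closing the loop in time, can be sketched briefly. For compactness the sketch fits the field by least squares over fixed random features instead of the backpropagation training described in the paper; the target system and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def fit_vector_field(X, Y, n_hidden=200):
    # One-hidden-layer tanh approximator of dx/dt = f(x). Output weights are
    # solved by least squares over fixed random features -- a short stand-in
    # for backpropagation training of the full network.
    W1 = rng.normal(scale=1.0, size=(X.shape[1], n_hidden))
    b1 = rng.normal(scale=0.5, size=n_hidden)
    W2, *_ = np.linalg.lstsq(np.tanh(X @ W1 + b1), Y, rcond=None)
    return lambda x: np.tanh(x @ W1 + b1) @ W2

# Target dynamics: harmonic oscillator dx/dt = (x2, -x1).
X = rng.uniform(-1.5, 1.5, size=(500, 2))
Y = np.column_stack([X[:, 1], -X[:, 0]])
f_hat = fit_vector_field(X, Y)

# Recast as a recurrent system: Euler-integrate the learned field in time.
x, dt = np.array([1.0, 0.0]), 0.01
traj = [x]
for _ in range(628):                        # roughly one period (2*pi)
    x = x + dt * f_hat(x)
    traj.append(x)
traj = np.array(traj)
```

The feedback loop from the network's output to its input is what turns the static field approximator into a continuous-time recurrent simulator of the original system.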
Dynamic process modeling with recurrent neural networks
You, Yong; Nikolaou, M. (Dept. of Chemical Engineering)
1993-10-01
Mathematical models play an important role in control system synthesis. However, due to the inherent nonlinearity, complexity and uncertainty of chemical processes, it is usually difficult to obtain an accurate model for a chemical engineering system. A method of nonlinear static and dynamic process modeling via recurrent neural networks (RNNs) is studied. An RNN model is a set of coupled nonlinear ordinary differential equations in the continuous-time domain with nonlinear dynamic node characteristics as well as both feedforward and feedback connections. For such networks, each physical input to a system corresponds to exactly one input to the network. The system's dynamics are captured by the internal structure of the network. The structure of RNN models may be more natural and attractive than that of feedforward neural network models, but computation time for training is longer. Simulation results show that RNNs can learn both steady-state relationships and process dynamics of continuous and batch, single-input/single-output and multi-input/multi-output systems in a simple and direct manner. Training of RNNs shows only small degradation in the presence of noise in the training data. Thus, RNNs constitute a feasible alternative to layered feedforward backpropagation neural networks in steady-state and dynamic process modeling and model-based control.
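The RNN model class described here, coupled nonlinear ODEs with both feedforward and feedback connections, can be written down directly. The sketch below simulates the step response of a randomly weighted instance and omits training entirely; all weights and time constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

n = 8
W = 0.3 * rng.normal(size=(n, n)) / np.sqrt(n)  # weak coupling keeps the net stable
B = rng.normal(size=n)                          # input weights
C = rng.normal(size=n) / n                      # output (readout) weights

def simulate_rnn(u_of_t, T=20.0, dt=0.01):
    # Euler integration of the continuous-time RNN ODEs:
    #   dx/dt = -x + W tanh(x) + B u(t),   y = C . x
    x = np.zeros(n)
    ys = []
    for k in range(int(T / dt)):
        x = x + dt * (-x + W @ np.tanh(x) + B * u_of_t(k * dt))
        ys.append(C @ x)
    return np.array(ys)

y = simulate_rnn(lambda t: 1.0 if t >= 1.0 else 0.0)   # step-input response
```

Because each physical input maps to one network input, fitting such a model to process data amounts to adjusting W, B and C so that y tracks the measured response.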
Nonlinear dynamics of neural delayed feedback
Longtin, A.
1990-01-01
Neural delayed feedback is a property shared by many circuits in the central and peripheral nervous systems. The evolution of the neural activity in these circuits depends on their present state as well as on their past states, due to finite propagation time of neural activity along the feedback loop. These systems are often seen to undergo a change from a quiescent state characterized by low level fluctuations to an oscillatory state. We discuss the problem of analyzing this transition using techniques from nonlinear dynamics and stochastic processes. Our main goal is to characterize the nonlinearities which enable autonomous oscillations to occur and to uncover the properties of the noise sources these circuits interact with. The concepts are illustrated on the human pupil light reflex (PLR) which has been studied both theoretically and experimentally using this approach. 5 refs., 3 figs.
Quantum dissipation and neural net dynamics.
Pessa, E; Vitiello, G
1999-05-01
Inspired by the dissipative quantum model of the brain, we model the states of neural nets in terms of collective modes with the help of the formalism of Quantum Field Theory. We exhibit an explicit neural net model that allows a sequence of several pieces of information to be memorized without reciprocal destructive interference; namely, we solve the overprinting problem in such a way that the last registered information does not destroy those previously registered. Moreover, the net is able to recall not only the last registered information in the sequence, but also any of those previously registered.
Multiresolution dynamic predictor based on neural networks
NASA Astrophysics Data System (ADS)
Tsui, Fu-Chiang; Li, Ching-Chung; Sun, Mingui; Sclabassi, Robert J.
1996-03-01
We present a multiresolution dynamic predictor (MDP) based on neural networks for multi-step prediction of a time series. The MDP utilizes the discrete biorthogonal wavelet transform to compute wavelet coefficients at several scale levels and recurrent neural networks (RNNs) to form a set of dynamic nonlinear models for prediction of the time series. By employing RNNs in wavelet coefficient space, the MDP is capable of predicting a time series for both the long-term (with coarse resolution) and short-term (with fine resolution). Experimental results have demonstrated the effectiveness of the MDP for multi-step prediction of intracranial pressure (ICP) recorded from head-trauma patients. This approach has applicability to quasi-stationary signals and is suitable for on-line computation.
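The two ingredients of the MDP, a multilevel wavelet decomposition and one dynamic predictor per scale, can be sketched compactly. Below, a hand-rolled Haar transform stands in for the discrete biorthogonal wavelet and simple AR(2) models stand in for the RNNs; both substitutions are for brevity only.

```python
import numpy as np

SQ2 = np.sqrt(2.0)

def haar_decompose(x, levels):
    # One-dimensional Haar DWT; returns [a_L, d_L, d_{L-1}, ..., d_1].
    coeffs, a = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / SQ2
        a = (a[0::2] + a[1::2]) / SQ2
        coeffs.append(d)
    return [a] + coeffs[::-1]

def haar_reconstruct(coeffs):
    # Exact inverse of haar_decompose.
    a = coeffs[0]
    for d in coeffs[1:]:
        out = np.empty(2 * len(a))
        out[0::2] = (a + d) / SQ2
        out[1::2] = (a - d) / SQ2
        a = out
    return a

def fit_ar2(c):
    # Least-squares AR(2) one-step predictor for a single coefficient sequence.
    A = np.column_stack([c[1:-1], c[:-2]])
    coef, *_ = np.linalg.lstsq(A, c[2:], rcond=None)
    return coef

t = np.arange(256)
signal = np.sin(2 * np.pi * t / 32) + 0.3 * np.sin(2 * np.pi * t / 8)
coeffs = haar_decompose(signal, levels=3)
predictors = [fit_ar2(c) for c in coeffs]    # one dynamic model per scale
```

Predicting coarse-level coefficients many steps ahead while refreshing fine-level ones gives the long-term/short-term split described above.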
Artificial Neural Network L* from different magnetospheric field models
NASA Astrophysics Data System (ADS)
Yu, Y.; Koller, J.; Zaharia, S. G.; Jordanova, V. K.
2011-12-01
The third adiabatic invariant L* plays an important role in modeling and understanding the radiation belt dynamics. The popular way to numerically obtain the L* value follows the recipe described by Roederer [1970], which is, however, slow and computationally expensive. This work focuses on a new technique, which can compute the L* value in microseconds without losing much accuracy: artificial neural networks. Since L* is related to the magnetic flux enclosed by a particle drift shell, global magnetic field information needed to trace the drift shell is required. A series of currently popular empirical magnetic field models are applied to create the L* data pool using 1 million data samples which are randomly selected within a solar cycle and within the global magnetosphere. The networks, trained from the above L* data pool, can thereby be used for fairly efficient L* calculation given input parameters valid within the trained temporal and spatial range. Besides the empirical magnetospheric models, a physics-based self-consistent inner magnetosphere model (RAM-SCB) developed at LANL is also utilized to calculate L* values and then to train the L* neural network. This model better predicts the magnetospheric configuration and therefore can significantly improve the L* values. The above neural network L* technique will enable, for the first time, comprehensive solar-cycle long studies of radiation belt processes. However, neural networks trained from different magnetic field models can result in different L* values, which could cause mis-interpretation of radiation belt dynamics, such as where the source of radiation belt charged particles is and which mechanism is dominant in accelerating the particles. Such a fact calls for attention to cautiously choose a magnetospheric field model for the L* calculation.
The neural dynamics of sensory focus
Clarke, Stephen E.; Longtin, André; Maler, Leonard
2015-01-01
Coordinated sensory and motor system activity leads to efficient localization behaviours; but what neural dynamics enable object tracking and what are the underlying coding principles? Here we show that optimized distance estimation from motion-sensitive neurons underlies object tracking performance in weakly electric fish. First, a relationship is presented for determining the distance that maximizes the Fisher information of a neuron's response to object motion. When applied to our data, the theory correctly predicts the distance chosen by an electric fish engaged in a tracking behaviour, which is associated with a bifurcation between tonic and burst modes of spiking. Although object distance, size and velocity alter the neural response, the location of the Fisher information maximum remains invariant, demonstrating that the circuitry must actively adapt to maintain 'focus' during relative motion. PMID:26549346
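The distance maximizing Fisher information can be located numerically for a toy tuning curve. Assuming Poisson spiking, the information about distance d carried by rate r(d) is I(d) = r'(d)^2 / r(d); the sigmoidal r(d) below is an illustrative choice, not the recorded response.

```python
import numpy as np

def fisher_information(d, r_max=100.0, d0=2.0, s=0.5):
    # Fisher information of a Poisson neuron about object distance d:
    # I(d) = r'(d)^2 / r(d), for a sigmoidal response that falls off with distance.
    z = (d - d0) / s
    r = r_max / (1.0 + np.exp(z))
    dr = -r_max * np.exp(z) / (s * (1.0 + np.exp(z)) ** 2)
    return dr ** 2 / r

d = np.linspace(0.01, 6.0, 6000)
d_star = d[np.argmax(fisher_information(d))]   # most informative distance
```

For this curve the maximum sits at d0 + s*ln 2, slightly beyond the steepest point of the response, which illustrates the paper's point that the most informative distance need not coincide with the most responsive one.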
Natural neural projection dynamics underlying social behavior.
Gunaydin, Lisa A; Grosenick, Logan; Finkelstein, Joel C; Kauvar, Isaac V; Fenno, Lief E; Adhikari, Avishek; Lammel, Stephan; Mirzabekov, Julie J; Airan, Raag D; Zalocusky, Kelly A; Tye, Kay M; Anikeeva, Polina; Malenka, Robert C; Deisseroth, Karl
2014-06-19
Social interaction is a complex behavior essential for many species and is impaired in major neuropsychiatric disorders. Pharmacological studies have implicated certain neurotransmitter systems in social behavior, but circuit-level understanding of endogenous neural activity during social interaction is lacking. We therefore developed and applied a new methodology, termed fiber photometry, to optically record natural neural activity in genetically and connectivity-defined projections to elucidate the real-time role of specified pathways in mammalian behavior. Fiber photometry revealed that activity dynamics of a ventral tegmental area (VTA)-to-nucleus accumbens (NAc) projection could encode and predict key features of social, but not novel object, interaction. Consistent with this observation, optogenetic control of cells specifically contributing to this projection was sufficient to modulate social behavior, which was mediated by type 1 dopamine receptor signaling downstream in the NAc. Direct observation of deep projection-specific activity in this way captures a fundamental and previously inaccessible dimension of mammalian circuit dynamics. PMID:24949967
Beyond mean field theory: statistical field theory for neural networks
Buice, Michael A; Chow, Carson C
2014-01-01
Mean field theories have been a stalwart for studying the dynamics of networks of coupled neurons. They are convenient because they are relatively simple and possible to analyze. However, classical mean field theory neglects the effects of fluctuations and correlations due to single neuron effects. Here, we consider various possible approaches for going beyond mean field theory and incorporating correlation effects. Statistical field theory methods, in particular the Doi–Peliti–Janssen formalism, are particularly useful in this regard. PMID:25243014
Dynamical system modeling via signal reduction and neural network simulation
Paez, T.L.; Hunter, N.F.
1997-11-01
Many dynamical systems tested in the field and the laboratory display significant nonlinear behavior. Accurate characterization of such systems requires modeling in a nonlinear framework. One construct forming a basis for nonlinear modeling is that of the artificial neural network (ANN). However, when system behavior is complex, the amount of data required to perform training can become unreasonable. The authors reduce the complexity of information present in system response measurements using decomposition via canonical variate analysis. They describe a method for decomposing system responses, then modeling the components with ANNs. A numerical example is presented, along with conclusions and recommendations.
Structure-unknown non-linear dynamic systems: identification through neural networks
NASA Astrophysics Data System (ADS)
Masri, S. F.; Chassiakos, A. G.; Caughey, T. K.
1992-03-01
Explores the potential of using parallel distributed processing (neural network) approaches to identify the internal forces of structure-unknown non-linear dynamic systems typically encountered in the field of applied mechanics. The relevant characteristics of neural networks, such as the processing elements, network topology, and learning algorithms, are discussed in the context of system identification. The analogy of the neural network procedure to a qualitatively similar non-parametric identification approach, which was previously developed by the authors for handling arbitrary non-linear systems, is discussed. The utility of the neural network approach is demonstrated by application to several illustrative problems.
Population clocks: motor timing with neural dynamics
Buonomano, Dean V.; Laje, Rodrigo
2010-01-01
An understanding of sensory and motor processing will require elucidation of the mechanisms by which the brain tells time. Open questions relate to whether timing relies on dedicated or intrinsic mechanisms and whether distinct mechanisms underlie timing across scales and modalities. Although experimental and theoretical studies support the notion that neural circuits are intrinsically capable of sensory timing on short scales, few general models of motor timing have been proposed. For one class of models, population clocks, it is proposed that time is encoded in the time-varying patterns of activity of a population of neurons. We argue that population clocks emerge from the internal dynamics of recurrently connected networks, are biologically realistic and account for many aspects of motor timing. PMID:20889368
Neural attractor network for application in visual field data classification
NASA Astrophysics Data System (ADS)
Fink, Wolfgang
2004-07-01
The purpose was to introduce a novel method for computer-based classification of visual field data derived from perimetric examination that may act as a 'counsellor', providing an independent 'second opinion' to the diagnosing physician. The classification system consists of a Hopfield-type neural attractor network that obtains its input data from perimetric examination results. An iterative relaxation process determines the states of the neurons dynamically. Therefore, even 'noisy' perimetric output, e.g., from early stages of a disease, may eventually be classified correctly according to the predefined idealized visual field defect (scotoma) patterns, stored as attractors of the network, that are found with diseases of the eye, optic nerve and the central nervous system. Preliminary tests of the classification system on real visual field data derived from perimetric examinations have shown a classification success rate of over 80%. The main advantages of the Hopfield-attractor-network-based approach over feed-forward neural networks are: (1) the network architecture is defined by the classification problem; (2) no training is required to determine the neural coupling strengths; (3) an auto-diagnosis confidence level can be assigned by means of an overlap parameter and the Hamming distance. In conclusion, the novel method for computer-based classification of visual field data presented here furnishes a valuable first overview and an independent 'second opinion' in judging perimetric examination results, pointing towards a final diagnosis by a physician. It should not be considered a substitute for the diagnosing physician. Thanks to the worldwide accessibility of the Internet, the classification system offers a promising perspective towards modern computer-assisted diagnosis in both medicine and tele-medicine, in particular with respect to non-ophthalmic clinics or communities where perimetric expertise is not readily available.
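A minimal sketch of the attractor-network scheme described above: Hebbian weights store idealized defect patterns without any training, iterative relaxation drives a noisy input to the nearest attractor, and the overlap parameter serves as a confidence measure. The 8-point patterns are hypothetical stand-ins for scotoma templates, not real perimetric data.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian weight matrix storing the patterns as attractors.
    No iterative training is needed (advantage (2) in the abstract)."""
    P = np.array(patterns, dtype=float)        # rows are +/-1 patterns
    n = P.shape[1]
    W = P.T @ P / n
    np.fill_diagonal(W, 0.0)
    return W

def relax(W, state, max_iters=50):
    """Iterative relaxation: synchronous sign updates until a fixed point."""
    s = np.array(state, dtype=float)
    for _ in range(max_iters):
        nxt = np.sign(W @ s)
        nxt[nxt == 0] = 1.0                    # break ties upward
        if np.array_equal(nxt, s):
            break
        s = nxt
    return s

def overlap(s, pattern):
    """Overlap parameter in [-1, 1]; values near 1 mean high confidence."""
    return float(np.dot(s, pattern)) / len(s)

# Two hypothetical 8-point defect templates and a corrupted 'examination'.
p1 = np.array([1, 1, 1, 1, -1, -1, -1, -1])
p2 = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = train_hopfield([p1, p2])
noisy = p1.copy()
noisy[0] = -1                                  # one corrupted entry
recovered = relax(W, noisy)
print(overlap(recovered, p1))                  # 1.0: the p1 attractor was reached
```

The Hamming distance between `recovered` and each template gives the complementary distance-based confidence mentioned in the abstract.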
An integrated architecture of adaptive neural network control for dynamic systems
Ke, Liu; Tokar, R.; Mcvey, B.
1994-07-01
In this study, an integrated neural network control architecture for nonlinear dynamic systems is presented. Most recent work in the neural network control field uses no error feedback as a control input, which raises an adaptation problem. The integrated architecture in this paper combines feedforward control and error-feedback adaptive control using neural networks. The paper reveals the different internal functionality of these two kinds of neural network controllers for particular input styles, e.g., state feedback and error feedback. Feedforward neural network controllers with state feedback establish fixed control mappings that cannot adapt when model uncertainties are present. With error feedback, neural network controllers learn the slopes or gains with respect to the error feedback, yielding error-driven adaptive control systems. The results demonstrate that the two control schemes can be combined to realize their individual advantages. Testing with disturbances added to the plant shows good tracking and adaptation.
NASA Astrophysics Data System (ADS)
Chiel, Hillel J.; Thomas, Peter J.
2011-12-01
The three-body problem (e.g., the sun, earth and moon) proved to be far more difficult. In the late nineteenth century, Poincaré made significant progress on this problem, introducing a geometric method of reasoning about solutions to differential equations (Diacu and Holmes 1996). This work had a powerful impact on mathematicians and physicists, and also began to influence biology. In his 1925 book, based on his work starting in 1907, and that of others, Lotka used nonlinear differential equations and concepts from dynamical systems theory to analyze a wide variety of biological problems, including oscillations in the numbers of predators and prey (Lotka 1925). Although little was known in detail about the function of the nervous system, Lotka concluded his book with speculations about consciousness and the implications this might have for creating a mathematical formulation of biological systems. Much experimental work in the 1930s and 1940s focused on the biophysical mechanisms of excitability in neural tissue, and Rashevsky and others continued to apply tools and concepts from nonlinear dynamical systems theory as a means of providing a more general framework for understanding these results (Rashevsky 1960, Landahl and Podolsky 1949). The publication of Hodgkin and Huxley's classic quantitative model of the action potential in 1952 created a new impetus for these studies (Hodgkin and Huxley 1952). In 1955, FitzHugh published an important paper that summarized much of the earlier literature, and used concepts from phase plane analysis such as asymptotic stability, saddle points, separatrices and the role of noise to provide a deeper theoretical and conceptual understanding of threshold phenomena (FitzHugh 1955, Izhikevich and FitzHugh 2006). The FitzHugh-Nagumo equations constituted an important two-dimensional simplification of the four-dimensional Hodgkin-Huxley equations, and gave rise to an extensive literature of analysis. Many of the papers in this special issue build on these tools.
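The FitzHugh-Nagumo system mentioned above reduces the four Hodgkin-Huxley variables to two, and a few lines suffice to integrate it. The parameter values below are common textbook choices, not taken from any particular paper in the issue.

```python
import numpy as np

def fitzhugh_nagumo(v0=-1.0, w0=-0.5, I=0.5, a=0.7, b=0.8, eps=0.08,
                    dt=0.01, steps=20000):
    """Forward-Euler integration of the two-variable FitzHugh-Nagumo model:
        dv/dt = v - v^3/3 - w + I
        dw/dt = eps * (v + a - b*w)
    v is the fast (voltage-like) variable, w the slow recovery variable."""
    v, w = v0, w0
    vs = []
    for _ in range(steps):
        dv = v - v**3 / 3 - w + I
        dw = eps * (v + a - b * w)
        v, w = v + dt * dv, w + dt * dw
        vs.append(v)
    return np.array(vs)

trace = fitzhugh_nagumo()
# With I = 0.5 the unique fixed point is unstable, so the trajectory settles
# onto a relaxation limit cycle: repetitive spiking of large amplitude.
print(trace.max() - trace.min())
```

Setting `I = 0.0` instead leaves the rest state stable, which is the phase-plane picture of a threshold that FitzHugh analyzed.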
Two-photon imaging and analysis of neural network dynamics
NASA Astrophysics Data System (ADS)
Lütcke, Henry; Helmchen, Fritjof
2011-08-01
The glow of a starry night sky, the smell of a freshly brewed cup of coffee or the sound of ocean waves breaking on the beach are representations of the physical world that have been created by the dynamic interactions of thousands of neurons in our brains. How the brain mediates perceptions, creates thoughts, stores memories and initiates actions remains one of the most profound puzzles in biology, if not all of science. A key to a mechanistic understanding of how the nervous system works is the ability to measure and analyze the dynamics of neuronal networks in the living organism in the context of sensory stimulation and behavior. Dynamic brain properties have been fairly well characterized on the microscopic level of individual neurons and on the macroscopic level of whole brain areas largely with the help of various electrophysiological techniques. However, our understanding of the mesoscopic level comprising local populations of hundreds to thousands of neurons (so-called 'microcircuits') remains comparably poor. Predominantly, this has been due to the technical difficulties involved in recording from large networks of neurons with single-cell spatial resolution and near-millisecond temporal resolution in the brain of living animals. In recent years, two-photon microscopy has emerged as a technique which meets many of these requirements and thus has become the method of choice for the interrogation of local neural circuits. Here, we review the state-of-research in the field of two-photon imaging of neuronal populations, covering the topics of microscope technology, suitable fluorescent indicator dyes, staining techniques, and in particular analysis techniques for extracting relevant information from the fluorescence data. We expect that functional analysis of neural networks using two-photon imaging will help to decipher fundamental operational principles of neural microcircuits.
Identification of power system load dynamics using artificial neural networks
Bostanci, M.; Koplowitz, J.; Taylor, C.W.
1997-11-01
Power system loads are important for planning and operation of an electric power system. Load characteristics can significantly influence the results of synchronous stability and voltage stability studies. This paper presents a methodology for identification of power system load dynamics using neural networks. Input-output data of a power system dynamic load is used to design a neural network model which comprises delayed inputs and feedback connections. The developed neural network model can predict the future power system dynamic load behavior for arbitrary inputs. In particular, a third-order induction motor load neural network model is developed to verify the methodology. Neural network simulation results are illustrated and compared with the induction motor load response.
Neural dynamics during repetitive visual stimulation
NASA Astrophysics Data System (ADS)
Tsoneva, Tsvetomira; Garcia-Molina, Gary; Desain, Peter
2015-12-01
Objective. Steady-state visual evoked potentials (SSVEPs), the brain responses to repetitive visual stimulation (RVS), are widely utilized in neuroscience. Their high signal-to-noise ratio and ability to entrain oscillatory brain activity are beneficial for their applications in brain-computer interfaces, investigation of neural processes underlying brain rhythmic activity (steady-state topography) and probing the causal role of brain rhythms in cognition and emotion. This paper aims at analyzing the space and time EEG dynamics in response to RVS at the frequency of stimulation and ongoing rhythms in the delta, theta, alpha, beta, and gamma bands. Approach.We used electroencephalography (EEG) to study the oscillatory brain dynamics during RVS at 10 frequencies in the gamma band (40-60 Hz). We collected an extensive EEG data set from 32 participants and analyzed the RVS evoked and induced responses in the time-frequency domain. Main results. Stable SSVEP over parieto-occipital sites was observed at each of the fundamental frequencies and their harmonics and sub-harmonics. Both the strength and the spatial propagation of the SSVEP response seem sensitive to stimulus frequency. The SSVEP was more localized around the parieto-occipital sites for higher frequencies (>54 Hz) and spread to fronto-central locations for lower frequencies. We observed a strong negative correlation between stimulation frequency and relative power change at that frequency, the first harmonic and the sub-harmonic components over occipital sites. Interestingly, over parietal sites for sub-harmonics a positive correlation of relative power change and stimulation frequency was found. A number of distinct patterns in delta (1-4 Hz), theta (4-8 Hz), alpha (8-12 Hz) and beta (15-30 Hz) bands were also observed. The transient response, from 0 to about 300 ms after stimulation onset, was accompanied by increase in delta and theta power over fronto-central and occipital sites, which returned to baseline
The dynamic neural filter: a binary model of spatiotemporal coding.
Quenet, Brigitte; Horn, David
2003-02-01
We describe and discuss the properties of a binary neural network that can serve as a dynamic neural filter (DNF), which maps regions of input space into spatiotemporal sequences of neuronal activity. Both deterministic and stochastic dynamics are studied, allowing the investigation of the stability of spatiotemporal sequences under noisy conditions. We define a measure of the coding capacity of a DNF and develop an algorithm for constructing a DNF that can serve as a source of given codes. On the basis of this algorithm, we suggest using a minimal DNF capable of generating observed sequences as a measure of complexity of spatiotemporal data. This measure is applied to experimental observations in the locust olfactory system, whose reverberating local field potential provides a natural temporal scale allowing the use of a binary DNF. For random synaptic matrices, a DNF can generate very large cycles, thus becoming an efficient tool for producing spatiotemporal codes. The latter can be stabilized by applying to the parameters of the DNF a learning algorithm with suitable margins.
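The deterministic DNF update rule can be sketched directly. The 2-neuron weight matrix and bias below are a hand-picked toy example (not from the paper) whose orbit cycles through all four binary states:

```python
import numpy as np

def dnf_sequence(W, h, x0, steps=8):
    """Deterministic dynamic neural filter: binary neurons updated as
        n(t+1) = H(W n(t) + h),
    so an input (here entering as the bias h) is mapped to a spatiotemporal
    sequence of binary activity patterns."""
    n = np.array(x0, dtype=int)
    seq = [tuple(n)]
    for _ in range(steps):
        n = (W @ n + h > 0).astype(int)
        seq.append(tuple(n))
    return seq

# Hand-picked 2-neuron toy network whose dynamics visit all four states.
W = np.array([[0.0, -1.0], [1.0, 0.0]])
h = np.array([0.5, -0.5])                 # the 'input' selecting this code
seq = dnf_sequence(W, h, x0=[0, 0])
print(seq[:5])                            # (0,0) -> (1,0) -> (1,1) -> (0,1) -> (0,0)
```

For random matrices of realistic size the same update rule produces the very long cycles the abstract refers to; here the cycle length is 4, the maximum possible for two neurons.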
Shaping the learning curve: epigenetic dynamics in neural plasticity
Bronfman, Zohar Z.; Ginsburg, Simona; Jablonka, Eva
2014-01-01
A key characteristic of learning and neural plasticity is state-dependent acquisition dynamics reflected by the non-linear learning curve that links increase in learning with practice. Here we propose that the manner by which epigenetic states of individual cells change during learning contributes to the shape of the neural and behavioral learning curve. We base our suggestion on recent studies showing that epigenetic mechanisms such as DNA methylation, histone acetylation, and RNA-mediated gene regulation are intimately involved in the establishment and maintenance of long-term neural plasticity, reflecting specific learning-histories and influencing future learning. Our model, which is the first to suggest a dynamic molecular account of the shape of the learning curve, leads to several testable predictions regarding the link between epigenetic dynamics at the promoter, gene-network, and neural-network levels. This perspective opens up new avenues for therapeutic interventions in neurological pathologies. PMID:25071483
Neural Dynamics of Attentional Cross-Modality Control
Rabinovich, Mikhail; Tristan, Irma; Varona, Pablo
2013-01-01
Attentional networks that integrate many cortical and subcortical elements dynamically control mental processes to focus on specific events and make a decision. The resources of attentional processing are finite. Nevertheless, we often face situations in which it is necessary to simultaneously process several modalities, for example, to switch attention between players in a soccer field. Here we use a global brain mode description to build a model of attentional control dynamics. This model is based on sequential information processing stability conditions that are realized through nonsymmetric inhibition in cortical circuits. In particular, we analyze the dynamics of attentional switching and focus in the case of parallel processing of three interacting mental modalities. Using an excitatory-inhibitory network, we investigate how the bifurcations between different attentional control strategies depend on the stimuli and analyze the relationship between the time of attention focus and the strength of the stimuli. We discuss the interplay between attention and decision-making: in this context, a decision-making process is a controllable bifurcation of the attention strategy. We also suggest the dynamical evaluation of attentional resources in neural sequence processing. PMID:23696890
Neural network based dynamic controllers for industrial robots.
Oh, S Y; Shin, W C; Kim, H G
1995-09-01
An industrial robot's dynamic performance is frequently measured by positioning accuracy at high speeds, so a good dynamic controller, one that can accurately compute robot dynamics at a servo rate high enough to ensure system stability, is essential. A real-time dynamic controller for an industrial robot is developed here using neural networks. First, an efficient time-selectable hidden layer architecture has been developed based on system dynamics localized in time, which lends itself to real-time learning and control along with enhanced mapping accuracy. Second, the neural network architecture has also been specially tuned to accommodate servo dynamics. This not only facilitates the system design through reduced sensing requirements for the controller but also enhances the control performance over control architectures that neglect servo dynamics. Experimental results demonstrate the controller's excellent learning and control performance compared with a conventional controller; it thus has good potential for practical use in industrial robots.
Absolute stability and synchronization in neural field models with transmission delays
NASA Astrophysics Data System (ADS)
Kao, Chiu-Yen; Shih, Chih-Wen; Wu, Chang-Hong
2016-08-01
Neural fields model macroscopic parts of the cortex which involve several populations of neurons. We consider a class of neural field models which are represented by integro-differential equations with transmission time delays which are space-dependent. The considered domains underlying the systems can be bounded or unbounded. A new approach, called sequential contracting, instead of the conventional Lyapunov functional technique, is employed to investigate the global dynamics of such systems. Sufficient conditions for the absolute stability and synchronization of the systems are established. Several numerical examples are presented to demonstrate the theoretical results.
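A minimal sketch of the kind of neural field the abstract describes, i.e. an integro-differential equation with a distance-dependent kernel. The space-dependent transmission delays that are the paper's focus are omitted here for brevity, and the Mexican-hat kernel, sigmoid, and input are illustrative choices rather than the authors' system.

```python
import numpy as np

def simulate_field(L=10.0, N=200, T=5.0, dt=0.01, tau=1.0):
    """Explicit-Euler simulation of a 1D neural field
        tau du/dt = -u + integral w(x - y) f(u(y)) dy + I(x)
    with a Mexican-hat kernel w and a sigmoidal firing rate f."""
    x = np.linspace(-L / 2, L / 2, N)
    dx = x[1] - x[0]
    d = x[:, None] - x[None, :]
    w = np.exp(-d**2) - 0.5 * np.exp(-d**2 / 4)   # local excitation, broad inhibition
    f = lambda u: 1.0 / (1.0 + np.exp(-5.0 * (u - 0.5)))
    I = 1.2 * np.exp(-x**2)                       # localized external input
    u = np.zeros(N)
    for _ in range(int(T / dt)):
        # quadrature of the integral term: matrix-vector product times dx
        u += (dt / tau) * (-u + (w @ f(u)) * dx + I)
    return x, u

x, u = simulate_field()
print(x[np.argmax(u)])   # the activity bump settles over the input at x = 0
```

Adding a delayed argument `u(y, t - |x - y|/c)` to the integral turns this into the delayed system analyzed in the paper; the sequential-contracting stability conditions then bound when such simulations converge to a unique equilibrium.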
A biologically inspired neural network for dynamic programming.
Francelin Romero, R A; Kacpryzk, J; Gomide, F
2001-12-01
An artificial neural network with a two-layer feedback topology and generalized recurrent neurons, for solving nonlinear discrete dynamic optimization problems, is developed. A direct method to assign the weights of the neural network is presented. The method is based on Bellman's Optimality Principle and on the interchange of information that occurs during the synaptic chemical processing among neurons. The neural-network-based algorithm is an advantageous approach for dynamic programming due to the inherent parallelism of neural networks; further, it reduces the severity of computational problems that can occur in conventional methods. Some illustrative application examples, including shortest path and fuzzy decision-making problems, are presented to show how this approach works. PMID:11852439
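The Bellman recursion underlying the weight-assignment method can be illustrated with the shortest-path example the abstract mentions. This is the plain dynamic-programming computation that the network's connections encode, not the neural implementation itself, and the 5-node graph is hypothetical.

```python
import math

def shortest_path_values(graph, target):
    """Bellman's optimality recursion, V(i) = min_j [ c(i, j) + V(j) ],
    swept repeatedly until it reaches a fixed point."""
    V = {node: math.inf for node in graph}
    V[target] = 0.0
    for _ in range(len(graph)):              # |V| sweeps guarantee convergence
        for i in graph:
            for j, cost in graph[i].items():
                V[i] = min(V[i], cost + V[j])
    return V

# Hypothetical 5-node weighted graph.
graph = {
    'A': {'B': 1, 'C': 4},
    'B': {'C': 2, 'D': 5},
    'C': {'D': 1},
    'D': {'E': 3},
    'E': {},
}
V = shortest_path_values(graph, 'E')
print(V['A'])   # 7.0: the cheapest A -> E route is A-B-C-D-E
```

The parallelism the abstract highlights comes from the fact that every `min` over successors can be evaluated simultaneously by one neuron per node.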
Neural Dynamics Underlying Event-Related Potentials
NASA Technical Reports Server (NTRS)
Shah, Ankoor S.; Bressler, Steven L.; Knuth, Kevin H.; Ding, Ming-Zhou; Mehta, Ashesh D.; Ulbert, Istvan; Schroeder, Charles E.
2003-01-01
There are two opposing hypotheses about the brain mechanisms underlying sensory event-related potentials (ERPs). One holds that sensory ERPs are generated by phase resetting of ongoing electroencephalographic (EEG) activity, and the other that they result from signal averaging of stimulus-evoked neural responses. We tested several contrasting predictions of these hypotheses by direct intracortical analysis of neural activity in monkeys. Our findings clearly demonstrate evoked response contributions to the sensory ERP in the monkey, and they suggest the likelihood that a mixed (Evoked/Phase Resetting) model may account for the generation of scalp ERPs in humans.
Measuring Whole-Brain Neural Dynamics and Behavior of Freely-Moving C. elegans
NASA Astrophysics Data System (ADS)
Shipley, Frederick; Nguyen, Jeffrey; Plummer, George; Shaevitz, Joshua; Leifer, Andrew
2015-03-01
Bridging the gap between an organism's neural dynamics and its ultimate behavior is the fundamental goal of neuroscience. Until now, probes of neural dynamics have been restricted to small numbers of neurons, whether by electrode or optical measurement. Here we present an instrument to simultaneously monitor neural activity from every neuron in the head of a freely moving Caenorhabditis elegans while recording its behavior. Whole-brain imaging has previously been demonstrated in C. elegans, but only in restrained and anesthetized animals (1). For studying the neural coding of behavior it is crucial to record neural activity in freely behaving animals. Neural activity is recorded optically from cells expressing the calcium indicator GCaMP6. Real-time computer vision tracks the worm's position in x-y, while a piezo stage sweeps through the brain in z, yielding five brain-volumes per second. Behavior is recorded under infrared, dark-field imaging. This tool will allow us to directly correlate neural activity with behavior, and we will present progress toward this goal. Thank you to the Simons Foundation and Princeton University for supporting this research.
Neural Computations in a Dynamical System with Multiple Time Scales.
Mi, Yuanyuan; Lin, Xiaohan; Wu, Si
2016-01-01
Neural systems display rich short-term dynamics at various levels, e.g., spike-frequency adaptation (SFA) at the single-neuron level, and short-term facilitation (STF) and depression (STD) at the synapse level. These dynamical features typically cover a broad range of time scales and exhibit large diversity in different brain regions. It remains unclear what is the computational benefit for the brain to have such variability in short-term dynamics. In this study, we propose that the brain can exploit such dynamical features to implement multiple seemingly contradictory computations in a single neural circuit. To demonstrate this idea, we use continuous attractor neural network (CANN) as a working model and include STF, SFA and STD with increasing time constants in its dynamics. Three computational tasks are considered, which are persistent activity, adaptation, and anticipative tracking. These tasks require conflicting neural mechanisms, and hence cannot be implemented by a single dynamical feature or any combination with similar time constants. However, with properly coordinated STF, SFA and STD, we show that the network is able to implement the three computational tasks concurrently. We hope this study will shed light on the understanding of how the brain orchestrates its rich dynamics at various levels to realize diverse cognitive functions. PMID:27679569
Discriminating lysosomal membrane protein types using dynamic neural network.
Tripathi, Vijay; Gupta, Dwijendra Kumar
2014-01-01
This work presents a dynamic artificial neural network methodology that classifies proteins from their sequences alone into the lysosomal membrane protein class and the various other membrane protein classes. A neural-network-based lysosomal-associated membrane protein type prediction system is proposed. Different protein sequence representations are fused to extract the features of a protein sequence, comprising seven feature sets: amino acid (AA) composition, sequence length, hydrophobic group, electronic group, sum of hydrophobicity, R-group, and dipeptide composition. To reduce the dimensionality of the large feature vector, we applied principal component analysis. The probabilistic neural network, generalized regression neural network, and Elman recurrent neural network (RNN) are used as classifiers and compared with the layer recurrent network (LRN), a dynamic network. Dynamic networks have memory, i.e., their output depends not only on the current input but also on previous outputs. The accuracy of the LRN classifier thus comes out to be the highest among all the artificial neural networks tested. The overall jackknife cross-validation accuracy is 93.2% for the data-set. These results suggest that the method can be effectively applied to discriminate lysosomal-associated membrane proteins from other membrane proteins (Type-I, outer membrane proteins, GPI-anchored) and globular proteins, and that this protein sequence representation reflects the core features of membrane proteins better than the classical AA composition.
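The PCA step used above to compress the fused feature vector can be sketched as follows; the 50x420 feature matrix is random placeholder data, not real protein features, and the dimensions are arbitrary.

```python
import numpy as np

def pca_reduce(X, k):
    """Project feature vectors (rows of X) onto the top-k principal
    components, obtained from the SVD of the mean-centered data."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T           # scores: one k-dim row per sequence

rng = np.random.default_rng(0)
# Hypothetical 50 sequences x 420 fused features (AA + dipeptide composition, etc.)
X = rng.normal(size=(50, 420))
Z = pca_reduce(X, 30)
print(Z.shape)                     # (50, 30): compressed inputs for the classifiers
```

The reduced scores `Z` are what would then be fed to the PNN/GRNN/RNN/LRN classifiers; by construction the new coordinates are mean-centered and mutually uncorrelated.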
Beyond slots and resources: Grounding cognitive concepts in neural dynamics
Johnson, Jeffrey S.; Simmering, Vanessa R.; Buss, Aaron T.
2014-01-01
Research over the past decade has suggested that the ability to hold information in visual working memory (VWM) may be limited to as few as 3-4 items. However, the precise nature and source of these capacity limits remains hotly debated. Most commonly, capacity limits have been inferred from studies of visual change detection, in which performance declines systematically as a function of the number of items participants must remember. According to one view, such declines indicate that a limited number of fixed-resolution representations are held in independent memory ‘slots’. Another view suggests that capacity limits are more apparent than real, emerging as limited memory resources are distributed across more to-be-remembered items. Here we argue that, although both perspectives have merit and have generated and explained an impressive amount of empirical data, their central focus on the representations—rather than processes—underlying VWM may ultimately limit continuing progress in this area. As an alternative, we describe a neurally-grounded, process-based approach to VWM: the dynamic field theory. Simulations demonstrate that this model can account for key aspects of behavioral performance in change detection, in addition to generating novel behavioral predictions that have been confirmed experimentally. Furthermore, we describe extensions of the model to recall tasks, the integration of visual features, cognitive development, individual differences, and functional imaging studies of VWM. We conclude by discussing the importance of grounding psychological concepts in neural dynamics as a first step toward understanding the link between brain and behavior. PMID:24306983
Neural network approaches to dynamic collision-free trajectory generation.
Yang, S X; Meng, M
2001-01-01
In this paper, dynamic collision-free trajectory generation in a nonstationary environment is studied using biologically inspired neural network approaches. The proposed neural network is topologically organized, where the dynamics of each neuron is characterized by a shunting equation or an additive equation. The state space of the neural network can be either the Cartesian workspace or the joint space of multi-joint robot manipulators. There are only local lateral connections among neurons. The real-time optimal trajectory is generated through the dynamic activity landscape of the neural network without explicitly searching over the free space or the collision paths, without explicitly optimizing any global cost functions, without any prior knowledge of the dynamic environment, and without any learning procedures. Therefore the model algorithm is computationally efficient. The stability of the neural network system is guaranteed by the existence of a Lyapunov function candidate. In addition, the model is not very sensitive to its parameters. Several model variations are presented and the differences are discussed. As examples, the proposed models are applied to generate collision-free trajectories for a mobile robot solving a maze-type problem, avoiding concave U-shaped obstacles, and tracking a moving target while avoiding varying obstacles, and to generate a trajectory for a two-link planar robot with two targets. The effectiveness and efficiency of the proposed approaches are demonstrated through simulation and comparison studies. PMID:18244794
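A rough sketch of a shunting-equation activity landscape in the spirit of the model described above. The update rule, 4-neighbour lateral connectivity, and parameter values are illustrative simplifications, not the authors' exact formulation.

```python
import numpy as np

def activity_landscape(shape, target, obstacles, steps=1000, dt=0.01,
                       A=10.0, B=1.0, D=1.0, E=10.0):
    """Shunting dynamics on a grid of locally connected neurons:
        dx/dt = -A x + (B - x)(I+ + lateral excitation) - (D + x) I-,
    with target input +E and obstacle input -E. Only positive activity
    [x]+ propagates laterally, so excitation spreads through free space
    and never through obstacles."""
    H, Wd = shape
    x = np.zeros(shape)
    Ipos = np.zeros(shape)
    Ineg = np.zeros(shape)
    Ipos[target] = E
    for ob in obstacles:
        Ineg[ob] = E
    for _ in range(steps):
        xp = np.maximum(x, 0.0)
        lat = np.zeros_like(x)                 # 4-neighbour excitatory spread
        lat[1:, :] += xp[:-1, :]
        lat[:-1, :] += xp[1:, :]
        lat[:, 1:] += xp[:, :-1]
        lat[:, :-1] += xp[:, 1:]
        x += dt * (-A * x + (B - x) * (Ipos + lat) - (D + x) * Ineg)
    return x

def next_step(x, pos):
    """Greedy ascent on the activity landscape yields a collision-free step."""
    H, Wd = x.shape
    nbrs = [(pos[0] + di, pos[1] + dj)
            for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]
            if 0 <= pos[0] + di < H and 0 <= pos[1] + dj < Wd]
    return max(nbrs, key=lambda p: x[p])

# A 5x5 workspace with a short obstacle wall between start and target.
target, obstacles = (0, 4), [(0, 2), (1, 2), (2, 2)]
x = activity_landscape((5, 5), target, obstacles)
print(next_step(x, (0, 1)))   # steps around the wall rather than into it
```

No search, cost function, or learning appears anywhere: the trajectory is read off the converged landscape by hill-climbing, which is the property the abstract emphasizes.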
A theory of neural dimensionality, dynamics, and measurement
NASA Astrophysics Data System (ADS)
Ganguli, Surya
In many experiments, neuroscientists tightly control behavior, record many trials, and obtain trial-averaged firing rates from hundreds of neurons in circuits containing millions of behaviorally relevant neurons. Dimensionality reduction has often shown that such datasets are strikingly simple; they can be described using a much smaller number of dimensions than the number of recorded neurons, and the resulting projections onto these dimensions yield a remarkably insightful dynamical portrait of circuit computation. This ubiquitous simplicity raises several profound and timely conceptual questions. What is the origin of this simplicity and its implications for the complexity of brain dynamics? Would neuronal datasets become more complex if we recorded more neurons? How and when can we trust dynamical portraits obtained from only hundreds of neurons in circuits containing millions of neurons? We present a theory that answers these questions, and test it using neural data recorded from reaching monkeys. Overall, this theory yields a picture of the neural measurement process as a random projection of neural dynamics, conceptual insights into how we can reliably recover dynamical portraits in such under-sampled measurement regimes, and quantitative guidelines for the design of future experiments. Moreover, it reveals the existence of phase transition boundaries in our ability to successfully decode cognition and behavior as a function of the number of recorded neurons, the complexity of the task, and the smoothness of neural dynamics.
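The "measurement as random projection" picture can be illustrated in a few lines: a low-dimensional latent trajectory embedded in many model neurons keeps its dimensionality when only a small random subset of neurons is recorded. All sizes and signals below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# A smooth 3-dimensional latent trajectory (the 'simple' circuit dynamics).
t = np.linspace(0, 4 * np.pi, 500)
latent = np.stack([np.sin(t), np.cos(t), np.sin(0.5 * t)])    # 3 x 500

# Embed it linearly in 2000 model neurons.
embedding = rng.normal(size=(2000, 3))
rates = embedding @ latent                                    # 2000 x 500

# 'Record' only 100 randomly chosen neurons: a random projection of the dynamics.
recorded = rates[rng.choice(2000, size=100, replace=False)]

# PCA on the small subsample still recovers exactly 3 dimensions.
centered = recorded - recorded.mean(axis=1, keepdims=True)
s = np.linalg.svd(centered, compute_uv=False)
var = s**2 / np.sum(s**2)
print(var[:4])   # essentially all variance lies in the first 3 components
```

This is the benign regime the theory characterizes: because the number of recorded neurons comfortably exceeds the latent dimensionality, the random subsample preserves the dynamical portrait; noise and nonlinear embeddings are where the theory's phase boundaries become relevant.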
Toward modeling a dynamic biological neural network.
Ross, M D; Dayhoff, J E; Mugler, D H
1990-01-01
Mammalian macular endorgans are linear bioaccelerometers located in the vestibular membranous labyrinth of the inner ear. In this paper, the organization of the endorgan is interpreted on physical and engineering principles. This is a necessary prerequisite to mathematical and symbolic modeling of information processing by the macular neural network. Mathematical notations that describe the functioning system were used to produce a novel, symbolic model. The model is six-tiered and is constructed to mimic the neural system. Initial simulations show that the network functions best when some of the detecting elements (type I hair cells) are excitatory and others (type II hair cells) are weakly inhibitory. The simulations also illustrate the importance of disinhibition of receptors located in the third tier in shaping nerve discharge patterns at the sixth tier in the model system. PMID:11538873
Neural network with dynamically adaptable neurons
NASA Technical Reports Server (NTRS)
Tawel, Raoul (Inventor)
1994-01-01
This invention is an adaptive neuron for use in neural network processors. The adaptive neuron participates in the supervised learning phase of operation on a co-equal basis with the synapse matrix elements by adaptively changing its gain in a manner similar to the change of weights in the synapse elements. In this manner, training time is decreased by as much as three orders of magnitude.
Spontaneous Neural Dynamics and Multi-scale Network Organization
Foster, Brett L.; He, Biyu J.; Honey, Christopher J.; Jerbi, Karim; Maier, Alexander; Saalmann, Yuri B.
2016-01-01
Spontaneous neural activity has historically been viewed as task-irrelevant noise that should be controlled for via experimental design, and removed through data analysis. However, electrophysiology and functional MRI studies of spontaneous activity patterns, which have greatly increased in number over the past decade, have revealed a close correspondence between these intrinsic patterns and the structural network architecture of functional brain circuits. In particular, by analyzing the large-scale covariation of spontaneous hemodynamics, researchers are able to reliably identify functional networks in the human brain. Subsequent work has sought to identify the corresponding neural signatures via electrophysiological measurements, as this would elucidate the neural origin of spontaneous hemodynamics and would reveal the temporal dynamics of these processes across slower and faster timescales. Here we survey common approaches to quantifying spontaneous neural activity, reviewing their empirical success, and their correspondence with the findings of neuroimaging. We emphasize invasive electrophysiological measurements, which are amenable to amplitude- and phase-based analyses, and which can report variations in connectivity with high spatiotemporal precision. After summarizing key findings from the human brain, we survey work in animal models that display similar multi-scale properties. We highlight that, across many spatiotemporal scales, the covariance structure of spontaneous neural activity reflects structural properties of neural networks and dynamically tracks their functional repertoire. PMID:26903823
Short-Term Load Forecasting using Dynamic Neural Networks
NASA Astrophysics Data System (ADS)
Chogumaira, Evans N.; Hiyama, Takashi
This paper presents short-term electricity load forecasting using dynamic neural networks (DNN). The proposed approach includes an assessment of the DNN's stability to ascertain continued reliability. A comparative study between three different neural network architectures, namely feedforward, Elman and radial basis neural networks, is performed. The performance and stability of each DNN are evaluated using actual hourly load data. Stability for each of the three networks is determined through eigenvalue analysis. The neural network weights are dynamically adapted to meet the performance and stability requirements. A new approach for adapting radial basis function (RBF) neural network weights is also proposed. Evaluation of the networks is done in terms of forecasting error, stability and the effort required to train a particular network. The results show that the DNN based on the radial basis architecture performs much better than the rest. Eigenvalue analysis also shows that the radial basis based DNN is more stable, making it very reliable as the input varies.
Dynamics of the Model of the Caenorhabditis Elegans Neural Network
NASA Astrophysics Data System (ADS)
Kosinski, R. A.; Zaremba, M.
2007-06-01
A model of the neural network of the nematode worm C. elegans, based on biological investigations published in the literature, is proposed. In the model, artificial neurons with activities Si ∈ (-1, 1) are connected in the same way as in the C. elegans neural network. The dynamics of this network is investigated numerically for the case of simple external stimulation, using methods developed for nonlinear systems. In the computations a number of different attractors, e.g. point, quasiperiodic and chaotic, as well as the ranges of their occurrence, were found. These properties are similar to the dynamical properties of a simple one-dimensional neural network with a comparable number of neurons investigated earlier.
Correlation between eigenvalue spectra and dynamics of neural networks.
Zhou, Qingguo; Jin, Tao; Zhao, Hong
2009-10-01
This letter presents a study of the correlation between the eigenvalue spectra of synaptic matrices and the dynamical properties of asymmetric neural networks with associative memories. For this type of neural network, it was found that there are essentially two different dynamical phases: the chaos phase, with almost all trajectories converging to a single chaotic attractor, and the memory phase, with almost all trajectories attracted toward fixed-point attractors acting as memories. We found that if a neural network is designed in the chaos phase, the eigenvalue spectrum of its synaptic matrix behaves like that of a random matrix (i.e., all eigenvalues lie uniformly distributed within a circle in the complex plane), whereas if it is designed in the memory phase, the eigenvalue spectrum splits into two parts: one part corresponds to a random background, and the other consists of eigenvalues equal in number to the memory attractors. The mechanism for these phenomena is discussed in this letter.
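The split of the spectrum described above is easy to reproduce with a toy surrogate. The Hebbian construction and all parameters below are illustrative choices, not the letter's design procedure: an i.i.d. random matrix gives the circular-law bulk, and adding a low-rank Hebbian term storing P patterns splits off P outlier eigenvalues:

```python
import numpy as np

# Toy surrogate for the two phases: an i.i.d. random synaptic matrix
# (circular-law bulk) versus the same matrix plus a low-rank Hebbian term
# storing P patterns, which splits off ~P outlier eigenvalues. The Hebbian
# construction and all parameters are illustrative, not the letter's design.
rng = np.random.default_rng(1)
N, P = 1000, 5

J_random = rng.normal(size=(N, N)) / np.sqrt(N)
ev_random = np.linalg.eigvals(J_random)          # fills a disc of radius ~1

patterns = rng.choice([-1.0, 1.0], size=(P, N))  # P random +/-1 memories
J_memory = J_random + 2.0 * (patterns.T @ patterns) / N
ev_memory = np.linalg.eigvals(J_memory)          # same bulk + P outliers near 2

outliers = int((np.abs(ev_memory) > 1.5).sum())  # eigenvalues outside the bulk
```

Plotting `ev_random` and `ev_memory` in the complex plane shows the random background disc in both cases, with the memory matrix additionally carrying one outlier per stored pattern.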
Dynamic properties of force fields.
Vitalini, F; Mey, A S J S; Noé, F; Keller, B G
2015-02-28
Molecular-dynamics simulations are increasingly used to study dynamic properties of biological systems. With this development, the ability of force fields to successfully predict relaxation timescales and the associated conformational exchange processes moves into focus. We assess to what extent the dynamic properties of model peptides (Ac-A-NHMe, Ac-V-NHMe, AVAVA, A10) differ when simulated with different force fields (AMBER ff99SB-ILDN, AMBER ff03, OPLS-AA/L, CHARMM27, and GROMOS43a1). The dynamic properties are extracted using Markov state models. For single-residue models (Ac-A-NHMe, Ac-V-NHMe), the slow conformational exchange processes are similar in all force fields, but the associated relaxation timescales differ by up to an order of magnitude. For the peptide systems, not only the relaxation timescales, but also the conformational exchange processes differ considerably across force fields. This finding calls the significance of dynamic interpretations of molecular-dynamics simulations into question.
Dynamic Adaptive Neural Network Arrays: A Neuromorphic Architecture
Disney, Adam; Reynolds, John
2015-01-01
Dynamic Adaptive Neural Network Array (DANNA) is a neuromorphic hardware implementation. It differs from most other neuromorphic projects in that it allows for programmability of structure, and it is trained or designed using evolutionary optimization. This paper describes the DANNA structure, how DANNA is trained using evolutionary optimization, and an application of DANNA to a very simple classification task.
Non-Lipschitzian dynamics for neural net modelling
NASA Technical Reports Server (NTRS)
Zak, Michail
1989-01-01
Failure of the Lipschitz condition in unstable equilibrium points of dynamical systems leads to a multiple-choice response to an initial deterministic input. The evolution of such systems is characterized by a special type of unpredictability measured by unbounded Liapunov exponents. Possible relation of these systems to future neural networks is discussed.
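A standard one-dimensional example of such a non-Lipschitzian system is dx/dt = -x^(1/3) (a "terminal attractor" in Zak's terminology), which reaches its equilibrium in finite time, something no Lipschitz system can do. A minimal numerical comparison, where step size and threshold are arbitrary illustrative choices:

```python
import math

# Finite-time convergence of a non-Lipschitz ("terminal") attractor versus
# exponential convergence of a Lipschitz system. Step size, threshold and
# the cube-root right-hand side are arbitrary illustrative choices.
def first_time_below(rhs, x0=1.0, eps=1e-3, dt=1e-4, t_max=20.0):
    """Forward-Euler time at which |x| first drops below eps."""
    x, t = x0, 0.0
    while abs(x) >= eps and t < t_max:
        x += dt * rhs(x)
        t += dt
    return t

cbrt = lambda x: math.copysign(abs(x) ** (1.0 / 3.0), x)
t_terminal = first_time_below(lambda x: -cbrt(x))  # analytic zero-hit at t = 3/2
t_linear = first_time_below(lambda x: -x)          # ~ln(1/eps), approaches 0 only asymptotically
```

Analytically, dx/dt = -x^(1/3) with x(0) = 1 gives x(t) = (1 - 2t/3)^(3/2), hitting zero exactly at t = 3/2; at that point the derivative of the right-hand side diverges, which is the unbounded-Lyapunov-exponent behaviour the abstract refers to, while the Lipschitz system -x merely decays as e^(-t).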
Logic Dynamics for Deductive Inference -- Its Stability and Neural Basis
NASA Astrophysics Data System (ADS)
Tsuda, Ichiro
2014-12-01
We propose a dynamical model that represents a process of deductive inference. We discuss the stability of logic dynamics and a neural basis for the dynamics. We propose a new concept of descriptive stability, thereby enabling a structure of stable descriptions of mathematical models concerning dynamic phenomena to be clarified. The present theory is based on the wider and deeper thoughts of John S. Nicolis. In particular, it is based on our joint paper on the chaos theory of human short-term memories with a magic number of seven plus or minus two.
Dynamic artificial neural networks with affective systems.
Schuman, Catherine D; Birdwell, J Douglas
2013-01-01
Artificial neural networks (ANNs) are processors that are trained to perform particular tasks. We couple a computational ANN with a simulated affective system in order to explore the interaction between the two. In particular, we design a simple affective system that adjusts the threshold values in the neurons of our ANN. The aim of this paper is to demonstrate that this simple affective system can control the firing rate of the ensemble of neurons in the ANN, as well as to explore the coupling between the affective system and the processes of long term potentiation (LTP) and long term depression (LTD), and the effect of the parameters of the affective system on its performance. We apply our networks with affective systems to a simple pole balancing example and briefly discuss the effect of affective systems on network performance.
Dynamic Pricing in Electronic Commerce Using Neural Network
NASA Astrophysics Data System (ADS)
Ghose, Tapu Kumar; Tran, Thomas T.
In this paper, we propose an approach in which a feed-forward neural network is used to dynamically calculate a competitive price for a product in order to maximize sellers' revenue. The approach considers that, along with product price, other attributes such as product quality, delivery time, after-sales service and seller's reputation contribute to consumers' purchase decisions. We show that once sellers, using their limited prior knowledge, set an initial price for a product, our model automatically adjusts the price with the help of the neural network so that sellers' revenue is maximized.
Naudé, Jérémie; Cessac, Bruno; Berry, Hugues; Delord, Bruno
2013-09-18
Homeostatic intrinsic plasticity (HIP) is a ubiquitous cellular mechanism regulating neuronal activity, cardinal for the proper functioning of nervous systems. In invertebrates, HIP is critical for orchestrating stereotyped activity patterns. The functional impact of HIP remains more obscure in vertebrate networks, where higher order cognitive processes rely on complex neural dynamics. The hypothesis has emerged that HIP might control the complexity of activity dynamics in recurrent networks, with important computational consequences. However, conflicting results about the causal relationships between cellular HIP, network dynamics, and computational performance have arisen from machine-learning studies. Here, we assess how cellular HIP effects translate into collective dynamics and computational properties in biological recurrent networks. We develop a realistic multiscale model including a generic HIP rule regulating the neuronal threshold with actual molecular signaling pathways kinetics, Dale's principle, sparse connectivity, synaptic balance, and Hebbian synaptic plasticity (SP). Dynamic mean-field analysis and simulations unravel that HIP sets a working point at which inputs are transduced by large derivative ranges of the transfer function. This cellular mechanism ensures increased network dynamics complexity, robust balance with SP at the edge of chaos, and improved input separability. Although critically dependent upon balanced excitatory and inhibitory drives, these effects display striking robustness to changes in network architecture, learning rates, and input features. Thus, the mechanism we unveil might represent a ubiquitous cellular basis for complex dynamics in neural networks. Understanding this robustness is an important challenge to unraveling principles underlying self-organization around criticality in biological recurrent neural networks. PMID:24048833
Fluctuation-Driven Neural Dynamics Reproduce Drosophila Locomotor Patterns
Cruchet, Steeve; Gustafson, Kyle; Benton, Richard; Floreano, Dario
2015-01-01
The neural mechanisms determining the timing of even simple actions, such as when to walk or rest, are largely mysterious. One intriguing, but untested, hypothesis posits a role for ongoing activity fluctuations in neurons of central action selection circuits that drive animal behavior from moment to moment. To examine how fluctuating activity can contribute to action timing, we paired high-resolution measurements of freely walking Drosophila melanogaster with data-driven neural network modeling and dynamical systems analysis. We generated fluctuation-driven network models whose outputs—locomotor bouts—matched those measured from sensory-deprived Drosophila. From these models, we identified those that could also reproduce a second, unrelated dataset: the complex time-course of odor-evoked walking for genetically diverse Drosophila strains. Dynamical models that best reproduced both Drosophila basal and odor-evoked locomotor patterns exhibited specific characteristics. First, ongoing fluctuations were required. In a stochastic resonance-like manner, these fluctuations allowed neural activity to escape stable equilibria and to exceed a threshold for locomotion. Second, odor-induced shifts of equilibria in these models caused a depression in locomotor frequency following olfactory stimulation. Our models predict that activity fluctuations in action selection circuits cause behavioral output to more closely match sensory drive and may therefore enhance navigation in complex sensory environments. Together these data reveal how simple neural dynamics, when coupled with activity fluctuations, can give rise to complex patterns of animal behavior. PMID:26600381
Can Neural Activity Propagate by Endogenous Electrical Field?
Qiu, Chen; Shivacharan, Rajat S.; Zhang, Mingming
2015-01-01
It is widely accepted that synaptic transmissions and gap junctions are the major governing mechanisms for signal traveling in the neural system. Yet, a group of neural waves, either physiological or pathological, share the same speed of ∼0.1 m/s without synaptic transmission or gap junctions, and this speed is not consistent with axonal conduction or ionic diffusion. The only explanation left is an electrical field effect. We tested the hypothesis that endogenous electric fields are sufficient to explain the propagation with in silico and in vitro experiments. Simulation results show that field effects alone can indeed mediate propagation across layers of neurons with speeds of 0.12 ± 0.09 m/s with pathological kinetics, and 0.11 ± 0.03 m/s with physiologic kinetics, both generating weak field amplitudes of ∼2–6 mV/mm. Further, the model predicted that propagation speed values are inversely proportional to the cell-to-cell distances, but do not significantly change with extracellular resistivity, membrane capacitance, or membrane resistance. In vitro recordings in mice hippocampi produced similar speeds (0.10 ± 0.03 m/s) and field amplitudes (2.5–5 mV/mm), and by applying a blocking field, the propagation speed was greatly reduced. Finally, osmolarity experiments confirmed the model's prediction that cell-to-cell distance inversely affects propagation speed. Together, these results show that despite their weak amplitude, electric fields can be solely responsible for spike propagation at ∼0.1 m/s. This phenomenon could be important to explain the slow propagation of epileptic activity and other normal propagations at similar speeds. SIGNIFICANCE STATEMENT Neural activity (waves or spikes) can propagate using well documented mechanisms such as synaptic transmission, gap junctions, or diffusion. However, the purpose of this paper is to provide an explanation for experimental data showing that neural signals can propagate by means other than synaptic
Topological and Dynamical Complexity of Random Neural Networks
NASA Astrophysics Data System (ADS)
Wainrib, Gilles; Touboul, Jonathan
2013-03-01
Random neural networks are dynamical descriptions of randomly interconnected neural units. They exhibit a phase transition to chaos as a disorder parameter is increased. The microscopic mechanisms underlying this phase transition are unknown and, as in spin glasses, should be fundamentally related to the behavior of the system. In this Letter, we investigate the explosion of complexity arising near that phase transition. We show that the mean number of equilibria undergoes a sharp transition from one equilibrium to a very large number scaling exponentially with the dimension of the system. Near criticality, we compute the exponential rate of divergence, called topological complexity. Strikingly, we show that it behaves exactly as the maximal Lyapunov exponent, a classical measure of dynamical complexity. This relationship unravels a microscopic mechanism leading to chaos which we further demonstrate on a simpler disordered system, suggesting a deep and underexplored link between topological and dynamical complexity.
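The dynamical-complexity side of this transition is straightforward to probe numerically. The sketch below estimates the maximal Lyapunov exponent of the classic random rate network dx/dt = -x + J tanh(x) with J_ij ~ N(0, g^2/N), for which mean-field theory predicts a transition to chaos at g = 1; the network size, integration step and the two-trajectory (Benettin-style) estimator are illustrative choices, not the Letter's calculation:

```python
import numpy as np

def max_lyapunov(g, N=200, steps=4000, dt=0.05, seed=2):
    # Benettin-style two-trajectory estimate of the maximal Lyapunov
    # exponent for the random rate network dx/dt = -x + J tanh(x),
    # with J_ij ~ N(0, g^2/N); theory predicts chaos above g = 1.
    rng = np.random.default_rng(seed)
    J = rng.normal(size=(N, N)) * g / np.sqrt(N)
    step = lambda z: z + dt * (-z + J @ np.tanh(z))  # forward Euler
    x = rng.standard_normal(N)
    for _ in range(500):                   # discard the transient
        x = step(x)
    eps = 1e-6
    v = rng.standard_normal(N)
    y = x + eps * v / np.linalg.norm(v)    # nearby perturbed trajectory
    acc = 0.0
    for _ in range(steps):
        x, y = step(x), step(y)
        d = np.linalg.norm(y - x)
        acc += np.log(d / eps)             # accumulate expansion rate
        y = x + (y - x) * (eps / d)        # renormalise the separation
    return acc / (steps * dt)
```

Below the transition (e.g. g = 0.5) the estimate is negative, around -(1 - g); above it (e.g. g = 3) it is positive, the chaotic phase whose equilibrium explosion the Letter analyses.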
Helicopter trimming and tracking control using direct neural dynamic programming.
Enns, R; Si, Jennie
2003-01-01
This paper advances a neural-network-based approximate dynamic programming control mechanism that can be applied to complex control problems such as helicopter flight control design. Based on direct neural dynamic programming (DNDP), an approximate dynamic programming methodology, the control system is tailored to learn to maneuver a helicopter. The paper consists of a comprehensive treatise of this DNDP-based tracking control framework and extensive simulation studies for an Apache helicopter. A trim network is developed and seamlessly integrated into the neural dynamic programming (NDP) controller as part of a baseline structure for controlling complex nonlinear systems such as a helicopter. Design robustness is addressed by performing simulations under various disturbance conditions. All designs are tested using FLYRT, a sophisticated industrial scale nonlinear validated model of the Apache helicopter. This is probably the first time that an approximate dynamic programming methodology has been systematically applied to, and evaluated on, a complex, continuous state, multiple-input multiple-output nonlinear system with uncertainty. Though illustrated for helicopters, the DNDP control system framework should be applicable to general purpose tracking control.
A solution to neural field equations by a recurrent neural network method
NASA Astrophysics Data System (ADS)
Alharbi, Abir
2012-09-01
Neural field equations (NFE) are used to model the activity of neurons in the brain; they are introduced starting from a single-neuron 'integrate-and-fire' model. The neural continuum is spatially discretized for numerical studies, and the governing equations are modeled as a system of ordinary differential equations. In this article a recurrent neural network approach is used to solve this system of ODEs: a technique developed by combining the standard numerical method of finite differences with a Hopfield neural network. The architecture of the net, the energy function, the updating equations, and the algorithms are developed for the NFE model. A Hopfield neural network is then designed to minimize the energy function modeling the NFE. Results obtained from the Hopfield-finite-differences net show excellent performance in terms of accuracy and speed. The parallel nature of the Hopfield approach may make it easier to implement on fast parallel computers, giving it a speed advantage over traditional methods.
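The discretization step described above (continuum field to a system of ODEs) can be sketched for a one-dimensional Amari-type field u_t = -u + ∫ w(x-y) f(u(y)) dy + I(x). For simplicity the ODE system below is integrated by plain forward Euler rather than the article's Hopfield-energy-minimization scheme, and the kernel, firing-rate function and parameters are illustrative assumptions:

```python
import numpy as np

# Sketch of the discretization the article starts from: the 1-D field
#   u_t(x) = -u(x) + \int w(x - y) f(u(y)) dy + I(x)
# on a grid of N points becomes N coupled ODEs, integrated here by forward
# Euler (the article instead maps this system onto a Hopfield network).
# Kernel, firing-rate function and all parameters are illustrative.
N, L, dt = 200, 20.0, 0.05
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

def w(d):  # "Mexican hat": short-range excitation, broad inhibition
    return 2.0 * np.exp(-d ** 2) - 0.8 * np.exp(-d ** 2 / 9.0)

W = w(x[:, None] - x[None, :]) * dx            # quadrature of the integral
f = lambda u: 1.0 / (1.0 + np.exp(-10.0 * (u - 0.3)))   # firing-rate function
stim = np.exp(-x ** 2)                          # transient localized input

u = np.zeros(N)
for step in range(4000):
    I = stim if step < 400 else 0.0             # stimulus removed at t = 20
    u += dt * (-u + W @ f(u) + I)

# With these parameters a self-sustained bump of activity outlives the
# stimulus, the classic behaviour of lateral-inhibition neural fields.
bump_width = (f(u) > 0.5).sum() * dx
```

The persistence of a localized bump after the input is removed is the kind of field solution such discretized solvers are used to compute.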
Nonlinear dynamical system approaches towards neural prosthesis
Torikai, Hiroyuki; Hashimoto, Sho
2011-04-19
An asynchronous discrete-state spiking neuron is a wired system of shift registers that can mimic the nonlinear dynamics of an ODE-based neuron model. The control parameter of the neuron is the wiring pattern among the registers, which makes such neurons suitable for on-chip learning. In this paper an asynchronous discrete-state spiking neuron is introduced and its typical nonlinear phenomena are demonstrated. Also, a learning algorithm for a set of neurons is presented, and it is demonstrated that the algorithm enables the set of neurons to reconstruct the nonlinear dynamics of another set of neurons with unknown parameter values. The learning function is validated by FPGA experiments.
On neural networks in identification and control of dynamic systems
NASA Technical Reports Server (NTRS)
Phan, Minh; Juang, Jer-Nan; Hyland, David C.
1993-01-01
This paper presents a discussion of the applicability of neural networks in the identification and control of dynamic systems. Emphasis is placed on understanding how neural networks handle linear systems and how the new approach relates to conventional system identification and control methods. Extensions of the approach to nonlinear systems are then made. The paper explains the fundamental concepts of neural networks in their simplest terms. Among the topics discussed are feedforward and recurrent networks in relation to the standard state-space and observer models, linear and nonlinear auto-regressive models, linear predictors, one-step-ahead control, and model reference adaptive control for linear and nonlinear systems. Numerical examples are presented to illustrate the application of these important concepts.
Ma, Ying; Shaik, Mohammed A; Kim, Sharon H; Kozberg, Mariel G; Thibodeaux, David N; Zhao, Hanzhi T; Yu, Hang; Hillman, Elizabeth M C
2016-10-01
Although modern techniques such as two-photon microscopy can now provide cellular-level three-dimensional imaging of the intact living brain, the speed and fields of view of these techniques remain limited. Conversely, two-dimensional wide-field optical mapping (WFOM), a simpler technique that uses a camera to observe large areas of the exposed cortex under visible light, can detect changes in both neural activity and haemodynamics at very high speeds. Although WFOM may not provide single-neuron or capillary-level resolution, it is an attractive and accessible approach to imaging large areas of the brain in awake, behaving mammals at speeds fast enough to observe widespread neural firing events, as well as their dynamic coupling to haemodynamics. Although such wide-field optical imaging techniques have a long history, the advent of genetically encoded fluorophores that can report neural activity with high sensitivity, as well as modern technologies such as light emitting diodes and sensitive and high-speed digital cameras have driven renewed interest in WFOM. To facilitate the wider adoption and standardization of WFOM approaches for neuroscience and neurovascular coupling research, we provide here an overview of the basic principles of WFOM, considerations for implementation of wide-field fluorescence imaging of neural activity, spectroscopic analysis and interpretation of results. This article is part of the themed issue 'Interpreting BOLD: a dialogue between cognitive and cellular neuroscience'. PMID:27574312
On local bifurcations in neural field models with transmission delays.
van Gils, S A; Janssens, S G; Kuznetsov, Yu A; Visser, S
2013-03-01
Neural field models with transmission delays may be cast as abstract delay differential equations (DDEs). The theory of dual semigroups (also called sun-star calculus) provides a natural framework for the analysis of a broad class of delay equations, among them DDEs. In particular, it may be used advantageously for the investigation of stability and bifurcation of steady states. After introducing the neural field model in its basic functional analytic setting and discussing its spectral properties, we work out an extensive example and derive a characteristic equation. Under certain conditions the associated equilibrium may destabilise in a Hopf bifurcation. Furthermore, two Hopf curves may intersect in a double Hopf point in a two-dimensional parameter space. We provide general formulas for the corresponding critical normal form coefficients, evaluate these numerically, and interpret the results. PMID:23192328
Slow dynamics in features of synchronized neural network responses
Haroush, Netta; Marom, Shimon
2015-01-01
In this report, trial-to-trial variations in the synchronized responses of neural networks are explored over time scales of minutes, in ex-vivo large-scale cortical networks. We show that sub-second measures of the individual synchronous response, namely its latency and decay duration, are related to minutes-scale network response dynamics. Network responsiveness is reflected as residency in, or shifting amongst, areas of the latency-decay plane. The different sensitivities of latency and decay duration to synaptic blockers imply that these two measures reflect aspects of inhibitory and excitatory activities. Taken together, the data suggest that trial-to-trial variations in the synchronized responses of neural networks might be related to the effective excitation-inhibition ratio being a dynamic variable over time scales of minutes. PMID:25926787
Specific frontal neural dynamics contribute to decisions to check
Stoll, Frederic M.; Fontanier, Vincent; Procyk, Emmanuel
2016-01-01
Curiosity and information seeking potently shape our behaviour and are thought to rely on the frontal cortex. Yet, the frontal regions and neural dynamics that control the drive to check for information remain unknown. Here we trained monkeys in a task in which they had the opportunity to gain information about the potential delivery of a large bonus reward or to continue with a default instructed decision task. Single-unit recordings in behaving monkeys reveal that decisions to check for additional information first engage the midcingulate cortex and then the lateral prefrontal cortex. The opposite is true for instructed decisions. Importantly, deciding to check engages neurons also involved in performance monitoring. Further, specific midcingulate activity could be discerned several trials before the monkeys actually chose to check the environment. Our data show that deciding to seek information on the current state of the environment is characterized by specific dynamics of neural activity within the prefrontal cortex. PMID:27319361
Traveling bumps and their collisions in a two-dimensional neural field.
Lu, Yao; Sato, Yuzuru; Amari, Shun-Ichi
2011-05-01
A neural field is a continuous version of a neural network model, accounting for dynamical pattern formation arising from the population firing activity of neural tissue. These patterns include standing bumps, moving bumps, traveling waves, target waves, breathers, and spiral waves, many of them observed in various brain areas. They can be categorized into two types: wave-like activity spreading over the field and particle-like localized activity. We show through numerical experiments that localized traveling excitation patterns (traveling bumps), which behave like particles, exist in a two-dimensional neural field with excitation and inhibition mechanisms. The traveling bumps do not require any geometric restriction (boundary) to prevent them from propagating away, a fact that might shed light on how neurons in the brain are functionally organized. Collisions of traveling bumps exhibit rich phenomena; they might reveal the manner of information processing in the cortex and be useful in various applications. The trajectories of traveling bumps can be controlled by external inputs.
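The field dynamics summarized above can be sketched numerically. Below is a minimal one-dimensional analogue (the paper itself treats the two-dimensional case), assuming an Amari-type equation with a difference-of-Gaussians (Mexican-hat) kernel and a Heaviside firing rate; all parameter values and function names are illustrative, not taken from the paper.

```python
import numpy as np

def mexican_hat(x, a_e=1.0, s_e=1.0, a_i=0.5, s_i=2.0):
    """Difference-of-Gaussians kernel: local excitation, broader inhibition."""
    return (a_e * np.exp(-x**2 / (2 * s_e**2))
            - a_i * np.exp(-x**2 / (2 * s_i**2)))

def simulate_field(n=200, L=20.0, steps=400, dt=0.05, theta=0.3):
    """Euler integration of du/dt = -u + w * f(u) with a step nonlinearity."""
    x = np.linspace(-L / 2, L / 2, n)
    dx = x[1] - x[0]
    # Connectivity matrix: w[i, j] = kernel of the distance between sites i, j.
    W = mexican_hat(x[:, None] - x[None, :]) * dx
    u = np.exp(-x**2)                       # localized initial condition
    for _ in range(steps):
        f = (u > theta).astype(float)       # Heaviside firing rate
        u = u + dt * (-u + W @ f)
    return x, u

x, u = simulate_field()
# Suprathreshold activity persists and stays localized near the center.
print(u.max() > 0.3, u[:10].max() < 0.1)
```

With locally excitatory, laterally inhibitory connectivity, the localized initial activity neither dies out nor spreads across the field: a discrete analogue of the particle-like localized activity the abstract contrasts with spreading waves.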
Shaping the Dynamics of a Bidirectional Neural Interface
Vato, Alessandro; Semprini, Marianna; Maggiolini, Emma; Szymanski, Francois D.; Fadiga, Luciano; Panzeri, Stefano; Mussa-Ivaldi, Ferdinando A.
2012-01-01
Progress in decoding neural signals has enabled the development of interfaces that translate cortical brain activities into commands for operating robotic arms and other devices. The electrical stimulation of sensory areas provides a means to create artificial sensory information about the state of a device. Taken together, neural activity recording and microstimulation techniques allow us to embed a portion of the central nervous system within a closed-loop system, whose behavior emerges from the combined dynamical properties of its neural and artificial components. In this study we asked if it is possible to concurrently regulate this bidirectional brain-machine interaction so as to shape a desired dynamical behavior of the combined system. To this end, we followed a well-known biological pathway. In vertebrates, the communications between brain and limb mechanics are mediated by the spinal cord, which combines brain instructions with sensory information and organizes coordinated patterns of muscle forces driving the limbs along dynamically stable trajectories. We report the creation and testing of the first neural interface that emulates this sensory-motor interaction. The interface organizes a bidirectional communication between sensory and motor areas of the brain of anaesthetized rats and an external dynamical object with programmable properties. The system includes (a) a motor interface decoding signals from a motor cortical area, and (b) a sensory interface encoding the state of the external object into electrical stimuli to a somatosensory area. The interactions between brain activities and the state of the external object generate a family of trajectories converging upon a selected equilibrium point from arbitrary starting locations. Thus, the bidirectional interface establishes the possibility to specify not only a particular movement trajectory but an entire family of motions, which includes the prescribed reactions to unexpected perturbations. PMID
Dynamic neural activity during stress signals resilient coping.
Sinha, Rajita; Lacadie, Cheryl M; Constable, R Todd; Seo, Dongju
2016-08-01
Active coping underlies a healthy stress response, but neural processes supporting such resilient coping are not well-known. Using a brief, sustained exposure paradigm contrasting highly stressful, threatening, and violent stimuli versus nonaversive neutral visual stimuli in a functional magnetic resonance imaging (fMRI) study, we show significant subjective, physiologic, and endocrine increases and temporally related dynamically distinct patterns of neural activation in brain circuits underlying the stress response. First, stress-specific sustained increases in the amygdala, striatum, hypothalamus, midbrain, right insula, and right dorsolateral prefrontal cortex (DLPFC) regions supported the stress processing and reactivity circuit. Second, dynamic neural activation during stress versus neutral runs, showing early increases followed by later reduced activation in the ventrolateral prefrontal cortex (VLPFC), dorsal anterior cingulate cortex (dACC), left DLPFC, hippocampus, and left insula, suggested a stress adaptation response network. Finally, dynamic stress-specific mobilization of the ventromedial prefrontal cortex (VmPFC), marked by initial hypoactivity followed by increased VmPFC activation, pointed to the VmPFC as a key locus of the emotional and behavioral control network. Consistent with this finding, greater neural flexibility signals in the VmPFC during stress correlated with active coping ratings whereas lower dynamic activity in the VmPFC also predicted a higher level of maladaptive coping behaviors in real life, including binge alcohol intake, emotional eating, and frequency of arguments and fights. These findings demonstrate acute functional neuroplasticity during stress, with distinct and separable brain networks that underlie critical components of the stress response, and a specific role for VmPFC neuroflexibility in stress-resilient coping. PMID:27432990
Dynamics of gauge field inflation
Alexander, Stephon; Jyoti, Dhrubo; Kosowsky, Arthur; Marcianò, Antonino
2015-05-05
We analyze the existence and stability of dynamical attractor solutions for cosmological inflation driven by the coupling between fermions and a gauge field. Assuming a spatially homogeneous and isotropic gauge field and fermion current, the interacting fermion equation of motion reduces to that of a free fermion up to a phase shift. Consistency of the model is ensured via the Stückelberg mechanism. We prove the existence of exactly one stable solution, and demonstrate the stability numerically. Inflation arises without fine tuning, and does not require postulating any effective potential or non-standard coupling.
Protein Secondary Structure Prediction Using Deep Convolutional Neural Fields.
Wang, Sheng; Peng, Jian; Ma, Jianzhu; Xu, Jinbo
2016-01-01
Protein secondary structure (SS) prediction is important for studying protein structure and function. When only the sequence (profile) information is used as input feature, currently the best predictors can obtain ~80% Q3 accuracy, which has not been improved in the past decade. Here we present DeepCNF (Deep Convolutional Neural Fields) for protein SS prediction. DeepCNF is a Deep Learning extension of Conditional Neural Fields (CNF), which is an integration of Conditional Random Fields (CRF) and shallow neural networks. DeepCNF can model not only complex sequence-structure relationship by a deep hierarchical architecture, but also interdependency between adjacent SS labels, so it is much more powerful than CNF. Experimental results show that DeepCNF can obtain ~84% Q3 accuracy, ~85% SOV score, and ~72% Q8 accuracy, respectively, on the CASP and CAMEO test proteins, greatly outperforming currently popular predictors. As a general framework, DeepCNF can be used to predict other protein structure properties such as contact number, disorder regions, and solvent accessibility. PMID:26752681
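The CRF-plus-neural-network combination underlying CNF/DeepCNF can be illustrated by the decoding step: per-position scores from a network are combined with a learned transition matrix over adjacent labels and decoded with the Viterbi algorithm. The sketch below assumes log-space scores and a toy three-state label set; it is not the DeepCNF implementation.

```python
import numpy as np

def viterbi(emission, transition):
    """Most likely label path given log-space scores.

    emission:   (T, K) per-position scores from the neural network
    transition: (K, K) learned compatibility of adjacent labels
    """
    T, K = emission.shape
    dp = emission[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = dp[:, None] + transition + emission[t][None, :]
        back[t] = cand.argmax(axis=0)
        dp = cand.max(axis=0)
    path = [int(dp.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Toy 3-state example (think helix/strand/coil): transitions reward
# self-transitions, discouraging isolated single-position labels.
emission = np.log(np.array([[0.6, 0.2, 0.2],
                            [0.4, 0.5, 0.1],
                            [0.6, 0.2, 0.2]]))
transition = np.log(np.array([[0.8, 0.1, 0.1],
                              [0.1, 0.8, 0.1],
                              [0.1, 0.1, 0.8]]))
print(viterbi(emission, transition))  # → [0, 0, 0]
```

Position 1's emission slightly favors the second label, but the self-transition bonus smooths the decoded sequence into a consistent run: the kind of adjacent-label interdependency the CRF layer contributes on top of the per-position network scores.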
Deep Dynamic Neural Networks for Multimodal Gesture Segmentation and Recognition.
Wu, Di; Pigou, Lionel; Kindermans, Pieter-Jan; Le, Nam Do-Hoang; Shao, Ling; Dambre, Joni; Odobez, Jean-Marc
2016-08-01
This paper describes a novel method called Deep Dynamic Neural Networks (DDNN) for multimodal gesture recognition. A semi-supervised hierarchical dynamic framework based on a Hidden Markov Model (HMM) is proposed for simultaneous gesture segmentation and recognition, where skeleton joint information, depth and RGB images are the multimodal input observations. Unlike most traditional approaches that rely on the construction of complex handcrafted features, our approach learns high-level spatio-temporal representations using deep neural networks suited to the input modality: a Gaussian-Bernoulli Deep Belief Network (DBN) to handle skeletal dynamics, and a 3D Convolutional Neural Network (3DCNN) to manage and fuse batches of depth and RGB images. This is achieved through the modeling and learning of the emission probabilities of the HMM required to infer the gesture sequence. This purely data-driven approach achieves a Jaccard index score of 0.81 in the ChaLearn LAP gesture spotting challenge. The performance is on par with a variety of state-of-the-art hand-tuned feature-based approaches and other learning-based methods, therefore opening the door to the use of deep learning techniques in order to further explore multimodal time series data. PMID:26955020
The neural dynamics of updating person impressions.
Mende-Siedlecki, Peter; Cai, Yang; Todorov, Alexander
2013-08-01
Person perception is a dynamic, evolving process. Because other people are an endless source of social information, people need to update their impressions of others based upon new information. We devised an fMRI study to identify brain regions involved in updating impressions. Participants saw faces paired with valenced behavioral information and were asked to form impressions of these individuals. Each face was seen five times in a row, each time with a different behavioral description. Critically, for half of the faces the behaviors were evaluatively consistent, while for the other half they were inconsistent. In line with prior work, dorsomedial prefrontal cortex (dmPFC) was associated with forming impressions of individuals based on behavioral information. More importantly, a whole-brain analysis revealed a network of other regions associated with updating impressions of individuals who exhibited evaluatively inconsistent behaviors, including rostrolateral PFC, superior temporal sulcus, right inferior parietal lobule and posterior cingulate cortex. PMID:22490923
The dynamical stability of reverberatory neural circuits.
Tegnér, Jesper; Compte, Albert; Wang, Xiao-Jing
2002-12-01
The concept of reverberation proposed by Lorente de Nó and Hebb is key to understanding strongly recurrent cortical networks. In particular, synaptic reverberation is now viewed as a likely mechanism for the active maintenance of working memory in the prefrontal cortex. Theoretically, this has spurred a debate as to how such a potentially explosive mechanism can provide stable working-memory function given the synaptic and cellular mechanisms at play in the cerebral cortex. We present here new evidence for the participation of NMDA receptors in the stabilization of persistent delay activity in a biophysical network model of conductance-based neurons. We show that the stability of working-memory function, and the required NMDA/AMPA ratio at recurrent excitatory synapses, depend on physiological properties of neurons and synaptic interactions, such as the time constants of excitation and inhibition, mutual inhibition between interneurons, differential NMDA receptor participation at excitatory projections to pyramidal neurons and interneurons, or the presence of slow intrinsic ion currents in pyramidal neurons. We review other mechanisms proposed to enhance the dynamical stability of synaptically generated attractor states of a reverberatory circuit. This recent work represents a necessary and significant step towards testing attractor network models by cortical electrophysiology.
Perspective: network-guided pattern formation of neural dynamics.
Hütt, Marc-Thorsten; Kaiser, Marcus; Hilgetag, Claus C
2014-10-01
The understanding of neural activity patterns is fundamentally linked to an understanding of how the brain's network architecture shapes dynamical processes. Established approaches rely mostly on deviations of a given network from certain classes of random graphs. Hypotheses about the supposed role of prominent topological features (for instance, the roles of modularity, network motifs or hierarchical network organization) are derived from these deviations. An alternative strategy could be to study deviations of network architectures from regular graphs (rings and lattices) and consider the implications of such deviations for self-organized dynamic patterns on the network. Following this strategy, we draw on the theory of spatio-temporal pattern formation and propose a novel perspective for analysing dynamics on networks, by evaluating how the self-organized dynamics are confined by network architecture to a small set of permissible collective states. In particular, we discuss the role of prominent topological features of brain connectivity, such as hubs, modules and hierarchy, in shaping activity patterns. We illustrate the notion of network-guided pattern formation with numerical simulations and outline how it can facilitate the understanding of neural dynamics. PMID:25180302
The neural dynamics of task context in free recall.
Polyn, Sean M; Kragel, James E; Morton, Neal W; McCluey, Joshua D; Cohen, Zachary D
2012-03-01
Multivariate pattern analysis (MVPA) is a powerful tool for relating theories of cognitive function to the neural dynamics observed while people engage in cognitive tasks. Here, we use the Context Maintenance and Retrieval model of free recall (CMR; Polyn et al., 2009a) to interpret variability in the strength of task-specific patterns of distributed neural activity as participants study and recall lists of words. The CMR model describes how temporal and source-related (here, encoding task) information combine in a contextual representation that is responsible for guiding memory search. Each studied word in the free-recall paradigm is associated with one of two encoding tasks (size and animacy) that have distinct neural representations during encoding. We find evidence for the context retrieval hypothesis central to the CMR model: Task-specific patterns of neural activity are reactivated during memory search, as the participant recalls an item previously associated with a particular task. Furthermore, we find that the fidelity of these task representations during study is related to task-shifting, the serial position of the studied item, and variability in the magnitude of the recency effect across participants. The CMR model suggests that these effects may be related to a central parameter of the model that controls the rate that an internal contextual representation integrates information from the surrounding environment.
Predicting physical time series using dynamic ridge polynomial neural networks.
Al-Jumeily, Dhiya; Ghazali, Rozaida; Hussain, Abir
2014-01-01
Forecasting naturally occurring phenomena is a common problem in many domains of science, and it has been addressed and investigated by many scientists. The importance of time series prediction stems from the fact that it has a wide range of applications, including control systems, engineering processes, environmental systems and economics. From knowledge of some aspects of the previous behaviour of the system, the aim of the prediction process is to determine or predict its future behaviour. In this paper, we consider a novel application of a higher order polynomial neural network architecture called the Dynamic Ridge Polynomial Neural Network, which combines the properties of higher order and recurrent neural networks, to the prediction of physical time series. In this study, four types of signals have been used: the Lorenz attractor, the mean value of the AE index, the sunspot number, and heat wave temperature. The simulation results showed good improvements in terms of signal-to-noise ratio in comparison to a number of benchmarked higher order and feedforward neural networks.
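The "ridge polynomial" construction at the heart of this architecture, a sum over orders of products of affine ridge functions w·x + b, can be sketched as follows. The recurrent feedback and training procedure of the actual DRPNN are omitted, and the hand-set weights are purely illustrative.

```python
import numpy as np

def ridge_poly(x, groups):
    """Pi-Sigma style ridge polynomial: each order contributes the
    product of its affine ridge functions, and the orders are summed."""
    total = 0.0
    for order in groups:
        prod = 1.0
        for w, b in order:
            prod *= np.dot(w, x) + b
        total += prod
    return total

# A single hand-set second-order term exactly realizes f(x) = x0 * x1,
# showing how products of linear projections capture cross terms that a
# single linear layer cannot.
groups = [[(np.array([1.0, 0.0]), 0.0),
           (np.array([0.0, 1.0]), 0.0)]]
print(ridge_poly(np.array([2.0, 3.0]), groups))  # → 6.0
```

Because only linear projections and multiplications are used, the number of free weights grows linearly with the order, which is the usual argument for ridge polynomial networks over full higher-order networks.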
From invasion to extinction in heterogeneous neural fields.
Bressloff, Paul C
2012-03-26
In this paper, we analyze the invasion and extinction of activity in heterogeneous neural fields. We first consider the effects of spatial heterogeneities on the propagation of an invasive activity front. In contrast to previous studies of front propagation in neural media, we assume that the front propagates into an unstable rather than a metastable zero-activity state. For sufficiently localized initial conditions, the asymptotic velocity of the resulting pulled front is given by the linear spreading velocity, which is determined by linearizing about the unstable state within the leading edge of the front. One of the characteristic features of these so-called pulled fronts is their sensitivity to perturbations inside the leading edge. This means that standard perturbation methods for studying the effects of spatial heterogeneities or external noise fluctuations break down. We show how to extend a partial differential equation method for analyzing pulled fronts in slowly modulated environments to the case of neural fields with slowly modulated synaptic weights. The basic idea is to rescale space and time so that the front becomes a sharp interface whose location can be determined by solving a corresponding local Hamilton-Jacobi equation. We use steepest descents to derive the Hamilton-Jacobi equation from the original nonlocal neural field equation. In the case of weak synaptic heterogeneities, we then use perturbation theory to solve the corresponding Hamilton equations and thus determine the time-dependent wave speed. In the second part of the paper, we investigate how time-dependent heterogeneities in the form of extrinsic multiplicative noise can induce rare noise-driven transitions to the zero-activity state, which now acts as an absorbing state signaling the extinction of all activity. In this case, the most probable path to extinction can be obtained by solving the classical equations of motion that dominate a path integral representation of the stochastic
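The linear spreading velocity invoked in the abstract above can be sketched for a standard scalar neural field; the following is a schematic reconstruction under assumed notation, not the paper's exact derivation.

```latex
% Linearize \partial_t u = -u + \int w(x-x')\, f(u(x',t))\,dx' about the
% unstable state u=0 with gain \gamma = f'(0), and substitute the
% leading-edge ansatz u(x,t) \sim e^{-\lambda (x - v t)}:
v(\lambda) = \frac{\gamma\, \widehat{W}(\lambda) - 1}{\lambda},
\qquad
\widehat{W}(\lambda) = \int_{-\infty}^{\infty} w(y)\, e^{\lambda y}\, dy .
% For sufficiently localized initial data, the pulled front selects
v^{*} = \min_{\lambda > 0} v(\lambda),
% attained at \lambda^{*} satisfying v'(\lambda^{*}) = 0.
```

The selected speed depends only on the linearization in the leading edge, which is why perturbations inside that edge (and hence slow spatial modulation of the weights) require the specialized interface methods described above.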
Neural dynamics and circuit mechanisms of decision-making.
Wang, Xiao-Jing
2012-12-01
In this review, I briefly summarize current neurobiological studies of decision-making that bear on two general themes. The first focuses on the nature of neural representation and dynamics in a decision circuit. Experimental and computational results suggest that ramping-to-threshold in the temporal domain and trajectory of population activity in the state space represent a duality of perspectives on a decision process. Moreover, a decision circuit can display several different dynamical regimes, such as the ramping mode and the jumping mode with distinct defining properties. The second is concerned with the relationship between biologically-based mechanistic models and normative-type models. A fruitful interplay between experiments and these models at different levels of abstraction have enabled investigators to pose increasingly refined questions and gain new insights into the neural basis of decision-making. In particular, recent work on multi-alternative decisions suggests that deviations from rational models of choice behavior can be explained by established neural mechanisms.
Nonlinear identification of process dynamics using neural networks
Parlos, A.G.; Atiya, A.F.; Chong, K.T.; Tsai, W.K.
1992-01-01
In this paper the nonlinear identification of process dynamics encountered in nuclear power plant components is addressed, in an input-output sense, using artificial neural systems. A hybrid feedforward/feedback neural network, namely, a recurrent multilayer perceptron, is used as the model structure to be identified. The feedforward portion of the network architecture provides its well-known interpolation property, while through recurrency and cross-talk, the local information feedback enables representation of temporal variations in the system nonlinearities. The standard backpropagation learning algorithm is modified and used for the supervised training of the proposed hybrid network. The performance of recurrent multilayer perceptron networks in identifying process dynamics is investigated via the case study of a U-tube steam generator. The response of a representative steam generator is predicted using a neural network and compared to the response obtained from a sophisticated computer model based on first principles. The transient responses compare well, although further research is warranted to determine the predictive capabilities of these networks during more severe operational transients and accident scenarios.
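The recurrent multilayer perceptron structure described above (a feedforward mapping plus local feedback of the hidden state) can be sketched as follows. This shows only the model structure with illustrative dimensions; the paper's modified backpropagation training and steam-generator data are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

class RecurrentMLP:
    """Hidden layer with self-recurrence: h(t) depends on the current
    input and on h(t-1), so temporal variation enters through local
    information feedback rather than handcrafted lag features."""
    def __init__(self, n_in, n_hid, n_out):
        self.Wx = rng.normal(0.0, 0.3, (n_hid, n_in))   # input weights
        self.Wh = rng.normal(0.0, 0.3, (n_hid, n_hid))  # recurrent weights
        self.Wo = rng.normal(0.0, 0.3, (n_out, n_hid))  # readout weights
        self.h = np.zeros(n_hid)

    def step(self, x):
        self.h = np.tanh(self.Wx @ x + self.Wh @ self.h)
        return self.Wo @ self.h

# Drive the (untrained) model with a slow sinusoidal input plus a bias.
net = RecurrentMLP(2, 8, 1)
ys = [net.step(np.array([np.sin(0.1 * t), 1.0])) for t in range(50)]
```

For identification, the readout would be trained so that `ys` tracks the next measured plant output given the current input, with gradients flowing through the recurrent state.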
Dynamic neural mechanisms underlie race disparities in social cognition.
Cassidy, Brittany S; Krendl, Anne C
2016-05-15
Race disparities in behavior may emerge in several ways, some of which may be independent of implicit bias. To mitigate the pernicious effects of different race disparities for racial minorities, we must understand whether they are rooted in perceptual, affective, or cognitive processing with regard to race perception. We used fMRI to disentangle dynamic neural mechanisms predictive of two separable race disparities that can be obtained from a trustworthiness ratings task. Increased coupling between regions involved in perceptual and affective processing when viewing Black versus White faces predicted less later racial trust disparity, which was related to implicit bias. In contrast, increased functional coupling between regions involved in controlled processing predicted less later disparity in the differentiation of Black versus White faces with regard to perceived trust, which was unrelated to bias. These findings reveal that distinct neural signatures underlie separable race disparities in social cognition that may or may not be related to implicit bias. PMID:26908320
Bio-Inspired Neural Model for Learning Dynamic Models
NASA Technical Reports Server (NTRS)
Duong, Tuan; Duong, Vu; Suri, Ronald
2009-01-01
A neural-network mathematical model that, relative to prior such models, places greater emphasis on some of the temporal aspects of real neural physical processes, has been proposed as a basis for massively parallel, distributed algorithms that learn dynamic models of possibly complex external processes by means of learning rules that are local in space and time. The algorithms could be made to perform such functions as recognition and prediction of words in speech and of objects depicted in video images. The approach embodied in this model is said to be "hardware-friendly" in the following sense: The algorithms would be amenable to execution by special-purpose computers implemented as very-large-scale integrated (VLSI) circuits that would operate at relatively high speeds and low power demands.
NASA Astrophysics Data System (ADS)
Touboul, Jonathan
2012-08-01
In this manuscript we analyze the collective behavior of mean-field limits of large-scale, spatially extended stochastic neuronal networks with delays. Rigorously, the asymptotic regime of such systems is characterized by a very intricate stochastic delayed integro-differential McKean-Vlasov equation that remains impenetrable, leaving the stochastic collective dynamics of such networks poorly understood. In order to study these macroscopic dynamics, we analyze networks of firing-rate neurons, i.e. with linear intrinsic dynamics and sigmoidal interactions. In that case, we prove that the solution of the mean-field equation is Gaussian, hence characterized by its first two moments, and that these two quantities satisfy a set of coupled delayed integro-differential equations. These equations are similar to usual neural field equations, and incorporate noise levels as a parameter, allowing analysis of noise-induced transitions. We identify through bifurcation analysis several qualitative transitions due to noise in the mean-field limit. In particular, stabilization of spatially homogeneous solutions, synchronized oscillations, bumps, chaotic dynamics, wave or bump splitting are exhibited and arise from static or dynamic Turing-Hopf bifurcations. These surprising phenomena allow further exploration of the role of noise in the nervous system.
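Because the abstract states that the mean-field solution is Gaussian, its evolution reduces to coupled equations for a mean and a variance field. One plausible schematic form of the delayed mean equation (assumed notation, not the paper's exact system) is:

```latex
\partial_t \mu(x,t) = -\frac{\mu(x,t)}{\tau}
  + \int J(x,y)\;
    \mathbb{E}_{\xi \sim \mathcal{N}(0,1)}
    \Big[\, S\big(\, \mu(y,\, t - \tau(x,y))
      + \sqrt{v(y,\, t - \tau(x,y))}\,\xi \,\big) \Big]\, dy
% with a companion equation for the variance v(x,t) sourced by the
% noise intensity.
```

For sigmoids such as the error function, the Gaussian expectation has a closed form, which is what makes the reduction to "usual neural field equations with noise level as a parameter" tractable for bifurcation analysis.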
Dynamic digital watermark technique based on neural network
NASA Astrophysics Data System (ADS)
Gu, Tao; Li, Xu
2008-04-01
An algorithm for dynamic watermarking based on a neural network is presented that is more robust against false-authentication attacks and watermark-tampering operations than single-watermark embedding methods. (1) Five binary images used as watermarks are coded into a binary array. The total number of 0s and 1s is 5*N, where N is the original total number of the watermarks' binary bits; every 0 or 1 is enlarged fivefold by an information-enlarging technique. (2) A seed image pixel p(x,y) and its 3×3 neighborhood pixels p(x-1,y-1), p(x-1,y), p(x-1,y+1), p(x,y-1), p(x,y+1), p(x+1,y-1), p(x+1,y), p(x+1,y+1) form one sample: p(x,y) is used as the neural network target and the other eight pixel values are used as network inputs. (3) To train the network on this sample space, 5*N pixel values and their closely related neighboring pixel values are randomly chosen with a password from a color BMP-format image. (4) A four-layer neural network is constructed to describe the nonlinear mapping between inputs and outputs. (5) One bit from the array is embedded by adjusting the polarity between a chosen pixel value and the model's output value. (6) A randomizer generates a number determining how many watermarks to retrieve. The randomly chosen watermarks can be retrieved using the restored neural network outputs, the corresponding image pixel values, and the restore function, without knowledge of the original image or watermarks (the restored coded-watermark bit is 1 if o(x,y)(restored) > p(x,y)(reconstructed), else 0). The retrieved watermarks differ on each extraction. The proposed technique offers more watermarking proofs than single-watermark embedding algorithms. Experimental results show that the technique is robust against common image-processing operations and JPEG lossy compression, so the algorithm can be used to protect the copyright of an important image.
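Step (5), embedding a bit by adjusting the polarity between a pixel and the network's prediction of it, can be sketched as follows. The function names, fixed strength margin, and prediction value are illustrative assumptions, not the paper's exact scheme.

```python
def embed_bit(pixel, predicted, bit, strength=4):
    """Polarity embedding: nudge the pixel so that the sign of
    (pixel - network prediction) encodes the watermark bit.
    Only pixels whose polarity disagrees with the bit are changed."""
    if bit == 1:
        return predicted + strength if pixel <= predicted else pixel
    return predicted - strength if pixel >= predicted else pixel

def extract_bit(pixel, predicted):
    """Blind retrieval: compare the pixel with the prediction
    reconstructed from the trained network and its neighbors."""
    return 1 if pixel > predicted else 0

# Round-trip check with an assumed network prediction of 128:
marked = embed_bit(130, 128, 0)
print(extract_bit(marked, 128))  # → 0
```

Because the prediction is recoverable from the trained network and the neighboring pixels, extraction needs neither the original image nor the original watermarks, matching the blind-retrieval property claimed above.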
Can Neural Activity Propagate by Endogenous Electrical Field?
Qiu, Chen; Shivacharan, Rajat S; Zhang, Mingming; Durand, Dominique M
2015-12-01
It is widely accepted that synaptic transmission and gap junctions are the major mechanisms governing signal propagation in the nervous system. Yet, a group of neural waves, either physiological or pathological, share the same speed of ∼0.1 m/s without synaptic transmission or gap junctions, and this speed is not consistent with axonal conduction or ionic diffusion. The only explanation left is an electric field effect. We tested, with in silico and in vitro experiments, the hypothesis that endogenous electric fields are sufficient to explain this propagation. Simulation results show that field effects alone can indeed mediate propagation across layers of neurons with speeds of 0.12 ± 0.09 m/s with pathological kinetics, and 0.11 ± 0.03 m/s with physiologic kinetics, both generating weak field amplitudes of ∼2-6 mV/mm. Further, the model predicted that propagation speed values are inversely proportional to the cell-to-cell distances, but do not significantly change with extracellular resistivity, membrane capacitance, or membrane resistance. In vitro recordings in mouse hippocampi produced similar speeds (0.10 ± 0.03 m/s) and field amplitudes (2.5-5 mV/mm), and by applying a blocking field, the propagation speed was greatly reduced. Finally, osmolarity experiments confirmed the model's prediction that cell-to-cell distance inversely affects propagation speed. Together, these results show that despite their weak amplitude, electric fields can be solely responsible for spike propagation at ∼0.1 m/s. This phenomenon could be important to explain the slow propagation of epileptic activity and other normal propagations at similar speeds.
dNSP: a biologically inspired dynamic Neural network approach to Signal Processing.
Cano-Izquierdo, José Manuel; Ibarrola, Julio; Pinzolas, Miguel; Almonacid, Miguel
2008-09-01
The arriving order of data is one of the intrinsic properties of a signal. Therefore, techniques dealing with this temporal relation are required for identification and signal processing tasks. To classify a signal according to its temporal characteristics, it would be useful to find a feature vector in which the temporal attributes were embedded. The correlation and power density spectrum functions are suitable tools to manage this issue. These functions are usually defined with statistical formulation. On the other hand, numerous biological processes exist in which signals are processed to give a feature vector; for example, the processing of sound by the auditory system. In this work, the dNSP (dynamic Neural Signal Processing) architecture is proposed. This architecture allows representing a time-varying signal by a spatial (thus static) vector. Inspired by the aforementioned biological processes, the dNSP performs frequency decomposition using an analog parallel algorithm carried out by simple processing units. The architecture has been developed under the paradigm of a multilayer neural network, where the different layers are composed of units whose activation functions have been extracted from the theory of neural dynamics [Grossberg, S. (1988). Nonlinear neural networks: Principles, mechanisms and architectures. Neural Networks, 1, 17-61]. A theoretical study of the behavior of the dynamic equations of the units and their relationship with some statistical functions allows establishing a parallelism between the unit activations and the correlation and power density spectrum functions. To test the capabilities of the proposed approach, several testbeds have been employed, e.g., the frequency analysis of mathematical functions. As a possible application of the architecture, a highly interesting problem in the field of automatic control is addressed: the recognition of a controlled DC motor operating state. PMID:18579344
Endothelial cells regulate neural crest and second heart field morphogenesis
Milgrom-Hoffman, Michal; Michailovici, Inbal; Ferrara, Napoleone; Zelzer, Elazar; Tzahor, Eldad
2014-01-01
ABSTRACT Cardiac and craniofacial developmental programs are intricately linked during early embryogenesis, which is also reflected by a high frequency of birth defects affecting both regions. The molecular nature of the crosstalk between mesoderm and neural crest progenitors and the involvement of endothelial cells within the cardio–craniofacial field are largely unclear. Here we show in the mouse that genetic ablation of vascular endothelial growth factor receptor 2 (Flk1) in the mesoderm results in early embryonic lethality, severe deformation of the cardio–craniofacial field, lack of endothelial cells and a poorly formed vascular system. We provide evidence that endothelial cells are required for migration and survival of cranial neural crest cells and consequently for the deployment of second heart field progenitors into the cardiac outflow tract. Insights into the molecular mechanisms reveal marked reduction in Transforming growth factor beta 1 (Tgfb1) along with changes in the extracellular matrix (ECM) composition. Our collective findings in both mouse and avian models suggest that endothelial cells coordinate cardio–craniofacial morphogenesis, in part via a conserved signaling circuit regulating ECM remodeling by Tgfb1. PMID:24996922
Transient Turing patterns in a neural field model.
Elvin, A J; Laing, C R; Roberts, M G
2009-01-01
We investigate Turing bifurcations in a neural field model with one spatial dimension. For some parameter values the resulting Turing patterns are stable, while for others the patterns appear transiently. We show that this difference is due to the relative position in parameter space of the saddle-node bifurcation of a spatially periodic pattern and the Turing bifurcation point. By varying parameters we are able to observe transient patterns whose duration scales in the same way as type-I intermittency. Similar behavior occurs in two spatial dimensions.
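The kind of one-dimensional neural field in which such Turing bifurcations arise can be sketched as an Amari-type equation on a ring, tau du/dt = -u + w⊗f(u) + h, with a local-excitation/lateral-inhibition kernel. The kernel shape and all parameter values below are illustrative, not those of the paper:

```python
import math

# Amari-type neural field on a ring: tau du/dt = -u + (w convolved with f(u)) + h,
# with a "Mexican hat" kernel (local excitation, lateral inhibition).  With these
# illustrative parameters, a small periodic perturbation of the rest state grows
# into a finite-amplitude spatially periodic (Turing) pattern.

N, L = 64, 20.0                     # grid points, domain length
dx, h, tau, dt = L / N, -0.1, 1.0, 0.05

def kernel(d):                      # difference of Gaussians
    return 2.0 * math.exp(-d ** 2) - math.exp(-d ** 2 / 4.0)

def f(u):                           # sigmoidal firing rate
    return 1.0 / (1.0 + math.exp(-5.0 * u))

# discretized connectivity with periodic (ring) distances
w = [[kernel(min(abs(i - j), N - abs(i - j)) * dx) * dx for j in range(N)]
     for i in range(N)]

# rest state plus a small mode-4 perturbation
u = [h + 0.01 * math.cos(2 * math.pi * 4 * i / N) for i in range(N)]

for _ in range(400):                # forward Euler in time
    fu = [f(v) for v in u]
    u = [v + dt / tau * (-v + sum(w[i][j] * fu[j] for j in range(N)) + h)
         for i, v in enumerate(u)]

amplitude = max(u) - min(u)         # grows well beyond the initial 0.02
```

Whether the grown pattern persists or is transient depends, as the abstract explains, on where the pattern's saddle-node bifurcation sits relative to the Turing point.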
Autonomic neural control of dynamic cerebral autoregulation in humans
NASA Technical Reports Server (NTRS)
Zhang, Rong; Zuckerman, Julie H.; Iwasaki, Kenichi; Wilson, Thad E.; Crandall, Craig G.; Levine, Benjamin D.
2002-01-01
BACKGROUND: The purpose of the present study was to determine the role of autonomic neural control of dynamic cerebral autoregulation in humans. METHODS AND RESULTS: We measured arterial pressure and cerebral blood flow (CBF) velocity in 12 healthy subjects (aged 29+/-6 years) before and after ganglion blockade with trimethaphan. CBF velocity was measured in the middle cerebral artery using transcranial Doppler. The magnitudes of spontaneous changes in mean blood pressure and CBF velocity were quantified by spectral analysis. The transfer function gain, phase, and coherence between these variables were estimated to quantify dynamic cerebral autoregulation. After ganglion blockade, systolic and pulse pressure decreased significantly by 13% and 26%, respectively. CBF velocity decreased by 6% (P<0.05). In the very low frequency range (0.02 to 0.07 Hz), mean blood pressure variability decreased significantly (by 82%), while CBF velocity variability persisted. Thus, transfer function gain increased by 81%. In addition, the phase lead of CBF velocity to arterial pressure diminished. These changes in transfer function gain and phase persisted despite restoration of arterial pressure by infusion of phenylephrine and normalization of mean blood pressure variability by oscillatory lower body negative pressure. CONCLUSIONS: These data suggest that dynamic cerebral autoregulation is altered by ganglion blockade. We speculate that autonomic neural control of the cerebral circulation is tonically active and likely plays a significant role in the regulation of beat-to-beat CBF in humans.
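The transfer-function estimate between pressure and flow velocity can be sketched as a single-frequency Fourier (lock-in) computation. The synthetic signals, sample rate, and the 0.8 gain / 0.9 rad phase lead below are placeholders, not the study's data:

```python
import cmath, math

# Transfer-function gain and phase of CBF velocity relative to arterial
# pressure at one frequency, computed from single-frequency Fourier
# coefficients.  Synthetic sinusoids stand in for the Doppler/pressure data;
# all amplitudes, frequencies, and the built-in 0.8 gain and 0.9 rad phase
# lead are illustrative.

fs, f0, T = 10.0, 0.05, 200.0            # sample rate (Hz), drive freq (Hz), duration (s)
n = int(fs * T)
t = [i / fs for i in range(n)]

abp = [5.0 * math.sin(2 * math.pi * f0 * ti) for ti in t]               # mmHg
cbfv = [5.0 * 0.8 * math.sin(2 * math.pi * f0 * ti + 0.9) for ti in t]  # cm/s, leads ABP

def fourier_coeff(x, f):
    """Single-frequency Fourier coefficient (lock-in style)."""
    return sum(xi * cmath.exp(-2j * math.pi * f * i / fs)
               for i, xi in enumerate(x)) * 2.0 / len(x)

H = fourier_coeff(cbfv, f0) / fourier_coeff(abp, f0)   # transfer function at f0
gain, phase = abs(H), cmath.phase(H)                   # recovers 0.8 and 0.9 rad
```

In the study itself the spectra are estimated across whole frequency bands from spontaneous fluctuations; the lock-in version above only shows the gain/phase definition at a single frequency.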
A complex-valued neural dynamical optimization approach and its stability analysis.
Zhang, Songchuan; Xia, Youshen; Zheng, Weixing
2015-01-01
In this paper, we propose a complex-valued neural dynamical method for solving a complex-valued nonlinear convex programming problem. Theoretically, we prove that the proposed complex-valued neural dynamical approach is globally stable and convergent to the optimal solution. The proposed neural dynamical approach generalizes the real-valued nonlinear Lagrange network completely into the complex domain. Compared with existing real-valued neural networks and numerical optimization methods for solving complex-valued quadratic convex programming problems, the proposed complex-valued neural dynamical approach avoids redundant computation in a doubled real-valued space and thus has lower model complexity and storage requirements. Numerical simulations are presented to show the effectiveness of the proposed complex-valued neural dynamical approach.
Binocular rivalry waves in a directionally selective neural field model
NASA Astrophysics Data System (ADS)
Carroll, Samuel R.; Bressloff, Paul C.
2014-10-01
We extend a neural field model of binocular rivalry waves in the visual cortex to incorporate direction selectivity of moving stimuli. For each eye, we consider a one-dimensional network of neurons that respond maximally to a fixed orientation and speed of a grating stimulus. Recurrent connections within each one-dimensional network are taken to be excitatory and asymmetric, where the asymmetry captures the direction and speed of the moving stimuli. Connections between the two networks are taken to be inhibitory (cross-inhibition). As in previous studies, we incorporate slow adaptation as a symmetry-breaking mechanism that allows waves to propagate. We derive an analytical expression for traveling wave solutions of the neural field equations, as well as an implicit equation for the wave speed as a function of neurophysiological parameters, and analyze their stability. Most importantly, we show that propagation of traveling waves is faster in the direction of stimulus motion than against it, in agreement with previous experimental and computational studies.
Derivation of a neural field model from a network of theta neurons.
Laing, Carlo R
2014-07-01
Neural field models are used to study macroscopic spatiotemporal patterns in the cortex. Their derivation from networks of model neurons normally involves a number of assumptions, which may not be correct. Here we present an exact derivation of a neural field model from an infinite network of theta neurons, the canonical form of a type I neuron. We demonstrate the existence of a "bump" solution in both a discrete network of neurons and in the corresponding neural field model.
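The theta neuron from which the field model is derived is a one-variable phase model, dθ/dt = 1 − cos θ + (1 + cos θ)I: it rests for I < 0 and fires periodically for I > 0. A minimal sketch with illustrative parameters:

```python
import math

# The theta neuron, canonical form of a type I neuron:
#     dtheta/dt = 1 - cos(theta) + (1 + cos(theta)) * I.
# A spike is registered when theta crosses pi.  Drive currents, duration, and
# step size are illustrative.

def run_theta(I, T=100.0, dt=0.001, theta0=0.0):
    theta, spikes = theta0, 0
    for _ in range(int(T / dt)):
        theta += dt * (1.0 - math.cos(theta) + (1.0 + math.cos(theta)) * I)
        if theta >= math.pi:          # spike: wrap the phase back onto the circle
            theta -= 2.0 * math.pi
            spikes += 1
    return spikes

quiescent = run_theta(-0.1)   # I < 0: stable rest state, no spikes
firing = run_theta(0.1)       # I > 0: periodic firing (period ~ pi / sqrt(I))
```

The exact derivation in the paper proceeds by taking the continuum limit of an infinite network of such units; the single-neuron dynamics above is only the starting ingredient.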
Dynamic analysis of a general class of winner-take-all competitive neural networks.
Fang, Yuguang; Cohen, Michael A; Kincaid, Thomas G
2010-05-01
This paper studies a general class of dynamical neural networks with lateral inhibition, exhibiting winner-take-all (WTA) behavior. These networks are motivated by a metal-oxide-semiconductor field effect transistor (MOSFET) implementation of neural networks, in which mutual competition plays a very important role. We show that WTA behavior exists for a fairly general class of competitive neural networks. Sufficient conditions for the network to have a WTA equilibrium are obtained, and rigorous convergence analysis is carried out. The conditions obtained here for WTA behavior provide design guidelines for network implementation and fabrication. We also demonstrate that whenever the network enters the WTA region, it stays in that region and settles exponentially fast to the WTA point. This speeds up decision making: as soon as the network enters the region, the winner can be declared. Finally, we show that this WTA neural network has a self-resetting property, and a resetting principle is proposed. PMID:20215068
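A minimal member of this WTA class is the classic MAXNET-style iteration, in which each unit inhibits all the others until only the largest survives. This is a toy stand-in with an illustrative inhibition strength, not the paper's MOSFET-motivated equations:

```python
# MAXNET-style winner-take-all: mutual lateral inhibition drives every unit
# except the one with the largest initial input to zero.  eps must be smaller
# than 1/(n-1) for the maximum to survive; values here are illustrative.

def maxnet(inputs, eps=0.2, max_iters=1000):
    x = list(inputs)
    for _ in range(max_iters):
        total = sum(x)
        x = [max(0.0, xi - eps * (total - xi)) for xi in x]
        if sum(1 for xi in x if xi > 0.0) <= 1:
            break                   # settled: at most one active unit left
    return x

x = maxnet([1.0, 3.0, 2.0])
winner = max(range(len(x)), key=lambda i: x[i])   # the unit with the largest input
```

The suppressed activities shrink geometrically per iteration, consistent with the exponentially fast settling the abstract describes.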
Stronger neural dynamics capture changes in infants’ visual working memory capacity over development
Perone, Sammy; Simmering, Vanessa R.; Spencer, John P.
2012-01-01
Visual working memory (VWM) capacity has been studied extensively in adults, and methodological advances have enabled researchers to probe capacity limits in infancy using a preferential looking paradigm. Evidence suggests that capacity increases rapidly between 6 and 10 months of age. To understand how the VWM system develops, we must understand the relationship between the looking behavior used to study VWM and underlying cognitive processes. We present a dynamic neural field model that captures both real-time and developmental processes underlying performance. Three simulation experiments show how looking is linked to VWM processes during infancy and how developmental changes in performance could arise through increasing neural connectivity. These results provide insight into the sources of capacity limits and VWM development more generally. PMID:22010897
Reservoir Computing Properties of Neural Dynamics in Prefrontal Cortex.
Enel, Pierre; Procyk, Emmanuel; Quilodran, René; Dominey, Peter Ford
2016-06-01
Primates display a remarkable ability to adapt to novel situations. Determining what is most pertinent in these situations is not always possible based only on the current sensory inputs, and often also depends on recent inputs and behavioral outputs that contribute to internal states. Thus, one can ask how cortical dynamics generate representations of these complex situations. It has been observed that mixed selectivity in cortical neurons contributes to represent diverse situations defined by a combination of the current stimuli, and that mixed selectivity is readily obtained in randomly connected recurrent networks. In this context, these reservoir networks reproduce the highly recurrent nature of local cortical connectivity. Recombining present and past inputs, random recurrent networks from the reservoir computing framework generate mixed selectivity which provides pre-coded representations of an essentially universal set of contexts. These representations can then be selectively amplified through learning to solve the task at hand. We thus explored their representational power and dynamical properties after training a reservoir to perform a complex cognitive task initially developed for monkeys. The reservoir model inherently displayed a dynamic form of mixed selectivity, key to the representation of the behavioral context over time. The pre-coded representation of context was amplified by training a feedback neuron to explicitly represent this context, thereby reproducing the effect of learning and allowing the model to perform more robustly. This second version of the model demonstrates how a hybrid dynamical regime combining spatio-temporal processing of reservoirs, and input driven attracting dynamics generated by the feedback neuron, can be used to solve a complex cognitive task. We compared reservoir activity to neural activity of dorsal anterior cingulate cortex of monkeys which revealed similar network dynamics. We argue that reservoir computing is a
Reservoir Computing Properties of Neural Dynamics in Prefrontal Cortex
Procyk, Emmanuel; Dominey, Peter Ford
2016-01-01
Primates display a remarkable ability to adapt to novel situations. Determining what is most pertinent in these situations is not always possible based only on the current sensory inputs, and often also depends on recent inputs and behavioral outputs that contribute to internal states. Thus, one can ask how cortical dynamics generate representations of these complex situations. It has been observed that mixed selectivity in cortical neurons contributes to represent diverse situations defined by a combination of the current stimuli, and that mixed selectivity is readily obtained in randomly connected recurrent networks. In this context, these reservoir networks reproduce the highly recurrent nature of local cortical connectivity. Recombining present and past inputs, random recurrent networks from the reservoir computing framework generate mixed selectivity which provides pre-coded representations of an essentially universal set of contexts. These representations can then be selectively amplified through learning to solve the task at hand. We thus explored their representational power and dynamical properties after training a reservoir to perform a complex cognitive task initially developed for monkeys. The reservoir model inherently displayed a dynamic form of mixed selectivity, key to the representation of the behavioral context over time. The pre-coded representation of context was amplified by training a feedback neuron to explicitly represent this context, thereby reproducing the effect of learning and allowing the model to perform more robustly. This second version of the model demonstrates how a hybrid dynamical regime combining spatio-temporal processing of reservoirs, and input driven attracting dynamics generated by the feedback neuron, can be used to solve a complex cognitive task. We compared reservoir activity to neural activity of dorsal anterior cingulate cortex of monkeys which revealed similar network dynamics. We argue that reservoir computing is a
Sensorimotor learning biases choice behavior: a learning neural field model for decision making.
Klaes, Christian; Schneegans, Sebastian; Schöner, Gregor; Gail, Alexander
2012-01-01
According to a prominent view of sensorimotor processing in primates, selection and specification of possible actions are not sequential operations. Rather, a decision for an action emerges from competition between different movement plans, which are specified and selected in parallel. For action choices which are based on ambiguous sensory input, the frontoparietal sensorimotor areas are considered part of the common underlying neural substrate for selection and specification of action. These areas have been shown capable of encoding alternative spatial motor goals in parallel during movement planning, and show signatures of competitive value-based selection among these goals. Since the same network is also involved in learning sensorimotor associations, competitive action selection (decision making) should not only be driven by the sensory evidence and expected reward in favor of either action, but also by the subject's learning history of different sensorimotor associations. Previous computational models of competitive neural decision making used predefined associations between sensory input and corresponding motor output. Such hard-wiring does not allow modeling of how decisions are influenced by sensorimotor learning or by changing reward contingencies. We present a dynamic neural field model which learns arbitrary sensorimotor associations with a reward-driven Hebbian learning algorithm. We show that the model accurately simulates the dynamics of action selection with different reward contingencies, as observed in monkey cortical recordings, and that it correctly predicted the pattern of choice errors in a control experiment. With our adaptive model we demonstrate how network plasticity, which is required for association learning and adaptation to new reward contingencies, can influence choice behavior. The field model provides an integrated and dynamic account for the operations of sensorimotor integration, working memory and action selection required for
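The reward-driven Hebbian rule can be illustrated in tabular form, Δw = η · reward · (pre · post). The task mapping, reward values, and selection policy below are hypothetical, and the dynamic-field machinery is omitted:

```python
import random

# Reward-driven Hebbian learning of arbitrary stimulus-action associations,
# the ingredient the field model adds over hard-wired mappings.  This is a
# tabular toy version: the stimulus-action reward contingency, learning rate,
# and epsilon-greedy policy are all hypothetical.

random.seed(0)
n_stim, n_act, eta = 3, 3, 0.1
w = [[0.0] * n_act for _ in range(n_stim)]     # association weights
target = {0: 2, 1: 0, 2: 1}                    # rewarded action per stimulus

for trial in range(2000):
    s = random.randrange(n_stim)
    # epsilon-greedy selection on the learned weights
    if random.random() < 0.1 or all(v == w[s][0] for v in w[s]):
        a = random.randrange(n_act)
    else:
        a = max(range(n_act), key=lambda j: w[s][j])
    r = 1.0 if a == target[s] else -0.2        # reward contingency
    w[s][a] += eta * r                         # Hebbian update gated by reward

learned = {s: max(range(n_act), key=lambda j: w[s][j]) for s in range(n_stim)}
```

Changing the `target` mapping mid-run would force the weights to relearn, which is the kind of reward-contingency change the model is used to study.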
Sensorimotor Learning Biases Choice Behavior: A Learning Neural Field Model for Decision Making
Schöner, Gregor; Gail, Alexander
2012-01-01
According to a prominent view of sensorimotor processing in primates, selection and specification of possible actions are not sequential operations. Rather, a decision for an action emerges from competition between different movement plans, which are specified and selected in parallel. For action choices which are based on ambiguous sensory input, the frontoparietal sensorimotor areas are considered part of the common underlying neural substrate for selection and specification of action. These areas have been shown capable of encoding alternative spatial motor goals in parallel during movement planning, and show signatures of competitive value-based selection among these goals. Since the same network is also involved in learning sensorimotor associations, competitive action selection (decision making) should not only be driven by the sensory evidence and expected reward in favor of either action, but also by the subject's learning history of different sensorimotor associations. Previous computational models of competitive neural decision making used predefined associations between sensory input and corresponding motor output. Such hard-wiring does not allow modeling of how decisions are influenced by sensorimotor learning or by changing reward contingencies. We present a dynamic neural field model which learns arbitrary sensorimotor associations with a reward-driven Hebbian learning algorithm. We show that the model accurately simulates the dynamics of action selection with different reward contingencies, as observed in monkey cortical recordings, and that it correctly predicted the pattern of choice errors in a control experiment. With our adaptive model we demonstrate how network plasticity, which is required for association learning and adaptation to new reward contingencies, can influence choice behavior. The field model provides an integrated and dynamic account for the operations of sensorimotor integration, working memory and action selection required for
A reflexive neural network for dynamic biped walking control.
Geng, Tao; Porr, Bernd; Wörgötter, Florentin
2006-05-01
Biped walking remains a difficult problem, and robot models can greatly facilitate our understanding of the underlying biomechanical principles as well as their neuronal control. The goal of this study is to specifically demonstrate that stable biped walking can be achieved by combining the physical properties of the walking robot with a small, reflex-based neuronal network governed mainly by local sensor signals. Building on earlier work (Taga, 1995; Cruse, Kindermann, Schumm, Dean, & Schmitz, 1998), this study shows that human-like gaits emerge without specific position or trajectory control and that the walker is able to compensate small disturbances through its own dynamical properties. The reflexive controller used here has the following characteristics, which are different from earlier approaches: (1) Control is mainly local. Hence, it uses only two signals (anterior extreme angle and ground contact), which operate at the interjoint level. All other signals operate only at single joints. (2) Neither position control nor trajectory tracking control is used. Instead, the approximate nature of the local reflexes on each joint allows the robot mechanics itself (e.g., its passive dynamics) to contribute substantially to the overall gait trajectory computation. (3) The motor control scheme used in the local reflexes of our robot is more straightforward and has more biological plausibility than that of other robots, because the outputs of the motor neurons in our reflexive controller are directly driving the motors of the joints rather than working as references for position or velocity control. As a consequence, the neural controller and the robot mechanics are closely coupled as a neuromechanical system, and this study emphasizes that dynamically stable biped walking gaits emerge from the coupling between neural computation and physical computation. This is demonstrated by different walking experiments using a real robot as well as by a Poincaré map analysis
Neural dynamic optimization for autonomous aerial vehicle trajectory design
NASA Astrophysics Data System (ADS)
Xu, Peng; Verma, Ajay; Mayer, Richard J.
2007-04-01
Online aerial vehicle trajectory design and reshaping are crucial for a class of autonomous aerial vehicles such as reusable launch vehicles in order to achieve flexibility in real-time flying operations. An aerial vehicle is modeled as a nonlinear multi-input-multi-output (MIMO) system. The inputs include the control parameters and current system states that include velocity and position coordinates of the vehicle. The outputs are the new system states. An ideal trajectory control design system generates a series of control commands to achieve a desired trajectory under various disturbances and vehicle model uncertainties including aerodynamic perturbations caused by geometric damage to the vehicle. Conventional approaches suffer from the nonlinearity of the MIMO system, and the high-dimensionality of the system state space. In this paper, we apply a Neural Dynamic Optimization (NDO) based approach to overcome these difficulties. The core of an NDO model is a multilayer perceptron (MLP) neural network, which generates the control parameters online. The inputs of the MLP are the time-variant states of the MIMO systems. The outputs of the MLP and the control parameters will be used by the MIMO to generate new system states. By such a formulation, an NDO model approximates the time-varying optimal feedback solution.
Neural dynamics of change detection in crowded acoustic scenes.
Sohoglu, Ediz; Chait, Maria
2016-02-01
Two key questions concerning change detection in crowded acoustic environments are the extent to which cortical processing is specialized for different forms of acoustic change and when in the time-course of cortical processing neural activity becomes predictive of behavioral outcomes. Here, we address these issues by using magnetoencephalography (MEG) to probe the cortical dynamics of change detection in ongoing acoustic scenes containing as many as ten concurrent sources. Each source was formed of a sequence of tone pips with a unique carrier frequency and temporal modulation pattern, designed to mimic the spectrotemporal structure of natural sounds. Our results show that listeners are more accurate and quicker to detect the appearance (than disappearance) of an auditory source in the ongoing scene. Underpinning this behavioral asymmetry are change-evoked responses differing not only in magnitude and latency, but also in their spatial patterns. We find that even the earliest (~50 ms) cortical response to change is predictive of behavioral outcomes (detection times), consistent with the hypothesized role of local neural transients in supporting change detection.
Dynamic construction of the neural networks underpinning empathy for pain.
Betti, Viviana; Aglioti, Salvatore Maria
2016-04-01
When people witness or imagine the pain of another person, their nervous system may react as if they were feeling that pain themselves. Early neuroscientific evidence indicates that the firsthand and vicarious experiences of pain share largely overlapping neural structures, which typically correspond to the lateral and medial brain regions that encode the sensory and the affective qualities of pain. Such neural circuitry is highly malleable and allows people to flexibly adjust the empathic behavior depending on social and personal factors. Recent views posit, however, that the brain can be conceptualized as a complex system, in which behavior emerges from the interaction between functionally connected brain regions, organized into large-scale networks. Beyond the classical modular view of the brain, here we suggest that empathic behavior may be understood through a dynamic network-based approach where the cortical circuits associated with the experience of pain flexibly change in order to code self- and other-related emotions and to intrinsically map our mentality to empathetically react to others. PMID:26877105
Bojak, Ingo; Stoyanov, Zhivko V.; Liley, David T. J.
2015-01-01
Burst suppression in the electroencephalogram (EEG) is a well-described phenomenon that occurs during deep anesthesia, as well as in a variety of congenital and acquired brain insults. Classically it is thought of as spatially synchronous, quasi-periodic bursts of high amplitude EEG separated by low amplitude activity. However, its characterization as a “global brain state” has been challenged by recent results obtained with intracranial electrocorticography. Not only does it appear that burst suppression activity is highly asynchronous across cortex, but also that it may occur in isolated regions of circumscribed spatial extent. Here we outline a realistic neural field model for burst suppression by adding a slow process of synaptic resource depletion and recovery, which is able to reproduce qualitatively the empirically observed features during general anesthesia at the whole cortex level. Simulations reveal heterogeneous bursting over the model cortex and complex spatiotemporal dynamics during simulated anesthetic action, and provide forward predictions of neuroimaging signals for subsequent empirical comparisons and more detailed characterization. Because burst suppression corresponds to a dynamical end-point of brain activity, theoretically accounting for its spatiotemporal emergence will vitally contribute to efforts aimed at clarifying whether a common physiological trajectory is induced by the actions of general anesthetic agents. We have taken a first step in this direction by showing that a neural field model can qualitatively match recent experimental data that indicate spatial differentiation of burst suppression activity across cortex. PMID:25767438
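The depletion/recovery mechanism can be caricatured with just two variables: fast population activity u and a slow synaptic resource r that multiplies the recurrent gain. All parameters below are illustrative, and the spatial field itself is omitted:

```python
import math

# Two-variable caricature of the burst-suppression mechanism: fast population
# activity u driven by resource-scaled recurrent excitation, and a slow
# synaptic resource r that is depleted by activity and recovers during
# suppression.  This sketches the mechanism only, not the spatial neural field
# model; all parameter values are illustrative.

def f(x):                                       # population firing-rate function
    return 1.0 / (1.0 + math.exp(-8.0 * (x - 1.0)))

tau_u, tau_r = 0.05, 2.0                        # fast activity, slow resource (s)
w, I, k = 3.0, 0.6, 8.0                         # recurrent gain, drive, depletion rate
dt, T = 0.005, 60.0

u, r, bursts, prev_high = 0.0, 1.0, 0, False
for _ in range(int(T / dt)):
    u += dt * (-u + f(w * r * u + I)) / tau_u   # resource scales recurrent input
    r += dt * ((1.0 - r) - k * r * u) / tau_r   # depleted by activity, recovers at rest
    high = u > 0.5
    if high and not prev_high:
        bursts += 1                             # count burst onsets
    prev_high = high
```

With these values the system settles into a relaxation cycle: bursts deplete the resource until the active state collapses, and quiet recovery re-arms the next burst.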
Neural dynamic programming applied to rotorcraft flight control and reconfiguration
NASA Astrophysics Data System (ADS)
Enns, Russell James
This dissertation introduces a new rotorcraft flight control methodology based on a relatively new form of neural control, neural dynamic programming (NDP). NDP is an on-line learning control scheme that is in its infancy and has only been applied to simple systems, such as those possessing a single control and a handful of states. This dissertation builds on the existing NDP concept to provide a comprehensive control system framework that can perform well as a learning controller for more realistic and practical systems of higher dimension such as helicopters. To accommodate such complex systems, the dissertation introduces the concept of a trim network that is seamlessly integrated into the NDP control structure and is also trained using this structure. This is the first time that neural networks have been applied to the helicopter control problem as a direct form of control without using other controller methodologies to augment the neural controller and without using order reducing simplifications such as axes decoupling. The dissertation focuses on providing a viable alternative helicopter control system design approach rather than providing extensive comparisons among various available controllers. As such, results showing the system's ability to stabilize the helicopter and to perform command tracking, without explicit comparison to other methods, are presented. In this research, design robustness was addressed by performing simulations under various disturbance conditions. All designs were tested using FLYRT, a sophisticated, industrial-scale, nonlinear, validated model of the Apache helicopter. Though illustrated for helicopters, the NDP control system framework should be applicable to general purpose multi-input multi-output (MIMO) control. In addition, this dissertation tackles the helicopter reconfigurable flight control problem, finding control solutions when the aircraft, and in particular its control actuators, are damaged. Such solutions have
Continuum neural dynamics models for visual object identification
NASA Astrophysics Data System (ADS)
Singh, Vijay; Tchernookov, Martin; Nemenman, Ilya
2013-03-01
Visual object identification has remained one of the most challenging problems even after decades of research. Most current models of the visual cortex represent neurons as discrete elements in a largely feedforward network arrangement, and they are generally very specific in the objects they can identify. We develop a continuum model of recurrent, nonlinear neural dynamics in the primary visual cortex, incorporating connectivity patterns and other experimentally observed features of the cortex. The model has an interesting correspondence to the Landau-de Gennes theory of a nematic liquid crystal in two dimensions. We use collective spatiotemporal excitations of the model cortex as a signal for segmentation of contiguous objects from the background clutter. The model is capable of suppressing clutter in images and filling in occluded elements of object contours, resulting in high-precision, high-recall identification of large objects from cluttered scenes. This research has been partially supported by the ARO grant No. 60704-NS-II.
Nonlinear adaptive trajectory tracking using dynamic neural networks.
Poznyak, A S; Yu, W; Sanchez, E N; Perez, J P
1999-01-01
In this paper, adaptive nonlinear identification and trajectory tracking are discussed via dynamic neural networks. By means of a Lyapunov-like analysis we determine stability conditions for the identification error. Then we analyze the trajectory tracking error with a local optimal controller. An algebraic Riccati equation and a differential one are used for the identification and tracking error analyses. As our main original contributions, we establish two theorems: the first gives a bound for the identification error and the second establishes a bound for the tracking error. We illustrate the effectiveness of these results with two examples: a second-order relay system with multiple isolated equilibrium points and the chaotic system given by the Duffing equation. PMID:18252641
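To make the Lyapunov-based identification idea concrete, here is a deliberately scalar sketch, not Poznyak et al.'s full algorithm: a series-parallel dynamic-network identifier with a single adaptive weight, where the plant, the gain gamma, and the tanh regressor are all illustrative assumptions. The update law follows from the Lyapunov function V = e^2/2 + (w - 1.5)^2/(2*gamma), whose derivative along trajectories is -2e^2 <= 0:

```python
import numpy as np

def identify(gamma=5.0, dt=0.01, T=50.0):
    """Series-parallel identifier for the hypothetical scalar plant
    dx/dt = -2x + 1.5*tanh(x) + u.  The adaptive weight w multiplies
    the tanh regressor; dw/dt = -gamma*e*tanh(x) is the Lyapunov-derived
    gradient law, with identification error e = xh - x."""
    x = xh = 0.0
    w = 0.0
    for k in range(int(T / dt)):
        u = np.sin(k * dt)                  # persistently exciting input
        e = xh - x                          # identification error
        phi = np.tanh(x)                    # regressor (series-parallel form)
        x += dt * (-2 * x + 1.5 * phi + u)  # true plant
        xh += dt * (-2 * xh + w * phi + u)  # identifier model
        w += dt * (-gamma * e * phi)        # adaptive law
    return w, e
```

With a persistently exciting input, the weight converges toward the plant's true coefficient (1.5 here) and the identification error decays, mirroring the bounded-error guarantee the paper proves for the general case.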
Direct Adaptive Aircraft Control Using Dynamic Cell Structure Neural Networks
NASA Technical Reports Server (NTRS)
Jorgensen, Charles C.
1997-01-01
A Dynamic Cell Structure (DCS) neural network was developed which learns topology-representing networks (TRNs) of F-15 aircraft aerodynamic stability and control derivatives. The network is integrated into a direct adaptive tracking controller. The combination produces a robust adaptive architecture capable of handling multiple accident and off-nominal flight scenarios. This paper describes the DCS network and modifications to the parameter estimation procedure. The work represents one step towards an integrated real-time reconfiguration control architecture for rapid prototyping of new aircraft designs. Performance was evaluated using three off-line benchmarks and an on-line nonlinear virtual reality simulation. Flight control was evaluated under scenarios including differential stabilator lock, soft sensor failure, control and stability derivative variations, and air turbulence.
Contextual Novelty Modulates the Neural Dynamics of Reward Anticipation
Bunzeck, Nico; Guitart-Masip, Marc; Dolan, Ray J.; Duzel, Emrah
2011-01-01
We investigated how rapidly the reward-predicting properties of visual cues are signaled in the human brain and the extent to which these reward prediction signals are contextually modifiable. In a magnetoencephalography (MEG) study, we presented participants with fractal visual cues that predicted monetary rewards with different probabilities. These cues were presented in the temporal context of a preceding novel or familiar image of a natural scene. Starting at ~100 ms after cue onset, reward probability was signaled in the event-related fields (ERFs) over temporo-occipital sensors and in the power of theta (5-8 Hz) and beta (20-30 Hz) band oscillations over frontal sensors. While theta power decreased with reward probability, beta power showed the opposite effect. Thus, in humans anticipatory reward responses are generated rapidly, within 100 ms after the onset of reward-predicting cues, which is similar to the timing established in non-human primates. Contextual novelty enhanced the reward anticipation responses in both ERFs and beta oscillations starting at ~100 ms after cue onset. This very early context effect is compatible with a physiological model that invokes the mediation of a hippocampal-VTA loop, according to which novelty modulates neural response properties within the reward circuitry. We conclude that the neural processing of cues that predict future rewards is temporally highly efficient and contextually modifiable. PMID:21900560
Spatial interactions in the superior colliculus predict saccade behavior in a neural field model.
Marino, Robert A; Trappenberg, Thomas P; Dorris, Michael; Munoz, Douglas P
2012-02-01
During natural vision, eye movements are dynamically controlled by the combinations of goal-related top-down (TD) and stimulus-related bottom-up (BU) neural signals that map onto objects or locations of interest in the visual world. In primates, both BU and TD signals converge in many areas of the brain, including the intermediate layers of the superior colliculus (SCi), a midbrain structure that contains a retinotopically coded map for saccades. How TD and BU signals combine or interact within the SCi map to influence saccades remains poorly understood and actively debated. It has been proposed that winner-take-all competition between these signals occurs dynamically within this map to determine the next location for gaze. Here, we examine how TD and BU signals interact spatially within an artificial two-dimensional dynamic winner-take-all neural field model of the SCi to influence saccadic reaction time (SRT). We measured point images (spatially organized population activity on the SC map) physiologically to inform the TD and BU model parameters. In this model, TD and BU signals interacted nonlinearly within the SCi map to influence SRT via changes to the (1) spatial size or extent of individual signals, (2) peak magnitude of individual signals, (3) total number of competing signals, and (4) total spatial separation between signals in the visual field. This model reproduced previous behavioral studies of TD and BU influences on SRT and accounted for multiple inconsistencies between them. This was achieved by demonstrating how, under different experimental conditions, the spatial interactions of TD and BU signals can lead to either increases or decreases in SRT. Our results suggest that dynamic winner-take-all modeling with local excitation and distal inhibition in two dimensions accurately reflects both the physiological activity within the SCi map and the behavioral changes in SRT that result from BU and TD manipulations. PMID:21942761
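A one-dimensional analogue of winner-take-all field dynamics can be sketched as follows. This is not the study's two-dimensional SC model with physiologically measured point images; it is a minimal field with short-range excitation and uniform distal inhibition, and every parameter value is an illustrative assumption:

```python
import numpy as np

def wta_field(strong=1.5, weak=0.7, n=100, dt=0.05, steps=800):
    """1D neural field with short-range excitation and global (distal)
    inhibition.  Two Gaussian inputs at sites 25 and 75 compete; the
    stronger one ignites a self-sustaining bump whose inhibition keeps
    the weaker input from forming a bump of its own."""
    idx = np.arange(n)
    d2 = (idx[:, None] - idx[None, :]) ** 2
    # local Gaussian excitation minus a uniform inhibitory background
    W = 0.6 * np.exp(-d2 / (2 * 3.0**2)) - 0.235
    rate = lambda u: 1.0 / (1.0 + np.exp(-5.0 * (u - 0.5)))
    I = (strong * np.exp(-(idx - 25)**2 / 18.0)
         + weak * np.exp(-(idx - 75)**2 / 18.0))
    u = np.zeros(n)
    for _ in range(steps):
        u += dt * (-u + W @ rate(u) + I)   # field dynamics (symmetric W)
    return rate(u)
```

Because the connectivity is symmetric and the gain function monotone, the field settles to a fixed point rather than oscillating; with these assumed gains that fixed point is a single bump at the stronger input, the qualitative winner-take-all behavior the model class relies on.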
Track and Field Dynamics. Second Edition.
ERIC Educational Resources Information Center
Ecker, Tom
Track and field coaching is considered an art embodying three sciences--physiology, psychology, and dynamics. It is the area of dynamics, the branch of physics that deals with the action of force on bodies, that is central to this book. Although the book does not cover the entire realm of dynamics, the laws and principles that relate directly to…
Bursting dynamics remarkably improve the performance of neural networks on liquid computing.
Li, Xiumin; Chen, Qing; Xue, Fangzheng
2016-10-01
Burst firings are functionally important behaviors displayed by neural circuits and play a primary role in the reliable transmission of electrical signals for neuronal communication. However, with respect to the computational capability of neural networks, most relevant studies are based on the spiking dynamics of individual neurons, while burst firing is seldom considered. In this paper, we carry out a comprehensive study to compare the performance of spiking and bursting dynamics on the capability of liquid computing, which is an effective approach for intelligent computation by neural networks. The results show that neural networks with bursting dynamics have much better computational performance than those with spiking dynamics, especially for complex computational tasks. Further analysis demonstrates that the fast firing pattern of bursting dynamics can obviously enhance the efficiency of synaptic integration from pre-neurons both temporally and spatially. This indicates that bursting dynamics can significantly enhance the complexity of network activity, implying high efficiency in information processing. PMID:27668020
di Volo, Matteo; Burioni, Raffaella; Casartelli, Mario; Livi, Roberto; Vezzani, Alessandro
2016-01-01
We study the dynamics of networks with inhibitory and excitatory leaky integrate-and-fire neurons with short-term synaptic plasticity in the presence of depressive and facilitating mechanisms. The dynamics is analyzed by a heterogeneous mean-field approximation, which allows us to keep track of the effects of structural disorder in the network. We describe the complex behavior of different classes of excitatory and inhibitory components, which give rise to a rich dynamical phase diagram as a function of the fraction of inhibitory neurons. Using the same mean-field approach, we study and solve a global inverse problem: reconstructing the degree probability distributions of the inhibitory and excitatory components and the fraction of inhibitory neurons from the knowledge of the average synaptic activity field. This approach unveils new perspectives on the numerical study of neural network dynamics and the possibility of using these models as a test bed for the analysis of experimental data. PMID:26871090
Control of Complex Dynamic Systems by Neural Networks
NASA Technical Reports Server (NTRS)
Spall, James C.; Cristion, John A.
1993-01-01
This paper considers the use of neural networks (NNs) in controlling a nonlinear, stochastic system with unknown process equations. The NN is used to model the resulting unknown control law. The approach here is based on using the output error of the system to train the NN controller without the need to construct a separate model (NN or other type) for the unknown process dynamics. To implement such a direct adaptive control approach, it is required that connection weights in the NN be estimated while the system is being controlled. As a result of the feedback of the unknown process dynamics, however, it is not possible to determine the gradient of the loss function for use in standard (back-propagation-type) weight estimation algorithms. Therefore, this paper considers the use of a new stochastic approximation algorithm for this weight estimation, which is based on a 'simultaneous perturbation' gradient approximation that only requires the system output error. It is shown that this algorithm can greatly enhance the efficiency over more standard stochastic approximation algorithms based on finite-difference gradient approximations.
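The 'simultaneous perturbation' approximation referred to here is Spall's SPSA: the full gradient is estimated from only two loss evaluations per step, regardless of the dimension of the parameter vector. A minimal sketch with standard textbook gain schedules (the quadratic test loss and all constants are illustrative assumptions, not the paper's control setup):

```python
import numpy as np

def spsa_minimize(loss, theta, a=0.1, c=0.1, iters=1000, seed=0):
    """Simultaneous perturbation stochastic approximation (SPSA).
    Each iteration perturbs ALL coordinates at once with a random
    Rademacher vector, so the gradient estimate costs two loss
    evaluations independent of the dimension of theta."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta, dtype=float)
    for k in range(1, iters + 1):
        ak = a / k**0.602                    # step-size schedule
        ck = c / k**0.101                    # perturbation-size schedule
        delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher draws
        # two-sided simultaneous perturbation gradient estimate
        g = (loss(theta + ck * delta) - loss(theta - ck * delta)) / (2 * ck * delta)
        theta -= ak * g
    return theta

# e.g. minimize a 4D quadratic centered at 3:
# spsa_minimize(lambda t: float(np.sum((t - 3.0)**2)), np.zeros(4))
```

The contrast the paper draws is with finite-difference schemes, which need two evaluations per coordinate; SPSA's two-evaluations-total estimate is what makes it attractive when only a scalar output error is observable.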
Autonomic neural control of heart rate during dynamic exercise: revisited
White, Daniel W; Raven, Peter B
2014-01-01
The accepted model of autonomic control of heart rate (HR) during dynamic exercise indicates that the initial increase is entirely attributable to the withdrawal of parasympathetic nervous system (PSNS) activity and that subsequent increases in HR are entirely attributable to increases in cardiac sympathetic activity. In the present review, we sought to re-evaluate the model of autonomic neural control of HR in humans during progressive increases in dynamic exercise workload. We analysed data from both new and previously published studies involving baroreflex stimulation and pharmacological blockade of the autonomic nervous system. Results indicate that the PSNS remains functionally active throughout exercise and that increases in HR from rest to maximal exercise result from an increasing workload-related transition from a 4:1 vagal–sympathetic balance to a 4:1 sympatho–vagal balance. Furthermore, the beat-to-beat autonomic reflex control of HR was found to be dependent on the ability of the PSNS to modulate the HR as it was progressively restrained by increasing workload-related sympathetic nerve activity. In conclusion: (i) increases in exercise workload-related HR are not caused by a total withdrawal of the PSNS followed by an increase in sympathetic tone; (ii) reciprocal antagonism is key to the transition from vagal to sympathetic dominance; and (iii) resetting of the arterial baroreflex causes immediate exercise-onset reflexive increases in HR, which are parasympathetically mediated, followed by slower increases in sympathetic tone as workloads are increased. PMID:24756637
NASA Astrophysics Data System (ADS)
Chiel, Hillel J.; Thomas, Peter J.
2011-12-01
Tracing technologies back in time to their scientific and mathematical origins reveals surprising connections between the pure pursuit of knowledge and the opportunities afforded by that pursuit for new and unexpected applications. For example, Einstein's desire to eliminate the disparity between electricity and magnetism in Maxwell's equations impelled him to develop the special theory of relativity (Einstein 1922, p 41: 'The advance in method arises from the fact that the electric and magnetic fields lose their separate existences through the relativity of motion. A field which appears to be purely an electric field, judged from one system, has also magnetic field components when judged from another inertial system.'). His conviction that there should be no privileged inertial frame of reference (Einstein 1922, p 58: 'The possibility of explaining the numerical equality of inertia and gravitation by the unity of their nature gives to the general theory of relativity, according to my conviction, such a superiority over the conceptions of classical mechanics, that all the difficulties encountered must be considered as small in comparison with this progress.') further impelled him to utilize the non-Euclidean geometry originally developed by Riemann and others as a purely hypothetical alternative to classical geometry as the foundation for the general theory of relativity. Nowadays, anyone who depends on a global positioning system (which now includes many people who own smart phones) uses a system that would not work effectively without incorporating corrections from both special and general relativity (Ashby 2003). As another example, G H Hardy famously proclaimed his conviction that his work on number theory, which he pursued for the sheer love of exploring the beauty of mathematical structures, was unlikely to find any practical applications (Hardy 1940, pp 135-6: 'The general conclusion, surely, stands out plainly enough. If useful knowledge
Connecting mean field models of neural activity to EEG and fMRI data.
Bojak, Ingo; Oostendorp, Thom F; Reid, Andrew T; Kötter, Rolf
2010-06-01
Progress in functional neuroimaging of the brain increasingly relies on the integration of data from complementary imaging modalities in order to improve spatiotemporal resolution and interpretability. However, the usefulness of merely statistical combinations is limited, since neural signal sources differ between modalities and are related non-trivially. We demonstrate here that a mean field model of brain activity can simultaneously predict EEG and fMRI BOLD with proper signal generation and expression. Simulations are shown using a realistic head model based on structural MRI, which includes both dense short-range background connectivity and long-range specific connectivity between brain regions. The distribution of modeled neural masses is comparable to the spatial resolution of fMRI BOLD, and the temporal resolution of the modeled dynamics, importantly including activity conduction, matches the fastest known EEG phenomena. The creation of a cortical mean field model with anatomically sound geometry, extensive connectivity, and proper signal expression is an important first step towards the model-based integration of multimodal neuroimages.
Dynamical response to a stationary tidal field
NASA Astrophysics Data System (ADS)
Landry, Philippe; Poisson, Eric
2015-12-01
We demonstrate that a slowly rotating compact body subjected to a stationary tidal field undergoes a dynamical response, in which the fluid variables and the interior metric vary on the time scale of the rotation period. This dynamical response requires the tidal field to have a gravitomagnetic component generated by external mass currents; the response to a gravitoelectric tidal field is stationary. We confirm that in a calculation carried out to first order in the body's rotation, the exterior geometry bears no trace of this internal dynamics; it remains stationary in spite of the time-dependent interior.
Ca^2+ Dynamics and Propagating Waves in Neural Networks with Excitatory and Inhibitory Neurons.
NASA Astrophysics Data System (ADS)
Bondarenko, Vladimir E.
2008-03-01
Dynamics of neural spikes, intracellular Ca^2+, and Ca^2+ in intracellular stores was investigated both in isolated Chay neurons and in neurons coupled in networks. Three types of neural networks were studied: a purely excitatory neural network, with only excitatory (AMPA) synapses; a purely inhibitory neural network, with only inhibitory (GABA) synapses; and a hybrid neural network, with both AMPA and GABA synapses. In the hybrid neural network, the ratio of excitatory to inhibitory neurons was 4:1. For each case, we considered two types of connections, "all-with-all" and 20 connections per neuron. Each neural network contained 100 neurons with randomly distributed connection strengths. In the neural networks with "all-with-all" connections and AMPA/GABA synapses, an increase in average synaptic strength yielded bursting activity with an increased/decreased number of spikes per burst. The neural bursts and Ca^2+ transients were synchronous at relatively large connection strengths despite the random connection strengths. Simulations of the neural networks with 20 connections per neuron and with only AMPA synapses showed synchronous oscillations, while the neural networks with GABA or hybrid synapses generated propagating waves of membrane potential and Ca^2+ transients.
Hoellinger, Thomas; Petieau, Mathieu; Duvinage, Matthieu; Castermans, Thierry; Seetharaman, Karthik; Cebolla, Ana-Maria; Bengoetxea, Ana; Ivanenko, Yuri; Dan, Bernard; Cheron, Guy
2013-01-01
The existence of dedicated neuronal modules such as those organized in the cerebral cortex, thalamus, basal ganglia, cerebellum, or spinal cord raises the question of how these functional modules are coordinated for appropriate motor behavior. Study of human locomotion offers an interesting field for addressing this central question. The coordination of the elevation of the 3 leg segments under a planar covariation rule (Borghese et al., 1996) was recently modeled (Barliya et al., 2009) by phase-adjusted simple oscillators shedding new light on the understanding of the central pattern generator (CPG) processing relevant oscillation signals. We describe the use of a dynamic recurrent neural network (DRNN) mimicking the natural oscillatory behavior of human locomotion for reproducing the planar covariation rule in both legs at different walking speeds. Neural network learning was based on sinusoid signals integrating frequency and amplitude features of the first three harmonics of the sagittal elevation angles of the thigh, shank, and foot of each lower limb. We verified the biological plausibility of the neural networks. Best results were obtained with oscillations extracted from the first three harmonics in comparison to oscillations outside the harmonic frequency peaks. Physiological replication steadily increased with the number of neuronal units from 1 to 80, where similarity index reached 0.99. Analysis of synaptic weighting showed that the proportion of inhibitory connections consistently increased with the number of neuronal units in the DRNN. This emerging property in the artificial neural networks resonates with recent advances in neurophysiology of inhibitory neurons that are involved in central nervous system oscillatory activities. The main message of this study is that this type of DRNN may offer a useful model of physiological central pattern generator for gaining insights in basic research and developing clinical applications.
Temporal dynamics of a homeostatic pathway controlling neural network activity
Bateup, Helen S.; Denefrio, Cassandra L.; Johnson, Caroline A.; Saulnier, Jessica L.; Sabatini, Bernardo L.
2013-01-01
Neurons use a variety of mechanisms to homeostatically regulate neural network activity in order to maintain firing in a bounded range. One such process involves the bi-directional modulation of excitatory synaptic drive in response to chronic changes in network activity. Down-scaling of excitatory synapses in response to high activity requires Arc-dependent endocytosis of glutamate receptors. However, the temporal dynamics and signaling pathways regulating Arc during homeostatic plasticity are not well understood. Here we determine the relative contribution of transcriptional and translational control in the regulation of Arc, the signaling pathways responsible for the activity-dependent production of Arc, and the time course of these signaling events as they relate to the homeostatic adjustment of network activity in hippocampal neurons. We find that an ERK1/2-dependent transcriptional pathway active within 1–2 h of up-regulated network activity induces Arc leading to a restoration of network spiking rates within 12 h. Under basal and low activity conditions, specialized mechanisms are in place to rapidly degrade Arc mRNA and protein such that they have half-lives of less than 1 h. In addition, we find that while mTOR signaling is regulated by network activity on a similar time scale, mTOR-dependent translational control is not a major regulator of Arc production or degradation suggesting that the signaling pathways underlying homeostatic plasticity are distinct from those mediating synapse-specific forms of synaptic depression. PMID:24065881
Neural dynamics for landmark orientation and angular path integration.
Seelig, Johannes D; Jayaraman, Vivek
2015-05-14
Many animals navigate using a combination of visual landmarks and path integration. In mammalian brains, head direction cells integrate these two streams of information by representing an animal's heading relative to landmarks, yet maintaining their directional tuning in darkness based on self-motion cues. Here we use two-photon calcium imaging in head-fixed Drosophila melanogaster walking on a ball in a virtual reality arena to demonstrate that landmark-based orientation and angular path integration are combined in the population responses of neurons whose dendrites tile the ellipsoid body, a toroidal structure in the centre of the fly brain. The neural population encodes the fly's azimuth relative to its environment, tracking visual landmarks when available and relying on self-motion cues in darkness. When both visual and self-motion cues are absent, a representation of the animal's orientation is maintained in this network through persistent activity, a potential substrate for short-term memory. Several features of the population dynamics of these neurons and their circular anatomical arrangement are suggestive of ring attractors, network structures that have been proposed to support the function of navigational brain circuits. PMID:25971509
Dynamical recurrent neural networks--towards environmental time series prediction.
Aussem, A; Murtagh, F; Sarazin, M
1995-06-01
Dynamical Recurrent Neural Networks (DRNN) (Aussem 1995a) are a class of fully recurrent networks obtained by modeling synapses as autoregressive filters. By virtue of their internal dynamics, these networks approximate the underlying law governing the time series by a system of nonlinear difference equations of internal variables. They therefore provide history-sensitive forecasts without having to be explicitly fed with external memory. The model is trained by a local and recursive error propagation algorithm called temporal-recurrent-backpropagation. The efficiency of the procedure benefits from the exponential decay of the gradient terms backpropagated through the adjoint network. We assess the predictive ability of the DRNN model with meteorological and astronomical time series recorded around the candidate observation sites for the future VLT telescope. The hope is that reliable environmental forecasts provided with the model will allow the modern telescopes to be preset, a few hours in advance, in the most suited instrumental mode. In this perspective, the model is first appraised on precipitation measurements with traditional nonlinear AR and ARMA techniques using feedforward networks. Then we tackle a complex problem, namely the prediction of astronomical seeing, known to be a very erratic time series. A fuzzy coding approach is used to reduce the complexity of the underlying laws governing the seeing. Then, a fuzzy correspondence analysis is carried out to explore the internal relationships in the data. Based on a carefully selected set of meteorological variables at the same time-point, a nonlinear multiple regression, termed nowcasting (Murtagh et al. 1993, 1995), is carried out on the fuzzily coded seeing records. The DRNN is shown to outperform the fuzzy k-nearest neighbors method. PMID:7496587
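The core modeling idea, a synapse acting as an autoregressive filter that gives the unit internal memory, can be sketched with a first-order IIR stage (the coefficient values and the tanh output stage are illustrative assumptions; the DRNN's temporal-recurrent-backpropagation training is not reproduced):

```python
import numpy as np

def ar_synapse_response(x, w=1.0, a=0.7):
    """Synapse modeled as a first-order autoregressive (IIR) filter,
    s[t] = a*s[t-1] + w*x[t], followed by a squashing nonlinearity.
    The decaying feedback term a*s[t-1] is what gives the unit a
    built-in, exponentially fading memory of past inputs."""
    s = np.zeros(len(x))
    for t in range(len(x)):
        s[t] = a * (s[t - 1] if t else 0.0) + w * x[t]
    return np.tanh(s)
```

Feeding an impulse shows the history sensitivity directly: the response outlives the input and decays geometrically with the feedback coefficient, which is why such networks forecast without an explicit external memory buffer.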
Neural Dynamics of Learning Sound—Action Associations
McNamara, Adam; Buccino, Giovanni; Menz, Mareike M.; Gläscher, Jan; Wolbers, Thomas; Baumgärtner, Annette; Binkofski, Ferdinand
2008-01-01
A motor component is prerequisite to any communicative act, as one must inherently move to communicate. To learn to make a communicative act, the brain must be able to dynamically associate arbitrary percepts to the neural substrate underlying the prerequisite motor activity. We aimed to investigate whether brain regions involved in complex gestures (ventral pre-motor cortex, Brodmann Area 44) were involved in mediating association between novel abstract auditory stimuli and novel gestural movements. In a functional magnetic resonance imaging (fMRI) study we asked participants to learn associations between previously unrelated novel sounds and meaningless gestures inside the scanner. We use functional connectivity analysis to eliminate the often present confound of 'strategic covert naming' when dealing with BA44 and to rule out effects of non-specific reductions in signal. Brodmann Area 44, a region incorporating Broca's region, showed strong, bilateral, negative correlation of BOLD (blood oxygen level dependent) response with learning of sound-action associations during data acquisition. Left-inferior-parietal-lobule (l-IPL) and bilateral loci in and around visual area V5, right-orbital-frontal-gyrus, right-hippocampus, left-para-hippocampus, right-head-of-caudate, right-insula and left-lingual-gyrus also showed decreases in BOLD response with learning. Concurrent with these decreases in BOLD response, an increasing connectivity between areas of the imaged network as well as the right-middle-frontal-gyrus with rising learning performance was revealed by a psychophysiological interaction (PPI) analysis. The increasing connectivity therefore occurs within an increasingly energy-efficient network as learning proceeds. The strongest learning-related connectivity between regions was found when analysing BA44 and l-IPL seeds. The results clearly show that BA44 and l-IPL are dynamically involved in linking gesture and sound and therefore provide evidence that one of the
Track and Field: Technique Through Dynamics.
ERIC Educational Resources Information Center
Ecker, Tom
This book was designed to aid in applying the laws of dynamics to the sport of track and field, event by event. It begins by tracing the history of the discoveries of the laws of motion and the principles of dynamics, with explanations of commonly used terms derived from the vocabularies of the physical sciences. The principles and laws of…
Oscillatory phase dynamics in neural entrainment underpin illusory percepts of time.
Herrmann, Björn; Henry, Molly J; Grigutsch, Maren; Obleser, Jonas
2013-10-01
Neural oscillatory dynamics are a candidate mechanism to steer perception of time and temporal rate change. While oscillator models of time perception are strongly supported by behavioral evidence, a direct link to neural oscillations and oscillatory entrainment has not yet been provided. In addition, it has thus far remained unaddressed how context-induced illusory percepts of time are coded for in oscillator models of time perception. To investigate these questions, we used magnetoencephalography and examined the neural oscillatory dynamics that underpin pitch-induced illusory percepts of temporal rate change. Human participants listened to frequency-modulated sounds that varied over time in both modulation rate and pitch, and judged the direction of rate change (decrease vs increase). Our results demonstrate distinct neural mechanisms of rate perception: Modulation rate changes directly affected listeners' rate percept as well as the exact frequency of the neural oscillation. However, pitch-induced illusory rate changes were unrelated to the exact frequency of the neural responses. The rate change illusion was instead linked to changes in neural phase patterns, which allowed for single-trial decoding of percepts. That is, illusory underestimations or overestimations of perceived rate change were tightly coupled to increased intertrial phase coherence and changes in cerebro-acoustic phase lag. The results provide insight on how illusory percepts of time are coded for by neural oscillatory dynamics. PMID:24089487
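Intertrial phase coherence, on which the single-trial decoding claim rests, has a compact standard definition: the length of the mean resultant vector of single-trial phases at each time point. A sketch with synthetic phases (the trial counts and phase layout are illustrative, not the study's MEG data):

```python
import numpy as np

def intertrial_phase_coherence(phases):
    """Intertrial phase coherence (ITPC) per time point: the length of
    the mean resultant vector of single-trial phases.  1 means phases
    are perfectly aligned across trials; near 0 means they are
    uniformly scattered.  `phases` is a (trials x time) array of
    instantaneous phases, e.g. from a Hilbert or wavelet transform."""
    return np.abs(np.mean(np.exp(1j * np.asarray(phases)), axis=0))

# synthetic example: 40 trials, phases aligned in the first 50 samples
# and uniformly random in the last 50
rng = np.random.default_rng(1)
aligned = np.tile(np.linspace(0, np.pi, 50), (40, 1))   # same phase each trial
scattered = rng.uniform(-np.pi, np.pi, size=(40, 50))
itpc = intertrial_phase_coherence(np.concatenate([aligned, scattered], axis=1))
```

High ITPC in the aligned segment versus near-zero ITPC in the scattered segment is the kind of contrast that, per the abstract, tracked illusory under- versus overestimation of rate change.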
Information field dynamics for simulation scheme construction
NASA Astrophysics Data System (ADS)
Enßlin, Torsten A.
2013-01-01
Information field dynamics (IFD) is introduced here as a framework to derive numerical schemes for the simulation of physical and other fields without assuming a particular subgrid structure as many schemes do. IFD constructs an ensemble of nonparametric subgrid field configurations from the combination of the data in computer memory, representing constraints on possible field configurations, and prior assumptions on the subgrid field statistics. Each of these field configurations can formally be evolved to a later moment since any differential operator of the dynamics can act on fields living in continuous space. However, these virtually evolved fields again require representation by data in computer memory. The maximum entropy principle of information theory guides the construction of updated data sets via entropic matching, optimally representing these field configurations at the later time. The field dynamics thereby become represented by a finite set of evolution equations for the data that can be solved numerically. The subgrid dynamics is thereby treated within auxiliary analytic considerations. The resulting scheme acts solely on the data space. It should provide a more accurate description of the physical field dynamics than simulation schemes constructed ad hoc, due to the more rigorous accounting of subgrid physics and the space discretization process. Assimilation of measurement data into an IFD simulation is conceptually straightforward since measurement and simulation data can simply be merged. The IFD approach is illustrated using the example of a coarsely discretized representation of a thermally excited classical Klein-Gordon field. This should pave the way towards the construction of schemes for more complex systems like turbulent hydrodynamics.
Dynamic output feedback stabilization for nonlinear systems based on standard neural network models.
Liu, Meiqin
2006-08-01
A neural-model-based control design for some nonlinear systems is addressed. The design approach is to approximate the nonlinear systems with neural networks whose activation functions satisfy the sector conditions. A novel neural network model, termed the standard neural network model (SNNM), is introduced to describe this class of approximating neural networks. Full-order dynamic output feedback control laws are then designed for the SNNMs with inputs and outputs to stabilize the closed-loop systems. The control design equations are shown to be a set of linear matrix inequalities (LMIs) which can be easily solved by various convex optimization algorithms to determine the control signals. It is shown that most neural-network-based nonlinear systems can be transformed into input-output SNNMs so that stabilization can be synthesized in a unified way. Finally, some application examples are presented to illustrate the control design procedures.
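LMI-based synthesis of this kind generalizes a Lyapunov feasibility test. As a hedged illustration of that underlying test only (a generic continuous-time Lyapunov equation for an assumed stable matrix `A`, not the paper's SNNM-specific LMIs), one can solve A^T P + P A = -Q by vectorization and verify P > 0:

```python
import numpy as np

# Illustrative Hurwitz (stable) closed-loop matrix -- not from the paper.
A = np.array([[-2.0, 1.0],
              [0.5, -3.0]])
Q = np.eye(2)

# Solve the Lyapunov equation A^T P + P A = -Q by vectorization.
# With row-major vec: vec(A^T P) = kron(A^T, I) p and vec(P A) = kron(I, A^T) p.
n = A.shape[0]
I = np.eye(n)
M = np.kron(A.T, I) + np.kron(I, A.T)
P = np.linalg.solve(M, -Q.ravel()).reshape(n, n)
# A symmetric positive definite P certifies asymptotic stability; the
# LMI formulation turns this feasibility check into a design tool.
```

For a Hurwitz `A` the solution P is unique, symmetric, and positive definite.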
Dynamic Finite Size Effects in Spiking Neural Networks
Buice, Michael A.; Chow, Carson C.
2013-01-01
We investigate the dynamics of a deterministic finite-sized network of synaptically coupled spiking neurons and present a formalism for computing the network statistics in a perturbative expansion. The small parameter for the expansion is the inverse number of neurons in the network. The network dynamics are fully characterized by a neuron population density that obeys a conservation law analogous to the Klimontovich equation in the kinetic theory of plasmas. The Klimontovich equation does not possess well-behaved solutions but can be recast in terms of a coupled system of well-behaved moment equations, known as a moment hierarchy. The moment hierarchy cannot be solved in general, but in the mean-field limit of an infinite number of neurons it reduces to a single well-behaved conservation law for the mean neuron density. For a large but finite system, the moment hierarchy can be truncated perturbatively with the inverse system size as a small parameter, but the resulting set of reduced moment equations is still very difficult to solve. However, the entire moment hierarchy can also be re-expressed in terms of a functional probability distribution of the neuron density. The moments can then be computed perturbatively using methods from statistical field theory. Here we derive the complete mean field theory and the lowest order second moment corrections for physiologically relevant quantities. Although we focus on finite-size corrections, our method can be used to compute perturbative expansions in any parameter. PMID:23359258
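The role of inverse system size as the small parameter can be seen in a toy model (independent binary neurons, far simpler than the coupled spiking network of the paper; names and parameters here are illustrative): the variance of the population-averaged activity shrinks as 1/N.

```python
import numpy as np

rng = np.random.default_rng(1)
p, bins = 0.2, 20000  # firing probability per bin; number of time bins

def rate_variance(n):
    """Variance of the population-averaged activity of n independent
    binary neurons, each firing with probability p in each bin.
    Exact value is p*(1-p)/n, i.e. O(1/n) finite-size fluctuations."""
    spikes = rng.random((bins, n)) < p
    return spikes.mean(axis=1).var()

v_small, v_large = rate_variance(100), rate_variance(1000)
```

Tenfold more neurons gives roughly tenfold smaller fluctuations about the mean-field rate, the leading-order behavior that the moment hierarchy corrects systematically.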
Hellyer, Peter J; Scott, Gregory; Shanahan, Murray; Sharp, David J; Leech, Robert
2015-06-17
Current theory proposes that healthy neural dynamics operate in a metastable regime, where brain regions interact to simultaneously maximize integration and segregation. Metastability may confer important behavioral properties, such as cognitive flexibility. It is increasingly recognized that neural dynamics are constrained by the underlying structural connections between brain regions. An important challenge is, therefore, to relate structural connectivity, neural dynamics, and behavior. Traumatic brain injury (TBI) is a pre-eminent structural disconnection disorder whereby traumatic axonal injury damages large-scale connectivity, producing characteristic cognitive impairments, including slowed information processing speed and reduced cognitive flexibility, that may be a result of disrupted metastable dynamics. Therefore, TBI provides an experimental and theoretical model to examine how metastable dynamics relate to structural connectivity and cognition. Here, we use complementary empirical and computational approaches to investigate how metastability arises from the healthy structural connectome and relates to cognitive performance. We found reduced metastability in large-scale neural dynamics after TBI, measured with resting-state functional MRI. This reduction in metastability was associated with damage to the connectome, measured using diffusion MRI. Furthermore, decreased metastability was associated with reduced cognitive flexibility and information processing. A computational model, defined by empirically derived connectivity data, demonstrates how behaviorally relevant changes in neural dynamics result from structural disconnection. Our findings suggest how metastable dynamics are important for normal brain function and contingent on the structure of the human connectome.
Filling the Gap on Developmental Change: Tests of a Dynamic Field Theory of Spatial Cognition
ERIC Educational Resources Information Center
Schutte, Anne R.; Spencer, John P.
2010-01-01
In early childhood, there is a developmental transition in spatial memory biases. Before the transition, children's memory responses are biased toward the midline of a space, while after the transition responses are biased away from midline. The Dynamic Field Theory (DFT) posits that changes in neural interaction and changes in how children…
Magnetic Field Control of Combustion Dynamics
NASA Astrophysics Data System (ADS)
Barmina, I.; Valdmanis, R.; Zake, M.; Kalis, H.; Marinaki, M.; Strautins, U.
2016-08-01
Experimental studies and mathematical modelling of the effects of a magnetic field on combustion dynamics during thermo-chemical conversion of biomass are carried out with the aim of controlling the processes developing in the reaction zone of a swirling flame. The joint study covers the effect of the magnetic field on the formation of the swirling flame dynamics and on flame temperature and composition, providing an analysis of the field's influence on the flame characteristics. The experiments show that the magnetic field influences the flow velocity components by enhancing swirl motion in the flame reaction zone, with swirl-enhanced mixing of the axial flow of volatiles with the cold air swirl, by cooling the flame reaction zone, and by limiting the thermo-chemical conversion of volatiles. Mathematical modelling of the magnetic field effect on the formation of the flame dynamics confirms that the electromagnetic force induced by the electric current surrounding the flame leads to a field-enhanced increase of flow vorticity by enhancing mixing of the reactants. The field's effect on the flame temperature and reaction rates leads to the conclusion that the field-enhanced increase of flow vorticity results in flame cooling by limiting the chemical conversion of the reactants.
Dynamic Photorefractive Memory and its Application for Opto-Electronic Neural Networks.
NASA Astrophysics Data System (ADS)
Sasaki, Hironori
This dissertation describes the analysis of photorefractive crystal dynamics and its application to opto-electronic neural network systems. The realization of the dynamic photorefractive memory is investigated in terms of the following aspects: fast memory update, uniform grating multiplexing schedules, and the prevention of partial erasure of existing gratings. The fast memory update is realized by a selective erasure process that superimposes a new grating on the original one with an appropriate phase shift. The dynamics of the selective erasure process is analyzed using the first-order photorefractive material equations and confirmed experimentally. The effects of beam coupling and fringe bending on the selective erasure dynamics are also analyzed by numerically solving a combination of coupled wave equations and the photorefractive material equation. An incremental recording technique is proposed as a uniform grating multiplexing schedule and compared with the conventional scheduled recording technique in terms of the phase distribution in the presence of an external dc electric field, as well as the image gray-scale dependence. The theoretical analysis and experimental results prove the superiority of incremental recording over scheduled recording. A novel recirculating information memory architecture is proposed and experimentally demonstrated to prevent partial degradation of the existing gratings when the memory is accessed. Gratings are circulated through a memory feedback loop based on the incremental recording dynamics, demonstrating robust read/write/erase capabilities. The dynamic photorefractive memory is applied to opto-electronic neural network systems. A module architecture based on the page-oriented dynamic photorefractive memory is proposed. This module architecture can implement two complementary interconnection organizations, fan-in and fan-out. The module system scalability and the learning capabilities are theoretically
Précis of Neural organization: structure, function, and dynamics.
Arbib, M A; Erdi, P
2000-08-01
NEURAL ORGANIZATION: Structure, function, and dynamics shows how theory and experiment can supplement each other in an integrated, evolving account of the brain's structure, function, and dynamics. (1) STRUCTURE: Studies of brain function and dynamics build on and contribute to an understanding of many brain regions, the neural circuits that constitute them, and their spatial relations. We emphasize Szentágothai's modular architectonics principle, but also stress the importance of the microcomplexes of cerebellar circuitry and the lamellae of hippocampus. (2) FUNCTION: Control of eye movements, reaching and grasping, cognitive maps, and the roles of vision receive a functional decomposition in terms of schemas. Hypotheses as to how each schema is implemented through the interaction of specific brain regions provide the basis for modeling the overall function by neural networks constrained by neural data. Synthetic PET integrates modeling of primate circuitry with data from human brain imaging. (3) DYNAMICS: Dynamic system theory analyzes spatiotemporal neural phenomena, such as oscillatory and chaotic activity in both single neurons and (often synchronized) neural networks, the self-organizing development and plasticity of ordered neural structures, and learning and memory phenomena associated with synaptic modification. Rhythm generation involves multiple levels of analysis, from intrinsic cellular processes to loops involving multiple brain regions. A variety of rhythms are related to memory functions. The Précis presents a multifaceted case study of the hippocampus. We conclude with the claim that language and other cognitive processes can be fruitfully studied within the framework of neural organization that the authors have charted with John Szentágothai.
Hamiltonian dynamics of the parametrized electromagnetic field
NASA Astrophysics Data System (ADS)
Barbero G, J. Fernando; Margalef-Bentabol, Juan; Villaseñor, Eduardo J. S.
2016-06-01
We study the Hamiltonian formulation for a parametrized electromagnetic field with the purpose of clarifying the interplay between parametrization and gauge symmetries. We use a geometric approach which is tailor-made for theories where embeddings are part of the dynamical variables. Our point of view is global and coordinate free. The most important result of the paper is the identification of sectors in the primary constraint submanifold in the phase space of the model where the number of independent components of the Hamiltonian vector fields that define the dynamics changes. This explains the non-trivial behavior of the system and some of its pathologies.
A Visual Metaphor Describing Neural Dynamics in Schizophrenia
van Beveren, Nico J. M.; de Haan, Lieuwe
2008-01-01
Background In many scientific disciplines the use of a metaphor as a heuristic aid is not uncommon. A well known example in somatic medicine is the 'defense army metaphor' used to characterize the immune system. In fact, probably a large part of the everyday work of doctors consists of 'translating' scientific and clinical information (i.e. causes of disease, percentage of success versus risk of side-effects) into information tailored to the needs and capacities of the individual patient. The ability to do so in an effective way is at least partly what makes a clinician a good communicator. Schizophrenia is a severe psychiatric disorder which affects approximately 1% of the population. Over the last two decades a large amount of molecular-biological, imaging and genetic data have been accumulated regarding the biological underpinnings of schizophrenia. However, it remains difficult to understand how the characteristic symptoms of schizophrenia such as hallucinations and delusions are related to disturbances on the molecular-biological level. In general, psychiatry seems to lack a conceptual framework with sufficient explanatory power to link the mental and molecular-biological domains. Methodology/Principal Findings Here, we present an essay-like study in which we propose to use visualized concepts stemming from the theory of dynamical complex systems as a 'visual metaphor' to bridge the mental and molecular-biological domains in schizophrenia. We first describe a computer model of neural information processing; we show how the information processing in this model can be visualized, using concepts from the theory of complex systems. We then describe two computer models which have been used to investigate the primary theory of schizophrenia, the neurodevelopmental model, and show how disturbed information processing in these two computer models can be presented in terms of the visual metaphor previously described. Finally, we describe the effects of
Dynamic stability conditions for Lotka-Volterra recurrent neural networks with delays.
Yi, Zhang; Tan, K K
2002-07-01
The Lotka-Volterra model of neural networks, derived from the membrane dynamics of competing neurons, has found successful applications in many "winner-take-all" types of problems. This paper studies the dynamic stability properties of general Lotka-Volterra recurrent neural networks with delays. Conditions for nondivergence of the neural networks are derived. These conditions are based on local inhibition of networks, thereby allowing these networks to possess a multistability property. Multistability is a necessary property of a network that will enable important neural computations such as those governing decision making. Under these nondivergence conditions, a compact set that globally attracts all the trajectories of a network can be computed explicitly. If the connection weight matrix of a network is symmetric in some sense, and the delays of the network are in L2 space, we can prove that the network will have the property of complete stability.
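A minimal sketch of the winner-take-all behavior mentioned above, assuming a symmetric Lotka-Volterra network with uniform lateral inhibition strength k > 1 and no delays (a much simpler special case than the paper's general delayed setting; the parameterization below is illustrative):

```python
import numpy as np

def lv_winner_take_all(x0, k=2.0, dt=1e-3, steps=60000):
    """Euler-integrate the symmetric Lotka-Volterra network
        dx_i/dt = x_i * (1 - x_i - k * sum_{j != i} x_j).
    For k > 1 the coexistence state is unstable and the unit with the
    largest initial activity suppresses the others (winner-take-all)."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        total = x.sum()
        x += dt * x * (1.0 - x - k * (total - x))
        x = np.maximum(x, 0.0)  # activities (rates) stay non-negative
    return x

x = lv_winner_take_all([0.30, 0.31, 0.29])  # unit 1 starts slightly ahead
```

The winning unit converges to the stable fixed point x = 1 while the losers decay to zero, illustrating the multistability that the paper's nondivergence conditions permit.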
Pattern recognition in field crickets: concepts and neural evidence.
Kostarakos, Konstantinos; Hedwig, Berthold
2015-01-01
For decades, the acoustic communication behavior of crickets has been a focus of neurobiology, with the aim of analyzing the neural basis of male singing and female phonotactic behavior. Several different concepts have been proposed to elucidate the possible neural mechanisms underlying the tuning of phonotaxis in females to the temporal pattern of the song. These concepts encompass a feature-detecting mechanism based on cross-correlation processing, temporal filter properties of brain neurons, or autocorrelation processing based on a delay line and coincidence-detection mechanism. Current data based on intracellular recordings of auditory brain neurons indicate sequential processing by excitation and inhibition in a local auditory network within the protocerebrum. The response properties of these brain neurons point toward an autocorrelation-like mechanism underlying female pattern recognition, in which delay lines implemented by long-lasting inhibition may be involved.
Neural dynamics of prediction and surprise in infants.
Kouider, Sid; Long, Bria; Le Stanc, Lorna; Charron, Sylvain; Fievet, Anne-Caroline; Barbosa, Leonardo S; Gelskov, Sofie V
2015-01-01
Prior expectations shape neural responses in sensory regions of the brain, consistent with a Bayesian predictive coding account of perception. Yet, it remains unclear whether such a mechanism is already functional during early stages of development. To address this issue, we study how the infant brain responds to prediction violations using a cross-modal cueing paradigm. We record electroencephalographic responses to expected and unexpected visual events preceded by auditory cues in 12-month-old infants. We find an increased response for unexpected events. However, this effect of prediction error is only observed during late processing stages associated with conscious access mechanisms. In contrast, early perceptual components reveal an amplification of neural responses for predicted relative to surprising events, suggesting that selective attention enhances perceptual processing for expected events. Taken together, these results demonstrate that cross-modal statistical regularities are used to generate predictions that differentially influence early and late neural responses in infants. PMID:26460901
Neural dynamics of prediction and surprise in infants
Kouider, Sid; Long, Bria; Le Stanc, Lorna; Charron, Sylvain; Fievet, Anne-Caroline; Barbosa, Leonardo S.; Gelskov, Sofie V.
2015-01-01
Prior expectations shape neural responses in sensory regions of the brain, consistent with a Bayesian predictive coding account of perception. Yet, it remains unclear whether such a mechanism is already functional during early stages of development. To address this issue, we study how the infant brain responds to prediction violations using a cross-modal cueing paradigm. We record electroencephalographic responses to expected and unexpected visual events preceded by auditory cues in 12-month-old infants. We find an increased response for unexpected events. However, this effect of prediction error is only observed during late processing stages associated with conscious access mechanisms. In contrast, early perceptual components reveal an amplification of neural responses for predicted relative to surprising events, suggesting that selective attention enhances perceptual processing for expected events. Taken together, these results demonstrate that cross-modal statistical regularities are used to generate predictions that differentially influence early and late neural responses in infants. PMID:26460901
Toward direct neural current imaging by resonant mechanisms at ultra-low field.
Kraus, R H; Volegov, P; Matlachov, A; Espy, M
2008-01-01
A variety of techniques have been developed to noninvasively image human brain function; these are central to research and clinical applications that endeavor to understand how the brain works and to detect pathology (e.g. epilepsy, schizophrenia). Current methods can be broadly divided into those that rely on hemodynamic responses as indicators of neural activity (e.g. fMRI, optical, and PET) and methods that measure neural activity directly (e.g. MEG and EEG). All of these approaches suffer from poor temporal resolution, poor spatial localization, or only indirect measurement of neural activity. It has been suggested that the proton spin population will be altered by neural activity, resulting in a measurable effect on the NMR signal that can be imaged by MRI methods. We present here the physical basis and experimental evidence for the resonant interaction between magnetic fields, such as those arising from neural activity, and the spin population in ultra-low field (microtesla) NMR experiments. We demonstrate through the use of current phantoms that, in the case of correlated zero-mean current distributions such as those one might expect to result from neural activity, resonant interactions will produce larger changes in the observed NMR signal than dephasing. The observed resonant interactions reported here might one day form the foundation of a new functional neuroimaging modality ultimately capable of simultaneous tomography of direct neural activity and brain anatomy.
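The resonance condition is set by the proton Larmor frequency f = (gamma_p / 2*pi) * B. A quick back-of-the-envelope check (using the rounded value gamma_p / 2*pi = 42.577 MHz/T; the variable names are ours) shows why microtesla fields are interesting here: the proton resonance falls in the tens-of-hertz band occupied by neural oscillations.

```python
# Proton gyromagnetic ratio over 2*pi (CODATA value, rounded).
GAMMA_P_OVER_2PI = 42.577e6  # Hz per tesla

def larmor_hz(b_tesla):
    """Proton Larmor (resonance) frequency at field strength b_tesla."""
    return GAMMA_P_OVER_2PI * b_tesla

# At a few microtesla, the resonance overlaps neural frequencies:
f_1uT = larmor_hz(1e-6)      # ~42.6 Hz
f_2uT = larmor_hz(2.35e-6)   # ~100 Hz
```

At the tesla-scale fields of conventional MRI, by contrast, the Larmor frequency is tens of megahertz, far above any neural rhythm.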
Dark-field differential dynamic microscopy.
Bayles, Alexandra V; Squires, Todd M; Helgeson, Matthew E
2016-02-28
Differential dynamic microscopy (DDM) is an emerging technique to measure the ensemble dynamics of colloidal and complex fluid motion using optical microscopy in systems that would otherwise be difficult to measure using other methods. To date, DDM has successfully been applied to linear space invariant imaging modes including bright-field, fluorescence, confocal, polarised, and phase-contrast microscopy to study diverse dynamic phenomena. In this work, we show for the first time how DDM analysis can be extended to dark-field imaging, i.e. a linear space variant (LSV) imaging mode. Specifically, we present a particle-based framework for describing dynamic image correlations in DDM, and use it to derive a correction to the image structure function obtained by DDM that accounts for scatterers with non-homogeneous intensity distributions as they move within the imaging plane. To validate the analysis, we study the Brownian motion of gold nanoparticles, whose plasmonic structure allows for nanometer-scale particles to be imaged under dark-field illumination, in Newtonian liquids. We find that diffusion coefficients of the nanoparticles can be reliably measured by dark-field DDM, even under optically dense concentrations where analysis via multiple-particle tracking microrheology fails. These results demonstrate the potential for DDM analysis to be applied to linear space variant forms of microscopy, providing access to experimental systems unavailable to other imaging modes. PMID:26822331
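The quantity at the heart of DDM is the image structure function, the azimuthally averaged power spectrum of frame differences at a given lag time. A minimal sketch on a synthetic movie (a single Gaussian spot performing a random walk; function and variable names are illustrative, and this omits the dark-field linear-space-variant correction derived in the paper):

```python
import numpy as np

def image_structure_function(frames, lag):
    """DDM image structure function D(q, dt) at one lag: azimuthally
    averaged power spectrum of frame differences."""
    diffs = frames[lag:] - frames[:-lag]
    spec = (np.abs(np.fft.fft2(diffs)) ** 2).mean(axis=0)  # average over pairs
    ny, nx = spec.shape
    qy, qx = np.meshgrid(np.fft.fftfreq(ny), np.fft.fftfreq(nx), indexing="ij")
    qmag = np.hypot(qx, qy).ravel()
    edges = np.linspace(0.0, qmag.max(), 16)
    idx = np.digitize(qmag, edges)
    flat = spec.ravel()
    # azimuthal (radial) average over wavevector magnitude |q|
    return np.array([flat[idx == i].mean() if np.any(idx == i) else 0.0
                     for i in range(1, len(edges))])

# Synthetic movie: a Gaussian spot undergoing a 2-D random walk.
rng = np.random.default_rng(0)
n, t, sigma = 64, 40, 3.0
pos = np.cumsum(rng.normal(0.0, 1.0, (t, 2)), axis=0) + n / 2
yy, xx = np.mgrid[:n, :n]
frames = np.stack([np.exp(-((xx - px) ** 2 + (yy - py) ** 2) / (2 * sigma ** 2))
                   for px, py in pos])

d1 = image_structure_function(frames, 1)
d5 = image_structure_function(frames, 5)
```

Because the mean-squared displacement grows with lag, the frames decorrelate more at longer lags and the structure function grows toward its plateau; fitting that growth versus lag at each q is what yields the diffusion coefficient in practice.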
Dynamic social power modulates neural basis of math calculation
Harada, Tokiko; Bridge, Donna J.; Chiao, Joan Y.
2013-01-01
Both situational (e.g., perceived power) and sustained social factors (e.g., cultural stereotypes) are known to affect how people perform academically, particularly in the domain of mathematics. The ability to compute even simple mathematics, such as addition, relies on distinct neural circuitry within the inferior parietal and inferior frontal lobes, brain regions where magnitude representation and addition are performed. Despite prior behavioral evidence of social influence on academic performance, little is known about whether temporarily heightening a person's sense of power may influence the neural bases of math calculation. Here we primed female participants with either high power (HP) or low power (LP) and then measured neural response while they performed exact and approximate math problems. We found that priming power affected math performance; specifically, females primed with HP performed better on approximate math calculation than females primed with LP. Furthermore, neural response within the left inferior frontal gyrus (IFG), a region previously associated with cognitive interference, was reduced for females in the HP group compared to the LP group. Taken together, these results indicate that even temporarily heightening a person's sense of social power can increase their math performance, possibly by reducing cognitive interference during math performance. PMID:23390415
Neural Dynamics of Autistic Behaviors: Cognitive, Emotional, and Timing Substrates
ERIC Educational Resources Information Center
Grossberg, Stephen; Seidman, Don
2006-01-01
What brain mechanisms underlie autism, and how do they give rise to autistic behavioral symptoms? This article describes a neural model, called the Imbalanced Spectrally Timed Adaptive Resonance Theory (iSTART) model, that proposes how cognitive, emotional, timing, and motor processes that involve brain regions such as the prefrontal and temporal…
Topological field theory of dynamical systems
Ovchinnikov, Igor V.
2012-09-15
Here, it is shown that the path-integral representation of any stochastic or deterministic continuous-time dynamical model is a cohomological or Witten-type topological field theory, i.e., a model with global topological supersymmetry (Q-symmetry). Like many other supersymmetries, Q-symmetry must be perturbatively stable due to what is generically known as non-renormalization theorems. As a result, all (equilibrium) dynamical models are divided into three major categories: Markovian models with unbroken Q-symmetry, chaotic models with Q-symmetry spontaneously broken on the mean-field level by, e.g., fractal invariant sets (e.g., strange attractors), and intermittent or self-organized critical (SOC) models with Q-symmetry dynamically broken by the condensation of instanton-antiinstanton configurations (earthquakes, avalanches, etc.). SOC is a full-dimensional phase separating chaos and Markovian dynamics. In the deterministic limit, however, antiinstantons disappear and SOC collapses into the 'edge of chaos.' Goldstone's theorem stands behind the spatio-temporal self-similarity of Q-broken phases, known under such names as algebraic statistics of avalanches, 1/f noise, sensitivity to initial conditions, etc. Other fundamental differences of Q-broken phases are that they can be effectively viewed as quantum dynamics and that they must also have spontaneously broken time-reversal symmetry. Q-symmetry breaking in non-equilibrium situations (quenches, Barkhausen effect, etc.) is also briefly discussed.
Confinement dynamics in the reversed field pinch
Schoenberg, K.F.
1988-01-01
The study of basic transport and confinement dynamics is central to the development of the reversed field pinch (RFP) as a confinement concept. Thus, the goal of RFP research is to understand the connection between processes that sustain the RFP configuration and related transport/confinement properties. Recently, new insights into confinement have emerged from a detailed investigation of RFP electron and ion physics. These insights derive from the recognition that both magnetohydrodynamic (MHD) and electron kinetic effects play an important and strongly coupled role in RFP sustainment and confinement dynamics. In this paper, we summarize the results of these studies on the ZT-40M experiment. 8 refs.
Curley, J Lowry; Jennings, Scott R; Moore, Michael J
2011-01-01
Increasingly, patterned cell culture environments are becoming a relevant technique for studying cellular characteristics, and many researchers see a need for 3D environments so that in vitro experiments better mimic in vivo conditions. Studies in fields such as cancer research, neural engineering, cardiac physiology, and cell-matrix interaction have shown that cell behavior differs substantially between traditional monolayer cultures and 3D constructs. Hydrogels are used as 3D environments because of their variety, versatility and ability to tailor molecular composition through functionalization. Numerous techniques exist for creating constructs as cell-supportive matrices, including electrospinning, elastomer stamps, inkjet printing, additive photopatterning, static photomask projection-lithography, and dynamic mask microstereolithography. Unfortunately, these methods involve multiple production steps and/or equipment not readily adaptable to conventional cell and tissue culture methods. The technique employed in this protocol adapts the latter two methods, using a digital micromirror device (DMD) to create dynamic photomasks for crosslinking geometrically specific poly-(ethylene glycol) (PEG) hydrogels, induced through UV-initiated free radical polymerization. The resulting "2.5D" structures provide a constrained 3D environment for neural growth. We employ a dual-hydrogel approach, where PEG serves as a cell-restrictive region supplying structure to an otherwise shapeless but cell-permissive self-assembling gel made from either Puramatrix or agarose. The process is a quick, simple, one-step fabrication that is highly reproducible and easily adapted for use with conventional cell culture methods and substrates. Whole tissue explants, such as embryonic dorsal root ganglia (DRG), can be incorporated into the dual hydrogel constructs for experimental assays such as neurite outgrowth. Additionally, dissociated cells can be encapsulated in the
Firing rate dynamics in recurrent spiking neural networks with intrinsic and network heterogeneity.
Ly, Cheng
2015-12-01
Heterogeneity of neural attributes has recently gained a lot of attention and is increasingly recognized as a crucial feature in neural processing. Despite its importance, this physiological feature has traditionally been neglected in theoretical studies of cortical neural networks. Thus, much remains unknown about the consequences of cellular and circuit heterogeneity in spiking neural networks. In particular, combining network or synaptic heterogeneity with intrinsic heterogeneity has yet to be considered systematically, despite the fact that both are known to exist and likely play significant roles in neural network dynamics. In a canonical recurrent spiking neural network model, we study how these two forms of heterogeneity lead to different distributions of excitatory firing rates. To analytically characterize how these types of heterogeneity affect the network, we employ a dimension reduction method that relies on a combination of Monte Carlo simulations and probability density function equations. We find that the relationship between intrinsic and network heterogeneity has a strong effect on the overall level of heterogeneity of the firing rates. Specifically, this relationship can lead to amplification or attenuation of firing rate heterogeneity, and these effects depend on whether the recurrent network is firing asynchronously or rhythmically. These observations are captured with the aforementioned reduction method, and simpler analytic descriptions based on this dimension reduction method are also developed. The final analytic descriptions provide compact formulas for how the relationship between intrinsic and network heterogeneity determines the firing rate heterogeneity dynamics in various settings.
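The amplification-versus-attenuation effect described in this abstract can be illustrated with a minimal sketch (not the paper's spiking model; the threshold-linear rate map, all parameters, and the ±0.5 coupling between the two heterogeneity sources are hypothetical choices): each unit receives a random intrinsic threshold, its network gain is either positively or negatively correlated with that threshold, and the spread of steady-state rates is compared.

```python
import random
import statistics

def steady_rates(thresholds, gains, w=0.5, steps=200):
    """Iterate the toy rate map r_i <- g_i * (w * mean(r) + 1 - theta_i),
    rectified at zero, until it settles (w < 1 makes the map contracting)."""
    n = len(thresholds)
    r = [0.1] * n
    for _ in range(steps):
        mean_r = sum(r) / n
        r = [max(0.0, g * (w * mean_r + 1.0 - th))
             for g, th in zip(gains, thresholds)]
    return r

random.seed(0)
n = 200
theta = [random.gauss(0.0, 0.2) for _ in range(n)]  # intrinsic heterogeneity

# Network heterogeneity positively correlated with intrinsic heterogeneity ...
gains_corr = [1.0 + 0.5 * t for t in theta]
# ... versus negatively correlated (same marginal spread, opposite sign).
gains_anti = [1.0 - 0.5 * t for t in theta]

spread_corr = statistics.stdev(steady_rates(theta, gains_corr))
spread_anti = statistics.stdev(steady_rates(theta, gains_anti))
print(spread_corr, spread_anti)
```

With a positive threshold-gain correlation the two heterogeneity sources partially cancel and the rate spread shrinks; with a negative correlation they compound and the spread grows, mirroring the amplification/attenuation distinction in the abstract.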
NASA Astrophysics Data System (ADS)
Li, Xiaofeng; Xiang, Suying; Zhu, Pengfei; Wu, Min
2015-12-01
In order to avoid the inherent deficiencies of the traditional BP neural network, such as slow convergence, a tendency to become trapped in local minima, poor generalization ability and difficulty in determining the network structure, a dynamic self-adaptive learning algorithm for the BP neural network is put forward to improve its performance. The new algorithm combines the merits of principal component analysis, particle swarm optimization, correlation analysis and a self-adaptive model, and hence can effectively solve the problems of selecting the structural parameters, initial connection weights, thresholds and learning rates of the BP neural network. The new algorithm not only reduces human intervention, optimizes the topological structure of BP neural networks and improves network generalization ability, but also accelerates the convergence speed of the network, avoids becoming trapped in local minima, and enhances network adaptation and prediction ability. The dynamic self-adaptive learning algorithm of the BP neural network is used to forecast the total retail sales of consumer goods of Sichuan Province, China. Empirical results indicate that the new algorithm is superior to the traditional BP network algorithm in prediction accuracy and time consumption, which shows the feasibility and effectiveness of the new algorithm.
Neural Correlates of Dynamically Evolving Interpersonal Ties Predict Prosocial Behavior
Fahrenfort, Johannes J.; van Winden, Frans; Pelloux, Benjamin; Stallen, Mirre; Ridderinkhof, K. Richard
2011-01-01
There is a growing interest in the determinants of human choice behavior in social settings. Upon initial contact, investment choices in social settings can be inherently risky, as the degree to which the other person will reciprocate is unknown. Nevertheless, people have been shown to exhibit prosocial behavior even in one-shot laboratory settings where all interaction has been taken away. A logical step has been to link such behavior to trait empathy-related neurobiological networks. However, as a social interaction unfolds, the degree of uncertainty with respect to the expected payoff of choice behavior may change as a function of the interaction. Here we attempt to capture this factor. We show that the interpersonal tie one develops with another person during interaction, rather than trait empathy, motivates investment in a public good that is shared with an anonymous interaction partner. We examined how individual differences in trait empathy and interpersonal ties modulate neural responses to imposed monetary sharing. After, but not before, interaction in a public good game, sharing prompted activation of neural systems associated with reward (striatum), empathy (anterior insular cortex and anterior cingulate cortex), altruism, and social significance [posterior superior temporal sulcus (pSTS)]. Although these activations could be linked to both empathy and interpersonal ties, only tie-related pSTS activation predicted prosocial behavior during subsequent interaction, suggesting a neural substrate for keeping track of social relevance. PMID:22403524
A neural network dynamics that resembles protein evolution
NASA Astrophysics Data System (ADS)
Ferrán, Edgardo A.; Ferrara, Pascual
1992-06-01
We use neural networks to classify proteins according to their sequence similarities. A network composed of 7 × 7 neurons was trained with the Kohonen unsupervised learning algorithm using, as inputs, matrix patterns derived from the bipeptide composition of cytochrome c proteins belonging to 76 different species. As a result of the training, the network self-organized the activation of its neurons into topologically ordered maps, wherein phylogenetically related sequences were positioned close to each other. The evolution of the topological map during learning, in a representative computational experiment, roughly resembles the way in which one species evolves into several others. For instance, sequences corresponding to vertebrates, initially grouped together into one neuron, were placed in a contiguous zone of the final neural map, with sequences of fishes, amphibia, reptiles, birds and mammals associated with different neurons. Some apparently wrong classifications arise because some proteins share a greater degree of sequence identity than phylogenetics would predict. In the final neural map, each synaptic vector may be considered as the pattern corresponding to the ancestor of all the proteins attached to that neuron. Although it may also be tempting to link real time with learning epochs and to use this relationship to calibrate the molecular evolutionary clock, this is not correct, because the evolutionary time schedule obtained with the neural network depends strongly on the discrete way in which the winner neighborhood is decreased during learning.
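A Kohonen map of the kind described here can be sketched in a few dozen lines. The sketch below substitutes two artificial 2-D clusters for the 76-species bipeptide-composition matrices, and the map size, learning-rate schedule and neighbourhood schedule are illustrative guesses rather than the paper's settings:

```python
import math
import random

random.seed(2)

def train_som(data, rows=7, cols=7, epochs=60, lr0=0.5, radius0=3.0):
    """Kohonen unsupervised learning: move the winner and its map
    neighbourhood toward each input; lr and radius shrink over time."""
    dim = len(data[0])
    w = [[[random.random() for _ in range(dim)] for _ in range(cols)]
         for _ in range(rows)]
    for ep in range(epochs):
        lr = lr0 * (1 - ep / epochs)
        radius = 1.0 + radius0 * (1 - ep / epochs)
        for x in data:
            # Best-matching unit = smallest squared Euclidean distance.
            bi, bj = min(((i, j) for i in range(rows) for j in range(cols)),
                         key=lambda ij: sum((a - b) ** 2
                                            for a, b in zip(w[ij[0]][ij[1]], x)))
            for i in range(rows):
                for j in range(cols):
                    d2 = (i - bi) ** 2 + (j - bj) ** 2
                    h = math.exp(-d2 / (2 * radius * radius))
                    w[i][j] = [wk + lr * h * (xk - wk)
                               for wk, xk in zip(w[i][j], x)]
    return w

def winner(w, x):
    return min(((i, j) for i in range(len(w)) for j in range(len(w[0]))),
               key=lambda ij: sum((a - b) ** 2 for a, b in zip(w[ij[0]][ij[1]], x)))

# Two artificial "composition" clusters standing in for the
# bipeptide-composition patterns used in the paper.
group_a = [[0.9 + random.uniform(-0.05, 0.05), 0.1] for _ in range(10)]
group_b = [[0.1, 0.9 + random.uniform(-0.05, 0.05)] for _ in range(10)]
som = train_som(group_a + group_b)
wa = winner(som, [0.9, 0.1])
wb = winner(som, [0.1, 0.9])
print(wa, wb)
```

After training, related inputs activate nearby neurons while the two unrelated clusters map to different units, which is the topological ordering the abstract describes.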
Quantum perceptron over a field and neural network architecture selection in a quantum computer.
da Silva, Adenilton José; Ludermir, Teresa Bernarda; de Oliveira, Wilson Rosa
2016-04-01
In this work, we propose a quantum neural network named quantum perceptron over a field (QPF). Quantum computers are not yet a reality and the models and algorithms proposed in this work cannot be simulated in actual (or classical) computers. QPF is a direct generalization of a classical perceptron and solves some drawbacks found in previous models of quantum perceptrons. We also present a learning algorithm named Superposition based Architecture Learning algorithm (SAL) that optimizes the neural network weights and architectures. SAL searches for the best architecture in a finite set of neural network architectures with linear time over the number of patterns in the training set. SAL is the first learning algorithm to determine neural network architectures in polynomial time. This speedup is obtained by the use of quantum parallelism and a non-linear quantum operator. PMID:26878722
Empirical modeling ENSO dynamics with complex-valued artificial neural networks
NASA Astrophysics Data System (ADS)
Seleznev, Aleksei; Gavrilov, Andrey; Mukhin, Dmitry
2016-04-01
The main difficulty in empirically reconstructing distributed dynamical systems (e.g. regional climate systems, such as the El Niño-Southern Oscillation, ENSO) is the huge amount of observational data comprising time-varying spatial fields of several variables. An efficient reduction of the system's dimensionality is therefore essential for inferring an evolution operator (EO) for a low-dimensional subsystem that determines the key properties of the observed dynamics. In this work, to reduce observational data sets efficiently, we use complex-valued (Hilbert) empirical orthogonal functions, which, unlike traditional empirical orthogonal functions, are by their nature appropriate for describing propagating structures. For the approximation of the EO, a universal model in the form of a complex-valued artificial neural network is suggested. The effectiveness of this approach is demonstrated by predicting both the behavior of the Jin-Neelin-Ghil ENSO model [1] and real ENSO variability from sea surface temperature anomaly data [2]. The study is supported by the Government of the Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of RAS). 1. Jin, F.-F., J. D. Neelin, and M. Ghil, 1996: El Niño/Southern Oscillation and the annual cycle: subharmonic frequency locking and aperiodicity. Physica D, 98, 442-465. 2. http://iridl.ldeo.columbia.edu/SOURCES/.KAPLAN/.EXTENDED/.v2/.ssta/
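The Hilbert (complex-valued) EOF idea can be sketched directly: take the analytic signal of each spatial time series, form the Hermitian spatial covariance matrix, and extract the leading complex eigenvector. The example below uses a synthetic propagating cosine on four grid points (the naive O(n²) DFT and the power iteration are dependency-free stand-ins for FFT and eigensolver routines); for such a wave a single complex EOF captures essentially all the variance, which is exactly why these functions suit propagating structures:

```python
import cmath
import math

def analytic_signal(x):
    """Analytic signal via DFT: keep DC, double positive frequencies,
    zero negative ones (naive O(n^2) transform; fine for short series)."""
    n = len(x)
    X = [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
         for k in range(n)]
    H = [0.0] * n
    H[0] = 1.0
    if n % 2 == 0:
        H[n // 2] = 1.0
        for k in range(1, n // 2):
            H[k] = 2.0
    else:
        for k in range(1, (n + 1) // 2):
            H[k] = 2.0
    Z = [H[k] * X[k] for k in range(n)]
    return [sum(Z[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

# A wave propagating across a few "grid points": a structure one complex
# (Hilbert) EOF can capture, unlike a single real EOF.
T, S = 64, 4
field = [[math.cos(2 * math.pi * t / 16 - 1.3 * s) for s in range(S)]
         for t in range(T)]
Z = list(zip(*[analytic_signal([field[t][s] for t in range(T)])
               for s in range(S)]))
# Hermitian spatial covariance matrix C = Z^H Z / T.
C = [[sum(Z[t][i].conjugate() * Z[t][j] for t in range(T)) / T
      for j in range(S)] for i in range(S)]
# Leading eigenvector by power iteration (C is Hermitian, PSD).
v = [1.0 + 0j] * S
for _ in range(200):
    v = [sum(C[i][j] * v[j] for j in range(S)) for i in range(S)]
    norm = math.sqrt(sum(abs(c) ** 2 for c in v))
    v = [c / norm for c in v]
lead = sum((v[i].conjugate() * sum(C[i][j] * v[j] for j in range(S))).real
           for i in range(S))
explained = lead / sum(C[i][i].real for i in range(S))
print(explained)
```

The printed value is the explained-variance fraction of the leading complex EOF, which approaches 1 for a pure travelling wave.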
Dynamics of photoelectrons in a magnetic field
NASA Astrophysics Data System (ADS)
Bracher, Christian; Kramer, Tobias; Delos, John B.
2006-05-01
Near-threshold photodetachment from negative atomic ions provides a virtually pointlike source of electrons, and is ideally suited to study electron dynamics in externally applied electric and magnetic fields. These fields govern the motion of the emitted electron wave, and lead to characteristic modulations both in the total photocurrent and in the spatial electron distribution. These changes have been predicted and observed in an electric field environment (photodetachment microscopy). Here, we examine the effects of a purely magnetic field on the photodetachment cross sections. Theoretical predictions for the electron distribution reveal a surprising wealth of structure that is currently only partly understood. We present numerical and analytical results, and give a semiclassical interpretation of the observed features where possible.
Nonequilibrium dynamics of emergent field configurations
NASA Astrophysics Data System (ADS)
Howell, Rafael Cassidy
The processes by which nonlinear physical systems approach thermal equilibrium are of great importance in many areas of science. Central to this is the mechanism by which energy is transferred among the many degrees of freedom comprising these systems. With this in mind, this research investigates the nonequilibrium dynamics of nonperturbative fluctuations within Ginzburg-Landau models. In particular, two questions are addressed. In both cases the system is initially prepared in one of two minima of a double-well potential. First, within the context of a (2 + 1)-dimensional field theory, we investigate whether emergent spatio-temporal coherent structures play a dynamical role in the equilibration of the field. We find that the answer is sensitive to the initial temperature of the system. At low initial temperatures, the dynamics are well approximated by a time-dependent mean-field theory. For higher temperatures, the strong nonlinear coupling between the modes in the field does give rise to the synchronized emergence of coherent spatio-temporal configurations, identified with oscillons. These are long-lived coherent field configurations characterized by persistent oscillatory behavior at their core. This initial global emergence is seen to be a consequence of resonant behavior in the long-wavelength modes of the system. A second question concerns the emergence of disorder in a highly viscous system modeled by a (3 + 1)-dimensional field theory. An integro-differential Boltzmann equation is derived to model the thermal nucleation of precursors of one phase within the homogeneous background. The fraction of the volume populated by these precursors is computed as a function of temperature. This model is capable of describing the onset of percolation, characterizing the approach to criticality (i.e. disorder). It also provides a nonperturbative correction to the critical temperature based on the nonequilibrium dynamics of the system.
Robustness analysis of uncertain dynamical neural networks with multiple time delays.
Senan, Sibel
2015-10-01
This paper studies the problem of global robust asymptotic stability of the equilibrium point for the class of dynamical neural networks with multiple time delays with respect to the class of slope-bounded activation functions and in the presence of the uncertainties of system parameters of the considered neural network model. By using an appropriate Lyapunov functional and exploiting the properties of the homeomorphism mapping theorem, we derive a new sufficient condition for the existence, uniqueness and global robust asymptotic stability of the equilibrium point for the class of neural networks with multiple time delays. The obtained stability condition basically relies on testing some relationships imposed on the interconnection matrices of the neural system, which can be easily verified by using some certain properties of matrices. An instructive numerical example is also given to illustrate the applicability of our result and show the advantages of this new condition over the previously reported corresponding results.
NASA Astrophysics Data System (ADS)
Chang, Fi-John; Chen, Pin-An; Liu, Chen-Wuing; Liao, Vivian Hsiu-Chuan; Liao, Chung-Min
2013-08-01
Arsenic (As) is an odorless semi-metal that occurs naturally in rock and soil, and As contamination in groundwater resources has become a serious threat to human health. Thus, assessing the spatial and temporal variability of As concentration is highly desirable, particularly in heavily As-contaminated areas. However, various difficulties may be encountered in the regional estimation of As concentration, such as cost-intensive field monitoring, scarcity of field data, identification of important factors affecting As, and over-fitting or poor estimation accuracy. This study develops a novel systematical dynamic-neural modeling (SDM) approach for effectively estimating regional As-contaminated water quality by using easily-measured water quality variables. To tackle the difficulties commonly encountered in regional estimation, the SDM comprises the Nonlinear Autoregressive with eXogenous input (NARX) network and four statistical techniques: the Gamma test, cross-validation, the Bayesian regularization method and indicator kriging (IK). For practical application, this study investigated a heavily As-contaminated area in Taiwan. The backpropagation neural network (BPNN) is adopted for comparison purposes. The results demonstrate that the NARX network (root mean square error (RMSE): 95.11 μg l-1 for training; 106.13 μg l-1 for validation) outperforms the BPNN (RMSE: 121.54 μg l-1 for training; 143.37 μg l-1 for validation). The constructed SDM can provide reliable estimation (R2 > 0.89) of As concentration at ungauged sites based merely on three easily-measured water quality variables (Alk, Ca2+ and pH). In addition, risk maps under the threshold of the WHO drinking water standard (10 μg l-1) are derived by the IK to visually display the spatial and temporal variation of the As concentration in the whole study area at different time spans. The proposed SDM can be practically applied with satisfaction to the regional estimation in study areas of interest and the
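The NARX core of the SDM regresses the current value on lagged outputs and lagged exogenous inputs. A linear skeleton of that structure (synthetic data; the coefficients 0.5 and 0.4 and the single-lag form are invented for illustration, and none of the Gamma-test/cross-validation/kriging machinery is included) looks like this:

```python
import random

# Synthetic stand-ins for an easily-measured driver x (e.g. pH) and a
# target series y (e.g. As concentration); coefficients are made up.
random.seed(3)
x = [random.random() for _ in range(300)]
y = [0.0]
for t in range(1, 300):
    y.append(0.5 * y[t - 1] + 0.4 * x[t - 1])

# Fit y(t) = a*y(t-1) + b*x(t-1) + c by stochastic gradient descent on
# squared error: the linear skeleton of a NARX model (lagged output
# plus lagged exogenous input).
a = b = c = 0.0
lr = 0.05
for _ in range(2000):
    for t in range(1, 300):
        pred = a * y[t - 1] + b * x[t - 1] + c
        e = y[t] - pred
        a += lr * e * y[t - 1]
        b += lr * e * x[t - 1]
        c += lr * e

rmse = (sum((y[t] - (a * y[t - 1] + b * x[t - 1] + c)) ** 2
            for t in range(1, 300)) / 299) ** 0.5
print(a, b, rmse)
```

On this noise-free series the fit recovers the generating coefficients; a real NARX network replaces the linear map with a trained nonlinear one and typically uses several lags of each input.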
The earth's gravity field and ocean dynamics
NASA Technical Reports Server (NTRS)
Mather, R. S.
1978-01-01
An analysis of the signal-to-noise ratio of the best gravity field available shows that a basis exists for the recovery of the dominant parameters of the quasi-stationary sea surface topography. Results obtained from the analysis of GEOS-3 show that it is feasible to recover the quasi-stationary dynamic sea surface topography as a function of wavelength. The gravity field models required for synoptic ocean circulation modeling are less exacting in that constituents affecting radial components of orbital position need not be known through shorter wavelengths.
On Mean Field Limits for Dynamical Systems
NASA Astrophysics Data System (ADS)
Boers, Niklas; Pickl, Peter
2016-07-01
We present a purely probabilistic proof of propagation of molecular chaos for N-particle systems in dimension 3 with interaction forces scaling like 1/|q|^(3λ-1), with λ smaller than but close to one, and cut-off at q = N^(-1/3). The proof yields a Gronwall estimate for the maximal distance between the exact microscopic and the approximate mean-field dynamics. This can be used to show weak convergence of the one-particle marginals to solutions of the respective mean-field equation without cut-off in a quantitative way. Our results thus lead to a derivation of the Vlasov equation from the microscopic N-particle dynamics with a force term arbitrarily close to the physically relevant Coulomb and gravitational forces.
Modeling emotional dynamics : currency versus field.
Sallach, D .L.; Decision and Information Sciences; Univ. of Chicago
2008-08-01
Randall Collins has introduced a simplified model of emotional dynamics in which emotional energy, heightened and focused by interaction rituals, serves as a common denominator for social exchange: a generic form of currency, except that it is active in a far broader range of social transactions. While the scope of this theory is attractive, the specifics of the model remain unconvincing. After a critical assessment of the currency theory of emotion, a field model of emotion is introduced that adds expressiveness by locating emotional valence within its cognitive context, thereby creating an integrated orientation field. The result is a model which claims less in the way of motivational specificity, but is more satisfactory in modeling the dynamic interaction between cognitive and emotional orientations at both individual and social levels.
Bosonic Dynamical Mean-Field Theory
NASA Astrophysics Data System (ADS)
Snoek, Michiel; Hofstetter, Walter
2013-02-01
We derive the bosonic dynamical mean-field equations for bosonic atoms in optical lattices with arbitrary lattice geometry. The equations are presented as a systematic expansion in 1/z, z being the number of lattice neighbours. Hence the theory is applicable in sufficiently high-dimensional lattices. We apply the method to a two-component mixture, for which a rich phase diagram with spin order is revealed.
Quantum-classical dynamics of wave fields.
Sergi, Alessandro
2007-02-21
An approach to the quantum-classical mechanics of phase space dependent operators, which has been proposed recently, is remodeled as a formalism for wave fields. Such wave fields obey a system of coupled nonlinear equations that can be written by means of a suitable non-Hamiltonian bracket. As an example, the theory is applied to the relaxation dynamics of the spin-boson model. In the adiabatic limit, good agreement with calculations performed by the operator approach is obtained. Moreover, the theory proposed in this paper can take nonadiabatic effects into account without resorting to surface-hopping approximations. Hence, the results obtained follow qualitatively those of previous surface-hopping calculations and increase by a factor of at least two the time length over which nonadiabatic dynamics can be propagated with small statistical errors. Moreover, it is worth noting that the dynamics of quantum-classical wave fields proposed here is a straightforward non-Hamiltonian generalization of the formalism for nonlinear quantum mechanics recently introduced by Weinberg.
The neural dynamics of reward value and risk coding in the human orbitofrontal cortex.
Li, Yansong; Vanni-Mercier, Giovanna; Isnard, Jean; Mauguière, François; Dreher, Jean-Claude
2016-04-01
The orbitofrontal cortex is known to carry information regarding expected reward, risk and experienced outcome. Yet, due to inherent limitations in lesion and neuroimaging methods, the neural dynamics of these computations has remained elusive in humans. Here, taking advantage of the high temporal definition of intracranial recordings, we characterize the neurophysiological signatures of the intact orbitofrontal cortex in processing information relevant for risky decisions. Local field potentials were recorded from the intact orbitofrontal cortex of patients suffering from drug-refractory partial epilepsy with implanted depth electrodes as they performed a probabilistic reward learning task that required them to associate visual cues with distinct reward probabilities. We observed three successive signals: (i) around 400 ms after cue presentation, the amplitudes of the local field potentials increased with reward probability; (ii) a risk signal emerged during the late phase of reward anticipation and during the outcome phase; and (iii) an experienced value signal appeared at the time of reward delivery. Both the medial and lateral orbitofrontal cortex encoded risk and reward probability while the lateral orbitofrontal cortex played a dominant role in coding experienced value. The present study provides the first evidence from intracranial recordings that the human orbitofrontal cortex codes reward risk both during late reward anticipation and during the outcome phase at a time scale of milliseconds. Our findings offer insights into the rapid mechanisms underlying the ability to learn structural relationships from the environment. PMID:26811252
Molecular dynamics in high electric fields
NASA Astrophysics Data System (ADS)
Apostol, M.; Cune, L. C.
2016-06-01
Molecular rotation spectra, generated by the coupling of the molecular electric-dipole moments to an external time-dependent electric field, are discussed under a few particular conditions which can be of experimental interest. First, the spherical-pendulum molecular model is reviewed, with the aim of introducing an approximate method which consists in the separation of the azimuthal and zenithal motions. Second, rotation spectra are considered in the presence of a static electric field. Two particular cases are analyzed, corresponding to strong and weak fields. In both cases the classical motion of the dipoles consists of rotations and vibrations about equilibrium positions; this motion may exhibit parametric resonances. For strong fields a large macroscopic electric polarization may appear. This situation may be relevant for polar matter (like pyroelectrics or ferroelectrics), or for heavy impurities embedded in a polar solid. The dipolar interaction is analyzed in polar condensed matter, where it is shown that new polarization modes appear for a spontaneous macroscopic electric polarization (these modes are tentatively called "dipolons"); one of the polarization modes is related to parametric resonances. The extension of these considerations to magnetic dipoles is briefly discussed. The treatment is extended to strong electric fields which oscillate with a high frequency, such as those provided by high-power lasers. It is shown that the effect of such fields on molecular dynamics is governed by a much weaker, effective, renormalized, static electric field.
Magnetization dynamics using ultrashort magnetic field pulses
NASA Astrophysics Data System (ADS)
Tudosa, Ioan
Very short, well-shaped magnetic field pulses can be generated using ultra-relativistic electron bunches at the Stanford Linear Accelerator. These fields of several Tesla, with durations of several picoseconds, are used to study the response of magnetic materials to a very short excitation. Precession of a magnetic moment by 90 degrees in a field of 1 Tesla takes about 10 picoseconds, so we explore the regime of fast switching of the magnetization by precession. Our experiments are in a region of magnetic excitation that is not yet accessible by other methods; current table-top experiments can generate only fields longer than 100 ps with strengths of about 0.1 Tesla. Two types of magnetic materials were used: magnetic recording media and model magnetic thin films. Information about the magnetization dynamics is extracted from the magnetic patterns generated by the magnetic field. The shape and size of these patterns are influenced by the dissipation of angular momentum involved in the switching process. The high-density recording media, of both in-plane and perpendicular type, show a pattern which indicates high spin momentum dissipation. The perpendicular magnetic recording media were exposed to multiple magnetic field pulses. We observed an extended transition region between switched and non-switched areas, indicating a stochastic switching behavior that cannot be explained by thermal fluctuations. The model films consist of very thin crystalline Fe films on GaAs. Even with these model films we see enhanced dissipation compared to ferromagnetic resonance studies. The magnetic patterns show that damping increases with time and is not a constant, as usually assumed in the equation describing the magnetization dynamics. A simulation using the theory of spin-wave scattering explains only half of the observed damping. An important feature of this theory is that the spin dissipation is time dependent and depends on the large angle between the magnetization and the magnetic
Classification data mining method based on dynamic RBF neural networks
NASA Astrophysics Data System (ADS)
Zhou, Lijuan; Xu, Min; Zhang, Zhang; Duan, Luping
2009-04-01
With the wide application of databases and the rapid development of the Internet, the capacity to use information technology to manufacture and collect data has improved greatly. Mining useful information or knowledge from large databases or data warehouses is an urgent problem, and data mining technology has developed rapidly to meet this need. But DM (data mining) often faces data that are noisy, disordered and nonlinear. Fortunately, ANN (Artificial Neural Network) methods are well suited to these problems of DM, because ANNs have merits such as good robustness, adaptability, parallel processing, distributed memory and high error tolerance. This paper gives a detailed discussion of the application of ANN methods in DM based on an analysis of the various kinds of data mining technology, and lays particular stress on classification data mining based on RBF neural networks. Pattern classification is an important part of RBF neural network applications. In an on-line environment, the training dataset is variable, so batch learning algorithms (e.g. OLS), which generate plenty of unnecessary retraining, have low efficiency. This paper derives an incremental learning algorithm (ILA) from the gradient descent algorithm to remove this bottleneck. ILA can adaptively adjust the parameters of RBF networks, driven by minimizing the error cost, without any redundant retraining. Using the method proposed in this paper, an on-line classification system was constructed to solve the IRIS classification problem. Experimental results show that the algorithm has a fast convergence rate and excellent on-line classification performance.
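The incremental idea, adjusting RBF parameters per sample by gradient descent instead of batch retraining, can be sketched as follows (a toy 1-D regression with made-up centers, width and learning rate; only the output weights are adapted here, whereas the paper's ILA adjusts network parameters more generally):

```python
import math
import random

class IncrementalRBF:
    """RBF net trained sample-by-sample by gradient descent, in the
    spirit of an incremental learning algorithm: no batch retraining."""
    def __init__(self, centers, width=0.8, lr=0.2):
        self.c = list(centers)
        self.s = width
        self.lr = lr
        self.w = [0.0] * len(self.c)

    def _phi(self, x):
        # Gaussian basis activations for input x.
        return [math.exp(-((x - ci) ** 2) / (2 * self.s ** 2)) for ci in self.c]

    def predict(self, x):
        return sum(wi * p for wi, p in zip(self.w, self._phi(x)))

    def update(self, x, y):
        """One online step: move output weights along the error gradient."""
        phi = self._phi(x)
        e = y - sum(wi * p for wi, p in zip(self.w, phi))
        self.w = [wi + self.lr * e * p for wi, p in zip(self.w, phi)]

random.seed(4)
net = IncrementalRBF(centers=[i * 0.7 for i in range(10)])
for _ in range(4000):
    x = random.uniform(0.0, 6.3)
    net.update(x, math.sin(x))  # stream samples one at a time

test_err = max(abs(net.predict(x / 10) - math.sin(x / 10)) for x in range(63))
print(test_err)
```

Each new sample triggers one cheap weight update, so the model tracks a variable on-line dataset without the full retraining a batch method such as OLS would require.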
From behavior to neural dynamics: An integrated theory of attention
Buschman, Timothy J.; Kastner, Sabine
2015-01-01
The brain has a limited capacity and therefore needs mechanisms to selectively enhance the information most relevant to one’s current behavior. We refer to these mechanisms as ‘attention’. Attention acts by increasing the strength of selected neural representations and preferentially routing them through the brain’s large-scale network. This is a critical component of cognition and therefore has been a central topic in cognitive neuroscience. Here we review a diverse literature that has studied attention at the level of behavior, networks, circuits and neurons. We then integrate these disparate results into a unified theory of attention. PMID:26447577
NASA Astrophysics Data System (ADS)
Shoaib, Muhammad; Shamseldin, Asaad Y.; Melville, Bruce W.; Khan, Mudasser Muneer
2016-04-01
In order to predict runoff accurately from a rainfall event, multilayer perceptron neural network models are commonly used in hydrology. Furthermore, wavelet coupled multilayer perceptron neural network (MLPNN) models have also been found superior to simple neural network models that are not coupled with wavelets. However, MLPNN models are static, memoryless networks and lack the ability to examine the temporal dimension of the data. Recurrent neural network models, on the other hand, can learn from the preceding conditions of the system and hence are considered dynamic models. This study for the first time explores the potential of wavelet coupled time lagged recurrent neural network (TLRNN) models for runoff prediction using rainfall data. The Discrete Wavelet Transformation (DWT) is employed to decompose the input rainfall data using six of the most commonly used wavelet functions. The performance of the simple and the wavelet coupled static MLPNN models is compared with that of their counterpart dynamic TLRNN models. The study found that the dynamic wavelet coupled TLRNN models can be considered an alternative to the static wavelet MLPNN models. The study also investigated the effect of memory depth on the performance of static and dynamic neural network models; the memory depth refers to how much past (lagged) information is required, which is not known a priori. The db8 wavelet function is found to yield the best results with the static MLPNN models and with TLRNN models having small memory depths. The performance of the wavelet coupled TLRNN models with large memory depths is found to be insensitive to the choice of wavelet function, as all wavelet functions have similar performance.
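The DWT step above splits the rainfall series into approximation (low-frequency) and detail (high-frequency) components before they are fed to the network. A minimal single-level Haar transform illustrates the mechanics; the study itself used Daubechies wavelets such as db8 (typically via a library like PyWavelets), and the rainfall values below are made up for illustration.

```python
import numpy as np

def haar_dwt(x):
    """Single-level Haar DWT: approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass (approximation)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass (detail)
    return a, d

def haar_idwt(a, d):
    """Inverse single-level Haar DWT."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

rain = np.array([0.0, 1.0, 3.0, 2.0, 5.0, 4.0, 1.0, 0.0])  # toy series
a, d = haar_dwt(rain)
rec = haar_idwt(a, d)
print(np.allclose(rec, rain))   # the transform is perfectly invertible
```

In the wavelet-coupled models, the subband coefficients (rather than the raw series) become network inputs, which is what lets the network see the signal at several temporal scales.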
Direct imaging of neural currents using ultra-low field magnetic resonance techniques
Volegov, Petr L.; Matlashov, Andrei N.; Mosher, John C.; Espy, Michelle A.; Kraus, Jr., Robert H.
2009-08-11
Using resonant interactions to directly and tomographically image neural activity in the human brain with magnetic resonance imaging (MRI) techniques at ultra-low field (ULF), the present inventors have established an approach that is sensitive to magnetic field distributions local to the spin population in cortex at the Larmor frequency of the measurement field. Because the Larmor frequency can be readily manipulated (by varying B_m), one can also envision using ULF-DNI to image the frequency distribution of the local fields in cortex. Such information, taken together with simultaneous acquisition of MEG and ULF-NMR signals, enables non-invasive exploration of the correlation between local fields induced by neural activity in cortex and more 'distant' measures of brain activity such as MEG and EEG.
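The manipulability of the Larmor frequency follows directly from its linear dependence on the measurement field, f = (gamma / 2*pi) * B. For protons gamma/2*pi is about 42.577 MHz/T, so an ultra-low measurement field of, say, 100 microtesla (an illustrative value, not one quoted in the abstract) puts the resonance in the low-kilohertz range:

```python
# Proton Larmor frequency f = (gamma / 2*pi) * B.
GAMMA_OVER_2PI = 42.577478e6   # Hz per tesla, proton gyromagnetic ratio

def larmor_hz(b_tesla):
    """Larmor frequency in Hz for a proton in field b_tesla."""
    return GAMMA_OVER_2PI * b_tesla

print(larmor_hz(100e-6))   # ~4.26 kHz at an example 100 uT field
```

Operating at kilohertz rather than megahertz frequencies is what makes the spin population sensitive to the weak, slowly varying fields produced by neural currents.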
ERIC Educational Resources Information Center
Zion-Golumbic, Elana; Kutas, Marta; Bentin, Shlomo
2010-01-01
Prior semantic knowledge facilitates episodic recognition memory for faces. To examine the neural manifestation of the interplay between semantic and episodic memory, we investigated neuroelectric dynamics during the creation (study) and the retrieval (test) of episodic memories for famous and nonfamous faces. Episodic memory effects were evident…
A Neural Network Model of the Structure and Dynamics of Human Personality
ERIC Educational Resources Information Center
Read, Stephen J.; Monroe, Brian M.; Brownstein, Aaron L.; Yang, Yu; Chopra, Gurveen; Miller, Lynn C.
2010-01-01
We present a neural network model that aims to bridge the historical gap between dynamic and structural approaches to personality. The model integrates work on the structure of the trait lexicon, the neurobiology of personality, temperament, goal-based models of personality, and an evolutionary analysis of motives. It is organized in terms of two…
Faugeras, Olivier; Touboul, Jonathan; Cessac, Bruno
2008-01-01
We deal with the problem of bridging the gap between two scales in neuronal modeling. At the first (microscopic) scale, neurons are considered individually and their behavior described by stochastic differential equations that govern the time variations of their membrane potentials. They are coupled by synaptic connections acting on their resulting activity, a nonlinear function of their membrane potential. At the second (mesoscopic) scale, interacting populations of neurons are described individually by similar equations. The equations describing the dynamical and the stationary mean-field behaviors are considered as functional equations on a set of stochastic processes. Using this new point of view allows us to prove that these equations are well-posed on any finite time interval and to provide a constructive method for effectively computing their unique solution. This method is proved to converge to the unique solution and we characterize its complexity and convergence rate. We also provide partial results for the stationary problem on infinite time intervals. These results shed some new light on such neural mass models as the one of Jansen and Rit (1995): their dynamics appears as a coarse approximation of the much richer dynamics that emerges from our analysis. Our numerical experiments confirm that the framework we propose and the numerical methods we derive from it provide a new and powerful tool for the exploration of neural behaviors at different scales. PMID:19255631
Dynamical similarity of geomagnetic field reversals.
Valet, Jean-Pierre; Fournier, Alexandre; Courtillot, Vincent; Herrero-Bervera, Emilio
2012-10-01
No consensus has been reached so far on the properties of the geomagnetic field during reversals or on the main features that might reveal its dynamics. A main characteristic of the reversing field is a large decrease in the axial dipole and the dominant role of non-dipole components. Other features strongly depend on whether they are derived from sedimentary or volcanic records. Only thermal remanent magnetization of lava flows can capture faithful records of a rapidly varying non-dipole field, but, because of episodic volcanic activity, sequences of overlying flows yield incomplete records. Here we show that the ten most detailed volcanic records of reversals can be matched in a very satisfactory way, under the assumption of a common duration, revealing common dynamical characteristics. We infer that the reversal process has remained unchanged, with the same time constants and durations, at least since 180 million years ago. We propose that the reversing field is characterized by three successive phases: a precursory event, a 180° polarity switch and a rebound. The first and third phases reflect the emergence of the non-dipole field with large-amplitude secular variation. They are rarely both recorded at the same site owing to the rapidly changing field geometry and last for less than 2,500 years. The actual transit between the two polarities does not last longer than 1,000 years and might therefore result from mechanisms other than those governing normal secular variation. Such changes are too brief to be accurately recorded by most sediments. PMID:23038471
Mean Field Analysis of Stochastic Neural Network Models with Synaptic Depression
NASA Astrophysics Data System (ADS)
Igarashi, Yasuhiko; Oizumi, Masafumi; Okada, Masato
2010-08-01
We investigated the effects of synaptic depression on the macroscopic behavior of stochastic neural networks. Dynamical mean field equations were derived for such networks by taking the average of two stochastic variables: a firing-state variable and a synaptic variable. In these equations, the average product of these variables is decoupled as the product of their averages because the two stochastic variables are independent. We proved the independence of these two stochastic variables assuming that the synaptic weight J_ij is of the order of 1/N with respect to the number of neurons N. Using these equations, we derived macroscopic steady-state equations for a network with uniform connections and for a ring attractor network with Mexican-hat-type connectivity, and investigated the stability of the steady-state solutions. An oscillatory uniform state was observed in the network with uniform connections owing to a Hopf instability. For the ring network, high-frequency perturbations were shown not to affect system stability. Two mechanisms destabilize the inhomogeneous steady state, leading to two oscillatory states: a Turing instability leads to a rotating bump state, while a Hopf instability leads to an oscillatory bump state, which was previously unreported. Various oscillatory states thus arise in a network with synaptic depression, depending on the strength of the interneuron connections.
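The mean-field picture above can be caricatured by a two-variable rate model in which mean activity m drives depletion of a synaptic resource u that gates recurrent input. This generic depression model and all parameter values are illustrative assumptions for the sake of a runnable sketch, not the paper's exact equations.

```python
import numpy as np

def f(x):
    """Sigmoidal population firing-rate function (assumed form)."""
    return 1.0 / (1.0 + np.exp(-4.0 * (x - 0.5)))

tau_m, tau_d = 10.0, 200.0   # ms: activity and depression time constants
U, J, I = 0.01, 2.0, 0.1     # depletion rate, coupling, external drive
m, u, dt = 0.1, 1.0, 0.1

for _ in range(50000):                       # 5 s of simulated time
    dm = (-m + f(J * u * m + I)) / tau_m     # mean activity equation
    du = (1.0 - u) / tau_d - U * u * m       # synaptic resource equation
    m += dt * dm
    u += dt * du
print(round(m, 3), round(u, 3))              # settles to a steady state
```

With these mild parameters the system relaxes to a stable fixed point; the paper's oscillatory uniform and bump states arise when the depression feedback is strong enough to destabilize such a steady state via a Hopf or Turing instability.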
Real-time automated EEG tracking of brain states using neural field theory.
Abeysuriya, R G; Robinson, P A
2016-01-30
A real-time fitting system is developed and used to fit the predictions of an established physiologically-based neural field model to electroencephalographic spectra, yielding a trajectory in a physiological parameter space that parametrizes intracortical, intrathalamic, and corticothalamic feedbacks as the arousal state evolves continuously over time. This avoids traditional sleep/wake staging (e.g., using Rechtschaffen-Kales stages), which is fundamentally limited because it forces classification of continuous dynamics into a few discrete categories that are neither physiologically informative nor individualized. The classification is also subject to substantial interobserver disagreement because traditional staging relies in part on subjective evaluations. The fitting routine objectively and robustly tracks arousal parameters over the course of a full night of sleep, and runs in real-time on a desktop computer. The system developed here supersedes discrete staging systems by representing arousal states in terms of physiology, and provides an objective measure of arousal state which solves the problem of interobserver disagreement. Discrete stages from traditional schemes can be expressed in terms of model parameters for backward compatibility with prior studies. PMID:26523766
Stability of bumps in piecewise smooth neural fields with nonlinear adaptation
NASA Astrophysics Data System (ADS)
Kilpatrick, Zachary P.; Bressloff, Paul C.
2010-06-01
We study the linear stability of stationary bumps in piecewise smooth neural fields with local negative feedback in the form of synaptic depression or spike frequency adaptation. The continuum dynamics is described in terms of a nonlocal integrodifferential equation, in which the integral kernel represents the spatial distribution of synaptic weights between populations of neurons whose mean firing rate is taken to be a Heaviside function of local activity. Discontinuities in the adaptation variable associated with a bump solution mean that bump stability cannot be analyzed by constructing the Evans function for a network with a sigmoidal gain function and then taking the high-gain limit. In the case of synaptic depression, we show that linear stability can be formulated in terms of solutions to a system of pseudo-linear equations. We thus establish that sufficiently strong synaptic depression can destabilize a bump that is stable in the absence of depression. These instabilities are dominated by shift perturbations that evolve into traveling pulses. In the case of spike frequency adaptation, we show that for a wide class of perturbations the activity and adaptation variables decouple in the linear regime, thus allowing us to explicitly determine stability in terms of the spectrum of a smooth linear operator. We find that bumps are always unstable with respect to this class of perturbations, and destabilization of a bump can result in either a traveling pulse or a spatially localized breather.
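The underlying field equation (without the negative feedback studied above) is the Amari model, tau * du/dt = -u + integral w(x-y) H(u(y) - theta) dy, with a Mexican-hat kernel w and Heaviside firing rate H. A direct simulation showing a stationary bump is short; the kernel amplitudes, threshold, and domain below are illustrative choices, not the paper's.

```python
import numpy as np

# 1D Amari neural field with Heaviside firing rate, integrated by Euler
# stepping; the nonlocal term is a periodic convolution done with FFTs.
L_dom, N = 20.0, 256
x = np.linspace(-L_dom / 2, L_dom / 2, N, endpoint=False)
dx = x[1] - x[0]
w = np.exp(-x**2) - 0.4 * np.exp(-x**2 / 4.0)   # Mexican-hat kernel
theta, tau, dt = 0.2, 1.0, 0.05

u = np.exp(-x**2)                 # localized initial condition
for _ in range(2000):
    fire = (u > theta).astype(float)           # Heaviside firing rate
    inp = dx * np.real(np.fft.ifft(np.fft.fft(w) * np.fft.fft(fire)))
    inp = np.roll(inp, N // 2)                 # recenter the kernel
    u += dt * (-u + inp) / tau
print(float(u.max()))             # a localized bump persists
```

With this lateral-inhibition kernel the suprathreshold region settles to a fixed width, the classic stable-bump regime whose destabilization by depression or adaptation is the subject of the abstract.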
DYNAMICAL FIELD LINE CONNECTIVITY IN MAGNETIC TURBULENCE
Ruffolo, D.; Matthaeus, W. H.
2015-06-20
Point-to-point magnetic connectivity has a stochastic character whenever magnetic fluctuations cause a field line random walk, but this can also change due to dynamical activity. Comparing the instantaneous magnetic connectivity from the same point at two different times, we provide a nonperturbative analytic theory for the ensemble average perpendicular displacement of the magnetic field line, given the power spectrum of magnetic fluctuations. For simplicity, the theory is developed in the context of transverse turbulence, and is numerically evaluated for the noisy reduced MHD model. Our formalism accounts for the dynamical decorrelation of magnetic fluctuations due to wave propagation, local nonlinear distortion, random sweeping, and convection by a bulk wind flow relative to the observer. The diffusion coefficient D_X of the time-differenced displacement becomes twice the usual field line diffusion coefficient D_x at large time displacement t or large distance z along the mean field (corresponding to a pair of uncorrelated random walks), though for a low Kubo number (in the quasilinear regime) it can oscillate at intermediate values of t and z. At high Kubo number the dynamical decorrelation decays mainly from the nonlinear term and D_X tends monotonically toward 2D_x with increasing t and z. The formalism and results presented here are relevant to a variety of astrophysical processes, such as electron transport and heating patterns in coronal loops and the solar transition region, changing magnetic connection to particle sources near the Sun or at a planetary bow shock, and thickening of coronal hole boundaries.
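The limiting statement D_X -> 2 D_x has a simple probabilistic core: the difference of two fully decorrelated random walks has twice the variance of either walk. A Monte Carlo check of just that statistical fact (not of the turbulence theory itself) takes a few lines:

```python
import numpy as np

# Two independent Gaussian random walks stand in for fully decorrelated
# field lines; the variance of their difference should be twice the
# variance of a single walk, i.e. the diffusion coefficient doubles.
rng = np.random.default_rng(1)
steps = rng.normal(size=(2, 10000, 100))   # 2 walks x realizations x steps
walks = steps.cumsum(axis=2)               # displacement vs distance z
var_single = walks[0, :, -1].var()
var_diff = (walks[0, :, -1] - walks[1, :, -1]).var()
ratio = var_diff / var_single
print(ratio)                               # close to 2
```

The nontrivial content of the paper is how D_X approaches this factor of two, monotonically at high Kubo number and with oscillations in the quasilinear regime.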
Plant Pathogen Population Dynamics in Potato Fields
Morgan, G. D.; Stevenson, W. R.; MacGuidwin, A. E.; Kelling, K. A.; Binning, L. K.; Zhu, J.
2002-01-01
Modern technologies incorporating Geographic Information Systems (GIS), Global Positioning Systems (GPS), remote sensing, and geostatistics provide unique opportunities to advance ecological understanding of pests across a landscape. Increased knowledge of the population dynamics of plant pathogens will promote management strategies, such as site-specific management, and cultural practices minimizing the introduction and impact of plant pathogens. The population dynamics of Alternaria solani, Verticillium dahliae, and Pratylenchus penetrans were investigated in commercial potato fields. A 0.5-ha diamond grid-sampling scheme was georeferenced, and all disease ratings and nematode samples were taken at these grid points. Percent disease severity was rated weekly, and P. penetrans densities were quantified 4 weeks after potato emergence. Spatial statistics and interpolation methods were used to identify the spatial distribution and population dynamics of each pathogen. Interpolated maps and aerial imagery identified A. solani intra-season progression across the fields as the potato crop matured. Late-season nitrogen application reduced A. solani severity. The spatial distributions of V. dahliae and P. penetrans were spatially correlated. PMID:19265932
Biophysical Neural Spiking, Bursting, and Excitability Dynamics in Reconfigurable Analog VLSI.
Yu, T; Sejnowski, T J; Cauwenberghs, G
2011-10-01
We study a range of neural dynamics under variations in biophysical parameters underlying extended Morris-Lecar and Hodgkin-Huxley models in three gating variables. The extended models are implemented in NeuroDyn, a four neuron, twelve synapse continuous-time analog VLSI programmable neural emulation platform with generalized channel kinetics and biophysical membrane dynamics. The dynamics exhibit a wide range of time scales extending beyond 100 ms neglected in typical silicon models of tonic spiking neurons. Circuit simulations and measurements show transition from tonic spiking to tonic bursting dynamics through variation of a single conductance parameter governing calcium recovery. We similarly demonstrate transition from graded to all-or-none neural excitability in the onset of spiking dynamics through the variation of channel kinetic parameters governing the speed of potassium activation. Other combinations of variations in conductance and channel kinetic parameters give rise to phasic spiking and spike frequency adaptation dynamics. The NeuroDyn chip consumes 1.29 mW and occupies 3 mm × 3 mm in 0.5 μm CMOS, supporting emerging developments in neuromorphic silicon-neuron interfaces.
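The Morris-Lecar dynamics emulated in hardware above can be reproduced numerically. The sketch below uses a standard Hopf-regime parameter set from the modeling literature to show tonic spiking; these software values illustrate the model class, not NeuroDyn's programmed hardware settings.

```python
import numpy as np

# Euler integration of the two-variable Morris-Lecar model.
C, gL, gCa, gK = 20.0, 2.0, 4.4, 8.0          # uF/cm^2, mS/cm^2
VL, VCa, VK = -60.0, 120.0, -84.0             # reversal potentials, mV
V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04
I_app = 120.0                                  # uA/cm^2, tonic-spiking drive

def minf(V): return 0.5 * (1 + np.tanh((V - V1) / V2))   # Ca activation
def winf(V): return 0.5 * (1 + np.tanh((V - V3) / V4))   # K activation
def lamw(V): return phi * np.cosh((V - V3) / (2 * V4))   # K rate

V, w, dt = -60.0, 0.0, 0.05
trace = []
for _ in range(int(500 / dt)):                 # 500 ms of simulated time
    dV = (I_app - gL*(V - VL) - gCa*minf(V)*(V - VCa) - gK*w*(V - VK)) / C
    dw = lamw(V) * (winf(V) - w)
    V += dt * dV
    w += dt * dw
    trace.append(V)
trace = np.array(trace)
print(trace.min(), trace.max())                # sustained large oscillation
```

Sweeping a single parameter (for instance the drive I_app or a conductance) moves the model between quiescence, tonic spiking, and other regimes, which is the kind of transition the chip measurements demonstrate.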
Use of artificial neural nets to predict permeability in Hugoton Field
Thompson, K.A.; Franklin, M.H.; Olson, T.M.
1996-12-31
One of the most difficult tasks in petrophysics is establishing a quantitative relationship between core permeability and wireline logs. This is a tough problem in Hugoton Field, where a complicated mix of carbonates and clastics further obscures the correlation. One can successfully model complex relationships such as permeability-to-logs using artificial neural networks. Mind and Vision, Inc.'s neural net software was used because of its orientation toward depth-related data (such as logs) and its ability to run on a variety of log analysis platforms. This type of neural net program allows the expert geologist to select a few (10-100) points of control to train the "brainstate" using logs as predictors and core permeability as "truth". In Hugoton Field, the brainstate provides an estimate of permeability at each depth in 474 logged wells. These neural net-derived permeabilities are being used in reservoir characterization models for fluid saturations. Other applications of this artificial neural network technique include deterministic relationships of logs to core lithology, core porosity, pore type, and other wireline logs (e.g., predicting a sonic log from a density log).
Nonequilibrium dynamical mean-field theory
NASA Astrophysics Data System (ADS)
Freericks, James
2007-03-01
Dynamical mean-field theory (DMFT) is establishing itself as one of the most powerful approaches to the quantum many-body problem in strongly correlated electron materials. Recently, the formalism has been generalized to study nonequilibrium problems [1,2], such as the evolution of Bloch oscillations in a material that changes from a diffusive metal to a Mott insulator [2,3]. Using a real-time formalism on the Kadanoff-Baym-Keldysh contour, the DMFT algorithm can be generalized to the case of systems that are not time-translation invariant. The computational algorithm has a parallel implementation with essentially a linear scale up when running on thousands of processors. Results on the decay of Bloch oscillations, their change of character within the Mott insulator, and movies on how electrons redistribute themselves due to their response to an external electrical field will be presented. In addition to solid-state applications, this work also applies to the behavior of mixtures of light and heavy cold atoms in optical lattices. [1] V. M. Turkowski and J. K. Freericks, Spectral moment sum rules for strongly correlated electrons in time-dependent electric fields, Phys. Rev. B 075108 (2006); Erratum, Phys. Rev. B 73, 209902(E) (2006). [2] J. K. Freericks, V. M. Turkowski, and V. Zlatić, Nonlinear response of strongly correlated materials to large electric fields, in Proceedings of the HPCMP Users Group Conference 2006, Denver, CO, June 26-29, 2006, edited by D. E. Post (IEEE Computer Society, Los Alamitos, CA, 2006), to appear. [3] J. K. Freericks, V. M. Turkowski, and V. Zlatić, Nonequilibrium dynamical mean-field theory, submitted to Phys. Rev. Lett., cond-mat/0607053.
Discrete neural dynamic programming in wheeled mobile robot control
NASA Astrophysics Data System (ADS)
Hendzel, Zenon; Szuster, Marcin
2011-05-01
In this paper we propose a discrete algorithm for tracking control of a two-wheeled mobile robot (WMR), using an advanced Adaptive Critic Design (ACD). We used the Dual-Heuristic Programming (DHP) algorithm, which consists of two parametric structures implemented as Neural Networks (NNs): an actor and a critic, both realized in the form of Random Vector Functional Link (RVFL) NNs. In the proposed algorithm the control system consists of the DHP adaptive critic, a PD controller and a supervisory term derived from the Lyapunov stability theorem. The supervisory term guarantees stable realization of the tracking movement during the learning phase of the adaptive critic structure, as well as robustness in the face of disturbances. The discrete tracking control algorithm works online, uses the WMR model for state prediction, and does not require preliminary learning. Verification has been conducted to illustrate the performance of the proposed control algorithm through a series of experiments on the WMR Pioneer 2-DX.
Dynamic Neural Processing of Linguistic Cues Related to Death
Ma, Yina; Qin, Jungang; Han, Shihui
2013-01-01
Behavioral studies suggest that humans evolve the capacity to cope with anxiety induced by the awareness of death’s inevitability. However, the neurocognitive processes that underlie online death-related thoughts remain unclear. Our recent functional MRI study found that the processing of linguistic cues related to death was characterized by decreased neural activity in human insular cortex. The current study further investigated the time course of neural processing of death-related linguistic cues. We recorded event-related potentials (ERP) to death-related, life-related, negative-valence, and neutral-valence words in a modified Stroop task that required color naming of words. We found that the amplitude of an early frontal/central negativity at 84–120 ms (N1) decreased to death-related words but increased to life-related words relative to neutral-valence words. The N1 effect associated with death-related and life-related words was correlated respectively with individuals’ pessimistic and optimistic attitudes toward life. Death-related words also increased the amplitude of a frontal/central positivity at 124–300 ms (P2) and of a frontal/central positivity at 300–500 ms (P3). However, the P2 and P3 modulations were observed for both death-related and negative-valence words but not for life-related words. The ERP results suggest an early inverse coding of linguistic cues related to life and death, which is followed by negative emotional responses to death-related information. PMID:23840787
Amozegar, M; Khorasani, K
2016-04-01
In this paper, a new approach for Fault Detection and Isolation (FDI) of gas turbine engines is proposed by developing an ensemble of dynamic neural network identifiers. For health monitoring of the gas turbine engine, its dynamics is first identified by constructing three separate or individual dynamic neural network architectures. Specifically, a dynamic multi-layer perceptron (MLP), a dynamic radial-basis function (RBF) neural network, and a dynamic support vector machine (SVM) are trained to individually identify and represent the gas turbine engine dynamics. Next, three ensemble-based techniques are developed to represent the gas turbine engine dynamics, namely, two heterogeneous ensemble models and one homogeneous ensemble model. It is first shown that all ensemble approaches do significantly improve the overall performance and accuracy of the developed system identification scheme when compared to each of the stand-alone solutions. The best selected stand-alone model (i.e., the dynamic RBF network) and the best selected ensemble architecture (i.e., the heterogeneous ensemble) in terms of their performances in achieving an accurate system identification are then selected for solving the FDI task. The required residual signals are generated by using both a single model-based solution and an ensemble-based solution under various gas turbine engine health conditions. Our extensive simulation studies demonstrate that the fault detection and isolation task achieved by using the residuals that are obtained from the dynamic ensemble scheme results in a significantly more accurate and reliable performance as illustrated through detailed quantitative confusion matrix analysis and comparative studies. PMID:26881999
Dynamic control of ROVs making use of the neural network concept
Ooi, Tadashi; Yoshida, Yuki; Takahashi, Yoshiaki; Kidoushi, Hideki
1994-12-31
An attempt is made to combine a classical controller with the neural network concept; the result is a control system the authors have named the Robust Adaptive Neural-net Controller (RANC). The RANC identifies the dynamic characteristics of the remotely operated vehicle (ROV), including its ambient environment with cyclic disturbances such as wave-induced forces, and automatically organizes an optimized controller. A tank experiment is described in which the RANC is set to maintain a model ROV at a prescribed water depth under artificially generated wave disturbance.
Study of the neural dynamics for understanding communication in terms of complex hetero systems.
Tsuda, Ichiro; Yamaguchi, Yoko; Hashimoto, Takashi; Okuda, Jiro; Kawasaki, Masahiro; Nagasaka, Yasuo
2015-01-01
The purpose of the research project was to establish a new research area, "neural information science for communication," by elucidating the neural mechanisms of communication. The research was performed in collaboration with applied mathematicians in complex-systems science and experimental researchers in neuroscience. The project included measurements of brain activity during communication with or without language, together with analyses performed with the help of extended theories of dynamical and stochastic systems. The communication paradigm was extended to interactions between human and human, human and animal, human and robot, human and materials, and even animal and animal.
Neural networks for tracking of unknown SISO discrete-time nonlinear dynamic systems.
Aftab, Muhammad Saleheen; Shafiq, Muhammad
2015-11-01
This article presents a Lyapunov function based neural network tracking (LNT) strategy for single-input, single-output (SISO) discrete-time nonlinear dynamic systems. The proposed LNT architecture is composed of two feedforward neural networks operating as controller and estimator. A Lyapunov function based back propagation learning algorithm is used for online adjustment of the controller and estimator parameters. The controller and estimator error convergence and closed-loop system stability analysis is performed by Lyapunov stability theory. Moreover, two simulation examples and one real-time experiment are investigated as case studies. The achieved results successfully validate the controller performance. PMID:26456201
The neural selection and control of saccades by the frontal eye field.
Schall, Jeffrey D
2002-01-01
Recent research has provided new insights into the neural processes that select the target for and control the production of a shift of gaze. Being a key node in the network that subserves visual processing and saccade production, the frontal eye field (FEF) has been an effective area in which to monitor these processes. Certain neurons in the FEF signal the location of conspicuous or meaningful stimuli that may be the targets for saccades. Other neurons control whether and when the gaze shifts. The existence of distinct neural processes for visual selection and saccade production is necessary to explain the flexibility of visually guided behaviour. PMID:12217175
Petrović, Jelena; Ibrić, Svetlana; Betz, Gabriele; Đurić, Zorica
2012-05-30
The main objective of the study was to develop artificial intelligence methods for optimization of drug release from matrix tablets regardless of the matrix type. Static and dynamic artificial neural networks of the same topology were developed to model dissolution profiles of different matrix tablet types (hydrophilic/lipid) using formulation composition, compression force used for tableting, and tablet porosity and tensile strength as input data. The potential application of decision trees in discovering knowledge from experimental data was also investigated. Polyethylene oxide polymer and glyceryl palmitostearate were used as matrix-forming materials for hydrophilic and lipid matrix tablets, respectively, whereas the selected model drugs were diclofenac sodium and caffeine. Matrix tablets were prepared by the direct compression method and tested for in vitro dissolution profiles. Optimization of the static and dynamic neural networks used for modeling of drug release was performed using Monte Carlo simulations or a genetic algorithm optimizer. Decision trees were constructed following discretization of the data. Calculated difference (f(1)) and similarity (f(2)) factors for predicted and experimentally obtained dissolution profiles of test matrix tablet formulations indicate that Elman dynamic neural networks, as well as decision trees, are capable of accurate prediction of dissolution profiles for both hydrophilic and lipid matrix tablets. Elman neural networks were compared to the most frequently used static network, the multilayer perceptron, and the superiority of Elman networks was demonstrated. The developed methods allow a simple yet very precise way of predicting drug release for both hydrophilic and lipid matrix tablets with controlled drug release.
NASA Astrophysics Data System (ADS)
Deng, Xinyi; Eskandar, Emad N.; Eden, Uri T.
2013-12-01
Understanding the role of rhythmic dynamics in normal and diseased brain function is an important area of research in neural electrophysiology. Identifying and tracking changes in rhythms associated with spike trains present an additional challenge, because standard approaches for continuous-valued neural recordings—such as local field potential, magnetoencephalography, and electroencephalography data—require assumptions that do not typically hold for point process data. Additionally, subtle changes in the history dependent structure of a spike train have been shown to lead to robust changes in rhythmic firing patterns. Here, we propose a point process modeling framework to characterize the rhythmic spiking dynamics in spike trains, test for statistically significant changes to those dynamics, and track the temporal evolution of such changes. We first construct a two-state point process model incorporating spiking history and develop a likelihood ratio test to detect changes in the firing structure. We then apply adaptive state-space filters and smoothers to track these changes through time. We illustrate our approach with a simulation study as well as with experimental data recorded in the subthalamic nucleus of Parkinson's patients performing an arm movement task. Our analyses show that during the arm movement task, neurons underwent a complex pattern of modulation of spiking intensity characterized initially by a release of inhibitory control at 20-40 ms after a spike, followed by a decrease in excitatory influence at 40-60 ms after a spike.
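The likelihood-ratio idea above can be illustrated in a much-reduced discrete-time setting: a Bernoulli spike train whose rate depends on the previous bin, tested against a constant-rate null. The one-bin history model with closed-form maximum-likelihood estimates is an assumption for brevity, not the authors' state-space point-process framework.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)

# Simulate a Bernoulli spike train whose rate depends on the previous bin,
# a minimal stand-in for history-dependent point-process structure.
n, p_base, p_refr = 5000, 0.2, 0.02   # refractory-like suppression after a spike
spikes = np.zeros(n, dtype=int)
for t in range(1, n):
    p = p_refr if spikes[t - 1] else p_base
    spikes[t] = rng.random() < p

prev, curr = spikes[:-1], spikes[1:]

def bern_ll(y, p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# H0: constant firing rate.  H1: rate depends on previous-bin spiking.
ll0 = bern_ll(curr, curr.mean())
ll1 = (bern_ll(curr[prev == 0], curr[prev == 0].mean())
       + bern_ll(curr[prev == 1], curr[prev == 1].mean()))

stat = 2.0 * (ll1 - ll0)              # LR statistic, ~chi2(1 df) under H0
p_value = chi2.sf(stat, df=1)
```

With the strong simulated refractoriness, the test rejects the history-free null decisively; on real spike trains the same logic is applied with richer history covariates.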
Recurrent neural nets as dynamical Boolean systems with application to associative memory.
Watta, P B; Wang, K; Hassoun, M H
1997-01-01
Discrete-time/discrete-state recurrent neural networks are analyzed from a dynamical Boolean systems point of view in order to devise new analytic and design methods for the class of both single and multilayer recurrent artificial neural networks. With the proposed dynamical Boolean systems analysis, we are able to formulate necessary and sufficient conditions for network stability which are more general than the well-known but restrictive conditions for the class of single layer networks: (1) symmetric weight matrix with (2) positive diagonal and (3) asynchronous update. In terms of design, we use a dynamical Boolean systems analysis to construct a high performance associative memory. With this Boolean memory, we can guarantee that all fundamental memories are stored, and also guarantee the size of the basin of attraction for each fundamental memory.
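A minimal sketch of the discrete-time/discrete-state setting: a Hebbian associative memory with (1) symmetric weights, (2) non-negative (here zero) diagonal, and (3) asynchronous updates, the classical stability conditions named above. Network size, pattern count, and corruption level are illustrative choices, not the authors' construction.

```python
import numpy as np

rng = np.random.default_rng(2)

# Store +/-1 patterns in a recurrent net with a Hebbian rule, then run
# asynchronous threshold updates -- the dynamical-Boolean-system setting.
N = 64
patterns = rng.choice([-1, 1], size=(3, N))
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)                 # symmetric weights, zero diagonal

def recall(state, sweeps=5):
    s = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):   # asynchronous update order
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Corrupt 8 of 64 bits and check the fundamental memory is recovered.
probe = patterns[0].copy()
flip = rng.choice(N, size=8, replace=False)
probe[flip] *= -1
out = recall(probe)
```

Well below capacity, the corrupted probe falls inside the basin of attraction of the stored pattern and the asynchronous dynamics restore it.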
Neural population dynamics in human motor cortex during movements in people with ALS
Pandarinath, Chethan; Gilja, Vikash; Blabe, Christine H; Nuyujukian, Paul; Sarma, Anish A; Sorice, Brittany L; Eskandar, Emad N; Hochberg, Leigh R; Henderson, Jaimie M; Shenoy, Krishna V
2015-01-01
The prevailing view of motor cortex holds that motor cortical neural activity represents muscle or movement parameters. However, recent studies in non-human primates have shown that neural activity does not simply represent muscle or movement parameters; instead, its temporal structure is well-described by a dynamical system where activity during movement evolves lawfully from an initial pre-movement state. In this study, we analyze neuronal ensemble activity in motor cortex in two clinical trial participants diagnosed with Amyotrophic Lateral Sclerosis (ALS). We find that activity in human motor cortex has similar dynamical structure to that of non-human primates, indicating that human motor cortex contains a similar underlying dynamical system for movement generation. Clinical trial registration: NCT00912041. DOI: http://dx.doi.org/10.7554/eLife.07436.001 PMID:26099302
Degradation Prediction Model Based on a Neural Network with Dynamic Windows
Zhang, Xinghui; Xiao, Lei; Kang, Jianshe
2015-01-01
Tracking degradation of mechanical components is very critical for effective maintenance decision making. Remaining useful life (RUL) estimation is a widely used form of degradation prediction. RUL prediction methods for cases where sufficient run-to-failure condition monitoring data are available have been thoroughly researched, but for some high-reliability components it is very difficult to collect run-to-failure condition monitoring data, i.e., data spanning normal operation to failure. Only a certain number of condition indicators over a certain period can be used to estimate RUL. In addition, some existing prediction methods suffer from poor extrapolability, which blocks RUL estimation: the predicted value converges to a constant or fluctuates within a certain range. Moreover, fluctuating condition features also degrade prediction. To resolve these difficulties, this paper proposes a RUL prediction model based on a neural network with dynamic windows. The model consists of three main steps: window size determination by increasing rate, change point detection, and rolling prediction. The proposed method has two dominant strengths: it does not need to assume that the degradation trajectory follows a certain distribution, and it can adapt to variation of the degradation indicators, which greatly benefits RUL prediction. Finally, the performance of the proposed RUL prediction model is validated on real field data and simulation data. PMID:25806873
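The rolling-prediction step can be sketched as follows: repeatedly fit the latest window of the degradation indicator, extrapolate one step, and feed the prediction back in until a failure threshold is crossed. The synthetic indicator, the fixed window (the paper's window is dynamic), and the linear model standing in for the neural network are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical degradation indicator: slow drift plus noise.
t = np.arange(200)
signal = 0.05 * t + 0.002 * t**1.2 + rng.normal(0, 0.3, t.size)

failure_level = 18.0
window = 40                            # dynamic in the paper; fixed here

# Rolling prediction: fit the latest window, extrapolate one step, and
# feed predictions back in (recursive multi-step forecasting).
hist = list(signal[-window:])
steps = 0
while hist[-1] < failure_level and steps < 1000:
    w = np.array(hist[-window:])
    coef = np.polyfit(np.arange(window), w, deg=1)
    hist.append(np.polyval(coef, window))   # one-step-ahead forecast
    steps += 1

rul_estimate = steps                   # steps until predicted failure
```

Because each forecast becomes part of the next window, the extrapolation tracks the current trend instead of collapsing to a constant, which is the extrapolability failure the abstract describes.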
Large Deviations, Dynamics and Phase Transitions in Large Stochastic and Disordered Neural Networks
NASA Astrophysics Data System (ADS)
Cabana, Tanguy; Touboul, Jonathan
2013-10-01
Neuronal networks are characterized by highly heterogeneous connectivity, and this disorder was recently related experimentally to qualitative properties of the network. The motivation of this paper is to mathematically analyze the role of these disordered connectivities on the large-scale properties of neuronal networks. To this end, we analyze here large-scale limit behaviors of neural networks including, for biological relevance, multiple populations, random connectivities and interaction delays. Due to the randomness of the connectivity, usual mean-field methods (e.g. coupling) cannot be applied, but, similarly to studies developed for spin glasses, we will show that the sequences of empirical measures satisfy a large deviation principle, and converge towards a self-consistent non-Markovian process. From a mathematical viewpoint, the proof differs from previous works in that we are working in infinite-dimensional spaces (interaction delays) and consider multiple cell types. The limit obtained formally characterizes the macroscopic behavior of the network. We propose a dynamical systems approach in order to address the qualitative nature of the solutions of these very complex equations, and apply this methodology to three instances in order to show how non-centered coefficients, interaction delays and multiple-population networks are affected by disorder levels. We identify a number of phase transitions in such systems upon changes in delays, connectivity patterns and dispersion, and particularly focus on the emergence of non-equilibrium states involving synchronized oscillations.
Dynamical complexity in the C.elegans neural network
NASA Astrophysics Data System (ADS)
Antonopoulos, C. G.; Fokas, A. S.; Bountis, T. C.
2016-09-01
We model the neuronal circuit of the C.elegans soil worm in terms of a Hindmarsh-Rose system of ordinary differential equations, dividing its circuit into six communities which are determined via the Walktrap and Louvain methods. Using the numerical solution of these equations, we analyze important measures of dynamical complexity, namely synchronicity, the largest Lyapunov exponent, and the ΦAR auto-regressive integrated information theory measure. We show that ΦAR provides a useful measure of the information contained in the C.elegans brain dynamic network. Our analysis reveals that the C.elegans brain dynamic network generates more information than the sum of its constituent parts, and that it attains higher levels of integrated information for couplings at which either all its communities are highly synchronized, or there is a mixed state of highly synchronized and desynchronized communities.
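The node model used in such studies can be reproduced directly. Below is a single uncoupled Hindmarsh-Rose neuron integrated with SciPy; the parameter values (I = 3.25, r = 0.006, s = 4, x_rest = -1.6) are standard bursting-regime choices and the initial condition is illustrative, not the paper's coupled six-community network.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hindmarsh-Rose neuron: fast variables x (membrane potential) and y
# (recovery), slow adaptation variable z.
def hindmarsh_rose(t, u, I=3.25, r=0.006, s=4.0, x_rest=-1.6):
    x, y, z = u
    dx = y + 3.0 * x**2 - x**3 - z + I
    dy = 1.0 - 5.0 * x**2 - y
    dz = r * (s * (x - x_rest) - z)
    return [dx, dy, dz]

sol = solve_ivp(hindmarsh_rose, (0.0, 1000.0), [-1.6, -10.0, 2.0],
                max_step=0.05)
x = sol.y[0]

# Count upward threshold crossings to confirm spiking/bursting activity.
crossings = np.sum((x[:-1] < 1.0) & (x[1:] >= 1.0))
```

At these parameter values the membrane variable produces repeated bursts of spikes, the behavior whose network-level synchronization the paper quantifies.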
Xu, Bin; Yang, Chenguang; Pan, Yongping
2015-10-01
This paper studies both indirect and direct global neural control of strict-feedback systems in the presence of unknown dynamics, using the dynamic surface control (DSC) technique in a novel manner. A new switching mechanism is designed to combine an adaptive neural controller in the neural approximation domain, together with the robust controller that pulls the transient states back into the neural approximation domain from the outside. In comparison with the conventional control techniques, which could only achieve semiglobally uniformly ultimately bounded stability, the proposed control scheme guarantees all the signals in the closed-loop system are globally uniformly ultimately bounded, such that the conventional constraints on initial conditions of the neural control system can be relaxed. The simulation studies of hypersonic flight vehicle (HFV) are performed to demonstrate the effectiveness of the proposed global neural DSC design. PMID:26259222
Gradient Learning in Spiking Neural Networks by Dynamic Perturbation of Conductances
Fiete, Ila R.; Seung, H. Sebastian
2006-07-28
We present a method of estimating the gradient of an objective function with respect to the synaptic weights of a spiking neural network. The method works by measuring the fluctuations in the objective function in response to dynamic perturbation of the membrane conductances of the neurons. It is compatible with recurrent networks of conductance-based model neurons with dynamic synapses. The method can be interpreted as a biologically plausible synaptic learning rule, if the dynamic perturbations are generated by a special class of 'empiric' synapses driven by random spike trains from an external source.
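The perturbation-based gradient estimate can be illustrated on a static objective: correlate random parameter perturbations with the induced change in the objective and average over trials. The quadratic objective and all sizes below are illustrative stand-ins, not a conductance-based spiking network.

```python
import numpy as np

rng = np.random.default_rng(4)

# Objective whose gradient we estimate (stand-in for network performance
# as a function of synaptic weights).
A = np.diag([1.0, 2.0, 3.0])
def f(w):
    return 0.5 * w @ A @ w

w = np.array([1.0, -1.0, 0.5])
true_grad = A @ w

# Perturbation estimate: E[ (f(w + sigma*xi) - f(w)) / sigma * xi ] -> grad f
# for small sigma and standard-normal perturbations xi.
sigma, trials = 1e-3, 20000
f0 = f(w)
g_est = np.zeros_like(w)
for _ in range(trials):
    xi = rng.normal(size=w.size)
    g_est += (f(w + sigma * xi) - f0) / sigma * xi
g_est /= trials
```

The estimator is unbiased to first order in sigma; its variance falls as 1/trials, which is why biological implementations must average fluctuations over many perturbation episodes.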
Neural Network Assisted Inverse Dynamic Guidance for Terminally Constrained Entry Flight
Chen, Wanchun
2014-01-01
This paper presents a neural network assisted entry guidance law that is designed by applying Bézier approximation. It is shown that a fully constrained approximation of a reference trajectory can be made by using the Bézier curve. Applying this approximation, an inverse dynamic system for an entry flight is solved to generate the guidance command. The guidance solution thus obtained ensures the terminal constraints on position, flight path, and azimuth angle. In order to ensure the terminal velocity constraint, a prediction of the terminal velocity is required, based on which the approximated Bézier curve is adjusted. An artificial neural network is used for this prediction of the terminal velocity. The method enables faster implementation in achieving fully constrained entry flight. Results from simulations indicate improved performance of the neural network assisted method. The scheme is a promising basis for further research on automated onboard control of terminal velocity for both reentry and terminal guidance laws. PMID:24723821
Neural network assisted inverse dynamic guidance for terminally constrained entry flight.
Zhou, Hao; Rahman, Tawfiqur; Chen, Wanchun
2014-01-01
This paper presents a neural network assisted entry guidance law that is designed by applying Bézier approximation. It is shown that a fully constrained approximation of a reference trajectory can be made by using the Bézier curve. Applying this approximation, an inverse dynamic system for an entry flight is solved to generate the guidance command. The guidance solution thus obtained ensures the terminal constraints on position, flight path, and azimuth angle. In order to ensure the terminal velocity constraint, a prediction of the terminal velocity is required, based on which the approximated Bézier curve is adjusted. An artificial neural network is used for this prediction of the terminal velocity. The method enables faster implementation in achieving fully constrained entry flight. Results from simulations indicate improved performance of the neural network assisted method. The scheme is a promising basis for further research on automated onboard control of terminal velocity for both reentry and terminal guidance laws.
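The constraint-enforcing property of the Bézier approximation can be seen in a small sketch: for a cubic Bézier curve, prescribing the two endpoints and the two end tangents (stand-ins for terminal position, flight-path, and azimuth constraints) fixes all four control points. The numerical values below are illustrative, not flight data.

```python
import numpy as np

# Endpoint positions and end tangents of the reference profile.
p0, p3 = np.array([0.0, 0.0]), np.array([10.0, 2.0])
d0, d3 = np.array([1.0, 0.5]), np.array([1.0, -0.2])

p1 = p0 + d0 / 3.0          # from B'(0) = 3 (p1 - p0)
p2 = p3 - d3 / 3.0          # from B'(1) = 3 (p3 - p2)

def bezier(t):
    t = np.asarray(t, dtype=float)[..., None]
    return ((1 - t)**3 * p0 + 3 * (1 - t)**2 * t * p1
            + 3 * (1 - t) * t**2 * p2 + t**3 * p3)

def bezier_deriv(t):
    t = np.asarray(t, dtype=float)[..., None]
    return (3 * (1 - t)**2 * (p1 - p0) + 6 * (1 - t) * t * (p2 - p1)
            + 3 * t**2 * (p3 - p2))

curve = bezier(np.linspace(0.0, 1.0, 201))   # sampled reference trajectory
```

By construction the curve meets both endpoints and both end tangents exactly, which is what makes it a convenient fully constrained reference for the inverse-dynamics step.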
Neural dynamics of autistic behaviors: cognitive, emotional, and timing substrates.
Grossberg, Stephen; Seidman, Don
2006-07-01
What brain mechanisms underlie autism, and how do they give rise to autistic behavioral symptoms? This article describes a neural model, called the Imbalanced Spectrally Timed Adaptive Resonance Theory (iSTART) model, that proposes how cognitive, emotional, timing, and motor processes that involve brain regions such as the prefrontal and temporal cortex, amygdala, hippocampus, and cerebellum may interact to create and perpetuate autistic symptoms. These model processes were originally developed to explain data concerning how the brain controls normal behaviors. The iSTART model shows how autistic behavioral symptoms may arise from prescribed breakdowns in these brain processes, notably a combination of underaroused emotional depression in the amygdala and related affective brain regions, learning of hyperspecific recognition categories in the temporal and prefrontal cortices, and breakdowns of adaptively timed attentional and motor circuits in the hippocampal system and cerebellum. The model clarifies how malfunctions in a subset of these mechanisms can, through a systemwide vicious circle of environmentally mediated feedback, cause and maintain problems with them all.
Dynamical Field Line Connectivity in Magnetic Turbulence
NASA Astrophysics Data System (ADS)
Ruffolo, D. J.; Matthaeus, W. H.
2014-12-01
Point-to-point magnetic connectivity has a stochastic character whenever magnetic fluctuations cause a field line random walk, with observable manifestations such as dropouts of solar energetic particles and upstream events at Earth's bow shock. This can also change due to dynamical activity. Comparing the instantaneous magnetic connectivity to the same point at two different times, we provide a nonperturbative analytic theory for the ensemble average perpendicular displacement of the magnetic field line, given the power spectrum of magnetic fluctuations. For simplicity, the theory is developed in the context of transverse turbulence, and is numerically evaluated for two specific models: reduced magnetohydrodynamics (RMHD), a quasi-two-dimensional model of anisotropic turbulence that is applicable to low-beta plasmas, and two-dimensional (2D) plus slab turbulence, which is a good parameterization for solar wind turbulence. We take into account the dynamical decorrelation of magnetic fluctuations due to wave propagation, nonlinear distortion, random sweeping, and convection by a bulk wind flow relative to the observer. The mean squared time-differenced displacement increases with time and with parallel distance, becoming twice the field line random walk displacement at long distances and/or times, corresponding to a pair of uncorrelated random walks. These results are relevant to a variety of astrophysical processes, such as electron transport and heating patterns in coronal loops and the solar transition region, changing magnetic connection to particle sources near the Sun or at a planetary bow shock, and thickening of coronal hole boundaries. Partially supported by the Thailand Research Fund, the US NSF (AGS-1063439 and SHINE AGS-1156094), NASA (Heliophysics Theory NNX11AJ44G), and by the Solar Probe Plus Project through the ISIS Theory team.
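The long-time limit quoted above, where the time-differenced displacement becomes twice the field line random walk displacement, can be checked with a toy Monte Carlo. This is not the nonperturbative theory: the transverse field is modeled as white noise along z, and the decorrelation between the two observation times is collapsed into a single correlation coefficient rho, both simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Field lines obey dx/dz = b_x(z)/B0; the field at the second time is a
# correlated copy of the field at the first time (correlation rho).
n_lines, n_steps, dz, B0 = 4000, 500, 1.0, 1.0

def displacements(rho):
    b1 = rng.normal(size=(n_lines, n_steps))
    b2 = rho * b1 + np.sqrt(1 - rho**2) * rng.normal(size=(n_lines, n_steps))
    x1 = np.sum(b1 / B0 * dz, axis=1)   # displacement at time 1
    x2 = np.sum(b2 / B0 * dz, axis=1)   # displacement at time 2
    return np.mean(x1**2), np.mean((x1 - x2)**2)

msd, diff_msd = displacements(rho=0.0)  # fully decorrelated fields
ratio = diff_msd / msd                  # expect -> 2 (uncorrelated walks)
```

More generally this toy model gives a difference MSD of 2(1 - rho) times the single-walk MSD, interpolating between identical connectivity (rho = 1) and the uncorrelated-pair limit (rho = 0).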
Dynamic expression of LIM cofactors in the developing mouse neural tube.
Ostendorff, Heather P; Tursun, Baris; Cornils, Kerstin; Schlüter, Anne; Drung, Alexander; Güngör, Cenap; Bach, Ingolf
2006-03-01
The developmental regulation of LIM homeodomain transcription factors (LIM-HD) by the LIM domain-binding cofactors CLIM/Ldb/NLI and RLIM has been demonstrated. Whereas CLIM cofactors are thought to be required for at least some of the in vivo functions of LIM-HD proteins, the ubiquitin ligase RLIM functions as a negative regulator by its ability to target CLIM cofactors for proteasomal degradation. In this report, we have investigated and compared the protein expression of both factors in the developing mouse neural tube. We co-localize both proteins in many tissues and, although widely expressed, we detect high levels of both cofactors in specific neural tube regions, e.g., in the ventral neural tube, where motor neurons reside. The mostly ubiquitous distribution of RLIM- and CLIM-encoding mRNA differs from the more specific expression of both cofactors at the protein level, indicating post-transcriptional regulation. Furthermore, we show that both cofactors not only co-localize with each other but also with Isl and Lhx3 LIM-HD proteins in developing ventral neural tube neurons. Our results demonstrate the dynamic expression of cofactors participating in the regulation of LIM-HD proteins during the development of the neural tube in mice and suggest additional post-transcriptional regulation in the nuclear LIM-HD protein network.
Knudstrup, Scott; Zochowski, Michal; Booth, Victoria
2016-05-01
The characteristics of neural network activity depend on intrinsic neural properties and synaptic connectivity in the network. In brain networks, both of these properties are critically affected by the type and levels of neuromodulators present. The expression of many of the most powerful neuromodulators, including acetylcholine (ACh), varies tonically and phasically with behavioural state, leading to dynamic, heterogeneous changes in intrinsic neural properties and synaptic connectivity properties. Namely, ACh significantly alters neural firing properties as measured by the phase response curve in a manner that has been shown to alter the propensity for network synchronization. The aim of this simulation study was to build an understanding of how heterogeneity in cholinergic modulation of neural firing properties and heterogeneity in synaptic connectivity affect the initiation and maintenance of synchronous network bursting in excitatory networks. We show that cells that display different levels of ACh modulation have differential roles in generating network activity: weakly modulated cells are necessary for burst initiation and provide synchronizing drive to the rest of the network, whereas strongly modulated cells provide the overall activity level necessary to sustain burst firing. By applying several quantitative measures of network activity, we further show that the existence of network bursting and its characteristics, such as burst duration and intraburst synchrony, are dependent on the fraction of cell types providing the synaptic connections in the network. These results suggest mechanisms underlying ACh modulation of brain oscillations and the modulation of seizure activity during sleep states.
Nie, Xiaobing; Zheng, Wei Xing
2016-03-01
This paper addresses the problem of coexistence and dynamical behaviors of multiple equilibria for competitive neural networks. First, a general class of discontinuous nonmonotonic piecewise linear activation functions is introduced for competitive neural networks. Then, based on the fixed point theorem and the theory of strict diagonal dominance matrices, it is shown that under some conditions, such n-neuron competitive neural networks can have 5^n equilibria, among which 3^n equilibria are locally stable and the others are unstable. More importantly, it is revealed that the neural networks with the discontinuous activation functions introduced in this paper can have both more total equilibria and more locally stable equilibria than the ones with other activation functions, such as the continuous Mexican-hat-type activation function and the discontinuous two-level activation function. Furthermore, the 3^n locally stable equilibria given in this paper are located not only in saturated regions, but also in unsaturated regions, which is different from the existing results on multistability of neural networks with multiple level activation functions. A simulation example is provided to illustrate and validate the theoretical findings.
Knudstrup, Scott; Zochowski, Michal; Booth, Victoria
2016-01-01
The characteristics of neural network activity depend on intrinsic neural properties and synaptic connectivity in the network. In brain networks, both of these properties are critically affected by the type and levels of neuromodulators present. The expression of many of the most powerful neuromodulators, including acetylcholine (ACh), varies tonically and phasically with behavioural state, leading to dynamic, heterogeneous changes in intrinsic neural properties and synaptic connectivity properties. Namely, ACh significantly alters neural firing properties as measured by the phase response curve in a manner that has been shown to alter the propensity for network synchronization. The aim of this simulation study was to build an understanding of how heterogeneity in cholinergic modulation of neural firing properties and heterogeneity in synaptic connectivity affect the initiation and maintenance of synchronous network bursting in excitatory networks. We show that cells that display different levels of ACh modulation have differential roles in generating network activity: weakly modulated cells are necessary for burst initiation and provide synchronizing drive to the rest of the network, whereas strongly modulated cells provide the overall activity level necessary to sustain burst firing. By applying several quantitative measures of network activity, we further show that the existence of network bursting and its characteristics, such as burst duration and intraburst synchrony, are dependent on the fraction of cell types providing the synaptic connections in the network. These results suggest mechanisms underlying ACh modulation of brain oscillations and the modulation of seizure activity during sleep states. PMID:26869313
Dynamic nuclear polarization at high magnetic fields
Maly, Thorsten; Debelouchina, Galia T.; Bajaj, Vikram S.; Hu, Kan-Nian; Joo, Chan-Gyu; Mak–Jurkauskas, Melody L.; Sirigiri, Jagadishwar R.; van der Wel, Patrick C. A.; Herzfeld, Judith; Temkin, Richard J.; Griffin, Robert G.
2009-01-01
Dynamic nuclear polarization (DNP) is a method that permits NMR signal intensities of solids and liquids to be enhanced significantly, and is therefore potentially an important tool in structural and mechanistic studies of biologically relevant molecules. During a DNP experiment, the large polarization of an exogenous or endogenous unpaired electron is transferred to the nuclei of interest (I) by microwave (μw) irradiation of the sample. The maximum theoretical enhancement achievable is given by the ratio of the gyromagnetic ratios (γe/γI), being ∼660 for protons. In the early 1950s, the DNP phenomenon was demonstrated experimentally, and intensively investigated in the following four decades, primarily at low magnetic fields. This review focuses on recent developments in the field of DNP with a special emphasis on work done at high magnetic fields (≥5 T), the regime where contemporary NMR experiments are performed. After a brief historical survey, we present a review of the classical continuous wave (cw) DNP mechanisms—the Overhauser effect, the solid effect, the cross effect, and thermal mixing. A special section is devoted to the theory of coherent polarization transfer mechanisms, since they are potentially more efficient at high fields than classical polarization schemes. The implementation of DNP at high magnetic fields has required the development and improvement of new and existing instrumentation. Therefore, we also review some recent developments in μw and probe technology, followed by an overview of DNP applications in biological solids and liquids. Finally, we outline some possible areas for future developments. PMID:18266416
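The ∼660 figure follows directly from the gyromagnetic ratios; a one-line check, using rounded literature values for the electron and proton ratios (expressed as |γ|/2π in Hz/T):

```python
# Maximum theoretical DNP enhancement is gamma_e / gamma_I;
# for 1H nuclei this gives the ~660 quoted in the abstract.
gamma_e = 28.024951e9    # Hz/T, electron (|gamma|/2pi, rounded)
gamma_1H = 42.577478e6   # Hz/T, proton  (gamma/2pi, rounded)

enhancement = gamma_e / gamma_1H   # ~658
```

In practice the realized enhancement is far below this bound, which is why the efficiency of the polarization transfer mechanism matters so much at high field.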
Dayanidhi, Sudarshan; Kutch, Jason J.
2013-01-01
Developmental improvements in dynamic manipulation abilities are typically attributed to neural maturation, such as myelination of corticospinal pathways, neuronal pruning, and synaptogenesis. However the contributions from changes in the peripheral motor system are less well understood. Here we investigated whether there are developmental changes in muscle activation-contraction dynamics and whether these changes contribute to improvements in dynamic manipulation in humans. We compared pinch strength, dynamic manipulation ability, and contraction time of the first dorsal interosseous muscle in typically developing preadolescent, adolescent, and young adults. Both strength and dynamic manipulation ability increased with age (p < 0.0001 and p < 0.00001, respectively). Surprisingly, adults had a 33% lower muscle contraction time compared with preadolescents (p < 0.01), and contraction time showed a significant (p < 0.005) association with dynamic manipulation abilities. Whereas decreases in muscle contraction time during development have been reported in the animal literature, our finding, to our knowledge, is the first report of this phenomenon in humans and the first finding of its association with manipulation. Consequently, the changes in the muscle contractile properties could be an important complement to neural maturation in the development of dynamic manipulation. These findings have important implications for understanding central and peripheral contributors to deficits in manipulation in atypical development, such as in children with cerebral palsy. PMID:24048835
Robust Fault Detection of Wind Energy Conversion Systems Based on Dynamic Neural Networks
Talebi, Nasser; Sadrnia, Mohammad Ali; Darabi, Ahmad
2014-01-01
Occurrence of faults in wind energy conversion systems (WECSs) is inevitable. In order to detect the occurring faults at the appropriate time, avoid heavy economic losses, ensure safe system operation, prevent damage to adjacent relevant systems, and facilitate timely repair of failed components, a fault detection system (FDS) is required. Recurrent neural networks (RNNs) have gained a noticeable position in FDSs and they have been widely used for modeling of complex dynamical systems. One method for designing an FDS is to prepare a dynamic neural model emulating the normal system behavior. By comparing the outputs of the real system and the neural model, incidence of faults can be identified. In this paper, by utilizing a comprehensive dynamic model which contains both mechanical and electrical components of the WECS, an FDS is suggested using dynamic RNNs. The presented FDS detects faults of the generator's angular velocity sensor, pitch angle sensors, and pitch actuators. Robustness of the FDS is achieved by employing an adaptive threshold. Simulation results show that the proposed scheme is capable of detecting faults promptly and has very low false-alarm and missed-alarm rates. PMID:24744774
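The residual-plus-adaptive-threshold logic can be sketched simply: compare the sensor against a model of normal behavior and raise an alarm when the residual exceeds a rolling mean plus k standard deviations of recent residuals. The plant model, noise level, threshold gain, and injected fault below are illustrative assumptions, not the paper's WECS model or RNN.

```python
import numpy as np

rng = np.random.default_rng(6)

n = 600
model_out = np.sin(0.05 * np.arange(n))          # model of normal behaviour
sensor = model_out + rng.normal(0.0, 0.05, n)    # healthy sensor = model + noise
sensor[400:] += 0.8                              # sensor fault injected at sample 400

residual = np.abs(sensor - model_out)
k, win = 6.0, 100
alarms = np.zeros(n, dtype=bool)
for t in range(win, n):
    r = residual[t - win:t]                      # recent residual statistics
    alarms[t] = residual[t] > r.mean() + k * r.std()

first_alarm = int(np.argmax(alarms))
```

Because the threshold adapts to the recent residual level, it tolerates slow drifts in noise while still firing promptly on the abrupt fault.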
Robust fault detection of wind energy conversion systems based on dynamic neural networks.
Talebi, Nasser; Sadrnia, Mohammad Ali; Darabi, Ahmad
2014-01-01
Occurrence of faults in wind energy conversion systems (WECSs) is inevitable. In order to detect the occurring faults at the appropriate time, avoid heavy economic losses, ensure safe system operation, prevent damage to adjacent relevant systems, and facilitate timely repair of failed components, a fault detection system (FDS) is required. Recurrent neural networks (RNNs) have gained a noticeable position in FDSs and they have been widely used for modeling of complex dynamical systems. One method for designing an FDS is to prepare a dynamic neural model emulating the normal system behavior. By comparing the outputs of the real system and the neural model, incidence of faults can be identified. In this paper, by utilizing a comprehensive dynamic model which contains both mechanical and electrical components of the WECS, an FDS is suggested using dynamic RNNs. The presented FDS detects faults of the generator's angular velocity sensor, pitch angle sensors, and pitch actuators. Robustness of the FDS is achieved by employing an adaptive threshold. Simulation results show that the proposed scheme is capable of detecting faults promptly and has very low false-alarm and missed-alarm rates.
Magnetic field perturbation of neural recording and stimulating microelectrodes
NASA Astrophysics Data System (ADS)
Martinez-Santiesteban, Francisco M.; Swanson, Scott D.; Noll, Douglas C.; Anderson, David J.
2007-04-01
To improve the overall temporal and spatial resolution of brain mapping techniques in animal models, some attempts have been reported to combine electrophysiological methods with functional magnetic resonance imaging (fMRI). However, little attention has been paid to the image artefacts produced by the microelectrodes, which compromise the anatomical or functional information of those studies. This work presents a group of simulations and MR images that show the limitations of wire microelectrodes and the potential advantages of silicon technology, in terms of image quality, in MRI environments. Magnetic field perturbations are calculated using a Fourier-based method for platinum (Pt) and tungsten (W) microwires as well as two different silicon technologies. We conclude that image artefacts produced by microelectrodes depend strongly not only on the magnetic susceptibility of the materials used but also on the size, shape and orientation of the electrodes with respect to the main magnetic field. In addition, silicon microelectrodes present better MRI characteristics than metallic microelectrodes. However, metallization layers added to silicon materials can adversely affect the quality of MR images. Therefore, only those silicon microelectrodes that minimize the amount of metallic material can be considered MR-compatible and thus suitable for possible simultaneous fMRI and electrophysiological studies. High resolution gradient echo images acquired at 2 T (TR/TE = 100/15 ms, voxel size = 100 × 100 × 100 µm3) of platinum-iridium (Pt-Ir, 90%-10%) and tungsten microwires show a complete signal loss that covers a volume significantly larger than the actual volume occupied by the microelectrodes: roughly 400 times larger for Pt-Ir and 180 times for W, at the tip of the microelectrodes. Similar MR images of a single-shank silicon microelectrode only produce a partial volume effect on the voxels occupied by the probe, with less than 50% signal loss.
Dynamic functional integration of distinct neural empathy systems.
Shamay-Tsoory, Simone G
2014-01-01
Recent evidence points to two separate systems for empathy: a vicarious emotional sharing system that supports our ability to share emotions and mental states, and a cognitive system that involves cognitive understanding of the perspective of others. Several recent models offer new evidence regarding the brain regions involved in these systems, but no study to date has examined how regions within each system dynamically interact. The study by Raz et al. in this issue of Social Cognitive and Affective Neuroscience is among the first to use a novel approach of functional magnetic resonance imaging analysis of fluctuations in network cohesion while an individual is experiencing empathy. Their results substantiate the approach positing two empathy mechanisms and, more broadly, demonstrate how dynamic analysis of emotions can further our understanding of social behavior. PMID:23956080
Dynamic modeling of physical phenomena for PRAs using neural networks
Benjamin, A.S.; Brown, N.N.; Paez, T.L.
1998-04-01
In most probabilistic risk assessments, there is a set of accident scenarios that involves the physical responses of a system to environmental challenges. Examples include the effects of earthquakes and fires on the operability of a nuclear reactor safety system, the effects of fires and impacts on the safety integrity of a nuclear weapon, and the effects of human intrusions on the transport of radionuclides from an underground waste facility. The physical responses of the system to these challenges can be quite complex, and their evaluation may require the use of detailed computer codes that are very time consuming to execute. Yet, to perform meaningful probabilistic analyses, it is necessary to evaluate the responses for a large number of variations in the input parameters that describe the initial state of the system, the environments to which it is exposed, and the effects of human interaction. Because the uncertainties of the system response may be very large, it may also be necessary to perform these evaluations for various values of modeling parameters that have high uncertainties, such as material stiffnesses, surface emissivities, and ground permeabilities. The authors have been exploring the use of artificial neural networks (ANNs) as a means for estimating the physical responses of complex systems to phenomenological events such as those cited above. These networks are designed as mathematical constructs with adjustable parameters that can be trained so that the results obtained from the networks will simulate the results obtained from the detailed computer codes. The intent is for the networks to provide an adequate simulation of the detailed codes over a significant range of variables while requiring only a small fraction of the computer processing time required by the detailed codes. This enables the authors to integrate the physical response analyses into the probabilistic models in order to estimate the probabilities of various responses.
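A minimal sketch of the surrogate idea in this abstract, under my own assumptions (not the authors' code): a tiny one-hidden-layer network is trained on a handful of "expensive simulator runs" so that it can then predict the response cheaply for new inputs. An analytic function stands in for the detailed physics code.

```python
import numpy as np

rng = np.random.default_rng(1)

def expensive_code(x):            # placeholder for the detailed physics code
    return np.sin(3 * x) + 0.5 * x

# A few "simulator runs" serve as training data for the surrogate.
x_train = np.linspace(-1, 1, 40)[:, None]
y_train = expensive_code(x_train)

# One hidden layer of tanh units, trained by full-batch gradient descent.
H, lr = 20, 0.05
W1 = rng.standard_normal((1, H)); b1 = np.zeros(H)
W2 = 0.1 * rng.standard_normal((H, 1)); b2 = np.zeros(1)
for _ in range(20000):
    h = np.tanh(x_train @ W1 + b1)
    err = h @ W2 + b2 - y_train
    gW2 = h.T @ err / len(x_train); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = x_train.T @ dh / len(x_train); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

# The trained network now answers queries at a tiny fraction of the cost.
x_test = np.array([[0.3]])
approx = (np.tanh(x_test @ W1 + b1) @ W2 + b2).item()
print(abs(approx - expensive_code(x_test).item()))
```

In a real PRA setting the inputs would be the uncertain parameters (stiffnesses, emissivities, permeabilities), and the trained surrogate would be sampled thousands of times inside the probabilistic model.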
Neural network architecture for cognitive navigation in dynamic environments.
Villacorta-Atienza, José Antonio; Makarov, Valeri A
2013-12-01
Navigation in time-evolving environments with moving targets and obstacles requires cognitive abilities widely demonstrated by even the simplest animals. However, it remains a long-standing challenge for artificial agents. Cognitive autonomous robots coping with this problem must solve two essential tasks: 1) understand the environment in terms of what may happen and how I can deal with it, and 2) learn successful experiences for further use in an automatic, subconscious way. The recently introduced concept of compact internal representation (CIR) provides the ground for both tasks. CIR is a specific cognitive map that compacts time-evolving situations into static structures containing the information necessary for navigation. It belongs to the class of global approaches, i.e., it finds trajectories to a target when they exist but also detects situations when no solution can be found. Here we extend the concept to situations with mobile targets. Then, using CIR as a core, we propose a closed-loop neural network architecture consisting of conscious and subconscious pathways for efficient decision-making. The conscious pathway provides solutions to novel situations if the default subconscious pathway fails to guide the agent to a target. Employing experiments with roving robots and numerical simulations, we show that the proposed architecture provides the robot with cognitive abilities and enables reliable and flexible navigation in realistic time-evolving environments. We prove that the subconscious pathway is robust against uncertainty in the sensory information. Thus, if a novel situation is similar but not identical to previous experience (because of, e.g., noisy perception), the subconscious pathway is able to provide an effective solution. PMID:24805224
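A toy nod to the "global" property described in this abstract, as my own sketch rather than CIR itself: a planner over a static snapshot of the environment either returns a trajectory to the target or reports that no trajectory exists.

```python
from collections import deque

def plan(grid, start, goal):
    """BFS on a 4-connected grid; '#' cells are obstacles.

    Returns a path (list of cells) when one exists, else None -- the
    no-solution case that a global approach must detect.
    """
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    q = deque([start])
    while q:
        cell = q.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] != '#' and nxt not in prev):
                prev[nxt] = cell
                q.append(nxt)
    return None

open_world = ["...", ".#.", "..."]
walled = ["..#", "###", "..."]    # goal row sealed off by obstacles
print(plan(open_world, (0, 0), (2, 2)) is not None,
      plan(walled, (0, 0), (2, 2)) is None)
```

CIR goes well beyond this by compacting the *time-evolving* scene (moving obstacles and targets) into one static map before such a search is run; the sketch only illustrates the found-or-provably-absent contract.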
Dynamical analysis in scalar field cosmology
NASA Astrophysics Data System (ADS)
Paliathanasis, Andronikos; Tsamparlis, Michael; Basilakos, Spyros; Barrow, John D.
2015-06-01
We give a general method to find exact cosmological solutions for scalar-field dark energy in the presence of perfect fluids. We use the existence of invariant transformations for the Wheeler-DeWitt (WdW) equation. We show that the existence of a point transformation under which the WdW equation is invariant is equivalent to the existence of conservation laws for the field equations, which indicates the existence of analytical solutions. We extend previous work by providing exact solutions for the Hubble parameter and the effective dark-energy equation-of-state parameter for cosmologies containing a combination of a perfect fluid and a scalar field whose self-interaction potential is a power of hyperbolic functions. We find explicit solutions when the perfect fluid is radiation or cold dark matter and determine the effects of nonzero spatial curvature. Using the Planck 2015 data, we determine the evolution of the effective equation of state of the dark energy. Finally, we study the global dynamics using dimensionless variables. We find that if the current cosmological model is Liouville integrable (admits conservation laws), then there is a unique stable point which describes the de Sitter phase of the universe.
Field-driven dynamics of nematic microcapillaries
NASA Astrophysics Data System (ADS)
Khayyatzadeh, Pouya; Fu, Fred; Abukhdeir, Nasser Mohieddin
2015-12-01
Polymer-dispersed liquid-crystal (PDLC) composites have long been a focus of study for their unique electro-optical properties, which have resulted in various applications such as switchable (transparent or translucent) windows. These composites are manufactured using desirable "bottom-up" techniques, such as phase separation of a liquid-crystal-polymer mixture, which enable production of PDLC films at very large scales. LC domains within PDLCs are typically spheroidal, as opposed to rectangular for an LCD panel, and thus exhibit substantially different behavior in the presence of an external field. The fundamental difference between spheroidal and rectangular nematic domains is that the former results in the presence of nanoscale orientational defects in LC order while the latter does not. Progress in the development and optimization of PDLC electro-optical properties has been relatively slow due to this increased complexity. In this work, continuum simulations are performed in order to capture the complex formation and electric-field-driven switching dynamics of approximations of PDLC domains. Using a simplified elliptic cylinder (microcapillary) geometry as an approximation of spheroidal PDLC domains, the effects of geometry (aspect ratio), surface anchoring, and external field strength are studied through the use of the Landau-de Gennes model of the nematic LC phase.
Altered temporal dynamics of neural adaptation in the aging human auditory cortex.
Herrmann, Björn; Henry, Molly J; Johnsrude, Ingrid S; Obleser, Jonas
2016-09-01
Neural response adaptation plays an important role in perception and cognition. Here, we used electroencephalography to investigate how aging affects the temporal dynamics of neural adaptation in human auditory cortex. Younger (18-31 years) and older (51-70 years) normal hearing adults listened to tone sequences with varying onset-to-onset intervals. Our results show long-lasting neural adaptation such that the response to a particular tone is a nonlinear function of the extended temporal history of sound events. Most important, aging is associated with multiple changes in auditory cortex; older adults exhibit larger and less variable response magnitudes, a larger dynamic response range, and a reduced sensitivity to temporal context. Computational modeling suggests that reduced adaptation recovery times underlie these changes in the aging auditory cortex and that the extended temporal stimulation has less influence on the neural response to the current sound in older compared with younger individuals. Our human electroencephalography results critically narrow the gap to animal electrophysiology work suggesting a compensatory release from cortical inhibition accompanying hearing loss and aging. PMID:27459921
Lin, Yang-Yin; Chang, Jyh-Yeong; Lin, Chin-Teng
2013-02-01
This paper presents a novel recurrent fuzzy neural network, called an interactively recurrent self-evolving fuzzy neural network (IRSFNN), for the prediction and identification of dynamic systems. The recurrent structure in an IRSFNN is formed by external loops and internal feedback, feeding the firing strength of each rule to the other rules and to itself. The consequent part of the IRSFNN is either a Takagi-Sugeno-Kang (TSK) type or a functional-link-based type. The proposed IRSFNN employs a functional link neural network (FLNN) in the consequent part of the fuzzy rules to improve the mapping ability. Unlike in a TSK-type fuzzy neural network, the FLNN in the consequent part is a nonlinear function of the input variables. IRSFNN learning starts with an empty rule base, and all rules are generated and learned online through simultaneous structure and parameter learning. An online clustering algorithm generates the fuzzy rules. The consequent parameters are updated by a variable-dimensional Kalman filter algorithm, while the premise and recurrent parameters are learned through gradient descent. We test the IRSFNN on the prediction and identification of dynamic plants and compare it to other well-known recurrent FNNs. The proposed model obtains enhanced performance results.
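A bare-bones sketch of the TSK building block this abstract refers to, without the IRSFNN's recurrent feedback, FLNN consequents, or online rule growth (all parameters below are invented for illustration): Gaussian antecedent memberships give each rule a firing strength, and the output is the firing-strength-weighted average of linear rule consequents.

```python
import numpy as np

# Three hypothetical rules, each with one Gaussian antecedent set and a
# linear TSK consequent y_i = a_i * x + c_i.
centres = np.array([-1.0, 0.0, 1.0])
widths = np.array([0.7, 0.7, 0.7])
a = np.array([0.5, 2.0, 0.5])
c = np.array([-1.0, 0.0, 1.0])

def tsk(x):
    """Zero-order normalisation of first-order TSK rule outputs."""
    firing = np.exp(-((x - centres) ** 2) / (2 * widths ** 2))
    return float(np.sum(firing * (a * x + c)) / np.sum(firing))

print(tsk(0.0), tsk(1.0))
```

In the IRSFNN each firing strength would additionally be fed back to the other rules and to itself on the next time step, which is what gives the network its temporal memory.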
Dynamic neural networking as a basis for plasticity in the control of heart rate.
Kember, G; Armour, J A; Zamir, M
2013-01-21
A model is proposed in which the relationship between individual neurons within a neural network is dynamically changing to the effect of providing a measure of "plasticity" in the control of heart rate. The neural network on which the model is based consists of three populations of neurons residing in the central nervous system, the intrathoracic extracardiac nervous system, and the intrinsic cardiac nervous system. This hierarchy of neural centers is used to challenge the classical view that the control of heart rate, a key clinical index, resides entirely in central neuronal command (spinal cord, medulla oblongata, and higher centers). Our results indicate that dynamic networking allows for the possibility of an interplay among the three populations of neurons to the effect of altering the order of control of heart rate among them. This interplay among the three levels of control allows for different neural pathways for the control of heart rate to emerge under different blood flow demands or disease conditions and, as such, it has significant clinical implications because current understanding and treatment of heart rate anomalies are based largely on a single level of control and on neurons acting in unison as a single entity rather than individually within a (plastically) interconnected network. PMID:23041448
Dynamic recurrent neural networks for stable adaptive control of wing rock motion
NASA Astrophysics Data System (ADS)
Kooi, Steven Boon-Lam
Wing rock is a self-sustaining limit cycle oscillation (LCO) which occurs as the result of nonlinear coupling between the dynamic response of the aircraft and the unsteady aerodynamic forces. In this thesis, a dynamic recurrent RBF (radial basis function) network control methodology is proposed to control the wing rock motion. Concepts based on the properties of the Preisach hysteresis model are used in the design of the dynamic neural networks: the structure and memory mechanism of the Preisach model are analogous to the parallel connectivity and memory formation in RBF neural networks. The proposed dynamic recurrent neural network can add or prune neurons in the hidden layer according to growth criteria based on the ensemble-average memory formation of the Preisach model, while the recurrent feature of the RBF network handles the dynamic nonlinearities and provides the temporal memory of the hysteresis model. The control of wing rock is a tracking problem: the trajectory starts from non-zero initial conditions and tends to zero as time goes to infinity. In the proposed neural control structure, the recurrent dynamic RBF network performs identification in order to approximate the unknown nonlinearities of the physical system from input-output data obtained from the wing rock phenomenon. The design of the RBF networks and the network controllers is carried out in the discrete-time domain. The recurrent RBF networks employ two separate adaptation schemes: the RBF centres and widths are adjusted by an extended Kalman filter to give a minimal network size, while the outer-layer weights are updated using an algorithm derived from Lyapunov stability analysis for stable closed-loop control. The issue of the robustness of the recurrent RBF networks is also addressed. The effectiveness of the proposed dynamic recurrent neural control methodology is demonstrated through simulations to
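An illustrative sketch of the RBF approximation step only, under my own simplifications: centres are fixed on a grid and the output weights are fitted by least squares on input-output data, whereas the thesis adapts centres and widths with an extended Kalman filter and updates the weights via Lyapunov-derived laws.

```python
import numpy as np

def rbf_design(x, centres, width):
    """Gaussian RBF design matrix: one column per centre."""
    return np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2 * width ** 2))

# Toy stand-in for the unknown wing-rock nonlinearity (not from the thesis).
x = np.linspace(-2, 2, 100)
target = x ** 3 - x

# Fixed centres and width; linear output weights by least squares.
centres = np.linspace(-2, 2, 12)
Phi = rbf_design(x, centres, width=0.5)
w, *_ = np.linalg.lstsq(Phi, target, rcond=None)

fit = Phi @ w
print(float(np.max(np.abs(fit - target))))
```

The batch least-squares fit shows why a modest number of localised Gaussian units suffices; the online EKF/Lyapunov machinery in the thesis serves to do the same job recursively, with stability guarantees, while the plant is running.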
Machine Learning for Dynamical Mean Field Theory
NASA Astrophysics Data System (ADS)
Arsenault, Louis-Francois; Lopez-Bezanilla, Alejandro; von Lilienfeld, O. Anatole; Littlewood, P. B.; Millis, Andy
2014-03-01
Machine Learning (ML), an approach that infers new results from accumulated knowledge, is in use for a variety of tasks ranging from face and voice recognition to internet searching, and has recently been gaining importance in chemistry and physics. In this talk, we investigate the possibility of using ML to solve the equations of dynamical mean field theory, which otherwise require the (numerically very expensive) solution of a quantum impurity model. Our ML scheme learns the relation between two functions: the hybridization function describing the bare (local) electronic structure of a material and the self-energy describing the many-body physics. We discuss the parameterization of the two functions for the exact diagonalization solver and present examples, beginning with the Anderson impurity model with a fixed bath density of states, demonstrating the advantages and the pitfalls of the method. DOE contract DE-AC02-06CH11357.
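A schematic of the function-to-function regression idea only, with toy functionals of my own invention standing in for the actual hybridization/self-energy parameterizations: kernel ridge regression learns the map from a discretised input function to an output function from examples.

```python
import numpy as np

rng = np.random.default_rng(2)
grid = np.linspace(-5, 5, 50)

def hybridization(p):          # toy input functional of parameters p (invented)
    return p[0] * np.exp(-grid ** 2 / p[1])

def self_energy(p):            # toy target functional of the same p (invented)
    return p[0] ** 2 / (grid ** 2 + p[1])

# Training set: sampled parameter vectors and the two functions they induce.
params = rng.uniform(0.5, 2.0, size=(200, 2))
X = np.stack([hybridization(p) for p in params])
Y = np.stack([self_energy(p) for p in params])

# Gaussian kernel with width set by the median pairwise distance in the data.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
s2 = np.median(d2)

def kernel(A, B):
    return np.exp(-((A[:, None, :] - B[None, :, :]) ** 2).sum(-1) / s2)

alpha = np.linalg.solve(kernel(X, X) + 1e-6 * np.eye(len(X)), Y)

# Predict the "self-energy" for an unseen input function.
p_new = np.array([1.0, 1.0])
pred = kernel(hybridization(p_new)[None, :], X) @ alpha
err = float(np.max(np.abs(pred[0] - self_energy(p_new))))
print(err)
```

Once trained, each prediction costs one kernel evaluation against the training set, in contrast to re-solving the impurity problem for every new hybridization.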
Dynamic transitions among multiple oscillators of synchronized bursts in cultured neural networks
NASA Astrophysics Data System (ADS)
Hoan Kim, June; Heo, Ryoun; Choi, Joon Ho; Lee, Kyoung J.
2014-04-01
Synchronized neural bursts are a salient dynamic feature of biological neural networks, having important roles in brain functions. This report investigates the deterministic nature behind seemingly random temporal sequences of inter-burst intervals generated by cultured networks of cortical cells. We found that the complex sequences were an intricate patchwork of several noisy ‘burst oscillators’, whose periods covered a wide dynamic range, from a few tens of milliseconds to tens of seconds. The transition from one type of oscillator to another favored a particular passage, while the dwelling time between two neighboring transitions followed an exponential distribution showing no memory. With different amounts of bicuculline or picrotoxin application, we could also terminate the oscillators, generate new ones or tune their periods.
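A toy check of the memoryless property reported in this abstract, using my own simulated data rather than the cultured-network recordings: for exponentially distributed dwelling times, the probability of surviving another t seconds does not depend on how long the state has already lasted, and the rate is estimated by the reciprocal of the sample mean.

```python
import numpy as np

rng = np.random.default_rng(3)
dwell = rng.exponential(scale=5.0, size=200_000)  # assumed mean dwell time: 5 s

rate_hat = 1.0 / dwell.mean()                     # MLE of the transition rate

# Memorylessness: P(T > s + t | T > s) should equal P(T > t).
s, t = 4.0, 3.0
p_uncond = np.mean(dwell > t)
p_cond = np.mean(dwell[dwell > s] > s + t)
print(round(rate_hat, 3), round(p_uncond, 3), round(p_cond, 3))
```

An empirical dwell-time histogram that passes this comparison (and matches an exponential fit) is the signature of the no-memory transitions described above.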
Pattwell, Siobhan S.; Liston, Conor; Jing, Deqiang; Ninan, Ipe; Yang, Rui R.; Witztum, Jonathan; Murdock, Mitchell H.; Dincheva, Iva; Bath, Kevin G.; Casey, B. J.; Deisseroth, Karl; Lee, Francis S.
2016-01-01
Fear can be highly adaptive in promoting survival, yet it can also be detrimental when it persists long after a threat has passed. Flexibility of the fear response may be most advantageous during adolescence when animals are prone to explore novel, potentially threatening environments. Two opposing adolescent fear-related behaviours—diminished extinction of cued fear and suppressed expression of contextual fear—may serve this purpose, but the neural basis underlying these changes is unknown. Using microprisms to image prefrontal cortical spine maturation across development, we identify dynamic BLA-hippocampal-mPFC circuit reorganization associated with these behavioural shifts. Exploiting this sensitive period of neural development, we modified existing behavioural interventions in an age-specific manner to attenuate adolescent fear memories persistently into adulthood. These findings identify novel strategies that leverage dynamic neurodevelopmental changes during adolescence with the potential to extinguish pathological fears implicated in anxiety and stress-related disorders. PMID:27215672
Field-induced superdiffusion and dynamical heterogeneity.
Gradenigo, Giacomo; Bertin, Eric; Biroli, Giulio
2016-06-01
By analyzing two kinetically constrained models of supercooled liquids we show that the anomalous transport of a driven tracer observed in supercooled liquids is another facet of the phenomenon of dynamical heterogeneity. We focus on the Fredrickson-Andersen and the Bertin-Bouchaud-Lequeux models. By numerical simulations and analytical arguments we demonstrate that the violation of the Stokes-Einstein relation and the field-induced superdiffusion observed during a long preasymptotic regime have the same physical origin: while a fraction of probes do not move, others jump repeatedly because they are close to local mobile regions. The anomalous fluctuations observed out of equilibrium in the presence of a pulling force ε, σ_x^2(t) = ⟨x_ε^2(t)⟩ − ⟨x_ε(t)⟩^2 ∼ t^{3/2}, which are accompanied by the asymptotic decay α_ε(t) ∼ t^{−1/2} of the non-Gaussian parameter from nontrivial values to zero, are due to the splitting of the probes population in the two (mobile and immobile) groups and to dynamical correlations, a mechanism expected to happen generically in supercooled liquids. PMID:27415189
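A cartoon of the population-splitting mechanism described in this abstract, with a toy displacement distribution of my own construction (not the kinetically constrained models): mixing immobile probes with field-driven jumping probes yields a displacement distribution whose non-Gaussian parameter α = ⟨δx⁴⟩/(3⟨δx²⟩²) − 1 is clearly nonzero, whereas it vanishes for a Gaussian.

```python
import numpy as np

rng = np.random.default_rng(4)
n, frac_mobile = 100_000, 0.3
mobile = rng.random(n) < frac_mobile

# Mobile probes accumulate field-driven jumps; immobile probes stay put.
x = np.where(mobile, rng.poisson(20.0, n).astype(float), 0.0)
x += 0.1 * rng.standard_normal(n)      # small thermal noise on everyone

dx = x - x.mean()
alpha = np.mean(dx ** 4) / (3 * np.mean(dx ** 2) ** 2) - 1
print(alpha)  # clearly nonzero (negative here: the mixture is bimodal)
```

The sign and magnitude of α depend on the mixture details; the point is only that a two-population displacement distribution is measurably non-Gaussian, which is the diagnostic the paper tracks as α_ε(t) relaxes to zero.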
Neural dynamics of psychotherapy: what modeling might tell us about us.
Aleksandrowicz, Ana Maria C; Levine, Daniel S
2005-01-01
A neural network theory is proposed for some of the effects of verbal psychotherapy in individuals who are not seriously disturbed but are seeking to function more effectively. The network models are built on a combination of the supervised ARTMAP network and competitive attractor dynamics. The modeling exercise leads to some guidelines for psychotherapists that involve both cognitive and emotional reinforcement, in a climate closer to skill learning than to medical treatment.
Calcium imaging of neural circuits with extended depth-of-field light-sheet microscopy.
Quirin, Sean; Vladimirov, Nikita; Yang, Chao-Tsung; Peterka, Darcy S; Yuste, Rafael; Ahrens, Misha B
2016-03-01
Increasing the volumetric imaging speed of light-sheet microscopy will improve its ability to detect fast changes in neural activity. Here, a system is introduced for brain-wide imaging of neural activity in the larval zebrafish by coupling structured illumination with cubic phase extended depth-of-field (EDoF) pupil encoding. This microscope enables faster light-sheet imaging and facilitates arbitrary plane scanning, removing constraints on acquisition speed, alignment tolerances, and physical motion near the sample. The usefulness of this method is demonstrated by performing multi-plane calcium imaging in the fish brain with a 416 × 832 × 160 µm field of view at 33 Hz. The optomotor response behavior of the zebrafish is monitored at high speeds, and time-locked correlations of neuronal activity are resolved across its brain. PMID:26974063
Voytek, Bradley; Knight, Robert T.
2015-01-01
Perception, cognition, and social interaction depend upon coordinated neural activity. This coordination operates within noisy, overlapping, and distributed neural networks operating at multiple timescales. These networks are built upon a structural scaffolding with intrinsic neuroplasticity that changes with development, aging, disease, and personal experience. In this paper we begin from the perspective that successful interregional communication relies upon the transient synchronization between distinct low frequency (<80 Hz) oscillations, allowing for brief windows of communication via phase-coordinated local neuronal spiking. From this, we construct a theoretical framework for dynamic network communication, arguing that these networks reflect a balance between oscillatory coupling and local population spiking activity, and that these two levels of activity interact. We theorize that when oscillatory coupling is too strong, spike timing within the local neuronal population becomes too synchronous; when oscillatory coupling is too weak, spike timing is too disorganized. Each results in specific disruptions to neural communication. These alterations in communication dynamics may underlie cognitive changes associated with healthy development and aging, in addition to neurological and psychiatric disorders. A number of neurological and psychiatric disorders—including Parkinson’s disease, autism, depression, schizophrenia, and anxiety—are associated with abnormalities in oscillatory activity. Although aging, psychiatric and neurological disease, and experience differ in the biological changes to structural grey or white matter, neurotransmission, and gene expression, our framework suggests that any resultant cognitive and behavioral changes in normal or disordered states, or their treatment, is a product of how these physical processes affect dynamic network communication. PMID:26005114
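A toy illustration of the phase-coupling quantity underlying this framework, with synthetic signals of my own making (not from the paper): the phase-locking value (PLV) between two signals is near 1 when their phase difference is stable across time and near 0 when it drifts.

```python
import numpy as np

def analytic_signal(x):
    """Hilbert analytic signal via FFT (assumes even-length real input)."""
    n = len(x)
    h = np.zeros(n)
    h[0] = h[n // 2] = 1
    h[1:n // 2] = 2
    return np.fft.ifft(np.fft.fft(x) * h)

rng = np.random.default_rng(5)
t = np.arange(4096) / 1000.0                     # a 1 kHz "recording"
a = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(t.size)
b = np.sin(2 * np.pi * 10 * t + 0.8) + 0.2 * rng.standard_normal(t.size)  # locked to a
c = np.sin(2 * np.pi * 13 * t) + 0.2 * rng.standard_normal(t.size)        # drifts vs a

def plv(x, y):
    """Phase-locking value: |mean of exp(i * phase difference)|."""
    dphi = np.angle(analytic_signal(x)) - np.angle(analytic_signal(y))
    return abs(np.mean(np.exp(1j * dphi)))

print(round(plv(a, b), 2), round(plv(a, c), 2))  # high for a-b, low for a-c
```

In the framework above, a PLV-like coupling that is too high or too low would correspond to over- or under-synchronised spike timing between regions; in practice one would band-pass filter before extracting phases.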
Neural-network predictive control for nonlinear dynamic systems with time-delay.
Huang, Jin-Quan; Lewis, F L
2003-01-01
A new recurrent neural-network predictive feedback control structure for a class of uncertain nonlinear dynamic time-delay systems in canonical form is developed and analyzed. The dynamic system has constant input and feedback time delays due to a communications channel. The proposed control structure consists of a linearized subsystem local to the controlled plant and a remote predictive controller located at the master command station. In the local linearized subsystem, a recurrent neural network with on-line weight tuning algorithm is employed to approximate the dynamics of the time-delay-free nonlinear plant. No linearity in the unknown parameters is required. No preliminary off-line weight learning is needed. The remote controller is a modified Smith predictor that provides prediction and maintains the desired tracking performance; an extra robustifying term is needed to guarantee stability. Rigorous stability proofs are given using Lyapunov analysis. The result is an adaptive neural net compensation scheme for unknown nonlinear systems with time delays. A simulation example is provided to demonstrate the effectiveness of the proposed control strategy.
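The core of the control structure above is the Smith predictor, which closes the loop on a delay-free internal model prediction rather than the delayed plant output. A minimal sketch of that idea on a hypothetical linear first-order plant with a 5-step input delay and a PI loop (all parameters and gains here are illustrative assumptions, not the paper's design):

```python
import numpy as np

# Hypothetical discrete first-order plant y[k+1] = a*y[k] + b*u[k-d],
# with a d-step input delay standing in for the communications channel.
a, b, d = 0.9, 0.1, 5
N, r = 200, 1.0                    # horizon and step reference
Kp, Ki = 2.0, 0.3                  # PI gains tuned on the delay-free model

y = np.zeros(N)                    # real plant output
y_model = np.zeros(N)              # delay-free internal model (the predictor)
u = np.zeros(N)
s = 0.0                            # integrator state

for k in range(N - 1):
    # Smith predictor: control the delay-free model output, corrected by the
    # mismatch between the real plant and the delayed model output.
    y_model_delayed = y_model[k - d] if k >= d else 0.0
    y_pred = y_model[k] + (y[k] - y_model_delayed)
    e = r - y_pred
    s += e
    u[k] = Kp * e + Ki * s
    u_delayed = u[k - d] if k >= d else 0.0
    y[k + 1] = a * y[k] + b * u_delayed          # real (delayed) plant
    y_model[k + 1] = a * y_model[k] + b * u[k]   # undelayed model
```

Because the controller acts on the predicted delay-free output, the loop can be tuned as if the delay were absent; the plant output then tracks the reference after the unavoidable d-step transport lag.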
NASA Astrophysics Data System (ADS)
Gao, Shigen; Dong, Hairong; Lyu, Shihang; Ning, Bin
2016-07-01
This paper studies decentralised neural adaptive control of a class of interconnected nonlinear systems in which each subsystem is subject to input saturation and external disturbance and has an independent system order. Using a novel truncated adaptation design, the dynamic surface control technique and a minimal-learning-parameters algorithm, the proposed method circumvents the problems of 'explosion of complexity' and the 'curse of dimensionality' that exist in the traditional backstepping design. Compared with methodologies in which neural weights are updated online in the controllers, only one scalar needs to be updated in the controllers of each subsystem when dealing with unknown system dynamics. Radial basis function neural networks (NNs) are used for the online approximation of the unknown system dynamics. It is proved using Lyapunov stability theory that all the signals in the closed-loop system are semi-globally uniformly ultimately bounded. The tracking errors of each subsystem, the amplitude of the NN approximation residuals and the external disturbances can be attenuated to an arbitrarily small level by tuning proper design parameters. Simulation results are given to demonstrate the effectiveness of the proposed method.
Adaptive dynamic inversion robust control for BTT missile based on wavelet neural network
NASA Astrophysics Data System (ADS)
Li, Chuanfeng; Wang, Yongji; Deng, Zhixiang; Wu, Hao
2009-10-01
A new nonlinear control strategy incorporating the dynamic inversion method with wavelet neural networks is presented for the nonlinear coupled system of a Bank-to-Turn (BTT) missile in the reentry phase. The basic control law is designed using the dynamic inversion feedback linearization method, and an online-learning wavelet neural network is used to compensate for the inversion error due to aerodynamic parameter errors, modeling imprecision and external disturbance, in view of the time-frequency localization properties of the wavelet transform. Weight-adjusting laws are derived according to Lyapunov stability theory, which guarantees the boundedness of all signals in the whole system. Furthermore, robust stability of the closed-loop system under this tracking law is proved. Finally, six degree-of-freedom (6DOF) simulation results show that the attitude angles can track the anticipated commands precisely in the presence of external disturbance and parameter uncertainty. This means that the dynamic inversion method's dependence on an accurate model is reduced and the robustness of the control system is enhanced by using the wavelet neural network (WNN) to reconstruct the inversion error online.
Voytek, Bradley; Knight, Robert T
2015-06-15
Perception, cognition, and social interaction depend upon coordinated neural activity. This coordination operates within noisy, overlapping, and distributed neural networks operating at multiple timescales. These networks are built upon a structural scaffolding with intrinsic neuroplasticity that changes with development, aging, disease, and personal experience. In this article, we begin from the perspective that successful interregional communication relies upon the transient synchronization between distinct low-frequency (<80 Hz) oscillations, allowing for brief windows of communication via phase-coordinated local neuronal spiking. From this, we construct a theoretical framework for dynamic network communication, arguing that these networks reflect a balance between oscillatory coupling and local population spiking activity and that these two levels of activity interact. We theorize that when oscillatory coupling is too strong, spike timing within the local neuronal population becomes too synchronous; when oscillatory coupling is too weak, spike timing is too disorganized. Each results in specific disruptions to neural communication. These alterations in communication dynamics may underlie cognitive changes associated with healthy development and aging, in addition to neurological and psychiatric disorders. A number of neurological and psychiatric disorders-including Parkinson's disease, autism, depression, schizophrenia, and anxiety-are associated with abnormalities in oscillatory activity. Although aging, psychiatric and neurological disease, and experience differ in the biological changes to structural gray or white matter, neurotransmission, and gene expression, our framework suggests that any resultant cognitive and behavioral changes in normal or disordered states or their treatment are a product of how these physical processes affect dynamic network communication.
Neural dynamics in response to binary taste mixtures
Katz, Donald B.
2013-01-01
Taste stimuli encountered in the natural environment are usually combinations of multiple tastants. Although a great deal is known about how neurons in the taste system respond to single taste stimuli in isolation, less is known about how the brain deals with such mixture stimuli. Here, we probe the responses of single neurons in primary gustatory cortex (GC) of awake rats to an array of taste stimuli including 100% citric acid (100 mM), 100% sodium chloride (100 mM), 100% sucrose (100 mM), and a range of binary mixtures (90/10, 70/30, 50/50, 30/70, and 10/90%). We tested for the presence of three different hypothetical response patterns: 1) responses varying monotonically as a function of concentration of sucrose (or acid) in the mixture (the “monotonic” pattern); 2) responses increasing or decreasing as a function of degree of mixture of the stimulus (the “mixture” pattern); and 3) responses that change abruptly from being similar to one pure taste to being similar to the other (the “categorical” pattern). Our results demonstrate the presence of both monotonic and mixture patterns within responses of GC neurons. Specifically, further analysis (that included the presentation of 50 mM sucrose and citric acid) made it clear that mixture suppression reliably precedes a palatability-related pattern. The temporal dynamics of the emergence of the palatability-related pattern parallel the temporal dynamics of the emergence of preference behavior for the same mixtures as measured by a brief access test. We saw no evidence of categorical coding. PMID:23365178
The effects of noise on binocular rivalry waves: a stochastic neural field model
NASA Astrophysics Data System (ADS)
Webber, Matthew A.; Bressloff, Paul C.
2013-03-01
We analyze the effects of extrinsic noise on traveling waves of visual perception in a competitive neural field model of binocular rivalry. The model consists of two one-dimensional excitatory neural fields, whose activity variables represent the responses to left-eye and right-eye stimuli, respectively. The two networks mutually inhibit each other, and slow adaptation is incorporated into the model by taking the network connections to exhibit synaptic depression. We first show how, in the absence of any noise, the system supports a propagating composite wave consisting of an invading activity front in one network co-moving with a retreating front in the other network. Using a separation of time scales and perturbation methods previously developed for stochastic reaction-diffusion equations, we then show how extrinsic noise in the activity variables leads to a diffusive-like displacement (wandering) of the composite wave from its uniformly translating position at long time scales, and fluctuations in the wave profile around its instantaneous position at short time scales. We use our analysis to calculate the first-passage-time distribution for a stochastic rivalry wave to travel a fixed distance, which we find to be given by an inverse Gaussian. Finally, we investigate the effects of noise in the depression variables, which under an adiabatic approximation lead to quenched disorder in the neural fields during propagation of a wave.
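The inverse Gaussian first-passage-time distribution obtained in this analysis can be checked numerically. A minimal sketch with illustrative drift and diffusion values (the parameters L, c, D below are assumptions, not taken from the paper):

```python
import math

def inverse_gaussian_fpt(t, L=1.0, c=1.0, D=0.1):
    """First-passage-time density for a front drifting at speed c with
    effective diffusivity D to travel a distance L (illustrative values)."""
    return (L / math.sqrt(4.0 * math.pi * D * t**3)
            * math.exp(-(L - c * t)**2 / (4.0 * D * t)))

# Midpoint-rule check that the density is normalized and that the
# mean passage time equals L/c = 1 for these parameters.
dt, mass, mean = 1e-4, 0.0, 0.0
t = dt / 2.0
while t < 20.0:
    f = inverse_gaussian_fpt(t)
    mass += f * dt
    mean += t * f * dt
    t += dt
```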
Learning Topology and Dynamics of Large Recurrent Neural Networks
NASA Astrophysics Data System (ADS)
She, Yiyuan; He, Yuejia; Wu, Dapeng
2014-11-01
Large-scale recurrent networks have drawn increasing attention recently because of their capabilities in modeling a large variety of real-world phenomena and physical mechanisms. This paper studies how to identify all authentic connections and estimate system parameters of a recurrent network, given a sequence of node observations. This task becomes extremely challenging in modern network applications, because the available observations are usually very noisy and limited, and the associated dynamical system is strongly nonlinear. By formulating the problem as multivariate sparse sigmoidal regression, we develop simple-to-implement network learning algorithms, with rigorous convergence guarantee in theory, for a variety of sparsity-promoting penalty forms. A quantile variant of progressive recurrent network screening is proposed for efficient computation and allows for direct cardinality control of network topology in estimation. Moreover, we investigate recurrent network stability conditions in Lyapunov's sense, and integrate such stability constraints into sparse network learning. Experiments show excellent performance of the proposed algorithms in network topology identification and forecasting.
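The multivariate sparse sigmoidal regression at the heart of this approach can be illustrated with a toy proximal-gradient (ISTA) fit, where soft-thresholding performs the sparsity-promoting penalization. The network size, penalty, and step size below are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n, T = 5, 500
W_true = np.zeros((n, n))                       # ground-truth sparse connectivity
W_true[0, 1], W_true[2, 3], W_true[4, 0] = 2.0, -2.0, 1.5

X = rng.uniform(-1, 1, size=(T, n))             # observed node states
Y = sigmoid(X @ W_true.T) + 0.01 * rng.standard_normal((T, n))

# ISTA: gradient step on the squared error of the sigmoidal model,
# then soft-thresholding as the l1 proximal operator.
W, lam, step = np.zeros((n, n)), 0.005, 1.0
for _ in range(3000):
    P = sigmoid(X @ W.T)
    grad = ((P - Y) * P * (1.0 - P)).T @ X / T
    W = W - step * grad
    W = np.sign(W) * np.maximum(np.abs(W) - step * lam, 0.0)
```

On this toy problem the soft-threshold step zeroes out the spurious connections, so the estimated support matches the true network topology.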
Neural processing of dynamic emotional facial expressions in psychopaths.
Decety, Jean; Skelly, Laurie; Yoder, Keith J; Kiehl, Kent A
2014-02-01
Facial expressions play a critical role in social interactions by eliciting rapid responses in the observer. Failure to perceive and experience a normal range and depth of emotion seriously impacts interpersonal communication and relationships. As has been demonstrated across a number of domains, abnormal emotion processing in individuals with psychopathy plays a key role in their lack of empathy. However, the neuroimaging literature is unclear as to whether deficits are specific to particular emotions such as fear and perhaps sadness. Moreover, findings are inconsistent across studies. In the current experiment, 80 incarcerated adult males scoring high, medium, and low on the Hare Psychopathy Checklist-Revised (PCL-R) underwent functional magnetic resonance imaging (fMRI) scanning while viewing dynamic facial expressions of fear, sadness, happiness, and pain. Participants who scored high on the PCL-R showed a reduction in neuro-hemodynamic response to all four categories of facial expressions in the face processing network (inferior occipital gyrus, fusiform gyrus, and superior temporal sulcus (STS)) as well as the extended network (inferior frontal gyrus and orbitofrontal cortex (OFC)), which supports a pervasive deficit across emotion domains. Unexpectedly, the response in dorsal insula to fear, sadness, and pain was greater in psychopaths than non-psychopaths. Importantly, the orbitofrontal cortex and ventromedial prefrontal cortex (vmPFC), regions critically implicated in affective and motivated behaviors, were significantly less active in individuals with psychopathy during the perception of all four emotional expressions. PMID:24359488
Neural dynamics for landmark orientation and angular path integration
Seelig, Johannes D.; Jayaraman, Vivek
2015-01-01
Many animals navigate using a combination of visual landmarks and path integration. In mammalian brains, head direction cells integrate these two streams of information by representing an animal's heading relative to landmarks, yet maintaining their directional tuning in darkness based on self-motion cues. Here we use two-photon calcium imaging in head-fixed flies walking on a ball in a virtual reality arena to demonstrate that landmark-based orientation and angular path integration are combined in the population responses of neurons whose dendrites tile the ellipsoid body — a toroidal structure in the center of the fly brain. The population encodes the fly's azimuth relative to its environment, tracking visual landmarks when available and relying on self-motion cues in darkness. When both visual and self-motion cues are absent, a representation of the animal's orientation is maintained in this network through persistent activity — a potential substrate for short-term memory. Several features of the population dynamics of these neurons and their circular anatomical arrangement are suggestive of ring attractors — network structures proposed to support the function of navigational brain circuits. PMID:25971509
Modeling the motor cortex: Optimality, recurrent neural networks, and spatial dynamics.
Tanaka, Hirokazu
2016-03-01
Specialization of motor function in the frontal lobe was first discovered in the seminal experiments of Fritsch and Hitzig and subsequently of Ferrier in the 19th century. It is, however, ironic that the functional and computational role of the motor cortex still remains unresolved. A computational understanding of the motor cortex amounts to understanding what movement variables the motor neurons represent (the movement representation problem) and how such movement variables are computed through interaction with anatomically connected areas (the neural computation problem). Electrophysiological experiments in the 20th century demonstrated that neural activities in the motor cortex correlate with a number of motor-related and cognitive variables, thereby igniting the controversy over movement representations in the motor cortex. Despite substantial experimental efforts, the overwhelming complexity found in neural activities has impeded our understanding of how movements are represented in the motor cortex. Recent progress in computational modeling has rekindled this controversy in the 21st century. Here, I review recent developments in computational models of the motor cortex, with a focus on optimality models, recurrent neural network models and spatial dynamics models. Although individual models provide consistent pictures within their domains, our current understanding of the functions of the motor cortex is still fragmented.
Caldesmon regulates actin dynamics to influence cranial neural crest migration in Xenopus.
Nie, Shuyi; Kee, Yun; Bronner-Fraser, Marianne
2011-09-01
Caldesmon (CaD) is an important actin modulator that associates with actin filaments to regulate cell morphology and motility. Although extensively studied in cultured cells, there is little functional information regarding the role of CaD in migrating cells in vivo. Here we show that nonmuscle CaD is highly expressed in both premigratory and migrating cranial neural crest cells of Xenopus embryos. Depletion of CaD with antisense morpholino oligonucleotides causes cranial neural crest cells to migrate a significantly shorter distance, prevents their segregation into distinct migratory streams, and later results in severe defects in cartilage formation. Demonstrating specificity, these effects are rescued by adding back exogenous CaD. Interestingly, CaD proteins with mutations in the Ca(2+)-calmodulin-binding sites or Erk/Cdk1 phosphorylation sites fail to rescue the knockdown phenotypes, whereas mutation of the PAK phosphorylation site is able to rescue them. Analysis of neural crest explants reveals that CaD is required for the dynamic arrangements of actin and, thus, for cell shape changes and process formation. Taken together, these results suggest that the actin-modulating activity of CaD may underlie its critical function and is regulated by distinct signaling pathways during normal neural crest migration. PMID:21795398
Neural networks with dynamical synapses: From mixed-mode oscillations and spindles to chaos
NASA Astrophysics Data System (ADS)
Lee, K.; Goltsev, A. V.; Lopes, M. A.; Mendes, J. F. F.
2013-01-01
Understanding of short-term synaptic depression (STSD) and other forms of synaptic plasticity is a topical problem in neuroscience. Here we study the role of STSD in the formation of complex patterns of brain rhythms. We use a cortical circuit model of neural networks composed of irregular spiking excitatory and inhibitory neurons having type 1 and 2 excitability and stochastic dynamics. In the model, neurons form a sparsely connected network and their spontaneous activity is driven by random spikes representing synaptic noise. Using simulations and analytical calculations, we found that if STSD is absent, the neural network shows either asynchronous behavior or regular network oscillations depending on the noise level. In networks with STSD, changing parameters of synaptic plasticity and the noise level, we observed transitions to complex patterns of collective activity: mixed-mode and spindle oscillations, bursts of collective activity, and chaotic behavior. Interestingly, these patterns are stable in a certain range of the parameters and separated by critical boundaries. Thus, the parameters of synaptic plasticity can play the role of control parameters, or switches, between different network states. However, changes of the parameters caused by a disease may lead to dramatic impairment of ongoing neural activity. We analyze the chaotic neural activity using the 0-1 test for chaos (Gottwald, G. & Melbourne, I., 2004) and show that it has a collective nature.
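The 0-1 test for chaos cited above reduces a time series to a single statistic K that is close to 1 for chaotic dynamics and close to 0 for regular dynamics. A minimal sketch of the test, demonstrated on the logistic map rather than the paper's network model (parameter choices are illustrative):

```python
import numpy as np

def zero_one_test(phi, n_c=10, seed=1):
    """Gottwald-Melbourne 0-1 test: median growth-rate statistic K over
    several random angles c; K near 1 suggests chaos, near 0 regularity."""
    rng = np.random.default_rng(seed)
    N = len(phi)
    ncut = N // 10                      # displacements up to N/10 lags
    j = np.arange(1, N + 1)
    Ks = []
    for c in rng.uniform(np.pi / 5, 4 * np.pi / 5, n_c):
        p = np.cumsum(phi * np.cos(j * c))   # translation variables
        q = np.cumsum(phi * np.sin(j * c))
        M = np.array([np.mean((p[m:] - p[:-m])**2 + (q[m:] - q[:-m])**2)
                      for m in range(1, ncut)])
        Ks.append(np.corrcoef(np.arange(1, ncut), M)[0, 1])
    return float(np.median(Ks))

def logistic(r, n, x0=0.3):
    x = np.empty(n)
    x[0] = x0
    for i in range(n - 1):
        x[i + 1] = r * x[i] * (1.0 - x[i])
    return x

K_chaotic = zero_one_test(logistic(3.97, 3000))   # chaotic regime
K_regular = zero_one_test(logistic(3.2, 3000))    # period-2 regime
```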
Andrade, Andre; Costa, Marcelo; Paolucci, Leopoldo; Braga, Antônio; Pires, Flavio; Ugrinowitsch, Herbert; Menzel, Hans-Joachim
2015-01-01
The aim of this study was to present a new training algorithm for artificial neural networks, called multi-objective least absolute shrinkage and selection operator (MOBJ-LASSO), applied to the classification of dynamic gait patterns. The movement pattern is identified by 20 characteristics from the three components of the ground reaction force which are used as input information for the neural networks in gender-specific gait classification. The classification performance of MOBJ-LASSO (97.4%) and the multi-objective algorithm (MOBJ) (97.1%) is similar, but the MOBJ-LASSO algorithm achieved better results than MOBJ because it is able to eliminate inputs and automatically select the parameters of the neural network. Thus, it is an effective tool for data mining using neural networks. From the 20 inputs used for training, MOBJ-LASSO selected the first and second peaks of the vertical force and the force peak in the antero-posterior direction as the variables that classify the gait patterns of the different genders.
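The LASSO-style input elimination described above can be sketched with an l1-penalized logistic classifier on synthetic "gait features"; the data, penalty, and informative feature indices below are hypothetical, and this is a plain lasso-logistic stand-in, not the multi-objective MOBJ-LASSO algorithm itself:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n, d = 300, 20                                   # 20 force-curve features, as in the study
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[[0, 1, 4]] = [1.5, -2.0, 1.0]             # only a few informative features
y = (rng.random(n) < sigmoid(X @ w_true)).astype(float)

# l1-penalized logistic regression via proximal gradient descent;
# the soft-threshold step performs the LASSO-style input elimination.
w, lam, step = np.zeros(d), 0.05, 0.2
for _ in range(5000):
    g = X.T @ (sigmoid(X @ w) - y) / n
    w = w - step * g
    w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)

selected = np.flatnonzero(w)                     # surviving inputs
acc = float(np.mean((sigmoid(X @ w) > 0.5) == (y > 0.5)))
```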
An implantable wireless neural interface for recording cortical circuit dynamics in moving primates
NASA Astrophysics Data System (ADS)
Borton, David A.; Yin, Ming; Aceros, Juan; Nurmikko, Arto
2013-04-01
Objective. Neural interface technology suitable for clinical translation has the potential to significantly impact the lives of amputees, spinal cord injury victims and those living with severe neuromotor disease. Such systems must be chronically safe, durable and effective. Approach. We have designed and implemented a neural interface microsystem, housed in a compact, subcutaneous and hermetically sealed titanium enclosure. The implanted device interfaces the brain with a 510k-approved, 100-element silicon-based microelectrode array via a custom hermetic feedthrough design. Full spectrum neural signals were amplified (0.1 Hz to 7.8 kHz, 200× gain) and multiplexed by a custom application specific integrated circuit, digitized and then packaged for transmission. The neural data (24 Mbps) were transmitted by a wireless data link carried on a frequency-shift-key-modulated signal at 3.2 and 3.8 GHz to a receiver 1 m away by design as a point-to-point communication link for human clinical use. The system was powered by an embedded medical grade rechargeable Li-ion battery for 7 h continuous operation between recharge via an inductive transcutaneous wireless power link at 2 MHz. Main results. Device verification and early validation were performed in both swine and non-human primate freely-moving animal models and showed that the wireless implant was electrically stable, effective in capturing and delivering broadband neural data, and safe for over one year of testing. In addition, we have used the multichannel data from these mobile animal models to demonstrate the ability to decode neural population dynamics associated with motor activity. Significance. We have developed an implanted wireless broadband neural recording device evaluated in non-human primate and swine. The use of this new implantable neural interface technology can provide insight into how to advance human neuroprostheses beyond the present early clinical trials. Further, such tools enable mobile
An Implantable Wireless Neural Interface for Recording Cortical Circuit Dynamics in Moving Primates
Borton, David A.; Yin, Ming; Aceros, Juan; Nurmikko, Arto
2013-01-01
Objective. Neural interface technology suitable for clinical translation has the potential to significantly impact the lives of amputees, spinal cord injury victims, and those living with severe neuromotor disease. Such systems must be chronically safe, durable, and effective. Approach. We have designed and implemented a neural interface microsystem, housed in a compact, subcutaneous, and hermetically sealed titanium enclosure. The implanted device interfaces the brain with a 510k-approved, 100-element silicon-based MEA via a custom hermetic feedthrough design. Full spectrum neural signals were amplified (0.1 Hz to 7.8 kHz, 200× gain) and multiplexed by a custom application specific integrated circuit, digitized, and then packaged for transmission. The neural data (24 Mbps) were transmitted by a wireless data link carried on a frequency-shift-key-modulated signal at 3.2 and 3.8 GHz to a receiver 1 m away by design as a point-to-point communication link for human clinical use. The system was powered by an embedded medical grade rechargeable Li-ion battery for 7 h continuous operation between recharge via an inductive transcutaneous wireless power link at 2 MHz. Main results. Device verification and early validation were performed in both swine and non-human primate freely-moving animal models and showed that the wireless implant was electrically stable, effective in capturing and delivering broadband neural data, and safe for over one year of testing. In addition, we have used the multichannel data from these mobile animal models to demonstrate the ability to decode neural population dynamics associated with motor activity. Significance. We have developed an implanted wireless broadband neural recording device evaluated in non-human primate and swine. The use of this new implantable neural interface technology can provide insight into how to advance human neuroprostheses beyond the present early clinical trials. Further, such tools enable mobile patient use, have
Temporal and spatial neural dynamics in the perception of basic emotions from complex scenes.
Costa, Tommaso; Cauda, Franco; Crini, Manuella; Tatu, Mona-Karina; Celeghin, Alessia; de Gelder, Beatrice; Tamietto, Marco
2014-11-01
The different temporal dynamics of emotions are critical to understand their evolutionary role in the regulation of interactions with the surrounding environment. Here, we investigated the temporal dynamics underlying the perception of four basic emotions from complex scenes varying in valence and arousal (fear, disgust, happiness and sadness) with the millisecond time resolution of electroencephalography (EEG). Event-related potentials were computed and each emotion showed a specific temporal profile, as revealed by distinct time segments of significant differences from the neutral scenes. Fear perception elicited significant activity at the earliest time segments, followed by disgust, happiness and sadness. Moreover, fear, disgust and happiness were characterized by two time segments of significant activity, whereas sadness showed only one long-latency time segment of activity. Multidimensional scaling was used to assess the correspondence between neural temporal dynamics and the subjective experience elicited by the four emotions in a subsequent behavioral task. We found a high coherence between these two classes of data, indicating that the psychological categories defining emotions have a close correspondence at the brain level in terms of neural temporal dynamics. Finally, we localized the brain regions of time-dependent activity for each emotion and time segment with low-resolution brain electromagnetic tomography. Fear and disgust showed widely distributed activations, predominantly in the right hemisphere. Happiness activated a number of areas mostly in the left hemisphere, whereas sadness showed a limited number of active areas at late latency. The present findings indicate that the neural signature of basic emotions can emerge as the byproduct of dynamic spatiotemporal brain networks investigated with millisecond-range resolution, rather than in time-independent areas involved uniquely in processing one specific emotion.
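Classical multidimensional scaling, as used above to relate neural dynamics to subjective experience, recovers a low-dimensional configuration from a distance matrix by double centering and eigendecomposition. A minimal sketch on synthetic points (not the study's data):

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed points in k dims from a distance matrix."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D**2) @ J                      # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                       # ascending eigenvalues
    idx = np.argsort(w)[::-1][:k]                  # take the top-k
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

rng = np.random.default_rng(0)
pts = rng.normal(size=(6, 2))                      # hypothetical stimulus coordinates
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
emb = classical_mds(D)
D_rec = np.linalg.norm(emb[:, None] - emb[None, :], axis=-1)
```

For exact Euclidean distances between 2-D points, the recovered configuration reproduces the distance matrix up to rotation and reflection.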
Neural response to natural stimuli: maximizing information to find the receptive fields
NASA Astrophysics Data System (ADS)
Sharpee, T.; Rust, N.; Bialek, William
2002-03-01
From olfaction to vision and audition, there is an increasing need, and a growing number of experiments, that study the responses of sensory neurons to natural stimuli. The fact that statistically simple stimuli, such as white noise, are often not effective in activating high-level neurons indicates that neural responses cannot always be understood as a combination of responses to simple signals. However, rigorous statistical analysis of neural responses in terms of receptive fields is limited to correlated Gaussian inputs. Here we propose to look at the mutual information between an ensemble of stimuli and the sequence of elicited neural responses as a function of direction in stimulus space. By maximizing information first as a function of one angle, and then iteratively including new directions into the set of directions along which information is calculated and optimized, it is possible to find the relevant subspace that determines the probability of generating a response, provided that the number of relevant directions, or receptive fields, is small. Since the dimension of the relevant subspace is much smaller than that of the overall stimulus space, it becomes experimentally feasible to map out the neuron's input/output function.
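The information-as-a-function-of-direction idea can be sketched for a model neuron that spikes when the stimulus projection onto a hidden direction crosses a threshold: the projection onto the relevant direction carries much more single-spike information than a projection onto an irrelevant one. The dimensions, threshold, and histogram estimator below are illustrative assumptions:

```python
import numpy as np

def spike_info(proj_all, proj_spike, bins=20):
    """Single-spike information (bits) carried by a 1-D stimulus projection:
    I = sum_x P(x|spike) * log2[ P(x|spike) / P(x) ]."""
    edges = np.histogram_bin_edges(proj_all, bins=bins)
    p_all, _ = np.histogram(proj_all, bins=edges)
    p_spk, _ = np.histogram(proj_spike, bins=edges)
    p_all = p_all / p_all.sum()
    p_spk = p_spk / p_spk.sum()
    m = (p_spk > 0) & (p_all > 0)
    return float(np.sum(p_spk[m] * np.log2(p_spk[m] / p_all[m])))

rng = np.random.default_rng(0)
d, N = 5, 50000
v_true = np.zeros(d); v_true[0] = 1.0          # hidden relevant direction
S = rng.standard_normal((N, d))                # Gaussian stimulus ensemble
spikes = S @ v_true > 1.0                      # threshold (LN-type) model neuron

v_orth = np.zeros(d); v_orth[1] = 1.0          # irrelevant (orthogonal) direction
I_rel = spike_info(S @ v_true, S[spikes] @ v_true)
I_irr = spike_info(S @ v_orth, S[spikes] @ v_orth)
```

Maximizing this quantity over directions, as the abstract describes, singles out the relevant subspace because only projections onto it change the spike-conditioned distribution.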
Neural network simulation of soil NO3 dynamic under potato crop system
NASA Astrophysics Data System (ADS)
Goulet-Fortin, Jérôme; Morais, Anne; Anctil, François; Parent, Léon-Étienne; Bolinder, Martin
2013-04-01
Nitrate leaching is a major issue in sandy soils intensively cropped to potato. Modelling could test and improve management practices, particularly as regards optimal N application rates. Lack of input data is an important barrier to the application of classical process-based models to predict soil NO3 content (SNOC) and NO3 leaching (NOL). Alternatively, data-driven models such as neural networks (NN) could better take into account indicators of spatial soil heterogeneity and plant growth patterns such as the leaf area index (LAI), hence reducing the amount of soil information required. The first objective of this study was to evaluate NN and hybrid models to simulate SNOC in the 0-40 cm soil layer considering inter-annual variations, spatial soil heterogeneity and differential N application rates. The second objective was to evaluate the same methodology to simulate seasonal NOL dynamics at 1 m depth. To this end, multilayer perceptrons with different combinations of driving meteorological variables, functions of the LAI and state variables of external deterministic models were trained and evaluated. The state variables from external models were: drainage estimated by the CLASS model and soil temperature estimated by an ICBM subroutine. Results of SNOC simulations were compared to field data collected between 2004 and 2011 at several experimental plots under potato cropping systems in Québec, Eastern Canada. Results of NOL simulations were compared to data obtained in 2012 from 11 suction lysimeters installed in 2 experimental plots under potato cropping systems in the same region. The best-performing model for SNOC simulation was a 4-input hybrid model composed of 1) cumulative LAI, 2) cumulative drainage, 3) soil temperature and 4) day of year. The best-performing model for NOL simulation was a 5-input NN model composed of 1) N fertilization rate in spring, 2) LAI, 3) cumulative rainfall, 4) the day of year and 5) the
Buesing, Lars; Bill, Johannes; Nessler, Bernhard; Maass, Wolfgang
2011-01-01
The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, for both discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons. PMID:22096452
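The link between stochastic unit activity and MCMC sampling can be illustrated, in a much-simplified form, by Gibbs sampling in a tiny symmetric binary network whose stationary distribution is a Boltzmann distribution (the paper's own construction uses non-reversible chains instead). The weights and biases below are arbitrary:

```python
import itertools
import numpy as np

# Tiny symmetric network; stationary distribution p(z) ∝ exp(b·z + z^T W z / 2).
W = np.array([[0.0, 1.0, -0.5],
              [1.0, 0.0, 0.8],
              [-0.5, 0.8, 0.0]])
b = np.array([-0.2, 0.1, 0.0])

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# Exact marginals by enumerating all 2^3 binary states.
states = np.array(list(itertools.product([0, 1], repeat=3)), dtype=float)
logp = states @ b + 0.5 * np.einsum('si,ij,sj->s', states, W, states)
p = np.exp(logp)
p /= p.sum()
exact = states.T @ p

# Gibbs sampling: each "neuron" turns on with probability sigmoid(local field).
rng = np.random.default_rng(0)
z = np.zeros(3)
counts = np.zeros(3)
n_sweeps = 30000
for _ in range(n_sweeps):
    for i in range(3):
        z[i] = rng.random() < sigmoid(b[i] + W[i] @ z)
    counts += z
empirical = counts / n_sweeps
```

The empirical firing rates converge to the exact Boltzmann marginals, which is the sense in which stochastic network activity "implements" sampling from a target distribution.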
Kwong, C K; Fung, K Y; Jiang, Huimin; Chan, K Y; Siu, Kin Wai Michael
2013-01-01
Affective design is an important aspect of product development for achieving a competitive edge in the marketplace. A neural-fuzzy network approach has been attempted recently to model customer satisfaction for affective design and has proved effective in dealing with the fuzziness and non-linearity of the modeling as well as in generating explicit customer satisfaction models. However, such an approach to modeling customer satisfaction has two limitations. First, it is not suitable for modeling problems that involve a large number of inputs. Second, it cannot adapt to new data sets, given that its structure is fixed once it has been developed. In this paper, a modified dynamic evolving neural-fuzzy approach is proposed to address the above-mentioned limitations. A case study on the affective design of mobile phones was conducted to illustrate the effectiveness of the proposed methodology. Validation tests were conducted and the test results indicated that: (1) the conventional Adaptive Neuro-Fuzzy Inference System (ANFIS) failed to run due to the large number of inputs; (2) the proposed dynamic neural-fuzzy model outperforms the subtractive clustering-based ANFIS model and the fuzzy c-means clustering-based ANFIS model in terms of modeling accuracy and computational effort. PMID:24385884
Žigman, Mihaela; Laumann-Lipp, Nico; Titus, Tom; Postlethwait, John; Moens, Cecilia B.
2014-01-01
Hox genes are classically ascribed to function in patterning the anterior-posterior axis of bilaterian animals; however, their role in directing molecular mechanisms underlying morphogenesis at the cellular level remains largely unstudied. We unveil a non-classical role for the zebrafish hoxb1b gene, which shares ancestral functions with mammalian Hoxa1, in controlling progenitor cell shape and oriented cell division during zebrafish anterior hindbrain neural tube morphogenesis. This is likely distinct from its role in cell fate acquisition and segment boundary formation. We show that, without affecting major components of apico-basal or planar cell polarity, Hoxb1b regulates mitotic spindle rotation during the oriented neural keel symmetric mitoses that are required for normal neural tube lumen formation in the zebrafish. This function correlates with a non-cell-autonomous requirement for Hoxb1b in regulating microtubule plus-end dynamics in progenitor cells in interphase. We propose that Hox genes can influence global tissue morphogenesis by control of microtubule dynamics in individual cells in vivo. PMID:24449840
Araújo, Rui
2006-09-01
Mobile robots must be able to build their own maps to navigate in unknown worlds. Expanding a previously proposed method based on the fuzzy ART neural architecture (FARTNA), this paper introduces a new online method for learning maps of unknown dynamic worlds. For this purpose the new Prune-able fuzzy adaptive resonance theory neural architecture (PAFARTNA) is introduced. It extends the FARTNA self-organizing neural network with novel mechanisms that provide important dynamic adaptation capabilities. Relevant PAFARTNA properties are formulated and demonstrated. A method is proposed for the perception of object removals, and then integrated with PAFARTNA. The proposed methods are integrated into a navigation architecture. With the new navigation architecture the mobile robot is able to navigate in changing worlds, and a degree of optimality is maintained, associated to a shortest path planning approach implemented in real-time over the underlying global world model. Experimental results obtained with a Nomad 200 robot are presented demonstrating the feasibility and effectiveness of the proposed methods. PMID:17001984
Design of Neural Networks for Fast Convergence and Accuracy: Dynamics and Control
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Sparks, Dean W., Jr.
1997-01-01
A procedure for the design and training of artificial neural networks, used for rapid and efficient controls and dynamics design and analysis of flexible space systems, has been developed. Artificial neural networks are employed such that, once properly trained, they provide a means of rapidly evaluating the impact of design changes. Specifically, two-layer feedforward neural networks are designed to approximate the functional relationship between component/spacecraft design changes and measures of performance or the nonlinear dynamics of the system/components. A training algorithm based on statistical sampling theory is presented, which guarantees that the trained networks provide a designer-specified degree of accuracy in mapping the functional relationship. Within each iteration of this statistical algorithm, a sequential design algorithm is used for the design and training of the feedforward network to provide rapid convergence to the network goals: at each stage of the sequence, a new network is trained to minimize the residual error of the previous network. The proposed method should work for applications in which an arbitrarily large source of training data can be generated. Two numerical examples on a spacecraft application demonstrate the feasibility of the proposed approach.
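The sequential step described here, in which each new network is trained on the error left by its predecessor, is structurally similar to residual (boosting-style) fitting. A minimal sketch under that interpretation, with a made-up target function and small random-feature networks standing in for the trained feedforward networks:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the design-to-performance map the networks approximate.
x = rng.uniform(-1, 1, size=(400, 1))
y = np.sin(3 * x).ravel()

def fit_small_net(x, target, width=10):
    """One-hidden-layer net with random tanh features; only the output layer
    is solved in closed form (ridge regression), keeping the sketch simple."""
    Wh = rng.normal(size=(1, width)) * 3.0
    bh = rng.normal(size=width)
    H = np.tanh(x @ Wh + bh)
    w = np.linalg.solve(H.T @ H + 1e-6 * np.eye(width), H.T @ target)
    return lambda xx: np.tanh(xx @ Wh + bh) @ w

# Sequential design: each new network is trained on the residual error
# left by the sum of the previously trained networks.
nets, pred, errors = [], np.zeros_like(y), []
for stage in range(5):
    residual = y - pred
    net = fit_small_net(x, residual)
    nets.append(net)
    pred = pred + net(x)
    errors.append(float(np.sqrt(np.mean((y - pred) ** 2))))

print(errors)  # RMS error shrinks as stages accumulate
```

The combined predictor is the sum of the stage networks, so each stage only has to correct what the earlier stages missed, which is what drives the rapid convergence claimed above.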
Simplified dynamic models of grass field ecosystem
NASA Astrophysics Data System (ADS)
Zeng, Qingcun; Zeng, Xiaodong; Lu, Peisheng
1994-12-01
Some simplified dynamic models of the grass field ecosystem are developed and investigated. The most simplified one consists of two variables, living grass biomass and soil wetness. Analysis of these models shows that only a desert regime without grasses exists if the precipitation p is less than a critical value p_c; the grass biomass depends continuously on p if the interaction between grass biomass and soil wetness is weak, but strong interaction results in a bifurcation of the grass biomass in the vicinity of p_c: the grass biomass is rich for p > p_c, but the system desertifies for p < p_c.
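A toy two-variable system in the spirit of the model described (biomass and soil wetness, with all coefficients invented for illustration) reproduces the qualitative picture of a critical precipitation below which only the desert state survives:

```python
import numpy as np

def simulate(p, T=2000.0, dt=0.01):
    """Toy grass-soil system (illustrative coefficients, not the paper's):
    dB/dt = B * (a*w - m)      growth when soil wetness exceeds m/a
    dw/dt = p - c*w - e*B*w    precipitation vs. evaporation and plant uptake
    """
    a, m, c, e = 1.0, 0.5, 1.0, 0.5
    B, w = 0.1, 0.1
    for _ in range(int(T / dt)):
        dB = B * (a * w - m)
        dw = p - c * w - e * B * w
        B = max(B + dt * dB, 0.0)
        w = max(w + dt * dw, 0.0)
    return B, w

# With these coefficients the critical precipitation is p_c = m*c/a = 0.5:
# below it only the desert state B = 0 is stable; above it grass persists.
b_desert = simulate(0.3)[0]
b_grass = simulate(0.8)[0]
print(b_desert, b_grass)
```

The interior equilibrium here is w* = m/a and B* = (p − c·m/a)/(e·m/a); stronger feedback between biomass and wetness is what produces the discontinuous (bifurcating) dependence on p discussed in the abstract.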
NASA Astrophysics Data System (ADS)
Bruton, Christopher Patrick
Earthquakes and seismicity have long been used to monitor volcanoes. In addition to the time, location, and magnitude of an earthquake, the characteristics of the waveform itself are important. For example, low-frequency or hybrid type events could be generated by magma rising toward the surface. A rockfall event could indicate a growing lava dome. Classification of earthquake waveforms is thus a useful tool in volcano monitoring. A procedure to perform such classification automatically could flag certain event types immediately, instead of waiting for a human analyst's review. Inspired by speech recognition techniques, we have developed a procedure to classify earthquake waveforms using artificial neural networks. A neural network can be "trained" with an existing set of input and desired output data; in this case, we use a set of earthquake waveforms (input) that has been classified by a human analyst (desired output). After training the neural network, new sets of waveforms can be classified automatically as they are presented. Our procedure uses waveforms from multiple stations, making it robust to seismic network changes and outages. The use of a dynamic time-delay neural network allows waveforms to be presented without precise alignment in time, and thus could be applied to continuous data or to seismic events without clear start and end times. We have evaluated several different training algorithms and neural network structures to determine their effects on classification performance. We apply this procedure to earthquakes recorded at Mount Spurr and Katmai in Alaska, and Uturuncu Volcano in Bolivia. The procedure can successfully distinguish between slab and volcanic events at Uturuncu, between events from four different volcanoes in the Katmai region, and between volcano-tectonic and long-period events at Spurr. Average recall and overall accuracy were greater than 80% in all three cases.
Recovery of Dynamics and Function in Spiking Neural Networks with Closed-Loop Control
Vlachos, Ioannis; Deniz, Taşkin; Aertsen, Ad; Kumar, Arvind
2016-01-01
There is a growing interest in developing novel brain stimulation methods to control disease-related aberrant neural activity and to address basic neuroscience questions. Conventional methods for manipulating brain activity rely on open-loop approaches that usually lead to excessive stimulation and, crucially, do not restore the original computations performed by the network. Thus, they are often accompanied by undesired side-effects. Here, we introduce delayed feedback control (DFC), a conceptually simple but effective method, to control pathological oscillations in spiking neural networks (SNNs). Using mathematical analysis and numerical simulations we show that DFC can restore a wide range of aberrant network dynamics either by suppressing or enhancing synchronous irregular activity. Importantly, DFC, besides steering the system back to a healthy state, also recovers the computations performed by the underlying network. Finally, using our theory we identify the role of single neuron and synapse properties in determining the stability of the closed-loop system. PMID:26829673
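The flavor of delayed feedback control can be illustrated on a Stuart-Landau oscillator (the normal form of an oscillatory instability, standing in here for pathological network oscillations; all parameters are invented, not taken from the paper). A difference-feedback term K·(z(t−τ) − z(t)) with τ near half the oscillation period acts like extra damping and quenches the limit cycle:

```python
import numpy as np

def stuart_landau(K, T=60.0, dt=0.001, lam=0.1, omega=2*np.pi, tau=0.5):
    """Stuart-Landau oscillator dz/dt = (lam + i*omega)z - |z|^2 z
    with delayed difference feedback K*(z(t-tau) - z(t))."""
    n_delay = int(round(tau / dt))
    buf = np.full(n_delay, 0.2 + 0.0j)       # constant pre-history z(t<0)
    z, idx = 0.2 + 0.0j, 0
    amps = []
    for _ in range(int(T / dt)):
        z_delayed = buf[idx]                 # z(t - tau) from the ring buffer
        dz = (lam + 1j*omega)*z - (abs(z)**2)*z + K*(z_delayed - z)
        buf[idx] = z                         # store z(t) for use at t + tau
        idx = (idx + 1) % n_delay
        z = z + dt * dz
        amps.append(abs(z))
    return float(np.mean(amps[-5000:]))      # late-time oscillation amplitude

amp_off = stuart_landau(K=0.0)   # uncontrolled: settles on the limit cycle
amp_on = stuart_landau(K=0.5)    # DFC with tau = half period: quenched
print(amp_off, amp_on)
```

Because z(t−τ) ≈ −z(t) at half a period, the feedback effectively subtracts 2K·z, pushing the growth rate negative; the paper's analysis addresses the much harder questions of spiking networks, computation recovery, and closed-loop stability.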
The Emergent Executive: A Dynamic Field Theory of the Development of Executive Function
Buss, Aaron T.; Spencer, John P.
2015-01-01
A dynamic neural field (DNF) model is presented which provides a process-based account of behavior and developmental change in a key task used to probe the early development of executive function—the Dimensional Change Card Sort (DCCS) task. In the DCCS, children must flexibly switch from sorting cards either by shape or color to sorting by the other dimension. Typically, 3-year-olds, but not 4-year-olds, lack the flexibility to do so and perseverate on the first set of rules when instructed to switch. In the DNF model, rule-use and behavioral flexibility come about through a form of dimensional attention which modulates activity within different cortical fields tuned to specific feature dimensions. In particular, we capture developmental change by increasing the strength of excitatory and inhibitory neural interactions in the dimensional attention system as well as refining the connectivity between this system and the feature-specific cortical fields. Note that although this enables the model to effectively switch tasks, the dimensional attention system does not ‘know’ the details of task-specific performance. Rather, correct performance emerges as a property of system-wide neural interactions. We show how this captures children's behavior in quantitative detail across 12 versions of the DCCS task. Moreover, we successfully test a set of novel predictions with 3-year-old children from a version of the task not explained by other theories. PMID:24818836
Optimal system size for complex dynamics in random neural networks near criticality
Wainrib, Gilles; García del Molino, Luis Carlos
2013-12-15
In this article, we consider a model of dynamical agents coupled through a random connectivity matrix, as introduced by Sompolinsky et al. [Phys. Rev. Lett. 61(3), 259–262 (1988)] in the context of random neural networks. When system size is infinite, it is known that increasing the disorder parameter induces a phase transition leading to chaotic dynamics. We observe and investigate here a novel phenomenon in the sub-critical regime for finite size systems: the probability of observing complex dynamics is maximal for an intermediate system size when the disorder is close enough to criticality. We give a more general explanation of this type of system size resonance in the framework of extreme values theory for eigenvalues of random matrices.
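The transition itself is easy to check numerically: the linearized dynamics dx/dt = −x + J·φ(x) around the origin destabilizes when the spectral radius of J exceeds 1, and by the circular law that radius concentrates at the disorder parameter g. This sketch verifies the two certain regimes; the paper's finite-size resonance lives in the fluctuation-dominated window near g = 1 (parameters here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def crossing_fraction(g, N=200, trials=50):
    """Fraction of random connectivity matrices (entries N(0, g^2/N))
    whose spectral radius exceeds 1, i.e. whose linearized dynamics
    around the origin is unstable."""
    count = 0
    for _ in range(trials):
        J = rng.normal(scale=g / np.sqrt(N), size=(N, N))
        if np.max(np.abs(np.linalg.eigvals(J))) > 1.0:
            count += 1
    return count / trials

# Well below / above g = 1 the outcome is essentially certain; near g = 1,
# finite-size eigenvalue fluctuations decide, which is where the
# system-size resonance studied in the paper appears.
p_sub = crossing_fraction(0.5)
p_super = crossing_fraction(1.5)
print(p_sub, p_super)
```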
A Neural Network Model to Learn Multiple Tasks under Dynamic Environments
NASA Astrophysics Data System (ADS)
Tsumori, Kenji; Ozawa, Seiichi
When environments change dynamically for agents, the knowledge acquired in one environment might be useless in the future. In such dynamic environments, agents should be able not only to acquire new knowledge but also to modify old knowledge in learning. However, modifying all previously acquired knowledge is not efficient, because knowledge once acquired may be useful again when a similar environment reappears, and some knowledge can be shared among different environments. To learn efficiently in such environments, we propose a neural network model that consists of the following modules: a resource allocating network, long-term and short-term memory, and an environment change detector. We evaluate the model under a class of dynamic environments in which multiple function approximation tasks are given sequentially. The experimental results demonstrate that the proposed model possesses stable incremental learning, accurate environmental change detection, proper association and recall of old knowledge, and efficient knowledge transfer.
Nonlinear systems identification and control via dynamic multitime scales neural networks.
Fu, Zhi-Jun; Xie, Wen-Fang; Han, Xuan; Luo, Wei-Dong
2013-11-01
This paper deals with adaptive nonlinear identification and trajectory tracking via dynamic multilayer neural networks (NNs) with different timescales. Two NN identifiers are proposed for the identification of nonlinear systems whose dynamics include both fast and slow phenomena. The first NN identifier uses the output signals from the actual system for the system identification. In the second NN identifier, all the output signals from the nonlinear system are replaced with the state variables of the NNs. Online identification algorithms for the parameters of both NN identifiers are derived using Lyapunov functions and singular perturbation techniques. With the identified NN models, two indirect adaptive NN controllers are developed for nonlinear systems containing slow and fast dynamic processes. For both adaptive NN controllers, the trajectory errors are analyzed and the stability of the systems is proved. Simulation results show that the controller based on the second identifier performs better than that based on the first.
Neely, Kristina A; Coombes, Stephen A; Planetta, Peggy J; Vaillancourt, David E
2013-03-01
A central topic in sensorimotor neuroscience is the static-dynamic dichotomy that exists throughout the nervous system. Previous work examining motor unit synchronization reports that the activation strategy and timing of motor units differ for static and dynamic tasks. However, it remains unclear whether segregated or overlapping blood-oxygen-level-dependent (BOLD) activity exists in the brain for static and dynamic motor control. This study compared the neural circuits associated with the production of static force to those associated with the production of dynamic force pulses. To that end, healthy young adults (n = 17) completed static and dynamic precision grip force tasks during functional magnetic resonance imaging (fMRI). Both tasks activated core regions within the visuomotor network, including primary and sensory motor cortices, premotor cortices, multiple visual areas, putamen, and cerebellum. Static force was associated with unique activity in a right-lateralized cortical network including inferior parietal lobe, ventral premotor cortex, and dorsolateral prefrontal cortex. In contrast, dynamic force was associated with unique activity in left-lateralized and midline cortical regions, including supplementary motor area, superior parietal lobe, fusiform gyrus, and visual area V3. These findings provide the first neuroimaging evidence supporting a lateralized pattern of brain activity for the production of static and dynamic precision grip force.
Dynamics of polymers: A mean-field theory
Fredrickson, Glenn H.; Orland, Henri
2014-02-28
We derive a general mean-field theory of inhomogeneous polymer dynamics; a theory whose form has been speculated and widely applied, but not heretofore derived. Our approach involves a functional integral representation of a Martin-Siggia-Rose (MSR) type description of the exact many-chain dynamics. A saddle point approximation to the generating functional, involving conditions where the MSR action is stationary with respect to a collective density field ρ and a conjugate MSR response field ϕ, produces the desired dynamical mean-field theory. Besides clarifying the proper structure of mean-field theory out of equilibrium, our results have implications for numerical studies of polymer dynamics involving hybrid particle-field simulation techniques such as the single-chain in mean-field method.
Nonlinear dynamics analysis of a self-organizing recurrent neural network: chaos waning.
Eser, Jürgen; Zheng, Pengsheng; Triesch, Jochen
2014-01-01
Self-organization is thought to play an important role in structuring nervous systems. It frequently arises as a consequence of plasticity mechanisms in neural networks: connectivity determines network dynamics, which in turn feed back on network structure through various forms of plasticity. Recently, self-organizing recurrent neural network models (SORNs) have been shown to learn non-trivial structure in their inputs and to reproduce the experimentally observed statistics and fluctuations of synaptic connection strengths in cortex and hippocampus. However, the dynamics in these networks, and how they change as the network evolves, are still poorly understood. Here we investigate the degree of chaos in SORNs by studying how the networks' self-organization changes their response to small perturbations. We study the effect of perturbations to the excitatory-to-excitatory weight matrix on connection strengths and on unit activities. We find that the network dynamics, characterized by an estimate of the maximum Lyapunov exponent, becomes less chaotic during self-organization, developing into a regime where only a few perturbations become amplified. We also find that, due to the mixing of discrete and (quasi-)continuous variables in SORNs, small perturbations to the synaptic weights may become amplified only after a substantial delay, a phenomenon we propose to call deferred chaos.
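The chaos measure used here, an estimate of the maximum Lyapunov exponent from the growth of small perturbations, can be sketched with a Benettin-style two-trajectory method on a fixed random recurrent network (no SORN plasticity is modeled, and the gain values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

def max_lyapunov(g, N=100, steps=2000, d0=1e-8):
    """Benettin-style estimate of the maximal Lyapunov exponent of the
    random recurrent map x(t+1) = tanh(g * W @ x(t)): track a perturbed
    twin trajectory and renormalize its separation every step."""
    W = rng.normal(scale=1/np.sqrt(N), size=(N, N))
    x = rng.normal(size=N) * 0.5
    pert = rng.normal(size=N)
    y = x + pert * (d0 / np.linalg.norm(pert))
    acc = 0.0
    for _ in range(steps):
        x = np.tanh(g * W @ x)
        y = np.tanh(g * W @ y)
        d = np.linalg.norm(y - x)
        acc += np.log(d / d0)            # log of one-step separation growth
        y = x + (y - x) * (d0 / d)       # renormalize the separation
    return acc / steps

lam_stable = max_lyapunov(0.5)   # subcritical gain: exponent negative
lam_chaotic = max_lyapunov(2.0)  # supercritical gain: exponent positive
print(lam_stable, lam_chaotic)
```

A SORN would additionally change W and thresholds between steps; the paper's finding is that this self-organization drives the exponent estimate toward the less chaotic regime.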
NASA Astrophysics Data System (ADS)
Poza, Jesús; Gómez, Carlos; García, María; Corralejo, Rebeca; Fernández, Alberto; Hornero, Roberto
2014-04-01
Objective. Current diagnostic guidelines encourage further research for the development of novel Alzheimer's disease (AD) biomarkers, especially in its prodromal form (i.e. mild cognitive impairment, MCI). Magnetoencephalography (MEG) can provide essential information about AD brain dynamics; however, only a few studies have addressed the characterization of MEG in incipient AD. Approach. We analyzed MEG rhythms from 36 AD patients, 18 MCI subjects and 27 controls, introducing a new wavelet-based parameter to quantify their dynamical properties: the wavelet turbulence. Main results. Our results suggest that AD progression elicits statistically significant regional-dependent patterns of abnormalities in the neural activity (p < 0.05), including a progressive loss of irregularity, variability, symmetry and Gaussianity. Furthermore, the highest accuracies to discriminate AD and MCI subjects from controls were 79.4% and 68.9%, whereas, in the three-class setting, the accuracy reached 67.9%. Significance. Our findings provide an original description of several dynamical properties of neural activity in early AD and offer preliminary evidence that the proposed methodology is a promising tool for assessing brain changes at different stages of dementia.
Wide-field feedback neurons dynamically tune early visual processing.
Tuthill, John C; Nern, Aljoscha; Rubin, Gerald M; Reiser, Michael B
2014-05-21
An important strategy for efficient neural coding is to match the range of cellular responses to the distribution of relevant input signals. However, the structure and relevance of sensory signals depend on behavioral state. Here, we show that behavior modifies neural activity at the earliest stages of fly vision. We describe a class of wide-field neurons that provide feedback to the most peripheral layer of the Drosophila visual system, the lamina. Using in vivo patch-clamp electrophysiology, we found that lamina wide-field neurons respond to low-frequency luminance fluctuations. Recordings in flying flies revealed that the gain and frequency tuning of wide-field neurons change during flight, and that these effects are mimicked by the neuromodulator octopamine. Genetically silencing wide-field neurons increased behavioral responses to slow-motion stimuli. Together, these findings identify a cell type that is gated by behavior to enhance neural coding by subtracting low-frequency signals from the inputs to motion detection circuits. PMID:24853944
The temporal derivative of expected utility: a neural mechanism for dynamic decision-making.
Zhang, Xian; Hirsch, Joy
2013-01-15
Real world tasks involving moving targets, such as driving a vehicle, are performed based on continuous decisions thought to depend upon the temporal derivative of the expected utility (∂V/∂t), where the expected utility (V) is the effective value of a future reward. However, the neural mechanisms that underlie dynamic decision-making are not well understood. This study investigates human neural correlates of both V and ∂V/∂t using fMRI and a novel experimental paradigm based on a pursuit-evasion game optimized to isolate components of dynamic decision processes. Our behavioral data show that players of the pursuit-evasion game adopt an exponential discounting function, supporting expected utility theory. The continuous functions of V and ∂V/∂t were derived from the behavioral data and applied as regressors in the fMRI analysis, enabling hyper-temporal resolution (temporal resolution exceeding the sampling rate of image acquisition) by taking advantage of numerous trials that provide rich and independent manipulation of these variables. V and ∂V/∂t were each associated with distinct neural activity. Specifically, ∂V/∂t was associated with anterior and posterior cingulate cortices, superior parietal lobule, and ventral pallidum, whereas V was primarily associated with supplementary motor, pre- and postcentral gyri, cerebellum, and thalamus. The association between ∂V/∂t and brain regions previously related to decision-making is consistent with a primary role of the temporal derivative of expected utility in dynamic decision-making. PMID:22963852
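Under the exponential discounting the behavioral data supported, the two regressors can be written down directly. A small sketch (the reward magnitude, discount rate, and trial length are invented values, not the study's fitted parameters):

```python
import numpy as np

# Exponential discounting: the expected utility of a reward R that is
# t_remaining seconds away is V = R * exp(-k * t_remaining).
R, k = 1.0, 0.8
t = np.linspace(0.0, 5.0, 501)       # time within a trial (s)
t_remaining = 5.0 - t                # target reached at t = 5 s
V = R * np.exp(-k * t_remaining)     # expected utility regressor

# Its temporal derivative dV/dt is the second regressor; analytically it
# equals k * V here, since t_remaining shrinks at unit rate.
dVdt = np.gradient(V, t)
ok = np.allclose(dVdt, k * V, rtol=1e-2)
print(ok)
```

In a real analysis these traces would be resampled to the capture trajectory of each trial and convolved with a hemodynamic response before entering the fMRI design matrix.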
Wöstmann, Malte; Herrmann, Björn; Wilsch, Anna; Obleser, Jonas
2015-01-28
Speech comprehension in multitalker situations is a notorious real-life challenge, particularly for older listeners. Younger listeners exploit stimulus-inherent acoustic detail, but are they also actively predicting upcoming information? And how do older listeners deal with acoustic and predictive information? To understand the neural dynamics of listening difficulties and the corresponding listening strategies, we contrasted neural responses in the alpha-band (∼10 Hz) in younger (20-30 years, n = 18) and healthy older (60-70 years, n = 20) participants under changing task demands in a two-talker paradigm. Electroencephalograms were recorded while participants listened to two spoken digits against a distracting talker and decided whether the second digit was smaller or larger. Acoustic detail (temporal fine structure) and predictiveness (the degree to which the first digit predicted the second) varied orthogonally. Alpha power at widespread scalp sites decreased with increasing acoustic detail (during target digit presentation) but also with increasing predictiveness (in between target digits). For older compared with younger listeners, acoustic detail had a stronger impact on task performance and alpha power modulation. This suggests that alpha dynamics plays an important role in the changes in listening behavior that occur with age. Last, alpha power variations resulting from stimulus manipulations (of acoustic detail and predictiveness), as well as task-independent overall alpha power, were related to subjective listening effort. The present data show that alpha dynamics is a promising neural marker of individual difficulties as well as age-related changes in sensation, perception, and comprehension in complex communication situations.
McDermott, Timothy J.; Badura-Brack, Amy S.; Becker, Katherine M.; Ryan, Tara J.; Khanna, Maya M.; Heinrichs-Graham, Elizabeth; Wilson, Tony W.
2016-01-01
Background: Posttraumatic stress disorder (PTSD) is associated with executive functioning deficits, including disruptions in working memory. In this study, we examined the neural dynamics of working memory processing in veterans with PTSD and a matched healthy control sample using magnetoencephalography (MEG). Methods: Our sample of recent combat veterans with PTSD and demographically matched participants without PTSD completed a working memory task during a 306-sensor MEG recording. The MEG data were preprocessed and transformed into the time-frequency domain. Significant oscillatory brain responses were imaged using a beamforming approach to identify spatiotemporal dynamics. Results: Fifty-one men were included in our analyses: 27 combat veterans with PTSD and 24 controls. Across all participants, a dynamic wave of neural activity spread from posterior visual cortices to left frontotemporal regions during encoding, consistent with a verbal working memory task, and was sustained throughout maintenance. Differences related to PTSD emerged during early encoding, with patients exhibiting stronger α oscillatory responses than controls in the right inferior frontal gyrus (IFG). Differences spread to the right supramarginal and temporal cortices during later encoding where, along with the right IFG, they persisted throughout the maintenance period. Limitations: This study focused on men with combat-related PTSD using a verbal working memory task. Future studies should evaluate women and the impact of various traumatic experiences using diverse tasks. Conclusion: Posttraumatic stress disorder is associated with neurophysiological abnormalities during working memory encoding and maintenance. Veterans with PTSD engaged a bilateral network, including the inferior prefrontal cortices and supramarginal gyri. Right hemispheric neural activity likely reflects compensatory processing, as veterans with PTSD work to maintain accurate performance despite known cognitive deficits.
NASA Astrophysics Data System (ADS)
Yu, Yiqun; Koller, Josef; Jordanova, Vania K.; Zaharia, Sorin G.; Friedel, Reinhard W.; Morley, Steven K.; Chen, Yue; Baker, Daniel; Reeves, Geoffrey D.; Spence, Harlan E.
2014-03-01
We expanded our previous work on L* neural networks that used empirical magnetic field models as the underlying models by applying and extending our technique to drift shells calculated from a physics-based magnetic field model. While empirical magnetic field models represent an average, statistical magnetospheric state, the RAM-SCB model, a first-principles magnetically self-consistent code, computes magnetic fields based on fundamental equations of plasma physics. Unlike the previous L* neural networks that include McIlwain L and mirror point magnetic field as part of the inputs, the new L* neural network only requires solar wind conditions and the Dst index, allowing for an easier preparation of input parameters. This new neural network is compared against those previously trained networks and validated by the tracing method in the International Radiation Belt Environment Modeling (IRBEM) library. The accuracy of all L* neural networks with different underlying magnetic field models is evaluated by applying the electron phase space density (PSD)-matching technique derived from the Liouville's theorem to the Van Allen Probes observations. Results indicate that the uncertainty in the predicted L* is statistically (75%) below 0.7 with a median value mostly below 0.2 and the median absolute deviation around 0.15, regardless of the underlying magnetic field model. We found that such an uncertainty in the calculated L* value can shift the peak location of electron phase space density (PSD) profile by 0.2 RE radially but with its shape nearly preserved.
Sree Hari Rao, V; Phaneendra, Bh R.M.
1999-04-01
In this article, a model describing the activation dynamics of bidirectional associative memory (BAM) neural networks involving transmission delays is considered. The concept of BAM networks employed in this work improves on the earlier notions known in the literature and applies to a wider class of networks. Further, we introduce a new notion as a measure of restoring stability, termed a dead zone, and investigate the influence of dead zones on the global asymptotic stability of the equilibrium pattern. Existence and uniqueness of an equilibrium pattern are also established under fairly general and easily verifiable conditions.
Power system dynamic security enhancement using artificial neural networks and energy margin
Momoh, J.A.; Effiong, C.B.
1996-11-01
A framework for dynamic security enhancement based on area-wise preventive control is proposed. The power system is partitioned into areas for stability evaluation using the transient energy margin. Area vulnerability is evaluated based on the sensitivity of the energy margin with respect to controls in the given areas of the system. The areas that contribute significantly to instability are labeled critical or weak areas, and preventive control is applied in those areas. The final control application is achieved by using an artificial neural network (ANN) to compute the control inputs.
Context dependence of spectro-temporal receptive fields with implications for neural coding.
Eggermont, Jos J
2011-01-01
The spectro-temporal receptive field (STRF) is frequently used to characterize the linear frequency-time filter properties of the auditory system up to the neuron recorded from. STRFs are extremely stimulus dependent, reflecting the strong non-linearities in the auditory system. Changes in the STRF with stimulus type (tonal, noise-like, vocalizations), sound level and spectro-temporal sound density are reviewed here. Effects on STRF shape of task and attention are also briefly reviewed. Models to account for these changes, potential improvements to STRF analysis, and implications for neural coding are discussed. PMID:20123121
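The linear STRF discussed above is classically estimated by reverse correlation (spike-triggered averaging) against a white-noise stimulus. A minimal sketch with a synthetic linear-nonlinear-Poisson neuron; all dimensions, filter shapes, and rates here are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_freq, n_lag, n_t = 8, 10, 20000

# Synthetic stimulus spectrogram: frequency channels x time, white noise.
stim = rng.standard_normal((n_freq, n_t))

# Ground-truth STRF: one excitatory subfield, one inhibitory subfield.
true_strf = np.zeros((n_freq, n_lag))
true_strf[3, 2] = 1.0   # excitation at frequency 3, lag 2
true_strf[5, 4] = -0.5  # inhibition at frequency 5, lag 4

# Linear-nonlinear-Poisson neuron: filter, half-rectify, draw spikes.
drive = np.zeros(n_t)
for t in range(n_lag, n_t):
    drive[t] = np.sum(true_strf * stim[:, t - n_lag + 1:t + 1][:, ::-1])
spikes = rng.poisson(0.5 * np.clip(drive, 0, None))

# Reverse correlation: spike-weighted average of preceding stimulus slices.
sta = np.zeros((n_freq, n_lag))
for t in range(n_lag, n_t):
    if spikes[t] > 0:
        sta += spikes[t] * stim[:, t - n_lag + 1:t + 1][:, ::-1]
sta /= spikes[n_lag:].sum()
```

For a Gaussian stimulus the spike-triggered average is proportional to the linear filter, so both subfields are recovered; the stimulus dependence reviewed in the abstract arises precisely when these Gaussian/linearity assumptions fail.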
Recognition with self-control in neural networks
NASA Astrophysics Data System (ADS)
Lewenstein, Maciej; Nowak, Andrzej
1989-10-01
We present a theory of fully connected neural networks that incorporates mechanisms of dynamical self-control of recognition process. Using a functional integral technique, we formulate mean-field dynamics for such systems.
An energy-efficient, dynamic voltage scaling neural stimulator for a proprioceptive prosthesis.
Williams, Ian; Constandinou, Timothy G
2013-04-01
This paper presents an 8 channel energy-efficient neural stimulator for generating charge-balanced asymmetric pulses. Power consumption is reduced by implementing a fully-integrated DC-DC converter that uses a reconfigurable switched capacitor topology to provide 4 output voltages for Dynamic Voltage Scaling (DVS). DC conversion efficiencies of up to 82% are achieved using integrated capacitances of under 1 nF, and the DVS approach offers power savings of up to 50% compared to the front end of a typical current-controlled neural stimulator. A novel charge balancing method is implemented which has a low level of accuracy on a single pulse and a much higher accuracy over a series of pulses. The method is robust to process and component variation and does not require any initial or ongoing calibration. Measured results indicate that the charge imbalance is typically between 0.05% and 0.15% of the charge injected for a series of pulses. Ex vivo experiments demonstrate the viability of using this circuit for neural activation. The circuit has been implemented in a commercially-available 0.18 μm HV CMOS technology and occupies a core die area of approximately 2.8 mm² for an 8 channel implementation. PMID:23853295
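The abstract does not specify the balancing circuit, but the general principle it states (loose accuracy on any single pulse, high accuracy over a series) can be modeled numerically: fold the measured residual charge into the next anodic phase. A toy sketch with illustrative numbers, not the paper's method:

```python
# Each biphasic pulse injects cathodic charge q and should return the
# same charge anodically, but a fixed 1% gain error leaves a residual.
# Uncorrected, residuals accumulate linearly with pulse count; with the
# measured residual folded into the next anodic target, the running
# imbalance stays at the single-pulse level. All values illustrative.
q = 100e-9            # nominal charge per phase (coulombs)
ge = 0.01             # anodic gain error (1%)
n_pulses = 1000

cum_u = 0.0           # running imbalance, no correction
cum_c = 0.0           # running imbalance, residual-feedback correction
for _ in range(n_pulses):
    cum_u += q - q * (1 + ge)       # same residual added every pulse
    cum_c = -(cum_c + q) * ge       # anodic target = q + measured cum_c,
                                    # delivered with the same gain error
```

The corrected imbalance converges to roughly one pulse's worth of error, while the uncorrected one grows with the number of pulses, mirroring the per-pulse versus per-series accuracy distinction in the abstract.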
Adaptive Neural Control of Pure-Feedback Nonlinear Time-Delay Systems via Dynamic Surface Technique.
Min Wang; Xiaoping Liu; Peng Shi
2011-12-01
This paper is concerned with the robust stabilization problem for a class of nonaffine pure-feedback systems with unknown time-delay functions and perturbed uncertainties. Novel continuous packaged functions are introduced in advance to remove unknown nonlinear terms deduced from the perturbed uncertainties and unknown time-delay functions, which avoids having to approximate the functions containing the control law by radial basis function (RBF) neural networks. This technique, combining the implicit function and mean value theorems, overcomes the difficulty in controlling nonaffine pure-feedback systems. Dynamic surface control (DSC) is used to avoid "the explosion of complexity" in the backstepping design. Design difficulties arising from the unknown time-delay functions are overcome using the function separation technique, Lyapunov-Krasovskii functionals, and the desirable properties of hyperbolic tangent functions. RBF neural networks are employed to approximate the desired virtual controls and the desired practical control. Under the proposed adaptive neural DSC, the number of adaptive parameters required is reduced significantly, and semiglobal uniform ultimate boundedness of all signals in the closed-loop system is guaranteed. Simulation studies demonstrate the effectiveness of the proposed design scheme.
Unified description of the dynamics of quintessential scalar fields
Ureña-López, L. Arturo
2012-03-01
Using the dynamical system approach, we describe the general dynamics of cosmological scalar fields in terms of critical points and heteroclinic lines. It is found that critical points describe the initial and final states of the scalar field dynamics, but that heteroclinic lines give a more complete description of the evolution in between the critical points. In particular, the heteroclinic line that departs from the (saddle) critical point of perfect fluid-domination is the representative path in phase space of quintessence fields that may be viable dark energy candidates. We also discuss the attractor properties of the heteroclinic lines, and their importance for the description of thawing and freezing fields.
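The dynamical-system formulation referred to here is commonly written in expansion-normalised variables. A sketch for an exponential potential in a pressureless-matter background, using the standard two-variable system; the value of λ and the initial data are illustrative:

```python
import numpy as np

# Standard expansion-normalised variables for a quintessence field with
# exponential potential V ∝ exp(-λφ) in a matter (w = 0) background.
# The trajectory leaves the matter-dominated saddle at (0, 0) and flows
# along a heteroclinic line to the scalar-field-dominated attractor.
lam = 1.0

def rhs(x, y):
    s = 1.5 * (1.0 + x * x - y * y)          # (3/2)(1 + x^2 - y^2)
    dx = -3.0 * x + np.sqrt(6.0) / 2.0 * lam * y * y + x * s
    dy = -np.sqrt(6.0) / 2.0 * lam * x * y + y * s
    return dx, dy

x, y = 1e-4, 1e-4        # start near the matter-dominated saddle
dN = 1e-3                # step in e-folds N = ln a
for _ in range(40000):
    dx, dy = rhs(x, y)
    x, y = x + dN * dx, y + dN * dy

# Attractor for λ² < 3: x = λ/√6, y = √(1 − λ²/6), so Ω_φ = x² + y² = 1.
```

The late-time point reached numerically matches the scalar-field-dominated critical point, while the path taken from the saddle is the heteroclinic line the abstract highlights.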
Impaired neural processing of dynamic faces in left-onset Parkinson's disease.
Garrido-Vásquez, Patricia; Pell, Marc D; Paulmann, Silke; Sehm, Bernhard; Kotz, Sonja A
2016-02-01
Parkinson's disease (PD) affects patients beyond the motor domain. According to previous evidence, one mechanism that may be impaired in the disease is face processing. However, few studies have investigated this process at the neural level in PD. Moreover, research using dynamic facial displays rather than static pictures is scarce, but highly warranted due to the higher ecological validity of dynamic stimuli. In the present study we aimed to investigate how PD patients process emotional and non-emotional dynamic face stimuli at the neural level using event-related potentials. Since the literature has revealed a predominantly right-lateralized network for dynamic face processing, we divided the group into patients with left (LPD) and right (RPD) motor symptom onset (right versus left cerebral hemisphere predominantly affected, respectively). Participants watched short video clips of happy, angry, and neutral expressions and engaged in a shallow gender decision task in order to avoid confounds of task difficulty in the data. In line with our expectations, the LPD group showed significant face processing deficits compared to controls. While there were no group differences in early, sensory-driven processing (fronto-central N1 and posterior P1), the vertex positive potential, which is considered the fronto-central counterpart of the face-specific posterior N170 component, had a reduced amplitude and delayed latency in the LPD group. This may indicate disturbances of structural face processing in LPD. Furthermore, the effect was independent of the emotional content of the videos. In contrast, static facial identity recognition performance in LPD was not significantly different from controls, and comprehensive testing of cognitive functions did not reveal any deficits in this group. We therefore conclude that PD, and more specifically the predominant right-hemispheric involvement in left-onset PD, is associated with impaired processing of dynamic facial expressions.
Bouchard, Kristofer E.; Brainard, Michael S.
2016-01-01
Predicting future events is a critical computation for both perception and behavior. Despite the essential nature of this computation, there are few studies demonstrating neural activity that predicts specific events in learned, probabilistic sequences. Here, we test the hypotheses that the dynamics of internally generated neural activity are predictive of future events and are structured by the learned temporal–sequential statistics of those events. We recorded neural activity in Bengalese finch sensory-motor area HVC in response to playback of sequences from individuals’ songs, and examined the neural activity that continued after stimulus offset. We found that the strength of response to a syllable in the sequence depended on the delay at which that syllable was played, with a maximal response when the delay matched the intersyllable gap normally present for that specific syllable during song production. Furthermore, poststimulus neural activity induced by sequence playback resembled the neural response to the next syllable in the sequence when that syllable was predictable, but not when the next syllable was uncertain. Our results demonstrate that the dynamics of internally generated HVC neural activity are predictive of the learned temporal–sequential structure of produced song and that the strength of this prediction is modulated by uncertainty. PMID:27506786
Gas dynamics in strong centrifugal fields
Bogovalov, S.V.; Kislov, V.A.; Tronin, I.V.
2015-03-10
Dynamics of waves generated by scoops in gas centrifuges (GC) for isotope separation is considered. The centrifugal acceleration in the GC reaches values of the order of 10^6 g. The centrifugal and Coriolis forces essentially modify the conventional sound waves. Three families of waves with different polarisation and dispersion exist under these conditions. Dynamics of the flow in the model GC Iguasu is investigated numerically. The results of the numerical modelling of the wave dynamics are compared with the analytical predictions. A new phenomenon of resonances in the GC is found. The resonances occur for waves polarized along the rotational axis, which have the smallest damping due to viscosity.
Quantum analysis applied to thermo field dynamics on dissipative systems
Hashizume, Yoichiro; Okamura, Soichiro; Suzuki, Masuo
2015-03-10
Thermo field dynamics is one of the formulations useful for treating statistical mechanics in the scheme of field theory. In the present study, we discuss dissipative thermo field dynamics of quantum damped harmonic oscillators. To treat the effective renormalization of quantum dissipation, we use the Suzuki-Takano approximation. Finally, we derive a dissipative von Neumann equation in the Lindblad form. In the present treatment, we can easily obtain the initial damping shown previously by Kubo.
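A Lindblad-form master equation for a damped harmonic oscillator, of the kind derived above, can be integrated directly in a truncated Fock space. A hedged sketch at zero temperature; the truncation size, damping rate, and initial state are illustrative:

```python
import numpy as np

# Zero-temperature Lindblad master equation for a damped oscillator:
#   dρ/dt = -i[H, ρ] + γ (a ρ a† − ½{a†a, ρ}),  with ħ = ω = 1.
N = 12                                        # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), k=1)    # annihilation operator
ad = a.conj().T
n_op = ad @ a
H = n_op + 0.5 * np.eye(N)
gamma, dt, steps = 0.5, 0.001, 8000

rho = np.zeros((N, N), dtype=complex)
rho[3, 3] = 1.0                               # start in Fock state |3>

for _ in range(steps):
    comm = -1j * (H @ rho - rho @ H)
    diss = gamma * (a @ rho @ ad - 0.5 * (n_op @ rho + rho @ n_op))
    rho = rho + dt * (comm + diss)

n_mean = np.real(np.trace(n_op @ rho))        # decays as 3·exp(−γt)
```

The dissipator preserves the trace while draining excitation, so the mean occupation relaxes exponentially at rate γ, the elementary damping behavior the field-theoretic treatment reproduces.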
Zhou Weihang; Chen Zhanghai; Zhang Bo; Yu, C. H.; Lu Wei; Shen, S. C.
2010-07-09
We report magnetic field control of the quantum chaotic dynamics of hydrogen analogues in an anisotropic solid state environment. The chaoticity of the system dynamics was quantified by means of energy level statistics. We analyzed the magnetic field dependence of the statistical distribution of the impurity energy levels and found a smooth transition between the Poisson limit and the Wigner limit, i.e., transition between regular Poisson and fully chaotic Wigner dynamics. The effect of the crystal field anisotropy on the quantum chaotic dynamics, which manifests itself in characteristic transitions between regularity and chaos for different field orientations, was demonstrated.
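The Poisson-to-Wigner transition in level statistics used above can be illustrated without any spectral unfolding via the consecutive-gap-ratio statistic. A sketch comparing uncorrelated levels with a GOE random-matrix spectrum; matrix and sample sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_gap_ratio(levels):
    """Mean of r_n = min(s_n, s_{n+1}) / max(s_n, s_{n+1}) over
    consecutive level spacings s_n; insensitive to the local density,
    so no unfolding of the spectrum is required."""
    s = np.diff(np.sort(levels))
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return r.mean()

# Regular dynamics: uncorrelated (Poisson) levels, <r> = 2 ln 2 − 1 ≈ 0.386.
poisson_r = mean_gap_ratio(np.cumsum(rng.exponential(size=20000)))

# Fully chaotic dynamics: GOE spectrum (real symmetric random matrix);
# level repulsion raises the mean ratio to ≈ 0.53.
M = rng.standard_normal((2000, 2000))
goe_r = mean_gap_ratio(np.linalg.eigvalsh(M + M.T))
```

Intermediate values of the statistic, between the Poisson and GOE limits, signal exactly the kind of smooth regular-to-chaotic transition the abstract reports as the magnetic field is varied.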
Chapin, Heather; Jantzen, Kelly; Kelso, J A Scott; Steinberg, Fred; Large, Edward
2010-12-16
Apart from its natural relevance to cognition, music provides a window into the intimate relationships between production, perception, experience, and emotion. Here, emotional responses and neural activity were observed as they evolved together with stimulus parameters over several minutes. Participants listened to a skilled music performance that included the natural fluctuations in timing and sound intensity that musicians use to evoke emotional responses. A mechanical performance of the same piece served as a control. Before and after fMRI scanning, participants reported real-time emotional responses on a 2-dimensional rating scale (arousal and valence) as they listened to each performance. During fMRI scanning, participants listened without reporting emotional responses. Limbic and paralimbic brain areas responded to the expressive dynamics of human music performance, and both emotion and reward related activations during music listening were dependent upon musical training. Moreover, dynamic changes in timing predicted ratings of emotional arousal, as well as real-time changes in neural activity. BOLD signal changes correlated with expressive timing fluctuations in cortical and subcortical motor areas consistent with pulse perception, and in a network consistent with the human mirror neuron system. These findings show that expressive music performance evokes emotion and reward related neural activations, and that music's affective impact on the brains of listeners is altered by musical training. Our observations are consistent with the idea that music performance evokes an emotional response through a form of empathy that is based, at least in part, on the perception of movement and on violations of pulse-based temporal expectancies.
Herz, A; Sulzer, B; Kühn, R; van Hemmen, J L
1989-01-01
According to Hebb's postulate for learning, information presented to a neural net during a learning session is stored in synaptic efficacies. Long-term potentiation occurs only if the postsynaptic neuron becomes active in a time window set up by the presynaptic one. We carefully interpret and mathematically implement the Hebb rule so as to handle both stationary and dynamic objects such as single patterns and cycles. Since the natural dynamics contains a rather broad distribution of delays, the key idea is to incorporate these delays in the learning session. As theory and numerical simulations show, the resulting procedure is surprisingly robust and faithful. It also turns out that pure Hebbian learning is learning by selection: the network produces synaptic representations that are selected according to their resonance with the input percepts.
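The core idea of incorporating delays into the Hebb rule can be sketched with an asymmetric Hebbian term that links each pattern to its successor at the synaptic delay. A minimal illustration with a single one-step delay; network size and pattern count are chosen for simplicity, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
N, P = 400, 4                       # neurons, patterns in the cycle

# Random ±1 patterns forming a cycle xi[0] -> xi[1] -> ... -> xi[0].
xi = rng.choice([-1.0, 1.0], size=(P, N))

# Delayed Hebb rule: potentiate synapses whose presynaptic activity
# (previous pattern) precedes the postsynaptic activity (next pattern)
# by exactly the transmission delay, giving an asymmetric weight matrix.
W = sum(np.outer(xi[(mu + 1) % P], xi[mu]) for mu in range(P)) / N

# Parallel dynamics s(t+1) = sign(W s(t)) then replays the stored cycle.
s = xi[0].copy()
replay = []
for _ in range(2 * P):
    s = np.sign(W @ s)
    replay.append(s.copy())

# Overlap of the state at step t with the expected pattern in the cycle.
overlaps = [replay[t] @ xi[(t + 1) % P] / N for t in range(2 * P)]
```

Because the asymmetric term maps each pattern onto its successor, the retrieval dynamics cycles through the stored sequence, which is the delayed-Hebbian storage of dynamic objects described above.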
Dynamics of recurrent neural networks with delayed unreliable synapses: metastable clustering.
Friedrich, Johannes; Kinzel, Wolfgang
2009-08-01
The influence of unreliable synapses on the dynamic properties of a neural network is investigated for a homogeneous integrate-and-fire network with delayed inhibitory synapses. Numerical and analytical calculations show that the network relaxes to a state with dynamic clusters of identical size which permanently exchange neurons. We present analytical results for the number of clusters and their distribution of firing times which are determined by the synaptic properties. The number of possible configurations increases exponentially with network size. In addition to states with a maximal number of clusters, metastable ones with a smaller number of clusters survive for an exponentially large time scale. An externally excited cluster survives for some time, too, thus clusters may encode information.
Stošovic, Miona V Andrejevic; Litovski, Vanco B
2013-11-01
Simulation is indispensable during the design of many biomedical prostheses that are based on fundamental electrical and electronic actions. However, simulation necessitates the use of adequate models. The main difficulties related to the modeling of such devices are their nonlinearity and dynamic behavior. Here we report the application of recurrent artificial neural networks for modeling of a nonlinear, two-terminal circuit equivalent to a specific implantable hearing device. The method is general in the sense that any nonlinear dynamic two-terminal device or circuit may be modeled in the same way. The model generated was successfully used for simulation and optimization of a driver (operational amplifier)-transducer ensemble. This confirms our claim that in addition to the proper design and optimization of the hearing actuator, optimization in the electronic domain, at the electronic driver circuit-to-actuator interface, should take place in order to achieve best performance of the complete hearing aid.
Investigation of neural-net based control strategies for improved power system dynamic performance
Sobajic, D.J.
1995-12-31
The ability to accurately predict the behavior of a dynamic system is of essential importance in the monitoring and control of complex processes. In this regard, recent advances in neural-net based system identification represent a significant step toward the development and design of a new generation of control tools for increased system performance and reliability. The enabling functionality is that of accurately representing a model of a nonlinear and nonstationary dynamic system. This functionality provides valuable new opportunities, including: (1) the ability to predict future system behavior on the basis of actual system observations; (2) on-line evaluation and display of system performance and design of early warning systems; and (3) controller optimization for improved system performance. In this presentation, we discuss the issues involved in the definition and design of learning control systems and their impact on power system control. Several numerical examples are provided for illustrative purposes.
NASA Astrophysics Data System (ADS)
Liu, Derong; Huang, Yuzhu; Wang, Ding; Wei, Qinglai
2013-09-01
In this paper, an observer-based optimal control scheme is developed for unknown nonlinear systems using an adaptive dynamic programming (ADP) algorithm. First, a neural-network (NN) observer is designed to estimate the system states. Then, based on the observed states, a neuro-controller is constructed via the ADP method to obtain the optimal control. In this design, two NN structures are used: a three-layer NN is used to construct the observer, which can be applied to systems with higher degrees of nonlinearity and without a priori knowledge of system dynamics, and a critic NN is employed to approximate the value function. The optimal control law is computed using the critic NN and the observer NN. Uniform ultimate boundedness of the closed-loop system is guaranteed. The actor, critic, and observer structures are all implemented in real time, continuously and simultaneously. Finally, simulation results are presented to demonstrate the effectiveness of the proposed control scheme.
Subthreshold Dynamics and Its Effect on Signal Transduction in a Neural System
NASA Astrophysics Data System (ADS)
Wang, Yuqing; Wang, Z.; Wang, Wei
1998-10-01
Subthreshold dynamics and its effect on signal transduction in a neural system are studied by using the Hindmarsh-Rose neuron model. Under a periodic stimulation, as the constant bias of the stimulus increases, the neuron exhibits subthreshold periodic and subthreshold chaotic responses, suprathreshold chaotic firing of spikes, and mode-locked firing. The phase diagram of the system is obtained. The dynamic behavior obtained is in agreement with experiments on the squid giant axon. In particular, the subthreshold periodic oscillatory state is related to a number of experimental results, such as those found in the neurons of the inferior olivary nucleus. More importantly, we also find that subthreshold chaotic responses play a role analogous to the internal deterministic noise, and can enhance weak signal transduction via a mechanism similar to stochastic resonance.
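The Hindmarsh-Rose model used above has a standard three-variable form that is straightforward to integrate. A minimal Euler sketch in the commonly used bursting regime; the parameter values are the textbook set, not necessarily those of the paper:

```python
import numpy as np

# Hindmarsh–Rose neuron:
#   x' = y − a x³ + b x² − z + I      (membrane potential)
#   y' = c − d x² − y                 (fast recovery)
#   z' = r (s (x − x_R) − z)          (slow adaptation)
# The constant bias I plays the role of the stimulus bias in the abstract.
a, b, c, d = 1.0, 3.0, 1.0, 5.0
r, s, x_R, I = 0.006, 4.0, -1.6, 3.0

x, y, z = -1.0, 0.0, 0.0
dt, steps = 0.01, 200000
xs = np.empty(steps)
for i in range(steps):
    dx = y - a * x**3 + b * x**2 - z + I
    dy = c - d * x**2 - y
    dz = r * (s * (x - x_R) - z)
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    xs[i] = x
```

Sweeping I (and adding a periodic term) is how one would reproduce the progression through subthreshold oscillations, chaotic firing, and mode-locked spiking surveyed in the abstract; the slow variable z is what separates the subthreshold and spiking timescales.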
Complex dynamics of a delayed discrete neural network of two nonidentical neurons
Chen, Yuanlong; Huang, Tingwen; Huang, Yu
2014-03-15
In this paper, we discover that a delayed discrete Hopfield neural network of two nonidentical neurons, both with and without self-connections, can demonstrate chaotic behaviors. To this end, we first transform the model, in a novel way, into an equivalent system which has some interesting properties. Then, we identify the chaotic invariant set for this system and show that the dynamics of this system within this set is topologically conjugate to the dynamics of the full shift map with two symbols. This confirms chaos in the sense of Devaney. Our main results generalize the relevant results of Huang and Zou [J. Nonlinear Sci. 15, 291–303 (2005)], Kaslik and Balint [J. Nonlinear Sci. 18, 415–432 (2008)], and Chen et al. [Sci. China Math. 56(9), 1869–1878 (2013)]. We also give some numerical simulations to verify our theoretical results.
Interregional neural synchrony has similar dynamics during spontaneous and stimulus-driven states
Ghuman, Avniel Singh; van den Honert, Rebecca N.; Martin, Alex
2013-01-01
Assessing the correspondence between spontaneous and stimulus-driven neural activity can reveal intrinsic properties of the brain. Recent studies have demonstrated that many large-scale functional networks have a similar spatial structure during spontaneous and stimulus-driven states. However, it is unknown whether the temporal dynamics of network activity are also similar across these states. Here we demonstrate that, in the human brain, interhemispheric coupling of somatosensory regions is preferentially synchronized in the high beta frequency band (~20–30 Hz) in response to somatosensory stimulation and interhemispheric coupling of auditory cortices is preferentially synchronized in the alpha frequency band (~7–12 Hz) in response to auditory stimulation. Critically, these stimulus-driven synchronization frequencies were also selective to these interregional interactions during spontaneous activity. This similarity between stimulus-driven and spontaneous states suggests that frequency-specific oscillatory dynamics are intrinsic to the interactions between the nodes of these brain networks. PMID:23512004
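Frequency-specific interregional synchrony of the kind reported above is commonly quantified with magnitude-squared coherence. A sketch with two synthetic signals sharing an alpha-band rhythm; the frequencies, SNR, and spectral settings are illustrative:

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(3)
fs, T = 250.0, 120.0                     # sampling rate (Hz), duration (s)
t = np.arange(0.0, T, 1.0 / fs)

# Two "regions" sharing a 10 Hz (alpha-band) component plus private noise.
shared = np.sin(2 * np.pi * 10.0 * t)
x = shared + rng.standard_normal(t.size)
y = shared + rng.standard_normal(t.size)

# Welch-averaged magnitude-squared coherence between the two signals.
f, coh = coherence(x, y, fs=fs, nperseg=1024)
alpha = coh[(f >= 8) & (f <= 12)].max()  # high where the shared rhythm lives
beta = coh[(f >= 20) & (f <= 30)].max()  # near the noise floor elsewhere
```

Comparing the coherence spectrum between stimulus-driven and rest recordings, band by band, is the kind of analysis that would reveal the state-invariant synchronization frequencies described in the abstract.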
Traveling pulses in a stochastic neural field model of direction selectivity.
Bressloff, Paul C; Wilkerson, Jeremy
2012-01-01
We analyze the effects of extrinsic noise on traveling pulses in a neural field model of direction selectivity. The model consists of a one-dimensional scalar neural field with an asymmetric weight distribution consisting of an offset Mexican hat function. We first show how, in the absence of any noise, the system supports spontaneously propagating traveling pulses that can lock to externally moving stimuli. Using a separation of time-scales and perturbation methods previously developed for stochastic reaction-diffusion equations, we then show how extrinsic noise in the activity variables leads to a diffusive-like displacement (wandering) of the wave from its uniformly translating position at long time-scales, and fluctuations in the wave profile around its instantaneous position at short time-scales. In the case of freely propagating pulses, the wandering is characterized by pure Brownian motion, whereas in the case of stimulus-locked pulses, it is given by an Ornstein-Uhlenbeck process. This establishes that stimulus-locked pulses are more robust to noise. PMID:23181018
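In the noise-free case, a scalar neural field with an offset (asymmetric) kernel and Heaviside firing rate can be simulated directly to see the spontaneously propagating pulse. A sketch with illustrative kernel, threshold, and grid choices, not the paper's exact parameters:

```python
import numpy as np

# 1-D neural field u_t = −u + w * H(u − θ), with the kernel w an offset
# Mexican hat: the rightward shift biases each active point to excite
# tissue to its right, so a localized bump travels in that direction.
dx, dt = 0.1, 0.05
x = np.arange(-40.0, 40.0, dx)
offset = 1.0
w = np.exp(-(x - offset) ** 2) - 0.25 * np.exp(-((x - offset) / 3.0) ** 2)
theta = 0.3

u = np.where(np.abs(x) < 2.0, 1.0, 0.0)   # localized initial bump
c0 = (x * u).sum() / u.sum()              # initial activity centroid
for _ in range(int(15.0 / dt)):
    fire = (u > theta).astype(float)      # Heaviside firing rate
    inp = np.convolve(fire, w, mode='same') * dx
    u = u + dt * (-u + inp)
c1 = (x * u).sum() / u.sum()              # centroid after t = 15
```

The centroid drifts steadily rightward, the deterministic traveling pulse; the paper's analysis then asks how extrinsic noise makes this position wander (Brownian motion for free pulses, Ornstein-Uhlenbeck for stimulus-locked ones).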
Laws of Large Numbers and Langevin Approximations for Stochastic Neural Field Equations
2013-01-01
In this study, we consider limit theorems for microscopic stochastic models of neural fields. We show that the Wilson–Cowan equation can be obtained as the limit in uniform convergence on compacts in probability for a sequence of microscopic models when the number of neuron populations distributed in space and the number of neurons per population tend to infinity. This result also allows one to obtain limits for qualitatively different stochastic convergence concepts, e.g., convergence in the mean. Further, we present a central limit theorem for the martingale part of the microscopic models which, suitably re-scaled, converges to a centred Gaussian process with independent increments. These two results provide the basis for presenting the neural field Langevin equation, a stochastic differential equation taking values in a Hilbert space, which is the infinite-dimensional analogue of the chemical Langevin equation in the present setting. On a technical level, we apply recently developed laws of large numbers and central limit theorems for piecewise deterministic processes taking values in Hilbert spaces to a master equation formulation of stochastic neuronal network models. These theorems are valid for processes taking values in Hilbert spaces and are thereby able to incorporate spatial structures of the underlying model. Mathematics Subject Classification (2000): 60F05, 60J25, 60J75, 92C20. PMID:23343328
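The macroscopic limit referred to here is the deterministic Wilson-Cowan rate equations. A minimal integration of the space-free two-population version; the couplings and external drives below are illustrative, not taken from the paper:

```python
import numpy as np

# Wilson–Cowan rate equations for an excitatory (E) and inhibitory (I)
# population; this deterministic system is the law-of-large-numbers
# limit of the microscopic stochastic models in the paper.
def S(x):
    return 1.0 / (1.0 + np.exp(-x))          # sigmoidal gain function

wEE, wEI, wIE, wII = 12.0, 10.0, 10.0, 2.0   # coupling strengths
P, Q = 1.0, -2.0                             # external drives
E, I = 0.1, 0.1
dt = 0.01
traj = []
for _ in range(20000):
    dE = -E + S(wEE * E - wEI * I + P)
    dI = -I + S(wIE * E - wII * I + Q)
    E, I = E + dt * dE, I + dt * dI
    traj.append(E)
traj = np.array(traj)
```

The activities remain confined to the unit interval, the invariant region of the deterministic flow; the paper's Langevin equation then adds the appropriately scaled Gaussian fluctuations around such trajectories.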
On parsing the neural code in the prefrontal cortex of primates using principal dynamic modes.
Marmarelis, V Z; Shin, D C; Song, D; Hampson, R E; Deadwyler, S A; Berger, T W
2014-06-01
Nonlinear modeling of multi-input multi-output (MIMO) neuronal systems using Principal Dynamic Modes (PDMs) provides a novel method for analyzing the functional connectivity between neuronal groups. This paper presents the PDM-based modeling methodology and initial results from actual multi-unit recordings in the prefrontal cortex of non-human primates. We used the PDMs to analyze the dynamic transformations of spike train activity from Layer 2 (input) to Layer 5 (output) of the prefrontal cortex in primates performing a Delayed-Match-to-Sample task. The PDM-based models reduce the complexity of representing large-scale neural MIMO systems that involve large numbers of neurons, and also offer the prospect of improved biological/physiological interpretation of the obtained models. PDM analysis of neuronal connectivity in this system revealed "input-output channels of communication" corresponding to specific bands of neural rhythms that quantify the relative importance of these frequency-specific PDMs across a variety of different tasks. We found that behavioral performance during the Delayed-Match-to-Sample task (correct vs. incorrect outcome) was associated with differential activation of frequency-specific PDMs in the prefrontal cortex.
Brain Dynamics in Predicting Driving Fatigue Using a Recurrent Self-Evolving Fuzzy Neural Network.
Liu, Yu-Ting; Lin, Yang-Yin; Wu, Shang-Lin; Chuang, Chun-Hsiang; Lin, Chin-Teng
2016-02-01
This paper proposes a generalized prediction system called a recurrent self-evolving fuzzy neural network (RSEFNN) that employs an on-line gradient descent learning rule to address the electroencephalography (EEG) regression problem in brain dynamics for driving fatigue. The cognitive states of drivers significantly affect driving safety; in particular, fatigue driving, or drowsy driving, endangers both the individual and the public. For this reason, the development of brain-computer interfaces (BCIs) that can identify drowsy driving states is a crucial and urgent topic of study. Many EEG-based BCIs have been developed as artificial auxiliary systems for use in various practical applications because of the benefits of measuring EEG signals. In the literature, the efficacy of EEG-based BCIs in recognition tasks has been limited by low resolutions. The system proposed in this paper represents the first attempt to use the recurrent fuzzy neural network (RFNN) architecture to increase adaptability in realistic EEG applications to overcome this bottleneck. This paper further analyzes brain dynamics in a simulated car driving task in a virtual-reality environment. The proposed RSEFNN model is evaluated using the generalized cross-subject approach, and the results indicate that the RSEFNN is superior to competing models regardless of the use of recurrent or nonrecurrent structures.
Neural pathways in processing of sexual arousal: a dynamic causal modeling study.
Seok, J-W; Park, M-S; Sohn, J-H
2016-09-01
Three decades of research have investigated brain processing of visual sexual stimuli with neuroimaging methods. These researchers have found that sexual arousal stimuli elicit activity in a broad neural network of cortical and subcortical brain areas that are known to be associated with cognitive, emotional, motivational and physiological components. However, it is not completely understood how these neural systems integrate and modulate incoming information. Therefore, we identified cerebral areas whose activations were correlated with sexual arousal using event-related functional magnetic resonance imaging and used the dynamic causal modeling method to search for the effective connectivity of the sexual arousal processing network. Thirteen heterosexual males were scanned while they passively viewed alternating short trials of erotic and neutral pictures on a monitor. We created a subset of seven models based on our results and previous studies and selected a dominant connectivity model. Consequently, we suggest a dynamic causal model of the brain processes mediating the cognitive, emotional, motivational and physiological factors of human male sexual arousal. These findings have significant implications for the neuropsychology of male sexuality. PMID:27278664
Dynamical properties of neural network model for working memory with Hodgkin-Huxley neurons
NASA Astrophysics Data System (ADS)
Omori, Toshiaki; Horiguchi, Tsuyoshi
2004-03-01
We propose a neural network model of working memory with one-compartment neurons and investigate its dynamical properties. We assume that the model consists of excitatory neurons and inhibitory neurons; all the neurons are connected to each other. The excitatory neurons are divided into several groups of selective neurons and one group of non-selective neurons. The selective neurons are assumed to form subpopulations, with each selective neuron belonging to only one subpopulation; the non-selective neurons are assumed not to form any subpopulation. Synaptic strengths between neurons within a subpopulation are assumed to be potentiated. In numerical simulations, persistent firing of neurons in a subpopulation emerges; this persistent firing corresponds to the retention of memory, one of the functions of working memory. We find that the strength of the external input and the strength of the N-methyl-D-aspartate synapse are important factors for the dynamical behavior of the network; for example, if we enhance the strength of the external input to a subpopulation while persistent firing is occurring in another subpopulation, the persistent firing either arises in the stimulated subpopulation or is sustained against the external input. These results reveal that, within the proposed model, the working-memory function of the neural network is controlled by neuromodulation and external stimuli. We also find that the persistence time of firing of the selective neurons shows a kind of phase transition as a function of the degree of potentiation of synapses.
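As a loose illustration of the retention mechanism described above, the bistability created by potentiated recurrent synapses can be sketched with a one-population rate model (a deliberate simplification; the paper uses Hodgkin-Huxley neurons, and all parameters below are illustrative):

```python
import numpy as np

def simulate(w_rec=2.0, dt=0.001, t_end=2.0, pulse=(0.2, 0.4)):
    """Rate-model sketch of retention: potentiated recurrent excitation
    w_rec keeps a selective subpopulation's firing rate high after a
    transient external input is removed (bistability)."""
    tau = 0.02                     # population time constant (s)
    steps = int(t_end / dt)
    r = 0.0                        # normalized population firing rate
    rates = np.empty(steps)
    for i in range(steps):
        t = i * dt
        I_ext = 1.5 if pulse[0] <= t < pulse[1] else 0.0
        drive = w_rec * r + I_ext - 1.0          # net input minus threshold
        r += dt / tau * (-r + 1.0 / (1.0 + np.exp(-8.0 * drive)))
        rates[i] = r
    return rates

rates = simulate()
# low before the pulse, high (persistent) long after it ends
print(round(rates[50], 3), round(rates[-1], 3))
```

With weak recurrence (e.g. `w_rec=0.5`) the high state disappears and activity decays once the pulse ends, mirroring the role the abstract assigns to synaptic potentiation.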
Dynamic neural networks based on-line identification and control of high performance motor drives
NASA Technical Reports Server (NTRS)
Rubaai, Ahmed; Kotaru, Raj
1995-01-01
In the automated and high-tech industries of the future, there will be a need for high-performance motor drives in both the low-power and high-power ranges. To meet very stringent demands of tracking and regulation in the two quadrants of operation, advanced control technologies are of considerable interest and need to be developed. In response, a dynamic learning control architecture with simultaneous on-line identification and control is developed. The feature of the proposed approach, efficiently combining the dual tasks of system identification (learning) and adaptive control of nonlinear motor drives into a single operation, is presented. This approach therefore not only adapts to uncertainties in the dynamic parameters of the motor drives but also learns about their inherent nonlinearities. In fact, most neural-network-based adaptive control approaches in use have an identification phase entirely separate from the control phase. Because these approaches separate the identification and control modes, it is not possible to cope with dynamic changes in a controlled process. Extensive simulation studies have been conducted and good performance was observed. The robustness of the neuro-controllers, which perform efficiently in a noisy environment, is also demonstrated. With this initial success, the principal investigator believes that the proposed approach with the suggested neural structure can be used successfully for the control of high-performance motor drives. Two identification and control topologies based on the model reference adaptive control technique are used in the present analysis. No prior knowledge of load dynamics is assumed in either topology, while the second topology also assumes no knowledge of the motor parameters.
Utilizing neural networks in magnetic media modeling and field computation: A review
Adly, Amr A.; Abd-El-Hafiz, Salwa K.
2013-01-01
Magnetic materials are crucial components of a wide range of products and devices. Usually, the complexity of such materials is defined by their permeability classification and the extent of their coupling to non-magnetic properties. Hence, the development of models that can accurately simulate the complex nature of these materials becomes crucial for multi-dimensional field-media interaction computations. In the past few decades, artificial neural networks (ANNs) have been utilized in many applications to perform miscellaneous tasks such as identification, approximation, optimization, classification and forecasting. The purpose of this review article is to give an account of the utilization of ANNs in modeling as well as field computation involving complex magnetic materials. The most widely used ANN types in magnetics, the advantages of this usage, detailed implementation methodologies, and numerical examples are given in the paper. PMID:25685531
Utilizing neural networks in magnetic media modeling and field computation: A review.
Adly, Amr A; Abd-El-Hafiz, Salwa K
2014-11-01
Magnetic materials are crucial components of a wide range of products and devices. Usually, the complexity of such materials is defined by their permeability classification and the extent of their coupling to non-magnetic properties. Hence, the development of models that can accurately simulate the complex nature of these materials becomes crucial for multi-dimensional field-media interaction computations. In the past few decades, artificial neural networks (ANNs) have been utilized in many applications to perform miscellaneous tasks such as identification, approximation, optimization, classification and forecasting. The purpose of this review article is to give an account of the utilization of ANNs in modeling as well as field computation involving complex magnetic materials. The most widely used ANN types in magnetics, the advantages of this usage, detailed implementation methodologies, and numerical examples are given in the paper.
ERIC Educational Resources Information Center
Barca, Laura; Cornelissen, Piers; Simpson, Michael; Urooj, Uzma; Woods, Will; Ellis, Andrew W.
2011-01-01
Right-handed participants respond more quickly and more accurately to written words presented in the right visual field (RVF) than in the left visual field (LVF). Previous attempts to identify the neural basis of the RVF advantage have had limited success. Experiment 1 was a behavioral study of lateralized word naming which established that the…
Heterogeneous mean field for neural networks with short-term plasticity
NASA Astrophysics Data System (ADS)
di Volo, Matteo; Burioni, Raffaella; Casartelli, Mario; Livi, Roberto; Vezzani, Alessandro
2014-08-01
We report on the main dynamical features of a model of leaky integrate-and-fire excitatory neurons with short-term plasticity defined on random massive networks. We investigate the dynamics using a heterogeneous mean-field formulation of the model that is able to reproduce dynamical phases characterized by the presence of quasisynchronous events. This formulation also allows one to solve the inverse problem of reconstructing the in-degree distribution for different network topologies from knowledge of the global activity field. We study the robustness of this inversion procedure by providing numerical evidence that the in-degree distribution can be recovered even in the presence of noise and disorder in the external currents. Finally, we discuss the validity of the heterogeneous mean-field approach for sparse networks with a sufficiently large average in-degree.
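Short-term plasticity in such integrate-and-fire models is often written in the Tsodyks-Markram form; the sketch below is an assumption in that style (the paper's exact plasticity equations may differ) and shows how a synapse's effective strength depresses across a spike train:

```python
import numpy as np

def tm_synapse(spike_times, U=0.5, tau_rec=0.8):
    """Tsodyks-Markram-style short-term depression: each presynaptic
    spike consumes a fraction U of the available resources x, which
    recover toward 1 with time constant tau_rec. Returns the effective
    transmission strength U*x at each spike time."""
    x = 1.0
    t_prev = spike_times[0] if spike_times else 0.0
    strengths = []
    for t in spike_times:
        x = 1.0 - (1.0 - x) * np.exp(-(t - t_prev) / tau_rec)  # recovery
        strengths.append(U * x)
        x -= U * x            # depletion caused by this spike
        t_prev = t
    return strengths

s = tm_synapse([0.1, 0.15, 0.2, 0.25, 0.3])
# successive spikes transmit with monotonically decreasing efficacy
print([round(v, 3) for v in s])
```

This per-synapse fatigue is what shapes the quasisynchronous population events the mean-field formulation reproduces.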
Optical vortex behavior in dynamic speckle fields.
Kirkpatrick, Sean J; Khaksari, Kosar; Thomas, Dennis; Duncan, Donald D
2012-05-01
The dynamic behavior of phase singularities, or optical vortices, in the pseudo-phase representation of dynamic speckle patterns is investigated. Sequences of band-limited, dynamic speckle patterns with predetermined Gaussian decorrelation behavior were generated, and the pseudo-phase realizations of the individual speckle patterns were calculated via a two-dimensional Hilbert transform algorithm. Singular points in the pseudo-phase representation are identified by calculating the local topological charge as determined by convolution of the pseudo-phase representations with a series of 2×2 nabla filters. The spatial locations of the phase singularities are tracked over all frames of the speckle sequences, and recorded in three-dimensional space (x,y,f), where f is frame number in the sequence. The behavior of the phase singularities traces 'vortex trails' that are representative of the speckle dynamics. Slowly decorrelating speckle patterns result in long, relatively straight vortex trails, while rapidly decorrelating speckle patterns result in tortuous, relatively short vortex trails. Optical vortex analysis such as that described herein can be used as a descriptor of biological activity, flow, and motion.
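The singularity-detection step can be illustrated compactly: summing wrapped phase differences around each 2×2 plaquette yields the local topological charge, with ±1 marking a vortex. The numpy sketch below runs on a synthetic phase field and is equivalent in spirit, not in implementation, to the authors' 2×2 nabla filters:

```python
import numpy as np

def topological_charge(phase):
    """Wrapped phase circulation around each 2x2 plaquette, in units of
    2*pi; a value of +/-1 marks a phase singularity (optical vortex)."""
    wrap = lambda a: (a + np.pi) % (2 * np.pi) - np.pi
    # phase differences along the four edges of each plaquette
    d1 = wrap(phase[:-1, 1:] - phase[:-1, :-1])   # along +x
    d2 = wrap(phase[1:, 1:] - phase[:-1, 1:])     # along +y
    d3 = wrap(phase[1:, :-1] - phase[1:, 1:])     # along -x
    d4 = wrap(phase[:-1, :-1] - phase[1:, :-1])   # along -y
    return np.rint((d1 + d2 + d3 + d4) / (2 * np.pi)).astype(int)

# synthetic field with a single +1 vortex between the central pixels
y, x = np.mgrid[-16:16, -16:16] + 0.5
phase = np.arctan2(y, x)
q = topological_charge(phase)
print(q.sum())  # net charge of the field
```

Tracking the nonzero entries of `q` across frames is what produces the 'vortex trails' described above.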
Muroski, Megan E; Morshed, Ramin A; Cheng, Yu; Vemulkar, Tarun; Mansell, Rhodri; Han, Yu; Zhang, Lingjiao; Aboody, Karen S; Cowburn, Russell P; Lesniak, Maciej S
2016-01-01
Stem cells have recently garnered attention as drug and particle carriers to tumor sites, due to their natural ability to track to the site of interest. Specifically, neural stem cells (NSCs) have been demonstrated to be a promising candidate for delivering therapeutics to malignant glioma, a primary brain tumor that is not curable by current treatments and is inevitably fatal. In this article, we demonstrate that NSCs are able to internalize 2 μm magnetic discs (SD) without affecting the health of the cells. The SD can then be remotely triggered in an applied 1 T rotating magnetic field to deliver a payload. Furthermore, we use this NSC-SD delivery system to deliver the SD themselves as a therapeutic agent to mechanically destroy glioma cells. NSCs were incubated with the SD overnight before treatment with a 1 T rotating magnetic field to trigger the SD release. The potential timed-release effects of the magnetic particles were tested with migration assays, confocal microscopy and immunohistochemistry for apoptosis. After the magnetic-field-triggered SD release, glioma cells were added and allowed to internalize the particles. Once internalized, another dose of the magnetic field treatment was administered to trigger mechanically induced apoptotic cell death of the glioma cells by the rotating SD. We are able to determine that NSC-SD and magnetic field treatment can achieve over 50% glioma cell death when the cells are loaded at 50 SD/cell, making this a promising therapeutic for the treatment of glioma.
Muroski, Megan E.; Morshed, Ramin A.; Cheng, Yu; Vemulkar, Tarun; Mansell, Rhodri; Han, Yu; Zhang, Lingjiao; Aboody, Karen S.; Cowburn, Russell P.; Lesniak, Maciej S.
2016-01-01
Stem cells have recently garnered attention as drug and particle carriers to tumor sites, due to their natural ability to track to the site of interest. Specifically, neural stem cells (NSCs) have been demonstrated to be a promising candidate for delivering therapeutics to malignant glioma, a primary brain tumor that is not curable by current treatments and is inevitably fatal. In this article, we demonstrate that NSCs are able to internalize 2 μm magnetic discs (SD) without affecting the health of the cells. The SD can then be remotely triggered in an applied 1 T rotating magnetic field to deliver a payload. Furthermore, we use this NSC-SD delivery system to deliver the SD themselves as a therapeutic agent to mechanically destroy glioma cells. NSCs were incubated with the SD overnight before treatment with a 1 T rotating magnetic field to trigger the SD release. The potential timed-release effects of the magnetic particles were tested with migration assays, confocal microscopy and immunohistochemistry for apoptosis. After the magnetic-field-triggered SD release, glioma cells were added and allowed to internalize the particles. Once internalized, another dose of the magnetic field treatment was administered to trigger mechanically induced apoptotic cell death of the glioma cells by the rotating SD. We are able to determine that NSC-SD and magnetic field treatment can achieve over 50% glioma cell death when the cells are loaded at 50 SD/cell, making this a promising therapeutic for the treatment of glioma. PMID:26734932
Static and dynamical Meissner force fields
NASA Technical Reports Server (NTRS)
Weinberger, B. R.; Lynds, L.; Hull, J. R.; Mulcahy, T. M.
1991-01-01
The coupling between copper-based high temperature superconductors (HTS) and magnets is represented by a force field. Zero-field cooled experiments were performed with several forms of superconductors: 1) cold-pressed sintered cylindrical disks; 2) small particles fixed in epoxy polymers; and 3) small particles suspended in hydrocarbon waxes. Using magnets with axial field symmetries, direct spatial force measurements in the range of 0.1 to 10(exp 4) dynes were performed with an analytical balance and force constants were obtained from mechanical vibrational resonances. Force constants increase dramatically with decreasing spatial displacement. The force field displays a strong temperature dependence between 20 and 90 K and decreases exponentially with increasing distance of separation. Distinct slope changes suggest the presence of B-field and temperature-activated processes that define the forces. Hysteresis measurements indicated that the magnitude of force scales roughly with the volume fraction of HTS in composite structures. Thus, the net force resulting from the field interaction appears to arise from regions as small or smaller than the grain size and does not depend on contiguous electron transport over large areas. Results of these experiments are discussed.
Park, Gibeom; Tani, Jun
2015-12-01
The current study presents neurorobotics experiments on the acquisition of skills for "communicable congruence" with humans via learning. A dynamic neural network model characterized by multiple-timescale dynamics, the multiple timescale recurrent neural network (MTRNN), was utilized as a neuromorphic model for controlling a humanoid robot. In the experimental task, the humanoid robot was trained to generate specific sequential movement patterns in response to various sequences of imperative gesture patterns demonstrated by the human subjects, following predefined compositional semantic rules. The experimental results showed that (1) the adopted MTRNN can achieve generalization by learning in the lower feature perception level using a limited set of tutoring patterns, (2) the MTRNN can learn to extract compositional semantic rules with generalization in its higher level characterized by slow timescale dynamics, and (3) the MTRNN can develop another type of cognitive capability for controlling the internal contextual processes as situated in on-going task sequences without being provided with cues explicitly indicating task segmentation points. Analysis of the dynamic properties developed in the MTRNN via learning indicated that the aforementioned cognitive mechanisms were achieved by self-organization of an adequate functional hierarchy, utilizing the constraint of the multiple-timescale property and the topological connectivity imposed on the network configuration. These results could contribute to the development of socially intelligent robots endowed with cognitive communicative competency similar to that of humans.
Fuzzy Counter Propagation Neural Network Control for a Class of Nonlinear Dynamical Systems
Sakhre, Vandana; Jain, Sanjeev; Sapkal, Vilas S.; Agarwal, Dev P.
2015-01-01
A Fuzzy Counter Propagation Neural Network (FCPN) controller design is developed for a class of nonlinear dynamical systems. In this process, the weights connecting the instar and outstar, that is, the input-hidden and hidden-output layers, respectively, are adjusted by using Fuzzy Competitive Learning (FCL). The FCL paradigm adopts the principle of competitive learning to calculate the proposed Best Matched Node (BMN). This strategy offers robust control of nonlinear dynamical systems. FCPN is compared with existing networks such as the Dynamic Network (DN) and the Back Propagation Network (BPN) on the basis of Mean Absolute Error (MAE), Mean Square Error (MSE), Best Fit Rate (BFR), and so forth. The results show that the proposed FCPN gives better results than DN and BPN. The effectiveness of the proposed FCPN algorithms is demonstrated through simulations of four nonlinear dynamical systems as well as multiple-input single-output (MISO) and single-input single-output (SISO) gas furnace Box-Jenkins time series data. PMID:26366169
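The instar/outstar update at the heart of counter propagation can be sketched as follows; this is a generic (non-fuzzy) competitive-learning illustration with invented toy data, not the paper's FCL variant. The best matched node is the hidden unit whose instar weights lie closest to the input; its instar weights move toward the input and its outstar weights toward the target.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_cpn(X, Y, n_hidden=8, alpha=0.1, beta=0.1, epochs=50):
    """Counter-propagation sketch: pick the Best Matched Node (BMN) by
    nearest instar weights, then pull its instar weights toward the
    input and its outstar weights toward the desired output."""
    W_in = rng.standard_normal((n_hidden, X.shape[1]))   # instar weights
    W_out = np.zeros((n_hidden, Y.shape[1]))             # outstar weights
    for _ in range(epochs):
        for x, y in zip(X, Y):
            bmn = np.argmin(np.linalg.norm(W_in - x, axis=1))
            W_in[bmn] += alpha * (x - W_in[bmn])
            W_out[bmn] += beta * (y - W_out[bmn])
    return W_in, W_out

def predict(W_in, W_out, x):
    return W_out[np.argmin(np.linalg.norm(W_in - x, axis=1))]

# toy mapping: the sign of the first input component
X = rng.uniform(-1, 1, size=(200, 2))
Y = np.sign(X[:, :1])
W_in, W_out = train_cpn(X, Y)
acc = np.mean([np.sign(predict(W_in, W_out, x)[0]) == y
               for x, y in zip(X, Y[:, 0])])
print(acc)
```

The fuzzy variant in the paper replaces the hard winner selection with fuzzy membership, but the two-layer instar/outstar flow is the same.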
Radial Basis Function Based Neural Network for Motion Detection in Dynamic Scenes.
Huang, Shih-Chia; Do, Ben-Hsiang
2014-01-01
Motion detection, the process which segments moving objects in video streams, is the first critical process and plays an important role in video surveillance systems. Dynamic scenes are commonly encountered in both indoor and outdoor situations and contain objects such as swaying trees, spouting fountains, rippling water, moving curtains, and so on. However, complete and accurate motion detection in dynamic scenes is often a challenging task. This paper presents a novel motion detection approach based on radial basis function artificial neural networks to accurately detect moving objects not only in dynamic scenes but also in static scenes. The proposed method involves two important modules: a multibackground generation module and a moving object detection module. The multibackground generation module effectively generates a flexible probabilistic model through an unsupervised learning process to fulfill the property of either dynamic background or static background. Next, the moving object detection module achieves complete and accurate detection of moving objects by only processing blocks that are highly likely to contain moving objects. This is accomplished by two procedures: the block alarm procedure and the object extraction procedure. The detection results of our method were evaluated by qualitative and quantitative comparisons with other state-of-the-art methods based on a wide range of natural video sequences. The overall results show that the proposed method substantially outperforms existing methods with Similarity and F1 accuracy rates of 69.37% and 65.50%, respectively. PMID:24108721
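A heavily simplified sketch of the multibackground idea follows; all parameters and the toy video are invented for illustration, and the paper's modules (block alarms, object extraction) are considerably more elaborate. Each pixel keeps several candidate background values, and a radial-basis activation of the closest candidate scores how background-like the pixel is.

```python
import numpy as np

def detect(frames, n_modes=3, sigma=10.0, thresh=0.5, lr=0.05):
    """RBF-style multibackground sketch: per pixel, keep n_modes
    candidate background values; the activation exp(-d^2 / 2 sigma^2)
    of the closest mode scores background likelihood. Low activation
    marks foreground (a moving object); matched background modes are
    slowly updated toward the current frame."""
    modes = np.repeat(frames[0][None], n_modes, axis=0).astype(float)
    masks = []
    for f in frames:
        d = np.abs(modes - f)                       # distance to each mode
        act = np.exp(-(d ** 2) / (2 * sigma ** 2))  # RBF activations
        fg = act.max(axis=0) < thresh               # foreground mask
        # update the best-matching mode toward the frame (background only)
        idx = act.argmax(axis=0)
        sel = np.take_along_axis(modes, idx[None], axis=0)[0]
        sel = np.where(fg, sel, sel + lr * (f - sel))
        np.put_along_axis(modes, idx[None], sel[None], axis=0)
        masks.append(fg)
    return masks

# static background with a bright block moving one column per frame
frames = [np.zeros((16, 16)) for _ in range(5)]
for i, f in enumerate(frames):
    f[4:8, i:i + 4] = 200.0
masks = detect(frames)
```

Pixels the block enters or vacates are flagged as foreground, while the static background stays silent.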
Wang, Yanan; Geng, Xinyi; Huang, Yongzhi; Wang, Shouyan
2016-02-01
Dysfunction of the subthalamic nucleus is the main cause of Parkinson's disease. Local field potentials in the human subthalamic nucleus contain rich physiological information. The present study aimed to quantify the oscillatory and dynamic characteristics of local field potentials of the subthalamic nucleus, and their modulation by medication therapy for Parkinson's disease. The subthalamic nucleus local field potentials were recorded from patients with Parkinson's disease in the on- and off-medication states. The oscillatory features were characterised with power spectral analysis. Furthermore, the dynamic features were characterised with time-frequency analysis and the coefficient of variation of the time-variant power at each frequency. There was a dominant peak in the low beta band with medication off. The medication significantly suppressed the low beta component and increased the theta component. The amplitude fluctuation of neural oscillations was measured by the coefficient of variation, which was increased by medication in the 4-7 Hz and 60-66 Hz bands. These effects showed that medication significantly modulated subthalamic nucleus neural oscillatory synchronisation and dynamic features; the subthalamic nucleus neural activities tend towards a stable state under medication. The findings provide quantitative biomarkers for studying the mechanisms of Parkinson's disease and clinical treatments of medication or deep brain stimulation. PMID:27382739
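The dynamic measure described above can be reproduced in a few lines: estimate time-variant power with a sliding-window FFT, then take the coefficient of variation (std/mean) across time at each frequency. A sketch on synthetic data (the signal parameters are illustrative, not taken from the recordings):

```python
import numpy as np

def band_cv(x, fs, win=256, step=128):
    """Time-variant spectral power via a sliding Hann-windowed FFT,
    then the coefficient of variation (std/mean) of power at each
    frequency -- low CV means a steady oscillation, high CV means a
    strongly fluctuating one."""
    w = np.hanning(win)
    P = np.array([np.abs(np.fft.rfft(w * x[s:s + win])) ** 2
                  for s in range(0, len(x) - win + 1, step)])
    freqs = np.fft.rfftfreq(win, 1 / fs)
    return freqs, P.std(axis=0) / P.mean(axis=0)

fs = 256
t = np.arange(20 * fs) / fs
rng = np.random.default_rng(1)
# steady 15 Hz "low beta" rhythm plus broadband noise: the steady
# oscillation should show a much lower CV than the noise-only bins
x = np.sin(2 * np.pi * 15 * t) + 0.3 * rng.standard_normal(t.size)
freqs, cv = band_cv(x, fs)
i15 = np.argmin(np.abs(freqs - 15))
print(round(cv[i15], 3), round(float(np.median(cv)), 3))
```

In the study's terms, a medication-induced rise in CV at a given frequency indicates that oscillatory power there became more intermittent.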
Nonequilibrium Dynamical Mean-Field Theory for Bosonic Lattice Models
NASA Astrophysics Data System (ADS)
Strand, Hugo U. R.; Eckstein, Martin; Werner, Philipp
2015-01-01
We develop the nonequilibrium extension of bosonic dynamical mean-field theory and a Nambu real-time strong-coupling perturbative impurity solver. In contrast to Gutzwiller mean-field theory and strong-coupling perturbative approaches, nonequilibrium bosonic dynamical mean-field theory captures not only dynamical transitions but also damping and thermalization effects at finite temperature. We apply the formalism to quenches in the Bose-Hubbard model, starting from both the normal and the Bose-condensed phases. Depending on the parameter regime, one observes qualitatively different dynamical properties, such as rapid thermalization, trapping in metastable superfluid or normal states, as well as long-lived or strongly damped amplitude oscillations. We summarize our results in nonequilibrium "phase diagrams" that map out the different dynamical regimes.
Sang, Dong; Lv, Bin; He, Huiguang; He, Jiping; Wang, Feiyue
2010-01-01
In this work, we analyzed neural interactions based on data recorded from the motor cortex of a monkey trained to complete multi-target reach-to-grasp tasks. As a recently proven effective tool, the Dynamic Bayesian Network (DBN) was applied to model and infer interactions of dependence between neurons. The resulting networks of neural interactions, which correspond to different tasks with different directions and orientations, indicated that the target information was not encoded in simple ways by neuronal networks. We also explored the difference in neural interactions between the delayed period and the peri-movement period during the reach-to-grasp task. We found that the motor control process always led to relatively more complex neural interaction networks than the planning process. PMID:21096882
Boreland, B; Clement, G; Kunze, H
2015-08-01
After reviewing set selection and memory model dynamical system neural networks, we introduce a neural network model that combines set selection with partial memories (stored memories on subsets of states in the network). We establish that feasible equilibria with all states equal to ± 1 correspond to answers to a particular set theoretic problem. We show that KenKen puzzles can be formulated as a particular case of this set theoretic problem and use the neural network model to solve them; in addition, we use a similar approach to solve Sudoku. We illustrate the approach in examples. As a heuristic experiment, we use online or print resources to identify the difficulty of the puzzles and compare these difficulties to the number of iterations used by the appropriate neural network solver, finding a strong relationship.
Boreland, B; Clement, G; Kunze, H
2015-08-01
After reviewing set selection and memory model dynamical system neural networks, we introduce a neural network model that combines set selection with partial memories (stored memories on subsets of states in the network). We establish that feasible equilibria with all states equal to ± 1 correspond to answers to a particular set theoretic problem. We show that KenKen puzzles can be formulated as a particular case of this set theoretic problem and use the neural network model to solve them; in addition, we use a similar approach to solve Sudoku. We illustrate the approach in examples. As a heuristic experiment, we use online or print resources to identify the difficulty of the puzzles and compare these difficulties to the number of iterations used by the appropriate neural network solver, finding a strong relationship. PMID:25984696
NASA Astrophysics Data System (ADS)
Park, Choongseok; Worth, Robert M.; Rubchinsky, Leonid L.
2011-04-01
Synchronous oscillatory dynamics is frequently observed in the human brain. We analyze the fine temporal structure of phase-locking in a realistic network model and match it with the experimental data from Parkinsonian patients. We show that the experimentally observed intermittent synchrony can be generated just by moderately increased coupling strength in the basal ganglia circuits due to the lack of dopamine. Comparison of the experimental and modeling data suggests that brain activity in Parkinson's disease resides in the large boundary region between synchronized and nonsynchronized dynamics. Being on the edge of synchrony may allow for easy formation of transient neuronal assemblies.
Dynamics of coupled vortices in perpendicular field
Jain, Shikha; Novosad, Valentyn; Fradin, Frank Y.; Pearson, John E.; Bader, Samuel D.
2014-02-24
We explore the coupling mechanism of two magnetic vortices in the presence of a perpendicular bias field by pre-selecting the polarity combinations using the resonant-spin-ordering approach. First, out of the four vortex polarity combinations (two of which are degenerate), three stable core polarity states are achieved by lifting the degeneracy of one of the states. Second, the response of the stiffness constant for the vortex pair (similar polarity) in perpendicular bias is found to be asymmetric around the zero field, in contrast to the response obtained from a single vortex core. Finally, the collective response of the system for antiparallel core polarities is symmetric around zero bias. The vortex core whose polarization is opposite to the bias field dominates the response.
Exploring scalar field dynamics with Gaussian processes
Nair, Remya; Jhingan, Sanjay; Jain, Deepak
2014-01-01
The origin of the accelerated expansion of the Universe remains an unsolved mystery in Cosmology. In this work we consider a spatially flat Friedmann-Robertson-Walker (FRW) Universe with non-relativistic matter and a single scalar field contributing to the energy density of the Universe. Properties of this scalar field, such as the potential, kinetic energy, and equation of state, are reconstructed from Supernovae and BAO data using Gaussian processes. We also reconstruct energy conditions and kinematic variables of expansion, such as the jerk and the slow roll parameter. We find that the reconstructed scalar field variables and the kinematic quantities are consistent with a flat ΛCDM Universe. Further, we find that the null energy condition is satisfied for the redshift range of the Supernovae data considered in the paper, but the strong energy condition is violated.
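The reconstruction machinery here is ordinary Gaussian-process regression. A self-contained sketch with a squared-exponential kernel and mock data follows (the actual analysis uses real Supernovae/BAO data and propagates the reconstruction into derived quantities like the equation of state):

```python
import numpy as np

def gp_predict(x_train, y_train, x_test, ell=0.5, sf=1.0, noise=0.1):
    """Gaussian-process regression with a squared-exponential kernel:
    the nonparametric smoothing used to reconstruct cosmological
    functions from data without assuming a parametric model."""
    def k(a, b):
        return sf ** 2 * np.exp(-(a[:, None] - b[None, :]) ** 2
                                / (2 * ell ** 2))
    K = k(x_train, x_train) + noise ** 2 * np.eye(len(x_train))
    Ks = k(x_test, x_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    cov = k(x_test, x_test) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0, None))

rng = np.random.default_rng(2)
z = np.sort(rng.uniform(0, 2, 30))                   # mock "redshifts"
y = (1 + z) ** 1.5 + 0.1 * rng.standard_normal(30)   # mock noisy data
z_grid = np.linspace(0, 2, 50)
mean, std = gp_predict(z, y, z_grid, ell=0.7)
```

The posterior `mean` recovers the underlying smooth function in the data-dense interior, with `std` quantifying the reconstruction uncertainty.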
NASA Astrophysics Data System (ADS)
Wei, Xile; Si, Kaili; Yi, Guosheng; Wang, Jiang; Lu, Meili
2016-07-01
In this paper, we use a reduced two-compartment neuron model to investigate the interaction between extracellular subthreshold electric fields and synchrony in small-world networks. It is observed that network synchronization is closely related to the strength of the electric field and the geometric properties of the two-compartment model. Specifically, increasing the electric field induces a gradual improvement in network synchrony, while increasing the geometric factor results in an abrupt decrease in network synchronization. In addition, increasing the electric field can make the network switch from asynchronous to synchronous when the geometric parameter is set to a given value. Furthermore, it is demonstrated that network synchrony can also be affected by the firing frequency and dynamical bifurcation features of single neurons. These results highlight the effect of weak fields on network synchrony from the viewpoint of a biophysical model, which may contribute to further understanding of the effect of electric fields on network activity.
Multi-bump solutions in a neural field model with external inputs
NASA Astrophysics Data System (ADS)
Ferreira, Flora; Erlhagen, Wolfram; Bicho, Estela
2016-07-01
We study the conditions for the formation of multiple regions of high activity or "bumps" in a one-dimensional, homogeneous neural field with localized inputs. Stable multi-bump solutions of the integro-differential equation have been proposed as a model of a neural population representation of remembered external stimuli. We apply a class of oscillatory coupling functions and first derive criteria on the input width and distance, in relation to the synaptic couplings, that guarantee the existence and stability of one and two regions of high activity. These input-induced patterns are attracted by the corresponding stable one-bump and two-bump solutions when the input is removed. We then extend our analytical and numerical investigation to N-bump solutions, showing that the constraints on the input shape derived for the two-bump case can be exploited to generate a memory of N > 2 localized inputs. We discuss the pattern formation process when either the conditions on the input shape are violated or the spatial ranges of the excitatory and inhibitory connections are changed. An important aspect for applications is that the theoretical findings allow us to determine, for a given coupling function, the maximum number of localized inputs that can be stored in a given finite interval.
The magnetic field of Mars - Implications from gas dynamic modeling
NASA Astrophysics Data System (ADS)
Russell, C. T.; Luhmann, J. G.; Spreiter, J. R.; Stahara, S. S.
1984-05-01
On January 21, 1972, the Mars 3 spacecraft observed a variation in the magnetic field during its periapsis passage over the dayside of Mars that was suggestive of entry into a Martian magnetosphere. Original data and trajectory of the spacecraft have been obtained (Dolginov, 1983) and an attempt is made to simulate the observed variation of the magnetic field by using a gas dynamic simulation. In the gas dynamic model a flow field is generated and this flow field is used to carry the interplanetary magnetic field through the Martian magnetosheath. The independence of the flow field and magnetic field calculation makes it possible to converge rapidly on an IMF orientation that would result in a magnetic variation similar to that observed by Mars 3. There appears to be no need to invoke an entry into a Martian magnetosphere to explain these observations.
The magnetic field of Mars - Implications from gas dynamic modeling
NASA Technical Reports Server (NTRS)
Russell, C. T.; Luhmann, J. G.; Spreiter, J. R.; Stahara, S. S.
1984-01-01
On January 21, 1972, the Mars 3 spacecraft observed a variation in the magnetic field during its periapsis passage over the dayside of Mars that was suggestive of entry into a Martian magnetosphere. Original data and trajectory of the spacecraft have been obtained (Dolginov, 1983) and an attempt is made to simulate the observed variation of the magnetic field by using a gas dynamic simulation. In the gas dynamic model a flow field is generated and this flow field is used to carry the interplanetary magnetic field through the Martian magnetosheath. The independence of the flow field and magnetic field calculation makes it possible to converge rapidly on an IMF orientation that would result in a magnetic variation similar to that observed by Mars 3. There appears to be no need to invoke an entry into a Martian magnetosphere to explain these observations.
Brownian dynamics of charged particles in a constant magnetic field
Hou, L. J.; Piel, A.; Miskovic, Z. L.; Shukla, P. K.
2009-05-15
Numerical algorithms are proposed for simulating the Brownian dynamics of charged particles in an external magnetic field, taking into account the Brownian motion of charged particles, the damping effect, and the effect of the magnetic field self-consistently. The performance of these algorithms is tested in terms of their accuracy and long-time stability by using a three-dimensional Brownian oscillator model with a constant magnetic field. Step-by-step recipes for implementing these algorithms are given in detail. It is expected that these algorithms can be directly used to study particle dynamics in various dispersed systems in the presence of a magnetic field, including polymer solutions, colloidal suspensions, and, particularly, complex (dusty) plasmas. The proposed algorithms can also be used as a thermostat in usual molecular dynamics simulations in the presence of a magnetic field.
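The physics can be illustrated with a naive Euler-Maruyama step; the paper's algorithms are designed to be far more accurate and stable, so this sketch (with illustrative parameters) only checks that magnetically coupled Langevin dynamics reaches thermal equipartition:

```python
import numpy as np

def langevin_magnetic(steps=200_000, dt=1e-3, gamma=2.0, qB=5.0,
                      kT=1.0, m=1.0):
    """Euler-Maruyama sketch of a charged Brownian particle in a
    constant field B = B z-hat, restricted to the plane normal to B:
    m dv = (q v x B - gamma v) dt + sqrt(2 gamma kT) dW.
    Returns the time-averaged kinetic energy, which should approach
    kT (equipartition over two degrees of freedom)."""
    rng = np.random.default_rng(3)
    v = np.zeros(2)
    noise = np.sqrt(2 * gamma * kT * dt) / m
    ke_sum = 0.0
    for _ in range(steps):
        lorentz = qB * np.array([v[1], -v[0]])   # q v x B for B along z
        v = v + dt * (lorentz - gamma * v) / m + noise * rng.standard_normal(2)
        ke_sum += 0.5 * m * (v @ v)
    return ke_sum / steps

ke = langevin_magnetic()
print(round(ke, 2))
```

Note that the Lorentz term does no work, so it must not disturb the thermal average; checking that invariant is exactly the kind of accuracy test the abstract describes.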
NASA Technical Reports Server (NTRS)
Ross, Muriel D.; Cutler, Lynn; Meyer, Glenn; Lam, Tony; Vaziri, Parshaw
1990-01-01
Computer-assisted, 3-dimensional reconstructions of macular receptive fields and of their linkages into a neural network have revealed new information about macular functional organization. Both type I and type II hair cells are included in the receptive fields. The fields are rounded, oblong, or elongated, but gradations between categories are common. Cell polarizations are divergent. Morphologically, each calyx of oblong and elongated fields appears to be an information processing site. Intrinsic modulation of information processing is extensive and varies with the kind of field. Each reconstructed field differs in detail from every other, suggesting that an element of randomness is introduced developmentally and contributes to endorgan adaptability.
Conserved moments in nonequilibrium field dynamics
Mineev-Weinstein, M.B.; Alexander, F.J.
1995-06-01
We demonstrate with the example of Cahn-Hilliard dynamics that the macroscopic kinetics of first-order phase transitions exhibits an infinite number of constants of motion. Moreover, this result holds in any space dimension for a broad class of nonequilibrium processes whose macroscopic behavior is governed by equations of the form ∂φ/∂t = LW(φ), where φ is an "order parameter," W is an arbitrary function of φ, and L is a linear Hermitian operator. We speculate on the implications of this result.
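The simplest instance of such a conserved moment is easy to check numerically: for Cahn-Hilliard dynamics, L = ∇² annihilates constants, so the zeroth moment (total mass) is invariant. A minimal 1-D sketch under an assumed explicit finite-difference discretization (not taken from the paper):

```python
import numpy as np

def laplacian(f, dx=1.0):
    # periodic 1-D Laplacian via circular shifts
    return (np.roll(f, -1) - 2.0 * f + np.roll(f, 1)) / dx**2

def cahn_hilliard_step(phi, dt=0.01, eps2=1.0):
    # phi_t = L W(phi) with L = Laplacian, W = phi^3 - phi - eps2*Laplacian(phi)
    W = phi**3 - phi - eps2 * laplacian(phi)
    return phi + dt * laplacian(W)

rng = np.random.default_rng(1)
phi = 0.1 * rng.standard_normal(128)   # small random initial order parameter
mass0 = phi.sum()
for _ in range(2000):
    phi = cahn_hilliard_step(phi)
# the discrete Laplacian sums to zero, so total mass is conserved to roundoff
print(abs(phi.sum() - mass0))
```

The same argument applies to any moment against a null vector of L, which is the germ of the infinite family of invariants the abstract describes.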
NASA Astrophysics Data System (ADS)
Martin, R. F., Jr.; Holland, D. L.; Svetich, J.
2014-12-01
We consider dynamical signatures of ion motion that discriminate between a current-sheet magnetic field reversal and a magnetic neutral-line field. These two related dynamical systems have been studied previously as chaotic scattering systems with application to the Earth's magnetotail. Both systems exhibit chaotic scattering over a wide range of parameter values. The structure and properties of their respective phase spaces have been used to elucidate potential dynamical signatures that affect spacecraft-measured ion distributions. In this work we consider the problem of discriminating between these two magnetic structures using charged-particle dynamics. For example, we show that signatures based on the well-known energy resonance in the current-sheet field provide good discrimination, since the resonance is not present in the neutral-line case. While both fields can lead to fractal exit-region structuring, their characteristics differ and may also provide some field discrimination. Application to magnetotail field and particle parameters will be presented.
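Because the Lorentz force does no work, test-particle speed is an exact invariant that any integrator for these systems should respect. A hedged sketch of trajectory integration in a Harris-type current-sheet reversal, in dimensionless units with assumed parameter values (not the authors' setup):

```python
import numpy as np

def B_current_sheet(r, Bn=0.1):
    # Harris-type reversal: Bx = tanh(z), plus a small normal component Bz = Bn
    return np.array([np.tanh(r[2]), 0.0, Bn])

def rk4_step(r, v, dt, Bfunc):
    # classical RK4 for dr/dt = v, dv/dt = v x B (charge/mass = 1)
    def acc(r, v):
        return np.cross(v, Bfunc(r))
    k1r, k1v = v, acc(r, v)
    k2r, k2v = v + 0.5*dt*k1v, acc(r + 0.5*dt*k1r, v + 0.5*dt*k1v)
    k3r, k3v = v + 0.5*dt*k2v, acc(r + 0.5*dt*k2r, v + 0.5*dt*k2v)
    k4r, k4v = v + dt*k3v, acc(r + dt*k3r, v + dt*k3v)
    r_new = r + dt/6.0 * (k1r + 2*k2r + 2*k3r + k4r)
    v_new = v + dt/6.0 * (k1v + 2*k2v + 2*k3v + k4v)
    return r_new, v_new

r, v = np.array([0.0, 0.0, 0.5]), np.array([0.3, 0.2, 0.1])
speed0 = np.linalg.norm(v)
for _ in range(5000):
    r, v = rk4_step(r, v, 0.01, B_current_sheet)
print(abs(np.linalg.norm(v) - speed0))   # speed drift, should be tiny
```

Swapping `B_current_sheet` for a neutral-line field is the kind of comparison from which the discriminating signatures are extracted.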
Self: an adaptive pressure arising from self-organization, chaotic dynamics, and neural Darwinism.
Bruzzo, Angela Alessia; Vimal, Ram Lakhan Pandey
2007-12-01
In this article, we establish a model to delineate the emergence of "self" in the brain making recourse to the theory of chaos. Self is considered as the subjective experience of a subject. As essential ingredients of subjective experiences, our model includes wakefulness, re-entry, attention, memory, and proto-experiences. The stability as stated by chaos theory can potentially describe the non-linear function of "self" as sensitive to initial conditions and can characterize it as underlying order from apparently random signals. Self-similarity is discussed as a latent menace of a pathological confusion between "self" and "others". Our test hypothesis is that (1) consciousness might have emerged and evolved from a primordial potential or proto-experience in matter, such as the physical attractions and repulsions experienced by electrons, and (2) "self" arises from chaotic dynamics, self-organization and selective mechanisms during ontogenesis, while emerging post-ontogenically as an adaptive pressure driven by both volume and synaptic-neural transmission and influencing the functional connectivity of neural nets (structure).
Neural substrates and behavioral profiles of romantic jealousy and its temporal dynamics
Sun, Yan; Yu, Hongbo; Chen, Jie; Liang, Jie; Lu, Lin; Zhou, Xiaolin; Shi, Jie
2016-01-01
Jealousy is not only a way of experiencing love but also a stabilizer of romantic relationships, although morbid romantic jealousy is maladaptive. Being engaged in a formal romantic relationship can tune one's romantic jealousy towards a specific target. To date, little is known about how the human brain processes romantic jealousy. Here, by combining scenario-based imagination and functional MRI, we investigated the behavioral and neural correlates of romantic jealousy and their development across stages (before vs. after being in a formal relationship). Romantic jealousy scenarios elicited activations primarily in the basal ganglia (BG) across stages, and both the behavioral rating and BG activation were significantly higher after the relationship was established. The intensity of romantic jealousy was related to the intensity of romantic happiness, which mainly correlated with ventral medial prefrontal cortex activation. The increase in jealousy across stages was associated with the tendency for interpersonal aggression. These results bridge the gap between the theoretical conceptualization of romantic jealousy and its neural correlates and shed light on the dynamic changes in jealousy. PMID:27273024
The neural circuit and synaptic dynamics underlying perceptual decision-making
NASA Astrophysics Data System (ADS)
Liu, Feng
2015-03-01
Decision-making with several choice options is central to cognition. To elucidate the neural mechanisms of multiple-choice motion discrimination, we built a continuous recurrent network model to represent a local circuit in the lateral intraparietal area (LIP). The network is composed of pyramidal cells and interneurons, which are directionally tuned. All neurons are reciprocally connected, and the synaptic connectivity strength is heterogeneous. Specifically, we assume two types of inhibitory connectivity to pyramidal cells: opposite-feature and similar-feature inhibition. The model accounted for both physiological and behavioral data from monkey experiments. The network is endowed with slow excitatory reverberation, which subserves the buildup and maintenance of persistent neural activity, and predominant feedback inhibition, which underlies the winner-take-all competition and attractor dynamics. The opposite-feature and similar-feature inhibition have different effects on decision-making, and only their combination allows for a categorical choice among 12 alternatives. Together, our work highlights the importance of structured synaptic inhibition in multiple-choice decision-making processes.
Coordinated three-dimensional motion of the head and torso by dynamic neural networks.
Kim, J; Hemami, H
1998-01-01
The problem of trajectory tracking control of a three-dimensional (3D) model of the human upper torso and head is considered. The torso and the head are modeled as two rigid bodies connected at one point, and the Newton-Euler method is used to derive the nonlinear differential equations that govern the motion of the system. The two-link system is driven by six pairs of muscle-like actuators that possess physiologically inspired alpha-like and gamma-like inputs, and spindle-like and Golgi-tendon-organ-like outputs. These outputs are utilized as reflex feedback for stability and stiffness control, in a long loop feedback for the purpose of estimating the state of the system (somesthesis), and as part of the input to the controller. Ideal delays of different duration are included in the feedforward and feedback paths of the system to emulate such delays encountered in physiological systems. Dynamical neural networks are trained to learn effective control of the desired maneuvers of the system. The feasibility of the controller is demonstrated by computer simulation of the successful execution of the desired maneuvers. This work demonstrates the capabilities of neural circuits in controlling highly nonlinear systems with multiple delays in their feedforward and feedback paths. The ultimate long-range goal of this research is toward understanding the working of the central nervous system in controlling movement. It is an interdisciplinary effort relying on mechanics, biomechanics, neuroscience, system theory, physiology and anatomy, and its short-range relevance to rehabilitation must be noted. PMID:18255985
Laminar Neural Field Model of Laterally Propagating Waves of Orientation Selectivity
2015-01-01
We construct a laminar neural-field model of primary visual cortex (V1) consisting of a superficial layer of neurons that encode the spatial location and orientation of a local visual stimulus coupled to a deep layer of neurons that only encode spatial location. The spatially-structured connections in the deep layer support the propagation of a traveling front, which then drives propagating orientation-dependent activity in the superficial layer. Using a combination of mathematical analysis and numerical simulations, we establish that the existence of a coherent orientation-selective wave relies on the presence of weak, long-range connections in the superficial layer that couple cells of similar orientation preference. Moreover, the wave persists in the presence of feedback from the superficial layer to the deep layer. Our results are consistent with recent experimental studies that indicate that deep and superficial layers work in tandem to determine the patterns of cortical activity observed in vivo. PMID:26491877
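The front propagation mechanism underlying such models can be illustrated with a standard one-layer Amari neural field, in which an excitatory kernel combined with a Heaviside firing rate supports a traveling front whenever the threshold lies below half the kernel mass. The sketch below is that generic one-layer case, not the two-layer laminar model of the paper; grid, kernel, and threshold values are assumptions.

```python
import numpy as np

# grid and exponential connectivity kernel w(x) = 0.5*exp(-|x|)
N, dx, dt = 400, 0.1, 0.05
x = (np.arange(N) - N // 2) * dx
w = 0.5 * np.exp(-np.abs(x)) * dx          # dx factor: quadrature weight

def f(u, theta=0.3):
    return (u > theta).astype(float)       # Heaviside firing rate

def front(u, theta=0.3):
    return x[np.argmax(u < theta)]         # leftmost point below threshold

u = np.where(x < 0, 1.0, 0.0)              # active region invades from the left
p0 = front(u)
for _ in range(300):
    # u_t = -u + (w * f(u)): forward-Euler step of the Amari field
    conv = np.convolve(f(u), w, mode="same")
    u = u + dt * (-u + conv)
p1 = front(u)
print(p0, p1)   # the front position should have advanced rightward
```

In the laminar model the deep layer plays the role of this front generator, while the superficial layer inherits its orientation-dependent activity.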
DeepCNF-D: Predicting Protein Order/Disorder Regions by Weighted Deep Convolutional Neural Fields.
Wang, Sheng; Weng, Shunyan; Ma, Jianzhu; Tang, Qingming
2015-01-01
Intrinsically disordered proteins or protein regions are involved in key biological processes including regulation of transcription, signal transduction, and alternative splicing. Accurately predicting order/disorder regions ab initio from the protein sequence is a prerequisite step for further analysis of functions and mechanisms for these disordered regions. This work presents a learning method, weighted DeepCNF (Deep Convolutional Neural Fields), to improve the accuracy of order/disorder prediction by exploiting the long-range sequential information and the interdependency between adjacent order/disorder labels and by assigning different weights for each label during training and prediction to solve the label imbalance issue. Evaluated by the CASP9 and CASP10 targets, our method obtains 0.855 and 0.898 AUC values, which are higher than the state-of-the-art single ab initio predictors.
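The label-weighting idea can be separated from the DeepCNF architecture itself: with inverse-frequency class weights, mistakes on the rare "disorder" label contribute more to the loss than mistakes on the abundant "order" label. A toy sketch (the data, weights, and probabilities below are illustrative assumptions, not the paper's model):

```python
import numpy as np

def weighted_cross_entropy(probs, labels, class_weights):
    """Mean weighted cross-entropy.
    probs: (N, C) predicted class probabilities; labels: (N,) int ids;
    class_weights: (C,) per-label weights, e.g. inverse class frequencies."""
    n = len(labels)
    w = class_weights[labels]
    return float(np.mean(-w * np.log(probs[np.arange(n), labels] + 1e-12)))

# toy order(0)/disorder(1) data: 90% ordered, a classifier biased to 'ordered'
labels = np.array([0] * 9 + [1])
probs = np.tile([0.9, 0.1], (10, 1))
freq = np.bincount(labels) / len(labels)
weights = 1.0 / (freq * 2)                 # inverse-frequency weights, C = 2
loss_weighted = weighted_cross_entropy(probs, labels, weights)
loss_uniform = weighted_cross_entropy(probs, labels, np.ones(2))
print(loss_weighted, loss_uniform)         # weighting penalizes the rare-class miss
```

This is only the imbalance-handling ingredient; the reported AUC gains also depend on the convolutional architecture and the CRF label coupling.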
Perceptual and cognitive neural correlates of the useful field of view test in older adults.
O'Brien, Jennifer L; Lister, Jennifer J; Peronto, Carol L; Edwards, Jerri D
2015-10-22
The Useful Field of View Test (UFOV) is often used as a behavioral assessment of age-related decline in visual perception and cognition. Poor performance may reflect slowed processing speed, difficulty dividing attention, and difficulty ignoring irrelevant information. However, the underlying neural correlates of UFOV performance have not been identified. The relationship between older adults' UFOV performance and event-related potential (ERP) components reflecting visual processing was examined. P1 amplitude increased with better UFOV performance involving object identification (subtest 1), suggesting that this task is associated with stimulus processing at an early perceptual level. Better performance in all UFOV subtests was associated with faster speed of processing, as reflected by decreases in P3b latency. Current evidence supports the hypothesis that the UFOV recruits both early perceptual and later cognitive processing involved in attentional control. The implications of these results are discussed. PMID:26236026
Random field estimation approach to robot dynamics
NASA Technical Reports Server (NTRS)
Rodriguez, Guillermo
1990-01-01
The difference equations of Kalman filtering and smoothing recursively factor and invert the covariance of the output of a linear state-space system driven by a white-noise process. Here it is shown that similar recursive techniques factor and invert the inertia matrix of a multibody robot system. The random field models are based on the assumption that all of the inertial (D'Alembert) forces in the system are represented by a spatially distributed white-noise model. They are easier to describe than the models based on classical mechanics, which typically require extensive derivation and manipulation of equations of motion for complex mechanical systems. With the spatially random models, more primitive locally specified computations result in a global collective system behavior equivalent to that obtained with deterministic models. The primary goal of applying random field estimation is to provide a concise analytical foundation for solving robot control and motion planning problems.
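The Kalman recursion invoked here is clearest in its scalar form, where the gain and covariance updates recursively factor and invert the output covariance one measurement at a time. A minimal sketch (model and noise parameters are illustrative, not from the article):

```python
import numpy as np

def kalman_filter(zs, a=1.0, q=0.1, r=0.5, x0=0.0, p0=1.0):
    """Scalar Kalman filter for x_{k+1} = a x_k + w (var q), z_k = x_k + v (var r).
    The gain/covariance recursion is the recursive factorization the article
    reinterprets for multibody inertia matrices."""
    x, p = x0, p0
    estimates = []
    for z in zs:
        x, p = a * x, a * p * a + q        # predict
        k = p / (p + r)                    # Kalman gain
        x = x + k * (z - x)                # measurement update
        p = (1.0 - k) * p                  # covariance update
        estimates.append(x)
    return np.array(estimates), p

# noisy random-walk data matched to the model assumptions
rng = np.random.default_rng(2)
truth = np.cumsum(rng.normal(0.0, np.sqrt(0.1), 200))
zs = truth + rng.normal(0.0, np.sqrt(0.5), 200)
est, p_final = kalman_filter(zs)
print(np.mean((est - truth) ** 2), np.mean((zs - truth) ** 2))
```

The article's point is that the same locally specified recursion, run over the links of a robot, factors the inertia matrix without deriving global equations of motion.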
Learning Depth from Single Monocular Images Using Deep Convolutional Neural Fields.
Liu, Fayao; Shen, Chunhua; Lin, Guosheng; Reid, Ian
2016-10-01
In this article, we tackle the problem of depth estimation from single monocular images. Compared with depth estimation using multiple images such as stereo depth perception, depth from monocular images is much more challenging. Prior work typically focuses on exploiting geometric priors or additional sources of information, most using hand-crafted features. Recently, there is mounting evidence that features from deep convolutional neural networks (CNN) set new records for various vision applications. On the other hand, considering the continuous characteristic of the depth values, depth estimation can be naturally formulated as a continuous conditional random field (CRF) learning problem. Therefore, here we present a deep convolutional neural field model for estimating depths from single monocular images, aiming to jointly explore the capacity of deep CNN and continuous CRF. In particular, we propose a deep structured learning scheme which learns the unary and pairwise potentials of continuous CRF in a unified deep CNN framework. We then further propose an equally effective model based on fully convolutional networks and a novel superpixel pooling method, which is about 10 times faster, to speed up the patch-wise convolutions in the deep model. With this more efficient model, we are able to design deeper networks to pursue better performance. Our proposed method can be used for depth estimation of general scenes with no geometric priors or any extra information injected. In our case, the integral of the partition function can be calculated in a closed form such that we can exactly solve the log-likelihood maximization. Moreover, solving the inference problem for predicting depths of a test image is highly efficient as closed-form solutions exist. Experiments on both indoor and outdoor scene datasets demonstrate that the proposed method outperforms state-of-the-art depth estimation approaches. PMID:26660697
Neural Dynamics of Emotional Salience Processing in Response to Voices during the Stages of Sleep
Chen, Chenyi; Sung, Jia-Ying; Cheng, Yawei
2016-01-01
Sleep has been related to emotional functioning. However, the extent to which emotional salience is processed during sleep is unknown. To address this concern, we investigated night sleep in healthy adults regarding brain reactivity to the emotionally (happily, fearfully) spoken meaningless syllables dada, along with correspondingly synthesized nonvocal sounds. Electroencephalogram (EEG) signals were continuously acquired during an entire night of sleep while we applied a passive auditory oddball paradigm. During all stages of sleep, mismatch negativity (MMN) in response to emotional syllables, which is an index for emotional salience processing of voices, was detected. In contrast, MMN to acoustically matching nonvocal sounds was undetected during Sleep Stage 2 and 3 as well as rapid eye movement (REM) sleep. Post-MMN positivity (PMP) was identified with larger amplitudes during Stage 3, and at earlier latencies during REM sleep, relative to wakefulness. These findings clearly demonstrated the neural dynamics of emotional salience processing during the stages of sleep. PMID:27378870
Han, Seong-Ik; Lee, Jang-Myung
2014-01-01
This paper proposes a backstepping control system that uses a tracking error constraint and recurrent fuzzy neural networks (RFNNs) to achieve a prescribed tracking performance for a strict-feedback nonlinear dynamic system. A new constraint variable was defined to generate the virtual control that forces the tracking error to fall within prescribed boundaries. An adaptive RFNN was also used to obtain the required improvement on the approximation performances in order to avoid calculating the explosive number of terms generated by the recursive steps of traditional backstepping control. The boundedness and convergence of the closed-loop system was confirmed based on the Lyapunov stability theory. The prescribed performance of the proposed control scheme was validated by using it to control the prescribed error of a nonlinear system and a robot manipulator.
Sunlu, F S; Demir, I; Onkal Engin, G; Buyukisik, B; Sunlu, U; Koray, T; Kukrer, S
2009-06-01
The Bay of Izmir, the biggest harbor on the Aegean Sea, is of utmost economic importance for Izmir, the third largest city in Turkey. Most of the studies carried out in the region have focused on the effects of intensive industrial activity and agricultural production on the pollution of the bay. These studies are, most of the time, limited to monitoring the level of pollution. However, it is believed that such studies should be supported with models and statistical analysis techniques, as models, especially predictive ones, provide an important approach to risk assessment. In this study, neural network analysis was used to construct prediction models for nanoplankton population change with nutrients and other environmentally important parameters. The results indicated that, using data over a 52-week period, it is possible to predict nanoplankton population dynamics and dissolved oxygen change for the future.
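The kind of model meant here can be sketched generically: a small one-hidden-layer network trained by gradient descent to map environmental drivers to a population index. Everything below is a synthetic stand-in for the 52-week record; it does not reproduce the study's data or its actual network.

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic "52 weeks": two seasonal environmental drivers and a noisy
# nanoplankton index to be predicted (all values invented for illustration)
t = np.arange(52)
drivers = np.stack([np.sin(2 * np.pi * t / 52),
                    np.cos(2 * np.pi * t / 52)], axis=1)
target = 0.8 * drivers[:, 0] - 0.3 * drivers[:, 1] \
         + 0.05 * rng.standard_normal(52)

# one-hidden-layer network, full-batch gradient descent on mean squared error
W1 = 0.5 * rng.standard_normal((2, 8)); b1 = np.zeros(8)
W2 = 0.5 * rng.standard_normal(8);      b2 = 0.0
lr = 0.05
for _ in range(5000):
    h = np.tanh(drivers @ W1 + b1)
    err = h @ W2 + b2 - target
    gW2 = h.T @ err / 52; gb2 = err.mean()
    gh = np.outer(err, W2) * (1.0 - h**2)      # backprop through tanh
    gW1 = drivers.T @ gh / 52; gb1 = gh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

h = np.tanh(drivers @ W1 + b1)
mse = float(np.mean((h @ W2 + b2 - target) ** 2))
print(mse)
```

A real application would of course use measured nutrient and oxygen series and held-out weeks for validation rather than in-sample fit.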
Sustained neural activity to gaze and emotion perception in dynamic social scenes.
Ulloa, José Luis; Puce, Aina; Hugueville, Laurent; George, Nathalie
2014-03-01
To understand social interactions, we must decode dynamic social cues from seen faces. Here, we used magnetoencephalography (MEG) to study the neural responses underlying the perception of emotional expressions and gaze direction changes as depicted in an interaction between two agents. Subjects viewed displays of paired faces that first established a social scenario of gazing at each other (mutual attention) or gazing laterally together (deviated group attention) and then dynamically displayed either an angry or happy facial expression. The initial gaze change elicited a significantly larger M170 under the deviated than the mutual attention scenario. At around 400 ms after the dynamic emotion onset, responses at posterior MEG sensors differentiated between emotions, and between 1000 and 2200 ms, left posterior sensors were additionally modulated by social scenario. Moreover, activity on right anterior sensors showed both an early and prolonged interaction between emotion and social scenario. These results suggest that activity in right anterior sensors reflects an early integration of emotion and social attention, while posterior activity first differentiated between emotions only, supporting the view of a dual route for emotion processing. Altogether, our data demonstrate that both transient and sustained neurophysiological responses underlie social processing when observing interactions between others. PMID:23202662
ERIC Educational Resources Information Center
Zhang, Yaxu; Zhang, Jinlu; Min, Baoquan
2012-01-01
An event-related potential experiment was conducted to investigate the temporal neural dynamics of animacy processing in the interpretation of classifier-noun combinations. Participants read sentences that had a non-canonical structure, "object noun" + "subject noun" + "verb" + "numeral-classifier" + "adjective". The object noun and its classifier…
Neural dynamics necessary and sufficient for transition into pre-sleep induced by EEG neurofeedback.
Kinreich, Sivan; Podlipsky, Ilana; Jamshy, Shahar; Intrator, Nathan; Hendler, Talma
2014-08-15
The transition from being fully awake to pre-sleep occurs daily just before falling asleep; thus its disturbance might be detrimental. Yet, the neuronal correlates of the transition remain unclear, mainly due to the difficulty in capturing its inherent dynamics. We used an EEG theta/alpha neurofeedback to rapidly induce the transition into pre-sleep and simultaneous fMRI to reveal state-dependent neural activity. The relaxed mental state was verified by the corresponding enhancement in the parasympathetic response. Neurofeedback sessions were categorized as successful or unsuccessful, based on the known EEG signature of theta power increases over alpha, temporally marked as a distinct "crossover" point. The fMRI activation was considered before and after this point. During successful transition into pre-sleep the period before the crossover was signified by alpha modulation that corresponded to decreased fMRI activity mainly in sensory gating related regions (e.g. medial thalamus). In parallel, although not sufficient for the transition, theta modulation corresponded with increased activity in limbic and autonomic control regions (e.g. hippocampus, cerebellum vermis, respectively). The post-crossover period was designated by alpha modulation further corresponding to reduced fMRI activity within the anterior salience network (e.g. anterior cingulate cortex, anterior insula), and in contrast theta modulation corresponded to the increased variance in the posterior salience network (e.g. posterior insula, posterior cingulate cortex). Our findings portray multi-level neural dynamics underlying the mental transition from awake to pre-sleep. To initiate the transition, decreased activity was required in external monitoring regions, and to sustain the transition, opposition between the anterior and posterior parts of the salience network was needed, reflecting shifting from extra- to intrapersonal based processing, respectively.
Gilam, Gadi; Lin, Tamar; Raz, Gal; Azrielant, Shir; Fruchter, Eyal; Ariely, Dan; Hendler, Talma
2015-10-15
In managing our way through interpersonal conflict, anger might be crucial in determining whether the dispute escalates to aggressive behaviors or resolves cooperatively. The Ultimatum Game (UG) is a social decision-making paradigm that provides a framework for studying interpersonal conflict over division of monetary resources. Unfair monetary UG-offers elicit anger and while accepting them engages regulatory processes, rejecting them is regarded as an aggressive retribution. Ventro-medial prefrontal-cortex (vmPFC) activity has been shown to relate to idiosyncratic tendencies in accepting unfair offers possibly through its role in emotion regulation. Nevertheless, standard UG paradigms lack fundamental aspects of real-life social interactions in which one reacts to other people in a response contingent fashion. To uncover the neural substrates underlying the tendency to accept anger-infused ultimatum offers during dynamic social interactions, we incorporated on-line verbal negotiations with an obnoxious partner in a repeated-UG during fMRI scanning. We hypothesized that vmPFC activity will differentiate between individuals with high or low monetary gains accumulated throughout the game and reflect a divergence in the associated emotional experience. We found that as individuals gained more money, they reported less anger but also more positive feelings and had slower sympathetic response. In addition, high-gain individuals had increased vmPFC activity, but also decreased brainstem activity, which possibly reflected the locus coeruleus. During the more angering unfair offers, these individuals had increased dorsal-posterior Insula (dpI) activity which functionally coupled to the medial-thalamus (mT). Finally, both vmPFC activity and dpI-mT connectivity contributed to increased gain, possibly by modulating the ongoing subjective emotional experience. These ecologically valid findings point towards a neural mechanism that might nurture pro-social interactions by
Neuroplasticity in dynamic neural networks comprised of neurons attached to adaptive base plate.
Joghataie, Abdolreza; Shafiei Dizaji, Mehrdad
2016-03-01
In this paper, a learning algorithm is developed for Dynamic Plastic Continuous Neural Networks (DPCNNs) to improve their learning of highly nonlinear time dependent problems. A DPCNN is comprised of a base medium, which is nonlinear and plastic, and a number of neurons that are attached to the base by wire-like connections similar to perceptrons. The information is distributed within DPCNNs gradually and through wave propagation mechanism. While a DPCNN is adaptive due to its connection weights, the material properties of its base medium can also be adjusted to improve its learning. The material of the medium is plastic and can contribute to memorizing the history of input-response similar to neuroplasticity in natural brain. The results obtained from numerical simulation of DPCNNs have been encouraging. Nonlinear plastic finite element modeling has been used for numerical simulation of dynamic behavior and wave propagation in the medium. Two significant differences of DPCNNs with other types of neural networks are that: (1) there is a medium to which the neurons are attached where the medium can contribute to the learning, (2) the input layer is not made of nodes but it is an edge terminal which is capable of receiving a continuous function over the input edge, though it is discretized in the finite element model. A DPCNN is reduced to a perceptron if the medium is removed and the neurons are connected to each other only by wires. Continuity of the input lets the discretization of data take place intrinsically within the DPCNN instead of being applied by the user.
Fu, Jing-Peng; Mo, Wei-Chuan; Liu, Ying; Bartlett, Perry F; He, Rong-Qiao
2016-09-01
Living organisms are exposed to the geomagnetic field (GMF) throughout their lifespan. Elimination of the GMF, resulting in a hypogeomagnetic field (HMF), leads to central nervous system dysfunction and abnormal development in animals. However, the cellular mechanisms underlying these effects have not been identified so far. Here, we show that exposure to an HMF (<200 nT), produced by a magnetic field shielding chamber, promotes the proliferation of neural progenitor/stem cells (NPCs/NSCs) from C57BL/6 mice. Following seven-day HMF-exposure, the primary neurospheres (NSs) were significantly larger in size, and twice more NPCs/NSCs were harvested from neonatal NSs, when compared to the GMF controls. The self-renewal capacity and multipotency of the NSs were maintained, as HMF-exposed NSs were positive for NSC markers (Nestin and Sox2), and could differentiate into neurons and astrocyte/glial cells and be passaged continuously. In addition, adult mice exposed to the HMF for one month were observed to have a greater number of proliferative cells in the subventricular zone. These findings indicate that continuous HMF-exposure increases the proliferation of NPCs/NSCs, in vitro and in vivo. HMF-disturbed NPCs/NSCs production probably affects brain development and function, which provides a novel clue for elucidating the cellular mechanisms of the bio-HMF response. PMID:27484904
Dynamically important magnetic fields near accreting supermassive black holes.
Zamaninasab, M; Clausen-Brown, E; Savolainen, T; Tchekhovskoy, A
2014-06-01
Accreting supermassive black holes at the centres of active galaxies often produce 'jets'--collimated bipolar outflows of relativistic particles. Magnetic fields probably play a critical role in jet formation and in accretion disk physics. A dynamically important magnetic field was recently found near the Galactic Centre black hole. If this is common and if the field continues to near the black hole event horizon, disk structures will be affected, invalidating assumptions made in standard models. Here we report that jet magnetic field and accretion disk luminosity are tightly correlated over seven orders of magnitude for a sample of 76 radio-loud active galaxies. We conclude that the jet-launching regions of these radio-loud galaxies are threaded by dynamically important fields, which will affect the disk properties. These fields obstruct gas infall, compress the accretion disk vertically, slow down the disk rotation by carrying away its angular momentum in an outflow and determine the directionality of jets.
Dutt-Mazumder, Aviroop; Button, Chris; Robins, Anthony; Bartlett, Roger
2011-12-01
Recent studies have explored the organization of player movements in team sports using a range of statistical tools. However, the factors that best explain the performance of association football teams remain elusive. Arguably, this is due to the high-dimensional behavioural outputs that illustrate the complex, evolving configurations typical of team games. According to dynamical systems analysts, movement patterns in team sports exhibit nonlinear self-organizing features. Nonlinear processing tools (i.e., artificial neural networks; ANNs) are becoming increasingly popular for investigating the coordination of participants in sports competitions. ANNs are well suited to describing high-dimensional data sets with nonlinear attributes; however, limited information exists concerning the processes required to apply ANNs. This review investigates the relative value of various ANN learning approaches used in sports performance analysis of team sports, focusing on potential applications for association football. Sixty-two research sources were summarized and reviewed from electronic literature search engines such as SPORTDiscus, Google Scholar, IEEE Xplore, Scirus, ScienceDirect and Elsevier. Typical ANN learning algorithms can be adapted to perform pattern recognition and pattern classification. In particular, dimensionality reduction by a Kohonen feature map (KFM) can compress chaotic high-dimensional datasets into low-dimensional relevant information. Such information would be useful for developing effective training drills that should enhance self-organizing coordination among players. We conclude that ANN-based qualitative analysis is a promising approach for understanding the dynamical attributes of association football players.
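The KFM dimensionality reduction mentioned above can be sketched in a few lines. This is a minimal self-organizing map on synthetic data; the grid size, decay schedules, and the 22-dimensional "player configuration" input are all assumptions for illustration:

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal Kohonen feature map: maps D-dim samples onto a 2-D grid."""
    rng = np.random.default_rng(seed)
    n_units = grid[0] * grid[1]
    weights = rng.normal(size=(n_units, data.shape[1]))
    # Grid coordinates of each unit, used by the neighbourhood function.
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)
    n_steps = epochs * len(data)
    t = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            lr = lr0 * (1 - t / n_steps)             # decaying learning rate
            sigma = sigma0 * (1 - t / n_steps) + 0.5  # shrinking neighbourhood
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))       # Gaussian neighbourhood
            weights += lr * h[:, None] * (x - weights)
            t += 1
    return weights, coords

def project(data, weights, coords):
    """Compress each sample to the 2-D grid position of its best-matching unit."""
    bmus = np.argmin(((weights[None] - data[:, None]) ** 2).sum(-1), axis=1)
    return coords[bmus]

# Toy "player configuration" data: 100 samples in 22 dimensions.
rng = np.random.default_rng(1)
data = rng.normal(size=(100, 22))
w, c = train_som(data)
low_dim = project(data, w, c)
print(low_dim.shape)  # (100, 2)
```

Each high-dimensional sample is thus reduced to a position on the 2-D map, which is the low-dimensional representation the review refers to.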
Electron dynamics in nanostructures subjected to a laser field
NASA Astrophysics Data System (ADS)
Bubin, Sergiy; Driscoll, Joseph; Varga, Kalman
2010-03-01
Recent experiments (Zhu et al., J. Appl. Phys. 102, 114302 (2007); Gabor et al., Science, 325, 1367 (2009)) have shown that application of a laser field can significantly influence the electron dynamics in nanostructures. The study of such phenomena is vital both for fundamental understanding as well as for technological applications. We use time-dependent density functional theory to study how laser fields affect electron dynamics in nanostructures. Examples include the enhancement of field emission from carbon nanotubes (CNT) and effects on transport properties of a CNT-based nanowire.
Monitoring the Earth's Dynamic Magnetic Field
Love, Jeffrey J.; Applegate, David; Townshend, John B.
2008-01-01
The mission of the U.S. Geological Survey's Geomagnetism Program is to monitor the Earth's magnetic field. Using ground-based observatories, the Program provides continuous records of magnetic field variations covering long timescales; disseminates magnetic data to various governmental, academic, and private institutions; and conducts research into the nature of geomagnetic variations for purposes of scientific understanding and hazard mitigation. The program is an integral part of the U.S. Government's National Space Weather Program (NSWP), which also includes programs in the National Aeronautics and Space Administration (NASA), the Department of Defense (DOD), the National Oceanic and Atmospheric Administration (NOAA), and the National Science Foundation (NSF). The NSWP works to provide timely, accurate, and reliable space weather warnings, observations, specifications, and forecasts, and its work is important for the U.S. economy and national security. Please visit the National Geomagnetism Program's website, http://geomag.usgs.gov, where you can learn more about the Program and the science of geomagnetism. You can find additional related information at the Intermagnet website, http://www.intermagnet.org.
Shlizerman, Eli; Riffell, Jeffrey A.; Kutz, J. Nathan
2014-01-01
The antennal lobe (AL), the olfactory processing center in insects, is able to process stimuli into distinct neural activity patterns, called olfactory neural codes. To model their dynamics we perform multichannel recordings from the projection neurons in the AL driven by different odorants. We then derive a dynamic neuronal network from the electrophysiological data. The network consists of lateral-inhibitory neurons and excitatory neurons (modeled as firing-rate units), and is capable of producing unique olfactory neural codes for the tested odorants. To construct the network, we (1) design a projection, an odor space, for the neural recordings from the AL that discriminates between the trajectories of distinct odorants; (2) characterize scent recognition, i.e., decision-making based on olfactory signals; and (3) infer the wiring of the neural circuit, the connectome of the AL. We show that the constructed model is consistent with biological observations, such as contrast enhancement and robustness to noise. The study suggests a data-driven approach to answering a key biological question: identifying how lateral inhibitory neurons can be wired to excitatory neurons to permit robust activity patterns. PMID:25165442
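A minimal sketch of a firing-rate network with lateral inhibition of the kind described, using assumed dynamics and random connectivity rather than the connectome inferred in the study; distinct inputs settle to distinct steady activity patterns, a toy stand-in for olfactory neural codes:

```python
import numpy as np

# Firing-rate units with weak random excitation and dominant lateral
# inhibition; tau * dr/dt = -r + relu(W @ r + stim). All values are assumed.
rng = np.random.default_rng(0)
N = 20
W = 0.1 * rng.random((N, N)) - 0.3 * (1 - np.eye(N))
np.fill_diagonal(W, 0)

def respond(stim, tau=10.0, dt=0.1, n_steps=2000):
    """Relax the network to its steady response for a given input pattern."""
    r = np.zeros(N)
    for _ in range(n_steps):
        r += dt / tau * (-r + np.maximum(0.0, W @ r + stim))
    return r

odor_a, odor_b = rng.random(N), rng.random(N)  # two toy "odorant" inputs
ra, rb = respond(odor_a), respond(odor_b)
cos = ra @ rb / (np.linalg.norm(ra) * np.linalg.norm(rb))
print(ra.max() > 0 and not np.allclose(ra, rb))  # True: distinct nonzero codes
```

The inhibition sharpens the response (a crude form of the contrast enhancement the abstract mentions), and the steady-state vector per stimulus is the "code".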
Guarneri, Paolo; Rocca, Gianpiero; Gobbi, Massimiliano
2008-09-01
This paper deals with the simulation of tire/suspension dynamics using recurrent neural networks (RNNs). RNNs are derived from multilayer feedforward neural networks by adding feedback connections between the output and input layers. The optimal network architecture derives from a parametric analysis based on the optimal tradeoff between network accuracy and size. The neural network can be trained with experimental data obtained in the laboratory from simulated road profiles (cleats). The results obtained from the neural network demonstrate good agreement with the experimental results over a wide range of operating conditions. The NN model can be effectively applied as part of a vehicle system model to accurately predict elastic bushing and tire dynamic behavior. Although the neural network model, as a black-box model, does not provide good insight into the physical behavior of the tire/suspension system, it is a useful tool for assessing vehicle ride and noise, vibration and harshness (NVH) performance due to its good computational efficiency and accuracy.
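The architecture described, a feedforward network with feedback from the output back to the input layer, can be sketched as a forward pass over a road-profile-like sequence; the weights here are random and untrained, purely to show the recurrence:

```python
import numpy as np

# A two-layer feedforward net whose previous output is fed back as an extra
# input, echoing the RNN construction described above. Sizes and the
# cleat-like road profile are illustrative assumptions.
rng = np.random.default_rng(0)
n_hidden = 8
W1 = rng.normal(scale=0.5, size=(n_hidden, 2))   # inputs: [road, previous output]
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=n_hidden)

def step(u, y_prev):
    """One time step: output depends on the current input and the fed-back output."""
    h = np.tanh(W1 @ np.array([u, y_prev]) + b1)
    return float(W2 @ h)

road = np.zeros(50)
road[10:15] = 1.0                                # a single cleat bump
y, outputs = 0.0, []
for u in road:
    y = step(u, y)
    outputs.append(y)
print(len(outputs))  # 50
```

Training such a net (e.g., by backpropagation through time) would fit `W1`, `b1`, `W2` to the measured tire/suspension response; only the untrained forward dynamics are shown here.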
Ferroelectric domain dynamics under an external field
NASA Astrophysics Data System (ADS)
Rappe, Andrew; Shin, Young-Han; Grinberg, Ilya; Chen, I.-Wei
2007-03-01
Ferroelectric oxides with the perovskite structure are promising materials for nonvolatile random access computer memories. PbZr1-xTixO3 is currently used for this purpose. In these materials, storage of a bit involves the reorientation of polarization, or the movement of a ferroelectric domain wall. However, the intrinsic properties of the polarization reversal process of ferroelectrics at the microscopic level still have not been revealed, either by experiments or computations. In this talk, I will show how this problem can be studied with a multi-scale approach. First, an interatomic potential is parameterized to first-principles calculations, and molecular dynamics (MD) simulations are performed. Second, stochastic Monte Carlo simulations are conducted, with nucleation and growth rates extracted from the MD simulations. For PbTiO3, we find that while the overall domain-wall speed from our calculation is in good agreement with the recent experiments, the size of the critical nucleus is much smaller than predicted from the Miller-Weinreich model. We think that this discrepancy can be explained by a diffuse-boundary model and by the fact that the overall wall motion is controlled by both the nucleation and growth processes.
A dynamic model of Venus's gravity field
NASA Technical Reports Server (NTRS)
Kiefer, W. S.; Richards, M. A.; Hager, B. H.; Bills, B. G.
1984-01-01
Unlike Earth, long wavelength gravity anomalies and topography correlate well on Venus. Venus's admittance curve from spherical harmonic degree 2 to 18 is inconsistent with either Airy or Pratt isostasy, but is consistent with dynamic support from mantle convection. A model using whole mantle flow and a high viscosity near surface layer overlying a constant viscosity mantle reproduces this admittance curve. On Earth, the effective viscosity deduced from geoid modeling increases by a factor of 300 from the asthenosphere to the lower mantle. These viscosity estimates may be biased by the neglect of lateral variations in mantle viscosity associated with hot plumes and cold subducted slabs. The different effective viscosity profiles for Earth and Venus may reflect their convective styles, with tectonism and mantle heat transport dominated by hot plumes on Venus and by subducted slabs on Earth. Convection at degree 2 appears much stronger on Earth than on Venus. A degree 2 convective structure may be unstable on Venus, but may have been stabilized on Earth by the insulating effects of the Pangean supercontinental assemblage.
Technology Transfer Automated Retrieval System (TEKTRAN)
Non-linear regression techniques are used widely to fit weed field emergence patterns to soil microclimatic indices using S-type functions. Artificial neural networks present interesting and alternative features for such modeling purposes. In this work, a univariate hydrothermal-time based Weibull m...
Yamada, Tatsuro; Murata, Shingo; Arie, Hiroaki; Ogata, Tetsuya
2016-01-01
To work cooperatively with humans by using language, robots must not only acquire a mapping between language and their behavior but also autonomously utilize the mapping in appropriate contexts of interactive tasks online. To this end, we propose a novel learning method linking language to robot behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of language and behavior but as sequential data constructed from the actual temporal flow of the task. By doing this, the internal dynamics of the network models both language-behavior relationships and the temporal patterns of interaction. Here, "internal dynamics" refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing where in the interaction context it is as its current state, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate behavior in response to a human's linguistic instruction. After learning, the network actually formed an attractor structure representing both the language-behavior relationships and the task's temporal pattern in its internal dynamics. In the dynamics, language-behavior mapping was achieved by a branching structure. Repetition of the human's instruction and the robot's behavioral response was represented as a cyclic structure, and waiting for a subsequent instruction was represented as a fixed-point attractor. Thanks to this structure, the robot was able to interact online with a human on the given task by autonomously switching phases.
Bamford, Simeon A; Hogri, Roni; Giovannucci, Andrea; Taub, Aryeh H; Herreros, Ivan; Verschure, Paul F M J; Mintz, Matti; Del Giudice, Paolo
2012-07-01
A very-large-scale integration field-programmable mixed-signal array specialized for neural signal processing and neural modeling has been designed. This has been fabricated as a core on a chip prototype intended for use in an implantable closed-loop prosthetic system aimed at rehabilitation of the learning of a discrete motor response. The chosen experimental context is cerebellar classical conditioning of the eye-blink response. The programmable system is based on the intimate mixing of switched capacitor analog techniques with low speed digital computation; power saving innovations within this framework are presented. The utility of the system is demonstrated by the implementation of a motor classical conditioning model applied to eye-blink conditioning in real time with associated neural signal processing. Paired conditioned and unconditioned stimuli were repeatedly presented to an anesthetized rat and recordings were taken simultaneously from two precerebellar nuclei. These paired stimuli were detected in real time from this multichannel data. This resulted in the acquisition of a trigger for a well-timed conditioned eye-blink response, and repetition of unpaired trials constructed from the same data led to the extinction of the conditioned response trigger, compatible with natural cerebellar learning in awake animals.
Feng, Yinghua; Harper, Willie F
2013-11-30
In this study, microbial fuel cell (MFC)-based biosensing was integrated with artificial neural networks (ANNs) in laboratory and field testing of water samples. Inoculation revealed two types of anode-respiring bacteria (ARB) induction profiles: a relatively slow, gradual profile and a faster profile that was preceded by a significant lag time. During laboratory testing, the MFCs generated well-organized, normally distributed profiles, but during field experiments the peaks had irregular shapes and were smaller in magnitude. Generally, the COD concentration correlated better with peak area than with peak height. The ANN predicted the COD concentration (R(2) = 0.99) with one layer of hidden neurons and for concentrations as low as 5 mg acetate-COD/L. Adding 50 mM of 2-bromoethanesulfonate amplified the electrical signals when glucose was the substrate. This report is the first to identify two types of ARB induction profiles and to demonstrate the power of ANNs for interpreting a wide variety of electrical response peaks.
Liu, Ziyi; Gao, Junfeng; Yang, Guoguo; Zhang, Huan; He, Yong
2016-01-01
We present a pipeline for the visual localization and classification of agricultural pest insects by computing a saliency map and applying deep convolutional neural network (DCNN) learning. First, we used a global contrast region-based approach to compute a saliency map for localizing pest insect objects. Bounding squares containing targets were then extracted, resized to a fixed size, and used to construct a large standard database called Pest ID. This database was then utilized for self-learning of local image features which were, in turn, used for classification by DCNN. DCNN learning optimized the critical parameters, including the size, number and convolutional stride of local receptive fields, the dropout ratio and the final loss function. To demonstrate the practical utility of using DCNN, we explored different architectures by shrinking depth and width, and found effective sizes that can act as alternatives for practical applications. On the test set of paddy field images, our architectures achieved a mean Average Precision (mAP) of 0.951, a significant improvement over previous methods. PMID:26864172
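The global-contrast saliency step can be illustrated with a histogram-based sketch on a toy grayscale image (the actual pipeline's formulation and parameters are not given in the abstract, so this is only one common variant):

```python
import numpy as np

def global_contrast_saliency(img):
    """Sketch of global-contrast saliency: each pixel's saliency is its mean
    intensity distance to every other pixel, computed via the histogram."""
    levels = np.arange(256)
    hist = np.bincount(img.ravel(), minlength=256)
    # dist[v] = sum over all pixels p of |v - intensity(p)|
    dist = np.abs(levels[:, None] - levels[None, :]) @ hist
    sal = dist[img].astype(float)
    return sal / sal.max()

# Toy grayscale "field image": dark background with a bright insect-like blob.
img = np.full((40, 40), 30, dtype=np.uint8)
img[15:20, 18:24] = 220
sal = global_contrast_saliency(img)
print(sal[17, 20])  # 1.0 -> the rare bright blob is the most salient region
```

Thresholding such a map and taking a bounding square around the salient region gives the localized candidates that are then fed to the classifier.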
Shen, Hui-min; Lee, Kok-Meng; Hu, Liang; Foong, Shaohui; Fu, Xin
2016-01-01
Localization of an active neural source (ANS) from measurements on the head surface is vital in magnetoencephalography. As neuron-generated magnetic fields are extremely weak, significant uncertainties caused by stochastic measurement interference complicate its localization. This paper presents a novel computational method, based on the magnetic field reconstructed from sparse noisy measurements, for enhanced ANS localization by suppressing the effects of unrelated noise. In this approach, the magnetic flux density (MFD) in the nearby current-free space outside the head is reconstructed from measurements by formulating the infinite series solution of Laplace's equation, where boundary condition (BC) integrals over the entire measurements provide a "smooth" reconstructed MFD with reduced unrelated noise. Using a gradient-based method, reconstructed MFDs with good fidelity are selected for enhanced ANS localization. The reconstruction model, spatial interpolation of the BC, the parametric equivalent current dipole-based inverse estimation algorithm using reconstruction, and the gradient-based selection are detailed and validated. The influences of various source depths and measurement signal-to-noise ratio levels on the estimated ANS location are analyzed numerically and compared with a traditional method (where measurements are used directly), and it is demonstrated that gradient-selected high-fidelity reconstructed data can effectively improve the accuracy of ANS localization. PMID:26358243
Saracoglu, Ö. Galip
2008-01-01
This paper describes artificial neural network (ANN)-based prediction of the response of a fiber optic sensor using evanescent field absorption (EFA). The sensing probe of the sensor is made up of a bundle of five PCS fibers to maximize the interaction of the evanescent field with the absorbing medium. Different backpropagation algorithms are used to train the multilayer perceptron ANN. The Levenberg-Marquardt algorithm, as well as the other algorithms used in this work, successfully predicts the sensor responses.
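A hedged sketch of Levenberg-Marquardt training for a small MLP, with a synthetic exponential curve standing in for the (unavailable) evanescent-field absorption data and a numerical Jacobian in place of analytic backpropagation:

```python
import numpy as np

# Levenberg-Marquardt on a tiny 1-6-1 tanh MLP. The exp(-3*C) target is an
# assumed Beer-Lambert-like stand-in, not the paper's measured sensor response.
rng = np.random.default_rng(0)
C = np.linspace(0, 1, 40)                      # normalized absorber concentration
y = np.exp(-3.0 * C)                           # synthetic sensor response

n_hidden = 6
def unpack(p):
    W1 = p[:n_hidden].reshape(n_hidden, 1)
    b1 = p[n_hidden:2 * n_hidden]
    W2 = p[2 * n_hidden:3 * n_hidden]
    return W1, b1, W2, p[-1]

def predict(p, x):
    W1, b1, W2, b2 = unpack(p)
    return np.tanh(x[:, None] @ W1.T + b1) @ W2 + b2

p = rng.normal(scale=0.5, size=3 * n_hidden + 1)
mu = 1e-2                                      # LM damping factor
for _ in range(200):
    r = predict(p, C) - y                      # residuals
    # Central-difference Jacobian of the residuals w.r.t. the parameters.
    J = np.empty((C.size, p.size))
    for j in range(p.size):
        dp = np.zeros_like(p); dp[j] = 1e-6
        J[:, j] = (predict(p + dp, C) - predict(p - dp, C)) / 2e-6
    step = np.linalg.solve(J.T @ J + mu * np.eye(p.size), -J.T @ r)
    if np.sum((predict(p + step, C) - y) ** 2) < np.sum(r ** 2):
        p += step; mu *= 0.7                   # accept step: relax damping
    else:
        mu *= 2.0                              # reject step: increase damping
mse = np.mean((predict(p, C) - y) ** 2)
print(mse)
```

The damping factor `mu` interpolates between Gauss-Newton (small `mu`) and gradient descent (large `mu`), which is why LM tends to converge quickly on small regression problems like this one.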
Ly, Cheng; Ermentrout, G Bard
2009-06-01
The response of neurons to external stimuli greatly depends on the intrinsic dynamics of the network. Here, the intrinsic dynamics are modeled as coupling and the external input is modeled as shared and unshared noise. We assume the neurons are repetitively firing action potentials (i.e., neural oscillators), are weakly and identically coupled, and that the external noise is weak. Shared noise can induce bistability between the synchronous and anti-phase states even though the anti-phase state is the only stable state in the absence of noise. We study the Fokker-Planck equation of the system and perform an asymptotic reduction, yielding the leading-order solution ρ0. The ρ0 solution is more computationally efficient than both the Monte Carlo simulations and the 2D Fokker-Planck solver, and agrees remarkably well with the full system with weak noise and weak coupling. With moderate noise and coupling, ρ0 is still qualitatively correct despite the small-noise and small-coupling assumption in the asymptotic reduction. Our phase model accurately predicts the behavior of a realistic synaptically coupled Morris-Lecar system.
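The deterministic backbone of such a phase model can be sketched by Monte Carlo simulation of the phase difference; the interaction function below is an assumption (chosen so that anti-phase is the only noise-free stable state, as in the abstract), and the shared-noise-induced bistability itself is not reproduced in this simplified version:

```python
import numpy as np

# Phase difference phi = theta_2 - theta_1 of two weakly coupled oscillators:
#   dphi = -2*eps*H(phi) dt + sqrt(2*D) dW,  with assumed H(x) = -sin(x),
# so phi = pi (anti-phase) is the only stable state without noise.
rng = np.random.default_rng(0)
eps, D = 0.05, 0.005                     # weak coupling, weak (unshared) noise
dt, n_steps, n_trials = 0.01, 20000, 200

phi = rng.uniform(0.0, 2 * np.pi, n_trials)
for _ in range(n_steps):
    drift = -2.0 * eps * (-np.sin(phi))  # pulls phi toward the anti-phase state
    phi += drift * dt + np.sqrt(2 * D * dt) * rng.normal(size=n_trials)
phi %= 2 * np.pi
print(np.abs(phi - np.pi).mean() < 0.5)  # True: trials cluster at anti-phase
```

The paper's ρ0 reduction replaces exactly this kind of trial-averaged Monte Carlo estimate with an asymptotic density for the phase difference, which is why it is cheaper to evaluate.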
On Dynamics of Integrate-and-Fire Neural Networks with Conductance Based Synapses
Cessac, Bruno; Viéville, Thierry
2008-01-01
We present a mathematical analysis of networks with integrate-and-fire (IF) neurons with conductance based synapses. Taking into account the realistic fact that the spike time is only known within some finite precision, we propose a model where spikes are effective at times multiple of a characteristic time scale δ, where δ can be arbitrary small (in particular, well beyond the numerical precision). We make a complete mathematical characterization of the model-dynamics and obtain the following results. The asymptotic dynamics is composed by finitely many stable periodic orbits, whose number and period can be arbitrary large and can diverge in a region of the synaptic weights space, traditionally called the “edge of chaos”, a notion mathematically well defined in the present paper. Furthermore, except at the edge of chaos, there is a one-to-one correspondence between the membrane potential trajectories and the raster plot. This shows that the neural code is entirely “in the spikes” in this case. As a key tool, we introduce an order parameter, easy to compute numerically, and closely related to a natural notion of entropy, providing a relevant characterization of the computational capabilities of the network. This allows us to compare the computational capabilities of leaky and IF models and conductance based models. The present study considers networks with constant input, and without time-dependent plasticity, but the framework has been designed for both extensions. PMID:18946532
Jiang, Jun; Zhang, Qinglin; Van Gaal, Simon
2015-07-14
Although previous work has shown that conflict can be detected in the absence of awareness, it is unknown how different sources of conflict (i.e., semantic, response) are processed in the human brain and whether these processes are differently modulated by conflict awareness. To explore this issue, we extracted oscillatory power dynamics from electroencephalographic (EEG) data recorded while human participants performed a modified version of the Stroop task. Crucially, in this task conflict awareness was manipulated by masking a conflict-inducing color word preceding a color patch target. We isolated semantic from response conflict by introducing four color words/patches, of which two were matched to the same response. We observed that both semantic as well as response conflict were associated with mid-frontal theta-band and parietal alpha-band power modulations, irrespective of the level of conflict awareness (high vs. low), although awareness of conflict increased these conflict-related power dynamics. These results show that both semantic and response conflict can be processed in the human brain and suggest that the neural oscillatory mechanisms in EEG reflect mainly "domain general" conflict processing mechanisms, instead of conflict source specific effects.
Injury to the Spinal Cord Niche Alters the Engraftment Dynamics of Human Neural Stem Cells
Sontag, Christopher J.; Uchida, Nobuko; Cummings, Brian J.; Anderson, Aileen J.
2014-01-01
The microenvironment is a critical mediator of stem cell survival, proliferation, migration, and differentiation. The majority of preclinical studies involving transplantation of neural stem cells (NSCs) into the CNS have focused on injured or degenerating microenvironments, leaving a dearth of information as to how NSCs differentially respond to intact versus damaged CNS. Furthermore, single, terminal histological endpoints predominate, providing limited insight into the spatiotemporal dynamics of NSC engraftment and migration. We investigated the early and long-term engraftment dynamics of human CNS stem cells propagated as neurospheres (hCNS-SCns) following transplantation into uninjured versus subacutely injured spinal cords of immunodeficient NOD-scid mice. We stereologically quantified engraftment, survival, proliferation, migration, and differentiation at 1, 7, 14, 28, and 98 days posttransplantation, and identified injury-dependent alterations. Notably, the injured microenvironment decreased hCNS-SCns survival, delayed and altered the location of proliferation, influenced both total and fate-specific migration, and promoted oligodendrocyte maturation. PMID:24936450
Trenado, Carlos; Haab, Lars; Strauss, Daniel J
2007-01-01
Auditory evoked cortical potentials (AECPs) are well established as a diagnostic tool in audiology and are gaining more and more impact in experimental neuropsychology, neuroscience, and psychiatry, e.g., for attention deficit disorder, schizophrenia, or for studying tinnitus decompensation. The modulation of AECPs due to exogenous and endogenous attention plays a major role in many clinical applications and has been studied experimentally in neuropsychology. However, the relation of corticothalamic feedback dynamics to focal and non-focal attention, and its large-scale effect reflected in AECPs, is far from being understood. In this paper, we model neural correlates of auditory attention reflected in AECPs using corticothalamic feedback dynamics. We present a mapping of a recently developed multiscale model of evoked potentials to the hearing path and discuss for the first time its neurofunctionality in terms of corticothalamic feedback loops related to focal and non-focal attention. Our model reinforces recent experimental results related to online attention monitoring using AECPs, with application as an objective measure of tinnitus decompensation. It is concluded that our model presents a promising approach to gaining a deeper understanding of the neurodynamics of auditory attention and might be used as an efficient forward model to reinforce hypotheses obtained from experimental paradigms involving AECPs. PMID:18002948
Mascotte-Cruz, Juan Uriel; Ríos, Amelia; Escalante, Bruno
2016-01-01
Differentiation of bone marrow-derived mesenchymal stem cells (MSCs) into a neural phenotype has been induced by either flow-induced shear stress (FSS) or electromagnetic fields (EMF). However, the procedures are still expensive and time consuming. In the present work, induction for 1 h with the combination of both forces showed the presence of the neural precursor nestin as early as 9 h in culture after treatment, and this result lasted for the following 6 d. In conclusion, the use of a combination of FSS and EMF for a short time results in neurite-like cells, although further investigation is required to analyze cell functionality.
Nakao, M; Takahashi, T; Mizutani, Y; Yamamoto, M
1990-01-01
We have found that single neuronal activities in different regions of the brain commonly exhibit a distinct dynamics transition during the sleep-waking cycle in cats. In particular, the power spectral densities of single neuronal activities change their profiles from white to 1/f as the sleep cycle moves from slow wave sleep (SWS) to paradoxical sleep (PS). Each region has a different neural network structure and physiological function. This suggests that a globally working mechanism may underlie the dynamics transition of concern. Pharmacological studies have shown that a change in a wide-spread serotonergic input to these regions possibly causes the neuronal dynamics transition during the sleep cycle. In this paper, based on these experimental results, an asynchronous, symmetric neural network model including an inhibitory input, which represents the role of the serotonergic system, is utilized to examine the reality of our idea that the inhibitory input level varying during the sleep cycle induces that transition. Simulation results show that a globally applied inhibitory input can control the dynamics of single neuronal state evolution in the artificial neural network: 1/f-like power spectral density profiles result under weak inhibition, which possibly corresponds to PS, and white profiles under strong inhibition, which possibly corresponds to SWS. An asynchronous neural network is known to change its state according to its energy function. The geometrical structure of the network energy function is thought to vary with the change in inhibitory level, which is expected to cause the dynamics transition of neuronal state evolution in the network model. These simulation results support the possibility that the serotonergic system is essential for the dynamics transition of single neuronal activities during the sleep cycle.
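A minimal sketch of the simulation idea: an asynchronous network with symmetric random weights and a tunable global inhibitory input, recording one unit's activity and its power spectrum under weak versus strong inhibition (all parameters are assumed, not taken from the paper):

```python
import numpy as np

# Asynchronous stochastic updates on a symmetric random network; a global
# inhibitory input is subtracted from every unit's local field. Weak vs
# strong inhibition stands in for PS-like vs SWS-like conditions.
rng = np.random.default_rng(0)
N = 64
J = rng.normal(size=(N, N))
J = (J + J.T) / 2.0                      # symmetric weights -> an energy function exists
np.fill_diagonal(J, 0)

def run(inhibition, n_steps=4000, beta=2.0):
    s = rng.choice([-1.0, 1.0], N)
    trace = np.empty(n_steps)
    for t in range(n_steps):
        i = rng.integers(N)              # asynchronous: update one unit at a time
        h = J[i] @ s - inhibition        # local field minus global inhibition
        p_up = 0.5 * (1.0 + np.tanh(beta * h))   # Glauber acceptance probability
        s[i] = 1.0 if rng.random() < p_up else -1.0
        trace[t] = s[0]                  # single-unit activity record
    return trace

def psd(trace):
    """Power spectral density of one unit's state sequence."""
    f = np.fft.rfftfreq(trace.size)
    p = np.abs(np.fft.rfft(trace - trace.mean())) ** 2
    return f[1:], p[1:]

f_w, p_weak = psd(run(inhibition=0.5))   # weak inhibition (PS-like regime)
f_s, p_strong = psd(run(inhibition=8.0)) # strong inhibition (SWS-like regime)
print(p_weak.shape)  # (2000,)
```

Comparing the slope of `p_weak` versus `p_strong` on log-log axes is the kind of diagnostic the paper uses to distinguish 1/f-like from white spectral profiles.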
Dynamic changes in connexin expression following engraftment of neural stem cells to striatal tissue
Jaederstad, Johan; Jaederstad, Linda Maria; Herlenius, Eric
2011-01-01
Gap-junctional intercellular communication between grafted neural stem cells (NSCs) and host cells seems to be essential for many of the beneficial effects associated with NSC engraftment. Utilizing murine NSCs (mNSCs) grafted into an organotypic ex vivo model system for striatal tissue, we examined the prerequisites for the formation of gap-junctional couplings between graft and host cells at different time points following implantation. To this end we used flow cytometry (to quantify the proportion of connexin (Cx) 26- and 43-expressing cells), immunohistochemistry (to localize the gap-junctional proteins in graft and host cells), dye-transfer studies with and without pharmacological gap-junction blockers (to assay the functionality of the formed gap-junctional couplings), and proliferation assays (to estimate the role of gap junctions in NSC well-being). Immunohistochemical staining and dye-transfer studies revealed that the NSCs already form functional gap junctions prior to engraftment, thereby creating a substrate for subsequent graft-host communication. The expression of Cx43 by grafted NSCs was decreased by neurotrophin-3 overexpression in NSCs and by culturing of grafted tissue in serum-free Neurobasal B27 medium. Cx43 expression in NSC-derived cells also changed significantly following engraftment. In host cells the expression of Cx43 peaked following traumatic stimulation and then declined within two weeks, suggesting a window of opportunity for successful host cell rescue by NSC engraftment. Further investigation of the dynamic changes in gap-junction expression in graft and host cells, and of the associated variations in intercellular communication between implanted and endogenous cells, might help to understand and control the early positive and negative effects evident following neural stem cell transplantation and thereby optimize the outcome of future clinical NSC transplantation therapies.
The influence of mental fatigue and motivation on neural network dynamics; an EEG coherence study.
Lorist, Monicque M; Bezdan, Eniko; ten Caat, Michael; Span, Mark M; Roerdink, Jos B T M; Maurits, Natasha M
2009-05-13
The purpose of the present study is to examine the effects of mental fatigue and motivation on the neural network dynamics activated during task switching. Mental fatigue was induced by 2 h of continuous task performance, after which social comparison and monetary reward were used as motivating factors to encourage subjects to perform well for an additional 20 min. EEG coherence was used as a measure of synchronization of brain activity. Electrodes of interest were identified using a data-driven pre-processing method (ten Caat, M., Lorist, M.M., Bezdan, E., Roerdink, J.B.T.M., Maurits, N.M., 2008a. High-density EEG coherence analysis using functional units applied to mental fatigue. J. Neurosci. Meth. 171, 271-278; ten Caat, M., Maurits, N.M., Roerdink, J.B.T.M., 2008b. Data-driven visualization and group analysis of multichannel EEG coherence with functional units. IEEE T. Vis. Comp. Gr. 14, 756-771). Performance on repetition trials was faster and more accurate than on switch trials. EEG data revealed more pronounced, frequency-specific fronto-parietal network activation in switch trials, while power density was higher in repetition trials. The effects of mental fatigue on power and coherence were widespread and not limited to specific frequency bands. Moreover, these effects were independent of specific task manipulations. This increase in neuronal activity and stronger synchronization between neural networks did not result in more efficient performance; response speed decreased and the number of errors increased in fatigued subjects. A modulation of the dopamine system is proposed as a common mechanism underlying the observed fatigue effects.
High dynamic range diamond magnetometry for time dependent magnetic fields
NASA Astrophysics Data System (ADS)
Ummal Momeen, M.; Nusran, N. M.; Gurudev Dutt, M. V.
2012-02-01
Nitrogen-Vacancy (NV) centers in diamond have become a topic of great interest in recent years due to their promising applications in high resolution nanoscale magnetometry and quantum information processing devices at ambient conditions. We will present our recent progress on implementing novel phase estimation algorithms with a single electron spin qubit associated with the NV center, in combination with dynamical decoupling techniques, to improve the dynamic range and sensitivity of magnetometry with time-varying magnetic fields.
Electron Dynamics in Nanostructures in Strong Laser Fields
Kling, Matthias
2014-09-11
The goal of our research was to gain deeper insight into the collective electron dynamics of nanosystems in strong, ultrashort laser fields. The laser field strengths were strong enough to extract and accelerate electrons from the nanoparticles and to transiently modify the material's electronic properties. We aimed to observe, with sub-cycle resolution reaching the attosecond time domain, how collective electronic excitations in nanoparticles are formed, how the strong field influences the optical and electrical properties of the nanomaterial, and how the excitations decay in the presence of strong fields.
Ansari, N; Hou, E H; Yu, Y
1995-01-01
Reports a new method for optimizing satellite broadcasting schedules based on the Hopfield neural model combined with mean field annealing theory. A clamping technique is used with an associative matrix, thus reducing the dimensions of the solution space. A formula for estimating the critical temperature for the mean field annealing procedure is derived, making the updating of the mean field theory equations more economical. Several factors in the numerical implementation of the mean field equations by a straightforward iteration method that may cause divergence are discussed, and methods to avoid this kind of divergence are proposed. Excellent results are consistently found for problems of various sizes.
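A generic mean-field-annealing relaxation of this kind can be sketched as follows. This is a plain Hopfield-energy version with a damped fixed-point iteration (the damping is one of the standard guards against the divergence mentioned above); the paper's clamping technique, associative matrix, and critical-temperature formula are not reproduced, and `W` (symmetric couplings) and `b` (biases) are assumed inputs:

```python
import numpy as np

def mean_field_anneal(W, b, t_start, t_end=0.01, cooling=0.95, tol=1e-6, seed=0):
    """Relax mean-field variables v in [-1, 1] for the Hopfield energy
    E(v) = -0.5 v'Wv - b'v, lowering the temperature geometrically."""
    rng = np.random.default_rng(seed)
    v = rng.uniform(-0.1, 0.1, size=b.size)     # small random start
    T = t_start
    while T > t_end:
        for _ in range(100):                    # iterate mean field eqs at fixed T
            v_new = np.tanh((W @ v + b) / T)
            if np.max(np.abs(v_new - v)) < tol:
                v = v_new
                break
            v = 0.5 * v + 0.5 * v_new           # damping guards against divergence
        T *= cooling
    return np.sign(v)                           # final discrete schedule/state
```

For a ferromagnetically coupled pair with a small positive bias, the anneal settles into the all-positive ground state.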
Cui, Yiqian; Shi, Junyou; Wang, Zili
2015-11-01
Quantum Neural Network (QNN) models have attracted great attention because they introduce a new manner of neural computing based on quantum entanglement. However, existing QNN models are mainly based on real-valued quantum operations, and the potential of quantum entanglement is not fully exploited. In this paper, we propose a novel quantum neuron model called the Complex Quantum Neuron (CQN) that realizes deep quantum entanglement. We also propose a novel hybrid network model, Complex Rotation Quantum Dynamic Neural Networks (CRQDNN), based on the CQN. CRQDNN is a three-layer model with both CQN and classical neurons. An infinite impulse response (IIR) filter is embedded in the network model to provide the memory needed to process time-series inputs. The Levenberg-Marquardt (LM) algorithm is used for fast parameter learning. The network model is applied to time-series prediction. Two application studies are presented: chaotic time-series prediction and electronic remaining-useful-life (RUL) prediction.
NASA Astrophysics Data System (ADS)
De Geeter, N.; Crevecoeur, G.; Leemans, A.; Dupré, L.
2015-01-01
In transcranial magnetic stimulation (TMS), an applied alternating magnetic field induces an electric field in the brain that can interact with the neural system. It is generally assumed that this induced electric field is the crucial effect exciting a given region of the brain. More specifically, it is the component of this field parallel to the neuron's local orientation, the so-called effective electric field, that can initiate neuronal stimulation. Deeper insight into the stimulation mechanisms can be acquired through extensive TMS modelling. Most models study simple representations of neurons with assumed geometries, whereas we embed realistic neural trajectories computed using tractography based on diffusion tensor images. This way of modelling ensures a more accurate spatial distribution of the effective electric field that is, in addition, patient and case specific. The case study of this paper focuses on single-pulse stimulation of the left primary motor cortex with a standard figure-of-eight coil. Including realistic neural geometry in the model demonstrates the strong and localized variations of the effective electric field between the tracts themselves and along them, due to the interplay of factors such as the tract's position and orientation in relation to the TMS coil, the neural trajectory, and its course along the white and grey matter interface. Furthermore, the influence of changes in coil orientation is studied. Investigating the impact of tissue anisotropy confirms that its contribution is not negligible. Moreover, assuming isotropic tissues leads to errors of the same size as rotating or tilting the coil by 10 degrees. In contrast, the model proves to be less sensitive to the poorly known tissue conductivity values.
Simmering, Vanessa R; Schutte, Anne R; Spencer, John P
2008-04-01
Within cognitive neuroscience, computational models are designed to provide insights into the organization of behavior while adhering to neural principles. These models should provide sufficient specificity to generate novel predictions while maintaining the generality needed to capture behavior across tasks and/or time scales. This paper presents one such model, the dynamic field theory (DFT) of spatial cognition, showing new simulations that provide a demonstration proof that the theory generalizes across developmental changes in performance in four tasks: the Piagetian A-not-B task, a sandbox version of the A-not-B task, a canonical spatial recall task, and a position discrimination task. Model simulations demonstrate that the DFT can accomplish both specificity (generating novel, testable predictions) and generality (spanning multiple tasks across development) with a relatively simple developmental hypothesis. Critically, the DFT achieves generality across tasks and time scales with no modification to its basic structure and with a strong commitment to neural principles. The only change necessary to capture development in the model was an increase in the precision of the tuning of receptive fields, as well as an increase in the precision of local excitatory interactions among neurons in the model. These small quantitative changes were sufficient to move the model through a set of quantitative and qualitative behavioral changes that span the age range from 8 months to 6 years and into adulthood. We conclude by considering how the DFT is positioned in the literature, the challenges on the horizon for our framework, and how a dynamic field approach can yield new insights into development from a computational cognitive neuroscience perspective.
Fractional dynamics of charged particles in magnetic fields
NASA Astrophysics Data System (ADS)
Coronel-Escamilla, A.; Gómez-Aguilar, J. F.; Alvarado-Méndez, E.; Guerrero-Ramírez, G. V.; Escobar-Jiménez, R. F.
2016-02-01
In many physical applications electrons play a relevant role. For example, when a beam of electrons accelerated to relativistic velocities is used as an active medium to generate Free Electron Lasers (FEL), the electrons are bound to atoms but move freely in a magnetic field. The relaxation time, longitudinal effects, and transverse variations of the optical field are parameters that play an important role in the efficiency of this laser. The electron dynamics in a magnetic field acts as a radiation source coupling to the electric field. The transverse motion of the electrons leads them to either gain energy from or lose energy to the field, depending on the position of the particle with respect to the phase of the external radiation field. Because it is important to know with great certainty the displacement of charged particles in a magnetic field, in this work we study the fractional dynamics of charged particles in magnetic fields. Newton's second law is considered, and the order of the fractional differential equation lies in (0,1]. Based on the Grünwald-Letnikov (GL) definition, a discretization of the fractional differential equations is presented to obtain numerical simulations. Comparison between the numerical solutions obtained with Euler's method for the classical case and with the GL definition in the fractional approach proves the good performance of the numerical scheme. Three application examples are shown: a constant magnetic field, a ramp magnetic field, and a harmonic magnetic field. In the first example the results show bistability. Dissipative effects are observed in the system, and the standard dynamics are recovered when the order of the fractional derivative is 1.
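The GL discretization can be sketched for a charge in a constant magnetic field along z (with q B / m taken dimensionless; step size and parameter values are illustrative assumptions, and the force term is treated explicitly). The recursion c_j = c_{j-1}(1 - (alpha + 1)/j) generates the GL binomial weights, and for alpha = 1 the scheme reduces exactly to Euler's method, which gives a built-in consistency check:

```python
import numpy as np

def gl_coeffs(alpha, n):
    """Grünwald-Letnikov weights c_j = (-1)^j C(alpha, j), j = 0..n."""
    c = np.empty(n + 1)
    c[0] = 1.0
    for j in range(1, n + 1):
        c[j] = c[j - 1] * (1.0 - (alpha + 1.0) / j)
    return c

def fractional_lorentz(alpha, qm_B=1.0, v0=(1.0, 0.0), h=1e-3, steps=2000):
    """Integrate m D^alpha v = q v x B (B along z) with the explicit GL scheme
    v_k = h^alpha f(v_{k-1}) - sum_{j=1..k} c_j v_{k-j}."""
    c = gl_coeffs(alpha, steps)
    v = np.zeros((steps + 1, 2))
    v[0] = v0
    ha = h ** alpha
    for k in range(1, steps + 1):
        vx, vy = v[k - 1]
        f = np.array([qm_B * vy, -qm_B * vx])          # (q/m) v x B
        memory = c[1:k + 1][:, None] * v[k - 1::-1]    # history (memory) term
        v[k] = ha * f - memory.sum(axis=0)
    return v
```

With alpha = 1 the weights collapse to c = (1, -1, 0, 0, ...) and each step becomes v_k = v_{k-1} + h f, i.e. the classical Euler update, matching the comparison reported in the abstract.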
Dynamic electrophoresis of charged colloids in an oscillating electric field.
Shih, Chunyu; Yamamoto, Ryoichi
2014-06-01
The dynamics of charged colloids in an electrolyte solution is studied using direct numerical simulations via the smoothed profile method. We calculated the complex electrophoretic mobility μ(ω) of the charged colloids under an oscillating electric field of frequency ω. We show the existence of three dynamically distinct regimes, determined by the momentum diffusion and ionic diffusion time scales. The present results agree well with approximate theories based on the cell model in dilute suspensions; however, systematic deviations between the simulation results and theoretical predictions are observed as the volume fraction of colloids is increased, similar to the case of constant electric fields.
Neural dynamics of feedforward and feedback processing in figure-ground segregation
Layton, Oliver W.; Mingolla, Ennio; Yazdanbakhsh, Arash
2014-01-01
Determining whether a region belongs to the interior or exterior of a shape (figure-ground segregation) is a core competency of the primate brain, yet the underlying mechanisms are not well understood. Many models assume that figure-ground segregation occurs by assembling progressively more complex representations through feedforward connections, with feedback playing only a modulatory role. We present a dynamical model of figure-ground segregation in the primate ventral stream wherein feedback plays a crucial role in disambiguating a figure's interior and exterior. We introduce a processing strategy whereby jitter in RF center locations and variation in RF sizes are exploited to enhance and suppress neural activity inside and outside of figures, respectively. Feedforward projections emanate from units that model cells in V4 known to respond to the curvature of boundary contours (curved contour cells), and feedback projections from units predicted to exist in IT that strategically group neurons with different RF sizes and RF center locations (teardrop cells). Neurons (convex cells) that preferentially respond when centered on a figure dynamically balance feedforward (bottom-up) information and feedback from higher visual areas. The activation is enhanced when an interior portion of a figure is in the RF via feedback from units that detect closure in the boundary contours of a figure. Our model produces maximal activity along the medial axis of well-known figures with and without concavities, and inside algorithmically generated shapes. Our results suggest that the dynamic balancing of feedforward signals with the specific feedback mechanisms proposed by the model is crucial for figure-ground segregation. PMID:25346703
Yamada, Tatsuro; Murata, Shingo; Arie, Hiroaki; Ogata, Tetsuya
2016-01-01
To work cooperatively with humans by using language, robots must not only acquire a mapping between language and their behavior but also autonomously utilize that mapping in appropriate contexts of interactive tasks online. To this end, we propose a novel learning method linking language to robot behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of language and behavior but as sequential data constructed from the actual temporal flow of the task. By doing this, the internal dynamics of the network models both language–behavior relationships and the temporal patterns of interaction. Here, “internal dynamics” refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing where in the interaction context it is as its current state, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate behavior in response to a human’s linguistic instruction. After learning, the network indeed formed an attractor structure representing both the language–behavior relationships and the task’s temporal pattern in its internal dynamics. In this dynamics, the language–behavior mapping was achieved by a branching structure. Repetition of the human’s instruction and the robot’s behavioral response was represented as a cyclic structure, and, in addition, waiting for a subsequent instruction was represented as a fixed-point attractor. Thanks to this structure, the robot was able to interact online with a human concerning the given task by autonomously switching phases. PMID:27471463
Raz, Gal; Winetraub, Yonatan; Jacob, Yael; Kinreich, Sivan; Maron-Katz, Adi; Shaham, Galit; Podlipsky, Ilana; Gilam, Gadi; Soreq, Eyal; Hendler, Talma
2012-04-01
Dynamic functional integration of distinct neural systems plays a pivotal role in emotional experience. We introduce a novel approach for studying emotion-related changes in the interactions within and between networks using fMRI. It is based on continuous computation of a network cohesion index (NCI), which is sensitive to both the strength and the variability of signal correlations between pre-defined regions. The regions encompass three clusters (limbic, medial prefrontal cortex (mPFC), and cognitive), each of which was previously shown to be involved in emotional processing. Two sadness-inducing film excerpts were viewed passively, and comparisons among viewers' rated sadness, a parasympathetic index, and inter- and intra-NCI were obtained. Limbic intra-NCI was associated with reported sadness in both movies. However, the correlation between the parasympathetic index, the rated sadness, and the limbic NCI occurred in only one movie, possibly related to a "deactivated" pattern of sadness. In this film, rated sadness intensity also correlated with the mPFC intra-NCI, possibly reflecting temporal correspondence between sadness and sympathy. Further, only for this movie, we found an association between the sadness rating and the mPFC-limbic inter-NCI time courses. By contrast, in the other film, in which sadness was reported to commingle with horror and anger, dramatic events coincided with disintegration of these networks. Together, this may point to a difference between the cinematic experiences with regard to inter-network dynamics related to emotional regulation. These findings demonstrate the advantage of a multi-layered dynamic analysis for elucidating the uniqueness of emotional experiences with regard to unguided processing of continuous and complex stimulation. PMID:22285693
Advances in neural networks research: an introduction.
Kozma, Robert; Bressler, Steven; Perlovsky, Leonid; Venayagamoorthy, Ganesh Kumar
2009-01-01
The present Special Issue "Advances in Neural Networks Research: IJCNN2009" provides a state-of-the-art overview of the field of neural networks. It includes 39 papers from selected areas of the 2009 International Joint Conference on Neural Networks (IJCNN2009). IJCNN2009 took place on June 14-19, 2009 in Atlanta, Georgia, USA, and it represents an exemplary collaboration between the International Neural Networks Society and the IEEE Computational Intelligence Society. Topics in this issue include neuroscience and cognitive science, computational intelligence and machine learning, hybrid techniques, nonlinear dynamics and chaos, various soft computing technologies, intelligent signal processing and pattern recognition, bioinformatics and biomedicine, and engineering applications. PMID:19632811
The Effect of Varying Magnetic Field Gradient on Combustion Dynamic
NASA Astrophysics Data System (ADS)
Suzdalenko, Vera; Zake, Maija; Barmina, Inesa; Gedrovics, Martins
2011-01-01
The focus of this recent experimental research is to provide control of the combustion dynamics of pelletized wood biomass, with complex measurements (flame temperature, heat production rate, and composition of polluting emissions), using a non-uniform magnetic field that produces a magnetic force interacting with the magnetic moment of paramagnetic oxygen. The experimental results have shown that a gradient magnetic field enhances mixing of the flame compounds, increasing combustion efficiency and enhancing the burnout of volatiles.
Hysteretic dynamics of active particles in a periodic orienting field
Romensky, Maksym; Scholz, Dimitri; Lobaskin, Vladimir
2015-01-01
Active motion of living organisms and artificial self-propelling particles has been an area of intense research at the interface of biology, chemistry and physics. Significant progress in understanding these phenomena has been related to the observation that dynamic self-organization in active systems has much in common with ordering in equilibrium condensed matter such as spontaneous magnetization in ferromagnets. The velocities of active particles may behave similarly to magnetic dipoles and develop global alignment, although interactions between the individuals might be completely different. In this work, we show that the dynamics of active particles in external fields can also be described in a way that resembles equilibrium condensed matter. It follows simple general laws, which are independent of the microscopic details of the system. The dynamics is revealed through hysteresis of the mean velocity of active particles subjected to a periodic orienting field. The hysteresis is measured in computer simulations and experiments on unicellular organisms. We find that the ability of the particles to follow the field scales with the ratio of the field variation period to the particles' orientational relaxation time, which, in turn, is related to the particle self-propulsion power and the energy dissipation rate. The collective behaviour of the particles due to aligning interactions manifests itself at low frequencies via increased persistence of the swarm motion when compared with motion of an individual. By contrast, at high field frequencies, the active group fails to develop the alignment and tends to behave like a set of independent individuals even in the presence of interactions. We also report on asymptotic laws for the hysteretic dynamics of active particles, which resemble those in magnetic systems. The generality of the assumptions in the underlying model suggests that the observed laws might apply to a variety of dynamic phenomena from the motion of
DYNAMICS OF CHROMOSPHERIC UPFLOWS AND UNDERLYING MAGNETIC FIELDS
Yurchyshyn, V.; Abramenko, V.; Goode, P.
2013-04-10
We used Hα - 0.1 nm and magnetic field (at 1.56 μm) data obtained with the New Solar Telescope to study the origin of the disk counterparts of type II spicules, the so-called rapid blueshifted excursions (RBEs). The high time cadence of our chromospheric (10 s) and magnetic field (45 s) data allowed us to generate x-t plots using slits parallel to the spines of the RBEs. These plots, along with potential field extrapolation, led us to suggest that the occurrence of RBEs is generally correlated with the appearance of new, mixed, or unipolar fields in close proximity to network fields. RBEs show a tendency to occur at the interface between large-scale fields and small-scale dynamic magnetic loops and thus are likely to be associated with the existence of a magnetic canopy. Detection of kinked and/or inverse Y-shaped RBEs further confirms this conclusion.
Dynamics of shear velocity layer with bent magnetic field lines
NASA Astrophysics Data System (ADS)
Galinsky, V. L.; Sonnerup, B. U. Ö.
A fully three-dimensional, magnetohydrodynamic simulation of velocity-sheared plasma flow in an ambient transverse magnetic field with bent magnetic field lines has been performed. “Ionospheric-like” boundary conditions were used for closing field-aligned currents, the two ionospheres being represented by conducting plates with constant resistivity. Compared to the standard plane 2D case with a uniform transverse magnetic field, the growth rate of the Kelvin-Helmholtz instability drops significantly as bending increases. Under conditions representative of the Earth's low-latitude boundary layer, the instability may be suppressed completely by the magnetic field-line tension if the field-line bending is sufficiently strong. For weak bending, a combination of the tearing mode instability and the Kelvin-Helmholtz instability leads to the formation of localized 3D current/vortex tubes, the ionospheric footprints of which are possible models of the auroral bright spots observed by the Viking satellite.
Zhang, Zhijun; Li, Zhijun; Zhang, Yunong; Luo, Yamei; Li, Yuanqing
2015-12-01
We propose a dual-arm cyclic-motion-generation (DACMG) scheme based on a neural-dynamic method, which can remedy the joint-angle-drift phenomenon of a humanoid robot. In particular, following a neural-dynamic design method, a cyclic-motion performance index is first designed and exploited. This performance index is then integrated into a quadratic programming (QP)-type scheme with time-varying constraints, called the time-varying-constrained DACMG (TVC-DACMG) scheme. The scheme includes the kinematic motion equations of the two arms and the time-varying joint limits. It can not only generate cyclic motion of the two arms of a humanoid robot but also control the arms to move to a desired position, while accounting for physical limit avoidance. To solve the QP problem, a recurrent neural network is presented and used to obtain the optimal solutions. Computer simulations and physical experiments demonstrate the effectiveness and accuracy of the TVC-DACMG scheme and the neural network solver.
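The idea of solving a constrained QP with recurrent neural dynamics can be illustrated by a generic projection neural network for box-constrained QP (in the spirit of Xia-Wang-type networks; this is not the authors' TVC-DACMG formulation, and the problem data below are made up). The continuous dynamics dx/dt = P(x - alpha(Qx + c)) - x, with P the projection onto the box, converge to the QP minimizer for symmetric positive definite Q; here it is Euler-discretized:

```python
import numpy as np

def projection_qp(Q, c, lo, hi, alpha=0.1, dt=0.1, steps=2000):
    """Euler-discretized projection neural network solving
    min 0.5 x'Qx + c'x  subject to  lo <= x <= hi  (Q symmetric PD)."""
    x = np.clip(np.zeros_like(c), lo, hi)
    for _ in range(steps):
        proj = np.clip(x - alpha * (Q @ x + c), lo, hi)  # P(x - alpha * grad)
        x = x + dt * (proj - x)                          # relax toward projection
    return x
```

For Q = 2I and c = (-2, -2), the unconstrained minimizer is (1, 1); with the box [0, 0.5] the network settles on the constrained solution (0.5, 0.5) instead.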
Dynamics of ultracold polar molecules in a microwave field
NASA Astrophysics Data System (ADS)
Avdeenkov, Alexander V.
2015-04-01
We analyze the temporal evolution of the population of ultracold polar molecules in a microwave (mw) field with circular polarization. The molecules are in their ground 1Σ state and are treated as rigid rotors with a permanent dipole moment which interact with each other via the dipole-dipole (DD) interaction Vdd. The mw field mixes states with different quantum and photon numbers, and the collisional dynamics in the mw field is mostly controlled by the ratio of the mw field frequency to the rotational constant and the ratio of the mw Rabi frequency to the rotational constant. There exists a special scattering process which is elastic in nature and is due to a rotational energy exchange between the ground and first excited rotational states. To analyze the dynamics of the polar-molecule system in the mw field, the equations of motion for the bare and dressed states are solved under different mw field parameters and molecular gas characteristics. Depending on the ratio of the Rabi frequency of the mw field to the magnitude of the DD interaction, beatings and oscillations occur in the time development of the bare and dressed states. At a certain relation between the magnitudes of the mw detuning δ and the DD interaction, δ = ±Vdd, peak structures appear in the population of the excited bare state. Each peak is associated with an avoided crossing between the dressed-state adiabatic curves at the same position of the mw detuning.
Detorakis, Georgios Is.; Rougier, Nicolas P.
2012-01-01
We investigate the formation and maintenance of ordered topographic maps in the primary somatosensory cortex as well as the reorganization of representations after sensory deprivation or cortical lesion. We consider both the critical period (postnatal) where representations are shaped and the post-critical period where representations are maintained and possibly reorganized. We hypothesize that feed-forward thalamocortical connections are an adequate site of plasticity while cortico-cortical connections are believed to drive a competitive mechanism that is critical for learning. We model a small skin patch located on the distal phalangeal surface of a digit as a set of 256 Merkel ending complexes (MEC) that feed a computational model of the primary somatosensory cortex (area 3b). This model is a two-dimensional neural field where spatially localized solutions (a.k.a. bumps) drive cortical plasticity through a Hebbian-like learning rule. Simulations explain the initial formation of ordered representations following repetitive and random stimulations of the skin patch. Skin lesions as well as cortical lesions are also studied, and the results confirm the possibility of reorganizing representations using the same learning rule, depending on the type of lesion. For severe lesions, the model suggests that cortico-cortical connections may play an important role in complete recovery. PMID:22808127
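The bump dynamics at the core of such models can be sketched with a one-dimensional Amari-type field on a ring, driven by a localized stimulus under a local-excitation/lateral-inhibition kernel. The kernel shape, sigmoid gain, and stimulus parameters below are illustrative choices rather than the paper's fitted values, and the Hebbian-plasticity stage is omitted:

```python
import numpy as np

def simulate_field(n=128, steps=600, dt=0.05, stim_pos=0.5):
    """Amari-type field  tau du/dt = -u + W f(u) + I + h  on a ring,
    with a difference-of-Gaussians lateral interaction kernel."""
    x = np.linspace(-np.pi, np.pi, n, endpoint=False)
    dx = x[1] - x[0]
    d = np.abs(x[:, None] - x[None, :])
    d = np.minimum(d, 2 * np.pi - d)                  # periodic distance
    W = (1.5 * np.exp(-d**2 / (2 * 0.5**2))           # local excitation
         - 0.5 * np.exp(-d**2 / (2 * 1.5**2)))        # lateral inhibition
    I = 2.0 * np.exp(-(x - stim_pos)**2 / (2 * 0.3**2))  # localized stimulus
    h, tau = -0.5, 1.0                                # resting level, time const
    f = lambda u: 1.0 / (1.0 + np.exp(-10.0 * u))     # firing-rate sigmoid
    u = np.zeros(n)
    for _ in range(steps):
        u += dt / tau * (-u + W @ f(u) * dx + I + h)
    return x, u
```

After relaxation the activity forms a single bump centered on the stimulus, with the surround suppressed below resting level by lateral inhibition.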
A neural model of the frontal eye fields with reward-based learning.
Ye, Weijie; Liu, Shenquan; Liu, Xuanliang; Yu, Yuguo
2016-09-01
Decision-making is a flexible process dependent on the accumulation of various kinds of information; however, the corresponding neural mechanisms are far from clear. We extended a layered model of the frontal eye field to a learning-based model, using computational simulations to explain the cognitive process of choice tasks. The core of this extended model has three aspects: direction-preferred populations that cluster together the neurons with the same orientation preference, rule modules that control different rule-dependent activities, and reward-based synaptic plasticity that modulates connections to flexibly change the decision according to task demands. After repeated attempts in a number of trials, the network successfully simulated three decision choice tasks: an anti-saccade task, a no-go task, and an associative task. We found that synaptic plasticity could modulate the competition of choices by suppressing erroneous choices while enhancing the correct (rewarding) choice. In addition, the trained model captured some properties exhibited in animal and human experiments, such as the latency of the reaction time distribution of anti-saccades, the stop signal mechanism for canceling a reflexive saccade, and the variation of latency to half-max selectivity. Furthermore, the trained model was capable of reproducing the re-learning procedures when switching tasks and reversing the cue-saccade association. PMID:27284696
Wei, Qikang; Chen, Tao; Xu, Ruifeng; He, Yulan; Gui, Lin
2016-01-01
The recognition of disease and chemical named entities in scientific articles is a very important subtask of information extraction in the biomedical domain. Due to the diversity and complexity of disease names, the recognition of disease named entities is considerably harder than that of chemical names. Although some remarkable chemical named entity recognition systems are available online, such as ChemSpot and tmChem, publicly available recognition systems for disease named entities are rare. This article presents a system for disease named entity recognition (DNER) and normalization. First, two separate DNER models are developed: one based on a conditional random fields model with a rule-based post-processing module, the other based on bidirectional recurrent neural networks. The named entities recognized by each of the DNER models are then fed into a support vector machine classifier that combines the results. Finally, each recognized disease named entity is normalized to a Medical Subject Headings disease name using a vector-space-model-based method. Experimental results show that, using 1000 PubMed abstracts for training, our proposed system achieves an F1-measure of 0.8428 at the mention level and 0.7804 at the concept level on the testing data of the chemical-disease relation task in BioCreative V. Database URL: http://219.223.252.210:8080/SS/cdr.html PMID:27777244
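The final normalization step can be sketched as a cosine-similarity lookup in a vector space. This is only an illustration of the idea: the tiny lexicon and plain bag-of-words weighting below are our assumptions, not the paper's actual MeSH resources or term weighting.

```python
import numpy as np

# Hypothetical mini-lexicon of candidate disease names (stand-in for MeSH)
lexicon = ["breast cancer", "lung cancer", "type 2 diabetes mellitus"]
vocab = sorted({w for name in lexicon for w in name.split()})

def vec(text):
    # bag-of-words vector over the lexicon vocabulary
    v = np.zeros(len(vocab))
    for w in text.lower().split():
        if w in vocab:
            v[vocab.index(w)] += 1.0
    return v

def normalize(mention):
    # map a recognized mention to the most cosine-similar lexicon name
    m = vec(mention)
    sims = [float(m @ vec(name)) /
            (np.linalg.norm(m) * np.linalg.norm(vec(name)) + 1e-9)
            for name in lexicon]
    return lexicon[int(np.argmax(sims))]
```

For example, `normalize("diabetes type 2")` maps the reordered mention onto the canonical lexicon entry, because cosine similarity is insensitive to word order.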
Book review: old fields: dynamics and restoration of abandoned farmland
Technology Transfer Automated Retrieval System (TEKTRAN)
The 2007 volume, “Old Fields: Dynamics and Restoration of Abandoned Farmland”, edited by VA Cramer and RJ Hobbs and published by the Society for Ecological Restoration International (Island Press), is a valuable attempt to synthesize a dozen case studies on agricultural abandonment from all of the ...
Dynamical mean-field theory from a quantum chemical perspective.
Zgid, Dominika; Chan, Garnet Kin-Lic
2011-03-01
We investigate the dynamical mean-field theory (DMFT) from a quantum chemical perspective. Dynamical mean-field theory offers a formalism to extend quantum chemical methods for finite systems to infinite periodic problems within a local correlation approximation. In addition, quantum chemical techniques can be used to construct new ab initio Hamiltonians and impurity solvers for DMFT. Here, we explore some ways in which these things may be achieved. First, we present an informal overview of dynamical mean-field theory to connect to quantum chemical language. Next, we describe an implementation of dynamical mean-field theory where we start from an ab initio Hartree-Fock Hamiltonian that avoids double counting issues present in many applications of DMFT. We then explore the use of the configuration interaction hierarchy in DMFT as an approximate solver for the impurity problem. We also investigate some numerical issues of convergence within DMFT. Our studies are carried out in the context of the cubic hydrogen model, a simple but challenging test for correlation methods. Finally, we finish with some conclusions for future directions.
OLD-FIELD SUCCESSIONAL DYNAMICS FOLLOWING INTENSIVE HERBIVORY
Community composition and successional patterns can be altered by disturbance and exotic species invasions. Our objective was to describe vegetation dynamics following cessation of severe disturbance, which was heavy grazing by cattle, in an old-field grassland subject to invasi...
Nicola, Wilten; Tripp, Bryan; Scott, Matthew
2016-01-01
A fundamental question in computational neuroscience is how to connect a network of spiking neurons to produce desired macroscopic or mean-field dynamics. One possible approach is through the Neural Engineering Framework (NEF). The NEF approach requires quantities called decoders, which are solved for through an optimization problem requiring a large matrix inversion. Here, we show how a decoder can be obtained analytically for type I and certain type II firing rates as a function of the heterogeneity of its associated neuron. These decoders generate approximants that converge to the desired function in mean-squared error like 1/N, where N is the number of neurons in the network. We refer to these decoders as scale-invariant decoders due to their structure. They generate weights for a network of neurons through the NEF weight formula, and these weights force the spiking network to have arbitrary and prescribed mean-field dynamics. The weights generated with scale-invariant decoders all lie asymptotically on low-dimensional hypersurfaces. We demonstrate the applicability of these scale-invariant decoders and weight surfaces by constructing networks of spiking theta neurons that replicate the dynamics of various well-known dynamical systems such as the neural integrator, the Van der Pol system and the Lorenz system. As these decoders are analytically determined and non-unique, the weights are also analytically determined and non-unique. We discuss the implications for measured weights of neuronal networks. PMID:26973503
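The standard NEF decoder computation that the abstract contrasts with its analytical alternative can be sketched as a regularized least-squares problem: decoders d minimize the error between the target function and the weighted sum of tuning curves, solved via a matrix inversion. The rectified-linear tuning curves and regularization constant below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, S = 50, 200                       # neurons, sample points on [-1, 1]
x = np.linspace(-1.0, 1.0, S)

# heterogeneous rectified-linear tuning curves (stand-in for real rate models)
gains = rng.uniform(0.5, 2.0, N)
biases = rng.uniform(-1.0, 1.0, N)
enc = rng.choice([-1.0, 1.0], N)     # preferred directions
A = np.maximum(0.0, gains[:, None] * enc[:, None] * x[None, :] + biases[:, None])

target = x**2                        # function to decode from the population
lam = 1e-3
G = A @ A.T + lam * N * np.eye(N)    # regularized Gram matrix
d = np.linalg.solve(G, A @ target)   # decoders via matrix inversion
xhat = d @ A                         # decoded estimate of the target
rmse = np.sqrt(np.mean((xhat - target)**2))
```

This is exactly the optimization the abstract says requires a large matrix inversion; the paper's contribution is obtaining d analytically instead.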
Tunable nonequilibrium dynamics of field quenches in spin ice
Mostame, Sarah; Castelnovo, Claudio; Moessner, Roderich; Sondhi, Shivaji L.
2014-01-01
We present nonequilibrium physics in spin ice as a unique setting that combines kinematic constraints, emergent topological defects, and magnetic long-range Coulomb interactions. In spin ice, magnetic frustration leads to highly degenerate yet locally constrained ground states. Together, they form a highly unusual magnetic state—a “Coulomb phase”—whose excitations are point-like defects—magnetic monopoles—in the absence of which effectively no dynamics is possible. Hence, when they are sparse at low temperature, dynamics becomes very sluggish. When quenching the system from a monopole-rich to a monopole-poor state, a wealth of dynamical phenomena occur, the exposition of which is the subject of this article. Most notably, we find reaction diffusion behavior, slow dynamics owing to kinematic constraints, as well as a regime corresponding to the deposition of interacting dimers on a honeycomb lattice. We also identify potential avenues for detecting the magnetic monopoles in a regime of slow-moving monopoles. The interest in this model system is further enhanced by its large degree of tunability and the ease of probing it in experiment: With varying magnetic fields at different temperatures, geometric properties—including even the effective dimensionality of the system—can be varied. By monitoring magnetization, spin correlations or zero-field NMR, the dynamical properties of the system can be extracted in considerable detail. This establishes spin ice as a laboratory of choice for the study of tunable, slow dynamics. PMID:24379372
Quantum emitters dynamically coupled to a quantum field
NASA Astrophysics Data System (ADS)
Acevedo, O. L.; Quiroga, L.; Rodríguez, F. J.; Johnson, N. F.
2013-12-01
We study theoretically the dynamical response of a set of solid-state quantum emitters arbitrarily coupled to a single-mode microcavity system. Ramping the matter-field coupling strength in round trips, we quantify the hysteresis or irreversible quantum dynamics. The matter-field system is modeled as a finite-size Dicke model which has previously been used to describe equilibrium (including quantum phase transition) properties of systems such as quantum dots in a microcavity. Here we extend this model to address non-equilibrium situations. Analyzing the system's quantum fidelity, we find that the near-adiabatic regime exhibits the richest phenomena, with a strong asymmetry in the internal collective dynamics depending on which phase is chosen as the starting point. We also explore signatures of the crossing of the critical points on the radiation subsystem by monitoring its Wigner function; then, the subsystem can exhibit the emergence of non-classicality and complexity.
Taub-NUT dynamics with a magnetic field
NASA Astrophysics Data System (ADS)
Jante, Rogelio; Schroers, Bernd J.
2016-06-01
We study classical and quantum dynamics on the Euclidean Taub-NUT geometry coupled to an abelian gauge field with self-dual curvature and show that, even though Taub-NUT has neither bounded orbits nor quantum bound states, the magnetic binding via the gauge field produces both. The conserved Runge-Lenz vector of Taub-NUT dynamics survives, in a modified form, in the gauged model and allows for an essentially algebraic computation of classical trajectories and energies of quantum bound states. We also compute scattering cross sections and find a surprising electric-magnetic duality. Finally, we exhibit the dynamical symmetry behind the conserved Runge-Lenz and angular momentum vectors in terms of a twistorial formulation of phase space.
Quantum emitters dynamically coupled to a quantum field
Acevedo, O. L.; Quiroga, L.; Rodríguez, F. J.; Johnson, N. F.
2013-12-04
We study theoretically the dynamical response of a set of solid-state quantum emitters arbitrarily coupled to a single-mode microcavity system. Ramping the matter-field coupling strength in round trips, we quantify the hysteresis or irreversible quantum dynamics. The matter-field system is modeled as a finite-size Dicke model which has previously been used to describe equilibrium (including quantum phase transition) properties of systems such as quantum dots in a microcavity. Here we extend this model to address non-equilibrium situations. Analyzing the system’s quantum fidelity, we find that the near-adiabatic regime exhibits the richest phenomena, with a strong asymmetry in the internal collective dynamics depending on which phase is chosen as the starting point. We also explore signatures of the crossing of the critical points on the radiation subsystem by monitoring its Wigner function; then, the subsystem can exhibit the emergence of non-classicality and complexity.
Bubble dynamics in a standing sound field: the bubble habitat.
Koch, P; Kurz, T; Parlitz, U; Lauterborn, W
2011-11-01
Bubble dynamics is investigated numerically with special emphasis on the static pressure and the positional stability of the bubble in a standing sound field. The bubble habitat, made up of not dissolving, positionally and spherically stable bubbles, is calculated in the parameter space of the bubble radius at rest and sound pressure amplitude for different sound field frequencies, static pressures, and gas concentrations of the liquid. The bubble habitat grows with static pressure and shrinks with sound field frequency. The range of diffusionally stable bubble oscillations, found at positive slopes of the habitat-diffusion border, can be increased substantially with static pressure. PMID:22088010
Approximate photochemical dynamics of azobenzene with reactive force fields
Li, Yan; Hartke, Bernd
2013-12-14
We have fitted reactive force fields of the ReaxFF type to the ground and first excited electronic states of azobenzene, using global parameter optimization by genetic algorithms. Upon coupling with a simple energy-gap transition probability model, this setup allows for completely force-field-based simulations of photochemical cis→trans- and trans→cis-isomerizations of azobenzene, with qualitatively acceptable quantum yields. This paves the way towards large-scale dynamics simulations of molecular machines, including bond breaking and formation (via the reactive force field) as well as photochemical engines (presented in this work)
NASA Astrophysics Data System (ADS)
Andreon, S.; Gargiulo, G.; Longo, G.; Tagliaferri, R.; Capuano, N.
2000-12-01
Astronomical wide-field imaging performed with new large-format CCD detectors poses data reduction problems of unprecedented scale, which are difficult to deal with using traditional interactive tools. We present here NExt (Neural Extractor), a new neural network (NN) based package capable of detecting objects and performing both deblending and star/galaxy classification in an automatic way. Traditionally, in astronomical images, objects are first distinguished from the noisy background by searching for sets of connected pixels having brightnesses above a given threshold; they are then classified as stars or as galaxies through diagnostic diagrams having variables chosen according to the astronomer's taste and experience. In the extraction step, assuming that images are well sampled, NExt requires only the simplest a priori definition of `what an object is' (i.e. it keeps all structures composed of more than one pixel) and performs the detection via an unsupervised NN, approaching detection as a clustering problem that has been thoroughly studied in the artificial intelligence literature. The first part of the NExt procedure consists of an optimal compression of the redundant information contained in the pixels via a mapping from pixel intensities to a subspace identified through principal component analysis. At magnitudes fainter than the completeness limit, stars are usually almost indistinguishable from galaxies, and therefore the parameters characterizing the two classes do not lie in disconnected subspaces, thus preventing the use of unsupervised methods. We therefore adopted a supervised NN (i.e. a NN that first finds the rules to classify objects from examples and then applies them to the whole data set). In practice, each object is classified depending on its membership of the regions mapping the input feature space in the training set. In order to obtain an objective and reliable classification, instead of using an arbitrarily defined set of features
NASA Astrophysics Data System (ADS)
Melnikov, Leonid A.; Novosselova, Anna V.; Blinova, Nadejda V.; Vinitsky, Sergey I.; Serov, Vladislav V.; Bakutkin, Valery V.; Camenskich, T. G.; Guileva, E. V.
2000-03-01
In this work we numerically investigate the dynamics of a neural network, treated as a nonlinear system, together with the dynamics of the visual nerve connecting the retinal receptors with the striate cortex, in response to transcutaneous electrical stimulation of the retina. The visual evoked potential constitutes this response and characterizes the state of the human brain through the state of the retinal structures and the conduction of the visual nerve fibers. The results of these investigations are presented. Specific features of the neural network, such as excitation and depression, are also taken into account. The model parameters used in the numerical investigation are discussed, and a comparative analysis is made between the retinal potential data and recordings of the external signal by the visual centers of the brain hemispheres.
Porée, Fabienne; Kachenoura, Amar; Carrault, Guy; Dal Molin, Renzo; Mabo, Philippe; Hernandez, Alfredo I.
2013-01-01
The study proposes a method to facilitate the remote follow-up of patients suffering from cardiac pathologies and treated with an implantable device, by synthesizing a 12-lead surface ECG from the intracardiac electrograms (EGM) recorded by the device. Two methods (direct and indirect), based on dynamic Time-Delay artificial Neural Networks (TDNN), are proposed and compared with classical linear approaches. The direct method aims to estimate 12 different transfer functions between the EGM and each surface ECG signal. The indirect method is based on a preliminary orthogonalization phase of the available EGM and ECG signals and the application of the TDNN between these orthogonalized signals, using only three transfer functions. These methods are evaluated on a dataset from 15 patients. Correlation coefficients calculated between the synthesized and the real ECG show that the proposed TDNN methods represent an efficient way to synthesize the 12-lead ECG from two or four EGM signals and perform better than the linear ones. We also evaluate the results as a function of the EGM configuration. The results are further supported by a comparison of extracted features and a qualitative analysis performed by a cardiologist.
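The time-delay idea behind the TDNN approach is that each surface-ECG sample is predicted from a sliding window of delayed intracardiac samples. A minimal sketch under stated assumptions: the synthetic signals below are illustrative, and a linear least-squares fit on the delay-line inputs stands in for the trained network (this is the "classical linear approach" baseline, not the TDNN itself).

```python
import numpy as np

rng = np.random.default_rng(1)
T, L = 2000, 8                                   # samples, delay-line length
egm = rng.standard_normal(T)                     # toy intracardiac signal
h = np.array([0.5, -0.3, 0.2, 0.1, 0.0, 0.05, -0.02, 0.01])
ecg = np.convolve(egm, h)[:T] + 0.01 * rng.standard_normal(T)  # toy surface ECG

# design matrix of delayed inputs: column k holds egm delayed by k samples
X = np.column_stack([np.concatenate([np.zeros(k), egm[:T - k]]) for k in range(L)])
w, *_ = np.linalg.lstsq(X, ecg, rcond=None)      # fit the transfer function
corr = float(np.corrcoef(X @ w, ecg)[0, 1])      # synthesized-vs-real correlation
```

The correlation coefficient computed here mirrors the evaluation metric used in the study; a nonlinear TDNN replaces the linear map `X @ w` with a trained network on the same delayed inputs.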
NASA Astrophysics Data System (ADS)
Carli, S.; Bonifetto, R.; Savoldi, L.; Zanino, R.
2015-09-01
A model based on Artificial Neural Networks (ANNs) is developed for the heated-line portion of a cryogenic circuit in which supercritical helium (SHe) flows; the circuit also includes a cold circulator, valves, pipes/cryolines and heat exchangers between the main loop and a saturated liquid helium (LHe) bath. The heated line mimics the heat load coming from the superconducting magnets to their cryogenic cooling circuits during the operation of a tokamak fusion reactor. An ANN is trained, using the output of simulations of the circuit performed with the 4C thermal-hydraulic (TH) code, to reproduce the dynamic behavior of the heated line, including for the first time scenarios in which different types of controls act on the circuit. The ANN is then implemented in the 4C circuit model as a new component that substitutes for the original 4C heated-line model. For different operational scenarios and control strategies, good agreement is shown between the simplified ANN model results and the original 4C results, as well as with experimental data from the HELIOS facility, confirming the suitability of this new approach, which, extended to entire magnet systems, can lead to real-time control of the cooling loops and fast assessment of control strategies for smoothing the heat load to the cryoplant.
Dynamic indoor thermal comfort model identification based on neural computing PMV index
NASA Astrophysics Data System (ADS)
Sahari, K. S. Mohamed; Jalal, M. F. Abdul; Homod, R. Z.; Eng, Y. K.
2013-06-01
This paper focuses on the modelling and simulation of dynamic thermal comfort control for a nonlinear building HVAC system. Thermal comfort generally involves both temperature and humidity; in reality, however, each is only one of several factors affecting comfort rather than a complete measure of it. Moreover, because an HVAC control system exhibits time delay, large inertia, and highly nonlinear behaviour, it is difficult to determine the thermal comfort sensation accurately using the traditional Fanger PMV index. Hence, an Artificial Neural Network (ANN) is introduced, owing to its ability to approximate any nonlinear mapping. By training an ANN we obtain the input-output mapping of the HVAC control system; in other words, we can propose a practical approach to identifying the thermal comfort of a building. Simulations were carried out to validate and verify the proposed method, and the results show that the proposed ANN method can track the desired thermal sensation for a specified conditioned space.
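The claim that an ANN can approximate a nonlinear comfort mapping can be illustrated with a small random-feature network: a random hidden layer with a least-squares readout fitted to a toy comfort index. The target function below is an invented stand-in for the PMV mapping (not Fanger's actual equations), and the network structure is our assumption, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)
temp = rng.uniform(18.0, 30.0, 500)          # air temperature, deg C
rh = rng.uniform(0.2, 0.8, 500)              # relative humidity, fraction
# toy nonlinear comfort index (illustrative, not the real PMV)
pmv = 0.3 * (temp - 24.0) + 1.5 * (rh - 0.5) * (temp - 24.0) / 6.0

# one hidden tanh layer with random weights + least-squares output layer
X = np.column_stack([(temp - 24.0) / 6.0, rh - 0.5])   # normalized inputs
H = np.tanh(X @ rng.standard_normal((2, 80)) + rng.standard_normal(80))
w, *_ = np.linalg.lstsq(H, pmv, rcond=None)
rmse = float(np.sqrt(np.mean((H @ w - pmv) ** 2)))
```

Even this simple network fits the nonlinear temperature-humidity interaction closely, which is the property the paper exploits when identifying comfort for HVAC control.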
Blind Source Separation and Dynamic Fuzzy Neural Network for Fault Diagnosis in Machines
NASA Astrophysics Data System (ADS)
Huang, Haifeng; Ouyang, Huajiang; Gao, Hongli
2015-07-01
Many assessment and detection methods are used to diagnose faults in machines. High accuracy in fault detection and diagnosis can be achieved by using numerical methods with noise-resistant properties. However, noise always exists to some extent in data measured on real machines, which affects the identification results, especially in the diagnosis of early-stage faults. In view of this situation, a damage assessment method based on blind source separation and a dynamic fuzzy neural network (DFNN) is presented in this paper to diagnose early-stage machinery faults. In processing the measured signals, blind source separation is adopted to reduce noise. Sensitive features of the faults are then obtained by extracting low-dimensional manifold characteristics from the signals. The model for fault diagnosis is established based on the DFNN, and on-line computation is accelerated by means of compressed sensing. Numerical vibration signals of ball-screw fault modes are processed with the model for mechanical fault diagnosis, and the results are in good agreement with the actual condition even at the early stage of fault development. This detection method is very useful in practice and feasible for early-stage fault diagnosis.
Task-dependent neural representations of salient events in dynamic auditory scenes
Shuai, Lan; Elhilali, Mounya
2014-01-01
Selecting pertinent events in the cacophony of sounds that impinge on our ears every day is regulated by the acoustic salience of sounds in the scene as well as their behavioral relevance as dictated by top-down task-dependent demands. The current study aims to explore the neural signature of both facets of attention, as well as their possible interactions in the context of auditory scenes. Using a paradigm with dynamic auditory streams with occasional salient events, we recorded neurophysiological responses of human listeners using EEG while manipulating the subjects' attentional state as well as the presence or absence of a competing auditory stream. Our results showed that salient events caused an increase in the auditory steady-state response (ASSR) irrespective of attentional state or complexity of the scene. Such increase supplemented ASSR increases due to task-driven attention. Salient events also evoked a strong N1 peak in the ERP response when listeners were attending to the target sound stream, accompanied by an MMN-like component in some cases and changes in the P1 and P300 components under all listening conditions. Overall, bottom-up attention induced by a salient change in the auditory stream appears to mostly modulate the amplitude of the steady-state response and certain event-related potentials to salient sound events; though this modulation is affected by top-down attentional processes and the prominence of these events in the auditory scene as well. PMID:25100934
Local community detection as pattern restoration by attractor dynamics of recurrent neural networks.
Okamoto, Hiroshi
2016-08-01
Densely connected parts in networks are referred to as "communities". Community structure is a hallmark of a variety of real-world networks. Individual communities in networks form functional modules of complex systems described by networks. Therefore, finding communities in networks is essential to approaching and understanding complex systems described by networks. In fact, network science has made a great deal of effort to develop effective and efficient methods for detecting communities in networks. Here we put forward a type of community detection, which has been little examined so far but will be practically useful. Suppose that we are given a set of source nodes that includes some (but not all) of "true" members of a particular community; suppose also that the set includes some nodes that are not the members of this community (i.e., "false" members of the community). We propose to detect the community from this "imperfect" and "inaccurate" set of source nodes using attractor dynamics of recurrent neural networks. Community detection by the proposed method can be viewed as restoration of the original pattern from a deteriorated pattern, which is analogous to cue-triggered recall of short-term memory in the brain. We demonstrate the effectiveness of the proposed method using synthetic networks and real social networks for which correct communities are known.
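The pattern-restoration analogy above can be made concrete with a minimal Hopfield-style sketch (our illustration, not the paper's exact dynamics): the stored pattern is a community's membership vector, and starting from an imperfect source set — one true member dropped, one false member added — the attractor dynamics recover the original membership.

```python
import numpy as np

n = 20
true_members = np.array([1] * 8 + [-1] * 12)        # +1 = in the community

# Hebbian weight matrix storing the membership pattern, no self-connections
W = np.outer(true_members, true_members).astype(float)
np.fill_diagonal(W, 0.0)

# deteriorated source set: drop a true member, add a false member
state = true_members.astype(float).copy()
state[0] = -1.0
state[10] = 1.0

for _ in range(5):                                  # synchronous attractor updates
    state = np.sign(W @ state)
recovered = state.astype(int)
```

Because the corrupted state still overlaps strongly with the stored pattern, one update already restores the full membership vector — the network analogue of cue-triggered recall.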
Ling, Hong; Samarasinghe, Sandhya; Kulasiri, Don
2013-12-01
Understanding the control of cellular networks consisting of gene and protein interactions and their emergent properties is a central activity of Systems Biology research. For this, continuous, discrete, hybrid, and stochastic methods have been proposed. Currently, the most common approach to modelling accurate temporal dynamics of networks is ordinary differential equations (ODE). However, critical limitations of ODE models are difficulty in kinetic parameter estimation and numerical solution of a large number of equations, making them more suited to smaller systems. In this article, we introduce a novel recurrent artificial neural network (RNN) that addresses above limitations and produces a continuous model that easily estimates parameters from data, can handle a large number of molecular interactions and quantifies temporal dynamics and emergent systems properties. This RNN is based on a system of ODEs representing molecular interactions in a signalling network. Each neuron represents concentration change of one molecule represented by an ODE. Weights of the RNN correspond to kinetic parameters in the system and can be adjusted incrementally during network training. The method is applied to the p53-Mdm2 oscillation system - a crucial component of the DNA damage response pathways activated by a damage signal. Simulation results indicate that the proposed RNN can successfully represent the behaviour of the p53-Mdm2 oscillation system and solve the parameter estimation problem with high accuracy. Furthermore, we presented a modified form of the RNN that estimates parameters and captures systems dynamics from sparse data collected over relatively large time steps. We also investigate the robustness of the p53-Mdm2 system using the trained RNN under various levels of parameter perturbation to gain a greater understanding of the control of the p53-Mdm2 system. Its outcomes on robustness are consistent with the current biological knowledge of this system. As more
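The core modelling idea — each neuron integrates one molecular ODE, with connection weights playing the role of kinetic parameters — can be sketched with a toy two-node negative-feedback loop standing in for p53-Mdm2. This is our simplified illustration, not the paper's trained RNN; the parameter values are assumptions.

```python
import numpy as np

# kinetic parameters = the "weights" of the two-neuron continuous-time RNN
a, b, c, d = 1.0, 1.0, 2.0, 1.0
p, m = 2.0, 0.1                      # initial p53-like and Mdm2-like levels
dt, steps = 0.01, 5000
traj = []
for _ in range(steps):               # Euler integration = recurrent update rule
    dp = a - b * m * p               # production minus Mdm2-driven degradation
    dm = c * p - d * m               # p53-driven production minus decay
    p, m = p + dt * dp, m + dt * dm
    traj.append(p)
# steady state of this toy loop: p* = 1/sqrt(2), m* = sqrt(2)
```

Training such a network amounts to adjusting a, b, c, d incrementally until the simulated trajectories match measured concentration data; here the loop simply relaxes, with a damped oscillatory transient, to its fixed point.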
Kasabov, Nikola; Dhoble, Kshitij; Nuntalid, Nuttapod; Indiveri, Giacomo
2013-05-01
On-line learning and recognition of spatio- and spectro-temporal data (SSTD) is a very challenging task and an important one for the future development of autonomous machine learning systems with broad applications. Models based on spiking neural networks (SNN) have already proved their potential in capturing spatial and temporal data. One class of them, the evolving SNN (eSNN), uses a one-pass rank-order learning mechanism and a strategy to evolve a new spiking neuron and new connections to learn new patterns from incoming data. So far these networks have been mainly used for fast image and speech frame-based recognition. Alternative spike-time learning methods, such as Spike-Timing Dependent Plasticity (STDP) and its variant Spike Driven Synaptic Plasticity (SDSP), can also be used to learn spatio-temporal representations, but they usually require many iterations in an unsupervised or semi-supervised mode of learning. This paper introduces a new class of eSNN, dynamic eSNN, that utilise both rank-order learning and dynamic synapses to learn SSTD in a fast, on-line mode. The paper also introduces a new model called deSNN, that utilises rank-order learning and SDSP spike-time learning in unsupervised, supervised, or semi-supervised modes. The SDSP learning is used to evolve dynamically the network changing connection weights that capture spatio-temporal spike data clusters both during training and during recall. The new deSNN model is first illustrated on simple examples and then applied on two case study applications: (1) moving object recognition using address-event representation (AER) with data collected using a silicon retina device; (2) EEG SSTD recognition for brain-computer interfaces. The deSNN models resulted in a superior performance in terms of accuracy and speed when compared with other SNN models that use either rank-order or STDP learning. The reason is that the deSNN makes use of both the information contained in the order of the first input spikes
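The one-pass rank-order learning mechanism mentioned above can be sketched in a few lines: each input's weight is set by the rank of its first spike, w_j = mod ** order(j), so earlier spikes receive larger weights. The modulation factor and spike times below are illustrative values, not from the paper.

```python
import numpy as np

mod = 0.8                                               # rank-order modulation factor
first_spike_times = np.array([0.003, 0.001, 0.004, 0.002])   # seconds, one per input

# rank of each input's first spike (0 = earliest)
order = np.argsort(np.argsort(first_spike_times))
w = mod ** order                                        # one-pass weight assignment
```

The earliest-spiking input gets weight 1.0 and each later rank is attenuated by a factor of `mod`, which is why the order of the first input spikes carries so much of the information in these models.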
Dynamics of lysozyme and its hydration water under electric field
Favi, Pelagie M; Zhang, Qiu; O'Neill, Hugh Michael; Mamontov, Eugene; Omar Diallo, Souleymane; Palmer, Jeremy
2014-01-01
The effects of a static electric field on the dynamics of lysozyme and its hydration water have been investigated by means of incoherent quasi-elastic neutron scattering (QENS). Measurements were performed on lysozyme samples hydrated with heavy water (D2O), to capture the protein dynamics, and with light water (H2O), to probe the dynamics of the hydration shell, in the temperature range 210 K < T < 260 K. The hydration fraction in both cases was about 0.38 gram of water per gram of dry protein. The field strengths investigated were 0 kV/mm and 2 kV/mm (2 × 10^6 V/m) for the protein hydrated with D2O, and 0 kV/mm and 1 kV/mm for the H2O-hydrated counterpart. While the overall internal proton dynamics of the protein appears to be unaffected by the application of electric fields up to 2 kV/mm, likely due to the stronger intra-molecular interactions, there is also no appreciable quantitative enhancement of the diffusive dynamics of the hydration water, as would be anticipated based on our recent observations of water confined in silica pores under field values of 2.5 kV/mm. This may be due to the difference in surface interactions between water and the two adsorption hosts (silica and protein), or to the existence of a critical threshold field value Ec ≈ 2-3 kV/mm for increased molecular diffusion, for which electrical breakdown is a limitation for our sample.
Learning from adaptive neural dynamic surface control of strict-feedback systems.
Wang, Min; Wang, Cong
2015-06-01
Learning plays an essential role in autonomous control systems. However, how to achieve learning in a nonstationary environment for nonlinear systems is a challenging problem. In this paper, we present a learning method for a class of nth-order strict-feedback systems based on adaptive dynamic surface control (DSC) technology, which achieves the human-like ability of learning by doing and doing with learned knowledge. To achieve the learning, this paper first proposes a stable adaptive DSC with auxiliary first-order filters, which ensures the boundedness of all the signals in the closed-loop system and the convergence of tracking errors in finite time. With the help of DSC, the derivative of the filter output variable is used as the neural network (NN) input instead of traditional intermediate variables. As a result, the proposed adaptive DSC method greatly reduces the dimension of the NN inputs, especially for high-order systems. After the stable DSC design, we decompose the stable closed-loop system into a series of linear time-varying perturbed subsystems. Using a recursive design, the recurrent property of the NN input variables is easily verified, since the complexity is overcome using DSC. Subsequently, the partial persistent excitation condition of the radial basis function NN is satisfied. By combining a state transformation, accurate approximations of the closed-loop system dynamics are recursively achieved in a local region along recurrent orbits. Then, a learning control method using the learned knowledge is proposed to achieve closed-loop stability and improved control performance. Simulation studies demonstrate that the proposed scheme can not only reuse the learned knowledge to achieve better control performance, with a faster tracking convergence rate and a smaller tracking error, but also greatly alleviate the computational burden by reducing the number and complexity of the NN input variables.
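The auxiliary first-order filter at the heart of DSC can be sketched as follows (illustrative signal and time constant, not the paper's system): the virtual control alpha is passed through tau * z' + z = alpha, so the derivative needed at the next design step is available algebraically as (alpha - z)/tau instead of by analytic differentiation.

```python
import numpy as np

tau, dt = 0.01, 0.001
t = np.arange(0.0, 1.0, dt)
alpha = np.sin(2 * np.pi * t)            # a virtual control trajectory
z = np.zeros_like(t)
for k in range(len(t) - 1):              # forward-Euler integration of the filter
    z[k + 1] = z[k] + dt * (alpha[k] - z[k]) / tau
# worst-case tracking error after the initial transient has decayed
lag_error = float(np.max(np.abs(z[200:] - alpha[200:])))
```

The filter output tracks the virtual control with a small, tunable lag set by tau; this is the trade-off DSC accepts in exchange for avoiding the "explosion of complexity" from repeated differentiation in backstepping.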
Bengoetxea, Ana; Leurs, Françoise; Hoellinger, Thomas; Cebolla, Ana M; Dan, Bernard; McIntyre, Joseph; Cheron, Guy
2014-01-01
In this study we employed a dynamic recurrent neural network (DRNN) in a novel fashion to reveal characteristics of control modules underlying the generation of muscle activations when drawing figures with the outstretched arm. We asked healthy human subjects to perform four different figure-eight movements in each of two workspaces (frontal plane and sagittal plane). We then trained a DRNN to predict the movement of the wrist from information in the EMG signals from seven different muscles. We trained different instances of the same network on a single movement direction, on all four movement directions in a single movement plane, or on all eight possible movement patterns and looked at the ability of the DRNN to generalize and predict movements for trials that were not included in the training set. Within a single movement plane, a DRNN trained on one movement direction was not able to predict movements of the hand for trials in the other three directions, but a DRNN trained simultaneously on all four movement directions could generalize across movement directions within the same plane. Similarly, the DRNN was able to reproduce the kinematics of the hand for both movement planes, but only if it was trained on examples performed in each one. As we will discuss, these results indicate that there are important dynamical constraints on the mapping of EMG to hand movement that depend on both the time sequence of the movement and on the anatomical constraints of the musculoskeletal system. In a second step, we injected EMG signals constructed from different synergies derived by principal component analysis (PCA) in order to identify the mechanical significance of each of these components. From these results, one can surmise that discrete-rhythmic movements may be constructed from three different fundamental modules, one regulating the co-activation of all muscles over the time span of the movement and two others eliciting patterns of reciprocal activation operating in orthogonal directions.
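The mapping the DRNN learns (multichannel EMG in, hand kinematics out) can be sketched with a simpler recurrent stand-in. Below is a minimal echo-state-style network on fully synthetic signals; the channel mixes, delays, and network sizes are invented for illustration and this is not the authors' DRNN architecture or data:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_in, n_res = 800, 7, 120
t = np.arange(T) / 50.0
# synthetic "EMG": 7 smooth rectified channels
phases = rng.uniform(0, 2 * np.pi, n_in)
U = np.abs(np.sin(np.outer(t, np.linspace(1.0, 3.0, n_in)) + phases))
# synthetic "hand trajectory": depends on a short input history,
# which the recurrent state must carry
a, b = rng.standard_normal(n_in), rng.standard_normal(n_in)
Y = np.zeros((T, 2))
Y[2:, 0] = U[1:-1] @ a          # 1-step-delayed mix
Y[2:, 1] = U[:-2] @ b           # 2-step-delayed mix

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # echo-state scaling
X = np.zeros((T, n_res))
for k in range(1, T):
    X[k] = np.tanh(W @ X[k - 1] + W_in @ U[k])

feats = np.hstack([X, U])                          # recurrent state + direct input
ridge = 1e-6
W_out = np.linalg.solve(feats.T @ feats + ridge * np.eye(feats.shape[1]),
                        feats.T @ Y)               # linear readout (ridge regression)
pred = feats @ W_out
r2 = 1 - np.sum((pred[50:] - Y[50:]) ** 2) / np.sum((Y[50:] - Y[50:].mean(0)) ** 2)
```

The point of the toy is the same dependence the abstract highlights: because the target depends on the time sequence of the inputs, a purely instantaneous (feedforward) readout of U alone cannot capture it, while the recurrent state can.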
Frezza-Buet, Hervé
2014-12-01
This paper presents a vector quantization process that can be applied online to a stream of inputs. It enables one to set up and maintain a dynamical representation of the current information in the stream as a topology-preserving graph of prototypical values, as well as a velocity field. The algorithm relies on a formulation of the accuracy of the quantization process that allows for both updating the number of prototypes according to the stream's evolution and stabilizing the representation from which velocities can be extracted. A video processing application is presented. PMID:25248032
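A generic flavor of such online, growing vector quantization can be sketched as follows: create a prototype when an input is far from all existing ones, otherwise drift the nearest prototype toward the input. The threshold, learning rate, and cluster data are invented, and the paper's topology-preserving graph and velocity field are omitted:

```python
import numpy as np

def online_vq(stream, radius=0.3, lr=0.1):
    """One-pass vector quantization: grow prototypes on surprise, adapt winners otherwise."""
    protos = []
    for x in stream:
        if not protos:
            protos.append(np.array(x, float))
            continue
        P = np.array(protos)
        d = np.linalg.norm(P - x, axis=1)
        j = int(np.argmin(d))
        if d[j] > radius:
            protos.append(np.array(x, float))              # grow: new prototype
        else:
            protos[j] = protos[j] + lr * (x - protos[j])   # adapt the winner
    return np.array(protos)

rng = np.random.default_rng(1)
# a stream drawn from two well-separated clusters
data = np.vstack([rng.normal(0, 0.05, (300, 2)), rng.normal(1, 0.05, (300, 2))])
rng.shuffle(data)
P = online_vq(data)
```

After one pass the prototypes summarize the stream with a number of codewords driven by the data itself, the property the abstract's accuracy-based update is designed to provide.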
Dynamics of molecular superrotors in an external magnetic field
NASA Astrophysics Data System (ADS)
Korobenko, Aleksey; Milner, Valery
2015-08-01
We excite diatomic oxygen and nitrogen to high rotational states with an optical centrifuge and study their dynamics in an external magnetic field. Ion imaging is employed to directly visualize, and follow in time, the rotation plane of the molecular superrotors. The two different mechanisms of interaction between the magnetic field and the molecular angular momentum in paramagnetic oxygen and non-magnetic nitrogen lead to qualitatively different behaviour. In nitrogen, we observe the precession of the molecular angular momentum around the field vector. In oxygen, strong spin-rotation coupling results in faster and richer dynamics, encompassing the splitting of the rotation plane into three separate components. As the centrifuged molecules evolve with no significant dispersion of the molecular wave function, the observed magnetic interaction presents an efficient mechanism for controlling the plane of molecular rotation.
An implicit divalent counterion force field for RNA molecular dynamics
NASA Astrophysics Data System (ADS)
Henke, Paul S.; Mak, Chi H.
2016-03-01
How to properly account for polyvalent counterions in a molecular dynamics simulation of polyelectrolytes such as nucleic acids remains an open question. Not only do counterions such as Mg2+ screen electrostatic interactions, they also produce attractive intrachain interactions that stabilize secondary and tertiary structures. Here, we show how a simple force field derived from a recently reported implicit counterion model can be integrated into a molecular dynamics simulation for RNAs to realistically reproduce key structural details of both single-stranded and base-paired RNA constructs. This divalent counterion model is computationally efficient. It works with existing atomistic force fields, or coarse-grained models may be tuned to work with it. We provide optimized parameters for a coarse-grained RNA model that takes advantage of this new counterion force field. Using the new model, we illustrate how the structural flexibility of RNA two-way junctions is modified under different salt conditions.
Rajagopalan, Janani; Modi, Shilpi; Kumar, Pawan; Khushu, Subash; Mandal, Manas K
2015-12-01
It is not clearly known why some people identify camouflaged objects with ease compared with others. The literature suggests that Field-Independent individuals detect camouflaged objects better than their Field-Dependent counterparts, without evidence at the neural activation level. A paradigm was designed to obtain neural correlates of camouflage detection, with real-life photographs, using functional magnetic resonance imaging. Twenty-three healthy human subjects were stratified as Field-Independent (FI) and Field-Dependent (FD) with Witkin's Embedded Figure Test. FIs performed better than FDs (marginal significance; p=0.054) during the camouflage detection task. fMRI revealed a differential activation pattern between FI and FD subjects for this task. A one-sample T-test showed greater activation in terms of cluster size in FDs, whereas FIs showed additional areas for the same task. On direct comparison of the two groups, FI subjects showed additional activation in parts of the primary visual cortex, thalamus, cerebellum, and inferior and middle frontal gyrus. Conversely, FDs showed greater activation in the inferior frontal gyrus, precentral gyrus, putamen, caudate nucleus, and superior parietal lobule as compared to FIs. The results give preliminary evidence of the differential neural activation underlying the variance in cognitive styles of the two groups. PMID:26648036
First principles molecular dynamics without self-consistent field optimization
Souvatzis, Petros; Niklasson, Anders M. N.
2014-01-28
We present a first principles molecular dynamics approach that is based on time-reversible extended Lagrangian Born-Oppenheimer molecular dynamics [A. M. N. Niklasson, Phys. Rev. Lett. 100, 123004 (2008)] in the limit of vanishing self-consistent field optimization. The optimization-free dynamics keeps the computational cost to a minimum and typically provides molecular trajectories that closely follow the exact Born-Oppenheimer potential energy surface. Only a single diagonalization and Hamiltonian (or Fockian) construction are required in each integration time step. The proposed dynamics is derived for a general free-energy potential surface valid at finite electronic temperatures within hybrid density functional theory. Even in the event of irregular functional behavior that may cause a dynamical instability, the optimization-free limit represents a natural starting guess for force calculations that may require a more elaborate iterative electronic ground state optimization. Our optimization-free dynamics thus represents a flexible theoretical framework for a broad and general class of ab initio molecular dynamics simulations.
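Schematically, the time-reversible auxiliary propagation this class of methods builds on can be written as follows. The notation here is generic (following Niklasson's extended Lagrangian framework) and not necessarily the paper's exact symbols:

```latex
P_{n+1} \;=\; 2P_n \;-\; P_{n-1} \;+\; \kappa\,\bigl(D(P_n) - P_n\bigr),
```

where $P_n$ is the auxiliary electronic degree of freedom at time step $n$, $D(P_n)$ is the density matrix obtained from a single diagonalization of the Hamiltonian built from $P_n$, and $\kappa$ is a coupling constant. In the optimization-free limit described above, the nuclear forces are evaluated directly from $D(P_n)$, with no iteration to self-consistency, which is why only one diagonalization per time step is needed.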
Quantum dynamics of charge state in silicon field evaporation
NASA Astrophysics Data System (ADS)
Silaeva, Elena P.; Uchida, Kazuki; Watanabe, Kazuyuki
2016-08-01
The charge state of an ion field-evaporating from a silicon-atom cluster is analyzed using time-dependent density functional theory coupled to molecular dynamics. The final charge state of the ion is shown to increase gradually with increasing external electrostatic field, in agreement with the average charge state of silicon ions detected experimentally. When field evaporation is triggered by laser-induced electronic excitations, the charge state also increases with increasing intensity of the laser pulse. At the evaporation threshold, the charge state of the evaporating ion does not depend on the electrostatic field due to the strong contribution of laser excitations to the ionization process at both low and high laser energies. A neutral silicon atom escaping the cluster due to its high initial kinetic energy is shown to be eventually ionized by the external electrostatic field.
Pollutants dynamics in a rice field and an upland field during storm events
NASA Astrophysics Data System (ADS)
Kim, Jin Soo; Park, Jong-Wha; Jang, Hoon; Kim, Young Hyeon
2010-05-01
We investigated the dynamics of pollutants such as total nitrogen (TN), total phosphorus (TP), biochemical oxygen demand (BOD), chemical oxygen demand (COD), and suspended sediment (SS) in runoff from a rice field and an upland field near the upper stream of the Han River in South Korea for multiple storm events. The upland field was cropped with red pepper, sweet potato, beans, and sesame. Runoff from the rice field started later than that from the upland field owing to the water storage function of the rice field. Unlike the upland field, runoff from the rice field was greatly affected by farmers' water management practices. Overall, event mean concentrations (EMCs) of pollutants in runoff water from the upland field were higher than those from the rice field. In particular, EMCs of TP and SS in runoff water from the upland field were one order of magnitude higher than those from the rice field. This may be because the ponded conditions and flat topography of the rice field greatly reduce the transport of particulate phosphorus associated with soil erosion. The results suggest that the rice field helps control the export of particulate pollutants to adjacent water bodies.
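The event mean concentration used above to compare the two fields is the event pollutant load divided by the event runoff volume, i.e. a flow-weighted mean concentration. A minimal sketch with hypothetical hydrograph and pollutograph samples (the numbers are illustrative, not from the study):

```python
def event_mean_concentration(q, c, dt=300.0):
    """EMC (mg/L) from paired flow samples q (m^3/s) and concentrations c (mg/L)
    taken at a fixed interval dt (s): total load divided by total runoff volume."""
    load = sum(qi * ci * dt for qi, ci in zip(q, c))   # proportional to event load
    volume = sum(qi * dt for qi in q)                   # event runoff volume
    return load / volume

q = [0.1, 0.5, 1.2, 0.8, 0.3]       # hypothetical storm hydrograph (m^3/s)
c = [20.0, 80.0, 60.0, 30.0, 15.0]  # hypothetical pollutograph (mg/L)
emc = event_mean_concentration(q, c)
```

Because the mean is flow-weighted, concentrations sampled at high flow dominate the EMC, which is why EMCs can differ sharply between fields even when instantaneous concentrations overlap.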
Dynamics of Dollard asymptotic variables. Asymptotic fields in Coulomb scattering
NASA Astrophysics Data System (ADS)
Morchio, G.; Strocchi, F.
2016-03-01
Generalizing Dollard’s strategy, we investigate the structure of the scattering theory associated to any large time reference dynamics UD(t) allowing for the existence of Møller operators. We show that (for each scattering channel) UD(t) uniquely identifies, for t →±∞, asymptotic dynamics U±(t); they are unitary groups acting on the scattering spaces, satisfy the Møller interpolation formulas and are interpolated by the S-matrix. In view of the application to field theory models, we extend the result to the adiabatic procedure. In the Heisenberg picture, asymptotic variables are obtained as LSZ-like limits of Heisenberg variables; their time evolution is induced by U±(t), which replace the usual free asymptotic dynamics. On the asymptotic states, (for each channel) the Hamiltonian can be written in terms of the asymptotic variables as H = H±(qout/in,pout/in), with H±(q,p) the generator of the asymptotic dynamics. As an application, we obtain the asymptotic fields ψout/in in repulsive Coulomb scattering by an LSZ modified formula; in this case, U±(t) = U0(t), so that ψout/in are free canonical fields and H = H0(ψout/in).
Hidden Markov Models and Neural Networks for Fault Detection in Dynamic Systems
NASA Technical Reports Server (NTRS)
Smyth, Padhraic
1994-01-01
None given. (From conclusion): Neural networks plus hidden Markov models (HMMs) can provide excellent detection and false-alarm-rate performance in fault detection applications. Modified models allow for novelty detection. The report also covers key contributions of the neural network model and application status.
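The neural-network-plus-HMM combination can be sketched as a two-state (normal/fault) forward filter over per-frame fault scores produced by a classifier: the Markov transition prior damps brief false alarms while letting sustained evidence accumulate. All probabilities below are illustrative, not from the report:

```python
def hmm_filter(obs_fault_prob, p_stay=0.98):
    """Return P(fault | observations so far) at each step of a two-state HMM,
    treating each frame's classifier output as the fault-emission likelihood."""
    A = [[p_stay, 1 - p_stay], [1 - p_stay, p_stay]]  # transition matrix
    belief = [0.99, 0.01]                              # start in 'normal'
    out = []
    for p in obs_fault_prob:
        # predict through the transition model
        pred = [A[0][0] * belief[0] + A[1][0] * belief[1],
                A[0][1] * belief[0] + A[1][1] * belief[1]]
        # update with emission likelihoods (1-p for normal, p for fault)
        post = [pred[0] * (1 - p), pred[1] * p]
        z = post[0] + post[1]
        belief = [post[0] / z, post[1] / z]
        out.append(belief[1])
    return out

# a brief burst of high fault scores is damped; a sustained run is not
scores = [0.1] * 20 + [0.9] * 2 + [0.1] * 20 + [0.9] * 20
traj = hmm_filter(scores)
```

The temporal smoothing is exactly what improves the false alarm rate relative to thresholding each frame's score independently.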
Simultaneous Electromagnetic Tracking and Calibration for Dynamic Field Distortion Compensation.
Sadjadi, Hossein; Hashtrudi-Zaad, Keyvan; Fichtinger, Gabor
2016-08-01
Electromagnetic (EM) tracking systems are highly susceptible to field distortion. The interference can cause measurement errors up to a few centimeters in clinical environments, which limits the reliability of these systems. Unless corrected for, this measurement error imperils the success of clinical procedures. It is therefore fundamental to dynamically calibrate EM tracking systems and compensate for measurement error caused by field-distorting objects commonly present in clinical environments. We propose to combine a motion model with observations of redundant EM sensors and compensate for field distortions in real time. We employ a simultaneous localization and mapping technique to accurately estimate the pose of the tracked instrument while creating the field distortion map. We conducted experiments with six degrees-of-freedom motions in the presence of field-distorting objects in research and clinical environments. We applied our approach to improve the EM tracking accuracy and compared our results to a conventional sensor fusion technique. Using our approach, the maximum tracking error was reduced by 67% for position measurements and by 64% for orientation measurements. Currently, clinical applications of EM trackers are hampered by the adverse distortion effects. Our approach introduces a novel method for dynamic field distortion compensation, independent of preoperative calibrations or external tracking devices, and enables reliable EM navigation for potential applications. PMID:26595908
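One way to see why a motion model makes distortion separable from true pose is an augmented-state Kalman filter that estimates a measurement bias jointly with position. This 1-D toy is an illustration only; the paper's method is a full SLAM formulation with redundant sensors and a spatial distortion map, not this filter:

```python
import numpy as np

rng = np.random.default_rng(2)
dt, v, bias, sigma = 0.1, 1.0, 0.5, 0.02   # known velocity, unknown constant bias
F = np.eye(2)                               # state [position, bias]; bias is static
H = np.array([[1.0, 1.0]])                  # sensor reads position + distortion bias
Q = np.diag([1e-6, 1e-8])                   # small process noise
R = np.array([[sigma ** 2]])
x_hat = np.array([0.0, 0.0])
P = np.diag([1e-6, 1.0])                    # initial position known, bias unknown
x_true = 0.0
for _ in range(300):
    x_true += v * dt
    z = x_true + bias + rng.normal(0, sigma)            # distorted measurement
    x_hat = F @ x_hat + np.array([v * dt, 0.0])         # predict with motion model
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                       # Kalman gain
    x_hat = x_hat + K @ (z - H @ x_hat)                  # update
    P = (np.eye(2) - K @ H) @ P
```

Because the motion model predicts where the instrument should be, any consistent residual is attributed to the bias state, so the filter converges to both the true position and the distortion offset.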
Downscaling Transpiration from the Field to the Tree Scale using the Neural Network Approach
NASA Astrophysics Data System (ADS)
Hopmans, J. W.
2015-12-01
Estimating actual evapotranspiration (ETa) spatial variability in orchards is key when trying to quantify water (and associated nutrient) leaching, both with the mass balance and inverse modeling methods. ETa measurements, however, generally occur at larger scales (e.g. the Eddy-covariance method) or have limited quantitative accuracy. In this study we propose to establish a statistical relation between field ETa and field-averaged variables known to be closely related to it, such as stem water potential (WP), soil water storage (WS) and ETc. To that end, we use four years of soil and almond tree water status data to train artificial neural networks (ANNs) predicting field-scale ETa and downscale the relation to the individual tree scale. ANNs composed of only two neurons in a hidden layer (11 parameters in total) proved to be the most accurate (overall RMSE = 0.0246 mm/h, R2 = 0.944), seemingly because adding more neurons generated overfitting of noise in the training dataset. According to the optimized weights in the best ANNs, the first hidden neuron could be considered in charge of relaying the ETc information while the other one would deal with the water stress response to stem WP, soil WS, and ETc. As individual trees had specific signatures for combinations of these variables, variability was generated in their ETa responses. The relative canopy cover was the main source of variability of ETa, while stem WP was the most influential factor for the ETa / ETc ratio. Trees on the drip-irrigated side of the orchard appeared to be less affected by low estimated soil WS in the root zone than on the fanjet micro-sprinkler side, possibly due to a combination of (i) more substantial root biomass increasing the plant hydraulic conductance, (ii) bias in the soil WS estimation due to soil moisture heterogeneity on the drip side, and (iii) access to deeper water resources. Tree-scale ETa responses are in good agreement with soil-plant water relations reported in the literature, and
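The smallest network described above (three field-scale inputs, two tanh hidden neurons, one linear output: 3×2 + 2 + 2 + 1 = 11 parameters) can be written out directly. The weights below are random placeholders for illustration, not the study's fitted values:

```python
import numpy as np

rng = np.random.default_rng(0)
# hidden layer: 2 neurons over 3 inputs (ETc, stem WP, soil WS) -> 6 + 2 params
W1, b1 = rng.standard_normal((2, 3)), rng.standard_normal(2)
# linear output over 2 hidden activations -> 2 + 1 params
w2, b2 = rng.standard_normal(2), float(rng.standard_normal())

def eta_hat(etc, stem_wp, soil_ws):
    """Predicted field ETa (arbitrary units) from the 2-neuron network."""
    h = np.tanh(W1 @ np.array([etc, stem_wp, soil_ws]) + b1)
    return float(w2 @ h + b2)

n_params = W1.size + b1.size + w2.size + 1   # = 11, as in the abstract
y = eta_hat(0.3, -1.2, 0.25)                 # hypothetical input values
```

With so few parameters the model is strongly regularized by construction, which is consistent with the abstract's observation that larger networks overfit noise in the training set.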
Microstructures fabricated by dynamically controlled femtosecond patterned vector optical fields.
Cai, Meng-Qiang; Li, Ping-Ping; Feng, Dan; Pan, Yue; Qian, Sheng-Xia; Li, Yongnan; Tu, Chenghou; Wang, Hui-Tian
2016-04-01
We have presented and demonstrated a method for fabricating various complicated microstructures based on dynamically controlled patterned vector optical fields (PVOFs). We design and generate dynamic PVOFs by loading patterned holograms displayed on a spatial light modulator and moving the traces of focuses with different patterns. We experimentally fabricate various microstructures in z-cut lithium niobate plates. The method offers benefits such as no motion of the fabricated samples and high efficiency owing to its parallel nature. Moreover, our approach is able to fabricate three-dimensional microstructures. PMID:27192265
The role of membrane dynamics in electrical and infrared neural stimulation
NASA Astrophysics Data System (ADS)
Moen, Erick K.; Beier, Hope T.; Ibey, Bennett L.; Armani, Andrea M.
2016-03-01
We recently developed a nonlinear optical imaging technique based on second harmonic generation (SHG) to identify membrane disruption events in live cells. This technique was used to detect nanoporation in the plasma membrane following nanosecond pulsed electric field (nsPEF) exposure. It has been hypothesized that similar poration events could be induced by the thermal gradients generated by infrared (IR) laser energy. Optical pulses are a highly desirable stimulus for the nervous system, as they are capable of inhibiting and producing action potentials in a highly localized but non-contact fashion. However, the underlying mechanisms involved with infrared neural stimulation (INS) are not well understood. The ability of our method to non-invasively measure membrane structure and transmembrane potential via Two Photon Fluorescence (TPF) make it uniquely suited to neurological research. In this work, we leverage our technique to understand what role membrane structure plays during INS and contrast it with nsPEF stimulation. We begin by examining the effect of IR pulses on CHO-K1 cells before progressing to primary hippocampal neurons. The use of these two cell lines allows us to directly compare poration as a result of IR pulses to nsPEF exposure in both a neuron-derived cell line, and one likely lacking native channels sensitive to thermal stimuli.
Neural Architectures for Control
NASA Technical Reports Server (NTRS)
Peterson, James K.
1991-01-01
The cerebellar model articulated controller (CMAC) neural architectures are shown to be viable for the purposes of real-time learning and control. Software tools for the exploration of CMAC performance are developed for three hardware platforms, the Macintosh, the IBM PC, and the SUN workstation. All algorithm development was done using the C programming language. These software tools were then used to implement an adaptive critic neuro-control design that learns in real-time how to back up a trailer truck. The truck backer-upper experiment is a standard performance measure in the neural network literature, but previously the training of the controllers was done off-line. With the CMAC neural architectures, it was possible to train the neuro-controllers on-line in real-time on a MS-DOS PC 386. CMAC neural architectures are also used in conjunction with a hierarchical planning approach to find collision-free paths over 2-D analog-valued obstacle fields. The method constructs a coarse resolution version of the original problem and then finds the corresponding coarse optimal path using multipass dynamic programming. CMAC artificial neural architectures are used to estimate the analog transition costs that dynamic programming requires. The CMAC architectures are trained in real-time for each obstacle field presented. The coarse optimal path is then used as a baseline for the construction of a fine scale optimal path through the original obstacle array. These results are a very good indication of the potential power of the neural architectures in control design. In order to reach as wide an audience as possible, we have run a seminar on neuro-control that has met once per week since 20 May 1991. This seminar has thoroughly discussed the CMAC architecture, relevant portions of classical control, back propagation through time, and adaptive critic designs.
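A minimal 1-D CMAC can be sketched as several offset tilings, each activating one cell per input, with the output being the sum of the active cells' weights and training distributing the prediction error among them. The parameters and target function below are illustrative, not those of the report's truck backer-upper or path planner:

```python
import numpy as np

class CMAC:
    """Toy 1-D CMAC: n_tilings offset coarse tilings, one active cell each."""
    def __init__(self, n_tilings=8, n_cells=32, lo=0.0, hi=1.0):
        self.n_tilings = n_tilings
        self.lo, self.width = lo, (hi - lo) / (n_cells - 1)
        self.w = np.zeros((n_tilings, n_cells + 1))   # +1 guards the upper edge
    def _cells(self, x):
        # each tiling is shifted by a fraction of one cell width
        return [int((x - self.lo + t * self.width / self.n_tilings) // self.width)
                for t in range(self.n_tilings)]
    def predict(self, x):
        return sum(self.w[t, c] for t, c in enumerate(self._cells(x)))
    def train(self, x, y, lr=0.3):
        err = y - self.predict(x)
        for t, c in enumerate(self._cells(x)):
            self.w[t, c] += lr * err / self.n_tilings   # share the correction

net = CMAC()
rng = np.random.default_rng(0)
for _ in range(4000):
    x = rng.uniform(0.0, 1.0)
    net.train(x, np.sin(2 * np.pi * x))   # learn a smooth target online
xs = np.linspace(0.05, 0.95, 50)
max_err = max(abs(net.predict(x) - np.sin(2 * np.pi * x)) for x in xs)
```

Because only a handful of weights are touched per update, learning is cheap enough for the on-line real-time training the report describes, while the overlapping tilings provide local generalization between nearby inputs.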
Molecular dynamics simulations of methane hydrate using polarizable force fields
Jiang, H.N.; Jordan, K.D.; Taylor, C.E.
2007-06-14
Molecular dynamics simulations of methane hydrate have been carried out using the polarizable AMOEBA and COS/G2 force fields. Properties calculated include the temperature dependence of the lattice constant, the OC and OO radial distribution functions, and the vibrational spectra. Both the AMOEBA and COS/G2 force fields are found to successfully account for the available experimental data, with overall somewhat better agreement with experiment being found for the AMOEBA model. Comparison is made with previous results obtained using TIP4P and SPC/E effective two-body force fields and the polarizable TIP4P-FQ force field, which allows for in-plane polarization only. Significant differences are found between the properties calculated using the TIP4P-FQ model and those obtained using the other models, indicating an inadequacy of restricting explicit polarization to in-plane only.
Anomaly-Induced Dynamical Refringence in Strong-Field QED.
Mueller, N; Hebenstreit, F; Berges, J
2016-08-01
We investigate the impact of the Adler-Bell-Jackiw anomaly on the nonequilibrium evolution of strong-field quantum electrodynamics (QED) using real-time lattice gauge theory techniques. For field strengths exceeding the Schwinger limit for pair production, we encounter a highly absorptive medium with anomaly induced dynamical refractive properties. In contrast to earlier expectations based on equilibrium properties, where net anomalous effects vanish because of the trivial vacuum structure, we find that out-of-equilibrium conditions can have dramatic consequences for the presence of quantum currents with distinctive macroscopic signatures. We observe an intriguing tracking behavior, where the system spends longest times near collinear field configurations with maximum anomalous current. Apart from the potential relevance of our findings for future laser experiments, similar phenomena related to the chiral magnetic effect are expected to play an important role for strong QED fields during initial stages of heavy-ion collision experiments. PMID:27541456
Anomaly-Induced Dynamical Refringence in Strong-Field QED
NASA Astrophysics Data System (ADS)
Mueller, N.; Hebenstreit, F.; Berges, J.
2016-08-01
We investigate the impact of the Adler-Bell-Jackiw anomaly on the nonequilibrium evolution of strong-field quantum electrodynamics (QED) using real-time lattice gauge theory techniques. For field strengths exceeding the Schwinger limit for pair production, we encounter a highly absorptive medium with anomaly induced dynamical refractive properties. In contrast to earlier expectations based on equilibrium properties, where net anomalous effects vanish because of the trivial vacuum structure, we find that out-of-equilibrium conditions can have dramatic consequences for the presence of quantum currents with distinctive macroscopic signatures. We observe an intriguing tracking behavior, where the system spends longest times near collinear field configurations with maximum anomalous current. Apart from the potential relevance of our findings for future laser experiments, similar phenomena related to the chiral magnetic effect are expected to play an important role for strong QED fields during initial stages of heavy-ion collision experiments.
NASA Astrophysics Data System (ADS)
Omori, Toshiaki; Horiguchi, Tsuyoshi
2004-12-01
We propose a two-layered neural network model for oscillatory phenomena in the thalamic system and investigate the effect of neuromodulation by acetylcholine on these oscillations through numerical simulations. The proposed model consists of a layer of thalamic reticular neurons and a layer of cholinergic neurons. We introduce a dynamics for the acetylcholine concentration that depends on the state of the cholinergic neurons, and assume that the conductance of the thalamic reticular neurons is dynamically regulated by acetylcholine. From the simulation results, we find that a dynamical transition between a bursting state and a resting state occurs successively in the layer of thalamic reticular neurons due to acetylcholine. It thus turns out that neuromodulation by acetylcholine is important for the dynamical state transition in the thalamic system.
Ziv, Omer; Zaritsky, Assaf; Yaffe, Yakey; Mutukula, Naresh; Edri, Reuven; Elkabetz, Yechiel
2015-10-01
Neural stem cells (NSCs) are progenitor cells for brain development, where cellular spatial composition (cytoarchitecture) and dynamics are hypothesized to be linked to critical NSC capabilities. However, understanding the cytoarchitectural dynamics of this process has been limited by the difficulty of quantitatively imaging brain development in vivo. Here, we study NSC dynamics within neural rosettes--highly organized multicellular structures derived from human pluripotent stem cells. Neural rosettes contain NSCs with strong epithelial polarity and are expected to perform apical-basal interkinetic nuclear migration (INM)--a hallmark of cortical radial glial cell development. We developed a quantitative live imaging framework to characterize INM dynamics within rosettes. We first show that the tendency of cells to follow the INM orientation--a phenomenon we refer to as radial organization--is associated with rosette size, presumably via mechanical constraints of the confining structure. Second, early-forming rosettes, which are abundant with founder NSCs and correspond to the early proliferative developing cortex, show fast motions and enhanced radial organization. In contrast, later-derived rosettes, which are characterized by reduced NSC capacity and elevated numbers of differentiated neurons, and thus correspond to the neurogenesis mode in the developing cortex, exhibit slower motions and decreased radial organization. Third, later-derived rosettes are characterized by temporal instability in INM measures, in agreement with the progressive loss of rosette integrity at later developmental stages. Finally, molecular perturbations of INM by inhibition of actin or non-muscle myosin-II (NMII) reduced INM measures. Our framework enables quantification of cytoarchitectural NSC dynamics and may have implications for functional molecular studies, drug screening, and iPS cell-based platforms for disease modeling.