Metastable dynamics in heterogeneous neural fields
Schwappach, Cordula; Hutt, Axel; beim Graben, Peter
2015-01-01
We present numerical simulations of metastable states in heterogeneous neural fields that are connected along heteroclinic orbits. Such trajectories are possible representations of transient neural activity as observed, for example, in the electroencephalogram. Based on previous theoretical findings on learning algorithms for neural fields, we directly construct synaptic weight kernels from Lotka-Volterra neural population dynamics without supervised training approaches. We deliver a MATLAB neural field toolbox validated by two examples of one- and two-dimensional neural fields. We demonstrate trial-to-trial variability and distributed representations in our simulations which might therefore be regarded as a proof-of-concept for more advanced neural field models of metastable dynamics in neurophysiological data. PMID:26175671
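The Lotka-Volterra mechanism behind such metastable sequences can be illustrated with the classic May-Leonard system, whose saddle points are joined by a heteroclinic cycle. The rate matrix and parameters below are a textbook choice for winnerless competition, not the synaptic kernels constructed in the paper:

```python
import numpy as np

# Sketch of generalized Lotka-Volterra population dynamics whose saddles
# form a heteroclinic sequence of metastable states. The May-Leonard
# rate matrix below is a textbook choice, not the paper's learned kernel.
def lv_step(x, rho, dt=0.01):
    """One Euler step of dx_i/dt = x_i * (1 - sum_j rho_ij x_j)."""
    return x + dt * x * (1.0 - rho @ x)

# Cyclic asymmetric competition with a < 1 < b and a + b > 2: the
# heteroclinic cycle is attracting, so dominance switches sequentially.
a, b = 0.5, 2.0
rho = np.array([[1.0, a,   b],
                [b,   1.0, a],
                [a,   b,   1.0]])

x = np.array([0.9, 0.05, 0.05])
winners = []
for _ in range(20000):                 # 200 time units
    x = lv_step(x, rho)
    winners.append(int(np.argmax(x)))  # currently dominant population
```

Each population transiently dominates before activity switches to the next saddle, which is the metastable, sequence-like behavior the abstract describes.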
Neural field theory with variance dynamics.
Robinson, P A
2013-06-01
Previous neural field models have mostly been concerned with prediction of mean neural activity and with second-order quantities such as its variance, but without feedback of second-order quantities on the dynamics. Here the effects of feedback of the variance on the steady states and adiabatic dynamics of neural systems are calculated using linear neural field theory to estimate the neural voltage variance, then including this quantity in the total variance parameter of the nonlinear firing rate-voltage response function, and thus in the determination of the fixed points and of the variance itself. The general results further clarify the limits of validity of approaches with and without inclusion of variance dynamics. Specific applications show that stability against a saddle-node bifurcation is reduced in a purely cortical system, but can be either increased or decreased in the corticothalamic case, depending on the initial state. Estimates of critical variance scalings near the saddle-node bifurcation are also found, including physiologically based normalizations and new scalings for mean firing rate and the position of the bifurcation.
Dynamic Neural Fields with Intrinsic Plasticity
Strub, Claudius; Schöner, Gregor; Wörgötter, Florentin; Sandamirskaya, Yulia
2017-01-01
Dynamic neural fields (DNFs) are dynamical systems models that approximate the activity of large, homogeneous, and recurrently connected neural networks based on a mean field approach. Within dynamic field theory, DNFs have been used as building blocks in architectures to model sensorimotor embedding of cognitive processes. Typically, the parameters of a DNF in an architecture are manually tuned in order to achieve a specific dynamic behavior (e.g., decision making, selection, or working memory) for a given input pattern. This manual parameter search requires expert knowledge and time to find and verify a suitable set of parameters. The DNF parametrization may be particularly challenging if the input distribution is not known in advance, e.g., when processing sensory information. In this paper, we propose the autonomous adaptation of the DNF resting level and gain by a learning mechanism of intrinsic plasticity (IP). To enable this adaptation, input and output measures for the DNF are introduced, together with a hyperparameter that defines the desired output distribution. The online adaptation by IP makes it possible to pre-define the DNF output statistics without knowledge of the input distribution, and thus also to compensate for changes in it. The capabilities and limitations of this approach are evaluated in a number of experiments. PMID:28912706
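The flavor of this idea can be sketched with a 1D Amari-style field whose resting level is adapted online so that the mean output matches a target. The simple integral-style IP rule below, and the choice to adapt only the resting level h rather than both h and the gain, are illustrative simplifications, not the rule proposed in the paper:

```python
import numpy as np

# 1D Amari-style field whose resting level h is adapted online by a
# simple intrinsic-plasticity rule driving the mean output toward a
# target. The IP rule (and adapting only h, not the gain beta) is a
# simplified stand-in for the mechanism proposed in the paper.
def sigmoid(u, beta):
    return 1.0 / (1.0 + np.exp(-beta * u))

n = 101
xs = np.linspace(-10.0, 10.0, n)
dx = xs[1] - xs[0]
d = xs[:, None] - xs[None, :]
# Mexican-hat interaction: local excitation, broader inhibition
w = 1.0 * np.exp(-d**2 / 2.0) - 0.5 * np.exp(-d**2 / 8.0)

u = np.full(n, -1.0)                   # field activation
h, beta = -1.0, 2.0                    # resting level (adapted) and gain
target, eta = 0.2, 0.02                # desired mean output, IP rate
dt, tau = 0.1, 1.0
inp = 3.0 * np.exp(-xs**2 / 2.0)       # localized input bump

for _ in range(5000):
    f = sigmoid(u, beta)
    u += dt / tau * (-u + h + dx * (w @ f) + inp)
    h += eta * (target - f.mean())     # IP: match mean output statistics
```

Because the update integrates the output error, the field's mean firing rate settles near the target without any knowledge of the input distribution, which is the key property the abstract highlights.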
Investigation on Amari's dynamical neural field with global constant inhibition.
Jin, Dequan; Peng, Jigen
2015-11-01
In this paper, the properties of Amari's dynamical neural field with global constant inhibition induced by its kernel are investigated. Amari's dynamical neural field successfully illustrates many neurophysiological phenomena and has in recent years been applied to unsupervised learning tasks such as data clustering. In these applications, the stationary solution of the field plays an important role: the underlying patterns being perceived are usually represented by its excited region. However, the type of stationary solution of a field with a typical kernel is often sensitive to the kernel's parameters, which limits its range of application. Unlike fields with typical kernels, which have been discussed extensively, there are few theoretical results on dynamical neural fields with a global constant inhibitory kernel, even though such fields have already shown better performance in practice. In this paper, some important results on the existence and stability of stationary solutions of dynamical neural fields with a global constant inhibitory kernel are obtained. These results show that this kind of dynamical neural field has greater potential for tasks such as data clustering than fields with typical kernels, and they provide a theoretical basis for its further application.
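The setting analyzed here, a local excitatory kernel minus a global constant inhibition w0, can be sketched in a few lines. The parameter values, the step-function rate, and the two-cluster input below are illustrative choices, not the paper's conditions:

```python
import numpy as np

# Field with kernel k(x - x') minus a global constant inhibition w0, the
# setting analyzed for clustering-type applications. Parameter values
# and the two-cluster input are illustrative.
n = 200
xs = np.linspace(0.0, 20.0, n)
dx = xs[1] - xs[0]
d = xs[:, None] - xs[None, :]
w = 1.5 * np.exp(-d**2) - 0.4          # local excitation minus w0 = 0.4

def step_rate(u):
    return (u > 0).astype(float)       # Amari's step-function firing rate

h = -0.3                               # resting level below threshold
inp = np.exp(-(xs - 5.0)**2) + 0.6 * np.exp(-(xs - 15.0)**2)  # two clusters
u = np.full(n, -0.5)
for _ in range(400):
    u += 0.1 * (-u + h + dx * (w @ step_rate(u)) + inp)
excited = step_rate(u)                 # excited region marks the patterns
```

The stationary excited region stays localized over the input clusters because the constant inhibition penalizes total excited length, which is what makes this kernel attractive for clustering.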
Neural Population Dynamics Modeled by Mean-Field Graphs
NASA Astrophysics Data System (ADS)
Kozma, Robert; Puljic, Marko
2011-09-01
In this work we apply a random graph theory approach to describe neural population dynamics. Such an approach has important advantages alongside ordinary and partial differential equations: the mathematical theory of large-scale random graphs provides an efficient tool to describe transitions between high- and low-dimensional spaces. Recent advances in studying neural correlates of higher cognition indicate the significance of sudden changes in space-time neurodynamics, which can be efficiently described as phase transitions in the neuropil medium. Phase transitions are rigorously defined mathematically on random graph sequences, and they can be naturally generalized to a class of percolation processes called neuropercolation. In this work we employ mean-field graphs with given vertex degree and edge strength distributions. We demonstrate the emergence of collective oscillations in the style of brains.
Dynamic neural fields as a step toward cognitive neuromorphic architectures
Sandamirskaya, Yulia
2014-01-01
Dynamic Field Theory (DFT) is an established framework for modeling embodied cognition. In DFT, elementary cognitive functions such as memory formation, formation of grounded representations, attentional processes, decision making, adaptation, and learning emerge from neuronal dynamics. The basic computational element of this framework is a Dynamic Neural Field (DNF). Under constraints on the time-scale of the dynamics, the DNF is computationally equivalent to a soft winner-take-all (WTA) network, which is considered one of the basic computational units in neuronal processing. Recently, it has been shown how a WTA network may be implemented in neuromorphic hardware, such as analog Very Large Scale Integration (VLSI) devices. This paper leverages the relationship between DFT and soft WTA networks to systematically revise and integrate established DFT mechanisms that have previously been spread among different architectures. In addition, I identify some novel computational and architectural mechanisms of DFT which may be implemented in neuromorphic VLSI devices using WTA networks as an intermediate computational layer. These specific mechanisms include the stabilization of working memory, the coupling of sensory systems to motor dynamics, intentionality, and autonomous learning. I further demonstrate how all these elements may be integrated into a unified architecture to generate behavior and autonomous learning. PMID:24478620
TUTORIAL: The dynamic neural field approach to cognitive robotics
NASA Astrophysics Data System (ADS)
Erlhagen, Wolfram; Bicho, Estela
2006-09-01
This tutorial presents an architecture for autonomous robots to generate behavior in joint action tasks. To efficiently interact with another agent in solving a mutual task, a robot should be endowed with cognitive skills such as memory, decision making, action understanding and prediction. The proposed architecture is strongly inspired by our current understanding of the processing principles and the neuronal circuitry underlying these functionalities in the primate brain. As a mathematical framework, we use a coupled system of dynamic neural fields, each representing the basic functionality of neuronal populations in different brain areas. It implements goal-directed behavior in joint action as a continuous process that builds on the interpretation of observed movements in terms of the partner's action goal. We validate the architecture in two experimental paradigms: (1) a joint search task; (2) a reproduction of an observed or inferred end state of a grasping-placing sequence. We also review some of the mathematical results about dynamic neural fields that are important for the implementation work.
A dynamic neural field model of temporal order judgments.
Hecht, Lauren N; Spencer, John P; Vecera, Shaun P
2015-12-01
Temporal ordering of events is biased, or influenced, by perceptual organization (figure-ground organization) and by spatial attention. For example, within a region assigned figural status or at an attended location, onset events are processed earlier (Lester, Hecht, & Vecera, 2009; Shore, Spence, & Klein, 2001), and offset events are processed for longer durations (Hecht & Vecera, 2011; Rolke, Ulrich, & Bausenhart, 2006). Here, we present an extension of a dynamic field model of change detection (Johnson, Spencer, Luck, & Schöner, 2009; Johnson, Spencer, & Schöner, 2009) that accounts for both the onset and offset performance for figural and attended regions. The model posits that neural populations processing the figure are more active, resulting in a peak of activation that quickly builds toward a detection threshold when the onset of a target is presented. This same enhanced activation for some neural populations is maintained when a present target is removed, creating delays in the perception of the target's offset. We discuss the broader implications of this model, including insights regarding how neural activation can be generated in response to the disappearance of information.
Neural masses and fields in dynamic causal modeling
Moran, Rosalyn; Pinotsis, Dimitris A.; Friston, Karl
2013-01-01
Dynamic causal modeling (DCM) provides a framework for the analysis of effective connectivity among neuronal subpopulations that subtend invasive (electrocorticograms and local field potentials) and non-invasive (electroencephalography and magnetoencephalography) electrophysiological responses. This paper reviews the suite of neuronal population models including neural masses, fields and conductance-based models that are used in DCM. These models are expressed in terms of sets of differential equations that allow one to model the synaptic underpinnings of connectivity. We describe early developments using neural mass models, where convolution-based dynamics are used to generate responses in laminar-specific populations of excitatory and inhibitory cells. We show that these models, though resting on only two simple transforms, can recapitulate the characteristics of both evoked and spectral responses observed empirically. Using an identical neuronal architecture, we show that a set of conductance based models—that consider the dynamics of specific ion-channels—present a richer space of responses; owing to non-linear interactions between conductances and membrane potentials. We propose that conductance-based models may be more appropriate when spectra present with multiple resonances. Finally, we outline a third class of models, where each neuronal subpopulation is treated as a field; in other words, as a manifold on the cortical surface. By explicitly accounting for the spatial propagation of cortical activity through partial differential equations (PDEs), we show that the topology of connectivity—through local lateral interactions among cortical layers—may be inferred, even in the absence of spatially resolved data. We also show that these models allow for a detailed analysis of structure–function relationships in the cortex. Our review highlights the relationship among these models and how the hypothesis asked of empirical data suggests an appropriate
Behavioral dynamics and neural grounding of a dynamic field theory of multi-object tracking.
Spencer, J P; Barich, K; Goldberg, J; Perone, S
2012-09-01
The ability to dynamically track moving objects in the environment is crucial for efficient interaction with the local surrounds. Here, we examined this ability in the context of the multi-object tracking (MOT) task. Several theories have been proposed to explain how people track moving objects; however, only one of these previous theories is implemented in a real-time process model, and there has been no direct contact between theories of object tracking and the growing neural literature using ERPs and fMRI. Here, we present a neural process model of object tracking that builds from a Dynamic Field Theory of spatial cognition. Simulations reveal that our dynamic field model captures recent behavioral data examining the impact of speed and tracking duration on MOT performance. Moreover, we show that the same model with the same trajectories and parameters can shed light on recent ERP results probing how people distribute attentional resources to targets vs. distractors. We conclude by comparing this new theory of object tracking to other recent accounts, and discuss how the neural grounding of the theory might be effectively explored in future work.
Validating a model for detecting magnetic field intensity using dynamic neural fields.
Taylor, Brian K
2016-11-07
Several animals use properties of Earth's magnetic field as a part of their navigation toolkit to accomplish tasks ranging from local homing to continental migration. Studying these behaviors has led to the postulation of both a magnetite-based sense, and a chemically based radical-pair mechanism. Several researchers have proposed models aimed at both understanding these mechanisms, and offering insights into future physiological experiments. The present work mathematically implements a previously developed conceptual model for sensing and processing magnetite-based magnetosensory feedback by using dynamic neural fields, a computational neuroscience tool for modeling nervous system dynamics and processing. Results demonstrate the plausibility of the conceptual model's predictions. Specifically, a population of magnetoreceptors in which each individual can only sense directional information can encode magnetic intensity en masse. Multiple populations can encode both magnetic direction, and intensity, two parameters that several animals use in their navigational toolkits. This work can be expanded to test other magnetoreceptor models.
Neural field simulator: two-dimensional spatio-temporal dynamics involving finite transmission speed
Nichols, Eric J.; Hutt, Axel
2015-01-01
Neural Field models (NFM) play an important role in the understanding of neural population dynamics on a mesoscopic spatial and temporal scale. Their numerical simulation is an essential element in the analysis of their spatio-temporal dynamics. The simulation tool described in this work considers scalar spatially homogeneous neural fields taking into account a finite axonal transmission speed and synaptic temporal derivatives of first and second order. A text-based interface offers complete control of field parameters and several approaches are used to accelerate simulations. A graphical output utilizes video hardware acceleration to display running output with reduced computational hindrance compared to simulators that are exclusively software-based. Diverse applications of the tool demonstrate breather oscillations, static and dynamic Turing patterns and activity spreading with finite propagation speed. The simulator is open source to allow tailoring of code and this is presented with an extension use case. PMID:26539105
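The finite transmission speed the simulator handles can be sketched with a ring buffer of past firing rates, so that each connection reads activity delayed by |x - x'|/c. This minimal NumPy version (kernel, speed, sigmoid, and constant-past initialization all chosen for illustration) is a sketch of the technique, not the simulator's algorithm:

```python
import numpy as np

# Minimal 1D neural field with distance-dependent axonal delay |x-x'|/c,
# implemented with a ring buffer of past firing rates. Parameters are
# illustrative and unrelated to the simulator's defaults.
n, dx, dt, c = 100, 0.2, 0.05, 2.0
xs = np.arange(n) * dx
d = np.abs(xs[:, None] - xs[None, :])
delay = np.round(d / c / dt).astype(int)   # delay (in steps) per pair
M = delay.max() + 1                        # ring-buffer length
w = np.exp(-d) * dx                        # excitatory kernel (weights)

def f(u):
    return 1.0 / (1.0 + np.exp(-5.0 * (u - 0.5)))

u = 0.6 * np.exp(-(xs - xs.mean())**2)     # initial localized bump
hist = np.tile(f(u), (M, 1))               # constant past as history
cols = np.arange(n)[None, :]
for t in range(500):
    hist[t % M] = f(u)
    # delayed[i, j] = f(u(x_j, t - |x_i - x_j| / c))
    delayed = hist[(t - delay) % M, cols]
    u = u + dt * (-u + (w * delayed).sum(axis=1))
```

With a purely excitatory kernel the bump ignites a front that spreads outward at a speed limited by c, the kind of finite-speed activity spreading the abstract demonstrates.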
Modeling human target reaching with an adaptive observer implemented with dynamic neural fields.
Fard, Farzaneh S; Hollensen, Paul; Heinke, Dietmar; Trappenberg, Thomas P
2015-12-01
Humans can point fairly accurately to memorized states when closing their eyes despite slow or even missing sensory feedback. Arm dynamics also commonly change during development or as a result of injury. We propose a biologically motivated implementation of an arm controller that includes an adaptive observer. Our implementation is based on the neural field framework, and we show how a path integration mechanism can be trained from few examples. Our results illustrate successful generalization of path integration with a dynamic neural field by which the robotic arm can move in arbitrary directions and velocities. Also, by adapting the strength of the motor effect, the observer implicitly learns to compensate for an image acquisition delay in the sensory system. Our dynamic implementation of an observer successfully guides the arm toward the target in the dark, and the model produces movements with a bell-shaped velocity profile, consistent with human behavior data.
The Dynamic Brain: From Spiking Neurons to Neural Masses and Cortical Fields
Deco, Gustavo; Jirsa, Viktor K.; Robinson, Peter A.; Breakspear, Michael; Friston, Karl
2008-01-01
The cortex is a complex system, characterized by its dynamics and architecture, which underlie many functions such as action, perception, learning, language, and cognition. Its structural architecture has been studied for more than a hundred years; however, its dynamics have been addressed much less thoroughly. In this paper, we review and integrate, in a unifying framework, a variety of computational approaches that have been used to characterize the dynamics of the cortex, as evidenced at different levels of measurement. Computational models at different space-time scales help us understand the fundamental mechanisms that underpin neural processes and relate these processes to neuroscience data. Modeling at the single neuron level is necessary because this is the level at which information is exchanged between the computing elements of the brain; the neurons. Mesoscopic models tell us how neural elements interact to yield emergent behavior at the level of microcolumns and cortical columns. Macroscopic models can inform us about whole brain dynamics and interactions between large-scale neural systems such as cortical regions, the thalamus, and brain stem. Each level of description relates uniquely to neuroscience data, from single-unit recordings, through local field potentials to functional magnetic resonance imaging (fMRI), electroencephalogram (EEG), and magnetoencephalogram (MEG). Models of the cortex can establish which types of large-scale neuronal networks can perform computations and characterize their emergent properties. Mean-field and related formulations of dynamics also play an essential and complementary role as forward models that can be inverted given empirical data. This makes dynamic models critical in integrating theory and experiments. We argue that elaborating principled and informed models is a prerequisite for grounding empirical neuroscience in a cogent theoretical framework, commensurate with the achievements in the physical sciences.
Markounikau, Valentin; Igel, Christian; Grinvald, Amiram; Jancke, Dirk
2010-09-09
A neural field model is presented that captures the essential non-linear characteristics of activity dynamics across several millimeters of visual cortex in response to local flashed and moving stimuli. We account for physiological data obtained by voltage-sensitive dye (VSD) imaging, which reports mesoscopic population activity at high spatio-temporal resolution. Stimulation included a single flashed square, a single flashed bar, the line-motion paradigm (for which psychophysical studies showed that flashing a square briefly before a bar produces a sensation of illusory motion within the bar), and moving-square controls. We consider a two-layer neural field (NF) model describing an excitatory and an inhibitory layer of neurons as a coupled system of non-linear integro-differential equations. Under the assumption that the aggregated activity of both layers is reflected by VSD imaging, our phenomenological model quantitatively accounts for the observed spatio-temporal activity patterns. Moreover, the model generalizes to novel similar stimuli, as it matches activity evoked by moving squares of different speeds. Our results indicate that feedback from higher brain areas is not required to produce motion patterns in the case of the illusory line-motion paradigm. Physiological interpretation of the model suggests that a considerable fraction of the VSD signal may be due to inhibitory activity, supporting the notion that balanced intra-layer cortical interactions between inhibitory and excitatory populations play a major role in shaping dynamic stimulus representations in the early visual cortex.
Bicho, Estela; Louro, Luís; Erlhagen, Wolfram
2010-01-01
How do humans coordinate their intentions, goals and motor behaviors when performing joint action tasks? Recent experimental evidence suggests that resonance processes in the observer's motor system are crucially involved in our ability to understand actions of others, to infer their goals and even to comprehend their action-related language. In this paper, we present a control architecture for human-robot collaboration that exploits this close perception-action linkage as a means to achieve more natural and efficient communication grounded in sensorimotor experiences. The architecture is formalized by a coupled system of dynamic neural fields representing a distributed network of neural populations that encode in their activation patterns goals, actions and shared task knowledge. We validate the verbal and nonverbal communication skills of the robot in a joint assembly task in which the human-robot team has to construct toy objects from their components. The experiments focus on the robot's capacity to anticipate the user's needs and to detect and communicate unexpected events that may occur during joint task execution. PMID:20725504
Mean-field theory of globally coupled integrate-and-fire neural oscillators with dynamic synapses
NASA Astrophysics Data System (ADS)
Bressloff, P. C.
1999-08-01
We analyze the effects of synaptic depression or facilitation on the existence and stability of the splay or asynchronous state in a population of all-to-all, pulse-coupled neural oscillators. We use mean-field techniques to derive conditions for the local stability of the splay state and determine how stability depends on the degree of synaptic depression or facilitation. We also consider the effects of noise. Extensions of the mean-field results to finite networks are developed in terms of the nonlinear firing time map.
Stochastic mean-field formulation of the dynamics of diluted neural networks
NASA Astrophysics Data System (ADS)
Angulo-Garcia, D.; Torcini, A.
2015-02-01
We consider pulse-coupled leaky integrate-and-fire neural networks with randomly distributed synaptic couplings. This random dilution induces fluctuations in the evolution of the macroscopic variables and deterministic chaos at the microscopic level. Our main aim is to mimic the effect of the dilution as a noise source acting on the dynamics of a globally coupled nonchaotic system. Indeed, the evolution of a diluted neural network can be well approximated as a fully pulse-coupled network, where each neuron is driven by a mean synaptic current plus additive noise. These terms represent the average and the fluctuations of the synaptic currents acting on the single neurons in the diluted system. The main microscopic and macroscopic dynamical features can be retrieved with this stochastic approximation. Furthermore, the microscopic stability of the diluted network can also be reproduced, as demonstrated by the almost coincidence of the measured Lyapunov exponents in the deterministic and stochastic cases for a wide range of system sizes. Our results strongly suggest that the fluctuations in the synaptic currents are responsible for the emergence of chaos in this class of pulse-coupled networks.
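The core statistical idea, replacing random dilution by its mean drive plus Gaussian fluctuations, can be checked in a static setting before any spiking dynamics are involved. The rates and efficacies below are illustrative, and the comparison is of input-current statistics only, not the full LIF dynamics of the paper:

```python
import numpy as np

# Sketch of the dilution-as-noise idea: the synaptic input each neuron
# receives in a randomly diluted network is approximated by its mean
# plus Gaussian fluctuations. Rates and efficacies are illustrative.
rng = np.random.default_rng(2)
N, p = 400, 0.4                         # neurons, connection probability
J = 0.1                                 # synaptic efficacy
A = (rng.random((N, N)) < p).astype(float)
np.fill_diagonal(A, 0)                  # no autapses
rates = rng.uniform(5.0, 15.0, size=N)  # presynaptic rates (illustrative)

# Exact per-neuron input current in the diluted network
I_diluted = J * (A @ rates)

# Stochastic mean-field surrogate: mean drive plus additive noise with
# the variance induced by the random dilution
mu = J * p * rates.sum()
var = J**2 * p * (1 - p) * (rates**2).sum()
I_meanfield = mu + np.sqrt(var) * rng.standard_normal(N)
```

The two current ensembles share mean and variance, which is the sense in which a diluted network can be traded for a fully coupled one driven by additive noise.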
Dynamics of neural cryptography.
Ruttor, Andreas; Kinzel, Wolfgang; Kanter, Ido
2007-05-01
Synchronization of neural networks has been used for public channel protocols in cryptography. In the case of tree parity machines the dynamics of both bidirectional synchronization and unidirectional learning is driven by attractive and repulsive stochastic forces. Thus it can be described well by a random walk model for the overlap between participating neural networks. For that purpose transition probabilities and scaling laws for the step sizes are derived analytically. Both these calculations as well as numerical simulations show that bidirectional interaction leads to full synchronization on average. In contrast, successful learning is only possible by means of fluctuations. Consequently, synchronization is much faster than learning, which is essential for the security of the neural key-exchange protocol. However, this qualitative difference between bidirectional and unidirectional interaction vanishes if tree parity machines with more than three hidden units are used, so that those neural networks are not suitable for neural cryptography. In addition, the effective number of keys which can be generated by the neural key-exchange protocol is calculated using the entropy of the weight distribution. As this quantity increases exponentially with the system size, brute-force attacks on neural cryptography can easily be made unfeasible.
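The bidirectional synchronization described here can be demonstrated with a minimal tree parity machine pair; the Hebbian update on agreement is the standard key-exchange rule, while K, N, L below are small demo values rather than cryptographically meaningful parameters:

```python
import numpy as np

# Minimal tree parity machine (TPM) pair synchronizing over a public
# channel, as in the neural key-exchange protocol. K, N, L are small
# demo values; security analyses use larger parameters.
rng = np.random.default_rng(0)
K, N, L = 3, 10, 3          # hidden units, inputs per unit, weight bound

def tpm_output(W, x):
    sigma = np.sign((W * x).sum(axis=1))
    sigma[sigma == 0] = -1  # break ties toward -1
    return sigma, int(np.prod(sigma))

def hebbian_update(W, x, sigma, tau):
    # Adjust only hidden units whose output agrees with the total output
    for k in range(K):
        if sigma[k] == tau:
            W[k] = np.clip(W[k] + tau * x[k], -L, L)

A = rng.integers(-L, L + 1, size=(K, N))
B = rng.integers(-L, L + 1, size=(K, N))
steps = 0
while not np.array_equal(A, B) and steps < 200000:
    x = rng.choice([-1, 1], size=(K, N))  # public random inputs
    sA, tA = tpm_output(A, x)
    sB, tB = tpm_output(B, x)
    if tA == tB:            # bidirectional update only on agreement
        hebbian_update(A, x, sA, tA)
        hebbian_update(B, x, sB, tB)
    steps += 1
```

Because both machines move toward each other on every mutual agreement, synchronization (identical weight matrices, the shared key) is reached far faster than a unidirectional attacker can learn, which is the asymmetry the abstract analyzes.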
Dynamics of neural cryptography
Ruttor, Andreas; Kinzel, Wolfgang; Kanter, Ido
2007-05-15
Synchronization of neural networks has been used for public channel protocols in cryptography. In the case of tree parity machines the dynamics of both bidirectional synchronization and unidirectional learning is driven by attractive and repulsive stochastic forces. Thus it can be described well by a random walk model for the overlap between participating neural networks. For that purpose transition probabilities and scaling laws for the step sizes are derived analytically. Both these calculations as well as numerical simulations show that bidirectional interaction leads to full synchronization on average. In contrast, successful learning is only possible by means of fluctuations. Consequently, synchronization is much faster than learning, which is essential for the security of the neural key-exchange protocol. However, this qualitative difference between bidirectional and unidirectional interaction vanishes if tree parity machines with more than three hidden units are used, so that those neural networks are not suitable for neural cryptography. In addition, the effective number of keys which can be generated by the neural key-exchange protocol is calculated using the entropy of the weight distribution. As this quantity increases exponentially with the system size, brute-force attacks on neural cryptography can easily be made unfeasible.
Learning to recognize objects on the fly: a neurally based dynamic field approach.
Faubel, Christian; Schöner, Gregor
2008-05-01
Autonomous robots interacting with human users need to build and continuously update scene representations. This entails the problem of rapidly learning to recognize new objects under user guidance. Based on analogies with human visual working memory, we propose a dynamical field architecture, in which localized peaks of activation represent objects over a small number of simple feature dimensions. Learning consists of laying down memory traces of such peaks. We implement the dynamical field model on a service robot and demonstrate how it learns 30 objects from a very small number of views (about 5 per object are sufficient). We also illustrate how properties of feature binding emerge from this framework.
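The core mechanism described here (a localized activation peak laying down a memory trace) can be sketched with a one-dimensional Amari-style field. All parameters, the kernel shape, and the single-stimulus setup below are illustrative assumptions, not the robot architecture of the paper:

```python
import numpy as np

n, dt = 101, 1.0
tau_u, tau_m, h = 10.0, 500.0, -5.0      # field and trace time scales; resting level
xs = np.arange(n)
d = np.abs(xs[:, None] - xs[None, :])
# Lateral interaction: local excitation with broader surround inhibition
kernel = 2.0 * np.exp(-d**2 / (2 * 4.0**2)) - 1.0 * np.exp(-d**2 / (2 * 12.0**2))

def f(u):                                # sigmoidal firing-rate function
    return 1.0 / (1.0 + np.exp(-u))

u = np.full(n, h)                        # field activation
m = np.zeros(n)                          # memory trace
stim = 8.0 * np.exp(-(xs - 50)**2 / (2 * 3.0**2))  # one localized "view"

for _ in range(500):
    u += dt / tau_u * (-u + h + kernel @ f(u) + stim + m)
    m += dt / tau_m * (f(u) - m)         # trace builds where the field is active

print("peak location:", int(np.argmax(u)))
print("trace at peak vs edge:", round(m[50], 3), round(m[0], 3))
```

A peak forms at the stimulated feature value, and the slow trace accumulates there; on later presentations the trace biases the field, which is the sense in which "learning consists of laying down memory traces of such peaks."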
Perone, Sammy; Spencer, John P.
2013-01-01
Looking is a fundamental exploratory behavior by which infants acquire knowledge about the world. In theories of infant habituation, however, looking as an exploratory behavior has been deemphasized relative to the reliable nature with which looking indexes active cognitive processing. We present a new theory that connects looking to the dynamics of memory formation and formally implement this theory in a Dynamic Neural Field model that learns autonomously as it actively looks and looks away from a stimulus. We situate this model in a habituation task and illustrate the mechanisms by which looking, encoding, working memory formation, and long-term memory formation give rise to habituation across multiple stimulus and task contexts. We also illustrate how the act of looking and the temporal dynamics of learning affect each other. Finally, we test a new hypothesis about the sources of developmental differences in looking. PMID:23136815
Dynamical Mean-Field Equations for a Neural Network with Spike Timing Dependent Plasticity
NASA Astrophysics Data System (ADS)
Mayer, Jörg; Ngo, Hong-Viet V.; Schuster, Heinz Georg
2012-09-01
We study the discrete dynamics of a fully connected network of threshold elements interacting via dynamically evolving synapses displaying spike timing dependent plasticity. Dynamical mean-field equations, which become exact in the thermodynamic limit, are derived to study the behavior of the system driven with uncorrelated and correlated Gaussian noise input. We use correlated noise to verify that our model accounts for the fact that correlated noise provides a stronger drive for synaptic modification. Further, we find that stochastically independent input leads to a noise-dependent transition to the coherent state where all neurons fire together; notably, there exists an optimal noise level for the enhancement of synaptic potentiation in our model.
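The claim that correlated input drives stronger synaptic modification can be checked with a toy pair-based additive STDP rule on binned spike trains. The rates, window, and LTP/LTD amplitudes below are assumptions for the sketch, not the paper's mean-field model:

```python
import numpy as np

rng = np.random.default_rng(5)
T = 20000                                  # time bins

def stdp_drive(pre, post, a_plus=1.0, a_minus=0.6, window=5):
    """Total weight change from all pre/post spike pairs within the window."""
    dw = 0.0
    for lag in range(1, window + 1):
        dw += a_plus * np.sum(pre[:-lag] * post[lag:])   # pre leads post: LTP
        dw -= a_minus * np.sum(post[:-lag] * pre[lag:])  # post leads pre: LTD
    return dw

def spikes(p):
    return rng.random(T) < p

# Uncorrelated input: independent trains at matched rates
dw_uncorr = stdp_drive(spikes(0.06), spikes(0.06))

# Correlated input: a shared source drives pre, and post one bin later
shared = spikes(0.04)
pre = shared | spikes(0.02)
post = np.roll(shared, 1) | spikes(0.02)
dw_corr = stdp_drive(pre, post)

print("uncorrelated drive:", dw_uncorr, " correlated drive:", dw_corr)
```

The shared source produces an excess of pre-before-post coincidences at short lags, so the correlated condition yields a much larger net potentiation at identical firing rates.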
Dynamic interactions in neural networks
Arbib, M.A.; Amari, S.
1989-01-01
The study of neural networks is enjoying a great renaissance, both in computational neuroscience, the development of information processing models of living brains, and in neural computing, the use of neurally inspired concepts in the construction of intelligent machines. This volume presents models and data on the dynamic interactions occurring in the brain, and exhibits the dynamic interactions between research in computational neuroscience and in neural computing. The authors present current research, future trends and open problems.
A Dynamic Neural Field Model of Visual Working Memory and Change Detection
Johnson, Jeffrey S.; Spencer, John P.; Luck, Steven J.; Schöner, Gregor
2009-01-01
Efficient visually guided behavior depends on the ability to form, retain, and compare visual representations for objects that may be separated in space and time. This ability relies on a short-term form of memory known as visual working memory. Although a considerable body of research has begun to shed light on the neurocognitive systems subserving this form of memory, few theories have addressed these processes in an integrated, neurally plausible framework. We describe a layered neural architecture that implements encoding and maintenance, and links these processes to a plausible comparison process. In addition, the model makes the novel prediction that change detection will be enhanced when metrically similar features are remembered. Results from experiments probing memory for color and for orientation were consistent with this novel prediction. These findings place strong constraints on models addressing the nature of visual working memory and its underlying mechanisms. PMID:19368698
Local Dynamics in Trained Recurrent Neural Networks
NASA Astrophysics Data System (ADS)
Rivkind, Alexander; Barak, Omri
2017-06-01
Learning a task induces connectivity changes in neural circuits, thereby changing their dynamics. To elucidate task-related neural dynamics, we study trained recurrent neural networks. We develop a mean field theory for reservoir computing networks trained to have multiple fixed point attractors. Our main result is that the dynamics of the network's output in the vicinity of attractors is governed by a low-order linear ordinary differential equation. The stability of the resulting equation can be assessed, predicting training success or failure. As a consequence, networks of rectified linear units and of sigmoidal nonlinearities are shown to have diametrically different properties when it comes to learning attractors. Furthermore, a characteristic time constant, which remains finite at the edge of chaos, offers an explanation of the network's output robustness in the presence of variability of the internal neural dynamics. Finally, the proposed theory predicts state-dependent frequency selectivity in the network response.
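The stability assessment mentioned here can be illustrated, in a much simpler setting than the paper's mean-field theory, by linearizing a random rate network around a fixed point and inspecting the Jacobian spectrum. The network form dx/dt = -x + J tanh(x), the size, and the gain are assumed for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
N, g = 200, 0.8                       # network size and coupling gain (assumed)
J = g * rng.standard_normal((N, N)) / np.sqrt(N)

# For dx/dt = -x + J tanh(x), x* = 0 is a fixed point; since tanh'(0) = 1,
# the Jacobian there is -I + J, so stability is set by the spectrum of J.
jac = -np.eye(N) + J
lead = np.max(np.linalg.eigvals(jac).real)

print("leading eigenvalue real part:", lead)
print("fixed point stable:", lead < 0)
```

With g below 1 the spectral radius of J stays inside the unit disk, the leading eigenvalue of the Jacobian has negative real part, and the fixed point is locally stable; the distance of that eigenvalue from zero sets the characteristic relaxation time constant.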
Veit, Julia; Bhattacharyya, Anwesha; Kretz, Robert; Rainer, Gregor
2011-11-01
Entrainment of neural activity to luminance impulses during the refresh of cathode ray tube monitor displays has been observed in the primary visual cortex (V1) of humans and macaque monkeys. This entrainment is of interest because it tends to temporally align and thus synchronize neural responses at the millisecond timescale. Here we show that, in tree shrew V1, both spiking and local field potential activity are also entrained at cathode ray tube refresh rates of 120, 90, and 60 Hz, with weakest but still significant entrainment even at 120 Hz, and strongest entrainment occurring in cortical input layer IV. For both luminance increments ("white" stimuli) and decrements ("black" stimuli), refresh rate had a strong impact on the temporal dynamics of the neural response for subsequent luminance impulses. Whereas there was rapid, strong attenuation of spikes and local field potential to prolonged visual stimuli composed of luminance impulses presented at 120 Hz, attenuation was nearly absent at 60-Hz refresh rate. In addition, neural onset latencies were shortest at 120 Hz and substantially increased, by ∼15 ms, at 60 Hz. In terms of neural response amplitude, black responses dominated white responses at all three refresh rates. However, black/white differences were much larger at 60 Hz than at higher refresh rates, suggesting a mechanism that is sensitive to stimulus timing. Taken together, our findings reveal many similarities between V1 of macaque and tree shrew, while underscoring a greater temporal sensitivity of the tree shrew visual system.
Francis, Joseph T; Chapin, John K
2006-06-01
In everyday life, we reach, grasp, and manipulate a variety of different objects all with their own dynamic properties. This degree of adaptability is essential for a brain-controlled prosthetic arm to work in the real world. In this study, rats were trained to make reaching movements while holding a torque manipulandum working against two distinct loads. Neural recordings obtained from arrays of 32 microelectrodes spanning the motor cortex were used to predict several movement related variables. In this paper, we demonstrate that a simple linear regression model can translate neural activity into endpoint position of a robotic manipulandum even while the animal controlling it works against different loads. A second regression model can predict, with 100% accuracy, which of the two loads is being manipulated by the animal. Finally, a third model predicts the work needed to move the manipulandum endpoint. This prediction is significantly better than that for position. In each case, the regression model uses a single set of weights. Thus, the neural ensemble is capable of providing the information necessary to compensate for at least two distinct load conditions.
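A single-weight-set linear decoder of the kind described here is easy to sketch on synthetic data. The simulated tuning model, unit count, and noise level below are assumptions; this is not the authors' recorded data or exact model:

```python
import numpy as np

rng = np.random.default_rng(2)
T, n_units = 500, 32                          # samples and "electrodes" (assumed)
pos = np.cumsum(rng.standard_normal(T)) * 0.1 # synthetic 1-D endpoint trajectory
tuning = rng.standard_normal(n_units)         # each unit linearly tuned to position
rates = np.outer(pos, tuning) + 0.5 * rng.standard_normal((T, n_units))

X = np.column_stack([rates, np.ones(T)])      # firing rates plus an intercept
w, *_ = np.linalg.lstsq(X, pos, rcond=None)   # one fixed set of regression weights
pred = X @ w

r = np.corrcoef(pred, pos)[0, 1]
print(f"decoding correlation: {r:.3f}")
```

The point mirrored from the abstract is that a single static set of weights suffices to translate population activity into endpoint position, provided the mapping from the decoded variable to firing rates is approximately linear across conditions.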
Creative-Dynamics Approach To Neural Intelligence
NASA Technical Reports Server (NTRS)
Zak, Michail A.
1992-01-01
Paper discusses approach to mathematical modeling of artificial neural networks exhibiting complicated behaviors reminiscent of creativity and intelligence of biological neural networks. Neural network treated as non-Lipschitzian dynamical system - as described in "Non-Lipschitzian Dynamics For Modeling Neural Networks" (NPO-17814). System serves as tool for modeling of temporal-pattern memories and recognition of complicated spatial patterns.
Dynamical systems, attractors, and neural circuits.
Miller, Paul
2016-01-01
Biology is the study of dynamical systems. Yet most of us working in biology have limited pedagogical training in the theory of dynamical systems, an unfortunate historical fact that can be remedied for future generations of life scientists. In my particular field of systems neuroscience, neural circuits are rife with nonlinearities at all levels of description, rendering simple methodologies and our own intuition unreliable. Therefore, our ideas are likely to be wrong unless informed by good models. These models should be based on the mathematical theories of dynamical systems since functioning neurons are dynamic: they change their membrane potential and firing rates with time. Thus, selecting the appropriate type of dynamical system upon which to base a model is an important first step in the modeling process. This step all too easily goes awry, in part because there are many frameworks to choose from, in part because the sparsely sampled data can be consistent with a variety of dynamical processes, and in part because each modeler has a preferred modeling approach that is difficult to move away from. This brief review summarizes some of the main dynamical paradigms that can arise in neural circuits, with comments on what they can achieve computationally and what signatures might reveal their presence within empirical data. I provide examples of different dynamical systems using simple circuits of two or three cells, emphasizing that any one connectivity pattern is compatible with multiple, diverse functions.
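The two-cell circuits mentioned at the end of this abstract are easy to make concrete. A minimal sketch (assumed rate-model parameters, not an example from the review itself): two rate units with mutual inhibition form a bistable, winner-take-all system, so the same connectivity supports two distinct attractors depending on the initial condition:

```python
import numpy as np

def simulate(r0, steps=2000, dt=0.01):
    """Two rate units with mutual inhibition: dr/dt = -r + relu(I - w * r_other)."""
    w, I = 2.0, 1.0                    # inhibition strength and common drive (assumed)
    r = np.array(r0, dtype=float)
    for _ in range(steps):
        drive = I - w * r[::-1]        # each unit is inhibited by the other
        r += dt * (-r + np.maximum(drive, 0.0))
    return r

a = simulate([0.9, 0.1])               # unit 0 starts ahead and wins
b = simulate([0.1, 0.9])               # unit 1 starts ahead and wins
print("attractor A:", a, " attractor B:", b)
```

Both runs use identical connectivity; only the initial condition differs, yet the circuit settles into opposite states. This is the smallest instance of the review's point that one connectivity pattern is compatible with multiple functions.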
On conductance-based neural field models
Pinotsis, Dimitris A.; Leite, Marco; Friston, Karl J.
2013-01-01
This technical note introduces a conductance-based neural field model that combines biologically realistic synaptic dynamics—based on transmembrane currents—with neural field equations, describing the propagation of spikes over the cortical surface. This model allows for fairly realistic inter- and intra-laminar intrinsic connections that underlie spatiotemporal neuronal dynamics. We focus on the response functions of expected neuronal states (such as depolarization) that generate observed electrophysiological signals (like LFP recordings and EEG). These response functions characterize the model's transfer functions and implicit spectral responses to (uncorrelated) input. Our main finding is that both the evoked responses (impulse response functions) and induced responses (transfer functions) show qualitative differences depending upon whether one uses a neural mass or field model. Furthermore, there are differences between the equivalent convolution and conductance models. Overall, all models reproduce a characteristic increase in frequency when inhibition was increased by increasing the rate constants of inhibitory populations. However, convolution and conductance-based models showed qualitatively different changes in power, with convolution models showing decreases with increasing inhibition, while conductance models show the opposite effect. These differences suggest that conductance-based field models may be important in empirical studies of cortical gain control or pharmacological manipulations. PMID:24273508
Spatiotemporal canards in neural field equations
NASA Astrophysics Data System (ADS)
Avitabile, D.; Desroches, M.; Knobloch, E.
2017-04-01
Canards are special solutions to ordinary differential equations that follow invariant repelling slow manifolds for long time intervals. In realistic biophysical single-cell models, canards are responsible for several complex neural rhythms observed experimentally, but their existence and role in spatially extended systems is largely unexplored. We identify and describe a type of coherent structure in which a spatial pattern displays temporal canard behavior. Using interfacial dynamics and geometric singular perturbation theory, we classify spatiotemporal canards and give conditions for the existence of folded-saddle and folded-node canards. We find that spatiotemporal canards are robust to changes in the synaptic connectivity and firing rate. The theory correctly predicts the existence of spatiotemporal canards with octahedral symmetry in a neural field model posed on the unit sphere.
Neural fields, spectral responses and lateral connections
Pinotsis, D.A.; Friston, K.J.
2011-01-01
This paper describes a neural field model for local (mesoscopic) dynamics on the cortical surface. Our focus is on sparse intrinsic connections that are characteristic of real cortical microcircuits. This sparsity is modelled with radial connectivity functions or kernels with non-central peaks. The ensuing analysis allows one to generate or predict spectral responses to known exogenous input or random fluctuations. Here, we characterise the effect of different connectivity architectures (the range, dispersion and propagation speed of intrinsic or lateral connections) and synaptic gains on spatiotemporal dynamics. Specifically, we look at spectral responses to random fluctuations and examine the ability of synaptic gain and connectivity parameters to induce Turing instabilities. We find that although the spatial deployment and speed of lateral connections can have a profound effect on the behaviour of spatial modes over different scales, only synaptic gain is capable of producing phase-transitions. We discuss the implications of these findings for the use of neural fields as generative models in dynamic causal modeling (DCM). PMID:21138771
Neural dynamics based on the recognition of neural fingerprints
Carrillo-Medina, José Luis; Latorre, Roberto
2015-01-01
Experimental evidence has revealed the existence of characteristic spiking features in different neural signals, e.g., individual neural signatures identifying the emitter or functional signatures characterizing specific tasks. These neural fingerprints may play a critical role in neural information processing, since they allow receptors to discriminate or contextualize incoming stimuli. This could be a powerful strategy for neural systems that greatly enhances the encoding and processing capacity of these networks. Nevertheless, the study of information processing based on the identification of specific neural fingerprints has attracted little attention. In this work, we study (i) the emerging collective dynamics of a network of neurons that communicate with each other by exchange of neural fingerprints and (ii) the influence of the network topology on the self-organizing properties within the network. Complex collective dynamics emerge in the network in the presence of stimuli. Predefined inputs, i.e., specific neural fingerprints, are detected and encoded into coexisting patterns of activity that propagate throughout the network with different spatial organization. The patterns evoked by a stimulus can survive after the stimulation is over, which provides memory mechanisms to the network. The results presented in this paper suggest that neural information processing based on neural fingerprints can be a plausible, flexible, and powerful strategy. PMID:25852531
Model Of Neural Network With Creative Dynamics
NASA Technical Reports Server (NTRS)
Zak, Michail; Barhen, Jacob
1993-01-01
Paper presents analysis of mathematical model of one-neuron/one-synapse neural network featuring coupled activation and learning dynamics and parametrical periodic excitation. Demonstrates self-programming, partly random behavior of suitably designed neural network; believed to be related to spontaneity and creativity of biological neural networks.
Non-Lipschitzian neural dynamics
NASA Technical Reports Server (NTRS)
Barhen, Jacob; Zak, Michail; Toomarian, Nikzad
1990-01-01
A novel approach is presented which is motivated by an attempt to remove one of the most fundamental limitations of artificial neural networks: their rigid behavior as compared with even the simplest biological systems. It is demonstrated that non-Lipschitzian dynamics, based on the failure of the Lipschitz conditions at repellers, displays a new qualitative effect, i.e., a multichoice response to periodic external excitations. This makes it possible to construct unpredictable systems, represented in the form of coupled activation and learning dynamical equations. It is shown that unpredictable systems can be controlled by sign strings which uniquely define the system behavior by specifying the direction of the motions at the critical points. Unpredictable systems driven by sign strings are extremely flexible and can serve as a powerful tool for complex pattern recognition.
Coupling layers regularizes wave propagation in stochastic neural fields
NASA Astrophysics Data System (ADS)
Kilpatrick, Zachary P.
2014-02-01
We explore how layered architectures influence the dynamics of stochastic neural field models. Our main focus is how the propagation of waves of neural activity in each layer is affected by interlaminar coupling. Synaptic connectivities within and between each layer are determined by integral kernels of an integrodifferential equation describing the temporal evolution of neural activity. Excitatory neural fields, with purely positive connectivities, support traveling fronts in each layer, whose speeds are increased when coupling between layers is considered. Studying the effects of noise, we find coupling reduces the variance in the position of traveling fronts, as long as the noise sources to each layer are not completely correlated. Neural fields with asymmetric connectivity support traveling pulses whose speeds are decreased by interlaminar coupling. Again, coupling reduces the variance in traveling pulse position. Asymptotic analysis is performed using a small-noise expansion, assuming interlaminar connectivity scales similarly.
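The variance-reduction effect described here can be caricatured by reducing each layer's wave to a single front position: a drifting random walk with independent noise, with interlaminar coupling pulling the two positions together. The drift, noise, and coupling values are assumptions, and this replaces the paper's small-noise asymptotics with a direct simulation:

```python
import numpy as np

rng = np.random.default_rng(3)
trials, steps, dt = 2000, 2000, 0.01
c, sigma = 1.0, 1.0                    # front drift speed and noise strength (assumed)

def front_variance(coupling):
    """Variance of the layer-1 front position under interlaminar coupling."""
    x = np.zeros((trials, 2))          # front position in each of two layers
    for _ in range(steps):
        noise = sigma * np.sqrt(dt) * rng.standard_normal((trials, 2))
        pull = coupling * (x[:, ::-1] - x)   # each layer pulled toward the other
        x += c * dt + pull * dt + noise
    return x[:, 0].var()

v_uncoupled = front_variance(0.0)
v_coupled = front_variance(1.0)
print(f"front position variance: uncoupled {v_uncoupled:.2f}, coupled {v_coupled:.2f}")
```

Because the two noise sources are independent, coupling lets the layers average out each other's fluctuations, and the positional variance drops toward half the uncoupled value, consistent with the qualitative claim in the abstract.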
Propagating waves can explain irregular neural dynamics.
Keane, Adam; Gong, Pulin
2015-01-28
Cortical neurons in vivo fire quite irregularly. Previous studies about the origin of such irregular neural dynamics have given rise to two major models: a balanced excitation and inhibition model, and a model of highly synchronized synaptic inputs. To elucidate the network mechanisms underlying synchronized synaptic inputs and account for irregular neural dynamics, we investigate a spatially extended, conductance-based spiking neural network model. We show that propagating wave patterns with complex dynamics emerge from the network model. These waves sweep past neurons, to which they provide highly synchronized synaptic inputs. On the other hand, these patterns only emerge from the network with balanced excitation and inhibition; our model therefore reconciles the two major models of irregular neural dynamics. We further demonstrate that the collective dynamics of propagating wave patterns provides a mechanistic explanation for a range of irregular neural dynamics, including the variability of spike timing, slow firing rate fluctuations, and correlated membrane potential fluctuations. In addition, in our model, the distributions of synaptic conductance and membrane potential are non-Gaussian, consistent with recent experimental data obtained using whole-cell recordings. Our work therefore relates the propagating waves that have been widely observed in the brain to irregular neural dynamics. These results demonstrate that neural firing activity, although appearing highly disordered at the single-neuron level, can form dynamical coherent structures, such as propagating waves at the population level.
Dynamic Alignment Models for Neural Coding
Kollmorgen, Sepp; Hahnloser, Richard H. R.
2014-01-01
Recently, there have been remarkable advances in modeling the relationships between the sensory environment, neuronal responses, and behavior. However, most models cannot encompass variable stimulus-response relationships such as varying response latencies and state or context dependence of the neural code. Here, we consider response modeling as a dynamic alignment problem and model stimulus and response jointly by a mixed pair hidden Markov model (MPH). In MPHs, multiple stimulus-response relationships (e.g., receptive fields) are represented by different states or groups of states in a Markov chain. Each stimulus-response relationship features temporal flexibility, allowing modeling of variable response latencies, including noisy ones. We derive algorithms for learning of MPH parameters and for inference of spike response probabilities. We show that some linear-nonlinear Poisson cascade (LNP) models are a special case of MPHs. We demonstrate the efficiency and usefulness of MPHs in simulations of both jittered and switching spike responses to white noise and natural stimuli. Furthermore, we apply MPHs to extracellular single and multi-unit data recorded in cortical brain areas of singing birds to showcase a novel method for estimating response lag distributions. MPHs allow simultaneous estimation of receptive fields, latency statistics, and hidden state dynamics and so can help to uncover complex stimulus response relationships that are subject to variable timing and involve diverse neural codes. PMID:24625448
Dynamic decomposition of spatiotemporal neural signals
2017-01-01
Neural signals are characterized by rich temporal and spatiotemporal dynamics that reflect the organization of cortical networks. Theoretical research has shown how neural networks can operate at different dynamic ranges that correspond to specific types of information processing. Here we present a data analysis framework that uses a linearized model of these dynamic states in order to decompose the measured neural signal into a series of components that capture both rhythmic and non-rhythmic neural activity. The method is based on stochastic differential equations and Gaussian process regression. Through computer simulations and analysis of magnetoencephalographic data, we demonstrate the efficacy of the method in identifying meaningful modulations of oscillatory signals corrupted by structured temporal and spatiotemporal noise. These results suggest that the method is particularly suitable for the analysis and interpretation of complex temporal and spatiotemporal neural signals. PMID:28558039
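The "linearized model of dynamic states" idea can be illustrated with its simplest ingredient: a rhythmic component represented as a stochastically driven, damped oscillator in discrete time. The sample rate, peak frequency, and damping below are assumed values, and the sketch simulates the generative model rather than performing the paper's Gaussian process regression:

```python
import numpy as np

rng = np.random.default_rng(4)
fs, f0, rho, T = 250.0, 10.0, 0.98, 20000   # sample rate, peak freq, damping, samples

# Complex AR(1) rotation: a damped oscillator resonant at f0, driven by noise
phase = np.exp(2j * np.pi * f0 / fs)
z = np.zeros(T, dtype=complex)
for t in range(1, T):
    z[t] = rho * phase * z[t - 1] + (rng.standard_normal() + 1j * rng.standard_normal())
signal = z.real

freqs = np.fft.rfftfreq(T, d=1 / fs)
power = np.abs(np.fft.rfft(signal))**2

def band(lo, hi):
    return power[(freqs >= lo) & (freqs < hi)].mean()

ratio = band(8, 12) / band(35, 45)
print(f"alpha-band to gamma-band power ratio: {ratio:.1f}")
```

The component is linear and Gaussian by construction, which is what makes the decomposition of a measured signal into such rhythmic states tractable with Gaussian process machinery; here the spectral peak near 10 Hz confirms the oscillator behaves as intended.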
The Complexity of Dynamics in Small Neural Circuits.
Fasoli, Diego; Cattani, Anna; Panzeri, Stefano
2016-08-01
Mean-field approximations are a powerful tool for studying large neural networks. However, they do not describe well the behavior of networks composed of a small number of neurons. In this case, major differences between the mean-field approximation and the real behavior of the network can arise. Yet, many interesting problems in neuroscience involve the study of mesoscopic networks composed of a few tens of neurons. Nonetheless, mathematical methods that correctly describe networks of small size are still rare, and this prevents us from making progress in understanding neural dynamics at these intermediate scales. Here we develop a novel systematic analysis of the dynamics of arbitrarily small networks composed of homogeneous populations of excitatory and inhibitory firing-rate neurons. We study the local bifurcations of their neural activity with an approach that is largely analytically tractable, and we numerically determine the global bifurcations. We find that for strong inhibition these networks give rise to very complex dynamics, caused by the formation of multiple branching solutions of the neural dynamics equations that emerge through spontaneous symmetry-breaking. This qualitative change of the neural dynamics is a finite-size effect of the network that reveals qualitative and previously unexplored differences between mesoscopic cortical circuits and their mean-field approximation. The most important consequence of spontaneous symmetry-breaking is the ability of mesoscopic networks to regulate their degree of functional heterogeneity, which is thought to help reduce the detrimental effect of noise correlations on cortical information processing.
Global rhythmic activities in hippocampal neural fields and neural coding.
Ventriglia, Francesco
2006-01-01
Global oscillations of the neural field represent some of the most interesting expressions of hippocampal activity, being related also to learning and memory. To study oscillatory activities of the CA3 field in the theta range, a model of this sub-field of the hippocampus has been formulated. The model describes the firing activity of CA3 neuronal populations within the framework of a kinetic theory of neural systems, and it has been used for computer simulations. The results show that the propagation of activities induced in the neural field by hippocampal afferents occurs only in narrow time windows confined by inhibitory barrages, whose time-course follows the theta rhythm. Moreover, during each period of a theta wave, the entire CA3 field bears a firing activity with peculiar space-time patterns, a sort of specific imprint, which can induce effects with similar patterns on brain regions driven by the hippocampal formation. The simulation has also demonstrated the ability of the medial septum to influence the global activity of the CA3 pyramidal population through its control of the population of inhibitory interneurons. Finally, the possible involvement of global population oscillations in neural coding is discussed.
A Neural Dynamic Model Generates Descriptions of Object-Oriented Actions.
Richter, Mathis; Lins, Jonas; Schöner, Gregor
2017-01-01
Describing actions entails that relations between objects are discovered. A pervasively neural account of this process requires that fundamental problems are solved: the neural pointer problem, the binding problem, and the problem of generating discrete processing steps from time-continuous neural processes. We present a prototypical solution to these problems in a neural dynamic model that comprises dynamic neural fields holding representations close to sensorimotor surfaces as well as dynamic neural nodes holding discrete, language-like representations. Making the connection between these two types of representations enables the model to describe actions as well as to perceptually ground movement phrases-all based on real visual input. We demonstrate how the dynamic neural processes autonomously generate the processing steps required to describe or ground object-oriented actions. By solving the fundamental problems of neural pointing, binding, and emergent discrete processing, the model may be a first but critical step toward a systematic neural processing account of higher cognition.
Neural network with formed dynamics of activity
Dunin-Barkovskii, V.L.; Osovets, N.B.
1995-03-01
The problem of developing a neural network with a given pattern of its state sequence is considered. A neural network structure and an algorithm for forming its bond matrix, which lead to an approximate but robust solution of the problem, are proposed and discussed. Limiting characteristics of the serviceability of the proposed structure are studied. Various methods of visualizing dynamic processes in a neural network are compared. Possible applications of the results obtained for the interpretation of neurophysiological data and in neuroinformatics systems are discussed.
Kubota, Michinori; Miyamoto, Akihiro; Hosokawa, Yutaka; Sugimoto, Shunji; Horikawa, Junsei
2012-05-30
Auditory induction is a continuity illusion in which missing sounds are perceived under appropriate conditions, for example, when noise is inserted during silent gaps in the sound. To elucidate the neural mechanisms underlying auditory induction, neural responses to tones interrupted by a silent gap or noise were examined in the core and belt fields of the auditory cortex using real-time optical imaging with a voltage-sensitive dye. Tone stimuli interrupted by a silent gap elicited responses to the second tone following the gap as well as early phasic responses to the first tone. Tone stimuli interrupted by broad-band noise (BN), considered to cause auditory induction, considerably reduced or eliminated responses to the tone following the noise. This reduction was stronger in the dorsocaudal field (field DC) and belt fields compared with the anterior field (the primary auditory cortex of guinea pig). Tone stimuli interrupted by notched (band-stopped) noise centered at the tone frequency, considered to decrease the strength of auditory induction, partially restored the second responses from the suppression caused by BN. These results suggest that substantial changes between responses to silent gap-inserted tones and those to BN-inserted tones emerged in field DC and belt fields. Moreover, the findings indicate that field DC is the first area in which these changes emerge, suggesting that it may be an important region for auditory induction of simple sounds.
Foley, Nicholas C; Grossberg, Stephen; Mingolla, Ennio
2012-08-01
How are spatial and object attention coordinated to achieve rapid object learning and recognition during eye movement search? How do prefrontal priming and parietal spatial mechanisms interact to determine the reaction time costs of intra-object attention shifts, inter-object attention shifts, and shifts between visible objects and covertly cued locations? What factors underlie individual differences in the timing and frequency of such attentional shifts? How do transient and sustained spatial attentional mechanisms work and interact? How can volition, mediated via the basal ganglia, influence the span of spatial attention? A neural model is developed of how spatial attention in the where cortical stream coordinates view-invariant object category learning in the what cortical stream under free viewing conditions. The model simulates psychological data about the dynamics of covert attention priming and switching requiring multifocal attention without eye movements. The model predicts how "attentional shrouds" are formed when surface representations in cortical area V4 resonate with spatial attention in posterior parietal cortex (PPC) and prefrontal cortex (PFC), while shrouds compete among themselves for dominance. Winning shrouds support invariant object category learning, and active surface-shroud resonances support conscious surface perception and recognition. Attentive competition between multiple objects and cues simulates reaction-time data from the two-object cueing paradigm. The relative strength of sustained surface-driven and fast-transient motion-driven spatial attention controls individual differences in reaction time for invalid cues. Competition between surface-driven attentional shrouds controls individual differences in detection rate of peripheral targets in useful-field-of-view tasks. The model proposes how the strength of competition can be mediated, through learning or momentary changes in volition, by the basal ganglia. A new explanation of
Foley, Nicholas C.; Grossberg, Stephen; Mingolla, Ennio
2015-01-01
How are spatial and object attention coordinated to achieve rapid object learning and recognition during eye movement search? How do prefrontal priming and parietal spatial mechanisms interact to determine the reaction time costs of intra-object attention shifts, inter-object attention shifts, and shifts between visible objects and covertly cued locations? What factors underlie individual differences in the timing and frequency of such attentional shifts? How do transient and sustained spatial attentional mechanisms work and interact? How can volition, mediated via the basal ganglia, influence the span of spatial attention? A neural model is developed of how spatial attention in the where cortical stream coordinates view-invariant object category learning in the what cortical stream under free viewing conditions. The model simulates psychological data about the dynamics of covert attention priming and switching requiring multifocal attention without eye movements. The model predicts how “attentional shrouds” are formed when surface representations in cortical area V4 resonate with spatial attention in posterior parietal cortex (PPC) and prefrontal cortex (PFC), while shrouds compete among themselves for dominance. Winning shrouds support invariant object category learning, and active surface-shroud resonances support conscious surface perception and recognition. Attentive competition between multiple objects and cues simulates reaction-time data from the two-object cueing paradigm. The relative strength of sustained surface-driven and fast-transient motion-driven spatial attention controls individual differences in reaction time for invalid cues. Competition between surface-driven attentional shrouds controls individual differences in detection rate of peripheral targets in useful-field-of-view tasks. The model proposes how the strength of competition can be mediated, through learning or momentary changes in volition, by the basal ganglia. A new explanation of
Dynamic neural controllers for induction motor.
Brdyś, M A; Kulawski, G J
1999-01-01
The paper reports application of recently developed adaptive control techniques based on neural networks to the induction motor control. This case study represents one of the more difficult control problems due to the complex, nonlinear, and time-varying dynamics of the motor and unavailability of full-state measurements. A partial solution is first presented based on a single input-single output (SISO) algorithm employing static multilayer perceptron (MLP) networks. A novel technique is subsequently described which is based on a recurrent neural network employed as a dynamical model of the plant. Recent stability results for this algorithm are reported. The technique is applied to multiinput-multioutput (MIMO) control of the motor. A simulation study of both methods is presented. It is argued that appropriately structured recurrent neural networks can provide conveniently parameterized dynamic models for many nonlinear systems for use in adaptive control.
Dynamics and kinematics of simple neural systems
Rabinovich, M.; Selverston, A.; Rubchinsky, L.; Huerta, R.
1996-09-01
The dynamics of simple neural systems is of interest to both biologists and physicists. One of the possible roles of such systems is the production of rhythmic patterns and their alterations (modification of behavior, processing of sensory information, adaptation, control). In this paper, neural systems are considered as a subject of modeling by the dynamical systems approach. In particular, we analyze how a stable, ordinary behavior of a small neural system can be described by simple finite automata models, and how more complicated dynamical systems modeling can be used. The approach is illustrated by biological and numerical examples: experiments with, and numerical simulations of, the stomatogastric central pattern generator network of the California spiny lobster. © 1996 American Institute of Physics.
On lateral competition in dynamic neural networks
Bellyustin, N.S.
1995-02-01
Artificial neural networks connected homogeneously, which use retinal image processing methods, are considered. We point out that there are probably two different types of lateral inhibition of each neural element by its neighbors: one due to negative connection coefficients between elements, and one due to the decreasing response of a neuron to a too-high input signal. The first case is characterized by stable dynamics, governed by a Lyapunov function, while in the second case stability is absent and two-dimensional dynamic chaos occurs if the time step in the integration of the model equations is large enough. The continuous neural medium approximation is used for analytical estimates in both cases. The result is a partition of the parameter space into domains with qualitatively different dynamic modes. Computer simulations confirm the estimates and show that joining two-dimensional chaos with symmetries provided by the initial and boundary conditions may produce patterns which are genuine pieces of art.
Large deviations for nonlocal stochastic neural fields.
Kuehn, Christian; Riedler, Martin G
2014-04-17
We study the effect of additive noise on integro-differential neural field equations. In particular, we analyze an Amari-type model driven by a Q-Wiener process, and focus on noise-induced transitions and escape. We argue that proving a sharp Kramers' law for neural fields poses substantial difficulties, but that one may transfer techniques from stochastic partial differential equations to establish a large deviation principle (LDP). Then we demonstrate that an efficient finite-dimensional approximation of the stochastic neural field equation can be achieved using a Galerkin method and that the resulting finite-dimensional rate function for the LDP can have a multiscale structure in certain cases. These results form the starting point for an efficient practical computation of the LDP. Our approach also provides the technical basis for further rigorous study of noise-induced transitions in neural fields based on Galerkin approximations. Mathematics Subject Classification (2000): 60F10, 60H15, 65M60, 92C20.
Large Deviations for Nonlocal Stochastic Neural Fields
Kuehn, Christian; Riedler, Martin G.
2014-01-01
We study the effect of additive noise on integro-differential neural field equations. In particular, we analyze an Amari-type model driven by a Q-Wiener process, and focus on noise-induced transitions and escape. We argue that proving a sharp Kramers’ law for neural fields poses substantial difficulties, but that one may transfer techniques from stochastic partial differential equations to establish a large deviation principle (LDP). Then we demonstrate that an efficient finite-dimensional approximation of the stochastic neural field equation can be achieved using a Galerkin method and that the resulting finite-dimensional rate function for the LDP can have a multiscale structure in certain cases. These results form the starting point for an efficient practical computation of the LDP. Our approach also provides the technical basis for further rigorous study of noise-induced transitions in neural fields based on Galerkin approximations. Mathematics Subject Classification (2000): 60F10, 60H15, 65M60, 92C20. PMID:24742297
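The Galerkin strategy summarized in this abstract can be sketched numerically. The following is a hedged illustration, not the authors' implementation: the field is truncated to a few cosine modes on [0, 2π], the connectivity kernel is *assumed* diagonal in that basis with decaying eigenvalues, each mode is driven by independent noise of decaying variance (a truncated Q-Wiener process), and the coefficients are integrated by Euler–Maruyama.

```python
import numpy as np

def simulate_modes(n_modes=8, n_x=128, dt=0.01, steps=500, seed=0):
    """Euler-Maruyama integration of Galerkin mode coefficients (illustrative)."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0, 2 * np.pi, n_x, endpoint=False)
    phi = np.cos(np.outer(np.arange(n_modes), x))   # cosine Galerkin basis
    lam = 1.5 * np.exp(-0.5 * np.arange(n_modes))   # assumed kernel eigenvalues
    q = 0.05 * np.exp(-np.arange(n_modes))          # assumed Q-Wiener mode variances
    a = np.zeros(n_modes)                           # mode coefficients of the field u
    for _ in range(steps):
        u = a @ phi                                 # reconstruct the field on the grid
        f = np.tanh(u)                              # firing-rate nonlinearity
        proj = 2.0 * (phi * f).mean(axis=1)         # L2 projection of f onto each mode
        proj[0] /= 2.0                              # constant mode has twice the norm
        a += dt * (-a + lam * proj) + np.sqrt(q * dt) * rng.normal(size=n_modes)
    return a

coeffs = simulate_modes()
```

A large-deviation computation would then work with the finite-dimensional rate function of exactly this truncated mode system.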
Axonal Velocity Distributions in Neural Field Equations
Bojak, Ingo; Liley, David T. J.
2010-01-01
By modelling the average activity of large neuronal populations, continuum mean field models (MFMs) have become an increasingly important theoretical tool for understanding the emergent activity of cortical tissue. In order to be computationally tractable, long-range propagation of activity in MFMs is often approximated with partial differential equations (PDEs). However, PDE approximations in current use correspond to underlying axonal velocity distributions incompatible with experimental measurements. In order to rectify this deficiency, we here introduce novel propagation PDEs that give rise to smooth unimodal distributions of axonal conduction velocities. We also argue that velocities estimated from fibre diameters in slice and from latency measurements, respectively, relate quite differently to such distributions, a significant point for any phenomenological description. Our PDEs are then successfully fit to fibre diameter data from human corpus callosum and rat subcortical white matter. This allows for the first time to simulate long-range conduction in the mammalian brain with realistic, convenient PDEs. Furthermore, the obtained results suggest that the propagation of activity in rat and human differs significantly beyond mere scaling. The dynamical consequences of our new formulation are investigated in the context of a well known neural field model. On the basis of Turing instability analyses, we conclude that pattern formation is more easily initiated using our more realistic propagator. By increasing characteristic conduction velocities, a smooth transition can occur from self-sustaining bulk oscillations to travelling waves of various wavelengths, which may influence axonal growth during development. Our analytic results are also corroborated numerically using simulations on a large spatial grid. Thus we provide here a comprehensive analysis of empirically constrained activity propagation in the context of MFMs, which will allow more realistic studies
Dynamic Neural Networks Supporting Memory Retrieval
St. Jacques, Peggy L.; Kragel, Philip A.; Rubin, David C.
2011-01-01
How do separate neural networks interact to support complex cognitive processes such as remembrance of the personal past? Autobiographical memory (AM) retrieval recruits a consistent pattern of activation that potentially comprises multiple neural networks. However, it is unclear how such large-scale neural networks interact and are modulated by properties of the memory retrieval process. In the present functional MRI (fMRI) study, we combined independent component analysis (ICA) and dynamic causal modeling (DCM) to understand the neural networks supporting AM retrieval. ICA revealed four task-related components consistent with the previous literature: 1) Medial Prefrontal Cortex (PFC) Network, associated with self-referential processes, 2) Medial Temporal Lobe (MTL) Network, associated with memory, 3) Frontoparietal Network, associated with strategic search, and 4) Cingulooperculum Network, associated with goal maintenance. DCM analysis revealed that the medial PFC network drove activation within the system, consistent with the importance of this network to AM retrieval. Additionally, memory accessibility and recollection uniquely altered connectivity between these neural networks. Recollection modulated the influence of the medial PFC on the MTL network during elaboration, suggesting that greater connectivity among subsystems of the default network supports greater re-experience. In contrast, memory accessibility modulated the influence of frontoparietal and MTL networks on the medial PFC network, suggesting that ease of retrieval involves greater fluency among the multiple networks contributing to AM. These results show the integration between neural networks supporting AM retrieval and the modulation of network connectivity by behavior. PMID:21550407
Nonlinear dynamics of neural delayed feedback
Longtin, A.
1990-01-01
Neural delayed feedback is a property shared by many circuits in the central and peripheral nervous systems. The evolution of the neural activity in these circuits depends on their present state as well as on their past states, due to finite propagation time of neural activity along the feedback loop. These systems are often seen to undergo a change from a quiescent state characterized by low level fluctuations to an oscillatory state. We discuss the problem of analyzing this transition using techniques from nonlinear dynamics and stochastic processes. Our main goal is to characterize the nonlinearities which enable autonomous oscillations to occur and to uncover the properties of the noise sources these circuits interact with. The concepts are illustrated on the human pupil light reflex (PLR) which has been studied both theoretically and experimentally using this approach. 5 refs., 3 figs.
Waves, bumps, and patterns in neural field theories.
Coombes, S
2005-08-01
Neural field models of firing rate activity have had a major impact in helping to develop an understanding of the dynamics seen in brain slice preparations. These models typically take the form of integro-differential equations. Their non-local nature has led to the development of a set of analytical and numerical tools for the study of waves, bumps and patterns, based around natural extensions of those used for local differential equation models. In this paper we present a review of such techniques and show how recent advances have opened the way for future studies of neural fields in both one and two dimensions that can incorporate realistic forms of axo-dendritic interactions and the slow intrinsic currents that underlie bursting behaviour in single neurons.
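The bump solutions discussed above arise already in a minimal Amari-type field, tau*du/dt = -u + w*f(u), with lateral excitation and broader inhibition. The sketch below is a toy discretization on a 1D ring; the kernel, sigmoid, and all parameters are assumptions for illustration, not taken from the review.

```python
import numpy as np

def amari_bump(n=200, length=20.0, dt=0.05, steps=2000):
    """Integrate du/dt = -u + w*f(u) on a 1D ring (tau = 1, explicit Euler)."""
    x = np.linspace(-length / 2, length / 2, n, endpoint=False)
    dx = length / n
    d = np.abs(x[:, None] - x[None, :])
    d = np.minimum(d, length - d)                        # periodic (ring) distance
    w = 2.0 * np.exp(-d**2 / 2.0) - np.exp(-d**2 / 8.0)  # local excitation, wide inhibition
    u = 0.5 * np.exp(-x**2)                              # localized initial activity
    for _ in range(steps):
        f = 1.0 / (1.0 + np.exp(-8.0 * (u - 0.3)))       # sigmoidal firing rate
        u += dt * (-u + (w @ f) * dx)                    # Riemann-sum convolution
    return x, u

x, u = amari_bump()
```

With these parameters the localized perturbation evolves toward a spatially structured profile; the analytical tools reviewed in the paper characterize when such bumps exist and are stable.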
The neural dynamics of sensory focus
Clarke, Stephen E.; Longtin, André; Maler, Leonard
2015-01-01
Coordinated sensory and motor system activity leads to efficient localization behaviours; but what neural dynamics enable object tracking and what are the underlying coding principles? Here we show that optimized distance estimation from motion-sensitive neurons underlies object tracking performance in weakly electric fish. First, a relationship is presented for determining the distance that maximizes the Fisher information of a neuron's response to object motion. When applied to our data, the theory correctly predicts the distance chosen by an electric fish engaged in a tracking behaviour, which is associated with a bifurcation between tonic and burst modes of spiking. Although object distance, size and velocity alter the neural response, the location of the Fisher information maximum remains invariant, demonstrating that the circuitry must actively adapt to maintain 'focus' during relative motion. PMID:26549346
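The Fisher-information criterion in this abstract is easy to illustrate with a toy model (not the paper's data): for a neuron whose mean response to object distance d is r(d), corrupted by additive Gaussian noise of standard deviation sigma, the Fisher information is I(d) = r'(d)^2 / sigma^2, and the preferred "focus" distance is its maximizer. The sigmoidal tuning curve and all numbers below are assumptions.

```python
import numpy as np

d = np.linspace(0.0, 10.0, 1001)      # candidate object distances
r = 1.0 / (1.0 + np.exp(d - 3.0))     # assumed sigmoidal response curve
sigma = 0.1                            # assumed response noise (standard deviation)
r_prime = np.gradient(r, d)            # numerical derivative of the tuning curve
fisher = r_prime**2 / sigma**2         # I(d) = r'(d)^2 / sigma^2
d_star = d[np.argmax(fisher)]          # "focus": distance of peak information
```

For a sigmoid the derivative magnitude peaks at the inflection point, so here d_star sits at the curve's midpoint (d = 3), independent of sigma.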
Neural field model of binocular rivalry waves.
Bressloff, Paul C; Webber, Matthew A
2012-04-01
We present a neural field model of binocular rivalry waves in visual cortex. For each eye we consider a one-dimensional network of neurons that respond maximally to a particular feature of the corresponding image such as the orientation of a grating stimulus. Recurrent connections within each one-dimensional network are assumed to be excitatory, whereas connections between the two networks are inhibitory (cross-inhibition). Slow adaptation is incorporated into the model by taking the network connections to exhibit synaptic depression. We derive an analytical expression for the speed of a binocular rivalry wave as a function of various neurophysiological parameters, and show how properties of the wave are consistent with the wave-like propagation of perceptual dominance observed in recent psychophysical experiments. In addition to providing an analytical framework for studying binocular rivalry waves, we show how neural field methods provide insights into the mechanisms underlying the generation of the waves. In particular, we highlight the important role of slow adaptation in providing a "symmetry breaking mechanism" that allows waves to propagate.
Dynamic Attractors and Basin Class Capacity in Binary Neural Networks
1994-12-21
The wide repertoire of attractors and basins of attraction that appear in dynamic neural networks not only serve as models of brain activity patterns...limitations of static neural networks by use of dynamic attractors and their basins. The results show that dynamic networks have a high capacity for
Beyond mean field theory: statistical field theory for neural networks
Buice, Michael A; Chow, Carson C
2014-01-01
Mean field theories have been a stalwart for studying the dynamics of networks of coupled neurons. They are convenient because they are relatively simple and possible to analyze. However, classical mean field theory neglects the effects of fluctuations and correlations due to single neuron effects. Here, we consider various possible approaches for going beyond mean field theory and incorporating correlation effects. Statistical field theory methods, in particular the Doi–Peliti–Janssen formalism, are particularly useful in this regard. PMID:25243014
Natural neural projection dynamics underlying social behavior.
Gunaydin, Lisa A; Grosenick, Logan; Finkelstein, Joel C; Kauvar, Isaac V; Fenno, Lief E; Adhikari, Avishek; Lammel, Stephan; Mirzabekov, Julie J; Airan, Raag D; Zalocusky, Kelly A; Tye, Kay M; Anikeeva, Polina; Malenka, Robert C; Deisseroth, Karl
2014-06-19
Social interaction is a complex behavior essential for many species and is impaired in major neuropsychiatric disorders. Pharmacological studies have implicated certain neurotransmitter systems in social behavior, but circuit-level understanding of endogenous neural activity during social interaction is lacking. We therefore developed and applied a new methodology, termed fiber photometry, to optically record natural neural activity in genetically and connectivity-defined projections to elucidate the real-time role of specified pathways in mammalian behavior. Fiber photometry revealed that activity dynamics of a ventral tegmental area (VTA)-to-nucleus accumbens (NAc) projection could encode and predict key features of social, but not novel object, interaction. Consistent with this observation, optogenetic control of cells specifically contributing to this projection was sufficient to modulate social behavior, which was mediated by type 1 dopamine receptor signaling downstream in the NAc. Direct observation of deep projection-specific activity in this way captures a fundamental and previously inaccessible dimension of mammalian circuit dynamics. Copyright © 2014 Elsevier Inc. All rights reserved.
Natural neural projection dynamics underlying social behavior
Gunaydin, Lisa A.; Grosenick, Logan; Finkelstein, Joel C.; Kauvar, Isaac V.; Fenno, Lief E.; Adhikari, Avishek; Lammel, Stephan; Mirzabekov, Julie J.; Airan, Raag D.; Zalocusky, Kelly A.; Tye, Kay M.; Anikeeva, Polina; Malenka, Robert C.; Deisseroth, Karl
2014-01-01
Social interaction is a complex behavior essential for many species, and is impaired in major neuropsychiatric disorders. Pharmacological studies have implicated certain neurotransmitter systems in social behavior, but circuit-level understanding of endogenous neural activity during social interaction is lacking. We therefore developed and applied a new methodology, termed fiber photometry, to optically record natural neural activity in genetically- and connectivity-defined projections to elucidate the real-time role of specified pathways in mammalian behavior. Fiber photometry revealed that activity dynamics of a ventral tegmental area (VTA)-to-nucleus accumbens (NAc) projection could encode and predict key features of social but not novel-object interaction. Consistent with this observation, optogenetic control of cells specifically contributing to this projection was sufficient to modulate social behavior, which was mediated by type-1 dopamine receptor signaling downstream in the NAc. Direct observation of projection-specific activity in this way captures a fundamental and previously inaccessible dimension of circuit dynamics. PMID:24949967
Dynamical system modeling via signal reduction and neural network simulation
Paez, T.L.; Hunter, N.F.
1997-11-01
Many dynamical systems tested in the field and the laboratory display significant nonlinear behavior. Accurate characterization of such systems requires modeling in a nonlinear framework. One construct forming a basis for nonlinear modeling is that of the artificial neural network (ANN). However, when system behavior is complex, the amount of data required to perform training can become unreasonable. The authors reduce the complexity of information present in system response measurements using decomposition via canonical variate analysis. They describe a method for decomposing system responses, then modeling the components with ANNs. A numerical example is presented, along with conclusions and recommendations.
Creative dynamics approach to neural intelligence.
Zak, M
1990-01-01
The thrust of this paper is to introduce and discuss a substantially new type of dynamical system for modelling biological behavior. The approach was motivated by an attempt to remove one of the most fundamental limitations of artificial neural networks-their rigid behavior compared with even the simplest biological systems. This approach exploits a novel paradigm in nonlinear dynamics based upon the concept of terminal attractors and repellers. It was demonstrated that non-Lipschitzian dynamics based upon the failure of the Lipschitz condition exhibits a new qualitative effect--a multi-choice response to periodic external excitations. Based upon this property, a substantially new class of dynamical systems--the unpredictable systems--was introduced and analyzed. These systems are represented in the form of coupled activation and learning dynamical equations whose ability to be spontaneously activated is based upon two pathological characteristics. Firstly, such systems have zero Jacobian. As a result, they have an infinite number of equilibrium points which occupy curves, surfaces or hypersurfaces. Secondly, at all these equilibrium points, the Lipschitz condition fails, so the equilibrium points become terminal attractors or repellers depending upon the sign of the periodic excitation. Both of these pathological characteristics result in multi-choice response of unpredictable dynamical systems. It has been shown that the unpredictable systems can be controlled by sign strings which uniquely define the system behaviors by specifying the direction of the motions in the critical points. By changing the combinations of signs in the code strings the system can reproduce any prescribed behavior to a prescribed accuracy.(ABSTRACT TRUNCATED AT 250 WORDS)
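The key property of a terminal attractor can be checked on the standard textbook example (chosen here for illustration; it is not necessarily the system used in the paper): dx/dt = -x^(1/3) violates the Lipschitz condition at x = 0, and its solution from x(0) = x0 > 0, namely x(t) = (x0^(2/3) - (2/3)t)^(3/2), reaches the equilibrium exactly at the finite time t* = (3/2) x0^(2/3). The Lipschitz system dx/dt = -x, by contrast, only approaches zero asymptotically.

```python
import numpy as np

def terminal(x0, t):
    """Closed-form solution of dx/dt = -x**(1/3), x(0) = x0 >= 0.

    The trajectory hits the equilibrium x = 0 at the finite settling time
    t* = 1.5 * x0**(2/3) and stays there (hence the clamp to zero).
    """
    s = np.maximum(x0 ** (2.0 / 3.0) - (2.0 / 3.0) * np.asarray(t, float), 0.0)
    return s ** 1.5

# From x0 = 1 the terminal attractor is reached by t = 1.5; the linear
# system exp(-t) is still strictly positive at any finite time.
vals = terminal(1.0, np.array([0.0, 1.0, 2.0]))
```

Finite-time settling is what lets these systems be "re-armed" and steered by the sign strings described in the abstract.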
Electronic neural network for dynamic resource allocation
NASA Technical Reports Server (NTRS)
Thakoor, A. P.; Eberhardt, S. P.; Daud, T.
1991-01-01
A VLSI implementable neural network architecture for dynamic assignment is presented. The resource allocation problems involve assigning members of one set (e.g. resources) to those of another (e.g. consumers) such that the global 'cost' of the associations is minimized. The network consists of a matrix of sigmoidal processing elements (neurons), where the rows of the matrix represent resources and columns represent consumers. Unlike previous neural implementations, however, association costs are applied directly to the neurons, reducing connectivity of the network to VLSI-compatible O(number of neurons). Each row (and column) has an additional neuron associated with it to independently oversee activations of all the neurons in each row (and each column), providing a programmable 'k-winner-take-all' function. This function simultaneously enforces blocking (excitatory/inhibitory) constraints during convergence to control the number of active elements in each row and column within desired boundary conditions. Simulations show that the network, when implemented in fully parallel VLSI hardware, offers optimal (or near-optimal) solutions within only a fraction of a millisecond, for problems up to 128 resources and 128 consumers, orders of magnitude faster than conventional computing or heuristic search methods.
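The row/column winner-take-all constraints described above can be mimicked in software by softassign-style alternating normalization (Sinkhorn iteration). This is a hedged, illustrative analogue of the assignment principle, not the VLSI network from the abstract: low association costs map to high activations, and repeated row and column normalization drives the matrix toward one dominant element per row and column.

```python
import numpy as np

def soft_assign(cost, temp=0.5, iters=100):
    """Relax an assignment problem by alternating row/column normalization."""
    v = np.exp(-np.asarray(cost, dtype=float) / temp)  # low cost -> high activation
    for _ in range(iters):
        v /= v.sum(axis=1, keepdims=True)              # "one winner per row" pressure
        v /= v.sum(axis=0, keepdims=True)              # "one winner per column" pressure
    return v

# Toy 3x3 cost matrix whose optimal assignment is the diagonal.
cost = [[1, 10, 10],
        [10, 1, 10],
        [10, 10, 1]]
v = soft_assign(cost)
assignment = v.argmax(axis=1)    # resource i -> consumer assignment[i]
```

At low temperature the relaxed matrix approaches a permutation matrix, here recovering the diagonal assignment.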
Spatial Dynamics of Multilayer Cellular Neural Networks
NASA Astrophysics Data System (ADS)
Wu, Shi-Liang; Hsu, Cheng-Hsiung
2017-06-01
The purpose of this work is to study the spatial dynamics of one-dimensional multilayer cellular neural networks. We first establish the existence of rightward and leftward spreading speeds of the model. Then we show that the spreading speeds coincide with the minimum wave speeds of the traveling wave fronts in the right and left directions. Moreover, we obtain the asymptotic behavior of the traveling wave fronts when the wave speeds are positive and greater than the spreading speeds. According to the asymptotic behavior and using various kinds of comparison theorems, some front-like entire solutions are constructed by combining the rightward and leftward traveling wave fronts with different speeds and a spatially homogeneous solution of the model. Finally, various qualitative features of such entire solutions are investigated.
Neural Substrates of Dynamic Object Occlusion
Shuwairi, Sarah M.; Curtis, Clayton E.; Johnson, Scott P.
2011-01-01
In everyday environments, objects frequently go out of sight as they move and our view of them becomes obstructed by nearer objects, yet we perceive these objects as continuous and enduring entities. Here, we used functional MRI with an attentive tracking paradigm to clarify the nature of perceptual and cognitive mechanisms subserving this ability to fill in the gaps in perception of dynamic object occlusion. Imaging data revealed distinct regions of cortex showing increased activity during periods of occlusion relative to full visibility. These regions may support active maintenance of a representation of the target’s spatiotemporal properties ensuring that the object is perceived as a persisting entity when occluded. Our findings may shed light on the neural substrates involved in object tracking that give rise to the phenomenon of object permanence. PMID:17651002
Population clocks: motor timing with neural dynamics
Buonomano, Dean V.; Laje, Rodrigo
2010-01-01
An understanding of sensory and motor processing will require elucidation of the mechanisms by which the brain tells time. Open questions relate to whether timing relies on dedicated or intrinsic mechanisms and whether distinct mechanisms underlie timing across scales and modalities. Although experimental and theoretical studies support the notion that neural circuits are intrinsically capable of sensory timing on short scales, few general models of motor timing have been proposed. For one class of models, population clocks, it is proposed that time is encoded in the time-varying patterns of activity of a population of neurons. We argue that population clocks emerge from the internal dynamics of recurrently connected networks, are biologically realistic and account for many aspects of motor timing. PMID:20889368
Neural Dynamics of Phonological Processing in the Dorsal Auditory Stream
Sabri, Merav; Beardsley, Scott A.; Mangalathu-Arumana, Jain; Desai, Anjali
2013-01-01
Neuroanatomical models hypothesize a role for the dorsal auditory pathway in phonological processing as a feedforward efferent system (Davis and Johnsrude, 2007; Rauschecker and Scott, 2009; Hickok et al., 2011). But the functional organization of the pathway, in terms of time course of interactions between auditory, somatosensory, and motor regions, and the hemispheric lateralization pattern is largely unknown. Here, ambiguous duplex syllables, with elements presented dichotically at varying interaural asynchronies, were used to parametrically modulate phonological processing and associated neural activity in the human dorsal auditory stream. Subjects performed syllable and chirp identification tasks, while event-related potentials and functional magnetic resonance images were concurrently collected. Joint independent component analysis was applied to fuse the neuroimaging data and study the neural dynamics of brain regions involved in phonological processing with high spatiotemporal resolution. Results revealed a highly interactive neural network associated with phonological processing, composed of functional fields in posterior temporal gyrus (pSTG), inferior parietal lobule (IPL), and ventral central sulcus (vCS) that were engaged early and almost simultaneously (at 80–100 ms), consistent with a direct influence of articulatory somatomotor areas on phonemic perception. Left hemispheric lateralization was observed 250 ms earlier in IPL and vCS than pSTG, suggesting that functional specialization of somatomotor (and not auditory) areas determined lateralization in the dorsal auditory pathway. The temporal dynamics of the dorsal auditory pathway described here offer a new understanding of its functional organization and demonstrate that temporal information is essential to resolve neural circuits underlying complex behaviors. PMID:24068810
Fractional Hopfield Neural Networks: Fractional Dynamic Associative Recurrent Neural Networks.
Pu, Yi-Fei; Yi, Zhang; Zhou, Ji-Liu
2017-10-01
This paper mainly discusses a novel conceptual framework: fractional Hopfield neural networks (FHNN). As is commonly known, fractional calculus has been incorporated into artificial neural networks, mainly because of its long-term memory and nonlocality. Some researchers have made interesting attempts at fractional neural networks and gained competitive advantages over integer-order neural networks. It is therefore natural to ask how to generalize first-order Hopfield neural networks to fractional-order ones, and how to implement FHNN by means of fractional calculus. We propose fractional calculus as the mathematical method for implementing FHNN. First, we implement a fractor in the form of an analog circuit. Second, we implement FHNN by utilizing the fractor and the fractional steepest descent approach, construct its Lyapunov function, and further analyze its attractors. Third, we perform experiments to analyze the stability and convergence of FHNN, and further discuss its applications to the defense against chip cloning attacks for anticounterfeiting. The main contribution of our work is to propose FHNN in the form of an analog circuit by utilizing a fractor and the fractional steepest descent approach, construct its Lyapunov function, prove its Lyapunov stability, analyze its attractors, and apply FHNN to the defense against chip cloning attacks for anticounterfeiting. A significant advantage of FHNN is that its attractors essentially relate to the neuron's fractional order. FHNN possesses fractional-order stability and fractional-order sensitivity characteristics.
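As a rough numerical illustration of the fractional machinery the paper builds on (not its analog-circuit fractor), the Grünwald-Letnikov discretization gives a direct handle on fractional derivatives; the function names and step size below are invented for the demo:

```python
import math

def gl_weights(alpha, n):
    """Grünwald-Letnikov weights w_k = (-1)^k * binom(alpha, k)."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - alpha) / k)   # recurrence for the weights
    return w

def gl_derivative(f, t, alpha, h=1e-3):
    """Order-alpha fractional derivative of f at t (lower terminal 0)."""
    n = int(round(t / h))
    w = gl_weights(alpha, n + 1)
    return sum(w[k] * f(t - k * h) for k in range(n + 1)) / h ** alpha

# Half-derivative of f(t) = t at t = 1 is 1 / Gamma(1.5), about 1.1284.
print(round(gl_derivative(lambda t: t, 1.0, 0.5), 3))
```

The long tail of nonzero weights is the discrete counterpart of the "long-term memory and nonlocality" the abstract credits to fractional calculus; for alpha = 1 the weights collapse to [1, -1, 0, ...], recovering the ordinary first difference.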
An integrated architecture of adaptive neural network control for dynamic systems
Ke, Liu; Tokar, R.; Mcvey, B.
1994-07-01
In this study, an integrated neural network control architecture for nonlinear dynamic systems is presented. Most recent work in the neural network control field uses no error feedback as a control input, which raises an adaptation problem. The integrated architecture in this paper combines feedforward control and error-feedback adaptive control using neural networks. The paper reveals the different internal functionality of these two kinds of neural network controllers for certain input styles, e.g., state feedback and error feedback. Feedforward neural network controllers with state feedback establish fixed control mappings which cannot adapt when model uncertainties are present. With error feedback, neural network controllers learn the slopes or gains with respect to the error feedback, yielding error-driven adaptive control systems. The results demonstrate that the two kinds of control schemes can be combined to realize their individual advantages. Testing with disturbances added to the plant shows good tracking and adaptation.
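A linear toy can stand in for the two controller roles contrasted above; here the neural networks are replaced by their simplest caricatures (an exact feedforward inverse of the nominal plant, plus a scalar error-driven adaptive term), and all plant coefficients and gains are invented for illustration:

```python
def run(adapt, d=0.3, steps=500, eta=0.05):
    """Track r = 1 on the plant y' = 0.9*y + u + d (d: unmodelled disturbance).

    u = feedforward inverse of the nominal plant plus, when adapt=True,
    an error-driven term standing in for the error-feedback controller."""
    y, theta = 0.0, 0.0
    for _ in range(steps):
        e = 1.0 - y                        # tracking error
        if adapt:
            theta += eta * e               # error-driven adaptation
        u = (1.0 - 0.9 * 1.0) + theta      # feedforward inverse + correction
        y = 0.9 * y + u + d
    return 1.0 - y                         # final tracking error

print(run(adapt=False), run(adapt=True))   # ~ -3.0 vs ~ 0.0
```

The fixed feedforward mapping leaves a steady-state error proportional to the disturbance, exactly the "cannot adapt when model uncertainties are present" behavior; the error-feedback term drives that residual to zero.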
Two-photon imaging and analysis of neural network dynamics
NASA Astrophysics Data System (ADS)
Lütcke, Henry; Helmchen, Fritjof
2011-08-01
The glow of a starry night sky, the smell of a freshly brewed cup of coffee or the sound of ocean waves breaking on the beach are representations of the physical world that have been created by the dynamic interactions of thousands of neurons in our brains. How the brain mediates perceptions, creates thoughts, stores memories and initiates actions remains one of the most profound puzzles in biology, if not all of science. A key to a mechanistic understanding of how the nervous system works is the ability to measure and analyze the dynamics of neuronal networks in the living organism in the context of sensory stimulation and behavior. Dynamic brain properties have been fairly well characterized on the microscopic level of individual neurons and on the macroscopic level of whole brain areas largely with the help of various electrophysiological techniques. However, our understanding of the mesoscopic level comprising local populations of hundreds to thousands of neurons (so-called 'microcircuits') remains comparably poor. Predominantly, this has been due to the technical difficulties involved in recording from large networks of neurons with single-cell spatial resolution and near-millisecond temporal resolution in the brain of living animals. In recent years, two-photon microscopy has emerged as a technique which meets many of these requirements and thus has become the method of choice for the interrogation of local neural circuits. Here, we review the state-of-research in the field of two-photon imaging of neuronal populations, covering the topics of microscope technology, suitable fluorescent indicator dyes, staining techniques, and in particular analysis techniques for extracting relevant information from the fluorescence data. We expect that functional analysis of neural networks using two-photon imaging will help to decipher fundamental operational principles of neural microcircuits.
NASA Astrophysics Data System (ADS)
Chiel, Hillel J.; Thomas, Peter J.
2011-12-01
, the sun, earth and moon) proved to be far more difficult. In the late nineteenth century, Poincaré made significant progress on this problem, introducing a geometric method of reasoning about solutions to differential equations (Diacu and Holmes 1996). This work had a powerful impact on mathematicians and physicists, and also began to influence biology. In his 1925 book, based on his work starting in 1907, and that of others, Lotka used nonlinear differential equations and concepts from dynamical systems theory to analyze a wide variety of biological problems, including oscillations in the numbers of predators and prey (Lotka 1925). Although little was known in detail about the function of the nervous system, Lotka concluded his book with speculations about consciousness and the implications this might have for creating a mathematical formulation of biological systems. Much experimental work in the 1930s and 1940s focused on the biophysical mechanisms of excitability in neural tissue, and Rashevsky and others continued to apply tools and concepts from nonlinear dynamical systems theory as a means of providing a more general framework for understanding these results (Rashevsky 1960, Landahl and Podolsky 1949). The publication of Hodgkin and Huxley's classic quantitative model of the action potential in 1952 created a new impetus for these studies (Hodgkin and Huxley 1952). In 1955, FitzHugh published an important paper that summarized much of the earlier literature, and used concepts from phase plane analysis such as asymptotic stability, saddle points, separatrices and the role of noise to provide a deeper theoretical and conceptual understanding of threshold phenomena (FitzHugh 1955, Izhikevich and FitzHugh 2006). The FitzHugh-Nagumo equations constituted an important two-dimensional simplification of the four-dimensional Hodgkin and Huxley equations, and gave rise to an extensive literature of analysis. Many of the papers in this special issue build on tools
ERIC Educational Resources Information Center
Noyons, E. C. M.; van Raan, A. F. J.
1998-01-01
Using bibliometric mapping techniques, authors developed a methodology of self-organized structuring of scientific fields which was applied to neural network research. Explores the evolution of a data generated field structure by monitoring the interrelationships between subfields, the internal structure of subfields, and the dynamic features of…
Neural dynamics during repetitive visual stimulation
NASA Astrophysics Data System (ADS)
Tsoneva, Tsvetomira; Garcia-Molina, Gary; Desain, Peter
2015-12-01
Objective. Steady-state visual evoked potentials (SSVEPs), the brain responses to repetitive visual stimulation (RVS), are widely utilized in neuroscience. Their high signal-to-noise ratio and ability to entrain oscillatory brain activity are beneficial for their applications in brain-computer interfaces, investigation of neural processes underlying brain rhythmic activity (steady-state topography) and probing the causal role of brain rhythms in cognition and emotion. This paper aims at analyzing the space and time EEG dynamics in response to RVS at the frequency of stimulation and ongoing rhythms in the delta, theta, alpha, beta, and gamma bands. Approach. We used electroencephalography (EEG) to study the oscillatory brain dynamics during RVS at 10 frequencies in the gamma band (40-60 Hz). We collected an extensive EEG data set from 32 participants and analyzed the RVS evoked and induced responses in the time-frequency domain. Main results. Stable SSVEP over parieto-occipital sites was observed at each of the fundamental frequencies and their harmonics and sub-harmonics. Both the strength and the spatial propagation of the SSVEP response seem sensitive to stimulus frequency. The SSVEP was more localized around the parieto-occipital sites for higher frequencies (>54 Hz) and spread to fronto-central locations for lower frequencies. We observed a strong negative correlation between stimulation frequency and relative power change at that frequency, the first harmonic and the sub-harmonic components over occipital sites. Interestingly, over parietal sites for sub-harmonics a positive correlation of relative power change and stimulation frequency was found. A number of distinct patterns in delta (1-4 Hz), theta (4-8 Hz), alpha (8-12 Hz) and beta (15-30 Hz) bands were also observed. The transient response, from 0 to about 300 ms after stimulation onset, was accompanied by an increase in delta and theta power over fronto-central and occipital sites, which returned to baseline
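The central quantity in such analyses, power at the stimulation frequency relative to a baseline condition, can be sketched with a plain FFT; the sampling rate, signal amplitudes, and function names below are illustrative assumptions, not the paper's pipeline:

```python
import numpy as np

FS = 512                                   # sampling rate (Hz), illustrative

def band_power(x, freq, fs=FS):
    """Power at the FFT bin nearest `freq` (rectangular window)."""
    spec = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    bins = np.fft.rfftfreq(len(x), 1 / fs)
    return spec[np.argmin(np.abs(bins - freq))]

t = np.arange(2 * FS) / FS                 # 2 s of synthetic EEG
baseline = np.sin(2 * np.pi * 10 * t)      # ongoing 10 Hz alpha rhythm
ssvep = baseline + 0.5 * np.sin(2 * np.pi * 40 * t)   # 40 Hz RVS entrainment

# Power appears at the stimulation frequency only during entrainment.
print(band_power(baseline, 40), band_power(ssvep, 40))
```

With both sinusoids landing on exact FFT bins (integer cycles in the 2 s window), the 40 Hz power is essentially zero at baseline and jumps during stimulation, which is the relative power change tracked across stimulation frequencies in the study.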
Symbolic representation of recurrent neural network dynamics.
Huynh, Thuan Q; Reggia, James A
2012-10-01
Simple recurrent error backpropagation networks have been widely used to learn temporal sequence data, including regular and context-free languages. However, the production of relatively large and opaque weight matrices during learning has inspired substantial research on how to extract symbolic human-readable interpretations from trained networks. Unlike feedforward networks, where research has focused mainly on rule extraction, most past work with recurrent networks has viewed them as dynamical systems that can be approximated symbolically by finite-state machines (FSMs). With this approach, the network's hidden layer activation space is typically divided into a finite number of regions. Past research has mainly focused on better techniques for dividing up this activation space. In contrast, very little work has tried to influence the network training process to produce a better representation in hidden layer activation space, and that which has been done has had only limited success. Here we propose a powerful general technique to bias the error backpropagation training process so that it learns an activation space representation from which it is easier to extract FSMs. Using four publicly available data sets that are based on regular and context-free languages, we show via computational experiments that the modified learning method helps to extract FSMs with substantially fewer states and less variance than unmodified backpropagation learning, without decreasing the neural networks' accuracy. We conclude that modifying error backpropagation so that it more effectively separates learned pattern encodings in the hidden layer is an effective way to improve contemporary FSM extraction methods.
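A minimal version of the extraction step discussed above (quantize hidden activations, then read transitions off the data) might look as follows; the grid quantizer stands in for the more sophisticated partitioning techniques the paper surveys, and all names are invented:

```python
from collections import Counter, defaultdict

def extract_fsm(hidden_seqs, input_seqs, grid=1.0):
    """Quantise hidden activations onto a grid and read off an FSM.

    Each hidden sequence has one more entry than its input sequence
    (the initial state). A transition (state, symbol) -> state keeps
    the most frequent quantised target seen in the data."""
    votes = defaultdict(Counter)
    for h, s in zip(hidden_seqs, input_seqs):
        q = [tuple(int(round(v / grid)) for v in vec) for vec in h]
        for t, sym in enumerate(s):
            votes[(q[t], sym)][q[t + 1]] += 1
    return {key: ctr.most_common(1)[0][0] for key, ctr in votes.items()}

# Toy 'parity' network: a 1-d hidden state that flips on symbol '1'.
hidden = [[(0.02,), (0.98,), (1.01,), (0.03,)]]    # noisy activations
fsm = extract_fsm(hidden, ['101'])                 # recovers parity automaton
```

The better-separated the hidden-state clusters are (the property the paper's modified training objective encourages), the fewer states and the less variance such a quantization produces.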
Using neural networks for dynamic light scattering time series processing
NASA Astrophysics Data System (ADS)
Chicea, Dan
2017-04-01
A basic experiment to record dynamic light scattering (DLS) time series was assembled from basic components. DLS time series processing using the Lorentzian function fit was taken as the reference. A neural network was designed and trained using simulated frequency spectra for spherical particles, assumed to be the scattering centers, in the range 0-350 nm; the network design and training procedure are described in detail. The network's output accuracy was tested both on simulated and on experimental time series. The match with the reference DLS results was good, serving as a proof of concept for using neural networks in fast DLS time series processing.
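The Lorentzian-fit reference method the network is validated against can be sketched directly: the DLS power spectrum's half-width is Gamma = D*q^2, with the diffusion coefficient D given by Stokes-Einstein. The experimental parameters below (He-Ne laser, 90-degree scattering in water) are illustrative assumptions, and the "fit" is reduced to locating the half-maximum frequency of a noiseless spectrum:

```python
import numpy as np

KB, T, ETA = 1.380649e-23, 293.0, 1.0e-3           # J/K, K, Pa*s (water)
LAM, N_IDX, THETA = 633e-9, 1.33, np.pi / 2        # He-Ne laser, 90 degrees
Q = 4 * np.pi * N_IDX * np.sin(THETA / 2) / LAM    # scattering vector (1/m)

def linewidth(diameter):
    """Lorentzian half-width Gamma = D * q^2 with Stokes-Einstein D."""
    return KB * T / (3 * np.pi * ETA * diameter) * Q ** 2

def diameter_from_spectrum(omega, s):
    """Reference 'Lorentzian fit': locate the half-maximum frequency."""
    gamma = omega[np.argmin(np.abs(s - s[0] / 2))]
    return KB * T * Q ** 2 / (3 * np.pi * ETA * gamma)

omega = np.linspace(1.0, 1e4, 200000)              # rad/s grid
g = linewidth(100e-9)                              # 100 nm sphere
spectrum = g / (omega ** 2 + g ** 2)               # ideal Lorentzian
print(diameter_from_spectrum(omega, spectrum) * 1e9)   # ~100 (in nm)
```

A trained network replaces the fitting step with a single forward pass over the spectrum, which is the speed advantage the abstract claims.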
Dynamic switching of neural codes in networks with gap junctions.
Katori, Yuichi; Masuda, Naoki; Aihara, Kazuyuki
2006-12-01
Population rate coding and temporal coding are common neural codes. Recent studies suggest that these two codes may be alternatively used in one neural system. Based on the fact that there are massive gap junctions in the brain, we explore how this switching behavior may be related to neural codes in networks of neurons connected by gap junctions. First, we show that under time-varying inputs, such neural networks show switching between synchronous and asynchronous states. Then, we quantify network dynamics by three mutual information measures to show that population rate coding carries more information in asynchronous states and temporal coding does so in synchronous states.
Neural Networks for Dynamic Flight Control
1993-12-01
uses the Adaline (22) model for development of the neural networks. Neural Graphics and other AFIT applications use a slightly different model. The primary difference in the Nguyen application is that the Adaline uses the nonlinear function f(a) = tanh(a), where standard backprop uses the sigmoid
Neural Dynamics of Attentional Cross-Modality Control
Rabinovich, Mikhail; Tristan, Irma; Varona, Pablo
2013-01-01
Attentional networks that integrate many cortical and subcortical elements dynamically control mental processes to focus on specific events and make a decision. The resources of attentional processing are finite. Nevertheless, we often face situations in which it is necessary to simultaneously process several modalities, for example, to switch attention between players in a soccer field. Here we use a global brain mode description to build a model of attentional control dynamics. This model is based on sequential information processing stability conditions that are realized through nonsymmetric inhibition in cortical circuits. In particular, we analyze the dynamics of attentional switching and focus in the case of parallel processing of three interacting mental modalities. Using an excitatory-inhibitory network, we investigate how the bifurcations between different attentional control strategies depend on the stimuli and analyze the relationship between the time of attention focus and the strength of the stimuli. We discuss the interplay between attention and decision-making: in this context, a decision-making process is a controllable bifurcation of the attention strategy. We also suggest the dynamical evaluation of attentional resources in neural sequence processing. PMID:23696890
Shaping the learning curve: epigenetic dynamics in neural plasticity
Bronfman, Zohar Z.; Ginsburg, Simona; Jablonka, Eva
2014-01-01
A key characteristic of learning and neural plasticity is state-dependent acquisition dynamics reflected by the non-linear learning curve that links increase in learning with practice. Here we propose that the manner by which epigenetic states of individual cells change during learning contributes to the shape of the neural and behavioral learning curve. We base our suggestion on recent studies showing that epigenetic mechanisms such as DNA methylation, histone acetylation, and RNA-mediated gene regulation are intimately involved in the establishment and maintenance of long-term neural plasticity, reflecting specific learning-histories and influencing future learning. Our model, which is the first to suggest a dynamic molecular account of the shape of the learning curve, leads to several testable predictions regarding the link between epigenetic dynamics at the promoter, gene-network, and neural-network levels. This perspective opens up new avenues for therapeutic interventions in neurological pathologies. PMID:25071483
Absolute stability and synchronization in neural field models with transmission delays
NASA Astrophysics Data System (ADS)
Kao, Chiu-Yen; Shih, Chih-Wen; Wu, Chang-Hong
2016-08-01
Neural fields model macroscopic parts of the cortex which involve several populations of neurons. We consider a class of neural field models which are represented by integro-differential equations with transmission time delays which are space-dependent. The considered domains underlying the systems can be bounded or unbounded. A new approach, called sequential contracting, instead of the conventional Lyapunov functional technique, is employed to investigate the global dynamics of such systems. Sufficient conditions for the absolute stability and synchronization of the systems are established. Several numerical examples are presented to demonstrate the theoretical results.
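A generic form of the model class described above, written here in my own notation for populations i, j with space-dependent transmission delays tau_ij on a (bounded or unbounded) domain Omega, is:

```latex
\frac{\partial u_i(x,t)}{\partial t}
  = -u_i(x,t)
  + \sum_{j=1}^{m} \int_{\Omega} w_{ij}(x,y)\,
      f_j\!\bigl(u_j\bigl(y,\,t-\tau_{ij}(x,y)\bigr)\bigr)\,\mathrm{d}y
  + I_i(x), \qquad x\in\Omega,
```

where w_ij is the connectivity kernel, f_j the firing-rate function, and I_i an external input; the sequential-contracting argument bounds the solution iteratively in place of a Lyapunov functional.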
Kopecz, K
1995-10-01
The systematic variations of regular saccadic reaction times induced in gap/overlap paradigms are addressed by a quantitative model. Intentional and visual information are integrated on a retinotopic representation of visual space, on which activity dynamics is related to movement initiation. Using a specific conception of "motor preparation", known effects of general warnings and fixation point on- and offsets are reproduced. Results of new experiments are predicted and the extent to which fixation point offsets are specific to ocular responses is analyzed in the light of the exposed model architecture. Relations of the theoretical framework to neurophysiological findings are discussed.
Pulsating fronts in periodically modulated neural field models
NASA Astrophysics Data System (ADS)
Coombes, S.; Laing, C. R.
2011-01-01
We consider a coarse-grained neural field model for synaptic activity in spatially extended cortical tissue that possesses an underlying periodicity in its microstructure. The model is written as an integrodifferential equation with periodic modulation of a translationally invariant spatial kernel. This modulation can have a strong effect on wave propagation through the tissue, including the creation of pulsating fronts with widely varying speeds and wave-propagation failure. Here we develop a new analysis for the study of such phenomena, using two complementary techniques. The first uses linearized information from the leading edge of a traveling periodic wave to obtain wave speed estimates for pulsating fronts, and the second develops an interface description for waves in the full nonlinear model. For weak modulation and a Heaviside firing rate function the interface dynamics can be analyzed exactly and gives predictions that are in excellent agreement with direct numerical simulations. Importantly, the interface dynamics description improves on the standard homogenization calculation, which is restricted to modulation that is both fast and weak.
Neural network based dynamic controllers for industrial robots.
Oh, S Y; Shin, W C; Kim, H G
1995-09-01
The industrial robot's dynamic performance is frequently measured by positioning accuracy at high speeds, so a good dynamic controller that can accurately compute robot dynamics at a servo rate high enough to ensure system stability is essential. A real-time dynamic controller for an industrial robot is developed here using neural networks. First, an efficient time-selectable hidden layer architecture has been developed based on system dynamics localized in time, which lends itself to real-time learning and control along with enhanced mapping accuracy. Second, the neural network architecture has also been specially tuned to accommodate servo dynamics. This not only facilitates the system design through reduced sensing requirements for the controller but also enhances the control performance over a control architecture that neglects servo dynamics. Experimental results demonstrate the controller's excellent learning and control performance compared with a conventional controller; the approach thus has good potential for practical use in industrial robots.
Gap junctions: their importance for the dynamics of neural circuits.
Rela, Lorena; Szczupak, Lidia
2004-12-01
Electrical coupling through gap junctions constitutes a mode of signal transmission between neurons (electrical synaptic transmission). Originally discovered in invertebrates and in lower vertebrates, electrical synapses have recently been reported in immature and adult mammalian nervous systems. This has renewed the interest in understanding the role of electrical synapses in neural circuit function and signal processing. The present review focuses on the role of gap junctions in shaping the dynamics of neural networks by forming electrical synapses between neurons. Electrical synapses have been shown to be important elements in coincidence detection mechanisms and they can produce complex input-output functions when arranged in combination with chemical synapses. We postulate that these synapses may also be important in redefining neuronal compartments, associating anatomically distinct cellular structures into functional units. The original view of electrical synapses as static connecting elements in neural circuits has been revised and a considerable amount of evidence suggests that electrical synapses substantially affect the dynamics of neural circuits.
An analysis of neural receptive field plasticity by point process adaptive filtering
Brown, Emery N.; Nguyen, David P.; Frank, Loren M.; Wilson, Matthew A.; Solo, Victor
2001-01-01
Neural receptive fields are plastic: with experience, neurons in many brain regions change their spiking responses to relevant stimuli. Analysis of receptive field plasticity from experimental measurements is crucial for understanding how neural systems adapt their representations of relevant biological information. Current analysis methods using histogram estimates of spike rate functions in nonoverlapping temporal windows do not track the evolution of receptive field plasticity on a fine time scale. Adaptive signal processing is an established engineering paradigm for estimating time-varying system parameters from experimental measurements. We present an adaptive filter algorithm for tracking neural receptive field plasticity based on point process models of spike train activity. We derive an instantaneous steepest descent algorithm by using as the criterion function the instantaneous log likelihood of a point process spike train model. We apply the point process adaptive filter algorithm in a study of spatial (place) receptive field properties of simulated and actual spike train data from rat CA1 hippocampal neurons. A stability analysis of the algorithm is sketched in the Appendix. The adaptive algorithm can update the place field parameter estimates on a millisecond time scale. It reliably tracked the migration, changes in scale, and changes in maximum firing rate characteristic of hippocampal place fields in a rat running on a linear track. Point process adaptive filtering offers an analytic method for studying the dynamics of neural receptive fields. PMID:11593043
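The instantaneous steepest descent update described above can be sketched for a one-dimensional Gaussian place field. To keep the demo deterministic it is driven by the expected spike count lambda*dt rather than a random spike train, and every parameter value below is invented:

```python
import math

R_MAX, SIGMA, DT = 10.0, 0.15, 0.001   # peak rate (Hz), field width, bin (s)
MU_TRUE = 0.5                          # true place-field centre on the track

def rate(x, mu):
    """Gaussian place field: lambda(x) = R_MAX * exp(-(x-mu)^2 / (2 sigma^2))."""
    return R_MAX * math.exp(-(x - mu) ** 2 / (2 * SIGMA ** 2))

def adaptive_filter(n_steps=50000, mu_hat=0.2, eps=0.005):
    """Instantaneous steepest descent on the point-process log likelihood:
    mu <- mu + eps * d(log lambda)/d(mu) * (dN - lambda * dt).
    Driven here by the expected spike count, so the demo is deterministic."""
    for k in range(n_steps):
        x = (k % 1000) / 1000.0                   # rat sweeps the linear track
        dn = rate(x, MU_TRUE) * DT                # expected spikes this bin
        grad = (x - mu_hat) / SIGMA ** 2          # d log(lambda) / d mu
        mu_hat += eps * grad * (dn - rate(x, mu_hat) * DT)
    return mu_hat

print(round(adaptive_filter(), 2))               # estimate moves toward 0.5
```

Because the update is applied every time bin, the same machinery tracks a slowly migrating field on a millisecond time scale, which is the property the paper exploits for hippocampal place-field plasticity.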
Neural Dynamics Underlying Event-Related Potentials
NASA Technical Reports Server (NTRS)
Shah, Ankoor S.; Bressler, Steven L.; Knuth, Kevin H.; Ding, Ming-Zhou; Mehta, Ashesh D.; Ulbert, Istvan; Schroeder, Charles E.
2003-01-01
There are two opposing hypotheses about the brain mechanisms underlying sensory event-related potentials (ERPs). One holds that sensory ERPs are generated by phase resetting of ongoing electroencephalographic (EEG) activity, and the other that they result from signal averaging of stimulus-evoked neural responses. We tested several contrasting predictions of these hypotheses by direct intracortical analysis of neural activity in monkeys. Our findings clearly demonstrate evoked response contributions to the sensory ERP in the monkey, and they suggest the likelihood that a mixed (Evoked/Phase Resetting) model may account for the generation of scalp ERPs in humans.
Exploring the Neural Dynamics Underpinning Individual Differences in Sentence Comprehension
Prat, Chantel S.; Just, Marcel Adam
2011-01-01
This study used functional magnetic resonance imaging to investigate individual differences in the neural underpinnings of sentence comprehension, with a focus on neural adaptability (dynamic configuration of neural networks with changing task demands). Twenty-seven undergraduates, with varying working memory capacities and vocabularies, read sentences that were either syntactically simple or complex under conditions of varying extrinsic working memory demands (sentences alone or preceded by to-be-remembered words or nonwords). All readers showed greater neural adaptability when extrinsic working memory demands were low, suggesting that adaptability is related to resource availability. Higher capacity readers showed greater neural adaptability (greater increase in activation with increasing syntactic complexity) across conditions than did lower capacity readers. Higher capacity readers also showed better maintenance of or increase in synchronization of activation between brain regions as tasks became more demanding. Larger vocabulary was associated with more efficient use of cortical resources (reduced activation in frontal regions) in all conditions but was not associated with greater neural adaptability or synchronization. The distinct characterizations of verbal working memory capacity and vocabulary suggest that dynamic facets of brain function such as adaptability and synchronization may underlie individual differences in more general information processing abilities, whereas neural efficiency may more specifically reflect individual differences in language experience. PMID:21148612
Dynamic properties of force fields
NASA Astrophysics Data System (ADS)
Vitalini, F.; Mey, A. S. J. S.; Noé, F.; Keller, B. G.
2015-02-01
Molecular-dynamics simulations are increasingly used to study dynamic properties of biological systems. With this development, the ability of force fields to successfully predict relaxation timescales and the associated conformational exchange processes moves into focus. We assess to what extent the dynamic properties of model peptides (Ac-A-NHMe, Ac-V-NHMe, AVAVA, A10) differ when simulated with different force fields (AMBER ff99SB-ILDN, AMBER ff03, OPLS-AA/L, CHARMM27, and GROMOS43a1). The dynamic properties are extracted using Markov state models. For single-residue models (Ac-A-NHMe, Ac-V-NHMe), the slow conformational exchange processes are similar in all force fields, but the associated relaxation timescales differ by up to an order of magnitude. For the peptide systems, not only the relaxation timescales, but also the conformational exchange processes differ considerably across force fields. This finding calls the significance of dynamic interpretations of molecular-dynamics simulations into question.
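The Markov-state-model diagnostic behind such timescale comparisons is compact: relaxation timescales are read off the transition-matrix eigenvalues as t_i = -tau / ln(lambda_i), and for a truly Markovian model they are independent of the lag tau. A sketch with an invented two-state toy matrix:

```python
import numpy as np

def implied_timescales(T, lag):
    """Implied timescales t_i = -lag / ln(lambda_i) of a row-stochastic
    Markov state model transition matrix (stationary eigenvalue 1 excluded)."""
    ev = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
    return -lag / np.log(ev[1:])

# Toy 2-state model: slow exchange between two conformations.
T = np.array([[0.99, 0.01],
              [0.02, 0.98]])
t_lag1 = implied_timescales(T, lag=1)[0]
t_lag5 = implied_timescales(np.linalg.matrix_power(T, 5), lag=5)[0]
print(t_lag1, t_lag5)    # equal for a truly Markovian model (~32.8 steps)
```

Comparing these eigenvalue-derived timescales across force fields is exactly where the up-to-an-order-of-magnitude discrepancies reported in the abstract show up.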
Neural Computations in a Dynamical System with Multiple Time Scales
Mi, Yuanyuan; Lin, Xiaohan; Wu, Si
2016-01-01
Neural systems display rich short-term dynamics at various levels, e.g., spike-frequency adaptation (SFA) at the single-neuron level, and short-term facilitation (STF) and depression (STD) at the synapse level. These dynamical features typically cover a broad range of time scales and exhibit large diversity in different brain regions. It remains unclear what the computational benefit is for the brain of having such variability in short-term dynamics. In this study, we propose that the brain can exploit such dynamical features to implement multiple seemingly contradictory computations in a single neural circuit. To demonstrate this idea, we use a continuous attractor neural network (CANN) as a working model and include STF, SFA and STD with increasing time constants in its dynamics. Three computational tasks are considered: persistent activity, adaptation, and anticipative tracking. These tasks require conflicting neural mechanisms, and hence cannot be implemented by a single dynamical feature or any combination with similar time constants. However, with properly coordinated STF, SFA and STD, we show that the network is able to implement the three computational tasks concurrently. We hope this study will shed light on the understanding of how the brain orchestrates its rich dynamics at various levels to realize diverse cognitive functions. PMID:27679569
Measuring Whole-Brain Neural Dynamics and Behavior of Freely-Moving C. elegans
NASA Astrophysics Data System (ADS)
Shipley, Frederick; Nguyen, Jeffrey; Plummer, George; Shaevitz, Joshua; Leifer, Andrew
2015-03-01
Bridging the gap between an organism's neural dynamics and its ultimate behavior is the fundamental goal of neuroscience. Previously, probes of neural dynamics have been restricted to small numbers of neurons, whether by electrode or optogenetic measurements. Here we present an instrument to simultaneously monitor neural activity from every neuron in the head of a freely moving Caenorhabditis elegans while also recording its behavior. Whole-brain imaging has previously been demonstrated in C. elegans, but only in restrained and anesthetized animals (1). For studying the neural coding of behavior it is crucial to measure neural activity in freely behaving animals. Neural activity is recorded optically from cells expressing the calcium indicator GCaMP6. Real-time computer vision tracks the worm's position in x-y, while a piezo stage sweeps through the brain in z, yielding five brain volumes per second. Behavior is recorded under infrared, dark-field imaging. This tool will allow us to directly correlate neural activity with behavior, and we will present progress toward this goal. Thank you to the Simons Foundation and Princeton University for supporting this research.
Beyond slots and resources: Grounding cognitive concepts in neural dynamics
Johnson, Jeffrey S.; Simmering, Vanessa R.; Buss, Aaron T.
2014-01-01
Research over the past decade has suggested that the ability to hold information in visual working memory (VWM) may be limited to as few as 3-4 items. However, the precise nature and source of these capacity limits remains hotly debated. Most commonly, capacity limits have been inferred from studies of visual change detection, in which performance declines systematically as a function of the number of items participants must remember. According to one view, such declines indicate that a limited number of fixed-resolution representations are held in independent memory ‘slots’. Another view suggests that capacity limits are more apparent than real, emerging as limited memory resources are distributed across more to-be-remembered items. Here we argue that, although both perspectives have merit and have generated and explained an impressive amount of empirical data, their central focus on the representations—rather than processes—underlying VWM may ultimately limit continuing progress in this area. As an alternative, we describe a neurally-grounded, process-based approach to VWM: the dynamic field theory. Simulations demonstrate that this model can account for key aspects of behavioral performance in change detection, in addition to generating novel behavioral predictions that have been confirmed experimentally. Furthermore, we describe extensions of the model to recall tasks, the integration of visual features, cognitive development, individual differences, and functional imaging studies of VWM. We conclude by discussing the importance of grounding psychological concepts in neural dynamics as a first step toward understanding the link between brain and behavior. PMID:24306983
Discriminating lysosomal membrane protein types using dynamic neural network.
Tripathi, Vijay; Gupta, Dwijendra Kumar
2014-01-01
This work presents a dynamic artificial neural network methodology that classifies proteins into their classes from their sequences alone: the lysosomal membrane protein classes and the various other membrane protein classes. In this paper, a neural networks-based lysosomal-associated membrane protein type prediction system is proposed. Different protein sequence representations are fused to extract the features of a protein sequence, comprising seven feature sets: amino acid (AA) composition, sequence length, hydrophobic group, electronic group, sum of hydrophobicity, R-group, and dipeptide composition. To reduce the dimensionality of the large feature vector, we applied principal component analysis. The probabilistic neural network, generalized regression neural network, and Elman recurrent neural network (RNN) are used as classifiers and compared with the layer recurrent network (LRN), a dynamic network. Dynamic networks have memory, i.e., their output depends not only on the current input but also on previous outputs. Accordingly, the accuracy of the LRN classifier comes out to be the highest among all the artificial neural networks tested. The overall accuracy of jackknife cross-validation is 93.2% for the data-set. These results suggest that the method can be effectively applied to discriminate lysosomal-associated membrane proteins from other membrane proteins (Type-I, outer membrane proteins, GPI-anchored) and globular proteins, and they also indicate that the protein sequence representation can better reflect the core features of membrane proteins than the classical AA composition.
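The dimensionality-reduction step described above can be sketched with plain PCA via SVD; the data and sizes below are synthetic stand-ins for the fused sequence features, not the paper's data.

```python
import numpy as np

# Hedged sketch of the PCA step: random data stands in for the fused
# protein-sequence features; matrix sizes are illustrative only.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))          # 200 sequences x 50 fused features

Xc = X - X.mean(axis=0)                 # centre each feature
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 10                                  # keep the top-10 principal components
Z = Xc @ Vt[:k].T                       # reduced representation

explained = (S[:k] ** 2).sum() / (S ** 2).sum()   # variance retained
```

The reduced matrix `Z` would then be fed to the classifiers in place of the raw 50-dimensional feature vectors.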
Neural networks and dynamic complex systems
Fox, G.; Furmanski, Wojtek; Ho, Alex; Koller, J.; Simic, P.; Wong, Isaac
1989-01-01
We describe the use of neural networks for optimization and inference associated with a variety of complex systems. We show how a string formalism can be used for parallel computer decomposition, message routing and sequential optimizing compilers. We extend these ideas to a general treatment of spatial assessment and distributed artificial intelligence. 34 refs., 12 figs.
Scale-Free Neural and Physiological Dynamics in Naturalistic Stimuli Processing.
Lin, Amy; Maniscalco, Brian; He, Biyu J
2016-01-01
Neural activity recorded at multiple spatiotemporal scales is dominated by arrhythmic fluctuations without a characteristic temporal periodicity. Such activity often exhibits a 1/f-type power spectrum, in which power falls off with increasing frequency following a power-law function: P(f) ∝ 1/f^β, which is indicative of scale-free dynamics. Two extensively studied forms of scale-free neural dynamics in the human brain are slow cortical potentials (SCPs)-the low-frequency (<5 Hz) component of brain field potentials-and the amplitude fluctuations of α oscillations, both of which have been shown to carry important functional roles. In addition, scale-free dynamics characterize normal human physiology such as heartbeat dynamics. However, the exact relationships among these scale-free neural and physiological dynamics remain unclear. We recorded simultaneous magnetoencephalography and electrocardiography in healthy subjects in the resting state and while performing a discrimination task on scale-free dynamical auditory stimuli that followed different scale-free statistics. We observed that long-range temporal correlation (captured by the power-law exponent β) in SCPs positively correlated with that of heartbeat dynamics across time within an individual and negatively correlated with that of α-amplitude fluctuations across individuals. In addition, across individuals, long-range temporal correlation of both SCP and α-oscillation amplitude predicted subjects' discrimination performance in the auditory task, albeit through antagonistic relationships. These findings reveal interrelations among different scale-free neural and physiological dynamics and initial evidence for the involvement of scale-free neural dynamics in the processing of natural stimuli, which often exhibit scale-free dynamics. PMID:27822495
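The power-law exponent β that quantifies such scale-free dynamics is commonly estimated by a straight-line fit in log-log coordinates; a minimal sketch on a synthetic, noise-free spectrum (real spectra are noisy and the fit range matters):

```python
import numpy as np

# Estimate beta of a 1/f^beta spectrum via a log-log linear fit.
# The synthetic spectrum is exact, so the fit recovers beta precisely.
beta_true = 1.5
f = np.arange(1, 513, dtype=float)   # frequency bins (arbitrary units)
P = f ** (-beta_true)                # P(f) ∝ 1/f^beta

slope, intercept = np.polyfit(np.log(f), np.log(P), 1)
beta_hat = -slope
```

For empirical data one would average the periodogram over windows and restrict the fit to the scale-free frequency range before extracting β.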
Turbulence via information field dynamics
NASA Astrophysics Data System (ADS)
Enßlin, Torsten A.
Turbulent flows exhibit scale-free regimes, for which information on the statistical properties of the dynamics exists for many length-scales. The simulation of turbulent systems can benefit from the inclusion of such information on sub-grid processes. How can statistical information about the flow on small scales be optimally incorporated into simulation schemes? Information field dynamics (IFD) is a novel information-theoretical framework to design schemes that exploit such statistical knowledge of sub-grid flow fluctuations.
Dynamic security contingency screening and ranking using neural networks.
Mansour, Y; Vaahedi, E; El-Sharkawi, M A
1997-01-01
This paper summarizes BC Hydro's experience in applying neural networks to dynamic security contingency screening and ranking. The idea is to use the information on the prevailing operating condition and directly provide contingency screening and ranking using a trained neural network. To train the two neural networks for the large-scale systems of BC Hydro and Hydro Quebec, a total of 1691 detailed transient stability simulations were conducted: 1158 for the BC Hydro system and 533 for the Hydro Quebec system. The simulation program was equipped with an energy margin calculation module (second kick) to measure the energy margin in each run. The first set of results showed poor performance of the neural networks in assessing dynamic security. However, a number of corrective measures improved the results significantly. These corrective measures included: 1) the effectiveness of the output; 2) the number of outputs; 3) the type of features (static versus dynamic); 4) the number of features; 5) system partitioning; and 6) the ratio of training samples to features. The final results obtained using the large-scale systems of BC Hydro and Hydro Quebec demonstrate good potential for neural networks in dynamic security assessment contingency screening and ranking.
Neural network approaches to dynamic collision-free trajectory generation.
Yang, S X; Meng, M
2001-01-01
In this paper, dynamic collision-free trajectory generation in a nonstationary environment is studied using biologically inspired neural network approaches. The proposed neural network is topologically organized, where the dynamics of each neuron is characterized by a shunting equation or an additive equation. The state space of the neural network can be either the Cartesian workspace or the joint space of multi-joint robot manipulators. There are only local lateral connections among neurons. The real-time optimal trajectory is generated through the dynamic activity landscape of the neural network without explicitly searching over the free space or the collision paths, without explicitly optimizing any global cost functions, without any prior knowledge of the dynamic environment, and without any learning procedures. Therefore, the model algorithm is computationally efficient. The stability of the neural network system is guaranteed by the existence of a Lyapunov function candidate. In addition, this model is not very sensitive to the model parameters. Several model variations are presented and the differences are discussed. As examples, the proposed models are applied to generate collision-free trajectories for a mobile robot to solve a maze-type of problem, to avoid concave U-shaped obstacles, to track a moving target and at the same time avoid varying obstacles, and to generate a trajectory for a two-link planar robot with two targets. The effectiveness and efficiency of the proposed approaches are demonstrated through simulation and comparison studies.
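The shunting dynamics mentioned above can be sketched with a Grossberg-type equation (the constants below are illustrative, not the paper's): activity is automatically bounded within [-D, B] regardless of the input strengths.

```python
# Hedged sketch of shunting neural dynamics:
#   dx/dt = -A*x + (B - x)*Se - (D + x)*Si
# The multiplicative (shunting) terms bound x within [-D, B].
A, B, D, dt = 10.0, 1.0, 1.0, 0.001   # illustrative constants

def step(x, Se, Si):
    return x + dt * (-A * x + (B - x) * Se - (D + x) * Si)

x = 0.0
for _ in range(2000):
    x = step(x, Se=5.0, Si=1.0)   # constant excitatory/inhibitory drive
```

For these constants the steady state solves 4 - 16x = 0, i.e. x = 0.25, well inside the [-D, B] bounds.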
A theory of neural dimensionality, dynamics, and measurement
NASA Astrophysics Data System (ADS)
Ganguli, Surya
In many experiments, neuroscientists tightly control behavior, record many trials, and obtain trial-averaged firing rates from hundreds of neurons in circuits containing millions of behaviorally relevant neurons. Dimensionality reduction has often shown that such datasets are strikingly simple; they can be described using a much smaller number of dimensions than the number of recorded neurons, and the resulting projections onto these dimensions yield a remarkably insightful dynamical portrait of circuit computation. This ubiquitous simplicity raises several profound and timely conceptual questions. What is the origin of this simplicity and its implications for the complexity of brain dynamics? Would neuronal datasets become more complex if we recorded more neurons? How and when can we trust dynamical portraits obtained from only hundreds of neurons in circuits containing millions of neurons? We present a theory that answers these questions, and test it using neural data recorded from reaching monkeys. Overall, this theory yields a picture of the neural measurement process as a random projection of neural dynamics, conceptual insights into how we can reliably recover dynamical portraits in such under-sampled measurement regimes, and quantitative guidelines for the design of future experiments. Moreover, it reveals the existence of phase transition boundaries in our ability to successfully decode cognition and behavior as a function of the number of recorded neurons, the complexity of the task, and the smoothness of neural dynamics.
Miconi, Thomas
2017-02-23
Neural activity during cognitive tasks exhibits complex dynamics that flexibly encode task-relevant variables. Chaotic recurrent networks, which spontaneously generate rich dynamics, have been proposed as a model of cortical computation during cognitive tasks. However, existing methods for training these networks are either biologically implausible, and/or require a continuous, real-time error signal to guide learning. Here we show that a biologically plausible learning rule can train such recurrent networks, guided solely by delayed, phasic rewards at the end of each trial. Networks endowed with this learning rule can successfully learn nontrivial tasks requiring flexible (context-dependent) associations, memory maintenance, nonlinear mixed selectivities, and coordination among multiple outputs. The resulting networks replicate complex dynamics previously observed in animal cortex, such as dynamic encoding of task features and selective integration of sensory inputs. We conclude that recurrent neural networks offer a plausible model of cortical dynamics during both learning and performance of flexible behavior.
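The flavor of learning from a delayed, end-of-trial scalar reward can be sketched with a toy node-perturbation rule on a linear readout. This is only an illustration of the reward-modulated idea, not the paper's rule, which operates on a chaotic recurrent network with eligibility traces.

```python
import numpy as np

# Toy illustration (not the paper's rule): learn a linear mapping from a
# delayed scalar reward by reinforcing weight perturbations that raised
# the reward relative to the unperturbed baseline.
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 5))        # fixed inputs
w_star = rng.normal(size=5)          # hypothetical target mapping
w = np.zeros(5)
eta, sigma = 0.05, 0.1               # learning rate, perturbation scale

def reward(weights):
    return -np.mean((X @ weights - X @ w_star) ** 2)

err0 = -reward(w)                    # initial error
for trial in range(3000):
    noise = sigma * rng.normal(size=5)
    R = reward(w + noise)            # reward arrives only at trial end
    R0 = reward(w)                   # unperturbed baseline
    w += eta * (R - R0) * noise      # reinforce helpful perturbations
err1 = -reward(w)                    # final error
```

Despite the reward being a single delayed scalar, the correlation between perturbations and reward changes is enough to drive the weights toward the target mapping.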
Neural dynamic optimization for control systems. I. Background.
Seong, C Y; Widrow, B
2001-01-01
The paper presents neural dynamic optimization (NDO) as a method of optimal feedback control for nonlinear multi-input-multi-output (MIMO) systems. The main feature of NDO is that it enables neural networks to approximate the optimal feedback solution whose existence dynamic programming (DP) justifies, thereby reducing the complexities of computation and storage problems of the classical methods such as DP. This paper mainly describes the background and motivations for the development of NDO, while the two other subsequent papers of this topic present the theory of NDO and demonstrate the method with several applications including control of autonomous vehicles and of a robot arm, respectively.
Neural dynamic optimization for control systems.II. Theory.
Seong, C Y; Widrow, B
2001-01-01
The paper presents neural dynamic optimization (NDO) as a method of optimal feedback control for nonlinear multi-input-multi-output (MIMO) systems. The main feature of NDO is that it enables neural networks to approximate the optimal feedback solution whose existence dynamic programming (DP) justifies, thereby reducing the complexities of computation and storage problems of the classical methods such as DP. This paper mainly describes the theory of NDO, while the two other companion papers of this topic explain the background for the development of NDO and demonstrate the method with several applications including control of autonomous vehicles and of a robot arm, respectively.
Neural network with dynamically adaptable neurons
NASA Technical Reports Server (NTRS)
Tawel, Raoul (Inventor)
1994-01-01
This invention is an adaptive neuron for use in neural network processors. The adaptive neuron participates in the supervised learning phase of operation on a co-equal basis with the synapse matrix elements by adaptively changing its gain, in a manner similar to the change of weights in the synapse matrix elements. In this manner, training time is decreased by as much as three orders of magnitude.
Spontaneous Neural Dynamics and Multi-scale Network Organization
Foster, Brett L.; He, Biyu J.; Honey, Christopher J.; Jerbi, Karim; Maier, Alexander; Saalmann, Yuri B.
2016-01-01
Spontaneous neural activity has historically been viewed as task-irrelevant noise that should be controlled for via experimental design, and removed through data analysis. However, electrophysiology and functional MRI studies of spontaneous activity patterns, which have greatly increased in number over the past decade, have revealed a close correspondence between these intrinsic patterns and the structural network architecture of functional brain circuits. In particular, by analyzing the large-scale covariation of spontaneous hemodynamics, researchers are able to reliably identify functional networks in the human brain. Subsequent work has sought to identify the corresponding neural signatures via electrophysiological measurements, as this would elucidate the neural origin of spontaneous hemodynamics and would reveal the temporal dynamics of these processes across slower and faster timescales. Here we survey common approaches to quantifying spontaneous neural activity, reviewing their empirical success, and their correspondence with the findings of neuroimaging. We emphasize invasive electrophysiological measurements, which are amenable to amplitude- and phase-based analyses, and which can report variations in connectivity with high spatiotemporal precision. After summarizing key findings from the human brain, we survey work in animal models that display similar multi-scale properties. We highlight that, across many spatiotemporal scales, the covariance structure of spontaneous neural activity reflects structural properties of neural networks and dynamically tracks their functional repertoire. PMID:26903823
Dynamics of some neural network models with delay.
Ruan, J; Li, L; Lin, W
2001-05-01
The dynamics of a neuronic model described by a one-dimensional delay functional differential equation are studied in this paper. We give a strict and detailed analysis of the dynamical characteristics of this model using the Lyapunov functional approach and a Hopf bifurcation proposition. Furthermore, numerical simulations, as well as Lyapunov exponents, are presented to support our conjectures about the appearance of complex dynamics such as chaos. We also investigate the dynamics of a neural network model described by an n-dimensional delay functional differential equation with a symmetric weight matrix, and corresponding simulation results are included as concrete examples.
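A one-dimensional delayed neural model of this type can be simulated with a simple Euler scheme; the specific equation and parameter values below are illustrative, not taken from the paper.

```python
import numpy as np

# Illustrative delayed neural model (not the paper's exact system):
#   dx/dt = -x(t) + w * tanh(x(t - tau))
# integrated with explicit Euler and a constant initial history.
dt, tau, w = 0.01, 1.0, 2.0
delay_steps = int(tau / dt)
n_steps = 5000

x = np.zeros(n_steps + delay_steps)
x[:delay_steps] = 0.1                  # constant history on [-tau, 0]
for t in range(delay_steps, n_steps + delay_steps - 1):
    x[t + 1] = x[t] + dt * (-x[t] + w * np.tanh(x[t - delay_steps]))
```

For these parameters the trajectory settles onto the positive fixed point x* = w·tanh(x*); stronger coupling or longer delays can instead produce the oscillatory and chaotic regimes the paper analyzes.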
Logic Dynamics for Deductive Inference -- Its Stability and Neural Basis
NASA Astrophysics Data System (ADS)
Tsuda, Ichiro
2014-12-01
We propose a dynamical model that represents a process of deductive inference. We discuss the stability of logic dynamics and a neural basis for the dynamics. We propose a new concept of descriptive stability, thereby enabling a structure of stable descriptions of mathematical models concerning dynamic phenomena to be clarified. The present theory is based on the wider and deeper thoughts of John S. Nicolis. In particular, it is based on our joint paper on the chaos theory of human short-term memories with a magic number of seven plus or minus two.
Non-Lipschitzian dynamics for neural net modelling
NASA Technical Reports Server (NTRS)
Zak, Michail
1989-01-01
Failure of the Lipschitz condition in unstable equilibrium points of dynamical systems leads to a multiple-choice response to an initial deterministic input. The evolution of such systems is characterized by a special type of unpredictability measured by unbounded Liapunov exponents. Possible relation of these systems to future neural networks is discussed.
Slow diffusive dynamics in a chaotic balanced neural network.
Shaham, Nimrod; Burak, Yoram
2017-05-01
It has been proposed that neural noise in the cortex arises from chaotic dynamics in the balanced state: in this model of cortical dynamics, the excitatory and inhibitory inputs to each neuron approximately cancel, and activity is driven by fluctuations of the synaptic inputs around their mean. It remains unclear whether neural networks in the balanced state can perform tasks that are highly sensitive to noise, such as storage of continuous parameters in working memory, while also accounting for the irregular behavior of single neurons. Here we show that continuous parameter working memory can be maintained in the balanced state, in a neural circuit with a simple network architecture. We show analytically that in the limit of an infinite network, the dynamics generated by this architecture are characterized by a continuous set of steady balanced states, allowing for the indefinite storage of a continuous parameter. In finite networks, we show that the chaotic noise drives diffusive motion along the approximate attractor, which gradually degrades the stored memory. We analyze the dynamics and show that the slow diffusive motion induces slowly decaying temporal cross correlations in the activity, which differ substantially from those previously described in the balanced state. We calculate the diffusivity, and show that it is inversely proportional to the system size. For large enough (but realistic) neural population sizes, and with suitable tuning of the network connections, the proposed balanced network can sustain continuous parameter values in memory over time scales larger by several orders of magnitude than the single neuron time scale.
Dynamic Adaptive Neural Network Arrays: A Neuromorphic Architecture
Disney, Adam; Reynolds, John
2015-01-01
Dynamic Adaptive Neural Network Array (DANNA) is a neuromorphic hardware implementation. It differs from most other neuromorphic projects in that it allows for programmability of structure, and it is trained or designed using evolutionary optimization. This paper describes the DANNA structure, how DANNA is trained using evolutionary optimization, and an application of DANNA to a very simple classification task.
3-D flame temperature field reconstruction with multiobjective neural network
NASA Astrophysics Data System (ADS)
Wan, Xiong; Gao, Yiqing; Wang, Yuanmei
2003-02-01
A novel 3-D temperature field reconstruction method is proposed in this paper, based on multiwavelength thermometry and Hopfield neural network computed tomography. A mathematical model of multiwavelength thermometry is formulated, and a neural network algorithm based on multiobjective optimization is developed. Through computer simulation and comparison with the algebraic reconstruction technique (ART) and the filtered back-projection algorithm (FBP), the reconstruction results of the new method are discussed in detail. The study shows that the new method consistently gives the best reconstruction results. Finally, the temperature distribution of a cross-section of a four-peak candle flame is reconstructed with this novel method.
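The multiwavelength thermometry underlying the method can be illustrated with classic two-colour (ratio) pyrometry under the Wien approximation, assuming equal emissivity at both wavelengths; the wavelengths and temperature below are arbitrary choices, not the paper's values.

```python
import math

# Two-colour ratio pyrometry sketch under the Wien approximation,
# with a grey-body (equal emissivity) assumption at both wavelengths.
C2 = 1.4388e-2                  # second radiation constant, m*K
lam1, lam2 = 600e-9, 700e-9     # two wavelengths in metres (illustrative)

def wien_intensity(lam, T):
    return lam ** -5 * math.exp(-C2 / (lam * T))

def ratio_temperature(I1, I2):
    lnR = math.log(I1 / I2)
    return C2 * (1 / lam1 - 1 / lam2) / (5 * math.log(lam2 / lam1) - lnR)

T_true = 1800.0                 # kelvin
I1 = wien_intensity(lam1, T_true)
I2 = wien_intensity(lam2, T_true)
T_est = ratio_temperature(I1, I2)
```

Using more than two wavelengths over-determines the temperature, which is what makes the multiobjective-optimization formulation in the paper attractive for noisy flame measurements.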
Turbulence via information field dynamics
NASA Astrophysics Data System (ADS)
Ensslin, Torsten A.
2015-08-01
Turbulent flows exhibit scale-free regimes, for which information on the statistical properties of the dynamics exists for many length-scales. The simulation of turbulent systems can benefit from the inclusion of such information on sub-grid processes. How can statistical information about the flow on small scales be optimally incorporated into simulation schemes? Information field dynamics (IFD) is a novel information-theoretical framework to design schemes that exploit such statistical knowledge of sub-grid flow fluctuations. In this talk, I will introduce the basic idea of IFD, present its first toy applications, and discuss the next steps towards its usage in complex turbulence simulations.
Dynamics of a neural system with a multiscale architecture
Breakspear, Michael; Stam, Cornelis J
2005-01-01
The architecture of the brain is characterized by a modular organization repeated across a hierarchy of spatial scales—neurons, minicolumns, cortical columns, functional brain regions, and so on. It is important to consider that the processes governing neural dynamics at any given scale are not only determined by the behaviour of other neural structures at that scale, but also by the emergent behaviour of smaller scales, and the constraining influence of activity at larger scales. In this paper, we introduce a theoretical framework for neural systems in which the dynamics are nested within a multiscale architecture. In essence, the dynamics at each scale are determined by a coupled ensemble of nonlinear oscillators, which embody the principal scale-specific neurobiological processes. The dynamics at larger scales are ‘slaved’ to the emergent behaviour of smaller scales through a coupling function that depends on a multiscale wavelet decomposition. The approach is first explicated mathematically. Numerical examples are then given to illustrate phenomena such as between-scale bifurcations, and how synchronization in small-scale structures influences the dynamics in larger structures in an intuitive manner that cannot be captured by existing modelling approaches. A framework for relating the dynamical behaviour of the system to measured observables is presented and further extensions to capture wave phenomena and mode coupling are suggested. PMID:16087448
Visual field interpretation with a personal computer based neural network.
Mutlukan, E; Keating, D
1994-01-01
The Computer Assisted Touch Screen (CATS) and Computer Assisted Moving Eye Campimeter (CAMEC) are personal computer (PC)-based video-campimeters which employ multiple and single static stimuli, respectively, on a cathode ray tube. Clinical studies show that CATS and CAMEC provide comparable results to more expensive conventional visual field test devices. A neural network has been designed to classify visual field data from PC-based video-campimeters to facilitate diagnostic interpretation of visual field test results by non-experts. A three-layer back-propagation network was designed, with 110 units in the input layer (each unit corresponding to a test point on the visual field test grid), a hidden layer of 40 processing units, and an output layer of 27 units (each one corresponding to a particular type of visual field pattern). The network was trained on a training set of 540 simulated visual field test result patterns, including normal, glaucomatous and neuro-ophthalmic defects, for up to 20,000 cycles. The classification accuracy of the network was initially measured with a previously unseen test set of 135 simulated fields and further tested with a genuine test result set of 100 neurological and 200 glaucomatous fields. A classification accuracy of 91-97% with simulated field results and 65-100% with genuine field results was achieved. This suggests that neural networks incorporated into PC-based video-campimeters may enable correct interpretation of results in non-specialist clinics or in the community.
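The described 110-40-27 topology can be sketched as a plain forward pass. The weights below are random and untrained, and the sigmoid nonlinearity is an assumption: the abstract only specifies a three-layer back-propagation network.

```python
import numpy as np

# Sketch of the 110-40-27 architecture as a forward pass with random,
# untrained weights (sigmoid units assumed, not stated in the abstract).
rng = np.random.default_rng(1)
W1 = rng.normal(scale=0.1, size=(40, 110))   # input -> hidden
W2 = rng.normal(scale=0.1, size=(27, 40))    # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def classify(field):             # field: 110 test-point sensitivities
    hidden = sigmoid(W1 @ field)
    return sigmoid(W2 @ hidden)  # one score per field-defect class

scores = classify(rng.uniform(size=110))
```

In use, the index of the largest of the 27 output scores would name the predicted visual field pattern for a given test grid.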
Can Neural Activity Propagate by Endogenous Electrical Field?
Qiu, Chen; Shivacharan, Rajat S.; Zhang, Mingming
2015-01-01
It is widely accepted that synaptic transmissions and gap junctions are the major governing mechanisms for signal traveling in the neural system. Yet, a group of neural waves, either physiological or pathological, share the same speed of ∼0.1 m/s without synaptic transmission or gap junctions, and this speed is not consistent with axonal conduction or ionic diffusion. The only explanation left is an electrical field effect. We tested the hypothesis that endogenous electric fields are sufficient to explain the propagation with in silico and in vitro experiments. Simulation results show that field effects alone can indeed mediate propagation across layers of neurons with speeds of 0.12 ± 0.09 m/s with pathological kinetics, and 0.11 ± 0.03 m/s with physiologic kinetics, both generating weak field amplitudes of ∼2–6 mV/mm. Further, the model predicted that propagation speed values are inversely proportional to the cell-to-cell distances, but do not significantly change with extracellular resistivity, membrane capacitance, or membrane resistance. In vitro recordings in mouse hippocampi produced similar speeds (0.10 ± 0.03 m/s) and field amplitudes (2.5–5 mV/mm), and by applying a blocking field, the propagation speed was greatly reduced. Finally, osmolarity experiments confirmed the model's prediction that cell-to-cell distance inversely affects propagation speed. Together, these results show that despite their weak amplitude, electric fields can be solely responsible for spike propagation at ∼0.1 m/s. This phenomenon could be important to explain the slow propagation of epileptic activity and other normal propagations at similar speeds. SIGNIFICANCE STATEMENT Neural activity (waves or spikes) can propagate using well documented mechanisms such as synaptic transmission, gap junctions, or diffusion. However, the purpose of this paper is to provide an explanation for experimental data showing that neural signals can propagate by means other than synaptic transmission.
Neural representation of muscle dynamics in voluntary movement control.
Hasson, Christopher J
2014-07-01
Several theories of motor control posit that the nervous system has access to a neural representation of muscle dynamics. Yet, this has not been tested experimentally. Should such a representation exist, it was hypothesized that subjects who learned to control a virtual limb using virtual muscles would improve performance faster and show greater generalization than those who learned with a less dynamically complex virtual force generator. Healthy adults practiced using their biceps brachii activity to move a myoelectrically controlled virtual limb from rest to a standard target position with maximum speed and accuracy. Throughout practice, generalization was assessed with untrained target trials and sensitivity to actuator dynamics was probed by unexpected actuator model switches. In a muscle model subject group (n = 10), the biceps electromyographic signal activated a virtual muscle that pulled on the virtual limb with a force governed by muscle dynamics, defined by a nonlinear force-length-velocity relation and series elastic stiffness. A force generator group (n = 10) performed the same task, but the actuation force was a linear function of the biceps activation signal. Both groups made significant errors with unexpected actuator dynamics switches, supporting task sensitivity to actuator dynamics. The muscle model group improved performance as fast as the force generator group and showed greater generalization in early practice, despite using an actuator with more complex dynamics. These results are consistent with a preexisting neural representation of muscle dynamics, which may have offset any learning challenges associated with the more dynamically complex virtual muscle model.
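The force-length-velocity relation that defines the virtual muscle can be sketched with generic Hill-type curves; the functional forms and constants below are illustrative, not those used in the study.

```python
import math

# Generic Hill-type muscle sketch (illustrative forms and constants):
# active force = activation * f_max * force_length(l) * force_velocity(v)
def force_length(l):
    # Gaussian around the optimal (normalized) fiber length l0 = 1
    return math.exp(-((l - 1.0) / 0.45) ** 2)

def force_velocity(v, vmax=10.0):
    # hyperbolic Hill curve for shortening (v >= 0), linearized-style
    # eccentric branch for lengthening (v < 0); continuous at v = 0
    if v >= 0:
        return (vmax - v) / (vmax + 3.0 * v)
    return 1.3 - 0.3 * (vmax + v) / (vmax - 7.56 * v)

def muscle_force(a, l, v, f_max=1.0):
    return a * f_max * force_length(l) * force_velocity(v)

F_iso = muscle_force(a=1.0, l=1.0, v=0.0)     # isometric, optimal length
F_short = muscle_force(a=1.0, l=1.0, v=5.0)   # shortening reduces force
```

The nonlinear drop in force with shortening velocity is exactly the kind of actuator complexity that distinguished the virtual muscle from the linear force generator in the experiment.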
Naudé, Jérémie; Cessac, Bruno; Berry, Hugues; Delord, Bruno
2013-09-18
Homeostatic intrinsic plasticity (HIP) is a ubiquitous cellular mechanism regulating neuronal activity, cardinal for the proper functioning of nervous systems. In invertebrates, HIP is critical for orchestrating stereotyped activity patterns. The functional impact of HIP remains more obscure in vertebrate networks, where higher order cognitive processes rely on complex neural dynamics. The hypothesis has emerged that HIP might control the complexity of activity dynamics in recurrent networks, with important computational consequences. However, conflicting results about the causal relationships between cellular HIP, network dynamics, and computational performance have arisen from machine-learning studies. Here, we assess how cellular HIP effects translate into collective dynamics and computational properties in biological recurrent networks. We develop a realistic multiscale model including a generic HIP rule regulating the neuronal threshold with actual molecular signaling pathway kinetics, Dale's principle, sparse connectivity, synaptic balance, and Hebbian synaptic plasticity (SP). Dynamic mean-field analysis and simulations reveal that HIP sets a working point at which inputs are transduced by large derivative ranges of the transfer function. This cellular mechanism ensures increased network dynamics complexity, robust balance with SP at the edge of chaos, and improved input separability. Although critically dependent upon balanced excitatory and inhibitory drives, these effects display striking robustness to changes in network architecture, learning rates, and input features. Thus, the mechanism we unveil might represent a ubiquitous cellular basis for complex dynamics in neural networks. Understanding this robustness is an important challenge to unraveling principles underlying self-organization around criticality in biological recurrent neural networks.
Framing effects: behavioral dynamics and neural basis.
Zheng, Hongming; Wang, X T; Zhu, Liqi
2010-09-01
This study examined the neural basis of framing effects using life-death decision problems framed either positively in terms of lives saved or negatively in terms of lives lost, in large-group and small-group contexts. Using functional MRI we found differential brain activations to the verbal and social cues embedded in the choice problems. In large-group contexts, framing effects were significant: participants were more risk seeking under the negative (loss) framing than under the positive (gain) framing. This behavioral difference in risk preference was mainly regulated by activation in the right inferior frontal gyrus, including the homologue of Broca's area. In contrast, framing effects diminished in small-group contexts, while the insula and parietal lobe in the right hemisphere were distinctively activated, suggesting an important role of emotion in switching choice preference from an indecisive mode to a more consistent risk-taking inclination, governed by a kith-and-kin decision rationality.
Dynamic behaviors of the non-neural ectoderm during mammalian cranial neural tube closure.
Ray, Heather J; Niswander, Lee A
2016-08-15
The embryonic brain and spinal cord initially form through the process of neural tube closure (NTC). NTC is thought to be highly similar between rodents and humans, and studies of mouse genetic mutants have greatly increased our understanding of the molecular basis of NTC with relevance for human neural tube defects. In addition, studies using amphibian and chick embryos have shed light into the cellular and tissue dynamics underlying NTC. However, the dynamics of mammalian NTC has been difficult to study due to in utero development until recently when advances in mouse embryo ex vivo culture techniques along with confocal microscopy have allowed for imaging of mouse NTC in real time. Here, we have performed live imaging of mouse embryos with a particular focus on the non-neural ectoderm (NNE). Previous studies in multiple model systems have found that the NNE is important for proper NTC, but little is known about the behavior of these cells during mammalian NTC. Here we utilized a NNE-specific genetic labeling system to assess NNE dynamics during murine NTC and identified different NNE cell behaviors as the cranial region undergoes NTC. These results bring valuable new insight into regional differences in cellular behavior during NTC that may be driven by different molecular regulators and which may underlie the various positional disruptions of NTC observed in humans with neural tube defects.
Dynamics of gauge field inflation
Alexander, Stephon; Jyoti, Dhrubo; Kosowsky, Arthur; Marcianò, Antonino
2015-05-05
We analyze the existence and stability of dynamical attractor solutions for cosmological inflation driven by the coupling between fermions and a gauge field. Assuming a spatially homogeneous and isotropic gauge field and fermion current, the interacting fermion equation of motion reduces to that of a free fermion up to a phase shift. Consistency of the model is ensured via the Stückelberg mechanism. We prove the existence of exactly one stable solution, and demonstrate the stability numerically. Inflation arises without fine tuning, and does not require postulating any effective potential or non-standard coupling.
A solution to neural field equations by a recurrent neural network method
NASA Astrophysics Data System (ADS)
Alharbi, Abir
2012-09-01
Neural field equations (NFE) model the activity of neurons in the brain; they are derived starting from the single-neuron 'integrate-and-fire' model. The neural continuum is spatially discretized for numerical studies, and the governing equations are modeled as a system of ordinary differential equations. In this article the recurrent neural network approach is used to solve this system of ODEs. The technique combines the standard numerical method of finite differences with the Hopfield neural network. The architecture of the net, the energy function, the updating equations, and the algorithms are developed for the NFE model. A Hopfield neural network is then designed to minimize the energy function modeling the NFE. Results obtained from the Hopfield-finite-differences net show excellent performance in terms of accuracy and speed. The parallel nature of the Hopfield approach may make it easier to implement on fast parallel computers, giving it a speed advantage over traditional methods.
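To make the discretization step concrete, here is a minimal sketch (not the paper's Hopfield formulation; the Mexican-hat kernel, sigmoid gain, and grid parameters are invented for illustration) of reducing a one-dimensional Amari-type neural field to a system of ODEs and integrating it with explicit Euler stepping:

```python
import numpy as np

# 1D Amari-type neural field du/dt = -u + W f(u), spatially discretized.
# All parameters (kernel shape, gain, grid) are illustrative choices.
N, L = 128, 10.0
x = np.linspace(-L, L, N)
dx = x[1] - x[0]

def kernel(d):
    # Mexican-hat connectivity: local excitation, broader inhibition
    return 1.5 * np.exp(-d**2) - 0.5 * np.exp(-d**2 / 4.0)

W = kernel(np.subtract.outer(x, x)) * dx   # discretized convolution operator

def f(u):
    return 1.0 / (1.0 + np.exp(-5.0 * (u - 0.5)))   # sigmoidal firing rate

u = np.exp(-x**2)                 # localized initial bump
dt = 0.01
for _ in range(2000):             # explicit Euler integration of the ODE system
    u = u + dt * (-u + W @ f(u))

print(u.shape)  # (128,)
```

The right-hand side -u + W f(u) is the ODE system the abstract refers to; a Hopfield-style scheme would minimize an energy function built from the same residual rather than stepping it forward in time.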
Ma, Ying; Shaik, Mohammed A.; Kozberg, Mariel G.; Thibodeaux, David N.; Zhao, Hanzhi T.; Yu, Hang
2016-01-01
Although modern techniques such as two-photon microscopy can now provide cellular-level three-dimensional imaging of the intact living brain, the speed and fields of view of these techniques remain limited. Conversely, two-dimensional wide-field optical mapping (WFOM), a simpler technique that uses a camera to observe large areas of the exposed cortex under visible light, can detect changes in both neural activity and haemodynamics at very high speeds. Although WFOM may not provide single-neuron or capillary-level resolution, it is an attractive and accessible approach to imaging large areas of the brain in awake, behaving mammals at speeds fast enough to observe widespread neural firing events, as well as their dynamic coupling to haemodynamics. Although such wide-field optical imaging techniques have a long history, the advent of genetically encoded fluorophores that can report neural activity with high sensitivity, along with modern technologies such as light-emitting diodes and sensitive, high-speed digital cameras, has driven renewed interest in WFOM. To facilitate the wider adoption and standardization of WFOM approaches for neuroscience and neurovascular coupling research, we provide here an overview of the basic principles of WFOM, considerations for implementation of wide-field fluorescence imaging of neural activity, spectroscopic analysis and interpretation of results. This article is part of the themed issue ‘Interpreting BOLD: a dialogue between cognitive and cellular neuroscience’. PMID:27574312
Dynamic Pricing in Electronic Commerce Using Neural Network
NASA Astrophysics Data System (ADS)
Ghose, Tapu Kumar; Tran, Thomas T.
In this paper, we propose an approach in which a feed-forward neural network is used to dynamically calculate a competitive price for a product in order to maximize sellers' revenue. The approach considers that, along with product price, other attributes such as product quality, delivery time, after-sales service and seller's reputation contribute to consumers' purchase decisions. We show that once sellers, using their limited prior knowledge, set an initial price for a product, our model adjusts the price automatically with the help of the neural network so that sellers' revenue is maximized.
Fluctuation-Driven Neural Dynamics Reproduce Drosophila Locomotor Patterns
Cruchet, Steeve; Gustafson, Kyle; Benton, Richard; Floreano, Dario
2015-01-01
The neural mechanisms determining the timing of even simple actions, such as when to walk or rest, are largely mysterious. One intriguing, but untested, hypothesis posits a role for ongoing activity fluctuations in neurons of central action selection circuits that drive animal behavior from moment to moment. To examine how fluctuating activity can contribute to action timing, we paired high-resolution measurements of freely walking Drosophila melanogaster with data-driven neural network modeling and dynamical systems analysis. We generated fluctuation-driven network models whose outputs—locomotor bouts—matched those measured from sensory-deprived Drosophila. From these models, we identified those that could also reproduce a second, unrelated dataset: the complex time-course of odor-evoked walking for genetically diverse Drosophila strains. Dynamical models that best reproduced both Drosophila basal and odor-evoked locomotor patterns exhibited specific characteristics. First, ongoing fluctuations were required. In a stochastic resonance-like manner, these fluctuations allowed neural activity to escape stable equilibria and to exceed a threshold for locomotion. Second, odor-induced shifts of equilibria in these models caused a depression in locomotor frequency following olfactory stimulation. Our models predict that activity fluctuations in action selection circuits cause behavioral output to more closely match sensory drive and may therefore enhance navigation in complex sensory environments. Together these data reveal how simple neural dynamics, when coupled with activity fluctuations, can give rise to complex patterns of animal behavior. PMID:26600381
Nonlinear dynamical system approaches towards neural prosthesis
Torikai, Hiroyuki; Hashimoto, Sho
2011-04-19
An asynchronous discrete-state spiking neuron is a wired system of shift registers that can mimic the nonlinear dynamics of an ODE-based neuron model. The control parameter of the neuron is the wiring pattern among the registers, which makes such neurons suitable for on-chip learning. In this paper an asynchronous discrete-state spiking neuron is introduced and its typical nonlinear phenomena are demonstrated. A learning algorithm for a set of neurons is also presented, and it is demonstrated that the algorithm enables the set of neurons to reconstruct the nonlinear dynamics of another set of neurons with unknown parameter values. The learning function is validated by FPGA experiments.
Approximate Inference for Time-Varying Interactions and Macroscopic Dynamics of Neural Populations
Obermayer, Klaus
2017-01-01
Models from statistical physics, such as the Ising model, offer a convenient way to characterize the stationary activity of neural populations. Such stationary activity may be expected for recordings from in vitro slices or anesthetized animals. However, modeling the activity of cortical circuits in awake animals has been more challenging because both spike rates and interactions can change with sensory stimulation, behavior, or the internal state of the brain. Previous approaches to modeling the dynamics of neural interactions suffer from high computational cost; their application was therefore limited to only a dozen neurons. Here, by introducing multiple analytic approximation methods to a state-space model of neural population activity, we make it possible to estimate dynamic pairwise interactions of up to 60 neurons. More specifically, we applied the pseudolikelihood approximation to the state-space model, and combined it with the Bethe or TAP mean-field approximation to make sequential Bayesian estimation of the model parameters possible. The large-scale analysis allows us to investigate the dynamics of macroscopic properties of neural circuits underlying stimulus processing and behavior. We show that the model accurately estimates the dynamics of network properties such as sparseness, entropy, and heat capacity on simulated data, and demonstrate the utility of these measures by analyzing the activity of monkey V4 neurons as well as a simulated balanced network of spiking neurons. PMID:28095421
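As a rough illustration of the pseudolikelihood idea mentioned in this abstract (the couplings, fields, and the single observed pattern below are random stand-ins, not fitted or recorded data), each neuron's binary state is modeled conditionally on all the others, which sidesteps the intractable Ising partition function:

```python
import numpy as np

# Pseudolikelihood for an Ising-type model of binary neural activity:
# replace the joint likelihood (intractable normalizer) with the product of
# per-neuron conditionals P(s_i | s_{-i}), each a simple logistic term.
rng = np.random.default_rng(2)
n = 8
J = rng.normal(0, 0.3, (n, n))
J = (J + J.T) / 2                 # symmetric couplings
np.fill_diagonal(J, 0)            # no self-coupling
h = rng.normal(0, 0.1, n)         # external fields
s = rng.choice([-1.0, 1.0], n)    # one observed +/-1 activity pattern

def log_pseudolikelihood(s, h, J):
    field = h + J @ s                           # conditional field on each unit
    # log P(s_i | s_{-i}) = s_i * field_i - log(2 cosh(field_i))
    return float(np.sum(s * field - np.log(2 * np.cosh(field))))

lpl = log_pseudolikelihood(s, h, J)
```

In an estimation setting one would maximize this quantity over h and J across many observed patterns; here a single evaluation just shows the structure of the approximation.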
Dynamic neural architecture for social knowledge retrieval.
Wang, Yin; Collins, Jessica A; Koski, Jessica; Nugiel, Tehila; Metoki, Athanasia; Olson, Ingrid R
2017-03-13
Social behavior is often shaped by the rich storehouse of biographical information that we hold for other people. In our daily life, we rapidly and flexibly retrieve a host of biographical details about individuals in our social network, which often guide our decisions as we navigate complex social interactions. Even abstract traits associated with an individual, such as their political affiliation, can cue a rich cascade of person-specific knowledge. Here, we asked whether the anterior temporal lobe (ATL) serves as a hub for a distributed neural circuit that represents person knowledge. Fifty participants across two studies learned biographical information about fictitious people in a two-day training paradigm. On day 3, they retrieved this biographical information while undergoing an fMRI scan. A series of multivariate and connectivity analyses suggest that the ATL stores abstract person identity representations. Moreover, this region coordinates interactions with a distributed network to support the flexible retrieval of person attributes. Together, our results suggest that the ATL is a central hub for representing and retrieving person knowledge.
On neural networks in identification and control of dynamic systems
NASA Technical Reports Server (NTRS)
Phan, Minh; Juang, Jer-Nan; Hyland, David C.
1993-01-01
This paper presents a discussion of the applicability of neural networks to the identification and control of dynamic systems. Emphasis is placed on understanding how neural networks handle linear systems and how the new approach relates to conventional system identification and control methods. Extensions of the approach to nonlinear systems are then made. The paper explains the fundamental concepts of neural networks in their simplest terms. Among the topics discussed are feedforward and recurrent networks in relation to the standard state-space and observer models, linear and nonlinear auto-regressive models, linear predictors, one-step-ahead control, and model reference adaptive control for linear and nonlinear systems. Numerical examples are presented to illustrate the application of these important concepts.
Dynamical analysis of uncertain neural networks with multiple time delays
NASA Astrophysics Data System (ADS)
Arik, Sabri
2016-02-01
This paper investigates the robust stability problem for dynamical neural networks in the presence of time delays and norm-bounded parameter uncertainties, with respect to the class of non-decreasing, non-linear activation functions. By employing the Lyapunov stability and homeomorphism mapping theorems together, a new delay-independent sufficient condition is obtained for the existence, uniqueness and global asymptotic stability of the equilibrium point of the delayed uncertain neural networks. The condition obtained for robust stability establishes a matrix-norm relationship between the network parameters of the neural system, which can be easily verified by using properties of the class of positive definite matrices. Constructive numerical examples are presented to show the applicability of the obtained result and its advantages over previously published results in the literature.
Dynamic regulation of mRNA decay during neural development.
Burow, Dana A; Umeh-Garcia, Maxine C; True, Marie B; Bakhaj, Crystal D; Ardell, David H; Cleary, Michael D
2015-04-21
Gene expression patterns are determined by rates of mRNA transcription and decay. While transcription is known to regulate many developmental processes, the role of mRNA decay is less extensively defined. A critical step toward defining the role of mRNA decay in neural development is to measure genome-wide mRNA decay rates in neural tissue. Such information should reveal the degree to which mRNA decay contributes to differential gene expression and provide a foundation for identifying regulatory mechanisms that affect neural mRNA decay. We developed a technique that allows genome-wide mRNA decay measurements in intact Drosophila embryos, across all tissues and specifically in the nervous system. Our approach revealed neural-specific decay kinetics, including stabilization of transcripts encoding regulators of axonogenesis and destabilization of transcripts encoding ribosomal proteins and histones. We also identified correlations between mRNA stability and physiologic properties of mRNAs; mRNAs that are predicted to be translated within axon growth cones or dendrites have long half-lives, while mRNAs encoding transcription factors that regulate neurogenesis have short half-lives. A search for candidate cis-regulatory elements identified enrichment of the Pumilio recognition element (PRE) in mRNAs encoding regulators of neurogenesis. We found that decreased expression of the RNA-binding protein Pumilio stabilized predicted neural mRNA targets and that a PRE is necessary to trigger reporter-transcript decay in the nervous system. We found that differential mRNA decay contributes to the relative abundance of transcripts involved in cell-fate decisions, axonogenesis, and other critical events during Drosophila neural development. Neural-specific decay kinetics and the functional specificity of mRNA decay suggest the existence of a dynamic neurodevelopmental mRNA decay network. We found that Pumilio is one component of this network, revealing a novel function for this RNA-binding protein.
Transient dynamics for sequence processing neural networks
NASA Astrophysics Data System (ADS)
Kawamura, Masaki; Okada, Masato
2002-01-01
An exact solution of the transient dynamics for a sequential associative memory model is discussed through both the path-integral method and the statistical neurodynamics. Although the path-integral method has the ability to give an exact solution of the transient dynamics, only stationary properties have been discussed for the sequential associative memory. We have succeeded in deriving an exact macroscopic description of the transient dynamics by analysing the correlation of crosstalk noise. Surprisingly, the order parameter equations of this exact solution are completely equivalent to those of the statistical neurodynamics, which is an approximation theory that assumes crosstalk noise to obey the Gaussian distribution. In order to examine our theoretical findings, we numerically obtain cumulants of the crosstalk noise. We verify that the third- and fourth-order cumulants are equal to zero, and that the crosstalk noise is normally distributed even in the non-retrieval case. We show that the results obtained by our theory agree with those obtained by computer simulations. We have also found that the macroscopic unstable state completely coincides with the separatrix.
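A minimal sketch of the kind of sequence-processing associative memory analysed in this abstract (network size, pattern count, and the synchronous update are illustrative choices, not the paper's setup): p random patterns are stored with the asymmetric Hebb rule J = (1/N) Σ_μ ξ^{μ+1}(ξ^μ)ᵀ so that the network retrieves them in order:

```python
import numpy as np

# Sequential associative memory: asymmetric Hebbian storage makes each
# stored pattern map onto the next one under synchronous sign-dynamics.
rng = np.random.default_rng(1)
N, p = 500, 5
xi = rng.choice([-1.0, 1.0], size=(p, N))          # stored +/-1 patterns
J = sum(np.outer(xi[(mu + 1) % p], xi[mu]) for mu in range(p)) / N

s = xi[0].copy()                                   # start on pattern 1
overlaps = []
for _ in range(p):
    s = np.sign(J @ s)                             # synchronous update
    overlaps.append(float(s @ xi[1] / N))          # overlap with pattern 2
```

After the first update the state aligns almost perfectly with the next pattern in the sequence; the crosstalk noise whose correlations the paper analyses is the small deviation from overlap 1 that grows as p/N increases.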
Topological defects control collective dynamics in neural progenitor cell cultures
NASA Astrophysics Data System (ADS)
Kawaguchi, Kyogo; Kageyama, Ryoichiro; Sano, Masaki
2017-04-01
Cultured stem cells have become a standard platform not only for regenerative medicine and developmental biology but also for biophysical studies. Yet, the characterization of cultured stem cells at the level of morphology and of the macroscopic patterns resulting from cell-to-cell interactions remains largely qualitative. Here we report on the collective dynamics of cultured murine neural progenitor cells (NPCs), which are multipotent stem cells that give rise to cells in the central nervous system. At low densities, NPCs moved randomly in an amoeba-like fashion. However, NPCs at high density elongated and aligned their shapes with one another, gliding at relatively high velocities. Although the direction of motion of individual cells reversed stochastically along the axes of alignment, the cells were capable of forming an aligned pattern up to length scales similar to that of the migratory stream observed in the adult brain. The two-dimensional order of alignment within the culture showed a liquid-crystalline pattern containing interspersed topological defects with winding numbers of +1/2 and -1/2 (half-integer due to the nematic feature that arises from the head-tail symmetry of cell-to-cell interaction). We identified rapid cell accumulation at +1/2 defects and the formation of three-dimensional mounds. Imaging at the single-cell level around the defects allowed us to quantify the velocity field and the evolving cell density; cells not only concentrate at +1/2 defects, but also escape from -1/2 defects. We propose a generic mechanism for the instability in cell density around the defects that arises from the interplay between the anisotropic friction and the active force field.
Traveling bumps and their collisions in a two-dimensional neural field.
Lu, Yao; Sato, Yuzuru; Amari, Shun-Ichi
2011-05-01
A neural field is a continuous version of a neural network model accounting for dynamical pattern forming from populational firing activities in neural tissues. These patterns include standing bumps, moving bumps, traveling waves, target waves, breathers, and spiral waves, many of them observed in various brain areas. They can be categorized into two types: a wave-like activity spreading over the field and a particle-like localized activity. We show through numerical experiments that localized traveling excitation patterns (traveling bumps), which behave like particles, exist in a two-dimensional neural field with excitation and inhibition mechanisms. The traveling bumps do not require any geometric restriction (boundary) to prevent them from propagating away, a fact that might shed light on how neurons in the brain are functionally organized. Collisions of traveling bumps exhibit rich phenomena; they might reveal the manner of information processing in the cortex and be useful in various applications. The trajectories of traveling bumps can be controlled by external inputs.
A mean field neural network for hierarchical module placement
NASA Technical Reports Server (NTRS)
Unaltuna, M. Kemal; Pitchumani, Vijay
1992-01-01
This paper proposes a mean field neural network for the two-dimensional module placement problem. An efficient coding scheme with only O(N log N) neurons is employed where N is the number of modules. The neurons are evolved in groups of N in log N iteration steps such that the circuit is recursively partitioned in alternating vertical and horizontal directions. In our simulations, the network was able to find optimal solutions to all test problems with up to 128 modules.
Simulation of dynamic processes with adaptive neural networks.
Tzanos, C. P.
1998-02-03
Many industrial processes are highly non-linear and complex. Their simulation with first-principle or conventional input-output correlation models is not satisfactory, either because the process physics is not well understood, or because it is so complex that direct simulation is not adequately accurate or requires excessive computation time, especially for on-line applications. Artificial intelligence techniques (neural networks, expert systems, fuzzy logic), or their combination with simple process-physics models, can be used effectively to simulate such processes. Feedforward (static) neural networks (FNNs) can be used effectively to model steady-state processes. They have also been used to model dynamic (time-varying) processes by adding to the network input layer nodes that represent values of the input variables at previous time steps. The number of previous time steps is problem dependent and, in general, can be determined only after extensive testing. This work demonstrates that for dynamic processes that do not vary fast with respect to the retraining time of the neural network, an adaptive feedforward neural network can be an effective simulator that is free of the complexities introduced by the use of input values at previous time steps.
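The time-lagged input construction described in this abstract can be sketched as follows (the lag depth and the toy first-order process are invented for illustration; no network is trained here, only the input matrix a static FNN would consume is built):

```python
import numpy as np

# To let a static feedforward net model a dynamic process, each training
# input is the current input value plus its values at k previous time steps.
def lagged_inputs(u, k):
    """Rows: [u(t), u(t-1), ..., u(t-k)] for each usable t."""
    return np.column_stack([u[k - j : len(u) - j] for j in range(k + 1)])

# Toy dynamic process: first-order lag y(t) = 0.9*y(t-1) + 0.1*u(t)
rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, 200)
y = np.zeros_like(u)
for t in range(1, len(u)):
    y[t] = 0.9 * y[t - 1] + 0.1 * u[t]

k = 3
X = lagged_inputs(u, k)       # network inputs, shape (len(u) - k, k + 1)
targets = y[k:]               # corresponding outputs to fit
print(X.shape)                # (197, 4)
```

The adaptive alternative the paper advocates would instead feed only the current input and periodically retrain the network, avoiding the problem-dependent choice of k.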
Protein Secondary Structure Prediction Using Deep Convolutional Neural Fields
NASA Astrophysics Data System (ADS)
Wang, Sheng; Peng, Jian; Ma, Jianzhu; Xu, Jinbo
2016-01-01
Protein secondary structure (SS) prediction is important for studying protein structure and function. When only sequence (profile) information is used as the input feature, the best current predictors obtain ~80% Q3 accuracy, a figure that has not improved in the past decade. Here we present DeepCNF (Deep Convolutional Neural Fields) for protein SS prediction. DeepCNF is a deep-learning extension of Conditional Neural Fields (CNF), which is an integration of Conditional Random Fields (CRF) and shallow neural networks. DeepCNF can model not only the complex sequence-structure relationship through a deep hierarchical architecture, but also the interdependency between adjacent SS labels, so it is much more powerful than CNF. Experimental results show that DeepCNF obtains ~84% Q3 accuracy, ~85% SOV score, and ~72% Q8 accuracy on the CASP and CAMEO test proteins, greatly outperforming currently popular predictors. As a general framework, DeepCNF can be used to predict other protein structure properties such as contact number, disordered regions, and solvent accessibility.
Response of traveling waves to transient inputs in neural fields
NASA Astrophysics Data System (ADS)
Kilpatrick, Zachary P.; Ermentrout, Bard
2012-02-01
We analyze the effects of transient stimulation on traveling waves in neural field equations. Neural fields are modeled as integro-differential equations whose convolution term represents the synaptic connections of a spatially extended neuronal network. The adjoint of the linearized wave equation can be used to identify how a particular input will shift the location of a traveling wave. This wave response function is analogous to the phase response curve of limit cycle oscillators. For traveling fronts in an excitatory network, the sign of the shift depends solely on the sign of the transient input. A complementary estimate of the effective shift is derived using an equation for the time-dependent speed of the perturbed front. Traveling pulses are analyzed in an asymmetric lateral inhibitory network and they can be advanced or delayed, depending on the position of spatially localized transient inputs. We also develop bounds on the amplitude of transient input necessary to terminate traveling pulses, based on the global bifurcation structure of the neural field.
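A minimal numerical sketch of the front-shift result for excitatory networks described above (the exponential kernel, Heaviside threshold, and pulse parameters are invented; the paper's adjoint-based wave response function is not computed here, only the sign of the shift is illustrated):

```python
import numpy as np

# Traveling front in a purely excitatory 1D neural field, plus the shift
# induced by a brief spatially uniform positive input pulse.
N, L, dt = 400, 40.0, 0.01
x = np.linspace(0, L, N)
dx = x[1] - x[0]
# Normalized excitatory kernel (1/(2*sigma)) * exp(-|x|/sigma), sigma = 2
W = np.exp(-np.abs(np.subtract.outer(x, x)) / 2.0) / 4.0 * dx

f = lambda u: (u > 0.3).astype(float)        # Heaviside firing rate, theta=0.3

def front_position(u):
    return x[np.argmin(np.abs(u - 0.3))]     # threshold crossing of the front

def run(pulse):
    u = np.where(x < 5.0, 1.0, 0.0)          # front initial condition
    for step in range(1500):
        I = pulse if 500 <= step < 600 else 0.0   # transient input window
        u = u + dt * (-u + W @ f(u) + I)
    return front_position(u)

# Positive transient input should advance the front (shift > 0), matching
# the sign result quoted in the abstract.
shift = run(pulse=0.2) - run(pulse=0.0)
```

Because the threshold (0.3) is below half the kernel mass, the front invades the quiescent state on its own; the uniform pulse transiently pushes sub-threshold tissue ahead of the front over threshold, advancing the wave.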
Neural dynamic optimization for control systems.III. Applications.
Seong, C Y; Widrow, B
2001-01-01
For Pt. II, see ibid., pp. 490-501. This paper presents neural dynamic optimization (NDO) as a method of optimal feedback control for nonlinear multi-input-multi-output (MIMO) systems. The main feature of NDO is that it enables neural networks to approximate the optimal feedback solution whose existence dynamic programming (DP) justifies, thereby reducing the computation and storage complexities of classical methods such as DP. This paper demonstrates NDO with several applications, including control of autonomous vehicles and of a robot arm, while the two companion papers on this topic describe the background for the development of NDO and present the theory of the method, respectively.
Simulating dynamic plastic continuous neural networks by finite elements.
Joghataie, Abdolreza; Torghabehi, Omid Oliyan
2014-08-01
We introduce the dynamic plastic continuous neural network (DPCNN), which is composed of neurons distributed in a nonlinear plastic medium, where the wire-like connections of conventional neural networks are replaced with the continuous medium. We use the finite element method to model the dynamic phenomenon of information processing within DPCNNs. During training, instead of weights, the properties of the continuous material at different locations, together with some properties of the neurons, are modified. Input and output can be vectors and/or continuous functions over lines and/or areas. Delay and feedback, from neurons to themselves and from outputs, occur in DPCNNs. We model a simple form of the DPCNN in which the medium is a rectangular plate of bilinear material and the neurons continuously fire a signal that is a function of the horizontal displacement.
Neural dynamic programming and its application to control systems
NASA Astrophysics Data System (ADS)
Seong, Chang-Yun
There are few general practical feedback control methods for nonlinear MIMO (multi-input-multi-output) systems, although such methods exist for their linear counterparts. Neural Dynamic Programming (NDP) is proposed as a practical design method of optimal feedback controllers for nonlinear MIMO systems. NDP is an offspring of both neural networks and optimal control theory. In optimal control theory, the optimal solution to any nonlinear MIMO control problem may be obtained from the Hamilton-Jacobi-Bellman equation (HJB) or the Euler-Lagrange equations (EL). The two sets of equations provide the same solution in different forms: EL leads to a sequence of optimal control vectors, called Feedforward Optimal Control (FOC); HJB yields a nonlinear optimal feedback controller, called Dynamic Programming (DP). DP produces an optimal solution that can reject disturbances and uncertainties as a result of feedback. Unfortunately, the computation and storage requirements associated with DP solutions can be problematic, especially for high-order nonlinear systems. This dissertation presents an approximate technique for solving the DP problem based on neural network techniques that provides many of the performance benefits (e.g., optimality and feedback) of DP while benefiting from the numerical properties of neural networks. We formulate neural networks to approximate optimal feedback solutions whose existence DP justifies. We show the conditions under which NDP closely approximates the optimal solution. Finally, we introduce the learning operator characterizing the learning process of the neural network in searching for the optimal solution. The analysis of the learning operator provides not only a fundamental understanding of the learning process in neural networks but also useful guidelines for selecting the number of weights of the neural network. As a result, NDP finds, with a reasonable amount of computation and storage, the optimal feedback solutions to nonlinear MIMO control problems.
Speech Recognition Using Neural Nets and Dynamic Time Warping
1988-12-01
long did it take you to find the ON/OFF button? Imagine simply saying, "Stereo, power on". Considering the vastly more complicated process of... Figure 3.6 shows the normalization process in terms of a unit hypersphere. The second advantage of energy normalization is that neural net processing of... resulted in overall best performance, it is used to train all later nets. Dynamic Time Warping: dynamic time warping tests were performed over a long period
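The dynamic time warping tests mentioned above rest on a standard dynamic program. Here is a minimal sketch under the usual formulation (absolute-difference local cost, the three classic predecessor moves); `dtw_distance` is an illustrative name, not code from the report:

```python
def dtw_distance(a, b):
    """Classic dynamic time warping between two 1-D sequences.

    Fills the cumulative-cost matrix D where D[i][j] is the cost of the
    cheapest warping path aligning a[:i] with b[:j]."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible predecessor cells
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Because the warp may repeat samples, sequences of different lengths that trace the same shape score zero, which is why DTW tolerates variable speaking rates.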
Dynamic neural activity during stress signals resilient coping
Sinha, Rajita; Lacadie, Cheryl M.; Constable, R. Todd; Seo, Dongju
2016-01-01
Active coping underlies a healthy stress response, but neural processes supporting such resilient coping are not well-known. Using a brief, sustained exposure paradigm contrasting highly stressful, threatening, and violent stimuli versus nonaversive neutral visual stimuli in a functional magnetic resonance imaging (fMRI) study, we show significant subjective, physiologic, and endocrine increases and temporally related dynamically distinct patterns of neural activation in brain circuits underlying the stress response. First, stress-specific sustained increases in the amygdala, striatum, hypothalamus, midbrain, right insula, and right dorsolateral prefrontal cortex (DLPFC) regions supported the stress processing and reactivity circuit. Second, dynamic neural activation during stress versus neutral runs, showing early increases followed by later reduced activation in the ventrolateral prefrontal cortex (VLPFC), dorsal anterior cingulate cortex (dACC), left DLPFC, hippocampus, and left insula, suggested a stress adaptation response network. Finally, dynamic stress-specific mobilization of the ventromedial prefrontal cortex (VmPFC), marked by initial hypoactivity followed by increased VmPFC activation, pointed to the VmPFC as a key locus of the emotional and behavioral control network. Consistent with this finding, greater neural flexibility signals in the VmPFC during stress correlated with active coping ratings whereas lower dynamic activity in the VmPFC also predicted a higher level of maladaptive coping behaviors in real life, including binge alcohol intake, emotional eating, and frequency of arguments and fights. These findings demonstrate acute functional neuroplasticity during stress, with distinct and separable brain networks that underlie critical components of the stress response, and a specific role for VmPFC neuroflexibility in stress-resilient coping. PMID:27432990
Generalized neural networks for spectral analysis: dynamics and Liapunov functions.
Vegas, José M; Zufiria, Pedro J
2004-03-01
This paper analyzes local and global behavior of several dynamical systems which generalize some artificial neural network (ANN) semilinear models originally designed for principal component analysis (PCA) in the characterization of random vectors. These systems implicitly performed the spectral analysis of correlation (i.e. symmetric positive definite) matrices. Here, the proposed generalizations cover both nonsymmetric matrices as well as fully nonlinear models. Local stability analysis is performed via linearization and global behavior is analyzed by constructing several Liapunov functions.
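For the symmetric PCA case that these systems generalize, the canonical semilinear example is Oja's rule, whose Liapunov analysis shows that the unit principal eigenvectors of the correlation matrix are the stable fixed points. A pure-Python sketch under that standard setting (the function name and toy data set are illustrative, not from the paper):

```python
import math
import random

def oja_pc1(data, lr=0.005, epochs=500, seed=0):
    """Oja's rule: w += lr * y * (x - y * w), with y = w . x.
    Its stable fixed points are unit principal eigenvectors of the
    data correlation matrix, the PCA setting the paper generalizes."""
    rng = random.Random(seed)
    d = len(data[0])
    w = [rng.gauss(0, 1) for _ in range(d)]
    for _ in range(epochs):
        for x in data:
            y = sum(wi * xi for wi, xi in zip(w, x))
            # Hebbian growth y*x with the -y^2*w term keeping |w| near 1
            w = [wi + lr * y * (xi - y * wi) for wi, xi in zip(w, x)]
    return w

# toy data stretched along (1, 1): leading eigenvector is close to (1,1)/sqrt(2)
samples = [(t, t + 0.1 * math.sin(3 * t))
           for t in [(-1) ** k * 0.05 * k for k in range(40)]]
w = oja_pc1(samples)
```

The weight vector converges (up to sign) to the unit principal direction, illustrating the local-stability-plus-Liapunov-function structure the paper studies in greater generality.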
Some new results on system identification with dynamic neural networks.
Yu, W; Li, X
2001-01-01
Nonlinear system online identification via dynamic neural networks is studied in this paper. The main contribution of the paper is that the passivity approach is applied to access several new stable properties of neuro identification. The conditions for passivity, stability, asymptotic stability, and input-to-state stability are established in certain senses. We conclude that the gradient descent algorithm for weight adjustment is stable in an L(infinity) sense and robust to any bounded uncertainties.
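The gradient-descent weight adjustment whose stability the paper analyzes can be sketched on a scalar toy problem. This is a hedged illustration, not the authors' model: a one-layer series-parallel identifier yhat = sum_j w_j * tanh(a_j * x) with the plain gradient rule w_j += lr * e * tanh(a_j * x), where e is the identification error. The basis slopes and names are mine.

```python
import math
import random

def identify(f, steps=3000, lr=0.05, seed=1):
    """Online gradient identification of an unknown scalar map f using
    fixed tanh features; returns the weights and the mean squared
    identification error over the final 500 samples."""
    rng = random.Random(seed)
    slopes = [0.5, 1.0, 2.0]      # fixed, hypothetical basis slopes
    w = [0.0, 0.0, 0.0]
    mse = 0.0
    for t in range(steps):
        x = rng.uniform(-2.0, 2.0)
        phi = [math.tanh(a * x) for a in slopes]
        e = f(x) - sum(wj * pj for wj, pj in zip(w, phi))
        w = [wj + lr * e * pj for wj, pj in zip(w, phi)]  # gradient rule
        if t >= steps - 500:
            mse += e * e / 500
    return w, mse

w, mse = identify(math.tanh)  # target lies in the span of the features
```

With bounded features the error signal stays bounded for any bounded target, the boundedness property the paper establishes in an L(infinity) sense via passivity.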
Spatiotemporal Dynamics and Reliable Computations in Recurrent Spiking Neural Networks
NASA Astrophysics Data System (ADS)
Pyle, Ryan; Rosenbaum, Robert
2017-01-01
Randomly connected networks of excitatory and inhibitory spiking neurons provide a parsimonious model of neural variability, but are notoriously unreliable for performing computations. We show that this difficulty is overcome by incorporating the well-documented dependence of connection probability on distance. Spatially extended spiking networks exhibit symmetry-breaking bifurcations and generate spatiotemporal patterns that can be trained to perform dynamical computations under a reservoir computing framework.
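The reservoir computing framework invoked above can be illustrated without spiking detail. The sketch below is an echo-state-style rate model, not the authors' spatially extended spiking network: a fixed random recurrent tanh network is driven by a signal, and only a linear readout is trained (here by simple online LMS) to predict the next sample. All names, sizes, and the 0.9 scaling heuristic are assumptions.

```python
import math
import random

def reservoir_forecast(signal, n=30, lr=0.1, seed=2):
    """Echo-state sketch: fixed random recurrence, trained linear readout.
    Returns the mean squared one-step prediction error over the last
    200 samples."""
    rng = random.Random(seed)
    # random recurrent weights scaled toward spectral radius ~0.9
    W = [[rng.gauss(0, 1.0 / math.sqrt(n)) * 0.9 for _ in range(n)]
         for _ in range(n)]
    win = [rng.gauss(0, 0.5) for _ in range(n)]   # input weights
    x = [0.0] * n
    w_out = [0.0] * n
    mse = 0.0
    for t in range(len(signal) - 1):
        u, target = signal[t], signal[t + 1]
        x = [math.tanh(sum(W[i][j] * x[j] for j in range(n)) + win[i] * u)
             for i in range(n)]
        yhat = sum(wi * xi for wi, xi in zip(w_out, x))
        e = target - yhat
        w_out = [wi + lr * e * xi for wi, xi in zip(w_out, x)]  # LMS readout
        if t >= len(signal) - 201:
            mse += e * e / 200
    return mse

sig = [math.sin(0.3 * t) for t in range(2000)]
mse = reservoir_forecast(sig)
```

Only the readout is plastic; the recurrent dynamics supply the temporal memory, which is the division of labor the paper exploits with spiking networks.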
Shaping the Dynamics of a Bidirectional Neural Interface
Vato, Alessandro; Semprini, Marianna; Maggiolini, Emma; Szymanski, Francois D.; Fadiga, Luciano; Panzeri, Stefano; Mussa-Ivaldi, Ferdinando A.
2012-01-01
Progress in decoding neural signals has enabled the development of interfaces that translate cortical brain activities into commands for operating robotic arms and other devices. The electrical stimulation of sensory areas provides a means to create artificial sensory information about the state of a device. Taken together, neural activity recording and microstimulation techniques allow us to embed a portion of the central nervous system within a closed-loop system, whose behavior emerges from the combined dynamical properties of its neural and artificial components. In this study we asked if it is possible to concurrently regulate this bidirectional brain-machine interaction so as to shape a desired dynamical behavior of the combined system. To this end, we followed a well-known biological pathway. In vertebrates, the communications between brain and limb mechanics are mediated by the spinal cord, which combines brain instructions with sensory information and organizes coordinated patterns of muscle forces driving the limbs along dynamically stable trajectories. We report the creation and testing of the first neural interface that emulates this sensory-motor interaction. The interface organizes a bidirectional communication between sensory and motor areas of the brain of anaesthetized rats and an external dynamical object with programmable properties. The system includes (a) a motor interface decoding signals from a motor cortical area, and (b) a sensory interface encoding the state of the external object into electrical stimuli to a somatosensory area. The interactions between brain activities and the state of the external object generate a family of trajectories converging upon a selected equilibrium point from arbitrary starting locations. Thus, the bidirectional interface establishes the possibility to specify not only a particular movement trajectory but an entire family of motions, which includes the prescribed reactions to unexpected perturbations. PMID
Slow diffusive dynamics in a chaotic balanced neural network
Shaham, Nimrod
2017-01-01
It has been proposed that neural noise in the cortex arises from chaotic dynamics in the balanced state: in this model of cortical dynamics, the excitatory and inhibitory inputs to each neuron approximately cancel, and activity is driven by fluctuations of the synaptic inputs around their mean. It remains unclear whether neural networks in the balanced state can perform tasks that are highly sensitive to noise, such as storage of continuous parameters in working memory, while also accounting for the irregular behavior of single neurons. Here we show that continuous parameter working memory can be maintained in the balanced state, in a neural circuit with a simple network architecture. We show analytically that in the limit of an infinite network, the dynamics generated by this architecture are characterized by a continuous set of steady balanced states, allowing for the indefinite storage of a continuous parameter. In finite networks, we show that the chaotic noise drives diffusive motion along the approximate attractor, which gradually degrades the stored memory. We analyze the dynamics and show that the slow diffusive motion induces slowly decaying temporal cross correlations in the activity, which differ substantially from those previously described in the balanced state. We calculate the diffusivity, and show that it is inversely proportional to the system size. For large enough (but realistic) neural population sizes, and with suitable tuning of the network connections, the proposed balanced network can sustain continuous parameter values in memory over time scales larger by several orders of magnitude than the single neuron time scale. PMID:28459813
The neural dynamics of updating person impressions.
Mende-Siedlecki, Peter; Cai, Yang; Todorov, Alexander
2013-08-01
Person perception is a dynamic, evolving process. Because other people are an endless source of social information, people need to update their impressions of others based upon new information. We devised an fMRI study to identify brain regions involved in updating impressions. Participants saw faces paired with valenced behavioral information and were asked to form impressions of these individuals. Each face was seen five times in a row, each time with a different behavioral description. Critically, for half of the faces the behaviors were evaluatively consistent, while for the other half they were inconsistent. In line with prior work, dorsomedial prefrontal cortex (dmPFC) was associated with forming impressions of individuals based on behavioral information. More importantly, a whole-brain analysis revealed a network of other regions associated with updating impressions of individuals who exhibited evaluatively inconsistent behaviors, including rostrolateral PFC, superior temporal sulcus, right inferior parietal lobule and posterior cingulate cortex.
Deep Dynamic Neural Networks for Multimodal Gesture Segmentation and Recognition.
Wu, Di; Pigou, Lionel; Kindermans, Pieter-Jan; Le, Nam Do-Hoang; Shao, Ling; Dambre, Joni; Odobez, Jean-Marc
2016-08-01
This paper describes a novel method called Deep Dynamic Neural Networks (DDNN) for multimodal gesture recognition. A semi-supervised hierarchical dynamic framework based on a Hidden Markov Model (HMM) is proposed for simultaneous gesture segmentation and recognition where skeleton joint information, depth and RGB images, are the multimodal input observations. Unlike most traditional approaches that rely on the construction of complex handcrafted features, our approach learns high-level spatio-temporal representations using deep neural networks suited to the input modality: a Gaussian-Bernouilli Deep Belief Network (DBN) to handle skeletal dynamics, and a 3D Convolutional Neural Network (3DCNN) to manage and fuse batches of depth and RGB images. This is achieved through the modeling and learning of the emission probabilities of the HMM required to infer the gesture sequence. This purely data driven approach achieves a Jaccard index score of 0.81 in the ChaLearn LAP gesture spotting challenge. The performance is on par with a variety of state-of-the-art hand-tuned feature-based approaches and other learning-based methods, therefore opening the door to the use of deep learning techniques in order to further explore multimodal time series data.
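The HMM inference step that ties the deep networks together is standard Viterbi decoding over the learned emission scores. A minimal sketch (the toy scores below are mine, not the DDNN's outputs):

```python
def viterbi(log_emissions, log_trans, log_init):
    """Viterbi decoding: given per-frame log emission scores (as a
    DDNN-style model would supply), recover the most likely hidden
    state sequence of the HMM."""
    T, S = len(log_emissions), len(log_init)
    score = [log_init[s] + log_emissions[0][s] for s in range(S)]
    back = []
    for t in range(1, T):
        prev, ptr, score = score[:], [], []
        for s in range(S):
            # best predecessor state for landing in s at time t
            best = max(range(S), key=lambda r: prev[r] + log_trans[r][s])
            ptr.append(best)
            score.append(prev[best] + log_trans[best][s] + log_emissions[t][s])
        back.append(ptr)
    path = [max(range(S), key=lambda s: score[s])]
    for ptr in reversed(back):      # backtrack through the pointers
        path.append(ptr[path[-1]])
    return path[::-1]
```

In the gesture setting each HMM state corresponds to a gesture phase, so the decoded path performs segmentation and recognition simultaneously.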
Perspective: network-guided pattern formation of neural dynamics.
Hütt, Marc-Thorsten; Kaiser, Marcus; Hilgetag, Claus C
2014-10-05
The understanding of neural activity patterns is fundamentally linked to an understanding of how the brain's network architecture shapes dynamical processes. Established approaches rely mostly on deviations of a given network from certain classes of random graphs. Hypotheses about the supposed role of prominent topological features (for instance, the roles of modularity, network motifs or hierarchical network organization) are derived from these deviations. An alternative strategy could be to study deviations of network architectures from regular graphs (rings and lattices) and consider the implications of such deviations for self-organized dynamic patterns on the network. Following this strategy, we draw on the theory of spatio-temporal pattern formation and propose a novel perspective for analysing dynamics on networks, by evaluating how the self-organized dynamics are confined by network architecture to a small set of permissible collective states. In particular, we discuss the role of prominent topological features of brain connectivity, such as hubs, modules and hierarchy, in shaping activity patterns. We illustrate the notion of network-guided pattern formation with numerical simulations and outline how it can facilitate the understanding of neural dynamics.
Deterministic dynamics of neural activity during absence seizures in rats
NASA Astrophysics Data System (ADS)
Ouyang, Gaoxiang; Li, Xiaoli; Dang, Chuangyin; Richards, Douglas A.
2009-04-01
The study of brain electrical activities in terms of deterministic nonlinear dynamics has recently received much attention. Forbidden ordinal patterns (FOP) is a recently proposed method to investigate the determinism of a dynamical system through the analysis of intrinsic ordinal properties of a nonstationary time series. The advantages of this method in comparison to others include simplicity and low complexity in computation without further model assumptions. In this paper, the FOP of the EEG series of genetic absence epilepsy rats from Strasbourg was examined to demonstrate evidence of deterministic dynamics during epileptic states. Experiments showed that the number of FOP of the EEG series grew significantly from an interictal to an ictal state via a preictal state. These findings indicated that the deterministic dynamics of neural networks increased significantly in the transition from the interictal to the ictal states and also suggested that the FOP measures of the EEG series could be considered as a predictor of absence seizures.
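The FOP statistic is simple to compute: enumerate which ordinal patterns of a given order occur in the series; the ones that never occur are forbidden. A self-contained sketch (function name mine, applied to the fully chaotic logistic map rather than to EEG data):

```python
def ordinal_patterns_present(series, m=3):
    """Return the set of order-m ordinal patterns occurring in a series.
    Patterns that never occur are 'forbidden'; deterministic dynamics
    forbid many, while white noise eventually realizes all m! of them."""
    seen = set()
    for i in range(len(series) - m + 1):
        w = series[i:i + m]
        # rank tuple: indices of the window ordered by increasing value
        seen.add(tuple(sorted(range(m), key=lambda k: w[k])))
    return seen

# fully chaotic logistic map x -> 4x(1-x): a strictly decreasing triple
# is impossible, so at most 5 of the 6 order-3 patterns ever appear
x, orbit = 0.1234, []
for _ in range(2000):
    x = 4.0 * x * (1.0 - x)
    orbit.append(x)
patterns = ordinal_patterns_present(orbit)
```

A stochastic series of the same length would realize all six order-3 patterns, so the pattern count separates deterministic from random dynamics, which is the property the seizure-prediction analysis exploits.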
Predicting physical time series using dynamic ridge polynomial neural networks.
Al-Jumeily, Dhiya; Ghazali, Rozaida; Hussain, Abir
2014-01-01
Forecasting naturally occurring phenomena is a common problem in many domains of science, and this has been addressed and investigated by many scientists. The importance of time series prediction stems from the fact that it has a wide range of applications, including control systems, engineering processes, environmental systems and economics. From the knowledge of some aspects of the previous behaviour of the system, the aim of the prediction process is to determine or predict its future behaviour. In this paper, we consider a novel application of a higher order polynomial neural network architecture called Dynamic Ridge Polynomial Neural Network that combines the properties of higher order and recurrent neural networks for the prediction of physical time series. In this study, four types of signals have been used: the Lorenz attractor, mean value of the AE index, sunspot number, and heat wave temperature. The simulation results showed good improvements in terms of the signal-to-noise ratio in comparison to a number of higher order and feedforward benchmark networks.
NASA Astrophysics Data System (ADS)
Tang, Song; Ye, Mao; Zhu, Ce; Liu, Yiguang
2017-01-01
How to transfer the trained detector into the target scenarios has been an important topic for a long time in the field of computer vision. Unfortunately, most of the existing transfer methods need to keep source samples or label target samples in the detection phase. Therefore, they are difficult to apply to real applications. For this problem, we propose a framework that consists of a controlled convolutional neural network (CCNN) and a modulating neural network (MNN). In a CCNN, the parameters of the last layer, i.e., the classifier, are dynamically adjusted by a MNN. For each target sample, the CCNN adaptively generates a proprietary classifier. Our contributions include (1) the first detector-based unsupervised transfer method that is very suitable for real applications and (2) a new scheme of a dynamically adjusting classifier in which a new object function is invented. Experimental results confirm that our method can achieve state-of-the-art results on two pedestrian datasets.
Can Neural Activity Propagate by Endogenous Electrical Field?
Qiu, Chen; Shivacharan, Rajat S; Zhang, Mingming; Durand, Dominique M
2015-12-02
It is widely accepted that synaptic transmissions and gap junctions are the major governing mechanisms for signal traveling in the neural system. Yet, a group of neural waves, either physiological or pathological, share the same speed of ∼0.1 m/s without synaptic transmission or gap junctions, and this speed is not consistent with axonal conduction or ionic diffusion. The only explanation left is an electrical field effect. We tested the hypothesis that endogenous electric fields are sufficient to explain the propagation with in silico and in vitro experiments. Simulation results show that field effects alone can indeed mediate propagation across layers of neurons with speeds of 0.12 ± 0.09 m/s with pathological kinetics, and 0.11 ± 0.03 m/s with physiologic kinetics, both generating weak field amplitudes of ∼2-6 mV/mm. Further, the model predicted that propagation speed values are inversely proportional to the cell-to-cell distances, but do not significantly change with extracellular resistivity, membrane capacitance, or membrane resistance. In vitro recordings in mice hippocampi produced similar speeds (0.10 ± 0.03 m/s) and field amplitudes (2.5-5 mV/mm), and by applying a blocking field, the propagation speed was greatly reduced. Finally, osmolarity experiments confirmed the model's prediction that cell-to-cell distance inversely affects propagation speed. Together, these results show that despite their weak amplitude, electric fields can be solely responsible for spike propagation at ∼0.1 m/s. This phenomenon could be important to explain the slow propagation of epileptic activity and other normal propagations at similar speeds.
Dynamical criticality in the collective activity of a neural population
NASA Astrophysics Data System (ADS)
Mora, Thierry
The past decade has seen a wealth of physiological data suggesting that neural networks may behave like critical branching processes. Concurrently, the collective activity of neurons has been studied using explicit mappings to classic statistical mechanics models such as disordered Ising models, allowing for the study of their thermodynamics, but these efforts have ignored the dynamical nature of neural activity. I will show how to reconcile these two approaches by learning effective statistical mechanics models of the full history of the collective activity of a neuron population directly from physiological data, treating time as an additional dimension. Applying this technique to multi-electrode recordings from retinal ganglion cells, and studying the thermodynamics of the inferred model, reveals a peak in specific heat reminiscent of a second-order phase transition.
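The specific-heat diagnostic mentioned above can be demonstrated exactly on a toy model. The sketch below is not the retinal maximum-entropy fit; it brute-forces a small fully connected Ising system (all names and parameters are illustrative) and computes C(T) = Var(E)/T^2, whose interior peak is the signature of critical behavior discussed in the abstract.

```python
import math
from itertools import product

def specific_heat(J, N, T):
    """Exact specific heat C = Var(E) / T^2 for a small fully connected
    Ising ferromagnet, by enumerating all 2^N spin configurations."""
    Z = E1 = E2 = 0.0
    for spins in product((-1, 1), repeat=N):
        m = sum(spins)
        # pairwise energy -J/N * sum_{i<j} s_i s_j via the magnetization m
        E = -J * (m * m - N) / (2.0 * N)
        w = math.exp(-E / T)
        Z += w
        E1 += w * E
        E2 += w * E * E
    E1 /= Z
    E2 /= Z
    return (E2 - E1 * E1) / (T * T)

# scan temperatures: a peak in C(T) away from the endpoints is the
# second-order-transition-like signature described above
temps = [0.2 + 0.1 * k for k in range(30)]
C = [specific_heat(1.0, 9, T) for T in temps]
```

In the data-driven setting, temperature rescales the inferred couplings and the peak's proximity to T = 1 is what suggests the recorded population sits near criticality.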
Spatial gradients and multidimensional dynamics in a neural integrator circuit
Miri, Andrew; Daie, Kayvon; Arrenberg, Aristides B.; Baier, Herwig; Aksay, Emre; Tank, David W.
2011-01-01
In a neural integrator, the variability and topographical organization of neuronal firing rate persistence can provide information about the circuit’s functional architecture. Here we use optical recording to measure the time constant of decay of persistent firing (“persistence time”) across a population of neurons comprising the larval zebrafish oculomotor velocity-to-position neural integrator. We find extensive persistence time variation (10-fold; coefficients of variation 0.58–1.20) across cells within individual larvae. We also find that the similarity in firing between two neurons decreased as the distance between them increased and that a gradient in persistence time was mapped along the rostrocaudal and dorsoventral axes. This topography is consistent with the emergence of persistence time heterogeneity from a circuit architecture in which nearby neurons are more strongly interconnected than distant ones. Collectively, our results can be accounted for by integrator circuit models characterized by multiple dimensions of slow firing rate dynamics. PMID:21857656
Bio-Inspired Neural Model for Learning Dynamic Models
NASA Technical Reports Server (NTRS)
Duong, Tuan; Duong, Vu; Suri, Ronald
2009-01-01
A neural-network mathematical model that, relative to prior such models, places greater emphasis on some of the temporal aspects of real neural physical processes, has been proposed as a basis for massively parallel, distributed algorithms that learn dynamic models of possibly complex external processes by means of learning rules that are local in space and time. The algorithms could be made to perform such functions as recognition and prediction of words in speech and of objects depicted in video images. The approach embodied in this model is said to be "hardware-friendly" in the following sense: The algorithms would be amenable to execution by special-purpose computers implemented as very-large-scale integrated (VLSI) circuits that would operate at relatively high speeds and low power demands.
Sakas, D E; Panourias, I G; Simpson, B A
2007-01-01
Operative Neuromodulation is the field of altering electrically or chemically the signal transmission in the nervous system by implanted devices in order to excite, inhibit or tune the activities of neurons or neural networks and produce therapeutic effects. The present article reviews relevant literature on procedures or devices applied either in contact with the cerebral cortex or cranial nerves or in deep sites inside the brain in order to treat various refractory neurological conditions such as: a) chronic pain (facial, somatic, deafferentation, phantom limb), b) movement disorders (Parkinson's disease, dystonia, Tourette syndrome), c) epilepsy, d) psychiatric disease, e) hearing deficits, and f) visual loss. These data indicate that in operative neuromodulation, a new field emerges that is based on neural networks research and on advances in digitised stereometric brain imaging which allow precise localisation of cerebral neural networks and their relay stations; this field can be described as Neural networks surgery because it aims to act extrinsically or intrinsically on neural networks and to alter therapeutically the neural signal transmission with the use of implantable electrical or electronic devices. The authors also review neurotechnology literature relevant to neuroengineering, nanotechnologies, brain computer interfaces, hybrid cultured probes, neuromimetics, neuroinformatics, neurocomputation, and computational neuromodulation; the latter field is dedicated to the study of the biophysical and mathematical characteristics of electrochemical neuromodulation. The article also brings forward particularly interesting lines of research such as the carbon nanofibers electrode arrays for simultaneous electrochemical recording and stimulation, closed-loop systems for responsive neuromodulation, and the intracortical electrodes for restoring hearing or vision. The present review of cerebral neuromodulatory procedures highlights the transition from the
Endothelial cells regulate neural crest and second heart field morphogenesis.
Milgrom-Hoffman, Michal; Michailovici, Inbal; Ferrara, Napoleone; Zelzer, Elazar; Tzahor, Eldad
2014-07-04
Cardiac and craniofacial developmental programs are intricately linked during early embryogenesis, which is also reflected by a high frequency of birth defects affecting both regions. The molecular nature of the crosstalk between mesoderm and neural crest progenitors and the involvement of endothelial cells within the cardio-craniofacial field are largely unclear. Here we show in the mouse that genetic ablation of vascular endothelial growth factor receptor 2 (Flk1) in the mesoderm results in early embryonic lethality, severe deformation of the cardio-craniofacial field, lack of endothelial cells and a poorly formed vascular system. We provide evidence that endothelial cells are required for migration and survival of cranial neural crest cells and consequently for the deployment of second heart field progenitors into the cardiac outflow tract. Insights into the molecular mechanisms reveal marked reduction in Transforming growth factor beta 1 (Tgfb1) along with changes in the extracellular matrix (ECM) composition. Our collective findings in both mouse and avian models suggest that endothelial cells coordinate cardio-craniofacial morphogenesis, in part via a conserved signaling circuit regulating ECM remodeling by Tgfb1.
Renormalization of Collective Modes in Large-Scale Neural Dynamics
NASA Astrophysics Data System (ADS)
Moirogiannis, Dimitrios; Piro, Oreste; Magnasco, Marcelo O.
2017-03-01
The bulk of studies of coupled oscillators use, as is appropriate in Physics, a global coupling constant controlling all individual interactions. However, because the number of relevant degrees of freedom also increases as the coupling is increased, this setting conflates the strength of the coupling with the effective dimensionality of the resulting dynamics. We propose a coupling more appropriate to neural circuitry, where synaptic strengths are under biological, activity-dependent control and where the coupling strength and the dimensionality can be controlled separately. Here we study a set of N → ∞ strongly and nonsymmetrically coupled, dissipative, powered, rotational dynamical systems, and derive the equations of motion of the reduced system for dimensions 2 and 4. Our setting highlights the statistical structure of the eigenvectors of the connectivity matrix as the fundamental determinant of collective behavior, inheriting from this structure symmetries and singularities absent from the original microscopic dynamics.
A neural network approach to dynamic task assignment of multirobots.
Zhu, Anmin; Yang, Simon X
2006-09-01
In this paper, a neural network approach to task assignment, based on a self-organizing map (SOM), is proposed for a multirobot system in dynamic environments subject to uncertainties. It is capable of dynamically controlling a group of mobile robots to achieve multiple tasks at different locations, so that the desired number of robots will arrive at every target location from arbitrary initial locations. In the proposed approach, the robot motion planning is integrated with the task assignment, thus the robots start to move once the overall task is given. The robot navigation can be dynamically adjusted to guarantee that each target location has the desired number of robots, even under uncertainties such as when some robots break down. The proposed approach is capable of dealing with changing environments. The effectiveness and efficiency of the proposed approach are demonstrated by simulation studies.
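The SOM-style competition the abstract describes can be illustrated with a toy sketch. Everything here (positions, the greedy nearest-robot competition, the step size `eta`) is an invented simplification for illustration, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 6 robots at random positions, 3 targets that each
# need exactly 2 robots. All names and parameters are illustrative.
robots = rng.uniform(0, 10, size=(6, 2))   # robot positions
targets = np.array([[1.0, 1.0], [9.0, 1.0], [5.0, 9.0]])
need = [2, 2, 2]                           # robots required per target

assignment = {t: [] for t in range(len(targets))}
free = set(range(len(robots)))

# Greedy SOM-style competition: each target repeatedly "wins" the nearest
# free robot, and the winner's position is pulled toward the target
# (the SOM weight update), emulating integrated motion planning.
eta = 0.5                                  # learning rate / step size
for t, k in enumerate(need):
    for _ in range(k):
        idx = min(free, key=lambda r: np.linalg.norm(robots[r] - targets[t]))
        free.remove(idx)
        assignment[t].append(idx)
        robots[idx] += eta * (targets[t] - robots[idx])  # move toward target

print(assignment)
```

The dynamic aspect of the paper (reassignment when robots break down) would amount to returning a failed robot's index to `free` and rerunning the competition for its target.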
Aitchison, Laurence; Lengyel, Máté
2016-12-01
Probabilistic inference offers a principled framework for understanding both behaviour and cortical computation. However, two basic and ubiquitous properties of cortical responses seem difficult to reconcile with probabilistic inference: neural activity displays prominent oscillations in response to constant input, and large transient changes in response to stimulus onset. Indeed, cortical models of probabilistic inference have typically either concentrated on tuning curve or receptive field properties and remained agnostic as to the underlying circuit dynamics, or had simplistic dynamics that gave neither oscillations nor transients. Here we show that these dynamical behaviours may in fact be understood as hallmarks of the specific representation and algorithm that the cortex employs to perform probabilistic inference. We demonstrate that a particular family of probabilistic inference algorithms, Hamiltonian Monte Carlo (HMC), naturally maps onto the dynamics of excitatory-inhibitory neural networks. Specifically, we constructed a model of an excitatory-inhibitory circuit in primary visual cortex that performed HMC inference, and thus inherently gave rise to oscillations and transients. These oscillations were not mere epiphenomena but served an important functional role: speeding up inference by rapidly spanning a large volume of state space. Inference thus became an order of magnitude more efficient than in a non-oscillatory variant of the model. In addition, the network matched two specific properties of observed neural dynamics that would otherwise be difficult to account for using probabilistic inference. First, the frequency of oscillations as well as the magnitude of transients increased with the contrast of the image stimulus. Second, excitation and inhibition were balanced, and inhibition lagged excitation. These results suggest a new functional role for the separation of cortical populations into excitatory and inhibitory neurons, and for the neural
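The HMC algorithm the authors map onto excitatory-inhibitory dynamics can be shown in a minimal generic form. This is textbook HMC on a 1-D Gaussian, not the paper's circuit model; the comments only note the analogy the abstract draws (position ↔ excitation, momentum ↔ inhibition), and all step sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def grad_neg_log_p(x):          # target: N(0, 1), so -d/dx log p(x) = x
    return x

def hmc_step(x, eps=0.1, n_leap=20):
    p = rng.normal()            # resample momentum ("inhibitory" variable)
    x_new, p_new = x, p
    p_new -= 0.5 * eps * grad_neg_log_p(x_new)     # half momentum step
    for _ in range(n_leap - 1):                    # leapfrog integration:
        x_new += eps * p_new                       # the x/p coupling is what
        p_new -= eps * grad_neg_log_p(x_new)       # produces oscillations
    x_new += eps * p_new
    p_new -= 0.5 * eps * grad_neg_log_p(x_new)     # final half step
    # Metropolis correction on the Hamiltonian
    h_old = 0.5 * x ** 2 + 0.5 * p ** 2
    h_new = 0.5 * x_new ** 2 + 0.5 * p_new ** 2
    return x_new if rng.uniform() < np.exp(h_old - h_new) else x

samples = []
x = 3.0                          # start far from the mode ("stimulus onset")
for _ in range(5000):
    x = hmc_step(x)
    samples.append(x)
print(np.mean(samples), np.std(samples))  # should be near 0 and 1
```

The fast initial swing from `x = 3.0` back toward the mode is the kind of onset transient the abstract interprets functionally.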
dNSP: a biologically inspired dynamic Neural network approach to Signal Processing.
Cano-Izquierdo, José Manuel; Ibarrola, Julio; Pinzolas, Miguel; Almonacid, Miguel
2008-09-01
The arriving order of data is one of the intrinsic properties of a signal. Therefore, techniques dealing with this temporal relation are required for identification and signal processing tasks. To classify a signal according to its temporal characteristics, it would be useful to find a feature vector in which the temporal attributes were embedded. The correlation and power density spectrum functions are suitable tools to manage this issue. These functions are usually defined with statistical formulations. On the other hand, biology offers numerous processes in which signals are processed into a feature vector; for example, the processing of sound by the auditory system. In this work, the dNSP (dynamic Neural Signal Processing) architecture is proposed. This architecture allows a time-varying signal to be represented by a spatial (thus static) vector. Inspired by the aforementioned biological processes, the dNSP performs frequency decomposition using an analog parallel algorithm carried out by simple processing units. The architecture has been developed under the paradigm of a multilayer neural network, where the different layers are composed of units whose activation functions have been extracted from the theory of neural dynamics [Grossberg, S. (1988). Nonlinear neural networks: principles, mechanisms and architectures. Neural Networks, 1, 17-61]. A theoretical study of the behavior of the dynamic equations of the units and their relationship with some statistical functions establishes a parallelism between the unit activations and the correlation and power density spectrum functions. To test the capabilities of the proposed approach, several testbeds have been employed, e.g., the frequency analysis of mathematical functions. As a possible application of the architecture, a highly interesting problem in the field of automatic control is addressed: the recognition of the operating state of a controlled DC motor.
Dann, Benjamin
2016-01-01
Recent models of movement generation in motor cortex have sought to explain neural activity not as a function of movement parameters, known as representational models, but as a dynamical system acting at the level of the population. Despite evidence supporting this framework, the evaluation of representational models and their integration with dynamical systems is incomplete in the literature. Using a representational velocity-tuning based simulation of center-out reaching, we show that incorporating variable latency offsets between neural activity and kinematics is sufficient to generate rotational dynamics at the level of neural populations, a phenomenon observed in motor cortex. However, we developed a covariance-matched permutation test (CMPT) that reassigns neural data between task conditions independently for each neuron while maintaining overall neuron-to-neuron relationships, revealing that rotations based on the representational model did not uniquely depend on the underlying condition structure. In contrast, rotations based on either a dynamical model or motor cortex data depend on this relationship, providing evidence that the dynamical model more readily explains motor cortex activity. Importantly, implementing a recurrent neural network we demonstrate that both representational tuning properties and rotational dynamics emerge, providing evidence that a dynamical system can reproduce previous findings of representational tuning. Finally, using motor cortex data in combination with the CMPT, we show that results based on small numbers of neurons or conditions should be interpreted cautiously, potentially informing future experimental design. Together, our findings reinforce the view that representational models lack the explanatory power to describe complex aspects of single neuron and population level activity. PMID:27814352
Forecasting financial asset processes: stochastic dynamics via learning neural networks.
Giebel, S; Rainer, M
2010-01-01
Models for financial asset dynamics usually take into account their inherent unpredictable nature by including a suitable stochastic component in their process. Unknown (forward) values of financial assets (at a given time in the future) are usually estimated as expectations of the stochastic asset under a suitable risk-neutral measure. This estimation requires the stochastic model to be calibrated to some history of sufficient length in the past. Apart from inherent limitations due to the stochastic nature of the process, the predictive power is also limited by the simplifying assumptions of the common calibration methods, such as maximum likelihood estimation and regression methods, often performed without weights on the historic time series, or with static weights only. Here we propose a novel method of "intelligent" calibration, using learning neural networks to dynamically adapt the parameters of the stochastic model. Hence we have a stochastic process with time-dependent parameters, the dynamics of the parameters being themselves learned continuously by a neural network. Backpropagation in training is limited to a certain memory length (in the examples we consider the 10 previous business days), which is similar to the maximal time lag of autoregressive processes. We demonstrate the learning efficiency of the new algorithm by tracking the next-day forecasts for the EUR-TRY and EUR-HUF exchange rates.
Autonomic neural control of dynamic cerebral autoregulation in humans
NASA Technical Reports Server (NTRS)
Zhang, Rong; Zuckerman, Julie H.; Iwasaki, Kenichi; Wilson, Thad E.; Crandall, Craig G.; Levine, Benjamin D.
2002-01-01
BACKGROUND: The purpose of the present study was to determine the role of autonomic neural control of dynamic cerebral autoregulation in humans. METHODS AND RESULTS: We measured arterial pressure and cerebral blood flow (CBF) velocity in 12 healthy subjects (aged 29+/-6 years) before and after ganglion blockade with trimethaphan. CBF velocity was measured in the middle cerebral artery using transcranial Doppler. The magnitudes of spontaneous changes in mean blood pressure and CBF velocity were quantified by spectral analysis. The transfer function gain, phase, and coherence between these variables were estimated to quantify dynamic cerebral autoregulation. After ganglion blockade, systolic and pulse pressure decreased significantly by 13% and 26%, respectively. CBF velocity decreased by 6% (P<0.05). In the very low frequency range (0.02 to 0.07 Hz), mean blood pressure variability decreased significantly (by 82%), while CBF velocity variability persisted. Thus, transfer function gain increased by 81%. In addition, the phase lead of CBF velocity to arterial pressure diminished. These changes in transfer function gain and phase persisted despite restoration of arterial pressure by infusion of phenylephrine and normalization of mean blood pressure variability by oscillatory lower body negative pressure. CONCLUSIONS: These data suggest that dynamic cerebral autoregulation is altered by ganglion blockade. We speculate that autonomic neural control of the cerebral circulation is tonically active and likely plays a significant role in the regulation of beat-to-beat CBF in humans.
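The transfer-function analysis described (gain, phase, and coherence from spectral estimates) can be sketched on surrogate data. The signals, the sampling rate, and the use of a high-pass filter as a stand-in for autoregulation are invented for illustration; only the estimation recipe itself is the point:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(2)

# Surrogate arterial pressure (ABP) fluctuations; "CBF velocity" is made
# by high-pass filtering them, mimicking the fact that autoregulation
# damps slow pressure changes more than fast ones.
fs = 4.0                                   # Hz; illustrative resampling rate
t = np.arange(0, 300, 1 / fs)
abp = rng.normal(size=t.size)
b, a = signal.butter(2, 0.07, btype="high", fs=fs)
cbfv = signal.lfilter(b, a, abp)

# Transfer function H(f) = Pxy / Pxx via Welch auto- and cross-spectra
f, Pxx = signal.welch(abp, fs=fs, nperseg=256)
f, Pxy = signal.csd(abp, cbfv, fs=fs, nperseg=256)
H = Pxy / Pxx
gain, phase = np.abs(H), np.angle(H)

lo = (f >= 0.02) & (f <= 0.07)             # very-low-frequency band
print(gain[lo].mean(), np.degrees(phase[lo]).mean())
```

With this surrogate, gain is reduced and CBFV phase leads pressure in the very-low-frequency band, which is the qualitative signature of intact autoregulation that the study quantifies.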
Mindfulness and dynamic functional neural connectivity in children and adolescents.
Marusak, Hilary A; Elrahal, Farrah; Peters, Craig A; Kundu, Prantik; Lombardo, Michael V; Calhoun, Vince D; Goldberg, Elimelech K; Cohen, Cindy; Taub, Jeffrey W; Rabinak, Christine A
2017-09-05
Interventions that promote mindfulness consistently show salutary effects on cognition and emotional wellbeing in adults, and more recently, in children and adolescents. However, we lack understanding of the neurobiological mechanisms underlying mindfulness in youth that should allow for more judicious application of these interventions in clinical and educational settings. Using multi-echo multi-band fMRI, we examined dynamic (i.e., time-varying) and conventional static resting-state connectivity between core neurocognitive networks (i.e., salience/emotion, default mode, central executive) in 42 children and adolescents (ages 6-17). We found that trait mindfulness in youth relates to dynamic but not static resting-state connectivity. Specifically, more mindful youth transitioned more between brain states over the course of the scan, spent overall less time in a certain connectivity state, and showed a state-specific reduction in connectivity between salience/emotion and central executive networks. The number of state transitions mediated the link between higher mindfulness and lower anxiety, providing new insights into potential neural mechanisms underlying benefits of mindfulness on psychological health in youth. Our results provide new evidence that mindfulness in youth relates to functional neural dynamics and interactions between neurocognitive networks, over time.
Neural dynamics of reward probability coding: a Magnetoencephalographic study in humans.
Thomas, Julie; Vanni-Mercier, Giovanna; Dreher, Jean-Claude
2013-01-01
Prediction of future rewards and discrepancy between actual and expected outcomes (prediction error) are crucial signals for adaptive behavior. In humans, a number of fMRI studies demonstrated that reward probability modulates these two signals in a large brain network. Yet, the spatio-temporal dynamics underlying the neural coding of reward probability remain unknown. Here, using magnetoencephalography, we investigated the neural dynamics of prediction and reward prediction error computations while subjects learned to associate cues of slot machines with monetary rewards with different probabilities. We showed that event-related magnetic fields (ERFs) arising from the visual cortex coded the expected reward value 155 ms after the cue, demonstrating that reward value signals emerge early in the visual stream. Moreover, a prediction error was reflected in an ERF peaking 300 ms after the rewarded outcome and showing decreasing amplitude with higher reward probability. This prediction error signal was generated in a network including the anterior and posterior cingulate cortex. These findings pinpoint the spatio-temporal characteristics underlying reward probability coding. Together, our results provide insights into the neural dynamics underlying the ability to learn probabilistic stimulus-reward contingencies.
Determining the receptive field of a neural filter
NASA Astrophysics Data System (ADS)
Suzuki, Kenji
2004-12-01
In this paper, a method for determining the receptive field and the structure of hidden layers of a neural filter (NF) was developed and evaluated. With the proposed method, redundant units are removed from the input and hidden layers of an NF based on the influence of their removal on the error between output and teaching images. By repeatedly removing units and retraining to recover the resulting loss in performance, the receptive field and a reduced hidden-layer structure are determined. Experiments with NFs were performed on acquiring the function of a known filter, reducing noise in natural images, and reducing noise in medical image sequences. With the proposed method, redundant units could be removed from NFs while the performance of the NFs was maintained. Experimental results suggested that the proposed method can determine a reasonable receptive field for a given image-processing task: the receptive field of the NF trained to obtain the function of a filter corresponded to the kernel of that filter, and the receptive fields of the NFs for noise reduction gathered around the object pixels in the input regions of the NFs.
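The influence-based pruning loop can be sketched on a toy linear "filter". The data, the error threshold, and the least-squares refit are invented stand-ins for the neural filter, teaching images, and retraining step described in the abstract:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy problem: 5 candidate inputs, of which inputs 1 and 3 carry no
# information about the target. Pruning should discover exactly that.
X = rng.normal(size=(500, 5))
w_true = np.array([2.0, 0.0, -1.0, 0.0, 0.5])
y = X @ w_true

active = list(range(5))

def fit_err(cols):
    """Refit ('retrain') on the surviving inputs and return the MSE."""
    w, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
    return np.mean((X[:, cols] @ w - y) ** 2)

while len(active) > 1:
    errs = {c: fit_err([k for k in active if k != c]) for c in active}
    c = min(errs, key=errs.get)          # unit whose removal hurts least
    if errs[c] > 1e-6:                   # stop once removal costs accuracy
        break
    active.remove(c)                     # prune, then loop (implicit retrain)

print(active)   # the informative inputs survive: [0, 2, 4]
```

The surviving index set plays the role of the "receptive field": the inputs the trained filter actually depends on.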
Derivation of a neural field model from a network of theta neurons.
Laing, Carlo R
2014-07-01
Neural field models are used to study macroscopic spatiotemporal patterns in the cortex. Their derivation from networks of model neurons normally involves a number of assumptions, which may not be correct. Here we present an exact derivation of a neural field model from an infinite network of theta neurons, the canonical form of a type I neuron. We demonstrate the existence of a "bump" solution in both a discrete network of neurons and in the corresponding neural field model.
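The theta neuron underlying the derivation is simple enough to simulate directly. Below is a minimal check that a single theta neuron with constant drive I > 0 fires at the known rate √I/π; the integration settings are arbitrary choices, not from the paper:

```python
import numpy as np

# Theta neuron, the canonical type-I model:
#   dθ/dt = 1 - cos θ + (1 + cos θ) I,
# with a spike defined as θ crossing π. For constant I > 0 the firing
# rate is √I / π, which the Euler simulation below recovers numerically.
def simulate_theta(I, T=200.0, dt=1e-3):
    theta, spikes = -np.pi, 0
    for _ in range(int(T / dt)):
        theta += dt * (1 - np.cos(theta) + (1 + np.cos(theta)) * I)
        if theta > np.pi:          # spike: wrap the phase back
            theta -= 2 * np.pi
            spikes += 1
    return spikes / T              # mean firing rate

I = 1.0
rate = simulate_theta(I)
print(rate, np.sqrt(I) / np.pi)    # both close to 0.318
```

The exact neural field derivation in the paper works with an infinite network of such units; this single-neuron check only confirms the building block's known rate formula.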
Memory formation: from network structure to neural dynamics.
Feldt, Sarah; Wang, Jane X; Hetrick, Vaughn L; Berke, Joshua D; Zochowski, Michal
2010-05-13
Understanding the neural correlates of brain function is an extremely challenging task, since any cognitive process is distributed over a complex and evolving network of neurons that comprise the brain. In order to quantify observed changes in neuronal dynamics during hippocampal memory formation, we present metrics designed to detect directional interactions and the formation of functional neuronal ensembles. We apply these metrics to both experimental and model-derived data in an attempt to link anatomical network changes with observed changes in neuronal dynamics during hippocampal memory formation processes. We show that the developed model provides a consistent explanation of the anatomical network modifications that underlie the activity changes observed in the experimental data.
Adaptive neural information processing with dynamical electrical synapses.
Xiao, Lei; Zhang, Dan-Ke; Li, Yuan-Qing; Liang, Pei-Ji; Wu, Si
2013-01-01
The present study investigates a potential computational role of dynamical electrical synapses in neural information processing. Compared with chemical synapses, electrical synapses are more efficient in modulating the concerted activity of neurons. Based on experimental data, we propose a phenomenological model for short-term facilitation of electrical synapses. The model satisfactorily reproduces the phenomenon that neuronal correlation increases although neuronal firing rates attenuate during luminance adaptation. We explore how stimulus information is encoded in parallel by firing rates and correlated activity of neurons, and find that dynamical electrical synapses mediate a transition from the firing-rate code to the correlation code during luminance adaptation. The latter encodes the stimulus information using concerted but lower neuronal firing rates, and hence is economically more efficient.
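A phenomenological short-term facilitation model of the kind described can be sketched as a single-variable conductance update: each presynaptic spike transiently boosts the gap-junction conductance, which then relaxes back to baseline. All constants here are illustrative, not the paper's fitted values:

```python
import numpy as np

# Hypothetical facilitating electrical synapse: conductance g is pushed
# toward a ceiling g_max by each presynaptic spike (saturating step) and
# recovers exponentially to baseline g0 between spikes.
g0, g_max = 1.0, 3.0     # baseline and ceiling conductance (a.u.)
tau, alpha = 0.5, 0.4    # recovery time constant (s), facilitation step
dt = 1e-3

spike_times = np.arange(0.1, 1.0, 0.05)   # 20 Hz presynaptic train
t = np.arange(0, 2.0, dt)
spikes = np.zeros_like(t)
spikes[np.searchsorted(t, spike_times)] = 1

g = np.empty_like(t)
g[0] = g0
for i in range(1, t.size):
    dg = -(g[i - 1] - g0) / tau * dt                 # exponential recovery
    dg += alpha * (g_max - g[i - 1]) * spikes[i]     # saturating facilitation
    g[i] = g[i - 1] + dg

print(g.max())   # conductance climbs well above baseline during the train
```

During sustained activity the conductance settles at an elevated plateau, which is the mechanism by which the model strengthens neuronal correlations while firing rates fall.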
Neural network potentials for dynamics and thermodynamics of gold nanoparticles
NASA Astrophysics Data System (ADS)
Chiriki, Siva; Jindal, Shweta; Bulusu, Satya S.
2017-02-01
For understanding the dynamical and thermodynamical properties of metal nanoparticles, one has to go beyond static and structural predictions of a nanoparticle. An accurate description of dynamical properties may be computationally intensive depending on the size of the nanoparticle. Herein, we demonstrate the use of atomistic neural network potentials, obtained by fitting quantum mechanical data, for extensive molecular dynamics simulations of gold nanoparticles. The fitted potential was tested by performing global optimizations of size-selected gold nanoparticles (Au_n, 17 ≤ n ≤ 58). We performed molecular dynamics simulations in the canonical (NVT) and microcanonical (NVE) ensembles on Au17, Au34, and Au58 for a total simulation time of around 3 ns for each nanoparticle. Our study, based on both NVT and NVE ensembles, indicates that there is a dynamical coexistence of solid-like and liquid-like phases near the melting transition. We estimate the probability at finite temperatures for the set of isomers lying below 0.5 eV from the global minimum structure. In the case of Au17 and Au58, the properties can be estimated using the global minimum structure at room temperature, while for Au34 the global minimum structure is not dominant even at low temperatures.
Optimal path-finding through mental exploration based on neural energy field gradients.
Wang, Yihong; Wang, Rubin; Zhu, Yating
2017-02-01
Rodents can accomplish self-localization and path-finding tasks by forming a cognitive map in the hippocampus representing the environment. In the classical model of the cognitive map, the system (artificial animal) needs large amounts of physical exploration of the spatial environment to solve path-finding problems, which costs much time and energy. Although Hopfield's mental exploration model makes up for the deficiency mentioned above, the resulting path is still not efficient enough. Moreover, his model mainly focused on the artificial neural network, and a clear physiological meaning was not addressed. In this work, based on the concept of mental exploration, neural energy coding theory has been applied to a novel computational model to solve the path-finding problem. An energy field is constructed on the basis of the firing power of place cell clusters, and the energy field gradient can be used in mental exploration to solve path-finding problems. The study shows that the new mental exploration model can efficiently find the optimal path, and presents the learning process in biophysically meaningful terms as well. We also analyzed the model parameters that affect path efficiency. This new idea verifies the importance of place cells and synapses in spatial memory and proves that energy coding is effective for studying cognitive activities. This may provide a theoretical basis for the neural dynamics mechanism of spatial memory.
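The energy-gradient idea can be illustrated in a toy grid world: build a scalar field that peaks at the goal and greedily climb it from the start cell. The distance-based field below is an invented stand-in for the place-cell firing power used in the paper:

```python
import numpy as np

# Surrogate "energy field" over a 10x10 grid: higher energy = closer to
# the goal. Mental exploration then reduces to discrete gradient ascent.
goal = (8, 8)
n = 10
yy, xx = np.mgrid[0:n, 0:n]
energy = -np.hypot(yy - goal[0], xx - goal[1])

def climb(start, max_steps=50):
    path = [start]
    pos = start
    for _ in range(max_steps):
        if pos == goal:
            break
        y, x = pos
        # examine the 8 neighbours and step to the highest-energy one
        nbrs = [(y + dy, x + dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0)
                and 0 <= y + dy < n and 0 <= x + dx < n]
        pos = max(nbrs, key=lambda p: energy[p])
        path.append(pos)
    return path

path = climb((0, 0))
print(path[-1], len(path))   # goal reached; 9 cells = 8 diagonal moves
```

Because the field is smooth and unimodal, the greedy climb yields the shortest (diagonal) path with no physical exploration, which is the efficiency argument the abstract makes.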
Sensorimotor learning biases choice behavior: a learning neural field model for decision making.
Klaes, Christian; Schneegans, Sebastian; Schöner, Gregor; Gail, Alexander
2012-01-01
According to a prominent view of sensorimotor processing in primates, selection and specification of possible actions are not sequential operations. Rather, a decision for an action emerges from competition between different movement plans, which are specified and selected in parallel. For action choices which are based on ambiguous sensory input, the frontoparietal sensorimotor areas are considered part of the common underlying neural substrate for selection and specification of action. These areas have been shown capable of encoding alternative spatial motor goals in parallel during movement planning, and show signatures of competitive value-based selection among these goals. Since the same network is also involved in learning sensorimotor associations, competitive action selection (decision making) should not only be driven by the sensory evidence and expected reward in favor of either action, but also by the subject's learning history of different sensorimotor associations. Previous computational models of competitive neural decision making used predefined associations between sensory input and corresponding motor output. Such hard-wiring does not allow modeling of how decisions are influenced by sensorimotor learning or by changing reward contingencies. We present a dynamic neural field model which learns arbitrary sensorimotor associations with a reward-driven Hebbian learning algorithm. We show that the model accurately simulates the dynamics of action selection with different reward contingencies, as observed in monkey cortical recordings, and that it correctly predicted the pattern of choice errors in a control experiment. With our adaptive model we demonstrate how network plasticity, which is required for association learning and adaptation to new reward contingencies, can influence choice behavior. The field model provides an integrated and dynamic account for the operations of sensorimotor integration, working memory and action selection required for
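The reward-driven Hebbian learning at the heart of the model can be caricatured in a few lines, stripped of the field dynamics. The softmax choice rule, the contingency table, and all constants are invented simplifications, not the published model:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy task: 2 stimuli, 2 actions. Association weights W grow toward the
# obtained reward for the chosen action and shrink when reward is absent,
# so the mapping is learned purely from reward feedback.
n_stim, n_act = 2, 2
W = np.full((n_stim, n_act), 0.5)
correct = {0: 1, 1: 0}          # hypothetical reward contingency
eta, beta = 0.1, 5.0            # learning rate, choice sharpness

for trial in range(500):
    s = rng.integers(n_stim)
    p = np.exp(beta * W[s]) / np.exp(beta * W[s]).sum()   # softmax choice
    a = rng.choice(n_act, p=p)
    r = 1.0 if a == correct[s] else 0.0
    W[s, a] += eta * (r - W[s, a])        # reward-driven Hebbian update

choices = W.argmax(axis=1)
print(choices)    # learned mapping: stimulus 0 -> action 1, stimulus 1 -> action 0
```

A reversal of the contingency table mid-run would make the weights relearn, which is the adaptation-to-new-reward-contingencies behavior the field model is built to capture.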
Dynamic analysis of a general class of winner-take-all competitive neural networks.
Fang, Yuguang; Cohen, Michael A; Kincaid, Thomas G
2010-05-01
This paper studies a general class of dynamical neural networks with lateral inhibition, exhibiting winner-take-all (WTA) behavior. These networks are motivated by a metal-oxide-semiconductor field effect transistor (MOSFET) implementation of neural networks, in which mutual competition plays a very important role. We show that WTA behavior exists for a fairly general class of competitive neural networks. Sufficient conditions for the network to have a WTA equilibrium are obtained, and a rigorous convergence analysis is carried out. The conditions for WTA behavior obtained in this paper provide design guidelines for network implementation and fabrication. We also demonstrate that whenever the network gets into the WTA region, it stays in that region and settles down exponentially fast to the WTA point. This speeds up decision making: as soon as the network enters the region, the winner can be declared. Finally, we show that this WTA neural network has a self-resetting property, and a resetting principle is proposed.
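A minimal instance of the network class analyzed here: units with self-excitation, uniform lateral inhibition, and rectified-linear activation. The gains and inputs below are invented, chosen only so that the competition is strong enough for a single winner; they are not the paper's sufficient conditions:

```python
import numpy as np

def relu(u):
    return np.maximum(u, 0.0)

inputs = np.array([0.2, 1.0, 0.4, 0.6])   # unit 1 receives the largest input
x = np.zeros_like(inputs)
a, b, dt = 0.5, 1.5, 0.01                 # self-excitation, lateral inhibition

# Euler integration of  dx_i/dt = -x_i + relu(I_i + a*x_i - b*sum_{j!=i} x_j)
for _ in range(5000):
    u = inputs + a * x - b * (x.sum() - x)
    x += dt * (-x + relu(u))

print(np.argmax(x), x.round(3))   # unit 1 wins; its rivals are silenced
```

Once the losers are silenced, the winner settles exponentially to its fixed point I/(1 - a), illustrating the "declare the winner early" property the abstract describes.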
Reservoir Computing Properties of Neural Dynamics in Prefrontal Cortex.
Enel, Pierre; Procyk, Emmanuel; Quilodran, René; Dominey, Peter Ford
2016-06-01
Primates display a remarkable ability to adapt to novel situations. Determining what is most pertinent in these situations is not always possible based only on the current sensory inputs, and often also depends on recent inputs and behavioral outputs that contribute to internal states. Thus, one can ask how cortical dynamics generate representations of these complex situations. It has been observed that mixed selectivity in cortical neurons contributes to representing diverse situations defined by a combination of the current stimuli, and that mixed selectivity is readily obtained in randomly connected recurrent networks. In this context, these reservoir networks reproduce the highly recurrent nature of local cortical connectivity. Recombining present and past inputs, random recurrent networks from the reservoir computing framework generate mixed selectivity, which provides pre-coded representations of an essentially universal set of contexts. These representations can then be selectively amplified through learning to solve the task at hand. We thus explored their representational power and dynamical properties after training a reservoir to perform a complex cognitive task initially developed for monkeys. The reservoir model inherently displayed a dynamic form of mixed selectivity, key to the representation of the behavioral context over time. The pre-coded representation of context was amplified by training a feedback neuron to explicitly represent this context, thereby reproducing the effect of learning and allowing the model to perform more robustly. This second version of the model demonstrates how a hybrid dynamical regime combining the spatio-temporal processing of reservoirs and the input-driven attracting dynamics generated by the feedback neuron can be used to solve a complex cognitive task. We compared reservoir activity to neural activity of dorsal anterior cingulate cortex of monkeys, which revealed similar network dynamics. We argue that reservoir computing is a
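A minimal reservoir in the spirit of this framework can be sketched as follows (the reservoir size, spectral radius, and the delayed-recall task are illustrative choices, not the authors' setup): a fixed random recurrent network is driven by input, and only a linear readout is trained, here by ridge regression, to recall the input from two steps earlier, a task solvable only through the reservoir's memory of recent inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200                                          # reservoir size (illustrative)
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 (echo state property)
W_in = rng.uniform(-0.5, 0.5, N)

def run_reservoir(u):
    """Drive the tanh reservoir with the scalar sequence u; stack the states."""
    x = np.zeros(N)
    states = []
    for ut in u:
        x = np.tanh(W @ x + W_in * ut)
        states.append(x.copy())
    return np.array(states)

# Task: reproduce the input from two steps earlier.
u = rng.uniform(-1.0, 1.0, 600)
target = np.roll(u, 2)
target[:2] = 0.0
X = run_reservoir(u)[100:]                       # discard a 100-step washout
y = target[100:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)  # ridge-regression readout
pred = X @ W_out                                 # in-sample reconstruction
```

The untrained recurrent weights already mix present and past inputs nonlinearly; the readout merely amplifies the pre-coded combination it needs, which is the paper's central point about mixed selectivity.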
Track and Field Dynamics. Second Edition.
ERIC Educational Resources Information Center
Ecker, Tom
Track and field coaching is considered an art embodying three sciences--physiology, psychology, and dynamics. It is the area of dynamics, the branch of physics that deals with the action of force on bodies, that is central to this book. Although the book does not cover the entire realm of dynamics, the laws and principles that relate directly to…
Knips, Guido; Zibner, Stephan K. U.; Reimann, Hendrik; Schöner, Gregor
2017-01-01
Reaching for objects and grasping them is a fundamental skill for any autonomous robot that interacts with its environment. Although this skill seems trivial to adults, who effortlessly pick up even objects they have never seen before, it is hard for other animals, for human infants, and for most autonomous robots. Any time during movement preparation and execution, human reaching movements are updated if the visual scene changes (with a delay of about 100 ms). The capability for online updating highlights how tightly perception, movement planning, and movement generation are integrated in humans. Here, we report on an effort to reproduce this tight integration in a neural dynamic process model of reaching and grasping that covers the complete path from visual perception to movement generation within a unified modeling framework, Dynamic Field Theory. All requisite processes are realized as time-continuous dynamical systems that model the evolution in time of neural population activation. Population level neural processes bring about the attentional selection of objects, the estimation of object shape and pose, and the mapping of pose parameters to suitable movement parameters. Once a target object has been selected, its pose parameters couple into the neural dynamics of movement generation so that changes of pose are propagated through the architecture to update the performed movement online. Implementing the neural architecture on an anthropomorphic robot arm equipped with a Kinect sensor, we evaluate the model by grasping wooden objects. Their size, shape, and pose are estimated from a neural model of scene perception that is based on feature fields. The sequential organization of a reach and grasp act emerges from a sequence of dynamic instabilities within a neural dynamics of behavioral organization that effectively switches the neural controllers from one phase of the action to the next. Trajectory formation itself is driven by a dynamical systems version of
Multiplex visibility graphs to investigate recurrent neural network dynamics
NASA Astrophysics Data System (ADS)
Bianchi, Filippo Maria; Livi, Lorenzo; Alippi, Cesare; Jenssen, Robert
2017-03-01
A recurrent neural network (RNN) is a universal approximator of dynamical systems, whose performance often depends on sensitive hyperparameters. Tuning them properly may be difficult and, typically, based on a trial-and-error approach. In this work, we adopt a graph-based framework to interpret and characterize internal dynamics of a class of RNNs called echo state networks (ESNs). We design principled unsupervised methods to derive hyperparameter configurations yielding maximal ESN performance, expressed in terms of prediction error and memory capacity. In particular, we propose to model the time series generated by each neuron's activations with a horizontal visibility graph, whose topological properties have been shown to be related to the underlying system dynamics. Successively, horizontal visibility graphs associated with all neurons become layers of a larger structure called a multiplex. We show that topological properties of such a multiplex reflect important features of ESN dynamics that can be used to guide the tuning of its hyperparameters. Results obtained on several benchmarks and a real-world dataset of telephone call data records show the effectiveness of the proposed methods. PMID:28281563
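The horizontal visibility criterion used in this line of work is simple to state: samples i and j (i < j) are linked iff every sample strictly between them lies below both x[i] and x[j]. A short sketch of the construction (a naive implementation with an early-exit, not the authors' code):

```python
def hvg_edges(x):
    """Edge set of the horizontal visibility graph of the series x.

    i < j are linked iff all intermediate samples are strictly below
    both x[i] and x[j]."""
    n = len(x)
    edges = set()
    for i in range(n - 1):
        edges.add((i, i + 1))              # consecutive samples always see each other
        tallest = x[i + 1]                 # running max of samples between i and j
        for j in range(i + 2, n):
            if tallest < min(x[i], x[j]):
                edges.add((i, j))
            tallest = max(tallest, x[j])
            if tallest >= x[i]:            # nothing further right can see i
                break
    return edges

edges = hvg_edges([3.0, 1.0, 2.0, 4.0])
# -> {(0, 1), (0, 2), (0, 3), (1, 2), (2, 3)}
```

In the paper's pipeline, one such graph per reservoir neuron becomes a layer of a multiplex, whose topological features (e.g. degree distributions) are then used to characterize the ESN dynamics.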
A reflexive neural network for dynamic biped walking control.
Geng, Tao; Porr, Bernd; Wörgötter, Florentin
2006-05-01
Biped walking remains a difficult problem, and robot models can greatly facilitate our understanding of the underlying biomechanical principles as well as their neuronal control. The goal of this study is to specifically demonstrate that stable biped walking can be achieved by combining the physical properties of the walking robot with a small, reflex-based neuronal network governed mainly by local sensor signals. Building on earlier work (Taga, 1995; Cruse, Kindermann, Schumm, Dean, & Schmitz, 1998), this study shows that human-like gaits emerge without specific position or trajectory control and that the walker is able to compensate small disturbances through its own dynamical properties. The reflexive controller used here has the following characteristics, which are different from earlier approaches: (1) Control is mainly local. Hence, it uses only two signals (anterior extreme angle and ground contact), which operate at the interjoint level. All other signals operate only at single joints. (2) Neither position control nor trajectory tracking control is used. Instead, the approximate nature of the local reflexes on each joint allows the robot mechanics itself (e.g., its passive dynamics) to contribute substantially to the overall gait trajectory computation. (3) The motor control scheme used in the local reflexes of our robot is more straightforward and has more biological plausibility than that of other robots, because the outputs of the motor neurons in our reflexive controller are directly driving the motors of the joints rather than working as references for position or velocity control. As a consequence, the neural controller and the robot mechanics are closely coupled as a neuromechanical system, and this study emphasizes that dynamically stable biped walking gaits emerge from the coupling between neural computation and physical computation. This is demonstrated by different walking experiments using a real robot as well as by a Poincaré map analysis
Neural dynamics of image representation in the primary visual cortex.
Yan, Xiaogang; Khambhati, Ankit; Liu, Lei; Lee, Tai Sing
2012-01-01
Horizontal connections in the primary visual cortex have been hypothesized to play a number of computational roles: association field for contour completion, surface interpolation, surround suppression, and saliency computation. Here, we argue that horizontal connections might also serve a critical role for computing the appropriate codes for image representation. That the early visual cortex or V1 explicitly represents the image we perceive has been a common assumption in computational theories of efficient coding (Olshausen and Field, 1996), yet such a framework for understanding the circuitry in V1 has not been seriously entertained in the neurophysiological community. In fact, a number of recent fMRI and neurophysiological studies cast doubt on the neural validity of such an isomorphic representation (Cornelissen et al., 2006; von der Heydt et al., 2003). In this study, we investigated, neurophysiologically, how V1 neurons respond to uniform color surfaces and show that spiking activities of neurons can be decomposed into three components: a bottom-up feedforward input, an articulation of color tuning and a contextual modulation signal that is inversely proportional to the distance away from the bounding contrast border. We demonstrate through computational simulations that the behaviors of a model for image representation are consistent with many aspects of our neural observations. We conclude that the hypothesis of isomorphic representation of images in V1 remains viable and this hypothesis suggests an additional new interpretation of the functional roles of horizontal connections in the primary visual cortex.
A stochastic-field description of finite-size spiking neural networks.
Dumont, Grégory; Payeur, Alexandre; Longtin, André
2017-08-01
Neural network dynamics are governed by the interaction of spiking neurons. Stochastic aspects of single-neuron dynamics propagate up to the network level and shape the dynamical and informational properties of the population. Mean-field models of population activity disregard the finite-size stochastic fluctuations of network dynamics and thus offer a deterministic description of the system. Here, we derive a stochastic partial differential equation (SPDE) describing the temporal evolution of the finite-size refractory density, which represents the proportion of neurons in a given refractory state at any given time. The population activity (the density of active neurons per unit time) is easily extracted from this refractory density. The SPDE includes finite-size effects through a two-dimensional Gaussian white noise that acts both in time and along the refractory dimension. For an infinite number of neurons the standard mean-field theory is recovered. A discretization of the SPDE along its characteristic curves allows direct simulations of the activity of large but finite spiking networks; this constitutes the main advantage of our approach. Linearizing the SPDE with respect to the deterministic asynchronous state allows the theoretical investigation of finite-size activity fluctuations. In particular, analytical expressions for the power spectrum and autocorrelation of activity fluctuations are obtained. Moreover, our approach can be adapted to incorporate multiple interacting populations and quasi-renewal single-neuron dynamics. PMID:28787447
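The finite-size effect motivating the SPDE can be seen in the simplest possible setting (independent Poisson units, an illustrative stand-in for the interacting networks treated in the paper): the empirical population activity fluctuates around the mean-field rate, with a variance that shrinks as 1/N and vanishes in the mean-field limit.

```python
import numpy as np

rng = np.random.default_rng(1)

def population_activity(n_neurons, rate=20.0, dt=0.001, steps=2000):
    """Empirical population rate (Hz) per time bin for independent Poisson units."""
    p = rate * dt                              # spike probability per bin per neuron
    spikes = rng.random((steps, n_neurons)) < p
    return spikes.mean(axis=1) / dt            # fraction active per bin, scaled to Hz

a_small = population_activity(100)
a_large = population_activity(10000)
# Both traces fluctuate around 20 Hz, but the fluctuation variance scales as 1/N.
```

The SPDE approach keeps exactly these order-1/sqrt(N) fluctuations, which a deterministic mean-field description discards.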
Bojak, Ingo; Stoyanov, Zhivko V.; Liley, David T. J.
2015-01-01
Burst suppression in the electroencephalogram (EEG) is a well-described phenomenon that occurs during deep anesthesia, as well as in a variety of congenital and acquired brain insults. Classically it is thought of as spatially synchronous, quasi-periodic bursts of high amplitude EEG separated by low amplitude activity. However, its characterization as a “global brain state” has been challenged by recent results obtained with intracranial electrocorticography. Not only does it appear that burst suppression activity is highly asynchronous across cortex, but also that it may occur in isolated regions of circumscribed spatial extent. Here we outline a realistic neural field model for burst suppression by adding a slow process of synaptic resource depletion and recovery, which is able to reproduce qualitatively the empirically observed features during general anesthesia at the whole cortex level. Simulations reveal heterogeneous bursting over the model cortex and complex spatiotemporal dynamics during simulated anesthetic action, and provide forward predictions of neuroimaging signals for subsequent empirical comparisons and more detailed characterization. Because burst suppression corresponds to a dynamical end-point of brain activity, theoretically accounting for its spatiotemporal emergence will vitally contribute to efforts aimed at clarifying whether a common physiological trajectory is induced by the actions of general anesthetic agents. We have taken a first step in this direction by showing that a neural field model can qualitatively match recent experimental data that indicate spatial differentiation of burst suppression activity across cortex. PMID:25767438
Social decisions affect neural activity to perceived dynamic gaze.
Latinus, Marianne; Love, Scott A; Rossi, Alejandra; Parada, Francisco J; Huang, Lisa; Conty, Laurence; George, Nathalie; James, Karin; Puce, Aina
2015-11-01
Gaze direction, a cue of both social and spatial attention, is known to modulate early neural responses to faces, e.g., the N170. However, findings in the literature have been inconsistent, likely reflecting differences in stimulus characteristics and task requirements. Here, we investigated the effect of task on neural responses to dynamic gaze changes: away and toward transitions (resulting or not in eye contact). Subjects performed, in random order, social (away/toward them) and non-social (left/right) judgment tasks on these stimuli. Overall, in the non-social task, results showed a larger N170 to gaze aversion than gaze motion toward the observer. In the social task, however, this difference was no longer present in the right hemisphere, likely reflecting an enhanced N170 to gaze motion toward the observer. Our behavioral and event-related potential data indicate that performing social judgments enhances saliency of gaze motion toward the observer, even those that did not result in gaze contact. These data and those of previous studies suggest two modes of processing visual information: a 'default mode' that may focus on spatial information; a 'socially aware mode' that might be activated when subjects are required to make social judgments. The exact mechanism that allows switching from one mode to the other remains to be clarified. PMID:25925272
Neural dynamic optimization for autonomous aerial vehicle trajectory design
NASA Astrophysics Data System (ADS)
Xu, Peng; Verma, Ajay; Mayer, Richard J.
2007-04-01
Online aerial vehicle trajectory design and reshaping are crucial for a class of autonomous aerial vehicles such as reusable launch vehicles in order to achieve flexibility in real-time flying operations. An aerial vehicle is modeled as a nonlinear multi-input-multi-output (MIMO) system. The inputs include the control parameters and current system states that include velocity and position coordinates of the vehicle. The outputs are the new system states. An ideal trajectory control design system generates a series of control commands to achieve a desired trajectory under various disturbances and vehicle model uncertainties including aerodynamic perturbations caused by geometric damage to the vehicle. Conventional approaches suffer from the nonlinearity of the MIMO system, and the high-dimensionality of the system state space. In this paper, we apply a Neural Dynamic Optimization (NDO) based approach to overcome these difficulties. The core of an NDO model is a multilayer perceptron (MLP) neural network, which generates the control parameters online. The inputs of the MLP are the time-variant states of the MIMO systems. The outputs of the MLP and the control parameters will be used by the MIMO system to generate new system states. By such a formulation, an NDO model approximates the time-varying optimal feedback solution.
Resolution enhancement in neural networks with dynamical synapses
Fung, C. C. Alan; Wang, He; Lam, Kin; Wong, K. Y. Michael; Wu, Si
2013-01-01
Conventionally, information is represented by spike rates in the neural system. Here, we consider the ability of temporally modulated activities in neuronal networks to carry information extra to spike rates. These temporal modulations, commonly known as population spikes, are due to the presence of synaptic depression in a neuronal network model. We discuss its relevance to an experiment on transparent motions in macaque monkeys by Treue et al. in 2000. They found that if the moving directions of objects are too close, the firing rate profile will be very similar to that evoked by a single direction. As the difference in the moving directions of objects is large enough, the neuronal system would respond in such a way that the network enhances the resolution in the moving directions of the objects. In this paper, we propose that this behavior can be reproduced by neural networks with dynamical synapses when there are multiple external inputs. We will demonstrate how resolution enhancement can be achieved, and discuss the conditions under which temporally modulated activities are able to enhance information processing performances in general. PMID:23781197
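The effect of synaptic depression can be sketched with a single-population rate model (parameters are illustrative, not fitted to the paper's network): a depression variable s tracks the available synaptic resources, so the response to a sustained input overshoots and then sags to a lower steady state, the basic ingredient behind temporally modulated population activity.

```python
import numpy as np

def depressing_population(I_ext, tau_r=0.01, tau_d=0.3, U=0.5, w=0.8, dt=0.001):
    """Rate model with short-term synaptic depression (Tsodyks-Markram style).

      tau_r * dr/dt = -r + max(w*s*r + I_ext, 0)   (population rate)
      ds/dt = (1 - s)/tau_d - U*s*r                (synaptic resources)
    """
    r, s = 0.0, 1.0
    trace = []
    for I in I_ext:
        drive = max(w * s * r + I, 0.0)
        r += dt * (-r + drive) / tau_r
        s += dt * ((1.0 - s) / tau_d - U * s * r)
        trace.append(r)
    return np.array(trace)

I = np.zeros(2000)
I[200:] = 1.0                  # step input switched on at t = 0.2 s
r = depressing_population(I)
# r overshoots shortly after stimulus onset, then relaxes to a lower,
# depressed steady state as synaptic resources deplete.
```

Because the rate variable is fast (tau_r) and the resource variable slow (tau_d), the network transiently amplifies input changes, which is what allows depression to sharpen responses to distinct inputs.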
Hellyer, Peter John; Clopath, Claudia; Kehagia, Angie A; Turkheimer, Federico E; Leech, Robert
2017-08-01
In recent years, there have been many computational simulations of spontaneous neural dynamics. Here, we describe a simple model of spontaneous neural dynamics that controls an agent moving in a simple virtual environment. These dynamics generate interesting brain-environment feedback interactions that rapidly destabilize neural and behavioral dynamics demonstrating the need for homeostatic mechanisms. We investigate roles for homeostatic plasticity both locally (local inhibition adjusting to balance excitatory input) as well as more globally (regional "task negative" activity that compensates for "task positive" sensory input in another region) balancing neural activity and leading to more stable behavior (trajectories through the environment). Our results suggest complementary functional roles for both local and macroscale mechanisms in maintaining neural and behavioral dynamics and a novel functional role for macroscopic "task-negative" patterns of activity (e.g., the default mode network).
Nonlinear adaptive trajectory tracking using dynamic neural networks.
Poznyak, A S; Yu, W; Sanchez, E N; Perez, J P
1999-01-01
In this paper the adaptive nonlinear identification and trajectory tracking are discussed via dynamic neural networks. By means of a Lyapunov-like analysis we determine stability conditions for the identification error. Then we analyze the trajectory tracking error by a local optimal controller. An algebraic Riccati equation and a differential one are used for the identification and the tracking error analysis. As our main original contributions, we establish two theorems: the first one gives a bound for the identification error and the second one establishes a bound for the tracking error. We illustrate the effectiveness of these results by two examples: the second-order relay system with multiple isolated equilibrium points and the chaotic system given by the Duffing equation.
Direct Adaptive Aircraft Control Using Dynamic Cell Structure Neural Networks
NASA Technical Reports Server (NTRS)
Jorgensen, Charles C.
1997-01-01
A Dynamic Cell Structure (DCS) Neural Network was developed which learns topology representing networks (TRNS) of F-15 aircraft aerodynamic stability and control derivatives. The network is integrated into a direct adaptive tracking controller. The combination produces a robust adaptive architecture capable of handling multiple accident and off-nominal flight scenarios. This paper describes the DCS network and modifications to the parameter estimation procedure. The work represents one step towards an integrated real-time reconfiguration control architecture for rapid prototyping of new aircraft designs. Performance was evaluated using three off-line benchmarks and on-line nonlinear Virtual Reality simulation. Flight control was evaluated under scenarios including differential stabilator lock, soft sensor failure, control and stability derivative variations, and air turbulence.
Nonlinear dynamics of direction-selective recurrent neural media.
Xie, Xiaohui; Giese, Martin A
2002-05-01
The direction selectivity of cortical neurons can be accounted for by asymmetric lateral connections. Such lateral connectivity leads to a network dynamics with characteristic properties that can be exploited in neurophysiological experiments to distinguish this mechanism for direction selectivity from other possible mechanisms. We present a mathematical analysis for a class of direction-selective neural models with asymmetric lateral connections. In contrast to earlier theoretical studies, which analyzed approximations of the network dynamics by neglecting nonlinearities and applying methods from linear systems theory, we study the network dynamics with the nonlinearity taken into account. We show that asymmetrically coupled networks can stabilize stimulus-locked traveling pulse solutions that are appropriate for the modeling of the responses of direction-selective neurons. In addition, our analysis shows that outside a certain regime of stimulus speeds the stability of these solutions breaks down, giving rise to lurching activity waves with specific spatiotemporal periodicity. These solutions, and the bifurcation by which they arise, cannot be easily accounted for by classical models for direction selectivity.
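How asymmetric lateral connectivity produces directed activity propagation can be caricatured with a discrete-time toy on a ring (this is not the continuous-time model analyzed in the paper; the kernel offset, width, and threshold are illustrative): a localized bump of activity, repeatedly filtered through a rightward-shifted excitatory kernel and a winner-like threshold, travels in the direction of the connection asymmetry.

```python
import numpy as np

N = 100                                     # ring of N positions
d = (np.arange(N) + N // 2) % N - N // 2    # signed circular offsets
kernel = np.exp(-((d - 3.0) ** 2) / 8.0)    # excitatory kernel shifted +3 positions
K = np.fft.fft(kernel)

def step(r):
    """One update: circular convolution with the asymmetric kernel, then threshold."""
    x = np.fft.ifft(K * np.fft.fft(r)).real
    x = np.maximum(x - 0.8 * x.max(), 0.0)  # keep only near-peak activity
    return x / x.sum()

r = np.zeros(N)
r[20] = 1.0                                 # initial bump at position 20
for _ in range(10):
    r = step(r)
# The bump's peak has moved from position 20 to 20 + 10*3 = 50.
```

Each iteration shifts the bump by the kernel's offset, a crude discrete analogue of the traveling and lurching activity waves that the paper's stability analysis characterizes rigorously.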
Random dynamics of the Morris-Lecar neural model.
Tateno, Takashi; Pakdaman, Khashayar
2004-09-01
Determining the response characteristics of neurons to fluctuating noise-like inputs similar to realistic stimuli is essential for understanding neuronal coding. This study addresses this issue by providing a random dynamical system analysis of the Morris-Lecar neural model driven by a white Gaussian noise current. Depending on parameter selections, the deterministic Morris-Lecar model can be considered as a canonical prototype for widely encountered classes of neuronal membranes, referred to as class I and class II membranes. In both cases, the transitions from excitable to oscillating regimes are associated with different bifurcation scenarios. This work examines how random perturbations affect these two bifurcation scenarios. It is first numerically shown that the Morris-Lecar model driven by white Gaussian noise current tends to have a unique stationary distribution in the phase space. Numerical evaluations also reveal quantitative and qualitative changes in this distribution in the vicinity of the bifurcations of the deterministic system. However, these changes notwithstanding, our numerical simulations show that the Lyapunov exponents of the system remain negative in these parameter regions, indicating that no dynamical stochastic bifurcations take place. Moreover, our numerical simulations confirm that, regardless of the asymptotic dynamics of the deterministic system, the random Morris-Lecar model stabilizes at a unique stationary stochastic process. In terms of random dynamical system theory, our analysis shows that additive noise destroys the above-mentioned bifurcation sequences that characterize class I and class II regimes in the Morris-Lecar model. The interpretation of this result in terms of neuronal coding is that, despite the differences in the deterministic dynamics of class I and class II membranes, their responses to noise-like stimuli present a reliable feature. Copyright 2004 American Institute of Physics
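An Euler-Maruyama integration of the noise-driven Morris-Lecar model discussed here can be sketched as follows (the class-II-like parameter values are standard textbook choices and the noise strength is an illustrative assumption, not the paper's settings):

```python
import numpy as np

def morris_lecar_em(I=90.0, sigma=10.0, dt=0.05, steps=20000, seed=2):
    """Euler-Maruyama integration of Morris-Lecar with a white Gaussian noise current.

    C dV/dt = I - gL(V-VL) - gCa m_inf(V)(V-VCa) - gK w (V-VK) + sigma*xi(t)
      dw/dt = phi * (w_inf(V) - w) / tau_w(V)
    """
    C, gL, gCa, gK = 20.0, 2.0, 4.4, 8.0
    VL, VCa, VK = -60.0, 120.0, -84.0
    V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04
    rng = np.random.default_rng(seed)
    V, w = -60.0, 0.0
    Vs = np.empty(steps)
    for t in range(steps):
        m_inf = 0.5 * (1.0 + np.tanh((V - V1) / V2))
        w_inf = 0.5 * (1.0 + np.tanh((V - V3) / V4))
        tau_w = 1.0 / np.cosh((V - V3) / (2.0 * V4))
        dV = (I - gL * (V - VL) - gCa * m_inf * (V - VCa) - gK * w * (V - VK)) / C
        V += dt * dV + (sigma / C) * np.sqrt(dt) * rng.standard_normal()
        w += dt * phi * (w_inf - w) / tau_w
        Vs[t] = V
    return Vs

V = morris_lecar_em()
# The trajectory stays bounded and fluctuates; depending on I, the noisy
# system rests, fires intermittently, or oscillates.
```

Repeating such simulations over long times and many noise realizations is how the stationary distributions and Lyapunov exponents described in the abstract are estimated numerically.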
A dynamical systems view of motor preparation: Implications for neural prosthetic system design
Shenoy, Krishna V.; Kaufman, Matthew T.; Sahani, Maneesh; Churchland, Mark M.
2013-01-01
Neural prosthetic systems aim to help disabled patients suffering from a range of neurological injuries and disease by using neural activity from the brain to directly control assistive devices. This approach in effect bypasses the dysfunctional neural circuitry, such as an injured spinal cord. To do so, neural prostheses depend critically on a scientific understanding of the neural activity that drives them. We review here several recent studies aimed at understanding the neural processes in premotor cortex that precede arm movements and lead to the initiation of movement. These studies were motivated by hypotheses and predictions conceived of within a dynamical systems perspective. This perspective concentrates on describing the neural state using as few degrees of freedom as possible and on inferring the rules that govern the motion of that neural state. Although quite general, this perspective has led to a number of specific predictions that have been addressed experimentally. It is hoped that the resulting picture of the dynamical role of preparatory and movement-related neural activity will be particularly helpful to the development of neural prostheses, which can themselves be viewed as dynamical systems under the control of the larger dynamical system to which they are attached. PMID:21763517
Spatiotemporal neural network dynamics for the processing of dynamic facial expressions
Sato, Wataru; Kochiyama, Takanori; Uono, Shota
2015-01-01
The dynamic facial expressions of emotion automatically elicit multifaceted psychological activities; however, the temporal profiles and dynamic interaction patterns of brain activities remain unknown. We investigated these issues using magnetoencephalography. Participants passively observed dynamic facial expressions of fear and happiness, or dynamic mosaics. Source-reconstruction analyses utilizing functional magnetic-resonance imaging data revealed higher activation in broad regions of the bilateral occipital and temporal cortices in response to dynamic facial expressions than in response to dynamic mosaics at 150–200 ms and some later time points. The right inferior frontal gyrus exhibited higher activity for dynamic faces versus mosaics at 300–350 ms. Dynamic causal-modeling analyses revealed that dynamic faces activated the dual visual routes and visual–motor route. Superior influences of feedforward and feedback connections were identified before and after 200 ms, respectively. These results indicate that hierarchical, bidirectional neural network dynamics within a few hundred milliseconds implement the processing of dynamic facial expressions. PMID:26206708
di Volo, Matteo; Burioni, Raffaella; Casartelli, Mario; Livi, Roberto; Vezzani, Alessandro
2016-01-01
We study the dynamics of networks with inhibitory and excitatory leaky integrate-and-fire neurons with short-term synaptic plasticity in the presence of depressive and facilitating mechanisms. The dynamics is analyzed by a heterogeneous mean-field approximation, which allows us to keep track of the effects of structural disorder in the network. We describe the complex behavior of different classes of excitatory and inhibitory components, which give rise to a rich dynamical phase diagram as a function of the fraction of inhibitory neurons. Using the same mean-field approach, we study and solve a global inverse problem: reconstructing the degree probability distributions of the inhibitory and excitatory components and the fraction of inhibitory neurons from the knowledge of the average synaptic activity field. This approach unveils new perspectives on the numerical study of neural network dynamics and the possibility of using these models as a test bed for the analysis of experimental data.
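The depressive and facilitating synaptic mechanisms mentioned above are commonly modeled with Tsodyks-Markram short-term plasticity, in which a utilization variable u (facilitation) and a resource variable x (depression) are updated at each presynaptic spike. The following sketch is a generic illustration of those per-spike updates, with made-up time constants, not the specific model of the paper.

```python
import numpy as np

def tm_efficacies(spike_times, U=0.5, tau_d=200.0, tau_f=50.0):
    """Synaptic efficacy u*x at each presynaptic spike under
    Tsodyks-Markram short-term plasticity (times in ms)."""
    u, x, last_t = U, 1.0, 0.0
    eff = []
    for t in spike_times:
        dt = t - last_t
        x = 1.0 - (1.0 - x) * np.exp(-dt / tau_d)  # resources recover toward 1
        u = U + (u - U) * np.exp(-dt / tau_f)      # facilitation decays toward U
        u += U * (1.0 - u)                         # facilitation jump on spike
        eff.append(u * x)                          # transmitted fraction
        x *= 1.0 - u                               # resource depletion
        last_t = t
    return np.array(eff)
```

With these depression-dominated constants a regular spike train yields steadily decreasing efficacies; making tau_f large relative to tau_d instead produces facilitation.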
Hidden Conditional Neural Fields for Continuous Phoneme Speech Recognition
NASA Astrophysics Data System (ADS)
Fujii, Yasuhisa; Yamamoto, Kazumasa; Nakagawa, Seiichi
In this paper, we propose Hidden Conditional Neural Fields (HCNF) for continuous phoneme speech recognition, which are a combination of Hidden Conditional Random Fields (HCRF) and a Multi-Layer Perceptron (MLP), and inherit their merits, namely, the discriminative property for sequences from HCRF and the ability to extract non-linear features from an MLP. HCNF can incorporate many types of features from which non-linear features can be extracted, and is trained by sequential criteria. We first present the formulation of HCNF and then examine three methods to further improve automatic speech recognition using HCNF: an objective function that explicitly considers training errors, a hierarchical tandem-style feature, and a deep non-linear feature extractor for the observation function. We show that HCNF can be trained realistically without any initial model and outperforms HCRF and the triphone hidden Markov model trained by the minimum phone error (MPE) criterion, based on experimental results for continuous English phoneme recognition on the TIMIT core test set and Japanese phoneme recognition on the IPA 100 test set.
Integration of Continuous-Time Dynamics in a Spiking Neural Network Simulator
Hahne, Jan; Dahmen, David; Schuecker, Jannis; Frommer, Andreas; Bolten, Matthias; Helias, Moritz; Diesmann, Markus
2017-01-01
Contemporary modeling approaches to the dynamics of neural networks include two important classes of models: biologically grounded spiking neuron models and functionally inspired rate-based units. We present a unified simulation framework that supports the combination of the two for multi-scale modeling, enables the quantitative validation of mean-field approaches by spiking network simulations, and provides an increase in reliability by usage of the same simulation code and the same network model specifications for both model classes. While most spiking simulations rely on the communication of discrete events, rate models require time-continuous interactions between neurons. Exploiting the conceptual similarity to the inclusion of gap junctions in spiking network simulations, we arrive at a reference implementation of instantaneous and delayed interactions between rate-based models in a spiking network simulator. The separation of rate dynamics from the general connection and communication infrastructure ensures flexibility of the framework. In addition to the standard implementation we present an iterative approach based on waveform-relaxation techniques to reduce communication and increase performance for large-scale simulations of rate-based models with instantaneous interactions. Finally we demonstrate the broad applicability of the framework by considering various examples from the literature, ranging from random networks to neural-field models. The study provides the prerequisite for interactions between rate-based and spiking models in a joint simulation. PMID:28596730
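The time-continuous, delayed interactions between rate-based units discussed above can be illustrated with a minimal Euler integration of a delayed rate network. This is a self-contained sketch of the mathematical problem the framework solves, not the NEST implementation, and the stable weight matrix used in the example is an arbitrary choice.

```python
import numpy as np

def simulate_rate_network(W, d_steps=10, T_steps=2000, dt=0.1, tau=1.0, seed=0):
    """Euler integration of tau * dx/dt = -x + W @ tanh(x(t - d)),
    keeping the full history so the delayed state can be read back."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    x = np.zeros((T_steps + 1, n))
    x[0] = 0.1 * rng.standard_normal(n)          # small random initial state
    for t in range(T_steps):
        x_del = x[max(t - d_steps, 0)]           # delayed state (held at start)
        x[t + 1] = x[t] + dt / tau * (-x[t] + W @ np.tanh(x_del))
    return x
```

For a contractive coupling (spectral radius of W below one), trajectories decay to the quiescent fixed point regardless of the delay, which is a quick sanity check on the integrator.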
Track and Field: Technique Through Dynamics.
ERIC Educational Resources Information Center
Ecker, Tom
This book was designed to aid in applying the laws of dynamics to the sport of track and field, event by event. It begins by tracing the history of the discoveries of the laws of motion and the principles of dynamics, with explanations of commonly used terms derived from the vocabularies of the physical sciences. The principles and laws of…
Autonomic neural control of heart rate during dynamic exercise: revisited
White, Daniel W; Raven, Peter B
2014-01-01
The accepted model of autonomic control of heart rate (HR) during dynamic exercise indicates that the initial increase is entirely attributable to the withdrawal of parasympathetic nervous system (PSNS) activity and that subsequent increases in HR are entirely attributable to increases in cardiac sympathetic activity. In the present review, we sought to re-evaluate the model of autonomic neural control of HR in humans during progressive increases in dynamic exercise workload. We analysed data from both new and previously published studies involving baroreflex stimulation and pharmacological blockade of the autonomic nervous system. Results indicate that the PSNS remains functionally active throughout exercise and that increases in HR from rest to maximal exercise result from an increasing workload-related transition from a 4 : 1 vagal–sympathetic balance to a 4 : 1 sympatho–vagal balance. Furthermore, the beat-to-beat autonomic reflex control of HR was found to be dependent on the ability of the PSNS to modulate the HR as it was progressively restrained by increasing workload-related sympathetic nerve activity. In conclusion: (i) increases in exercise workload-related HR are not caused by a total withdrawal of the PSNS followed by an increase in sympathetic tone; (ii) reciprocal antagonism is key to the transition from vagal to sympathetic dominance, and (iii) resetting of the arterial baroreflex causes immediate exercise-onset reflexive increases in HR, which are parasympathetically mediated, followed by slower increases in sympathetic tone as workloads are increased. PMID:24756637
Control of Complex Dynamic Systems by Neural Networks
NASA Technical Reports Server (NTRS)
Spall, James C.; Cristion, John A.
1993-01-01
This paper considers the use of neural networks (NN's) in controlling a nonlinear, stochastic system with unknown process equations. The NN is used to model the resulting unknown control law. The approach here is based on using the output error of the system to train the NN controller without the need to construct a separate model (NN or other type) for the unknown process dynamics. To implement such a direct adaptive control approach, it is required that connection weights in the NN be estimated while the system is being controlled. As a result of the feedback of the unknown process dynamics, however, it is not possible to determine the gradient of the loss function for use in standard (back-propagation-type) weight estimation algorithms. Therefore, this paper considers the use of a new stochastic approximation algorithm for this weight estimation, which is based on a 'simultaneous perturbation' gradient approximation that only requires the system output error. It is shown that this algorithm can greatly enhance the efficiency over more standard stochastic approximation algorithms based on finite-difference gradient approximations.
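The simultaneous-perturbation gradient approximation at the heart of the weight-estimation algorithm can be sketched as a generic SPSA minimizer: the full gradient is estimated from only two loss evaluations per step, regardless of the parameter dimension. The quadratic test objective and the gain sequences below (Spall's standard decay exponents) are illustrative choices, not the paper's controller setup.

```python
import numpy as np

def spsa_minimize(loss, theta0, a=0.2, c=0.1, n_iter=1000, seed=0):
    """Simultaneous perturbation stochastic approximation (SPSA)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float).copy()
    for k in range(1, n_iter + 1):
        ak = a / k ** 0.602                      # step-size gain sequence
        ck = c / k ** 0.101                      # perturbation-size sequence
        delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher directions
        # two-sided difference along a single random direction estimates
        # every gradient component at once (divide elementwise by delta)
        g_hat = (loss(theta + ck * delta) - loss(theta - ck * delta)) \
                / (2.0 * ck) * (1.0 / delta)
        theta -= ak * g_hat
    return theta
```

The appeal in the control setting is exactly this two-evaluations-per-iteration cost: a finite-difference gradient would need two loss evaluations per weight, which is prohibitive when each evaluation means running the controlled system.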
Moving to higher ground: The dynamic field theory and the dynamics of visual cognition
Johnson, Jeffrey S.; Spencer, John P.; Schöner, Gregor
2009-01-01
In the present report, we describe a new dynamic field theory that captures the dynamics of visuo-spatial cognition. This theory grew out of the dynamic systems approach to motor control and development, and is grounded in neural principles. The initial application of dynamic field theory to issues in visuo-spatial cognition extended concepts of the motor approach to decision making in a sensori-motor context, and, more recently, to the dynamics of spatial cognition. Here we extend these concepts still further to address topics in visual cognition, including visual working memory for non-spatial object properties, the processes that underlie change detection, and the ‘binding problem’ in vision. In each case, we demonstrate that the general principles of the dynamic field approach can unify findings in the literature and generate novel predictions. We contend that the application of these concepts to visual cognition avoids the pitfalls of reductionist approaches in cognitive science, and points toward a formal integration of brains, bodies, and behavior. PMID:19173013
Advanced Neural Network Modeling of Synthetic Jet Flow Fields
2006-03-01
The purpose of this research was to continue development of a neural network-based, lumped deterministic source term (LDST) approximation module for...main exploration involved the grid sensitivity of the neural network model. A second task was originally planned on the portability of the approach to
Traveling waves and breathers in an excitatory-inhibitory neural field
NASA Astrophysics Data System (ADS)
Folias, Stefanos E.
2017-03-01
We study existence and stability of traveling activity bump solutions in an excitatory-inhibitory (E-I) neural field with Heaviside firing rate functions by deriving existence conditions for traveling bumps and an Evans function to analyze their spectral stability. Subsequently, we show that these existence and stability results reduce, in the limit of wave speed c → 0, to the equivalent conditions developed for the stationary bump case. Using the results for the stationary bump case, we show that drift bifurcations of stationary bumps serve as a mechanism for generating traveling bump solutions in the E-I neural field as parameters are varied. Furthermore, we explore the interrelations between stationary and traveling types of bumps and breathers (time-periodic oscillatory bumps) by bridging together analytical and simulation results for stationary and traveling bumps and their bifurcations in a region of parameter space. Interestingly, we find evidence for a codimension-2 drift-Hopf bifurcation occurring as two parameters, the inhibitory time constant τ and the I-to-I synaptic connection strength w̄_ii, are varied and show that the codimension-2 point serves as an organizing center for the dynamics of these four types of spatially localized solutions. Additionally, we describe a case involving subcritical bifurcations that lead to traveling waves and breathers as τ is varied.
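The stationary bumps underlying this analysis can be reproduced numerically with an Amari-type neural field: a Heaviside firing rate and a "Mexican hat" kernel (local excitation, longer-range inhibition). The sketch below is a single-population illustration of bump persistence, simpler than the paper's two-population E-I model, and the kernel amplitudes and threshold are arbitrary illustrative values.

```python
import numpy as np

def simulate_field(n=256, L=20.0, theta=0.1, steps=400, dt=0.1):
    """Euler integration of du/dt = -u + w * H(u - theta) on a periodic domain,
    with the spatial convolution done by FFT."""
    x = np.linspace(-L / 2, L / 2, n, endpoint=False)
    dx = L / n
    # Mexican-hat kernel: narrow excitation minus broad inhibition
    w = 1.5 * np.exp(-x**2) - 0.5 * np.exp(-x**2 / 9)
    w_hat = np.fft.fft(np.fft.ifftshift(w))      # kernel centred at index 0
    u = np.where(np.abs(x) < 1.0, 2 * theta, -theta)  # seed a localized bump
    for _ in range(steps):
        f = (u > theta).astype(float)            # Heaviside firing rate
        conv = dx * np.fft.ifft(w_hat * np.fft.fft(f)).real
        u = u + dt * (-u + conv)
    return x, u
```

With these parameters the seeded activity relaxes to a stationary bump: superthreshold in a localized region, quiescent elsewhere, which is the starting configuration whose drift and Hopf instabilities the paper analyzes.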
Heart fields: spatial polarity and temporal dynamics.
Abu-Issa, Radwan
2014-02-01
In chick and mouse, heart fields undergo dynamic morphological spatiotemporal changes during heart tube formation. Here, the dynamic change in spatial polarity of such fields is discussed and a new perspective on the heart fields is proposed. The heart progenitor cells delaminate through the primitive streak and migrate in a semicircular trajectory craniolaterally, forming the bilateral heart fields as part of the splanchnic mesoderm. They switch their polarity from anteroposterior to mediolateral. The posterior descent of the anterior intestinal portal inverts the newly formed heart fields' mediolateral polarity into lateromedial through a 125° bending. The heart fields then revert to their original anteroposterior polarity and fuse at the midline, forming a semi-heart tube and completing their half-circle movement. Several names and roles have been assigned to different portions of the heart fields: posterior versus anterior, first versus second, and primary versus secondary heart field. The posterior and anterior heart fields define basically physical fields that form the inflow-outflow axis of the heart tube. The first and second heart fields are, in contrast, temporal fields of differentiating cardiomyocytes expressing myosin light chain 2a and of undifferentiated, proliferating precardiac mesoderm expressing the Isl1 gene, respectively. The two markers present a complementary pattern and are expressed transiently in all myocardial lineages. Thus, Isl1 is not restricted to a portion of the heart field or to one of the two heart lineages, as has often been assumed.
Magnetic Field Control of Combustion Dynamics
NASA Astrophysics Data System (ADS)
Barmina, I.; Valdmanis, R.; Zake, M.; Kalis, H.; Marinaki, M.; Strautins, U.
2016-08-01
Experimental studies and mathematical modelling of the effects of a magnetic field on combustion dynamics during thermo-chemical conversion of biomass are carried out with the aim of controlling the processes developing in the reaction zone of a swirling flame. The joint research includes estimating the magnetic field's effect on the formation of swirling flame dynamics and on flame temperature and composition, together with an analysis of the field's effects on the flame characteristics. The experiments have shown that the magnetic field influences the flow velocity components by enhancing swirl motion in the flame reaction zone, with swirl-enhanced mixing of the axial flow of volatiles with the cold air swirl, by cooling the flame reaction zone, and by limiting the thermo-chemical conversion of volatiles. Mathematical modelling of the magnetic field's effect on the formation of the flame dynamics confirms that the electromagnetic force induced by the electric current surrounding the flame increases flow vorticity and thereby enhances mixing of the reactants. The magnetic field's effect on the flame temperature and reaction rates leads to the conclusion that the field-enhanced increase of flow vorticity results in flame cooling by limiting the chemical conversion of the reactants.
Generalized activity equations for spiking neural network dynamics
Buice, Michael A.; Chow, Carson C.
2013-01-01
Much progress has been made in uncovering the computational capabilities of spiking neural networks. However, spiking neurons will always be more expensive to simulate compared to rate neurons because of the inherent disparity in time scales—the spike duration time is much shorter than the inter-spike time, which is much shorter than any learning time scale. In numerical analysis, this is a classic stiff problem. Spiking neurons are also much more difficult to study analytically. One possible approach to making spiking networks more tractable is to augment mean field activity models with some information about spiking correlations. For example, such a generalized activity model could carry information about spiking rates and correlations between spikes self-consistently. Here, we will show how this can be accomplished by constructing a complete formal probabilistic description of the network and then expanding around a small parameter such as the inverse of the number of neurons in the network. The mean field theory of the system gives a rate-like description. The first order terms in the perturbation expansion keep track of covariances. PMID:24298252
Filling the Gap on Developmental Change: Tests of a Dynamic Field Theory of Spatial Cognition
ERIC Educational Resources Information Center
Schutte, Anne R.; Spencer, John P.
2010-01-01
In early childhood, there is a developmental transition in spatial memory biases. Before the transition, children's memory responses are biased toward the midline of a space, while after the transition responses are biased away from midline. The Dynamic Field Theory (DFT) posits that changes in neural interaction and changes in how children…
Exploring neural cell dynamics with digital holographic microscopy.
Marquet, P; Depeursinge, C; Magistretti, P J
2013-01-01
In this review, we summarize how the new concept of digital optics applied to the field of holographic microscopy has allowed the development of a reliable and flexible digital holographic quantitative phase microscopy (DH-QPM) technique at the nanoscale particularly suitable for cell imaging. Particular emphasis is placed on the original biological information provided by the quantitative phase signal. We present the most relevant DH-QPM applications in the field of cell biology, including automated cell counts, recognition, classification, three-dimensional tracking, discrimination between physiological and pathophysiological states, and the study of cell membrane fluctuations at the nanoscale. In the last part, original results show how DH-QPM can address two important issues in the field of neurobiology, namely, multiple-site optical recording of neuronal activity and noninvasive visualization of dendritic spine dynamics resulting from a full digital holographic microscopy tomographic approach.
Neural Dynamics of Learning Sound—Action Associations
McNamara, Adam; Buccino, Giovanni; Menz, Mareike M.; Gläscher, Jan; Wolbers, Thomas; Baumgärtner, Annette; Binkofski, Ferdinand
2008-01-01
A motor component is pre-requisite to any communicative act as one must inherently move to communicate. To learn to make a communicative act, the brain must be able to dynamically associate arbitrary percepts to the neural substrate underlying the pre-requisite motor activity. We aimed to investigate whether brain regions involved in complex gestures (ventral pre-motor cortex, Brodmann Area 44) were involved in mediating association between novel abstract auditory stimuli and novel gestural movements. In a functional resonance imaging (fMRI) study we asked participants to learn associations between previously unrelated novel sounds and meaningless gestures inside the scanner. We use functional connectivity analysis to eliminate the often present confound of ‘strategic covert naming’ when dealing with BA44 and to rule out effects of non-specific reductions in signal. Brodmann Area 44, a region incorporating Broca's region showed strong, bilateral, negative correlation of BOLD (blood oxygen level dependent) response with learning of sound-action associations during data acquisition. Left-inferior-parietal-lobule (l-IPL) and bilateral loci in and around visual area V5, right-orbital-frontal-gyrus, right-hippocampus, left-para-hippocampus, right-head-of-caudate, right-insula and left-lingual-gyrus also showed decreases in BOLD response with learning. Concurrent with these decreases in BOLD response, an increasing connectivity between areas of the imaged network as well as the right-middle-frontal-gyrus with rising learning performance was revealed by a psychophysiological interaction (PPI) analysis. The increasing connectivity therefore occurs within an increasingly energy efficient network as learning proceeds. Strongest learning related connectivity between regions was found when analysing BA44 and l-IPL seeds. The results clearly show that BA44 and l-IPL is dynamically involved in linking gesture and sound and therefore provides evidence that one of the
The influence of electric fields on hippocampal neural progenitor cells.
Ariza, Carlos Atico; Fleury, Asha T; Tormos, Christian J; Petruk, Vadim; Chawla, Sagar; Oh, Jisun; Sakaguchi, Donald S; Mallapragada, Surya K
2010-12-01
The differentiation and proliferation of neural stem/progenitor cells (NPCs) depend on various in vivo environmental factors or cues, which may include an endogenous electrical field (EF), as observed during nervous system development and repair. In this study, we investigate the morphologic, phenotypic, and mitotic alterations of adult hippocampal NPCs that occur when exposed to two EFs of estimated endogenous strengths. NPCs treated with a 437 mV/mm direct current (DC) EF aligned perpendicularly to the EF vector and had a greater tendency to differentiate into neurons, but not into oligodendrocytes or astrocytes, compared to controls. Furthermore, NPC process growth was promoted perpendicularly and inhibited anodally in the 437 mV/mm DC EF. Yet fewer cells were observed in the DC EF, which in part was due to a decrease in cell viability. The other EF applied was a 46 mV/mm alternating current (AC) EF. However, the 46 mV/mm AC EF showed no major differences in alignment or differentiation, compared to control conditions. For both EF treatments, the percentage of mitotic cells during the last 14 h of the experiment was statistically similar to controls. Reported here, to our knowledge, is the first evidence of adult NPC differentiation being affected by an EF in vitro. Further investigation and application of EFs on stem cells is warranted to elucidate the utility of EFs to control phenotypic behavior. With progress, the use of EFs may be engineered to control differentiation and target the growth of transplanted cells in a stem cell-based therapy to treat nervous system disorders.
Research on quasi-dynamic calibration model of plastic sensitive element based on neural networks
NASA Astrophysics Data System (ADS)
Wang, Fang; Kong, Deren; Yang, Lixia; Zhang, Zouzou
2017-08-01
The quasi-dynamic calibration accuracy of a plastic sensitive element depends on the accuracy of the fitted model relating pressure to deformation. Exploiting the excellent nonlinear mapping ability of an RBF (Radial Basis Function) neural network, a calibration model is established in this paper that uses the peak pressure as the input and the deformation of the plastic sensitive element as the output. Calibration experiments on a batch of copper cylinders were carried out on a quasi-dynamic pressure calibration device over a pressure range of 200 MPa to 700 MPa. The experimental data were acquired with a standard pressure monitoring system. The neural-network-based quasi-dynamic calibration model was trained using the MATLAB Neural Network Toolbox. Taking the testing samples as the reference, the prediction accuracy of the neural network model was compared with an exponential fitting model and a second-order polynomial fitting model. The results show that the predictions of the neural network model are closest to the testing samples: the accuracy of the neural-network-based prediction model is better than 0.5%, one order of magnitude higher than that of the second-order polynomial fitting model and two orders higher than that of the exponential fitting model. The quasi-dynamic calibration model between pressure peak and deformation of the plastic sensitive element, based on a neural network, provides an important basis for creating a higher-accuracy quasi-dynamic calibration table.
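The RBF calibration idea can be sketched with a few lines of code. The data below are a synthetic stand-in for the copper-cylinder measurements (the paper's data are not reproduced here), and the Gaussian kernel width is a hypothetical choice:

```python
import numpy as np

def rbf_fit(x, y, eps=2.0):
    """Gaussian RBF interpolant with one center per training sample."""
    phi = np.exp(-(eps * np.abs(x[:, None] - x[None, :])) ** 2)
    w = np.linalg.solve(phi, y)            # interpolation weights
    def predict(xq):
        xq = np.atleast_1d(xq)
        return np.exp(-(eps * np.abs(xq[:, None] - x[None, :])) ** 2) @ w
    return predict

# Synthetic stand-in for the calibration data: peak pressure (MPa)
# versus deformation, with a mildly nonlinear true response.
p = np.linspace(200.0, 700.0, 11)
h = 1.2e-3 * p + 4.0e-7 * p ** 2           # hypothetical deformation curve
model = rbf_fit(p / 100.0, h)              # rescale input for conditioning

print(model(475.0 / 100.0))                # predicted deformation at 475 MPa
```

At the training pressures the interpolant reproduces the deformations exactly; between them it supplies the smooth pressure-deformation mapping used for the calibration table.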
Neural RNA as a principal dynamic information carrier in a neuron
NASA Astrophysics Data System (ADS)
Berezin, Andrey A.
1999-11-01
A quantum mechanical approach has been used to develop a model of neural ribonucleic acid molecule dynamics. Macro- and micro-scale Fermi-Pasta-Ulam recurrence is considered as a principal information carrier in a neuron.
Islet1 derivatives in the heart are of both neural crest and second heart field origin
Engleka, Kurt A.; Manderfield, Lauren J.; Brust, Rachael D.; Li, Li; Cohen, Ashley; Dymecki, Susan M.; Epstein, Jonathan A.
2012-01-01
Rationale Islet1 (Isl1) has been proposed as a marker of cardiac progenitor cells derived from the second heart field and is utilized to identify and purify cardiac progenitors from murine and human specimens for ex vivo expansion. The use of Isl1 as a specific second heart field marker depends on its exclusion from other cardiac lineages such as neural crest. Objective Determine whether Isl1 is expressed by cardiac neural crest. Methods and Results We used an intersectional fate-mapping system employing the RC::FrePe allele, which reports dual Flpe and Cre recombination. Combining Isl1Cre/+, a second heart field (SHF) driver, and Wnt1::Flpe, a neural crest driver, with RC::FrePe reveals that some Isl1 derivatives in the cardiac outflow tract derive from Wnt1-expressing neural crest progenitors. In contrast, no overlap was observed between Wnt1-derived neural crest and an alternative second heart field driver, Mef2c-AHF-Cre. Conclusions Isl1 is not restricted to second heart field progenitors in the developing heart but also labels cardiac neural crest. The intersection of Isl1 and Wnt1 lineages within the heart provides a caveat to using Isl1 as an exclusive second heart field cardiac progenitor marker and suggests that some Isl1-expressing progenitor cells derived from embryos, ES or iPS cultures may be of neural crest lineage. PMID:22394517
Nanoscale live cell optical imaging of the dynamics of intracellular microvesicles in neural cells.
Lee, Sohee; Heo, Chaejeong; Suh, Minah; Lee, Young Hee
2013-11-01
Recent advances in biotechnology and imaging technology have provided great opportunities to investigate cellular dynamics. Conventional imaging methods such as transmission electron microscopy, scanning electron microscopy, and atomic force microscopy are powerful techniques for cellular imaging, even at the nanoscale level. However, these techniques have limited application in live cell imaging because of the experimental preparation required, namely cell fixation, and the innately small field of view. In this study, we developed a nanoscale optical imaging (NOI) system that combines a conventional optical microscope with a high-resolution dark-field condenser (CytoViva, Inc.) and halogen illuminator. The NOI system's maximum resolution for live cell imaging is around 100 nm. We utilized NOI to investigate the dynamics of intracellular microvesicles of neural cells without immunocytological analysis. In particular, we studied direct, active random, and moderate random dynamic motions of intracellular microvesicles and visualized lysosomal vesicle changes after treatment of cells with a lysosomal inhibitor (NH4Cl). Our results indicate that the NOI system is a feasible, high-resolution optical imaging system for small organelles in live cells that does not require complicated optics or immunocytological staining processes.
Hellyer, Peter J.; Scott, Gregory; Shanahan, Murray; Sharp, David J.
2015-01-01
Current theory proposes that healthy neural dynamics operate in a metastable regime, where brain regions interact to simultaneously maximize integration and segregation. Metastability may confer important behavioral properties, such as cognitive flexibility. It is increasingly recognized that neural dynamics are constrained by the underlying structural connections between brain regions. An important challenge is, therefore, to relate structural connectivity, neural dynamics, and behavior. Traumatic brain injury (TBI) is a pre-eminent structural disconnection disorder whereby traumatic axonal injury damages large-scale connectivity, producing characteristic cognitive impairments, including slowed information processing speed and reduced cognitive flexibility, that may be a result of disrupted metastable dynamics. Therefore, TBI provides an experimental and theoretical model to examine how metastable dynamics relate to structural connectivity and cognition. Here, we use complementary empirical and computational approaches to investigate how metastability arises from the healthy structural connectome and relates to cognitive performance. We found reduced metastability in large-scale neural dynamics after TBI, measured with resting-state functional MRI. This reduction in metastability was associated with damage to the connectome, measured using diffusion MRI. Furthermore, decreased metastability was associated with reduced cognitive flexibility and information processing. A computational model, defined by empirically derived connectivity data, demonstrates how behaviorally relevant changes in neural dynamics result from structural disconnection. Our findings suggest how metastable dynamics are important for normal brain function and contingent on the structure of the human connectome. PMID:26085630
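A common operationalization of metastability in this line of work is the standard deviation over time of the Kuramoto order parameter computed from regional phase signals. The toy oscillator data below are invented for illustration and do not reproduce the paper's fMRI analysis:

```python
import numpy as np

def metastability(phases):
    """Std. dev. over time of the Kuramoto order parameter R(t),
    where phases has shape (T, N) for N regions. High variability of
    R(t) means the system repeatedly visits and escapes transiently
    synchronized states."""
    r = np.abs(np.exp(1j * phases).mean(axis=1))   # R(t) in [0, 1]
    return r.std()

rng = np.random.default_rng(0)
t = np.linspace(0.0, 60.0, 6000)[:, None]
# Fully locked oscillators: R(t) is constant, metastability ~ 0.
locked = 2 * np.pi * 1.0 * t * np.ones((1, 8))
# Oscillators with slightly different frequencies drift in and out of
# phase, so R(t) waxes and wanes.
drift = 2 * np.pi * (1.0 + 0.05 * rng.standard_normal(8)) * t
print(metastability(locked), metastability(drift))
```

On this measure, damage to the connectome that locks regions together (or decouples them entirely) would both register as reduced metastability.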
Dynamic Photorefractive Memory and its Application for Opto-Electronic Neural Networks.
NASA Astrophysics Data System (ADS)
Sasaki, Hironori
This dissertation describes the analysis of photorefractive crystal dynamics and its application to opto-electronic neural network systems. The realization of the dynamic photorefractive memory is investigated in terms of the following aspects: fast memory update, uniform grating multiplexing schedules and the prevention of the partial erasure of existing gratings. The fast memory update is realized by the selective erasure process, which superimposes a new grating on the original one with an appropriate phase shift. The dynamics of the selective erasure process is analyzed using the first-order photorefractive material equations and experimentally confirmed. The effects of beam coupling and fringe bending on the selective erasure dynamics are also analyzed by numerically solving a combination of coupled wave equations and the photorefractive material equation. An incremental recording technique is proposed as a uniform grating multiplexing schedule and compared with the conventional scheduled recording technique in terms of phase distribution in the presence of an external dc electric field, as well as the image gray scale dependence. The theoretical analysis and experimental results prove the superiority of the incremental recording technique over scheduled recording. A novel recirculating information memory architecture is proposed and experimentally demonstrated to prevent partial degradation of the existing gratings when accessing the memory. Gratings are circulated through a memory feedback loop based on the incremental recording dynamics, yielding robust read/write/erase capabilities. The dynamic photorefractive memory is applied to opto-electronic neural network systems. A module architecture based on the page-oriented dynamic photorefractive memory is proposed. This module architecture can implement two complementary interconnection organizations, fan-in and fan-out. The module system scalability and the learning capabilities are theoretically
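The selective erasure process can be illustrated with a toy relaxation model. The single-pole equation below is a simplified stand-in for the first-order photorefractive material equation, and the parameters are illustrative, not taken from the dissertation:

```python
def record(a0, a_target, steps, dt=0.01, tau=1.0):
    """Single-pole relaxation of the complex grating amplitude:
        da/dt = (a_target - a) / tau
    a simplified stand-in for the first-order photorefractive material
    equation. a_target is set by the writing interference pattern."""
    a = complex(a0)
    for _ in range(steps):
        a += dt * (a_target - a) / tau
    return a

g1 = record(0.0, 1.0, steps=200)   # write a grating toward saturation
# Selective erasure: superimpose the same grating with a pi phase shift
# (target amplitude -1); the net amplitude relaxes back through zero.
g2 = record(g1, -1.0, steps=62)    # step count chosen so |a| passes ~0
print(abs(g1), abs(g2))
```

Continuing the phase-shifted exposure past this point would write the new grating in place of the old one, which is the mechanism behind fast memory update.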
Phantom field dynamics in loop quantum cosmology
Samart, Daris; Gumjudpai, Burin
2007-08-15
We consider a dynamical system of a phantom scalar field under an exponential potential in the background of loop quantum cosmology. In our analysis, there is neither a stable node nor an unstable repeller node, but only two saddle points; hence there is no big rip singularity. Physical solutions always possess potential energy greater than the magnitude of the negative kinetic energy. We find that the universe bounces after accelerating, even under phantom field domination. After bouncing, the universe finally enters the oscillatory regime.
Imaging electric field dynamics with graphene optoelectronics
Horng, Jason; Balch, Halleh B.; McGuire, Allister F.; Tsai, Hsin-Zon; Forrester, Patrick R.; Crommie, Michael F.; Cui, Bianxiao; Wang, Feng
2016-01-01
The use of electric fields for signalling and control in liquids is widespread, spanning bioelectric activity in cells to electrical manipulation of microstructures in lab-on-a-chip devices. However, an appropriate tool to resolve the spatio-temporal distribution of electric fields over a large dynamic range has yet to be developed. Here we present a label-free method to image local electric fields in real time and under ambient conditions. Our technique combines the unique gate-variable optical transitions of graphene with a critically coupled planar waveguide platform that enables highly sensitive detection of local electric fields with a voltage sensitivity of a few microvolts, a spatial resolution of tens of micrometres and a frequency response over tens of kilohertz. Our imaging platform enables parallel detection of electric fields over a large field of view and can be tailored to broad applications spanning lab-on-a-chip device engineering to analysis of bioelectric phenomena. PMID:27982125
The neural dynamics of song syntax in songbirds
NASA Astrophysics Data System (ADS)
Jin, Dezhe
2010-03-01
The songbird is "the hydrogen atom" of the neuroscience of complex, learned vocalizations such as human speech. Songs of the Bengalese finch consist of sequences of syllables. While syllables are temporally stereotypical, syllable sequences can vary and follow complex, probabilistic syntactic rules, which are rudimentarily similar to grammars in human language. The songbird brain is accessible to experimental probes, and is understood well enough to construct biologically constrained, predictive computational models. In this talk, I will discuss the structure and dynamics of the neural networks underlying the stereotypy of birdsong syllables and the flexibility of syllable sequences. Recent experiments and computational models suggest that a syllable is encoded in a chain network of projection neurons in the premotor nucleus HVC (proper name). Precisely timed spikes propagate along the chain, driving vocalization of the syllable through downstream nuclei. Through a computational model, I show that variable syllable sequences can be generated through spike propagation in a network in HVC in which the syllable-encoding chain networks are connected into a branching chain pattern. The neurons mutually inhibit each other through the inhibitory HVC interneurons, and are driven by external inputs from nuclei upstream of HVC. At a branching point that connects the final group of a chain to the first groups of several chains, the spike activity selects one branch to continue the propagation. The selection is probabilistic, and is due to a winner-take-all mechanism mediated by the inhibition and noise. The model predicts that the syllable sequences statistically follow partially observable Markov models. Experimental results supporting this and other predictions of the model will be presented. We suggest that the syntax of birdsong syllable sequences is embedded in the connection patterns of HVC projection neurons.
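The probabilistic branching of syllable sequences described above behaves like a Markov chain over syllables. The transition table below is invented for illustration and is not taken from Bengalese finch data:

```python
import random

# Invented syllable syntax: each chain ending is a branching point where
# the next syllable is chosen probabilistically; 'E' ends the bout.
TRANSITIONS = {
    'a': [('b', 0.7), ('c', 0.3)],
    'b': [('b', 0.4), ('d', 0.6)],   # self-transition = repeated syllable
    'c': [('d', 1.0)],
    'd': [('a', 0.2), ('E', 0.8)],
}

def sing(start='a', rng=None):
    """Sample one syllable sequence from the branching Markov model."""
    rng = rng or random.Random()
    seq, s = [start], start
    while True:
        nxt, probs = zip(*TRANSITIONS[s])
        s = rng.choices(nxt, weights=probs)[0]
        if s == 'E':
            break
        seq.append(s)
    return ''.join(seq)

print(sing(rng=random.Random(1)))
```

In the biological model, each branch choice corresponds to the winner-take-all competition between chain networks at a branching point, with inhibition and noise setting the branch probabilities.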
The relevance of network micro-structure for neural dynamics
Pernice, Volker; Deger, Moritz; Cardanobile, Stefano; Rotter, Stefan
2013-01-01
The activity of cortical neurons is determined by the input they receive from presynaptic neurons. Many previous studies have investigated how specific aspects of the statistics of the input affect the spike trains of single neurons and neurons in recurrent networks. However, typically very simple random network models are considered in such studies. Here we use a recently developed algorithm to construct networks based on a quasi-fractal probability measure which are much more variable than commonly used network models, and which therefore promise to sample the space of recurrent networks in a more exhaustive fashion than previously possible. We use the generated graphs as the underlying network topology in simulations of networks of integrate-and-fire neurons in an asynchronous and irregular state. Based on an extensive dataset of networks and neuronal simulations we assess statistical relations between features of the network structure and the spiking activity. Our results highlight the strong influence that some details of the network structure have on the activity dynamics of both single neurons and populations, even if some global network parameters are kept fixed. We observe specific and consistent relations between activity characteristics like spike-train irregularity or correlations and network properties, for example the distributions of the numbers of in- and outgoing connections or clustering. Exploiting these relations, we demonstrate that it is possible to estimate structural characteristics of the network from activity data. We also assess higher order correlations of spiking activity in the various networks considered here, and find that their occurrence strongly depends on the network structure. These results provide directions for further theoretical studies on recurrent networks, as well as new ways to interpret spike train recordings from neural circuits. PMID:23761758
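Two of the activity characteristics mentioned, spike-train irregularity and pairwise correlations, can be computed as follows. The spike trains are synthetic and only illustrate the statistics, not the paper's simulations:

```python
import numpy as np

def cv_isi(spike_times):
    """Coefficient of variation of inter-spike intervals:
    ~1 for Poisson-like irregular firing, ~0 for clock-like firing."""
    isi = np.diff(np.sort(spike_times))
    return isi.std() / isi.mean()

def pairwise_corr(counts):
    """Mean pairwise Pearson correlation of binned spike counts (T, N)."""
    c = np.corrcoef(counts.T)
    return c[np.triu_indices(c.shape[0], k=1)].mean()

rng = np.random.default_rng(42)
poisson_train = np.cumsum(rng.exponential(0.1, size=1000))  # irregular
regular_train = np.arange(1000) * 0.1                       # clock-like
indep_counts = rng.poisson(5.0, size=(200, 4))              # uncorrelated
print(cv_isi(poisson_train), cv_isi(regular_train),
      pairwise_corr(indep_counts))
```

Relating such summary statistics back to in- and out-degree distributions or clustering is the inverse problem the paper exploits to estimate network structure from activity.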
Whole-Brain Neural Dynamics of Probabilistic Reward Prediction.
Bach, Dominik R; Symmonds, Mkael; Barnes, Gareth; Dolan, Raymond J
2017-04-05
Predicting future reward is paramount to performing an optimal action. Although a number of brain areas are known to encode such predictions, a detailed account of how the associated representations evolve over time is lacking. Here, we address this question using human magnetoencephalography (MEG) and multivariate analyses of instantaneous activity in reconstructed sources. We overtrained participants on a simple instrumental reward learning task where geometric cues predicted a distribution of possible rewards, from which a sample was revealed 2000 ms later. We show that predicted mean reward (i.e., expected value), and predicted reward variability (i.e., economic risk), are encoded distinctly. Early on, representations of mean reward are seen in parietal and visual areas, and later in frontal regions with orbitofrontal cortex emerging last. Strikingly, an encoding of reward variability emerges simultaneously in parietal/sensory and frontal sources and later than mean reward encoding. An orbitofrontal variability encoding emerged around the same time as that seen for mean reward. Crucially, cross-prediction showed that mean reward and variability representations are distinct and also revealed that instantaneous representations become more stable over time. Across sources, the best fitting metric for variability signals was coefficient of variation (rather than SD or variance), but distinct best metrics were seen for individual brain regions. Our data demonstrate how a dynamic encoding of probabilistic reward prediction unfolds in the brain both in time and space.SIGNIFICANCE STATEMENT Predicting future reward is paramount to optimal behavior. To gain insight into the underlying neural computations, we investigate how reward representations in the brain arise over time. Using magnetoencephalography, we show that a representation of predicted mean reward emerges early in parietal/sensory regions and later in frontal cortex. In contrast, predicted reward variability
Cortical geometry as a determinant of brain activity eigenmodes: Neural field analysis
NASA Astrophysics Data System (ADS)
Gabay, Natasha C.; Robinson, P. A.
2017-09-01
Perturbation analysis of neural field theory is used to derive eigenmodes of neural activity on a cortical hemisphere, which have previously been calculated numerically and found to be close analogs of spherical harmonics, despite heavy cortical folding. The present perturbation method treats cortical folding as a first-order perturbation from a spherical geometry. The first nine spatial eigenmodes on a population-averaged cortical hemisphere are derived and compared with previous numerical solutions. These eigenmodes contribute most to brain activity patterns such as those seen in electroencephalography and functional magnetic resonance imaging. The eigenvalues of these eigenmodes are found to agree with the previous numerical solutions to within their uncertainties. Also in agreement with the previous numerics, all eigenmodes are found to closely resemble spherical harmonics. The first seven eigenmodes exhibit a one-to-one correspondence with their numerical counterparts, with overlaps that are close to unity. The next two eigenmodes overlap the corresponding pair of numerical eigenmodes, having been rotated within the subspace spanned by that pair, likely due to second-order effects. The spatial orientations of the eigenmodes are found to be fixed by gross cortical shape rather than finer-scale cortical properties, which is consistent with the observed intersubject consistency of functional connectivity patterns. However, the eigenvalues depend more sensitively on finer-scale cortical structure, implying that the eigenfrequencies and consequent dynamical properties of functional connectivity depend more strongly on details of individual cortical folding. Overall, these results imply that well-established tools from perturbation theory and spherical harmonic analysis can be used to calculate the main properties and dynamics of low-order brain eigenmodes.
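The overlap between an eigenmode and a spherical harmonic is their inner product over the sphere. The sketch below checks this numerically for zonal harmonics Y_l^0 with a simple quadrature; it illustrates the overlap measure, not the paper's perturbative calculation:

```python
import numpy as np
from numpy.polynomial import legendre

def Y_l0(l, phi):
    """Zonal spherical harmonic Y_l^0 = sqrt((2l+1)/4pi) P_l(cos phi)."""
    c = np.zeros(l + 1)
    c[l] = 1.0
    return np.sqrt((2 * l + 1) / (4 * np.pi)) * legendre.legval(np.cos(phi), c)

# Midpoint quadrature in the polar angle; for m = 0 modes the azimuthal
# integral just contributes a factor 2*pi.
n = 2000
phi = (np.arange(n) + 0.5) * np.pi / n
w = 2 * np.pi * np.sin(phi) * (np.pi / n)

def overlap(l1, l2):
    """Overlap <Y_l1^0, Y_l2^0>: ~1 for l1 == l2, ~0 otherwise -- the
    quantity used to compare eigenmodes against spherical harmonics."""
    return np.sum(Y_l0(l1, phi) * Y_l0(l2, phi) * w)

print(overlap(2, 2), overlap(2, 3))
```

An eigenmode computed on the folded cortex is "close to" a spherical harmonic precisely when this overlap, evaluated on the mapped surface, is near unity.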
A hardware implementation of artificial neural networks using field programmable gate arrays
NASA Astrophysics Data System (ADS)
Won, E.
2007-11-01
An artificial neural network algorithm is implemented using low-cost field programmable gate array hardware. One hidden layer is used in the feed-forward neural network structure in order to discriminate one class of patterns from the other class in real time. In this work, the training of the network is performed in the off-line computing environment and the results of the training are configured to the hardware in order to minimize the latency of the neural computation. With five 8-bit input patterns, six hidden nodes, and one 8-bit output, the implemented hardware neural network makes decisions on a set of input patterns in 11 clock cycles, or less than 200 ns with a 60 MHz clock. The result from the hardware neural computation is well predicted by the off-line computation. This implementation may be used in level 1 hardware triggers in high energy physics experiments.
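The pipeline, training off-line in floating point and running only integer multiply-accumulate operations on chip, can be mimicked in software. The fixed-point format and ReLU activation below are assumptions for illustration; the paper does not specify these details:

```python
import numpy as np

BITS = 8                 # fractional bits of the fixed-point format
SCALE = 1 << BITS

def quantize(x):
    """Convert floating-point values to signed fixed point integers."""
    return np.round(x * SCALE).astype(np.int64)

def forward_fixed(x_q, W1, b1, W2, b2):
    """Integer-only inference: 5 inputs -> 6 hidden (ReLU) -> 1 output.
    Weights are trained in floating point, quantized once, and only
    multiply-accumulate and shifts run at decision time, mirroring the
    offline-train / on-chip-infer split. (ReLU is an assumed activation.)"""
    h = np.maximum(W1 @ x_q + b1 * SCALE, 0) >> BITS   # hidden layer
    return (W2 @ h + b2 * SCALE) >> BITS               # output layer

rng = np.random.default_rng(7)
W1f, b1f = rng.normal(0, 0.5, (6, 5)), rng.normal(0, 0.5, 6)
W2f, b2f = rng.normal(0, 0.5, (1, 6)), rng.normal(0, 0.5, 1)
x = rng.uniform(0, 1, 5)

y_float = W2f @ np.maximum(W1f @ x + b1f, 0) + b2f
y_fixed = forward_fixed(quantize(x), quantize(W1f), quantize(b1f),
                        quantize(W2f), quantize(b2f)) / SCALE
print(y_float, y_fixed)   # agree to within quantization error
```

Each hidden-node multiply-accumulate maps directly to a DSP slice or LUT multiplier on the FPGA, which is what makes the 11-clock-cycle decision latency feasible.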
Topological field theory of dynamical systems
Ovchinnikov, Igor V.
2012-09-15
Here, it is shown that the path-integral representation of any stochastic or deterministic continuous-time dynamical model is a cohomological or Witten-type topological field theory, i.e., a model with global topological supersymmetry (Q-symmetry). As many other supersymmetries, Q-symmetry must be perturbatively stable due to what is generically known as non-renormalization theorems. As a result, all (equilibrium) dynamical models are divided into three major categories: Markovian models with unbroken Q-symmetry, chaotic models with Q-symmetry spontaneously broken on the mean-field level by, e.g., fractal invariant sets (e.g., strange attractors), and intermittent or self-organized critical (SOC) models with Q-symmetry dynamically broken by the condensation of instanton-antiinstanton configurations (earthquakes, avalanches, etc.) SOC is a full-dimensional phase separating chaos and Markovian dynamics. In the deterministic limit, however, antiinstantons disappear and SOC collapses into the 'edge of chaos.' Goldstone theorem stands behind spatio-temporal self-similarity of Q-broken phases known under such names as algebraic statistics of avalanches, 1/f noise, sensitivity to initial conditions, etc. Other fundamental differences of Q-broken phases is that they can be effectively viewed as quantum dynamics and that they must also have time-reversal symmetry spontaneously broken. Q-symmetry breaking in non-equilibrium situations (quenches, Barkhausen effect, etc.) is also briefly discussed.
Durstewitz, Daniel
2017-06-01
The computational and cognitive properties of neural systems are often thought to be implemented in terms of their (stochastic) network dynamics. Hence, recovering the system dynamics from experimentally observed neuronal time series, like multiple single-unit recordings or neuroimaging data, is an important step toward understanding its computations. Ideally, one would not only seek a (lower-dimensional) state space representation of the dynamics, but would wish to have access to its statistical properties and their generative equations for in-depth analysis. Recurrent neural networks (RNNs) are a computationally powerful and dynamically universal formal framework which has been extensively studied from both the computational and the dynamical systems perspective. Here we develop a semi-analytical maximum-likelihood estimation scheme for piecewise-linear RNNs (PLRNNs) within the statistical framework of state space models, which accounts for noise in both the underlying latent dynamics and the observation process. The Expectation-Maximization algorithm is used to infer the latent state distribution, through a global Laplace approximation, and the PLRNN parameters iteratively. After validating the procedure on toy examples, and using inference through particle filters for comparison, the approach is applied to multiple single-unit recordings from the rodent anterior cingulate cortex (ACC) obtained during performance of a classical working memory task, delayed alternation. Models estimated from kernel-smoothed spike time data were able to capture the essential computational dynamics underlying task performance, including stimulus-selective delay activity. The estimated models were rarely multi-stable, however, but rather were tuned to exhibit slow dynamics in the vicinity of a bifurcation point. In summary, the present work advances a semi-analytical (thus reasonably fast) maximum-likelihood estimation framework for PLRNNs that may enable to recover relevant aspects
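The generative side of a PLRNN state space model can be sketched as follows. The ReLU nonlinearity and all parameter values here are illustrative choices following common PLRNN formulations, not the paper's exact parameterization or its EM-based estimation procedure:

```python
import numpy as np

def simulate_plrnn(A, W, h, B, T, z0, s_noise=0.01, rng=None):
    """Generative pass of a piecewise-linear RNN state space model:
        z_t = A z_{t-1} + W relu(z_{t-1}) + h + noise   (latent states)
        x_t = B relu(z_t)                               (observations)
    An illustrative sketch of the model class only."""
    rng = rng or np.random.default_rng(0)
    z = np.array(z0, dtype=float)
    zs, xs = [], []
    for _ in range(T):
        z = (A @ z + W @ np.maximum(z, 0.0) + h
             + s_noise * rng.standard_normal(z.size))
        zs.append(z.copy())
        xs.append(B @ np.maximum(z, 0.0))
    return np.array(zs), np.array(xs)

# Two latent units with mutual inhibition through the ReLU term give
# winner-take-all dynamics reminiscent of stimulus-selective delay activity.
A = np.diag([0.9, 0.9])                    # linear (diagonal) dynamics
W = np.array([[0.0, -0.5], [-0.5, 0.0]])   # nonlinear coupling
h = np.array([0.50, 0.45])                 # constant input
zs, xs = simulate_plrnn(A, W, h, B=np.eye(2), T=500, z0=[0.1, 0.0])
print(zs[-1])
```

Estimation then inverts this generative model: given observed x_t (e.g. kernel-smoothed spike counts), the latent states and the parameters A, W, h, B are inferred jointly, which is what the EM scheme with a global Laplace approximation accomplishes.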
2017-01-01
The computational and cognitive properties of neural systems are often thought to be implemented in terms of their (stochastic) network dynamics. Hence, recovering the system dynamics from experimentally observed neuronal time series, like multiple single-unit recordings or neuroimaging data, is an important step toward understanding its computations. Ideally, one would not only seek a (lower-dimensional) state space representation of the dynamics, but would wish to have access to its statistical properties and their generative equations for in-depth analysis. Recurrent neural networks (RNNs) are a computationally powerful and dynamically universal formal framework which has been extensively studied from both the computational and the dynamical systems perspective. Here we develop a semi-analytical maximum-likelihood estimation scheme for piecewise-linear RNNs (PLRNNs) within the statistical framework of state space models, which accounts for noise in both the underlying latent dynamics and the observation process. The Expectation-Maximization algorithm is used to infer the latent state distribution, through a global Laplace approximation, and the PLRNN parameters iteratively. After validating the procedure on toy examples, and using inference through particle filters for comparison, the approach is applied to multiple single-unit recordings from the rodent anterior cingulate cortex (ACC) obtained during performance of a classical working memory task, delayed alternation. Models estimated from kernel-smoothed spike time data were able to capture the essential computational dynamics underlying task performance, including stimulus-selective delay activity. The estimated models were rarely multi-stable, however, but rather were tuned to exhibit slow dynamics in the vicinity of a bifurcation point. In summary, the present work advances a semi-analytical (thus reasonably fast) maximum-likelihood estimation framework for PLRNNs that may enable to recover relevant aspects
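The generative model underlying this scheme can be sketched directly. The following simulates only the PLRNN latent dynamics and a linear-Gaussian observation model (the EM estimation itself is not shown); all dimensions and parameter values are chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: latent states z_t (M), observations x_t (N)
M, N, T = 3, 10, 200

A = np.diag(rng.uniform(0.5, 0.9, M))    # diagonal auto-regression weights
W = 0.1 * rng.standard_normal((M, M))    # off-diagonal coupling
np.fill_diagonal(W, 0.0)
h = rng.standard_normal(M)               # bias
B = rng.standard_normal((N, M))          # observation matrix

def plrnn_step(z, noise_sd=0.01):
    """One step of piecewise-linear latent dynamics:
    z_{t+1} = A z_t + W phi(z_t) + h + noise, with phi = ReLU."""
    phi = np.maximum(z, 0.0)             # piecewise-linear nonlinearity
    return A @ z + W @ phi + h + noise_sd * rng.standard_normal(M)

z = np.zeros(M)
Z = np.empty((T, M))
for t in range(T):
    z = plrnn_step(z)
    Z[t] = z
X = Z @ B.T                              # noise-free observation means

print(X.shape)  # → (200, 10)
```

The ReLU makes each region of state space linear, which is what allows the Laplace approximation in the E-step to be computed semi-analytically.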
Phase field approximation of dynamic brittle fracture
NASA Astrophysics Data System (ADS)
Schlüter, Alexander; Willenbücher, Adrian; Kuhn, Charlotte; Müller, Ralf
2014-11-01
Numerical methods that are able to predict the failure of technical structures due to fracture are important in many engineering applications. One of these approaches, the so-called phase field method, represents cracks by means of an additional continuous field variable. This strategy avoids some of the main drawbacks of a sharp interface description of cracks. For example, it is not necessary to track or model crack faces explicitly, which allows a simple algorithmic treatment. The phase field model for brittle fracture presented in Kuhn and Müller (Eng Fract Mech 77(18):3625-3634, 2010) assumes quasi-static loading conditions. However, dynamic effects have a great impact on crack growth in many practical applications. Therefore, this investigation presents an extension of the quasi-static phase field model for fracture from Kuhn and Müller (Eng Fract Mech 77(18):3625-3634, 2010) to the dynamic case. First, Hamilton's principle is applied to derive a coupled set of Euler-Lagrange equations that govern the mechanical behaviour of the body as well as the crack growth. Subsequently, the model is implemented in a finite element scheme which allows several test problems to be solved numerically. The numerical examples illustrate the capabilities of the developed approach to dynamic fracture in brittle materials.
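The regularized crack representation underlying such models can be illustrated in one dimension. Assuming the classical quasi-static crack surface density gamma = (1-s)^2/(4*eps) + eps*(s')^2 (the ingredient the dynamic model extends, not the extension itself), its minimizer for a crack at x = 0 is s(x) = 1 - exp(-|x|/(2*eps)), and the regularized energy should integrate to the crack length (here 1). A short numerical check, with eps chosen arbitrarily:

```python
import numpy as np

eps = 0.05                      # regularisation length (arbitrary choice)
x = np.linspace(-2.0, 2.0, 4001)
dx = x[1] - x[0]

# Optimal phase-field profile: s = 1 far from the crack, s = 0 on it
s = 1.0 - np.exp(-np.abs(x) / (2.0 * eps))
ds = np.gradient(s, x)

# Regularised crack surface density
gamma = (1.0 - s)**2 / (4.0 * eps) + eps * ds**2
crack_length = np.sum(gamma) * dx

print(crack_length)   # ≈ 1, the length of the crack
```

As eps shrinks, the profile sharpens and the energy still integrates to the crack length, which is why the diffuse field is a faithful crack measure.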
Mean-field behavior of cluster dynamics
NASA Astrophysics Data System (ADS)
Persky, N.; Ben-Av, R.; Kanter, I.; Domany, E.
1996-09-01
The dynamic behavior of cluster algorithms is analyzed in the classical mean-field limit. Rigorous analytical results below Tc establish that the dynamic exponent has the value zSW=1 for the Swendsen-Wang algorithm and zW=0 for the Wolff algorithm. An efficient Monte Carlo implementation is introduced, adapted for using these algorithms for fully connected graphs. Extensive simulations both above and below Tc demonstrate scaling and evaluate the finite-size scaling function by means of a rather impressive collapse of the data.
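The Wolff update analyzed above can be sketched for the fully connected Ising model. This is the naive O(N^2)-per-cluster version, not the efficient implementation the paper introduces; couplings J_ij = 1/N and all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def wolff_step(spins, beta):
    """One Wolff cluster update for the fully connected (mean-field) Ising
    model with couplings J_ij = 1/N, so the bond-activation probability
    for each aligned pair is p = 1 - exp(-2*beta/N)."""
    N = spins.size
    p = 1.0 - np.exp(-2.0 * beta / N)
    seed = rng.integers(N)
    cluster = {seed}
    frontier = [seed]
    while frontier:
        i = frontier.pop()
        # on the complete graph, every aligned spin is a candidate neighbour
        for j in range(N):
            if j not in cluster and spins[j] == spins[i] and rng.random() < p:
                cluster.add(j)
                frontier.append(j)
    for i in cluster:
        spins[i] *= -1          # flip the whole cluster
    return len(cluster)

N, beta = 200, 1.2              # beta > beta_c = 1: ordered phase
spins = rng.choice([-1, 1], size=N)
sizes = [wolff_step(spins, beta) for _ in range(50)]
print(np.mean(sizes))           # mean cluster size over 50 updates
```

Below T_c the clusters become macroscopic, which is the mechanism behind the z_W = 0 result quoted in the abstract.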
Lęski, Szymon; Kublik, Ewa; Swiejkowski, Daniel A; Wróbel, Andrzej; Wójcik, Daniel K
2010-12-01
Local field potentials have good temporal resolution but are blurred due to the slow spatial decay of the electric field. For simultaneous recordings on regular grids one can efficiently reconstruct the current source density (CSD) using the inverse Current Source Density (iCSD) method. It is possible to decompose the resultant spatiotemporal information about the current dynamics into functional components using Independent Component Analysis (ICA). We show on test data modeling recordings of evoked potentials on a grid of 4 × 5 × 7 points that meaningful results are obtained with spatial ICA decomposition of reconstructed CSD. The components obtained through decomposition of CSD are better defined and allow easier physiological interpretation than the results of similar analysis of corresponding evoked potentials in the thalamus. We show that spatiotemporal ICA decompositions can perform better for certain types of sources, but this does not seem to be the case for the experimental data studied. Having found the appropriate approach to decomposing neural dynamics into functional components we use the technique to study the somatosensory evoked potentials recorded on a grid spanning a large part of the forebrain. We discuss two example components associated with the first waves of activation of the somatosensory thalamus. We show that the proposed method brings up new, more detailed information on the time and spatial location of specific activity conveyed through various parts of the somatosensory thalamus in the rat.
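The reconstruction step can be illustrated with the classical three-point CSD formula. Note this is the standard second-difference estimate, not the full iCSD forward-model inversion used in the paper, and the ICA decomposition step is omitted; all signal parameters below are synthetic:

```python
import numpy as np

# Synthetic LFP on a 16-channel laminar probe: a dipolar depth profile
# modulated by a common 5 Hz waveform.
n_ch, n_t = 16, 100
z = np.arange(n_ch)[:, None]                 # channel depth index
t = np.linspace(0.0, 1.0, n_t)[None, :]
profile = np.exp(-(z - 5.0)**2 / 2) - np.exp(-(z - 10.0)**2 / 2)
lfp = profile * np.sin(2 * np.pi * 5 * t)

# Standard (second-difference) CSD estimate: CSD ≈ -sigma * d2V/dz2,
# evaluated on interior channels only.
sigma, h = 1.0, 1.0                          # conductivity, electrode spacing
csd = -sigma * (lfp[:-2] - 2 * lfp[1:-1] + lfp[2:]) / h**2

print(csd.shape)   # → (14, 100)
```

Spatial ICA would then be applied to the rows of the reconstructed CSD, separating the source profiles that the raw, spatially blurred potentials mix together.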
A Neural Information Field Approach to Computational Cognition
2016-11-18
Analyzing multicomponent receptive fields from neural responses to natural stimuli
Rowekamp, Ryan; Sharpee, Tatyana O
2011-01-01
The challenge of building increasingly better models of neural responses to natural stimuli is to accurately estimate the multiple stimulus features that may jointly affect the neural spike probability. The selectivity for combinations of features is thought to be crucial for achieving classical properties of neural responses such as contrast invariance. The joint search for these multiple stimulus features is difficult because estimating spike probability as a multidimensional function of stimulus projections onto candidate relevant dimensions is subject to the curse of dimensionality. An attractive alternative is to search for relevant dimensions sequentially, as in projection pursuit regression. Here we demonstrate using analytic arguments and simulations of model cells that different types of sequential search strategies exhibit systematic biases when used with natural stimuli. Simulations show that joint optimization is feasible for up to three dimensions with current algorithms. When applied to the responses of V1 neurons to natural scenes, models based on three jointly optimized dimensions had better predictive power in a majority of cases compared to dimensions optimized sequentially, with different sequential methods yielding comparable results. Thus, although the curse of dimensionality remains, at least several relevant dimensions can be estimated by joint information maximization. PMID:21780916
Developing Dynamic Field Theory Architectures for Embodied Cognitive Systems with cedar.
Lomp, Oliver; Richter, Mathis; Zibner, Stephan K U; Schöner, Gregor
2016-01-01
Embodied artificial cognitive systems, such as autonomous robots or intelligent observers, connect cognitive processes to sensory and effector systems in real time. Prime candidates for such embodied intelligence are neurally inspired architectures. While components such as forward neural networks are well established, designing pervasively autonomous neural architectures remains a challenge. This includes the problem of tuning the parameters of such architectures so that they deliver specified functionality under variable environmental conditions and retain these functions as the architectures are expanded. The scaling and autonomy problems are solved, in part, by dynamic field theory (DFT), a theoretical framework for the neural grounding of sensorimotor and cognitive processes. In this paper, we address how to efficiently build DFT architectures that control embodied agents and how to tune their parameters so that the desired cognitive functions emerge while such agents are situated in real environments. In DFT architectures, dynamic neural fields or nodes are assigned dynamic regimes, that is, attractor states and their instabilities, from which cognitive function emerges. Tuning thus amounts to determining values of the dynamic parameters for which the components of a DFT architecture are in the specified dynamic regime under the appropriate environmental conditions. The process of tuning is facilitated by the software framework cedar, which provides a graphical interface to build and execute DFT architectures. It enables changing dynamic parameters online and visualizing the activation states of any component while the agent is receiving sensory inputs in real time. Using a simple example, we take the reader through the workflow of conceiving of DFT architectures, implementing them on embodied agents, tuning their parameters, and assessing performance while the system is coupled to real sensory inputs. PMID:27853431
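The attractor dynamics that DFT architectures are built from can be sketched with a generic one-dimensional Amari-style field. cedar itself is a C++ framework; this standalone Python sketch, with all parameters chosen arbitrarily, only illustrates the dynamic regime such tuning targets: a self-stabilized activation bump forming in response to localized input.

```python
import numpy as np

# 1-D dynamic neural field (illustrative parameters):
#   tau * du/dt = -u + h + sum_x' kernel(x - x') f(u(x')) dx + input(x)
n, dx, tau, h_rest = 100, 1.0, 10.0, -5.0
x = np.arange(n) * dx

def gauss(d, sigma):
    return np.exp(-d**2 / (2.0 * sigma**2))

d = np.abs(x[:, None] - x[None, :])
# local excitation with broader inhibition ("Mexican hat" interaction)
kernel = 5.0 * gauss(d, 3.0) - 2.5 * gauss(d, 7.0)

def f(u):                                   # sigmoidal firing-rate function
    return 1.0 / (1.0 + np.exp(-u))

inp = 6.0 * gauss(np.abs(x - 50.0), 3.0)    # localized stimulus at x = 50
u = np.full(n, h_rest)
dt = 0.2
for _ in range(1500):                       # forward-Euler integration
    u = u + dt / tau * (-u + h_rest + kernel @ f(u) * dx + inp)

peak = x[np.argmax(u)]
print(peak)   # the activation bump sits near the stimulus at x = 50
```

Tuning in the DFT sense amounts to choosing the resting level and kernel amplitudes so that this bump (and its instabilities) appears exactly under the intended input conditions.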
Dynamic Magnetic Field Applications for Materials Processing
NASA Technical Reports Server (NTRS)
Mazuruk, K.; Grugel, Richard N.; Motakef, S.; Whitaker, Ann F. (Technical Monitor)
2001-01-01
Magnetic fields, variable in time and space, can be used to control convection in electrically conducting melts. Flow induced by these fields has been found to be beneficial for crystal growth applications. It allows increased crystal growth rates, and improves homogeneity and quality. Particularly beneficial is the natural convection damping capability of alternating magnetic fields. One well-known example is the rotating magnetic field (RMF) configuration. RMF induces liquid motion consisting of a swirling basic flow and a meridional secondary flow. In addition to crystal growth applications, RMF can also be used for mixing non-homogeneous melts in continuous metal castings. These applied aspects have stimulated increasing research on RMF-induced fluid dynamics. A novel type of magnetic field configuration consisting of an axisymmetric magnetostatic wave, designated the traveling magnetic field (TMF), has been recently proposed. It induces a basic flow in the form of a single vortex. TMF may find use in crystal growth techniques such as the vertical Bridgman (VB), float zone (FZ), and the traveling heater method. In this review, both methods, RMF and TMF, are presented. Our recent theoretical and experimental results include such topics as localized TMF, natural convection damping using TMF in a vertical Bridgman configuration, the traveling heater method, and the Lorentz force induced by TMF as a function of frequency. Experimentally, alloy mixing results, with and without applied TMF, will be presented. Finally, advantages of the traveling magnetic field, in comparison to the more mature rotating magnetic field method, will be discussed.
Hybrid computing using a neural network with dynamic external memory.
Graves, Alex; Wayne, Greg; Reynolds, Malcolm; Harley, Tim; Danihelka, Ivo; Grabska-Barwińska, Agnieszka; Colmenarejo, Sergio Gómez; Grefenstette, Edward; Ramalho, Tiago; Agapiou, John; Badia, Adrià Puigdomènech; Hermann, Karl Moritz; Zwols, Yori; Ostrovski, Georg; Cain, Adam; King, Helen; Summerfield, Christopher; Blunsom, Phil; Kavukcuoglu, Koray; Hassabis, Demis
2016-10-27
Artificial neural networks are remarkably adept at sensory processing, sequence learning and reinforcement learning, but are limited in their ability to represent variables and data structures and to store data over long timescales, owing to the lack of an external memory. Here we introduce a machine learning model called a differentiable neural computer (DNC), which consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer. Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data. When trained with supervised learning, we demonstrate that a DNC can successfully answer synthetic questions designed to emulate reasoning and inference problems in natural language. We show that it can learn tasks such as finding the shortest path between specified points and inferring the missing links in randomly generated graphs, and then generalize these tasks to specific graphs such as transport networks and family trees. When trained with reinforcement learning, a DNC can complete a moving blocks puzzle in which changing goals are specified by sequences of symbols. Taken together, our results demonstrate that DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read-write memory.
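The content-based addressing at the heart of the DNC's read mechanism can be sketched compactly. The following is a simplified illustration, cosine-similarity addressing followed by a softmax read only; the DNC's temporal link matrix and usage-based allocation are omitted, and the memory contents and sharpness parameter beta are chosen arbitrarily:

```python
import numpy as np

def content_read(memory, key, beta=5.0):
    """Cosine-similarity addressing followed by a softmax-weighted read."""
    sims = memory @ key / (np.linalg.norm(memory, axis=1)
                           * np.linalg.norm(key) + 1e-8)
    w = np.exp(beta * sims)          # sharpen similarities
    w /= w.sum()                     # normalized read weighting
    return w @ memory                # weighted read vector

# Toy memory matrix: three stored 2-D "words"
memory = np.array([[1.0, 0.0],
                   [0.0, 1.0],
                   [0.7, 0.7]])
r = content_read(memory, np.array([1.0, 0.05]))
print(np.round(r, 2))   # read vector dominated by the best-matching row
```

Because every step (similarity, softmax, weighted sum) is differentiable, the controller network can learn by gradient descent where to read and write.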
Nonequilibrium dynamics of emergent field configurations
NASA Astrophysics Data System (ADS)
Howell, Rafael Cassidy
The processes by which nonlinear physical systems approach thermal equilibrium are of great importance in many areas of science. Central to this is the mechanism by which energy is transferred between the many degrees of freedom comprising these systems. With this in mind, in this research the nonequilibrium dynamics of nonperturbative fluctuations within Ginzburg-Landau models are investigated. In particular, two questions are addressed. In both cases the system is initially prepared in one of two minima of a double-well potential. First, within the context of a (2 + 1) dimensional field theory, we investigate whether emergent spatio-temporal coherent structures play a dynamical role in the equilibration of the field. We find that the answer is sensitive to the initial temperature of the system. At low initial temperatures, the dynamics are well approximated with a time-dependent mean-field theory. For higher temperatures, the strong nonlinear coupling between the modes in the field does give rise to the synchronized emergence of coherent spatio-temporal configurations, identified with oscillons. These are long-lived coherent field configurations characterized by their persistent oscillatory behavior at their core. This initial global emergence is seen to be a consequence of resonant behavior in the long wavelength modes in the system. A second question concerns the emergence of disorder in a highly viscous system modeled by a (3 + 1) dimensional field theory. An integro-differential Boltzmann equation is derived to model the thermal nucleation of precursors of one phase within the homogeneous background. The fraction of the volume populated by these precursors is computed as a function of temperature. This model is capable of describing the onset of percolation, characterizing the approach to criticality (i.e. disorder). It also provides a nonperturbative correction to the critical temperature based on the nonequilibrium dynamics of the system.
Population Dynamics and Convective Cloud Fields
NASA Astrophysics Data System (ADS)
Nober, F. J.; Graf, H.-F.
2003-04-01
A cumulus cloud field model has been coupled to an atmospheric general circulation model (AGCM). The results, which show a good performance of the model within the AGCM and qualitatively good agreement with observations concerning the statistical information of cloud fields, are presented. While most current cumulus convection parameterisations are formulated as massflux schemes (determining the overall massflux of all cumulus clouds in one AGCM grid column), the presented cloud field model determines, for each AGCM grid column where convection takes place, an explicit spectrum of different clouds. Therefore the information about the actual cumulus convection state in a grid column is not restricted to an averaged massflux but includes the number of different cloud types which in principle are able to develop under the given vertical conditions. The degree to which each cloud type contributes to the whole cloud field is determined by the cloud field model with respect to the specific vertical state in the grid column. The choice of the cloud model to define the different cloud types is very flexible. Very simple cloud models are possible, but also more complex ones that describe more realistic clouds (including dynamic and microphysical information) than simple massflux approaches do. The cloud field model takes into account the interaction between all non-convective processes calculated by the AGCM and (which makes the procedure self-consistent) the cloud-cloud interaction between each cloud type and every other. The final calculation of the cloud field is done following an approach from population dynamics (Lotka-Volterra equation). Tests of the model in the ECHAM5 AGCM (running in single column mode) show that the model produces reliable convective feedbacks (i.e. integral convective heating, convective transport, etc.). The additional information of the cloud field structure (power law behavior of cloud size distribution, cloud tops for each cloud type
Dynamic stability conditions for Lotka-Volterra recurrent neural networks with delays.
Yi, Zhang; Tan, K K
2002-07-01
The Lotka-Volterra model of neural networks, derived from the membrane dynamics of competing neurons, has found successful applications in many "winner-take-all" types of problems. This paper studies the dynamic stability properties of general Lotka-Volterra recurrent neural networks with delays. Conditions for nondivergence of the neural networks are derived. These conditions are based on local inhibition of networks, thereby allowing these networks to possess a multistability property. Multistability is a necessary property of a network that will enable important neural computations such as those governing the decision making process. Under these nondivergence conditions, a compact set that globally attracts all the trajectories of a network can be computed explicitly. If the connection weight matrix of a network is symmetric in some sense, and the delays of the network are in L2 space, we can prove that the network will have the property of complete stability.
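A minimal instance of such a network, without the delays the paper analyzes, is the classic winner-take-all configuration; the inputs b_i and inhibition strength k below are arbitrary illustrative values:

```python
import numpy as np

# Lotka-Volterra recurrent network, winner-take-all sketch:
#   dx_i/dt = x_i * (b_i - x_i - k * sum_{j != i} x_j)
# With lateral inhibition k > 1, only the unit with the largest input
# b_i survives at x_i = b_i; the others decay to zero.
b = np.array([1.0, 0.8, 0.6])    # external inputs (hypothetical values)
k = 2.0                          # lateral inhibition strength
x = np.full(3, 0.1)              # positive initial state
dt = 0.01
for _ in range(20000):           # forward-Euler integration
    total = x.sum()
    x += dt * x * (b - x - k * (total - x))
    x = np.maximum(x, 0.0)       # states stay non-negative

print(np.round(x, 3))   # → approximately [1. 0. 0.]
```

The surviving state encodes the "winner", which is the decision-making computation the multistability results in the paper are meant to guarantee.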
The earth's gravity field and ocean dynamics
NASA Technical Reports Server (NTRS)
Mather, R. S.
1978-01-01
An analysis of the signal-to-noise ratio of the best gravity field available shows that a basis exists for the recovery of the dominant parameters of the quasi-stationary sea surface topography. Results obtained from the analysis of GEOS-3 show that it is feasible to recover the quasi-stationary dynamic sea surface topography as a function of wavelength. The gravity field models required for synoptic ocean circulation modeling are less exacting in that constituents affecting radial components of orbital position need not be known through shorter wavelengths.
Dynamic versus static neural network model for rainfall forecasting at Klang River Basin, Malaysia
NASA Astrophysics Data System (ADS)
El-Shafie, A.; Noureldin, A.; Taha, M. R.; Hussain, A.
2011-07-01
Rainfall is one of the major components of the hydrological process and plays a significant part in evaluating drought and flooding events. Therefore, it is important to have an accurate model for rainfall forecasting. Recently, several data-driven modeling approaches have been investigated to perform such forecasting tasks, such as Multi-Layer Perceptron Neural Networks (MLP-NN). In fact, rainfall time series modeling involves an important temporal dimension. On the other hand, the classical MLP-NN is a static and memoryless network architecture that is effective for complex nonlinear static mapping. This research focuses on investigating the potential of introducing a neural network that could address the temporal relationships of the rainfall series. Two different static neural networks and one dynamic neural network, namely the Multi-Layer Perceptron Neural Network (MLP-NN), the Radial Basis Function Neural Network (RBFNN) and the Input Delay Neural Network (IDNN), respectively, have been examined in this study. Those models were developed for two time horizons, monthly and weekly rainfall forecasting, at Klang River, Malaysia. Data collected over 12 yr (1997-2008) on a weekly basis and 22 yr (1987-2008) on a monthly basis were used to develop and examine the performance of the proposed models. Comprehensive comparison analyses were carried out to evaluate the performance of the proposed static and dynamic neural networks. Results showed that the MLP-NN model was able to follow the general trend of the actual rainfall, yet remained relatively poor. The RBFNN model achieved better accuracy than the MLP-NN model. Moreover, the IDNN model outperformed the others during both training and testing stages, proving a consistent level of accuracy with seen and unseen data. Furthermore, the IDNN significantly enhanced forecasting accuracy compared with the other static neural network models, as it could memorize the sequential or time varying
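The temporal memory that distinguishes an input-delay network from the static architectures can be emulated by feeding any static regressor a window of lagged values. The sketch below only builds that lagged design matrix; the helper name and toy series are, of course, illustrative:

```python
import numpy as np

def lag_matrix(series, n_lags):
    """Build rows [y_{t-1}, ..., y_{t-n_lags}] to predict y_t, the
    delayed-input representation an IDNN-style model consumes."""
    X = np.column_stack([series[n_lags - k - 1:len(series) - k - 1]
                         for k in range(n_lags)])
    y = series[n_lags:]
    return X, y

rain = np.arange(10.0)          # toy stand-in for a rainfall series
X, y = lag_matrix(rain, 3)
print(X[0], y[0])   # → [2. 1. 0.] 3.0
```

A static network trained on (X, y) then sees three past values per sample, which is exactly the windowed memory the paper credits for the IDNN's advantage.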
Neural field theory of nonlinear wave-wave and wave-neuron processes
NASA Astrophysics Data System (ADS)
Robinson, P. A.; Roy, N.
2015-06-01
Systematic expansion of neural field theory equations in terms of nonlinear response functions is carried out to enable a wide variety of nonlinear wave-wave and wave-neuron processes to be treated systematically in systems involving multiple neural populations. The results are illustrated by analyzing second-harmonic generation, and they can also be applied to wave-wave coalescence, multiharmonic generation, facilitation, depression, refractoriness, and other nonlinear processes.
Modeling emotional dynamics : currency versus field.
Sallach, D. L.; Decision and Information Sciences; Univ. of Chicago
2008-08-01
Randall Collins has introduced a simplified model of emotional dynamics in which emotional energy, heightened and focused by interaction rituals, serves as a common denominator for social exchange: a generic form of currency, except that it is active in a far broader range of social transactions. While the scope of this theory is attractive, the specifics of the model remain unconvincing. After a critical assessment of the currency theory of emotion, a field model of emotion is introduced that adds expressiveness by locating emotional valence within its cognitive context, thereby creating an integrated orientation field. The result is a model which claims less in the way of motivational specificity, but is more satisfactory in modeling the dynamic interaction between cognitive and emotional orientations at both individual and social levels.
Gradient calculations for dynamic recurrent neural networks: a survey.
Pearlmutter, B A
1995-01-01
Surveys learning algorithms for recurrent neural networks with hidden units and puts the various techniques into a common framework. The authors discuss fixed point learning algorithms, namely recurrent backpropagation and deterministic Boltzmann machines, and nonfixed point algorithms, namely backpropagation through time, Elman's history cutoff, and Jordan's output feedback architecture. Forward propagation, an on-line technique that uses adjoint equations, and variations thereof, are also discussed. In many cases, the unified presentation leads to generalizations of various sorts. The author discusses advantages and disadvantages of temporally continuous neural networks in contrast to clocked ones and continues with some "tricks of the trade" for training, using, and simulating continuous time and recurrent neural networks. The author presents some simulations, and at the end, addresses issues of computational complexity and learning speed.
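Backpropagation through time, one of the surveyed algorithms, fits in a few lines for a vanilla tanh RNN. This minimal sketch (arbitrary sizes, squared-error loss on the final state only) includes a finite-difference check of one gradient entry:

```python
import numpy as np

rng = np.random.default_rng(2)

# Vanilla RNN: h_t = tanh(W h_{t-1} + U x_t), L = 0.5 * ||h_T - target||^2
H, D, T = 4, 3, 5
W = 0.5 * rng.standard_normal((H, H))
U = 0.5 * rng.standard_normal((H, D))
xs = rng.standard_normal((T, D))
target = rng.standard_normal(H)

def forward(W, U):
    hs = [np.zeros(H)]
    for t in range(T):
        hs.append(np.tanh(W @ hs[-1] + U @ xs[t]))
    return hs

def loss(W, U):
    return 0.5 * np.sum((forward(W, U)[-1] - target) ** 2)

def bptt(W, U):
    """Backpropagation through time: unroll, then walk gradients backward."""
    hs = forward(W, U)
    dW, dU = np.zeros_like(W), np.zeros_like(U)
    delta = hs[-1] - target                 # dL/dh_T
    for t in reversed(range(T)):
        g = delta * (1.0 - hs[t + 1] ** 2)  # back through tanh
        dW += np.outer(g, hs[t])
        dU += np.outer(g, xs[t])
        delta = W.T @ g                     # propagate to h_{t-1}
    return dW, dU

dW, dU = bptt(W, U)
# finite-difference check of one weight's gradient
eps = 1e-6
Wp = W.copy(); Wp[0, 1] += eps
num = (loss(Wp, U) - loss(W, U)) / eps
print(abs(num - dW[0, 1]) < 1e-4)   # → True
```

The repeated multiplication by W.T in the backward pass is also where the vanishing/exploding gradient issues discussed for clocked networks originate.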
Molecular dynamics in high electric fields
NASA Astrophysics Data System (ADS)
Apostol, M.; Cune, L. C.
2016-06-01
Molecular rotation spectra, generated by the coupling of the molecular electric-dipole moments to an external time-dependent electric field, are discussed in a few particular conditions which can be of some experimental interest. First, the spherical-pendulum molecular model is reviewed, with the aim of introducing an approximate method which consists in the separation of the azimuthal and zenithal motions. Second, rotation spectra are considered in the presence of a static electric field. Two particular cases are analyzed, corresponding to strong and weak fields. In both cases the classical motion of the dipoles consists of rotations and vibrations about equilibrium positions; this motion may exhibit parametric resonances. For strong fields a large macroscopic electric polarization may appear. This situation may be relevant for polar matter (like pyroelectrics, ferroelectrics), or for heavy impurities embedded in a polar solid. The dipolar interaction is analyzed in polar condensed matter, where it is shown that new polarization modes appear for a spontaneous macroscopic electric polarization (these modes are tentatively called "dipolons"); one of the polarization modes is related to parametric resonances. The extension of these considerations to magnetic dipoles is briefly discussed. The treatment is extended to strong electric fields which oscillate with a high frequency, as those provided by high-power lasers. It is shown that the effect of such fields on molecular dynamics is governed by a much weaker, effective, renormalized, static electric field.
Evolving Dynamics of the Supergranular Flow Field
NASA Astrophysics Data System (ADS)
De Rosa, M. L.; Lisle, J. P.; Toomre, J.
2000-05-01
We study several large (45-degree square) fields of supergranules for as long as they remain visible on the solar disk (about 6 days) to characterize the dynamics of the supergranular flow field and its interaction with surrounding photospheric magnetic field elements. These flow fields are determined by applying correlation tracking methods to time series of mesogranules seen in full-disk SOI-MDI velocity images. We have shown previously that mesogranules observed in this way are systematically advected by the larger scale supergranular flow field in which they are embedded. Applying correlation tracking methods to such time series yields the positions of the supergranular outflows quite well, even for locations close to disk center. These long-duration datasets contain several instances where individual supergranules are recognizable for time scales as long as 50 hours, though most cells persist for about 25 hours, the value often quoted as the supergranular lifetime. Many supergranule merging and splitting events are observed, as well as other evolving flow patterns such as lanes of converging and diverging fluid. By comparing the flow fields with the corresponding images of magnetic fields, we confirm the result that small-scale photospheric magnetic field elements are quickly advected to the intercellular lanes to form a network between the supergranular outflows. In addition, we characterize the influence of larger-scale regions of magnetic flux, such as active regions, on the flow fields. Furthermore, we have measured even larger-scale flows by following the motions of the supergranules, but these flow fields contain a high noise component and are somewhat difficult to interpret. This research was supported by NASA through grants NAG 5-8133 and NAG 5-7996, and by NSF through grant ATM-9731676.
Dynamic social power modulates neural basis of math calculation.
Harada, Tokiko; Bridge, Donna J; Chiao, Joan Y
2012-01-01
Both situational (e.g., perceived power) and sustained social factors (e.g., cultural stereotypes) are known to affect how people academically perform, particularly in the domain of mathematics. The ability to compute even simple mathematics, such as addition, relies on distinct neural circuitry within the inferior parietal and inferior frontal lobes, brain regions where magnitude representation and addition are performed. Despite prior behavioral evidence of social influence on academic performance, little is known about whether or not temporarily heightening a person's sense of power may influence the neural bases of math calculation. Here we primed female participants with either high or low power (LP) and then measured neural response while they performed exact and approximate math problems. We found that priming power affected math performance; specifically, females primed with high power (HP) performed better on approximate math calculation compared to females primed with LP. Furthermore, neural response within the left inferior frontal gyrus (IFG), a region previously associated with cognitive interference, was reduced for females in the HP compared to LP group. Taken together, these results indicate that even temporarily heightening a person's sense of social power can increase their math performance, possibly by reducing cognitive interference during math performance.
Neural Dynamics of Autistic Behaviors: Cognitive, Emotional, and Timing Substrates
ERIC Educational Resources Information Center
Grossberg, Stephen; Seidman, Don
2006-01-01
What brain mechanisms underlie autism, and how do they give rise to autistic behavioral symptoms? This article describes a neural model, called the Imbalanced Spectrally Timed Adaptive Resonance Theory (iSTART) model, that proposes how cognitive, emotional, timing, and motor processes that involve brain regions such as the prefrontal and temporal…
Mixing Dynamics Induced by Traveling Magnetic Fields
NASA Technical Reports Server (NTRS)
Grugel, Richard N.; Mazuruk, Konstantin; Rose, M. Franklin (Technical Monitor)
2001-01-01
Microstructural and compositional homogeneity in metals and alloys can only be achieved if the initial melt is homogeneous prior to the onset of solidification processing. Naturally induced convection may initially facilitate this requirement but upon the onset of solidification significant compositional variations generally arise leading to undesired segregation. Application of alternating magnetic fields to promote a uniform bulk liquid concentration during solidification processing has been suggested. To investigate such possibilities an initial study of using traveling magnetic fields (TMF) to promote melt homogenization is reported in this work. Theoretically, the effect of TMF-induced convection on mixing phenomena is studied in the laminar regime of flow. Experimentally, with and without applied fields, both 1) mixing dynamics by optically monitoring the spreading of an initially localized dye in transparent fluids and, 2) compositional variations in metal alloys have been investigated.
Motor expertise modulates neural oscillations and temporal dynamics of cognitive control.
Wang, Chun-Hao; Yang, Cheng-Ta; Moreau, David; Muggleton, Neil G
2017-09-01
The field of motor expertise in athletes has recently been receiving increasing levels of investigation. However, there has been less investigation of how dynamic changes in behavior and in neural activity as a result of sporting participation might result in superiority for athletes in domain-general cognition. We used a flanker task to investigate conflict-related behavioral measures, such as mean reaction time (RT) and RT variability, in conjunction with electroencephalographic (EEG) measures, including N2d, theta activity power, and inter-trial phase coherence (ITPC). These measures were compared for 18 badminton players, an interceptive sport requiring the performance of skills in a fast-changing and unpredictable environment, and 18 athletic controls (14 track-and-field athletes and 4 dragon boat athletes), with high fitness levels but no requirement for skills such as responses to their opponents. Results showed that badminton players made faster and less variable responses on the flanker task than athletic controls, regardless of stimulus congruency levels. For EEG measures, both badminton players and athletic controls showed comparable conflict-related modulations of midfrontal N2 and theta power. However, such an effect on ITPC values was found only for the badminton players. The behavior-EEG correlation seen suggests that smaller changes in RT variability induced by conflict processing in badminton players may be attributable to greater stability in the neural processes of these individuals. Because these findings were independent of aerobic fitness levels, such differences are likely due to training-induced adaptations, consistent with the idea of specific transfer from cognitive components involved in sport training to domain-general cognition. Copyright © 2017 Elsevier Inc. All rights reserved.
Field measurements and neural network modeling of water quality parameters
NASA Astrophysics Data System (ADS)
Qishlaqi, Afishin; Kordian, Sediqeh; Parsaie, Abbas
2017-01-01
Rivers are among the main water supply resources for agricultural, industrial, and urban use; therefore, continuous monitoring of their quality is necessary. Recently, artificial neural networks have been proposed as a powerful tool for modeling and predicting water quality parameters in natural streams. In this paper, to predict water quality parameters of Tireh River located at South West of Iran, a multilayer neural network model (MLP) was developed. TDS, EC, pH, HCO3, Cl, Na, SO4, Mg, and Ca were measured and predicted as the main water quality parameters using the MLP model. The architecture of the proposed MLP model included two hidden layers, with eight neurons in the first hidden layer and six in the second. The tangent sigmoid and pure-line functions were selected as transfer functions for the neurons in the hidden and output layers, respectively. The results showed that the MLP model has suitable performance for predicting water quality parameters of Tireh River. For assessing the performance of the MLP model in water quality prediction along the studied area, in addition to the existing sampling stations, another 14 stations along the river were considered by the authors. Evaluating the performance of the developed MLP model in mapping the relation between the water quality parameters along the studied area showed that it has suitable accuracy; the minimum correlation between the results of the MLP model and measured data was 0.85.
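The stated architecture (two hidden layers of eight and six tangent-sigmoid neurons, pure-line output) can be sketched as a forward pass. The weights below are random placeholders, not the fitted Tireh River model, and the nine-variable input/output layout is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp_forward(x, weights):
    """Forward pass of the 2-hidden-layer MLP described in the abstract:
    8 then 6 tangent-sigmoid neurons, followed by a linear output layer."""
    (W1, b1), (W2, b2), (W3, b3) = weights
    h1 = np.tanh(x @ W1 + b1)   # hidden layer 1: 8 neurons, tansig
    h2 = np.tanh(h1 @ W2 + b2)  # hidden layer 2: 6 neurons, tansig
    return h2 @ W3 + b3         # output layer: pure-line (identity)

n_in, n_out = 9, 9  # e.g. TDS, EC, pH, HCO3, Cl, Na, SO4, Mg, Ca
weights = [
    (rng.normal(size=(n_in, 8)), np.zeros(8)),
    (rng.normal(size=(8, 6)), np.zeros(6)),
    (rng.normal(size=(6, n_out)), np.zeros(n_out)),
]
y = mlp_forward(rng.normal(size=(5, n_in)), weights)
print(y.shape)  # → (5, 9)
```

Training would then fit the weight matrices to the station measurements, e.g. by backpropagation.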
Indirect biological measures of consciousness from field studies of brains as dynamical systems.
Freeman, Walter J
2007-11-01
Consciousness fully supervenes when the 1.5 kg mass of protoplasm in the head directs the body into material and social environments and engages in reciprocity. While consciousness is not susceptible to direct measurement, a limited form exercised in animals and pre-lingual children can be measured indirectly with biological assays of arousal, intention and attention. In this essay consciousness is viewed as operating simultaneously in a field at all levels ranging from subatomic to social. The relations and transpositions between levels require sophisticated mathematical treatments that are largely still to be devised. In anticipation of those developments the available experimental data are reviewed concerning the state variables at several levels that collectively constitute the substrate of biological consciousness. The basic metaphors are described that represent the neural machinery of transposition in consciousness. The processes are sketched by which spatiotemporal neural activity patterns emerge as fields that may represent the contents of consciousness. The results of dynamical analysis are discussed in terms serving to distinguish between the neural point processes dictated by the neuron doctrine vs. continuously variable neural fields generated by neural masses in cortex.
Lebedev, Dmitry V; Steil, Jochen J; Ritter, Helge J
2005-04-01
We introduce a new type of neural network, the dynamic wave expansion neural network (DWENN), for path generation in a dynamic environment for both mobile robots and robotic manipulators. Our model is parameter-free, computationally efficient, and its complexity does not explicitly depend on the dimensionality of the configuration space. We give a review of existing neural networks for trajectory generation in a time-varying domain, which are compared to the presented model. We demonstrate several representative simulative comparisons as well as the results of long-run comparisons in a number of randomly generated scenes, which reveal that the proposed model yields predominantly shorter paths, especially in highly dynamic environments.
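The wave-expansion idea behind DWENN can be sketched in its simplest static form: a breadth-first wave spreads from the goal across free cells, and the path follows decreasing wave values from the start. The grid, start, and goal below are invented for illustration; DWENN itself re-expands the wave continuously as obstacles move.

```python
from collections import deque

def wavefront_path(grid, start, goal):
    """Breadth-first 'wave expansion' over a grid (0 = free, 1 = obstacle):
    the wave spreads outward from the goal, then the path follows
    strictly decreasing wave values from the start to the goal."""
    rows, cols = len(grid), len(grid[0])
    dist = {goal: 0}
    q = deque([goal])
    while q:  # expand the wave from the goal
        r, c = q.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[(r, c)] + 1
                q.append((nr, nc))
    if start not in dist:
        return None  # goal unreachable from start
    path, cur = [start], start
    while cur != goal:  # descend the wave values
        r, c = cur
        cur = min(((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)),
                  key=lambda p: dist.get(p, float("inf")))
        path.append(cur)
    return path

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(wavefront_path(grid, (0, 0), (2, 0)))
# → [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```

By construction the descent always reaches the goal in dist[start] steps, which is what makes wavefront planners attractive for fast replanning.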
Parametric models to relate spike train and LFP dynamics with neural information processing
Banerjee, Arpan; Dean, Heather L.; Pesaran, Bijan
2012-01-01
Spike trains and local field potentials (LFPs) resulting from extracellular current flows provide a substrate for neural information processing. Understanding the neural code from simultaneous spike-field recordings and subsequent decoding of information processing events will have widespread applications. One way to demonstrate an understanding of the neural code, with particular advantages for the development of applications, is to formulate a parametric statistical model of neural activity and its covariates. Here, we propose a set of parametric spike-field models (unified models) that can be used with existing decoding algorithms to reveal the timing of task or stimulus specific processing. Our proposed unified modeling framework captures the effects of two important features of information processing: time-varying stimulus-driven inputs and ongoing background activity that occurs even in the absence of environmental inputs. We have applied this framework to decoding neural latencies in simulated and experimentally recorded spike-field sessions obtained from the lateral intraparietal area (LIP) of awake, behaving monkeys performing cued look-and-reach movements to spatial targets. Using both simulated and experimental data, we find that estimates of trial-by-trial parameters are not significantly affected by the presence of ongoing background activity. However, including background activity in the unified model improves goodness of fit for predicting individual spiking events. Uncovering the relationship between the model parameters and the timing of movements offers new ways to test hypotheses about the relationship between neural activity and behavior. We obtained significant spike-field onset time correlations from single trials using a previously published data set in which significant correlation had previously been obtained only through trial averaging. We also found that unified models extracted a stronger relationship between neural response latency and trial
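The two ingredients the unified models combine, ongoing background activity plus a time-varying stimulus-driven input, can be sketched with a conditional-intensity spike generator assuming Bernoulli-per-bin spiking. The rates and onset time below are illustrative, not fitted LIP values:

```python
import numpy as np

rng = np.random.default_rng(11)

def sample_spikes(baseline, onset, gain, T=1.0, dt=0.001):
    """Sample spike times from an inhomogeneous rate: a constant
    background (baseline, in Hz) plus a stimulus-driven step of size
    'gain' switched on at 'onset'. Each time bin spikes independently
    with probability lambda(t) * dt."""
    t = np.arange(0.0, T, dt)
    lam = baseline + gain * (t >= onset)      # conditional intensity
    return t[rng.random(t.size) < lam * dt]   # thinning by Bernoulli draws

spikes = sample_spikes(baseline=10.0, onset=0.5, gain=40.0)
# the stimulus-driven rate step should be visible in the spike counts
print(np.sum(spikes < 0.5), np.sum(spikes >= 0.5))
```

A decoding algorithm in the spirit of the abstract would treat 'onset' as an unknown per-trial latency and estimate it by maximizing the point-process likelihood.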
DYNAMICAL FIELD LINE CONNECTIVITY IN MAGNETIC TURBULENCE
Ruffolo, D.; Matthaeus, W. H.
2015-06-20
Point-to-point magnetic connectivity has a stochastic character whenever magnetic fluctuations cause a field line random walk, but this can also change due to dynamical activity. Comparing the instantaneous magnetic connectivity from the same point at two different times, we provide a nonperturbative analytic theory for the ensemble average perpendicular displacement of the magnetic field line, given the power spectrum of magnetic fluctuations. For simplicity, the theory is developed in the context of transverse turbulence, and is numerically evaluated for the noisy reduced MHD model. Our formalism accounts for the dynamical decorrelation of magnetic fluctuations due to wave propagation, local nonlinear distortion, random sweeping, and convection by a bulk wind flow relative to the observer. The diffusion coefficient D_X of the time-differenced displacement becomes twice the usual field line diffusion coefficient D_x at large time displacement t or large distance z along the mean field (corresponding to a pair of uncorrelated random walks), though for a low Kubo number (in the quasilinear regime) it can oscillate at intermediate values of t and z. At high Kubo number the dynamical decorrelation decays mainly from the nonlinear term and D_X tends monotonically toward 2D_x with increasing t and z. The formalism and results presented here are relevant to a variety of astrophysical processes, such as electron transport and heating patterns in coronal loops and the solar transition region, changing magnetic connection to particle sources near the Sun or at a planetary bow shock, and thickening of coronal hole boundaries.
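The underlying field line random walk can be illustrated with a Monte Carlo estimate of the diffusion coefficient D_x = <Δx²>/(2z), assuming delta-correlated transverse fluctuations; the step model and parameter values are toy assumptions, not the paper's nonperturbative theory:

```python
import numpy as np

rng = np.random.default_rng(42)

def field_line_diffusion(n_lines=2000, n_steps=400, dz=1.0, sigma=0.1):
    """Monte Carlo field line random walk: each line advances along the
    mean field in steps dz, displacing in x by (b_x / B0) dz with
    independent transverse fluctuations of rms 'sigma'. The mean-square
    displacement then grows linearly, so D_x = <x^2> / (2 z)."""
    x = np.zeros(n_lines)
    for _ in range(n_steps):
        x += sigma * rng.normal(size=n_lines) * dz
    z = n_steps * dz
    return np.mean(x**2) / (2.0 * z)

D = field_line_diffusion()
# for delta-correlated steps, D should be close to sigma^2 * dz / 2 = 0.005
print(D)
```

The paper's D_X for time-differenced displacements would be obtained by comparing two such walks with partially decorrelated fluctuation fields, approaching 2 D_x when fully decorrelated.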
Dynamic versus static neural network model for rainfall forecasting at Klang River Basin, Malaysia
NASA Astrophysics Data System (ADS)
El-Shafie, A.; Noureldin, A.; Taha, M.; Hussain, A.; Mukhlisin, M.
2012-04-01
Rainfall is considered one of the major components of the hydrological process; it plays a significant part in evaluating drought and flooding events. Therefore, it is important to have an accurate model for rainfall forecasting. Recently, several data-driven modeling approaches have been investigated to perform such forecasting tasks, such as multi-layer perceptron neural networks (MLP-NN). In fact, rainfall time series modeling involves an important temporal dimension. On the other hand, the classical MLP-NN is a static, memoryless network architecture that is effective for complex nonlinear static mapping. This research focuses on investigating the potential of introducing a neural network that could address the temporal relationships of the rainfall series. Two static neural networks and one dynamic neural network, namely the multi-layer perceptron neural network (MLP-NN), the radial basis function neural network (RBFNN), and the input delay neural network (IDNN), respectively, have been examined in this study. These models were developed for two time horizons, monthly and weekly rainfall forecasting, at Klang River, Malaysia. Data collected over 12 yr (1997-2008) on a weekly basis and 22 yr (1987-2008) on a monthly basis were used to develop and examine the performance of the proposed models. Comprehensive comparison analyses were carried out to evaluate the performance of the proposed static and dynamic neural networks. Results showed that the MLP-NN model is able to follow trends of the actual rainfall, though not very accurately. The RBFNN model achieved better accuracy than the MLP-NN model. Moreover, the forecasting accuracy of the IDNN model was better than that of the static networks during both training and testing stages, demonstrating a consistent level of accuracy with seen and unseen data.
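The key difference of the IDNN is that temporal memory is supplied by feeding a static network delayed copies of the input. A sketch of building such a lagged design matrix; the series values and lag count are invented for illustration:

```python
import numpy as np

def delay_embed(series, n_lags):
    """Build the lagged input matrix used by an input delay neural
    network (IDNN): each row holds the n_lags most recent values and
    the target is the next value. This is how a static feed-forward
    net is given the temporal memory described in the abstract."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = np.array(series[n_lags:])
    return X, y

rain = [2.0, 0.5, 1.0, 3.5, 0.0, 4.2, 1.1]  # toy weekly rainfall values
X, y = delay_embed(rain, n_lags=3)
print(X.shape, y.shape)  # → (4, 3) (4,)
```

Any static regressor (MLP, RBF network) trained on (X, y) then becomes a one-step-ahead forecaster with a 3-step input delay line.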
Curley, J Lowry; Jennings, Scott R; Moore, Michael J
2011-02-11
Increasingly, patterned cell culture environments are becoming a relevant technique to study cellular characteristics, and many researchers believe 3D environments are needed for in vitro experiments that better mimic in vivo qualities. Studies in fields such as cancer research, neural engineering, cardiac physiology, and cell-matrix interaction have shown cell behavior differs substantially between traditional monolayer cultures and 3D constructs. Hydrogels are used as 3D environments because of their variety, versatility and ability to tailor molecular composition through functionalization. Numerous techniques exist for creation of constructs as cell-supportive matrices, including electrospinning, elastomer stamps, inkjet printing, additive photopatterning, static photomask projection-lithography, and dynamic mask microstereolithography. Unfortunately, these methods involve multiple production steps and/or equipment not readily adaptable to conventional cell and tissue culture methods. The technique employed in this protocol adapts the latter two methods, using a digital micromirror device (DMD) to create dynamic photomasks for crosslinking geometrically specific poly-(ethylene glycol) (PEG) hydrogels, induced through UV initiated free radical polymerization. The resulting "2.5D" structures provide a constrained 3D environment for neural growth. We employ a dual-hydrogel approach, where PEG serves as a cell-restrictive region supplying structure to an otherwise shapeless but cell-permissive self-assembling gel made from either Puramatrix or agarose. The process is a quick, simple, one-step fabrication that is highly reproducible and easily adapted for use with conventional cell culture methods and substrates. Whole tissue explants, such as embryonic dorsal root ganglia (DRG), can be incorporated into the dual hydrogel constructs for experimental assays such as neurite outgrowth. Additionally, dissociated cells can be encapsulated in the
Quantum perceptron over a field and neural network architecture selection in a quantum computer.
da Silva, Adenilton José; Ludermir, Teresa Bernarda; de Oliveira, Wilson Rosa
2016-04-01
In this work, we propose a quantum neural network named quantum perceptron over a field (QPF). Quantum computers are not yet a reality and the models and algorithms proposed in this work cannot be simulated in actual (or classical) computers. QPF is a direct generalization of a classical perceptron and solves some drawbacks found in previous models of quantum perceptrons. We also present a learning algorithm named Superposition based Architecture Learning algorithm (SAL) that optimizes the neural network weights and architectures. SAL searches for the best architecture in a finite set of neural network architectures with linear time over the number of patterns in the training set. SAL is the first learning algorithm to determine neural network architectures in polynomial time. This speedup is obtained by the use of quantum parallelism and a non-linear quantum operator. Copyright © 2016 Elsevier Ltd. All rights reserved.
Quantum dynamics in strong fluctuating fields
NASA Astrophysics Data System (ADS)
Goychuk, Igor; Hänggi, Peter
A large number of multifaceted quantum transport processes in molecular systems and physical nanosystems, such as e.g. nonadiabatic electron transfer in proteins, can be treated in terms of quantum relaxation processes which couple to one or several fluctuating environments. A thermal equilibrium environment can conveniently be modelled by a thermal bath of harmonic oscillators. An archetypal situation is provided by a two-state dissipative quantum dynamics, commonly known under the label of spin-boson dynamics. An interesting and nontrivial physical situation emerges, however, when the quantum dynamics evolves far away from thermal equilibrium. This occurs, for example, when a charge transferring medium possesses nonequilibrium degrees of freedom, or when a strong time-dependent control field is applied externally. Accordingly, certain parameters of the underlying quantum subsystem acquire a stochastic character. This may occur, for example, for the tunnelling coupling between the donor and acceptor states of the transferring electron, or for the corresponding energy difference between electronic states, which assume, via the coupling to the fluctuating environment, an explicit stochastic or deterministic time-dependence. Here, we review the general theoretical framework which is based on the method of projector operators, yielding the quantum master equations for systems that are exposed to strong external fields. This allows one to investigate, on a common basis, the influence of nonequilibrium fluctuations and periodic electrical fields on the already mentioned dynamics and related quantum transport processes. Most importantly, such strong fluctuating fields induce a whole variety of nonlinear and nonequilibrium phenomena. A characteristic feature of such dynamics is the absence of thermal (quantum) detailed balance.
Point process modeling and estimation: Advances in the analysis of dynamic neural spiking data
NASA Astrophysics Data System (ADS)
Deng, Xinyi
2016-08-01
A common interest of scientists in many fields is to understand the relationship between the dynamics of a physical system and the occurrences of discrete events within such physical system. Seismologists study the connection between mechanical vibrations of the Earth and the occurrences of earthquakes so that future earthquakes can be better predicted. Astrophysicists study the association between the oscillating energy of celestial regions and the emission of photons to learn about the Universe's various objects and their interactions. Neuroscientists study the link between behavior and the millisecond-timescale spike patterns of neurons to understand higher brain functions. Such relationships can often be formulated within the framework of state-space models with point process observations. The basic idea is that the dynamics of the physical systems are driven by the dynamics of some stochastic state variables and the discrete events we observe in an interval are noisy observations with distributions determined by the state variables. This thesis proposes several new methodological developments that advance the framework of state-space models with point process observations at the intersection of statistics and neuroscience. In particular, we develop new methods 1) to characterize the rhythmic spiking activity using history-dependent structure, 2) to model population spike activity using marked point process models, 3) to allow for real-time decision making, and 4) to take into account the need for dimensionality reduction for high-dimensional state and observation processes. We applied these methods to a novel problem of tracking rhythmic dynamics in the spiking of neurons in the subthalamic nucleus of Parkinson's patients with the goal of optimizing placement of deep brain stimulation electrodes. We developed a decoding algorithm that can make decisions in real time (for example, to stimulate the neurons or not) based on various sources of information present in
Empirical modeling ENSO dynamics with complex-valued artificial neural networks
NASA Astrophysics Data System (ADS)
Seleznev, Aleksei; Gavrilov, Andrey; Mukhin, Dmitry
2016-04-01
The main difficulty in empirically reconstructing distributed dynamical systems (e.g. regional climate systems, such as the El-Nino-Southern Oscillation - ENSO) is the huge amount of observational data comprising time-varying spatial fields of several variables. An efficient reduction of the system's dimensionality is therefore essential for inferring an evolution operator (EO) for a low-dimensional subsystem that determines the key properties of the observed dynamics. In this work, for efficient reduction of observational data sets, we use complex-valued (Hilbert) empirical orthogonal functions, which are appropriate, by their nature, for describing propagating structures, unlike traditional empirical orthogonal functions. For the approximation of the EO, a universal model in the form of a complex-valued artificial neural network is suggested. The effectiveness of this approach is demonstrated by predicting both the Jin-Neelin-Ghil ENSO model [1] behavior and real ENSO variability from sea surface temperature anomalies data [2]. The study is supported by Government of Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of RAS). 1. Jin, F.-F., J. D. Neelin, and M. Ghil, 1996: El Ni˜no/Southern Oscillation and the annual cycle: subharmonic frequency locking and aperiodicity. Physica D, 98, 442-465. 2. http://iridl.ldeo.columbia.edu/SOURCES/.KAPLAN/.EXTENDED/.v2/.ssta/
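Hilbert (complex) EOFs can be sketched as an SVD of the analytic signal: for a toy traveling wave, essentially all variance falls into a single complex mode, which is what makes them suited to propagating structures. The grid sizes and the wave itself are illustrative assumptions, not the ENSO data:

```python
import numpy as np
from scipy.signal import hilbert

def hilbert_eofs(field, n_modes=2):
    """Complex (Hilbert) EOFs: take the analytic signal of each spatial
    point's time series (time on axis 0), then the leading singular
    vectors of the complex data matrix. Propagating structures appear
    as phase gradients in the complex spatial modes."""
    analytic = hilbert(field, axis=0)
    analytic = analytic - analytic.mean(axis=0)
    U, s, Vh = np.linalg.svd(analytic, full_matrices=False)
    return Vh[:n_modes].conj().T, s[:n_modes]  # spatial modes, amplitudes

# toy rightward-propagating wave sampled over whole periods
t = np.linspace(0, 4 * np.pi, 256, endpoint=False)[:, None]
x = np.linspace(0, 2 * np.pi, 30, endpoint=False)[None, :]
wave = np.cos(t - x)
modes, amps = hilbert_eofs(wave)
print(amps[0] > 100 * amps[1])  # True: one complex mode dominates
```

The same wave needs two conventional (real) EOFs, a standing sine and cosine pair, which is the redundancy the complex formulation removes before fitting the neural network EO.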
Spatiotemporal multi-resolution approximation of the Amari type neural field model.
Aram, P; Freestone, D R; Dewar, M; Scerri, K; Jirsa, V; Grayden, D B; Kadirkamanathan, V
2013-02-01
Neural fields are spatially continuous state variables described by integro-differential equations, which are well suited to describe the spatiotemporal evolution of cortical activations on multiple scales. Here we develop a multi-resolution approximation (MRA) framework for the integro-difference equation (IDE) neural field model based on semi-orthogonal cardinal B-spline wavelets. In this way, a flexible framework is created, whereby both macroscopic and microscopic behavior of the system can be represented simultaneously. State and parameter estimation is performed using the expectation maximization (EM) algorithm. A synthetic example is provided to demonstrate the framework.
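A plain discretization of an Amari-type integro-difference field can be sketched as repeated convolution with a lateral connectivity kernel. The Mexican-hat kernel, sigmoid gain, and grid below are illustrative assumptions, and the paper's B-spline wavelet MRA and EM estimation are omitted:

```python
import numpy as np

def simulate_amari(n=128, steps=60, dt=0.1, length=10.0):
    """Integro-difference Amari field on a 1D grid:
    v_{t+1} = v_t + dt * (-v_t + integral of K(x - x') f(v_t(x')) dx'),
    with a Mexican-hat kernel K applied by discrete convolution and a
    sigmoidal firing-rate function f."""
    dx = length / n
    xs = (np.arange(n) - n // 2) * dx
    # local excitation minus broader inhibition (Mexican hat)
    kernel = 2.0 * np.exp(-xs**2) - np.exp(-xs**2 / 4.0)

    def f(v):  # sigmoidal firing rate
        return 1.0 / (1.0 + np.exp(-5.0 * (v - 0.5)))

    v = np.exp(-xs**2)  # localized initial activation bump
    for _ in range(steps):
        drive = dx * np.convolve(f(v), kernel, mode="same")
        v = v + dt * (-v + drive)
    return v

v = simulate_amari()
print(v.shape)  # → (128,)
```

The MRA framework of the paper would expand v and the kernel in B-spline wavelet bases, so that coarse and fine scales of this same dynamics are represented simultaneously in a state-space form.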
Optical sensed image fusion with dynamic neural networks
NASA Astrophysics Data System (ADS)
Shkvarko, Yuri V.; Ibarra-Manzano, Oscar G.; Jaime-Rivas, Rene; Andrade-Lucio, Jose A.; Alvarado-Mendez, Edgar; Rojas-Laguna, R.; Torres-Cisneros, Miguel; Alvarez-Jaime, J. A.
2001-08-01
A neural network-based technique for improving the quality of image fusion is proposed, as required for remote sensing (RS) imagery. We propose to exploit information about the point spread functions of the corresponding RS imaging systems, combining it with realistic prior knowledge about the properties of the scene contained in the maximum entropy (ME) a priori image model. The problem is solved by applying an aggregate regularization method to the fusion tasks, aiming to achieve the best resolution and noise-suppression performance in the overall resulting image. The proposed fusion method assumes the ability to control the design parameters, which influence the overall restoration performance. Computationally, the fusion method is implemented using a maximum entropy Hopfield-type neural network with adjustable parameters. Simulations illustrate the improved performance of the developed MENN-based image fusion method.
Active Control of Complex Systems via Dynamic (Recurrent) Neural Networks
1992-05-30
Neural Correlates of Dynamically Evolving Interpersonal Ties Predict Prosocial Behavior
Fahrenfort, Johannes J.; van Winden, Frans; Pelloux, Benjamin; Stallen, Mirre; Ridderinkhof, K. Richard
2011-01-01
There is a growing interest for the determinants of human choice behavior in social settings. Upon initial contact, investment choices in social settings can be inherently risky, as the degree to which the other person will reciprocate is unknown. Nevertheless, people have been shown to exhibit prosocial behavior even in one-shot laboratory settings where all interaction has been taken away. A logical step has been to link such behavior to trait empathy-related neurobiological networks. However, as a social interaction unfolds, the degree of uncertainty with respect to the expected payoff of choice behavior may change as a function of the interaction. Here we attempt to capture this factor. We show that the interpersonal tie one develops with another person during interaction – rather than trait empathy – motivates investment in a public good that is shared with an anonymous interaction partner. We examined how individual differences in trait empathy and interpersonal ties modulate neural responses to imposed monetary sharing. After, but not before interaction in a public good game, sharing prompted activation of neural systems associated with reward (striatum), empathy (anterior insular cortex and anterior cingulate cortex) as well as altruism, and social significance [posterior superior temporal sulcus (pSTS)]. Although these activations could be linked to both empathy and interpersonal ties, only tie-related pSTS activation predicted prosocial behavior during subsequent interaction, suggesting a neural substrate for keeping track of social relevance. PMID:22403524
A neural network dynamics that resembles protein evolution
NASA Astrophysics Data System (ADS)
Ferrán, Edgardo A.; Ferrara, Pascual
1992-06-01
We use neural networks to classify proteins according to their sequence similarities. A network composed of 7 × 7 neurons was trained with the Kohonen unsupervised learning algorithm using, as inputs, matrix patterns derived from the bipeptide composition of cytochrome c proteins belonging to 76 different species. As a result of the training, the network self-organized the activation of its neurons into topologically ordered maps, wherein phylogenetically related sequences were positioned close to each other. The evolution of the topological map during learning, in a representative computational experiment, roughly resembles the way in which one species evolves into several others. For instance, sequences corresponding to vertebrates, initially grouped together into one neuron, were placed in a contiguous zone of the final neural map, with sequences of fishes, amphibia, reptiles, birds and mammals associated with different neurons. Some apparently incorrect classifications are due to the fact that some proteins share a greater degree of sequence identity than expected from phylogenetics. In the final neural map, each synaptic vector may be considered the pattern corresponding to the ancestor of all the proteins that are attached to that neuron. Although it may also be tempting to link real time with learning epochs and to use this relationship to calibrate the molecular evolutionary clock, this is not correct because the evolutionary time schedule obtained with the neural network depends highly on the discrete way in which the winner neighborhood is decreased during learning.
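The Kohonen training loop described in this abstract (pick the best-matching unit, then pull its shrinking neighborhood toward the input) can be sketched on toy data. The 7 × 7 grid is kept, but the dipeptide-composition inputs are replaced by two synthetic clusters of vectors, so the grouping is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(7, 7), epochs=50, lr0=0.5, sigma0=2.0):
    n_units = grid[0] * grid[1]
    weights = rng.random((n_units, data.shape[1]))
    # 2-D map coordinates of each unit, used by the neighborhood kernel
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)
        sigma = sigma0 * (1 - t / epochs) + 0.5   # shrinking winner neighborhood
        for x in data:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
            d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))    # Gaussian neighborhood around the winner
            weights += lr * h[:, None] * (x - weights)
    return weights

# two toy "protein families": related inputs should land on nearby units
fam_a = rng.normal(0.2, 0.02, (10, 16))
fam_b = rng.normal(0.8, 0.02, (10, 16))
w = train_som(np.vstack([fam_a, fam_b]))
bmu_a = int(np.argmin(np.linalg.norm(w - fam_a[0], axis=1)))
bmu_b = int(np.argmin(np.linalg.norm(w - fam_b[0], axis=1)))
print(bmu_a != bmu_b)
```

As the winner neighborhood is decreased over epochs, the two families end up mapped to distinct units, mirroring how related sequences occupy contiguous zones of the final map.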
Firing rate dynamics in recurrent spiking neural networks with intrinsic and network heterogeneity.
Ly, Cheng
2015-12-01
Heterogeneity of neural attributes has recently gained considerable attention and is increasingly recognized as a crucial feature in neural processing. Despite its importance, this physiological feature has traditionally been neglected in theoretical studies of cortical neural networks. Thus, much remains unknown about the consequences of cellular and circuit heterogeneity in spiking neural networks. In particular, combining network or synaptic heterogeneity and intrinsic heterogeneity has yet to be considered systematically, despite the fact that both are known to exist and likely have significant roles in neural network dynamics. In a canonical recurrent spiking neural network model, we study how these two forms of heterogeneity lead to different distributions of excitatory firing rates. To analytically characterize how these types of heterogeneities affect the network, we employ a dimension reduction method that relies on a combination of Monte Carlo simulations and probability density function equations. We find that the relationship between intrinsic and network heterogeneity has a strong effect on the overall level of heterogeneity of the firing rates. Specifically, this relationship can lead to amplification or attenuation of firing rate heterogeneity, and these effects depend on whether the recurrent network is firing asynchronously or rhythmically. These observations are captured with the aforementioned reduction method, and simpler analytic descriptions based on this dimension reduction method are also developed. The final analytic descriptions provide compact and descriptive formulas for how the relationship between intrinsic and network heterogeneity determines the firing rate heterogeneity dynamics in various settings.
NASA Astrophysics Data System (ADS)
Li, Xiaofeng; Xiang, Suying; Zhu, Pengfei; Wu, Min
2015-12-01
In order to avoid the inherent deficiencies of the traditional BP neural network, such as slow convergence, a tendency to become trapped in local minima, poor generalization ability and difficulty in determining the network structure, a dynamic self-adaptive learning algorithm for the BP neural network is put forward to improve its performance. The new algorithm combines the merits of principal component analysis, particle swarm optimization, correlation analysis and a self-adaptive model, and hence can effectively solve the problems of selecting the structural parameters, initial connection weights, thresholds and learning rates of the BP neural network. This new algorithm not only reduces human intervention, optimizes the topological structure of BP neural networks and improves network generalization ability, but also accelerates the convergence speed of the network, avoids becoming trapped in local minima, and enhances network adaptability and prediction ability. The dynamic self-adaptive learning algorithm of the BP neural network is used to forecast the total retail sales of consumer goods of Sichuan Province, China. Empirical results indicate that the new algorithm is superior to the traditional BP network algorithm in prediction accuracy and time consumption, which shows the feasibility and effectiveness of the new algorithm.
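Only the self-adaptive ingredient lends itself to a compact sketch; the PCA, particle-swarm and correlation-analysis components of the full algorithm are omitted here. The toy rule below (a "bold driver"-style schedule: grow the step while the loss falls, halve it on a setback) is an assumption standing in for the paper's adaptation scheme:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (64, 1))
y = np.sin(np.pi * X)                      # toy regression target

W1 = rng.normal(0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr, losses = 0.1, []

for _ in range(2000):
    H = np.tanh(X @ W1 + b1)               # forward pass
    out = H @ W2 + b2
    loss = float(np.mean((out - y) ** 2))
    # self-adaptive step size: grow while improving, halve on a setback
    if losses and loss > losses[-1]:
        lr *= 0.5
    else:
        lr = min(lr * 1.05, 0.5)
    losses.append(loss)
    g_out = 2 * (out - y) / len(X)         # backpropagation
    gW2 = H.T @ g_out; gb2 = g_out.sum(0)
    g_h = (g_out @ W2.T) * (1 - H ** 2)
    gW1 = X.T @ g_h; gb1 = g_h.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(losses[-1] < losses[0])
```

The adaptive schedule removes the need to hand-tune a fixed learning rate, which is the part of the human intervention this family of methods targets.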
NASA Astrophysics Data System (ADS)
Chang, Fi-John; Chen, Pin-An; Liu, Chen-Wuing; Liao, Vivian Hsiu-Chuan; Liao, Chung-Min
2013-08-01
Arsenic (As) is an odorless semi-metal that occurs naturally in rock and soil, and As contamination in groundwater resources has become a serious threat to human health. Thus, assessing the spatial and temporal variability of As concentration is highly desirable, particularly in heavily As-contaminated areas. However, various difficulties may be encountered in the regional estimation of As concentration, such as cost-intensive field monitoring, scarcity of field data, identification of important factors affecting As, over-fitting or poor estimation accuracy. This study develops a novel systematical dynamic-neural modeling (SDM) approach for effectively estimating regional As-contaminated water quality by using easily-measured water quality variables. To tackle the difficulties commonly encountered in regional estimation, the SDM comprises a neural network, the Nonlinear Autoregressive with eXogenous input (NARX) network, and four statistical techniques: the Gamma test, cross-validation, the Bayesian regularization method and indicator kriging (IK). For practical application, this study investigated a heavily As-contaminated area in Taiwan. The backpropagation neural network (BPNN) is adopted for comparison purposes. The results demonstrate that the NARX network (root mean square error (RMSE): 95.11 μg l-1 for training; 106.13 μg l-1 for validation) outperforms the BPNN (RMSE: 121.54 μg l-1 for training; 143.37 μg l-1 for validation). The constructed SDM can provide reliable estimation (R2 > 0.89) of As concentration at ungauged sites based merely on three easily-measured water quality variables (Alk, Ca2+ and pH). In addition, risk maps under the threshold of the WHO drinking water standard (10 μg l-1) are derived by the IK to visually display the spatial and temporal variation of the As concentration in the whole study area at different time spans. The proposed SDM can be practically applied with satisfaction to the regional estimation in study areas of interest and the
NASA Astrophysics Data System (ADS)
Wan, Tat C.; Kabuka, Mansur R.
1994-05-01
With the tremendous growth in imaging applications and the development of filmless radiology, the need for compression techniques that can achieve high compression ratios with user specified distortion rates becomes necessary. Boundaries and edges in the tissue structures are vital for detection of lesions and tumors, which in turn requires the preservation of edges in the image. The proposed edge preserving image compressor (EPIC) combines lossless compression of edges with neural network compression techniques based on dynamic associative neural networks (DANN), to provide high compression ratios with user specified distortion rates in an adaptive compression system well-suited to parallel implementations. Improvements to DANN-based training through the use of a variance classifier for controlling a bank of neural networks speed convergence and allow the use of higher compression ratios for `simple' patterns. The adaptation and generalization capabilities inherent in EPIC also facilitate progressive transmission of images through varying the number of quantization levels used to represent compressed patterns. Average compression ratios of 7.51:1 with an average mean squared error of 0.0147 were achieved.
Robustness analysis of uncertain dynamical neural networks with multiple time delays.
Senan, Sibel
2015-10-01
This paper studies the problem of global robust asymptotic stability of the equilibrium point for the class of dynamical neural networks with multiple time delays with respect to the class of slope-bounded activation functions and in the presence of the uncertainties of system parameters of the considered neural network model. By using an appropriate Lyapunov functional and exploiting the properties of the homeomorphism mapping theorem, we derive a new sufficient condition for the existence, uniqueness and global robust asymptotic stability of the equilibrium point for the class of neural networks with multiple time delays. The obtained stability condition basically relies on testing some relationships imposed on the interconnection matrices of the neural system, which can be easily verified by using some certain properties of matrices. An instructive numerical example is also given to illustrate the applicability of our result and show the advantages of this new condition over the previously reported corresponding results. Copyright © 2015 Elsevier Ltd. All rights reserved.
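The flavor of such easily verified matrix conditions can be illustrated with a classical norm-based sufficient criterion for a delayed Hopfield-type network; this is a textbook-style condition chosen for illustration, not the specific criterion derived in the paper:

```python
import numpy as np

def globally_stable(C, A, B, sigma):
    """Sufficient (not necessary) test for global asymptotic stability of
    x' = -C x + A f(x) + B f(x(t - tau)) with slope-bounded |f'| <= sigma:
    the self-decay must dominate the gains of both interconnection matrices."""
    c_min = np.min(np.diag(C))
    return c_min > sigma * (np.linalg.norm(A, 2) + np.linalg.norm(B, 2))

C = np.diag([3.0, 3.0])                      # neuron self-decay rates
A = np.array([[0.2, -0.1], [0.1, 0.3]])      # instantaneous connections
B = np.array([[0.1, 0.0], [0.0, 0.1]])       # delayed connections
print(globally_stable(C, A, B, sigma=1.0))
```

The appeal of criteria of this kind is exactly what the abstract notes: they reduce a delay-independent stability question to a few matrix-norm inequalities that can be checked numerically.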
Direct imaging of neural currents using ultra-low field magnetic resonance techniques
Volegov, Petr L.; Matlashov, Andrei N.; Mosher, John C.; Espy, Michelle A.; Kraus, Jr., Robert H.
2009-08-11
Using resonant interactions to directly and tomographically image neural activity in the human brain using magnetic resonance imaging (MRI) techniques at ultra-low field (ULF), the present inventors have established an approach that is sensitive to magnetic field distributions local to the spin population in cortex at the Larmor frequency of the measurement field. Because the Larmor frequency can be readily manipulated (through varying B_m), one can also envision using ULF-DNI to image the frequency distribution of the local fields in cortex. Such information, taken together with simultaneous acquisition of MEG and ULF-NMR signals, enables non-invasive exploration of the correlation between local fields induced by neural activity in cortex and more `distant' measures of brain activity such as MEG and EEG.
Dynamic Encoding of Acoustic Features in Neural Responses to Continuous Speech
Khalighinejad, Bahar; Cruzatto da Silva, Guilherme
2017-01-01
Humans are unique in their ability to communicate using spoken language. However, it remains unclear how the speech signal is transformed and represented in the brain at different stages of the auditory pathway. In this study, we characterized electroencephalography responses to continuous speech by obtaining the time-locked responses to phoneme instances (phoneme-related potential). We showed that responses to different phoneme categories are organized by phonetic features. We found that each instance of a phoneme in continuous speech produces multiple distinguishable neural responses occurring as early as 50 ms and as late as 400 ms after the phoneme onset. Comparing the patterns of phoneme similarity in the neural responses and the acoustic signals confirms a repetitive appearance of acoustic distinctions of phonemes in the neural data. Analysis of the phonetic and speaker information in neural activations revealed that different time intervals jointly encode the acoustic similarity of both phonetic and speaker categories. These findings provide evidence for a dynamic neural transformation of low-level speech features as they propagate along the auditory pathway, and form an empirical framework to study the representational changes in learning, attention, and speech disorders. SIGNIFICANCE STATEMENT We characterized the properties of evoked neural responses to phoneme instances in continuous speech. We show that each instance of a phoneme in continuous speech produces several observable neural responses at different times occurring as early as 50 ms and as late as 400 ms after the phoneme onset. Each temporal event explicitly encodes the acoustic similarity of phonemes, and linguistic and nonlinguistic information are best represented at different time intervals. Finally, we show a joint encoding of phonetic and speaker information, where the neural representation of speakers is dependent on phoneme category. These findings provide compelling new evidence for
Dynamic Encoding of Acoustic Features in Neural Responses to Continuous Speech.
Khalighinejad, Bahar; Cruzatto da Silva, Guilherme; Mesgarani, Nima
2017-02-22
Humans are unique in their ability to communicate using spoken language. However, it remains unclear how the speech signal is transformed and represented in the brain at different stages of the auditory pathway. In this study, we characterized electroencephalography responses to continuous speech by obtaining the time-locked responses to phoneme instances (phoneme-related potential). We showed that responses to different phoneme categories are organized by phonetic features. We found that each instance of a phoneme in continuous speech produces multiple distinguishable neural responses occurring as early as 50 ms and as late as 400 ms after the phoneme onset. Comparing the patterns of phoneme similarity in the neural responses and the acoustic signals confirms a repetitive appearance of acoustic distinctions of phonemes in the neural data. Analysis of the phonetic and speaker information in neural activations revealed that different time intervals jointly encode the acoustic similarity of both phonetic and speaker categories. These findings provide evidence for a dynamic neural transformation of low-level speech features as they propagate along the auditory pathway, and form an empirical framework to study the representational changes in learning, attention, and speech disorders. SIGNIFICANCE STATEMENT We characterized the properties of evoked neural responses to phoneme instances in continuous speech. We show that each instance of a phoneme in continuous speech produces several observable neural responses at different times occurring as early as 50 ms and as late as 400 ms after the phoneme onset. Each temporal event explicitly encodes the acoustic similarity of phonemes, and linguistic and nonlinguistic information are best represented at different time intervals. Finally, we show a joint encoding of phonetic and speaker information, where the neural representation of speakers is dependent on phoneme category. These findings provide compelling new evidence for
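The phoneme-related potential described in both records is, at its core, a time-locked average over phoneme onsets. A single-channel toy version (synthetic signal, hypothetical 100 Hz sampling rate and embedded evoked template, all assumptions for illustration) looks like this:

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 100                                  # hypothetical sampling rate (Hz)
sig = rng.normal(0, 0.2, 20 * fs)         # 20 s of background noise
onsets = np.arange(fs, 19 * fs, fs // 2)  # one phoneme onset every 0.5 s
win = 20                                  # 200 ms analysis window
template = np.hanning(win)                # stand-in evoked response
for t in onsets:
    sig[t:t + win] += template            # embed the response at each onset

# phoneme-related potential: average signal epochs time-locked to onsets
prp = np.mean([sig[t:t + win] for t in onsets], axis=0)
peak_ms = np.argmax(prp) / fs * 1000
print(abs(peak_ms - 100) <= 30)
```

Averaging across onsets suppresses the background activity while preserving anything consistently time-locked to the events, which is how distinguishable response components at different latencies become visible.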
Dynamical Properties of Distant Field Galaxies
NASA Astrophysics Data System (ADS)
Kron, Richard
1996-07-01
A sample of several hundred field galaxies to I = 24 {cf. I = 22, z ~ 0.5, from Lilly et al. 1995} is being observed with the Keck telescope in a major new survey. The distribution of high redshift galaxies bears directly on models for cosmology {via the number-redshift relation} and galaxy evolution. The unique aspect of the new survey is the inclusion of useful line widths from the spectra, providing, for the first time, a sample of high-redshift galaxies with kinematical information. With the addition of an image size scale provided by WFPC2, dynamical parameters can be determined. WFPC2 imaging, the existing faint photometric and spectroscopic database in the Koo-Kron redshift survey fields {Munn et al. 1995}, and new Keck spectroscopy combine to form a powerful new database that will advance our studies of cosmology and galaxy evolution at high redshift. We propose to obtain WFPC2 images {F606W and F814W} in one region of the sky, adjacent to existing deep WFPC2 images. This combined region is designed to match the field-of-view of the Keck multiple-object spectrograph. This strategy enables a factor-of-three higher rate of acquisition of spectra for galaxies with measured HST image parameters. Keck spectroscopy in this field has already been successfully initiated, and further Keck observations will proceed in parallel with the new WFPC2 exposures.
Optical Trapping Dynamics in Interference Field
NASA Astrophysics Data System (ADS)
Viera, Luis Alfredo; Lira, Ignacio; Soto, Leopoldo; Pavez, Cristián
2008-04-01
A model that predicts the particle trapping time in a two-beam laser interference field is proposed. This interference consists of a sinusoidal intensity pattern, which is used to translate the particle from the dark fringes to the bright ones. The particle is submerged in a viscous fluid. The model takes into account the irradiance, the wavelength, the fringe width, the medium viscosity and the size and approximate shape of the particle. From the classical separation of the optical trapping force into gradient and scattering forces, only the gradient force is considered, expressed in terms of the electric field. Opposing this force, the drag force is considered in terms of the Stokes force. The expression for the gradient force is the solution of the Maxwell equations for a homogeneous dielectric dipole in an electric field. For the Stokes force, the RBC (red blood cell) is considered an oblate spheroid flowing edgewise. An experimental setup has been designed for the displacement of a single RBC in blood plasma due to an interference laser field, produced by an Argon ion laser, using several irradiances. To the best of our knowledge, this is the only dynamic model of optical trapping that predicts the particle trapping time and position without requiring experimental results, and it does so in a simple analytical way. This analysis can be extended to other particles of arbitrary shape and to other trap configurations.
Disappearing inflaton potential via heavy field dynamics
Kitajima, Naoya; Takahashi, Fuminobu E-mail: fumi@tuhep.phys.tohoku.ac.jp
2016-02-01
We propose a possibility that the inflaton potential is significantly modified after inflation due to heavy field dynamics. During inflation such a heavy scalar field may be stabilized at a value deviated from the low-energy minimum. In extreme cases, the inflaton potential vanishes and the inflaton becomes almost massless at some time after inflation. Such transition of the inflaton potential has interesting implications for primordial density perturbations, reheating, creation of unwanted relics, dark radiation, and experimental search for light degrees of freedom. To be concrete, we consider a chaotic inflation in supergravity where the inflaton mass parameter is promoted to a modulus field, finding that the inflaton becomes stable after the transition and contributes to dark matter. Another example is a hilltop inflation (also called new inflation) by the MSSM Higgs field which acquires a large expectation value just after inflation, but it returns to the origin after the transition and finally rolls down to the electroweak vacuum. Interestingly, the smallness of the electroweak scale compared to the Planck scale is directly related to the flatness of the inflaton potential.
Perry, Gavin; Litvak, Vladimir; Singh, Krish D.; Friston, Karl J.
2016-01-01
Abstract This article describes the first application of a generic (empirical) Bayesian analysis of between‐subject effects in the dynamic causal modeling (DCM) of electrophysiological (MEG) data. It shows that (i) non‐invasive (MEG) data can be used to characterize subject‐specific differences in cortical microcircuitry and (ii) presents a validation of DCM with neural fields that exploits intersubject variability in gamma oscillations. We find that intersubject variability in visually induced gamma responses reflects changes in the excitation‐inhibition balance in a canonical cortical circuit. Crucially, this variability can be explained by subject‐specific differences in intrinsic connections to and from inhibitory interneurons that form a pyramidal‐interneuron gamma network. Our approach uses Bayesian model reduction to evaluate the evidence for (large sets of) nested models—and optimize the corresponding connectivity estimates at the within and between‐subject level. We also consider Bayesian cross‐validation to obtain predictive estimates for gamma‐response phenotypes, using a leave‐one‐out procedure. Hum Brain Mapp 37:4597–4614, 2016. © The Authors Human Brain Mapping Published by Wiley Periodicals, Inc. PMID:27593199
Parameter estimation of breast tumour using dynamic neural network from thermal pattern.
Saniei, Elham; Setayeshi, Saeed; Akbari, Mohammad Esmaeil; Navid, Mitra
2016-11-01
This article presents a new approach for estimating the depth, size, and metabolic heat generation rate of a tumour. For this purpose, the surface temperature distribution of a breast thermal image and the dynamic neural network was used. The research consisted of two steps: forward and inverse. For the forward section, a finite element model was created. The Pennes bio-heat equation was solved to find surface and depth temperature distributions. Data from the analysis, then, were used to train the dynamic neural network model (DNN). Results from the DNN training/testing confirmed those of the finite element model. For the inverse section, the trained neural network was applied to estimate the depth temperature distribution (tumour position) from the surface temperature profile, extracted from the thermal image. Finally, tumour parameters were obtained from the depth temperature distribution. Experimental findings (20 patients) were promising in terms of the model's potential for retrieving tumour parameters.
Classification data mining method based on dynamic RBF neural networks
NASA Astrophysics Data System (ADS)
Zhou, Lijuan; Xu, Min; Zhang, Zhang; Duan, Luping
2009-04-01
With the wide application of databases and the rapid development of the Internet, the capacity to generate and collect data using information technology has grown greatly. It is an urgent problem to mine useful information or knowledge from large databases or data warehouses. Therefore, data mining technology is developing rapidly to meet this need. But DM (data mining) often faces data that are noisy, disordered and nonlinear. Fortunately, ANN (Artificial Neural Network) is well suited to these problems of DM because ANN has such merits as good robustness, adaptability, parallel processing, distributed memory and high error tolerance. This paper gives a detailed discussion of the application of ANN methods in DM based on an analysis of the various kinds of data mining technology, and especially lays stress on classification data mining based on RBF neural networks. Pattern classification is an important part of the RBF neural network application. In an on-line environment, the training dataset is variable, so batch learning algorithms (e.g. OLS), which generate plenty of unnecessary retraining, have lower efficiency. This paper deduces an incremental learning algorithm (ILA) from the gradient descent algorithm to remove this bottleneck. ILA can adaptively adjust parameters of RBF networks, driven by minimizing the error cost, without any redundant retraining. Using the method proposed in this paper, an on-line classification system was constructed to resolve the IRIS classification problem. Experimental results show the algorithm has a fast convergence rate and excellent on-line classification performance.
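The incremental learning idea, updating RBF network parameters sample by sample instead of batch retraining, can be sketched with an online delta rule on the output weights. Fixing the centers and widths is a simplifying assumption, since the paper's ILA adjusts RBF parameters more generally:

```python
import numpy as np

rng = np.random.default_rng(4)
centers = np.linspace(-1, 1, 10)   # fixed RBF centers (simplification)
width = 0.3
w = np.zeros(10)                   # output weights, adapted online

def rbf(x):
    return np.exp(-((x - centers) ** 2) / (2 * width ** 2))

lr = 0.2
for _ in range(5):                 # a few passes over a streaming dataset
    for x in rng.uniform(-1, 1, 200):
        phi = rbf(x)
        err = np.sin(np.pi * x) - w @ phi
        w += lr * err * phi        # incremental delta-rule update, no retraining

xs = np.linspace(-0.9, 0.9, 50)
pred = np.array([w @ rbf(x) for x in xs])
rmse = np.sqrt(np.mean((pred - np.sin(np.pi * xs)) ** 2))
print(rmse < 0.15)
```

Each update costs one forward pass, so the model tracks a variable training set on-line, which is the efficiency advantage over batch methods such as OLS that the abstract describes.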
From behavior to neural dynamics: An integrated theory of attention
Buschman, Timothy J.; Kastner, Sabine
2015-01-01
The brain has a limited capacity and therefore needs mechanisms to selectively enhance the information most relevant to one’s current behavior. We refer to these mechanisms as ‘attention’. Attention acts by increasing the strength of selected neural representations and preferentially routing them through the brain’s large-scale network. This is a critical component of cognition and therefore has been a central topic in cognitive neuroscience. Here we review a diverse literature that has studied attention at the level of behavior, networks, circuits and neurons. We then integrate these disparate results into a unified theory of attention. PMID:26447577
From Behavior to Neural Dynamics: An Integrated Theory of Attention.
Buschman, Timothy J; Kastner, Sabine
2015-10-07
The brain has a limited capacity and therefore needs mechanisms to selectively enhance the information most relevant to one's current behavior. We refer to these mechanisms as "attention." Attention acts by increasing the strength of selected neural representations and preferentially routing them through the brain's large-scale network. This is a critical component of cognition and therefore has been a central topic in cognitive neuroscience. Here we review a diverse literature that has studied attention at the level of behavior, networks, circuits, and neurons. We then integrate these disparate results into a unified theory of attention.
NASA Astrophysics Data System (ADS)
Shoaib, Muhammad; Shamseldin, Asaad Y.; Melville, Bruce W.; Khan, Mudasser Muneer
2016-04-01
In order to predict runoff accurately from a rainfall event, multilayer perceptron type neural network models are commonly used in hydrology. Furthermore, wavelet coupled multilayer perceptron neural network (MLPNN) models have also been found superior to simple neural network models which are not coupled with wavelets. However, MLPNN models are considered static, memoryless networks and lack the ability to examine the temporal dimension of data. Recurrent neural network models, on the other hand, have the ability to learn from the preceding conditions of the system and are hence considered dynamic models. This study for the first time explores the potential of wavelet coupled time lagged recurrent neural network (TLRNN) models for runoff prediction using rainfall data. The Discrete Wavelet Transformation (DWT) is employed in this study to decompose the input rainfall data using six of the most commonly used wavelet functions. The performance of the simple and the wavelet coupled static MLPNN models is compared with that of their counterpart dynamic TLRNN models. The study found that the dynamic wavelet coupled TLRNN models can be considered an alternative to the static wavelet MLPNN models. The study also investigated the effect of memory depth on the performance of static and dynamic neural network models. The memory depth refers to how much past information (lagged data) is required, as it is not known a priori. The db8 wavelet function is found to yield the best results with the static MLPNN models and with the TLRNN models having small memory depths. The performance of the wavelet coupled TLRNN models with large memory depths is found to be insensitive to the selection of the wavelet function, as all wavelet functions have similar performance.
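The wavelet-coupling step amounts to decomposing the input series and feeding the sub-series to the network. A one-level Haar DWT keeps the sketch self-contained (the study uses db8 among other wavelets); the reconstruction check confirms the transform loses no information:

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar DWT: pairwise averages (approximation) and
    differences (detail), each scaled by 1/sqrt(2)."""
    x = np.asarray(x, float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

rain = np.array([0.0, 1.0, 4.0, 2.0, 0.0, 0.0, 3.0, 1.0])  # toy rainfall series
a, d = haar_dwt(rain)

# perfect reconstruction: the decomposition loses no information,
# so the sub-series can safely replace the raw input to the network
rec = np.empty_like(rain)
rec[0::2] = (a + d) / np.sqrt(2)
rec[1::2] = (a - d) / np.sqrt(2)
print(np.allclose(rec, rain))
```

Separating slow (approximation) from fast (detail) components lets the downstream MLPNN or TLRNN model each scale of the rainfall signal with different weights.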
Poza, Jesús; Gómez, Carlos; García, María; Tola-Arribas, Miguel A; Carreres, Alicia; Cano, Mónica; Hornero, Roberto
2017-03-09
An accurate characterization of neural dynamics in mild cognitive impairment (MCI) is of paramount importance to gain further insights into the underlying neural mechanisms in Alzheimer's disease (AD). Nevertheless, there has been relatively little research on brain dynamics in prodromal AD. As a consequence, its neural substrates remain unclear. In the present research, electroencephalographic (EEG) recordings from patients with dementia due to AD, subjects with MCI due to AD and healthy controls (HC) were analyzed using relative power (RP) in conventional EEG frequency bands and a novel parameter useful to explore the spatio-temporal fluctuations of neural dynamics: the spectral flux (SF). Our results suggest that dementia due to AD is associated with a significant slowing of EEG activity and several significant alterations in spectral fluctuations at low (i.e. theta) and high (i.e. beta and gamma) frequency bands compared to HC (p < 0.05). Furthermore, subjects with MCI due to AD exhibited a specific frequency-dependent pattern of spatio-temporal abnormalities, which can help to identify neural mechanisms involved in cognitive impairment preceding AD. Classification analyses using linear discriminant analysis with a leave-one-out cross-validation procedure showed that the combination of RP and within-electrode SF at the beta band was useful to obtain a 77.3 % of accuracy to discriminate between HC and AD patients. In the case of the comparison between HC and MCI subjects, the classification accuracy reached a value of 79.2 %, combining within-electrode SF at beta and gamma bands. SF has proven to be a useful measure to obtain an original description of brain dynamics at different stages of AD. Consequently, SF may contribute to gain a more comprehensive understanding into neural substrates underlying MCI, as well as to develop potential early AD biomarkers. Copyright © Bentham Science Publishers.
Dynamical Neural Network Model of Hippocampus with Excitatory and Inhibitory Neurons
NASA Astrophysics Data System (ADS)
Omori, Toshiaki; Horiguchi, Tsuyoshi
2004-03-01
We propose a dynamical neural network model with excitatory neurons and inhibitory neurons for memory function in the hippocampus and investigate the effect of inhibitory neurons on memory recall. Numerical simulations show that the introduction of inhibitory neurons improves the stability of memory recall in the proposed model by suppressing the bursting of neurons.
ERIC Educational Resources Information Center
Zion-Golumbic, Elana; Kutas, Marta; Bentin, Shlomo
2010-01-01
Prior semantic knowledge facilitates episodic recognition memory for faces. To examine the neural manifestation of the interplay between semantic and episodic memory, we investigated neuroelectric dynamics during the creation (study) and the retrieval (test) of episodic memories for famous and nonfamous faces. Episodic memory effects were evident…
A Neural Network Model of the Structure and Dynamics of Human Personality
ERIC Educational Resources Information Center
Read, Stephen J.; Monroe, Brian M.; Brownstein, Aaron L.; Yang, Yu; Chopra, Gurveen; Miller, Lynn C.
2010-01-01
We present a neural network model that aims to bridge the historical gap between dynamic and structural approaches to personality. The model integrates work on the structure of the trait lexicon, the neurobiology of personality, temperament, goal-based models of personality, and an evolutionary analysis of motives. It is organized in terms of two…
Neural network integration of field observations for soil endocrine disruptor characterisation.
Aitkenhead, M J; Rhind, S M; Zhang, Z L; Kyle, C E; Coull, M C
2014-01-15
A neural network approach was used to predict the presence and concentration of a range of endocrine disrupting compounds (EDCs), based on field observations. Soil sample concentrations of EDCs and site environmental characteristics, drawn from the National Soil Inventory of Scotland (NSIS) database, were used. Neural network models were trained to predict soil EDC concentrations using field observations for 184 sites. The results showed that the presence/absence and concentration of several of the EDCs, mostly no longer in production, could be predicted with some accuracy. We were able to predict concentrations of seven of 31 compounds with r² values greater than 0.25 for log-normalised values, and of eight with log-normalised predictions converted to a linear scale. Additional statistical analyses were carried out, including Root Mean Square Error (RMSE), Mean Error (ME), Willmott's index of agreement, Percent Bias (PBIAS) and the ratio of root mean square error to standard deviation (RSR). These analyses allowed us to demonstrate that the neural network models were making meaningful predictions of EDC concentration. We identified the main predictive input parameters in each case, based on a sensitivity analysis of the trained neural network model. We also demonstrated the capacity of the method for predicting the presence and level of EDC concentration in the field, identified further developments required to make this process as rapid and operator-friendly as possible, and discussed the potential value of a system for field surveys of soil composition. © 2013.
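The regression-plus-r² bookkeeping described in this abstract can be sketched with a minimal one-hidden-layer network. Everything below — the 6 stand-in field observations, the log-concentration target, and the network itself — is synthetic illustration, not the NSIS data or the authors' model; only the 184-site count and the r² > 0.25 screening criterion come from the abstract.

```python
import numpy as np

# Hypothetical stand-in for the setup: 184 "sites", 6 synthetic field
# observations, and a log-normalised concentration target. The network is a
# minimal one-hidden-layer regressor trained by plain gradient descent.
rng = np.random.default_rng(0)
n, d, h = 184, 6, 8
X = rng.normal(size=(n, d))
y = np.tanh(X @ rng.normal(size=d)) + 0.1 * rng.normal(size=n)

W1 = 0.5 * rng.normal(size=(d, h)); b1 = np.zeros(h)
W2 = 0.5 * rng.normal(size=h); b2 = 0.0
lr = 0.1
for _ in range(3000):
    H = np.tanh(X @ W1 + b1)              # hidden activations
    err = H @ W2 + b2 - y                 # prediction error
    gW2 = H.T @ err / n; gb2 = err.mean()
    gH = np.outer(err, W2) * (1 - H**2)   # backprop through tanh
    gW1 = X.T @ gH / n; gb1 = gH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

pred = np.tanh(X @ W1 + b1) @ W2 + b2
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"in-sample r^2 = {r2:.2f}")       # compounds with r^2 > 0.25 were kept
```

In the paper's terms, a compound whose model clears the 0.25 threshold would be counted among the predictable ones.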
Kang, Hongki; Kim, Jee-Yeon; Choi, Yang-Kyu; Nam, Yoonkey
2017-03-28
In this research, a high performance silicon nanowire field-effect transistor (transconductance as high as 34 µS and sensitivity as high as 84 nS/mV) is extensively studied and directly compared with planar passive microelectrode arrays for neural recording applications. Electrical and electrochemical characteristics are carefully characterized in a well-controlled manner. We especially focused on the signal amplification capability and intrinsic noise of the transistors. A neural recording system using both a silicon nanowire field-effect transistor-based active-type microelectrode array and a platinum black microelectrode-based passive-type microelectrode array is implemented, and the two are compared. An artificial neural spike signal is supplied as input to both arrays through a buffer solution and recorded simultaneously. The signal intensity recorded by the silicon nanowire transistor was precisely determined by an electrical characteristic of the transistor, its transconductance. The signal-to-noise ratio was found to depend strongly on the intrinsic 1/f noise of the silicon nanowire transistor. We show how signal strength is determined and how the intrinsic noise of the transistor sets the signal-to-noise ratio of the recorded neural signals. This study provides an in-depth understanding of the overall neural recording mechanism using silicon nanowire transistors and a solid design guideline for further improvement and development. PMID:28350370
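The relationship the abstract describes — transconductance setting the recorded current, 1/f noise setting the floor — can be put in back-of-envelope form. Only the 34 µS transconductance is taken from the abstract; the spike amplitude, noise coefficient, and recording band below are illustrative assumptions.

```python
import math

# Back-of-envelope sketch: only g_m comes from the abstract; all other
# numbers are hypothetical placeholders.
g_m = 34e-6                          # transconductance, S (from the abstract)
v_spike = 100e-6                     # assumed 100 uV extracellular spike
i_signal = g_m * v_spike             # recorded drain-current excursion
K_f = 1e-21                          # hypothetical 1/f noise coefficient, A^2
f_lo, f_hi = 300.0, 3000.0           # typical spike band, Hz
i_noise_sq = K_f * math.log(f_hi / f_lo)   # integral of K_f / f over the band
snr_db = 10.0 * math.log10(i_signal**2 / i_noise_sq)
print(f"signal = {i_signal * 1e9:.1f} nA, SNR = {snr_db:.1f} dB")
```

The logarithmic band integral is the signature of 1/f noise: widening the band changes the noise power only slowly, while the signal scales linearly with g_m.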
Use of artificial neural nets to predict permeability in Hugoton Field
Thompson, K.A.; Franklin, M.H.; Olson, T.M.
1996-01-01
One of the most difficult tasks in petrophysics is establishing a quantitative relationship between core permeability and wireline logs. This is a tough problem in Hugoton Field, where a complicated mix of carbonates and clastics further obscures the correlation. One can successfully model complex relationships such as permeability-to-logs using artificial neural networks. Mind and Vision, Inc.'s neural net software was used because of its orientation toward depth-related data (such as logs) and its ability to run on a variety of log analysis platforms. This type of neural net program allows the expert geologist to select a few (10-100) points of control to train the "brainstate," using logs as predictors and core permeability as "truth." In Hugoton Field, the brainstate provides an estimate of permeability at each depth in 474 logged wells. These neural net-derived permeabilities are being used in reservoir characterization models for fluid saturations. Other applications of this artificial neural network technique include deterministic relationships of logs to core lithology, core porosity, pore type, and other wireline logs (e.g., predicting a sonic log from a density log).
Dynamical Field Line Connectivity in Magnetic Turbulence
NASA Astrophysics Data System (ADS)
Ruffolo, D. J.; Matthaeus, W. H.
2014-12-01
Point-to-point magnetic connectivity has a stochastic character whenever magnetic fluctuations cause a field line random walk, with observable manifestations such as dropouts of solar energetic particles and upstream events at Earth's bow shock. Connectivity can also change due to dynamical activity. Comparing the instantaneous magnetic connectivity to the same point at two different times, we provide a nonperturbative analytic theory for the ensemble-averaged perpendicular displacement of the magnetic field line, given the power spectrum of magnetic fluctuations. For simplicity, the theory is developed in the context of transverse turbulence, and is numerically evaluated for two specific models: reduced magnetohydrodynamics (RMHD), a quasi-two-dimensional model of anisotropic turbulence that is applicable to low-beta plasmas, and two-dimensional (2D) plus slab turbulence, which is a good parameterization for solar wind turbulence. We take into account the dynamical decorrelation of magnetic fluctuations due to wave propagation, nonlinear distortion, random sweeping, and convection by a bulk wind flow relative to the observer. The mean squared time-differenced displacement increases with time and with parallel distance, becoming twice the field line random walk displacement at long distances and/or times, corresponding to a pair of uncorrelated random walks. These results are relevant to a variety of astrophysical processes, such as electron transport and heating patterns in coronal loops and the solar transition region, changing magnetic connection to particle sources near the Sun or at a planetary bow shock, and thickening of coronal hole boundaries. Partially supported by the Thailand Research Fund, the US NSF (AGS-1063439 and SHINE AGS-1156094), NASA (Heliophysics Theory NNX11AJ44G), and by the Solar Probe Plus Project through the ISIS Theory team.
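The long-distance limit stated in the abstract — the time-differenced displacement approaching twice the single field line random walk, i.e. a pair of uncorrelated walks — can be checked numerically with a generic unbiased random walk. This is a toy check of the limiting statement only, not a simulation of the RMHD or 2D-plus-slab spectra.

```python
import numpy as np

# Many independent "field lines" traced at two fully decorrelated times:
# the displacement difference behaves like the difference of two
# independent random walks, so its MSD is twice the single-walk MSD.
rng = np.random.default_rng(4)
n_lines, n_steps = 5000, 200
steps1 = rng.normal(size=(n_lines, n_steps))
steps2 = rng.normal(size=(n_lines, n_steps))   # decorrelated second tracing
x1, x2 = steps1.sum(axis=1), steps2.sum(axis=1)

msd_single = (x1**2).mean()
msd_diff = ((x1 - x2)**2).mean()
print(f"single-walk MSD ~ {msd_single:.0f}, time-differenced MSD ~ {msd_diff:.0f}")
```

The ratio of the two mean squared displacements converges to 2, the asymptotic value quoted in the abstract.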
Dynamic nuclear polarization at high magnetic fields
Maly, Thorsten; Debelouchina, Galia T.; Bajaj, Vikram S.; Hu, Kan-Nian; Joo, Chan-Gyu; Mak–Jurkauskas, Melody L.; Sirigiri, Jagadishwar R.; van der Wel, Patrick C. A.; Herzfeld, Judith; Temkin, Richard J.; Griffin, Robert G.
2009-01-01
Dynamic nuclear polarization (DNP) is a method that permits NMR signal intensities of solids and liquids to be enhanced significantly, and is therefore potentially an important tool in structural and mechanistic studies of biologically relevant molecules. During a DNP experiment, the large polarization of an exogenous or endogenous unpaired electron is transferred to the nuclei of interest (I) by microwave (μw) irradiation of the sample. The maximum theoretical enhancement achievable is given by the ratio of the gyromagnetic ratios (γe/γI), being ∼660 for protons. In the early 1950s, the DNP phenomenon was demonstrated experimentally, and it was intensively investigated in the following four decades, primarily at low magnetic fields. This review focuses on recent developments in the field of DNP with a special emphasis on work done at high magnetic fields (≥5 T), the regime where contemporary NMR experiments are performed. After a brief historical survey, we present a review of the classical continuous-wave (cw) DNP mechanisms—the Overhauser effect, the solid effect, the cross effect, and thermal mixing. A special section is devoted to the theory of coherent polarization transfer mechanisms, since they are potentially more efficient at high fields than classical polarization schemes. The implementation of DNP at high magnetic fields has required the development and improvement of new and existing instrumentation. Therefore, we also review some recent developments in μw and probe technology, followed by an overview of DNP applications in biological solids and liquids. Finally, we outline some possible areas for future developments. PMID:18266416
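The "∼660 for protons" figure follows directly from the gyromagnetic ratios; a quick check using CODATA values (in MHz/T):

```python
# gamma_e / gamma_H from CODATA gyromagnetic ratios, both in MHz/T
gamma_e = 28024.951    # free electron
gamma_H = 42.577       # proton (1H)
enhancement = gamma_e / gamma_H
print(f"max DNP enhancement for protons ~ {enhancement:.0f}")
```

The quotient lands just under 660, consistent with the figure quoted in the abstract.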
Travelling waves in a neural field model with refractoriness.
Meijer, Hil G E; Coombes, Stephen
2014-04-01
At one level of abstraction neural tissue can be regarded as a medium for turning local synaptic activity into output signals that propagate over large distances via axons to generate further synaptic activity that can cause reverberant activity in networks that possess a mixture of excitatory and inhibitory connections. This output is often taken to be a firing rate, and the mathematical form for the evolution equation of activity depends upon a spatial convolution of this rate with a fixed anatomical connectivity pattern. Such formulations often neglect the metabolic processes that would ultimately limit synaptic activity. Here we reinstate such a process, in the spirit of an original prescription by Wilson and Cowan (Biophys J 12:1-24, 1972), using a term that multiplies the usual spatial convolution with a moving time average of local activity over some refractory time-scale. This modulation can substantially affect network behaviour, and in particular give rise to periodic travelling waves in a purely excitatory network (with exponentially decaying anatomical connectivity), which in the absence of refractoriness would only support travelling fronts. We construct these solutions numerically as stationary periodic solutions in a co-moving frame (of both an equivalent delay differential model as well as the original delay integro-differential model). Continuation methods are used to obtain the dispersion curve for periodic travelling waves (speed as a function of period), and found to be reminiscent of those for spatially extended models of excitable tissue. A kinematic analysis (based on the dispersion curve) predicts the onset of wave instabilities, which are confirmed numerically.
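The model structure described — a firing-rate convolution multiplied by a refractory factor built from a moving time average of local activity — can be sketched with a toy explicit-Euler discretisation. All parameters and functional forms below are our own illustrative choices, not those of Meijer and Coombes.

```python
import numpy as np

# 1D excitatory field with exponentially decaying connectivity; the spatial
# convolution is multiplied by (1 - a), where a is a slow moving average of
# local activity standing in for the refractory modulation.
nx, dt, tau_r = 200, 0.05, 5.0
x = np.linspace(-10.0, 10.0, nx)
dx = x[1] - x[0]
w = np.exp(-np.abs(x)) * dx                      # exponentially decaying kernel

def f(u):                                        # sigmoidal firing-rate function
    return 1.0 / (1.0 + np.exp(-10.0 * (u - 0.25)))

u = np.where(np.abs(x) < 1.0, 1.0, 0.0)          # localised initial bump
a = np.zeros(nx)                                 # moving average of activity
for _ in range(400):
    conv = np.convolve(f(u), w, mode="same")     # spatial convolution
    u = u + dt * (-u + (1.0 - a) * conv)         # (1 - a): refractory factor
    a = a + dt * (f(u) - a) / tau_r              # slow average over tau_r
print(f"max activity {u.max():.2f}, active fraction {(u > 0.5).mean():.2f}")
```

With the refractory factor switched off (a ≡ 0) this kind of field supports only travelling fronts; the modulation is what opens the door to the periodic waves the paper constructs.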
Biophysical Neural Spiking, Bursting, and Excitability Dynamics in Reconfigurable Analog VLSI
Yu, Theodore; Sejnowski, Terrence J.; Cauwenberghs, Gert
2011-01-01
We study a range of neural dynamics under variations in biophysical parameters underlying extended Morris–Lecar and Hodgkin–Huxley models in three gating variables. The extended models are implemented in NeuroDyn, a four neuron, twelve synapse continuous-time analog VLSI programmable neural emulation platform with generalized channel kinetics and biophysical membrane dynamics. The dynamics exhibit a wide range of time scales extending beyond 100 ms neglected in typical silicon models of tonic spiking neurons. Circuit simulations and measurements show transition from tonic spiking to tonic bursting dynamics through variation of a single conductance parameter governing calcium recovery. We similarly demonstrate transition from graded to all-or-none neural excitability in the onset of spiking dynamics through the variation of channel kinetic parameters governing the speed of potassium activation. Other combinations of variations in conductance and channel kinetic parameters give rise to phasic spiking and spike frequency adaptation dynamics. The NeuroDyn chip consumes 1.29 mW and occupies 3 mm × 3 mm in 0.5 μm CMOS, supporting emerging developments in neuromorphic silicon-neuron interfaces. PMID:22227949
Field-driven dynamics of nematic microcapillaries
NASA Astrophysics Data System (ADS)
Khayyatzadeh, Pouya; Fu, Fred; Abukhdeir, Nasser Mohieddin
2015-12-01
Polymer-dispersed liquid-crystal (PDLC) composites have long been a focus of study for their unique electro-optical properties, which have resulted in various applications such as switchable (transparent or translucent) windows. These composites are manufactured using desirable "bottom-up" techniques, such as phase separation of a liquid-crystal-polymer mixture, which enable production of PDLC films at very large scales. LC domains within PDLCs are typically spheroidal, as opposed to rectangular for an LCD panel, and thus exhibit substantially different behavior in the presence of an external field. The fundamental difference between spheroidal and rectangular nematic domains is that the former results in the presence of nanoscale orientational defects in LC order while the latter does not. Development and optimization of PDLC electro-optical properties have progressed at a relatively slow pace due to this increased complexity. In this work, continuum simulations are performed in order to capture the complex formation and electric field-driven switching dynamics of approximations of PDLC domains. Using a simplified elliptic cylinder (microcapillary) geometry as an approximation of spheroidal PDLC domains, the effects of geometry (aspect ratio), surface anchoring, and external field strength are studied through the use of the Landau-de Gennes model of the nematic LC phase.
Rigotti, Mattia; Rubin, Daniel Ben Dayan; Wang, Xiao-Jing; Fusi, Stefano
2010-01-01
Neural activity of behaving animals, especially in the prefrontal cortex, is highly heterogeneous, with selective responses to diverse aspects of the executed task. We propose a general model of recurrent neural networks that perform complex rule-based tasks, and we show that the diversity of neuronal responses plays a fundamental role when the behavioral responses are context-dependent. Specifically, we found that when the inner mental states encoding the task rules are represented by stable patterns of neural activity (attractors of the neural dynamics), the neurons must be selective for combinations of sensory stimuli and inner mental states. Such mixed selectivity is easily obtained by neurons that connect with random synaptic strengths both to the recurrent network and to neurons encoding sensory inputs. The number of randomly connected neurons needed to solve a task is on average only three times as large as the number of neurons needed in a network designed ad hoc. Moreover, the number of needed neurons grows only linearly with the number of task-relevant events and mental states, provided that each neuron responds to a large proportion of events (dense/distributed coding). A biologically realistic implementation of the model captures several aspects of the activity recorded from monkeys performing context-dependent tasks. Our findings explain the importance of the diversity of neural responses and provide us with simple and general principles for designing attractor neural networks that perform complex computation. PMID:21048899
Schmidt, Helmut; Petkov, George; Richardson, Mark P.; Terry, John R.
2014-01-01
Graph theory has evolved into a useful tool for studying complex brain networks inferred from a variety of measures of neural activity, including fMRI, DTI, MEG and EEG. In the study of neurological disorders, recent work has discovered differences in the structure of graphs inferred from patient and control cohorts. However, most of these studies pursue a purely observational approach; identifying correlations between properties of graphs and the cohort which they describe, without consideration of the underlying mechanisms. To move beyond this necessitates the development of computational modeling approaches to appropriately interpret network interactions and the alterations in brain dynamics they permit, which in the field of complexity sciences is known as dynamics on networks. In this study we describe the development and application of this framework using modular networks of Kuramoto oscillators. We use this framework to understand functional networks inferred from resting state EEG recordings of a cohort of 35 adults with heterogeneous idiopathic generalized epilepsies and 40 healthy adult controls. Taking emergent synchrony across the global network as a proxy for seizures, our study finds that the critical strength of coupling required to synchronize the global network is significantly decreased for the epilepsy cohort for functional networks inferred from both theta (3–6 Hz) and low-alpha (6–9 Hz) bands. We further identify left frontal regions as a potential driver of seizure activity within these networks. We also explore the ability of our method to identify individuals with epilepsy, observing up to 80% predictive power through the use of receiver operating characteristic analysis. Collectively these findings demonstrate that a computer model based analysis of routine clinical EEG provides significant additional information beyond standard clinical interpretation, which should ultimately enable a more appropriate mechanistic stratification of people…
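The synchrony proxy at the heart of this study can be sketched with a minimal Kuramoto simulation: the order parameter r grows as the coupling K increases, and the K needed to synchronize the network is the quantity compared across cohorts. The all-to-all network and all parameters below are illustrative stand-ins, not the modular EEG-inferred networks of the paper.

```python
import numpy as np

# All-to-all Kuramoto network integrated with explicit Euler; r near 1 is the
# "emergent synchrony" taken as a seizure proxy in the study.
rng = np.random.default_rng(1)
N, dt, steps = 50, 0.05, 2000
omega = rng.normal(0.0, 1.0, N)                  # heterogeneous frequencies

def order_parameter(K):
    theta = rng.uniform(0.0, 2.0 * np.pi, N)
    for _ in range(steps):
        z = np.exp(1j * theta).mean()            # mean-field coupling term
        theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
    return float(np.abs(np.exp(1j * theta).mean()))

r_low, r_high = order_parameter(0.2), order_parameter(4.0)
print(f"r(K=0.2) = {r_low:.2f}, r(K=4.0) = {r_high:.2f}")
```

A cohort whose networks synchronize at lower K (as the epilepsy cohort does in theta and low-alpha bands) would show the r_high behaviour at couplings where controls still show r_low.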
Amozegar, M; Khorasani, K
2016-04-01
In this paper, a new approach for Fault Detection and Isolation (FDI) of gas turbine engines is proposed by developing an ensemble of dynamic neural network identifiers. For health monitoring of the gas turbine engine, its dynamics is first identified by constructing three separate or individual dynamic neural network architectures. Specifically, a dynamic multi-layer perceptron (MLP), a dynamic radial-basis function (RBF) neural network, and a dynamic support vector machine (SVM) are trained to individually identify and represent the gas turbine engine dynamics. Next, three ensemble-based techniques are developed to represent the gas turbine engine dynamics, namely, two heterogeneous ensemble models and one homogeneous ensemble model. It is first shown that all ensemble approaches do significantly improve the overall performance and accuracy of the developed system identification scheme when compared to each of the stand-alone solutions. The best selected stand-alone model (i.e., the dynamic RBF network) and the best selected ensemble architecture (i.e., the heterogeneous ensemble) in terms of their performances in achieving an accurate system identification are then selected for solving the FDI task. The required residual signals are generated by using both a single model-based solution and an ensemble-based solution under various gas turbine engine health conditions. Our extensive simulation studies demonstrate that the fault detection and isolation task achieved by using the residuals that are obtained from the dynamic ensemble scheme results in a significantly more accurate and reliable performance as illustrated through detailed quantitative confusion matrix analysis and comparative studies.
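The residual idea underlying the FDI scheme can be sketched with synthetic signals: identifiers predict the healthy output, the residual is measurement minus prediction, and averaging an ensemble of identifiers tightens the residual before thresholding. The signals, stand-in identifiers, and threshold below are all our own constructions, not the paper's trained MLP/RBF/SVM models.

```python
import numpy as np

# Synthetic residual-based fault detection with a homogeneous ensemble.
rng = np.random.default_rng(5)
t = np.arange(500)
truth = np.sin(0.05 * t)                          # healthy engine output
measured = truth + 0.05 * rng.normal(size=t.size)
measured[300:] += 0.5                             # fault injected at sample 300

# three imperfect stand-in identifiers of the healthy dynamics
preds = [truth + 0.1 * rng.normal(size=t.size) for _ in range(3)]
ens_pred = np.mean(preds, axis=0)                 # ensemble average

resid = measured - ens_pred                       # residual signal
fault_flag = np.abs(resid) > 0.25                 # illustrative fixed threshold
print(f"alarm rate before fault: {fault_flag[:300].mean():.3f}, "
      f"after: {fault_flag[300:].mean():.3f}")
```

Averaging the three identifiers reduces the residual variance relative to any single one, which is the intuition behind the ensemble's improved detection accuracy reported in the abstract.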
Dynamic deep temperature recovery by acoustic thermography using neural networks
NASA Astrophysics Data System (ADS)
Anosov, A. A.; Belyaev, R. V.; Vilkov, V. A.; Kazanskii, A. S.; Mansfel'd, A. D.; Subochev, P. V.
2013-11-01
In an experiment, the deep temperature, which changed with time, was recovered for a model object, bovine liver. The liver was heated for 6 min by laser radiation (810 nm), transmitted via a light guide to a depth of 1 cm. During heating and subsequent cooling, the deep temperature was measured by acoustic thermography. For independent control, we used three electronic telemeters, the indications of which were also subsequently recovered. Deep temperature was recovered using a neural network with a time delay. During the last 2 min of heating, the mean square error of recovery for an averaging time of 50 s did not exceed 0.5°C. Such a result makes it possible to use this method for solving a number of medical problems.
Imaging second messenger dynamics in developing neural circuits
Dunn, Timothy A.; Feller, Marla B.
2010-01-01
A characteristic feature of developing neural circuits is that they are spontaneously active. There are several examples, including the retina, spinal cord and hippocampus, where spontaneous activity is highly correlated amongst neighboring cells, with large depolarizing events occurring with a periodicity on the order of minutes. One likely mechanism by which neurons can “decode” these slow oscillations is through activation of second messengers cascades that either influence transcriptional activity or drive posttranslational modifications. Here we describe recent experiments where imaging has been used to characterize slow oscillations in the cAMP/PKA second messenger cascade in retinal neurons. We review the latest techniques in imaging this specific second messenger cascade, its intimate relationship with changes in intracellular calcium concentration, and several hypotheses regarding its role in neurodevelopment. PMID:18383551
Dynamic Neural Processing of Linguistic Cues Related to Death
Ma, Yina; Qin, Jungang; Han, Shihui
2013-01-01
Behavioral studies suggest that humans evolve the capacity to cope with anxiety induced by the awareness of death’s inevitability. However, the neurocognitive processes that underlie online death-related thoughts remain unclear. Our recent functional MRI study found that the processing of linguistic cues related to death was characterized by decreased neural activity in human insular cortex. The current study further investigated the time course of neural processing of death-related linguistic cues. We recorded event-related potentials (ERP) to death-related, life-related, negative-valence, and neutral-valence words in a modified Stroop task that required color naming of words. We found that the amplitude of an early frontal/central negativity at 84–120 ms (N1) decreased to death-related words but increased to life-related words relative to neutral-valence words. The N1 effect associated with death-related and life-related words was correlated respectively with individuals’ pessimistic and optimistic attitudes toward life. Death-related words also increased the amplitude of a frontal/central positivity at 124–300 ms (P2) and of a frontal/central positivity at 300–500 ms (P3). However, the P2 and P3 modulations were observed for both death-related and negative-valence words but not for life-related words. The ERP results suggest an early inverse coding of linguistic cues related to life and death, which is followed by negative emotional responses to death-related information. PMID:23840787
Multiple-scale dynamics in neural systems: learning, synchronization and network oscillations
NASA Astrophysics Data System (ADS)
Zhigulin, Valentin P.
Many dynamical processes that take place in neural systems involve interactions between multiple temporal and/or spatial scales which lead to the emergence of new dynamical phenomena. Two of them are studied in this thesis: learning-induced robustness and enhancement of synchronization in small neural circuits; and emergence of global spatio-temporal dynamics from local interactions in neural networks. Chapter 2 presents the study of synchronization of two model neurons coupled through a synapse with spike-timing dependent plasticity (STDP). It shows that this form of learning leads to the enlargement of frequency locking zones and makes synchronization much more robust to noise than classical synchronization mediated by non-plastic synapses. A simple discrete-time map model is presented that enables deep understanding of this phenomenon and demonstrates its generality. Chapter 3 extends these results by demonstrating enhancement of synchronization in a hybrid circuit with a living postsynaptic neuron. The robustness of STDP-mediated synchronization is further confirmed with simulations of stochastic plasticity. Chapter 4 studies the entrainment of a heterogeneous network of electrically coupled neurons by periodic stimulation. It demonstrates that, when compared to the case of non-plastic input synapses, inputs with STDP enhance coherence of network oscillations and improve robustness of synchronization to the variability of network properties. The observed mechanism may play a role in synchronization of hippocampal neural ensembles. Chapter 5 proposes a new type of artificial synaptic connection that combines the fast reaction of an electrical synapse with the plasticity of a chemical synapse. It shows that such a synapse mediates regularization of chaos in a circuit of two chaotic bursting neurons and leads to structural stability of the regularized state. Such a plastic electrical synapse may be used in the development of robust neural prosthetics. Chapter 6 suggests a new…
Machine Learning for Dynamical Mean Field Theory
NASA Astrophysics Data System (ADS)
Arsenault, Louis-Francois; Lopez-Bezanilla, Alejandro; von Lilienfeld, O. Anatole; Littlewood, P. B.; Millis, Andy
2014-03-01
Machine Learning (ML), an approach that infers new results from accumulated knowledge, is in use for a variety of tasks ranging from face and voice recognition to internet searching and has recently been gaining increasing importance in chemistry and physics. In this talk, we investigate the possibility of using ML to solve the equations of dynamical mean field theory which otherwise requires the (numerically very expensive) solution of a quantum impurity model. Our ML scheme requires the relation between two functions: the hybridization function describing the bare (local) electronic structure of a material and the self-energy describing the many body physics. We discuss the parameterization of the two functions for the exact diagonalization solver and present examples, beginning with the Anderson Impurity model with a fixed bath density of states, demonstrating the advantages and the pitfalls of the method. DOE contract DE-AC02-06CH11357.
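The learning problem sketched in the talk — regressing the self-energy on the hybridization function so the expensive impurity solver can be bypassed — can be illustrated with a stand-in mapping. Both functions, the parameterisation, and the choice of kernel ridge regression as the learner are our assumptions; the actual work uses an exact diagonalization solver and its own parameterisations.

```python
import numpy as np

# Stand-in problem: 3 "hybridization" parameters -> scalar "self-energy"
# summary, with kernel ridge regression replacing the solver.
rng = np.random.default_rng(2)

def solver(X):                        # stand-in for the expensive impurity solver
    return np.sin(X).sum(axis=1)

def kernel(A, B, sigma=1.0):          # Gaussian (RBF) kernel
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2))

X_train = rng.uniform(-2.0, 2.0, size=(200, 3))
y_train = solver(X_train)             # "accumulated knowledge" from the solver
alpha = np.linalg.solve(kernel(X_train, X_train) + 1e-6 * np.eye(200), y_train)

X_test = rng.uniform(-2.0, 2.0, size=(20, 3))
err = np.abs(kernel(X_test, X_train) @ alpha - solver(X_test)).max()
print(f"max test error {err:.3f}")
```

Once the regression weights are fitted, each new "solver call" is a cheap kernel evaluation, which is the speedup the ML-for-DMFT approach is after.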
Subduction dynamics: Constraints from gravity field observations
NASA Technical Reports Server (NTRS)
Mcadoo, D. C.
1985-01-01
Satellite systems do the best job of resolving the long-wavelength components of the Earth's gravity field. Over the oceans, satellite-borne radar altimeters such as SEASAT provide the best-resolution observations of the intermediate-wavelength components. Satellite observations of gravity have contributed to the understanding of the dynamics of subduction. Large, long-wavelength geoidal highs generally occur over subduction zones. These highs are attributed to the superposition of two effects of subduction: (1) the positive mass anomalies of the subducting slabs themselves; and (2) the surface deformations, such as the trenches, convectively induced by these slabs as they sink into the mantle. Models of this subduction process suggest that the mantle behaves as a non-Newtonian fluid, that its effective viscosity increases significantly with depth, and that large positive mass anomalies may occur beneath the seismically defined Benioff zones.
NASA Astrophysics Data System (ADS)
Deng, Xinyi; Eskandar, Emad N.; Eden, Uri T.
2013-12-01
Understanding the role of rhythmic dynamics in normal and diseased brain function is an important area of research in neural electrophysiology. Identifying and tracking changes in rhythms associated with spike trains present an additional challenge, because standard approaches for continuous-valued neural recordings—such as local field potential, magnetoencephalography, and electroencephalography data—require assumptions that do not typically hold for point process data. Additionally, subtle changes in the history dependent structure of a spike train have been shown to lead to robust changes in rhythmic firing patterns. Here, we propose a point process modeling framework to characterize the rhythmic spiking dynamics in spike trains, test for statistically significant changes to those dynamics, and track the temporal evolution of such changes. We first construct a two-state point process model incorporating spiking history and develop a likelihood ratio test to detect changes in the firing structure. We then apply adaptive state-space filters and smoothers to track these changes through time. We illustrate our approach with a simulation study as well as with experimental data recorded in the subthalamic nucleus of Parkinson's patients performing an arm movement task. Our analyses show that during the arm movement task, neurons underwent a complex pattern of modulation of spiking intensity characterized initially by a release of inhibitory control at 20-40 ms after a spike, followed by a decrease in excitatory influence at 40-60 ms after a spike.
Field-induced superdiffusion and dynamical heterogeneity.
Gradenigo, Giacomo; Bertin, Eric; Biroli, Giulio
2016-06-01
By analyzing two kinetically constrained models of supercooled liquids we show that the anomalous transport of a driven tracer observed in supercooled liquids is another facet of the phenomenon of dynamical heterogeneity. We focus on the Fredrickson-Andersen and the Bertin-Bouchaud-Lequeux models. By numerical simulations and analytical arguments we demonstrate that the violation of the Stokes-Einstein relation and the field-induced superdiffusion observed during a long preasymptotic regime have the same physical origin: while a fraction of probes do not move, others jump repeatedly because they are close to local mobile regions. The anomalous fluctuations observed out of equilibrium in the presence of a pulling force ε, σ_x²(t) = ⟨x_ε²(t)⟩ − ⟨x_ε(t)⟩² ∼ t^{3/2}, which are accompanied by the asymptotic decay α_ε(t) ∼ t^{−1/2} of the non-Gaussian parameter from nontrivial values to zero, are due to the splitting of the probe population into two (mobile and immobile) groups and to dynamical correlations, a mechanism expected to occur generically in supercooled liquids.
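The population-splitting mechanism can be illustrated with a toy mixture (our construction, not the Fredrickson-Andersen or Bertin-Bouchaud-Lequeux models themselves): a mobile fraction drifts under the pull while the rest stay put, which inflates displacement fluctuations and produces a nonzero non-Gaussian parameter.

```python
import numpy as np

# Toy two-population snapshot at time t: 30% of probes (near mobile regions)
# drift and diffuse, the rest are immobile. The non-Gaussian parameter
# alpha = <x^4> / (3 <x^2>^2) - 1 vanishes for a Gaussian, but not here.
rng = np.random.default_rng(3)
n, t, drift = 20000, 100.0, 0.5
mobile = rng.random(n) < 0.3
x = np.where(mobile, drift * t + rng.normal(0.0, np.sqrt(t), n), 0.0)

var = x.var()
alpha = (x**4).mean() / (3.0 * (x**2).mean() ** 2) - 1.0
print(f"variance = {var:.0f}, non-Gaussian parameter = {alpha:.2f}")
```

In the real models the two groups exchange members over time, which is what turns this static bimodality into the transient t^{3/2} growth of σ_x²(t) and the t^{−1/2} decay of α_ε(t).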
Study of the neural dynamics for understanding communication in terms of complex hetero systems.
Tsuda, Ichiro; Yamaguchi, Yoko; Hashimoto, Takashi; Okuda, Jiro; Kawasaki, Masahiro; Nagasaka, Yasuo
2015-01-01
The purpose of the research project was to establish a new research area named "neural information science for communication" by elucidating its neural mechanism. The research was performed in collaboration with applied mathematicians in complex-systems science and experimental researchers in neuroscience. The project included measurements of brain activity during communication with or without languages and analyses performed with the help of extended theories for dynamical systems and stochastic systems. The communication paradigm was extended to the interactions between human and human, human and animal, human and robot, human and materials, and even animal and animal.
Some new stability properties of dynamic neural networks with different time-scales.
Yu, Wen; Sandoval, Alejandro Cruz
2006-06-01
Dynamic neural networks with different time-scales capture both fast and slow phenomena. Some applications require the equilibrium points of these networks to be stable. The main contribution of this paper is that the Lyapunov function and singularly perturbed technique are combined to establish several new stability properties of different time-scales neural networks. Exponential stability and asymptotic stability are obtained by sector and bound conditions. These conditions are simpler than those in previous work. Numerical examples are given to demonstrate the effectiveness of the theoretical results.
Graded information extraction by neural-network dynamics with multihysteretic neurons.
Tsuboshita, Yukihiro; Okamoto, Hiroshi
2009-09-01
A major goal in the study of neural networks is to create novel information-processing algorithms inferred from the real brain. Recent neurophysiological evidence of graded persistent activity suggests that the brain possesses neural mechanisms for retrieval of graded information, which could be described by neural-network dynamics with attractors that are continuously dependent on the initial state. Theoretical studies have also demonstrated that model neurons with a multihysteretic response property can generate robust continuous attractors. Inspired by these lines of evidence, we proposed an algorithm given by multihysteretic neuron-network dynamics, devised to retrieve graded information specific to a given topic (i.e., context, represented by the initial state). To demonstrate the validity of the proposed algorithm, we examined keyword extraction from documents, which is well suited to evaluating the appropriateness of graded-information retrieval. The performance of keyword extraction using our algorithm was significantly high (measured by the average precision of document retrieval, for which the appropriateness of keyword extraction is crucial) compared with standard document-retrieval methods. Moreover, our algorithm exhibited much higher performance than neural-network dynamics with bistable neurons, which can also produce robust continuous attractors but only represent dichotomous information at the single-cell level. These findings indicate that the capability to manage graded information at the single-cell level was essential for obtaining a high-performing algorithm.
Petrović, Jelena; Ibrić, Svetlana; Betz, Gabriele; Đurić, Zorica
2012-05-30
The main objective of the study was to develop artificial intelligence methods for optimization of drug release from matrix tablets regardless of the matrix type. Static and dynamic artificial neural networks of the same topology were developed to model dissolution profiles of different matrix tablet types (hydrophilic/lipid) using formulation composition, compression force used for tableting, and tablet porosity and tensile strength as input data. The potential application of decision trees in discovering knowledge from experimental data was also investigated. Polyethylene oxide polymer and glyceryl palmitostearate were used as matrix-forming materials for hydrophilic and lipid matrix tablets, respectively, whereas the selected model drugs were diclofenac sodium and caffeine. Matrix tablets were prepared by the direct compression method and tested for in vitro dissolution profiles. Optimization of the static and dynamic neural networks used for modeling of drug release was performed using Monte Carlo simulations or a genetic algorithm optimizer. Decision trees were constructed following discretization of the data. Calculated difference (f(1)) and similarity (f(2)) factors for predicted and experimentally obtained dissolution profiles of test matrix tablet formulations indicate that Elman dynamic neural networks, as well as decision trees, are capable of accurate predictions of the dissolution profiles of both hydrophilic and lipid matrix tablets. The Elman neural networks were compared to the most frequently used static network, the multi-layered perceptron, and the superiority of the Elman networks was demonstrated. The developed methods allow a simple yet very precise way of predicting drug release for both hydrophilic and lipid matrix tablets with controlled drug release. Copyright © 2012 Elsevier B.V. All rights reserved.
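The difference (f(1)) and similarity (f(2)) factors used above are the standard dissolution-profile comparison metrics; a minimal sketch of both follows (the profiles are made-up numbers, not data from the study):

```python
# Standard dissolution-profile comparison metrics:
#   f1 = 100 * sum|R - T| / sum R          (difference factor)
#   f2 = 50 * log10(100 / sqrt(1 + mean squared difference))  (similarity factor)
import numpy as np

def f1(ref, test):
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    return 100.0 * np.sum(np.abs(ref - test)) / np.sum(ref)

def f2(ref, test):
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    msd = np.mean((ref - test) ** 2)
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

ref  = [20, 40, 60, 80, 95]   # percent dissolved at each time point (illustrative)
test = [19, 42, 58, 83, 94]
print(round(f1(ref, test), 2), round(f2(ref, test), 2))
```

By the usual convention, f2 above 50 indicates similar profiles, so these two made-up curves would count as similar.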
Latching dynamics in neural networks with synaptic depression.
Aguilar, Carlos; Chossat, Pascal; Krupa, Martin; Lavigne, Frédéric
2017-01-01
Prediction is the ability of the brain to quickly activate a target concept in response to a related stimulus (prime). Experiments point to the existence of an overlap between the populations of the neurons coding for different stimuli, and other experiments show that prime-target relations arise in the process of long term memory formation. The classical modelling paradigm is that long term memories correspond to stable steady states of a Hopfield network with Hebbian connectivity. Experiments show that short term synaptic depression plays an important role in the processing of memories. This leads naturally to a computational model of priming, called latching dynamics; a stable state (prime) can become unstable and the system may converge to another transiently stable steady state (target). Hopfield network models of latching dynamics have been studied by means of numerical simulation, however the conditions for the existence of this dynamics have not been elucidated. In this work we use a combination of analytic and numerical approaches to confirm that latching dynamics can exist in the context of a symmetric Hebbian learning rule, however lacks robustness and imposes a number of biologically unrealistic restrictions on the model. In particular our work shows that the symmetry of the Hebbian rule is not an obstruction to the existence of latching dynamics, however fine tuning of the parameters of the model is needed.
2013-01-01
The spread of activity in neural populations is a well-known phenomenon. To understand the propagation speed and the stability of stationary fronts in neural populations, the present work considers a neural field model that involves intracortical and cortico-cortical synaptic interactions. This includes distributions of axonal transmission speeds and nonlocal feedback delays as well as general classes of synaptic interactions. The work proves the spectral stability of standing and traveling fronts subject to general transmission speeds for large classes of spatial interactions and derives conditions for the front instabilities subject to nonlocal feedback delays. Moreover, it turns out that the uniqueness of the stationary traveling fronts guarantees their exponential stability for vanishing feedback delay. Numerical simulations complement the analytical findings. PMID:23899051
Degradation Prediction Model Based on a Neural Network with Dynamic Windows
Zhang, Xinghui; Xiao, Lei; Kang, Jianshe
2015-01-01
Tracking the degradation of mechanical components is critical for effective maintenance decision making. Remaining useful life (RUL) estimation is a widely used form of degradation prediction. RUL prediction methods for cases where sufficient run-to-failure condition monitoring data are available have been thoroughly researched, but for some high-reliability components it is very difficult to collect run-to-failure condition monitoring data, i.e., from normal operation to failure. Only a certain number of condition indicators over a certain period can be used to estimate RUL. In addition, some existing prediction methods suffer from poor extrapolability, which blocks RUL estimation: the predicted value converges to a constant or fluctuates within a certain range. Moreover, fluctuating condition features also degrade prediction. To resolve these difficulties, this paper proposes a RUL prediction model based on a neural network with dynamic windows. The model consists of three steps: window size determination by the increasing rate, change point detection, and rolling prediction. The proposed method has two dominant strengths. First, it does not need to assume that the degradation trajectory follows a certain distribution. Second, it can adapt to variation in the degradation indicators, which greatly benefits RUL prediction. Finally, the performance of the proposed RUL prediction model is validated with real field data and simulation data. PMID:25806873
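A hedged sketch of the three-step scheme (window sizing, change-point detection, rolling prediction), with a plain linear extrapolator standing in for the paper's neural network; the window size, the change-point rule, and the failure threshold are all illustrative assumptions:

```python
# Toy rolling RUL prediction with a restartable window: the window is
# reset after a detected change point so the local trend, not the whole
# history, drives the extrapolation to the failure level.
import numpy as np

def rolling_rul(signal, failure_level, win=10, cp_z=3.0):
    """Predict remaining steps until `signal` crosses `failure_level`."""
    start = 0
    preds = []
    for t in range(win, len(signal)):
        seg = signal[max(start, t - win):t]
        # crude change-point check: next point far outside the window statistics
        if len(seg) > 3 and abs(signal[t] - seg.mean()) > cp_z * (seg.std() + 1e-9):
            start = t                      # restart the window at the change point
            preds.append(None)
            continue
        slope, intercept = np.polyfit(np.arange(len(seg)), seg, 1)
        if slope <= 0:
            preds.append(None)             # no degradation trend yet
        else:
            level_now = slope * (len(seg) - 1) + intercept
            preds.append((failure_level - level_now) / slope)
    return preds

t = np.arange(100)
signal = 0.05 * t + 0.01 * np.sin(t)       # slow degradation with a small wiggle
preds = rolling_rul(signal, failure_level=5.0)
print(preds[-1])                            # should be close to the true ~2 steps left
```

Near the end of the record the predicted RUL approaches the true remaining life, since the signal crosses the failure level of 5.0 at roughly t = 100.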
Neural population dynamics in human motor cortex during movements in people with ALS.
Pandarinath, Chethan; Gilja, Vikash; Blabe, Christine H; Nuyujukian, Paul; Sarma, Anish A; Sorice, Brittany L; Eskandar, Emad N; Hochberg, Leigh R; Henderson, Jaimie M; Shenoy, Krishna V
2015-06-23
The prevailing view of motor cortex holds that motor cortical neural activity represents muscle or movement parameters. However, recent studies in non-human primates have shown that neural activity does not simply represent muscle or movement parameters; instead, its temporal structure is well-described by a dynamical system where activity during movement evolves lawfully from an initial pre-movement state. In this study, we analyze neuronal ensemble activity in motor cortex in two clinical trial participants diagnosed with Amyotrophic Lateral Sclerosis (ALS). We find that activity in human motor cortex has similar dynamical structure to that of non-human primates, indicating that human motor cortex contains a similar underlying dynamical system for movement generation.
A neural network model of the structure and dynamics of human personality.
Read, Stephen J; Monroe, Brian M; Brownstein, Aaron L; Yang, Yu; Chopra, Gurveen; Miller, Lynn C
2010-01-01
We present a neural network model that aims to bridge the historical gap between dynamic and structural approaches to personality. The model integrates work on the structure of the trait lexicon, the neurobiology of personality, temperament, goal-based models of personality, and an evolutionary analysis of motives. It is organized in terms of two overarching motivational systems, an approach and an avoidance system, as well as a general disinhibition and constraint system. Each overarching motivational system influences more specific motives. Traits are modeled in terms of differences in the sensitivities of the motivational systems, the baseline activation of specific motives, and inhibitory strength. The result is a motive-based neural network model of personality based on research about the structure and neurobiology of human personality. The model provides an account of personality dynamics and person-situation interactions and suggests how dynamic processing approaches and dispositional, structural approaches can be integrated in a common framework.
Dynamical complexity in the C.elegans neural network
NASA Astrophysics Data System (ADS)
Antonopoulos, C. G.; Fokas, A. S.; Bountis, T. C.
2016-09-01
We model the neuronal circuit of the C.elegans soil worm in terms of a Hindmarsh-Rose system of ordinary differential equations, dividing its circuit into six communities which are determined via the Walktrap and Louvain methods. Using the numerical solution of these equations, we analyze important measures of dynamical complexity, namely synchronicity, the largest Lyapunov exponent, and the ΦAR auto-regressive integrated information theory measure. We show that ΦAR provides a useful measure of the information contained in the C.elegans brain dynamic network. Our analysis reveals that the C.elegans brain dynamic network generates more information than the sum of its constituent parts, and that it attains higher levels of integrated information for couplings at which either all its communities are highly synchronized, or there is a mixed state of highly synchronized and desynchronized communities.
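For concreteness, a single Hindmarsh-Rose neuron (the paper couples many of these across the C.elegans connectome) can be integrated with a simple Euler scheme; the parameters below are the commonly used bursting values, not necessarily those of the paper:

```python
# Single Hindmarsh-Rose neuron, Euler-integrated; standard parameter set
# with external current I = 3.25, which produces bursting spike activity.
import numpy as np

def hindmarsh_rose(T=2000.0, dt=0.01, I=3.25):
    a, b, c, d = 1.0, 3.0, 1.0, 5.0
    r, s, x_rest = 0.006, 4.0, -1.6
    x, y, z = -1.0, 0.0, 0.0
    xs = []
    for _ in range(int(T / dt)):
        dx = y - a * x**3 + b * x**2 - z + I   # membrane potential
        dy = c - d * x**2 - y                  # fast recovery variable
        dz = r * (s * (x - x_rest) - z)        # slow adaptation variable
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xs.append(x)
    return np.array(xs)

v = hindmarsh_rose()
spikes = int(np.sum((v[1:] >= 1.0) & (v[:-1] < 1.0)))  # upward threshold crossings
print(spikes)
```

Coupling several such units (e.g. diffusively through the x variable along the connectome graph) is the natural next step toward the community-level synchronization measures the abstract describes.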
Dynamic Bayesian state-space model with a neural network for online river flow prediction
NASA Astrophysics Data System (ADS)
Ham, Jonghwa; Hong, Yoon-Seok
2013-04-01
The usefulness of artificial neural networks in complex hydrological modeling has been demonstrated by successful applications. Several different types of neural network have been used for hydrological modeling tasks, but the multi-layer perceptron (MLP) neural network (also known as the feed-forward neural network) has enjoyed a predominant position because of its simplicity and its ability to provide good approximations. In many hydrological applications of MLP neural networks, a gradient descent-based batch learning algorithm such as back-propagation, quasi-Newton, Levenberg-Marquardt, or conjugate gradient has been used to optimize the cost function (usually by minimizing the error function in the prediction) by updating the parameters and structure of a neural network defined using a set of input-output training examples. Hydrological systems are highly dynamic, with time-varying inputs and outputs, and are characterized by data that arrive sequentially. The gradient descent-based batch learning approaches implemented in MLP neural networks have significant disadvantages for online dynamic hydrological modeling because they cannot update the model structure and parameters when a new set of hydrological measurement data becomes available. In addition, a large amount of training data is always required off-line, with a long model training time. In this work, a dynamic nonlinear Bayesian state-space model with a multi-layer perceptron (MLP) neural network via a sequential Monte Carlo (SMC) learning algorithm is proposed for online dynamic hydrological modeling. This proposed method of modeling is herein known as MLP-SMC. The sequential Monte Carlo learning algorithm in the MLP-SMC is designed to evolve and adapt the weights of the MLP neural network sequentially in time on the arrival of each new item of hydrological data. The weights of the MLP neural network are treated as the unknown dynamic state variable in the dynamic Bayesian state-space model.
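The MLP-SMC idea, treating the network weights as a latent state tracked by a particle cloud as data arrive one item at a time, can be sketched as follows; the network size, noise levels, particle count, and the streamed toy signal are illustrative assumptions, not the authors' settings:

```python
# Hedged sketch of sequential Monte Carlo learning of MLP weights:
# drift the particle cloud, weight each particle by the likelihood of
# the newest observation, and resample.
import numpy as np

rng = np.random.default_rng(1)

def mlp(w, x):
    """Tiny 1-4-1 network; w packs [W1 (4), b1 (4), W2 (4), b2 (1)]."""
    h = np.tanh(x * w[:4] + w[4:8])
    return float(h @ w[8:12] + w[12])

def grid_error(particles, xs):
    pred = np.array([np.mean([mlp(w, x) for w in particles]) for x in xs])
    return float(np.mean((pred - np.sin(xs)) ** 2))

n_particles = 500
particles = rng.normal(0.0, 1.0, (n_particles, 13))
sigma_obs, sigma_drift = 0.1, 0.02
xs_eval = np.linspace(-2, 2, 21)
err_before = grid_error(particles, xs_eval)

for _ in range(300):                      # data arrive one sample at a time
    x = rng.uniform(-2, 2)
    y = np.sin(x)                         # streamed toy "measurement"
    # state transition: small random-walk drift of the weights
    particles = particles + rng.normal(0.0, sigma_drift, particles.shape)
    pred = np.array([mlp(w, x) for w in particles])
    logw = -0.5 * ((y - pred) / sigma_obs) ** 2   # Gaussian likelihood
    wts = np.exp(logw - logw.max())
    wts /= wts.sum()
    particles = particles[rng.choice(n_particles, n_particles, p=wts)]

err_after = grid_error(particles, xs_eval)
print(err_after < err_before)
```

The posterior-mean prediction improves as the stream is consumed, with no offline training pass, which is the point of the online formulation.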
Gradient Learning in Spiking Neural Networks by Dynamic Perturbation of Conductances
NASA Astrophysics Data System (ADS)
Fiete, Ila R.; Seung, H. Sebastian
2006-07-01
We present a method of estimating the gradient of an objective function with respect to the synaptic weights of a spiking neural network. The method works by measuring the fluctuations in the objective function in response to dynamic perturbation of the membrane conductances of the neurons. It is compatible with recurrent networks of conductance-based model neurons with dynamic synapses. The method can be interpreted as a biologically plausible synaptic learning rule, if the dynamic perturbations are generated by a special class of “empiric” synapses driven by random spike trains from an external source.
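The core estimator, correlating a random perturbation with the induced change in the objective, can be illustrated on a toy quadratic objective rather than a spiking network; the step sizes and the ±1 perturbations are illustrative choices:

```python
# Gradient estimation by dynamic perturbation (SPSA-style sketch):
# perturb all parameters at once, measure the change in the objective,
# and multiply it back onto the perturbation to get an unbiased
# gradient estimate.
import numpy as np

rng = np.random.default_rng(0)

def objective(w):
    # stand-in for network performance as a function of the parameters
    return np.sum((w - 1.0) ** 2)

w = np.zeros(5)
eps, lr = 1e-3, 0.05
for _ in range(2000):
    xi = rng.choice([-1.0, 1.0], size=w.shape)      # random perturbation
    delta = objective(w + eps * xi) - objective(w)  # induced fluctuation
    grad_est = (delta / eps) * xi                   # E[grad_est] = true gradient
    w -= lr * grad_est                              # stochastic gradient step
print(np.round(w, 2))
```

Because delta ≈ eps·(grad·xi) and the xi components are independent with unit variance, (delta/eps)·xi has the true gradient as its expectation, so averaging over steps descends the objective, here driving every parameter toward 1.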
Dynamic neural network-based robust observers for uncertain nonlinear systems.
Dinh, H T; Kamalapurkar, R; Bhasin, S; Dixon, W E
2014-12-01
A dynamic neural network (DNN) based robust observer for uncertain nonlinear systems is developed. The observer structure consists of a DNN to estimate the system dynamics on-line, a dynamic filter to estimate the unmeasurable state and a sliding mode feedback term to account for modeling errors and exogenous disturbances. The observed states are proven to asymptotically converge to the system states of high-order uncertain nonlinear systems through Lyapunov-based analysis. Simulations and experiments on a two-link robot manipulator are performed to show the effectiveness of the proposed method in comparison to several other state estimation methods.
Magnetic field perturbation of neural recording and stimulating microelectrodes
NASA Astrophysics Data System (ADS)
Martinez-Santiesteban, Francisco M.; Swanson, Scott D.; Noll, Douglas C.; Anderson, David J.
2007-04-01
To improve the overall temporal and spatial resolution of brain mapping techniques, in animal models, some attempts have been reported to join electrophysiological methods with functional magnetic resonance imaging (fMRI). However, little attention has been paid to the image artefacts produced by the microelectrodes that compromise the anatomical or functional information of those studies. This work presents a group of simulations and MR images that show the limitations of wire microelectrodes and the potential advantages of silicon technology, in terms of image quality, in MRI environments. Magnetic field perturbations are calculated using a Fourier-based method for platinum (Pt) and tungsten (W) microwires as well as two different silicon technologies. We conclude that image artefacts produced by microelectrodes are highly dependent not only on the magnetic susceptibility of the materials used but also on the size, shape and orientation of the electrodes with respect to the main magnetic field. In addition, silicon microelectrodes present better MRI characteristics than metallic microelectrodes. However, metallization layers added to silicon materials can adversely affect the quality of MR images. Therefore, only those silicon microelectrodes that minimize the amount of metallic material can be considered MR-compatible and therefore suitable for possible simultaneous fMRI and electrophysiological studies. High resolution gradient echo images acquired at 2 T (TR/TE = 100/15 ms, voxel size = 100 × 100 × 100 µm³) of platinum-iridium (Pt-Ir, 90%-10%) and tungsten microwires show a complete signal loss that covers a volume significantly larger than the actual volume occupied by the microelectrodes: roughly 400 times larger for Pt-Ir and 180 times larger for W, at the tip of the microelectrodes. Similar MR images of a single-shank silicon microelectrode only produce a partial volume effect on the voxels occupied by the probe with less than 50% of signal loss.
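A minimal sketch of a Fourier-based susceptibility-to-field calculation of the kind referenced above, using the standard first-order k-space dipole kernel with B0 along z; the grid size and susceptibility value are illustrative, not taken from the paper:

```python
# First-order field perturbation of a susceptibility distribution chi:
# delta_B = B0 * IFFT[ (1/3 - kz^2/k^2) * FFT(chi) ], with B0 along z.
import numpy as np

def field_shift(chi, B0=2.0):
    """Induced field shift (same units as B0) for susceptibility map chi."""
    n = chi.shape[0]
    k = np.fft.fftfreq(n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    safe = np.where(k2 > 0, k2, 1.0)                           # avoid 0/0 at k = 0
    kernel = np.where(k2 > 0, 1.0 / 3.0 - kz**2 / safe, 0.0)   # dipole kernel
    return B0 * np.real(np.fft.ifftn(kernel * np.fft.fftn(chi)))

chi = np.zeros((32, 32, 32))
chi[14:18, 14:18, 14:18] = 9.0e-6   # small paramagnetic cube, value illustrative
dB = field_shift(chi)
print(dB.max() > 0, dB.min() < 0)   # dipolar pattern: lobes of both signs
```

The zero-mean dipolar pattern around the inclusion is what produces the signal-loss halo around a high-susceptibility microwire that the abstract quantifies.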
Xu, Bin; Yang, Chenguang; Pan, Yongping
2015-10-01
This paper studies both indirect and direct global neural control of strict-feedback systems in the presence of unknown dynamics, using the dynamic surface control (DSC) technique in a novel manner. A new switching mechanism is designed to combine an adaptive neural controller in the neural approximation domain, together with the robust controller that pulls the transient states back into the neural approximation domain from the outside. In comparison with the conventional control techniques, which could only achieve semiglobally uniformly ultimately bounded stability, the proposed control scheme guarantees all the signals in the closed-loop system are globally uniformly ultimately bounded, such that the conventional constraints on initial conditions of the neural control system can be relaxed. The simulation studies of hypersonic flight vehicle (HFV) are performed to demonstrate the effectiveness of the proposed global neural DSC design.
Nonlinear dynamics in a neural network (parallel) processor
NASA Astrophysics Data System (ADS)
Perera, A. G. Unil; Matsik, S. G.; Betarbet, S. R.
1995-06-01
We consider an iterative map derived from the device equations for a silicon p+-n-n+ diode, which simulates a biological neuron. This map has been extended to a coupled neuron circuit consisting of two of these artificial neurons connected by a filter circuit, which could be used as a single channel of a parallel asynchronous processor. The extended map output is studied under different conditions to determine the effect of various parameters on the pulsing pattern. As the control parameter is increased, fixed points (both stable and unstable) as well as a limit cycle appear. On further increase, a Hopf bifurcation is seen, causing the disappearance of the limit cycle. The increasing control parameter, which is related to a decrease in the bias applied to the circuit, also causes variation in the location of the fixed points. This variation could be important in applications to neural networks. The control parameter value at which the fixed points appear and the bifurcation occurs can be varied by changing the weighting of the filter circuit. The modeling outputs are compared with the experimental outputs.
Neural Network Assisted Inverse Dynamic Guidance for Terminally Constrained Entry Flight
Chen, Wanchun
2014-01-01
This paper presents a neural network assisted entry guidance law that is designed by applying Bézier approximation. It is shown that a fully constrained approximation of a reference trajectory can be made by using the Bézier curve. Applying this approximation, an inverse dynamic system for an entry flight is solved to generate the guidance command. The guidance solution thus obtained ensures terminal constraints for position, flight path, and azimuth angle. In order to ensure the terminal velocity constraint, a prediction of the terminal velocity is required, based on which the approximated Bézier curve is adjusted. An artificial neural network is used for this prediction of the terminal velocity. The method enables faster implementation in achieving fully constrained entry flight. Results from simulations indicate improved performance of the neural network assisted method. The scheme is expected to hold promise for further research on automated onboard control of terminal velocity for both reentry and terminal guidance laws. PMID:24723821
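The Bézier building block can be sketched directly: de Casteljau evaluation of a curve whose endpoints fix the terminal position and whose last control leg fixes the end tangent (the control points below are made-up numbers, not a real entry trajectory):

```python
# De Casteljau evaluation of a Bezier curve, plus the closed-form end
# tangent of a cubic: B'(1) = 3 * (P3 - P2).
import numpy as np

def de_casteljau(ctrl, t):
    """Evaluate a Bezier curve with control points `ctrl` at t in [0, 1]."""
    pts = np.array(ctrl, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]   # repeated linear interpolation
    return pts[0]

# Cubic curve: position constraints at both ends come from P0 and P3;
# an end tangent (e.g. a terminal flight-path direction) from P3 - P2.
ctrl = [np.array([0.0, 10.0]),   # entry point (downrange, altitude), illustrative
        np.array([3.0, 8.0]),
        np.array([7.0, 2.0]),
        np.array([10.0, 0.0])]   # terminal point

end = de_casteljau(ctrl, 1.0)
tangent = 3.0 * (ctrl[3] - ctrl[2])
print(end, tangent)
```

Adjusting interior control points reshapes the curve without disturbing the endpoint constraints, which is what makes the representation convenient for enforcing terminal conditions while tuning for terminal velocity.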
Neural network-based adaptive dynamic surface control for permanent magnet synchronous motors.
Yu, Jinpeng; Shi, Peng; Dong, Wenjie; Chen, Bing; Lin, Chong
2015-03-01
This brief considers the problem of neural network (NN)-based adaptive dynamic surface control (DSC) for permanent magnet synchronous motors (PMSMs) with parameter uncertainties and load torque disturbance. First, NNs are used to approximate the unknown nonlinear functions of the PMSM drive system and a novel adaptive DSC is constructed to avoid the explosion of complexity in the backstepping design. Next, under the proposed adaptive neural DSC, the number of adaptive parameters required is reduced to only one, and the designed neural controller structure is much simpler than in some existing results in the literature, which guarantees that the tracking error converges to a small neighborhood of the origin. Then, simulations are given to illustrate the effectiveness and potential of the new design technique.
Li, Yongtao; Kurata, Shuhei; Morita, Shogo; Shimizu, So; Munetaka, Daigo; Nara, Shigetoshi
2008-09-01
Originating from the viewpoint that complex/chaotic dynamics would play an important role in biological systems including brains, chaotic dynamics introduced in a recurrent neural network was applied to control. The results of computer experiments were successfully implemented in a novel autonomous roving robot, which can capture only rough target information, with uncertainty, from a few sensors. The robot was employed to solve practical two-dimensional mazes using adaptive neural dynamics generated by the recurrent neural network, in which four prototypical simple motions are embedded. Adaptive switching of a system parameter in the neural network results in stationary motion or chaotic motion depending on the dynamical situation. The results of hardware implementation and practical experiments show that, in given two-dimensional mazes, the robot can successfully avoid obstacles and reach the target. Therefore, we believe that chaotic dynamics has novel potential capabilities for control and could be utilized in practical engineering applications.
Dynamically Partitionable Autoassociative Networks as a Solution to the Neural Binding Problem
Hayworth, Kenneth J.
2012-01-01
An outstanding question in theoretical neuroscience is how the brain solves the neural binding problem. In vision, binding can be summarized as the ability to represent that certain properties belong to one object while other properties belong to a different object. I review the binding problem in visual and other domains, and review its simplest proposed solution – the anatomical binding hypothesis. This hypothesis has traditionally been rejected as a true solution because it seems to require a type of one-to-one wiring of neurons that would be impossible in a biological system (as opposed to an engineered system like a computer). I show that this requirement for one-to-one wiring can be loosened by carefully considering how the neural representation is actually put to use by the rest of the brain. This leads to a solution where a symbol is represented not as a particular pattern of neural activation but instead as a piece of a global stable attractor state. I introduce the Dynamically Partitionable AutoAssociative Network (DPAAN) as an implementation of this solution and show how DPAANs can be used in systems which perform perceptual binding and in systems that implement syntax-sensitive rules. Finally, I show how the core parts of the cognitive architecture ACT-R can be neurally implemented using a DPAAN as ACT-R’s global workspace. Because the DPAAN solution to the binding problem requires only “flat” neural representations (as opposed to the phase encoded representation hypothesized in neural synchrony solutions) it is directly compatible with the most well developed neural models of learning, memory, and pattern recognition. PMID:23060784
Robust Fault Detection of Wind Energy Conversion Systems Based on Dynamic Neural Networks
Talebi, Nasser; Sadrnia, Mohammad Ali; Darabi, Ahmad
2014-01-01
Occurrence of faults in wind energy conversion systems (WECSs) is inevitable. In order to detect faults at the appropriate time, avoid heavy economic losses, ensure safe system operation, prevent damage to adjacent relevant systems, and facilitate timely repair of failed components, a fault detection system (FDS) is required. Recurrent neural networks (RNNs) have gained a noticeable position in FDSs and they have been widely used for modeling of complex dynamical systems. One method for designing an FDS is to prepare a dynamic neural model emulating the normal system behavior. By comparing the outputs of the real system and the neural model, incidence of faults can be identified. In this paper, by utilizing a comprehensive dynamic model which contains both mechanical and electrical components of the WECS, an FDS is suggested using dynamic RNNs. The presented FDS detects faults of the generator's angular velocity sensor, pitch angle sensors, and pitch actuators. Robustness of the FDS is achieved by employing an adaptive threshold. Simulation results show that the proposed scheme is capable of detecting faults promptly and has very low false and missed alarm rates. PMID:24744774
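The residual-plus-adaptive-threshold idea can be sketched in a toy form, with a known model output standing in for the dynamic neural model and an injected sensor bias as the fault; the threshold rule and all numbers are illustrative assumptions:

```python
# Residual-based fault detection with an adaptive threshold: flag a
# fault when the model/plant residual exceeds a running, noise-scaled
# bound estimated from the recent past.
import numpy as np

rng = np.random.default_rng(2)

def detect(y_plant, y_model, k=5.0, win=50):
    resid = np.abs(y_plant - y_model)
    alarms = np.zeros(len(resid), dtype=bool)
    for t in range(win, len(resid)):
        base = resid[t - win:t]
        # adaptive threshold: local residual mean plus k local stds
        alarms[t] = resid[t] > base.mean() + k * base.std()
    return alarms

t = np.arange(1000)
y_model = np.sin(0.05 * t)                         # stand-in for the neural model
y_plant = y_model + 0.02 * rng.normal(size=len(t)) # healthy plant = model + noise
y_plant[600:] += 0.5                               # injected sensor fault at t = 600
alarms = detect(y_plant, y_model)
print(alarms[600], int(alarms[:600].sum()))
```

Because the threshold tracks the recent residual statistics, the fault is flagged immediately at its onset while the healthy segment produces essentially no false alarms; this adaptivity is what the abstract's robustness claim refers to.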
Modeling the influences of nanoparticles on neural field oscillations in thalamocortical networks.
Busse, Michael; Kraegeloh, Annette; Arzt, Eduard; Strauss, Daniel J
2012-01-01
The purpose of this study is twofold. First, we present a simplified multiscale modeling approach integrating activity on the scale of ionic channels into the spatiotemporal scale of neural field potentials: building on a Hodgkin-Huxley-based single-cell model, we introduced a neuronal feedback circuit based on the Llinás model of thalamocortical activity and binding, where all cell-specific intrinsic properties were adopted from patch-clamp measurements. In this paper, we expand this existing model by integrating its output into the spatiotemporal scale of field potentials, which are assumed to originate from the parallel activity of many synchronized thalamocortical columns at the quasi-microscopic level, where the involved neurons are grouped into units. Second, and more important, we study the possible effects of nanoparticles (NPs) that are assumed to interact with the thalamic cells of our network model. In two preliminary studies we demonstrated in vitro and in vivo effects of NPs on the ionic channels of single neurons and thereafter on neuronal feedback circuits. With our new model we now assumed NP-induced changes in the ionic currents of the involved thalamic neurons. We found extensively diversified pattern formation in the neural field potentials compared with the modeled activity without the addition of neuromodulating NPs. This model provides predictions about the influences of NPs on spatiotemporal neural field oscillations in thalamocortical networks. These predictions can be validated by electrophysiological measurements with high spatiotemporal resolution, such as voltage-sensitive dyes and multiarray recordings.
Different dynamic resting state fMRI patterns are linked to different frequencies of neural activity
Thompson, Garth John; Pan, Wen-Ju; Keilholz, Shella Dawn
2015-01-01
Resting state functional magnetic resonance imaging (rsfMRI) results have indicated that network mapping can contribute to understanding behavior and disease, but it has been difficult to translate the maps created with rsfMRI to neuroelectrical states in the brain. Recently, dynamic analyses have revealed multiple patterns in the rsfMRI signal that are strongly associated with particular bands of neural activity. To further investigate these findings, simultaneously recorded invasive electrophysiology and rsfMRI from rats were used to examine two types of electrical activity (directly measured low-frequency/infraslow activity and band-limited power of higher frequencies) and two types of dynamic rsfMRI (quasi-periodic patterns or QPP, and sliding window correlation or SWC). The relationship between neural activity and dynamic rsfMRI was tested under three anesthetic states in rats: dexmedetomidine and high and low doses of isoflurane. Under dexmedetomidine, the lightest anesthetic, infraslow electrophysiology correlated with QPP but not SWC, whereas band-limited power in higher frequencies correlated with SWC but not QPP. Results were similar under isoflurane; however, the QPP was also correlated to band-limited power, possibly due to the burst-suppression state induced by the anesthetic agent. The results provide additional support for the hypothesis that the two types of dynamic rsfMRI are linked to different frequencies of neural activity, but isoflurane anesthesia may make this relationship more complicated. Understanding which neural frequency bands appear as particular dynamic patterns in rsfMRI may ultimately help isolate components of the rsfMRI signal that are of interest to disorders such as schizophrenia and attention deficit disorder. PMID:26041826
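Of the two dynamic rsfMRI analyses compared above, sliding window correlation is the simpler to state precisely. A minimal sketch (the window length and the toy signals below are arbitrary choices, not the study's parameters):

```python
import numpy as np

def sliding_window_correlation(x, y, window):
    """Pearson correlation of two signals within each sliding window,
    the SWC measure used in dynamic functional connectivity analyses."""
    return np.array([
        np.corrcoef(x[t:t + window], y[t:t + window])[0, 1]
        for t in range(len(x) - window + 1)
    ])
```

In practice the window length (often tens of seconds of fMRI data) strongly shapes the resulting dynamics, which is one reason comparing SWC against directly measured neural signals, as above, is informative.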
Dynamic modeling of physical phenomena for PRAs using neural networks
Benjamin, A.S.; Brown, N.N.; Paez, T.L.
1998-04-01
In most probabilistic risk assessments, there is a set of accident scenarios that involves the physical responses of a system to environmental challenges. Examples include the effects of earthquakes and fires on the operability of a nuclear reactor safety system, the effects of fires and impacts on the safety integrity of a nuclear weapon, and the effects of human intrusions on the transport of radionuclides from an underground waste facility. The physical responses of the system to these challenges can be quite complex, and their evaluation may require the use of detailed computer codes that are very time consuming to execute. Yet, to perform meaningful probabilistic analyses, it is necessary to evaluate the responses for a large number of variations in the input parameters that describe the initial state of the system, the environments to which it is exposed, and the effects of human interaction. Because the uncertainties of the system response may be very large, it may also be necessary to perform these evaluations for various values of modeling parameters that have high uncertainties, such as material stiffnesses, surface emissivities, and ground permeabilities. The authors have been exploring the use of artificial neural networks (ANNs) as a means for estimating the physical responses of complex systems to phenomenological events such as those cited above. These networks are designed as mathematical constructs with adjustable parameters that can be trained so that the results obtained from the networks will simulate the results obtained from the detailed computer codes. The intent is for the networks to provide an adequate simulation of the detailed codes over a significant range of variables while requiring only a small fraction of the computer processing time required by the detailed codes. This enables the authors to integrate the physical response analyses into the probabilistic models in order to estimate the probabilities of various responses.
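A minimal sketch of the surrogate idea: train a small feed-forward network on input/response pairs generated by a "detailed code" (here stood in for by a cheap analytic function), then reuse the network as a fast emulator. The architecture, learning rate, and toy response function are illustrative assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def detailed_code(x):
    """Stand-in for an expensive physics simulation."""
    return np.sin(3 * x) + 0.5 * x

# Training data: sampled input parameters and their simulated responses.
X = rng.uniform(-1, 1, (200, 1))
Y = detailed_code(X)

# One-hidden-layer tanh network.
W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

lr, losses = 0.05, []
for _ in range(2000):
    H, P = forward(X)
    err = P - Y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation of the mean-squared-error loss.
    gP = 2 * err / len(X)
    gW2 = H.T @ gP; gb2 = gP.sum(0)
    gH = gP @ W2.T * (1 - H ** 2)
    gW1 = X.T @ gH; gb1 = gH.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

Once trained, a forward pass costs a tiny fraction of a detailed-code run, so the surrogate can be evaluated for the many parameter variations a probabilistic analysis requires.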
Neural network architecture for cognitive navigation in dynamic environments.
Villacorta-Atienza, José Antonio; Makarov, Valeri A
2013-12-01
Navigation in time-evolving environments with moving targets and obstacles requires cognitive abilities widely demonstrated by even the simplest animals. However, it is a long-standing challenging problem for artificial agents. Cognitive autonomous robots coping with this problem must solve two essential tasks: 1) understand the environment in terms of what may happen and how the agent can deal with it and 2) learn successful experiences for further use in an automatic, subconscious way. The recently introduced concept of compact internal representation (CIR) provides the ground for both tasks. CIR is a specific cognitive map that compacts time-evolving situations into static structures containing the information necessary for navigation. It belongs to the class of global approaches, i.e., it finds trajectories to a target when they exist but also detects situations when no solution can be found. Here we extend the concept to situations with mobile targets. Then, using CIR as a core, we propose a closed-loop neural network architecture consisting of conscious and subconscious pathways for efficient decision-making. The conscious pathway provides solutions to novel situations if the default subconscious pathway fails to guide the agent to a target. Employing experiments with roving robots and numerical simulations, we show that the proposed architecture provides the robot with cognitive abilities and enables reliable and flexible navigation in realistic time-evolving environments. We prove that the subconscious pathway is robust against uncertainty in the sensory information. Thus, if a novel situation is similar but not identical to previous experience (because of, e.g., noisy perception), the subconscious pathway is able to provide an effective solution.
Knudstrup, Scott; Zochowski, Michal; Booth, Victoria
2016-01-01
The characteristics of neural network activity depend on intrinsic neural properties and synaptic connectivity in the network. In brain networks, both of these properties are critically affected by the type and levels of neuromodulators present. The expression of many of the most powerful neuromodulators, including acetylcholine (ACh), varies tonically and phasically with behavioural state, leading to dynamic, heterogeneous changes in intrinsic neural properties and synaptic connectivity properties. Namely, ACh significantly alters neural firing properties as measured by the phase response curve in a manner that has been shown to alter the propensity for network synchronization. The aim of this simulation study was to build an understanding of how heterogeneity in cholinergic modulation of neural firing properties and heterogeneity in synaptic connectivity affect the initiation and maintenance of synchronous network bursting in excitatory networks. We show that cells that display different levels of ACh modulation have differential roles in generating network activity: weakly modulated cells are necessary for burst initiation and provide synchronizing drive to the rest of the network, whereas strongly modulated cells provide the overall activity level necessary to sustain burst firing. By applying several quantitative measures of network activity, we further show that the existence of network bursting and its characteristics, such as burst duration and intraburst synchrony, are dependent on the fraction of cell types providing the synaptic connections in the network. These results suggest mechanisms underlying ACh modulation of brain oscillations and the modulation of seizure activity during sleep states. PMID:26869313
Autonomous dynamics in neural networks: the dHAN concept and associative thought processes
NASA Astrophysics Data System (ADS)
Gros, Claudius
2007-02-01
The neural activity of the human brain is dominated by self-sustained activity. External sensory stimuli influence this autonomous activity but do not drive the brain directly. Most standard artificial neural network models, however, are input driven and do not show spontaneous activity. It is a challenge to develop organizational principles for controlled, self-sustained activity in artificial neural networks. Here we propose and examine the dHAN concept for autonomous associative thought processes in dense and homogeneous associative networks. An associative thought process is characterized, within this approach, by a time series of transient attractors. Each transient state corresponds to a stored piece of information, a memory. Subsequent transient states are linked by large associative overlaps, which are themselves acquired patterns; memory states, the acquired patterns, therefore have a dual functionality. In this approach the self-sustained neural activity plays a central functional role. The network acquires a discrimination capability, as external stimuli must compete with the autonomous activity, and noise in the input is readily filtered out. Hebbian learning of external patterns occurs simultaneously with the ongoing associative thought process. The autonomous dynamics requires a long-term working-point optimization, which within the dHAN concept acquires a dual functionality: it stabilizes the time development of the associative thought process and limits the runaway synaptic growth that otherwise generically occurs in neural networks with self-induced activity and Hebbian-type learning rules.
Nie, Xiaobing; Zheng, Wei Xing
2016-03-01
This paper addresses the problem of coexistence and dynamical behaviors of multiple equilibria for competitive neural networks. First, a general class of discontinuous nonmonotonic piecewise-linear activation functions is introduced for competitive neural networks. Then, based on the fixed point theorem and the theory of strictly diagonally dominant matrices, it is shown that under some conditions such n-neuron competitive neural networks can have 5^n equilibria, among which 3^n equilibria are locally stable and the others are unstable. More importantly, it is revealed that neural networks with the discontinuous activation functions introduced in this paper can have both more total equilibria and more locally stable equilibria than ones with other activation functions, such as the continuous Mexican-hat-type activation function and the discontinuous two-level activation function. Furthermore, the 3^n locally stable equilibria given in this paper are located not only in saturated regions but also in unsaturated regions, which differs from existing results on the multistability of neural networks with multiple-level activation functions. A simulation example is provided to illustrate and validate the theoretical findings.
NASA Astrophysics Data System (ADS)
Ticchi, Alessandro; Faisal, Aldo A.; Brain; Behaviour Lab Team
2015-03-01
Experimental evidence at the behavioural level shows that the brain is able to make Bayes-optimal inferences and decisions (Kording and Wolpert 2004, Nature; Ernst and Banks, 2002, Nature), yet at the circuit level little is known about how neural circuits may implement Bayesian learning and inference (but see Ma et al. 2006, Nat Neurosci). Molecular sources of noise are clearly established to be powerful enough to pose limits to neural function and structure in the brain (Faisal et al. 2008, Nat Rev Neurosci; Faisal et al. 2005, Curr Biol). We propose a spiking neuron model in which molecular noise is exploited as a useful resource to implement close-to-optimal inference by sampling. Specifically, we derive a synaptic plasticity rule which, coupled with integrate-and-fire neural dynamics and recurrent inhibitory connections, enables a neural population to learn the statistical properties of the received sensory input (the prior). Moreover, the proposed model makes it possible to combine prior knowledge with additional sources of information (the likelihood) from another neural population, and to implement in spiking neurons a Markov chain Monte Carlo algorithm that generates samples from the inferred posterior distribution.
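The claimed computation, combining a prior with a likelihood and drawing samples from the posterior, can be illustrated at an abstract level with a scalar Metropolis sampler. The Gaussian prior/likelihood and step size are assumptions chosen so the true posterior, N(1.0, 0.5), is known in closed form; this is not a spiking implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_post(x, prior_mu=0.0, prior_sd=1.0, obs=2.0, obs_sd=1.0):
    # log prior + log likelihood (Gaussian-Gaussian conjugate pair,
    # so the exact posterior is N(1.0, 0.5) and the sampler is checkable)
    return (-0.5 * ((x - prior_mu) / prior_sd) ** 2
            - 0.5 * ((x - obs) / obs_sd) ** 2)

def metropolis(n=20000, step=0.8):
    """Random-walk Metropolis chain targeting exp(log_post)."""
    x, samples = 0.0, []
    for _ in range(n):
        prop = x + rng.normal(0, step)
        if np.log(rng.uniform()) < log_post(prop) - log_post(x):
            x = prop
        samples.append(x)
    return np.array(samples[n // 4:])   # discard burn-in
```

The histogram of the retained samples approximates the posterior; in the neural-sampling picture, each sample would correspond to a momentary population state rather than an explicit chain variable.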
NASA Astrophysics Data System (ADS)
Bruton, C. P.; West, M. E.
2013-12-01
Earthquakes and seismicity have long been used to monitor volcanoes. In addition to time, location, and magnitude of an earthquake, the characteristics of the waveform itself are important. For example, low-frequency or hybrid type events could be generated by magma rising toward the surface. A rockfall event could indicate a growing lava dome. Classification of earthquake waveforms is thus a useful tool in volcano monitoring. A procedure to perform such classification automatically could flag certain event types immediately, instead of waiting for a human analyst's review. Inspired by speech recognition techniques, we have developed a procedure to classify earthquake waveforms using artificial neural networks. A neural network can be "trained" with an existing set of input and desired output data; in this case, we use a set of earthquake waveforms (input) that has been classified by a human analyst (desired output). After training the neural network, new waveforms can be classified automatically as they are presented. Our procedure uses waveforms from multiple stations, making it robust to seismic network changes and outages. The use of a dynamic time-delay neural network allows waveforms to be presented without precise alignment in time, and thus could be applied to continuous data or to seismic events without clear start and end times. We have evaluated several different training algorithms and neural network structures to determine their effects on classification performance. We apply this procedure to earthquakes recorded at Mount Spurr and Katmai in Alaska, and Uturuncu Volcano in Bolivia.
The neural dynamics of reward value and risk coding in the human orbitofrontal cortex.
Li, Yansong; Vanni-Mercier, Giovanna; Isnard, Jean; Mauguière, François; Dreher, Jean-Claude
2016-04-01
The orbitofrontal cortex is known to carry information regarding expected reward, risk and experienced outcome. Yet, due to inherent limitations in lesion and neuroimaging methods, the neural dynamics of these computations has remained elusive in humans. Here, taking advantage of the high temporal definition of intracranial recordings, we characterize the neurophysiological signatures of the intact orbitofrontal cortex in processing information relevant for risky decisions. Local field potentials were recorded from the intact orbitofrontal cortex of patients suffering from drug-refractory partial epilepsy with implanted depth electrodes as they performed a probabilistic reward learning task that required them to associate visual cues with distinct reward probabilities. We observed three successive signals: (i) around 400 ms after cue presentation, the amplitudes of the local field potentials increased with reward probability; (ii) a risk signal emerged during the late phase of reward anticipation and during the outcome phase; and (iii) an experienced value signal appeared at the time of reward delivery. Both the medial and lateral orbitofrontal cortex encoded risk and reward probability while the lateral orbitofrontal cortex played a dominant role in coding experienced value. The present study provides the first evidence from intracranial recordings that the human orbitofrontal cortex codes reward risk both during late reward anticipation and during the outcome phase at a time scale of milliseconds. Our findings offer insights into the rapid mechanisms underlying the ability to learn structural relationships from the environment. © The Author (2016). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Altered temporal dynamics of neural adaptation in the aging human auditory cortex.
Herrmann, Björn; Henry, Molly J; Johnsrude, Ingrid S; Obleser, Jonas
2016-09-01
Neural response adaptation plays an important role in perception and cognition. Here, we used electroencephalography to investigate how aging affects the temporal dynamics of neural adaptation in human auditory cortex. Younger (18-31 years) and older (51-70 years) normal hearing adults listened to tone sequences with varying onset-to-onset intervals. Our results show long-lasting neural adaptation such that the response to a particular tone is a nonlinear function of the extended temporal history of sound events. Most important, aging is associated with multiple changes in auditory cortex; older adults exhibit larger and less variable response magnitudes, a larger dynamic response range, and a reduced sensitivity to temporal context. Computational modeling suggests that reduced adaptation recovery times underlie these changes in the aging auditory cortex and that the extended temporal stimulation has less influence on the neural response to the current sound in older compared with younger individuals. Our human electroencephalography results critically narrow the gap to animal electrophysiology work suggesting a compensatory release from cortical inhibition accompanying hearing loss and aging.
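The modeling result, that shorter adaptation recovery times yield larger and less context-sensitive responses, can be reproduced qualitatively with a textbook single-exponential adaptation model. The functional form, utilization parameter u, and time constants below are illustrative assumptions, not the study's fitted model.

```python
import numpy as np

def tone_responses(intervals, tau, u=0.6):
    """Response amplitude to each tone in a sequence with the given
    onset-to-onset intervals (seconds): adaptation a builds with each
    tone and recovers exponentially with time constant tau."""
    a, responses = 0.0, []
    for dt in intervals:
        a *= np.exp(-dt / tau)        # recovery since the previous tone
        responses.append(1.0 - a)     # response scales with unadapted resources
        a += u * (1.0 - a)            # adaptation increment from this tone
    return np.array(responses)
```

With the shorter recovery time constant the steady-state response in a fast tone train is larger and less shaped by the preceding stimulation, mirroring the reduced-recovery-time account of the aging auditory cortex.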
Dynamic neural networking as a basis for plasticity in the control of heart rate.
Kember, G; Armour, J A; Zamir, M
2013-01-21
A model is proposed in which the relationships between individual neurons within a neural network change dynamically, providing a measure of "plasticity" in the control of heart rate. The neural network on which the model is based consists of three populations of neurons residing in the central nervous system, the intrathoracic extracardiac nervous system, and the intrinsic cardiac nervous system. This hierarchy of neural centers is used to challenge the classical view that the control of heart rate, a key clinical index, resides entirely in central neuronal command (spinal cord, medulla oblongata, and higher centers). Our results indicate that dynamic networking allows for an interplay among the three populations of neurons that can alter the order of control of heart rate among them. This interplay among the three levels of control allows different neural pathways for the control of heart rate to emerge under different blood flow demands or disease conditions. As such, it has significant clinical implications, because current understanding and treatment of heart rate anomalies are based largely on a single level of control and on neurons acting in unison as a single entity rather than individually within a (plastically) interconnected network. Copyright © 2012 Elsevier Ltd. All rights reserved.
Calcium imaging of neural circuits with extended depth-of-field light-sheet microscopy.
Quirin, Sean; Vladimirov, Nikita; Yang, Chao-Tsung; Peterka, Darcy S; Yuste, Rafael; Ahrens, Misha B
2016-03-01
Increasing the volumetric imaging speed of light-sheet microscopy will improve its ability to detect fast changes in neural activity. Here, a system is introduced for brain-wide imaging of neural activity in the larval zebrafish by coupling structured illumination with cubic phase extended depth-of-field (EDoF) pupil encoding. This microscope enables faster light-sheet imaging and facilitates arbitrary plane scanning, removing constraints on acquisition speed, alignment tolerances, and physical motion near the sample. The usefulness of this method is demonstrated by performing multi-plane calcium imaging in the fish brain with a 416×832×160 μm field of view at 33 Hz. The optomotor response behavior of the zebrafish is monitored at high speeds, and time-locked correlations of neuronal activity are resolved across its brain.
The general property of dynamical quintessence field
NASA Astrophysics Data System (ADS)
Gong, Yungui
2014-04-01
We discuss the general dynamical behaviors of the quintessence field; in particular, the general conditions for tracking and thawing solutions are discussed. We explain what the tracking solutions mean and in what sense the results depend on the initial conditions. Based on the definition of the tracking solution, we give a simple explanation of the existence of a general relation between wϕ and Ωϕ that is independent of the initial conditions for the tracking solution. A more general tracker theorem, which requires large initial values of the roll parameter, is then proposed. To get thawing solutions, the initial value of the roll parameter needs to be small. The power-law and pseudo-Nambu-Goldstone-boson potentials are used to discuss the tracking and thawing solutions. A more general wϕ-Ωϕ relation is derived for the thawing solutions. Based on the asymptotic behavior of the wϕ-Ωϕ relation, the flow parameter is used to give an upper limit on wϕ′ for the thawing solutions. If we use the observational constraints wϕ<-0.8 and 0.2<Ωϕ<0.4, then we require n≲1 for the inverse power-law potential V(ϕ)=V₀ϕ⁻ⁿ with tracking solutions, and the initial value of the roll parameter |λi|<1.3 for the potentials with thawing solutions.
The Dynamical Recollection of Interconnected Neural Networks Using Meta-heuristics
NASA Astrophysics Data System (ADS)
Kuremoto, Takashi; Watanabe, Shun; Kobayashi, Kunikazu; Feng, Laing-Bing; Obayashi, Masanao
Interconnected recurrent neural networks are well known for their associative memory of characteristic patterns. For example, the traditional Hopfield network (HN) can recall a stored pattern stably, while Aihara's chaotic neural network (CNN) can realize dynamical recollection of a sequence of patterns. In this paper, we propose using meta-heuristic (MH) methods such as particle swarm optimization (PSO) and the genetic algorithm (GA) to improve traditional associative memory systems. Using PSO or GA, optimal parameters are found for the CNN to accelerate the recollection process and raise the rate of successful recollection, and an optimized bias current is calculated for the HN to endow the network with dynamical association of a series of patterns. Simulation results for binary pattern association showed the effectiveness of the proposed methods.
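As background for the HN half of the proposal, a minimal Hopfield associative memory with standard Hebbian storage and synchronous sign updates (the meta-heuristic parameter optimization itself is not sketched here):

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian weight matrix for ±1 patterns, zero self-connections."""
    P = np.array(patterns, dtype=float)
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=10):
    """Synchronous sign-updates; ties are broken toward +1."""
    s = np.array(state, dtype=float)
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return s
```

Starting from a corrupted cue, the dynamics settles into the nearest stored attractor, which is exactly the stable recall behavior the MH methods above are tuned to accelerate or to destabilize into sequential recollection.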
Pattwell, Siobhan S.; Liston, Conor; Jing, Deqiang; Ninan, Ipe; Yang, Rui R.; Witztum, Jonathan; Murdock, Mitchell H.; Dincheva, Iva; Bath, Kevin G.; Casey, B. J.; Deisseroth, Karl; Lee, Francis S.
2016-01-01
Fear can be highly adaptive in promoting survival, yet it can also be detrimental when it persists long after a threat has passed. Flexibility of the fear response may be most advantageous during adolescence when animals are prone to explore novel, potentially threatening environments. Two opposing adolescent fear-related behaviours—diminished extinction of cued fear and suppressed expression of contextual fear—may serve this purpose, but the neural basis underlying these changes is unknown. Using microprisms to image prefrontal cortical spine maturation across development, we identify dynamic BLA-hippocampal-mPFC circuit reorganization associated with these behavioural shifts. Exploiting this sensitive period of neural development, we modified existing behavioural interventions in an age-specific manner to attenuate adolescent fear memories persistently into adulthood. These findings identify novel strategies that leverage dynamic neurodevelopmental changes during adolescence with the potential to extinguish pathological fears implicated in anxiety and stress-related disorders. PMID:27215672
Dynamic transitions among multiple oscillators of synchronized bursts in cultured neural networks
NASA Astrophysics Data System (ADS)
Hoan Kim, June; Heo, Ryoun; Choi, Joon Ho; Lee, Kyoung J.
2014-04-01
Synchronized neural bursts are a salient dynamic feature of biological neural networks, having important roles in brain functions. This report investigates the deterministic nature behind seemingly random temporal sequences of inter-burst intervals generated by cultured networks of cortical cells. We found that the complex sequences were an intricate patchwork of several noisy ‘burst oscillators’, whose periods covered a wide dynamic range, from a few tens of milliseconds to tens of seconds. The transition from one type of oscillator to another favored a particular passage, while the dwelling time between two neighboring transitions followed an exponential distribution showing no memory. With different amounts of bicuculline or picrotoxin application, we could also terminate the oscillators, generate new ones or tune their periods.
Global dynamic evolution of the cold plasma inferred with neural networks
NASA Astrophysics Data System (ADS)
Zhelavskaya, Irina; Shprits, Yuri; Spasojevic, Maria
2017-04-01
The electron number density is a fundamental parameter of plasmas and is critical for the wave-particle interactions. Despite its global importance, the distribution of cold plasma and its dynamic dependence on solar wind conditions remains poorly quantified. Existing empirical models present statistical averages based on static geomagnetic parameters, but cannot reflect the dynamics of the highly structured and quickly varying plasmasphere environment, especially during times of high geomagnetic activity. Global imaging provides insights on the dynamics but quantitative inversion to electron number density has been lacking. We propose an empirical model for reconstruction of global dynamics of the cold plasma density distribution based only on solar wind data and geomagnetic indices. We develop a neural network that is capable of globally reconstructing the dynamics of the cold plasma density distribution for L shells from 2 to 6 and all local times. We utilize the density database obtained using the NURD algorithm [Zhelavskaya et al., 2016] in conjunction with solar wind data and geomagnetic indices to train the neural network. This study demonstrates how the global dynamics can be reconstructed from local in-situ observations by using machine learning tools. We describe aspects of the validation process in detail and discuss the selected inputs to the model and their physical implication.
Emerging phenomena in neural networks with dynamic synapses and their computational implications
Torres, Joaquin J.; Kappen, Hilbert J.
2013-01-01
In this paper we review our research on the effect and computational role of dynamical synapses in feed-forward and recurrent neural networks. Among other findings, we report the appearance of a new class of dynamical memories which result from the destabilization of learned memory attractors. This has important consequences for dynamic information processing, allowing the system to sequentially access the information stored in the memories under changing stimuli. Although the storage capacity for stable memories also decreases, our study demonstrated the positive effect of synaptic facilitation in recovering maximum storage capacity and enlarging the capacity of the system for memory recall under noisy conditions. Possibly, the new dynamical behavior can be associated with the voltage transitions between up and down states observed in cortical areas of the brain. We investigated the conditions under which the permanence times in the up state are power-law distributed, which is a signature of criticality, and concluded that the experimentally observed large variability of permanence times could be explained as the result of noisy dynamic synapses with large recovery times. Finally, we report how short-term synaptic processes can transmit weak signals throughout more than one frequency range in noisy neural networks, displaying a kind of stochastic multi-resonance. This effect is due to competition between activity-dependent synaptic fluctuations (due to dynamic synapses) and a neuron firing threshold that adapts to the incoming mean synaptic input. PMID:23637657
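A standard way to model the dynamic synapses discussed here is the event-driven Tsodyks-Markram scheme, in which each spike consumes synaptic resources (depression) and transiently raises release probability (facilitation). The sketch below uses illustrative parameter values, not those of the reviewed studies.

```python
import math

def tm_synapse(isi, n_spikes, U=0.2, tau_rec=0.5, tau_fac=0.8):
    """Event-driven Tsodyks-Markram dynamic synapse (illustrative parameters).

    Returns the sequence of relative PSC amplitudes u*x for a regular
    spike train with inter-spike interval `isi` (seconds).
    """
    u, x = 0.0, 1.0
    amps = []
    for _ in range(n_spikes):
        u += U * (1.0 - u)        # facilitation: release probability jumps
        amps.append(u * x)        # synaptic efficacy for this spike
        x -= u * x                # depression: resources are consumed
        # recovery of resources and decay of facilitation until next spike
        x = 1.0 - (1.0 - x) * math.exp(-isi / tau_rec)
        u = u * math.exp(-isi / tau_fac)
    return amps

fast = tm_synapse(isi=0.02, n_spikes=10)   # 50 Hz: facilitates, then depresses
slow = tm_synapse(isi=1.0, n_spikes=10)    # 1 Hz: resources nearly recover
```

With these parameters the high-frequency train shows an initial facilitation transient followed by strong depression, the activity-dependent fluctuation that the review links to destabilized memory attractors.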
Neural Dynamic Logic of Consciousness: The Knowledge Instinct
2007-09-07
Interaction between language and cognition is an active field of study (see [xii] for neurodynamics of this interaction and for more references). Here we do...mechanisms of cognition, emotion, and language, and will study multi-agent systems, in which each agent possesses complex neurodynamics of interaction...may reveal to what extent the hierarchy is inborn vs. adaptively learned. Studies of the neurodynamics of interacting language and cognition have
The effects of noise on binocular rivalry waves: a stochastic neural field model
NASA Astrophysics Data System (ADS)
Webber, Matthew A.; Bressloff, Paul C.
2013-03-01
We analyze the effects of extrinsic noise on traveling waves of visual perception in a competitive neural field model of binocular rivalry. The model consists of two one-dimensional excitatory neural fields, whose activity variables represent the responses to left-eye and right-eye stimuli, respectively. The two networks mutually inhibit each other, and slow adaptation is incorporated into the model by taking the network connections to exhibit synaptic depression. We first show how, in the absence of any noise, the system supports a propagating composite wave consisting of an invading activity front in one network co-moving with a retreating front in the other network. Using a separation of time scales and perturbation methods previously developed for stochastic reaction-diffusion equations, we then show how extrinsic noise in the activity variables leads to a diffusive-like displacement (wandering) of the composite wave from its uniformly translating position at long time scales, and fluctuations in the wave profile around its instantaneous position at short time scales. We use our analysis to calculate the first-passage-time distribution for a stochastic rivalry wave to travel a fixed distance, which we find to be given by an inverse Gaussian. Finally, we investigate the effects of noise in the depression variables, which under an adiabatic approximation lead to quenched disorder in the neural fields during propagation of a wave.
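The claim that the wandering wave's first-passage-time distribution is inverse Gaussian can be checked against the simplest stochastic caricature: the wave position as Brownian motion with drift. The sketch below uses made-up drift, diffusion, and distance parameters; it is not the neural field model itself.

```python
import math
import random

def first_passage_times(v=1.0, D=0.1, L=2.0, dt=1e-3, n=500, seed=1):
    """Monte Carlo first-passage times for a drift-diffusion caricature of
    the wandering wave: dX = v dt + sqrt(2D) dW, stopped when X reaches L.
    (All parameters are illustrative.)"""
    rng = random.Random(seed)
    sd = math.sqrt(2.0 * D * dt)
    times = []
    for _ in range(n):
        x, t = 0.0, 0.0
        while x < L:
            x += v * dt + rng.gauss(0.0, sd)
            t += dt
        times.append(t)
    return times

ts = first_passage_times()
mean_t = sum(ts) / len(ts)
# Inverse-Gaussian prediction: mean first-passage time = L / v = 2.0 here.
```

The sample mean should sit near the inverse-Gaussian mean L/v, in line with the analytical result quoted in the abstract.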
Neural network interpolation of the magnetic field for the LISA Pathfinder Diagnostics Subsystem
NASA Astrophysics Data System (ADS)
Diaz-Aguilo, Marc; Lobo, Alberto; García-Berro, Enrique
2011-05-01
LISA Pathfinder is a science and technology demonstrator of the European Space Agency within the framework of its LISA mission, which aims to be the first space-borne gravitational wave observatory. The payload of LISA Pathfinder is the so-called LISA Technology Package, which is designed to measure relative accelerations between two test masses in nominal free fall. Its disturbances are monitored and dealt with by the diagnostics subsystem. This subsystem consists of several modules, and one of these is the magnetic diagnostics system, which includes a set of four tri-axial fluxgate magnetometers, intended to measure with high precision the magnetic field at the positions of the test masses. However, since the magnetometers are located far from the positions of the test masses, the magnetic field at their positions must be interpolated. It has been recently shown that because there are not enough magnetic channels, classical interpolation methods fail to derive reliable measurements at the positions of the test masses, while neural network interpolation can provide the required measurements at the desired accuracy. In this paper we expand these studies and we assess the reliability and robustness of the neural network interpolation scheme for variations of the locations and possible offsets of the magnetometers, as well as for changes in environmental conditions. We find that neural networks are robust enough to derive accurate measurements of the magnetic field at the positions of the test masses in most circumstances.
Prototype extraction in material attractor neural networks with stochastic dynamic learning
NASA Astrophysics Data System (ADS)
Fusi, Stefano
1995-04-01
Dynamic learning of random stimuli can be described as a random walk among the stable synaptic values. It is shown that prototype extraction can take place in material attractor neural networks when the stimuli are correlated and hierarchically organized. The network learns a set of attractors representing the prototypes in a completely unsupervised fashion and is able to modify its attractors when the input statistics change. Learning and forgetting rates are computed.
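The "random walk among stable synaptic values" can be sketched with the simplest possible material synapse: a binary state that stochastically copies each presented bit. Over many noisy presentations, the synaptic population's occupation tracks the input statistics, so the prototype emerges unsupervised. All parameters below (transition probability, noise level, pattern count) are hypothetical.

```python
import random

def learn_prototype(patterns, q=0.1, seed=0):
    """Stochastic binary synapses (illustrative): each synapse copies the
    presented bit with small probability q, performing a random walk whose
    stationary statistics reflect the ensemble of stimuli."""
    rng = random.Random(seed)
    n = len(patterns[0])
    w = [rng.randint(0, 1) for _ in range(n)]   # binary synaptic states
    for p in patterns:
        for i, bit in enumerate(p):
            if rng.random() < q:
                w[i] = bit                      # stochastic transition
    return w

# Noisy variants of an all-ones prototype (each bit flipped with prob. 0.2).
noise_rng = random.Random(1)
proto = [1] * 50
stimuli = [[b if noise_rng.random() > 0.2 else 1 - b for b in proto]
           for _ in range(200)]
w = learn_prototype(stimuli)
overlap = sum(wi == pi for wi, pi in zip(w, proto)) / len(proto)
```

Because transitions are slow (small q), the learned state forgets old statistics gradually, which is also why such networks can re-track changing input statistics, as the abstract notes.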
Neural dynamics of error processing in medial frontal cortex.
Mars, Rogier B; Coles, Michael G H; Grol, Meike J; Holroyd, Clay B; Nieuwenhuis, Sander; Hulstijn, Wouter; Toni, Ivan
2005-12-01
Adaptive behavior requires an organism to evaluate the outcome of its actions, such that future behavior can be adjusted accordingly and the appropriate response selected. During associative learning, the time at which such evaluative information is available changes as learning progresses, from the delivery of performance feedback early in learning to the execution of the response itself during learned performance. Here, we report a learning-dependent shift in the timing of activation in the rostral cingulate zone of the anterior cingulate cortex from external error feedback to internal error detection. This pattern of activity is seen only in the anterior cingulate, not in the pre-supplementary motor area. The dynamics of these reciprocal changes are consistent with the claim that the rostral cingulate zone is involved in response selection on the basis of the expected outcome of an action. Specifically, these data illustrate how the anterior cingulate receives evaluative information, indicating that an action has not produced the desired result.
New Concepts in Atrial Fibrillation: Neural Mechanisms and Calcium Dynamics
Chen, Peng-Sheng
2009-01-01
Atrial fibrillation (AF) is a complex arrhythmia with multiple possible mechanisms. It requires a trigger for initiation and a favorable substrate for maintenance. Pulmonary vein myocardial sleeves have the potential to generate spontaneous activity, and such arrhythmogenic activity is surfaced by modulation of intracellular calcium dynamics. Direct autonomic nerve recordings in canine models demonstrate that simultaneous sympathovagal discharges are the most common triggers of paroxysmal atrial tachycardia and paroxysmal AF. Autonomic modulation as a potential therapeutic strategy has been targeted clinically and experimentally, but its effectiveness as an adjunctive therapeutic modality to catheter ablation of AF has been inconsistent. Further studies are warranted before autonomic modulation can be widely applied in therapies for clinical AF. PMID:19111762
Voytek, Bradley; Knight, Robert T
2015-06-15
Perception, cognition, and social interaction depend upon coordinated neural activity. This coordination operates within noisy, overlapping, and distributed neural networks operating at multiple timescales. These networks are built upon a structural scaffolding with intrinsic neuroplasticity that changes with development, aging, disease, and personal experience. In this article, we begin from the perspective that successful interregional communication relies upon the transient synchronization between distinct low-frequency (<80 Hz) oscillations, allowing for brief windows of communication via phase-coordinated local neuronal spiking. From this, we construct a theoretical framework for dynamic network communication, arguing that these networks reflect a balance between oscillatory coupling and local population spiking activity and that these two levels of activity interact. We theorize that when oscillatory coupling is too strong, spike timing within the local neuronal population becomes too synchronous; when oscillatory coupling is too weak, spike timing is too disorganized. Each results in specific disruptions to neural communication. These alterations in communication dynamics may underlie cognitive changes associated with healthy development and aging, in addition to neurological and psychiatric disorders. A number of neurological and psychiatric disorders-including Parkinson's disease, autism, depression, schizophrenia, and anxiety-are associated with abnormalities in oscillatory activity. Although aging, psychiatric and neurological disease, and experience differ in the biological changes to structural gray or white matter, neurotransmission, and gene expression, our framework suggests that any resultant cognitive and behavioral changes in normal or disordered states or their treatment are a product of how these physical processes affect dynamic network communication.
Adaptive dynamic inversion robust control for BTT missile based on wavelet neural network
NASA Astrophysics Data System (ADS)
Li, Chuanfeng; Wang, Yongji; Deng, Zhixiang; Wu, Hao
2009-10-01
A new nonlinear control strategy combining the dynamic inversion method with wavelet neural networks is presented for the nonlinear coupled system of a Bank-to-Turn (BTT) missile in the reentry phase. The basic control law is designed using the dynamic inversion feedback linearization method, and an online-learning wavelet neural network is used to compensate for the inversion error due to aerodynamic parameter errors, modeling imprecision, and external disturbance, exploiting the time-frequency localization properties of the wavelet transform. Weight-adjustment laws are derived according to Lyapunov stability theory, which guarantees the boundedness of all signals in the whole system. Furthermore, robust stability of the closed-loop system under this tracking law is proved. Finally, six-degree-of-freedom (6DOF) simulation results show that the attitude angles can track the anticipated commands precisely in the presence of external disturbance and parameter uncertainty. This means that the dynamic inversion method's dependence on an accurate model is reduced and the robustness of the control system is enhanced by using a wavelet neural network (WNN) to reconstruct the inversion error online.
NASA Astrophysics Data System (ADS)
Gao, Shigen; Dong, Hairong; Lyu, Shihang; Ning, Bin
2016-07-01
This paper studies decentralised neural adaptive control of a class of interconnected nonlinear systems, where each subsystem is subject to input saturation and external disturbance and has an independent system order. Using a novel truncated adaptation design, the dynamic surface control technique, and a minimal-learning-parameters algorithm, the proposed method circumvents the problems of 'explosion of complexity' and the 'curse of dimensionality' that exist in the traditional backstepping design. Compared with methods in which the neural weights are updated online in the controllers, only one scalar needs to be updated in each subsystem's controller when dealing with unknown system dynamics. Radial basis function neural networks (NNs) are used for online approximation of the unknown system dynamics. It is proved using Lyapunov stability theory that all the signals in the closed-loop system are semi-globally uniformly ultimately bounded. The effects of the NN approximation residuals and external disturbances on the tracking errors of the subsystems can be attenuated to arbitrarily small levels by properly tuning the design parameters. Simulation results are given to demonstrate the effectiveness of the proposed method.
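The core approximation step in such controllers is representing an unknown smooth dynamics term as a weighted sum of radial basis functions, f(x) ≈ Wᵀφ(x). The sketch below fits RBF weights offline by least squares to a stand-in nonlinearity; the centers, widths, and target function are all hypothetical, and the adaptive (online) weight update of the paper is not reproduced here.

```python
import numpy as np

# Gaussian RBF features over a 1-D state interval (made-up centers/width).
centers = np.linspace(-2.0, 2.0, 9)
width = 0.5

def phi(x):
    """Vector of Gaussian basis-function activations at state x."""
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

# 'Unknown' dynamics term sampled on a grid (a stand-in, not from the paper).
xs = np.linspace(-2.0, 2.0, 81)
f = np.sin(2.0 * xs) * np.exp(-0.1 * xs ** 2)

# Least-squares fit of the RBF weights: f(x) ~ W^T phi(x).
Phi = np.array([phi(x) for x in xs])
W, *_ = np.linalg.lstsq(Phi, f, rcond=None)
resid = float(np.max(np.abs(Phi @ W - f)))
```

The minimal-learning-parameters idea in the abstract replaces adapting the full weight vector W online with adapting a single scalar bound on its norm, which is what reduces each subsystem's controller to one updated parameter.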
Global dynamic evolution of the cold plasma inferred with neural networks
NASA Astrophysics Data System (ADS)
Zhelavskaya, I. S.; Shprits, Y. Y.; Spasojevic, M.
2016-12-01
The electron number density is a fundamental parameter of plasmas and a critical parameter in wave-particle interactions. However, the distribution of cold plasma and its dynamic dependence on solar wind conditions remain poorly quantified. Existing empirical models provide statistical averages based on static geomagnetic parameters, but cannot reflect the dynamics of the highly structured and quickly varying plasmasphere environment, especially during times of high geomagnetic activity. Global imaging provides insights into the dynamics but does not provide quantitative estimates of number density. Accurately calculating the evolving distribution from first principles has also proven elusive due to the sheer number of physical processes involved. In this study, we propose an empirical model for reconstruction of global dynamics of the cold plasma density distribution based only on solar wind data and geomagnetic indices. We develop a neural network that is capable of globally reconstructing the dynamics of the cold plasma density distribution for L shells from 2 to 6 and all local times. First, we derive a plasma density database by using the NURD algorithm to identify the upper hybrid resonance band in plasma wave observations from the Van Allen Probes [Zhelavskaya et al., 2016]. Then, we utilize the density database in conjunction with solar wind data and geomagnetic indices to train the neural network. To validate and test the model, we choose validation and test sets independently from the density database. We validate and test the neural network by measuring its performance on these sets and also by comparing the model's predicted global evolution with global images of the He+ distribution in the Earth's plasmasphere from the IMAGE extreme ultraviolet (EUV) instrument. The present study demonstrates how we can reconstruct the global dynamics from local in-situ observations by using machine learning tools. We describe aspects of the validation process in
The simplest problem in the collective dynamics of neural networks: is synchrony stable?
NASA Astrophysics Data System (ADS)
Timme, Marc; Wolf, Fred
2008-07-01
For spiking neural networks we consider the stability problem of global synchrony, arguably the simplest non-trivial collective dynamics in such networks. We find that even this simplest dynamical problem—local stability of synchrony—is non-trivial to solve and requires novel methods for its solution. In particular, the discrete mode of pulsed communication together with the complicated connectivity of neural interaction networks requires a non-standard approach. The dynamics in the vicinity of the synchronous state is determined by a multitude of linear operators, in contrast to a single stability matrix in conventional linear stability theory. This unusual property qualitatively depends on network topology and may be neglected for globally coupled homogeneous networks. For generic networks, however, the number of operators increases exponentially with the size of the network. We present methods to treat this multi-operator problem exactly. First, based on the Gershgorin and Perron-Frobenius theorems, we derive bounds on the eigenvalues that provide important information about the synchronization process but are not sufficient to establish the asymptotic stability or instability of the synchronous state. We then present a complete analysis of asymptotic stability for topologically strongly connected networks using simple graph-theoretical considerations. For inhibitory interactions between dissipative (leaky) oscillatory neurons the synchronous state is stable, independent of the parameters and the network connectivity. These results indicate that pulse-like interactions play a profound role in network dynamical systems, and in particular in the dynamics of biological synchronization, unless the coupling is homogeneous and all-to-all. The concepts introduced here are expected to also facilitate the exact analysis of more complicated dynamical network states, for instance the irregular balanced activity in cortical neural networks.
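The Gershgorin step mentioned here bounds the spectrum of a stability operator without computing eigenvalues: every eigenvalue lies in the union of discs centered on the diagonal entries with radii given by the off-diagonal row sums. The sketch below applies the theorem to a toy operator (not one of the paper's synchrony operators).

```python
import numpy as np

def gershgorin_discs(A):
    """Return (centers, radii) of the Gershgorin discs of a square matrix.
    Every eigenvalue of A lies in the union of these discs."""
    A = np.asarray(A, dtype=float)
    centers = np.diag(A)
    radii = np.sum(np.abs(A), axis=1) - np.abs(centers)
    return centers, radii

# Toy 'stability operator' (illustrative): every disc lies inside the unit
# circle, so the spectral radius is below 1 without any eigensolve.
A = np.array([[0.5, 0.1, 0.0],
              [0.2, 0.4, 0.1],
              [0.0, 0.3, 0.6]])
centers, radii = gershgorin_discs(A)
eigs = np.linalg.eigvals(A)
inside = [any(abs(ev - c) <= r + 1e-12 for c, r in zip(centers, radii))
          for ev in eigs]
```

As the abstract notes, such disc bounds constrain the synchronization process but are generally not tight enough to settle asymptotic stability, which is why the graph-theoretical analysis is needed.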
Kostarigka, Artemis K; Rovithakis, George A
2012-01-01
An adaptive dynamic output feedback neural network controller is designed for a class of multi-input/multi-output uncertain nonlinear systems that are affine in the control, capable of guaranteeing prescribed performance bounds on the system's output as well as boundedness of all other closed-loop signals. It is proved that simply guaranteeing a boundedness property for the states of a specifically defined augmented closed-loop system is necessary and sufficient to solve the problem under consideration. The proposed dynamic controller is of switching type. However, its continuity is guaranteed, thus alleviating any issues related to the existence and uniqueness of solutions. Simulations on a planar two-link articulated manipulator illustrate the approach.
Using recurrent neural networks to optimize dynamical decoupling for quantum memory
NASA Astrophysics Data System (ADS)
August, Moritz; Ni, Xiaotong
2017-01-01
We utilize machine learning models that are based on recurrent neural networks to optimize dynamical decoupling (DD) sequences. Dynamical decoupling is a relatively simple technique for suppressing the errors in quantum memory for certain noise models. In numerical simulations, we show that with minimum use of prior knowledge and starting from random sequences, the models are able to improve over time and eventually output DD sequences with performance better than that of the well known DD families. Furthermore, our algorithm is easy to implement in experiments to find solutions tailored to the specific hardware, as it treats the figure of merit as a black box.
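The "figure of merit as a black box" idea can be illustrated with a toy scoring function for a DD sequence: a qubit accumulates phase from a random static detuning, and each π pulse flips the sign of subsequently accumulated phase. This is a deliberately simplified noise model of my own, not the paper's simulation, and no RNN is involved in the sketch.

```python
import math
import random

def coherence(pulse_times, T=1.0, sigma=5.0, n_noise=500, seed=0):
    """Toy DD figure of merit: average coherence of a qubit with random
    *static* detuning, where each pi pulse flips the sign of the phase
    accumulated afterwards. (Illustrative stand-in black box.)"""
    edges = [0.0] + sorted(pulse_times) + [T]
    # Net signed free-evolution time after all sign flips.
    signed = sum((-1) ** k * (edges[k + 1] - edges[k])
                 for k in range(len(edges) - 1))
    rng = random.Random(seed)
    return sum(math.cos(rng.gauss(0.0, sigma) * signed)
               for _ in range(n_noise)) / n_noise

free = coherence([])        # no decoupling: phase fully randomized
echo = coherence([0.5])     # Hahn echo: the two intervals cancel exactly
```

In this caricature any sequence with balanced signed intervals scores perfectly; an optimizer (RNN-based or otherwise) would query such a black box repeatedly and propose better pulse times.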
Wang, Shang; Garcia, Monica D; Lopez, Andrew L; Overbeek, Paul A; Larin, Kirill V; Larina, Irina V
2017-01-01
Neural tube closure is a critical feature of central nervous system morphogenesis during embryonic development. Failure of this process leads to neural tube defects, one of the most common forms of human congenital defects. Although molecular and genetic studies in model organisms have provided insights into the genes and proteins that are required for normal neural tube development, complications associated with live imaging of neural tube closure in mammals limit efficient morphological analyses. Here, we report the use of optical coherence tomography (OCT) for dynamic imaging and quantitative assessment of cranial neural tube closure in live mouse embryos in culture. Through time-lapse imaging, we captured two neural tube closure mechanisms in different cranial regions, zipper-like closure of the hindbrain region and button-like closure of the midbrain region. We also used OCT imaging for phenotypic characterization of a neural tube defect in a mouse mutant. These results suggest that the described approach is a useful tool for live dynamic analysis of normal neural tube closure and neural tube defects in the mouse model.
Neural processing of dynamic emotional facial expressions in psychopaths
Decety, Jean; Skelly, Laurie; Yoder, Keith J.; Kiehl, Kent A.
2014-01-01
Facial expressions play a critical role in social interactions by eliciting rapid responses in the observer. Failure to perceive and experience a normal range and depth of emotion seriously impact interpersonal communication and relationships. As has been demonstrated across a number of domains, abnormal emotion processing in individuals with psychopathy plays a key role in their lack of empathy. However, the neuroimaging literature is unclear as to whether deficits are specific to particular emotions such as fear and perhaps sadness. Moreover, findings are inconsistent across studies. In the current experiment, eighty adult incarcerated males scoring high, medium, and low on the Hare Psychopathy Checklist-Revised (PCL-R) underwent fMRI scanning while viewing dynamic facial expressions of fear, sadness, happiness and pain. Participants who scored high on the PCL-R showed a reduction in neuro-hemodynamic response to all four categories of facial expressions in the face processing network (inferior occipital gyrus, fusiform gyrus, STS) as well as the extended network (inferior frontal gyrus and orbitofrontal cortex), which supports a pervasive deficit across emotion domains. Unexpectedly, the response in dorsal insula to fear, sadness and pain was greater in psychopaths than non-psychopaths. Importantly, the orbitofrontal cortex and ventromedial prefrontal cortex, regions critically implicated in affective and motivated behaviors, were significantly less active in individuals with psychopathy during the perception of all four emotional expressions. PMID:24359488
Neural Dynamics Underlying Target Detection in the Human Brain
Bansal, Arjun K.; Madhavan, Radhika; Agam, Yigal; Golby, Alexandra; Madsen, Joseph R.; Kreiman, Gabriel
2014-01-01
Sensory signals must be interpreted in the context of goals and tasks. To detect a target in an image, the brain compares input signals and goals to elicit the correct behavior. We examined how target detection modulates visual recognition signals by recording intracranial field potential responses from 776 electrodes in 10 epileptic human subjects. We observed reliable differences in the physiological responses to stimuli when a cued target was present versus absent. Goal-related modulation was particularly strong in the inferior temporal and fusiform gyri, two areas important for object recognition. Target modulation started after 250 ms post stimulus, considerably after the onset of visual recognition signals. While broadband signals exhibited increased or decreased power, gamma frequency power showed predominantly increases during target presence. These observations support models where task goals interact with sensory inputs via top-down signals that influence the highest echelons of visual processing after the onset of selective responses. PMID:24553944
Ito, Masato; Noda, Kuniaki; Hoshino, Yukiko; Tani, Jun
2006-04-01
This study presents experiments on the learning of object handling behaviors by a small humanoid robot using a dynamic neural network model, the recurrent neural network with parametric bias (RNNPB). The first experiment showed that after the robot learned different types of ball handling behaviors using human direct teaching, the robot was able to generate adequate ball handling motor sequences situated to the relative position between the robot's hands and the ball. The same scheme was applied to a block handling learning task where it was shown that the robot can switch among learned different block handling sequences, situated to the ways of interaction by human supporters. Our analysis showed that entrainment of the internal memory structures of the RNNPB through the interactions of the objects and the human supporters are the essential mechanisms for those observed situated behaviors of the robot.
Memory, sleep, and dynamic stabilization of neural circuitry: evolutionary perspectives.
Kavanau, J L
1996-01-01
Some aspects of the evolution of mechanisms for enhancement and maintenance of synaptic efficacy are treated. After the origin of use-dependent synaptic plasticity, frequent synaptic activation (dynamic stabilization, DS) probably prolonged transient efficacy enhancements induced by single activations. In many "primitive" invertebrates inhabiting essentially unvarying aqueous environments, DS of synapses occurs primarily in the course of frequent functional use. In advanced locomoting ectotherms encountering highly varied environments, DS is thought to occur both through frequent functional use and by spontaneous "non-utilitarian" activations that occur primarily during rest. Non-utilitarian activations are induced by endogenous oscillatory neuronal activity, the need for which might have been one of the sources of selective pressure for the evolution of neurons with oscillatory firing capacities. As non-sleeping animals evolved increasingly complex brains, ever greater amounts of circuitry encoding inherited and experiential information (memories) required maintenance. The selective pressure for the evolution of sleep may have been the need to depress perception and processing of sensory inputs to minimize interference with DS of this circuitry. As the higher body temperatures and metabolic rates of endothermy evolved, mere skeletal muscle hypotonia evidently did not suffice to prevent sleep-disrupting skeletal muscle contractions during DS of motor circuitry. Selection against sleep disruption may have led to the evolution of further decreases in muscle tone, paralleling the increase in metabolic rate, and culminating in the postural atonia of REM (rapid eye movement) sleep. Phasic variations in heart and respiratory rates during REM sleep may result from superposition of activations accomplishing non-utilitarian DS of redundant and modulatory motor circuitry on the rhythmic autonomic control mechanisms. Accompanying non-utilitarian DS of circuitry during sleep
Adaptations and pathologies linked to dynamic stabilization of neural circuitry.
Kavanau, J L
1999-05-01
Brain circuits for infrequently employed memories are reinforced largely during sleep by self-induced, electrical slow-waves, a process referred to as "dynamic stabilization" (DS). The essence of waking brain function in the absence of volitional activity is sensory input processing, an enormous amount of which is visual. These two functions: circuit reinforcement by DS and sensory information processing come into conflict when both occur at a high level, a conflict that may have been the selective pressure for sleep's origin. As brain waves are absent at the low temperatures of deep torpor, essential circuitry of hibernating small mammals would lose its competence if the animals did not warm up periodically to temperatures allowing sleep and circuit reinforcement. Blind, cave-dwelling vertebrates require no sleep because their sensory processing does not interfere with DS. Nor does such interference arise in continuously-swimming fishes, whose need to process visual information is reduced greatly by life in visually relatively featureless, pelagic habitats, and by schooling. Dreams are believed to have their origin in DS of memory circuits. They are thought to have illusory content when the circuits are partially degraded (incompetent), with synaptic efficacies weakened through infrequent use. Partially degraded circuits arise normally in the course of synaptic efficacy decay, or pathologically through abnormal regimens of DS. Organic delirium may result from breakdown of normal regimens of DS of circuitry during sleep, leaving many circuits incompetent. Activation of incompetent circuits during wakefulness apparently produces delirium and hallucinations. Some epileptic seizures may be induced by abnormal regimens of DS of motor circuitry. Regimens of remedial DS during seizures induced by electroconvulsive therapy (ECT) apparently produce temporary remission of delirium by restoring functional or 'dedicated' synaptic efficacies in incompetent circuitry. Sparing
Choice modulates the neural dynamics of prediction error processing during rewarded learning.
Peterson, David A; Lotz, Daniel T; Halgren, Eric; Sejnowski, Terrence J; Poizner, Howard
2011-01-15
Our ability to selectively engage with our environment enables us to guide our learning and to take advantage of its benefits. When facing multiple possible actions, our choices are a critical aspect of learning. In the case of learning from rewarding feedback, there has been substantial theoretical and empirical progress in elucidating the associated behavioral and neural processes, predominantly in terms of a reward prediction error, a measure of the discrepancy between actual versus expected reward. Nevertheless, the distinct influence of choice on prediction error processing and its neural dynamics remains relatively unexplored. In this study we used a novel paradigm to determine how choice influences prediction error processing and to examine whether there are correspondingly distinct neural dynamics. We recorded scalp electroencephalogram while healthy adults were administered a rewarded learning task in which choice trials were intermingled with control trials involving the same stimuli, motor responses, and probabilistic rewards. We used a temporal difference learning model of subjects' trial-by-trial choices to infer subjects' image valuations and corresponding prediction errors. As expected, choices were associated with lower overall prediction error magnitudes, most notably over the course of learning the stimulus-reward contingencies. Choices also induced a higher-amplitude relative positivity in the frontocentral event-related potential about 200 ms after reward signal onset that was negatively correlated with the differential effect of choice on the prediction error. Thus choice influences the neural dynamics associated with how reward signals are processed during learning. Behavioral, computational, and neurobiological models of rewarded learning should therefore accommodate a distinct influence for choice during rewarded learning.
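The temporal-difference model the authors fit to trial-by-trial choices can be illustrated with a minimal sketch. The learning rate, the 80% reward contingency, and the single-stimulus setup below are illustrative assumptions in standard TD notation, not the paper's fitted model.

```python
import numpy as np

# Minimal TD-style value update producing a reward prediction error per trial.
# alpha and the reward probability are illustrative, not fitted parameters.
def td_update(V, stimulus, reward, alpha=0.1):
    delta = reward - V[stimulus]   # reward prediction error
    V[stimulus] += alpha * delta   # incremental value update
    return delta

rng = np.random.default_rng(0)
V = {"A": 0.0}                     # image valuation, initially unknown
errors = []
for _ in range(200):
    reward = float(rng.random() < 0.8)          # probabilistic reward
    errors.append(abs(td_update(V, "A", reward)))
```

As the contingency is learned, the valuation approaches the reward probability and prediction-error magnitudes shrink, the quantity the study correlates with the frontocentral ERP.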
Population coding and decoding in a neural field: a computational study.
Wu, Si; Amari, Shun-Ichi; Nakahara, Hiroyuki
2002-05-01
This study uses a neural field model to investigate computational aspects of population coding and decoding when the stimulus is a single variable. A general prototype model for the encoding process is proposed, in which neural responses are correlated, with strength specified by a Gaussian function of their difference in preferred stimuli. Based on the model, we study the effect of correlation on the Fisher information, compare the performances of three decoding methods that differ in the amount of encoding information being used, and investigate the implementation of the three methods by using a recurrent network. This study not only rediscovers the main results in the existing literature in a unified way, but also reveals important new features, especially when the neural correlation is strong. As the neural correlation of firing becomes larger, the Fisher information decreases drastically. We confirm that as the width of correlation increases, the Fisher information saturates and no longer increases in proportion to the number of neurons. However, we prove that as the width increases further--wider than √2 times the effective width of the tuning function--the Fisher information increases again, and it increases without limit in proportion to the number of neurons. Furthermore, we clarify the asymptotic efficiency of the maximum likelihood inference (MLI) type of decoding methods for correlated neural signals. We show that when the correlation covers a nonlocal range of the population (excepting the uniform correlation and when the noise is extremely small), the MLI type of method, whose decoding error follows a Cauchy-type distribution, is not asymptotically efficient. This implies that the variance is no longer adequate to measure decoding accuracy.
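The core quantity of the abstract, the Fisher information under Gaussian-correlated noise, can be computed directly for the prototype encoding model. For additive Gaussian noise with stimulus-independent covariance C, I(s) = f'(s)ᵀ C⁻¹ f'(s); the tuning widths, correlation strengths, and population size below are illustrative assumptions.

```python
import numpy as np

# Gaussian tuning curves with noise correlations that fall off as a Gaussian
# of the difference in preferred stimuli; all parameter values are illustrative.
def fisher_info(s, prefs, tuning_width=1.0, corr_width=1.0, corr_strength=0.1):
    # derivative of the Gaussian tuning curves at stimulus s
    df = (prefs - s) / tuning_width**2 * np.exp(-(s - prefs)**2 / (2 * tuning_width**2))
    d = prefs[:, None] - prefs[None, :]
    C = ((1 - corr_strength) * np.eye(len(prefs))
         + corr_strength * np.exp(-d**2 / (2 * corr_width**2)))
    return df @ np.linalg.solve(C, df)   # I(s) = f'(s)^T C^{-1} f'(s)

prefs = np.linspace(-5, 5, 101)
I_weak = fisher_info(0.0, prefs, corr_strength=0.1)
I_strong = fisher_info(0.0, prefs, corr_strength=0.5)
```

With the correlation width comparable to the tuning width, stronger correlation of firing lowers the Fisher information, consistent with the abstract's first observation.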
The Emergent Executive: A Dynamic Field Theory of the Development of Executive Function
Buss, Aaron T.; Spencer, John P.
2015-01-01
A dynamic neural field (DNF) model is presented which provides a process-based account of behavior and developmental change in a key task used to probe the early development of executive function—the Dimensional Change Card Sort (DCCS) task. In the DCCS, children must flexibly switch from sorting cards either by shape or color to sorting by the other dimension. Typically, 3-year-olds, but not 4-year-olds, lack the flexibility to do so and perseverate on the first set of rules when instructed to switch. In the DNF model, rule-use and behavioral flexibility come about through a form of dimensional attention which modulates activity within different cortical fields tuned to specific feature dimensions. In particular, we capture developmental change by increasing the strength of excitatory and inhibitory neural interactions in the dimensional attention system as well as refining the connectivity between this system and the feature-specific cortical fields. Note that although this enables the model to effectively switch tasks, the dimensional attention system does not ‘know’ the details of task-specific performance. Rather, correct performance emerges as a property of system-wide neural interactions. We show how this captures children's behavior in quantitative detail across 12 versions of the DCCS task. Moreover, we successfully test a set of novel predictions with 3-year-old children from a version of the task not explained by other theories. PMID:24818836
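The kind of cortical field the DNF model is built from can be sketched as a one-dimensional Amari-type equation, tau·du/dt = -u + h + S(x) + ∫ k(x - x') f(u(x')) dx'. The kernel shape and every constant below are illustrative assumptions, not the paper's fitted DCCS parameters.

```python
import numpy as np

# Minimal 1-D dynamic neural field with local excitation and global
# inhibition, driven by a localized input; constants are illustrative.
def simulate_field(n=101, steps=400, dt=1.0, tau=20.0, h=-5.0):
    x = np.linspace(-10, 10, n)
    dx = x[1] - x[0]
    d = x[:, None] - x[None, :]
    kernel = 12 * np.exp(-d**2 / 8) - 4    # local excitation, global inhibition
    u = np.full(n, h)                      # field starts at resting level
    stim = 8 * np.exp(-x**2 / 2)           # localized input
    for _ in range(steps):
        f = 1 / (1 + np.exp(-u))           # sigmoidal rate function
        u = u + dt / tau * (-u + h + stim + kernel @ f * dx)
    return x, u

x, u = simulate_field()
```

A self-stabilized peak forms over the stimulated site while the surrounding field is suppressed; in the full model, dimensional attention modulates the strength of such interactions in the feature-specific fields.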
Direct field measurement of the dynamic amplification in a bridge
NASA Astrophysics Data System (ADS)
Carey, Ciarán; OBrien, Eugene J.; Malekjafarian, Abdollah; Lydon, Myra; Taylor, Su
2017-02-01
In this paper, the level of dynamics, as described by the Assessment Dynamic Ratio (ADR), is measured directly through a field test on a bridge in the United Kingdom. The bridge was instrumented using fiber optic strain sensors and piezo-polymer weigh-in-motion sensors were installed in the pavement on the approach road. Field measurements of static and static-plus-dynamic strains were taken over 45 days. The results show that, while dynamic amplification is large for many loading events, these tend not to be the critical events. ADR, the allowance that should be made for dynamics in an assessment of safety, is small.
Temporal and spatial neural dynamics in the perception of basic emotions from complex scenes
Costa, Tommaso; Cauda, Franco; Crini, Manuella; Tatu, Mona-Karina; Celeghin, Alessia; de Gelder, Beatrice
2014-01-01
The different temporal dynamics of emotions are critical to understand their evolutionary role in the regulation of interactions with the surrounding environment. Here, we investigated the temporal dynamics underlying the perception of four basic emotions from complex scenes varying in valence and arousal (fear, disgust, happiness and sadness) with the millisecond time resolution of Electroencephalography (EEG). Event-related potentials were computed and each emotion showed a specific temporal profile, as revealed by distinct time segments of significant differences from the neutral scenes. Fear perception elicited significant activity at the earliest time segments, followed by disgust, happiness and sadness. Moreover, fear, disgust and happiness were characterized by two time segments of significant activity, whereas sadness showed only one long-latency time segment of activity. Multidimensional scaling was used to assess the correspondence between neural temporal dynamics and the subjective experience elicited by the four emotions in a subsequent behavioral task. We found a high coherence between these two classes of data, indicating that psychological categories defining emotions have a close correspondence at the brain level in terms of neural temporal dynamics. Finally, we localized the brain regions of time-dependent activity for each emotion and time segment with the low-resolution brain electromagnetic tomography. Fear and disgust showed widely distributed activations, predominantly in the right hemisphere. Happiness activated a number of areas mostly in the left hemisphere, whereas sadness showed a limited number of active areas at late latency. The present findings indicate that the neural signature of basic emotions can emerge as the byproduct of dynamic spatiotemporal brain networks as investigated with millisecond-range resolution, rather than in time-independent areas involved uniquely in the processing one specific emotion. PMID:24214921
Temporal and spatial neural dynamics in the perception of basic emotions from complex scenes.
Costa, Tommaso; Cauda, Franco; Crini, Manuella; Tatu, Mona-Karina; Celeghin, Alessia; de Gelder, Beatrice; Tamietto, Marco
2014-11-01
The different temporal dynamics of emotions are critical to understand their evolutionary role in the regulation of interactions with the surrounding environment. Here, we investigated the temporal dynamics underlying the perception of four basic emotions from complex scenes varying in valence and arousal (fear, disgust, happiness and sadness) with the millisecond time resolution of Electroencephalography (EEG). Event-related potentials were computed and each emotion showed a specific temporal profile, as revealed by distinct time segments of significant differences from the neutral scenes. Fear perception elicited significant activity at the earliest time segments, followed by disgust, happiness and sadness. Moreover, fear, disgust and happiness were characterized by two time segments of significant activity, whereas sadness showed only one long-latency time segment of activity. Multidimensional scaling was used to assess the correspondence between neural temporal dynamics and the subjective experience elicited by the four emotions in a subsequent behavioral task. We found a high coherence between these two classes of data, indicating that psychological categories defining emotions have a close correspondence at the brain level in terms of neural temporal dynamics. Finally, we localized the brain regions of time-dependent activity for each emotion and time segment with the low-resolution brain electromagnetic tomography. Fear and disgust showed widely distributed activations, predominantly in the right hemisphere. Happiness activated a number of areas mostly in the left hemisphere, whereas sadness showed a limited number of active areas at late latency. The present findings indicate that the neural signature of basic emotions can emerge as the byproduct of dynamic spatiotemporal brain networks as investigated with millisecond-range resolution, rather than in time-independent areas involved uniquely in the processing one specific emotion.
Andrade, Andre; Costa, Marcelo; Paolucci, Leopoldo; Braga, Antônio; Pires, Flavio; Ugrinowitsch, Herbert; Menzel, Hans-Joachim
2015-01-01
The aim of this study was to present a new training algorithm using artificial neural networks called multi-objective least absolute shrinkage and selection operator (MOBJ-LASSO) applied to the classification of dynamic gait patterns. The movement pattern is identified by 20 characteristics from the three components of the ground reaction force which are used as input information for the neural networks in gender-specific gait classification. The classification performance of MOBJ-LASSO (97.4%) and of the multi-objective algorithm (MOBJ) (97.1%) is similar, but MOBJ-LASSO improves on MOBJ because it can eliminate redundant inputs and automatically select the parameters of the neural network. Thus, it is an effective tool for data mining using neural networks. Of the 20 inputs used for training, MOBJ-LASSO selected the first and second peaks of the vertical force and the force peak in the antero-posterior direction as the variables that classify the gait patterns of the different genders.
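The input-elimination behavior described here can be illustrated with a generic L1 (LASSO) regression solved by proximal gradient descent (ISTA) on synthetic stand-ins for the 20 force-curve features. This is not the authors' multi-objective MOBJ-LASSO algorithm; the data and all constants are illustrative.

```python
import numpy as np

# ISTA: gradient step on the squared error followed by a soft-threshold,
# which drives uninformative coefficients exactly to zero.
def lasso_ista(X, y, lam=0.05, lr=0.01, iters=3000):
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        w -= lr * X.T @ (X @ w - y) / len(y)                     # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)   # soft-threshold
    return w

rng = np.random.default_rng(42)
X = rng.normal(size=(400, 20))     # 20 synthetic "GRF characteristics"
# only three features carry the class signal (stand-ins for the two
# vertical-force peaks and the antero-posterior force peak)
y = X[:, 0] + X[:, 1] - X[:, 2] + 0.3 * rng.normal(size=400)

w = lasso_ista(X, y)
selected = np.flatnonzero(np.abs(w) > 1e-8)   # inputs surviving the L1 penalty
```

The penalty zeroes the 17 uninformative inputs while keeping the three informative ones, mirroring the variable selection reported in the abstract.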
Frontera, J L; Raices, M; Cervino, A S; Pozzi, A G; Paz, D A
2016-11-01
Neural stem cells (NSCs) of the olfactory epithelium (OE) are responsible for tissue maintenance and the neural regeneration after severe damage of the tissue. In the normal OE, NSCs are located in the basal layer, olfactory receptor neurons (ORNs) mainly in the middle layer, and sustentacular (SUS) cells in the most apical olfactory layer. In this work, we induced severe damage of the OE through treatment with a zinc sulfate (ZnSO4) solution directly in the medium, which resulted in the loss of ORNs and SUS cells, but retention of the basal layer. During recovery following injury, the OE exhibited increased proliferation of NSCs and rapid neural regeneration. After 24h of recovery, new ORNs and SUS cells were observed. Normal morphology and olfactory function were reached after 168h (7 days) of recovery after ZnSO4 treatment. Taken together, these data support the hypothesis that NSCs in the basal layer activate after OE injury and that these are sufficient for complete neural regeneration and olfactory function restoration. Our analysis provides histological and functional insights into the dynamics between olfactory neurogenesis and the neuronal integration into the neuronal circuitry of the olfactory bulb that restores the function of the olfactory system.
Caldesmon regulates actin dynamics to influence cranial neural crest migration in Xenopus
Nie, Shuyi; Kee, Yun; Bronner-Fraser, Marianne
2011-01-01
Caldesmon (CaD) is an important actin modulator that associates with actin filaments to regulate cell morphology and motility. Although extensively studied in cultured cells, there is little functional information regarding the role of CaD in migrating cells in vivo. Here we show that nonmuscle CaD is highly expressed in both premigratory and migrating cranial neural crest cells of Xenopus embryos. Depletion of CaD with antisense morpholino oligonucleotides causes cranial neural crest cells to migrate a significantly shorter distance, prevents their segregation into distinct migratory streams, and later results in severe defects in cartilage formation. Demonstrating specificity, these effects are rescued by adding back exogenous CaD. Interestingly, CaD proteins with mutations in the Ca2+-calmodulin–binding sites or ErK/Cdk1 phosphorylation sites fail to rescue the knockdown phenotypes, whereas mutation of the PAK phosphorylation site is able to rescue them. Analysis of neural crest explants reveals that CaD is required for the dynamic arrangements of actin and, thus, for cell shape changes and process formation. Taken together, these results suggest that the actin-modulating activity of CaD may underlie its critical function and is regulated by distinct signaling pathways during normal neural crest migration. PMID:21795398
Distributed dynamical computation in neural circuits with propagating coherent activity patterns.
Gong, Pulin; van Leeuwen, Cees
2009-12-01
Activity in neural circuits is spatiotemporally organized. Its spatial organization consists of multiple, localized coherent patterns, or patchy clusters. These patterns propagate across the circuits over time. This type of collective behavior has ubiquitously been observed, both in spontaneous activity and evoked responses; its function, however, has remained unclear. We construct a spatially extended, spiking neural circuit that generates emergent spatiotemporal activity patterns, thereby capturing some of the complexities of the patterns observed empirically. We elucidate what kind of fundamental function these patterns can serve by showing how they process information. As self-sustained objects, localized coherent patterns can signal information by propagating across the neural circuit. Computational operations occur when these emergent patterns interact, or collide with each other. The ongoing behaviors of these patterns naturally embody both distributed, parallel computation and cascaded logical operations. Such distributed computations enable the system to work in an inherently flexible and efficient way. Our work leads us to propose that propagating coherent activity patterns are the underlying primitives with which neural circuits carry out distributed dynamical computation.
An implantable wireless neural interface for recording cortical circuit dynamics in moving primates
NASA Astrophysics Data System (ADS)
Borton, David A.; Yin, Ming; Aceros, Juan; Nurmikko, Arto
2013-04-01
Objective. Neural interface technology suitable for clinical translation has the potential to significantly impact the lives of amputees, spinal cord injury victims and those living with severe neuromotor disease. Such systems must be chronically safe, durable and effective. Approach. We have designed and implemented a neural interface microsystem, housed in a compact, subcutaneous and hermetically sealed titanium enclosure. The implanted device interfaces the brain with a 510k-approved, 100-element silicon-based microelectrode array via a custom hermetic feedthrough design. Full spectrum neural signals were amplified (0.1 Hz to 7.8 kHz, 200× gain) and multiplexed by a custom application specific integrated circuit, digitized and then packaged for transmission. The neural data (24 Mbps) were transmitted by a wireless data link carried on a frequency-shift-key-modulated signal at 3.2 and 3.8 GHz to a receiver 1 m away by design as a point-to-point communication link for human clinical use. The system was powered by an embedded medical grade rechargeable Li-ion battery for 7 h continuous operation between recharge via an inductive transcutaneous wireless power link at 2 MHz. Main results. Device verification and early validation were performed in both swine and non-human primate freely-moving animal models and showed that the wireless implant was electrically stable, effective in capturing and delivering broadband neural data, and safe for over one year of testing. In addition, we have used the multichannel data from these mobile animal models to demonstrate the ability to decode neural population dynamics associated with motor activity. Significance. We have developed an implanted wireless broadband neural recording device evaluated in non-human primate and swine. The use of this new implantable neural interface technology can provide insight into how to advance human neuroprostheses beyond the present early clinical trials. Further, such tools enable mobile
An Implantable Wireless Neural Interface for Recording Cortical Circuit Dynamics in Moving Primates
Borton, David A.; Yin, Ming; Aceros, Juan; Nurmikko, Arto
2013-01-01
Objective Neural interface technology suitable for clinical translation has the potential to significantly impact the lives of amputees, spinal cord injury victims, and those living with severe neuromotor disease. Such systems must be chronically safe, durable, and effective. Approach We have designed and implemented a neural interface microsystem, housed in a compact, subcutaneous, and hermetically sealed titanium enclosure. The implanted device interfaces the brain with a 510k-approved, 100-element silicon-based MEA via a custom hermetic feedthrough design. Full spectrum neural signals were amplified (0.1 Hz to 7.8 kHz, ×200 gain) and multiplexed by a custom application specific integrated circuit, digitized, and then packaged for transmission. The neural data (24 Mbps) were transmitted by a wireless data link carried on a frequency-shift-key-modulated signal at 3.2 GHz and 3.8 GHz to a receiver 1 meter away by design as a point-to-point communication link for human clinical use. The system was powered by an embedded medical grade rechargeable Li-ion battery for 7-hour continuous operation between recharges via an inductive transcutaneous wireless power link at 2 MHz. Main results Device verification and early validation were performed in both swine and non-human primate freely-moving animal models and showed that the wireless implant was electrically stable, effective in capturing and delivering broadband neural data, and safe for over one year of testing. In addition, we have used the multichannel data from these mobile animal models to demonstrate the ability to decode neural population dynamics associated with motor activity. Significance We have developed an implanted wireless broadband neural recording device evaluated in non-human primate and swine. The use of this new implantable neural interface technology can provide insight into how to advance human neuroprostheses beyond the present early clinical trials. Further, such tools enable mobile patient use, have
Neural network simulation of soil NO3 dynamic under potato crop system
NASA Astrophysics Data System (ADS)
Goulet-Fortin, Jérôme; Morais, Anne; Anctil, François; Parent, Léon-Étienne; Bolinder, Martin
2013-04-01
Nitrate leaching is a major issue in sandy soils intensively cropped to potato. Modelling could test and improve management practices, particularly with regard to optimal N application rates. Lack of input data is an important barrier to the application of classical process-based models to predict soil NO3 content (SNOC) and NO3 leaching (NOL). Alternatively, data-driven models such as neural networks (NN) could better take into account indicators of spatial soil heterogeneity and plant growth pattern such as the leaf area index (LAI), hence reducing the amount of soil information required. The first objective of this study was to evaluate NN and hybrid models to simulate SNOC in the 0-40 cm soil layer considering inter-annual variations, spatial soil heterogeneity and differential N application rates. The second objective was to evaluate the same methodology to simulate seasonal NOL dynamics at 1 m depth. To this end, multilayer perceptrons with different combinations of driving meteorological variables, functions of the LAI and state variables of external deterministic models have been trained and evaluated. The state variables from external models were: drainage estimated by the CLASS model and the soil temperature estimated by an ICBM subroutine. Results of SNOC simulations were compared to field data collected between 2004 and 2011 at several experimental plots under potato cropping systems in Québec, Eastern Canada. Results of NOL simulation were compared to data obtained in 2012 from 11 suction lysimeters installed in 2 experimental plots under potato cropping systems in the same region. The best-performing model for SNOC simulation was obtained using a 4-input hybrid model composed of 1) cumulative LAI, 2) cumulative drainage, 3) soil temperature and 4) day of year. The best-performing model for NOL simulation was obtained using a 5-input NN model composed of 1) N fertilization rate at spring, 2) LAI, 3) cumulative rainfall, 4) the day of year and 5) the
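The data-driven alternative described above amounts to a small multilayer perceptron regressing a target on a handful of hybrid inputs. The sketch below trains a one-hidden-layer perceptron by full-batch gradient descent on synthetic stand-ins for the four selected drivers (cumulative LAI, cumulative drainage, soil temperature, day of year); the data and every constant are illustrative assumptions, not the study's calibrated model.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(300, 4))      # four scaled hybrid inputs
# synthetic "SNOC" target with a linear part and a seasonal nonlinearity
y = 2 * X[:, 0] - X[:, 1] + 0.5 * np.sin(6 * X[:, 3]) + 0.05 * rng.normal(size=300)

W1 = rng.normal(0, 0.5, (4, 16)); b1 = np.zeros(16)   # hidden layer
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)    # output layer
mse_init = np.mean(((np.tanh(X @ W1 + b1) @ W2 + b2).ravel() - y) ** 2)
lr = 0.05
for _ in range(3000):
    H = np.tanh(X @ W1 + b1)
    err = (H @ W2 + b2).ravel() - y
    dH = err[:, None] @ W2.T * (1 - H**2)             # backprop through tanh
    W2 -= lr * H.T @ err[:, None] / len(y); b2 -= lr * err.mean(keepdims=True)
    W1 -= lr * X.T @ dH / len(y); b1 -= lr * dH.mean(axis=0)
mse = np.mean(((np.tanh(X @ W1 + b1) @ W2 + b2).ravel() - y) ** 2)
```

Training reduces the error well below the variance of the target, the basic requirement before comparing such models against field data.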
Unified description of the dynamics of quintessential scalar fields
Ureña-López, L. Arturo
2012-03-01
Using the dynamical system approach, we describe the general dynamics of cosmological scalar fields in terms of critical points and heteroclinic lines. It is found that critical points describe the initial and final states of the scalar field dynamics, but that heteroclinic lines give a more complete description of the evolution in between the critical points. In particular, the heteroclinic line that departs from the (saddle) critical point of perfect fluid-domination is the representative path in phase space of quintessence fields that may be viable dark energy candidates. We also discuss the attractor properties of the heteroclinic lines, and their importance for the description of thawing and freezing fields.
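The paper's framework, critical points as initial and final states joined by a heteroclinic orbit, can be illustrated on a toy one-dimensional system. The logistic flow dx/dt = x(1 - x) below is an illustrative stand-in for the scalar-field phase-space equations, not the quintessence system itself.

```python
import numpy as np

def flow(x):
    return x * (1 - x)   # toy autonomous system with two critical points

# critical points and their linear stability from the sign of f'(x*)
crit = [0.0, 1.0]
stability = [np.sign(1 - 2 * xc) for xc in crit]   # +1 unstable, -1 stable

# integrating from just off the unstable point traces the heteroclinic
# orbit that connects x* = 0 (initial state) to x* = 1 (final state)
x, dt, path = 1e-6, 0.01, []
for _ in range(4000):
    path.append(x)
    x += dt * flow(x)
```

The trajectory spends long stretches near each critical point, which is why, as the abstract argues, the heteroclinic line rather than the fixed points alone describes the evolution in between.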
Kwong, C K; Fung, K Y; Jiang, Huimin; Chan, K Y; Siu, Kin Wai Michael
2013-01-01
Affective design is an important aspect of product development to achieve a competitive edge in the marketplace. A neural-fuzzy network approach has been attempted recently to model customer satisfaction for affective design and it has been proved to be an effective one to deal with the fuzziness and non-linearity of the modeling as well as generate explicit customer satisfaction models. However, such an approach to modeling customer satisfaction has two limitations. First, it is not suitable for the modeling problems which involve a large number of inputs. Second, it cannot adapt to new data sets, given that its structure is fixed once it has been developed. In this paper, a modified dynamic evolving neural-fuzzy approach is proposed to address the above mentioned limitations. A case study on the affective design of mobile phones was conducted to illustrate the effectiveness of the proposed methodology. Validation tests were conducted and the test results indicated that: (1) the conventional Adaptive Neuro-Fuzzy Inference System (ANFIS) failed to run due to a large number of inputs; (2) the proposed dynamic neural-fuzzy model outperforms the subtractive clustering-based ANFIS model and fuzzy c-means clustering-based ANFIS model in terms of their modeling accuracy and computational effort.
Buesing, Lars; Bill, Johannes; Nessler, Bernhard; Maass, Wolfgang
2011-01-01
The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons. PMID:22096452
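The target computation, stochastic binary units whose updates sample a distribution, can be sketched with plain Gibbs sampling of a two-unit Boltzmann distribution p(z) ∝ exp(zᵀWz/2 + bᵀz). Note this shows only the sampling goal; the paper's actual construction uses non-reversible chains adapted to spiking dynamics, precisely because Gibbs sampling is inconsistent with them. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
W = np.array([[0.0, 1.5], [1.5, 0.0]])   # symmetric coupling, zero diagonal
b = np.array([-0.5, -0.5])
z = np.zeros(2)
counts = {}
for step in range(20000):
    i = step % 2                          # sweep the two units in turn
    u = W[i] @ z + b[i]                   # "membrane potential" analogue
    z[i] = float(rng.random() < 1 / (1 + np.exp(-u)))   # sigmoidal firing
    counts[tuple(z)] = counts.get(tuple(z), 0) + 1
```

With this coupling, the exact distribution puts the most mass (about 0.43) on the jointly active state (1, 1); the empirical visit frequencies converge to those probabilities.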
Design of Neural Networks for Fast Convergence and Accuracy: Dynamics and Control
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Sparks, Dean W., Jr.
1997-01-01
A procedure for the design and training of artificial neural networks, used for rapid and efficient controls and dynamics design and analysis for flexible space systems, has been developed. Artificial neural networks are employed, such that once properly trained, they provide a means of evaluating the impact of design changes rapidly. Specifically, two-layer feedforward neural networks are designed to approximate the functional relationship between the component/spacecraft design changes and measures of its performance or the nonlinear dynamics of the system/components. A training algorithm, based on statistical sampling theory, is presented, which guarantees that the trained networks provide a designer-specified degree of accuracy in mapping the functional relationship. Within each iteration of this statistical-based algorithm, a sequential design algorithm is used for the design and training of the feedforward network to provide rapid convergence to the network goals. Here, at each sequence a new network is trained to minimize the error of the previous network. The proposed method should work for applications wherein an arbitrarily large source of training data can be generated. Two numerical examples on a spacecraft application are presented in order to demonstrate the feasibility of the proposed approach.
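The sequential idea, each new network trained to fit the error left by the previous ones, can be sketched with random-feature regressors standing in for the two-layer feedforward networks. The data, the fitting method, and all constants are illustrative assumptions, not the NASA procedure.

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.uniform(-1, 1, size=(200, 3))          # design-change parameters
y = np.sin(2 * X[:, 0]) + X[:, 1] * X[:, 2]    # performance measure

def fit_random_feature_net(X, target, width=30, seed=0):
    # random tanh hidden layer, least-squares output layer
    r = np.random.default_rng(seed)
    W, b = r.normal(size=(X.shape[1], width)), r.normal(size=width)
    H = np.tanh(X @ W + b)
    coef, *_ = np.linalg.lstsq(H, target, rcond=None)
    return lambda Z: np.tanh(Z @ W + b) @ coef

residual, models = y.copy(), []
for k in range(3):                             # each stage fits the previous error
    net = fit_random_feature_net(X, residual, seed=k)
    models.append(net)
    residual = residual - net(X)

mse = [np.mean((y - sum(m(X) for m in models[:k + 1]))**2) for k in range(3)]
```

Because each stage solves a least-squares problem on the current residual, the training error can only decrease from stage to stage, which is the mechanism behind the rapid convergence the abstract describes.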
Kumar, Rajesh; Srivastava, Smriti; Gupta, J R P
2017-03-01
In this paper adaptive control of nonlinear dynamical systems using a diagonal recurrent neural network (DRNN) is proposed. The structure of the DRNN is a modification of a fully connected recurrent neural network (FCRNN). The presence of self-recurrent neurons in the hidden layer of the DRNN gives it the ability to capture the dynamic behaviour of the nonlinear plant under consideration (to be controlled). To ensure stability, update rules are developed using the Lyapunov stability criterion. These rules are then used for adjusting the various parameters of the DRNN. The responses of the plants obtained with the DRNN are compared with those obtained when a multi-layer feedforward neural network (MLFFNN) is used as a controller. In example 4, an FCRNN is also investigated and compared with the DRNN and MLFFNN. The robustness of the proposed control scheme is also tested against parameter variations and disturbance signals. The proposed controller is applied to four simulation examples, including a one-link robotic manipulator and an inverted pendulum. The results obtained show the superiority of the DRNN over the MLFFNN as a controller.
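The structural difference between a DRNN and an FCRNN is that each hidden neuron feeds back only onto itself, i.e. the recurrence matrix is diagonal. The forward pass can be sketched as follows; weight initializations are illustrative, not the paper's adaptive-control gains, and the Lyapunov-based update rules are not reproduced.

```python
import numpy as np

class DRNN:
    """Diagonal recurrent neural network: self-recurrent hidden neurons only."""
    def __init__(self, n_in, n_hidden, seed=0):
        r = np.random.default_rng(seed)
        self.Wi = r.normal(0, 0.5, (n_in, n_hidden))   # input weights
        self.d = r.uniform(0.1, 0.9, n_hidden)         # diagonal self-recurrence
        self.Wo = r.normal(0, 0.5, n_hidden)           # output weights
        self.h = np.zeros(n_hidden)

    def step(self, x):
        # each hidden unit sees the input plus only its own past activation
        self.h = np.tanh(x @ self.Wi + self.d * self.h)
        return self.h @ self.Wo

net = DRNN(2, 8)
out = [net.step(np.array([np.sin(0.1 * t), 1.0])) for t in range(50)]
```

The self-recurrence gives the network internal memory with far fewer recurrent parameters than a fully connected recurrence, which is what lets it track plant dynamics.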
Araújo, Rui
2006-09-01
Mobile robots must be able to build their own maps to navigate in unknown worlds. Expanding a previously proposed method based on the fuzzy ART neural architecture (FARTNA), this paper introduces a new online method for learning maps of unknown dynamic worlds. For this purpose the new Prune-able fuzzy adaptive resonance theory neural architecture (PAFARTNA) is introduced. It extends the FARTNA self-organizing neural network with novel mechanisms that provide important dynamic adaptation capabilities. Relevant PAFARTNA properties are formulated and demonstrated. A method is proposed for the perception of object removals, and then integrated with PAFARTNA. The proposed methods are integrated into a navigation architecture. With the new navigation architecture the mobile robot is able to navigate in changing worlds, and a degree of optimality is maintained, associated to a shortest path planning approach implemented in real-time over the underlying global world model. Experimental results obtained with a Nomad 200 robot are presented demonstrating the feasibility and effectiveness of the proposed methods.
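The fuzzy ART architecture that FARTNA and PAFARTNA build on can be sketched compactly: complement coding, a choice function over categories, a vigilance test, and fast learning. The pruning and object-removal mechanisms that distinguish PAFARTNA are not reproduced; the parameter values are illustrative.

```python
import numpy as np

class FuzzyART:
    def __init__(self, dim, rho=0.75, alpha=0.001, beta=1.0):
        self.rho, self.alpha, self.beta = rho, alpha, beta
        self.W = np.empty((0, 2 * dim))   # one weight row per category

    def learn(self, x):
        I = np.concatenate([x, 1 - x])    # complement coding
        # try categories in order of the choice function T_j = |I^w|/(a+|w|)
        for j in np.argsort([-np.minimum(I, w).sum() / (self.alpha + w.sum())
                             for w in self.W]):
            match = np.minimum(I, self.W[j]).sum() / I.sum()
            if match >= self.rho:         # vigilance test passed: resonance
                self.W[j] = (self.beta * np.minimum(I, self.W[j])
                             + (1 - self.beta) * self.W[j])
                return j
        self.W = np.vstack([self.W, I])   # no resonance: commit new category
        return len(self.W) - 1

art = FuzzyART(dim=2, rho=0.8)
labels = [art.learn(np.array(p)) for p in
          [[0.1, 0.1], [0.12, 0.09], [0.9, 0.9], [0.88, 0.92]]]
```

Nearby inputs resonate with an existing category while distant ones recruit a new one, which is how the map-learning architecture forms categories online.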
Learning by stimulation avoidance: A principle to control spiking neural networks dynamics
Sinapayen, Lana; Ikegami, Takashi
2017-01-01
Learning based on networks of real neurons, and learning based on biologically inspired models of neural networks, have yet to find general learning rules leading to widespread applications. In this paper, we argue for the existence of a principle allowing us to steer the dynamics of a biologically inspired neural network. Using carefully timed external stimulation, the network can be driven towards a desired dynamical state. We term this principle “Learning by Stimulation Avoidance” (LSA). We demonstrate through simulation that the minimal sufficient conditions leading to LSA in artificial networks are also sufficient to reproduce learning results similar to those obtained in biological neurons by Shahaf and Marom, and in addition explain synaptic pruning. We examined the underlying mechanism by simulating a small network of 3 neurons, then scaled it up to a hundred neurons. We show that LSA has a higher explanatory power than existing hypotheses about the response of biological neural networks to external stimulation, and can be used as a learning rule for an embodied application: learning of wall avoidance by a simulated robot. In other work, reinforcement learning in spiking networks has been obtained through global reward signals akin to simulating the dopamine system; we believe that this is the first project demonstrating sensory-motor learning with random spiking networks through Hebbian learning relying on environmental conditions without a separate reward system. PMID:28158309
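The LSA principle can be caricatured with a two-neuron toy model: an external stimulus drives neuron A, a Hebbian rule potentiates the A→B weight whenever A's spike is followed by a B spike, and the stimulation is switched off as soon as B responds. This is a heavily simplified illustrative sketch, not the paper's spiking-network simulation, and every constant is an assumption.

```python
import numpy as np

rng = np.random.default_rng(5)
w, trace = 0.2, []                       # A->B weight, recorded per episode
for trial in range(200):
    stim_on = True
    for t in range(20):                  # one stimulation episode
        a = stim_on                      # A spikes while stimulated
        b = a and (rng.random() < w)     # B spikes with probability w
        if a and b:
            w = min(1.0, w + 0.02)       # Hebbian potentiation (pre then post)
            stim_on = False              # the behaviour removes the stimulus
        elif a and not b:
            w = max(0.0, w - 0.001)      # mild depression otherwise
    trace.append(w)
```

Because potentiation events are exactly the ones that end the stimulation, the connection whose firing avoids the stimulus is reinforced until the response is reliable, the signature behavior of LSA.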
Learning by stimulation avoidance: A principle to control spiking neural networks dynamics.
Sinapayen, Lana; Masumori, Atsushi; Ikegami, Takashi
2017-01-01
Learning based on networks of real neurons, and learning based on biologically inspired models of neural networks, have yet to find general learning rules leading to widespread applications. In this paper, we argue for the existence of a principle that allows steering the dynamics of a biologically inspired neural network. Using carefully timed external stimulation, the network can be driven towards a desired dynamical state. We term this principle "Learning by Stimulation Avoidance" (LSA). We demonstrate through simulation that the minimal sufficient conditions leading to LSA in artificial networks are also sufficient to reproduce learning results similar to those obtained in biological neurons by Shahaf and Marom, and in addition explain synaptic pruning. We examined the underlying mechanism by simulating a small network of 3 neurons, then scaled it up to a hundred neurons. We show that LSA has a higher explanatory power than existing hypotheses about the response of biological neural networks to external stimulation, and can be used as a learning rule for an embodied application: learning of wall avoidance by a simulated robot. In other work, reinforcement learning with spiking networks has been obtained through global reward signals akin to simulating the dopamine system; we believe that this is the first project demonstrating sensory-motor learning with random spiking networks through Hebbian learning relying on environmental conditions without a separate reward system.
Kwong, C. K.; Fung, K. Y.; Jiang, Huimin; Chan, K. Y.
2013-01-01
Affective design is an important aspect of product development to achieve a competitive edge in the marketplace. A neural-fuzzy network approach has been attempted recently to model customer satisfaction for affective design, and it has proved effective in dealing with the fuzziness and non-linearity of the modeling as well as in generating explicit customer satisfaction models. However, such an approach to modeling customer satisfaction has two limitations. First, it is not suitable for modeling problems that involve a large number of inputs. Second, it cannot adapt to new data sets, given that its structure is fixed once it has been developed. In this paper, a modified dynamic evolving neural-fuzzy approach is proposed to address the above-mentioned limitations. A case study on the affective design of mobile phones was conducted to illustrate the effectiveness of the proposed methodology. Validation tests were conducted and the test results indicated that: (1) the conventional Adaptive Neuro-Fuzzy Inference System (ANFIS) failed to run due to a large number of inputs; (2) the proposed dynamic neural-fuzzy model outperforms the subtractive clustering-based ANFIS model and the fuzzy c-means clustering-based ANFIS model in terms of modeling accuracy and computational effort. PMID:24385884
A neural-network-based method of model reduction for the dynamic simulation of MEMS
NASA Astrophysics Data System (ADS)
Liang, Y. C.; Lin, W. Z.; Lee, H. P.; Lim, S. P.; Lee, K. H.; Feng, D. P.
2001-05-01
This paper proposes a neural-network-based method for model reduction that combines the generalized Hebbian algorithm (GHA) with the Galerkin procedure to perform the dynamic simulation and analysis of nonlinear microelectromechanical systems (MEMS). An unsupervised neural network is adopted to find the principal eigenvectors of a correlation matrix of snapshots. Extensive computational results have shown that principal component analysis using the GHA neural network can extract an empirical basis from numerical or experimental data, which can be used to convert the original system into a lumped low-order macromodel. The macromodel can be employed to carry out the dynamic simulation of the original system, resulting in a dramatic reduction of computation time without losing flexibility or accuracy. Compared with other existing model reduction methods for the dynamic simulation of MEMS, the present method does not need to compute the input correlation matrix in advance. It needs to find only the very few required basis functions, which can be learned directly from the input data, meaning that the method possesses potential advantages when the measured data sets are large. The method is evaluated by simulating the pull-in dynamics of a doubly-clamped microbeam subjected to different input voltage spectra of electrostatic actuation. The efficiency and the flexibility of the proposed method are examined by comparing the results with those of the fully meshed finite-difference method.
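The eigenvector-extraction step above can be sketched with the generalized Hebbian algorithm (Sanger's rule), which learns the leading principal directions of snapshot data without ever forming the full correlation matrix. The learning rate, epoch count, and data below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def gha(snapshots, n_components, lr=1e-3, epochs=100, seed=0):
    """Generalized Hebbian algorithm (Sanger's rule): learn the leading
    principal directions of zero-mean snapshot data, one sample at a time.
    W update: dW = lr * (y v^T - lower_triangular(y y^T) W)."""
    rng = np.random.default_rng(seed)
    x = snapshots - snapshots.mean(axis=0)           # (n_samples, dim)
    W = rng.normal(scale=0.1, size=(n_components, x.shape[1]))
    for _ in range(epochs):
        for v in x:
            y = W @ v                                # component outputs
            W += lr * (np.outer(y, v) - np.tril(np.outer(y, y)) @ W)
    return W                                         # rows -> eigenvectors
```

Each row of the returned matrix converges toward a unit-norm principal eigenvector of the snapshot correlation matrix, which is the empirical basis the Galerkin projection then uses.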
Self-organised transients in a neural mass model of epileptogenic tissue dynamics.
Goodfellow, Marc; Schindler, Kaspar; Baier, Gerold
2012-02-01
Stimulation of human epileptic tissue can induce rhythmic, self-terminating responses on the EEG or ECoG. These responses play a potentially important role in localising tissue involved in the generation of seizure activity, yet the underlying mechanisms are unknown. However, in vitro evidence suggests that self-terminating oscillations in nervous tissue are underpinned by non-trivial spatio-temporal dynamics in an excitable medium. In this study, we investigate this hypothesis in spatial extensions to a neural mass model for epileptiform dynamics. We demonstrate that spatial extensions to this model in one and two dimensions display propagating travelling waves but also more complex transient dynamics in response to local perturbations. The neural mass formulation with local excitatory and inhibitory circuits, allows the direct incorporation of spatially distributed, functional heterogeneities into the model. We show that such heterogeneities can lead to prolonged reverberating responses to a single pulse perturbation, depending upon the location at which the stimulus is delivered. This leads to the hypothesis that prolonged rhythmic responses to local stimulation in epileptogenic tissue result from repeated self-excitation of regions of tissue with diminished inhibitory capabilities. Combined with previous models of the dynamics of focal seizures this macroscopic framework is a first step towards an explicit spatial formulation of the concept of the epileptogenic zone. Ultimately, an improved understanding of the pathophysiologic mechanisms of the epileptogenic zone will help to improve diagnostic and therapeutic measures for treating epilepsy.
Kim, Junkyeong; Lee, Chaggil; Park, Seunghee
2017-01-01
Concrete is one of the most common materials used to construct a variety of civil infrastructures. However, since concrete might be susceptible to brittle fracture, it is essential to confirm the strength of concrete at the early-age stage of the curing process to prevent unexpected collapse. To address this issue, this study proposes a novel method to estimate the early-age strength of concrete, by integrating an artificial neural network algorithm with a dynamic response measurement of the concrete material. The dynamic response signals of the concrete, including both electromechanical impedances and guided ultrasonic waves, are obtained from an embedded piezoelectric sensor module. The cross-correlation coefficient of the electromechanical impedance signals and the amplitude of the guided ultrasonic wave signals are selected to quantify the variation in dynamic responses according to the strength of the concrete. Furthermore, an artificial neural network algorithm is used to verify a relationship between the variation in dynamic response signals and concrete strength. The results of an experimental study confirm that the proposed approach can be effectively applied to estimate the strength of concrete material from the early-age stage of the curing process. PMID:28590456
Kim, Junkyeong; Lee, Chaggil; Park, Seunghee
2017-06-07
Concrete is one of the most common materials used to construct a variety of civil infrastructures. However, since concrete might be susceptible to brittle fracture, it is essential to confirm the strength of concrete at the early-age stage of the curing process to prevent unexpected collapse. To address this issue, this study proposes a novel method to estimate the early-age strength of concrete, by integrating an artificial neural network algorithm with a dynamic response measurement of the concrete material. The dynamic response signals of the concrete, including both electromechanical impedances and guided ultrasonic waves, are obtained from an embedded piezoelectric sensor module. The cross-correlation coefficient of the electromechanical impedance signals and the amplitude of the guided ultrasonic wave signals are selected to quantify the variation in dynamic responses according to the strength of the concrete. Furthermore, an artificial neural network algorithm is used to verify a relationship between the variation in dynamic response signals and concrete strength. The results of an experimental study confirm that the proposed approach can be effectively applied to estimate the strength of concrete material from the early-age stage of the curing process.
Quantum analysis applied to thermo field dynamics on dissipative systems
Hashizume, Yoichiro; Okamura, Soichiro; Suzuki, Masuo
2015-03-10
Thermo field dynamics is one of the formulations useful for treating statistical mechanics in the scheme of field theory. In the present study, we discuss dissipative thermo field dynamics of quantum damped harmonic oscillators. To treat the effective renormalization of quantum dissipation, we use the Suzuki-Takano approximation. Finally, we derive a dissipative von Neumann equation in the Lindblad form. In the present treatment, we can easily obtain the initial damping shown previously by Kubo.
Gas dynamics in strong centrifugal fields
Bogovalov, S.V.; Kislov, V.A.; Tronin, I.V.
2015-03-10
Dynamics of waves generated by scoops in gas centrifuges (GC) for isotope separation is considered. The centrifugal acceleration in the GC reaches values of the order of 10^6 g. The centrifugal and Coriolis forces essentially modify the conventional sound waves. Three families of waves with different polarisation and dispersion exist under these conditions. Dynamics of the flow in the model GC Iguasu is investigated numerically. Comparison of the results of the numerical modelling of the wave dynamics with the analytical predictions is performed. A new phenomenon of resonances in the GC is found. The resonances occur for the waves polarized along the rotational axis, which have the smallest damping due to viscosity.
Modeling the Dynamics of Human Brain Activity with Recurrent Neural Networks
Güçlü, Umut; van Gerven, Marcel A. J.
2017-01-01
Encoding models are used for predicting brain activity in response to sensory stimuli with the objective of elucidating how sensory information is represented in the brain. Encoding models typically comprise a nonlinear transformation of stimuli to features (feature model) and a linear convolution of features to responses (response model). While there has been extensive work on developing better feature models, the work on developing better response models has been rather limited. Here, we investigate the extent to which recurrent neural network models can use their internal memories for nonlinear processing of arbitrary feature sequences to predict feature-evoked response sequences as measured by functional magnetic resonance imaging. We show that the proposed recurrent neural network models can significantly outperform established response models by accurately estimating long-term dependencies that drive hemodynamic responses. The results open a new window into modeling the dynamics of brain activity in response to sensory stimuli. PMID:28232797
Runge-Kutta neural network for identification of dynamical systems in high accuracy.
Wang, Y J; Lin, C T
1998-01-01
This paper proposes Runge-Kutta neural networks (RKNNs) for the identification of unknown dynamical systems described by ordinary differential equations (ODE systems) with high accuracy. These networks are constructed according to the Runge-Kutta approximation method. The main attraction of the RKNNs is that they precisely estimate the changing rates of system states (i.e., the right-hand side of the ODE ẋ = f(x)) directly in their subnetworks based on the space-domain interpolation within one sampling interval, such that they can do long-term prediction of system state trajectories. We show theoretically the superior generalization and long-term prediction capability of the RKNNs over normal neural networks. Two types of learning algorithms are investigated for the RKNNs: gradient-based and nonlinear recursive least-squares-based algorithms. Convergence analysis of the learning algorithms is carried out theoretically. Computer simulations demonstrate the proved properties of the RKNNs.
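The core construction — a derivative-estimating subnetwork composed through the fourth-order Runge-Kutta formula — can be sketched as follows. For brevity, the "subnetwork" here is a linear least-squares model fitted to harmonic-oscillator data; this is an illustrative stand-in under stated assumptions, not the RKNN architecture of the paper:

```python
import numpy as np

def rk4_step(f, x, h):
    """One fourth-order Runge-Kutta step: the RKNN output layer composes
    four evaluations of the derivative estimator f in exactly this way."""
    k1 = f(x)
    k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2)
    k4 = f(x + h * k3)
    return x + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Fit a linear derivative model (stand-in for the subnetwork) from
# sampled (state, derivative) pairs of a harmonic oscillator x'' = -x.
ts = np.linspace(0.0, 6.0, 200)
states = np.stack([np.cos(ts), -np.sin(ts)], axis=1)
derivs = np.stack([-np.sin(ts), -np.cos(ts)], axis=1)
A, *_ = np.linalg.lstsq(states, derivs, rcond=None)   # dx/dt ≈ x @ A
f_hat = lambda x: x @ A

# Long-term prediction: iterate the learned one-step map.
x = np.array([1.0, 0.0])
for _ in range(10):                                   # integrate to t = 1
    x = rk4_step(f_hat, x, 0.1)
```

Because the subnetwork estimates ẋ = f(x) directly, the same learned model can be stepped forward indefinitely, which is what gives the RKNN its long-term prediction ability.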
Recovery of Dynamics and Function in Spiking Neural Networks with Closed-Loop Control
Vlachos, Ioannis; Deniz, Taşkin; Aertsen, Ad; Kumar, Arvind
2016-01-01
There is a growing interest in developing novel brain stimulation methods to control disease-related aberrant neural activity and to address basic neuroscience questions. Conventional methods for manipulating brain activity rely on open-loop approaches that usually lead to excessive stimulation and, crucially, do not restore the original computations performed by the network. Thus, they are often accompanied by undesired side-effects. Here, we introduce delayed feedback control (DFC), a conceptually simple but effective method, to control pathological oscillations in spiking neural networks (SNNs). Using mathematical analysis and numerical simulations we show that DFC can restore a wide range of aberrant network dynamics either by suppressing or enhancing synchronous irregular activity. Importantly, DFC, besides steering the system back to a healthy state, also recovers the computations performed by the underlying network. Finally, using our theory we identify the role of single neuron and synapse properties in determining the stability of the closed-loop system. PMID:26829673
Recovery of Dynamics and Function in Spiking Neural Networks with Closed-Loop Control.
Vlachos, Ioannis; Deniz, Taşkin; Aertsen, Ad; Kumar, Arvind
2016-02-01
There is a growing interest in developing novel brain stimulation methods to control disease-related aberrant neural activity and to address basic neuroscience questions. Conventional methods for manipulating brain activity rely on open-loop approaches that usually lead to excessive stimulation and, crucially, do not restore the original computations performed by the network. Thus, they are often accompanied by undesired side-effects. Here, we introduce delayed feedback control (DFC), a conceptually simple but effective method, to control pathological oscillations in spiking neural networks (SNNs). Using mathematical analysis and numerical simulations we show that DFC can restore a wide range of aberrant network dynamics either by suppressing or enhancing synchronous irregular activity. Importantly, DFC, besides steering the system back to a healthy state, also recovers the computations performed by the underlying network. Finally, using our theory we identify the role of single neuron and synapse properties in determining the stability of the closed-loop system.
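The defining property of delayed feedback control in the Pyragas form u(t) = K·(x(t−τ) − x(t)) is that the stimulus vanishes on any τ-periodic trajectory, so the control is non-invasive once the target dynamics are restored. A minimal controller sketch (the buffer length, gain, and test signals are illustrative assumptions, not the paper's spiking-network setting):

```python
import numpy as np
from collections import deque

class DelayedFeedbackController:
    """Pyragas-type delayed feedback: u(t) = K * (x(t - tau) - x(t)).
    A fixed-length ring buffer supplies the tau-delayed sample."""
    def __init__(self, K, tau, dt, x0=0.0):
        n = int(round(tau / dt))
        self.K = K
        self.buf = deque([x0] * n, maxlen=n)
    def __call__(self, x):
        delayed = self.buf[0]        # oldest sample = x(t - tau)
        self.buf.append(x)           # push current sample, drop oldest
        return self.K * (delayed - x)
```

Feeding the controller a signal whose period equals τ yields (essentially) zero output, while any other periodicity produces a nonzero corrective stimulus; this is the sense in which DFC restores, rather than overrides, the underlying computation.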
NASA Astrophysics Data System (ADS)
Bruton, Christopher Patrick
Earthquakes and seismicity have long been used to monitor volcanoes. In addition to the time, location, and magnitude of an earthquake, the characteristics of the waveform itself are important. For example, low-frequency or hybrid type events could be generated by magma rising toward the surface. A rockfall event could indicate a growing lava dome. Classification of earthquake waveforms is thus a useful tool in volcano monitoring. A procedure to perform such classification automatically could flag certain event types immediately, instead of waiting for a human analyst's review. Inspired by speech recognition techniques, we have developed a procedure to classify earthquake waveforms using artificial neural networks. A neural network can be "trained" with an existing set of input and desired output data; in this case, we use a set of earthquake waveforms (input) that has been classified by a human analyst (desired output). After training the neural network, new sets of waveforms can be classified automatically as they are presented. Our procedure uses waveforms from multiple stations, making it robust to seismic network changes and outages. The use of a dynamic time-delay neural network allows waveforms to be presented without precise alignment in time, and thus could be applied to continuous data or to seismic events without clear start and end times. We have evaluated several different training algorithms and neural network structures to determine their effects on classification performance. We apply this procedure to earthquakes recorded at Mount Spurr and Katmai in Alaska, and Uturuncu Volcano in Bolivia. The procedure can successfully distinguish between slab and volcanic events at Uturuncu, between events from four different volcanoes in the Katmai region, and between volcano-tectonic and long-period events at Spurr. Average recall and overall accuracy were greater than 80% in all three cases.
Babona-Pilipos, Robart; Droujinine, Ilia A; Popovic, Milos R; Morshead, Cindi M
2011-01-01
The existence of neural stem and progenitor cells (together termed neural precursor cells) in the adult mammalian brain has sparked great interest in utilizing these cells for regenerative medicine strategies. Endogenous neural precursors within the adult forebrain subependyma can be activated following injury, resulting in their proliferation and migration toward lesion sites where they differentiate into neural cells. The administration of growth factors and immunomodulatory agents following injury augments this activation and has been shown to result in behavioural functional recovery following stroke. With the goal of enhancing neural precursor migration to facilitate the repair process we report that externally applied direct current electric fields induce rapid and directed cathodal migration of pure populations of undifferentiated adult subependyma-derived neural precursors. Using time-lapse imaging microscopy in vitro we performed an extensive single-cell kinematic analysis demonstrating that this galvanotactic phenomenon is a feature of undifferentiated precursors, and not differentiated phenotypes. Moreover, we have shown that the migratory response of the neural precursors is a direct effect of the electric field and not due to chemotactic gradients. We also identified that epidermal growth factor receptor (EGFR) signaling plays a role in the galvanotactic response as blocking EGFR significantly attenuates the migratory behaviour. These findings suggest direct current electric fields may be implemented in endogenous repair paradigms to promote migration and tissue repair following neurotrauma.
Artificial neural network analysis of noisy visual field data in glaucoma.
Henson, D B; Spenceley, S E; Bull, D R
1997-06-01
This paper reports on the application of an artificial neural network to the clinical analysis of ophthalmological data. In particular a 2-dimensional Kohonen self-organising feature map (SOM) is used to analyse visual field data from glaucoma patients. Importantly, the paper addresses the problem of how the SOM can be utilised to accommodate the noise within the data. This is a particularly important problem within longitudinal assessment, where detecting significant change is the crux of the problem in clinical diagnosis. Data from 737 glaucomatous visual field records (Humphrey Visual Field Analyzer, program 24-2) are used to train a SOM with 25 nodes organised on a square grid. The SOM clusters the data organising the output map such that fields with early and advanced loss are at extreme positions, with a continuum of change in place and extent of loss represented by the intervening nodes. For each SOM node 100 variants, generated by a computer simulation modelling the variability that might be expected in a glaucomatous eye, are also classified by the network to establish the extent of noise upon classification. Field change is then measured with respect to classification of a subsequent field, outside the area defined by the original field and its variants. The significant contribution of this paper is that the spatial analysis of the field data, which is provided by the SOM, has been augmented with noise analysis enhancing the visual representation of longitudinal data and enabling quantification of significant class change.
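The 25-node square-grid SOM described above can be sketched in a few lines as a generic Kohonen map with linearly decaying learning rate and neighbourhood width. The hyperparameters and two-cluster toy data are illustrative assumptions, not those of the visual-field study:

```python
import numpy as np

def train_som(data, grid=(5, 5), epochs=30, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal Kohonen self-organising map on a square grid: find the
    best-matching unit, then pull it and its grid neighbours toward the
    sample, with learning rate and neighbourhood shrinking over time."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    W = rng.random((rows * cols, data.shape[1]))
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = step / n_steps
            lr = lr0 * (1.0 - frac)
            sigma = sigma0 * (1.0 - frac) + 0.5
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))   # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-d2 / (2.0 * sigma ** 2))          # neighbourhood kernel
            W += lr * h[:, None] * (x - W)
            step += 1
    return W
```

After training, nearby grid nodes hold similar prototypes, which is what lets the study place fields with early and advanced loss at opposite ends of the map with a continuum in between.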
Yoo, Sung Jin; Park, Jin Bae; Choi, Yoon Ho
2006-12-01
A new method for the robust control of flexible-joint (FJ) robots with model uncertainties in both robot dynamics and actuator dynamics is proposed. The proposed control system is a combination of the adaptive dynamic surface control (DSC) technique and the self-recurrent wavelet neural network (SRWNN). The adaptive DSC technique provides the ability to overcome the "explosion of complexity" problem in backstepping controllers. The SRWNNs are used to observe the arbitrary model uncertainties of FJ robots, and all their weights are trained online. From the Lyapunov stability analysis, their adaptation laws are derived, and the uniform ultimate boundedness of all signals in the closed-loop adaptive system is proved. Finally, simulation results for a three-link FJ robot are utilized to validate the good position tracking performance and robustness against payload uncertainties and external disturbances of the proposed control system.
Optimal system size for complex dynamics in random neural networks near criticality
Wainrib, Gilles; García del Molino, Luis Carlos
2013-12-15
In this article, we consider a model of dynamical agents coupled through a random connectivity matrix, as introduced by Sompolinsky et al. [Phys. Rev. Lett. 61(3), 259–262 (1988)] in the context of random neural networks. When system size is infinite, it is known that increasing the disorder parameter induces a phase transition leading to chaotic dynamics. We observe and investigate here a novel phenomenon in the sub-critical regime for finite size systems: the probability of observing complex dynamics is maximal for an intermediate system size when the disorder is close enough to criticality. We give a more general explanation of this type of system size resonance in the framework of extreme values theory for eigenvalues of random matrices.
Neely, Kristina A.; Coombes, Stephen A.; Planetta, Peggy J.; Vaillancourt, David E.
2011-01-01
A central topic in sensorimotor neuroscience is the static-dynamic dichotomy that exists throughout the nervous system. Previous work examining motor unit synchronization reports that the activation strategy and timing of motor units differ for static and dynamic tasks. However, it remains unclear whether segregated or overlapping blood-oxygen-level-dependent (BOLD) activity exists in the brain for static and dynamic motor control. This study compared the neural circuits associated with the production of static force to those associated with the production of dynamic force pulses. To that end, healthy young adults (n = 17) completed static and dynamic precision grip force tasks during functional magnetic resonance imaging (fMRI). Both tasks activated core regions within the visuomotor network, including primary and sensory motor cortices, premotor cortices, multiple visual areas, putamen, and cerebellum. Static force was associated with unique activity in a right-lateralized cortical network including inferior parietal lobe, ventral premotor cortex, and dorsolateral prefrontal cortex. In contrast, dynamic force was associated with unique activity in left-lateralized and midline cortical regions, including supplementary motor area, superior parietal lobe, fusiform gyrus, and visual area V3. These findings provide the first neuroimaging evidence supporting a lateralized pattern of brain activity for the production of static and dynamic precision grip force. PMID:22109998
Neely, Kristina A; Coombes, Stephen A; Planetta, Peggy J; Vaillancourt, David E
2013-03-01
A central topic in sensorimotor neuroscience is the static-dynamic dichotomy that exists throughout the nervous system. Previous work examining motor unit synchronization reports that the activation strategy and timing of motor units differ for static and dynamic tasks. However, it remains unclear whether segregated or overlapping blood-oxygen-level-dependent (BOLD) activity exists in the brain for static and dynamic motor control. This study compared the neural circuits associated with the production of static force to those associated with the production of dynamic force pulses. To that end, healthy young adults (n = 17) completed static and dynamic precision grip force tasks during functional magnetic resonance imaging (fMRI). Both tasks activated core regions within the visuomotor network, including primary and sensory motor cortices, premotor cortices, multiple visual areas, putamen, and cerebellum. Static force was associated with unique activity in a right-lateralized cortical network including inferior parietal lobe, ventral premotor cortex, and dorsolateral prefrontal cortex. In contrast, dynamic force was associated with unique activity in left-lateralized and midline cortical regions, including supplementary motor area, superior parietal lobe, fusiform gyrus, and visual area V3. These findings provide the first neuroimaging evidence supporting a lateralized pattern of brain activity for the production of static and dynamic precision grip force.
Porosity Estimation By Artificial Neural Networks Inversion. Application to Algerian South Field
NASA Astrophysics Data System (ADS)
Eladj, Said; Aliouane, Leila; Ouadfeul, Sid-Ali
2017-04-01
One of geophysicists' main current challenges is the discovery and study of stratigraphic traps; this is a difficult task that requires a very fine analysis of the seismic data. Seismic data inversion allows lithological and stratigraphic information to be obtained for reservoir characterization. However, when solving the inverse problem we encounter difficulties such as non-existence and non-uniqueness of the solution, as well as instability of the processing algorithm. Therefore, uncertainties in the data and the non-linearity of the relationship between the data and the parameters must be taken seriously. In this case, artificial intelligence techniques such as Artificial Neural Networks (ANN) are used to resolve this ambiguity by integrating data on different physical properties, which requires supervised learning methods. In this work, we invert the 3D seismic cube for acoustic impedance using the colored inversion method; the acoustic impedance volume resulting from this first step is then introduced as an input to a model-based inversion method, allowing the porosity volume to be calculated using a Multilayer Perceptron Artificial Neural Network. Application to an Algerian South hydrocarbon field clearly demonstrates the power of the proposed processing technique to predict porosity from seismic data; the obtained results can be used for reserve estimation, permeability prediction, recovery-factor calculation and reservoir monitoring. Keywords: Artificial Neural Networks, inversion, non-uniqueness, nonlinear, 3D porosity volume, reservoir characterization.
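The impedance-to-porosity mapping step uses a multilayer perceptron. A minimal stand-in is a one-hidden-layer tanh network trained by full-batch gradient descent on a synthetic impedance-porosity relation; the data, architecture, and hyperparameters here are assumptions for illustration only, not the field workflow:

```python
import numpy as np

def fit_mlp(x, y, hidden=16, lr=0.05, epochs=3000, seed=0):
    """One-hidden-layer tanh MLP regression trained by full-batch
    gradient descent on a mean-squared-error loss (manual backprop)."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(1, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=(hidden, 1)); b2 = np.zeros(1)
    X, Y = x.reshape(-1, 1), y.reshape(-1, 1)
    n = len(X)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)            # hidden activations
        P = H @ W2 + b2                     # predictions
        G = 2.0 * (P - Y) / n               # dLoss/dP for MSE
        gW2, gb2 = H.T @ G, G.sum(axis=0)
        GH = (G @ W2.T) * (1.0 - H ** 2)    # backprop through tanh
        gW1, gb1 = X.T @ GH, GH.sum(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1
    return lambda q: (np.tanh(q.reshape(-1, 1) @ W1 + b1) @ W2 + b2).ravel()

x = np.linspace(-1.0, 1.0, 100)        # normalised acoustic impedance (synthetic)
y = 0.3 - 0.2 * np.tanh(2.0 * x)       # synthetic porosity trend (assumed)
predict = fit_mlp(x, y)
mse = np.mean((predict(x) - y) ** 2)
```

In the real workflow the inputs would be the colored-inversion impedance volume and well-log porosity labels; the sketch only shows the supervised regression machinery.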
NASA Astrophysics Data System (ADS)
Yu, Yiqun; Koller, Josef; Jordanova, Vania K.; Zaharia, Sorin G.; Friedel, Reinhard W.; Morley, Steven K.; Chen, Yue; Baker, Daniel; Reeves, Geoffrey D.; Spence, Harlan E.
2014-03-01
We expanded our previous work on L* neural networks that used empirical magnetic field models as the underlying models by applying and extending our technique to drift shells calculated from a physics-based magnetic field model. While empirical magnetic field models represent an average, statistical magnetospheric state, the RAM-SCB model, a first-principles magnetically self-consistent code, computes magnetic fields based on fundamental equations of plasma physics. Unlike the previous L* neural networks that include McIlwain L and the mirror point magnetic field as part of the inputs, the new L* neural network requires only solar wind conditions and the Dst index, allowing for an easier preparation of input parameters. This new neural network is compared against the previously trained networks and validated by the tracing method in the International Radiation Belt Environment Modeling (IRBEM) library. The accuracy of all L* neural networks with different underlying magnetic field models is evaluated by applying the electron phase space density (PSD)-matching technique derived from Liouville's theorem to the Van Allen Probes observations. Results indicate that the uncertainty in the predicted L* is statistically (75%) below 0.7, with a median value mostly below 0.2 and a median absolute deviation around 0.15, regardless of the underlying magnetic field model. We found that such an uncertainty in the calculated L* value can shift the peak location of the electron PSD profile by 0.2 R_E radially, but with its shape nearly preserved.
Noto, M; Nishikawa, J; Tateno, T
2016-03-24
A sound interrupted by silence is perceived as discontinuous. However, when high-intensity noise is inserted during the silence, the missing sound may be perceptually restored and be heard as uninterrupted. This illusory phenomenon is called auditory induction. Recent electrophysiological studies have revealed that auditory induction is associated with the primary auditory cortex (A1). Although experimental evidence has been accumulating, the neural mechanisms underlying auditory induction in A1 neurons are poorly understood. To elucidate this, we used both experimental and computational approaches. First, using an optical imaging method, we characterized population responses across auditory cortical fields to sound and identified five subfields in rats. Next, we examined neural population activity related to auditory induction with high temporal and spatial resolution in the rat auditory cortex (AC), including the A1 and several other AC subfields. Our imaging results showed that tone-burst stimuli interrupted by a silent gap elicited early phasic responses to the first tone and similar or smaller responses to the second tone following the gap. In contrast, tone stimuli interrupted by broadband noise (BN), considered to cause auditory induction, considerably suppressed or eliminated responses to the tone following the noise. Additionally, tone-burst stimuli that were interrupted by notched noise centered at the tone frequency, which is considered to decrease the strength of auditory induction, partially restored the second responses from the suppression caused by BN. To phenomenologically mimic the neural population activity in the A1 and thus investigate the mechanisms underlying auditory induction, we constructed a computational model from the periphery through the AC, including a nonlinear dynamical system. The computational model successfully reproduced some of the above-mentioned experimental results. Therefore, our results suggest that a nonlinear, self
Chen, Yonghong; Bressler, Steven L; Ding, Mingzhou
2006-01-30
It is often useful in multivariate time series analysis to determine statistical causal relations between different time series. Granger causality is a fundamental measure for this purpose. Yet the traditional pairwise approach to Granger causality analysis may not clearly distinguish between direct causal influences from one time series to another and indirect ones acting through a third time series. In order to differentiate direct from indirect Granger causality, a conditional Granger causality measure in the frequency domain is derived based on a partition matrix technique. The method is validated with simulations and an application to neural field potential time series.
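The pairwise-versus-conditional distinction can be illustrated with a small time-domain sketch. The paper derives the measure in the frequency domain via a partition matrix technique; the simpler variance-ratio form below is only meant to show why conditioning on a third series removes indirect influence. The chain x → z → y, all coefficients, and sample sizes are invented for illustration.

```python
import numpy as np

def lagged(series_list, p, n):
    # design matrix of p lags for each series
    cols = []
    for s in series_list:
        for k in range(1, p + 1):
            cols.append(s[p - k : n - k])
    return np.column_stack(cols)

def granger(y, cond, x, p):
    # time-domain Granger causality x -> y, conditioned on the series in `cond`:
    # log ratio of residual variances of restricted vs. full autoregressive models
    n = len(y)
    target = y[p:n]
    Xr = lagged([y] + cond, p, n)        # restricted: without x's past
    Xf = lagged([y] + cond + [x], p, n)  # full: adds x's past
    rr = target - Xr @ np.linalg.lstsq(Xr, target, rcond=None)[0]
    rf = target - Xf @ np.linalg.lstsq(Xf, target, rcond=None)[0]
    return np.log(np.var(rr) / np.var(rf))

rng = np.random.default_rng(0)
n = 4000
x, z, y = np.zeros(n), np.zeros(n), np.zeros(n)
ex, ez, ey = rng.standard_normal((3, n))
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + ex[t]
    z[t] = 0.6 * z[t - 1] + 0.8 * x[t - 1] + ez[t]   # x drives z
    y[t] = 0.5 * y[t - 1] + 0.8 * z[t - 1] + ey[t]   # z drives y; x -> y only via z

pairwise = granger(y, [], x, p=3)       # x -> y ignoring z: appears causal
conditional = granger(y, [z], x, p=3)   # x -> y given z: near zero
```

Because x influences y only through z, the pairwise measure is large while the conditional measure collapses toward zero, which is the distinction the paper's frequency-domain derivation makes rigorous.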
NASA Astrophysics Data System (ADS)
Hasegawa, Mikio; Tran, Ha Nguyen; Miyamoto, Goh; Murata, Yoshitoshi; Harada, Hiroshi; Kato, Shuzo
We propose a neurodynamical approach to a large-scale optimization problem in Cognitive Wireless Clouds, in which a huge number of mobile terminals with multiple different air interfaces autonomously utilize the most appropriate infrastructure wireless networks by sensing the available networks, selecting the most suitable one, and reconfiguring themselves with seamless handover to the target network. Game theory has been applied to such cognitive radio networks to analyze the stability of the dynamical systems formed by the terminals' distributed behaviors, but it is not a tool for globally optimizing the state of the network. As a natural optimization dynamical system suited to large-scale complex systems, we introduce neural network dynamics, which converge to an optimal state because their defining property is to continually decrease an energy function. In this paper, we apply these neurodynamics to the optimization problem of radio access technology selection. We compose a neural network that solves the problem, and we show that total average throughput can be improved simply through distributed and autonomous neuron updates on the terminal side.
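The energy-decreasing property the abstract relies on is the classical Hopfield result: with symmetric weights, zero self-connections, and asynchronous threshold updates, each neuron update can only lower (or preserve) the network energy. A minimal sketch with made-up random weights, not the paper's radio-access-selection network:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 20
W = rng.standard_normal((N, N))
W = (W + W.T) / 2            # symmetric weights
np.fill_diagonal(W, 0.0)     # no self-connections
b = rng.standard_normal(N)

def energy(s):
    # Hopfield energy function: E = -1/2 s'Ws - b's
    return -0.5 * s @ W @ s - b @ s

s = rng.choice([-1.0, 1.0], size=N)   # random initial state
energies = [energy(s)]
for _ in range(5):
    for i in rng.permutation(N):      # asynchronous, one neuron at a time
        s[i] = 1.0 if W[i] @ s + b[i] >= 0 else -1.0
        energies.append(energy(s))
# energy is non-increasing, so the dynamics settle into a (local) minimum
```

Distributed terminal-side updates work for the same reason: each unit only needs its local field, yet every update monotonically decreases the global energy, which the paper maps onto network-wide throughput.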
Poza, Jesús; Gómez, Carlos; García, María; Corralejo, Rebeca; Fernández, Alberto; Hornero, Roberto
2014-04-01
Current diagnostic guidelines encourage further research for the development of novel Alzheimer's disease (AD) biomarkers, especially in its prodromal form (i.e. mild cognitive impairment, MCI). Magnetoencephalography (MEG) can provide essential information about AD brain dynamics; however, only a few studies have addressed the characterization of MEG in incipient AD. We analyzed MEG rhythms from 36 AD patients, 18 MCI subjects and 27 controls, introducing a new wavelet-based parameter to quantify their dynamical properties: the wavelet turbulence. Our results suggest that AD progression elicits statistically significant regional-dependent patterns of abnormalities in the neural activity (p < 0.05), including a progressive loss of irregularity, variability, symmetry and Gaussianity. Furthermore, the highest accuracies to discriminate AD and MCI subjects from controls were 79.4% and 68.9%, whereas, in the three-class setting, the accuracy reached 67.9%. Our findings provide an original description of several dynamical properties of neural activity in early AD and offer preliminary evidence that the proposed methodology is a promising tool for assessing brain changes at different stages of dementia.
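The abstract does not specify the wavelet turbulence parameter itself; as a generic illustration of wavelet-based quantification of variability in an oscillatory signal, the sketch below hand-rolls a complex Morlet scalogram and a per-scale coefficient of variation of band power. The signal, sampling rate, and frequencies are all invented, and this statistic is not the authors' measure.

```python
import numpy as np

def morlet_cwt(x, fs, freqs, w0=6.0):
    # continuous wavelet transform with unit-energy complex Morlet wavelets
    out = np.empty((len(freqs), len(x)), dtype=complex)
    t = np.arange(-1.0, 1.0, 1.0 / fs)
    for i, f in enumerate(freqs):
        sigma = w0 / (2 * np.pi * f)               # envelope width scales with 1/f
        w = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma**2))
        w /= np.sqrt(np.sum(np.abs(w) ** 2))       # unit energy
        out[i] = np.convolve(x, w, mode="same")
    return out

fs = 250.0
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(2).standard_normal(len(t))
freqs = np.array([4.0, 10.0, 20.0])
power = np.abs(morlet_cwt(x, fs, freqs)) ** 2
# temporal variability of band power at each scale (coefficient of variation)
cv = power.std(axis=1) / power.mean(axis=1)
```

A steady 10 Hz oscillation buried in noise yields high, temporally stable power at its own scale; loss of variability or irregularity of the kind the study reports would show up as changes in scale-wise statistics like `cv`.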
Neural control of cardiovascular responses and of ventilation during dynamic exercise in man.
Strange, S; Secher, N H; Pawelczyk, J A; Karpakka, J; Christensen, N J; Mitchell, J H; Saltin, B
1993-01-01
1. Nine subjects performed dynamic knee extension by voluntary muscle contractions and by evoked contractions with and without epidural anaesthesia. Four exercise bouts of 10 min each were performed: three of one-legged knee extension (10, 20 and 30 W) and one of two-legged knee extension at 2 x 20 W. Epidural anaesthesia was induced with 0.5% bupivacaine or 2% lidocaine. Presence of neural blockade was verified by cutaneous sensory anaesthesia below T8-T10 and complete paralysis of both legs. 2. Compared to voluntary exercise, control electrically induced exercise resulted in normal or enhanced cardiovascular, metabolic and ventilatory responses. However, during epidural anaesthesia the increase in blood pressure with exercise was abolished. Furthermore, the increases in heart rate, cardiac output and leg blood flow were reduced. In contrast, plasma catecholamines, leg glucose uptake and leg lactate release, arterial carbon dioxide tension and pulmonary ventilation were not affected. Arterial and venous plasma potassium concentrations became elevated but leg potassium release was not increased. 3. The results conform to the idea that a reflex originating in contracting muscle is essential for the normal blood pressure response to dynamic exercise, and that other neural, humoral and haemodynamic mechanisms cannot govern this response. However, control mechanisms other than central command and the exercise pressor reflex can influence heart rate, cardiac output, muscle blood flow and ventilation during dynamic exercise in man. PMID:8308750
Diano, Matteo; Tamietto, Marco; Celeghin, Alessia; Weiskrantz, Lawrence; Tatu, Mona-Karina; Bagnis, Arianna; Duca, Sergio; Geminiani, Giuliano; Cauda, Franco; Costa, Tommaso
2017-01-01
The quest to characterize the neural signature distinctive of different basic emotions has recently come under renewed scrutiny. Here we investigated whether facial expressions of different basic emotions modulate the functional connectivity of the amygdala with the rest of the brain. To this end, we presented seventeen healthy participants (8 females) with facial expressions of anger, disgust, fear, happiness, sadness and emotional neutrality and analyzed amygdala’s psychophysiological interaction (PPI). In fact, PPI can reveal how inter-regional amygdala communications change dynamically depending on perception of various emotional expressions to recruit different brain networks, compared to the functional interactions it entertains during perception of neutral expressions. We found that for each emotion the amygdala recruited a distinctive and spatially distributed set of structures to interact with. These changes in amygdala connectional patterns characterize the dynamic signature prototypical of individual emotion processing, and seemingly represent a neural mechanism that serves to implement the distinctive influence that each emotion exerts on perceptual, cognitive, and motor responses. Besides these differences, all emotions enhanced amygdala functional integration with premotor cortices compared to neutral faces. The present findings thus concur to reconceptualise the structure-function relation between brain and emotion from the traditional one-to-one mapping toward a network-based and dynamic perspective. PMID:28345642
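At its core a PPI analysis is a regression with an interaction term between the seed region's time course and the task condition; the interaction coefficient captures how seed-target coupling changes with condition. A toy sketch with synthetic data (coupling strengths, noise level, and block design are invented, not estimates from this study):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
seed = rng.standard_normal(n)            # amygdala seed time course (synthetic)
task = np.repeat([0.0, 1.0], n // 2)     # neutral vs. emotion blocks
# synthetic target region: seed coupling doubles during emotion blocks
target = 0.5 * seed + 0.5 * seed * task + 0.1 * rng.standard_normal(n)

# GLM: intercept, physiological (seed), psychological (task), PPI (seed x task)
X = np.column_stack([np.ones(n), seed, task, seed * task])
beta, *_ = np.linalg.lstsq(X, target, rcond=None)
# beta[3] is the PPI term: the condition-dependent change in seed-target coupling
```

Fitting this model voxel-wise, separately for each emotion-versus-neutral contrast, is what lets the study map a distinct set of regions whose amygdala coupling shifts with each emotion.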
Force fields for classical molecular dynamics.
Monticelli, Luca; Tieleman, D Peter
2013-01-01
In this chapter we review the basic features and the principles underlying molecular mechanics force fields commonly used in molecular modeling of biological macromolecules. We start by summarizing the historical background and then describe classical pairwise additive potential energy functions. We introduce the problem of the calculation of nonbonded interactions, of particular i
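A pairwise additive nonbonded potential of the kind the chapter describes can be sketched as Lennard-Jones plus Coulomb terms summed over atom pairs. This is a bare illustration in reduced units (k_e = 1, no bonded terms, no cutoffs or periodic boundaries); coordinates, charges, and parameters are arbitrary.

```python
import numpy as np

def pairwise_energy(pos, q, sigma, eps):
    # pairwise-additive nonbonded energy: Lennard-Jones + Coulomb, summed over pairs
    E = 0.0
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(pos[i] - pos[j])
            sr6 = (sigma / r) ** 6
            E += 4 * eps * (sr6**2 - sr6)   # Lennard-Jones 12-6 term
            E += q[i] * q[j] / r            # Coulomb term (k_e = 1)
    return E

# three neutral atoms near the Lennard-Jones well: net energy is attractive
pos = np.array([[0.0, 0.0, 0.0], [1.12, 0.0, 0.0], [0.0, 1.5, 0.0]])
q = np.zeros(3)
E = pairwise_energy(pos, q, sigma=1.0, eps=1.0)
```

The 12-6 form has its minimum of exactly -eps at r = 2^(1/6) * sigma, which is a quick sanity check on any implementation; production force fields add bonded (bond, angle, dihedral) terms and long-range electrostatics on top of this pairwise core.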