Spatiotemporal dynamics of continuum neural fields
NASA Astrophysics Data System (ADS)
Bressloff, Paul C.
2012-01-01
We survey recent analytical approaches to studying the spatiotemporal dynamics of continuum neural fields. Neural fields model the large-scale dynamics of spatially structured biological neural networks in terms of nonlinear integrodifferential equations whose associated integral kernels represent the spatial distribution of neuronal synaptic connections. They provide an important example of spatially extended excitable systems with nonlocal interactions and exhibit a wide range of spatially coherent dynamics, including traveling waves, oscillations, and Turing-like patterns.
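A representative scalar neural field of the kind surveyed here (the Amari form; the survey itself covers many variants and extensions, such as adaptation and axonal delays) is

\[
\tau \frac{\partial u(x,t)}{\partial t} \;=\; -u(x,t) \;+\; \int_{\Omega} w(x - x')\, F\big(u(x',t)\big)\, dx' \;+\; I(x,t),
\]

where u(x,t) is the local population activity, w the synaptic connectivity kernel, F a sigmoidal firing-rate function, and I an external input.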
Metastable dynamics in heterogeneous neural fields.
Schwappach, Cordula; Hutt, Axel; Beim Graben, Peter
2015-01-01
We present numerical simulations of metastable states in heterogeneous neural fields that are connected along heteroclinic orbits. Such trajectories are possible representations of transient neural activity as observed, for example, in the electroencephalogram. Based on previous theoretical findings on learning algorithms for neural fields, we directly construct synaptic weight kernels from Lotka-Volterra neural population dynamics without supervised training approaches. We deliver a MATLAB neural field toolbox validated by two examples of one- and two-dimensional neural fields. We demonstrate trial-to-trial variability and distributed representations in our simulations which might therefore be regarded as a proof-of-concept for more advanced neural field models of metastable dynamics in neurophysiological data. PMID:26175671
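As a concrete handle on the Lotka-Volterra population dynamics underlying such heteroclinic (winnerless-competition) sequences, a minimal sketch with illustrative parameters follows; it is not the authors' MATLAB toolbox or their kernel-construction procedure.

import numpy as np

# Generalized Lotka-Volterra rates: da_i/dt = a_i * (sigma_i - sum_j rho_ij * a_j).
# The asymmetric competition matrix rho is chosen so that the single-population
# saddles e_1 -> e_2 -> e_3 -> e_1 are connected by a heteroclinic cycle.
sigma = np.ones(3)
rho = np.array([[1.0, 1.5, 0.5],
                [0.5, 1.0, 1.5],
                [1.5, 0.5, 1.0]])

dt, steps = 0.01, 60000
a = np.array([1.0, 0.02, 0.01])            # start near the first saddle
trace = np.empty((steps, 3))
rng = np.random.default_rng(0)
for k in range(steps):
    da = a * (sigma - rho @ a)
    a = np.clip(a + dt * da + 1e-6 * rng.standard_normal(3), 1e-9, None)
    trace[k] = a

# trace shows metastable epochs: each population dominates in turn before the
# activity switches to the next one along the heteroclinic orbit.
print(trace[::10000].round(3))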
Neural Field Dynamics with Heterogeneous Connection Topology
NASA Astrophysics Data System (ADS)
Qubbaj, Murad R.; Jirsa, Viktor K.
2007-06-01
Neural fields receive inputs from local and nonlocal sources. Notably, in a biologically realistic architecture, the latter vary under spatial translations (heterogeneous) whereas the former do not (homogeneous). To understand the mutual effects of homogeneous and heterogeneous connectivity, we study the stability of the steady state activity of a neural field as a function of its connectivity and transmission speed. We show that myelination, a developmentally relevant change of the heterogeneous connectivity, always results in the stabilization of the steady state via oscillatory instabilities, independent of the local connectivity. Nonoscillatory instabilities are shown to be independent of any influences of time delay.
Fluctuation-response relation unifies dynamical behaviors in neural fields
NASA Astrophysics Data System (ADS)
Fung, C. C. Alan; Wong, K. Y. Michael; Mao, Hongzi; Wu, Si
2015-08-01
Anticipation is a strategy used by neural fields to compensate for transmission and processing delays during the tracking of dynamical information and can be achieved by slow, localized, inhibitory feedback mechanisms such as short-term synaptic depression, spike-frequency adaptation, or inhibitory feedback from other layers. Based on the translational symmetry of the mobile network states, we derive generic fluctuation-response relations, providing unified predictions that link their tracking behaviors in the presence of external stimuli to the intrinsic dynamics of the neural fields in their absence.
Conditions of activity bubble uniqueness in dynamic neural fields.
Mikhailova, Inna; Goerick, Christian
2005-02-01
Dynamic neural fields (DNFs) offer a rich spectrum of dynamic properties like hysteresis, spatiotemporal information integration, and coexistence of multiple attractors. These properties make DNFs more and more popular in implementations of sensorimotor loops for autonomous systems. Applications often imply that DNFs should have only one compact region of firing neurons (activity bubble), whereas the rest of the field should not fire (e.g., if the field represents motor commands). In this article we prove the conditions of activity bubble uniqueness in the case of locally symmetric input bubbles. The qualitative condition on inhomogeneous inputs used in earlier work on DNFs is transfered to a quantitative condition of a balance between the internal dynamics and the input. The mathematical analysis is carried out for the two-dimensional case with methods that can be extended to more than two dimensions. The article concludes with an example of how our theoretical results facilitate the practical use of DNFs. PMID:15685393
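The balance between internal field dynamics and input that governs bubble uniqueness can be probed numerically. A minimal one-dimensional Amari-type field with a Mexican-hat kernel (illustrative parameters only; not the quantitative two-dimensional conditions derived in the article) relaxes to a single activity bubble under a localized input:

import numpy as np

# 1D dynamic neural field: tau * du/dt = -u + w * f(u) + input + h  (periodic domain).
n, tau, h = 200, 10.0, -2.0
x = np.linspace(-np.pi, np.pi, n, endpoint=False)
dx = x[1] - x[0]

def gauss(d, s):
    return np.exp(-0.5 * (d / s) ** 2)

d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, 2 * np.pi - d)                     # periodic distance
w = 6.0 * gauss(d, 0.3) - 3.0 * gauss(d, 0.8)        # Mexican hat: local excitation, broader inhibition
inp = 4.0 * gauss(np.abs(x - 0.5), 0.3)              # single localized input bubble

f = lambda u: 1.0 / (1.0 + np.exp(-u))               # sigmoidal firing rate
u = np.zeros(n)
for _ in range(2000):
    u += (1.0 / tau) * (-u + (w @ f(u)) * dx + inp + h)

print("active region (f(u) > 0.5):", x[f(u) > 0.5])  # one compact bubble around x = 0.5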
Neural Population Dynamics Modeled by Mean-Field Graphs
NASA Astrophysics Data System (ADS)
Kozma, Robert; Puljic, Marko
2011-09-01
In this work we apply a random graph theory approach to describe neural population dynamics. There are important advantages to using a random graph approach in addition to ordinary and partial differential equations. The mathematical theory of large-scale random graphs provides an efficient tool to describe transitions between high- and low-dimensional spaces. Recent advances in studying neural correlates of higher cognition indicate the significance of sudden changes in space-time neurodynamics, which can be efficiently described as phase transitions in the neuropil medium. Phase transitions are rigorously defined mathematically on random graph sequences, and they can be naturally generalized to a class of percolation processes called neuropercolation. In this work we employ mean-field graphs with given vertex degree distribution and edge strength distribution. We demonstrate the emergence of collective oscillations in the style of brains.
The dynamic neural field approach to cognitive robotics.
Erlhagen, Wolfram; Bicho, Estela
2006-09-01
This tutorial presents an architecture for autonomous robots to generate behavior in joint action tasks. To efficiently interact with another agent in solving a mutual task, a robot should be endowed with cognitive skills such as memory, decision making, action understanding and prediction. The proposed architecture is strongly inspired by our current understanding of the processing principles and the neuronal circuitry underlying these functionalities in the primate brain. As a mathematical framework, we use a coupled system of dynamic neural fields, each representing the basic functionality of neuronal populations in different brain areas. It implements goal-directed behavior in joint action as a continuous process that builds on the interpretation of observed movements in terms of the partner's action goal. We validate the architecture in two experimental paradigms: (1) a joint search task; (2) a reproduction of an observed or inferred end state of a grasping-placing sequence. We also review some of the mathematical results about dynamic neural fields that are important for the implementation work. PMID:16921201
Dynamic neural fields as a step toward cognitive neuromorphic architectures.
Sandamirskaya, Yulia
2013-01-01
Dynamic Field Theory (DFT) is an established framework for modeling embodied cognition. In DFT, elementary cognitive functions such as memory formation, formation of grounded representations, attentional processes, decision making, adaptation, and learning emerge from neuronal dynamics. The basic computational element of this framework is a Dynamic Neural Field (DNF). Under constraints on the time-scale of the dynamics, the DNF is computationally equivalent to a soft winner-take-all (WTA) network, which is considered one of the basic computational units in neuronal processing. Recently, it has been shown how a WTA network may be implemented in neuromorphic hardware, such as an analog Very Large Scale Integration (VLSI) device. This paper leverages the relationship between DFT and soft WTA networks to systematically revise and integrate established DFT mechanisms that have previously been spread among different architectures. In addition, I identify some novel computational and architectural mechanisms of DFT which may be implemented in neuromorphic VLSI devices using WTA networks as an intermediate computational layer. These specific mechanisms include the stabilization of working memory, the coupling of sensory systems to motor dynamics, intentionality, and autonomous learning. I further demonstrate how all these elements may be integrated into a unified architecture to generate behavior and autonomous learning. PMID:24478620
Dynamic patterns in a two-dimensional neural field with refractoriness
NASA Astrophysics Data System (ADS)
Qi, Yang; Gong, Pulin
2015-08-01
The formation of dynamic patterns such as localized propagating waves is a fascinating self-organizing phenomenon that happens in a wide range of spatially extended systems including neural systems, in which they might play important functional roles. Here we derive a type of two-dimensional neural-field model with refractoriness to study the formation mechanism of localized waves. After comparing this model with existing neural-field models, we show that it is able to generate a variety of localized patterns, including stationary bumps, localized waves rotating along a circular path, and localized waves with longer-range propagation. We construct explicit bump solutions for the two-dimensional neural field and conduct a linear stability analysis on how a stationary bump transitions to a propagating wave under different spatial eigenmode perturbations. The neural-field model is then partially solved in a comoving frame to obtain localized wave solutions, whose spatial profiles are in good agreement with those obtained from simulations. We demonstrate that when there are multiple such propagating waves, they exhibit rich propagation dynamics, including propagation along periodically oscillating and irregular trajectories; these propagation dynamics are quantitatively characterized. In addition, we show that these waves can have repulsive or merging collisions, depending on their collision angles and the refractoriness parameter. Due to its analytical tractability, the two-dimensional neural-field model provides a modeling framework for studying localized propagating waves and their interactions.
Neural field simulator: two-dimensional spatio-temporal dynamics involving finite transmission speed
Nichols, Eric J.; Hutt, Axel
2015-01-01
Neural Field models (NFM) play an important role in the understanding of neural population dynamics on a mesoscopic spatial and temporal scale. Their numerical simulation is an essential element in the analysis of their spatio-temporal dynamics. The simulation tool described in this work considers scalar spatially homogeneous neural fields taking into account a finite axonal transmission speed and synaptic temporal derivatives of first and second order. A text-based interface offers complete control of field parameters and several approaches are used to accelerate simulations. A graphical output utilizes video hardware acceleration to display running output with reduced computational hindrance compared to simulators that are exclusively software-based. Diverse applications of the tool demonstrate breather oscillations, static and dynamic Turing patterns and activity spreading with finite propagation speed. The simulator is open source to allow tailoring of code and this is presented with an extension use case. PMID:26539105
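The model class handled by such simulators can be written, in one common form (the order of the synaptic temporal operator and the boundary conditions vary), as

\[
\hat{L}\!\left(\frac{\partial}{\partial t}\right) V(x,t) \;=\; \int_{\Omega} K\big(|x - y|\big)\, S\!\Big(V\big(y,\; t - |x - y|/c\big)\Big)\, dy \;+\; I(x,t),
\]

where \(\hat{L}\) is a first- or second-order synaptic temporal operator, K the homogeneous spatial kernel, S the firing-rate function, c the finite axonal transmission speed, and I an external stimulus.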
Modeling human target reaching with an adaptive observer implemented with dynamic neural fields.
Fard, Farzaneh S; Hollensen, Paul; Heinke, Dietmar; Trappenberg, Thomas P
2015-12-01
Humans can point fairly accurately to memorized states when closing their eyes, despite slow or even missing sensory feedback. It is also common that the arm dynamics changes during development or from injuries. We propose a biologically motivated implementation of an arm controller that includes an adaptive observer. Our implementation is based on the neural field framework, and we show how a path integration mechanism can be trained from few examples. Our results illustrate successful generalization of path integration with a dynamic neural field, by which the robotic arm can move in arbitrary directions and at arbitrary velocities. Also, by adapting the strength of the motor effect, the observer implicitly learns to compensate for an image acquisition delay in the sensory system. Our dynamic implementation of an observer successfully guides the arm toward the target in the dark, and the model produces movements with a bell-shaped velocity profile, consistent with human behavioral data. PMID:26559472
Coupling actin dynamics to phase-field in modeling neural growth.
Najem, Sara; Grant, Martin
2015-06-14
In this paper we model the growth of a neural cell together with the actin dynamics taking place at its growing region by constructing a phase-field model. This is done by assigning auxiliary fields to different constituents of the cell in order to differentiate them. Specifically, the inner and outer regions of the neural cell are described by ϕ = 1 and ϕ = 0 respectively, whereas the inside and outside of its leading edge are portrayed by ψ = 1 and ψ = 0. This formulation inherently locates the boundary, which is required to determine the evolution of the underlying actin dynamics. Therefore, it provides an alternative to boundary tracking algorithms. Then the equations governing the molecular workings of the cell specifically those of actin are modified in order to satisfy their corresponding boundary conditions. PMID:25943025
The dynamic brain: from spiking neurons to neural masses and cortical fields.
Deco, Gustavo; Jirsa, Viktor K; Robinson, Peter A; Breakspear, Michael; Friston, Karl
2008-01-01
The cortex is a complex system, characterized by its dynamics and architecture, which underlie many functions such as action, perception, learning, language, and cognition. Its structural architecture has been studied for more than a hundred years; however, its dynamics have been addressed much less thoroughly. In this paper, we review and integrate, in a unifying framework, a variety of computational approaches that have been used to characterize the dynamics of the cortex, as evidenced at different levels of measurement. Computational models at different space-time scales help us understand the fundamental mechanisms that underpin neural processes and relate these processes to neuroscience data. Modeling at the single neuron level is necessary because this is the level at which information is exchanged between the computing elements of the brain; the neurons. Mesoscopic models tell us how neural elements interact to yield emergent behavior at the level of microcolumns and cortical columns. Macroscopic models can inform us about whole brain dynamics and interactions between large-scale neural systems such as cortical regions, the thalamus, and brain stem. Each level of description relates uniquely to neuroscience data, from single-unit recordings, through local field potentials to functional magnetic resonance imaging (fMRI), electroencephalogram (EEG), and magnetoencephalogram (MEG). Models of the cortex can establish which types of large-scale neuronal networks can perform computations and characterize their emergent properties. Mean-field and related formulations of dynamics also play an essential and complementary role as forward models that can be inverted given empirical data. This makes dynamic models critical in integrating theory and experiments. We argue that elaborating principled and informed models is a prerequisite for grounding empirical neuroscience in a cogent theoretical framework, commensurate with the achievements in the physical sciences. PMID
Dynamics of neural cryptography
Ruttor, Andreas; Kinzel, Wolfgang; Kanter, Ido
2007-05-15
Synchronization of neural networks has been used for public channel protocols in cryptography. In the case of tree parity machines the dynamics of both bidirectional synchronization and unidirectional learning is driven by attractive and repulsive stochastic forces. Thus it can be described well by a random walk model for the overlap between participating neural networks. For that purpose transition probabilities and scaling laws for the step sizes are derived analytically. Both these calculations as well as numerical simulations show that bidirectional interaction leads to full synchronization on average. In contrast, successful learning is only possible by means of fluctuations. Consequently, synchronization is much faster than learning, which is essential for the security of the neural key-exchange protocol. However, this qualitative difference between bidirectional and unidirectional interaction vanishes if tree parity machines with more than three hidden units are used, so that those neural networks are not suitable for neural cryptography. In addition, the effective number of keys which can be generated by the neural key-exchange protocol is calculated using the entropy of the weight distribution. As this quantity increases exponentially with the system size, brute-force attacks on neural cryptography can easily be made unfeasible.
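A tree parity machine computes its public output as the product of the signs of its hidden-unit fields, and the two partners apply a Hebbian-type update only on mutual agreement. A minimal sketch of this synchronization dynamics (illustrative sizes and the plain Hebbian rule; not the full key-exchange protocol or its attacks):

import numpy as np

K, N, L = 3, 100, 10        # hidden units, inputs per unit, weight bound
rng = np.random.default_rng(1)

def tpm_output(w, x):
    """Hidden fields -> sigma_k = sign(w_k . x_k); network output tau = prod_k sigma_k."""
    sigma = np.sign(np.einsum('kn,kn->k', w, x))
    sigma[sigma == 0] = 1
    return sigma, int(np.prod(sigma))

def hebbian_update(w, x, sigma, tau):
    """Update only hidden units whose output agrees with tau, then clip weights to [-L, L]."""
    for k in range(K):
        if sigma[k] == tau:
            w[k] = np.clip(w[k] + tau * x[k], -L, L)

# Two parties with independent random weights synchronize via public random inputs.
wA = rng.integers(-L, L + 1, (K, N))
wB = rng.integers(-L, L + 1, (K, N))
for step in range(20000):                  # typically synchronizes within a few thousand steps
    x = rng.choice([-1, 1], (K, N))
    sA, tA = tpm_output(wA, x)
    sB, tB = tpm_output(wB, x)
    if tA == tB:                           # only mutual agreement triggers learning
        hebbian_update(wA, x, sA, tA)
        hebbian_update(wB, x, sB, tB)
    if np.array_equal(wA, wB):
        print("synchronized after", step, "steps")
        break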
Neural field dynamics under variation of local and global connectivity and finite transmission speed
NASA Astrophysics Data System (ADS)
Qubbaj, Murad R.; Jirsa, Viktor K.
2009-12-01
Spatially continuous networks with heterogeneous connections are ubiquitous in biological systems, in particular neural systems. To understand the mutual effects of locally homogeneous and globally heterogeneous connectivity, we investigate the stability of the steady state activity of a neural field as a function of its connectivity. The variation of the connectivity is implemented through manipulation of a heterogeneous two-point connection embedded into the otherwise homogeneous connectivity matrix and by variation of the connectivity strength and transmission speed. Detailed examples including the Ginzburg-Landau equation and various other local architectures are discussed. Our analysis shows that developmental changes such as the myelination of the cortical large-scale fiber system generally result in the stabilization of steady state activity independent of the local connectivity. Non-oscillatory instabilities are shown to be independent of any influences of time delay.
Rich spectrum of neural field dynamics in the presence of short-term synaptic depression
NASA Astrophysics Data System (ADS)
Wang, He; Lam, Kin; Fung, C. C. Alan; Wong, K. Y. Michael; Wu, Si
2015-09-01
In continuous attractor neural networks (CANNs), spatially continuous information such as orientation, head direction, and spatial location is represented by Gaussian-like tuning curves that can be displaced continuously in the space of the preferred stimuli of the neurons. We investigate how short-term synaptic depression (STD) can reshape the intrinsic dynamics of the CANN model and its responses to a single static input. In particular, CANNs with STD can support various complex firing patterns and chaotic behaviors. These chaotic behaviors have the potential to encode various stimuli in the neuronal system.
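A commonly used formulation of a 1D CANN with short-term synaptic depression (the exact normalization and parameter names vary across papers) is

\[
\tau_s \frac{\partial u(x,t)}{\partial t} = -u(x,t) + \rho \int dx'\, J(x-x')\, p(x',t)\, r(x',t) + I_{\mathrm{ext}}(x,t),
\qquad
\tau_d \frac{\partial p(x,t)}{\partial t} = 1 - p(x,t) - \tau_d\,\beta\, p(x,t)\, r(x,t),
\]

with Gaussian coupling \(J(x-x') = \frac{J_0}{\sqrt{2\pi}\,a}\exp\!\big[-\tfrac{(x-x')^2}{2a^2}\big]\) and divisively normalized firing rate \(r(x,t) = \frac{[u(x,t)]_+^2}{1 + k\rho\int dx'\,[u(x',t)]_+^2}\); here p is the fraction of available neurotransmitter and \(\beta\) controls the depression strength.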
Stochastic mean-field formulation of the dynamics of diluted neural networks
NASA Astrophysics Data System (ADS)
Angulo-Garcia, D.; Torcini, A.
2015-02-01
We consider pulse-coupled leaky integrate-and-fire neural networks with randomly distributed synaptic couplings. This random dilution induces fluctuations in the evolution of the macroscopic variables and deterministic chaos at the microscopic level. Our main aim is to mimic the effect of the dilution as a noise source acting on the dynamics of a globally coupled nonchaotic system. Indeed, the evolution of a diluted neural network can be well approximated by that of a fully pulse-coupled network, where each neuron is driven by a mean synaptic current plus additive noise. These terms represent the average and the fluctuations of the synaptic currents acting on the single neurons in the diluted system. The main microscopic and macroscopic dynamical features can be retrieved with this stochastic approximation. Furthermore, the microscopic stability of the diluted network can also be reproduced, as demonstrated by the near coincidence of the measured Lyapunov exponents in the deterministic and stochastic cases for an ample range of system sizes. Our results strongly suggest that the fluctuations in the synaptic currents are responsible for the emergence of chaos in this class of pulse-coupled networks.
Learning to recognize objects on the fly: a neurally based dynamic field approach.
Faubel, Christian; Schöner, Gregor
2008-05-01
Autonomous robots interacting with human users need to build and continuously update scene representations. This entails the problem of rapidly learning to recognize new objects under user guidance. Based on analogies with human visual working memory, we propose a dynamical field architecture, in which localized peaks of activation represent objects over a small number of simple feature dimensions. Learning consists of laying down memory traces of such peaks. We implement the dynamical field model on a service robot and demonstrate how it learns 30 objects from a very small number of views (about 5 per object are sufficient). We also illustrate how properties of feature binding emerge from this framework. PMID:18501555
Hou, Saing Paul; Haddad, Wassim M; Meskin, Nader; Bailey, James M
2015-12-01
With the advances in biochemistry, molecular biology, and neurochemistry there has been impressive progress in understanding the molecular properties of anesthetic agents. However, there has been little focus on how the molecular properties of anesthetic agents lead to the observed macroscopic property that defines the anesthetic state, that is, lack of responsiveness to noxious stimuli. In this paper, we use dynamical system theory to develop a mechanistic mean field model for neural activity to study the abrupt transition from consciousness to unconsciousness as the concentration of the anesthetic agent increases. The proposed synaptic drive firing-rate model predicts the conscious-unconscious transition as the applied anesthetic concentration increases, where excitatory neural activity is characterized by a Poincaré-Andronov-Hopf bifurcation with the awake state transitioning to a stable limit cycle and then subsequently to an asymptotically stable unconscious equilibrium state. Furthermore, we address the more general question of synchronization and partial state equipartitioning of neural activity without mean field assumptions. This is done by focusing on a postulated subset of inhibitory neurons that are not themselves connected to other inhibitory neurons. Finally, several numerical experiments are presented to illustrate the different aspects of the proposed theory. PMID:26438186
Dynamic interactions in neural networks
Arbib, M.A.; Amari, S.
1989-01-01
The study of neural networks is enjoying a great renaissance, both in computational neuroscience, the development of information processing models of living brains, and in neural computing, the use of neurally inspired concepts in the construction of intelligent machines. This volume presents models and data on the dynamic interactions occurring in the brain, and exhibits the dynamic interactions between research in computational neuroscience and in neural computing. The authors present current research, future trends and open problems.
Mean-field dynamics of a random neural network with noise
NASA Astrophysics Data System (ADS)
Klinshov, Vladimir; Franović, Igor
2015-12-01
We consider a network of randomly coupled rate-based neurons influenced by external and internal noise. We derive a second-order stochastic mean-field model for the network dynamics and use it to analyze the stability and bifurcations in the thermodynamic limit, as well as to study the fluctuations due to the finite-size effect. It is demonstrated that the two types of noise have substantially different impact on the network dynamics. While both sources of noise give rise to stochastic fluctuations in the case of the finite-size network, only the external noise affects the stationary activity levels of the network in the thermodynamic limit. We compare the theoretical predictions with the direct simulation results and show that they agree for large enough network sizes and for parameter domains sufficiently away from bifurcations.
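A minimal sketch of the class of networks considered — randomly coupled rate units driven by both an external noise source and internal, per-unit noise — is given below with illustrative parameters. Where exactly each noise source enters differs in detail from the authors' model, and this is a direct simulation, not their reduced second-order mean-field description.

import numpy as np

N, dt, steps, tau = 200, 0.01, 20000, 1.0
rng = np.random.default_rng(2)
J = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))    # random synaptic couplings
phi = np.tanh                                     # rate transfer function
sig_ext, sig_int = 0.3, 0.1                       # external and internal noise intensities

x = rng.normal(0.0, 0.1, N)
pop_rate = np.empty(steps)
for t in range(steps):
    drift = (-x + J @ phi(x)) / tau
    noise = sig_ext * rng.standard_normal() + sig_int * rng.standard_normal(N)
    x = x + dt * drift + np.sqrt(dt) * noise      # Euler-Maruyama step
    pop_rate[t] = phi(x).mean()

# The population-averaged activity fluctuates around a stationary level;
# the size of these finite-size fluctuations shrinks as N grows.
print("stationary level:", pop_rate[steps // 2:].mean().round(3),
      "fluctuation std:", pop_rate[steps // 2:].std().round(4))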
Creative-Dynamics Approach To Neural Intelligence
NASA Technical Reports Server (NTRS)
Zak, Michail A.
1992-01-01
Paper discusses approach to mathematical modeling of artificial neural networks exhibiting complicated behaviors reminiscent of creativity and intelligence of biological neural networks. Neural network treated as non-Lipschitzian dynamical system - as described in "Non-Lipschitzian Dynamics For Modeling Neural Networks" (NPO-17814). System serves as tool for modeling of temporal-pattern memories and recognition of complicated spatial patterns.
Francis, Joseph T; Chapin, John K
2006-06-01
In everyday life, we reach, grasp, and manipulate a variety of different objects all with their own dynamic properties. This degree of adaptability is essential for a brain-controlled prosthetic arm to work in the real world. In this study, rats were trained to make reaching movements while holding a torque manipulandum working against two distinct loads. Neural recordings obtained from arrays of 32 microelectrodes spanning the motor cortex were used to predict several movement related variables. In this paper, we demonstrate that a simple linear regression model can translate neural activity into endpoint position of a robotic manipulandum even while the animal controlling it works against different loads. A second regression model can predict, with 100% accuracy, which of the two loads is being manipulated by the animal. Finally, a third model predicts the work needed to move the manipulandum endpoint. This prediction is significantly better than that for position. In each case, the regression model uses a single set of weights. Thus, the neural ensemble is capable of providing the information necessary to compensate for at least two distinct load conditions. PMID:16792286
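The decoding pipeline described — one fixed set of linear-regression weights mapping ensemble activity to endpoint position, and a second regression recovering the load condition — can be sketched on synthetic data (hypothetical array shapes and tuning; not the authors' recordings):

import numpy as np

rng = np.random.default_rng(3)
n_samples, n_units = 5000, 32

# Synthetic stand-ins for binned firing rates and the behavior they encode.
position = np.cumsum(rng.normal(0, 0.1, n_samples))           # 1D endpoint position
load = rng.integers(0, 2, n_samples)                           # which of two loads is attached
rates = (np.outer(position, rng.normal(0, 1, n_units))
         + np.outer(load, rng.normal(0, 1, n_units))
         + rng.normal(0, 0.5, (n_samples, n_units)))

X = np.column_stack([rates, np.ones(n_samples)])               # add a bias column

# Model 1: a single set of weights translating neural activity into position.
w_pos, *_ = np.linalg.lstsq(X, position, rcond=None)
pos_hat = X @ w_pos

# Model 2: the same kind of linear readout predicting the load condition.
w_load, *_ = np.linalg.lstsq(X, load.astype(float), rcond=None)
load_hat = (X @ w_load) > 0.5

r = np.corrcoef(position, pos_hat)[0, 1]
acc = (load_hat == load.astype(bool)).mean()
print(f"position correlation r = {r:.2f}, load classification accuracy = {acc:.2%}")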
Emergent complex neural dynamics
NASA Astrophysics Data System (ADS)
Chialvo, Dante R.
2010-10-01
A large repertoire of spatiotemporal activity patterns in the brain is the basis for adaptive behaviour. Understanding the mechanism by which the brain's hundred billion neurons and hundred trillion synapses manage to produce such a range of cortical configurations in a flexible manner remains a fundamental problem in neuroscience. One plausible solution is the involvement of universal mechanisms of emergent complex phenomena evident in dynamical systems poised near a critical point of a second-order phase transition. We review recent theoretical and empirical results supporting the notion that the brain is naturally poised near criticality, as well as its implications for better understanding of the brain.
Dynamical systems, attractors, and neural circuits
Miller, Paul
2016-01-01
Biology is the study of dynamical systems. Yet most of us working in biology have limited pedagogical training in the theory of dynamical systems, an unfortunate historical fact that can be remedied for future generations of life scientists. In my particular field of systems neuroscience, neural circuits are rife with nonlinearities at all levels of description, rendering simple methodologies and our own intuition unreliable. Therefore, our ideas are likely to be wrong unless informed by good models. These models should be based on the mathematical theories of dynamical systems since functioning neurons are dynamic—they change their membrane potential and firing rates with time. Thus, selecting the appropriate type of dynamical system upon which to base a model is an important first step in the modeling process. This step all too easily goes awry, in part because there are many frameworks to choose from, in part because the sparsely sampled data can be consistent with a variety of dynamical processes, and in part because each modeler has a preferred modeling approach that is difficult to move away from. This brief review summarizes some of the main dynamical paradigms that can arise in neural circuits, with comments on what they can achieve computationally and what signatures might reveal their presence within empirical data. I provide examples of different dynamical systems using simple circuits of two or three cells, emphasizing that any one connectivity pattern is compatible with multiple, diverse functions. PMID:27408709
Model Of Neural Network With Creative Dynamics
NASA Technical Reports Server (NTRS)
Zak, Michail; Barhen, Jacob
1993-01-01
Paper presents analysis of mathematical model of one-neuron/one-synapse neural network featuring coupled activation and learning dynamics and parametrical periodic excitation. Demonstrates self-programming, partly random behavior of suitably designed neural network; believed to be related to spontaneity and creativity of biological neural networks.
Foetal ECG recovery using dynamic neural networks.
Camps-Valls, Gustavo; Martínez-Sober, Marcelino; Soria-Olivas, Emilio; Magdalena-Benedito, Rafael; Calpe-Maravilla, Javier; Guerrero-Martínez, Juan
2004-07-01
Non-invasive electrocardiography has proven to be a very interesting method for obtaining information about the foetus state and thus to assure its well-being during pregnancy. One of the main applications in this field is foetal electrocardiogram (ECG) recovery by means of automatic methods. Evident problems found in the literature are the limited number of available registers, the lack of performance indicators, and the limited use of non-linear adaptive methods. In order to circumvent these problems, we first introduce the generation of synthetic registers and discuss the influence of different kinds of noise on the modelling. Second, a method which is based on numerical (correlation coefficient) and statistical (analysis of variance, ANOVA) measures allows us to select the best recovery model. Finally, finite impulse response (FIR) and gamma neural networks are included in the adaptive noise cancellation (ANC) scheme in order to provide highly non-linear, dynamic capabilities to the recovery model. Neural networks are benchmarked with classical adaptive methods such as the least mean squares (LMS) and the normalized LMS (NLMS) algorithms in simulated and real registers and some conclusions are drawn. For synthetic registers, the most decisive factor in the identification of the models is the foetal-maternal signal-to-noise ratio (SNR). In addition, as the electromyogram contribution becomes more relevant, neural networks clearly outperform the LMS-based algorithm. From the ANOVA test, we found statistical differences between LMS-based models and neural models when complex situations (high foetal-maternal and foetal-noise SNRs) were present. These conclusions were confirmed after doing robustness tests on synthetic registers, visual inspection of the recovered signals and calculation of the recognition rates of foetal R-peaks for real situations. Finally, the best compromise between model complexity and outcomes was provided by the FIR neural network. Both
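The classical baseline referred to above is adaptive noise cancellation with an LMS-adapted FIR filter: an abdominal (maternal plus foetal) recording is the primary input, a thoracic (essentially maternal-only) recording is the reference, and the cancellation residual approximates the foetal ECG. A minimal sketch on synthetic signals (illustrative frequencies and amplitudes; not the paper's registers or its neural models):

import numpy as np

rng = np.random.default_rng(4)
fs, T = 500, 20                                   # sampling rate (Hz), duration (s)
t = np.arange(0, T, 1 / fs)

maternal = np.sin(2 * np.pi * 1.2 * t)            # stand-in for the maternal ECG (~72 bpm)
foetal = 0.2 * np.sin(2 * np.pi * 2.3 * t)        # weaker, faster foetal component (~138 bpm)
reference = maternal + 0.05 * rng.standard_normal(t.size)          # thoracic lead: maternal only
primary = 0.8 * np.roll(maternal, 3) + foetal \
          + 0.05 * rng.standard_normal(t.size)                     # abdominal lead

# LMS adaptive noise cancellation with an FIR filter of length M.
M, mu = 16, 0.01
w = np.zeros(M)
recovered = np.zeros(t.size)
for n in range(M, t.size):
    x = reference[n - M:n][::-1]                  # most recent M reference samples
    y = w @ x                                     # estimate of the maternal interference
    e = primary[n] - y                            # residual = recovered foetal signal
    w += 2 * mu * e * x                           # LMS weight update
    recovered[n] = e

corr = np.corrcoef(recovered[fs:], foetal[fs:])[0, 1]
print(f"correlation of residual with true foetal component: {corr:.2f}")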
Propagating waves can explain irregular neural dynamics.
Keane, Adam; Gong, Pulin
2015-01-28
Cortical neurons in vivo fire quite irregularly. Previous studies about the origin of such irregular neural dynamics have given rise to two major models: a balanced excitation and inhibition model, and a model of highly synchronized synaptic inputs. To elucidate the network mechanisms underlying synchronized synaptic inputs and account for irregular neural dynamics, we investigate a spatially extended, conductance-based spiking neural network model. We show that propagating wave patterns with complex dynamics emerge from the network model. These waves sweep past neurons, to which they provide highly synchronized synaptic inputs. On the other hand, these patterns only emerge from the network with balanced excitation and inhibition; our model therefore reconciles the two major models of irregular neural dynamics. We further demonstrate that the collective dynamics of propagating wave patterns provides a mechanistic explanation for a range of irregular neural dynamics, including the variability of spike timing, slow firing rate fluctuations, and correlated membrane potential fluctuations. In addition, in our model, the distributions of synaptic conductance and membrane potential are non-Gaussian, consistent with recent experimental data obtained using whole-cell recordings. Our work therefore relates the propagating waves that have been widely observed in the brain to irregular neural dynamics. These results demonstrate that neural firing activity, although appearing highly disordered at the single-neuron level, can form dynamical coherent structures, such as propagating waves at the population level. PMID:25632135
Dynamic alignment models for neural coding.
Kollmorgen, Sepp; Hahnloser, Richard H R
2014-03-01
Recently, there have been remarkable advances in modeling the relationships between the sensory environment, neuronal responses, and behavior. However, most models cannot encompass variable stimulus-response relationships such as varying response latencies and state or context dependence of the neural code. Here, we consider response modeling as a dynamic alignment problem and model stimulus and response jointly by a mixed pair hidden Markov model (MPH). In MPHs, multiple stimulus-response relationships (e.g., receptive fields) are represented by different states or groups of states in a Markov chain. Each stimulus-response relationship features temporal flexibility, allowing modeling of variable response latencies, including noisy ones. We derive algorithms for learning of MPH parameters and for inference of spike response probabilities. We show that some linear-nonlinear Poisson cascade (LNP) models are a special case of MPHs. We demonstrate the efficiency and usefulness of MPHs in simulations of both jittered and switching spike responses to white noise and natural stimuli. Furthermore, we apply MPHs to extracellular single and multi-unit data recorded in cortical brain areas of singing birds to showcase a novel method for estimating response lag distributions. MPHs allow simultaneous estimation of receptive fields, latency statistics, and hidden state dynamics and so can help to uncover complex stimulus response relationships that are subject to variable timing and involve diverse neural codes. PMID:24625448
The Complexity of Dynamics in Small Neural Circuits.
Fasoli, Diego; Cattani, Anna; Panzeri, Stefano
2016-08-01
Mean-field approximations are a powerful tool for studying large neural networks. However, they do not describe well the behavior of networks composed of a small number of neurons. In this case, major differences between the mean-field approximation and the real behavior of the network can arise. Yet, many interesting problems in neuroscience involve the study of mesoscopic networks composed of a few tens of neurons. Nonetheless, mathematical methods that correctly describe networks of small size are still rare, and this prevents us from making progress in understanding neural dynamics at these intermediate scales. Here we develop a novel systematic analysis of the dynamics of arbitrarily small networks composed of homogeneous populations of excitatory and inhibitory firing-rate neurons. We study the local bifurcations of their neural activity with an approach that is largely analytically tractable, and we numerically determine the global bifurcations. We find that for strong inhibition these networks give rise to very complex dynamics, caused by the formation of multiple branching solutions of the neural dynamics equations that emerge through spontaneous symmetry-breaking. This qualitative change of the neural dynamics is a finite-size effect of the network, which reveals qualitative and previously unexplored differences between mesoscopic cortical circuits and their mean-field approximation. The most important consequence of spontaneous symmetry-breaking is the ability of mesoscopic networks to regulate their degree of functional heterogeneity, which is thought to help reduce the detrimental effect of noise correlations on cortical information processing. PMID:27494737
Field-theoretic approach to fluctuation effects in neural networks
Buice, Michael A.; Cowan, Jack D.
2007-05-15
A well-defined stochastic theory for neural activity, which permits the calculation of arbitrary statistical moments and equations governing them, is a potentially valuable tool for theoretical neuroscience. We produce such a theory by analyzing the dynamics of neural activity using field theoretic methods for nonequilibrium statistical processes. Assuming that neural network activity is Markovian, we construct the effective spike model, which describes both neural fluctuations and response. This analysis leads to a systematic expansion of corrections to mean field theory, which for the effective spike model is a simple version of the Wilson-Cowan equation. We argue that neural activity governed by this model exhibits a dynamical phase transition which is in the universality class of directed percolation. More general models (which may incorporate refractoriness) can exhibit other universality classes, such as dynamic isotropic percolation. Because of the extremely high connectivity in typical networks, it is expected that higher-order terms in the systematic expansion are small for experimentally accessible measurements, and thus, consistent with measurements in neocortical slice preparations, we expect mean field exponents for the transition. We provide a quantitative criterion for the relative magnitude of each term in the systematic expansion, analogous to the Ginsburg criterion. Experimental identification of dynamic universality classes in vivo is an outstanding and important question for neuroscience.
Dynamic analysis of neural encoding by point process adaptive filtering.
Eden, Uri T; Frank, Loren M; Barbieri, Riccardo; Solo, Victor; Brown, Emery N
2004-05-01
Neural receptive fields are dynamic in that with experience, neurons change their spiking responses to relevant stimuli. To understand how neural systems adapt their representations of biological information, analyses of receptive field plasticity from experimental measurements are crucial. Adaptive signal processing, the well-established engineering discipline for characterizing the temporal evolution of system parameters, suggests a framework for studying the plasticity of receptive fields. We use the Bayes' rule Chapman-Kolmogorov paradigm with a linear state equation and point process observation models to derive adaptive filters appropriate for estimation from neural spike trains. We derive point process filter analogues of the Kalman filter, recursive least squares, and steepest-descent algorithms and describe the properties of these new filters. We illustrate our algorithms in two simulated data examples. The first is a study of slow and rapid evolution of spatial receptive fields in hippocampal neurons. The second is an adaptive decoding study in which a signal is decoded from ensemble neural spiking activity as the receptive fields of the neurons in the ensemble evolve. Our results provide a paradigm for adaptive estimation for point process observations and suggest a practical approach for constructing filtering algorithms to track neural receptive field dynamics on a millisecond timescale. PMID:15070506
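In discrete time with bin width \(\Delta\), the steepest-descent member of this family of point-process adaptive filters updates the receptive-field parameter estimate from the observed spike increments \(\Delta N_k\) roughly as (schematic form; the recursive-least-squares and Kalman-like variants replace the fixed learning rate \(\varepsilon\) with a time-varying gain)

\[
\hat{\theta}_k \;=\; \hat{\theta}_{k-1} \;+\; \varepsilon\,
\left.\frac{\partial \log \lambda(t_k \mid \theta)}{\partial \theta}\right|_{\theta=\hat{\theta}_{k-1}}
\Big[\Delta N_k \;-\; \lambda\big(t_k \mid \hat{\theta}_{k-1}\big)\,\Delta\Big],
\]

where \(\lambda(t \mid \theta)\) is the conditional intensity of the spike train.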
Comparing artificial and biological dynamical neural networks
NASA Astrophysics Data System (ADS)
McAulay, Alastair D.
2006-05-01
Modern computers can be made more friendly and otherwise improved by making them behave more like humans. Perhaps we can learn how to do this from biology in which human brains evolved over a long period of time. Therefore, we first explain a commonly used biological neural network (BNN) model, the Wilson-Cowan neural oscillator, that has cross-coupled excitatory (positive) and inhibitory (negative) neurons. The two types of neurons are used for frequency modulation communication between neurons which provides immunity to electromagnetic interference. We then evolve, for the first time, an artificial neural network (ANN) to perform the same task. Two dynamical feed-forward artificial neural networks use cross-coupling feedback (like that in a flip-flop) to form an ANN nonlinear dynamic neural oscillator with the same equations as the Wilson-Cowan neural oscillator. Finally we show, through simulation, that the equations perform the basic neural threshold function, switching between stable zero output and a stable oscillation, that is a stable limit cycle. Optical implementation with an injected laser diode and future research are discussed.
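The cross-coupled excitatory-inhibitory (Wilson-Cowan) oscillator referred to above is commonly written, in simplified form, as

\[
\tau_E \frac{dE}{dt} = -E + S\big(c_{EE}E - c_{EI}I + P\big),
\qquad
\tau_I \frac{dI}{dt} = -I + S\big(c_{IE}E - c_{II}I + Q\big),
\]

where E and I are the excitatory and inhibitory population activities, S is a sigmoid, and P, Q are external drives; for suitable coupling strengths the fixed point loses stability through a Hopf bifurcation, giving the switching between stable zero output and a stable limit cycle described above.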
Neural network with formed dynamics of activity
Dunin-Barkovskii, V.L.; Osovets, N.B.
1995-03-01
The problem of developing a neural network with a given pattern of the state sequence is considered. A neural network structure and an algorithm for forming its bond matrix, which lead to an approximate but robust solution of the problem, are proposed and discussed. Limiting characteristics of the serviceability of the proposed structure are studied. Various methods of visualizing dynamic processes in a neural network are compared. Possible applications of the results obtained for interpretation of neurophysiological data and in neuroinformatics systems are discussed.
On lateral competition in dynamic neural networks
Bellyustin, N.S.
1995-02-01
Homogeneously connected artificial neural networks, which use retinal image processing methods, are considered. We point out that there are probably two different types of lateral inhibition of each neural element by its neighbors: one due to negative connection coefficients between elements, and one due to a neuron's decreasing response to an excessively high input signal. The first case is characterized by stable dynamics, governed by a Lyapunov function, while in the second case stability is absent and two-dimensional dynamic chaos occurs if the time step in the integration of the model equations is large enough. The continuous neural medium approximation is used for analytical estimates in both cases. The result is a partition of the parameter space into domains with qualitatively different dynamic modes. Computer simulations confirm the estimates and show that combining two-dimensional chaos with symmetries provided by the initial and boundary conditions may produce patterns which are genuine pieces of art.
Dynamics and kinematics of simple neural systems
NASA Astrophysics Data System (ADS)
Rabinovich, Mikhail; Selverston, Allen; Rubchinsky, Leonid; Huerta, Ramón
1996-09-01
The dynamics of simple neural systems is of interest to both biologists and physicists. One of the possible roles of such systems is the production of rhythmic patterns, and their alterations (modification of behavior, processing of sensory information, adaptation, control). In this paper, the neural systems are considered as a subject of modeling by the dynamical systems approach. In particular, we analyze how a stable, ordinary behavior of a small neural system can be described by simple finite automata models, and how more complicated dynamical systems modeling can be used. The approach is illustrated by biological and numerical examples: experiments with and numerical simulations of the stomatogastric central pattern generators network of the California spiny lobster.
Foley, Nicholas C.; Grossberg, Stephen; Mingolla, Ennio
2015-01-01
How are spatial and object attention coordinated to achieve rapid object learning and recognition during eye movement search? How do prefrontal priming and parietal spatial mechanisms interact to determine the reaction time costs of intra-object attention shifts, inter-object attention shifts, and shifts between visible objects and covertly cued locations? What factors underlie individual differences in the timing and frequency of such attentional shifts? How do transient and sustained spatial attentional mechanisms work and interact? How can volition, mediated via the basal ganglia, influence the span of spatial attention? A neural model is developed of how spatial attention in the where cortical stream coordinates view-invariant object category learning in the what cortical stream under free viewing conditions. The model simulates psychological data about the dynamics of covert attention priming and switching requiring multifocal attention without eye movements. The model predicts how “attentional shrouds” are formed when surface representations in cortical area V4 resonate with spatial attention in posterior parietal cortex (PPC) and prefrontal cortex (PFC), while shrouds compete among themselves for dominance. Winning shrouds support invariant object category learning, and active surface-shroud resonances support conscious surface perception and recognition. Attentive competition between multiple objects and cues simulates reaction-time data from the two-object cueing paradigm. The relative strength of sustained surface-driven and fast-transient motion-driven spatial attention controls individual differences in reaction time for invalid cues. Competition between surface-driven attentional shrouds controls individual differences in detection rate of peripheral targets in useful-field-of-view tasks. The model proposes how the strength of competition can be mediated, though learning or momentary changes in volition, by the basal ganglia. A new explanation of
Electrokinetic confinement of axonal growth for dynamically configurable neural networks.
Honegger, Thibault; Scott, Mark A; Yanik, Mehmet F; Voldman, Joel
2013-02-21
Axons in the developing nervous system are directed via guidance cues, whose expression varies both spatially and temporally, to create functional neural circuits. Existing methods to create patterns of neural connectivity in vitro use only static geometries, and are unable to dynamically alter the guidance cues imparted on the cells. We introduce the use of AC electrokinetics to dynamically control axonal growth in cultured rat hippocampal neurons. We find that the application of modest voltages at frequencies on the order of 10⁵ Hz can cause developing axons to be stopped adjacent to the electrodes while axons away from the electric fields exhibit uninhibited growth. By switching electrodes on or off, we can reversibly inhibit or permit axon passage across the electrodes. Our models suggest that dielectrophoresis is the causative AC electrokinetic effect. We make use of our dynamic control over axon elongation to create an axon-diode via an axon-lock system that consists of a pair of electrode 'gates' that either permit or prevent axons from passing through. Finally, we developed a neural circuit consisting of three populations of neurons, separated by three axon-locks to demonstrate the assembly of a functional, engineered neural network. Action potential recordings demonstrate that the AC electrokinetic effect does not harm axons, and Ca²⁺ imaging demonstrated the unidirectional nature of the synaptic connections. AC electrokinetic confinement of axonal growth has potential for creating configurable, directional neural networks. PMID:23314575
Synthesis of recurrent neural networks for dynamical system simulation.
Trischler, Adam P; D'Eleuterio, Gabriele M T
2016-08-01
We review several of the most widely used techniques for training recurrent neural networks to approximate dynamical systems, then describe a novel algorithm for this task. The algorithm is based on an earlier theoretical result that guarantees the quality of the network approximation. We show that a feedforward neural network can be trained on the vector-field representation of a given dynamical system using backpropagation, then recast it as a recurrent network that replicates the original system's dynamics. After detailing this algorithm and its relation to earlier approaches, we present numerical examples that demonstrate its capabilities. One of the distinguishing features of our approach is that both the original dynamical systems and the recurrent networks that simulate them operate in continuous time. PMID:27182811
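The core of the described procedure — fit a feedforward network to samples of the vector field f(x) of the target system, then run the trained network recurrently as dx/dt = net(x) — can be sketched as follows, using a generic off-the-shelf regressor as a stand-in rather than the authors' algorithm or its approximation guarantees:

import numpy as np
from sklearn.neural_network import MLPRegressor

def van_der_pol(x, mu=1.0):
    """Target dynamical system: dx/dt = f(x) for the Van der Pol oscillator."""
    return np.array([x[1], mu * (1 - x[0] ** 2) * x[1] - x[0]])

# 1) Sample the vector field on a region covering the attractor.
rng = np.random.default_rng(5)
X = rng.uniform(-3, 3, (5000, 2))
Y = np.array([van_der_pol(x) for x in X])

# 2) Train a feedforward network to approximate the vector field.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(X, Y)

# 3) Recast: integrate dx/dt = net(x), i.e. run the trained network as a
#    continuous-time recurrent system, and compare against the true dynamics.
def simulate(f, x0, dt=0.01, steps=3000):
    traj = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        traj.append(traj[-1] + dt * np.asarray(f(traj[-1])).ravel())
    return np.array(traj)

true_traj = simulate(van_der_pol, [0.5, 0.0])
net_traj = simulate(lambda x: net.predict(x.reshape(1, -1)), [0.5, 0.0])
print("mean trajectory deviation:", np.linalg.norm(true_traj - net_traj, axis=1).mean().round(3))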
Neural networks, field theory, directed percolation, and critical branching
NASA Astrophysics Data System (ADS)
Buice, Michael A.
We describe the dynamics of neural activity using field-theoretic methods for non-equilibrium statistical processes. Using a Markov assumption, we introduce the "spike model". The spike model permits a characterization of both neural fluctuations and response, presenting a tractable way to extend the mean field (Wilson-Cowan) equations used in much of theoretical and computational neuroscience. We also demonstrate the formalism's application to the Cowan models, one of which is equivalent to the forest fire model with immune trees. We argue that neural activity under mild conditions exhibits a dynamical phase transition which is in the universality class of directed percolation (DP). Owing to the spatial extent of neural interactions, there is a region in which the critical behavior is that of a branching process before crossing over into the DP region, consistent with measurements in cortical slice preparations. From the perspective of theoretical neuroscience, a principal contribution of this work is the connection of the problem of non-linear, non-Gaussian systems with the problem of dealing with infrared singularities in field theory. This work suggests a general characterization of epilepsy as a manifestation of a directed percolation phase transition.
Axonal Velocity Distributions in Neural Field Equations
Bojak, Ingo; Liley, David T. J.
2010-01-01
By modelling the average activity of large neuronal populations, continuum mean field models (MFMs) have become an increasingly important theoretical tool for understanding the emergent activity of cortical tissue. In order to be computationally tractable, long-range propagation of activity in MFMs is often approximated with partial differential equations (PDEs). However, PDE approximations in current use correspond to underlying axonal velocity distributions incompatible with experimental measurements. In order to rectify this deficiency, we here introduce novel propagation PDEs that give rise to smooth unimodal distributions of axonal conduction velocities. We also argue that velocities estimated from fibre diameters in slice and from latency measurements, respectively, relate quite differently to such distributions, a significant point for any phenomenological description. Our PDEs are then successfully fit to fibre diameter data from human corpus callosum and rat subcortical white matter. This allows for the first time to simulate long-range conduction in the mammalian brain with realistic, convenient PDEs. Furthermore, the obtained results suggest that the propagation of activity in rat and human differs significantly beyond mere scaling. The dynamical consequences of our new formulation are investigated in the context of a well known neural field model. On the basis of Turing instability analyses, we conclude that pattern formation is more easily initiated using our more realistic propagator. By increasing characteristic conduction velocities, a smooth transition can occur from self-sustaining bulk oscillations to travelling waves of various wavelengths, which may influence axonal growth during development. Our analytic results are also corroborated numerically using simulations on a large spatial grid. Thus we provide here a comprehensive analysis of empirically constrained activity propagation in the context of MFMs, which will allow more realistic studies
Nonlinear dynamics of neural delayed feedback
Longtin, A.
1990-01-01
Neural delayed feedback is a property shared by many circuits in the central and peripheral nervous systems. The evolution of the neural activity in these circuits depends on their present state as well as on their past states, due to finite propagation time of neural activity along the feedback loop. These systems are often seen to undergo a change from a quiescent state characterized by low level fluctuations to an oscillatory state. We discuss the problem of analyzing this transition using techniques from nonlinear dynamics and stochastic processes. Our main goal is to characterize the nonlinearities which enable autonomous oscillations to occur and to uncover the properties of the noise sources these circuits interact with. The concepts are illustrated on the human pupil light reflex (PLR) which has been studied both theoretically and experimentally using this approach. 5 refs., 3 figs.
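A caricature of delayed neural feedback of the kind discussed above (not the paper's pupil model or parameters) can be integrated directly; the fixed point of the delayed negative-feedback loop loses stability once the loop gain and delay are large enough, and sustained oscillations appear.

    import numpy as np

    # dx/dt = -x(t) + c / (1 + x(t - tau)^n): delayed negative feedback
    dt, tau, c, n = 0.01, 2.0, 3.0, 10
    delay_steps = int(tau / dt)
    T = 20000
    x = np.full(T + delay_steps, 0.5)          # constant history as initial condition

    for t in range(delay_steps, T + delay_steps - 1):
        feedback = c / (1.0 + x[t - delay_steps] ** n)
        x[t + 1] = x[t] + dt * (-x[t] + feedback)

    # with this gain and delay the fixed point is unstable and the activity oscillates;
    # reducing tau or the feedback gain c restores convergence to a quiescent steady state
    print("last few values:", np.round(x[-5:], 3))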
Probing Dynamical Character of Neural Circuits by Using Fuzzy Logic
NASA Astrophysics Data System (ADS)
Hu, Hong; Shi, Zhongzhi
2008-11-01
Analytical study or design of large-scale nonlinear neural circuits, especially chaotic neural circuits, is a difficult task. Here we analyze the function of neural systems by probing the fuzzy logical framework of the neural cells' dynamical equations. In this paper, the fuzzy logical framework of neural cells is used to understand the nonlinear dynamic attributes of a common neural system. We prove that if a neural system works in a non-chaotic way, a suitable fuzzy logical framework can be found, and such a neural system can then be analyzed or designed much as one would analyze or design a digital computer; if a neural system works in a chaotic way, however, an approximation is needed to understand its function.
Neural dynamics in superconducting networks
NASA Astrophysics Data System (ADS)
Segall, Kenneth; Schult, Dan; Crotty, Patrick; Miller, Max
2012-02-01
We discuss the use of Josephson junction networks as analog models for simulating neuron behaviors. A single unit called a "Josephson junction neuron", composed of two Josephson junctions [1], displays behavior that shows characteristics of single neurons such as action potentials, thresholds and refractory periods. Synapses can be modeled as passive filters and can be used to connect neurons together. The sign of the bias current to the Josephson neuron can be used to determine whether the neuron is excitatory or inhibitory. Due to the intrinsic speed of Josephson junctions and their scaling properties as analog models, a large network of Josephson neurons measured over typical lab times contains dynamics which would essentially be impossible to calculate on a computer. We discuss the operating principle of the Josephson neuron, coupling Josephson neurons together to make large networks, and the Kuramoto-like synchronization of a system of disordered junctions. [1] "Josephson junction simulation of neurons," P. Crotty, D. Schult and K. Segall, Physical Review E 82, 011914 (2010).
On the Local-Field Distribution in Attractor Neural Networks
NASA Astrophysics Data System (ADS)
Korutcheva, E.; Koroutchev, K.
In this paper a simple two-layer neural network model, similar to that studied by D. Amit and N. Brunel (Ref. 11), is investigated within the framework of the mean-field approximation. The distributions of the local fields are analytically derived and compared to those obtained in Ref. 11. The dynamic properties are discussed and the basin of attraction in some parametric space is found. A procedure for driving the system into a basin of attraction by using a regulation imposed on the network is proposed. The effect of an external stimulus is shown to have a destructive influence on the attractor, forcing the latter to disappear if the distribution of the stimulus has high enough variance or if the stimulus has a spatial structure with sufficient contrast. The techniques used in this paper for obtaining the analytical results can be applied to more complex topologies of linked recurrent neural networks.
A phase field model for neural cell chemotropism
NASA Astrophysics Data System (ADS)
Najem, Sara; Grant, Martin
2013-04-01
Chemotropism is the action of targeting a part of the cell by means of chemical mediators and cues, and subsequently delimiting the pathway that it should undertake. In a neural cell, this initiates axonal elongation. Herein we model this growth, where chemotropic forcing leads the axon, by a phase field method utilizing two dynamical fields assigned respectively to the cell and to its leading edge. Additionally we quantify the condition for the retraction of the axon which takes place when the cell fails to form a synaptic connection.
The neural dynamics of sensory focus
Clarke, Stephen E.; Longtin, André; Maler, Leonard
2015-01-01
Coordinated sensory and motor system activity leads to efficient localization behaviours; but what neural dynamics enable object tracking and what are the underlying coding principles? Here we show that optimized distance estimation from motion-sensitive neurons underlies object tracking performance in weakly electric fish. First, a relationship is presented for determining the distance that maximizes the Fisher information of a neuron's response to object motion. When applied to our data, the theory correctly predicts the distance chosen by an electric fish engaged in a tracking behaviour, which is associated with a bifurcation between tonic and burst modes of spiking. Although object distance, size and velocity alter the neural response, the location of the Fisher information maximum remains invariant, demonstrating that the circuitry must actively adapt to maintain ‘focus' during relative motion. PMID:26549346
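The core calculation described above can be illustrated with an assumed tuning curve (not the electrosensory data): for a rate response f(d) corrupted by Gaussian noise of variance sigma^2, the Fisher information about distance is I(d) = f'(d)^2 / sigma^2, and the predicted 'focus' distance is its maximum.

    import numpy as np

    d = np.linspace(0.5, 10.0, 1000)                    # object distance, arbitrary units
    f = 50.0 / (1.0 + np.exp((d - 4.0) / 1.0))          # assumed sigmoidal distance tuning
    sigma = 2.0                                         # assumed response noise (std)

    fprime = np.gradient(f, d)
    fisher = fprime ** 2 / sigma ** 2                   # Gaussian-noise Fisher information
    print("distance maximizing Fisher information:", round(float(d[np.argmax(fisher)]), 2))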
Natural neural projection dynamics underlying social behavior.
Gunaydin, Lisa A; Grosenick, Logan; Finkelstein, Joel C; Kauvar, Isaac V; Fenno, Lief E; Adhikari, Avishek; Lammel, Stephan; Mirzabekov, Julie J; Airan, Raag D; Zalocusky, Kelly A; Tye, Kay M; Anikeeva, Polina; Malenka, Robert C; Deisseroth, Karl
2014-06-19
Social interaction is a complex behavior essential for many species and is impaired in major neuropsychiatric disorders. Pharmacological studies have implicated certain neurotransmitter systems in social behavior, but circuit-level understanding of endogenous neural activity during social interaction is lacking. We therefore developed and applied a new methodology, termed fiber photometry, to optically record natural neural activity in genetically and connectivity-defined projections to elucidate the real-time role of specified pathways in mammalian behavior. Fiber photometry revealed that activity dynamics of a ventral tegmental area (VTA)-to-nucleus accumbens (NAc) projection could encode and predict key features of social, but not novel object, interaction. Consistent with this observation, optogenetic control of cells specifically contributing to this projection was sufficient to modulate social behavior, which was mediated by type 1 dopamine receptor signaling downstream in the NAc. Direct observation of deep projection-specific activity in this way captures a fundamental and previously inaccessible dimension of mammalian circuit dynamics. PMID:24949967
Natural neural projection dynamics underlying social behavior
Gunaydin, Lisa A.; Grosenick, Logan; Finkelstein, Joel C.; Kauvar, Isaac V.; Fenno, Lief E.; Adhikari, Avishek; Lammel, Stephan; Mirzabekov, Julie J.; Airan, Raag D.; Zalocusky, Kelly A.; Tye, Kay M.; Anikeeva, Polina; Malenka, Robert C.; Deisseroth, Karl
2014-01-01
Social interaction is a complex behavior essential for many species, and is impaired in major neuropsychiatric disorders. Pharmacological studies have implicated certain neurotransmitter systems in social behavior, but circuit-level understanding of endogenous neural activity during social interaction is lacking. We therefore developed and applied a new methodology, termed fiber photometry, to optically record natural neural activity in genetically- and connectivity-defined projections to elucidate the real-time role of specified pathways in mammalian behavior. Fiber photometry revealed that activity dynamics of a ventral tegmental area (VTA)-to-nucleus accumbens (NAc) projection could encode and predict key features of social but not novel-object interaction. Consistent with this observation, optogenetic control of cells specifically contributing to this projection was sufficient to modulate social behavior, which was mediated by type-1 dopamine receptor signaling downstream in the NAc. Direct observation of projection-specific activity in this way captures a fundamental and previously inaccessible dimension of circuit dynamics. PMID:24949967
Information processing in neural networks with the complex dynamic thresholds
NASA Astrophysics Data System (ADS)
Kirillov, S. Yu.; Nekorkin, V. I.
2016-06-01
A control mechanism for information processing in neural networks is investigated, based on the complex dynamic threshold of neural excitation. The threshold properties are controlled by the slowly varying synaptic current. The dynamic threshold shows high sensitivity to the rate of the synaptic current variation. This allows both flexible, selective tuning of the network elements and nontrivial regimes of neural coding.
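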
Neural Field Models with Threshold Noise.
Thul, Rüdiger; Coombes, Stephen; Laing, Carlo R
2016-12-01
The original neural field model of Wilson and Cowan is often interpreted as the averaged behaviour of a network of switch like neural elements with a distribution of switch thresholds, giving rise to the classic sigmoidal population firing-rate function so prevalent in large scale neuronal modelling. In this paper we explore the effects of such threshold noise without recourse to averaging and show that spatial correlations can have a strong effect on the behaviour of waves and patterns in continuum models. Moreover, for a prescribed spatial covariance function we explore the differences in behaviour that can emerge when the underlying stationary distribution is changed from Gaussian to non-Gaussian. For travelling front solutions, in a system with exponentially decaying spatial interactions, we make use of an interface approach to calculate the instantaneous wave speed analytically as a series expansion in the noise strength. From this we find that, for weak noise, the spatially averaged speed depends only on the choice of covariance function and not on the shape of the stationary distribution. For a system with a Mexican-hat spatial connectivity we further find that noise can induce localised bump solutions, and using an interface stability argument show that there can be multiple stable solution branches. PMID:26936267
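A stripped-down numerical version of the setting described above (assumptions: Heaviside firing rate, exponential coupling, independent frozen Gaussian threshold noise rather than the spatially correlated noise analysed in the paper) shows a front invading the quiescent state:

    import numpy as np

    N, L, dt = 1000, 100.0, 0.05
    x = np.linspace(0, L, N)
    dx = x[1] - x[0]
    w = 0.5 * np.exp(-np.abs(x[:, None] - x[None, :]))              # exponential kernel
    theta = 0.35 + 0.05 * np.random.default_rng(1).normal(size=N)   # frozen threshold noise

    u = np.where(x < 20, 1.0, 0.0)          # step initial condition launches a front
    front = []
    for step in range(1500):
        fire = (u > theta).astype(float)                 # Heaviside firing rate
        u = u + dt * (-u + (w @ fire) * dx)
        if step % 100 == 0:
            front.append(x[np.argmax(u < 0.5)])          # crude front-position estimate

    print("front position over time:", np.round(front, 1))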
Dynamical system modeling via signal reduction and neural network simulation
Paez, T.L.; Hunter, N.F.
1997-11-01
Many dynamical systems tested in the field and the laboratory display significant nonlinear behavior. Accurate characterization of such systems requires modeling in a nonlinear framework. One construct forming a basis for nonlinear modeling is that of the artificial neural network (ANN). However, when system behavior is complex, the amount of data required to perform training can become unreasonable. The authors reduce the complexity of information present in system response measurements using decomposition via canonical variate analysis. They describe a method for decomposing system responses, then modeling the components with ANNs. A numerical example is presented, along with conclusions and recommendations.
An efficient neural network approach to dynamic robot motion planning.
Yang, S X; Meng, M
2000-03-01
In this paper, a biologically inspired neural network approach to real-time collision-free motion planning of mobile robots or robot manipulators in a nonstationary environment is proposed. Each neuron in the topologically organized neural network has only local connections, whose neural dynamics is characterized by a shunting equation. Thus the computational complexity linearly depends on the neural network size. The real-time robot motion is planned through the dynamic activity landscape of the neural network without any prior knowledge of the dynamic environment, without explicitly searching over the free workspace or the collision paths, and without any learning procedures. Therefore it is computationally efficient. The global stability of the neural network is guaranteed by qualitative analysis and the Lyapunov stability theory. The effectiveness and efficiency of the proposed approach are demonstrated through simulation studies. PMID:10935758
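A reduced sketch of the shunting-network idea summarized above (grid size, parameters, and obstacle layout are ad hoc choices, not the authors' values): the target injects positive input, obstacles inject negative input, the shunting dynamics relax to a bounded activity landscape, and the robot follows the activity gradient.

    import numpy as np

    H, W = 10, 10
    A, B, D, mu, dt = 2.0, 1.0, 1.0, 1.0, 0.01
    target, start = (8, 8), (1, 1)
    obstacles = {(4, 4), (4, 5), (4, 6), (5, 6), (6, 6)}

    I = np.zeros((H, W))
    I[target] = 50.0
    for ob in obstacles:
        I[ob] = -50.0

    def neighbours(i, j):
        for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            ni, nj = i + di, j + dj
            if 0 <= ni < H and 0 <= nj < W:
                yield ni, nj

    xact = np.zeros((H, W))
    for _ in range(5000):                       # relax the shunting network
        new = xact.copy()
        for i in range(H):
            for j in range(W):
                excit = max(I[i, j], 0) + mu * sum(max(xact[n], 0) for n in neighbours(i, j))
                inhib = max(-I[i, j], 0)
                new[i, j] = xact[i, j] + dt * (-A * xact[i, j]
                                               + (B - xact[i, j]) * excit
                                               - (D + xact[i, j]) * inhib)
        xact = new

    pos, path = start, [start]
    for _ in range(50):                         # greedy ascent on the activity landscape
        nxt = max(neighbours(*pos), key=lambda n: xact[n])
        if xact[nxt] <= xact[pos]:
            break
        pos = nxt
        path.append(pos)
    print("planned path:", path)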
Population clocks: motor timing with neural dynamics
Buonomano, Dean V.; Laje, Rodrigo
2010-01-01
An understanding of sensory and motor processing will require elucidation of the mechanisms by which the brain tells time. Open questions relate to whether timing relies on dedicated or intrinsic mechanisms and whether distinct mechanisms underlie timing across scales and modalities. Although experimental and theoretical studies support the notion that neural circuits are intrinsically capable of sensory timing on short scales, few general models of motor timing have been proposed. For one class of models, population clocks, it is proposed that time is encoded in the time-varying patterns of activity of a population of neurons. We argue that population clocks emerge from the internal dynamics of recurrently connected networks, are biologically realistic and account for many aspects of motor timing. PMID:20889368
Beyond mean field theory: statistical field theory for neural networks
Buice, Michael A; Chow, Carson C
2014-01-01
Mean field theories have been a stalwart for studying the dynamics of networks of coupled neurons. They are convenient because they are relatively simple and possible to analyze. However, classical mean field theory neglects the effects of fluctuations and correlations due to single neuron effects. Here, we consider various possible approaches for going beyond mean field theory and incorporating correlation effects. Statistical field theory methods, in particular the Doi–Peliti–Janssen formalism, are particularly useful in this regard. PMID:25243014
Neural attractor network for application in visual field data classification
NASA Astrophysics Data System (ADS)
Fink, Wolfgang
2004-07-01
The purpose was to introduce a novel method for computer-based classification of visual field data derived from perimetric examination, that may act as a ' counsellor', providing an independent 'second opinion' to the diagnosing physician. The classification system consists of a Hopfield-type neural attractor network that obtains its input data from perimetric examination results. An iterative relaxation process determines the states of the neurons dynamically. Therefore, even 'noisy' perimetric output, e.g., early stages of a disease, may eventually be classified correctly according to the predefined idealized visual field defect (scotoma) patterns, stored as attractors of the network, that are found with diseases of the eye, optic nerve and the central nervous system. Preliminary tests of the classification system on real visual field data derived from perimetric examinations have shown a classification success of over 80%. Some of the main advantages of the Hopfield-attractor-network-based approach over feed-forward type neural networks are: (1) network architecture is defined by the classification problem; (2) no training is required to determine the neural coupling strengths; (3) assignment of an auto-diagnosis confidence level is possible by means of an overlap parameter and the Hamming distance. In conclusion, the novel method for computer-based classification of visual field data, presented here, furnishes a valuable first overview and an independent 'second opinion' in judging perimetric examination results, pointing towards a final diagnosis by a physician. It should not be considered a substitute for the diagnosing physician. Thanks to the worldwide accessibility of the Internet, the classification system offers a promising perspective towards modern computer-assisted diagnosis in both medicine and tele-medicine, for example and in particular, with respect to non-ophthalmic clinics or in communities where perimetric expertise is not readily available.
Neural attractor network for application in visual field data classification.
Fink, Wolfgang
2004-07-01
The purpose was to introduce a novel method for computer-based classification of visual field data derived from perimetric examination, that may act as a 'counsellor', providing an independent 'second opinion' to the diagnosing physician. The classification system consists of a Hopfield-type neural attractor network that obtains its input data from perimetric examination results. An iterative relaxation process determines the states of the neurons dynamically. Therefore, even 'noisy' perimetric output, e.g., early stages of a disease, may eventually be classified correctly according to the predefined idealized visual field defect (scotoma) patterns, stored as attractors of the network, that are found with diseases of the eye, optic nerve and the central nervous system. Preliminary tests of the classification system on real visual field data derived from perimetric examinations have shown a classification success of over 80%. Some of the main advantages of the Hopfield-attractor-network-based approach over feed-forward type neural networks are: (1) network architecture is defined by the classification problem; (2) no training is required to determine the neural coupling strengths; (3) assignment of an auto-diagnosis confidence level is possible by means of an overlap parameter and the Hamming distance. In conclusion, the novel method for computer-based classification of visual field data, presented here, furnishes a valuable first overview and an independent 'second opinion' in judging perimetric examination results, pointing towards a final diagnosis by a physician. It should not be considered a substitute for the diagnosing physician. Thanks to the worldwide accessibility of the Internet, the classification system offers a promising perspective towards modern computer-assisted diagnosis in both medicine and tele-medicine, for example and in particular, with respect to non-ophthalmic clinics or in communities where perimetric expertise is not readily available
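The relaxation scheme described in the two entries above can be illustrated with a generic Hopfield network (random +/-1 patterns stand in for the idealized scotoma maps; no attempt is made to reproduce the clinical patterns): patterns are stored with a Hebbian rule, a noisy probe is relaxed asynchronously, and the overlap and Hamming distance report the classification and a confidence-like measure.

    import numpy as np

    rng = np.random.default_rng(2)
    N, n_patterns = 100, 3
    patterns = rng.choice([-1, 1], size=(n_patterns, N))   # stand-ins for idealized defect maps

    W = (patterns.T @ patterns) / N                         # Hebbian couplings (no training)
    np.fill_diagonal(W, 0.0)

    probe = patterns[1].copy()                              # "noisy perimetric result"
    flip = rng.choice(N, size=20, replace=False)
    probe[flip] *= -1

    state = probe.copy()
    for _ in range(10):                                     # asynchronous relaxation
        for i in rng.permutation(N):
            state[i] = 1 if W[i] @ state >= 0 else -1

    overlaps = patterns @ state / N                         # overlap with each stored pattern
    hamming = np.sum(patterns != state, axis=1)
    print("overlaps:", np.round(overlaps, 2), "Hamming distances:", hamming)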
Neural dynamics of phonological processing in the dorsal auditory stream.
Liebenthal, Einat; Sabri, Merav; Beardsley, Scott A; Mangalathu-Arumana, Jain; Desai, Anjali
2013-09-25
Neuroanatomical models hypothesize a role for the dorsal auditory pathway in phonological processing as a feedforward efferent system (Davis and Johnsrude, 2007; Rauschecker and Scott, 2009; Hickok et al., 2011). But the functional organization of the pathway, in terms of time course of interactions between auditory, somatosensory, and motor regions, and the hemispheric lateralization pattern is largely unknown. Here, ambiguous duplex syllables, with elements presented dichotically at varying interaural asynchronies, were used to parametrically modulate phonological processing and associated neural activity in the human dorsal auditory stream. Subjects performed syllable and chirp identification tasks, while event-related potentials and functional magnetic resonance images were concurrently collected. Joint independent component analysis was applied to fuse the neuroimaging data and study the neural dynamics of brain regions involved in phonological processing with high spatiotemporal resolution. Results revealed a highly interactive neural network associated with phonological processing, composed of functional fields in posterior temporal gyrus (pSTG), inferior parietal lobule (IPL), and ventral central sulcus (vCS) that were engaged early and almost simultaneously (at 80-100 ms), consistent with a direct influence of articulatory somatomotor areas on phonemic perception. Left hemispheric lateralization was observed 250 ms earlier in IPL and vCS than pSTG, suggesting that functional specialization of somatomotor (and not auditory) areas determined lateralization in the dorsal auditory pathway. The temporal dynamics of the dorsal auditory pathway described here offer a new understanding of its functional organization and demonstrate that temporal information is essential to resolve neural circuits underlying complex behaviors. PMID:24068810
An integrated architecture of adaptive neural network control for dynamic systems
Ke, Liu; Tokar, R.; Mcvey, B.
1994-07-01
In this study, an integrated neural network control architecture for nonlinear dynamic systems is presented. Much of the recent work in the neural network control field uses no error feedback as a control input, which raises an adaptation problem. The integrated architecture in this paper combines feedforward control and error-feedback adaptive control using neural networks. The paper reveals the different internal functionality of these two kinds of neural network controllers for certain input styles, e.g., state feedback and error feedback. Feedforward neural network controllers with state feedback establish fixed control mappings which cannot adapt when model uncertainties are present. With error feedback, neural network controllers learn the slopes or gains with respect to the error feedback, yielding error-driven adaptive control systems. The results demonstrate that the two kinds of control scheme can be combined to realize their individual advantages. Testing with disturbances added to the plant shows good tracking and adaptation.
NASA Astrophysics Data System (ADS)
Chiel, Hillel J.; Thomas, Peter J.
2011-12-01
, the sun, earth and moon) proved to be far more difficult. In the late nineteenth century, Poincaré made significant progress on this problem, introducing a geometric method of reasoning about solutions to differential equations (Diacu and Holmes 1996). This work had a powerful impact on mathematicians and physicists, and also began to influence biology. In his 1925 book, based on his work starting in 1907, and that of others, Lotka used nonlinear differential equations and concepts from dynamical systems theory to analyze a wide variety of biological problems, including oscillations in the numbers of predators and prey (Lotka 1925). Although little was known in detail about the function of the nervous system, Lotka concluded his book with speculations about consciousness and the implications this might have for creating a mathematical formulation of biological systems. Much experimental work in the 1930s and 1940s focused on the biophysical mechanisms of excitability in neural tissue, and Rashevsky and others continued to apply tools and concepts from nonlinear dynamical systems theory as a means of providing a more general framework for understanding these results (Rashevsky 1960, Landahl and Podolsky 1949). The publication of Hodgkin and Huxley's classic quantitative model of the action potential in 1952 created a new impetus for these studies (Hodgkin and Huxley 1952). In 1955, FitzHugh published an important paper that summarized much of the earlier literature, and used concepts from phase plane analysis such as asymptotic stability, saddle points, separatrices and the role of noise to provide a deeper theoretical and conceptual understanding of threshold phenomena (Fitzhugh 1955, Izhikevich and FitzHugh 2006). The Fitzhugh-Nagumo equations constituted an important two-dimensional simplification of the four-dimensional Hodgkin and Huxley equations, and gave rise to an extensive literature of analysis. Many of the papers in this special issue build on tools
Two-photon imaging and analysis of neural network dynamics
NASA Astrophysics Data System (ADS)
Lütcke, Henry; Helmchen, Fritjof
2011-08-01
The glow of a starry night sky, the smell of a freshly brewed cup of coffee or the sound of ocean waves breaking on the beach are representations of the physical world that have been created by the dynamic interactions of thousands of neurons in our brains. How the brain mediates perceptions, creates thoughts, stores memories and initiates actions remains one of the most profound puzzles in biology, if not all of science. A key to a mechanistic understanding of how the nervous system works is the ability to measure and analyze the dynamics of neuronal networks in the living organism in the context of sensory stimulation and behavior. Dynamic brain properties have been fairly well characterized on the microscopic level of individual neurons and on the macroscopic level of whole brain areas largely with the help of various electrophysiological techniques. However, our understanding of the mesoscopic level comprising local populations of hundreds to thousands of neurons (so-called 'microcircuits') remains comparably poor. Predominantly, this has been due to the technical difficulties involved in recording from large networks of neurons with single-cell spatial resolution and near-millisecond temporal resolution in the brain of living animals. In recent years, two-photon microscopy has emerged as a technique which meets many of these requirements and thus has become the method of choice for the interrogation of local neural circuits. Here, we review the state-of-research in the field of two-photon imaging of neuronal populations, covering the topics of microscope technology, suitable fluorescent indicator dyes, staining techniques, and in particular analysis techniques for extracting relevant information from the fluorescence data. We expect that functional analysis of neural networks using two-photon imaging will help to decipher fundamental operational principles of neural microcircuits.
Adaptive control of nonlinear systems using multistage dynamic neural networks
NASA Astrophysics Data System (ADS)
Gupta, Madan M.; Rao, Dandina H.
1992-11-01
In this paper we present a new neuron architecture, called the dynamic neural unit (DNU). The topology of the proposed neuronal model embodies delay elements, feedforward and feedback signals weighted by the synaptic weights, and a time-varying nonlinear activation function, and is thus different from the conventionally assumed architecture of neurons. The learning algorithm for the proposed neuronal structure and the corresponding implementation scheme are presented. A multi-stage dynamic neural network is developed using the DNU as the basic processing element. The performance evaluation of the dynamic neural network is presented for nonlinear dynamic systems under various situations. The capabilities of the proposed neural network model not only account for the learning and control actions emulating some of the biological control functions, but also provide a promising parallel-distributed intelligent control scheme for large-scale complex dynamic systems.
Neural dynamics during repetitive visual stimulation
NASA Astrophysics Data System (ADS)
Tsoneva, Tsvetomira; Garcia-Molina, Gary; Desain, Peter
2015-12-01
Objective. Steady-state visual evoked potentials (SSVEPs), the brain responses to repetitive visual stimulation (RVS), are widely utilized in neuroscience. Their high signal-to-noise ratio and ability to entrain oscillatory brain activity are beneficial for their applications in brain-computer interfaces, investigation of neural processes underlying brain rhythmic activity (steady-state topography) and probing the causal role of brain rhythms in cognition and emotion. This paper aims at analyzing the space and time EEG dynamics in response to RVS at the frequency of stimulation and ongoing rhythms in the delta, theta, alpha, beta, and gamma bands. Approach.We used electroencephalography (EEG) to study the oscillatory brain dynamics during RVS at 10 frequencies in the gamma band (40-60 Hz). We collected an extensive EEG data set from 32 participants and analyzed the RVS evoked and induced responses in the time-frequency domain. Main results. Stable SSVEP over parieto-occipital sites was observed at each of the fundamental frequencies and their harmonics and sub-harmonics. Both the strength and the spatial propagation of the SSVEP response seem sensitive to stimulus frequency. The SSVEP was more localized around the parieto-occipital sites for higher frequencies (>54 Hz) and spread to fronto-central locations for lower frequencies. We observed a strong negative correlation between stimulation frequency and relative power change at that frequency, the first harmonic and the sub-harmonic components over occipital sites. Interestingly, over parietal sites for sub-harmonics a positive correlation of relative power change and stimulation frequency was found. A number of distinct patterns in delta (1-4 Hz), theta (4-8 Hz), alpha (8-12 Hz) and beta (15-30 Hz) bands were also observed. The transient response, from 0 to about 300 ms after stimulation onset, was accompanied by increase in delta and theta power over fronto-central and occipital sites, which returned to baseline
The temporal dynamics of resilience: Neural recovery as a biomarker.
Walter, Henrik; Erk, Susanne; Veer, Ilya M
2015-01-01
Resilience can be defined as the capability of an individual to maintain health despite stress and adversity. Here we suggest to study the temporal dynamics of neural processes associated with affective perturbation and emotion regulation at different time scales to investigate the mechanisms of resilience. Parameters related to neural recovery might serve as a predictive biomarker for resilience. PMID:26786503
ERIC Educational Resources Information Center
Noyons, E. C. M.; van Raan, A. F. J.
1998-01-01
Using bibliometric mapping techniques, authors developed a methodology of self-organized structuring of scientific fields which was applied to neural network research. Explores the evolution of a data generated field structure by monitoring the interrelationships between subfields, the internal structure of subfields, and the dynamic features of…
Dynamic causal models of neural system dynamics: current state and future extensions
Stephan, Klaas E.; Harrison, Lee M.; Kiebel, Stefan J.; David, Olivier; Penny, Will D.; Friston, Karl J.
2009-01-01
Complex processes resulting from the interaction of multiple elements can rarely be understood by analytical scientific approaches alone; additionally, mathematical models of system dynamics are required. This insight, which disciplines like physics have embraced for a long time already, is gradually gaining importance in the study of cognitive processes by functional neuroimaging. In this field, causal mechanisms in neural systems are described in terms of effective connectivity. Recently, Dynamic Causal Modelling (DCM) was introduced as a generic method to estimate effective connectivity from neuroimaging data in a Bayesian fashion. One of the key advantages of DCM over previous methods is that it distinguishes between neural state equations and modality-specific forward models that translate neural activity into a measured signal. Another strength is its natural relation to Bayesian Model Selection (BMS) procedures. In this article, we review the conceptual and mathematical basis of DCM and its implementation for functional magnetic resonance imaging data and event-related potentials. After introducing the application of BMS in the context of DCM, we conclude with an outlook to future extensions of DCM. These extensions are guided by the long-term goal of using dynamic system models for pharmacological and clinical applications, particularly with regard to synaptic plasticity. PMID:17426386
Shaping the learning curve: epigenetic dynamics in neural plasticity
Bronfman, Zohar Z.; Ginsburg, Simona; Jablonka, Eva
2014-01-01
A key characteristic of learning and neural plasticity is state-dependent acquisition dynamics reflected by the non-linear learning curve that links increase in learning with practice. Here we propose that the manner by which epigenetic states of individual cells change during learning contributes to the shape of the neural and behavioral learning curve. We base our suggestion on recent studies showing that epigenetic mechanisms such as DNA methylation, histone acetylation, and RNA-mediated gene regulation are intimately involved in the establishment and maintenance of long-term neural plasticity, reflecting specific learning-histories and influencing future learning. Our model, which is the first to suggest a dynamic molecular account of the shape of the learning curve, leads to several testable predictions regarding the link between epigenetic dynamics at the promoter, gene-network, and neural-network levels. This perspective opens up new avenues for therapeutic interventions in neurological pathologies. PMID:25071483
Multistage neural network model for dynamic scene analysis
Ajjimarangsee, P.
1989-01-01
This research is concerned with dynamic scene analysis. The goal of scene analysis is to recognize objects and have a meaningful interpretation of the scene from which images are obtained. The task of the dynamic scene analysis process generally consists of region identification, motion analysis and object recognition. The objective of this research is to develop clustering algorithms using a neural network approach and to investigate a multi-stage neural network model for region identification and motion analysis. The research is separated into three parts. First, a clustering algorithm using Kohonen's self-organizing feature map network is developed to be capable of generating continuous membership-valued outputs. A newly developed version of the updating algorithm of the network is introduced to achieve a high degree of parallelism. A neural network model for the fuzzy c-means algorithm is proposed. In the second part, the parallel algorithms of a neural network model for clustering using the self-organizing feature maps approach and a neural network that models the fuzzy c-means algorithm are modified for implementation on a distributed memory parallel architecture. In the third part, supervised and unsupervised neural network models for motion analysis are investigated. For a supervised neural network, a three-layer perceptron network is trained by a series of images to recognize the movement of the objects. For the unsupervised neural network, a self-organizing feature mapping network will learn to recognize the movement of the objects without an explicit training phase.
Neural Dynamics of Attentional Cross-Modality Control
Rabinovich, Mikhail; Tristan, Irma; Varona, Pablo
2013-01-01
Attentional networks that integrate many cortical and subcortical elements dynamically control mental processes to focus on specific events and make a decision. The resources of attentional processing are finite. Nevertheless, we often face situations in which it is necessary to simultaneously process several modalities, for example, to switch attention between players in a soccer field. Here we use a global brain mode description to build a model of attentional control dynamics. This model is based on sequential information processing stability conditions that are realized through nonsymmetric inhibition in cortical circuits. In particular, we analyze the dynamics of attentional switching and focus in the case of parallel processing of three interacting mental modalities. Using an excitatory-inhibitory network, we investigate how the bifurcations between different attentional control strategies depend on the stimuli and analyze the relationship between the time of attention focus and the strength of the stimuli. We discuss the interplay between attention and decision-making: in this context, a decision-making process is a controllable bifurcation of the attention strategy. We also suggest the dynamical evaluation of attentional resources in neural sequence processing. PMID:23696890
A biologically inspired neural network for dynamic programming.
Francelin Romero, R A; Kacpryzk, J; Gomide, F
2001-12-01
An artificial neural network with a two-layer feedback topology and generalized recurrent neurons, for solving nonlinear discrete dynamic optimization problems, is developed. A direct method to assign the weights of neural networks is presented. The method is based on Bellman's Optimality Principle and on the interchange of information which occurs during the synaptic chemical processing among neurons. The neural network based algorithm is an advantageous approach for dynamic programming due to the inherent parallelism of the neural networks; further, it reduces the severity of computational problems that can occur in conventional methods. Some illustrative application examples are presented to show how this approach works, including shortest-path and fuzzy decision-making problems. PMID:11852439
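The optimality-principle relaxation that the network above implements in parallel can be written compactly as plain value iteration on a small weighted graph; this is only the underlying Bellman recursion, not the authors' weight-assignment scheme.

    import numpy as np

    INF = np.inf
    # edge costs C[i, j] from node i to node j (INF = no edge); node 0 is the source
    C = np.array([[0.0, 2.0, 5.0, INF],
                  [INF, 0.0, 1.0, 6.0],
                  [INF, INF, 0.0, 2.0],
                  [INF, INF, INF, 0.0]])

    n = C.shape[0]
    V = np.full(n, INF)
    V[0] = 0.0
    for _ in range(n - 1):                        # Bellman relaxation sweeps
        V = np.minimum(V, (C.T + V).min(axis=1))  # V[j] = min_i (V[i] + C[i, j])

    print("shortest-path costs from node 0:", V)  # expected: [0. 2. 3. 5.]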
Absolute stability and synchronization in neural field models with transmission delays
NASA Astrophysics Data System (ADS)
Kao, Chiu-Yen; Shih, Chih-Wen; Wu, Chang-Hong
2016-08-01
Neural fields model macroscopic parts of the cortex which involve several populations of neurons. We consider a class of neural field models which are represented by integro-differential equations with transmission time delays which are space-dependent. The considered domains underlying the systems can be bounded or unbounded. A new approach, called sequential contracting, instead of the conventional Lyapunov functional technique, is employed to investigate the global dynamics of such systems. Sufficient conditions for the absolute stability and synchronization of the systems are established. Several numerical examples are presented to demonstrate the theoretical results.
Neural Dynamics Underlying Event-Related Potentials
NASA Technical Reports Server (NTRS)
Shah, Ankoor S.; Bressler, Steven L.; Knuth, Kevin H.; Ding, Ming-Zhou; Mehta, Ashesh D.; Ulbert, Istvan; Schroeder, Charles E.
2003-01-01
There are two opposing hypotheses about the brain mechanisms underlying sensory event-related potentials (ERPs). One holds that sensory ERPs are generated by phase resetting of ongoing electroencephalographic (EEG) activity, and the other that they result from signal averaging of stimulus-evoked neural responses. We tested several contrasting predictions of these hypotheses by direct intracortical analysis of neural activity in monkeys. Our findings clearly demonstrate evoked response contributions to the sensory ERP in the monkey, and they suggest the likelihood that a mixed (Evoked/Phase Resetting) model may account for the generation of scalp ERPs in humans.
Measuring Whole-Brain Neural Dynamics and Behavior of Freely-Moving C. elegans
NASA Astrophysics Data System (ADS)
Shipley, Frederick; Nguyen, Jeffrey; Plummer, George; Shaevitz, Joshua; Leifer, Andrew
2015-03-01
Bridging the gap between an organism's neural dynamics and its ultimate behavior is the fundamental goal of neuroscience. Previously, probes of neural dynamics have been limited to measurements from a small number of neurons, whether by electrode or optogenetic recordings. Here we present an instrument to simultaneously monitor neural activity from every neuron in the head of a freely moving Caenorhabditis elegans while recording behavior at the same time. Previously, whole-brain imaging has been demonstrated in C. elegans, but only in restrained and anesthetized animals (1). For studying neural coding of behavior it is crucial to study neural activity in freely behaving animals. Neural activity is recorded optically from cells expressing a calcium indicator, GCaMP6. Real-time computer vision tracks the worm's position in x-y, while a piezo stage sweeps through the brain in z, yielding five brain-volumes per second. Behavior is recorded under infrared, dark-field imaging. This tool will allow us to directly correlate neural activity with behavior, and we will present progress toward this goal. Thank you to the Simons Foundation and Princeton University for supporting this research.
Beyond slots and resources: grounding cognitive concepts in neural dynamics.
Johnson, Jeffrey S; Simmering, Vanessa R; Buss, Aaron T
2014-08-01
Research over the past decade has suggested that the ability to hold information in visual working memory (VWM) may be limited to as few as three to four items. However, the precise nature and source of these capacity limits remains hotly debated. Most commonly, capacity limits have been inferred from studies of visual change detection, in which performance declines systematically as a function of the number of items that participants must remember. According to one view, such declines indicate that a limited number of fixed-resolution representations are held in independent memory "slots." Another view suggests that such capacity limits are more apparent than real, but emerge as limited memory resources are distributed across more to-be-remembered items. Here we argue that, although both perspectives have merit and have generated and explained impressive amounts of empirical data, their central focus on the representations--rather than processes--underlying VWM may ultimately limit continuing progress in this area. As an alternative, we describe a neurally grounded, process-based approach to VWM: the dynamic field theory. Simulations demonstrate that this model can account for key aspects of behavioral performance in change detection, in addition to generating novel behavioral predictions that have been confirmed experimentally. Furthermore, we describe extensions of the model to recall tasks, the integration of visual features, cognitive development, individual differences, and functional imaging studies of VWM. We conclude by discussing the importance of grounding psychological concepts in neural dynamics, as a first step toward understanding the link between brain and behavior. PMID:24306983
Beyond slots and resources: Grounding cognitive concepts in neural dynamics
Johnson, Jeffrey S.; Simmering, Vanessa R.; Buss, Aaron T.
2014-01-01
Research over the past decade has suggested that the ability to hold information in visual working memory (VWM) may be limited to as few as 3-4 items. However, the precise nature and source of these capacity limits remains hotly debated. Most commonly, capacity limits have been inferred from studies of visual change detection, in which performance declines systematically as a function of the number of items participants must remember. According to one view, such declines indicate that a limited number of fixed-resolution representations are held in independent memory ‘slots’. Another view suggests that capacity limits are more apparent than real, emerging as limited memory resources are distributed across more to-be-remembered items. Here we argue that, although both perspectives have merit and have generated and explained an impressive amount of empirical data, their central focus on the representations—rather than processes—underlying VWM may ultimately limit continuing progress in this area. As an alternative, we describe a neurally-grounded, process-based approach to VWM: the dynamic field theory. Simulations demonstrate that this model can account for key aspects of behavioral performance in change detection, in addition to generating novel behavioral predictions that have been confirmed experimentally. Furthermore, we describe extensions of the model to recall tasks, the integration of visual features, cognitive development, individual differences, and functional imaging studies of VWM. We conclude by discussing the importance of grounding psychological concepts in neural dynamics as a first step toward understanding the link between brain and behavior. PMID:24306983
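A minimal Amari-style field (a standard textbook construction, not the full dynamic field theory implementation used in the two entries above) already shows the key ingredient: after a transient input, local excitation and surround inhibition hold a self-sustained activation peak that can serve as a working-memory representation.

    import numpy as np

    N = 181
    x = np.linspace(-90, 90, N)                        # feature dimension, e.g. degrees
    dx = x[1] - x[0]

    def gauss(d, sigma):
        return np.exp(-d ** 2 / (2 * sigma ** 2))

    D = x[:, None] - x[None, :]
    kernel = 1.0 * gauss(D, 6.0) - 0.3 * gauss(D, 15.0)   # local excitation, broader inhibition

    u = np.full(N, -3.0)                               # resting level below firing threshold
    for t in range(600):
        stim = 6.0 * gauss(x - 20.0, 5.0) if t < 100 else 0.0   # transient input at +20
        f = 1.0 / (1.0 + np.exp(-4.0 * u))                      # steep sigmoidal rate function
        u = u + 0.1 * (-u - 3.0 + stim + (kernel @ f) * dx)

    print("self-sustained peak after stimulus offset at:", x[np.argmax(u)])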
A theory of neural dimensionality, dynamics, and measurement
NASA Astrophysics Data System (ADS)
Ganguli, Surya
In many experiments, neuroscientists tightly control behavior, record many trials, and obtain trial-averaged firing rates from hundreds of neurons in circuits containing millions of behaviorally relevant neurons. Dimensionality reduction has often shown that such datasets are strikingly simple; they can be described using a much smaller number of dimensions than the number of recorded neurons, and the resulting projections onto these dimensions yield a remarkably insightful dynamical portrait of circuit computation. This ubiquitous simplicity raises several profound and timely conceptual questions. What is the origin of this simplicity and its implications for the complexity of brain dynamics? Would neuronal datasets become more complex if we recorded more neurons? How and when can we trust dynamical portraits obtained from only hundreds of neurons in circuits containing millions of neurons? We present a theory that answers these questions, and test it using neural data recorded from reaching monkeys. Overall, this theory yields a picture of the neural measurement process as a random projection of neural dynamics, conceptual insights into how we can reliably recover dynamical portraits in such under-sampled measurement regimes, and quantitative guidelines for the design of future experiments. Moreover, it reveals the existence of phase transition boundaries in our ability to successfully decode cognition and behavior as a function of the number of recorded neurons, the complexity of the task, and the smoothness of neural dynamics.
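The measurement picture sketched in this abstract can be mimicked in a toy setting (all sizes and dynamics below are assumptions): a smooth three-dimensional latent trajectory is read out through a random projection onto a few hundred 'recorded neurons', and PCA on the recordings still recovers the latent dimensionality.

    import numpy as np

    rng = np.random.default_rng(3)
    T, n_latent, n_recorded = 2000, 3, 200

    t = np.linspace(0, 20 * np.pi, T)
    latent = np.stack([np.sin(t), np.cos(1.3 * t), np.sin(0.5 * t + 1.0)], axis=1)

    # recording a subset of a nominally huge population acts as a random projection
    W_recorded = rng.normal(0, 1, size=(n_latent, n_recorded))
    recorded = latent @ W_recorded                      # trial-averaged "firing rates"

    recorded -= recorded.mean(axis=0)
    s = np.linalg.svd(recorded, compute_uv=False)
    var = s ** 2 / np.sum(s ** 2)
    print("variance captured by the first five PCs:", np.round(var[:5], 3))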
Magnetic field induced dynamical chaos
Ray, Somrita; Baura, Alendu; Bag, Bidhan Chandra
2013-12-15
In this article, we have studied the dynamics of a charged particle in the presence of a magnetic field. The motion of the particle is confined to the x–y plane under a two-dimensional nonlinear potential. We have shown that dynamical chaos induced by a constant magnetic field is possible even for a force which is derived from a simple potential. For a given strength of the magnetic field, initial position, and velocity of the particle, the dynamics may be regular, but it may become chaotic when the field is time dependent. Chaotic dynamics occurs very often if the field is time dependent. The origin of chaos has been explored using the Hamiltonian function of the dynamics in terms of action and angle variables. The applicability of the present study has been discussed with a few examples.
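A self-contained numerical illustration of the setup described above (the quartic potential, field strength, and drive frequency are arbitrary choices, not those of the paper): a unit-mass, unit-charge particle moves in the x-y plane under U(x, y) = (x^4 + y^4)/4 with an oscillating out-of-plane magnetic field.

    import numpy as np

    def B(t):
        return 1.0 * (1.0 + 0.5 * np.cos(2.0 * t))       # time-dependent field strength

    def deriv(state, t):
        x, y, vx, vy = state
        ax = B(t) * vy - x ** 3                          # q (v x B)_x - dU/dx
        ay = -B(t) * vx - y ** 3                         # q (v x B)_y - dU/dy
        return np.array([vx, vy, ax, ay])

    def rk4_step(state, t, dt):
        k1 = deriv(state, t)
        k2 = deriv(state + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = deriv(state + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = deriv(state + dt * k3, t + dt)
        return state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

    state, dt = np.array([1.0, 0.0, 0.0, 0.5]), 0.002
    for step in range(50000):
        state = rk4_step(state, step * dt, dt)
    print("final (x, y, vx, vy):", np.round(state, 3))
    # chaos can be diagnosed, e.g., by the divergence of two nearby initial conditions
    # or by a Poincare section of (x, vx) sampled at the drive period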
Dynamic model of neural networks with asymmetric diluted couplings
NASA Astrophysics Data System (ADS)
Choi, M. Y.; Choi, Meekyoung
1990-06-01
We study an asymmetric diluted version of the dynamic model for neural networks proposed recently, which explicitly takes into account the existence of several time scales without discretizing time. The dynamics is neither totally synchronous nor totally asynchronous, and the couplings in the neural networks are asymmetric. These considerations may be regarded as more biologically realistic. We obtain the phase diagram as a function of the temperature ε⁻¹, the capacity α, and the ratio a of the refractory period to the action potential duration.
Spontaneous Neural Dynamics and Multi-scale Network Organization
Foster, Brett L.; He, Biyu J.; Honey, Christopher J.; Jerbi, Karim; Maier, Alexander; Saalmann, Yuri B.
2016-01-01
Spontaneous neural activity has historically been viewed as task-irrelevant noise that should be controlled for via experimental design, and removed through data analysis. However, electrophysiology and functional MRI studies of spontaneous activity patterns, which have greatly increased in number over the past decade, have revealed a close correspondence between these intrinsic patterns and the structural network architecture of functional brain circuits. In particular, by analyzing the large-scale covariation of spontaneous hemodynamics, researchers are able to reliably identify functional networks in the human brain. Subsequent work has sought to identify the corresponding neural signatures via electrophysiological measurements, as this would elucidate the neural origin of spontaneous hemodynamics and would reveal the temporal dynamics of these processes across slower and faster timescales. Here we survey common approaches to quantifying spontaneous neural activity, reviewing their empirical success, and their correspondence with the findings of neuroimaging. We emphasize invasive electrophysiological measurements, which are amenable to amplitude- and phase-based analyses, and which can report variations in connectivity with high spatiotemporal precision. After summarizing key findings from the human brain, we survey work in animal models that display similar multi-scale properties. We highlight that, across many spatiotemporal scales, the covariance structure of spontaneous neural activity reflects structural properties of neural networks and dynamically tracks their functional repertoire. PMID:26903823
Toward modeling a dynamic biological neural network.
Ross, M D; Dayhoff, J E; Mugler, D H
1990-01-01
Mammalian macular endorgans are linear bioaccelerometers located in the vestibular membranous labyrinth of the inner ear. In this paper, the organization of the endorgan is interpreted on physical and engineering principles. This is a necessary prerequisite to mathematical and symbolic modeling of information processing by the macular neural network. Mathematical notations that describe the functioning system were used to produce a novel, symbolic model. The model is six-tiered and is constructed to mimic the neural system. Initial simulations show that the network functions best when some of the detecting elements (type I hair cells) are excitatory and others (type II hair cells) are weakly inhibitory. The simulations also illustrate the importance of disinhibition of receptors located in the third tier in shaping nerve discharge patterns at the sixth tier in the model system. PMID:11538873
Neural network with dynamically adaptable neurons
NASA Technical Reports Server (NTRS)
Tawel, Raoul (Inventor)
1994-01-01
This invention is an adaptive neuron for use in neural network processors. The adaptive neuron participates in the supervised learning phase of operation on a co-equal basis with the synapse matrix elements by adaptively changing its gain in a similar manner to the change of weights in the synapse IO elements. In this manner, training time is decreased by as much as three orders of magnitude.
On the dynamics of delayed neural feedback loops
NASA Astrophysics Data System (ADS)
Brandt, Sebastian F.
The computational potential of neural circuits arises from the interconnections and interactions between their elements. Feedback is a universal feature of neuronal organization and has been shown to be a key element in neural signal processing. In biological neural circuits, delays arise from finite axonal conduction speeds and at the synaptic level due to transmitter release dynamics. In this work, the influence of temporal delay on neural network dynamics is investigated. The basic feedback mechanisms involved in the regulation of neural activity consist of small circuits composed of two to three neurons. We analyze a system of two interconnected neurons and show that finite delays can induce oscillations in the system. Employing a perturbative approach in combination with a resummation scheme, we evaluate the limit cycle dynamics of the system. We show that synchronous oscillations can arise when the delays are asymmetric. Furthermore, distributed delays can stabilize the system and lead to an increased range of parameters for which the system converges to a stable fixed point. We next consider a delayed neural triad with a characteristic topology commonly found in neural feedback circuits. We show that the system can be both robust and sensitive with regard to small parameter changes and examine the significance of the different projections. We then address the functional role of a particular feedback loop found in the visual system of nonmammalian vertebrates. We show that the system can function as a 'winner-take-all' and novelty detector and examine the influence of temporal delays on the system's performance. Biological systems are subject to stochastic influences and display some degree of disorder. We examine the role of noise and its effect on the stability of the synchronized state in a system of two coupled active rotators. Finally, we show that disordering the driving forces in arrays of coupled oscillators can lead to synchronization in these systems.
Non-Lipschitzian dynamics for neural net modelling
NASA Technical Reports Server (NTRS)
Zak, Michail
1989-01-01
Failure of the Lipschitz condition in unstable equilibrium points of dynamical systems leads to a multiple-choice response to an initial deterministic input. The evolution of such systems is characterized by a special type of unpredictability measured by unbounded Liapunov exponents. Possible relation of these systems to future neural networks is discussed.
Dynamic behaviors of the non-neural ectoderm during mammalian cranial neural tube closure.
Ray, Heather J; Niswander, Lee A
2016-08-15
The embryonic brain and spinal cord initially form through the process of neural tube closure (NTC). NTC is thought to be highly similar between rodents and humans, and studies of mouse genetic mutants have greatly increased our understanding of the molecular basis of NTC with relevance for human neural tube defects. In addition, studies using amphibian and chick embryos have shed light into the cellular and tissue dynamics underlying NTC. However, the dynamics of mammalian NTC has been difficult to study due to in utero development until recently when advances in mouse embryo ex vivo culture techniques along with confocal microscopy have allowed for imaging of mouse NTC in real time. Here, we have performed live imaging of mouse embryos with a particular focus on the non-neural ectoderm (NNE). Previous studies in multiple model systems have found that the NNE is important for proper NTC, but little is known about the behavior of these cells during mammalian NTC. Here we utilized a NNE-specific genetic labeling system to assess NNE dynamics during murine NTC and identified different NNE cell behaviors as the cranial region undergoes NTC. These results bring valuable new insight into regional differences in cellular behavior during NTC that may be driven by different molecular regulators and which may underlie the various positional disruptions of NTC observed in humans with neural tube defects. PMID:27343896
Dynamics of a neural system with a multiscale architecture
Breakspear, Michael; Stam, Cornelis J
2005-01-01
The architecture of the brain is characterized by a modular organization repeated across a hierarchy of spatial scales—neurons, minicolumns, cortical columns, functional brain regions, and so on. It is important to consider that the processes governing neural dynamics at any given scale are not only determined by the behaviour of other neural structures at that scale, but also by the emergent behaviour of smaller scales, and the constraining influence of activity at larger scales. In this paper, we introduce a theoretical framework for neural systems in which the dynamics are nested within a multiscale architecture. In essence, the dynamics at each scale are determined by a coupled ensemble of nonlinear oscillators, which embody the principal scale-specific neurobiological processes. The dynamics at larger scales are ‘slaved’ to the emergent behaviour of smaller scales through a coupling function that depends on a multiscale wavelet decomposition. The approach is first explicated mathematically. Numerical examples are then given to illustrate phenomena such as between-scale bifurcations, and how synchronization in small-scale structures influences the dynamics in larger structures in an intuitive manner that cannot be captured by existing modelling approaches. A framework for relating the dynamical behaviour of the system to measured observables is presented and further extensions to capture wave phenomena and mode coupling are suggested. PMID:16087448
Dynamic Artificial Neural Networks with Affective Systems
Schuman, Catherine D.; Birdwell, J. Douglas
2013-01-01
Artificial neural networks (ANNs) are processors that are trained to perform particular tasks. We couple a computational ANN with a simulated affective system in order to explore the interaction between the two. In particular, we design a simple affective system that adjusts the threshold values in the neurons of our ANN. The aim of this paper is to demonstrate that this simple affective system can control the firing rate of the ensemble of neurons in the ANN, as well as to explore the coupling between the affective system and the processes of long term potentiation (LTP) and long term depression (LTD), and the effect of the parameters of the affective system on its performance. We apply our networks with affective systems to a simple pole balancing example and briefly discuss the effect of affective systems on network performance. PMID:24303015
Dynamic Pricing in Electronic Commerce Using Neural Network
NASA Astrophysics Data System (ADS)
Ghose, Tapu Kumar; Tran, Thomas T.
In this paper, we propose an approach in which a feed-forward neural network is used to dynamically calculate a competitive price for a product in order to maximize sellers' revenue. In this approach we consider that, along with product price, other attributes such as product quality, delivery time, after-sales service and seller's reputation contribute to consumers' purchase decisions. We show that once sellers, using their limited prior knowledge, set an initial price for a product, our model adjusts the price automatically with the help of the neural network so that sellers' revenue is maximized.
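An illustrative sketch of how a feed-forward network could score candidate prices from the listed attributes; the network shape, attribute encoding, random weights and grid search are all assumptions for illustration, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, W1, b1, W2, b2):
    """One hidden layer of tanh units; output = predicted revenue score."""
    h = np.tanh(W1 @ x + b1)
    return (W2 @ h + b2).item()

# Randomly initialised weights stand in for a model trained on past sales.
W1, b1 = rng.normal(size=(8, 5)), np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)

attrs = np.array([0.8, 0.6, 0.9, 0.7])      # quality, delivery, service, reputation (scaled)
best_price, best_score = None, -np.inf
for price in np.linspace(0.1, 1.0, 50):     # scan candidate (normalised) prices
    x = np.concatenate(([price], attrs))
    score = forward(x, W1, b1, W2, b2)
    if score > best_score:
        best_price, best_score = price, score
print("price maximising the predicted revenue:", round(best_price, 3))
```

In practice the network would first be fitted to observed sales; the grid search simply makes the price-adjustment step explicit.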
Fluctuation-Driven Neural Dynamics Reproduce Drosophila Locomotor Patterns
Cruchet, Steeve; Gustafson, Kyle; Benton, Richard; Floreano, Dario
2015-01-01
The neural mechanisms determining the timing of even simple actions, such as when to walk or rest, are largely mysterious. One intriguing, but untested, hypothesis posits a role for ongoing activity fluctuations in neurons of central action selection circuits that drive animal behavior from moment to moment. To examine how fluctuating activity can contribute to action timing, we paired high-resolution measurements of freely walking Drosophila melanogaster with data-driven neural network modeling and dynamical systems analysis. We generated fluctuation-driven network models whose outputs—locomotor bouts—matched those measured from sensory-deprived Drosophila. From these models, we identified those that could also reproduce a second, unrelated dataset: the complex time-course of odor-evoked walking for genetically diverse Drosophila strains. Dynamical models that best reproduced both Drosophila basal and odor-evoked locomotor patterns exhibited specific characteristics. First, ongoing fluctuations were required. In a stochastic resonance-like manner, these fluctuations allowed neural activity to escape stable equilibria and to exceed a threshold for locomotion. Second, odor-induced shifts of equilibria in these models caused a depression in locomotor frequency following olfactory stimulation. Our models predict that activity fluctuations in action selection circuits cause behavioral output to more closely match sensory drive and may therefore enhance navigation in complex sensory environments. Together these data reveal how simple neural dynamics, when coupled with activity fluctuations, can give rise to complex patterns of animal behavior. PMID:26600381
3-D flame temperature field reconstruction with multiobjective neural network
NASA Astrophysics Data System (ADS)
Wan, Xiong; Gao, Yiqing; Wang, Yuanmei
2003-02-01
A novel 3-D temperature field reconstruction method is proposed in this paper, which is based on multiwavelength thermometry and Hopfield neural network computed tomography. A mathematical model of multi-wavelength thermometry is established, and a neural network algorithm based on multiobjective optimization is developed. Through computer simulation and comparison with the algebraic reconstruction technique (ART) and the filtered back-projection algorithm (FBP), the reconstruction result of the new method is discussed in detail. The study shows that the new method consistently gives the best reconstruction results of the compared methods. Finally, the temperature distribution of a cross-section of a four-peak candle flame is reconstructed with the novel method.
Naudé, Jérémie; Cessac, Bruno; Berry, Hugues; Delord, Bruno
2013-09-18
Homeostatic intrinsic plasticity (HIP) is a ubiquitous cellular mechanism regulating neuronal activity, cardinal for the proper functioning of nervous systems. In invertebrates, HIP is critical for orchestrating stereotyped activity patterns. The functional impact of HIP remains more obscure in vertebrate networks, where higher order cognitive processes rely on complex neural dynamics. The hypothesis has emerged that HIP might control the complexity of activity dynamics in recurrent networks, with important computational consequences. However, conflicting results about the causal relationships between cellular HIP, network dynamics, and computational performance have arisen from machine-learning studies. Here, we assess how cellular HIP effects translate into collective dynamics and computational properties in biological recurrent networks. We develop a realistic multiscale model including a generic HIP rule regulating the neuronal threshold with actual molecular signaling pathways kinetics, Dale's principle, sparse connectivity, synaptic balance, and Hebbian synaptic plasticity (SP). Dynamic mean-field analysis and simulations unravel that HIP sets a working point at which inputs are transduced by large derivative ranges of the transfer function. This cellular mechanism ensures increased network dynamics complexity, robust balance with SP at the edge of chaos, and improved input separability. Although critically dependent upon balanced excitatory and inhibitory drives, these effects display striking robustness to changes in network architecture, learning rates, and input features. Thus, the mechanism we unveil might represent a ubiquitous cellular basis for complex dynamics in neural networks. Understanding this robustness is an important challenge to unraveling principles underlying self-organization around criticality in biological recurrent neural networks. PMID:24048833
A solution to neural field equations by a recurrent neural network method
NASA Astrophysics Data System (ADS)
Alharbi, Abir
2012-09-01
Neural field equations (NFE) are used to model the activity of neurons in the brain; they are derived starting from a single-neuron 'integrate-and-fire' model. The neural continuum is spatially discretized for numerical studies, and the governing equations are modeled as a system of ordinary differential equations. In this article the recurrent neural network approach is used to solve this system of ODEs. The technique combines the standard numerical method of finite differences with the Hopfield neural network. The architecture of the net, energy function, updating equations, and algorithms are developed for the NFE model. A Hopfield neural network is then designed to minimize the energy function modeling the NFE. Results obtained from the Hopfield-finite-differences net show excellent performance in terms of accuracy and speed. The parallel nature of the Hopfield approach may make it easier to implement on fast parallel computers and give it a speed advantage over traditional methods.
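A minimal sketch of the spatial-discretization step described above, turning an assumed Amari-type field with a Gaussian kernel into a system of ODEs; forward Euler is used here only to keep the sketch short, and this illustrates the ODE system the Hopfield-finite-differences net targets, not that solver itself.

```python
import numpy as np

# Space grid and a Gaussian synaptic kernel (assumed; the paper's kernel may differ).
N, L = 200, 10.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
W = np.exp(-(x[:, None] - x[None, :]) ** 2 / 2.0)      # w(x - y)

f = lambda u: 1.0 / (1.0 + np.exp(-5.0 * (u - 0.5)))   # sigmoidal firing-rate function
I = np.exp(-x ** 2)                                     # localized external input

# After discretization: du_i/dt = -u_i + sum_j w_ij f(u_j) dx + I_i
u, dt = np.zeros(N), 0.01
for _ in range(2000):
    u += dt * (-u + (W @ f(u)) * dx + I)
print("max activity:", round(u.max(), 4))
```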
Can Neural Activity Propagate by Endogenous Electrical Field?
Qiu, Chen; Shivacharan, Rajat S.; Zhang, Mingming
2015-01-01
It is widely accepted that synaptic transmissions and gap junctions are the major governing mechanisms for signal traveling in the neural system. Yet, a group of neural waves, either physiological or pathological, share the same speed of ∼0.1 m/s without synaptic transmission or gap junctions, and this speed is not consistent with axonal conduction or ionic diffusion. The only explanation left is an electrical field effect. We tested the hypothesis that endogenous electric fields are sufficient to explain the propagation with in silico and in vitro experiments. Simulation results show that field effects alone can indeed mediate propagation across layers of neurons with speeds of 0.12 ± 0.09 m/s with pathological kinetics, and 0.11 ± 0.03 m/s with physiologic kinetics, both generating weak field amplitudes of ∼2–6 mV/mm. Further, the model predicted that propagation speed values are inversely proportional to the cell-to-cell distances, but do not significantly change with extracellular resistivity, membrane capacitance, or membrane resistance. In vitro recordings in mice hippocampi produced similar speeds (0.10 ± 0.03 m/s) and field amplitudes (2.5–5 mV/mm), and by applying a blocking field, the propagation speed was greatly reduced. Finally, osmolarity experiments confirmed the model's prediction that cell-to-cell distance inversely affects propagation speed. Together, these results show that despite their weak amplitude, electric fields can be solely responsible for spike propagation at ∼0.1 m/s. This phenomenon could be important to explain the slow propagation of epileptic activity and other normal propagations at similar speeds. SIGNIFICANCE STATEMENT Neural activity (waves or spikes) can propagate using well documented mechanisms such as synaptic transmission, gap junctions, or diffusion. However, the purpose of this paper is to provide an explanation for experimental data showing that neural signals can propagate by means other than synaptic
Dynamical analysis of uncertain neural networks with multiple time delays
NASA Astrophysics Data System (ADS)
Arik, Sabri
2016-02-01
This paper investigates the robust stability problem for dynamical neural networks in the presence of time delays and norm-bounded parameter uncertainties with respect to the class of non-decreasing, non-linear activation functions. By employing the Lyapunov stability and homeomorphism mapping theorems together, a new delay-independent sufficient condition is obtained for the existence, uniqueness and global asymptotic stability of the equilibrium point of the delayed uncertain neural networks. The condition obtained for robust stability establishes a matrix-norm relationship between the network parameters of the neural system, which can easily be verified using properties of positive definite matrices. Some constructive numerical examples are presented to show the applicability of the obtained result and its advantages over previously published results.
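For orientation only, one classical delay-independent sufficient condition of the same matrix-norm flavour (stated here as background, not as the specific criterion derived in the paper) reads: for dx/dt = -Cx + A f(x) + B f(x(t - tau)) + u, with activation functions of Lipschitz constant L, a unique globally stable equilibrium exists whenever L(||A||_2 + ||B||_2) < min_i c_i. A check of that generic inequality:

```python
import numpy as np

def delay_independent_stable(C_diag, A, B, lip=1.0):
    """Generic sufficient (not necessary) test: lip * (||A||_2 + ||B||_2) < min(c_i)."""
    norm = np.linalg.norm(A, 2) + np.linalg.norm(B, 2)   # spectral norms
    return lip * norm < np.min(C_diag)

rng = np.random.default_rng(1)
A = 0.2 * rng.standard_normal((4, 4))   # instantaneous connection weights
B = 0.2 * rng.standard_normal((4, 4))   # delayed connection weights
print(delay_independent_stable(np.ones(4), A, B))
```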
On neural networks in identification and control of dynamic systems
NASA Technical Reports Server (NTRS)
Phan, Minh; Juang, Jer-Nan; Hyland, David C.
1993-01-01
This paper presents a discussion of the applicability of neural networks in the identification and control of dynamic systems. Emphasis is placed on understanding how neural networks handle linear systems and how the new approach is related to conventional system identification and control methods. Extensions of the approach to nonlinear systems are then made. The paper explains the fundamental concepts of neural networks in their simplest terms. Among the topics discussed are feedforward and recurrent networks in relation to the standard state-space and observer models, linear and nonlinear auto-regressive models, linear predictors, one-step-ahead control, and model reference adaptive control for linear and nonlinear systems. Numerical examples are presented to illustrate the application of these important concepts.
Nonlinear dynamical system approaches towards neural prosthesis
Torikai, Hiroyuki; Hashimoto, Sho
2011-04-19
An asynchronous discrete-state spiking neuron is a wired system of shift registers that can mimic the nonlinear dynamics of an ODE-based neuron model. The control parameter of the neuron is the wiring pattern among the registers, and thus it is suitable for on-chip learning. In this paper an asynchronous discrete-state spiking neuron is introduced and its typical nonlinear phenomena are demonstrated. Also, a learning algorithm for a set of neurons is presented, and it is demonstrated that the algorithm enables the set of neurons to reconstruct the nonlinear dynamics of another set of neurons with unknown parameter values. The learning function is validated by FPGA experiments.
Ma, Ying; Shaik, Mohammed A; Kim, Sharon H; Kozberg, Mariel G; Thibodeaux, David N; Zhao, Hanzhi T; Yu, Hang; Hillman, Elizabeth M C
2016-10-01
Although modern techniques such as two-photon microscopy can now provide cellular-level three-dimensional imaging of the intact living brain, the speed and fields of view of these techniques remain limited. Conversely, two-dimensional wide-field optical mapping (WFOM), a simpler technique that uses a camera to observe large areas of the exposed cortex under visible light, can detect changes in both neural activity and haemodynamics at very high speeds. Although WFOM may not provide single-neuron or capillary-level resolution, it is an attractive and accessible approach to imaging large areas of the brain in awake, behaving mammals at speeds fast enough to observe widespread neural firing events, as well as their dynamic coupling to haemodynamics. Although such wide-field optical imaging techniques have a long history, the advent of genetically encoded fluorophores that can report neural activity with high sensitivity, as well as modern technologies such as light-emitting diodes and sensitive, high-speed digital cameras, has driven renewed interest in WFOM. To facilitate the wider adoption and standardization of WFOM approaches for neuroscience and neurovascular coupling research, we provide here an overview of the basic principles of WFOM, considerations for implementation of wide-field fluorescence imaging of neural activity, spectroscopic analysis and interpretation of results. This article is part of the themed issue 'Interpreting BOLD: a dialogue between cognitive and cellular neuroscience'. PMID:27574312
Simulation of dynamic processes with adaptive neural networks.
Tzanos, C. P.
1998-02-03
Many industrial processes are highly non-linear and complex. Their simulation with first-principle or conventional input-output correlation models is not satisfactory, either because the process physics is not well understood, or because it is so complex that direct simulation is not adequately accurate or requires excessive computation time, especially for on-line applications. Artificial intelligence techniques (neural networks, expert systems, fuzzy logic) or their combination with simple process-physics models can be used effectively for the simulation of such processes. Feedforward (static) neural networks (FNNs) can be used effectively to model steady-state processes. They have also been used to model dynamic (time-varying) processes by adding to the network input layer additional nodes that represent values of input variables at previous time steps. The number of previous time steps is problem dependent and, in general, can be determined after extensive testing. This work demonstrates that for dynamic processes that do not vary fast with respect to the retraining time of the neural network, an adaptive feedforward neural network can be an effective simulator that is free of the complexities introduced by the use of input values at previous time steps.
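A sketch of the tapped-delay-line input construction that the abstract contrasts with the adaptive approach; the toy series, window length, and the least-squares regressor standing in for the FNN are assumptions.

```python
import numpy as np

def make_dataset(series, k=3):
    """Inputs: [y(t-k), ..., y(t-1)]; target: y(t)."""
    X = np.array([series[i - k:i] for i in range(k, len(series))])
    y = series[k:]
    return X, y

t = np.linspace(0, 20, 500)
series = np.sin(t) + 0.05 * np.random.default_rng(0).standard_normal(t.size)
X, y = make_dataset(series, k=3)

# Any static regressor (here ordinary least squares, standing in for the FNN)
# can then be retrained periodically on a sliding window to obtain the adaptive variant.
Phi = np.c_[X, np.ones(len(X))]
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("one-step prediction error:", round(np.abs(Phi @ w - y).mean(), 5))
```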
Persistent Activity in Neural Networks with Dynamic Synapses
Barak, Omri; Tsodyks, Misha
2007-01-01
Persistent activity states (attractors), observed in several neocortical areas after the removal of a sensory stimulus, are believed to be the neuronal basis of working memory. One of the possible mechanisms that can underlie persistent activity is recurrent excitation mediated by intracortical synaptic connections. A recent experimental study revealed that connections between pyramidal cells in prefrontal cortex exhibit various degrees of synaptic depression and facilitation. Here we analyze the effect of synaptic dynamics on the emergence and persistence of attractor states in interconnected neural networks. We show that different combinations of synaptic depression and facilitation result in qualitatively different network dynamics with respect to the emergence of the attractor states. This analysis raises the possibility that the framework of attractor neural networks can be extended to represent time-dependent stimuli. PMID:17319739
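A minimal sketch of the standard phenomenological short-term depression/facilitation synapse model widely used in this literature; the parameter values and update conventions below are illustrative assumptions rather than the paper's.

```python
import numpy as np

def synaptic_efficacy(spike_times, U=0.2, tau_rec=0.5, tau_facil=1.0):
    """Per-spike transmitted efficacy u*x for a depressing/facilitating synapse."""
    x, u, last = 1.0, U, None               # available resources, utilisation fraction
    out = []
    for t in spike_times:
        if last is not None:
            dt = t - last
            x = 1.0 - (1.0 - x) * np.exp(-dt / tau_rec)    # resources recover toward 1
            u = U + (u - U) * np.exp(-dt / tau_facil)      # facilitation decays toward U
        u = u + U * (1.0 - u)               # facilitation jump at the spike
        out.append(u * x)                   # transmitted efficacy for this spike
        x = x * (1.0 - u)                   # resources consumed by the release
        last = t
    return out

# A 20 Hz spike train; the efficacy sequence shows the depression/facilitation interplay.
print([round(v, 3) for v in synaptic_efficacy(np.arange(0.0, 1.0, 0.05))])
```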
Slow dynamics in features of synchronized neural network responses
Haroush, Netta; Marom, Shimon
2015-01-01
In this report, trial-to-trial variations in the synchronized responses of neural networks are explored over time scales of minutes in ex vivo large-scale cortical networks. We show that sub-second measures of the individual synchronous response, namely its latency and decay duration, are related to minutes-scale network response dynamics. Network responsiveness is reflected as residency in, or shifting amongst, areas of the latency-decay plane. The different sensitivities of latency and decay durations to synaptic blockers imply that these two measures reflect aspects of inhibitory and excitatory activities. Taken together, the data suggest that trial-to-trial variations in the synchronized responses of neural networks might be related to the effective excitation-inhibition ratio being a dynamic variable over time scales of minutes. PMID:25926787
Specific frontal neural dynamics contribute to decisions to check
Stoll, Frederic M.; Fontanier, Vincent; Procyk, Emmanuel
2016-01-01
Curiosity and information seeking potently shape our behaviour and are thought to rely on the frontal cortex. Yet, the frontal regions and neural dynamics that control the drive to check for information remain unknown. Here we trained monkeys in a task where they had the opportunity to gain information about the potential delivery of a large bonus reward or continue with a default instructed decision task. Single-unit recordings in behaving monkeys reveal that decisions to check for additional information first engage midcingulate cortex and then lateral prefrontal cortex. The opposite is true for instructed decisions. Importantly, deciding to check engages neurons also involved in performance monitoring. Further, specific midcingulate activity could be discerned several trials before the monkeys actually choose to check the environment. Our data show that deciding to seek information on the current state of the environment is characterized by specific dynamics of neural activity within the prefrontal cortex. PMID:27319361
Shaping the Dynamics of a Bidirectional Neural Interface
Vato, Alessandro; Semprini, Marianna; Maggiolini, Emma; Szymanski, Francois D.; Fadiga, Luciano; Panzeri, Stefano; Mussa-Ivaldi, Ferdinando A.
2012-01-01
Progress in decoding neural signals has enabled the development of interfaces that translate cortical brain activities into commands for operating robotic arms and other devices. The electrical stimulation of sensory areas provides a means to create artificial sensory information about the state of a device. Taken together, neural activity recording and microstimulation techniques allow us to embed a portion of the central nervous system within a closed-loop system, whose behavior emerges from the combined dynamical properties of its neural and artificial components. In this study we asked if it is possible to concurrently regulate this bidirectional brain-machine interaction so as to shape a desired dynamical behavior of the combined system. To this end, we followed a well-known biological pathway. In vertebrates, the communications between brain and limb mechanics are mediated by the spinal cord, which combines brain instructions with sensory information and organizes coordinated patterns of muscle forces driving the limbs along dynamically stable trajectories. We report the creation and testing of the first neural interface that emulates this sensory-motor interaction. The interface organizes a bidirectional communication between sensory and motor areas of the brain of anaesthetized rats and an external dynamical object with programmable properties. The system includes (a) a motor interface decoding signals from a motor cortical area, and (b) a sensory interface encoding the state of the external object into electrical stimuli to a somatosensory area. The interactions between brain activities and the state of the external object generate a family of trajectories converging upon a selected equilibrium point from arbitrary starting locations. Thus, the bidirectional interface establishes the possibility to specify not only a particular movement trajectory but an entire family of motions, which includes the prescribed reactions to unexpected perturbations. PMID
Dynamic neural activity during stress signals resilient coping.
Sinha, Rajita; Lacadie, Cheryl M; Constable, R Todd; Seo, Dongju
2016-08-01
Active coping underlies a healthy stress response, but neural processes supporting such resilient coping are not well-known. Using a brief, sustained exposure paradigm contrasting highly stressful, threatening, and violent stimuli versus nonaversive neutral visual stimuli in a functional magnetic resonance imaging (fMRI) study, we show significant subjective, physiologic, and endocrine increases and temporally related dynamically distinct patterns of neural activation in brain circuits underlying the stress response. First, stress-specific sustained increases in the amygdala, striatum, hypothalamus, midbrain, right insula, and right dorsolateral prefrontal cortex (DLPFC) regions supported the stress processing and reactivity circuit. Second, dynamic neural activation during stress versus neutral runs, showing early increases followed by later reduced activation in the ventrolateral prefrontal cortex (VLPFC), dorsal anterior cingulate cortex (dACC), left DLPFC, hippocampus, and left insula, suggested a stress adaptation response network. Finally, dynamic stress-specific mobilization of the ventromedial prefrontal cortex (VmPFC), marked by initial hypoactivity followed by increased VmPFC activation, pointed to the VmPFC as a key locus of the emotional and behavioral control network. Consistent with this finding, greater neural flexibility signals in the VmPFC during stress correlated with active coping ratings whereas lower dynamic activity in the VmPFC also predicted a higher level of maladaptive coping behaviors in real life, including binge alcohol intake, emotional eating, and frequency of arguments and fights. These findings demonstrate acute functional neuroplasticity during stress, with distinct and separable brain networks that underlie critical components of the stress response, and a specific role for VmPFC neuroflexibility in stress-resilient coping. PMID:27432990
Deep Dynamic Neural Networks for Multimodal Gesture Segmentation and Recognition.
Wu, Di; Pigou, Lionel; Kindermans, Pieter-Jan; Le, Nam Do-Hoang; Shao, Ling; Dambre, Joni; Odobez, Jean-Marc
2016-08-01
This paper describes a novel method called Deep Dynamic Neural Networks (DDNN) for multimodal gesture recognition. A semi-supervised hierarchical dynamic framework based on a Hidden Markov Model (HMM) is proposed for simultaneous gesture segmentation and recognition, where skeleton joint information, depth and RGB images are the multimodal input observations. Unlike most traditional approaches that rely on the construction of complex handcrafted features, our approach learns high-level spatio-temporal representations using deep neural networks suited to the input modality: a Gaussian-Bernoulli Deep Belief Network (DBN) to handle skeletal dynamics, and a 3D Convolutional Neural Network (3DCNN) to manage and fuse batches of depth and RGB images. This is achieved through the modeling and learning of the emission probabilities of the HMM required to infer the gesture sequence. This purely data-driven approach achieves a Jaccard index score of 0.81 in the ChaLearn LAP gesture spotting challenge. The performance is on par with a variety of state-of-the-art hand-tuned feature-based approaches and other learning-based methods, therefore opening the door to the use of deep learning techniques in order to further explore multimodal time series data. PMID:26955020
Response of traveling waves to transient inputs in neural fields
NASA Astrophysics Data System (ADS)
Kilpatrick, Zachary P.; Ermentrout, Bard
2012-02-01
We analyze the effects of transient stimulation on traveling waves in neural field equations. Neural fields are modeled as integro-differential equations whose convolution term represents the synaptic connections of a spatially extended neuronal network. The adjoint of the linearized wave equation can be used to identify how a particular input will shift the location of a traveling wave. This wave response function is analogous to the phase response curve of limit cycle oscillators. For traveling fronts in an excitatory network, the sign of the shift depends solely on the sign of the transient input. A complementary estimate of the effective shift is derived using an equation for the time-dependent speed of the perturbed front. Traveling pulses are analyzed in an asymmetric lateral inhibitory network and they can be advanced or delayed, depending on the position of spatially localized transient inputs. We also develop bounds on the amplitude of transient input necessary to terminate traveling pulses, based on the global bifurcation structure of the neural field.
Protein Secondary Structure Prediction Using Deep Convolutional Neural Fields.
Wang, Sheng; Peng, Jian; Ma, Jianzhu; Xu, Jinbo
2016-01-01
Protein secondary structure (SS) prediction is important for studying protein structure and function. When only the sequence (profile) information is used as input feature, currently the best predictors can obtain ~80% Q3 accuracy, which has not been improved in the past decade. Here we present DeepCNF (Deep Convolutional Neural Fields) for protein SS prediction. DeepCNF is a Deep Learning extension of Conditional Neural Fields (CNF), which is an integration of Conditional Random Fields (CRF) and shallow neural networks. DeepCNF can model not only complex sequence-structure relationship by a deep hierarchical architecture, but also interdependency between adjacent SS labels, so it is much more powerful than CNF. Experimental results show that DeepCNF can obtain ~84% Q3 accuracy, ~85% SOV score, and ~72% Q8 accuracy, respectively, on the CASP and CAMEO test proteins, greatly outperforming currently popular predictors. As a general framework, DeepCNF can be used to predict other protein structure properties such as contact number, disorder regions, and solvent accessibility. PMID:26752681
Dynamic Modeling of time series using Artificial Neural Networks
NASA Astrophysics Data System (ADS)
Nair, A. D.; Principe, Jose C.
1995-12-01
Artificial Neural Networks (ANNs) have the ability to adapt to and learn complex topologies; they represent a new technology with which to explore dynamical systems. Multi-step prediction is used to capture the dynamics of the system that produced the time series. Multi-step prediction is implemented by a recurrent ANN trained with trajectory learning. Two separate memories are employed in training the ANN: the common tapped delay-line memory and the new gamma memory. This methodology has been applied to the time series of a white dwarf and to the quasar 3C 345.
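A minimal sketch contrasting the two memory structures mentioned above; the gamma-memory recursion follows the usual definition g_k(t) = (1 - mu) g_k(t-1) + mu g_{k-1}(t-1), with the number of taps and the decay parameter mu chosen here purely for illustration.

```python
import numpy as np

def tapped_delay_features(x, k):
    """Standard tapped delay line: the k most recent samples, newest first."""
    return np.array([x[t - k:t][::-1] for t in range(k, len(x))])

def gamma_features(x, k, mu=0.5):
    """Gamma memory: a cascade of leaky integrators g_k(t) = (1-mu) g_k(t-1) + mu g_{k-1}(t-1)."""
    g = np.zeros((k + 1, len(x)))
    g[0] = x                               # tap 0 is the raw input
    for t in range(1, len(x)):
        for j in range(1, k + 1):
            g[j, t] = (1 - mu) * g[j, t - 1] + mu * g[j - 1, t - 1]
    return g[1:].T                         # one feature column per tap

x = np.sin(np.linspace(0, 12, 300))
print(tapped_delay_features(x, 3).shape, gamma_features(x, 3).shape)
```

Either feature matrix can feed the recurrent predictor; the gamma memory trades a fixed-length window for an adjustable memory depth controlled by mu.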
Perspective: network-guided pattern formation of neural dynamics.
Hütt, Marc-Thorsten; Kaiser, Marcus; Hilgetag, Claus C
2014-10-01
The understanding of neural activity patterns is fundamentally linked to an understanding of how the brain's network architecture shapes dynamical processes. Established approaches rely mostly on deviations of a given network from certain classes of random graphs. Hypotheses about the supposed role of prominent topological features (for instance, the roles of modularity, network motifs or hierarchical network organization) are derived from these deviations. An alternative strategy could be to study deviations of network architectures from regular graphs (rings and lattices) and consider the implications of such deviations for self-organized dynamic patterns on the network. Following this strategy, we draw on the theory of spatio-temporal pattern formation and propose a novel perspective for analysing dynamics on networks, by evaluating how the self-organized dynamics are confined by network architecture to a small set of permissible collective states. In particular, we discuss the role of prominent topological features of brain connectivity, such as hubs, modules and hierarchy, in shaping activity patterns. We illustrate the notion of network-guided pattern formation with numerical simulations and outline how it can facilitate the understanding of neural dynamics. PMID:25180302
Mean-field equations, bifurcation map and route to chaos in discrete time neural networks
NASA Astrophysics Data System (ADS)
Cessac, B.; Doyon, B.; Quoy, M.; Samuelides, M.
1994-07-01
We investigate the dynamical behaviour of neural networks with asymmetric synaptic weights, in the presence of random thresholds. We inspect low-gain dynamics before using mean-field equations to study the bifurcations of the fixed points and the change of regime that occurs when varying control parameters. We identify different regions of parameter space with various regimes, summarized by a bifurcation map. We numerically show the occurrence of chaos, which arises generically by a quasi-periodicity route. We then discuss some features of our system in relation to biological observations such as low firing rates and refractory periods.
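A toy numerical illustration in the spirit of the model class described above (a discrete-time network with random asymmetric weights, sigmoidal units and random thresholds); treating the gain g as the control parameter is an assumption about the parametrisation, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100
J = rng.standard_normal((N, N)) / np.sqrt(N)    # asymmetric random couplings
theta = rng.standard_normal(N)                  # random thresholds

def attractor_spread(g, T=3000, keep=500):
    """Spread of one unit's late-time trajectory as the gain g is varied."""
    x = rng.random(N)
    traj = []
    for t in range(T):
        x = 1.0 / (1.0 + np.exp(-g * (J @ x - theta)))
        if t >= T - keep:
            traj.append(x[0])
    return max(traj) - min(traj)

for g in (1.0, 4.0, 8.0, 16.0):
    print(g, round(attractor_spread(g), 4))
```

Low gains typically give a vanishing spread (convergence to a fixed point), while larger gains produce a nonzero spread through quasi-periodic and chaotic regimes, mirroring the route sketched in the abstract.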
The dynamical stability of reverberatory neural circuits.
Tegnér, Jesper; Compte, Albert; Wang, Xiao-Jing
2002-12-01
The concept of reverberation proposed by Lorente de Nó and Hebb is key to understanding strongly recurrent cortical networks. In particular, synaptic reverberation is now viewed as a likely mechanism for the active maintenance of working memory in the prefrontal cortex. Theoretically, this has spurred a debate as to how such a potentially explosive mechanism can provide stable working-memory function given the synaptic and cellular mechanisms at play in the cerebral cortex. We present here new evidence for the participation of NMDA receptors in the stabilization of persistent delay activity in a biophysical network model of conductance-based neurons. We show that the stability of working-memory function, and the required NMDA/AMPA ratio at recurrent excitatory synapses, depend on physiological properties of neurons and synaptic interactions, such as the time constants of excitation and inhibition, mutual inhibition between interneurons, differential NMDA receptor participation at excitatory projections to pyramidal neurons and interneurons, or the presence of slow intrinsic ion currents in pyramidal neurons. We review other mechanisms proposed to enhance the dynamical stability of synaptically generated attractor states of a reverberatory circuit. This recent work represents a necessary and significant step towards testing attractor network models by cortical electrophysiology. PMID:12461636
Predicting physical time series using dynamic ridge polynomial neural networks.
Al-Jumeily, Dhiya; Ghazali, Rozaida; Hussain, Abir
2014-01-01
Forecasting naturally occurring phenomena is a common problem in many domains of science, and this has been addressed and investigated by many scientists. The importance of time series prediction stems from the fact that it has a wide range of applications, including control systems, engineering processes, environmental systems and economics. From the knowledge of some aspects of the previous behaviour of the system, the aim of the prediction process is to determine or predict its future behaviour. In this paper, we consider a novel application of a higher order polynomial neural network architecture called the Dynamic Ridge Polynomial Neural Network, which combines the properties of higher order and recurrent neural networks, for the prediction of physical time series. In this study, four types of signals were used: the Lorenz attractor, the mean value of the AE index, sunspot number, and heat wave temperature. The simulation results showed good improvements in terms of signal-to-noise ratio in comparison to a number of benchmark higher order and feedforward neural networks. PMID:25157950
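A rough sketch of the forward pass of a ridge polynomial architecture with output feedback, stated here as a generic illustration of that family of networks (the block structure, lag count, random weights and feedback wiring are my assumptions, not the authors' trained model):

```python
import numpy as np

def rpn_forward(x, weights, biases):
    """Ridge polynomial forward pass: sum over pi-sigma blocks of increasing order."""
    total = 0.0
    for W, b in zip(weights, biases):
        total += np.prod(W @ x + b)        # block i multiplies i ridge terms (w . x + b)
    return np.tanh(total)

rng = np.random.default_rng(3)
dim, order = 4, 3                          # 3 lagged inputs + 1 feedback input
weights = [0.1 * rng.standard_normal((i, dim)) for i in range(1, order + 1)]
biases = [np.zeros(i) for i in range(1, order + 1)]

y_prev, series, preds = 0.0, np.sin(np.linspace(0, 6, 100)), []
for t in range(3, len(series)):
    x = np.concatenate((series[t - 3:t], [y_prev]))   # output feedback = "dynamic" variant
    y_prev = rpn_forward(x, weights, biases)
    preds.append(y_prev)
print("predictions produced:", len(preds))
```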
From invasion to extinction in heterogeneous neural fields.
Bressloff, Paul C
2012-01-01
In this paper, we analyze the invasion and extinction of activity in heterogeneous neural fields. We first consider the effects of spatial heterogeneities on the propagation of an invasive activity front. In contrast to previous studies of front propagation in neural media, we assume that the front propagates into an unstable rather than a metastable zero-activity state. For sufficiently localized initial conditions, the asymptotic velocity of the resulting pulled front is given by the linear spreading velocity, which is determined by linearizing about the unstable state within the leading edge of the front. One of the characteristic features of these so-called pulled fronts is their sensitivity to perturbations inside the leading edge. This means that standard perturbation methods for studying the effects of spatial heterogeneities or external noise fluctuations break down. We show how to extend a partial differential equation method for analyzing pulled fronts in slowly modulated environments to the case of neural fields with slowly modulated synaptic weights. The basic idea is to rescale space and time so that the front becomes a sharp interface whose location can be determined by solving a corresponding local Hamilton-Jacobi equation. We use steepest descents to derive the Hamilton-Jacobi equation from the original nonlocal neural field equation. In the case of weak synaptic heterogeneities, we then use perturbation theory to solve the corresponding Hamilton equations and thus determine the time-dependent wave speed. In the second part of the paper, we investigate how time-dependent heterogeneities in the form of extrinsic multiplicative noise can induce rare noise-driven transitions to the zero-activity state, which now acts as an absorbing state signaling the extinction of all activity. In this case, the most probable path to extinction can be obtained by solving the classical equations of motion that dominate a path integral representation of the stochastic
Bio-Inspired Neural Model for Learning Dynamic Models
NASA Technical Reports Server (NTRS)
Duong, Tuan; Duong, Vu; Suri, Ronald
2009-01-01
A neural-network mathematical model that, relative to prior such models, places greater emphasis on some of the temporal aspects of real neural physical processes, has been proposed as a basis for massively parallel, distributed algorithms that learn dynamic models of possibly complex external processes by means of learning rules that are local in space and time. The algorithms could be made to perform such functions as recognition and prediction of words in speech and of objects depicted in video images. The approach embodied in this model is said to be "hardware-friendly" in the following sense: The algorithms would be amenable to execution by special-purpose computers implemented as very-large-scale integrated (VLSI) circuits that would operate at relatively high speeds and low power demands.
Dynamic neural mechanisms underlie race disparities in social cognition.
Cassidy, Brittany S; Krendl, Anne C
2016-05-15
Race disparities in behavior may emerge in several ways, some of which may be independent of implicit bias. To mitigate the pernicious effects of different race disparities for racial minorities, we must understand whether they are rooted in perceptual, affective, or cognitive processing with regard to race perception. We used fMRI to disentangle dynamic neural mechanisms predictive of two separable race disparities that can be obtained from a trustworthiness ratings task. Increased coupling between regions involved in perceptual and affective processing when viewing Black versus White faces predicted less later racial trust disparity, which was related to implicit bias. In contrast, increased functional coupling between regions involved in controlled processing predicted less later disparity in the differentiation of Black versus White faces with regard to perceived trust, which was unrelated to bias. These findings reveal that distinct neural signatures underlie separable race disparities in social cognition that may or may not be related to implicit bias. PMID:26908320
Dynamical criticality in the collective activity of a neural population
NASA Astrophysics Data System (ADS)
Mora, Thierry
The past decade has seen a wealth of physiological data suggesting that neural networks may behave like critical branching processes. Concurrently, the collective activity of neurons has been studied using explicit mappings to classic statistical mechanics models such as disordered Ising models, allowing for the study of their thermodynamics, but these efforts have ignored the dynamical nature of neural activity. I will show how to reconcile these two approaches by learning effective statistical mechanics models of the full history of the collective activity of a neuron population directly from physiological data, treating time as an additional dimension. Applying this technique to multi-electrode recordings from retinal ganglion cells, and studying the thermodynamics of the inferred model, reveals a peak in specific heat reminiscent of a second-order phase transition.
Dynamic digital watermark technique based on neural network
NASA Astrophysics Data System (ADS)
Gu, Tao; Li, Xu
2008-04-01
A dynamic watermarking algorithm based on a neural network is presented, which is more robust against false-authentication attacks and watermark-tampering operations than single-watermark embedding methods. (1) Five binary images used as watermarks are coded into a binary array; every 0 or 1 is enlarged fivefold by an information-enlarging technique, so the total number of 0s and 1s is 5*N, where N is the original total number of the watermarks' binary bits. (2) The seed image pixel p(x,y) and its 3×3 neighborhood pixels p(x-1,y-1), p(x-1,y), p(x-1,y+1), p(x,y-1), p(x,y+1), p(x+1,y-1), p(x+1,y), p(x+1,y+1) are chosen as one sample: p(x,y) is used as the neural network target and the other eight pixel values are used as neural network inputs. (3) To make the neural network learn the sample space, 5*N pixel values and their closely related neighboring pixel values are randomly chosen with a password from a color BMP-format image and used to train the neural network. (4) A four-layer neural network is constructed to describe the nonlinear mapping between inputs and outputs. (5) One bit from the array is embedded by adjusting the polarity between a chosen pixel value and the output value of the model. (6) A randomizer generates a number to determine how many watermarks are retrieved. The randomly selected watermarks can be retrieved by using the restored neural network output value, the corresponding image pixel values, and the restore function, without knowing the original image or watermarks (the restored coded-watermark bit = 1 if o(x,y) (restored) > p(x,y) (reconstructed), else the coded-watermark bit = 0). The retrieved watermarks differ each time they are extracted. The proposed technique can offer more watermarking proofs than a single-watermark embedding algorithm. Experimental results show that the proposed technique is very robust against some image processing operations and JPEG lossy compression. Therefore, the algorithm can be used to protect the copyright of an important image.
Dynamics of gauge field inflation
Alexander, Stephon; Jyoti, Dhrubo; Kosowsky, Arthur; Marcianò, Antonino
2015-05-05
We analyze the existence and stability of dynamical attractor solutions for cosmological inflation driven by the coupling between fermions and a gauge field. Assuming a spatially homogeneous and isotropic gauge field and fermion current, the interacting fermion equation of motion reduces to that of a free fermion up to a phase shift. Consistency of the model is ensured via the Stückelberg mechanism. We prove the existence of exactly one stable solution, and demonstrate the stability numerically. Inflation arises without fine tuning, and does not require postulating any effective potential or non-standard coupling.
Autonomic neural control of dynamic cerebral autoregulation in humans
NASA Technical Reports Server (NTRS)
Zhang, Rong; Zuckerman, Julie H.; Iwasaki, Kenichi; Wilson, Thad E.; Crandall, Craig G.; Levine, Benjamin D.
2002-01-01
BACKGROUND: The purpose of the present study was to determine the role of autonomic neural control of dynamic cerebral autoregulation in humans. METHODS AND RESULTS: We measured arterial pressure and cerebral blood flow (CBF) velocity in 12 healthy subjects (aged 29+/-6 years) before and after ganglion blockade with trimethaphan. CBF velocity was measured in the middle cerebral artery using transcranial Doppler. The magnitudes of spontaneous changes in mean blood pressure and CBF velocity were quantified by spectral analysis. The transfer function gain, phase, and coherence between these variables were estimated to quantify dynamic cerebral autoregulation. After ganglion blockade, systolic and pulse pressure decreased significantly by 13% and 26%, respectively. CBF velocity decreased by 6% (P<0.05). In the very low frequency range (0.02 to 0.07 Hz), mean blood pressure variability decreased significantly (by 82%), while CBF velocity variability persisted. Thus, transfer function gain increased by 81%. In addition, the phase lead of CBF velocity to arterial pressure diminished. These changes in transfer function gain and phase persisted despite restoration of arterial pressure by infusion of phenylephrine and normalization of mean blood pressure variability by oscillatory lower body negative pressure. CONCLUSIONS: These data suggest that dynamic cerebral autoregulation is altered by ganglion blockade. We speculate that autonomic neural control of the cerebral circulation is tonically active and likely plays a significant role in the regulation of beat-to-beat CBF in humans.
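An illustrative sketch of the spectral transfer-function estimate described above, using Welch cross-spectra from SciPy; the surrogate pressure and flow-velocity signals, sampling rate and window length are assumptions, not the study's data or settings.

```python
import numpy as np
from scipy import signal

fs = 10.0                                    # assumed resampling rate of beat-to-beat signals (Hz)
t = np.arange(0, 300, 1 / fs)
rng = np.random.default_rng(4)
abp = 90 + 5 * np.sin(2 * np.pi * 0.05 * t) + rng.standard_normal(t.size)          # mean pressure
cbfv = 60 + 2 * np.sin(2 * np.pi * 0.05 * t + 0.8) + rng.standard_normal(t.size)   # CBF velocity

f, Pxx = signal.welch(abp, fs=fs, nperseg=512)         # pressure auto-spectrum
_, Pxy = signal.csd(abp, cbfv, fs=fs, nperseg=512)     # cross-spectrum
_, Cxy = signal.coherence(abp, cbfv, fs=fs, nperseg=512)

H = Pxy / Pxx                                # transfer function ABP -> CBFV
band = (f >= 0.02) & (f <= 0.07)             # very low frequency range used in the study
print("VLF gain:", round(np.abs(H[band]).mean(), 3),
      "phase (rad):", round(np.angle(H[band]).mean(), 3),
      "coherence:", round(Cxy[band].mean(), 3))
```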
Can Neural Activity Propagate by Endogenous Electrical Field?
Qiu, Chen; Shivacharan, Rajat S; Zhang, Mingming; Durand, Dominique M
2015-12-01
It is widely accepted that synaptic transmissions and gap junctions are the major governing mechanisms for signal traveling in the neural system. Yet, a group of neural waves, either physiological or pathological, share the same speed of ∼0.1 m/s without synaptic transmission or gap junctions, and this speed is not consistent with axonal conduction or ionic diffusion. The only explanation left is an electrical field effect. We tested the hypothesis that endogenous electric fields are sufficient to explain the propagation with in silico and in vitro experiments. Simulation results show that field effects alone can indeed mediate propagation across layers of neurons with speeds of 0.12 ± 0.09 m/s with pathological kinetics, and 0.11 ± 0.03 m/s with physiologic kinetics, both generating weak field amplitudes of ∼2-6 mV/mm. Further, the model predicted that propagation speed values are inversely proportional to the cell-to-cell distances, but do not significantly change with extracellular resistivity, membrane capacitance, or membrane resistance. In vitro recordings in mice hippocampi produced similar speeds (0.10 ± 0.03 m/s) and field amplitudes (2.5-5 mV/mm), and by applying a blocking field, the propagation speed was greatly reduced. Finally, osmolarity experiments confirmed the model's prediction that cell-to-cell distance inversely affects propagation speed. Together, these results show that despite their weak amplitude, electric fields can be solely responsible for spike propagation at ∼0.1 m/s. This phenomenon could be important to explain the slow propagation of epileptic activity and other normal propagations at similar speeds. PMID:26631463
NASA Astrophysics Data System (ADS)
Touboul, Jonathan
2012-08-01
In this manuscript we analyze the collective behavior of mean-field limits of large-scale, spatially extended stochastic neuronal networks with delays. Rigorously, the asymptotic regime of such systems is characterized by a very intricate stochastic delayed integro-differential McKean-Vlasov equation that remains impenetrable, leaving the stochastic collective dynamics of such networks poorly understood. In order to study these macroscopic dynamics, we analyze networks of firing-rate neurons, i.e. with linear intrinsic dynamics and sigmoidal interactions. In that case, we prove that the solution of the mean-field equation is Gaussian, hence characterized by its first two moments, and that these two quantities satisfy a set of coupled delayed integro-differential equations. These equations are similar to usual neural field equations and incorporate noise levels as a parameter, allowing analysis of noise-induced transitions. We identify through bifurcation analysis several qualitative transitions due to noise in the mean-field limit. In particular, stabilization of spatially homogeneous solutions, synchronized oscillations, bumps, chaotic dynamics, wave or bump splitting are exhibited and arise from static or dynamic Turing-Hopf bifurcations. These surprising phenomena allow further exploration of the role of noise in the nervous system.
Phase transitions in a dynamic model of neural networks
NASA Astrophysics Data System (ADS)
Shim, G. M.; Choi, M. Y.; Kim, D.
1991-01-01
A dynamic model for neural networks that explicitly takes into account the existence of several time scales without discretizing the time is studied analytically via the use of path integrals. The maximum capacity of the network is found to be that of the Hopfield model divided by 1 + a², with a the ratio of the refractory period to the action-potential duration. We obtain the phase diagram as a function of a, the capacity, and the temperature. The overall phase diagram is rich in structure, exhibiting first-order transitions as well as continuous ones.
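Written out, the stated capacity result reads as follows; the numerical value α_c ≈ 0.138 is the standard reference figure for the Hopfield capacity, quoted here for orientation rather than taken from the abstract:

$$\alpha_c(a) \;=\; \frac{\alpha_c^{\mathrm{Hopfield}}}{1 + a^2} \;\approx\; \frac{0.138}{1 + a^2}, \qquad a = \frac{\text{refractory period}}{\text{action-potential duration}} .$$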
Neural representation of dynamic frequency is degraded in older adults.
Clinard, Christopher G; Cotter, Caitlin M
2015-05-01
Older adults, even with clinically normal hearing sensitivity, often report difficulty understanding speech in the presence of background noise. Part of this difficulty may be related to age-related degradations in the neural representation of speech sounds, such as formant transitions. Frequency-following responses (FFRs), which are dependent on phase-locked neural activity, were elicited using sounds consisting of linear frequency sweeps, which may be viewed as simple models of formant transitions. Eighteen adults (ten younger, 22-24 years old, and nine older, 51-67 years old) were tested. FFRs were elicited by tonal sweeps in six conditions. Two directions of frequency change, rising or falling, were used for each of three rates of frequency change. Stimulus-to-response cross correlations revealed that older adults had significantly poorer representation of the tonal sweeps, and that FFRs became poorer for faster rates of change. An additional FFR signal-to-noise ratio analysis based on time windows revealed that across the FFR waveforms and rates of frequency change, older adults had smaller (poorer) signal-to-noise ratios. These results indicate that older adults, even with clinically-normal hearing sensitivity, have degraded phase-locked neural representations of dynamic frequency. PMID:25724819
Classification of the extracellular fields produced by activated neural structures
Richerson, Samantha; Ingram, Mark; Perry, Danielle; Stecker, Mark M
2005-01-01
Background: Classifying the types of extracellular potentials recorded when neural structures are activated is an important component in understanding nerve pathophysiology. Varying definitions and approaches to understanding the factors that influence the potentials recorded during neural activity have made this issue complex. Methods: In this article, many of the factors which influence the distribution of electric potential produced by a traveling action potential are discussed from a theoretical standpoint with illustrative simulations. Results: For an axon of arbitrary shape, it is shown that a quadrupolar potential is generated by action potentials traveling along a straight axon. However, a dipole moment is generated at any point where an axon bends or its diameter changes. Next, it is shown how asymmetric disturbances in the conductivity of the medium surrounding an axon produce dipolar potentials, even during propagation along a straight axon. Next, by studying the electric fields generated by a dipole source in an insulating cylinder, it is shown that in finite volume conductors the extracellular potentials can be very different from those in infinite volume conductors. Finally, the effects of impulses propagating along axons with inhomogeneous cable properties are analyzed. Conclusion: Because of the well-defined factors affecting extracellular potentials, the vague terms far-field and near-field potentials should be abandoned in favor of more accurate descriptions of the potentials. PMID:16146569
Endothelial cells regulate neural crest and second heart field morphogenesis
Milgrom-Hoffman, Michal; Michailovici, Inbal; Ferrara, Napoleone; Zelzer, Elazar; Tzahor, Eldad
2014-01-01
Cardiac and craniofacial developmental programs are intricately linked during early embryogenesis, which is also reflected by a high frequency of birth defects affecting both regions. The molecular nature of the crosstalk between mesoderm and neural crest progenitors and the involvement of endothelial cells within the cardio–craniofacial field are largely unclear. Here we show in the mouse that genetic ablation of vascular endothelial growth factor receptor 2 (Flk1) in the mesoderm results in early embryonic lethality, severe deformation of the cardio–craniofacial field, lack of endothelial cells and a poorly formed vascular system. We provide evidence that endothelial cells are required for migration and survival of cranial neural crest cells and consequently for the deployment of second heart field progenitors into the cardiac outflow tract. Insights into the molecular mechanisms reveal marked reduction in Transforming growth factor beta 1 (Tgfb1) along with changes in the extracellular matrix (ECM) composition. Our collective findings in both mouse and avian models suggest that endothelial cells coordinate cardio–craniofacial morphogenesis, in part via a conserved signaling circuit regulating ECM remodeling by Tgfb1. PMID:24996922
Neural measures of dynamic changes in attentive tracking load.
Drew, Trafton; Horowitz, Todd S; Wolfe, Jeremy M; Vogel, Edward K
2012-02-01
In everyday life, we often need to track several objects simultaneously, a task modeled in the laboratory using the multiple-object tracking (MOT) task [Pylyshyn, Z., & Storm, R. W. Tracking multiple independent targets: Evidence for a parallel tracking mechanism. Spatial Vision, 3, 179-197, 1988]. Unlike MOT, however, in life, the set of relevant targets tends to be fluid and change over time. Humans are quite adept at "juggling" targets in and out of the target set [Wolfe, J. M., Place, S. S., & Horowitz, T. S. Multiple object juggling: Changing what is tracked during extended MOT. Psychonomic Bulletin & Review, 14, 344-349, 2007]. Here, we measured the neural underpinnings of this process using electrophysiological methods. Vogel and colleagues [McCollough, A. W., Machizawa, M. G., & Vogel, E. K. Electrophysiological measures of maintaining representations in visual working memory. Cortex, 43, 77-94, 2007; Vogel, E. K., McCollough, A. W., & Machizawa, M. G. Neural measures reveal individual differences in controlling access to working memory. Nature, 438, 500-503, 2005; Vogel, E. K., & Machizawa, M. G. Neural activity predicts individual differences in visual working memory capacity. Nature, 428, 748-751, 2004] have shown that the amplitude of a sustained lateralized negativity, contralateral delay activity (CDA) indexes the number of items held in visual working memory. Drew and Vogel [Drew, T., & Vogel, E. K. Neural measures of individual differences in selecting and tracking multiple moving objects. Journal of Neuroscience, 28, 4183-4191, 2008] showed that the CDA also indexes the number of items being tracking a standard MOT task. In the current study, we set out to determine whether the CDA is a signal that merely represents the number of objects that are attended during a trial or a dynamic signal capable of reflecting on-line changes in tracking load during a single trial. By measuring the response to add or drop cues, we were able to observe dynamic
Adaptive neural information processing with dynamical electrical synapses
Xiao, Lei; Zhang, Dan-ke; Li, Yuan-qing; Liang, Pei-ji; Wu, Si
2013-01-01
The present study investigates a potential computational role of dynamical electrical synapses in neural information processing. Compared with chemical synapses, electrical synapses are more efficient in modulating the concerted activity of neurons. Based on the experimental data, we propose a phenomenological model for short-term facilitation of electrical synapses. The model satisfactorily reproduces the phenomenon that the neuronal correlation increases although the neuronal firing rates attenuate during luminance adaptation. We explore how the stimulus information is encoded in parallel by firing rates and correlated activity of neurons, and find that dynamical electrical synapses mediate a transition from the firing rate code to the correlation code during luminance adaptation. The latter encodes the stimulus information by using concerted but lower neuronal firing rates, and hence is economically more efficient. PMID:23596413
Binocular rivalry waves in a directionally selective neural field model
NASA Astrophysics Data System (ADS)
Carroll, Samuel R.; Bressloff, Paul C.
2014-10-01
We extend a neural field model of binocular rivalry waves in the visual cortex to incorporate direction selectivity of moving stimuli. For each eye, we consider a one-dimensional network of neurons that respond maximally to a fixed orientation and speed of a grating stimulus. Recurrent connections within each one-dimensional network are taken to be excitatory and asymmetric, where the asymmetry captures the direction and speed of the moving stimuli. Connections between the two networks are taken to be inhibitory (cross-inhibition). As per previous studies, we incorporate slow adaption as a symmetry breaking mechanism that allows waves to propagate. We derive an analytical expression for traveling wave solutions of the neural field equations, as well as an implicit equation for the wave speed as a function of neurophysiological parameters, and analyze their stability. Most importantly, we show that propagation of traveling waves is faster in the direction of stimulus motion than against it, which is in agreement with previous experimental and computational studies.
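As a rough illustration of the ingredients described above (two mutually inhibiting one-dimensional networks, asymmetric recurrent excitation and slow adaptation), the following sketch integrates such a two-population neural field with a simple Euler scheme. It is not the authors' model or parameter regime; the kernel shift, gains, thresholds and inputs are illustrative assumptions.

```python
# Hedged sketch: Euler integration of a two-eye, 1D neural field with
# asymmetric recurrent excitation, cross-inhibition and slow adaptation.
# All parameter values are illustrative, not tuned to the paper's wave regime.
import numpy as np

N, L = 512, 20.0
x = np.linspace(-L/2, L/2, N)
dx = x[1] - x[0]

def kernel(shift, sigma=1.0):
    # a nonzero shift skews the excitatory kernel, mimicking direction selectivity
    k = np.exp(-(x - shift) ** 2 / (2 * sigma ** 2))
    return k / (k.sum() * dx)

w_e = kernel(shift=0.5)                               # asymmetric excitation (assumed shift)
f = lambda u, kappa=0.1: (u > kappa).astype(float)    # Heaviside firing-rate function

uL = np.where(x < -8, 1.0, 0.0)                       # seed a left-eye dominance region
uR = np.zeros(N)
aL, aR = np.zeros(N), np.zeros(N)
dt, tau, tau_a, beta, w_i, I = 0.01, 1.0, 20.0, 0.5, 0.8, 0.24

for _ in range(5000):
    excL = dx * np.convolve(f(uL), w_e, mode="same")
    excR = dx * np.convolve(f(uR), w_e, mode="same")
    uL += dt / tau * (-uL + excL - w_i * f(uR) - aL + I)
    uR += dt / tau * (-uR + excR - w_i * f(uL) - aR + I)
    aL += dt / tau_a * (-aL + beta * f(uL))
    aR += dt / tau_a * (-aR + beta * f(uR))

print("peak left-eye activity at x =", x[uL.argmax()])
```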
Some theoretical and numerical results for delayed neural field equations
NASA Astrophysics Data System (ADS)
Faye, Grégory; Faugeras, Olivier
2010-05-01
In this paper we study neural field models with delays which define a useful framework for modeling macroscopic parts of the cortex involving several populations of neurons. Nonlinear delayed integro-differential equations describe the spatio-temporal behavior of these fields. Using methods from the theory of delay differential equations, we show the existence and uniqueness of a solution of these equations. A Lyapunov analysis gives us sufficient conditions for the solutions to be asymptotically stable. We also present a fairly detailed study of the numerical computation of these solutions. This is, to our knowledge, the first time that a serious analysis of the problem of the existence and uniqueness of a solution of these equations has been performed. Another original contribution of ours is the definition of a Lyapunov functional and the result of stability it implies. We illustrate our numerical schemes on a variety of examples that are relevant to modeling in neuroscience.
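A minimal numerical scheme of the kind analyzed here can be sketched as follows: the delayed integro-differential equation is discretized on a one-dimensional grid, with a history buffer holding past activity so that each location reads every other location at the appropriate space-dependent delay. The kernel, sigmoid and propagation speed are assumptions for illustration only.

```python
# Hedged sketch: explicit Euler scheme for a 1D delayed neural field
#   du/dt = -u(x,t) + \int w(x-y) S(u(y, t - |x-y|/c)) dy
# using a history buffer for the delayed activity. Parameters are illustrative.
import numpy as np

N, L, c, dt = 200, 10.0, 1.0, 0.01
x = np.linspace(0, L, N)
dx = x[1] - x[0]
D = np.abs(x[:, None] - x[None, :])               # pairwise distances
W = np.exp(-D) * dx                               # exponential connectivity kernel
delay_steps = np.round(D / (c * dt)).astype(int)  # delay (in time steps) between locations
max_delay = delay_steps.max()

S = lambda u: 1.0 / (1.0 + np.exp(-5.0 * (u - 0.3)))   # sigmoidal firing rate

history = np.zeros((max_delay + 1, N))            # history[k] holds u at time t - k*dt
history[:] = 0.1 * np.random.rand(N)              # constant-in-time initial history
u = history[0].copy()
cols = np.arange(N)[None, :]

for step in range(2000):
    u_delayed = history[delay_steps, cols]        # u_delayed[i, j] = u_j(t - d_ij / c)
    u = u + dt * (-u + (W * S(u_delayed)).sum(axis=1))
    history = np.roll(history, 1, axis=0)
    history[0] = u

print("final mean activity:", u.mean())
```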
Dynamic analysis of a general class of winner-take-all competitive neural networks.
Fang, Yuguang; Cohen, Michael A; Kincaid, Thomas G
2010-05-01
This paper studies a general class of dynamical neural networks with lateral inhibition, exhibiting winner-take-all (WTA) behavior. These networks are motivated by a metal-oxide-semiconductor field effect transistor (MOSFET) implementation of neural networks, in which mutual competition plays a very important role. We show that for a fairly general class of competitive neural networks, WTA behavior exists. Sufficient conditions for the network to have a WTA equilibrium are obtained, and rigorous convergence analysis is carried out. The conditions for the network to have the WTA behavior obtained in this paper provide design guidelines for the network implementation and fabrication. We also demonstrate that whenever the network gets into the WTA region, it will stay in that region and settle down exponentially fast to the WTA point. This provides a speeding procedure for the decision making: as soon as it gets into the region, the winner can be declared. Finally, we show that this WTA neural network has a self-resetting property, and a resetting principle is proposed. PMID:20215068
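The lateral-inhibition mechanism behind such WTA behavior can be sketched with a generic continuous-time rate network (not the paper's MOSFET circuit equations): each unit excites itself and is inhibited by the summed activity of the others, so only the unit with the largest input remains active. Gains, inputs and the saturating activation are illustrative assumptions.

```python
# Hedged sketch of winner-take-all dynamics via lateral inhibition.
# Self-excitation a, mutual inhibition b and the inputs are illustrative.
import numpy as np

def wta(inputs, a=1.2, b=1.5, dt=0.01, steps=5000):
    x = np.zeros_like(inputs, dtype=float)
    act = lambda v: np.clip(v, 0.0, 1.0)          # saturating threshold-linear activation
    for _ in range(steps):
        total = x.sum()
        # each unit: own input + self-excitation - inhibition from all other units
        x += dt * (-x + act(inputs + a * x - b * (total - x)))
    return x

inputs = np.array([0.9, 1.0, 0.8, 0.95])
print(wta(inputs))   # only the unit with the largest input stays active
```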
Schoppik, David; Nagel, Katherine I; Lisberger, Stephen G
2008-04-24
Neural activity in the frontal eye fields controls smooth pursuit eye movements, but the relationship between single neuron responses, cortical population responses, and eye movements is not well understood. We describe an approach to dynamically link trial-to-trial fluctuations in neural responses to parallel variations in pursuit and demonstrate that individual neurons predict eye velocity fluctuations at particular moments during the course of behavior, while the population of neurons collectively tiles the entire duration of the movement. The analysis also reveals the strength of correlations in the eye movement predictions derived from pairs of simultaneously recorded neurons and suggests a simple model of cortical processing. These findings constrain the primate cortical code for movement, suggesting that either a few neurons are sufficient to drive pursuit at any given time or that many neurons operate collectively at each moment with remarkably little variation added to motor command signals downstream from the cortex. PMID:18439409
Reservoir Computing Properties of Neural Dynamics in Prefrontal Cortex
Procyk, Emmanuel; Dominey, Peter Ford
2016-01-01
Primates display a remarkable ability to adapt to novel situations. Determining what is most pertinent in these situations is not always possible based only on the current sensory inputs, and often also depends on recent inputs and behavioral outputs that contribute to internal states. Thus, one can ask how cortical dynamics generate representations of these complex situations. It has been observed that mixed selectivity in cortical neurons contributes to representing diverse situations defined by a combination of the current stimuli, and that mixed selectivity is readily obtained in randomly connected recurrent networks. In this context, these reservoir networks reproduce the highly recurrent nature of local cortical connectivity. Recombining present and past inputs, random recurrent networks from the reservoir computing framework generate mixed selectivity which provides pre-coded representations of an essentially universal set of contexts. These representations can then be selectively amplified through learning to solve the task at hand. We thus explored their representational power and dynamical properties after training a reservoir to perform a complex cognitive task initially developed for monkeys. The reservoir model inherently displayed a dynamic form of mixed selectivity, key to the representation of the behavioral context over time. The pre-coded representation of context was amplified by training a feedback neuron to explicitly represent this context, thereby reproducing the effect of learning and allowing the model to perform more robustly. This second version of the model demonstrates how a hybrid dynamical regime combining the spatio-temporal processing of reservoirs and the input-driven attracting dynamics generated by the feedback neuron can be used to solve a complex cognitive task. We compared reservoir activity to neural activity recorded in the dorsal anterior cingulate cortex of monkeys, which revealed similar network dynamics. We argue that reservoir computing is a
The dynamic matching of neural and cognitive growth cycles.
Peltzer-Karpf, Annemarie
2012-01-01
In recent years complex systems biology has developed detailed numerical models mimicking the establishment, modulation, and fine-tuning of neural networks. Current research within the framework of Dynamic Systems Theory (DST) emphasizes the nexus between dynamic cycles in the brain and cognitive development, which unfold in a nonlinear way and allow for individual variation. Careful observations over multiple timescales and levels of organization suggest a link to system-specific developmental changes in the central nervous system, with more functional specialization opening up more efficient information processing. This can be seen in spurts of EEG energy and altered cortical coherence. Data on age- and experience-related changes in synaptic density and metabolism, shifts in blood flow and improvement of (sub)cortical connections are projected onto a dynamic trajectory of cognition moving from diffuse to more refined constructions in the various subsystems, each of which exhibits its own developmental path. Pending questions are the generation of rules amidst diversity and fluctuation, and the correlation of growth rate and critical mass in developmental dynamics and interaction. PMID:22196112
Stable dynamic backpropagation learning in recurrent neural networks.
Jin, L; Gupta, M M
1999-01-01
The conventional dynamic backpropagation (DBP) algorithm proposed by Pineda does not necessarily imply the stability of the dynamic neural model in the sense of Lyapunov during a dynamic weight learning process. A difficulty with the DBP learning process is thus associated with the stability of the equilibrium points, which have to be checked by simulating the set of dynamic equations, or else by verifying the stability conditions, after the learning has been completed. To avoid unstable phenomena during the learning process, two new learning schemes, called the multiplier and constrained learning rate algorithms, are proposed in this paper to provide stable adaptive updating processes for both the synaptic and somatic parameters of the network. In the multiplier method, the explicit stability conditions are introduced into the iterative error index, and the new updating formulations contain a set of inequality constraints. In the constrained learning rate algorithm, the learning rate is updated at each iterative instant by an equation derived using the stability conditions. With these stable DBP algorithms, any analog target pattern may be implemented by a steady output vector which is a nonlinear vector function of the stable equilibrium point. The applicability of the approaches presented is illustrated through both analog and binary pattern storage examples. PMID:18252634
Dynamic construction of the neural networks underpinning empathy for pain.
Betti, Viviana; Aglioti, Salvatore Maria
2016-04-01
When people witness or imagine the pain of another person, their nervous system may react as if they were feeling that pain themselves. Early neuroscientific evidence indicates that the firsthand and vicarious experiences of pain share largely overlapping neural structures, which typically correspond to the lateral and medial brain regions that encode the sensory and the affective qualities of pain. Such neural circuitry is highly malleable and allows people to flexibly adjust the empathic behavior depending on social and personal factors. Recent views posit, however, that the brain can be conceptualized as a complex system, in which behavior emerges from the interaction between functionally connected brain regions, organized into large-scale networks. Beyond the classical modular view of the brain, here we suggest that empathic behavior may be understood through a dynamic network-based approach where the cortical circuits associated with the experience of pain flexibly change in order to code self- and other-related emotions and to intrinsically map our mentality to empathetically react to others. PMID:26877105
Neural dynamics of change detection in crowded acoustic scenes.
Sohoglu, Ediz; Chait, Maria
2016-02-01
Two key questions concerning change detection in crowded acoustic environments are the extent to which cortical processing is specialized for different forms of acoustic change and when in the time-course of cortical processing neural activity becomes predictive of behavioral outcomes. Here, we address these issues by using magnetoencephalography (MEG) to probe the cortical dynamics of change detection in ongoing acoustic scenes containing as many as ten concurrent sources. Each source was formed of a sequence of tone pips with a unique carrier frequency and temporal modulation pattern, designed to mimic the spectrotemporal structure of natural sounds. Our results show that listeners are more accurate and quicker to detect the appearance (than disappearance) of an auditory source in the ongoing scene. Underpinning this behavioral asymmetry are change-evoked responses differing not only in magnitude and latency, but also in their spatial patterns. We find that even the earliest (~50 ms) cortical response to change is predictive of behavioral outcomes (detection times), consistent with the hypothesized role of local neural transients in supporting change detection. PMID:26631816
Neural dynamic programming applied to rotorcraft flight control and reconfiguration
NASA Astrophysics Data System (ADS)
Enns, Russell James
This dissertation introduces a new rotorcraft flight control methodology based on a relatively new form of neural control, neural dynamic programming (NDP). NDP is an on-line learning control scheme that is in its infancy and has only been applied to simple systems, such as those possessing a single control and a handful of states. This dissertation builds on the existing NDP concept to provide a comprehensive control system framework that can perform well as a learning controller for more realistic and practical systems of higher dimension such as helicopters. To accommodate such complex systems, the dissertation introduces the concept of a trim network that is seamlessly integrated into the NDP control structure and is also trained using this structure. This is the first time that neural networks have been applied to the helicopter control problem as a direct form of control without using other controller methodologies to augment the neural controller and without using order reducing simplifications such as axes decoupling. The dissertation focuses on providing a viable alternative helicopter control system design approach rather than providing extensive comparisons among various available controllers. As such, results showing the system's ability to stabilize the helicopter and to perform command tracking, without explicit comparison to other methods, are presented. In this research, design robustness was addressed by performing simulations under various disturbance conditions. All designs were tested using FLYRT, a sophisticated, industrial-scale, nonlinear, validated model of the Apache helicopter. Though illustrated for helicopters, the NDP control system framework should be applicable to general purpose multi-input multi-output (MIMO) control. In addition, this dissertation tackles the helicopter reconfigurable flight control problem, finding control solutions when the aircraft, and in particular its control actuators, are damaged. Such solutions have
Direct Adaptive Aircraft Control Using Dynamic Cell Structure Neural Networks
NASA Technical Reports Server (NTRS)
Jorgensen, Charles C.
1997-01-01
A Dynamic Cell Structure (DCS) Neural Network was developed which learns topology representing networks (TRNS) of F-15 aircraft aerodynamic stability and control derivatives. The network is integrated into a direct adaptive tracking controller. The combination produces a robust adaptive architecture capable of handling multiple accident and off-nominal flight scenarios. This paper describes the DCS network and modifications to the parameter estimation procedure. The work represents one step towards an integrated real-time reconfiguration control architecture for rapid prototyping of new aircraft designs. Performance was evaluated using three off-line benchmarks and on-line nonlinear Virtual Reality simulation. Flight control was evaluated under scenarios including differential stabilator lock, soft sensor failure, control and stability derivative variations, and air turbulence.
Bojak, Ingo; Stoyanov, Zhivko V.; Liley, David T. J.
2015-01-01
Burst suppression in the electroencephalogram (EEG) is a well-described phenomenon that occurs during deep anesthesia, as well as in a variety of congenital and acquired brain insults. Classically it is thought of as spatially synchronous, quasi-periodic bursts of high amplitude EEG separated by low amplitude activity. However, its characterization as a “global brain state” has been challenged by recent results obtained with intracranial electrocorticography. Not only does it appear that burst suppression activity is highly asynchronous across cortex, but also that it may occur in isolated regions of circumscribed spatial extent. Here we outline a realistic neural field model for burst suppression by adding a slow process of synaptic resource depletion and recovery, which is able to reproduce qualitatively the empirically observed features during general anesthesia at the whole cortex level. Simulations reveal heterogeneous bursting over the model cortex and complex spatiotemporal dynamics during simulated anesthetic action, and provide forward predictions of neuroimaging signals for subsequent empirical comparisons and more detailed characterization. Because burst suppression corresponds to a dynamical end-point of brain activity, theoretically accounting for its spatiotemporal emergence will vitally contribute to efforts aimed at clarifying whether a common physiological trajectory is induced by the actions of general anesthetic agents. We have taken a first step in this direction by showing that a neural field model can qualitatively match recent experimental data that indicate spatial differentiation of burst suppression activity across cortex. PMID:25767438
Spatiotemporal neural network dynamics for the processing of dynamic facial expressions
Sato, Wataru; Kochiyama, Takanori; Uono, Shota
2015-01-01
The dynamic facial expressions of emotion automatically elicit multifaceted psychological activities; however, the temporal profiles and dynamic interaction patterns of brain activities remain unknown. We investigated these issues using magnetoencephalography. Participants passively observed dynamic facial expressions of fear and happiness, or dynamic mosaics. Source-reconstruction analyses utilizing functional magnetic-resonance imaging data revealed higher activation in broad regions of the bilateral occipital and temporal cortices in response to dynamic facial expressions than in response to dynamic mosaics at 150–200 ms and some later time points. The right inferior frontal gyrus exhibited higher activity for dynamic faces versus mosaics at 300–350 ms. Dynamic causal-modeling analyses revealed that dynamic faces activated the dual visual routes and visual–motor route. Superior influences of feedforward and feedback connections were identified before and after 200 ms, respectively. These results indicate that hierarchical, bidirectional neural network dynamics within a few hundred milliseconds implement the processing of dynamic facial expressions. PMID:26206708
Autonomic neural control of heart rate during dynamic exercise: revisited
White, Daniel W; Raven, Peter B
2014-01-01
The accepted model of autonomic control of heart rate (HR) during dynamic exercise indicates that the initial increase is entirely attributable to the withdrawal of parasympathetic nervous system (PSNS) activity and that subsequent increases in HR are entirely attributable to increases in cardiac sympathetic activity. In the present review, we sought to re-evaluate the model of autonomic neural control of HR in humans during progressive increases in dynamic exercise workload. We analysed data from both new and previously published studies involving baroreflex stimulation and pharmacological blockade of the autonomic nervous system. Results indicate that the PSNS remains functionally active throughout exercise and that increases in HR from rest to maximal exercise result from an increasing workload-related transition from a 4 : 1 vagal–sympathetic balance to a 4 : 1 sympatho–vagal balance. Furthermore, the beat-to-beat autonomic reflex control of HR was found to be dependent on the ability of the PSNS to modulate the HR as it was progressively restrained by increasing workload-related sympathetic nerve activity. In conclusion: (i) increases in exercise workload-related HR are not caused by a total withdrawal of the PSNS followed by an increase in sympathetic tone; (ii) reciprocal antagonism is key to the transition from vagal to sympathetic dominance, and (iii) resetting of the arterial baroreflex causes immediate exercise-onset reflexive increases in HR, which are parasympathetically mediated, followed by slower increases in sympathetic tone as workloads are increased. PMID:24756637
Control of Complex Dynamic Systems by Neural Networks
NASA Technical Reports Server (NTRS)
Spall, James C.; Cristion, John A.
1993-01-01
This paper considers the use of neural networks (NN's) in controlling a nonlinear, stochastic system with unknown process equations. The NN is used to model the resulting unknown control law. The approach here is based on using the output error of the system to train the NN controller without the need to construct a separate model (NN or other type) for the unknown process dynamics. To implement such a direct adaptive control approach, it is required that connection weights in the NN be estimated while the system is being controlled. As a result of the feedback of the unknown process dynamics, however, it is not possible to determine the gradient of the loss function for use in standard (back-propagation-type) weight estimation algorithms. Therefore, this paper considers the use of a new stochastic approximation algorithm for this weight estimation, which is based on a 'simultaneous perturbation' gradient approximation that only requires the system output error. It is shown that this algorithm can greatly enhance the efficiency over more standard stochastic approximation algorithms based on finite-difference gradient approximations.
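The 'simultaneous perturbation' idea mentioned above can be illustrated in a few lines: every weight is perturbed at once by a random +/-1 vector, and the gradient is estimated from just two evaluations of the output error. The toy quadratic loss and gain schedules below are illustrative stand-ins, not the paper's controller setup.

```python
# Hedged sketch of simultaneous-perturbation stochastic approximation (SPSA).
# The quadratic 'output error' is a stand-in for the controller's loss.
import numpy as np

rng = np.random.default_rng(0)
target = np.array([1.0, -2.0, 0.5])
loss = lambda w: np.sum((w - target) ** 2)

w = np.zeros(3)
for k in range(200):
    a_k = 0.1 / (k + 1) ** 0.602                       # step-size gain sequence
    c_k = 0.1 / (k + 1) ** 0.101                       # perturbation-size sequence
    delta = rng.choice([-1.0, 1.0], size=w.shape)      # simultaneous +/-1 perturbation
    g_hat = (loss(w + c_k * delta) - loss(w - c_k * delta)) / (2 * c_k * delta)
    w -= a_k * g_hat                                   # only two loss evaluations per update

print(w)   # approaches the target using only noisy two-sided evaluations
```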
Hidden Conditional Neural Fields for Continuous Phoneme Speech Recognition
NASA Astrophysics Data System (ADS)
Fujii, Yasuhisa; Yamamoto, Kazumasa; Nakagawa, Seiichi
In this paper, we propose Hidden Conditional Neural Fields (HCNF) for continuous phoneme speech recognition, which are a combination of Hidden Conditional Random Fields (HCRF) and a Multi-Layer Perceptron (MLP), and inherit their merits, namely, the discriminative property for sequences from HCRF and the ability to extract non-linear features from an MLP. HCNF can incorporate many types of features from which non-linear features can be extracted, and is trained by sequential criteria. We first present the formulation of HCNF and then examine three methods to further improve automatic speech recognition using HCNF: an objective function that explicitly considers training errors, a hierarchical tandem-style feature, and a deep non-linear feature extractor for the observation function. We show that HCNF can be trained realistically without any initial model and outperforms HCRF and the triphone hidden Markov model trained in the minimum phone error (MPE) manner, using experimental results for continuous English phoneme recognition on the TIMIT core test set and Japanese phoneme recognition on the IPA 100 test set.
NASA Astrophysics Data System (ADS)
Chiel, Hillel J.; Thomas, Peter J.
2011-12-01
Tracing technologies back in time to their scientific and mathematical origins reveals surprising connections between the pure pursuit of knowledge and the opportunities afforded by that pursuit for new and unexpected applications. For example, Einstein's desire to eliminate the disparity between electricity and magnetism in Maxwell's equations impelled him to develop the special theory of relativity (Einstein 1922, p 41: 'The advance in method arises from the fact that the electric and magnetic fields lose their separate existences through the relativity of motion. A field which appears to be purely an electric field, judged from one system, has also magnetic field components when judged from another inertial system.'). His conviction that there should be no privileged inertial frame of reference (Einstein 1922, p 58: 'The possibility of explaining the numerical equality of inertia and gravitation by the unity of their nature gives to the general theory of relativity, according to my conviction, such a superiority over the conceptions of classical mechanics, that all the difficulties encountered must be considered as small in comparison with this progress.') further impelled him to utilize the non-Euclidean geometry originally developed by Riemann and others as a purely hypothetical alternative to classical geometry as the foundation for the general theory of relativity. Nowadays, anyone who depends on a global positioning system—which now includes many people who own smart phones—uses a system that would not work effectively without incorporating corrections from both special and general relativity (Ashby 2003). As another example, G H Hardy famously proclaimed his conviction that his work on number theory, which he pursued for the sheer love of exploring the beauty of mathematical structures, was unlikely to find any practical applications (Hardy 1940, pp 135-6: 'The general conclusion, surely, stands out plainly enough. If useful knowledge
NASA Astrophysics Data System (ADS)
di Volo, Matteo; Burioni, Raffaella; Casartelli, Mario; Livi, Roberto; Vezzani, Alessandro
2016-01-01
We study the dynamics of networks of inhibitory and excitatory leaky integrate-and-fire neurons with short-term synaptic plasticity in the presence of depressive and facilitating mechanisms. The dynamics is analyzed by a heterogeneous mean-field approximation, which allows us to keep track of the effects of structural disorder in the network. We describe the complex behavior of different classes of excitatory and inhibitory components, which give rise to a rich dynamical phase diagram as a function of the fraction of inhibitory neurons. Using the same mean-field approach, we study and solve a global inverse problem: reconstructing the degree probability distributions of the inhibitory and excitatory components and the fraction of inhibitory neurons from the knowledge of the average synaptic activity field. This approach unveils new perspectives on the numerical study of neural network dynamics and the possibility of using these models as a test bed for the analysis of experimental data.
Nitzan, Erez; Krispin, Shlomo; Pfaltzgraff, Elise R.; Klar, Avihu; Labosky, Patricia A.; Kalcheim, Chaya
2013-01-01
Understanding when and how multipotent progenitors segregate into diverse fates is a key question during embryonic development. The neural crest (NC) is an exemplary model system with which to investigate the dynamics of progenitor cell specification, as it generates a multitude of derivatives. Based on ‘in ovo’ lineage analysis, we previously suggested an early fate restriction of premigratory trunk NC to generate neural versus melanogenic fates, yet the timing of fate segregation and the underlying mechanisms remained unknown. Analysis of progenitors expressing a Foxd3 reporter reveals that prospective melanoblasts downregulate Foxd3 and have already segregated from neural lineages before emigration. When this downregulation is prevented, late-emigrating avian precursors fail to upregulate the melanogenic markers Mitf and MC/1 and the guidance receptor Ednrb2, generating instead glial cells that express P0 and Fabp. In this context, Foxd3 lies downstream of Snail2 and Sox9, constituting a minimal network upstream of Mitf and Ednrb2 to link melanogenic specification with migration. Consistent with the gain-of-function data in avians, loss of Foxd3 function in mouse NC results in ectopic melanogenesis in the dorsal tube and sensory ganglia. Altogether, Foxd3 is part of a dynamically expressed gene network that is necessary and sufficient to regulate fate decisions in premigratory NC. Their timely downregulation in the dorsal neural tube is thus necessary for the switch between neural and melanocytic phases of NC development. PMID:23615280
Dynamics of fully connected attractor neural networks near saturation
NASA Astrophysics Data System (ADS)
Coolen, A. C. C.; Sherrington, D.
1993-12-01
We present an exact dynamical theory, valid on finite time scales, to describe the fully connected Hopfield model near saturation in terms of deterministic flow equations for order parameters. Two transparent assumptions allow us to perform a replica calculation of the distribution of intrinsic noise components of the alignment fields. Numerical simulations indicate that our equations describe the dynamics correctly in the region where replica symmetry is stable. In equilibrium our theory reproduces the saddle-point equations obtained in the thermodynamic analysis by Amit et al.
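For orientation, retrieval in the underlying Hopfield model can be sketched with a standard Hebbian simulation; the deterministic order-parameter flow and replica calculation of the abstract are not reproduced here, and the network size and storage load are illustrative.

```python
# Hedged sketch: zero-temperature retrieval in a Hebbian Hopfield network,
# monitored through the overlap order parameter m. Sizes and load are illustrative.
import numpy as np

rng = np.random.default_rng(5)
N, P = 500, 25                               # storage load alpha = P/N = 0.05
xi = rng.choice([-1, 1], size=(P, N))        # stored random patterns
J = (xi.T @ xi) / N                          # Hebbian couplings
np.fill_diagonal(J, 0.0)

s = xi[0].copy()
flip = rng.choice(N, size=100, replace=False)
s[flip] *= -1                                # corrupt 20% of the first pattern

m = (s @ xi[0]) / N
for sweep in range(10):                      # asynchronous deterministic updates
    for i in rng.permutation(N):
        s[i] = 1 if J[i] @ s >= 0 else -1
    m = (s @ xi[0]) / N
print("overlap with stored pattern:", m)     # close to 1 when retrieval succeeds
```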
Ca^2+ Dynamics and Propagating Waves in Neural Networks with Excitatory and Inhibitory Neurons.
NASA Astrophysics Data System (ADS)
Bondarenko, Vladimir E.
2008-03-01
Dynamics of neural spikes, intracellular Ca^2+, and Ca^2+ in intracellular stores were investigated both in isolated Chay's neurons and in neurons coupled in networks. Three types of neural networks were studied: a purely excitatory neural network, with only excitatory (AMPA) synapses; a purely inhibitory neural network, with only inhibitory (GABA) synapses; and a hybrid neural network, with both AMPA and GABA synapses. In the hybrid neural network, the ratio of excitatory to inhibitory neurons was 4:1. For each case, we considered two types of connections, "all-with-all" and 20 connections per neuron. Each neural network contained 100 neurons with randomly distributed connection strengths. In the neural networks with "all-with-all" connections and AMPA/GABA synapses, an increase in average synaptic strength yielded bursting activity with an increased/decreased number of spikes per burst. The neural bursts and Ca^2+ transients were synchronous at relatively large connection strengths despite the random connection strengths. Simulations of the neural networks with 20 connections per neuron and with only AMPA synapses showed synchronous oscillations, while the neural networks with GABA or hybrid synapses generated propagating waves of membrane potential and Ca^2+ transients.
Classification of mammographic masses using generalized dynamic fuzzy neural networks
NASA Astrophysics Data System (ADS)
Lim, Wei Keat; Er, Meng Joo
2003-05-01
In this paper, computer-aided classification of mammographic masses using generalized dynamic fuzzy neural networks (GDFNN) is presented. The texture parameters, derived from first-order gradient distribution and gray-level co-occurrence matrices (GCMs), were computed from the regions of interest (ROIs). A total of 77 images containing 38 benign cases and 39 malignant cases from the Digital Database for Screening Mammography (DDSM) were analyzed. A fast approach of automatically generating fuzzy rules from training samples was implemented to classify tumors. The novelty of this work is that it alleviates the problem of the conventional computer-aided diagnosis (CAD) system that requires a designer to examine all the input-output relationships of a training database in order to obtain the most appropriate structure for the classifier. In this approach, not only the connection weights can be adjusted, but also the structure can be self-adaptive during the learning process. With the classifier automatically generated by the GDFNN learning algorithm, the area under the receiver-operating characteristic (ROC) curve, Az, reached 0.9289, which corresponded to a true-positive fraction of 94.9% at a false positive fraction of 73.7%. The corresponding accuracy was 84.4%, the positive predictive value was 78.7% and the negative predictive value was 93.3%.
Neural dynamics for landmark orientation and angular path integration.
Seelig, Johannes D; Jayaraman, Vivek
2015-05-14
Many animals navigate using a combination of visual landmarks and path integration. In mammalian brains, head direction cells integrate these two streams of information by representing an animal's heading relative to landmarks, yet maintaining their directional tuning in darkness based on self-motion cues. Here we use two-photon calcium imaging in head-fixed Drosophila melanogaster walking on a ball in a virtual reality arena to demonstrate that landmark-based orientation and angular path integration are combined in the population responses of neurons whose dendrites tile the ellipsoid body, a toroidal structure in the centre of the fly brain. The neural population encodes the fly's azimuth relative to its environment, tracking visual landmarks when available and relying on self-motion cues in darkness. When both visual and self-motion cues are absent, a representation of the animal's orientation is maintained in this network through persistent activity, a potential substrate for short-term memory. Several features of the population dynamics of these neurons and their circular anatomical arrangement are suggestive of ring attractors, network structures that have been proposed to support the function of navigational brain circuits. PMID:25971509
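The ring-attractor picture invoked here can be caricatured with a small rate model: cosine-tuned recurrent weights, local self-excitation and divisive normalization keep a localized bump of activity alive, and a term proportional to the angular derivative of the bump, scaled by a velocity cue, rotates it around the ring. This is only an illustrative caricature under assumed parameters, not a model of the recorded ellipsoid-body circuit.

```python
# Hedged sketch of a ring-attractor-like heading network: a persistent activity
# bump whose position is shifted by an angular-velocity signal. Illustrative only.
import numpy as np

N = 64
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
W = np.cos(theta[:, None] - theta[None, :])        # nearby headings excite, opposite inhibit

r = np.exp(np.cos(theta))                          # initial bump centred on heading 0
r /= r.sum()
relu = lambda v: np.maximum(v, 0.0)

for step in range(2000):
    vel = 0.3                                      # angular-velocity cue (assumed constant)
    drift = vel * np.gradient(r, theta)            # asymmetric drive that pushes the bump
    r = relu(W @ r + 10.0 * r - drift)             # recurrent input plus local self-excitation
    r /= r.sum()                                   # divisive normalisation keeps the bump bounded

print("decoded heading (rad):", theta[r.argmax()])
```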
Dynamical recurrent neural networks--towards environmental time series prediction.
Aussem, A; Murtagh, F; Sarazin, M
1995-06-01
Dynamical Recurrent Neural Networks (DRNN) (Aussem 1995a) are a class of fully recurrent networks obtained by modeling synapses as autoregressive filters. By virtue of their internal dynamics, these networks approximate the underlying law governing the time series by a system of nonlinear difference equations of internal variables. They therefore provide history-sensitive forecasts without having to be explicitly fed with external memory. The model is trained by a local and recursive error propagation algorithm called temporal-recurrent-backpropagation. The efficiency of the procedure benefits from the exponential decay of the gradient terms backpropagated through the adjoint network. We assess the predictive ability of the DRNN model with meteorological and astronomical time series recorded around the candidate observation sites for the future VLT telescope. The hope is that reliable environmental forecasts provided by the model will allow modern telescopes to be preset, a few hours in advance, in the most suited instrumental mode. In this perspective, the model is first appraised on precipitation measurements against traditional nonlinear AR and ARMA techniques using feedforward networks. Then we tackle a complex problem, namely the prediction of astronomical seeing, known to be a very erratic time series. A fuzzy coding approach is used to reduce the complexity of the underlying laws governing the seeing. Then, a fuzzy correspondence analysis is carried out to explore the internal relationships in the data. Based on a carefully selected set of meteorological variables at the same time-point, a nonlinear multiple regression, termed nowcasting (Murtagh et al. 1993, 1995), is carried out on the fuzzily coded seeing records. The DRNN is shown to outperform the fuzzy k-nearest neighbors method. PMID:7496587
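The central modeling idea, synapses as autoregressive filters, can be sketched for a single unit's forward pass: each synapse keeps a filtered trace of its presynaptic signal, giving the unit memory of past inputs without an external delay line. The coefficients are illustrative, and the temporal-recurrent-backpropagation training is not shown.

```python
# Hedged sketch: forward pass of one DRNN-style unit whose synapses are
# first-order autoregressive filters. Coefficients and inputs are illustrative.
import numpy as np

rng = np.random.default_rng(4)
T, n_in = 200, 3
x = rng.standard_normal((T, n_in))       # input time series (e.g. meteorological variables)

w = 0.5 * rng.standard_normal(n_in)      # synaptic gains
a = np.array([0.9, 0.5, 0.1])            # per-synapse AR coefficients (memory depths)
s = np.zeros(n_in)                       # synaptic filter states
y = np.zeros(T)

for t in range(T):
    s = a * s + w * x[t]                 # AR(1) filtering at every synapse
    y[t] = np.tanh(s.sum())              # history-sensitive unit output

print(y[:5])
```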
Track and Field Dynamics. Second Edition.
ERIC Educational Resources Information Center
Ecker, Tom
Track and field coaching is considered an art embodying three sciences--physiology, psychology, and dynamics. It is the area of dynamics, the branch of physics that deals with the action of force on bodies, that is central to this book. Although the book does not cover the entire realm of dynamics, the laws and principles that relate directly to…
Neural RNA as a principal dynamic information carrier in a neuron
NASA Astrophysics Data System (ADS)
Berezin, Andrey A.
1999-11-01
A quantum mechanical approach has been used to develop a model of the dynamics of the neural ribonucleic acid molecule. Macro- and micro-scale Fermi-Pasta-Ulam recurrence has been considered as the principal information carrier in a neuron.
Oscillatory phase dynamics in neural entrainment underpin illusory percepts of time.
Herrmann, Björn; Henry, Molly J; Grigutsch, Maren; Obleser, Jonas
2013-10-01
Neural oscillatory dynamics are a candidate mechanism to steer perception of time and temporal rate change. While oscillator models of time perception are strongly supported by behavioral evidence, a direct link to neural oscillations and oscillatory entrainment has not yet been provided. In addition, it has thus far remained unaddressed how context-induced illusory percepts of time are coded for in oscillator models of time perception. To investigate these questions, we used magnetoencephalography and examined the neural oscillatory dynamics that underpin pitch-induced illusory percepts of temporal rate change. Human participants listened to frequency-modulated sounds that varied over time in both modulation rate and pitch, and judged the direction of rate change (decrease vs increase). Our results demonstrate distinct neural mechanisms of rate perception: Modulation rate changes directly affected listeners' rate percept as well as the exact frequency of the neural oscillation. However, pitch-induced illusory rate changes were unrelated to the exact frequency of the neural responses. The rate change illusion was instead linked to changes in neural phase patterns, which allowed for single-trial decoding of percepts. That is, illusory underestimations or overestimations of perceived rate change were tightly coupled to increased intertrial phase coherence and changes in cerebro-acoustic phase lag. The results provide insight on how illusory percepts of time are coded for by neural oscillatory dynamics. PMID:24089487
Hellyer, Peter J.; Scott, Gregory; Shanahan, Murray; Sharp, David J.
2015-01-01
Current theory proposes that healthy neural dynamics operate in a metastable regime, where brain regions interact to simultaneously maximize integration and segregation. Metastability may confer important behavioral properties, such as cognitive flexibility. It is increasingly recognized that neural dynamics are constrained by the underlying structural connections between brain regions. An important challenge is, therefore, to relate structural connectivity, neural dynamics, and behavior. Traumatic brain injury (TBI) is a pre-eminent structural disconnection disorder whereby traumatic axonal injury damages large-scale connectivity, producing characteristic cognitive impairments, including slowed information processing speed and reduced cognitive flexibility, that may be a result of disrupted metastable dynamics. Therefore, TBI provides an experimental and theoretical model to examine how metastable dynamics relate to structural connectivity and cognition. Here, we use complementary empirical and computational approaches to investigate how metastability arises from the healthy structural connectome and relates to cognitive performance. We found reduced metastability in large-scale neural dynamics after TBI, measured with resting-state functional MRI. This reduction in metastability was associated with damage to the connectome, measured using diffusion MRI. Furthermore, decreased metastability was associated with reduced cognitive flexibility and information processing. A computational model, defined by empirically derived connectivity data, demonstrates how behaviorally relevant changes in neural dynamics result from structural disconnection. Our findings suggest how metastable dynamics are important for normal brain function and contingent on the structure of the human connectome. PMID:26085630
The neural dynamics of song syntax in songbirds
NASA Astrophysics Data System (ADS)
Jin, Dezhe
2010-03-01
Songbird is "the hydrogen atom" of the neuroscience of complex, learned vocalizations such as human speech. Songs of the Bengalese finch consist of sequences of syllables. While syllables are temporally stereotypical, syllable sequences can vary and follow complex, probabilistic syntactic rules, which are rudimentarily similar to grammars in human language. The songbird brain is accessible to experimental probes, and is understood well enough to construct biologically constrained, predictive computational models. In this talk, I will discuss the structure and dynamics of neural networks underlying the stereotypy of the birdsong syllables and the flexibility of syllable sequences. Recent experiments and computational models suggest that a syllable is encoded in a chain network of projection neurons in premotor nucleus HVC (proper name). Precisely timed spikes propagate along the chain, driving vocalization of the syllable through downstream nuclei. Through a computational model, I show that variable syllable sequences can be generated through spike propagations in a network in HVC in which the syllable-encoding chain networks are connected into a branching chain pattern. The neurons mutually inhibit each other through the inhibitory HVC interneurons, and are driven by external inputs from nuclei upstream of HVC. At a branching point that connects the final group of a chain to the first groups of several chains, the spike activity selects one branch to continue the propagation. The selection is probabilistic, and is due to the winner-take-all mechanism mediated by the inhibition and noise. The model predicts that the syllable sequences statistically follow partially observable Markov models. Experimental results supporting this and other predictions of the model will be presented. We suggest that the syntax of birdsong syllable sequences is embedded in the connection patterns of HVC projection neurons.
The relevance of network micro-structure for neural dynamics.
Pernice, Volker; Deger, Moritz; Cardanobile, Stefano; Rotter, Stefan
2013-01-01
The activity of cortical neurons is determined by the input they receive from presynaptic neurons. Many previous studies have investigated how specific aspects of the statistics of the input affect the spike trains of single neurons and neurons in recurrent networks. However, typically very simple random network models are considered in such studies. Here we use a recently developed algorithm to construct networks based on a quasi-fractal probability measure which are much more variable than commonly used network models, and which therefore promise to sample the space of recurrent networks in a more exhaustive fashion than previously possible. We use the generated graphs as the underlying network topology in simulations of networks of integrate-and-fire neurons in an asynchronous and irregular state. Based on an extensive dataset of networks and neuronal simulations we assess statistical relations between features of the network structure and the spiking activity. Our results highlight the strong influence that some details of the network structure have on the activity dynamics of both single neurons and populations, even if some global network parameters are kept fixed. We observe specific and consistent relations between activity characteristics like spike-train irregularity or correlations and network properties, for example the distributions of the numbers of in- and outgoing connections or clustering. Exploiting these relations, we demonstrate that it is possible to estimate structural characteristics of the network from activity data. We also assess higher order correlations of spiking activity in the various networks considered here, and find that their occurrence strongly depends on the network structure. These results provide directions for further theoretical studies on recurrent networks, as well as new ways to interpret spike train recordings from neural circuits. PMID:23761758
Dynamic Photorefractive Memory and its Application for Opto-Electronic Neural Networks.
NASA Astrophysics Data System (ADS)
Sasaki, Hironori
This dissertation describes the analysis of the photorefractive crystal dynamics and its application for opto-electronic neural network systems. The realization of the dynamic photorefractive memory is investigated in terms of the following aspects: fast memory update, uniform grating multiplexing schedules and the prevention of the partial erasure of existing gratings. The fast memory update is realized by the selective erasure process that superimposes a new grating on the original one with an appropriate phase shift. The dynamics of the selective erasure process is analyzed using the first-order photorefractive material equations and experimentally confirmed. The effects of beam coupling and fringe bending on the selective erasure dynamics are also analyzed by numerically solving a combination of coupled wave equations and the photorefractive material equation. Incremental recording technique is proposed as a uniform grating multiplexing schedule and compared with the conventional scheduled recording technique in terms of phase distribution in the presence of an external dc electric field, as well as the image gray scale dependence. The theoretical analysis and experimental results proved the superiority of the incremental recording technique over the scheduled recording. Novel recirculating information memory architecture is proposed and experimentally demonstrated to prevent partial degradation of the existing gratings by accessing the memory. Gratings are circulated through a memory feed back loop based on the incremental recording dynamics and demonstrate robust read/write/erase capabilities. The dynamic photorefractive memory is applied to opto-electronic neural network systems. Module architecture based on the page-oriented dynamic photorefractive memory is proposed. This module architecture can implement two complementary interconnection organizations, fan-in and fan-out. The module system scalability and the learning capabilities are theoretically
Measuring solar magnetic fields with artificial neural networks.
Socas-Navarro, Hector
2003-01-01
The quantification of the solar magnetic field is a crucial step in modern solar physics to understand the dynamics, activity and variability of our star. Presently, a reliable inference of these fields is only possible by means of a computer-intensive process that has so far limited scientists to the analysis of observations from small regions of the solar disk, and/or very crude spatial and temporal resolution. This work presents a different approach to the problem, in which a multilayer perceptron, trained with known synthetic profiles, is able to recognize the profiles and return the magnetic field used to synthesize them. The network is then confronted with real observations of a sunspot which had been previously inverted using traditional inversion techniques. A quantitative comparison between these two procedures shows the reliability of the network when applied to points having magnetic filling factors larger than approximately 70%. The dramatic decrease in the required computing time presents an opportunity for the routine analysis of large-scale, high-resolution solar observations. PMID:12672431
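The train-on-synthetic-profiles strategy can be sketched with a toy example: generate labelled (profile, field strength) pairs from a simple forward model, fit a multilayer perceptron, and apply it to unseen profiles. The toy profile generator below is an assumption standing in for the actual spectral synthesis; only the regression-as-inversion idea is illustrated.

```python
# Hedged sketch: MLP inversion trained on synthetic profiles (toy forward model).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
wav = np.linspace(-1, 1, 50)

def synth_profile(B):
    # toy double-lobed profile whose splitting grows with field strength B (gauss)
    return (1.0 - 0.5 * np.exp(-(wav - 4e-4 * B) ** 2 / 0.05)
                - 0.5 * np.exp(-(wav + 4e-4 * B) ** 2 / 0.05))

B_train = rng.uniform(0, 2000, size=5000)
X_train = np.array([synth_profile(b) for b in B_train])
X_train += 0.01 * rng.standard_normal(X_train.shape)      # photon-noise stand-in

net = MLPRegressor(hidden_layer_sizes=(30, 30), max_iter=2000, random_state=0)
net.fit(X_train, B_train / 1000.0)                         # train in kilogauss

B_test = np.array([300.0, 1200.0])
X_test = np.array([synth_profile(b) for b in B_test])
print(1000.0 * net.predict(X_test))    # should roughly recover the test field strengths
```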
Filling the Gap on Developmental Change: Tests of a Dynamic Field Theory of Spatial Cognition
ERIC Educational Resources Information Center
Schutte, Anne R.; Spencer, John P.
2010-01-01
In early childhood, there is a developmental transition in spatial memory biases. Before the transition, children's memory responses are biased toward the midline of a space, while after the transition responses are biased away from midline. The Dynamic Field Theory (DFT) posits that changes in neural interaction and changes in how children…
Track and Field: Technique Through Dynamics.
ERIC Educational Resources Information Center
Ecker, Tom
This book was designed to aid in applying the laws of dynamics to the sport of track and field, event by event. It begins by tracing the history of the discoveries of the laws of motion and the principles of dynamics, with explanations of commonly used terms derived from the vocabularies of the physical sciences. The principles and laws of…
Tanskanen, Jarno M A; Mikkonen, Jarno E; Penttonen, Markku
2005-06-30
Independent component analysis (ICA) is proposed for the analysis of neural population activity from multichannel electrophysiological field potential measurements. The proposed analysis method provides information on the spatial extents of active neural populations, the locations of the populations with respect to each other, population evolution, including merging and splitting of populations in time, and time lag differences between the populations. In some cases, results of the proposed analysis may also be interpreted as independent information flows carried by neurons and neural populations. In this paper, a detailed description of the analysis method is given. The proposed analysis is demonstrated with an illustrative simulation, and with an exemplary analysis of an in vivo multichannel recording from rat hippocampus. The proposed method can be applied in the analysis of any recordings of neural networks in which contributions from a number of neural populations or information flows are simultaneously recorded via a number of measurement points, both in vivo and in vitro. PMID:15922038
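The proposed analysis can be sketched with a synthetic example: two 'population' time courses are mixed into many recording channels and then unmixed with FastICA, whose mixing-matrix columns play the role of the spatial extent of each population. The sources, channel count and noise level are illustrative assumptions.

```python
# Hedged sketch: ICA unmixing of simulated multichannel field potentials.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
t = np.linspace(0, 2, 4000)
s1 = np.sin(2 * np.pi * 8 * t)                        # rhythmic population activity
s2 = (np.mod(t, 0.25) < 0.02).astype(float)           # sparse, burst-like population
S = np.c_[s1, s2]

A = rng.standard_normal((16, 2))                      # mixing into 16 recording channels
X = S @ A.T + 0.05 * rng.standard_normal((len(t), 16))

ica = FastICA(n_components=2, random_state=0)
components = ica.fit_transform(X)                     # estimated population time courses
spatial_maps = ica.mixing_                            # per-channel weights of each component
print(components.shape, spatial_maps.shape)           # (4000, 2) (16, 2)
```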
Neural dynamics of prediction and surprise in infants
Kouider, Sid; Long, Bria; Le Stanc, Lorna; Charron, Sylvain; Fievet, Anne-Caroline; Barbosa, Leonardo S.; Gelskov, Sofie V.
2015-01-01
Prior expectations shape neural responses in sensory regions of the brain, consistent with a Bayesian predictive coding account of perception. Yet, it remains unclear whether such a mechanism is already functional during early stages of development. To address this issue, we study how the infant brain responds to prediction violations using a cross-modal cueing paradigm. We record electroencephalographic responses to expected and unexpected visual events preceded by auditory cues in 12-month-old infants. We find an increased response for unexpected events. However, this effect of prediction error is only observed during late processing stages associated with conscious access mechanisms. In contrast, early perceptual components reveal an amplification of neural responses for predicted relative to surprising events, suggesting that selective attention enhances perceptual processing for expected events. Taken together, these results demonstrate that cross-modal statistical regularities are used to generate predictions that differentially influence early and late neural responses in infants. PMID:26460901
Brain-Machine Interactions for Assessing the Dynamics of Neural Systems
Kositsky, Michael; Chiappalone, Michela; Alford, Simon T.; Mussa-Ivaldi, Ferdinando A.
2008-01-01
A critical advance for brain–machine interfaces is the establishment of bi-directional communications between the nervous system and external devices. However, the signals generated by a population of neurons are expected to depend in a complex way upon poorly understood neural dynamics. We report a new technique for the identification of the dynamics of a neural population engaged in a bi-directional interaction with an external device. We placed in vitro preparations from the lamprey brainstem in a closed-loop interaction with simulated dynamical devices having different numbers of degrees of freedom. We used the observed behaviors of this composite system to assess how many independent parameters, or state variables, determine at each instant the output of the neural system. This information, known as the dynamical dimension of a system, allows predicting future behaviors based on the present state and the future inputs. A relevant novelty in this approach is the possibility to assess a computational property, the dynamical dimension of a neuronal population, through a simple experimental technique based on the bi-directional interaction with simulated dynamical devices. We present a set of results that demonstrate the possibility of obtaining stable and reliable measures of the dynamical dimension of a neural preparation. PMID:19430593
Neural bandwidth of veridical perception across the visual field.
Wilkinson, Michael O; Anderson, Roger S; Bradley, Arthur; Thibos, Larry N
2016-01-01
Neural undersampling of the retinal image limits the range of spatial frequencies that can be represented veridically by the array of retinal ganglion cells conveying visual information from eye to brain. Our goal was to demarcate the neural bandwidth and local anisotropy of veridical perception, unencumbered by optical imperfections of the eye, and to test competing hypotheses that might account for the results. Using monochromatic interference fringes to stimulate the retina with high-contrast sinusoidal gratings, we measured sampling-limited visual resolution along eight meridians from 0° to 50° of eccentricity. The resulting isoacuity contour maps revealed all of the expected features of the human array of retinal ganglion cells. Contours in the radial fringe maps are elongated horizontally, revealing the functional equivalent of the anatomical visual streak, and are extended into nasal retina and superior retina, indicating higher resolution along those meridians. Contours are larger in diameter for radial gratings compared to tangential or oblique gratings, indicating local anisotropy with highest bandwidth for radially oriented gratings. Comparison of these results to anatomical predictions indicates acuity is proportional to the sampling density of retinal ganglion cells everywhere in the retina. These results support the long-standing hypothesis that "pixel density" of the discrete neural image carried by the human optic nerve limits the spatial bandwidth of veridical perception at all retinal locations. PMID:26824638
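A back-of-envelope version of the 'pixel density' argument: for a roughly regular sampling mosaic, the highest veridically represented spatial frequency (the Nyquist limit) scales with the square root of sampler density. The densities below are round illustrative numbers, not the paper's measurements.

```python
# Hedged sketch: Nyquist limit of a regular sampling mosaic vs. sampler density.
import numpy as np

density = np.array([2000.0, 500.0, 50.0])    # ganglion cells per deg^2 (illustrative)
spacing = 1.0 / np.sqrt(density)             # mean sample spacing (deg)
nyquist = 1.0 / (2.0 * spacing)              # = sqrt(density) / 2, in cycles/deg
for d, f in zip(density, nyquist):
    print(f"{d:6.0f} cells/deg^2 -> ~{f:4.1f} cycles/deg sampling limit")
```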
Dynamic social power modulates neural basis of math calculation
Harada, Tokiko; Bridge, Donna J.; Chiao, Joan Y.
2013-01-01
Both situational (e.g., perceived power) and sustained social factors (e.g., cultural stereotypes) are known to affect how people academically perform, particularly in the domain of mathematics. The ability to compute even simple mathematics, such as addition, relies on distinct neural circuitry within the inferior parietal and inferior frontal lobes, brain regions where magnitude representation and addition are performed. Despite prior behavioral evidence of social influence on academic performance, little is known about whether or not temporarily heightening a person's sense of power may influence the neural bases of math calculation. Here we primed female participants with either high or low power (LP) and then measured neural response while they performed exact and approximate math problems. We found that priming power affected math performance; specifically, females primed with high power (HP) performed better on approximate math calculation compared to females primed with LP. Furthermore, neural response within the left inferior frontal gyrus (IFG), a region previously associated with cognitive interference, was reduced for females in the HP compared to LP group. Taken together, these results indicate that even temporarily heightening a person's sense of social power can increase their math performance, possibly by reducing cognitive interference during math performance. PMID:23390415
Neural Dynamics of Autistic Behaviors: Cognitive, Emotional, and Timing Substrates
ERIC Educational Resources Information Center
Grossberg, Stephen; Seidman, Don
2006-01-01
What brain mechanisms underlie autism, and how do they give rise to autistic behavioral symptoms? This article describes a neural model, called the Imbalanced Spectrally Timed Adaptive Resonance Theory (iSTART) model, that proposes how cognitive, emotional, timing, and motor processes that involve brain regions such as the prefrontal and temporal…
Neural field theory of nonlinear wave-wave and wave-neuron processes
NASA Astrophysics Data System (ADS)
Robinson, P. A.; Roy, N.
2015-06-01
Systematic expansion of neural field theory equations in terms of nonlinear response functions is carried out to enable a wide variety of nonlinear wave-wave and wave-neuron processes to be treated systematically in systems involving multiple neural populations. The results are illustrated by analyzing second-harmonic generation, and they can also be applied to wave-wave coalescence, multiharmonic generation, facilitation, depression, refractoriness, and other nonlinear processes.
Relating the sequential dynamics of excitatory neural networks to synaptic cellular automata.
Nekorkin, V I; Dmitrichev, A S; Kasatkin, D V; Afraimovich, V S
2011-12-01
We have developed a new approach for the description of sequential dynamics of excitatory neural networks. Our approach is based on the dynamics of synapses possessing the short-term plasticity property. We suggest a model of such synapses in the form of a second-order system of nonlinear ODEs. In the framework of the model, two types of responses are realized: the fast and the slow ones. Under some relations between their timescales, a cellular automaton (CA) on the graph of connections is constructed. Such a CA has only a finite number of attractors and all of them are periodic orbits. The attractors of the CA determine the regimes of sequential dynamics of the original neural network, i.e., itineraries along the network and the times of successive firing of neurons in the form of bunches of spikes. We illustrate our approach on the example of a Morris-Lecar neural network. PMID:22225361
Lebedev, Dmitry V; Steil, Jochen J; Ritter, Helge J
2005-04-01
We introduce a new type of neural network--the dynamic wave expansion neural network (DWENN)--for path generation in a dynamic environment for both mobile robots and robotic manipulators. Our model is parameter-free, computationally efficient, and its complexity does not explicitly depend on the dimensionality of the configuration space. We give a review of existing neural networks for trajectory generation in a time-varying domain, which are compared to the presented model. We demonstrate several representative simulation comparisons as well as the results of long-run comparisons in a number of randomly-generated scenes, which reveal that the proposed model yields predominantly shorter paths, especially in highly dynamic environments. PMID:15896575
NASA Astrophysics Data System (ADS)
Li, Xiaofeng; Xiang, Suying; Zhu, Pengfei; Wu, Min
2015-12-01
In order to avoid the inherent deficiencies of the traditional BP neural network, such as slow convergence, a tendency to become trapped in local minima, poor generalization ability and difficulty in determining the network structure, a dynamic self-adaptive learning algorithm for the BP neural network is put forward to improve its performance. The new algorithm combines the merits of principal component analysis, particle swarm optimization, correlation analysis and a self-adaptive model, and hence can effectively solve the problems of selecting the structural parameters, initial connection weights and thresholds, and learning rates of the BP neural network. The new algorithm not only reduces human intervention, optimizes the topological structure of BP neural networks and improves their generalization ability, but also accelerates convergence, avoids trapping in local minima, and enhances the network's adaptation and prediction abilities. The dynamic self-adaptive learning algorithm is used to forecast the total retail sales of consumer goods of Sichuan Province, China. Empirical results indicate that the new algorithm is superior to the traditional BP algorithm in prediction accuracy and time consumption, which shows its feasibility and effectiveness.
A neural network dynamics that resembles protein evolution
NASA Astrophysics Data System (ADS)
Ferrán, Edgardo A.; Ferrara, Pascual
1992-06-01
We use neural networks to classify proteins according to their sequence similarities. A network composed of 7 × 7 neurons was trained with the Kohonen unsupervised learning algorithm using, as inputs, matrix patterns derived from the bipeptide composition of cytochrome c proteins belonging to 76 different species. As a result of the training, the network self-organized the activation of its neurons into topologically ordered maps, wherein phylogenetically related sequences were positioned close to each other. The evolution of the topological map during learning, in a representative computational experiment, roughly resembles the way in which one species evolves into several others. For instance, sequences corresponding to vertebrates, initially grouped together into one neuron, were placed in a contiguous zone of the final neural map, with sequences of fishes, amphibia, reptiles, birds and mammals associated with different neurons. Some apparently incorrect classifications are due to the fact that some proteins have a greater degree of sequence identity than that expected from phylogenetics. In the final neural map, each synaptic vector may be considered as the pattern corresponding to the ancestor of all the proteins attached to that neuron. Although it may also be tempting to link real time with learning epochs and to use this relationship to calibrate the molecular evolutionary clock, this is not correct, because the evolutionary time schedule obtained with the neural network depends strongly on the discrete way in which the winner neighborhood is decreased during learning.
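The training scheme described above can be sketched compactly. The following minimal self-organizing map in Python is only an illustration: the 7 × 7 grid matches the abstract, but the input dimensionality, learning schedule and random stand-in data are assumptions, not the authors' settings.

    # Minimal self-organizing map (SOM) sketch in the spirit of the study above.
    # The 7x7 grid follows the abstract; the 400-dimensional dipeptide-composition
    # inputs and the learning schedule are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    grid = np.array([(i, j) for i in range(7) for j in range(7)])   # 49 map neurons
    dim = 20 * 20                                                   # dipeptide-count features
    weights = rng.random((49, dim))

    def train(patterns, epochs=100, lr0=0.5, sigma0=3.0):
        for t in range(epochs):
            lr = lr0 * (1 - t / epochs)                   # decaying learning rate
            sigma = max(sigma0 * (1 - t / epochs), 0.5)   # shrinking winner neighborhood
            for x in patterns:
                winner = np.argmin(np.linalg.norm(weights - x, axis=1))
                d2 = np.sum((grid - grid[winner]) ** 2, axis=1)
                h = np.exp(-d2 / (2 * sigma ** 2))        # neighborhood function on the map
                weights[:] += lr * h[:, None] * (x - weights)

    # Random stand-ins for normalized composition vectors of 76 sequences.
    proteins = rng.random((76, dim))
    proteins /= proteins.sum(axis=1, keepdims=True)
    train(proteins)
    # Map each sequence to its best-matching neuron on the 7x7 grid.
    best = np.argmin(np.linalg.norm(weights[None, :, :] - proteins[:, None, :], axis=2), axis=1)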
Dynamic scalp topography reveals neural signs just before performance errors
Ora, Hiroki; Sekiguchi, Tatsuhiko; Miyake, Yoshihiro
2015-01-01
Performance errors may have serious consequences. It has been reported that ongoing activity of the frontal control regions across trials is associated with the occurrence of performance errors. However, the neural mechanisms that cause performance errors remain largely unknown. In this study, we hypothesized that some neural functions required for correct outcomes are lacking just before performance errors, and to identify this lack of neural function we applied a spatiotemporal analysis to high-density electroencephalogram signals recorded during a visual discrimination task, a d2 test of attention. To our knowledge, this is the first report of a difference in the temporal development of scalp ERP topography between error and correct trials during the d2 test of attention. We observed differences in signal potential, first in the frontal region and then in the occipital region, between reaction-time-matched correct and error outcomes. Our observations suggest that lapses of top-down signals from frontal control regions cause performance errors just after the lapses. PMID:26289925
Hamiltonian dynamics of the parametrized electromagnetic field
NASA Astrophysics Data System (ADS)
Barbero G, J. Fernando; Margalef-Bentabol, Juan; Villaseñor, Eduardo J. S.
2016-06-01
We study the Hamiltonian formulation for a parametrized electromagnetic field with the purpose of clarifying the interplay between parametrization and gauge symmetries. We use a geometric approach which is tailor-made for theories where embeddings are part of the dynamical variables. Our point of view is global and coordinate free. The most important result of the paper is the identification of sectors in the primary constraint submanifold in the phase space of the model where the number of independent components of the Hamiltonian vector fields that define the dynamics changes. This explains the non-trivial behavior of the system and some of its pathologies.
NASA Astrophysics Data System (ADS)
Yang, Gang; Tang, Zheng; Dai, Hongwei
Through analyzing the dynamic characteristics of the maximum neural network with an added vertex, we find that the solution quality is mainly determined by the added vertex weights. In order to increase the ability of the maximum neural network, a stochastic nonlinear self-feedback and a flexible annealing strategy are embedded in the maximum neural network, which makes the network better able to escape local minima and independent of the initial values. We also show that the solving ability of the maximum neural network is problem-dependent, and we introduce a new parameter into our network to improve it. Simulations on k random graphs and some DIMACS clique instances from the second DIMACS challenge show that our improved network is superior to other algorithms in terms of solution quality and CPU time.
Quantum perceptron over a field and neural network architecture selection in a quantum computer.
da Silva, Adenilton José; Ludermir, Teresa Bernarda; de Oliveira, Wilson Rosa
2016-04-01
In this work, we propose a quantum neural network named quantum perceptron over a field (QPF). Quantum computers are not yet a reality and the models and algorithms proposed in this work cannot be simulated in actual (or classical) computers. QPF is a direct generalization of a classical perceptron and solves some drawbacks found in previous models of quantum perceptrons. We also present a learning algorithm named Superposition based Architecture Learning algorithm (SAL) that optimizes the neural network weights and architectures. SAL searches for the best architecture in a finite set of neural network architectures with linear time over the number of patterns in the training set. SAL is the first learning algorithm to determine neural network architectures in polynomial time. This speedup is obtained by the use of quantum parallelism and a non-linear quantum operator. PMID:26878722
NASA Astrophysics Data System (ADS)
Chang, Fi-John; Chen, Pin-An; Liu, Chen-Wuing; Liao, Vivian Hsiu-Chuan; Liao, Chung-Min
2013-08-01
Arsenic (As) is an odorless semi-metal that occurs naturally in rock and soil, and As contamination in groundwater resources has become a serious threat to human health. Thus, assessing the spatial and temporal variability of As concentration is highly desirable, particularly in heavily As-contaminated areas. However, various difficulties may be encountered in the regional estimation of As concentration, such as cost-intensive field monitoring, scarcity of field data, identification of important factors affecting As, over-fitting, or poor estimation accuracy. This study develops a novel systematical dynamic-neural modeling (SDM) for effectively estimating regional As-contaminated water quality by using easily-measured water quality variables. To tackle the difficulties commonly encountered in regional estimation, the SDM comprises a neural network and four statistical techniques: the Nonlinear Autoregressive with eXogenous input (NARX) network, Gamma test, cross-validation, Bayesian regularization method and indicator kriging (IK). For practical application, this study investigated a heavily As-contaminated area in Taiwan. The backpropagation neural network (BPNN) is adopted for comparison purposes. The results demonstrate that the NARX network (Root mean square error (RMSE): 95.11 μg l-1 for training; 106.13 μg l-1 for validation) outperforms the BPNN (RMSE: 121.54 μg l-1 for training; 143.37 μg l-1 for validation). The constructed SDM can provide reliable estimation (R2 > 0.89) of As concentration at ungauged sites based merely on three easily-measured water quality variables (Alk, Ca2+ and pH). In addition, risk maps under the threshold of the WHO drinking water standard (10 μg l-1) are derived by the IK to visually display the spatial and temporal variation of the As concentration in the whole study area at different time spans. The proposed SDM can be practically applied with satisfaction to the regional estimation in study areas of interest and the
Empirical modeling ENSO dynamics with complex-valued artificial neural networks
NASA Astrophysics Data System (ADS)
Seleznev, Aleksei; Gavrilov, Andrey; Mukhin, Dmitry
2016-04-01
The main difficulty in empirically reconstructing distributed dynamical systems (e.g. regional climate systems, such as the El Niño-Southern Oscillation, ENSO) is the huge amount of observational data comprising time-varying spatial fields of several variables. An efficient reduction of the system's dimensionality is therefore essential for inferring an evolution operator (EO) for a low-dimensional subsystem that determines the key properties of the observed dynamics. In this work, to efficiently reduce observational data sets, we use complex-valued (Hilbert) empirical orthogonal functions which, by their nature, are appropriate for describing propagating structures, unlike traditional empirical orthogonal functions. For the approximation of the EO, a universal model in the form of a complex-valued artificial neural network is suggested. The effectiveness of this approach is demonstrated by predicting both the behavior of the Jin-Neelin-Ghil ENSO model [1] and real ENSO variability from sea surface temperature anomaly data [2]. The study is supported by the Government of the Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of RAS). 1. Jin, F.-F., J. D. Neelin, and M. Ghil, 1996: El Niño/Southern Oscillation and the annual cycle: subharmonic frequency locking and aperiodicity. Physica D, 98, 442-465. 2. http://iridl.ldeo.columbia.edu/SOURCES/.KAPLAN/.EXTENDED/.v2/.ssta/
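The dimensionality-reduction step can be illustrated with a short sketch. The Python fragment below computes complex-valued (Hilbert) empirical orthogonal functions from a synthetic space-time field via the analytic signal and an SVD; the data shapes and values are placeholders, and the complex-valued neural network approximating the evolution operator is not shown.

    # Sketch of the complex-valued (Hilbert) EOF reduction described above.
    # The synthetic field is a placeholder for, e.g., SST anomaly data; the
    # complex-valued neural network for the evolution operator is omitted.
    import numpy as np
    from scipy.signal import hilbert

    rng = np.random.default_rng(1)
    T, N = 500, 64                        # time steps x spatial points
    field = rng.standard_normal((T, N))

    analytic = hilbert(field, axis=0)     # add the quadrature (imaginary) component in time
    analytic -= analytic.mean(axis=0)

    # SVD of the complex data matrix: rows of Vh are Hilbert EOFs, and the
    # complex principal components span a low-dimensional subsystem suited to
    # propagating structures.
    U, s, Vh = np.linalg.svd(analytic, full_matrices=False)
    pcs = U[:, :3] * s[:3]                # leading complex principal components
    explained = (s[:3] ** 2) / np.sum(s ** 2)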
Kimura, Masahiro
2002-12-01
This article extends previous mathematical studies on elucidating the redundancy for describing functions by feedforward neural networks (FNNs) to the elucidation of redundancy for describing dynamical systems (DSs) by continuous-time recurrent neural networks (RNNs). In order to approximate a DS on R^n using an RNN with n visible units, an n-dimensional affine neural dynamical system (A-NDS) can be used as the DS actually produced by the above RNN under an affine map from its visible state-space R^n to its hidden state-space. Therefore, we consider the problem of clarifying the redundancy for describing A-NDSs by RNNs and affine maps. We clarify to what extent a pair of an RNN and an affine map is uniquely determined by its corresponding A-NDS and also give a nonredundant sufficient search set for the DS approximation problem based on A-NDS. PMID:12487801
Codevelopmental learning between human and humanoid robot using a dynamic neural-network model.
Tani, Jun; Nishimoto, Ryu; Namikawa, Jun; Ito, Masato
2008-02-01
This paper examines characteristics of interactive learning between human tutors and a robot having a dynamic neural-network model, which is inspired by human parietal cortex functions. A humanoid robot, with a recurrent neural network that has a hierarchical structure, learns to manipulate objects. Robots learn tasks in repeated self-trials with the assistance of human interaction, which provides physical guidance until the tasks are mastered and learning is consolidated within the neural networks. Experimental results and the analyses showed the following: 1) codevelopmental shaping of task behaviors stems from interactions between the robot and a tutor; 2) dynamic structures for articulating and sequencing of behavior primitives are self-organized in the hierarchically organized network; and 3) such structures can afford both generalization and context dependency in generating skilled behaviors. PMID:18270081
The neural dynamics of reward value and risk coding in the human orbitofrontal cortex.
Li, Yansong; Vanni-Mercier, Giovanna; Isnard, Jean; Mauguière, François; Dreher, Jean-Claude
2016-04-01
The orbitofrontal cortex is known to carry information regarding expected reward, risk and experienced outcome. Yet, due to inherent limitations in lesion and neuroimaging methods, the neural dynamics of these computations has remained elusive in humans. Here, taking advantage of the high temporal definition of intracranial recordings, we characterize the neurophysiological signatures of the intact orbitofrontal cortex in processing information relevant for risky decisions. Local field potentials were recorded from the intact orbitofrontal cortex of patients suffering from drug-refractory partial epilepsy with implanted depth electrodes as they performed a probabilistic reward learning task that required them to associate visual cues with distinct reward probabilities. We observed three successive signals: (i) around 400 ms after cue presentation, the amplitudes of the local field potentials increased with reward probability; (ii) a risk signal emerged during the late phase of reward anticipation and during the outcome phase; and (iii) an experienced value signal appeared at the time of reward delivery. Both the medial and lateral orbitofrontal cortex encoded risk and reward probability while the lateral orbitofrontal cortex played a dominant role in coding experienced value. The present study provides the first evidence from intracranial recordings that the human orbitofrontal cortex codes reward risk both during late reward anticipation and during the outcome phase at a time scale of milliseconds. Our findings offer insights into the rapid mechanisms underlying the ability to learn structural relationships from the environment. PMID:26811252
Dark-field differential dynamic microscopy.
Bayles, Alexandra V; Squires, Todd M; Helgeson, Matthew E
2016-02-28
Differential dynamic microscopy (DDM) is an emerging technique to measure the ensemble dynamics of colloidal and complex fluid motion using optical microscopy in systems that would otherwise be difficult to measure using other methods. To date, DDM has successfully been applied to linear space invariant imaging modes including bright-field, fluorescence, confocal, polarised, and phase-contrast microscopy to study diverse dynamic phenomena. In this work, we show for the first time how DDM analysis can be extended to dark-field imaging, i.e. a linear space variant (LSV) imaging mode. Specifically, we present a particle-based framework for describing dynamic image correlations in DDM, and use it to derive a correction to the image structure function obtained by DDM that accounts for scatterers with non-homogeneous intensity distributions as they move within the imaging plane. To validate the analysis, we study the Brownian motion of gold nanoparticles, whose plasmonic structure allows for nanometer-scale particles to be imaged under dark-field illumination, in Newtonian liquids. We find that diffusion coefficients of the nanoparticles can be reliably measured by dark-field DDM, even under optically dense concentrations where analysis via multiple-particle tracking microrheology fails. These results demonstrate the potential for DDM analysis to be applied to linear space variant forms of microscopy, providing access to experimental systems unavailable to other imaging modes. PMID:26822331
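For orientation, the quantity being corrected can be sketched numerically. The Python fragment below computes the standard (linear space invariant) DDM image structure function by Fourier transforming image differences and azimuthally averaging; it is a generic illustration, not the dark-field LSV correction derived in the paper, and the image stack is a random placeholder.

    # Minimal sketch of the standard DDM image structure function D(q, dt).
    # Frames and lag values are illustrative placeholders for a dark-field movie.
    import numpy as np

    def ddm_structure_function(frames, lags):
        """frames: (T, H, W) image stack; returns |FFT of frame difference|^2 per lag."""
        T, H, W = frames.shape
        results = {}
        for lag in lags:
            acc = np.zeros((H, W))
            n = 0
            for t in range(T - lag):
                diff = frames[t + lag] - frames[t]
                acc += np.abs(np.fft.fft2(diff)) ** 2
                n += 1
            results[lag] = np.fft.fftshift(acc / n)   # ensemble-averaged power of differences
        return results

    def radial_average(power):
        """Azimuthally average a 2D power spectrum to obtain D(q, dt)."""
        H, W = power.shape
        y, x = np.indices(power.shape)
        r = np.hypot(x - W // 2, y - H // 2).astype(int)
        counts = np.bincount(r.ravel())
        sums = np.bincount(r.ravel(), weights=power.ravel())
        return sums / np.maximum(counts, 1)

    frames = np.random.rand(100, 64, 64)              # stand-in image stack
    D = {lag: radial_average(p) for lag, p in ddm_structure_function(frames, [1, 2, 5, 10]).items()}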
From Behavior to Neural Dynamics: An Integrated Theory of Attention.
Buschman, Timothy J; Kastner, Sabine
2015-10-01
The brain has a limited capacity and therefore needs mechanisms to selectively enhance the information most relevant to one's current behavior. We refer to these mechanisms as "attention." Attention acts by increasing the strength of selected neural representations and preferentially routing them through the brain's large-scale network. This is a critical component of cognition and therefore has been a central topic in cognitive neuroscience. Here we review a diverse literature that has studied attention at the level of behavior, networks, circuits, and neurons. We then integrate these disparate results into a unified theory of attention. PMID:26447577
NASA Astrophysics Data System (ADS)
Shoaib, Muhammad; Shamseldin, Asaad Y.; Melville, Bruce W.; Khan, Mudasser Muneer
2016-04-01
In order to predict runoff accurately from a rainfall event, multilayer perceptron neural network models are commonly used in hydrology. Furthermore, wavelet-coupled multilayer perceptron neural network (MLPNN) models have also been found superior to simple neural network models that are not coupled with wavelets. However, MLPNN models are static, memoryless networks and lack the ability to examine the temporal dimension of data. Recurrent neural network models, on the other hand, have the ability to learn from the preceding conditions of the system and are hence considered dynamic models. This study for the first time explores the potential of wavelet-coupled time lagged recurrent neural network (TLRNN) models for runoff prediction using rainfall data. The Discrete Wavelet Transformation (DWT) is employed to decompose the input rainfall data using six of the most commonly used wavelet functions. The performance of the simple and the wavelet-coupled static MLPNN models is compared with that of their counterpart dynamic TLRNN models. The study found that the dynamic wavelet-coupled TLRNN models can be considered an alternative to the static wavelet MLPNN models. The study also investigated the effect of memory depth on the performance of static and dynamic neural network models; the memory depth refers to how much past information (lagged data) is required, which is not known a priori. The db8 wavelet function is found to yield the best results with the static MLPNN models and with TLRNN models having small memory depths. The performance of wavelet-coupled TLRNN models with large memory depths is found to be insensitive to the selection of the wavelet function, as all wavelet functions have similar performance.
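The wavelet-coupling step can be sketched as follows. This Python fragment uses PyWavelets to decompose a rainfall series with the db8 wavelet named in the abstract and rebuilds one sub-series per level for use as additional model inputs; the decomposition level and the synthetic data are assumptions, and the MLPNN/TLRNN models themselves are not shown.

    # Sketch of the wavelet-coupling step: decompose a rainfall series with db8
    # and build one sub-series per decomposition level as extra network inputs.
    # The level, the data and the reconstruction choice are assumptions.
    import numpy as np
    import pywt

    rainfall = np.random.gamma(shape=0.6, scale=5.0, size=1024)   # stand-in daily rainfall

    coeffs = pywt.wavedec(rainfall, wavelet='db8', level=3)       # [cA3, cD3, cD2, cD1]

    # Reconstruct one sub-series per level so all inputs share the original time
    # axis (a common way of forming wavelet-coupled model inputs).
    subseries = []
    for i in range(len(coeffs)):
        kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        subseries.append(pywt.waverec(kept, wavelet='db8')[:len(rainfall)])
    inputs = np.column_stack(subseries)    # shape (1024, 4): one column per sub-series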
A Neural Network Model of the Structure and Dynamics of Human Personality
ERIC Educational Resources Information Center
Read, Stephen J.; Monroe, Brian M.; Brownstein, Aaron L.; Yang, Yu; Chopra, Gurveen; Miller, Lynn C.
2010-01-01
We present a neural network model that aims to bridge the historical gap between dynamic and structural approaches to personality. The model integrates work on the structure of the trait lexicon, the neurobiology of personality, temperament, goal-based models of personality, and an evolutionary analysis of motives. It is organized in terms of two…
ERIC Educational Resources Information Center
Zion-Golumbic, Elana; Kutas, Marta; Bentin, Shlomo
2010-01-01
Prior semantic knowledge facilitates episodic recognition memory for faces. To examine the neural manifestation of the interplay between semantic and episodic memory, we investigated neuroelectric dynamics during the creation (study) and the retrieval (test) of episodic memories for famous and nonfamous faces. Episodic memory effects were evident…
Effects of refractory periods in the dynamics of a diluted neural network
NASA Astrophysics Data System (ADS)
Tamarit, F. A.; Stariolo, D. A.; Cannas, S. A.; Serra, P.
1996-05-01
We propose a stochastic dynamics for a neural network which accounts for the effects of the refractory periods (absolute and relative) in the dynamics of a single neuron. The dynamics can be solved analytically in an extremely diluted network. We found a very rich scenario that presents retrieval phases and a period doubling route to chaos in the attractors of the overlap order parameter. Our model incorporates some characteristics that make it biologically appealing, such as asymmetric synaptic efficacies, dilution of the synaptic matrix, absolute and relative refractory periods, complex retrieval dynamics, and low levels of activity in the retrieval regime.
Rigotti, Mattia; Rubin, Daniel Ben Dayan; Wang, Xiao-Jing; Fusi, Stefano
2010-01-01
Neural activity of behaving animals, especially in the prefrontal cortex, is highly heterogeneous, with selective responses to diverse aspects of the executed task. We propose a general model of recurrent neural networks that perform complex rule-based tasks, and we show that the diversity of neuronal responses plays a fundamental role when the behavioral responses are context-dependent. Specifically, we found that when the inner mental states encoding the task rules are represented by stable patterns of neural activity (attractors of the neural dynamics), the neurons must be selective for combinations of sensory stimuli and inner mental states. Such mixed selectivity is easily obtained by neurons that connect with random synaptic strengths both to the recurrent network and to neurons encoding sensory inputs. The number of randomly connected neurons needed to solve a task is on average only three times as large as the number of neurons needed in a network designed ad hoc. Moreover, the number of needed neurons grows only linearly with the number of task-relevant events and mental states, provided that each neuron responds to a large proportion of events (dense/distributed coding). A biologically realistic implementation of the model captures several aspects of the activity recorded from monkeys performing context-dependent tasks. Our findings explain the importance of the diversity of neural responses and provide us with simple and general principles for designing attractor neural networks that perform complex computation. PMID:21048899
Direct imaging of neural currents using ultra-low field magnetic resonance techniques
Volegov, Petr L.; Matlashov, Andrei N.; Mosher, John C.; Espy, Michelle A.; Kraus, Jr., Robert H.
2009-08-11
Using resonant interactions to directly and tomographically image neural activity in the human brain using magnetic resonance imaging (MRI) techniques at ultra-low field (ULF), the present inventors have established an approach that is sensitive to magnetic field distributions local to the spin population in cortex at the Larmor frequency of the measurement field. Because the Larmor frequency can be readily manipulated (through varying B_m), one can also envision using ULF-DNI to image the frequency distribution of the local fields in cortex. Such information, taken together with simultaneous acquisition of MEG and ULF-NMR signals, enables non-invasive exploration of the correlation between local fields induced by neural activity in cortex and more 'distant' measures of brain activity such as MEG and EEG.
Biophysical Neural Spiking, Bursting, and Excitability Dynamics in Reconfigurable Analog VLSI
Yu, Theodore; Sejnowski, Terrence J.; Cauwenberghs, Gert
2011-01-01
We study a range of neural dynamics under variations in biophysical parameters underlying extended Morris–Lecar and Hodgkin–Huxley models in three gating variables. The extended models are implemented in NeuroDyn, a four neuron, twelve synapse continuous-time analog VLSI programmable neural emulation platform with generalized channel kinetics and biophysical membrane dynamics. The dynamics exhibit a wide range of time scales extending beyond 100 ms neglected in typical silicon models of tonic spiking neurons. Circuit simulations and measurements show transition from tonic spiking to tonic bursting dynamics through variation of a single conductance parameter governing calcium recovery. We similarly demonstrate transition from graded to all-or-none neural excitability in the onset of spiking dynamics through the variation of channel kinetic parameters governing the speed of potassium activation. Other combinations of variations in conductance and channel kinetic parameters give rise to phasic spiking and spike frequency adaptation dynamics. The NeuroDyn chip consumes 1.29 mW and occupies 3 mm × 3 mm in 0.5 μm CMOS, supporting emerging developments in neuromorphic silicon-neuron interfaces. PMID:22227949
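For reference, the underlying two-variable Morris-Lecar dynamics can be sketched in software. The parameter values below are standard textbook numbers, not the NeuroDyn chip settings, and the forward-Euler integration is only illustrative.

    # Two-variable Morris-Lecar model (software sketch only; parameters are
    # standard textbook values, not the NeuroDyn chip configuration). Varying
    # the recovery-related parameters moves the dynamics between spiking regimes.
    import numpy as np

    C, gL, gCa, gK = 20.0, 2.0, 4.4, 8.0            # capacitance and conductances
    VL, VCa, VK = -60.0, 120.0, -84.0               # reversal potentials (mV)
    V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04
    I_ext = 90.0                                    # constant injected current

    def derivatives(V, w):
        m_inf = 0.5 * (1 + np.tanh((V - V1) / V2))  # instantaneous Ca activation
        w_inf = 0.5 * (1 + np.tanh((V - V3) / V4))  # steady-state K activation
        tau_w = 1.0 / np.cosh((V - V3) / (2 * V4))
        dV = (I_ext - gL * (V - VL) - gCa * m_inf * (V - VCa) - gK * w * (V - VK)) / C
        dw = phi * (w_inf - w) / tau_w
        return dV, dw

    dt, steps = 0.05, 40000                         # time step in ms
    V, w = -60.0, 0.0
    trace = np.empty(steps)
    for k in range(steps):                          # forward Euler integration
        dV, dw = derivatives(V, w)
        V, w = V + dt * dV, w + dt * dw
        trace[k] = V                                # membrane potential trace (tonic spiking)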
Topological field theory of dynamical systems
Ovchinnikov, Igor V.
2012-09-15
Here, it is shown that the path-integral representation of any stochastic or deterministic continuous-time dynamical model is a cohomological or Witten-type topological field theory, i.e., a model with global topological supersymmetry (Q-symmetry). Like many other supersymmetries, Q-symmetry must be perturbatively stable due to what is generically known as non-renormalization theorems. As a result, all (equilibrium) dynamical models are divided into three major categories: Markovian models with unbroken Q-symmetry, chaotic models with Q-symmetry spontaneously broken on the mean-field level by, e.g., fractal invariant sets (e.g., strange attractors), and intermittent or self-organized critical (SOC) models with Q-symmetry dynamically broken by the condensation of instanton-antiinstanton configurations (earthquakes, avalanches, etc.). SOC is a full-dimensional phase separating chaos and Markovian dynamics. In the deterministic limit, however, antiinstantons disappear and SOC collapses into the 'edge of chaos.' The Goldstone theorem stands behind the spatio-temporal self-similarity of Q-broken phases known under such names as algebraic statistics of avalanches, 1/f noise, sensitivity to initial conditions, etc. Other fundamental differences of Q-broken phases are that they can be effectively viewed as quantum dynamics and that they must also have time-reversal symmetry spontaneously broken. Q-symmetry breaking in non-equilibrium situations (quenches, Barkhausen effect, etc.) is also briefly discussed.
Stability of bumps in piecewise smooth neural fields with nonlinear adaptation
NASA Astrophysics Data System (ADS)
Kilpatrick, Zachary P.; Bressloff, Paul C.
2010-06-01
We study the linear stability of stationary bumps in piecewise smooth neural fields with local negative feedback in the form of synaptic depression or spike frequency adaptation. The continuum dynamics is described in terms of a nonlocal integrodifferential equation, in which the integral kernel represents the spatial distribution of synaptic weights between populations of neurons whose mean firing rate is taken to be a Heaviside function of local activity. Discontinuities in the adaptation variable associated with a bump solution means that bump stability cannot be analyzed by constructing the Evans function for a network with a sigmoidal gain function and then taking the high-gain limit. In the case of synaptic depression, we show that linear stability can be formulated in terms of solutions to a system of pseudo-linear equations. We thus establish that sufficiently strong synaptic depression can destabilize a bump that is stable in the absence of depression. These instabilities are dominated by shift perturbations that evolve into traveling pulses. In the case of spike frequency adaptation, we show that for a wide class of perturbations the activity and adaptation variables decouple in the linear regime, thus allowing us to explicitly determine stability in terms of the spectrum of a smooth linear operator. We find that bumps are always unstable with respect to this class of perturbations, and destabilization of a bump can result in either a traveling pulse or a spatially localized breather.
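One common way of writing such an activity-depression pair, sketched here for concreteness and consistent with the description above (the paper's exact form and scalings may differ), is

    \frac{\partial u(x,t)}{\partial t} = -u(x,t) + \int w(x-x')\, q(x',t)\, H\!\left(u(x',t)-\theta\right) dx',
    \qquad
    \tau_q \frac{\partial q(x,t)}{\partial t} = 1 - q(x,t) - \beta\, q(x,t)\, H\!\left(u(x,t)-\theta\right),

where u is the synaptic drive, q is the depression (available synaptic resources) variable, w the weight kernel, H the Heaviside firing-rate function, θ the firing threshold, and τ_q, β set the time scale and strength of depression.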
Mean Field Analysis of Stochastic Neural Network Models with Synaptic Depression
NASA Astrophysics Data System (ADS)
Yasuhiko Igarashi,; Masafumi Oizumi,; Masato Okada,
2010-08-01
We investigated the effects of synaptic depression on the macroscopic behavior of stochastic neural networks. Dynamical mean field equations were derived for such networks by taking the average of two stochastic variables: a firing-state variable and a synaptic variable. In these equations, the average product of these variables is decoupled as the product of their averages because the two stochastic variables are independent. We proved the independence of these two stochastic variables assuming that the synaptic weight Jij is of the order of 1/N with respect to the number of neurons N. Using these equations, we derived macroscopic steady-state equations for a network with uniform connections and for a ring attractor network with Mexican hat type connectivity and investigated the stability of the steady-state solutions. An oscillatory uniform state was observed in the network with uniform connections owing to a Hopf instability. For the ring network, high-frequency perturbations were shown not to affect system stability. Two mechanisms destabilize the inhomogeneous steady state, leading to two oscillatory states. A Turing instability leads to a rotating bump state, while a Hopf instability leads to an oscillatory bump state, which was previously unreported. Various oscillatory states take place in a network with synaptic depression depending on the strength of the interneuron connections.
Force field dependence of riboswitch dynamics.
Hanke, Christian A; Gohlke, Holger
2015-01-01
Riboswitches are noncoding regulatory elements that control gene expression in response to the presence of metabolites, which bind to the aptamer domain. Metabolite binding appears to occur through a combination of conformational selection and induced fit mechanisms. This makes it necessary to characterize the structural dynamics of the apo state of aptamer domains. In principle, molecular dynamics (MD) simulations can give insights at the atomistic level into the dynamics of the aptamer domain. However, it is unclear to what extent contemporary force fields can bias such insights. Here, we show that the Amber force field ff99 yields the best agreement with detailed experimental observations on differences in the structural dynamics of wild type and mutant aptamer domains of the guanine-sensing riboswitch (Gsw), including a pronounced influence of Mg2+. In contrast, applying ff99 with parmbsc0 and parmχOL modifications (denoted ff10) results in strongly damped motions and overly stable tertiary loop-loop interactions. These results are based on 58 MD simulations with an aggregate simulation time > 11 μs, careful modeling of Mg2+ ions, and thorough statistical testing. Our results suggest that the moderate stabilization of the χ-anti region in ff10 can have an unwanted damping effect on functionally relevant structural dynamics of marginally stable RNA systems. This suggestion is supported by crystal structure analyses of Gsw aptamer domains that reveal χ torsions with high-anti values in the most mobile regions. We expect that future RNA force field development will benefit from considering marginally stable RNA systems and optimization toward good representations of dynamics in addition to structural characteristics. PMID:25726465
Use of artificial neural nets to predict permeability in Hugoton Field
Thompson, K.A.; Franklin, M.H.; Olson, T.M.
1996-12-31
One of the most difficult tasks in petrophysics is establishing a quantitative relationship between core permeability and wireline logs. This is a tough problem in Hugoton Field, where a complicated mix of carbonates and clastics further obscures the correlation. One can successfully model complex relationships such as permeability-to-logs using artificial neural networks. Mind and Vision, Inc.'s neural net software was used because of its orientation toward depth-related data (such as logs) and its ability to run on a variety of log analysis platforms. This type of neural net program allows the expert geologist to select a few (10-100) points of control to train the "brainstate" using logs as predictors and core permeability as "truth". In Hugoton Field, the brainstate provides an estimate of permeability at each depth in 474 logged wells. These neural net-derived permeabilities are being used in reservoir characterization models for fluid saturations. Other applications of this artificial neural network technique include deterministic relationships of logs to: core lithology, core porosity, pore type, and other wireline logs (e.g., predicting a sonic log from a density log).
Phase field approximation of dynamic brittle fracture
NASA Astrophysics Data System (ADS)
Schlüter, Alexander; Willenbücher, Adrian; Kuhn, Charlotte; Müller, Ralf
2014-11-01
Numerical methods that are able to predict the failure of technical structures due to fracture are important in many engineering applications. One of these approaches, the so-called phase field method, represents cracks by means of an additional continuous field variable. This strategy avoids some of the main drawbacks of a sharp interface description of cracks. For example, it is not necessary to track or model crack faces explicitly, which allows a simple algorithmic treatment. The phase field model for brittle fracture presented in Kuhn and Müller (Eng Fract Mech 77(18):3625-3634, 2010) assumes quasi-static loading conditions. However, dynamic effects have a great impact on the crack growth in many practical applications. Therefore this investigation presents an extension of the quasi-static phase field model for fracture from Kuhn and Müller (Eng Fract Mech 77(18):3625-3634, 2010) to the dynamic case. First, Hamilton's principle is applied to derive a coupled set of Euler-Lagrange equations that govern the mechanical behaviour of the body as well as the crack growth. Subsequently the model is implemented in a finite element scheme which allows several test problems to be solved numerically. The numerical examples illustrate the capabilities of the developed approach to dynamic fracture in brittle materials.
Discrete neural dynamic programming in wheeled mobile robot control
NASA Astrophysics Data System (ADS)
Hendzel, Zenon; Szuster, Marcin
2011-05-01
In this paper we propose a discrete algorithm for tracking control of a two-wheeled mobile robot (WMR), using an advanced Adaptive Critic Design (ACD). We used the Dual-Heuristic Programming (DHP) algorithm, which consists of two parametric structures implemented as Neural Networks (NNs): an actor and a critic, both realized in the form of Random Vector Functional Link (RVFL) NNs. In the proposed algorithm the control system consists of the DHP adaptive critic, a PD controller and a supervisory term derived from the Lyapunov stability theorem. The supervisory term guarantees stable realization of the tracking movement during the learning phase of the adaptive critic structure and robustness in the face of disturbances. The discrete tracking control algorithm works online, uses the WMR model for state prediction and does not require preliminary learning. Verification has been conducted, by a series of experiments on the WMR Pioneer 2-DX, to illustrate the performance of the proposed control algorithm.
Reconstructing neural dynamics using data assimilation with multiple models
NASA Astrophysics Data System (ADS)
Hamilton, Franz; Cressman, John; Peixoto, Nathalia; Sauer, Timothy
2014-09-01
Assimilation of data with models of physical processes is a critical component of modern scientific analysis. In recent years, nonlinear versions of Kalman filtering have been developed, in addition to methods that estimate model parameters in parallel with the system state. We propose a substantial extension of these tools to deal with the specific case of unmodeled variables, when training data from the variable are available. The method uses a stack of several nonidentical copies of a physical model to jointly reconstruct the variable in question. We demonstrate the ability of this technique to accurately recover an unmodeled experimental quantity, such as an ion concentration, from a single voltage trace after the training period is completed. The method is applied to reconstruct the potassium concentration in a neural culture from multielectrode array voltage measurements.
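As a point of reference for the nonlinear Kalman filtering mentioned above, a generic ensemble Kalman analysis step is sketched below. It does not reproduce the paper's multi-model stack or the neural culture data; the observation operator, noise covariance and ensemble are toy values.

    # Generic ensemble Kalman analysis step, illustrating the kind of nonlinear
    # Kalman filtering referred to in the abstract. H, R and the ensemble are
    # toy values; the multi-model "stack" is not reproduced.
    import numpy as np

    def enkf_analysis(ensemble, y_obs, H, R, rng):
        """ensemble: (n_state, N) forecast members; y_obs: (n_obs,) observation."""
        n_obs, N = y_obs.size, ensemble.shape[1]
        X_mean = ensemble.mean(axis=1, keepdims=True)
        A = ensemble - X_mean                          # state anomalies
        HX = H @ ensemble
        HA = HX - HX.mean(axis=1, keepdims=True)       # observation-space anomalies
        P_yy = (HA @ HA.T) / (N - 1) + R
        P_xy = (A @ HA.T) / (N - 1)
        K = P_xy @ np.linalg.inv(P_yy)                 # Kalman gain from sample covariances
        perturbed = y_obs[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, N).T
        return ensemble + K @ (perturbed - HX)         # updated (analysis) ensemble

    rng = np.random.default_rng(2)
    ens = rng.standard_normal((3, 50))                 # e.g., voltage, recovery, ion concentration
    H = np.array([[1.0, 0.0, 0.0]])                    # only the first state component is observed
    R = np.array([[0.1]])
    ens = enkf_analysis(ens, np.array([0.5]), H, R, rng)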
Dynamic Neural Processing of Linguistic Cues Related to Death
Ma, Yina; Qin, Jungang; Han, Shihui
2013-01-01
Behavioral studies suggest that humans evolve the capacity to cope with anxiety induced by the awareness of death’s inevitability. However, the neurocognitive processes that underlie online death-related thoughts remain unclear. Our recent functional MRI study found that the processing of linguistic cues related to death was characterized by decreased neural activity in human insular cortex. The current study further investigated the time course of neural processing of death-related linguistic cues. We recorded event-related potentials (ERP) to death-related, life-related, negative-valence, and neutral-valence words in a modified Stroop task that required color naming of words. We found that the amplitude of an early frontal/central negativity at 84–120 ms (N1) decreased to death-related words but increased to life-related words relative to neutral-valence words. The N1 effect associated with death-related and life-related words was correlated respectively with individuals’ pessimistic and optimistic attitudes toward life. Death-related words also increased the amplitude of a frontal/central positivity at 124–300 ms (P2) and of a frontal/central positivity at 300–500 ms (P3). However, the P2 and P3 modulations were observed for both death-related and negative-valence words but not for life-related words. The ERP results suggest an early inverse coding of linguistic cues related to life and death, which is followed by negative emotional responses to death-related information. PMID:23840787
Amozegar, M; Khorasani, K
2016-04-01
In this paper, a new approach for Fault Detection and Isolation (FDI) of gas turbine engines is proposed by developing an ensemble of dynamic neural network identifiers. For health monitoring of the gas turbine engine, its dynamics is first identified by constructing three separate or individual dynamic neural network architectures. Specifically, a dynamic multi-layer perceptron (MLP), a dynamic radial-basis function (RBF) neural network, and a dynamic support vector machine (SVM) are trained to individually identify and represent the gas turbine engine dynamics. Next, three ensemble-based techniques are developed to represent the gas turbine engine dynamics, namely, two heterogeneous ensemble models and one homogeneous ensemble model. It is first shown that all ensemble approaches do significantly improve the overall performance and accuracy of the developed system identification scheme when compared to each of the stand-alone solutions. The best selected stand-alone model (i.e., the dynamic RBF network) and the best selected ensemble architecture (i.e., the heterogeneous ensemble) in terms of their performances in achieving an accurate system identification are then selected for solving the FDI task. The required residual signals are generated by using both a single model-based solution and an ensemble-based solution under various gas turbine engine health conditions. Our extensive simulation studies demonstrate that the fault detection and isolation task achieved by using the residuals that are obtained from the dynamic ensemble scheme results in a significantly more accurate and reliable performance as illustrated through detailed quantitative confusion matrix analysis and comparative studies. PMID:26881999
Neural networks for tracking of unknown SISO discrete-time nonlinear dynamic systems.
Aftab, Muhammad Saleheen; Shafiq, Muhammad
2015-11-01
This article presents a Lyapunov function based neural network tracking (LNT) strategy for single-input, single-output (SISO) discrete-time nonlinear dynamic systems. The proposed LNT architecture is composed of two feedforward neural networks operating as controller and estimator. A Lyapunov function based back propagation learning algorithm is used for online adjustment of the controller and estimator parameters. Convergence of the controller and estimator errors and closed-loop system stability are analyzed using Lyapunov stability theory. Moreover, two simulation examples and one real-time experiment are investigated as case studies. The achieved results successfully validate the controller performance. PMID:26456201
Dynamic control of ROVs making use of the neural network concept
Ooi, Tadashi; Yoshida, Yuki; Takahashi, Yoshiaki; Kidoushi, Hideki
1994-12-31
An attempt is made to combine a classical controller with the neural network concept, the result of which is a control system that the authors have named the Robust Adaptive Neural-net Controller (RANC). The RANC identifies the dynamic characteristics of the remotely operated vehicle (ROV), including its ambient environment involving cyclic disturbances such as forces induced by waves, and automatically organizes an optimized controller. A tank experiment is described in which the RANC is set to maintain a model ROV at a prescribed depth of water under artificially generated wave disturbance.
Travelling waves in a neural field model with refractoriness.
Meijer, Hil G E; Coombes, Stephen
2014-04-01
At one level of abstraction neural tissue can be regarded as a medium for turning local synaptic activity into output signals that propagate over large distances via axons to generate further synaptic activity that can cause reverberant activity in networks that possess a mixture of excitatory and inhibitory connections. This output is often taken to be a firing rate, and the mathematical form for the evolution equation of activity depends upon a spatial convolution of this rate with a fixed anatomical connectivity pattern. Such formulations often neglect the metabolic processes that would ultimately limit synaptic activity. Here we reinstate such a process, in the spirit of an original prescription by Wilson and Cowan (Biophys J 12:1-24, 1972), using a term that multiplies the usual spatial convolution with a moving time average of local activity over some refractory time-scale. This modulation can substantially affect network behaviour, and in particular give rise to periodic travelling waves in a purely excitatory network (with exponentially decaying anatomical connectivity), which in the absence of refractoriness would only support travelling fronts. We construct these solutions numerically as stationary periodic solutions in a co-moving frame (of both an equivalent delay differential model as well as the original delay integro-differential model). Continuation methods are used to obtain the dispersion curve for periodic travelling waves (speed as a function of period), and found to be reminiscent of those for spatially extended models of excitable tissue. A kinematic analysis (based on the dispersion curve) predicts the onset of wave instabilities, which are confirmed numerically. PMID:23546637
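A schematic form consistent with this verbal description, sketched here for orientation (not necessarily the authors' exact equation; normalizations and the treatment of axonal delays may differ), is

    \frac{\partial u(x,t)}{\partial t} = -u(x,t)
    + \left[1 - \frac{1}{\tau_r}\int_{t-\tau_r}^{t} f\!\left(u(x,s)\right) ds\right]
      \int w(x-y)\, f\!\left(u(y,t)\right) dy,

where f is the firing rate, w an exponentially decaying connectivity kernel, and the bracketed factor is the moving time average of local activity over the refractory time-scale τ_r that multiplies the usual spatial convolution.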
Dynamic Magnetic Field Applications for Materials Processing
NASA Technical Reports Server (NTRS)
Mazuruk, K.; Grugel, Richard N.; Motakef, S.; Whitaker, Ann F. (Technical Monitor)
2001-01-01
Magnetic fields, variable in time and space, can be used to control convection in electrically conducting melts. Flow induced by these fields has been found to be beneficial for crystal growth applications. It allows increased crystal growth rates, and improves homogeneity and quality. Particularly beneficial is the natural convection damping capability of alternating magnetic fields. One well-known example is the rotating magnetic field (RMF) configuration. RMF induces liquid motion consisting of a swirling basic flow and a meridional secondary flow. In addition to crystal growth applications, RMF can also be used for mixing non-homogeneous melts in continuous metal castings. These applied aspects have stimulated increasing research on RMF-induced fluid dynamics. A novel type of magnetic field configuration consisting of an axisymmetric magnetostatic wave, designated the traveling magnetic field (TMF), has been recently proposed. It induces a basic flow in the form of a single vortex. TMF may find use in crystal growth techniques such as the vertical Bridgman (VB), float zone (FZ), and the traveling heater method. In this review, both methods, RMF and TMF, are presented. Our recent theoretical and experimental results include such topics as localized TMF, natural convection damping using TMF in a vertical Bridgman configuration, the traveling heater method, and the Lorentz force induced by TMF as a function of frequency. Experimentally, alloy mixing results, with and without applied TMF, will be presented. Finally, advantages of the traveling magnetic field, in comparison to the more mature rotating magnetic field method, will be discussed.
Nonequilibrium dynamics of emergent field configurations
NASA Astrophysics Data System (ADS)
Howell, Rafael Cassidy
The processes by which nonlinear physical systems approach thermal equilibrium are of great importance in many areas of science. Central to this is the mechanism by which energy is transferred between the many degrees of freedom comprising these systems. With this in mind, in this research the nonequilibrium dynamics of nonperturbative fluctuations within Ginzburg-Landau models are investigated. In particular, two questions are addressed. In both cases the system is initially prepared in one of two minima of a double-well potential. First, within the context of a (2 + 1) dimensional field theory, we investigate whether emergent spatio-temporal coherent structures play a dynamical role in the equilibration of the field. We find that the answer is sensitive to the initial temperature of the system. At low initial temperatures, the dynamics are well approximated with a time-dependent mean-field theory. For higher temperatures, the strong nonlinear coupling between the modes in the field does give rise to the synchronized emergence of coherent spatio-temporal configurations, identified with oscillons. These are long-lived coherent field configurations characterized by their persistent oscillatory behavior at their core. This initial global emergence is seen to be a consequence of resonant behavior in the long wavelength modes in the system. A second question concerns the emergence of disorder in a highly viscous system modeled by a (3 + 1) dimensional field theory. An integro-differential Boltzmann equation is derived to model the thermal nucleation of precursors of one phase within the homogeneous background. The fraction of the volume populated by these precursors is computed as a function of temperature. This model is capable of describing the onset of percolation, characterizing the approach to criticality (i.e. disorder). It also provides a nonperturbative correction to the critical temperature based on the nonequilibrium dynamics of the system.
Dynamic Neural Networks for Kinematic Redundancy Resolution of Parallel Stewart Platforms.
Mohammed, Aquil Mirza; Li, Shuai
2016-07-01
Redundancy resolution is a critical problem in the control of the parallel Stewart platform. The redundancy endows us with extra design degrees of freedom to improve system performance. In this paper, the kinematic control problem of Stewart platforms is formulated as a constrained quadratic programming problem. The Karush-Kuhn-Tucker conditions of the problem are obtained by considering the problem in its dual space, and then a dynamic neural network is designed to solve the optimization problem recurrently. Theoretical analysis reveals the global convergence of the employed dynamic neural network to the optimal solution in terms of the defined criteria. Simulation results verify its effectiveness in the tracking control of the Stewart platform for dynamic motions. PMID:26219101
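The idea of solving the kinematic quadratic program recurrently can be illustrated with a generic projection-type dynamic neural network. This is a sketch only, not the authors' specific network, and the Stewart-platform Jacobian and constraints are replaced by toy values.

    # Generic projection-type dynamic neural network for a box-constrained QP,
    # integrated with Euler steps. It illustrates solving the kinematic QP
    # recurrently; Q, c and the bounds are toy stand-ins, not platform data.
    import numpy as np

    def project(x, lo, hi):
        return np.clip(x, lo, hi)                     # projection onto the feasible box

    def dynamic_nn_qp(Q, c, lo, hi, eps=0.1, dt=0.01, steps=5000):
        """Minimize 0.5 x'Qx + c'x subject to lo <= x <= hi."""
        x = np.zeros_like(c)
        for _ in range(steps):
            # Recurrent dynamics: dx/dt = (1/eps) * (P(x - (Qx + c)) - x)
            x = x + (dt / eps) * (project(x - (Q @ x + c), lo, hi) - x)
        return x

    Q = np.array([[4.0, 1.0], [1.0, 3.0]])            # positive definite toy Hessian
    c = np.array([-1.0, -2.0])
    lo, hi = np.array([0.0, 0.0]), np.array([1.0, 1.0])
    x_star = dynamic_nn_qp(Q, c, lo, hi)              # converges to the constrained optimum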
Neural population dynamics in human motor cortex during movements in people with ALS.
Pandarinath, Chethan; Gilja, Vikash; Blabe, Christine H; Nuyujukian, Paul; Sarma, Anish A; Sorice, Brittany L; Eskandar, Emad N; Hochberg, Leigh R; Henderson, Jaimie M; Shenoy, Krishna V
2015-01-01
The prevailing view of motor cortex holds that motor cortical neural activity represents muscle or movement parameters. However, recent studies in non-human primates have shown that neural activity does not simply represent muscle or movement parameters; instead, its temporal structure is well-described by a dynamical system where activity during movement evolves lawfully from an initial pre-movement state. In this study, we analyze neuronal ensemble activity in motor cortex in two clinical trial participants diagnosed with Amyotrophic Lateral Sclerosis (ALS). We find that activity in human motor cortex has similar dynamical structure to that of non-human primates, indicating that human motor cortex contains a similar underlying dynamical system for movement generation. PMID:26099302
On Mean Field Limits for Dynamical Systems
NASA Astrophysics Data System (ADS)
Boers, Niklas; Pickl, Peter
2016-07-01
We present a purely probabilistic proof of propagation of molecular chaos for N-particle systems in dimension 3 with interaction forces scaling like 1/|q|^(3λ-1), with λ smaller than but close to one, and a cut-off at q = N^(-1/3). The proof yields a Gronwall estimate for the maximal distance between exact microscopic and approximate mean-field dynamics. This can be used to show weak convergence of the one-particle marginals to solutions of the respective mean-field equation without cut-off in a quantitative way. Our results thus lead to a derivation of the Vlasov equation from the microscopic N-particle dynamics with force term arbitrarily close to the physically relevant Coulomb- and gravitational forces.
Modeling emotional dynamics : currency versus field.
Sallach, D .L.; Decision and Information Sciences; Univ. of Chicago
2008-08-01
Randall Collins has introduced a simplified model of emotional dynamics in which emotional energy, heightened and focused by interaction rituals, serves as a common denominator for social exchange: a generic form of currency, except that it is active in a far broader range of social transactions. While the scope of this theory is attractive, the specifics of the model remain unconvincing. After a critical assessment of the currency theory of emotion, a field model of emotion is introduced that adds expressiveness by locating emotional valence within its cognitive context, thereby creating an integrated orientation field. The result is a model which claims less in the way of motivational specificity, but is more satisfactory in modeling the dynamic interaction between cognitive and emotional orientations at both individual and social levels.
NASA Astrophysics Data System (ADS)
Deng, Xinyi; Eskandar, Emad N.; Eden, Uri T.
2013-12-01
Understanding the role of rhythmic dynamics in normal and diseased brain function is an important area of research in neural electrophysiology. Identifying and tracking changes in rhythms associated with spike trains present an additional challenge, because standard approaches for continuous-valued neural recordings—such as local field potential, magnetoencephalography, and electroencephalography data—require assumptions that do not typically hold for point process data. Additionally, subtle changes in the history dependent structure of a spike train have been shown to lead to robust changes in rhythmic firing patterns. Here, we propose a point process modeling framework to characterize the rhythmic spiking dynamics in spike trains, test for statistically significant changes to those dynamics, and track the temporal evolution of such changes. We first construct a two-state point process model incorporating spiking history and develop a likelihood ratio test to detect changes in the firing structure. We then apply adaptive state-space filters and smoothers to track these changes through time. We illustrate our approach with a simulation study as well as with experimental data recorded in the subthalamic nucleus of Parkinson's patients performing an arm movement task. Our analyses show that during the arm movement task, neurons underwent a complex pattern of modulation of spiking intensity characterized initially by a release of inhibitory control at 20-40 ms after a spike, followed by a decrease in excitatory influence at 40-60 ms after a spike.
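A minimal discrete-time illustration of history-dependent point process modeling and a likelihood ratio test between nested models is sketched below. It uses a plain Poisson GLM and is not the authors' two-state model or their adaptive state-space filter; the spike train and lag ranges are placeholders.

    # Sketch: Poisson GLM with spike-history covariates plus a likelihood ratio
    # test for additional history terms. The spike train is a random stand-in.
    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import chi2

    rng = np.random.default_rng(3)
    spikes = (rng.random(5000) < 0.05).astype(float)   # stand-in 1 ms binned spike train

    def history_matrix(y, lags):
        # One column per lag: the spike count observed 'l' bins in the past.
        X = np.column_stack([np.concatenate([np.zeros(l), y[:-l]]) for l in lags])
        return sm.add_constant(X)

    lags_full = range(1, 21)                           # 1-20 ms of history
    lags_reduced = range(1, 6)                         # 1-5 ms of history
    fit_full = sm.GLM(spikes, history_matrix(spikes, lags_full),
                      family=sm.families.Poisson()).fit()
    fit_reduced = sm.GLM(spikes, history_matrix(spikes, lags_reduced),
                         family=sm.families.Poisson()).fit()

    # Likelihood ratio statistic and p-value for the extra history terms.
    lr = 2 * (fit_full.llf - fit_reduced.llf)
    p_value = chi2.sf(lr, df=len(lags_full) - len(lags_reduced))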
Xu, Bin; Yang, Chenguang; Pan, Yongping
2015-10-01
This paper studies both indirect and direct global neural control of strict-feedback systems in the presence of unknown dynamics, using the dynamic surface control (DSC) technique in a novel manner. A new switching mechanism is designed to combine an adaptive neural controller within the neural approximation domain with a robust controller that pulls the transient states back into the neural approximation domain from the outside. In comparison with conventional control techniques, which could only achieve semiglobally uniformly ultimately bounded stability, the proposed control scheme guarantees that all the signals in the closed-loop system are globally uniformly ultimately bounded, such that the conventional constraints on the initial conditions of the neural control system can be relaxed. Simulation studies of a hypersonic flight vehicle (HFV) are performed to demonstrate the effectiveness of the proposed global neural DSC design. PMID:26259222
Electromagnetic field dynamics in Binary Neutron Stars
NASA Astrophysics Data System (ADS)
Palenzuela, Carlos; Anderson, Matthew; Hirschmann, Eric; Lehner, Luis; Liebling, Steven; Neilsen, David; Motl, Patrick
2011-04-01
Neutron star mergers represent one of the most promising sources of gravitational waves (GW) within the bandwidth of advLIGO. In addition to GW, strong magnetic fields may offer the possibility of a characteristic electromagnetic signature allowing for concurrent detection. In this talk we present results from numerical evolutions of such mergers, studying the dynamics of both the gravitational and electromagnetic degrees of freedom.
Schmidt, Helmut; Petkov, George; Richardson, Mark P.; Terry, John R.
2014-01-01
Graph theory has evolved into a useful tool for studying complex brain networks inferred from a variety of measures of neural activity, including fMRI, DTI, MEG and EEG. In the study of neurological disorders, recent work has discovered differences in the structure of graphs inferred from patient and control cohorts. However, most of these studies pursue a purely observational approach, identifying correlations between properties of graphs and the cohort which they describe, without consideration of the underlying mechanisms. Moving beyond this necessitates the development of computational modeling approaches to appropriately interpret network interactions and the alterations in brain dynamics they permit, which in the field of complexity sciences is known as dynamics on networks. In this study we describe the development and application of this framework using modular networks of Kuramoto oscillators. We use this framework to understand functional networks inferred from resting-state EEG recordings of a cohort of 35 adults with heterogeneous idiopathic generalized epilepsies and 40 healthy adult controls. Taking emergent synchrony across the global network as a proxy for seizures, our study finds that the critical strength of coupling required to synchronize the global network is significantly decreased for the epilepsy cohort for functional networks inferred from both theta (3–6 Hz) and low-alpha (6–9 Hz) bands. We further identify left frontal regions as a potential driver of seizure activity within these networks. We also explore the ability of our method to identify individuals with epilepsy, observing up to 80% predictive power through the use of receiver operating characteristic analysis. Collectively these findings demonstrate that a computer-model-based analysis of routine clinical EEG provides significant additional information beyond standard clinical interpretation, which should ultimately enable a more appropriate mechanistic stratification of people with epilepsy.
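As a rough illustration of the modeling framework described above (not the authors' code or their EEG-derived networks), the following sketch simulates Kuramoto oscillators coupled through a stand-in adjacency matrix and sweeps the coupling strength to locate the point where the global order parameter rises, i.e., the critical coupling used above as a seizure proxy.

```python
import numpy as np

rng = np.random.default_rng(1)

def kuramoto_order(A, K, omega, T=2000, dt=0.05, discard=1000):
    """Integrate dtheta_i/dt = omega_i + (K/N) * sum_j A_ij sin(theta_j - theta_i)
    and return the time-averaged global order parameter r."""
    N = len(omega)
    theta = rng.uniform(0, 2 * np.pi, N)
    rs = []
    for step in range(T):
        diff = theta[None, :] - theta[:, None]          # element [i, j] = theta_j - theta_i
        theta = theta + dt * (omega + (K / N) * (A * np.sin(diff)).sum(axis=1))
        if step >= discard:
            rs.append(abs(np.exp(1j * theta).mean()))
    return np.mean(rs)

N = 40
A = (rng.random((N, N)) < 0.2).astype(float)            # stand-in for a functional network
A = np.triu(A, 1); A = A + A.T                          # undirected, no self-loops
omega = rng.normal(0, 0.5, N)                           # natural frequencies

# Sweep the coupling strength; the value at which r rises sharply approximates
# the critical coupling required to synchronize the global network.
for K in np.linspace(0.5, 6.0, 12):
    r = kuramoto_order(A, K, omega)
    print(f"K = {K:4.1f}  order parameter r = {r:.2f}")
```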
Degradation Prediction Model Based on a Neural Network with Dynamic Windows
Zhang, Xinghui; Xiao, Lei; Kang, Jianshe
2015-01-01
Tracking the degradation of mechanical components is critical for effective maintenance decision making. Remaining useful life (RUL) estimation is a widely used form of degradation prediction. RUL prediction methods for cases where sufficient run-to-failure condition monitoring data are available have been researched extensively, but for some high-reliability components it is very difficult to collect run-to-failure condition monitoring data, i.e., data covering the full path from normal operation to failure. Only a limited number of condition indicators over a certain period can be used to estimate RUL. In addition, some existing prediction methods suffer from poor extrapolation, which blocks RUL estimation: the predicted value converges to a constant or fluctuates within a certain range. Moreover, fluctuating condition features also degrade the prediction. To address these problems, this paper proposes an RUL prediction model based on a neural network with dynamic windows. The model consists of three main steps: window size determination from the rate of increase, change point detection, and rolling prediction. The proposed method has two main strengths. One is that it does not need to assume that the degradation trajectory follows a particular distribution. The other is that it can adapt to variations in the degradation indicators, which greatly benefits RUL prediction. Finally, the performance of the proposed RUL prediction model is validated on real field data and simulation data. PMID:25806873
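A heavily simplified sketch of the rolling-prediction step (window sizing and change-point detection omitted; the degradation signal and failure threshold are invented for illustration): a small feed-forward network learns one-step-ahead prediction from sliding windows and is then iterated on its own outputs until the failure threshold is crossed.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Synthetic degradation indicator: slow upward drift plus noise (illustrative only).
t = np.arange(400)
signal = 0.005 * t + 0.02 * rng.standard_normal(len(t))

window = 10                        # the paper sizes this window from the rate of increase
X = np.array([signal[i:i + window] for i in range(len(signal) - window)])
y = signal[window:]

model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0)
model.fit(X, y)

# Rolling prediction: feed predictions back into the window until the
# failure threshold is crossed; the number of steps taken is the RUL estimate.
threshold = 2.5
history = list(signal[-window:])
rul = 0
while history[-1] < threshold and rul < 2000:
    nxt = model.predict(np.array(history[-window:]).reshape(1, -1))[0]
    history.append(nxt)
    rul += 1

if history[-1] >= threshold:
    print(f"Estimated RUL: {rul} time steps until the threshold of {threshold} is reached")
else:
    print("Threshold not reached within the prediction horizon (poor extrapolation)")
```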
A dynamic model of thundercloud electric fields
NASA Technical Reports Server (NTRS)
Nisbet, J. S.
1983-01-01
A description is given of the first results obtained with a new type of dynamic electrical model of a thundercloud that allows the charge rearrangement produced in arc breakdown, as well as the conduction and displacement currents, to be calculated with realistic generator configurations. The model demonstrates the great complexity of behavior of thunderclouds owing to the interaction of the nonlinear breakdown mechanisms, the energy stored in the electric field, and a conductivity that varies with altitude. It is also seen that dynamic charge distributions and electric fields are quite different from static distributions. It is noted that these differences affect the initial conditions before and after lightning strokes. The conduction current density to the ionosphere is very much larger in the dynamic cases than in static simulations. Such basic properties of thunderclouds as the production of cloud-to-ground strokes are seen as compatible only with a very limited range of thundercloud models. Another finding is that coronal and convection currents cause the electric fields at the surface to be much smaller than they would be in their absence.
Neural network assisted inverse dynamic guidance for terminally constrained entry flight.
Zhou, Hao; Rahman, Tawfiqur; Chen, Wanchun
2014-01-01
This paper presents a neural network assisted entry guidance law that is designed by applying Bézier approximation. It is shown that a fully constrained approximation of a reference trajectory can be made by using the Bézier curve. Applying this approximation, an inverse dynamic system for an entry flight is solved to generate the guidance command. The guidance solution thus obtained ensures terminal constraints for position, flight path, and azimuth angle. In order to ensure the terminal velocity constraint, a prediction of the terminal velocity is required, based on which the approximated Bézier curve is adjusted. An artificial neural network is used for this prediction of the terminal velocity. The method enables faster implementation in achieving fully constrained entry flight. Results from simulations indicate improved performance of the neural network assisted method. The scheme is expected to have prospects for further research on automated onboard control of terminal velocity for both reentry and terminal guidance laws. PMID:24723821
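The Bézier approximation step is easy to reproduce in isolation; the sketch below fits a cubic Bézier curve to an invented reference profile while pinning both endpoint control points, standing in for the terminal position constraint. It is not the paper's guidance law, only the curve-fitting idea.

```python
import numpy as np
from math import comb

def bernstein_matrix(tau, n=3):
    """Rows of Bernstein basis polynomials B_{i,n}(tau) for a degree-n Bézier curve."""
    return np.column_stack([comb(n, i) * tau**i * (1 - tau)**(n - i) for i in range(n + 1)])

# Reference trajectory (altitude vs. normalized downrange), purely illustrative.
tau = np.linspace(0, 1, 200)
reference = 80.0 * (1 - tau) ** 1.5 + 2.0 * np.sin(6 * tau)   # altitude profile, km

B = bernstein_matrix(tau)

# Pin the first and last control points to the boundary values (terminal constraints)
# and fit the two interior control points by least squares.
p0, p3 = reference[0], reference[-1]
residual = reference - B[:, 0] * p0 - B[:, 3] * p3
interior, *_ = np.linalg.lstsq(B[:, 1:3], residual, rcond=None)
ctrl = np.array([p0, interior[0], interior[1], p3])

approx = B @ ctrl
print("max approximation error:", np.abs(approx - reference).max())
print("endpoints preserved:", np.isclose(approx[0], p0), np.isclose(approx[-1], p3))
```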
Neural network-based adaptive dynamic surface control for permanent magnet synchronous motors.
Yu, Jinpeng; Shi, Peng; Dong, Wenjie; Chen, Bing; Lin, Chong
2015-03-01
This brief considers the problem of neural network (NN)-based adaptive dynamic surface control (DSC) for permanent magnet synchronous motors (PMSMs) with parameter uncertainties and load torque disturbance. First, NNs are used to approximate the unknown and nonlinear functions of the PMSM drive system, and a novel adaptive DSC is constructed to avoid the explosion of complexity in the backstepping design. Next, under the proposed adaptive neural DSC, the number of adaptive parameters required is reduced to only one, and the designed neural controller structure is much simpler than in some existing results in the literature, which can guarantee that the tracking error converges to a small neighborhood of the origin. Then, simulations are given to illustrate the effectiveness and potential of the new design technique. PMID:25720014
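The function-approximation role of the NNs can be illustrated with a generic online radial-basis-function approximator tracking an unknown nonlinearity; this is a textbook-style sketch with an invented target function, not the adaptive law derived in the brief.

```python
import numpy as np

rng = np.random.default_rng(3)

def unknown_nonlinearity(x):
    # Stand-in for the unknown, nonlinear part of the drive dynamics.
    return np.sin(2 * x) + 0.3 * x**2

# Fixed RBF centers and widths over the assumed operating range.
centers = np.linspace(-2, 2, 15)
width = 0.4
weights = np.zeros_like(centers)

def phi(x):
    return np.exp(-((x - centers) ** 2) / (2 * width**2))

# Online gradient (LMS-style) adaptation of the output weights only,
# echoing designs in which a single layer of weights is adapted.
eta = 0.1
for step in range(5000):
    x = rng.uniform(-2, 2)
    target = unknown_nonlinearity(x)
    estimate = weights @ phi(x)
    weights += eta * (target - estimate) * phi(x)

for x in np.linspace(-2, 2, 9):
    print(f"x = {x:5.2f}  f(x) = {unknown_nonlinearity(x):6.3f}  NN ≈ {weights @ phi(x):6.3f}")
```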
Molecular dynamics in high electric fields
NASA Astrophysics Data System (ADS)
Apostol, M.; Cune, L. C.
2016-06-01
Molecular rotation spectra, generated by the coupling of the molecular electric-dipole moments to an external time-dependent electric field, are discussed in a few particular conditions which can be of some experimental interest. First, the spherical-pendulum molecular model is reviewed, with the aim of introducing an approximate method which consists in the separation of the azimuthal and zenithal motions. Second, rotation spectra are considered in the presence of a static electric field. Two particular cases are analyzed, corresponding to strong and weak fields. In both cases the classical motion of the dipoles consists of rotations and vibrations about equilibrium positions; this motion may exhibit parametric resonances. For strong fields a large macroscopic electric polarization may appear. This situation may be relevant for polar matter (like pyroelectrics, ferroelectrics), or for heavy impurities embedded in a polar solid. The dipolar interaction is analyzed in polar condensed matter, where it is shown that new polarization modes appear for a spontaneous macroscopic electric polarization (these modes are tentatively called "dipolons"); one of the polarization modes is related to parametric resonances. The extension of these considerations to magnetic dipoles is briefly discussed. The treatment is extended to strong electric fields which oscillate with a high frequency, as those provided by high-power lasers. It is shown that the effect of such fields on molecular dynamics is governed by a much weaker, effective, renormalized, static electric field.
Magnetization dynamics using ultrashort magnetic field pulses
NASA Astrophysics Data System (ADS)
Tudosa, Ioan
Very short and well-shaped magnetic field pulses can be generated using ultra-relativistic electron bunches at the Stanford Linear Accelerator. These fields of several Tesla, with durations of several picoseconds, are used to study the response of magnetic materials to a very short excitation. Precession of a magnetic moment by 90 degrees in a field of 1 Tesla takes about 10 picoseconds, so we explore the range of fast switching of the magnetization by precession. Our experiments are in a region of magnetic excitation that is not yet accessible by other methods; current table-top experiments can only generate field pulses longer than 100 ps with strengths of about 0.1 Tesla. Two types of magnetic materials were used: magnetic recording media and model magnetic thin films. Information about the magnetization dynamics is extracted from the magnetic patterns generated by the magnetic field. The shape and size of these patterns are influenced by the dissipation of angular momentum involved in the switching process. The high-density recording media, of both in-plane and perpendicular type, show a pattern which indicates a high spin momentum dissipation. The perpendicular magnetic recording media were exposed to multiple magnetic field pulses. We observed an extended transition region between switched and non-switched areas, indicating a stochastic switching behavior that cannot be explained by thermal fluctuations. The model films consist of very thin crystalline Fe films on GaAs. Even with these model films we see an enhanced dissipation compared to ferromagnetic resonance studies. The magnetic patterns show that damping increases with time and is not a constant, as usually assumed in the equation describing the magnetization dynamics. Simulation using the theory of spin-wave scattering explains only half of the observed damping. An important feature of this theory is that the spin dissipation is time dependent and depends on the large angle between the magnetization and the magnetic field.
Gradient Learning in Spiking Neural Networks by Dynamic Perturbation of Conductances
Fiete, Ila R.; Seung, H. Sebastian
2006-07-28
We present a method of estimating the gradient of an objective function with respect to the synaptic weights of a spiking neural network. The method works by measuring the fluctuations in the objective function in response to dynamic perturbation of the membrane conductances of the neurons. It is compatible with recurrent networks of conductance-based model neurons with dynamic synapses. The method can be interpreted as a biologically plausible synaptic learning rule, if the dynamic perturbations are generated by a special class of 'empiric' synapses driven by random spike trains from an external source.
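The underlying estimator can be illustrated outside the spiking setting; the sketch below applies the same perturbation idea to a trivial rate-style model, correlating random parameter perturbations with the resulting fluctuation of the objective to obtain a stochastic gradient estimate. The model, step sizes, and objective are illustrative assumptions, not the authors' conductance-based formulation.

```python
import numpy as np

rng = np.random.default_rng(4)

# A tiny static "network": output y = w . x, objective = -(y - target)^2.
x = np.array([0.5, -1.0, 2.0])
target = 1.5
w = np.zeros(3)

sigma = 0.01    # perturbation amplitude
eta = 0.05      # learning rate

def objective(w):
    return -(w @ x - target) ** 2

for step in range(2000):
    base = objective(w)
    xi = sigma * rng.standard_normal(3)          # random perturbation of the parameters
    delta = objective(w + xi) - base             # measured fluctuation of the objective
    grad_est = (delta / sigma**2) * xi           # correlate perturbation with outcome
    w += eta * grad_est                          # stochastic gradient ascent

print("learned output:", w @ x, " target:", target)
```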
Vortex dynamics in a wave field
NASA Astrophysics Data System (ADS)
Perret, Gaele; Poupardin, Adrien; Brossard, Jerome
2010-11-01
The interaction of waves and currents with submerged structures in coastal zones generates complex hydrodynamic features which may considerably impact the local environment. The geometrical singularities of the structures produce concentrated vortex filaments which may impact the sea bed and/or the free surface. The objective of the present study is to characterize the vortex dynamics generated by a horizontal plate, considered as a vortex generator, in a regular wave field. Vortices are generated at the edges of the plate; they undergo three-dimensional instabilities leading to their destruction. Their dynamics are investigated through laboratory experiments conducted in two different wave flumes in order to study the effect of scale on the dynamics. The two-dimensional vortex dynamics is characterized using PIV measurements, from which vortex intensity, trajectory and lifetime are determined. The three-dimensional dynamics is studied by stereo photography. The vortices are visualized with hydrogen bubbles generated at the edges of the plate by electrolysis. The evolution of the vortices is recorded by two CCD cameras located in different planes. Two dominant unstable wavelengths are observed which do not seem to depend on the width of the wave flume.
Dynamically Partitionable Autoassociative Networks as a Solution to the Neural Binding Problem
Hayworth, Kenneth J.
2012-01-01
An outstanding question in theoretical neuroscience is how the brain solves the neural binding problem. In vision, binding can be summarized as the ability to represent that certain properties belong to one object while other properties belong to a different object. I review the binding problem in visual and other domains, and review its simplest proposed solution – the anatomical binding hypothesis. This hypothesis has traditionally been rejected as a true solution because it seems to require a type of one-to-one wiring of neurons that would be impossible in a biological system (as opposed to an engineered system like a computer). I show that this requirement for one-to-one wiring can be loosened by carefully considering how the neural representation is actually put to use by the rest of the brain. This leads to a solution where a symbol is represented not as a particular pattern of neural activation but instead as a piece of a global stable attractor state. I introduce the Dynamically Partitionable AutoAssociative Network (DPAAN) as an implementation of this solution and show how DPAANs can be used in systems which perform perceptual binding and in systems that implement syntax-sensitive rules. Finally, I show how the core parts of the cognitive architecture ACT-R can be neurally implemented using a DPAAN as ACT-R's global workspace. Because the DPAAN solution to the binding problem requires only "flat" neural representations (as opposed to the phase-encoded representation hypothesized in neural synchrony solutions) it is directly compatible with the most well-developed neural models of learning, memory, and pattern recognition. PMID:23060784
NASA Astrophysics Data System (ADS)
Ticchi, Alessandro; Faisal, Aldo A.; Brain; Behaviour Lab Team
2015-03-01
Experimental evidence at the behavioural level shows that the brain is able to make Bayes-optimal inferences and decisions (Kording and Wolpert 2004, Nature; Ernst and Banks, 2002, Nature), yet at the circuit level little is known about how neural circuits may implement Bayesian learning and inference (but see Ma et al. 2006, Nat Neurosci). Molecular sources of noise are clearly established to be powerful enough to pose limits on neural function and structure in the brain (Faisal et al. 2008, Nat Rev Neurosci; Faisal et al. 2005, Curr Biol). We propose a spiking neuron model in which we exploit molecular noise as a useful resource to implement close-to-optimal inference by sampling. Specifically, we derive a synaptic plasticity rule which, coupled with integrate-and-fire neural dynamics and recurrent inhibitory connections, enables a neural population to learn the statistical properties of the received sensory input (the prior). Moreover, the proposed model allows prior knowledge to be combined with additional sources of information (the likelihood) from another neural population, and implements in spiking neurons a Markov chain Monte Carlo algorithm which generates samples from the inferred posterior distribution.
Autonomous dynamics in neural networks: the dHAN concept and associative thought processes
NASA Astrophysics Data System (ADS)
Gros, Claudius
2007-02-01
The neural activity of the human brain is dominated by self-sustained activities. External sensory stimuli influence this autonomous activity but they do not drive the brain directly. Most standard artificial neural network models are however input driven and do not show spontaneous activities. It constitutes a challenge to develop organizational principles for controlled, self-sustained activity in artificial neural networks. Here we propose and examine the dHAN concept for autonomous associative thought processes in dense and homogeneous associative networks. An associative thought-process is characterized, within this approach, by a time-series of transient attractors. Each transient state corresponds to a stored information, a memory. The subsequent transient states are characterized by large associative overlaps, which are identical to acquired patterns. Memory states, the acquired patterns, have such a dual functionality. In this approach the self-sustained neural activity has a central functional role. The network acquires a discrimination capability, as external stimuli need to compete with the autonomous activity. Noise in the input is readily filtered-out. Hebbian learning of external patterns occurs coinstantaneous with the ongoing associative thought process. The autonomous dynamics needs a long-term working-point optimization which acquires within the dHAN concept a dual functionality: It stabilizes the time development of the associative thought process and limits runaway synaptic growth, which generically occurs otherwise in neural networks with self-induced activities and Hebbian-type learning rules.
Knudstrup, Scott; Zochowski, Michal; Booth, Victoria
2016-01-01
The characteristics of neural network activity depend on intrinsic neural properties and synaptic connectivity in the network. In brain networks, both of these properties are critically affected by the type and levels of neuromodulators present. The expression of many of the most powerful neuromodulators, including acetylcholine (ACh), varies tonically and phasically with behavioural state, leading to dynamic, heterogeneous changes in intrinsic neural properties and synaptic connectivity properties. Namely, ACh significantly alters neural firing properties as measured by the phase response curve in a manner that has been shown to alter the propensity for network synchronization. The aim of this simulation study was to build an understanding of how heterogeneity in cholinergic modulation of neural firing properties and heterogeneity in synaptic connectivity affect the initiation and maintenance of synchronous network bursting in excitatory networks. We show that cells that display different levels of ACh modulation have differential roles in generating network activity: weakly modulated cells are necessary for burst initiation and provide synchronizing drive to the rest of the network, whereas strongly modulated cells provide the overall activity level necessary to sustain burst firing. By applying several quantitative measures of network activity, we further show that the existence of network bursting and its characteristics, such as burst duration and intraburst synchrony, are dependent on the fraction of cell types providing the synaptic connections in the network. These results suggest mechanisms underlying ACh modulation of brain oscillations and the modulation of seizure activity during sleep states. PMID:26869313
NASA Astrophysics Data System (ADS)
Bruton, C. P.; West, M. E.
2013-12-01
Earthquakes and seismicity have long been used to monitor volcanoes. In addition to time, location, and magnitude of an earthquake, the characteristics of the waveform itself are important. For example, low-frequency or hybrid type events could be generated by magma rising toward the surface. A rockfall event could indicate a growing lava dome. Classification of earthquake waveforms is thus a useful tool in volcano monitoring. A procedure to perform such classification automatically could flag certain event types immediately, instead of waiting for a human analyst's review. Inspired by speech recognition techniques, we have developed a procedure to classify earthquake waveforms using artificial neural networks. A neural network can be "trained" with an existing set of input and desired output data; in this case, we use a set of earthquake waveforms (input) that has been classified by a human analyst (desired output). After training the neural network, new waveforms can be classified automatically as they are presented. Our procedure uses waveforms from multiple stations, making it robust to seismic network changes and outages. The use of a dynamic time-delay neural network allows waveforms to be presented without precise alignment in time, and thus could be applied to continuous data or to seismic events without clear start and end times. We have evaluated several different training algorithms and neural network structures to determine their effects on classification performance. We apply this procedure to earthquakes recorded at Mount Spurr and Katmai in Alaska, and Uturuncu Volcano in Bolivia.
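A much-simplified stand-in for the classification step (not the authors' multi-station, dynamic time-delay network): each waveform is reduced to log band energies over successive frames, approximating time-delay inputs, and a small neural network classifier is trained on labeled examples. The synthetic waveforms and labels below are placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)

def band_energy_features(w, n_frames=8):
    """Split a waveform into frames and return log energies in the low/high halves
    of the spectrum, concatenated across frames (a crude stand-in for time-delay inputs)."""
    feats = []
    for f in np.array_split(w, n_frames):
        spec = np.abs(np.fft.rfft(f)) ** 2
        half = len(spec) // 2
        feats += [np.log(spec[:half].sum() + 1e-12), np.log(spec[half:].sum() + 1e-12)]
    return np.array(feats)

def make_event(kind, n=1024):
    # Two synthetic event classes: low-frequency vs. high-frequency decaying waveforms.
    t = np.arange(n)
    f = 0.01 if kind == 0 else 0.08
    return np.sin(2 * np.pi * f * t) * np.exp(-t / 400) + 0.2 * rng.standard_normal(n)

kinds = rng.integers(0, 2, 400)                       # placeholder analyst labels
X = np.array([band_energy_features(make_event(k)) for k in kinds])

Xtr, Xte, ytr, yte = train_test_split(X, kinds, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(Xtr, ytr)
print("held-out accuracy:", clf.score(Xte, yte))
```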
Mixing Dynamics Induced by Traveling Magnetic Fields
NASA Technical Reports Server (NTRS)
Grugel, Richard N.; Mazuruk, Konstantin; Rose, M. Franklin (Technical Monitor)
2001-01-01
Microstructural and compositional homogeneity in metals and alloys can only be achieved if the initial melt is homogeneous prior to the onset of solidification processing. Naturally induced convection may initially facilitate this requirement but upon the onset of solidification significant compositional variations generally arise leading to undesired segregation. Application of alternating magnetic fields to promote a uniform bulk liquid concentration during solidification processing has been suggested. To investigate such possibilities an initial study of using traveling magnetic fields (TMF) to promote melt homogenization is reported in this work. Theoretically, the effect of TMF-induced convection on mixing phenomena is studied in the laminar regime of flow. Experimentally, with and without applied fields, both 1) mixing dynamics by optically monitoring the spreading of an initially localized dye in transparent fluids and, 2) compositional variations in metal alloys have been investigated.
Robust Fault Detection of Wind Energy Conversion Systems Based on Dynamic Neural Networks
Talebi, Nasser; Sadrnia, Mohammad Ali; Darabi, Ahmad
2014-01-01
Occurrence of faults in wind energy conversion systems (WECSs) is inevitable. In order to detect faults at the appropriate time, avoid heavy economic losses, ensure safe system operation, prevent damage to adjacent relevant systems, and facilitate timely repair of failed components, a fault detection system (FDS) is required. Recurrent neural networks (RNNs) have gained a noticeable position in FDSs and have been widely used for modeling complex dynamical systems. One method for designing an FDS is to prepare a dynamic neural model emulating the normal system behavior. By comparing the outputs of the real system and the neural model, the incidence of faults can be identified. In this paper, by utilizing a comprehensive dynamic model which contains both mechanical and electrical components of the WECS, an FDS is suggested using dynamic RNNs. The presented FDS detects faults of the generator's angular velocity sensor, pitch angle sensors, and pitch actuators. Robustness of the FDS is achieved by employing an adaptive threshold. Simulation results show that the proposed scheme is capable of detecting the faults quickly and has very low false-alarm and missed-alarm rates. PMID:24744774
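The residual-plus-adaptive-threshold logic can be sketched generically; in the sketch below the "neural model of normal behaviour" is replaced by the known noise-free signal, and the fault profile and threshold rule are assumptions rather than the paper's design.

```python
import numpy as np

rng = np.random.default_rng(6)

# Measured sensor signal with an abrupt bias fault injected after t = 600.
n = 1000
t = np.arange(n)
normal = np.sin(2 * np.pi * t / 100) + 0.05 * rng.standard_normal(n)
fault = np.where(t > 600, 0.5, 0.0)                    # stand-in sensor bias fault
measured = normal + fault

# Stand-in for the dynamic neural model of normal behaviour:
# here simply the noise-free normal signal.
predicted = np.sin(2 * np.pi * t / 100)

residual = np.abs(measured - predicted)

# Adaptive threshold: rolling mean + k * rolling std of the residual,
# computed over a trailing window so it tracks the normal operating noise level.
window, k = 100, 4.0
alarms = np.zeros(n, dtype=bool)
for i in range(window, n):
    past = residual[i - window:i]
    alarms[i] = residual[i] > past.mean() + k * past.std()

first = int(np.argmax(alarms)) if alarms.any() else None
print("first alarm at t =", first, "(fault starts at t = 601)")
```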
Dynamical similarity of geomagnetic field reversals.
Valet, Jean-Pierre; Fournier, Alexandre; Courtillot, Vincent; Herrero-Bervera, Emilio
2012-10-01
No consensus has been reached so far on the properties of the geomagnetic field during reversals or on the main features that might reveal its dynamics. A main characteristic of the reversing field is a large decrease in the axial dipole and the dominant role of non-dipole components. Other features strongly depend on whether they are derived from sedimentary or volcanic records. Only thermal remanent magnetization of lava flows can capture faithful records of a rapidly varying non-dipole field, but, because of episodic volcanic activity, sequences of overlying flows yield incomplete records. Here we show that the ten most detailed volcanic records of reversals can be matched in a very satisfactory way, under the assumption of a common duration, revealing common dynamical characteristics. We infer that the reversal process has remained unchanged, with the same time constants and durations, at least since 180 million years ago. We propose that the reversing field is characterized by three successive phases: a precursory event, a 180° polarity switch and a rebound. The first and third phases reflect the emergence of the non-dipole field with large-amplitude secular variation. They are rarely both recorded at the same site owing to the rapidly changing field geometry and last for less than 2,500 years. The actual transit between the two polarities does not last longer than 1,000 years and might therefore result from mechanisms other than those governing normal secular variation. Such changes are too brief to be accurately recorded by most sediments. PMID:23038471
Dynamic modeling of physical phenomena for PRAs using neural networks
Benjamin, A.S.; Brown, N.N.; Paez, T.L.
1998-04-01
In most probabilistic risk assessments, there is a set of accident scenarios that involves the physical responses of a system to environmental challenges. Examples include the effects of earthquakes and fires on the operability of a nuclear reactor safety system, the effects of fires and impacts on the safety integrity of a nuclear weapon, and the effects of human intrusions on the transport of radionuclides from an underground waste facility. The physical responses of the system to these challenges can be quite complex, and their evaluation may require the use of detailed computer codes that are very time consuming to execute. Yet, to perform meaningful probabilistic analyses, it is necessary to evaluate the responses for a large number of variations in the input parameters that describe the initial state of the system, the environments to which it is exposed, and the effects of human interaction. Because the uncertainties of the system response may be very large, it may also be necessary to perform these evaluations for various values of modeling parameters that have high uncertainties, such as material stiffnesses, surface emissivities, and ground permeabilities. The authors have been exploring the use of artificial neural networks (ANNs) as a means for estimating the physical responses of complex systems to phenomenological events such as those cited above. These networks are designed as mathematical constructs with adjustable parameters that can be trained so that the results obtained from the networks will simulate the results obtained from the detailed computer codes. The intent is for the networks to provide an adequate simulation of the detailed codes over a significant range of variables while requiring only a small fraction of the computer processing time required by the detailed codes. This enables the authors to integrate the physical response analyses into the probabilistic models in order to estimate the probabilities of various responses.
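A minimal illustration of the surrogate idea (with placeholder physics, not the authors' detailed codes): an expensive response function is sampled at a modest number of design points, a small network is trained on those samples, and the cheap surrogate is then evaluated at the many input variations a probabilistic analysis requires.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)

def expensive_response(x):
    """Placeholder for a detailed physics code: peak structural response
    as a function of (load amplitude, material stiffness)."""
    load, stiffness = x[..., 0], x[..., 1]
    return load**1.5 / np.sqrt(stiffness) + 0.1 * np.sin(5 * load)

# Train the surrogate on a limited number of "expensive" runs.
X_train = rng.uniform([0.5, 1.0], [2.0, 4.0], size=(200, 2))
y_train = expensive_response(X_train)
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
surrogate.fit(X_train, y_train)

# Use the cheap surrogate for a large Monte Carlo probabilistic analysis.
X_mc = rng.uniform([0.5, 1.0], [2.0, 4.0], size=(100000, 2))
responses = surrogate.predict(X_mc)
print("estimated P(response > 2.0):", (responses > 2.0).mean())
```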
Neural network architecture for cognitive navigation in dynamic environments.
Villacorta-Atienza, José Antonio; Makarov, Valeri A
2013-12-01
Navigation in time-evolving environments with moving targets and obstacles requires cognitive abilities widely demonstrated by even the simplest animals. However, it is a long-standing challenging problem for artificial agents. Cognitive autonomous robots coping with this problem must solve two essential tasks: 1) understand the environment in terms of what may happen and how the agent can deal with it, and 2) learn successful experiences for their further use in an automatic, subconscious way. The recently introduced concept of compact internal representation (CIR) provides the ground for both tasks. CIR is a specific cognitive map that compacts time-evolving situations into static structures containing the information necessary for navigation. It belongs to the class of global approaches, i.e., it finds trajectories to a target when they exist but also detects situations where no solution can be found. Here we extend the concept to situations with mobile targets. Then, using CIR as a core, we propose a closed-loop neural network architecture consisting of conscious and subconscious pathways for efficient decision-making. The conscious pathway provides solutions to novel situations if the default subconscious pathway fails to guide the agent to a target. Employing experiments with roving robots and numerical simulations, we show that the proposed architecture provides the robot with cognitive abilities and enables reliable and flexible navigation in realistic time-evolving environments. We prove that the subconscious pathway is robust against uncertainty in the sensory information. Thus, if a novel situation is similar but not identical to previous experience (because of, e.g., noisy perception), the subconscious pathway is able to provide an effective solution. PMID:24805224
Altered temporal dynamics of neural adaptation in the aging human auditory cortex.
Herrmann, Björn; Henry, Molly J; Johnsrude, Ingrid S; Obleser, Jonas
2016-09-01
Neural response adaptation plays an important role in perception and cognition. Here, we used electroencephalography to investigate how aging affects the temporal dynamics of neural adaptation in human auditory cortex. Younger (18-31 years) and older (51-70 years) normal hearing adults listened to tone sequences with varying onset-to-onset intervals. Our results show long-lasting neural adaptation such that the response to a particular tone is a nonlinear function of the extended temporal history of sound events. Most important, aging is associated with multiple changes in auditory cortex; older adults exhibit larger and less variable response magnitudes, a larger dynamic response range, and a reduced sensitivity to temporal context. Computational modeling suggests that reduced adaptation recovery times underlie these changes in the aging auditory cortex and that the extended temporal stimulation has less influence on the neural response to the current sound in older compared with younger individuals. Our human electroencephalography results critically narrow the gap to animal electrophysiology work suggesting a compensatory release from cortical inhibition accompanying hearing loss and aging. PMID:27459921
Dynamic neural networking as a basis for plasticity in the control of heart rate.
Kember, G; Armour, J A; Zamir, M
2013-01-21
A model is proposed in which the relationship between individual neurons within a neural network is dynamically changing to the effect of providing a measure of "plasticity" in the control of heart rate. The neural network on which the model is based consists of three populations of neurons residing in the central nervous system, the intrathoracic extracardiac nervous system, and the intrinsic cardiac nervous system. This hierarchy of neural centers is used to challenge the classical view that the control of heart rate, a key clinical index, resides entirely in central neuronal command (spinal cord, medulla oblongata, and higher centers). Our results indicate that dynamic networking allows for the possibility of an interplay among the three populations of neurons to the effect of altering the order of control of heart rate among them. This interplay among the three levels of control allows for different neural pathways for the control of heart rate to emerge under different blood flow demands or disease conditions and, as such, it has significant clinical implications because current understanding and treatment of heart rate anomalies are based largely on a single level of control and on neurons acting in unison as a single entity rather than individually within a (plastically) interconnected network. PMID:23041448
Dynamic functional integration of distinct neural empathy systems
2014-01-01
Recent evidence points to two separate systems for empathy: an emotional system that supports our ability to vicariously share the emotions and mental states of others, and a cognitive system that involves understanding the perspective of others. Several recent models offer new evidence regarding the brain regions involved in these systems, but no study to date has examined how regions within each system dynamically interact. The study by Raz et al. in this issue of Social Cognitive and Affective Neuroscience is among the first to use a novel approach of functional magnetic resonance imaging analysis of fluctuations in network cohesion while an individual is experiencing empathy. Their results substantiate the approach positing two empathy mechanisms and, more broadly, demonstrate how dynamic analysis of emotions can further our understanding of social behavior. PMID:23956080
Wang Rubin; Yu Wei
2005-08-25
In this paper, we investigate how a population of neuronal oscillators processes information and how neural coding evolves dynamically when external stimulation acts on it. A numerical method is used to describe the evolution of the neural coding in three-dimensional space. The numerical results show that only suitable stimulation can change the coupling structure and plasticity of the neurons.
Quantum dynamics in strong fluctuating fields
NASA Astrophysics Data System (ADS)
Goychuk, Igor; Hänggi, Peter
A large number of multifaceted quantum transport processes in molecular systems and physical nanosystems, such as e.g. nonadiabatic electron transfer in proteins, can be treated in terms of quantum relaxation processes which couple to one or several fluctuating environments. A thermal equilibrium environment can conveniently be modelled by a thermal bath of harmonic oscillators. An archetype situation provides a two-state dissipative quantum dynamics, commonly known under the label of a spin-boson dynamics. An interesting and nontrivial physical situation emerges, however, when the quantum dynamics evolves far away from thermal equilibrium. This occurs, for example, when a charge transferring medium possesses nonequilibrium degrees of freedom, or when a strong time-dependent control field is applied externally. Accordingly, certain parameters of underlying quantum subsystem acquire stochastic character. This may occur, for example, for the tunnelling coupling between the donor and acceptor states of the transferring electron, or for the corresponding energy difference between electronic states which assume via the coupling to the fluctuating environment an explicit stochastic or deterministic time-dependence. Here, we review the general theoretical framework which is based on the method of projector operators, yielding the quantum master equations for systems that are exposed to strong external fields. This allows one to investigate on a common basis, the influence of nonequilibrium fluctuations and periodic electrical fields on those already mentioned dynamics and related quantum transport processes. Most importantly, such strong fluctuating fields induce a whole variety of nonlinear and nonequilibrium phenomena. A characteristic feature of such dynamics is the absence of thermal (quantum) detailed balance.
Dynamic recurrent neural networks for stable adaptive control of wing rock motion
NASA Astrophysics Data System (ADS)
Kooi, Steven Boon-Lam
Wing rock is a self-sustaining limit cycle oscillation (LCO) which occurs as the result of nonlinear coupling between the dynamic response of the aircraft and the unsteady aerodynamic forces. In this thesis, a dynamic recurrent RBF (radial basis function) network control methodology is proposed to control the wing rock motion. A concept based on the properties of the Preisach hysteresis model is used in the design of the dynamic neural networks: the structure and memory mechanism of the Preisach model are analogous to the parallel connectivity and memory formation in RBF neural networks. The proposed dynamic recurrent neural network can add or prune neurons in the hidden layer according to growth criteria based on the ensemble-average memory formation of the Preisach model. The recurrent feature of the RBF network deals with the dynamic nonlinearities and the temporal memory endowed by the hysteresis model. The control of wing rock is a tracking problem: the trajectory starts from non-zero initial conditions and tends to zero as time goes to infinity. In the proposed neural control structure, the recurrent dynamic RBF network performs an identification process in order to approximate the unknown nonlinearities of the physical system from the input-output data obtained from the wing rock phenomenon. The design of the RBF networks together with the network controllers is carried out in the discrete-time domain. The recurrent RBF networks employ two separate adaptation schemes: the RBF centres and widths are adjusted by an extended Kalman filter in order to give a minimum network size, while the output-layer weights are updated using an algorithm derived from Lyapunov stability analysis for stable closed-loop control. The issue of the robustness of the recurrent RBF networks is also addressed. The effectiveness of the proposed dynamic recurrent neural control methodology is demonstrated through simulations to