Spatiotemporal dynamics of continuum neural fields
NASA Astrophysics Data System (ADS)
Bressloff, Paul C.
2012-01-01
We survey recent analytical approaches to studying the spatiotemporal dynamics of continuum neural fields. Neural fields model the large-scale dynamics of spatially structured biological neural networks in terms of nonlinear integrodifferential equations whose associated integral kernels represent the spatial distribution of neuronal synaptic connections. They provide an important example of spatially extended excitable systems with nonlocal interactions and exhibit a wide range of spatially coherent dynamics, including traveling waves, oscillations, and Turing-like patterns.
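As an illustrative sketch only (not from the survey), the class of models described here reduces in one dimension to an Amari-type equation, du/dt = -u + ∫ w(x-y) f(u(y)) dy + I(x). The following minimal NumPy integration, with an assumed "Mexican hat" kernel and illustrative parameter values, shows a localized input organizing the field into a spatially coherent activity bump on a periodic domain:

```python
# Minimal 1-D neural field of Amari type on a ring, integrated by forward Euler.
# Kernel shape, firing-rate function, and all parameter values are illustrative
# assumptions, not taken from the survey.
import numpy as np

def simulate_field(n=256, steps=2000, dt=0.05, length=20.0):
    x = np.linspace(-length / 2, length / 2, n, endpoint=False)
    dx = length / n
    # Lateral-inhibition ("Mexican hat") kernel: local excitation, broad inhibition.
    w = 2.0 * np.exp(-x**2 / 2.0) - 0.8 * np.exp(-x**2 / 18.0)
    w_hat = np.fft.fft(np.fft.ifftshift(w))                # spectrum for circular convolution
    f = lambda u: 1.0 / (1.0 + np.exp(-5.0 * (u - 0.5)))   # sigmoidal firing rate
    stim = np.exp(-x**2)                                   # localized external input
    u = np.zeros(n)
    for _ in range(steps):
        conv = np.real(np.fft.ifft(w_hat * np.fft.fft(f(u)))) * dx
        u += dt * (-u + conv + stim)
    return x, u

x, u = simulate_field()
# With these parameters the activity settles into a bump centred on the stimulus.
```

The FFT implements the spatial convolution with the connectivity kernel; swapping the kernel or adding adaptation variables is how the richer regimes (waves, Turing patterns) are typically explored.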
Metastable dynamics in heterogeneous neural fields.
Schwappach, Cordula; Hutt, Axel; Beim Graben, Peter
2015-01-01
We present numerical simulations of metastable states in heterogeneous neural fields that are connected along heteroclinic orbits. Such trajectories are possible representations of transient neural activity as observed, for example, in the electroencephalogram. Based on previous theoretical findings on learning algorithms for neural fields, we directly construct synaptic weight kernels from Lotka-Volterra neural population dynamics without supervised training approaches. We deliver a MATLAB neural field toolbox validated by two examples of one- and two-dimensional neural fields. We demonstrate trial-to-trial variability and distributed representations in our simulations which might therefore be regarded as a proof-of-concept for more advanced neural field models of metastable dynamics in neurophysiological data. PMID:26175671
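For orientation only, the Lotka-Volterra population dynamics that underlie such heteroclinic sequences can be sketched in a few lines. The inhibition matrix below is a textbook winnerless-competition choice, not a kernel constructed by the paper's toolbox; with it, the three populations become active one after another along an attracting heteroclinic cycle, a minimal picture of metastable sequential activity:

```python
# Generalized Lotka-Volterra rates r_i' = r_i (1 - sum_j rho_ij r_j).
# The asymmetric rho yields a heteroclinic cycle among the saddles
# (1,0,0), (0,1,0), (0,0,1); all parameter values are illustrative assumptions.
import numpy as np

def lv_trajectory(T=300.0, dt=0.01):
    rho = np.array([[1.0, 0.5, 2.0],
                    [2.0, 1.0, 0.5],
                    [0.5, 2.0, 1.0]])
    r = np.array([0.6, 0.3, 0.1])
    trace = np.empty((int(T / dt), 3))
    for i in range(trace.shape[0]):
        r = r + dt * r * (1.0 - rho @ r)
        r = np.clip(r, 1e-9, None)   # positivity floor keeps the cycle from stalling
        trace[i] = r
    return trace

trace = lv_trajectory()
# Each population in turn approaches 1 while the others decay: a metastable sequence.
```

The saddle condition (contraction rate exceeding expansion rate, here 1 > 0.5) makes the cycle attracting; without the floor the trajectory would spend exponentially longer near each saddle.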
Neural Field Dynamics with Heterogeneous Connection Topology
NASA Astrophysics Data System (ADS)
Qubbaj, Murad R.; Jirsa, Viktor K.
2007-06-01
Neural fields receive inputs from local and nonlocal sources. Notably, in a biologically realistic architecture the latter vary under spatial translations (heterogeneous), whereas the former do not (homogeneous). To understand the mutual effects of homogeneous and heterogeneous connectivity, we study the stability of the steady-state activity of a neural field as a function of its connectivity and transmission speed. We show that myelination, a developmentally relevant change of the heterogeneous connectivity, always stabilizes the steady state against oscillatory instabilities, independent of the local connectivity. Nonoscillatory instabilities are shown to be independent of any influences of time delay.
Fluctuation-response relation unifies dynamical behaviors in neural fields
NASA Astrophysics Data System (ADS)
Fung, C. C. Alan; Wong, K. Y. Michael; Mao, Hongzi; Wu, Si
2015-08-01
Anticipation is a strategy used by neural fields to compensate for transmission and processing delays during the tracking of dynamical information and can be achieved by slow, localized, inhibitory feedback mechanisms such as short-term synaptic depression, spike-frequency adaptation, or inhibitory feedback from other layers. Based on the translational symmetry of the mobile network states, we derive generic fluctuation-response relations, providing unified predictions that link their tracking behaviors in the presence of external stimuli to the intrinsic dynamics of the neural fields in their absence. PMID:26382448
Conditions of activity bubble uniqueness in dynamic neural fields.
Mikhailova, Inna; Goerick, Christian
2005-02-01
Dynamic neural fields (DNFs) offer a rich spectrum of dynamic properties like hysteresis, spatiotemporal information integration, and coexistence of multiple attractors. These properties make DNFs increasingly popular in implementations of sensorimotor loops for autonomous systems. Applications often imply that DNFs should have only one compact region of firing neurons (activity bubble), whereas the rest of the field should not fire (e.g., if the field represents motor commands). In this article we prove the conditions of activity bubble uniqueness in the case of locally symmetric input bubbles. The qualitative condition on inhomogeneous inputs used in earlier work on DNFs is transferred to a quantitative condition of a balance between the internal dynamics and the input. The mathematical analysis is carried out for the two-dimensional case with methods that can be extended to more than two dimensions. The article concludes with an example of how our theoretical results facilitate the practical use of DNFs. PMID:15685393
Neural Population Dynamics Modeled by Mean-Field Graphs
NASA Astrophysics Data System (ADS)
Kozma, Robert; Puljic, Marko
2011-09-01
In this work we apply a random graph theory approach to describe neural population dynamics. Using random graphs in addition to ordinary and partial differential equations has important advantages. The mathematical theory of large-scale random graphs provides an efficient tool to describe transitions between high- and low-dimensional spaces. Recent advances in studying neural correlates of higher cognition indicate the significance of sudden changes in space-time neurodynamics, which can be efficiently described as phase transitions in the neuropil medium. Phase transitions are rigorously defined mathematically on random graph sequences and they can be naturally generalized to a class of percolation processes called neuropercolation. In this work we employ mean-field graphs with given vertex degree distribution and edge strength distribution. We demonstrate the emergence of collective oscillations in the style of brains.
The dynamic neural field approach to cognitive robotics.
Erlhagen, Wolfram; Bicho, Estela
2006-09-01
This tutorial presents an architecture for autonomous robots to generate behavior in joint action tasks. To efficiently interact with another agent in solving a mutual task, a robot should be endowed with cognitive skills such as memory, decision making, action understanding and prediction. The proposed architecture is strongly inspired by our current understanding of the processing principles and the neuronal circuitry underlying these functionalities in the primate brain. As a mathematical framework, we use a coupled system of dynamic neural fields, each representing the basic functionality of neuronal populations in different brain areas. It implements goal-directed behavior in joint action as a continuous process that builds on the interpretation of observed movements in terms of the partner's action goal. We validate the architecture in two experimental paradigms: (1) a joint search task; (2) a reproduction of an observed or inferred end state of a grasping-placing sequence. We also review some of the mathematical results about dynamic neural fields that are important for the implementation work. PMID:16921201
Dynamic neural fields as a step toward cognitive neuromorphic architectures.
Sandamirskaya, Yulia
2013-01-01
Dynamic Field Theory (DFT) is an established framework for modeling embodied cognition. In DFT, elementary cognitive functions such as memory formation, formation of grounded representations, attentional processes, decision making, adaptation, and learning emerge from neuronal dynamics. The basic computational element of this framework is a Dynamic Neural Field (DNF). Under constraints on the time-scale of the dynamics, the DNF is computationally equivalent to a soft winner-take-all (WTA) network, which is considered one of the basic computational units in neuronal processing. Recently, it has been shown how a WTA network may be implemented in neuromorphic hardware, such as an analog Very Large Scale Integration (VLSI) device. This paper leverages the relationship between DFT and soft WTA networks to systematically revise and integrate established DFT mechanisms that have previously been spread among different architectures. In addition, I identify some novel computational and architectural mechanisms of DFT which may be implemented in neuromorphic VLSI devices using WTA networks as an intermediate computational layer. These specific mechanisms include the stabilization of working memory, the coupling of sensory systems to motor dynamics, intentionality, and autonomous learning. I further demonstrate how all these elements may be integrated into a unified architecture to generate behavior and autonomous learning. PMID:24478620
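The DNF-WTA equivalence invoked here can be illustrated with a minimal soft winner-take-all rate network. This is a hypothetical sketch: the connectivity (self-excitation plus shared inhibition proportional to total activity) and all gain values are assumptions, not taken from the paper:

```python
# Soft winner-take-all: excitatory units with self-excitation compete through a
# shared inhibitory signal proportional to the summed activity.
# All parameter values are illustrative assumptions.
import numpy as np

def soft_wta(inputs, steps=3000, dt=0.01, w_exc=1.2, w_inh=0.8, beta=4.0):
    x = np.asarray(inputs, dtype=float)
    u = np.zeros_like(x)
    rate = lambda u: 1.0 / (1.0 + np.exp(-beta * (u - 0.5)))  # sigmoidal rate
    for _ in range(steps):
        r = rate(u)
        # Leaky integration: self-excitation minus global inhibition plus input.
        u += dt * (-u + w_exc * r - w_inh * r.sum() + x)
    return rate(u)

rates = soft_wta([0.3, 0.9, 0.5])
# The unit receiving the strongest input becomes highly active; the rest are suppressed.
```

Because the effective weight matrix is symmetric, the dynamics converge to a fixed point rather than oscillating, which is the regime in which the DNF behaves like a soft WTA.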
Dynamic patterns in a two-dimensional neural field with refractoriness
NASA Astrophysics Data System (ADS)
Qi, Yang; Gong, Pulin
2015-08-01
The formation of dynamic patterns such as localized propagating waves is a fascinating self-organizing phenomenon that happens in a wide range of spatially extended systems including neural systems, in which they might play important functional roles. Here we derive a type of two-dimensional neural-field model with refractoriness to study the formation mechanism of localized waves. After comparing this model with existing neural-field models, we show that it is able to generate a variety of localized patterns, including stationary bumps, localized waves rotating along a circular path, and localized waves with longer-range propagation. We construct explicit bump solutions for the two-dimensional neural field and conduct a linear stability analysis on how a stationary bump transitions to a propagating wave under different spatial eigenmode perturbations. The neural-field model is then partially solved in a comoving frame to obtain localized wave solutions, whose spatial profiles are in good agreement with those obtained from simulations. We demonstrate that when there are multiple such propagating waves, they exhibit rich propagation dynamics, including propagation along periodically oscillating and irregular trajectories; these propagation dynamics are quantitatively characterized. In addition, we show that these waves can have repulsive or merging collisions, depending on their collision angles and the refractoriness parameter. Due to its analytical tractability, the two-dimensional neural-field model provides a modeling framework for studying localized propagating waves and their interactions.
Neural field simulator: two-dimensional spatio-temporal dynamics involving finite transmission speed
Nichols, Eric J.; Hutt, Axel
2015-01-01
Neural Field models (NFM) play an important role in the understanding of neural population dynamics on a mesoscopic spatial and temporal scale. Their numerical simulation is an essential element in the analysis of their spatio-temporal dynamics. The simulation tool described in this work considers scalar spatially homogeneous neural fields taking into account a finite axonal transmission speed and synaptic temporal derivatives of first and second order. A text-based interface offers complete control of field parameters and several approaches are used to accelerate simulations. A graphical output utilizes video hardware acceleration to display running output with reduced computational hindrance compared to simulators that are exclusively software-based. Diverse applications of the tool demonstrate breather oscillations, static and dynamic Turing patterns and activity spreading with finite propagation speed. The simulator is open source to allow tailoring of code and this is presented with an extension use case. PMID:26539105
Modeling human target reaching with an adaptive observer implemented with dynamic neural fields.
Fard, Farzaneh S; Hollensen, Paul; Heinke, Dietmar; Trappenberg, Thomas P
2015-12-01
Humans can point fairly accurately to memorized states when closing their eyes despite slow or even missing sensory feedback. It is also common that the arm dynamics changes during development or from injuries. We propose a biologically motivated implementation of an arm controller that includes an adaptive observer. Our implementation is based on the neural field framework, and we show how a path integration mechanism can be trained from few examples. Our results illustrate successful generalization of path integration with a dynamic neural field by which the robotic arm can move in arbitrary directions and velocities. Also, by adapting the strength of the motor effect the observer implicitly learns to compensate an image acquisition delay in the sensory system. Our dynamic implementation of an observer successfully guides the arm toward the target in the dark, and the model produces movements with a bell-shaped velocity profile, consistent with human behavior data. PMID:26559472
Coupling actin dynamics to phase-field in modeling neural growth.
Najem, Sara; Grant, Martin
2015-06-14
In this paper we model the growth of a neural cell together with the actin dynamics taking place at its growing region by constructing a phase-field model. This is done by assigning auxiliary fields to different constituents of the cell in order to differentiate them. Specifically, the inner and outer regions of the neural cell are described by ϕ = 1 and ϕ = 0 respectively, whereas the inside and outside of its leading edge are portrayed by ψ = 1 and ψ = 0. This formulation inherently locates the boundary, which is required to determine the evolution of the underlying actin dynamics. Therefore, it provides an alternative to boundary tracking algorithms. Then the equations governing the molecular workings of the cell specifically those of actin are modified in order to satisfy their corresponding boundary conditions. PMID:25943025
The dynamic brain: from spiking neurons to neural masses and cortical fields.
Deco, Gustavo; Jirsa, Viktor K; Robinson, Peter A; Breakspear, Michael; Friston, Karl
2008-01-01
The cortex is a complex system, characterized by its dynamics and architecture, which underlie many functions such as action, perception, learning, language, and cognition. Its structural architecture has been studied for more than a hundred years; however, its dynamics have been addressed much less thoroughly. In this paper, we review and integrate, in a unifying framework, a variety of computational approaches that have been used to characterize the dynamics of the cortex, as evidenced at different levels of measurement. Computational models at different space-time scales help us understand the fundamental mechanisms that underpin neural processes and relate these processes to neuroscience data. Modeling at the single neuron level is necessary because this is the level at which information is exchanged between the computing elements of the brain; the neurons. Mesoscopic models tell us how neural elements interact to yield emergent behavior at the level of microcolumns and cortical columns. Macroscopic models can inform us about whole brain dynamics and interactions between large-scale neural systems such as cortical regions, the thalamus, and brain stem. Each level of description relates uniquely to neuroscience data, from single-unit recordings, through local field potentials to functional magnetic resonance imaging (fMRI), electroencephalogram (EEG), and magnetoencephalogram (MEG). Models of the cortex can establish which types of large-scale neuronal networks can perform computations and characterize their emergent properties. Mean-field and related formulations of dynamics also play an essential and complementary role as forward models that can be inverted given empirical data. This makes dynamic models critical in integrating theory and experiments. We argue that elaborating principled and informed models is a prerequisite for grounding empirical neuroscience in a cogent theoretical framework, commensurate with the achievements in the physical sciences. PMID
Dynamics of neural cryptography
Ruttor, Andreas; Kinzel, Wolfgang; Kanter, Ido
2007-05-15
Synchronization of neural networks has been used for public channel protocols in cryptography. In the case of tree parity machines the dynamics of both bidirectional synchronization and unidirectional learning is driven by attractive and repulsive stochastic forces. Thus it can be described well by a random walk model for the overlap between participating neural networks. For that purpose transition probabilities and scaling laws for the step sizes are derived analytically. Both these calculations as well as numerical simulations show that bidirectional interaction leads to full synchronization on average. In contrast, successful learning is only possible by means of fluctuations. Consequently, synchronization is much faster than learning, which is essential for the security of the neural key-exchange protocol. However, this qualitative difference between bidirectional and unidirectional interaction vanishes if tree parity machines with more than three hidden units are used, so that those neural networks are not suitable for neural cryptography. In addition, the effective number of keys which can be generated by the neural key-exchange protocol is calculated using the entropy of the weight distribution. As this quantity increases exponentially with the system size, brute-force attacks on neural cryptography can easily be made unfeasible.
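A minimal sketch of the bidirectional synchronization described here, using tree parity machines with K = 3 hidden units. The Hebbian update follows the standard public-channel protocol, but the specific sizes and the random seed are arbitrary choices for illustration:

```python
# Two tree parity machines (TPMs) synchronize by exchanging only public inputs
# and outputs, applying a Hebbian update whenever their outputs agree.
# Sizes (K, N, L) are illustrative; the update rule is the standard protocol.
import numpy as np

def tpm_output(w, x):
    sigma = np.sign(np.sum(w * x, axis=1))
    sigma[sigma == 0] = -1                     # break ties deterministically
    return sigma, int(np.prod(sigma))

def synchronize(K=3, N=32, L=3, max_steps=100_000, seed=0):
    rng = np.random.default_rng(seed)
    wA = rng.integers(-L, L + 1, size=(K, N))  # private weights of party A
    wB = rng.integers(-L, L + 1, size=(K, N))  # private weights of party B
    for step in range(1, max_steps + 1):
        x = rng.choice([-1, 1], size=(K, N))   # public random input
        sA, tA = tpm_output(wA, x)
        sB, tB = tpm_output(wB, x)
        if tA == tB:                           # outputs agree: both parties update
            for w, s in ((wA, sA), (wB, sB)):
                for k in range(K):
                    if s[k] == tA:             # only agreeing hidden units move
                        w[k] = np.clip(w[k] + tA * x[k], -L, L)
        if np.array_equal(wA, wB):
            return step                        # identical weights = shared secret key
    return None

steps = synchronize()
```

Once the weights coincide, they stay synchronized forever, since both machines receive identical inputs and apply identical updates; a passive attacker applying the same rule unidirectionally synchronizes much more slowly, which is the basis of the protocol's security.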
Neural field dynamics under variation of local and global connectivity and finite transmission speed
NASA Astrophysics Data System (ADS)
Qubbaj, Murad R.; Jirsa, Viktor K.
2009-12-01
Spatially continuous networks with heterogeneous connections are ubiquitous in biological systems, in particular neural systems. To understand the mutual effects of locally homogeneous and globally heterogeneous connectivity, we investigate the stability of the steady state activity of a neural field as a function of its connectivity. The variation of the connectivity is implemented through manipulation of a heterogeneous two-point connection embedded into the otherwise homogeneous connectivity matrix and by variation of the connectivity strength and transmission speed. Detailed examples including the Ginzburg-Landau equation and various other local architectures are discussed. Our analysis shows that developmental changes such as the myelination of the cortical large-scale fiber system generally result in the stabilization of steady state activity independent of the local connectivity. Non-oscillatory instabilities are shown to be independent of any influences of time delay.
Rich spectrum of neural field dynamics in the presence of short-term synaptic depression
NASA Astrophysics Data System (ADS)
Wang, He; Lam, Kin; Fung, C. C. Alan; Wong, K. Y. Michael; Wu, Si
2015-09-01
In continuous attractor neural networks (CANNs), spatially continuous information such as orientation, head direction, and spatial location is represented by Gaussian-like tuning curves that can be displaced continuously in the space of the preferred stimuli of the neurons. We investigate how short-term synaptic depression (STD) can reshape the intrinsic dynamics of the CANN model and its responses to a single static input. In particular, CANNs with STD can support various complex firing patterns and chaotic behaviors. These chaotic behaviors have the potential to encode various stimuli in the neuronal system.
Stochastic mean-field formulation of the dynamics of diluted neural networks
NASA Astrophysics Data System (ADS)
Angulo-Garcia, D.; Torcini, A.
2015-02-01
We consider pulse-coupled leaky integrate-and-fire neural networks with randomly distributed synaptic couplings. This random dilution induces fluctuations in the evolution of the macroscopic variables and deterministic chaos at the microscopic level. Our main aim is to mimic the effect of the dilution as a noise source acting on the dynamics of a globally coupled nonchaotic system. Indeed, the evolution of a diluted neural network can be well approximated as that of a fully pulse-coupled network, where each neuron is driven by a mean synaptic current plus additive noise. These terms represent the average and the fluctuations of the synaptic currents acting on the single neurons in the diluted system. The main microscopic and macroscopic dynamical features can be retrieved with this stochastic approximation. Furthermore, the microscopic stability of the diluted network can also be reproduced, as demonstrated by the almost coincidence of the measured Lyapunov exponents in the deterministic and stochastic cases for an ample range of system sizes. Our results strongly suggest that the fluctuations in the synaptic currents are responsible for the emergence of chaos in this class of pulse-coupled networks.
Learning to recognize objects on the fly: a neurally based dynamic field approach.
Faubel, Christian; Schöner, Gregor
2008-05-01
Autonomous robots interacting with human users need to build and continuously update scene representations. This entails the problem of rapidly learning to recognize new objects under user guidance. Based on analogies with human visual working memory, we propose a dynamical field architecture, in which localized peaks of activation represent objects over a small number of simple feature dimensions. Learning consists of laying down memory traces of such peaks. We implement the dynamical field model on a service robot and demonstrate how it learns 30 objects from a very small number of views (about 5 per object are sufficient). We also illustrate how properties of feature binding emerge from this framework. PMID:18501555
Hou, Saing Paul; Haddad, Wassim M; Meskin, Nader; Bailey, James M
2015-12-01
With the advances in biochemistry, molecular biology, and neurochemistry there has been impressive progress in understanding the molecular properties of anesthetic agents. However, there has been little focus on how the molecular properties of anesthetic agents lead to the observed macroscopic property that defines the anesthetic state, that is, lack of responsiveness to noxious stimuli. In this paper, we use dynamical system theory to develop a mechanistic mean field model for neural activity to study the abrupt transition from consciousness to unconsciousness as the concentration of the anesthetic agent increases. The proposed synaptic drive firing-rate model predicts the conscious-unconscious transition as the applied anesthetic concentration increases, where excitatory neural activity is characterized by a Poincaré-Andronov-Hopf bifurcation with the awake state transitioning to a stable limit cycle and then subsequently to an asymptotically stable unconscious equilibrium state. Furthermore, we address the more general question of synchronization and partial state equipartitioning of neural activity without mean field assumptions. This is done by focusing on a postulated subset of inhibitory neurons that are not themselves connected to other inhibitory neurons. Finally, several numerical experiments are presented to illustrate the different aspects of the proposed theory. PMID:26438186
Dynamic interactions in neural networks
Arbib, M. A.; Amari, S.
1989-01-01
The study of neural networks is enjoying a great renaissance, both in computational neuroscience, the development of information processing models of living brains, and in neural computing, the use of neurally inspired concepts in the construction of intelligent machines. This volume presents models and data on the dynamic interactions occurring in the brain, and exhibits the dynamic interactions between research in computational neuroscience and in neural computing. The authors present current research, future trends and open problems.
Mean-field dynamics of a random neural network with noise
NASA Astrophysics Data System (ADS)
Klinshov, Vladimir; Franović, Igor
2015-12-01
We consider a network of randomly coupled rate-based neurons influenced by external and internal noise. We derive a second-order stochastic mean-field model for the network dynamics and use it to analyze the stability and bifurcations in the thermodynamic limit, as well as to study the fluctuations due to the finite-size effect. It is demonstrated that the two types of noise have substantially different impact on the network dynamics. While both sources of noise give rise to stochastic fluctuations in the case of the finite-size network, only the external noise affects the stationary activity levels of the network in the thermodynamic limit. We compare the theoretical predictions with the direct simulation results and show that they agree for large enough network sizes and for parameter domains sufficiently away from bifurcations.
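The distinct roles of the two noise sources can be illustrated with a hypothetical finite-size experiment (Euler-Maruyama integration; the network model and all parameters are assumptions for illustration): with only internal, per-neuron noise, the fluctuations of the population-mean activity shrink roughly as 1/√N as the network grows.

```python
# Randomly coupled rate network driven by shared (external) and per-neuron
# (internal) noise; returns the standard deviation of the population-mean
# activity over the second half of the run. All parameter values are
# illustrative assumptions, not from the paper.
import numpy as np

def mean_fluctuation(N, T=200.0, dt=0.02, g=0.4, D_ext=0.0, D_int=0.05, seed=1):
    rng = np.random.default_rng(seed)
    J = g * rng.standard_normal((N, N)) / np.sqrt(N)   # random coupling matrix
    u = 0.1 * rng.standard_normal(N)
    means = []
    sdt = np.sqrt(dt)                                  # Euler-Maruyama scaling
    for step in range(int(T / dt)):
        shared = np.sqrt(2 * D_ext) * rng.standard_normal() * sdt   # external noise
        private = np.sqrt(2 * D_int) * rng.standard_normal(N) * sdt # internal noise
        u += dt * (-u + J @ np.tanh(u)) + shared + private
        if step * dt > T / 2:
            means.append(u.mean())
    return float(np.std(means))

# Internal noise averages out: fluctuations of the mean fall with network size.
small, large = mean_fluctuation(25), mean_fluctuation(400)
```

Turning `D_ext` on adds a shared component that does not average out with N, which is the qualitative distinction the abstract draws between the two noise types.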
Creative-Dynamics Approach To Neural Intelligence
NASA Technical Reports Server (NTRS)
Zak, Michail A.
1992-01-01
Paper discusses approach to mathematical modeling of artificial neural networks exhibiting complicated behaviors reminiscent of creativity and intelligence of biological neural networks. Neural network treated as non-Lipschitzian dynamical system - as described in "Non-Lipschitzian Dynamics For Modeling Neural Networks" (NPO-17814). System serves as tool for modeling of temporal-pattern memories and recognition of complicated spatial patterns.
Francis, Joseph T; Chapin, John K
2006-06-01
In everyday life, we reach, grasp, and manipulate a variety of different objects all with their own dynamic properties. This degree of adaptability is essential for a brain-controlled prosthetic arm to work in the real world. In this study, rats were trained to make reaching movements while holding a torque manipulandum working against two distinct loads. Neural recordings obtained from arrays of 32 microelectrodes spanning the motor cortex were used to predict several movement related variables. In this paper, we demonstrate that a simple linear regression model can translate neural activity into endpoint position of a robotic manipulandum even while the animal controlling it works against different loads. A second regression model can predict, with 100% accuracy, which of the two loads is being manipulated by the animal. Finally, a third model predicts the work needed to move the manipulandum endpoint. This prediction is significantly better than that for position. In each case, the regression model uses a single set of weights. Thus, the neural ensemble is capable of providing the information necessary to compensate for at least two distinct load conditions. PMID:16792286
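The decoding step can be sketched with synthetic data; the linear tuning model below is a hypothetical stand-in for the motor-cortex recordings, not the paper's data. Ordinary least squares maps a population rate vector to endpoint position with a single set of weights:

```python
# Linear decoding of a 2-D endpoint position from synthetic population activity.
# The tuning model and noise level are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
T, n_units = 500, 32
pos = 0.1 * np.cumsum(rng.standard_normal((T, 2)), axis=0)  # endpoint trajectory
tuning = rng.standard_normal((n_units, 2))                  # linear tuning per unit
rates = pos @ tuning.T + 0.3 * rng.standard_normal((T, n_units))

X = np.hstack([rates, np.ones((T, 1))])                     # rates plus bias column
train, test = slice(0, 400), slice(400, T)
W, *_ = np.linalg.lstsq(X[train], pos[train], rcond=None)   # one set of weights
pred = X[test] @ W                                          # held-out prediction
ss_res = ((pred - pos[test]) ** 2).sum()
ss_tot = ((pos[test] - pos[test].mean(axis=0)) ** 2).sum()
r2 = 1.0 - ss_res / ss_tot                                  # held-out R^2
```

The load classifier and work predictor in the study are, analogously, additional regression models fitted on the same rate vectors.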
Emergent complex neural dynamics
NASA Astrophysics Data System (ADS)
Chialvo, Dante R.
2010-10-01
A large repertoire of spatiotemporal activity patterns in the brain is the basis for adaptive behaviour. Understanding the mechanism by which the brain's hundred billion neurons and hundred trillion synapses manage to produce such a range of cortical configurations in a flexible manner remains a fundamental problem in neuroscience. One plausible solution is the involvement of universal mechanisms of emergent complex phenomena evident in dynamical systems poised near a critical point of a second-order phase transition. We review recent theoretical and empirical results supporting the notion that the brain is naturally poised near criticality, as well as its implications for better understanding of the brain.
Dynamical systems, attractors, and neural circuits
Miller, Paul
2016-01-01
Biology is the study of dynamical systems. Yet most of us working in biology have limited pedagogical training in the theory of dynamical systems, an unfortunate historical fact that can be remedied for future generations of life scientists. In my particular field of systems neuroscience, neural circuits are rife with nonlinearities at all levels of description, rendering simple methodologies and our own intuition unreliable. Therefore, our ideas are likely to be wrong unless informed by good models. These models should be based on the mathematical theories of dynamical systems since functioning neurons are dynamic—they change their membrane potential and firing rates with time. Thus, selecting the appropriate type of dynamical system upon which to base a model is an important first step in the modeling process. This step all too easily goes awry, in part because there are many frameworks to choose from, in part because the sparsely sampled data can be consistent with a variety of dynamical processes, and in part because each modeler has a preferred modeling approach that is difficult to move away from. This brief review summarizes some of the main dynamical paradigms that can arise in neural circuits, with comments on what they can achieve computationally and what signatures might reveal their presence within empirical data. I provide examples of different dynamical systems using simple circuits of two or three cells, emphasizing that any one connectivity pattern is compatible with multiple, diverse functions. PMID:27408709
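A two-cell example in the spirit of this review (gain, threshold, and weights are assumed values): mutual inhibition yields two point attractors, and a transient biased input selects between them, with the selected state persisting after the input is removed, i.e., a minimal attractor memory.

```python
# Two mutually inhibitory rate units form a bistable circuit: a brief biased
# input selects an attractor that persists after the input is switched off.
# All parameter values are illustrative assumptions.
import numpy as np

def settle(bias, steps=5000, dt=0.01, w_inh=2.0, drive=1.0):
    f = lambda u: 1.0 / (1.0 + np.exp(-8.0 * (u - 0.35)))  # sigmoidal rate
    u = np.zeros(2)
    b = np.asarray(bias, dtype=float)
    for step in range(steps):
        inp = b if step * dt < 5.0 else 0.0                # input only for t < 5
        # Each unit: leak + tonic drive - inhibition from the other unit + input.
        u += dt * (-u + drive - w_inh * f(u[::-1]) + inp)
    return u

left = settle([0.5, 0.0])    # ends in the "unit 0 wins" attractor
right = settle([0.0, 0.5])   # same circuit, opposite attractor
```

The same connectivity pattern, with different gains or added adaptation, supports oscillation or rivalry instead of bistability, illustrating the review's point that one circuit motif is compatible with multiple dynamical functions.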
Model Of Neural Network With Creative Dynamics
NASA Technical Reports Server (NTRS)
Zak, Michail; Barhen, Jacob
1993-01-01
Paper presents analysis of mathematical model of one-neuron/one-synapse neural network featuring coupled activation and learning dynamics and parametrical periodic excitation. Demonstrates self-programming, partly random behavior of suitably designed neural network; this behavior is believed to be related to spontaneity and creativity of biological neural networks.
Foetal ECG recovery using dynamic neural networks.
Camps-Valls, Gustavo; Martínez-Sober, Marcelino; Soria-Olivas, Emilio; Magdalena-Benedito, Rafael; Calpe-Maravilla, Javier; Guerrero-Martínez, Juan
2004-07-01
Non-invasive electrocardiography has proven to be a very interesting method for obtaining information about the state of the foetus and thus for assuring its well-being during pregnancy. One of the main applications in this field is foetal electrocardiogram (ECG) recovery by means of automatic methods. Evident problems found in the literature are the limited number of available registers, the lack of performance indicators, and the limited use of non-linear adaptive methods. In order to circumvent these problems, we first introduce the generation of synthetic registers and discuss the influence of different kinds of noise on the modelling. Second, a method based on numerical (correlation coefficient) and statistical (analysis of variance, ANOVA) measures allows us to select the best recovery model. Finally, finite impulse response (FIR) and gamma neural networks are included in the adaptive noise cancellation (ANC) scheme in order to provide highly non-linear, dynamic capabilities to the recovery model. Neural networks are benchmarked against classical adaptive methods such as the least mean squares (LMS) and the normalized LMS (NLMS) algorithms on simulated and real registers, and some conclusions are drawn. For synthetic registers, the most determinant factor in the identification of the models is the foetal-maternal signal-to-noise ratio (SNR). In addition, as the electromyogram contribution becomes more relevant, neural networks clearly outperform the LMS-based algorithms. From the ANOVA test, we found statistical differences between LMS-based models and neural models when complex situations (high foetal-maternal and foetal-noise SNRs) were present. These conclusions were confirmed by robustness tests on synthetic registers, visual inspection of the recovered signals, and calculation of the recognition rates of foetal R-peaks for real situations. Finally, the best compromise between model complexity and outcomes was provided by the FIR neural network.
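The NLMS baseline that this abstract benchmarks against can be sketched in a few lines. Everything below (tap count, step size, single-sinusoid synthetic "registers") is an illustrative assumption, not the paper's actual generator or settings:

```python
import math, random

def nlms_cancel(primary, reference, n_taps=8, mu=0.5, eps=1e-8):
    """Adaptive noise cancellation with the normalized LMS (NLMS) filter.

    primary   : foetal signal + maternal interference (d[k])
    reference : maternal reference channel (x[k])
    Returns the error signal e[k], which estimates the foetal component.
    """
    w = [0.0] * n_taps
    buf = [0.0] * n_taps
    out = []
    for d, x in zip(primary, reference):
        buf = [x] + buf[:-1]                         # shift reference into the tap line
        y = sum(wi * xi for wi, xi in zip(w, buf))   # filter output: interference estimate
        e = d - y                                    # error = cleaned signal
        norm = eps + sum(xi * xi for xi in buf)
        w = [wi + (mu / norm) * e * xi for wi, xi in zip(w, buf)]  # NLMS weight update
        out.append(e)
    return out

# Toy stand-in for a synthetic register: a weak "foetal" sinusoid buried in
# stronger "maternal" interference correlated with the reference channel.
random.seed(0)
n = 4000
maternal = [math.sin(2 * math.pi * 0.01 * k) for k in range(n)]
foetal = [0.2 * math.sin(2 * math.pi * 0.037 * k) for k in range(n)]
primary = [f + 0.9 * m for f, m in zip(foetal, maternal)]
cleaned = nlms_cancel(primary, maternal)

# After convergence the residual should be close to the foetal component.
tail_err = sum((c - f) ** 2 for c, f in zip(cleaned[-500:], foetal[-500:])) / 500
```

Because the adaptive filter can only synthesize signals correlated with the reference, the foetal component at a different frequency survives in the error signal.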
Propagating waves can explain irregular neural dynamics.
Keane, Adam; Gong, Pulin
2015-01-28
Cortical neurons in vivo fire quite irregularly. Previous studies about the origin of such irregular neural dynamics have given rise to two major models: a balanced excitation and inhibition model, and a model of highly synchronized synaptic inputs. To elucidate the network mechanisms underlying synchronized synaptic inputs and account for irregular neural dynamics, we investigate a spatially extended, conductance-based spiking neural network model. We show that propagating wave patterns with complex dynamics emerge from the network model. These waves sweep past neurons, to which they provide highly synchronized synaptic inputs. On the other hand, these patterns only emerge from the network with balanced excitation and inhibition; our model therefore reconciles the two major models of irregular neural dynamics. We further demonstrate that the collective dynamics of propagating wave patterns provides a mechanistic explanation for a range of irregular neural dynamics, including the variability of spike timing, slow firing rate fluctuations, and correlated membrane potential fluctuations. In addition, in our model, the distributions of synaptic conductance and membrane potential are non-Gaussian, consistent with recent experimental data obtained using whole-cell recordings. Our work therefore relates the propagating waves that have been widely observed in the brain to irregular neural dynamics. These results demonstrate that neural firing activity, although appearing highly disordered at the single-neuron level, can form dynamical coherent structures, such as propagating waves at the population level. PMID:25632135
Dynamic alignment models for neural coding.
Kollmorgen, Sepp; Hahnloser, Richard H R
2014-03-01
Recently, there have been remarkable advances in modeling the relationships between the sensory environment, neuronal responses, and behavior. However, most models cannot encompass variable stimulus-response relationships such as varying response latencies and state or context dependence of the neural code. Here, we consider response modeling as a dynamic alignment problem and model stimulus and response jointly by a mixed pair hidden Markov model (MPH). In MPHs, multiple stimulus-response relationships (e.g., receptive fields) are represented by different states or groups of states in a Markov chain. Each stimulus-response relationship features temporal flexibility, allowing modeling of variable response latencies, including noisy ones. We derive algorithms for learning of MPH parameters and for inference of spike response probabilities. We show that some linear-nonlinear Poisson cascade (LNP) models are a special case of MPHs. We demonstrate the efficiency and usefulness of MPHs in simulations of both jittered and switching spike responses to white noise and natural stimuli. Furthermore, we apply MPHs to extracellular single and multi-unit data recorded in cortical brain areas of singing birds to showcase a novel method for estimating response lag distributions. MPHs allow simultaneous estimation of receptive fields, latency statistics, and hidden state dynamics and so can help to uncover complex stimulus response relationships that are subject to variable timing and involve diverse neural codes. PMID:24625448
The Complexity of Dynamics in Small Neural Circuits.
Fasoli, Diego; Cattani, Anna; Panzeri, Stefano
2016-08-01
Mean-field approximations are a powerful tool for studying large neural networks. However, they do not describe well the behavior of networks composed of a small number of neurons. In this case, major differences between the mean-field approximation and the real behavior of the network can arise. Yet, many interesting problems in neuroscience involve the study of mesoscopic networks composed of a few tens of neurons. Nonetheless, mathematical methods that correctly describe networks of small size are still rare, and this prevents us from making progress in understanding neural dynamics at these intermediate scales. Here we develop a novel systematic analysis of the dynamics of arbitrarily small networks composed of homogeneous populations of excitatory and inhibitory firing-rate neurons. We study the local bifurcations of their neural activity with an approach that is largely analytically tractable, and we numerically determine the global bifurcations. We find that for strong inhibition these networks give rise to very complex dynamics, caused by the formation of multiple branching solutions of the neural dynamics equations that emerge through spontaneous symmetry-breaking. This qualitative change of the neural dynamics is a finite-size effect of the network, which reveals qualitative and previously unexplored differences between mesoscopic cortical circuits and their mean-field approximation. The most important consequence of spontaneous symmetry-breaking is the ability of mesoscopic networks to regulate their degree of functional heterogeneity, which is thought to help reduce the detrimental effect of noise correlations on cortical information processing. PMID:27494737
Field-theoretic approach to fluctuation effects in neural networks
Buice, Michael A.; Cowan, Jack D.
2007-05-15
A well-defined stochastic theory for neural activity, which permits the calculation of arbitrary statistical moments and equations governing them, is a potentially valuable tool for theoretical neuroscience. We produce such a theory by analyzing the dynamics of neural activity using field theoretic methods for nonequilibrium statistical processes. Assuming that neural network activity is Markovian, we construct the effective spike model, which describes both neural fluctuations and response. This analysis leads to a systematic expansion of corrections to mean field theory, which for the effective spike model is a simple version of the Wilson-Cowan equation. We argue that neural activity governed by this model exhibits a dynamical phase transition which is in the universality class of directed percolation. More general models (which may incorporate refractoriness) can exhibit other universality classes, such as dynamic isotropic percolation. Because of the extremely high connectivity in typical networks, it is expected that higher-order terms in the systematic expansion are small for experimentally accessible measurements, and thus, consistent with measurements in neocortical slice preparations, we expect mean field exponents for the transition. We provide a quantitative criterion for the relative magnitude of each term in the systematic expansion, analogous to the Ginsburg criterion. Experimental identification of dynamic universality classes in vivo is an outstanding and important question for neuroscience.
Dynamic analysis of neural encoding by point process adaptive filtering.
Eden, Uri T; Frank, Loren M; Barbieri, Riccardo; Solo, Victor; Brown, Emery N
2004-05-01
Neural receptive fields are dynamic in that with experience, neurons change their spiking responses to relevant stimuli. To understand how neural systems adapt their representations of biological information, analyses of receptive field plasticity from experimental measurements are crucial. Adaptive signal processing, the well-established engineering discipline for characterizing the temporal evolution of system parameters, suggests a framework for studying the plasticity of receptive fields. We use the Bayes' rule Chapman-Kolmogorov paradigm with a linear state equation and point process observation models to derive adaptive filters appropriate for estimation from neural spike trains. We derive point process filter analogues of the Kalman filter, recursive least squares, and steepest-descent algorithms and describe the properties of these new filters. We illustrate our algorithms in two simulated data examples. The first is a study of slow and rapid evolution of spatial receptive fields in hippocampal neurons. The second is an adaptive decoding study in which a signal is decoded from ensemble neural spiking activity as the receptive fields of the neurons in the ensemble evolve. Our results provide a paradigm for adaptive estimation for point process observations and suggest a practical approach for constructing filtering algorithms to track neural receptive field dynamics on a millisecond timescale. PMID:15070506
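The flavour of such a point-process adaptive filter can be sketched as follows: a random-walk state drives a conditional intensity λ = exp(θ), and each time bin contributes a Kalman-like prediction and update step. The log-link, the step-change rate profile, and the noise variance below are illustrative assumptions, not the paper's simulated examples:

```python
import math, random

def point_process_filter(spikes, dt, sigma2):
    """Point-process analogue of the Kalman filter for a random-walk state
    theta_k with conditional intensity lambda = exp(theta).

    spikes : list of 0/1 spike indicators per time bin
    dt     : bin width in seconds
    sigma2 : state-noise variance added per bin
    Returns the posterior mode path of theta.
    """
    theta, W = 0.0, 1.0                      # initial mean and variance
    path = []
    for dN in spikes:
        Wp = W + sigma2                      # one-step prediction variance
        lam = math.exp(theta) * dt           # expected spike count in this bin
        W = 1.0 / (1.0 / Wp + lam)           # posterior variance update
        theta = theta + W * (dN - lam)       # posterior mode update (innovation)
        path.append(theta)
    return path

# Synthetic example: the true firing rate steps from 10 Hz to 50 Hz halfway.
random.seed(1)
dt = 0.001
rates = [10.0] * 10000 + [50.0] * 10000
spikes = [1 if random.random() < r * dt else 0 for r in rates]
est = point_process_filter(spikes, dt, sigma2=1e-3)
est_rates = [math.exp(t) for t in est]
```

The estimate tracks the rate change on a sub-second timescale from spike observations alone, which is the essential behaviour of the filters derived in the paper.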
Comparing artificial and biological dynamical neural networks
NASA Astrophysics Data System (ADS)
McAulay, Alastair D.
2006-05-01
Modern computers could be made friendlier and otherwise improved by making them behave more like humans. Perhaps we can learn how to do this from biology, in which human brains evolved over a long period of time. We therefore first explain a commonly used biological neural network (BNN) model, the Wilson-Cowan neural oscillator, which has cross-coupled excitatory (positive) and inhibitory (negative) neurons. The two types of neurons are used for frequency-modulation communication between neurons, which provides immunity to electromagnetic interference. We then evolve, for the first time, an artificial neural network (ANN) to perform the same task. Two dynamical feed-forward artificial neural networks use cross-coupling feedback (like that in a flip-flop) to form an ANN nonlinear dynamic neural oscillator with the same equations as the Wilson-Cowan neural oscillator. Finally we show, through simulation, that the equations perform the basic neural threshold function, switching between a stable zero output and a stable oscillation, i.e., a stable limit cycle. Optical implementation with an injected laser diode and future research are discussed.
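A minimal Euler-integration sketch of the Wilson-Cowan oscillator mentioned above illustrates this threshold behaviour between rest and a stable limit cycle. The parameter values are the textbook Wilson-Cowan (1972) limit-cycle set, which is an assumption here, not necessarily the set used by the author:

```python
import math

def wilson_cowan(P=1.25, Q=0.0, T=100.0, dt=0.01):
    """Euler integration of the classic Wilson-Cowan (1972) E-I oscillator.

    dE/dt = -E + (1 - E) * Se(c1*E - c2*I + P)
    dI/dt = -I + (1 - I) * Si(c3*E - c4*I + Q)
    """
    def S(x, a, th):
        # Logistic response function, shifted so that S(0) = 0.
        return 1.0 / (1.0 + math.exp(-a * (x - th))) - 1.0 / (1.0 + math.exp(a * th))

    E, I = 0.1, 0.05
    traj = []
    for k in range(int(T / dt)):
        Se = S(16.0 * E - 12.0 * I + P, 1.3, 4.0)
        Si = S(15.0 * E - 3.0 * I + Q, 2.0, 3.7)
        dE = -E + (1.0 - E) * Se
        dI = -I + (1.0 - I) * Si
        E, I = E + dt * dE, I + dt * dI
        traj.append((k * dt, E, I))
    return traj

traj = wilson_cowan()
late_E = [E for (t, E, I) in traj if t > 60.0]
swing = max(late_E) - min(late_E)  # peak-to-peak amplitude after transients
```

With external drive P = 1.25 the excitatory activity settles onto a sustained oscillation rather than a fixed point; lowering P toward zero leaves the network quiescent.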
Neural network with formed dynamics of activity
Dunin-Barkovskii, V.L.; Osovets, N.B.
1995-03-01
The problem of developing a neural network with a given pattern of the state sequence is considered. A neural network structure and an algorithm for forming its bond matrix, which lead to an approximate but robust solution of the problem, are proposed and discussed. The limiting characteristics of the serviceability of the proposed structure are studied. Various methods of visualizing dynamic processes in a neural network are compared. Possible applications of the results for the interpretation of neurophysiological data and in neuroinformatics systems are discussed.
Dynamics and kinematics of simple neural systems
Rabinovich, M.; Selverston, A.; Rubchinsky, L.; Huerta, R.
1996-09-01
The dynamics of simple neural systems is of interest to both biologists and physicists. One of the possible roles of such systems is the production of rhythmic patterns, and their alterations (modification of behavior, processing of sensory information, adaptation, control). In this paper, the neural systems are considered as a subject of modeling by the dynamical systems approach. In particular, we analyze how a stable, ordinary behavior of a small neural system can be described by simple finite automata models, and how more complicated dynamical systems modeling can be used. The approach is illustrated by biological and numerical examples: experiments with and numerical simulations of the stomatogastric central pattern generators network of the California spiny lobster. © 1996 American Institute of Physics.
On lateral competition in dynamic neural networks
Bellyustin, N.S.
1995-02-01
Artificial neural networks with homogeneous connectivity, which use retinal image processing methods, are considered. We point out that there are probably two different types of lateral inhibition of each neural element by its neighbors: one due to negative connection coefficients between elements, and one due to the decreasing response of a neuron to an excessively high input signal. The first case is characterized by stable dynamics, governed by a Lyapunov function, while in the second case stability is absent and two-dimensional dynamic chaos occurs if the time step in the integration of the model equations is large enough. The continuous neural medium approximation is used for analytical estimation in both cases. The result is a partition of the parameter space into domains with qualitatively different dynamic modes. Computer simulations confirm the estimates and show that joining two-dimensional chaos with the symmetries provided by the initial and boundary conditions may produce patterns which are genuine pieces of art.
Foley, Nicholas C.; Grossberg, Stephen; Mingolla, Ennio
2015-01-01
How are spatial and object attention coordinated to achieve rapid object learning and recognition during eye movement search? How do prefrontal priming and parietal spatial mechanisms interact to determine the reaction time costs of intra-object attention shifts, inter-object attention shifts, and shifts between visible objects and covertly cued locations? What factors underlie individual differences in the timing and frequency of such attentional shifts? How do transient and sustained spatial attentional mechanisms work and interact? How can volition, mediated via the basal ganglia, influence the span of spatial attention? A neural model is developed of how spatial attention in the where cortical stream coordinates view-invariant object category learning in the what cortical stream under free viewing conditions. The model simulates psychological data about the dynamics of covert attention priming and switching requiring multifocal attention without eye movements. The model predicts how “attentional shrouds” are formed when surface representations in cortical area V4 resonate with spatial attention in posterior parietal cortex (PPC) and prefrontal cortex (PFC), while shrouds compete among themselves for dominance. Winning shrouds support invariant object category learning, and active surface-shroud resonances support conscious surface perception and recognition. Attentive competition between multiple objects and cues simulates reaction-time data from the two-object cueing paradigm. The relative strength of sustained surface-driven and fast-transient motion-driven spatial attention controls individual differences in reaction time for invalid cues. Competition between surface-driven attentional shrouds controls individual differences in detection rate of peripheral targets in useful-field-of-view tasks. The model proposes how the strength of competition can be mediated, through learning or momentary changes in volition, by the basal ganglia.
Electrokinetic confinement of axonal growth for dynamically configurable neural networks.
Honegger, Thibault; Scott, Mark A; Yanik, Mehmet F; Voldman, Joel
2013-02-21
Axons in the developing nervous system are directed via guidance cues, whose expression varies both spatially and temporally, to create functional neural circuits. Existing methods to create patterns of neural connectivity in vitro use only static geometries, and are unable to dynamically alter the guidance cues imparted on the cells. We introduce the use of AC electrokinetics to dynamically control axonal growth in cultured rat hippocampal neurons. We find that the application of modest voltages at frequencies on the order of 10^5 Hz can cause developing axons to be stopped adjacent to the electrodes while axons away from the electric fields exhibit uninhibited growth. By switching electrodes on or off, we can reversibly inhibit or permit axon passage across the electrodes. Our models suggest that dielectrophoresis is the causative AC electrokinetic effect. We make use of our dynamic control over axon elongation to create an axon-diode via an axon-lock system that consists of a pair of electrode 'gates' that either permit or prevent axons from passing through. Finally, we developed a neural circuit consisting of three populations of neurons, separated by three axon-locks to demonstrate the assembly of a functional, engineered neural network. Action potential recordings demonstrate that the AC electrokinetic effect does not harm axons, and Ca2+ imaging demonstrated the unidirectional nature of the synaptic connections. AC electrokinetic confinement of axonal growth has potential for creating configurable, directional neural networks. PMID:23314575
Synthesis of recurrent neural networks for dynamical system simulation.
Trischler, Adam P; D'Eleuterio, Gabriele M T
2016-08-01
We review several of the most widely used techniques for training recurrent neural networks to approximate dynamical systems, then describe a novel algorithm for this task. The algorithm is based on an earlier theoretical result that guarantees the quality of the network approximation. We show that a feedforward neural network can be trained on the vector-field representation of a given dynamical system using backpropagation, then recast it as a recurrent network that replicates the original system's dynamics. After detailing this algorithm and its relation to earlier approaches, we present numerical examples that demonstrate its capabilities. One of the distinguishing features of our approach is that both the original dynamical systems and the recurrent networks that simulate them operate in continuous time. PMID:27182811
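The two-stage idea, first fit a network to a system's vector field and then run it as a continuous-time recurrent system, can be illustrated with a toy stand-in. Instead of a backpropagation-trained feedforward net, a small RBF network fitted by ridge regression approximates the vector field of a planar limit-cycle system, and the fitted field is then integrated as a recurrent dynamical system. All parameters (grid, width, regularizer, target system) are illustrative assumptions, not the paper's algorithm:

```python
import math

def rbf_features(x, y, centers, width=0.7):
    # Gaussian radial basis features plus a bias term.
    feats = [math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * width ** 2))
             for (cx, cy) in centers]
    return feats + [1.0]

def solve(A, b):
    # Gaussian elimination with partial pivoting for the normal equations.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def field(x, y):
    # Target dynamical system: a stable unit-radius limit cycle.
    r2 = x * x + y * y
    return (x * (1 - r2) - y, y * (1 - r2) + x)

# Sample the vector field on a grid and fit the RBF network by ridge regression.
centers = [(-1.5 + 0.6 * i, -1.5 + 0.6 * j) for i in range(6) for j in range(6)]
pts = [(-1.5 + 0.25 * i, -1.5 + 0.25 * j) for i in range(13) for j in range(13)]
Phi = [rbf_features(x, y, centers) for (x, y) in pts]
n = len(Phi[0])
A = [[sum(p[i] * p[j] for p in Phi) + (1e-6 if i == j else 0.0)
      for j in range(n)] for i in range(n)]
ws = []
for dim in range(2):
    targets = [field(x, y)[dim] for (x, y) in pts]
    b = [sum(Phi[k][i] * targets[k] for k in range(len(pts))) for i in range(n)]
    ws.append(solve(A, b))

# Run the fitted network as a continuous-time recurrent system (Euler).
x, y = 1.0, 0.0
dt = 0.01
for _ in range(2000):
    f = rbf_features(x, y, centers)
    dx = sum(w * fi for w, fi in zip(ws[0], f))
    dy = sum(w * fi for w, fi in zip(ws[1], f))
    x, y = x + dt * dx, y + dt * dy
radius = math.hypot(x, y)  # should stay near the learned limit cycle
```

If the fitted field is accurate, the recurrent simulation reproduces the original attractor: the trajectory remains near the unit circle.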
Neural networks, field theory, directed percolation, and critical branching
NASA Astrophysics Data System (ADS)
Buice, Michael A.
We describe the dynamics of neural activity using field-theoretic methods for non-equilibrium statistical processes. Using a Markov assumption, we introduce the "spike model". The spike model permits a characterization of both neural fluctuations and response, presenting a tractable way to extend the mean field (Wilson-Cowan) equations used in much of theoretical and computational neuroscience. We also demonstrate the formalism's application to the Cowan models, one of which is equivalent to the forest fire model with immune trees. We argue that neural activity under mild conditions exhibits a dynamical phase transition which is in the universality class of directed percolation (DP). Owing to the spatial extent of neural interactions, there is a region in which the critical behavior is that of a branching process before crossing over into the DP region, consistent with measurements in cortical slice preparations. From the perspective of theoretical neuroscience, a principal contribution of this work is the connection of the problem of non-linear, non-Gaussian systems with the problem of dealing with infrared singularities in field theory. This work suggests a general characterization of epilepsy as a manifestation of a directed percolation phase transition.
Axonal Velocity Distributions in Neural Field Equations
Bojak, Ingo; Liley, David T. J.
2010-01-01
By modelling the average activity of large neuronal populations, continuum mean field models (MFMs) have become an increasingly important theoretical tool for understanding the emergent activity of cortical tissue. In order to be computationally tractable, long-range propagation of activity in MFMs is often approximated with partial differential equations (PDEs). However, PDE approximations in current use correspond to underlying axonal velocity distributions incompatible with experimental measurements. In order to rectify this deficiency, we here introduce novel propagation PDEs that give rise to smooth unimodal distributions of axonal conduction velocities. We also argue that velocities estimated from fibre diameters in slice and from latency measurements, respectively, relate quite differently to such distributions, a significant point for any phenomenological description. Our PDEs are then successfully fit to fibre diameter data from human corpus callosum and rat subcortical white matter. This allows, for the first time, the simulation of long-range conduction in the mammalian brain with realistic, convenient PDEs. Furthermore, the obtained results suggest that the propagation of activity in rat and human differs significantly beyond mere scaling. The dynamical consequences of our new formulation are investigated in the context of a well known neural field model. On the basis of Turing instability analyses, we conclude that pattern formation is more easily initiated using our more realistic propagator. By increasing characteristic conduction velocities, a smooth transition can occur from self-sustaining bulk oscillations to travelling waves of various wavelengths, which may influence axonal growth during development. Our analytic results are also corroborated numerically using simulations on a large spatial grid. Thus we provide here a comprehensive analysis of empirically constrained activity propagation in the context of MFMs, which will allow more realistic studies.
Nonlinear dynamics of neural delayed feedback
Longtin, A.
1990-01-01
Neural delayed feedback is a property shared by many circuits in the central and peripheral nervous systems. The evolution of the neural activity in these circuits depends on their present state as well as on their past states, due to the finite propagation time of neural activity along the feedback loop. These systems are often seen to undergo a change from a quiescent state characterized by low-level fluctuations to an oscillatory state. We discuss the problem of analyzing this transition using techniques from nonlinear dynamics and stochastic processes. Our main goal is to characterize the nonlinearities which enable autonomous oscillations to occur and to uncover the properties of the noise sources these circuits interact with. The concepts are illustrated on the human pupil light reflex (PLR), which has been studied both theoretically and experimentally using this approach.
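The quiescent-to-oscillatory transition in a delayed negative-feedback loop can be sketched with a Mackey-Glass-type equation, offered here as an illustrative stand-in for the pupil light reflex model rather than the paper's actual equations. With gain c = 2 the fixed point is x* = 1 and the feedback slope there is -n/2, so shallow feedback (n = 1) settles to the steady state while steep feedback (n = 10) crosses the Hopf threshold and oscillates:

```python
import math

def delayed_feedback(n, tau=2.0, c=2.0, T=100.0, dt=0.01):
    """Euler simulation of dx/dt = -x(t) + c / (1 + x(t - tau)^n),
    a delayed negative-feedback loop with steepness parameter n."""
    lag = int(tau / dt)
    hist = [0.8] * lag          # constant initial history on [-tau, 0]
    x = 0.8
    out = []
    for k in range(int(T / dt)):
        x_del = hist[k % lag]   # circular buffer holds x(t - tau)
        dx = -x + c / (1.0 + x_del ** n)
        x = x + dt * dx
        hist[k % lag] = x       # overwrite after the delayed value is read
        out.append((k * dt, x))
    return out

def late_swing(n):
    # Peak-to-peak amplitude after transients have decayed.
    xs = [x for (t, x) in delayed_feedback(n) if t > 60.0]
    return max(xs) - min(xs)
```

Calling `late_swing(1)` returns an essentially zero amplitude (quiescent state), while `late_swing(10)` returns an order-one amplitude (sustained oscillation), mirroring the transition discussed in the abstract.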
Probing Dynamical Character of Neural Circuits by Using Fuzzy Logic
NASA Astrophysics Data System (ADS)
Hu, Hong; Shi, Zhongzhi
2008-11-01
Analytical study or design of large-scale nonlinear neural circuits, especially chaotic neural circuits, is a difficult task. Here we analyze the function of neural systems by probing the fuzzy logical framework of the neural cells' dynamical equations. In this paper, the fuzzy logical framework of neural cells is used to understand the nonlinear dynamic attributes of a common neural system. We prove that if a neural system works in a non-chaotic way, a suitable fuzzy logical framework can be found, and such a neural system can then be analyzed or designed much as one analyzes or designs a digital computer; if a neural system works in a chaotic way, however, an approximation is needed to understand its function.
Neural dynamics in superconducting networks
NASA Astrophysics Data System (ADS)
Segall, Kenneth; Schult, Dan; Crotty, Patrick; Miller, Max
2012-02-01
We discuss the use of Josephson junction networks as analog models for simulating neuron behaviors. A single unit called a "Josephson junction neuron," composed of two Josephson junctions [1], displays behavior that shows characteristics of single neurons such as action potentials, thresholds and refractory periods. Synapses can be modeled as passive filters and can be used to connect neurons together. The sign of the bias current to the Josephson neuron can be used to determine whether the neuron is excitatory or inhibitory. Due to the intrinsic speed of Josephson junctions and their scaling properties as analog models, a large network of Josephson neurons measured over typical lab times contains dynamics which would essentially be impossible to calculate on a computer. We discuss the operating principle of the Josephson neuron, the coupling of Josephson neurons together to make large networks, and the Kuramoto-like synchronization of a system of disordered junctions. [1] "Josephson junction simulation of neurons," P. Crotty, D. Schult and K. Segall, Physical Review E 82, 011914 (2010).
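The Kuramoto-like synchronization mentioned at the end can be illustrated with the standard Kuramoto mean-field model; the oscillator count, frequency spread, and coupling values below are illustrative, not parameters of the Josephson network:

```python
import math, random

def kuramoto(K, N=100, T=50.0, dt=0.05, seed=2):
    """Kuramoto model of N coupled phase oscillators.

    Returns the order parameter r averaged over the last 20% of the run
    (r near 1 means synchrony, r near 0 means incoherence)."""
    rng = random.Random(seed)
    omega = [rng.gauss(0.0, 0.5) for _ in range(N)]          # natural frequencies
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(N)]
    steps = int(T / dt)
    r_tail = []
    for k in range(steps):
        # Mean-field form: r * e^{i psi} = (1/N) * sum_j e^{i theta_j}
        cx = sum(math.cos(t) for t in theta) / N
        sx = sum(math.sin(t) for t in theta) / N
        r, psi = math.hypot(cx, sx), math.atan2(sx, cx)
        theta = [t + dt * (w + K * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
        if k >= int(0.8 * steps):
            r_tail.append(r)
    return sum(r_tail) / len(r_tail)
```

Above the critical coupling (here roughly K ≈ 0.8 for a Gaussian frequency spread of 0.5) the population locks and r grows toward 1; well below it the phases stay incoherent.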
On the Local-Field Distribution in Attractor Neural Networks
NASA Astrophysics Data System (ADS)
Korutcheva, E.; Koroutchev, K.
In this paper a simple two-layer neural network model, similar to that studied by D. Amit and N. Brunel (Ref. 11), is investigated within the mean-field approximation. The distributions of the local fields are analytically derived and compared to those obtained in Ref. 11. The dynamic properties are discussed and the basin of attraction in some parametric space is found. A procedure for driving the system into a basin of attraction by using a regulation imposed on the network is proposed. The effect of an outer stimulus is shown to have a destructive influence on the attractor, forcing the latter to disappear if the distribution of the stimulus has high enough variance or if the stimulus has a spatial structure with sufficient contrast. The techniques used in this paper for obtaining the analytical results can be applied to more complex topologies of linked recurrent neural networks.
A phase field model for neural cell chemotropism
NASA Astrophysics Data System (ADS)
Najem, Sara; Grant, Martin
2013-04-01
Chemotropism is the action of targeting a part of the cell by means of chemical mediators and cues, and subsequently delimiting the pathway that it should undertake. In a neural cell, this initiates axonal elongation. Herein we model this growth, where chemotropic forcing leads the axon, by a phase field method utilizing two dynamical fields assigned respectively to the cell and to its leading edge. Additionally we quantify the condition for the retraction of the axon which takes place when the cell fails to form a synaptic connection.
The neural dynamics of sensory focus
Clarke, Stephen E.; Longtin, André; Maler, Leonard
2015-01-01
Coordinated sensory and motor system activity leads to efficient localization behaviours, but what neural dynamics enable object tracking, and what are the underlying coding principles? Here we show that optimized distance estimation from motion-sensitive neurons underlies object tracking performance in weakly electric fish. First, a relationship is presented for determining the distance that maximizes the Fisher information of a neuron's response to object motion. When applied to our data, the theory correctly predicts the distance chosen by an electric fish engaged in a tracking behaviour, which is associated with a bifurcation between tonic and burst modes of spiking. Although object distance, size and velocity alter the neural response, the location of the Fisher information maximum remains invariant, demonstrating that the circuitry must actively adapt to maintain 'focus' during relative motion. PMID:26549346
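For a Poisson rate code, the Fisher information about distance is I(d) = f'(d)²/f(d), so the information-maximizing distance can be found numerically for any tuning curve. The sketch below uses a hypothetical sigmoidal rate-vs-distance curve; all parameter values are illustrative, not the fish data:

```python
import numpy as np

def fisher_information(f, d, eps=1e-5):
    """Fisher information of a Poisson rate code: I(d) = f'(d)**2 / f(d)."""
    fp = (f(d + eps) - f(d - eps)) / (2 * eps)   # central difference
    return fp**2 / f(d)

def rate(d, r_max=50.0, d0=2.0, slope=0.5):
    """Hypothetical sigmoidal firing rate vs. distance, with a 1 Hz floor."""
    return r_max / (1.0 + np.exp((d - d0) / slope)) + 1.0

d = np.linspace(0.0, 5.0, 1001)
I = fisher_information(rate, d)
d_opt = d[np.argmax(I)]   # distance an ideal observer should prefer
```

Note that dividing the squared slope by the rate shifts the optimum away from the steepest point of the tuning curve, which is why the preferred distance is a property of the whole curve rather than of its slope alone.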
Natural neural projection dynamics underlying social behavior.
Gunaydin, Lisa A; Grosenick, Logan; Finkelstein, Joel C; Kauvar, Isaac V; Fenno, Lief E; Adhikari, Avishek; Lammel, Stephan; Mirzabekov, Julie J; Airan, Raag D; Zalocusky, Kelly A; Tye, Kay M; Anikeeva, Polina; Malenka, Robert C; Deisseroth, Karl
2014-06-19
Social interaction is a complex behavior essential for many species and is impaired in major neuropsychiatric disorders. Pharmacological studies have implicated certain neurotransmitter systems in social behavior, but circuit-level understanding of endogenous neural activity during social interaction is lacking. We therefore developed and applied a new methodology, termed fiber photometry, to optically record natural neural activity in genetically and connectivity-defined projections to elucidate the real-time role of specified pathways in mammalian behavior. Fiber photometry revealed that activity dynamics of a ventral tegmental area (VTA)-to-nucleus accumbens (NAc) projection could encode and predict key features of social, but not novel object, interaction. Consistent with this observation, optogenetic control of cells specifically contributing to this projection was sufficient to modulate social behavior, which was mediated by type 1 dopamine receptor signaling downstream in the NAc. Direct observation of deep projection-specific activity in this way captures a fundamental and previously inaccessible dimension of mammalian circuit dynamics. PMID:24949967
Information processing in neural networks with the complex dynamic thresholds
NASA Astrophysics Data System (ADS)
Kirillov, S. Yu.; Nekorkin, V. I.
2016-06-01
A control mechanism for information processing in neural networks is investigated, based on a complex dynamic threshold of neural excitation. The threshold properties are controlled by a slowly varying synaptic current, and the dynamic threshold is highly sensitive to the rate of synaptic current variation. This allows both flexible selective tuning of the network elements and nontrivial regimes of neural coding.
Neural Field Models with Threshold Noise.
Thul, Rüdiger; Coombes, Stephen; Laing, Carlo R
2016-12-01
The original neural field model of Wilson and Cowan is often interpreted as the averaged behaviour of a network of switch-like neural elements with a distribution of switch thresholds, giving rise to the classic sigmoidal population firing-rate function so prevalent in large-scale neuronal modelling. In this paper we explore the effects of such threshold noise without recourse to averaging and show that spatial correlations can have a strong effect on the behaviour of waves and patterns in continuum models. Moreover, for a prescribed spatial covariance function we explore the differences in behaviour that can emerge when the underlying stationary distribution is changed from Gaussian to non-Gaussian. For travelling front solutions, in a system with exponentially decaying spatial interactions, we make use of an interface approach to calculate the instantaneous wave speed analytically as a series expansion in the noise strength. From this we find that, for weak noise, the spatially averaged speed depends only on the choice of covariance function and not on the shape of the stationary distribution. For a system with a Mexican-hat spatial connectivity we further find that noise can induce localised bump solutions, and using an interface stability argument show that there can be multiple stable solution branches. PMID:26936267
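The noise-free setting can be sketched directly: an Amari-type field u_t = −u + w ∗ H(u − h) with an exponentially decaying kernel supports a travelling front whose interface can be tracked. This is a minimal Euler/Riemann-sum discretisation with illustrative constants, not the paper's interface method or its stochastic version:

```python
import numpy as np

N, dt, h = 400, 0.05, 0.3
x = np.linspace(-20.0, 20.0, N)
dx = x[1] - x[0]
w = 0.5 * np.exp(-np.abs(x))             # exponential kernel, integral ~ 1

u = np.where(x < -10.0, 1.0, 0.0)        # step initial condition: a front
front = []
for _ in range(400):
    fire = (u > h).astype(float)         # Heaviside firing rate H(u - h)
    conv = dx * np.convolve(fire, w, mode="same")
    u = u + dt * (-u + conv)
    front.append(x[u > h].max())         # interface = rightmost suprathreshold point
advance = front[-1] - front[0]           # the front invades the low state
```

Because the threshold h is below half the kernel mass, the interface receives suprathreshold input and the front advances at a constant speed, the quantity the paper expands in powers of the noise strength.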
Dynamical system modeling via signal reduction and neural network simulation
Paez, T.L.; Hunter, N.F.
1997-11-01
Many dynamical systems tested in the field and the laboratory display significant nonlinear behavior. Accurate characterization of such systems requires modeling in a nonlinear framework. One construct forming a basis for nonlinear modeling is that of the artificial neural network (ANN). However, when system behavior is complex, the amount of data required to perform training can become unreasonable. The authors reduce the complexity of information present in system response measurements using decomposition via canonical variate analysis. They describe a method for decomposing system responses, then modeling the components with ANNs. A numerical example is presented, along with conclusions and recommendations.
An efficient neural network approach to dynamic robot motion planning.
Yang, S X; Meng, M
2000-03-01
In this paper, a biologically inspired neural network approach to real-time collision-free motion planning of mobile robots or robot manipulators in a nonstationary environment is proposed. Each neuron in the topologically organized neural network has only local connections, and its dynamics are characterized by a shunting equation. Thus the computational complexity depends linearly on the neural network size. The real-time robot motion is planned through the dynamic activity landscape of the neural network without any prior knowledge of the dynamic environment, without explicitly searching over the free workspace or the collision paths, and without any learning procedures. Therefore it is computationally efficient. The global stability of the neural network is guaranteed by qualitative analysis and the Lyapunov stability theory. The effectiveness and efficiency of the proposed approach are demonstrated through simulation studies. PMID:10935758
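A minimal sketch of the idea on a small grid, assuming illustrative constants: the shunting equation has the form dx_i/dt = −Ax_i + (B − x_i)([I_i]⁺ + Σ_j w_ij [x_j]⁺) − (D + x_i)[I_i]⁻, the target receives excitatory input, obstacles receive inhibitory input, and the path simply climbs the relaxed activity landscape:

```python
import numpy as np

A, B, D = 10.0, 1.0, 1.0          # decay rate, upper and lower activity bounds
n, dt = 10, 0.01
I = np.zeros((n, n))
I[9, 9] = 100.0                   # target: strong excitatory input
I[4, 2:8] = -100.0                # wall of obstacles: inhibitory input

def lateral(xp):
    """Sum of rectified neighbour activities over the 8-neighbourhood."""
    p = np.pad(xp, 1)
    return sum(p[1+di:1+di+n, 1+dj:1+dj+n]
               for di in (-1, 0, 1) for dj in (-1, 0, 1) if di or dj)

x = np.zeros((n, n))
for _ in range(2000):             # relax to the steady activity landscape
    e = np.maximum(I, 0.0) + lateral(np.maximum(x, 0.0))
    x += dt * (-A * x + (B - x) * e - (D + x) * np.maximum(-I, 0.0))

def neighbors(i, j):
    return [(i + di, j + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
            if (di or dj) and 0 <= i + di < n and 0 <= j + dj < n]

pos, path = (0, 0), [(0, 0)]      # the robot climbs the activity landscape
while pos != (9, 9) and len(path) < 50:
    pos = max(neighbors(*pos), key=lambda p: x[p])
    path.append(pos)
```

Obstacle cells saturate near −D and are never selected, so the greedy climb detours around the wall with no explicit search and no learning, as the abstract describes.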
Population clocks: motor timing with neural dynamics
Buonomano, Dean V.; Laje, Rodrigo
2010-01-01
An understanding of sensory and motor processing will require elucidation of the mechanisms by which the brain tells time. Open questions relate to whether timing relies on dedicated or intrinsic mechanisms and whether distinct mechanisms underlie timing across scales and modalities. Although experimental and theoretical studies support the notion that neural circuits are intrinsically capable of sensory timing on short scales, few general models of motor timing have been proposed. For one class of models, population clocks, it is proposed that time is encoded in the time-varying patterns of activity of a population of neurons. We argue that population clocks emerge from the internal dynamics of recurrently connected networks, are biologically realistic and account for many aspects of motor timing. PMID:20889368
Beyond mean field theory: statistical field theory for neural networks
Buice, Michael A; Chow, Carson C
2014-01-01
Mean field theories have been a stalwart for studying the dynamics of networks of coupled neurons. They are convenient because they are relatively simple and amenable to analysis. However, classical mean field theory neglects the effects of fluctuations and correlations due to single-neuron effects. Here, we consider various possible approaches for going beyond mean field theory and incorporating correlation effects. Statistical field theory methods, in particular the Doi–Peliti–Janssen formalism, are particularly useful in this regard. PMID:25243014
Neural attractor network for application in visual field data classification
NASA Astrophysics Data System (ADS)
Fink, Wolfgang
2004-07-01
The purpose was to introduce a novel method for computer-based classification of visual field data derived from perimetric examination, which may act as a 'counsellor', providing an independent 'second opinion' to the diagnosing physician. The classification system consists of a Hopfield-type neural attractor network that obtains its input data from perimetric examination results. An iterative relaxation process determines the states of the neurons dynamically. Therefore, even 'noisy' perimetric output, e.g., early stages of a disease, may eventually be classified correctly according to the predefined idealized visual field defect (scotoma) patterns, stored as attractors of the network, that are found with diseases of the eye, optic nerve and the central nervous system. Preliminary tests of the classification system on real visual field data derived from perimetric examinations have shown a classification success of over 80%. Some of the main advantages of the Hopfield-attractor-network-based approach over feed-forward type neural networks are: (1) network architecture is defined by the classification problem; (2) no training is required to determine the neural coupling strengths; (3) assignment of an auto-diagnosis confidence level is possible by means of an overlap parameter and the Hamming distance. In conclusion, the novel method for computer-based classification of visual field data, presented here, furnishes a valuable first overview and an independent 'second opinion' in judging perimetric examination results, pointing towards a final diagnosis by a physician. It should not be considered a substitute for the diagnosing physician. Thanks to the worldwide accessibility of the Internet, the classification system offers a promising perspective towards modern computer-assisted diagnosis in both medicine and tele-medicine, for example and in particular, with respect to non-ophthalmic clinics or in communities where perimetric expertise is not readily available.
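The recall mechanism can be sketched with a textbook Hopfield network: Hebbian weights store idealized ±1 patterns, a noisy probe relaxes to the nearest attractor, and the overlap parameter and Hamming distance quantify classification confidence. The patterns here are random stand-ins, not real scotoma templates:

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 100, 3
patterns = rng.choice([-1, 1], size=(P, N))     # stand-in defect patterns
W = (patterns.T @ patterns).astype(float) / N   # Hebbian couplings, no training
np.fill_diagonal(W, 0.0)

def relax(s, sweeps=10):
    """Asynchronous iterative relaxation; always reaches a fixed point."""
    s = s.copy()
    for _ in range(sweeps):
        for i in range(N):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

probe = patterns[0].copy()                      # a 'noisy' examination result:
flip = rng.choice(N, size=15, replace=False)    # 15% of entries corrupted
probe[flip] *= -1

result = relax(probe)
overlap = result @ patterns.T / N               # confidence per stored attractor
hamming = int((result != patterns[0]).sum())    # distance to the best match
```

At this low storage load the corrupted probe falls back into the correct attractor, and the overlap with the winning pattern approaches 1 while overlaps with the others stay small.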
Neural dynamics of phonological processing in the dorsal auditory stream.
Liebenthal, Einat; Sabri, Merav; Beardsley, Scott A; Mangalathu-Arumana, Jain; Desai, Anjali
2013-09-25
Neuroanatomical models hypothesize a role for the dorsal auditory pathway in phonological processing as a feedforward efferent system (Davis and Johnsrude, 2007; Rauschecker and Scott, 2009; Hickok et al., 2011). But the functional organization of the pathway, in terms of time course of interactions between auditory, somatosensory, and motor regions, and the hemispheric lateralization pattern is largely unknown. Here, ambiguous duplex syllables, with elements presented dichotically at varying interaural asynchronies, were used to parametrically modulate phonological processing and associated neural activity in the human dorsal auditory stream. Subjects performed syllable and chirp identification tasks, while event-related potentials and functional magnetic resonance images were concurrently collected. Joint independent component analysis was applied to fuse the neuroimaging data and study the neural dynamics of brain regions involved in phonological processing with high spatiotemporal resolution. Results revealed a highly interactive neural network associated with phonological processing, composed of functional fields in posterior superior temporal gyrus (pSTG), inferior parietal lobule (IPL), and ventral central sulcus (vCS) that were engaged early and almost simultaneously (at 80-100 ms), consistent with a direct influence of articulatory somatomotor areas on phonemic perception. Left hemispheric lateralization was observed 250 ms earlier in IPL and vCS than pSTG, suggesting that functional specialization of somatomotor (and not auditory) areas determined lateralization in the dorsal auditory pathway. The temporal dynamics of the dorsal auditory pathway described here offer a new understanding of its functional organization and demonstrate that temporal information is essential to resolve neural circuits underlying complex behaviors. PMID:24068810
An integrated architecture of adaptive neural network control for dynamic systems
Ke, Liu; Tokar, R.; Mcvey, B.
1994-07-01
In this study, an integrated neural network control architecture for nonlinear dynamic systems is presented. Much of the recent work in the neural network control field uses no error feedback as a control input, which raises the problem of adaptation. The integrated architecture in this paper combines feedforward control and error-feedback adaptive control using neural networks. The paper reveals the different internal functionality of these two kinds of neural network controllers for certain input styles, e.g., state feedback and error feedback. Feedforward neural network controllers with state feedback establish fixed control mappings which cannot adapt when model uncertainties are present. With error feedback, neural network controllers learn the slopes or gains with respect to the error feedback, yielding error-driven adaptive control systems. The results demonstrate that the two kinds of control schemes can be combined to realize their individual advantages. Testing with disturbances added to the plant shows good tracking and adaptation.
NASA Astrophysics Data System (ADS)
Chiel, Hillel J.; Thomas, Peter J.
2011-12-01
, the sun, earth and moon) proved to be far more difficult. In the late nineteenth century, Poincaré made significant progress on this problem, introducing a geometric method of reasoning about solutions to differential equations (Diacu and Holmes 1996). This work had a powerful impact on mathematicians and physicists, and also began to influence biology. In his 1925 book, based on his work starting in 1907, and that of others, Lotka used nonlinear differential equations and concepts from dynamical systems theory to analyze a wide variety of biological problems, including oscillations in the numbers of predators and prey (Lotka 1925). Although little was known in detail about the function of the nervous system, Lotka concluded his book with speculations about consciousness and the implications this might have for creating a mathematical formulation of biological systems. Much experimental work in the 1930s and 1940s focused on the biophysical mechanisms of excitability in neural tissue, and Rashevsky and others continued to apply tools and concepts from nonlinear dynamical systems theory as a means of providing a more general framework for understanding these results (Rashevsky 1960, Landahl and Podolsky 1949). The publication of Hodgkin and Huxley's classic quantitative model of the action potential in 1952 created a new impetus for these studies (Hodgkin and Huxley 1952). In 1955, FitzHugh published an important paper that summarized much of the earlier literature, and used concepts from phase plane analysis such as asymptotic stability, saddle points, separatrices and the role of noise to provide a deeper theoretical and conceptual understanding of threshold phenomena (FitzHugh 1955, Izhikevich and FitzHugh 2006). The FitzHugh-Nagumo equations constituted an important two-dimensional simplification of the four-dimensional Hodgkin and Huxley equations, and gave rise to an extensive literature of analysis. Many of the papers in this special issue build on tools
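The FitzHugh-Nagumo simplification referred to above is easy to integrate directly. With one common textbook parameterisation and a suprathreshold constant input, chosen here purely for illustration, the model settles onto a limit cycle of repetitive spiking:

```python
import numpy as np

def fhn(v, w, I, a=0.7, b=0.8, eps=0.08):
    """dv/dt = v - v**3/3 - w + I;  dw/dt = eps * (v + a - b*w)."""
    return v - v**3 / 3.0 - w + I, eps * (v + a - b * w)

dt, steps = 0.01, 20000
v, w = -1.0, 1.0
vs = []
for _ in range(steps):
    dv, dw = fhn(v, w, I=0.5)          # suprathreshold constant drive
    v, w = v + dt * dv, w + dt * dw
    vs.append(v)

tail = vs[steps // 2:]                 # discard the initial transient
amplitude = max(tail) - min(tail)      # large amplitude: repetitive spiking
```

For this input the unique fixed point sits on the unstable middle branch of the cubic nullcline, so trajectories are swept into relaxation oscillations, the two-dimensional picture of the threshold and spiking phenomena FitzHugh analyzed in the phase plane.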
Two-photon imaging and analysis of neural network dynamics
NASA Astrophysics Data System (ADS)
Lütcke, Henry; Helmchen, Fritjof
2011-08-01
The glow of a starry night sky, the smell of a freshly brewed cup of coffee or the sound of ocean waves breaking on the beach are representations of the physical world that have been created by the dynamic interactions of thousands of neurons in our brains. How the brain mediates perceptions, creates thoughts, stores memories and initiates actions remains one of the most profound puzzles in biology, if not all of science. A key to a mechanistic understanding of how the nervous system works is the ability to measure and analyze the dynamics of neuronal networks in the living organism in the context of sensory stimulation and behavior. Dynamic brain properties have been fairly well characterized on the microscopic level of individual neurons and on the macroscopic level of whole brain areas largely with the help of various electrophysiological techniques. However, our understanding of the mesoscopic level comprising local populations of hundreds to thousands of neurons (so-called 'microcircuits') remains comparably poor. Predominantly, this has been due to the technical difficulties involved in recording from large networks of neurons with single-cell spatial resolution and near-millisecond temporal resolution in the brain of living animals. In recent years, two-photon microscopy has emerged as a technique which meets many of these requirements and thus has become the method of choice for the interrogation of local neural circuits. Here, we review the state-of-research in the field of two-photon imaging of neuronal populations, covering the topics of microscope technology, suitable fluorescent indicator dyes, staining techniques, and in particular analysis techniques for extracting relevant information from the fluorescence data. We expect that functional analysis of neural networks using two-photon imaging will help to decipher fundamental operational principles of neural microcircuits.
Adaptive control of nonlinear systems using multistage dynamic neural networks
NASA Astrophysics Data System (ADS)
Gupta, Madan M.; Rao, Dandina H.
1992-11-01
In this paper we present a new architecture of neuron, called the dynamic neural unit (DNU). The topology of the proposed neuronal model embodies delay elements, feedforward and feedback signals weighted by the synaptic weights, and a time-varying nonlinear activation function, and is thus different from the conventionally assumed architecture of neurons. The learning algorithm for the proposed neuronal structure and the corresponding implementation scheme are presented. A multi-stage dynamic neural network is developed using the DNU as the basic processing element. The performance evaluation of the dynamic neural network is presented for nonlinear dynamic systems under various situations. The capabilities of the proposed neural network model not only account for learning and control actions emulating some biological control functions, but also provide a promising parallel-distributed intelligent control scheme for large-scale complex dynamic systems.
Neural dynamics during repetitive visual stimulation
NASA Astrophysics Data System (ADS)
Tsoneva, Tsvetomira; Garcia-Molina, Gary; Desain, Peter
2015-12-01
Objective. Steady-state visual evoked potentials (SSVEPs), the brain responses to repetitive visual stimulation (RVS), are widely utilized in neuroscience. Their high signal-to-noise ratio and ability to entrain oscillatory brain activity are beneficial for their applications in brain-computer interfaces, investigation of neural processes underlying brain rhythmic activity (steady-state topography) and probing the causal role of brain rhythms in cognition and emotion. This paper aims at analyzing the spatial and temporal EEG dynamics in response to RVS at the frequency of stimulation and ongoing rhythms in the delta, theta, alpha, beta, and gamma bands. Approach. We used electroencephalography (EEG) to study the oscillatory brain dynamics during RVS at 10 frequencies in the gamma band (40-60 Hz). We collected an extensive EEG data set from 32 participants and analyzed the RVS evoked and induced responses in the time-frequency domain. Main results. Stable SSVEP over parieto-occipital sites was observed at each of the fundamental frequencies and their harmonics and sub-harmonics. Both the strength and the spatial propagation of the SSVEP response seem sensitive to stimulus frequency. The SSVEP was more localized around the parieto-occipital sites for higher frequencies (>54 Hz) and spread to fronto-central locations for lower frequencies. We observed a strong negative correlation between stimulation frequency and relative power change at that frequency, the first harmonic and the sub-harmonic components over occipital sites. Interestingly, over parietal sites for sub-harmonics a positive correlation of relative power change and stimulation frequency was found. A number of distinct patterns in delta (1-4 Hz), theta (4-8 Hz), alpha (8-12 Hz) and beta (15-30 Hz) bands were also observed. The transient response, from 0 to about 300 ms after stimulation onset, was accompanied by increase in delta and theta power over fronto-central and occipital sites, which returned to baseline
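The frequency-domain analysis rests on picking out power at the stimulation frequency against the broadband background. A toy version on synthetic data, with a 52 Hz sinusoid in white noise standing in for a gamma-band SSVEP and all values illustrative:

```python
import numpy as np

fs, T, f_stim = 512, 4.0, 52.0                  # sample rate (Hz), duration (s), stimulus (Hz)
t = np.arange(0.0, T, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = 0.8 * np.sin(2 * np.pi * f_stim * t) + rng.normal(0.0, 1.0, t.size)

spec = np.abs(np.fft.rfft(eeg)) ** 2            # power spectrum
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

def power_at(f):
    """Spectral power at the bin closest to frequency f."""
    return spec[np.argmin(np.abs(freqs - f))]

rel = power_at(f_stim) / np.median(spec)        # relative power change
```

Four seconds of data give 0.25 Hz bins, so the stimulation frequency lands exactly on a bin and its power towers over the noise floor; in real recordings the same ratio is computed against a pre-stimulus baseline rather than the spectrum's median.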
The temporal dynamics of resilience: Neural recovery as a biomarker.
Walter, Henrik; Erk, Susanne; Veer, Ilya M
2015-01-01
Resilience can be defined as the capability of an individual to maintain health despite stress and adversity. Here we suggest to study the temporal dynamics of neural processes associated with affective perturbation and emotion regulation at different time scales to investigate the mechanisms of resilience. Parameters related to neural recovery might serve as a predictive biomarker for resilience. PMID:26786503
ERIC Educational Resources Information Center
Noyons, E. C. M.; van Raan, A. F. J.
1998-01-01
Using bibliometric mapping techniques, authors developed a methodology of self-organized structuring of scientific fields which was applied to neural network research. Explores the evolution of a data generated field structure by monitoring the interrelationships between subfields, the internal structure of subfields, and the dynamic features of…
Dynamic causal models of neural system dynamics: current state and future extensions
Stephan, Klaas E.; Harrison, Lee M.; Kiebel, Stefan J.; David, Olivier; Penny, Will D.; Friston, Karl J.
2009-01-01
Complex processes resulting from the interaction of multiple elements can rarely be understood by analytical scientific approaches alone; additionally, mathematical models of system dynamics are required. This insight, which disciplines like physics have embraced for a long time already, is gradually gaining importance in the study of cognitive processes by functional neuroimaging. In this field, causal mechanisms in neural systems are described in terms of effective connectivity. Recently, Dynamic Causal Modelling (DCM) was introduced as a generic method to estimate effective connectivity from neuroimaging data in a Bayesian fashion. One of the key advantages of DCM over previous methods is that it distinguishes between neural state equations and modality-specific forward models that translate neural activity into a measured signal. Another strength is its natural relation to Bayesian Model Selection (BMS) procedures. In this article, we review the conceptual and mathematical basis of DCM and its implementation for functional magnetic resonance imaging data and event-related potentials. After introducing the application of BMS in the context of DCM, we conclude with an outlook to future extensions of DCM. These extensions are guided by the long-term goal of using dynamic system models for pharmacological and clinical applications, particularly with regard to synaptic plasticity. PMID:17426386
Shaping the learning curve: epigenetic dynamics in neural plasticity
Bronfman, Zohar Z.; Ginsburg, Simona; Jablonka, Eva
2014-01-01
A key characteristic of learning and neural plasticity is state-dependent acquisition dynamics reflected by the non-linear learning curve that links increase in learning with practice. Here we propose that the manner by which epigenetic states of individual cells change during learning contributes to the shape of the neural and behavioral learning curve. We base our suggestion on recent studies showing that epigenetic mechanisms such as DNA methylation, histone acetylation, and RNA-mediated gene regulation are intimately involved in the establishment and maintenance of long-term neural plasticity, reflecting specific learning-histories and influencing future learning. Our model, which is the first to suggest a dynamic molecular account of the shape of the learning curve, leads to several testable predictions regarding the link between epigenetic dynamics at the promoter, gene-network, and neural-network levels. This perspective opens up new avenues for therapeutic interventions in neurological pathologies. PMID:25071483
Multistage neural network model for dynamic scene analysis
Ajjimarangsee, P.
1989-01-01
This research is concerned with dynamic scene analysis. The goal of scene analysis is to recognize objects and obtain a meaningful interpretation of the scene from which images are obtained. The task of the dynamic scene analysis process generally consists of region identification, motion analysis and object recognition. The objective of this research is to develop clustering algorithms using a neural network approach and to investigate a multi-stage neural network model for region identification and motion analysis. The research is separated into three parts. First, a clustering algorithm using Kohonen's self-organizing feature map network is developed to be capable of generating continuous membership-valued outputs. A newly developed version of the updating algorithm of the network is introduced to achieve a high degree of parallelism. A neural network model for the fuzzy c-means algorithm is proposed. In the second part, the parallel algorithms of a neural network model for clustering using the self-organizing feature maps approach and a neural network that models the fuzzy c-means algorithm are modified for implementation on a distributed-memory parallel architecture. In the third part, supervised and unsupervised neural network models for motion analysis are investigated. For the supervised neural network, a three-layer perceptron network is trained by a series of images to recognize the movement of objects. For the unsupervised neural network, a self-organizing feature mapping network learns to recognize the movement of objects without an explicit training phase.
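The fuzzy c-means algorithm that the proposed network emulates is compact enough to state directly; the memberships are continuous in [0, 1] and sum to one per sample, which is exactly the continuous-membership-valued output described above. Synthetic two-cluster data are used for illustration:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=50, seed=0):
    """Classical fuzzy c-means: U[k, i] is the membership of sample k in
    cluster i, continuous in [0, 1] and summing to 1 over clusters."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]    # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        inv = np.clip(d, 1e-12, None) ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)          # membership update
    return U, centers

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)),   # two synthetic image regions
               rng.normal(3.0, 0.3, (50, 2))])
U, centers = fuzzy_c_means(X, c=2)
```

Samples near a cluster center get memberships near 1 for that cluster, while points between regions receive graded memberships, which is what makes the fuzzy variant useful for ambiguous region boundaries.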
Neural Dynamics of Attentional Cross-Modality Control
Rabinovich, Mikhail; Tristan, Irma; Varona, Pablo
2013-01-01
Attentional networks that integrate many cortical and subcortical elements dynamically control mental processes to focus on specific events and make a decision. The resources of attentional processing are finite. Nevertheless, we often face situations in which it is necessary to simultaneously process several modalities, for example, to switch attention between players in a soccer field. Here we use a global brain mode description to build a model of attentional control dynamics. This model is based on sequential information processing stability conditions that are realized through nonsymmetric inhibition in cortical circuits. In particular, we analyze the dynamics of attentional switching and focus in the case of parallel processing of three interacting mental modalities. Using an excitatory-inhibitory network, we investigate how the bifurcations between different attentional control strategies depend on the stimuli and analyze the relationship between the time of attention focus and the strength of the stimuli. We discuss the interplay between attention and decision-making: in this context, a decision-making process is a controllable bifurcation of the attention strategy. We also suggest the dynamical evaluation of attentional resources in neural sequence processing. PMID:23696890
A biologically inspired neural network for dynamic programming.
Francelin Romero, R A; Kacpryzk, J; Gomide, F
2001-12-01
An artificial neural network with a two-layer feedback topology and generalized recurrent neurons, for solving nonlinear discrete dynamic optimization problems, is developed. A direct method to assign the weights of the neural network is presented. The method is based on Bellman's Optimality Principle and on the interchange of information which occurs during the synaptic chemical processing among neurons. The neural-network-based algorithm is an advantageous approach for dynamic programming due to the inherent parallelism of neural networks; further, it reduces the severity of the computational problems that can occur in conventional methods. Some illustrative application examples are presented to show how this approach works, including shortest path and fuzzy decision making problems. PMID:11852439
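The Bellman optimality principle underlying the weight assignment is the familiar shortest-path recursion V(n) = min_m [c(n, m) + V(m)]. A plain, non-neural value-iteration sketch on a small hypothetical graph:

```python
import math

# Hypothetical weighted digraph; edge costs are illustrative.
graph = {
    "A": {"B": 2.0, "C": 5.0},
    "B": {"C": 1.0, "D": 4.0},
    "C": {"D": 1.0},
    "D": {},
}

def shortest_costs(graph, goal):
    """Value iteration from Bellman's principle: V(n) = min_m [c(n,m) + V(m)]."""
    V = {node: math.inf for node in graph}
    V[goal] = 0.0
    for _ in range(len(graph) - 1):        # |V| - 1 relaxation rounds suffice
        for node, edges in graph.items():
            for succ, cost in edges.items():
                V[node] = min(V[node], cost + V[succ])
    return V

V = shortest_costs(graph, "D")             # V["A"] is the cheapest A -> D cost
```

Each relaxation round is a purely local update per node, which is exactly the property that makes the principle amenable to the parallel, neuron-per-state implementation the abstract describes.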
Absolute stability and synchronization in neural field models with transmission delays
NASA Astrophysics Data System (ADS)
Kao, Chiu-Yen; Shih, Chih-Wen; Wu, Chang-Hong
2016-08-01
Neural fields model macroscopic parts of the cortex involving several populations of neurons. We consider a class of neural field models represented by integro-differential equations with space-dependent transmission delays. The spatial domains underlying the systems can be bounded or unbounded. A new approach, called sequential contracting, is employed in place of the conventional Lyapunov functional technique to investigate the global dynamics of such systems. Sufficient conditions for the absolute stability and synchronization of the systems are established. Several numerical examples are presented to demonstrate the theoretical results.
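A minimal numerical sketch of this kind of system: a one-dimensional field on a ring with space-dependent transmission delays, integrated by Euler stepping over a history buffer. The kernel, firing-rate function, and all parameters are illustrative assumptions, not the paper's model:

```python
import numpy as np

# 1-D neural field on a ring with space-dependent delays:
#   du/dt(x,t) = -u(x,t) + integral of w(x-y) f(u(y, t - |x-y|/v)) dy
N, L, v, dt, T = 64, 10.0, 2.0, 0.01, 2.0
x = np.linspace(0.0, L, N, endpoint=False)
dx = L / N
dist = np.abs(x[:, None] - x[None, :])
dist = np.minimum(dist, L - dist)                 # distance on the ring
w = np.exp(-dist) * dx                            # decaying excitatory kernel
delay_steps = np.rint(dist / v / dt).astype(int)  # |x-y|/v in Euler steps
max_lag = int(delay_steps.max())

def f(u):                                         # sigmoidal firing rate
    return 1.0 / (1.0 + np.exp(-5.0 * (u - 0.5)))

steps = int(T / dt)
hist = np.zeros((steps + max_lag + 1, N))
hist[: max_lag + 1] = 0.6 * np.exp(-((x - L / 2) ** 2))  # bump as initial history
cols = np.arange(N)[None, :]
for t in range(max_lag, steps + max_lag):
    delayed = hist[t - delay_steps, cols]         # u(y, t - |x-y|/v) for each x
    hist[t + 1] = hist[t] + dt * (-hist[t] + (w * f(delayed)).sum(axis=1))
u_final = hist[-1]
print(float(u_final.max()))
```

The history buffer is the key data structure: each pair of points reads the field at its own delayed time, which is what distinguishes space-dependent delays from a single global delay.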
Neural Dynamics Underlying Event-Related Potentials
NASA Technical Reports Server (NTRS)
Shah, Ankoor S.; Bressler, Steven L.; Knuth, Kevin H.; Ding, Ming-Zhou; Mehta, Ashesh D.; Ulbert, Istvan; Schroeder, Charles E.
2003-01-01
There are two opposing hypotheses about the brain mechanisms underlying sensory event-related potentials (ERPs). One holds that sensory ERPs are generated by phase resetting of ongoing electroencephalographic (EEG) activity, and the other that they result from signal averaging of stimulus-evoked neural responses. We tested several contrasting predictions of these hypotheses by direct intracortical analysis of neural activity in monkeys. Our findings clearly demonstrate evoked response contributions to the sensory ERP in the monkey, and they suggest the likelihood that a mixed (Evoked/Phase Resetting) model may account for the generation of scalp ERPs in humans.
Measuring Whole-Brain Neural Dynamics and Behavior of Freely-Moving C. elegans
NASA Astrophysics Data System (ADS)
Shipley, Frederick; Nguyen, Jeffrey; Plummer, George; Shaevitz, Joshua; Leifer, Andrew
2015-03-01
Bridging the gap between an organism's neural dynamics and its ultimate behavior is a fundamental goal of neuroscience. Probing neural dynamics has previously been restricted to a limited number of neurons, whether measured by electrodes or optogenetic methods. Here we present an instrument that simultaneously monitors neural activity from every neuron in the head of a freely moving Caenorhabditis elegans while recording its behavior. Whole-brain imaging has previously been demonstrated in C. elegans, but only in restrained and anesthetized animals (1). For studying the neural coding of behavior, it is crucial to record neural activity in freely behaving animals. Neural activity is recorded optically from cells expressing the calcium indicator GCaMP6. Real-time computer vision tracks the worm's position in x-y, while a piezo stage sweeps through the brain in z, yielding five brain volumes per second. Behavior is recorded under infrared, dark-field imaging. This tool will allow us to directly correlate neural activity with behavior, and we will present progress toward this goal. Thank you to the Simons Foundation and Princeton University for supporting this research.
Beyond slots and resources: grounding cognitive concepts in neural dynamics.
Johnson, Jeffrey S; Simmering, Vanessa R; Buss, Aaron T
2014-08-01
Research over the past decade has suggested that the ability to hold information in visual working memory (VWM) may be limited to as few as three to four items. However, the precise nature and source of these capacity limits remains hotly debated. Most commonly, capacity limits have been inferred from studies of visual change detection, in which performance declines systematically as a function of the number of items that participants must remember. According to one view, such declines indicate that a limited number of fixed-resolution representations are held in independent memory "slots." Another view suggests that such capacity limits are more apparent than real, but emerge as limited memory resources are distributed across more to-be-remembered items. Here we argue that, although both perspectives have merit and have generated and explained impressive amounts of empirical data, their central focus on the representations--rather than processes--underlying VWM may ultimately limit continuing progress in this area. As an alternative, we describe a neurally grounded, process-based approach to VWM: the dynamic field theory. Simulations demonstrate that this model can account for key aspects of behavioral performance in change detection, in addition to generating novel behavioral predictions that have been confirmed experimentally. Furthermore, we describe extensions of the model to recall tasks, the integration of visual features, cognitive development, individual differences, and functional imaging studies of VWM. We conclude by discussing the importance of grounding psychological concepts in neural dynamics, as a first step toward understanding the link between brain and behavior. PMID:24306983
A theory of neural dimensionality, dynamics, and measurement
NASA Astrophysics Data System (ADS)
Ganguli, Surya
In many experiments, neuroscientists tightly control behavior, record many trials, and obtain trial-averaged firing rates from hundreds of neurons in circuits containing millions of behaviorally relevant neurons. Dimensionality reduction has often shown that such datasets are strikingly simple; they can be described using a much smaller number of dimensions than the number of recorded neurons, and the resulting projections onto these dimensions yield a remarkably insightful dynamical portrait of circuit computation. This ubiquitous simplicity raises several profound and timely conceptual questions. What is the origin of this simplicity and its implications for the complexity of brain dynamics? Would neuronal datasets become more complex if we recorded more neurons? How and when can we trust dynamical portraits obtained from only hundreds of neurons in circuits containing millions of neurons? We present a theory that answers these questions, and test it using neural data recorded from reaching monkeys. Overall, this theory yields a picture of the neural measurement process as a random projection of neural dynamics, conceptual insights into how we can reliably recover dynamical portraits in such under-sampled measurement regimes, and quantitative guidelines for the design of future experiments. Moreover, it reveals the existence of phase transition boundaries in our ability to successfully decode cognition and behavior as a function of the number of recorded neurons, the complexity of the task, and the smoothness of neural dynamics.
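The "measurement as random projection" picture can be sketched as follows: low-dimensional smooth latent dynamics are embedded in a large population, and PCA on a recorded subset of neurons still recovers the few underlying dimensions. All sizes, the latent trajectory, and the embedding are illustrative:

```python
import numpy as np

# Low-dimensional latent dynamics embedded in a large population; a PCA/SVD
# of a recorded subset of neurons recovers the latent dimensionality.
rng = np.random.default_rng(0)
T, d, n_neurons, n_recorded = 500, 3, 1000, 100

t = np.linspace(0.0, 10.0, T)
latent = np.stack([np.sin(t), np.cos(2 * t), np.sin(3 * t)], axis=1)  # (T, d)
embed = rng.standard_normal((d, n_neurons))    # latent -> full population
population = latent @ embed                    # (T, n_neurons)
recorded = population[:, :n_recorded]          # electrode subsample

s = np.linalg.svd(recorded - recorded.mean(axis=0), compute_uv=False)
explained = s**2 / np.sum(s**2)
print(explained[:4])                           # first 3 components carry ~all variance
```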
Magnetic field induced dynamical chaos
Ray, Somrita; Baura, Alendu; Bag, Bidhan Chandra
2013-12-15
In this article, we have studied the dynamics of a charged particle in the presence of a magnetic field. The motion of the particle is confined to the x-y plane under a two-dimensional nonlinear potential. We have shown that a constant magnetic field can induce dynamical chaos even for a force derived from a simple potential. For a given strength of the magnetic field, initial position, and velocity of the particle, the dynamics may be regular, but it may become chaotic when the field is time dependent; chaotic dynamics arises frequently in that case. The origin of chaos has been explored using the Hamiltonian function of the dynamics in terms of action and angle variables. The applicability of the present study has been discussed with a few examples.
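A hedged sketch of the setup (units, potential, and field profile are illustrative): a unit-charge, unit-mass particle in the x-y plane with a quartic potential and a time-dependent field B(t) along z. Because the magnetic force is perpendicular to the velocity it does no work, so total energy is conserved even when B varies in time, which gives a check on the integrator:

```python
import numpy as np

# Charged particle (q = m = 1) in the x-y plane: nonlinear potential
# V(x, y) = (x^4 + y^4)/4 plus time-dependent B(t) = B0*(1 + 0.5*sin(w*t))
# along z. The magnetic force does no work, so energy is conserved.
B0, w = 1.0, 2.0

def deriv(state, t):
    x, y, vx, vy = state
    B = B0 * (1.0 + 0.5 * np.sin(w * t))
    return np.array([vx, vy, vy * B - x**3, -vx * B - y**3])  # (v, q v x B - grad V)

def rk4_step(state, t, dt):
    k1 = deriv(state, t)
    k2 = deriv(state + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = deriv(state + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = deriv(state + dt * k3, t + dt)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def energy(state):
    x, y, vx, vy = state
    return 0.5 * (vx**2 + vy**2) + 0.25 * (x**4 + y**4)

state, t, dt = np.array([1.0, 0.0, 0.0, 0.5]), 0.0, 0.001
e0 = energy(state)
for _ in range(20_000):                 # integrate to t = 20
    state = rk4_step(state, t, dt)
    t += dt
drift = abs(energy(state) - e0)
print(drift)                            # tiny: the integrator respects conservation
```

Conserved energy confines the orbit but does not prevent chaos: a two-degree-of-freedom Hamiltonian system can be chaotic on a fixed energy surface, which is the regime the paper probes with action-angle variables.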
Dynamic model of neural networks with asymmetric diluted couplings
NASA Astrophysics Data System (ADS)
Choi, M. Y.; Choi, Meekyoung
1990-06-01
We study an asymmetric diluted version of the dynamic model for neural networks proposed recently, which explicitly takes into account the existence of several time scales without discretizing the time. The dynamics is neither totally synchronous nor totally asynchronous, and the couplings in the neural networks are asymmetric. These considerations may be regarded as more biologically realistic. We obtain the phase diagram as a function of the temperature ε^-1, the capacity α, and the ratio a of the refractory period to the action potential duration.
Spontaneous Neural Dynamics and Multi-scale Network Organization
Foster, Brett L.; He, Biyu J.; Honey, Christopher J.; Jerbi, Karim; Maier, Alexander; Saalmann, Yuri B.
2016-01-01
Spontaneous neural activity has historically been viewed as task-irrelevant noise that should be controlled for via experimental design, and removed through data analysis. However, electrophysiology and functional MRI studies of spontaneous activity patterns, which have greatly increased in number over the past decade, have revealed a close correspondence between these intrinsic patterns and the structural network architecture of functional brain circuits. In particular, by analyzing the large-scale covariation of spontaneous hemodynamics, researchers are able to reliably identify functional networks in the human brain. Subsequent work has sought to identify the corresponding neural signatures via electrophysiological measurements, as this would elucidate the neural origin of spontaneous hemodynamics and would reveal the temporal dynamics of these processes across slower and faster timescales. Here we survey common approaches to quantifying spontaneous neural activity, reviewing their empirical success, and their correspondence with the findings of neuroimaging. We emphasize invasive electrophysiological measurements, which are amenable to amplitude- and phase-based analyses, and which can report variations in connectivity with high spatiotemporal precision. After summarizing key findings from the human brain, we survey work in animal models that display similar multi-scale properties. We highlight that, across many spatiotemporal scales, the covariance structure of spontaneous neural activity reflects structural properties of neural networks and dynamically tracks their functional repertoire. PMID:26903823
Toward modeling a dynamic biological neural network.
Ross, M D; Dayhoff, J E; Mugler, D H
1990-01-01
Mammalian macular endorgans are linear bioaccelerometers located in the vestibular membranous labyrinth of the inner ear. In this paper, the organization of the endorgan is interpreted on physical and engineering principles. This is a necessary prerequisite to mathematical and symbolic modeling of information processing by the macular neural network. Mathematical notations that describe the functioning system were used to produce a novel, symbolic model. The model is six-tiered and is constructed to mimic the neural system. Initial simulations show that the network functions best when some of the detecting elements (type I hair cells) are excitatory and others (type II hair cells) are weakly inhibitory. The simulations also illustrate the importance of disinhibition of receptors located in the third tier in shaping nerve discharge patterns at the sixth tier in the model system. PMID:11538873
Neural network with dynamically adaptable neurons
NASA Technical Reports Server (NTRS)
Tawel, Raoul (Inventor)
1994-01-01
This invention is an adaptive neuron for use in neural network processors. The adaptive neuron participates in the supervised learning phase of operation on a co-equal basis with the synapse matrix elements, adaptively changing its gain in a manner similar to the change of weights in the synaptic elements. In this manner, training time is decreased by as much as three orders of magnitude.
On the dynamics of delayed neural feedback loops
NASA Astrophysics Data System (ADS)
Brandt, Sebastian F.
The computational potential of neural circuits arises from the interconnections and interactions between their elements. Feedback is a universal feature of neuronal organization and has been shown to be a key element in neural signal processing. In biological neural circuits, delays arise from finite axonal conduction speeds and at the synaptic level due to transmitter release dynamics. In this work, the influence of temporal delay on neural network dynamics is investigated. The basic feedback mechanisms involved in the regulation of neural activity consist of small circuits composed of two to three neurons. We analyze a system of two interconnected neurons and show that finite delays can induce oscillations in the system. Employing a perturbative approach in combination with a resummation scheme, we evaluate the limit cycle dynamics of the system. We show that synchronous oscillations can arise when the delays are asymmetric. Furthermore, distributed delays can stabilize the system and lead to an increased range of parameters for which the system converges to a stable fixed point. We next consider a delayed neural triad with a characteristic topology commonly found in neural feedback circuits. We show that the system can be both robust and sensitive in regard to small parameter changes and examine the significance of the different projections. We then address the functional role of a particular feedback loop found in the visual system of nonmammalian vertebrates. We show that the system can function as a 'winner-take-all' and novelty detector and examine the influence of temporal delays on the system's performance. Biological systems are subject to stochastic influences and display some degree of disorder. We examine the role of noise and its effect on the stability of the synchronized state in a system of two coupled active rotators. Finally, we show that disordering the driving forces in arrays of coupled oscillators can lead to synchronization in these systems.
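Delay-induced oscillations in a two-neuron loop can be reproduced with a simple Euler scheme over a history buffer; the gains, delay, and tanh nonlinearity below are illustrative, not the dissertation's parameters:

```python
import numpy as np

# Two-neuron delayed feedback loop:
#   u1' = -u1 + a*tanh(u2(t - tau)),  u2' = -u2 + b*tanh(u1(t - tau))
# The only equilibrium is the origin; with a*b < 0 and enough gain and delay
# it is unstable, and the bounded system settles into sustained oscillation.
a, b, tau, dt, T = 4.0, -4.0, 1.0, 0.001, 60.0
lag, steps = int(tau / dt), int(T / dt)
u = np.zeros((steps + lag + 1, 2))
u[: lag + 1] = [0.1, 0.0]                      # constant initial history
for t in range(lag, steps + lag):
    d1 = -u[t, 0] + a * np.tanh(u[t - lag, 1])
    d2 = -u[t, 1] + b * np.tanh(u[t - lag, 0])
    u[t + 1] = u[t] + dt * np.array([d1, d2])
tail = u[-int(20.0 / dt):, 0]                  # final 20 time units of u1
amplitude = float(tail.max() - tail.min())
print(amplitude)                               # sustained, not decaying to zero
```

Setting tau near zero in the same script makes the oscillation disappear, which is the delay-induced instability the work analyzes perturbatively.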
Non-Lipschitzian dynamics for neural net modelling
NASA Technical Reports Server (NTRS)
Zak, Michail
1989-01-01
Failure of the Lipschitz condition in unstable equilibrium points of dynamical systems leads to a multiple-choice response to an initial deterministic input. The evolution of such systems is characterized by a special type of unpredictability measured by unbounded Liapunov exponents. Possible relation of these systems to future neural networks is discussed.
Dynamic behaviors of the non-neural ectoderm during mammalian cranial neural tube closure.
Ray, Heather J; Niswander, Lee A
2016-08-15
The embryonic brain and spinal cord initially form through the process of neural tube closure (NTC). NTC is thought to be highly similar between rodents and humans, and studies of mouse genetic mutants have greatly increased our understanding of the molecular basis of NTC with relevance for human neural tube defects. In addition, studies using amphibian and chick embryos have shed light into the cellular and tissue dynamics underlying NTC. However, the dynamics of mammalian NTC has been difficult to study due to in utero development until recently when advances in mouse embryo ex vivo culture techniques along with confocal microscopy have allowed for imaging of mouse NTC in real time. Here, we have performed live imaging of mouse embryos with a particular focus on the non-neural ectoderm (NNE). Previous studies in multiple model systems have found that the NNE is important for proper NTC, but little is known about the behavior of these cells during mammalian NTC. Here we utilized a NNE-specific genetic labeling system to assess NNE dynamics during murine NTC and identified different NNE cell behaviors as the cranial region undergoes NTC. These results bring valuable new insight into regional differences in cellular behavior during NTC that may be driven by different molecular regulators and which may underlie the various positional disruptions of NTC observed in humans with neural tube defects. PMID:27343896
Dynamics of a neural system with a multiscale architecture
Breakspear, Michael; Stam, Cornelis J
2005-01-01
The architecture of the brain is characterized by a modular organization repeated across a hierarchy of spatial scales—neurons, minicolumns, cortical columns, functional brain regions, and so on. It is important to consider that the processes governing neural dynamics at any given scale are not only determined by the behaviour of other neural structures at that scale, but also by the emergent behaviour of smaller scales, and the constraining influence of activity at larger scales. In this paper, we introduce a theoretical framework for neural systems in which the dynamics are nested within a multiscale architecture. In essence, the dynamics at each scale are determined by a coupled ensemble of nonlinear oscillators, which embody the principal scale-specific neurobiological processes. The dynamics at larger scales are ‘slaved’ to the emergent behaviour of smaller scales through a coupling function that depends on a multiscale wavelet decomposition. The approach is first explicated mathematically. Numerical examples are then given to illustrate phenomena such as between-scale bifurcations, and how synchronization in small-scale structures influences the dynamics in larger structures in an intuitive manner that cannot be captured by existing modelling approaches. A framework for relating the dynamical behaviour of the system to measured observables is presented and further extensions to capture wave phenomena and mode coupling are suggested. PMID:16087448
Dynamic Artificial Neural Networks with Affective Systems
Schuman, Catherine D.; Birdwell, J. Douglas
2013-01-01
Artificial neural networks (ANNs) are processors that are trained to perform particular tasks. We couple a computational ANN with a simulated affective system in order to explore the interaction between the two. In particular, we design a simple affective system that adjusts the threshold values in the neurons of our ANN. The aim of this paper is to demonstrate that this simple affective system can control the firing rate of the ensemble of neurons in the ANN, as well as to explore the coupling between the affective system and the processes of long term potentiation (LTP) and long term depression (LTD), and the effect of the parameters of the affective system on its performance. We apply our networks with affective systems to a simple pole balancing example and briefly discuss the effect of affective systems on network performance. PMID:24303015
Dynamic Pricing in Electronic Commerce Using Neural Network
NASA Astrophysics Data System (ADS)
Ghose, Tapu Kumar; Tran, Thomas T.
In this paper, we propose an approach in which a feed-forward neural network is used to dynamically calculate a competitive price for a product in order to maximize sellers’ revenue. In the approach, we consider that, along with product price, other attributes such as product quality, delivery time, after-sales service, and seller’s reputation contribute to consumers’ purchase decisions. We show that once the sellers, using their limited prior knowledge, set an initial price for a product, our model adjusts the price automatically with the help of the neural network so that sellers’ revenue is maximized.
Fluctuation-Driven Neural Dynamics Reproduce Drosophila Locomotor Patterns
Cruchet, Steeve; Gustafson, Kyle; Benton, Richard; Floreano, Dario
2015-01-01
The neural mechanisms determining the timing of even simple actions, such as when to walk or rest, are largely mysterious. One intriguing, but untested, hypothesis posits a role for ongoing activity fluctuations in neurons of central action selection circuits that drive animal behavior from moment to moment. To examine how fluctuating activity can contribute to action timing, we paired high-resolution measurements of freely walking Drosophila melanogaster with data-driven neural network modeling and dynamical systems analysis. We generated fluctuation-driven network models whose outputs—locomotor bouts—matched those measured from sensory-deprived Drosophila. From these models, we identified those that could also reproduce a second, unrelated dataset: the complex time-course of odor-evoked walking for genetically diverse Drosophila strains. Dynamical models that best reproduced both Drosophila basal and odor-evoked locomotor patterns exhibited specific characteristics. First, ongoing fluctuations were required. In a stochastic resonance-like manner, these fluctuations allowed neural activity to escape stable equilibria and to exceed a threshold for locomotion. Second, odor-induced shifts of equilibria in these models caused a depression in locomotor frequency following olfactory stimulation. Our models predict that activity fluctuations in action selection circuits cause behavioral output to more closely match sensory drive and may therefore enhance navigation in complex sensory environments. Together these data reveal how simple neural dynamics, when coupled with activity fluctuations, can give rise to complex patterns of animal behavior. PMID:26600381
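The stochastic-resonance-like escape mechanism can be sketched with a one-dimensional double-well model (not the paper's fitted network): without fluctuations, the state sits in a stable equilibrium below the locomotion threshold; with fluctuations, it repeatedly escapes and crosses it:

```python
import numpy as np

# Double-well activity model dx = (x - x^3) dt + sigma dW (illustrative):
# the rest state x* = -1 never crosses the locomotion threshold x > 0 without
# fluctuations; with noise, escapes across the barrier occur repeatedly.
def fraction_above_threshold(sigma, seed=0, dt=0.01, steps=100_000):
    rng = np.random.default_rng(seed)
    x, above = -1.0, 0
    kicks = sigma * np.sqrt(dt) * rng.standard_normal(steps)
    for kick in kicks:
        x += dt * (x - x**3) + kick            # Euler-Maruyama step
        above += x > 0.0
    return above / steps

f_quiet = fraction_above_threshold(0.0)        # deterministic: stuck at x = -1
f_noisy = fraction_above_threshold(0.7)        # fluctuation-driven crossings
print(f_quiet, f_noisy)
```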
3-D flame temperature field reconstruction with multiobjective neural network
NASA Astrophysics Data System (ADS)
Wan, Xiong; Gao, Yiqing; Wang, Yuanmei
2003-02-01
A novel 3-D temperature field reconstruction method is proposed in this paper, based on multi-wavelength thermometry and Hopfield neural network computed tomography. A mathematical model of multi-wavelength thermometry is formulated, and a neural network algorithm based on multiobjective optimization is developed. Through computer simulation and comparison with the algebraic reconstruction technique (ART) and the filtered back-projection algorithm (FBP), the reconstruction results of the new method are discussed in detail. The study shows that the new method consistently gives the best reconstruction results. Finally, the temperature distribution of a cross-section of a four-peak candle flame is reconstructed with this method.
Naudé, Jérémie; Cessac, Bruno; Berry, Hugues; Delord, Bruno
2013-09-18
Homeostatic intrinsic plasticity (HIP) is a ubiquitous cellular mechanism regulating neuronal activity, cardinal for the proper functioning of nervous systems. In invertebrates, HIP is critical for orchestrating stereotyped activity patterns. The functional impact of HIP remains more obscure in vertebrate networks, where higher order cognitive processes rely on complex neural dynamics. The hypothesis has emerged that HIP might control the complexity of activity dynamics in recurrent networks, with important computational consequences. However, conflicting results about the causal relationships between cellular HIP, network dynamics, and computational performance have arisen from machine-learning studies. Here, we assess how cellular HIP effects translate into collective dynamics and computational properties in biological recurrent networks. We develop a realistic multiscale model including a generic HIP rule regulating the neuronal threshold with actual molecular signaling pathways kinetics, Dale's principle, sparse connectivity, synaptic balance, and Hebbian synaptic plasticity (SP). Dynamic mean-field analysis and simulations unravel that HIP sets a working point at which inputs are transduced by large derivative ranges of the transfer function. This cellular mechanism ensures increased network dynamics complexity, robust balance with SP at the edge of chaos, and improved input separability. Although critically dependent upon balanced excitatory and inhibitory drives, these effects display striking robustness to changes in network architecture, learning rates, and input features. Thus, the mechanism we unveil might represent a ubiquitous cellular basis for complex dynamics in neural networks. Understanding this robustness is an important challenge to unraveling principles underlying self-organization around criticality in biological recurrent neural networks. PMID:24048833
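The qualitative core of a HIP rule, stripped of the molecular-kinetics detail of the paper's model, is a slow threshold that tracks deviations of the firing rate from a target; the transfer function and constants here are illustrative:

```python
import math

# Slow homeostatic threshold rule (form and constants illustrative):
#   rate = sigmoid(drive - theta),  d(theta)/dt = (rate - target)/tau
# The threshold drifts until the output rate sits at the target.
def simulate_hip(drive, target=0.1, tau=200.0, dt=0.1, steps=200_000):
    theta = 0.0
    for _ in range(steps):
        rate = 1.0 / (1.0 + math.exp(-(drive - theta)))
        theta += (dt / tau) * (rate - target)
    return theta, rate

theta, rate = simulate_hip(drive=2.0)
print(round(rate, 2))                          # rate homeostatically pinned near 0.1
```

Because the rule is first-order and the transfer function monotone, the threshold converges without overshoot for any constant drive; the paper's point is that this working point also shapes collective network dynamics.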
A solution to neural field equations by a recurrent neural network method
NASA Astrophysics Data System (ADS)
Alharbi, Abir
2012-09-01
Neural field equations (NFE) are used to model the activity of neurons in the brain; they are derived from a single-neuron 'integrate-and-fire' starting point. The neural continuum is spatially discretized for numerical studies, and the governing equations are modeled as a system of ordinary differential equations. In this article the recurrent neural network approach is used to solve this system of ODEs. It consists of a technique developed by combining the standard numerical method of finite differences with the Hopfield neural network. The architecture of the net, energy function, updating equations, and algorithms are developed for the NFE model. A Hopfield neural network is then designed to minimize the energy function modeling the NFE. Results obtained from the Hopfield finite-differences net show excellent performance in terms of accuracy and speed. The parallel nature of the Hopfield approach may make it easier to implement on fast parallel computers, giving it a speed advantage over traditional methods.
Can Neural Activity Propagate by Endogenous Electrical Field?
Qiu, Chen; Shivacharan, Rajat S.; Zhang, Mingming
2015-01-01
It is widely accepted that synaptic transmissions and gap junctions are the major governing mechanisms for signal traveling in the neural system. Yet, a group of neural waves, either physiological or pathological, share the same speed of ∼0.1 m/s without synaptic transmission or gap junctions, and this speed is not consistent with axonal conduction or ionic diffusion. The only explanation left is an electrical field effect. We tested the hypothesis that endogenous electric fields are sufficient to explain the propagation with in silico and in vitro experiments. Simulation results show that field effects alone can indeed mediate propagation across layers of neurons with speeds of 0.12 ± 0.09 m/s with pathological kinetics, and 0.11 ± 0.03 m/s with physiologic kinetics, both generating weak field amplitudes of ∼2–6 mV/mm. Further, the model predicted that propagation speed values are inversely proportional to the cell-to-cell distances, but do not significantly change with extracellular resistivity, membrane capacitance, or membrane resistance. In vitro recordings in mouse hippocampi produced similar speeds (0.10 ± 0.03 m/s) and field amplitudes (2.5–5 mV/mm), and by applying a blocking field, the propagation speed was greatly reduced. Finally, osmolarity experiments confirmed the model's prediction that cell-to-cell distance inversely affects propagation speed. Together, these results show that despite their weak amplitude, electric fields can be solely responsible for spike propagation at ∼0.1 m/s. This phenomenon could be important to explain the slow propagation of epileptic activity and other normal propagations at similar speeds. SIGNIFICANCE STATEMENT Neural activity (waves or spikes) can propagate using well documented mechanisms such as synaptic transmission, gap junctions, or diffusion. However, the purpose of this paper is to provide an explanation for experimental data showing that neural signals can propagate by means other than synaptic transmission.
Dynamical analysis of uncertain neural networks with multiple time delays
NASA Astrophysics Data System (ADS)
Arik, Sabri
2016-02-01
This paper investigates the robust stability problem for dynamical neural networks in the presence of time delays and norm-bounded parameter uncertainties with respect to the class of non-decreasing, non-linear activation functions. By employing the Lyapunov stability and homeomorphism mapping theorems together, a new delay-independent sufficient condition is obtained for the existence, uniqueness and global asymptotic stability of the equilibrium point of the delayed uncertain neural networks. The condition obtained for robust stability establishes a matrix-norm relationship between the network parameters of the neural system, which can be easily verified using properties of positive definite matrices. Some constructive numerical examples are presented to show the applicability of the obtained result and its advantages over previously published results.
On neural networks in identification and control of dynamic systems
NASA Technical Reports Server (NTRS)
Phan, Minh; Juang, Jer-Nan; Hyland, David C.
1993-01-01
This paper presents a discussion of the applicability of neural networks in the identification and control of dynamic systems. Emphasis is placed on understanding how neural networks handle linear systems and how the new approach is related to conventional system identification and control methods. Extensions of the approach to nonlinear systems are then made. The paper explains the fundamental concepts of neural networks in their simplest terms. Among the topics discussed are feedforward and recurrent networks in relation to the standard state-space and observer models, linear and nonlinear auto-regressive models, linear predictors, one-step-ahead control, and model reference adaptive control for linear and nonlinear systems. Numerical examples are presented to illustrate the application of these important concepts.
Nonlinear dynamical system approaches towards neural prosthesis
Torikai, Hiroyuki; Hashimoto, Sho
2011-04-19
An asynchronous discrete-state spiking neuron is a wired system of shift registers that can mimic the nonlinear dynamics of an ODE-based neuron model. The control parameter of the neuron is the wiring pattern among the registers, which makes such neurons suitable for on-chip learning. In this paper an asynchronous discrete-state spiking neuron is introduced and its typical nonlinear phenomena are demonstrated. A learning algorithm for a set of neurons is also presented, and it is demonstrated that the algorithm enables the set of neurons to reconstruct the nonlinear dynamics of another set of neurons with unknown parameter values. The learning function is validated by FPGA experiments.
Ma, Ying; Shaik, Mohammed A; Kim, Sharon H; Kozberg, Mariel G; Thibodeaux, David N; Zhao, Hanzhi T; Yu, Hang; Hillman, Elizabeth M C
2016-10-01
Although modern techniques such as two-photon microscopy can now provide cellular-level three-dimensional imaging of the intact living brain, the speed and fields of view of these techniques remain limited. Conversely, two-dimensional wide-field optical mapping (WFOM), a simpler technique that uses a camera to observe large areas of the exposed cortex under visible light, can detect changes in both neural activity and haemodynamics at very high speeds. Although WFOM may not provide single-neuron or capillary-level resolution, it is an attractive and accessible approach to imaging large areas of the brain in awake, behaving mammals at speeds fast enough to observe widespread neural firing events, as well as their dynamic coupling to haemodynamics. Although such wide-field optical imaging techniques have a long history, the advent of genetically encoded fluorophores that can report neural activity with high sensitivity, as well as modern technologies such as light emitting diodes and sensitive and high-speed digital cameras have driven renewed interest in WFOM. To facilitate the wider adoption and standardization of WFOM approaches for neuroscience and neurovascular coupling research, we provide here an overview of the basic principles of WFOM, considerations for implementation of wide-field fluorescence imaging of neural activity, spectroscopic analysis and interpretation of results.This article is part of the themed issue 'Interpreting BOLD: a dialogue between cognitive and cellular neuroscience'. PMID:27574312
Simulation of dynamic processes with adaptive neural networks.
Tzanos, C. P.
1998-02-03
Many industrial processes are highly non-linear and complex. Their simulation with first-principle or conventional input-output correlation models is not satisfactory, either because the process physics is not well understood, or because it is so complex that direct simulation is not adequately accurate or requires excessive computation time, especially for on-line applications. Artificial intelligence techniques (neural networks, expert systems, fuzzy logic) or their combination with simple process-physics models can be used effectively to simulate such processes. Feedforward (static) neural networks (FNNs) can be used effectively to model steady-state processes. They have also been used to model dynamic (time-varying) processes by adding to the network input layer input nodes that represent values of input variables at previous time steps. The number of previous time steps is problem dependent and, in general, can be determined only after extensive testing. This work demonstrates that for dynamic processes that do not vary quickly relative to the retraining time of the neural network, an adaptive feedforward neural network can be an effective simulator that is free of the complexities introduced by the use of input values at previous time steps.
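The tapped-delay construction described above can be sketched in a few lines; the signal, lag count, and helper name here are illustrative, not taken from the paper:

```python
import numpy as np

def make_lagged_inputs(u, n_lags):
    """Build an input matrix whose columns are u(t), u(t-1), ..., u(t-n_lags).

    This is the standard tapped-delay construction: a static feedforward
    network sees past input values as extra features in its input layer.
    """
    T = len(u)
    rows = [u[t - n_lags:t + 1][::-1] for t in range(n_lags, T)]
    return np.array(rows)  # shape (T - n_lags, n_lags + 1)

u = np.arange(6.0)              # toy input signal: 0, 1, 2, 3, 4, 5
X = make_lagged_inputs(u, 2)
# first row corresponds to t = 2: [u(2), u(1), u(0)] = [2, 1, 0]
```

Each row of `X` can then be presented to a static feedforward network as one training input; the adaptive scheme in the abstract avoids this extra input dimension by retraining instead.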
Persistent Activity in Neural Networks with Dynamic Synapses
Barak, Omri; Tsodyks, Misha
2007-01-01
Persistent activity states (attractors), observed in several neocortical areas after the removal of a sensory stimulus, are believed to be the neuronal basis of working memory. One of the possible mechanisms that can underlie persistent activity is recurrent excitation mediated by intracortical synaptic connections. A recent experimental study revealed that connections between pyramidal cells in prefrontal cortex exhibit various degrees of synaptic depression and facilitation. Here we analyze the effect of synaptic dynamics on the emergence and persistence of attractor states in interconnected neural networks. We show that different combinations of synaptic depression and facilitation result in qualitatively different network dynamics with respect to the emergence of the attractor states. This analysis raises the possibility that the framework of attractor neural networks can be extended to represent time-dependent stimuli. PMID:17319739
Slow dynamics in features of synchronized neural network responses
Haroush, Netta; Marom, Shimon
2015-01-01
In this report, trial-to-trial variations in the synchronized responses of neural networks are explored over time scales of minutes in ex-vivo large-scale cortical networks. We show that sub-second measures of the individual synchronous response, namely its latency and decay duration, are related to minutes-scale network response dynamics. Network responsiveness is reflected as residency in, or shifting amongst, areas of the latency-decay plane. The different sensitivities of latency and decay durations to synaptic blockers imply that these two measures reflect aspects of inhibitory and excitatory activities. Taken together, the data suggest that trial-to-trial variations in the synchronized responses of neural networks might be related to the effective excitation-inhibition ratio being a dynamic variable over time scales of minutes. PMID:25926787
Specific frontal neural dynamics contribute to decisions to check
Stoll, Frederic M.; Fontanier, Vincent; Procyk, Emmanuel
2016-01-01
Curiosity and information seeking potently shape our behaviour and are thought to rely on the frontal cortex. Yet the frontal regions and neural dynamics that control the drive to check for information remain unknown. Here we trained monkeys in a task where they had the opportunity to gain information about the potential delivery of a large bonus reward or to continue with a default instructed decision task. Single-unit recordings in behaving monkeys reveal that decisions to check for additional information first engage midcingulate cortex and then lateral prefrontal cortex; the opposite is true for instructed decisions. Importantly, deciding to check engages neurons also involved in performance monitoring. Further, specific midcingulate activity could be discerned several trials before the monkeys actually chose to check the environment. Our data show that deciding to seek information on the current state of the environment is characterized by specific dynamics of neural activity within the prefrontal cortex. PMID:27319361
Shaping the Dynamics of a Bidirectional Neural Interface
Vato, Alessandro; Semprini, Marianna; Maggiolini, Emma; Szymanski, Francois D.; Fadiga, Luciano; Panzeri, Stefano; Mussa-Ivaldi, Ferdinando A.
2012-01-01
Progress in decoding neural signals has enabled the development of interfaces that translate cortical brain activities into commands for operating robotic arms and other devices. The electrical stimulation of sensory areas provides a means to create artificial sensory information about the state of a device. Taken together, neural activity recording and microstimulation techniques allow us to embed a portion of the central nervous system within a closed-loop system, whose behavior emerges from the combined dynamical properties of its neural and artificial components. In this study we asked if it is possible to concurrently regulate this bidirectional brain-machine interaction so as to shape a desired dynamical behavior of the combined system. To this end, we followed a well-known biological pathway. In vertebrates, the communications between brain and limb mechanics are mediated by the spinal cord, which combines brain instructions with sensory information and organizes coordinated patterns of muscle forces driving the limbs along dynamically stable trajectories. We report the creation and testing of the first neural interface that emulates this sensory-motor interaction. The interface organizes a bidirectional communication between sensory and motor areas of the brain of anaesthetized rats and an external dynamical object with programmable properties. The system includes (a) a motor interface decoding signals from a motor cortical area, and (b) a sensory interface encoding the state of the external object into electrical stimuli to a somatosensory area. The interactions between brain activities and the state of the external object generate a family of trajectories converging upon a selected equilibrium point from arbitrary starting locations. Thus, the bidirectional interface establishes the possibility to specify not only a particular movement trajectory but an entire family of motions, which includes the prescribed reactions to unexpected perturbations. PMID
Dynamic neural activity during stress signals resilient coping.
Sinha, Rajita; Lacadie, Cheryl M; Constable, R Todd; Seo, Dongju
2016-08-01
Active coping underlies a healthy stress response, but neural processes supporting such resilient coping are not well-known. Using a brief, sustained exposure paradigm contrasting highly stressful, threatening, and violent stimuli versus nonaversive neutral visual stimuli in a functional magnetic resonance imaging (fMRI) study, we show significant subjective, physiologic, and endocrine increases and temporally related dynamically distinct patterns of neural activation in brain circuits underlying the stress response. First, stress-specific sustained increases in the amygdala, striatum, hypothalamus, midbrain, right insula, and right dorsolateral prefrontal cortex (DLPFC) regions supported the stress processing and reactivity circuit. Second, dynamic neural activation during stress versus neutral runs, showing early increases followed by later reduced activation in the ventrolateral prefrontal cortex (VLPFC), dorsal anterior cingulate cortex (dACC), left DLPFC, hippocampus, and left insula, suggested a stress adaptation response network. Finally, dynamic stress-specific mobilization of the ventromedial prefrontal cortex (VmPFC), marked by initial hypoactivity followed by increased VmPFC activation, pointed to the VmPFC as a key locus of the emotional and behavioral control network. Consistent with this finding, greater neural flexibility signals in the VmPFC during stress correlated with active coping ratings whereas lower dynamic activity in the VmPFC also predicted a higher level of maladaptive coping behaviors in real life, including binge alcohol intake, emotional eating, and frequency of arguments and fights. These findings demonstrate acute functional neuroplasticity during stress, with distinct and separable brain networks that underlie critical components of the stress response, and a specific role for VmPFC neuroflexibility in stress-resilient coping. PMID:27432990
Deep Dynamic Neural Networks for Multimodal Gesture Segmentation and Recognition.
Wu, Di; Pigou, Lionel; Kindermans, Pieter-Jan; Le, Nam Do-Hoang; Shao, Ling; Dambre, Joni; Odobez, Jean-Marc
2016-08-01
This paper describes a novel method called Deep Dynamic Neural Networks (DDNN) for multimodal gesture recognition. A semi-supervised hierarchical dynamic framework based on a Hidden Markov Model (HMM) is proposed for simultaneous gesture segmentation and recognition where skeleton joint information, depth and RGB images, are the multimodal input observations. Unlike most traditional approaches that rely on the construction of complex handcrafted features, our approach learns high-level spatio-temporal representations using deep neural networks suited to the input modality: a Gaussian-Bernoulli Deep Belief Network (DBN) to handle skeletal dynamics, and a 3D Convolutional Neural Network (3DCNN) to manage and fuse batches of depth and RGB images. This is achieved through the modeling and learning of the emission probabilities of the HMM required to infer the gesture sequence. This purely data-driven approach achieves a Jaccard index score of 0.81 in the ChaLearn LAP gesture spotting challenge. The performance is on par with a variety of state-of-the-art hand-tuned feature-based approaches and other learning-based methods, therefore opening the door to the use of deep learning techniques in order to further explore multimodal time series data. PMID:26955020
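A minimal sketch of how per-frame neural-network scores can drive HMM decoding, as in the hybrid framework above; the Viterbi routine and the toy numbers are illustrative, not the authors' implementation:

```python
import numpy as np

def viterbi(log_emit, log_trans, log_init):
    """Most likely state path given per-frame log emission scores.

    log_emit: (T, S) array, e.g. log-probabilities from a neural network;
    log_trans: (S, S) transition log-probabilities; log_init: (S,) priors.
    """
    T, S = log_emit.shape
    delta = log_init + log_emit[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans      # (from_state, to_state)
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emit[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# toy example: 3 frames, 2 gesture states; emissions favor 0, 0, then 1
path = viterbi(np.log([[0.9, 0.1], [0.9, 0.1], [0.1, 0.9]]),
               np.log([[0.6, 0.4], [0.4, 0.6]]),
               np.log([0.5, 0.5]))
```

In the paper's setting the emission scores come from the DBN/3DCNN outputs; here they are hand-written constants.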
Response of traveling waves to transient inputs in neural fields
NASA Astrophysics Data System (ADS)
Kilpatrick, Zachary P.; Ermentrout, Bard
2012-02-01
We analyze the effects of transient stimulation on traveling waves in neural field equations. Neural fields are modeled as integro-differential equations whose convolution term represents the synaptic connections of a spatially extended neuronal network. The adjoint of the linearized wave equation can be used to identify how a particular input will shift the location of a traveling wave. This wave response function is analogous to the phase response curve of limit cycle oscillators. For traveling fronts in an excitatory network, the sign of the shift depends solely on the sign of the transient input. A complementary estimate of the effective shift is derived using an equation for the time-dependent speed of the perturbed front. Traveling pulses are analyzed in an asymmetric lateral inhibitory network and they can be advanced or delayed, depending on the position of spatially localized transient inputs. We also develop bounds on the amplitude of transient input necessary to terminate traveling pulses, based on the global bifurcation structure of the neural field.
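The class of models analyzed above can be illustrated with a minimal Euler discretization of a one-dimensional neural field; the kernel, sigmoidal firing rate, and parameters below are illustrative choices, not the paper's:

```python
import numpy as np

# Illustrative discretization of du/dt = -u + (w * f(u))(x), with a sigmoidal
# firing rate f(u) = 1 / (1 + exp(-beta (u - theta))); parameters are ours.
N, L = 400, 40.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
w = 0.5 * np.exp(-np.abs(x))                        # exponential excitatory kernel
f = lambda u: 1.0 / (1.0 + np.exp(-20.0 * (u - 0.3)))

def evolve(u, dt=0.1, steps=200):
    for _ in range(steps):
        conv = dx * np.convolve(f(u), w, mode="same")   # nonlocal synaptic input
        u = u + dt * (-u + conv)
    return u

u0 = np.where(x < -10.0, 1.0, 0.0)                  # activity on the left only
u1 = evolve(u0)
# the active (u > 0.5) region grows: the front invades the quiescent state
```

Transient inputs of the kind studied in the abstract would enter as an additional additive term inside the Euler update.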
Protein Secondary Structure Prediction Using Deep Convolutional Neural Fields.
Wang, Sheng; Peng, Jian; Ma, Jianzhu; Xu, Jinbo
2016-01-01
Protein secondary structure (SS) prediction is important for studying protein structure and function. When only the sequence (profile) information is used as input feature, currently the best predictors can obtain ~80% Q3 accuracy, which has not been improved in the past decade. Here we present DeepCNF (Deep Convolutional Neural Fields) for protein SS prediction. DeepCNF is a Deep Learning extension of Conditional Neural Fields (CNF), which is an integration of Conditional Random Fields (CRF) and shallow neural networks. DeepCNF can model not only complex sequence-structure relationship by a deep hierarchical architecture, but also interdependency between adjacent SS labels, so it is much more powerful than CNF. Experimental results show that DeepCNF can obtain ~84% Q3 accuracy, ~85% SOV score, and ~72% Q8 accuracy, respectively, on the CASP and CAMEO test proteins, greatly outperforming currently popular predictors. As a general framework, DeepCNF can be used to predict other protein structure properties such as contact number, disorder regions, and solvent accessibility. PMID:26752681
Dynamic Modeling of time series using Artificial Neural Networks
NASA Astrophysics Data System (ADS)
Nair, A. D.; Principe, Jose C.
1995-12-01
Because Artificial Neural Networks (ANNs) have the ability to adapt to and learn complex topologies, they represent a new technology with which to explore dynamical systems. Multi-step prediction is used to capture the dynamics of the system that produced the time series. Multi-step prediction is implemented by a recurrent ANN trained with trajectory learning. Two separate memories are employed in training the ANN: the common tapped delay-line memory and the new gamma memory. This methodology has been applied to the time series of a white dwarf and to the quasar 3C 345.
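The gamma memory mentioned above can be sketched with the standard gamma-filter recursion g_k(t) = (1 - mu) g_k(t-1) + mu g_{k-1}(t-1), with g_0(t) equal to the input; the function name and stage count here are our illustrative choices:

```python
import numpy as np

def gamma_memory(u, K, mu):
    """Run a K-stage gamma memory over input sequence u.

    Stage recursion: g_k(t) = (1 - mu) * g_k(t-1) + mu * g_{k-1}(t-1),
    with g_0(t) = u(t). For mu = 1 this reduces to a tapped delay line;
    for 0 < mu < 1 each stage is a leaky, dispersive delay.
    """
    g = np.zeros(K + 1)
    out = []
    for u_t in u:
        prev = g.copy()
        g[0] = u_t
        for k in range(1, K + 1):
            g[k] = (1 - mu) * prev[k] + mu * prev[k - 1]
        out.append(g.copy())
    return np.array(out)

taps = gamma_memory([1.0, 0.0, 0.0, 0.0], K=2, mu=1.0)
# with mu = 1 an impulse marches through the stages like a delay line
```

The stage outputs play the role of the delayed inputs fed to the recurrent ANN during trajectory learning.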
Perspective: network-guided pattern formation of neural dynamics.
Hütt, Marc-Thorsten; Kaiser, Marcus; Hilgetag, Claus C
2014-10-01
The understanding of neural activity patterns is fundamentally linked to an understanding of how the brain's network architecture shapes dynamical processes. Established approaches rely mostly on deviations of a given network from certain classes of random graphs. Hypotheses about the supposed role of prominent topological features (for instance, the roles of modularity, network motifs or hierarchical network organization) are derived from these deviations. An alternative strategy could be to study deviations of network architectures from regular graphs (rings and lattices) and consider the implications of such deviations for self-organized dynamic patterns on the network. Following this strategy, we draw on the theory of spatio-temporal pattern formation and propose a novel perspective for analysing dynamics on networks, by evaluating how the self-organized dynamics are confined by network architecture to a small set of permissible collective states. In particular, we discuss the role of prominent topological features of brain connectivity, such as hubs, modules and hierarchy, in shaping activity patterns. We illustrate the notion of network-guided pattern formation with numerical simulations and outline how it can facilitate the understanding of neural dynamics. PMID:25180302
Mean-field equations, bifurcation map and route to chaos in discrete time neural networks
NASA Astrophysics Data System (ADS)
Cessac, B.; Doyon, B.; Quoy, M.; Samuelides, M.
1994-07-01
We investigate the dynamical behaviour of neural networks with asymmetric synaptic weights, in the presence of random thresholds. We inspect low gain dynamics before using mean-field equations to study the bifurcations of the fixed points and the change of regime that occurs when varying control parameters. We infer different areas with various regimes summarized by a bifurcation map in the parameter space. We numerically show the occurrence of chaos that arises generically by a quasi-periodicity route. We then discuss some features of our system in relation with biological observations such as low firing rates and refractory periods.
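The kind of discrete-time dynamics studied above can be sketched as follows; the network size, gain values, and random seed are illustrative, assuming a sigmoidal firing-rate map with asymmetric Gaussian couplings and random thresholds:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100
J = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))  # asymmetric random weights
theta = rng.normal(0.0, 0.1, size=N)                # random thresholds

def final_drift(g, steps=500):
    """Iterate x(t+1) = tanh(g * J x(t) + theta); return the last update size."""
    x = 0.1 * rng.normal(size=N)
    drift = 0.0
    for _ in range(steps):
        x_new = np.tanh(g * (J @ x) + theta)
        drift = float(np.max(np.abs(x_new - x)))
        x = x_new
    return drift

drift_low = final_drift(g=0.5)    # low gain: the dynamics settle onto a fixed point
drift_high = final_drift(g=4.0)   # high gain: updates typically stay O(1) (irregular regime)
```

Sweeping the gain `g` between these extremes is how one would trace out the bifurcation map the abstract describes.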
The dynamical stability of reverberatory neural circuits.
Tegnér, Jesper; Compte, Albert; Wang, Xiao-Jing
2002-12-01
The concept of reverberation proposed by Lorente de Nó and Hebb is key to understanding strongly recurrent cortical networks. In particular, synaptic reverberation is now viewed as a likely mechanism for the active maintenance of working memory in the prefrontal cortex. Theoretically, this has spurred a debate as to how such a potentially explosive mechanism can provide stable working-memory function given the synaptic and cellular mechanisms at play in the cerebral cortex. We present here new evidence for the participation of NMDA receptors in the stabilization of persistent delay activity in a biophysical network model of conductance-based neurons. We show that the stability of working-memory function, and the required NMDA/AMPA ratio at recurrent excitatory synapses, depend on physiological properties of neurons and synaptic interactions, such as the time constants of excitation and inhibition, mutual inhibition between interneurons, differential NMDA receptor participation at excitatory projections to pyramidal neurons and interneurons, or the presence of slow intrinsic ion currents in pyramidal neurons. We review other mechanisms proposed to enhance the dynamical stability of synaptically generated attractor states of a reverberatory circuit. This recent work represents a necessary and significant step towards testing attractor network models by cortical electrophysiology. PMID:12461636
Predicting physical time series using dynamic ridge polynomial neural networks.
Al-Jumeily, Dhiya; Ghazali, Rozaida; Hussain, Abir
2014-01-01
Forecasting naturally occurring phenomena is a common problem in many domains of science, and this has been addressed and investigated by many scientists. The importance of time series prediction stems from the fact that it has a wide range of applications, including control systems, engineering processes, environmental systems and economics. From the knowledge of some aspects of the previous behaviour of the system, the aim of the prediction process is to determine or predict its future behaviour. In this paper, we consider a novel application of a higher order polynomial neural network architecture called the Dynamic Ridge Polynomial Neural Network, which combines the properties of higher order and recurrent neural networks, to the prediction of physical time series. In this study, four types of signals have been used: the Lorenz attractor, the mean value of the AE index, the sunspot number, and heat wave temperature. The simulation results showed good improvements in terms of signal-to-noise ratio in comparison to a number of benchmark higher order and feedforward neural networks. PMID:25157950
From invasion to extinction in heterogeneous neural fields.
Bressloff, Paul C
2012-01-01
In this paper, we analyze the invasion and extinction of activity in heterogeneous neural fields. We first consider the effects of spatial heterogeneities on the propagation of an invasive activity front. In contrast to previous studies of front propagation in neural media, we assume that the front propagates into an unstable rather than a metastable zero-activity state. For sufficiently localized initial conditions, the asymptotic velocity of the resulting pulled front is given by the linear spreading velocity, which is determined by linearizing about the unstable state within the leading edge of the front. One of the characteristic features of these so-called pulled fronts is their sensitivity to perturbations inside the leading edge. This means that standard perturbation methods for studying the effects of spatial heterogeneities or external noise fluctuations break down. We show how to extend a partial differential equation method for analyzing pulled fronts in slowly modulated environments to the case of neural fields with slowly modulated synaptic weights. The basic idea is to rescale space and time so that the front becomes a sharp interface whose location can be determined by solving a corresponding local Hamilton-Jacobi equation. We use steepest descents to derive the Hamilton-Jacobi equation from the original nonlocal neural field equation. In the case of weak synaptic heterogeneities, we then use perturbation theory to solve the corresponding Hamilton equations and thus determine the time-dependent wave speed. In the second part of the paper, we investigate how time-dependent heterogeneities in the form of extrinsic multiplicative noise can induce rare noise-driven transitions to the zero-activity state, which now acts as an absorbing state signaling the extinction of all activity. In this case, the most probable path to extinction can be obtained by solving the classical equations of motion that dominate a path integral representation of the stochastic
Bio-Inspired Neural Model for Learning Dynamic Models
NASA Technical Reports Server (NTRS)
Duong, Tuan; Duong, Vu; Suri, Ronald
2009-01-01
A neural-network mathematical model that, relative to prior such models, places greater emphasis on some of the temporal aspects of real neural physical processes, has been proposed as a basis for massively parallel, distributed algorithms that learn dynamic models of possibly complex external processes by means of learning rules that are local in space and time. The algorithms could be made to perform such functions as recognition and prediction of words in speech and of objects depicted in video images. The approach embodied in this model is said to be "hardware-friendly" in the following sense: The algorithms would be amenable to execution by special-purpose computers implemented as very-large-scale integrated (VLSI) circuits that would operate at relatively high speeds and low power demands.
Dynamic neural mechanisms underlie race disparities in social cognition.
Cassidy, Brittany S; Krendl, Anne C
2016-05-15
Race disparities in behavior may emerge in several ways, some of which may be independent of implicit bias. To mitigate the pernicious effects of different race disparities for racial minorities, we must understand whether they are rooted in perceptual, affective, or cognitive processing with regard to race perception. We used fMRI to disentangle dynamic neural mechanisms predictive of two separable race disparities that can be obtained from a trustworthiness ratings task. Increased coupling between regions involved in perceptual and affective processing when viewing Black versus White faces predicted less later racial trust disparity, which was related to implicit bias. In contrast, increased functional coupling between regions involved in controlled processing predicted less later disparity in the differentiation of Black versus White faces with regard to perceived trust, which was unrelated to bias. These findings reveal that distinct neural signatures underlie separable race disparities in social cognition that may or may not be related to implicit bias. PMID:26908320
Dynamical criticality in the collective activity of a neural population
NASA Astrophysics Data System (ADS)
Mora, Thierry
The past decade has seen a wealth of physiological data suggesting that neural networks may behave like critical branching processes. Concurrently, the collective activity of neurons has been studied using explicit mappings to classic statistical mechanics models such as disordered Ising models, allowing for the study of their thermodynamics, but these efforts have ignored the dynamical nature of neural activity. I will show how to reconcile these two approaches by learning effective statistical mechanics models of the full history of the collective activity of a neuron population directly from physiological data, treating time as an additional dimension. Applying this technique to multi-electrode recordings from retinal ganglion cells, and studying the thermodynamics of the inferred model, reveals a peak in specific heat reminiscent of a second-order phase transition.
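The thermodynamic analysis sketched above (a specific-heat peak computed from energy fluctuations of an inferred model) can be illustrated on a toy Ising-type model small enough for exact enumeration; the couplings and temperature grid here are arbitrary, not fitted to retinal data:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
N = 8
J = np.triu(rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N)), 1)  # pairwise couplings (i < j)

# enumerate all 2^N spin configurations and their energies E(s) = -sum J_ij s_i s_j
states = np.array(list(itertools.product([-1, 1], repeat=N)))
E = -np.einsum("si,ij,sj->s", states, J, states)

def specific_heat(T):
    """C(T) = Var(E) / T^2 under the Boltzmann distribution at temperature T."""
    w = np.exp(-(E - E.min()) / T)
    p = w / w.sum()
    mean_E = p @ E
    return (p @ E**2 - mean_E**2) / T**2

Ts = np.linspace(0.2, 3.0, 30)
C = np.array([specific_heat(T) for T in Ts])
# C(T) peaks at an intermediate temperature, the signature discussed above
```

Real analyses of this kind replace the random couplings with parameters inferred from multi-electrode spike data and treat time as an extra dimension.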
Dynamic digital watermark technique based on neural network
NASA Astrophysics Data System (ADS)
Gu, Tao; Li, Xu
2008-04-01
An algorithm for dynamic watermarking based on a neural network is presented that is more robust against false-authentication attacks and watermark-tampering operations than single-watermark embedding methods. (1) Five binary images used as watermarks are coded into a binary array; every 0 or 1 is enlarged fivefold by an information-enlarging technique, so the total number of 0s and 1s is 5*N, where N is the original number of watermark bits. (2) A seed image pixel p(x,y) and its 3x3 neighbourhood pixels p(x-1,y-1), p(x-1,y), p(x-1,y+1), p(x,y-1), p(x,y+1), p(x+1,y-1), p(x+1,y), p(x+1,y+1) form one sample: p(x,y) is used as the neural network target and the other eight pixel values as the network inputs. (3) To train the network on this sample space, 5*N pixel values and their neighbouring pixel values are chosen at random, using a password, from a colour BMP-format image. (4) A four-layer neural network is constructed to describe the nonlinear mapping between inputs and outputs. (5) One bit of the array is embedded by adjusting the polarity between a chosen pixel value and the model's output value. (6) A randomizer generates a number that determines how many watermarks to retrieve. The watermarks can be retrieved, without knowing the original image or watermarks, from the restored neural network output values, the corresponding image pixel values, and the restore function (the restored coded-watermark bit is 1 if o(x,y) (restored) > p(x,y) (reconstructed), else 0). The retrieved watermarks differ on each extraction, so the proposed technique offers more watermarking proofs than a single-watermark embedding algorithm. Experimental results show that the proposed technique is very robust against some image-processing operations and JPEG lossy compression. The algorithm can therefore be used to protect the copyright of an important image.
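The polarity-based embedding rule in step (5) can be sketched as follows; the margin `delta` and the function names are our illustrative choices, not the paper's exact scheme:

```python
def embed_bit(pixel, net_out, bit, delta=2.0):
    """Adjust a pixel so its polarity relative to the network output encodes one bit."""
    if bit == 1 and pixel <= net_out:
        return net_out + delta      # force pixel above the model's prediction
    if bit == 0 and pixel >= net_out:
        return net_out - delta      # force pixel below the model's prediction
    return pixel                    # polarity already encodes the bit: leave unchanged

def extract_bit(pixel, net_out):
    """Recover the bit from the pixel/prediction polarity (restore function)."""
    return 1 if pixel > net_out else 0

# round trip: the embedded bit is recovered without the original image
recovered = extract_bit(embed_bit(100.0, 105.0, bit=1), 105.0)
```

In the full scheme `net_out` is the trained network's prediction from the 3x3 neighbourhood, so extraction needs only the watermarked image and the network.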
Dynamics of gauge field inflation
Alexander, Stephon; Jyoti, Dhrubo; Kosowsky, Arthur; Marcianò, Antonino
2015-05-05
We analyze the existence and stability of dynamical attractor solutions for cosmological inflation driven by the coupling between fermions and a gauge field. Assuming a spatially homogeneous and isotropic gauge field and fermion current, the interacting fermion equation of motion reduces to that of a free fermion up to a phase shift. Consistency of the model is ensured via the Stückelberg mechanism. We prove the existence of exactly one stable solution, and demonstrate the stability numerically. Inflation arises without fine tuning, and does not require postulating any effective potential or non-standard coupling.
Autonomic neural control of dynamic cerebral autoregulation in humans
NASA Technical Reports Server (NTRS)
Zhang, Rong; Zuckerman, Julie H.; Iwasaki, Kenichi; Wilson, Thad E.; Crandall, Craig G.; Levine, Benjamin D.
2002-01-01
BACKGROUND: The purpose of the present study was to determine the role of autonomic neural control of dynamic cerebral autoregulation in humans. METHODS AND RESULTS: We measured arterial pressure and cerebral blood flow (CBF) velocity in 12 healthy subjects (aged 29+/-6 years) before and after ganglion blockade with trimethaphan. CBF velocity was measured in the middle cerebral artery using transcranial Doppler. The magnitude of spontaneous changes in mean blood pressure and CBF velocity was quantified by spectral analysis. The transfer function gain, phase, and coherence between these variables were estimated to quantify dynamic cerebral autoregulation. After ganglion blockade, systolic and pulse pressure decreased significantly by 13% and 26%, respectively. CBF velocity decreased by 6% (P<0.05). In the very low frequency range (0.02 to 0.07 Hz), mean blood pressure variability decreased significantly (by 82%), while CBF velocity variability persisted. Thus, transfer function gain increased by 81%. In addition, the phase lead of CBF velocity to arterial pressure diminished. These changes in transfer function gain and phase persisted despite restoration of arterial pressure by infusion of phenylephrine and normalization of mean blood pressure variability by oscillatory lower body negative pressure. CONCLUSIONS: These data suggest that dynamic cerebral autoregulation is altered by ganglion blockade. We speculate that autonomic neural control of the cerebral circulation is tonically active and likely plays a significant role in the regulation of beat-to-beat CBF in humans.
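Transfer function gain, phase, and coherence of the kind estimated above are commonly computed by cross-spectral (Welch) analysis; the sketch below uses synthetic signals with a known pure gain rather than the study's data, and assumes SciPy's `welch`/`csd`/`coherence` routines:

```python
import numpy as np
from scipy import signal

fs = 10.0                                  # resampled beat-to-beat series (Hz, illustrative)
t = np.arange(0, 300, 1 / fs)
rng = np.random.default_rng(2)
abp = rng.normal(size=t.size)              # stand-in for mean arterial pressure fluctuations
cbfv = 2.0 * abp                           # toy linear "autoregulation": pure gain of 2

f, Pxx = signal.welch(abp, fs=fs, nperseg=256)        # pressure autospectrum
_, Pxy = signal.csd(abp, cbfv, fs=fs, nperseg=256)    # cross-spectrum
_, Cxy = signal.coherence(abp, cbfv, fs=fs, nperseg=256)

gain = np.abs(Pxy) / Pxx                   # transfer function gain
phase = np.angle(Pxy)                      # phase (radians)
# for this toy system: gain ~ 2, phase ~ 0, coherence ~ 1 at all frequencies
```

In the study, gain and phase would be read off in the very low frequency band (0.02 to 0.07 Hz) before and after ganglion blockade.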
Can Neural Activity Propagate by Endogenous Electrical Field?
Qiu, Chen; Shivacharan, Rajat S; Zhang, Mingming; Durand, Dominique M
2015-12-01
It is widely accepted that synaptic transmission and gap junctions are the major governing mechanisms for signal propagation in the nervous system. Yet, a group of neural waves, either physiological or pathological, share the same speed of ∼0.1 m/s without synaptic transmission or gap junctions, and this speed is not consistent with axonal conduction or ionic diffusion. The only explanation left is an electrical field effect. We tested the hypothesis that endogenous electric fields are sufficient to explain the propagation with in silico and in vitro experiments. Simulation results show that field effects alone can indeed mediate propagation across layers of neurons with speeds of 0.12 ± 0.09 m/s with pathological kinetics, and 0.11 ± 0.03 m/s with physiologic kinetics, both generating weak field amplitudes of ∼2-6 mV/mm. Further, the model predicted that propagation speed values are inversely proportional to the cell-to-cell distances, but do not significantly change with extracellular resistivity, membrane capacitance, or membrane resistance. In vitro recordings in mouse hippocampi produced similar speeds (0.10 ± 0.03 m/s) and field amplitudes (2.5-5 mV/mm), and by applying a blocking field, the propagation speed was greatly reduced. Finally, osmolarity experiments confirmed the model's prediction that cell-to-cell distance inversely affects propagation speed. Together, these results show that despite their weak amplitude, electric fields can be solely responsible for spike propagation at ∼0.1 m/s. This phenomenon could be important to explain the slow propagation of epileptic activity and other normal propagations at similar speeds. PMID:26631463
NASA Astrophysics Data System (ADS)
Touboul, Jonathan
2012-08-01
In this manuscript we analyze the collective behavior of mean-field limits of large-scale, spatially extended stochastic neuronal networks with delays. Rigorously, the asymptotic regime of such systems is characterized by a very intricate stochastic delayed integro-differential McKean-Vlasov equation that remains impenetrable, leaving the stochastic collective dynamics of such networks poorly understood. In order to study these macroscopic dynamics, we analyze networks of firing-rate neurons, i.e. with linear intrinsic dynamics and sigmoidal interactions. In that case, we prove that the solution of the mean-field equation is Gaussian, hence characterized by its first two moments, and that these two quantities satisfy a set of coupled delayed integro-differential equations. These equations are similar to usual neural field equations, and incorporate noise levels as a parameter, allowing analysis of noise-induced transitions. We identify through bifurcation analysis several qualitative transitions due to noise in the mean-field limit. In particular, stabilization of spatially homogeneous solutions, synchronized oscillations, bumps, chaotic dynamics, wave or bump splitting are exhibited and arise from static or dynamic Turing-Hopf bifurcations. These surprising phenomena invite further exploration of the role of noise in the nervous system.
Phase transitions in a dynamic model of neural networks
NASA Astrophysics Data System (ADS)
Shim, G. M.; Choi, M. Y.; Kim, D.
1991-01-01
A dynamic model for neural networks that explicitly takes into account the existence of several time scales without discretizing the time is studied analytically via the use of path integrals. The maximum capacity of the network is found to be that of the Hopfield model divided by 1+a², where a is the ratio of the refractory period to the action-potential duration. We obtain the phase diagram as a function of a, the capacity, and the temperature. The overall phase diagram is rich in structure, exhibiting first-order transitions as well as continuous ones.
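The stated capacity result is easy to tabulate. The scaling by 1/(1+a²) is as quoted in the abstract; the baseline value α_c ≈ 0.138 patterns per neuron is the classical Hopfield capacity and is supplied here as an assumption, since the abstract quotes the result only relative to the Hopfield model.

```python
# Classical Hopfield storage capacity (patterns per neuron); assumed baseline,
# not stated in the abstract itself.
HOPFIELD_ALPHA_C = 0.138

def scaled_capacity(a: float) -> float:
    """Capacity per neuron when a = refractory period / action-potential duration."""
    return HOPFIELD_ALPHA_C / (1.0 + a ** 2)

for a in (0.0, 0.5, 1.0, 2.0):
    print(f"a = {a}: alpha_c = {scaled_capacity(a):.4f}")
# a = 0 recovers the Hopfield value; a = 1 halves it.
```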
Neural representation of dynamic frequency is degraded in older adults.
Clinard, Christopher G; Cotter, Caitlin M
2015-05-01
Older adults, even with clinically normal hearing sensitivity, often report difficulty understanding speech in the presence of background noise. Part of this difficulty may be related to age-related degradations in the neural representation of speech sounds, such as formant transitions. Frequency-following responses (FFRs), which are dependent on phase-locked neural activity, were elicited using sounds consisting of linear frequency sweeps, which may be viewed as simple models of formant transitions. Eighteen adults (ten younger, 22-24 years old, and nine older, 51-67 years old) were tested. FFRs were elicited by tonal sweeps in six conditions. Two directions of frequency change, rising or falling, were used for each of three rates of frequency change. Stimulus-to-response cross correlations revealed that older adults had significantly poorer representation of the tonal sweeps, and that FFRs became poorer for faster rates of change. An additional FFR signal-to-noise ratio analysis based on time windows revealed that across the FFR waveforms and rates of frequency change, older adults had smaller (poorer) signal-to-noise ratios. These results indicate that older adults, even with clinically-normal hearing sensitivity, have degraded phase-locked neural representations of dynamic frequency. PMID:25724819
Classification of the extracellular fields produced by activated neural structures
Richerson, Samantha; Ingram, Mark; Perry, Danielle; Stecker, Mark M
2005-01-01
Background Classifying the types of extracellular potentials recorded when neural structures are activated is an important component in understanding nerve pathophysiology. Varying definitions and approaches to understanding the factors that influence the potentials recorded during neural activity have made this issue complex. Methods In this article, many of the factors which influence the distribution of electric potential produced by a traveling action potential are discussed from a theoretical standpoint with illustrative simulations. Results For an axon of arbitrary shape, it is shown that a quadrupolar potential is generated by action potentials traveling along a straight axon. However, a dipole moment is generated at any point where an axon bends or its diameter changes. Next, it is shown how asymmetric disturbances in the conductivity of the medium surrounding an axon produce dipolar potentials, even during propagation along a straight axon. Next, by studying the electric fields generated by a dipole source in an insulating cylinder, it is shown that in finite volume conductors, the extracellular potentials can be very different from those in infinite volume conductors. Finally, the effects of impulses propagating along axons with inhomogeneous cable properties are analyzed. Conclusion Because of the well-defined factors affecting extracellular potentials, the vague terms far-field and near-field potentials should be abandoned in favor of more accurate descriptions of the potentials. PMID:16146569
Endothelial cells regulate neural crest and second heart field morphogenesis
Milgrom-Hoffman, Michal; Michailovici, Inbal; Ferrara, Napoleone; Zelzer, Elazar; Tzahor, Eldad
2014-01-01
ABSTRACT Cardiac and craniofacial developmental programs are intricately linked during early embryogenesis, which is also reflected by a high frequency of birth defects affecting both regions. The molecular nature of the crosstalk between mesoderm and neural crest progenitors and the involvement of endothelial cells within the cardio–craniofacial field are largely unclear. Here we show in the mouse that genetic ablation of vascular endothelial growth factor receptor 2 (Flk1) in the mesoderm results in early embryonic lethality, severe deformation of the cardio–craniofacial field, lack of endothelial cells and a poorly formed vascular system. We provide evidence that endothelial cells are required for migration and survival of cranial neural crest cells and consequently for the deployment of second heart field progenitors into the cardiac outflow tract. Insights into the molecular mechanisms reveal marked reduction in Transforming growth factor beta 1 (Tgfb1) along with changes in the extracellular matrix (ECM) composition. Our collective findings in both mouse and avian models suggest that endothelial cells coordinate cardio–craniofacial morphogenesis, in part via a conserved signaling circuit regulating ECM remodeling by Tgfb1. PMID:24996922
Neural measures of dynamic changes in attentive tracking load.
Drew, Trafton; Horowitz, Todd S; Wolfe, Jeremy M; Vogel, Edward K
2012-02-01
In everyday life, we often need to track several objects simultaneously, a task modeled in the laboratory using the multiple-object tracking (MOT) task [Pylyshyn, Z., & Storm, R. W. Tracking multiple independent targets: Evidence for a parallel tracking mechanism. Spatial Vision, 3, 179-197, 1988]. Unlike MOT, however, in life, the set of relevant targets tends to be fluid and change over time. Humans are quite adept at "juggling" targets in and out of the target set [Wolfe, J. M., Place, S. S., & Horowitz, T. S. Multiple object juggling: Changing what is tracked during extended MOT. Psychonomic Bulletin & Review, 14, 344-349, 2007]. Here, we measured the neural underpinnings of this process using electrophysiological methods. Vogel and colleagues [McCollough, A. W., Machizawa, M. G., & Vogel, E. K. Electrophysiological measures of maintaining representations in visual working memory. Cortex, 43, 77-94, 2007; Vogel, E. K., McCollough, A. W., & Machizawa, M. G. Neural measures reveal individual differences in controlling access to working memory. Nature, 438, 500-503, 2005; Vogel, E. K., & Machizawa, M. G. Neural activity predicts individual differences in visual working memory capacity. Nature, 428, 748-751, 2004] have shown that the amplitude of a sustained lateralized negativity, the contralateral delay activity (CDA), indexes the number of items held in visual working memory. Drew and Vogel [Drew, T., & Vogel, E. K. Neural measures of individual differences in selecting and tracking multiple moving objects. Journal of Neuroscience, 28, 4183-4191, 2008] showed that the CDA also indexes the number of items being tracked in a standard MOT task. In the current study, we set out to determine whether the CDA is a signal that merely represents the number of objects that are attended during a trial or a dynamic signal capable of reflecting on-line changes in tracking load during a single trial. By measuring the response to add or drop cues, we were able to observe dynamic
Adaptive neural information processing with dynamical electrical synapses
Xiao, Lei; Zhang, Dan-ke; Li, Yuan-qing; Liang, Pei-ji; Wu, Si
2013-01-01
The present study investigates a potential computational role of dynamical electrical synapses in neural information processing. Compared with chemical synapses, electrical synapses are more efficient in modulating the concerted activity of neurons. Based on the experimental data, we propose a phenomenological model for short-term facilitation of electrical synapses. The model satisfactorily reproduces the phenomenon that the neuronal correlation increases although the neuronal firing rates attenuate during luminance adaptation. We explore how the stimulus information is encoded in parallel by firing rates and correlated activity of neurons, and find that dynamical electrical synapses mediate a transition from the firing rate code to the correlation code during luminance adaptation. The latter encodes the stimulus information by using the concerted, but lower, neuronal firing rate, and hence is economically more efficient. PMID:23596413
Binocular rivalry waves in a directionally selective neural field model
NASA Astrophysics Data System (ADS)
Carroll, Samuel R.; Bressloff, Paul C.
2014-10-01
We extend a neural field model of binocular rivalry waves in the visual cortex to incorporate direction selectivity of moving stimuli. For each eye, we consider a one-dimensional network of neurons that respond maximally to a fixed orientation and speed of a grating stimulus. Recurrent connections within each one-dimensional network are taken to be excitatory and asymmetric, where the asymmetry captures the direction and speed of the moving stimuli. Connections between the two networks are taken to be inhibitory (cross-inhibition). As per previous studies, we incorporate slow adaptation as a symmetry breaking mechanism that allows waves to propagate. We derive an analytical expression for traveling wave solutions of the neural field equations, as well as an implicit equation for the wave speed as a function of neurophysiological parameters, and analyze their stability. Most importantly, we show that propagation of traveling waves is faster in the direction of stimulus motion than against it, which is in agreement with previous experimental and computational studies.
Some theoretical and numerical results for delayed neural field equations
NASA Astrophysics Data System (ADS)
Faye, Grégory; Faugeras, Olivier
2010-05-01
In this paper we study neural field models with delays which define a useful framework for modeling macroscopic parts of the cortex involving several populations of neurons. Nonlinear delayed integro-differential equations describe the spatio-temporal behavior of these fields. Using methods from the theory of delay differential equations, we show the existence and uniqueness of a solution of these equations. A Lyapunov analysis gives us sufficient conditions for the solutions to be asymptotically stable. We also present a fairly detailed study of the numerical computation of these solutions. This is, to our knowledge, the first time that a serious analysis of the problem of the existence and uniqueness of a solution of these equations has been performed. Another original contribution of ours is the definition of a Lyapunov functional and the result of stability it implies. We illustrate our numerical schemes on a variety of examples that are relevant to modeling in neuroscience.
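A minimal numerical scheme for a delayed neural field of the kind studied here might look as follows. This is an illustrative forward-Euler sketch, not the authors' scheme: the grid size, Mexican-hat kernel, sigmoid firing-rate function, and distance-proportional delays (d = |x-y|/v for an assumed conduction speed v) are all arbitrary illustrative choices.

```python
import numpy as np

# Forward-Euler integration of a 1-D delayed neural field
#   du/dt = -u(x,t) + \int w(x-y) S(u(y, t - |x-y|/v)) dy
n, L = 100, 10.0                   # grid points, domain half-width (assumed)
x = np.linspace(-L, L, n)
dx = x[1] - x[0]
dt = 0.05
v = 2.0                            # conduction speed -> delay d = |x-y| / v (assumed)

dist = np.abs(x[:, None] - x[None, :])
# Mexican-hat kernel, pre-multiplied by dx so a row sum approximates the integral
w = (1.5 * np.exp(-dist**2) - 0.5 * np.exp(-dist**2 / 9)) * dx
delay_steps = np.round(dist / v / dt).astype(int)       # delays in time steps

S = lambda u: 1.0 / (1.0 + np.exp(-5.0 * (u - 0.2)))    # sigmoid firing rate

max_delay = delay_steps.max()
history = np.zeros((max_delay + 1, n))                  # buffer of past states
history[-1] = 0.5 * np.exp(-x**2)                       # localized initial bump
for step in range(400):
    u = history[-1]
    # delayed input u(y, t - d(x,y)), looked up per (x, y) pair
    u_delayed = history[-1 - delay_steps, np.arange(n)[None, :]]
    du = -u + np.sum(w * S(u_delayed), axis=1)
    history = np.vstack([history[1:], u + dt * du])

print(history[-1].max())
```

The paper's point about well-posedness matters here: the scheme needs the whole history buffer over the maximal delay as an initial condition, which is exactly the function-space initial data required by the existence and uniqueness theory.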
Dynamic analysis of a general class of winner-take-all competitive neural networks.
Fang, Yuguang; Cohen, Michael A; Kincaid, Thomas G
2010-05-01
This paper studies a general class of dynamical neural networks with lateral inhibition, exhibiting winner-take-all (WTA) behavior. These networks are motivated by a metal-oxide-semiconductor field effect transistor (MOSFET) implementation of neural networks, in which mutual competition plays a very important role. We show that for a fairly general class of competitive neural networks, WTA behavior exists. Sufficient conditions for the network to have a WTA equilibrium are obtained, and rigorous convergence analysis is carried out. The conditions for the network to have the WTA behavior obtained in this paper provide design guidelines for the network implementation and fabrication. We also demonstrate that whenever the network gets into the WTA region, it will stay in that region and settle down exponentially fast to the WTA point. This provides a speeding procedure for the decision making: as soon as it gets into the region, the winner can be declared. Finally, we show that this WTA neural network has a self-resetting property, and a resetting principle is proposed. PMID:20215068
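The WTA behavior described (lateral inhibition driving all but the most strongly driven unit below threshold, with exponential convergence once the WTA region is entered) can be illustrated with a toy rate model. This is not the paper's MOSFET network; the self-excitation and inhibition gains alpha and beta are hypothetical values chosen so that a single winner emerges.

```python
import numpy as np

def wta(inputs, alpha=0.5, beta=2.0, dt=0.01, steps=5000):
    """Integrate du_i/dt = -u_i + alpha*r_i - beta*sum_{j!=i} r_j + I_i,
    with rectified activity r_i = max(u_i, 0). Returns final activities."""
    u = np.zeros_like(inputs, dtype=float)
    for _ in range(steps):
        r = np.maximum(u, 0.0)
        du = -u + alpha * r - beta * (r.sum() - r) + inputs
        u = u + dt * du
    return np.maximum(u, 0.0)

r = wta(np.array([0.3, 1.0, 0.7]))
print(np.argmax(r))   # index of the most strongly driven unit
```

With these gains the difference between any two active units grows (the competition is unstable along the difference mode), so the unit with the largest input suppresses the others to zero and settles at the fixed point u* = I / (1 - alpha), consistent with the paper's picture of exponentially fast settling inside the WTA region.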
Schoppik, David; Nagel, Katherine I; Lisberger, Stephen G
2008-04-24
Neural activity in the frontal eye fields controls smooth pursuit eye movements, but the relationship between single neuron responses, cortical population responses, and eye movements is not well understood. We describe an approach to dynamically link trial-to-trial fluctuations in neural responses to parallel variations in pursuit and demonstrate that individual neurons predict eye velocity fluctuations at particular moments during the course of behavior, while the population of neurons collectively tiles the entire duration of the movement. The analysis also reveals the strength of correlations in the eye movement predictions derived from pairs of simultaneously recorded neurons and suggests a simple model of cortical processing. These findings constrain the primate cortical code for movement, suggesting that either a few neurons are sufficient to drive pursuit at any given time or that many neurons operate collectively at each moment with remarkably little variation added to motor command signals downstream from the cortex. PMID:18439409
Reservoir Computing Properties of Neural Dynamics in Prefrontal Cortex
Procyk, Emmanuel; Dominey, Peter Ford
2016-01-01
Primates display a remarkable ability to adapt to novel situations. Determining what is most pertinent in these situations is not always possible based only on the current sensory inputs, and often also depends on recent inputs and behavioral outputs that contribute to internal states. Thus, one can ask how cortical dynamics generate representations of these complex situations. It has been observed that mixed selectivity in cortical neurons contributes to represent diverse situations defined by a combination of the current stimuli, and that mixed selectivity is readily obtained in randomly connected recurrent networks. In this context, these reservoir networks reproduce the highly recurrent nature of local cortical connectivity. Recombining present and past inputs, random recurrent networks from the reservoir computing framework generate mixed selectivity which provides pre-coded representations of an essentially universal set of contexts. These representations can then be selectively amplified through learning to solve the task at hand. We thus explored their representational power and dynamical properties after training a reservoir to perform a complex cognitive task initially developed for monkeys. The reservoir model inherently displayed a dynamic form of mixed selectivity, key to the representation of the behavioral context over time. The pre-coded representation of context was amplified by training a feedback neuron to explicitly represent this context, thereby reproducing the effect of learning and allowing the model to perform more robustly. This second version of the model demonstrates how a hybrid dynamical regime combining spatio-temporal processing of reservoirs, and input driven attracting dynamics generated by the feedback neuron, can be used to solve a complex cognitive task. We compared reservoir activity to neural activity of dorsal anterior cingulate cortex of monkeys which revealed similar network dynamics. We argue that reservoir computing is a
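The reservoir computing idea the abstract builds on — a fixed random recurrent network provides rich mixed-selective state trajectories, and only a linear readout is trained — can be sketched as an echo state network. This is a generic illustration, not the authors' model; the reservoir size, spectral radius, input scaling, and the delayed-recall task are all assumed choices.

```python
import numpy as np

rng = np.random.default_rng(42)
n_res, n_in = 200, 1
W = rng.standard_normal((n_res, n_res)) / np.sqrt(n_res)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.9 (echo state property)
W_in = rng.standard_normal((n_res, n_in)) * 0.2

def run_reservoir(inputs):
    """Collect reservoir states driven by an input sequence of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W @ x + W_in @ u)             # fixed, untrained recurrent dynamics
        states.append(x.copy())
    return np.array(states)

# Task: reproduce a delayed copy of the input; solving it requires the
# recurrent state to mix present and past inputs, as the abstract describes.
T, delay = 1000, 5
u = rng.uniform(-1, 1, (T, 1))
target = np.roll(u[:, 0], delay)
target[:delay] = 0.0
X = run_reservoir(u)

# Train only the linear readout, by ridge regression
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ target)
pred = X @ W_out
mse = np.mean((pred[delay:] - target[delay:]) ** 2)
print(f"delayed-recall MSE: {mse:.4f}")
```

The feedback neuron in the paper's second model plays a role this sketch omits: feeding a trained unit back into the reservoir creates input-driven attracting dynamics rather than purely fading memory.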
The dynamic matching of neural and cognitive growth cycles.
Peltzer-Karpf, Annemarie
2012-01-01
In recent years complex systems biology has developed detailed numerical models mimicking the establishment, modulation, and fine-tuning of neural networks. Current research within the framework of Dynamic Systems Theory (DST) emphasizes the nexus between dynamic cycles in the brain and cognitive development, which unfold in a nonlinear way and allow for individual variation. Careful observations over multiple timescales and levels of organization suggest a link to system-specific developmental changes in the central nervous system, with more functional specialization opening up more efficient information processing. This can be seen in spurts of EEG energy and altered cortical coherence. Data on age- and experience-related changes in synaptic density and metabolism, shifts in blood flow, and improvement of (sub)cortical connections are projected onto a dynamic trajectory of cognition moving from diffuse to more refined constructions in the various subsystems, each of which exhibits its own developmental path. Pending questions are the generation of rules amidst diversity and fluctuation, and the correlation of growth rate and critical mass in developmental dynamics and interaction. PMID:22196112
Stable dynamic backpropagation learning in recurrent neural networks.
Jin, L; Gupta, M M
1999-01-01
The conventional dynamic backpropagation (DBP) algorithm proposed by Pineda does not necessarily imply the stability of the dynamic neural model in the sense of Lyapunov during a dynamic weight learning process. A difficulty with the DBP learning process is thus associated with the stability of the equilibrium points, which have to be checked by simulating the set of dynamic equations, or else by verifying the stability conditions, after the learning has been completed. To avoid unstable behavior during the learning process, two new learning schemes, called the multiplier and constrained learning rate algorithms, are proposed in this paper to provide stable adaptive updating processes for both the synaptic and somatic parameters of the network. Based on the explicit stability conditions, in the multiplier method these conditions are introduced into the iterative error index, and the new updating formulations contain a set of inequality constraints. In the constrained learning rate algorithm, the learning rate is updated at each iterative instant by an equation derived using the stability conditions. With these stable DBP algorithms, any analog target pattern may be implemented by a steady output vector which is a nonlinear vector function of the stable equilibrium point. The applicability of the approaches presented is illustrated through both analog and binary pattern storage examples. PMID:18252634
Dynamic construction of the neural networks underpinning empathy for pain.
Betti, Viviana; Aglioti, Salvatore Maria
2016-04-01
When people witness or imagine the pain of another person, their nervous system may react as if they were feeling that pain themselves. Early neuroscientific evidence indicates that the firsthand and vicarious experiences of pain share largely overlapping neural structures, which typically correspond to the lateral and medial brain regions that encode the sensory and the affective qualities of pain. Such neural circuitry is highly malleable and allows people to flexibly adjust the empathic behavior depending on social and personal factors. Recent views posit, however, that the brain can be conceptualized as a complex system, in which behavior emerges from the interaction between functionally connected brain regions, organized into large-scale networks. Beyond the classical modular view of the brain, here we suggest that empathic behavior may be understood through a dynamic network-based approach where the cortical circuits associated with the experience of pain flexibly change in order to code self- and other-related emotions and to intrinsically map our mentality to empathetically react to others. PMID:26877105
Neural dynamics of change detection in crowded acoustic scenes.
Sohoglu, Ediz; Chait, Maria
2016-02-01
Two key questions concerning change detection in crowded acoustic environments are the extent to which cortical processing is specialized for different forms of acoustic change and when in the time-course of cortical processing neural activity becomes predictive of behavioral outcomes. Here, we address these issues by using magnetoencephalography (MEG) to probe the cortical dynamics of change detection in ongoing acoustic scenes containing as many as ten concurrent sources. Each source was formed of a sequence of tone pips with a unique carrier frequency and temporal modulation pattern, designed to mimic the spectrotemporal structure of natural sounds. Our results show that listeners are more accurate and quicker to detect the appearance (than disappearance) of an auditory source in the ongoing scene. Underpinning this behavioral asymmetry are change-evoked responses differing not only in magnitude and latency, but also in their spatial patterns. We find that even the earliest (~50 ms) cortical response to change is predictive of behavioral outcomes (detection times), consistent with the hypothesized role of local neural transients in supporting change detection. PMID:26631816
Neural dynamic programming applied to rotorcraft flight control and reconfiguration
NASA Astrophysics Data System (ADS)
Enns, Russell James
This dissertation introduces a new rotorcraft flight control methodology based on a relatively new form of neural control, neural dynamic programming (NDP). NDP is an on-line learning control scheme that is in its infancy and has only been applied to simple systems, such as those possessing a single control and a handful of states. This dissertation builds on the existing NDP concept to provide a comprehensive control system framework that can perform well as a learning controller for more realistic and practical systems of higher dimension such as helicopters. To accommodate such complex systems, the dissertation introduces the concept of a trim network that is seamlessly integrated into the NDP control structure and is also trained using this structure. This is the first time that neural networks have been applied to the helicopter control problem as a direct form of control without using other controller methodologies to augment the neural controller and without using order reducing simplifications such as axes decoupling. The dissertation focuses on providing a viable alternative helicopter control system design approach rather than providing extensive comparisons among various available controllers. As such, results showing the system's ability to stabilize the helicopter and to perform command tracking, without explicit comparison to other methods, are presented. In this research, design robustness was addressed by performing simulations under various disturbance conditions. All designs were tested using FLYRT, a sophisticated, industrial-scale, nonlinear, validated model of the Apache helicopter. Though illustrated for helicopters, the NDP control system framework should be applicable to general purpose multi-input multi-output (MIMO) control. In addition, this dissertation tackles the helicopter reconfigurable flight control problem, finding control solutions when the aircraft, and in particular its control actuators, are damaged. Such solutions have
Direct Adaptive Aircraft Control Using Dynamic Cell Structure Neural Networks
NASA Technical Reports Server (NTRS)
Jorgensen, Charles C.
1997-01-01
A Dynamic Cell Structure (DCS) Neural Network was developed which learns topology representing networks (TRNS) of F-15 aircraft aerodynamic stability and control derivatives. The network is integrated into a direct adaptive tracking controller. The combination produces a robust adaptive architecture capable of handling multiple accident and off-nominal flight scenarios. This paper describes the DCS network and modifications to the parameter estimation procedure. The work represents one step towards an integrated real-time reconfiguration control architecture for rapid prototyping of new aircraft designs. Performance was evaluated using three off-line benchmarks and on-line nonlinear Virtual Reality simulation. Flight control was evaluated under scenarios including differential stabilator lock, soft sensor failure, control and stability derivative variations, and air turbulence.
Bojak, Ingo; Stoyanov, Zhivko V.; Liley, David T. J.
2015-01-01
Burst suppression in the electroencephalogram (EEG) is a well-described phenomenon that occurs during deep anesthesia, as well as in a variety of congenital and acquired brain insults. Classically it is thought of as spatially synchronous, quasi-periodic bursts of high amplitude EEG separated by low amplitude activity. However, its characterization as a “global brain state” has been challenged by recent results obtained with intracranial electrocorticography. Not only does it appear that burst suppression activity is highly asynchronous across cortex, but also that it may occur in isolated regions of circumscribed spatial extent. Here we outline a realistic neural field model for burst suppression by adding a slow process of synaptic resource depletion and recovery, which is able to reproduce qualitatively the empirically observed features during general anesthesia at the whole cortex level. Simulations reveal heterogeneous bursting over the model cortex and complex spatiotemporal dynamics during simulated anesthetic action, and provide forward predictions of neuroimaging signals for subsequent empirical comparisons and more detailed characterization. Because burst suppression corresponds to a dynamical end-point of brain activity, theoretically accounting for its spatiotemporal emergence will vitally contribute to efforts aimed at clarifying whether a common physiological trajectory is induced by the actions of general anesthetic agents. We have taken a first step in this direction by showing that a neural field model can qualitatively match recent experimental data that indicate spatial differentiation of burst suppression activity across cortex. PMID:25767438
Spatiotemporal neural network dynamics for the processing of dynamic facial expressions
Sato, Wataru; Kochiyama, Takanori; Uono, Shota
2015-01-01
The dynamic facial expressions of emotion automatically elicit multifaceted psychological activities; however, the temporal profiles and dynamic interaction patterns of brain activities remain unknown. We investigated these issues using magnetoencephalography. Participants passively observed dynamic facial expressions of fear and happiness, or dynamic mosaics. Source-reconstruction analyses utilizing functional magnetic-resonance imaging data revealed higher activation in broad regions of the bilateral occipital and temporal cortices in response to dynamic facial expressions than in response to dynamic mosaics at 150–200 ms and some later time points. The right inferior frontal gyrus exhibited higher activity for dynamic faces versus mosaics at 300–350 ms. Dynamic causal-modeling analyses revealed that dynamic faces activated the dual visual routes and visual–motor route. Superior influences of feedforward and feedback connections were identified before and after 200 ms, respectively. These results indicate that hierarchical, bidirectional neural network dynamics within a few hundred milliseconds implement the processing of dynamic facial expressions. PMID:26206708
Autonomic neural control of heart rate during dynamic exercise: revisited
White, Daniel W; Raven, Peter B
2014-01-01
The accepted model of autonomic control of heart rate (HR) during dynamic exercise indicates that the initial increase is entirely attributable to the withdrawal of parasympathetic nervous system (PSNS) activity and that subsequent increases in HR are entirely attributable to increases in cardiac sympathetic activity. In the present review, we sought to re-evaluate the model of autonomic neural control of HR in humans during progressive increases in dynamic exercise workload. We analysed data from both new and previously published studies involving baroreflex stimulation and pharmacological blockade of the autonomic nervous system. Results indicate that the PSNS remains functionally active throughout exercise and that increases in HR from rest to maximal exercise result from an increasing workload-related transition from a 4:1 vagal–sympathetic balance to a 4:1 sympatho–vagal balance. Furthermore, the beat-to-beat autonomic reflex control of HR was found to be dependent on the ability of the PSNS to modulate the HR as it was progressively restrained by increasing workload-related sympathetic nerve activity. In conclusion: (i) increases in exercise workload-related HR are not caused by a total withdrawal of the PSNS followed by an increase in sympathetic tone; (ii) reciprocal antagonism is key to the transition from vagal to sympathetic dominance, and (iii) resetting of the arterial baroreflex causes immediate exercise-onset reflexive increases in HR, which are parasympathetically mediated, followed by slower increases in sympathetic tone as workloads are increased. PMID:24756637
Control of Complex Dynamic Systems by Neural Networks
NASA Technical Reports Server (NTRS)
Spall, James C.; Cristion, John A.
1993-01-01
This paper considers the use of neural networks (NN's) in controlling a nonlinear, stochastic system with unknown process equations. The NN is used to model the resulting unknown control law. The approach here is based on using the output error of the system to train the NN controller without the need to construct a separate model (NN or other type) for the unknown process dynamics. To implement such a direct adaptive control approach, it is required that connection weights in the NN be estimated while the system is being controlled. As a result of the feedback of the unknown process dynamics, however, it is not possible to determine the gradient of the loss function for use in standard (back-propagation-type) weight estimation algorithms. Therefore, this paper considers the use of a new stochastic approximation algorithm for this weight estimation, which is based on a 'simultaneous perturbation' gradient approximation that only requires the system output error. It is shown that this algorithm can greatly enhance the efficiency over more standard stochastic approximation algorithms based on finite-difference gradient approximations.
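The simultaneous-perturbation gradient estimate described above is compact enough to sketch. The toy below is a minimal illustration, not the authors' NN controller; the quadratic loss and all step sizes are invented for the example. Its point is that both loss evaluations share a single random ±1 perturbation, so the per-iteration cost is independent of the number of weights:

```python
import numpy as np

def spsa_step(loss, w, a, c, rng):
    """One SPSA update: approximate the gradient of `loss` at `w` from
    only two loss evaluations sharing a random +/-1 perturbation."""
    delta = rng.choice([-1.0, 1.0], size=w.shape)   # Rademacher direction
    g_hat = (loss(w + c * delta) - loss(w - c * delta)) / (2.0 * c * delta)
    return w - a * g_hat

# Stand-in for the controller's output-error loss: a simple quadratic.
quadratic = lambda x: float(np.sum(x ** 2))

rng = np.random.default_rng(0)
w = np.ones(5)
for _ in range(500):
    w = spsa_step(quadratic, w, a=0.05, c=0.01, rng=rng)
assert np.sum(w ** 2) < 1e-2          # the iterates settle near the minimum
```

In the paper's setting the role of `loss` is played by the measured output error of the controlled system, which is why no separate model of the process dynamics (and no analytic gradient) is required.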
Hidden Conditional Neural Fields for Continuous Phoneme Speech Recognition
NASA Astrophysics Data System (ADS)
Fujii, Yasuhisa; Yamamoto, Kazumasa; Nakagawa, Seiichi
In this paper, we propose Hidden Conditional Neural Fields (HCNF) for continuous phoneme speech recognition, which are a combination of Hidden Conditional Random Fields (HCRF) and a Multi-Layer Perceptron (MLP), and inherit their merits, namely, the discriminative property for sequences from HCRF and the ability to extract non-linear features from an MLP. HCNF can incorporate many types of features from which non-linear features can be extracted, and is trained by sequential criteria. We first present the formulation of HCNF and then examine three methods to further improve automatic speech recognition using HCNF: an objective function that explicitly considers training errors, a hierarchical tandem-style feature, and a deep non-linear feature extractor for the observation function. We show that HCNF can be trained realistically without any initial model and outperforms HCRF and a triphone hidden Markov model trained with the minimum phone error (MPE) criterion, using experimental results for continuous English phoneme recognition on the TIMIT core test set and Japanese phoneme recognition on the IPA 100 test set.
NASA Astrophysics Data System (ADS)
Chiel, Hillel J.; Thomas, Peter J.
2011-12-01
Tracing technologies back in time to their scientific and mathematical origins reveals surprising connections between the pure pursuit of knowledge and the opportunities afforded by that pursuit for new and unexpected applications. For example, Einstein's desire to eliminate the disparity between electricity and magnetism in Maxwell's equations impelled him to develop the special theory of relativity (Einstein 1922, p 41: 'The advance in method arises from the fact that the electric and magnetic fields lose their separate existences through the relativity of motion. A field which appears to be purely an electric field, judged from one system, has also magnetic field components when judged from another inertial system.'). His conviction that there should be no privileged inertial frame of reference (Einstein 1922, p 58: 'The possibility of explaining the numerical equality of inertia and gravitation by the unity of their nature gives to the general theory of relativity, according to my conviction, such a superiority over the conceptions of classical mechanics, that all the difficulties encountered must be considered as small in comparison with this progress.') further impelled him to utilize the non-Euclidean geometry originally developed by Riemann and others as a purely hypothetical alternative to classical geometry as the foundation for the general theory of relativity. Nowadays, anyone who depends on a global positioning system—which now includes many people who own smart phones—uses a system that would not work effectively without incorporating corrections from both special and general relativity (Ashby 2003). As another example, G H Hardy famously proclaimed his conviction that his work on number theory, which he pursued for the sheer love of exploring the beauty of mathematical structures, was unlikely to find any practical applications (Hardy 1940, pp 135-6: 'The general conclusion, surely, stands out plainly enough. If useful knowledge
NASA Astrophysics Data System (ADS)
di Volo, Matteo; Burioni, Raffaella; Casartelli, Mario; Livi, Roberto; Vezzani, Alessandro
2016-01-01
We study the dynamics of networks of inhibitory and excitatory leaky integrate-and-fire neurons with short-term synaptic plasticity in the presence of depressive and facilitating mechanisms. The dynamics is analyzed by a heterogeneous mean-field approximation, which allows us to keep track of the effects of structural disorder in the network. We describe the complex behavior of different classes of excitatory and inhibitory components, which give rise to a rich dynamical phase diagram as a function of the fraction of inhibitory neurons. Using the same mean-field approach, we study and solve a global inverse problem: reconstructing the degree probability distributions of the inhibitory and excitatory components and the fraction of inhibitory neurons from the knowledge of the average synaptic activity field. This approach unveils new perspectives on the numerical study of neural network dynamics and the possibility of using these models as a test bed for the analysis of experimental data.
Nitzan, Erez; Krispin, Shlomo; Pfaltzgraff, Elise R.; Klar, Avihu; Labosky, Patricia A.; Kalcheim, Chaya
2013-01-01
Understanding when and how multipotent progenitors segregate into diverse fates is a key question during embryonic development. The neural crest (NC) is an exemplary model system with which to investigate the dynamics of progenitor cell specification, as it generates a multitude of derivatives. Based on ‘in ovo’ lineage analysis, we previously suggested an early fate restriction of premigratory trunk NC to generate neural versus melanogenic fates, yet the timing of fate segregation and the underlying mechanisms remained unknown. Analysis of progenitors expressing a Foxd3 reporter reveals that prospective melanoblasts downregulate Foxd3 and have already segregated from neural lineages before emigration. When this downregulation is prevented, late-emigrating avian precursors fail to upregulate the melanogenic markers Mitf and MC/1 and the guidance receptor Ednrb2, generating instead glial cells that express P0 and Fabp. In this context, Foxd3 lies downstream of Snail2 and Sox9, constituting a minimal network upstream of Mitf and Ednrb2 to link melanogenic specification with migration. Consistent with the gain-of-function data in avians, loss of Foxd3 function in mouse NC results in ectopic melanogenesis in the dorsal tube and sensory ganglia. Altogether, Foxd3 is part of a dynamically expressed gene network that is necessary and sufficient to regulate fate decisions in premigratory NC. Their timely downregulation in the dorsal neural tube is thus necessary for the switch between neural and melanocytic phases of NC development. PMID:23615280
Dynamics of fully connected attractor neural networks near saturation
NASA Astrophysics Data System (ADS)
Coolen, A. C. C.; Sherrington, D.
1993-12-01
We present an exact dynamical theory, valid on finite time scales, to describe the fully connected Hopfield model near saturation in terms of deterministic flow equations for order parameters. Two transparent assumptions allow us to perform a replica calculation of the distribution of intrinsic noise components of the alignment fields. Numerical simulations indicate that our equations describe the dynamics correctly in the region where replica symmetry is stable. In equilibrium our theory reproduces the saddle-point equations obtained in the thermodynamic analysis by Amit et al.
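For readers unfamiliar with the model, a minimal Hopfield retrieval experiment can be sketched as follows. This is an illustration only, a plain zero-temperature simulation rather than the authors' replica-based theory; the sizes and noise level are invented for the example. The overlap m computed at the end is the kind of order parameter whose deterministic flow the paper derives:

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 200, 10                         # neurons and stored patterns, alpha = P/N = 0.05
xi = rng.choice([-1, 1], size=(P, N))  # random binary patterns

J = (xi.T @ xi) / N                    # Hebbian couplings
np.fill_diagonal(J, 0.0)               # no self-interaction

# Start from pattern 0 with 15% of the spins flipped.
s = xi[0] * rng.choice([1, -1], size=N, p=[0.85, 0.15])

# Zero-temperature asynchronous dynamics: sweep spins in random order.
for _ in range(5):
    for i in rng.permutation(N):
        s[i] = 1 if J[i] @ s >= 0 else -1

m = (s @ xi[0]) / N                    # overlap with the stored pattern
assert m > 0.9                         # the noisy state is attracted to the pattern
```

Near saturation (alpha approaching ~0.14) the crosstalk from the other patterns, the "intrinsic noise" of the abstract, degrades this retrieval, which is the regime the exact dynamical theory addresses.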
Ca^2+ Dynamics and Propagating Waves in Neural Networks with Excitatory and Inhibitory Neurons.
NASA Astrophysics Data System (ADS)
Bondarenko, Vladimir E.
2008-03-01
Dynamics of neural spikes, intracellular Ca^2+, and Ca^2+ in intracellular stores was investigated both in isolated Chay's neurons and in the neurons coupled in networks. Three types of neural networks were studied: a purely excitatory neural network, with only excitatory (AMPA) synapses; a purely inhibitory neural network with only inhibitory (GABA) synapses; and a hybrid neural network, with both AMPA and GABA synapses. In the hybrid neural network, the ratio of excitatory to inhibitory neurons was 4:1. For each case, we considered two types of connections, "all-with-all" and 20 connections per neuron. Each neural network contained 100 neurons with randomly distributed connection strengths. In the neural networks with "all-with-all" connections and AMPA/GABA synapses an increase in average synaptic strength yielded bursting activity with increased/decreased number of spikes per burst. The neural bursts and Ca^2+ transients were synchronous at relatively large connection strengths despite random connection strengths. Simulations of the neural networks with 20 connections per neuron and with only AMPA synapses showed synchronous oscillations, while the neural networks with GABA or hybrid synapses generated propagating waves of membrane potential and Ca^2+ transients.
Classification of mammographic masses using generalized dynamic fuzzy neural networks
NASA Astrophysics Data System (ADS)
Lim, Wei Keat; Er, Meng Joo
2003-05-01
In this paper, computer-aided classification of mammographic masses using generalized dynamic fuzzy neural networks (GDFNN) is presented. The texture parameters, derived from first-order gradient distribution and gray-level co-occurrence matrices (GCMs), were computed from the regions of interest (ROIs). A total of 77 images containing 38 benign cases and 39 malignant cases from the Digital Database for Screening Mammography (DDSM) were analyzed. A fast approach of automatically generating fuzzy rules from training samples was implemented to classify tumors. The novelty of this work is that it alleviates the problem of the conventional computer-aided diagnosis (CAD) system that requires a designer to examine all the input-output relationships of a training database in order to obtain the most appropriate structure for the classifier. In this approach, not only can the connection weights be adjusted, but the structure can also adapt itself during the learning process. With the classifier automatically generated by the GDFNN learning algorithm, the area under the receiver-operating characteristic (ROC) curve, Az, reached 0.9289, which corresponded to a true-positive fraction of 94.9% at a false-positive fraction of 73.7%. The corresponding accuracy was 84.4%, the positive predictive value was 78.7% and the negative predictive value was 93.3%.
Neural dynamics for landmark orientation and angular path integration.
Seelig, Johannes D; Jayaraman, Vivek
2015-05-14
Many animals navigate using a combination of visual landmarks and path integration. In mammalian brains, head direction cells integrate these two streams of information by representing an animal's heading relative to landmarks, yet maintaining their directional tuning in darkness based on self-motion cues. Here we use two-photon calcium imaging in head-fixed Drosophila melanogaster walking on a ball in a virtual reality arena to demonstrate that landmark-based orientation and angular path integration are combined in the population responses of neurons whose dendrites tile the ellipsoid body, a toroidal structure in the centre of the fly brain. The neural population encodes the fly's azimuth relative to its environment, tracking visual landmarks when available and relying on self-motion cues in darkness. When both visual and self-motion cues are absent, a representation of the animal's orientation is maintained in this network through persistent activity, a potential substrate for short-term memory. Several features of the population dynamics of these neurons and their circular anatomical arrangement are suggestive of ring attractors, network structures that have been proposed to support the function of navigational brain circuits. PMID:25971509
Dynamical recurrent neural networks--towards environmental time series prediction.
Aussem, A; Murtagh, F; Sarazin, M
1995-06-01
Dynamical Recurrent Neural Networks (DRNN) (Aussem 1995a) are a class of fully recurrent networks obtained by modeling synapses as autoregressive filters. By virtue of their internal dynamics, these networks approximate the underlying law governing the time series by a system of nonlinear difference equations of internal variables. They therefore provide history-sensitive forecasts without having to be explicitly fed with external memory. The model is trained by a local and recursive error propagation algorithm called temporal recurrent backpropagation. The efficiency of the procedure benefits from the exponential decay of the gradient terms backpropagated through the adjoint network. We assess the predictive ability of the DRNN model with meteorological and astronomical time series recorded around the candidate observation sites for the future VLT telescope. The hope is that reliable environmental forecasts provided by the model will allow modern telescopes to be preset, a few hours in advance, in the most suitable instrumental mode. With this in view, the model is first appraised on precipitation measurements against traditional nonlinear AR and ARMA techniques using feedforward networks. We then tackle a complex problem, namely the prediction of astronomical seeing, known to be a very erratic time series. A fuzzy coding approach is used to reduce the complexity of the underlying laws governing the seeing. A fuzzy correspondence analysis is then carried out to explore the internal relationships in the data. Based on a carefully selected set of meteorological variables at the same time point, a nonlinear multiple regression, termed nowcasting (Murtagh et al. 1993, 1995), is carried out on the fuzzily coded seeing records. The DRNN is shown to outperform the fuzzy k-nearest-neighbors method. PMID:7496587
Track and Field Dynamics. Second Edition.
ERIC Educational Resources Information Center
Ecker, Tom
Track and field coaching is considered an art embodying three sciences--physiology, psychology, and dynamics. It is the area of dynamics, the branch of physics that deals with the action of force on bodies, that is central to this book. Although the book does not cover the entire realm of dynamics, the laws and principles that relate directly to…
Neural RNA as a principal dynamic information carrier in a neuron
NASA Astrophysics Data System (ADS)
Berezin, Andrey A.
1999-11-01
A quantum mechanical approach has been used to develop a model of the dynamics of the neural ribonucleic acid molecule. Macro- and micro-scale Fermi-Pasta-Ulam recurrence is considered as the principal information carrier in a neuron.
Oscillatory phase dynamics in neural entrainment underpin illusory percepts of time.
Herrmann, Björn; Henry, Molly J; Grigutsch, Maren; Obleser, Jonas
2013-10-01
Neural oscillatory dynamics are a candidate mechanism to steer perception of time and temporal rate change. While oscillator models of time perception are strongly supported by behavioral evidence, a direct link to neural oscillations and oscillatory entrainment has not yet been provided. In addition, it has thus far remained unaddressed how context-induced illusory percepts of time are coded for in oscillator models of time perception. To investigate these questions, we used magnetoencephalography and examined the neural oscillatory dynamics that underpin pitch-induced illusory percepts of temporal rate change. Human participants listened to frequency-modulated sounds that varied over time in both modulation rate and pitch, and judged the direction of rate change (decrease vs increase). Our results demonstrate distinct neural mechanisms of rate perception: Modulation rate changes directly affected listeners' rate percept as well as the exact frequency of the neural oscillation. However, pitch-induced illusory rate changes were unrelated to the exact frequency of the neural responses. The rate change illusion was instead linked to changes in neural phase patterns, which allowed for single-trial decoding of percepts. That is, illusory underestimations or overestimations of perceived rate change were tightly coupled to increased intertrial phase coherence and changes in cerebro-acoustic phase lag. The results provide insight on how illusory percepts of time are coded for by neural oscillatory dynamics. PMID:24089487
Hellyer, Peter J.; Scott, Gregory; Shanahan, Murray; Sharp, David J.
2015-01-01
Current theory proposes that healthy neural dynamics operate in a metastable regime, where brain regions interact to simultaneously maximize integration and segregation. Metastability may confer important behavioral properties, such as cognitive flexibility. It is increasingly recognized that neural dynamics are constrained by the underlying structural connections between brain regions. An important challenge is, therefore, to relate structural connectivity, neural dynamics, and behavior. Traumatic brain injury (TBI) is a pre-eminent structural disconnection disorder whereby traumatic axonal injury damages large-scale connectivity, producing characteristic cognitive impairments, including slowed information processing speed and reduced cognitive flexibility, that may be a result of disrupted metastable dynamics. Therefore, TBI provides an experimental and theoretical model to examine how metastable dynamics relate to structural connectivity and cognition. Here, we use complementary empirical and computational approaches to investigate how metastability arises from the healthy structural connectome and relates to cognitive performance. We found reduced metastability in large-scale neural dynamics after TBI, measured with resting-state functional MRI. This reduction in metastability was associated with damage to the connectome, measured using diffusion MRI. Furthermore, decreased metastability was associated with reduced cognitive flexibility and information processing. A computational model, defined by empirically derived connectivity data, demonstrates how behaviorally relevant changes in neural dynamics result from structural disconnection. Our findings suggest how metastable dynamics are important for normal brain function and contingent on the structure of the human connectome. PMID:26085630
The neural dynamics of song syntax in songbirds
NASA Astrophysics Data System (ADS)
Jin, Dezhe
2010-03-01
The songbird is "the hydrogen atom" of the neuroscience of complex, learned vocalizations such as human speech. Songs of the Bengalese finch consist of sequences of syllables. While syllables are temporally stereotypical, syllable sequences can vary and follow complex, probabilistic syntactic rules, which are rudimentarily similar to grammars in human language. The songbird brain is accessible to experimental probes, and is understood well enough to construct biologically constrained, predictive computational models. In this talk, I will discuss the structure and dynamics of the neural networks underlying the stereotypy of birdsong syllables and the flexibility of syllable sequences. Recent experiments and computational models suggest that a syllable is encoded in a chain network of projection neurons in the premotor nucleus HVC (proper name). Precisely timed spikes propagate along the chain, driving vocalization of the syllable through downstream nuclei. Through a computational model, I show that variable syllable sequences can be generated through spike propagation in an HVC network in which the syllable-encoding chain networks are connected into a branching chain pattern. The neurons mutually inhibit each other through the inhibitory HVC interneurons, and are driven by external inputs from nuclei upstream of HVC. At a branching point that connects the final group of one chain to the first groups of several chains, the spike activity selects one branch to continue the propagation. The selection is probabilistic, and is due to a winner-take-all mechanism mediated by the inhibition and noise. The model predicts that the syllable sequences statistically follow partially observable Markov models. Experimental results supporting this and other predictions of the model will be presented. We suggest that the syntax of birdsong syllable sequences is embedded in the connection patterns of HVC projection neurons.
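The prediction that syllable sequences follow Markov statistics can be illustrated without any spiking machinery. The sketch below uses hypothetical syllables and transition probabilities, invented for the example; it samples songs from a branching structure in which each branch point makes a probabilistic winner-take-all choice, then checks that empirical branch frequencies match the assigned probabilities:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical branching-chain structure: from each syllable the spike
# propagation selects the next chain with a fixed probability.
branches = {
    "a": [("b", 0.7), ("c", 0.3)],
    "b": [("a", 0.5), ("end", 0.5)],
    "c": [("a", 0.8), ("end", 0.2)],
}

def sing(start="a", max_len=20):
    """Sample one song: a first-order Markov walk over syllables."""
    seq, s = [start], start
    while s != "end" and len(seq) < max_len:
        names, probs = zip(*branches[s])
        s = str(rng.choice(names, p=probs))
        seq.append(s)
    return [x for x in seq if x != "end"]

songs = [sing() for _ in range(1000)]

# Empirical branch frequencies approximate the transition probabilities.
after_a = [s[i + 1] for s in songs for i, x in enumerate(s[:-1]) if x == "a"]
frac_b = after_a.count("b") / len(after_a)
assert abs(frac_b - 0.7) < 0.05
```

In the neural model these transition probabilities are not stored explicitly; they emerge from inhibition and noise at the branch points, which is what makes the connection-pattern hypothesis testable against song statistics.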
The relevance of network micro-structure for neural dynamics.
Pernice, Volker; Deger, Moritz; Cardanobile, Stefano; Rotter, Stefan
2013-01-01
The activity of cortical neurons is determined by the input they receive from presynaptic neurons. Many previous studies have investigated how specific aspects of the statistics of the input affect the spike trains of single neurons and neurons in recurrent networks. However, typically very simple random network models are considered in such studies. Here we use a recently developed algorithm to construct networks based on a quasi-fractal probability measure which are much more variable than commonly used network models, and which therefore promise to sample the space of recurrent networks in a more exhaustive fashion than previously possible. We use the generated graphs as the underlying network topology in simulations of networks of integrate-and-fire neurons in an asynchronous and irregular state. Based on an extensive dataset of networks and neuronal simulations we assess statistical relations between features of the network structure and the spiking activity. Our results highlight the strong influence that some details of the network structure have on the activity dynamics of both single neurons and populations, even if some global network parameters are kept fixed. We observe specific and consistent relations between activity characteristics like spike-train irregularity or correlations and network properties, for example the distributions of the numbers of in- and outgoing connections or clustering. Exploiting these relations, we demonstrate that it is possible to estimate structural characteristics of the network from activity data. We also assess higher order correlations of spiking activity in the various networks considered here, and find that their occurrence strongly depends on the network structure. These results provide directions for further theoretical studies on recurrent networks, as well as new ways to interpret spike train recordings from neural circuits. PMID:23761758
Dynamic Photorefractive Memory and its Application for Opto-Electronic Neural Networks.
NASA Astrophysics Data System (ADS)
Sasaki, Hironori
This dissertation describes the analysis of photorefractive crystal dynamics and its application to opto-electronic neural network systems. The realization of the dynamic photorefractive memory is investigated in terms of the following aspects: fast memory update, uniform grating multiplexing schedules and the prevention of the partial erasure of existing gratings. The fast memory update is realized by the selective erasure process, which superimposes a new grating on the original one with an appropriate phase shift. The dynamics of the selective erasure process is analyzed using the first-order photorefractive material equations and experimentally confirmed. The effects of beam coupling and fringe bending on the selective erasure dynamics are also analyzed by numerically solving a combination of coupled wave equations and the photorefractive material equation. The incremental recording technique is proposed as a uniform grating multiplexing schedule and compared with the conventional scheduled recording technique in terms of phase distribution in the presence of an external dc electric field, as well as the image gray-scale dependence. The theoretical analysis and experimental results prove the superiority of incremental recording over scheduled recording. A novel recirculating information memory architecture is proposed and experimentally demonstrated to prevent partial degradation of the existing gratings when the memory is accessed. Gratings are circulated through a memory feedback loop based on the incremental recording dynamics and demonstrate robust read/write/erase capabilities. The dynamic photorefractive memory is applied to opto-electronic neural network systems. A module architecture based on the page-oriented dynamic photorefractive memory is proposed. This module architecture can implement two complementary interconnection organizations, fan-in and fan-out. The module system scalability and the learning capabilities are theoretically
Measuring solar magnetic fields with artificial neural networks.
Socas-Navarro, Hector
2003-01-01
The quantification of the solar magnetic field is a crucial step in modern solar physics to understand the dynamics, activity and variability of our star. Presently, a reliable inference of these fields is only possible by means of a computer-intensive process that has so far limited scientists to the analysis of observations from small regions of the solar disk, and/or very crude spatial and temporal resolution. This work presents a different approach to the problem, in which a multilayer perceptron, trained with known synthetic profiles, is able to recognize the profiles and return the magnetic field used to synthesize them. The network is then confronted with real observations of a sunspot which had been previously inverted using traditional inversion techniques. A quantitative comparison between these two procedures shows the reliability of the network when applied to points having magnetic filling factors larger than approximately 70%. The dramatic decrease in the required computing time presents an opportunity for the routine analysis of large-scale, high-resolution solar observations. PMID:12672431
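The inversion-by-regression idea is easy to illustrate in miniature. The sketch below is not the author's network or a real radiative-transfer synthesis: it uses an invented two-lobed "line profile" whose splitting grows with a scalar field strength B, trains a one-hidden-layer perceptron on synthetic (profile, B) pairs, and checks that the network recovers B from an unseen profile:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = np.linspace(-1.0, 1.0, 40)       # wavelength samples across the line

def profile(B):
    """Invented forward model: two Gaussian lobes whose splitting grows with B."""
    g = lambda c: np.exp(-((lam - c) ** 2) / 0.02)
    return 1.0 - 0.5 * (g(-0.3 * B) + g(0.3 * B))

# Synthetic training set of (profile, field strength) pairs, B in [0, 1].
B_train = rng.uniform(0.0, 1.0, 2000)
X = np.stack([profile(b) for b in B_train])
y = B_train

# One-hidden-layer perceptron, trained by full-batch gradient descent on MSE.
W1 = rng.normal(0.0, 0.5, (40, 20)); b1 = np.zeros(20)
W2 = rng.normal(0.0, 0.5, 20);       b2 = 0.0
lr, n = 0.05, len(y)
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)
    err = h @ W2 + b2 - y
    dh = np.outer(err, W2) * (1.0 - h ** 2)          # backprop through tanh
    W2 -= lr * (h.T @ err) / n;  b2 -= lr * err.mean()
    W1 -= lr * (X.T @ dh) / n;   b1 -= lr * dh.mean(axis=0)

# Inversion: recover the field strength from an unseen profile.
B_true = 0.6
B_hat = float(np.tanh(profile(B_true) @ W1 + b1) @ W2 + b2)
assert abs(B_hat - B_true) < 0.15
```

The speed advantage reported in the abstract comes from exactly this structure: the expensive physics lives only in generating the training set, while each subsequent inversion is a single cheap forward pass.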
Filling the Gap on Developmental Change: Tests of a Dynamic Field Theory of Spatial Cognition
ERIC Educational Resources Information Center
Schutte, Anne R.; Spencer, John P.
2010-01-01
In early childhood, there is a developmental transition in spatial memory biases. Before the transition, children's memory responses are biased toward the midline of a space, while after the transition responses are biased away from midline. The Dynamic Field Theory (DFT) posits that changes in neural interaction and changes in how children…
Track and Field: Technique Through Dynamics.
ERIC Educational Resources Information Center
Ecker, Tom
This book was designed to aid in applying the laws of dynamics to the sport of track and field, event by event. It begins by tracing the history of the discoveries of the laws of motion and the principles of dynamics, with explanations of commonly used terms derived from the vocabularies of the physical sciences. The principles and laws of…
Tanskanen, Jarno M A; Mikkonen, Jarno E; Penttonen, Markku
2005-06-30
Independent component analysis (ICA) is proposed for analysis of neural population activity from multichannel electrophysiological field potential measurements. The proposed analysis method provides information on spatial extents of active neural populations, locations of the populations with respect to each other, population evolution, including merging and splitting of populations in time, and on time lag differences between the populations. In some cases, results of the proposed analysis may also be interpreted as independent information flows carried by neurons and neural populations. In this paper, a detailed description of the analysis method is given. The proposed analysis is demonstrated with an illustrative simulation, and with an exemplary analysis of an in vivo multichannel recording from rat hippocampus. The proposed method can be applied in the analysis of any recordings of neural networks in which contributions from a number of neural populations or information flows are simultaneously recorded via a number of measurement points, both in vivo and in vitro. PMID:15922038
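The core of the proposed analysis, unmixing linearly superposed population signals recorded at several electrodes, can be sketched with synthetic data. This is an illustration of ICA itself, not the authors' toolbox; the sources, mixing matrix, and FastICA settings are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 2000)

# Two independent "population" signals, linearly mixed into four channels,
# mimicking field potentials picked up at several electrodes.
S = np.stack([np.sign(np.sin(2 * np.pi * 11 * t)),   # fast square wave
              np.sin(2 * np.pi * 3 * t)])            # slow oscillation
A = rng.normal(size=(4, 2))                          # unknown mixing matrix
X = A @ S

# Whiten the recordings (keep the two non-degenerate directions).
Xc = X - X.mean(axis=1, keepdims=True)
U, d, _ = np.linalg.svd(Xc, full_matrices=False)
Z = (U[:, :2] / d[:2]).T @ Xc * np.sqrt(Xc.shape[1])

# Symmetric FastICA with the tanh contrast function.
W = rng.normal(size=(2, 2))
for _ in range(200):
    G = np.tanh(W @ Z)
    W = G @ Z.T / Z.shape[1] - np.diag((1.0 - G ** 2).mean(axis=1)) @ W
    u, s, vt = np.linalg.svd(W)                      # symmetric decorrelation:
    W = u @ vt                                       # W <- (W W^T)^(-1/2) W
Y = W @ Z

# Each recovered component should match one source up to sign and order.
corr = np.abs(np.corrcoef(np.vstack([Y, S]))[:2, 2:])
assert all(corr.max(axis=1) > 0.95)
```

In the paper's setting the columns of the estimated mixing matrix play the role of spatial maps across electrodes, which is what yields the extents and relative locations of the active populations.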
Neural dynamics of prediction and surprise in infants
Kouider, Sid; Long, Bria; Le Stanc, Lorna; Charron, Sylvain; Fievet, Anne-Caroline; Barbosa, Leonardo S.; Gelskov, Sofie V.
2015-01-01
Prior expectations shape neural responses in sensory regions of the brain, consistent with a Bayesian predictive coding account of perception. Yet, it remains unclear whether such a mechanism is already functional during early stages of development. To address this issue, we study how the infant brain responds to prediction violations using a cross-modal cueing paradigm. We record electroencephalographic responses to expected and unexpected visual events preceded by auditory cues in 12-month-old infants. We find an increased response for unexpected events. However, this effect of prediction error is only observed during late processing stages associated with conscious access mechanisms. In contrast, early perceptual components reveal an amplification of neural responses for predicted relative to surprising events, suggesting that selective attention enhances perceptual processing for expected events. Taken together, these results demonstrate that cross-modal statistical regularities are used to generate predictions that differentially influence early and late neural responses in infants. PMID:26460901
Brain-Machine Interactions for Assessing the Dynamics of Neural Systems
Kositsky, Michael; Chiappalone, Michela; Alford, Simon T.; Mussa-Ivaldi, Ferdinando A.
2008-01-01
A critical advance for brain–machine interfaces is the establishment of bi-directional communications between the nervous system and external devices. However, the signals generated by a population of neurons are expected to depend in a complex way upon poorly understood neural dynamics. We report a new technique for the identification of the dynamics of a neural population engaged in a bi-directional interaction with an external device. We placed in vitro preparations from the lamprey brainstem in a closed-loop interaction with simulated dynamical devices having different numbers of degrees of freedom. We used the observed behaviors of this composite system to assess how many independent parameters − or state variables − determine at each instant the output of the neural system. This information, known as the dynamical dimension of a system, allows predicting future behaviors based on the present state and the future inputs. A relevant novelty in this approach is the possibility to assess a computational property – the dynamical dimension of a neuronal population – through a simple experimental technique based on the bi-directional interaction with simulated dynamical devices. We present a set of results that demonstrate the possibility of obtaining stable and reliable measures of the dynamical dimension of a neural preparation. PMID:19430593
Neural bandwidth of veridical perception across the visual field.
Wilkinson, Michael O; Anderson, Roger S; Bradley, Arthur; Thibos, Larry N
2016-01-01
Neural undersampling of the retinal image limits the range of spatial frequencies that can be represented veridically by the array of retinal ganglion cells conveying visual information from eye to brain. Our goal was to demarcate the neural bandwidth and local anisotropy of veridical perception, unencumbered by optical imperfections of the eye, and to test competing hypotheses that might account for the results. Using monochromatic interference fringes to stimulate the retina with high-contrast sinusoidal gratings, we measured sampling-limited visual resolution along eight meridians from 0° to 50° of eccentricity. The resulting isoacuity contour maps revealed all of the expected features of the human array of retinal ganglion cells. Contours in the radial fringe maps are elongated horizontally, revealing the functional equivalent of the anatomical visual streak, and are extended into nasal retina and superior retina, indicating higher resolution along those meridians. Contours are larger in diameter for radial gratings compared to tangential or oblique gratings, indicating local anisotropy with highest bandwidth for radially oriented gratings. Comparison of these results to anatomical predictions indicates acuity is proportional to the sampling density of retinal ganglion cells everywhere in the retina. These results support the long-standing hypothesis that "pixel density" of the discrete neural image carried by the human optic nerve limits the spatial bandwidth of veridical perception at all retinal locations. PMID:26824638
Dynamic social power modulates neural basis of math calculation
Harada, Tokiko; Bridge, Donna J.; Chiao, Joan Y.
2013-01-01
Both situational (e.g., perceived power) and sustained social factors (e.g., cultural stereotypes) are known to affect how people academically perform, particularly in the domain of mathematics. The ability to compute even simple mathematics, such as addition, relies on distinct neural circuitry within the inferior parietal and inferior frontal lobes, brain regions where magnitude representation and addition are performed. Despite prior behavioral evidence of social influence on academic performance, little is known about whether or not temporarily heightening a person's sense of power may influence the neural bases of math calculation. Here we primed female participants with either high or low power (LP) and then measured neural response while they performed exact and approximate math problems. We found that priming power affected math performance; specifically, females primed with high power (HP) performed better on approximate math calculation compared to females primed with LP. Furthermore, neural response within the left inferior frontal gyrus (IFG), a region previously associated with cognitive interference, was reduced for females in the HP compared to LP group. Taken together, these results indicate that even temporarily heightening a person's sense of social power can increase their math performance, possibly by reducing cognitive interference during math performance. PMID:23390415
Neural Dynamics of Autistic Behaviors: Cognitive, Emotional, and Timing Substrates
ERIC Educational Resources Information Center
Grossberg, Stephen; Seidman, Don
2006-01-01
What brain mechanisms underlie autism, and how do they give rise to autistic behavioral symptoms? This article describes a neural model, called the Imbalanced Spectrally Timed Adaptive Resonance Theory (iSTART) model, that proposes how cognitive, emotional, timing, and motor processes that involve brain regions such as the prefrontal and temporal…
Neural field theory of nonlinear wave-wave and wave-neuron processes
NASA Astrophysics Data System (ADS)
Robinson, P. A.; Roy, N.
2015-06-01
Systematic expansion of neural field theory equations in terms of nonlinear response functions is carried out to enable a wide variety of nonlinear wave-wave and wave-neuron processes to be treated systematically in systems involving multiple neural populations. The results are illustrated by analyzing second-harmonic generation, and they can also be applied to wave-wave coalescence, multiharmonic generation, facilitation, depression, refractoriness, and other nonlinear processes.
Relating the sequential dynamics of excitatory neural networks to synaptic cellular automata.
Nekorkin, V I; Dmitrichev, A S; Kasatkin, D V; Afraimovich, V S
2011-12-01
We have developed a new approach for the description of the sequential dynamics of excitatory neural networks. Our approach is based on the dynamics of synapses possessing the short-term plasticity property. We suggest a model of such synapses in the form of a second-order system of nonlinear ODEs. In the framework of the model, two types of responses are realized: the fast and the slow ones. Under some relations between their timescales, a cellular automaton (CA) on the graph of connections is constructed. Such a CA has only a finite number of attractors, all of which are periodic orbits. The attractors of the CA determine the regimes of sequential dynamics of the original neural network, i.e., itineraries along the network and the times of successive firing of neurons in the form of bunches of spikes. We illustrate our approach with the example of a Morris-Lecar neural network. PMID:22225361
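The abstract does not reproduce the synapse ODEs, so the following is only a generic two-timescale stand-in: a fast variable kicked by presynaptic spikes drives a slower synaptic variable, giving the fast/slow separation that the construction above exploits (equations and time constants are illustrative assumptions, not the paper's model):

```python
import numpy as np

def synapse_response(spike_times, tau_fast=5.0, tau_slow=50.0,
                     dt=0.1, t_end=300.0):
    """Generic two-timescale synaptic variable: a fast gating variable,
    kicked by spikes, drives a slowly relaxing conductance. The two
    coupled first-order ODEs are equivalent to one second-order ODE.
    Forward-Euler integration; times in ms."""
    t = np.arange(0.0, t_end, dt)
    x = np.zeros_like(t)   # fast gating variable
    g = np.zeros_like(t)   # slow synaptic conductance
    spikes = set(int(round(s / dt)) for s in spike_times)
    for i in range(1, len(t)):
        pulse = 1.0 if i in spikes else 0.0
        x[i] = x[i - 1] + dt * (-x[i - 1] / tau_fast) + pulse
        g[i] = g[i - 1] + dt * (x[i - 1] - g[i - 1]) / tau_slow
    return t, g
```

When the two timescales are well separated, the slow variable effectively takes discrete "high/low" values between spikes, which is the regime in which a cellular-automaton reduction becomes possible.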
Lebedev, Dmitry V; Steil, Jochen J; Ritter, Helge J
2005-04-01
We introduce a new type of neural network--the dynamic wave expansion neural network (DWENN)--for path generation in a dynamic environment for both mobile robots and robotic manipulators. Our model is parameter-free, computationally efficient, and its complexity does not explicitly depend on the dimensionality of the configuration space. We give a review of existing neural networks for trajectory generation in a time-varying domain, which are compared to the presented model. We demonstrate several representative simulative comparisons as well as the results of long-run comparisons in a number of randomly-generated scenes, which reveal that the proposed model yields dominantly shorter paths, especially in highly-dynamic environments. PMID:15896575
NASA Astrophysics Data System (ADS)
Li, Xiaofeng; Xiang, Suying; Zhu, Pengfei; Wu, Min
2015-12-01
In order to avoid the inherent deficiencies of the traditional BP neural network, such as slow convergence speed, a tendency to become trapped in local minima, poor generalization ability and difficulty in determining the network structure, a dynamic self-adaptive learning algorithm for the BP neural network is put forward to improve its performance. The new algorithm combines the merits of principal component analysis, particle swarm optimization, correlation analysis and a self-adaptive model, and hence can effectively solve the problems of selecting the structural parameters, initial connection weights and thresholds, and learning rates of the BP neural network. The new algorithm not only reduces human intervention, optimizes the topological structure of BP neural networks and improves their generalization ability, but also accelerates the convergence speed of a network, avoids trapping into local minima, and enhances network adaptation and prediction ability. The dynamic self-adaptive learning algorithm of the BP neural network is used to forecast the total retail sales of consumer goods of Sichuan Province, China. Empirical results indicate that the new algorithm is superior to the traditional BP network algorithm in prediction accuracy and time consumption, which shows the feasibility and effectiveness of the new algorithm.
A neural network dynamics that resembles protein evolution
NASA Astrophysics Data System (ADS)
Ferrán, Edgardo A.; Ferrara, Pascual
1992-06-01
We use neural networks to classify proteins according to their sequence similarities. A network composed of 7 × 7 neurons was trained with the Kohonen unsupervised learning algorithm using, as inputs, matrix patterns derived from the bipeptide composition of cytochrome c proteins belonging to 76 different species. As a result of the training, the network self-organized the activation of its neurons into topologically ordered maps, wherein phylogenetically related sequences were positioned close to each other. The evolution of the topological map during learning, in a representative computational experiment, roughly resembles the way in which one species evolves into several others. For instance, sequences corresponding to vertebrates, initially grouped together into one neuron, were placed in a contiguous zone of the final neural map, with sequences of fishes, amphibia, reptiles, birds and mammals associated with different neurons. Some apparently wrong classifications are due to the fact that some proteins have a greater degree of sequence identity than that expected from phylogenetics. In the final neural map, each synaptic vector may be considered as the pattern corresponding to the ancestor of all the proteins attached to that neuron. Although it may also be tempting to link real time with learning epochs and to use this relationship to calibrate the molecular evolutionary clock, this is not correct because the evolutionary time schedule obtained with the neural network depends strongly on the discrete way in which the winner neighborhood is decreased during learning.
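A minimal version of this pipeline, a dipeptide-composition encoding feeding a 7 × 7 Kohonen map, can be sketched as follows (the training schedule and the toy random sequences are illustrative assumptions; the original work used cytochrome c sequences from 76 species):

```python
import numpy as np

rng = np.random.default_rng(0)
ALPHABET = "ACDEFGHIKLMNPQRSTVWY"   # 20 amino acids

def dipeptide_composition(seq):
    """400-dimensional normalized dipeptide-frequency vector."""
    idx = {a: i for i, a in enumerate(ALPHABET)}
    v = np.zeros((20, 20))
    for a, b in zip(seq, seq[1:]):
        v[idx[a], idx[b]] += 1.0
    return (v / max(len(seq) - 1, 1)).ravel()

def train_som(patterns, grid=7, epochs=60):
    """Kohonen unsupervised learning: find the winner neuron, then pull
    it and its (shrinking) neighborhood toward the input pattern."""
    w = rng.random((grid, grid, patterns.shape[1]))
    ii, jj = np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij")
    for e in range(epochs):
        lr = 0.5 * (1.0 - e / epochs)                       # decaying learning rate
        radius = max(grid / 2.0 * (1.0 - e / epochs), 0.5)  # shrinking neighborhood
        for x in patterns:
            d = ((w - x) ** 2).sum(axis=2)
            wi, wj = np.unravel_index(d.argmin(), d.shape)
            h = np.exp(-((ii - wi) ** 2 + (jj - wj) ** 2) / (2.0 * radius ** 2))
            w += lr * h[:, :, None] * (x - w)
    return w

# toy "species": random sequences stand in for real cytochrome c data
seqs = ["".join(rng.choice(list(ALPHABET), 80)) for _ in range(6)]
patterns = np.array([dipeptide_composition(s) for s in seqs])
som = train_som(patterns)
```

After training, each synaptic vector `som[i, j]` is the map's prototype for the sequences won by that neuron, the object the abstract interprets as an "ancestor" pattern.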
Dynamic scalp topography reveals neural signs just before performance errors
Ora, Hiroki; Sekiguchi, Tatsuhiko; Miyake, Yoshihiro
2015-01-01
Performance errors may cause serious consequences. It has been reported that ongoing activity of the frontal control regions across trials is associated with the occurrence of performance errors. However, the neural mechanisms that cause performance errors remain largely unknown. In this study, we hypothesized that some neural functions required for correct outcomes are lacking just before performance errors, and to detect this lack of neural function we applied a spatiotemporal analysis to high-density electroencephalogram signals recorded during a visual discrimination task, a d2 test of attention. To our knowledge, this is the first report of a difference in the temporal development of scalp ERP topography between trials with error and correct outcomes during the d2 test of attention. We observed differences in the signal potential in the frontal region and then the occipital region between reaction-time-matched correct and error outcomes. Our observations suggest that lapses of top-down signals from frontal control regions cause performance errors just after the lapses. PMID:26289925
Hamiltonian dynamics of the parametrized electromagnetic field
NASA Astrophysics Data System (ADS)
Barbero G, J. Fernando; Margalef-Bentabol, Juan; Villaseñor, Eduardo J. S.
2016-06-01
We study the Hamiltonian formulation for a parametrized electromagnetic field with the purpose of clarifying the interplay between parametrization and gauge symmetries. We use a geometric approach which is tailor-made for theories where embeddings are part of the dynamical variables. Our point of view is global and coordinate free. The most important result of the paper is the identification of sectors in the primary constraint submanifold in the phase space of the model where the number of independent components of the Hamiltonian vector fields that define the dynamics changes. This explains the non-trivial behavior of the system and some of its pathologies.
NASA Astrophysics Data System (ADS)
Yang, Gang; Tang, Zheng; Dai, Hongwei
Through analyzing the dynamical characteristics of a maximum neural network with an added vertex, we find that the solution quality is mainly determined by the added vertex weights. In order to increase the capability of the maximum neural network, a stochastic nonlinear self-feedback and a flexible annealing strategy are embedded in the maximum neural network, which makes the network more powerful at escaping local minima and independent of the initial values. We also show that the solving ability of the maximum neural network is problem-dependent, and we introduce a new parameter into our network to improve its solving ability. Simulations on k random graphs and on some DIMACS clique instances from the Second DIMACS Challenge show that our improved network is superior to other algorithms in terms of solution quality and CPU time.
Quantum perceptron over a field and neural network architecture selection in a quantum computer.
da Silva, Adenilton José; Ludermir, Teresa Bernarda; de Oliveira, Wilson Rosa
2016-04-01
In this work, we propose a quantum neural network named quantum perceptron over a field (QPF). Quantum computers are not yet a reality and the models and algorithms proposed in this work cannot be simulated in actual (or classical) computers. QPF is a direct generalization of a classical perceptron and solves some drawbacks found in previous models of quantum perceptrons. We also present a learning algorithm named Superposition based Architecture Learning algorithm (SAL) that optimizes the neural network weights and architectures. SAL searches for the best architecture in a finite set of neural network architectures with linear time over the number of patterns in the training set. SAL is the first learning algorithm to determine neural network architectures in polynomial time. This speedup is obtained by the use of quantum parallelism and a non-linear quantum operator. PMID:26878722
NASA Astrophysics Data System (ADS)
Chang, Fi-John; Chen, Pin-An; Liu, Chen-Wuing; Liao, Vivian Hsiu-Chuan; Liao, Chung-Min
2013-08-01
Arsenic (As) is an odorless semi-metal that occurs naturally in rock and soil, and As contamination in groundwater resources has become a serious threat to human health. Thus, assessing the spatial and temporal variability of As concentration is highly desirable, particularly in heavily As-contaminated areas. However, various difficulties may be encountered in the regional estimation of As concentration, such as cost-intensive field monitoring, scarcity of field data, identification of important factors affecting As, and over-fitting or poor estimation accuracy. This study develops a novel systematical dynamic-neural modeling (SDM) approach for effectively estimating regional As-contaminated water quality by using easily-measured water quality variables. To tackle the difficulties commonly encountered in regional estimation, the SDM comprises a neural network, the Nonlinear Autoregressive with eXogenous input (NARX) network, and four statistical techniques: the Gamma test, cross-validation, the Bayesian regularization method and indicator kriging (IK). For practical application, this study investigated a heavily As-contaminated area in Taiwan. The backpropagation neural network (BPNN) is adopted for comparison purposes. The results demonstrate that the NARX network (root mean square error (RMSE): 95.11 μg l-1 for training; 106.13 μg l-1 for validation) outperforms the BPNN (RMSE: 121.54 μg l-1 for training; 143.37 μg l-1 for validation). The constructed SDM can provide reliable estimation (R2 > 0.89) of As concentration at ungauged sites based merely on three easily-measured water quality variables (Alk, Ca2+ and pH). In addition, risk maps under the threshold of the WHO drinking water standard (10 μg l-1) are derived by the IK to visually display the spatial and temporal variation of the As concentration in the whole study area at different time spans. The proposed SDM can be practically applied with satisfaction to the regional estimation in study areas of interest and the
Empirical modeling ENSO dynamics with complex-valued artificial neural networks
NASA Astrophysics Data System (ADS)
Seleznev, Aleksei; Gavrilov, Andrey; Mukhin, Dmitry
2016-04-01
The main difficulty in empirically reconstructing distributed dynamical systems (e.g. regional climate systems, such as the El Niño-Southern Oscillation - ENSO) is the huge amount of observational data comprising time-varying spatial fields of several variables. An efficient reduction of the system's dimensionality is therefore essential for inferring an evolution operator (EO) for a low-dimensional subsystem that determines the key properties of the observed dynamics. In this work, to reduce observational data sets efficiently, we use complex-valued (Hilbert) empirical orthogonal functions, which, unlike traditional empirical orthogonal functions, are appropriate by their nature for describing propagating structures. For the approximation of the EO, a universal model in the form of a complex-valued artificial neural network is suggested. The effectiveness of this approach is demonstrated by predicting both the Jin-Neelin-Ghil ENSO model [1] behavior and real ENSO variability from sea surface temperature anomaly data [2]. The study is supported by the Government of the Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of RAS). 1. Jin, F.-F., J. D. Neelin, and M. Ghil, 1996: El Niño/Southern Oscillation and the annual cycle: subharmonic frequency locking and aperiodicity. Physica D, 98, 442-465. 2. http://iridl.ldeo.columbia.edu/SOURCES/.KAPLAN/.EXTENDED/.v2/.ssta/
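The Hilbert-EOF reduction step can be sketched with plain FFT machinery: take the analytic signal of each spatial point's time series, then apply an SVD to the resulting complex data matrix, so that each complex EOF carries amplitude and phase, i.e. propagation. A minimal sketch assuming a (time × space) anomaly matrix; the neural-network EO itself is omitted:

```python
import numpy as np

def analytic_signal(x, axis=0):
    """FFT-based analytic signal (Hilbert transform) along `axis`."""
    n = x.shape[axis]
    X = np.fft.fft(x, axis=axis)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    shape = [1] * x.ndim
    shape[axis] = n
    return np.fft.ifft(X * h.reshape(shape), axis=axis)

def hilbert_eofs(field, k=2):
    """Complex (Hilbert) EOFs of a (time, space) anomaly matrix:
    analytic signal in time, then a truncated SVD. Returns the complex
    spatial patterns and the complex principal components."""
    z = analytic_signal(field - field.mean(axis=0), axis=0)
    u, s, vh = np.linalg.svd(z, full_matrices=False)
    return vh[:k].conj().T, u[:, :k] * s[:k]
```

For a single traveling wave, one complex EOF suffices, whereas ordinary (real) EOFs need a quadrature pair, which is the dimensionality saving exploited here.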
Kimura, Masahiro
2002-12-01
This article extends previous mathematical studies on elucidating the redundancy for describing functions by feedforward neural networks (FNNs) to the elucidation of redundancy for describing dynamical systems (DSs) by continuous-time recurrent neural networks (RNNs). In order to approximate a DS on R(n) using an RNN with n visible units, an n-dimensional affine neural dynamical system (A-NDS) can be used as the DS actually produced by the above RNN under an affine map from its visible state-space R(n) to its hidden state-space. Therefore, we consider the problem of clarifying the redundancy for describing A-NDSs by RNNs and affine maps. We clarify to what extent a pair of an RNN and an affine map is uniquely determined by its corresponding A-NDS and also give a nonredundant sufficient search set for the DS approximation problem based on A-NDS. PMID:12487801
Codevelopmental learning between human and humanoid robot using a dynamic neural-network model.
Tani, Jun; Nishimoto, Ryu; Namikawa, Jun; Ito, Masato
2008-02-01
This paper examines characteristics of interactive learning between human tutors and a robot having a dynamic neural-network model, which is inspired by human parietal cortex functions. A humanoid robot, with a recurrent neural network that has a hierarchical structure, learns to manipulate objects. Robots learn tasks in repeated self-trials with the assistance of human interaction, which provides physical guidance until the tasks are mastered and learning is consolidated within the neural networks. Experimental results and the analyses showed the following: 1) codevelopmental shaping of task behaviors stems from interactions between the robot and a tutor; 2) dynamic structures for articulating and sequencing of behavior primitives are self-organized in the hierarchically organized network; and 3) such structures can afford both generalization and context dependency in generating skilled behaviors. PMID:18270081
The neural dynamics of reward value and risk coding in the human orbitofrontal cortex.
Li, Yansong; Vanni-Mercier, Giovanna; Isnard, Jean; Mauguière, François; Dreher, Jean-Claude
2016-04-01
The orbitofrontal cortex is known to carry information regarding expected reward, risk and experienced outcome. Yet, due to inherent limitations in lesion and neuroimaging methods, the neural dynamics of these computations has remained elusive in humans. Here, taking advantage of the high temporal definition of intracranial recordings, we characterize the neurophysiological signatures of the intact orbitofrontal cortex in processing information relevant for risky decisions. Local field potentials were recorded from the intact orbitofrontal cortex of patients suffering from drug-refractory partial epilepsy with implanted depth electrodes as they performed a probabilistic reward learning task that required them to associate visual cues with distinct reward probabilities. We observed three successive signals: (i) around 400 ms after cue presentation, the amplitudes of the local field potentials increased with reward probability; (ii) a risk signal emerged during the late phase of reward anticipation and during the outcome phase; and (iii) an experienced value signal appeared at the time of reward delivery. Both the medial and lateral orbitofrontal cortex encoded risk and reward probability while the lateral orbitofrontal cortex played a dominant role in coding experienced value. The present study provides the first evidence from intracranial recordings that the human orbitofrontal cortex codes reward risk both during late reward anticipation and during the outcome phase at a time scale of milliseconds. Our findings offer insights into the rapid mechanisms underlying the ability to learn structural relationships from the environment. PMID:26811252
Dark-field differential dynamic microscopy.
Bayles, Alexandra V; Squires, Todd M; Helgeson, Matthew E
2016-02-28
Differential dynamic microscopy (DDM) is an emerging technique to measure the ensemble dynamics of colloidal and complex fluid motion using optical microscopy in systems that would otherwise be difficult to measure using other methods. To date, DDM has successfully been applied to linear space invariant imaging modes including bright-field, fluorescence, confocal, polarised, and phase-contrast microscopy to study diverse dynamic phenomena. In this work, we show for the first time how DDM analysis can be extended to dark-field imaging, i.e. a linear space variant (LSV) imaging mode. Specifically, we present a particle-based framework for describing dynamic image correlations in DDM, and use it to derive a correction to the image structure function obtained by DDM that accounts for scatterers with non-homogeneous intensity distributions as they move within the imaging plane. To validate the analysis, we study the Brownian motion of gold nanoparticles, whose plasmonic structure allows for nanometer-scale particles to be imaged under dark-field illumination, in Newtonian liquids. We find that diffusion coefficients of the nanoparticles can be reliably measured by dark-field DDM, even under optically dense concentrations where analysis via multiple-particle tracking microrheology fails. These results demonstrate the potential for DDM analysis to be applied to linear space variant forms of microscopy, providing access to experimental systems unavailable to other imaging modes. PMID:26822331
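The core DDM computation, the image structure function D(q, Δt) obtained from Fourier transforms of frame differences with an azimuthal average over wavevector magnitude, can be sketched as follows. This is a minimal sketch of standard (linear space invariant) DDM analysis; the dark-field LSV correction derived in the paper is not included:

```python
import numpy as np

def ddm_structure_function(frames, lags, nbins=20):
    """Image structure function D(q, dt): power spectrum of frame
    differences, averaged over time and over annuli of |q|.
    `frames` has shape (T, H, W); returns {lag: D over q-bins}."""
    _, H, W = frames.shape
    qy = np.fft.fftfreq(H)[:, None]
    qx = np.fft.fftfreq(W)[None, :]
    qmag = np.sqrt(qx ** 2 + qy ** 2)
    qbins = np.linspace(0.0, qmag.max(), nbins)
    which = np.digitize(qmag.ravel(), qbins)
    out = {}
    for lag in lags:
        diffs = frames[lag:] - frames[:-lag]
        power = np.abs(np.fft.fft2(diffs, axes=(1, 2))) ** 2
        dq = power.mean(axis=0).ravel()
        # azimuthal average over annuli of |q|
        out[lag] = np.array([dq[which == b].mean() if np.any(which == b)
                             else 0.0 for b in range(1, nbins)])
    return out
```

For Brownian scatterers, fitting D(q, Δt) to A(q)[1 − exp(−D q² Δt)] + B(q) yields the diffusion coefficient without tracking individual particles.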
From Behavior to Neural Dynamics: An Integrated Theory of Attention.
Buschman, Timothy J; Kastner, Sabine
2015-10-01
The brain has a limited capacity and therefore needs mechanisms to selectively enhance the information most relevant to one's current behavior. We refer to these mechanisms as "attention." Attention acts by increasing the strength of selected neural representations and preferentially routing them through the brain's large-scale network. This is a critical component of cognition and therefore has been a central topic in cognitive neuroscience. Here we review a diverse literature that has studied attention at the level of behavior, networks, circuits, and neurons. We then integrate these disparate results into a unified theory of attention. PMID:26447577
NASA Astrophysics Data System (ADS)
Shoaib, Muhammad; Shamseldin, Asaad Y.; Melville, Bruce W.; Khan, Mudasser Muneer
2016-04-01
In order to predict runoff accurately from a rainfall event, multilayer perceptron neural network models are commonly used in hydrology. Furthermore, wavelet-coupled multilayer perceptron neural network (MLPNN) models have also been found superior to simple neural network models that are not coupled with wavelets. However, MLPNN models are static, memoryless networks and lack the ability to examine the temporal dimension of the data. Recurrent neural network models, on the other hand, have the ability to learn from the preceding conditions of the system and are hence considered dynamic models. This study for the first time explores the potential of wavelet-coupled time-lagged recurrent neural network (TLRNN) models for runoff prediction using rainfall data. The discrete wavelet transformation (DWT) is employed in this study to decompose the input rainfall data using six of the most commonly used wavelet functions. The performance of the simple and the wavelet-coupled static MLPNN models is compared with that of their counterpart dynamic TLRNN models. The study found that the dynamic wavelet-coupled TLRNN models can be considered an alternative to the static wavelet MLPNN models. The study also investigated the effect of memory depth on the performance of the static and dynamic neural network models; the memory depth refers to how much past information (lagged data) is required, as it is not known a priori. The db8 wavelet function is found to yield the best results with the static MLPNN models and with the TLRNN models having small memory depths. The performance of the wavelet-coupled TLRNN models with large memory depths is found to be insensitive to the selection of the wavelet function, as all wavelet functions have similar performance.
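The decomposition step can be illustrated with the simplest member of the Daubechies family, the Haar wavelet (db1), standing in for the db8 function used in the study; a minimal hand-rolled sketch rather than a wavelet library:

```python
import numpy as np

def haar_dwt(signal, levels=3):
    """Multilevel discrete wavelet decomposition with the Haar wavelet:
    repeatedly split the current approximation into a half-length
    low-pass (approximation) and high-pass (detail) sequence."""
    a = np.asarray(signal, dtype=float)
    details = []
    for _ in range(levels):
        if len(a) % 2:                  # pad odd-length signals
            a = np.append(a, a[-1])
        approx = (a[0::2] + a[1::2]) / np.sqrt(2.0)
        detail = (a[0::2] - a[1::2]) / np.sqrt(2.0)
        details.append(detail)
        a = approx
    return a, details
```

In a wavelet-coupled model, the final approximation and the detail sequences become separate input channels to the MLPNN or TLRNN, so each network input isolates one band of temporal variability in the rainfall series.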
A Neural Network Model of the Structure and Dynamics of Human Personality
ERIC Educational Resources Information Center
Read, Stephen J.; Monroe, Brian M.; Brownstein, Aaron L.; Yang, Yu; Chopra, Gurveen; Miller, Lynn C.
2010-01-01
We present a neural network model that aims to bridge the historical gap between dynamic and structural approaches to personality. The model integrates work on the structure of the trait lexicon, the neurobiology of personality, temperament, goal-based models of personality, and an evolutionary analysis of motives. It is organized in terms of two…
ERIC Educational Resources Information Center
Zion-Golumbic, Elana; Kutas, Marta; Bentin, Shlomo
2010-01-01
Prior semantic knowledge facilitates episodic recognition memory for faces. To examine the neural manifestation of the interplay between semantic and episodic memory, we investigated neuroelectric dynamics during the creation (study) and the retrieval (test) of episodic memories for famous and nonfamous faces. Episodic memory effects were evident…
Effects of refractory periods in the dynamics of a diluted neural network
NASA Astrophysics Data System (ADS)
Tamarit, F. A.; Stariolo, D. A.; Cannas, S. A.; Serra, P.
1996-05-01
We propose a stochastic dynamics for a neural network which accounts for the effects of the refractory periods (absolute and relative) in the dynamics of a single neuron. The dynamics can be solved analytically in an extremely diluted network. We found a very rich scenario that presents retrieval phases and a period doubling route to chaos in the attractors of the overlap order parameter. Our model incorporates some characteristics that make it biologically appealing, such as asymmetric synaptic efficacies, dilution of the synaptic matrix, absolute and relative refractory periods, complex retrieval dynamics, and low levels of activity in the retrieval regime.
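The single-neuron ingredient, stochastic firing gated by an absolute refractory window plus a decaying relative-refractory threshold, can be sketched as follows (the parameter values and the exponential threshold form are illustrative assumptions, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_neuron(inp, theta=1.0, beta=4.0, r_abs=2, tau_rel=3.0, steps=500):
    """Stochastic single-neuron update with an absolute refractory period
    (no firing for r_abs steps after a spike) and a relative one
    (a transiently raised threshold decaying with time constant tau_rel)."""
    spikes = np.zeros(steps, dtype=int)
    last = -10 ** 9
    for t in range(steps):
        since = t - last
        if since <= r_abs:
            continue                     # absolute refractory: cannot fire
        theta_t = theta + 2.0 * np.exp(-(since - r_abs) / tau_rel)
        p = 1.0 / (1.0 + np.exp(-beta * (inp - theta_t)))
        if rng.random() < p:
            spikes[t] = 1
            last = t
    return spikes
```

Even with a strongly suprathreshold input, the refractory mechanism enforces a minimum inter-spike interval, the ingredient behind the period-doubling structure in the retrieval dynamics.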
Rigotti, Mattia; Rubin, Daniel Ben Dayan; Wang, Xiao-Jing; Fusi, Stefano
2010-01-01
Neural activity of behaving animals, especially in the prefrontal cortex, is highly heterogeneous, with selective responses to diverse aspects of the executed task. We propose a general model of recurrent neural networks that perform complex rule-based tasks, and we show that the diversity of neuronal responses plays a fundamental role when the behavioral responses are context-dependent. Specifically, we found that when the inner mental states encoding the task rules are represented by stable patterns of neural activity (attractors of the neural dynamics), the neurons must be selective for combinations of sensory stimuli and inner mental states. Such mixed selectivity is easily obtained by neurons that connect with random synaptic strengths both to the recurrent network and to neurons encoding sensory inputs. The number of randomly connected neurons needed to solve a task is on average only three times as large as the number of neurons needed in a network designed ad hoc. Moreover, the number of needed neurons grows only linearly with the number of task-relevant events and mental states, provided that each neuron responds to a large proportion of events (dense/distributed coding). A biologically realistic implementation of the model captures several aspects of the activity recorded from monkeys performing context-dependent tasks. Our findings explain the importance of the diversity of neural responses and provide us with simple and general principles for designing attractor neural networks that perform complex computation. PMID:21048899
Direct imaging of neural currents using ultra-low field magnetic resonance techniques
Volegov, Petr L.; Matlashov, Andrei N.; Mosher, John C.; Espy, Michelle A.; Kraus, Jr., Robert H.
2009-08-11
Using resonant interactions to directly and tomographically image neural activity in the human brain using magnetic resonance imaging (MRI) techniques at ultra-low field (ULF), the present inventors have established an approach that is sensitive to magnetic field distributions local to the spin population in cortex at the Larmor frequency of the measurement field. Because the Larmor frequency can be readily manipulated (through varying B_m), one can also envision using ULF-DNI to image the frequency distribution of the local fields in cortex. Such information, taken together with simultaneous acquisition of MEG and ULF-NMR signals, enables non-invasive exploration of the correlation between local fields induced by neural activity in cortex and more 'distant' measures of brain activity such as MEG and EEG.
Biophysical Neural Spiking, Bursting, and Excitability Dynamics in Reconfigurable Analog VLSI
Yu, Theodore; Sejnowski, Terrence J.; Cauwenberghs, Gert
2011-01-01
We study a range of neural dynamics under variations in biophysical parameters underlying extended Morris–Lecar and Hodgkin–Huxley models in three gating variables. The extended models are implemented in NeuroDyn, a four neuron, twelve synapse continuous-time analog VLSI programmable neural emulation platform with generalized channel kinetics and biophysical membrane dynamics. The dynamics exhibit a wide range of time scales extending beyond 100 ms neglected in typical silicon models of tonic spiking neurons. Circuit simulations and measurements show transition from tonic spiking to tonic bursting dynamics through variation of a single conductance parameter governing calcium recovery. We similarly demonstrate transition from graded to all-or-none neural excitability in the onset of spiking dynamics through the variation of channel kinetic parameters governing the speed of potassium activation. Other combinations of variations in conductance and channel kinetic parameters give rise to phasic spiking and spike frequency adaptation dynamics. The NeuroDyn chip consumes 1.29 mW and occupies 3 mm × 3 mm in 0.5 μm CMOS, supporting emerging developments in neuromorphic silicon-neuron interfaces. PMID:22227949
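The two-variable Morris-Lecar dynamics underlying the chip (which NeuroDyn extends to three gating variables) can be reproduced in software, including the transition from rest to tonic spiking as the injected current crosses threshold. A minimal sketch with a textbook parameter set; the values are the standard Rinzel-Ermentrout illustration, not the chip's calibrated biophysical parameters:

```python
import numpy as np

def morris_lecar(i_ext, t_end=500.0, dt=0.05):
    """Forward-Euler integration of the two-variable Morris-Lecar model:
    C dV/dt = I - gCa*m_inf(V)*(V-VCa) - gK*w*(V-VK) - gL*(V-VL)
      dw/dt = phi * (w_inf(V) - w) / tau_w(V)."""
    g_ca, g_k, g_l = 4.4, 8.0, 2.0           # conductances (mS/cm^2)
    v_ca, v_k, v_l = 120.0, -84.0, -60.0     # reversal potentials (mV)
    c, phi = 20.0, 0.04
    v1, v2, v3, v4 = -1.2, 18.0, 2.0, 30.0   # gating half-activations/slopes
    v, w = -60.0, 0.0
    trace = np.empty(int(t_end / dt))
    for k in range(trace.size):
        m_inf = 0.5 * (1.0 + np.tanh((v - v1) / v2))
        w_inf = 0.5 * (1.0 + np.tanh((v - v3) / v4))
        tau_w = 1.0 / np.cosh((v - v3) / (2.0 * v4))
        dv = (i_ext - g_ca * m_inf * (v - v_ca) - g_k * w * (v - v_k)
              - g_l * (v - v_l)) / c
        dw = phi * (w_inf - w) / tau_w
        v, w = v + dt * dv, w + dt * dw
        trace[k] = v
    return trace

quiet = morris_lecar(0.0)      # subthreshold: settles to rest
spiking = morris_lecar(100.0)  # suprathreshold: tonic spiking
```

Sweeping single parameters of this system (e.g. a conductance governing recovery) is the software analogue of the chip experiments that move the dynamics between tonic spiking and bursting.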
Topological field theory of dynamical systems
Ovchinnikov, Igor V.
2012-09-15
Here, it is shown that the path-integral representation of any stochastic or deterministic continuous-time dynamical model is a cohomological or Witten-type topological field theory, i.e., a model with global topological supersymmetry (Q-symmetry). As with many other supersymmetries, Q-symmetry must be perturbatively stable due to what is generically known as non-renormalization theorems. As a result, all (equilibrium) dynamical models are divided into three major categories: Markovian models with unbroken Q-symmetry, chaotic models with Q-symmetry spontaneously broken on the mean-field level by, e.g., fractal invariant sets (e.g., strange attractors), and intermittent or self-organized critical (SOC) models with Q-symmetry dynamically broken by the condensation of instanton-antiinstanton configurations (earthquakes, avalanches, etc.). SOC is a full-dimensional phase separating chaos and Markovian dynamics. In the deterministic limit, however, antiinstantons disappear and SOC collapses into the 'edge of chaos.' The Goldstone theorem stands behind the spatio-temporal self-similarity of Q-broken phases, known under such names as algebraic statistics of avalanches, 1/f noise, and sensitivity to initial conditions. Other fundamental differences of Q-broken phases are that they can be effectively viewed as quantum dynamics and that they must also have time-reversal symmetry spontaneously broken. Q-symmetry breaking in non-equilibrium situations (quenches, Barkhausen effect, etc.) is also briefly discussed.
Stability of bumps in piecewise smooth neural fields with nonlinear adaptation
NASA Astrophysics Data System (ADS)
Kilpatrick, Zachary P.; Bressloff, Paul C.
2010-06-01
We study the linear stability of stationary bumps in piecewise smooth neural fields with local negative feedback in the form of synaptic depression or spike frequency adaptation. The continuum dynamics is described in terms of a nonlocal integrodifferential equation, in which the integral kernel represents the spatial distribution of synaptic weights between populations of neurons whose mean firing rate is taken to be a Heaviside function of local activity. Discontinuities in the adaptation variable associated with a bump solution mean that bump stability cannot be analyzed by constructing the Evans function for a network with a sigmoidal gain function and then taking the high-gain limit. In the case of synaptic depression, we show that linear stability can be formulated in terms of solutions to a system of pseudo-linear equations. We thus establish that sufficiently strong synaptic depression can destabilize a bump that is stable in the absence of depression. These instabilities are dominated by shift perturbations that evolve into traveling pulses. In the case of spike frequency adaptation, we show that for a wide class of perturbations the activity and adaptation variables decouple in the linear regime, thus allowing us to explicitly determine stability in terms of the spectrum of a smooth linear operator. We find that bumps are always unstable with respect to this class of perturbations, and destabilization of a bump can result in either a traveling pulse or a spatially localized breather.
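The underlying continuum model, an Amari-type field u_t = -u + ∫ w(x-y) H(u(y) - θ) dy with a Heaviside firing rate, is easy to simulate directly. The following sketch (Mexican-hat kernel and threshold chosen illustratively, and without the adaptation variable analyzed in the paper) relaxes a localized initial state onto a stationary bump:

```python
import numpy as np

def simulate_bump(n=256, length=10.0, theta=0.5, dt=0.1, t_end=50.0):
    """Amari neural field u_t = -u + w * H(u - theta) on a ring.
    The spatial convolution with the Mexican-hat kernel w is evaluated
    spectrally; the firing rate is a Heaviside function of activity."""
    x = np.linspace(-length / 2, length / 2, n, endpoint=False)
    dx = length / n
    w = 1.5 * np.exp(-x ** 2) - 0.5 * np.exp(-x ** 2 / 4.0)  # Mexican hat
    w_hat = np.fft.fft(np.fft.ifftshift(w)) * dx             # kernel centered at 0
    u = np.exp(-x ** 2)                                      # localized initial activity
    for _ in range(int(t_end / dt)):
        f = (u > theta).astype(float)                        # Heaviside firing rate
        conv = np.real(np.fft.ifft(w_hat * np.fft.fft(f)))
        u += dt * (-u + conv)
    return x, u

x, u = simulate_bump()
width = (u > 0.5).sum() * (10.0 / 256)   # width of the superthreshold region
```

In Amari's classical construction the bump half-width a solves ∫ over (0, 2a) of w = θ, and the bump is stable when w(2a) < 0; adding depression or adaptation, as in the paper, is what turns such stable bumps into traveling pulses or breathers.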
Mean Field Analysis of Stochastic Neural Network Models with Synaptic Depression
NASA Astrophysics Data System (ADS)
Yasuhiko Igarashi,; Masafumi Oizumi,; Masato Okada,
2010-08-01
We investigated the effects of synaptic depression on the macroscopic behavior of stochastic neural networks. Dynamical mean field equations were derived for such networks by taking the average of two stochastic variables: a firing-state variable and a synaptic variable. In these equations, the average product of these variables is decoupled as the product of their averages because the two stochastic variables are independent. We proved the independence of these two stochastic variables assuming that the synaptic weight Jij is of the order of 1/N with respect to the number of neurons N. Using these equations, we derived macroscopic steady-state equations for a network with uniform connections and for a ring attractor network with Mexican hat type connectivity and investigated the stability of the steady-state solutions. An oscillatory uniform state was observed in the network with uniform connections owing to a Hopf instability. For the ring network, high-frequency perturbations were shown not to affect system stability. Two mechanisms destabilize the inhomogeneous steady state, leading to two oscillatory states. A Turing instability leads to a rotating bump state, while a Hopf instability leads to an oscillatory bump state, which was previously unreported. Various oscillatory states take place in a network with synaptic depression depending on the strength of the interneuron connections.
Force field dependence of riboswitch dynamics.
Hanke, Christian A; Gohlke, Holger
2015-01-01
Riboswitches are noncoding regulatory elements that control gene expression in response to the presence of metabolites, which bind to the aptamer domain. Metabolite binding appears to occur through a combination of conformational selection and induced fit mechanism. This demands a characterization of the structural dynamics of the apo state of aptamer domains. In principle, molecular dynamics (MD) simulations can give insights at the atomistic level into the dynamics of the aptamer domain. However, it is unclear to what extent contemporary force fields can bias such insights. Here, we show that the Amber force field ff99 yields the best agreement with detailed experimental observations on differences in the structural dynamics of wild type and mutant aptamer domains of the guanine-sensing riboswitch (Gsw), including a pronounced influence of Mg2+. In contrast, applying ff99 with parmbsc0 and parmχOL modifications (denoted ff10) results in strongly damped motions and overly stable tertiary loop-loop interactions. These results are based on 58 MD simulations with an aggregate simulation time > 11 μs, careful modeling of Mg2+ ions, and thorough statistical testing. Our results suggest that the moderate stabilization of the χ-anti region in ff10 can have an unwanted damping effect on functionally relevant structural dynamics of marginally stable RNA systems. This suggestion is supported by crystal structure analyses of Gsw aptamer domains that reveal χ torsions with high-anti values in the most mobile regions. We expect that future RNA force field development will benefit from considering marginally stable RNA systems and optimization toward good representations of dynamics in addition to structural characteristics. PMID:25726465
Use of artificial neural nets to predict permeability in Hugoton Field
Thompson, K.A.; Franklin, M.H.; Olson, T.M.
1996-12-31
One of the most difficult tasks in petrophysics is establishing a quantitative relationship between core permeability and wireline logs. This is a tough problem in Hugoton Field, where a complicated mix of carbonates and clastics further obscures the correlation. One can successfully model complex relationships such as permeability-to-logs using artificial neural networks. Mind and Vision, Inc.'s neural net software was used because of its orientation toward depth-related data (such as logs) and its ability to run on a variety of log analysis platforms. This type of neural net program allows the expert geologist to select a few (10-100) points of control to train the "brainstate" using logs as predictors and core permeability as "truth". In Hugoton Field, the brainstate provides an estimate of permeability at each depth in 474 logged wells. These neural net-derived permeabilities are being used in reservoir characterization models for fluid saturations. Other applications of this artificial neural network technique include deterministic relationships of logs to: core lithology, core porosity, pore type, and other wireline logs (e.g., predicting a sonic log from a density log).
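The core regression idea (minus the commercial package) can be sketched with a tiny hand-rolled network trained on a handful of synthetic "control points" standing in for expert-picked core measurements; the log names, data, and network size are hypothetical.

```python
import math, random

# Toy log-to-permeability regressor: one hidden layer, plain SGD backprop.
# Inputs are (porosity, normalized gamma ray); target is a synthetic
# log10(permeability). Everything here is invented for illustration.
random.seed(1)

data = [((phi, gr / 100.0), 4.0 * phi - 2.0 * (gr / 100.0) - 1.0)
        for phi in (0.05, 0.10, 0.15, 0.20, 0.25)
        for gr in (20.0, 60.0, 100.0)]

H = 4                                    # hidden units
w1 = [[random.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-0.5, 0.5) for _ in range(H)]
b2 = 0.0

def forward(inp):
    h = [math.tanh(sum(wij * xj for wij, xj in zip(w1[i], inp)) + b1[i])
         for i in range(H)]
    return h, sum(w2[i] * h[i] for i in range(H)) + b2

def mse():
    return sum((forward(inp)[1] - y) ** 2 for inp, y in data) / len(data)

loss_before = mse()
lr = 0.02
for _ in range(3000):                    # stochastic gradient descent
    inp, y = random.choice(data)
    h, out = forward(inp)
    err = out - y
    for i in range(H):                   # backpropagate one step
        grad_h = err * w2[i] * (1.0 - h[i] ** 2)
        w2[i] -= lr * err * h[i]
        b1[i] -= lr * grad_h
        for j in range(2):
            w1[i][j] -= lr * grad_h * inp[j]
    b2 -= lr * err
loss_after = mse()
```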
Phase field approximation of dynamic brittle fracture
NASA Astrophysics Data System (ADS)
Schlüter, Alexander; Willenbücher, Adrian; Kuhn, Charlotte; Müller, Ralf
2014-11-01
Numerical methods that are able to predict the failure of technical structures due to fracture are important in many engineering applications. One of these approaches, the so-called phase field method, represents cracks by means of an additional continuous field variable. This strategy avoids some of the main drawbacks of a sharp interface description of cracks. For example, it is not necessary to track or model crack faces explicitly, which allows a simple algorithmic treatment. The phase field model for brittle fracture presented in Kuhn and Müller (Eng Fract Mech 77(18):3625-3634, 2010) assumes quasi-static loading conditions. However, dynamic effects have a great impact on the crack growth in many practical applications. Therefore, this investigation presents an extension of the quasi-static phase field model for fracture from Kuhn and Müller (Eng Fract Mech 77(18):3625-3634, 2010) to the dynamic case. First of all, Hamilton's principle is applied to derive a coupled set of Euler-Lagrange equations that govern the mechanical behaviour of the body as well as the crack growth. Subsequently the model is implemented in a finite element scheme, which allows several test problems to be solved numerically. The numerical examples illustrate the capabilities of the developed approach to dynamic fracture in brittle materials.
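The variational structure behind such models can be sketched in generic notation: a regularized energy of the Ambrosio-Tortorelli type commonly used in phase field fracture, with a crack field s (s = 1 intact, s = 0 fully broken), regularization length ε, residual stiffness η, elastic energy density ψ_e, and fracture toughness G_c. The symbols are illustrative and not necessarily those of the cited papers.

```latex
E(\mathbf{u}, s) = \int_{\Omega} \left[ (s^2 + \eta)\,\psi_e\big(\boldsymbol{\varepsilon}(\mathbf{u})\big)
  + \mathcal{G}_c \left( \frac{(1 - s)^2}{4\epsilon} + \epsilon\,\lvert\nabla s\rvert^2 \right) \right] \mathrm{d}V
```

For the dynamic extension, Hamilton's principle is applied to the action built from the kinetic energy minus a potential of this form, yielding coupled Euler-Lagrange equations for the displacement field and the crack field.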
Discrete neural dynamic programming in wheeled mobile robot control
NASA Astrophysics Data System (ADS)
Hendzel, Zenon; Szuster, Marcin
2011-05-01
In this paper we propose a discrete algorithm for tracking control of a two-wheeled mobile robot (WMR), using an advanced Adaptive Critic Design (ACD). We used the Dual-Heuristic Programming (DHP) algorithm, which consists of two parametric structures implemented as Neural Networks (NNs): an actor and a critic, both realized in the form of Random Vector Functional Link (RVFL) NNs. In the proposed algorithm the control system consists of the DHP adaptive critic, a PD controller and a supervisory term derived from the Lyapunov stability theorem. The supervisory term guarantees a stable realization of the tracking movement in the learning phase of the adaptive critic structure and robustness in the face of disturbances. The discrete tracking control algorithm works online, uses the WMR model for state prediction and does not require preliminary learning. Verification has been conducted to illustrate the performance of the proposed control algorithm by a series of experiments on the WMR Pioneer 2-DX.
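The RVFL idea itself is compact enough to sketch: hidden weights are drawn once at random and frozen, and only the linear readout (which also sees the raw input through the direct link) is trained. The task and sizes below are illustrative, not the WMR controller.

```python
import math, random

# Random Vector Functional Link net fitting a 1-D function with LMS
# updates on the readout only. Hidden weights stay fixed after sampling.
random.seed(0)
H = 20
wh = [(random.uniform(-2, 2), random.uniform(-1, 1)) for _ in range(H)]  # (weight, bias)

def features(x):
    return [x] + [math.tanh(w * x + b) for w, b in wh]   # direct link + hidden

xs = [i / 20.0 for i in range(-20, 21)]
ys = [math.sin(2.0 * x) for x in xs]                     # illustrative target

beta = [0.0] * (H + 1)                                   # trainable readout
lr = 0.05
for _ in range(500):
    for x, y in zip(xs, ys):
        phi = features(x)
        err = sum(b * p for b, p in zip(beta, phi)) - y
        beta = [b - lr * err * p for b, p in zip(beta, phi)]

mse = sum((sum(b * p for b, p in zip(beta, features(x))) - y) ** 2
          for x, y in zip(xs, ys)) / len(xs)
```

Because only the readout is trained, the learning problem is linear least squares, which is what makes RVFL structures attractive for online adaptive-critic schemes.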
Reconstructing neural dynamics using data assimilation with multiple models
NASA Astrophysics Data System (ADS)
Hamilton, Franz; Cressman, John; Peixoto, Nathalia; Sauer, Timothy
2014-09-01
Assimilation of data with models of physical processes is a critical component of modern scientific analysis. In recent years, nonlinear versions of Kalman filtering have been developed, in addition to methods that estimate model parameters in parallel with the system state. We propose a substantial extension of these tools to deal with the specific case of unmodeled variables, when training data from the variable are available. The method uses a stack of several nonidentical copies of a physical model to jointly reconstruct the variable in question. We demonstrate the ability of this technique to accurately recover an unmodeled experimental quantity, such as an ion concentration, from a single voltage trace after the training period is completed. The method is applied to reconstruct the potassium concentration in a neural culture from multielectrode array voltage measurements.
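A single ensemble Kalman analysis step, the basic ingredient behind such assimilation schemes, can be sketched for a scalar state; the multiple-model stack of the paper is not reproduced, and all numbers are illustrative.

```python
import random

# Perturbed-observation ensemble Kalman update for a scalar state with a
# direct (identity) observation: the observation pulls the ensemble
# toward the data and shrinks its spread.
random.seed(3)
M = 200
prior = [random.gauss(0.0, 2.0) for _ in range(M)]    # forecast ensemble
y_obs, r_obs = 3.0, 0.5                               # observation, obs variance

mean_f = sum(prior) / M
var_f = sum((x - mean_f) ** 2 for x in prior) / (M - 1)
K = var_f / (var_f + r_obs)                           # Kalman gain

posterior = [x + K * (y_obs + random.gauss(0.0, r_obs ** 0.5) - x)
             for x in prior]                          # perturbed observations

mean_a = sum(posterior) / M
var_a = sum((x - mean_a) ** 2 for x in posterior) / (M - 1)
```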
Dynamic Neural Processing of Linguistic Cues Related to Death
Ma, Yina; Qin, Jungang; Han, Shihui
2013-01-01
Behavioral studies suggest that humans have evolved the capacity to cope with anxiety induced by the awareness of death's inevitability. However, the neurocognitive processes that underlie online death-related thoughts remain unclear. Our recent functional MRI study found that the processing of linguistic cues related to death was characterized by decreased neural activity in human insular cortex. The current study further investigated the time course of neural processing of death-related linguistic cues. We recorded event-related potentials (ERP) to death-related, life-related, negative-valence, and neutral-valence words in a modified Stroop task that required color naming of words. We found that the amplitude of an early frontal/central negativity at 84–120 ms (N1) decreased to death-related words but increased to life-related words relative to neutral-valence words. The N1 effect associated with death-related and life-related words was correlated respectively with individuals' pessimistic and optimistic attitudes toward life. Death-related words also increased the amplitude of a frontal/central positivity at 124–300 ms (P2) and of a frontal/central positivity at 300–500 ms (P3). However, the P2 and P3 modulations were observed for both death-related and negative-valence words but not for life-related words. The ERP results suggest an early inverse coding of linguistic cues related to life and death, which is followed by negative emotional responses to death-related information. PMID:23840787
Amozegar, M; Khorasani, K
2016-04-01
In this paper, a new approach for Fault Detection and Isolation (FDI) of gas turbine engines is proposed by developing an ensemble of dynamic neural network identifiers. For health monitoring of the gas turbine engine, its dynamics is first identified by constructing three separate or individual dynamic neural network architectures. Specifically, a dynamic multi-layer perceptron (MLP), a dynamic radial-basis function (RBF) neural network, and a dynamic support vector machine (SVM) are trained to individually identify and represent the gas turbine engine dynamics. Next, three ensemble-based techniques are developed to represent the gas turbine engine dynamics, namely, two heterogeneous ensemble models and one homogeneous ensemble model. It is first shown that all ensemble approaches do significantly improve the overall performance and accuracy of the developed system identification scheme when compared to each of the stand-alone solutions. The best selected stand-alone model (i.e., the dynamic RBF network) and the best selected ensemble architecture (i.e., the heterogeneous ensemble) in terms of their performances in achieving an accurate system identification are then selected for solving the FDI task. The required residual signals are generated by using both a single model-based solution and an ensemble-based solution under various gas turbine engine health conditions. Our extensive simulation studies demonstrate that the fault detection and isolation task achieved by using the residuals that are obtained from the dynamic ensemble scheme results in a significantly more accurate and reliable performance as illustrated through detailed quantitative confusion matrix analysis and comparative studies. PMID:26881999
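One reason ensemble identifiers help can be shown in a few lines: for squared error, the MSE of the averaged prediction is never worse than the average (hence never worse than the worst) of the members' MSEs, by convexity. The three "models" below are arbitrary stand-ins, not the MLP/RBF/SVM identifiers of the study.

```python
# Jensen's inequality applied to ensemble averaging: loss(avg prediction)
# <= avg loss <= max loss, for any convex loss such as squared error.
truth = [0.0, 1.0, 2.0, 3.0, 4.0]
preds = [
    [0.2, 1.1, 1.8, 3.3, 4.1],   # model 1 (invented predictions)
    [-0.3, 0.8, 2.4, 2.9, 3.7],  # model 2
    [0.1, 1.4, 2.2, 3.1, 4.4],   # model 3
]

def mse(p):
    return sum((a - b) ** 2 for a, b in zip(p, truth)) / len(truth)

member_mses = [mse(p) for p in preds]
ens_pred = [sum(col) / len(col) for col in zip(*preds)]  # averaged prediction
ens_mse = mse(ens_pred)
```

When member errors are partly uncorrelated, as with heterogeneous architectures, the averaged prediction is typically strictly better, which is consistent with the improvement reported above.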
Neural networks for tracking of unknown SISO discrete-time nonlinear dynamic systems.
Aftab, Muhammad Saleheen; Shafiq, Muhammad
2015-11-01
This article presents a Lyapunov function based neural network tracking (LNT) strategy for single-input, single-output (SISO) discrete-time nonlinear dynamic systems. The proposed LNT architecture is composed of two feedforward neural networks operating as controller and estimator. A Lyapunov function based back propagation learning algorithm is used for online adjustment of the controller and estimator parameters. Convergence of the controller and estimator errors and closed-loop system stability are analyzed by Lyapunov stability theory. Moreover, two simulation examples and one real-time experiment are investigated as case studies. The achieved results successfully validate the controller performance. PMID:26456201
Dynamic control of ROVs making use of the neural network concept
Ooi, Tadashi; Yoshida, Yuki; Takahashi, Yoshiaki; Kidoushi, Hideki
1994-12-31
An attempt is made to combine a classical controller with the neural network concept; the result is a control system that the authors have named the Robust Adaptive Neural-net Controller (RANC). The RANC identifies the dynamic characteristics of the remotely operated vehicle (ROV), including its ambient environment involving cyclic disturbances such as forces induced by waves, and automatically organizes an optimized controller. A tank experiment is described in which the RANC is set to maintain a model ROV at a prescribed depth of water under artificially generated wave disturbance.
Travelling waves in a neural field model with refractoriness.
Meijer, Hil G E; Coombes, Stephen
2014-04-01
At one level of abstraction neural tissue can be regarded as a medium for turning local synaptic activity into output signals that propagate over large distances via axons to generate further synaptic activity that can cause reverberant activity in networks that possess a mixture of excitatory and inhibitory connections. This output is often taken to be a firing rate, and the mathematical form for the evolution equation of activity depends upon a spatial convolution of this rate with a fixed anatomical connectivity pattern. Such formulations often neglect the metabolic processes that would ultimately limit synaptic activity. Here we reinstate such a process, in the spirit of an original prescription by Wilson and Cowan (Biophys J 12:1-24, 1972), using a term that multiplies the usual spatial convolution with a moving time average of local activity over some refractory time-scale. This modulation can substantially affect network behaviour, and in particular give rise to periodic travelling waves in a purely excitatory network (with exponentially decaying anatomical connectivity), which in the absence of refractoriness would only support travelling fronts. We construct these solutions numerically as stationary periodic solutions in a co-moving frame (of both an equivalent delay differential model and the original delay integro-differential model). Continuation methods are used to obtain the dispersion curve for periodic travelling waves (speed as a function of period), which is found to be reminiscent of those for spatially extended models of excitable tissue. A kinematic analysis (based on the dispersion curve) predicts the onset of wave instabilities, which are confirmed numerically. PMID:23546637
Dynamic Magnetic Field Applications for Materials Processing
NASA Technical Reports Server (NTRS)
Mazuruk, K.; Grugel, Richard N.; Motakef, S.; Whitaker, Ann F. (Technical Monitor)
2001-01-01
Magnetic fields, variable in time and space, can be used to control convection in electrically conducting melts. Flow induced by these fields has been found to be beneficial for crystal growth applications. It allows increased crystal growth rates, and improves homogeneity and quality. Particularly beneficial is the natural convection damping capability of alternating magnetic fields. One well-known example is the rotating magnetic field (RMF) configuration. RMF induces liquid motion consisting of a swirling basic flow and a meridional secondary flow. In addition to crystal growth applications, RMF can also be used for mixing non-homogeneous melts in continuous metal castings. These applied aspects have stimulated increasing research on RMF-induced fluid dynamics. A novel type of magnetic field configuration consisting of an axisymmetric magnetostatic wave, designated the traveling magnetic field (TMF), has been recently proposed. It induces a basic flow in the form of a single vortex. TMF may find use in crystal growth techniques such as the vertical Bridgman (VB), float zone (FZ), and the traveling heater method. In this review, both methods, RMF and TMF are presented. Our recent theoretical and experimental results include such topics as localized TMF, natural convection damping using TMF in a vertical Bridgman configuration, the traveling heater method, and the Lorentz force induced by TMF as a function of frequency. Experimentally, alloy mixing results, with and without applied TMF, will be presented. Finally, advantages of the traveling magnetic field, in comparison to the more mature rotating magnetic field method, will be discussed.
Nonequilibrium dynamics of emergent field configurations
NASA Astrophysics Data System (ADS)
Howell, Rafael Cassidy
The processes by which nonlinear physical systems approach thermal equilibrium are of great importance in many areas of science. Central to this is the mechanism by which energy is transferred between the many degrees of freedom comprising these systems. With this in mind, in this research the nonequilibrium dynamics of nonperturbative fluctuations within Ginzburg-Landau models are investigated. In particular, two questions are addressed. In both cases the system is initially prepared in one of two minima of a double-well potential. First, within the context of a (2 + 1) dimensional field theory, we investigate whether emergent spatio-temporal coherent structures play a dynamical role in the equilibration of the field. We find that the answer is sensitive to the initial temperature of the system. At low initial temperatures, the dynamics are well approximated with a time-dependent mean-field theory. For higher temperatures, the strong nonlinear coupling between the modes in the field does give rise to the synchronized emergence of coherent spatio-temporal configurations, identified with oscillons. These are long-lived coherent field configurations characterized by their persistent oscillatory behavior at their core. This initial global emergence is seen to be a consequence of resonant behavior in the long wavelength modes in the system. A second question concerns the emergence of disorder in a highly viscous system modeled by a (3 + 1) dimensional field theory. An integro-differential Boltzmann equation is derived to model the thermal nucleation of precursors of one phase within the homogeneous background. The fraction of the volume populated by these precursors is computed as a function of temperature. This model is capable of describing the onset of percolation, characterizing the approach to criticality (i.e. disorder). It also provides a nonperturbative correction to the critical temperature based on the nonequilibrium dynamics of the system.
Dynamic Neural Networks for Kinematic Redundancy Resolution of Parallel Stewart Platforms.
Mohammed, Aquil Mirza; Li, Shuai
2016-07-01
Redundancy resolution is a critical problem in the control of parallel Stewart platforms. The redundancy endows us with extra design degrees of freedom to improve system performance. In this paper, the kinematic control problem of Stewart platforms is formulated as a constrained quadratic program. The Karush-Kuhn-Tucker conditions of the problem are obtained by considering the problem in its dual space, and then a dynamic neural network is designed to solve the optimization problem recurrently. Theoretical analysis reveals the global convergence of the employed dynamic neural network to the optimal solution in terms of the defined criteria. Simulation results verify the effectiveness in the tracking control of the Stewart platform for dynamic motions. PMID:26219101
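The flavor of such a recurrent optimizer can be sketched on a toy QP with nonnegativity constraints, solved by a projected gradient flow; the problem data are invented and the Stewart-platform formulation is not reproduced.

```python
# "Dynamic neural network" for min 0.5*x'Qx - c'x s.t. x >= 0, integrated
# as the recurrent dynamics x_dot = -x + max(0, x - (Qx - c)). The fixed
# point satisfies the KKT conditions of the QP.
Q = [[2.0, 0.0], [0.0, 2.0]]
c = [2.0, -2.0]                      # optimum of this toy problem: x = (1, 0)

x = [0.5, 0.5]
dt = 0.1
for _ in range(500):                 # Euler integration of the network
    grad = [sum(Q[i][j] * x[j] for j in range(2)) - c[i] for i in range(2)]
    proj = [max(0.0, x[i] - grad[i]) for i in range(2)]
    x = [x[i] + dt * (proj[i] - x[i]) for i in range(2)]
```

The appeal of this formulation for kinematic control is that the same dynamics can be run continuously as the task-space reference changes, tracking the optimizer in real time.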
Neural population dynamics in human motor cortex during movements in people with ALS.
Pandarinath, Chethan; Gilja, Vikash; Blabe, Christine H; Nuyujukian, Paul; Sarma, Anish A; Sorice, Brittany L; Eskandar, Emad N; Hochberg, Leigh R; Henderson, Jaimie M; Shenoy, Krishna V
2015-01-01
The prevailing view of motor cortex holds that motor cortical neural activity represents muscle or movement parameters. However, recent studies in non-human primates have shown that neural activity does not simply represent muscle or movement parameters; instead, its temporal structure is well-described by a dynamical system where activity during movement evolves lawfully from an initial pre-movement state. In this study, we analyze neuronal ensemble activity in motor cortex in two clinical trial participants diagnosed with Amyotrophic Lateral Sclerosis (ALS). We find that activity in human motor cortex has similar dynamical structure to that of non-human primates, indicating that human motor cortex contains a similar underlying dynamical system for movement generation. PMID:26099302
On Mean Field Limits for Dynamical Systems
NASA Astrophysics Data System (ADS)
Boers, Niklas; Pickl, Peter
2016-07-01
We present a purely probabilistic proof of propagation of molecular chaos for N-particle systems in dimension 3 with interaction forces scaling like 1/|q|^(3λ-1), with λ smaller than but close to one, and cut-off at q = N^(-1/3). The proof yields a Gronwall estimate for the maximal distance between exact microscopic and approximate mean-field dynamics. This can be used to show weak convergence of the one-particle marginals to solutions of the respective mean-field equation without cut-off in a quantitative way. Our results thus lead to a derivation of the Vlasov equation from the microscopic N-particle dynamics with force term arbitrarily close to the physically relevant Coulomb and gravitational forces.
Modeling emotional dynamics : currency versus field.
Sallach, D .L.; Decision and Information Sciences; Univ. of Chicago
2008-08-01
Randall Collins has introduced a simplified model of emotional dynamics in which emotional energy, heightened and focused by interaction rituals, serves as a common denominator for social exchange: a generic form of currency, except that it is active in a far broader range of social transactions. While the scope of this theory is attractive, the specifics of the model remain unconvincing. After a critical assessment of the currency theory of emotion, a field model of emotion is introduced that adds expressiveness by locating emotional valence within its cognitive context, thereby creating an integrated orientation field. The result is a model which claims less in the way of motivational specificity, but is more satisfactory in modeling the dynamic interaction between cognitive and emotional orientations at both individual and social levels.
NASA Astrophysics Data System (ADS)
Deng, Xinyi; Eskandar, Emad N.; Eden, Uri T.
2013-12-01
Understanding the role of rhythmic dynamics in normal and diseased brain function is an important area of research in neural electrophysiology. Identifying and tracking changes in rhythms associated with spike trains present an additional challenge, because standard approaches for continuous-valued neural recordings—such as local field potential, magnetoencephalography, and electroencephalography data—require assumptions that do not typically hold for point process data. Additionally, subtle changes in the history dependent structure of a spike train have been shown to lead to robust changes in rhythmic firing patterns. Here, we propose a point process modeling framework to characterize the rhythmic spiking dynamics in spike trains, test for statistically significant changes to those dynamics, and track the temporal evolution of such changes. We first construct a two-state point process model incorporating spiking history and develop a likelihood ratio test to detect changes in the firing structure. We then apply adaptive state-space filters and smoothers to track these changes through time. We illustrate our approach with a simulation study as well as with experimental data recorded in the subthalamic nucleus of Parkinson's patients performing an arm movement task. Our analyses show that during the arm movement task, neurons underwent a complex pattern of modulation of spiking intensity characterized initially by a release of inhibitory control at 20-40 ms after a spike, followed by a decrease in excitatory influence at 40-60 ms after a spike.
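The modeling idea can be illustrated with a discrete-time (Bernoulli) approximation of a point process: simulate a spike train whose firing probability carries a brief refractory suppression, then compare the log likelihood of a history-dependent conditional intensity against a constant-rate model. The rates and refractory window are invented for illustration.

```python
import math, random

# Generate a history-dependent spike train, then form a likelihood ratio
# statistic between a history-dependent and a constant-rate model.
random.seed(7)
p0, refrac = 0.08, 3                  # baseline prob/bin, refractory bins
n_bins = 5000

spikes, last = [], -10
for t in range(n_bins):
    p = 0.0 if t - last <= refrac else p0
    s = 1 if random.random() < p else 0
    if s:
        last = t
    spikes.append(s)

def loglik(prob_fn):
    ll, last = 0.0, -10
    for t, s in enumerate(spikes):
        p = min(max(prob_fn(t - last), 1e-12), 1 - 1e-12)
        ll += math.log(p) if s else math.log(1.0 - p)
        if s:
            last = t
    return ll

rate_mle = sum(spikes) / n_bins       # constant-rate model at its MLE
ll_const = loglik(lambda gap: rate_mle)
ll_hist = loglik(lambda gap: 1e-12 if gap <= refrac else p0)
lr_stat = 2.0 * (ll_hist - ll_const)  # likelihood ratio statistic
```

In the framework above, such a statistic is computed with fitted history coefficients and tracked through time with adaptive state-space filters rather than evaluated once.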
Xu, Bin; Yang, Chenguang; Pan, Yongping
2015-10-01
This paper studies both indirect and direct global neural control of strict-feedback systems in the presence of unknown dynamics, using the dynamic surface control (DSC) technique in a novel manner. A new switching mechanism is designed to combine an adaptive neural controller in the neural approximation domain, together with the robust controller that pulls the transient states back into the neural approximation domain from the outside. In comparison with the conventional control techniques, which could only achieve semiglobally uniformly ultimately bounded stability, the proposed control scheme guarantees all the signals in the closed-loop system are globally uniformly ultimately bounded, such that the conventional constraints on initial conditions of the neural control system can be relaxed. The simulation studies of hypersonic flight vehicle (HFV) are performed to demonstrate the effectiveness of the proposed global neural DSC design. PMID:26259222
Electromagnetic field dynamics in Binary Neutron Stars
NASA Astrophysics Data System (ADS)
Palenzuela, Carlos; Anderson, Matthew; Hirschmann, Eric; Lehner, Luis; Liebling, Steven; Neilsen, David; Motl, Patrick
2011-04-01
Neutron star mergers represent one of the most promising sources of gravitational waves (GW) within the bandwidth of advLIGO. In addition to GW, strong magnetic fields may offer the possibility of a characteristic electromagnetic signature allowing for concurrent detection. In this talk we present results from numerical evolutions of such mergers, studying the dynamics of both the gravitational and electromagnetic degrees of freedom.
Schmidt, Helmut; Petkov, George; Richardson, Mark P.; Terry, John R.
2014-01-01
Graph theory has evolved into a useful tool for studying complex brain networks inferred from a variety of measures of neural activity, including fMRI, DTI, MEG and EEG. In the study of neurological disorders, recent work has discovered differences in the structure of graphs inferred from patient and control cohorts. However, most of these studies pursue a purely observational approach; identifying correlations between properties of graphs and the cohort which they describe, without consideration of the underlying mechanisms. To move beyond this necessitates the development of computational modeling approaches to appropriately interpret network interactions and the alterations in brain dynamics they permit, which in the field of complexity sciences is known as dynamics on networks. In this study we describe the development and application of this framework using modular networks of Kuramoto oscillators. We use this framework to understand functional networks inferred from resting state EEG recordings of a cohort of 35 adults with heterogeneous idiopathic generalized epilepsies and 40 healthy adult controls. Taking emergent synchrony across the global network as a proxy for seizures, our study finds that the critical strength of coupling required to synchronize the global network is significantly decreased for the epilepsy cohort for functional networks inferred from both theta (3–6 Hz) and low-alpha (6–9 Hz) bands. We further identify left frontal regions as a potential driver of seizure activity within these networks. We also explore the ability of our method to identify individuals with epilepsy, observing up to 80% predictive power through use of receiver operating characteristic analysis. Collectively these findings demonstrate that a computer model based analysis of routine clinical EEG provides significant additional information beyond standard clinical interpretation, which should ultimately enable a more appropriate mechanistic stratification of people with epilepsy.
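Taking global synchrony as the proxy, the underlying mechanism can be sketched with the mean-field Kuramoto model: the same oscillator population is simulated below and above its critical coupling, and the time-averaged order parameter r is compared. The network structure, frequencies, and coupling values are illustrative, not fitted to EEG.

```python
import math, cmath, random

# Mean-field Kuramoto model: theta_i' = omega_i + K*r*sin(psi - theta_i),
# where r*exp(i*psi) is the complex order parameter. r near 0 means
# incoherence; r near 1 means global synchrony (the seizure proxy above).
random.seed(11)
N = 50
omega = [random.gauss(0.0, 1.0) for _ in range(N)]      # natural frequencies
theta0 = [random.uniform(0, 2 * math.pi) for _ in range(N)]

def mean_r(K, steps=2000, dt=0.02):
    theta = list(theta0)
    rs = []
    for step in range(steps):
        z = sum(cmath.exp(1j * t) for t in theta) / N   # complex order parameter
        r, psi = abs(z), cmath.phase(z)
        theta = [t + dt * (w + K * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
        if step > steps // 2:
            rs.append(r)
    return sum(rs) / len(rs)

r_weak = mean_r(K=0.2)    # below critical coupling: incoherent
r_strong = mean_r(K=5.0)  # well above critical coupling: synchronized
```

Sweeping K and recording where r rises sharply gives the critical coupling; the study's finding is that this threshold is lower for networks inferred from the epilepsy cohort.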
Degradation Prediction Model Based on a Neural Network with Dynamic Windows
Zhang, Xinghui; Xiao, Lei; Kang, Jianshe
2015-01-01
Tracking degradation of mechanical components is very critical for effective maintenance decision making. Remaining useful life (RUL) estimation is a widely used form of degradation prediction. RUL prediction methods have been thoroughly researched for cases where enough run-to-failure condition monitoring data are available, but for some high-reliability components it is very difficult to collect run-to-failure condition monitoring data, i.e., data spanning normal operation through failure. Only a certain number of condition indicators over a certain period can then be used to estimate RUL. In addition, some existing prediction methods suffer from poor extrapolability, which blocks RUL estimation: the predicted value converges to a certain constant or fluctuates within a certain range. Moreover, fluctuating condition features also degrade prediction. In order to solve these dilemmas, this paper proposes a RUL prediction model based on a neural network with dynamic windows. This model mainly consists of three steps: window size determination by increasing rate, change point detection and rolling prediction. The proposed method has two dominant strengths. One is that the proposed approach does not need to assume that the degradation trajectory follows a certain distribution. The other is that it can adapt to variation of degradation indicators, which greatly benefits RUL prediction. Finally, the performance of the proposed RUL prediction model is validated by real field data and simulation data. PMID:25806873
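The window-plus-extrapolation idea can be sketched in its simplest form: fit a trend to the most recent window of a degradation indicator and extrapolate to a failure threshold. The data and fixed window below are invented; the paper's neural network, dynamic window sizing, and change-point detection are not reproduced.

```python
# Window-based RUL extrapolation on a synthetic linear degradation signal:
# least-squares slope over the last `window` samples, then steps remaining
# until the failure threshold is crossed.
series = [0.10 + 0.02 * t for t in range(50)]   # synthetic degradation indicator
threshold = 2.0
window = 10

recent = series[-window:]
n = len(recent)
xs = list(range(n))
xbar, ybar = sum(xs) / n, sum(recent) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, recent))
         / sum((x - xbar) ** 2 for x in xs))
level = recent[-1]
rul = (threshold - level) / slope               # steps until threshold crossing
```

For this noiseless linear series the extrapolation is exact (true crossing at t = 95, current t = 49, so RUL = 46); on real, fluctuating indicators the window size itself must adapt, which is the point of the dynamic-window scheme above.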
A dynamic model of thundercloud electric fields
NASA Technical Reports Server (NTRS)
Nisbet, J. S.
1983-01-01
A description is given of the first results obtained with a new type of dynamic electrical model of a thundercloud that allows the charge rearrangement produced in arc breakdown, as well as the conduction and displacement currents, to be calculated with realistic generator configurations. The model demonstrates the great complexity of behavior of thunderclouds owing to the interaction of the nonlinear breakdown mechanisms, the energy stored in the electric field, and a conductivity that varies with altitude. It is also seen that dynamic charge distributions and electric fields are quite different from static distributions. It is noted that these differences affect the initial conditions before and after lightning strokes. The conduction current density to the ionosphere is very much larger in the dynamic cases than in static simulations. Such basic properties of thunderclouds as the production of cloud-to-ground strokes are seen as compatible only with a very limited range of thundercloud models. Another finding is that coronal and convection currents cause the electric fields at the surface to be much smaller than they would be in their absence.
Neural network assisted inverse dynamic guidance for terminally constrained entry flight.
Zhou, Hao; Rahman, Tawfiqur; Chen, Wanchun
2014-01-01
This paper presents a neural network assisted entry guidance law that is designed by applying Bézier approximation. It is shown that a fully constrained approximation of a reference trajectory can be made by using the Bézier curve. Applying this approximation, an inverse dynamic system for an entry flight is solved to generate the guidance command. The guidance solution thus obtained ensures terminal constraints for position, flight path, and azimuth angle. To ensure the terminal velocity constraint, a prediction of the terminal velocity is required, based on which the approximated Bézier curve is adjusted. An artificial neural network is used for this prediction of the terminal velocity. The method enables faster implementation in achieving fully constrained entry flight. Results from simulations indicate improved performance of the neural network assisted method. The scheme is expected to hold promise for further research on automated onboard control of terminal velocity for both reentry and terminal guidance laws. PMID:24723821
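The Bézier-curve construction referenced above is easy to sketch. This is a hedged illustration of the general idea only, a cubic curve evaluated by de Casteljau's algorithm with made-up control points: the endpoints are pinned by terminal constraints while the interior control points stay free to adjust, e.g. against a predicted terminal quantity.

```python
# Hedged sketch: cubic Bezier evaluation via de Casteljau's algorithm.
# Control-point values are illustrative, not from the paper.
def de_casteljau(points, t):
    """Evaluate a Bezier curve with the given control points at parameter t."""
    pts = [list(p) for p in points]
    while len(pts) > 1:
        # Repeated linear interpolation between consecutive control points.
        pts = [[(1 - t) * a + t * b for a, b in zip(p, q)]
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Endpoints encode terminal position constraints; the two interior points
# shape the path and could be adjusted to tune a predicted terminal value.
ctrl = [[0.0, 0.0], [0.3, 0.8], [0.7, 0.9], [1.0, 0.2]]
start = de_casteljau(ctrl, 0.0)  # equals the first control point
end = de_casteljau(ctrl, 1.0)    # equals the last control point
```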
Neural network-based adaptive dynamic surface control for permanent magnet synchronous motors.
Yu, Jinpeng; Shi, Peng; Dong, Wenjie; Chen, Bing; Lin, Chong
2015-03-01
This brief considers the problem of neural network (NN)-based adaptive dynamic surface control (DSC) for permanent magnet synchronous motors (PMSMs) with parameter uncertainties and load torque disturbance. First, NNs are used to approximate the unknown nonlinear functions of the PMSM drive system, and a novel adaptive DSC is constructed to avoid the explosion of complexity in the backstepping design. Next, under the proposed adaptive neural DSC, the number of adaptive parameters required is reduced to only one, and the structure of the designed neural controller is much simpler than in some existing results in the literature, which can guarantee that the tracking error converges to a small neighborhood of the origin. Then, simulations are given to illustrate the effectiveness and potential of the new design technique. PMID:25720014
Molecular dynamics in high electric fields
NASA Astrophysics Data System (ADS)
Apostol, M.; Cune, L. C.
2016-06-01
Molecular rotation spectra, generated by the coupling of the molecular electric-dipole moments to an external time-dependent electric field, are discussed in a few particular conditions which can be of some experimental interest. First, the spherical-pendulum molecular model is reviewed, with the aim of introducing an approximate method which consists in the separation of the azimuthal and zenithal motions. Second, rotation spectra are considered in the presence of a static electric field. Two particular cases are analyzed, corresponding to strong and weak fields. In both cases the classical motion of the dipoles consists of rotations and vibrations about equilibrium positions; this motion may exhibit parametric resonances. For strong fields a large macroscopic electric polarization may appear. This situation may be relevant for polar matter (like pyroelectrics, ferroelectrics), or for heavy impurities embedded in a polar solid. The dipolar interaction is analyzed in polar condensed matter, where it is shown that new polarization modes appear for a spontaneous macroscopic electric polarization (these modes are tentatively called "dipolons"); one of the polarization modes is related to parametric resonances. The extension of these considerations to magnetic dipoles is briefly discussed. The treatment is extended to strong electric fields which oscillate with a high frequency, as those provided by high-power lasers. It is shown that the effect of such fields on molecular dynamics is governed by a much weaker, effective, renormalized, static electric field.
Magnetization dynamics using ultrashort magnetic field pulses
NASA Astrophysics Data System (ADS)
Tudosa, Ioan
Very short and well-shaped magnetic field pulses can be generated using ultra-relativistic electron bunches at the Stanford Linear Accelerator. These fields of several Tesla, with durations of several picoseconds, are used to study the response of magnetic materials to a very short excitation. Precession of a magnetic moment by 90 degrees in a field of 1 Tesla takes about 10 picoseconds, so we explore the regime of fast switching of the magnetization by precession. Our experiments are in a region of magnetic excitation that is not yet accessible by other methods; current table-top experiments can generate only fields longer than 100 ps with strengths of about 0.1 Tesla. Two types of magnetic media were used: magnetic recording media and model magnetic thin films. Information about the magnetization dynamics is extracted from the magnetic patterns generated by the magnetic field. The shape and size of these patterns are influenced by the dissipation of angular momentum involved in the switching process. The high-density recording media, both in-plane and perpendicular type, show a pattern which indicates high spin momentum dissipation. The perpendicular magnetic recording media were exposed to multiple magnetic field pulses. We observed an extended transition region between switched and non-switched areas, indicating a stochastic switching behavior that cannot be explained by thermal fluctuations. The model films consist of very thin crystalline Fe films on GaAs. Even with these model films we see an enhanced dissipation compared to ferromagnetic resonance studies. The magnetic patterns show that damping increases with time and is not a constant, as usually assumed in the equation describing the magnetization dynamics. Simulation using the theory of spin-wave scattering explains only half of the observed damping. An important feature of this theory is that the spin dissipation is time dependent and depends on the large angle between the magnetization and the magnetic
Gradient Learning in Spiking Neural Networks by Dynamic Perturbation of Conductances
Fiete, Ila R.; Seung, H. Sebastian
2006-07-28
We present a method of estimating the gradient of an objective function with respect to the synaptic weights of a spiking neural network. The method works by measuring the fluctuations in the objective function in response to dynamic perturbation of the membrane conductances of the neurons. It is compatible with recurrent networks of conductance-based model neurons with dynamic synapses. The method can be interpreted as a biologically plausible synaptic learning rule, if the dynamic perturbations are generated by a special class of 'empiric' synapses driven by random spike trains from an external source.
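The estimation principle here is concrete enough for a toy demonstration. The sketch below is a hedged, simplified stand-in for the authors' conductance-perturbation rule: it recovers the gradient of a toy objective by correlating small random parameter perturbations with the resulting change in the objective. The objective function and all values are invented.

```python
# Hedged toy version of gradient estimation by random perturbation.
import random

random.seed(0)

def objective(w):
    # Invented objective with gradient (2*(w0 - 1), 2*(w1 + 2)).
    return (w[0] - 1.0) ** 2 + (w[1] + 2.0) ** 2

def perturbation_gradient(w, sigma=1e-3, trials=2000):
    """Estimate grad f(w) as the average of (delta_f / sigma^2) * perturbation."""
    base = objective(w)
    grad = [0.0] * len(w)
    for _ in range(trials):
        xi = [random.gauss(0.0, sigma) for _ in w]  # random perturbation
        df = objective([wi + x for wi, x in zip(w, xi)]) - base
        for i in range(len(w)):
            grad[i] += df * xi[i] / (sigma ** 2 * trials)
    return grad

g = perturbation_gradient([0.0, 0.0])  # true gradient here is (-2, 4)
```

The estimate is noisy for any finite number of trials, which mirrors the paper's point that the fluctuations, not exact derivatives, carry the gradient information.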
Vortex dynamics in a wave field
NASA Astrophysics Data System (ADS)
Perret, Gaele; Poupardin, Adrien; Brossard, Jerome
2010-11-01
The interaction of waves and currents with submerged structures in coastal zones generates complex hydrodynamic features which may considerably impact the local environment. The geometrical singularities of the structures produce concentrated vortex filaments which may impact the sea bed and/or the free surface. The objective of the present study is to characterize the vortex dynamics generated by a horizontal plate, considered as a vortex generator, in a regular wave field. Vortices are generated at the edges of the plate. They undergo three-dimensional instabilities leading to their destruction. Their dynamics is investigated through laboratory experiments conducted in two different wave flumes to study the impact of scale on the dynamics. The two-dimensional vortex dynamics is characterized using PIV measurements. Vortex intensity, trajectory, and lifetime are determined. The three-dimensional dynamics is studied by stereo photography. The vortices are visualized with hydrogen bubbles generated at the edges of the plate by electrolysis. The evolution of the vortices is recorded by two CCD cameras located in different planes. Two most-unstable wavelengths are observed, which do not seem to depend on the width of the wave flume.
Dynamically Partitionable Autoassociative Networks as a Solution to the Neural Binding Problem
Hayworth, Kenneth J.
2012-01-01
An outstanding question in theoretical neuroscience is how the brain solves the neural binding problem. In vision, binding can be summarized as the ability to represent that certain properties belong to one object while other properties belong to a different object. I review the binding problem in visual and other domains, and review its simplest proposed solution – the anatomical binding hypothesis. This hypothesis has traditionally been rejected as a true solution because it seems to require a type of one-to-one wiring of neurons that would be impossible in a biological system (as opposed to an engineered system like a computer). I show that this requirement for one-to-one wiring can be loosened by carefully considering how the neural representation is actually put to use by the rest of the brain. This leads to a solution where a symbol is represented not as a particular pattern of neural activation but instead as a piece of a global stable attractor state. I introduce the Dynamically Partitionable AutoAssociative Network (DPAAN) as an implementation of this solution and show how DPAANs can be used in systems which perform perceptual binding and in systems that implement syntax-sensitive rules. Finally I show how the core parts of the cognitive architecture ACT-R can be neurally implemented using a DPAAN as ACT-R's global workspace. Because the DPAAN solution to the binding problem requires only "flat" neural representations (as opposed to the phase-encoded representation hypothesized in neural synchrony solutions) it is directly compatible with the most well-developed neural models of learning, memory, and pattern recognition. PMID:23060784
NASA Astrophysics Data System (ADS)
Ticchi, Alessandro; Faisal, Aldo A.; Brain; Behaviour Lab Team
2015-03-01
Experimental evidence at the behavioural level shows that the brain is able to make Bayes-optimal inferences and decisions (Kording and Wolpert 2004, Nature; Ernst and Banks, 2002, Nature), yet at the circuit level little is known about how neural circuits may implement Bayesian learning and inference (but see Ma et al. 2006, Nat Neurosci). Molecular sources of noise are clearly established to be powerful enough to pose limits on neural function and structure in the brain (Faisal et al. 2008, Nat Rev Neurosci; Faisal et al. 2005, Curr Biol). We propose a spiking neuron model in which we exploit molecular noise as a useful resource to implement close-to-optimal inference by sampling. Specifically, we derive a synaptic plasticity rule which, coupled with integrate-and-fire neural dynamics and recurrent inhibitory connections, enables a neural population to learn the statistical properties of the received sensory input (the prior). Moreover, the proposed model allows prior knowledge to be combined with additional sources of information (the likelihood) from another neural population, and implements in spiking neurons a Markov chain Monte Carlo algorithm which generates samples from the inferred posterior distribution.
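The inference target, though not the spiking implementation, can be sketched directly. The following hedged example runs a plain Metropolis sampler on a posterior proportional to prior times likelihood, the computation the proposed neural population is argued to approximate by sampling; the Gaussian prior and likelihood are illustrative choices, not from the abstract.

```python
# Hedged sketch of MCMC sampling from a posterior ~ prior(x) * likelihood(x).
import math
import random

random.seed(1)

def prior(x):
    return math.exp(-0.5 * x * x)            # N(0, 1), unnormalized

def likelihood(x):
    return math.exp(-0.5 * (x - 2.0) ** 2)   # N(2, 1), unnormalized

def metropolis(n, step=1.0):
    """Random-walk Metropolis chain targeting prior(x) * likelihood(x)."""
    x, samples = 0.0, []
    for _ in range(n):
        cand = x + random.uniform(-step, step)
        accept = (prior(cand) * likelihood(cand)) / (prior(x) * likelihood(x))
        if random.random() < accept:
            x = cand
        samples.append(x)
    return samples

samples = metropolis(20000)
mean = sum(samples) / len(samples)  # product of the two Gaussians is N(1, 0.5)
```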
Autonomous dynamics in neural networks: the dHAN concept and associative thought processes
NASA Astrophysics Data System (ADS)
Gros, Claudius
2007-02-01
The neural activity of the human brain is dominated by self-sustained activity. External sensory stimuli influence this autonomous activity but do not drive the brain directly. Most standard artificial neural network models, however, are input driven and do not show spontaneous activity. It remains a challenge to develop organizational principles for controlled, self-sustained activity in artificial neural networks. Here we propose and examine the dHAN concept for autonomous associative thought processes in dense and homogeneous associative networks. An associative thought process is characterized, within this approach, by a time series of transient attractors. Each transient state corresponds to a stored piece of information, a memory. Subsequent transient states are characterized by large associative overlaps, which are themselves acquired patterns; memory states, the acquired patterns, thus have a dual functionality. In this approach the self-sustained neural activity has a central functional role. The network acquires a discrimination capability, as external stimuli need to compete with the autonomous activity, and noise in the input is readily filtered out. Hebbian learning of external patterns occurs simultaneously with the ongoing associative thought process. The autonomous dynamics requires a long-term working-point optimization, which within the dHAN concept has a dual functionality: it stabilizes the time development of the associative thought process and limits the runaway synaptic growth which generically occurs otherwise in neural networks with self-induced activity and Hebbian-type learning rules.
Knudstrup, Scott; Zochowski, Michal; Booth, Victoria
2016-01-01
The characteristics of neural network activity depend on intrinsic neural properties and synaptic connectivity in the network. In brain networks, both of these properties are critically affected by the type and levels of neuromodulators present. The expression of many of the most powerful neuromodulators, including acetylcholine (ACh), varies tonically and phasically with behavioural state, leading to dynamic, heterogeneous changes in intrinsic neural properties and synaptic connectivity properties. Namely, ACh significantly alters neural firing properties as measured by the phase response curve in a manner that has been shown to alter the propensity for network synchronization. The aim of this simulation study was to build an understanding of how heterogeneity in cholinergic modulation of neural firing properties and heterogeneity in synaptic connectivity affect the initiation and maintenance of synchronous network bursting in excitatory networks. We show that cells that display different levels of ACh modulation have differential roles in generating network activity: weakly modulated cells are necessary for burst initiation and provide synchronizing drive to the rest of the network, whereas strongly modulated cells provide the overall activity level necessary to sustain burst firing. By applying several quantitative measures of network activity, we further show that the existence of network bursting and its characteristics, such as burst duration and intraburst synchrony, are dependent on the fraction of cell types providing the synaptic connections in the network. These results suggest mechanisms underlying ACh modulation of brain oscillations and the modulation of seizure activity during sleep states. PMID:26869313
NASA Astrophysics Data System (ADS)
Bruton, C. P.; West, M. E.
2013-12-01
Earthquakes and seismicity have long been used to monitor volcanoes. In addition to time, location, and magnitude of an earthquake, the characteristics of the waveform itself are important. For example, low-frequency or hybrid type events could be generated by magma rising toward the surface. A rockfall event could indicate a growing lava dome. Classification of earthquake waveforms is thus a useful tool in volcano monitoring. A procedure to perform such classification automatically could flag certain event types immediately, instead of waiting for a human analyst's review. Inspired by speech recognition techniques, we have developed a procedure to classify earthquake waveforms using artificial neural networks. A neural network can be "trained" with an existing set of input and desired output data; in this case, we use a set of earthquake waveforms (input) that has been classified by a human analyst (desired output). After training the neural network, new waveforms can be classified automatically as they are presented. Our procedure uses waveforms from multiple stations, making it robust to seismic network changes and outages. The use of a dynamic time-delay neural network allows waveforms to be presented without precise alignment in time, and thus could be applied to continuous data or to seismic events without clear start and end times. We have evaluated several different training algorithms and neural network structures to determine their effects on classification performance. We apply this procedure to earthquakes recorded at Mount Spurr and Katmai in Alaska, and Uturuncu Volcano in Bolivia.
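A toy version of alignment-tolerant classification can illustrate why time-shift invariance matters for this application. In this hedged sketch, template matching by lagged correlation stands in for the dynamic time-delay neural network; the two class templates and the waveform are synthetic, not real seismic data.

```python
# Hedged toy stand-in for alignment-tolerant waveform classification:
# lagged template correlation instead of a time-delay neural network.
def best_correlation(signal, template):
    """Maximum dot product of the template against all shifts of the signal."""
    best = float("-inf")
    for lag in range(len(signal) - len(template) + 1):
        window = signal[lag:lag + len(template)]
        best = max(best, sum(s * t for s, t in zip(window, template)))
    return best

# Invented class templates (not real seismic waveforms).
templates = {
    "low_frequency": [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5],
    "rockfall": [1.0, -1.0, 1.0, -1.0, 0.5, -0.5, 0.25, -0.25],
}
# A low-frequency event arriving late in the analysis window: no precise
# alignment is needed because every lag is scored.
waveform = [0.0] * 6 + templates["low_frequency"] + [0.0] * 4
label = max(templates, key=lambda k: best_correlation(waveform, templates[k]))
```

Scoring every lag is the same tolerance to misalignment that the time-delay architecture provides, which is what lets such a classifier run on continuous data without picked event onsets.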
Mixing Dynamics Induced by Traveling Magnetic Fields
NASA Technical Reports Server (NTRS)
Grugel, Richard N.; Mazuruk, Konstantin; Rose, M. Franklin (Technical Monitor)
2001-01-01
Microstructural and compositional homogeneity in metals and alloys can only be achieved if the initial melt is homogeneous prior to the onset of solidification processing. Naturally induced convection may initially facilitate this requirement but upon the onset of solidification significant compositional variations generally arise leading to undesired segregation. Application of alternating magnetic fields to promote a uniform bulk liquid concentration during solidification processing has been suggested. To investigate such possibilities an initial study of using traveling magnetic fields (TMF) to promote melt homogenization is reported in this work. Theoretically, the effect of TMF-induced convection on mixing phenomena is studied in the laminar regime of flow. Experimentally, with and without applied fields, both 1) mixing dynamics, by optically monitoring the spreading of an initially localized dye in transparent fluids, and 2) compositional variations in metal alloys have been investigated.
Mixing Dynamics Induced by Traveling Magnetic Fields
NASA Technical Reports Server (NTRS)
Grugel, Richard N.; Mazuruk, Konstantin
2000-01-01
Microstructural and compositional homogeneity in metals and alloys can only be achieved if the initial melt is homogeneous prior to the onset of solidification processing. Naturally induced convection may initially facilitate this requirement, but upon the onset of solidification significant compositional variations generally arise, leading to undesired segregation. Application of alternating magnetic fields to promote a uniform bulk liquid concentration during solidification processing has been suggested. To investigate such possibilities, an initial study of using traveling magnetic fields (TMF) to promote melt homogenization is reported in this work. Theoretically, the effect of TMF-induced convection on mixing phenomena is studied in the laminar flow regime. Experimentally, with and without applied fields, both 1) mixing dynamics, by optically monitoring the spreading of an initially localized dye in transparent fluids, and 2) compositional variations in metal alloys have been investigated.
Robust Fault Detection of Wind Energy Conversion Systems Based on Dynamic Neural Networks
Talebi, Nasser; Sadrnia, Mohammad Ali; Darabi, Ahmad
2014-01-01
Occurrence of faults in wind energy conversion systems (WECSs) is inevitable. In order to detect faults at the appropriate time, avoid heavy economic losses, ensure safe system operation, prevent damage to adjacent systems, and facilitate timely repair of failed components, a fault detection system (FDS) is required. Recurrent neural networks (RNNs) have gained a notable position in FDSs and have been widely used for modeling complex dynamical systems. One method for designing an FDS is to prepare a dynamic neural model emulating the normal system behavior; by comparing the outputs of the real system and the neural model, the incidence of faults can be identified. In this paper, by utilizing a comprehensive dynamic model which contains both the mechanical and electrical components of the WECS, an FDS based on dynamic RNNs is proposed. The presented FDS detects faults of the generator's angular velocity sensor, the pitch angle sensors, and the pitch actuators. Robustness of the FDS is achieved by employing an adaptive threshold. Simulation results show that the proposed scheme is capable of detecting faults promptly and has very low false-alarm and missed-alarm rates. PMID:24744774
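The residual-plus-adaptive-threshold scheme described in this abstract can be sketched generically. In this hedged illustration the model outputs are given directly (in the paper an RNN would produce them), and the signals, window size, and threshold gain are invented.

```python
# Hedged sketch of residual-based fault detection with an adaptive threshold.
import math

def detect_faults(measured, predicted, window=5, k=4.0):
    """Flag sample i when its residual exceeds an adaptive threshold built
    from the mean and standard deviation of the recent residuals."""
    flags, residuals = [], []
    for m, p in zip(measured, predicted):
        r = abs(m - p)
        recent = residuals[-window:]
        if len(recent) < window:
            flags.append(False)  # not enough history for a threshold yet
        else:
            mu = sum(recent) / window
            var = sum((x - mu) ** 2 for x in recent) / window
            flags.append(r > mu + k * math.sqrt(var) + 1e-9)
        residuals.append(r)
    return flags

pred = [math.sin(0.3 * i) for i in range(12)]                # model output
meas = [v + 0.01 * ((-1) ** i) for i, v in enumerate(pred)]  # healthy noise
meas[8] += 1.0                                               # injected fault
flags = detect_faults(meas, pred)
```

Because the threshold tracks the recent residual statistics, ordinary model mismatch and noise raise it automatically, which is the mechanism behind the low false-alarm rate the abstract reports.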
Dynamical similarity of geomagnetic field reversals.
Valet, Jean-Pierre; Fournier, Alexandre; Courtillot, Vincent; Herrero-Bervera, Emilio
2012-10-01
No consensus has been reached so far on the properties of the geomagnetic field during reversals or on the main features that might reveal its dynamics. A main characteristic of the reversing field is a large decrease in the axial dipole and the dominant role of non-dipole components. Other features strongly depend on whether they are derived from sedimentary or volcanic records. Only thermal remanent magnetization of lava flows can capture faithful records of a rapidly varying non-dipole field, but, because of episodic volcanic activity, sequences of overlying flows yield incomplete records. Here we show that the ten most detailed volcanic records of reversals can be matched in a very satisfactory way, under the assumption of a common duration, revealing common dynamical characteristics. We infer that the reversal process has remained unchanged, with the same time constants and durations, at least since 180 million years ago. We propose that the reversing field is characterized by three successive phases: a precursory event, a 180° polarity switch and a rebound. The first and third phases reflect the emergence of the non-dipole field with large-amplitude secular variation. They are rarely both recorded at the same site owing to the rapidly changing field geometry and last for less than 2,500 years. The actual transit between the two polarities does not last longer than 1,000 years and might therefore result from mechanisms other than those governing normal secular variation. Such changes are too brief to be accurately recorded by most sediments. PMID:23038471
Dynamic modeling of physical phenomena for PRAs using neural networks
Benjamin, A.S.; Brown, N.N.; Paez, T.L.
1998-04-01
In most probabilistic risk assessments, there is a set of accident scenarios that involves the physical responses of a system to environmental challenges. Examples include the effects of earthquakes and fires on the operability of a nuclear reactor safety system, the effects of fires and impacts on the safety integrity of a nuclear weapon, and the effects of human intrusions on the transport of radionuclides from an underground waste facility. The physical responses of the system to these challenges can be quite complex, and their evaluation may require the use of detailed computer codes that are very time consuming to execute. Yet, to perform meaningful probabilistic analyses, it is necessary to evaluate the responses for a large number of variations in the input parameters that describe the initial state of the system, the environments to which it is exposed, and the effects of human interaction. Because the uncertainties of the system response may be very large, it may also be necessary to perform these evaluations for various values of modeling parameters that have high uncertainties, such as material stiffnesses, surface emissivities, and ground permeabilities. The authors have been exploring the use of artificial neural networks (ANNs) as a means for estimating the physical responses of complex systems to phenomenological events such as those cited above. These networks are designed as mathematical constructs with adjustable parameters that can be trained so that the results obtained from the networks will simulate the results obtained from the detailed computer codes. The intent is for the networks to provide an adequate simulation of the detailed codes over a significant range of variables while requiring only a small fraction of the computer processing time required by the detailed codes. This enables the authors to integrate the physical response analyses into the probabilistic models in order to estimate the probabilities of various responses.
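The surrogate idea in this abstract can be illustrated compactly. In this hedged sketch a quadratic Lagrange interpolant stands in for the neural network, and a small analytic function stands in for the detailed physics code; every name and sample point here is invented.

```python
# Hedged illustration of the surrogate idea: a cheap interpolant is "trained"
# on a few runs of an expensive code and then queried in its place.
def expensive_code(x):
    """Pretend this is a detailed simulation that takes hours per run."""
    return 2.0 + 0.5 * x - 0.25 * x * x

def fit_surrogate(points):
    """Quadratic Lagrange interpolant through three (x, y) samples."""
    def surrogate(x):
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            term = yi
            for j, (xj, _) in enumerate(points):
                if i != j:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return surrogate

# "Train" on three runs of the expensive code, then query cheaply.
samples = [(x, expensive_code(x)) for x in (0.0, 2.0, 4.0)]
cheap = fit_surrogate(samples)
approx = cheap(1.5)          # fast enough for a probabilistic sampling loop
exact = expensive_code(1.5)  # one more expensive run, for comparison
```

A real application would train on many runs over many input dimensions, but the workflow is the same: pay for a limited set of detailed-code evaluations once, then let the cheap model absorb the thousands of queries the probabilistic analysis requires.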
Neural network architecture for cognitive navigation in dynamic environments.
Villacorta-Atienza, José Antonio; Makarov, Valeri A
2013-12-01
Navigation in time-evolving environments with moving targets and obstacles requires cognitive abilities widely demonstrated by even the simplest animals. However, it is a long-standing and challenging problem for artificial agents. Cognitive autonomous robots coping with this problem must solve two essential tasks: 1) understand the environment in terms of what may happen and how the agent can deal with it and 2) learn from successful experiences for further use in an automatic, subconscious way. The recently introduced concept of compact internal representation (CIR) provides the ground for both tasks. CIR is a specific cognitive map that compacts time-evolving situations into static structures containing the information necessary for navigation. It belongs to the class of global approaches, i.e., it finds trajectories to a target when they exist but also detects situations where no solution can be found. Here we extend the concept to situations with mobile targets. Then, using CIR as a core, we propose a closed-loop neural network architecture consisting of conscious and subconscious pathways for efficient decision-making. The conscious pathway provides solutions to novel situations if the default subconscious pathway fails to guide the agent to a target. Employing experiments with roving robots and numerical simulations, we show that the proposed architecture provides the robot with cognitive abilities and enables reliable and flexible navigation in realistic time-evolving environments. We prove that the subconscious pathway is robust against uncertainty in the sensory information. Thus, if a novel situation is similar but not identical to previous experience (because of, e.g., noisy perception), the subconscious pathway is able to provide an effective solution. PMID:24805224
Altered temporal dynamics of neural adaptation in the aging human auditory cortex.
Herrmann, Björn; Henry, Molly J; Johnsrude, Ingrid S; Obleser, Jonas
2016-09-01
Neural response adaptation plays an important role in perception and cognition. Here, we used electroencephalography to investigate how aging affects the temporal dynamics of neural adaptation in human auditory cortex. Younger (18-31 years) and older (51-70 years) normal hearing adults listened to tone sequences with varying onset-to-onset intervals. Our results show long-lasting neural adaptation such that the response to a particular tone is a nonlinear function of the extended temporal history of sound events. Most important, aging is associated with multiple changes in auditory cortex; older adults exhibit larger and less variable response magnitudes, a larger dynamic response range, and a reduced sensitivity to temporal context. Computational modeling suggests that reduced adaptation recovery times underlie these changes in the aging auditory cortex and that the extended temporal stimulation has less influence on the neural response to the current sound in older compared with younger individuals. Our human electroencephalography results critically narrow the gap to animal electrophysiology work suggesting a compensatory release from cortical inhibition accompanying hearing loss and aging. PMID:27459921
Dynamic neural networking as a basis for plasticity in the control of heart rate.
Kember, G; Armour, J A; Zamir, M
2013-01-21
A model is proposed in which the relationship between individual neurons within a neural network is dynamically changing to the effect of providing a measure of "plasticity" in the control of heart rate. The neural network on which the model is based consists of three populations of neurons residing in the central nervous system, the intrathoracic extracardiac nervous system, and the intrinsic cardiac nervous system. This hierarchy of neural centers is used to challenge the classical view that the control of heart rate, a key clinical index, resides entirely in central neuronal command (spinal cord, medulla oblongata, and higher centers). Our results indicate that dynamic networking allows for the possibility of an interplay among the three populations of neurons to the effect of altering the order of control of heart rate among them. This interplay among the three levels of control allows for different neural pathways for the control of heart rate to emerge under different blood flow demands or disease conditions and, as such, it has significant clinical implications because current understanding and treatment of heart rate anomalies are based largely on a single level of control and on neurons acting in unison as a single entity rather than individually within a (plastically) interconnected network. PMID:23041448
Dynamic functional integration of distinct neural empathy systems
2014-01-01
Recent evidence points to two separate systems for empathy: an emotional system that supports our ability to share emotions and mental states vicariously, and a cognitive system that involves cognitive understanding of the perspective of others. Several recent models offer new evidence regarding the brain regions involved in these systems, but no study to date has examined how regions within each system dynamically interact. The study by Raz et al. in this issue of Social Cognitive and Affective Neuroscience is among the first to use a novel functional magnetic resonance imaging analysis of fluctuations in network cohesion while an individual is experiencing empathy. Their results substantiate the approach positing two empathy mechanisms and, more broadly, demonstrate how dynamic analysis of emotions can further our understanding of social behavior. PMID:23956080
Wang Rubin; Yu Wei
2005-08-25
In this paper, we investigate how a population of neuronal oscillators processes information and how neural coding evolves dynamically under external stimulation. A numerical method is used to describe the evolution of neural coding in three-dimensional space. The numerical results show that only suitable stimulation can change the coupling structure and plasticity of the neurons.
Quantum dynamics in strong fluctuating fields
NASA Astrophysics Data System (ADS)
Goychuk, Igor; Hänggi, Peter
A large number of multifaceted quantum transport processes in molecular systems and physical nanosystems, such as nonadiabatic electron transfer in proteins, can be treated in terms of quantum relaxation processes which couple to one or several fluctuating environments. A thermal equilibrium environment can conveniently be modelled by a thermal bath of harmonic oscillators. An archetypal situation is provided by two-state dissipative quantum dynamics, commonly known under the label of spin-boson dynamics. An interesting and nontrivial physical situation emerges, however, when the quantum dynamics evolves far away from thermal equilibrium. This occurs, for example, when a charge-transferring medium possesses nonequilibrium degrees of freedom, or when a strong time-dependent control field is applied externally. Accordingly, certain parameters of the underlying quantum subsystem acquire stochastic character. This may occur, for example, for the tunnelling coupling between the donor and acceptor states of the transferring electron, or for the corresponding energy difference between electronic states, which acquire an explicit stochastic or deterministic time-dependence via the coupling to the fluctuating environment. Here, we review the general theoretical framework, based on the method of projection operators, which yields the quantum master equations for systems exposed to strong external fields. This allows one to investigate, on a common basis, the influence of nonequilibrium fluctuations and periodic electrical fields on these dynamics and related quantum transport processes. Most importantly, such strong fluctuating fields induce a whole variety of nonlinear and nonequilibrium phenomena. A characteristic feature of such dynamics is the absence of thermal (quantum) detailed balance.
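As an illustration of quantum dynamics in a strong fluctuating field, the following sketch brute-force averages two-state Schrödinger trajectories whose energy bias is driven by dichotomous (telegraph) noise. This is an assumed toy calculation, not the projection-operator master-equation formalism the review develops; all parameter names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def twostate_populations(delta=1.0, eps0=0.0, amp=2.0, rate=1.0,
                         dt=0.01, steps=1200, ntraj=60):
    """Average donor population of a two-state system whose energy bias
    jumps between eps0 +/- amp (dichotomous telegraph noise).

    H(t) = delta*sigma_x + eps(t)*sigma_z (hbar = 1); each trajectory
    integrates the Schroedinger equation, then populations are averaged.
    """
    sx = np.array([[0, 1], [1, 0]], complex)
    sz = np.array([[1, 0], [0, -1]], complex)
    pops = np.zeros(steps)
    for _ in range(ntraj):
        psi = np.array([1.0, 0.0], complex)     # start in the donor state
        s = rng.choice([-1.0, 1.0])
        for k in range(steps):
            if rng.random() < rate * dt:        # Poisson switching of the bias
                s = -s
            H = delta * sx + (eps0 + amp * s) * sz
            # one Crank-Nicolson (Cayley) step keeps the norm intact
            A = np.eye(2) + 0.5j * dt * H
            B = np.eye(2) - 0.5j * dt * H
            psi = np.linalg.solve(A, B @ psi)
            pops[k] += abs(psi[0]) ** 2
    return pops / ntraj

p = twostate_populations()
```

The noise-averaged coherent oscillations dephase, so the donor population relaxes toward an incoherent mixture, the kind of fluctuation-induced relaxation the review treats analytically.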
Dynamic recurrent neural networks for stable adaptive control of wing rock motion
NASA Astrophysics Data System (ADS)
Kooi, Steven Boon-Lam
Wing rock is a self-sustaining limit cycle oscillation (LCO) which occurs as the result of nonlinear coupling between the dynamic response of the aircraft and the unsteady aerodynamic forces. In this thesis, a dynamic recurrent RBF (Radial Basis Function) network control methodology is proposed to control the wing rock motion. A concept based on the properties of the Preisach hysteresis model is used in the design of the dynamic neural networks. The structure and memory mechanism of the Preisach model are analogous to the parallel connectivity and memory formation in RBF neural networks. The proposed dynamic recurrent neural network has a feature for adding or pruning neurons in the hidden layer according to growth criteria based on the properties of ensemble-average memory formation in the Preisach model. The recurrent feature of the RBF network deals with the dynamic nonlinearities and the temporal memories of the hysteresis model. The control of wing rock is a tracking problem: the trajectory starts from non-zero initial conditions and tends to zero as time goes to infinity. In the proposed neural control structure, the recurrent dynamic RBF network performs an identification process in order to approximate the unknown nonlinearities of the physical system based on input-output data obtained from the wing rock phenomenon. The design of the RBF networks together with the network controllers is carried out in the discrete time domain. The recurrent RBF networks employ two separate adaptation schemes: the RBF centres and widths are adjusted by an Extended Kalman Filter to give a minimum network size, while the outer-layer network weights are updated using an algorithm derived from Lyapunov stability analysis for stable closed-loop control. The issue of the robustness of the recurrent RBF networks is also addressed. The effectiveness of the proposed dynamic recurrent neural control methodology is demonstrated through simulations to
Pattwell, Siobhan S.; Liston, Conor; Jing, Deqiang; Ninan, Ipe; Yang, Rui R.; Witztum, Jonathan; Murdock, Mitchell H.; Dincheva, Iva; Bath, Kevin G.; Casey, B. J.; Deisseroth, Karl; Lee, Francis S.
2016-01-01
Fear can be highly adaptive in promoting survival, yet it can also be detrimental when it persists long after a threat has passed. Flexibility of the fear response may be most advantageous during adolescence when animals are prone to explore novel, potentially threatening environments. Two opposing adolescent fear-related behaviours—diminished extinction of cued fear and suppressed expression of contextual fear—may serve this purpose, but the neural basis underlying these changes is unknown. Using microprisms to image prefrontal cortical spine maturation across development, we identify dynamic BLA-hippocampal-mPFC circuit reorganization associated with these behavioural shifts. Exploiting this sensitive period of neural development, we modified existing behavioural interventions in an age-specific manner to attenuate adolescent fear memories persistently into adulthood. These findings identify novel strategies that leverage dynamic neurodevelopmental changes during adolescence with the potential to extinguish pathological fears implicated in anxiety and stress-related disorders. PMID:27215672
The Dynamical Recollection of Interconnected Neural Networks Using Meta-heuristics
NASA Astrophysics Data System (ADS)
Kuremoto, Takashi; Watanabe, Shun; Kobayashi, Kunikazu; Feng, Laing-Bing; Obayashi, Masanao
Interconnected recurrent neural networks are well known for their abilities of associative memory of characteristic patterns. For example, the traditional Hopfield network (HN) can recall stored patterns stably, while Aihara's chaotic neural network (CNN) is able to realize dynamical recollection of a sequence of patterns. In this paper, we propose to use meta-heuristic (MH) methods such as particle swarm optimization (PSO) and the genetic algorithm (GA) to improve traditional associative memory systems. Using PSO or GA, optimal parameters are found for the CNN to accelerate the recollection process and raise the rate of successful recollection, and an optimized bias current is calculated for the HN to improve the network with dynamical association of a series of patterns. Simulation results of binary pattern association showed the effectiveness of the proposed methods.
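The static associative memory that this work starts from can be sketched in a few lines: Hebbian storage and synchronous recall in a Hopfield network. The PSO/GA parameter search and Aihara's chaotic dynamics are omitted; the pattern and sizes below are illustrative.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian weights for bipolar (+1/-1) patterns; zero diagonal."""
    P = np.array(patterns, dtype=float)
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=20):
    """Synchronous sign updates until a fixed point (or step limit)."""
    s = np.array(state, dtype=float)
    for _ in range(steps):
        new = np.where(W @ s >= 0, 1.0, -1.0)
        if np.array_equal(new, s):
            break
        s = new
    return s

pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = train_hopfield([pattern])
noisy = pattern.copy(); noisy[0] *= -1       # corrupt one bit
restored = recall(W, noisy)
```

Starting from the corrupted state, the update dynamics fall back into the stored attractor, which is the stable recall that the chaotic and optimized variants above build on.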
Nonlinear dynamic system identification using Chebyshev functional link artificial neural networks.
Patra, J C; Kot, A C
2002-01-01
A computationally efficient artificial neural network (ANN) for the purpose of dynamic nonlinear system identification is proposed. The major drawback of feedforward neural networks, such as multilayer perceptrons (MLPs) trained with the backpropagation (BP) algorithm, is that they require a large amount of computation for learning. We propose a single-layer functional-link ANN (FLANN) in which the need for a hidden layer is eliminated by expanding the input pattern with Chebyshev polynomials. The novelty of this network is that it requires much less computation than an MLP. We have shown its effectiveness on the problem of nonlinear dynamic system identification. In the presence of additive Gaussian noise, the performance of the proposed network is found to be similar or superior to that of an MLP. A performance comparison in terms of computational complexity has also been carried out. PMID:18238146
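A minimal sketch of the FLANN idea, assuming a toy static nonlinear plant: the scalar input is expanded with Chebyshev polynomials and a single linear layer is trained by the LMS rule, so no hidden layer or backpropagation is needed. The plant, step size, and expansion order are illustrative assumptions, not the paper's benchmark.

```python
import numpy as np

def cheb_expand(x, order=4):
    """Chebyshev functional expansion [T_0(x), ..., T_order(x)] for x in [-1, 1]."""
    T = [np.ones_like(x), x]
    for _ in range(2, order + 1):
        T.append(2 * x * T[-1] - T[-2])   # recurrence T_n = 2x T_{n-1} - T_{n-2}
    return np.stack(T, axis=-1)

rng = np.random.default_rng(1)
u = rng.uniform(-1.0, 1.0, 2000)          # excitation samples
y = 0.6 * u**3 - 0.4 * u + 0.1            # toy nonlinear plant (assumed example)

Phi = cheb_expand(u)                      # (2000, 5): expanded input patterns
w = np.zeros(Phi.shape[1])                # single linear output layer
mu = 0.05                                 # LMS step size
for _ in range(3):                        # a few passes over the data
    for phi, target in zip(Phi, y):
        w += mu * (target - phi @ w) * phi   # LMS update, no backpropagation

mse = float(np.mean((Phi @ w - y) ** 2))
```

Because a cubic is exactly representable in the first few Chebyshev terms, the single-layer network identifies this plant essentially exactly, with per-sample cost linear in the expansion size.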
Emerging phenomena in neural networks with dynamic synapses and their computational implications.
Torres, Joaquin J; Kappen, Hilbert J
2013-01-01
In this paper we review our research on the effect and computational role of dynamical synapses on feed-forward and recurrent neural networks. Among others, we report on the appearance of a new class of dynamical memories which result from the destabilization of learned memory attractors. This has important consequences for dynamic information processing allowing the system to sequentially access the information stored in the memories under changing stimuli. Although storage capacity of stable memories also decreases, our study demonstrated the positive effect of synaptic facilitation to recover maximum storage capacity and to enlarge the capacity of the system for memory recall in noisy conditions. Possibly, the new dynamical behavior can be associated with the voltage transitions between up and down states observed in cortical areas in the brain. We investigated the conditions for which the permanence times in the up state are power-law distributed, which is a sign for criticality, and concluded that the experimentally observed large variability of permanence times could be explained as the result of noisy dynamic synapses with large recovery times. Finally, we report how short-term synaptic processes can transmit weak signals throughout more than one frequency range in noisy neural networks, displaying a kind of stochastic multi-resonance. This effect is due to competition between activity-dependent synaptic fluctuations (due to dynamic synapses) and the existence of neuron firing threshold which adapts to the incoming mean synaptic input. PMID:23637657
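Dynamic synapses of the kind reviewed above are commonly described by the Tsodyks-Markram short-term plasticity model; a minimal event-driven sketch of that standard model follows. The parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def tm_synapse(spike_times, U=0.2, tau_rec=0.5, tau_fac=0.8):
    """Per-spike synaptic efficacy u*x in the standard Tsodyks-Markram
    short-term plasticity model (event-driven form).

    x: available resources (depression; recovers with tau_rec)
    u: release probability (facilitation; decays back to U with tau_fac)
    """
    x, u, last, eff = 1.0, U, None, []
    for t in spike_times:
        if last is not None:
            dt = t - last
            x = 1.0 - (1.0 - x) * np.exp(-dt / tau_rec)  # resource recovery
            u = U + (u - U) * np.exp(-dt / tau_fac)      # facilitation decay
        eff.append(u * x)
        x -= u * x            # resources consumed by release
        u += U * (1.0 - u)    # facilitation increment
        last = t
    return np.array(eff)

spikes = np.arange(10) * 0.05        # a 20 Hz burst
eff = tm_synapse(spikes)
```

With these values the efficacy first facilitates and then depresses within the burst, the activity-dependent synaptic fluctuation that underlies the destabilized memories and stochastic multi-resonance discussed above.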
Dynamic errors modeling of CMM based on generalized regression neural network
NASA Astrophysics Data System (ADS)
Zhong, Weihong; Guan, Hongwei; Li, Yingdao; Ma, Xiushui
2010-12-01
The development of modern manufacturing requires higher speed and accuracy from coordinate measuring machines (CMMs). The dynamic error is the main factor affecting measurement accuracy at high speed, and dynamic error modeling and estimation are the basis of dynamic error correction. This paper applies a generalized regression neural network (GRNN) to establish and estimate the dynamic error model. Compared with a BP neural network (BPNN), a GRNN has fewer parameters; only one smoothing factor needs to be adjusted, so the network can be built and evaluated faster and with a computational advantage. The running speed of the CMM axis is set through software and the X axis is set in motion. The values of the grating and the dual-frequency laser interferometer are acquired synchronously at the same measurement point; the difference between the two values is the real-time dynamic measurement error. A total of 150 values are collected. The first 100 values of the error sequence are used as training data to establish the GRNN model, and the remaining 50 values are used to test the estimation results. When the smoothing factor is set to 0.5, the estimation on the GRNN training data is better. Simulation with the experimental data shows that the GRNN method obtains better error estimation accuracy and higher computing speed than the BPNN. GRNN can be applied to dynamic error estimation of CMMs under certain conditions.
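A GRNN in this sense is essentially Nadaraya-Watson kernel regression with a single smoothing factor. The sketch below mimics the stated protocol (150 samples, 100 for training, 50 for testing, smoothing factor 0.5) on synthetic stand-in data; the sinusoidal "error" signal and randomized axis positions are assumptions, not the measured CMM data.

```python
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma=0.5):
    """Generalized regression neural network: a Gaussian-kernel
    weighted average of the training targets (one smoothing factor)."""
    d2 = (x_query[:, None] - x_train[None, :]) ** 2
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

# Synthetic stand-in for the measured dynamic-error data; positions are
# randomized so the held-out points interpolate the training range.
rng = np.random.default_rng(2)
x = rng.uniform(0.0, 15.0, 150)                       # axis positions
err = 0.5 * np.sin(x) + 0.05 * rng.standard_normal(150)

x_tr, y_tr = x[:100], err[:100]       # first 100 samples: model fitting
x_te, y_te = x[100:], err[100:]       # last 50 samples: evaluation
pred = grnn_predict(x_tr, y_tr, x_te, sigma=0.5)
```

There is no iterative training at all: "fitting" the GRNN is just storing the 100 samples, which is why only the smoothing factor needs tuning.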
Synthesis of minimum-time feedback laws for dynamic systems using neural networks
NASA Technical Reports Server (NTRS)
Lee, Allan Y.; Smyth, Padhraic
1994-01-01
The paper presents the synthesis of neural-network-based feedback laws for dynamic systems using computed optimal time histories of the state and control variables. The efficacy of the proposed approach has been successfully demonstrated on a minimum-time orbit injection problem. If the method is found to be effective for real-life problems with many state and control variables, it can be used for a variety of guidance and control problems.
Dynamical Field Model of Hand Preference
NASA Astrophysics Data System (ADS)
Franceschetti, Donald R.; Cantalupo, Claudio
2000-11-01
Dynamical field models of information processing in the nervous system are being developed by a number of groups of psychologists and physicists working together to explain the details of behaviors exhibited by a number of animal species. Here we adapt such a model to the expression of hand preference in a small primate, the bushbaby (Otolemur garnetti). The model provides a theoretical foundation for the interpretation of an experiment currently underway in which several of these animals are forced to extend either the right or the left hand to retrieve a food item from a rotating turntable.
Adaptive dynamic inversion robust control for BTT missile based on wavelet neural network
NASA Astrophysics Data System (ADS)
Li, Chuanfeng; Wang, Yongji; Deng, Zhixiang; Wu, Hao
2009-10-01
A new nonlinear control strategy incorporating the dynamic inversion method with wavelet neural networks is presented for the nonlinear coupled system of a bank-to-turn (BTT) missile in the reentry phase. The basic control law is designed using the dynamic inversion feedback linearization method, and an online-learning wavelet neural network is used to compensate the inversion error due to aerodynamic parameter errors, modeling imprecision, and external disturbance, exploiting the time-frequency localization properties of the wavelet transform. Weight adjustment laws are derived according to Lyapunov stability theory, which guarantees the boundedness of all signals in the whole system. Furthermore, robust stability of the closed-loop system under this tracking law is proved. Finally, six-degree-of-freedom (6DOF) simulation results show that the attitude angles can track the anticipated commands precisely in the presence of external disturbance and parameter uncertainty. This means that the dependence of the dynamic inversion method on the model is reduced and the robustness of the control system is enhanced by using the wavelet neural network (WNN) to reconstruct the inversion error online.
NASA Astrophysics Data System (ADS)
Gao, Shigen; Dong, Hairong; Lyu, Shihang; Ning, Bin
2016-07-01
This paper studies decentralised neural adaptive control of a class of interconnected nonlinear systems in which each subsystem is subject to input saturation and external disturbance and has an independent system order. Using a novel truncated adaptation design, the dynamic surface control technique and a minimal-learning-parameters algorithm, the proposed method circumvents the problems of 'explosion of complexity' and the 'curse of dimensionality' that exist in the traditional backstepping design. Compared with methodologies in which neural weights are updated online in the controllers, only one scalar needs to be updated in the controllers of each subsystem when dealing with unknown system dynamics. Radial basis function neural networks (NNs) are used in the online approximation of the unknown system dynamics. It is proved using Lyapunov stability theory that all the signals in the closed-loop system are semi-globally uniformly ultimately bounded. The tracking errors of each subsystem, the amplitude of the NN approximation residuals, and the external disturbances can be attenuated to be arbitrarily small by tuning proper design parameters. Simulation results are given to demonstrate the effectiveness of the proposed method.
Managing heterogeneity in the study of neural oscillator dynamics
2012-01-01
We consider a coupled, heterogeneous population of relaxation oscillators used to model rhythmic oscillations in the pre-Bötzinger complex. By choosing specific values of the parameter used to describe the heterogeneity, sampled from the probability distribution of the values of that parameter, we show how the effects of heterogeneity can be studied in a computationally efficient manner. When more than one parameter is heterogeneous, full or sparse tensor product grids are used to select appropriate parameter values. The method allows us to effectively reduce the dimensionality of the model, and it provides a means for systematically investigating the effects of heterogeneity in coupled systems, linking ideas from uncertainty quantification to those for the study of network dynamics. PMID:22658163
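The idea of choosing specific parameter values sampled from the heterogeneity distribution, rather than many random draws, can be sketched with Gauss-Hermite quadrature for one normally distributed parameter. The smooth parameter-to-output map below is an assumed stand-in for the full oscillator model; all names and values are illustrative.

```python
import numpy as np

# Population-average of a nonlinear output f(g) over a heterogeneous
# parameter g ~ Normal(mu, sd). A handful of Gauss-Hermite nodes replaces
# thousands of random samples (f stands in for a full oscillator model).
def f(g):
    return np.tanh(2.0 * g)          # assumed smooth parameter-to-output map

mu, sd = 0.3, 0.2
nodes, weights = np.polynomial.hermite_e.hermegauss(7)   # probabilists' rule
g_nodes = mu + sd * nodes                                # 7 "representative" values
quad_mean = weights @ f(g_nodes) / np.sqrt(2 * np.pi)    # weights sum to sqrt(2*pi)

# Monte Carlo reference: far more model evaluations for the same answer.
rng = np.random.default_rng(3)
mc_mean = f(rng.normal(mu, sd, 200_000)).mean()
```

Seven model evaluations match a 200 000-sample Monte Carlo estimate, which is the dimension reduction exploited above; for several heterogeneous parameters, the nodes would be combined on full or sparse tensor product grids.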
Calcium imaging of neural circuits with extended depth-of-field light-sheet microscopy
Quirin, Sean; Vladimirov, Nikita; Yang, Chao-Tsung; Peterka, Darcy S.; Yuste, Rafael; Ahrens, Misha B.
2016-01-01
Increasing the volumetric imaging speed of light-sheet microscopy will improve its ability to detect fast changes in neural activity. Here, a system is introduced for brain-wide imaging of neural activity in the larval zebrafish by coupling structured illumination with cubic phase extended depth-of-field (EDoF) pupil encoding. This microscope enables faster light-sheet imaging and facilitates arbitrary plane scanning—removing constraints on acquisition speed, alignment tolerances, and physical motion near the sample. The usefulness of this method is demonstrated by performing multi-plane calcium imaging in the fish brain with a 416 × 832 × 160 µm field of view at 33 Hz. The optomotor response behavior of the zebrafish is monitored at high speeds, and time-locked correlations of neuronal activity are resolved across its brain. PMID:26974063
Disappearing inflaton potential via heavy field dynamics
NASA Astrophysics Data System (ADS)
Kitajima, Naoya; Takahashi, Fuminobu
2016-02-01
We propose a possibility that the inflaton potential is significantly modified after inflation due to heavy field dynamics. During inflation such a heavy scalar field may be stabilized at a value deviated from the low-energy minimum. In extreme cases, the inflaton potential vanishes and the inflaton becomes almost massless at some time after inflation. Such transition of the inflaton potential has interesting implications for primordial density perturbations, reheating, creation of unwanted relics, dark radiation, and experimental search for light degrees of freedom. To be concrete, we consider a chaotic inflation in supergravity where the inflaton mass parameter is promoted to a modulus field, finding that the inflaton becomes stable after the transition and contributes to dark matter. Another example is a hilltop inflation (also called new inflation) by the MSSM Higgs field which acquires a large expectation value just after inflation, but it returns to the origin after the transition and finally rolls down to the electroweak vacuum. Interestingly, the smallness of the electroweak scale compared to the Planck scale is directly related to the flatness of the inflaton potential.
Optimally designed fields for controlling molecular dynamics
NASA Astrophysics Data System (ADS)
Rabitz, Herschel
1991-10-01
This research concerns the development of molecular control theory techniques for designing optical fields capable of manipulating molecular dynamic phenomena. Although it has long been recognized that lasers should be capable of manipulating dynamic events, many frustrating years of intuitively driven laboratory studies only serve to illustrate the point that the task is complex and defies intuition. The principal new component in the present research is the recognition that this problem falls into the category of control theory, and its inherent complexities require the use of modern control theory tools largely developed in the engineering disciplines. Thus, the research has initiated a transfer of control theory concepts to the molecular scale. Although much continued effort will be needed to fully develop these concepts, the research in this grant set forth the basic components of the theory and carried out illustrative studies involving the design of optical fields capable of controlling rotational, vibrational and electronic degrees of freedom. Optimal control within the quantum mechanical molecular realm represents a frontier area with many possible ultimate applications. At this stage, the theoretical tools need to be joined with emerging laboratory optical pulse-shaping capabilities to illustrate the power of the concepts.
The Role of Direct Current Electric Field-Guided Stem Cell Migration in Neural Regeneration.
Yao, Li; Li, Yongchao
2016-06-01
Effective directional axonal growth and neural cell migration are crucial in the neural regeneration of the central nervous system (CNS). Endogenous currents have been detected in many developing nervous systems. Experiments have demonstrated that applied direct current (DC) electric fields (EFs) can guide axonal growth in vitro, and attempts have been made to enhance the regrowth of damaged spinal cord axons using DC EFs in in vivo experiments. Recent work has revealed that the migration of stem cells and stem cell-derived neural cells can be guided by DC EFs. These studies have raised the possibility that endogenous and applied DC EFs can be used to direct neural tissue regeneration. Although the mechanism of EF-directed axonal growth and cell migration has not been fully understood, studies have shown that the polarization of cell membrane proteins and the activation of intracellular signaling molecules are involved in the process. The application of EFs is a promising biotechnology for regeneration of the CNS. PMID:27108005
Neural dynamics for landmark orientation and angular path integration
Seelig, Johannes D.; Jayaraman, Vivek
2015-01-01
Many animals navigate using a combination of visual landmarks and path integration. In mammalian brains, head direction cells integrate these two streams of information by representing an animal's heading relative to landmarks, yet maintaining their directional tuning in darkness based on self-motion cues. Here we use two-photon calcium imaging in head-fixed flies walking on a ball in a virtual reality arena to demonstrate that landmark-based orientation and angular path integration are combined in the population responses of neurons whose dendrites tile the ellipsoid body — a toroidal structure in the center of the fly brain. The population encodes the fly's azimuth relative to its environment, tracking visual landmarks when available and relying on self-motion cues in darkness. When both visual and self-motion cues are absent, a representation of the animal's orientation is maintained in this network through persistent activity — a potential substrate for short-term memory. Several features of the population dynamics of these neurons and their circular anatomical arrangement are suggestive of ring attractors — network structures proposed to support the function of navigational brain circuits. PMID:25971509
Neural processing of dynamic emotional facial expressions in psychopaths.
Decety, Jean; Skelly, Laurie; Yoder, Keith J; Kiehl, Kent A
2014-02-01
Facial expressions play a critical role in social interactions by eliciting rapid responses in the observer. Failure to perceive and experience a normal range and depth of emotion seriously impacts interpersonal communication and relationships. As has been demonstrated across a number of domains, abnormal emotion processing in individuals with psychopathy plays a key role in their lack of empathy. However, the neuroimaging literature is unclear as to whether deficits are specific to particular emotions such as fear and perhaps sadness. Moreover, findings are inconsistent across studies. In the current experiment, 80 incarcerated adult males scoring high, medium, and low on the Hare Psychopathy Checklist-Revised (PCL-R) underwent functional magnetic resonance imaging (fMRI) scanning while viewing dynamic facial expressions of fear, sadness, happiness, and pain. Participants who scored high on the PCL-R showed a reduction in neuro-hemodynamic response to all four categories of facial expressions in the face processing network (inferior occipital gyrus, fusiform gyrus, and superior temporal sulcus (STS)) as well as the extended network (inferior frontal gyrus and orbitofrontal cortex (OFC)), which supports a pervasive deficit across emotion domains. Unexpectedly, the response in dorsal insula to fear, sadness, and pain was greater in psychopaths than non-psychopaths. Importantly, the orbitofrontal cortex and ventromedial prefrontal cortex (vmPFC), regions critically implicated in affective and motivated behaviors, were significantly less active in individuals with psychopathy during the perception of all four emotional expressions. PMID:24359488
An Implantable Wireless Neural Interface for Recording Cortical Circuit Dynamics in Moving Primates
Borton, David A.; Yin, Ming; Aceros, Juan; Nurmikko, Arto
2013-01-01
Objective: Neural interface technology suitable for clinical translation has the potential to significantly impact the lives of amputees, spinal cord injury victims, and those living with severe neuromotor disease. Such systems must be chronically safe, durable, and effective. Approach: We have designed and implemented a neural interface microsystem, housed in a compact, subcutaneous, and hermetically sealed titanium enclosure. The implanted device interfaces the brain with a 510k-approved, 100-element silicon-based MEA via a custom hermetic feedthrough design. Full-spectrum neural signals were amplified (0.1 Hz to 7.8 kHz, ×200 gain) and multiplexed by a custom application-specific integrated circuit, digitized, and then packaged for transmission. The neural data (24 Mbps) were transmitted by a wireless data link carried on a frequency-shift-key-modulated signal at 3.2 GHz and 3.8 GHz to a receiver 1 meter away, by design a point-to-point communication link for human clinical use. The system was powered by an embedded medical-grade rechargeable Li-ion battery for 7-hour continuous operation between recharges via an inductive transcutaneous wireless power link at 2 MHz. Main results: Device verification and early validation were performed in both swine and non-human primate freely-moving animal models and showed that the wireless implant was electrically stable, effective in capturing and delivering broadband neural data, and safe for over one year of testing. In addition, we have used the multichannel data from these mobile animal models to demonstrate the ability to decode neural population dynamics associated with motor activity. Significance: We have developed an implanted wireless broadband neural recording device evaluated in non-human primate and swine. The use of this new implantable neural interface technology can provide insight into how to advance human neuroprostheses beyond the present early clinical trials. Further, such tools enable mobile patient use, have
An implantable wireless neural interface for recording cortical circuit dynamics in moving primates
NASA Astrophysics Data System (ADS)
Borton, David A.; Yin, Ming; Aceros, Juan; Nurmikko, Arto
2013-04-01
Objective. Neural interface technology suitable for clinical translation has the potential to significantly impact the lives of amputees, spinal cord injury victims and those living with severe neuromotor disease. Such systems must be chronically safe, durable and effective. Approach. We have designed and implemented a neural interface microsystem, housed in a compact, subcutaneous and hermetically sealed titanium enclosure. The implanted device interfaces the brain with a 510k-approved, 100-element silicon-based microelectrode array via a custom hermetic feedthrough design. Full spectrum neural signals were amplified (0.1 Hz to 7.8 kHz, 200× gain) and multiplexed by a custom application specific integrated circuit, digitized and then packaged for transmission. The neural data (24 Mbps) were transmitted by a wireless data link carried on a frequency-shift-key-modulated signal at 3.2 and 3.8 GHz to a receiver 1 m away by design as a point-to-point communication link for human clinical use. The system was powered by an embedded medical grade rechargeable Li-ion battery for 7 h continuous operation between recharge via an inductive transcutaneous wireless power link at 2 MHz. Main results. Device verification and early validation were performed in both swine and non-human primate freely-moving animal models and showed that the wireless implant was electrically stable, effective in capturing and delivering broadband neural data, and safe for over one year of testing. In addition, we have used the multichannel data from these mobile animal models to demonstrate the ability to decode neural population dynamics associated with motor activity. Significance. We have developed an implanted wireless broadband neural recording device evaluated in non-human primate and swine. The use of this new implantable neural interface technology can provide insight into how to advance human neuroprostheses beyond the present early clinical trials. Further, such tools enable mobile
Neural networks with dynamical synapses: From mixed-mode oscillations and spindles to chaos
NASA Astrophysics Data System (ADS)
Lee, K.; Goltsev, A. V.; Lopes, M. A.; Mendes, J. F. F.
2013-01-01
Understanding short-term synaptic depression (STSD) and other forms of synaptic plasticity is a topical problem in neuroscience. Here we study the role of STSD in the formation of complex patterns of brain rhythms. We use a cortical circuit model of neural networks composed of irregular spiking excitatory and inhibitory neurons having type 1 and 2 excitability and stochastic dynamics. In the model, neurons form a sparsely connected network and their spontaneous activity is driven by random spikes representing synaptic noise. Using simulations and analytical calculations, we found that if STSD is absent, the neural network shows either asynchronous behavior or regular network oscillations depending on the noise level. In networks with STSD, changing the parameters of synaptic plasticity and the noise level, we observed transitions to complex patterns of collective activity: mixed-mode and spindle oscillations, bursts of collective activity, and chaotic behavior. Interestingly, these patterns are stable in a certain range of the parameters and separated by critical boundaries. Thus, the parameters of synaptic plasticity can play the role of control parameters, or switches, between different network states. However, changes in these parameters caused by disease may lead to dramatic impairment of ongoing neural activity. We analyze the chaotic neural activity using the 0-1 test for chaos (Gottwald, G. & Melbourne, I., 2004) and show that it has a collective nature.
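The 0-1 test for chaos cited here (Gottwald & Melbourne, 2004) is straightforward to implement: project the time series onto translation variables and ask whether their mean-square displacement grows with time. The sketch below is a minimal NumPy version applied to the logistic map rather than to the paper's network model; the map, window sizes, and range of the phase parameter c are illustrative choices, not taken from the paper:

```python
import numpy as np

def zero_one_test(phi, n_c=20, seed=0):
    """Gottwald-Melbourne 0-1 test: K near 1 indicates chaos, near 0 regular dynamics."""
    rng = np.random.default_rng(seed)
    N = len(phi)
    ncut = N // 10                           # MSD window << series length
    Ks = []
    for c in rng.uniform(np.pi / 5, 2 * np.pi / 5, n_c):
        j = np.arange(1, N + 1)
        p = np.cumsum(phi * np.cos(j * c))   # translation variables
        q = np.cumsum(phi * np.sin(j * c))
        M = np.array([np.mean((p[n:] - p[:-n]) ** 2 + (q[n:] - q[:-n]) ** 2)
                      for n in range(1, ncut)])
        # growth of the mean-square displacement, measured as correlation with n
        Ks.append(np.corrcoef(np.arange(1, ncut), M)[0, 1])
    return np.median(Ks)                     # median is robust to resonant c values

def logistic(r, n, x0=0.4):
    x = np.empty(n)
    x[0] = x0
    for i in range(n - 1):
        x[i + 1] = r * x[i] * (1 - x[i])
    return x

K_chaotic = zero_one_test(logistic(4.0, 3000))   # fully chaotic regime
K_regular = zero_one_test(logistic(3.5, 3000))   # period-4 regime
```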
Temporal and spatial neural dynamics in the perception of basic emotions from complex scenes.
Costa, Tommaso; Cauda, Franco; Crini, Manuella; Tatu, Mona-Karina; Celeghin, Alessia; de Gelder, Beatrice; Tamietto, Marco
2014-11-01
The different temporal dynamics of emotions are critical to understanding their evolutionary role in the regulation of interactions with the surrounding environment. Here, we investigated the temporal dynamics underlying the perception of four basic emotions from complex scenes varying in valence and arousal (fear, disgust, happiness and sadness) with the millisecond time resolution of Electroencephalography (EEG). Event-related potentials were computed and each emotion showed a specific temporal profile, as revealed by distinct time segments of significant differences from the neutral scenes. Fear perception elicited significant activity at the earliest time segments, followed by disgust, happiness and sadness. Moreover, fear, disgust and happiness were characterized by two time segments of significant activity, whereas sadness showed only one long-latency time segment of activity. Multidimensional scaling was used to assess the correspondence between neural temporal dynamics and the subjective experience elicited by the four emotions in a subsequent behavioral task. We found a high coherence between these two classes of data, indicating that psychological categories defining emotions have a close correspondence at the brain level in terms of neural temporal dynamics. Finally, we localized the brain regions of time-dependent activity for each emotion and time segment with low-resolution brain electromagnetic tomography. Fear and disgust showed widely distributed activations, predominantly in the right hemisphere. Happiness activated a number of areas mostly in the left hemisphere, whereas sadness showed a limited number of active areas at late latency. The present findings indicate that the neural signature of basic emotions can emerge as the byproduct of dynamic spatiotemporal brain networks as investigated with millisecond-range resolution, rather than in time-independent areas involved uniquely in the processing of one specific emotion. PMID:24214921
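Multidimensional scaling, used above to compare neural temporal dynamics with subjective ratings, embeds items in a low-dimensional space given only their pairwise dissimilarities. A minimal classical (Torgerson) MDS sketch on synthetic points, not the study's EEG data; for exact Euclidean input the embedding reproduces the distances:

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed points from a pairwise distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]            # top-k eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 2))                  # ground-truth configuration
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
Y = classical_mds(D, k=2)                    # recovered up to rotation/reflection
D_rec = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
```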
Dynamical Field Line Connectivity in Magnetic Turbulence
NASA Astrophysics Data System (ADS)
Ruffolo, D. J.; Matthaeus, W. H.
2014-12-01
Point-to-point magnetic connectivity has a stochastic character whenever magnetic fluctuations cause a field line random walk, with observable manifestations such as dropouts of solar energetic particles and upstream events at Earth's bow shock. Connectivity can also change due to dynamical activity. Comparing the instantaneous magnetic connectivity to the same point at two different times, we provide a nonperturbative analytic theory for the ensemble average perpendicular displacement of the magnetic field line, given the power spectrum of magnetic fluctuations. For simplicity, the theory is developed in the context of transverse turbulence, and is numerically evaluated for two specific models: reduced magnetohydrodynamics (RMHD), a quasi-two-dimensional model of anisotropic turbulence that is applicable to low-beta plasmas, and two-dimensional (2D) plus slab turbulence, which is a good parameterization for solar wind turbulence. We take into account the dynamical decorrelation of magnetic fluctuations due to wave propagation, nonlinear distortion, random sweeping, and convection by a bulk wind flow relative to the observer. The mean squared time-differenced displacement increases with time and with parallel distance, becoming twice the field line random walk displacement at long distances and/or times, corresponding to a pair of uncorrelated random walks. These results are relevant to a variety of astrophysical processes, such as electron transport and heating patterns in coronal loops and the solar transition region, changing magnetic connection to particle sources near the Sun or at a planetary bow shock, and thickening of coronal hole boundaries. Partially supported by the Thailand Research Fund, the US NSF (AGS-1063439 and SHINE AGS-1156094), NASA (Heliophysics Theory NNX11AJ44G), and by the Solar Probe Plus Project through the ISIS Theory team.
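The limiting factor of two (a pair of uncorrelated random walks) is easy to illustrate with a toy ensemble; this is not the RMHD or 2D+slab calculation, just the uncorrelated-walk asymptote in one dimension:

```python
import numpy as np

rng = np.random.default_rng(42)
n_steps, n_ens = 200, 20_000
# two independent random walks standing in for field lines at two times
w1 = np.cumsum(rng.normal(size=(n_ens, n_steps)), axis=1)
w2 = np.cumsum(rng.normal(size=(n_ens, n_steps)), axis=1)
msd_single = np.mean(w1[:, -1] ** 2)               # single-walk displacement variance
msd_diff = np.mean((w1[:, -1] - w2[:, -1]) ** 2)   # time-differenced displacement
ratio = msd_diff / msd_single                      # → 2 for uncorrelated walks
```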
Dynamic nuclear polarization at high magnetic fields
Maly, Thorsten; Debelouchina, Galia T.; Bajaj, Vikram S.; Hu, Kan-Nian; Joo, Chan-Gyu; Mak–Jurkauskas, Melody L.; Sirigiri, Jagadishwar R.; van der Wel, Patrick C. A.; Herzfeld, Judith; Temkin, Richard J.; Griffin, Robert G.
2009-01-01
Dynamic nuclear polarization (DNP) is a method that permits NMR signal intensities of solids and liquids to be enhanced significantly, and is therefore potentially an important tool in structural and mechanistic studies of biologically relevant molecules. During a DNP experiment, the large polarization of an exogenous or endogenous unpaired electron is transferred to the nuclei of interest (I) by microwave (μw) irradiation of the sample. The maximum theoretical enhancement achievable is given by the ratio of the gyromagnetic ratios (γe/γI), ∼660 for protons. In the early 1950s, the DNP phenomenon was demonstrated experimentally, and intensively investigated in the following four decades, primarily at low magnetic fields. This review focuses on recent developments in the field of DNP with a special emphasis on work done at high magnetic fields (≥5 T), the regime where contemporary NMR experiments are performed. After a brief historical survey, we present a review of the classical continuous wave (cw) DNP mechanisms—the Overhauser effect, the solid effect, the cross effect, and thermal mixing. A special section is devoted to the theory of coherent polarization transfer mechanisms, since they are potentially more efficient at high fields than classical polarization schemes. The implementation of DNP at high magnetic fields has required the development and improvement of new and existing instrumentation. Therefore, we also review some recent developments in μw and probe technology, followed by an overview of DNP applications in biological solids and liquids. Finally, we outline some possible areas for future developments. PMID:18266416
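The quoted maximum enhancement of ∼660 for protons follows directly from the ratio of gyromagnetic ratios (CODATA values):

```python
# Maximum theoretical DNP enhancement for 1H: gamma_e / gamma_H
gamma_e = 1.76085963e11   # electron gyromagnetic ratio, rad s^-1 T^-1 (magnitude)
gamma_H = 2.67522187e8    # proton (1H) gyromagnetic ratio, rad s^-1 T^-1
enhancement = gamma_e / gamma_H
print(round(enhancement))  # → 658
```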
Field-driven dynamics of nematic microcapillaries.
Khayyatzadeh, Pouya; Fu, Fred; Abukhdeir, Nasser Mohieddin
2015-12-01
Polymer-dispersed liquid-crystal (PDLC) composites have long been a focus of study for their unique electro-optical properties, which have resulted in various applications such as switchable (transparent or translucent) windows. These composites are manufactured using desirable "bottom-up" techniques, such as phase separation of a liquid-crystal-polymer mixture, which enable production of PDLC films at very large scales. LC domains within PDLCs are typically spheroidal, as opposed to rectangular for an LCD panel, and thus exhibit substantially different behavior in the presence of an external field. The fundamental difference between spheroidal and rectangular nematic domains is that the former results in the presence of nanoscale orientational defects in LC order while the latter does not. Development and optimization of PDLC electro-optical properties have progressed at a relatively slow pace due to this increased complexity. In this work, continuum simulations are performed in order to capture the complex formation and electric field-driven switching dynamics of approximations of PDLC domains. Using a simplified elliptic cylinder (microcapillary) geometry as an approximation of spheroidal PDLC domains, the effects of geometry (aspect ratio), surface anchoring, and external field strength are studied through the use of the Landau-de Gennes model of the nematic LC phase. PMID:26764713
Araújo, Rui
2006-09-01
Mobile robots must be able to build their own maps to navigate in unknown worlds. Expanding a previously proposed method based on the fuzzy ART neural architecture (FARTNA), this paper introduces a new online method for learning maps of unknown dynamic worlds. For this purpose the new Prune-able fuzzy adaptive resonance theory neural architecture (PAFARTNA) is introduced. It extends the FARTNA self-organizing neural network with novel mechanisms that provide important dynamic adaptation capabilities. Relevant PAFARTNA properties are formulated and demonstrated. A method is proposed for the perception of object removals, and then integrated with PAFARTNA. The proposed methods are integrated into a navigation architecture. With the new navigation architecture the mobile robot is able to navigate in changing worlds, and a degree of optimality is maintained, associated with a shortest-path planning approach implemented in real time over the underlying global world model. Experimental results obtained with a Nomad 200 robot are presented, demonstrating the feasibility and effectiveness of the proposed methods. PMID:17001984
Zion-Golumbic, Elana; Kutas, Marta; Bentin, Shlomo
2009-01-01
Prior semantic knowledge facilitates episodic recognition memory for faces. To examine the neural manifestation of the interplay between semantic and episodic memory, we investigated neuroelectric dynamics during the creation (study) and the retrieval (test) of episodic memories for famous and nonfamous faces. Episodic memory effects were evident in several EEG frequency bands: theta (4–8 Hz), alpha (9–13 Hz), and gamma (40–100 Hz). Activity in these bands was differentially modulated by preexisting semantic knowledge and by episodic memory, implicating their different functional roles in memory. More specifically, theta activity and alpha suppression were larger for old compared to new faces at test regardless of fame, but were both larger for famous faces during study. This pattern of selective semantic effects suggests that the theta and alpha responses, which are primarily associated with episodic memory, reflect utilization of semantic information only when it is beneficial for task performance. In contrast, gamma activity decreased between the first (study) and second (test) presentation of a face, but overall was larger for famous than nonfamous faces. Hence, the gamma rhythm seems to be primarily related to activation of preexisting neural representations that may contribute to the formation of new episodic traces. Although the latter process is affected by the episodic status of a stimulus, gamma activity might not be a direct index of episodic memory. Taken together, these data provide new insights into the complex interaction between semantic and episodic memory for faces and the neural dynamics associated with mnemonic processes. PMID:19400676
Kwong, C K; Fung, K Y; Jiang, Huimin; Chan, K Y; Siu, Kin Wai Michael
2013-01-01
Affective design is an important aspect of product development to achieve a competitive edge in the marketplace. A neural-fuzzy network approach has recently been attempted to model customer satisfaction for affective design, and it has proved effective in dealing with the fuzziness and non-linearity of the modeling as well as in generating explicit customer satisfaction models. However, such an approach to modeling customer satisfaction has two limitations. First, it is not suitable for modeling problems which involve a large number of inputs. Second, it cannot adapt to new data sets, given that its structure is fixed once it has been developed. In this paper, a modified dynamic evolving neural-fuzzy approach is proposed to address the above-mentioned limitations. A case study on the affective design of mobile phones was conducted to illustrate the effectiveness of the proposed methodology. Validation tests were conducted and the test results indicated that: (1) the conventional Adaptive Neuro-Fuzzy Inference System (ANFIS) failed to run due to a large number of inputs; (2) the proposed dynamic neural-fuzzy model outperforms the subtractive clustering-based ANFIS model and fuzzy c-means clustering-based ANFIS model in terms of their modeling accuracy and computational effort. PMID:24385884
Design of neural networks for fast convergence and accuracy: dynamics and control.
Maghami, P G; Sparks, D R
2000-01-01
A procedure for the design and training of artificial neural networks, used for rapid and efficient controls and dynamics design and analysis for flexible space systems, has been developed. Artificial neural networks are employed, such that once properly trained, they provide a means of evaluating the impact of design changes rapidly. Specifically, two-layer feedforward neural networks are designed to approximate the functional relationship between the component/spacecraft design changes and measures of its performance or nonlinear dynamics of the system/components. A training algorithm, based on statistical sampling theory, is presented, which guarantees that the trained networks provide a designer-specified degree of accuracy in mapping the functional relationship. Within each iteration of this statistical-based algorithm, a sequential design algorithm is used for the design and training of the feedforward network to provide rapid convergence to the network goals. Here, at each sequence a new network is trained to minimize the error of the previous network. The proposed method should work for applications wherein an arbitrarily large source of training data can be generated. Two numerical examples are performed on a spacecraft application in order to demonstrate the feasibility of the proposed approach. PMID:18249744
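The sequential scheme described here (each new network trained to minimize the previous network's error) is structurally similar to residual fitting. Below is a minimal sketch with random-feature "networks" standing in for the paper's two-layer feedforward networks; the target function, widths, and number of stages are invented for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 200)[:, None]
y = np.sin(3 * x[:, 0]) + 0.5 * x[:, 0] ** 2     # hypothetical performance measure

def fit_random_feature_net(x, residual, width=10, rng=rng):
    """One small 'network': fixed random hidden layer + least-squares output layer."""
    W = rng.normal(size=(x.shape[1], width))
    b = rng.normal(size=width)
    H = np.tanh(x @ W + b)
    beta, *_ = np.linalg.lstsq(H, residual, rcond=None)
    return lambda z: np.tanh(z @ W + b) @ beta

pred = np.zeros_like(y)
errors = []
for _ in range(5):                               # sequential training stages
    net = fit_random_feature_net(x, y - pred)    # fit the previous networks' error
    pred = pred + net(x)
    errors.append(np.sqrt(np.mean((y - pred) ** 2)))
```

Each stage solves a least-squares problem on the current residual, so the training error is nonincreasing across stages by construction.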
Design of Neural Networks for Fast Convergence and Accuracy: Dynamics and Control
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Sparks, Dean W., Jr.
1997-01-01
A procedure for the design and training of artificial neural networks, used for rapid and efficient controls and dynamics design and analysis for flexible space systems, has been developed. Artificial neural networks are employed, such that once properly trained, they provide a means of evaluating the impact of design changes rapidly. Specifically, two-layer feedforward neural networks are designed to approximate the functional relationship between the component/spacecraft design changes and measures of its performance or nonlinear dynamics of the system/components. A training algorithm, based on statistical sampling theory, is presented, which guarantees that the trained networks provide a designer-specified degree of accuracy in mapping the functional relationship. Within each iteration of this statistical-based algorithm, a sequential design algorithm is used for the design and training of the feedforward network to provide rapid convergence to the network goals. Here, at each sequence a new network is trained to minimize the error of the previous network. The proposed method should work for applications wherein an arbitrarily large source of training data can be generated. Two numerical examples are performed on a spacecraft application in order to demonstrate the feasibility of the proposed approach.
Buesing, Lars; Bill, Johannes; Nessler, Bernhard; Maass, Wolfgang
2011-01-01
The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons. PMID:22096452
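For intuition about sampling-based neural computation, the sketch below runs a standard (reversible) Gibbs sampler over binary units and checks its marginals against exact enumeration for a small Boltzmann distribution. Note this is only a generic illustration: the paper's contribution is a non-reversible chain suited to spiking dynamics, which this sketch does not implement, and all parameters here are arbitrary:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
n = 4
W = rng.normal(scale=0.5, size=(n, n))
W = (W + W.T) / 2                          # symmetric couplings
np.fill_diagonal(W, 0)
b = rng.normal(scale=0.3, size=n)

def energy(s):
    return -0.5 * s @ W @ s - b @ s

# exact marginals by brute-force enumeration of all 2^n states
states = np.array(list(product([0, 1], repeat=n)), dtype=float)
p = np.exp(-np.array([energy(s) for s in states]))
p /= p.sum()
exact_marg = p @ states

# Gibbs sampling: each binary "neuron" turns on with its conditional probability
s = np.zeros(n)
counts = np.zeros(n)
n_sweeps = 30_000
for _ in range(n_sweeps):
    for i in range(n):
        u = W[i] @ s + b[i]                # local field of unit i
        s[i] = float(rng.random() < 1.0 / (1.0 + np.exp(-u)))
    counts += s
mcmc_marg = counts / n_sweeps
```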
NASA Astrophysics Data System (ADS)
Bruton, Christopher Patrick
Earthquakes and seismicity have long been used to monitor volcanoes. In addition to the time, location, and magnitude of an earthquake, the characteristics of the waveform itself are important. For example, low-frequency or hybrid type events could be generated by magma rising toward the surface. A rockfall event could indicate a growing lava dome. Classification of earthquake waveforms is thus a useful tool in volcano monitoring. A procedure to perform such classification automatically could flag certain event types immediately, instead of waiting for a human analyst's review. Inspired by speech recognition techniques, we have developed a procedure to classify earthquake waveforms using artificial neural networks. A neural network can be "trained" with an existing set of input and desired output data; in this case, we use a set of earthquake waveforms (input) that has been classified by a human analyst (desired output). After training the neural network, new sets of waveforms can be classified automatically as they are presented. Our procedure uses waveforms from multiple stations, making it robust to seismic network changes and outages. The use of a dynamic time-delay neural network allows waveforms to be presented without precise alignment in time, and thus could be applied to continuous data or to seismic events without clear start and end times. We have evaluated several different training algorithms and neural network structures to determine their effects on classification performance. We apply this procedure to earthquakes recorded at Mount Spurr and Katmai in Alaska, and Uturuncu Volcano in Bolivia. The procedure can successfully distinguish between slab and volcanic events at Uturuncu, between events from four different volcanoes in the Katmai region, and between volcano-tectonic and long-period events at Spurr. Average recall and overall accuracy were greater than 80% in all three cases.
Recovery of Dynamics and Function in Spiking Neural Networks with Closed-Loop Control
Vlachos, Ioannis; Deniz, Taşkin; Aertsen, Ad; Kumar, Arvind
2016-01-01
There is a growing interest in developing novel brain stimulation methods to control disease-related aberrant neural activity and to address basic neuroscience questions. Conventional methods for manipulating brain activity rely on open-loop approaches that usually lead to excessive stimulation and, crucially, do not restore the original computations performed by the network. Thus, they are often accompanied by undesired side-effects. Here, we introduce delayed feedback control (DFC), a conceptually simple but effective method, to control pathological oscillations in spiking neural networks (SNNs). Using mathematical analysis and numerical simulations we show that DFC can restore a wide range of aberrant network dynamics either by suppressing or enhancing synchronous irregular activity. Importantly, DFC, besides steering the system back to a healthy state, also recovers the computations performed by the underlying network. Finally, using our theory we identify the role of single neuron and synapse properties in determining the stability of the closed-loop system. PMID:26829673
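A key property of delayed feedback control is noninvasiveness: the control signal u(t) = K[x(t − τ) − x(t)] vanishes identically on any τ-periodic target state, so stimulation ceases once the desired rhythm is restored. A minimal illustration of this property on a sinusoidal signal (not the paper's spiking-network setting; K, τ, and the signal are arbitrary choices):

```python
import numpy as np

def dfc_signal(x, tau, K):
    """Delayed feedback control signal u(t) = K * (x(t - tau) - x(t))."""
    return K * (np.roll(x, tau) - x)       # periodic wrap-around for the delay

t = np.arange(1000)
period = 50
x_periodic = np.sin(2 * np.pi * t / period)          # target rhythm
u_matched = dfc_signal(x_periodic, tau=period, K=0.8)      # ~0: noninvasive
u_mismatched = dfc_signal(x_periodic, tau=period // 2, K=0.8)  # nonzero drive
```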
Understanding quantum entanglement by thermo field dynamics
NASA Astrophysics Data System (ADS)
Hashizume, Yoichiro; Suzuki, Masuo
2013-09-01
We propose a new method to understand quantum entanglement using thermo field dynamics (TFD), described by a double Hilbert space. Entanglement states show quantum-mechanically complicated behavior. Our new method using TFD makes it easy to understand entanglement states, because the states in the tilde space in TFD play the role of a tracer of the initial states. For our new treatment, we define an extended density matrix on the double Hilbert space. We give a general formulation of this extended density matrix and examine some simple cases using this formulation. Consequently, we have found that we can distinguish intrinsic quantum entanglement from the thermal fluctuations included in the definition of the ordinary quantum entanglement at finite temperatures. Through the above examination, our method using TFD can be applied not only to equilibrium states but also to non-equilibrium states. This is shown using some simple finite systems in the present paper.
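The doubled Hilbert space construction can be checked numerically: the thermofield double state |0(β)⟩ ∝ Σ_n e^{−βE_n/2} |n⟩⊗|ñ⟩ is pure on H⊗H̃, yet tracing out the tilde space returns the Gibbs state on H. A two-level sketch (β and the energies are arbitrary; this illustrates the standard TFD purification, not the paper's extended density matrix formalism):

```python
import numpy as np

beta = 1.3
E = np.array([0.0, 1.0])                  # two-level system energies (arbitrary)
Z = np.exp(-beta * E).sum()

# coefficient matrix of the thermofield double state on H ⊗ H~
psi = np.zeros((2, 2))
for n in range(2):
    psi[n, n] = np.exp(-beta * E[n] / 2) / np.sqrt(Z)

psi_vec = psi.reshape(-1)
rho_double = np.outer(psi_vec, psi_vec)   # extended density matrix: a pure state
# partial trace over the tilde space: rho[a,c] = sum_b psi[a,b] psi[c,b]
rho = np.einsum('ab,cb->ac', psi, psi)
rho_gibbs = np.diag(np.exp(-beta * E)) / Z
```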
Machine Learning for Dynamical Mean Field Theory
NASA Astrophysics Data System (ADS)
Arsenault, Louis-Francois; Lopez-Bezanilla, Alejandro; von Lilienfeld, O. Anatole; Littlewood, P. B.; Millis, Andy
2014-03-01
Machine Learning (ML), an approach that infers new results from accumulated knowledge, is in use for a variety of tasks ranging from face and voice recognition to internet searching and has recently been gaining increasing importance in chemistry and physics. In this talk, we investigate the possibility of using ML to solve the equations of dynamical mean field theory, which otherwise require the (numerically very expensive) solution of a quantum impurity model. Our ML scheme learns the relation between two functions: the hybridization function describing the bare (local) electronic structure of a material and the self-energy describing the many-body physics. We discuss the parameterization of the two functions for the exact diagonalization solver and present examples, beginning with the Anderson impurity model with a fixed bath density of states, demonstrating the advantages and the pitfalls of the method. DOE contract DE-AC02-06CH11357.
Field-induced superdiffusion and dynamical heterogeneity.
Gradenigo, Giacomo; Bertin, Eric; Biroli, Giulio
2016-06-01
By analyzing two kinetically constrained models of supercooled liquids we show that the anomalous transport of a driven tracer observed in supercooled liquids is another facet of the phenomenon of dynamical heterogeneity. We focus on the Fredrickson-Andersen and the Bertin-Bouchaud-Lequeux models. By numerical simulations and analytical arguments we demonstrate that the violation of the Stokes-Einstein relation and the field-induced superdiffusion observed during a long preasymptotic regime have the same physical origin: while a fraction of probes do not move, others jump repeatedly because they are close to local mobile regions. The anomalous fluctuations observed out of equilibrium in the presence of a pulling force ε, σ_x^2(t) = 〈x_ε^2(t)〉 − 〈x_ε(t)〉^2 ∼ t^{3/2}, which are accompanied by the asymptotic decay α_ε(t) ∼ t^{-1/2} of the non-Gaussian parameter from nontrivial values to zero, are due to the splitting of the probe population into the two (mobile and immobile) groups and to dynamical correlations, a mechanism expected to happen generically in supercooled liquids. PMID:27415189
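The population-splitting mechanism described here has a simple signature in the non-Gaussian parameter α = ⟨x⁴⟩/(3⟨x²⟩²) − 1: for a mixture of immobile probes (fraction f) and Gaussian-displaced mobile probes, α = f/(1 − f) > 0. A quick numerical check on a toy displacement model (not the Fredrickson-Andersen or Bertin-Bouchaud-Lequeux dynamics):

```python
import numpy as np

rng = np.random.default_rng(7)
f_immobile = 0.5                          # fraction of blocked probes (illustrative)
n = 500_000
mobile = rng.random(n) >= f_immobile
# two-population displacements: immobile probes stay at x = 0
x = np.where(mobile, rng.normal(size=n), 0.0)

alpha = np.mean(x ** 4) / (3 * np.mean(x ** 2) ** 2) - 1
alpha_theory = f_immobile / (1 - f_immobile)   # = 1/(1 - f) - 1
```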
A Neural Network Model to Learn Multiple Tasks under Dynamic Environments
NASA Astrophysics Data System (ADS)
Tsumori, Kenji; Ozawa, Seiichi
When environments are dynamically changed for agents, the knowledge acquired in an environment might be useless in the future. In such dynamic environments, agents should be able to not only acquire new knowledge but also modify old knowledge in learning. However, modifying all knowledge acquired before is not efficient, because knowledge once acquired may be useful again when a similar environment reappears, and some knowledge can be shared among different environments. To learn efficiently in such environments, we propose a neural network model that consists of the following modules: resource allocating network, long-term & short-term memory, and environment change detector. We evaluate the model under a class of dynamic environments where multiple function approximation tasks are sequentially given. The experimental results demonstrate that the proposed model possesses stable incremental learning, accurate environmental change detection, proper association and recall of old knowledge, and efficient knowledge transfer.
Optimal system size for complex dynamics in random neural networks near criticality
Wainrib, Gilles; García del Molino, Luis Carlos
2013-12-15
In this article, we consider a model of dynamical agents coupled through a random connectivity matrix, as introduced by Sompolinsky et al. [Phys. Rev. Lett. 61(3), 259–262 (1988)] in the context of random neural networks. When system size is infinite, it is known that increasing the disorder parameter induces a phase transition leading to chaotic dynamics. We observe and investigate here a novel phenomenon in the sub-critical regime for finite size systems: the probability of observing complex dynamics is maximal for an intermediate system size when the disorder is close enough to criticality. We give a more general explanation of this type of system size resonance in the framework of extreme values theory for eigenvalues of random matrices.
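In the Sompolinsky et al. model the transition to chaos at disorder g = 1 reflects the circular law: a random connectivity matrix with i.i.d. entries of variance g²/N has spectral radius ≈ g at large N, so for g > 1 eigenvalues of the linearized dynamics ẋ = −x + Jφ(x) destabilize the fixed point. A finite-N check of the spectral radius (N and g are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(5)
N, g = 600, 1.5                                    # network size, disorder parameter
J = rng.normal(scale=g / np.sqrt(N), size=(N, N))  # random connectivity matrix
radius = np.max(np.abs(np.linalg.eigvals(J)))      # ~ g by the circular law
```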
Noto, M; Nishikawa, J; Tateno, T
2016-03-24
A sound interrupted by silence is perceived as discontinuous. However, when high-intensity noise is inserted during the silence, the missing sound may be perceptually restored and be heard as uninterrupted. This illusory phenomenon is called auditory induction. Recent electrophysiological studies have revealed that auditory induction is associated with the primary auditory cortex (A1). Although experimental evidence has been accumulating, the neural mechanisms underlying auditory induction in A1 neurons are poorly understood. To elucidate this, we used both experimental and computational approaches. First, using an optical imaging method, we characterized population responses across auditory cortical fields to sound and identified five subfields in rats. Next, we examined neural population activity related to auditory induction with high temporal and spatial resolution in the rat auditory cortex (AC), including the A1 and several other AC subfields. Our imaging results showed that tone-burst stimuli interrupted by a silent gap elicited early phasic responses to the first tone and similar or smaller responses to the second tone following the gap. In contrast, tone stimuli interrupted by broadband noise (BN), considered to cause auditory induction, considerably suppressed or eliminated responses to the tone following the noise. Additionally, tone-burst stimuli that were interrupted by notched noise centered at the tone frequency, which is considered to decrease the strength of auditory induction, partially restored the second responses from the suppression caused by BN. To phenomenologically mimic the neural population activity in the A1 and thus investigate the mechanisms underlying auditory induction, we constructed a computational model from the periphery through the AC, including a nonlinear dynamical system. The computational model successively reproduced some of the above-mentioned experimental results. Therefore, our results suggest that a nonlinear, self
Nonlinear dynamics analysis of a self-organizing recurrent neural network: chaos waning.
Eser, Jürgen; Zheng, Pengsheng; Triesch, Jochen
2014-01-01
Self-organization is thought to play an important role in structuring nervous systems. It frequently arises as a consequence of plasticity mechanisms in neural networks: connectivity determines network dynamics which in turn feed back on network structure through various forms of plasticity. Recently, self-organizing recurrent neural network models (SORNs) have been shown to learn non-trivial structure in their inputs and to reproduce the experimentally observed statistics and fluctuations of synaptic connection strengths in cortex and hippocampus. However, the dynamics in these networks and how they change with network evolution are still poorly understood. Here we investigate the degree of chaos in SORNs by studying how the networks' self-organization changes their response to small perturbations. We study the effect of perturbations to the excitatory-to-excitatory weight matrix on connection strengths and on unit activities. We find that the network dynamics, characterized by an estimate of the maximum Lyapunov exponent, becomes less chaotic during its self-organization, developing into a regime where only few perturbations become amplified. We also find that due to the mixing of discrete and (quasi-)continuous variables in SORNs, small perturbations to the synaptic weights may become amplified only after a substantial delay, a phenomenon we propose to call deferred chaos. PMID:24466301
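The perturbation-based estimate of the maximal Lyapunov exponent described above can be sketched on a generic discrete-time random rate network (an illustrative stand-in, not the SORN model itself; all parameters are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def max_lyapunov(J, steps=500, eps=1e-8):
    """Estimate the maximal Lyapunov exponent of the discrete rate
    dynamics x <- tanh(J x) from the growth rate of a small
    perturbation, renormalized at every step (Benettin-style)."""
    n = J.shape[0]
    x = rng.normal(size=n)
    d = rng.normal(size=n)
    d *= eps / np.linalg.norm(d)           # perturbation of length eps
    acc = 0.0
    for _ in range(steps):
        x_pert = np.tanh(J @ (x + d))
        x = np.tanh(J @ x)
        dist = np.linalg.norm(x_pert - x)
        acc += np.log(dist / eps)          # local expansion rate
        d = (x_pert - x) * (eps / dist)    # renormalize to length eps
    return acc / steps

# negative exponent in the sub-critical regime (g < 1); typically
# positive, i.e. chaotic, for strong coupling
n = 100
for g in (0.5, 2.5):
    J = rng.normal(0.0, g / np.sqrt(n), size=(n, n))
    print(g, round(max_lyapunov(J), 3))
```

The SORN analysis is more involved (discrete spiking units mixed with continuous synaptic variables, hence the deferred amplification the authors report), but the renormalized-perturbation idea is the same.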
NASA Astrophysics Data System (ADS)
Poza, Jesús; Gómez, Carlos; García, María; Corralejo, Rebeca; Fernández, Alberto; Hornero, Roberto
2014-04-01
Objective. Current diagnostic guidelines encourage further research for the development of novel Alzheimer's disease (AD) biomarkers, especially in its prodromal form (i.e. mild cognitive impairment, MCI). Magnetoencephalography (MEG) can provide essential information about AD brain dynamics; however, only a few studies have addressed the characterization of MEG in incipient AD. Approach. We analyzed MEG rhythms from 36 AD patients, 18 MCI subjects and 27 controls, introducing a new wavelet-based parameter to quantify their dynamical properties: the wavelet turbulence. Main results. Our results suggest that AD progression elicits statistically significant regional-dependent patterns of abnormalities in the neural activity (p < 0.05), including a progressive loss of irregularity, variability, symmetry and Gaussianity. Furthermore, the highest accuracies to discriminate AD and MCI subjects from controls were 79.4% and 68.9%, whereas, in the three-class setting, the accuracy reached 67.9%. Significance. Our findings provide an original description of several dynamical properties of neural activity in early AD and offer preliminary evidence that the proposed methodology is a promising tool for assessing brain changes at different stages of dementia.
Modeling of CMM dynamic error based on optimization of neural network using genetic algorithm
NASA Astrophysics Data System (ADS)
Ying, Qu; Zai, Luo; Yi, Lu
2010-08-01
By analyzing the dynamic error of a CMM, a model is established using a BP neural network. The five input parameters that most affect the dynamic error of the CMM are the approximate rate, the length of the rod, the diameter of the probe, and the X and Y coordinate values. However, the training of a BP neural network is easily trapped in local minima and its training speed is slow. To overcome these disadvantages, a genetic algorithm (GA) is introduced for optimization, and a GA-BP network model is built. To verify the model, experiments are performed on a CMM of type 9158. The experimental results indicate that the global optimizing capability of the genetic algorithm is excellent. Compared with a traditional BP network, the GA-BP network has better accuracy and adaptability, and its training time is halved. The average dynamic error is reduced from 3.5 μm to 0.7 μm, so the precision is improved by 76%.
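The GA-BP idea, global search by a genetic algorithm followed by gradient-based refinement, can be sketched on a toy regression problem. Everything below (the synthetic data, network size, and hyperparameters) is an illustrative assumption, not the paper's CMM model; a numerical gradient stands in for backpropagation to keep the sketch short.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for the error model: a one-hidden-layer network
# y = w2 . tanh(W1 x + b1) + b2 fitted to synthetic data.
X = rng.uniform(-1, 1, size=(64, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
H, DIM = 6, 4 * 6 + 1                      # hidden units, weight count

def unpack(theta):
    W1 = theta[:2 * H].reshape(H, 2)
    b1, w2, b2 = theta[2 * H:3 * H], theta[3 * H:4 * H], theta[4 * H]
    return W1, b1, w2, b2

def mse(theta):
    W1, b1, w2, b2 = unpack(theta)
    return np.mean((np.tanh(X @ W1.T + b1) @ w2 + b2 - y) ** 2)

def ga(pop=30, gens=40):
    """Global search: selection + uniform crossover + Gaussian mutation."""
    P = rng.normal(0, 1, size=(pop, DIM))
    for _ in range(gens):
        fit = np.array([mse(p) for p in P])
        elite = P[np.argsort(fit)[:pop // 2]]              # selection
        pa = elite[rng.integers(0, len(elite), pop - len(elite))]
        pb = elite[rng.integers(0, len(elite), pop - len(elite))]
        children = np.where(rng.random(pa.shape) < 0.5, pa, pb)
        children += rng.normal(0, 0.1, children.shape)     # mutation
        P = np.vstack([elite, children])
    return min(P, key=mse)

def refine(theta, lr=0.05, steps=200, eps=1e-5):
    """Local refinement (numerical-gradient stand-in for backprop)."""
    for _ in range(steps):
        base = mse(theta)
        g = np.array([(mse(theta + eps * np.eye(DIM)[i]) - base) / eps
                      for i in range(DIM)])
        cand = theta - lr * g
        if mse(cand) < base:
            theta = cand
        else:
            lr *= 0.5                      # crude backtracking
    return theta

theta = refine(ga())                       # GA supplies the starting point
print("final training MSE:", round(mse(theta), 4))
```

The division of labor mirrors the paper's motivation: the GA escapes the local minima that trap plain BP training, and the gradient phase then converges quickly from the GA's starting point.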
The temporal derivative of expected utility: a neural mechanism for dynamic decision-making.
Zhang, Xian; Hirsch, Joy
2013-01-15
Real world tasks involving moving targets, such as driving a vehicle, are performed based on continuous decisions thought to depend upon the temporal derivative of the expected utility (∂V/∂t), where the expected utility (V) is the effective value of a future reward. However, the neural mechanisms that underlie dynamic decision-making are not well understood. This study investigates human neural correlates of both V and ∂V/∂t using fMRI and a novel experimental paradigm based on a pursuit-evasion game optimized to isolate components of dynamic decision processes. Our behavioral data show that players of the pursuit-evasion game adopt an exponential discounting function, supporting the expected utility theory. The continuous functions of V and ∂V/∂t were derived from the behavioral data and applied as regressors in fMRI analysis, enabling temporal resolution that exceeded the sampling rate of image acquisition, hyper-temporal resolution, by taking advantage of numerous trials that provide rich and independent manipulation of those variables. V and ∂V/∂t were each associated with distinct neural activity. Specifically, ∂V/∂t was associated with anterior and posterior cingulate cortices, superior parietal lobule, and ventral pallidum, whereas V was primarily associated with supplementary motor, pre and post central gyri, cerebellum, and thalamus. The association between the ∂V/∂t and brain regions previously related to decision-making is consistent with the primary role of the temporal derivative of expected utility in dynamic decision-making. PMID:22963852
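The relationship between V and ∂V/∂t under the exponential discounting reported here can be made concrete with a short numerical check. The parameter values (discount rate k, reward R, reward time T) are illustrative assumptions, not the study's fitted values.

```python
import numpy as np

# With exponential temporal discounting, the expected utility of a
# reward R delivered at time T is V(t) = R * exp(-k * (T - t)),
# so its temporal derivative satisfies dV/dt = k * V(t): the two
# regressors are proportional pointwise but differ once V is driven
# by a moving target rather than a fixed T.
k, R, T = 0.8, 1.0, 5.0
t = np.linspace(0.0, 5.0, 501)
V = R * np.exp(-k * (T - t))
dVdt = np.gradient(V, t)                  # numerical derivative

print(np.allclose(dVdt, k * V, rtol=1e-2))
```

In the pursuit-evasion paradigm, T (capture time) changes continuously with the players' trajectories, which decorrelates V and ∂V/∂t across trials and is what lets the two regressors map onto distinct brain networks.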
McDermott, Timothy J.; Badura-Brack, Amy S.; Becker, Katherine M.; Ryan, Tara J.; Khanna, Maya M.; Heinrichs-Graham, Elizabeth; Wilson, Tony W.
2016-01-01
Background Posttraumatic stress disorder (PTSD) is associated with executive functioning deficits, including disruptions in working memory. In this study, we examined the neural dynamics of working memory processing in veterans with PTSD and a matched healthy control sample using magnetoencephalography (MEG). Methods Our sample of recent combat veterans with PTSD and demographically matched participants without PTSD completed a working memory task during a 306-sensor MEG recording. The MEG data were preprocessed and transformed into the time-frequency domain. Significant oscillatory brain responses were imaged using a beamforming approach to identify spatiotemporal dynamics. Results Fifty-one men were included in our analyses: 27 combat veterans with PTSD and 24 controls. Across all participants, a dynamic wave of neural activity spread from posterior visual cortices to left frontotemporal regions during encoding, consistent with a verbal working memory task, and was sustained throughout maintenance. Differences related to PTSD emerged during early encoding, with patients exhibiting stronger α oscillatory responses than controls in the right inferior frontal gyrus (IFG). Differences spread to the right supramarginal and temporal cortices during later encoding where, along with the right IFG, they persisted throughout the maintenance period. Limitations This study focused on men with combat-related PTSD using a verbal working memory task. Future studies should evaluate women and the impact of various traumatic experiences using diverse tasks. Conclusion Posttraumatic stress disorder is associated with neurophysiological abnormalities during working memory encoding and maintenance. Veterans with PTSD engaged a bilateral network, including the inferior prefrontal cortices and supramarginal gyri. Right hemispheric neural activity likely reflects compensatory processing, as veterans with PTSD work to maintain accurate performance despite known cognitive deficits
NASA Astrophysics Data System (ADS)
Liao, Xiaofeng; Wong, Kwok-Wo; Yang, Shizhong
2003-09-01
In this Letter, the characteristics of the convergence dynamics of hybrid bidirectional associative memory neural networks with distributed transmission delays are studied. Without assuming the symmetry of synaptic connection weights and the monotonicity and differentiability of activation functions, the Lyapunov functionals are constructed and the generalized Halanay-type inequalities are employed to derive the delay-independent sufficient conditions under which the networks converge exponentially to the equilibria associated with temporally uniform external inputs. Some examples are given to illustrate the correctness of our results.
Recognition with self-control in neural networks
NASA Astrophysics Data System (ADS)
Lewenstein, Maciej; Nowak, Andrzej
1989-10-01
We present a theory of fully connected neural networks that incorporates mechanisms of dynamical self-control of recognition process. Using a functional integral technique, we formulate mean-field dynamics for such systems.
An energy-efficient, dynamic voltage scaling neural stimulator for a proprioceptive prosthesis.
Williams, Ian; Constandinou, Timothy G
2013-04-01
This paper presents an 8 channel energy-efficient neural stimulator for generating charge-balanced asymmetric pulses. Power consumption is reduced by implementing a fully-integrated DC-DC converter that uses a reconfigurable switched capacitor topology to provide 4 output voltages for Dynamic Voltage Scaling (DVS). DC conversion efficiencies of up to 82% are achieved using integrated capacitances of under 1 nF and the DVS approach offers power savings of up to 50% compared to the front end of a typical current controlled neural stimulator. A novel charge balancing method is implemented which has a low level of accuracy on a single pulse and a much higher accuracy over a series of pulses. The method used is robust to process and component variation and does not require any initial or ongoing calibration. Measured results indicate that the charge imbalance is typically between 0.05%-0.15% of charge injected for a series of pulses. Ex-vivo experiments demonstrate the viability in using this circuit for neural activation. The circuit has been implemented in a commercially-available 0.18 μm HV CMOS technology and occupies a core die area of approximately 2.8 mm(2) for an 8 channel implementation. PMID:23853295
Bouchard, Kristofer E.; Brainard, Michael S.
2016-01-01
Predicting future events is a critical computation for both perception and behavior. Despite the essential nature of this computation, there are few studies demonstrating neural activity that predicts specific events in learned, probabilistic sequences. Here, we test the hypotheses that the dynamics of internally generated neural activity are predictive of future events and are structured by the learned temporal–sequential statistics of those events. We recorded neural activity in Bengalese finch sensory-motor area HVC in response to playback of sequences from individuals’ songs, and examined the neural activity that continued after stimulus offset. We found that the strength of response to a syllable in the sequence depended on the delay at which that syllable was played, with a maximal response when the delay matched the intersyllable gap normally present for that specific syllable during song production. Furthermore, poststimulus neural activity induced by sequence playback resembled the neural response to the next syllable in the sequence when that syllable was predictable, but not when the next syllable was uncertain. Our results demonstrate that the dynamics of internally generated HVC neural activity are predictive of the learned temporal–sequential structure of produced song and that the strength of this prediction is modulated by uncertainty. PMID:27506786
NASA Astrophysics Data System (ADS)
Yu, Yiqun; Koller, Josef; Jordanova, Vania K.; Zaharia, Sorin G.; Friedel, Reinhard W.; Morley, Steven K.; Chen, Yue; Baker, Daniel; Reeves, Geoffrey D.; Spence, Harlan E.
2014-03-01
We expanded our previous work on L* neural networks that used empirical magnetic field models as the underlying models by applying and extending our technique to drift shells calculated from a physics-based magnetic field model. While empirical magnetic field models represent an average, statistical magnetospheric state, the RAM-SCB model, a first-principles magnetically self-consistent code, computes magnetic fields based on fundamental equations of plasma physics. Unlike the previous L* neural networks, which include McIlwain L and the mirror point magnetic field as part of the inputs, the new L* neural network requires only solar wind conditions and the Dst index, allowing for easier preparation of input parameters. This new neural network is compared against the previously trained networks and validated by the tracing method in the International Radiation Belt Environment Modeling (IRBEM) library. The accuracy of all L* neural networks with different underlying magnetic field models is evaluated by applying the electron phase space density (PSD)-matching technique derived from Liouville's theorem to the Van Allen Probes observations. Results indicate that the uncertainty in the predicted L* is statistically (75%) below 0.7, with a median value mostly below 0.2 and a median absolute deviation around 0.15, regardless of the underlying magnetic field model. We found that such an uncertainty in the calculated L* value can shift the peak location of the electron PSD profile by 0.2 RE radially, but with its shape nearly preserved.
Open systems dynamics for propagating quantum fields
NASA Astrophysics Data System (ADS)
Baragiola, Ben Quinn
In this dissertation, I explore interactions between matter and propagating light. The electromagnetic field is modeled as a Markovian reservoir of quantum harmonic oscillators successively streaming past a quantum system. Each weak and fleeting interaction entangles the light and the system, and the light continues its course. In the context of quantum tomography or metrology one attempts, using measurements of the light, to extract information about the quantum state of the system. An inevitable consequence of these measurements is a disturbance of the system's quantum state. These ideas focus on the system and regard the light as ancillary. It serves its purpose as a probe or as a mechanism to generate interesting dynamics or system states but is eventually traced out, leaving the reduced quantum state of the system as the primary mathematical subject. What, then, when the state of light itself harbors intrinsic self-entanglement? One such set of states, those where a traveling wave packet is prepared with a definite number of photons, is a focal point of this dissertation. These N-photon states are ideal candidates as couriers in quantum information processing devices. In contrast to quasi-classical states, such as coherent or thermal fields, N-photon states possess temporal mode entanglement, and local interactions in time have nonlocal consequences. The reduced state of a system probed by an N-photon state evolves in a non-Markovian way, and to describe its dynamics one is obliged to keep track of the field's evolution. I present a method to do this for an arbitrary quantum system using a set of coupled master equations. Many models set aside spatial degrees of freedom as an unnecessary complicating factor. By doing so the precision of predictions is limited. Consider an ensemble of cold, trapped atomic spins dispersively probed by a paraxial laser beam. Atom-light coupling across the ensemble is spatially inhomogeneous as is the radiation pattern of
Impaired neural processing of dynamic faces in left-onset Parkinson's disease.
Garrido-Vásquez, Patricia; Pell, Marc D; Paulmann, Silke; Sehm, Bernhard; Kotz, Sonja A
2016-02-01
Parkinson's disease (PD) affects patients beyond the motor domain. According to previous evidence, one mechanism that may be impaired in the disease is face processing. However, few studies have investigated this process at the neural level in PD. Moreover, research using dynamic facial displays rather than static pictures is scarce, but highly warranted due to the higher ecological validity of dynamic stimuli. In the present study we aimed to investigate how PD patients process emotional and non-emotional dynamic face stimuli at the neural level using event-related potentials. Since the literature has revealed a predominantly right-lateralized network for dynamic face processing, we divided the group into patients with left (LPD) and right (RPD) motor symptom onset (right versus left cerebral hemisphere predominantly affected, respectively). Participants watched short video clips of happy, angry, and neutral expressions and engaged in a shallow gender decision task in order to avoid confounds of task difficulty in the data. In line with our expectations, the LPD group showed significant face processing deficits compared to controls. While there were no group differences in early, sensory-driven processing (fronto-central N1 and posterior P1), the vertex positive potential, which is considered the fronto-central counterpart of the face-specific posterior N170 component, had a reduced amplitude and delayed latency in the LPD group. This may indicate disturbances of structural face processing in LPD. Furthermore, the effect was independent of the emotional content of the videos. In contrast, static facial identity recognition performance in LPD was not significantly different from controls, and comprehensive testing of cognitive functions did not reveal any deficits in this group. We therefore conclude that PD, and more specifically the predominant right-hemispheric affection in left-onset PD, is associated with impaired processing of dynamic facial expressions
Context dependence of spectro-temporal receptive fields with implications for neural coding.
Eggermont, Jos J
2011-01-01
The spectro-temporal receptive field (STRF) is frequently used to characterize the linear frequency-time filter properties of the auditory system up to the neuron recorded from. STRFs are extremely stimulus dependent, reflecting the strong non-linearities in the auditory system. Changes in the STRF with stimulus type (tonal, noise-like, vocalizations), sound level and spectro-temporal sound density are reviewed here. Effects on STRF shape of task and attention are also briefly reviewed. Models to account for these changes, potential improvements to STRF analysis, and implications for neural coding are discussed. PMID:20123121
The Emergent Executive: A Dynamic Field Theory of the Development of Executive Function
Buss, Aaron T.; Spencer, John P.
2015-01-01
A dynamic neural field (DNF) model is presented which provides a process-based account of behavior and developmental change in a key task used to probe the early development of executive function—the Dimensional Change Card Sort (DCCS) task. In the DCCS, children must flexibly switch from sorting cards either by shape or color to sorting by the other dimension. Typically, 3-year-olds, but not 4-year-olds, lack the flexibility to do so and perseverate on the first set of rules when instructed to switch. In the DNF model, rule-use and behavioral flexibility come about through a form of dimensional attention which modulates activity within different cortical fields tuned to specific feature dimensions. In particular, we capture developmental change by increasing the strength of excitatory and inhibitory neural interactions in the dimensional attention system as well as refining the connectivity between this system and the feature-specific cortical fields. Note that although this enables the model to effectively switch tasks, the dimensional attention system does not ‘know’ the details of task-specific performance. Rather, correct performance emerges as a property of system-wide neural interactions. We show how this captures children's behavior in quantitative detail across 12 versions of the DCCS task. Moreover, we successfully test a set of novel predictions with 3-year-old children from a version of the task not explained by other theories. PMID:24818836
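The core DNF mechanism, a field whose lateral excitation and inhibition let a localized activation peak sustain itself after the input is removed, can be sketched in one dimension. All parameter values below are assumptions chosen for illustration, not the fitted values of the DCCS model.

```python
import numpy as np

# Minimal 1-D dynamic neural field (Amari-type) with a Mexican-hat
# interaction kernel: local excitation, broader inhibition.
n = 101
x = np.linspace(-10.0, 10.0, n)
dx = x[1] - x[0]

def kernel(d):
    return 2.0 * np.exp(-d**2 / 2.0) - 0.5 * np.exp(-d**2 / 18.0)

W = kernel(x[:, None] - x[None, :])
u = -np.ones(n)                              # field at resting level h = -1
stim = 3.0 * np.exp(-x**2 / 2.0)             # transient localized input

dt, tau = 0.05, 1.0
for step in range(400):
    f = 1.0 / (1.0 + np.exp(-4.0 * u))       # sigmoidal firing rate
    inp = stim if step < 100 else 0.0        # input removed after 100 steps
    u += dt / tau * (-u - 1.0 + W @ f * dx + inp)

# a self-sustained activation peak survives removal of the input,
# while the rest of the field stays below threshold
print(u.max() > 0, u.min() < 0)
```

Such self-sustained peaks are the model's working-memory substrate; in the full DCCS account, the dimensional-attention system modulates which feature field can form and hold these peaks.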
Chapin, Heather; Jantzen, Kelly; Scott Kelso, J. A.; Steinberg, Fred; Large, Edward
2010-01-01
Apart from its natural relevance to cognition, music provides a window into the intimate relationships between production, perception, experience, and emotion. Here, emotional responses and neural activity were observed as they evolved together with stimulus parameters over several minutes. Participants listened to a skilled music performance that included the natural fluctuations in timing and sound intensity that musicians use to evoke emotional responses. A mechanical performance of the same piece served as a control. Before and after fMRI scanning, participants reported real-time emotional responses on a 2-dimensional rating scale (arousal and valence) as they listened to each performance. During fMRI scanning, participants listened without reporting emotional responses. Limbic and paralimbic brain areas responded to the expressive dynamics of human music performance, and both emotion and reward related activations during music listening were dependent upon musical training. Moreover, dynamic changes in timing predicted ratings of emotional arousal, as well as real-time changes in neural activity. BOLD signal changes correlated with expressive timing fluctuations in cortical and subcortical motor areas consistent with pulse perception, and in a network consistent with the human mirror neuron system. These findings show that expressive music performance evokes emotion and reward related neural activations, and that music's affective impact on the brains of listeners is altered by musical training. Our observations are consistent with the idea that music performance evokes an emotional response through a form of empathy that is based, at least in part, on the perception of movement and on violations of pulse-based temporal expectancies. PMID:21179549
NASA Astrophysics Data System (ADS)
Liu, Derong; Huang, Yuzhu; Wang, Ding; Wei, Qinglai
2013-09-01
In this paper, an observer-based optimal control scheme is developed for unknown nonlinear systems using adaptive dynamic programming (ADP) algorithm. First, a neural-network (NN) observer is designed to estimate system states. Then, based on the observed states, a neuro-controller is constructed via ADP method to obtain the optimal control. In this design, two NN structures are used: a three-layer NN is used to construct the observer which can be applied to systems with higher degrees of nonlinearity and without a priori knowledge of system dynamics, and a critic NN is employed to approximate the value function. The optimal control law is computed using the critic NN and the observer NN. Uniform ultimate boundedness of the closed-loop system is guaranteed. The actor, critic, and observer structures are all implemented in real-time, continuously and simultaneously. Finally, simulation results are presented to demonstrate the effectiveness of the proposed control scheme.
Flaw sizing method based on ultrasonic dynamic thresholds and neural network
NASA Astrophysics Data System (ADS)
Song, Yongfeng; Wang, Yiling; Ni, Peijun; Qiao, Ridong; Li, Xiongbing
2016-02-01
A dynamic threshold method for ultrasonic C-scan imaging is developed to improve the performance of flaw sizing. Reference test blocks with flat-bottom-hole flaws of different depths and sizes are used for ultrasonic C-scan imaging. After preprocessing, flaw regions are separated from the C-scan image, and the flaws are sized roughly by the 6-dB-drop method. Based on the real size of the flat-bottom holes, an enumeration method is used to find the optimal threshold for each flaw. A radial basis function (RBF) neural network is then trained on the combination of the amplitude and depth of the flaw echo, the rough flaw size, and the optimal threshold. Finally, the C-scan image can be reconstructed according to the dynamic threshold generated by the trained RBF network. The experimental results show that the presented method has better performance and is ideally suited for automatic analysis of ultrasonic C-scan images.
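The 6-dB-drop rule used for the rough sizing step can be sketched on a synthetic amplitude profile. The Gaussian profile, flaw position, and dimensions below are illustrative assumptions, not measured data.

```python
import numpy as np

# 6-dB-drop sizing: the flaw boundary is taken where the echo
# amplitude falls to half (-6 dB) of its peak value.
x = np.linspace(0.0, 20.0, 2001)       # scan axis, mm
true_radius = 3.0
# synthetic C-scan amplitude profile over a flaw centered at x = 10 mm
amp = np.exp(-((x - 10.0) / true_radius) ** 2)

threshold = amp.max() / 2.0            # -6 dB = half amplitude
inside = x[amp >= threshold]
size_6db = inside.max() - inside.min()
print("estimated flaw size (mm):", round(size_6db, 2))
```

A fixed -6 dB threshold systematically mis-sizes flaws whose echo amplitude or depth differs from the calibration case, which is exactly the bias the trained network's flaw-specific dynamic threshold is meant to correct.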
Complex dynamics of a delayed discrete neural network of two nonidentical neurons
Chen, Yuanlong; Huang, Tingwen; Huang, Yu
2014-03-15
In this paper, we discover that a delayed discrete Hopfield neural network of two nonidentical neurons, with or without self-connections, can demonstrate chaotic behavior. To this end, we first transform the model, in a novel way, into an equivalent system with some interesting properties. Then, we identify the chaotic invariant set for this system and show that the dynamics of the system within this set is topologically conjugate to the dynamics of the full shift map with two symbols. This confirms chaos in the sense of Devaney. Our main results generalize the relevant results of Huang and Zou [J. Nonlinear Sci. 15, 291–303 (2005)], Kaslik and Balint [J. Nonlinear Sci. 18, 415–432 (2008)], and Chen et al. [Sci. China Math. 56(9), 1869–1878 (2013)]. We also give some numerical simulations to verify our theoretical results.
Effects of neuronal loss in the dynamic model of neural networks
NASA Astrophysics Data System (ADS)
Yoon, B.-G.; Choi, J.; Choi, M. Y.
2008-09-01
We study the phase transitions and dynamic behavior of the dynamic model of neural networks, with an emphasis on the effects of neuronal loss due to external stress. In the absence of loss the overall results obtained numerically are found to agree excellently with the theoretical ones. When the external stress is turned on, some neurons may deteriorate and die; such loss of neurons, in general, weakens the memory in the system. As the loss increases beyond a critical value, the order parameter measuring the strength of memory decreases to zero either continuously or discontinuously, namely, the system loses its memory via a second- or a first-order transition, depending on the ratio of the refractory period to the duration of action potential.
Investigation of neural-net based control strategies for improved power system dynamic performance
Sobajic, D.J.
1995-12-31
The ability to accurately predict the behavior of a dynamic system is of essential importance in the monitoring and control of complex processes. In this regard, recent advances in neural-net based system identification represent a significant step toward the development and design of a new generation of control tools for increased system performance and reliability. The enabling functionality is the accurate representation of a model of a nonlinear and nonstationary dynamic system. This functionality provides valuable new opportunities, including: (1) the ability to predict future system behavior on the basis of actual system observations; (2) on-line evaluation and display of system performance, and the design of early warning systems; and (3) controller optimization for improved system performance. In this presentation, we discuss the issues involved in the definition and design of learning control systems and their impact on power system control. Several numerical examples are provided for illustrative purposes.
Wide field fluorescent imaging of extracellular spatiotemporal potassium dynamics in vivo.
Bazzigaluppi, Paolo; Dufour, Suzie; Carlen, Peter L
2015-01-01
Potassium homeostasis is fundamental for the physiological functioning of the brain. Increased [K(+)] in the extracellular fluid has a major impact on neuronal physiology and can lead to ictal events. Compromised regulation of extracellular [K(+)] is involved in generation of seizures in animal models and potentially also in humans. For this reason, the investigation of K(+) spatio-temporal dynamics is of fundamental importance for neuroscientists in the field of epilepsy and other related pathologies. To date, the majority of studies investigating changes in extracellular K(+) have been conducted using a micropipette filled with a K(+) sensitive solution. However, this approach presents a major limitation: the area of the measurement is circumscribed to the tip of the pipette and it is not possible to know the spatiotemporal distribution or origin of the focally measured K(+) signal. Here we propose a novel approach, based on wide field fluorescence, to measure extracellular K(+) dynamics in neural tissue. Recording the local field potential from the somatosensory cortex of the mouse, we compared responses obtained from a K(+)-sensitive microelectrode to the spatiotemporal increases in fluorescence of the fluorophore, Asante Potassium Green-2, in physiological conditions and during 4-AP induced ictal activity. We conclude that wide field imaging is a valuable and versatile tool to measure K(+) dynamics over a large area of the cerebral cortex and is capable of capturing fast dynamics such as during ictal events. Moreover, the present technique is potentially adaptable to address questions regarding spatiotemporal dynamics of other ionic species. PMID:25312775
On Parsing the Neural Code in the Prefrontal Cortex of Primates using Principal Dynamic Modes
Marmarelis, V. Z.; Shin, D. C.; Song, D.; Hampson, R. E.; Deadwyler, S. A.; Berger, T. W.
2013-01-01
Nonlinear modeling of multi-input multi-output (MIMO) neuronal systems using Principal Dynamic Modes (PDMs) provides a novel method for analyzing the functional connectivity between neuronal groups. This paper presents the PDM-based modeling methodology and initial results from actual multi-unit recordings in the prefrontal cortex of non-human primates. We used the PDMs to analyze the dynamic transformations of spike train activity from Layer 2 (input) to Layer 5 (output) of the prefrontal cortex in primates performing a Delayed-Match-to-Sample task. The PDM-based models reduce the complexity of representing large-scale neural MIMO systems that involve large numbers of neurons, and also offer the prospect of improved biological/physiological interpretation of the obtained models. PDM analysis of neuronal connectivity in this system revealed "input-output channels of communication" corresponding to specific bands of neural rhythms that quantify the relative importance of these frequency-specific PDMs across a variety of different tasks. We found that behavioral performance during the Delayed-Match-to-Sample task (correct vs. incorrect outcome) was associated with differential activation of frequency-specific PDMs in the prefrontal cortex. PMID:23929124
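The core of the PDM idea is that the many input-output kernels of a MIMO system share a small set of dynamic modes. A hedged first-order sketch (the actual method expands Volterra kernels on Laguerre bases; here plain FIR kernels are estimated by least squares on synthetic data and a common mode basis is recovered by SVD):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic MIMO data: 3 inputs and 2 outputs, each output driven through
# linear kernels that are all combinations of 2 shared underlying modes
M, n_in, n_out, T = 40, 3, 2, 2000
U = rng.normal(size=(T, n_in))
lags = np.arange(M)
true_modes = np.stack([np.exp(-lags / 5.0),
                       np.exp(-lags / 15.0) * np.cos(lags / 3.0)])
C = rng.normal(size=(n_out, n_in, 2))          # random mixing coefficients
Y = np.zeros((T, n_out))
for o in range(n_out):
    for i in range(n_in):
        k = C[o, i] @ true_modes               # kernel = combo of the modes
        Y[:, o] += np.convolve(U[:, i], k)[:T]

def lagged(u, M):
    """Design matrix of lagged copies of u for FIR least squares."""
    X = np.zeros((len(u), M))
    for m in range(M):
        X[m:, m] = u[:len(u) - m]
    return X

# Least-squares first-order kernel estimate for every input-output pair
kernels = [np.linalg.lstsq(lagged(U[:, i], M), Y[:, o], rcond=None)[0]
           for o in range(n_out) for i in range(n_in)]
K = np.array(kernels)                          # shape (n_out*n_in, M)

# SVD of the kernel bank: the leading right singular vectors are the
# global dynamic modes shared across all input-output pairs
_, s, Vt = np.linalg.svd(K, full_matrices=False)
pdms = Vt[:2]                                  # two dominant candidate PDMs
```

The singular value spectrum of the kernel bank quantifies how few modes suffice, which is exactly the complexity reduction the abstract refers to.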
Micromotion-induced dynamic effects from a neural probe and brain tissue interface
NASA Astrophysics Data System (ADS)
Polanco, Michael; Yoon, Hargsoon; Bawab, Sebastian
2014-04-01
Neural probes have the potential to cause injury to surrounding neural cells, owing to a discrepancy in stiffness between the probes and the surrounding brain tissue, when subjected to mechanical micromotion of the brain. To evaluate the effects of this mechanical mismatch, a series of dynamic simulations is conducted to better understand the design enhancements required to improve the feasibility of the neural probe. The simulations use a nonlinear transient explicit finite element code, LS-DYNA. A three-dimensional quarter-symmetry finite element model is utilized for the transient analysis to capture the time-dependent dynamic deformations of the brain tissue from the implant as a function of different frequency shapes and stiffness values. When micromotion-induced pulses are applied, reducing the neural probe stiffness by three orders of magnitude yields up to a 41.6% reduction in stress and a 39.1% reduction in strain. The simulation conditions assume a case where sheath bonding has begun to take place around the probe implantation site, but no full bond to the probe has occurred. The analyses can provide guidance on the materials necessary to design a probe for injury reduction.
Hoppensteadt, F C; Izhikevich, E M
1996-08-01
This is the second of two articles devoted to analyzing the relationship between synaptic organizations (anatomy) and dynamical properties (function) of networks of neural oscillators near multiple supercritical Andronov-Hopf bifurcation points. Here we analyze learning processes in such networks. Regarding learning dynamics, we assume (1) learning is local (i.e. synaptic modification depends on pre- and postsynaptic neurons but not on others), (2) synapses modify slowly relative to characteristic neuron response times, (3) in the absence of either pre- or postsynaptic activity, the synapse weakens (forgets). Our major goal is to analyze all synaptic organizations of oscillatory neural networks that can memorize and retrieve phase information or time delays. We show that such networks have the following attributes: (1) the rate of synaptic plasticity connected with learning is determined locally by the presynaptic neurons, (2) the excitatory neurons must be long-axon relay neurons capable of forming distant connections with other excitatory and inhibitory neurons, (3) if inhibitory neurons have long axons, then the network can learn, passively forget and actively unlearn information by adjusting synaptic plasticity rates. PMID:8855351
Brain Dynamics in Predicting Driving Fatigue Using a Recurrent Self-Evolving Fuzzy Neural Network.
Liu, Yu-Ting; Lin, Yang-Yin; Wu, Shang-Lin; Chuang, Chun-Hsiang; Lin, Chin-Teng
2016-02-01
This paper proposes a generalized prediction system called a recurrent self-evolving fuzzy neural network (RSEFNN) that employs an on-line gradient descent learning rule to address the electroencephalography (EEG) regression problem in brain dynamics for driving fatigue. The cognitive states of drivers significantly affect driving safety; in particular, fatigue driving, or drowsy driving, endangers both the individual and the public. For this reason, the development of brain-computer interfaces (BCIs) that can identify drowsy driving states is a crucial and urgent topic of study. Many EEG-based BCIs have been developed as artificial auxiliary systems for use in various practical applications because of the benefits of measuring EEG signals. In the literature, the efficacy of EEG-based BCIs in recognition tasks has been limited by low resolutions. The system proposed in this paper represents the first attempt to use the recurrent fuzzy neural network (RFNN) architecture to increase adaptability in realistic EEG applications to overcome this bottleneck. This paper further analyzes brain dynamics in a simulated car driving task in a virtual-reality environment. The proposed RSEFNN model is evaluated using the generalized cross-subject approach, and the results indicate that the RSEFNN is superior to competing models regardless of the use of recurrent or nonrecurrent structures. PMID:26595929
NASA Astrophysics Data System (ADS)
Sun, W.; Chiang, Y.; Chang, F.
2010-12-01
Evaporation is a substantial factor in the hydrological cycle, and an important reference for the management of both water resources and agricultural irrigation. In general, evaporation can be directly measured by an evaporation pan. As for its estimation, traditional empirical equations are not very accurate. Therefore, in this study Dynamic Factor Analysis (DFA) is first applied to investigate the interaction and the tendency of each gauging station. Additionally, the analysis can effectively establish the common trend at each gauging station by evaluating the corresponding AIC (Akaike Information Criterion) values. Furthermore, meteorological factors such as relative humidity and temperature are also analyzed to identify the explanatory variables most strongly related to evaporation. These variables are further used as inputs to a Back-Propagation Neural Network (BPNN) and are expected to provide meaningful information for successfully estimating evaporation. The applicability and reliability of the BPNN were demonstrated by comparing its performance with that of empirical formulas. Keywords: Evaporation, Dynamic Factor Analysis, Artificial Neural Network.
Lewis, Ashley G; Bastiaansen, Marcel
2015-07-01
There is a growing literature investigating the relationship between oscillatory neural dynamics measured using electroencephalography (EEG) and/or magnetoencephalography (MEG), and sentence-level language comprehension. Recent proposals have suggested a strong link between predictive coding accounts of the hierarchical flow of information in the brain, and oscillatory neural dynamics in the beta and gamma frequency ranges. We propose that findings relating beta and gamma oscillations to sentence-level language comprehension might be unified under such a predictive coding account. Our suggestion is that oscillatory activity in the beta frequency range may reflect both the active maintenance of the current network configuration responsible for representing the sentence-level meaning under construction, and the top-down propagation of predictions to hierarchically lower processing levels based on that representation. In addition, we suggest that oscillatory activity in the low and middle gamma range reflects the matching of top-down predictions with bottom-up linguistic input, while evoked high gamma might reflect the propagation of bottom-up prediction errors to higher levels of the processing hierarchy. We also discuss some of the implications of this predictive coding framework, and we outline ideas for how these might be tested experimentally. PMID:25840879
Dynamic neural networks based on-line identification and control of high performance motor drives
NASA Technical Reports Server (NTRS)
Rubaai, Ahmed; Kotaru, Raj
1995-01-01
In the automated and high-tech industries of the future, there will be a need for high performance motor drives both in the low-power range and in the high-power range. To meet the very stringent demands of tracking and regulation in the two quadrants of operation, advanced control technologies are of considerable interest and need to be developed. In response, a dynamic learning control architecture is developed with simultaneous on-line identification and control. The feature of the proposed approach, namely the efficient combination of the dual tasks of system identification (learning) and adaptive control of nonlinear motor drives into a single operation, is presented. This approach, therefore, not only adapts to uncertainties of the dynamic parameters of the motor drives but also learns about their inherent nonlinearities. In fact, most of the neural-network-based adaptive control approaches in use have an identification phase entirely separate from the control phase. Because these approaches separate the identification and control modes, it is not possible to cope with dynamic changes in a controlled process. Extensive simulation studies have been conducted and good performance was observed. The ability of the neuro-controllers to perform robustly in a noisy environment is also demonstrated. With this initial success, the principal investigator believes that the proposed approach with the suggested neural structure can be used successfully for the control of high performance motor drives. Two identification and control topologies based on the model reference adaptive control technique are used in the present analysis. No prior knowledge of load dynamics is assumed in either topology, while the second topology also assumes no knowledge of the motor parameters.
Laws of Large Numbers and Langevin Approximations for Stochastic Neural Field Equations
2013-01-01
In this study, we consider limit theorems for microscopic stochastic models of neural fields. We show that the Wilson–Cowan equation can be obtained as the limit in uniform convergence on compacts in probability for a sequence of microscopic models when the number of neuron populations distributed in space and the number of neurons per population tend to infinity. This result also allows one to obtain limits for qualitatively different stochastic convergence concepts, e.g., convergence in the mean. Further, we present a central limit theorem for the martingale part of the microscopic models which, suitably re-scaled, converges to a centred Gaussian process with independent increments. These two results provide the basis for presenting the neural field Langevin equation, a stochastic differential equation taking values in a Hilbert space, which is the infinite-dimensional analogue of the chemical Langevin equation in the present setting. On a technical level, we apply recently developed law of large numbers and central limit theorems for piecewise deterministic processes taking values in Hilbert spaces to a master equation formulation of stochastic neuronal network models. These theorems are valid for processes taking values in Hilbert spaces, and are thereby able to incorporate spatial structures of the underlying model. Mathematics Subject Classification (2000): 60F05, 60J25, 60J75, 92C20. PMID:23343328
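For orientation, the equations involved can be written schematically (symbols and scalings here are generic; the paper's precise formulation should be consulted). The deterministic Wilson–Cowan neural field equation is

```latex
\tau\,\partial_t u(x,t) = -u(x,t) + \int_{\Omega} w(x,y)\, f\bigl(u(y,t)\bigr)\, dy ,
```

where $w$ is the connectivity kernel and $f$ a sigmoidal gain function. The neural field Langevin equation adds a state-dependent noise term that vanishes in the large-$N$ limit:

```latex
dU_t = \Bigl[-U_t + \int_{\Omega} w(\cdot,y)\, f\bigl(U_t(y)\bigr)\, dy\Bigr]\, dt
       + \frac{1}{\sqrt{N}}\,\sigma(U_t)\, dW_t ,
```

with $W_t$ a cylindrical Wiener process on the Hilbert space and $\sigma(U)$ the diffusion coefficient inherited from the master equation.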
Wilson, M T; Fung, P K; Robinson, P A; Shemmell, J; Reynolds, J N J
2016-08-01
The calcium dependent plasticity (CaDP) approach to the modeling of synaptic weight change is applied using a neural field approach to realistic repetitive transcranial magnetic stimulation (rTMS) protocols. A spatially-symmetric nonlinear neural field model consisting of populations of excitatory and inhibitory neurons is used. The plasticity between excitatory cell populations is then evaluated using a CaDP approach that incorporates metaplasticity. The direction and size of the plasticity (potentiation or depression) depend on both the amplitude of stimulation and the duration of the protocol. The breaks in the inhibitory theta-burst stimulation protocol are crucial to ensuring that the stimulation bursts are potentiating in nature. Tuning the parameters of a spike-timing dependent plasticity (STDP) window with a Monte Carlo approach to maximize agreement between STDP predictions and the CaDP results reproduces a realistically-shaped window with two regions of depression in agreement with the existing literature. Developing an understanding of how TMS interacts with cells at the network level may be important for future investigations. PMID:27259518
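As a point of reference for the window-fitting step, the classical double-exponential STDP window is sketched below. Note this is the textbook form with illustrative parameter values, not the paper's fitted window (which has two depression regions):

```python
import numpy as np

# Classical double-exponential STDP window (generic sketch; amplitudes and
# time constants are illustrative literature-style values, not fitted ones)
A_plus, A_minus = 1.0, 0.5
tau_plus, tau_minus = 17.0, 34.0   # ms

def stdp_window(dt):
    """Weight change for spike lag dt = t_post - t_pre (ms):
    potentiation when the presynaptic spike leads, depression otherwise."""
    dt = np.asarray(dt, dtype=float)
    return np.where(dt >= 0,
                    A_plus * np.exp(-dt / tau_plus),
                    -A_minus * np.exp(dt / tau_minus))

def total_weight_change(pre_times, post_times):
    """Sum the window over all spike pairs (additive, all-to-all pairing)."""
    lags = np.subtract.outer(np.asarray(post_times), np.asarray(pre_times))
    return stdp_window(lags).sum()
```

A Monte Carlo fit as described in the abstract would perturb the window parameters and keep those that minimize the mismatch between `total_weight_change` predictions and the CaDP-derived weight changes for each rTMS protocol.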
Traveling pulses in a stochastic neural field model of direction selectivity.
Bressloff, Paul C; Wilkerson, Jeremy
2012-01-01
We analyze the effects of extrinsic noise on traveling pulses in a neural field model of direction selectivity. The model consists of a one-dimensional scalar neural field with an asymmetric weight distribution consisting of an offset Mexican hat function. We first show how, in the absence of any noise, the system supports spontaneously propagating traveling pulses that can lock to externally moving stimuli. Using a separation of time-scales and perturbation methods previously developed for stochastic reaction-diffusion equations, we then show how extrinsic noise in the activity variables leads to a diffusive-like displacement (wandering) of the wave from its uniformly translating position at long time-scales, and fluctuations in the wave profile around its instantaneous position at short time-scales. In the case of freely propagating pulses, the wandering is characterized by pure Brownian motion, whereas in the case of stimulus-locked pulses, it is given by an Ornstein-Uhlenbeck process. This establishes that stimulus-locked pulses are more robust to noise. PMID:23181018
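The long-time wandering statistics contrasted in this abstract reduce to simple one-dimensional effective SDEs for the pulse position. The sketch below (an illustrative reduction with arbitrary parameter values, not the full neural field simulation) shows the qualitative difference:

```python
import numpy as np

rng = np.random.default_rng(2)

# Effective models for the pulse position Delta(t):
#   free pulse:            dDelta = sqrt(2D) dW                (Brownian)
#   stimulus-locked pulse: dDelta = -k*Delta dt + sqrt(2D) dW  (OU process)
D, k, dt, T, trials = 0.1, 1.0, 0.01, 2000, 500

noise = rng.normal(0, np.sqrt(2 * D * dt), (trials, T))
free = np.cumsum(noise, axis=1)                 # Brownian wandering
locked = np.zeros((trials, T))
for t in range(1, T):                           # Euler-Maruyama for the OU case
    locked[:, t] = locked[:, t - 1] * (1 - k * dt) + noise[:, t]

# Variance grows linearly in time for the free pulse (2*D*t), but
# saturates at the stationary value D/k for the stimulus-locked pulse
var_free = free[:, -1].var()
var_locked = locked[:, -1].var()
```

The bounded variance of the locked pulse is exactly the sense in which stimulus-locked pulses are more robust to noise.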
Reconstructing magma reservoir dynamics from field evidence
NASA Astrophysics Data System (ADS)
Verberne, R.; Muntener, O.; Ulmer, P.
2013-12-01
Reconstructing the dynamics within magma reservoirs during and after emplacement greatly enhances our understanding of their formation and evolution. By determining the length and timescales over which magma remains mobile within magma reservoirs, fluxes of potentially extractable magma can be quantified, providing a link between plutonic and volcanic systems, and constraints on the likelihood of a pluton feeding volcanic eruptions. However, the general absence of marker beds and uncertainties regarding at which crystal fractions super-solidus foliation patterns are recorded make it difficult to reconstruct and quantify deformation inside plutons, especially the deformation that occurred at low crystal fractions. Here we present a case study of the Listino Ring Structure (LRS) of the Adamello Batholith in N-Italy, a 300-500 m-wide semi-circular zone of intensely foliated tonalite containing abundant evidence for magmatic deformation and magma mingling (Brack, 1984). The differences in the interaction between felsic and mafic magmas recorded in the form of mafic dikes, sheets and enclaves can be used to determine spatial and/or temporal differences of magma rheology during evolution of the reservoir. Detailed field mapping shows a clear difference in intrusion style between the southern and eastern sides of the LRS, as mafic magma intrudes into different felsic host magmas. An attempt is made to quantify these differences in terms of the physical state of the host magmas, using a variety of analyses pertaining to the breakup of mafic dikes into enclaves, the assimilation of phenocrysts from the host magma by the mafic magma, and the back-veining of mafic dikes and enclaves. The common component of these analyses is a parametrization of the phase petrology of the magmas as a function of temperature, which allows for the determination of melt fraction and composition at super-solidus conditions, from which physical properties such as density and viscosity can be
ERIC Educational Resources Information Center
Barca, Laura; Cornelissen, Piers; Simpson, Michael; Urooj, Uzma; Woods, Will; Ellis, Andrew W.
2011-01-01
Right-handed participants respond more quickly and more accurately to written words presented in the right visual field (RVF) than in the left visual field (LVF). Previous attempts to identify the neural basis of the RVF advantage have had limited success. Experiment 1 was a behavioral study of lateralized word naming which established that the…
Utilizing neural networks in magnetic media modeling and field computation: A review
Adly, Amr A.; Abd-El-Hafiz, Salwa K.
2013-01-01
Magnetic materials are considered crucial components for a wide range of products and devices. Usually, the complexity of such materials is defined by their permeability classification and their extent of coupling to non-magnetic properties. Hence, the development of models that can accurately simulate the complex nature of these materials becomes crucial for multi-dimensional field-media interaction computations. In the past few decades, artificial neural networks (ANNs) have been utilized in many applications to perform miscellaneous tasks such as identification, approximation, optimization, classification and forecasting. The purpose of this review article is to give an account of the utilization of ANNs in modeling as well as field computation involving complex magnetic materials. The most commonly used ANN types in magnetics, the advantages of their use, detailed implementation methodologies, and numerical examples are given in the paper. PMID:25685531
Boreland, B; Clement, G; Kunze, H
2015-08-01
After reviewing set selection and memory model dynamical system neural networks, we introduce a neural network model that combines set selection with partial memories (stored memories on subsets of states in the network). We establish that feasible equilibria with all states equal to ± 1 correspond to answers to a particular set theoretic problem. We show that KenKen puzzles can be formulated as a particular case of this set theoretic problem and use the neural network model to solve them; in addition, we use a similar approach to solve Sudoku. We illustrate the approach in examples. As a heuristic experiment, we use online or print resources to identify the difficulty of the puzzles and compare these difficulties to the number of iterations used by the appropriate neural network solver, finding a strong relationship. PMID:25984696
Takada, Ryu; Munetaka, Daigo; Kobayashi, Shoji; Suemitsu, Yoshikazu; Nara, Shigetoshi
2007-09-01
Chaotic dynamics in a recurrent neural network model and in two-dimensional cellular automata, where both have finite but large degrees of freedom, are investigated from the viewpoint of harnessing chaos and are applied to motion control to indicate that both have potential capabilities for complex function control by simple rule(s). An important point is that the chaotic dynamics generated in these two systems give us autonomous complex pattern dynamics itinerating through intermediate state points between embedded patterns (attractors) in high-dimensional state space. An application of these chaotic dynamics to complex controlling is proposed, based on the idea that with the use of simple adaptive switching between a weakly chaotic regime and a strongly chaotic regime, complex problems can be solved. As an actual example, a two-dimensional maze, whose spatial structure is a typical ill-posed problem, is solved with the use of chaos in both systems. Our computer simulations show that the success rate over 300 trials is much better than that obtained with a random number generator. Our functional simulations indicate that both systems are almost equivalent from the viewpoint of functional aspects based on our idea, the harnessing of chaos. PMID:19003512
Fuzzy Counter Propagation Neural Network Control for a Class of Nonlinear Dynamical Systems
Sakhre, Vandana; Jain, Sanjeev; Sapkal, Vilas S.; Agarwal, Dev P.
2015-01-01
A Fuzzy Counter Propagation Neural Network (FCPN) controller design is developed for a class of nonlinear dynamical systems. In this process, the weights connecting the instar and outstar, that is, the input-hidden and hidden-output layers, respectively, are adjusted by using Fuzzy Competitive Learning (FCL). The FCL paradigm adopts the principle of competitive learning to calculate the proposed Best Matched Node (BMN). This strategy offers robust control of nonlinear dynamical systems. FCPN is compared with existing networks such as the Dynamic Network (DN) and the Back Propagation Network (BPN) on the basis of Mean Absolute Error (MAE), Mean Square Error (MSE), Best Fit Rate (BFR), and so forth. The results show that the proposed FCPN gives better results than DN and BPN. The effectiveness of the proposed FCPN algorithms is demonstrated through simulations of four nonlinear dynamical systems and on multiple-input single-output (MISO) and single-input single-output (SISO) gas furnace Box-Jenkins time series data. PMID:26366169
dFasArt: dynamic neural processing in FasArt model.
Cano-Izquierdo, Jose-Manuel; Almonacid, Miguel; Pinzolas, Miguel; Ibarrola, Julio
2009-05-01
The temporal character of the input is generally not taken into account in neural models. This paper presents an extension of the FasArt model focused on the treatment of temporal signals. The FasArt model is proposed as an integration of the characteristic elements of Fuzzy System Theory in an ART architecture. A duality between the activation concept and the membership function is established. FasArt maintains the structure of the Fuzzy ARTMAP architecture, implying a static character, since the dynamic response of the input is not considered. The proposed novel model, dynamic FasArt (dFasArt), uses dynamic equations for the processing stages of FasArt: activation, matching and learning. The new formulation of dFasArt includes time as another characteristic of the input. This allows the activation of the units to have a history-dependent character instead of being only a function of the last input value. Therefore, the dFasArt model is robust to spurious values and noisy inputs. As experimental work, several cases have been used to check the robustness of dFasArt. A possible application has been proposed for the detection of variations in the system dynamics. PMID:19128936
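The robustness to spurious values follows from the history-dependent activation. A minimal sketch of the leaky-integrator idea (the actual dFasArt equations govern activation, matching and learning jointly; tau and the input here are arbitrary illustrations):

```python
import numpy as np

# Leaky-integrator ("dynamic") activation: tau * da/dt = -a + s(t),
# discretized with Euler steps. A static unit would track s(t) directly.
tau, dt = 0.2, 0.01

def dynamic_activation(signal):
    a = np.zeros(len(signal))
    for t in range(1, len(signal)):
        a[t] = a[t - 1] + (dt / tau) * (signal[t] - a[t - 1])
    return a

# Constant input corrupted by a single spurious outlier sample
s = np.ones(200)
s[100] = 10.0

a = dynamic_activation(s)
```

A static activation would jump to 10 on the outlier; the dynamic unit barely deflects and relaxes back, which is the history-dependent robustness the abstract describes.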
Park, Gibeom; Tani, Jun
2015-12-01
The current study presents neurorobotics experiments on the acquisition of skills for "communicable congruence" with humans via learning. A multiple-timescale recurrent neural network (MTRNN), a dynamic neural network model characterized by its multiple-timescale dynamics, was utilized as a neuromorphic model for controlling a humanoid robot. In the experimental task, the humanoid robot was trained to generate specific sequential movement patterns in response to various sequences of imperative gesture patterns demonstrated by the human subjects, following predefined compositional semantic rules. The experimental results showed that (1) the adopted MTRNN can achieve generalization by learning in the lower feature perception level by using a limited set of tutoring patterns, (2) the MTRNN can learn to extract compositional semantic rules with generalization in its higher level characterized by slow timescale dynamics, (3) the MTRNN can develop another type of cognitive capability for controlling the internal contextual processes as situated to on-going task sequences without being provided with cues explicitly indicating task segmentation points. The analysis of the dynamic properties developed in the MTRNN via learning indicated that the aforementioned cognitive mechanisms were achieved by self-organization of an adequate functional hierarchy, utilizing the constraint of the multiple-timescale property and the topological connectivity imposed on the network configuration. These results could contribute to the development of socially intelligent robots endowed with cognitive and communicative competency similar to that of humans. PMID:26498195
Radial Basis Function Based Neural Network for Motion Detection in Dynamic Scenes.
Huang, Shih-Chia; Do, Ben-Hsiang
2014-01-01
Motion detection, the process which segments moving objects in video streams, is the first critical process and plays an important role in video surveillance systems. Dynamic scenes are commonly encountered in both indoor and outdoor situations and contain objects such as swaying trees, spouting fountains, rippling water, moving curtains, and so on. However, complete and accurate motion detection in dynamic scenes is often a challenging task. This paper presents a novel motion detection approach based on radial basis function artificial neural networks to accurately detect moving objects not only in dynamic scenes but also in static scenes. The proposed method involves two important modules: a multibackground generation module and a moving object detection module. The multibackground generation module effectively generates a flexible probabilistic model through an unsupervised learning process to fulfill the property of either dynamic background or static background. Next, the moving object detection module achieves complete and accurate detection of moving objects by only processing blocks that are highly likely to contain moving objects. This is accomplished by two procedures: the block alarm procedure and the object extraction procedure. The detection results of our method were evaluated by qualitative and quantitative comparisons with other state-of-the-art methods based on a wide range of natural video sequences. The overall results show that the proposed method substantially outperforms existing methods with Similarity and F1 accuracy rates of 69.37% and 65.50%, respectively. PMID:24108721
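The multibackground idea, several background candidates per pixel so that dynamic backgrounds such as swaying trees are not flagged as motion, can be illustrated with a hypothetical radial-basis-function test. This is a drastic simplification of the paper's learned probabilistic model; `sigma`, `threshold`, and the candidate values are assumptions:

```python
import numpy as np

sigma = 10.0       # RBF width in intensity units (assumed)
threshold = 0.5    # minimum basis response to count as background (assumed)

def is_background(pixel, candidates):
    """Pixel is background if any Gaussian basis function centered on a
    stored background candidate responds strongly to its intensity."""
    d2 = (pixel - np.asarray(candidates)) ** 2
    responses = np.exp(-d2 / (2 * sigma ** 2))
    return bool(responses.max() >= threshold)

# A dynamic-background pixel (e.g. a swaying tree) alternates between modes
candidates = [60.0, 140.0]
```

Intensities near either stored mode are accepted as background, while an intensity far from both (a genuine moving object) is flagged as foreground.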
Topological phases of lattice bosons with a dynamical gauge field
NASA Astrophysics Data System (ADS)
Raventós, David; Graß, Tobias; Juliá-Díaz, Bruno; Santos, Luis; Lewenstein, Maciej
2016-03-01
Optical lattices with a complex-valued tunneling term have become a standard way of studying gauge-field physics with cold atoms. If the complex phase of the tunneling is made density dependent, such a system even features a self-interacting, or dynamical, magnetic field. In this paper we study the scenario of a few bosons in either a static or a dynamical gauge field by means of exact diagonalization. The topological structures are identified by computing their Chern number. Upon decreasing the atom-atom contact interaction, the effect of the dynamical gauge field is enhanced, giving rise to a phase transition between two topologically nontrivial phases.
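For a many-body ground state obtained from exact diagonalization, the Chern number is conventionally computed over twisted boundary conditions $(\theta_x,\theta_y)$ (the standard Niu-Thouless-Wu construction; the abstract does not state which formulation the authors use):

```latex
C = \frac{1}{2\pi i} \int_{0}^{2\pi}\!\!\int_{0}^{2\pi}
    d\theta_x\, d\theta_y\,
    \Bigl( \bigl\langle \partial_{\theta_x}\Psi \big| \partial_{\theta_y}\Psi \bigr\rangle
         - \bigl\langle \partial_{\theta_y}\Psi \big| \partial_{\theta_x}\Psi \bigr\rangle \Bigr),
```

where $\Psi(\theta_x,\theta_y)$ is the ground state at each twist. In practice the integral is evaluated on a discrete grid of twist angles using gauge-invariant link variables.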
Unified description of the dynamics of quintessential scalar fields
Ureña-López, L. Arturo
2012-03-01
Using the dynamical system approach, we describe the general dynamics of cosmological scalar fields in terms of critical points and heteroclinic lines. It is found that critical points describe the initial and final states of the scalar field dynamics, but that heteroclinic lines give a more complete description of the evolution in between the critical points. In particular, the heteroclinic line that departs from the (saddle) critical point of perfect fluid-domination is the representative path in phase space of quintessence fields that may be viable dark energy candidates. We also discuss the attractor properties of the heteroclinic lines, and their importance for the description of thawing and freezing fields.
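The dynamical system referred to here is, in its best-known form (for an exponential potential $V = V_0 e^{-\lambda\kappa\phi}$ and a barotropic background fluid with $p_b = (\gamma - 1)\rho_b$; the paper treats more general cases), written in the expansion-normalized variables

```latex
x \equiv \frac{\kappa\,\dot\phi}{\sqrt{6}\,H}, \qquad
y \equiv \frac{\kappa\sqrt{V}}{\sqrt{3}\,H},
```

which obey, with a prime denoting $d/dN$ and $N = \ln a$,

```latex
x' = -3x + \frac{\sqrt{6}}{2}\lambda y^2
     + \frac{3}{2}x\left[2x^2 + \gamma\left(1 - x^2 - y^2\right)\right],
\qquad
y' = -\frac{\sqrt{6}}{2}\lambda x y
     + \frac{3}{2}y\left[2x^2 + \gamma\left(1 - x^2 - y^2\right)\right].
```

The critical points of this system are the fixed points $(x',y') = (0,0)$, and the heteroclinic lines discussed in the abstract are the phase-space trajectories connecting them, e.g. the path leaving the saddle point of fluid domination $(x,y) = (0,0)$.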
Muroski, Megan E; Morshed, Ramin A; Cheng, Yu; Vemulkar, Tarun; Mansell, Rhodri; Han, Yu; Zhang, Lingjiao; Aboody, Karen S; Cowburn, Russell P; Lesniak, Maciej S
2016-01-01
Stem cells have recently garnered attention as drug and particle carriers to sites of tumors, due to their natural ability to track to the site of interest. Specifically, neural stem cells (NSCs) have been demonstrated to be promising candidates for delivering therapeutics to malignant glioma, a primary brain tumor that is not curable by current treatments and is inevitably fatal. In this article, we demonstrate that NSCs are able to internalize 2 μm magnetic discs (SD) without affecting the health of the cells. The SD can then be remotely triggered in an applied 1 T rotating magnetic field to deliver a payload. Furthermore, we use this NSC-SD delivery system to deliver the SD themselves as a therapeutic agent to mechanically destroy glioma cells. NSCs were incubated with the SD overnight before treatment with a 1 T rotating magnetic field to trigger the SD release. The potential timed release effects of the magnetic particles were tested with migration assays, confocal microscopy and immunohistochemistry for apoptosis. After the magnetic field triggered SD release, glioma cells were added and allowed to internalize the particles. Once internalized, another dose of the magnetic field treatment was administered to trigger mechanically induced apoptotic cell death of the glioma cells by the rotating SD. We are able to determine that NSC-SD and magnetic field treatment can achieve over 50% glioma cell death when loaded at 50 SD/cell, making this a promising therapeutic for the treatment of glioma. PMID:26734932
Goodson, James L.; Kabelik, David
2009-01-01
Vertebrate animals exhibit a spectacular diversity of social behaviors, yet a variety of basic social behavior processes are essential to all species. These include social signaling; discrimination of conspecifics and sexual partners; appetitive and consummatory sexual behaviors; aggression and dominance behaviors; and parental behaviors (the latter with rare exceptions). These behaviors are of fundamental importance and are regulated by an evolutionarily conserved, core social behavior network (SBN) of the limbic forebrain and midbrain. The SBN encodes social information in a highly dynamic, distributed manner, such that behavior is most strongly linked to the pattern of neural activity across the SBN, not the activity of single loci. Thus, shifts in the relative weighting of activity across SBN nodes can conceivably produce almost limitless variation in behavior, including diversity across species (as weighting is modified through evolution), across behavioral contexts (as weights change temporally) and across behavioral phenotypes (as weighting is specified through heritable and developmental processes). Individual neural loci may also express diverse relationships to behavior, depending upon temporal variations in their functional connectivity to other brain regions (“neural context”). We here review the basic properties of the SBN and show how behavioral variation relates to functional connectivity of the network, and discuss ways in which neuroendocrine factors adjust network activity to produce behavioral diversity. In addition to the actions of steroid hormones on SBN state, we examine the temporally plastic and evolutionarily labile properties of the nonapeptides (the vasopressin- and oxytocin-like neuropeptides), and show how variations in nonapeptide signaling within the SBN serve to promote behavioral diversity across social contexts, seasons, phenotypes and species. Although this diversity is daunting in its complexity, the search for common
Cooper, Robert J; Spitzer, Nadja
2015-05-01
Silver nanoparticles (AgNPs) have potent antimicrobial properties at concentrations far below those that cause cytotoxic and genotoxic effects in eukaryotic cells. This property has resulted in the widespread use of AgNPs in consumer products, leading to environmental exposures at sub-lethal levels through ingestion and inhalation. Although the toxicity of AgNPs has been well characterized, effects of environmentally relevant exposures have not been extensively investigated in spite of studies that suggest accumulation of silver in tissues, including brain. To assess the sublethal effects of AgNPs on neural cell function, we used cultured SVZ-NSCs, a model of neurogenesis and neural cell function. Throughout life, neural stem cells (NSCs) in the subventricular zone (SVZ) of the lateral ventricles proliferate and migrate via the rostral migratory stream to the olfactory bulb. Once there, they complete differentiation into neurons and glia and integrate into existing circuits. This process of neurogenesis is tightly regulated, and is considered a part of healthy brain function. We found that 1.0 μg/mL AgNP exposure in cultured differentiating NSCs induced the formation of f-actin inclusions, indicating a disruption of actin function. These inclusions did not co-localize with AgNPs, and therefore do not represent sequestered nanoparticles. Further, AgNP exposure led to a reduction in neurite extension and branching in live cells, cytoskeleton-mediated processes vital to neurogenesis. We conclude that AgNPs at sublethal concentrations disrupt actin dynamics in SVZ-NSCs, and that an associated disruption in neurogenesis may contribute to documented deficits in brain function following AgNP exposure. PMID:25952507
Gas dynamics in strong centrifugal fields
Bogovalov, S.V.; Kislov, V.A.; Tronin, I.V.
2015-03-10
The dynamics of waves generated by scoops in gas centrifuges (GC) for isotope separation is considered. The centrifugal acceleration in the GC reaches values of the order of 10^6 g. The centrifugal and Coriolis forces essentially modify the conventional sound waves. Under these conditions, three families of waves with different polarisation and dispersion exist. The dynamics of the flow in the model GC Iguasu is investigated numerically. The results of the numerical modelling of the wave dynamics are compared with the analytical predictions. A new resonance phenomenon in the GC is found. The resonances occur for waves polarized along the rotational axis, which have the smallest damping due to viscosity.
NASA Astrophysics Data System (ADS)
Park, Choongseok; Worth, Robert M.; Rubchinsky, Leonid L.
2011-04-01
Synchronous oscillatory dynamics is frequently observed in the human brain. We analyze the fine temporal structure of phase-locking in a realistic network model and match it with the experimental data from Parkinsonian patients. We show that the experimentally observed intermittent synchrony can be generated just by moderately increased coupling strength in the basal ganglia circuits due to the lack of dopamine. Comparison of the experimental and modeling data suggests that brain activity in Parkinson's disease resides in the large boundary region between synchronized and nonsynchronized dynamics. Being on the edge of synchrony may allow for easy formation of transient neuronal assemblies.
Multi-bump solutions in a neural field model with external inputs
NASA Astrophysics Data System (ADS)
Ferreira, Flora; Erlhagen, Wolfram; Bicho, Estela
2016-07-01
We study the conditions for the formation of multiple regions of high activity or "bumps" in a one-dimensional, homogeneous neural field with localized inputs. Stable multi-bump solutions of the integro-differential equation have been proposed as a model of a neural population representation of remembered external stimuli. We apply a class of oscillatory coupling functions and first derive criteria on the input width and distance, relative to the synaptic couplings, that guarantee the existence and stability of one and two regions of high activity. These input-induced patterns are attracted by the corresponding stable one-bump and two-bump solutions when the input is removed. We then extend our analytical and numerical investigation to N-bump solutions, showing that the constraints on the input shape derived for the two-bump case can be exploited to generate a memory of N > 2 localized inputs. We discuss the pattern formation process when either the conditions on the input shape are violated or the spatial ranges of the excitatory and inhibitory connections are changed. An important aspect for applications is that the theoretical findings allow us to determine, for a given coupling function, the maximum number of localized inputs that can be stored in a given finite interval.
Criticality in neural ensembles: a mean field approach to expand network size from measured data
NASA Astrophysics Data System (ADS)
Wasnik, Vaibhav; Caracheo, Barak; Seamans, Jeremy; Emberly, Eldon
2014-03-01
At the point of a second-order phase transition, also termed a critical point, systems display long-range order, and their macroscopic behaviours are independent of the microscopic details making up the system. This makes the idea of criticality attractive for studying biological systems, which, even though they differ microscopically, still exhibit similar macroscopic behaviours. Recent high-throughput methods in neuroscience are making it possible to explore whether criticality exists in neural networks. Despite being high-throughput, many data sets are still only a minute sample of the neural system, and methods for expanding these data sets have to be considered in order to study the existence of criticality. Using measurements of firing neurons from the prefrontal cortex (PFC) of rats, we map the data to a system of Ising spins and calculate the specific heat as a function of the measured network size, looking for the existence of critical points. In order to go to the thermodynamic limit, we propose a mean field approach for expanding such data. Our preliminary results show that such an approach can capture the statistical properties of much larger neuronal populations even when only a smaller subset is measured.
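The specific-heat calculation described above can be illustrated with a minimal sketch. This is only the standard empirical-energy recipe (bin the population activity into binary words, treat E = −log p as an energy, and reweight the observed distribution by a fictitious temperature); the paper's mean-field expansion to larger networks is not reproduced here, and all function names and parameter values below are our own.

```python
import numpy as np
from collections import Counter

def specific_heat(patterns, temps):
    """Specific heat of a binary neural population, computed from the
    empirical distribution of activity words (energies E = -log p)."""
    words = [tuple(row) for row in patterns]
    counts = Counter(words)
    p = np.array(list(counts.values()), dtype=float) / len(words)
    E = -np.log(p)                          # empirical energies (k_B = 1)
    C = []
    for T in temps:
        w = p ** (1.0 / T)
        w /= w.sum()                        # distribution tilted to temperature T
        var_E = (w * E ** 2).sum() - (w * E).sum() ** 2
        C.append(var_E / T ** 2)            # C(T) = Var(E) / T^2
    return np.array(C)

# tiny demo on random binary words from 8 "neurons"
rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=(4000, 8))
C = specific_heat(data, np.linspace(0.5, 2.0, 7))
```

A critical point would show up as a peak in C(T) near the operating temperature T = 1 that sharpens as the sampled population grows.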
Mirabello, Claudio; Adelfio, Alessandro; Pollastri, Gianluca
2014-01-01
Predicting the fold of a protein from its amino acid sequence is one of the grand problems in computational biology. While there has been progress towards a solution, especially when a protein can be modelled based on one or more known structures (templates), in the absence of templates, even the best predictions are generally much less reliable. In this paper, we present an approach for predicting the three-dimensional structure of a protein from the sequence alone, when templates of known structure are not available. This approach relies on a simple reconstruction procedure guided by a novel knowledge-based evaluation function implemented as a class of artificial neural networks that we have designed: Neural Network Pairwise Interaction Fields (NNPIF). This evaluation function takes into account the contextual information for each residue and is trained to identify native-like conformations from non-native-like ones by using large sets of decoys as a training set. The training set is generated and then iteratively expanded during successive folding simulations. As NNPIF are fast at evaluating conformations, thousands of models can be processed in a short amount of time, and clustering techniques can be adopted for model selection. Although the results we present here are very preliminary, we consider them to be promising, with predictions being generated at state-of-the-art levels in some of the cases. PMID:24970210
Quantum analysis applied to thermo field dynamics on dissipative systems
Hashizume, Yoichiro; Okamura, Soichiro; Suzuki, Masuo
2015-03-10
Thermo field dynamics is one of the formulations useful for treating statistical mechanics in the scheme of field theory. In the present study, we discuss the dissipative thermo field dynamics of quantum damped harmonic oscillators. To treat the effective renormalization of quantum dissipation, we use the Suzuki-Takano approximation. Finally, we derive a dissipative von Neumann equation in the Lindblad form. In the present treatment, we can easily obtain the initial damping shown previously by Kubo.
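For orientation, the Lindblad form referred to here is the standard one (generic textbook expression only; the paper's specific Hamiltonian and dissipation operators are not reproduced):

```latex
\frac{d\rho}{dt} = -\frac{i}{\hbar}\,[H,\rho]
  + \sum_k \left( L_k \rho L_k^{\dagger}
  - \frac{1}{2}\left\{ L_k^{\dagger} L_k ,\, \rho \right\} \right)
```

Here ρ is the density operator, H the system Hamiltonian, and the L_k the (model-dependent) jump operators encoding the damping.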
Neural substrates and behavioral profiles of romantic jealousy and its temporal dynamics
Sun, Yan; Yu, Hongbo; Chen, Jie; Liang, Jie; Lu, Lin; Zhou, Xiaolin; Shi, Jie
2016-01-01
Jealousy is not only a way of experiencing love but also a stabilizer of romantic relationships, although morbid romantic jealousy is maladaptive. Being engaged in a formal romantic relationship can tune one’s romantic jealousy towards a specific target. To date, however, little is known about how the human brain processes romantic jealousy. Here, by combining scenario-based imagination and functional MRI, we investigated the behavioral and neural correlates of romantic jealousy and their development across stages (before vs. after being in a formal relationship). Romantic jealousy scenarios elicited activations primarily in the basal ganglia (BG) across stages, and both the behavioral rating and BG activation were significantly higher after the relationship was established. The intensity of romantic jealousy was related to the intensity of romantic happiness, which mainly correlated with ventral medial prefrontal cortex activation. The increase in jealousy across stages was associated with the tendency for interpersonal aggression. These results bridge the gap between the theoretical conceptualization of romantic jealousy and its neural correlates and shed light on the dynamic changes in jealousy. PMID:27273024
The neural circuit and synaptic dynamics underlying perceptual decision-making
NASA Astrophysics Data System (ADS)
Liu, Feng
2015-03-01
Decision-making with several choice options is central to cognition. To elucidate the neural mechanisms of multiple-choice motion discrimination, we built a continuous recurrent network model to represent a local circuit in the lateral intraparietal area (LIP). The network is composed of pyramidal cells and interneurons, which are directionally tuned. All neurons are reciprocally connected, and the synaptic connectivity strength is heterogeneous. Specifically, we assume two types of inhibitory connectivity to pyramidal cells: opposite-feature and similar-feature inhibition. The model accounted for both physiological and behavioral data from monkey experiments. The network is endowed with slow excitatory reverberation, which subserves the buildup and maintenance of persistent neural activity, and predominant feedback inhibition, which underlies the winner-take-all competition and attractor dynamics. The opposite-feature and similar-feature inhibition have different effects on decision-making, and only their combination allows for a categorical choice among 12 alternatives. Together, our work highlights the importance of structured synaptic inhibition in multiple-choice decision-making processes.
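The winner-take-all competition through feedback inhibition can be caricatured in a few lines of rate-model code. This is a deliberate simplification: twelve competing pools with self-excitation and uniform pooled inhibition; none of the parameters are taken from the LIP model described above.

```python
import numpy as np

# Toy winner-take-all among 12 direction-tuned pools: self-excitation plus
# shared feedback inhibition lets exactly one pool survive the competition.
rng = np.random.default_rng(0)
n, tau, dt = 12, 0.1, 0.001
w_self, w_inh = 2.0, 1.5                  # recurrent excitation / pooled inhibition
inputs = 1.0 + 0.05 * rng.standard_normal(n)
inputs[3] += 0.5                          # stimulus favors alternative 3
r = np.zeros(n)
for _ in range(5000):
    drive = w_self * r - w_inh * r.sum() + inputs
    r += dt / tau * (-r + np.maximum(drive, 0.0))
winner = int(np.argmax(r))                # the favored pool wins; the rest decay to 0
```

Because the inhibition is pooled, any pool that falls behind receives net negative drive and is rectified to zero, which is the attractor-dynamics mechanism for a categorical choice.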
Liu, Zongcheng; Dong, Xinmin; Xue, Jianping; Li, Hongbo; Chen, Yong
2016-09-01
This brief addresses the adaptive control problem for a class of pure-feedback systems with nonaffine functions possibly being nondifferentiable. Without using the mean value theorem, the difficulty of the control design for pure-feedback systems is overcome by modeling the nonaffine functions appropriately. With the help of neural network approximators, an adaptive neural controller is developed by combining the dynamic surface control (DSC) and minimal learning parameter (MLP) techniques. The key features of our approach are that, first, the restrictive assumptions on the partial derivative of nonaffine functions are removed; second, the DSC technique is used to avoid "the explosion of complexity" in the backstepping design, and the number of adaptive parameters is reduced significantly using the MLP technique; third, smooth robust compensators are employed to circumvent the influences of approximation errors and disturbances. Furthermore, it is proved that all the signals in the closed-loop system are semiglobally uniformly ultimately bounded. Finally, the simulation results are provided to demonstrate the effectiveness of the designed method. PMID:26277010
Kamiya, Atsunori; Kawada, Toru; Yamamoto, Kenta; Mizuno, Masaki; Shimizu, Shuji; Sugimachi, Masaru
2008-06-01
Maintenance of arterial pressure (AP) under orthostatic stress against gravitational fluid shift and pressure disturbance is of great importance. One of the mechanisms is that upright tilt resets steady-state baroreflex control to a higher sympathetic nerve activity (SNA). However, the dynamic feedback characteristics of the baroreflex system, a hallmark of fast-acting neural control, remain to be elucidated. In the present study, we tested the hypothesis that upright tilt resets the dynamic transfer function of the baroreflex neural arc to minify the pressure disturbance in total baroreflex control. Renal SNA and AP were recorded in ten anesthetized, vagotomized and aortic-denervated rabbits. Under the baroreflex open-loop condition, isolated intracarotid sinus pressure (CSP) was changed according to a binary white noise sequence at operating pressure ±20 mmHg, while the animal was placed supine and at 60 degrees upright tilt. Regardless of posture, the baroreflex neural (CSP to SNA) and peripheral (SNA to AP) arcs showed dynamic high-pass and low-pass characteristics, respectively. Upright tilt increased the transfer gain of the neural arc (resetting), decreased that of the peripheral arc, and consequently maintained the transfer characteristics of the total baroreflex feedback system. A simulation study suggests that postural resetting of the neural arc would significantly increase the transfer gain of the total arc in the upright position, and that in the closed-loop baroreflex the resetting increases the stability of AP against pressure disturbance under orthostatic stress. In conclusion, upright tilt resets the dynamic transfer function of the baroreflex neural arc to minify the pressure disturbance in total baroreflex control. PMID:18471343
Agarwal, Rahul; Chen, Zhe; Kloosterman, Fabian; Wilson, Matthew A; Sarma, Sridevi V
2016-07-01
Pyramidal neurons recorded from the rat hippocampus and entorhinal cortex, such as place and grid cells, have diverse receptive fields, which are either unimodal or multimodal. Spiking activity from these cells encodes information about the spatial position of a freely foraging rat. At fine timescales, a neuron's spike activity also depends significantly on its own spike history. However, due to limitations of current parametric modeling approaches, it remains a challenge to estimate complex, multimodal neuronal receptive fields while incorporating spike history dependence. Furthermore, efforts to decode the rat's trajectory in one- or two-dimensional space from hippocampal ensemble spiking activity have mainly focused on spike history-independent neuronal encoding models. In this letter, we address these two important issues by extending a recently introduced nonparametric neural encoding framework that allows modeling both complex spatial receptive fields and spike history dependencies. Using this extended nonparametric approach, we develop novel algorithms for decoding a rat's trajectory based on recordings of hippocampal place cells and entorhinal grid cells. Results show that both encoding and decoding models derived from our new method performed significantly better than state-of-the-art encoding and decoding models on 6 minutes of test data. In addition, our model's performance remains invariant to the apparent modality of the neuron's receptive field. PMID:27172447
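For contrast with the letter's nonparametric approach, the classical spike-history-independent baseline it improves on can be written down in a few lines: a Bayesian (maximum a posteriori) decoder assuming independent Poisson place cells with known tuning curves. Everything below (tuning-curve shapes, rates, bin size) is invented for illustration and is not the authors' model.

```python
import numpy as np

# Textbook Bayesian decoding of 1-D position from place-cell spike counts,
# assuming independent Poisson firing with known (here: invented) tuning curves.
positions = np.linspace(0.0, 1.0, 200)            # candidate positions (m)
centers = np.linspace(0.0, 1.0, 25)               # place-field centers
tuning = 0.1 + 15.0 * np.exp(-(positions[:, None] - centers) ** 2
                             / (2 * 0.05 ** 2))   # firing rates (Hz), shape (200, 25)

def decode_map(spike_counts, dt=0.5):
    """Maximum a posteriori position under a flat prior over positions."""
    lam = tuning * dt                             # expected counts per time bin
    log_post = spike_counts @ np.log(lam).T - lam.sum(axis=1)
    return positions[np.argmax(log_post)]
```

A history-dependent model such as the one in the letter replaces the static Poisson likelihood with one conditioned on each neuron's recent spiking.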
Hoehl, Stefanie; Landt, Jennifer; Striano, Tricia
2008-01-01
This study investigates how human infants process and interpret human movement. Neural correlates to the perception of (i) possible biomechanical motion, (ii) impossible biomechanical motion and (iii) biomechanically possible motion but nonhuman ‘corrupted’ body schema were assessed in infants of 8 months. Analysis of event-related potentials resulting from the passive viewing of these point-light displays (PLDs) indicated a larger positive amplitude over parietal channels between 300 and 700 ms for observing biomechanically impossible PLDs when compared with other conditions. An early negative activation over frontal channels between 200 and 350 ms dissociated schematically impossible PLDs from other conditions. These results show that in infants, different cognitive systems underlie the processing of structural and dynamic features by 8 months of age. PMID:19015106
Han, Seong-Ik; Lee, Jang-Myung
2014-01-01
This paper proposes a backstepping control system that uses a tracking error constraint and recurrent fuzzy neural networks (RFNNs) to achieve a prescribed tracking performance for a strict-feedback nonlinear dynamic system. A new constraint variable was defined to generate the virtual control that forces the tracking error to fall within prescribed boundaries. An adaptive RFNN was also used to obtain the required improvement on the approximation performances in order to avoid calculating the explosive number of terms generated by the recursive steps of traditional backstepping control. The boundedness and convergence of the closed-loop system was confirmed based on the Lyapunov stability theory. The prescribed performance of the proposed control scheme was validated by using it to control the prescribed error of a nonlinear system and a robot manipulator. PMID:24055100
Neural Dynamics of Emotional Salience Processing in Response to Voices during the Stages of Sleep
Chen, Chenyi; Sung, Jia-Ying; Cheng, Yawei
2016-01-01
Sleep has been related to emotional functioning. However, the extent to which emotional salience is processed during sleep is unknown. To address this concern, we investigated night sleep in healthy adults regarding brain reactivity to the emotionally (happily, fearfully) spoken meaningless syllables dada, along with correspondingly synthesized nonvocal sounds. Electroencephalogram (EEG) signals were continuously acquired during an entire night of sleep while we applied a passive auditory oddball paradigm. During all stages of sleep, mismatch negativity (MMN) in response to emotional syllables, which is an index for emotional salience processing of voices, was detected. In contrast, MMN to acoustically matching nonvocal sounds was undetected during Sleep Stage 2 and 3 as well as rapid eye movement (REM) sleep. Post-MMN positivity (PMP) was identified with larger amplitudes during Stage 3, and at earlier latencies during REM sleep, relative to wakefulness. These findings clearly demonstrated the neural dynamics of emotional salience processing during the stages of sleep. PMID:27378870
Fung, C C Alan; Wong, K Y Michael; Wu, Si
2010-03-01
Understanding how the dynamics of a neural network is shaped by the network structure and, consequently, how the network structure facilitates the functions implemented by the neural system is at the core of using mathematical models to elucidate brain functions. This study investigates the tracking dynamics of continuous attractor neural networks (CANNs). Due to the translational invariance of neuronal recurrent interactions, CANNs can hold a continuous family of stationary states. They form a continuous manifold in which the neural system is neutrally stable. We systematically explore how this property facilitates the tracking performance of a CANN, which is believed to have clear correspondence with brain functions. By using the wave functions of the quantum harmonic oscillator as the basis, we demonstrate how the dynamics of a CANN is decomposed into different motion modes, corresponding to distortions in the amplitude, position, width, or skewness of the network state. We then develop a perturbation approach that utilizes the dominating movement of the network's stationary states in the state space. This method allows us to approximate the network dynamics up to an arbitrary accuracy depending on the order of perturbation used. We quantify the distortions of a gaussian bump during tracking and study their effects on tracking performance. Results are obtained on the maximum speed for a moving stimulus to be trackable and the reaction time for the network to catch up with an abrupt change in the stimulus. PMID:19922292
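The tracking behaviour can be reproduced with a short simulation of the standard CANN equations (Gaussian recurrent excitation with divisive global inhibition on a ring). All parameter values below are our own choices for illustration, not those used in the paper's analysis.

```python
import numpy as np

# 1-D continuous attractor network tracking a slowly moving stimulus.
N = 128
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
dx = x[1] - x[0]
a, k, tau, dt = 0.5, 0.5, 1.0, 0.05               # interaction range, inhibition, time constant

d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, 2 * np.pi - d)                  # periodic (ring) distance
J = np.exp(-d ** 2 / (2 * a ** 2)) / (np.sqrt(2 * np.pi) * a)

def step(u, I_ext):
    r = np.maximum(u, 0.0) ** 2
    r = r / (1.0 + k * dx * r.sum())              # divisive global inhibition
    return u + dt / tau * (-u + dx * (J @ r) + I_ext)

u = np.zeros(N)
z = -np.pi / 2
for t in range(3000):
    z += 0.0005                                   # stimulus position drifts slowly
    dz = np.minimum(np.abs(x - z), 2 * np.pi - np.abs(x - z))
    u = step(u, 0.8 * np.exp(-dz ** 2 / (2 * a ** 2)))
bump_pos = x[np.argmax(u)]                        # the bump has followed the stimulus
```

Because the stationary bump is neutrally stable along the ring, a weak moving input is enough to drag it along; making the drift faster than the network's reaction speed is what produces the maximum trackable speed analysed in the paper.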
ERIC Educational Resources Information Center
Zhang, Yaxu; Zhang, Jinlu; Min, Baoquan
2012-01-01
An event-related potential experiment was conducted to investigate the temporal neural dynamics of animacy processing in the interpretation of classifier-noun combinations. Participants read sentences that had a non-canonical structure, "object noun" + "subject noun" + "verb" + "numeral-classifier" + "adjective". The object noun and its classifier…
NASA Technical Reports Server (NTRS)
Ross, Muriel D.; Cutler, Lynn; Meyer, Glenn; Lam, Tony; Vaziri, Parshaw
1990-01-01
Computer-assisted, 3-dimensional reconstructions of macular receptive fields and of their linkages into a neural network have revealed new information about macular functional organization. Both type I and type II hair cells are included in the receptive fields. The fields are rounded, oblong, or elongated, but gradations between categories are common. Cell polarizations are divergent. Morphologically, each calyx of oblong and elongated fields appears to be an information processing site. Intrinsic modulation of information processing is extensive and varies with the kind of field. Each reconstructed field differs in detail from every other, suggesting that an element of randomness is introduced developmentally and contributes to endorgan adaptability.
Using Motor Imagery to Study the Neural Substrates of Dynamic Balance
Ferraye, Murielle Ursulla; Debû, Bettina; Heil, Lieke; Carpenter, Mark; Bloem, Bastiaan Roelof; Toni, Ivan
2014-01-01
This study examines the cerebral structures involved in dynamic balance using a motor imagery (MI) protocol. We recorded cerebral activity with functional magnetic resonance imaging while subjects imagined swaying on a balance board along the sagittal plane to point a laser at target pairs of different sizes (small, large). We used a matched visual imagery (VI) control task and recorded imagery durations during scanning. MI and VI durations were differentially influenced by the sway accuracy requirement, indicating that MI of balance is sensitive to the increased motor control necessary to point at a smaller target. Compared to VI, MI of dynamic balance recruited additional cortical and subcortical portions of the motor system, including frontal cortex, basal ganglia, cerebellum and mesencephalic locomotor region, the latter showing increased effective connectivity with the supplementary motor area. The regions involved in MI of dynamic balance were spatially distinct but contiguous to those involved in MI of gait (Bakker et al., 2008; Snijders et al., 2011; Crémers et al., 2012), in a pattern consistent with existing somatotopic maps of the trunk (for balance) and legs (for gait). These findings validate a novel, quantitative approach for studying the neural control of balance in humans. This approach extends previous reports on MI of static stance (Jahn et al., 2004, 2008), and opens the way for studying gait and balance impairments in patients with neurodegenerative disorders. PMID:24663383
Dynamic neural-based buffer management for Queuing systems with self-similar characteristics.
Yousefi'zadeh, Homayoun; Jonckheere, Edmond A
2005-09-01
Buffer management in queuing systems plays an important role in addressing the tradeoff between efficiency measured in terms of overall packet loss and fairness measured in terms of individual source packet loss. Complete partitioning (CP) of a buffer with the best fairness characteristic and complete sharing (CS) of a buffer with the best efficiency characteristic are at the opposite ends of the spectrum of buffer management techniques. Dynamic partitioning buffer management techniques aim at addressing the tradeoff between efficiency and fairness. Ease of implementation is the key issue when determining the practicality of a dynamic buffer management technique. In this paper, two novel dynamic buffer management techniques for queuing systems accommodating self-similar traffic patterns are introduced. The techniques take advantage of the adaptive learning power of perceptron neural networks when applied to arriving traffic patterns of queuing systems. Relying on the water-filling approach, our proposed techniques are capable of coping with the tradeoff between packet loss and fairness issues. Computer simulations reveal that both of the proposed techniques enjoy great efficiency and fairness characteristics as well as ease of implementation. PMID:16252824
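The water-filling step can be sketched independently of the neural predictor (a generic max-min fair allocator via bisection; the paper couples this with perceptron-based traffic prediction, which is not reproduced here, and the function name is our own):

```python
def waterfill(demands, budget):
    """Split 'budget' buffer slots across sources by raising a common
    water level: each source receives min(demand, level), with the level
    chosen so the allocations sum to the budget (max-min fairness)."""
    if sum(demands) <= budget:
        return list(demands)                      # everyone fits: no contention
    lo, hi = 0.0, float(max(demands))
    for _ in range(100):                          # bisection on the water level
        level = (lo + hi) / 2.0
        if sum(min(d, level) for d in demands) > budget:
            hi = level
        else:
            lo = level
    return [min(d, lo) for d in demands]
```

For example, waterfill([10, 2, 4], 9) gives the light source its full demand of 2 and splits the remainder evenly between the two heavy sources at the water level 3.5, which sits between complete sharing (efficiency) and complete partitioning (fairness).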
Effects of correlation among stored patterns on associative dynamics of chaotic neural network
NASA Astrophysics Data System (ADS)
Iwai, Toshiya; Matsuzaki, Fuminari; Kuroiwa, Jousuke; Miyake, Shogo
2005-12-01
We numerically investigate the effects of correlation among stored patterns on the associative dynamics in a chaotic neural network model. In the model, there are two kinds of parameters: one is a measure of the Hopfield-like behavior of the retrieval process and the other controls the chaotic behavior. The parameter dependence of the associative dynamics is also examined. The following results are found. (i) The two-dimensional parameter space is divided into two kinds of associative states by a distinct boundary. One is the retrieval state of the association, such as the Hopfield-like retrieval state, and the other is the wandering state of the associative dynamics, in which the network retrieves stored patterns and their reverse patterns. (ii) The area of the wandering state becomes larger as the degree of correlation becomes larger. (iii) As the degree of correlation becomes larger, both the recall ratio of correlated patterns and the transition frequency between correlated patterns become larger in the wandering state. (iv) The whole region of the wandering state in the parameter space is not necessarily chaotic from the viewpoint of the Lyapunov dimension, but most of the region of the wandering state is chaotic.
Sustained neural activity to gaze and emotion perception in dynamic social scenes.
Ulloa, José Luis; Puce, Aina; Hugueville, Laurent; George, Nathalie
2014-03-01
To understand social interactions, we must decode dynamic social cues from seen faces. Here, we used magnetoencephalography (MEG) to study the neural responses underlying the perception of emotional expressions and gaze direction changes as depicted in an interaction between two agents. Subjects viewed displays of paired faces that first established a social scenario of gazing at each other (mutual attention) or gazing laterally together (deviated group attention) and then dynamically displayed either an angry or happy facial expression. The initial gaze change elicited a significantly larger M170 under the deviated than the mutual attention scenario. At around 400 ms after the dynamic emotion onset, responses at posterior MEG sensors differentiated between emotions, and between 1000 and 2200 ms, left posterior sensors were additionally modulated by social scenario. Moreover, activity on right anterior sensors showed both an early and prolonged interaction between emotion and social scenario. These results suggest that activity in right anterior sensors reflects an early integration of emotion and social attention, while posterior activity first differentiated between emotions only, supporting the view of a dual route for emotion processing. Altogether, our data demonstrate that both transient and sustained neurophysiological responses underlie social processing when observing interactions between others. PMID:23202662
Laminar Neural Field Model of Laterally Propagating Waves of Orientation Selectivity.
Bressloff, Paul C; Carroll, Samuel R
2015-10-01
We construct a laminar neural-field model of primary visual cortex (V1) consisting of a superficial layer of neurons that encode the spatial location and orientation of a local visual stimulus coupled to a deep layer of neurons that only encode spatial location. The spatially-structured connections in the deep layer support the propagation of a traveling front, which then drives propagating orientation-dependent activity in the superficial layer. Using a combination of mathematical analysis and numerical simulations, we establish that the existence of a coherent orientation-selective wave relies on the presence of weak, long-range connections in the superficial layer that couple cells of similar orientation preference. Moreover, the wave persists in the presence of feedback from the superficial layer to the deep layer. Our results are consistent with recent experimental studies that indicate that deep and superficial layers work in tandem to determine the patterns of cortical activity observed in vivo. PMID:26491877
DeepCNF-D: Predicting Protein Order/Disorder Regions by Weighted Deep Convolutional Neural Fields
Wang, Sheng; Weng, Shunyan; Ma, Jianzhu; Tang, Qingming
2015-01-01
Intrinsically disordered proteins or protein regions are involved in key biological processes including regulation of transcription, signal transduction, and alternative splicing. Accurately predicting order/disorder regions ab initio from the protein sequence is a prerequisite step for further analysis of the functions and mechanisms of these disordered regions. This work presents a learning method, weighted DeepCNF (Deep Convolutional Neural Fields), that improves the accuracy of order/disorder prediction by exploiting long-range sequential information and the interdependency between adjacent order/disorder labels, and by assigning different weights to each label during training and prediction to address the label-imbalance issue. Evaluated on the CASP9 and CASP10 targets, our method obtains AUC values of 0.855 and 0.898, respectively, which are higher than those of the state-of-the-art single ab initio predictors. PMID:26230689
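The AUC metric used in this evaluation can be computed directly from per-residue labels and predicted disorder scores. A minimal sketch (not the authors' code; the example labels and scores are hypothetical) using the Mann-Whitney rank-sum identity:

```python
import numpy as np

def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    AUC = P(score of a random positive > score of a random negative)."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    # Count pairwise wins of positives over negatives; ties count half
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (pos.size * neg.size)

# Hypothetical per-residue disorder labels (1 = disordered) and scores
y = [0, 0, 1, 1, 1, 0]
s = [0.1, 0.4, 0.35, 0.8, 0.9, 0.2]
print(auc(y, s))  # 1 misranked pair out of 9 → 8/9 ≈ 0.889
```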
Wang, Yanan; Geng, Xinyi; Huang, Yongzhi; Wang, Shouyan
2016-02-01
The dysfunction of the subthalamic nucleus is the main cause of Parkinson's disease. Local field potentials in the human subthalamic nucleus contain rich physiological information. The present study aimed to quantify the oscillatory and dynamic characteristics of subthalamic nucleus local field potentials, and their modulation by medication therapy for Parkinson's disease. Subthalamic nucleus local field potentials were recorded from patients with Parkinson's disease on and off medication. The oscillatory features were characterised with power spectral analysis, and the dynamic features with time-frequency analysis and the coefficient of variation of the time-variant power at each frequency. There was a dominant peak in the low beta band with medication off. Medication significantly suppressed the low beta component and increased the theta component. The amplitude fluctuation of neural oscillations was measured by the coefficient of variation, which was increased by medication in the 4-7 Hz and 60-66 Hz bands. These effects showed that medication significantly modulates subthalamic nucleus neural oscillatory synchronization and its dynamic features, and that subthalamic nucleus neural activity tends toward a more stable state under medication. The findings provide quantitative biomarkers for studying the mechanisms of Parkinson's disease and clinical treatments such as medication or deep brain stimulation. PMID:27382739
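The coefficient-of-variation measure described here (standard deviation over mean of the time-variant power at each frequency) can be sketched as follows. This is an illustrative reconstruction on synthetic signals, not the study's analysis pipeline; all parameters are arbitrary:

```python
import numpy as np
from scipy.signal import spectrogram

def band_power_cv(lfp, fs, fmin, fmax, nperseg=256):
    """Coefficient of variation (std/mean over time) of spectrogram power,
    computed per frequency bin in [fmin, fmax] Hz and averaged over the band."""
    f, t, Sxx = spectrogram(lfp, fs=fs, nperseg=nperseg)
    band = (f >= fmin) & (f <= fmax)
    power = Sxx[band]                                # (n_band_bins, n_windows)
    cv = power.std(axis=1) / power.mean(axis=1)
    return cv.mean()

# Synthetic "LFP": a steady 20 Hz beta oscillation vs. a bursting one
np.random.seed(0)
fs = 500
t = np.arange(0, 20, 1 / fs)
steady = np.sin(2 * np.pi * 20 * t) + 0.1 * np.random.randn(t.size)
bursts = steady * (np.sin(2 * np.pi * 0.2 * t) > 0)  # on/off amplitude bursts
print(band_power_cv(steady, fs, 13, 30) < band_power_cv(bursts, fs, 13, 30))
# → True (bursting raises the CV of beta-band power)
```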
Shlizerman, Eli; Riffell, Jeffrey A.; Kutz, J. Nathan
2014-01-01
The antennal lobe (AL), the olfactory processing center in insects, processes stimuli into distinct neural activity patterns, called olfactory neural codes. To model their dynamics, we perform multichannel recordings from projection neurons in the AL driven by different odorants and derive a dynamic neuronal network from the electrophysiological data. The network consists of lateral-inhibitory and excitatory neurons (modeled as firing-rate units) and is capable of producing unique olfactory neural codes for the tested odorants. To construct the network, we (1) design a projection, an odor space, for the neural recordings from the AL that discriminates between the trajectories of distinct odorants, (2) characterize scent recognition, i.e., decision-making based on olfactory signals, and (3) infer the wiring of the neural circuit, the connectome of the AL. We show that the constructed model is consistent with biological observations, such as contrast enhancement and robustness to noise. The study suggests a data-driven approach to a key biological question: identifying how lateral inhibitory neurons can be wired to excitatory neurons to permit robust activity patterns. PMID:25165442
Guarneri, Paolo; Rocca, Gianpiero; Gobbi, Massimiliano
2008-09-01
This paper deals with the simulation of tire/suspension dynamics using recurrent neural networks (RNNs). RNNs are derived from multilayer feedforward neural networks by adding feedback connections between the output and input layers. The optimal network architecture derives from a parametric analysis based on the optimal tradeoff between network accuracy and size. The neural network can be trained with experimental data obtained in the laboratory from simulated road profiles (cleats). The results obtained from the neural network show good agreement with the experimental results over a wide range of operating conditions. The NN model can be effectively applied as part of a vehicle system model to accurately predict the behavior of elastic bushings and tire dynamics. Although the neural network model, as a black-box model, does not provide good insight into the physical behavior of the tire/suspension system, it is a useful tool for assessing vehicle ride and noise, vibration, and harshness (NVH) performance owing to its good computational efficiency and accuracy. PMID:18779087
Dutt-Mazumder, Aviroop; Button, Chris; Robins, Anthony; Bartlett, Roger
2011-12-01
Recent studies have explored the organization of player movements in team sports using a range of statistical tools. However, the factors that best explain the performance of association football teams remain elusive. Arguably, this is due to the high-dimensional behavioural outputs that illustrate the complex, evolving configurations typical of team games. According to dynamical systems analysts, movement patterns in team sports exhibit nonlinear self-organizing features. Nonlinear processing tools (i.e., artificial neural networks; ANNs) are becoming increasingly popular for investigating the coordination of participants in sports competitions. ANNs are well suited to describing high-dimensional data sets with nonlinear attributes; however, limited information exists concerning the processes required to apply ANNs. This review investigates the relative value of various ANN learning approaches used in performance analysis of team sports, focusing on potential applications for association football. Sixty-two research sources were summarized and reviewed from electronic literature search engines such as SPORTDiscus, Google Scholar, IEEE Xplore, Scirus, ScienceDirect and Elsevier. Typical ANN learning algorithms can be adapted to perform pattern recognition and pattern classification. In particular, dimensionality reduction by a Kohonen feature map (KFM) can compress chaotic high-dimensional datasets into low-dimensional relevant information. Such information would be useful for developing effective training drills that should enhance self-organizing coordination among players. We conclude that ANN-based qualitative analysis is a promising approach for understanding the dynamical attributes of association football players. PMID:22060175
Dynamical mass generation in QED with weak magnetic fields
Ayala, A.; Rojas, E.; Bashir, A.; Raya, A.
2006-09-25
We study the dynamical generation of masses for fundamental fermions in quenched quantum electrodynamics in the presence of magnetic fields using Schwinger-Dyson equations. We show that, contrary to the case where the magnetic field is strong, in the weak-field limit eB << m(0)^2, where m(0) is the value of the dynamically generated mass in the absence of the magnetic field, masses are generated above a critical value of the coupling, and that this value is the same as in the case with no magnetic field. We carry out a numerical analysis of the magnetic field dependence of the mass function above the critical coupling and show that in this regime the dynamically generated mass and the chiral condensate for the lowest Landau level increase proportionally to (eB)^2.
Static and dynamical Meissner force fields
NASA Technical Reports Server (NTRS)
Weinberger, B. R.; Lynds, L.; Hull, J. R.; Mulcahy, T. M.
1991-01-01
The coupling between copper-based high temperature superconductors (HTS) and magnets is represented by a force field. Zero-field cooled experiments were performed with several forms of superconductors: 1) cold-pressed sintered cylindrical disks; 2) small particles fixed in epoxy polymers; and 3) small particles suspended in hydrocarbon waxes. Using magnets with axial field symmetries, direct spatial force measurements in the range of 0.1 to 10^4 dynes were performed with an analytical balance and force constants were obtained from mechanical vibrational resonances. Force constants increase dramatically with decreasing spatial displacement. The force field displays a strong temperature dependence between 20 and 90 K and decreases exponentially with increasing distance of separation. Distinct slope changes suggest the presence of B-field and temperature-activated processes that define the forces. Hysteresis measurements indicated that the magnitude of force scales roughly with the volume fraction of HTS in composite structures. Thus, the net force resulting from the field interaction appears to arise from regions as small or smaller than the grain size and does not depend on contiguous electron transport over large areas. Results of these experiments are discussed.
Technology Transfer Automated Retrieval System (TEKTRAN)
Non-linear regression techniques are used widely to fit weed field emergence patterns to soil microclimatic indices using S-type functions. Artificial neural networks present interesting and alternative features for such modeling purposes. In this work, a univariate hydrothermal-time based Weibull m...
NASA Astrophysics Data System (ADS)
Wang, Dongshu; Huang, Lihong
2014-10-01
In this paper, we investigate the almost periodic dynamical behaviors of a class of general Cohen-Grossberg neural networks with discontinuous right-hand sides and time-varying and distributed delays. By means of retarded differential inclusion theory and nonsmooth analysis with a generalized Lyapunov approach, we obtain the existence, uniqueness and global stability of the almost periodic solution of the neural network system. It is worth pointing out that our results remain valid without assuming boundedness or monotonicity of the discontinuous neuron activation functions. Finally, we give some numerical examples to show the applicability and effectiveness of our main results.
NASA Astrophysics Data System (ADS)
Ye, Ming; Khaleel, Raziuddin; Schaap, Marcel G.; Zhu, Jianting
2007-07-01
Simulations of moisture flow in heterogeneous soils are often hampered by a lack of measurements of soil hydraulic parameters, making it necessary to rely on other sources of information. In this paper, we develop a methodology to integrate data that can be easily obtained (for example, initial moisture content, θi, bulk density, and soil texture) with data on soil hydraulic properties via cokriging and Artificial Neural Network (ANN)-based pedotransfer functions. The method is applied to generate heterogeneous soil hydraulic parameters at a field injection site in southeastern Washington State. Stratigraphy at the site consists of imperfectly stratified layers with irregular layer boundaries. Cokriging is first used to generate three-dimensional heterogeneous fields of bulk density and soil texture using an extensive data set of field-measured θi, which carries a signature of site heterogeneity and stratigraphy. Soil texture and bulk density are subsequently input into an ANN-based site-specific pedotransfer function to generate three-dimensional heterogeneous soil hydraulic parameter fields. The stratigraphy at the site is well represented by the estimated pedotransfer variables and soil hydraulic parameters. The parameter estimates are then used to simulate a field injection experiment at the site. A relatively good agreement is obtained between the simulated and observed moisture contents. The spatial distribution pattern of observed moisture content as well as the southeastward moisture movement is captured well in the simulations. In contrast to earlier work using an effective parameter approach (Yeh et al., 2005), we are able to reproduce the observed splitting of the moisture plume in a coarse sand unit that is sandwiched between two fine-textured units. The simple method of combining cokriging and ANN for site characterization provides unbiased prediction of the observed moisture plume and is flexible so that additional measurements of various types can be
NASA Astrophysics Data System (ADS)
Xiao, Xiong; Zhao, Shengkui; Ha Nguyen, Duc Hoang; Zhong, Xionghu; Jones, Douglas L.; Chng, Eng Siong; Li, Haizhou
2016-01-01
This paper investigates deep neural network (DNN) based nonlinear feature mapping and statistical linear feature adaptation approaches for reducing reverberation in speech signals. In the nonlinear feature mapping approach, a DNN is trained from a parallel clean/distorted speech corpus to map reverberant and noisy speech coefficients (such as the log magnitude spectrum) to the underlying clean speech coefficients. The constraint imposed by dynamic features (i.e., the time derivatives of the speech coefficients) is used to enhance the smoothness of predicted coefficient trajectories in two ways. One is to obtain the enhanced speech coefficients with a least squares estimation from the coefficients and dynamic features predicted by the DNN. The other is to incorporate the constraint of dynamic features directly into the DNN training process using a sequential cost function. In the linear feature adaptation approach, a sparse linear transform, called the cross transform, is used to transform multiple frames of speech coefficients to a new feature space. The transform is estimated to maximize the likelihood of the transformed coefficients given a model of clean speech coefficients. Unlike the DNN approach, no parallel corpus is used and no assumption on distortion types is made. The two approaches are evaluated on the REVERB Challenge 2014 tasks. Both speech enhancement and automatic speech recognition (ASR) results show that the DNN-based mappings significantly reduce the reverberation in speech and improve both speech quality and ASR performance. For the speech enhancement task, the proposed dynamic feature constraint helps to improve cepstral distance, frequency-weighted segmental signal-to-noise ratio (SNR), and log likelihood ratio metrics while moderately degrading the speech-to-reverberation modulation energy ratio. In addition, the cross transform feature adaptation improves the ASR performance significantly for clean-condition trained acoustic models.
Concentration Dynamics of Nanoparticles under a Periodic Light Field
NASA Astrophysics Data System (ADS)
Livashvili, A. I.; Krishtop, V. V.; Bryukhanova, T. N.; Kostina, G. V.
Using a system of heat and mass balance equations, we study the dynamics of the concentration of nanoparticles in nanofluids under the influence of a periodic light field. Thermal convection is taken into account.
Dynamical Axion Field in a Magnetic Topological Insulator Superlattice
NASA Astrophysics Data System (ADS)
Wang, Jing; Lian, Biao; Zhang, Shou-Cheng
We propose that the dynamical axion field can be realized in a magnetic topological insulator superlattice or a topological paramagnetic insulator. The magnetic fluctuations of these systems produce a pseudoscalar field which has an axionic coupling to the electromagnetic field, and thus they give a condensed-matter realization of axion electrodynamics. Compared to previously proposed dynamical axion materials, where long-range antiferromagnetic order is required, the systems proposed here have the advantage that only a uniform magnetization or a paramagnetic state is needed for the dynamical axion. We further propose several experiments to detect such a dynamical axion field. This work is supported by the US Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering, under Contract No. DE-AC02-76SF00515.
Dynamics of coupled vortices in perpendicular field
Jain, Shikha; Novosad, Valentyn; Fradin, Frank Y.; Pearson, John E.; Bader, Samuel D.
2014-02-24
We explore the coupling mechanism of two magnetic vortices in the presence of a perpendicular bias field by pre-selecting the polarity combinations using the resonant-spin-ordering approach. First, out of the four vortex polarity combinations (two of which are degenerate), three stable core polarity states are achieved by lifting the degeneracy of one of the states. Second, the response of the stiffness constant for the vortex pair (similar polarity) in perpendicular bias is found to be asymmetric around the zero field, in contrast to the response obtained from a single vortex core. Finally, the collective response of the system for antiparallel core polarities is symmetric around zero bias. The vortex core whose polarization is opposite to the bias field dominates the response.
Exploring scalar field dynamics with Gaussian processes
Nair, Remya; Jhingan, Sanjay; Jain, Deepak
2014-01-01
The origin of the accelerated expansion of the Universe remains an unsolved mystery in Cosmology. In this work we consider a spatially flat Friedmann-Robertson-Walker (FRW) Universe with non-relativistic matter and a single scalar field contributing to the energy density of the Universe. Properties of this scalar field, like potential, kinetic energy, equation of state etc. are reconstructed from Supernovae and BAO data using Gaussian processes. We also reconstruct energy conditions and kinematic variables of expansion, such as the jerk and the slow roll parameter. We find that the reconstructed scalar field variables and the kinematic quantities are consistent with a flat ΛCDM Universe. Further, we find that the null energy condition is satisfied for the redshift range of the Supernovae data considered in the paper, but the strong energy condition is violated.
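Gaussian-process reconstruction of a smooth function from noisy data, the tool used here for the scalar-field quantities, can be sketched with a squared-exponential kernel. This is an illustrative toy on mock data, not the paper's Supernovae/BAO analysis, and all hyperparameters are arbitrary:

```python
import numpy as np

def gp_regress(x_train, y_train, x_test, ell=0.5, sf=1.0, noise=0.1):
    """Minimal Gaussian-process regression with a squared-exponential
    (RBF) kernel. Returns the posterior mean and variance at x_test."""
    def k(a, b):
        return sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

    K = k(x_train, x_train) + noise**2 * np.eye(x_train.size)
    Ks = k(x_test, x_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    # diag of k(x*, x*) - Ks K^{-1} Ks^T
    var = sf**2 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, var

# Reconstruct a smooth trend from noisy mock "data" over a redshift range
rng = np.random.default_rng(0)
z = np.linspace(0.01, 1.5, 40)
truth = np.log(1 + z)                 # stand-in smooth function
obs = truth + 0.02 * rng.standard_normal(z.size)
zq = np.linspace(0.05, 1.4, 10)
mean, var = gp_regress(z, obs, zq, ell=0.5, sf=1.0, noise=0.05)
print(np.max(np.abs(mean - np.log(1 + zq))))  # small reconstruction error
```

Derivatives of the reconstructed function (needed for quantities such as the equation of state) follow from differentiating the kernel, which is why GPs suit this kind of model-independent reconstruction.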
Bamford, Simeon A; Hogri, Roni; Giovannucci, Andrea; Taub, Aryeh H; Herreros, Ivan; Verschure, Paul F M J; Mintz, Matti; Del Giudice, Paolo
2012-07-01
A very-large-scale integration field-programmable mixed-signal array specialized for neural signal processing and neural modeling has been designed. It has been fabricated as a core on a prototype chip intended for use in an implantable closed-loop prosthetic system aimed at rehabilitating the learning of a discrete motor response. The chosen experimental context is cerebellar classical conditioning of the eye-blink response. The programmable system is based on the intimate mixing of switched-capacitor analog techniques with low-speed digital computation; power-saving innovations within this framework are presented. The utility of the system is demonstrated by the implementation of a motor classical conditioning model applied to eye-blink conditioning in real time with associated neural signal processing. Paired conditioned and unconditioned stimuli were repeatedly presented to an anesthetized rat and recordings were taken simultaneously from two precerebellar nuclei. These paired stimuli were detected in real time from this multichannel data. This resulted in the acquisition of a trigger for a well-timed conditioned eye-blink response, and repetition of unpaired trials constructed from the same data led to the extinction of the conditioned response trigger, compatible with natural cerebellar learning in awake animals. PMID:22481832
Jiang, Jun; Zhang, Qinglin; Van Gaal, Simon
2015-01-01
Although previous work has shown that conflict can be detected in the absence of awareness, it is unknown how different sources of conflict (i.e., semantic, response) are processed in the human brain and whether these processes are differently modulated by conflict awareness. To explore this issue, we extracted oscillatory power dynamics from electroencephalographic (EEG) data recorded while human participants performed a modified version of the Stroop task. Crucially, in this task conflict awareness was manipulated by masking a conflict-inducing color word preceding a color patch target. We isolated semantic from response conflict by introducing four color words/patches, of which two were matched to the same response. We observed that both semantic as well as response conflict were associated with mid-frontal theta-band and parietal alpha-band power modulations, irrespective of the level of conflict awareness (high vs. low), although awareness of conflict increased these conflict-related power dynamics. These results show that both semantic and response conflict can be processed in the human brain and suggest that the neural oscillatory mechanisms in EEG reflect mainly “domain general” conflict processing mechanisms, instead of conflict source specific effects. PMID:26169473
Stochastic dynamic causal modelling of fMRI data: Should we care about neural noise?
Daunizeau, J.; Stephan, K.E.; Friston, K.J.
2012-01-01
Dynamic causal modelling (DCM) was introduced to study the effective connectivity among brain regions using neuroimaging data. Until recently, DCM relied on deterministic models of distributed neuronal responses to external perturbation (e.g., sensory stimulation or task demands). However, accounting for stochastic fluctuations in neuronal activity and their interaction with task-specific processes may be of particular importance for studying state-dependent interactions. Furthermore, allowing for random neuronal fluctuations may render DCM more robust to model misspecification and finesse problems with network identification. In this article, we examine stochastic dynamic causal models (sDCM) in relation to their deterministic counterparts (dDCM) and highlight questions that can only be addressed with sDCM. We also compare the network identification performance of deterministic and stochastic DCM, using Monte Carlo simulations and an empirical case study of absence epilepsy. For example, our results demonstrate that stochastic DCM can exploit the modelling of neural noise to discriminate between direct and mediated connections. We conclude with a discussion of the added value and limitations of sDCM, in relation to its deterministic homologue. PMID:22579726
Wai, Rong-Jong; Muthusamy, Rajkumar
2013-02-01
This paper presents the design and analysis of an intelligent control system that inherits the robust properties of sliding-mode control (SMC) for an n-link robot manipulator, including actuator dynamics, in order to achieve high-precision position tracking with firm robustness. First, the coupled higher-order dynamic model of an n-link robot manipulator is briefly introduced. Then, a conventional SMC scheme is developed for joint position tracking of robot manipulators. Moreover, a fuzzy-neural-network inherited SMC (FNNISMC) scheme is proposed to relax the requirement of detailed system information and deal with chattering control efforts in the SMC system. In the FNNISMC strategy, the FNN framework is designed to mimic the SMC law, and adaptive tuning algorithms for network parameters are derived in the sense of the projection algorithm and the Lyapunov stability theorem to ensure network convergence as well as stable control performance. Numerical simulations and experimental results for a two-link robot manipulator actuated by DC servo motors are provided to justify the claims of the proposed FNNISMC system, and the superiority of the proposed FNNISMC scheme is also evaluated by quantitative comparison with previous intelligent control schemes. PMID:24808281
The Neural Representation of Voluntary Task-Set Selection in Dynamic Environments.
Wisniewski, David; Reverberi, Carlo; Tusche, Anita; Haynes, John-Dylan
2015-12-01
When choosing actions, humans have to balance carefully between different task demands. On the one hand, they should perform tasks repeatedly to avoid frequent and effortful switching between different tasks. On the other hand, subjects have to retain their flexibility to adapt to changes in external task demands such as switching away from an increasingly difficult task. Here, we developed a difficulty-based choice task to investigate how subjects voluntarily select task-sets in predictably changing environments. Subjects were free to choose 1 of the 3 task-sets on a trial-by-trial basis, while the task difficulty changed dynamically over time. Subjects self-sequenced their behavior in this environment while we measured brain responses with functional magnetic resonance imaging (fMRI). Using multivariate decoding, we found that task choices were encoded in the medial prefrontal cortex (dorso-medial prefrontal cortex, dmPFC, and dorsal anterior cingulate cortex, dACC). The same regions were found to encode task difficulty, a major factor influencing choices. Importantly, the present paradigm allowed us to disentangle the neural code for task choices and task difficulty, ensuring that activation patterns in dmPFC/dACC independently encode these 2 factors. This finding provides new evidence for the importance of the dmPFC/dACC for task-selection and motivational functions in highly dynamic environments. PMID:25037922
Injury to the Spinal Cord Niche Alters the Engraftment Dynamics of Human Neural Stem Cells
Sontag, Christopher J.; Uchida, Nobuko; Cummings, Brian J.; Anderson, Aileen J.
2014-01-01
The microenvironment is a critical mediator of stem cell survival, proliferation, migration, and differentiation. The majority of preclinical studies involving transplantation of neural stem cells (NSCs) into the CNS have focused on injured or degenerating microenvironments, leaving a dearth of information as to how NSCs differentially respond to intact versus damaged CNS. Furthermore, single, terminal histological endpoints predominate, providing limited insight into the spatiotemporal dynamics of NSC engraftment and migration. We investigated the early and long-term engraftment dynamics of human CNS stem cells propagated as neurospheres (hCNS-SCns) following transplantation into uninjured versus subacutely injured spinal cords of immunodeficient NOD-scid mice. We stereologically quantified engraftment, survival, proliferation, migration, and differentiation at 1, 7, 14, 28, and 98 days posttransplantation, and identified injury-dependent alterations. Notably, the injured microenvironment decreased hCNS-SCns survival, delayed and altered the location of proliferation, influenced both total and fate-specific migration, and promoted oligodendrocyte maturation. PMID:24936450
Brownian dynamics of charged particles in a constant magnetic field
Hou, L. J.; Piel, A.; Miskovic, Z. L.; Shukla, P. K.
2009-05-15
Numerical algorithms are proposed for simulating the Brownian dynamics of charged particles in an external magnetic field, taking into account the Brownian motion of charged particles, the damping effect, and the effect of the magnetic field self-consistently. The performance of these algorithms is tested in terms of their accuracy and long-time stability using a three-dimensional Brownian oscillator model with a constant magnetic field. Step-by-step recipes for implementing these algorithms are given in detail. It is expected that these algorithms can be directly used to study particle dynamics in various dispersed systems in the presence of a magnetic field, including polymer solutions, colloidal suspensions, and, particularly, complex (dusty) plasmas. The proposed algorithms can also be used as a thermostat in the usual molecular dynamics simulations in the presence of a magnetic field.
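The underlying Langevin equation can be illustrated with a plain Euler-Maruyama scheme. This is a baseline sketch only, not the paper's proposed (more accurate, magnetic-field-aware) algorithms, and all parameters are arbitrary:

```python
import numpy as np

def langevin_magnetic(n_steps, dt, gamma, T, q, m, B, rng):
    """Euler-Maruyama integration of the Langevin equation for a charged
    Brownian particle in a uniform magnetic field B (units with k_B = 1):
        m dv = (q v x B - m*gamma*v) dt + sqrt(2*m*gamma*T) dW
    Returns the velocity trajectory."""
    v = np.zeros(3)
    sigma = np.sqrt(2 * gamma * T / m)        # noise amplitude per unit mass
    vs = np.empty((n_steps, 3))
    for i in range(n_steps):
        drift = q * np.cross(v, B) / m - gamma * v   # Lorentz + damping
        v = v + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(3)
        vs[i] = v
    return vs

rng = np.random.default_rng(1)
vs = langevin_magnetic(200_000, 1e-3, gamma=1.0, T=1.0, q=1.0, m=1.0,
                       B=np.array([0.0, 0.0, 2.0]), rng=rng)
# Equipartition check after discarding transients: <v_i^2> ≈ k_B T / m,
# independent of B (the magnetic force does no work).
print(np.mean(vs[50_000:]**2))
```

A naive scheme like this degrades at strong fields or weak damping, which is precisely the regime the paper's tailored algorithms are designed to handle stably.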
The magnetic field of Mars - Implications from gas dynamic modeling
NASA Technical Reports Server (NTRS)
Russell, C. T.; Luhmann, J. G.; Spreiter, J. R.; Stahara, S. S.
1984-01-01
On January 21, 1972, the Mars 3 spacecraft observed a variation in the magnetic field during its periapsis passage over the dayside of Mars that was suggestive of entry into a Martian magnetosphere. Original data and trajectory of the spacecraft have been obtained (Dolginov, 1983) and an attempt is made to simulate the observed variation of the magnetic field by using a gas dynamic simulation. In the gas dynamic model a flow field is generated and this flow field is used to carry the interplanetary magnetic field through the Martian magnetosheath. The independence of the flow field and magnetic field calculation makes it possible to converge rapidly on an IMF orientation that would result in a magnetic variation similar to that observed by Mars 3. There appears to be no need to invoke an entry into a Martian magnetosphere to explain these observations.
Dynamic changes in connexin expression following engraftment of neural stem cells to striatal tissue
Jaederstad, Johan; Jaederstad, Linda Maria; Herlenius, Eric
2011-01-01
Gap-junctional intercellular communication between grafted neural stem cells (NSCs) and host cells seems to be essential for many of the beneficial effects associated with NSC engraftment. Utilizing murine NSCs (mNSCs) grafted into an organotypic ex vivo model system of striatal tissue, we examined the prerequisites for the formation of gap-junctional couplings between graft and host cells at different time points following implantation. To this end, we utilized flow cytometry (to quantify the proportion of connexin (Cx) 26- and 43-expressing cells), immunohistochemistry (for localization of the gap-junctional proteins in graft and host cells), dye-transfer studies with and without pharmacological gap-junction blockers (assaying the functionality of the formed gap-junctional couplings), and proliferation assays (to estimate the role of gap junctions in NSC well-being). Immunohistochemical staining and dye-transfer studies revealed that the NSCs already form functional gap junctions prior to engraftment, thereby creating a substrate for subsequent graft-host communication. The expression of Cx43 by grafted NSCs was decreased by neurotrophin-3 overexpression in NSCs and by culturing of grafted tissue in serum-free Neurobasal B27 medium. Cx43 expression in NSC-derived cells also changed significantly following engraftment. In host cells the expression of Cx43 peaked following traumatic stimulation and then declined within two weeks, suggesting a window of opportunity for successful host cell rescue by NSC engraftment. Further investigation of the dynamic changes in gap junction expression in graft and host cells, and of the associated variations in intercellular communication between implanted and endogenous cells, might help to understand and control the early positive and negative effects evident following neural stem cell transplantation and thereby optimize the outcome of future clinical NSC transplantation therapies.
Qin, Zhong; Su, Gao-Li; Yu, Qiang; Hu, Bing-Min; Li, Jun
2005-05-01
In this work, datasets of water and carbon fluxes measured with the eddy covariance technique above a summer maize field in the North China Plain were simulated with artificial neural networks (ANNs) to explore the responses of the fluxes to local environmental variables. The results showed that photosynthetically active radiation (PAR), vapor pressure deficit (VPD), air temperature (T) and leaf area index (LAI) were the primary factors regulating both water vapor and carbon dioxide fluxes. Three-layer back-propagation (BP) neural networks could be applied to model flux exchange between the cropland surface and the atmosphere without using detailed physiological information or specific parameters of the plant. PMID:15822158
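As a rough illustration of the kind of model this abstract describes, the sketch below trains a three-layer back-propagation network on synthetic data; the four inputs stand in for drivers such as PAR, VPD, T and LAI, and the toy data, layer sizes and learning rate are all illustrative assumptions, not the authors' setup.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class BPNet:
    """One hidden layer of sigmoid units, linear output, online SGD."""
    def __init__(self, n_in, n_hidden, lr=0.1):
        self.w1 = [[random.uniform(-1, 1) for _ in range(n_in)]
                   for _ in range(n_hidden)]
        self.b1 = [0.0] * n_hidden
        self.w2 = [random.uniform(-1, 1) for _ in range(n_hidden)]
        self.b2 = 0.0
        self.lr = lr

    def forward(self, x):
        self.h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
                  for row, b in zip(self.w1, self.b1)]
        return sum(w * h for w, h in zip(self.w2, self.h)) + self.b2

    def train_step(self, x, y):
        err = self.forward(x) - y          # dLoss/dout for 0.5*(out-y)^2
        for j, h in enumerate(self.h):
            grad_h = err * self.w2[j] * h * (1.0 - h)   # back-propagated
            self.w2[j] -= self.lr * err * h
            for i, xi in enumerate(x):
                self.w1[j][i] -= self.lr * grad_h * xi
            self.b1[j] -= self.lr * grad_h
        self.b2 -= self.lr * err
        return 0.5 * err * err

# Toy "flux" that rises with a weighted sum of the four drivers.
inputs = [[random.random() for _ in range(4)] for _ in range(50)]
data = [(x, 0.6 * x[0] + 0.2 * x[1] - 0.1 * x[2] + 0.3 * x[3]) for x in inputs]

net = BPNet(n_in=4, n_hidden=5)
first = sum(net.train_step(x, y) for x, y in data)   # loss on first pass
for _ in range(300):
    last = sum(net.train_step(x, y) for x, y in data)
print(last < first)  # training reduces the summed squared error
```

The point of the sketch is only that a small BP network can fit a flux-like response surface without any plant physiology in the model, which is the paper's claim.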
NASA Astrophysics Data System (ADS)
Martin, R. F., Jr.; Holland, D. L.; Svetich, J.
2014-12-01
We consider dynamical signatures of ion motion that discriminate between a current sheet magnetic field reversal and a magnetic neutral line field. These two related dynamical systems have been studied previously as chaotic scattering systems with application to the Earth's magnetotail. Both systems exhibit chaotic scattering over a wide range of parameter values. The structure and properties of their respective phase spaces have been used to elucidate potential dynamical signatures that affect spacecraft-measured ion distributions. In this work we consider the problem of discriminating between these two magnetic structures using charged particle dynamics. For example, we show that signatures based on the well-known energy resonance in the current sheet field provide good discrimination, since the resonance is not present in the neutral line case. While both fields can lead to fractal exit region structuring, their characteristics are different and may also provide some field discrimination. Application to magnetotail field and particle parameters will be presented.
Saracoglu, Ö. Galip
2008-01-01
This paper describes artificial neural network (ANN) based prediction of the response of a fiber optic sensor using evanescent field absorption (EFA). The sensing probe of the sensor is made up of a bundle of five PCS fibers to maximize the interaction of the evanescent field with the absorbing medium. Different backpropagation algorithms are used to train the multilayer perceptron ANN. The Levenberg-Marquardt algorithm, as well as the other algorithms used in this work, successfully predicts the sensor responses.
Advances in neural networks research: an introduction.
Kozma, Robert; Bressler, Steven; Perlovsky, Leonid; Venayagamoorthy, Ganesh Kumar
2009-01-01
The present Special Issue "Advances in Neural Networks Research: IJCNN2009" provides a state-of-the-art overview of the field of neural networks. It includes 39 papers from selected areas of the 2009 International Joint Conference on Neural Networks (IJCNN2009). IJCNN2009 took place on June 14-19, 2009 in Atlanta, Georgia, USA, and it represents an exemplary collaboration between the International Neural Networks Society and the IEEE Computational Intelligence Society. Topics in this issue include neuroscience and cognitive science, computational intelligence and machine learning, hybrid techniques, nonlinear dynamics and chaos, various soft computing technologies, intelligent signal processing and pattern recognition, bioinformatics and biomedicine, and engineering applications. PMID:19632811
Liu, Ziyi; Gao, Junfeng; Yang, Guoguo; Zhang, Huan; He, Yong
2016-01-01
We present a pipeline for the visual localization and classification of agricultural pest insects by computing a saliency map and applying deep convolutional neural network (DCNN) learning. First, we used a global contrast region-based approach to compute a saliency map for localizing pest insect objects. Bounding squares containing targets were then extracted, resized to a fixed size, and used to construct a large standard database called Pest ID. This database was then utilized for self-learning of local image features which were, in turn, used for classification by DCNN. DCNN learning optimized the critical parameters, including the size, number and convolutional stride of local receptive fields, the dropout ratio and the final loss function. To demonstrate the practical utility of using DCNN, we explored different architectures by shrinking depth and width, and found effective sizes that can act as alternatives for practical applications. On the test set of paddy field images, our architectures achieved a mean Average Precision (mAP) of 0.951, a significant improvement over previous methods. PMID:26864172
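The global-contrast saliency step can be illustrated with a minimal sketch (not the authors' implementation): each pixel scores by its summed intensity distance to every other pixel, so pixels that differ from the background, like an insect on a leaf, score high. The toy image and intensity levels are assumptions.

```python
def global_contrast_saliency(img):
    """Saliency of each pixel = summed intensity distance to all other pixels."""
    hist = {}
    for row in img:
        for v in row:
            hist[v] = hist.get(v, 0) + 1
    # Histogram form makes the all-pairs sum O(levels) per distinct intensity.
    sal_of = {v: sum(cnt * abs(v - u) for u, cnt in hist.items()) for v in hist}
    return [[sal_of[v] for v in row] for row in img]

# Uniform background (0) with a small bright target (9), e.g. insect vs leaf.
img = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
sal = global_contrast_saliency(img)
print(sal[1][1], sal[0][0])  # -> 90 18: target pixels far outscore background
```

Thresholding such a map gives the bounding squares that the pipeline then resizes and feeds to the DCNN classifier.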
Shen, Hui-min; Lee, Kok-Meng; Hu, Liang; Foong, Shaohui; Fu, Xin
2016-01-01
Localization of an active neural source (ANS) from measurements on the head surface is vital in magnetoencephalography. As neuron-generated magnetic fields are extremely weak, significant uncertainties caused by stochastic measurement interference complicate its localization. This paper presents a novel computational method based on reconstructing the magnetic field from sparse noisy measurements for enhanced ANS localization by suppressing the effects of unrelated noise. In this approach, the magnetic flux density (MFD) in the nearby current-free space outside the head is reconstructed from measurements by formulating the infinite-series solution of Laplace's equation, where boundary condition (BC) integrals over the entire set of measurements provide a "smooth" reconstructed MFD with reduced unrelated noise. Using a gradient-based method, reconstructed MFDs with good fidelity are selected for enhanced ANS localization. The reconstruction model, the spatial interpolation of the BC, the parametric equivalent-current-dipole-based inverse estimation algorithm using the reconstruction, and the gradient-based selection are detailed and validated. The influences of various source depths and measurement signal-to-noise ratio levels on the estimated ANS location are analyzed numerically and compared with a traditional method (where measurements are used directly), and it is demonstrated that gradient-selected high-fidelity reconstructed data can effectively improve the accuracy of ANS localization. PMID:26358243
Raz, Gal; Winetraub, Yonatan; Jacob, Yael; Kinreich, Sivan; Maron-Katz, Adi; Shaham, Galit; Podlipsky, Ilana; Gilam, Gadi; Soreq, Eyal; Hendler, Talma
2012-04-01
Dynamic functional integration of distinct neural systems plays a pivotal role in emotional experience. We introduce a novel approach for studying emotion-related changes in the interactions within and between networks using fMRI. It is based on continuous computation of a network cohesion index (NCI), which is sensitive to both the strength and the variability of signal correlations between pre-defined regions. The regions encompass three clusters (limbic, medial prefrontal cortex (mPFC) and cognitive), each of which was previously shown to be involved in emotional processing. Two sadness-inducing film excerpts were viewed passively, and comparisons between the viewers' rated sadness, a parasympathetic index, and the inter- and intra-NCI were obtained. Limbic intra-NCI was associated with reported sadness in both movies. However, the correlation between the parasympathetic index, the rated sadness and the limbic NCI occurred in only one movie, possibly related to a "deactivated" pattern of sadness. In this film, rated sadness intensity also correlated with the mPFC intra-NCI, possibly reflecting temporal correspondence between sadness and sympathy. Furthermore, only for this movie did we find an association between the sadness rating and the mPFC-limbic inter-NCI time courses. By contrast, in the other film, in which sadness was reported to commingle with horror and anger, dramatic events coincided with disintegration of these networks. Together, this may point to a difference between the cinematic experiences with regard to inter-network dynamics related to emotional regulation. These findings demonstrate the advantage of a multi-layered dynamic analysis for elucidating the uniqueness of emotional experiences with regard to unguided processing of continuous and complex stimulation. PMID:22285693
Neural dynamics of feedforward and feedback processing in figure-ground segregation
Layton, Oliver W.; Mingolla, Ennio; Yazdanbakhsh, Arash
2014-01-01
Determining whether a region belongs to the interior or exterior of a shape (figure-ground segregation) is a core competency of the primate brain, yet the underlying mechanisms are not well understood. Many models assume that figure-ground segregation occurs by assembling progressively more complex representations through feedforward connections, with feedback playing only a modulatory role. We present a dynamical model of figure-ground segregation in the primate ventral stream wherein feedback plays a crucial role in disambiguating a figure's interior and exterior. We introduce a processing strategy whereby jitter in RF center locations and variation in RF sizes are exploited to enhance and suppress neural activity inside and outside of figures, respectively. Feedforward projections emanate from units that model cells in V4 known to respond to the curvature of boundary contours (curved contour cells), and feedback projections from units predicted to exist in IT that strategically group neurons with different RF sizes and RF center locations (teardrop cells). Neurons (convex cells) that preferentially respond when centered on a figure dynamically balance feedforward (bottom-up) information and feedback from higher visual areas. Their activation is enhanced when an interior portion of a figure is in the RF, via feedback from units that detect closure in the boundary contours of a figure. Our model produces maximal activity along the medial axis of well-known figures with and without concavities, and inside algorithmically generated shapes. Our results suggest that the dynamic balancing of feedforward signals with the specific feedback mechanisms proposed by the model is crucial for figure-ground segregation. PMID:25346703
Yamada, Tatsuro; Murata, Shingo; Arie, Hiroaki; Ogata, Tetsuya
2016-01-01
To work cooperatively with humans by using language, robots must not only acquire a mapping between language and their behavior but also autonomously utilize that mapping in appropriate contexts of interactive tasks online. To this end, we propose a novel learning method linking language to robot behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of language and behavior but as sequential data constructed from the actual temporal flow of the task. By doing this, the internal dynamics of the network models both language–behavior relationships and the temporal patterns of interaction. Here, “internal dynamics” refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing where in the interaction context it is as its current state, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate behavior in response to a human’s linguistic instruction. After learning, the network indeed formed an attractor structure representing both the language–behavior relationships and the task’s temporal pattern in its internal dynamics. In these dynamics, the language–behavior mapping was achieved by a branching structure. Repetition of the human’s instruction and the robot’s behavioral response was represented as a cyclic structure, and waiting for a subsequent instruction was represented as a fixed-point attractor. Thanks to this structure, the robot was able to interact online with a human on the given task by autonomously switching phases. PMID:27471463
NASA Astrophysics Data System (ADS)
Pei, J.-S.; Smyth, A. W.; Kosmatopoulos, E. B.
2004-08-01
This study attempts to demystify a powerful neural network approach for modelling non-linear hysteretic systems and, in turn, to streamline its architecture to achieve better computational efficiency. The recently developed neural network modelling approach, the Volterra/Wiener neural network (VWNN), has demonstrated its usefulness in identifying the restoring forces of hysteretic systems in an off-line or even an adaptive (on-line) mode; however, the mechanism of how and why it works has not been thoroughly explored, especially in terms of a physical interpretation. Artificial neural networks are often treated as "black box" modelling tools; in contrast, here the authors carry out a detailed analysis in terms of problem formulation and network architecture to explore the inner workings of this neural network. Based on an understanding of the dynamics of hysteretic systems, some simplifications and modifications are made to the original VWNN for predicting accelerations of hysteretic systems under arbitrary force excitations. Through further examination of the algorithm underlying the VWNN applications, the efficiency of the previously published approach is improved by reducing the number of hidden nodes without affecting the modelling accuracy of the network. One training example is presented to illustrate the application of the VWNN, and another is provided to demonstrate that the VWNN is able to yield a unique set of weights when the values of the controlling design parameters are fixed. The practical issue of how to choose the values of these important parameters is discussed to aid engineering applications.
Dynamical mean-field theory for transition metal dioxide molecules
NASA Astrophysics Data System (ADS)
Lin, Nan; Zgid, Dominika; Marianetti, Chris; Reichman, David; Millis, Andrew
2012-02-01
The utility of the dynamical mean-field approximation in quantum chemistry is investigated in the context of transition metal dioxide molecules, including TiO2 and CrO2. The choice of correlated orbitals and of correlations to treat dynamically is discussed. The dynamical mean-field solutions are compared to state-of-the-art quantum chemical calculations. The dynamical mean-field method is found to capture about 50% of the total correlation energy, and to produce very good results for the d-level occupancies and magnetic moments. We also present the excitation spectrum of these molecules, which is inaccessible to many wave-function-based methods. Conceptual and technical difficulties are outlined and discussed.
NASA Astrophysics Data System (ADS)
De Geeter, N.; Crevecoeur, G.; Leemans, A.; Dupré, L.
2015-01-01
In transcranial magnetic stimulation (TMS), an applied alternating magnetic field induces an electric field in the brain that can interact with the neural system. It is generally assumed that this induced electric field is the crucial effect exciting a certain region of the brain. More specifically, it is the component of this field parallel to the neuron’s local orientation, the so-called effective electric field, that can initiate neuronal stimulation. Deeper insight into the stimulation mechanisms can be acquired through extensive TMS modelling. Most models study simple representations of neurons with assumed geometries, whereas we embed realistic neural trajectories computed using tractography based on diffusion tensor images. This way of modelling ensures a more accurate spatial distribution of the effective electric field that is, in addition, patient and case specific. The case study of this paper focuses on single-pulse stimulation of the left primary motor cortex with a standard figure-of-eight coil. Including realistic neural geometry in the model demonstrates the strong and localized variations of the effective electric field between the tracts themselves and along them, due to the interplay of factors such as the tract’s position and orientation in relation to the TMS coil, the neural trajectory and its course along the white and grey matter interface. Furthermore, the influence of changes in the coil orientation is studied. Investigating the impact of tissue anisotropy confirms that its contribution is not negligible. Moreover, assuming isotropic tissues leads to errors of the same size as rotating or tilting the coil by 10 degrees. In contrast, the model proves to be less sensitive to the poorly known tissue conductivity values.
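The "effective electric field" referred to here is simply the component of the induced field along the neuron's local orientation, i.e. a projection; a minimal illustrative sketch (the field and fiber vectors are assumed values, not from the paper):

```python
def effective_field(E, fiber_dir):
    """Component of field vector E (V/m) along the local fiber direction."""
    norm = sum(d * d for d in fiber_dir) ** 0.5
    return sum(e * d / norm for e, d in zip(E, fiber_dir))

E = [30.0, 40.0, 0.0]                         # assumed induced field, V/m
print(effective_field(E, [1.0, 0.0, 0.0]))    # -> 30.0 for a fiber along x
```

Because the projection depends on the local tract direction, the same induced field yields strongly varying effective fields along a realistic, curving trajectory, which is the effect the tractography-based model captures.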
Dynamic signatures of quiet sun magnetic fields
NASA Technical Reports Server (NTRS)
Martin, S. F.
1983-01-01
The collision and disappearance of opposite polarity fields is observed most frequently at the borders of network cells. Due to observational limitations, the frequency, magnitude, and spatial distribution of magnetic flux loss have not yet been quantitatively determined at the borders or within the interiors of the cells. However, in agreement with published hypotheses of other authors, the disappearance of magnetic flux is speculated to be a consequence of either gradual or rapid magnetic reconnection, which could be the means of converting magnetic energy into the kinetic, thermal, and nonthermal sources of energy for microflares, spicules, the solar wind, and the heating of the solar corona.
Dynamically important magnetic fields near accreting supermassive black holes.
Zamaninasab, M; Clausen-Brown, E; Savolainen, T; Tchekhovskoy, A
2014-06-01
Accreting supermassive black holes at the centres of active galaxies often produce 'jets'--collimated bipolar outflows of relativistic particles. Magnetic fields probably play a critical role in jet formation and in accretion disk physics. A dynamically important magnetic field was recently found near the Galactic Centre black hole. If this is common and if the field continues to near the black hole event horizon, disk structures will be affected, invalidating assumptions made in standard models. Here we report that jet magnetic field and accretion disk luminosity are tightly correlated over seven orders of magnitude for a sample of 76 radio-loud active galaxies. We conclude that the jet-launching regions of these radio-loud galaxies are threaded by dynamically important fields, which will affect the disk properties. These fields obstruct gas infall, compress the accretion disk vertically, slow down the disk rotation by carrying away its angular momentum in an outflow and determine the directionality of jets. PMID:24899311
Monitoring the Earth's Dynamic Magnetic Field
Love, Jeffrey J.; Applegate, David; Townshend, John B.
2008-01-01
The mission of the U.S. Geological Survey's Geomagnetism Program is to monitor the Earth's magnetic field. Using ground-based observatories, the Program provides continuous records of magnetic field variations covering long timescales; disseminates magnetic data to various governmental, academic, and private institutions; and conducts research into the nature of geomagnetic variations for purposes of scientific understanding and hazard mitigation. The program is an integral part of the U.S. Government's National Space Weather Program (NSWP), which also includes programs in the National Aeronautics and Space Administration (NASA), the Department of Defense (DOD), the National Oceanic and Atmospheric Administration (NOAA), and the National Science Foundation (NSF). The NSWP works to provide timely, accurate, and reliable space weather warnings, observations, specifications, and forecasts, and its work is important for the U.S. economy and national security. Please visit the National Geomagnetism Program's website, http://geomag.usgs.gov, where you can learn more about the Program and the science of geomagnetism. You can find additional related information at the Intermagnet website, http://www.intermagnet.org.
Zhang, Zhijun; Li, Zhijun; Zhang, Yunong; Luo, Yamei; Li, Yuanqing
2015-12-01
We propose a dual-arm cyclic-motion-generation (DACMG) scheme based on a neural-dynamic method, which can remedy the joint-angle-drift phenomenon of a humanoid robot. In particular, according to a neural-dynamic design method, a cyclic-motion performance index is first exploited and applied. This cyclic-motion performance index is then integrated into a quadratic programming (QP)-type scheme with time-varying constraints, called the time-varying-constrained DACMG (TVC-DACMG) scheme. The scheme includes the kinematic motion equations of the two arms and the time-varying joint limits. The scheme can not only generate cyclic motion of the two arms of a humanoid robot but also control the arms to move to a desired position. In addition, the scheme considers physical-limit avoidance. To solve the QP problem, a recurrent neural network is presented and used to obtain the optimal solutions. Computer simulations and physical experiments demonstrate the effectiveness and accuracy of the TVC-DACMG scheme and the neural network solver. PMID:26340789
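This is not the paper's TVC-DACMG solver, but a minimal sketch of the general idea behind recurrent neural networks for QP: a projection network whose discretized dynamics settle on the solution of a box-constrained quadratic program. The matrix Q, vector c, bounds and step size are toy assumptions.

```python
def solve_qp(Q, c, lo, hi, dt=0.05, steps=4000):
    """Projection neural network: x <- P(x - dt*(Qx + c)), P clips to the box."""
    n = len(c)
    x = [0.0] * n
    for _ in range(steps):
        grad = [sum(Q[i][j] * x[j] for j in range(n)) + c[i] for i in range(n)]
        x = [min(hi[i], max(lo[i], x[i] - dt * grad[i])) for i in range(n)]
    return x

# Toy problem: 0.5*x'Qx + c'x with Q = 2I, c = (-2, 4), i.e. minimize
# (x0 - 1)^2 + (x1 + 2)^2 up to a constant, subject to 0 <= x <= 1.5.
Q = [[2.0, 0.0], [0.0, 2.0]]
c = [-2.0, 4.0]
x = solve_qp(Q, c, lo=[0.0, 0.0], hi=[1.5, 1.5])
print([round(v, 3) for v in x])  # -> [1.0, 0.0]: x0 free optimum, x1 clipped
```

In the paper's setting the box constraints play the role of the time-varying joint limits, and the fixed point of the network dynamics is the optimal joint-velocity command.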
A dynamic model of Venus's gravity field
NASA Technical Reports Server (NTRS)
Kiefer, W. S.; Richards, M. A.; Hager, B. H.; Bills, B. G.
1984-01-01
Unlike Earth, long wavelength gravity anomalies and topography correlate well on Venus. Venus's admittance curve from spherical harmonic degree 2 to 18 is inconsistent with either Airy or Pratt isostasy, but is consistent with dynamic support from mantle convection. A model using whole mantle flow and a high viscosity near surface layer overlying a constant viscosity mantle reproduces this admittance curve. On Earth, the effective viscosity deduced from geoid modeling increases by a factor of 300 from the asthenosphere to the lower mantle. These viscosity estimates may be biased by the neglect of lateral variations in mantle viscosity associated with hot plumes and cold subducted slabs. The different effective viscosity profiles for Earth and Venus may reflect their convective styles, with tectonism and mantle heat transport dominated by hot plumes on Venus and by subducted slabs on Earth. Convection at degree 2 appears much stronger on Earth than on Venus. A degree 2 convective structure may be unstable on Venus, but may have been stabilized on Earth by the insulating effects of the Pangean supercontinental assemblage.
A dynamic model of Venus's gravity field
NASA Technical Reports Server (NTRS)
Kiefer, W. S.; Richards, M. A.; Hager, B. H.; Bills, B. G.
1986-01-01
Unlike Earth, long wavelength gravity anomalies and topography correlate well on Venus. Venus's admittance curve from spherical harmonic degree 2 to 18 is inconsistent with either Airy or Pratt isostasy, but is consistent with dynamic support from mantle convection. A model using whole mantle flow and a high viscosity near surface layer overlying a constant viscosity mantle reproduces this admittance curve. On Earth, the effective viscosity deduced from geoid modeling increases by a factor of 300 from the asthenosphere to the lower mantle. These viscosity estimates may be biased by the neglect of lateral variations in mantle viscosity associated with hot plumes and cold subducted slabs. The different effective viscosity profiles for Earth and Venus may reflect their convective styles, with tectonism and mantle heat transport dominated by hot plumes on Venus and by subducted slabs on Earth. Convection at degree 2 appears much stronger on Earth than on Venus. A degree 2 convective structure may be unstable on Venus, but may have been stabilized on Earth by the insulating effects of the Pangean supercontinental assemblage.
High resolution, large dynamic range field map estimation
Dagher, Joseph; Reese, Timothy; Bilgin, Ali
2013-01-01
Purpose: We present a theory and a corresponding method to compute high resolution field maps over a large dynamic range. Theory and Methods: We derive a closed-form expression for the error in the field map value when computed from two echoes. We formulate an optimization problem to choose three echo times which result in a pair of maximally distinct error distributions. We use standard field mapping sequences at the prescribed echo times. We then design a corresponding estimation algorithm which takes advantage of the optimized echo times to disambiguate the field offset value. Results: We validate our method using high resolution images of a phantom at 7T. The resulting field maps demonstrate robust mapping over a large dynamic range and in low-SNR regions. We also present high resolution offset maps in vivo using both GRE and MEGE sequences. Even though the proposed echo time spacings are larger than the well-known phase-aliasing cutoff, the resulting field maps exhibit a large dynamic range without the use of phase unwrapping or spatial regularization techniques. Conclusion: We demonstrate a novel 3-echo field map estimation method which overcomes the traditional noise-dynamic range trade-off. PMID:23401245
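The ambiguity at stake can be sketched in a few lines (this is illustrative, not the paper's algorithm, and the echo times are assumed values): a two-echo phase-difference estimate recovers the off-resonance only modulo 1/ΔTE, while candidates from two different echo spacings intersect in the true value, which is what a third echo makes possible.

```python
import cmath
import math

def signal(f, te):
    """Unit-magnitude voxel signal at off-resonance f (Hz) and echo time te (s)."""
    return cmath.exp(2j * math.pi * f * te)

def candidates(f_true, t1, t2, search=400.0):
    """Off-resonance values consistent with the echo pair, within +/- search Hz."""
    dte = t2 - t1
    phase = cmath.phase(signal(f_true, t2) * signal(f_true, t1).conjugate())
    base = phase / (2.0 * math.pi * dte)          # wrapped two-echo estimate
    return {round(base + k / dte, 6) for k in range(-3, 4)
            if abs(base + k / dte) <= search}

f_true = 150.0                                    # Hz (assumed)
set_a = candidates(f_true, 0.0, 0.004)            # 4 ms spacing: 250 Hz aliasing
set_b = candidates(f_true, 0.0, 0.0053)           # 5.3 ms spacing: ~189 Hz aliasing
common = set_a & set_b
print(sorted(common))  # -> [150.0]: only the true offset survives both spacings
```

Longer echo spacings lower the noise in each estimate but shrink the unambiguous range; intersecting candidate sets from distinct spacings is one way to keep both, which is the trade-off the abstract says the method overcomes.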
NASA Astrophysics Data System (ADS)
Melnikov, Leonid A.; Novosselova, Anna V.; Blinova, Nadejda V.; Vinitsky, Sergey I.; Serov, Vladislav V.; Bakutkin, Valery V.; Camenskich, T. G.; Guileva, E. V.
2000-03-01
In this work we carried out numerical investigations of the potential dynamics of a neural network, treated as a nonlinear system, and of the dynamics of the visual nerve, which connects the retinal receptors with the striate cortex, in response to through-skin electrical excitation of the retina. The visual evoked potential constitutes this response and characterizes the state of the human brain through the state of the retinal structures and the conduction of the visual nerve fibers. The results of these investigations are presented. Specific features of the neural network, such as excitation and depression, are also taken into account. The model parameters used in the numerical investigation are discussed. A comparative analysis of the retina potential data and the data on external-signal registration by the visual centers of the cerebral hemispheres is also given.
Yan, Zheng; Wang, Jun
2014-03-01
This paper presents a neural network approach to robust model predictive control (MPC) for constrained discrete-time nonlinear systems with unmodeled dynamics affected by bounded uncertainties. The exact nonlinear model of the underlying process is not precisely known, but a partially known nominal model is available. This partially known nonlinear model is first decomposed into an affine term plus an unknown high-order term via Jacobian linearization. The linearization residue combined with the unmodeled dynamics is then modeled using an extreme learning machine via supervised learning. The minimax methodology is exploited to deal with bounded uncertainties. The minimax optimization problem is reformulated as a convex minimization problem and is iteratively solved by a two-layer recurrent neural network. The proposed neurodynamic approach to nonlinear MPC improves computational efficiency and sheds light on the real-time implementability of MPC technology. Simulation results are provided to substantiate the effectiveness and characteristics of the proposed approach. PMID:24807443
Detorakis, Georgios Is.; Rougier, Nicolas P.
2012-01-01
We investigate the formation and maintenance of ordered topographic maps in the primary somatosensory cortex as well as the reorganization of representations after sensory deprivation or cortical lesion. We consider both the critical (postnatal) period, where representations are shaped, and the post-critical period, where representations are maintained and possibly reorganized. We hypothesize that feed-forward thalamocortical connections are an adequate site of plasticity, while cortico-cortical connections are believed to drive a competitive mechanism that is critical for learning. We model a small skin patch located on the distal phalangeal surface of a digit as a set of 256 Merkel ending complexes (MEC) that feed a computational model of the primary somatosensory cortex (area 3b). This model is a two-dimensional neural field where spatially localized solutions (a.k.a. bumps) drive cortical plasticity through a Hebbian-like learning rule. Simulations explain the initial formation of ordered representations following repetitive and random stimulations of the skin patch. Skin lesions as well as cortical lesions are also studied, and the results confirm the possibility of reorganizing representations using the same learning rule, depending on the type of lesion. For severe lesions, the model suggests that cortico-cortical connections may play an important role in complete recovery. PMID:22808127
A neural model of the frontal eye fields with reward-based learning.
Ye, Weijie; Liu, Shenquan; Liu, Xuanliang; Yu, Yuguo
2016-09-01
Decision-making is a flexible process dependent on the accumulation of various kinds of information; however, the corresponding neural mechanisms are far from clear. We extended a layered model of the frontal eye field to a learning-based model, using computational simulations to explain the cognitive process of choice tasks. The core of this extended model has three aspects: direction-preferred populations that cluster together the neurons with the same orientation preference, rule modules that control different rule-dependent activities, and reward-based synaptic plasticity that modulates connections to flexibly change the decision according to task demands. After repeated attempts in a number of trials, the network successfully simulated three decision choice tasks: an anti-saccade task, a no-go task, and an associative task. We found that synaptic plasticity could modulate the competition of choices by suppressing erroneous choices while enhancing the correct (rewarding) choice. In addition, the trained model captured some properties exhibited in animal and human experiments, such as the latency of the reaction time distribution of anti-saccades, the stop signal mechanism for canceling a reflexive saccade, and the variation of latency to half-max selectivity. Furthermore, the trained model was capable of reproducing the re-learning procedures when switching tasks and reversing the cue-saccade association. PMID:27284696
Residual Separation of Magnetic Fields Using a Cellular Neural Network Approach
NASA Astrophysics Data System (ADS)
Albora, A. M.; Özmen, A.; Uçan, O. N.
In this paper, a Cellular Neural Network (CNN) has been applied to a magnetic regional/residual anomaly separation problem. CNN is an analog parallel computing paradigm defined in space and characterized by the locality of connections between processing neurons. The behavior of the CNN is defined by the template matrices A and B and the template vector I. We have optimized the weight coefficients of these templates using the Recurrent Perceptron Learning Algorithm (RPLA). The advantages of CNN as a real-time stochastic method are that it introduces little distortion to the shape of the original image and that it is not affected significantly by factors such as the overlap of the power spectra of residual fields. The proposed method is tested using synthetic examples, and the average depth of the buried objects has been estimated by power spectrum analysis. Next, the CNN approach is applied to magnetic data over the Golalan chromite mine in Elazig, in eastern Turkey. This area is among the largest and richest chromite masses in the world. We compared the performance of CNN to classical derivative approaches.
Nanosecond pulsed electric field thresholds for nanopore formation in neural cells
NASA Astrophysics Data System (ADS)
Roth, Caleb C.; Tolstykh, Gleb P.; Payne, Jason A.; Kuipers, Marjorie A.; Thompson, Gary L.; DeSilva, Mauris N.; Ibey, Bennett L.
2013-03-01
The persistent influx of ions through nanopores created upon cellular exposure to nanosecond pulsed electric fields (nsPEF) could be used to modulate neuronal function. One ion, calcium (Ca), is important to action potential firing and regulates many ion channels. However, uncontrolled hyper-excitability of neurons leads to Ca overload and neurodegeneration. Thus, to prevent unintended consequences of nsPEF-induced neural stimulation, knowledge of optimum exposure parameters is required. We determined the relationship between nsPEF exposure parameters (pulse width and amplitude) and nanopore formation in two cell types: rodent neuroblastoma (NG108) and mouse primary hippocampal neurons (PHN). We identified thresholds for nanoporation using Annexin V and FM1-43, to detect changes in membrane asymmetry, and through Ca influx using Calcium Green. The ED50 for a single 600 ns pulse, necessary to cause uptake of extracellular Ca, was 1.76 kV/cm for NG108 and 0.84 kV/cm for PHN. At 16.2 kV/cm, the ED50 for pulse width was 95 ns for both cell lines. Cadmium, a nonspecific Ca channel blocker, failed to prevent Ca uptake, suggesting that the observed influx is likely due to nanoporation. These data demonstrate that moderate-amplitude single nsPEF exposures result in rapid Ca influx that may be capable of controllably modulating neurological function.
Porée, Fabienne; Kachenoura, Amar; Carrault, Guy; Dal Molin, Renzo; Mabo, Philippe; Hernandez, Alfredo I.
2013-01-01
The study proposes a method to facilitate the remote follow-up of patients suffering from cardiac pathologies and treated with an implantable device, by synthesizing a 12-lead surface ECG from the intracardiac electrograms (EGM) recorded by the device. Two methods (direct and indirect), based on dynamic Time Delay artificial Neural Networks (TDNN), are proposed and compared with classical linear approaches. The direct method aims to estimate 12 different transfer functions between the EGM and each surface ECG signal. The indirect method is based on a preliminary orthogonalization phase of the available EGM and ECG signals, and the application of the TDNN between these orthogonalized signals, using only three transfer functions. These methods are evaluated on a dataset acquired from 15 patients. Correlation coefficients calculated between the synthesized and the real ECG show that the proposed TDNN methods represent an efficient way to synthesize the 12-lead ECG from two or four EGM signals, and that they perform better than the linear methods. We also evaluate the results as a function of the EGM configuration. The results are further supported by the comparison of extracted features and a qualitative analysis performed by a cardiologist. PMID:23086502
Modified neural dynamic surface approach to output feedback of MIMO nonlinear systems.
Sun, Guofa; Li, Dongwu; Ren, Xuemei
2015-02-01
We report an adaptive output feedback dynamic surface control (DSC) scheme, maintaining prescribed performance, for a class of uncertain multi-input multi-output nonlinear systems. Several control objectives are achieved by designing neural network observers and modifying the DSC method. First, to achieve output feedback control, a finite-time echo state network (ESN) observer with fast convergence is designed to obtain the system states online. Thus, the states that are immeasurable in traditional state feedback control are estimated, and the unknown functions are approximated by the ESN. Then, a modified DSC approach is developed by introducing a high-order sliding-mode differentiator to replace the first-order filter in each step, which reduces the effect of filter performance on closed-loop stability. Furthermore, input-to-state stability guarantees that all signals of the closed-loop system are semiglobally uniformly ultimately bounded. Specifically, the performance functions make the tracking errors converge to a compact set around the equilibrium. Two numerical examples illustrate the proposed control scheme with satisfactory results. PMID:25608286
Neural Dynamics of Audiovisual Synchrony and Asynchrony Perception in 6-Month-Old Infants
Kopp, Franziska; Dietrich, Claudia
2013-01-01
Young infants are sensitive to multisensory temporal synchrony relations, but the neural dynamics of temporal interactions between vision and audition in infancy are not well understood. We investigated audiovisual synchrony and asynchrony perception in 6-month-old infants using event-related brain potentials (ERP). In a prior behavioral experiment (n = 45), infants were habituated to an audiovisual synchronous stimulus and tested for recovery of interest by presenting an asynchronous test stimulus in which the visual stream was delayed with respect to the auditory stream by 400 ms. Infants who behaviorally discriminated the change in temporal alignment were included in further analyses. In the EEG experiment (final sample: n = 15), synchronous and asynchronous stimuli (visual delay of 400 ms) were presented in random order. Results show latency shifts in the auditory ERP components N1 and P2 as well as the infant ERP component Nc. Latencies in the asynchronous condition were significantly longer than in the synchronous condition. After video onset but preceding the auditory onset, amplitude modulations propagating from posterior to anterior sites and related to the Pb component of infants’ ERP were observed. Results suggest temporal interactions between the two modalities. Specifically, they point to the significance of anticipatory visual motion for auditory processing, and indicate young infants’ predictive capacities for audiovisual temporal synchrony relations. PMID:23346071
Ellamil, Melissa; Fox, Kieran C R; Dixon, Matthew L; Pritchard, Sean; Todd, Rebecca M; Thompson, Evan; Christoff, Kalina
2016-08-01
Thoughts arise spontaneously in our minds with remarkable frequency, but tracking the brain systems associated with the early inception of a thought has proved challenging. Here we addressed this issue by taking advantage of the heightened introspective ability of experienced mindfulness practitioners to observe the onset of their spontaneously arising thoughts. We found subtle differences in timing among the many regions typically recruited by spontaneous thought. In some of these regions, fMRI signal peaked prior to the spontaneous arising of a thought - most notably in the medial temporal lobe and inferior parietal lobule. In contrast, activation in the medial prefrontal, temporopolar, mid-insular, lateral prefrontal, and dorsal anterior cingulate cortices peaked together with or immediately following the arising of spontaneous thought. We propose that brain regions that show antecedent recruitment may be preferentially involved in the initial inception of spontaneous thoughts, while those that show later recruitment may be preferentially involved in the subsequent elaboration and metacognitive processing of spontaneous thoughts. Our findings highlight the temporal dynamics of neural recruitment surrounding the emergence of spontaneous thoughts and may help account for some of spontaneous thought's peculiar qualities, including its wild diversity of content and its links to memory and attention. PMID:27114056
Blind Source Separation and Dynamic Fuzzy Neural Network for Fault Diagnosis in Machines
NASA Astrophysics Data System (ADS)
Huang, Haifeng; Ouyang, Huajiang; Gao, Hongli
2015-07-01
Many assessment and detection methods are used to diagnose faults in machines. High accuracy in fault detection and diagnosis can be achieved by using numerical methods with noise-resistant properties. However, to some extent, noise always exists in measured data on real machines, which affects the identification results, especially in the diagnosis of early-stage faults. In view of this situation, a damage assessment method based on blind source separation and a dynamic fuzzy neural network (DFNN) is presented in this paper to diagnose early-stage machinery faults. In the processing of measurement signals, blind source separation is adopted to reduce noise. Sensitive features of these faults are then obtained by extracting low-dimensional manifold characteristics from the signals. The model for fault diagnosis is established based on the DFNN, and on-line computation is accelerated by means of compressed sensing. Numerical vibration signals of ball screw fault modes are processed by the model for mechanical fault diagnosis, and the results are in good agreement with the actual condition even at the early stage of fault development. This detection method is very useful in practice and feasible for early-stage fault diagnosis.
NASA Astrophysics Data System (ADS)
Carli, S.; Bonifetto, R.; Savoldi, L.; Zanino, R.
2015-09-01
A model based on Artificial Neural Networks (ANNs) is developed for the heated-line portion of a cryogenic circuit in which supercritical helium (SHe) flows and which also includes a cold circulator, valves, pipes/cryolines and heat exchangers between the main loop and a saturated liquid helium (LHe) bath. The heated line mimics the heat load coming from the superconducting magnets to their cryogenic cooling circuits during the operation of a tokamak fusion reactor. An ANN is trained, using the output of simulations of the circuit performed with the 4C thermal-hydraulic (TH) code, to reproduce the dynamic behavior of the heated line, including for the first time scenarios where different types of controls act on the circuit. The ANN is then implemented in the 4C circuit model as a new component that substitutes for the original 4C heated-line model. For different operational scenarios and control strategies, good agreement is shown between the simplified ANN model results and the original 4C results, as well as with experimental data from the HELIOS facility, confirming the suitability of this new approach which, extended to an entire magnet system, could enable real-time control of the cooling loops and fast assessment of control strategies for smoothing the heat load to the cryoplant.
Task-dependent neural representations of salient events in dynamic auditory scenes
Shuai, Lan; Elhilali, Mounya
2014-01-01
Selecting pertinent events in the cacophony of sounds that impinge on our ears every day is regulated by the acoustic salience of sounds in the scene as well as their behavioral relevance as dictated by top-down task-dependent demands. The current study aims to explore the neural signature of both facets of attention, as well as their possible interactions in the context of auditory scenes. Using a paradigm with dynamic auditory streams with occasional salient events, we recorded neurophysiological responses of human listeners using EEG while manipulating the subjects' attentional state as well as the presence or absence of a competing auditory stream. Our results showed that salient events caused an increase in the auditory steady-state response (ASSR) irrespective of attentional state or complexity of the scene. Such increase supplemented ASSR increases due to task-driven attention. Salient events also evoked a strong N1 peak in the ERP response when listeners were attending to the target sound stream, accompanied by an MMN-like component in some cases and changes in the P1 and P300 components under all listening conditions. Overall, bottom-up attention induced by a salient change in the auditory stream appears to mostly modulate the amplitude of the steady-state response and certain event-related potentials to salient sound events; though this modulation is affected by top-down attentional processes and the prominence of these events in the auditory scene as well. PMID:25100934
Kasabov, Nikola; Dhoble, Kshitij; Nuntalid, Nuttapod; Indiveri, Giacomo
2013-05-01
On-line learning and recognition of spatio- and spectro-temporal data (SSTD) is a very challenging task and an important one for the future development of autonomous machine learning systems with broad applications. Models based on spiking neural networks (SNN) have already proved their potential in capturing spatial and temporal data. One class of them, the evolving SNN (eSNN), uses a one-pass rank-order learning mechanism and a strategy to evolve a new spiking neuron and new connections to learn new patterns from incoming data. So far these networks have been mainly used for fast image and speech frame-based recognition. Alternative spike-time learning methods, such as Spike-Timing Dependent Plasticity (STDP) and its variant Spike Driven Synaptic Plasticity (SDSP), can also be used to learn spatio-temporal representations, but they usually require many iterations in an unsupervised or semi-supervised mode of learning. This paper introduces a new class of eSNN, dynamic eSNN, that utilises both rank-order learning and dynamic synapses to learn SSTD in a fast, on-line mode. The paper also introduces a new model called deSNN that utilises rank-order learning and SDSP spike-time learning in unsupervised, supervised, or semi-supervised modes. The SDSP learning is used to dynamically evolve the network's changing connection weights that capture spatio-temporal spike data clusters both during training and during recall. The new deSNN model is first illustrated on simple examples and then applied in two case studies: (1) moving object recognition using address-event representation (AER) with data collected using a silicon retina device; (2) EEG SSTD recognition for brain-computer interfaces. The deSNN models achieved superior performance in terms of accuracy and speed when compared with other SNN models that use either rank-order or STDP learning. The reason is that the deSNN makes use of both the information contained in the order of the first input spikes
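The one-pass rank-order mechanism the abstract refers to can be sketched in a few lines: earlier first spikes receive exponentially larger initial weights. The modulation factor `mod` below is an illustrative choice, not a value taken from the paper.

```python
def rank_order_weights(spike_times, mod=0.8):
    """One-pass rank-order learning: input i's initial weight is mod**rank,
    where rank is the position of i's first spike in the arrival order,
    so earlier spikes get exponentially larger weights (0 < mod < 1)."""
    order = sorted(range(len(spike_times)), key=lambda i: spike_times[i])
    weights = [0.0] * len(spike_times)
    for rank, i in enumerate(order):
        weights[i] = mod ** rank
    return weights
```

In a deSNN these rank-order weights would only set the initial values; SDSP-style spike-time learning would then adjust them during training and recall, which is not shown here.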
Arévalo, Orlando; Bornschlegl, Mona A.; Eberhardt, Sven; Ernst, Udo; Pawelzik, Klaus; Fahle, Manfred
2013-01-01
In everyday life, humans interact with a dynamic environment often requiring rapid adaptation of visual perception and motor control. In particular, new visuo–motor mappings must be learned while old skills have to be kept, such that after adaptation, subjects may be able to quickly change between two different modes of generating movements (‘dual–adaptation’). A fundamental question is how the adaptation schedule determines the acquisition speed of new skills. Given a fixed number of movements in two different environments, will dual–adaptation be faster if switches (‘phase changes’) between the environments occur more frequently? We investigated the dynamics of dual–adaptation under different training schedules in a virtual pointing experiment. Surprisingly, we found that acquisition speed of dual visuo–motor mappings in a pointing task is largely independent of the number of phase changes. Next, we studied the neuronal mechanisms underlying this result and other key phenomena of dual–adaptation by relating model simulations to experimental data. We propose a simple and yet biologically plausible neural model consisting of a spatial mapping from an input layer to a pointing angle which is subjected to a global gain modulation. Adaptation is performed by reinforcement learning on the model parameters. Despite its simplicity, the model provides a unifying account for a broad range of experimental data: It quantitatively reproduced the learning rates in dual–adaptation experiments for both direct effect, i.e. adaptation to prisms, and aftereffect, i.e. behavior after removal of prisms, and their independence on the number of phase changes. Several other phenomena, e.g. initial pointing errors that are far smaller than the induced optical shift, were also captured. Moreover, the underlying mechanisms, a local adaptation of a spatial mapping and a global adaptation of a gain factor, explained asymmetric spatial transfer and generalization of
Bengoetxea, Ana; Leurs, Françoise; Hoellinger, Thomas; Cebolla, Ana M.; Dan, Bernard; McIntyre, Joseph; Cheron, Guy
2014-01-01
In this study we employed a dynamic recurrent neural network (DRNN) in a novel fashion to reveal characteristics of control modules underlying the generation of muscle activations when drawing figures with the outstretched arm. We asked healthy human subjects to perform four different figure-eight movements in each of two workspaces (frontal plane and sagittal plane). We then trained a DRNN to predict the movement of the wrist from information in the EMG signals from seven different muscles. We trained different instances of the same network on a single movement direction, on all four movement directions in a single movement plane, or on all eight possible movement patterns and looked at the ability of the DRNN to generalize and predict movements for trials that were not included in the training set. Within a single movement plane, a DRNN trained on one movement direction was not able to predict movements of the hand for trials in the other three directions, but a DRNN trained simultaneously on all four movement directions could generalize across movement directions within the same plane. Similarly, the DRNN was able to reproduce the kinematics of the hand for both movement planes, but only if it was trained on examples performed in each one. As we will discuss, these results indicate that there are important dynamical constraints on the mapping of EMG to hand movement that depend on both the time sequence of the movement and on the anatomical constraints of the musculoskeletal system. In a second step, we injected EMG signals constructed from different synergies derived by PCA in order to identify the mechanical significance of each of these components. From these results, one can surmise that discrete-rhythmic movements may be constructed from three different fundamental modules, one regulating the co-activation of all muscles over the time span of the movement and two others eliciting patterns of reciprocal activation operating in orthogonal directions
Learning from adaptive neural dynamic surface control of strict-feedback systems.
Wang, Min; Wang, Cong
2015-06-01
Learning plays an essential role in autonomous control systems. However, how to achieve learning in a nonstationary environment for nonlinear systems is a challenging problem. In this paper, we present a learning method for a class of nth-order strict-feedback systems based on adaptive dynamic surface control (DSC) technology, which achieves the human-like ability of learning by doing, and doing with learned knowledge. To achieve the learning, this paper first proposes a stable adaptive DSC with auxiliary first-order filters, which ensures the boundedness of all the signals in the closed-loop system and the convergence of the tracking errors in a finite time. With the help of DSC, the derivative of the filter output variable is used as the neural network (NN) input instead of the traditional intermediate variables. As a result, the proposed adaptive DSC method greatly reduces the dimension of the NN inputs, especially for high-order systems. After the stable DSC design, we decompose the stable closed-loop system into a series of linear time-varying perturbed subsystems. Using a recursive design, the recurrent property of the NN input variables is easily verified, since the complexity is overcome using DSC. Subsequently, the partial persistent excitation condition of the radial basis function NN is satisfied. By combining a state transformation, accurate approximations of the closed-loop system dynamics are recursively achieved in a local region along recurrent orbits. A learning control method using the learned knowledge is then proposed to achieve closed-loop stability and improved control performance. Simulation studies demonstrate that the proposed scheme can not only reuse the learned knowledge to achieve better control performance, with a faster tracking convergence rate and a smaller tracking error, but also greatly alleviate the computational burden by reducing the number and complexity of the NN input variables. PMID:25069127
Arévalo, Orlando; Bornschlegl, Mona A; Eberhardt, Sven; Ernst, Udo; Pawelzik, Klaus; Fahle, Manfred
2013-01-01
In everyday life, humans interact with a dynamic environment often requiring rapid adaptation of visual perception and motor control. In particular, new visuo-motor mappings must be learned while old skills have to be kept, such that after adaptation, subjects may be able to quickly change between two different modes of generating movements ('dual-adaptation'). A fundamental question is how the adaptation schedule determines the acquisition speed of new skills. Given a fixed number of movements in two different environments, will dual-adaptation be faster if switches ('phase changes') between the environments occur more frequently? We investigated the dynamics of dual-adaptation under different training schedules in a virtual pointing experiment. Surprisingly, we found that acquisition speed of dual visuo-motor mappings in a pointing task is largely independent of the number of phase changes. Next, we studied the neuronal mechanisms underlying this result and other key phenomena of dual-adaptation by relating model simulations to experimental data. We propose a simple and yet biologically plausible neural model consisting of a spatial mapping from an input layer to a pointing angle which is subjected to a global gain modulation. Adaptation is performed by reinforcement learning on the model parameters. Despite its simplicity, the model provides a unifying account for a broad range of experimental data: It quantitatively reproduced the learning rates in dual-adaptation experiments for both direct effect, i.e. adaptation to prisms, and aftereffect, i.e. behavior after removal of prisms, and their independence on the number of phase changes. Several other phenomena, e.g. initial pointing errors that are far smaller than the induced optical shift, were also captured. Moreover, the underlying mechanisms, a local adaptation of a spatial mapping and a global adaptation of a gain factor, explained asymmetric spatial transfer and generalization of prism adaptation, as
Generalizing the dynamic field theory of spatial cognition across real and developmental time scales
Simmering, Vanessa R.; Spencer, John P.; Schutte, Anne R.
2008-01-01
Within cognitive neuroscience, computational models are designed to provide insights into the organization of behavior while adhering to neural principles. These models should provide sufficient specificity to generate novel predictions while maintaining the generality needed to capture behavior across tasks and/or time scales. This paper presents one such model, the Dynamic Field Theory (DFT) of spatial cognition, showing new simulations that provide a demonstration proof that the theory generalizes across developmental changes in performance in four tasks—the Piagetian A-not-B task, a sandbox version of the A-not-B task, a canonical spatial recall task, and a position discrimination task. Model simulations demonstrate that the DFT can accomplish both specificity—generating novel, testable predictions—and generality—spanning multiple tasks across development with a relatively simple developmental hypothesis. Critically, the DFT achieves generality across tasks and time scales with no modification to its basic structure and with a strong commitment to neural principles. The only change necessary to capture development in the model was an increase in the precision of the tuning of receptive fields as well as an increase in the precision of local excitatory interactions among neurons in the model. These small quantitative changes were sufficient to move the model through a set of quantitative and qualitative behavioral changes that span the age range from 8 months to 6 years and into adulthood. We conclude by considering how the DFT is positioned in the literature, the challenges on the horizon for our framework, and how a dynamic field approach can yield new insights into development from a computational cognitive neuroscience perspective. PMID:17716632
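Dynamic field models of the kind described above are variants of continuum neural field equations with local excitation and surround inhibition. The sketch below Euler-integrates a minimal 1-D field of this type; the kernel shape and all parameter values are illustrative assumptions, not those of the DFT papers.

```python
import numpy as np

def simulate_field(n=101, steps=400, dt=0.05, tau=1.0, h=-2.0, beta=4.0):
    """Euler-integrate a 1-D neural field
        tau * du/dt = -u + h + W @ f(u) + stim
    with a 'Mexican hat' interaction kernel (local excitation, broader
    surround inhibition) and a localized Gaussian input at the center."""
    xs = np.arange(n, dtype=float)
    d = xs[:, None] - xs[None, :]
    W = np.exp(-d**2 / 8.0) - 0.5 * np.exp(-d**2 / 32.0)  # interaction kernel
    stim = 5.0 * np.exp(-(xs - n // 2) ** 2 / 8.0)        # localized input
    u = np.full(n, h)                                     # resting level h
    for _ in range(steps):
        f = 1.0 / (1.0 + np.exp(-beta * u))               # sigmoid firing rate
        u = u + dt / tau * (-u + h + W @ f + stim)
    return u
```

With these parameters the field forms a single self-stabilized activation peak at the stimulated location while the rest of the field stays below threshold, which is the basic mechanism DFT models use to represent a remembered spatial location.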
NASA Astrophysics Data System (ADS)
Lauzon, N.; Anctil, F.; Baxter, C. W.
2006-07-01
This work addresses the issue of better considering the heterogeneity of precipitation fields within lumped rainfall-runoff models, where only areal mean precipitation is usually used as an input. A method using a Kohonen neural network is proposed for the clustering of precipitation fields. The evaluation and improvement of the performance of a lumped rainfall-runoff model for one-day-ahead predictions is then established based on this clustering. Multilayer perceptron neural networks are employed as lumped rainfall-runoff models. The Bas-en-Basset watershed in France, which is equipped with 23 rain gauges with data for a 21-year period, is employed as the application case. The results demonstrate the relevance of the proposed clustering method, which produces groups of precipitation fields that are in agreement with the global climatological features affecting the region, as well as with the topographic constraints of the watershed (i.e., orography). The strengths and weaknesses of the rainfall-runoff models are highlighted by the analysis of their performance vis-à-vis the clustering of precipitation fields. The results also show the capability of multilayer perceptron neural networks to account for the heterogeneity of precipitation, even when built as lumped rainfall-runoff models.
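A Kohonen map clusters input patterns by repeatedly pulling the best-matching unit, and its neighbors on the map, toward each sample. The sketch below is a minimal 1-D map with toy data; the learning-rate and neighborhood schedules are illustrative assumptions, not the configuration used in the study.

```python
import numpy as np

def train_som(data, n_units=2, epochs=50, lr0=0.5, sigma0=1.5, seed=0):
    """Minimal 1-D Kohonen map: each sample pulls its best-matching unit
    (and that unit's map neighbours) toward itself, with a learning rate
    and neighbourhood radius that both shrink linearly over the epochs."""
    rng = np.random.default_rng(seed)
    w = data[rng.choice(len(data), n_units, replace=False)].astype(float)
    for t in range(epochs):
        decay = 1.0 - t / epochs
        lr, sigma = lr0 * decay, sigma0 * decay + 1e-3
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))  # winning unit
            d = np.arange(n_units) - bmu                 # distance on the map
            h = np.exp(-d**2 / (2 * sigma**2))           # neighbourhood factor
            w += lr * h[:, None] * (x - w)
    return w

def assign(data, w):
    """Cluster label of each sample = index of its nearest unit."""
    return np.argmin(((data[:, None, :] - w[None, :, :]) ** 2).sum(-1), axis=1)
```

In the study's setting, `data` would hold rain-gauge precipitation fields (one 23-dimensional vector per day) rather than the toy scalars used in this sketch, and each cluster would feed a separate rainfall-runoff evaluation.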
Acceleration of adiabatic quantum dynamics in electromagnetic fields
Masuda, Shumpei; Nakamura, Katsuhiro
2011-10-15
We show a method to accelerate quantum adiabatic dynamics of wave functions under electromagnetic field (EMF) by developing the preceding theory [Masuda and Nakamura, Proc. R. Soc. London Ser. A 466, 1135 (2010)]. Treating the orbital dynamics of a charged particle in EMF, we derive the driving field which accelerates quantum adiabatic dynamics in order to obtain the final adiabatic states in any desired short time. The scheme is consolidated by describing a way to overcome possible singularities in both the additional phase and driving potential due to nodes proper to wave functions under EMF. As explicit examples, we exhibit the fast forward of adiabatic squeezing and transport of excited Landau states with nonzero angular momentum, obtaining the result consistent with the transitionless quantum driving applied to the orbital dynamics in EMF.
Garnier, Aurélie; Vidal, Alexandre; Huneau, Clément; Benali, Habib
2015-02-01
Neural mass modeling is a part of computational neuroscience that was developed to study the general behavior of a neuronal population. This type of mesoscopic model is able to generate output signals that are comparable to experimental data, such as electroencephalograms. Classically, neural mass models consider two interconnected populations: excitatory pyramidal cells and inhibitory interneurons. However, many authors have included an excitatory feedback on the pyramidal cell population. Two distinct approaches have been developed: a direct feedback on the main pyramidal cell population and an indirect feedback via a secondary pyramidal cell population. In this letter, we propose a new neural mass model that couples these two approaches. We perform a detailed bifurcation analysis and present a glossary of dynamical behaviors and associated time series. Our study reveals that the model is able to generate particular realistic time series that were never pointed out in either simulated or experimental data. Finally, we aim to evaluate the effect of balance between both excitatory feedbacks on the dynamical behavior of the model. For this purpose, we compute the codimension 2 bifurcation diagrams of the system to establish a map of the repartition of dynamical behaviors in a direct versus indirect feedback parameter space. A perspective of this work is, from a given temporal series, to estimate the parameter value range, especially in terms of direct versus indirect excitatory feedback. PMID:25514111
Ben-Tal, Alona; Smith, Jeffrey C
2008-04-01
A new model for aspects of the control of respiration in mammals has been developed. The model integrates a reduced representation of the brainstem respiratory neural controller together with peripheral gas exchange and transport mechanisms. The neural controller consists of two components. One component represents the inspiratory oscillator in the pre-Bötzinger complex (pre-BötC) incorporating biophysical mechanisms for rhythm generation. The other component represents the ventral respiratory group (VRG), which is driven by the pre-BötC for generation of inspiratory (pre)motor output. The neural model was coupled to simplified models of the lungs incorporating oxygen and carbon dioxide transport. The simplified representation of the brainstem neural circuitry regulates both the frequency and the amplitude of respiration in response to the partial pressures of oxygen and carbon dioxide in the blood, using proportional (P) and proportional-plus-integral (PI) controllers. We have studied the coupled system under open- and closed-loop control. We show that two breathing regimes can exist in the model. In one regime an increase in the inspiratory frequency is accompanied by an increase in amplitude. In the second regime an increase in frequency is accompanied by a decrease in amplitude. The dynamic response of the model to changes in the concentration of inspired O2 or inspired CO2 was compared qualitatively with experimental data reported in the physiological literature. We show that the dynamic response with a PI-controller fits the experimental data better, but suggests that when high levels of CO2 are inspired the respiratory system cannot reach steady state. Our model also predicts that there could be two possible mechanisms for apnea appearance when 100% O2 is inspired following a period of 5% inspired O2. This paper represents a novel attempt to link neural control and gas transport mechanisms, highlights important issues in amplitude and frequency
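The P and PI controllers mentioned above differ in one term: the integral accumulates past error and removes the steady-state offset a pure proportional controller leaves. The sketch below shows one PI step driving a generic first-order plant; the gains and the plant are illustrative stand-ins, not the respiratory model's blood-gas dynamics.

```python
def pi_step(setpoint, measured, integ, kp=0.8, ki=0.2, dt=0.1):
    """One step of a proportional-plus-integral (PI) controller.
    integ carries the accumulated (integral of) error between calls."""
    error = setpoint - measured
    integ += error * dt
    return kp * error + ki * integ, integ

# Drive a first-order 'plant' dx/dt = -x + u toward a setpoint of 1.0
# (Euler integration; the plant stands in for the controlled variable,
# e.g. a blood-gas partial pressure in the respiration model).
x, integ = 0.0, 0.0
for _ in range(2000):
    u, integ = pi_step(1.0, x, integ)
    x += 0.1 * (-x + u)
```

With ki = 0 the same loop settles below the setpoint (steady-state error kp/(1+kp) of the target); the integral term is what closes that gap, at the cost of slower transients, which mirrors the P-versus-PI comparison made in the paper.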
Neural Architectures for Control
NASA Technical Reports Server (NTRS)
Peterson, James K.
1991-01-01
The cerebellar model articulated controller (CMAC) neural architectures are shown to be viable for the purposes of real-time learning and control. Software tools for the exploration of CMAC performance are developed for three hardware platforms, the Macintosh, the IBM PC, and the SUN workstation. All algorithm development was done using the C programming language. These software tools were then used to implement an adaptive critic neuro-control design that learns in real-time how to back up a trailer truck. The truck backer-upper experiment is a standard performance measure in the neural network literature, but previously the training of the controllers was done off-line. With the CMAC neural architectures, it was possible to train the neuro-controllers on-line in real-time on an MS-DOS PC 386. CMAC neural architectures are also used in conjunction with a hierarchical planning approach to find collision-free paths over 2-D analog-valued obstacle fields. The method constructs a coarse resolution version of the original problem and then finds the corresponding coarse optimal path using multipass dynamic programming. CMAC artificial neural architectures are used to estimate the analog transition costs that dynamic programming requires. The CMAC architectures are trained in real-time for each obstacle field presented. The coarse optimal path is then used as a baseline for the construction of a fine scale optimal path through the original obstacle array. These results are a very good indication of the potential power of the neural architectures in control design. In order to reach as wide an audience as possible, we have run a seminar on neuro-control that has met once per week since 20 May 1991. This seminar has thoroughly discussed the CMAC architecture, relevant portions of classical control, back propagation through time, and adaptive critic designs.
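What makes a CMAC fast enough for the on-line training described above is its table-lookup structure: several offset tilings cover the input range, each input activates one tile per tiling, and learning only touches those few weights. The sketch below is a minimal 1-D CMAC fitting a linear function; all sizes and rates are illustrative assumptions, not the configuration of the report.

```python
import numpy as np

class CMAC:
    """Minimal 1-D CMAC: n_tilings shifted tilings cover [lo, hi]; each
    input activates one tile per tiling, the output is the average of the
    activated tiles' weights, and training is simple error correction."""
    def __init__(self, n_tilings=8, n_tiles=16, lo=0.0, hi=1.0):
        self.n_tilings = n_tilings
        self.lo = lo
        self.width = (hi - lo) / (n_tiles - 1)
        self.w = np.zeros((n_tilings, n_tiles + 1))  # +1 for offset overflow

    def _tiles(self, x):
        for t in range(self.n_tilings):
            offset = t * self.width / self.n_tilings  # shift each tiling
            yield t, int((x - self.lo + offset) / self.width)

    def predict(self, x):
        return sum(self.w[t, i] for t, i in self._tiles(x)) / self.n_tilings

    def train(self, x, target, lr=0.3):
        err = target - self.predict(x)
        for t, i in self._tiles(x):
            self.w[t, i] += lr * err  # moves predict(x) by lr * err
```

Because each update touches only `n_tilings` weights, a training step is constant-time regardless of table size, which is what allowed the adaptive critic and transition-cost estimators to be trained in real time on the hardware of the day.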
Electron Dynamics in Nanostructures in Strong Laser Fields
Kling, Matthias
2014-09-11
The goal of our research was to gain deeper insight into the collective electron dynamics in nanosystems in strong, ultrashort laser fields. The laser field strengths will be strong enough to extract and accelerate electrons from the nanoparticles and to transiently modify the materials electronic properties. We aimed to observe, with sub-cycle resolution reaching the attosecond time domain, how collective electronic excitations in nanoparticles are formed, how the strong field influences the optical and electrical properties of the nanomaterial, and how the excitations in the presence of strong fields decay.
Hidden Markov Models and Neural Networks for Fault Detection in Dynamic Systems
NASA Technical Reports Server (NTRS)
Smyth, Padhraic
1994-01-01
None given. (From conclusion): Neural networks combined with Hidden Markov Models (HMMs) can provide excellent detection and false-alarm-rate performance in fault detection applications. Modified models allow for novelty detection. The paper also covers some key contributions of the neural network model and its application status.
Rajagopalan, Janani; Modi, Shilpi; Kumar, Pawan; Khushu, Subash; Mandal, Manas K
2015-12-01
It is not clearly known why some people identify camouflaged objects with ease compared with others. The literature suggests that Field-Independent individuals detect camouflaged objects better than their Field-Dependent counterparts, but without evidence at the level of neural activation. A paradigm was designed to obtain neural correlates of camouflage detection, with real-life photographs, using functional magnetic resonance imaging. Twenty-three healthy human subjects were stratified as Field-Independent (FI) and Field-Dependent (FD) with Witkin's Embedded Figure Test. FIs performed better than FDs (marginal significance; p=0.054) during the camouflage detection task. fMRI revealed differential activation patterns between FI and FD subjects for this task. A one-sample t-test showed greater activation in terms of cluster size in FDs, whereas FIs showed additional areas for the same task. On direct comparison of the two groups, FI subjects showed additional activation in parts of the primary visual cortex, thalamus, cerebellum, and inferior and middle frontal gyri. Conversely, FDs showed greater activation in the inferior frontal gyrus, precentral gyrus, putamen, caudate nucleus and superior parietal lobule as compared to FIs. The results give preliminary evidence for the differential neural activation underlying the variance in cognitive styles of the two groups. PMID:26648036
Dynamics of neutral atoms in artificial magnetic field
NASA Astrophysics Data System (ADS)
Yu, Zi-Fa; Hu, Fang-Qi; Zhang, Ai-Xia; Xue, Ju-Kui
2016-02-01
Cyclotron dynamics of neutral atoms in a harmonic trap potential with an artificial magnetic field are studied theoretically. The cyclotron orbit is obtained analytically and confirmed numerically. When the external harmonic potential is absent, the artificial magnetic field results in singly periodic circular motion of the Bose gas through the emergence of a Lorentz-like force, similar to charged particles moving in a magnetic field. However, the coupling between the artificial magnetic field and the harmonic trap potential leads to rich and complex cyclotron trajectories, which depend on √(B² + 1), where B is the rescaled artificial magnetic field. When √(B² + 1) is a rational number, the cyclotron orbit is multiply periodic and closed. However, when √(B² + 1) is an irrational number, the cyclotron orbit is quasiperiodic, i.e., the cyclotron motion of the Bose gas is confined to an annular region and eventually becomes ergodic in this region. Furthermore, the cyclotron orbits also depend on the initial conditions of the Bose gas. Thus, the cyclotron dynamics of the Bose gas can be manipulated in a controllable way by changing the artificial magnetic field, harmonic trap potential and initial conditions. Our results provide direct theoretical evidence for the cyclotron dynamics of neutral atoms in an artificial gauge field.
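The dynamics described above can be reproduced numerically from the planar equations of motion. The sketch below is illustrative, not the paper's calculation: it assumes rescaled units with unit trap frequency, an arbitrary field strength B, and arbitrary initial conditions. Because the Lorentz-like force does no work, the trap-plus-kinetic energy is conserved, which gives a simple check on the integrator.

```python
# Toy RK4 integrator for a neutral atom in a 2-D harmonic trap (omega0 = 1,
# rescaled units) with an artificial magnetic field B. Units, B, step size,
# and initial conditions are illustrative assumptions.
def deriv(state, B):
    x, y, vx, vy = state
    # Lorentz-like force B*(v x z) plus the harmonic restoring force.
    return (vx, vy, B * vy - x, -B * vx - y)

def rk4_step(state, B, h):
    k1 = deriv(state, B)
    k2 = deriv(tuple(s + 0.5 * h * k for s, k in zip(state, k1)), B)
    k3 = deriv(tuple(s + 0.5 * h * k for s, k in zip(state, k2)), B)
    k4 = deriv(tuple(s + h * k for s, k in zip(state, k3)), B)
    return tuple(s + h / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def energy(state):
    # The magnetic force does no work, so this quantity should be conserved.
    x, y, vx, vy = state
    return 0.5 * (vx * vx + vy * vy) + 0.5 * (x * x + y * y)

def trajectory(B=1.0, h=0.01, steps=5000, state=(1.0, 0.0, 0.0, 0.5)):
    path = [state]
    for _ in range(steps):
        state = rk4_step(state, B, h)
        path.append(state)
    return path
```

Plotting `(x, y)` from `trajectory()` for different B shows the closed versus annular, quasiperiodic orbits the abstract contrasts.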
Planar cell polarity links axes of spatial dynamics in neural-tube closure.
Nishimura, Tamako; Honda, Hisao; Takeichi, Masatoshi
2012-05-25
Neural-tube closure is a critical step of embryogenesis, and its failure causes serious birth defects. Coordination of two morphogenetic processes--convergent extension and neural-plate apical constriction--ensures the complete closure of the neural tube. We now provide evidence that planar cell polarity (PCP) signaling directly links these two processes. In the bending neural plates, we find that a PCP-regulating cadherin, Celsr1, is concentrated in adherens junctions (AJs) oriented toward the mediolateral axes of the plates. At these AJs, Celsr1 cooperates with Dishevelled, DAAM1, and the PDZ-RhoGEF to upregulate Rho kinase, causing their actomyosin-dependent contraction in a planar-polarized manner. This planar-polarized contraction promotes simultaneous apical constriction and midline convergence of neuroepithelial cells. Together our findings demonstrate that PCP signals confer anisotropic contractility on the AJs, producing cellular forces that promote the polarized bending of the neural plate. PMID:22632972
DNA Breathing Dynamics in the Presence of a Terahertz Field
Alexandrov, B. S.; Gelev, V.; Bishop, A. R.; Usheva, A.; Rasmussen, K. Ø.
2010-01-01
We consider the influence of a terahertz field on the breathing dynamics of double-stranded DNA. We model the spontaneous formation of spatially localized openings of a damped and driven DNA chain, and find that linear instabilities lead to dynamic dimerization, while true local strand separations require a threshold amplitude mechanism. Based on our results we argue that a specific terahertz radiation exposure may significantly affect the natural dynamics of DNA, and thereby influence intricate molecular processes involved in gene expression and DNA replication. PMID:20174451
Plasticity and dislocation dynamics in a phase field crystal model.
Chan, Pak Yuen; Tsekenis, Georgios; Dantzig, Jonathan; Dahmen, Karin A; Goldenfeld, Nigel
2010-07-01
The critical dynamics of dislocation avalanches in plastic flow is examined using a phase field crystal model. In the model, dislocations are naturally created, without any ad hoc creation rules, by applying a shearing force to the perfectly periodic ground state. These dislocations diffuse, interact and annihilate with one another, forming avalanche events. By data collapsing the event energy probability density function for different shearing rates, a connection to interface depinning dynamics is confirmed. The relevant critical exponents agree with mean field theory predictions. PMID:20867460
General dynamics of quintessence fields: Comparison with type Ia SNe
NASA Astrophysics Data System (ADS)
Hernández-Aguayo, César; Ureña-López, L. Arturo
2012-08-01
This work discusses preliminary results for observational constraints on quintessence fields using dynamical-systems tools: finding the critical points in the phase space of the system and determining the special trajectories that connect physically relevant critical points. For cosmological dynamics, attractor trajectories exist in the form of heteroclinic trajectories of the phase space. The approach is tested by observing the behavior of two classes of quintessence fields, the monotonic freezing and the monotonic thawing types. The values obtained for the roll parameter λ and the variables of the approach are the same in both cases, which surprisingly corresponds to cosmological-constant behavior.
Fractional dynamics of charged particles in magnetic fields
NASA Astrophysics Data System (ADS)
Coronel-Escamilla, A.; Gómez-Aguilar, J. F.; Alvarado-Méndez, E.; Guerrero-Ramírez, G. V.; Escobar-Jiménez, R. F.
2016-02-01
In many physical applications electrons play a relevant role. For example, when a beam of electrons accelerated to relativistic velocities is used as an active medium to generate free-electron lasers (FEL), the electrons are bound to atoms but move freely in a magnetic field. The relaxation time, longitudinal effects and transverse variations of the optical field are parameters that play an important role in the efficiency of this laser. The electron dynamics in a magnetic field acts as a radiation source coupling to the electric field. The transverse motion of the electrons leads them to either gain energy from or lose energy to the field, depending on the position of the particle relative to the phase of the external radiation field. Because it is important to know precisely the displacement of charged particles in a magnetic field, in this work we study the fractional dynamics of charged particles in magnetic fields. Newton's second law is considered, with the order of the fractional differential equation in (0, 1]. Based on the Grünwald-Letnikov (GL) definition, a discretization of the fractional differential equations is reported to obtain numerical simulations. Comparison between the numerical solutions obtained with Euler's method for the classical case and with the GL definition in the fractional approach demonstrates the good performance of the numerical scheme. Three application examples are shown: a constant magnetic field, a ramp magnetic field and a harmonic magnetic field. In the first example the results show bistability. Dissipative effects are observed in the system, and the standard dynamics are recovered when the order of the fractional derivative is 1.
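A Grünwald-Letnikov discretization of the kind the abstract describes can be sketched for the constant-field case. The cyclotron equations, frequency, step size, and initial velocity below are illustrative assumptions; the scheme uses the standard GL coefficient recursion, and at order α = 1 it reduces exactly to forward Euler, which matches the abstract's consistency check against the classical case.

```python
# Sketch of an explicit Gruenwald-Letnikov (GL) scheme for the fractional
# cyclotron equations D^a vx = omega*vy, D^a vy = -omega*vx, with order
# a in (0, 1]. Parameters are illustrative, not from the paper.
def gl_coeffs(alpha, n):
    # c_j = (-1)^j * binom(alpha, j), via the standard recursion.
    c = [1.0]
    for j in range(1, n + 1):
        c.append(c[-1] * (1.0 - (alpha + 1.0) / j))
    return c

def fractional_cyclotron(alpha, omega=1.0, h=0.01, steps=1000, v0=(1.0, 0.0)):
    c = gl_coeffs(alpha, steps)
    ha = h ** alpha
    vx, vy = [v0[0]], [v0[1]]
    for n in range(1, steps + 1):
        # GL memory sums over the full velocity history.
        mx = sum(c[j] * vx[n - j] for j in range(1, n + 1))
        my = sum(c[j] * vy[n - j] for j in range(1, n + 1))
        vx.append(ha * omega * vy[n - 1] - mx)
        vy.append(-ha * omega * vx[n - 1] - my)
    return vx, vy
```

For α < 1 the history sum introduces the memory (and effective dissipation) the abstract reports; for α = 1 all coefficients beyond c₁ = −1 vanish, so only the previous step survives.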
Coherent Dynamics Following Strong Field Ionization of Polyatomic Molecules
NASA Astrophysics Data System (ADS)
Konar, Arkaprabha; Shu, Yinan; Lozovoy, Vadim; Jackson, James; Levine, Benjamin; Dantus, Marcos
2015-03-01
Molecules, as opposed to atoms, present confounding possibilities of nuclear and electronic motion upon strong field ionization. The dynamics and fragmentation patterns in response to the laser field are structure sensitive; therefore, a molecule cannot simply be treated as a ``bag of atoms'' during field induced ionization. We consider here to what extent molecules retain their molecular identity and properties under strong laser fields. Using time-of-flight mass spectrometry in conjunction with pump-probe techniques we study the dynamical behavior of these molecules, monitoring ion yield modulation caused by intramolecular motions post ionization. The delay scans show that among positional isomers the variations in relative energies, amounting to only a few hundred meVs, influence the dynamical behavior of the molecules despite their having experienced such high fields (V/Å). Ab initio calculations were performed to predict dynamics along with single and multiphoton resonances in the neutral and ionic states. We propose that single electron ionization occurs within an optical cycle with the electron carrying away essentially all of the energy, leaving behind little internal energy in the cation. Evidence for this observation comes from coherent vibrational motion governed by the potential energy surface of the ground state of the cation. Subsequent fragmentation of the cation takes place as a result of further photon absorption modulated by one- and two-photon resonances, which provide sufficient energy to overcome the dissociation energy.
The role of membrane dynamics in electrical and infrared neural stimulation
NASA Astrophysics Data System (ADS)
Moen, Erick K.; Beier, Hope T.; Ibey, Bennett L.; Armani, Andrea M.
2016-03-01
We recently developed a nonlinear optical imaging technique based on second harmonic generation (SHG) to identify membrane disruption events in live cells. This technique was used to detect nanoporation in the plasma membrane following nanosecond pulsed electric field (nsPEF) exposure. It has been hypothesized that similar poration events could be induced by the thermal gradients generated by infrared (IR) laser energy. Optical pulses are a highly desirable stimulus for the nervous system, as they are capable of inhibiting and producing action potentials in a highly localized but non-contact fashion. However, the underlying mechanisms involved in infrared neural stimulation (INS) are not well understood. The ability of our method to non-invasively measure membrane structure and transmembrane potential via two-photon fluorescence (TPF) makes it uniquely suited to neurological research. In this work, we leverage our technique to understand what role membrane structure plays during INS and contrast it with nsPEF stimulation. We begin by examining the effect of IR pulses on CHO-K1 cells before progressing to primary hippocampal neurons. The use of these two cell lines allows us to directly compare poration as a result of IR pulses to nsPEF exposure in both a neuron-derived cell line and one likely lacking native channels sensitive to thermal stimuli.
Nicola, Wilten; Tripp, Bryan; Scott, Matthew
2016-01-01
A fundamental question in computational neuroscience is how to connect a network of spiking neurons to produce desired macroscopic or mean field dynamics. One possible approach is through the Neural Engineering Framework (NEF). The NEF approach requires quantities called decoders which are solved through an optimization problem requiring large matrix inversion. Here, we show how a decoder can be obtained analytically for type I and certain type II firing rates as a function of the heterogeneity of its associated neuron. These decoders generate approximants for functions that converge to the desired function in mean-squared error like 1/N, where N is the number of neurons in the network. We refer to these decoders as scale-invariant decoders due to their structure. These decoders generate weights for a network of neurons through the NEF formula for weights. These weights force the spiking network to have arbitrary and prescribed mean field dynamics. The weights generated with scale-invariant decoders all lie on low dimensional hypersurfaces asymptotically. We demonstrate the applicability of these scale-invariant decoders and weight surfaces by constructing networks of spiking theta neurons that replicate the dynamics of various well known dynamical systems such as the neural integrator, Van der Pol system and the Lorenz system. As these decoders are analytically determined and non-unique, the weights are also analytically determined and non-unique. We discuss the implications for measured weights of neuronal networks. PMID:26973503
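The decoder optimization that the abstract contrasts with its analytical decoders is, in the standard NEF formulation, a regularized least-squares problem. The sketch below is illustrative, not from the paper: the rectified-linear tuning curves, the population layout, and the regularization constant are all assumptions chosen so the example stays self-contained.

```python
# Sketch of the standard NEF decoder computation: solve a regularized
# least-squares problem for decoders d so that sum_i d_i * a_i(x) ~ f(x) = x.
# Tuning curves, gains, and the regularization are illustrative assumptions.
def tuning_curve(x, e, b):
    # Rectified-linear rate neuron with preferred direction e and intercept b.
    return max(0.0, e * x - b)

def solve(G, u):
    # Naive Gaussian elimination with partial pivoting for the normal equations.
    n = len(u)
    M = [row[:] + [u[i]] for i, row in enumerate(G)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    d = [0.0] * n
    for k in range(n - 1, -1, -1):
        d[k] = (M[k][n] - sum(M[k][c] * d[c] for c in range(k + 1, n))) / M[k][k]
    return d

# Population of 20 neurons tiling x in [-1, 1], alternating preferred directions.
N = 20
enc = [1.0 if i % 2 == 0 else -1.0 for i in range(N)]
icp = [-1.0 + 2.0 * (i // 2) / (N // 2) for i in range(N)]
xs = [-1.0 + 2.0 * s / 200 for s in range(201)]
A = [[tuning_curve(x, enc[i], icp[i]) for i in range(N)] for x in xs]

# Normal equations (A^T A + lam*I) d = A^T f, with f(x) = x.
lam = 1e-6
G = [[sum(A[s][i] * A[s][j] for s in range(len(xs))) + (lam if i == j else 0.0)
      for j in range(N)] for i in range(N)]
u = [sum(A[s][i] * xs[s] for s in range(len(xs))) for i in range(N)]
d = solve(G, u)
xhat = [sum(A[s][i] * d[i] for i in range(N)) for s in range(len(xs))]
```

The matrix inversion here is the step the abstract's analytically determined, scale-invariant decoders avoid.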
Chhabria, Karishma; Chakravarthy, V. Srinivasa
2016-01-01
The motivation for developing simple minimal models of the neuro-glio-vascular (NGV) system arises from a recent modeling study elucidating the bidirectional information flow within the NGV system using 89 dynamic equations (1). While this was one of the first attempts at formulating a comprehensive model of the neuro-glio-vascular system, it poses severe restrictions on scaling up to network levels. On the contrary, low-dimensional models are convenient devices for simulating large networks that also provide an intuitive understanding of the complex interactions occurring within the NGV system. The key idea underlying the proposed models is to describe the glio-vascular system as a lumped system, which takes the neural firing rate as input and returns an “energy” variable (analogous to ATP) as output. To this end, we present two models: a biophysical neuro-energy model (Model 1, with five variables), comprising KATP channel activity governed by neuronal ATP dynamics, and a dynamic threshold model (Model 2, with three variables), depicting the dependence of the neural firing threshold on the ATP dynamics. Both models show different firing regimes, such as continuous spiking, phasic, and tonic bursting, depending on the ATP production coefficient, ɛp, and the external current. We then demonstrate that in a network comprising such energy-dependent neuron units, ɛp can modulate the local field potential (LFP) frequency and amplitude. Interestingly, low-frequency LFP dominates under low ɛp conditions, which is thought to be reminiscent of seizure-like activity observed in epilepsy. The proposed “neuron-energy” unit may be implemented in building models of NGV networks to simulate data obtained from multimodal neuroimaging systems, such as functional near-infrared spectroscopy coupled to electroencephalogram and functional magnetic resonance imaging coupled to electroencephalogram. Such models could also provide a theoretical basis for devising optimal neurorehabilitation strategies
Downscaling Transpiration from the Field to the Tree Scale using the Neural Network Approach
NASA Astrophysics Data System (ADS)
Hopmans, J. W.
2015-12-01
Estimating the spatial variability of actual evapotranspiration (ETa) in orchards is key when trying to quantify water (and associated nutrient) leaching, with both the mass-balance and inverse-modeling methods. ETa measurements, however, generally occur at larger scales (e.g., the eddy-covariance method) or have limited quantitative accuracy. In this study we propose to establish a statistical relation between field ETa and field-averaged variables known to be closely related to it, such as stem water potential (WP), soil water storage (WS) and ETc. For that we use 4 years of soil and almond tree water status data to train artificial neural networks (ANNs) predicting field-scale ETa and downscale the relation to the individual tree scale. ANNs composed of only two neurons in a hidden layer (11 parameters in total) proved to be the most accurate (overall RMSE = 0.0246 mm/h, R2 = 0.944), seemingly because adding more neurons generated overfitting of noise in the training dataset. According to the optimized weights in the best ANNs, the first hidden neuron could be considered in charge of relaying the ETc information while the other one would deal with the water stress response to stem WP, soil WS, and ETc. As individual trees had specific signatures for combinations of these variables, variability was generated in their ETa responses. The relative canopy cover was the main source of variability of ETa, while stem WP was the most influential factor for the ETa/ETc ratio. Trees on the drip-irrigated side of the orchard appeared to be less affected by low estimated soil WS in the root zone than those on the fanjet micro-sprinkler side, possibly due to a combination of (i) more substantial root biomass increasing the plant hydraulic conductance, (ii) bias in the soil WS estimation due to soil moisture heterogeneity on the drip side, and (iii) access to deeper water resources. Tree-scale ETa responses are in good agreement with soil-plant water relations reported in the literature, and
The Effect of Varying Magnetic Field Gradient on Combustion Dynamic
NASA Astrophysics Data System (ADS)
Suzdalenko, Vera; Zake, Maija; Barmina, Inesa; Gedrovics, Martins
2011-01-01
The focus of the recent experimental research is to provide control of the combustion dynamics and complex measurements (flame temperature, heat production rate, and composition of polluting emissions) for pelletized wood biomass using a non-uniform magnetic field that produces magnetic force interacting with magnetic moment of paramagnetic oxygen. The experimental results have shown that a gradient magnetic field provides enhanced mixing of the flame compounds by increasing combustion efficiency and enhancing the burnout of volatiles.
Hysteretic dynamics of active particles in a periodic orienting field
Romensky, Maksym; Scholz, Dimitri; Lobaskin, Vladimir
2015-01-01
Active motion of living organisms and artificial self-propelling particles has been an area of intense research at the interface of biology, chemistry and physics. Significant progress in understanding these phenomena has been related to the observation that dynamic self-organization in active systems has much in common with ordering in equilibrium condensed matter such as spontaneous magnetization in ferromagnets. The velocities of active particles may behave similar to magnetic dipoles and develop global alignment, although interactions between the individuals might be completely different. In this work, we show that the dynamics of active particles in external fields can also be described in a way that resembles equilibrium condensed matter. It follows simple general laws, which are independent of the microscopic details of the system. The dynamics is revealed through hysteresis of the mean velocity of active particles subjected to a periodic orienting field. The hysteresis is measured in computer simulations and experiments on unicellular organisms. We find that the ability of the particles to follow the field scales with the ratio of the field variation period to the particles' orientational relaxation time, which, in turn, is related to the particle self-propulsion power and the energy dissipation rate. The collective behaviour of the particles due to aligning interactions manifests itself at low frequencies via increased persistence of the swarm motion when compared with motion of an individual. By contrast, at high field frequencies, the active group fails to develop the alignment and tends to behave like a set of independent individuals even in the presence of interactions. We also report on asymptotic laws for the hysteretic dynamics of active particles, which resemble those in magnetic systems. The generality of the assumptions in the underlying model suggests that the observed laws might apply to a variety of dynamic phenomena from the motion of
NASA Astrophysics Data System (ADS)
Omori, Toshiaki; Horiguchi, Tsuyoshi
2004-12-01
We propose a two-layered neural network model for oscillatory phenomena in the thalamic system and investigate the effect of neuromodulation by acetylcholine on these phenomena by numerical simulation. The proposed model consists of a layer of thalamic reticular neurons and a layer of cholinergic neurons. We introduce dynamics for the concentration of acetylcholine that depend on the state of the cholinergic neurons, and assume that the conductance of the thalamic reticular neurons is dynamically regulated by acetylcholine. From the results of the numerical simulations, we find that a dynamical transition between a bursting state and a resting state occurs successively in the layer of thalamic reticular neurons due to acetylcholine. It thus turns out that neuromodulation by acetylcholine is important for the dynamical state transition in the thalamic system.
A Neural Theory of Visual Attention: Bridging Cognition and Neurophysiology
ERIC Educational Resources Information Center
Bundesen, Claus; Habekost, Thomas; Kyllingsbaek, Soren
2005-01-01
A neural theory of visual attention (NTVA) is presented. NTVA is a neural interpretation of C. Bundesen's (1990) theory of visual attention (TVA). In NTVA, visual processing capacity is distributed across stimuli by dynamic remapping of receptive fields of cortical cells such that more processing resources (cells) are devoted to behaviorally…
DYNAMICS OF CHROMOSPHERIC UPFLOWS AND UNDERLYING MAGNETIC FIELDS
Yurchyshyn, V.; Abramenko, V.; Goode, P.
2013-04-10
We used Hα −0.1 nm and magnetic field (at 1.56 μm) data obtained with the New Solar Telescope to study the origin of the disk counterparts of type II spicules, the so-called rapid blueshifted excursions (RBEs). The high time cadence of our chromospheric (10 s) and magnetic field (45 s) data allowed us to generate x-t plots using slits parallel to the spines of the RBEs. These plots, along with potential-field extrapolation, led us to suggest that the occurrence of RBEs is generally correlated with the appearance of new, mixed, or unipolar fields in close proximity to network fields. RBEs show a tendency to occur at the interface between large-scale fields and small-scale dynamic magnetic loops and thus are likely to be associated with the existence of a magnetic canopy. Detection of kinked and/or inverse Y-shaped RBEs further confirms this conclusion.
Dark energy parametrization motivated by scalar field dynamics
NASA Astrophysics Data System (ADS)
de la Macorra, Axel
2016-05-01
We propose a new dark energy (DE) parametrization motivated by the dynamics of a scalar field ϕ. We use an equation of state w parametrized in terms of two functions L and y, closely related to the dynamics of scalar fields, which is exact and involves no approximation. By choosing an appropriate ansatz for L we obtain a wide class of behaviors for the evolution of DE without the need to specify the scalar potential V. We parametrize L and y in terms of only four parameters, giving w a rich structure and allowing for a wide class of DE dynamics. Our w can either grow and later decrease, or the other way around; the steepness of the transition is not fixed, and it contains the ansatz w = w0 + wa(1 − a). Our parametrization follows closely the dynamics of a scalar field, and the function L allows us to connect it with the scalar potential V(ϕ). While the Universe is accelerating and the slow-roll approximation is valid, we get L ≃ (V′/V)². To determine the dynamics of DE we also calculate the background evolution and its perturbations, since they are important to discriminate between different DE models.
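The ansatz w = w0 + wa(1 − a) contained in this parametrization is the standard CPL form, for which the dark-energy density evolves in closed form: ρ(a)/ρ0 = a^(−3(1+w0+wa)) exp(−3 wa (1 − a)). The sketch below (illustrative parameter values, not from the paper) checks that closed form against direct quadrature of the defining integral.

```python
import math

# The CPL part of the abstract's ansatz: w(a) = w0 + wa*(1 - a).
def w_cpl(a, w0, wa):
    return w0 + wa * (1.0 - a)

def rho_closed(a, w0, wa):
    # rho(a)/rho_0 = a^(-3(1+w0+wa)) * exp(-3*wa*(1-a)), the standard CPL result.
    return a ** (-3.0 * (1.0 + w0 + wa)) * math.exp(-3.0 * wa * (1.0 - a))

def rho_numeric(a, w0, wa, n=20000):
    # rho(a)/rho_0 = exp(3 * integral_a^1 (1 + w(s))/s ds), trapezoid rule.
    h = (1.0 - a) / n
    total = 0.0
    for k in range(n + 1):
        s = a + k * h
        f = (1.0 + w_cpl(s, w0, wa)) / s
        total += f * (0.5 if k in (0, n) else 1.0)
    return math.exp(3.0 * h * total)
```

For w0 = −1, wa = 0 both expressions reduce to a constant density, the cosmological-constant limit.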
NASA Technical Reports Server (NTRS)
Russell, Samuel S.; Lansing, Matthew D.
1997-01-01
This effort used a novel method of acquiring strains, called Sub-pixel Digital Video Image Correlation (SDVIC), on impact-damaged Kevlar/epoxy filament-wound pressure vessels during a proof test. To predict the burst pressure, the hoop strain field distribution around the impact location from three vessels was used to train a neural network. The network was then tested on additional pressure vessels. Several variations on the network were tried. The best results were obtained using a single hidden layer. SDVIC is a full-field, non-contact computer vision technique which provides in-plane deformation and strain data over a load differential. This method was used to determine hoop and axial displacements, hoop and axial linear strains, and the in-plane shear strains and rotations in the regions surrounding impact sites in filament-wound pressure vessels (FWPV) during proof loading by internal pressurization. The relationship between these deformation measurement values and the remaining life of the pressure vessels, however, requires a complex theoretical model or numerical simulation. Both of these techniques are time consuming and complicated. Previous results using neural network methods had been successful in predicting the burst pressure for graphite/epoxy pressure vessels based upon acoustic emission (AE) measurements in similar tests. The neural network associates the character of the AE amplitude distribution, which depends upon the extent of impact damage, with the burst pressure. Similarly, higher amounts of impact damage are theorized to cause a higher amount of strain concentration in the damage-affected zone at a given pressure and to result in lower burst pressures. This relationship suggests that a neural network might be able to find an empirical relationship between the SDVIC strain field data and the burst pressure, analogous to the AE method, with greater speed and simplicity than theoretical or finite element modeling. The process of testing SDVIC
Learning real-world stimuli in a neural network with spike-driven synaptic dynamics.
Brader, Joseph M; Senn, Walter; Fusi, Stefano
2007-11-01
We present a model of spike-driven synaptic plasticity inspired by experimental observations and motivated by the desire to build an electronic hardware device that can learn to classify complex stimuli in a semisupervised fashion. During training, patterns of activity are sequentially imposed on the input neurons, and an additional instructor signal drives the output neurons toward the desired activity. The network is made of integrate-and-fire neurons with constant leak and a floor. The synapses are bistable, and they are modified by the arrival of presynaptic spikes. The sign of the change is determined by both the depolarization and the state of a variable that integrates the postsynaptic action potentials. Following the training phase, the instructor signal is removed, and the output neurons are driven purely by the activity of the input neurons weighted by the plastic synapses. In the absence of stimulation, the synapses preserve their internal state indefinitely. Memories are also very robust to the disruptive action of spontaneous activity. A network of 2000 input neurons is shown to be able to classify correctly a large number (thousands) of highly overlapping patterns (300 classes of preprocessed Latex characters, 30 patterns per class, and a subset of the NIST characters data set) and to generalize with performances that are better than or comparable to those of artificial neural networks. Finally we show that the synaptic dynamics is compatible with many of the experimental observations on the induction of long-term modifications (spike-timing-dependent plasticity and its dependence on both the postsynaptic depolarization and the frequency of pre- and postsynaptic neurons). PMID:17883345
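The bistability that lets these synapses preserve memories indefinitely can be caricatured in a few lines. This is a toy sketch, not the paper's full model (which additionally gates updates on a calcium-like variable integrating postsynaptic spikes); the thresholds, drift rate, and jump size are illustrative assumptions.

```python
# Toy bistable synapse: an internal variable X in [0, 1] drifts toward 0 or 1
# depending on a threshold, and presynaptic spikes kick X up or down depending
# on the postsynaptic depolarization. All constants are illustrative.
THETA_X, DRIFT, JUMP = 0.5, 0.01, 0.15

def drift_step(x):
    # Without stimulation, X relaxes to the nearer stable state (0 or 1),
    # which is what makes the stored memory robust.
    return min(1.0, x + DRIFT) if x > THETA_X else max(0.0, x - DRIFT)

def presyn_spike(x, v_post, theta_v=0.8):
    # Spike-driven update: potentiate if the postsynaptic neuron is
    # sufficiently depolarized, otherwise depress.
    return min(1.0, x + JUMP) if v_post > theta_v else max(0.0, x - JUMP)

def settle(x, steps=200):
    # Let the synapse relax in the absence of presynaptic activity.
    for _ in range(steps):
        x = drift_step(x)
    return x
```

A spike arriving during strong postsynaptic depolarization can push X across the threshold, after which the drift consolidates the change; sub-threshold excursions decay away, mirroring the robustness to spontaneous activity the abstract reports.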
Dempere-Marco, Laura; Melcher, David P.; Deco, Gustavo
2012-01-01
The study of working memory capacity is of utmost importance in cognitive psychology as working memory is at the basis of general cognitive function. Although the working memory capacity limit has been thoroughly studied, its origin still remains a matter of strong debate. Only recently has the role of visual saliency in modulating working memory storage capacity been assessed experimentally and proved to provide valuable insights into working memory function. In the computational arena, attractor networks have successfully accounted for psychophysical and neurophysiological data in numerous working memory tasks given their ability to produce a sustained elevated firing rate during a delay period. Here we investigate the mechanisms underlying working memory capacity by means of a biophysically-realistic attractor network with spiking neurons while accounting for two recent experimental observations: 1) the presence of a visually salient item reduces the number of items that can be held in working memory, and 2) visually salient items are commonly kept in memory at the cost of not keeping as many non-salient items. Our model suggests that working memory capacity is determined by two fundamental processes: encoding of visual items into working memory and maintenance of the encoded items upon their removal from the visual display. While maintenance critically depends on the constraints that lateral inhibition imposes to the mnemonic activity, encoding is limited by the ability of the stimulated neural assemblies to reach a sufficiently high level of excitation, a process governed by the dynamics of competition and cooperation among neuronal pools. Encoding is therefore contingent upon the visual working memory task and has led us to introduce the concept of effective working memory capacity (eWMC) in contrast to the maximal upper capacity limit only reached under ideal conditions. PMID:22952608
Tunable nonequilibrium dynamics of field quenches in spin ice
Mostame, Sarah; Castelnovo, Claudio; Moessner, Roderich; Sondhi, Shivaji L.
2014-01-01
We present nonequilibrium physics in spin ice as a unique setting that combines kinematic constraints, emergent topological defects, and magnetic long-range Coulomb interactions. In spin ice, magnetic frustration leads to highly degenerate yet locally constrained ground states. Together, they form a highly unusual magnetic state—a “Coulomb phase”—whose excitations are point-like defects—magnetic monopoles—in the absence of which effectively no dynamics is possible. Hence, when they are sparse at low temperature, dynamics becomes very sluggish. When quenching the system from a monopole-rich to a monopole-poor state, a wealth of dynamical phenomena occur, the exposition of which is the subject of this article. Most notably, we find reaction diffusion behavior, slow dynamics owing to kinematic constraints, as well as a regime corresponding to the deposition of interacting dimers on a honeycomb lattice. We also identify potential avenues for detecting the magnetic monopoles in a regime of slow-moving monopoles. The interest in this model system is further enhanced by its large degree of tunability and the ease of probing it in experiment: With varying magnetic fields at different temperatures, geometric properties—including even the effective dimensionality of the system—can be varied. By monitoring magnetization, spin correlations or zero-field NMR, the dynamical properties of the system can be extracted in considerable detail. This establishes spin ice as a laboratory of choice for the study of tunable, slow dynamics. PMID:24379372
Miconi, Thomas; VanRullen, Rufin
2016-01-01
Visual attention has many effects on neural responses, producing complex changes in firing rates, as well as modifying the structure and size of receptive fields, both in topological and feature space. Several existing models of attention suggest that these effects arise from selective modulation of neural inputs. However, anatomical and physiological observations suggest that attentional modulation targets higher levels of the visual system (such as V4 or MT) rather than input areas (such as V1). Here we propose a simple mechanism that explains how a top-down attentional modulation, falling on higher visual areas, can produce the observed effects of attention on neural responses. Our model requires only the existence of modulatory feedback connections between areas, and short-range lateral inhibition within each area. Feedback connections redistribute the top-down modulation to lower areas, which in turn alters the inputs of other higher-area cells, including those that did not receive the initial modulation. This produces firing rate modulations and receptive field shifts. Simultaneously, short-range lateral inhibition between neighboring cells produces competitive effects that are automatically scaled to receptive field size in any given area. Our model reproduces the observed attentional effects on response rates (response gain, input gain, biased competition automatically scaled to receptive field size) and receptive field structure (shifts and resizing of receptive fields both spatially and in complex feature space), without modifying model parameters. Our model also makes the novel prediction that attentional effects on response curves should shift from response gain to contrast gain as the spatial focus of attention drifts away from the studied cell. PMID:26890584
WATER TEMPERATURE DYNAMICS IN EXPERIMENTAL FIELD CHANNELS: ANALYSIS AND MODELING
This study is on water temperature dynamics in the shallow field channels of the USEPA Monticello Ecological Research Station (MERS). The hydraulic and temperature environment in the MERS channels was measured and simulated to provide some background for several biological studie...
OLD-FIELD SUCCESSIONAL DYNAMICS FOLLOWING INTENSIVE HERBIVORY
Community composition and successional patterns can be altered by disturbance and exotic species invasions. Our objective was to describe vegetation dynamics following cessation of severe disturbance, which was heavy grazing by cattle, in an old-field grassland subject to invasi...
The effective field theorist's approach to gravitational dynamics
NASA Astrophysics Data System (ADS)
Porto, Rafael A.
2016-05-01
We review the effective field theory (EFT) approach to gravitational dynamics. We focus on extended objects in long-wavelength backgrounds and gravitational wave emission from spinning binary systems. We conclude with an introduction to EFT methods for the study of cosmological large scale structures.
Using Dynamic Field Theory to Rethink Infant Habituation
ERIC Educational Resources Information Center
Schoner, Gregor; Thelen, Esther
2006-01-01
Much of what psychologists know about infant perception and cognition is based on habituation, but the process itself is still poorly understood. Here the authors offer a dynamic field model of infant visual habituation, which simulates the known features of habituation, including familiarity and novelty effects, stimulus intensity effects, and…
Book review: old fields: dynamics and restoration of abandoned farmland
Technology Transfer Automated Retrieval System (TEKTRAN)
The 2007 volume, “Old Fields: Dynamics and Restoration of Abandoned Farmland”, edited by VA Cramer and RJ Hobbs and published by the Society for Ecological Restoration International (Island Press), is a valuable attempt to synthesize a dozen case studies on agricultural abandonment from all of the ...
Deciphering and prediction of plant dynamics under field conditions.
Izawa, Takeshi
2015-04-01
Elucidation of plant dynamics under fluctuating natural environments is a challenging goal in plant physiology. Recently, using statistical modeling that integrates a series of transcriptome data from field-grown rice leaves over an entire crop season with corresponding environmental data such as solar radiation and ambient temperature, most of the transcriptome has been modeled. This reveals the detailed contributions of developmental timing, circadian clocks and each environmental factor to transcriptome dynamics in the field, and allows prediction of transcriptome dynamics under given environments. Furthermore, some traits, such as flowering time in natural environments, have been shown to be predictable by mathematical models based on gene networks parameterized with data obtained in the laboratory, and by phenology models refined with knowledge from molecular genetics. A new molecular physiology is emerging in plant science. PMID:25706440
Taub-NUT dynamics with a magnetic field
NASA Astrophysics Data System (ADS)
Jante, Rogelio; Schroers, Bernd J.
2016-06-01
We study classical and quantum dynamics on the Euclidean Taub-NUT geometry coupled to an abelian gauge field with self-dual curvature and show that, even though Taub-NUT has neither bounded orbits nor quantum bound states, the magnetic binding via the gauge field produces both. The conserved Runge-Lenz vector of Taub-NUT dynamics survives, in a modified form, in the gauged model and allows for an essentially algebraic computation of classical trajectories and energies of quantum bound states. We also compute scattering cross sections and find a surprising electric-magnetic duality. Finally, we exhibit the dynamical symmetry behind the conserved Runge-Lenz and angular momentum vectors in terms of a twistorial formulation of phase space.
Quantum emitters dynamically coupled to a quantum field
Acevedo, O. L.; Quiroga, L.; Rodríguez, F. J.; Johnson, N. F.
2013-12-04
We study theoretically the dynamical response of a set of solid-state quantum emitters arbitrarily coupled to a single-mode microcavity system. Ramping the matter-field coupling strength in round trips, we quantify the hysteresis or irreversible quantum dynamics. The matter-field system is modeled as a finite-size Dicke model which has previously been used to describe equilibrium (including quantum phase transition) properties of systems such as quantum dots in a microcavity. Here we extend this model to address non-equilibrium situations. Analyzing the system’s quantum fidelity, we find that the near-adiabatic regime exhibits the richest phenomena, with a strong asymmetry in the internal collective dynamics depending on which phase is chosen as the starting point. We also explore signatures of the crossing of the critical points on the radiation subsystem by monitoring its Wigner function; then, the subsystem can exhibit the emergence of non-classicality and complexity.
NASA Astrophysics Data System (ADS)
Agrawal, Paras M.; Samadh, Abdul N. A.; Raff, Lionel M.; Hagan, Martin T.; Bukkapatnam, Satish T.; Komanduri, Ranga
2005-12-01
A new approach involving neural networks combined with molecular dynamics has been used for the determination of reaction probabilities as a function of various input parameters for the reactions associated with the chemical-vapor deposition of carbon dimers on a diamond (100) surface. The data generated by the simulations have been used to train and test neural networks. The probabilities of chemisorption, scattering, and desorption as a function of input parameters, such as rotational energy, translational energy, and direction of the incident velocity vector of the carbon dimer, have been considered. The very good agreement obtained between the predictions of neural networks and those provided by molecular dynamics and the fact that, after training the network, the determination of the interpolated probabilities as a function of various input parameters involves only the evaluation of simple analytical expressions rather than computationally intensive algorithms show that neural networks are extremely powerful tools for interpolating the probabilities and rates of chemical reactions. We also find that a neural network fits the underlying trends in the data rather than the statistical variations present in the molecular-dynamics results. Consequently, neural networks can also provide a computationally convenient means of averaging the statistical variations inherent in molecular-dynamics calculations. In the present case the application of this method is found to reduce the statistical uncertainty in the molecular-dynamics results by about a factor of 3.5.
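The workflow described above — fit a network to noisy trajectory statistics, then evaluate cheap closed-form predictions instead of rerunning the dynamics — can be illustrated on synthetic data. The energy dependence, noise level, and tiny random-feature network below are stand-ins chosen for reproducibility, not the study's actual data or architecture (the authors trained conventional neural networks); the example also shows the abstract's point that the fit tracks the underlying trend rather than the statistical noise.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for MD data: a reaction probability versus incident
# translational energy, blurred by trajectory-count noise.
E = np.linspace(0.0, 4.0, 200)[:, None]
p_true = 1.0 / (1.0 + np.exp(-3.0 * (E - 2.0)))
p_obs = p_true + 0.05 * rng.standard_normal(E.shape)

# A minimal random-feature network: fixed random tanh hidden layer,
# output weights fitted by linear least squares.
W = rng.standard_normal((1, 20))
b = rng.standard_normal(20)
H = np.tanh((E - 2.0) @ W + b)
w_out, *_ = np.linalg.lstsq(H, p_obs, rcond=None)

# Once fitted, predictions are simple analytic expressions in E.
p_fit = H @ w_out
```

Because the smooth feature basis cannot represent the point-to-point noise, the fitted curve lies closer to the underlying probability than the noisy training targets do, which is the averaging effect the abstract reports.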
Approximate photochemical dynamics of azobenzene with reactive force fields
Li, Yan; Hartke, Bernd
2013-12-14
We have fitted reactive force fields of the ReaxFF type to the ground and first excited electronic states of azobenzene, using global parameter optimization by genetic algorithms. Upon coupling with a simple energy-gap transition probability model, this setup allows for completely force-field-based simulations of photochemical cis→trans- and trans→cis-isomerizations of azobenzene, with qualitatively acceptable quantum yields. This paves the way towards large-scale dynamics simulations of molecular machines, including bond breaking and formation (via the reactive force field) as well as photochemical engines (presented in this work).
Qi, L.; Carr, T.R.
2006-01-01
In the Hugoton Embayment of southwestern Kansas, St. Louis Limestone reservoirs have relatively low recovery efficiencies, attributed to the heterogeneous nature of the oolitic deposits. This study establishes quantitative relationships between digital well logs and core description data, and applies these relationships in a probabilistic sense to predict lithofacies in 90 uncored wells across the Big Bow and Sand Arroyo Creek fields. In 10 wells, a single-hidden-layer neural network based on digital well logs and core-described lithofacies of the limestone depositional texture was used to train and establish a non-linear relationship between lithofacies assignments from detailed core descriptions and selected log curves. Neural network models were optimized by selecting six predictor variables and automated cross-validation of network parameters, and were then used to predict lithofacies on the whole data set of 2023 half-foot intervals from the 10 cored wells with the selected network size of 35 and a damping parameter of 0.01. Predicted lithofacies compared to actual lithofacies display absolute accuracies of 70.37-90.82%. Allowing matches within one adjacent lithofacies improves accuracy slightly (93.72%). Digital logs from uncored wells were batch processed to predict lithofacies, and the probabilities associated with each lithofacies, at half-foot resolution corresponding to log units. The results were used to construct interpolated cross-sections, and useful depositional patterns of St. Louis lithofacies were illustrated, e.g., the concentration of oolitic deposits (including lithofacies 5 and 6) along local highs and the relative dominance of quartz-rich carbonate grainstone (lithofacies 1) in zones A and B of the St. Louis Limestone. Neural network techniques are applicable to other complex reservoirs, in which facies geometry and distribution are the key factors controlling heterogeneity and the distribution of rock properties. Future work
Dynamics of lysozyme and its hydration water under electric field
Favi, Pelagie M; Zhang, Qiu; O'Neill, Hugh Michael; Mamontov, Eugene; Omar Diallo, Souleymane; Palmer, Jeremy
2014-01-01
The effects of a static electric field on the dynamics of lysozyme and its hydration water have been investigated by means of incoherent quasi-elastic neutron scattering (QENS). Measurements were performed on lysozyme samples, hydrated respectively with heavy water (D2O) to capture the protein dynamics, and with light water (H2O) to probe the dynamics of the hydration shell, in the temperature range 210 K < T < 260 K. The hydration fraction in both cases was about 0.38 gram of water per gram of dry protein. The field strengths investigated were respectively 0 kV/mm and 2 kV/mm (2 × 10^6 V/m) for the protein hydrated with D2O, and 0 kV/mm and 1 kV/mm for the H2O-hydrated counterpart. While the internal proton dynamics of the protein appears to be unaffected by the application of an electric field up to 2 kV/mm, likely due to the stronger intra-molecular interactions, there is also no appreciable quantitative enhancement of the diffusive dynamics of the hydration water, as would be anticipated based on our recent observations in water confined in silica pores under field values of 2.5 kV/mm. This may be due to the difference in surface interactions between water and the two adsorption hosts (silica and protein), or to the existence of a critical threshold field value Ec ≈ 2-3 kV/mm for increased molecular diffusion, for which electrical breakdown is a limitation for our sample.
Cichy, Radoslaw Martin; Khosla, Aditya; Pantazis, Dimitrios; Torralba, Antonio; Oliva, Aude
2016-01-01
The complex multi-stage architecture of cortical visual pathways provides the neural basis for efficient visual object recognition in humans. However, the stage-wise computations therein remain poorly understood. Here, we compared temporal (magnetoencephalography) and spatial (functional MRI) visual brain representations with representations in an artificial deep neural network (DNN) tuned to the statistics of real-world visual recognition. We showed that the DNN captured the stages of human visual processing in both time and space from early visual areas towards the dorsal and ventral streams. Further investigation of crucial DNN parameters revealed that while model architecture was important, training on real-world categorization was necessary to enforce spatio-temporal hierarchical relationships with the brain. Together our results provide an algorithmically informed view on the spatio-temporal dynamics of visual object recognition in the human visual brain. PMID:27282108
Foti, Dan; Roberts, Felicia
2016-01-01
The neural circuitry for speech perception is well-characterized, yet the temporal dynamics therein are largely unknown. This timing information is critical in that spoken language almost always occurs in the context of joint speech (i.e., conversations) where effective communication requires the precise timing of speaker turn-taking-a core aspect of prosody. Here, we used event-related potentials to characterize neural activity elicited by conversation stimuli within a large, unselected adult sample (N=115). We focused on two stages of speech perception: inter-speaker gaps and speaker responses. We found activation in two known speech perception networks, with functional and neuroanatomical specificity: silence during inter-speaker gaps primarily activated the posterior pathway involving the supramarginal gyrus and premotor cortex, whereas hearing speaker responses primarily activated the anterior pathway involving the superior temporal gyrus. These data provide the first direct evidence that the posterior pathway is uniquely involved in monitoring speaker turn-taking. PMID:27177112
Cichy, Radoslaw Martin; Khosla, Aditya; Pantazis, Dimitrios; Torralba, Antonio; Oliva, Aude
2016-01-01
The complex multi-stage architecture of cortical visual pathways provides the neural basis for efficient visual object recognition in humans. However, the stage-wise computations therein remain poorly understood. Here, we compared temporal (magnetoencephalography) and spatial (functional MRI) visual brain representations with representations in an artificial deep neural network (DNN) tuned to the statistics of real-world visual recognition. We showed that the DNN captured the stages of human visual processing in both time and space from early visual areas towards the dorsal and ventral streams. Further investigation of crucial DNN parameters revealed that while model architecture was important, training on real-world categorization was necessary to enforce spatio-temporal hierarchical relationships with the brain. Together our results provide an algorithmically informed view on the spatio-temporal dynamics of visual object recognition in the human visual brain. PMID:27282108
Simulation of Pedestrian Dynamic Using a Vector Floor Field Model
NASA Astrophysics Data System (ADS)
Yang, Jun; Hou, Zhongsheng; Zhan, Minghui
2013-04-01
Simulating complex scenarios and multi-directional pedestrian flow is a major challenge for microscopic models of pedestrian movement. It is difficult to simulate real pedestrian traffic with high fidelity while keeping the computational cost at an acceptable level. This paper reports on an improved floor field model, called the vector floor field model, to simulate pedestrian flows in some basic scenarios. In this model, vectorization of the static floor field and the dynamic floor field is used to indicate preferred directions and the pedestrian flow tendency, respectively. Pedestrian transitions depend on both the preferred directions and this tendency. Simulations in several basic scenarios are conducted, quantitative comparisons to records of practical experiments and to the standard floor field model are given, and the results indicate the effectiveness of this model. An adjusted static vector floor field is also proposed to simulate pedestrian flow in a turning scenario. The vector floor field model is also sufficient to reproduce some essential features of pedestrian dynamics, such as lane formation. This model can be widely used in the simulation of multi-directional pedestrian flow at turnings, crossings and other junctions.
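The core transition rule shared by floor field models can be sketched as follows: each candidate cell is weighted by the exponential of a coupling constant times the static floor field (here, the negative Manhattan distance to the exit). This is a minimal single-field sketch with an invented field and coupling; the paper's vector variant, which additionally scores the directions of the static and dynamic fields, is omitted.

```python
import numpy as np

def step_probabilities(pos, exit_pos, k_s=2.0):
    """Transition probabilities of one pedestrian on a grid: candidate
    cells (four neighbors plus staying put) are weighted by
    exp(k_s * S), where the static floor field S is the negative
    Manhattan distance to the exit. Returns (cells, probabilities)."""
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]
    cells = [(pos[0] + dr, pos[1] + dc) for dr, dc in moves]
    S = np.array([-abs(r - exit_pos[0]) - abs(c - exit_pos[1])
                  for r, c in cells], dtype=float)   # static floor field
    w = np.exp(k_s * S)
    return cells, w / w.sum()
```

In a full simulation a dynamic floor field (a decaying, diffusing trace of recent moves) enters the same exponential weight, which is what produces collective effects such as lane formation.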
Dynamics of molecular superrotors in an external magnetic field
NASA Astrophysics Data System (ADS)
Korobenko, Aleksey; Milner, Valery
2015-08-01
We excite diatomic oxygen and nitrogen to high rotational states with an optical centrifuge and study their dynamics in an external magnetic field. Ion imaging is employed to directly visualize, and follow in time, the rotation plane of the molecular superrotors. The two different mechanisms of interaction between the magnetic field and the molecular angular momentum in paramagnetic oxygen and non-magnetic nitrogen lead to qualitatively different behaviour. In nitrogen, we observe the precession of the molecular angular momentum around the field vector. In oxygen, strong spin-rotation coupling results in faster and richer dynamics, encompassing the splitting of the rotation plane into three separate components. As the centrifuged molecules evolve with no significant dispersion of the molecular wave function, the observed magnetic interaction presents an efficient mechanism for controlling the plane of molecular rotation.
An implicit divalent counterion force field for RNA molecular dynamics
NASA Astrophysics Data System (ADS)
Henke, Paul S.; Mak, Chi H.
2016-03-01
How to properly account for polyvalent counterions in a molecular dynamics simulation of polyelectrolytes such as nucleic acids remains an open question. Not only do counterions such as Mg2+ screen electrostatic interactions, they also produce attractive intrachain interactions that stabilize secondary and tertiary structures. Here, we show how a simple force field derived from a recently reported implicit counterion model can be integrated into a molecular dynamics simulation for RNAs to realistically reproduce key structural details of both single-stranded and base-paired RNA constructs. This divalent counterion model is computationally efficient. It works with existing atomistic force fields, or coarse-grained models may be tuned to work with it. We provide optimized parameters for a coarse-grained RNA model that takes advantage of this new counterion force field. Using the new model, we illustrate how the structural flexibility of RNA two-way junctions is modified under different salt conditions.
NASA Technical Reports Server (NTRS)
Plumer, Edward S.
1991-01-01
A technique is developed for vehicle navigation and control in the presence of obstacles. A potential function was devised that peaks at the surface of obstacles and has its minimum at the proper vehicle destination. This function is computed using a systolic array and is guaranteed not to have local minima. A feedforward neural network is then used to control the steering of the vehicle using local potential field information. In this case, the vehicle is a trailer truck backing up. Previous work has demonstrated the capability of a neural network to control steering of such a trailer truck backing to a loading platform, but without obstacles. Here, the neural network was able to learn to navigate a trailer truck around obstacles while backing toward its destination. The network is trained in an obstacle-free space to follow the negative gradient of the field, after which the network is able to control and navigate the truck to its target destination in a space of obstacles which may be stationary or movable.
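The follow-the-negative-gradient idea can be sketched for a point vehicle. Note the hedge: the potential below is a simple quadratic well plus repulsive obstacle peaks, which, unlike the systolic-array field of the paper, is not guaranteed free of local minima; the gradient follower stands in for the trained feedforward controller, and the truck kinematics are omitted entirely.

```python
import numpy as np

def potential(p, goal, obstacles, k_rep=0.3):
    """Illustrative field: quadratic well at the goal plus a repulsive
    peak at each obstacle (this simple sum can have local minima, which
    the paper's construction explicitly avoids)."""
    u = float(np.sum((p - goal) ** 2))
    for ob in obstacles:
        u += k_rep / (1e-6 + float(np.sum((p - ob) ** 2)))
    return u

def steer(start, goal, obstacles, step=0.05, iters=4000):
    """Follow the negative numerical gradient of the local field with a
    bounded step, the role played by the trained network in the paper."""
    p, g = np.asarray(start, float), np.asarray(goal, float)
    for _ in range(iters):
        grad = np.zeros(2)
        for i in range(2):
            e = np.zeros(2); e[i] = 1e-4
            grad[i] = (potential(p + e, g, obstacles)
                       - potential(p - e, g, obstacles)) / 2e-4
        p -= step * grad / (1.0 + np.linalg.norm(grad))  # bounded step
    return p
```

With an off-path obstacle the descent bends the trajectory around the repulsive peak and still settles near the goal, which is the behavior the network is trained to reproduce from local field samples.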
Dynamical analysis of memristor-based fractional-order neural networks with time delay
NASA Astrophysics Data System (ADS)
Cui, Xueli; Yu, Yongguang; Wang, Hu; Hu, Wei
2016-06-01
In this paper, the memristor-based fractional-order neural networks with time delay are analyzed. Based on the theories of set-value maps, differential inclusions and Filippov’s solution, some sufficient conditions for asymptotic stability of this neural network model are obtained when the external inputs are constants. Besides, uniform stability condition is derived when the external inputs are time-varying, and its attractive interval is estimated. Finally, numerical examples are given to verify our results.
First principles molecular dynamics without self-consistent field optimization
Souvatzis, Petros; Niklasson, Anders M. N.
2014-01-28
We present a first principles molecular dynamics approach that is based on time-reversible extended Lagrangian Born-Oppenheimer molecular dynamics [A. M. N. Niklasson, Phys. Rev. Lett. 100, 123004 (2008)] in the limit of vanishing self-consistent field optimization. The optimization-free dynamics keeps the computational cost to a minimum and typically provides molecular trajectories that closely follow the exact Born-Oppenheimer potential energy surface. Only one single diagonalization and Hamiltonian (or Fockian) construction are required in each integration time step. The proposed dynamics is derived for a general free-energy potential surface valid at finite electronic temperatures within hybrid density functional theory. Even in the event of irregular functional behavior that may cause a dynamical instability, the optimization-free limit represents a natural starting guess for force calculations that may require a more elaborate iterative electronic ground state optimization. Our optimization-free dynamics thus represents a flexible theoretical framework for a broad and general class of ab initio molecular dynamics simulations.
Cohen, M.A.; Grossberg, S.
1987-05-15
A massively parallel neural-network architecture, called a masking field, is characterized through systematic computer simulations. A masking field can simultaneously detect multiple groupings within its input patterns and assign activation weights to the codes for these groupings that are predictive with respect to the contextual information embedded within the patterns and the prior learning of the system. A masking field automatically rescales its sensitivity as the overall size of an input pattern changes, yet also remains sensitive to the microstructure within each pattern. Thus, a masking field suggests a solution of the credit assignment problem by embodying a real-time code for the predictive evidence contained within its input patterns. Such capabilities are useful in speech recognition, visual object recognition, and cognitive information processing. An absolutely stable design for a masking field is disclosed through an analysis of the computer simulations. This design suggests how associative mechanisms, cooperative-competitive interactions, and modulatory gating signals can be joined together to regulate the learning of compressed recognition codes. Data about the neural substrates of learning and memory are compared to these mechanisms.
Quantum dynamics of charge state in silicon field evaporation
NASA Astrophysics Data System (ADS)
Silaeva, Elena P.; Uchida, Kazuki; Watanabe, Kazuyuki
2016-08-01
The charge state of an ion field-evaporating from a silicon-atom cluster is analyzed using time-dependent density functional theory coupled to molecular dynamics. The final charge state of the ion is shown to increase gradually with increasing external electrostatic field in agreement with the average charge state of silicon ions detected experimentally. When field evaporation is triggered by laser-induced electronic excitations the charge state also increases with increasing intensity of the laser pulse. At the evaporation threshold, the charge state of the evaporating ion does not depend on the electrostatic field due to the strong contribution of laser excitations to the ionization process both at low and high laser energies. A neutral silicon atom escaping the cluster due to its high initial kinetic energy is shown to be eventually ionized by external electrostatic field.
Mascotte-Cruz, Juan Uriel; Ríos, Amelia; Escalante, Bruno
2016-01-01
Differentiation of bone marrow-derived mesenchymal stem cells (MSCs) into a neural phenotype has been induced by either flow-induced shear stress (FSS) or electromagnetic fields (EMF). However, procedures are still expensive and time consuming. In the present work, induction for 1 h with the combination of both forces showed the presence of the neural precursor marker nestin as early as 9 h in culture after treatment, and this result lasted for the following 6 d. In conclusion, the use of a combination of FSS and EMF for a short time results in neurite-like cells, although further investigation is required to analyze cell functionality. PMID:26325339
NASA Astrophysics Data System (ADS)
Hahn, Federico
1996-03-01
Statistical discriminant analysis and neural networks were used to show that crop/weed/soil discrimination by optical reflectance is feasible. The wavelength bands selected as inputs to the neural networks were ten nanometers wide, reducing the total radiation collected by the sensor. Spectral data collected from several farms having different weed populations were subjected to discriminant analysis. The best discriminant wavelengths were used to build a wavelength histogram, from which the three best spectral broadbands for broccoli/weed/soil discrimination were selected. The broadbands were analyzed using a new single-broadband discriminator index named the discriminative integration index (DII), and the DII values obtained were used to train a neural network. This paper introduces the index concept, its results, and its use for minimizing artificial lighting requirements with broadband spectral measurements for broccoli/weed/soil discrimination.
Dynamics of Dollard asymptotic variables. Asymptotic fields in Coulomb scattering
NASA Astrophysics Data System (ADS)
Morchio, G.; Strocchi, F.
2016-03-01
Generalizing Dollard’s strategy, we investigate the structure of the scattering theory associated to any large time reference dynamics UD(t) allowing for the existence of Møller operators. We show that (for each scattering channel) UD(t) uniquely identifies, for t →±∞, asymptotic dynamics U±(t); they are unitary groups acting on the scattering spaces, satisfy the Møller interpolation formulas, and are interpolated by the S-matrix. In view of the application to field theory models, we extend the result to the adiabatic procedure. In the Heisenberg picture, asymptotic variables are obtained as LSZ-like limits of Heisenberg variables; their time evolution is induced by U±(t), which replace the usual free asymptotic dynamics. On the asymptotic states (for each channel), the Hamiltonian can be written in terms of the asymptotic variables as H = H±(qout/in, pout/in), with H±(q,p) the generator of the asymptotic dynamics. As an application, we obtain the asymptotic fields ψout/in in repulsive Coulomb scattering by an LSZ modified formula; in this case, U±(t) = U0(t), so that ψout/in are free canonical fields and H = H0(ψout/in).
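For orientation, the textbook definition of the Møller operators relative to a reference dynamics, and the form of Dollard's modified free evolution for the Coulomb case, read as follows (this is a standard illustration, not a statement of the paper's more general results, and conventions for the logarithmic phase vary between authors):

```latex
% Møller (wave) operators relative to a reference dynamics U_D(t),
% with U(t) = e^{-iHt} the full dynamics:
\Omega_{\pm} = \operatorname*{s\text{-}lim}_{t \to \pm\infty}
  \, U(t)^{\dagger} \, U_D(t) .
% For the repulsive Coulomb potential, Dollard's choice dresses the free
% evolution with a logarithmic phase (up to convention-dependent factors):
U_D(t) = e^{-i H_0 t}\,
  \exp\!\left( -\,i\, \mathrm{sgn}(t)\, \frac{\alpha\, m}{|p|}\, \ln |t| \right) .
```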
Pollutants dynamics in a rice field and an upland field during storm events
NASA Astrophysics Data System (ADS)
Kim, Jin Soo; Park, Jong-Wha; Jang, Hoon; Kim, Young Hyeon
2010-05-01
We investigated the dynamics of pollutants such as total nitrogen (TN), total phosphorous (TP), biochemical oxygen demand (BOD), chemical oxygen demand (COD), and suspended sediment (SS) in runoff from a rice field and an upland field near the upper stream of the Han River in South Korea for multiple storm events. The upland field was cropped with red pepper, sweet potato, beans, and sesame. Runoff from the rice field started later than that from the upland field because of the water storage function of the rice field. Unlike the upland field, runoff from the rice field was greatly affected by farmers' water management practices. Overall, event mean concentrations (EMCs) of pollutants in runoff water from the upland field were higher than those from the rice field. In particular, EMCs of TP and SS in runoff water from the upland field were one order of magnitude higher than those from the rice field. This may be because the ponded condition and flat topography of the rice field greatly reduce the transport of particulate phosphorous associated with soil erosion. The results suggest that the rice field helps control the export of particulate pollutants to adjacent water bodies.
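The event mean concentration compared above is, by its standard definition, a flow-weighted average over a storm event: total pollutant mass divided by total runoff volume. A minimal sketch of that calculation follows; the function name and sample values are illustrative, not the study's data.

```python
def event_mean_concentration(concs_mg_L, flows_L_s, dt_s):
    """EMC = total pollutant mass / total runoff volume.

    concs_mg_L : pollutant concentration in each sampling interval (mg/L)
    flows_L_s  : runoff discharge in each sampling interval (L/s)
    dt_s       : sampling interval length (s)
    """
    mass = sum(c * q * dt_s for c, q in zip(concs_mg_L, flows_L_s))  # mg
    volume = sum(q * dt_s for q in flows_L_s)                        # L
    return mass / volume                                             # mg/L

# Hypothetical storm hydrograph sampled every 10 minutes.
emc = event_mean_concentration([2.0, 5.0, 3.0], [10.0, 40.0, 20.0], 600)
```

Because the average is flow-weighted, samples taken at high discharge dominate the EMC, which is why first-flush peaks matter more than late-event tails.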
Mountney, John; Silage, Dennis; Obeid, Iyad
2010-01-01
Both linear and nonlinear estimation algorithms have been successfully applied as neural decoding techniques in brain machine interfaces. Nonlinear approaches such as Bayesian auxiliary particle filters offer improved estimates over other methodologies seemingly at the expense of computational complexity. Real-time implementation of particle filtering algorithms for neural signal processing may become prohibitive when the number of neurons in the observed ensemble becomes large. By implementing a parallel hardware architecture, filter performance can be improved in terms of throughput over conventional sequential processing. Such an architecture is presented here and its FPGA resource utilization is reported. PMID:21096196
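To make the computational structure concrete, here is a minimal bootstrap particle filter step in plain Python, an illustrative stand-in for the Bayesian auxiliary particle filter discussed above (the random-walk state model, Gaussian likelihood, and all parameter values are assumptions for the sketch). The multinomial resampling step is the sequential bottleneck that parallel hardware architectures target.

```python
import math
import random

random.seed(0)  # reproducible illustration

def particle_filter_step(particles, weights, observation,
                         process_noise=0.1, obs_noise=0.5):
    """One predict-update-resample cycle of a bootstrap particle filter."""
    # Predict: propagate each particle through a random-walk state model.
    particles = [p + random.gauss(0.0, process_noise) for p in particles]
    # Update: reweight by the Gaussian observation likelihood.
    weights = [w * math.exp(-0.5 * ((observation - p) / obs_noise) ** 2)
               for p, w in zip(particles, weights)]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample: multinomial resampling, then reset to uniform weights.
    particles = random.choices(particles, weights=weights, k=len(particles))
    weights = [1.0 / len(particles)] * len(particles)
    return particles, weights

ps, ws = [0.0] * 200, [1.0 / 200] * 200
for z in [0.2, 0.4, 0.6]:  # a hypothetical stream of observations
    ps, ws = particle_filter_step(ps, ws, z)
```

Note that the predict and update loops are embarrassingly parallel across particles, whereas the normalization and resampling require global communication, which is the crux of the FPGA throughput argument.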
An imaging system for monitoring receptive field dynamics.
Petersson, P; Holmer, M; Breslin, T; Granmo, M; Schouenborg, J
2001-01-15
The paper describes a computerized method, termed receptive field imaging (RFI), for the rapid mapping of multiple receptive fields and their respective sensitivity distributions. RFI uses random stimulation of multiple sites, in combination with an averaging procedure, to extract the relative contribution from each of the stimulated sites. Automated multi-electrode stimulation and recording, with spike detection and counting, are performed on-line by the RFI programme. Direct user interpretation of receptive field changes is made possible by a user-friendly graphic interface. A series of imaging experiments was carried out to evaluate the functional capacity of the system. RFI was tested on the receptive fields in the nociceptive withdrawal reflex (NWR) system in the rat. RFI replicates the results obtained with conventional methods and allows the display of receptive field dynamics induced by topical spinal cord application of morphine and naloxone on a minute-to-minute time scale. Data variance was estimated, and proved to be small enough to yield a stable representation of the receptive field, thereby achieving a high sensitivity in dynamic imaging experiments. The large number of stimulation and registration sites that can be monitored in parallel permits detailed network analysis of synaptic sets, corresponding to 'connection weights' between individual neurones. PMID:11164238
Simultaneous Electromagnetic Tracking and Calibration for Dynamic Field Distortion Compensation.
Sadjadi, Hossein; Hashtrudi-Zaad, Keyvan; Fichtinger, Gabor
2016-08-01
Electromagnetic (EM) tracking systems are highly susceptible to field distortion. The interference can cause measurement errors up to a few centimeters in clinical environments, which limits the reliability of these systems. Unless corrected for, this measurement error imperils the success of clinical procedures. It is therefore fundamental to dynamically calibrate EM tracking systems and compensate for measurement error caused by field distorting objects commonly present in clinical environments. We propose to combine a motion model with observations of redundant EM sensors and compensate for field distortions in real time. We employ a simultaneous localization and mapping technique to accurately estimate the pose of the tracked instrument while creating the field distortion map. We conducted experiments with six degrees-of-freedom motions in the presence of field distorting objects in research and clinical environments. We applied our approach to improve the EM tracking accuracy and compared our results to a conventional sensor fusion technique. Using our approach, the maximum tracking error was reduced by 67% for position measurements and by 64% for orientation measurements. Currently, clinical applications of EM trackers are hampered by the adverse distortion effects. Our approach introduces a novel method for dynamic field distortion compensation, independent from preoperative calibrations or external tracking devices, and enables reliable EM navigation for potential applications. PMID:26595908
NASA Astrophysics Data System (ADS)
Fang, Ming-chung; Lee, Zi-yi
2013-08-01
This paper develops a nonlinear mathematical model to simulate the dynamic motion behavior of a barge equipped with a portable outboard Dynamic Positioning (DP) system in short-crested waves. A self-tuning Proportional-Derivative (PD) controller based on a neural network algorithm is applied to control the thrusters for optimal adjustment of the barge position in waves. In addition to the waves, the current, the wind, and the nonlinear drift force are also considered in the calculations. The time domain simulations of the six-degree-of-freedom motions of the barge with the DP system are solved by the 4th-order Runge-Kutta method, which balances the efficiency and accuracy of the simulations. The portable alternative DP technique developed here can serve as a practical tool to assist ships not equipped with a built-in DP facility when dynamic positioning missions are required.
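The classical 4th-order Runge-Kutta step used for the time-domain integration has a standard form, sketched below; the scalar test equation is illustrative only, not the barge's coupled six-degree-of-freedom equations of motion.

```python
def rk4_step(f, t, y, h):
    """One classical 4th-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Example: dy/dt = y with y(0) = 1, whose exact solution is e^t.
y, t, h = 1.0, 0.0, 0.1
for _ in range(10):
    y = rk4_step(lambda t, y: y, t, y, h)
    t += h
# y now approximates e = 2.71828... to about six decimal places.
```

The O(h^4) global accuracy is what makes a moderate step size acceptable even for stiff-ish coupled motion equations, which is the efficiency/accuracy balance the abstract alludes to.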
Microstructures fabricated by dynamically controlled femtosecond patterned vector optical fields.
Cai, Meng-Qiang; Li, Ping-Ping; Feng, Dan; Pan, Yue; Qian, Sheng-Xia; Li, Yongnan; Tu, Chenghou; Wang, Hui-Tian
2016-04-01
We have presented and demonstrated a method for the fabrication of various complicated microstructures based on dynamically controlled patterned vector optical fields (PVOFs). We design and generate dynamic PVOFs by loading patterned holograms displayed on a spatial light modulator and moving the focal traces in different patterns. We experimentally fabricate various microstructures in z-cut lithium niobate plates. The method offers benefits such as requiring no motion of the fabricated samples and high efficiency owing to its parallel nature. Moreover, our approach is able to fabricate three-dimensional microstructures. PMID:27192265
Dynamical renormalization group approach to relaxation in quantum field theory
NASA Astrophysics Data System (ADS)
Boyanovsky, D.; de Vega, H. J.
2003-10-01
The real time evolution and relaxation of expectation values of quantum fields and of quantum states are computed as initial value problems by implementing the dynamical renormalization group (DRG). Linear response is invoked to set up the renormalized initial value problem to study the dynamics of the expectation value of quantum fields. The perturbative solution of the equations of motion for the field expectation values, as well as the evolution of quantum states, features secular terms, namely terms that grow in time and invalidate the perturbative expansion at late times. The DRG provides a consistent framework to resum these secular terms and yields a uniform asymptotic expansion at long times. Several relevant cases are studied in detail, including those of threshold infrared divergences which appear in gauge theories at finite temperature and lead to anomalous relaxation. In these cases the DRG is shown to provide a resummation akin to Bloch-Nordsieck, but carried out directly in real time, that goes beyond the scope of Bloch-Nordsieck and Dyson resummations. The nature of the resummation program is discussed in several examples. The DRG provides a framework that is consistent, systematic, and easy to implement for studying non-equilibrium relaxational dynamics directly in real time, without relying on the concept of quasiparticle widths.
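The secular-term problem and its DRG resummation can be illustrated with a textbook example (a standard pedagogical case, not taken from the paper): naive perturbation theory for exponential relaxation produces a term growing linearly in time, which the DRG absorbs into a slowly evolving amplitude.

```latex
% Naive perturbation theory for \dot{x} = -\epsilon x gives
x(t) = x_0 \left[\, 1 - \epsilon\,(t - t_0) + \mathcal{O}(\epsilon^2) \,\right],
% which breaks down once \epsilon (t - t_0) \sim 1. The DRG renormalizes
% the amplitude, x_0 \to x_0(\tau), and demands that x(t) be independent
% of the arbitrary renormalization time \tau:
\frac{\partial x(t)}{\partial \tau} = 0
\;\Longrightarrow\;
\frac{d x_0(\tau)}{d \tau} = -\,\epsilon\, x_0(\tau) .
% Solving this RG equation and setting \tau = t resums the secular series
% into the uniformly valid result
x(t) = x_0\, e^{-\epsilon (t - t_0)} .
```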
Wardle, Susan G; Kriegeskorte, Nikolaus; Grootswagers, Tijl; Khaligh-Razavi, Seyed-Mahdi; Carlson, Thomas A
2016-05-15
Perceptual similarity is a cognitive judgment that represents the end-stage of a complex cascade of hierarchical processing throughout visual cortex. Previous studies have shown a correspondence between the similarity of coarse-scale fMRI activation patterns and the perceived similarity of visual stimuli, suggesting that visual objects that appear similar also share similar underlying patterns of neural activation. Here we explore the temporal relationship between the human brain's time-varying representation of visual patterns and behavioral judgments of perceptual similarity. The visual stimuli were abstract patterns constructed from identical perceptual units (oriented Gabor patches) so that each pattern had a unique global form or perceptual 'Gestalt'. The visual stimuli were decodable from evoked neural activation patterns measured with magnetoencephalography (MEG); however, the stimuli differed in the similarity of their neural representations, as estimated by differences in decodability. Early after stimulus onset (from 50 ms), a model based on retinotopic organization predicted the representational similarity of the visual stimuli. Following the peak correlation between the retinotopic model and neural data at 80 ms, the neural representations quickly evolved so that retinotopy no longer provided a sufficient account of the brain's time-varying representation of the stimuli. Overall the strongest predictor of the brain's representation was a model based on human judgments of perceptual similarity, which reached the limits of the maximum correlation with the neural data defined by the 'noise ceiling'. Our results show that large-scale brain activation patterns contain a neural signature for the perceptual Gestalt of composite visual features, and demonstrate a strong correspondence between perception and complex patterns of brain activity. PMID:26899210
Majerus, Steve; Salmon, Eric; Attout, Lucie
2013-01-01
Studies of brain-behaviour interactions in the field of working memory (WM) have associated WM success with activation of a fronto-parietal network during the maintenance stage, mainly for visuo-spatial WM. Using an inter-individual differences approach, we demonstrate here the equal importance of neural dynamics during the encoding stage, in the context of verbal WM tasks which are characterized by encoding phases of long duration and sustained attentional demands. Participants encoded and maintained 5-word lists, half of them containing an unexpected word intended to disturb WM encoding and associated task-related attention processes. We observed that inter-individual differences in WM performance for lists containing disturbing stimuli were related to activation levels in a region previously associated with task-related attentional processing, the left intraparietal sulcus (IPS), during stimulus encoding but not maintenance; functional connectivity strength between the left IPS and lateral prefrontal cortex (PFC) further predicted WM performance. This study highlights the critical role, during WM encoding, of neural substrates involved in task-related attentional processes for predicting inter-individual differences in verbal WM performance, and, more generally, provides support for attention-based models of WM. PMID:23874935
Mender, Bedeho M. W.; Stringer, Simon M.
2015-01-01
We propose and examine a model for how perisaccadic visual receptive field dynamics, observed in a range of primate brain areas such as LIP, FEF, SC, V3, V3A, V2, and V1, may develop through a biologically plausible process of unsupervised visually guided learning. These dynamics are associated with remapping, which is the phenomenon where receptive fields anticipate the consequences of saccadic eye movements. We find that a neural network model using a local associative synaptic learning rule, when exposed to visual scenes in conjunction with saccades, can account for a range of associated phenomena. In particular, our model demonstrates predictive and pre-saccadic remapping, responsiveness shifts around the time of saccades, and remapping from multiple directions. PMID:25717301
Cheng, Shuiyuan; Li, Li; Chen, Dongsheng; Li, Jianbing
2012-12-15
A neural network based ensemble methodology was presented in this study to improve the accuracy of meteorological input fields for regional air quality modeling. Through nonlinear integration of simulation results from two meteorological models (MM5 and WRF), the ensemble approach focused on the optimization of meteorological variable values (temperature, surface air pressure, and wind field) in the vertical layer near the ground. To illustrate the proposed approach, a case study of two selected air pollution events in northern China in 2006 was conducted. The performances of the MM5, the WRF, and the ensemble approach were assessed using different statistical measures. The results indicated that the ensemble approach had higher simulation accuracy than either the MM5 or the WRF model. Performance was improved by more than 12.9% for temperature, 18.7% for the surface air pressure field, and 17.7% for the wind field. The atmospheric PM(10) concentrations in the study region were also simulated by coupling the air quality model CMAQ with the MM5 model, the WRF model, and the ensemble model. It was found that the modeling accuracy of the ensemble-CMAQ model was improved by more than 7.0% and 17.8% when compared to the MM5-CMAQ and the WRF-CMAQ models, respectively. The proposed neural network based meteorological modeling approach holds great potential for improving the performance of regional air quality modeling. PMID:23000477
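The paper's combiner is a neural network; as an illustration of the general idea of fitting a data-driven combination of two model outputs against observations, here is a minimal linear stand-in trained by stochastic gradient descent. All names, numbers, and hyperparameters are hypothetical, not the study's data or method.

```python
def fit_ensemble(mm5, wrf, obs, lr=5e-4, epochs=2000):
    """Fit obs ~ w1*mm5 + w2*wrf + b by per-sample gradient descent."""
    w1, w2, b = 0.3, 0.3, 0.0
    for _ in range(epochs):
        for x1, x2, y in zip(mm5, wrf, obs):
            err = (w1 * x1 + w2 * x2 + b) - y
            w1 -= lr * err * x1
            w2 -= lr * err * x2
            b -= lr * err
    return w1, w2, b

# Hypothetical 2 m temperatures (deg C): MM5 runs warm, WRF runs cool.
mm5 = [21.0, 23.5, 19.0, 25.0]
wrf = [18.0, 20.5, 16.0, 22.0]
obs = [19.5, 22.0, 17.5, 23.5]
w1, w2, b = fit_ensemble(mm5, wrf, obs)
preds = [w1 * x1 + w2 * x2 + b for x1, x2 in zip(mm5, wrf)]
```

Replacing the linear combination with a small multilayer perceptron gives the nonlinear integration the abstract describes; the training loop against observed values is the same in spirit.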
Cen, Zhaohui; Wei, Jiaolong; Jiang, Rui
2013-12-01
A novel gray-box neural network model (GBNNM), including a multi-layer perceptron (MLP) neural network (NN) and integrators, is proposed for a model identification and fault estimation (MIFE) scheme. With the GBNNM, both the nonlinearity and dynamics of a class of nonlinear dynamic systems can be approximated. Unlike previous NN-based model identification methods, the GBNNM directly inherits system dynamics and separately models system nonlinearities. This model corresponds well with the object system and is easy to build. The GBNNM is embedded online as a normal model reference to obtain the quantitative residual between the object system output and the GBNNM output. This residual can accurately indicate the fault offset value, so it is suitable for differing fault severities. To further estimate the fault parameters (FPs), an improved extended state observer (ESO) using the same NNs (IESONN) from the GBNNM is proposed to avoid requiring knowledge of the ESO nonlinearity. The proposed MIFE scheme is then applied to reaction wheels (RW) in a satellite attitude control system (SACS). The scheme using the GBNNM is compared with other NNs in the same fault scenario, and several partial loss of effect (LOE) faults with different severities are considered to validate the effectiveness of the FP estimation and its superiority. PMID:24156668
Pulsed DC Electric Field-Induced Differentiation of Cortical Neural Precursor Cells.
Chang, Hui-Fang; Lee, Ying-Shan; Tang, Tang K; Cheng, Ji-Yen
2016-01-01
We report the differentiation of neural stem and progenitor cells solely induced by direct current (DC) pulse stimulation. Neural stem and progenitor cells in the adult mammalian brain are promising candidates for the development of therapeutic neuroregeneration strategies. The differentiation of neural stem and progenitor cells depends on various in vivo environmental factors, such as nerve growth factor and endogenous EF. In this study, we demonstrated that the morphologic and phenotypic changes of mouse neural stem and progenitor cells (mNPCs) could be induced solely by exposure to square-wave DC pulses (300 mV/mm magnitude at a frequency of 100 Hz). The DC pulse stimulation was conducted for 48 h, and the morphologic changes of mNPCs were monitored continuously. The length of primary processes and the amount of branching significantly increased after stimulation by DC pulses for 48 h. After DC pulse treatment, the mNPCs differentiated into neurons, astrocytes, and oligodendrocytes simultaneously in stem cell maintenance medium. Our results suggest that simple DC pulse treatment could control the fate of NPCs. With further studies, DC pulses may be applied to manipulate NPC differentiation and may be used for the development of therapeutic strategies that employ NPCs to treat nervous system disorders. PMID:27352251
Simulations of Dynamical Friction Including Spatially-Varying Magnetic Fields
Bell, G. I.; Bruhwiler, D. L.; Busby, R.; Abell, D. T.; Messmer, P.; Veitzer, S.; Litvinenko, V. N.; Cary, J. R.
2006-03-20
A proposed luminosity upgrade to the Relativistic Heavy Ion Collider (RHIC) includes a novel electron cooling section, which would use ~55 MeV electrons to cool fully-ionized 100 GeV/nucleon gold ions. We consider the dynamical friction force exerted on individual ions due to a relevant electron distribution. The electrons may be focussed by a strong solenoid field, with sensitive dependence on errors, or by a wiggler field. In the rest frame of the relativistic co-propagating electron and ion beams, where the friction force can be simulated for nonrelativistic motion and electrostatic fields, the Lorentz transform of these spatially-varying magnetic fields includes strong, rapidly-varying electric fields. Previous friction force simulations for unmagnetized electrons or error-free solenoids used a 4th-order Hermite algorithm, which is not well-suited for the inclusion of strong, rapidly-varying external fields. We present here a new algorithm for friction force simulations, using an exact two-body collision model to accurately resolve close interactions between electron/ion pairs. This field-free binary-collision model is combined with a modified Boris push, using an operator-splitting approach, to include the effects of external fields. The algorithm has been implemented in the VORPAL code and successfully benchmarked.
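The standard Boris push referenced above advances a particle's velocity with a half electric kick, an exact-magnitude rotation about the magnetic field, and a second half electric kick. A minimal non-relativistic sketch in normalized units follows (this illustrates the textbook algorithm, not the VORPAL implementation or the paper's modified variant):

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def boris_push(v, E, B, qm, dt):
    """Advance velocity v by dt: half electric kick, magnetic
    rotation, half electric kick (qm = charge-to-mass ratio)."""
    # First half of the electric impulse.
    v_minus = tuple(vi + 0.5 * qm * dt * Ei for vi, Ei in zip(v, E))
    # Rotation about B; s is chosen so the rotation preserves |v| exactly.
    t = tuple(0.5 * qm * dt * Bi for Bi in B)
    t2 = sum(ti * ti for ti in t)
    s = tuple(2.0 * ti / (1.0 + t2) for ti in t)
    v_prime = tuple(vm + c for vm, c in zip(v_minus, cross(v_minus, t)))
    v_plus = tuple(vm + c for vm, c in zip(v_minus, cross(v_prime, s)))
    # Second half of the electric impulse.
    return tuple(vp + 0.5 * qm * dt * Ei for vp, Ei in zip(v_plus, E))

# With E = 0 the push is a pure rotation: kinetic energy is conserved
# to machine precision, which is the property that makes the Boris step
# a good partner for the field-free binary-collision model.
v = (1.0, 0.0, 0.0)
for _ in range(1000):
    v = boris_push(v, (0.0, 0.0, 0.0), (0.0, 0.0, 1.0), qm=1.0, dt=0.01)
```

The operator splitting described in the abstract would alternate such field steps with exact two-body collision steps, each advancing its own piece of the dynamics.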