ChainMail based neural dynamics modeling of soft tissue deformation for surgical simulation.
Zhang, Jinao; Zhong, Yongmin; Smith, Julian; Gu, Chengfan
2017-07-20
Realistic and real-time modeling and simulation of soft tissue deformation is a fundamental research issue in the field of surgical simulation. In this paper, a novel cellular neural network approach is presented for modeling and simulation of soft tissue deformation by combining neural dynamics of cellular neural network with ChainMail mechanism. The proposed method formulates the problem of elastic deformation into cellular neural network activities to avoid the complex computation of elasticity. The local position adjustments of ChainMail are incorporated into the cellular neural network as the local connectivity of cells, through which the dynamic behaviors of soft tissue deformation are transformed into the neural dynamics of cellular neural network. Experiments demonstrate that the proposed neural network approach is capable of modeling the soft tissues' nonlinear deformation and typical mechanical behaviors. The proposed method not only improves ChainMail's linear deformation with the nonlinear characteristics of neural dynamics but also enables the cellular neural network to follow the principle of continuum mechanics to simulate soft tissue deformation.
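A concrete way to read the ChainMail mechanism described above: each element may shift relative to its neighbours only within fixed bounds, and moving one element propagates position adjustments outward until every bound holds again. A minimal one-dimensional sketch (the spacing bounds and sizes are illustrative assumptions, not the authors' parameters):

```python
import numpy as np

def chainmail_1d(x, moved_idx, new_pos, min_gap=0.8, max_gap=1.2):
    """Move one element and propagate ChainMail-style local adjustments."""
    x = x.copy()
    x[moved_idx] = new_pos
    for i in range(moved_idx + 1, len(x)):        # propagate to the right
        gap = x[i] - x[i - 1]
        x[i] = x[i - 1] + np.clip(gap, min_gap, max_gap)
    for i in range(moved_idx - 1, -1, -1):        # propagate to the left
        gap = x[i + 1] - x[i]
        x[i] = x[i + 1] - np.clip(gap, min_gap, max_gap)
    return x

chain = np.arange(10, dtype=float)                # rest spacing of 1.0
print(chainmail_1d(chain, moved_idx=4, new_pos=5.5))
```

The paper's contribution is to recast exactly this kind of local adjustment as the coupling between cells of a cellular neural network, so an update like the one above runs as neural dynamics rather than as an explicit propagation loop.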
Creative-Dynamics Approach To Neural Intelligence
NASA Technical Reports Server (NTRS)
Zak, Michail A.
1992-01-01
Paper discusses approach to mathematical modeling of artificial neural networks exhibiting complicated behaviors reminiscent of creativity and intelligence of biological neural networks. Neural network treated as non-Lipschitzian dynamical system - as described in "Non-Lipschitzian Dynamics For Modeling Neural Networks" (NPO-17814). System serves as tool for modeling of temporal-pattern memories and recognition of complicated spatial patterns.
Model Of Neural Network With Creative Dynamics
NASA Technical Reports Server (NTRS)
Zak, Michail; Barhen, Jacob
1993-01-01
Paper presents analysis of mathematical model of one-neuron/one-synapse neural network featuring coupled activation and learning dynamics and parametrical periodic excitation. Demonstrates self-programming, partly random behavior of suitably designed neural network; believed to be related to spontaneity and creativity of biological neural networks.
Neural Networks for Rapid Design and Analysis
NASA Technical Reports Server (NTRS)
Sparks, Dean W., Jr.; Maghami, Peiman G.
1998-01-01
Artificial neural networks have been employed for rapid and efficient dynamics and control analysis of flexible systems. Specifically, feedforward neural networks are designed to approximate nonlinear dynamic components over prescribed input ranges, and are used in simulations as a means to speed up the overall time response analysis process. To capture the recursive nature of dynamic components with artificial neural networks, recurrent networks, which use state feedback with the appropriate number of time delays as inputs, are employed. Once properly trained, neural networks can give very good approximations to nonlinear dynamic components, and by their judicious use in simulations, allow the analyst the potential to speed up the analysis process considerably. To illustrate this potential speed up, an existing simulation model of a spacecraft reaction wheel system is executed, first conventionally, and then with an artificial neural network in place.
An Attractor-Based Complexity Measurement for Boolean Recurrent Neural Networks
Cabessa, Jérémie; Villa, Alessandro E. P.
2014-01-01
We provide a novel refined attractor-based complexity measurement for Boolean recurrent neural networks that represents an assessment of their computational power in terms of the significance of their attractor dynamics. This complexity measurement is achieved by first proving a computational equivalence between Boolean recurrent neural networks and some specific class of ω-automata, and then translating the most refined classification of ω-automata to the Boolean neural network context. As a result, a hierarchical classification of Boolean neural networks based on their attractive dynamics is obtained, thus providing a novel refined attractor-based complexity measurement for Boolean recurrent neural networks. These results provide new theoretical insights to the computational and dynamical capabilities of neural networks according to their attractive potentialities. An application of our findings is illustrated by the analysis of the dynamics of a simplified model of the basal ganglia-thalamocortical network simulated by a Boolean recurrent neural network. This example shows the significance of measuring network complexity, and how our results bear new founding elements for the understanding of the complexity of real brain circuits. PMID:24727866
NASA Astrophysics Data System (ADS)
Takiyama, Ken
2017-12-01
How neural adaptation affects neural information processing (i.e. the dynamics and equilibrium state of neural activities) is a central question in computational neuroscience. In my previous works, I analytically clarified the dynamics and equilibrium state of neural activities in a ring-type neural network model that is widely used to model the visual cortex, motor cortex, and several other brain regions. The neural dynamics and the equilibrium state in the neural network model corresponded to a Bayesian computation and statistically optimal multiple information integration, respectively, under a biologically inspired condition. These results were revealed in an analytically tractable manner; however, adaptation effects were not considered. Here, I analytically reveal how the dynamics and equilibrium state of neural activities in a ring neural network are influenced by spike-frequency adaptation (SFA). SFA is an adaptation that causes gradual inhibition of neural activity when a sustained stimulus is applied, and the strength of this inhibition depends on neural activities. I reveal that SFA plays three roles: (1) SFA amplifies the influence of external input in neural dynamics; (2) SFA allows the history of the external input to affect neural dynamics; and (3) the equilibrium state corresponds to the statistically optimal multiple information integration independent of the existence of SFA. In addition, the equilibrium state in a ring neural network model corresponds to the statistically optimal integration of multiple information sources under biologically inspired conditions, independent of the existence of SFA.
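The setting analyzed here, a ring network whose neurons adapt to sustained input, can be sketched with a rate-based simulation; the connectivity profile, time constants, and adaptation gain below are illustrative assumptions, not the paper's analytical setup:

```python
import numpy as np

# Rate-based ring network with spike-frequency adaptation (SFA).
N = 100
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
W = 0.9 * np.cos(theta[:, None] - theta[None, :]) - 0.2   # ring connectivity
ext = np.exp(-0.5 * (theta / 0.5) ** 2)                   # bump-shaped input

r = np.zeros(N)          # firing rates
a = np.zeros(N)          # adaptation variable (activity-dependent inhibition)
dt, tau_r, tau_a, g_a = 0.1, 1.0, 10.0, 0.5
for _ in range(2000):
    drive = W @ r / N + ext - g_a * a       # SFA subtracts an adaptive term
    r += dt / tau_r * (-r + np.maximum(drive, 0.0))
    a += dt / tau_a * (-a + r)              # adaptation tracks recent activity
print("bump peak at theta =", float(theta[np.argmax(r)]))
```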
Modeling and control of magnetorheological fluid dampers using neural networks
NASA Astrophysics Data System (ADS)
Wang, D. H.; Liao, W. H.
2005-02-01
Due to the inherent nonlinear nature of magnetorheological (MR) fluid dampers, one of the challenging aspects for utilizing these devices to achieve high system performance is the development of accurate models and control algorithms that can take advantage of their unique characteristics. In this paper, the direct identification and inverse dynamic modeling for MR fluid dampers using feedforward and recurrent neural networks are studied. The trained direct identification neural network model can be used to predict the damping force of the MR fluid damper online, on the basis of the dynamic responses across the MR fluid damper and the command voltage, and the inverse dynamic neural network model can be used to generate the command voltage according to the desired damping force through supervised learning. The architectures and the learning methods of the dynamic neural network models and inverse neural network models for MR fluid dampers are presented, and some simulation results are discussed. Finally, the trained neural network models are applied to predict and control the damping force of the MR fluid damper. Moreover, validation methods for the neural network models developed are proposed and used to evaluate their performance. Validation results with different data sets indicate that the proposed direct identification dynamic model using the recurrent neural network can be used to predict the damping force accurately and the inverse identification dynamic model using the recurrent neural network can act as a damper controller to generate the command voltage when the MR fluid damper is used in a semi-active mode.
A biologically inspired neural network for dynamic programming.
Francelin Romero, R A; Kacpryzk, J; Gomide, F
2001-12-01
An artificial neural network with a two-layer feedback topology and generalized recurrent neurons, for solving nonlinear discrete dynamic optimization problems, is developed. A direct method to assign the weights of the neural network is presented. The method is based on Bellman's Optimality Principle and on the interchange of information that occurs during synaptic chemical processing among neurons. The neural-network-based algorithm is an advantageous approach for dynamic programming due to the inherent parallelism of neural networks; furthermore, it reduces the severity of the computational problems that can arise in conventional methods. Illustrative application examples, including shortest-path and fuzzy decision-making problems, show how the approach works.
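The weight assignment rests on Bellman's Optimality Principle, which for a shortest-path problem reads V(i) = min_j [c(i,j) + V(j)]. A plain value-iteration sketch of that principle (not the authors' neural implementation; the cost matrix is an arbitrary example):

```python
import numpy as np

# Bellman's Optimality Principle for shortest paths,
#     V(i) = min_j [ c(i, j) + V(j) ],
# implemented as plain value iteration over an example cost matrix.
INF = np.inf
cost = np.array([[0.0, 4.0, 2.0, INF],
                 [INF, 0.0, 1.0, 5.0],
                 [INF, INF, 0.0, 8.0],
                 [INF, INF, INF, 0.0]])
n, goal = len(cost), 3
V = np.full(n, INF)
V[goal] = 0.0
for _ in range(n - 1):                   # at most n-1 relaxation sweeps
    V = np.min(cost + V[None, :], axis=1)
print(V)                                 # -> [9. 5. 8. 0.]
```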
Electronic neural networks for global optimization
NASA Technical Reports Server (NTRS)
Thakoor, A. P.; Moopenn, A. W.; Eberhardt, S.
1990-01-01
An electronic neural network with feedback architecture, implemented in custom analog VLSI, is described. Its application to problems of global optimization for dynamic assignment is discussed. The convergence properties of the neural network hardware are compared with computer simulation results. The neural network's ability to provide optimal or near-optimal solutions within only a few neuron time constants, a speed enhancement of several orders of magnitude over conventional search methods, is demonstrated. The effect of noise on the circuit dynamics and the convergence behavior of the neural network hardware is also examined.
Application of dynamic recurrent neural networks in nonlinear system identification
NASA Astrophysics Data System (ADS)
Du, Yun; Wu, Xueli; Sun, Huiqin; Zhang, Suying; Tian, Qiang
2006-11-01
An adaptive identification method using a simple dynamic recurrent neural network (SRNN) for nonlinear dynamic systems is presented in this paper. The method rests on the idea that describing the nonlinear characteristics of a system through the internal-state feedback of a dynamic network reflects its dynamics more directly; on this basis, the recursive prediction error (RPE) learning algorithm for the SRNN is derived, and the algorithm is improved by adopting a recursion-layer topology without weight values. The simulation results indicate that this kind of neural network can be used in real-time control, owing to its fewer weights, simpler learning algorithm, faster identification, and higher model precision. It avoids the intricate training and slow convergence caused by the complicated topologies of conventional dynamic recurrent neural networks.
Synchronization transition in neuronal networks composed of chaotic or non-chaotic oscillators.
Xu, Kesheng; Maidana, Jean Paul; Castro, Samy; Orio, Patricio
2018-05-30
Chaotic dynamics has been observed in neurons and neural networks, both in experimental data and in numerical simulations, and theoretical studies have proposed an underlying role of chaos in neural systems. Nevertheless, whether chaotic neural oscillators make a significant contribution to network behaviour, and whether the dynamical richness of neural networks is sensitive to the dynamics of isolated neurons, remain open questions. We investigated synchronization transitions in heterogeneous neural networks of neurons connected by electrical coupling in a small-world topology. The nodes in our model are oscillatory neurons that, when isolated, can exhibit either chaotic or non-chaotic behaviour, depending on conductance parameters. We found that the heterogeneity of firing rates and firing patterns makes a greater contribution than chaos to the steepness of the synchronization transition curve. We also show that chaotic dynamics of the isolated neurons do not always make a visible difference in the transition to full synchrony. Moreover, macroscopic chaos is observed regardless of the dynamical nature of the neurons. However, performing a Functional Connectivity Dynamics analysis, we show that chaotic nodes can promote what is known as multi-stable behaviour, where the network dynamically switches between a number of different semi-synchronized, metastable states.
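A synchronization transition of the kind measured here can be sketched with phase oscillators on a small-world graph; Kuramoto phases stand in for the paper's conductance-based neurons, and the topology and coupling values are illustrative assumptions:

```python
import numpy as np

# Synchronization-transition sketch on a Watts-Strogatz-style graph.
rng = np.random.default_rng(0)
N, k, p = 100, 4, 0.1                     # ring, k neighbours, rewiring prob.
A = np.zeros((N, N), dtype=bool)
for i in range(N):
    for d in range(1, k // 2 + 1):
        j = (i + d) % N
        if rng.random() < p:              # rewire edge to a random shortcut
            j = int(rng.integers(N))
        A[i, j] = A[j, i] = True

omega = rng.normal(1.0, 0.2, N)           # heterogeneous natural frequencies
for K in (0.1, 0.5, 1.0, 2.0):            # sweep the coupling strength
    th = rng.uniform(0, 2 * np.pi, N)
    for _ in range(4000):
        coupling = (A * np.sin(th[None, :] - th[:, None])).sum(axis=1)
        th += 0.01 * (omega + K / k * coupling)
    R = abs(np.exp(1j * th).mean())       # order parameter: 0 = async, 1 = sync
    print(f"K={K:.1f}  R={R:.2f}")
```

Plotting R against K traces the transition curve whose steepness the paper relates to firing-rate heterogeneity rather than to single-node chaos.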
Miconi, Thomas
2017-01-01
Neural activity during cognitive tasks exhibits complex dynamics that flexibly encode task-relevant variables. Chaotic recurrent networks, which spontaneously generate rich dynamics, have been proposed as a model of cortical computation during cognitive tasks. However, existing methods for training these networks are either biologically implausible, and/or require a continuous, real-time error signal to guide learning. Here we show that a biologically plausible learning rule can train such recurrent networks, guided solely by delayed, phasic rewards at the end of each trial. Networks endowed with this learning rule can successfully learn nontrivial tasks requiring flexible (context-dependent) associations, memory maintenance, nonlinear mixed selectivities, and coordination among multiple outputs. The resulting networks replicate complex dynamics previously observed in animal cortex, such as dynamic encoding of task features and selective integration of sensory inputs. We conclude that recurrent neural networks offer a plausible model of cortical dynamics during both learning and performance of flexible behavior. DOI: http://dx.doi.org/10.7554/eLife.20899.001 PMID:28230528
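The ingredients of such a delayed-reward rule (exploratory perturbations, an eligibility trace, and a reward-minus-baseline update applied only at the trial end) can be sketched as follows; the task, sizes, and constants are illustrative assumptions, not the paper's exact rule:

```python
import numpy as np

# Sketch of reward-modulated learning guided only by a delayed, phasic reward.
rng = np.random.default_rng(1)
N, T, lr = 50, 100, 3e-4
W = rng.normal(0, 1 / np.sqrt(N), (N, N))     # recurrent weights
w_out = rng.normal(0, 0.1, N)                 # fixed readout
R_bar = 0.0                                   # running reward baseline
for trial in range(1000):
    x, elig = np.zeros(N), np.zeros((N, N))
    for t in range(T):
        r = np.tanh(x)
        pert = rng.normal(0, 0.5, N)          # exploratory perturbation
        x += 0.1 * (-x + W @ r + pert)
        elig += np.outer(pert, r)             # perturbation-activity trace
    R = -(w_out @ np.tanh(x) - 1.0) ** 2      # reward only at the trial end
    W += lr * (R - R_bar) * elig              # delayed, phasic update
    R_bar += 0.05 * (R - R_bar)               # update the baseline
print("final-trial reward:", R)
```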
Neural dynamics based on the recognition of neural fingerprints
Carrillo-Medina, José Luis; Latorre, Roberto
2015-01-01
Experimental evidence has revealed the existence of characteristic spiking features in different neural signals, e.g., individual neural signatures identifying the emitter or functional signatures characterizing specific tasks. These neural fingerprints may play a critical role in neural information processing, since they allow receptors to discriminate or contextualize incoming stimuli. This could be a powerful strategy for neural systems that greatly enhances the encoding and processing capacity of these networks. Nevertheless, the study of information processing based on the identification of specific neural fingerprints has attracted little attention. In this work, we study (i) the emerging collective dynamics of a network of neurons that communicate with each other by exchange of neural fingerprints and (ii) the influence of the network topology on the self-organizing properties within the network. Complex collective dynamics emerge in the network in the presence of stimuli. Predefined inputs, i.e., specific neural fingerprints, are detected and encoded into coexisting patterns of activity that propagate throughout the network with different spatial organization. The patterns evoked by a stimulus can survive after the stimulation is over, which provides memory mechanisms to the network. The results presented in this paper suggest that neural information processing based on neural fingerprints can be a plausible, flexible, and powerful strategy. PMID:25852531
Guarneri, Paolo; Rocca, Gianpiero; Gobbi, Massimiliano
2008-09-01
This paper deals with the simulation of tire/suspension dynamics using recurrent neural networks (RNNs). RNNs are derived from multilayer feedforward neural networks by adding feedback connections between the output and input layers. The optimal network architecture derives from a parametric analysis based on the optimal tradeoff between network accuracy and size. The neural network can be trained with experimental data obtained in the laboratory from simulated road profiles (cleats). The results obtained from the neural network demonstrate good agreement with the experimental results over a wide range of operating conditions. The NN model can be effectively applied as part of a vehicle system model to accurately predict the dynamic behavior of the elastic bushings and tires. Although the neural network model, as a black-box model, does not provide good insight into the physical behavior of the tire/suspension system, it is a useful tool for assessing vehicle ride and noise, vibration, and harshness (NVH) performance, owing to its good computational efficiency and accuracy.
Learning Data Set Influence on Identification Accuracy of Gas Turbine Neural Network Model
NASA Astrophysics Data System (ADS)
Kuznetsov, A. V.; Makaryants, G. M.
2018-01-01
Many studies have addressed gas turbine engine identification using dynamic neural network models, where the aim is to minimize the error between the model and the real object during identification. However, the question of how the training data set is composed is usually neglected. This article studies the influence of the training data set type on the accuracy of a gas turbine neural network model. The identification object is a thermodynamic model of a micro gas turbine engine, whose input signal is the fuel consumption and whose output signal is the engine rotor rotation frequency. Four types of input signal (step, fast, slow, and mixed) were used to create the training and testing data sets, and four dynamic neural networks were trained, one per training set type. Each neural network was then tested against all four test data sets, and the resulting 16 transition processes were compared with the corresponding solutions of the thermodynamic model. Comparing the error ranges across the test data sets shows that they are small, and therefore that the influence of the data set type on identification accuracy is low.
Dynamic interactions in neural networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arbib, M.A.; Amari, S.
The study of neural networks is enjoying a great renaissance, both in computational neuroscience, the development of information processing models of living brains, and in neural computing, the use of neurally inspired concepts in the construction of intelligent machines. This volume presents models and data on the dynamic interactions occurring in the brain, and exhibits the dynamic interactions between research in computational neuroscience and in neural computing. The authors present current research, future trends and open problems.
Bio-inspired spiking neural network for nonlinear systems control.
Pérez, Javier; Cabrera, Juan A; Castillo, Juan J; Velasco, Juan M
2018-08-01
Spiking neural networks (SNN) are the third generation of artificial neural networks and the closest approximation to biological neural networks. SNNs use temporal spike trains to encode inputs and outputs, allowing faster and more complex computation. As demonstrated by biological organisms, they are a potentially good approach to designing controllers for highly nonlinear dynamic systems in which the performance of controllers developed by conventional techniques is unsatisfactory or difficult to implement. SNN-based controllers exploit their capacity for online learning and self-adaptation to evolve when transferred from simulations to the real world. The inherently binary and temporal way SNNs encode information also facilitates their hardware implementation compared to analog neurons, and they often require fewer neurons than controllers based on other artificial neural networks. In this work, these neuronal systems are imitated to control nonlinear dynamic systems. For this purpose, a control structure based on spiking neural networks has been designed, with particular attention paid to optimizing the structure and size of the neural network. The proposed structure is able to control dynamic systems with a reduced number of neurons and connections. A supervised learning process using evolutionary algorithms has been carried out to train the controller. The efficiency of the proposed network has been verified in two examples of dynamic system control; simulations show that the proposed SNN-based control exhibits superior performance compared to other approaches based on neural networks and SNNs.
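The spike-train coding that distinguishes SNNs can be illustrated with the leaky integrate-and-fire neuron commonly used as a building block in such controllers; constants here are illustrative assumptions:

```python
import numpy as np

# Leaky integrate-and-fire (LIF) neuron converting an input current
# into a spike train.
def lif_spike_train(current, dt=1e-3, tau=0.02, v_th=1.0, v_reset=0.0):
    v, spikes = 0.0, []
    for i_t in current:
        v += dt / tau * (-v + i_t)     # leaky integration of the input
        if v >= v_th:                  # threshold crossing emits a spike
            spikes.append(True)
            v = v_reset
        else:
            spikes.append(False)
    return np.array(spikes)

I = np.full(500, 1.5)                  # constant suprathreshold drive
train = lif_spike_train(I)
print("spike count:", int(train.sum()), "of", train.size, "steps")
```

The input magnitude is thus re-expressed as spike timing, which is the temporal code the abstract refers to.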
Dynamic security contingency screening and ranking using neural networks.
Mansour, Y; Vaahedi, E; El-Sharkawi, M A
1997-01-01
This paper summarizes BC Hydro's experience in applying neural networks to dynamic security contingency screening and ranking. The idea is to use information on the prevailing operating condition to directly provide contingency screening and ranking with a trained neural network. To train the two neural networks for the large-scale systems of BC Hydro and Hydro Quebec, a total of 1691 detailed transient stability simulations were conducted: 1158 for the BC Hydro system and 533 for the Hydro Quebec system. The simulation program was equipped with an energy margin calculation module (second kick) to measure the energy margin in each run. The first set of results showed poor performance of the neural networks in assessing dynamic security. However, a number of corrective measures improved the results significantly. These corrective measures included: 1) the effectiveness of output; 2) the number of outputs; 3) the type of features (static versus dynamic); 4) the number of features; 5) system partitioning; and 6) the ratio of training samples to features. The final results obtained on the large-scale systems of BC Hydro and Hydro Quebec demonstrate good potential for neural networks in dynamic security contingency screening and ranking.
The Complexity of Dynamics in Small Neural Circuits
Panzeri, Stefano
2016-01-01
Mean-field approximations are a powerful tool for studying large neural networks. However, they do not describe well the behavior of networks composed of a small number of neurons. In this case, major differences between the mean-field approximation and the real behavior of the network can arise. Yet, many interesting problems in neuroscience involve the study of mesoscopic networks composed of a few tens of neurons. Nonetheless, mathematical methods that correctly describe networks of small size are still rare, and this prevents us from making progress in understanding neural dynamics at these intermediate scales. Here we develop a novel systematic analysis of the dynamics of arbitrarily small networks composed of homogeneous populations of excitatory and inhibitory firing-rate neurons. We study the local bifurcations of their neural activity with an approach that is largely analytically tractable, and we numerically determine the global bifurcations. We find that for strong inhibition these networks give rise to very complex dynamics, caused by the formation of multiple branching solutions of the neural dynamics equations that emerge through spontaneous symmetry-breaking. This qualitative change of the neural dynamics is a finite-size effect of the network, which reveals qualitative and previously unexplored differences between mesoscopic cortical circuits and their mean-field approximation. The most important consequence of spontaneous symmetry-breaking is the ability of mesoscopic networks to regulate their degree of functional heterogeneity, which is thought to help reduce the detrimental effect of noise correlations on cortical information processing. PMID:27494737
NASA Astrophysics Data System (ADS)
Sidelnikov, O. S.; Redyuk, A. A.; Sygletos, S.
2017-12-01
We consider neural network-based schemes of digital signal processing. It is shown that the use of a dynamic neural network-based scheme of signal processing ensures an increase in the optical signal transmission quality in comparison with that provided by other methods for nonlinear distortion compensation.
Coherence resonance in bursting neural networks
NASA Astrophysics Data System (ADS)
Kim, June Hoan; Lee, Ho Jun; Min, Cheol Hong; Lee, Kyoung J.
2015-10-01
Synchronized neural bursts are one of the most noticeable dynamic features of neural networks, being essential for various phenomena in neuroscience, yet their complex dynamics are not well understood. With extrinsic electrical and optical manipulations on cultured neural networks, we demonstrate that the regularity (or randomness) of burst sequences is in many cases determined by a (few) low-dimensional attractor(s) working under strong neural noise. Moreover, there is an optimal level of noise strength at which the regularity of the interburst interval sequence becomes maximal—a phenomenon of coherence resonance. The experimental observations are successfully reproduced through computer simulations on a well-established neural network model, suggesting that the same phenomena may occur in many in vivo as well as in vitro neural networks.
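The coherence-resonance effect reported here can be sketched with a single noise-driven excitable unit: a subthreshold LIF neuron stands in for the cultured networks, with interspike intervals in place of the networks' interburst intervals, and all parameters illustrative:

```python
import numpy as np

# Coherence-resonance sketch: the unit fires only because of noise, and the
# regularity of its interspike intervals (CV = std/mean, lower = more
# regular) is typically best at an intermediate noise strength.
rng = np.random.default_rng(2)
dt, tau, v_th, drive = 1e-3, 0.02, 1.0, 0.95     # drive below threshold
for sigma in (0.2, 0.5, 1.5, 4.0):
    v, last, t, isis = 0.0, 0.0, 0.0, []
    for _ in range(200_000):
        t += dt
        v += dt / tau * (-v + drive) + sigma * np.sqrt(dt) * rng.normal()
        if v >= v_th:
            isis.append(t - last)
            last, v = t, 0.0
    isis = np.array(isis)
    cv = isis.std() / isis.mean() if isis.size > 2 else np.nan
    print(f"sigma={sigma:.1f}  spikes={isis.size}  CV={cv:.2f}")
```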
NASA Astrophysics Data System (ADS)
Li, Xiaofeng; Xiang, Suying; Zhu, Pengfei; Wu, Min
2015-12-01
To overcome the inherent deficiencies of the traditional BP neural network, such as slow convergence, easy trapping in local minima, poor generalization ability, and difficulty in determining the network structure, a dynamic self-adaptive learning algorithm for the BP neural network is put forward to improve its performance. The new algorithm combines the merits of principal component analysis, particle swarm optimization, correlation analysis, and a self-adaptive model, and can thus effectively solve the problems of selecting the structural parameters, initial connection weights, thresholds, and learning rates of the BP neural network. This new algorithm not only reduces human intervention, optimizes the topological structure of BP neural networks, and improves network generalization ability, but also accelerates the convergence speed of a network, avoids trapping in local minima, and enhances network adaptation and prediction ability. The dynamic self-adaptive learning algorithm of the BP neural network is used to forecast the total retail sale of consumer goods of Sichuan Province, China. Empirical results indicate that the new algorithm is superior to the traditional BP network algorithm in predicting accuracy and time consumption, which shows the feasibility and effectiveness of the new algorithm.
Neural networks for self-learning control systems
NASA Technical Reports Server (NTRS)
Nguyen, Derrick H.; Widrow, Bernard
1990-01-01
It is shown how a neural network can learn of its own accord to control a nonlinear dynamic system. An emulator, a multilayered neural network, learns to identify the system's dynamic characteristics. The controller, another multilayered neural network, next learns to control the emulator. The self-trained controller is then used to control the actual dynamic system. The learning process continues as the emulator and controller improve and track the physical process. An example is given to illustrate these ideas. The 'truck backer-upper,' a neural network controller that steers a trailer truck while the truck is backing up to a loading dock, is demonstrated. The controller is able to guide the truck to the dock from almost any initial position. The technique explored should be applicable to a wide variety of nonlinear control problems.
Liu, Hui; Song, Yongduan; Xue, Fangzheng; Li, Xiumin
2015-11-01
In this paper, the generation of a multi-clustered structure in a self-organized neural network with different neuronal firing patterns, i.e., bursting or spiking, has been investigated. The initially all-to-all-connected spiking or bursting neural network can self-organize into a clustered structure through symmetric spike-timing-dependent plasticity learning for both bursting and spiking neurons. However, the time consumed by this clustering procedure is much shorter for the burst-based self-organized neural network (BSON) than for the spike-based self-organized neural network (SSON). Our results show that the BSON network has more obvious small-world properties, i.e., a higher clustering coefficient and smaller shortest path length, than the SSON network. Also, the larger structure entropy and activity entropy of the BSON network demonstrate that it has higher topological complexity and dynamical diversity, which benefits information transmission in neural circuits. Hence, we conclude that burst firing can significantly enhance the efficiency of the clustering procedure, and the emergent clustered structure renders the whole network more synchronous and therefore more sensitive to weak input. This result is further confirmed by its improved performance on stochastic resonance. Therefore, we believe that the multi-clustered neural network self-organized from bursting dynamics has high efficiency in information processing.
Modular, Hierarchical Learning By Artificial Neural Networks
NASA Technical Reports Server (NTRS)
Baldi, Pierre F.; Toomarian, Nikzad
1996-01-01
Modular and hierarchical approach to supervised learning by artificial neural networks leads to neural networks more structured than those in which all neurons are fully interconnected. These networks utilize general feedforward flow of information and sparse recurrent connections to achieve dynamical effects. The modular organization, sparsity of modular units and connections, and fact that learning is much more circumscribed are all attractive features for designing neural-network hardware. Learning streamlined by imitating some aspects of biological neural networks.
A Scientific Understanding of Keystroke Dynamics
2012-01-01
keystroke-dynamics classifiers. Obaidat and Sadoun (1997) had 16 subjects type their own and each other's user IDs. They constructed neural networks and a...puts are assigned high anomaly scores. In the training phase, the neural network is constructed with p input nodes and p output nodes (where p is...Berlin. S. Cho, C. Han, D. H. Han, and H.-I. Kim. Web-based keystroke dynamics identity verification using neural network. Journal of Organizational
Liu, Qingshan; Guo, Zhishan; Wang, Jun
2012-02-01
In this paper, a one-layer recurrent neural network is proposed for solving pseudoconvex optimization problems subject to linear equality and bound constraints. Compared with the existing neural networks for optimization (e.g., the projection neural networks), the proposed neural network is capable of solving more general pseudoconvex optimization problems with equality and bound constraints. Moreover, it is capable of solving constrained fractional programming problems as a special case. The convergence of the state variables of the proposed neural network to achieve solution optimality is guaranteed as long as the designed parameters in the model are larger than the derived lower bounds. Numerical examples with simulation results illustrate the effectiveness and characteristics of the proposed neural network. In addition, an application for dynamic portfolio optimization is discussed.
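Projection-type recurrent networks of the kind extended here follow the dynamics dx/dt = -x + P(x - α∇f(x)), where P projects onto the constraint set. A Euler-integrated sketch for a bound-constrained, linear-fractional (hence pseudoconvex) objective, chosen as an illustrative stand-in for the paper's examples:

```python
import numpy as np

# Projection network dx/dt = -x + P(x - alpha * grad f(x)) for the
# linear-fractional objective f(x) = (c.x + d) / (a.x + b) on a box.
c, d = np.array([2.0, 1.0]), 1.0
a, b = np.array([1.0, 1.0]), 2.0

def grad_f(x):
    num, den = c @ x + d, a @ x + b
    return (c * den - num * a) / den ** 2

lo, hi, alpha, dt = 0.0, 1.0, 0.5, 0.05   # box constraints and step sizes
x = np.array([0.9, 0.1])
for _ in range(2000):
    x += dt * (-x + np.clip(x - alpha * grad_f(x), lo, hi))
print("x* ~", x, " f(x*) =", (c @ x + d) / (a @ x + b))   # expect x* = (0, 0)
```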
Neurodynamics With Spatial Self-Organization
NASA Technical Reports Server (NTRS)
Zak, Michail A.
1993-01-01
Report presents theoretical study of dynamics of neural network organizing own response in both phase space and in position space. Postulates several mathematical models of dynamics including spatial derivatives representing local interconnections among neurons. Shows how neural responses propagate via these interconnections and how spatial patterns of neural responses form in homogeneous biological neural network.
NASA Technical Reports Server (NTRS)
Villarreal, James A.; Shelton, Robert O.
1992-01-01
Concept of space-time neural network affords distributed temporal memory enabling such network to model complicated dynamical systems mathematically and to recognize temporally varying spatial patterns. Digital filters replace synaptic-connection weights of conventional back-error-propagation neural network.
Synthesis of recurrent neural networks for dynamical system simulation.
Trischler, Adam P; D'Eleuterio, Gabriele M T
2016-08-01
We review several of the most widely used techniques for training recurrent neural networks to approximate dynamical systems, then describe a novel algorithm for this task. The algorithm is based on an earlier theoretical result that guarantees the quality of the network approximation. We show that a feedforward neural network can be trained on the vector-field representation of a given dynamical system using backpropagation, then recast it as a recurrent network that replicates the original system's dynamics. After detailing this algorithm and its relation to earlier approaches, we present numerical examples that demonstrate its capabilities. One of the distinguishing features of our approach is that both the original dynamical systems and the recurrent networks that simulate them operate in continuous time.
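The two-step recipe (fit a feedforward map to the system's vector field, then run the fitted map inside an integrator as a continuous-time recurrent model) can be sketched as follows; random-feature ridge regression stands in for backpropagation training, and the Van der Pol oscillator is an assumed example system:

```python
import numpy as np

# (1) Fit a feedforward approximation of the vector field f;
# (2) roll it out in closed loop as a recurrent, continuous-time model.
rng = np.random.default_rng(3)
f = lambda x: np.stack([x[..., 1],
                        (1 - x[..., 0] ** 2) * x[..., 1] - x[..., 0]], axis=-1)

X = rng.uniform(-3, 3, (4000, 2))             # samples of the state space
Wf = rng.normal(size=(2, 300))                # random Fourier features
bf = rng.uniform(0, 2 * np.pi, 300)
phi = lambda x: np.cos(x @ Wf + bf)
H = phi(X)
beta = np.linalg.solve(H.T @ H + 1e-3 * np.eye(300), H.T @ f(X))  # ridge fit

x, dt = np.array([2.0, 0.0]), 0.01
for _ in range(3000):                         # closed-loop rollout
    x = x + dt * (phi(x) @ beta)              # learned field replaces true f
print("state after rollout:", x)              # should stay near the limit cycle
```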
Propagating waves can explain irregular neural dynamics.
Keane, Adam; Gong, Pulin
2015-01-28
Cortical neurons in vivo fire quite irregularly. Previous studies about the origin of such irregular neural dynamics have given rise to two major models: a balanced excitation and inhibition model, and a model of highly synchronized synaptic inputs. To elucidate the network mechanisms underlying synchronized synaptic inputs and account for irregular neural dynamics, we investigate a spatially extended, conductance-based spiking neural network model. We show that propagating wave patterns with complex dynamics emerge from the network model. These waves sweep past neurons, to which they provide highly synchronized synaptic inputs. On the other hand, these patterns only emerge from the network with balanced excitation and inhibition; our model therefore reconciles the two major models of irregular neural dynamics. We further demonstrate that the collective dynamics of propagating wave patterns provides a mechanistic explanation for a range of irregular neural dynamics, including the variability of spike timing, slow firing rate fluctuations, and correlated membrane potential fluctuations. In addition, in our model, the distributions of synaptic conductance and membrane potential are non-Gaussian, consistent with recent experimental data obtained using whole-cell recordings. Our work therefore relates the propagating waves that have been widely observed in the brain to irregular neural dynamics. These results demonstrate that neural firing activity, although appearing highly disordered at the single-neuron level, can form dynamical coherent structures, such as propagating waves at the population level.
Standard representation and unified stability analysis for dynamic artificial neural network models.
Kim, Kwang-Ki K; Patrón, Ernesto Ríos; Braatz, Richard D
2018-02-01
An overview is provided of dynamic artificial neural network models (DANNs) for nonlinear dynamical system identification and control problems, and convex stability conditions are proposed that are less conservative than past results. The three most popular classes of dynamic artificial neural network models are described, with their mathematical representations and architectures followed by transformations based on their block diagrams that are convenient for stability and performance analyses. Classes of nonlinear dynamical systems that are universally approximated by such models are characterized, which include rigorous upper bounds on the approximation errors. A unified framework and linear matrix inequality-based stability conditions are described for different classes of dynamic artificial neural network models that take additional information into account such as local slope restrictions and whether the nonlinearities within the DANNs are odd. A theoretical example shows reduced conservatism obtained by the conditions.
Reconfigurable Control with Neural Network Augmentation for a Modified F-15 Aircraft
NASA Technical Reports Server (NTRS)
Burken, John J.; Williams-Hayes, Peggy; Kaneshige, John T.; Stachowiak, Susan J.
2006-01-01
Description of the performance of a simplified dynamic inversion controller with neural network augmentation follows. Simulation studies focus on the results with and without neural network adaptation through the use of an F-15 aircraft simulator that has been modified to include canards. Simulated control law performance with a surface failure, in addition to an aerodynamic failure, is presented. The aircraft, with adaptation, attempts to minimize the inertial cross-coupling effect of the failure (a control derivative anomaly associated with a jammed control surface). The dynamic inversion controller calculates necessary surface commands to achieve desired rates. The dynamic inversion controller uses approximate short period and roll axis dynamics. The yaw axis controller is a sideslip rate command system. Methods are described to reduce the cross-coupling effect and maintain adequate tracking errors for control surface failures. The aerodynamic failure destabilizes the pitching moment due to angle of attack. The results show that control of the aircraft with the neural networks is easier (more damped) than without the neural networks. Simulation results show neural network augmentation of the controller improves performance with aerodynamic and control surface failures in terms of tracking error and cross-coupling reduction.
Adaptive Control Using Neural Network Augmentation for a Modified F-15 Aircraft
NASA Technical Reports Server (NTRS)
Burken, John J.; Williams-Hayes, Peggy; Kaneshige, J. T.; Stachowiak, Susan J.
2006-01-01
Description of the performance of a simplified dynamic inversion controller with neural network augmentation follows. Simulation studies focus on the results with and without neural network adaptation through the use of an F-15 aircraft simulator that has been modified to include canards. Simulated control law performance with a surface failure, in addition to an aerodynamic failure, is presented. The aircraft, with adaptation, attempts to minimize the inertial cross-coupling effect of the failure (a control derivative anomaly associated with a jammed control surface). The dynamic inversion controller calculates necessary surface commands to achieve desired rates. The dynamic inversion controller uses approximate short period and roll axis dynamics. The yaw axis controller is a sideslip rate command system. Methods are described to reduce the cross-coupling effect and maintain adequate tracking errors for control surface failures. The aerodynamic failure destabilizes the pitching moment due to angle of attack. The results show that control of the aircraft with the neural networks is easier (more damped) than without the neural networks. Simulation results show neural network augmentation of the controller improves performance with aerodynamic and control surface failures in terms of tracking error and cross-coupling reduction.
Li, Shuhui; Fairbank, Michael; Johnson, Cameron; Wunsch, Donald C; Alonso, Eduardo; Proaño, Julio L
2014-04-01
Three-phase grid-connected converters are widely used in renewable and electric power system applications. Traditionally, grid-connected converters are controlled with standard decoupled d-q vector control mechanisms. However, recent studies indicate that such mechanisms show limitations in their applicability to dynamic systems. This paper investigates how to mitigate such restrictions using a neural network to control a grid-connected rectifier/inverter. The neural network implements a dynamic programming algorithm and is trained by using back-propagation through time. To enhance performance and stability under disturbance, additional strategies are adopted, including the use of integrals of error signals to the network inputs and the introduction of grid disturbance voltage to the outputs of a well-trained network. The performance of the neural-network controller is studied under typical vector control conditions and compared against conventional vector control methods, which demonstrates that the neural vector control strategy proposed in this paper is effective. Even in dynamic and power converter switching environments, the neural vector controller shows strong ability to trace rapidly changing reference commands, tolerate system disturbances, and satisfy control requirements for a faulted power system.
System Identification for Nonlinear Control Using Neural Networks
NASA Technical Reports Server (NTRS)
Stengel, Robert F.; Linse, Dennis J.
1990-01-01
An approach to incorporating artificial neural networks in nonlinear, adaptive control systems is described. The controller contains three principal elements: a nonlinear inverse dynamic control law whose coefficients depend on a comprehensive model of the plant, a neural network that models system dynamics, and a state estimator whose outputs drive the control law and train the neural network. Attention is focused on the system identification task, which combines an extended Kalman filter with generalized spline function approximation. Continual learning is possible during normal operation, without taking the system off line for specialized training. Nonlinear inverse dynamic control requires smooth derivatives as well as function estimates, imposing stringent goals on the approximating technique.
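The identification element, a Kalman filter estimating network weights online, can be sketched for a tiny network: the weights are the filter state, each training pair is a measurement, and the Jacobian of the output with respect to the weights linearizes the measurement model. Finite differences stand in for analytic derivatives, and the target function and sizes are illustrative assumptions:

```python
import numpy as np

# Extended-Kalman-filter training of a tiny 1-8-1 network.
rng = np.random.default_rng(4)
H_units = 8

def net(w, u):
    W1, b1, w2 = w[:H_units], w[H_units:2 * H_units], w[2 * H_units:]
    return w2 @ np.tanh(W1 * u + b1)

n = 3 * H_units
w = rng.normal(0, 0.5, n)
P = np.eye(n)                                  # weight covariance
Q, R = 1e-6 * np.eye(n), 0.05                  # process / measurement noise
for _ in range(2000):
    u = rng.uniform(-np.pi, np.pi)
    y = np.sin(u)                              # signal to be identified
    J = np.array([(net(w + 1e-5 * e, u) - net(w - 1e-5 * e, u)) / 2e-5
                  for e in np.eye(n)])         # output Jacobian (central diff.)
    P = P + Q
    S = J @ P @ J + R                          # innovation variance (scalar)
    K = P @ J / S                              # Kalman gain
    w = w + K * (y - net(w, u))
    P = P - np.outer(K, J @ P)
test = np.linspace(-np.pi, np.pi, 50)
print("mean-squared fit error:",
      np.mean([(np.sin(u) - net(w, u)) ** 2 for u in test]))
```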
Suemitsu, Yoshikazu; Nara, Shigetoshi
2004-09-01
Chaotic dynamics introduced into a neural network model is applied to solving two-dimensional mazes, which are ill-posed problems. A moving object moves from its position at t to t + 1 by a simply defined motion function calculated from the firing patterns of the neural network model at each time step t. We have embedded several prototype attractors that correspond to the simple motion of the object orienting toward several directions in two-dimensional space in our neural network model. Introducing chaotic dynamics into the network gives outputs sampled from intermediate state points between embedded attractors in a state space, and these dynamics enable the object to move in various directions. System parameter switching between a chaotic and an attractor regime in the state space of the neural network enables the object to move to a set target in a two-dimensional maze. Results of computer simulations show that the success rate for this method over 300 trials is higher than that of a random walk. To investigate why the proposed method gives better performance, we calculate and discuss statistical data with respect to dynamical structure.
Generalized Adaptive Artificial Neural Networks
NASA Technical Reports Server (NTRS)
Tawel, Raoul
1993-01-01
Mathematical model of supervised learning by artificial neural network provides for simultaneous adjustments of both temperatures of neurons and synaptic weights, and includes feedback as well as feedforward synaptic connections. Extension of mathematical model described in "Adaptive Neurons For Artificial Neural Networks" (NPO-17803). Dynamics of neural network represented in new model by less-restrictive continuous formalism.
Impact of leakage delay on bifurcation in high-order fractional BAM neural networks.
Huang, Chengdai; Cao, Jinde
2018-02-01
The effects of leakage delay on the dynamics of integer-order neural networks have lately received considerable attention. It has been confirmed that fractional neural networks more appropriately capture the dynamical properties of neural networks, but results on fractional neural networks with leakage delay are relatively few. This paper concentrates on the issue of bifurcation for high-order fractional bidirectional associative memory (BAM) neural networks involving leakage delay, making a first attempt to tackle their stability and bifurcation. The conditions for the appearance of bifurcation in the proposed systems with leakage delay are first established by adopting the time delay as a bifurcation parameter. Then, the bifurcation criteria for such a system without leakage delay are acquired. Comparative analysis reveals that the stability of the proposed high-order fractional neural networks is critically weakened by leakage delay, which therefore cannot be overlooked. Numerical examples are finally exhibited to attest to the efficiency of the theoretical results.
NASA Astrophysics Data System (ADS)
Fan, Meng; Ye, Dan
2005-09-01
This paper studies the dynamics of a system of retarded functional differential equations (RFDEs), which generalize the Hopfield neural network models, the bidirectional associative memory neural networks, the hybrid network models of the cellular neural network type, and some population growth models. Sufficient criteria are established for the globally exponential stability and the existence and uniqueness of a pseudo almost periodic solution. The approaches are based on constructing suitable Lyapunov functionals and the well-known Banach contraction mapping principle. The paper ends with applications of the main results to some neural network models and population growth models, together with numerical simulations.
The experimental identification of magnetorheological dampers and evaluation of their controllers
NASA Astrophysics Data System (ADS)
Metered, H.; Bonello, P.; Oyadiji, S. O.
2010-05-01
Magnetorheological (MR) fluid dampers are semi-active control devices that have been applied over a wide range of practical vibration control applications. This paper concerns the experimental identification of the dynamic behaviour of an MR damper and the use of the identified parameters in the control of such a damper. Feed-forward and recurrent neural networks are used to model both the direct and inverse dynamics of the damper. Training and validation of the proposed neural networks are achieved by using the data generated through dynamic tests with the damper mounted on a tensile testing machine. The validation test results clearly show that the proposed neural networks can reliably represent both the direct and inverse dynamic behaviours of an MR damper. The effect of the cylinder's surface temperature on both the direct and inverse dynamics of the damper is studied, and the neural network model is shown to be reasonably robust against significant temperature variation. The inverse recurrent neural network model is introduced as a damper controller and experimentally evaluated against alternative controllers proposed in the literature. The results reveal that the neural-based damper controller offers superior damper control. This observation, together with the added advantages of low power requirements, extended damper service life, and minimal use of sensors, indicates that a neural-based damper controller potentially offers the most cost-effective vibration control solution among the controllers investigated.
Liu, Qingshan; Dang, Chuangyin; Huang, Tingwen
2013-02-01
This paper presents a decision-making model described by a recurrent neural network for dynamic portfolio optimization. The portfolio-optimization problem is first converted into a constrained fractional programming problem. Since the objective function in the programming problem is not convex, the traditional optimization techniques are no longer applicable for solving this problem. Fortunately, the objective function in the fractional programming is pseudoconvex on the feasible region. It leads to a one-layer recurrent neural network modeled by means of a discontinuous dynamic system. To ensure the optimal solutions for portfolio optimization, the convergence of the proposed neural network is analyzed and proved. In fact, the neural network is guaranteed to obtain the optimal solutions for portfolio-investment advice if some mild conditions are satisfied. A numerical example with simulation results substantiates the effectiveness and illustrates the characteristics of the proposed neural network.
NASA Astrophysics Data System (ADS)
Li, Xiumin; Wang, Wei; Xue, Fangzheng; Song, Yongduan
2018-02-01
Recently, there has been continuously increasing interest in building computational models of spiking neural networks (SNN), such as the Liquid State Machine (LSM). Biologically inspired self-organized neural networks with neural plasticity can enhance computational performance, with the characteristic features of dynamical memory and recurrent connection cycles that distinguish them from the more widely used feedforward neural networks. Although a variety of computational models for brain-like learning and information processing have been proposed, the modeling of self-organized neural networks with multiple forms of neural plasticity is still an important open challenge. The main difficulties lie in the interplay among different neural plasticity rules and in understanding how the structures and dynamics of neural networks shape computational performance. In this paper, we propose a novel approach to developing LSM models with a biologically inspired self-organizing network based on two neural plasticity learning rules. The connectivity among excitatory neurons is adapted by spike-timing-dependent plasticity (STDP) learning; meanwhile, the degrees of neuronal excitability are regulated to maintain a moderate average activity level by another learning rule: intrinsic plasticity (IP). Our study shows that the LSM with STDP+IP performs better than an LSM with a random SNN or with an SNN obtained by STDP alone. The noticeable improvement of the proposed method is due to the better reflected competition among different neurons in the developed SNN model, as well as the more effectively encoded and processed relevant dynamic information through its learning and self-organizing mechanism. This result gives insights into the optimization of computational models of spiking neural networks with neural plasticity.
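The two plasticity rules combined in the paper can be sketched jointly: pair-based STDP reshapes recurrent weights from spike traces, while intrinsic plasticity nudges per-neuron thresholds toward a target rate. The binary-threshold neuron model and all constants below are illustrative assumptions:

```python
import numpy as np

# Joint STDP + intrinsic plasticity (IP) sketch on a small random network.
rng = np.random.default_rng(5)
N, steps = 50, 5000
W = rng.uniform(0, 0.2, (N, N))
np.fill_diagonal(W, 0.0)
theta = np.ones(N)                             # per-neuron thresholds
trace = np.zeros(N)                            # exponential spike traces
A_plus, A_minus, eta_ip, r_target = 0.01, 0.012, 1e-3, 0.05
for _ in range(steps):
    drive = W @ trace + rng.uniform(0, 1.2, N)
    spikes = (drive > theta).astype(float)
    # STDP: pre trace strengthens synapses onto spiking posts (LTP);
    # post trace weakens synapses from spiking pres (LTD).
    W += A_plus * np.outer(spikes, trace) - A_minus * np.outer(trace, spikes)
    np.clip(W, 0.0, 0.5, out=W)
    theta += eta_ip * (spikes - r_target)      # IP: firing-rate homeostasis
    trace = 0.9 * trace + spikes
print("final-step rate:", spikes.mean(), " mean weight:", W.mean())
```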
Nonlinear neural control with power systems applications
NASA Astrophysics Data System (ADS)
Chen, Dingguo
1998-12-01
Extensive studies have been undertaken on the transient stability of large interconnected power systems with flexible ac transmission systems (FACTS) devices installed. A variety of control methodologies has been proposed to stabilize the postfault system, which would otherwise eventually lose stability without proper control. Generally speaking, regular transient stability is well understood, but the mechanism of load-driven voltage instability or voltage collapse has not been, and the interaction of generator dynamics and load dynamics makes the synthesis of stabilizing controllers even more challenging. There is currently increasing interest in research on neural networks as identifiers and controllers for dynamic time-varying nonlinear systems. This study focuses on the development of novel artificial neural network architectures for identification and control, with application to dynamic electric power systems, so that the stability of interconnected power systems following large disturbances and/or with the inclusion of uncertain loads can be largely enhanced and stable operation guaranteed. The latitudinal neural network architecture is proposed for system identification, and its properties are investigated. It may be used for the identification of nonlinear static/dynamic loads, which can further be used for static/dynamic voltage stability analysis. A neural network methodology is proposed for load modeling and voltage stability analysis: based on the neural network models of loads, voltage stability analysis evolves and modal analysis is performed, with simulation results provided. The transient stability problem is studied with consideration of load effects. A hierarchical neural control scheme is developed, using a trajectory-following policy so that the hierarchical neural controller performs almost as well in non-nominal cases as in nominal cases. An adaptive hierarchical neural control scheme is also proposed to deal with the time-varying nature of loads. Further, adaptive neural control, based on on-line updating of the weights and biases of the neural networks, is studied. Simulations on faulted power systems with unknown loads suggest that the proposed adaptive hierarchical neural control schemes should be useful for practical power applications.
Firing rate dynamics in recurrent spiking neural networks with intrinsic and network heterogeneity.
Ly, Cheng
2015-12-01
Heterogeneity of neural attributes has recently gained a lot of attention and is increasingly recognized as a crucial feature in neural processing. Despite its importance, this physiological feature has traditionally been neglected in theoretical studies of cortical neural networks. Thus, much is still unknown about the consequences of cellular and circuit heterogeneity in spiking neural networks. In particular, combining network or synaptic heterogeneity with intrinsic heterogeneity has yet to be considered systematically, despite the fact that both are known to exist and likely have significant roles in neural network dynamics. In a canonical recurrent spiking neural network model, we study how these two forms of heterogeneity lead to different distributions of excitatory firing rates. To analytically characterize how these types of heterogeneity affect the network, we employ a dimension reduction method that relies on a combination of Monte Carlo simulations and probability density function equations. We find that the relationship between intrinsic and network heterogeneity has a strong effect on the overall level of heterogeneity of the firing rates. Specifically, this relationship can lead to amplification or attenuation of firing rate heterogeneity, and these effects depend on whether the recurrent network is firing asynchronously or rhythmically. These observations are captured with the aforementioned reduction method, from which simpler analytic descriptions are also developed. The final analytic descriptions provide compact formulas for how the relationship between intrinsic and network heterogeneity determines firing rate heterogeneity dynamics in various settings.
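The central effect, that the relationship between intrinsic and network heterogeneity amplifies or attenuates firing-rate heterogeneity, can be sketched with a threshold-linear caricature far simpler than the paper's spiking network and reduction method; the distributions and rate model are illustrative assumptions:

```python
import numpy as np

# Each neuron gets a heterogeneous excitability (intrinsic) and a
# heterogeneous total input weight (network).  Rank-pairing the two
# positively widens the spread of steady-state rates; negatively, narrows it.
rng = np.random.default_rng(6)
N = 2000
intrinsic = rng.normal(1.0, 0.3, N)            # excitability spread
net_het = rng.normal(1.0, 0.3, N)              # input-strength spread
order = np.argsort(intrinsic)
for label, g in (("aligned", np.sort(net_het)),
                 ("opposed", np.sort(net_het)[::-1])):
    gain = np.empty(N)
    gain[order] = g                            # impose +/- rank correlation
    rates = np.maximum(gain * 1.0 + intrinsic - 1.0, 0.0)
    print(f"{label}: firing-rate std = {rates.std():.3f}")
```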
Peng, Zhouhua; Wang, Dan; Wang, Wei; Liu, Lu
2015-11-01
This paper investigates the containment control problem of networked autonomous underwater vehicles in the presence of model uncertainty and unknown ocean disturbances. A predictor-based neural dynamic surface control design method is presented to develop the distributed adaptive containment controllers, under which the trajectories of follower vehicles nearly converge to the dynamic convex hull spanned by multiple reference trajectories over a directed network. Prediction errors, rather than tracking errors, are used to update the neural adaptation laws, which are independent of the tracking error dynamics, resulting in two time scales that govern the entire system. The stability property of the closed-loop network is established via Lyapunov analysis, and the transient property is quantified in terms of L2 norms of the derivatives of the neural weights, which are shown to be smaller than in the classical neural dynamic surface control approach. Comparative studies are given to show the substantial improvements of the proposed method.
Spatiotemporal properties of microsaccades: Model predictions and experimental tests
NASA Astrophysics Data System (ADS)
Zhou, Jian-Fang; Yuan, Wu-Jie; Zhou, Zhao
2016-10-01
Microsaccades are involuntary, very small eye movements during fixation. Recently, microsaccade-related neural dynamics have been extensively investigated, both experimentally and by constructing neural network models. Experimentally, microsaccades also exhibit many behavioral properties. It is well known that behavioral properties reflect the underlying neural mechanisms and so are determined by neural dynamics. The behavioral properties resulting from neural responses to microsaccades, however, are not yet understood and are rarely studied theoretically. Linking neural dynamics to behavior is one of the central goals of neuroscience. In this paper, we provide behavioral predictions of the spatiotemporal properties of microsaccades based on microsaccade-induced neural dynamics in a cascading network model, which includes both retinal adaptation and short-term depression (STD) at thalamocortical synapses. We also provide successful experimental tests in the statistical sense. Our results give the first behavioral description of microsaccades based on the neural dynamics they induce, and are thus the first to link neural dynamics to microsaccadic behavior. These results strongly indicate that the cascading adaptations play an important role in the study of microsaccades. Our work may be useful for further investigations of microsaccadic behavioral properties and of the underlying neural mechanisms responsible for them.
Chang, Fi-John; Chen, Pin-An; Chang, Li-Chiu; Tsai, Yu-Hsuan
2016-08-15
This study attempts to model the spatio-temporal dynamics of total phosphate (TP) concentrations along a river for effective hydro-environmental management. We propose a systematical modeling scheme (SMS), an ingenious modeling process equipped with a dynamic neural network and three refined statistical methods, for reliably predicting TP concentrations along a river simultaneously. Two different types of artificial neural network (BPNN, a static neural network, and the NARX network, a dynamic neural network) are constructed in modeling the dynamic system. The Dahan River in Taiwan is used as a study case, where ten years of seasonal water quality data collected at seven monitoring stations along the river are used for model training and validation. Results demonstrate that the NARX network can suitably capture the important dynamic features and remarkably outperforms the BPNN model, and that the SMS can effectively identify key input factors, suitably overcome data scarcity, significantly increase model reliability, satisfactorily estimate site-specific TP concentrations at seven monitoring stations simultaneously, and adequately reconstruct seasonal TP data into a monthly scale. The proposed SMS can reliably model the dynamic spatio-temporal water pollution variation in a river system where data of interest are missing, hazardous, or costly to collect.
Siri, Benoît; Berry, Hugues; Cessac, Bruno; Delord, Bruno; Quoy, Mathias
2008-12-01
We present a mathematical analysis of the effects of Hebbian learning in random recurrent neural networks, with a generic Hebbian learning rule that includes passive forgetting and different timescales for neuronal activity and learning dynamics. Previous numerical work has reported that Hebbian learning drives the system from chaos to a steady state through a sequence of bifurcations. Here, we interpret these results mathematically and show that these effects, involving a complex coupling between neuronal dynamics and synaptic graph structure, can be analyzed using Jacobian matrices, which introduce both a structural and a dynamical point of view on neural network evolution. Furthermore, we show that sensitivity to a learned pattern is maximal when the largest Lyapunov exponent is close to 0. We discuss how neural networks may take advantage of this regime of high functional interest.
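As an illustration of the Jacobian-based point of view described above, the following minimal Python sketch (not the authors' code; the discrete-time tanh rate model, network size, and gain are assumptions) estimates the largest Lyapunov exponent of a random recurrent network from the product of Jacobian matrices along a trajectory:

    import numpy as np

    rng = np.random.default_rng(0)
    N, g, T = 100, 1.5, 2000                      # network size, gain, iterations
    W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N)) # random synaptic matrix

    x = rng.normal(0.0, 1.0, N)                   # network state
    v = rng.normal(0.0, 1.0, N)                   # tangent (perturbation) vector
    v /= np.linalg.norm(v)

    log_growth = 0.0
    for t in range(T):
        x = np.tanh(g * W @ x)                    # map: x_{t+1} = tanh(g W x_t)
        J = (1.0 - x**2)[:, None] * (g * W)       # Jacobian of the map at this step
        v = J @ v                                 # evolve the perturbation
        norm = np.linalg.norm(v)
        log_growth += np.log(norm)
        v /= norm                                 # renormalize to avoid overflow

    print("largest Lyapunov exponent ~", log_growth / T)

A Hebbian update of W (e.g., a decaying outer-product rule) could be interleaved with the state update to watch the estimated exponent drift toward zero, the regime the paper identifies as functionally interesting.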
Neural network error correction for solving coupled ordinary differential equations
NASA Technical Reports Server (NTRS)
Shelton, R. O.; Darsey, J. A.; Sumpter, B. G.; Noid, D. W.
1992-01-01
A neural network is presented to learn errors generated by a numerical algorithm for solving coupled nonlinear differential equations. The method is based on using a neural network to learn the error generated by, for example, a Runge-Kutta integrator on a model molecular dynamics (MD) problem. The neural network programs used in this study were developed by NASA. Comparisons are made for training the neural network using backpropagation and a new method which was found to converge with fewer iterations. The neural net programs, the MD model, and the calculations are discussed.
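A minimal sketch of the idea, under stated assumptions (a harmonic oscillator stands in for the MD problem, scikit-learn's MLPRegressor stands in for the NASA-developed network code, and a finely substepped integration serves as the reference solution):

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def f(y):                           # toy stand-in for the MD force law
        return np.array([y[1], -y[0]])  # harmonic oscillator: x' = v, v' = -x

    def rk4_step(y, h):
        k1 = f(y); k2 = f(y + 0.5*h*k1); k3 = f(y + 0.5*h*k2); k4 = f(y + h*k3)
        return y + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)

    def reference_step(y, h, sub=100):  # finely substepped RK4 as "truth"
        for _ in range(sub):
            y = rk4_step(y, h/sub)
        return y

    h = 0.5                             # deliberately coarse step size
    rng = np.random.default_rng(1)
    states = rng.uniform(-1, 1, (2000, 2))
    errors = np.array([reference_step(s, h) - rk4_step(s, h) for s in states])

    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000)
    net.fit(states, errors)             # learn the local integration error

    y = np.array([1.0, 0.0])
    for _ in range(20):                 # integrate with the learned correction
        y = rk4_step(y, h) + net.predict(y.reshape(1, -1))[0]
    print("corrected state after 20 coarse steps:", y)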
Accelerating Learning By Neural Networks
NASA Technical Reports Server (NTRS)
Toomarian, Nikzad; Barhen, Jacob
1992-01-01
Electronic neural networks made to learn faster by use of terminal teacher forcing. Method of supervised learning involves addition of teacher forcing functions to excitations fed as inputs to output neurons. Initially, teacher forcing functions are strong enough to force outputs to desired values; subsequently, these functions decay with time. When learning is successfully completed, terminal teacher forcing vanishes, and dynamics of neural network become equivalent to those of conventional neural network. Simulated neural network with terminal teacher forcing learned to produce close approximation of circular trajectory in 400 iterations.
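A conceptual sketch of the decaying forcing term only (not the full learning scheme from the report; the network drive, decay schedule, and circular target are illustrative assumptions):

    import numpy as np

    T, dt = 400, 0.05
    t = np.arange(T) * dt
    desired = np.column_stack([np.cos(t), np.sin(t)])  # circular target trajectory

    rng = np.random.default_rng(2)
    W = rng.normal(0, 0.5, (2, 2))       # stand-in for the (untrained) network drive
    y = np.zeros(2)
    lam0, tau = 20.0, 5.0                # initial forcing strength and decay time

    for k in range(T):
        lam = lam0 * np.exp(-t[k] / tau)          # teacher forcing decays with time
        dy = -y + W @ y + lam * (desired[k] - y)  # forcing pulls output to target
        y = y + dt * dy
        # ... a weight update on W would be applied here during actual training ...

    print("forcing strength at end of run:", lam0 * np.exp(-t[-1] / tau))

Early in the run the forcing pins the outputs to the teacher signal; by the end it has nearly vanished and the network runs on its own dynamics, which is the "terminal" property the abstract describes.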
Reconfigurable Control with Neural Network Augmentation for a Modified F-15 Aircraft
NASA Technical Reports Server (NTRS)
Burken, John J.
2007-01-01
This paper describes the performance of a simplified dynamic inversion controller with neural network supplementation. This 6-DOF (degree-of-freedom) simulation study focuses on results with and without neural network adaptation, using a simulation of the NASA modified F-15, which has canards. One area of interest is performance under a simulated surface failure while attempting to minimize the inertial cross-coupling effect of a [B] matrix failure (a control derivative anomaly associated with a jammed or missing control surface). Also presented are simulated aerodynamic ([A] matrix) failures, such as a canard failure. The controller uses explicit models to produce desired angular rate commands. The dynamic inversion calculates the necessary surface commands to achieve the desired rates. The simplified dynamic inversion uses approximate short-period and roll-axis dynamics. Initial results indicated that the transient response for a [B] matrix failure using a Neural Network (NN) improved control behavior compared to not using a neural network for a given failure; however, after changes were made to the controller, further evaluation showed comparable performance, with objections to the cross-coupling effects. This paper describes the methods employed to reduce the cross-coupling effect and maintain adequate tracking errors. The [A] matrix failure results show that control of the aircraft without adaptation is more difficult (less damped) than with active neural networks. Simulation results show that neural network augmentation of the controller improves performance in terms of tracking error and cross-coupling reduction, and improves performance with aerodynamic-type failures.
Boreland, B; Clement, G; Kunze, H
2015-08-01
After reviewing set-selection and memory-model dynamical system neural networks, we introduce a neural network model that combines set selection with partial memories (stored memories on subsets of states in the network). We establish that feasible equilibria with all states equal to ±1 correspond to answers to a particular set-theoretic problem. We show that KenKen puzzles can be formulated as a particular case of this set-theoretic problem and use the neural network model to solve them; in addition, we use a similar approach to solve Sudoku. We illustrate the approach in examples. As a heuristic experiment, we use online or print resources to identify the difficulty of the puzzles and compare these difficulties to the number of iterations used by the appropriate neural network solver, finding a strong relationship.
Neural networks: Alternatives to conventional techniques for automatic docking
NASA Technical Reports Server (NTRS)
Vinz, Bradley L.
1994-01-01
Automatic docking of orbiting spacecraft is a crucial operation involving the identification of vehicle orientation as well as complex approach dynamics. The chaser spacecraft must be able to recognize the target spacecraft within a scene and achieve accurate closing maneuvers. In a video-based system, a target scene must be captured and transformed into a pattern of pixels. Successful recognition lies in the interpretation of this pattern. Due to their powerful pattern recognition capabilities, artificial neural networks offer a potential role in interpretation and automatic docking processes. Neural networks can reduce the computational time required by existing image processing and control software. In addition, neural networks are capable of recognizing and adapting to changes in their dynamic environment, enabling enhanced performance, redundancy, and fault tolerance. Most neural networks are robust to failure, capable of continued operation with a slight degradation in performance after minor failures. This paper discusses the particular automatic docking tasks neural networks can perform as viable alternatives to conventional techniques.
Region stability analysis and tracking control of memristive recurrent neural network.
Bao, Gang; Zeng, Zhigang; Shen, Yanjun
2018-02-01
The memristor was first postulated by Leon Chua and realized by the Hewlett-Packard (HP) laboratory. Research results show that memristors can be used to simulate the synapses of neurons. This paper presents a class of recurrent neural networks with HP memristors. First, simulations show that the memristive recurrent neural network has richer compound dynamics than the traditional recurrent neural network. It is then derived that an n-dimensional memristive recurrent neural network is composed of [Formula: see text] sub-neural networks which do not have a common equilibrium point. By designing a tracking controller, the memristive neural network can be made to converge to the desired sub-neural network. Finally, two numerical examples are given to verify the validity of our results.
Functional neural networks underlying response inhibition in adolescents and adults.
Stevens, Michael C; Kiehl, Kent A; Pearlson, Godfrey D; Calhoun, Vince D
2007-07-19
This study provides the first description of neural network dynamics associated with response inhibition in healthy adolescents and adults. Functional and effective connectivity analyses of whole brain hemodynamic activity elicited during performance of a Go/No-Go task were used to identify functionally integrated neural networks and characterize their causal interactions. Three response inhibition circuits formed a hierarchical, inter-dependent system wherein thalamic modulation of input to premotor cortex by fronto-striatal regions led to response suppression. Adolescents differed from adults in the degree of network engagement, regional fronto-striatal-thalamic connectivity, and network dynamics. We identify and characterize several age-related differences in the function of neural circuits that are associated with behavioral performance changes across adolescent development.
On neural networks in identification and control of dynamic systems
NASA Technical Reports Server (NTRS)
Phan, Minh; Juang, Jer-Nan; Hyland, David C.
1993-01-01
This paper presents a discussion of the applicability of neural networks in the identification and control of dynamic systems. Emphasis is placed on understanding how the neural networks handle linear systems and how the new approach is related to conventional system identification and control methods. Extensions of the approach to nonlinear systems are then made. The paper explains the fundamental concepts of neural networks in their simplest terms. Among the topics discussed are feedforward and recurrent networks in relation to the standard state-space and observer models, linear and nonlinear auto-regressive models, linear predictors, one-step-ahead control, and model reference adaptive control for linear and nonlinear systems. Numerical examples are presented to illustrate the application of these important concepts.
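To make the link with conventional identification concrete, here is a small sketch (illustrative, not from the paper) of a one-step-ahead neural predictor trained on data from a second-order linear plant, using the same regressor vector an ARX model would use:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Data from a linear plant: y[k] = a1*y[k-1] + a2*y[k-2] + b*u[k-1]
    rng = np.random.default_rng(3)
    a1, a2, b = 1.5, -0.7, 0.5
    u = rng.uniform(-1, 1, 1200)
    y = np.zeros(1200)
    for k in range(2, 1200):
        y[k] = a1*y[k-1] + a2*y[k-2] + b*u[k-1]

    # Regressor vector [y[k-1], y[k-2], u[k-1]] -- the ARX structure
    X = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
    target = y[2:]

    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000)
    net.fit(X[:800], target[:800])
    pred = net.predict(X[800:])
    print("one-step RMS error:", np.sqrt(np.mean((pred - target[800:])**2)))

For a linear plant the network essentially recovers the ARX map; replacing the plant with a nonlinear one turns the same construction into the nonlinear auto-regressive predictor the paper discusses.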
NASA Astrophysics Data System (ADS)
Kasatkina, T. I.; Dushkin, A. V.; Pavlov, V. A.; Shatovkin, R. R.
2018-03-01
In the development of information systems and software for predicting dynamics series, neural network methods have recently been applied. They are more flexible than existing analogues and are capable of taking into account the nonlinearities of the series. In this paper, we propose a modified algorithm for predicting dynamics series that includes a method for training neural networks and an approach to describing and presenting the input data, based on prediction with a multilayer perceptron. To construct the neural network, the values of the dynamics series at its extremum points, and the corresponding time values, formed using the sliding-window method, are used as input data. The proposed algorithm can act as an independent approach to predicting dynamics series, or as one part of a forecasting system. The efficiency of predicting the evolution of a dynamics series for short-term one-step and long-term multi-step forecasts is compared between the classical multilayer perceptron method and the modified algorithm, using synthetic and real data. The result of this modification is the minimization of the iterative error that arises when previously predicted values are fed back as inputs to the neural network, as well as an increase in the accuracy of the network's iterative prediction.
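A sketch of the input construction under the assumptions stated above (a synthetic series stands in for real data; the window length and network size are illustrative): extremum values and their relative time stamps form the sliding-window inputs to a multilayer perceptron.

    import numpy as np
    from scipy.signal import argrelextrema
    from sklearn.neural_network import MLPRegressor

    t = np.linspace(0, 60, 3000)
    series = np.sin(t) + 0.5*np.sin(2.2*t)            # synthetic dynamics series

    # Extremum points (local minima and maxima) and their time stamps
    idx = np.sort(np.concatenate([argrelextrema(series, np.greater)[0],
                                  argrelextrema(series, np.less)[0]]))
    vals, times = series[idx], t[idx]

    # Sliding window: w consecutive (time, value) pairs -> next extremum value
    w = 4
    X = np.array([np.concatenate([times[i:i+w] - times[i], vals[i:i+w]])
                  for i in range(len(idx) - w)])
    y = vals[w:]

    net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=4000)
    net.fit(X[:-10], y[:-10])
    print("predicted next extrema:", net.predict(X[-10:]))
    print("actual next extrema:  ", y[-10:])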
Dynamical synapses enhance neural information processing: gracefulness, accuracy, and mobility.
Fung, C C Alan; Wong, K Y Michael; Wang, He; Wu, Si
2012-05-01
Experimental data have revealed that neuronal connection efficacy exhibits two forms of short-term plasticity: short-term depression (STD) and short-term facilitation (STF). They have time constants residing between fast neural signaling and rapid learning and may serve as substrates for neural systems manipulating temporal information on relevant timescales. This study investigates the impact of STD and STF on the dynamics of continuous attractor neural networks and their potential roles in neural information processing. We find that STD endows the network with slow-decaying plateau behaviors: a network that is initially stimulated into an active state decays to a silent state very slowly, on the timescale of STD rather than on that of neural signaling. This provides a mechanism for neural systems to hold sensory memory easily and shut off persistent activities gracefully. With STF, we find that the network can hold a memory trace of external inputs in the facilitated neuronal interactions, which provides a way to stabilize the network response to noisy inputs, leading to improved accuracy in population decoding. Furthermore, we find that STD increases the mobility of the network states. The increased mobility enhances the tracking performance of the network in response to time-varying stimuli, leading to anticipative neural responses. In general, we find that STD and STF tend to have opposite effects on network dynamics and complementary computational advantages, suggesting that the brain may employ a strategy of weighting them differentially depending on the computational purpose.
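The single-synapse dynamics underlying STD and STF are commonly written in Tsodyks-Markram form; the following sketch (parameter values are illustrative, not from the paper) tracks the resource variable x (depression) and release probability u (facilitation) over a regular spike train, with per-spike efficacy proportional to u*x:

    import numpy as np

    dt, T = 1.0, 1000                  # time step and duration in ms
    tau_d, tau_f, U = 200.0, 600.0, 0.2
    spikes = np.zeros(T); spikes[100:600:50] = 1.0   # 20 Hz spike train

    x, u = 1.0, U                      # full resources, baseline release prob.
    efficacy = []
    for k in range(T):
        x += dt * (1.0 - x) / tau_d    # resources recover between spikes
        u += dt * (U - u) / tau_f      # facilitation decays back to baseline
        if spikes[k]:
            u += U * (1.0 - u)         # facilitation jump at a spike
            efficacy.append(u * x)     # transmitted strength of this spike
            x -= u * x                 # depletion of resources

    print("per-spike efficacy:", np.round(efficacy, 3))

With these constants the early spikes facilitate (u grows) while sustained firing depletes x, reproducing the competition between STF and STD that the abstract analyzes at the network level.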
NASA Astrophysics Data System (ADS)
Virkar, Yogesh S.; Shew, Woodrow L.; Restrepo, Juan G.; Ott, Edward
2016-10-01
Learning and memory are acquired through long-lasting changes in synapses. In the simplest models, such synaptic potentiation typically leads to runaway excitation, but in reality there must exist processes that robustly preserve overall stability of the neural system dynamics. How is this accomplished? Various approaches to this basic question have been considered. Here we propose a particularly compelling and natural mechanism for preserving stability of learning neural systems. This mechanism is based on the global processes by which metabolic resources are distributed to the neurons by glial cells. Specifically, we introduce and study a model composed of two interacting networks: a model neural network interconnected by synapses that undergo spike-timing-dependent plasticity; and a model glial network interconnected by gap junctions that diffusively transport metabolic resources among the glia and, ultimately, to neural synapses where they are consumed. Our main result is that the biophysical constraints imposed by diffusive transport of metabolic resources through the glial network can prevent runaway growth of synaptic strength, both during ongoing activity and during learning. Our findings suggest a previously unappreciated role for glial transport of metabolites in the feedback control stabilization of neural network dynamics during learning.
Dynamic Neural Networks Supporting Memory Retrieval
St. Jacques, Peggy L.; Kragel, Philip A.; Rubin, David C.
2011-01-01
How do separate neural networks interact to support complex cognitive processes such as remembrance of the personal past? Autobiographical memory (AM) retrieval recruits a consistent pattern of activation that potentially comprises multiple neural networks. However, it is unclear how such large-scale neural networks interact and are modulated by properties of the memory retrieval process. In the present functional MRI (fMRI) study, we combined independent component analysis (ICA) and dynamic causal modeling (DCM) to understand the neural networks supporting AM retrieval. ICA revealed four task-related components consistent with the previous literature: 1) Medial Prefrontal Cortex (PFC) Network, associated with self-referential processes, 2) Medial Temporal Lobe (MTL) Network, associated with memory, 3) Frontoparietal Network, associated with strategic search, and 4) Cingulooperculum Network, associated with goal maintenance. DCM analysis revealed that the medial PFC network drove activation within the system, consistent with the importance of this network to AM retrieval. Additionally, memory accessibility and recollection uniquely altered connectivity between these neural networks. Recollection modulated the influence of the medial PFC on the MTL network during elaboration, suggesting that greater connectivity among subsystems of the default network supports greater re-experience. In contrast, memory accessibility modulated the influence of frontoparietal and MTL networks on the medial PFC network, suggesting that ease of retrieval involves greater fluency among the multiple networks contributing to AM. These results show the integration between neural networks supporting AM retrieval and the modulation of network connectivity by behavior. PMID:21550407
Hidden Markov Models and Neural Networks for Fault Detection in Dynamic Systems
NASA Technical Reports Server (NTRS)
Smyth, Padhraic
1994-01-01
None given. (From conclusion): Neural networks plus Hidden Markov Models (HMM) can provide excellent detection and false-alarm-rate performance in fault detection applications. Modified models allow for novelty detection. Also covers some key contributions of the neural network model and application status.
Xiao, Min; Zheng, Wei Xing; Cao, Jinde
2013-01-01
Recent studies on Hopf bifurcations of neural networks with delays are confined to simplified neural network models consisting of only two, three, four, five, or six neurons. It is well known that neural networks are complex and large-scale nonlinear dynamical systems, so the dynamics of delayed neural networks are very rich and complicated. Although discussing the dynamics of networks with a few neurons may help us to understand large-scale networks, there are inevitably some complicated problems that may be overlooked if simplified networks are carried over to large-scale networks. In this paper, a general delayed bidirectional associative memory neural network model with n + 1 neurons is considered. By analyzing the associated characteristic equation, the local stability of the trivial steady state is examined, and then the existence of the Hopf bifurcation at the trivial steady state is established. By applying the normal form theory and the center manifold reduction, explicit formulae are derived to determine the direction and stability of the bifurcating periodic solution. Furthermore, the paper highlights situations where the Hopf bifurcations are particularly critical, in the sense that the amplitude and the period of oscillations are very sensitive to errors due to tolerances in the implementation of neuron interconnections. It is shown that the sensitivity is crucially dependent on the delay and also significantly influenced by the number of neurons. Numerical simulations are carried out to illustrate the main results.
Modeling fluctuations in default-mode brain network using a spiking neural network.
Yamanishi, Teruya; Liu, Jian-Qin; Nishimura, Haruhiko
2012-08-01
Recently, numerous attempts have been made to understand the dynamic behavior of complex brain systems using neural network models. The fluctuations in blood-oxygen-level-dependent (BOLD) brain signals at less than 0.1 Hz have been observed by functional magnetic resonance imaging (fMRI) for subjects in a resting state. This phenomenon is referred to as a "default-mode brain network." In this study, we model the default-mode brain network by functionally connecting neural communities composed of spiking neurons in a complex network. Through computational simulations of the model, including transmission delays and complex connectivity, the network dynamics of the neural system and its behavior are discussed. The results show that the power spectrum of the modeled fluctuations in the neuron firing patterns is consistent with the default-mode brain network's BOLD signals when transmission delays, a characteristic property of the brain, have finite values in a given range.
Open quantum generalisation of Hopfield neural networks
NASA Astrophysics Data System (ADS)
Rotondo, P.; Marcuzzi, M.; Garrahan, J. P.; Lesanovsky, I.; Müller, M.
2018-03-01
We propose a new framework to understand how quantum effects may impact on the dynamics of neural networks. We implement the dynamics of neural networks in terms of Markovian open quantum systems, which allows us to treat thermal and quantum coherent effects on the same footing. In particular, we propose an open quantum generalisation of the Hopfield neural network, the simplest toy model of associative memory. We determine its phase diagram and show that quantum fluctuations give rise to a qualitatively new non-equilibrium phase. This novel phase is characterised by limit cycles corresponding to high-dimensional stationary manifolds that may be regarded as a generalisation of storage patterns to the quantum domain.
Baglietto, Gabriel; Gigante, Guido; Del Giudice, Paolo
2017-01-01
Two partially interwoven hot topics in the analysis and statistical modeling of neural data are the development of efficient and informative representations of the time series derived from multiple neural recordings, and the extraction of information about the connectivity structure of the underlying neural network from the recorded neural activities. In the present paper we show that state-space clustering can provide an easy and effective option for reducing the dimensionality of multiple neural time series, that it can improve inference of synaptic couplings from neural activities, and that it can also allow the construction of a compact representation of the multi-dimensional dynamics that easily lends itself to complexity measures. We apply a variant of the 'mean-shift' algorithm to perform state-space clustering, and validate it on a Hopfield network in the glassy phase, in which metastable states are largely uncorrelated from memories embedded in the synaptic matrix. In this context, we show that the neural states identified as the clusters' centroids offer a parsimonious parametrization of the synaptic matrix, which allows a significant improvement in inferring the synaptic couplings from the neural activities. Moving to the more realistic case of a multi-modular spiking network, with spike-frequency adaptation inducing history-dependent effects, we propose a procedure inspired by Boltzmann learning, but extending its domain of application, to learn inter-module synaptic couplings so that the spiking network reproduces a prescribed pattern of spatial correlations; we then illustrate, in the spiking network, how clustering is effective in extracting relevant features of the network's state-space landscape. Finally, we show that knowledge of the cluster structure allows casting the multi-dimensional neural dynamics in the form of a symbolic dynamics of transitions between clusters; as an illustration of the potential of such a reduction, we define and analyze a measure of complexity of the neural time series.
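A toy sketch of the state-space clustering step and the resulting symbolic dynamics (synthetic data stand in for recordings; scikit-learn's MeanShift and the bandwidth value are assumptions, not the authors' exact variant of the algorithm):

    import numpy as np
    from sklearn.cluster import MeanShift

    rng = np.random.default_rng(4)
    # Toy surrogate for recorded activity: noisy visits to 3 metastable states
    centroids = rng.choice([-1.0, 1.0], size=(3, 20))       # 3 states, 20 units
    labels = rng.integers(0, 3, 500)
    activity = centroids[labels] + 0.3 * rng.normal(size=(500, 20))

    ms = MeanShift(bandwidth=2.0)      # bandwidth sets the cluster scale
    ms.fit(activity)
    print("clusters found:", len(ms.cluster_centers_))

    # Symbolic dynamics: the time series of visited clusters
    symbols = ms.predict(activity)
    print("first 20 symbols:", symbols[:20])

The recovered cluster centers play the role of the centroids used above to parametrize the synaptic matrix, and the symbol sequence is the reduced dynamics on which complexity measures can be defined.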
Dynamic decomposition of spatiotemporal neural signals
2017-01-01
Neural signals are characterized by rich temporal and spatiotemporal dynamics that reflect the organization of cortical networks. Theoretical research has shown how neural networks can operate at different dynamic ranges that correspond to specific types of information processing. Here we present a data analysis framework that uses a linearized model of these dynamic states in order to decompose the measured neural signal into a series of components that capture both rhythmic and non-rhythmic neural activity. The method is based on stochastic differential equations and Gaussian process regression. Through computer simulations and analysis of magnetoencephalographic data, we demonstrate the efficacy of the method in identifying meaningful modulations of oscillatory signals corrupted by structured temporal and spatiotemporal noise. These results suggest that the method is particularly suitable for the analysis and interpretation of complex temporal and spatiotemporal neural signals. PMID:28558039
Dual adaptive dynamic control of mobile robots using neural networks.
Bugeja, Marvin K; Fabri, Simon G; Camilleri, Liberato
2009-02-01
This paper proposes two novel dual adaptive neural control schemes for the dynamic control of nonholonomic mobile robots. The two schemes are developed in discrete time, and the robot's nonlinear dynamic functions are assumed to be unknown. Gaussian radial basis function and sigmoidal multilayer perceptron neural networks are used for function approximation. In each scheme, the unknown network parameters are estimated stochastically in real time, and no preliminary offline neural network training is used. In contrast to other adaptive techniques hitherto proposed in the literature on mobile robots, the dual control laws presented in this paper do not rely on the heuristic certainty equivalence property but account for the uncertainty in the estimates. This results in a major improvement in tracking performance, despite the plant uncertainty and unmodeled dynamics. Monte Carlo simulation and statistical hypothesis testing are used to illustrate the effectiveness of the two proposed stochastic controllers as applied to the trajectory-tracking problem of a differentially driven wheeled mobile robot.
Implementing a Bayes Filter in a Neural Circuit: The Case of Unknown Stimulus Dynamics.
Sokoloski, Sacha
2017-09-01
In order to interact intelligently with objects in the world, animals must first transform neural population responses into estimates of the dynamic, unknown stimuli that caused them. The Bayesian solution to this problem is known as a Bayes filter, which applies Bayes' rule to combine population responses with the predictions of an internal model. The internal model of the Bayes filter is based on the true stimulus dynamics, and in this note, we present a method for training a theoretical neural circuit to approximately implement a Bayes filter when the stimulus dynamics are unknown. To do this we use the inferential properties of linear probabilistic population codes to compute Bayes' rule and train a neural network to compute approximate predictions by the method of maximum likelihood. In particular, we perform stochastic gradient descent on the negative log-likelihood of the neural network parameters with a novel approximation of the gradient. We demonstrate our methods on a finite-state, a linear, and a nonlinear filtering problem and show how the hidden layer of the neural network develops tuning curves consistent with findings in experimental neuroscience.
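For the finite-state case, the filter itself is compact; the sketch below (illustrative matrices; in the paper the prediction step is computed by a trained neural network rather than a known transition model) shows the prediction/correction cycle:

    import numpy as np

    # Finite-state Bayes filter: belief update with a transition model
    # (prediction) followed by Bayes' rule on the new observation (correction).
    A = np.array([[0.9, 0.1],          # A[s, s'] = P(s' | s): stimulus dynamics
                  [0.2, 0.8]])
    L = np.array([[0.7, 0.3],          # L[s, r] = P(r | s): response likelihood
                  [0.4, 0.6]])

    def bayes_filter_step(belief, response):
        predicted = A.T @ belief                   # prediction via internal model
        posterior = L[:, response] * predicted     # Bayes' rule with the response
        return posterior / posterior.sum()

    belief = np.array([0.5, 0.5])
    for r in [0, 0, 1, 1, 0]:                      # a short response sequence
        belief = bayes_filter_step(belief, r)
        print(np.round(belief, 3))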
Path optimisation of a mobile robot using an artificial neural network controller
NASA Astrophysics Data System (ADS)
Singh, M. K.; Parhi, D. R.
2011-01-01
This article proposes a novel approach to the design of an intelligent controller for an autonomous mobile robot using a multilayer feedforward neural network, which enables the robot to navigate in a real-world dynamic environment. The inputs to the proposed neural controller consist of the left, right, and front obstacle distances with respect to the robot's position, and the target angle. The output of the neural network is the steering angle. A four-layer neural network has been designed to solve the path and time optimisation problem of mobile robots, which deals with cognitive tasks such as learning, adaptation, generalisation and optimisation. A back-propagation algorithm is used to train the network. This article also analyses the kinematic design of mobile robots for dynamic movements. The simulation results are compared with experimental results, which are satisfactory and show very good agreement. The training of the neural nets and the control performance analysis have been done in a real experimental setup.
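A minimal sketch of the input/output mapping described above (the hand-written avoidance heuristic used to generate training targets, the network sizes, and all parameter values are assumptions for illustration only):

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Inputs: [left, right, front obstacle distance, target angle] -> steering.
    rng = np.random.default_rng(5)
    X = np.column_stack([rng.uniform(0.1, 5.0, 5000),       # left distance
                         rng.uniform(0.1, 5.0, 5000),       # right distance
                         rng.uniform(0.1, 5.0, 5000),       # front distance
                         rng.uniform(-np.pi, np.pi, 5000)]) # target angle

    def heuristic(left, right, front, target):
        if front < 1.0:                  # obstacle ahead: turn to the open side
            return np.pi/3 if left > right else -np.pi/3
        return 0.5 * target              # otherwise steer toward the target

    y = np.array([heuristic(*row) for row in X])

    # Two hidden layers -> four layers in total, as in the article
    net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000)
    net.fit(X, y)
    print("steering for clear path, target 45 deg:",
          net.predict([[3.0, 3.0, 3.0, np.pi/4]])[0])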
Luo, Shaohua; Wu, Songli; Gao, Ruizhen
2015-07-01
This paper investigates chaos control for the brushless DC motor (BLDCM) system by an adaptive dynamic surface approach based on a neural network with minimum weights. The BLDCM system contains parameter perturbation, chaotic behavior, and uncertainty. With the help of a radial basis function (RBF) neural network to approximate the unknown nonlinear functions, the adaptive law is established to overcome uncertainty of the control gain. By introducing the RBF neural network and adaptive technology into the dynamic surface control design, a robust chaos control scheme is developed. It is proved that the proposed control approach can guarantee that all signals in the closed-loop system are globally uniformly bounded, and the tracking error converges to a small neighborhood of the origin. Simulation results are provided to show that the proposed approach works well in suppressing chaos and parameter perturbation.
Back-propagation learning of infinite-dimensional dynamical systems.
Tokuda, Isao; Tokunaga, Ryuji; Aihara, Kazuyuki
2003-10-01
This paper presents numerical studies of applying back-propagation learning to a delayed recurrent neural network (DRNN). The DRNN is a continuous-time recurrent neural network with time-delayed feedback, and the back-propagation learning teaches spatio-temporal dynamics to the DRNN. Since the time delays make the dynamics of the DRNN infinite-dimensional, the learning algorithm and the learning capability of the DRNN are different from those of the ordinary recurrent neural network (ORNN) having no time delays. First, two types of learning algorithms are developed for a class of DRNNs. Then, using chaotic signals generated from the Mackey-Glass equation and the Rössler equations, the learning capability of the DRNN is examined. Comparing the learning algorithms, learning capability, and robustness against noise of the DRNN with those of the ORNN and the time delay neural network, advantages as well as disadvantages of the DRNN are investigated.
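The Mackey-Glass benchmark used above is easy to reproduce; here is a simple Euler-integration sketch (step size and parameter values are the common choices, e.g., tau = 17 for chaotic behavior, but are assumptions here, not taken from the paper):

    import numpy as np

    # Mackey-Glass delay differential equation, integrated with Euler steps:
    # dx/dt = a*x(t - tau) / (1 + x(t - tau)**10) - b*x(t)
    a, b, tau, dt = 0.2, 0.1, 17.0, 0.1
    steps, delay = 50000, int(tau / dt)

    x = np.zeros(steps)
    x[:delay + 1] = 1.2                      # constant history as initial condition
    for k in range(delay, steps - 1):
        x_tau = x[k - delay]
        x[k + 1] = x[k] + dt * (a * x_tau / (1.0 + x_tau**10) - b * x[k])

    series = x[::10][-2000:]                 # subsample a chaotic training signal
    print(series[:5])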
Design of Neural Networks for Fast Convergence and Accuracy: Dynamics and Control
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Sparks, Dean W., Jr.
1997-01-01
A procedure for the design and training of artificial neural networks, used for rapid and efficient controls and dynamics design and analysis for flexible space systems, has been developed. Artificial neural networks are employed such that, once properly trained, they provide a means of evaluating the impact of design changes rapidly. Specifically, two-layer feedforward neural networks are designed to approximate the functional relationship between the component/spacecraft design changes and measures of its performance or nonlinear dynamics of the system/components. A training algorithm, based on statistical sampling theory, is presented, which guarantees that the trained networks provide a designer-specified degree of accuracy in mapping the functional relationship. Within each iteration of this statistical-based algorithm, a sequential design algorithm is used for the design and training of the feedforward network to provide rapid convergence to the network goals. Here, at each sequence a new network is trained to minimize the error of the previous network. The proposed method should work for applications wherein an arbitrarily large source of training data can be generated. Two numerical examples are performed on a spacecraft application in order to demonstrate the feasibility of the proposed approach.
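In the spirit of the sequential step described above, the following sketch (the synthetic design-to-performance mapping and network sizes are illustrative assumptions) trains each new network on the residual error left by the previous ones and sums their outputs:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(6)
    X = rng.uniform(-1, 1, (1500, 3))           # design-change parameters
    y = np.sin(3*X[:, 0]) + X[:, 1]*X[:, 2]     # stand-in performance measure

    nets, residual = [], y.copy()
    for stage in range(3):                      # each net fits the previous error
        net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000)
        net.fit(X, residual)
        residual = residual - net.predict(X)
        nets.append(net)
        print(f"stage {stage}: residual RMS = {np.sqrt(np.mean(residual**2)):.4f}")

    approx = sum(net.predict(X) for net in nets)   # combined mapping

Each stage drives the residual RMS down, giving the rapid convergence to a specified accuracy target that the abstract describes.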
Ridge Polynomial Neural Network with Error Feedback for Time Series Forecasting.
Waheeb, Waddah; Ghazali, Rozaida; Herawan, Tutut
2016-01-01
Time series forecasting has gained much attention due to its many practical applications. Higher-order neural network with recurrent feedback is a powerful technique that has been used successfully for time series forecasting. It maintains fast learning and the ability to learn the dynamics of the time series over time. Network output feedback is the most common recurrent feedback for many recurrent neural network models. However, not much attention has been paid to the use of network error feedback instead of network output feedback. In this study, we propose a novel model, called Ridge Polynomial Neural Network with Error Feedback (RPNN-EF) that incorporates higher order terms, recurrence and error feedback. To evaluate the performance of RPNN-EF, we used four univariate time series with different forecasting horizons, namely star brightness, monthly smoothed sunspot numbers, daily Euro/Dollar exchange rate, and Mackey-Glass time-delay differential equation. We compared the forecasting performance of RPNN-EF with the ordinary Ridge Polynomial Neural Network (RPNN) and the Dynamic Ridge Polynomial Neural Network (DRPNN). Simulation results showed an average 23.34% improvement in Root Mean Square Error (RMSE) with respect to RPNN and an average 10.74% improvement with respect to DRPNN. That means that using network errors during training helps enhance the overall forecasting performance for the network.
The use of artificial neural networks in experimental data acquisition and aerodynamic design
NASA Technical Reports Server (NTRS)
Meade, Andrew J., Jr.
1991-01-01
It is proposed that an artificial neural network be used to construct an intelligent data acquisition system. The artificial neural network (ANN) model has potential for replacing traditional procedures as well as for use in computational fluid dynamics validation. Potential advantages of the ANN model are listed. As a proof of concept, the author modeled a NACA 0012 airfoil at specific conditions, using the neural network simulator NETS, developed by James Baffes of the NASA Johnson Space Center. The neural network predictions were compared to the actual data. It is concluded that artificial neural networks can provide an elegant and valuable class of mathematical tools for data analysis.
NASA Astrophysics Data System (ADS)
Sun, Ran; Wang, Jihe; Zhang, Dexin; Shao, Xiaowei
2018-02-01
This paper presents an adaptive neural networks-based control method for spacecraft formation with coupled translational and rotational dynamics using only aerodynamic forces. It is assumed that each spacecraft is equipped with several large flat plates. A coupled orbit-attitude dynamic model is considered based on the specific configuration of atmospheric-based actuators. For this model, a neural network-based adaptive sliding mode controller is implemented, accounting for system uncertainties and external perturbations. To prevent invalidation of the neural networks from destroying the stability of the system, a switching control strategy is proposed which combines an adaptive neural network controller, dominant in its active region, with an adaptive sliding mode controller outside the neural active region. An optimal process is developed to determine the control commands for the plate system. The stability of the closed-loop system is proved by a Lyapunov-based method. Comparative results from numerical simulations illustrate the effectiveness of executing attitude control while maintaining the relative motion, and higher control accuracy can be achieved by using the proposed neural-based switching control scheme than by using only the adaptive sliding mode controller.
NASA Technical Reports Server (NTRS)
Burken, John J.
2005-01-01
This viewgraph presentation covers the following topics: 1) Brief explanation of Generation II Flight Program; 2) Motivation for Neural Network Adaptive Systems; 3) Past/ Current/ Future IFCS programs; 4) Dynamic Inverse Controller with Explicit Model; 5) Types of Neural Networks Investigated; and 6) Brief example
ERIC Educational Resources Information Center
Peng, Yefei
2010-01-01
An ontology mapping neural network (OMNN) is proposed in order to learn and infer correspondences among ontologies. It extends the Identical Elements Neural Network (IENN)'s ability to represent and map complex relationships. The learning dynamics of simultaneous (interlaced) training of similar tasks interact at the shared connections of the…
Analysis of structural patterns in the brain with the complex network approach
NASA Astrophysics Data System (ADS)
Maksimenko, Vladimir A.; Makarov, Vladimir V.; Kharchenko, Alexander A.; Pavlov, Alexey N.; Khramova, Marina V.; Koronovskii, Alexey A.; Hramov, Alexander E.
2015-03-01
In this paper we study mechanisms of phase synchronization in a model network of Van der Pol oscillators and in the neural network of the brain by considering macroscopic parameters of these networks. As the macroscopic characteristic of the model network we consider the summary signal produced by the oscillators. Similarly to the model simulations, we study EEG signals reflecting the macroscopic dynamics of the neural network. We show that the appearance of phase synchronization leads to an increased peak in the wavelet spectrum related to the dynamics of the synchronized oscillators. The observed correlation between the phase relations of individual elements and the macroscopic characteristics of the whole network provides a way to detect phase synchronization in neural networks in cases of normal and pathological activity.
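A sketch of the model-network side of this analysis (oscillator count, detuning, and coupling strength are illustrative assumptions): globally coupled Van der Pol oscillators are simulated, and the spectrum of their summary signal is inspected for the synchronization peak.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Globally coupled Van der Pol oscillators; the "macroscopic" observable is
    # the summary signal X(t) = mean_i x_i(t), whose spectral peak sharpens
    # when the ensemble phase-synchronizes.
    N, mu, eps = 20, 1.0, 0.5                 # eps: coupling strength
    rng = np.random.default_rng(7)
    omega = rng.normal(1.0, 0.05, N)          # slightly detuned frequencies

    def rhs(t, z):
        x, v = z[:N], z[N:]
        X = x.mean()                          # mean-field coupling
        dv = mu*(1 - x**2)*v - omega**2*x + eps*(X - x)
        return np.concatenate([v, dv])

    z0 = rng.normal(0, 1, 2*N)
    sol = solve_ivp(rhs, (0, 200), z0, t_eval=np.linspace(100, 200, 4000))
    summary = sol.y[:N].mean(axis=0)
    summary = summary - summary.mean()        # remove DC before the spectrum

    spectrum = np.abs(np.fft.rfft(summary))**2
    freqs = np.fft.rfftfreq(summary.size, d=sol.t[1] - sol.t[0])
    print("dominant frequency:", freqs[spectrum.argmax()])

Rerunning with eps = 0 (uncoupled) flattens and broadens the peak, which is the spectral signature of lost phase synchronization discussed above; in the paper a wavelet spectrum plays this role.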
Milde, Moritz B.; Blum, Hermann; Dietmüller, Alexander; Sumislawska, Dora; Conradt, Jörg; Indiveri, Giacomo; Sandamirskaya, Yulia
2017-01-01
Neuromorphic hardware emulates dynamics of biological neural networks in electronic circuits, offering an alternative to the von Neumann computing architecture that is low-power, inherently parallel, and event-driven. This hardware makes it possible to implement neural-network-based robotic controllers in an energy-efficient way with low latency, but requires solving the problem of device variability, characteristic of analog electronic circuits. In this work, we interfaced a mixed-signal analog-digital neuromorphic processor ROLLS to a neuromorphic dynamic vision sensor (DVS) mounted on a robotic vehicle and developed an autonomous neuromorphic agent that is able to perform neurally inspired obstacle avoidance and target acquisition. We developed a neural network architecture that can cope with device variability and verified its robustness in different environmental situations, e.g., moving obstacles, a moving target, clutter, and poor light conditions. We demonstrate how this network, combined with the properties of the DVS, allows the robot to avoid obstacles using a simple biologically inspired dynamics. We also show how a Dynamic Neural Field for target acquisition can be implemented in spiking neuromorphic hardware. This work demonstrates an implementation of working obstacle avoidance and target acquisition using mixed-signal analog/digital neuromorphic hardware. PMID:28747883
Developing neuronal networks: Self-organized criticality predicts the future
NASA Astrophysics Data System (ADS)
Pu, Jiangbo; Gong, Hui; Li, Xiangning; Luo, Qingming
2013-01-01
Self-organized criticality emerging in neural activity is one of the key concepts used to describe the formation and function of developing neuronal networks. The relationship between critical dynamics and neural development is both theoretically and experimentally appealing. However, whereas it is well known that cortical networks exhibit a rich repertoire of activity patterns at different stages during in vitro maturation, the dynamical activity patterns through the entire course of neural development still remain unclear. Here we show that a series of metastable network states emerged in the developing and "aging" process of hippocampal networks cultured from dissociated rat neurons. The unidirectional sequence of state transitions could be observed only in networks showing power-law scaling of distributed neuronal avalanches. Our data suggest that self-organized criticality may guide spontaneous activity into a sequential succession of homeostatically-regulated transient patterns during development, which may help to predict the tendency of neural development at early ages in the future.
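The power-law avalanche statistics referred to above can be illustrated with a critical branching process, a standard toy model for neuronal avalanches (the branching implementation and exponent check below are illustrative, not the authors' analysis):

    import numpy as np

    rng = np.random.default_rng(8)

    # Critical branching process (branching ratio sigma = 1.0): each active
    # unit activates Poisson(sigma) units at the next time step.
    def avalanche_size(sigma=1.0, cap=10**6):
        active, size = 1, 1
        while active and size < cap:
            active = rng.poisson(sigma * active)
            size += active
        return size

    sizes = np.array([avalanche_size() for _ in range(20000)])

    # Crude power-law check: the log-binned histogram slope should be near -1.5
    bins = np.logspace(0, 4, 20)
    hist, edges = np.histogram(sizes, bins=bins, density=True)
    centers = np.sqrt(edges[:-1] * edges[1:])
    mask = hist > 0
    slope = np.polyfit(np.log(centers[mask]), np.log(hist[mask]), 1)[0]
    print("estimated exponent:", slope)     # ~ -1.5 at criticality

Setting sigma below 1 (subcritical) collapses the distribution toward small sizes, which is the kind of deviation from power-law scaling that distinguishes the non-critical cultures in the study.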
Amozegar, M; Khorasani, K
2016-04-01
In this paper, a new approach for Fault Detection and Isolation (FDI) of gas turbine engines is proposed by developing an ensemble of dynamic neural network identifiers. For health monitoring of the gas turbine engine, its dynamics are first identified by constructing three separate dynamic neural network architectures. Specifically, a dynamic multi-layer perceptron (MLP), a dynamic radial-basis function (RBF) neural network, and a dynamic support vector machine (SVM) are trained to individually identify and represent the gas turbine engine dynamics. Next, three ensemble-based techniques are developed to represent the gas turbine engine dynamics, namely, two heterogeneous ensemble models and one homogeneous ensemble model. It is first shown that all ensemble approaches significantly improve the overall performance and accuracy of the developed system identification scheme when compared to each of the stand-alone solutions. The best stand-alone model (the dynamic RBF network) and the best ensemble architecture (the heterogeneous ensemble), in terms of their performance in achieving accurate system identification, are then selected for solving the FDI task. The required residual signals are generated by using both a single model-based solution and an ensemble-based solution under various gas turbine engine health conditions. Our extensive simulation studies demonstrate that the fault detection and isolation task achieved by using the residuals obtained from the dynamic ensemble scheme results in a significantly more accurate and reliable performance, as illustrated through detailed quantitative confusion matrix analysis and comparative studies.
Interactions between neural networks: a mechanism for tuning chaos and oscillations.
Wang, Lipo
2007-06-01
We show that chaos and oscillations in a higher-order binary neural network can be tuned effectively using interactions between neural networks. Our results suggest that network interactions may be useful as a means of adjusting the level of dynamic activities in systems that employ chaos and oscillations for information processing, or as a means of suppressing oscillatory behaviors in systems that require stability.
An annealed chaotic maximum neural network for bipartite subgraph problem.
Wang, Jiahai; Tang, Zheng; Wang, Ronglong
2004-04-01
In this paper, based on the maximum neural network, we propose a new parallel algorithm that can help the maximum neural network escape from local minima by including a transient chaotic neurodynamics for the bipartite subgraph problem. The goal of the bipartite subgraph problem, which is an NP-complete problem, is to remove the minimum number of edges in a given graph such that the remaining graph is a bipartite graph. Lee et al. presented a parallel algorithm using the maximum neural model (winner-take-all neuron model) for this NP-complete problem. The maximum neural model always guarantees a valid solution and greatly reduces the search space without a burden on parameter tuning. However, the model has a tendency to converge to a local minimum easily because it is based on the steepest descent method. By adding a negative self-feedback to the maximum neural network, we propose a new parallel algorithm that introduces richer and more flexible chaotic dynamics and can prevent the network from getting stuck at local minima. After the chaotic dynamics vanishes, the proposed algorithm is then fundamentally governed by the gradient descent dynamics and usually converges to a stable equilibrium point. The proposed algorithm has the advantages of both the maximum neural network and the chaotic neurodynamics. A large number of instances have been simulated to verify the proposed algorithm. The simulation results show that our algorithm finds the optimum or a near-optimum solution for the bipartite subgraph problem, superior to the best existing parallel algorithms.
2012-01-01
Background: Synchronized bursting activity (SBA) is a remarkable dynamical behavior in both ex vivo and in vivo neural networks. Investigations of the underlying structural characteristics associated with SBA are crucial to understanding the system-level regulatory mechanism of neural network behaviors. Results: In this study, artificial pulsed neural networks were established using spike response models to capture fundamental dynamics of large-scale ex vivo cortical networks. Network simulations with synaptic parameter perturbations showed the following two findings. (i) In a network with an excitatory ratio (ER) of 80-90%, its connective ratio (CR) was within a range of 10-30% when the occurrence of SBA reached the highest expectation. This result was consistent with the experimental observation in ex vivo neuronal networks, which were reported to possess a matured inhibitory synaptic ratio of 10-20% and a CR of 10-30%. (ii) No SBA occurred when a network did not contain any all-positive-interaction feedback loop (APFL) motif. In a neural network containing APFLs, the number of APFLs presented an optimal range corresponding to the maximal occurrence of SBA, which was very similar to the optimal CR. Conclusions: In a neural network, the evolutionarily selected CR (10-30%) optimizes the occurrence of SBA, and the APFL serves as a pivotal network motif required to maximize the occurrence of SBA. PMID:22462685
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.
1992-01-01
Fuzzy logic and neural networks provide new methods for designing control systems. Fuzzy logic controllers do not require a complete analytical model of a dynamic system and can provide knowledge-based heuristic controllers for ill-defined and complex systems. Neural networks can be used for learning control. In this chapter, we discuss hybrid methods using fuzzy logic and neural networks which can start with an approximate control knowledge base and refine it through reinforcement learning.
Shlizerman, Eli; Riffell, Jeffrey A.; Kutz, J. Nathan
2014-01-01
The antennal lobe (AL), the olfactory processing center in insects, is able to process stimuli into distinct neural activity patterns, called olfactory neural codes. To model their dynamics we perform multichannel recordings from the projection neurons in the AL driven by different odorants. We then derive a dynamic neuronal network from the electrophysiological data. The network consists of lateral-inhibitory neurons and excitatory neurons (modeled as firing-rate units), and is capable of producing unique olfactory neural codes for the tested odorants. To construct the network, we (1) design a projection, an odor space, for the neural recording from the AL, which discriminates between distinct odorant trajectories; (2) characterize scent recognition, i.e., decision-making based on olfactory signals; and (3) infer the wiring of the neural circuit, the connectome of the AL. We show that the constructed model is consistent with biological observations, such as contrast enhancement and robustness to noise. The study suggests a data-driven approach to answer a key biological question in identifying how lateral inhibitory neurons can be wired to excitatory neurons to permit robust activity patterns. PMID:25165442
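As a toy illustration of this network class (excitatory firing-rate units shaped by lateral inhibition), a sketch with random rather than data-derived connectivity; all sizes and weight scales are assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    nE, nI, dt, T = 20, 5, 0.01, 5.0
    W_EE = rng.uniform(0, 0.1, (nE, nE))      # weak excitatory recurrence
    W_EI = -rng.uniform(0, 1.0, (nE, nI))     # lateral inhibition onto E units
    W_IE = rng.uniform(0, 0.5, (nI, nE))
    f = lambda u: np.maximum(u, 0.0)          # rectified-linear rate function

    rE, rI = np.zeros(nE), np.zeros(nI)
    odor = rng.uniform(0, 1, nE)              # stimulus-specific input pattern
    for _ in range(int(T / dt)):
        rE += dt * (-rE + f(W_EE @ rE + W_EI @ rI + odor))
        rI += dt * (-rI + f(W_IE @ rE))
    print("steady-state E rates (the 'neural code'):", np.round(rE, 2))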
Synchronization in a noise-driven developing neural network
NASA Astrophysics Data System (ADS)
Lin, I.-H.; Wu, R.-K.; Chen, C.-M.
2011-11-01
We use computer simulations to investigate the structural and dynamical properties of a developing neural network whose activity is driven by noise. Structurally, the constructed neural networks in our simulations exhibit the small-world properties that have been observed in several neural networks. The dynamical change of neuronal membrane potential is described by the Hodgkin-Huxley model, and two types of learning rules, including spike-timing-dependent plasticity (STDP) and inverse STDP, are considered to restructure the synaptic strength between neurons. Clustered synchronized firing (SF) of the network is observed when the network connectivity (number of connections/maximal connections) is about 0.75, in which the firing rate of neurons is only half of the network frequency. At the connectivity of 0.86, all neurons fire synchronously at the network frequency. The network SF frequency increases logarithmically with the culturing time of a growing network and decreases exponentially with the delay time in signal transmission. These conclusions are consistent with experimental observations. The phase diagrams of SF in a developing network are investigated for both learning rules.
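For reference, a minimal sketch of the pair-based STDP rule such simulations typically use; the amplitudes and time constant here are illustrative assumptions, not the paper's values, and inverse STDP simply flips the sign of the update:

    import numpy as np

    A_plus, A_minus, tau = 0.01, 0.012, 20.0    # amplitudes and time constant (ms)

    def stdp_dw(t_pre, t_post):
        """Weight change for one pre/post spike pair (times in ms)."""
        dt = t_post - t_pre
        if dt > 0:                              # pre before post: potentiation
            return A_plus * np.exp(-dt / tau)
        else:                                   # post before (or with) pre: depression
            return -A_minus * np.exp(dt / tau)

    def istdp_dw(t_pre, t_post):                # inverse STDP: sign reversed
        return -stdp_dw(t_pre, t_post)

    print(stdp_dw(10.0, 15.0), stdp_dw(15.0, 10.0))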
Synchronization of heteroclinic circuits through learning in coupled neural networks
NASA Astrophysics Data System (ADS)
Selskii, Anton; Makarov, Valeri A.
2016-01-01
The synchronization of oscillatory activity in neural networks is usually implemented by coupling the state variables describing neuronal dynamics. Here we study another, but complementary mechanism based on a learning process with memory. A driver network, acting as a teacher, exhibits winner-less competition (WLC) dynamics, while a driven network, a learner, tunes its internal couplings according to the oscillations observed in the teacher. We show that under appropriate training the learner can "copy" the coupling structure and thus synchronize oscillations with the teacher. The replication of the WLC dynamics occurs for intermediate memory lengths only; consequently, the learner network exhibits a phenomenon of learning resonance.
Empirical Modeling of the Plasmasphere Dynamics Using Neural Networks
NASA Astrophysics Data System (ADS)
Zhelavskaya, I. S.; Shprits, Y.; Spasojevic, M.
2017-12-01
We present a new empirical model for reconstructing the global dynamics of the cold plasma density distribution based only on solar wind data and geomagnetic indices. Utilizing the density database obtained using the NURD (Neural-network-based Upper hybrid Resonance Determination) algorithm for the period of October 1, 2012 - July 1, 2016, in conjunction with solar wind data and geomagnetic indices, we develop a neural network model that is capable of globally reconstructing the dynamics of the cold plasma density distribution for 2 ≤ L ≤ 6 and all local times. We validate and test the model by measuring its performance on independent datasets withheld from the training set and by comparing the model predicted global evolution with global images of He+ distribution in the Earth's plasmasphere from the IMAGE Extreme UltraViolet (EUV) instrument. We identify the parameters that best quantify the plasmasphere dynamics by training and comparing multiple neural networks with different combinations of input parameters (geomagnetic indices, solar wind data, and different durations of their time history). We demonstrate results of both local and global plasma density reconstruction. This study illustrates how global dynamics can be reconstructed from local in-situ observations by using machine learning techniques.
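A minimal sketch, on made-up data, of the regression setup described above: a feedforward network maps a time history of a geomagnetic index plus position inputs to (log) plasma density. The lag count, array shapes, and the toy target are illustrative assumptions, not the NURD pipeline:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    n, lags = 5000, 24                          # samples, hours of index history
    kp = rng.uniform(0, 9, n + lags)            # stand-in geomagnetic index
    history = np.stack([kp[i:i + n] for i in range(lags)], axis=1)
    L = rng.uniform(2, 6, (n, 1))               # L-shell in [2, 6]
    mlt = rng.uniform(0, 24, (n, 1))            # magnetic local time
    X = np.hstack([history, L, mlt])
    log_density = 3.0 - 0.4 * L.ravel() - 0.05 * history.mean(axis=1)  # toy target

    model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
    model.fit(X[:4000], log_density[:4000])
    print("held-out R^2:", model.score(X[4000:], log_density[4000:]))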
Kappel, David; Legenstein, Robert; Habenschuss, Stefan; Hsieh, Michael; Maass, Wolfgang
2018-01-01
Synaptic connections between neurons in the brain are dynamic because of continuously ongoing spine dynamics, axonal sprouting, and other processes. In fact, it was recently shown that the spontaneous synapse-autonomous component of spine dynamics is at least as large as the component that depends on the history of pre- and postsynaptic neural activity. These data are inconsistent with common models for network plasticity and raise the following questions: how can neural circuits maintain a stable computational function in spite of these continuously ongoing processes, and what could be functional uses of these ongoing processes? Here, we present a rigorous theoretical framework for these seemingly stochastic spine dynamics and rewiring processes in the context of reward-based learning tasks. We show that spontaneous synapse-autonomous processes, in combination with reward signals such as dopamine, can explain the capability of networks of neurons in the brain to configure themselves for specific computational tasks, and to compensate automatically for later changes in the network or task. Furthermore, we show theoretically and through computer simulations that stable computational performance is compatible with continuously ongoing synapse-autonomous changes. After good computational performance has been reached, these changes cause primarily a slow drift of network architecture and dynamics in task-irrelevant dimensions, as observed for neural activity in motor cortex and other areas. On the more abstract level of reinforcement learning, the resulting model gives rise to an understanding of reward-driven network plasticity as continuous sampling of network configurations. PMID:29696150
Spatiotemporal Dynamics and Reliable Computations in Recurrent Spiking Neural Networks
NASA Astrophysics Data System (ADS)
Pyle, Ryan; Rosenbaum, Robert
2017-01-01
Randomly connected networks of excitatory and inhibitory spiking neurons provide a parsimonious model of neural variability, but are notoriously unreliable for performing computations. We show that this difficulty is overcome by incorporating the well-documented dependence of connection probability on distance. Spatially extended spiking networks exhibit symmetry-breaking bifurcations and generate spatiotemporal patterns that can be trained to perform dynamical computations under a reservoir computing framework.
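A minimal sketch of the key ingredient named above: connection probability that decays with distance between neurons on a two-dimensional sheet. The Gaussian width, peak probability, and network size are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(2)
    N, sigma = 400, 0.1
    pos = rng.uniform(0, 1, (N, 2))              # neurons scattered on a unit square
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
    p_connect = 0.5 * np.exp(-(d / sigma) ** 2)  # nearby pairs connect more often
    A = (rng.uniform(size=(N, N)) < p_connect).astype(int)
    np.fill_diagonal(A, 0)                       # no self-connections
    print("mean out-degree:", A.sum(axis=1).mean())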
NASA Technical Reports Server (NTRS)
Bomben, Craig R.; Smolka, James W.; Bosworth, John T.; Williams-Hayes, Peggy S.; Burken, John J.; Larson, Richard R.; Buschbacher, Mark J.; Maliska, Heather A.
2006-01-01
The Intelligent Flight Control System (IFCS) project at the NASA Dryden Flight Research Center, Edwards AFB, CA, has been investigating the use of neural network based adaptive control on a unique NF-15B test aircraft. The IFCS neural network is a software processor that stores measured aircraft response information to dynamically alter flight control gains. In 2006, the neural network was engaged and allowed to learn in real time to dynamically alter the aircraft handling qualities characteristics in the presence of actual aerodynamic failure conditions injected into the aircraft through the flight control system. The use of neural network and similar adaptive technologies in the design of highly fault and damage tolerant flight control systems shows promise in making future aircraft far more survivable than current technology allows. This paper will present the results of the IFCS flight test program conducted at the NASA Dryden Flight Research Center in 2006, with emphasis on challenges encountered and lessons learned.
Encoding sensory and motor patterns as time-invariant trajectories in recurrent neural networks.
Goudar, Vishwa; Buonomano, Dean V
2018-03-14
Much of the information the brain processes and stores is temporal in nature: a spoken word or a handwritten signature, for example, is defined by how it unfolds in time. However, it remains unclear how neural circuits encode complex time-varying patterns. We show that by tuning the weights of a recurrent neural network (RNN), it can recognize and then transcribe spoken digits. The model elucidates how neural dynamics in cortical networks may resolve three fundamental challenges: first, encode multiple time-varying sensory and motor patterns as stable neural trajectories; second, generalize across relevant spatial features; third, identify the same stimuli played at different speeds. We show that this temporal invariance emerges because the recurrent dynamics generate neural trajectories with appropriately modulated angular velocities. Together our results generate testable predictions as to how recurrent networks may use different mechanisms to generalize across the relevant spatial and temporal features of complex time-varying stimuli. © 2018, Goudar et al. PMID:29537963
Modeling of cortical signals using echo state networks
NASA Astrophysics Data System (ADS)
Zhou, Hanying; Wang, Yongji; Huang, Jiangshuai
2009-10-01
Diverse modeling frameworks have been utilized with the ultimate goal of translating brain cortical signals into predictions of visible behavior. The inputs to these models are usually multidimensional neural recordings collected from relevant regions of a monkey's brain, while the outputs are the associated behavior, typically the 2-D or 3-D hand position of a primate. Here our task is to set up a proper model that recovers the movement trajectories from the neural signals collected simultaneously in the experiment. In this paper, we propose to use Echo State Networks (ESN) to map the neural firing activities into hand positions. The ESN is a recently developed recurrent neural network (RNN) model. Besides the dynamics and short-term memory that other recurrent neural networks also possess, it has a special echo state property which endows it with a powerful ability to model nonlinear dynamic systems. What distinguishes it most significantly from traditional recurrent neural networks is its special learning method. In this paper we train this net with a refined version of its typical training method and obtain a better model.
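Since the record leans on the standard ESN recipe, a minimal sketch may help: a fixed random reservoir rescaled to spectral radius below one, with only a linear readout trained, here by ridge regression (a common ESN training choice; the paper's "refined" method is not reproduced). All dimensions and the random data are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(3)
    n_in, n_res, n_out, T = 10, 200, 2, 1000   # e.g., firing rates in, 2-D hand position out
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # spectral radius < 1: echo state property

    U = rng.normal(size=(T, n_in))             # stand-in for neural recordings
    Y = rng.normal(size=(T, n_out))            # stand-in for hand trajectories
    X = np.zeros((T, n_res))
    x = np.zeros(n_res)
    for t in range(T):
        x = np.tanh(W_in @ U[t] + W @ x)       # reservoir update (fixed weights)
        X[t] = x

    lam = 1e-2                                  # ridge penalty
    W_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ Y)
    print("training MSE:", np.mean((X @ W_out - Y) ** 2))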
Interactions between neural networks: a mechanism for tuning chaos and oscillations
2007-01-01
We show that chaos and oscillations in a higher-order binary neural network can be tuned effectively using interactions between neural networks. Our results suggest that network interactions may be useful as a means of adjusting the level of dynamic activities in systems that employ chaos and oscillations for information processing, or as a means of suppressing oscillatory behaviors in systems that require stability. PMID:19003511
Proceedings of the Second Joint Technology Workshop on Neural Networks and Fuzzy Logic, volume 2
NASA Technical Reports Server (NTRS)
Lea, Robert N. (Editor); Villarreal, James A. (Editor)
1991-01-01
Documented here are papers presented at the Neural Networks and Fuzzy Logic Workshop sponsored by NASA and the University of Texas, Houston. Topics addressed included adaptive systems, learning algorithms, network architectures, vision, robotics, neurobiological connections, speech recognition and synthesis, fuzzy set theory and application, control and dynamics processing, space applications, fuzzy logic and neural network computers, approximate reasoning, and multiobject decision making.
Predictive Coding of Dynamical Variables in Balanced Spiking Networks
Boerlin, Martin; Machens, Christian K.; Denève, Sophie
2013-01-01
Two observations about the cortex have puzzled neuroscientists for a long time. First, neural responses are highly variable. Second, the level of excitation and inhibition received by each neuron is tightly balanced at all times. Here, we demonstrate that both properties are necessary consequences of neural networks that represent information efficiently in their spikes. We illustrate this insight with spiking networks that represent dynamical variables. Our approach is based on two assumptions: We assume that information about dynamical variables can be read out linearly from neural spike trains, and we assume that neurons only fire a spike if that improves the representation of the dynamical variables. Based on these assumptions, we derive a network of leaky integrate-and-fire neurons that is able to implement arbitrary linear dynamical systems. We show that the membrane voltage of the neurons is equivalent to a prediction error about a common population-level signal. Among other things, our approach allows us to construct an integrator network of spiking neurons that is robust against many perturbations. Most importantly, neural variability in our networks cannot be equated to noise. Despite exhibiting the same single unit properties as widely used population code models (e.g. tuning curves, Poisson distributed spike trains), balanced networks are orders of magnitude more reliable. Our approach suggests that spikes do matter when considering how the brain computes, and that the reliability of cortical representations could have been strongly underestimated. PMID:24244113
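A heavily simplified sketch of the spike-coding principle stated above, for a single signal dimension: a neuron fires only when its spike moves the linear readout closer to the target variable. The greedy one-spike-per-step rule and all parameter values are simplifying assumptions, not the paper's full derivation:

    import numpy as np

    rng = np.random.default_rng(4)
    N, dt, lam, T = 20, 1e-3, 10.0, 2.0
    G = rng.choice([-0.1, 0.1], size=N)         # decoding weights (mixed signs)
    x, xhat = 0.0, 0.0
    spikes = np.zeros(N)
    for step in range(int(T / dt)):
        c = 5.0 if step * dt < 1.0 else -5.0    # step command signal
        x += dt * (-lam * x + c)                # target dynamical variable
        xhat += dt * (-lam * xhat)              # readout decays between spikes
        V = G * (x - xhat)                      # membrane = projected coding error
        i = np.argmax(V - G ** 2 / 2)           # neuron whose spike would help most
        if V[i] > G[i] ** 2 / 2:                # fire only if the error is reduced
            xhat += G[i]                        # spike updates the readout
            spikes[i] += 1
    print("final |x - xhat|:", abs(x - xhat), " total spikes:", spikes.sum())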
Volkova, Svitlana; Ayton, Ellyn; Porterfield, Katherine; ...
2017-12-15
This work is the first to take advantage of recurrent neural networks to predict influenza-like-illness (ILI) dynamics from various linguistic signals extracted from social media data. Unlike other approaches that rely on time-series analysis of historical ILI data [1, 2] and state-of-the-art machine learning models [3, 4], we build and evaluate the predictive power of Long Short-Term Memory (LSTM) architectures capable of nowcasting (predicting in "real-time") and forecasting (predicting the future) ILI dynamics in the 2011 - 2014 influenza seasons. To build our models we integrate information people post in social media, e.g., topics, stylistic and syntactic patterns, emotions and opinions, and communication behavior. We then quantitatively evaluate the predictive power of different social media signals and contrast the performance of state-of-the-art regression models with neural networks. Finally, we combine ILI and social media signals to build joint neural network models for ILI dynamics prediction. Unlike the majority of the existing work, we specifically focus on developing models for local rather than national ILI surveillance [1], specifically for military rather than general populations [3], in 26 U.S. and six international locations. Our approach demonstrates several advantages: (a) Neural network models learned from social media data yield the best performance compared to previously used regression models. (b) Previously under-explored language and communication behavior features are more predictive of ILI dynamics than syntactic and stylistic signals expressed in social media. (c) Neural network models learned exclusively from social media signals yield comparable or better performance to the models learned from ILI historical data; thus, signals from social media can potentially be used to accurately forecast ILI dynamics for regions where ILI historical data are not available. (d) Neural network models learned from combined ILI and social media signals significantly outperform models that rely solely on ILI historical data, which adds to the great potential of alternative public sources for ILI dynamics prediction. (e) Location-specific models outperform previously used location-independent models, e.g., U.S. only. (f) Prediction results vary significantly across geolocations depending on the amount of social media data available and ILI activity patterns.
Dynamic Pricing in Electronic Commerce Using Neural Network
NASA Astrophysics Data System (ADS)
Ghose, Tapu Kumar; Tran, Thomas T.
In this paper, we propose an approach in which a feed-forward neural network is used to dynamically calculate a competitive price for a product in order to maximize the seller's revenue. The approach considers that, along with product price, other attributes such as product quality, delivery time, after-sales service and seller's reputation contribute to consumers' purchase decisions. We show that once sellers, using their limited prior knowledge, set an initial price for a product, our model adjusts the price automatically with the help of the neural network so that the sellers' revenue is maximized.
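A minimal sketch, on synthetic data, of the loop described above: a feed-forward network learns revenue as a function of price and product attributes, and the price is then adjusted to the revenue-maximizing value on a grid. The demand model, attribute set, and price range are illustrative assumptions, not the authors' formulation:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(5)
    n = 2000
    price = rng.uniform(5, 50, n)
    quality, delivery, service, reputation = rng.uniform(0, 1, (4, n))
    demand = np.maximum(0, 10 - 0.2 * price + 3 * quality + 2 * reputation
                        + rng.normal(0, 0.5, n))      # toy demand model
    revenue = price * demand
    X = np.column_stack([price, quality, delivery, service, reputation])

    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=800, random_state=0)
    net.fit(X, revenue)

    attrs = [0.8, 0.6, 0.7, 0.9]                      # one product's fixed attributes
    grid = np.linspace(5, 50, 200)
    Xq = np.column_stack([grid, np.tile(attrs, (200, 1))])
    print("suggested price:", grid[np.argmax(net.predict(Xq))])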
Functional expansion representations of artificial neural networks
NASA Technical Reports Server (NTRS)
Gray, W. Steven
1992-01-01
In the past few years, significant interest has developed in using artificial neural networks to model and control nonlinear dynamical systems. While there exist many proposed schemes for accomplishing this and a wealth of supporting empirical results, most approaches to date tend to be ad hoc in nature and rely mainly on heuristic justifications. The purpose of this project was to further develop some analytical tools for representing nonlinear discrete-time input-output systems, which when applied to neural networks would give insight on architecture selection, pruning strategies, and learning algorithms. A long-term goal is to determine in what sense, if any, a neural network can be used as a universal approximator for nonlinear input-output maps with memory (i.e., realized by a dynamical system). This property is well known for the case of static or memoryless input-output maps. The general architecture under consideration in this project was a single-input, single-output recurrent feedforward network.
Implementing neural nets with programmable logic
NASA Technical Reports Server (NTRS)
Vidal, Jacques J.
1988-01-01
Networks of Boolean programmable logic modules are presented as one purely digital class of artificial neural nets. The approach contrasts with the continuous analog framework usually suggested. Programmable logic networks are capable of handling many neural-net applications. They avoid some of the limitations of threshold logic networks and present distinct opportunities. The network nodes are called dynamically programmable logic modules. They can be implemented with digitally controlled demultiplexers. Each node performs a Boolean function of its inputs which can be dynamically assigned. The overall network is therefore a combinational circuit and its outputs are Boolean global functions of the network's input variables. The approach offers definite advantages for VLSI implementation, namely, a regular architecture with limited connectivity, simplicity of the control machinery, natural modularity, and the support of a mature technology.
Comparison of RF spectrum prediction methods for dynamic spectrum access
NASA Astrophysics Data System (ADS)
Kovarskiy, Jacob A.; Martone, Anthony F.; Gallagher, Kyle A.; Sherbondy, Kelly D.; Narayanan, Ram M.
2017-05-01
Dynamic spectrum access (DSA) refers to the adaptive utilization of today's busy electromagnetic spectrum. Cognitive radio/radar technologies require DSA to intelligently transmit and receive information in changing environments. Predicting radio frequency (RF) activity reduces sensing time and energy consumption for identifying usable spectrum. Typical spectrum prediction methods involve modeling spectral statistics with Hidden Markov Models (HMM) or various neural network structures. HMMs describe the time-varying state probabilities of Markov processes as a dynamic Bayesian network. Neural networks model biological brain neuron connections to perform a wide range of complex and often non-linear computations. This work compares HMM, Multilayer Perceptron (MLP), and Recurrent Neural Network (RNN) algorithms and their ability to perform RF channel state prediction. Monte Carlo simulations on both measured and simulated spectrum data evaluate the performance of these algorithms. Generalizing spectrum occupancy as an alternating renewal process allows Poisson random variables to generate simulated data while energy detection determines the occupancy state of measured RF spectrum data for testing. The results suggest that neural networks achieve better prediction accuracy and prove more adaptable to changing spectral statistics than HMMs given sufficient training data.
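A minimal sketch of such a comparison, using two of the simpler predictors: a first-order Markov (two-state) model versus an MLP that sees a short history window. The exponential on/off holding times below are a stand-in for the alternating renewal process; all parameters are assumptions:

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(6)
    T, lags = 20000, 8
    state, s = [], 0
    while len(state) < T:                        # alternating busy/idle holding times
        state += [s] * max(1, int(rng.exponential(5.0)))
        s = 1 - s
    x = np.array(state[:T])

    # Markov predictor: predict the more likely successor of the current state.
    p01 = np.mean(x[1:][x[:-1] == 0])            # P(next=1 | current=0)
    p11 = np.mean(x[1:][x[:-1] == 1])            # P(next=1 | current=1)
    markov_pred = np.where(x[:-1] == 1, p11 > 0.5, p01 > 0.5).astype(int)
    print("Markov accuracy:", np.mean(markov_pred == x[1:]))

    X = np.stack([x[i:T - lags + i] for i in range(lags)], axis=1)
    y = x[lags:]
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    clf.fit(X[:15000], y[:15000])
    print("MLP accuracy:", clf.score(X[15000:], y[15000:]))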
Embedding recurrent neural networks into predator-prey models.
Moreau, Yves; Louiès, Stephane; Vandewalle, Joos; Brenig, Leon
1999-03-01
We study changes of coordinates that allow the embedding of ordinary differential equations describing continuous-time recurrent neural networks into differential equations describing predator-prey models, also called Lotka-Volterra systems. We transform the equations for the neural network first into quasi-monomial form (Brenig, L. (1988). Complete factorization and analytic solutions of generalized Lotka-Volterra equations. Physics Letters A, 133(7-8), 378-382), where we express the vector field of the dynamical system as a linear combination of products of powers of the variables. In practice, this transformation is possible only if the activation function is the hyperbolic tangent or the logistic sigmoid. From this quasi-monomial form, we can directly transform the system further into Lotka-Volterra equations. The resulting Lotka-Volterra system is of higher dimension than the original system, but the behavior of its first variables is equivalent to the behavior of the original neural network. We expect that this transformation will permit the application of existing techniques for the analysis of Lotka-Volterra systems to recurrent neural networks. Furthermore, our results show that Lotka-Volterra systems are universal approximators of dynamical systems, just as are continuous-time neural networks.
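For orientation, the two families of systems being related are, in generic notation (not necessarily the paper's), the continuous-time recurrent neural network

    \dot{y}_i = -y_i + \sum_{j} w_{ij}\,\sigma(y_j), \qquad \sigma(u) = \tanh(u),

and the Lotka-Volterra (predator-prey) system

    \dot{x}_i = x_i\Bigl(\lambda_i + \sum_{j} A_{ij}\,x_j\Bigr).

The embedding maps the former into a higher-dimensional instance of the latter, so the neural network's trajectory reappears among the first Lotka-Volterra variables.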
Petrović, Jelena; Ibrić, Svetlana; Betz, Gabriele; Đurić, Zorica
2012-05-30
The main objective of the study was to develop artificial intelligence methods for optimization of drug release from matrix tablets regardless of the matrix type. Static and dynamic artificial neural networks of the same topology were developed to model dissolution profiles of different matrix tablet types (hydrophilic/lipid) using formulation composition, the compression force used for tableting, and tablet porosity and tensile strength as input data. The potential application of decision trees in discovering knowledge from experimental data was also investigated. Polyethylene oxide polymer and glyceryl palmitostearate were used as matrix-forming materials for hydrophilic and lipid matrix tablets, respectively, whereas the selected model drugs were diclofenac sodium and caffeine. Matrix tablets were prepared by the direct compression method and tested for in vitro dissolution profiles. Optimization of the static and dynamic neural networks used for modeling of drug release was performed using Monte Carlo simulations or a genetic algorithm optimizer. Decision trees were constructed following discretization of the data. Calculated difference (f(1)) and similarity (f(2)) factors for predicted and experimentally obtained dissolution profiles of test matrix tablet formulations indicate that Elman dynamic neural networks as well as decision trees are capable of accurate predictions of both hydrophilic and lipid matrix tablet dissolution profiles. Elman neural networks were compared to the most frequently used static network, the multilayered perceptron, and the superiority of Elman networks has been demonstrated. The developed methods allow a simple, yet very precise, way of predicting drug release for both hydrophilic and lipid matrix tablets having controlled drug release. Copyright © 2012 Elsevier B.V. All rights reserved.
Phase synchronization motion and neural coding in dynamic transmission of neural information.
Wang, Rubin; Zhang, Zhikang; Qu, Jingyi; Cao, Jianting
2011-07-01
In order to explore the dynamic characteristics of neural coding in the transmission of neural information in the brain, a model of a neural network consisting of three neuronal populations is proposed in this paper using the theory of stochastic phase dynamics. Based on the model established, the neural phase synchronization motion and neural coding under spontaneous activity and stimulation are examined, for the case of varying network structure. Our analysis shows that, under the condition of spontaneous activity, the characteristics of phase neural coding are unrelated to the number of neurons participating in neural firing within the neuronal populations. The result of numerical simulation supports the existence of sparse coding within the brain, and verifies the crucial importance of the magnitudes of the coupling coefficients in neural information processing, as well as the completely different information processing capabilities of neural information transmission in serial and parallel couplings. The result also testifies that, under external stimulation, the bigger the number of neurons in a neuronal population, the more the stimulation influences the phase synchronization motion and neural coding evolution in other neuronal populations. We verify numerically the experimental result in neurobiology that the reduction of the coupling coefficient between neuronal populations implies the enhancement of the lateral inhibition function in neural networks, with the enhancement equivalent to depressing the neuronal excitability threshold. Thus, the neuronal populations tend to have a stronger reaction under the same stimulation, and more neurons get excited, leading to more neurons participating in neural coding and phase synchronization motion.
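A minimal sketch of stochastic phase dynamics in the spirit of such models: Kuramoto-type phase equations with noise, with synchrony read out through the order parameter. The coupling strength, noise level, and population size are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(7)
    N, K, D, dt, T = 100, 1.5, 0.1, 0.01, 20.0
    omega = rng.normal(1.0, 0.1, N)              # natural firing frequencies
    theta = rng.uniform(0, 2 * np.pi, N)
    for _ in range(int(T / dt)):
        mean_field = np.mean(np.exp(1j * theta))
        r, psi = np.abs(mean_field), np.angle(mean_field)
        theta += dt * (omega + K * r * np.sin(psi - theta)) \
                 + np.sqrt(2 * D * dt) * rng.normal(size=N)  # phase noise
    print("phase synchronization index r:", np.abs(np.mean(np.exp(1j * theta))))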
Dynamic Adaptive Neural Network Arrays: A Neuromorphic Architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Disney, Adam; Reynolds, John
2015-01-01
Dynamic Adaptive Neural Network Array (DANNA) is a neuromorphic hardware implementation. It differs from most other neuromorphic projects in that it allows for programmability of structure, and it is trained or designed using evolutionary optimization. This paper describes the DANNA structure, how DANNA is trained using evolutionary optimization, and an application of DANNA to a very simple classification task.
A Neural Network Model of the Structure and Dynamics of Human Personality
ERIC Educational Resources Information Center
Read, Stephen J.; Monroe, Brian M.; Brownstein, Aaron L.; Yang, Yu; Chopra, Gurveen; Miller, Lynn C.
2010-01-01
We present a neural network model that aims to bridge the historical gap between dynamic and structural approaches to personality. The model integrates work on the structure of the trait lexicon, the neurobiology of personality, temperament, goal-based models of personality, and an evolutionary analysis of motives. It is organized in terms of two…
Active Control of Wind-Tunnel Model Aeroelastic Response Using Neural Networks
NASA Technical Reports Server (NTRS)
Scott, Robert C.
2000-01-01
Under a joint research and development effort conducted by the National Aeronautics and Space Administration and The Boeing Company (formerly McDonnell Douglas), three neural-network based control systems were developed and tested. The control systems were experimentally evaluated using a transonic wind-tunnel model in the Langley Transonic Dynamics Tunnel. One system used a neural network to schedule flutter suppression control laws, another employed a neural network in a predictive control scheme, and the third employed a neural network in an inverse model control scheme. All three of these control schemes successfully suppressed flutter to or near the limits of the testing apparatus, and represent the first experimental applications of neural networks to flutter suppression. This paper will summarize the findings of this project.
Using fuzzy logic to integrate neural networks and knowledge-based systems
NASA Technical Reports Server (NTRS)
Yen, John
1991-01-01
Outlined here is a novel hybrid architecture that uses fuzzy logic to integrate neural networks and knowledge-based systems. The author's approach offers important synergistic benefits to neural nets, approximate reasoning, and symbolic processing. Fuzzy inference rules extend symbolic systems with approximate reasoning capabilities, which are used for integrating and interpreting the outputs of neural networks. The symbolic system captures meta-level information about neural networks and defines its interaction with neural networks through a set of control tasks. Fuzzy action rules provide a robust mechanism for recognizing the situations in which neural networks require certain control actions. The neural nets, on the other hand, offer flexible classification and adaptive learning capabilities, which are crucial for dynamic and noisy environments. By combining neural nets and symbolic systems at their system levels through the use of fuzzy logic, the author's approach alleviates current difficulties in reconciling differences between low-level data processing mechanisms of neural nets and artificial intelligence systems.
Neural joint control for Space Shuttle Remote Manipulator System
NASA Technical Reports Server (NTRS)
Atkins, Mark A.; Cox, Chadwick J.; Lothers, Michael D.; Pap, Robert M.; Thomas, Charles R.
1992-01-01
Neural networks are being used to control a robot arm in a telerobotic operation. The concept uses neural networks for both joint and inverse kinematics in a robotic control application. An upper level neural network is trained to learn inverse kinematic mappings. The output, a trajectory, is then fed to the Decentralized Adaptive Joint Controllers. This neural network implementation has shown that the controlled arm recovers from unexpected payload changes while following the reference trajectory. The neural network-based decentralized joint controller is faster, more robust and efficient than conventional approaches. Implementations of this architecture are discussed that would relax assumptions about dynamics, obstacles, and heavy loads. This system is being developed to use with the Space Shuttle Remote Manipulator System.
Learning by stimulation avoidance: A principle to control spiking neural networks dynamics
Sinapayen, Lana; Masumori, Atsushi; Ikegami, Takashi
2017-01-01
Learning based on networks of real neurons, and learning based on biologically inspired models of neural networks, have yet to find general learning rules leading to widespread applications. In this paper, we argue for the existence of a principle that allows steering the dynamics of a biologically inspired neural network. Using carefully timed external stimulation, the network can be driven towards a desired dynamical state. We term this principle “Learning by Stimulation Avoidance” (LSA). We demonstrate through simulation that the minimal sufficient conditions leading to LSA in artificial networks are also sufficient to reproduce learning results similar to those obtained in biological neurons by Shahaf and Marom, and in addition explain synaptic pruning. We examined the underlying mechanism by simulating a small network of 3 neurons, then scaled it up to a hundred neurons. We show that LSA has a higher explanatory power than existing hypotheses about the response of biological neural networks to external stimulation, and can be used as a learning rule for an embodied application: learning of wall avoidance by a simulated robot. In other works, reinforcement learning with spiking networks has been obtained through global reward signals akin to simulating the dopamine system; we believe that this is the first project demonstrating sensory-motor learning with random spiking networks through Hebbian learning relying on environmental conditions without a separate reward system. PMID:28158309
NASA Technical Reports Server (NTRS)
Zak, Michail
1990-01-01
A new neural network architecture is proposed based upon effects of non-Lipschitzian dynamics. The network is fully connected, but these connections are active only during vanishingly short time periods. The advantages of this architecture are discussed.
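A small numerical illustration of the non-Lipschitzian effect invoked here, using the classic terminal-attractor example dx/dt = -x**(1/3) (an assumption for illustration; the network architecture itself is not reproduced): the right-hand side violates the Lipschitz condition at x = 0, so the trajectory reaches equilibrium exactly, in finite time, rather than asymptotically:

    import numpy as np

    x, dt, t = 1.0, 1e-4, 0.0
    while x > 0.0:
        # Euler step of dx/dt = -x**(1/3); max() absorbs the final overshoot.
        x = max(0.0, x - dt * abs(x) ** (1.0 / 3.0))
        t += dt
    print("equilibrium reached at t =", round(t, 3))  # theory: t* = 1.5 * x0**(2/3)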
Wang, Dongshu; Huang, Lihong; Tang, Longkun
2015-08-01
This paper is concerned with the synchronization dynamics of a class of delayed neural networks with discontinuous neuron activations. Continuous and discontinuous state feedback controllers are designed such that the neural network model can realize exponential complete synchronization, using functional differential inclusion theory, the Lyapunov functional method and inequality techniques. The newly proposed results are easy to verify and are also applicable to neural networks with continuous activations. Finally, some numerical examples show the applicability and effectiveness of our main results.
Network evolution induced by asynchronous stimuli through spike-timing-dependent plasticity.
Yuan, Wu-Jie; Zhou, Jian-Fang; Zhou, Changsong
2013-01-01
In sensory neural systems, external asynchronous stimuli play an important role in perceptual learning, associative memory and map development. However, the organization of the structure and dynamics of neural networks induced by external asynchronous stimuli is not well understood. Spike-timing-dependent plasticity (STDP) is a typical synaptic plasticity that has been extensively found in sensory systems and that has received much theoretical attention. This synaptic plasticity is highly sensitive to correlations between pre- and postsynaptic firings. Thus, STDP is expected to play an important role in response to external asynchronous stimuli, which can induce segregated pre- and postsynaptic firings. In this paper, we study the impact of external asynchronous stimuli on the organization of the structure and dynamics of neural networks through STDP. We construct a two-dimensional spatial neural network model with local connectivity and sparseness, and use external currents to stimulate different spatial layers alternately. The external currents imposed alternately on spatial layers can here be regarded as external asynchronous stimuli. Through extensive numerical simulations, we focus on the effects of stimulus number and inter-stimulus timing on synaptic connection weights and the propagation dynamics in the resulting network structure. Interestingly, the resulting feedforward structure induced by stimulus-dependent asynchronous firings and its propagation dynamics both reflect the underlying properties of STDP. The results imply a possible important role of STDP in generating the feedforward structure and collective propagation activity required for experience-dependent map plasticity in developing in vivo sensory pathways and cortices. The relevance of the results to cue-triggered recall of learned temporal sequences, an important cognitive function, is briefly discussed as well. Furthermore, this finding suggests a potential application for examining STDP by measuring neural population activity in a cultured neural network.
Nonequilibrium landscape theory of neural networks.
Yan, Han; Zhao, Lei; Hu, Liang; Wang, Xidi; Wang, Erkang; Wang, Jin
2013-11-05
The brain map project aims to map out the neuron connections of the human brain. Even with all of the wirings mapped out, the global and physical understandings of the function and behavior are still challenging. Hopfield quantified the learning and memory process of symmetrically connected neural networks globally through equilibrium energy. The energy basins of attraction represent memories, and the memory retrieval dynamics is determined by the energy gradient. However, realistic neural networks are asymmetrically connected, and oscillations cannot emerge from symmetric neural networks. Here, we developed a nonequilibrium landscape-flux theory for realistic asymmetrically connected neural networks. We uncovered the underlying potential landscape and the associated Lyapunov function for quantifying the global stability and function. We found the dynamics and oscillations in human brains responsible for cognitive processes and physiological rhythm regulations are determined not only by the landscape gradient but also by the flux. We found that the flux is closely related to the degrees of the asymmetric connections in neural networks and is the origin of the neural oscillations. The neural oscillation landscape shows a closed-ring attractor topology. The landscape gradient attracts the network down to the ring. The flux is responsible for coherent oscillations on the ring. We suggest the flux may provide the driving force for associations among memories. We applied our theory to the rapid-eye-movement sleep cycle. We identified the key regulation factors for function through global sensitivity analysis of landscape topography against wirings, which are in good agreement with experiments. PMID:24145451
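In generic notation (not necessarily the authors'), the decomposition behind such landscape-flux pictures reads as follows: for dynamics dx/dt = F(x) with diffusion D, the steady-state probability flux and the landscape are

    \mathbf{J}_{ss} = \mathbf{F}\,P_{ss} - D\,\nabla P_{ss}, \qquad U = -\ln P_{ss},

so the driving force splits into a gradient part and a flux part,

    \mathbf{F} = -D\,\nabla U + \frac{\mathbf{J}_{ss}}{P_{ss}}.

At equilibrium the flux vanishes and the dynamics are purely gradient; a nonzero rotational flux is what can sustain coherent oscillation along a ring attractor.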
Neural networks and logical reasoning systems: a translation table.
Martins, J; Mendes, R V
2001-04-01
A correspondence is established between the basic elements of logic reasoning systems (knowledge bases, rules, inference and queries) and the structure and dynamical evolution laws of neural networks. The correspondence is pictured as a translation dictionary which might allow one to go back and forth between symbolic and network formulations, a desirable step in learning-oriented systems and multicomputer networks. In the framework of Horn clause logics, it is found that atomic propositions with n arguments correspond to nodes with nth-order synapses, rules to synaptic intensity constraints, forward chaining to synaptic dynamics and queries either to simple node activation or to a query tensor dynamics.
A modular architecture for transparent computation in recurrent neural networks.
Carmantini, Giovanni S; Beim Graben, Peter; Desroches, Mathieu; Rodrigues, Serafim
2017-01-01
Computation is classically studied in terms of automata, formal languages and algorithms; yet, the relation between neural dynamics and symbolic representations and operations is still unclear in traditional eliminative connectionism. Therefore, we suggest a unique perspective on this central issue, to which we would like to refer as transparent connectionism, by proposing accounts of how symbolic computation can be implemented in neural substrates. In this study we first introduce a new model of dynamics on a symbolic space, the versatile shift, showing that it supports the real-time simulation of a range of automata. We then show that the Gödelization of versatile shifts defines nonlinear dynamical automata, dynamical systems evolving on a vectorial space. Finally, we present a mapping between nonlinear dynamical automata and recurrent artificial neural networks. The mapping defines an architecture characterized by its granular modularity, where data, symbolic operations and their control are not only distinguishable in activation space, but also spatially localizable in the network itself, while maintaining a distributed encoding of symbolic representations. The resulting networks simulate automata in real-time and are programmed directly, in the absence of network training. To discuss the unique characteristics of the architecture and their consequences, we present two examples: (i) the design of a Central Pattern Generator from a finite-state locomotive controller, and (ii) the creation of a network simulating a system of interactive automata that supports the parsing of garden-path sentences as investigated in psycholinguistics experiments. Copyright © 2016 Elsevier Ltd. All rights reserved.
Dynamic neural networking as a basis for plasticity in the control of heart rate.
Kember, G; Armour, J A; Zamir, M
2013-01-21
A model is proposed in which the relationship between individual neurons within a neural network is dynamically changing to the effect of providing a measure of "plasticity" in the control of heart rate. The neural network on which the model is based consists of three populations of neurons residing in the central nervous system, the intrathoracic extracardiac nervous system, and the intrinsic cardiac nervous system. This hierarchy of neural centers is used to challenge the classical view that the control of heart rate, a key clinical index, resides entirely in central neuronal command (spinal cord, medulla oblongata, and higher centers). Our results indicate that dynamic networking allows for the possibility of an interplay among the three populations of neurons to the effect of altering the order of control of heart rate among them. This interplay among the three levels of control allows for different neural pathways for the control of heart rate to emerge under different blood flow demands or disease conditions and, as such, it has significant clinical implications because current understanding and treatment of heart rate anomalies are based largely on a single level of control and on neurons acting in unison as a single entity rather than individually within a (plastically) interconnected network. Copyright © 2012 Elsevier Ltd. All rights reserved.
Small-time Scale Network Traffic Prediction Based on Complex-valued Neural Network
NASA Astrophysics Data System (ADS)
Yang, Bin
2017-07-01
Accurate models play an important role in capturing the significant characteristics of network traffic, analyzing network dynamics, and improving forecasting accuracy for system dynamics. In this study, a complex-valued neural network (CVNN) model is proposed to further improve the accuracy of small-time scale network traffic forecasting. An artificial bee colony (ABC) algorithm is proposed to optimize the complex-valued and real-valued parameters of the CVNN model. Small-time scale traffic measurement data, namely TCP traffic data, are used to test the performance of the CVNN model. Experimental results reveal that the CVNN model forecasts the small-time scale network traffic measurement data very accurately.
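A minimal sketch of the kind of complex-valued neuron a CVNN is built from: complex weights and inputs, with a "split" activation applied to the real and imaginary parts separately. The split-tanh choice and all values are illustrative assumptions; the paper's exact activation is not specified in the abstract:

    import numpy as np

    rng = np.random.default_rng(8)
    w = rng.normal(size=4) + 1j * rng.normal(size=4)   # complex weights
    b = 0.1 + 0.2j                                     # complex bias
    z = rng.normal(size=4) + 1j * rng.normal(size=4)   # complex input features

    def split_tanh(u):
        """Apply tanh to real and imaginary parts separately."""
        return np.tanh(u.real) + 1j * np.tanh(u.imag)

    y = split_tanh(np.dot(w, z) + b)
    print("neuron output:", y)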
Radar signal categorization using a neural network
NASA Technical Reports Server (NTRS)
Anderson, James A.; Gately, Michael T.; Penz, P. Andrew; Collins, Dean R.
1991-01-01
Neural networks were used to analyze a complex simulated radar environment which contains noisy radar pulses generated by many different emitters. The neural network used is an energy minimizing network (the BSB model) which forms energy minima - attractors in the network dynamical system - based on learned input data. The system first determines how many emitters are present (the deinterleaving problem). Pulses from individual simulated emitters give rise to separate stable attractors in the network. Once individual emitters are characterized, it is possible to make tentative identifications of them based on their observed parameters. As a test of this idea, a neural network was used to form a small data base that potentially could make emitter identifications.
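A minimal sketch of the BSB ("brain-state-in-a-box") update named above: states iterate inside the hypercube [-1, 1]^n and settle toward learned corners, which act as attractors. Building the weight matrix from two stored patterns by an outer-product rule is an illustrative choice, not the paper's training procedure:

    import numpy as np

    rng = np.random.default_rng(9)
    p1 = np.sign(rng.normal(size=16))           # stored pattern 1
    p2 = np.sign(rng.normal(size=16))           # stored pattern 2
    A = (np.outer(p1, p1) + np.outer(p2, p2)) / 16.0

    x = p1 + 0.8 * rng.normal(size=16)          # noisy "radar pulse" feature vector
    for _ in range(50):
        x = np.clip(x + 0.5 * A @ x, -1.0, 1.0) # BSB iteration: amplify, then box
    print("overlap with pattern 1:", (np.sign(x) == p1).mean())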
Synchronization and long-time memory in neural networks with inhibitory hubs and synaptic plasticity
NASA Astrophysics Data System (ADS)
Bertolotti, Elena; Burioni, Raffaella; di Volo, Matteo; Vezzani, Alessandro
2017-01-01
We investigate the dynamical role of inhibitory and highly connected nodes (hub) in synchronization and input processing of leaky-integrate-and-fire neural networks with short term synaptic plasticity. We take advantage of a heterogeneous mean-field approximation to encode the role of network structure and we tune the fraction of inhibitory neurons fI and their connectivity level to investigate the cooperation between hub features and inhibition. We show that, depending on fI, highly connected inhibitory nodes strongly drive the synchronization properties of the overall network through dynamical transitions from synchronous to asynchronous regimes. Furthermore, a metastable regime with long memory of external inputs emerges for a specific fraction of hub inhibitory neurons, underlining the role of inhibition and connectivity also for input processing in neural networks.
Neural network-based model reference adaptive control system.
Patino, H D; Liu, D
2000-01-01
In this paper, an approach to model reference adaptive control based on neural networks is proposed and analyzed for a class of first-order continuous-time nonlinear dynamical systems. The controller structure can employ either a radial basis function network or a feedforward neural network to adaptively compensate for the nonlinearities in the plant. A stable controller-parameter adjustment mechanism, determined using Lyapunov theory, is constructed using a sigma-modification-type updating law. The control error is evaluated in terms of the neural network learning error: the control error converges asymptotically to a neighborhood of zero, whose size depends on the approximation error of the neural network. In the design and analysis of neural network-based control systems, it is important to take into account the neural network learning error and its influence on the control error of the plant. Simulation results showing the feasibility and performance of the proposed approach are given.
Hybrid discrete-time neural networks.
Cao, Hongjun; Ibarz, Borja
2010-11-13
Hybrid dynamical systems combine evolution equations with state transitions. When the evolution equations are discrete-time (also called map-based), the result is a hybrid discrete-time system. A class of biological neural network models that has recently received some attention falls within this category: map-based neuron models connected by means of fast threshold modulation (FTM). FTM is a connection scheme that aims to mimic the switching dynamics of a neuron subject to synaptic inputs. The dynamic equations of the neuron adopt different forms according to the state (either firing or not firing) and type (excitatory or inhibitory) of their presynaptic neighbours. Therefore, the mathematical model of one such network is a combination of discrete-time evolution equations with transitions between states, constituting a hybrid discrete-time (map-based) neural network. In this paper, we review previous work within the context of these models, exemplifying useful techniques to analyse them. Typical map-based neuron models are low-dimensional and amenable to phase-plane analysis. In bursting models, fast-slow decomposition can be used to reduce dimensionality further, so that the dynamics of a pair of connected neurons can be easily understood. We also discuss a model that includes electrical synapses in addition to chemical synapses with FTM. Furthermore, we describe how master stability functions can predict the stability of synchronized states in these networks. The main results are extended to larger map-based neural networks.
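As a concrete sketch (assumed, since the review covers several neuron models), the loop below couples two chaotic Rulkov map neurons through an FTM-style synapse: the synaptic term switches on only while the presynaptic fast variable is above threshold, so the form of the map changes with the presynaptic state, as described above. Parameter values are illustrative.

    import numpy as np

    alpha, mu, sigma = 4.3, 0.001, -1.2   # Rulkov map in a bursting regime (illustrative)
    g, theta, v_rev = 0.1, -1.0, 0.0      # synaptic gain, FTM threshold, reversal level

    x = np.array([-1.5, -1.0])            # fast variables of the two neurons
    y = np.array([-2.9, -2.9])            # slow variables
    for n in range(5000):
        fired = x > theta                        # FTM: presynaptic state gates the synapse
        syn = g * fired[::-1] * (v_rev - x)      # each neuron is driven by the *other* one
        # synchronous update of the hybrid discrete-time (map-based) system
        x, y = alpha / (1.0 + x * x) + y + syn, y - mu * (x - sigma)
    print(x, y)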
Diminished neural network dynamics after moderate and severe traumatic brain injury.
Gilbert, Nicholas; Bernier, Rachel A; Calhoun, Vincent D; Brenner, Einat; Grossner, Emily; Rajtmajer, Sarah M; Hillary, Frank G
2018-01-01
Over the past decade there has been increasing enthusiasm in the cognitive neurosciences around using network science to understand the system-level changes associated with brain disorders. A growing literature has used whole-brain fMRI analysis to examine changes in the brain's subnetworks following traumatic brain injury (TBI). Much of the network modeling in this literature has focused on static network mapping, which provides a window into gross inter-nodal relationships but is insensitive to more subtle fluctuations in network dynamics, which may be an important predictor of neural network plasticity. In this study, we examine dynamic connectivity with a focus on state-level connectivity ("states") and evaluate the reliability of dynamic network states over the course of two runs of intermittent task and resting data. The goal was to examine the dynamic properties of neural networks engaged periodically with task stimulation in order to determine: 1) the reliability of inter-nodal and network-level characteristics over time and 2) the transitions between distinct network states after traumatic brain injury. To do so, we enrolled 23 individuals with moderate and severe TBI at least 1 year post injury and 19 age- and education-matched healthy adults, using functional MRI methods, dynamic connectivity modeling, and graph theory. The results reveal several distinct network "states" that were reliably evident when comparing runs; the overall frequency of dynamic network states is highly reproducible (r-values > 0.8) for both samples. Analysis of movement between states revealed fewer state transitions in the TBI sample and, in a few cases, brain injury resulted in the appearance of states not exhibited by the healthy control (HC) sample. Overall, the findings presented here demonstrate the reliability of observable dynamic mental states during periods of on-task performance and support emerging evidence that brain injury may result in diminished network dynamics.
Financial time series prediction using spiking neural networks.
Reid, David; Hussain, Abir Jaafar; Tawfik, Hissam
2014-01-01
In this paper a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, was used for financial time series prediction. It is argued that the inherent temporal capabilities of this type of network are suited to non-stationary data such as this. The performance of the spiking neural network was benchmarked against three systems: two "traditional", rate-encoded, neural networks; a Multi-Layer Perceptron neural network and a Dynamic Ridge Polynomial neural network, and a standard Linear Predictor Coefficients model. For this comparison three non-stationary and noisy time series were used: IBM stock data; US/Euro exchange rate data, and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of Annualised Return and prediction error for 5-Step ahead predictions. These results were also supported by other relevant metrics such as Maximum Drawdown and Signal-To-Noise ratio. This work demonstrated the applicability of the Polychronous Spiking Network to financial data forecasting and this in turn indicates the potential of using such networks over traditional systems in difficult to manage non-stationary environments.
Field-theoretic approach to fluctuation effects in neural networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buice, Michael A.; Cowan, Jack D.
A well-defined stochastic theory for neural activity, which permits the calculation of arbitrary statistical moments and equations governing them, is a potentially valuable tool for theoretical neuroscience. We produce such a theory by analyzing the dynamics of neural activity using field-theoretic methods for nonequilibrium statistical processes. Assuming that neural network activity is Markovian, we construct the effective spike model, which describes both neural fluctuations and response. This analysis leads to a systematic expansion of corrections to mean field theory, which for the effective spike model is a simple version of the Wilson-Cowan equation. We argue that neural activity governed by this model exhibits a dynamical phase transition which is in the universality class of directed percolation. More general models (which may incorporate refractoriness) can exhibit other universality classes, such as dynamic isotropic percolation. Because of the extremely high connectivity in typical networks, it is expected that higher-order terms in the systematic expansion are small for experimentally accessible measurements, and thus, consistent with measurements in neocortical slice preparations, we expect mean field exponents for the transition. We provide a quantitative criterion for the relative magnitude of each term in the systematic expansion, analogous to the Ginzburg criterion. Experimental identification of dynamic universality classes in vivo is an outstanding and important question for neuroscience.
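At lowest order the expansion reduces to a Wilson-Cowan-type mean-field rate equation; the sketch below integrates that limiting case for one excitatory and one inhibitory population. The weights, gains, and inputs are illustrative assumptions, not values from the paper.

    import numpy as np

    def f(x):                                       # sigmoidal gain function
        return 1.0 / (1.0 + np.exp(-x))

    wEE, wEI, wIE, wII = 12.0, 10.0, 12.0, 2.0      # illustrative couplings
    hE, hI, tau, dt = -3.0, -6.0, 10.0, 0.1         # inputs, time constant, step
    E, I = 0.1, 0.1                                 # population activities
    for _ in range(5000):
        E, I = (E + dt * (-E + f(wEE * E - wEI * I + hE)) / tau,
                I + dt * (-I + f(wIE * E - wII * I + hI)) / tau)
    print(E, I)                                     # mean-field steady state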
Fixed-time stabilization of impulsive Cohen-Grossberg BAM neural networks.
Li, Hongfei; Li, Chuandong; Huang, Tingwen; Zhang, Wanli
2018-02-01
This article is concerned with the fixed-time stabilization for impulsive Cohen-Grossberg BAM neural networks via two different controllers. By using a novel constructive approach based on some comparison techniques for differential inequalities, an improved theorem of fixed-time stability for impulsive dynamical systems is established. In addition, based on the fixed-time stability theorem of impulsive dynamical systems, two different control protocols are designed to ensure the fixed-time stabilization of impulsive Cohen-Grossberg BAM neural networks, which include and extend the earlier works. Finally, two simulation examples are provided to illustrate the validity of the proposed theoretical results. Copyright © 2017 Elsevier Ltd. All rights reserved.
Xia, Kewei; Huo, Wei
2016-05-01
This paper presents a robust adaptive neural network control strategy for spacecraft rendezvous and docking with coupled position and attitude dynamics under input saturation. The backstepping technique is applied to design a relative attitude controller and a relative position controller, respectively. The dynamics uncertainties are approximated by radial basis function neural networks (RBFNNs). A novel switching controller is designed, consisting of an adaptive neural network controller that dominates in its active region combined with an extra robust controller that preserves stability of the system outside the neural active region, where the RBFNN approximation is no longer valid. An auxiliary signal is introduced to compensate for the input saturation with an anti-windup technique, and a command filter is employed to approximate the derivative of the virtual control in the backstepping procedure. Global uniform ultimate boundedness of the relative states is proved via Lyapunov theory. A simulation example demonstrates the effectiveness of the proposed control scheme. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Robustness analysis of uncertain dynamical neural networks with multiple time delays.
Senan, Sibel
2015-10-01
This paper studies the problem of global robust asymptotic stability of the equilibrium point for the class of dynamical neural networks with multiple time delays, with respect to the class of slope-bounded activation functions and in the presence of uncertainties in the system parameters of the considered neural network model. By using an appropriate Lyapunov functional and exploiting the properties of the homeomorphism mapping theorem, we derive a new sufficient condition for the existence, uniqueness and global robust asymptotic stability of the equilibrium point for the class of neural networks with multiple time delays. The obtained stability condition basically relies on testing some relationships imposed on the interconnection matrices of the neural system, which can be easily verified by using certain properties of matrices. An instructive numerical example is also given to illustrate the applicability of our result and show the advantages of this new condition over the previously reported corresponding results. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Wan, Tat C.; Kabuka, Mansur R.
1994-05-01
With the tremendous growth in imaging applications and the development of filmless radiology, compression techniques that can achieve high compression ratios with user-specified distortion rates become necessary. Boundaries and edges in tissue structures are vital for the detection of lesions and tumors, which in turn requires the preservation of edges in the image. The proposed edge preserving image compressor (EPIC) combines lossless compression of edges with neural network compression techniques based on dynamic associative neural networks (DANN) to provide high compression ratios with user-specified distortion rates in an adaptive compression system well-suited to parallel implementations. Improvements to DANN-based training through the use of a variance classifier for controlling a bank of neural networks speed convergence and allow the use of higher compression ratios for `simple' patterns. The adaptation and generalization capabilities inherent in EPIC also facilitate progressive transmission of images by varying the number of quantization levels used to represent compressed patterns. Average compression ratios of 7.51:1 with an average mean squared error of 0.0147 were achieved.
Predicting Physical Time Series Using Dynamic Ridge Polynomial Neural Networks
Al-Jumeily, Dhiya; Ghazali, Rozaida; Hussain, Abir
2014-01-01
Forecasting naturally occurring phenomena is a common problem in many domains of science, and this has been addressed and investigated by many scientists. The importance of time series prediction stems from the fact that it has a wide range of applications, including control systems, engineering processes, environmental systems and economics. From the knowledge of some aspects of the previous behaviour of the system, the aim of the prediction process is to determine or predict its future behaviour. In this paper, we consider a novel application of a higher order polynomial neural network architecture called the Dynamic Ridge Polynomial Neural Network, which combines the properties of higher order and recurrent neural networks, for the prediction of physical time series. In this study, four types of signals have been used: the Lorenz attractor, the mean value of the AE index, the sunspot number, and heat wave temperature. The simulation results showed good improvements in terms of the signal-to-noise ratio in comparison to a number of benchmarked higher order and feedforward neural networks. PMID:25157950
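The network's building block is the pi-sigma unit: the k-th order unit multiplies k linear ("ridge") functions of the input, and the dynamic variant feeds the previous output back as an extra input. A minimal forward-pass sketch with illustrative sizes and random weights:

    import numpy as np

    rng = np.random.default_rng(2)

    def pi_sigma(z, W, b):
        # W: (k, dim), b: (k,) -> product of k ridge (linear) functions of z
        return np.prod(W @ z + b)

    def drpnn_step(x_t, y_prev, blocks):
        z = np.append(x_t, y_prev)        # recurrent feedback of the last output
        return np.tanh(sum(pi_sigma(z, W, b) for W, b in blocks))

    dim, order = 4, 3                     # input size, highest polynomial order
    blocks = [(0.1 * rng.normal(size=(k, dim + 1)), 0.1 * rng.normal(size=k))
              for k in range(1, order + 1)]
    y = 0.0
    for t in range(10):                   # run on a toy input sequence
        y = drpnn_step(rng.normal(size=dim), y, blocks)
    print(y)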
Feature to prototype transition in neural networks
NASA Astrophysics Data System (ADS)
Krotov, Dmitry; Hopfield, John
Models of associative memory with higher order (higher than quadratic) interactions, and their relationship to neural networks used in deep learning are discussed. Associative memory is conventionally described by recurrent neural networks with dynamical convergence to stable points. Deep learning typically uses feedforward neural nets without dynamics. However, a simple duality relates these two different views when applied to problems of pattern classification. From the perspective of associative memory such models deserve attention because they make it possible to store a much larger number of memories, compared to the quadratic case. In the dual description, these models correspond to feedforward neural networks with one hidden layer and unusual activation functions transmitting the activities of the visible neurons to the hidden layer. These activation functions are rectified polynomials of a higher degree rather than the rectified linear functions used in deep learning. The network learns representations of the data in terms of features for rectified linear functions, but as the power in the activation function is increased there is a gradual shift to a prototype-based representation, the two extreme regimes of pattern recognition known in cognitive psychology.
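The duality rests on replacing the quadratic energy with a rectified polynomial F(x) = max(x, 0)^n; asynchronous updates that descend this energy implement the associative-memory view. A small sketch (sizes and the corruption level are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(3)
    N, P, n = 64, 20, 3                        # neurons, stored patterns, power
    xi = rng.choice([-1.0, 1.0], size=(P, N))  # random patterns to store

    def energy(sigma):
        return -np.sum(np.maximum(xi @ sigma, 0.0) ** n)  # rectified polynomial F

    sigma = xi[0].copy()
    sigma[rng.choice(N, 12, replace=False)] *= -1.0       # corrupt 12 of 64 bits
    for sweep in range(5):
        for i in rng.permutation(N):           # asynchronous single-unit updates
            sigma[i] = 1.0
            e_plus = energy(sigma)
            sigma[i] = -1.0
            sigma[i] = 1.0 if e_plus < energy(sigma) else -1.0
    print(int(sigma @ xi[0]))                  # overlap; 64 = pattern recovered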
Role of local network oscillations in resting-state functional connectivity.
Cabral, Joana; Hugues, Etienne; Sporns, Olaf; Deco, Gustavo
2011-07-01
Spatio-temporally organized low-frequency fluctuations (<0.1 Hz), observed in BOLD fMRI signal during rest, suggest the existence of underlying network dynamics that emerge spontaneously from intrinsic brain processes. Furthermore, significant correlations between distinct anatomical regions-or functional connectivity (FC)-have led to the identification of several widely distributed resting-state networks (RSNs). This slow dynamics seems to be highly structured by anatomical connectivity but the mechanism behind it and its relationship with neural activity, particularly in the gamma frequency range, remains largely unknown. Indeed, direct measurements of neuronal activity have revealed similar large-scale correlations, particularly in slow power fluctuations of local field potential gamma frequency range oscillations. To address these questions, we investigated neural dynamics in a large-scale model of the human brain's neural activity. A key ingredient of the model was a structural brain network defined by empirically derived long-range brain connectivity together with the corresponding conduction delays. A neural population, assumed to spontaneously oscillate in the gamma frequency range, was placed at each network node. When these oscillatory units are integrated in the network, they behave as weakly coupled oscillators. The time-delayed interaction between nodes is described by the Kuramoto model of phase oscillators, a biologically-based model of coupled oscillatory systems. For a realistic setting of axonal conduction speed, we show that time-delayed network interaction leads to the emergence of slow neural activity fluctuations, whose patterns correlate significantly with the empirically measured FC. The best agreement of the simulated FC with the empirically measured FC is found for a set of parameters where subsets of nodes tend to synchronize although the network is not globally synchronized. Inside such clusters, the simulated BOLD signal between nodes is found to be correlated, instantiating the empirically observed RSNs. Between clusters, patterns of positive and negative correlations are observed, as described in experimental studies. These results are found to be robust with respect to a biologically plausible range of model parameters. In conclusion, our model suggests how resting-state neural activity can originate from the interplay between the local neural dynamics and the large-scale structure of the brain. Copyright © 2011 Elsevier Inc. All rights reserved.
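A toy version of the network model can be sketched with delayed Kuramoto phase oscillators on a random graph; the study itself uses empirically derived connectivity and per-connection delays, so everything below is an illustrative assumption.

    import numpy as np

    rng = np.random.default_rng(4)
    N, K, dt, delay = 32, 5.0, 1e-4, 100          # 100 steps x 0.1 ms = 10 ms delay
    omega = 2 * np.pi * rng.normal(40.0, 2.0, N)  # ~40 Hz "gamma" oscillators
    A = (rng.random((N, N)) < 0.2).astype(float)  # toy structural network
    np.fill_diagonal(A, 0.0)

    T = 20000
    theta = np.zeros((T, N))
    theta[: delay + 1] = rng.uniform(0.0, 2 * np.pi, N)
    for t in range(delay, T - 1):
        past = theta[t - delay]                   # neighbours seen through the delay
        coupling = (A * np.sin(past[None, :] - theta[t][:, None])).sum(axis=1)
        theta[t + 1] = theta[t] + dt * (omega + (K / N) * coupling)

    r = np.abs(np.exp(1j * theta[T // 2 :]).mean(axis=1))  # Kuramoto order parameter
    print(r.mean())                               # 0 = incoherent, 1 = fully synchronized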
Dynamical principles in neuroscience
NASA Astrophysics Data System (ADS)
Rabinovich, Mikhail I.; Varona, Pablo; Selverston, Allen I.; Abarbanel, Henry D. I.
2006-10-01
Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of addressing the stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience?
Dynamic gesture recognition using neural networks: a fundament for advanced interaction construction
NASA Astrophysics Data System (ADS)
Boehm, Klaus; Broll, Wolfgang; Sokolewicz, Michael A.
1994-04-01
Interaction in virtual reality environments is still a challenging task. Static hand posture recognition is currently the most common and widely used method for interaction using glove input devices. In order to improve the naturalness of interaction, and thereby decrease the user-interface learning time, there is a need to be able to recognize dynamic gestures. In this paper, we describe our approach to overcoming the difficulties of dynamic gesture recognition (DGR) using neural networks. Backpropagation neural networks have already proven themselves to be appropriate and efficient for posture recognition. However, the extensive amount of data involved in DGR requires a different approach. Because of features such as topology preservation and automatic learning, Kohonen Feature Maps are particularly suitable for the reduction of the high-dimensional data space that results from a dynamic gesture, and are thus implemented for this task.
Biological modelling of a computational spiking neural network with neuronal avalanches.
Li, Xiumin; Chen, Qing; Xue, Fangzheng
2017-06-28
In recent years, an increasing number of studies have demonstrated that networks in the brain can self-organize into a critical state where dynamics exhibit a mixture of ordered and disordered patterns. This critical branching phenomenon is termed neuronal avalanches. It has been hypothesized that the homeostatic level balanced between stability and plasticity of this critical state may be the optimal state for performing diverse neural computational tasks. However, the critical region for high performance is narrow and sensitive for spiking neural networks (SNNs). In this paper, we investigated the role of the critical state in neural computations based on liquid-state machines, a biologically plausible computational neural network model for real-time computing. The computational performance of an SNN when operating at the critical state and, in particular, with spike-timing-dependent plasticity for updating synaptic weights is investigated. The network is found to show the best computational performance when it is subjected to critical dynamic states. Moreover, the active-neuron-dominant structure refined from synaptic learning can remarkably enhance the robustness of the critical state and further improve computational accuracy. These results may have important implications in the modelling of spiking neural networks with optimal computational performance.This article is part of the themed issue 'Mathematical methods in medicine: neuroscience, cardiology and pathology'. © 2017 The Author(s).
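The critical branching idea itself can be illustrated with a toy Galton-Watson process (not the paper's liquid-state-machine setup): each active unit activates each of its k targets with probability p, and the branching ratio k*p = 1 marks the critical point where avalanche sizes become approximately power-law distributed.

    import numpy as np

    rng = np.random.default_rng(5)

    def avalanche_size(k=10, p=0.1, max_steps=10_000):
        active, size = 1, 1
        for _ in range(max_steps):
            active = rng.binomial(active * k, p)   # offspring of all active units
            if active == 0:
                break
            size += active
        return size

    sizes = np.array([avalanche_size() for _ in range(20_000)])  # k*p = 1: critical
    for s in (1, 10, 100):
        print(s, np.mean(sizes >= s))   # heavy tail: P(size >= s) decays slowly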
The DSFPN, a new neural network for optical character recognition.
Morns, L P; Dlay, S S
1999-01-01
A new type of neural network for recognition tasks is presented in this paper. The network, called the dynamic supervised forward-propagation network (DSFPN), is based on the forward-only version of the counterpropagation network (CPN). The DSFPN trains using a supervised algorithm and can grow dynamically during training, allowing subclasses in the training data to be learnt in an unsupervised manner. It is shown to train in times comparable to the CPN while giving better classification accuracies than the popular backpropagation network. Both Fourier descriptors and wavelet descriptors are used for image preprocessing, and the wavelets are shown to give far better performance.
Kumar, Rajesh; Srivastava, Smriti; Gupta, J R P
2017-03-01
In this paper, adaptive control of nonlinear dynamical systems using a diagonal recurrent neural network (DRNN) is proposed. The structure of the DRNN is a modification of the fully connected recurrent neural network (FCRNN). The presence of self-recurrent neurons in the hidden layer of the DRNN gives it the ability to capture the dynamic behaviour of the nonlinear plant under consideration (to be controlled). To ensure stability, update rules are developed using the Lyapunov stability criterion. These rules are then used for adjusting the various parameters of the DRNN. The responses of plants obtained with the DRNN are compared with those obtained when a multi-layer feedforward neural network (MLFFNN) is used as a controller. In example 4, an FCRNN is also investigated and compared with the DRNN and MLFFNN. Robustness of the proposed control scheme is also tested against parameter variations and disturbance signals. Four simulation examples, including a one-link robotic manipulator and an inverted pendulum, are considered, on which the proposed controller is applied. The results so obtained show the superiority of the DRNN over the MLFFNN as a controller. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
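The defining feature of the DRNN is that each hidden neuron feeds back only onto itself, so the recurrent weight matrix is diagonal. A minimal forward pass is sketched below; sizes and weights are illustrative, and the paper additionally derives Lyapunov-based update rules for training.

    import numpy as np

    rng = np.random.default_rng(6)
    n_in, n_hid = 2, 8
    W_in = 0.5 * rng.normal(size=(n_hid, n_in))   # input weights
    w_d = 0.5 * rng.uniform(size=n_hid)           # diagonal self-recurrent weights
    w_out = 0.5 * rng.normal(size=n_hid)          # output weights

    def drnn_step(x, h):
        h = np.tanh(W_in @ x + w_d * h)           # element-wise self-feedback only
        return w_out @ h, h

    h = np.zeros(n_hid)
    for t in range(20):                           # toy input sequence
        y, h = drnn_step(np.array([np.sin(0.1 * t), 1.0]), h)
    print(y)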
NASA Technical Reports Server (NTRS)
Rajkumar, T.; Bardina, Jorge; Clancy, Daniel (Technical Monitor)
2002-01-01
Wind tunnels use scale models to characterize aerodynamic coefficients. Wind tunnel testing can be slow and costly due to high personnel overhead and intensive power utilization. Although manual curve fitting can be done, it is highly efficient to use a neural network to define the complex relationship between variables. Numerical simulation of complex vehicles on the wide range of conditions required for flight simulation requires static and dynamic data. Static data at low Mach numbers and angles of attack may be obtained with simpler Euler codes. Static data of stalled vehicles, where zones of flow separation are usually present at higher angles of attack, require Navier-Stokes simulations, which are costly due to the large processing time required to attain convergence. Preliminary dynamic data may be obtained with simpler methods based on correlations and vortex methods; however, accurate prediction of the dynamic coefficients requires complex and costly numerical simulations. A reliable and fast method of predicting complex aerodynamic coefficients for flight simulation is presented using a neural network. The training data for the neural network are derived from numerical simulations and wind-tunnel experiments. The aerodynamic coefficients are modeled as functions of the flow characteristics and the control surfaces of the vehicle. The basic coefficients of lift, drag and pitching moment are expressed as functions of angle of attack and Mach number. The modeled and training aerodynamic coefficients show good agreement. This method shows excellent potential for rapid development of aerodynamic models for flight simulation. Genetic Algorithms (GA) are used to optimize a previously built Artificial Neural Network (ANN) that reliably predicts aerodynamic coefficients. Results indicate that the GA provided an efficient method of optimizing the ANN model to predict aerodynamic coefficients. The reliability of the ANN using the GA includes prediction of aerodynamic coefficients to an accuracy of 110%. In our problem, we would like to obtain an optimized neural network architecture and a minimum data set. This has been accomplished within 500 training cycles of the neural network. After removing training pairs (outliers), the GA produced much better results. The neural network constructed is a feedforward neural network with a backpropagation learning mechanism. The main goal has been to free the network design process from constraints of human biases, and to discover better forms of neural network architectures. The automation of the network architecture search by genetic algorithms seems to have been the best way to achieve this goal.
Financial Time Series Prediction Using Elman Recurrent Random Neural Networks
Wang, Jie; Wang, Jun; Fang, Wen; Niu, Hongli
2016-01-01
In recent years, financial market dynamics forecasting has been a focus of economic research. To predict the price indices of stock markets, we developed an architecture which combines Elman recurrent neural networks with a stochastic time effective function. Analyzing the proposed model with linear regression, complexity invariant distance (CID), and multiscale CID (MCID) analysis methods, and comparing it with different models such as the backpropagation neural network (BPNN), the stochastic time effective neural network (STNN), and the Elman recurrent neural network (ERNN), the empirical results show that the proposed neural network displays the best performance among these neural networks in financial time series forecasting. Further, the empirical research tests the predictive effects of SSE, TWSE, KOSPI, and Nikkei225 with the established model, and the corresponding statistical comparisons of the above market indices are also exhibited. The experimental results show that this approach gives good performance in predicting the values of the stock market indices. PMID:27293423
Linear and nonlinear ARMA model parameter estimation using an artificial neural network
NASA Technical Reports Server (NTRS)
Chon, K. H.; Cohen, R. J.
1997-01-01
This paper addresses parametric system identification of linear and nonlinear dynamic systems by analysis of the input and output signals. Specifically, we investigate the relationship between estimation of the system using a feedforward neural network model and estimation of the system by use of linear and nonlinear autoregressive moving-average (ARMA) models. By utilizing a neural network model incorporating a polynomial activation function, we show the equivalence of the artificial neural network to the linear and nonlinear ARMA models. We compare the parameterization of the estimated system using the neural network and ARMA approaches by utilizing data generated by means of computer simulations. Specifically, we show that the parameters of a simulated ARMA system can be obtained from the neural network analysis of the simulated data or by conventional least squares ARMA analysis. The feasibility of applying neural networks with polynomial activation functions to the analysis of experimental data is explored by application to measurements of heart rate (HR) and instantaneous lung volume (ILV) fluctuations.
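The conventional least-squares side of that comparison fits in a few lines; the model orders and coefficients below are illustrative assumptions (the nonlinear case adds polynomial cross-terms of the same lagged regressors, mirroring the polynomial activation function):

    import numpy as np

    rng = np.random.default_rng(7)
    a1, a2, b1 = 0.6, -0.2, 0.8                   # "true" system to identify
    x = rng.normal(size=2000)                     # input signal
    y = np.zeros(2000)
    for t in range(2, 2000):
        y[t] = a1 * y[t - 1] + a2 * y[t - 2] + b1 * x[t - 1] + 0.05 * rng.normal()

    # regressors: y(t-1), y(t-2), x(t-1) for each target y(t)
    Phi = np.column_stack([y[1:-1], y[:-2], x[1:-1]])
    theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
    print(theta)                                  # close to [0.6, -0.2, 0.8]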
Neural network regulation driven by autonomous neural firings
NASA Astrophysics Data System (ADS)
Cho, Myoung Won
2016-07-01
Biological neurons naturally fire spontaneously due to the existence of a noisy current. Such autonomous firings may provide a driving force for network formation because synaptic connections can be modified by neural firings. Here, we study the effect of autonomous firings on network formation. Under temporally asymmetric Hebbian learning, bidirectional connections easily lose their balance and become unidirectional. Defining the difference between reciprocal connections as new variables, we can express the learning dynamics as if the new variables were Ising model spins interacting with each other, as in magnetism. We present a theoretical method to estimate the interaction between the new variables in a neural system. We apply the method to some network systems and find some tendencies of autonomous neural network regulation.
A new delay-independent condition for global robust stability of neural networks with time delays.
Samli, Ruya
2015-06-01
This paper studies the problem of robust stability of dynamical neural networks with discrete time delays under the assumptions that the network parameters of the neural system are uncertain and norm-bounded, and the activation functions are slope-bounded. By employing the results of Lyapunov stability theory and matrix theory, new sufficient conditions for the existence, uniqueness and global asymptotic stability of the equilibrium point for delayed neural networks are presented. The results reported in this paper can be easily tested by checking some special properties of symmetric matrices associated with the parameter uncertainties of neural networks. We also present a numerical example to show the effectiveness of the proposed theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.
Method and system for determining induction motor speed
Parlos, Alexander G.; Bharadwaj, Raj M.
2004-03-30
A non-linear, semi-parametric neural network-based adaptive filter that determines the dynamic speed of a rotating rotor within an induction motor, without the explicit use of a speed sensor such as a tachometer, is disclosed. The neural network-based filter is developed using actual motor current measurements, voltage measurements, and nameplate information. The neural network-based adaptive filter is trained using an estimated speed calculator derived from the actual current and voltage measurements. The neural network-based adaptive filter uses voltage and current measurements to determine the instantaneous speed of a rotating rotor. The neural network-based adaptive filter also includes an on-line adaptation scheme that permits the filter to be readily adapted to new operating conditions during operation.
Proceedings of the Second Joint Technology Workshop on Neural Networks and Fuzzy Logic, volume 1
NASA Technical Reports Server (NTRS)
Lea, Robert N. (Editor); Villarreal, James (Editor)
1991-01-01
Documented here are papers presented at the Neural Networks and Fuzzy Logic Workshop sponsored by NASA and the University of Houston, Clear Lake. The workshop was held April 11 to 13 at the Johnson Space Center. Technical topics addressed included adaptive systems, learning algorithms, network architectures, vision, robotics, neurobiological connections, speech recognition and synthesis, fuzzy set theory and application, control and dynamics processing, space applications, fuzzy logic and neural network computers, approximate reasoning, and multiobject decision making.
NASA Astrophysics Data System (ADS)
Ali, M. Syed; Zhu, Quanxin; Pavithra, S.; Gunasekaran, N.
2018-03-01
This study examines the problem of dissipative synchronisation of coupled reaction-diffusion neural networks with time-varying delays. The paper considers a complex dynamical network consisting of N linearly and diffusively coupled identical reaction-diffusion neural networks. By constructing a suitable Lyapunov-Krasovskii functional (LKF) and utilising Jensen's inequality and the reciprocally convex combination (RCC) approach, strict dissipativity conditions for the addressed systems are derived. Finally, a numerical example is given to show the effectiveness of the theoretical results.
Noel, Jean-Paul; Blanke, Olaf; Magosso, Elisa; Serino, Andrea
2018-06-01
Interactions between the body and the environment occur within the peripersonal space (PPS), the space immediately surrounding the body. The PPS is encoded by multisensory (audio-tactile, visual-tactile) neurons that possess receptive fields (RFs) anchored on the body and restricted in depth. The extension in depth of PPS neurons' RFs has been documented to change dynamically as a function of the velocity of incoming stimuli, but the underlying neural mechanisms are still unknown. Here, by integrating a psychophysical approach with neural network modeling, we propose a mechanistic explanation behind this inherent dynamic property of PPS. We psychophysically mapped the size of participants' peri-face and peri-trunk space as a function of the velocity of task-irrelevant approaching auditory stimuli. Findings indicated that the peri-trunk space was larger than the peri-face space and, importantly, as for the neurophysiological delineation of RFs, both of these representations enlarged as the velocity of the incoming sound increased. We propose a neural network model to mechanistically interpret these findings: the network includes reciprocal connections between unisensory areas and higher order multisensory neurons, and it implements neural adaptation to persistent stimulation as a mechanism sensitive to stimulus velocity. The network was capable of replicating the behavioral observations of PPS size remapping and relates behavioral proxies of PPS size to neurophysiological measures of multisensory neurons' RF size. We propose that a biologically plausible neural adaptation mechanism embedded within the network encoding for PPS can be responsible for the dynamic alterations in PPS size as a function of the velocity of incoming stimuli. NEW & NOTEWORTHY Interactions between body and environment occur within the peripersonal space (PPS). PPS neurons are highly dynamic, adapting online as a function of body-object interactions. The mechanisms underpinning PPS dynamic properties are unexplained. We demonstrate with a psychophysical approach that PPS enlarges as incoming stimulus velocity increases, efficiently preventing contact with faster approaching objects. We present a neurocomputational model of multisensory PPS implementing neural adaptation to persistent stimulation to propose a neurophysiological mechanism underlying this effect.
Effect of dilution in asymmetric recurrent neural networks.
Folli, Viola; Gosti, Giorgio; Leonetti, Marco; Ruocco, Giancarlo
2018-04-16
We study with numerical simulation the possible limit behaviors of synchronous discrete-time deterministic recurrent neural networks composed of N binary neurons as a function of a network's level of dilution and asymmetry. The network dilution measures the fraction of neuron couples that are connected, and the network asymmetry measures to what extent the underlying connectivity matrix is asymmetric. For each given neural network, we study the dynamical evolution of all the different initial conditions, thus characterizing the full dynamical landscape without imposing any learning rule. Because of the deterministic dynamics, each trajectory converges to an attractor, which can be either a fixed point or a limit cycle. These attractors form the set of all the possible limit behaviors of the neural network. For each network we then determine the convergence times, the limit cycles' length, the number of attractors, and the sizes of the attractors' basins. We show that there are two network structures that maximize the number of possible limit behaviors. The first optimal network structure is fully-connected and symmetric. On the contrary, the second optimal network structure is highly sparse and asymmetric. The latter optimum is similar to what is observed in different biological neuronal circuits. These observations lead us to hypothesize that, independently from any given learning model, an efficient and effective biological network that stores a number of limit behaviors close to its maximum capacity tends to develop a connectivity structure similar to one of the optimal networks we found. Copyright © 2018 The Author(s). Published by Elsevier Ltd. All rights reserved.
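For a network small enough to enumerate, the paper's procedure can be reproduced directly: run the synchronous deterministic update s(t+1) = sign(J s(t)) from every initial condition and collect the distinct limit sets. The dense asymmetric couplings below are an illustrative assumption; dilution and asymmetry would be introduced by masking J and mixing it with its transpose.

    import numpy as np
    from itertools import product

    rng = np.random.default_rng(8)
    N = 10
    J = rng.normal(size=(N, N))                   # dense, asymmetric couplings

    def update(s):
        return np.where(J @ s >= 0.0, 1, -1)      # synchronous deterministic rule

    attractors = set()
    for bits in product([-1, 1], repeat=N):       # all 2**N initial conditions
        s, seen = np.array(bits), {}
        while tuple(s) not in seen:
            seen[tuple(s)] = len(seen)
            s = update(s)
        start = seen[tuple(s)]                    # first revisited state opens the cycle
        cycle = tuple(sorted(k for k, v in seen.items() if v >= start))
        attractors.add(cycle)                     # canonical form of the attractor
    print(len(attractors), "distinct attractors (fixed points or limit cycles)")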
Foley, Elaine; Rippon, Gina; Thai, Ngoc Jade; Longe, Olivia; Senior, Carl
2012-02-01
Very little is known about the neural structures involved in the perception of realistic dynamic facial expressions. In the present study, a unique set of naturalistic dynamic facial emotional expressions was created. Through fMRI and connectivity analysis, a dynamic face perception network was identified, which is demonstrated to extend Haxby et al.'s [Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. The distributed human neural system for face perception. Trends in Cognitive Science, 4, 223-233, 2000] distributed neural system for face perception. This network includes early visual regions, such as the inferior occipital gyrus, which is identified as insensitive to motion or affect but sensitive to the visual stimulus, the STS, identified as specifically sensitive to motion, and the amygdala, recruited to process affect. Measures of effective connectivity between these regions revealed that dynamic facial stimuli were associated with specific increases in connectivity between early visual regions, such as the inferior occipital gyrus and the STS, along with coupling between the STS and the amygdala, as well as the inferior frontal gyrus. These findings support the presence of a distributed network of cortical regions that mediate the perception of different dynamic facial expressions.
Chaotic itinerancy in the oscillator neural network without Lyapunov functions.
Uchiyama, Satoki; Fujisaka, Hirokazu
2004-09-01
Chaotic itinerancy (CI), which is defined as an incessant spontaneous switching phenomenon among attractor ruins in deterministic dynamical systems without Lyapunov functions, is numerically studied in the case of an oscillator neural network model. The model is the pseudoinverse-matrix version of the previous model [S. Uchiyama and H. Fujisaka, Phys. Rev. E 65, 061912 (2002)] that was studied theoretically with the aid of statistical neurodynamics. It is found that CI in neural nets can be understood as the intermittent dynamics of weakly destabilized chaotic retrieval solutions. Copyright 2004 American Institute of Physics.
Spontaneous scale-free structure in adaptive networks with synchronously dynamical linking
NASA Astrophysics Data System (ADS)
Yuan, Wu-Jie; Zhou, Jian-Fang; Li, Qun; Chen, De-Bao; Wang, Zhen
2013-08-01
Inspired by the anti-Hebbian learning rule in neural systems, we study how the feedback from dynamical synchronization shapes network structure by adding new links. Through extensive numerical simulations, we find that an adaptive network spontaneously forms scale-free structure, as confirmed in many real systems. Moreover, the adaptive process produces two nontrivial power-law behaviors of deviation strength from mean activity of the network and negative degree correlation, which exists widely in technological and biological networks. Importantly, these scalings are robust to variation of the adaptive network parameters, which may have meaningful implications in the scale-free formation and manipulation of dynamical networks. Our study thus suggests an alternative adaptive mechanism for the formation of scale-free structure with negative degree correlation, which means that nodes of high degree tend to connect, on average, with others of low degree and vice versa. The relevance of the results to structure formation and dynamical property in neural networks is briefly discussed as well.
Dynamic Information Encoding With Dynamic Synapses in Neural Adaptation
Li, Luozheng; Mi, Yuanyuan; Zhang, Wenhao; Wang, Da-Hui; Wu, Si
2018-01-01
Adaptation refers to the general phenomenon that the neural system dynamically adjusts its response property according to the statistics of external inputs. In response to an invariant stimulation, neuronal firing rates first increase dramatically and then decrease gradually to a low level close to the background activity. This prompts a question: during the adaptation, how does the neural system encode the repeated stimulation with attenuated firing rates? It has been suggested that the neural system may employ a dynamical encoding strategy during the adaptation: the stimulus information is mainly encoded by the strong independent spiking of neurons at the early stage of the adaptation, while the weak but synchronized activity of neurons encodes the stimulus information at the later stage of the adaptation. A previous study demonstrated that short-term facilitation (STF) of electrical synapses, which increases the synchronization between neurons, can provide a mechanism to realize dynamical encoding. In the present study, we further explore whether short-term plasticity (STP) of chemical synapses, an interaction form more common than electrical synapses in the cortex, can support dynamical encoding. We build a large-size network with chemical synapses between neurons. Notably, facilitation of chemical synapses only enhances pair-wise correlations between neurons mildly, but its effect on increasing synchronization of the network can be significant, and hence it can serve as a mechanism to convey the stimulus information. To read out the stimulus information, we consider that a downstream neuron receives balanced excitatory and inhibitory inputs from the network, so that the downstream neuron only responds to synchronized firings of the network. Therefore, the response of the downstream neuron indicates the presence of the repeated stimulation. Overall, our study demonstrates that STP of chemical synapses can serve as a mechanism to realize dynamical neural encoding. We believe that our study sheds light on the mechanism underlying efficient neural information processing via adaptation. PMID:29636675
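A standard phenomenological description of such chemical-synapse STP is the Tsodyks-Markram model, used here as an assumed stand-in for the paper's synapse equations: a facilitation variable u grows with each spike while a resource variable x is depleted, and the released fraction u*x sets the postsynaptic response.

    import numpy as np

    def tm_synapse(spike_times, U=0.15, tau_f=0.6, tau_d=0.2):
        # tau_f > tau_d gives a facilitation-dominated (STF) synapse
        u, x, t_last, psc = U, 1.0, 0.0, []
        for t in spike_times:
            dt = t - t_last
            u = U + (u - U) * np.exp(-dt / tau_f)      # facilitation decays to U
            x = 1.0 + (x - 1.0) * np.exp(-dt / tau_d)  # resources recover to 1
            u = u + U * (1.0 - u)                      # spike arrives: facilitate
            psc.append(u * x)                          # released fraction ~ PSC size
            x = x * (1.0 - u)                          # spike arrives: deplete
            t_last = t
        return psc

    print(tm_synapse(np.arange(10) * 0.05))  # 20 Hz train: responses first grow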
A class of convergent neural network dynamics
NASA Astrophysics Data System (ADS)
Fiedler, Bernold; Gedeon, Tomáš
1998-01-01
We consider a class of systems of differential equations in R^n which exhibits convergent dynamics. We find a Lyapunov function and show that every bounded trajectory converges to the set of equilibria. Our result generalizes the results of Cohen and Grossberg (1983) for convergent neural networks. It replaces the symmetry assumption on the matrix of weights by an assumption on the structure of the connections in the neural network. We prove the convergence result also for a large class of Lotka-Volterra systems. These are naturally defined on the closed positive orthant. We show that there are no heteroclinic cycles on the boundary of the positive orthant for the systems in this class.
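The classical Hopfield special case of the Cohen-Grossberg result is easy to check numerically: with symmetric weights, the energy below is non-increasing along trajectories (up to integration error). A sketch with illustrative parameters; the paper's contribution is that symmetry can be replaced by a condition on the connection structure.

    import numpy as np

    rng = np.random.default_rng(9)
    N = 6
    W = rng.normal(size=(N, N)); W = (W + W.T) / 2.0   # symmetric weights
    np.fill_diagonal(W, 0.0)

    def energy(u):
        v = np.clip(np.tanh(u), -0.999999, 0.999999)
        # -1/2 v'Wv + sum_i of the integral of tanh^{-1} from 0 to v_i
        return (-0.5 * v @ W @ v
                + np.sum(v * np.arctanh(v) + 0.5 * np.log(1.0 - v ** 2)))

    u, dt, E = rng.normal(size=N), 0.005, []
    for _ in range(4000):
        u += dt * (-u + W @ np.tanh(u))                # du/dt = -u + W tanh(u)
        E.append(energy(u))
    print(E[0], E[-1], bool(np.all(np.diff(E) <= 1e-6)))  # energy descends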
Neural networks for tracking of unknown SISO discrete-time nonlinear dynamic systems.
Aftab, Muhammad Saleheen; Shafiq, Muhammad
2015-11-01
This article presents a Lyapunov function based neural network tracking (LNT) strategy for single-input, single-output (SISO) discrete-time nonlinear dynamic systems. The proposed LNT architecture is composed of two feedforward neural networks operating as controller and estimator. A Lyapunov function based back propagation learning algorithm is used for online adjustment of the controller and estimator parameters. The controller and estimator error convergence and closed-loop system stability analysis is performed by Lyapunov stability theory. Moreover, two simulation examples and one real-time experiment are investigated as case studies. The achieved results successfully validate the controller performance. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Cui, Yiqian; Shi, Junyou; Wang, Zili
2015-11-01
Quantum Neural Network (QNN) models have attracted great attention since they innovate a new neural computing manner based on quantum entanglement. However, the existing QNN models are mainly based on real quantum operations, and the potential of quantum entanglement is not fully exploited. In this paper, we propose a novel quantum neuron model called the Complex Quantum Neuron (CQN) that realizes a deep quantum entanglement. Also, a novel hybrid network model, Complex Rotation Quantum Dynamic Neural Networks (CRQDNN), is proposed based on the CQN. CRQDNN is a three-layer model with both CQNs and classical neurons. An infinite impulse response (IIR) filter is embedded in the network model to provide a memory function for processing time series inputs. The Levenberg-Marquardt (LM) algorithm is used for fast parameter learning. The network model is developed to conduct time series predictions. Two application studies are presented in this paper, including chaotic time series prediction and electronic remaining useful life (RUL) prediction. Copyright © 2015 Elsevier Ltd. All rights reserved.
Transversal homoclinic orbits in a transiently chaotic neural network.
Chen, Shyan-Shiou; Shih, Chih-Wen
2002-09-01
We study the existence of snap-back repellers, hence the existence of transversal homoclinic orbits in a discrete-time neural network. Chaotic behaviors for the network system in the sense of Li and Yorke or Marotto can then be concluded. The result is established by analyzing the structures of the system and allocating suitable parameters in constructing the fixed points and their pre-images for the system. The investigation provides a theoretical confirmation on the scenario of transient chaos for the system. All the parameter conditions for the theory can be examined numerically. The numerical ranges for the parameters which yield chaotic dynamics and convergent dynamics provide significant information in the annealing process in solving combinatorial optimization problems using this transiently chaotic neural network. (c) 2002 American Institute of Physics.
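The annealing mentioned at the end is usually implemented with a Chen-Aihara-type transiently chaotic neuron, sketched below under illustrative parameters (the paper analyzes this kind of system rather than prescribing these exact values): a self-feedback term z decays slowly, carrying the unit from a chaotic search phase into convergent dynamics.

    import numpy as np

    k, eps, I0 = 0.9, 0.004, 0.65          # damping, output gain, self-feedback bias
    alpha, beta = 0.015, 0.001             # input scaling, annealing rate of z
    y, z, xs = 0.283, 0.08, []
    for t in range(8000):
        x = 1.0 / (1.0 + np.exp(-y / eps))            # steep sigmoid output
        y = k * y + alpha * (0.5 - x) - z * (x - I0)  # toy energy-gradient input
        z = (1.0 - beta) * z                          # decaying self-feedback
        xs.append(x)
    print(xs[100], xs[-1])   # typically erratic early, settled once z has decayed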
Variable Neural Adaptive Robust Control: A Switched System Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lian, Jianming; Hu, Jianghai; Zak, Stanislaw H.
2015-05-01
Variable neural adaptive robust control strategies are proposed for the output tracking control of a class of multi-input multi-output uncertain systems. The controllers incorporate a variable-structure radial basis function (RBF) network as the self-organizing approximator for unknown system dynamics. The variable-structure RBF network solves the problem of structure determination associated with fixed-structure RBF networks. It can determine the network structure on-line dynamically by adding or removing radial basis functions according to the tracking performance. The structure variation is taken into account in the stability analysis of the closed-loop system using a switched system approach with the aid of the piecewise quadratic Lyapunov function. The performance of the proposed variable neural adaptive robust controllers is illustrated with simulations.
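The add-or-remove structure adaptation described above lends itself to a compact illustration. Below is a minimal sketch of a self-organizing RBF approximator; the growth and pruning thresholds, widths, and learning rate are all assumed values for the demonstration, not the controller design from the paper.

```python
import numpy as np

class VariableRBF:
    """Illustrative self-organizing RBF approximator: a unit is added when
    the error is large and no center lies near the input; units whose
    weights stay negligible are pruned."""

    def __init__(self, width=0.5, add_tol=0.3, err_tol=0.1, prune_tol=1e-3):
        self.centers, self.weights = [], np.empty(0)
        self.width, self.add_tol = width, add_tol
        self.err_tol, self.prune_tol = err_tol, prune_tol

    def _phi(self, x):
        return np.array([np.exp(-((x - c) / self.width) ** 2)
                         for c in self.centers])

    def predict(self, x):
        return float(self.weights @ self._phi(x)) if self.centers else 0.0

    def update(self, x, y, lr=0.5):
        err = y - self.predict(x)
        dists = [abs(x - c) for c in self.centers]
        if abs(err) > self.err_tol and (not dists or min(dists) > self.add_tol):
            self.centers.append(x)                       # grow the structure
            self.weights = np.append(self.weights, 0.0)
        if self.centers:
            self.weights = self.weights + lr * err * self._phi(x)
            keep = np.abs(self.weights) > self.prune_tol  # prune dead units
            self.centers = [c for c, k in zip(self.centers, keep) if k]
            self.weights = self.weights[keep]

net = VariableRBF()
for x in np.random.default_rng(0).uniform(-3, 3, 2000):
    net.update(x, np.sin(x))                              # online approximation
print(len(net.centers), net.predict(1.0), np.sin(1.0))
```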
Financial Time Series Prediction Using Spiking Neural Networks
Reid, David; Hussain, Abir Jaafar; Tawfik, Hissam
2014-01-01
In this paper a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, was used for financial time series prediction. It is argued that the inherent temporal capabilities of this type of network are suited to non-stationary data such as these. The performance of the spiking neural network was benchmarked against three systems: two “traditional” rate-encoded neural networks (a Multi-Layer Perceptron and a Dynamic Ridge Polynomial neural network) and a standard Linear Predictor Coefficients model. For this comparison three non-stationary and noisy time series were used: IBM stock data, US/Euro exchange rate data, and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of Annualised Return and prediction error for 5-Step ahead predictions. These results were also supported by other relevant metrics such as Maximum Drawdown and Signal-To-Noise ratio. This work demonstrated the applicability of the Polychronous Spiking Network to financial data forecasting and in turn indicates the potential of using such networks over traditional systems in difficult to manage non-stationary environments. PMID:25170618
Neural dynamic programming and its application to control systems
NASA Astrophysics Data System (ADS)
Seong, Chang-Yun
There are few general practical feedback control methods for nonlinear MIMO (multi-input-multi-output) systems, although such methods exist for their linear counterparts. Neural Dynamic Programming (NDP) is proposed as a practical design method of optimal feedback controllers for nonlinear MIMO systems. NDP is an offspring of both neural networks and optimal control theory. In optimal control theory, the optimal solution to any nonlinear MIMO control problem may be obtained from the Hamilton-Jacobi-Bellman equation (HJB) or the Euler-Lagrange equations (EL). The two sets of equations provide the same solution in different forms: EL leads to a sequence of optimal control vectors, called Feedforward Optimal Control (FOC); HJB yields a nonlinear optimal feedback controller, called Dynamic Programming (DP). DP produces an optimal solution that can reject disturbances and uncertainties as a result of feedback. Unfortunately, computation and storage requirements associated with DP solutions can be problematic, especially for high-order nonlinear systems. This dissertation presents an approximate technique for solving the DP problem based on neural network techniques that provides many of the performance benefits (e.g., optimality and feedback) of DP and benefits from the numerical properties of neural networks. We formulate neural networks to approximate optimal feedback solutions whose existence DP justifies. We show the conditions under which NDP closely approximates the optimal solution. Finally, we introduce the learning operator characterizing the learning process of the neural network in searching the optimal solution. The analysis of the learning operator provides not only a fundamental understanding of the learning process in neural networks but also useful guidelines for selecting the number of weights of the neural network. As a result, NDP finds---with a reasonable amount of computation and storage---the optimal feedback solutions to nonlinear MIMO control problems that would be very difficult to solve with DP. NDP was demonstrated on several applications such as the lateral autopilot logic for a Boeing 747, the minimum fuel control of a double-integrator plant with bounded control, the backward steering of a two-trailer truck, and the set-point control of a two-link robot arm.
Hidden Markov models and neural networks for fault detection in dynamic systems
NASA Technical Reports Server (NTRS)
Smyth, Padhraic
1994-01-01
Neural networks plus hidden Markov models (HMM) can provide excellent detection and false alarm rate performance in fault detection applications, as shown in this viewgraph presentation. Modified models allow for novelty detection. Key contributions of neural network models are: (1) excellent nonparametric discrimination capability; (2) good estimation of posterior state probabilities, even in high dimensions, so that they can be embedded within an overall probabilistic model (HMM); and (3) simple implementation compared to other nonparametric models. The neural network/HMM monitoring model is currently being integrated with the new Deep Space Network (DSN) antenna controller software and will be on-line monitoring a new DSN 34-m antenna (DSS-24) by July, 1994.
An FPGA Implementation of a Polychronous Spiking Neural Network with Delay Adaptation.
Wang, Runchun; Cohen, Gregory; Stiefel, Klaus M; Hamilton, Tara Julia; Tapson, Jonathan; van Schaik, André
2013-01-01
We present an FPGA implementation of a re-configurable, polychronous spiking neural network with a large capacity for spatial-temporal patterns. The proposed neural network generates delay paths de novo, so that only connections that actually appear in the training patterns will be created. This allows the proposed network to use all the axons (variables) to store information. Spike Timing Dependent Delay Plasticity is used to fine-tune and add dynamics to the network. We use a time multiplexing approach allowing us to achieve 4096 (4k) neurons and up to 1.15 million programmable delay axons on a Virtex 6 FPGA. Test results show that the proposed neural network is capable of successfully recalling more than 95% of all spikes for 96% of the stored patterns. The tests also show that the neural network is robust to noise from random input spikes.
Constructive autoassociative neural network for facial recognition.
Fernandes, Bruno J T; Cavalcanti, George D C; Ren, Tsang I
2014-01-01
Autoassociative artificial neural networks have been used in many different computer vision applications. However, it is difficult to define the most suitable neural network architecture because this definition is based on previous knowledge and depends on the problem domain. To address this problem, we propose a constructive autoassociative neural network called CANet (Constructive Autoassociative Neural Network). CANet integrates the concepts of receptive fields and autoassociative memory in a dynamic architecture that changes the configuration of the receptive fields by adding new neurons in the hidden layer, while a pruning algorithm removes neurons from the output layer. Neurons in the CANet output layer present lateral inhibitory connections that improve the recognition rate. Experiments in face recognition and facial expression recognition show that the CANet outperforms other methods presented in the literature.
Neural Computations in a Dynamical System with Multiple Time Scales.
Mi, Yuanyuan; Lin, Xiaohan; Wu, Si
2016-01-01
Neural systems display rich short-term dynamics at various levels, e.g., spike-frequency adaptation (SFA) at the single-neuron level, and short-term facilitation (STF) and depression (STD) at the synapse level. These dynamical features typically cover a broad range of time scales and exhibit large diversity in different brain regions. It remains unclear what computational benefit the brain gains from having such variability in short-term dynamics. In this study, we propose that the brain can exploit such dynamical features to implement multiple seemingly contradictory computations in a single neural circuit. To demonstrate this idea, we use a continuous attractor neural network (CANN) as a working model and include STF, SFA and STD with increasing time constants in its dynamics. Three computational tasks are considered: persistent activity, adaptation, and anticipative tracking. These tasks require conflicting neural mechanisms, and hence cannot be implemented by a single dynamical feature or any combination of features with similar time constants. However, with properly coordinated STF, SFA and STD, we show that the network is able to implement the three computational tasks concurrently. We hope this study will shed light on the understanding of how the brain orchestrates its rich dynamics at various levels to realize diverse cognitive functions.
A recurrent self-organizing neural fuzzy inference network.
Juang, C F; Lin, C T
1999-01-01
A recurrent self-organizing neural fuzzy inference network (RSONFIN) is proposed in this paper. The RSONFIN is inherently a recurrent multilayered connectionist network for realizing the basic elements and functions of dynamic fuzzy inference, and may be considered to be constructed from a series of dynamic fuzzy rules. The temporal relations embedded in the network are built by adding some feedback connections representing the memory elements to a feedforward neural fuzzy network. Each weight as well as each node in the RSONFIN has its own meaning and represents a special element in a fuzzy rule. There are no hidden nodes (i.e., no membership functions and fuzzy rules) initially in the RSONFIN. They are created on-line via concurrent structure identification (the construction of dynamic fuzzy if-then rules) and parameter identification (the tuning of the free parameters of membership functions). The structure learning together with the parameter learning forms a fast learning algorithm for building a small, yet powerful, dynamic neural fuzzy network. Two major characteristics of the RSONFIN can thus be seen: 1) the recurrent property of the RSONFIN makes it suitable for dealing with temporal problems, and 2) no predetermination of the network structure, such as the number of hidden nodes, needs to be given, since the RSONFIN finds its optimal structure and parameters automatically and quickly. Moreover, to reduce the number of fuzzy rules generated, a flexible input partition method, the aligned clustering-based algorithm, is proposed. Various simulations on temporal problems are carried out and performance comparisons with some existing recurrent networks are also made. The efficiency of the RSONFIN is verified by these results.
Homeostatic Scaling of Excitability in Recurrent Neural Networks
Remme, Michiel W. H.; Wadman, Wytse J.
2012-01-01
Neurons adjust their intrinsic excitability when experiencing a persistent change in synaptic drive. This process can prevent neural activity from moving into either a quiescent state or a saturated state in the face of ongoing plasticity, and is thought to promote stability of the network in which neurons reside. However, most neurons are embedded in recurrent networks, which require a delicate balance between excitation and inhibition to maintain network stability. This balance could be disrupted when neurons independently adjust their intrinsic excitability. Here, we study the functioning of activity-dependent homeostatic scaling of intrinsic excitability (HSE) in a recurrent neural network. Using both simulations of a recurrent network consisting of excitatory and inhibitory neurons that implement HSE, and a mean-field description of adapting excitatory and inhibitory populations, we show that the stability of such adapting networks critically depends on the relationship between the adaptation time scales of both neuron populations. In a stable adapting network, HSE can keep all neurons functioning within their dynamic range, while the network is undergoing several (patho)physiologically relevant types of plasticity, such as persistent changes in external drive, changes in connection strengths, or the loss of inhibitory cells from the network. However, HSE cannot prevent the unstable network dynamics that result when, due to such plasticity, recurrent excitation in the network becomes too strong compared to feedback inhibition. This suggests that keeping a neural network in a stable and functional state requires the coordination of distinct homeostatic mechanisms that operate not only by adjusting neural excitability, but also by controlling network connectivity. PMID:22570604
Zhang, Guodong; Zeng, Zhigang; Hu, Junhao
2018-01-01
This paper is concerned with the global exponential dissipativity of memristive inertial neural networks with discrete and distributed time-varying delays. By constructing appropriate Lyapunov-Krasovskii functionals, some new sufficient conditions ensuring global exponential dissipativity of memristive inertial neural networks are derived. Moreover, the globally exponential attractive sets and positive invariant sets are also presented here. In addition, the new proposed results here complement and extend the earlier publications on conventional or memristive neural network dynamical systems. Finally, numerical simulations are given to illustrate the effectiveness of obtained results. Copyright © 2017 Elsevier Ltd. All rights reserved.
Optimal exponential synchronization of general chaotic delayed neural networks: an LMI approach.
Liu, Meiqin
2009-09-01
This paper investigates the optimal exponential synchronization problem of general chaotic neural networks with or without time delays by virtue of Lyapunov-Krasovskii stability theory and the linear matrix inequality (LMI) technique. This general model, which is the interconnection of a linear delayed dynamic system and a bounded static nonlinear operator, covers several well-known neural networks, such as Hopfield neural networks, cellular neural networks (CNNs), bidirectional associative memory (BAM) networks, and recurrent multilayer perceptrons (RMLPs) with or without delays. Using the drive-response concept, time-delay feedback controllers are designed to synchronize two identical chaotic neural networks as quickly as possible. The control design equations are shown to be a generalized eigenvalue problem (GEVP) which can be easily solved by various convex optimization algorithms to determine the optimal control law and the optimal exponential synchronization rate. Detailed comparisons with existing results are made and numerical simulations are carried out to demonstrate the effectiveness of the established synchronization laws.
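For readers unfamiliar with the LMI machinery, the sketch below illustrates the basic feasibility step with CVXPY: finding a Lyapunov matrix P that certifies a prescribed exponential rate for a linear system. This is a generic, assumed example of the technique (a single Lyapunov inequality on an invented test matrix), not the paper's full delayed GEVP formulation.

```python
import numpy as np
import cvxpy as cp

# Certify exponential decay rate beta for x' = A x by finding P > 0 with
#   A^T P + P A << -2 beta P    (a linear matrix inequality in P).
A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])       # assumed test system
beta = 0.5

P = cp.Variable((2, 2), symmetric=True)
constraints = [P >> 1e-6 * np.eye(2),
               A.T @ P + P @ A << -2 * beta * P]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status, P.value)       # "optimal" means the rate is certified
```

Sweeping or bisecting over beta until the problem turns infeasible recovers the largest certifiable rate, which is the flavor of the generalized eigenvalue problem mentioned in the abstract.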
Noise in Neural Networks: Thresholds, Hysteresis, and Neuromodulation of Signal-To-Noise
NASA Astrophysics Data System (ADS)
Keeler, James D.; Pichler, Elgar E.; Ross, John
1989-03-01
We study a neural-network model including Gaussian noise, higher-order neuronal interactions, and neuromodulation. For a first-order network, there is a threshold in the noise level (phase transition) above which the network displays only disorganized behavior and critical slowing down near the noise threshold. The network can tolerate more noise if it has higher-order feedback interactions, which also lead to hysteresis and multistability in the network dynamics. The signal-to-noise ratio can be adjusted in a biological neural network by neuromodulators such as norepinephrine. Comparisons are made to experimental results and further investigations are suggested to test the effects of hysteresis and neuromodulation in pattern recognition and learning. We propose that norepinephrine may "quench" the neural patterns of activity to enhance the ability to learn details.
A loop-based neural architecture for structured behavior encoding and decoding.
Gisiger, Thomas; Boukadoum, Mounir
2018-02-01
We present a new type of artificial neural network that generalizes on anatomical and dynamical aspects of the mammal brain. Its main novelty lies in its topological structure which is built as an array of interacting elementary motifs shaped like loops. These loops come in various types and can implement functions such as gating, inhibitory or executive control, or encoding of task elements to name a few. Each loop features two sets of neurons and a control region, linked together by non-recurrent projections. The two neural sets do the bulk of the loop's computations while the control unit specifies the timing and the conditions under which the computations implemented by the loop are to be performed. By functionally linking many such loops together, a neural network is obtained that may perform complex cognitive computations. To demonstrate the potential offered by such a system, we present two neural network simulations. The first illustrates the structure and dynamics of a single loop implementing a simple gating mechanism. The second simulation shows how connecting four loops in series can produce neural activity patterns that are sufficient to pass a simplified delayed-response task. We also show that this network reproduces electrophysiological measurements gathered in various regions of the brain of monkeys performing similar tasks. We also demonstrate connections between this type of neural network and recurrent or long short-term memory network models, and suggest ways to generalize them for future artificial intelligence research. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Barkhatov, N. A.; Revunov, S. E.; Vorobjev, V. G.; Yagodkina, O. I.
2018-03-01
The cause-and-effect relations between the dynamics of high-latitude geomagnetic activity (in terms of the AL index) and the type of the magnetic cloud of the solar wind are studied with the use of artificial neural networks. A recurrent neural network model has been created based on the search for the optimal physically coupled input and output parameters characterizing the action on the magnetosphere of a plasma flux belonging to a certain magnetic cloud type. It has been shown that, with IMF components as input parameters of the neural networks and with allowance for a 90-min prehistory, it is possible to retrieve the AL sequence with an accuracy of up to 80%. The successful retrieval of the AL dynamics from these data indicates the presence of a close nonlinear connection between the AL index and cloud parameters. The created neural network models can be applied with high efficiency to retrieve the AL index, both in periods of isolated magnetospheric substorms and in periods of the interaction between the Earth's magnetosphere and magnetic clouds of different types. The developed model of AL index retrieval can be used to detect magnetic clouds.
Pilots Rate Augmented Generalized Predictive Control for Reconfiguration
NASA Technical Reports Server (NTRS)
Soloway, Don; Haley, Pam
2004-01-01
The objective of this paper is to report the results from the research being conducted in reconfigurable flight controls at NASA Ames. A study was conducted with three NASA Dryden test pilots to evaluate two approaches to reconfiguring an aircraft's control system when failures occur in the control surfaces and engine. NASA Ames is investigating both a Neural Generalized Predictive Control scheme and a Neural Network based Dynamic Inverse controller. This paper highlights the Predictive Control scheme, where a simple augmentation to reduce zero steady-state error made the neural network predictor model redundant for the task. Instead of a neural network predictor model, a nominal single-point linear model was used and then augmented with an error corrector. This paper shows that the Generalized Predictive Controller and the Dynamic Inverse Neural Network controller perform equally well at reconfiguration, but with lower rate requirements from the actuators. Also presented are the pilot ratings for each controller for various failure scenarios and two samples of the required control actuation during reconfiguration. Finally, the paper concludes by stepping through the Generalized Predictive Control's reconfiguration process for an elevator failure.
Encoding Time in Feedforward Trajectories of a Recurrent Neural Network Model.
Hardy, N F; Buonomano, Dean V
2018-02-01
Brain activity evolves through time, creating trajectories of activity that underlie sensorimotor processing, behavior, and learning and memory. Therefore, understanding the temporal nature of neural dynamics is essential to understanding brain function and behavior. In vivo studies have demonstrated that sequential transient activation of neurons can encode time. However, it remains unclear whether these patterns emerge from feedforward network architectures or from recurrent networks and, furthermore, what role network structure plays in timing. We address these issues using a recurrent neural network (RNN) model with distinct populations of excitatory and inhibitory units. Consistent with experimental data, a single RNN could autonomously produce multiple functionally feedforward trajectories, thus potentially encoding multiple timed motor patterns lasting up to several seconds. Importantly, the model accounted for Weber's law, a hallmark of timing behavior. Analysis of network connectivity revealed that efficiency (a measure of network interconnectedness) decreased as the number of stored trajectories increased. Additionally, the balance of excitation (E) and inhibition (I) shifted toward excitation during each unit's activation time, generating the prediction that observed sequential activity relies on dynamic control of the E/I balance. Our results establish for the first time that the same RNN can generate multiple functionally feedforward patterns of activity as a result of dynamic shifts in the E/I balance imposed by the connectome of the RNN. We conclude that recurrent network architectures account for sequential neural activity, as well as for a fundamental signature of timing behavior: Weber's law.
Application of neural networks to autonomous rendezvous and docking of space vehicles
NASA Technical Reports Server (NTRS)
Dabney, Richard W.
1992-01-01
NASA-Marshall has investigated the feasibility of numerous autonomous rendezvous and docking (ARD) candidate techniques. Neural networks have been studied as a viable basis for such systems' implementation, due to their intrinsic representation of such nonlinear functions as those for which analytical solutions are either difficult or nonexistent. Neural networks are also able to recognize and adapt to changes in their dynamic environment, thereby enhancing redundancy and fault tolerance. Outstanding performance has been obtained from ARD azimuth, elevation, and roll networks of this type.
Frequency Domain Analysis of Narx Neural Networks
NASA Astrophysics Data System (ADS)
Chance, J. E.; Worden, K.; Tomlinson, G. R.
1998-06-01
A method is proposed for interpreting the behaviour of NARX neural networks. The correspondence between time-delay neural networks and Volterra series is extended to the NARX class of networks. The Volterra kernels, or rather, their Fourier transforms, are obtained via harmonic probing. In the same way that the Volterra kernels generalize the impulse response to non-linear systems, the Volterra kernel transforms can be viewed as higher-order analogues of the Frequency Response Functions commonly used in Engineering dynamics; they can be interpreted in much the same way.
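Harmonic probing has a simple numerical analogue that may clarify the idea: drive a model with a small-amplitude sinusoid and read off the first-order kernel transform from the response at the fundamental. The sketch below uses an assumed toy NARX difference equation, not a model from the paper.

```python
import numpy as np

def narx(u):
    """Assumed toy NARX model: y[n] = 0.5 y[n-1] + u[n] + 0.1 u[n-1]^2."""
    y = np.zeros_like(u)
    for n in range(1, len(u)):
        y[n] = 0.5 * y[n - 1] + u[n] + 0.1 * u[n - 1] ** 2
    return y

N, k = 4096, 64                       # record length and probed bin
w = 2 * np.pi * k / N                 # probing frequency (rad/sample)
u = 1e-3 * np.cos(w * np.arange(N))   # small amplitude isolates H1
U, Y = np.fft.rfft(u), np.fft.rfft(narx(u))
H1_est = Y[k] / U[k]                  # response at the fundamental

# For comparison, the exact first-order kernel transform of the linear
# part (the quadratic term contributes only at DC and 2w):
H1_true = 1.0 / (1.0 - 0.5 * np.exp(-1j * w))
print(abs(H1_est), abs(H1_true))
```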
Dynamic changes in neural circuit topology following mild mechanical injury in vitro.
Patel, Tapan P; Ventre, Scott C; Meaney, David F
2012-01-01
Despite its enormous incidence, mild traumatic brain injury is not well understood. One aspect that needs more definition is how the mechanical energy during injury affects neural circuit function. Recent developments in cellular imaging probes provide an opportunity to assess the dynamic state of neural networks with single-cell resolution. In this article, we developed imaging methods to assess the state of dissociated cortical networks exposed to mild injury. We estimated the imaging conditions needed to achieve accurate measures of network properties, and applied these methodologies to evaluate if mild mechanical injury to cortical neurons produces graded changes to either spontaneous network activity or altered network topology. We found that modest injury produced a transient increase in calcium activity that dissipated within 1 h after injury. Alternatively, moderate mechanical injury produced immediate disruption in network synchrony, loss in excitatory tone, and increased modular topology. A calcium-activated neutral protease (calpain) was a key intermediary in these changes; blocking calpain activation restored the network nearly completely to its pre-injury state. Together, these findings show a more complex change in neural circuit behavior than previously reported for mild mechanical injury, and highlight at least one important early mechanism responsible for these changes.
ER fluid applications to vibration control devices and an adaptive neural-net controller
NASA Astrophysics Data System (ADS)
Morishita, Shin; Ura, Tamaki
1993-07-01
Four applications of electrorheological (ER) fluid to vibration control actuators and an adaptive neural-net control system suitable for the controller of ER actuators are described: a shock absorber system for automobiles, a squeeze film damper bearing for rotational machines, a dynamic damper for multidegree-of-freedom structures, and a vibration isolator. An adaptive neural-net control system composed of a forward model network for structural identification and a controller network is introduced for the control system of these ER actuators. As an example study of intelligent vibration control systems, an experiment was performed in which the ER dynamic damper was attached to a beam structure and controlled by the present neural-net controller so that the vibration in several modes of the beam was reduced with a single dynamic damper.
Dynamics of neural cryptography
NASA Astrophysics Data System (ADS)
Ruttor, Andreas; Kinzel, Wolfgang; Kanter, Ido
2007-05-01
Synchronization of neural networks has been used for public channel protocols in cryptography. In the case of tree parity machines the dynamics of both bidirectional synchronization and unidirectional learning is driven by attractive and repulsive stochastic forces. Thus it can be described well by a random walk model for the overlap between participating neural networks. For that purpose transition probabilities and scaling laws for the step sizes are derived analytically. Both these calculations as well as numerical simulations show that bidirectional interaction leads to full synchronization on average. In contrast, successful learning is only possible by means of fluctuations. Consequently, synchronization is much faster than learning, which is essential for the security of the neural key-exchange protocol. However, this qualitative difference between bidirectional and unidirectional interaction vanishes if tree parity machines with more than three hidden units are used, so that those neural networks are not suitable for neural cryptography. In addition, the effective number of keys which can be generated by the neural key-exchange protocol is calculated using the entropy of the weight distribution. As this quantity increases exponentially with the system size, brute-force attacks on neural cryptography can easily be made unfeasible.
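As a concrete illustration of the synchronization process analyzed here, the following is a minimal sketch of bidirectional tree parity machine synchronization with the standard Hebbian update (the parameter values K = 3, N = 100, L = 3 are assumptions chosen for the demo, not taken from the paper).

```python
import numpy as np

rng = np.random.default_rng(0)
K, N, L = 3, 100, 3   # hidden units, inputs per unit, weight bound

def tpm_output(w, x):
    sigma = np.sign(np.sum(w * x, axis=1))
    sigma[sigma == 0] = -1            # break ties deterministically
    return sigma, int(np.prod(sigma))

def hebbian(w, x, sigma, tau):
    # update only hidden units that agree with the network output
    for k in range(K):
        if sigma[k] == tau:
            w[k] = np.clip(w[k] + tau * x[k], -L, L)

wA = rng.integers(-L, L + 1, (K, N))
wB = rng.integers(-L, L + 1, (K, N))
steps = 0
while not np.array_equal(wA, wB):
    x = rng.choice([-1, 1], (K, N))   # public random input
    sA, tauA = tpm_output(wA, x)
    sB, tauB = tpm_output(wB, x)
    if tauA == tauB:                  # exchanged outputs agree: learn
        hebbian(wA, x, sA, tauA)
        hebbian(wB, x, sB, tauB)
    steps += 1
print("synchronized after", steps, "exchanged inputs")
```

A unidirectional attacker runs the same update but only overhears the agreed outputs, which is why, as the abstract notes, learning is driven by fluctuations and is much slower than mutual synchronization.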
NASA Astrophysics Data System (ADS)
Maslennikov, O. V.; Nekorkin, V. I.
2017-10-01
Dynamical networks are systems of active elements (nodes) interacting with each other through links. Examples are power grids, neural structures, coupled chemical oscillators, and communications networks, all of which are characterized by a networked structure and intrinsic dynamics of their interacting components. If the coupling structure of a dynamical network can change over time due to nodal dynamics, then such a system is called an adaptive dynamical network. The term ‘adaptive’ implies that the coupling topology can be rewired; the term ‘dynamical’ implies the presence of internal node and link dynamics. The main results of research on adaptive dynamical networks are reviewed. Key notions and definitions of the theory of complex networks are given, and major collective effects that emerge in adaptive dynamical networks are described.
Fuzzy Counter Propagation Neural Network Control for a Class of Nonlinear Dynamical Systems
Sakhre, Vandana; Jain, Sanjeev; Sapkal, Vilas S.; Agarwal, Dev P.
2015-01-01
A Fuzzy Counter Propagation Neural Network (FCPN) controller design is developed for a class of nonlinear dynamical systems. In this process, the weights connecting the instar and outstar, that is, the input-hidden and hidden-output layers, respectively, are adjusted by using Fuzzy Competitive Learning (FCL). The FCL paradigm adopts the principle of competitive learning to calculate the proposed Best Matched Node (BMN). This strategy offers robust control of nonlinear dynamical systems. FCPN is compared with existing networks such as the Dynamic Network (DN) and the Back Propagation Network (BPN) on the basis of Mean Absolute Error (MAE), Mean Square Error (MSE), Best Fit Rate (BFR), and so forth. The results show that the proposed FCPN gives better results than DN and BPN. The effectiveness of the proposed FCPN algorithms is demonstrated through simulations of four nonlinear dynamical systems and of multiple-input single-output (MISO) and single-input single-output (SISO) gas furnace Box-Jenkins time series data. PMID:26366169
Brain-Inspired Constructive Learning Algorithms with Evolutionally Additive Nonlinear Neurons
NASA Astrophysics Data System (ADS)
Fang, Le-Heng; Lin, Wei; Luo, Qiang
In this article, inspired in part by physiological evidence of the brain's growth and development, we develop a new type of constructive learning algorithm with evolutionally additive nonlinear neurons. The new algorithm has a remarkable ability in effective regression and accurate classification. In particular, it is able to sustain a further reduction of the loss function when the dynamics of the trained network become bogged down in the vicinity of local minima. The algorithm augments the neural network by adding only a few connections as well as neurons whose activation functions are nonlinear, nonmonotonic, and self-adapted to the dynamics of the loss function. Indeed, we analytically demonstrate the reduction dynamics of the algorithm for different problems, and further modify the algorithm so as to obtain an improved generalization capability for the augmented neural networks. Finally, through comparison with the classical algorithm and architecture for neural network construction, we show that our constructive learning algorithms, as well as their modified versions, have better performance, such as faster training speed and smaller network size, on several representative benchmark datasets including the MNIST dataset of handwritten digits.
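The grow-when-stalled idea admits a compact demonstration. Below is a minimal sketch, with assumed thresholds and a plain tanh unit rather than the paper's self-adapting nonlinear activations: a small regression network is trained by gradient descent, and a hidden neuron (with its output weight initialized to zero, so predictions are unchanged at insertion) is added whenever the loss plateaus.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.linspace(-3, 3, 200)[:, None]
y = np.sin(2 * X) + 0.5 * X            # target for the regression sketch

W1, b1, W2 = rng.normal(size=(1, 2)), np.zeros(2), rng.normal(size=2)

prev = np.inf
for step in range(20000):
    H = np.tanh(X @ W1 + b1)
    err = H @ W2 - y[:, 0]
    loss = np.mean(err ** 2)
    # backpropagation for the two-layer network
    gW2 = H.T @ err * (2 / len(X))
    gH = np.outer(err, W2) * (1 - H ** 2)
    gW1 = X.T @ gH * (2 / len(X))
    gb1 = gH.mean(0) * 2
    W2 -= 0.05 * gW2; W1 -= 0.05 * gW1; b1 -= 0.05 * gb1
    # grow when progress stalls near a (possibly local) minimum
    if step % 1000 == 999:
        if prev - loss < 1e-4 * prev:
            W1 = np.hstack([W1, rng.normal(size=(1, 1))])
            b1 = np.append(b1, 0.0)
            W2 = np.append(W2, 0.0)    # new unit starts inactive
        prev = loss
print("final loss", loss, "hidden units", W1.shape[1])
```

Because the new unit's output weight starts at zero, the loss cannot jump at insertion; the unit is then recruited by the gradient only if it helps, which is the spirit of the additive construction described in the abstract.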
Signal Processing in Periodically Forced Gradient Frequency Neural Networks
Kim, Ji Chul; Large, Edward W.
2015-01-01
Oscillatory instability at the Hopf bifurcation is a dynamical phenomenon that has been suggested to characterize active non-linear processes observed in the auditory system. Networks of oscillators poised near Hopf bifurcation points and tuned to tonotopically distributed frequencies have been used as models of auditory processing at various levels, but systematic investigation of the dynamical properties of such oscillatory networks is still lacking. Here we provide a dynamical systems analysis of a canonical model for gradient frequency neural networks driven by a periodic signal. We use linear stability analysis to identify various driven behaviors of canonical oscillators for all possible ranges of model and forcing parameters. The analysis shows that canonical oscillators exhibit qualitatively different sets of driven states and transitions for different regimes of model parameters. We classify the parameter regimes into four main categories based on their distinct signal processing capabilities. This analysis will lead to deeper understanding of the diverse behaviors of neural systems under periodic forcing and can inform the design of oscillatory network models of auditory signal processing. PMID:26733858
The TensorMol-0.1 model chemistry: a neural network augmented with long-range physics.
Yao, Kun; Herr, John E; Toth, David W; Mckintyre, Ryker; Parkhill, John
2018-02-28
Traditional force fields cannot model chemical reactivity, and suffer from low generality without re-fitting. Neural network potentials promise to address these problems, offering energies and forces with near ab initio accuracy at low cost. However, a data-driven approach is naturally inefficient for long-range interatomic forces that have simple physical formulas. In this manuscript we construct a hybrid model chemistry consisting of a nearsighted neural network potential with screened long-range electrostatic and van der Waals physics. This trained potential, simply dubbed "TensorMol-0.1", is offered in an open-source Python package capable of many of the simulation types commonly used to study chemistry: geometry optimizations, harmonic spectra, open or periodic molecular dynamics, Monte Carlo, and nudged elastic band calculations. We describe the robustness and speed of the package, demonstrating its millihartree accuracy and scalability to tens-of-thousands of atoms on ordinary laptops. We demonstrate the performance of the model by reproducing vibrational spectra, and by simulating the molecular dynamics of a protein. Our comparisons with electronic structure theory and experimental data demonstrate that neural network molecular dynamics is poised to become an important tool for molecular simulation, lowering the resource barrier to simulating chemistry.
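The hybrid decomposition described above can be sketched generically. The fragment below is an assumed illustration of the idea (the function names, switching function, and unit C6 coefficient are all invented for the sketch; this is not TensorMol's actual API): a learned short-range term plus analytic, smoothly switched-on long-range Coulomb and dispersion tails.

```python
import numpy as np

def total_energy(coords, charges, nn_short_range, r_switch=4.0):
    """Hybrid energy: learned local term + analytic long-range physics.
    `nn_short_range` is any callable returning the nearsighted NN energy."""
    E = nn_short_range(coords)                     # learned local chemistry
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(coords[i] - coords[j])
            s = 0.5 * (1.0 + np.tanh(r - r_switch))  # screens out short range
            E += s * charges[i] * charges[j] / r     # Coulomb tail
            E -= s * 1.0 / r ** 6                    # dispersion tail (C6 = 1 assumed)
    return E

# Toy usage with a zero "network" just to exercise the plumbing:
coords = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 5.0]])
print(total_energy(coords, np.array([0.3, -0.3]), lambda c: 0.0))
```

The switching function keeps the long-range terms from double-counting interactions the network already learns at short range, which is the core of the division of labor the abstract describes.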
Neural network application to aircraft control system design
NASA Technical Reports Server (NTRS)
Troudet, Terry; Garg, Sanjay; Merrill, Walter C.
1991-01-01
The feasibility of using artificial neural networks as control systems for modern, complex aerospace vehicles is investigated via an example aircraft control design study. The problem considered is that of designing a controller for an integrated airframe/propulsion longitudinal dynamics model of a modern fighter aircraft to provide independent control of pitch rate and airspeed responses to pilot command inputs. An explicit model following controller using H infinity control design techniques is first designed to gain insight into the control problem as well as to provide a baseline for evaluation of the neurocontroller. Using the model of the desired dynamics as a command generator, a multilayer feedforward neural network is trained to control the vehicle model within the physical limitations of the actuator dynamics. This is achieved by minimizing an objective function which is a weighted sum of tracking errors and control input commands and rates. To gain insight in the neurocontrol, linearized representations of the nonlinear neurocontroller are analyzed along a commanded trajectory. Linear robustness analysis tools are then applied to the linearized neurocontroller models and to the baseline H infinity based controller. Future areas of research are identified to enhance the practical applicability of neural networks to flight control design.
Neural Network Assisted Inverse Dynamic Guidance for Terminally Constrained Entry Flight
Chen, Wanchun
2014-01-01
This paper presents a neural network assisted entry guidance law that is designed by applying Bézier approximation. It is shown that a fully constrained approximation of a reference trajectory can be made by using the Bézier curve. Applying this approximation, an inverse dynamic system for an entry flight is solved to generate the guidance command. The guidance solution thus obtained ensures the terminal constraints for position, flight path, and azimuth angle. In order to ensure the terminal velocity constraint, a prediction of the terminal velocity is required, based on which the approximated Bézier curve is adjusted. An artificial neural network is used for this prediction of the terminal velocity. The method enables faster implementation in achieving fully constrained entry flight. Results from simulations indicate improved performance of the neural network assisted method. The scheme is expected to have prospects for further research on automated onboard control of terminal velocity for both reentry and terminal guidance laws. PMID:24723821
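To make the Bézier step concrete, the sketch below evaluates a curve in Bernstein form and shows how endpoint constraints pin down control points: a cubic's ends fix position, and since B'(0) and B'(1) are parallel to P1 - P0 and P3 - P2, the inner control points fix the initial and final path angles. All numbers are assumed for illustration and are unrelated to the paper's entry trajectories.

```python
import numpy as np
from math import comb

def bezier(P, t):
    """Evaluate an n-th order Bézier curve (Bernstein form) at t in [0, 1]."""
    n = len(P) - 1
    t = np.atleast_1d(t)[:, None]
    return sum(comb(n, i) * (1 - t) ** (n - i) * t ** i * P[i]
               for i in range(n + 1))

# Toy terminal-constraint construction (assumed numbers): endpoints fix
# position; the inner control points fix the path angles at each end.
p0, p3 = np.array([0.0, 10.0]), np.array([8.0, 0.0])
g0, g1 = np.deg2rad(-10.0), np.deg2rad(-60.0)      # path angles at ends
d = 3.0                                            # free shaping parameter
p1 = p0 + d * np.array([np.cos(g0), np.sin(g0)])
p2 = p3 - d * np.array([np.cos(g1), np.sin(g1)])
curve = bezier([p0, p1, p2, p3], np.linspace(0, 1, 50))
print(curve[0], curve[-1])                         # endpoints are honored
```

Free parameters like d above are the knobs that a velocity prediction, such as the neural network output in the abstract, could adjust to meet the remaining terminal constraint.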
Adaptive critic learning techniques for engine torque and air-fuel ratio control.
Liu, Derong; Javaherian, Hossein; Kovalenko, Olesia; Huang, Ting
2008-08-01
A new approach for engine calibration and control is proposed. In this paper, we present our research results on the implementation of adaptive critic designs for self-learning control of automotive engines. A class of adaptive critic designs that can be classified as (model-free) action-dependent heuristic dynamic programming is used in this research project. The goals of the present learning control design for automotive engines include improved performance, reduced emissions, and maintained optimum performance under various operating conditions. Using the data from a test vehicle with a V8 engine, we developed a neural network model of the engine and neural network controllers based on the idea of approximate dynamic programming to achieve optimal control. We have developed and simulated self-learning neural network controllers for both engine torque (TRQ) and exhaust air-fuel ratio (AFR) control. The goal of TRQ control and AFR control is to track the commanded values. For both control problems, excellent neural network controller transient performance has been achieved.
NASA Astrophysics Data System (ADS)
Zhao, Lei; Wang, Zengcai; Wang, Xiaojin; Qi, Yazhou; Liu, Qing; Zhang, Guoxin
2016-09-01
Human fatigue is an important cause of traffic accidents. To improve the safety of transportation, we propose, in this paper, a framework for fatigue expression recognition using image-based facial dynamic multi-information and a bimodal deep neural network. First, the landmarks of the face region and the texture of the eye region, which complement each other in fatigue expression recognition, are extracted from facial image sequences captured by a single camera. Then, two stacked autoencoder neural networks are trained for landmark and texture, respectively. Finally, the two trained neural networks are combined by learning a joint layer on top of them to construct a bimodal deep neural network. The model can be used to extract a unified representation that fuses the landmark and texture modalities together and to classify fatigue expressions accurately. The proposed system is tested on a human fatigue dataset obtained from an actual driving environment. The experimental results demonstrate that the proposed method performs stably and robustly, and that the average accuracy reaches 96.2%.
NASA Astrophysics Data System (ADS)
Wan, Xiaodong; Wang, Yuanxun; Zhao, Dawei; Huang, YongAn
2017-09-01
Our study aims at developing an effective quality monitoring system for small scale resistance spot welding of titanium alloy. The measured electrical signals were interpreted in combination with the nugget development. Features were extracted from the dynamic resistance and electrode voltage curves. A higher welding current generally indicated a lower overall dynamic resistance level. A larger electrode voltage peak and a higher rate of change of electrode voltage could be detected under a smaller electrode force or higher welding current condition. Variation of the extracted features and weld quality was found to be more sensitive to changes in welding current than in electrode force. Different neural network models were proposed for weld quality prediction. The back propagation neural network was more appropriate for failure load estimation, while the probabilistic neural network model was more appropriate for quality level classification. A real-time, on-line weld quality monitoring system may be developed by taking advantage of both methods.
Chaisangmongkon, Warasinee; Swaminathan, Sruthi K.; Freedman, David J.; Wang, Xiao-Jing
2017-01-01
Decision making involves dynamic interplay between internal judgements and external perception, which has been investigated in delayed match-to-category (DMC) experiments. Our analysis of neural recordings shows that, during DMC tasks, LIP and PFC neurons demonstrate mixed, time-varying, and heterogeneous selectivity, but previous theoretical work has not established the link between these neural characteristics and population-level computations. We trained a recurrent network model to perform DMC tasks and found that the model can remarkably reproduce key features of neuronal selectivity at the single-neuron and population levels. Analysis of the trained networks elucidates that robust transient trajectories of the neural population are the key driver of sequential categorical decisions. The directions of trajectories are governed by the network's self-organized connectivity, defining a 'neural landscape' consisting of a task-tailored arrangement of slow states and dynamical tunnels. With this model, we can identify functionally relevant circuit motifs and generalize the framework to solve other categorization tasks. PMID:28334612
A spiking neural integrator model of the adaptive control of action by the medial prefrontal cortex.
Bekolay, Trevor; Laubach, Mark; Eliasmith, Chris
2014-01-29
Subjects performing simple reaction-time tasks can improve reaction times by learning the expected timing of action-imperative stimuli and preparing movements in advance. Success or failure on the previous trial is often an important factor for determining whether a subject will attempt to time the stimulus or wait for it to occur before initiating action. The medial prefrontal cortex (mPFC) has been implicated in enabling the top-down control of action depending on the outcome of the previous trial. Analysis of spike activity from the rat mPFC suggests that neural integration is a key mechanism for adaptive control in precisely timed tasks. We show through simulation that a spiking neural network consisting of coupled neural integrators captures the neural dynamics of the experimentally recorded mPFC. Errors lead to deviations in the normal dynamics of the system, a process that could enable learning from past mistakes. We expand on this coupled integrator network to construct a spiking neural network that performs a reaction-time task by following either a cue-response or timing strategy, and show that it performs the task with similar reaction times as experimental subjects while maintaining the same spiking dynamics as the experimentally recorded mPFC.
Movement decoupling control for two-axis fast steering mirror
NASA Astrophysics Data System (ADS)
Wang, Rui; Qiao, Yongming; Lv, Tao
2017-02-01
A two-axis fast steering mirror based on flexure hinges and piezoelectric actuators is a complex system with time-varying, uncertain, and strongly coupled dynamics. It is extremely difficult to achieve high precision decoupling control with the traditional PID control method. A feedback error learning method was used to establish an inverse hysteresis model of the piezo-ceramic, based on an inner-product dynamic neural network suited to its nonlinear and non-smooth behavior. In order to achieve high actuator precision, a control method was proposed based on the piezo-ceramic inverse model and adaptive control with two dynamic neural networks. The experimental results indicate that, with the two dynamic neural network adaptive movement decoupling control algorithm, the static relative error is reduced from 4.44% to 0.30% and the coupling degree from 12.71% to 0.60%, while the dynamic relative error is reduced from 13.92% to 2.85% and the coupling degree from 2.63% to 1.17%.
Adaptive online inverse control of a shape memory alloy wire actuator using a dynamic neural network
NASA Astrophysics Data System (ADS)
Mai, Huanhuan; Song, Gangbing; Liao, Xiaofeng
2013-01-01
Shape memory alloy (SMA) actuators exhibit severe hysteresis, a nonlinear behavior, which complicates control strategies and limits their applications. This paper presents a new approach to controlling an SMA actuator through an adaptive inverse model based controller that consists of a dynamic neural network (DNN) identifier, a copy dynamic neural network (CDNN) feedforward term and a proportional (P) feedback action. Unlike fixed hysteresis models used in most inverse controllers, the proposed one uses a DNN to identify online the relationship between the applied voltage to the actuator and the displacement (the inverse model). Even without a priori knowledge of the SMA hysteresis and without pre-training, the proposed controller can precisely control the SMA wire actuator in various tracking tasks by identifying online the inverse model of the SMA actuator. Experiments were conducted, and experimental results demonstrated real-time modeling capabilities of DNN and the performance of the adaptive inverse controller.
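The structure of such an adaptive inverse controller (an online-identified inverse model feeding forward, plus a small proportional correction) can be sketched compactly. The code below is an assumed toy: the plant, feature vector, gains, and the linear-in-features "inverse model" are stand-ins for the paper's SMA hysteresis and DNN identifier.

```python
import numpy as np

def plant(u, state):                  # assumed mildly nonlinear plant
    return 0.7 * state + 0.3 * np.tanh(2.0 * u)

def features(yd, y_prev):             # simple dynamic feature vector
    return np.array([yd, yd ** 3, y_prev, 1.0])

theta = np.zeros(4)                   # online inverse-model parameters
Kp, lr, y, e = 0.8, 0.2, 0.0, 0.0
for t in range(2000):
    yd = 0.4 * np.sin(2 * np.pi * t / 200)   # desired trajectory
    f = features(yd, y)
    u = theta @ f + Kp * e            # feedforward inverse + P feedback
    y = plant(u, y)
    e = yd - y
    theta += lr * e * f               # adapt the inverse model online
print("final tracking error:", abs(e))
```

As in the abstract's scheme, no prior hysteresis model is assumed: the feedforward term is learned during operation, and the proportional action covers whatever the identified inverse has not yet captured.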
Optimal mapping of neural-network learning on message-passing multicomputers
NASA Technical Reports Server (NTRS)
Chu, Lon-Chan; Wah, Benjamin W.
1992-01-01
A minimization of learning-algorithm completion time is sought in the present optimal-mapping study of the learning process in multilayer feed-forward artificial neural networks (ANNs) for message-passing multicomputers. A novel approximation algorithm for mappings of this kind is derived from the observation that computation dominates communication time in a parallel ANN algorithm. Attention is given to both static and dynamic mapping schemes for systems with static and dynamic background workloads, as well as to experimental results obtained for simulated mappings on multicomputers with dynamic background workloads.
A new bio-inspired stimulator to suppress hyper-synchronized neural firing in a cortical network.
Amiri, Masoud; Amiri, Mahmood; Nazari, Soheila; Faez, Karim
2016-12-07
Hyper-synchronous neural oscillations are characteristic of several neurological diseases such as epilepsy. On the other hand, glial cells, and particularly astrocytes, can influence neural synchronization. Therefore, based on recent research, a new bio-inspired stimulator is proposed which is essentially a dynamical model derived from the astrocyte biophysical model. The performance of the new stimulator is investigated on a large-scale cortical network. Both excitatory and inhibitory synapses are considered in the simulated spiking neural network. The simulation results show that the new stimulator performs well and is able to reduce recurrent abnormal excitability, which in turn prevents hyper-synchronous neural firing in the spiking neural network. In this way, the proposed stimulator has a demand-controlled characteristic and is a good candidate for the deep brain stimulation (DBS) technique to successfully suppress neural hyper-synchronization. Copyright © 2016 Elsevier Ltd. All rights reserved.
Inhibition delay increases neural network capacity through Stirling transform.
Nogaret, Alain; King, Alastair
2018-03-01
Inhibitory neural networks are found to encode high volumes of information through delayed inhibition. We show that inhibition delay increases storage capacity through a Stirling transform of the minimum capacity which stabilizes locally coherent oscillations. We obtain both the exact and asymptotic formulas for the total number of dynamic attractors. Our results predict a (ln2)^{-N}-fold increase in capacity for an N-neuron network and demonstrate high-density associative memories which host a maximum number of oscillations in analog neural devices.
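The Stirling transform invoked here is easy to state and compute: b_n = sum_k S(n, k) a_k, where S(n, k) are Stirling numbers of the second kind. The sketch below implements it directly; the input sequence is an assumed toy choice, not the paper's minimum-capacity sequence, and the last line simply prints the predicted (ln 2)^{-N} gain factors for comparison.

```python
from functools import lru_cache
from math import log

@lru_cache(maxsize=None)
def S(n, k):
    """Stirling numbers of the second kind via the standard recurrence."""
    if n == k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * S(n - 1, k) + S(n - 1, k - 1)

def stirling_transform(a):
    """b_n = sum_k S(n, k) a_k  -- the transform invoked above."""
    return [sum(S(m, k) * a[k] for k in range(m + 1)) for m in range(len(a))]

a = list(range(10))                     # assumed toy input sequence
print(stirling_transform(a))            # combinatorially inflated output
print([(1 / log(2)) ** n for n in range(10)])   # predicted capacity gains
```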
Terminal attractors for addressable memory in neural networks
NASA Technical Reports Server (NTRS)
Zak, Michail
1988-01-01
A new type of attractors - terminal attractors - for an addressable memory in neural networks operating in continuous time is introduced. These attractors represent singular solutions of the dynamical system. They intersect (or envelope) the families of regular solutions while each regular solution approaches the terminal attractor in a finite time period. It is shown that terminal attractors can be incorporated into neural networks such that any desired set of these attractors with prescribed basins is provided by an appropriate selection of the weight matrix.
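A standard scalar example of a terminal attractor (a generic illustration of the concept, not an excerpt from this report) is dx/dt = -x^{1/3}: the right-hand side is non-Lipschitz at the origin, so trajectories reach the equilibrium in finite time t* = (3/2) x0^{2/3} rather than only asymptotically. The sketch below verifies this numerically.

```python
import numpy as np

# dx/dt = -x**(1/3): the slope is unbounded at x = 0, so regular
# solutions hit the attractor in finite time (t* = 1.5 for x0 = 1),
# unlike the exponential approach of dx/dt = -x.
x, dt, t = 1.0, 1e-4, 0.0
while x > 0.0:
    x = max(x - dt * np.cbrt(x), 0.0)   # forward Euler, clipped at 0
    t += dt
print("reached attractor at t =", t, "(theory: 1.5)")
```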
Avalanche and edge-of-chaos criticality do not necessarily co-occur in neural networks.
Kanders, Karlis; Lorimer, Tom; Stoop, Ruedi
2017-04-01
There are indications that for optimizing neural computation, neural networks may operate at criticality. Previous approaches have used distinct fingerprints of criticality, leaving open the question whether the different notions would necessarily reflect different aspects of one and the same instance of criticality, or whether they could potentially refer to distinct instances of criticality. In this work, we choose avalanche criticality and edge-of-chaos criticality and demonstrate for a recurrent spiking neural network that avalanche criticality does not necessarily entrain dynamical edge-of-chaos criticality. This suggests that the different fingerprints may pertain to distinct phenomena.
How synapses can enhance sensibility of a neural network
NASA Astrophysics Data System (ADS)
Protachevicz, P. R.; Borges, F. S.; Iarosz, K. C.; Caldas, I. L.; Baptista, M. S.; Viana, R. L.; Lameu, E. L.; Macau, E. E. N.; Batista, A. M.
2018-02-01
In this work, we study the dynamic range in a neural network modelled by a cellular automaton. We consider deterministic and non-deterministic rules to simulate electrical and chemical synapses. Chemical synapses have an intrinsic time delay and are susceptible to parameter variations guided by Hebbian learning rules. The learning rules are related to neuroplasticity, which describes change in the neural connections of the brain. Our results show that chemical synapses can abruptly enhance the sensibility of the neural network, an effect that can become even more predominant if the learning rules are applied to the chemical synapses.
Wang, Fen; Chen, Yuanlong; Liu, Meichun
2018-02-01
Stochastic memristor-based bidirectional associative memory (BAM) neural networks with time delays play an increasingly important role in the design and implementation of neural network systems. Under the framework of Filippov solutions, the issues of the pth moment exponential stability of stochastic memristor-based BAM neural networks are investigated. By using the stochastic stability theory, Itô's differential formula and Young inequality, the criteria are derived. Meanwhile, with Lyapunov approach and Cauchy-Schwarz inequality, we derive some sufficient conditions for the mean square exponential stability of the above systems. The obtained results improve and extend previous works on memristor-based or usual neural networks dynamical systems. Four numerical examples are provided to illustrate the effectiveness of the proposed results. Copyright © 2017 Elsevier Ltd. All rights reserved.
On the origin of reproducible sequential activity in neural circuits
NASA Astrophysics Data System (ADS)
Afraimovich, V. S.; Zhigulin, V. P.; Rabinovich, M. I.
2004-12-01
Robustness and reproducibility of sequential spatio-temporal responses is an essential feature of many neural circuits in sensory and motor systems of animals. The most common mathematical images of dynamical regimes in neural systems are fixed points, limit cycles, chaotic attractors, and continuous attractors (attractive manifolds of neutrally stable fixed points). These are not suitable for the description of reproducible transient sequential neural dynamics. In this paper we present the concept of a stable heteroclinic sequence (SHS), which is not an attractor. SHS opens the way for understanding and modeling of transient sequential activity in neural circuits. We show that this new mathematical object can be used to describe robust and reproducible sequential neural dynamics. Using the framework of a generalized high-dimensional Lotka-Volterra model, which describes the dynamics of firing rates in an inhibitory network, we present analytical results on the existence of the SHS in the phase space of the network. With the help of numerical simulations we confirm its robustness in the presence of noise in spite of the transient nature of the corresponding trajectories. Finally, by referring to several recent neurobiological experiments, we discuss possible applications of this new concept to several problems in neuroscience.
Wang, Dongshu; Huang, Lihong
2014-03-01
In this paper, we investigate the periodic dynamical behaviors of a class of general Cohen-Grossberg neural networks with discontinuous right-hand sides and time-varying and distributed delays. By means of retarded differential inclusion theory and the fixed point theorem for multi-valued maps, the existence of periodic solutions for the neural networks is obtained. After that, we derive some sufficient conditions for the global exponential stability and convergence of the neural networks, in terms of nonsmooth analysis theory with a generalized Lyapunov approach. Our results remain valid without assuming boundedness (or a growth condition) or monotonicity of the discontinuous neuron activation functions. Moreover, our results extend previous works not only on discrete time-varying and distributed delayed neural networks with continuous or even Lipschitz continuous activations, but also on those with discontinuous activations. We give some numerical examples to show the applicability and effectiveness of our main results. Copyright © 2013 Elsevier Ltd. All rights reserved.
A Lagrangian Formulation of Neural Networks I: Theory and Analog Dynamics
NASA Technical Reports Server (NTRS)
Mjolsness, Eric; Miranker, Willard L.
1997-01-01
We expand the mathematical apparatus for relaxation networks, which conventionally consists of an objective function E and a dynamics given by a system of differential equations along whose trajectories E is diminished.
Hellyer, Peter John; Clopath, Claudia; Kehagia, Angie A; Turkheimer, Federico E; Leech, Robert
2017-08-01
In recent years, there have been many computational simulations of spontaneous neural dynamics. Here, we describe a simple model of spontaneous neural dynamics that controls an agent moving in a simple virtual environment. These dynamics generate interesting brain-environment feedback interactions that rapidly destabilize neural and behavioral dynamics, demonstrating the need for homeostatic mechanisms. We investigate roles for homeostatic plasticity both locally (local inhibition adjusting to balance excitatory input) and more globally (regional "task negative" activity that compensates for "task positive" sensory input in another region), balancing neural activity and leading to more stable behavior (trajectories through the environment). Our results suggest complementary functional roles for both local and macroscale mechanisms in maintaining neural and behavioral dynamics, and a novel functional role for macroscopic "task-negative" patterns of activity (e.g., the default mode network).
Voytek, Bradley; Knight, Robert T
2015-06-15
Perception, cognition, and social interaction depend upon coordinated neural activity. This coordination operates within noisy, overlapping, and distributed neural networks operating at multiple timescales. These networks are built upon a structural scaffolding with intrinsic neuroplasticity that changes with development, aging, disease, and personal experience. In this article, we begin from the perspective that successful interregional communication relies upon the transient synchronization between distinct low-frequency (<80 Hz) oscillations, allowing for brief windows of communication via phase-coordinated local neuronal spiking. From this, we construct a theoretical framework for dynamic network communication, arguing that these networks reflect a balance between oscillatory coupling and local population spiking activity and that these two levels of activity interact. We theorize that when oscillatory coupling is too strong, spike timing within the local neuronal population becomes too synchronous; when oscillatory coupling is too weak, spike timing is too disorganized. Each results in specific disruptions to neural communication. These alterations in communication dynamics may underlie cognitive changes associated with healthy development and aging, in addition to neurological and psychiatric disorders. A number of neurological and psychiatric disorders-including Parkinson's disease, autism, depression, schizophrenia, and anxiety-are associated with abnormalities in oscillatory activity. Although aging, psychiatric and neurological disease, and experience differ in the biological changes to structural gray or white matter, neurotransmission, and gene expression, our framework suggests that any resultant cognitive and behavioral changes in normal or disordered states or their treatment are a product of how these physical processes affect dynamic network communication. Copyright © 2015 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
Pinning synchronization of memristor-based neural networks with time-varying delays.
Yang, Zhanyu; Luo, Biao; Liu, Derong; Li, Yueheng
2017-09-01
In this paper, the synchronization of memristor-based neural networks with time-varying delays via pinning control is investigated. A novel pinning method is introduced to synchronize two memristor-based neural networks which denote drive system and response system, respectively. The dynamics are studied by theories of differential inclusions and nonsmooth analysis. In addition, some sufficient conditions are derived to guarantee asymptotic synchronization and exponential synchronization of memristor-based neural networks via the presented pinning control. Furthermore, some improvements about the proposed control method are also discussed in this paper. Finally, the effectiveness of the obtained results is demonstrated by numerical simulations. Copyright © 2017 Elsevier Ltd. All rights reserved.
Multistability and instability analysis of recurrent neural networks with time-varying delays.
Zhang, Fanghai; Zeng, Zhigang
2018-01-01
This paper provides new theoretical results on the multistability and instability analysis of recurrent neural networks with time-varying delays. It is shown that such n-neuronal recurrent neural networks have exactly [Formula: see text] equilibria, [Formula: see text] of which are locally exponentially stable and the others unstable, where k0 is a nonnegative integer such that k0 ≤ n. By using a combination of two different divisions, recurrent neural networks can possess more dynamic properties. This method improves and extends the existing results in the literature. Finally, one numerical example is provided to show the superiority and effectiveness of the presented results. Copyright © 2017 Elsevier Ltd. All rights reserved.
Zhang, Zhijun; Li, Zhijun; Zhang, Yunong; Luo, Yamei; Li, Yuanqing
2015-12-01
We propose a dual-arm cyclic-motion-generation (DACMG) scheme by a neural-dynamic method, which can remedy the joint-angle-drift phenomenon of a humanoid robot. In particular, according to a neural-dynamic design method, first, a cyclic-motion performance index is exploited and applied. This cyclic-motion performance index is then integrated into a quadratic programming (QP)-type scheme with time-varying constraints, called the time-varying-constrained DACMG (TVC-DACMG) scheme. The scheme includes the kinematic motion equations of two arms and the time-varying joint limits. The scheme can not only generate the cyclic motion of two arms for a humanoid robot but also control the arms to move to the desired position. In addition, the scheme considers the physical limit avoidance. To solve the QP problem, a recurrent neural network is presented and used to obtain the optimal solutions. Computer simulations and physical experiments demonstrate the effectiveness and the accuracy of such a TVC-DACMG scheme and the neural network solver.
The dynamical analysis of modified two-compartment neuron model and FPGA implementation
NASA Astrophysics Data System (ADS)
Lin, Qianjin; Wang, Jiang; Yang, Shuangming; Yi, Guosheng; Deng, Bin; Wei, Xile; Yu, Haitao
2017-10-01
The complexity of neural models is increasing with the investigation of larger biological neural networks, more varied ionic channels, and more detailed morphologies, and the implementation of a biological neural network is a task with huge computational complexity and power consumption. This paper presents an efficient digital design using piecewise linearization on a field programmable gate array (FPGA) to succinctly implement the reduced two-compartment model, which retains essential features of more complicated models. The design proposes an approximate neuron model composed of a set of piecewise linear equations, which can reproduce different dynamical behaviors to depict the mechanisms of a single neuron model. The consistency of the hardware implementation is verified in terms of dynamical behaviors and bifurcation analysis, and the simulation results, including varied ion channel characteristics, coincide with the biological neuron model with high accuracy. Hardware synthesis on FPGA demonstrates that the proposed model has reliable performance and lower hardware resource usage compared with the original two-compartment model. These investigations are conducive to the scalability of biological neural networks in reconfigurable large-scale neuromorphic systems.
Cruz-Garza, Jesus G; Hernandez, Zachery R; Tse, Teresa; Caducoy, Eunice; Abibullaev, Berdakh; Contreras-Vidal, Jose L
2015-10-04
Understanding typical and atypical development remains one of the fundamental questions in developmental human neuroscience. Traditionally, experimental paradigms and analysis tools have been limited to constrained laboratory tasks and contexts due to technical limitations imposed by the available set of measuring and analysis techniques and the age of the subjects. These constraints severely limit the study of developmental neural dynamics and the associated neural networks engaged in cognition, perception and action in infants performing "in action and in context". This protocol presents a novel approach to studying infants and young children as they freely organize their own behavior, and its consequences, in a complex, partly unpredictable and highly dynamic environment. The proposed methodology integrates synchronized high-density active scalp electroencephalography (EEG), inertial measurement units (IMUs), video recording and behavioral analysis to capture brain activity and movement non-invasively in freely-behaving infants. This setup allows for the study of neural network dynamics in the developing brain, in action and context, as these networks are recruited during goal-oriented, exploration, and social interaction tasks.
Neural network-based adaptive dynamic surface control for permanent magnet synchronous motors.
Yu, Jinpeng; Shi, Peng; Dong, Wenjie; Chen, Bing; Lin, Chong
2015-03-01
This brief considers the problem of neural networks (NNs)-based adaptive dynamic surface control (DSC) for permanent magnet synchronous motors (PMSMs) with parameter uncertainties and load torque disturbance. First, NNs are used to approximate the unknown and nonlinear functions of PMSM drive system and a novel adaptive DSC is constructed to avoid the explosion of complexity in the backstepping design. Next, under the proposed adaptive neural DSC, the number of adaptive parameters required is reduced to only one, and the designed neural controllers structure is much simpler than some existing results in literature, which can guarantee that the tracking error converges to a small neighborhood of the origin. Then, simulations are given to illustrate the effectiveness and potential of the new design technique.
Fitting of dynamic recurrent neural network models to sensory stimulus-response data.
Doruk, R Ozgur; Zhang, Kechen
2018-06-02
We present a theoretical study aiming at model fitting for sensory neurons. Conventional neural network training approaches are not applicable to this problem due to the lack of continuous data. Although the stimulus can be considered a smooth time-dependent variable, the associated response will be a set of neural spike timings (roughly the instants of successive action potential peaks) that carry no amplitude information. A recurrent neural network model can be fitted to such a stimulus-response data pair by using the maximum likelihood estimation method, where the likelihood function is derived from the Poisson statistics of neural spiking. The universal approximation feature of recurrent dynamical neuron network models allows us to describe the excitatory-inhibitory characteristics of an actual sensory neural network with any desired number of neurons. The stimulus data are generated by a phased cosine Fourier series having fixed amplitude and frequency but a random phase. Various values of amplitude, stimulus component size, and sample size are applied in order to examine the effect of the stimulus on the identification process. Results are presented in tabular and graphical forms at the end of this text. In addition, to demonstrate the success of this research, a study involving the same model, nominal parameters and stimulus structure, and another study that works on different models, are compared to that of this research.
Testing of information condensation in a model reverberating spiking neural network.
Vidybida, Alexander
2011-06-01
Information about the external world is delivered to the brain in the form of spike trains structured in time. During further processing in higher areas, information is subjected to a certain condensation process, which results in the formation of abstract conceptual images of the external world, apparently represented as certain uniform spiking activity partially independent of the input spike train details. A possible physical mechanism of condensation at the level of an individual neuron was discussed recently. In a reverberating spiking neural network, due to this mechanism the dynamics should settle down to the same uniform/periodic activity in response to a set of various inputs. Since the same periodic activity may correspond to different input spike trains, we interpret this as a possible candidate for an information condensation mechanism in a network. Our purpose is to test this possibility in a network model consisting of five fully connected neurons, particularly the influence of the geometric size of the network on its ability to condense information. The dynamics of 20 spiking neural networks of different geometric sizes are modelled by means of computer simulation. Each network was propelled into reverberating dynamics by applying various initial input spike trains. We run the dynamics until it becomes periodic. Shannon's formula is used to calculate the amount of information in any input spike train and in any periodic state found. As a result, we obtain an explicit estimate of the degree of information condensation in the networks, and conclude that it depends strongly on the network's geometric size.
Proceedings of the Third International Workshop on Neural Networks and Fuzzy Logic, volume 2
NASA Technical Reports Server (NTRS)
Culbert, Christopher J. (Editor)
1993-01-01
Papers presented at the Neural Networks and Fuzzy Logic Workshop sponsored by the National Aeronautics and Space Administration and cosponsored by the University of Houston, Clear Lake, held 1-3 Jun. 1992 at the Lyndon B. Johnson Space Center in Houston, Texas are included. During the three days approximately 50 papers were presented. Technical topics addressed included adaptive systems; learning algorithms; network architectures; vision; robotics; neurobiological connections; speech recognition and synthesis; fuzzy set theory and application, control and dynamics processing; space applications; fuzzy logic and neural network computers; approximate reasoning; and multiobject decision making.
Cavallari, Stefano; Panzeri, Stefano; Mazzoni, Alberto
2014-01-01
Models of networks of Leaky Integrate-and-Fire (LIF) neurons are a widely used tool for theoretical investigations of brain function. These models have been used both with current- and conductance-based synapses. However, the differences in the dynamics expressed by these two approaches have so far been studied mainly at the single neuron level. To investigate how these synaptic models affect network activity, we compared the single neuron and neural population dynamics of conductance-based networks (COBNs) and current-based networks (CUBNs) of LIF neurons. These networks were endowed with sparse excitatory and inhibitory recurrent connections, and were tested in conditions including both low- and high-conductance states. We developed a novel procedure to obtain comparable networks by properly tuning the synaptic parameters not shared by the models. The comparable networks so defined displayed an excellent and robust match of first order statistics (average single neuron firing rates and average frequency spectrum of network activity). However, these comparable networks showed profound differences in the second order statistics of neural population interactions and in the modulation of these properties by external inputs. The correlation between inhibitory and excitatory synaptic currents and the cross-neuron correlation between synaptic inputs, membrane potentials and spike trains were stronger and more stimulus-modulated in the COBN. Because of these properties, the spike train correlation carried more information about the strength of the input in the COBN, although the firing rates were equally informative in both network models. Moreover, the network activity of the COBN showed stronger synchronization in the gamma band, and the spectral information about the input was higher and spread over a broader range of frequencies. These results suggest that the second order statistics of network dynamics depend strongly on the choice of synaptic model. PMID:24634645
NASA Astrophysics Data System (ADS)
Wang, Jin
Cognitive behaviors are determined by underlying neural networks. Many brain functions, such as learning and memory, can be described by attractor dynamics. We developed a theoretical framework for global dynamics by quantifying the landscape associated with the steady state probability distributions and the steady state curl flux, measuring the degree of non-equilibrium through detailed balance breaking. We found that the dynamics and oscillations in human brains responsible for cognitive processes and physiological rhythm regulation are determined not only by the landscape gradient but also by the flux. We found that the flux is closely related to the degree of asymmetry of the connections in neural networks and is the origin of neural oscillations. The neural oscillation landscape shows a closed-ring attractor topology. The landscape gradient attracts the network down to the ring. The flux is responsible for coherent oscillations on the ring. We suggest the flux may provide the driving force for associations among memories. Both landscape and flux determine the kinetic paths and speed of decision making. The kinetics and global stability of decision making are explored by quantifying the landscape topography through the barrier heights and the mean first passage time. The theoretical predictions are in agreement with experimental observations: more errors occur under time pressure. We quantitatively explored two mechanisms of the speed-accuracy tradeoff with speed emphasis and further uncovered the tradeoffs among speed, accuracy, and energy cost. Our results show an optimal balance among speed, accuracy, and energy cost in decision making. We uncovered possible mechanisms of changes of mind and how mind changes improve performance in decision processes. Our landscape approach can help facilitate an understanding of the underlying physical mechanisms of cognitive processes and identify the key elements in neural networks.
Neural basis for dynamic updating of object representation in visual working memory.
Takahama, Sachiko; Miyauchi, Satoru; Saiki, Jun
2010-02-15
In the real world, objects have multiple features and change dynamically. Thus, object representations must support both dynamic updating and feature binding. Previous studies have investigated the neural activity of dynamic updating or feature binding alone, but not both simultaneously. We investigated the neural basis of feature-bound object representation in a dynamically updating situation by conducting a multiple object permanence tracking task, which required observers to simultaneously process both the maintenance and the dynamic updating of feature-bound objects. Using an event-related design, we separated activities during memory maintenance and change detection. In the search for regions showing selective activation during dynamic updating of feature-bound objects, we identified a network during memory maintenance that comprised the inferior precentral sulcus, superior parietal lobule, and middle frontal gyrus. In the change detection period, various prefrontal regions, including the anterior prefrontal cortex, were activated. In updating the representation of dynamically moving objects, the inferior precentral sulcus closely cooperates with the so-called "frontoparietal network", and subregions of the frontoparietal network can be decomposed into those sensitive to spatial updating and those sensitive to feature binding. The anterior prefrontal cortex identifies changes in object representation by comparing memory and perceptual representations rather than maintaining object representations per se, as previously suggested. Copyright 2009 Elsevier Inc. All rights reserved.
Reward-Modulated Hebbian Plasticity as Leverage for Partially Embodied Control in Compliant Robotics
Burms, Jeroen; Caluwaerts, Ken; Dambre, Joni
2015-01-01
In embodied computation (or morphological computation), part of the complexity of motor control is offloaded to the body dynamics. We demonstrate that a simple Hebbian-like learning rule can be used to train systems with (partial) embodiment, and can be extended outside of the scope of traditional neural networks. To this end, we apply the learning rule to optimize the connection weights of recurrent neural networks with different topologies and for various tasks. We then apply this learning rule to a simulated compliant tensegrity robot by optimizing static feedback controllers that directly exploit the dynamics of the robot body. This leads to partially embodied controllers, i.e., hybrid controllers that naturally integrate the computations that are performed by the robot body into a neural network architecture. Our results demonstrate the universal applicability of reward-modulated Hebbian learning. Furthermore, they demonstrate the robustness of systems trained with the learning rule. This study strengthens our belief that compliant robots should or can be seen as computational units, instead of dumb hardware that needs a complex controller. This link between compliant robotics and neural networks is also the main reason for our search for simple universal learning rules for both neural networks and robotics. PMID:26347645
Collective dynamics of 'small-world' networks.
Watts, D J; Strogatz, S H
1998-06-04
Networks of coupled dynamical systems have been used to model biological oscillators, Josephson junction arrays, excitable media, neural networks, spatial games, genetic control networks and many other self-organizing systems. Ordinarily, the connection topology is assumed to be either completely regular or completely random. But many biological, technological and social networks lie somewhere between these two extremes. Here we explore simple models of networks that can be tuned through this middle ground: regular networks 'rewired' to introduce increasing amounts of disorder. We find that these systems can be highly clustered, like regular lattices, yet have small characteristic path lengths, like random graphs. We call them 'small-world' networks, by analogy with the small-world phenomenon (popularly known as 'six degrees of separation'). The neural network of the worm Caenorhabditis elegans, the power grid of the western United States, and the collaboration graph of film actors are shown to be small-world networks. Models of dynamical systems with small-world coupling display enhanced signal-propagation speed, computational power, and synchronizability. In particular, infectious diseases spread more easily in small-world networks than in regular lattices.
Research on FBG-Based CFRP Structural Damage Identification Using BP Neural Network
NASA Astrophysics Data System (ADS)
Geng, Xiangyi; Lu, Shizeng; Jiang, Mingshun; Sui, Qingmei; Lv, Shanshan; Xiao, Hang; Jia, Yuxi; Jia, Lei
2018-06-01
A damage identification system for carbon fiber reinforced plastic (CFRP) structures is investigated using fiber Bragg grating (FBG) sensors and a back propagation (BP) neural network. FBG sensors are applied to construct the sensing network to detect the structural dynamic response signals generated by active actuation. The damage identification model is built on the BP neural network. The dynamic signal characteristics extracted by the Fourier transform are the inputs, and the damage states are the outputs of the model. Besides, damages are simulated by placing lumped masses with different weights instead of inducing real damage, which is confirmed to be feasible by finite element analysis (FEA). Finally, the damage identification system is verified on a CFRP plate with a 300 mm × 300 mm experimental area, with accurate identification of varied damage states. The system provides a practical way for CFRP structural damage identification.
Neural network identification of aircraft nonlinear aerodynamic characteristics
NASA Astrophysics Data System (ADS)
Egorchev, M. V.; Tiumentsev, Yu V.
2018-02-01
The simulation problem for the controlled aircraft motion is considered in the case of imperfect knowledge of the modeling object and its operating conditions. The work aims to develop a class of modular semi-empirical dynamic models that combine the capabilities of theoretical and neural network modeling. We consider the use of semi-empirical neural network models for solving the problem of identifying aerodynamic characteristics of an aircraft. We also discuss the formation problem for a representative set of data characterizing the behavior of a simulated dynamic system, which is one of the critical tasks in the synthesis of ANN-models. The effectiveness of the proposed approach is demonstrated using a simulation example of the aircraft angular motion and identifying the corresponding coefficients of aerodynamic forces and moments.
Smart Sensing and Recognition Based on Models of Neural Networks
1990-11-15
[Garbled report-cover text from DTIC record AD-A230 701, University of Pennsylvania, Philadelphia, PA 19104-6390: "Smart Sensing and Recognition Based on Models of Neural Networks". Recoverable keywords: neural networks, photonic implementations, nonlinear dynamical signal processing. Recoverable abstract fragment: artificial pattern recognition does not develop in isolation but in synergism with sensory organs and their feature-forming networks.]
NASA Astrophysics Data System (ADS)
Tiwari, Shivendra N.; Padhi, Radhakant
2018-01-01
Following the philosophy of adaptive optimal control, a neural network-based state feedback optimal control synthesis approach is presented in this paper. First, accounting for a nominal system model, a single network adaptive critic (SNAC) based multi-layered neural network (called NN1) is synthesised offline. Next, another linear-in-weight neural network (called NN2) is trained online and augmented to NN1 in such a manner that their combined output represents the desired optimal costate for the actual plant. To do this, the nominal model needs to be updated online to adapt to the actual plant, which is done by synthesising yet another linear-in-weight neural network (called NN3) online. Training of NN3 is done by utilising the error information between the nominal and actual states and carrying out the necessary Lyapunov stability analysis using a Sobolev norm based Lyapunov function. This helps in training NN2 successfully to capture the required optimal relationship. The overall architecture is named 'Dynamically Re-optimised Single Network Adaptive Critic (DR-SNAC)'. Numerical results for two motivating illustrative problems are presented, including comparison studies with the closed-form solution for one problem, which clearly demonstrate the effectiveness and benefit of the proposed approach.
Reduced-Order Modeling for Flutter/LCO Using Recurrent Artificial Neural Network
NASA Technical Reports Server (NTRS)
Yao, Weigang; Liou, Meng-Sing
2012-01-01
The present study demonstrates the efficacy of a recurrent artificial neural network to provide a high fidelity time-dependent nonlinear reduced-order model (ROM) for flutter/limit-cycle oscillation (LCO) modeling. An artificial neural network is a relatively straightforward nonlinear method for modeling an input-output relationship from a set of known data, for which we use the radial basis function (RBF) with its parameters determined through a training process. The resulting RBF neural network, however, is only static and is not yet adequate for an application to problems of a dynamic nature. The recurrent neural network method [1] is applied to construct a reduced-order model resulting from a series of high-fidelity time-dependent data of aero-elastic simulations. Once the RBF neural network ROM is constructed properly, an accurate approximate solution can be obtained at a fraction of the cost of a full-order computation. The method derived during the study has been validated for predicting nonlinear aerodynamic forces in transonic flow and is capable of accurate flutter/LCO simulations. The obtained results indicate that the present recurrent RBF neural network is accurate and efficient for nonlinear aero-elastic system analysis.
NASA Astrophysics Data System (ADS)
Ignatyev, D. I.
2018-06-01
High-angle-of-attack dynamics of aircraft are complicated by dangerous phenomena such as wing rock, stall, and spin. An autonomous dynamically scaled aircraft model mounted in a three-degree-of-freedom (3DoF) dynamic rig is proposed for studying aircraft dynamics and prototyping control laws in a wind tunnel. The dynamics of the scaled aircraft model in the 3DoF manoeuvre rig in the wind tunnel are considered. Limit-cycle oscillations of the model are obtained at high angles of attack. A neural network (NN) adaptive control law suppressing wing rock motion is designed. The wing rock suppression with the proposed control law is validated using nonlinear time-domain simulations.
Organization of Anti-Phase Synchronization Pattern in Neural Networks: What are the Key Factors?
Li, Dong; Zhou, Changsong
2011-01-01
Anti-phase oscillation has been widely observed in cortical neural networks. Elucidating the mechanism underlying the organization of anti-phase patterns is of significance for better understanding more complicated pattern formation in brain networks. In dynamical systems theory, the organization of anti-phase oscillation patterns has usually been considered to relate to time delay in coupling. This is consistent with conduction delays in real neural networks in the brain due to the finite propagation velocity of action potentials. However, other structural factors in cortical neural networks, such as modular organization (connection density) and the coupling types (excitatory or inhibitory), could also play an important role. In this work, we investigate the anti-phase oscillation pattern organized on a two-module network of either neuronal cell models or neural mass models, and analyze the impact of the conduction delay times, the connection densities, and the coupling types. Our results show that delay times and coupling types can play key roles in this organization. The connection densities may have an influence on the stability if an anti-phase pattern exists due to the other factors. Furthermore, we show that anti-phase synchronization of slow oscillations can be achieved with small delay times if there is interaction between slow and fast oscillations. These results are significant for further understanding more realistic spatiotemporal dynamics of cortico-cortical communications. PMID:22232576
Mining Gene Regulatory Networks by Neural Modeling of Expression Time-Series.
Rubiolo, Mariano; Milone, Diego H; Stegmayer, Georgina
2015-01-01
Discovering gene regulatory networks from data is one of the most studied topics in recent years. Neural networks can be successfully used to infer an underlying gene network by modeling expression profiles as time series. This work proposes a novel method based on a pool of neural networks for obtaining a gene regulatory network from a gene expression dataset. They are used for modeling each possible interaction between pairs of genes in the dataset, and a set of mining rules is applied to accurately detect the subjacent relations among genes. The results obtained on artificial and real datasets confirm the method's effectiveness for discovering regulatory networks from a proper modeling of the temporal dynamics of gene expression profiles.
Generalised Transfer Functions of Neural Networks
NASA Astrophysics Data System (ADS)
Fung, C. F.; Billings, S. A.; Zhang, H.
1997-11-01
When artificial neural networks are used to model non-linear dynamical systems, the system structure, which can be extremely useful for analysis and design, is buried within the network architecture. In this paper, explicit expressions for the frequency response or generalised transfer functions of both feedforward and recurrent neural networks are derived in terms of the network weights. The derivation of the algorithm is established on the basis of the Taylor series expansion of the activation functions used in a particular neural network. This leads to a representation which is equivalent to the non-linear recursive polynomial model and enables the derivation of the transfer functions to be based on the harmonic expansion method. By mapping the neural network into the frequency domain, information about the structure of the underlying non-linear system can be recovered. Numerical examples are included to demonstrate the application of the new algorithm. These examples show that the frequency response functions appear to be highly sensitive to the network topology and training, and that the time domain properties fail to reveal deficiencies in the trained network structure.
Direct Adaptive Aircraft Control Using Dynamic Cell Structure Neural Networks
NASA Technical Reports Server (NTRS)
Jorgensen, Charles C.
1997-01-01
A Dynamic Cell Structure (DCS) Neural Network was developed which learns topology representing networks (TRNs) of F-15 aircraft aerodynamic stability and control derivatives. The network is integrated into a direct adaptive tracking controller. The combination produces a robust adaptive architecture capable of handling multiple accident and off-nominal flight scenarios. This paper describes the DCS network and modifications to the parameter estimation procedure. The work represents one step towards an integrated real-time reconfiguration control architecture for rapid prototyping of new aircraft designs. Performance was evaluated using three off-line benchmarks and on-line nonlinear Virtual Reality simulation. Flight control was evaluated under scenarios including differential stabilator lock, soft sensor failure, control and stability derivative variations, and air turbulence.
Calculation of precise firing statistics in a neural network model
NASA Astrophysics Data System (ADS)
Cho, Myoung Won
2017-08-01
A precise prediction of neural firing dynamics is requisite to understand the function of, and the learning process in, a biological neural network, which works depending on exact spike timings. Basically, the prediction of firing statistics is a delicate many-body problem because the firing probability of a neuron at a given time is determined by the summation over all effects from past firing states. A neural network model with the Feynman path integral formulation was recently introduced. In this paper, we present several methods to calculate firing statistics in the model. We apply the methods to some cases and compare the theoretical predictions with simulation results.
Plasticity of brain wave network interactions and evolution across physiologic states
Liu, Kang K. L.; Bartsch, Ronny P.; Lin, Aijing; Mantegna, Rosario N.; Ivanov, Plamen Ch.
2015-01-01
Neural plasticity transcends a range of spatio-temporal scales and serves as the basis of various brain activities and physiologic functions. At the microscopic level, it enables the emergence of brain waves with complex temporal dynamics. At the macroscopic level, presence and dominance of specific brain waves is associated with important brain functions. The role of neural plasticity at different levels in generating distinct brain rhythms and how brain rhythms communicate with each other across brain areas to generate physiologic states and functions remains not understood. Here we perform an empirical exploration of neural plasticity at the level of brain wave network interactions representing dynamical communications within and between different brain areas in the frequency domain. We introduce the concept of time delay stability (TDS) to quantify coordinated bursts in the activity of brain waves, and we employ a system-wide Network Physiology integrative approach to probe the network of coordinated brain wave activations and its evolution across physiologic states. We find an association between network structure and physiologic states. We uncover a hierarchical reorganization in the brain wave networks in response to changes in physiologic state, indicating new aspects of neural plasticity at the integrated level. Globally, we find that the entire brain network undergoes a pronounced transition from low connectivity in Deep Sleep and REM to high connectivity in Light Sleep and Wake. In contrast, we find that locally, different brain areas exhibit different network dynamics of brain wave interactions to achieve differentiation in function during different sleep stages. Moreover, our analyses indicate that plasticity also emerges in frequency-specific networks, which represent interactions across brain locations mediated through a specific frequency band. Comparing frequency-specific networks within the same physiologic state we find very different degree of network connectivity and link strength, while at the same time each frequency-specific network is characterized by a different signature pattern of sleep-stage stratification, reflecting a remarkable flexibility in response to change in physiologic state. These new aspects of neural plasticity demonstrate that in addition to dominant brain waves, the network of brain wave interactions is a previously unrecognized hallmark of physiologic state and function. PMID:26578891
On-line training of recurrent neural networks with continuous topology adaptation.
Obradovic, D
1996-01-01
This paper presents an online procedure for training dynamic neural networks with input-output recurrences whose topology is continuously adjusted to the complexity of the target system dynamics. This is accomplished by changing the number of elements of the network hidden layer whenever the existing topology cannot capture the dynamics presented by the new data. The training mechanism is based on a suitably altered extended Kalman filter (EKF) algorithm which is simultaneously used for the network parameter adjustment and for its state estimation. The network consists of a single hidden layer with Gaussian radial basis functions (GRBFs) and a linear output layer. The choice of the GRBF is induced by the requirements of online learning. The latter implies a network architecture which permits only local influence of a new data point, in order not to forget the previously learned dynamics. The continuous topology adaptation is implemented in our algorithm to avoid the memory and computational problems of using a regular grid of GRBFs covering the network input space. Furthermore, we show that the resulting parameter increase can be handled "smoothly" without interfering with the already acquired information. If the target system dynamics change over time, we show that a suitable forgetting factor can be used to "unlearn" the no-longer-relevant dynamics. The quality of the recurrent network training algorithm is demonstrated on the identification of nonlinear dynamic systems.
Revealing networks from dynamics: an introduction
NASA Astrophysics Data System (ADS)
Timme, Marc; Casadiego, Jose
2014-08-01
What can we learn from the collective dynamics of a complex network about its interaction topology? Taking the perspective from nonlinear dynamics, we briefly review recent progress on how to infer structural connectivity (direct interactions) from accessing the dynamics of the units. Potential applications range from interaction networks in physics, to chemical and metabolic reactions, protein and gene regulatory networks as well as neural circuits in biology and electric power grids or wireless sensor networks in engineering. Moreover, we briefly mention some standard ways of inferring effective or functional connectivity.
Non-Lipschitzian dynamics for neural net modelling
NASA Technical Reports Server (NTRS)
Zak, Michail
1989-01-01
Failure of the Lipschitz condition in unstable equilibrium points of dynamical systems leads to a multiple-choice response to an initial deterministic input. The evolution of such systems is characterized by a special type of unpredictability measured by unbounded Liapunov exponents. Possible relation of these systems to future neural networks is discussed.
An Application Development Platform for Neuromorphic Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dean, Mark; Chan, Jason; Daffron, Christopher
2016-01-01
Dynamic Adaptive Neural Network Arrays (DANNAs) are neuromorphic computing systems developed as a hardware-based approach to the implementation of neural networks. They feature highly adaptive and programmable structural elements, which model artificial neural networks with spiking behavior. We design them to solve problems using evolutionary optimization. In this paper, we highlight the current hardware and software implementations of DANNA, including their features, functionalities and performance. We then describe the development of an Application Development Platform (ADP) to support efficient application implementation and testing of DANNA-based solutions. We conclude with future directions.
Convergence analysis of sliding mode trajectories in multi-objective neural networks learning.
Costa, Marcelo Azevedo; Braga, Antonio Padua; de Menezes, Benjamin Rodrigues
2012-09-01
The Pareto-optimality concept is used in this paper in order to represent a constrained set of solutions that are able to trade-off the two main objective functions involved in neural networks supervised learning: data-set error and network complexity. The neural network is described as a dynamic system having error and complexity as its state variables and learning is presented as a process of controlling a learning trajectory in the resulting state space. In order to control the trajectories, sliding mode dynamics is imposed to the network. It is shown that arbitrary learning trajectories can be achieved by maintaining the sliding mode gains within their convergence intervals. Formal proofs of convergence conditions are therefore presented. The concept of trajectory learning presented in this paper goes further beyond the selection of a final state in the Pareto set, since it can be reached through different trajectories and states in the trajectory can be assessed individually against an additional objective function. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Shao, Yuxiang; Chen, Qing; Wei, Zhenhua
Logistics distribution center location evaluation is a dynamic, fuzzy, open and complicated nonlinear problem, which makes it difficult to evaluate a distribution center location by traditional analysis methods. The paper proposes a distribution center location evaluation system which uses a fuzzy neural network combined with a genetic algorithm. In this model, the neural network is adopted to construct the fuzzy system. By using the genetic algorithm, the parameters of the neural network are optimized and trained so as to improve the fuzzy system's abilities of self-learning and self-adaptation. Finally, the sampled data are used for training and testing in Matlab. The simulation results indicate that the proposed identification model has very small errors.
Neural networks for data mining electronic text collections
NASA Astrophysics Data System (ADS)
Walker, Nicholas; Truman, Gregory
1997-04-01
The use of neural networks in information retrieval and text analysis has primarily suffered from the issues of adequate document representation, the ability to scale to very large collections, dynamism in the face of new information, and the practical difficulties of basing the design on the use of supervised training sets. Perhaps the most important approach to begin solving these problems is the use of 'intermediate entities', which reduce the dimensionality of document representations and the size of document collections to manageable levels, coupled with the use of unsupervised neural network paradigms. This paper describes the issues, a fully configured neural network-based text analysis system--dataHARVEST--aimed at data mining text collections which begins this process, along with the remaining difficulties and potential ways forward.
Neural Network Control of a Magnetically Suspended Rotor System
NASA Technical Reports Server (NTRS)
Choi, Benjamin; Brown, Gerald; Johnson, Dexter
1997-01-01
Magnetic bearings offer significant advantages because of their noncontact operation, which can reduce maintenance. Higher speeds, no friction, no lubrication, weight reduction, precise position control, and active damping make them far superior to conventional contact bearings. However, there are technical barriers that limit the application of this technology in industry. One of them is the need for a nonlinear controller that can overcome the system nonlinearity and uncertainty inherent in magnetic bearings. This paper discusses the use of a neural network as a nonlinear controller that circumvents system nonlinearity. A neural network controller was well trained and successfully demonstrated on a small magnetic bearing rig. This work demonstrated the feasibility of using a neural network to control nonlinear magnetic bearings and systems with unknown dynamics.
Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition.
Spoerer, Courtney J; McClure, Patrick; Kriegeskorte, Nikolaus
2017-01-01
Feedforward neural networks provide the dominant model of how the brain performs visual object recognition. However, these networks lack the lateral and feedback connections, and the resulting recurrent neuronal dynamics, of the ventral visual pathway in the human and non-human primate brain. Here we investigate recurrent convolutional neural networks with bottom-up (B), lateral (L), and top-down (T) connections. Combining these types of connections yields four architectures (B, BT, BL, and BLT), which we systematically test and compare. We hypothesized that recurrent dynamics might improve recognition performance in the challenging scenario of partial occlusion. We introduce two novel occluded object recognition tasks to test the efficacy of the models, digit clutter (where multiple target digits occlude one another) and digit debris (where target digits are occluded by digit fragments). We find that recurrent neural networks outperform feedforward control models (approximately matched in parametric complexity) at recognizing objects, both in the absence of occlusion and in all occlusion conditions. Recurrent networks were also found to be more robust to the inclusion of additive Gaussian noise. Recurrent neural networks are better in two respects: (1) they are more neurobiologically realistic than their feedforward counterparts; (2) they are better in terms of their ability to recognize objects, especially under challenging conditions. This work shows that computer vision can benefit from using recurrent convolutional architectures and suggests that the ubiquitous recurrent connections in biological brains are essential for task performance.
Visual NNet: An Educational ANN's Simulation Environment Reusing Matlab Neural Networks Toolbox
ERIC Educational Resources Information Center
Garcia-Roselló, Emilio; González-Dacosta, Jacinto; Lado, Maria J.; Méndez, Arturo J.; Garcia Pérez-Schofield, Baltasar; Ferrer, Fátima
2011-01-01
Artificial Neural Networks (ANN's) are nowadays a common subject in different curricula of graduate and postgraduate studies. Due to the complex algorithms involved and the dynamic nature of ANN's, simulation software has been commonly used to teach this subject. This software has usually been developed specifically for learning purposes, because…
Yi, Qu; Zhan-ming, Li; Er-chao, Li
2012-11-01
A new fault detection and diagnosis (FDD) problem via the output probability density functions (PDFs) for non-Gaussian stochastic distribution systems (SDSs) is investigated. The PDFs can be approximated by radial basis function (RBF) neural networks. Different from conventional FDD problems, the measured information for FDD is the output stochastic distributions, and the stochastic variables involved are not confined to Gaussian ones. An RBF neural network technique is proposed so that the output PDFs can be formulated in terms of the dynamic weightings of the RBF neural network. In this work, a nonlinear adaptive observer-based fault detection and diagnosis algorithm is presented by introducing a tuning parameter so that the residual is as sensitive as possible to the fault. Stability and convergence analyses are performed for the error dynamic system in both fault detection and fault diagnosis. Finally, an illustrative example is given to demonstrate the efficiency of the proposed algorithm, and satisfactory results have been obtained. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
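To illustrate the core idea (an output PDF expressed as a weighted sum of radial basis functions, with a residual comparing nominal and estimated weightings), here is a small numpy sketch; the centers, widths, and weights are invented for illustration and are not taken from the paper.

    import numpy as np

    def rbf_pdf(y, centers, sigma, weights):
        # Output PDF approximated as a weighted sum of Gaussian basis
        # functions; in the FDD scheme the weightings evolve with the system.
        phi = np.exp(-(y[:, None] - centers[None, :]) ** 2 / (2 * sigma ** 2))
        pdf = phi @ weights
        return pdf / np.trapz(pdf, y)    # renormalize to integrate to one

    y = np.linspace(0.0, 1.0, 200)
    centers = np.linspace(0.1, 0.9, 8)
    rng = np.random.default_rng(0)
    w_nom = np.abs(rng.normal(size=8)) + 0.1              # nominal-model weightings
    w_meas = np.abs(w_nom + 0.05 * rng.normal(size=8))    # weightings estimated online
    p_nom = rbf_pdf(y, centers, 0.08, w_nom)
    p_meas = rbf_pdf(y, centers, 0.08, w_meas)
    residual = np.trapz(np.abs(p_meas - p_nom), y)   # grows when a fault distorts the PDF
    print(residual)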
Neural learning of constrained nonlinear transformations
NASA Technical Reports Server (NTRS)
Barhen, Jacob; Gulati, Sandeep; Zak, Michail
1989-01-01
Two issues that are fundamental to developing autonomous intelligent robots, namely, rudimentary learning capability and dexterous manipulation, are examined. A powerful neural learning formalism is introduced for addressing a large class of nonlinear mapping problems, including redundant manipulator inverse kinematics, commonly encountered during the design of real-time adaptive control mechanisms. Artificial neural networks with terminal attractor dynamics are used. The rapid network convergence resulting from the infinite local stability of these attractors allows the development of fast neural learning algorithms. Approaches to manipulator inverse kinematics are reviewed, the neurodynamics model is discussed, and the neural learning algorithm is presented.
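The finite-time convergence provided by terminal attractors can be seen in the one-dimensional toy system dx/dt = -x^(1/3), whose non-Lipschitz right-hand side reaches the origin at the finite time (3/2)x0^(2/3), unlike the ordinary attractor dx/dt = -x. A minimal numpy sketch, with step size and horizon chosen only for illustration:

    import numpy as np

    dt = 1e-3
    x_term, x_ord, t_term = 1.0, 1.0, None
    for k in range(3000):                        # integrate for 3 time units
        step = dt * abs(x_term) ** (1.0 / 3.0)
        if t_term is None and step >= abs(x_term):
            t_term = k * dt                      # Euler step reaches the origin
            x_term = 0.0
        elif t_term is None:
            x_term -= np.sign(x_term) * step     # terminal: dx/dt = -x^(1/3)
        x_ord += dt * (-x_ord)                   # ordinary: dx/dt = -x
    print(t_term)   # finite settling time, close to (3/2) * x0^(2/3) = 1.5
    print(x_ord)    # still nonzero after t = 3: about exp(-3) ~ 0.05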
Li, Zhijun; Su, Chun-Yi
2013-09-01
In this paper, adaptive neural network control is investigated for single-master-multiple-slaves teleoperation in consideration of time delays and input dead-zone uncertainties for multiple mobile manipulators carrying a common object in a cooperative manner. Firstly, concise dynamics of teleoperation systems consisting of a single master robot, multiple coordinated slave robots, and the object are developed in the task space. To handle asymmetric time-varying delays in communication channels and unknown asymmetric input dead zones, the nonlinear dynamics of the teleoperation system are transformed into two subsystems through feedback linearization: local master or slave dynamics including the unknown input dead zones and delayed dynamics for the purpose of synchronization. Then, a model reference neural network control strategy based on linear matrix inequalities (LMI) and adaptive techniques is proposed. The developed control approach ensures that the defined tracking errors converge to zero whereas the coordination internal force errors remain bounded and can be made arbitrarily small. Throughout this paper, stability analysis is performed via explicit Lyapunov techniques under specific LMI conditions. The proposed adaptive neural network control scheme is robust against motion disturbances, parametric uncertainties, time-varying delays, and input dead zones, which is validated by simulation studies.
Evolving neural networks with genetic algorithms to study the string landscape
NASA Astrophysics Data System (ADS)
Ruehle, Fabian
2017-08-01
We study possible applications of artificial neural networks to examine the string landscape. Since the field of application is rather versatile, we propose to dynamically evolve these networks via genetic algorithms. This means that we start from basic building blocks and combine them such that the neural network performs best for the application we are interested in. We study three areas in which neural networks can be applied: to classify models according to a fixed set of (physically) appealing features, to find a concrete realization for a computation for which the precise algorithm is known in principle but very tedious to actually implement, and to predict or approximate the outcome of some involved mathematical computation that is too inefficient to apply directly, e.g. in model scans within the string landscape. We present simple examples that arise in string phenomenology for all three types of problems and discuss how they can be addressed by evolving neural networks from genetic algorithms.
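A toy sketch of the evolutionary loop described above: a genetic algorithm mutates and recombines lists of hidden-layer widths, starting from basic building blocks. The fitness function is an invented stand-in; in the paper it would be performance on a string-landscape task.

    import random

    def fitness(layers):
        # Stand-in objective: reward networks whose total width is near a
        # target capacity while penalizing depth. The real fitness would be
        # task performance (classification accuracy, prediction error, ...).
        return -abs(sum(layers) - 64) - 0.1 * len(layers)

    def mutate(layers):
        layers = layers[:]
        op = random.choice(["grow", "shrink", "resize"])
        if op == "grow":
            layers.insert(random.randrange(len(layers) + 1), random.randint(4, 32))
        elif op == "shrink" and len(layers) > 1:
            layers.pop(random.randrange(len(layers)))
        else:
            i = random.randrange(len(layers))
            layers[i] = max(1, layers[i] + random.randint(-8, 8))
        return layers

    def crossover(a, b):
        # Single-point crossover on the layer lists.
        return a[:random.randrange(1, len(a) + 1)] + b[random.randrange(1, len(b) + 1):]

    pop = [[random.randint(4, 32)] for _ in range(20)]   # start from basic blocks
    for gen in range(50):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:10]                               # keep the fittest half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(10)]
        pop = parents + children
    pop.sort(key=fitness, reverse=True)
    print(pop[0], fitness(pop[0]))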
Li, Xuanying; Li, Xiaotong; Hu, Cheng
2017-12-01
In this paper, without transforming the second-order inertial neural networks into first-order differential systems by variable substitutions, asymptotic stability and synchronization for a class of delayed inertial neural networks are investigated. Firstly, a new Lyapunov functional is constructed to directly establish the asymptotic stability of the inertial neural networks, and some new stability criteria are derived by means of the Barbalat Lemma. Additionally, by designing a new feedback control strategy, the asymptotic synchronization of the addressed inertial networks is studied and some effective conditions are obtained. To reduce the control cost, an adaptive control scheme is designed to realize the asymptotic synchronization. It is noted that the dynamical behaviors of inertial neural networks are directly analyzed in this paper by constructing some new Lyapunov functionals; this is totally different from the traditional reduced-order variable substitution method. Finally, some numerical simulations are given to demonstrate the effectiveness of the derived theoretical results. Copyright © 2017 Elsevier Ltd. All rights reserved.
Optimum Design of Aerospace Structural Components Using Neural Networks
NASA Technical Reports Server (NTRS)
Berke, L.; Patnaik, S. N.; Murthy, P. L. N.
1993-01-01
The application of artificial neural networks to capture structural design expertise is demonstrated. The principal advantage of a trained neural network is that it requires a trivial computational effort to produce an acceptable new design. For the class of problems addressed, the development of a conventional expert system would be extremely difficult. In the present effort, a structural optimization code with multiple nonlinear programming algorithms and an artificial neural network code NETS were used. A set of optimum designs for a ring and two aircraft wings for static and dynamic constraints were generated using the optimization codes. The optimum design data were processed to obtain input and output pairs, which were used to develop a trained artificial neural network using the code NETS. Optimum designs for new design conditions were predicted using the trained network. Neural net prediction of optimum designs was found to be satisfactory for the majority of the output design parameters. However, results from the present study indicate that caution must be exercised to ensure that all design variables are within selected error bounds.
NASA Technical Reports Server (NTRS)
Clancy, Daniel J.; Oezguener, Uemit; Graham, Ronald E.
1994-01-01
The potential for excessive plume impingement loads on Space Station Freedom solar arrays, caused by jet firings from an approaching Space Shuttle, is addressed. An artificial neural network is designed to determine commanded solar array beta gimbal angle for minimum plume loads. The commanded angle would be determined dynamically. The network design proposed involves radial basis functions as activation functions. Design, development, and simulation of this network design are discussed.
Adhikari, Shyam Prasad; Yang, Changju; Slot, Krzysztof; Kim, Hyongsuk
2018-01-10
This paper presents a vision sensor-based solution to the challenging problem of detecting and following trails in highly unstructured natural environments like forests, rural areas and mountains, using a combination of a deep neural network and dynamic programming. The deep neural network (DNN) concept has recently emerged as a very effective tool for processing vision sensor signals. A patch-based DNN is trained with supervised data to classify fixed-size image patches into "trail" and "non-trail" categories, and reshaped to a fully convolutional architecture to produce a trail segmentation map for arbitrary-sized input images. As trail and non-trail patches do not exhibit clearly defined shapes or forms, the patch-based classifier is prone to misclassification and produces sub-optimal trail segmentation maps. Dynamic programming is introduced to find an optimal trail on the sub-optimal DNN output map. Experimental results showing accurate trail detection for real-world trail datasets captured with a head-mounted vision system are presented.
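The dynamic programming step can be sketched independently of the DNN: given a trail-probability map (random here), find the top-to-bottom path of maximum accumulated probability, moving at most one column per row. All names below are hypothetical.

    import numpy as np

    def best_trail(prob):
        # Maximum-probability top-to-bottom path through a trail-probability
        # map, moving one column left, straight, or right per row.
        h, w = prob.shape
        score = prob.copy()
        for r in range(1, h):
            left = np.r_[-np.inf, score[r - 1, :-1]]
            right = np.r_[score[r - 1, 1:], -np.inf]
            score[r] += np.maximum(score[r - 1], np.maximum(left, right))
        path = [int(np.argmax(score[-1]))]          # best endpoint on last row
        for r in range(h - 2, -1, -1):              # backtrack row by row
            c = path[-1]
            lo = max(0, c - 1)
            path.append(lo + int(np.argmax(score[r, lo:c + 2])))
        return path[::-1]                           # trail column for each row

    prob = np.random.rand(100, 64)    # stand-in for the DNN's segmentation map
    print(best_trail(prob)[:10])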
Yan, Zheng; Wang, Jun
2014-03-01
This paper presents a neural network approach to robust model predictive control (MPC) for constrained discrete-time nonlinear systems with unmodeled dynamics affected by bounded uncertainties. The exact nonlinear model of the underlying process is not precisely known, but a partially known nominal model is available. This partially known nonlinear model is first decomposed into an affine term plus an unknown high-order term via Jacobian linearization. The linearization residue combined with unmodeled dynamics is then modeled using an extreme learning machine via supervised learning. The minimax methodology is exploited to deal with bounded uncertainties. The minimax optimization problem is reformulated as a convex minimization problem and is iteratively solved by a two-layer recurrent neural network. The proposed neurodynamic approach to nonlinear MPC improves computational efficiency and sheds light on the real-time implementability of MPC technology. Simulation results are provided to substantiate the effectiveness and characteristics of the proposed approach.
Fasoli, Diego; Cattani, Anna; Panzeri, Stefano
2018-05-01
Despite their biological plausibility, neural network models with asymmetric weights are rarely solved analytically, and closed-form solutions are available only in some limiting cases or in some mean-field approximations. We found exact analytical solutions of an asymmetric spin model of neural networks with arbitrary size without resorting to any approximation, and we comprehensively studied its dynamical and statistical properties. The network had discrete time evolution equations and binary firing rates, and it could be driven by noise with any distribution. We found analytical expressions of the conditional and stationary joint probability distributions of the membrane potentials and the firing rates. By manipulating the conditional probability distribution of the firing rates, we extended to stochastic networks the associative learning rule previously introduced by Personnaz and coworkers. The new learning rule allowed the safe storage, in the presence of noise, of point and cyclic attractors, with useful implications for content-addressable memories. Furthermore, we studied the bifurcation structure of the network dynamics in the zero-noise limit. We analytically derived examples of the codimension 1 and codimension 2 bifurcation diagrams of the network, which describe how the neuronal dynamics changes with the external stimuli. This showed that the network may undergo transitions among multistable regimes, oscillatory behavior elicited by asymmetric synaptic connections, and various forms of spontaneous symmetry breaking. We also calculated analytically groupwise correlations of neural activity in the network in the stationary regime. This revealed neuronal regimes where, statistically, the membrane potentials and the firing rates are either synchronous or asynchronous. Our results are valid for networks with any number of neurons, although our equations can be realistically solved only for small networks. For completeness, we also derived the network equations in the thermodynamic limit of infinite network size and we analytically studied their local bifurcations. All the analytical results were extensively validated by numerical simulations.
Kim, Junkyeong; Lee, Chaggil; Park, Seunghee
2017-06-07
Concrete is one of the most common materials used to construct a variety of civil infrastructures. However, since concrete might be susceptible to brittle fracture, it is essential to confirm the strength of concrete at the early-age stage of the curing process to prevent unexpected collapse. To address this issue, this study proposes a novel method to estimate the early-age strength of concrete, by integrating an artificial neural network algorithm with a dynamic response measurement of the concrete material. The dynamic response signals of the concrete, including both electromechanical impedances and guided ultrasonic waves, are obtained from an embedded piezoelectric sensor module. The cross-correlation coefficient of the electromechanical impedance signals and the amplitude of the guided ultrasonic wave signals are selected to quantify the variation in dynamic responses according to the strength of the concrete. Furthermore, an artificial neural network algorithm is used to verify a relationship between the variation in dynamic response signals and concrete strength. The results of an experimental study confirm that the proposed approach can be effectively applied to estimate the strength of concrete material from the early-age stage of the curing process.
Flight control with adaptive critic neural network
NASA Astrophysics Data System (ADS)
Han, Dongchen
2001-10-01
In this dissertation, the adaptive critic neural network technique is applied to solve complex nonlinear system control problems. Based on dynamic programming, the adaptive critic neural network can embed the optimal solution into a neural network. Though trained off-line, the neural network forms a real-time feedback controller. Because of its general interpolation properties, the neurocontroller has inherent robustness. The problems solved here are agile missile control for the U.S. Air Force and a midcourse guidance law for the U.S. Navy. In the first three papers, the neural network was used to control an air-to-air agile missile to implement a minimum-time heading reversal in a vertical plane under the following conditions: a system without constraint, a system with a control inequality constraint, and a system with a state inequality constraint. While the agile missile is a one-dimensional problem, the midcourse guidance law is the first test-bed for a multidimensional problem. In the fourth paper, the neurocontroller is synthesized to guide a surface-to-air missile to a fixed final condition, and to a flexible final condition from a variable initial condition. In order to evaluate the adaptive critic neural network approach, the numerical solutions for these cases are also obtained by solving the two-point boundary-value problem with a shooting method. All of the results showed that the adaptive critic neural network could solve complex nonlinear system control problems.
Proceedings of the Third International Workshop on Neural Networks and Fuzzy Logic, volume 1
NASA Technical Reports Server (NTRS)
Culbert, Christopher J. (Editor)
1993-01-01
Documented here are papers presented at the Neural Networks and Fuzzy Logic Workshop sponsored by the National Aeronautics and Space Administration and cosponsored by the University of Houston, Clear Lake. The workshop was held June 1-3, 1992 at the Lyndon B. Johnson Space Center in Houston, Texas. During the three days approximately 50 papers were presented. Technical topics addressed included adaptive systems; learning algorithms; network architectures; vision; robotics; neurobiological connections; speech recognition and synthesis; fuzzy set theory and application, control, and dynamics processing; space applications; fuzzy logic and neural network computers; approximate reasoning; and multiobject decision making.
Complex Networks in Psychological Models
NASA Astrophysics Data System (ADS)
Wedemann, R. S.; Carvalho, L. S. A. V. D.; Donangelo, R.
We develop schematic, self-organizing, neural-network models to describe mechanisms associated with mental processes via a neurocomputational substrate. These models are examples of real-world complex networks with interesting general topological structures. Considering dopaminergic signal-to-noise neuronal modulation in the central nervous system, we propose neural network models to explain the development of cortical map structure and the dynamics of memory access, and unify different mental processes into a single neurocomputational substrate. Based on our neural network models, neurotic behavior may be understood as an associative memory process in the brain, and the linguistic, symbolic associative process involved in psychoanalytic working-through can be mapped onto a corresponding process of reconfiguration of the neural network. The models are illustrated through computer simulations, where we varied dopaminergic modulation and observed the self-organizing emergent patterns at the resulting semantic map, interpreting them as different manifestations of mental functioning, from psychotic through to normal and neurotic behavior, and creativity.
Dynamic neural activity during stress signals resilient coping
Sinha, Rajita; Lacadie, Cheryl M.; Constable, R. Todd; Seo, Dongju
2016-01-01
Active coping underlies a healthy stress response, but neural processes supporting such resilient coping are not well-known. Using a brief, sustained exposure paradigm contrasting highly stressful, threatening, and violent stimuli versus nonaversive neutral visual stimuli in a functional magnetic resonance imaging (fMRI) study, we show significant subjective, physiologic, and endocrine increases and temporally related dynamically distinct patterns of neural activation in brain circuits underlying the stress response. First, stress-specific sustained increases in the amygdala, striatum, hypothalamus, midbrain, right insula, and right dorsolateral prefrontal cortex (DLPFC) regions supported the stress processing and reactivity circuit. Second, dynamic neural activation during stress versus neutral runs, showing early increases followed by later reduced activation in the ventrolateral prefrontal cortex (VLPFC), dorsal anterior cingulate cortex (dACC), left DLPFC, hippocampus, and left insula, suggested a stress adaptation response network. Finally, dynamic stress-specific mobilization of the ventromedial prefrontal cortex (VmPFC), marked by initial hypoactivity followed by increased VmPFC activation, pointed to the VmPFC as a key locus of the emotional and behavioral control network. Consistent with this finding, greater neural flexibility signals in the VmPFC during stress correlated with active coping ratings whereas lower dynamic activity in the VmPFC also predicted a higher level of maladaptive coping behaviors in real life, including binge alcohol intake, emotional eating, and frequency of arguments and fights. These findings demonstrate acute functional neuroplasticity during stress, with distinct and separable brain networks that underlie critical components of the stress response, and a specific role for VmPFC neuroflexibility in stress-resilient coping. PMID:27432990
NASA Astrophysics Data System (ADS)
Boaretto, B. R. R.; Budzinski, R. C.; Prado, T. L.; Kurths, J.; Lopes, S. R.
2018-05-01
It is known that neural networks under small-world topology can present anomalous synchronization and nonstationary behavior for weak coupling regimes. Here, we propose methods to suppress the anomalous synchronization and also to diminish the nonstationary behavior occurring in a weakly coupled neural network under small-world topology. We consider a network of 2000 thermally sensitive identical neurons, based on the Hodgkin-Huxley model, in a small-world topology with the probability of adding a nonlocal connection equal to p = 0.001. Based on experimental protocols to suppress anomalous synchronization, as well as nonstationary behavior of the neural network dynamics, we make use of (i) an external stimulus (pulsed current); (ii) changes of biological parameters (neuron membrane conductance); and (iii) body temperature changes. Quantification of phase synchronization makes use of the Kuramoto order parameter, while recurrence quantification analysis, particularly the determinism, computed over the easily accessible mean field of the network, the local field potential (LFP), is used to evaluate nonstationary states. We show that the proposed methods can control the anomalous synchronization and nonstationarity occurring for weak coupling parameters without any effect on the individual neuron dynamics, nor on the expected asymptotic synchronized states occurring for large values of the coupling parameter.
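For reference, the Kuramoto order parameter used to quantify phase synchronization is r = |(1/N) * sum_j exp(i*theta_j)|. A small numpy sketch on synthetic phase data (the Hodgkin-Huxley network itself is not reproduced here):

    import numpy as np

    def kuramoto_order(phases):
        # phases: shape (N,), each neuron's phase in radians.
        # r ~ 1 indicates phase synchrony; r ~ 0 an incoherent state.
        return np.abs(np.mean(np.exp(1j * phases)))

    rng = np.random.default_rng(0)
    clustered = rng.normal(0.0, 0.1, 2000)          # tightly clustered phases
    scattered = rng.uniform(-np.pi, np.pi, 2000)    # uniformly scattered phases
    print(kuramoto_order(clustered))   # close to 1
    print(kuramoto_order(scattered))   # close to 0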
NASA Astrophysics Data System (ADS)
Chang, Hsien-Cheng
Two novel synergistic systems consisting of artificial neural networks and fuzzy inference systems are developed to determine geophysical properties by using well log data. These systems are employed to improve the determination accuracy in carbonate rocks, which are generally more complex than siliciclastic rocks. One system, consisting of a single adaptive resonance theory (ART) neural network and three fuzzy inference systems (FISs), is used to determine the permeability category. The other system, which is composed of three ART neural networks and a single FIS, is employed to determine the lithofacies. The geophysical properties studied in this research, permeability category and lithofacies, are treated as categorical data. The permeability values are transformed into a "permeability category" to account for the effects of scale differences between core analyses and well logs, and heterogeneity in the carbonate rocks. The ART neural networks dynamically cluster the input data sets into different groups. The FIS is used to incorporate geologic experts' knowledge, which is usually in linguistic forms, into systems. These synergistic systems thus provide viable alternative solutions to overcome the effects of heterogeneity, the uncertainties of carbonate rock depositional environments, and the scarcity of well log data. The results obtained in this research show promising improvements over backpropagation neural networks. For the permeability category, the prediction accuracies are 68.4% and 62.8% for the multiple-single ART neural network-FIS and a single backpropagation neural network, respectively. For lithofacies, the prediction accuracies are 87.6%, 79%, and 62.8% for the single-multiple ART neural network-FIS, a single ART neural network, and a single backpropagation neural network, respectively. The sensitivity analysis results show that the multiple-single ART neural networks-FIS and a single ART neural network possess the same matching trends in determining lithofacies. This research shows that the adaptive resonance theory neural networks enable decision-makers to clearly distinguish the importance of different pieces of data which are useful in three-dimensional subsurface modeling. Geologic experts' knowledge can be easily applied and maintained by using the fuzzy inference systems.
NASA Astrophysics Data System (ADS)
Tang, Evelyn; Giusti, Chad; Baum, Graham; Gu, Shi; Pollock, Eli; Kahn, Ari; Roalf, David; Moore, Tyler; Ruparel, Kosha; Gur, Ruben; Gur, Raquel; Satterthwaite, Theodore; Bassett, Danielle
Motivated by a recent demonstration that the network architecture of white matter supports emerging control of diverse neural dynamics as children mature into adults, we seek to investigate structural mechanisms that support these changes. Beginning from a network representation of diffusion imaging data, we simulate network evolution with a set of simple growth rules built on principles of network control. Notably, the optimal evolutionary trajectory displays a striking correspondence to the progression of child to adult brain, suggesting that network control is a driver of development. More generally, and in comparison to the complete set of available models, we demonstrate that all brain networks from child to adult are structured in a manner highly optimized for the control of diverse neural dynamics. Within this near-optimality, we observe differences in the predicted control mechanisms of the child and adult brains, suggesting that the white matter architecture in children has a greater potential to increasingly support brain state transitions, potentially underlying cognitive switching.
NASA Astrophysics Data System (ADS)
Liu, Xiaosong; Shan, Zebiao; Li, Yuanchun
2017-04-01
Pinpoint landing is a critical step in some asteroid exploration missions. This paper is concerned with descent trajectory control for soft touchdown on a small irregularly-shaped asteroid. A dynamic boundary layer based neural network quasi-sliding mode control law is proposed to track a desired descending path. The asteroid's gravitational acceleration acting on the spacecraft is described by the polyhedron method. Considering the presence of input constraints and unmodeled acceleration, the dynamic equation of relative motion is presented first. The desired descending path is planned using a cubic polynomial method, and a collision detection algorithm is designed. To perform trajectory tracking, a neural network sliding mode control law is given first, where the sliding mode control is used to ensure the convergence of the system states. Two radial basis function neural networks (RBFNNs) are used, respectively, as an approximator for the unmodeled term and as a compensator for the difference between the actual magnitude-constrained control input and the nominal control. To reduce the chattering induced by traditional sliding mode control and guarantee the reachability of the system, a specific saturation function with a dynamic boundary layer is proposed to replace the sign function in the preceding control law. Through the Lyapunov approach, the reachability condition of the control system is given. The improved control law guarantees that the system state moves within a gradually shrinking quasi-sliding mode band. Numerical simulation results demonstrate the effectiveness of the proposed control strategy.
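The boundary-layer saturation that replaces the sign function can be sketched in a few lines. The fixed layer width phi used here is purely illustrative; in the paper the boundary layer shrinks dynamically.

    import numpy as np

    def sat(s, phi):
        # Boundary-layer saturation used in place of sign(s): linear inside
        # the layer |s| <= phi, saturated outside. Here phi is fixed; the
        # paper's boundary layer width varies over time.
        return np.clip(s / phi, -1.0, 1.0)

    s = np.linspace(-2.0, 2.0, 9)
    print(sat(s, 0.5))    # smooth transition through zero, unlike np.sign(s)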
Toutounji, Hazem; Pipa, Gordon
2014-01-01
It is a long-established fact that neuronal plasticity occupies the central role in generating neural function and computation. Nevertheless, no unifying account exists of how neurons in a recurrent cortical network learn to compute on temporally and spatially extended stimuli. However, these stimuli constitute the norm, rather than the exception, of the brain's input. Here, we introduce a geometric theory of learning spatiotemporal computations through neuronal plasticity. To that end, we rigorously formulate the problem of neural representations as a relation in space between stimulus-induced neural activity and the asymptotic dynamics of excitable cortical networks. Backed up by computer simulations and numerical analysis, we show that two canonical and widely spread forms of neuronal plasticity, that is, spike-timing-dependent synaptic plasticity and intrinsic plasticity, are both necessary for creating neural representations, such that these computations become realizable. Interestingly, the effects of these forms of plasticity on the emerging neural code relate to properties necessary for both combating and utilizing noise. The neural dynamics also exhibits features of the most likely stimulus in the network's spontaneous activity. These properties of the spatiotemporal neural code resulting from plasticity, having their grounding in nature, further consolidate the biological relevance of our findings. PMID:24651447
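Of the two plasticity mechanisms studied, spike-timing-dependent plasticity is the easier to sketch in isolation: a pair-based rule in which pre-before-post spike pairs potentiate a weight and post-before-pre pairs depress it. The amplitudes and time constant below are generic textbook values, not the paper's.

    import numpy as np

    def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
        # Pair-based STDP kernel; dt = t_post - t_pre in milliseconds.
        # Pre-before-post (dt > 0) potentiates, post-before-pre depresses.
        if dt > 0:
            return a_plus * np.exp(-dt / tau)
        return -a_minus * np.exp(dt / tau)

    w = 0.5
    for t_pre, t_post in [(10.0, 15.0), (40.0, 38.0), (60.0, 61.0)]:
        w = float(np.clip(w + stdp_dw(t_post - t_pre), 0.0, 1.0))
        print(round(w, 4))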
Structure-function clustering in multiplex brain networks
NASA Astrophysics Data System (ADS)
Crofts, J. J.; Forrester, M.; O'Dea, R. D.
2016-10-01
A key question in neuroscience is to understand how a rich functional repertoire of brain activity arises within relatively static networks of structurally connected neural populations: elucidating the subtle interactions between evoked “functional connectivity” and the underlying “structural connectivity” has the potential to address this. These structural-functional networks (and neural networks more generally) are more naturally described using a multilayer or multiplex network approach than by the standard single-layer network analyses more typically applied to such systems. In this letter, we address such issues by exploring important structure-function relations in the Macaque cortical network by modelling it as a duplex network that comprises an anatomical layer, describing the known (macro-scale) network topology of the Macaque monkey, and a functional layer derived from simulated neural activity. We investigate and characterize correlations between structural and functional layers, as system parameters controlling simulated neural activity are varied, by employing recently described multiplex network measures. Moreover, we propose a novel measure of multiplex structure-function clustering which allows us to investigate the emergence of functional connections that are distinct from the underlying cortical structure, and to highlight the dependence of multiplex structure on the neural dynamical regime.
Neural dynamics of reward probability coding: a Magnetoencephalographic study in humans
Thomas, Julie; Vanni-Mercier, Giovanna; Dreher, Jean-Claude
2013-01-01
Prediction of future rewards and discrepancy between actual and expected outcomes (prediction error) are crucial signals for adaptive behavior. In humans, a number of fMRI studies demonstrated that reward probability modulates these two signals in a large brain network. Yet, the spatio-temporal dynamics underlying the neural coding of reward probability remains unknown. Here, using magnetoencephalography, we investigated the neural dynamics of prediction and reward prediction error computations while subjects learned to associate cues of slot machines with monetary rewards with different probabilities. We showed that event-related magnetic fields (ERFs) arising from the visual cortex coded the expected reward value 155 ms after the cue, demonstrating that reward value signals emerge early in the visual stream. Moreover, a prediction error was reflected in ERF peaking 300 ms after the rewarded outcome and showing decreasing amplitude with higher reward probability. This prediction error signal was generated in a network including the anterior and posterior cingulate cortex. These findings pinpoint the spatio-temporal characteristics underlying reward probability coding. Together, our results provide insights into the neural dynamics underlying the ability to learn probabilistic stimuli-reward contingencies. PMID:24302894
Spatiotemporal discrimination in neural networks with short-term synaptic plasticity
NASA Astrophysics Data System (ADS)
Shlaer, Benjamin; Miller, Paul
2015-03-01
Cells in recurrently connected neural networks exhibit bistability, which allows for stimulus information to persist in a circuit even after stimulus offset, i.e. short-term memory. However, such a system does not have enough hysteresis to encode temporal information about the stimuli. The biophysically described phenomenon of synaptic depression decreases synaptic transmission strengths due to increased presynaptic activity. This short-term reduction in synaptic strengths can destabilize attractor states in excitatory recurrent neural networks, causing the network to move along stimulus-dependent dynamical trajectories. Such a network can successfully separate amplitudes and durations of stimuli from the number of successive stimuli (see "Stimulus number, duration and intensity encoding in randomly connected attractor networks with synaptic depression," Front. Comput. Neurosci. 7:59), and so provides a strong candidate network for the encoding of spatiotemporal information. Here we explicitly demonstrate the capability of a recurrent neural network with short-term synaptic depression to discriminate between the temporal sequences in which spatial stimuli are presented.
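The depression mechanism at the heart of this account can be sketched with a simple resource model (in the spirit of Tsodyks-Markram dynamics; all parameters are illustrative): each presynaptic spike consumes a fraction of the available synaptic resources, so successive responses weaken and the network state becomes history dependent.

    import numpy as np

    # Depressing synapse: each presynaptic spike consumes a fraction U of the
    # available resources s, which then recover with time constant tau_rec.
    tau_rec, U, dt = 300.0, 0.4, 1.0             # milliseconds
    s, responses = 1.0, []
    spikes = np.zeros(1000)
    spikes[100:500:50] = 1                       # a regular presynaptic burst
    for spk in spikes:
        if spk:
            responses.append(U * s)              # transmitted amplitude
            s -= U * s                           # resources consumed
        s += dt * (1.0 - s) / tau_rec            # slow recovery toward s = 1
    print([round(a, 3) for a in responses])      # successively weaker responses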
Cummine, Jacqueline; Cribben, Ivor; Luu, Connie; Kim, Esther; Bahktiari, Reyhaneh; Georgiou, George; Boliek, Carol A
2016-05-01
The neural circuitry associated with language processing is complex and dynamic. Graphical models are useful for studying complex neural networks as this method provides information about unique connectivity between regions within the context of the entire network of interest. Here, the authors explored the neural networks during covert reading to determine the role of feedforward and feedback loops in covert speech production. Brain activity of skilled adult readers was assessed in real word and pseudoword reading tasks with functional MRI (fMRI). The authors provide evidence for activity coherence in the feedforward system (inferior frontal gyrus-supplementary motor area) during real word reading and in the feedback system (supramarginal gyrus-precentral gyrus) during pseudoword reading. Graphical models provided evidence of an extensive, highly connected, neural network when individuals read real words that relied on coordination of the feedforward system. In contrast, when individuals read pseudowords the authors found a limited/restricted network that relied on coordination of the feedback system. Together, these results underscore the importance of considering multiple pathways and articulatory loops during language tasks and provide evidence for a print-to-speech neural network. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Integration of Continuous-Time Dynamics in a Spiking Neural Network Simulator.
Hahne, Jan; Dahmen, David; Schuecker, Jannis; Frommer, Andreas; Bolten, Matthias; Helias, Moritz; Diesmann, Markus
2017-01-01
Contemporary modeling approaches to the dynamics of neural networks include two important classes of models: biologically grounded spiking neuron models and functionally inspired rate-based units. We present a unified simulation framework that supports the combination of the two for multi-scale modeling, enables the quantitative validation of mean-field approaches by spiking network simulations, and provides an increase in reliability by usage of the same simulation code and the same network model specifications for both model classes. While most spiking simulations rely on the communication of discrete events, rate models require time-continuous interactions between neurons. Exploiting the conceptual similarity to the inclusion of gap junctions in spiking network simulations, we arrive at a reference implementation of instantaneous and delayed interactions between rate-based models in a spiking network simulator. The separation of rate dynamics from the general connection and communication infrastructure ensures flexibility of the framework. In addition to the standard implementation we present an iterative approach based on waveform-relaxation techniques to reduce communication and increase performance for large-scale simulations of rate-based models with instantaneous interactions. Finally we demonstrate the broad applicability of the framework by considering various examples from the literature, ranging from random networks to neural-field models. The study provides the prerequisite for interactions between rate-based and spiking models in a joint simulation.
Du, Jialu; Hu, Xin; Liu, Hongbo; Chen, C L Philip
2015-11-01
This paper develops an adaptive robust output feedback control scheme for dynamically positioned ships with unavailable velocities and unknown dynamic parameters in an unknown time-variant disturbance environment. The controller is designed by incorporating the high-gain observer and radial basis function (RBF) neural networks in vectorial backstepping method. The high-gain observer provides the estimations of the ship position and heading as well as velocities. The RBF neural networks are employed to compensate for the uncertainties of ship dynamics. The adaptive laws incorporating a leakage term are designed to estimate the weights of RBF neural networks and the bounds of unknown time-variant environmental disturbances. In contrast to the existing results of dynamic positioning (DP) controllers, the proposed control scheme relies only on the ship position and heading measurements and does not require a priori knowledge of the ship dynamics and external disturbances. By means of Lyapunov functions, it is theoretically proved that our output feedback controller can control a ship's position and heading to the arbitrarily small neighborhood of the desired target values while guaranteeing that all signals in the closed-loop DP control system are uniformly ultimately bounded. Finally, simulations involving two ships are carried out, and simulation results demonstrate the effectiveness of the proposed control scheme.
Adaptive Neural Network Based Control of Noncanonical Nonlinear Systems.
Zhang, Yanjun; Tao, Gang; Chen, Mou
2016-09-01
This paper presents a new study on the adaptive neural network-based control of a class of noncanonical nonlinear systems with large parametric uncertainties. Unlike commonly studied canonical form nonlinear systems whose neural network approximation system models have explicit relative degree structures, which can directly be used to derive parameterized controllers for adaptation, noncanonical form nonlinear systems usually do not have explicit relative degrees, and thus their approximation system models are also in noncanonical forms. It is well-known that the adaptive control of noncanonical form nonlinear systems involves the parameterization of system dynamics. As demonstrated in this paper, it is also the case for noncanonical neural network approximation system models. Effective control of such systems is an open research problem, especially in the presence of uncertain parameters. This paper shows that it is necessary to reparameterize such neural network system models for adaptive control design, and that such reparameterization can be realized using a relative degree formulation, a concept yet to be studied for general neural network system models. This paper then derives the parameterized controllers that guarantee closed-loop stability and asymptotic output tracking for noncanonical form neural network system models. An illustrative example is presented with the simulation results to demonstrate the control design procedure, and to verify the effectiveness of such a new design method.
NASA Technical Reports Server (NTRS)
Leong, Harrison Monfook
1988-01-01
General formulae for mapping optimization problems into systems of ordinary differential equations associated with artificial neural networks are presented. A comparison is made to optimization using gradient-search methods. The performance measure is the settling time from an initial state to a target state. A simple analytical example illustrates a situation where dynamical systems representing artificial neural network methods would settle faster than those representing gradient-search. Settling time was investigated for a more complicated optimization problem using computer simulations. The problem was a simplified version of a problem in medical imaging: determining loci of cerebral activity from electromagnetic measurements at the scalp. The simulations showed that gradient based systems typically settled 50 to 100 times faster than systems based on current neural network optimization methods.
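The mapping is easiest to see on a toy quadratic objective: the network corresponds to the gradient-flow ODE dx/dt = -A x, and the settling time is how long the integrated system takes to reach a target neighborhood of the optimum. A minimal Euler-integration sketch, with step size and tolerance chosen arbitrarily:

    import numpy as np

    # Quadratic objective f(x) = 0.5 * x^T A x; its gradient flow is dx/dt = -A x.
    A = np.array([[3.0, 0.5],
                  [0.5, 1.0]])
    x = np.array([1.0, -1.0])
    dt, tol, t = 0.01, 1e-6, 0.0
    while np.linalg.norm(x) > tol:     # settle into the target state (origin)
        x = x + dt * (-A @ x)          # Euler step of the network ODE
        t += dt
    print(round(t, 2))                 # settling time of the dynamical system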
Gradient calculations for dynamic recurrent neural networks: a survey.
Pearlmutter, B A
1995-01-01
Surveys learning algorithms for recurrent neural networks with hidden units and puts the various techniques into a common framework. The author discusses fixed-point learning algorithms, namely recurrent backpropagation and deterministic Boltzmann machines, and nonfixed-point algorithms, namely backpropagation through time, Elman's history cutoff, and Jordan's output feedback architecture. Forward propagation, an on-line technique that uses adjoint equations, and variations thereof, are also discussed. In many cases, the unified presentation leads to generalizations of various sorts. The author discusses advantages and disadvantages of temporally continuous neural networks in contrast to clocked ones, and continues with some "tricks of the trade" for training, using, and simulating continuous-time and recurrent neural networks. The author presents some simulations and, at the end, addresses issues of computational complexity and learning speed.
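Backpropagation through time, the first of the nonfixed-point algorithms surveyed, can be demonstrated on a scalar tanh RNN: run the network forward while storing states, then propagate the loss gradient backward through every time step. A self-contained numpy sketch with made-up inputs:

    import numpy as np

    # BPTT for a scalar tanh RNN, h_t = tanh(w*h_{t-1} + u*x_t), with loss
    # L = 0.5*(h_T - y)^2. The gradient flows backward through every step.
    def bptt(w, u, xs, y):
        hs = [0.0]
        for x in xs:                            # forward pass, storing states
            hs.append(np.tanh(w * hs[-1] + u * x))
        dL_dh = hs[-1] - y                      # dL/dh_T
        gw = gu = 0.0
        for t in range(len(xs), 0, -1):         # backward pass through time
            pre = w * hs[t - 1] + u * xs[t - 1]
            d_pre = dL_dh * (1.0 - np.tanh(pre) ** 2)
            gw += d_pre * hs[t - 1]             # accumulate dL/dw over steps
            gu += d_pre * xs[t - 1]             # accumulate dL/du over steps
            dL_dh = d_pre * w                   # propagate into h_{t-1}
        return gw, gu

    print(bptt(0.9, 0.4, xs=[0.5, -0.3, 0.8], y=0.7))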
Massengill, L W; Mundie, D B
1992-01-01
A neural network IC based on dynamic charge injection is described. The hardware design is space and power efficient, and achieves massive parallelism of analog inner products via charge-based multipliers and spatially distributed summing buses. Basic synaptic cells are constructed of exponential pulse-decay modulation (EPDM) dynamic injection multipliers operating sequentially on propagating signal vectors and locally stored analog weights. Individually adjustable gain controls on each neuron reduce the effects of limited weight dynamic range. A hardware simulator/trainer has been developed which incorporates the physical (nonideal) characteristics of actual circuit components into the training process, thus absorbing nonlinearities and parametric deviations into the macroscopic performance of the network. Results show that charge-based techniques may achieve a high degree of neural density and throughput using standard CMOS processes.
Robust fault detection of wind energy conversion systems based on dynamic neural networks.
Talebi, Nasser; Sadrnia, Mohammad Ali; Darabi, Ahmad
2014-01-01
Occurrence of faults in wind energy conversion systems (WECSs) is inevitable. In order to detect faults at the appropriate time, avoid heavy economic losses, ensure safe system operation, prevent damage to adjacent systems, and facilitate timely repair of failed components, a fault detection system (FDS) is required. Recurrent neural networks (RNNs) have gained a noticeable position in FDSs, and they have been widely used for modeling complex dynamical systems. One method for designing an FDS is to prepare a dynamic neural model emulating the normal system behavior. By comparing the outputs of the real system and the neural model, incidence of faults can be identified. In this paper, by utilizing a comprehensive dynamic model which contains both the mechanical and electrical components of the WECS, an FDS is suggested using dynamic RNNs. The presented FDS detects faults of the generator's angular velocity sensor, pitch angle sensors, and pitch actuators. Robustness of the FDS is achieved by employing an adaptive threshold. Simulation results show that the proposed scheme is capable of detecting the faults promptly, with very low false and missed alarm rates.
NASA Astrophysics Data System (ADS)
Zamora-López, Gorka; Chen, Yuhan; Deco, Gustavo; Kringelbach, Morten L.; Zhou, Changsong
2016-12-01
The large-scale structural ingredients of the brain and neural connectomes have been identified in recent years. These are, similar to the features found in many other real networks: the arrangement of brain regions into modules and the presence of highly connected regions (hubs) forming rich-clubs. Here, we examine how modules and hubs shape the collective dynamics on networks and we find that both ingredients lead to the emergence of complex dynamics. Comparing the connectomes of C. elegans, cats, macaques and humans to surrogate networks in which either modules or hubs are destroyed, we find that functional complexity always decreases in the perturbed networks. A comparison between simulated and empirically obtained resting-state functional connectivity indicates that the human brain, at rest, lies in a dynamical state that reflects the largest complexity its anatomical connectome can host. Last, we generalise the topology of neural connectomes into a new hierarchical network model that successfully combines modular organisation with rich-club forming hubs. This is achieved by centralising the cross-modular connections through a preferential attachment rule. Our network model hosts more complex dynamics than other hierarchical models widely used as benchmarks.
NASA Astrophysics Data System (ADS)
Wuensche, Andrew
DDLab is interactive graphics software for creating, visualizing, and analyzing many aspects of Cellular Automata, Random Boolean Networks, and Discrete Dynamical Networks in general and studying their behavior, both from the time-series perspective — space-time patterns, and from the state-space perspective — attractor basins. DDLab is relevant to research, applications, and education in the fields of complexity, self-organization, emergent phenomena, chaos, collision-based computing, neural networks, content addressable memory, genetic regulatory networks, dynamical encryption, generative art and music, and the study of the abstract mathematical/physical/dynamical phenomena in their own right.
Ma, Xiaolei; Dai, Zhuang; He, Zhengbing; Ma, Jihui; Wang, Yong; Wang, Yunpeng
2017-04-10
This paper proposes a convolutional neural network (CNN)-based method that learns traffic as images and predicts large-scale, network-wide traffic speed with a high accuracy. Spatiotemporal traffic dynamics are converted to images describing the time and space relations of traffic flow via a two-dimensional time-space matrix. A CNN is applied to the image following two consecutive steps: abstract traffic feature extraction and network-wide traffic speed prediction. The effectiveness of the proposed method is evaluated by taking two real-world transportation networks, the second ring road and north-east transportation network in Beijing, as examples, and comparing the method with four prevailing algorithms, namely, ordinary least squares, k-nearest neighbors, artificial neural network, and random forest, and three deep learning architectures, namely, stacked autoencoder, recurrent neural network, and long-short-term memory network. The results show that the proposed method outperforms other algorithms by an average accuracy improvement of 42.91% within an acceptable execution time. The CNN can train the model in a reasonable time and, thus, is suitable for large-scale transportation networks.
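A minimal sketch of the pipeline's shape, in Python with PyTorch: traffic speeds arranged as a time-by-links matrix are treated as a one-channel image and passed through a small CNN whose head outputs network-wide speeds. The layer sizes and one-step prediction head are assumptions, not the paper's architecture.

    import torch
    import torch.nn as nn

    # Traffic speed as an "image": rows are time steps, columns are road links.
    time_steps, links = 60, 120
    speed_img = torch.rand(1, 1, time_steps, links)   # one sample, one channel

    cnn = nn.Sequential(               # feature extraction over time-space patches
        nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(32 * (time_steps // 4) * (links // 4), links),  # next-step speeds
    )
    pred = cnn(speed_img)              # network-wide speed prediction
    print(pred.shape)                  # torch.Size([1, 120])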
Application of Adaptive Autopilot Designs for an Unmanned Aerial Vehicle
NASA Technical Reports Server (NTRS)
Shin, Yoonghyun; Calise, Anthony J.; Motter, Mark A.
2005-01-01
This paper summarizes the application of two adaptive approaches to autopilot design, and presents an evaluation and comparison of the two approaches in simulation for an unmanned aerial vehicle. One approach employs two-stage dynamic inversion and the other employs feedback dynamic inversion based on a command augmentation system. Both are augmented with neural network based adaptive elements. The approaches permit adaptation to both parametric uncertainty and unmodeled dynamics, and incorporate a method that permits adaptation during periods of control saturation. Simulation results for an FQM-117B radio controlled miniature aerial vehicle are presented to illustrate the performance of the neural network based adaptation.
NASA Astrophysics Data System (ADS)
di Volo, Matteo; Burioni, Raffaella; Casartelli, Mario; Livi, Roberto; Vezzani, Alessandro
2016-01-01
We study the dynamics of networks of inhibitory and excitatory leaky integrate-and-fire neurons with short-term synaptic plasticity in the presence of depressive and facilitating mechanisms. The dynamics is analyzed by a heterogeneous mean-field approximation, which allows us to keep track of the effects of structural disorder in the network. We describe the complex behavior of different classes of excitatory and inhibitory components, which give rise to a rich dynamical phase diagram as a function of the fraction of inhibitory neurons. Using the same mean-field approach, we study and solve a global inverse problem: reconstructing the degree probability distributions of the inhibitory and excitatory components and the fraction of inhibitory neurons from the knowledge of the average synaptic activity field. This approach unveils new perspectives on the numerical study of neural network dynamics and the possibility of using these models as a test bed for the analysis of experimental data.
A neural-network approach to robotic control
NASA Technical Reports Server (NTRS)
Graham, D. P. W.; Deleuterio, G. M. T.
1993-01-01
An artificial neural-network paradigm for the control of robotic systems is presented. The approach is based on the Cerebellar Model Articulation Controller created by James Albus and incorporates several extensions. First, recognizing the essential structure of multibody equations of motion, two parallel modules are used that directly reflect the dynamical characteristics of multibody systems. Second, the architecture of the proposed network is imbued with a self-organizational capability which improves efficiency and accuracy. Also, the networks can be arranged in hierarchical fashion with each subsequent network providing finer and finer resolution.
Dynamic stability analysis of fractional order leaky integrator echo state neural networks
NASA Astrophysics Data System (ADS)
Pahnehkolaei, Seyed Mehdi Abedi; Alfi, Alireza; Tenreiro Machado, J. A.
2017-06-01
The Leaky integrator echo state neural network (Leaky-ESN) is an improved model of the recurrent neural network (RNN) and adopts an interconnected recurrent grid of processing neurons. This paper presents a new proof for the convergence of a Lyapunov candidate function to zero when time tends to infinity by means of the Caputo fractional derivative with order lying in the range (0, 1). The stability of Fractional-Order Leaky-ESN (FO Leaky-ESN) is then analyzed, and the existence, uniqueness and stability of the equilibrium point are provided. A numerical example demonstrates the feasibility of the proposed method.
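For reference, the integer-order leaky-integrator ESN update that the paper generalizes has a compact form. The NumPy sketch below (illustrative sizes and leak rate) shows the standard Leaky-ESN state recursion; the paper's FO Leaky-ESN replaces the ordinary difference with a Caputo derivative of order in (0, 1).

```python
# Standard leaky-integrator ESN state update; sizes and leak are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in, leak = 100, 1, 0.3
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))        # scale spectral radius below 1

x = np.zeros(n_res)
for t in range(200):
    u = np.array([np.sin(0.1 * t)])              # toy input signal
    x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
```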
Oscillator Neural Network Retrieving Sparsely Coded Phase Patterns
NASA Astrophysics Data System (ADS)
Aoyagi, Toshio; Nomura, Masaki
1999-08-01
Little is known theoretically about the associative memory capabilities of neural networks in which information is encoded not only in the mean firing rate but also in the timing of firings. Particularly, in the case of sparsely coded patterns, it is biologically important to consider the timings of firings and to study how such consideration influences storage capacities and quality of recalled patterns. For this purpose, we propose a simple extended model of oscillator neural networks to allow for expression of a nonfiring state. Analyzing both equilibrium states and dynamical properties in recalling processes, we find that the system possesses good associative memory.
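A minimal sketch of the idea, under assumptions of my own rather than the paper's exact model: sparse patterns give active units a phase and silent units zero amplitude, couplings are Hebbian in the complex plane, and the cue fixes which units fire while phase dynamics retrieve the stored timing.

```python
# Oscillator associative memory with a nonfiring state (illustrative sketch).
import numpy as np

rng = np.random.default_rng(11)
n, n_pat, f = 100, 3, 0.3                           # f = fraction of firing units
amp = (rng.random((n_pat, n)) < f).astype(float)    # firing / nonfiring states
phase = rng.uniform(0, 2 * np.pi, (n_pat, n))
z = amp * np.exp(1j * phase)                        # complex-valued sparse patterns
C = z.T @ z.conj() / n                              # Hebbian complex couplings
np.fill_diagonal(C, 0)

theta = phase[0] + 0.5 * rng.standard_normal(n)     # noisy phase cue of pattern 0
active = amp[0] > 0                                 # simplification: active set given by cue
for _ in range(100):
    drive = C[np.ix_(active, active)] @ np.exp(1j * theta[active])
    theta[active] += 0.1 * np.imag(np.exp(-1j * theta[active]) * drive)  # phase dynamics
overlap = abs(np.exp(-1j * phase[0][active]) @ np.exp(1j * theta[active])) / active.sum()
```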
Buesing, Lars; Bill, Johannes; Nessler, Bernhard; Maass, Wolfgang
2011-01-01
The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons. PMID:22096452
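As a point of contrast with the paper's non-reversible chains, the standard reversible MCMC baseline it argues against is easy to state: Gibbs sampling over binary units of a Boltzmann distribution. The sketch below (illustrative weights) is that baseline, not the paper's spiking-network sampler.

```python
# Gibbs sampler for p(z) proportional to exp(z.T W z / 2 + b.T z) over binary z.
import numpy as np

rng = np.random.default_rng(1)
n = 10
W = rng.normal(0, 0.5, (n, n))
W = (W + W.T) / 2                                 # symmetric couplings
np.fill_diagonal(W, 0)
b = rng.normal(0, 0.5, n)
z = rng.integers(0, 2, n).astype(float)

samples = []
for step in range(5000):
    i = step % n                                  # sweep units in order
    p_on = 1 / (1 + np.exp(-(W[i] @ z + b[i])))   # conditional p(z_i = 1 | rest)
    z[i] = float(rng.random() < p_on)
    samples.append(z.copy())                      # chain samples approximate p(z)
```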
The effects of dynamical synapses on firing rate activity: a spiking neural network model.
Khalil, Radwa; Moftah, Marie Z; Moustafa, Ahmed A
2017-11-01
Accumulating evidence relates the fine-tuning of synaptic maturation and regulation of neural network activity to several key factors, including GABAA signaling and a lateral spread length between neighboring neurons (i.e., local connectivity). Furthermore, a number of studies consider short-term synaptic plasticity (STP) as an essential element in the instant modification of synaptic efficacy in the neuronal network and in modulating responses to sustained ranges of external Poisson input frequency (IF). Nevertheless, evaluating the firing activity in response to the dynamical interaction between STP (triggered by ranges of IF) and these key parameters in vitro remains elusive. Therefore, we designed a spiking neural network (SNN) model in which we incorporated the following parameters: local density of arbor essences and a lateral spread length between neighboring neurons. We also created several network scenarios based on these key parameters. Then, we implemented two classes of STP: (1) short-term synaptic depression (STD) and (2) short-term synaptic facilitation (STF). Each class has two differential forms based on the parametric value of its synaptic time constant (either for depressing or facilitating synapses). Lastly, we compared the neural firing responses before and after the treatment with STP. We found that dynamical synapses (STP) have a critical differential role in evaluating and modulating the firing rate activity in each network scenario. Moreover, we investigated the impact of changing the balance between excitation (E) and inhibition (I) on stabilizing this firing activity. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
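The STD/STF mechanics the abstract refers to are commonly written in Tsodyks-Markram form; the following NumPy sketch (illustrative parameters, not the paper's SNN) shows how a resource variable x and a utilization variable u shape synaptic efficacy over a presynaptic spike train.

```python
# Tsodyks-Markram short-term plasticity: depression via resources x,
# facilitation via utilization u. Parameters are illustrative.
import numpy as np

dt, tau_d, tau_f, U = 1e-3, 0.2, 0.6, 0.2
x, u = 1.0, U
t_spikes = np.arange(0.05, 1.0, 0.05)            # 20 Hz presynaptic train
efficacy = []
t, next_spike = 0.0, 0
while t < 1.0:
    x += dt * (1 - x) / tau_d                    # resources recover
    u += dt * (U - u) / tau_f                    # utilization decays to baseline
    if next_spike < len(t_spikes) and t >= t_spikes[next_spike]:
        u += U * (1 - u)                         # facilitation jump on a spike
        efficacy.append(u * x)                   # effective synaptic strength
        x -= u * x                               # depression: resources consumed
        next_spike += 1
    t += dt
```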
Oscillations and chaos in neural networks: an exactly solvable model.
Wang, L P; Pichler, E E; Ross, J
1990-01-01
We consider a randomly diluted higher-order network with noise, consisting of McCulloch-Pitts neurons that interact by Hebbian-type connections. For this model, exact dynamical equations are derived and solved for both parallel and random sequential updating algorithms. For parallel dynamics, we find a rich spectrum of different behaviors including static retrieving and oscillatory and chaotic phenomena in different parts of the parameter space. The bifurcation parameters include first- and second-order neuronal interaction coefficients and a rescaled noise level, which represents the combined effects of the random synaptic dilution, interference between stored patterns, and additional background noise. We show that a marked difference in terms of the occurrence of oscillations or chaos exists between neural networks with parallel and random sequential dynamics. PMID:2251287
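A minimal NumPy sketch of this class of model, with illustrative sizes: Hebbian couplings over stored patterns, random synaptic dilution, and noisy parallel (synchronous) updates of McCulloch-Pitts units. The second-order interactions of the paper are omitted here.

```python
# Diluted Hebbian network with noisy parallel dynamics (illustrative).
import numpy as np

rng = np.random.default_rng(2)
n, n_patterns, dilution, beta = 200, 5, 0.5, 5.0
xi = rng.choice([-1, 1], (n_patterns, n))
J = (xi.T @ xi) / n                                # Hebbian couplings
J *= rng.random((n, n)) < dilution                 # random synaptic dilution
np.fill_diagonal(J, 0)

s = xi[0] * rng.choice([1, -1], n, p=[0.9, 0.1])   # noisy cue of pattern 0
for t in range(20):
    h = J @ s
    p_up = 1 / (1 + np.exp(-2 * beta * h))         # stochastic (Glauber-type) noise
    s = np.where(rng.random(n) < p_up, 1, -1)      # parallel update of all units
overlap = (s @ xi[0]) / n                          # retrieval quality
```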
Tsvetanov, Kamen A; Henson, Richard N A; Tyler, Lorraine K; Razi, Adeel; Geerligs, Linda; Ham, Timothy E; Rowe, James B
2016-03-16
The maintenance of wellbeing across the lifespan depends on the preservation of cognitive function. We propose that successful cognitive aging is determined by interactions both within and between large-scale functional brain networks. Such connectivity can be estimated from task-free functional magnetic resonance imaging (fMRI), also known as resting-state fMRI (rs-fMRI). However, common correlational methods are confounded by age-related changes in the neurovascular signaling. To estimate network interactions at the neuronal rather than vascular level, we used generative models that specified both the neural interactions and a flexible neurovascular forward model. The networks' parameters were optimized to explain the spectral dynamics of rs-fMRI data in 602 healthy human adults from population-based cohorts who were approximately uniformly distributed between 18 and 88 years (www.cam-can.com). We assessed directed connectivity within and between three key large-scale networks: the salience network, dorsal attention network, and default mode network. We found that age influences connectivity both within and between these networks, over and above the effects on neurovascular coupling. Canonical correlation analysis revealed that the relationship between network connectivity and cognitive function was age-dependent: cognitive performance relied on neural dynamics more strongly in older adults. These effects were driven partly by reduced stability of neural activity within all networks, as expressed by an accelerated decay of neural information. Our findings suggest that the balance of excitatory connectivity between networks, and the stability of intrinsic neural representations within networks, changes with age. The cognitive function of older adults becomes increasingly dependent on these factors. Maintaining cognitive function is critical to successful aging. To study the neural basis of cognitive function across the lifespan, we studied a large population-based cohort (n = 602, 18-88 years), separating neural connectivity from vascular components of fMRI signals. Cognitive ability was influenced by the strength of connection within and between functional brain networks, and this positive relationship increased with age. In older adults, there was more rapid decay of intrinsic neuronal activity in multiple regions of the brain networks, which related to cognitive performance. Our data demonstrate increased reliance on network flexibility to maintain cognitive function, in the presence of more rapid decay of neural activity. These insights will facilitate the development of new strategies to maintain cognitive ability. Copyright © 2016 Tsvetanov et al.
Neural-network-designed pulse sequences for robust control of singlet-triplet qubits
NASA Astrophysics Data System (ADS)
Yang, Xu-Chen; Yung, Man-Hong; Wang, Xin
2018-04-01
Composite pulses are essential for universal manipulation of singlet-triplet spin qubits. In the absence of noise, they are required to perform arbitrary single-qubit operations due to the special control constraint of a singlet-triplet qubit, while in a noisy environment, more complicated sequences have been developed to dynamically correct the error. Tailoring these sequences typically requires numerically solving a set of nonlinear equations. Here we demonstrate that these pulse sequences can be generated by a well-trained, double-layer neural network. For sequences designed for the noise-free case, the trained neural network is capable of producing almost exactly the same pulses known in the literature. For more complicated noise-correcting sequences, the neural network produces pulses with slightly different line shapes, but the robustness against noises remains comparable. These results indicate that the neural network can be a judicious and powerful alternative to existing techniques in developing pulse sequences for universal fault-tolerant quantum computation.
Nie, Xiaobing; Zheng, Wei Xing; Cao, Jinde
2015-11-01
The problem of coexistence and dynamical behaviors of multiple equilibrium points is addressed for a class of memristive Cohen-Grossberg neural networks with non-monotonic piecewise linear activation functions and time-varying delays. By virtue of the fixed point theorem, nonsmooth analysis theory and other analytical tools, some sufficient conditions are established to guarantee that such n-dimensional memristive Cohen-Grossberg neural networks can have 5^n equilibrium points, among which 3^n equilibrium points are locally exponentially stable. It is shown that greater storage capacity can be achieved by neural networks with the non-monotonic activation functions introduced herein than by those with a Mexican-hat-type activation function. In addition, unlike most existing multistability results of neural networks with monotonic activation functions, the 3^n locally stable equilibrium points obtained here are located both in saturated regions and unsaturated regions. The theoretical findings are verified by an illustrative example with computer simulations. Copyright © 2015 Elsevier Ltd. All rights reserved.
Dynamics of delay-coupled FitzHugh-Nagumo neural rings.
Mao, Xiaochen; Sun, Jianqiao; Li, Shaofan
2018-01-01
This paper studies the dynamical behaviors of a pair of FitzHugh-Nagumo neural networks with bidirectional delayed couplings. It presents a detailed analysis of delay-independent and delay-dependent stabilities and the existence of bifurcated oscillations. Illustrative examples are performed to validate the analytical results and to discover interesting phenomena. It is shown that the network exhibits a variety of complicated activities, such as multiple stability switches, the coexistence of periodic and quasi-periodic oscillations, the coexistence of periodic and chaotic orbits, and the coexisting chaotic attractors.
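A minimal Euler integration of two FitzHugh-Nagumo units with bidirectional delayed coupling, using a history buffer for the delay; parameters are illustrative and not taken from the paper's stability analysis.

```python
# Two delay-coupled FitzHugh-Nagumo neurons, explicit Euler with a delay buffer.
import numpy as np

dt, T, tau_delay, k = 0.01, 200.0, 1.0, 0.5
steps, d = int(T / dt), int(tau_delay / dt)
a, b, eps = 0.7, 0.8, 0.08
v = np.zeros((steps, 2))
w = np.zeros((steps, 2))
v[:d + 1] = [0.1, -0.1]                            # constant initial history

for t in range(d, steps - 1):
    coup = k * (v[t - d][::-1] - v[t])             # each unit sees the other's delayed state
    dv = v[t] - v[t] ** 3 / 3 - w[t] + coup
    dw = eps * (v[t] + a - b * w[t])
    v[t + 1] = v[t] + dt * dv
    w[t + 1] = w[t] + dt * dw
```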
Characterizing Deep Brain Stimulation effects in computationally efficient neural network models.
Latteri, Alberta; Arena, Paolo; Mazzone, Paolo
2011-04-15
Recent advances in the medical treatment of Parkinson's disease (PD) led to the introduction of the so-called Deep Brain Stimulation (DBS) technique. This therapy actively counteracts the pathological activity of various deep brain structures responsible for the well-known PD symptoms. The technique, frequently combined with dopaminergic drug administration, replaces the surgical interventions once used to suppress the activity of specific brain nuclei, the Basal Ganglia (BG). The clinical protocol also makes it possible to analyse signals measured from the electrodes implanted into the deep brain regions. Analysis of these signals allows PD to be studied as a specific case of dynamical synchronization in biological neural networks, with the advantage that the theoretical tools developed in that field can be applied to find efficient treatments for this important disease. Experimental results show that PD is characterized by pathological signal synchronization in the BG. Parkinsonian tremor, for example, is ascribed to neuron populations of the thalamic and striatal structures that undergo abnormal synchronization, whereas under normal conditions the activity of the same neuron populations does not appear to be correlated or synchronized. To study in detail the effect of the stimulation signal on a pathological neural medium, efficient models of these neural structures were built that, without any external input, exhibit the intrinsic properties of a pathological neural tissue, mimicking the synchronized BG dynamics. We start from a model already introduced in the literature to investigate the effects of electrical stimulation on pathologically synchronized clusters of neurons, based on Morris-Lecar-type neurons. Although biologically plausible, this neuron model requires a large computational effort to simulate large-scale networks. We therefore considered a reduced-order model, the Izhikevich neuron, which is computationally much lighter. Neural lattices built with the two neuron models provided comparable results, both without stimulation and under all the stimulation protocols, a first step toward the study and simulation of the large-scale neural networks involved in pathological dynamics. Using the reduced-order model, we also inspected the activity of two coupled neural lattices to analyze how stimulation in one area affects the dynamics in another, as the usual medical treatment protocols require. The study of population dynamics allowed us to investigate, through simulations, the desynchronizing effect of the stimulation signals on the neural dynamics. The results add significantly to the analysis of synchronization and desynchronization effects due to neural stimulation, and make it possible to study the effect of stimulation in large-scale yet computationally efficient neural networks. Results were compared both across the mathematical models, using Morris-Lecar and Izhikevich neurons, and with simulated Local Field Potentials (LFPs).
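A sketch of the modeling ingredients described here: an Izhikevich-neuron population under noisy drive, plus a simple population synchrony index of the kind used to quantify (de)synchronization. Parameters are illustrative, not the paper's lattice setup.

```python
# Izhikevich population with a variance-based synchrony index (illustrative).
import numpy as np

rng = np.random.default_rng(3)
n, dt, steps = 100, 0.5, 4000
a_p, b_p, c, d_p = 0.02, 0.2, -65.0, 8.0           # regular-spiking parameters
v = -65.0 * np.ones(n)
u = b_p * v
V = np.empty((steps, n))
for t in range(steps):
    I = 10.0 + 5.0 * rng.standard_normal(n)        # noisy common drive
    fired = v >= 30.0
    v[fired], u[fired] = c, u[fired] + d_p         # spike reset
    v += dt * (0.04 * v**2 + 5 * v + 140 - u + I)  # Izhikevich membrane equation
    u += dt * a_p * (b_p * v - u)                  # recovery variable
    V[t] = v
chi2 = V.mean(1).var() / V.var(0).mean()           # near 1 = synchronous, near 0 = async
```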
A neural network with modular hierarchical learning
NASA Technical Reports Server (NTRS)
Baldi, Pierre F. (Inventor); Toomarian, Nikzad (Inventor)
1994-01-01
This invention provides a new hierarchical approach for supervised neural learning of time-dependent trajectories. The modular hierarchical methodology leads to architectures which are more structured than fully interconnected networks. The networks utilize a general feedforward flow of information and sparse recurrent connections to achieve dynamic effects. The advantages include the sparsity of units and connections and the modular organization. A further advantage is that learning is much more circumscribed than in fully interconnected systems. The present invention is embodied by a neural network including a plurality of neural modules, each having a pre-established performance capability, wherein each neural module has an output outputting present results of the performance capability and an input for changing the present results of the performance capability. For pattern recognition applications, the performance capability may be an oscillation capability producing a repeating wave pattern as the present results. In the preferred embodiment, each of the plurality of neural modules includes a pre-established capability portion and a performance adjustment portion connected to control the pre-established capability portion.
A Neural Network Model to Learn Multiple Tasks under Dynamic Environments
NASA Astrophysics Data System (ADS)
Tsumori, Kenji; Ozawa, Seiichi
When environments are dynamically changed for agents, the knowledge acquired in an environment might be useless in the future. In such dynamic environments, agents should be able not only to acquire new knowledge but also to modify old knowledge in learning. However, modifying all knowledge acquired before is not efficient, because knowledge once acquired may be useful again when a similar environment reappears and some knowledge can be shared among different environments. To learn efficiently in such environments, we propose a neural network model that consists of the following modules: resource allocating network, long-term & short-term memory, and environment change detector. We evaluate the model under a class of dynamic environments where multiple function approximation tasks are sequentially given. The experimental results demonstrate that the proposed model possesses stable incremental learning, accurate environmental change detection, proper association and recall of old knowledge, and efficient knowledge transfer.
Hu, Cheng; Yu, Juan; Chen, Zhanheng; Jiang, Haijun; Huang, Tingwen
2017-05-01
In this paper, the fixed-time stability of dynamical systems and the fixed-time synchronization of coupled discontinuous neural networks are investigated under the framework of Filippov solution. Firstly, by means of reduction to absurdity, a theorem of fixed-time stability is established and a high-precision estimation of the settling-time is given. It is shown by theoretic proof that the estimation bound of the settling time given in this paper is less conservative and more accurate compared with the classical results. Besides, as an important application, the fixed-time synchronization of coupled neural networks with discontinuous activation functions is proposed. By designing a discontinuous control law and using the theory of differential inclusions, some new criteria are derived to ensure the fixed-time synchronization of the addressed coupled networks. Finally, two numerical examples are provided to show the effectiveness and validity of the theoretical results. Copyright © 2017 Elsevier Ltd. All rights reserved.
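The flavor of a fixed-time result can be seen in a scalar example: with a feedback containing both a sub-linear and a super-linear power of the error, the settling time stays bounded regardless of the initial condition. The sketch below uses illustrative gains and exponents, not the paper's criteria or settling-time estimate.

```python
# Scalar fixed-time stabilization: de/dt = -k1*sign(e)|e|^p - k2*sign(e)|e|^q,
# with 0 < p < 1 < q. Settling times remain bounded as |e0| grows.
import numpy as np

k1, k2, p, q, dt = 2.0, 2.0, 0.5, 1.5, 1e-4
for e0 in (0.1, 10.0, 1000.0):                     # wildly different initial errors
    e, t = e0, 0.0
    while abs(e) > 1e-6:
        e += dt * (-k1 * np.sign(e) * abs(e)**p - k2 * np.sign(e) * abs(e)**q)
        t += dt
    print(f"e0={e0:>8}: settled at t={t:.3f}")     # bounded settling time
```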
Neural-tree call admission controller for ATM networks
NASA Astrophysics Data System (ADS)
Rughooputh, Harry C. S.
1999-03-01
Asynchronous Transfer Mode (ATM) has been recommended by ITU-T as the transport method for broadband integrated services digital networks. In high-speed ATM networks different types of multimedia traffic streams with widely varying traffic characteristics and Quality of Service (QoS) are asynchronously multiplexed on transmission links and switched without window flow control as found in X.25. In such an environment, a traffic control scheme is required to manage the required QoS of each class individually. To meet the QoS requirements, Bandwidth Allocation and Call Admission Control (CAC) in ATM networks must be able to adapt gracefully to the dynamic behavior of traffic and the time-varying nature of the network condition. In this paper, a Neural Network approach for CAC is proposed. The call admission problem is addressed by designing controllers based on Neural Tree Networks. Simulations reveal that the proposed scheme is not only simple but it also offers faster response than conventional neural/neuro-fuzzy controllers.
Design of Neural Networks for Fast Convergence and Accuracy
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Sparks, Dean W., Jr.
1998-01-01
A novel procedure for the design and training of artificial neural networks, used for rapid and efficient controls and dynamics design and analysis for flexible space systems, has been developed. Artificial neural networks are employed to provide a means of evaluating the impact of design changes rapidly. Specifically, two-layer feedforward neural networks are designed to approximate the functional relationship between the component spacecraft design changes and measures of its performance. A training algorithm, based on statistical sampling theory, is presented, which guarantees that the trained networks provide a designer-specified degree of accuracy in mapping the functional relationship. Within each iteration of this statistical-based algorithm, a sequential design algorithm is used for the design and training of the feedforward network to provide rapid convergence to the network goals. Here, at each sequence a new network is trained to minimize the error of previous network. The design algorithm attempts to avoid the local minima phenomenon that hampers the traditional network training. A numerical example is performed on a spacecraft application in order to demonstrate the feasibility of the proposed approach.
Efficient self-organizing multilayer neural network for nonlinear system modeling.
Han, Hong-Gui; Wang, Li-Dan; Qiao, Jun-Fei
2013-07-01
It has been shown extensively that the dynamic behaviors of a neural system are strongly influenced by the network architecture and learning process. To establish an artificial neural network (ANN) with self-organizing architecture and a suitable learning algorithm for nonlinear system modeling, an automatic axon-neural network (AANN) is investigated in the following respects. First, the network architecture is constructed automatically to change both the number of hidden neurons and the topology of the neural network during the training process. The approach introduced in the adaptive connecting-and-pruning algorithm (ACP) is a type of mixed-mode operation, equivalent to pruning or adding connections between neurons, as well as inserting required neurons directly. Second, the weights are adjusted using a feedforward computation (FC) to obtain the gradient information during learning. Unlike most previous studies, the AANN is able to self-organize its architecture and weights to improve network performance. The proposed AANN has been tested on a number of benchmark problems, ranging from nonlinear function approximation to nonlinear system modeling. The experimental results show that the AANN can achieve better performance than some existing neural networks. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
Schwalger, Tilo; Deger, Moritz; Gerstner, Wulfram
2017-04-01
Neural population equations such as neural mass or field models are widely used to study brain activity on a large scale. However, the relation of these models to the properties of single neurons is unclear. Here we derive an equation for several interacting populations at the mesoscopic scale starting from a microscopic model of randomly connected generalized integrate-and-fire neuron models. Each population consists of 50-2000 neurons of the same type but different populations account for different neuron types. The stochastic population equations that we find reveal how spike-history effects in single-neuron dynamics such as refractoriness and adaptation interact with finite-size fluctuations on the population level. Efficient integration of the stochastic mesoscopic equations reproduces the statistical behavior of the population activities obtained from microscopic simulations of a full spiking neural network model. The theory describes nonlinear emergent dynamics such as finite-size-induced stochastic transitions in multistable networks and synchronization in balanced networks of excitatory and inhibitory neurons. The mesoscopic equations are employed to rapidly integrate a model of a cortical microcircuit consisting of eight neuron types, which allows us to predict spontaneous population activities as well as evoked responses to thalamic input. Our theory establishes a general framework for modeling finite-size neural population dynamics based on single cell and synapse parameters and offers an efficient approach to analyzing cortical circuits and computations.
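The finite-size effect at the heart of this theory can be caricatured in a few lines: if the expected population rate is realized as a binomial spike count over N neurons, the activity fluctuates with an amplitude that shrinks as N grows. The sketch below is such a caricature, not the paper's derived mesoscopic equations (which additionally track spike-history effects such as refractoriness and adaptation).

```python
# Finite-size stochastic population activity (illustrative caricature).
import numpy as np

rng = np.random.default_rng(4)
N, dt, steps, tau = 500, 1e-3, 2000, 0.02
h, A = 0.0, np.zeros(steps)
for t in range(steps):
    rate = 50.0 / (1 + np.exp(-(h - 1.0)))         # population transfer function (Hz)
    n_spk = rng.binomial(N, min(rate * dt, 1.0))   # finite-size spike count
    A[t] = n_spk / (N * dt)                        # empirical population activity
    h += dt * (-h + 2.0 + 0.1 * A[t]) / tau        # recurrent input dynamics
```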
Using neural networks for prediction of air pollution index in industrial city
NASA Astrophysics Data System (ADS)
Rahman, P. A.; Panchenko, A. A.; Safarov, A. M.
2017-10-01
This paper is dedicated to the use of artificial neural networks for predicting the state of the atmospheric air of an industrial city, in support of prompt environmental decisions. Two types of neural network prediction models for the air pollution index are developed: a temporal model (a short-term forecast of pollutant content in the air over the coming days) and a spatial model (a forecast of the atmospheric pollution index at any point of the city). The stages of development of the neural network models are briefly reviewed and their parameters are described. The adequacy of the prediction models is assessed via the correlation coefficient between the output and reference data. Moreover, since the raw "neural network code" of the models is difficult for ordinary users to interpret, software implementations allowing practical use of the models are also provided. The obtained neural network models are shown to provide sufficiently reliable forecasts, making them an effective tool for analyzing and predicting the dynamics of air pollution in an industrial city. The work thus advances the pressing problem of forecasting the atmospheric air pollution index in industrial cities with neural network models.
Two Unipolar Terminal-Attractor-Based Associative Memories
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang; Wu, Chwan-Hwa
1995-01-01
Two unipolar mathematical models of electronic neural network functioning as terminal-attractor-based associative memory (TABAM) developed. Models comprise sets of equations describing interactions between time-varying inputs and outputs of neural-network memory, regarded as dynamical system. Simplifies design and operation of optoelectronic processor to implement TABAM performing associative recall of images. TABAM concept described in "Optoelectronic Terminal-Attractor-Based Associative Memory" (NPO-18790). Experimental optoelectronic apparatus that performed associative recall of binary images described in "Optoelectronic Inner-Product Neural Associative Memory" (NPO-18491).
Model-Free Adaptive Control for Unknown Nonlinear Zero-Sum Differential Game.
Zhong, Xiangnan; He, Haibo; Wang, Ding; Ni, Zhen
2018-05-01
In this paper, we present a new model-free globalized dual heuristic dynamic programming (GDHP) approach for the discrete-time nonlinear zero-sum game problems. First, the online learning algorithm is proposed based on the GDHP method to solve the Hamilton-Jacobi-Isaacs equation associated with optimal regulation control problem. By setting backward one step of the definition of performance index, the requirement of system dynamics, or an identifier is relaxed in the proposed method. Then, three neural networks are established to approximate the optimal saddle point feedback control law, the disturbance law, and the performance index, respectively. The explicit updating rules for these three neural networks are provided based on the data generated during the online learning along the system trajectories. The stability analysis in terms of the neural network approximation errors is discussed based on the Lyapunov approach. Finally, two simulation examples are provided to show the effectiveness of the proposed method.
Symmetry Breaking in Space-Time Hierarchies Shapes Brain Dynamics and Behavior.
Pillai, Ajay S; Jirsa, Viktor K
2017-06-07
In order to maintain brain function, neural activity needs to be tightly coordinated within the brain network. How this coordination is achieved and related to behavior is largely unknown. It has been previously argued that the study of the link between brain and behavior is impossible without a guiding vision. Here we propose behavioral-level concepts and mechanisms embodied as structured flows on manifold (SFM) that provide a formal description of behavior as a low-dimensional process emerging from a network's dynamics dependent on the symmetry and invariance properties of the network connectivity. Specifically, we demonstrate that the symmetry breaking of network connectivity constitutes a timescale hierarchy resulting in the emergence of an attractive functional subspace. We show that behavior emerges when appropriate conditions imposed upon the couplings are satisfied, justifying the conductance-based nature of synaptic couplings. Our concepts propose design principles for networks predicting how behavior and task rules are represented in real neural circuits and open new avenues for the analyses of neural data. Copyright © 2017 Elsevier Inc. All rights reserved.
Modeling Belt-Servomechanism by Chebyshev Functional Recurrent Neuro-Fuzzy Network
NASA Astrophysics Data System (ADS)
Huang, Yuan-Ruey; Kang, Yuan; Chu, Ming-Hui; Chang, Yeon-Pun
A novel Chebyshev functional recurrent neuro-fuzzy (CFRNF) network is developed from a combination of the Takagi-Sugeno-Kang (TSK) fuzzy model and the Chebyshev recurrent neural network (CRNN). The CFRNF network can emulate the nonlinear dynamics of a servomechanism system. The system nonlinearity is addressed by enhancing the input dimensions of the consequent parts in the fuzzy rules due to functional expansion of a Chebyshev polynomial. The back propagation algorithm is used to adjust the parameters of the antecedent membership functions as well as those of consequent functions. To verify the performance of the proposed CFRNF, the experiment of the belt servomechanism is presented in this paper. Both of identification methods of adaptive neural fuzzy inference system (ANFIS) and recurrent neural network (RNN) are also studied for modeling of the belt servomechanism. The analysis and comparison results indicate that CFRNF makes identification of complex nonlinear dynamic systems easier. It is verified that the accuracy and convergence of the CFRNF are superior to those of ANFIS and RNN by the identification results of a belt servomechanism.
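The functional-expansion step is the easiest part to make concrete: each input is enhanced with Chebyshev polynomials via the standard recurrence before feeding the consequent parts of the fuzzy rules. A minimal sketch with an illustrative expansion order:

```python
# Chebyshev functional expansion T_0..T_order of an input signal.
import numpy as np

def chebyshev_expand(x, order=4):
    T = [np.ones_like(x), x]
    for k in range(2, order + 1):
        T.append(2 * x * T[-1] - T[-2])            # T_k = 2x T_{k-1} - T_{k-2}
    return np.stack(T, axis=-1)                    # enhanced input dimensions

print(chebyshev_expand(np.array([0.5]), order=4))
```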
NASA Astrophysics Data System (ADS)
Hortos, William S.
1997-04-01
The use of artificial neural networks (NNs) to address the channel assignment problem (CAP) for cellular time-division multiple access and code-division multiple access networks has previously been investigated by this author and many others. The investigations to date have been based on a hexagonal cell structure established by omnidirectional antennas at the base stations. No account was taken of the use of spatial isolation enabled by directional antennas to reduce interference between mobiles. Any reduction in interference translates into increased capacity and consequently alters the performance of the NNs. Previous studies have sought to improve the performance of Hopfield-Tank network algorithms and self-organizing feature map algorithms applied primarily to static channel assignment (SCA) for cellular networks that handle uniformly distributed, stationary traffic in each cell for a single type of service. The resulting algorithms minimize energy functions representing interference constraint and ad hoc conditions that promote convergence to optimal solutions. While the structures of the derived neural network algorithms (NNAs) offer the potential advantages of inherent parallelism and adaptability to changing system conditions, this potential has yet to be fulfilled for the CAP in emerging mobile networks. The next-generation communication infrastructures must accommodate dynamic operating conditions. Macrocell topologies are being refined to microcells and picocells that can be dynamically sectored by adaptively controlled, directional antennas and programmable transceivers. These networks must support the time-varying demands for personal communication services (PCS) that simultaneously carry voice, data and video and, thus, require new dynamic channel assignment (DCA) algorithms. This paper examines the impact of dynamic cell sectoring and geometric conditioning on NNAs developed for SCA in omnicell networks with stationary traffic to improve the metrics of convergence rate and call blocking. Genetic algorithms (GAs) are also considered in PCS networks as a means to overcome the known weakness of Hopfield NNAs in determining global minima. The resulting GAs for DCA in PCS networks are compared to improved DCA algorithms based on Hopfield NNs for stationary cellular networks. Algorithm performance is compared on the basis of rate of convergence, blocking probability, analytic complexity, and parametric sensitivity to transient traffic demands and channel interference.
NASA Astrophysics Data System (ADS)
Kim, Nakwan
Utilizing the universal approximation property of neural networks, we develop several novel approaches to neural network-based adaptive output feedback control of nonlinear systems, and illustrate these approaches for several flight control applications. In particular, we address the problem of non-affine systems and eliminate the fixed point assumption present in earlier work. All of the stability proofs are carried out in a form that eliminates an algebraic loop in the neural network implementation. An approximate input/output feedback linearizing controller is augmented with a neural network using input/output sequences of the uncertain system. These approaches permit adaptation to both parametric uncertainty and unmodeled dynamics. All physical systems also have control position and rate limits, which may either deteriorate performance or cause instability for a sufficiently high control bandwidth. Here we apply a method for protecting an adaptive process from the effects of input saturation and time delays, known as "pseudo control hedging". This method was originally developed for the state feedback case, and we provide a stability analysis that extends its domain of applicability to the case of output feedback. The approach is illustrated by the design of a pitch-attitude flight control system for a linearized model of an R-50 experimental helicopter, and by the design of a pitch-rate control system for a 58-state model of a flexible aircraft consisting of rigid body dynamics coupled with actuator and flexible modes. A new approach to augmentation of an existing linear controller is introduced. It is especially useful when there is limited information concerning the plant model, and the existing controller. The approach is applied to the design of an adaptive autopilot for a guided munition. Design of a neural network adaptive control that ensures asymptotically stable tracking performance is also addressed.
Recurrent Neural Network for Computing the Drazin Inverse.
Stanimirović, Predrag S; Zivković, Ivan S; Wei, Yimin
2015-11-01
This paper presents a recurrent neural network (RNN) for computing the Drazin inverse of a real matrix in real time. This recurrent neural network (RNN) is composed of n independent parts (subnetworks), where n is the order of the input matrix. These subnetworks can operate concurrently, so parallel and distributed processing can be achieved. In this way, the computational advantages over the existing sequential algorithms can be attained in real-time applications. The RNN defined in this paper is convenient for an implementation in an electronic circuit. The number of neurons in the neural network is the same as the number of elements in the output matrix, which represents the Drazin inverse. The difference between the proposed RNN and the existing ones for the Drazin inverse computation lies in their network architecture and dynamics. The conditions that ensure the stability of the defined RNN as well as its convergence toward the Drazin inverse are considered. In addition, illustrative examples and examples of application to the practical engineering problems are discussed to show the efficacy of the proposed neural network.
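The paper's dynamics are specific to the Drazin inverse; as a simpler, related illustration only, the classic gradient neural network for the ordinary inverse of a nonsingular matrix evolves as dX/dt = -gamma * A^T (AX - I), with one "neuron" per matrix entry and columns evolving independently. Sketch with an illustrative test matrix:

```python
# Gradient neural-network dynamics converging to the ordinary matrix inverse.
# Related illustration only; the paper's RNN for the Drazin inverse differs.
import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(4, 4)) + 4 * np.eye(4)        # well-conditioned test matrix
X = np.zeros((4, 4))
gamma, dt = 1.0, 1e-3
for _ in range(200000):                            # Euler integration of dX/dt
    X -= dt * gamma * (A.T @ (A @ X - np.eye(4)))
print(np.allclose(X @ A, np.eye(4), atol=1e-4))    # X has converged to A^{-1}
```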
A neural network controller of a flotation process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Durao, F.; Cortez, L.
1995-12-31
The dynamic control of a froth flotation section is simulated through a neural network feedback controller, trained in order to stabilize the concentrate metal grade and recovery by applying random step changes to the feed metal grade. The results of the application example show that this controller seems to be sufficiently robust and a good alternative to handle a non-linear process.
Finite time convergent learning law for continuous neural networks.
Chairez, Isaac
2014-02-01
This paper addresses the design of a discontinuous finite time convergent learning law for neural networks with continuous dynamics. The neural network was used here to obtain a non-parametric model for uncertain systems described by a set of ordinary differential equations. The source of uncertainties was the presence of some external perturbations and poor knowledge of the nonlinear function describing the system dynamics. A new adaptive algorithm based on discontinuous algorithms was used to adjust the weights of the neural network. The adaptive algorithm was derived by means of a non-standard Lyapunov function that is lower semi-continuous and differentiable in almost the whole space. A compensator term was included in the identifier to reject some specific perturbations using a nonlinear robust algorithm. Two numerical examples demonstrated the improvements achieved by the learning algorithm introduced in this paper compared to classical schemes with continuous learning methods. The first one dealt with a benchmark problem used in the paper to explain how the discontinuous learning law works. The second one used the methane production model to show the benefits in engineering applications of the learning law proposed in this paper. Copyright © 2013 Elsevier Ltd. All rights reserved.
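A toy version of a discontinuous (sign-based) learning law for a neural identifier, under illustrative assumptions of my own (RBF features, scalar output, bounded perturbation); the paper's Lyapunov construction and robust compensator are not reproduced here.

```python
# Sign-based (discontinuous) weight adaptation for a single-output identifier.
import numpy as np

rng = np.random.default_rng(6)
n_basis, k, dt = 10, 2.0, 1e-3
centers = np.linspace(-1, 1, n_basis)
phi = lambda x: np.exp(-((x - centers) ** 2) / 0.1)   # RBF features
w_true = rng.normal(size=n_basis)                     # "unknown" system weights
w = np.zeros(n_basis)
for t in range(20000):
    x = np.sin(0.01 * t)
    e = (w - w_true) @ phi(x) - 0.05 * rng.standard_normal()  # perturbed output error
    w -= dt * k * np.sign(e) * phi(x)                 # discontinuous learning law
```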
Multi-Agent Market Modeling of Foreign Exchange Rates
NASA Astrophysics Data System (ADS)
Zimmermann, Georg; Neuneier, Ralph; Grothmann, Ralph
A market mechanism is basically driven by a superposition of decisions of many agents optimizing their profit. The economic price dynamics are a consequence of the cumulated excess demand/supply created on this micro level. The behavioral analysis of a small number of agents is well understood through game theory. In the case of a large number of agents, one may use the limiting assumption that an individual agent does not have an influence on the market, which allows the aggregation of agents by statistical methods. In contrast to this restriction, we can omit the assumption of an atomic market structure if we model the market through a multi-agent approach. The contribution of the mathematical theory of neural networks to market price formation is mostly seen on the econometric side: neural networks allow the fitting of high-dimensional nonlinear dynamic models. Furthermore, in our opinion, there is a close relationship between economics and the modeling ability of neural networks, because a neuron can be interpreted as a simple model of decision making. With this in mind, a neural network models the interaction of many decisions and, hence, can be interpreted as the price formation mechanism of a market.
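The "neuron as decision-maker" reading suggests a direct toy implementation: each agent maps a shared information set through a tanh unit to a buy/sell decision, and the price moves with the cumulated excess demand. All weights and the news process below are illustrative assumptions, not the authors' model.

```python
# Multi-agent market toy: tanh decisions aggregated into price formation.
import numpy as np

rng = np.random.default_rng(7)
n_agents, steps, eta = 200, 500, 0.01
W = rng.normal(size=(n_agents, 3))                 # each agent's decision weights
price, prices = 1.0, []
for t in range(steps):
    news = rng.standard_normal()
    info = np.array([news, np.log(price), 1.0])    # shared information set
    demand = np.tanh(W @ info).sum()               # cumulated excess demand/supply
    price *= np.exp(eta * demand / n_agents)       # price formation on the micro level
    prices.append(price)
```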
Prefrontal Cortex Networks Shift from External to Internal Modes during Learning.
Brincat, Scott L; Miller, Earl K
2016-09-14
As we learn about items in our environment, their neural representations become increasingly enriched with our acquired knowledge. But there is little understanding of how network dynamics and neural processing related to external information changes as it becomes laden with "internal" memories. We sampled spiking and local field potential activity simultaneously from multiple sites in the lateral prefrontal cortex (PFC) and the hippocampus (HPC), regions critical for sensory associations, of monkeys performing an object paired-associate learning task. We found that in the PFC, evoked potentials to, and neural information about, external sensory stimulation decreased while induced beta-band (∼11-27 Hz) oscillatory power and synchrony associated with "top-down" or internal processing increased. By contrast, the HPC showed little evidence of learning-related changes in either spiking activity or network dynamics. The results suggest that during associative learning, PFC networks shift their resources from external to internal processing. As we learn about items in our environment, their representations in our brain become increasingly enriched with our acquired "top-down" knowledge. We found that in the prefrontal cortex, but not the hippocampus, processing of external sensory inputs decreased while internal network dynamics related to top-down processing increased. The results suggest that during learning, prefrontal cortex networks shift their resources from external (sensory) to internal (memory) processing. Copyright © 2016 the authors.
Mindfulness and dynamic functional neural connectivity in children and adolescents.
Marusak, Hilary A; Elrahal, Farrah; Peters, Craig A; Kundu, Prantik; Lombardo, Michael V; Calhoun, Vince D; Goldberg, Elimelech K; Cohen, Cindy; Taub, Jeffrey W; Rabinak, Christine A
2018-01-15
Interventions that promote mindfulness consistently show salutary effects on cognition and emotional wellbeing in adults, and more recently, in children and adolescents. However, we lack understanding of the neurobiological mechanisms underlying mindfulness in youth that should allow for more judicious application of these interventions in clinical and educational settings. Using multi-echo multi-band fMRI, we examined dynamic (i.e., time-varying) and conventional static resting-state connectivity between core neurocognitive networks (i.e., salience/emotion, default mode, central executive) in 42 children and adolescents (ages 6-17). We found that trait mindfulness in youth relates to dynamic but not static resting-state connectivity. Specifically, more mindful youth transitioned more between brain states over the course of the scan, spent overall less time in a certain connectivity state, and showed a state-specific reduction in connectivity between salience/emotion and central executive networks. The number of state transitions mediated the link between higher mindfulness and lower anxiety, providing new insights into potential neural mechanisms underlying benefits of mindfulness on psychological health in youth. Our results provide new evidence that mindfulness in youth relates to functional neural dynamics and interactions between neurocognitive networks, over time. Copyright © 2017 Elsevier B.V. All rights reserved.
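The dynamic-connectivity analysis style referenced here (sliding-window correlations clustered into recurring states, then counting state transitions) can be sketched compactly; the data below are synthetic and the window and cluster choices illustrative, not the study's actual pipeline.

```python
# Sliding-window dynamic connectivity with a tiny k-means over window vectors.
import numpy as np

rng = np.random.default_rng(8)
n_t, win, k = 300, 30, 3
ts = rng.standard_normal((n_t, 3))                 # salience, DMN, CEN time courses
windows = np.array([np.corrcoef(ts[s:s + win].T)[np.triu_indices(3, 1)]
                    for s in range(0, n_t - win, 5)])   # pairwise correlations per window
cent = windows[rng.choice(len(windows), k, replace=False)]   # k-means init
for _ in range(20):
    labels = np.argmin(((windows[:, None] - cent) ** 2).sum(-1), axis=1)
    cent = np.array([windows[labels == j].mean(0) if (labels == j).any() else cent[j]
                     for j in range(k)])
n_transitions = int((np.diff(labels) != 0).sum())  # state-switching frequency
```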
Spatiotemporal neural network dynamics for the processing of dynamic facial expressions.
Sato, Wataru; Kochiyama, Takanori; Uono, Shota
2015-07-24
The dynamic facial expressions of emotion automatically elicit multifaceted psychological activities; however, the temporal profiles and dynamic interaction patterns of brain activities remain unknown. We investigated these issues using magnetoencephalography. Participants passively observed dynamic facial expressions of fear and happiness, or dynamic mosaics. Source-reconstruction analyses utilizing functional magnetic-resonance imaging data revealed higher activation in broad regions of the bilateral occipital and temporal cortices in response to dynamic facial expressions than in response to dynamic mosaics at 150-200 ms and some later time points. The right inferior frontal gyrus exhibited higher activity for dynamic faces versus mosaics at 300-350 ms. Dynamic causal-modeling analyses revealed that dynamic faces activated the dual visual routes and visual-motor route. Superior influences of feedforward and feedback connections were identified before and after 200 ms, respectively. These results indicate that hierarchical, bidirectional neural network dynamics within a few hundred milliseconds implement the processing of dynamic facial expressions.
Chimera states in brain networks: Empirical neural vs. modular fractal connectivity
NASA Astrophysics Data System (ADS)
Chouzouris, Teresa; Omelchenko, Iryna; Zakharova, Anna; Hlinka, Jaroslav; Jiruska, Premysl; Schöll, Eckehard
2018-04-01
Complex spatiotemporal patterns, called chimera states, consist of coexisting coherent and incoherent domains and can be observed in networks of coupled oscillators. The interplay of synchrony and asynchrony in complex brain networks is an important aspect in studies of both the brain function and disease. We analyse the collective dynamics of FitzHugh-Nagumo neurons in complex networks motivated by its potential application to epileptology and epilepsy surgery. We compare two topologies: an empirical structural neural connectivity derived from diffusion-weighted magnetic resonance imaging and a mathematically constructed network with modular fractal connectivity. We analyse the properties of chimeras and partially synchronized states and obtain regions of their stability in the parameter planes. Furthermore, we qualitatively simulate the dynamics of epileptic seizures and study the influence of the removal of nodes on the network synchronizability, which can be useful for applications to epileptic surgery.
Deep Potential Molecular Dynamics: A Scalable Model with the Accuracy of Quantum Mechanics
NASA Astrophysics Data System (ADS)
Zhang, Linfeng; Han, Jiequn; Wang, Han; Car, Roberto; E, Weinan
2018-04-01
We introduce a scheme for molecular simulations, the deep potential molecular dynamics (DPMD) method, based on a many-body potential and interatomic forces generated by a carefully crafted deep neural network trained with ab initio data. The neural network model preserves all the natural symmetries in the problem. It is first-principles based in the sense that there are no ad hoc components aside from the network model. We show that the proposed scheme provides an efficient and accurate protocol in a variety of systems, including bulk materials and molecules. In all these cases, DPMD gives results that are essentially indistinguishable from the original data, at a cost that scales linearly with system size.
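The core idea, a network mapping a symmetry-respecting descriptor of atomic coordinates to a potential energy whose gradient gives forces, can be caricatured as below; the sorted-inverse-distance descriptor, the untrained random network, and finite-difference forces are all illustrative stand-ins for DPMD's actual construction.

```python
# Caricature of a neural-network potential: descriptor -> MLP energy -> forces.
import numpy as np

rng = np.random.default_rng(9)
W1, b1 = rng.normal(size=(16, 6)) * 0.3, np.zeros(16)
W2 = rng.normal(size=16) * 0.3

def descriptor(pos):                                # pos: (4, 3) atom coordinates
    d = np.linalg.norm(pos[:, None] - pos[None], axis=-1)
    return np.sort(1.0 / d[np.triu_indices(4, 1)])  # permutation-invariant features

def energy(pos):
    return W2 @ np.tanh(W1 @ descriptor(pos) + b1)  # untrained illustrative MLP

def forces(pos, h=1e-5):                            # F = -dE/dpos (finite differences)
    F = np.zeros_like(pos)
    for i in range(pos.shape[0]):
        for j in range(3):
            dp = np.zeros_like(pos); dp[i, j] = h
            F[i, j] = -(energy(pos + dp) - energy(pos - dp)) / (2 * h)
    return F

pos = rng.normal(size=(4, 3)) * 2.0
print(energy(pos), forces(pos))
```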
Use of probabilistic neural networks for emitter correlation
NASA Astrophysics Data System (ADS)
Maloney, P. S.
1990-08-01
The Probabilistic Neural Network (PNN) as described by Specht has been successfully applied to a number of emitter correlation problems involving operational data for training and testing of the neural network. The PNN has been found to be a reliable classification tool for determining emitter type or even identifying specific emitter platforms, given appropriate representative training data sets consisting only of parametric data from electronic intelligence (ELINT) reports. Four separate feasibility studies have been conducted to prove the usefulness of the PNN in this application area: hull-to-emitter correlation (HULTEC) for identification of seagoing emitter platforms; identification of land-based emitters from airborne sensors; pulse sorting according to emitter of origin; and emitter typing based on a dynamically learning neural network.
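Specht's PNN itself is compact: one Parzen-window kernel per training exemplar, summed per class, with classification by the largest class likelihood. A sketch with synthetic emitter parametrics (the feature set of RF, PRI, and pulse width is a hypothetical choice, not from the studies):

```python
# Probabilistic neural network: per-class Parzen density, max-likelihood decision.
import numpy as np

rng = np.random.default_rng(10)
sigma = 0.5
train = {"typeA": rng.normal([9.4, 1.0, 2.0], 0.2, (50, 3)),   # synthetic exemplars
         "typeB": rng.normal([9.1, 1.4, 1.5], 0.2, (50, 3))}

def pnn_classify(x):
    scores = {}
    for label, X in train.items():                  # one pattern unit per exemplar
        d2 = ((X - x) ** 2).sum(axis=1)
        scores[label] = np.exp(-d2 / (2 * sigma**2)).mean()  # summation unit
    return max(scores, key=scores.get)              # decision unit

print(pnn_classify(np.array([9.35, 1.05, 1.9])))    # classified as "typeA"
```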
Maier, M A; Shupe, L E; Fetz, E E
2005-10-01
Dynamic recurrent neural networks were derived to simulate neuronal populations generating bidirectional wrist movements in the monkey. The models incorporate anatomical connections of cortical and rubral neurons, muscle afferents, segmental interneurons and motoneurons; they also incorporate the response profiles of four populations of neurons observed in behaving monkeys. The networks were derived by gradient descent algorithms to generate the eight characteristic patterns of motor unit activations observed during alternating flexion-extension wrist movements. The resulting model generated the appropriate input-output transforms and developed connection strengths resembling those in physiological pathways. We found that this network could be further trained to simulate additional tasks, such as experimentally observed reflex responses to limb perturbations that stretched or shortened the active muscles, and scaling of response amplitudes in proportion to inputs. In the final comprehensive network, motor units are driven by the combined activity of cortical, rubral, spinal and afferent units during step tracking and perturbations. The model displayed many emergent properties corresponding to physiological characteristics. The resulting neural network provides a working model of premotoneuronal circuitry and elucidates the neural mechanisms controlling motoneuron activity. It also predicts several features to be experimentally tested, for example the consequences of eliminating inhibitory connections in cortex and red nucleus. It also reveals that co-contraction can be achieved by simultaneous activation of the flexor and extensor circuits without invoking features specific to co-contraction.
Bhowmik, David; Shanahan, Murray
2013-01-01
Groups of neurons firing synchronously are hypothesized to underlie many cognitive functions such as attention, associative learning, memory, and sensory selection. Recent theories suggest that transient periods of synchronization and desynchronization provide a mechanism for dynamically integrating and forming coalitions of functionally related neural areas, and that at these times conditions are optimal for information transfer. Oscillating neural populations display a great amount of spectral complexity, with several rhythms temporally coexisting in different structures and interacting with each other. This paper explores inter-band frequency modulation between neural oscillators using models of quadratic integrate-and-fire neurons and Hodgkin-Huxley neurons. We vary the structural connectivity in a network of neural oscillators, assess the spectral complexity, and correlate the inter-band frequency modulation. We contrast this correlation against measures of metastable coalition entropy and synchrony. Our results show that oscillations in different neural populations modulate each other so as to change frequency, and that the interaction of these fluctuating frequencies in the network as a whole is able to drive different neural populations towards episodes of synchrony. Further to this, we locate an area in the connectivity space in which the system directs itself in this way so as to explore a large repertoire of synchronous coalitions. We suggest that such dynamics facilitate versatile exploration, integration, and communication between functionally related neural areas, and thereby support sophisticated cognitive processing in the brain. PMID:23614040
Mäkinen, Meeri Eeva-Liisa; Ylä-Outinen, Laura; Narkilahti, Susanna
2018-01-01
The electrical activity of the brain arises from single neurons communicating with each other. However, how single neurons interact during early development to give rise to neural network activity remains poorly understood. We studied the emergence of synchronous neural activity in human pluripotent stem cell (hPSC)-derived neural networks simultaneously on a single-neuron level and network level. The contribution of gamma-aminobutyric acid (GABA) and gap junctions to the development of synchronous activity in hPSC-derived neural networks was studied with a GABA agonist and antagonist, and by blocking gap junctional communication, respectively. We characterized the dynamics of network-wide synchrony in hPSC-derived neural networks with high spatial resolution (calcium imaging) and high temporal resolution (microelectrode array, MEA). We found that the emergence of synchrony correlates with a decrease in very strong GABA excitation. However, the synchronous network was found to consist of a heterogeneous mixture of synchronously active cells with variable responses to GABA, GABA agonists and gap junction blockers. Furthermore, we show how single-cell distributions give rise to the network effect of GABA, GABA agonists and gap junction blockers. Finally, based on our observations, we suggest that the earliest form of synchronous neuronal activity depends on gap junctions and a decrease in GABA-induced depolarization, but not on GABAA-mediated signaling. PMID:29559893
Temporal neural networks and transient analysis of complex engineering systems
NASA Astrophysics Data System (ADS)
Uluyol, Onder
A theory is introduced for a multi-layered Local Output Gamma Feedback (LOGF) neural network within the paradigm of Locally-Recurrent Globally-Feedforward neural networks. It is developed for the identification, prediction, and control tasks of spatio-temporal systems and allows for the representation of different time scales through the incorporation of a gamma memory. It is initially applied to the tasks of sunspot and Mackey-Glass series prediction as benchmarks, then it is extended to the task of power level control of a nuclear reactor at different fuel cycle conditions. The developed LOGF neuron model can also be viewed as a Transformed Input and State (TIS) Gamma memory for neural network architectures for temporal processing. The novel LOGF neuron model extends the static neuron model by incorporating into it a short-term memory structure in the form of a digital gamma filter. A feedforward neural network made up of LOGF neurons can thus be used to model dynamic systems. A learning algorithm based upon the Backpropagation-Through-Time (BTT) approach is derived. It is applicable for training a general L-layer LOGF neural network. The spatial and temporal weights and parameters of the network are iteratively optimized for a given problem using the derived learning algorithm.
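A digital gamma filter of the kind embedded in the LOGF neuron can be sketched in a few lines (the depth K and parameter mu below are illustrative; the memory parameter trades depth against resolution):

```python
# Minimal digital gamma memory: a cascade of leaky integrators whose taps
# provide progressively longer time-scale views of the input.
import numpy as np

def gamma_memory(u, K=3, mu=0.4):
    T = len(u)
    x = np.zeros((K + 1, T))
    x[0] = u                                 # tap 0 is the raw input
    for t in range(1, T):
        for k in range(1, K + 1):            # x_k(t) = (1-mu) x_k(t-1) + mu x_{k-1}(t-1)
            x[k, t] = (1 - mu) * x[k, t - 1] + mu * x[k - 1, t - 1]
    return x                                 # taps would feed the neuron's spatial weights

taps = gamma_memory(np.sin(0.3 * np.arange(100)))
print(taps.shape)                            # (4, 100)
```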
Tong, Shaocheng; Wang, Tong; Li, Yongming; Zhang, Huaguang
2014-06-01
This paper discusses the problem of adaptive neural network output feedback control for a class of stochastic nonlinear strict-feedback systems. The systems considered involve unknown nonlinear uncertainties, unknown dead-zones and unmodeled dynamics, and their state variables are not directly measured. In this paper, neural networks (NNs) are employed to approximate the unknown nonlinear uncertainties, and the dead-zone is represented as a time-varying system with a bounded disturbance. An NN state observer is designed to estimate the unmeasured states. Based on both the backstepping design technique and a stochastic small-gain theorem, a robust adaptive NN output feedback control scheme is developed. It is proved that all the variables involved in the closed-loop system are input-to-state practically stable in probability, and are also robust to the unmodeled dynamics. Meanwhile, the observer errors and the output of the system can be regulated to a small neighborhood of the origin by selecting appropriate design parameters. Simulation examples are also provided to illustrate the effectiveness of the proposed approach.
Intelligent approach to prognostic enhancements of diagnostic systems
NASA Astrophysics Data System (ADS)
Vachtsevanos, George; Wang, Peng; Khiripet, Noppadon; Thakker, Ash; Galie, Thomas R.
2001-07-01
This paper introduces a novel methodology for prognostics based on a dynamic wavelet neural network construct and notions from the virtual sensor area. This research has been motivated and supported by the U.S. Navy's active interest in integrating advanced diagnostic and prognostic algorithms in existing Naval digital control and monitoring systems. A rudimentary diagnostic platform is assumed to be available, providing timely information about incipient or impending failure conditions. We focus on the development of a prognostic algorithm capable of predicting accurately and reliably the remaining useful lifetime of a failing machine or component. The prognostic module consists of a virtual sensor and a dynamic wavelet neural network as the predictor. The virtual sensor employs process data to map real measurements into difficult-to-monitor fault quantities. The prognosticator uses a dynamic wavelet neural network as a nonlinear predictor. Means to manage uncertainty and performance metrics are suggested for comparison purposes. An interface to an available shipboard Integrated Condition Assessment System is described and applications to shipboard equipment are discussed. Typical results from pump failures are presented to illustrate the effectiveness of the methodology.
Fractional Hopfield Neural Networks: Fractional Dynamic Associative Recurrent Neural Networks.
Pu, Yi-Fei; Yi, Zhang; Zhou, Ji-Liu
2017-10-01
This paper mainly discusses a novel conceptual framework: fractional Hopfield neural networks (FHNN). As is commonly known, fractional calculus has been incorporated into artificial neural networks, mainly because of its long-term memory and nonlocality. Some researchers have made interesting attempts at fractional neural networks and gained competitive advantages over integer-order neural networks. Therefore, one naturally wonders how to generalize first-order Hopfield neural networks to fractional-order ones, and how to implement FHNN by means of fractional calculus. We propose fractional calculus as the mathematical method for implementing FHNN. First, we implement the fractor in the form of an analog circuit. Second, we implement FHNN by utilizing the fractor and the fractional steepest descent approach, construct its Lyapunov function, and further analyze its attractors. Third, we perform experiments to analyze the stability and convergence of FHNN, and further discuss its applications to the defense against chip cloning attacks for anticounterfeiting. The main contribution of our work is to propose FHNN in the form of an analog circuit by utilizing a fractor and the fractional steepest descent approach, construct its Lyapunov function, prove its Lyapunov stability, analyze its attractors, and apply FHNN to the defense against chip cloning attacks for anticounterfeiting. A significant advantage of FHNN is that its attractors essentially relate to the neuron's fractional order. FHNN possesses the fractional-order-stability and fractional-order-sensitivity characteristics.
[Early warning of measles using neural networks].
Yu, Bin; Ding, Chun; Wei, Shan-bo; Chen, Bang-hua; Liu, Pu-lin; Luo, Tong-yong; Wang, Jia-gang; Pan, Zhi-wei; Lu, Jun-an
2011-01-01
To study the use of neural networks for the early warning of measles, a model was developed to predict and analyze the prevalence and incidence of measles based on available monthly and weekly measles reports from January 1986 to August 2006 in Wuhan city. When the dynamic time series model was established with back propagation (BP) networks consisting of two layers, with p assigned as 9, the convergence speed was acceptable and the correlation coefficient was equal to 0.85. Monthly forecasting was more suitable for predicting specific values, whereas weekly forecasting performed better for classification under probabilistic neural networks (PNN). When the available data were sufficient, the two-layer BP networks seemed more feasible for early warning; when data were insufficient, PNN could be used for prediction. This method appears feasible for use in an early warning system for measles.
Synaptic depression in deep neural networks for speech processing.
Zhang, Wenhao; Li, Hanyu; Yang, Minda; Mesgarani, Nima
2016-03-01
A characteristic property of biological neurons is their ability to dynamically change the synaptic efficacy in response to variable input conditions. This mechanism, known as synaptic depression, significantly contributes to the formation of normalized representation of speech features. Synaptic depression also contributes to the robust performance of biological systems. In this paper, we describe how synaptic depression can be modeled and incorporated into deep neural network architectures to improve their generalization ability. We observed that when synaptic depression is added to the hidden layers of a neural network, it reduces the effect of changing background activity in the node activations. In addition, we show that when synaptic depression is included in a deep neural network trained for phoneme classification, the performance of the network improves under noisy conditions not included in the training phase. Our results suggest that more complete neuron models may further reduce the gap between the biological performance and artificial computing, resulting in networks that better generalize to novel signal conditions.
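The mechanism can be sketched with a standard depressing-synapse update applied to a layer's activations (a hypothetical illustration, not the authors' exact formulation; U and tau_rec are invented parameters):

```python
# Tsodyks-Markram-style synaptic depression added to a feedforward layer:
# each unit's gain is a resource r that is consumed by strong activations
# and recovers over time, normalizing away steady background activity.
import numpy as np

def depressed_layer(A, U=0.2, tau_rec=20.0):
    """A: (T, n) pre-activation sequence; returns depressed activations."""
    T, n = A.shape
    r = np.ones(n)                             # available synaptic resource
    out = np.zeros_like(A)
    for t in range(T):
        a = np.maximum(A[t], 0.0)              # ReLU activation
        out[t] = r * a                         # efficacy scales the activation
        r = r + (1.0 - r) / tau_rec - U * r * a    # recovery + consumption
        r = np.clip(r, 0.0, 1.0)
    return out

A = np.abs(np.random.default_rng(1).normal(1.0, 0.3, (100, 8)))
A[:, 0] += 2.0                                 # a strong steady "background" unit
print(depressed_layer(A).mean(axis=0)[:2])     # unit 0 is normalized downward
```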
Dynamic Photorefractive Memory and its Application for Opto-Electronic Neural Networks.
NASA Astrophysics Data System (ADS)
Sasaki, Hironori
This dissertation describes the analysis of photorefractive crystal dynamics and its application to opto-electronic neural network systems. The realization of the dynamic photorefractive memory is investigated in terms of the following aspects: fast memory update, uniform grating multiplexing schedules and the prevention of the partial erasure of existing gratings. The fast memory update is realized by the selective erasure process that superimposes a new grating on the original one with an appropriate phase shift. The dynamics of the selective erasure process is analyzed using the first-order photorefractive material equations and experimentally confirmed. The effects of beam coupling and fringe bending on the selective erasure dynamics are also analyzed by numerically solving a combination of coupled wave equations and the photorefractive material equation. An incremental recording technique is proposed as a uniform grating multiplexing schedule and compared with the conventional scheduled recording technique in terms of phase distribution in the presence of an external dc electric field, as well as the image gray scale dependence. The theoretical analysis and experimental results proved the superiority of the incremental recording technique over scheduled recording. A novel recirculating information memory architecture is proposed and experimentally demonstrated to prevent partial degradation of the existing gratings when the memory is accessed. Gratings are circulated through a memory feedback loop based on the incremental recording dynamics, demonstrating robust read/write/erase capabilities. The dynamic photorefractive memory is applied to opto-electronic neural network systems. A module architecture based on the page-oriented dynamic photorefractive memory is proposed. This module architecture can implement two complementary interconnection organizations, fan-in and fan-out. The module system scalability and the learning capabilities are theoretically investigated using the photorefractive dynamics described in previous chapters of the dissertation. The implementation of a feed-forward image compression network with 900 input and 9 output neurons with 6-bit interconnection accuracy is experimentally demonstrated. Learning of a Perceptron network that determines sex based on input face images of 900 pixels is also successfully demonstrated.
Organization of excitable dynamics in hierarchical biological networks.
Müller-Linow, Mark; Hilgetag, Claus C; Hütt, Marc-Thorsten
2008-09-26
This study investigates the contributions of network topology features to the dynamic behavior of hierarchically organized excitable networks. Representatives of different types of hierarchical networks as well as two biological neural networks are explored with a three-state model of node activation for systematically varying levels of random background network stimulation. The results demonstrate that two principal topological aspects of hierarchical networks, node centrality and network modularity, correlate with the network activity patterns at different levels of spontaneous network activation. The approach also shows that the dynamic behavior of the cerebral cortical systems network in the cat is dominated by the network's modular organization, while the activation behavior of the cellular neuronal network of Caenorhabditis elegans is strongly influenced by hub nodes. These findings indicate the interaction of multiple topological features and dynamic states in the function of complex biological networks.
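A three-state excitable model of the kind used in this study can be simulated in a few lines (this is a generic susceptible/excited/refractory sketch with invented parameters, not the paper's exact model or networks):

```python
# Minimal three-state (S/E/R) excitable dynamics on a graph, with a tunable
# rate f of random background stimulation.
import numpy as np

def ser_step(state, adj, f, rng):
    S, E, R = 0, 1, 2
    nxt = state.copy()
    excited_neighbor = (adj @ (state == E).astype(int)) > 0
    fire = (state == S) & (excited_neighbor | (rng.random(len(state)) < f))
    nxt[fire] = E                      # susceptible -> excited
    nxt[state == E] = R                # excited -> refractory
    nxt[state == R] = S                # refractory -> susceptible
    return nxt

rng = np.random.default_rng(0)
adj = (rng.random((50, 50)) < 0.08).astype(int); np.fill_diagonal(adj, 0)
state = np.zeros(50, dtype=int)
for _ in range(100):
    state = ser_step(state, adj, f=0.01, rng=rng)
print(np.bincount(state, minlength=3))        # counts of S/E/R nodes
```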
NASA Astrophysics Data System (ADS)
Lin, Daw-Tung; Ligomenides, Panos A.; Dayhoff, Judith E.
1993-08-01
Inspired by the time delays that occur in neurobiological signal transmission, we describe an adaptive time delay neural network (ATNN), a powerful dynamic learning technique for spatiotemporal pattern transformation and temporal sequence identification. The dynamic properties of this network are formulated through the adaptation of time delays and synapse weights, which are adjusted on-line based on gradient descent rules according to the evolution of observed inputs and outputs. We have applied the ATNN to examples that possess spatiotemporal complexity, in which temporal sequences are completed by the network, demonstrating its applicability to pattern completion. Simulation results show that the ATNN learns the topology of circular and figure-eight trajectories within 500 on-line training iterations, and reproduces the trajectories dynamically with very high accuracy. The ATNN was also trained to model the Fourier series expansion of the sum of different odd harmonics. The resulting network provides more flexibility and efficiency than the TDNN and allows the network to seek optimal values for the time delays as well as optimal synapse weights.
Electronic implementation of associative memory based on neural network models
NASA Technical Reports Server (NTRS)
Moopenn, A.; Lambe, John; Thakoor, A. P.
1987-01-01
An electronic embodiment of a neural network based associative memory in the form of a binary connection matrix is described. The nature of false memory errors, their effect on the information storage capacity of binary connection matrix memories, and a novel technique to eliminate such errors with the help of asymmetrical extra connections are discussed. The stability of the matrix memory system incorporating a unique local inhibition scheme is analyzed in terms of local minimization of an energy function. The memory's stability, dynamic behavior, and recall capability are investigated using a 32-'neuron' electronic neural network memory with a 1024-programmable binary connection matrix.
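The storage and recall principle of a binary connection matrix can be sketched as follows (a Willshaw-style clipped Hebbian stand-in with invented sizes; the paper's asymmetric error-correcting connections and local inhibition scheme are not modeled here):

```python
# Minimal binary connection-matrix associative memory: clipped Hebbian
# storage of sparse binary patterns, recall by thresholded matrix-vector product.
import numpy as np

rng = np.random.default_rng(2)
N, M, k = 64, 5, 6                            # neurons, patterns, active bits
patterns = np.zeros((M, N), dtype=int)
for p in patterns:
    p[rng.choice(N, k, replace=False)] = 1

T = np.clip(sum(np.outer(p, p) for p in patterns), 0, 1)  # binary matrix

cue = patterns[0].copy()
cue[np.flatnonzero(cue)[:2]] = 0              # degrade the cue: drop 2 bits
recall = (T @ cue >= cue.sum()).astype(int)   # threshold = # active cue bits
print(np.array_equal(recall, patterns[0]))    # perfect recall (with high prob.)
```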
Jin, Long; Liao, Bolin; Liu, Mei; Xiao, Lin; Guo, Dongsheng; Yan, Xiaogang
2017-01-01
By incorporating the physical constraints in joint space, a different-level simultaneous minimization scheme, which takes both the robot kinematics and robot dynamics into account, is presented and investigated for fault-tolerant motion planning of redundant manipulator in this paper. The scheme is reformulated as a quadratic program (QP) with equality and bound constraints, which is then solved by a discrete-time recurrent neural network. Simulative verifications based on a six-link planar redundant robot manipulator substantiate the efficacy and accuracy of the presented acceleration fault-tolerant scheme, the resultant QP and the corresponding discrete-time recurrent neural network. PMID:28955217
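The core idea of solving a constrained QP with recurrent neural dynamics can be illustrated with a generic primal-dual (Arrow-Hurwicz-style) iteration on an invented toy problem; this is a sketch of the same flavor, not the authors' specific discrete-time network:

```python
# Discrete-time "neural dynamics" for a convex QP with an equality constraint
# and box (joint-limit-like) bounds: projected primal descent + dual ascent.
import numpy as np

def qp_network(Q, c, A, b, lo, hi, eta=0.01, iters=20000):
    x = np.zeros(len(c))
    lam = np.zeros(len(b))
    for _ in range(iters):
        x = np.clip(x - eta * (Q @ x + c + A.T @ lam), lo, hi)  # primal step + projection
        lam = lam + eta * (A @ x - b)                           # dual ascent on equality
    return x

Q = np.diag([2.0, 2.0, 2.0]); c = np.array([-2.0, 0.0, 1.0])
A = np.array([[1.0, 1.0, 1.0]]); b = np.array([1.0])
x = qp_network(Q, c, A, b, lo=-1.0, hi=1.0)
print(x, A @ x)                               # x satisfies the constraints
```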
NASA Astrophysics Data System (ADS)
Qian, Xiaoshan
2018-01-01
Traditional models of evaporation process parameters suffer from large prediction errors because of the continuous and cumulative characteristics of the process. Building on an analysis of the process, an adaptive particle swarm optimized neural network forecasting method is proposed in which an autoregressive moving average (ARMA) error-correction procedure compensates the neural network's predictions to improve prediction accuracy. Validation against production data from an alumina plant evaporation process shows that, compared with the traditional model, the new model's prediction accuracy is greatly improved, and the model can be used to predict the dynamic evolution of the components of sodium aluminate solution during evaporation.
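The error-correction idea can be sketched as follows (a hedged illustration with synthetic data and a stand-in for the neural network predictions; the ARMA orders are assumptions, and statsmodels is required):

```python
# Fit an ARMA model to the network's historical residuals and add its
# one-step forecast to the network's next prediction.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
y = np.sin(0.1 * np.arange(300)) + rng.normal(0, 0.05, 300)   # "process" data
y_nn = np.sin(0.1 * np.arange(300)) * 0.9                     # stand-in NN predictions
residuals = y[:-1] - y_nn[:-1]                                # historical NN errors

arma = ARIMA(residuals, order=(2, 0, 1)).fit()                # ARMA(2,1) on errors
correction = arma.forecast(steps=1)[0]
y_corrected = y_nn[-1] + correction
print(abs(y[-1] - y_nn[-1]), abs(y[-1] - y_corrected))        # error shrinks
```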
Zanutto, B. Silvano
2017-01-01
Animals are proposed to learn the latent rules governing their environment in order to maximize their chances of survival. However, rules may change without notice, forcing animals to keep a memory of which one is currently at work. Rule switching can lead to situations in which the same stimulus/response pairing is positively and negatively rewarded in the long run, depending on variables that are not accessible to the animal. This fact raises questions on how neural systems are capable of reinforcement learning in environments where the reinforcement is inconsistent. Here we address this issue by asking which aspects of connectivity, neural excitability and synaptic plasticity are key for a very general, stochastic spiking neural network model to solve a task in which rules change without being cued, taking the serial reversal task (SRT) as a paradigm. Contrary to what could be expected, we found strong limitations on the ability of biologically plausible networks to solve the SRT. In particular, we proved that no network of neurons can learn a SRT if it is a single neural population that integrates stimuli information and at the same time is responsible for choosing the behavioural response. This limitation is independent of the number of neurons, neuronal dynamics or plasticity rules, and arises from the fact that plasticity is locally computed at each synapse, and that synaptic changes and neuronal activity are mutually dependent processes. We propose and characterize a spiking neural network model that solves the SRT, which relies on separating the functions of stimuli integration and response selection. The model suggests that experimental efforts to understand neural function should focus on the characterization of neural circuits according to their connectivity, neural dynamics, and the degree of modulation of synaptic plasticity with reward. PMID:29077735
Neural network approach to time-dependent dividing surfaces in classical reaction dynamics.
Schraft, Philippe; Junginger, Andrej; Feldmaier, Matthias; Bardakcioglu, Robin; Main, Jörg; Wunner, Günter; Hernandez, Rigoberto
2018-04-01
In a dynamical system, the transition between reactants and products is typically mediated by an energy barrier whose properties determine the corresponding pathways and rates. The latter is the flux through a dividing surface (DS) between the two corresponding regions, and it is exact only if it is free of recrossings. For time-independent barriers, the DS can be attached to the top of the corresponding saddle point of the potential energy surface, and in time-dependent systems, the DS is a moving object. The precise determination of these direct reaction rates, e.g., using transition state theory, requires the actual construction of a DS for a given saddle geometry, which is in general a demanding methodical and computational task, especially in high-dimensional systems. In this paper, we demonstrate how such time-dependent, global, and recrossing-free DSs can be constructed using neural networks. In our approach, the neural network uses the bath coordinates and time as input, and it is trained in a way that its output provides the position of the DS along the reaction coordinate. An advantage of this procedure is that, once the neural network is trained, the complete information about the dynamical phase space separation is stored in the network's parameters, and a precise distinction between reactants and products can be made for all possible system configurations, all times, and with little computational effort. We demonstrate this general method for two- and three-dimensional systems and explain its straightforward extension to even more degrees of freedom.
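The regression step at the heart of the method can be sketched conceptually (a made-up one-dimensional "bath" and an invented oscillating target stand in for the TST-derived training data; this is not the authors' system or code):

```python
# Train a small network to output the dividing-surface position along the
# reaction coordinate, given a bath coordinate y and time t.
import torch

net = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

# Hypothetical training targets: a DS position oscillating with time and
# tilted by the bath coordinate.
y = torch.rand(512, 1) * 2 - 1
t = torch.rand(512, 1) * 6.28
x_ds = 0.3 * torch.sin(t) + 0.1 * y

for _ in range(500):
    opt.zero_grad()
    loss = torch.mean((net(torch.cat([y, t], dim=1)) - x_ds) ** 2)
    loss.backward()
    opt.step()
print(loss.item())   # once trained, net(y, t) encodes the DS for all configurations
```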
Moghadam, Saeed Montazeri; Seyyedsalehi, Seyyed Ali
2018-05-31
Nonlinear components extracted from deep structures of bottleneck neural networks exhibit a great ability to express the input space in a low-dimensional manifold. Sharing and combining the components boosts the capability of neural networks to synthesize and interpolate new and imaginary data. This synthesis is possibly a simple model of imagination in the human brain, where the components are expressed in a nonlinear low-dimensional manifold. The current paper introduces a novel Dynamic Deep Bottleneck Neural Network to analyze and extract three main features of videos regarding the expression of emotions on the face. These main features are identity, emotion and expression intensity, which lie in three different sub-manifolds of one general nonlinear manifold. The proposed model, which enjoys the advantages of recurrent networks, was used to analyze the sequence and dynamics of information in videos. Notably, the model also has the potential to synthesize new videos showing variations of one specific emotion on the face of unknown subjects. Experiments on the discrimination and recognition ability of the extracted components showed that the proposed model achieves an average accuracy of 97.77% in the recognition of six prominent emotions (Fear, Surprise, Sadness, Anger, Disgust, and Happiness) and 78.17% in the recognition of intensity. The produced videos revealed variations from neutral to the apex of an emotion on the face of the unfamiliar test subject, with an average similarity of 0.8 to the reference videos on the SSIM scale.
Sase, Takumi; Katori, Yuichi; Komuro, Motomasa; Aihara, Kazuyuki
2017-01-01
We investigate a discrete-time network model composed of excitatory and inhibitory neurons and dynamic synapses, with the aim of revealing the dynamical properties behind oscillatory phenomena possibly related to brain functions. We use a stochastic neural network model to derive the corresponding macroscopic mean field dynamics, and subsequently analyze the dynamical properties of the network. In addition to slow and fast oscillations arising from excitatory and inhibitory networks, respectively, we show that the interaction between these two networks generates phase-amplitude cross-frequency coupling (CFC), in which multiple different frequency components coexist and the amplitude of the fast oscillation is modulated by the phase of the slow oscillation. Furthermore, we clarify the detailed properties of the oscillatory phenomena by applying bifurcation analysis to the mean field model, and accordingly show that the intermittent and the continuous CFCs can be characterized by an aperiodic orbit on a closed curve and one on a torus, respectively. These two CFC modes switch depending on the coupling strength from the excitatory to inhibitory networks, via the saddle-node cycle bifurcation of a one-dimensional torus in map (MT1SNC), and may be associated with the function of multi-item representation. We believe that the present model might have potential for studying possible functional roles of phase-amplitude CFC in the cerebral cortex. PMID:28424606
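Phase-amplitude CFC of the kind discussed here is commonly quantified with a mean-vector-length modulation index; the following sketch (synthetic signal, illustrative frequency bands, not the paper's mean-field analysis) shows one standard way to measure it:

```python
# Measure phase-amplitude coupling between a slow and a fast oscillation.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0
t = np.arange(0, 10, 1 / fs)
slow = np.sin(2 * np.pi * 6 * t)                          # 6 Hz "slow" rhythm
fast = (1 + 0.8 * slow) * np.sin(2 * np.pi * 60 * t)      # amplitude-modulated 60 Hz
x = slow + fast + 0.1 * np.random.default_rng(7).normal(size=len(t))

def bandpass(sig, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, sig)

phase = np.angle(hilbert(bandpass(x, 4, 8)))              # slow-band phase
amp = np.abs(hilbert(bandpass(x, 50, 70)))                # fast-band envelope
mi = np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)
print(mi)                                                  # ~0 means no coupling
```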
Learning in Neural Networks: VLSI Implementation Strategies
NASA Technical Reports Server (NTRS)
Duong, Tuan Anh
1995-01-01
Fully-parallel hardware neural network implementations may be applied to high-speed recognition, classification, and mapping tasks in areas such as vision, or can be used as low-cost self-contained units for tasks such as error detection in mechanical systems (e.g. autos). Learning is required not only to satisfy application requirements, but also to overcome hardware-imposed limitations such as reduced dynamic range of connections.
Neural network with dynamically adaptable neurons
NASA Technical Reports Server (NTRS)
Tawel, Raoul (Inventor)
1994-01-01
This invention is an adaptive neuron for use in neural network processors. The adaptive neuron participates in the supervised learning phase of operation on a co-equal basis with the synapse matrix elements by adaptively changing its gain in a similar manner to the change of weights in the synapse elements. In this manner, training time is decreased by as much as three orders of magnitude.
UAV Trajectory Modeling Using Neural Networks
NASA Technical Reports Server (NTRS)
Xue, Min
2017-01-01
A large number of small Unmanned Aerial Vehicles (sUAVs) is projected to operate in the near future. Potential sUAV applications include, but are not limited to, search and rescue, inspection and surveillance, aerial photography and video, precision agriculture, and parcel delivery. sUAVs are expected to operate in the uncontrolled Class G airspace, which is at or below 500 feet above ground level (AGL), where many static and dynamic constraints exist, such as ground properties and terrain, restricted areas, various winds, manned helicopters, and conflict avoidance among sUAVs. How to enable safe, efficient, and massive sUAV operations in low-altitude airspace remains a great challenge. NASA's Unmanned aircraft system Traffic Management (UTM) research initiative works on establishing infrastructure and developing policies, requirements, and rules to enable safe and efficient sUAV operations. To achieve this goal, it is important to gain insight into future UTM traffic operations through simulations, where an accurate trajectory model plays an extremely important role. On the other hand, as in current aviation development, trajectory modeling should also serve as the foundation for any advanced concepts and tools in UTM. Accurate models of sUAV dynamics and control systems are very important considering the requirement of meter-level precision in UTM operations. The vehicle dynamics are relatively easy to derive and model; however, vehicle control systems remain unknown, as they are usually kept by manufacturers as part of their intellectual property. That brings challenges to trajectory modeling for sUAVs: how can a vehicle's trajectory be modeled with an unknown control system? This work proposes to use a neural network to model a vehicle's trajectory. The neural network is first trained to learn the vehicle's responses under numerous conditions. Once fully trained, given the current vehicle states, winds, and desired future trajectory, the neural network should be able to predict the vehicle's future states at the next time step. A complete 4-D trajectory is then generated step by step using the trained neural network. Experiments in this work show that the neural network can approximate the sUAV's model and predict the trajectory accurately.
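The step-by-step rollout described above can be sketched as follows (an untrained stand-in network with invented state, wind, and waypoint dimensions; in practice the model would first be trained on recorded vehicle responses):

```python
# A network maps (current state, wind, desired next waypoint) to the next
# state; a 4-D trajectory is generated by stepping the prediction forward.
import torch

state_dim, wind_dim, wp_dim = 6, 3, 4                 # illustrative sizes
f = torch.nn.Sequential(torch.nn.Linear(state_dim + wind_dim + wp_dim, 64),
                        torch.nn.Tanh(), torch.nn.Linear(64, state_dim))

def rollout(s0, winds, waypoints):
    traj, s = [s0], s0
    for w, wp in zip(winds, waypoints):               # one step per time tick
        s = f(torch.cat([s, w, wp]))                  # predicted next state
        traj.append(s)
    return torch.stack(traj)

s0 = torch.zeros(state_dim)
winds = [torch.zeros(wind_dim)] * 10
wps = [torch.tensor([1.0, 0.0, 0.5, float(k)]) for k in range(10)]
print(rollout(s0, winds, wps).shape)                  # (11, 6) trajectory
```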
Ehret, Phillip J; Monroe, Brian M; Read, Stephen J
2015-05-01
We present a neural network implementation of central components of the iterative reprocessing (IR) model. The IR model argues that the evaluation of social stimuli (attitudes, stereotypes) is the result of the IR of stimuli in a hierarchy of neural systems: The evaluation of social stimuli develops and changes over processing. The network has a multilevel, bidirectional feedback evaluation system that integrates initial perceptual processing and later developing semantic processing. The network processes stimuli (e.g., an individual's appearance) over repeated iterations, with increasingly higher levels of semantic processing over time. As a result, the network's evaluations of stimuli evolve. We discuss the implications of the network for a number of different issues involved in attitudes and social evaluation. The success of the network supports the IR model framework and provides new insights into attitude theory.
Gerhard, Felipe; Kispersky, Tilman; Gutierrez, Gabrielle J.; Marder, Eve; Kramer, Mark; Eden, Uri
2013-01-01
Identifying the structure and dynamics of synaptic interactions between neurons is the first step to understanding neural network dynamics. The presence of synaptic connections is traditionally inferred through the use of targeted stimulation and paired recordings or by post-hoc histology. More recently, causal network inference algorithms have been proposed to deduce connectivity directly from electrophysiological signals, such as extracellularly recorded spiking activity. Usually, these algorithms have not been validated on a neurophysiological data set for which the actual circuitry is known. Recent work has shown that traditional network inference algorithms based on linear models typically fail to identify the correct coupling of a small central pattern generating circuit in the stomatogastric ganglion of the crab Cancer borealis. In this work, we show that point process models of observed spike trains can guide inference of relative connectivity estimates that match the known physiological connectivity of the central pattern generator up to a choice of threshold. We elucidate the necessary steps to derive faithful connectivity estimates from a model that incorporates the spike train nature of the data. We then apply the model to measure changes in the effective connectivity pattern in response to two pharmacological interventions, which affect both intrinsic neural dynamics and synaptic transmission. Our results provide the first successful application of a network inference algorithm to a circuit for which the actual physiological synapses between neurons are known. The point process methodology presented here generalizes well to larger networks and can describe the statistics of neural populations. In general we show that advanced statistical models allow for the characterization of effective network structure, deciphering underlying network dynamics and estimating information-processing capabilities. PMID:23874181
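A simplified stand-in for this kind of point-process coupling inference is a Poisson GLM regressing one neuron's binned spike counts on lagged counts of another (synthetic data; not the paper's exact model, which additionally handles intrinsic dynamics and multiple covariates):

```python
# Infer a putative coupling weight from binned spike trains with a Poisson GLM.
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(4)
T, lag = 5000, 1
pre = rng.poisson(0.2, T)                        # presynaptic neuron's counts
post = rng.poisson(np.exp(-2.0 + 1.5 * np.roll(pre, lag)))  # lagged influence

X = np.column_stack([np.roll(pre, lag)])[lag:]   # lagged covariate matrix
y = post[lag:]
glm = PoissonRegressor(alpha=0.0).fit(X, y)
print(glm.coef_)                                  # large positive weight ->
                                                  # putative excitatory coupling
```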
Durstewitz, Daniel
2017-06-01
The computational and cognitive properties of neural systems are often thought to be implemented in terms of their (stochastic) network dynamics. Hence, recovering the system dynamics from experimentally observed neuronal time series, like multiple single-unit recordings or neuroimaging data, is an important step toward understanding its computations. Ideally, one would not only seek a (lower-dimensional) state space representation of the dynamics, but would wish to have access to its statistical properties and their generative equations for in-depth analysis. Recurrent neural networks (RNNs) are a computationally powerful and dynamically universal formal framework which has been extensively studied from both the computational and the dynamical systems perspective. Here we develop a semi-analytical maximum-likelihood estimation scheme for piecewise-linear RNNs (PLRNNs) within the statistical framework of state space models, which accounts for noise in both the underlying latent dynamics and the observation process. The Expectation-Maximization algorithm is used to infer the latent state distribution, through a global Laplace approximation, and the PLRNN parameters iteratively. After validating the procedure on toy examples, and using inference through particle filters for comparison, the approach is applied to multiple single-unit recordings from the rodent anterior cingulate cortex (ACC) obtained during performance of a classical working memory task, delayed alternation. Models estimated from kernel-smoothed spike time data were able to capture the essential computational dynamics underlying task performance, including stimulus-selective delay activity. The estimated models were rarely multistable, however; rather, they were tuned to exhibit slow dynamics in the vicinity of a bifurcation point. In summary, the present work advances a semi-analytical (thus reasonably fast) maximum-likelihood estimation framework for PLRNNs that may enable one to recover relevant aspects of the nonlinear dynamics underlying observed neuronal time series, and to directly link these to computational properties.
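The generative model being estimated here can be simulated directly; the sketch below uses invented parameters (the paper infers them by EM) to show the PLRNN form of linear drift plus a piecewise-linear (ReLU) coupling term, with a linear-Gaussian observation model:

```python
# Simulate PLRNN latent dynamics and noisy observations.
import numpy as np

rng = np.random.default_rng(5)
dz, dx, T = 3, 10, 200
A = np.diag([0.9, 0.8, 0.7])                    # autoregression (diagonal)
W = 0.2 * rng.normal(size=(dz, dz)); np.fill_diagonal(W, 0)  # off-diag coupling
B = rng.normal(size=(dx, dz))                   # observation matrix

z = np.zeros((T, dz)); x = np.zeros((T, dx))
for t in range(1, T):
    z[t] = A @ z[t-1] + W @ np.maximum(z[t-1], 0) + rng.normal(0, 0.1, dz)
    x[t] = B @ z[t] + rng.normal(0, 0.5, dx)    # e.g., smoothed spike rates
print(z.shape, x.shape)   # the EM scheme in the paper inverts this model
```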
Random Boolean networks for autoassociative memory: Optimization and sequential learning
NASA Astrophysics Data System (ADS)
Sherrington, D.; Wong, K. Y. M.
Conventional neural networks are based on synaptic storage of information, even when the neural states are discrete and bounded. In general, the set of potential local operations is much greater. Here we discuss some aspects of the properties of networks of binary neurons with more general Boolean functions controlling the local dynamics. Two specific aspects are emphasised: (i) optimization in the presence of noise, and (ii) a simple model for short-term memory exhibiting primacy and recency in the recall of sequentially taught patterns.
NASA Technical Reports Server (NTRS)
Villarreal, James A.; Shelton, Robert O.
1991-01-01
Introduced here is a novel technique which adds the dimension of time to the well known back propagation neural network algorithm. Cited here are several reasons why the inclusion of automated spatial and temporal associations is crucial to effective systems modeling. An overview of other works which also model spatiotemporal dynamics is furnished. A detailed description is given of the processes necessary to implement the space-time network algorithm. Several demonstrations that illustrate the capabilities and performance of this new architecture are given.
NASA Astrophysics Data System (ADS)
Zhang, Lin; Lu, Jian; Zhou, Jialin; Zhu, Jinqing; Li, Yunxuan; Wan, Qian
2018-03-01
Didi Dache is the most popular taxi-order mobile app in China, providing an online taxi-hailing service. The big database obtained from this app can be used to analyze the day-to-day dynamic evolution of the complexity of the Didi taxi trip network (DTTN) at the level of complex network dynamics. First, this paper proposes data cleaning and modeling methods for expressing Nanjing's DTTN as a complex network. Second, three consecutive weeks' data are cleaned to establish 21 DTTNs based on the proposed big data processing technology. Then, multiple topology measures that characterize the day-to-day dynamic evolution of the complexity of these networks are provided. Third, these measures are calculated for the 21 DTTNs and interpreted with their practical implications. They are used as a training set for a BP neural network designed to predict the evolution of DTTN complexity. Finally, the reliability of the designed BP neural network is verified by comparing its output with the actual data and with results obtained from the ARIMA method. Because network complexity is the basis for modeling cascading failures and conducting link prediction in complex systems, the proposed research framework not only provides a novel perspective for analyzing the DTTN at the level of aggregated system behavior, but can also be used to improve the DTTN management level.
McDonough, Ian M.; Nashiro, Kaoru
2014-01-01
An emerging field of research focused on fluctuations in brain signals has provided evidence that the complexity of those signals, as measured by entropy, conveys important information about network dynamics (e.g., local and distributed processing). While much research has focused on how neural complexity differs in populations with different age groups or clinical disorders, substantially less research has focused on the basic understanding of neural complexity in populations with young and healthy brain states. The present study used resting-state fMRI data from the Human Connectome Project (Van Essen et al., 2013) to test the extent that neural complexity in the BOLD signal, as measured by multiscale entropy (1) would differ from random noise, (2) would differ between four major resting-state networks previously associated with higher-order cognition, and (3) would be associated with the strength and extent of functional connectivity—a complementary method of estimating information processing. We found that complexity in the BOLD signal exhibited different patterns of complexity from white, pink, and red noise and that neural complexity was differentially expressed between resting-state networks, including the default mode, cingulo-opercular, left and right frontoparietal networks. Lastly, neural complexity across all networks was negatively associated with functional connectivity at fine scales, but was positively associated with functional connectivity at coarse scales. The present study is the first to characterize neural complexity in BOLD signals at a high temporal resolution and across different networks and might help clarify the inconsistencies between neural complexity and functional connectivity, thus informing the mechanisms underlying neural complexity. PMID:24959130
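Multiscale entropy as used here can be computed with a short reference implementation (O(N^2) and illustrative; m = 2 and r = 0.2 x SD are conventional choices but still assumptions in this sketch):

```python
# Sample entropy computed on successively coarse-grained copies of a signal.
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    x = np.asarray(x); r *= x.std()
    def count(mm):
        templ = np.array([x[i:i+mm] for i in range(len(x)-mm)])
        d = np.max(np.abs(templ[:, None] - templ[None, :]), axis=2)  # Chebyshev
        return (np.sum(d <= r) - len(templ)) / 2.0   # matching pairs, no self-matches
    B, A = count(m), count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, scales=(1, 2, 4, 8)):
    out = []
    for s in scales:                                  # coarse-grain by averaging
        n = len(x) // s
        out.append(sample_entropy(np.asarray(x)[:n*s].reshape(n, s).mean(axis=1)))
    return out

sig = np.random.default_rng(6).normal(size=1000)
print(multiscale_entropy(sig))    # white noise: entropy falls with scale
```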
Compact holographic optical neural network system for real-time pattern recognition
NASA Astrophysics Data System (ADS)
Lu, Taiwei; Mintzer, David T.; Kostrzewski, Andrew A.; Lin, Freddie S.
1996-08-01
One of the important characteristics of artificial neural networks is their capability for massive interconnection and parallel processing. Recently, specialized electronic neural network processors and VLSI neural chips have been introduced in the commercial market. The number of parallel channels they can handle is limited because of the limited parallel interconnections that can be implemented with 1D electronic wires. High-resolution pattern recognition problems can require a large number of neurons for parallel processing of an image. This paper describes a holographic optical neural network (HONN) that is based on high-resolution volume holographic materials and is capable of performing massive 3D parallel interconnection of tens of thousands of neurons. A HONN with more than 16,000 neurons packaged in an attaché case has been developed. Rotation-, shift-, and scale-invariant pattern recognition operations have been demonstrated with this system. System parameters such as the signal-to-noise ratio, dynamic range, and processing speed are discussed.
NASA Astrophysics Data System (ADS)
Wang, Jing; Yang, Tianyu; Staskevich, Gennady; Abbe, Brian
2017-04-01
This paper studies the cooperative control problem for a class of multiagent dynamical systems with partially unknown nonlinear system dynamics. In particular, the control objective is to solve the state consensus problem for multiagent systems based on the minimisation of certain cost functions for individual agents. Under the assumption that there exist admissible cooperative controls for such class of multiagent systems, the formulated problem is solved through finding the optimal cooperative control using the approximate dynamic programming and reinforcement learning approach. With the aid of neural network parameterisation and online adaptive learning, our method renders a practically implementable approximately adaptive neural cooperative control for multiagent systems. Specifically, based on the Bellman's principle of optimality, the Hamilton-Jacobi-Bellman (HJB) equation for multiagent systems is first derived. We then propose an approximately adaptive policy iteration algorithm for multiagent cooperative control based on neural network approximation of the value functions. The convergence of the proposed algorithm is rigorously proved using the contraction mapping method. The simulation results are included to validate the effectiveness of the proposed algorithm.
Neural mechanisms of movement planning: motor cortex and beyond.
Svoboda, Karel; Li, Nuo
2018-04-01
Neurons in motor cortex and connected brain regions fire in anticipation of specific movements, long before movement occurs. This neural activity reflects internal processes by which the brain plans and executes volitional movements. The study of motor planning offers an opportunity to understand how the structure and dynamics of neural circuits support persistent internal states and how these states influence behavior. Recent advances in large-scale neural recordings are beginning to decipher the relationship of the dynamics of populations of neurons during motor planning and movements. New behavioral tasks in rodents, together with quantified perturbations, link dynamics in specific nodes of neural circuits to behavior. These studies reveal a neural network distributed across multiple brain regions that collectively supports motor planning. We review recent advances and highlight areas where further work is needed to achieve a deeper understanding of the mechanisms underlying motor planning and related cognitive processes.
Nie, Xiaobing; Zheng, Wei Xing
2015-05-01
This paper is concerned with the problem of coexistence and dynamical behaviors of multiple equilibrium points for neural networks with discontinuous non-monotonic piecewise linear activation functions and time-varying delays. The fixed point theorem and other analytical tools are used to develop certain sufficient conditions that ensure that the n-dimensional discontinuous neural networks with time-varying delays can have at least 5^n equilibrium points, 3^n of which are locally stable and the others unstable. The importance of the derived results is that they reveal that discontinuous neural networks can have greater storage capacity than continuous ones. Moreover, different from the existing results on multistability of neural networks with discontinuous activation functions, the 3^n locally stable equilibrium points obtained in this paper are located not only in saturated regions, but also in unsaturated regions, due to the non-monotonic structure of the discontinuous activation functions. A numerical simulation study is conducted to illustrate and support the derived theoretical results.
Agerskov, Claus
2016-04-01
A neural network model of novelty detection in the CA1 subdomain of the hippocampal formation is presented from the perspective of information flow. This computational model is constrained at several levels by both anatomical information about hippocampal circuitry and behavioral data from studies done in rats. Several studies report that the CA1 area broadcasts a generalized novelty signal in response to changes in the environment. Using the neural engineering framework developed by Eliasmith et al., a spiking neural network architecture is created that is able to compare high-dimensional vectors, symbolizing semantic information, according to the semantic pointer hypothesis. This model then computes the similarity between the vectors, as both direct inputs and a recalled memory from a long-term memory network, by performing the dot-product operation in a novelty neural network architecture. The developed CA1 model agrees with available neuroanatomical data, as well as the presented behavioral data, and is thus a biologically realistic model of novelty detection in the hippocampus that can provide a feasible explanation for experimentally observed dynamics.
Ratnadurai-Giridharan, Shivakeshavan; Cheung, Chung C; Rubchinsky, Leonid L
2017-11-01
Conventional deep brain stimulation of the basal ganglia uses high-frequency regular electrical pulses to treat Parkinsonian motor symptoms but has a series of limitations. Relatively new and not yet clinically tested, optogenetic stimulation is an effective experimental technique for affecting pathological network dynamics. We compared the effects of electrical and optogenetic stimulation of the basal ganglia on pathological Parkinsonian rhythmic neural activity. We studied the network response to electrical stimulation and to excitatory and inhibitory optogenetic stimulation. Different stimulations exhibit different interactions with pathological activity in the network. We studied these interactions for different network and stimulation parameter values. Optogenetic stimulation was found to be more efficient than electrical stimulation in suppressing pathological rhythmicity. Our findings indicate that optogenetic control of neural synchrony may be more efficacious than electrical control because of the different ways in which the stimulations interact with network dynamics.
NASA Technical Reports Server (NTRS)
Rodriguez, Guillermo (Editor)
1990-01-01
Various papers on intelligent control and adaptive systems are presented. Individual topics addressed include: control architecture for a Mars walking vehicle, representation for error detection and recovery in robot task plans, real-time operating system for robots, execution monitoring of a mobile robot system, statistical mechanics models for motion and force planning, global kinematics for manipulator planning and control, exploration of unknown mechanical assemblies through manipulation, low-level representations for robot vision, harmonic functions for robot path construction, simulation of dual behavior of an autonomous system. Also discussed are: control framework for hand-arm coordination, neural network approach to multivehicle navigation, electronic neural networks for global optimization, neural network for L1 norm linear regression, planning for assembly with robot hands, neural networks in dynamical systems, control design with iterative learning, improved fuzzy process control of spacecraft autonomous rendezvous using a genetic algorithm.
Yue, Shigang; Rind, F Claire
2006-05-01
The lobula giant movement detector (LGMD) is an identified neuron in the locust brain that responds most strongly to the images of an approaching object such as a predator. Its computational model can cope with unpredictable environments without using specific object recognition algorithms. In this paper, an LGMD-based neural network is proposed with a new feature enhancement mechanism to enhance the expanded edges of colliding objects via grouped excitation for collision detection with complex backgrounds. The isolated excitation caused by background detail will be filtered out by the new mechanism. Offline tests demonstrated the advantages of the presented LGMD-based neural network in complex backgrounds. Real time robotics experiments using the LGMD-based neural network as the only sensory system showed that the system worked reliably in a wide range of conditions; in particular, the robot was able to navigate in arenas with structured surrounds and complex backgrounds.
Velmurugan, G; Rakkiyappan, R; Vembarasan, V; Cao, Jinde; Alsaedi, Ahmed
2017-02-01
As is well known, dissipativity is an important dynamical property of neural networks. Thus, the analysis of dissipativity of neural networks with time delay is becoming more and more important in the research field. In this paper, the authors establish a class of fractional-order complex-valued neural networks (FCVNNs) with time delay, and intensively study the problem of dissipativity, as well as global asymptotic stability, of the considered FCVNNs with time delay. Based on the fractional Halanay inequality and suitable Lyapunov functions, some new sufficient conditions are obtained that guarantee the dissipativity of FCVNNs with time delay. Moreover, some sufficient conditions are derived in order to ensure the global asymptotic stability of the addressed FCVNNs with time delay. Finally, two numerical simulations are presented to demonstrate the validity of the main results.
A novel word spotting method based on recurrent neural networks.
Frinken, Volkmar; Fischer, Andreas; Manmatha, R; Bunke, Horst
2012-02-01
Keyword spotting refers to the process of retrieving all instances of a given keyword from a document. In the present paper, a novel keyword spotting method for handwritten documents is described. It is derived from a neural network-based system for unconstrained handwriting recognition. As such it performs template-free spotting, i.e., it is not necessary for a keyword to appear in the training set. The keyword spotting is done using a modification of the CTC Token Passing algorithm in conjunction with a recurrent neural network. We demonstrate that the proposed systems outperform not only a classical dynamic time warping-based approach but also a modern keyword spotting system, based on hidden Markov models. Furthermore, we analyze the performance of the underlying neural networks when using them in a recognition task followed by keyword spotting on the produced transcription. We point out the advantages of keyword spotting when compared to classic text line recognition.
Neuromorphic photonic networks using silicon photonic weight banks.
Tait, Alexander N; de Lima, Thomas Ferreira; Zhou, Ellen; Wu, Allie X; Nahmias, Mitchell A; Shastri, Bhavin J; Prucnal, Paul R
2017-08-07
Photonic systems for high-performance information processing have attracted renewed interest. Neuromorphic silicon photonics has the potential to integrate processing functions that vastly exceed the capabilities of electronics. We report the first observations of a recurrent silicon photonic neural network, in which connections are configured by microring weight banks. A mathematical isomorphism between the silicon photonic circuit and a continuous neural network model is demonstrated through dynamical bifurcation analysis. Exploiting this isomorphism, a simulated 24-node silicon photonic neural network is programmed using a "neural compiler" to solve a differential system emulation task. A 294-fold acceleration against a conventional benchmark is predicted. We also propose and derive a power consumption analysis for modulator-class neurons that, as opposed to laser-class neurons, are compatible with silicon photonic platforms. At increased scale, neuromorphic silicon photonics could access new regimes of ultrafast information processing for radio, control, and scientific computing.
Pharmacological Tools to Study the Role of Astrocytes in Neural Network Functions.
Peña-Ortega, Fernando; Rivera-Angulo, Ana Julia; Lorea-Hernández, Jonathan Julio
2016-01-01
Although astrocytes and microglia do not communicate by electrical impulses, they can efficiently communicate with each other and with neurons to participate in complex neural functions requiring broad cell communication and long-lasting regulation of brain function. Glial cells express many receptors in common with neurons and secrete gliotransmitters as well as neurotrophic and neuroinflammatory factors, which allow them to modulate synaptic transmission and neural excitability. All these properties allow glial cells to influence the activity of neuronal networks. Thus, the incorporation of glial cell function into the understanding of nervous system dynamics will provide a more accurate view of brain function. Our current knowledge of glial cell biology is providing us with experimental tools to explore their participation in neural network modulation. In this chapter, we review some of the classical, as well as some recent, pharmacological tools developed for the study of astrocytes' influence on neural function. We also provide some examples of the use of these pharmacological agents to understand the role of astrocytes in neural network function and dysfunction.
Linking structure and activity in nonlinear spiking networks
Josić, Krešimir; Shea-Brown, Eric
2017-01-01
Recent experimental advances are producing an avalanche of data on both neural connectivity and neural activity. To take full advantage of these two emerging datasets we need a framework that links them, revealing how collective neural activity arises from the structure of neural connectivity and intrinsic neural dynamics. This problem of structure-driven activity has drawn major interest in computational neuroscience. Existing methods for relating activity and architecture in spiking networks rely on linearizing activity around a central operating point and thus fail to capture the nonlinear responses of individual neurons that are the hallmark of neural information processing. Here, we overcome this limitation and present a new relationship between connectivity and activity in networks of nonlinear spiking neurons by developing a diagrammatic fluctuation expansion based on statistical field theory. We explicitly show how recurrent network structure produces pairwise and higher-order correlated activity, and how nonlinearities impact the networks’ spiking activity. Our findings open new avenues to investigating how single-neuron nonlinearities—including those of different cell types—combine with connectivity to shape population activity and function. PMID:28644840
Gerstner, Wulfram
2017-01-01
Neural population equations such as neural mass or field models are widely used to study brain activity on a large scale. However, the relation of these models to the properties of single neurons is unclear. Here we derive an equation for several interacting populations at the mesoscopic scale starting from a microscopic model of randomly connected generalized integrate-and-fire neuron models. Each population consists of 50–2000 neurons of the same type but different populations account for different neuron types. The stochastic population equations that we find reveal how spike-history effects in single-neuron dynamics such as refractoriness and adaptation interact with finite-size fluctuations on the population level. Efficient integration of the stochastic mesoscopic equations reproduces the statistical behavior of the population activities obtained from microscopic simulations of a full spiking neural network model. The theory describes nonlinear emergent dynamics such as finite-size-induced stochastic transitions in multistable networks and synchronization in balanced networks of excitatory and inhibitory neurons. The mesoscopic equations are employed to rapidly integrate a model of a cortical microcircuit consisting of eight neuron types, which allows us to predict spontaneous population activities as well as evoked responses to thalamic input. Our theory establishes a general framework for modeling finite-size neural population dynamics based on single cell and synapse parameters and offers an efficient approach to analyzing cortical circuits and computations. PMID:28422957
Selected Flight Test Results for Online Learning Neural Network-Based Flight Control System
NASA Technical Reports Server (NTRS)
Williams-Hayes, Peggy S.
2004-01-01
The NASA F-15 Intelligent Flight Control System project team developed a series of flight control concepts designed to demonstrate neural network-based adaptive controller benefits, with the objective of developing and flight-testing control systems using neural network technology to optimize aircraft performance under nominal conditions and stabilize the aircraft under failure conditions. This report presents flight-test results for an adaptive controller using stability and control derivative values from an online learning neural network. A dynamic cell structure neural network is used in conjunction with a real-time parameter identification algorithm to estimate aerodynamic stability and control derivative increments to baseline aerodynamic derivatives in flight. This open-loop flight test set was performed in preparation for a future phase in which the learning neural network and parameter identification algorithm output would provide the flight controller with aerodynamic stability and control derivative updates in near real time. Two flight maneuvers are analyzed: a pitch frequency sweep and an automated flight-test maneuver designed to optimally excite the parameter identification algorithm in all axes. Frequency responses generated from flight data are compared to those obtained from nonlinear simulation runs. Flight data examination shows that the addition of flight-identified aerodynamic derivative increments into the simulation improved aircraft pitch handling qualities.
Age-related changes in the ease of dynamical transitions in human brain activity.
Ezaki, Takahiro; Sakaki, Michiko; Watanabe, Takamitsu; Masuda, Naoki
2018-06-01
Executive functions, a set of cognitive processes that enable flexible behavioral control, are known to decay with aging. Because such complex mental functions are considered to rely on the dynamic coordination of functionally different neural systems, the age-related decline in executive functions should be underpinned by alteration of large-scale neural dynamics. However, the effects of age on brain dynamics have not been firmly formulated. Here, we investigate such age-related changes in brain dynamics by applying "energy landscape analysis" to publicly available functional magnetic resonance imaging data from healthy younger and older human adults. We quantified the ease of dynamical transitions between different major patterns of brain activity, and estimated it for the default mode network (DMN) and the cingulo-opercular network (CON) separately. We found that the two age groups shared qualitatively the same trajectories of brain dynamics in both the DMN and CON. However, in both networks, the ease of transitions was significantly smaller in the older than the younger group. Moreover, the ease of transitions was associated with the performance in executive function tasks in a doubly dissociated manner: for the younger adults, executive function ability was mainly correlated with the ease of transitions in the CON, whereas for the older adults it was specifically associated with the ease of transitions in the DMN. These results provide direct biological evidence for age-related changes in macroscopic brain dynamics and suggest that such neural dynamics play key roles when individuals carry out cognitively demanding tasks. © 2018 Wiley Periodicals, Inc.
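A minimal sketch of the core of an energy landscape analysis, under the usual assumption that activity is binarized and modeled with pairwise (Ising-type) energies: for a small number of regions, all activity patterns can be enumerated and the local minima (the major patterns between which transition ease is assessed) identified. The parameters h and J here are random stand-ins for values that would be fitted to fMRI data.

import itertools
import numpy as np

rng = np.random.default_rng(2)
N = 8                                        # number of brain regions (small enough to enumerate)
h = rng.normal(scale=0.3, size=N)            # regional biases (would be fitted to data)
J = rng.normal(scale=0.4, size=(N, N))
J = (J + J.T) / 2                            # symmetric pairwise couplings
np.fill_diagonal(J, 0.0)

def energy(s):
    return -h @ s - 0.5 * s @ J @ s          # pairwise maximum-entropy energy

minima = []
for bits in itertools.product([-1, 1], repeat=N):
    s = np.array(bits)
    e = energy(s)
    # a local minimum lies below every single-region flip of the pattern
    if all(e < energy(np.where(np.arange(N) == i, -s, s)) for i in range(N)):
        minima.append(s)
print(f"{len(minima)} local minima (major activity patterns)")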
Hierarchical classification with a competitive evolutionary neural tree.
Adams, R G.; Butchart, K; Davey, N
1999-04-01
A new, dynamic, tree structured network, the Competitive Evolutionary Neural Tree (CENT), is introduced. The network is able to provide a hierarchical classification of unlabelled data sets. The main advantage that the CENT offers over other hierarchical competitive networks is its ability to determine on its own the number and structure of the competitive nodes in the network, without the need for externally set parameters. The network produces stable classificatory structures by halting its growth using locally calculated heuristics. The results of network simulations are presented over a range of data sets, including Anderson's IRIS data set. The CENT network demonstrates its ability to produce a representative hierarchical structure to classify a broad range of data sets.
A continually online-trained neural network controller for brushless DC motor drives
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rubaai, A.; Kotaru, R.; Kankam, M.D.
2000-04-01
In this paper, a high-performance controller with simultaneous online identification and control is designed for brushless DC motor drives. The dynamics of the motor/load are modeled online and controlled using two different neural network based identification and control schemes, as the system is in operation. In the first scheme, an attempt is made to control the rotor angular speed, utilizing a single three-hidden-layer network. The second scheme attempts to control the stator currents, using a predetermined control law as a function of the estimated states. This scheme incorporates three multilayered feedforward neural networks that are online trained, using the Levenberg-Marquardt training algorithm. The control of the direct and quadrature components of the stator current successfully tracked a wide variety of trajectories after relatively short online training periods. The control strategy adapts to the uncertainties of the motor/load dynamics and, in addition, learns their inherent nonlinearities. Simulation results illustrated that a neurocontroller used in conjunction with adaptive control schemes can result in a flexible control device, which may be utilized in a wide range of environments.
Reversible large–scale modification of cortical networks during neuroprosthetic control
Ganguly, Karunesh; Wallis, Jonathan D.
2012-01-01
Brain-Machine Interfaces (BMI) provide a framework to study cortical dynamics and the neural correlates of learning. Neuroprosthetic control has been associated with tuning changes in specific neurons directly projecting to the BMI (hereafter ‘direct neurons’). However, little is known about the larger network dynamics. By monitoring ensembles of neurons that were either causally linked to BMI control or indirectly involved, here we show that proficient neuroprosthetic control is associated with large-scale modifications to the cortical network in macaque monkeys. Specifically, there were changes in the preferred direction of both direct and indirect neurons. Interestingly, with learning, there was a relative decrease in the net modulation of indirect neural activity in comparison to the direct activity. These widespread differential changes in the direct and indirect population activity were remarkably stable from one day to the next and readily coexisted with the long-standing cortical network for upper limb control. Thus, the process of learning BMI control is associated with differential modification of neural populations based on their specific relation to movement control. PMID:21499255
Reversible large-scale modification of cortical networks during neuroprosthetic control.
Ganguly, Karunesh; Dimitrov, Dragan F; Wallis, Jonathan D; Carmena, Jose M
2011-05-01
Brain-machine interfaces (BMIs) provide a framework for studying cortical dynamics and the neural correlates of learning. Neuroprosthetic control has been associated with tuning changes in specific neurons directly projecting to the BMI (hereafter referred to as direct neurons). However, little is known about the larger network dynamics. By monitoring ensembles of neurons that were either causally linked to BMI control or indirectly involved, we found that proficient neuroprosthetic control is associated with large-scale modifications to the cortical network in macaque monkeys. Specifically, there were changes in the preferred direction of both direct and indirect neurons. Notably, with learning, there was a relative decrease in the net modulation of indirect neural activity in comparison with direct activity. These widespread differential changes in the direct and indirect population activity were markedly stable from one day to the next and readily coexisted with the long-standing cortical network for upper limb control. Thus, the process of learning BMI control is associated with differential modification of neural populations based on their specific relation to movement control.
Application of neural networks with orthogonal activation functions in control of dynamical systems
NASA Astrophysics Data System (ADS)
Nikolić, Saša S.; Antić, Dragan S.; Milojković, Marko T.; Milovanović, Miroslav B.; Perić, Staniša Lj.; Mitić, Darko B.
2016-04-01
In this article, we present a new method for the synthesis of almost and quasi-orthogonal polynomials of arbitrary order. Filters designed on the basis of these functions are generators of generalised quasi-orthogonal signals, for which we derived and presented the necessary mathematical background. Based on the theoretical results, we designed and practically implemented a generalised first-order (k = 1) quasi-orthogonal filter and verified its quasi-orthogonality experimentally. The designed filters can be applied in many scientific areas. In this article, the generated functions were successfully implemented in a Nonlinear Auto Regressive eXogenous (NARX) neural network as activation functions. One practical application of the designed orthogonal neural network is demonstrated through the example of controlling a complex non-linear technical system, a laboratory magnetic levitation system. The obtained results were compared with neural networks using standard activation functions and orthogonal functions of trigonometric shape. The proposed network demonstrated superiority over existing solutions in terms of system performance.
A mixed-signal implementation of a polychronous spiking neural network with delay adaptation
Wang, Runchun M.; Hamilton, Tara J.; Tapson, Jonathan C.; van Schaik, André
2014-01-01
We present a mixed-signal implementation of a re-configurable polychronous spiking neural network capable of storing and recalling spatio-temporal patterns. The proposed neural network contains one neuron array and one axon array. Spike Timing Dependent Delay Plasticity is used to fine-tune delays and add dynamics to the network. In our mixed-signal implementation, the neurons and axons have been implemented as both analog and digital circuits. The system thus consists of one FPGA, containing the digital neuron array and the digital axon array, and one analog IC containing the analog neuron array and the analog axon array. The system can be easily configured to use different combinations of each. We present and discuss the experimental results of all combinations of the analog and digital axon arrays and the analog and digital neuron arrays. The test results show that the proposed neural network is capable of successfully recalling more than 85% of stored patterns using both analog and digital circuits. PMID:24672422
Evolving RBF neural networks for adaptive soft-sensor design.
Alexandridis, Alex
2013-12-01
This work presents an adaptive framework for building soft-sensors based on radial basis function (RBF) neural network models. The adaptive fuzzy means algorithm is utilized in order to evolve an RBF network, which approximates the unknown system based on input-output data from it. The methodology gradually builds the RBF network model, based on two separate levels of adaptation: on the first level, the structure of the hidden layer is modified by adding or deleting RBF centers, while on the second level, the synaptic weights are adjusted with the recursive least squares with exponential forgetting algorithm. The proposed approach is tested on two different systems, namely a simulated nonlinear DC motor and a real industrial reactor. The results show that the produced soft-sensors can be successfully applied to model the two nonlinear systems. A comparison with two different adaptive modeling techniques, namely a dynamic evolving neural-fuzzy inference system (DENFIS) and neural networks trained with online backpropagation, highlights the advantages of the proposed methodology.
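For concreteness, a sketch of the second adaptation level described above: with the RBF centers fixed, the output weights can be updated sample-by-sample by recursive least squares with an exponential forgetting factor. The toy system and all constants are illustrative.

import numpy as np

def rbf_features(x, centers, width=0.5):
    return np.exp(-((x - centers) ** 2).sum(axis=1) / (2 * width ** 2))

def rls_update(w, P, phi, y, lam=0.99):
    """One recursive-least-squares step with forgetting factor lam."""
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)            # gain vector
    w = w + k * (y - phi @ w)                # correct weights by the prediction error
    P = (P - np.outer(k, Pphi)) / lam        # forgetting keeps P inflated, so the model can track drift
    return w, P

rng = np.random.default_rng(3)
centers = rng.uniform(-1, 1, size=(10, 1))   # centers as produced by e.g. the fuzzy means step
w, P = np.zeros(10), np.eye(10) * 100.0
for _ in range(2000):                        # stream of input-output samples
    x = rng.uniform(-1, 1, size=1)
    y = np.sin(3 * x[0]) + 0.05 * rng.normal()    # stand-in for the unknown system
    w, P = rls_update(w, P, rbf_features(x, centers), y)
print("soft-sensor estimate at x=0.5:", rbf_features(np.array([0.5]), centers) @ w)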
DCS-Neural-Network Program for Aircraft Control and Testing
NASA Technical Reports Server (NTRS)
Jorgensen, Charles C.
2006-01-01
A computer program implements a dynamic-cell-structure (DCS) artificial neural network that can perform such tasks as learning selected aerodynamic characteristics of an airplane from wind-tunnel test data and computing real-time stability and control derivatives of the airplane for use in feedback linearized control. A DCS neural network is one of several types of neural networks that can incorporate additional nodes in order to rapidly learn increasingly complex relationships between inputs and outputs. In the DCS neural network implemented by the present program, the insertion of nodes is based on accumulated error. A competitive Hebbian learning rule (a supervised-learning rule in which connection weights are adjusted to minimize differences between actual and desired outputs for training examples) is used. A Kohonen-style learning rule, derived from a relatively simple training algorithm that implements a Delaunay-triangulation layout of neurons, is used to adjust node positions during training. Neighborhood topology determines which nodes are used to estimate new values. The network learns, starting with two nodes, and adds new nodes sequentially in locations chosen to maximize reductions in global error. At any given time during learning, the error becomes homogeneously distributed over all nodes.
Two-photon imaging and analysis of neural network dynamics
NASA Astrophysics Data System (ADS)
Lütcke, Henry; Helmchen, Fritjof
2011-08-01
The glow of a starry night sky, the smell of a freshly brewed cup of coffee or the sound of ocean waves breaking on the beach are representations of the physical world that have been created by the dynamic interactions of thousands of neurons in our brains. How the brain mediates perceptions, creates thoughts, stores memories and initiates actions remains one of the most profound puzzles in biology, if not all of science. A key to a mechanistic understanding of how the nervous system works is the ability to measure and analyze the dynamics of neuronal networks in the living organism in the context of sensory stimulation and behavior. Dynamic brain properties have been fairly well characterized on the microscopic level of individual neurons and on the macroscopic level of whole brain areas largely with the help of various electrophysiological techniques. However, our understanding of the mesoscopic level comprising local populations of hundreds to thousands of neurons (so-called 'microcircuits') remains comparably poor. Predominantly, this has been due to the technical difficulties involved in recording from large networks of neurons with single-cell spatial resolution and near-millisecond temporal resolution in the brain of living animals. In recent years, two-photon microscopy has emerged as a technique which meets many of these requirements and thus has become the method of choice for the interrogation of local neural circuits. Here, we review the state of research in the field of two-photon imaging of neuronal populations, covering the topics of microscope technology, suitable fluorescent indicator dyes, staining techniques, and in particular analysis techniques for extracting relevant information from the fluorescence data. We expect that functional analysis of neural networks using two-photon imaging will help to decipher fundamental operational principles of neural microcircuits.
Theta phase precession and phase selectivity: a cognitive device description of neural coding
NASA Astrophysics Data System (ADS)
Zalay, Osbert C.; Bardakjian, Berj L.
2009-06-01
Information in neural systems is carried by way of phase and rate codes. Neuronal signals are processed through transformative biophysical mechanisms at the cellular and network levels. Neural coding transformations can be represented mathematically in a device called the cognitive rhythm generator (CRG). Incoming signals to the CRG are parsed through a bank of neuronal modes that orchestrate proportional, integrative and derivative transformations associated with neural coding. Mode outputs are then mixed through static nonlinearities to encode (spatio-)temporal phase relationships. The static nonlinear outputs feed and modulate a ring device (limit cycle) encoding output dynamics. Small coupled CRG networks were created to investigate coding functionality associated with neuronal phase preference and theta precession in the hippocampus. Phase selectivity was found to be dependent on mode shape and polarity, while phase precession was a product of modal mixing (i.e. changes in the relative contribution or amplitude of mode outputs resulted in shifting phase preference). Nonlinear system identification was implemented to help validate the model and explain response characteristics associated with modal mixing; in particular, principal dynamic modes experimentally derived from a hippocampal neuron were inserted into a CRG and the neuron's dynamic response was successfully cloned. From our results, small CRG networks possessing disynaptic feedforward inhibition in combination with feedforward excitation exhibited frequency-dependent inhibitory-to-excitatory and excitatory-to-inhibitory transitions that were similar to transitions seen in a single CRG with quadratic modal mixing. This suggests nonlinear modal mixing to be a coding manifestation of the effect of network connectivity in shaping system dynamic behavior. We hypothesize that circuits containing disynaptic feedforward inhibition in the nervous system may be candidates for interpreting upstream rate codes to guide downstream processes such as phase precession, because of their demonstrated frequency-selective properties.
Integrate and fire neural networks, piecewise contractive maps and limit cycles.
Catsigeras, Eleonora; Guiraud, Pierre
2013-09-01
We study the global dynamics of integrate and fire neural networks composed of an arbitrary number of identical neurons interacting by inhibition and excitation. We prove that if the interactions are strong enough, then the support of the stable asymptotic dynamics consists of limit cycles. We also find sufficient conditions for the synchronization of networks containing excitatory neurons. The proofs are based on the analysis of the equivalent dynamics of a piecewise continuous Poincaré map associated to the system. We show that for efficient interactions the Poincaré map is piecewise contractive. Using this contraction property, we prove that there exists a countable number of limit cycles attracting all the orbits dropping into the stable subset of the phase space. This result applies not only to the Poincaré map under study, but also to a wide class of general n-dimensional piecewise contractive maps.
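A minimal simulation sketch of networks of this type, assuming identical leaky integrate-and-fire neurons with instantaneous pulse coupling, half excitatory and half inhibitory; with strong interactions the deterministic dynamics settle onto a periodic firing pattern, which is the limit-cycle behavior the theorem describes. All constants are illustrative.

import numpy as np

rng = np.random.default_rng(4)
N, steps, dt = 10, 20000, 1e-4
theta, tau, I = 1.0, 0.02, 60.0              # threshold, membrane time constant, common drive
W = np.full((N, N), 0.3)                     # excitatory pulse weights...
W[:, N // 2:] = -0.3                         # ...with the second half of the neurons inhibitory
np.fill_diagonal(W, 0.0)

v = rng.uniform(0, theta, N)                 # random initial potentials
events = []
for t in range(steps):
    v += dt / tau * (-v + tau * I)           # leaky integration toward tau*I > theta
    fired = v >= theta
    if fired.any():
        events.append((round(t * dt, 4), np.flatnonzero(fired)))
        v += W @ fired.astype(float)         # instantaneous interaction pulses
        v[fired] = 0.0                       # reset the neurons that spiked
        v = np.maximum(v, 0.0)               # inhibition cannot drive v below rest
print("last firing events (periodic after the transient):", events[-3:])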
Crew exploration vehicle (CEV) attitude control using a neural-immunology/memory network
NASA Astrophysics Data System (ADS)
Weng, Liguo; Xia, Min; Wang, Wei; Liu, Qingshan
2015-01-01
This paper addresses the problem of crew exploration vehicle (CEV) attitude control. CEVs are NASA's next-generation human spaceflight vehicles, and they use reaction control system (RCS) jet engines for attitude adjustment, which calls for control algorithms for firing the small propulsion engines mounted on vehicles. In this work, the resultant CEV dynamics combines both actuation and attitude dynamics, and it is therefore highly nonlinear and coupled with significant uncertainties. To cope with this situation, a neural-immunology/memory network is proposed, inspired by the human memory and immune systems. The control network does not rely on precise system dynamics information. Furthermore, the overall control scheme has a simple structure and demands much less computation than most existing methods, making it attractive for real-time implementation. The effectiveness of this approach is also verified via simulation.
Fluctuation-Driven Neural Dynamics Reproduce Drosophila Locomotor Patterns
Cruchet, Steeve; Gustafson, Kyle; Benton, Richard; Floreano, Dario
2015-01-01
The neural mechanisms determining the timing of even simple actions, such as when to walk or rest, are largely mysterious. One intriguing, but untested, hypothesis posits a role for ongoing activity fluctuations in neurons of central action selection circuits that drive animal behavior from moment to moment. To examine how fluctuating activity can contribute to action timing, we paired high-resolution measurements of freely walking Drosophila melanogaster with data-driven neural network modeling and dynamical systems analysis. We generated fluctuation-driven network models whose outputs—locomotor bouts—matched those measured from sensory-deprived Drosophila. From these models, we identified those that could also reproduce a second, unrelated dataset: the complex time-course of odor-evoked walking for genetically diverse Drosophila strains. Dynamical models that best reproduced both Drosophila basal and odor-evoked locomotor patterns exhibited specific characteristics. First, ongoing fluctuations were required. In a stochastic resonance-like manner, these fluctuations allowed neural activity to escape stable equilibria and to exceed a threshold for locomotion. Second, odor-induced shifts of equilibria in these models caused a depression in locomotor frequency following olfactory stimulation. Our models predict that activity fluctuations in action selection circuits cause behavioral output to more closely match sensory drive and may therefore enhance navigation in complex sensory environments. Together these data reveal how simple neural dynamics, when coupled with activity fluctuations, can give rise to complex patterns of animal behavior. PMID:26600381
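A sketch of the fluctuation-driven mechanism described above, reduced to its simplest form: an Ornstein-Uhlenbeck variable relaxing toward a stable equilibrium, with noise occasionally carrying it across a locomotion threshold so that supra-threshold episodes play the role of walking bouts. The equilibrium, noise level, and threshold are illustrative assumptions, not fitted values from the study.

import numpy as np

rng = np.random.default_rng(5)
dt, steps = 1e-3, 500_000                    # 500 s of simulated time
x_eq, tau, sigma, threshold = 0.0, 0.5, 0.8, 1.0

x, walking, bouts = x_eq, False, 0
for _ in range(steps):
    # Ornstein-Uhlenbeck fluctuations around the stable equilibrium x_eq
    x += dt / tau * (x_eq - x) + sigma * np.sqrt(dt) * rng.normal()
    above = x > threshold
    bouts += above and not walking           # a new bout starts at each upward crossing
    walking = above
print(f"{bouts} locomotor bouts in {steps * dt:.0f} s")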
Spatial-temporal-spectral EEG patterns of BOLD functional network connectivity dynamics
NASA Astrophysics Data System (ADS)
Lamoš, Martin; Mareček, Radek; Slavíček, Tomáš; Mikl, Michal; Rektor, Ivan; Jan, Jiří
2018-06-01
Objective. Growing interest in the examination of large-scale brain network functional connectivity dynamics is accompanied by an effort to find the electrophysiological correlates. The commonly used constraints applied to spatial and spectral domains during electroencephalogram (EEG) data analysis may leave part of the neural activity unrecognized. We propose an approach that blindly reveals multimodal EEG spectral patterns that are related to the dynamics of the BOLD functional network connectivity. Approach. The blind decomposition of EEG spectrogram by parallel factor analysis has been shown to be a useful technique for uncovering patterns of neural activity. The simultaneously acquired BOLD fMRI data were decomposed by independent component analysis. Dynamic functional connectivity was computed on the component’s time series using a sliding window correlation, and between-network connectivity states were then defined based on the values of the correlation coefficients. ANOVA tests were performed to assess the relationships between the dynamics of between-network connectivity states and the fluctuations of EEG spectral patterns. Main results. We found three patterns related to the dynamics of between-network connectivity states. The first pattern has dominant peaks in the alpha, beta, and gamma bands and is related to the dynamics between the auditory, sensorimotor, and attentional networks. The second pattern, with dominant peaks in the theta and low alpha bands, is related to the visual and default mode network. The third pattern, also with peaks in the theta and low alpha bands, is related to the auditory and frontal network. Significance. Our previous findings revealed a relationship between EEG spectral pattern fluctuations and the hemodynamics of large-scale brain networks. In this study, we suggest that the relationship also exists at the level of functional connectivity dynamics among large-scale brain networks when no standard spatial and spectral constraints are applied on the EEG data.
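For concreteness, a minimal sketch of the sliding-window correlation step described above: two component time courses are correlated within a moving window, and the resulting values can be coarsely binned into connectivity "states". The window length and state thresholds are analysis choices, shown here with placeholder values.

import numpy as np

def sliding_window_corr(a, b, win=30, step=1):
    """Pearson correlation of a and b within each window of `win` samples."""
    return np.array([np.corrcoef(a[s:s + win], b[s:s + win])[0, 1]
                     for s in range(0, len(a) - win + 1, step)])

rng = np.random.default_rng(6)
ts1, ts2 = rng.normal(size=300), rng.normal(size=300)   # two component time courses
dfc = sliding_window_corr(ts1, ts2)
states = np.digitize(dfc, [-0.2, 0.2])       # crude low / neutral / high connectivity states
print(dfc[:5].round(2), states[:10])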
From neural-based object recognition toward microelectronic eyes
NASA Technical Reports Server (NTRS)
Sheu, Bing J.; Bang, Sa Hyun
1994-01-01
Engineered neural network systems are best known for their abilities to adapt to the changing characteristics of the surrounding environment by adjusting system parameter values during the learning process. Rapid advances in analog current-mode design techniques have made possible the implementation of major neural network functions in custom VLSI chips. An electrically programmable analog synapse cell with large dynamic range can be realized in a compact silicon area. New designs of the synapse cells, neurons, and analog processor are presented. A synapse cell based on the Gilbert multiplier structure can perform the linear multiplication for back-propagation networks. A double differential-pair synapse cell can perform the Gaussian function for radial-basis networks. The synapse cells can be biased in the strong inversion region for high-speed operation or biased in the subthreshold region for low-power operation. The voltage gain of the sigmoid-function neurons is externally adjustable, which greatly facilitates the search for optimal solutions in certain networks. Various building blocks can be intelligently connected to form useful industrial applications. Efficient data communication is a key system-level design issue for large-scale networks. We also present analog neural processors based on the perceptron architecture and the Hopfield network for communication applications. Biologically inspired neural networks have played an important role towards the creation of powerful intelligent machines. Accuracy, limitations, and prospects of analog current-mode design of the biologically inspired vision processing chips and cellular neural network chips are key design issues.
Harmonic stochastic resonance-enhanced signal detecting in NW small-world neural network
NASA Astrophysics Data System (ADS)
Wang, Dao-Guang; Liang, Xiao-Ming; Wang, Jing; Yang, Cheng-Fang; Liu, Kai; Lü, Hua-Ping
2010-11-01
The harmonic stochastic resonance-enhanced signal detection in a Newman-Watts small-world neural network is studied using the Hodgkin-Huxley dynamical equation with noise. If the connection probability p, the coupling strength gsyn, and the noise intensity D are well matched, higher-order resonance is found and an optimal signal-to-noise ratio is obtained. The reasons for this behavior and the mechanism of its appearance are then discussed.
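A sketch of the Newman-Watts construction assumed above: start from a ring lattice and add random shortcuts without removing any lattice edges (unlike Watts-Strogatz rewiring). This builds only the topology; the coupling gsyn and noise intensity D would enter the Hodgkin-Huxley simulation run on top of it, which is omitted here. The shortcut rule used (independent probability p per node pair) is a common simplified variant.

import numpy as np

def newman_watts(n, k, p, rng):
    A = np.zeros((n, n), dtype=bool)
    for j in range(1, k + 1):                # ring lattice: k neighbours on each side
        idx = np.arange(n)
        A[idx, (idx + j) % n] = A[(idx + j) % n, idx] = True
    shortcuts = np.triu(rng.random((n, n)) < p, 1)   # random shortcuts, never removing edges
    A |= shortcuts | shortcuts.T
    np.fill_diagonal(A, False)
    return A

rng = np.random.default_rng(7)
A = newman_watts(n=60, k=2, p=0.05, rng=rng)
print("mean degree:", A.sum() / 60)          # the HH dynamics with gsyn and D would run on A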
Active Control of Complex Systems via Dynamic (Recurrent) Neural Networks
1992-05-30
Electronic neural network for dynamic resource allocation
NASA Technical Reports Server (NTRS)
Thakoor, A. P.; Eberhardt, S. P.; Daud, T.
1991-01-01
A VLSI implementable neural network architecture for dynamic assignment is presented. The resource allocation problems involve assigning members of one set (e.g. resources) to those of another (e.g. consumers) such that the global 'cost' of the associations is minimized. The network consists of a matrix of sigmoidal processing elements (neurons), where the rows of the matrix represent resources and columns represent consumers. Unlike previous neural implementations, however, association costs are applied directly to the neurons, reducing connectivity of the network to a VLSI-compatible O(number of neurons). Each row (and column) has an additional neuron associated with it to independently oversee activations of all the neurons in each row (and each column), providing a programmable 'k-winner-take-all' function. This function simultaneously enforces blocking (excitatory/inhibitory) constraints during convergence to control the number of active elements in each row and column within desired boundary conditions. Simulations show that the network, when implemented in fully parallel VLSI hardware, offers optimal (or near-optimal) solutions within only a fraction of a millisecond, for problems up to 128 resources and 128 consumers, orders of magnitude faster than conventional computing or heuristic search methods.
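As an algorithmic stand-in for the row/column 'k-winner-take-all' constraint enforcement (with k = 1), the sketch below alternates row and column normalizations of a positive activation matrix (Sinkhorn-style balancing) before a winner-take-all readout. This mimics what the oversight neurons accomplish, not the analog VLSI dynamics themselves, and the readout is typically, though not provably, a permutation.

import numpy as np

rng = np.random.default_rng(8)
n = 8
C = rng.random((n, n))                       # association costs (resource i -> consumer j)
V = np.exp(-C / 0.1)                         # low cost -> high initial activation
for _ in range(200):                         # alternately enforce row and column constraints
    V /= V.sum(axis=1, keepdims=True)        # each resource row sums to 1 (one winner)
    V /= V.sum(axis=0, keepdims=True)        # each consumer column sums to 1
assignment = V.argmax(axis=1)                # winner-take-all readout
print("assignment:", assignment, "total cost:", C[np.arange(n), assignment].sum())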
Phase Transitions in Living Neural Networks
NASA Astrophysics Data System (ADS)
Williams-Garcia, Rashid Vladimir
Our nervous systems are composed of intricate webs of interconnected neurons interacting in complex ways. These complex interactions result in a wide range of collective behaviors with implications for features of brain function, e.g., information processing. Under certain conditions, such interactions can drive neural network dynamics towards critical phase transitions, where power-law scaling is conjectured to allow optimal behavior. Recent experimental evidence is consistent with this idea and it seems plausible that healthy neural networks would tend towards optimality. This hypothesis, however, is based on two problematic assumptions, which I describe and for which I present alternatives in this thesis. First, critical transitions may vanish due to the influence of an environment, e.g., a sensory stimulus, and so living neural networks may be incapable of achieving "critical" optimality. I develop a framework known as quasicriticality, in which a relative optimality can be achieved depending on the strength of the environmental influence. Second, the power-law scaling supporting this hypothesis is based on statistical analysis of cascades of activity known as neuronal avalanches, which conflate causal and non-causal activity, thus confounding important dynamical information. In this thesis, I present a new method to unveil causal links, known as causal webs, between neuronal activations, thus allowing for experimental tests of the quasicriticality hypothesis and other practical applications.
Spatially Nonlinear Interdependence of Alpha-Oscillatory Neural Networks under Chan Meditation
Chang, Chih-Hao
2013-01-01
This paper reports the results of our investigation of the effects of Chan meditation on brain electrophysiological behaviors from the viewpoint of spatially nonlinear interdependence among regional neural networks. Particular emphasis is laid on the alpha-dominated EEG (electroencephalograph). A continuous-time wavelet transform was adopted to detect the epochs containing substantial alpha activities. Nonlinear interdependence quantified by the similarity index S(X|Y), the influence of source signal Y on sink signal X, was applied to the nonlinear dynamical model in phase space reconstructed from multichannel EEG. The experimental group involved ten experienced Chan-meditation practitioners, while the control group included ten healthy subjects in the same age range but without any meditation experience. Nonlinear interdependence among various cortical regions was explored for five local neural-network regions: frontal, posterior, right-temporal, left-temporal, and central. In the experimental group, the inter-regional interaction was evaluated for the brain dynamics under three different stages: at rest (stage R, pre-meditation background recording), in Chan meditation (stage M), and in the unique Chakra-focusing practice (stage C). The experimental group exhibited stronger interactions among various local neural networks at stages M and C compared with those at stage R. The intergroup comparison demonstrates that the Chan-meditation brain possesses better cortical inter-regional interactions than the resting brain of the control group. PMID:24489583
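For reference, the similarity index is commonly defined (following Arnhold et al., which appears to be the formulation intended here) as

S^{(k)}(X \mid Y) = \frac{1}{N} \sum_{n=1}^{N} \frac{R_n^{(k)}(X)}{R_n^{(k)}(X \mid Y)}

where R_n^{(k)}(X) is the mean squared distance from the delay-embedded point x_n to its k nearest neighbours, and R_n^{(k)}(X|Y) is the mean squared distance from x_n to the k points whose time indices are those of y_n's nearest neighbours. S(X|Y) approaches 1 when Y strongly influences X and stays near 0 for independent signals; the embedding parameters and k are analysis choices.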
Associative memory in an analog iterated-map neural network
NASA Astrophysics Data System (ADS)
Marcus, C. M.; Waugh, F. R.; Westervelt, R. M.
1990-03-01
The behavior of an analog neural network with parallel dynamics is studied analytically and numerically for two associative-memory learning algorithms, the Hebb rule and the pseudoinverse rule. Phase diagrams in the parameter space of analog gain β and storage ratio α are presented. For both learning rules, the networks have large "recall" phases in which retrieval states exist and convergence to a fixed point is guaranteed by a global stability criterion. We also demonstrate numerically that using a reduced analog gain increases the probability of recall starting from a random initial state. This phenomenon is comparable to thermal annealing used to escape local minima but has the advantage of being deterministic, and therefore easily implemented in electronic hardware. Similarities and differences between analog neural networks and networks with two-state neurons at finite temperature are also discussed.
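A minimal sketch of this model class, assuming the Hebb rule: the parallel analog update x(t+1) = tanh(beta * W x(t)) iterated from a random state, at low and high gain. The network sizes and gain values below are illustrative.

import numpy as np

rng = np.random.default_rng(9)
N, P = 200, 10
patterns = rng.choice([-1.0, 1.0], size=(P, N))   # stored memories
W = patterns.T @ patterns / N                     # Hebb rule
np.fill_diagonal(W, 0.0)

def recall(x, beta, steps=50):
    for _ in range(steps):
        x = np.tanh(beta * (W @ x))               # parallel analog update
    return x

x0 = rng.normal(size=N)                           # random initial state
for beta in (2.0, 10.0):
    overlaps = patterns @ np.sign(recall(x0.copy(), beta)) / N
    print(f"beta={beta}: best overlap with a stored memory = {overlaps.max():.2f}")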
Voloh, Benjamin; Womelsdorf, Thilo
2016-01-01
Short periods of oscillatory activation are ubiquitous signatures of neural circuits. A broad range of studies documents not only their circuit origins, but also a fundamental role for oscillatory activity in coordinating information transfer during goal directed behavior. Recent studies suggest that resetting the phase of ongoing oscillatory activity to endogenous or exogenous cues facilitates coordinated information transfer within circuits and between distributed brain areas. Here, we review evidence that pinpoints phase resetting as a critical marker of dynamic state changes of functional networks. Phase resets: (1) set a “neural context” in terms of narrow band frequencies that uniquely characterizes the activated circuits; (2) impose coherent low frequency phases to which high frequency activations can synchronize, identifiable as cross-frequency correlations across large anatomical distances; (3) are critical for neural coding models that depend on phase, increasing the informational content of neural representations; and (4) likely originate from the dynamics of canonical E-I circuits that are anatomically ubiquitous. These multiple signatures of phase resets are directly linked to enhanced information transfer and behavioral success. We survey how phase resets re-organize oscillations in diverse task contexts, including sensory perception, attentional stimulus selection, cross-modal integration, Pavlovian conditioning, and spatial navigation. The evidence we consider suggests that phase-resets can drive changes in neural excitability, ensemble organization, functional networks, and ultimately, overt behavior. PMID:27013986
NASA Astrophysics Data System (ADS)
Nasruddin; Lestari, M.; Supriyadi; Sholahudin
2018-03-01
The use of hydrogen gas in fuel cell technology has a huge opportunity to be applied in upcoming vehicle technology. One of the most important problems in fuel cell technology is hydrogen storage. The adsorption of hydrogen in carbon-based materials attracts a lot of attention because of its reliability. This study investigated the adsorption of hydrogen gas in single-walled carbon nanotubes (SWCNT) with chiralities of (0, 12), (0, 15), and (0, 18) to find the optimum chirality. Artificial Neural Networks (ANN) can be used to predict the hydrogen storage capacity at different pressure and temperature conditions appropriately, using simulated series of data. The artificial neural network is modeled as a predictor of the hydrogen adsorption capacity, which compensates for some deficiencies of molecular dynamics (MD) simulations. In a previous study, ANN configurations were developed for temperatures of 77 K, 233 K, and 298 K in hydrogen gas storage. To prepare this prediction, the ANN was modeled to find the configuration (the split between training and validation data, the spacing between data points, and the number of neurons) that produces the smallest error. This configuration is needed to make an accurate artificial neural network, and it was then applied in this research. The neural network analysis shows that the best hydrogen-storage configuration is at a temperature of 233 K, i.e., for the SWCNT with chirality (0, 12).
San, Phyo Phyo; Ling, Sai Ho; Nuryani; Nguyen, Hung
2014-08-01
This paper focuses on a hybridization technology using rough set concepts and neural computing for decision and classification purposes. Based on the rough set properties, the lower region and boundary region are defined to partition the input signal into a consistent (predictable) part and an inconsistent (random) part. In this way, the neural network is designed to deal only with the boundary region, which mainly consists of the inconsistent part of the applied input signal causing inaccurate modeling of the data set. Owing to the different characteristics of neural network (NN) applications, the same conventional NN structure might not give the optimal solution. Based on the knowledge of the application in this paper, a block-based neural network (BBNN) is selected as a suitable classifier due to its ability to evolve internal structures and its adaptability in dynamic environments. This architecture systematically incorporates the characteristics of the application into the structure of the hybrid rough-block-based neural network (R-BBNN). A global training algorithm, hybrid particle swarm optimization with wavelet mutation, is introduced for parameter optimization of the proposed R-BBNN. The performance of the proposed R-BBNN algorithm was evaluated by an application to the field of medical diagnosis using real hypoglycemia episodes in patients with Type 1 diabetes mellitus. The performance of the proposed hybrid system has been compared with some of the existing neural networks. The comparison results indicated that the proposed method has improved classification performance and results in early convergence of the network.
Toutounji, Hazem; Pasemann, Frank
2014-01-01
The behavior and skills of living systems depend on the distributed control provided by specialized and highly recurrent neural networks. Learning and memory in these systems is mediated by a set of adaptation mechanisms, known collectively as neuronal plasticity. Translating principles of recurrent neural control and plasticity to artificial agents has seen major strides, but is usually hampered by the complex interactions between the agent's body and its environment. One of the important standing issues is for the agent to support multiple stable states of behavior, so that its behavioral repertoire matches the requirements imposed by these interactions. The agent also must have the capacity to switch between these states in time scales that are comparable to those by which sensory stimulation varies. Achieving this requires a mechanism of short-term memory that allows the neurocontroller to keep track of the recent history of its input, which finds its biological counterpart in short-term synaptic plasticity. This issue is approached here by deriving synaptic dynamics in recurrent neural networks. Neurons are introduced as self-regulating units with a rich repertoire of dynamics. They exhibit homeostatic properties for certain parameter domains, which result in a set of stable states and the required short-term memory. They can also operate as oscillators, which allow them to surpass the level of activity imposed by their homeostatic operation conditions. Neural systems endowed with the derived synaptic dynamics can be utilized for the neural behavior control of autonomous mobile agents. The resulting behavior depends also on the underlying network structure, which is either engineered or developed by evolutionary techniques. The effectiveness of these self-regulating units is demonstrated by controlling locomotion of a hexapod with 18 degrees of freedom, and obstacle-avoidance of a wheel-driven robot.
Criticality meets learning: Criticality signatures in a self-organizing recurrent neural network
Del Papa, Bruno; Priesemann, Viola
2017-01-01
Many experiments have suggested that the brain operates close to a critical state, based on signatures of criticality such as power-law distributed neuronal avalanches. In neural network models, criticality is a dynamical state that maximizes information processing capacities, e.g. sensitivity to input, dynamical range and storage capacity, which makes it a favorable candidate state for brain function. Although models that self-organize towards a critical state have been proposed, the relation between criticality signatures and learning is still unclear. Here, we investigate signatures of criticality in a self-organizing recurrent neural network (SORN). Investigating criticality in the SORN is of particular interest because it has not been developed to show criticality. Instead, the SORN has been shown to exhibit spatio-temporal pattern learning through a combination of neural plasticity mechanisms and it reproduces a number of biological findings on neural variability and the statistics and fluctuations of synaptic efficacies. We show that, after a transient, the SORN spontaneously self-organizes into a dynamical state that shows criticality signatures comparable to those found in experiments. The plasticity mechanisms are necessary to attain that dynamical state, but not to maintain it. Furthermore, onset of external input transiently changes the slope of the avalanche distributions – matching recent experimental findings. Interestingly, the membrane noise level necessary for the occurrence of the criticality signatures reduces the model’s performance in simple learning tasks. Overall, our work shows that the biologically inspired plasticity and homeostasis mechanisms responsible for the SORN’s spatio-temporal learning abilities can give rise to criticality signatures in its activity when driven by random input, but these break down under the structured input of short repeating sequences. PMID:28552964
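For concreteness, a sketch of the avalanche measurement underlying such criticality signatures: in a binary activity raster, an avalanche is a run of time bins with at least one active unit, bounded by silent bins, and its size is the total activity within the run. The raster below is plain Poisson-like noise, a placeholder for SORN spikes, so no power law is expected; only the measurement step is shown.

import numpy as np

def avalanche_sizes(raster):
    """raster: (time, units) binary array -> array of avalanche sizes."""
    activity = raster.sum(axis=1)
    sizes, current = [], 0
    for a in activity:
        if a > 0:
            current += a                     # extend the ongoing avalanche
        elif current > 0:
            sizes.append(current)            # a silent bin closes it
            current = 0
    if current > 0:
        sizes.append(current)
    return np.array(sizes)

rng = np.random.default_rng(10)
raster = rng.random((100_000, 50)) < 0.01    # placeholder raster, not SORN output
sizes = avalanche_sizes(raster)
vals, counts = np.unique(sizes, return_counts=True)
print(list(zip(vals[:8], counts[:8])))       # the size histogram feeds the power-law test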
Neural dynamic optimization for control systems. I. Background.
Seong, C Y; Widrow, B
2001-01-01
The paper presents neural dynamic optimization (NDO) as a method of optimal feedback control for nonlinear multi-input-multi-output (MIMO) systems. The main feature of NDO is that it enables neural networks to approximate the optimal feedback solution whose existence dynamic programming (DP) justifies, thereby reducing the complexities of computation and storage problems of classical methods such as DP. This paper mainly describes the background and motivations for the development of NDO, while the two subsequent companion papers present the theory of NDO and demonstrate the method with several applications, including control of autonomous vehicles and of a robot arm, respectively.
Neural dynamic optimization for control systems.III. Applications.
Seong, C Y; Widrow, B
2001-01-01
For pt. II, see ibid., p. 490-501. The paper presents neural dynamic optimization (NDO) as a method of optimal feedback control for nonlinear multi-input-multi-output (MIMO) systems. The main feature of NDO is that it enables neural networks to approximate the optimal feedback solution whose existence dynamic programming (DP) justifies, thereby reducing the complexities of computation and storage problems of classical methods such as DP. This paper demonstrates NDO with several applications, including control of autonomous vehicles and of a robot arm, while the two other companion papers describe the background for the development of NDO and present the theory of the method, respectively.
Neural dynamic optimization for control systems.II. Theory.
Seong, C Y; Widrow, B
2001-01-01
The paper presents neural dynamic optimization (NDO) as a method of optimal feedback control for nonlinear multi-input-multi-output (MIMO) systems. The main feature of NDO is that it enables neural networks to approximate the optimal feedback solution whose existence dynamic programming (DP) justifies, thereby reducing the complexities of computation and storage problems of the classical methods such as DP. This paper mainly describes the theory of NDO, while the two other companion papers of this topic explain the background for the development of NDO and demonstrate the method with several applications including control of autonomous vehicles and of a robot arm, respectively.
Diano, Matteo; Tamietto, Marco; Celeghin, Alessia; Weiskrantz, Lawrence; Tatu, Mona-Karina; Bagnis, Arianna; Duca, Sergio; Geminiani, Giuliano; Cauda, Franco; Costa, Tommaso
2017-03-27
The quest to characterize the neural signature distinctive of different basic emotions has recently come under renewed scrutiny. Here we investigated whether facial expressions of different basic emotions modulate the functional connectivity of the amygdala with the rest of the brain. To this end, we presented seventeen healthy participants (8 females) with facial expressions of anger, disgust, fear, happiness, sadness and emotional neutrality and analyzed the amygdala's psychophysiological interaction (PPI). In fact, PPI can reveal how inter-regional amygdala communications change dynamically depending on perception of various emotional expressions to recruit different brain networks, compared to the functional interactions it entertains during perception of neutral expressions. We found that for each emotion the amygdala recruited a distinctive and spatially distributed set of structures to interact with. These changes in amygdala connectivity patterns characterize the dynamic signature prototypical of individual emotion processing, and seemingly represent a neural mechanism that serves to implement the distinctive influence that each emotion exerts on perceptual, cognitive, and motor responses. Besides these differences, all emotions enhanced amygdala functional integration with premotor cortices compared to neutral faces. The present findings thus converge in reconceptualising the structure-function relation between brain and emotion from the traditional one-to-one mapping toward a network-based and dynamic perspective.
An adaptive neural swarm approach for intrusion defense in ad hoc networks
NASA Astrophysics Data System (ADS)
Cannady, James
2011-06-01
Wireless sensor networks (WSN) and mobile ad hoc networks (MANET) are being increasingly deployed in critical applications due to the flexibility and extensibility of the technology. While these networks possess numerous advantages over traditional wireless systems in dynamic environments, they are still vulnerable to many of the same types of host-based and distributed attacks common to those systems. Unfortunately, the limited power and bandwidth available in WSNs and MANETs, combined with the dynamic connectivity that is a defining characteristic of the technology, makes it extremely difficult to utilize traditional intrusion detection techniques. This paper describes an approach to accurately and efficiently detect potentially damaging activity in WSNs and MANETs. It enables the network as a whole to recognize attacks, anomalies, and potential vulnerabilities in a distributed manner that reflects the autonomic processes of biological systems. Each component of the network recognizes activity in its local environment and then contributes to the overall situational awareness of the entire system. The approach utilizes agent-based swarm intelligence to adaptively identify potential data sources on each node and on adjacent nodes throughout the network. The swarm agents then self-organize into modular neural networks that utilize a reinforcement learning algorithm to identify relevant behavior patterns in the data without supervision. Once the modular neural networks have established interconnectivity both locally and with neighboring nodes, the analysis of events within the network can be conducted collectively in real-time. The approach has been shown to be extremely effective in identifying distributed network attacks.
Hexacopter trajectory control using a neural network
NASA Astrophysics Data System (ADS)
Artale, V.; Collotta, M.; Pau, G.; Ricciardello, A.
2013-10-01
Modern flight control systems are complex due to their non-linear nature. Modern aerospace vehicles are expected to have non-conventional flight envelopes and must therefore guarantee a high level of robustness and adaptability in order to operate in uncertain environments. Neural networks (NN) with real-time learning capability can be used for flight control in applications with manned or unmanned aerial vehicles. Indeed, using proven lower-level control algorithms with adaptive elements that exhibit long-term learning could help in achieving better adaptation performance while performing aggressive maneuvers. In this paper we present a mathematical model and a neural network for hexacopter dynamics in order to develop suitable methods for stabilization and trajectory control.
Synchronization stability of memristor-based complex-valued neural networks with time delays.
Liu, Dan; Zhu, Song; Ye, Er
2017-12-01
This paper focuses on the dynamical property of a class of memristor-based complex-valued neural networks (MCVNNs) with time delays. By constructing the appropriate Lyapunov functional and utilizing the inequality technique, sufficient conditions are proposed to guarantee exponential synchronization of the coupled systems based on drive-response concept. The proposed results are very easy to verify, and they also extend some previous related works on memristor-based real-valued neural networks. Meanwhile, the obtained sufficient conditions of this paper may be conducive to qualitative analysis of some complex-valued nonlinear delayed systems. A numerical example is given to demonstrate the effectiveness of our theoretical results. Copyright © 2017 Elsevier Ltd. All rights reserved.
Neural network modeling of associative memory: Beyond the Hopfield model
NASA Astrophysics Data System (ADS)
Dasgupta, Chandan
1992-07-01
A number of neural network models, in which fixed-point and limit-cycle attractors of the underlying dynamics are used to store and associatively recall information, are described. In the first class of models, a hierarchical structure is used to store an exponentially large number of strongly correlated memories. The second class of models uses limit cycles to store and retrieve individual memories. A neurobiologically plausible network that generates low-amplitude periodic variations of activity, similar to the oscillations observed in electroencephalographic recordings, is also described. Results obtained from analytic and numerical studies of the properties of these networks are discussed.
Artificial Neural Network L* from different magnetospheric field models
NASA Astrophysics Data System (ADS)
Yu, Y.; Koller, J.; Zaharia, S. G.; Jordanova, V. K.
2011-12-01
The third adiabatic invariant L* plays an important role in modeling and understanding radiation belt dynamics. The popular way to numerically obtain the L* value follows the recipe described by Roederer [1970], which is, however, slow and computationally expensive. This work focuses on a new technique, which can compute the L* value in microseconds without losing much accuracy: artificial neural networks. Since L* is related to the magnetic flux enclosed by a particle drift shell, global magnetic field information needed to trace the drift shell is required. A series of currently popular empirical magnetic field models are applied to create the L* data pool using 1 million data samples which are randomly selected within a solar cycle and within the global magnetosphere. The networks, trained from the above L* data pool, can thereby be used for fairly efficient L* calculation given input parameters valid within the trained temporal and spatial range. Besides the empirical magnetospheric models, a physics-based self-consistent inner magnetosphere model (RAM-SCB) developed at LANL is also utilized to calculate L* values and then to train the L* neural network. This model better predicts the magnetospheric configuration and can therefore significantly improve the L* calculation. The above neural network L* technique will enable, for the first time, comprehensive solar-cycle-long studies of radiation belt processes. However, neural networks trained from different magnetic field models can result in different L* values, which could cause misinterpretation of radiation belt dynamics, such as where the source of the radiation belt charged particles is and which mechanism is dominant in accelerating the particles. This calls for careful choice of a magnetospheric field model for the L* calculation.
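A sketch of the surrogate-model idea, assuming a generic feedforward regressor: map the inputs that determine the drift shell (position plus the field model's driving parameters) onto the L* produced by the slow tracing code, then use the trained network for fast evaluation. The input dimension, network size, and the synthetic stand-in target below are all placeholders for the real field-model-generated data pool.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(11)
X = rng.uniform(size=(100_000, 6))           # e.g. position plus field-model driving parameters
y = 4 + 2 * X[:, 0] - X[:, 1] * X[:, 2]      # synthetic stand-in for traced L* values
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=200)
net.fit(X[:90_000], y[:90_000])              # train on the precomputed L* data pool
err = np.abs(net.predict(X[90_000:]) - y[90_000:]).mean()
print("mean surrogate error on held-out samples:", err)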
Sun, Shan-Bin; He, Yuan-Yuan; Zhou, Si-Da; Yue, Zhen-Jiang
2017-12-12
Measurement of dynamic responses plays an important role in structural health monitoring, damage detection, and other fields of research. However, in aerospace engineering, physical sensors are limited by the operational conditions of spacecraft, due to the severe environment in outer space. This paper proposes a virtual sensor model with partial vibration measurements using a convolutional neural network. The transmissibility function is employed as prior knowledge. A four-layer neural network with two convolutional layers, one fully connected layer, and an output layer is proposed as the predicting model. Numerical examples of two different structural dynamic systems demonstrate the performance of the proposed approach. The merit of the novel technique is further indicated by a simply supported beam experiment, compared to a modal-model-based virtual sensor, which uses modal parameters, such as mode shapes, for estimating the responses of the faulty sensors. The results show that the presented data-driven response virtual sensor technique can predict structural response with high accuracy.
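A minimal sketch of the prior knowledge used here: a transmissibility function between two measured response channels, estimated as a cross-to-auto spectrum ratio from a single record. The two-channel "structure" below is synthetic (two FIR paths driven by the same excitation), for illustration only:

```python
# Estimate the transmissibility T(w) from response channel x1 to channel x2.
import numpy as np

rng = np.random.default_rng(2)
fs, n = 1000, 2**14
base = rng.normal(size=n)                        # broadband excitation
x1 = np.convolve(base, [1.0, 0.5, 0.2], mode="same")  # response path 1 (synthetic)
x2 = np.convolve(base, [0.3, 0.8, 0.1], mode="same")  # response path 2 (synthetic)

X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
S12 = X2 * np.conj(X1)                           # cross spectrum
S11 = X1 * np.conj(X1)                           # auto spectrum
T = S12 / S11                                    # transmissibility x1 -> x2
freqs = np.fft.rfftfreq(n, 1 / fs)
print("|T| at ~50 Hz:", abs(T[np.argmin(abs(freqs - 50))]))
```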
The neural circuit and synaptic dynamics underlying perceptual decision-making
NASA Astrophysics Data System (ADS)
Liu, Feng
2015-03-01
Decision-making with several choice options is central to cognition. To elucidate the neural mechanisms of multiple-choice motion discrimination, we built a continuous recurrent network model to represent a local circuit in the lateral intraparietal area (LIP). The network is composed of pyramidal cells and interneurons, which are directionally tuned. All neurons are reciprocally connected, and the synaptic connectivity strength is heterogeneous. Specifically, we assume two types of inhibitory connectivity to pyramidal cells: opposite-feature and similar-feature inhibition. The model accounted for both physiological and behavioral data from monkey experiments. The network is endowed with slow excitatory reverberation, which subserves the buildup and maintenance of persistent neural activity, and predominant feedback inhibition, which underlies the winner-take-all competition and attractor dynamics. The opposite-feature and similar-feature inhibition have different effects on decision-making, and only their combination allows for a categorical choice among 12 alternatives. Together, our work highlights the importance of structured synaptic inhibition in multiple-choice decision-making processes.
A model for integrating elementary neural functions into delayed-response behavior.
Gisiger, Thomas; Kerszberg, Michel
2006-04-01
It is well established that various cortical regions can implement a wide array of neural processes, yet the mechanisms which integrate these processes into behavior-producing, brain-scale activity remain elusive. We propose that an important role in this respect might be played by executive structures controlling the traffic of information between the cortical regions involved. To illustrate this hypothesis, we present a neural network model comprising a set of interconnected structures harboring stimulus-related activity (visual representation, working memory, and planning), and a group of executive units with task-related activity patterns that manage the information flowing between them. The resulting dynamics allows the network to perform the dual task of either retaining an image during a delay (delayed-matching to sample task), or recalling from this image another one that has been associated with it during training (delayed-pair association task). The model reproduces behavioral and electrophysiological data gathered on the inferior temporal and prefrontal cortices of primates performing these same tasks. It also makes predictions on how neural activity coding for the recall of the image associated with the sample emerges and becomes prospective during the training phase. The network dynamics proves to be very stable against perturbations, and it exhibits signs of scale-invariant organization and cooperativity. The present network represents a possible neural implementation for active, top-down, prospective memory retrieval in primates. The model suggests that brain activity leading to performance of cognitive tasks might be organized in modular fashion, simple neural functions becoming integrated into more complex behavior by executive structures harbored in prefrontal cortex and/or basal ganglia.
Driving the brain towards creativity and intelligence: A network control theory analysis.
Kenett, Yoed N; Medaglia, John D; Beaty, Roger E; Chen, Qunlin; Betzel, Richard F; Thompson-Schill, Sharon L; Qiu, Jiang
2018-01-04
High-level cognitive constructs, such as creativity and intelligence, entail complex and multiple processes, including cognitive control processes. Recent neurocognitive research on these constructs highlights the importance of dynamic interaction across neural network systems and the role of cognitive control processes in guiding such a dynamic interaction. How can we quantitatively examine the extent and ways in which cognitive control contributes to creativity and intelligence? To address this question, we apply a computational network control theory (NCT) approach to structural brain imaging data acquired via diffusion tensor imaging in a large sample of participants, to examine how NCT relates to individual differences in distinct measures of creative ability and intelligence. Recent application of this theory at the neural level is built on a model of brain dynamics, which mathematically models patterns of inter-region activity propagated along the structure of an underlying network. The strength of this approach is its ability to characterize the potential role of each brain region in regulating whole-brain network function based on its anatomical fingerprint and a simplified model of node dynamics. We find that intelligence is related to the ability to "drive" the brain system into easy-to-reach neural states by the right inferior parietal lobe and to lower integration abilities in the left retrosplenial cortex. We also find that creativity is related to the ability to "drive" the brain system into difficult-to-reach states by the right dorsolateral prefrontal cortex (inferior frontal junction) and to higher integration abilities in sensorimotor areas. Furthermore, we find that different facets of creativity, namely fluency, flexibility, and originality, relate to generally similar but not identical network controllability processes. We relate our findings to general theories on intelligence and creativity. Copyright © 2018 Elsevier Ltd. All rights reserved.
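A minimal sketch of the NCT quantity behind "driving" the brain into states: the finite-horizon controllability Gramian of linear dynamics x(t+1) = A x(t) + B u(t), whose trace is a common average-controllability score per control node. The toy connectivity matrix and the stabilizing normalization below are assumptions for illustration:

```python
# Average controllability of each node in a toy linear network model.
import numpy as np

rng = np.random.default_rng(3)
n = 8
A = rng.uniform(0, 1, size=(n, n)); A = (A + A.T) / 2     # symmetric toy "connectome"
A = A / (1 + np.max(np.abs(np.linalg.eigvalsh(A))))       # scale for stability

def avg_controllability(A, node, T=20):
    B = np.zeros((n, 1)); B[node] = 1.0                   # control input at one region
    G = np.zeros((n, n))
    Ak = np.eye(n)
    for _ in range(T):
        G += Ak @ B @ B.T @ Ak.T                          # sum of A^k B B^T (A^k)^T
        Ak = Ak @ A
    return np.trace(G)                                    # larger -> easier-to-reach states

print([round(avg_controllability(A, i), 3) for i in range(n)])
```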
Li, Yang; Oku, Makito; He, Guoguang; Aihara, Kazuyuki
2017-04-01
In this study, a method is proposed that eliminates spiral waves in a locally connected chaotic neural network (CNN) under some simplified conditions, using a dynamic phase space constraint (DPSC) as a control method. In this method, a control signal is constructed from the feedback internal states of the neurons to detect phase singularities based on their amplitude reduction, before modulating a threshold value to truncate the refractory internal states of the neurons and terminate the spirals. Simulations showed that with appropriate parameter settings, the network was directed from a spiral wave state into either a plane wave (PW) state or a synchronized oscillation (SO) state, where the control vanished automatically and left the original CNN model unaltered. Each type of state had a characteristic oscillation frequency, where spiral wave states had the highest, and the intra-control dynamics was dominated by low-frequency components, thereby indicating slow adjustments to the state variables. In addition, the PW-inducing and SO-inducing control processes were distinct, where the former generally had longer durations but smaller average proportions of affected neurons in the network. Furthermore, variations in the control parameter allowed partial selectivity of the control results, which were accompanied by modulation of the control processes. The results of this study broaden the applicability of DPSC to chaos control and they may also facilitate the utilization of locally connected CNNs in memory retrieval and the exploration of traveling wave dynamics in biological neural networks. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Dumedah, Gift; Walker, Jeffrey P.; Chik, Li
2014-07-01
Soil moisture information is critically important for water management operations including flood forecasting, drought monitoring, and groundwater recharge estimation. While an accurate and continuous record of soil moisture is required for these applications, the available soil moisture data, in practice, are typically fraught with missing values. A wide range of methods is available for infilling hydrologic variables, but a thorough inter-comparison between statistical methods and artificial neural networks has not been made. This study examines 5 statistical methods, including monthly averages, the weighted Pearson correlation coefficient, a method based on the temporal stability of soil moisture, a weighted merging of the three methods, and a method based on the concept of rough sets. Additionally, 9 artificial neural networks are examined, broadly categorized into feedforward, dynamic, and radial basis networks. These 14 infilling methods were used to estimate missing soil moisture records and subsequently validated against known values for 13 soil moisture monitoring stations at three different soil layer depths in the Yanco region in southeast Australia. The evaluation results show that the three highest-performing methods are the nonlinear autoregressive neural network, the rough sets method, and monthly replacement. A high estimation accuracy (root mean square error (RMSE) of about 0.03 m/m) was found for the nonlinear autoregressive network, due to its regression-based dynamic structure, which allows feedback connections through discrete-time estimation. A similarly high accuracy (0.05 m/m RMSE) for the rough sets procedure illustrates the important role of the temporal persistence of soil moisture, with the capability to account for different soil moisture conditions.
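A minimal sketch of the best-performing idea, autoregressive estimation from lagged values, reduced here to a linear lagged regression so the mechanics stay visible (the real method is a nonlinear autoregressive network); the soil moisture series is synthetic:

```python
# Infill a missing soil moisture value from its own recent past (lagged regression).
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(2000)
sm = 0.25 + 0.05 * np.sin(2 * np.pi * t / 365) + 0.01 * rng.normal(size=t.size)

lags = 3
X = np.column_stack([sm[i:len(sm) - lags + i] for i in range(lags)])  # lagged inputs
y = sm[lags:]                                                         # next value
coef, *_ = np.linalg.lstsq(np.column_stack([X, np.ones(len(X))]), y, rcond=None)

k = 1500                                        # pretend sm[k] is missing
pred = np.r_[sm[k - lags:k], 1.0] @ coef        # estimate it from the 3 prior values
print(f"true={sm[k]:.4f}  infilled={pred:.4f}")
```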
Neuronal avalanches of a self-organized neural network with active-neuron-dominant structure.
Li, Xiumin; Small, Michael
2012-06-01
A neuronal avalanche is a spontaneous burst of neuronal activity whose population event sizes obey a power-law distribution with an exponent of -3/2. It has been observed in the superficial layers of cortex both in vivo and in vitro. In this paper, we analyze the information transmission of a novel self-organized neural network with an active-neuron-dominant structure. Neuronal avalanches can be observed in this network with appropriate input intensity. We find that the process of network learning via spike-timing dependent plasticity dramatically increases the complexity of the network structure, which finally self-organizes into active-neuron-dominant connectivity. Both the entropy of activity patterns and the complexity of their resulting post-synaptic inputs are maximized when the network dynamics propagate as neuronal avalanches. This emergent topology is beneficial for information transmission with high efficiency and could also be responsible for the large information capacity of this network compared with alternative archetypal networks with different neural connectivity.
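A minimal sketch of one common operationalization of the -3/2 check: segment avalanches as contiguous runs of above-zero binned activity, then fit the size exponent with the standard maximum-likelihood estimator; the activity trace here is synthetic, so the fitted value is illustrative only:

```python
# Detect avalanches in a binned activity trace and fit a power-law size exponent.
import numpy as np

rng = np.random.default_rng(5)
activity = rng.poisson(0.8, size=200_000)       # population spike counts per time bin

sizes, s = [], 0
for a in activity:                              # avalanche = run of nonzero bins
    if a > 0:
        s += a                                  # size = total spikes in the run
    elif s > 0:
        sizes.append(s); s = 0
sizes = np.array(sizes)

s_min = 2
tail = sizes[sizes >= s_min]
alpha = 1 + len(tail) / np.sum(np.log(tail / (s_min - 0.5)))  # Clauset-style MLE
print(f"{len(sizes)} avalanches, fitted exponent = -{alpha:.2f}")
```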
NASA Astrophysics Data System (ADS)
Raff, L. M.; Malshe, M.; Hagan, M.; Doughan, D. I.; Rockley, M. G.; Komanduri, R.
2005-02-01
A neural network/trajectory approach is presented for the development of accurate potential-energy hypersurfaces that can be utilized to conduct ab initio molecular dynamics (AIMD) and Monte Carlo studies of gas-phase chemical reactions, nanometric cutting, and nanotribology, and of a variety of mechanical properties of importance in potential microelectromechanical systems applications. The method is sufficiently robust that it can be applied to a wide range of polyatomic systems. The overall method integrates ab initio electronic structure calculations with importance sampling techniques that permit the critical regions of configuration space to be determined. The computed ab initio energies and gradients are then accurately interpolated using neural networks (NN) rather than arbitrary parametrized analytical functional forms, moving interpolation, or least-squares methods. The sampling method involves a tight integration of molecular dynamics calculations with neural networks that employ early stopping and regularization procedures to improve network performance and test for convergence. The procedure can be initiated using an empirical potential surface or direct dynamics. The accuracy and interpolation power of the method have been tested for two cases: the global potential surface for vinyl bromide undergoing unimolecular decomposition via four different reaction channels, and nanometric cutting of silicon. The results show that the sampling methods permit the important regions of configuration space to be easily and rapidly identified, that convergence of the NN fit to the ab initio electronic structure database can be easily monitored, and that the interpolation accuracy of the NN fits is excellent, even for systems involving five atoms or more. The method permits a substantial computational speed and accuracy advantage over existing methods, is robust, and is relatively easy to implement.
Hypersonic Vehicle Trajectory Optimization and Control
NASA Technical Reports Server (NTRS)
Balakrishnan, S. N.; Shen, J.; Grohs, J. R.
1997-01-01
Two classes of neural networks have been developed for the study of hypersonic vehicle trajectory optimization and control. The first one is called an 'adaptive critic'. The uniqueness and main features of this approach are that: (1) they need no external training; (2) they allow variability of initial conditions; and (3) they can serve as feedback control. This is used to solve a 'free final time' two-point boundary value problem that maximizes the mass at the rocket burn-out while satisfying the pre-specified burn-out conditions in velocity, flightpath angle, and altitude. The second neural network is a recurrent network. An interesting feature of this network formulation is that when its inputs are the coefficients of the dynamics and control matrices, the network outputs are the Kalman sequences (with a quadratic cost function); the same network is also used for identifying the coefficients of the dynamics and control matrices. Consequently, we can use it to control a system whose parameters are uncertain. Numerical results are presented which illustrate the potential of these methods.
A Markovian event-based framework for stochastic spiking neural networks.
Touboul, Jonathan D; Faugeras, Olivier D
2011-11-01
In spiking neural networks, information is conveyed by the spike times, which depend on the intrinsic dynamics of each neuron, the input it receives, and the connections between neurons. In this article we study the Markovian nature of the sequence of spike times in stochastic neural networks, and in particular the ability to deduce from a spike train the next spike time, and therefore to produce a description of the network activity based only on the spike times, regardless of the membrane potential process. To study this question in a rigorous manner, we introduce and study an event-based description of networks of noisy integrate-and-fire neurons, i.e., one based on the computation of the spike times. We show that the firing times of the neurons in the network constitute a Markov chain, whose transition probability is related to the probability distribution of the interspike interval of the neurons in the network. In the cases where the Markovian model can be developed, the transition probability is explicitly derived for such classical cases of neural networks as linear integrate-and-fire neuron models with excitatory and inhibitory interactions, for different types of synapses, possibly featuring noisy synaptic integration, transmission delays, and absolute and relative refractory periods. This covers most of the cases that have been investigated in the event-based description of spiking deterministic neural networks.
Multiplex visibility graphs to investigate recurrent neural network dynamics
NASA Astrophysics Data System (ADS)
Bianchi, Filippo Maria; Livi, Lorenzo; Alippi, Cesare; Jenssen, Robert
2017-03-01
A recurrent neural network (RNN) is a universal approximator of dynamical systems, whose performance often depends on sensitive hyperparameters. Tuning them properly may be difficult and is typically based on a trial-and-error approach. In this work, we adopt a graph-based framework to interpret and characterize the internal dynamics of a class of RNNs called echo state networks (ESNs). We design principled unsupervised methods to derive hyperparameter configurations yielding maximal ESN performance, expressed in terms of prediction error and memory capacity. In particular, we propose to model the time series generated by each neuron's activations with a horizontal visibility graph, whose topological properties have been shown to be related to the underlying system dynamics. Subsequently, the horizontal visibility graphs associated with all neurons become layers of a larger structure called a multiplex. We show that topological properties of such a multiplex reflect important features of ESN dynamics that can be used to guide the tuning of its hyperparameters. Results obtained on several benchmarks and a real-world dataset of telephone call data records show the effectiveness of the proposed methods.
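A minimal sketch of the multiplex building block: the horizontal visibility graph of one time series, in which two samples are linked if and only if every sample between them is strictly lower than both:

```python
# Build the horizontal visibility graph of a time series (node i = sample i).
import numpy as np

def horizontal_visibility_edges(x):
    edges = []
    for i in range(len(x) - 1):
        edges.append((i, i + 1))                 # adjacent samples always see each other
        top = x[i + 1]                           # running max of intermediate samples
        for j in range(i + 2, len(x)):
            if x[i] > top and x[j] > top:        # everything between is below both ends
                edges.append((i, j))
            top = max(top, x[j])
            if top >= x[i]:                      # nothing further can be visible from i
                break
    return edges

rng = np.random.default_rng(6)
series = rng.normal(size=12)                     # e.g. one ESN neuron's activations
print(horizontal_visibility_edges(series))
```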
Data collapse and critical dynamics in neuronal avalanche data
NASA Astrophysics Data System (ADS)
Butler, Thomas; Friedman, Nir; Dahmen, Karin; Beggs, John; Deville, Lee; Ito, Shinya
2012-02-01
The tasks of information processing, computation, and response to stimuli require neural computation to be remarkably flexible and diverse. To optimally satisfy the demands of neural computation, neuronal networks have been hypothesized to operate near a non-equilibrium critical point. In spite of their importance for neural dynamics, experimental evidence for critical dynamics has been primarily limited to power law statistics that can also emerge from non-critical mechanisms. By tracking the firing of large numbers of synaptically connected cortical neurons and comparing the resulting data to the predictions of critical phenomena, we show that cortical tissues in vitro can function near criticality. Among the most striking predictions of critical dynamics is that the mean temporal profiles of avalanches of widely varying durations are quantitatively described by a single universal scaling function (data collapse). We show for the first time that this prediction is confirmed in neuronal networks. We also show that the data have three additional features predicted by critical phenomena: approximate power law distributions of avalanche sizes and durations, samples in subcritical and supercritical phases, and scaling laws between anomalous exponents.
NASA Astrophysics Data System (ADS)
Mudigonda, Naga R.; Kacelenga, Ray; Edwards, Mark
2004-09-01
This paper evaluates the performance of a holographic neural network in comparison with a conventional feedforward backpropagation neural network for the classification of landmine targets in ground penetrating radar images. The data used in the study were acquired from four different test sites using the landmine detection system developed by General Dynamics Canada Ltd., in collaboration with Defense Research and Development Canada, Suffield. A set of seven features extracted for each detected alarm is used as stimulus inputs for the networks. The recall responses of the networks are then evaluated against the ground truth to declare true or false detections. The area computed under the receiver operating characteristic curve is used for comparative purposes. With a large dataset comprising data from multiple sites, both the holographic and conventional networks showed comparable trends in recall accuracies, with area values of 0.88 and 0.87, respectively. On independent validation datasets, the holographic network's generalization performance was observed to be better (mean area = 0.86) than that of the conventional network (mean area = 0.82). Despite the widely publicized theoretical advantages of the holographic technology, use of more than the required number of cortical memory elements resulted in over-fitting of the holographic network.
Nicola, Wilten; Tripp, Bryan; Scott, Matthew
2016-01-01
A fundamental question in computational neuroscience is how to connect a network of spiking neurons to produce desired macroscopic or mean field dynamics. One possible approach is through the Neural Engineering Framework (NEF). The NEF approach requires quantities called decoders which are solved through an optimization problem requiring large matrix inversion. Here, we show how a decoder can be obtained analytically for type I and certain type II firing rates as a function of the heterogeneity of its associated neuron. These decoders generate approximants for functions that converge to the desired function in mean-squared error like 1/N, where N is the number of neurons in the network. We refer to these decoders as scale-invariant decoders due to their structure. These decoders generate weights for a network of neurons through the NEF formula for weights. These weights force the spiking network to have arbitrary and prescribed mean field dynamics. The weights generated with scale-invariant decoders all lie on low dimensional hypersurfaces asymptotically. We demonstrate the applicability of these scale-invariant decoders and weight surfaces by constructing networks of spiking theta neurons that replicate the dynamics of various well known dynamical systems such as the neural integrator, Van der Pol system and the Lorenz system. As these decoders are analytically determined and non-unique, the weights are also analytically determined and non-unique. We discuss the implications for measured weights of neuronal networks. PMID:26973503
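A minimal sketch of the NEF decoder step that the paper replaces with analytical expressions: regularized least-squares decoders mapping heterogeneous tuning-curve firing rates back to the represented variable x. The rectified-linear tuning curves and regularization level below are illustrative assumptions:

```python
# Solve for NEF-style decoders d such that rates @ d approximates x.
import numpy as np

rng = np.random.default_rng(7)
N = 50                                           # neurons
x = np.linspace(-1, 1, 201)                      # represented variable samples
gains = rng.uniform(0.5, 2.0, N)                 # heterogeneous neuron parameters
biases = rng.uniform(-1, 1, N)
encoders = rng.choice([-1.0, 1.0], N)
rates = np.maximum(0, gains * (encoders * x[:, None]) + biases)   # (201, N) tuning curves

reg = 0.1 * rates.max()                          # Tikhonov regularization (noise model)
G = rates.T @ rates + reg**2 * len(x) * np.eye(N)
d = np.linalg.solve(G, rates.T @ x)              # the matrix solve the paper avoids
x_hat = rates @ d
print("decode RMSE:", np.sqrt(np.mean((x_hat - x) ** 2)))
```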
Death and rebirth of neural activity in sparse inhibitory networks
NASA Astrophysics Data System (ADS)
Angulo-Garcia, David; Luccioli, Stefano; Olmi, Simona; Torcini, Alessandro
2017-05-01
Inhibition is a key aspect of neural dynamics playing a fundamental role for the emergence of neural rhythms and the implementation of various information coding strategies. Inhibitory populations are present in several brain structures, and the comprehension of their dynamics is strategical for the understanding of neural processing. In this paper, we clarify the mechanisms underlying a general phenomenon present in pulse-coupled heterogeneous inhibitory networks: inhibition can induce not only suppression of neural activity, as expected, but can also promote neural re-activation. In particular, for globally coupled systems, the number of firing neurons monotonically reduces upon increasing the strength of inhibition (neuronal death). However, the random pruning of connections is able to reverse the action of inhibition, i.e. in a random sparse network a sufficiently strong synaptic strength can surprisingly promote, rather than depress, the activity of neurons (neuronal rebirth). Thus, the number of firing neurons reaches a minimum value at some intermediate synaptic strength. We show that this minimum signals a transition from a regime dominated by neurons with a higher firing activity to a phase where all neurons are effectively sub-threshold and their irregular firing is driven by current fluctuations. We explain the origin of the transition by deriving a mean field formulation of the problem able to provide the fraction of active neurons as well as the first two moments of their firing statistics. The introduction of a synaptic time scale does not modify the main aspects of the reported phenomenon. However, for sufficiently slow synapses the transition becomes dramatic, and the system passes from a perfectly regular evolution to irregular bursting dynamics. In this latter regime the model provides predictions consistent with experimental findings for a specific class of neurons, namely the medium spiny neurons in the striatum.
The Brain as an Efficient and Robust Adaptive Learner.
Denève, Sophie; Alemi, Alireza; Bourdoukan, Ralph
2017-06-07
Understanding how the brain learns to compute functions reliably, efficiently, and robustly with noisy spiking activity is a fundamental challenge in neuroscience. Most sensory and motor tasks can be described as dynamical systems and could presumably be learned by adjusting connection weights in a recurrent biological neural network. However, this is greatly complicated by the credit assignment problem for learning in recurrent networks: the contribution of each connection to the global output error cannot be determined based only on quantities locally accessible to the synapse. Combining tools from adaptive control theory and efficient coding theories, we propose that neural circuits can indeed learn complex dynamic tasks with local synaptic plasticity rules, as long as they combine two experimentally established neural mechanisms. First, they should receive top-down feedback driving both their activity and their synaptic plasticity. Second, inhibitory interneurons should maintain a tight balance between excitation and inhibition in the circuit. The resulting networks can learn arbitrary dynamical systems and produce irregular spike trains as variable as those observed experimentally. Yet this variability in single neurons may hide an extremely efficient and robust computation at the population level. Copyright © 2017 Elsevier Inc. All rights reserved.
Dynamic range in the C. elegans brain network
NASA Astrophysics Data System (ADS)
Antonopoulos, Chris G.
2016-01-01
We study external electrical perturbations and their responses in the brain dynamic network of the Caenorhabditis elegans soil worm, given by the connectome of its large somatic nervous system. Our analysis is inspired by a realistic experiment in which one externally stimulates specific parts of the brain and studies the persistent neural activity triggered in other cortical regions. In this work, we perturb groups of neurons that form communities, identified by the walktrap community detection method, with trains of stereotypical electrical Poissonian impulses, and study the propagation of neural activity to other communities by measuring the corresponding dynamic ranges and Stevens law exponents. We show that when one perturbs specific communities, keeping the rest unperturbed, the external stimulations are able to propagate to some of them but not to all. There are also perturbations that do not trigger any response. We find that this depends on the initially perturbed community. Finally, we relate our findings in the former cases to low neural synchronization, self-criticality, and large information flow capacity, and interpret them as the ability of the brain network to respond to external perturbations when it works at criticality and its information flow capacity becomes maximal.
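A minimal sketch of the response measure used here: the dynamic range, commonly defined as Delta = 10 log10(S90/S10), where S10 and S90 are the stimulus intensities evoking 10% and 90% of the saturated response. The saturating response curve below is a synthetic stand-in:

```python
# Compute the dynamic range of a stimulus-response curve.
import numpy as np

S = np.logspace(-3, 2, 500)                      # stimulus intensities
F = S / (S + 1.0)                                # synthetic saturating response curve
F10, F90 = 0.1 * F.max(), 0.9 * F.max()
S10 = S[np.argmin(abs(F - F10))]                 # intensity at 10% response
S90 = S[np.argmin(abs(F - F90))]                 # intensity at 90% response
delta = 10 * np.log10(S90 / S10)
print(f"dynamic range = {delta:.1f} dB")         # about 19.1 dB for F = S/(S+1)
```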
Deep recurrent neural network reveals a hierarchy of process memory during dynamic natural vision.
Shi, Junxing; Wen, Haiguang; Zhang, Yizhen; Han, Kuan; Liu, Zhongming
2018-05-01
The human visual cortex extracts both spatial and temporal visual features to support perception and guide behavior. Deep convolutional neural networks (CNNs) provide a computational framework to model cortical representation and organization for spatial visual processing, but are unable to explain how the brain processes temporal information. To overcome this limitation, we extended a CNN by adding recurrent connections to different layers of the CNN to allow spatial representations to be remembered and accumulated over time. The extended model, or the recurrent neural network (RNN), embodied a hierarchical and distributed model of process memory as an integral part of visual processing. Unlike the CNN, the RNN learned spatiotemporal features from videos to enable action recognition. The RNN better predicted cortical responses to natural movie stimuli than the CNN at all visual areas, especially those along the dorsal stream. As a fully observable model of visual processing, the RNN also revealed a cortical hierarchy of temporal receptive windows, dynamics of process memory, and spatiotemporal representations. These results support the hypothesis of process memory, and demonstrate the potential of using the RNN for in-depth computational understanding of dynamic natural vision. © 2018 Wiley Periodicals, Inc.
Computational modeling of neural plasticity for self-organization of neural networks.
Chrol-Cannon, Joseph; Jin, Yaochu
2014-11-01
Self-organization in biological nervous systems during the lifetime is known to occur largely through a process of plasticity that depends on the spike-timing activity of connected neurons. In the field of computational neuroscience, much effort has been dedicated to building computational models of neural plasticity that replicate experimental data. Most recently, increasing attention has been paid to understanding the role of neural plasticity in functional and structural neural self-organization, as well as its influence on the learning performance of neural networks in machine learning tasks such as classification and regression. Although many ideas and hypotheses have been suggested, the relationship between the structure, dynamics, and learning performance of neural networks remains elusive. The purpose of this article is to review the most important computational models of neural plasticity and discuss various ideas about neural plasticity's role. Finally, we suggest a few promising research directions, in particular those that combine findings in computational neuroscience and systems biology and their synergetic roles in understanding learning, memory, and cognition, thereby bridging the gap between computational neuroscience, systems biology, and computational intelligence. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
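A minimal sketch of the canonical pair-based spike-timing-dependent plasticity (STDP) rule that most of the reviewed models build on: the weight change depends exponentially on the pre/post spike-time difference, with the amplitudes and time constant below chosen for illustration:

```python
# Pair-based STDP: weight update as a function of spike-time difference.
import numpy as np

A_plus, A_minus = 0.01, 0.012                    # potentiation / depression amplitudes
tau = 20.0                                       # plasticity time constant (ms)

def stdp_dw(t_pre, t_post):
    dt = t_post - t_pre
    if dt > 0:                                   # pre fires before post -> potentiate
        return A_plus * np.exp(-dt / tau)
    else:                                        # post fires before pre -> depress
        return -A_minus * np.exp(dt / tau)

for dt in (-40, -10, -1, 1, 10, 40):
    print(f"dt = {dt:+3d} ms  ->  dw = {stdp_dw(0.0, float(dt)):+.5f}")
```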
Intrinsic and Extrinsic Neuromodulation of Olfactory Processing.
Lizbinski, Kristyn M; Dacks, Andrew M
2017-01-01
Neuromodulation is a ubiquitous feature of neural systems, allowing flexible, context specific control over network dynamics. Neuromodulation was first described in invertebrate motor systems and early work established a basic dichotomy for neuromodulation as having either an intrinsic origin (i.e., neurons that participate in network coding) or an extrinsic origin (i.e., neurons from independent networks). In this conceptual dichotomy, intrinsic sources of neuromodulation provide a "memory" by adjusting network dynamics based upon previous and ongoing activation of the network itself, while extrinsic neuromodulators provide the context of ongoing activity of other neural networks. Although this dichotomy has been thoroughly considered in motor systems, it has received far less attention in sensory systems. In this review, we discuss intrinsic and extrinsic modulation in the context of olfactory processing in invertebrate and vertebrate model systems. We begin by discussing presynaptic modulation of olfactory sensory neurons by local interneurons (LNs) as a mechanism for gain control based on ongoing network activation. We then discuss the cell-class specific effects of serotonergic centrifugal neurons on olfactory processing. Finally, we briefly discuss the integration of intrinsic and extrinsic neuromodulation (metamodulation) as an effective mechanism for exerting global control over olfactory network dynamics. The heterogeneous nature of neuromodulation is a recurring theme throughout this review as the effects of both intrinsic and extrinsic modulation are generally non-uniform.
Neuroscience of drug craving for addiction medicine: From circuits to therapies.
Ekhtiari, Hamed; Nasseri, Padideh; Yavari, Fatemeh; Mokri, Azarkhsh; Monterosso, John
2016-01-01
Drug craving is a dynamic neurocognitive emotional-motivational response to a wide range of cues, from internal to external environments and from drug-related to stressful or affective events. The subjective feeling of craving, as an appetitive or compulsive state, could be considered a part of this multidimensional process, with modules at different levels of consciousness and embodiment. The neural correspondence of this dynamic and complex phenomenon may be productively investigated in relation to regional, small-scale networks, large-scale networks, and brain states. Within cognitive neuroscience, this approach has provided a long list of neural and cognitive targets for craving modulation with different cognitive, electrical, or pharmacological interventions. There are new opportunities to integrate different approaches to craving management from environmental, behavioral, psychosocial, cognitive, and neural perspectives. By using cognitive neuroscience models that treat drug craving as a dynamic and multidimensional process, these approaches may yield more effective interventions for addiction medicine. © 2016 Elsevier B.V. All rights reserved.
Neural network fusion capabilities for efficient implementation of tracking algorithms
NASA Astrophysics Data System (ADS)
Sundareshan, Malur K.; Amoozegar, Farid
1997-03-01
The ability to efficiently fuse information of different forms to facilitate intelligent decision making is one of the major capabilities of trained multilayer neural networks that is now being recognized. While development of innovative adaptive control algorithms for nonlinear dynamical plants that attempt to exploit these capabilities seems to be more popular, a corresponding development of nonlinear estimation algorithms using these approaches, particularly for application in target surveillance and guidance operations, has not received similar attention. We describe the capabilities and functionality of neural network algorithms for data fusion and implementation of tracking filters. To discuss details and to serve as a vehicle for quantitative performance evaluations, the illustrative case of estimating the position and velocity of surveillance targets is considered. Efficient target-tracking algorithms that can utilize data from a host of sensing modalities and are capable of reliably tracking even uncooperative targets executing fast and complex maneuvers are of interest in a number of applications. The primary motivation for employing neural networks in these applications comes from the efficiency with which more features extracted from different sensor measurements can be utilized as inputs for estimating target maneuvers. A system architecture that efficiently integrates the fusion capabilities of a trained multilayer neural net with the tracking performance of a Kalman filter is described. The innovation lies in the way the fusion of multisensor data is accomplished to facilitate improved estimation without increasing the computational complexity of the dynamical state estimator itself.
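A minimal sketch of the integration pattern described: a constant-velocity Kalman filter whose measurement is a fused position estimate. The "fusion" step here is a placeholder average of two noisy sensors standing in for the trained network's output:

```python
# Constant-velocity Kalman filter fed by a fused pseudo-measurement.
import numpy as np

rng = np.random.default_rng(8)
dt = 1.0
F = np.array([[1, dt], [0, 1]])                  # state transition (position, velocity)
H = np.array([[1.0, 0.0]])                       # we observe position only
Q = 0.01 * np.eye(2); R = np.array([[0.5]])      # process / measurement noise

x = np.array([0.0, 1.0]); P = np.eye(2)          # filter state and covariance
true = np.array([0.0, 1.0])
for k in range(10):
    true = F @ true                              # true target motion
    z1 = true[0] + rng.normal(scale=1.0)         # sensor 1 reading
    z2 = true[0] + rng.normal(scale=1.0)         # sensor 2 reading
    z = np.array([(z1 + z2) / 2])                # stand-in for the neural fusion stage
    x = F @ x; P = F @ P @ F.T + Q               # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + (K @ (z - H @ x)).ravel()            # update with fused measurement
    P = (np.eye(2) - K @ H) @ P
print(f"true pos {true[0]:.2f}, estimate {x[0]:.2f}")
```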
Electronic neural network for solving traveling salesman and similar global optimization problems
NASA Technical Reports Server (NTRS)
Thakoor, Anilkumar P. (Inventor); Moopenn, Alexander W. (Inventor); Duong, Tuan A. (Inventor); Eberhardt, Silvio P. (Inventor)
1993-01-01
This invention is a novel high-speed neural network based processor for solving the 'traveling salesman' and other global optimization problems. It comprises a novel hybrid architecture employing a binary synaptic array whose embodiment incorporates the fixed rules of the problem, such as the number of cities to be visited. The array is prompted by analog voltages representing variables such as distances. The processor incorporates two interconnected feedback networks, each of which solves part of the problem independently and simultaneously, yet which exchange information dynamically.
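A minimal sketch of the optimization encoding such processors build on, the Hopfield-Tank energy for the traveling salesman problem: a binary tour matrix V (city by position) is scored by constraint penalties plus a distance-weighted tour-length term. The penalty weights here are arbitrary illustrative choices:

```python
# Hopfield-Tank style energy function for a TSP tour matrix.
import numpy as np

rng = np.random.default_rng(9)
n = 5
d = rng.uniform(1, 10, size=(n, n)); d = (d + d.T) / 2; np.fill_diagonal(d, 0)
A = B = 10.0; C = 1.0                            # penalty weights (hypothetical)

def energy(V):
    rows = np.sum((V.sum(axis=1) - 1) ** 2)      # each city visited exactly once
    cols = np.sum((V.sum(axis=0) - 1) ** 2)      # each position holds exactly one city
    length = 0.0
    for j in range(n):                           # distance between consecutive stops
        nxt = (j + 1) % n
        length += V[:, j] @ d @ V[:, nxt]
    return A * rows + B * cols + C * length

perm = rng.permutation(n)                        # a valid random tour
V = np.zeros((n, n)); V[perm, np.arange(n)] = 1
print("energy of a valid tour:", energy(V))
print("energy with constraints violated:", energy(np.ones((n, n)) / n + 0.3))
```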
Short-Term Memory in Orthogonal Neural Networks
NASA Astrophysics Data System (ADS)
White, Olivia L.; Lee, Daniel D.; Sompolinsky, Haim
2004-04-01
We study the ability of linear recurrent networks obeying discrete time dynamics to store long temporal sequences that are retrievable from the instantaneous state of the network. We calculate this temporal memory capacity for both distributed shift register and random orthogonal connectivity matrices. We show that the memory capacity of these networks scales with system size.
Passivity of Directed and Undirected Complex Dynamical Networks With Adaptive Coupling Weights.
Wang, Jin-Liang; Wu, Huai-Ning; Huang, Tingwen; Ren, Shun-Yan; Wu, Jigang
2017-08-01
A complex dynamical network consisting of N identical neural networks with reaction-diffusion terms is considered in this paper. First, several passivity definitions for the systems with different dimensions of input and output are given. By utilizing some inequality techniques, several criteria are presented, ensuring the passivity of the complex dynamical network under the designed adaptive law. Then, we discuss the relationship between the synchronization and output strict passivity of the proposed network model. Furthermore, these results are extended to the case when the topological structure of the network is undirected. Finally, two examples with numerical simulations are provided to illustrate the correctness and effectiveness of the proposed results.
Cortical Entropy, Mutual Information and Scale-Free Dynamics in Waking Mice.
Fagerholm, Erik D; Scott, Gregory; Shew, Woodrow L; Song, Chenchen; Leech, Robert; Knöpfel, Thomas; Sharp, David J
2016-10-01
Some neural circuits operate with simple dynamics characterized by one or a few well-defined spatiotemporal scales (e.g. central pattern generators). In contrast, cortical neuronal networks often exhibit richer activity patterns in which all spatiotemporal scales are represented. Such "scale-free" cortical dynamics manifest as cascades of activity with cascade sizes that are distributed according to a power-law. Theory and in vitro experiments suggest that information transmission among cortical circuits is optimized by scale-free dynamics. In vivo tests of this hypothesis have been limited by experimental techniques with insufficient spatial coverage and resolution, i.e., restricted access to a wide range of scales. We overcame these limitations by using genetically encoded voltage imaging to track neural activity in layer 2/3 pyramidal cells across the cortex in mice. As mice recovered from anesthesia, we observed three changes: (a) cortical information capacity increased, (b) information transmission among cortical regions increased and (c) neural activity became scale-free. Our results demonstrate that both information capacity and information transmission are maximized in the awake state in cortical regions with scale-free network dynamics. © The Author 2016. Published by Oxford University Press.
Auto-programmable impulse neural circuits
NASA Technical Reports Server (NTRS)
Watula, D.; Meador, J.
1990-01-01
Impulse neural networks use pulse trains to communicate neuron activation levels. Impulse neural circuits emulate natural neurons at a more detailed level than that typically employed by contemporary neural network implementation methods. An impulse neural circuit which realizes short term memory dynamics is presented. The operation of that circuit is then characterized in terms of pulse frequency modulated signals. Both fixed and programmable synapse circuits for realizing long term memory are also described. The implementation of a simple and useful unsupervised learning law is then presented. The implementation of a differential Hebbian learning rule for a specific mean-frequency signal interpretation is shown to have a straightforward implementation using digital combinational logic with a variation of a previously developed programmable synapse circuit. This circuit is expected to be exploited for simple and straightforward implementation of future auto-adaptive neural circuits.
Multi-layer neural networks for robot control
NASA Technical Reports Server (NTRS)
Pourboghrat, Farzad
1989-01-01
Two neural learning controller designs for manipulators are considered. The first design is based on a neural inverse-dynamics system. The second is the combination of the first one with a neural adaptive state feedback system. Both types of controllers enable the manipulator to perform any given task very well after a period of training and to do other untrained tasks satisfactorily. The second design also enables the manipulator to compensate for unpredictable perturbations.
Duong, D V; Reilly, K D
1995-10-01
This sociological simulation uses the ideas of semiotics and symbolic interactionism to demonstrate how an appropriately developed associative memory in the minds of individuals on the microlevel can self-organize into macrolevel dissipative structures of societies such as racial cultural/economic classes, status symbols and fads. The associative memory used is based on an extension of the IAC neural network (the Interactive Activation and Competition network). Several IAC networks act together to form a society by virtue of their human-like properties of intuition and creativity. These properties give them the ability to create and understand signs, which lead to the macrolevel structures of society. This system is implemented in hierarchical object oriented container classes which facilitate change in deep structure. Graphs of general trends and an historical account of a simulation run of this dynamical system are presented.
Elements of an algorithm for optimizing a parameter-structural neural network
NASA Astrophysics Data System (ADS)
Mrówczyńska, Maria
2016-06-01
The field of processing information provided by measurement results is one of the most important components of geodetic technologies. The dynamic development of this field improves classic algorithms for numerical calculations in cases where analytical solutions are difficult to achieve. Algorithms based on artificial intelligence in the form of artificial neural networks, including the topology of connections between neurons, have become an important instrument for processing and modelling such problems. This concept results from the integration of neural networks and parameter optimization methods and makes it possible to avoid having to define the structure of a network arbitrarily. This kind of extension of the training process is exemplified by the algorithm called the Group Method of Data Handling (GMDH), which belongs to the class of evolutionary algorithms. The article presents a GMDH-type network used for modelling deformations of the geometrical axis of a steel chimney during its operation.
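A minimal sketch of one GMDH layer under common simplifying assumptions: fit a quadratic polynomial neuron to every pair of inputs, score each candidate on held-out data (the external criterion), and keep the best few as inputs to the next layer, so the network structure emerges from the data rather than being fixed in advance:

```python
# One selection layer of a GMDH-style polynomial network.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(10)
X = rng.normal(size=(200, 4))                        # candidate inputs
y = X[:, 0] * X[:, 1] + 0.5 * X[:, 2] + 0.05 * rng.normal(size=200)
train, valid = slice(0, 150), slice(150, 200)        # external (held-out) criterion split

def poly_features(a, b):                             # Ivakhnenko quadratic description
    return np.column_stack([np.ones_like(a), a, b, a * b, a**2, b**2])

candidates = []
for i, j in combinations(range(X.shape[1]), 2):      # one neuron per input pair
    P = poly_features(X[:, i], X[:, j])
    w, *_ = np.linalg.lstsq(P[train], y[train], rcond=None)
    mse = np.mean((P[valid] @ w - y[valid]) ** 2)    # score on held-out data
    candidates.append((mse, (i, j)))
for mse, pair in sorted(candidates)[:3]:             # survivors feed the next layer
    print(f"inputs {pair}: validation MSE = {mse:.4f}")
```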
Computational exploration of neuron and neural network models in neurobiology.
Prinz, Astrid A
2007-01-01
The electrical activity of individual neurons and neuronal networks is shaped by the complex interplay of a large number of non-linear processes, including the voltage-dependent gating of ion channels and the activation of synaptic receptors. These complex dynamics make it difficult to understand how individual neuron or network parameters-such as the number of ion channels of a given type in a neuron's membrane or the strength of a particular synapse-influence neural system function. Systematic exploration of cellular or network model parameter spaces by computational brute force can overcome this difficulty and generate comprehensive data sets that contain information about neuron or network behavior for many different combinations of parameters. Searching such data sets for parameter combinations that produce functional neuron or network output provides insights into how narrowly different neural system parameters have to be tuned to produce a desired behavior. This chapter describes the construction and analysis of databases of neuron or neuronal network models and describes some of the advantages and downsides of such exploration methods.
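A minimal sketch of such a brute-force exploration: sweep two parameters of a simple spiking model (here a FitzHugh-Nagumo neuron, as a stand-in for conductance-based models) and record which parameter combinations produce repetitive firing, yielding a small model "database" that can be searched for functional output:

```python
# Brute-force grid exploration of a two-parameter neuron model.
import numpy as np

def spikes(I, eps, T=400.0, dt=0.05):
    v, w, count, above = -1.0, 0.0, 0, False
    for _ in range(int(T / dt)):                 # Euler integration of FitzHugh-Nagumo
        v += dt * (v - v**3 / 3 - w + I)
        w += dt * eps * (v + 0.7 - 0.8 * w)
        if v > 1.0 and not above: count += 1; above = True
        if v < 0.0: above = False
    return count

grid_I = np.linspace(0.0, 1.0, 11)               # input current values
grid_eps = np.linspace(0.02, 0.2, 10)            # recovery time-scale values
db = {(I, e): spikes(I, e) for I in grid_I for e in grid_eps}   # model "database"
active = {k: v for k, v in db.items() if v > 3}  # query: repetitive firing
print(f"{len(active)}/{len(db)} parameter combinations produce repetitive firing")
```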
Evaluation and prediction of solar radiation for energy management based on neural networks
NASA Astrophysics Data System (ADS)
Aldoshina, O. V.; Van Tai, Dinh
2017-08-01
Currently, renewable energy sources and distributed power generation based on intelligent networks are spreading rapidly; therefore, meteorological forecasts are particularly useful for planning and managing the energy system in order to increase its overall efficiency and productivity. This article presents the application of artificial neural networks (ANN) in the field of photovoltaic energy. Two recurrent dynamic ANNs, a concentrated time-delay neural network (CTDNN) and a nonlinear autoregressive network with exogenous inputs (NAEI), are used to develop a model for estimating and forecasting daily solar radiation. The ANNs show good performance, as reliable and accurate models of daily solar radiation are obtained, which allows the photovoltaic output power of the installation to be predicted successfully. The potential of the proposed method for managing the energy of the electrical network is shown by applying the NAEI network to electric load prediction.
Integration of Online Parameter Identification and Neural Network for In-Flight Adaptive Control
NASA Technical Reports Server (NTRS)
Hageman, Jacob J.; Smith, Mark S.; Stachowiak, Susan
2003-01-01
An indirect adaptive system has been constructed for robust control of an aircraft with uncertain aerodynamic characteristics. This system consists of a multilayer perceptron pre-trained neural network, online stability and control derivative identification, a dynamic cell structure online learning neural network, and a model following control system based on the stochastic optimal feedforward and feedback technique. The pre-trained neural network and model following control system have been flight-tested, but the online parameter identification and online learning neural network are new additions used for in-flight adaptation of the control system model. A description of the modification and integration of these two stand-alone software packages into the complete system in preparation for initial flight tests is presented. Open-loop results using both simulation and flight data, as well as closed-loop performance of the complete system in a nonlinear, six-degree-of-freedom, flight validated simulation, are analyzed. Results show that this online learning system, in contrast to the nonlearning system, has the ability to adapt to changes in aerodynamic characteristics in a real-time, closed-loop, piloted simulation, resulting in improved flying qualities.
Testolin, Alberto; Zorzi, Marco
2016-01-01
Connectionist models can be characterized within the more general framework of probabilistic graphical models, which make it possible to efficiently describe complex statistical distributions involving a large number of interacting variables. This integration allows building more realistic computational models of cognitive functions, which more faithfully reflect the underlying neural mechanisms while at the same time providing a useful bridge to higher-level descriptions in terms of Bayesian computations. Here we discuss a powerful class of graphical models that can be implemented as stochastic, generative neural networks. These models overcome many limitations associated with classic connectionist models, for example by exploiting unsupervised learning in hierarchical architectures (deep networks) and by taking into account top-down, predictive processing supported by feedback loops. We review some recent cognitive models based on generative networks, and we point out promising research directions for investigating neuropsychological disorders within this approach. Though further efforts are required to fill the gap between structured Bayesian models and more realistic, biophysical models of neuronal dynamics, we argue that generative neural networks have the potential to bridge these levels of analysis, thereby improving our understanding of the neural bases of cognition and of pathologies caused by brain damage. PMID:27468262
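A minimal sketch of the stochastic generative building block discussed here: a Bernoulli restricted Boltzmann machine trained with one step of contrastive divergence (CD-1) on toy binary data; the data, layer sizes, and learning rate are illustrative:

```python
# Restricted Boltzmann machine trained with CD-1 on synthetic binary data.
import numpy as np

rng = np.random.default_rng(11)
data = (rng.random((500, 12)) < 0.2).astype(float)
data[:, :4] = data[:, :1]                        # correlated block for the RBM to learn

nv, nh, lr = 12, 8, 0.05
W = 0.01 * rng.normal(size=(nv, nh)); a = np.zeros(nv); b = np.zeros(nh)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for epoch in range(50):
    v0 = data
    ph0 = sigmoid(v0 @ W + b)                    # positive phase: hidden probabilities
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T + a)                  # generative reconstruction of visibles
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + b)                    # negative phase
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / len(data)
    a += lr * (v0 - v1).mean(axis=0)
    b += lr * (ph0 - ph1).mean(axis=0)
print("reconstruction error:", np.mean((data - pv1) ** 2))
```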
Multi-Temporal Land Cover Classification with Long Short-Term Memory Neural Networks
NASA Astrophysics Data System (ADS)
Rußwurm, M.; Körner, M.
2017-05-01
Land cover classification (LCC) is a central and wide field of research in earth observation and has already put forth a variety of classification techniques. Many approaches are based on classification techniques considering observation at certain points in time. However, some land cover classes, such as crops, change their spectral characteristics due to environmental influences and can thus not be monitored effectively with classical mono-temporal approaches. Nevertheless, these temporal observations should be utilized to benefit the classification process. After extensive research has been conducted on modeling temporal dynamics by spectro-temporal profiles using vegetation indices, we propose a deep learning approach to utilize these temporal characteristics for classification tasks. In this work, we show how long short-term memory (LSTM) neural networks can be employed for crop identification purposes with SENTINEL 2A observations from large study areas and label information provided by local authorities. We compare these temporal neural network models, i.e., LSTM and recurrent neural network (RNN), with a classical non-temporal convolutional neural network (CNN) model and an additional support vector machine (SVM) baseline. With our rather straightforward LSTM variant, we exceeded state-of-the-art classification performance, thus opening promising potential for further research.
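A minimal sketch of the recurrence that lets LSTMs exploit multi-temporal observations: one LSTM cell stepped over a sequence of per-date feature vectors (e.g., band values of one pixel across the growing season); the forward pass only, with random weights, as the training machinery is omitted:

```python
# One LSTM cell applied across a sequence of acquisition dates.
import numpy as np

rng = np.random.default_rng(12)
d_in, d_h = 4, 8                                 # features per date, hidden size (assumed)
Wx = 0.1 * rng.normal(size=(d_in, 4 * d_h))
Wh = 0.1 * rng.normal(size=(d_h, 4 * d_h))
bias = np.zeros(4 * d_h)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

def lstm_step(x, h, c):
    z = x @ Wx + h @ Wh + bias
    i, f, o, g = np.split(z, 4)                  # input, forget, output gates + candidate
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c = f * c + i * g                            # cell state carries the seasonal memory
    h = o * np.tanh(c)
    return h, c

sequence = rng.normal(size=(10, d_in))           # 10 acquisition dates for one pixel
h = c = np.zeros(d_h)
for x in sequence:
    h, c = lstm_step(x, h, c)
print("final hidden state (fed to a classifier head):", h.round(3))
```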
Zelić, B; Bolf, N; Vasić-Racki, D
2006-06-01
Three different models, an unstructured mechanistic black-box model, an input-output neural network-based model, and an externally recurrent neural network model, were used to describe the pyruvate production process from glucose and acetate using the genetically modified Escherichia coli YYC202 ldhA::Kan strain. The experimental data were taken from the recently described batch and fed-batch experiments [Zelić B, Study of the process development for Escherichia coli-based pyruvate production. PhD Thesis, University of Zagreb, Faculty of Chemical Engineering and Technology, Zagreb, Croatia, July 2003 (in English); Zelić et al. Bioproc Biosyst Eng 26:249-258 (2004); Zelić et al. Eng Life Sci 3:299-305 (2003); Zelić et al. Biotechnol Bioeng 85:638-646 (2004)]. The neural networks were built from the experimental data obtained in the fed-batch pyruvate production experiments with a constant glucose feed rate. Model validation was performed using the experimental results obtained from the batch and fed-batch pyruvate production experiments with a constant acetate feed rate. The dynamics of the substrate and product concentration changes were estimated using two neural network-based models for biomass and pyruvate. It was shown that neural networks can be used for modeling complex microbial fermentation processes, even under conditions in which mechanistic unstructured models cannot be applied.
Hybrid computing using a neural network with dynamic external memory.
Graves, Alex; Wayne, Greg; Reynolds, Malcolm; Harley, Tim; Danihelka, Ivo; Grabska-Barwińska, Agnieszka; Colmenarejo, Sergio Gómez; Grefenstette, Edward; Ramalho, Tiago; Agapiou, John; Badia, Adrià Puigdomènech; Hermann, Karl Moritz; Zwols, Yori; Ostrovski, Georg; Cain, Adam; King, Helen; Summerfield, Christopher; Blunsom, Phil; Kavukcuoglu, Koray; Hassabis, Demis
2016-10-27
Artificial neural networks are remarkably adept at sensory processing, sequence learning and reinforcement learning, but are limited in their ability to represent variables and data structures and to store data over long timescales, owing to the lack of an external memory. Here we introduce a machine learning model called a differentiable neural computer (DNC), which consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer. Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data. When trained with supervised learning, we demonstrate that a DNC can successfully answer synthetic questions designed to emulate reasoning and inference problems in natural language. We show that it can learn tasks such as finding the shortest path between specified points and inferring the missing links in randomly generated graphs, and then generalize these tasks to specific graphs such as transport networks and family trees. When trained with reinforcement learning, a DNC can complete a moving blocks puzzle in which changing goals are specified by sequences of symbols. Taken together, our results demonstrate that DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read-write memory.
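The DNC's core mechanism is differentiable, content-based addressing of an external memory matrix. A toy sketch of that idea (ours, not DeepMind's implementation; sizes and the sharpening parameter are arbitrary):

```python
# Sketch: content-based read and gated write on an external memory matrix.
import numpy as np

def content_weights(M, key, beta=5.0):
    # Cosine similarity between the key and every memory row, sharpened by
    # beta and softmax-normalized into addressing weights.
    sim = (M @ key) / (np.linalg.norm(M, axis=1) * np.linalg.norm(key) + 1e-8)
    e = np.exp(beta * sim)
    return e / e.sum()

M = np.random.randn(16, 8)            # 16 memory slots of width 8
key = np.random.randn(8)              # would be emitted by the controller

w = content_weights(M, key)
read_vector = w @ M                   # differentiable weighted read

erase, write = np.full(8, 0.5), np.random.randn(8)
M = M * (1 - np.outer(w, erase)) + np.outer(w, write)   # gated write
```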
Bianconi, André; Zuben, Cláudio J. Von; Serapião, Adriane B. de S.; Govone, José S.
2010-01-01
Bionomic features of blowflies may be clarified and detailed by the deployment of appropriate modelling techniques such as artificial neural networks, which are mathematical tools widely applied to the resolution of complex biological problems. The principal aim of this work was to use three well-known neural networks, namely the Multi-Layer Perceptron (MLP), the Radial Basis Function (RBF) network, and the Adaptive Neural Network-Based Fuzzy Inference System (ANFIS), to ascertain whether these tools could outperform a classical statistical method (multiple linear regression) in predicting the number of resultant adults (survivors) of experimental populations of Chrysomya megacephala (F.) (Diptera: Calliphoridae), based on initial larval density (number of larvae), amount of available food, and duration of the immature stages. The coefficient of determination (R2) of the RBF network was the lowest of the three neural networks on the testing subset, even though its R2 on the training subset was virtually maximal. The ANFIS model achieved the best testing performance and was therefore deemed more effective than the MLP and RBF networks for predicting the number of survivors. All three networks outperformed multiple linear regression, indicating that neural models are feasible techniques for predicting bionomic variables concerning the nutritional dynamics of blowflies. PMID:20569135
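The shape of this comparison is easy to reproduce on synthetic data. A minimal sketch (ours; the data-generating function and model settings are invented for illustration) contrasting an MLP with multiple linear regression on a nonlinear survivor-count relationship:

```python
# Sketch: MLP vs multiple linear regression on a nonlinear response.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.uniform(size=(300, 3))        # larval density, food, duration (scaled)
y = 50 * np.exp(-3 * X[:, 0]) * X[:, 1] + rng.normal(0, 1, 300)  # nonlinear

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (LinearRegression(),
              MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, r2_score(y_te, model.predict(X_te)))
```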
Structurally Dynamic Spin Market Networks
NASA Astrophysics Data System (ADS)
Horváth, Denis; Kuscsik, Zoltán
An agent-based model of stock price dynamics on a directed evolving complex network is proposed and studied by direct simulation. The stationary regime is maintained as a result of the balance between the extremal dynamics, the adaptivity of strategic variables, and the reconnection rules. The inherent structure of a node agent's "brain" is modeled by a recursive neural network with local and global inputs and feedback connections. For specific parameter combinations the complex network displays the small-world phenomenon combined with scale-free behavior. The identification of a local leader (a network hub, i.e., an agent whose strategies are frequently adopted by its neighbors) is carried out by a repeated random-walk process through the network. The simulations show empirically relevant dynamics of price returns and volatility clustering. An additional emergent aspect of the stylized market statistics is the Zipfian distribution of fitness.
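The random-walk hub identification can be sketched generically (ours; the graph, walk length, and fallback rule for dead-end nodes are assumed, not the paper's specification):

```python
# Sketch: identify the most-visited node of a directed network by random walk.
import random
import networkx as nx

G = nx.gnp_random_graph(50, 0.1, seed=3, directed=True)
visits = {n: 0 for n in G}
node = 0
for _ in range(10000):
    succ = list(G.successors(node))
    node = random.choice(succ) if succ else random.choice(list(G))  # restart at dead ends
    visits[node] += 1

leader = max(visits, key=visits.get)   # network hub / "local leader"
```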
Gigante, Guido; Deco, Gustavo; Marom, Shimon; Del Giudice, Paolo
2015-01-01
Cortical networks, in-vitro as well as in-vivo, can spontaneously generate a variety of collective dynamical events such as network spikes, UP and DOWN states, global oscillations, and avalanches. Though each of them has been variously recognized in previous works as expression of the excitability of the cortical tissue and the associated nonlinear dynamics, a unified picture of the determinant factors (dynamical and architectural) is desirable and not yet available. Progress has also been partially hindered by the use of a variety of statistical measures to define the network events of interest. We propose here a common probabilistic definition of network events that, applied to the firing activity of cultured neural networks, highlights the co-occurrence of network spikes, power-law distributed avalanches, and exponentially distributed ‘quasi-orbits’, which offer a third type of collective behavior. A rate model, including synaptic excitation and inhibition with no imposed topology, synaptic short-term depression, and finite-size noise, accounts for all these different, coexisting phenomena. We find that their emergence is largely regulated by the proximity to an oscillatory instability of the dynamics, where the non-linear excitable behavior leads to a self-amplification of activity fluctuations over a wide range of scales in space and time. In this sense, the cultured network dynamics is compatible with an excitation-inhibition balance corresponding to a slightly sub-critical regime. Finally, we propose and test a method to infer the characteristic time of the fatigue process, from the observed time course of the network’s firing rate. Unlike the model, possessing a single fatigue mechanism, the cultured network appears to show multiple time scales, signalling the possible coexistence of different fatigue mechanisms. PMID:26558616
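A drastically simplified version of such a rate model (ours; parameters are arbitrary, not the paper's) couples a population rate to short-term synaptic depression and adds finite-size noise; near the oscillatory instability it produces large excursions resembling network spikes:

```python
# Sketch: rate model with synaptic depression and finite-size noise.
import numpy as np

dt, T, N = 1e-3, 30.0, 1000           # time step (s), duration (s), network size
tau_r, tau_d, U = 0.02, 0.5, 0.2      # rate / depression time constants, release
w, I0 = 8.0, 0.3                      # recurrent coupling, external drive
phi = lambda u: np.maximum(u, 0.0)    # threshold-linear transfer function

rng = np.random.default_rng(0)
r, x, rates = 1.0, 1.0, []
for _ in range(int(T / dt)):
    noise = rng.normal(0, np.sqrt(max(r, 0.0) / (N * dt)))   # ~1/sqrt(N) fluctuations
    r += dt / tau_r * (-r + phi(w * U * x * r + I0)) + dt * noise
    r = max(r, 0.0)
    x += dt * ((1 - x) / tau_d - U * x * r)                  # synaptic depression
    rates.append(r)
```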
Diminished neural network dynamics in amnestic mild cognitive impairment.
Brenner, Einat K; Hampstead, Benjamin M; Grossner, Emily C; Bernier, Rachel A; Gilbert, Nicholas; Sathian, K; Hillary, Frank G
2018-05-05
Mild cognitive impairment (MCI) is widely regarded as an intermediate stage between typical aging and dementia, with nearly 50% of patients with amnestic MCI (aMCI) converting to Alzheimer's dementia (AD) within 30 months of follow-up (Fischer et al., 2007). The growing literature using resting-state functional magnetic resonance imaging reveals both increased and decreased connectivity in individuals with MCI and connectivity loss between the anterior and posterior components of the default mode network (DMN) throughout the course of the disease progression (Hillary et al., 2015; Sheline & Raichle, 2013; Tijms et al., 2013). In this paper, we use dynamic connectivity modeling and graph theory to identify unique brain "states," or temporal patterns of connectivity across distributed networks, to distinguish individuals with aMCI from healthy older adults (HOAs). We enrolled 44 individuals diagnosed with aMCI and 33 HOAs of comparable age and education. Our results indicated that individuals with aMCI spent significantly more time in one state in particular, whereas neural network analysis in the HOA sample revealed approximately equivalent representation across four distinct states. Among individuals with aMCI, spending a higher proportion of time in the dominant state relative to a state where participants exhibited high cost (a measure combining connectivity and distance), predicted better language performance and less perseveration. This is the first report to examine neural network dynamics in individuals with aMCI. Copyright © 2018 Elsevier B.V. All rights reserved.
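The pipeline behind such "state" analyses is typically sliding-window connectivity followed by clustering. A schematic sketch (ours; window length, step, region count, and four states are assumed for illustration, not the study's parameters):

```python
# Sketch: dynamic connectivity states via sliding windows + k-means.
import numpy as np
from sklearn.cluster import KMeans

ts = np.random.randn(600, 90)          # stand-in: 600 volumes x 90 regions
win, step, windows = 60, 5, []
for start in range(0, ts.shape[0] - win, step):
    C = np.corrcoef(ts[start:start + win].T)     # 90 x 90 connectivity matrix
    windows.append(C[np.triu_indices(90, k=1)])  # vectorize upper triangle

states = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(windows)
dwell = np.bincount(states) / len(states)        # proportion of time per state
```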
NASA Astrophysics Data System (ADS)
Moon, Byung-Young
2005-12-01
A hybrid neural-genetic multi-model parameter estimation algorithm is demonstrated. The method can be applied to structured system identification of electro-hydraulic servo systems. The algorithm consists of a recurrent incremental credit assignment (ICRA) neural network and a genetic algorithm: the ICRA neural network evaluates each member of a generation of models, and the genetic algorithm produces the next generation of models. To evaluate the proposed method, an electro-hydraulic servo system was designed and manufactured, and experiments were carried out to assess the hybrid neural-genetic multi-model parameter estimation algorithm. As a result, the dynamic characteristics were obtained, namely the parameters (mass, damping coefficient, bulk modulus, spring coefficient) that minimize the total squared error. The results of this study can be applied to hydraulic systems in industrial fields.
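The genetic half of such a scheme can be sketched generically (ours; a toy second-order plant stands in for the servo dynamics, and GA settings are arbitrary): evolve candidate parameters to minimize the squared error between measured and simulated responses.

```python
# Sketch: GA parameter estimation against a measured response.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 2, 200)

def simulate(m, c, k):
    # Toy damped oscillatory step response (stand-in for the servo model).
    wn, zeta = np.sqrt(k / m), c / (2 * np.sqrt(k * m))
    wd = wn * np.sqrt(max(1 - zeta**2, 1e-9))
    return (1 - np.exp(-zeta * wn * t) * np.cos(wd * t)) / k

measured = simulate(2.0, 1.5, 40.0)     # "experimental" data, true parameters

pop = rng.uniform([0.1, 0.1, 1], [5, 5, 100], size=(50, 3))
for gen in range(100):
    err = np.array([np.sum((simulate(*p) - measured) ** 2) for p in pop])
    parents = pop[np.argsort(err)[:10]]                              # selection
    children = parents[rng.integers(0, 10, (40, 3)), np.arange(3)]   # crossover
    pop = np.vstack([parents, children + rng.normal(0, 0.05, (40, 3))])
    pop = np.clip(pop, 0.05, None)                                   # keep physical

best = pop[np.argmin([np.sum((simulate(*p) - measured) ** 2) for p in pop])]
```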
Terai, Asuka; Nakagawa, Masanori
2007-08-01
The purpose of this paper is to construct a model that represents the human process of understanding metaphors, focusing specifically on similes of the form "an A like B". Generally speaking, human beings are able to generate and understand many sorts of metaphors. This study constructs the model on the basis of a probabilistic knowledge structure for concepts, computed from a statistical analysis of a large-scale corpus; consequently, the model is able to cover the many kinds of metaphors that human beings can generate. Moreover, the model implements the dynamic process of metaphor understanding by using a neural network with dynamic interactions. Finally, the validity of the model is confirmed by comparing model simulations with the results of a psychological experiment.
Distributed formation control of nonholonomic autonomous vehicle via RBF neural network
NASA Astrophysics Data System (ADS)
Yang, Shichun; Cao, Yaoguang; Peng, Zhaoxia; Wen, Guoguang; Guo, Konghui
2017-03-01
In this paper, an RBF neural network consensus-based distributed control scheme is proposed for steering nonholonomic autonomous vehicles into a pre-defined formation along a specified reference trajectory. A variable transformation is first designed to convert the formation control problem into a state consensus problem. The complete dynamics of the vehicles, including inertia, Coriolis forces, a friction model, and unmodeled bounded disturbances, are then considered; these render the formation unstable when the distributed controllers are designed from the kinematics alone, so RBF neural network torque controllers are derived to compensate for them. Sufficient conditions for the asymptotic stability of the systems are derived based on algebraic graph theory, matrix theory, and Lyapunov theory. Finally, simulation examples illustrate the effectiveness of the proposed controllers.
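The first step, turning formation keeping into consensus, can be sketched in a few lines (ours; the communication graph, gains, and formation offsets are hypothetical, and the vehicle dynamics are reduced to single integrators):

```python
# Sketch: formation control as consensus on offset-shifted states.
import numpy as np

A = np.array([[0, 1, 0],               # adjacency of the communication graph
              [1, 0, 1],
              [0, 1, 0]], float)
offsets = np.array([[0, 0], [2, 0], [4, 0]], float)   # desired formation shape
pos = np.random.rand(3, 2) * 5
k, dt = 1.0, 0.02

for _ in range(500):
    z = pos - offsets                        # consensus variables
    u = -k * (np.diag(A.sum(1)) - A) @ z     # graph-Laplacian consensus law
    pos += dt * u                            # all z_i converge to a common value,
                                             # so vehicles settle into formation
```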
Hu, Jin; Wang, Jun
2015-06-01
In recent years, complex-valued recurrent neural networks have been developed and analysed in depth, given their good modelling performance for applications involving complex-valued elements. When implementing continuous-time dynamical systems for simulation or computational purposes, it is necessary to use a discrete-time model that is an analogue of the continuous-time system. In this paper, we analyse a discrete-time complex-valued recurrent neural network model and obtain sufficient conditions for its global exponential periodicity and exponential stability. Simulation results for several numerical examples are presented to illustrate the theoretical results, and an application to associative memory is also given. Copyright © 2015 Elsevier Ltd. All rights reserved.
Terminal attractors in neural networks
NASA Technical Reports Server (NTRS)
Zak, Michail
1989-01-01
A new type of attractor (terminal attractors) for content-addressable memory, associative memory, and pattern recognition in artificial neural networks operating in continuous time is introduced. The idea of a terminal attractor is based upon a violation of the Lipschitz condition at a fixed point. As a result, the fixed point becomes a singular solution which envelopes the family of regular solutions, while each regular solution approaches such an attractor in finite time. It will be shown that terminal attractors can be incorporated into neural networks such that any desired set of these attractors with prescribed basins is provided by an appropriate selection of the synaptic weights. The applications of terminal attractors for content-addressable and associative memories, pattern recognition, self-organization, and for dynamical training are illustrated.
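The finite-time convergence that distinguishes a terminal attractor is easy to verify numerically. A small sketch (ours, not the paper's) integrates the non-Lipschitz dynamics dx/dt = -x^(1/3), which reach x = 0 at the finite time t = (3/2)·x0^(2/3), against the ordinary exponential decay dx/dt = -x, which never arrives:

```python
# Sketch: finite-time convergence of a terminal attractor vs exponential decay.
import numpy as np

dt = 1e-4
x_term, x_reg, t = 1.0, 1.0, 0.0
while x_term > 0.0:
    x_term = max(x_term - dt * abs(x_term) ** (1 / 3), 0.0)  # dx/dt = -x**(1/3)
    x_reg -= dt * x_reg                                      # dx/dt = -x
    t += dt

# Theory: x0 = 1 hits zero at t = 1.5, while x_reg = exp(-1.5) ~ 0.223.
print(f"terminal attractor reached 0 at t = {t:.3f}; regular x = {x_reg:.3f}")
```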
Aeroelasticity of morphing wings using neural networks
NASA Astrophysics Data System (ADS)
Natarajan, Anand
In this dissertation, neural networks are designed to effectively model static non-linear aeroelastic problems in adaptive structures and linear dynamic aeroelastic systems with time-varying stiffness. The use of adaptive materials in aircraft wings allows the contour or configuration of a wing to change (morph) in flight. The use of smart materials to accomplish these deformations can imply that the stiffness of a wing with a morphing contour changes as the contour changes. For a rapidly oscillating body in a fluid field, continuously adapting structural parameters may cause the wing to behave as a time-variant system. Even the internal spars and ribs of the aircraft wing, which define the wing stiffness, can be made adaptive, that is, their stiffness can be made to vary with time. The immediate effect on the structural dynamics of the wing is that the wing motion is governed by a differential equation with time-varying coefficients. This concept of a time-varying torsional stiffness, made possible by the use of active materials and adaptive spars, is studied here in the context of the dynamic aeroelastic behavior of an adaptable airfoil. Another type of aeroelastic problem of an adaptive structure investigated here is the shape control of an adaptive bump situated on the leading edge of an airfoil. Such a bump is useful in achieving flow-separation control for lateral-directional maneuverability of the aircraft. Since actuators are used to create this bump on the wing surface, the energy required to do so needs to be minimized, and the adverse pressure drag resulting from the bump needs to be controlled so that the loss in lift over the wing is minimal. The design of such a "spoiler bump" on the surface of the airfoil is an optimization problem of maximizing the pressure drag due to flow separation while minimizing the loss in lift and the energy required to deform the bump. One neural network is trained using the CFD code FLUENT to represent the aerodynamic loading over the bump. A second neural network is trained to calculate the actuator loads, bump displacement, and lift and drag forces over the airfoil, using the finite element solver ANSYS together with the previously trained neural network. This non-linear aeroelastic model of the deforming bump on an airfoil surface using neural networks can serve as a forerunner for other non-linear aeroelastic problems.
A stochastic-field description of finite-size spiking neural networks
Longtin, André
2017-01-01
Neural network dynamics are governed by the interaction of spiking neurons. Stochastic aspects of single-neuron dynamics propagate up to the network level and shape the dynamical and informational properties of the population. Mean-field models of population activity disregard the finite-size stochastic fluctuations of network dynamics and thus offer a deterministic description of the system. Here, we derive a stochastic partial differential equation (SPDE) describing the temporal evolution of the finite-size refractory density, which represents the proportion of neurons in a given refractory state at any given time. The population activity—the density of active neurons per unit time—is easily extracted from this refractory density. The SPDE includes finite-size effects through a two-dimensional Gaussian white noise that acts both in time and along the refractory dimension. For an infinite number of neurons the standard mean-field theory is recovered. A discretization of the SPDE along its characteristic curves allows direct simulations of the activity of large but finite spiking networks; this constitutes the main advantage of our approach. Linearizing the SPDE with respect to the deterministic asynchronous state allows the theoretical investigation of finite-size activity fluctuations. In particular, analytical expressions for the power spectrum and autocorrelation of activity fluctuations are obtained. Moreover, our approach can be adapted to incorporate multiple interacting populations and quasi-renewal single-neuron dynamics. PMID:28787447
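The finite-size fluctuations around a mean-field rate can be illustrated with a much cruder model than the paper's SPDE (ours; a homogeneous population with a constant hazard and an absolute refractory period, arbitrary parameters):

```python
# Sketch: finite-N population activity fluctuating around the mean-field rate.
import numpy as np

rng = np.random.default_rng(0)
N, dt, steps = 500, 1e-3, 2000
t_ref, hazard = 5e-3, 40.0                # refractory period (s), hazard (1/s)
last_spike = np.full(N, -np.inf)
activity = np.zeros(steps)

for i in range(steps):
    t = i * dt
    ready = (t - last_spike) > t_ref      # neurons out of refractoriness
    fires = ready & (rng.random(N) < hazard * dt)
    last_spike[fires] = t
    activity[i] = fires.sum() / (N * dt)  # population activity (spikes/s)

# Mean field: A = hazard / (1 + hazard * t_ref); fluctuations shrink ~1/sqrt(N).
print(activity.mean(), hazard / (1 + hazard * t_ref))
```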
Neural Networks and other Techniques for Fault Identification and Isolation of Aircraft Systems
NASA Technical Reports Server (NTRS)
Innocenti, M.; Napolitano, M.
2003-01-01
Fault identification, isolation, and accommodation have become critical issues in the overall performance of advanced aircraft systems. Neural networks have been shown to be a very attractive alternative to classic adaptation methods for identification and control of non-linear dynamic systems. The purpose of this paper is to show the improvements in neural network applications achievable through the use of learning algorithms more efficient than the classic back-propagation and through the implementation of the neural schemes in parallel hardware. The results of the analysis of a scheme for Sensor Failure Detection, Identification and Accommodation (SFDIA) using experimental flight data of a research aircraft model are presented. Conventional approaches to the problem are based on observers and Kalman filters, while more recent methods are based on neural approximators. The work described in this paper is based on the use of neural networks (NNs) as on-line learning non-linear approximators. The performances of two different neural architectures are compared. The first architecture is based on a Multi-Layer Perceptron (MLP) NN trained with the Extended Back-Propagation algorithm (EBPA). The second architecture is based on a Radial Basis Function (RBF) NN trained with the Extended-MRAN (EMRAN) algorithms. In addition, alternative methods for communication-link fault detection and accommodation are presented, relevant to multiple unmanned aircraft applications.
Large memory capacity in chaotic artificial neural networks: a view of the anti-integrable limit.
Lin, Wei; Chen, Guanrong
2009-08-01
In the literature, it was reported that the chaotic artificial neural network model with sinusoidal activation functions possesses a large memory capacity as well as a remarkable ability of retrieving the stored patterns, better than the conventional chaotic model with only monotonic activation functions such as sigmoidal functions. This paper, from the viewpoint of the anti-integrable limit, elucidates the mechanism inducing the superiority of the model with periodic activation functions that includes sinusoidal functions. Particularly, by virtue of the anti-integrable limit technique, this paper shows that any finite-dimensional neural network model with periodic activation functions and properly selected parameters has much more abundant chaotic dynamics that truly determine the model's memory capacity and pattern-retrieval ability. To some extent, this paper mathematically and numerically demonstrates that an appropriate choice of the activation functions and control scheme can lead to a large memory capacity and better pattern-retrieval ability of the artificial neural network models.
Emergent Auditory Feature Tuning in a Real-Time Neuromorphic VLSI System
Sheik, Sadique; Coath, Martin; Indiveri, Giacomo; Denham, Susan L.; Wennekers, Thomas; Chicca, Elisabetta
2011-01-01
Many sounds of ecological importance, such as communication calls, are characterized by time-varying spectra. However, most neuromorphic auditory models to date have focused on distinguishing mainly static patterns, under the assumption that dynamic patterns can be learned as sequences of static ones. In contrast, the emergence of dynamic feature sensitivity through exposure to formative stimuli has been recently modeled in a network of spiking neurons based on the thalamo-cortical architecture. The proposed network models the effect of lateral and recurrent connections between cortical layers, distance-dependent axonal transmission delays, and learning in the form of Spike Timing Dependent Plasticity (STDP), which effects stimulus-driven changes in the pattern of network connectivity. In this paper we demonstrate how these principles can be efficiently implemented in neuromorphic hardware. In doing so we address two principle problems in the design of neuromorphic systems: real-time event-based asynchronous communication in multi-chip systems, and the realization in hybrid analog/digital VLSI technology of neural computational principles that we propose underlie plasticity in neural processing of dynamic stimuli. The result is a hardware neural network that learns in real-time and shows preferential responses, after exposure, to stimuli exhibiting particular spectro-temporal patterns. The availability of hardware on which the model can be implemented, makes this a significant step toward the development of adaptive, neurobiologically plausible, spike-based, artificial sensory systems. PMID:22347163
Connectomics and graph theory analyses: Novel insights into network abnormalities in epilepsy.
Gleichgerrcht, Ezequiel; Kocher, Madison; Bonilha, Leonardo
2015-11-01
The assessment of neural networks in epilepsy has become increasingly relevant in the context of translational research, given that localized forms of epilepsy are more likely to be related to abnormal function within specific brain networks, as opposed to isolated focal brain pathology. Notably, variability in clinical outcomes from epilepsy treatment may reflect individual patterns of network abnormalities; as such, network endophenotypes may be important biomarkers for the diagnosis and treatment of epilepsy. Despite its exceptional potential, the measurement of abnormal networks in translational research has thus far been constrained by methodologic limitations. Fortunately, recent advancements in neuroscience, particularly in the field of connectomics, permit a detailed assessment of network organization, dynamics, and function at the individual level. Data from the personal connectome can be assessed using principled forms of network analysis based on graph theory, which may disclose patterns of organization that are prone to abnormal dynamics and epileptogenesis. Although the field of connectomics is relatively new, there is already a rapidly growing body of evidence suggesting that it can elucidate several important and fundamental aspects of abnormal networks related to epilepsy. In this article, we review the emerging evidence from connectomics research regarding neural network architecture, dynamics, and function in epilepsy. We discuss how connectomics may bring together pathophysiologic hypotheses from conceptual and basic models of epilepsy and in vivo biomarkers for clinical translational research. By providing neural network information unique to each individual, connectomics may help to elucidate variability in clinical outcomes and open opportunities for personalized-medicine approaches to epilepsy. Connectomics involves complex and rich data from each subject, so collaborative efforts to enable the systematic and rigorous evaluation of this form of "big data" are paramount to leverage the full potential of the approach. Wiley Periodicals, Inc. © 2015 International League Against Epilepsy.
Chen, C P; Wan, J Z
1999-01-01
A fast learning algorithm is proposed to find the optimal weights of flat neural networks (in particular, the functional-link network). Although flat networks are used for nonlinear function approximation, they can be formulated as linear systems, so the weights of the network can be solved easily using a linear least-squares method. This formulation makes it easy to update the weights instantly for both a newly added pattern and a newly added enhancement node. A dynamic stepwise updating algorithm is proposed to update the weights of the system on the fly. The model is tested on several time-series data sets, including an infrared laser data set, a chaotic time series, a monthly flour price data set, and a nonlinear system identification problem. The simulation results are compared to existing models that require more complex architectures and more costly training. The results indicate that the proposed model is very attractive for real-time processes.
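The "flat network as a linear system" idea is worth making concrete. In the sketch below (ours; the expansion functions and target are a toy choice, not the paper's exact enhancement nodes), the input is expanded with fixed nonlinear features and the output weights come from a single linear least-squares solve:

```python
# Sketch: functional-link network solved by linear least squares.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, (200, 1))
y = np.sin(3 * x) + 0.05 * rng.standard_normal((200, 1))   # target function

def expand(x):
    # Functional-link expansion: input plus fixed nonlinear features.
    return np.hstack([x, np.sin(np.pi * x), np.cos(np.pi * x), x**2, x**3])

H = expand(x)
W, *_ = np.linalg.lstsq(H, y, rcond=None)       # one-shot linear solve

# Adding a new pattern (or a new enhancement node) only augments this linear
# system, which is what the stepwise updating algorithm exploits.
y_hat = expand(np.array([[0.3]])) @ W
```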
Using Fuzzy Logic for Performance Evaluation in Reinforcement Learning
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.; Khedkar, Pratap S.
1992-01-01
Current reinforcement learning algorithms require long training periods, which generally limits their applicability to small-size problems. A new architecture is described which uses fuzzy rules to initialize its two neural networks: one for performance evaluation and another for action selection. The architecture is applied to the control of dynamic systems, and it is demonstrated that it is possible to start with approximate prior knowledge and learn to refine it through experiments using reinforcement learning.
Neural network representation and learning of mappings and their derivatives
NASA Technical Reports Server (NTRS)
White, Halbert; Hornik, Kurt; Stinchcombe, Maxwell; Gallant, A. Ronald
1991-01-01
Discussed here are recent theorems proving that artificial neural networks are capable of approximating an arbitrary mapping and its derivatives as accurately as desired. This fact forms the basis for further results establishing the learnability of the desired approximations, using results from non-parametric statistics. These results have potential applications in robotics, chaotic dynamics, control, and sensitivity analysis. An example involving learning the transfer function and its derivatives for a chaotic map is discussed.
Stromatias, Evangelos; Soto, Miguel; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabé
2017-01-01
This paper introduces a novel methodology for training an event-driven classifier within a Spiking Neural Network (SNN) system that yields good classification results on both synthetic input data and real data captured with Dynamic Vision Sensor (DVS) chips. The proposed supervised method uses the spiking activity provided by an arbitrary topology of prior SNN layers to build histograms and train the classifier in the frame domain using the stochastic gradient descent algorithm. In addition, the approach can cope with leaky integrate-and-fire neuron models within the SNN, a desirable feature for real-world SNN applications, where neural activation must fade away after some time in the absence of inputs. Consequently, this way of building histograms captures the dynamics of the spikes arriving immediately before the classifier. We tested our method on the MNIST data set using different synthetic encodings and on real DVS sensory data sets such as N-MNIST, MNIST-DVS, and Poker-DVS, using the same network topology and feature maps. We demonstrate the effectiveness of our approach by achieving the highest classification accuracy reported to date on the N-MNIST (97.77%) and Poker-DVS (100%) real DVS data sets with a spiking convolutional network. Moreover, using the proposed method we were able to retrain the output layer of a previously reported spiking neural network and increase its performance by 2%, suggesting that the proposed classifier can serve as the output layer in works where features are extracted using unsupervised spike-based learning methods. We also analyze SNN performance figures such as total event activity and network latencies, which are relevant for eventual hardware implementations. In summary, the paper combines unsupervised-trained SNN layers with a supervised-trained SNN classifier and applies them to heterogeneous sets of benchmarks, both synthetic and from real DVS chips.
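The histogram trick is simple to illustrate (ours; the spike data, neuron count, and two-class setup are invented, and a generic SGD-trained linear classifier stands in for the paper's output layer): accumulate per-neuron spike counts over a presentation window, then train an ordinary classifier on those frame-domain histograms.

```python
# Sketch: frame-domain training on spike-count histograms.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def histogram(events, n_neurons=64):
    # events: indices of neurons that spiked (output of prior SNN layers)
    # during one stimulus presentation.
    return np.bincount(events, minlength=n_neurons)

X, y = [], []
for label in (0, 1):                       # class-dependent firing profiles
    rates = rng.random(64) * (1 + label)
    for _ in range(100):
        counts = rng.poisson(rates)
        events = np.repeat(np.arange(64), counts)
        X.append(histogram(events))
        y.append(label)

clf = SGDClassifier(loss="log_loss").fit(np.array(X), y)   # SGD in frame domain
```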
Yoo, Sung Jin; Park, Jin Bae; Choi, Yoon Ho
2006-12-01
A new method is proposed for the robust control of flexible-joint (FJ) robots with model uncertainties in both the robot dynamics and the actuator dynamics. The proposed control system combines the adaptive dynamic surface control (DSC) technique with self-recurrent wavelet neural networks (SRWNNs). The adaptive DSC technique provides the ability to overcome the "explosion of complexity" problem of backstepping controllers. The SRWNNs are used to estimate the arbitrary model uncertainties of FJ robots, and all their weights are trained online. Their adaptation laws are derived from a Lyapunov stability analysis, and the uniform ultimate boundedness of all signals in the closed-loop adaptive system is proved. Finally, simulation results for a three-link FJ robot validate the good position-tracking performance and the robustness against payload uncertainties and external disturbances of the proposed control system.
Casey, M
1996-08-15
Recurrent neural networks (RNNs) can learn to perform finite state computations. It is shown that an RNN performing a finite state computation must organize its state space to mimic the states in the minimal deterministic finite state machine that can perform that computation, and a precise description of the attractor structure of such systems is given. This knowledge effectively predicts activation space dynamics, which allows one to understand RNN computation dynamics in spite of complexity in activation dynamics. This theory provides a theoretical framework for understanding finite state machine (FSM) extraction techniques and can be used to improve training methods for RNNs performing FSM computations. This provides an example of a successful approach to understanding a general class of complex systems that has not been explicitly designed, e.g., systems that have evolved or learned their internal structure.
Cluster analysis of word frequency dynamics
NASA Astrophysics Data System (ADS)
Maslennikova, Yu S.; Bochkarev, V. V.; Belashova, I. A.
2015-01-01
This paper describes the analysis and modelling of word usage frequency time series. In a previous study, an assumption was put forward that all word usage frequencies have uniform dynamics approaching the shape of a Gaussian function. This assumption can be checked using the frequency dictionaries of the Google Books Ngram database, which covers 5.2 million books published between 1500 and 2008; the corpus contains over 500 billion words in American English, British English, French, German, Spanish, Russian, Hebrew, and Chinese. We clustered the time series of word usage frequencies using a Kohonen neural network, estimating the similarity between input vectors with several algorithms. As a result of the neural network training procedure, more than ten distinct forms of time series were found, describing the dynamics of word usage frequencies from the birth to the death of individual words. Different groups of word forms were found to have different dynamics of word usage frequency variation.
Cichy, Radoslaw Martin; Khosla, Aditya; Pantazis, Dimitrios; Oliva, Aude
2017-01-01
Human scene recognition is a rapid multistep process evolving over time from single scene image to spatial layout processing. We used multivariate pattern analyses on magnetoencephalography (MEG) data to unravel the time course of this cortical process. Following an early signal for lower-level visual analysis of single scenes at ~100 ms, we found a marker of real-world scene size, i.e. spatial layout processing, at ~250 ms indexing neural representations robust to changes in unrelated scene properties and viewing conditions. For a quantitative model of how scene size representations may arise in the brain, we compared MEG data to a deep neural network model trained on scene classification. Representations of scene size emerged intrinsically in the model, and resolved emerging neural scene size representation. Together our data provide a first description of an electrophysiological signal for layout processing in humans, and suggest that deep neural networks are a promising framework to investigate how spatial layout representations emerge in the human brain. PMID:27039703
Multiscale Quantum Mechanics/Molecular Mechanics Simulations with Neural Networks.
Shen, Lin; Wu, Jingheng; Yang, Weitao
2016-10-11
Molecular dynamics simulation with multiscale quantum mechanics/molecular mechanics (QM/MM) methods is a very powerful tool for understanding the mechanisms of chemical and biological processes in solution or in enzymes. However, its computational cost can be too high for many biochemical systems because of the large number of ab initio QM calculations. Semiempirical QM/MM simulations are much more efficient, and their accuracy can be improved with a correction to reach the ab initio QM/MM level; the efficiency is then determined by the computational cost of the ab initio calculations required for the correction. In this paper we developed a neural network method for QM/MM calculations as an extension of the neural-network representation reported by Behler and Parrinello. With this approach, the potential energy of any configuration along the reaction path for a given QM/MM system can be predicted at the ab initio QM/MM level based on semiempirical QM/MM simulations. We further applied this method to three reactions in water to calculate the free energy changes. The free-energy profile obtained from the semiempirical QM/MM simulation is corrected to the ab initio QM/MM level using the potential energies predicted with the constructed neural network. The results are in excellent accordance with reference data obtained from ab initio QM/MM molecular dynamics simulation or corrected with direct ab initio QM/MM potential energies. Compared with the correction using direct ab initio QM/MM potential energies, our method shows a speed-up of one to two orders of magnitude. This demonstrates that the neural network method combined with semiempirical QM/MM calculation can be an efficient and reliable strategy for chemical reaction simulations.
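The correction is a form of delta learning: the network fits only the difference between the expensive and cheap levels of theory. A schematic sketch (ours; the descriptors and energies are toy stand-ins, and a generic MLP replaces the Behler-Parrinello architecture):

```python
# Sketch: delta-learning correction from semiempirical to ab initio energies.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
descriptors = rng.random((500, 20))            # e.g. per-configuration features
E_semi = descriptors.sum(axis=1)               # stand-in semiempirical energies
E_ab = E_semi + np.sin(descriptors[:, 0] * 6)  # stand-in ab initio energies

delta_net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000)
delta_net.fit(descriptors, E_ab - E_semi)      # learn only the correction

E_corrected = E_semi + delta_net.predict(descriptors)   # ~ ab initio level
```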
Using big data to map the network organization of the brain
Swain, James E.; Sripada, Chandra; Swain, John D.
2015-01-01
The past few years have shown a major rise in network analysis of “big data” sets in the social sciences, revealing non-obvious patterns of organization and dynamic principles. We speculate that the dependency dimension – individuality versus sociality – might offer important insights into the dynamics of neurons and neuronal ensembles. Connectomic neural analyses, informed by social network theory, may be helpful in understanding underlying fundamental principles of brain organization. PMID:24572243