Extracting functionally feedforward networks from a population of spiking neurons
Vincent, Kathleen; Tauskela, Joseph S.; Thivierge, Jean-Philippe
2012-01-01
Neuronal avalanches are a ubiquitous form of activity characterized by spontaneous bursts whose size distribution follows a power-law. Recent theoretical models have replicated power-law avalanches by assuming the presence of functionally feedforward connections (FFCs) in the underlying dynamics of the system. Accordingly, avalanches are generated by a feedforward chain of activation that persists despite being embedded in a larger, massively recurrent circuit. However, it is unclear to what extent networks of living neurons that exhibit power-law avalanches rely on FFCs. Here, we employed a computational approach to reconstruct the functional connectivity of cultured cortical neurons plated on multielectrode arrays (MEAs) and investigated whether pharmacologically induced alterations in avalanche dynamics are accompanied by changes in FFCs. This approach begins by extracting a functional network of directed links between pairs of neurons, and then evaluates the strength of FFCs using Schur decomposition. In a first step, we examined the ability of this approach to extract FFCs from simulated spiking neurons. The strength of FFCs obtained in strictly feedforward networks diminished monotonically as links were gradually rewired at random. Next, we estimated the FFCs of spontaneously active cortical neuron cultures in the presence of either a control medium, a GABA(A) receptor antagonist (PTX), or an AMPA receptor antagonist combined with an NMDA receptor antagonist (APV/DNQX). The distribution of avalanche sizes in these cultures was modulated by this pharmacology, with a shallower power-law under PTX (due to the prominence of larger avalanches) and a steeper power-law under APV/DNQX (due to avalanches recruiting fewer neurons) relative to control cultures. The strength of FFCs increased in networks after application of PTX, consistent with an amplification of feedforward activity during avalanches.
Conversely, FFCs decreased after application of APV/DNQX, consistent with fading feedforward activation. The observed alterations in FFCs provide experimental support for recent theoretical work linking power-law avalanches to the feedforward organization of functional connections in local neuronal circuits. PMID:23091458
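The Schur-decomposition step described above can be sketched in a few lines of NumPy/SciPy. This is a hedged illustration of the general idea rather than the authors' exact pipeline: it takes the strictly triangular part of the real Schur form of a connectivity matrix as the feedforward component, and uses its norm relative to the whole Schur form as an FFC-strength index.

```python
import numpy as np
from scipy.linalg import schur

def ffc_strength(W):
    """Index of functionally feedforward structure in a connectivity
    matrix W: norm of the strictly triangular part of its real Schur
    form, relative to the norm of the whole Schur form."""
    T, Z = schur(W, output='real')    # W = Z @ T @ Z.T, T quasi-triangular
    feedforward = np.triu(T, k=1)     # strictly triangular part: feedforward interactions
    return np.linalg.norm(feedforward) / np.linalg.norm(T)

# A strictly feedforward 10-neuron chain scores ~1; a symmetric (normal)
# matrix, whose real Schur form is diagonal, scores ~0.
chain = np.diag(np.ones(9), k=1)
A = np.random.default_rng(0).standard_normal((10, 10))
symmetric = A + A.T
```

Consistent with the simulation result in the abstract, randomly rewiring links of the chain matrix would move this index continuously between the two extremes.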
Noise Tolerance of Attractor and Feedforward Memory Models
Lim, Sukbin; Goldman, Mark S.
2017-01-01
In short-term memory networks, transient stimuli are represented by patterns of neural activity that persist long after stimulus offset. Here, we compare the performance of two prominent classes of memory networks, feedback-based attractor networks and feedforward networks, in conveying information about the amplitude of a briefly presented stimulus in the presence of gaussian noise. Using Fisher information as a metric of memory performance, we find that the optimal form of network architecture depends strongly on assumptions about the forms of nonlinearities in the network. For purely linear networks, we find that feedforward networks outperform attractor networks because noise is continually removed from feedforward networks when signals exit the network; as a result, feedforward networks can amplify signals they receive faster than noise accumulates over time. By contrast, attractor networks must operate in a signal-attenuating regime to avoid the buildup of noise. However, if the amplification of signals is limited by a finite dynamic range of neuronal responses or if noise is reset at the time of signal arrival, as suggested by recent experiments, we find that attractor networks can outperform feedforward ones. Under a simple model in which neurons have a finite dynamic range, we find that the optimal attractor networks are forgetful if there is no mechanism for noise reduction with signal arrival but nonforgetful (perfect integrators) in the presence of a strong reset mechanism. Furthermore, we find that the maximal Fisher information for the feedforward and attractor networks exhibits power law decay as a function of time and scales linearly with the number of neurons. These results highlight prominent factors that lead to trade-offs in the memory performance of networks with different architectures and constraints, and suggest conditions under which attractor or feedforward networks may be best suited to storing information about previous stimuli.
PMID:22091664
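The noise-shedding argument can be made concrete with a one-dimensional caricature, which is my simplification and not the networks analyzed in the paper: a delay line with per-stage gain g injects unit-variance noise once per stage and expels it as the signal exits, while a leaky integrator (toy attractor) accumulates noise every time step.

```python
import numpy as np

def fisher_delay_line(t, g=1.2, sigma2=1.0):
    """Fisher information about stimulus amplitude after t stages of a
    linear delay line with per-stage gain g; unit-variance noise is
    injected once per stage (toy assumption)."""
    gain2 = g ** (2 * t)                          # squared signal gain
    var = sigma2 * (gain2 - 1.0) / (g**2 - 1.0)   # accumulated noise variance
    return gain2 / var

def fisher_attractor(t, lam=0.9, sigma2=1.0):
    """Fisher information after t steps of a leaky integrator
    x[t+1] = lam * x[t] + noise (toy attractor model)."""
    sig2 = lam ** (2 * t)                         # squared signal gain
    var = sigma2 * (1.0 - sig2) / (1.0 - lam**2)  # accumulated noise variance
    return sig2 / var
```

In this caricature the leaky attractor's Fisher information decays toward zero, while the amplifying delay line saturates at a positive floor, echoing the linear-regime comparison in the abstract.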
NASA Astrophysics Data System (ADS)
Li, Jie; Yu, Wan-Qing; Xu, Ding; Liu, Feng; Wang, Wei
2009-12-01
Using numerical simulations, we explore the mechanism for propagation of rate signals through a 10-layer feedforward network composed of Hodgkin-Huxley (HH) neurons with sparse connectivity. When white noise is afferent to the input layer, neuronal firing becomes progressively more synchronous in successive layers and synchrony is well developed in deeper layers owing to the feedforward connections between neighboring layers. The synchrony ensures the successful propagation of rate signals through the network when the synaptic conductance is weak. As the synaptic time constant τsyn varies, coherence resonance is observed in the network activity due to the intrinsic property of HH neurons. This makes the output firing rate single-peaked as a function of τsyn, suggesting that the signal propagation can be modulated by the synaptic time constant. These results are consistent with experimental results and advance our understanding of how information is processed in feedforward networks.
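The layer-to-layer fate of a rate signal can be caricatured by iterating a single gain function, one application per layer. This is an assumption-laden sketch (a sigmoidal transfer with hand-picked gain and threshold, not fitted HH neurons with sparse connectivity): weak input rates die out across layers while strong ones saturate, illustrating why faithful rate propagation requires tuning of the synaptic parameters.

```python
import numpy as np

def propagate_rate(r_in, n_layers=10, gain=0.12, theta=40.0):
    """Iterate a sigmoidal layer-to-layer rate transfer function.
    gain and theta (Hz) are illustrative choices, not fitted values."""
    rates = [r_in]
    for _ in range(n_layers):
        drive = rates[-1]
        rates.append(100.0 / (1.0 + np.exp(-gain * (drive - theta))))
    return rates

rates_low = propagate_rate(20.0)    # weak input: decays layer by layer
rates_high = propagate_rate(60.0)   # strong input: saturates near 100 Hz
```

With these parameters the map is bistable, so intermediate rates are not transmitted faithfully; mechanisms such as the synchrony and coherence resonance described in the abstract are what rescue propagation in spiking models.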
Single-hidden-layer feed-forward quantum neural network based on Grover learning.
Liu, Cheng-Yi; Chen, Chein; Chang, Ching-Ter; Shih, Lun-Min
2013-09-01
In this paper, a novel single-hidden-layer feed-forward quantum neural network model is proposed based on concepts and principles from quantum theory. By combining the quantum mechanism with the feed-forward neural network, we define quantum hidden neurons and connected quantum weights and use them as the fundamental information-processing units in a single-hidden-layer feed-forward neural network. The quantum neurons allow a wide range of nonlinear functions to serve as activation functions in the hidden layer of the network, and the Grover searching algorithm iteratively selects the optimal parameter setting, making very efficient neural network learning possible. The quantum neurons and weights, along with Grover-based learning, result in a novel and efficient neural network characterized by a reduced network size, highly efficient training, and promising future applications. Simulations investigating the performance of the proposed quantum network show that it achieves accurate learning.
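The Grover subroutine at the heart of this learning scheme can be made concrete with a plain state-vector simulation of the standard iteration (oracle phase flip followed by inversion about the mean). This is a hedged illustration of the search primitive only, not the paper's hybrid network or its parameter encoding.

```python
import numpy as np

def grover_search(n_qubits, marked, n_iter=None):
    """State-vector simulation of Grover's search for a single marked
    index among 2**n_qubits candidates."""
    N = 2 ** n_qubits
    if n_iter is None:
        # near-optimal iteration count, ~ (pi/4) * sqrt(N)
        n_iter = int(round(np.pi / 4 * np.sqrt(N)))
    amp = np.full(N, 1.0 / np.sqrt(N))   # uniform superposition
    for _ in range(n_iter):
        amp[marked] *= -1.0              # oracle: phase-flip the marked item
        amp = 2.0 * amp.mean() - amp     # diffusion: inversion about the mean
    return amp

amp = grover_search(6, marked=13)
prob_marked = float(amp[13] ** 2)        # probability of measuring the marked item
```

After roughly (pi/4)*sqrt(N) iterations the marked candidate is measured with probability close to one, which is the quadratic speed-up the abstract relies on for parameter selection.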
Sensitivity of feedforward neural networks to weight errors
NASA Technical Reports Server (NTRS)
Stevenson, Maryhelen; Widrow, Bernard; Winter, Rodney
1990-01-01
An analysis is made of the sensitivity of feedforward layered networks of Adaline elements (threshold logic units) to weight errors. An approximation is derived which expresses the probability of error for an output neuron of a large network (a network with many neurons per layer) as a function of the percentage change in the weights. As would be expected, the probability of error increases with the number of layers in the network and with the percentage change in the weights. The probability of error is essentially independent of the number of weights per neuron and of the number of neurons per layer, as long as these numbers are large (on the order of 100 or more).
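The kind of sensitivity analysis described above can be probed numerically. The following Monte Carlo sketch estimates how often a single threshold unit's output flips when all of its weights are perturbed by a given RMS percentage; random bipolar inputs and Gaussian weights are my assumptions for illustration, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def output_flip_probability(n_inputs=100, pct_err=0.05, trials=2000):
    """Monte Carlo estimate of the chance a hard-threshold unit's output
    changes when every weight is perturbed by pct_err (RMS, relative)."""
    flips = 0
    for _ in range(trials):
        x = rng.choice([-1.0, 1.0], size=n_inputs)   # bipolar inputs
        w = rng.standard_normal(n_inputs)
        scale = pct_err * np.linalg.norm(w) / np.sqrt(n_inputs)
        dw = scale * rng.standard_normal(n_inputs)   # weight errors
        if np.sign(w @ x) != np.sign((w + dw) @ x):
            flips += 1
    return flips / trials
```

As the abstract's approximation predicts, the estimated flip probability grows with the percentage weight change and is largely insensitive to n_inputs once it is on the order of 100.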
Feedforward Inhibition Allows Input Summation to Vary in Recurrent Cortical Networks
2018-01-01
Abstract Brain computations depend on how neurons transform inputs to spike outputs. Here, to understand input-output transformations in cortical networks, we recorded spiking responses from visual cortex (V1) of awake mice of either sex while pairing sensory stimuli with optogenetic perturbation of excitatory and parvalbumin-positive inhibitory neurons. We found that V1 neurons’ average responses were primarily additive (linear). We used a recurrent cortical network model to determine whether these data, as well as past observations of nonlinearity, could be described by a common circuit architecture. Simulations showed that cortical input-output transformations can be changed from linear to sublinear with moderate (∼20%) strengthening of connections between inhibitory neurons, but this change away from linear scaling depends on the presence of feedforward inhibition. Simulating a variety of recurrent connection strengths showed that, compared with when input arrives only to excitatory neurons, networks produce a wider range of output spiking responses in the presence of feedforward inhibition. PMID:29682603
Stochastic resonance in feedforward acupuncture networks
NASA Astrophysics Data System (ADS)
Qin, Ying-Mei; Wang, Jiang; Men, Cong; Deng, Bin; Wei, Xi-Le; Yu, Hai-Tao; Chan, Wai-Lok
2014-10-01
Effects of noise and of other network properties on weak-signal propagation are studied systematically in feedforward acupuncture networks (FFNs) based on the FitzHugh-Nagumo neuron model. It is found that noise of medium intensity can enhance signal propagation, and that this effect can be further increased by the feedforward network structure. Resonant properties of the noisy network can also be altered by several network parameters, such as heterogeneity, synapse features, and feedback connections. These results may also provide a novel potential explanation for the propagation of acupuncture signals.
Local excitation-inhibition ratio for synfire chain propagation in feed-forward neuronal networks
NASA Astrophysics Data System (ADS)
Guo, Xinmeng; Yu, Haitao; Wang, Jiang; Liu, Jing; Cao, Yibin; Deng, Bin
2017-09-01
A leading hypothesis holds that spiking activity propagates along neuronal sub-populations connected in a feed-forward manner, and that propagation efficiency is affected by the dynamics of those sub-populations. In this paper, we investigate how the interaction between local excitation and inhibition affects synfire chain propagation in a feed-forward network (FFN). The simulation results show that there is an appropriate excitation-inhibition (EI) ratio that maximizes the performance of synfire chain propagation. The optimal EI ratio can significantly enhance the selectivity of the FFN for synchronous signals, which thereby increases its stability against background noise. Moreover, the effect of network topology on synfire chain propagation is also investigated. It is found that synfire chain propagation can be maximized by an optimal interlayer linking probability. We also find that external noise is detrimental to synchrony propagation by inducing spike jitter. The results presented in this paper may provide insight into the effects of network dynamics on neuronal computations.
Pulvinar thalamic nucleus allows for asynchronous spike propagation through the cortex
Cortes, Nelson; van Vreeswijk, Carl
2015-01-01
We create two multilayered feedforward networks composed of excitatory and inhibitory integrate-and-fire neurons in the balanced state to investigate the role of cortico-pulvino-cortical connections. The first network consists of ten feedforward levels, where a Poisson spike train with varying firing rate is applied as input to layer one. Although the balanced state partially avoids spike synchronization during transmission, the average firing rate in the last layer either decays or saturates depending on the feedforward pathway gain. The last-layer activity is almost independent of the input, even for a carefully chosen intermediate gain. Adding connections to the feedforward pathway through a nine-area pulvinar structure makes firing-rate propagation almost linear across layers. Strong incoming pulvinar spikes balance the low feedforward gain to yield a unit input-output relation in the last layer. Pulvinar neurons evoke bimodal activity depending on the input magnitude: synchronized spike bursts between 20 and 80 Hz, and asynchronous activity for both very low and very high frequency inputs. In the first regime, spikes of last-layer feedforward neurons are asynchronous, with weak, low-frequency oscillations in the rate. Here, the uncorrelated incoming feedforward pathway washes out the synchronized thalamic bursts. In the second regime, spikes in the whole network are asynchronous. As the number of cortical layers increases, long-range pulvinar connections can link two or more cortical stages directly, avoiding either their saturation or a gradual decay of their activity. The pulvinar acts as a shortcut that supplies the input-output firing-rate relationship of two separated cortical areas without changing the strength of connections in the feedforward pathway. PMID:26042026
Speed of feedforward and recurrent processing in multilayer networks of integrate-and-fire neurons.
Panzeri, S; Rolls, E T; Battaglia, F; Lavis, R
2001-11-01
The speed of processing in the visual cortical areas can be fast, with for example the latency of neuronal responses increasing by only approximately 10 ms per area in the ventral visual system sequence V1 to V2 to V4 to inferior temporal visual cortex. This has led to the suggestion that rapid visual processing can only be based on the feedforward connections between cortical areas. To test this idea, we investigated the dynamics of information retrieval in multiple layer networks using a four-stage feedforward network modelled with continuous dynamics with integrate-and-fire neurons, and associative synaptic connections between stages with a synaptic time constant of 10 ms. Through the implementation of continuous dynamics, we found latency differences in information retrieval of only 5 ms per layer when local excitation was absent and processing was purely feedforward. However, information latency differences increased significantly when non-associative local excitation was included. We also found that local recurrent excitation through associatively modified synapses can contribute significantly to processing in as little as 15 ms per layer, including the feedforward and local feedback processing. Moreover, and in contrast to purely feed-forward processing, the contribution of local recurrent feedback was useful and approximately this rapid even when retrieval was made difficult by noise. These findings suggest that cortical information processing can benefit from recurrent circuits when the allowed processing time per cortical area is at least 15 ms long.
Quantum generalisation of feedforward neural networks
NASA Astrophysics Data System (ADS)
Wan, Kwok Ho; Dahlsten, Oscar; Kristjánsson, Hlér; Gardner, Robert; Kim, M. S.
2017-09-01
We propose a quantum generalisation of a classical neural network. The classical neurons are firstly rendered reversible by adding ancillary bits. Then they are generalised to being quantum reversible, i.e., unitary (the classical networks we generalise are called feedforward, and have step-function activation functions). The quantum network can be trained efficiently using gradient descent on a cost function to perform quantum generalisations of classical tasks. We demonstrate numerically that it can: (i) compress quantum states onto a minimal number of qubits, creating a quantum autoencoder, and (ii) discover quantum communication protocols such as teleportation. Our general recipe is theoretical and implementation-independent. The quantum neuron module can naturally be implemented photonically.
Generalized Adaptive Artificial Neural Networks
NASA Technical Reports Server (NTRS)
Tawel, Raoul
1993-01-01
Mathematical model of supervised learning by artificial neural network provides for simultaneous adjustments of both temperatures of neurons and synaptic weights, and includes feedback as well as feedforward synaptic connections. Extension of mathematical model described in "Adaptive Neurons For Artificial Neural Networks" (NPO-17803). Dynamics of neural network represented in new model by less-restrictive continuous formalism.
A feedforward artificial neural network based on quantum effect vector-matrix multipliers.
Levy, H J; McGill, T C
1993-01-01
The vector-matrix multiplier is the engine of many artificial neural network implementations because it can simulate the way in which neurons collect weighted input signals from a dendritic arbor. A new technology for building analog weighting elements that is theoretically capable of densities and speeds far beyond anything that conventional VLSI in silicon could ever offer is presented. To illustrate the feasibility of such a technology, a small three-layer feedforward prototype network with five binary neurons and six tri-state synapses was built and used to perform all of the fundamental logic functions: XOR, AND, OR, and NOT.
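The logic-function demonstration mentioned above can be reproduced in software with a small feedforward network of hard-threshold neurons and vector-matrix multiplies. The weights and thresholds below are hand-chosen for illustration; they are not the paper's hardware values, and XOR here is built as "OR and not AND".

```python
import numpy as np

def step(v):
    """Hard-threshold (binary neuron) activation."""
    return (v > 0).astype(int)

def xor_net(x1, x2):
    """Three-layer feedforward threshold network computing XOR."""
    x = np.array([x1, x2])
    # hidden layer: one OR unit and one AND unit (weights @ inputs - thresholds)
    h = step(np.array([[1, 1], [1, 1]]) @ x - np.array([0.5, 1.5]))
    # output unit: fires for OR but not AND, i.e. exclusive-or
    y = step(np.array([1, -1]) @ h - 0.5)
    return int(y)
```

Each layer is exactly one vector-matrix multiply followed by a threshold, which is the operation the analog weighting elements in the abstract are built to accelerate.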
Toward heterogeneity in feedforward network with synaptic delays based on FitzHugh-Nagumo model
NASA Astrophysics Data System (ADS)
Qin, Ying-Mei; Men, Cong; Zhao, Jia; Han, Chun-Xiao; Che, Yan-Qiu
2018-01-01
We focus on the role of heterogeneity in the propagation of firing patterns in a feedforward network (FFN). The effects of heterogeneity both in the parameters of neuronal excitability and in synaptic delays are investigated systematically. Neuronal heterogeneity is found to modulate firing rates and spiking regularity by changing the excitability of the network. Synaptic delays are strongly related to desynchronized and synchronized firing patterns of the FFN, which indicates that synaptic delays may play a significant role in bridging rate coding and temporal coding. Furthermore, a quasi-coherence-resonance (quasi-CR) phenomenon is observed in the parameter domain of connection probability and delay heterogeneity. Together, these phenomena enable a detailed characterization of neuronal heterogeneity in FFNs, which may play an indispensable role in reproducing important properties of in vivo experiments.
Blur identification by multilayer neural network based on multivalued neurons.
Aizenberg, Igor; Paliy, Dmitriy V; Zurada, Jacek M; Astola, Jaakko T
2008-05-01
A multilayer neural network based on multivalued neurons (MLMVN) is a neural network with a traditional feedforward architecture. At the same time, this network has a number of specific distinguishing features. Its backpropagation learning algorithm is derivative-free. The functionality of the MLMVN is superior to that of traditional feedforward neural networks and of a variety of kernel-based networks. Its higher flexibility and faster adaptation to the target mapping make it possible to model complex problems using simpler networks. In this paper, the MLMVN is used to identify both the type and the parameters of the point spread function, whose precise identification is of crucial importance for image deblurring. The simulation results show the high efficiency of the proposed approach. It is confirmed that the MLMVN is a powerful tool for solving classification problems, especially multiclass ones.
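The discrete multivalued neuron underlying the MLMVN can be sketched in a few lines. In this hedged illustration (a common textbook form of the k-valued activation, not necessarily the exact variant used in the paper), the neuron maps the argument of its complex weighted sum onto the nearest of k sectors of the unit circle.

```python
import numpy as np

def mvn_activation(z, k=8):
    """Discrete multivalued-neuron activation: map the argument of the
    complex weighted sum z onto one of k k-th roots of unity."""
    theta = np.angle(z) % (2 * np.pi)          # argument in [0, 2*pi)
    j = np.floor(k * theta / (2 * np.pi))      # sector index
    return np.exp(2j * np.pi * j / k)
```

Because the output depends only on the argument of z, learning rules for such neurons can move weights by complex error terms directly, which is why the MLMVN's backpropagation can be derivative-free.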
Connectomic constraints on computation in feedforward networks of spiking neurons.
Ramaswamy, Venkatakrishnan; Banerjee, Arunava
2014-10-01
Several efforts are currently underway to decipher the connectome or parts thereof in a variety of organisms. Ascertaining the detailed physiological properties of all the neurons in these connectomes, however, is out of the scope of such projects. It is therefore unclear to what extent knowledge of the connectome alone will advance a mechanistic understanding of computation occurring in these neural circuits, especially when the high-level function of the said circuit is unknown. We consider, here, the question of how the wiring diagram of neurons imposes constraints on what neural circuits can compute, when we cannot assume detailed information on the physiological response properties of the neurons. We call such constraints, which arise by virtue of the connectome, connectomic constraints on computation. For feedforward networks equipped with neurons that obey a deterministic spiking neuron model satisfying a small number of properties, we ask whether, just by knowing the architecture of a network, we can rule out computations that it could be doing, no matter what response properties each of its neurons may have. We show results of this form for certain classes of network architectures. On the other hand, we also prove that with the limited set of properties assumed for our model neurons, there are fundamental limits to the constraints imposed by network structure. Thus, our theory suggests that while connectomic constraints might restrict the computational ability of certain classes of network architectures, we may require more elaborate information on the properties of neurons in the network before we can discern such results for other classes of networks.
Propagation of spiking regularity and double coherence resonance in feedforward networks.
Men, Cong; Wang, Jiang; Qin, Ying-Mei; Deng, Bin; Tsang, Kai-Ming; Chan, Wai-Lok
2012-03-01
We systematically investigate the propagation of spiking regularity in noisy feedforward networks (FFNs) based on the FitzHugh-Nagumo neuron model. It is found that noise can modulate the transmission of firing rate and spiking regularity. Noise-induced synchronization and synfire-enhanced coherence resonance are also observed when signals propagate in noisy multilayer networks. Interestingly, double coherence resonance (DCR) with respect to the combination of synaptic input correlation and noise intensity emerges after layer-by-layer processing in FFNs. Furthermore, inhibitory connections also play an essential role in shaping DCR phenomena. Several properties of the neuronal network, such as noise intensity, correlation of synaptic inputs, and inhibitory connections, can serve as control parameters modulating both rate coding and the order of temporal coding.
Stimulus-specific adaptation in a recurrent network model of primary auditory cortex
2017-01-01
Stimulus-specific adaptation (SSA) occurs when neurons decrease their responses to frequently-presented (standard) stimuli but not, or not as much, to other, rare (deviant) stimuli. SSA is present in all mammalian species in which it has been tested as well as in birds. SSA confers short-term memory to neuronal responses, and may lie upstream of the generation of mismatch negativity (MMN), an important human event-related potential. Previously published models of SSA mostly rely on synaptic depression of the feedforward, thalamocortical input. Here we study SSA in a recurrent neural network model of primary auditory cortex. When the recurrent, intracortical synapses display synaptic depression, the network generates population spikes (PSs). SSA occurs in this network when deviants elicit a PS but standards do not, and we demarcate the regions in parameter space that allow SSA. While SSA based on PSs does not require feedforward depression, we identify feedforward depression as a mechanism for expanding the range of parameters that support SSA. We provide predictions for experiments that could help differentiate between SSA due to synaptic depression of feedforward connections and SSA due to synaptic depression of recurrent connections. Similar to experimental data, the magnitude of SSA in the model depends on the frequency difference between deviant and standard, probability of the deviant, inter-stimulus interval and input amplitude. In contrast to models based on feedforward depression, our model shows true deviance sensitivity as found in experiments. PMID:28288158
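The feedforward-depression mechanism that this paper contrasts with its recurrent model can be sketched with a simple depleting-resource synapse per input channel (a Tsodyks-Markram-style caricature; the parameter values and the two-channel setup are my illustrative assumptions, not the paper's model). Frequent standards deplete their own channel, so a rare deviant on a fresh channel evokes a larger response.

```python
import numpy as np

def ssa_responses(seq, tau_rec=1.5, U=0.4, isi=0.3):
    """Depressing-synapse responses to a stimulus sequence.
    seq: channel indices per stimulus (0 = standard, 1 = deviant);
    tau_rec, isi in seconds; U is the release fraction."""
    x = np.ones(2)                                   # available resources per channel
    responses = []
    for ch in seq:
        responses.append(U * x[ch])                  # response ~ released resources
        x[ch] -= U * x[ch]                           # depletion on the active channel
        x = 1.0 - (1.0 - x) * np.exp(-isi / tau_rec) # recovery between stimuli
    return responses

seq = [0] * 9 + [1]          # frequent standard, then one deviant
r = ssa_responses(seq)
```

As in the experiments summarized above, the adaptation in this toy depends on deviant probability and inter-stimulus interval; unlike the paper's population-spike model, it shows adaptation but not true deviance sensitivity.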
Harper, Nicol S; Schoppe, Oliver; Willmore, Ben D B; Cui, Zhanfeng; Schnupp, Jan W H; King, Andrew J
2016-11-01
Cortical sensory neurons are commonly characterized using the receptive field, the linear dependence of their response on the stimulus. In primary auditory cortex neurons can be characterized by their spectrotemporal receptive fields, the spectral and temporal features of a sound that linearly drive a neuron. However, receptive fields do not capture the fact that the response of a cortical neuron results from the complex nonlinear network in which it is embedded. By fitting a nonlinear feedforward network model (a network receptive field) to cortical responses to natural sounds, we reveal that primary auditory cortical neurons are sensitive over a substantially larger spectrotemporal domain than is seen in their standard spectrotemporal receptive fields. Furthermore, the network receptive field, a parsimonious network consisting of 1-7 sub-receptive fields that interact nonlinearly, consistently better predicts neural responses to auditory stimuli than the standard receptive fields. The network receptive field reveals separate excitatory and inhibitory sub-fields with different nonlinear properties, and interaction of the sub-fields gives rise to important operations such as gain control and conjunctive feature detection. The conjunctive effects, where neurons respond only if several specific features are present together, enable increased selectivity for particular complex spectrotemporal structures, and may constitute an important stage in sound recognition. In conclusion, we demonstrate that fitting auditory cortical neural responses with feedforward network models expands on simple linear receptive field models in a manner that yields substantially improved predictive power and reveals key nonlinear aspects of cortical processing, while remaining easy to interpret in a physiological context.
PMID:27835647
Temporal neural networks and transient analysis of complex engineering systems
NASA Astrophysics Data System (ADS)
Uluyol, Onder
A theory is introduced for a multi-layered Local Output Gamma Feedback (LOGF) neural network within the paradigm of Locally-Recurrent Globally-Feedforward neural networks. It is developed for the identification, prediction, and control tasks of spatio-temporal systems and allows for the presentation of different time scales through incorporation of a gamma memory. It is initially applied to the tasks of sunspot and Mackey-Glass series prediction as benchmarks, then it is extended to the task of power level control of a nuclear reactor at different fuel cycle conditions. The developed LOGF neuron model can also be viewed as a Transformed Input and State (TIS) Gamma memory for neural network architectures for temporal processing. The novel LOGF neuron model extends the static neuron model by incorporating into it a short-term memory structure in the form of a digital gamma filter. A feedforward neural network made up of LOGF neurons can thus be used to model dynamic systems. A learning algorithm based upon the Backpropagation-Through-Time (BTT) approach is derived. It is applicable for training a general L-layer LOGF neural network. The spatial and temporal weights and parameters of the network are iteratively optimized for a given problem using the derived learning algorithm.
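The gamma memory at the core of the LOGF neuron can be sketched as a cascade of leaky integrators, following the standard de Vries-Principe tap recursion x_k[n] = (1 - mu) x_k[n-1] + mu x_{k-1}[n-1]; the tap count and mu below are illustrative choices, not the dissertation's fitted parameters.

```python
import numpy as np

def gamma_memory(signal, n_taps=4, mu=0.3):
    """Run a gamma memory over a 1-D signal.
    Returns an array of shape (len(signal), n_taps + 1); tap 0 is the
    raw input, taps 1..n_taps hold progressively older, smoothed history."""
    taps = np.zeros(n_taps + 1)
    out = np.empty((len(signal), n_taps + 1))
    for n, u in enumerate(signal):
        prev = taps.copy()
        taps[0] = u
        for k in range(1, n_taps + 1):
            taps[k] = (1.0 - mu) * prev[k] + mu * prev[k - 1]
        out[n] = taps
    return out
```

Each tap has unit DC gain, so mu trades temporal resolution against memory depth; feeding these taps into a static feedforward layer is what lets the network model dynamic systems.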
Theta phase precession and phase selectivity: a cognitive device description of neural coding
NASA Astrophysics Data System (ADS)
Zalay, Osbert C.; Bardakjian, Berj L.
2009-06-01
Information in neural systems is carried by way of phase and rate codes. Neuronal signals are processed through transformative biophysical mechanisms at the cellular and network levels. Neural coding transformations can be represented mathematically in a device called the cognitive rhythm generator (CRG). Incoming signals to the CRG are parsed through a bank of neuronal modes that orchestrate proportional, integrative and derivative transformations associated with neural coding. Mode outputs are then mixed through static nonlinearities to encode (spatio) temporal phase relationships. The static nonlinear outputs feed and modulate a ring device (limit cycle) encoding output dynamics. Small coupled CRG networks were created to investigate coding functionality associated with neuronal phase preference and theta precession in the hippocampus. Phase selectivity was found to be dependent on mode shape and polarity, while phase precession was a product of modal mixing (i.e. changes in the relative contribution or amplitude of mode outputs resulted in shifting phase preference). Nonlinear system identification was implemented to help validate the model and explain response characteristics associated with modal mixing; in particular, principal dynamic modes experimentally derived from a hippocampal neuron were inserted into a CRG and the neuron's dynamic response was successfully cloned. From our results, small CRG networks possessing disynaptic feedforward inhibition in combination with feedforward excitation exhibited frequency-dependent inhibitory-to-excitatory and excitatory-to-inhibitory transitions that were similar to transitions seen in a single CRG with quadratic modal mixing. This suggests nonlinear modal mixing to be a coding manifestation of the effect of network connectivity in shaping system dynamic behavior. 
We hypothesize that circuits containing disynaptic feedforward inhibition in the nervous system may be candidates for interpreting upstream rate codes to guide downstream processes such as phase precession, because of their demonstrated frequency-selective properties.
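The modal-mixing account of shifting phase preference described above can be illustrated with a minimal sketch. All signals and weights here are hypothetical toy stand-ins, not the CRG equations:

```python
import numpy as np

# Two neuronal "modes" modeled as phase-shifted responses to an oscillatory
# input; mixing them with different relative amplitudes shifts the phase at
# which the summed output peaks.
theta = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
mode_a = np.cos(theta)               # mode peaking at phase 0
mode_b = np.cos(theta - np.pi / 2)   # mode peaking at phase pi/2

def preferred_phase(w_a, w_b):
    """Phase at which the mixed modal output is maximal."""
    return theta[np.argmax(w_a * mode_a + w_b * mode_b)]

# Growing the contribution of mode_b advances the preferred phase from 0
# toward pi/2: a toy analogue of precession driven by modal mixing.
phases = [preferred_phase(1.0, w) for w in (0.0, 0.5, 1.0, 2.0)]
print(phases)
```

With equal weights the peak sits at pi/4, exactly between the two modes, and it moves monotonically as the amplitude ratio changes.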
Willems, Janske G. P.; Wadman, Wytse J.
2018-01-01
The perirhinal (PER) and lateral entorhinal (LEC) cortex form an anatomical link between the neocortex and the hippocampus. However, neocortical activity is transmitted through the PER and LEC to the hippocampus with a low probability, suggesting the involvement of the inhibitory network. This study explored the role of interneuron-mediated inhibition, activated by electrical stimulation in the agranular insular cortex (AiP), in the deep layers of the PER and LEC. Synaptic input activated by AiP stimulation rarely evoked action potentials in the PER-LEC deep layer excitatory principal neurons, most probably because the evoked synaptic response consisted of a small excitatory and large inhibitory conductance. Furthermore, parvalbumin-positive (PV) interneurons (a subset of interneurons projecting onto the axo-somatic region of principal neurons) received synaptic input earlier than principal neurons, suggesting recruitment of feedforward inhibition. This synaptic input in PV interneurons evoked varying trains of action potentials, explaining the fast-rising, long-lasting synaptic inhibition received by deep layer principal neurons. Altogether, the excitatory input from the AiP onto deep layer principal neurons is overruled by strong feedforward inhibition. PV interneurons, with their fast, extensive stimulus-evoked firing, are able to deliver this fast evoked inhibition to principal neurons. This indicates an essential role for PV interneurons in the gating mechanism of the PER-LEC network. PMID:29341361
Ding, Weifu; Zhang, Jiangshe; Leung, Yee
2016-10-01
In this paper, we predict air pollutant concentration using a feedforward artificial neural network inspired by the mechanism of the human brain, as a useful alternative to traditional statistical modeling techniques. The neural network is trained with sparse-response back-propagation, in which only a small number of neurons respond to the specified stimulus simultaneously; this provides a high convergence rate for the trained network, in addition to low energy consumption and greater generalization. Our method is evaluated on Hong Kong air monitoring station data and corresponding meteorological variables, for which five air quality parameters were gathered at four monitoring stations in Hong Kong over 4 years (2012-2015). Our results show that the proposed training method offers advantages in prediction precision, effectiveness, and generalization over traditional linear regression algorithms and over a feedforward artificial neural network trained with traditional back-propagation.
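A minimal sketch of what "sparse response" training could look like for a small feedforward net: on each pass only the k most active hidden units propagate error and are updated. This is one plausible reading of the scheme, not the authors' exact algorithm; sizes, rates and data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = X[:, 0] + X[:, 1]                       # toy regression target

W1 = rng.normal(0, 0.5, size=(2, 16))
W2 = rng.normal(0, 0.5, size=16)
k, lr = 4, 0.05

def forward(x):
    h = np.tanh(x @ W1)
    mask = np.zeros_like(h)
    mask[np.argsort(np.abs(h))[-k:]] = 1.0  # only the top-k units respond
    h = h * mask
    return h, h @ W2

def mse():
    return np.mean([(forward(x)[1] - t) ** 2 for x, t in zip(X, y)])

before = mse()
for _ in range(5):                          # a few epochs of sparse back-propagation
    for x, t in zip(X, y):
        h, out = forward(x)
        err = out - t
        W2 -= lr * err * h                  # output-layer gradient step
        dh = err * W2 * (1 - h ** 2) * (h != 0)  # error flows only through active units
        W1 -= lr * np.outer(x, dh)
after = mse()
print(before, after)
```

Only a quarter of the hidden units fire (and learn) per sample, yet the loss still drops, which is the intuition behind the efficiency claims in the abstract.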
Raghavan, Mohan; Amrutur, Bharadwaj; Narayanan, Rishikesh; Sikdar, Sujit Kumar
2013-01-01
Synfire waves are propagating spike packets in synfire chains, which are feedforward chains embedded in random networks. Although synfire waves have proved to be an effective quantification of network activity with clear relations to network structure, their utilities are largely limited to feedforward networks with low background activity. To overcome these shortcomings, we describe a novel generalisation of synfire waves, and define ‘synconset wave’ as a cascade of first spikes within a synchronisation event. Synconset waves would occur in ‘synconset chains’, which are feedforward chains embedded in possibly heavily recurrent networks with heavy background activity. We probed the utility of synconset waves using simulations of single-compartment neuron network models with biophysically realistic conductances, and demonstrated that the spread of synconset waves directly follows from the network connectivity matrix and is modulated by top-down inputs and the resultant oscillations. Such synconset profiles lend intuitive insights into network organisation in terms of connection probabilities between various network regions rather than an adjacency matrix. To test this intuition, we develop a Bayesian likelihood function that quantifies the probability that an observed synfire wave was caused by a given network. Further, we demonstrate its utility in the inverse problem of identifying the network that caused a given synfire wave. This method was effective even in highly subsampled networks where only a small subset of neurons were accessible, thus showing its utility in experimental estimation of connectomes in real neuronal networks. Together, we propose synconset chains/waves as an effective framework for understanding the impact of network structure on function, and as a step towards developing physiology-driven network identification methods. Finally, as synconset chains extend the utilities of synfire chains to arbitrary networks, we suggest applications of our framework to several aspects of network physiology including cell assemblies, population codes, and oscillatory synchrony. PMID:24116018
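The defining step above, extracting the cascade of first spikes within a synchronisation event, can be sketched directly. Event bounds and spike times here are invented for illustration:

```python
import numpy as np

spikes = {                       # neuron id -> spike times (s)
    0: [0.012, 0.120, 0.530],
    1: [0.015, 0.140],
    2: [0.019, 0.135, 0.540],
    3: [0.600],                  # silent during the first event
}

def synconset_wave(spikes, t_start, t_end):
    """Order neurons by their first spike inside [t_start, t_end)."""
    firsts = {}
    for nid, times in spikes.items():
        inside = [t for t in times if t_start <= t < t_end]
        if inside:
            firsts[nid] = min(inside)
    return sorted(firsts, key=firsts.get)

wave = synconset_wave(spikes, 0.0, 0.1)
print(wave)   # neuron order of the onset cascade
```

Later spikes within the event are ignored by construction, which is what makes the measure robust to heavy recurrent background activity.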
Feedforward and feedback inhibition in neostriatal GABAergic spiny neurons.
Tepper, James M; Wilson, Charles J; Koós, Tibor
2008-08-01
There are two distinct inhibitory GABAergic circuits in the neostriatum. The feedforward circuit consists of a relatively small population of GABAergic interneurons that receives excitatory input from the neocortex and exerts monosynaptic inhibition onto striatal spiny projection neurons. The feedback circuit comprises the numerous spiny projection neurons and their interconnections via local axon collaterals. This network has long been assumed to provide the majority of striatal GABAergic inhibition and to sharpen and shape striatal output through lateral inhibition, producing increased activity in the most strongly excited spiny cells at the expense of their less strongly excited neighbors. Recent results, mostly from recording experiments of synaptically connected pairs of neurons, have revealed that the two GABAergic circuits differ markedly in terms of the total number of synapses made by each, the strength of the postsynaptic response detected at the soma, the extent of presynaptic convergence and divergence and the net effect of the activation of each circuit on the postsynaptic activity of the spiny neuron. These data have revealed that the feedforward inhibition is powerful and widespread, with spiking in a single interneuron being capable of significantly delaying or even blocking the generation of spikes in a large number of postsynaptic spiny neurons. In contrast, the postsynaptic effects of spiking in a single presynaptic spiny neuron on postsynaptic spiny neurons are weak when measured at the soma, and unable to significantly affect spike timing or generation. Further, reciprocity of synaptic connections between spiny neurons is only rarely observed. 
These results suggest that the bulk of the fast inhibition that has the strongest effects on spiny neuron spike timing comes from the feedforward interneuronal system whereas the axon collateral feedback system acts principally at the dendrites to control local excitability as well as the overall level of activity of the spiny neuron.
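The review's central contrast, a strong feedforward IPSP that delays or blocks spiking versus a weak collateral IPSP that barely does, can be caricatured with a leaky integrate-and-fire sketch. All constants are illustrative, not fitted to striatal data:

```python
import numpy as np

def spike_time(ipsp_amp, dt=1e-4, t_max=0.2):
    """First-spike time of an LIF neuron under constant drive plus a decaying IPSP."""
    tau, v_rest, v_th = 0.02, -70e-3, -54e-3   # s, V, V
    i_exc = 1.0                                # constant suprathreshold drive (V/s)
    v = v_rest
    for step in range(int(t_max / dt)):
        t = step * dt
        ipsp = -ipsp_amp * np.exp(-t / 0.01)   # decaying inhibitory drive (V/s)
        v += dt / tau * (v_rest - v) + dt * (i_exc + ipsp)
        if v >= v_th:
            return t
    return None

t_weak = spike_time(ipsp_amp=0.1)    # weak, collateral-like inhibition
t_strong = spike_time(ipsp_amp=0.8)  # strong feedforward inhibition
print(t_weak, t_strong)
```

The stronger somatic inhibition pushes the first spike measurably later for the same excitatory drive, the signature attributed here to the feedforward interneuronal system.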
Propagating synchrony in feed-forward networks
Jahnke, Sven; Memmesheimer, Raoul-Martin; Timme, Marc
2013-01-01
Coordinated patterns of precisely timed action potentials (spikes) emerge in a variety of neural circuits, but their dynamical origin is still not well understood. One hypothesis states that synchronous activity propagating through feed-forward chains of groups of neurons (synfire chains) may dynamically generate such spike patterns. Additionally, synfire chains offer the possibility of reliable signal transmission. So far, mostly densely connected chains, often with all-to-all connectivity between groups, have been theoretically and computationally studied. Yet, such prominent feed-forward structures have not been observed experimentally. Here we analytically and numerically investigate under which conditions diluted feed-forward chains may exhibit synchrony propagation. In addition to conventional linear input summation, we study the impact of non-linear, non-additive summation accounting for the effect of fast dendritic spikes. The non-linearities promote synchronous inputs to generate precisely timed spikes. We identify how non-additive coupling relaxes the conditions on connectivity such that it enables synchrony propagation at connectivities substantially lower than required for linearly coupled chains. Although the analytical treatment is based on a simple leaky integrate-and-fire neuron model, we show how to generalize our methods to biologically more detailed neuron models and verify our results by numerical simulations with, e.g., Hodgkin-Huxley-type neurons. PMID:24298251
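The effect of non-additive summation on propagation in a diluted chain can be caricatured with a binomial cascade. Group size, connection probability and thresholds are illustrative; dendritic spikes are crudely modeled as a lowered effective threshold, not the paper's actual neuron model:

```python
import numpy as np

rng = np.random.default_rng(1)

def propagate(p, theta, n_groups=10, size=100):
    """Neurons fire if they receive >= theta synchronous spikes from the previous group."""
    active = size                                    # fully synchronous first group
    for _ in range(n_groups - 1):
        inputs = rng.binomial(active, p, size=size)  # synchronous spikes received
        active = int(np.sum(inputs >= theta))
    return active

linear = propagate(p=0.3, theta=40)   # additive summation: synchrony dies out
nonadd = propagate(p=0.3, theta=20)   # dendritic boost: synchrony survives
print(linear, nonadd)
```

At the same dilution (p = 0.3), the higher effective threshold kills the packet within a couple of groups, while the lowered one sustains near-complete synchrony down the chain.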
Dong, Zhekang; Duan, Shukai; Hu, Xiaofang; Wang, Lidan; Li, Hai
2014-01-01
In this paper, we present an implementation scheme for a memristor-based multilayer feedforward small-world neural network (MFSNN), motivated by the lack of hardware realizations of the MFSNN, which require a large number of electronic neurons and synapses. More specifically, a mathematical closed-form charge-governed memristor model is derived and the corresponding Simulink model is presented, which is an essential block for realizing the memristive synapse and the activation function in electronic neurons. Furthermore, we investigate a more intelligent memristive PID controller by incorporating the proposed MFSNN into intelligent PID control, exploiting the advantages of the memristive MFSNN in computation speed and accuracy. Finally, numerical simulations demonstrate the effectiveness of the proposed scheme. PMID:25202723
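A charge-governed memristor can be sketched with the well-known linear dopant-drift (HP-style) memristance M(q) = R_off - (R_off - R_on) q / q_d. This is a generic textbook model, not the paper's specific closed-form expression, and all constants are illustrative:

```python
import numpy as np

R_on, R_off, q_d = 100.0, 16e3, 1e-6        # ohms, ohms, coulombs
dt = 1e-5
t = np.arange(0.0, 0.02, dt)
v = 1.2 * np.sin(2 * np.pi * 50 * t)        # one period of a 50 Hz drive

q, i_hist = 0.0, []
for vk in v:
    x = np.clip(q / q_d, 0.0, 1.0)          # normalised state, kept in [0, 1]
    m = R_off - (R_off - R_on) * x          # memristance M(q)
    i = vk / m
    q += i * dt                             # the state variable is integrated charge
    i_hist.append(i)
i_hist = np.array(i_hist)

# The i-v trajectory traces the pinched hysteresis loop that makes
# memristors attractive as compact electronic synapses.
print(i_hist.min(), i_hist.max())
```

Because resistance depends on the history of charge, the same voltage produces different currents on the rising and falling half-cycles, which is the property exploited for memristive synapses.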
Mirrored STDP Implements Autoencoder Learning in a Network of Spiking Neurons.
Burbank, Kendra S
2015-12-01
The autoencoder algorithm is a simple but powerful unsupervised method for training neural networks. Autoencoder networks can learn sparse distributed codes similar to those seen in cortical sensory areas such as visual area V1, but they can also be stacked to learn increasingly abstract representations. Several computational neuroscience models of sensory areas, including Olshausen & Field's Sparse Coding algorithm, can be seen as autoencoder variants, and autoencoders have seen extensive use in the machine learning community. Despite their power and versatility, autoencoders have been difficult to implement in a biologically realistic fashion. The challenges include their need to calculate differences between two neuronal activities and their requirement for learning rules which lead to identical changes at feedforward and feedback connections. Here, we study a biologically realistic network of integrate-and-fire neurons with anatomical connectivity and synaptic plasticity that closely matches that observed in cortical sensory areas. Our choice of synaptic plasticity rules is inspired by recent experimental and theoretical results suggesting that learning at feedback connections may have a different form from learning at feedforward connections, and our results depend critically on this novel choice of plasticity rules. Specifically, we propose that plasticity rules at feedforward versus feedback connections are temporally opposed versions of spike-timing dependent plasticity (STDP), leading to a symmetric combined rule we call Mirrored STDP (mSTDP). We show that with mSTDP, our network follows a learning rule that approximately minimizes an autoencoder loss function. When trained with whitened natural image patches, the learned synaptic weights resemble the receptive fields seen in V1. Our results use realistic synaptic plasticity rules to show that the powerful autoencoder learning algorithm could be within the reach of real biological networks.
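The core of the mSTDP proposal, a causal window at feedforward synapses and its temporally opposed twin at feedback synapses summing to a symmetric rule, can be sketched directly. Window shape, amplitude and time constant are illustrative:

```python
import numpy as np

A, tau = 0.01, 20.0   # amplitude (a.u.), time constant (ms)

def window(dt_ms):
    """Causal exponential STDP window: potentiation only when dt > 0."""
    return A * np.exp(-dt_ms / tau) if dt_ms > 0 else 0.0

def dw_feedforward(dt_ms):      # dt_ms = t_post - t_pre
    return window(dt_ms)

def dw_feedback(dt_ms):         # temporally opposed ("mirrored") version
    return window(-dt_ms)

def dw_combined(dt_ms):
    return dw_feedforward(dt_ms) + dw_feedback(dt_ms)

# The combined rule depends only on |dt|: symmetric and Hebbian-like.
print(dw_combined(10.0), dw_combined(-10.0))
```

The sum f(dt) + f(-dt) collapses to A·exp(-|dt|/tau), so the paired updates keep feedforward and feedback weights mirrored, which is what lets the network approximate autoencoder learning.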
Effects of topologies on signal propagation in feedforward networks
NASA Astrophysics Data System (ADS)
Zhao, Jia; Qin, Ying-Mei; Che, Yan-Qiu
2018-01-01
We systematically investigate the effects of topologies on signal propagation in feedforward networks (FFNs) based on the FitzHugh-Nagumo neuron model. FFNs with different topological structures are constructed with the same number of in-degrees and out-degrees in each layer and given the same input signal. The propagation of firing patterns and firing rates is found to be affected by the distribution of neuron connections in the FFNs. Synchronous firing patterns emerge in the later layers of FFNs with identical, uniform, and exponential degree distributions, but the number of synchronous spike trains in the output layers of the three topologies obviously differs from one another. The firing rates in the output layers of the three FFNs can be ordered from high to low according to their topological structures as exponential, uniform, and identical distributions, respectively. Interestingly, the sequence of spiking regularity in the output layers of the three FFNs is consistent with the firing rates, but their firing synchronization is in the opposite order. In summary, the node degree is an important factor that can dramatically influence the neuronal network activity.
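The three degree distributions being compared can be sampled with matched means to show how sharply they differ in spread. Numbers are illustrative, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
n, mean_deg = 100, 10

identical = np.full(n, mean_deg)                        # every neuron: exactly 10
uniform = rng.integers(1, 2 * mean_deg, size=n)         # 1..19, mean ~ 10
exponential = np.minimum(rng.geometric(1 / mean_deg, size=n), n)  # heavy tail, capped at n

# Same mean degree, very different spread: it is this spread that shapes
# synchrony and firing rates in the deeper layers of the FFN.
for name, deg in (("identical", identical), ("uniform", uniform), ("exponential", exponential)):
    print(name, float(deg.mean()), float(deg.std()))
```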
Chua, Yansong; Morrison, Abigail
2016-01-01
The role of dendritic spiking mechanisms in neural processing is so far poorly understood. To investigate the role of calcium spikes in the functional properties of the single neuron and recurrent networks, we investigated a three compartment neuron model of the layer 5 pyramidal neuron with calcium dynamics in the distal compartment. By performing single neuron simulations with noisy synaptic input and occasional large coincident input at either just the distal compartment or at both somatic and distal compartments, we show that the presence of calcium spikes confers a substantial advantage for coincidence detection in the former case and a lesser advantage in the latter. We further show that the experimentally observed critical frequency phenomenon, in which action potentials triggered by stimuli near the soma above a certain frequency trigger a calcium spike at distal dendrites, leading to further somatic depolarization, is not exhibited by a neuron receiving realistically noisy synaptic input, and so is unlikely to be a necessary component of coincidence detection. We next investigate the effect of calcium spikes in propagation of spiking activities in a feed-forward network (FFN) embedded in a balanced recurrent network. The excitatory neurons in the network are again connected to either just the distal, or both somatic and distal compartments. With purely distal connectivity, activity propagation is stable and distinguishable for a large range of recurrent synaptic strengths if the feed-forward connections are sufficiently strong, but propagation does not occur in the absence of calcium spikes. When connections are made to both the somatic and the distal compartments, activity propagation is achieved for neurons with active calcium dynamics at a much smaller number of neurons per pool, compared to a network of passive neurons, but quickly becomes unstable as the strength of recurrent synapses increases. 
Activity propagation at higher scaling factors can be stabilized by increasing network inhibition or introducing short term depression in the excitatory synapses, but the signal to noise ratio remains low. Our results demonstrate that the interaction of synchrony with dendritic spiking mechanisms can have profound consequences for the dynamics on the single neuron and network level. PMID:27499740
Carey, Ryan M.; Sherwood, William Erik; Shipley, Michael T.; Borisyuk, Alla
2015-01-01
Olfaction in mammals is a dynamic process driven by the inhalation of air through the nasal cavity. Inhalation determines the temporal structure of sensory neuron responses and shapes the neural dynamics underlying central olfactory processing. Inhalation-linked bursts of activity among olfactory bulb (OB) output neurons [mitral/tufted cells (MCs)] are temporally transformed relative to those of sensory neurons. We investigated how OB circuits shape inhalation-driven dynamics in MCs using a modeling approach that was highly constrained by experimental results. First, we constructed models of canonical OB circuits that included mono- and disynaptic feedforward excitation, recurrent inhibition and feedforward inhibition of the MC. We then used experimental data to drive inputs to the models and to tune parameters; inputs were derived from sensory neuron responses during natural odorant sampling (sniffing) in awake rats, and model output was compared with recordings of MC responses to odorants sampled with the same sniff waveforms. This approach allowed us to identify OB circuit features underlying the temporal transformation of sensory inputs into inhalation-linked patterns of MC spike output. We found that realistic input-output transformations can be achieved independently by multiple circuits, including feedforward inhibition with slow onset and decay kinetics and parallel feedforward MC excitation mediated by external tufted cells. We also found that recurrent and feedforward inhibition had differential impacts on MC firing rates and on inhalation-linked response dynamics. These results highlight the importance of investigating neural circuits in a naturalistic context and provide a framework for further explorations of signal processing by OB networks. PMID:25717156
Simulated mossy fiber associated feedforward circuit functioning as a highpass filter.
Zalay, Osbert C; Bardakjian, Berj L
2006-01-01
Learning and memory rely on the strict regulation of communication between neurons in the hippocampus. The mossy fiber (MF) pathway connects the dentate gyrus to the auto-associative CA3 network, and the information it carries is controlled by a feedforward circuit combining disynaptic inhibition with monosynaptic excitation. Analysis of the MF associated circuit using a mapped clock oscillator (MCO) model reveals the circuit to be a highpass filter.
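Why monosynaptic excitation plus slightly delayed disynaptic inhibition behaves as a highpass filter can be shown with a crude subtraction model: slow input components are cancelled by the lagged inhibitory copy, fast ones are not. The delay and gain are illustrative, not fitted to mossy-fiber data, and this is not the MCO model itself:

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 2.0, dt)

def response(f_hz):
    """Steady-state output amplitude for a sinusoidal presynaptic rate."""
    x = np.sin(2 * np.pi * f_hz * t)          # presynaptic rate modulation
    delay = int(0.010 / dt)                   # 10 ms disynaptic lag
    inh = np.concatenate([np.zeros(delay), x[:-delay]])
    y = x - 0.95 * inh                        # excitation minus delayed inhibition
    return np.max(np.abs(y[delay:]))

low = response(2.0)     # slow modulation: largely cancelled
high = response(40.0)   # fast modulation: passes through
print(low, high)
```

At 2 Hz the delayed inhibitory copy nearly cancels the excitation, while at 40 Hz the two fall out of phase and the output is large, a highpass characteristic.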
Feedback enhances feedforward figure-ground segmentation by changing firing mode.
Supèr, Hans; Romeo, August
2011-01-01
In the visual cortex, feedback projections are conjectured to be crucial in figure-ground segregation. However, the precise function of feedback herein is unclear. Here we tested a hypothetical model of reentrant feedback. We used a previously developed 2-layered feedforward spiking network that is able to segregate figure from ground and included feedback connections. Our computer model data show that without feedback, neurons respond with regular low-frequency (∼9 Hz) bursting to a figure-ground stimulus. After including feedback, the firing pattern changed to a regular (tonic) spiking pattern. In this state, we found an extra enhancement of figure responses and a further suppression of background responses, resulting in a stronger figure-ground signal. This push-pull effect was confirmed by comparing the figure-ground responses with the responses to a homogeneous texture. We propose that feedback controls figure-ground segregation by influencing the neural firing patterns of feedforward-projecting neurons.
A Novel Higher Order Artificial Neural Networks
NASA Astrophysics Data System (ADS)
Xu, Shuxiang
2010-05-01
In this paper, a new Higher Order Neural Network (HONN) model is introduced and applied in several data mining tasks. Data mining extracts hidden patterns and valuable information from large databases. A hyperbolic tangent function is used as the neuron activation function for the new HONN model. Experiments are conducted to demonstrate the advantages and disadvantages of the new HONN model when compared with several conventional Artificial Neural Network (ANN) models: a feedforward ANN with the sigmoid activation function; a feedforward ANN with the hyperbolic tangent activation function; and a Radial Basis Function (RBF) ANN with the Gaussian activation function. The experimental results suggest that the new HONN offers higher generalization capability as well as better handling of missing data.
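What makes a neuron "higher order" is that it also sees products of its inputs. A second-order unit with a tanh activation can separate XOR-like data on its own, which a first-order unit cannot. The weights below are hand-picked for illustration, not learned, and this is a generic construction rather than the paper's exact model:

```python
import numpy as np

def honn_unit(x, w_lin, w_quad, b):
    """Second-order neuron: tanh of linear terms plus pairwise products."""
    quad = np.outer(x, x)[np.triu_indices(len(x), k=1)]  # products x_i * x_j, i < j
    return np.tanh(x @ w_lin + quad @ w_quad + b)

# XOR via the product term alone: sign(-x1 * x2) matches XOR on +/-1 inputs.
w_lin = np.zeros(2)
w_quad = np.array([-2.0])
for x1, x2 in [(-1, -1), (-1, 1), (1, -1), (1, 1)]:
    out = honn_unit(np.array([x1, x2], dtype=float), w_lin, w_quad, b=0.0)
    print(x1, x2, out > 0)
```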
Fan, Denggui; Duan, Lixia; Wang, Qian; Luan, Guoming
2017-01-01
The mechanisms underlying electrophysiologically observed two-way transitions between absence and tonic-clonic epileptic seizures in the cerebral cortex remain unknown. The interplay within the thalamocortical network is believed to give rise to these multiple epileptic modes of activity and the transitions between them. In particular, it is thought that in some areas of cortex there exists feedforward inhibition from the specific relay nucleus of the thalamus (TC) to the inhibitory neuronal population (IN), which exerts an even stronger influence on cortical activity than the known feedforward excitation from TC to the excitatory neuronal population (EX). Inspired by this, we proposed a modified computational model that introduces feedforward inhibitory connectivity within the thalamocortical circuit, to systematically investigate the combined effects of feedforward inhibition and excitation on transitions between epileptic seizures. We first found that, with fixed weak feedforward inhibition, feedforward excitation can induce the transition from tonic oscillation to spike and wave discharges (SWD) in cortex, i.e., epileptic tonic-absence seizures. Furthermore, the absence-seizure phase corresponding to strong feedforward excitation can be transformed into clonic oscillations as feedforward inhibition increases, representing epileptic absence-clonic seizures. We also observed other notable dynamical states, such as periodic 2/3/4-spike and wave discharges, reversed SWD and clonic oscillations, as well as saturated firing. More importantly, we identified the stable parameter regions representing the tonic-clonic oscillations and SWD of epileptic seizures on the 2-D plane spanned by feedforward inhibition and excitation, where physiologically plausible transition pathways between tonic-clonic and absence seizures can be traced. These results indicate the functional role of feedforward pathways in controlling epileptic seizures, and the modified thalamocortical model may provide a guide for future efforts to mechanistically link feedforward pathways to the pathogenesis of epileptic seizures. PMID:28736520
Modular, Hierarchical Learning By Artificial Neural Networks
NASA Technical Reports Server (NTRS)
Baldi, Pierre F.; Toomarian, Nikzad
1996-01-01
Modular and hierarchical approach to supervised learning by artificial neural networks leads to neural networks more structured than neural networks in which all neurons fully interconnected. These networks utilize general feedforward flow of information and sparse recurrent connections to achieve dynamical effects. The modular organization, sparsity of modular units and connections, and fact that learning is much more circumscribed are all attractive features for designing neural-network hardware. Learning streamlined by imitating some aspects of biological neural networks.
Encoding Time in Feedforward Trajectories of a Recurrent Neural Network Model.
Hardy, N F; Buonomano, Dean V
2018-02-01
Brain activity evolves through time, creating trajectories of activity that underlie sensorimotor processing, behavior, and learning and memory. Therefore, understanding the temporal nature of neural dynamics is essential to understanding brain function and behavior. In vivo studies have demonstrated that sequential transient activation of neurons can encode time. However, it remains unclear whether these patterns emerge from feedforward network architectures or from recurrent networks and, furthermore, what role network structure plays in timing. We address these issues using a recurrent neural network (RNN) model with distinct populations of excitatory and inhibitory units. Consistent with experimental data, a single RNN could autonomously produce multiple functionally feedforward trajectories, thus potentially encoding multiple timed motor patterns lasting up to several seconds. Importantly, the model accounted for Weber's law, a hallmark of timing behavior. Analysis of network connectivity revealed that efficiency-a measure of network interconnectedness-decreased as the number of stored trajectories increased. Additionally, the balance of excitation (E) and inhibition (I) shifted toward excitation during each unit's activation time, generating the prediction that observed sequential activity relies on dynamic control of the E/I balance. Our results establish for the first time that the same RNN can generate multiple functionally feedforward patterns of activity as a result of dynamic shifts in the E/I balance imposed by the connectome of the RNN. We conclude that recurrent network architectures account for sequential neural activity, as well as for a fundamental signature of timing behavior: Weber's law.
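The notion of a functionally feedforward trajectory inside a recurrent architecture can be sketched with a ring-shifted weight matrix: the connectivity is formally recurrent, yet activity hops from unit to unit in sequence. Dynamics and constants are illustrative, not the paper's trained RNN:

```python
import numpy as np

n = 50
W = np.zeros((n, n))
for i in range(n - 1):
    W[i + 1, i] = 1.5          # unit i excites unit i+1: feedforward *function*

r = np.zeros(n)
r[0] = 1.0                     # kick the first unit
peaks = []
for _ in range(n):
    peaks.append(int(np.argmax(r)))
    r = np.tanh(W @ r)         # recurrent update rule

print(peaks[:10])              # sequential transient activation encoding time
```

Each unit's activation time tags a moment since the kick, so reading out which unit is currently peaking amounts to reading out elapsed time, the population-clock idea the abstract describes.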
Catic, Aida; Gurbeta, Lejla; Kurtovic-Kozaric, Amina; Mehmedbasic, Senad; Badnjevic, Almir
2018-02-13
The usage of Artificial Neural Networks (ANNs) for genome-enabled classifications and establishing genome-phenotype correlations have been investigated more extensively over the past few years. The reason for this is that ANNs are good approximates of complex functions, so classification can be performed without the need for explicitly defined input-output model. This engineering tool can be applied for optimization of existing methods for disease/syndrome classification. Cytogenetic and molecular analyses are the most frequent tests used in prenatal diagnostic for the early detection of Turner, Klinefelter, Patau, Edwards and Down syndrome. These procedures can be lengthy, repetitive; and often employ invasive techniques so a robust automated method for classifying and reporting prenatal diagnostics would greatly help the clinicians with their routine work. The database consisted of data collected from 2500 pregnant woman that came to the Institute of Gynecology, Infertility and Perinatology "Mehmedbasic" for routine antenatal care between January 2000 and December 2016. During first trimester all women were subject to screening test where values of maternal serum pregnancy-associated plasma protein A (PAPP-A) and free beta human chorionic gonadotropin (β-hCG) were measured. Also, fetal nuchal translucency thickness and the presence or absence of the nasal bone was observed using ultrasound. The architectures of linear feedforward and feedback neural networks were investigated for various training data distributions and number of neurons in hidden layer. Feedback neural network architecture out performed feedforward neural network architecture in predictive ability for all five aneuploidy prenatal syndrome classes. Feedforward neural network with 15 neurons in hidden layer achieved classification sensitivity of 92.00%. Classification sensitivity of feedback (Elman's) neural network was 99.00%. 
The average accuracy of the feedforward neural network was 89.6%, and that of the feedback network was 98.8%. The results presented in this paper prove that an expert diagnostic system based on neural networks can be efficiently used for classification of the five aneuploidy syndromes covered by this study, based on first-trimester maternal serum screening data, ultrasonographic findings and patient demographics. The developed expert system proved to be simple, robust, and powerful in properly classifying prenatal aneuploidy syndromes.
Computational properties of networks of synchronous groups of spiking neurons.
Dayhoff, Judith E
2007-09-01
We demonstrate a model in which synchronously firing ensembles of neurons are networked to produce computational results. Each ensemble is a group of biological integrate-and-fire spiking neurons, with probabilistic interconnections between groups. An analogy is drawn in which each individual processing unit of an artificial neural network corresponds to a neuronal group in a biological model. The activation value of a unit in the artificial neural network corresponds to the fraction of active neurons, synchronously firing, in a biological neuronal group. Weights of the artificial neural network correspond to the product of the interconnection density between groups, the group size of the presynaptic group, and the postsynaptic potential heights in the synchronous group model. All three of these parameters can modulate connection strengths between neuronal groups in the synchronous group models. We give an example of nonlinear classification (XOR) and a function approximation example in which the capability of the artificial neural network can be captured by a neural network model with biological integrate-and-fire neurons configured as a network of synchronously firing ensembles of such neurons. We point out that the general function approximation capability proven for feedforward artificial neural networks appears to be approximated by networks of neuronal groups that fire in synchrony, where the groups comprise integrate-and-fire neurons. We discuss the advantages of this type of model for biological systems, its possible learning mechanisms, and the associated timing relationships.
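The stated correspondence between ANN parameters and the synchronous-group model can be sketched numerically. All values below are illustrative placeholders, not figures from the paper: the effective ANN weight is the product of interconnection density, presynaptic group size, and PSP height, and a unit's activation is the fraction of neurons firing synchronously.

```python
# Hypothetical parameters illustrating the correspondence stated in the abstract
density = 0.2      # interconnection probability between two neuronal groups
group_size = 100   # neurons in the presynaptic group
psp_height = 0.05  # postsynaptic potential height (arbitrary units)

# Effective ANN weight = density * presynaptic group size * PSP height
w = density * group_size * psp_height

# Activation of the presynaptic ANN unit = fraction of synchronously firing neurons
frac_active = 0.6
drive = w * frac_active   # input delivered to the postsynaptic group
```

Any of the three biological parameters can modulate the effective weight, which is the point the abstract makes: the same ANN weight can be realized by denser connections, larger groups, or stronger synapses.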
Long-range population dynamics of anatomically defined neocortical networks
Chen, Jerry L; Voigt, Fabian F; Javadzadeh, Mitra; Krueppel, Roland; Helmchen, Fritjof
2016-01-01
The coordination of activity across neocortical areas is essential for mammalian brain function. Understanding this process requires simultaneous functional measurements across the cortex. In order to dissociate direct cortico-cortical interactions from other sources of neuronal correlations, it is furthermore desirable to target cross-areal recordings to neuronal subpopulations that anatomically project between areas. Here, we combined anatomical tracers with a novel multi-area two-photon microscope to perform simultaneous calcium imaging across mouse primary (S1) and secondary (S2) somatosensory whisker cortex during texture discrimination behavior, specifically identifying feedforward and feedback neurons. We find that coordination of S1-S2 activity increases during motor behaviors such as goal-directed whisking and licking. This effect was not specific to identified feedforward and feedback neurons. However, these mutually projecting neurons especially participated in inter-areal coordination when motor behavior was paired with whisker-texture touches, suggesting that direct S1-S2 interactions are sensory-dependent. Our results demonstrate specific functional coordination of anatomically-identified projection neurons across sensory cortices. DOI: http://dx.doi.org/10.7554/eLife.14679.001 PMID:27218452
Neural networks with local receptive fields and superlinear VC dimension.
Schmitt, Michael
2002-04-01
Local receptive field neurons comprise such well-known and widely used unit types as radial basis function (RBF) neurons and neurons with center-surround receptive field. We study the Vapnik-Chervonenkis (VC) dimension of feedforward neural networks with one hidden layer of these units. For several variants of local receptive field neurons, we show that the VC dimension of these networks is superlinear. In particular, we establish the bound Omega(W log k) for any reasonably sized network with W parameters and k hidden nodes. This bound is shown to hold for discrete center-surround receptive field neurons, which are physiologically relevant models of cells in the mammalian visual system, for neurons computing a difference of gaussians, which are popular in computational vision, and for standard RBF neurons, a major alternative to sigmoidal neurons in artificial neural networks. The result for RBF neural networks is of particular interest since it answers a question that has been open for several years. The results also give rise to lower bounds for networks with fixed input dimension. Regarding constants, all bounds are larger than those known thus far for similar architectures with sigmoidal neurons. The superlinear lower bounds contrast with linear upper bounds for single local receptive field neurons also derived here.
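As a concrete instance of the unit type covered by the Omega(W log k) bound, a difference-of-Gaussians (center-surround) neuron can be sketched as follows. The widths below are arbitrary illustrative choices: a narrow excitatory center minus a broad inhibitory surround.

```python
import numpy as np

def dog_response(x, center, sigma_c=1.0, sigma_s=2.0):
    # Difference of two normalized 2D Gaussians: narrow center minus
    # broad surround, giving the classic center-surround profile.
    d2 = np.sum((np.asarray(x) - np.asarray(center)) ** 2)
    g = lambda s: np.exp(-d2 / (2 * s**2)) / (2 * np.pi * s**2)
    return g(sigma_c) - g(sigma_s)
```

With normalized Gaussians and sigma_c < sigma_s, the response is positive at the center and negative in the surround, matching the physiological receptive fields the paper refers to.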
Gilson, Matthieu; Burkitt, Anthony N; Grayden, David B; Thomas, Doreen A; van Hemmen, J Leo
2009-12-01
In neuronal networks, the changes of synaptic strength (or weight) performed by spike-timing-dependent plasticity (STDP) are hypothesized to give rise to functional network structure. This article investigates how this phenomenon occurs for the excitatory recurrent connections of a network with fixed input weights that is stimulated by external spike trains. We develop a theoretical framework based on the Poisson neuron model to analyze the interplay between the neuronal activity (firing rates and the spike-time correlations) and the learning dynamics, when the network is stimulated by correlated pools of homogeneous Poisson spike trains. STDP can lead to both a stabilization of all the neuron firing rates (homeostatic equilibrium) and a robust weight specialization. The pattern of specialization for the recurrent weights is determined by a relationship between the input firing-rate and correlation structures, the network topology, the STDP parameters and the synaptic response properties. We find conditions for feed-forward pathways or areas with strengthened self-feedback to emerge in an initially homogeneous recurrent network.
High pressure air compressor valve fault diagnosis using feedforward neural networks
James Li, C.; Yu, Xueli
1995-09-01
Feedforward neural networks (FNNs) are developed and implemented to classify a four-stage high pressure air compressor into one of the following conditions: baseline, suction or exhaust valve faults. These FNNs are used for the compressor's automatic condition monitoring and fault diagnosis. Measurements of 39 variables are obtained under different baseline conditions and third-stage suction and exhaust valve faults. These variables include pressures and temperatures at all stages, voltage between phase a and phase b, voltage between phase b and phase c, total three-phase real power, cooling water flow rate, etc. To reduce the number of variables, the amount of their discriminatory information is quantified by scattering matrices to identify statistically significant ones. Measurements of the selected variables are then used by a fully automatic structural and weight learning algorithm to construct three-layer FNNs to classify the compressor's condition. This learning algorithm requires neither guesses of initial weight values nor the number of neurons in the hidden layer of an FNN. It takes an incremental approach in which a hidden neuron is trained by exemplars and then added to the existing network. These exemplars are then made orthogonal to the newly identified hidden neuron and are subsequently used for the training of the next hidden neuron. This refinement continues until a desired accuracy is reached. After the neural networks are established, novel measurements from various conditions that have not been previously seen by the FNNs are used to evaluate their ability in fault diagnosis. The trained neural networks provide very accurate diagnosis of suction and discharge valve defects.
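The incremental construction loop described in the abstract can be sketched as below. This is a simplified stand-in, assuming linear hidden units fitted by least squares on the current residual; the paper's algorithm uses nonlinear neurons and orthogonalizes the exemplars themselves, which this sketch does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_unit(X, residual):
    # Fit one linear hidden unit to the current residual by least squares
    w, *_ = np.linalg.lstsq(X, residual, rcond=None)
    return w

def grow_network(X, y, tol=1e-6, max_units=10):
    # Incremental construction: train a unit on the residual, subtract its
    # contribution, and repeat until the desired accuracy is reached.
    residual = y.astype(float).copy()
    units = []
    for _ in range(max_units):
        w = train_unit(X, residual)
        residual = residual - X @ w
        units.append(w)
        if np.linalg.norm(residual) < tol:
            break
    return units, residual

X = rng.normal(size=(50, 5))
y = X @ rng.normal(size=5)          # a target the network can represent
units, residual = grow_network(X, y)
```

The appeal of this constructive scheme, as the abstract notes, is that neither initial weights nor the hidden-layer size must be guessed in advance: units are added until the residual error is small enough.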
Moyer, Jason T.; Halterman, Benjamin L.; Finkel, Leif H.; Wolf, John A.
2014-01-01
Striatal medium spiny neurons (MSNs) receive lateral inhibitory projections from other MSNs and feedforward inhibitory projections from fast-spiking, parvalbumin-containing striatal interneurons (FSIs). The functional roles of these connections are unknown, and difficult to study in an experimental preparation. We therefore investigated the functionality of both lateral (MSN-MSN) and feedforward (FSI-MSN) inhibition using a large-scale computational model of the striatal network. The model consists of 2744 MSNs comprised of 189 compartments each and 121 FSIs comprised of 148 compartments each, with dendrites explicitly represented and almost all known ionic currents included and strictly constrained by biological data as appropriate. Our analysis of the model indicates that both lateral inhibition and feedforward inhibition function at the population level to limit non-ensemble MSN spiking while preserving ensemble MSN spiking. Specifically, lateral inhibition enables large ensembles of MSNs firing synchronously to strongly suppress non-ensemble MSNs over a short time-scale (10–30 ms). Feedforward inhibition enables FSIs to strongly inhibit weakly activated, non-ensemble MSNs while moderately inhibiting activated ensemble MSNs. Importantly, FSIs appear to more effectively inhibit MSNs when FSIs fire asynchronously. Both types of inhibition would increase the signal-to-noise ratio of responding MSN ensembles and contribute to the formation and dissolution of MSN ensembles in the striatal network. PMID:25505406
Forecasting SPEI and SPI Drought Indices Using the Integrated Artificial Neural Networks
Maca, Petr; Pech, Pavel
2016-01-01
This paper compares forecasts of drought indices based on two different artificial neural network models. The first model is a feedforward multilayer perceptron, sANN, and the second is the integrated neural network model, hANN. The analyzed drought indices are the standardized precipitation index (SPI) and the standardized precipitation evaporation index (SPEI), derived for the period 1948–2002 on two US catchments. The meteorological and hydrological data were obtained from the MOPEX experiment. Both neural network models were trained with the adaptive version of differential evolution, JADE. The comparison of models was based on six model performance measures. The results of the drought index forecasts, explained by the values of four model performance indices, show that the integrated neural network model was superior to the feedforward multilayer perceptron with one hidden layer of neurons. PMID:26880875
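The sANN component can be sketched as a one-hidden-layer feedforward perceptron mapping lagged index values to a one-step-ahead forecast. The weights below are random placeholders; in the study they are fitted with JADE (adaptive differential evolution), not set at random.

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp_forecast(lags, W1, b1, w2, b2):
    # One-hidden-layer feedforward perceptron: tanh hidden layer,
    # linear output neuron producing the next index value.
    h = np.tanh(W1 @ lags + b1)
    return w2 @ h + b2

n_lags, n_hidden = 6, 4
W1 = rng.normal(size=(n_hidden, n_lags))   # placeholder weights
b1 = rng.normal(size=n_hidden)
w2 = rng.normal(size=n_hidden)
b2 = 0.0
pred = mlp_forecast(rng.normal(size=n_lags), W1, b1, w2, b2)
```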
Xi, Jun; Xue, Yujing; Xu, Yinxiang; Shen, Yuhong
2013-11-01
In this study, the ultrahigh pressure extraction of green tea polyphenols was modeled and optimized by a three-layer artificial neural network. A feed-forward neural network trained with an error back-propagation algorithm was used to evaluate the effects of pressure, liquid/solid ratio and ethanol concentration on the total phenolic content of green tea extracts. The neural network coupled with genetic algorithms was also used to optimize the conditions needed to obtain the highest yield of tea polyphenols. The optimal architecture obtained was a feed-forward neural network with three input neurons, one hidden layer with eight neurons and an output layer with a single neuron. The trained network gave a minimum MSE of 0.03 and a maximum R² of 0.9571, which implied good agreement between the predicted and actual values and confirmed good generalization of the network. Based on the combination of the neural network and genetic algorithms, the optimum extraction conditions for the highest yield of green tea polyphenols were determined as follows: 498.8 MPa for pressure, 20.8 mL/g for liquid/solid ratio and 53.6% for ethanol concentration. The total phenolic content actually measured under the optimum predicted extraction conditions was 582.4 ± 0.63 mg/g DW, which matched the predicted value (597.2 mg/g DW) well. This suggests that the artificial neural network model described in this work is an efficient quantitative tool to predict the extraction efficiency of green tea polyphenols.
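The coupling of a trained surrogate network with a genetic-algorithm search can be sketched as below. Here `surrogate_yield` is a hypothetical stand-in for the trained ANN (a smooth function whose peak is placed at the optimum reported in the abstract), and the GA is a minimal real-coded variant with elitism and Gaussian mutation, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate_yield(x):
    # Stand-in for the trained ANN: smooth, single peak near the reported
    # optimum (pressure MPa, liquid/solid mL/g, ethanol %). Hypothetical.
    opt = np.array([498.8, 20.8, 53.6])
    scale = np.array([200.0, 15.0, 30.0])
    return 600.0 * np.exp(-np.sum(((x - opt) / scale) ** 2))

def ga_maximize(f, lo, hi, pop=40, gens=60):
    # Minimal real-coded GA: keep the fitter half, mutate it, clip to bounds.
    P = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        fit = np.array([f(x) for x in P])
        elite = P[np.argsort(fit)[-pop // 2:]]               # best half survives
        children = elite + rng.normal(0.0, 0.05, elite.shape) * (hi - lo)
        P = np.clip(np.vstack([elite, children]), lo, hi)
    fit = np.array([f(x) for x in P])
    return P[np.argmax(fit)]

lo = np.array([100.0, 5.0, 20.0])    # assumed search bounds, for illustration
hi = np.array([600.0, 40.0, 80.0])
best = ga_maximize(surrogate_yield, lo, hi)
```

Because the elite survives unchanged each generation, the best candidate's fitness never decreases, so the search reliably climbs toward the surrogate's peak.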
Anti-correlated cortical networks arise from spontaneous neuronal dynamics at slow timescales.
Kodama, Nathan X; Feng, Tianyi; Ullett, James J; Chiel, Hillel J; Sivakumar, Siddharth S; Galán, Roberto F
2018-01-12
In the highly interconnected architectures of the cerebral cortex, recurrent intracortical loops disproportionately outnumber thalamo-cortical inputs. These networks are also capable of generating neuronal activity without feedforward sensory drive. It is unknown, however, what spatiotemporal patterns may be solely attributed to intrinsic connections of the local cortical network. Using high-density microelectrode arrays, here we show that in the isolated, primary somatosensory cortex of mice, neuronal firing fluctuates on timescales from milliseconds to tens of seconds. Slower firing fluctuations reveal two spatially distinct neuronal ensembles, which correspond to superficial and deeper layers. These ensembles are anti-correlated: when one fires more, the other fires less and vice versa. This interplay is clearest at timescales of several seconds and is therefore consistent with shifts between active sensing and anticipatory behavioral states in mice.
Cusps enable line attractors for neural computation
Xiao, Zhuocheng; Zhang, Jiwei; Sornborger, Andrew T.; Tao, Louis
2017-11-07
Line attractors in neuronal networks have been suggested to be the basis of many brain functions, such as working memory, oculomotor control, head movement, locomotion, and sensory processing. In this paper, we make the connection between line attractors and pulse gating in feed-forward neuronal networks. In this context, because of their neutral stability along a one-dimensional manifold, line attractors are associated with a time-translational invariance that allows graded information to be propagated from one neuronal population to the next. To understand how pulse-gating manifests itself in a high-dimensional, nonlinear, feedforward integrate-and-fire network, we use a Fokker-Planck approach to analyze system dynamics. We make a connection between pulse-gated propagation in the Fokker-Planck and population-averaged mean-field (firing rate) models, and then identify an approximate line attractor in state space as the essential structure underlying graded information propagation. An analysis of the line attractor shows that it consists of three fixed points: a central saddle with an unstable manifold along the line and stable manifolds orthogonal to the line, which is surrounded on either side by stable fixed points. Along the manifold defined by the fixed points, slow dynamics give rise to a ghost. We show that this line attractor arises at a cusp catastrophe, where a fold bifurcation develops as a function of synaptic noise, and that the ghost dynamics near the fold of the cusp underlie the robustness of the line attractor. Understanding the dynamical aspects of this cusp catastrophe allows us to show how line attractors can persist in biologically realistic neuronal networks and how the interplay of pulse gating, synaptic coupling, and neuronal stochasticity can be used to enable attracting one-dimensional manifolds and, thus, dynamically control the processing of graded information.
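The "ghost" slowdown near the fold described in the abstract is a generic saddle-node phenomenon. It can be illustrated with the one-dimensional normal form dx/dt = μ + x² (our sketch, not the paper's Fokker-Planck model): just past the fold (μ → 0+), trajectories linger where the two fixed points vanished, and the passage time scales like π/√μ.

```python
def passage_time(mu, x0=-10.0, x1=10.0, dt=1e-4):
    # Forward-Euler integration of dx/dt = mu + x^2 from x0 to x1.
    # For small mu > 0 the trajectory crawls through the "ghost" near x = 0.
    x, t = x0, 0.0
    while x < x1:
        x += dt * (mu + x * x)
        t += dt
    return t

t_small = passage_time(0.01)   # long dwell: ghost dynamics dominate
t_large = passage_time(1.0)    # fast transit, far from the fold
```

The order-of-magnitude gap between the two passage times is the signature of the slow manifold that, in the paper's network, lets graded activity persist along the approximate line attractor.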
Mechanisms underlying a thalamocortical transformation during active tactile sensation
Gutnisky, Diego Adrian; Yu, Jianing; Hires, Samuel Andrew; To, Minh-Son; Svoboda, Karel
2017-01-01
During active somatosensation, neural signals expected from movement of the sensors are suppressed in the cortex, whereas information related to touch is enhanced. This tactile suppression underlies low-noise encoding of relevant tactile features and the brain’s ability to make fine tactile discriminations. Layer (L) 4 excitatory neurons in the barrel cortex, the major target of the somatosensory thalamus (VPM), respond to touch, but have low spike rates and low sensitivity to the movement of whiskers. Most neurons in VPM respond to touch and also show an increase in spike rate with whisker movement. Therefore, signals related to self-movement are suppressed in L4. Fast-spiking (FS) interneurons in L4 show similar dynamics to VPM neurons. Stimulation of halorhodopsin in FS interneurons causes a reduction in FS neuron activity and an increase in L4 excitatory neuron activity. This decrease of activity of L4 FS neurons contradicts the "paradoxical effect" predicted in networks stabilized by inhibition and in strongly-coupled networks. To explain these observations, we constructed a model of the L4 circuit, with connectivity constrained by in vitro measurements. The model explores the various synaptic conductance strengths for which L4 FS neurons actively suppress baseline and movement-related activity in layer 4 excitatory neurons. Feedforward inhibition, in concert with recurrent intracortical circuitry, produces tactile suppression. Synaptic delays in feedforward inhibition allow transmission of temporally brief volleys of activity associated with touch. Our model provides a mechanistic explanation of a behavior-related computation implemented by the thalamocortical circuit. PMID:28591219
Central Control of Brown Adipose Tissue Thermogenesis
Morrison, Shaun F.; Madden, Christopher J.; Tupone, Domenico
2011-01-01
Thermogenesis, the production of heat energy, is an essential component of the homeostatic repertoire to maintain body temperature during the challenge of low environmental temperature and plays a key role in elevating body temperature during the febrile response to infection. Mitochondrial oxidation in brown adipose tissue (BAT) is a significant source of neurally regulated metabolic heat production in many species from mouse to man. BAT thermogenesis is regulated by neural networks in the central nervous system which respond to feedforward afferent signals from cutaneous and core body thermoreceptors and to feedback signals from brain thermosensitive neurons to activate BAT sympathetic nerve activity. This review summarizes the research leading to a model of the feedforward reflex pathway through which environmental cold stimulates BAT thermogenesis and includes the influence on this thermoregulatory network of the pyrogenic mediator, prostaglandin E2, to increase body temperature during fever. The cold thermal afferent circuit from cutaneous thermal receptors, through second-order thermosensory neurons in the dorsal horn of the spinal cord, ascends to activate neurons in the lateral parabrachial nucleus, which drive GABAergic interneurons in the preoptic area (POA) to inhibit warm-sensitive, inhibitory output neurons of the POA. The resulting disinhibition of BAT thermogenesis-promoting neurons in the dorsomedial hypothalamus activates BAT sympathetic premotor neurons in the rostral ventromedial medulla, including the rostral raphe pallidus, which provide excitatory, and possibly disinhibitory, inputs to spinal sympathetic circuits to drive BAT thermogenesis. Other recently recognized central sites influencing BAT thermogenesis and energy expenditure are also described. PMID:22389645
Central control of thermogenesis in mammals
Morrison, Shaun F.; Nakamura, Kazuhiro; Madden, Christopher J.
2008-01-01
Thermogenesis, the production of heat energy, is an essential component of the homeostatic repertoire to maintain body temperature in mammals and birds during the challenge of low environmental temperature and plays a key role in elevating body temperature during the febrile response to infection. The primary sources of neurally regulated metabolic heat production are mitochondrial oxidation in brown adipose tissue, increases in heart rate and shivering in skeletal muscle. Thermogenesis is regulated in each of these tissues by parallel networks in the central nervous system, which respond to feedforward afferent signals from cutaneous and core body thermoreceptors and to feedback signals from brain thermosensitive neurons to activate the appropriate sympathetic and somatic efferents. This review summarizes the research leading to a model of the feedforward reflex pathway through which environmental cold stimulates thermogenesis and discusses the influence on this thermoregulatory network of the pyrogenic mediator, prostaglandin E2, to increase body temperature. The cold thermal afferent circuit from cutaneous thermal receptors ascends via second-order thermosensory neurons in the dorsal horn of the spinal cord to activate neurons in the lateral parabrachial nucleus, which drive GABAergic interneurons in the preoptic area to inhibit warm-sensitive, inhibitory output neurons of the preoptic area. The resulting disinhibition of thermogenesis-promoting neurons in the dorsomedial hypothalamus and possibly of sympathetic and somatic premotor neurons in the rostral ventromedial medulla, including the raphe pallidus, activates excitatory inputs to spinal sympathetic and somatic motor circuits to drive thermogenesis. PMID:18469069
Sengupta, Ranit
2015-01-01
Despite recent progress in our understanding of sensorimotor integration in speech learning, a comprehensive framework to investigate its neural basis is lacking at behaviorally relevant timescales. Structural and functional imaging studies in humans have helped us identify brain networks that support speech but fail to capture the precise spatiotemporal coordination within the networks that takes place during speech learning. Here we use neuronal oscillations to investigate interactions within speech motor networks in a paradigm of speech motor adaptation under altered feedback, with continuous recording of EEG, in which subjects adapted to the real-time auditory perturbation of a target vowel sound. As subjects adapted to the task, concurrent changes were observed in the theta-gamma phase coherence during speech planning at several distinct scalp regions, consistent with the establishment of a feedforward map. In particular, there was an increase in coherence over the central region and a decrease over the fronto-temporal regions, revealing a redistribution of coherence over an interacting network of brain regions that could be a general feature of error-based motor learning. Our findings have implications for understanding the neural basis of speech motor learning and could elucidate how transient breakdown of neuronal communication within speech networks relates to speech disorders. PMID:25632078
Bayati, Mehdi; Valizadeh, Alireza; Abbassian, Abdolhossein; Cheng, Sen
2015-01-01
Many experimental and theoretical studies have suggested that the reliable propagation of synchronous neural activity is crucial for neural information processing. The propagation of synchronous firing activity in so-called synfire chains has been studied extensively in feed-forward networks of spiking neurons. However, it remains unclear how such neural activity could emerge in recurrent neuronal networks through synaptic plasticity. In this study, we investigate whether local excitation, i.e., neurons that fire at a higher frequency than the other, spontaneously active neurons in the network, can shape a network to allow for synchronous activity propagation. We use two-dimensional, locally connected and heterogeneous neuronal networks with spike-timing dependent plasticity (STDP). We find that, in our model, local excitation drives profound network changes within seconds. In the emergent network, neural activity propagates synchronously through the network. This activity originates from the site of the local excitation and propagates through the network. The synchronous activity propagation persists, even when the local excitation is removed, since it derives from the synaptic weight matrix. Importantly, once this connectivity is established it remains stable even in the presence of spontaneous activity. Our results suggest that synfire-chain-like activity can emerge in a relatively simple way in realistic neural networks by locally exciting the desired origin of the neuronal sequence. PMID:26089794
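The STDP mechanism driving this reorganization can be sketched with the standard pair-based rule. The amplitudes and time constant below are textbook-style illustrative values, not parameters from this study: repeated pre-before-post pairings at a fixed lag strengthen the forward synapse and depress the reverse one, the basic ingredient for carving feed-forward pathways out of a recurrent network.

```python
import numpy as np

A_plus, A_minus, tau = 0.01, 0.012, 20.0   # tau in ms; illustrative values

def stdp_dw(dt_ms):
    # Pair-based STDP window; dt_ms = t_post - t_pre.
    # Pre-before-post (dt > 0) potentiates; the reverse order depresses.
    if dt_ms > 0:
        return A_plus * np.exp(-dt_ms / tau)
    return -A_minus * np.exp(dt_ms / tau)

# Repeated pairings in which the post neuron lags the pre neuron by 5 ms
w_fwd = w_bwd = 0.5
for _ in range(100):
    w_fwd = w_fwd + stdp_dw(+5.0)              # forward synapse potentiates
    w_bwd = max(0.0, w_bwd + stdp_dw(-5.0))    # reverse synapse decays to zero
```

After enough pairings the reverse weight is pruned entirely while the forward weight grows, which is the single-synapse version of the directed, synfire-chain-like structure the study observes emerging around the site of local excitation.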
Variable synaptic strengths controls the firing rate distribution in feedforward neural networks.
Ly, Cheng; Marsat, Gary
2018-02-01
Heterogeneity of firing rate statistics is known to have severe consequences on neural coding. Recent experimental recordings in weakly electric fish indicate that the distribution-width of superficial pyramidal cell firing rates (trial- and time-averaged) in the electrosensory lateral line lobe (ELL) depends on the stimulus, and also that network inputs can mediate changes in the firing rate distribution across the population. We previously developed theoretical methods to understand how two attributes (synaptic and intrinsic heterogeneity) interact and alter the firing rate distribution in a population of integrate-and-fire neurons with random recurrent coupling. Inspired by our experimental data, we extend these theoretical results to a delayed feedforward spiking network that qualitatively captures the changes of firing rate heterogeneity observed in in vivo recordings. We demonstrate how heterogeneous neural attributes alter firing rate heterogeneity, accounting for the effect with various sensory stimuli. The model predicts how the strength of the effective network connectivity is related to intrinsic heterogeneity in such delayed feedforward networks: the strength of the feedforward input is positively correlated with excitability (threshold value for spiking) when firing rate heterogeneity is low and is negatively correlated with excitability with high firing rate heterogeneity. We also show how our theory can be used to predict effective neural architecture. We demonstrate that neural attributes do not interact in a simple manner but rather in a complex stimulus-dependent fashion to control neural heterogeneity, and discuss how it can ultimately shape population codes.
Kriegeskorte, Nikolaus
2015-11-24
Recent advances in neural network modeling have enabled major strides in computer vision and other artificial intelligence applications. Human-level visual recognition abilities are coming within reach of artificial systems. Artificial neural networks are inspired by the brain, and their computations could be implemented in biological neurons. Convolutional feedforward networks, which now dominate computer vision, take further inspiration from the architecture of the primate visual hierarchy. However, the current models are designed with engineering goals, not to model brain computations. Nevertheless, initial studies comparing internal representations between these models and primate brains find surprisingly similar representational spaces. With human-level performance no longer out of reach, we are entering an exciting new era, in which we will be able to build biologically faithful feedforward and recurrent computational models of how biological brains perform high-level feats of intelligence, including vision.
Neural networks within multi-core optic fibers
Cohen, Eyal; Malka, Dror; Shemer, Amir; Shahmoon, Asaf; Zalevsky, Zeev; London, Michael
2016-01-01
Hardware implementation of artificial neural networks facilitates real-time parallel processing of massive data sets. Optical neural networks offer low-volume 3D connectivity together with large bandwidth and minimal heat production in contrast to electronic implementation. Here, we present a conceptual design for in-fiber optical neural networks. Neurons and synapses are realized as individual silica cores in a multi-core fiber. Optical signals are transferred transversely between cores by means of optical coupling. Pump driven amplification in erbium-doped cores mimics synaptic interactions. We simulated three-layered feed-forward neural networks and explored their capabilities. Simulations suggest that networks can differentiate between given inputs depending on specific configurations of amplification; this implies classification and learning capabilities. Finally, we tested experimentally our basic neuronal elements using fibers, couplers, and amplifiers, and demonstrated that this configuration implements a neuron-like function. Therefore, devices similar to our proposed multi-core fiber could potentially serve as building blocks for future large-scale small-volume optical artificial neural networks. PMID:27383911
Pesavento, Michael J; Pinto, David J
2012-11-01
Rapidly changing environments require rapid processing from sensory inputs. Varying deflection velocities of a rodent's primary facial vibrissa cause varying temporal neuronal activity profiles within the ventral posteromedial thalamic nucleus. Local neuron populations in a single somatosensory layer 4 barrel transform sparsely coded input into a spike count based on the input's temporal profile. We investigate this transformation by creating a barrel-like hybrid network with whole cell recordings of in vitro neurons from a cortical slice preparation, embedding the biological neuron in the simulated network by presenting virtual synaptic conductances via a conductance clamp. Utilizing the hybrid network, we examine the reciprocal network properties (local excitatory and inhibitory synaptic convergence) and neuronal membrane properties (input resistance) by altering the barrel population response to diverse thalamic input. In the presence of local network input, neurons are more selective to thalamic input timing; this arises from strong feedforward inhibition. Strongly inhibitory (damping) network regimes are more selective to timing and less selective to the magnitude of input but require stronger initial input. Input selectivity relies heavily on the different membrane properties of excitatory and inhibitory neurons. When inhibitory and excitatory neurons had identical membrane properties, the sensitivity of in vitro neurons to temporal vs. magnitude features of input was substantially reduced. Increasing the mean leak conductance of the inhibitory cells decreased the network's temporal sensitivity, whereas increasing excitatory leak conductance enhanced magnitude sensitivity. Local network synapses are essential in shaping thalamic input, and differing membrane properties of functional classes reciprocally modulate this effect.
NASA Technical Reports Server (NTRS)
Tawel, Raoul (Inventor)
1994-01-01
A method for the rapid learning of nonlinear mappings and topological transformations using a dynamically reconfigurable artificial neural network is presented. This fully-recurrent Adaptive Neuron Model (ANM) network was applied to the highly degenerate inverse kinematics problem in robotics, and its performance evaluation is benchmarked. Once trained, the resulting neuromorphic architecture was implemented in custom analog neural network hardware and the parameters capturing the functional transformation downloaded onto the system. This neuroprocessor, capable of 10^9 ops/sec, was interfaced directly to a three-degree-of-freedom Heathkit robotic manipulator. Calculation of the hardware feed-forward pass for this mapping was benchmarked at approximately 10 microseconds.
A case for spiking neural network simulation based on configurable multiple-FPGA systems.
Yang, Shufan; Wu, Qiang; Li, Renfa
2011-09-01
Recent neuropsychological research has begun to reveal that neurons encode information in the timing of spikes. Spiking neural network simulations are a flexible and powerful method for investigating the behaviour of neuronal systems. Software simulation of spiking neural networks cannot rapidly generate output spikes for large-scale networks. An alternative approach, hardware implementation of such a system, offers the possibility of generating independent spikes precisely and outputting spike waves simultaneously in real time, provided that the spiking neural network can take full advantage of the hardware's inherent parallelism. In this work we introduce a configurable FPGA-oriented hardware platform for spiking neural network simulation. We aim to use this platform to combine the speed of dedicated hardware with the programmability of software, allowing neuroscientists to assemble sophisticated computational experiments around their own models. A feed-forward hierarchical network is developed as a case study to describe the operation of biological neural systems (such as orientation selectivity in visual cortex) and computational models of such systems. This model demonstrates how a feed-forward neural network constructs the circuitry required for orientation selectivity and provides a platform for reaching a deeper understanding of the primate visual system. In the future, larger-scale models based on this framework can be used to replicate the actual architecture of visual cortex, leading to more detailed predictions and insights into visual perception phenomena.
NASA Technical Reports Server (NTRS)
Ross, Muriel D.
1991-01-01
The three-dimensional organization of the vestibular macula is under study by computer-assisted reconstruction and simulation methods as a model for more complex neural systems. One goal of this research is to transition knowledge of biological neural network architecture and functioning to computer technology, to contribute to the development of thinking computers. Maculas are organized as weighted neural networks for parallel distributed processing of information. The network is characterized by non-linearity of its terminal/receptive fields. Wiring appears to develop through constrained randomness. A further property is the presence of two main circuits, highly channeled and distributed modifying, that are connected through feedforward-feedback collaterals and a biasing subcircuit. Computer simulations demonstrate that differences in the geometry of the feedback (afferent) collaterals affect the timing and the magnitude of voltage changes delivered to the spike initiation zone. Feedforward (efferent) collaterals act as voltage followers and likely inhibit neurons of the distributed modifying circuit. These results illustrate the importance of feedforward-feedback loops, of timing, and of inhibition in refining neural network output. They also suggest that it is the distributed modifying network that is most involved in adaptation, memory, and learning. Tests of macular adaptation, through hyper- and microgravitational studies, support this hypothesis since synapses in the distributed modifying circuit, but not the channeled circuit, are altered. Transitioning knowledge of biological systems to computer technology, however, remains problematical.
How does the brain solve visual object recognition?
Zoccolan, Davide; Rust, Nicole C.
2012-01-01
Mounting evidence suggests that “core object recognition,” the ability to rapidly recognize objects despite substantial appearance variation, is solved in the brain via a cascade of reflexive, largely feedforward computations that culminate in a powerful neuronal representation in the inferior temporal cortex. However, the algorithm that produces this solution remains little-understood. Here we review evidence ranging from individual neurons, to neuronal populations, to behavior, to computational models. We propose that understanding this algorithm will require using neuronal and psychophysical data to sift through many computational models, each based on building blocks of small, canonical sub-networks with a common functional goal. PMID:22325196
Fu, Si-Yao; Yang, Guo-Sheng; Kuai, Xin-Kai
2012-01-01
In this paper, we present a quantitative, highly structured cortex-simulated model, which can be simply described as feedforward, hierarchical simulation of ventral stream of visual cortex using biologically plausible, computationally convenient spiking neural network system. The motivation comes directly from recent pioneering works on detailed functional decomposition analysis of the feedforward pathway of the ventral stream of visual cortex and developments on artificial spiking neural networks (SNNs). By combining the logical structure of the cortical hierarchy and computing power of the spiking neuron model, a practical framework has been presented. As a proof of principle, we demonstrate our system on several facial expression recognition tasks. The proposed cortical-like feedforward hierarchy framework has the merit of capability of dealing with complicated pattern recognition problems, suggesting that, by combining the cognitive models with modern neurocomputational approaches, the neurosystematic approach to the study of cortex-like mechanism has the potential to extend our knowledge of brain mechanisms underlying the cognitive analysis and to advance theoretical models of how we recognize face or, more specifically, perceive other people's facial expression in a rich, dynamic, and complex environment, providing a new starting point for improved models of visual cortex-like mechanism. PMID:23193391
Mutual Information and Information Gating in Synfire Chains
Xiao, Zhuocheng; Wang, Binxu; Sornborger, Andrew Tyler; ...
2018-02-01
Here, coherent neuronal activity is believed to underlie the transfer and processing of information in the brain. Coherent activity in the form of synchronous firing and oscillations has been measured in many brain regions and has been correlated with enhanced feature processing and other sensory and cognitive functions. In the theoretical context, synfire chains and the transfer of transient activity packets in feedforward networks have been appealed to in order to describe coherent spiking and information transfer. Recently, it has been demonstrated that the classical synfire chain architecture, with the addition of suitably timed gating currents, can support the graded transfer of mean firing rates in feedforward networks (called synfire-gated synfire chains, SGSCs). Here we study information propagation in SGSCs by examining mutual information as a function of layer number in a feedforward network. We explore the effects of gating and noise on information transfer in synfire chains and demonstrate that asymptotically, two main regions exist in parameter space where information may be propagated and its propagation is controlled by pulse-gating: a large region where binary codes may be propagated, and a smaller region near a cusp in parameter space that supports graded propagation across many layers.
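The layer-by-layer loss of information in such a chain can be illustrated with a toy binary relay. This sketch is not the SGSC model itself (no gating dynamics; the flip probability is hypothetical), but it shows how a plug-in mutual-information estimate decays with layer number when each layer propagates a binary code through noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def mutual_information(x, y):
    """Plug-in estimate of I(X;Y) in bits for two binary arrays."""
    mi = 0.0
    for xv in (0, 1):
        for yv in (0, 1):
            pxy = np.mean((x == xv) & (y == yv))
            px, py = np.mean(x == xv), np.mean(y == yv)
            if pxy > 0:
                mi += pxy * np.log2(pxy / (px * py))
    return mi

# Propagate a binary code through a noisy feedforward chain: each layer
# relays its input bit, flipping it with probability p_flip.
n_trials, n_layers, p_flip = 20000, 10, 0.05
stimulus = rng.integers(0, 2, n_trials)
layer = stimulus.copy()
mi_per_layer = []
for _ in range(n_layers):
    flips = rng.random(n_trials) < p_flip
    layer = np.where(flips, 1 - layer, layer)
    mi_per_layer.append(mutual_information(stimulus, layer))

# Information about the stimulus decays with depth.
print(round(mi_per_layer[0], 3), round(mi_per_layer[-1], 3))
```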
DeMarse, Thomas B; Pan, Liangbin; Alagapan, Sankaraleengam; Brewer, Gregory J; Wheeler, Bruce C
2016-01-01
Transient propagation of information across neuronal assemblies is thought to underlie many cognitive processes. However, the nature of the neural code that is embedded within these transmissions remains uncertain. Much of our understanding of how information is transmitted among these assemblies has been derived from computational models. While these models have been instrumental in understanding these processes they often make simplifying assumptions about the biophysical properties of neurons that may influence the nature and properties expressed. To address this issue we created an in vitro analog of a feed-forward network composed of two small populations (also referred to as assemblies or layers) of living dissociated rat cortical neurons. The populations were separated by, and communicated through, a microelectromechanical systems (MEMS) device containing a strip of microscale tunnels. Delayed culturing of one population in the first layer followed by the second a few days later induced the unidirectional growth of axons through the microtunnels, resulting in primarily feed-forward communication between these two small neural populations. In this study we systematically manipulated the number of tunnels that connected each layer and hence, the number of axons providing communication between those populations. We then assessed the effect that reducing the number of tunnels has upon between-layer communication capacity and the fidelity of neural transmission among spike trains transmitted across and within layers. We show evidence based on Victor-Purpura's and van Rossum's spike train similarity metrics supporting the presence of both rate and temporal information embedded within these transmissions, whose fidelity increased during communication both between and within layers when the number of tunnels was increased.
We also provide evidence reinforcing the role of synchronized activity in transmission fidelity during the spontaneous synchronized network burst events that propagated between layers, and highlight the potential applications of these MEMS devices as a tool for further investigation of structural and functional dynamics among neural populations.
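Of the two spike-train similarity metrics named above, van Rossum's is the simpler to sketch: each spike train is convolved with a causal exponential kernel and the L2 distance between the filtered traces is taken. A minimal implementation with hypothetical spike times and time constant:

```python
import numpy as np

def van_rossum_distance(t1, t2, tau=0.02, dt=0.001, t_max=1.0):
    """van Rossum distance: convolve each spike train with a causal
    exponential kernel, then take the L2 distance between the traces."""
    times = np.arange(0.0, t_max, dt)
    def filtered(spikes):
        trace = np.zeros_like(times)
        for s in spikes:
            mask = times >= s
            trace[mask] += np.exp(-(times[mask] - s) / tau)
        return trace
    diff = filtered(t1) - filtered(t2)
    return np.sqrt(np.sum(diff ** 2) * dt / tau)

identical = van_rossum_distance([0.1, 0.5], [0.1, 0.5])
jittered = van_rossum_distance([0.1, 0.5], [0.12, 0.5])   # one spike jittered
shifted = van_rossum_distance([0.1, 0.5], [0.3, 0.8])     # both spikes moved far
print(identical, jittered, shifted)  # expect 0 < jittered < shifted
```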
Analog hardware for learning neural networks
NASA Technical Reports Server (NTRS)
Eberhardt, Silvio P. (Inventor)
1991-01-01
This is a recurrent or feedforward analog neural network processor having a multi-level neuron array and a synaptic matrix for storing weighted analog values of synaptic connection strengths which is characterized by temporarily changing one connection strength at a time to determine its effect on system output relative to the desired target. That connection strength is then adjusted based on the effect, whereby the processor is taught the correct response to training examples connection by connection.
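The training procedure described in this abstract (perturb one connection at a time, measure the effect on output error, then adjust) is essentially weight perturbation. Below is a minimal software sketch with a single tanh unit and hypothetical values, not the patented analog hardware implementation:

```python
import numpy as np

rng = np.random.default_rng(5)

def forward(w, x):
    return np.tanh(w @ x)

x = np.array([0.5, -0.3, 0.8])    # one training example (hypothetical)
target = 0.2                       # desired output
w = rng.normal(size=3) * 0.1       # synaptic connection strengths
delta, eta = 1e-3, 0.5             # perturbation size, learning rate

for _ in range(500):
    for i in range(len(w)):
        base_err = (forward(w, x) - target) ** 2
        w[i] += delta                        # temporarily change one weight
        pert_err = (forward(w, x) - target) ** 2
        w[i] -= delta                        # restore it
        grad_est = (pert_err - base_err) / delta
        w[i] -= eta * grad_est               # adjust based on measured effect

final_err = abs(forward(w, x) - target)
print(final_err)
```

Each weight is nudged, the change in output error is measured against the target, and the weight is then moved in the direction that reduced the error, connection by connection, exactly the loop the processor runs in hardware.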
Patterns of synchrony for feed-forward and auto-regulation feed-forward neural networks.
Aguiar, Manuela A D; Dias, Ana Paula S; Ferreira, Flora
2017-01-01
We consider feed-forward and auto-regulation feed-forward neural (weighted) coupled cell networks. In feed-forward neural networks, cells are arranged in layers such that the cells of the first layer have an empty input set and cells of each subsequent layer receive inputs only from cells of the previous layer. An auto-regulation feed-forward neural coupled cell network is a feed-forward neural network where additionally some cells of the first layer have auto-regulation, that is, they have a self-loop. Given a network structure, a robust pattern of synchrony is a space defined in terms of equalities of cell coordinates that is flow-invariant for any coupled cell system (with additive input structure) associated with the network. In this paper, we describe the robust patterns of synchrony for feed-forward and auto-regulation feed-forward neural networks. Regarding feed-forward neural networks, we show that only cells in the same layer can synchronize. On the other hand, in the presence of auto-regulation, we prove that cells in different layers can synchronize in a robust way and we give a characterization of the possible patterns of synchrony that can occur for auto-regulation feed-forward neural networks.
Balanced feedforward inhibition and dominant recurrent inhibition in olfactory cortex
Large, Adam M.; Vogler, Nathan W.; Mielo, Samantha; Oswald, Anne-Marie M.
2016-01-01
Throughout the brain, the recruitment of feedforward and recurrent inhibition shapes neural responses. However, disentangling the relative contributions of these often-overlapping cortical circuits is challenging. The piriform cortex provides an ideal system to address this issue because the interneurons responsible for feedforward and recurrent inhibition are anatomically segregated in layer (L) 1 and L2/3 respectively. Here we use a combination of optical and electrical activation of interneurons to profile the inhibitory input received by three classes of principal excitatory neuron in the anterior piriform cortex. In all classes, we find that L1 interneurons provide weaker inhibition than L2/3 interneurons. Nonetheless, feedforward inhibitory strength covaries with the amount of afferent excitation received by each class of principal neuron. In contrast, intracortical stimulation of L2/3 evokes strong inhibition that dominates recurrent excitation in all classes. Finally, we find that the relative contributions of feedforward and recurrent pathways differ between principal neuron classes. Specifically, L2 neurons receive more reliable afferent drive and less overall inhibition than L3 neurons. Alternatively, L3 neurons receive substantially more intracortical inhibition. These three features—balanced afferent drive, dominant recurrent inhibition, and differential recruitment by afferent vs. intracortical circuits, dependent on cell class—suggest mechanisms for olfactory processing that may extend to other sensory cortices. PMID:26858458
Feedback Enhances Feedforward Figure-Ground Segmentation by Changing Firing Mode
Supèr, Hans; Romeo, August
2011-01-01
In the visual cortex, feedback projections are conjectured to be crucial in figure-ground segregation. However, the precise function of feedback herein is unclear. Here we tested a hypothetical model of reentrant feedback. We used a previously developed 2-layered feedforward spiking network that is able to segregate figure from ground and included feedback connections. Our computer model data show that without feedback, neurons respond with regular low-frequency (∼9 Hz) bursting to a figure-ground stimulus. After including feedback the firing pattern changed into a regular (tonic) spiking pattern. In this state, we found an extra enhancement of figure responses and a further suppression of background responses resulting in a stronger figure-ground signal. Such a push-pull effect was confirmed by comparing the figure-ground responses with the responses to a homogeneous texture. We propose that feedback controls figure-ground segregation by influencing the neural firing patterns of feedforward projecting neurons. PMID:21738747
Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition.
Spoerer, Courtney J; McClure, Patrick; Kriegeskorte, Nikolaus
2017-01-01
Feedforward neural networks provide the dominant model of how the brain performs visual object recognition. However, these networks lack the lateral and feedback connections, and the resulting recurrent neuronal dynamics, of the ventral visual pathway in the human and non-human primate brain. Here we investigate recurrent convolutional neural networks with bottom-up (B), lateral (L), and top-down (T) connections. Combining these types of connections yields four architectures (B, BT, BL, and BLT), which we systematically test and compare. We hypothesized that recurrent dynamics might improve recognition performance in the challenging scenario of partial occlusion. We introduce two novel occluded object recognition tasks to test the efficacy of the models, digit clutter (where multiple target digits occlude one another) and digit debris (where target digits are occluded by digit fragments). We find that recurrent neural networks outperform feedforward control models (approximately matched in parametric complexity) at recognizing objects, both in the absence of occlusion and in all occlusion conditions. Recurrent networks were also found to be more robust to the inclusion of additive Gaussian noise. Recurrent neural networks are better in two respects: (1) they are more neurobiologically realistic than their feedforward counterparts; (2) they are better in terms of their ability to recognize objects, especially under challenging conditions. This work shows that computer vision can benefit from using recurrent convolutional architectures and suggests that the ubiquitous recurrent connections in biological brains are essential for task performance.
Park, Kellie A; Ribic, Adema; Laage Gaupp, Fabian M; Coman, Daniel; Huang, Yuegao; Dulla, Chris G; Hyder, Fahmeed; Biederer, Thomas
2016-07-13
Select adhesion proteins control the development of synapses and modulate their structural and functional properties. Despite these important roles, the extent to which different synapse-organizing mechanisms act across brain regions to establish connectivity and regulate network properties is incompletely understood. Further, their functional roles in different neuronal populations remain to be defined. Here, we applied diffusion tensor imaging (DTI), a modality of magnetic resonance imaging (MRI), to map connectivity changes in knock-out (KO) mice lacking the synaptogenic cell adhesion protein SynCAM 1. This identified reduced fractional anisotropy in the hippocampal CA3 area in absence of SynCAM 1. In agreement, mossy fiber refinement in CA3 was impaired in SynCAM 1 KO mice. Mossy fibers make excitatory inputs onto postsynaptic specializations of CA3 pyramidal neurons termed thorny excrescences and these structures were smaller in the absence of SynCAM 1. However, the most prevalent targets of mossy fibers are GABAergic interneurons and SynCAM 1 loss unexpectedly reduced the number of excitatory terminals onto parvalbumin (PV)-positive interneurons in CA3. SynCAM 1 KO mice additionally exhibited lower postsynaptic GluA1 expression in these PV-positive interneurons. These synaptic imbalances in SynCAM 1 KO mice resulted in CA3 disinhibition, in agreement with reduced feedforward inhibition in this network in the absence of SynCAM 1-dependent excitatory drive onto interneurons. In turn, mice lacking SynCAM 1 were impaired in memory tasks involving CA3. Our results support that SynCAM 1 modulates excitatory mossy fiber inputs onto both interneurons and principal neurons in the hippocampal CA3 area to balance network excitability. This study advances our understanding of synapse-organizing mechanisms on two levels. First, the data support that synaptogenic proteins guide connectivity and can function in distinct brain regions even if they are expressed broadly. 
Second, the results demonstrate that a synaptogenic process that controls excitatory inputs to both pyramidal neurons and interneurons can balance excitation and inhibition. Specifically, the study reveals that hippocampal CA3 connectivity is modulated by the synapse-organizing adhesion protein SynCAM 1 and identifies a novel, SynCAM 1-dependent mechanism that controls excitatory inputs onto parvalbumin-positive interneurons. This enables SynCAM 1 to regulate feedforward inhibition and set network excitability. Further, we show that diffusion tensor imaging is sensitive to these cellular refinements affecting neuronal connectivity.
Predicting non-linear dynamics by stable local learning in a recurrent spiking neural network.
Gilra, Aditya; Gerstner, Wulfram
2017-11-27
The brain needs to predict how the body reacts to motor commands, but how a network of spiking neurons can learn non-linear body dynamics using local, online and stable learning rules is unclear. Here, we present a supervised learning scheme for the feedforward and recurrent connections in a network of heterogeneous spiking neurons. The error in the output is fed back through fixed random connections with a negative gain, causing the network to follow the desired dynamics. The rule for Feedback-based Online Local Learning Of Weights (FOLLOW) is local in the sense that weight changes depend on the presynaptic activity and the error signal projected onto the postsynaptic neuron. We provide examples of learning linear, non-linear and chaotic dynamics, as well as the dynamics of a two-link arm. Under reasonable approximations, we show, using the Lyapunov method, that FOLLOW learning is uniformly stable, with the error going to zero asymptotically.
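The core of the rule, a weight change driven by presynaptic activity times a fed-back error, can be sketched in a rate-based toy model. This is inspired by, but much simpler than, FOLLOW (no spiking neurons, no recurrent dynamics, hypothetical learning rate): a linear readout tracks a target using only local, online updates.

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal rate-based analogue of error-feedback learning: a linear
# readout w learns to reproduce a target output; each update is local,
# depending only on presynaptic activity r and the fed-back error e.
n_units, n_steps, eta = 50, 5000, 0.005
w = np.zeros(n_units)
w_true = rng.normal(size=n_units) / np.sqrt(n_units)  # hypothetical target mapping

errors = []
for t in range(n_steps):
    r = rng.normal(size=n_units)    # presynaptic rates
    target = w_true @ r             # desired output
    e = target - w @ r              # error fed back (negative gain)
    w += eta * e * r                # local, online weight change
    errors.append(e ** 2)

early, late = np.mean(errors[:100]), np.mean(errors[-100:])
print(early, late)  # squared error shrinks as learning proceeds
```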
Emergent spatial synaptic structure from diffusive plasticity.
Sweeney, Yann; Clopath, Claudia
2017-04-01
Some neurotransmitters can diffuse freely across cell membranes, influencing neighbouring neurons regardless of their synaptic coupling. This provides a means of neural communication, alternative to synaptic transmission, which can influence the way in which neural networks process information. Here, we ask whether diffusive neurotransmission can also influence the structure of synaptic connectivity in a network undergoing plasticity. We propose a form of Hebbian synaptic plasticity which is mediated by a diffusive neurotransmitter. Whenever a synapse is modified at an individual neuron through our proposed mechanism, similar but smaller modifications occur in synapses connecting to neighbouring neurons. The effects of this diffusive plasticity are explored in networks of rate-based neurons. This leads to the emergence of spatial structure in the synaptic connectivity of the network. We show that this spatial structure can coexist with other forms of structure in the synaptic connectivity, such as groups of strongly interconnected neurons that form in response to correlated external drive. Finally, we explore diffusive plasticity in a simple feedforward network model of receptive field development. We show that, as widely observed across sensory cortex, the preferred stimulus identities of neurons in our network become spatially correlated due to diffusion. Our proposed mechanism of diffusive plasticity provides an efficient mechanism for generating these spatial correlations in stimulus preference which can flexibly interact with other forms of synaptic organisation. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
The Influence of Synaptic Weight Distribution on Neuronal Population Dynamics
Buice, Michael; Koch, Christof; Mihalas, Stefan
2013-01-01
The manner in which different distributions of synaptic weights onto cortical neurons shape their spiking activity remains an open question. To characterize a homogeneous neuronal population, we use the master equation for generalized leaky integrate-and-fire neurons with shot-noise synapses. We develop fast semi-analytic numerical methods to solve this equation for either current or conductance synapses, with and without synaptic depression. We show that its solutions match simulations of equivalent neuronal networks better than those of the Fokker-Planck equation and we compute bounds on the network response to non-instantaneous synapses. We apply these methods to study different synaptic weight distributions in feed-forward networks. We characterize the synaptic amplitude distributions using a set of measures, called tail weight numbers, designed to quantify the preponderance of very strong synapses. Even if synaptic amplitude distributions are equated for both the total current and average synaptic weight, distributions with sparse but strong synapses produce higher responses for small inputs, leading to a larger operating range. Furthermore, despite their small number, such synapses enable the network to respond faster and with more stability in the face of external fluctuations. PMID:24204219
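The claim that sparse-but-strong synaptic amplitude distributions give larger responses to small inputs can be checked with a Monte Carlo toy. The threshold, weight values, and input statistics below are illustrative assumptions, not the paper's tail-weight-number analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 1.0                              # firing threshold (arbitrary units)

def response_prob(weights, rate, trials=20000):
    """P(summed synaptic input crosses threshold in one time window)."""
    spikes = rng.random((trials, len(weights))) < rate   # Bernoulli inputs
    return float(np.mean(spikes @ weights >= theta))

# two amplitude distributions with approximately equal total drive:
dense_weak = np.full(100, 0.05)                          # many weak synapses
sparse_strong = np.concatenate([np.full(4, 1.2),         # a heavy tail of
                                np.full(96, 0.002)])     # a few strong ones
```

For a weak input rate the sparse-strong population responds far more often — a single strong synapse can cross threshold on its own — while for strong inputs both distributions respond reliably, which is the larger operating range described above.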
Temporal integration at consecutive processing stages in the auditory pathway of the grasshopper.
Wirtssohn, Sarah; Ronacher, Bernhard
2015-04-01
Temporal integration in the auditory system of locusts was quantified by presenting single clicks and click pairs while performing intracellular recordings. Auditory neurons were studied at three processing stages, which form a feed-forward network in the metathoracic ganglion. Receptor neurons and most first-order interneurons ("local neurons") encode the signal envelope, while second-order interneurons ("ascending neurons") tend to extract more complex, behaviorally relevant sound features. In different neuron types of the auditory pathway we found three response types: no significant temporal integration (some ascending neurons), leaky energy integration (receptor neurons and some local neurons), and facilitatory processes (some local and ascending neurons). The receptor neurons integrated input over very short time windows (<2 ms). Temporal integration on longer time scales was found at subsequent processing stages, indicative of within-neuron computations and network activity. These different strategies, realized at separate processing stages and in parallel neuronal pathways within one processing stage, could enable the grasshopper's auditory system to evaluate longer time windows and thus to implement temporal filters, while at the same time maintaining a high temporal resolution. Copyright © 2015 the American Physiological Society.
Feature to prototype transition in neural networks
NASA Astrophysics Data System (ADS)
Krotov, Dmitry; Hopfield, John
Models of associative memory with higher order (higher than quadratic) interactions, and their relationship to neural networks used in deep learning, are discussed. Associative memory is conventionally described by recurrent neural networks with dynamical convergence to stable points. Deep learning typically uses feedforward neural nets without dynamics. However, a simple duality relates these two different views when applied to problems of pattern classification. From the perspective of associative memory such models deserve attention because they make it possible to store a much larger number of memories, compared to the quadratic case. In the dual description, these models correspond to feedforward neural networks with one hidden layer and unusual activation functions transmitting the activities of the visible neurons to the hidden layer. These activation functions are rectified polynomials of a higher degree rather than the rectified linear functions used in deep learning. The network learns representations of the data in terms of features for rectified linear functions, but as the power in the activation function is increased there is a gradual shift to a prototype-based representation; features and prototypes are the two extreme regimes of pattern recognition known in cognitive psychology.
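The higher-order associative memory can be made concrete with a minimal energy-based recall loop using rectified polynomial interactions, F(z) = max(z, 0)^n. This is a hedged sketch in the spirit of the model; the pattern count, network size, and degree are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, n = 64, 40, 3                    # units, stored memories, interaction order
F = lambda z: np.maximum(z, 0.0) ** n  # rectified polynomial activation

xi = rng.choice([-1.0, 1.0], size=(P, N))   # stored memories

def recall(sigma, sweeps=5):
    """Asynchronous descent on the energy E(sigma) = -sum_mu F(xi_mu . sigma)."""
    sigma = sigma.copy()
    for _ in range(sweeps):
        for i in range(N):
            s_plus, s_minus = sigma.copy(), sigma.copy()
            s_plus[i], s_minus[i] = 1.0, -1.0
            # flip unit i toward the lower-energy configuration
            sigma[i] = 1.0 if F(xi @ s_plus).sum() >= F(xi @ s_minus).sum() else -1.0
    return sigma
```

With n = 3 this stores P = 40 patterns in 64 units and still cleans up corrupted cues, illustrating the larger memory capacity available beyond the quadratic case.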
Tanaka, Takuma; Aoyagi, Toshio; Kaneko, Takeshi
2012-10-01
We propose a new principle for replicating receptive field properties of neurons in the primary visual cortex. We derive a learning rule for a feedforward network, which maintains a low firing rate for the output neurons (resulting in temporal sparseness) and allows only a small subset of the neurons in the network to fire at any given time (resulting in population sparseness). Our learning rule also sets the firing rates of the output neurons at each time step to near-maximum or near-minimum levels, resulting in neuronal reliability. The learning rule is simple enough to be written in spatially and temporally local forms. After the learning stage is performed using input image patches of natural scenes, output neurons in the model network are found to exhibit simple-cell-like receptive field properties. When the outputs of these simple-cell-like neurons are input to another model layer using the same learning rule, the second-layer output neurons after learning become less sensitive to the phase of gratings than the simple-cell-like input neurons. In particular, some of the second-layer output neurons become completely phase invariant, owing to the convergence of the connections from first-layer neurons with similar orientation selectivity to second-layer neurons in the model network. We examine the parameter dependencies of the receptive field properties of the model neurons after learning and discuss their biological implications. We also show that the localized learning rule is consistent with experimental results concerning neuronal plasticity and can replicate the receptive fields of simple and complex cells.
Gating-signal propagation by a feed-forward neural motif
NASA Astrophysics Data System (ADS)
Liang, Xiaoming; Yanchuk, Serhiy; Zhao, Liang
2013-07-01
We study the signal propagation in a feed-forward motif consisting of three bistable neurons: Two input neurons receive input signals and the third output neuron generates the output. We find that a weak input signal can be propagated from the input neurons to the output neuron without amplitude attenuation. We further reveal that the initial states of the input neurons and the coupling strength act as signal gates and determine whether the propagation is enhanced or not. We also investigate the effect of the input signal frequency on enhanced signal propagation.
Perisse, Emmanuel; Owald, David; Barnstedt, Oliver; Talbot, Clifford B; Huetteroth, Wolf; Waddell, Scott
2016-06-01
In Drosophila, negatively reinforcing dopaminergic neurons also provide the inhibitory control of satiety over appetitive memory expression. Here we show that aversive learning causes a persistent depression of the conditioned odor drive to two downstream feed-forward inhibitory GABAergic interneurons of the mushroom body, called MVP2, or mushroom body output neuron (MBON)-γ1pedc>α/β. However, MVP2 neuron output is only essential for expression of short-term aversive memory. Stimulating MVP2 neurons preferentially inhibits the odor-evoked activity of avoidance-directing MBONs and odor-driven avoidance behavior, whereas their inhibition enhances odor avoidance. In contrast, odor-evoked activity of MVP2 neurons is elevated in hungry flies, and their feed-forward inhibition is required for expression of appetitive memory at all times. Moreover, imposing MVP2 activity promotes inappropriate appetitive memory expression in food-satiated flies. Aversive learning and appetitive motivation therefore toggle alternate modes of a common feed-forward inhibitory MVP2 pathway to promote conditioned odor avoidance or approach. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.
Romeo, August; Arall, Marina; Supèr, Hans
2012-01-01
Figure-ground (FG) segmentation is the separation of visual information into background and foreground objects. In the visual cortex, FG responses are observed in the late stimulus response period, when neurons fire in tonic mode, and are accompanied by a switch in cortical state. When such a switch does not occur, FG segmentation fails. Currently, it is not known what happens in the brain on such occasions. A biologically plausible feedforward spiking neuron model was previously devised that performed FG segmentation successfully. After incorporating feedback the FG signal was enhanced, which was accompanied by a change in spiking regime. In a feedforward model neurons respond in a bursting mode whereas in the feedback model neurons fired in tonic mode. It is known that bursts can overcome noise, while tonic firing appears to be much more sensitive to noise. In the present study, we try to elucidate how the presence of noise can impair FG segmentation, and to what extent the feedforward and feedback pathways can overcome noise. We show that noise specifically destroys the feedback enhanced FG segmentation and leaves the feedforward FG segmentation largely intact. Our results predict that noise produces failure in FG perception. PMID:22934028
Adaptive regulation of sparseness by feedforward inhibition
Assisi, Collins; Stopfer, Mark; Laurent, Gilles; Bazhenov, Maxim
2014-01-01
In the mushroom body of insects, odors are represented by very few spikes in a small number of neurons, a highly efficient strategy known as sparse coding. Physiological studies of these neurons have shown that sparseness is maintained across thousand-fold changes in odor concentration. Using a realistic computational model, we propose that sparseness in the olfactory system is regulated by adaptive feedforward inhibition. When odor concentration changes, feedforward inhibition modulates the duration of the temporal window over which the mushroom body neurons may integrate excitatory presynaptic input. This simple adaptive mechanism could maintain the sparseness of sensory representations across wide ranges of stimulus conditions. PMID:17660812
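The proposed mechanism reduces to a simple invariance: excitatory drive grows with odor concentration, but feedforward inhibition arrives earlier and closes the integration window sooner, so the integrated input stays constant. The numbers below are purely illustrative assumptions, not fits to the model in the paper:

```python
def kc_drive(conc, adaptive=True):
    """Expected input integrated by a mushroom-body neuron (toy units).

    conc: relative odor concentration; adaptive: whether feed-forward
    inhibition shortens the integration window as concentration rises.
    """
    exc_rate = 50.0 * conc                        # excitation scales with odor
    window = 0.020 / conc if adaptive else 0.020  # s; FFI closes it earlier
    return exc_rate * window
```

With the adaptive window the product is constant, so the response stays equally sparse at any concentration; with a fixed window the integrated drive, and hence the spike count, grows linearly with concentration.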
Specific excitatory connectivity for feature integration in mouse primary visual cortex
Molina-Luna, Patricia; Roth, Morgane M.
2017-01-01
Local excitatory connections in mouse primary visual cortex (V1) are stronger and more prevalent between neurons that share similar functional response features. However, the details of how functional rules for local connectivity shape neuronal responses in V1 remain unknown. We hypothesised that complex responses to visual stimuli may arise as a consequence of rules for selective excitatory connectivity within the local network in the superficial layers of mouse V1. In mouse V1 many neurons respond to overlapping grating stimuli (plaid stimuli) with highly selective and facilitatory responses, which are not simply predicted by responses to single gratings presented alone. This complexity is surprising, since excitatory neurons in V1 are considered to be mainly tuned to single preferred orientations. Here we examined the consequences for visual processing of two alternative connectivity schemes: in the first case, local connections are aligned with visual properties inherited from feedforward input (a ‘like-to-like’ scheme specifically connecting neurons that share similar preferred orientations); in the second case, local connections group neurons into excitatory subnetworks that combine and amplify multiple feedforward visual properties (a ‘feature binding’ scheme). By comparing predictions from large scale computational models with in vivo recordings of visual representations in mouse V1, we found that responses to plaid stimuli were best explained by assuming feature binding connectivity. Unlike under the like-to-like scheme, selective amplification within feature-binding excitatory subnetworks replicated experimentally observed facilitatory responses to plaid stimuli; explained selective plaid responses not predicted by grating selectivity; and was consistent with broad anatomical selectivity observed in mouse V1. 
Our results show that visual feature binding can occur through local recurrent mechanisms without requiring feedforward convergence, and that such a mechanism is consistent with visual responses and cortical anatomy in mouse V1. PMID:29240769
Inhibitory Gating of Basolateral Amygdala Inputs to the Prefrontal Cortex
McGarry, Laura M.; Carter, Adam G.
2016-01-01
Interactions between the prefrontal cortex (PFC) and basolateral amygdala (BLA) regulate emotional behaviors. However, a circuit-level understanding of functional connections between these brain regions remains incomplete. The BLA sends prominent glutamatergic projections to the PFC, but the overall influence of these inputs is predominantly inhibitory. Here we combine targeted recordings and optogenetics to examine the synaptic underpinnings of this inhibition in the mouse infralimbic PFC. We find that BLA inputs preferentially target layer 2 corticoamygdala over neighboring corticostriatal neurons. However, these inputs make even stronger connections onto neighboring parvalbumin and somatostatin expressing interneurons. Inhibitory connections from these two populations of interneurons are also much stronger onto corticoamygdala neurons. Consequently, BLA inputs are able to drive robust feedforward inhibition via two parallel interneuron pathways. Moreover, the contributions of these interneurons shift during repetitive activity, due to differences in short-term synaptic dynamics. Thus, parvalbumin interneurons are activated at the start of stimulus trains, whereas somatostatin interneuron activation builds during these trains. Together, these results reveal how the BLA impacts the PFC through a complex interplay of direct excitation and feedforward inhibition. They also highlight the roles of targeted connections onto multiple projection neurons and interneurons in this cortical circuit. Our findings provide a mechanistic understanding for how the BLA can influence the PFC circuit, with important implications for how this circuit participates in the regulation of emotion. SIGNIFICANCE STATEMENT The prefrontal cortex (PFC) and basolateral amygdala (BLA) interact to control emotional behaviors. Here we show that BLA inputs elicit direct excitation and feedforward inhibition of layer 2 projection neurons in infralimbic PFC. 
BLA inputs are much stronger at corticoamygdala neurons compared with nearby corticostriatal neurons. However, these inputs are even more powerful at parvalbumin and somatostatin expressing interneurons. BLA inputs thus activate two parallel inhibitory networks, whose contributions change during repetitive activity. Finally, connections from these interneurons are also more powerful at corticoamygdala neurons compared with corticostriatal neurons. Together, our results demonstrate how the BLA predominantly inhibits the PFC via a complex sequence involving multiple cell-type and input-specific connections. PMID:27605614
Oran, Yael; Bar-Gad, Izhar
2018-02-14
Fast-spiking interneurons (FSIs) exert powerful inhibitory control over the striatum and are hypothesized to balance the massive excitatory cortical and thalamic input to this structure. We recorded neuronal activity in the dorsolateral striatum and globus pallidus (GP) concurrently with the detailed movement kinematics of freely behaving female rats before and after selective inhibition of FSI activity using IEM-1460 microinjections. The inhibition led to the appearance of episodic rest tremor in the body part that depended on the somatotopic location of the injection within the striatum. The tremor was accompanied by coherent oscillations in the local field potential (LFP). Individual neuron activity patterns became oscillatory and coherent in the tremor frequency. Striatal neurons, but not GP neurons, displayed additional temporal, nonoscillatory correlations. The subsequent reduction in the corticostriatal input following muscimol injection to the corresponding somatotopic location in the primary motor cortex led to disruption of the tremor and a reduction of the LFP oscillations and individual neuron's phase-locked activity. The breakdown of the normal balance of excitation and inhibition in the striatum has been shown previously to be related to different motor abnormalities. Our results further indicate that the balance between excitatory corticostriatal input and feedforward FSI inhibition is sufficient to break down the striatal decorrelation process and generate oscillations resulting in rest tremor typical of multiple basal ganglia disorders. SIGNIFICANCE STATEMENT Fast-spiking interneurons (FSIs) play a key role in normal striatal processing by exerting powerful inhibitory control over the network. FSI malfunctions have been associated with abnormal processing of information within the striatum that leads to multiple movement disorders. 
Here, we study the changes in neuronal activity and movement kinematics following selective inhibition of these neurons. The injections led to the appearance of episodic rest tremor, accompanied by coherent oscillations in neuronal activity, which was reversed following corticostriatal inhibition. These results suggest that the balance between corticostriatal excitation and feedforward FSI inhibition is crucial for maintaining the striatal decorrelation process, and that its breakdown leads to the formation of oscillations resulting in rest tremor typical of multiple basal ganglia disorders. Copyright © 2018 the authors 0270-6474/18/381699-12$15.00/0.
Delevich, Kristen; Tucciarone, Jason; Huang, Z. Josh
2015-01-01
Although the medial prefrontal cortex (mPFC) is classically defined by its reciprocal connections with the mediodorsal thalamic nucleus (MD), the nature of information transfer between MD and mPFC is poorly understood. In sensory thalamocortical pathways, thalamic recruitment of feedforward inhibition mediated by fast-spiking, putative parvalbumin-expressing (PV) interneurons is a key feature that enables cortical neurons to represent sensory stimuli with high temporal fidelity. Whether a similar circuit mechanism is in place for the projection from the MD (a higher-order thalamic nucleus that does not receive direct input from the periphery) to the mPFC is unknown. Here we show in mice that inputs from the MD drive disynaptic feedforward inhibition in the dorsal anterior cingulate cortex (dACC) subregion of the mPFC. In particular, we demonstrate that axons arising from MD neurons directly synapse onto and excite PV interneurons that in turn mediate feedforward inhibition of pyramidal neurons in layer 3 of the dACC. This feedforward inhibition in the dACC limits the time window during which pyramidal neurons integrate excitatory synaptic inputs and fire action potentials, but in a manner that allows for greater flexibility than in sensory cortex. These findings provide a foundation for understanding the role of MD-PFC circuit function in cognition. PMID:25855185
Intrinsic Neuronal Properties Switch the Mode of Information Transmission in Networks
Gjorgjieva, Julijana; Mease, Rebecca A.; Moody, William J.; Fairhall, Adrienne L.
2014-01-01
Diverse ion channels and their dynamics endow single neurons with complex biophysical properties. These properties determine the heterogeneity of cell types that make up the brain, as constituents of neural circuits tuned to perform highly specific computations. How do biophysical properties of single neurons impact network function? We study a set of biophysical properties that emerge in cortical neurons during the first week of development, eventually allowing these neurons to adaptively scale the gain of their response to the amplitude of the fluctuations they encounter. During the same time period, these same neurons participate in large-scale waves of spontaneously generated electrical activity. We investigate the potential role of experimentally observed changes in intrinsic neuronal properties in determining the ability of cortical networks to propagate waves of activity. We show that such changes can strongly affect the ability of multi-layered feedforward networks to represent and transmit information on multiple timescales. With properties modeled on those observed at early stages of development, neurons are relatively insensitive to rapid fluctuations and tend to fire synchronously in response to wave-like events of large amplitude. Following developmental changes in voltage-dependent conductances, these same neurons become efficient encoders of fast input fluctuations over few layers, but lose the ability to transmit slower, population-wide input variations across many layers. Depending on the neurons' intrinsic properties, noise plays different roles in modulating neuronal input-output curves, which can dramatically impact network transmission. The developmental change in intrinsic properties supports a transformation of a network's function from the propagation of network-wide information to one in which computations are scaled to local activity.
This work underscores the significance of simple changes in conductance parameters in governing how neurons represent and propagate information, and suggests a role for background synaptic noise in switching the mode of information transmission. PMID:25474701
Synchrony detection and amplification by silicon neurons with STDP synapses.
Bofill-i-petit, Adria; Murray, Alan F
2004-09-01
Spike-timing dependent synaptic plasticity (STDP) is a form of plasticity driven by precise spike-timing differences between presynaptic and postsynaptic spikes. Thus, the learning rules underlying STDP are suitable for learning neuronal temporal phenomena such as spike-timing synchrony. It is well known that weight-independent STDP creates unstable learning processes resulting in balanced bimodal weight distributions. In this paper, we present a neuromorphic analog very large scale integration (VLSI) circuit that contains a feedforward network of silicon neurons with STDP synapses. The learning rule implemented can be tuned to have a moderate level of weight dependence. This helps stabilise the learning process and still generates binary weight distributions. From on-chip learning experiments we show that the chip can detect and amplify hierarchical spike-timing synchrony structures embedded in noisy spike trains. The weight distributions of the network emerging from learning are bimodal.
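The tunable weight dependence mentioned above can be written as a standard pair-based STDP kernel in which the update magnitude scales with the current weight raised to a power mu: mu = 0 recovers the weight-independent (additive) rule that drives weights to the bounds, while a moderate mu softens that instability. The constants here are illustrative assumptions, not values from the chip:

```python
import numpy as np

tau = 20.0                     # ms, STDP time constant
A_plus, A_minus = 0.010, 0.012
w_max = 1.0

def stdp_dw(w, dt, mu=0.3):
    """Weight change for a pre->post spike lag dt (ms), with 0 <= w <= w_max.

    mu sets the degree of weight dependence; mu = 0 is additive STDP.
    """
    if dt > 0:   # pre before post: potentiation, saturating near w_max
        return A_plus * (w_max - w) ** mu * np.exp(-dt / tau)
    else:        # post before pre: depression, vanishing near w = 0
        return -A_minus * w ** mu * np.exp(dt / tau)
```

Because potentiation shrinks as w approaches w_max and depression shrinks as w approaches 0, a moderate mu pulls weights away from a runaway drift while still allowing clearly separated strong and weak synapses to emerge.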
The mechanics of state dependent neural correlations
Doiron, Brent; Litwin-Kumar, Ashok; Rosenbaum, Robert; Ocker, Gabriel K.; Josić, Krešimir
2016-01-01
Simultaneous recordings from large neural populations are becoming increasingly common. An important feature of the population activity is the presence of trial-to-trial correlated fluctuations in the spike train outputs of recorded neuron pairs. Like the firing rate of single neurons, correlated activity can be modulated by a number of factors, from changes in arousal and attentional state to learning and task engagement. However, the network mechanisms that underlie these changes are not fully understood. We review recent theoretical results that identify three separate biophysical mechanisms that modulate spike train correlations: changes in input correlations, internal fluctuations, and the transfer function of single neurons. We first examine these mechanisms in feedforward pathways, and then show how the same approach can explain the modulation of correlations in recurrent networks. Such mechanistic constraints on the modulation of population activity will be important in statistical analyses of high dimensional neural data. PMID:26906505
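The first of the three mechanisms — the dependence of output correlation on input correlation and on the single-neuron transfer function — can be illustrated with a common-input toy model of two threshold units. All parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def output_corr(c, theta=1.0, trials=100_000):
    """Correlation of two binary threshold units sharing an input fraction c."""
    shared = rng.normal(size=trials)                       # common drive
    x1 = np.sqrt(c) * shared + np.sqrt(1 - c) * rng.normal(size=trials)
    x2 = np.sqrt(c) * shared + np.sqrt(1 - c) * rng.normal(size=trials)
    s1 = (x1 > theta).astype(float)                        # spike / no spike
    s2 = (x2 > theta).astype(float)
    return float(np.corrcoef(s1, s2)[0, 1])
```

Output correlation tracks input correlation but is attenuated by the threshold nonlinearity, so anything that changes the effective transfer function (arousal, background noise, gain modulation) also changes the correlation that is transmitted downstream.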
Feed-forward and feedback projections of midbrain reticular formation neurons in the cat
Perkins, Eddie; May, Paul J.; Warren, Susan
2014-01-01
Gaze changes involving the eyes and head are orchestrated by brainstem gaze centers found within the superior colliculus (SC), paramedian pontine reticular formation (PPRF), and medullary reticular formation (MdRF). The mesencephalic reticular formation (MRF) also plays a role in gaze. It receives a major input from the ipsilateral SC and contains cells that fire in relation to gaze changes. Moreover, it provides a feedback projection to the SC and feed-forward projections to the PPRF and MdRF. We sought to determine whether these MRF feedback and feed-forward projections originate from the same or different neuronal populations by utilizing paired fluorescent retrograde tracers in cats. Specifically, we tested: 1. whether MRF neurons that control eye movements form a single population by injecting the SC and PPRF with different tracers, and 2. whether MRF neurons that control head movements form a single population by injecting the SC and MdRF with different tracers. In neither case were double labeled neurons observed, indicating that feedback and feed-forward projections originate from separate MRF populations. In both cases, the labeled reticulotectal and reticuloreticular neurons were distributed bilaterally in the MRF. However, neurons projecting to the MdRF were generally constrained to the medial half of the MRF, while those projecting to the PPRF, like MRF reticulotectal neurons, were spread throughout the mediolateral axis. Thus, the medial MRF may be specialized for control of head movements, with control of eye movements being more widespread in this structure. PMID:24454280
Cellular automata simulation of topological effects on the dynamics of feed-forward motifs
Apte, Advait A; Cain, John W; Bonchev, Danail G; Fong, Stephen S
2008-01-01
Background: Feed-forward motifs are important functional modules in biological and other complex networks. The functionality of feed-forward motifs and other network motifs is largely dictated by the connectivity of the individual network components. While studies on the dynamics of motifs and networks are usually devoted to the temporal or spatial description of processes, this study focuses on the relationship between the specific architecture and the overall rate of the processes of the feed-forward family of motifs, including double and triple feed-forward loops. The search for the most efficient network architecture could be of particular interest for regulatory or signaling pathways in biology, as well as in computational and communication systems. Results: Feed-forward motif dynamics were studied using cellular automata and compared with differential equation modeling. The number of cellular automata iterations needed for a 100% conversion of a substrate into a target product was used as an inverse measure of the transformation rate. Several basic topological patterns were identified that order the specific feed-forward constructions according to the rate of dynamics they enable. At the same number of network nodes and constant other parameters, the bi-parallel and tri-parallel motifs provide higher network efficacy than single feed-forward motifs. Additionally, a topological property of isodynamicity was identified for feed-forward motifs where different network architectures resulted in the same overall rate of the target production. Conclusion: It was shown for classes of structural motifs with feed-forward architecture that network topology affects the overall rate of a process in a quantitatively predictable manner. These fundamental results can be used as a basis for simulating larger networks as combinations of smaller network modules with implications on studying synthetic gene circuits, small regulatory systems, and eventually dynamic whole-cell models. PMID:18304325
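The rate comparison described in this abstract can be illustrated with a minimal differential-equation sketch of a single coherent feed-forward loop (X activates Y; X and Y jointly activate Z), integrated with Euler's method. The kinetics, rate constants, and function names below are illustrative assumptions, not the paper's cellular-automata rules:

```python
def simulate_ffl(k_xy=1.0, k_z=1.0, dt=0.01, t_max=20.0):
    """Euler-integrate a minimal coherent feed-forward loop:
    X activates Y; X and Y jointly activate Z (AND-gate kinetics).
    All kinetics here are illustrative assumptions."""
    x, y, z = 1.0, 0.0, 0.0           # X held fully "on"
    for _ in range(int(t_max / dt)):
        dy = k_xy * x * (1.0 - y)     # Y production driven by X
        dz = k_z * x * y * (1.0 - z)  # Z production needs both X and Y
        y += dy * dt
        z += dz * dt
    return y, z

y_final, z_final = simulate_ffl()
print(round(y_final, 3), round(z_final, 3))
```

Because Z requires Y to accumulate first, Z lags X; comparing the time to near-complete conversion across different wirings is the kind of topology-versus-rate question the paper studies.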
Morris, Kendall F.; Segers, Lauren S.; Poliacek, Ivan; Rose, Melanie J.; Lindsey, Bruce G.; Davenport, Paul W.; Howland, Dena R.; Bolser, Donald C.
2016-01-01
We investigated the hypothesis, motivated in part by a coordinated computational cough network model, that second-order neurons in the nucleus tractus solitarius (NTS) act as a filter and shape afferent input to the respiratory network during the production of cough. In vivo experiments were conducted on anesthetized spontaneously breathing cats. Cough was elicited by mechanical stimulation of the intrathoracic airways. Electromyograms of the parasternal (inspiratory) and rectus abdominis (expiratory) muscles and esophageal pressure were recorded. In vivo data revealed that expiratory motor drive during bouts of repetitive coughs is variable: peak expulsive amplitude increases from the first cough, peaks about the eighth or ninth cough, and then decreases through the remainder of the bout. Model simulations indicated that feed-forward inhibition of a single second-order neuron population is not sufficient to account for this dynamic feature of a repetitive cough bout. When a single second-order population was split into two subpopulations (inspiratory and expiratory), the resultant model produced simulated expiratory motor bursts that were comparable to in vivo data. However, expiratory phase durations during these simulations of repetitive coughing had less variance than those in vivo. Simulations in which reciprocal inhibitory processes between inspiratory-decrementing and expiratory-augmenting-late neurons were introduced exhibited increased variance in the expiratory phase durations. These results support the prediction that serial and parallel processing of airway afferent signals in the NTS play a role in generation of the motor pattern for cough. PMID:27283917
Feedforward, high density, programmable read only neural network based memory system
NASA Technical Reports Server (NTRS)
Daud, Taher; Moopenn, Alex; Lamb, James; Thakoor, Anil; Khanna, Satish
1988-01-01
Neural network-inspired, nonvolatile, programmable associative memory using thin-film technology is demonstrated. The details of the architecture, which uses programmable resistive connection matrices in synaptic arrays and current summing and thresholding amplifiers as neurons, are described. Several synapse configurations for a high-density array of a binary connection matrix are also described. Test circuits are evaluated for operational feasibility and to demonstrate the speed of the read operation. The results are discussed to highlight the potential for a read data rate exceeding 10 megabits/sec.
Supervised learning of probability distributions by neural networks
NASA Technical Reports Server (NTRS)
Baum, Eric B.; Wilczek, Frank
1988-01-01
Supervised learning algorithms for feedforward neural networks are investigated analytically. The back-propagation algorithm described by Werbos (1974), Parker (1985), and Rumelhart et al. (1986) is generalized by redefining the values of the input and output neurons as probabilities. The synaptic weights are then varied to follow gradients in the logarithm of likelihood rather than in the error. This modification is shown to provide a more rigorous theoretical basis for the algorithm and to permit more accurate predictions. A typical application involving a medical-diagnosis expert system is discussed.
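The modification described above, following gradients of the log-likelihood rather than of the squared error, can be sketched for a single logistic output neuron, where the log-likelihood gradient reduces to the familiar (target minus output) times input form. The toy data, learning rate, and iteration count are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# Toy linearly separable data; targets are treated as probabilities in {0, 1}
X = rng.normal(size=(200, 3))
t = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)

w = np.zeros(3)
lr = 0.5
for _ in range(300):
    y = sigmoid(X @ w)
    # Gradient of the log-likelihood sum t*log(y) + (1-t)*log(1-y)
    # with a sigmoid output reduces to the delta-rule form (t - y) x:
    w += lr * X.T @ (t - y) / len(X)

acc = np.mean((sigmoid(X @ w) > 0.5) == (t > 0.5))
print(acc)
```

On this separable toy set the training accuracy approaches 1.0; the point of the sketch is only the shape of the update, not the paper's expert-system application.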
Schmitt, Michael
2004-09-01
We study networks of spiking neurons that use the timing of pulses to encode information. Nonlinear interactions model the spatial groupings of synapses on the neural dendrites and describe the computations performed at local branches. Within a theoretical framework of learning we analyze the question of how many training examples these networks must receive to be able to generalize well. Bounds for this sample complexity of learning can be obtained in terms of a combinatorial parameter known as the pseudodimension. This dimension characterizes the computational richness of a neural network and is given in terms of the number of network parameters. Two types of feedforward architectures are considered: constant-depth networks and networks of unconstrained depth. We derive asymptotically tight bounds for each of these network types. Constant-depth networks are shown to have an almost linear pseudodimension, whereas the pseudodimension of general networks is quadratic. Networks of spiking neurons that use temporal coding are becoming increasingly important in practical tasks such as computer vision, speech recognition, and motor control. The question of how well these networks generalize from a given set of training examples is a central issue for their successful application as adaptive systems. The results show that, although coding and computation in these networks are quite different and in many cases more powerful, their generalization capabilities are at least as good as those of traditional neural network models.
Efficient self-organizing multilayer neural network for nonlinear system modeling.
Han, Hong-Gui; Wang, Li-Dan; Qiao, Jun-Fei
2013-07-01
It has been shown extensively that the dynamic behaviors of a neural system are strongly influenced by the network architecture and learning process. To establish an artificial neural network (ANN) with a self-organizing architecture and a suitable learning algorithm for nonlinear system modeling, an automatic axon-neural network (AANN) is investigated in the following respects. First, the network architecture is constructed automatically, changing both the number of hidden neurons and the topology of the neural network during the training process. The adaptive connecting-and-pruning algorithm (ACP) is a mixed-mode operation that adds or prunes connections between neurons, as well as inserting required neurons directly. Second, the weights are adjusted using a feedforward computation (FC) to obtain the gradient information during learning. Unlike most previous studies, AANN is able to self-organize both its architecture and its weights, improving network performance. The proposed AANN has been tested on a number of benchmark problems, ranging from nonlinear function approximation to nonlinear system modeling. The experimental results show that AANN can achieve better performance than some existing neural networks. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
Two papers on feed-forward networks
NASA Technical Reports Server (NTRS)
Buntine, Wray L.; Weigend, Andreas S.
1991-01-01
Connectionist feed-forward networks, trained with back-propagation, can be used both for nonlinear regression and for (discrete one-of-C) classification, depending on the form of training. This report contains two papers on feed-forward networks. The papers can be read independently. They are intended for the theoretically-aware practitioner or algorithm-designer; however, they also contain a review and comparison of several learning theories so they provide a perspective for the theoretician. The first paper works through Bayesian methods to complement back-propagation in the training of feed-forward networks. The second paper addresses a problem raised by the first: how to efficiently calculate second derivatives on feed-forward networks.
Mohsenzadeh, Yalda; Qin, Sheng; Cichy, Radoslaw M; Pantazis, Dimitrios
2018-06-21
Human visual recognition activates a dense network of overlapping feedforward and recurrent neuronal processes, making it hard to disentangle processing in the feedforward from the feedback direction. Here, we used ultra-rapid serial visual presentation to suppress sustained activity that blurs the boundaries of processing steps, enabling us to resolve two distinct stages of processing with MEG multivariate pattern classification. The first processing stage was the rapid activation cascade of the bottom-up sweep, which terminated early as visual stimuli were presented at progressively faster rates. The second stage was the emergence of categorical information with peak latency that shifted later in time with progressively faster stimulus presentations, indexing time-consuming recurrent processing. Using MEG-fMRI fusion with representational similarity, we localized recurrent signals in early visual cortex. Together, our findings segregated an initial bottom-up sweep from subsequent feedback processing, and revealed the neural signature of increased recurrent processing demands for challenging viewing conditions. © 2018, Mohsenzadeh et al.
Signal propagation and logic gating in networks of integrate-and-fire neurons.
Vogels, Tim P; Abbott, L F
2005-11-16
Transmission of signals within the brain is essential for cognitive function, but it is not clear how neural circuits support reliable and accurate signal propagation over a sufficiently large dynamic range. Two modes of propagation have been studied: synfire chains, in which synchronous activity travels through feedforward layers of a neuronal network, and the propagation of fluctuations in firing rate across these layers. In both cases, a sufficient amount of noise, which was added to previous models from an external source, had to be included to support stable propagation. Sparse, randomly connected networks of spiking model neurons can generate chaotic patterns of activity. We investigate whether this activity, which is a more realistic noise source, is sufficient to allow for signal transmission. We find that, for rate-coded signals but not for synfire chains, such networks support robust and accurate signal reproduction through up to six layers if appropriate adjustments are made in synaptic strengths. We investigate the factors affecting transmission and show that multiple signals can propagate simultaneously along different pathways. Using this feature, we show how different types of logic gates can arise within the architecture of the random network through the strengthening of specific synapses.
Regulation of Cortical Dynamic Range by Background Synaptic Noise and Feedforward Inhibition
Khubieh, Ayah; Ratté, Stéphanie; Lankarany, Milad; Prescott, Steven A.
2016-01-01
The cortex encodes a broad range of inputs. This breadth of operation requires sensitivity to weak inputs yet non-saturating responses to strong inputs. If individual pyramidal neurons were to have a narrow dynamic range, as previously claimed, then staggered all-or-none recruitment of those neurons would be necessary for the population to achieve a broad dynamic range. Contrary to this explanation, we show here through dynamic clamp experiments in vitro and computer simulations that pyramidal neurons have a broad dynamic range under the noisy conditions that exist in the intact brain due to background synaptic input. Feedforward inhibition capitalizes on those noise effects to control neuronal gain and thereby regulates the population dynamic range. Importantly, noise allows neurons to be recruited gradually and occludes the staggered recruitment previously attributed to heterogeneous excitation. Feedforward inhibition protects spike timing against the disruptive effects of noise, meaning noise can enable the gain control required for rate coding without compromising the precise spike timing required for temporal coding. PMID:26209846
A morphological basis for orientation tuning in primary visual cortex.
Mooser, François; Bosking, William H; Fitzpatrick, David
2004-08-01
Feedforward connections are thought to be important in the generation of orientation-selective responses in visual cortex by establishing a bias in the sampling of information from regions of visual space that lie along a neuron's axis of preferred orientation. It remains unclear, however, which structural elements (dendrites or axons) are ultimately responsible for conveying this sampling bias. To explore this question, we have examined the spatial arrangement of feedforward axonal connections that link non-oriented neurons in layer 4 and orientation-selective neurons in layer 2/3 of visual cortex in the tree shrew. Target sites of labeled boutons in layer 2/3 resulting from focal injections of biocytin in layer 4 show an orientation-specific axial bias that is sufficient to confer orientation tuning to layer 2/3 neurons. We conclude that the anisotropic arrangement of axon terminals is the principal source of the orientation bias contributed by feedforward connections.
Neural computation of arithmetic functions
NASA Technical Reports Server (NTRS)
Siu, Kai-Yeung; Bruck, Jehoshua
1990-01-01
An area of application of neural networks is considered. A neuron is modeled as a linear threshold gate, and the network architecture considered is the layered feedforward network. It is shown how common arithmetic functions such as multiplication and sorting can be efficiently computed in a shallow neural network. Some known results are improved by showing that the product of two n-bit numbers and sorting of n n-bit numbers can be computed by a polynomial-size neural network using only four and five unit delays, respectively. Moreover, the weights of each threshold element in the neural networks require O(log n)-bit (instead of n-bit) accuracy. These results can be extended to more complicated functions such as multiple products, division, rational functions, and approximation of analytic functions.
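The neuron model assumed above, a linear threshold gate over binary inputs, can be sketched in a few lines; the weights, threshold, and the MAJORITY example are illustrative choices, not functions from the paper:

```python
def threshold_gate(weights, theta, inputs):
    """A neuron modeled as a linear threshold gate: it fires (outputs 1)
    iff the weighted sum of its binary inputs reaches the threshold theta."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= theta)

# MAJORITY of 3 bits: unit weights, threshold 2 (an illustrative gate,
# one building block of the layered feedforward networks in the abstract)
def majority3(a, b, c):
    return threshold_gate([1, 1, 1], 2, [a, b, c])

print([majority3(*bits) for bits in
       [(0, 0, 0), (1, 0, 1), (1, 1, 1), (0, 1, 0)]])
# prints [0, 1, 1, 0]
```

Layering such gates, with weights of O(log n)-bit accuracy as the abstract notes, is what allows shallow networks to compute arithmetic functions.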
Stratmann, Johannes
2017-01-01
The extensive genetic regulatory flows underlying specification of different neuronal subtypes are not well understood at the molecular level. The Nplp1 neuropeptide neurons in the developing Drosophila nerve cord belong to two sub-classes, Tv1 and dAp neurons, generated by two distinct progenitors. Nplp1 neurons are specified by spatial cues (the Hox homeotic network and the GATA factor grn) and temporal cues (the hb -> Kr -> Pdm -> cas -> grh temporal cascade). These spatio-temporal cues combine into two distinct codes, one for Tv1 and one for dAp neurons, that activate a common terminal selector feedforward cascade of col -> ap/eya -> dimm -> Nplp1. Here, we molecularly decode the specification of Nplp1 neurons, and find that the cis-regulatory organization of col functions as an integratory node for the different spatio-temporal combinatorial codes. These findings may provide a logical framework for addressing spatio-temporal control of neuronal sub-type specification in other systems. PMID:28414802
On the stability, storage capacity, and design of nonlinear continuous neural networks
NASA Technical Reports Server (NTRS)
Guez, Allon; Protopopescu, Vladimir; Barhen, Jacob
1988-01-01
The stability, capacity, and design of a nonlinear continuous neural network are analyzed. Sufficient conditions for the existence and asymptotic stability of the network's equilibria are reduced to a set of piecewise-linear inequality relations that can be solved by a feedforward binary network, or by methods such as Fourier elimination. The stability and capacity of the network are characterized by the postsynaptic firing-rate function. An N-neuron network with a sigmoidal firing function is shown to have up to 3^N equilibrium points, a higher capacity than the (0.1-0.2)N obtained in the binary Hopfield network. Moreover, it is shown that a proper selection of the postsynaptic firing-rate function can significantly extend the storage capacity of the network.
Minot, Thomas; Dury, Hannah L; Eguchi, Akihiro; Humphreys, Glyn W; Stringer, Simon M
2017-03-01
We use an established neural network model of the primate visual system to show how neurons might learn to encode the gender of faces. The model consists of a hierarchy of 4 competitive neuronal layers with associatively modifiable feedforward synaptic connections between successive layers. During training, the network was presented with many realistic images of male and female faces, during which the synaptic connections are modified using biologically plausible local associative learning rules. After training, we found that different subsets of output neurons have learned to respond exclusively to either male or female faces. With the inclusion of short range excitation within each neuronal layer to implement a self-organizing map architecture, neurons representing either male or female faces were clustered together in the output layer. This learning process is entirely unsupervised, as the gender of the face images is not explicitly labeled and provided to the network as a supervisory training signal. These simulations are extended to training the network on rotating faces. It is found that by using a trace learning rule incorporating a temporal memory trace of recent neuronal activity, neurons responding selectively to either male or female faces were also able to learn to respond invariantly over different views of the faces. This kind of trace learning has been previously shown to operate within the primate visual system by neurophysiological and psychophysical studies. The computer simulations described here predict that similar neurons encoding the gender of faces will be present within the primate visual system. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Owen, Scott F; Berke, Joshua D; Kreitzer, Anatol C
2018-02-08
Fast-spiking interneurons (FSIs) are a prominent class of forebrain GABAergic cells implicated in two seemingly independent network functions: gain control and network plasticity. Little is known, however, about how these roles interact. Here, we use a combination of cell-type-specific ablation, optogenetics, electrophysiology, imaging, and behavior to describe a unified mechanism by which striatal FSIs control burst firing, calcium influx, and synaptic plasticity in neighboring medium spiny projection neurons (MSNs). In vivo silencing of FSIs increased bursting, calcium transients, and AMPA/NMDA ratios in MSNs. In a motor sequence task, FSI silencing increased the frequency of calcium transients but reduced the specificity with which transients aligned to individual task events. Consistent with this, ablation of FSIs disrupted the acquisition of striatum-dependent egocentric learning strategies. Together, our data support a model in which feedforward inhibition from FSIs temporally restricts MSN bursting and calcium-dependent synaptic plasticity to facilitate striatum-dependent sequence learning. Copyright © 2018 Elsevier Inc. All rights reserved.
A quantum-implementable neural network model
NASA Astrophysics Data System (ADS)
Chen, Jialin; Wang, Lingli; Charbon, Edoardo
2017-10-01
A quantum-implementable neural network, namely the quantum probability neural network (QPNN) model, is proposed in this paper. QPNN can use quantum parallelism to trace all possible network states to improve the result. Due to its unique quantum nature, this model is robust to several quantum noises under certain conditions and can be efficiently implemented by the qubus quantum computer. Another advantage is that QPNN can be used as memory to retrieve the most relevant data and even to generate new data. MATLAB experiments on Iris data classification and MNIST handwriting recognition show that QPNN requires far fewer neuronal resources to obtain a good result than a classical feedforward neural network. The proposed QPNN model indicates that quantum effects are useful for real-life classification tasks.
Significance of Input Correlations in Striatal Function
Yim, Man Yi; Aertsen, Ad; Kumar, Arvind
2011-01-01
The striatum is the main input station of the basal ganglia and is strongly associated with motor and cognitive functions. Anatomical evidence suggests that individual striatal neurons are unlikely to share their inputs from the cortex. Using a biologically realistic large-scale network model of striatum and cortico-striatal projections, we provide a functional interpretation of the special anatomical structure of these projections. Specifically, we show that weak pairwise correlation within the pool of inputs to individual striatal neurons enhances the saliency of signal representation in the striatum. By contrast, correlations among the input pools of different striatal neurons render the signal representation less distinct from background activity. We suggest that for the network architecture of the striatum, there is a preferred cortico-striatal input configuration for optimal signal representation. It is further enhanced by the low-rate asynchronous background activity in striatum, supported by the balance between feedforward and feedback inhibitions in the striatal network. Thus, an appropriate combination of rates and correlations in the striatal input sets the stage for action selection presumably implemented in the basal ganglia. PMID:22125480
Orientation-selective aVLSI spiking neurons.
Liu, S C; Kramer, J; Indiveri, G; Delbrück, T; Burg, T; Douglas, R
2001-01-01
We describe a programmable multi-chip VLSI neuronal system that can be used for exploring spike-based information processing models. The system consists of a silicon retina, a PIC microcontroller, and a transceiver chip whose integrate-and-fire neurons are connected in a soft winner-take-all architecture. The circuit on this multi-neuron chip approximates a cortical microcircuit. The neurons can be configured for different computational properties by the virtual connections of a selected set of pixels on the silicon retina. The virtual wiring between the different chips is effected by an event-driven communication protocol that uses asynchronous digital pulses, similar to spikes in a neuronal system. We used the multi-chip spike-based system to synthesize orientation-tuned neurons using both a feedforward model and a feedback model. The performance of our analog hardware spiking model matched the experimental observations and digital simulations of continuous-valued neurons. The multi-chip VLSI system has advantages over computer neuronal models in that it is real-time, and the computational time does not scale with the size of the neuronal network.
Mathalon, Daniel H; Sohal, Vikaas S
2015-08-01
Neural oscillations are rhythmic fluctuations over time in the activity or excitability of single neurons, local neuronal populations or "assemblies," and/or multiple regionally distributed neuronal assemblies. Synchronized oscillations among large numbers of neurons are evident in electrocorticographic, electroencephalographic, magnetoencephalographic, and local field potential recordings and are generally understood to depend on inhibition that paces assemblies of excitatory neurons to produce alternating temporal windows of reduced and increased excitability. Synchronization of neural oscillations is supported by the extensive networks of local and long-range feedforward and feedback bidirectional connections between neurons. Here, we review some of the major methods and measures used to characterize neural oscillations, with a focus on gamma oscillations. Distinctions are drawn between stimulus-independent oscillations recorded during resting states or intervals between task events, stimulus-induced oscillations that are time locked but not phase locked to stimuli, and stimulus-evoked oscillations that are both time and phase locked to stimuli. Synchrony of oscillations between recording sites, and between the amplitudes and phases of oscillations of different frequencies (cross-frequency coupling), is described and illustrated. Molecular mechanisms underlying gamma oscillations are also reviewed. Ultimately, understanding the temporal organization of neuronal network activity, including interactions between neural oscillations, is critical for elucidating brain dysfunction in neuropsychiatric disorders.
A new simple ∞OH neuron model as a biologically plausible principal component analyzer.
Jankovic, M V
2003-01-01
A new approach to unsupervised learning in a single-layer neural network is discussed. An algorithm for unsupervised learning based upon the Hebbian learning rule is presented, and a simple neuron model is analyzed. A dynamic neural model, which contains both feed-forward and feedback connections between the input and the output, has been adopted. The proposed learning algorithm could be more correctly named self-supervised rather than unsupervised. The solution proposed here is a modified Hebbian rule, in which the modification of the synaptic strength is proportional not to pre- and postsynaptic activity, but instead to the presynaptic activity and the averaged value of the postsynaptic activity. It is shown that the model neuron tends to extract the principal component from a stationary input vector sequence. The usually accepted additional decaying terms for the stabilization of the original Hebbian rule are avoided. Implementation of the basic Hebbian scheme would not lead to unrealistic growth of the synaptic strengths, thanks to the adopted network structure.
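The paper's stabilization comes from its specific feed-forward/feedback network structure, which is not reproduced here. As a hedged illustration of Hebbian principal-component extraction, the sketch below uses the classic Oja rule instead, which stabilizes the basic Hebbian update with the explicit decay term that the paper's architecture is designed to avoid; all data and parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stationary input sequence with a dominant variance direction (first axis)
C = np.diag([5.0, 1.0, 0.5])
X = rng.normal(size=(5000, 3)) @ np.sqrt(C)

w = rng.normal(size=3)
w /= np.linalg.norm(w)
lr = 0.01
for x in X:
    y = w @ x
    w += lr * y * (x - y * w)  # Oja's rule: Hebbian term plus decay y^2 w

# Compare with the leading eigenvector of the sample covariance
eigvals, eigvecs = np.linalg.eigh(np.cov(X.T))
pc = eigvecs[:, -1]
alignment = abs(w @ pc) / np.linalg.norm(w)
print(round(alignment, 2))
```

The learned weight vector aligns (up to sign) with the principal component, the same end state the abstract's self-supervised rule converges to by different means.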
Kerr, Robert R; Burkitt, Anthony N; Thomas, Doreen A; Gilson, Matthieu; Grayden, David B
2013-01-01
Learning rules, such as spike-timing-dependent plasticity (STDP), change the structure of networks of neurons based on the firing activity. A network level understanding of these mechanisms can help infer how the brain learns patterns and processes information. Previous studies have shown that STDP selectively potentiates feed-forward connections that have specific axonal delays, and that this underlies behavioral functions such as sound localization in the auditory brainstem of the barn owl. In this study, we investigate how STDP leads to the selective potentiation of recurrent connections with different axonal and dendritic delays during oscillatory activity. We develop analytical models of learning with additive STDP in recurrent networks driven by oscillatory inputs, and support the results using simulations with leaky integrate-and-fire neurons. Our results show selective potentiation of connections with specific axonal delays, which depended on the input frequency. In addition, we demonstrate how this can lead to a network becoming selective in the amplitude of its oscillatory response to this frequency. We extend this model of axonal delay selection within a single recurrent network in two ways. First, we show the selective potentiation of connections with a range of both axonal and dendritic delays. Second, we show axonal delay selection between multiple groups receiving out-of-phase, oscillatory inputs. We discuss the application of these models to the formation and activation of neuronal ensembles or cell assemblies in the cortex, and also to missing fundamental pitch perception in the auditory brainstem.
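The additive STDP rule referred to above can be sketched as a pair of exponential windows over the pre/post spike-time difference. The amplitudes and time constant below are common illustrative values, assumptions rather than this study's parameters:

```python
import math

def stdp(delta_t, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Additive STDP window. delta_t = t_post - t_pre (ms).
    Pre-before-post (delta_t > 0) potentiates; post-before-pre depresses.
    Amplitudes and time constant are illustrative assumptions."""
    if delta_t > 0:
        return a_plus * math.exp(-delta_t / tau)
    elif delta_t < 0:
        return -a_minus * math.exp(delta_t / tau)
    return 0.0

print(round(stdp(10.0), 5), round(stdp(-10.0), 5))
```

In the delay-selection setting of the abstract, the axonal and dendritic delays shift which connections fall on the potentiating side of this window during oscillatory drive.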
PMID:23408878
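Under a standard rate-covariance approximation (an assumption here, not the authors' full analysis), the expected weight drift of a connection with axonal delay d driven at frequency f is the STDP window, shifted by the delay, integrated against the oscillatory spike-time covariance. A minimal numpy sketch of this delay selectivity, with arbitrary parameter values:

```python
import numpy as np

# Additive STDP window: potentiation for post-after-pre, depression otherwise.
A_plus = A_minus = 1.0
tau = 0.02                                       # STDP time constant (s)

def W(dt):
    return np.where(dt > 0, A_plus * np.exp(-dt / tau),
                    -A_minus * np.exp(dt / tau))

f = 10.0                                         # oscillation frequency (Hz)
du = 1e-4
u = np.arange(-0.5, 0.5, du)                     # spike-time lag grid (s)
cov = np.cos(2 * np.pi * f * u)                  # oscillatory spike-time covariance (arb. units)

delays = np.arange(0.0, 0.1, 1e-3)               # candidate axonal delays (one cycle)
# Expected drift: window shifted by the axonal delay, integrated against the covariance.
drift = np.array([(W(u - d) * cov).sum() * du for d in delays])

best = delays[np.argmax(drift)]
print(best)                                      # delay most strongly potentiated at this f
```

Only connections whose delay sits at a particular phase of the cycle see net potentiation; changing f moves the selected delay, which is the frequency dependence the abstract describes.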
High-Degree Neurons Feed Cortical Computations
Timme, Nicholas M.; Ito, Shinya; Shimono, Masanori; Yeh, Fang-Chin; Litke, Alan M.; Beggs, John M.
2016-01-01
Recent work has shown that functional connectivity among cortical neurons is highly varied, with a small percentage of neurons having many more connections than others. Also, recent theoretical developments now make it possible to quantify how neurons modify information from the connections they receive. Therefore, it is now possible to investigate how information modification, or computation, depends on the number of connections a neuron receives (in-degree) or sends out (out-degree). To do this, we recorded the simultaneous spiking activity of hundreds of neurons in cortico-hippocampal slice cultures using a high-density 512-electrode array. This preparation and recording method combination produced large numbers of neurons recorded at temporal and spatial resolutions that are not currently available in any in vivo recording system. We utilized transfer entropy (a well-established method for detecting linear and nonlinear interactions in time series) and the partial information decomposition (a powerful, recently developed tool for dissecting multivariate information processing into distinct parts) to quantify computation between neurons where information flows converged. We found that computations did not occur equally in all neurons throughout the networks. Surprisingly, neurons that computed large amounts of information tended to receive connections from high out-degree neurons. However, the in-degree of a neuron was not related to the amount of information it computed. To gain insight into these findings, we developed a simple feedforward network model. We found that a degree-modified Hebbian wiring rule best reproduced the pattern of computation and degree correlation results seen in the real data. Interestingly, this rule also maximized signal propagation in the presence of network-wide correlations, suggesting a mechanism by which cortex could deal with common random background input. 
These are the first results to show that the extent to which a neuron modifies incoming information streams depends on its topological location in the surrounding functional network. PMID:27159884
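Transfer entropy for binary spike trains can be estimated by counting joint states. The following self-contained sketch uses lag-1, order-1 histories (a simplification of what the authors compute, which uses longer delays and higher-order corrections) to show how directed information flow is detected:

```python
import numpy as np

def transfer_entropy(x, y):
    """Lag-1 transfer entropy TE(X -> Y) in bits for binary spike arrays."""
    y1, y0, x0 = y[1:], y[:-1], x[:-1]
    counts = np.zeros((2, 2, 2))                        # (y_next, y_past, x_past)
    np.add.at(counts, (y1, y0, x0), 1)
    p = counts / counts.sum()                           # joint p(y_next, y_past, x_past)
    p_y0x0 = p.sum(axis=0, keepdims=True)               # p(y_past, x_past)
    p_y1y0 = p.sum(axis=2, keepdims=True)               # p(y_next, y_past)
    p_y0 = p.sum(axis=(0, 2), keepdims=True)            # p(y_past)
    cond_full = p / np.where(p_y0x0 == 0, 1, p_y0x0)    # p(y_next | y_past, x_past)
    cond_self = p_y1y0 / np.where(p_y0 == 0, 1, p_y0)   # p(y_next | y_past)
    ratio = cond_full / np.where(cond_self == 0, 1, cond_self)
    m = p > 0
    return float(np.sum(p[m] * np.log2(ratio[m])))

rng = np.random.default_rng(1)
x = rng.integers(0, 2, 20000)
y = np.zeros_like(x)
y[1:] = x[:-1] ^ (rng.random(len(x) - 1) < 0.05)  # y follows x one step later, 5% flips
te_fwd, te_rev = transfer_entropy(x, y), transfer_entropy(y, x)
print(te_fwd, te_rev)                             # large forward TE, near-zero reverse TE
```

The asymmetry between the forward and reverse estimates is what makes transfer entropy a directed functional-connectivity measure.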
Wang, Cheng-Te; Lee, Chung-Ting; Wang, Xiao-Jing; Lo, Chung-Chuan
2013-01-01
Recent physiological studies have shown that neurons in various regions of the central nervous systems continuously receive noisy excitatory and inhibitory synaptic inputs in a balanced and covaried fashion. While this balanced synaptic input (BSI) is typically described in terms of maintaining the stability of neural circuits, a number of experimental and theoretical studies have suggested that BSI plays a proactive role in brain functions such as top-down modulation for executive control. Two issues have remained unclear in this picture. First, given the noisy nature of neuronal activities in neural circuits, how do the modulatory effects change if the top-down control implements BSI with different ratios between inhibition and excitation? Second, how is a top-down BSI realized via only excitatory long-range projections in the neocortex? To address the first issue, we systematically tested how the inhibition/excitation ratio affects the accuracy and reaction times of a spiking neural circuit model of perceptual decision. We defined an energy function to characterize the network dynamics, and found that different ratios modulate the energy function of the circuit differently and form two distinct functional modes. To address the second issue, we tested BSI with long-distance projection to inhibitory neurons that are either feedforward or feedback, depending on whether these inhibitory neurons do or do not receive inputs from local excitatory cells, respectively. We found that BSI occurs in both cases. Furthermore, when relying on feedback inhibitory neurons, through the recurrent interactions inside the circuit, BSI dynamically and automatically speeds up the decision by gradually reducing its inhibitory component in the course of a trial when a decision process takes too long. PMID:23626812
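A minimal illustration (not the authors' decision circuit) of the first question above: how the inhibition/excitation ratio of a balanced top-down input shifts the output of a noise-driven leaky integrate-and-fire neuron. All parameter values are arbitrary assumptions:

```python
import numpy as np

def lif_spike_count(ratio, seed=2, T=2.0, dt=1e-4):
    """Spike count of a LIF neuron receiving excitation and ratio-scaled inhibition."""
    rng = np.random.default_rng(seed)
    tau, v_th, v_reset = 0.02, 20.0, 0.0          # membrane constant (s), thresholds (mV)
    mu_exc = 36.0                                  # mean excitatory drive (mV)
    mu = mu_exc * (1.0 - ratio)                    # net drive after balanced inhibition
    sigma = 5.0                                    # input noise amplitude (mV)
    v, n_spikes = 0.0, 0
    for _ in range(int(T / dt)):
        v += dt / tau * (mu - v) + sigma * np.sqrt(dt / tau) * rng.standard_normal()
        if v >= v_th:
            v, n_spikes = v_reset, n_spikes + 1
    return n_spikes

counts = {r: lif_spike_count(r) for r in (0.5, 1.0, 1.5)}
print(counts)                                      # firing drops as inhibition outweighs excitation
```

With the mean input cancelled but the fluctuations retained, the ratio acts as a gain knob on the output rate, which is the kind of modulatory handle the abstract investigates.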
Feed-forward segmentation of figure-ground and assignment of border-ownership.
Supèr, Hans; Romeo, August; Keil, Matthias
2010-05-19
Figure-ground is the segmentation of visual information into objects and their surrounding backgrounds. Two main processes herein are boundary assignment and surface segregation, which rely on the integration of global scene information. Recurrent processing, either by intrinsic horizontal connections between surrounding neurons or by feedback projections from higher visual areas, provides such information and is considered to be the neural substrate for figure-ground segmentation. In contrast, the role of feedforward projections in figure-ground segmentation is unknown. To better understand the role of feedforward connections in figure-ground organization, we constructed a feedforward spiking model using a biologically plausible neuron model. By means of surround inhibition, our simple three-layered model performs figure-ground segmentation and one-sided border-ownership coding. We propose that the visual system uses feedforward suppression for figure-ground segmentation and border-ownership assignment.
PMID:20502718
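A rate-based caricature (not the authors' spiking model) of how feedforward surround inhibition alone can segregate a small figure from a large background: each unit is suppressed in proportion to the fraction of same-feature neighbors in its surround, so the homogeneous background suppresses itself more strongly than the figure region does.

```python
import numpy as np

N, R = 32, 5                                      # grid size, surround radius
labels = np.zeros((N, N), dtype=int)              # 0 = background feature
labels[13:19, 13:19] = 1                          # 6x6 figure patch with a different feature

def box_sum(a, r):
    """Sum over a (2r+1)x(2r+1) neighborhood, wrapping at the borders."""
    out = np.zeros_like(a, dtype=float)
    for di in range(-r, r + 1):
        for dj in range(-r, r + 1):
            out += np.roll(np.roll(a, di, axis=0), dj, axis=1)
    return out

fig = (labels == 1).astype(float)
# Same-feature neighbors of each unit (excluding the unit itself).
same = np.where(labels == 1, box_sum(fig, R), box_sum(1 - fig, R)) - 1
n_neighbors = (2 * R + 1) ** 2 - 1
response = 1.0 - same / n_neighbors               # feedforward surround suppression

print(response[labels == 1].mean(), response[labels == 0].mean())
```

The figure region ends up with a much higher mean response than the background purely through feedforward suppression, with no recurrent or feedback connections.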
Directed functional connectivity matures with motor learning in a cortical pattern generator.
Day, Nancy F; Terleski, Kyle L; Nykamp, Duane Q; Nick, Teresa A
2013-02-01
Sequential motor skills may be encoded by feedforward networks that consist of groups of neurons that fire in sequence (Abeles 1991; Long et al. 2010). However, there has been no evidence of an anatomic map of activation sequence in motor control circuits, which would be potentially detectable as directed functional connectivity of coactive neuron groups. The proposed pattern generator for birdsong, the HVC (Long and Fee 2008; Vu et al. 1994), contains axons that are preferentially oriented in the rostrocaudal axis (Nottebohm et al. 1982; Stauffer et al. 2012). We used four-tetrode recordings to assess the activity of ensembles of single neurons along the rostrocaudal HVC axis in anesthetized zebra finches. We found an axial, polarized neural network in which sequential activity is directionally organized along the rostrocaudal axis in adult males, who produce a stereotyped song. Principal neurons fired in rostrocaudal order and with interneurons that were rostral to them, suggesting that groups of excitatory neurons fire at the leading edge of travelling waves of inhibition. Consistent with the synchronization of neurons by caudally travelling waves of inhibition, the activity of interneurons was more coherent in the orthogonal mediolateral axis than in the rostrocaudal axis. If directed functional connectivity within the HVC is important for stereotyped, learned song, then it may be lacking in juveniles, which sing a highly variable song. Indeed, we found little evidence for network directionality in juveniles. These data indicate that a functionally directed network within the HVC matures during sensorimotor learning and may underlie vocal patterning.
PMID:23175804
Hübner, Cora; Bosch, Daniel; Gall, Andrea; Lüthi, Andreas; Ehrlich, Ingrid
2014-01-01
Many lines of evidence suggest that a reciprocally interconnected network comprising the amygdala, ventral hippocampus (vHC), and medial prefrontal cortex (mPFC) participates in different aspects of the acquisition and extinction of conditioned fear responses and fear behavior. This could, at least in part, be mediated by direct connections from the mPFC or vHC to the amygdala that control amygdala activity and output. However, the interactions between mPFC and vHC afferents and their specific targets in the amygdala are still poorly understood. Here, we use an ex vivo optogenetic approach to dissect the synaptic properties of inputs from the mPFC and vHC to defined neuronal populations in the basal amygdala (BA), the area that we identify as a major target of these projections. We find that BA principal neurons (PNs) and local BA interneurons (INs) receive monosynaptic excitatory inputs from the mPFC and vHC. In addition, both of these inputs also recruit GABAergic feedforward inhibition in a substantial fraction of PNs; in some neurons this also comprises a slow GABAB component. Among the innervated PNs we identify neurons that project back to subregions of the mPFC, indicating a loop between neurons in the mPFC and BA, and a pathway from the vHC to the mPFC via the BA. Interestingly, mPFC inputs also recruit feedforward inhibition in a fraction of INs, suggesting that these inputs can activate disinhibitory circuits in the BA. A general feature of both mPFC and vHC inputs to local INs is that excitatory inputs display faster rise and decay kinetics than in PNs, which would enable temporally precise signaling. However, mPFC and vHC inputs to both PNs and INs differ in their presynaptic release properties, in that vHC inputs are more depressing. In summary, our data describe novel wiring and features of synaptic connections from the mPFC and vHC to the amygdala that could help to interpret the functions of these interconnected brain areas at the network level. PMID:24634648
SVM-based tree-type neural networks as a critic in adaptive critic designs for control.
Deb, Alok Kanti; Jayadeva; Gopal, Madan; Chandra, Suresh
2007-07-01
In this paper, we use the approach of adaptive critic design (ACD) for control, specifically the action-dependent heuristic dynamic programming (ADHDP) method. A least-squares support vector machine (SVM) regressor is used for generating the control actions, while an SVM-based tree-type neural network (NN) is used as the critic. After a failure occurs, the critic and action networks are retrained in tandem using the failure data. Failure data are binary classification data in which the failure states are very few compared to the no-failure states. The difficulty that conventional multilayer feedforward NNs have in learning this type of classification data is overcome by using the SVM-based tree-type NN, which, owing to its ability to add neurons to learn misclassified data, can learn any binary classification data without an a priori choice of the number of neurons or the structure of the network. The capability of the trained controller to handle unforeseen situations is demonstrated.
Self-Organization of Microcircuits in Networks of Spiking Neurons with Plastic Synapses.
Ocker, Gabriel Koch; Litwin-Kumar, Ashok; Doiron, Brent
2015-08-01
The synaptic connectivity of cortical networks features an overrepresentation of certain wiring motifs compared to simple random-network models. This structure is shaped, in part, by synaptic plasticity that promotes or suppresses connections between neurons depending on their joint spiking activity. Frequently, theoretical studies focus on how feedforward inputs drive plasticity to create this network structure. We study the complementary scenario of self-organized structure in a recurrent network, with spike timing-dependent plasticity driven by spontaneous dynamics. We develop a self-consistent theory for the evolution of network structure by combining fast spiking covariance with a slow evolution of synaptic weights. Through a finite-size expansion of network dynamics we obtain a low-dimensional set of nonlinear differential equations for the evolution of two-synapse connectivity motifs. With this theory in hand, we explore how the form of the plasticity rule drives the evolution of microcircuits in cortical networks. When potentiation and depression are in approximate balance, synaptic dynamics depend on weighted divergent, convergent, and chain motifs. For additive, Hebbian STDP these motif interactions create instabilities in synaptic dynamics that either promote or suppress the initial network structure. Our work provides a consistent theoretical framework for studying how spiking activity in recurrent networks interacts with synaptic plasticity to determine network structure.
PMID:26291697
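The two-synapse motifs the theory tracks can be measured directly from a weight matrix. Writing W[i, j] for the connection from neuron j to neuron i (a convention chosen here for illustration), the raw divergent, convergent, and chain motif sums reduce to row and column totals; the sketch below verifies the closed forms against brute-force triple sums (note the raw sums include degenerate triples with repeated indices):

```python
import numpy as np

def motif_strengths(W):
    """Raw two-synapse motif sums for weight matrix W (W[i, j]: connection j -> i)."""
    row = W.sum(axis=1)                           # total input to each neuron
    col = W.sum(axis=0)                           # total output of each neuron
    divergent = np.sum(col ** 2)                  # pairs sharing a presynaptic source
    convergent = np.sum(row ** 2)                 # pairs sharing a postsynaptic target
    chain = np.dot(row, col)                      # two-step chains j -> i -> k
    return divergent, convergent, chain

# Brute-force check on a small random matrix.
rng = np.random.default_rng(3)
W = rng.random((6, 6))
div = sum(W[i, j] * W[k, j] for i in range(6) for j in range(6) for k in range(6))
con = sum(W[i, j] * W[i, k] for i in range(6) for j in range(6) for k in range(6))
ch = sum(W[i, j] * W[k, i] for i in range(6) for j in range(6) for k in range(6))
print(np.allclose(motif_strengths(W), (div, con, ch)))   # prints True
```

These are the quantities whose slow evolution under STDP the authors' motif equations describe; the mean-subtracted versions used in the paper follow by centering W first.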
Locking of correlated neural activity to ongoing oscillations
Helias, Moritz
2017-01-01
Population-wide oscillations are ubiquitously observed in mesoscopic signals of cortical activity. In these network states, a global oscillatory cycle modulates the propensity of neurons to fire. Synchronous activation of neurons has been hypothesized to constitute a separate channel of information processing in the brain. A salient question is therefore whether and how oscillations interact with spike synchrony, and to what extent these channels can be considered separate. Experiments indeed showed that correlated spiking co-modulates with the static firing rate and is also tightly locked to the phase of beta-oscillations. While the dependence of correlations on the mean rate is well understood in feed-forward networks, it remains unclear why and by which mechanisms correlations tightly lock to an oscillatory cycle. We here demonstrate that such correlated activation of pairs of neurons is qualitatively explained by periodically driven random networks. We identify the mechanisms by which covariances depend on a driving periodic stimulus. Mean-field theory combined with linear response theory yields closed-form expressions for the cyclostationary mean activities and pairwise zero-time-lag covariances of binary recurrent random networks. Two distinct mechanisms cause time-dependent covariances: the modulation of the susceptibility of single neurons (via the external input and network feedback) and the time-varying variances of single-unit activities. For some parameters, the effectively inhibitory recurrent feedback leads to resonant covariances even if the mean activities show non-resonant behavior. Our analytical results open the question of time-modulated synchronous activity to quantitative analysis. PMID:28604771
A regulatory network to segregate the identity of neuronal subtypes.
Lee, Seunghee; Lee, Bora; Joshi, Kaumudi; Pfaff, Samuel L; Lee, Jae W; Lee, Soo-Kyung
2008-06-01
Spinal motor neurons (MNs) and V2 interneurons (V2-INs) are specified by two related LIM-complexes, MN-hexamer and V2-tetramer, respectively. Here we show how multiple parallel and complementary feedback loops are integrated to assign these two cell fates accurately. While MN-hexamer response elements (REs) are specific to MN-hexamer, V2-tetramer-REs can bind both LIM-complexes. In embryonic MNs, however, two factors cooperatively suppress the aberrant activation of V2-tetramer-REs. First, LMO4 blocks V2-tetramer assembly. Second, MN-hexamer induces a repressor, Hb9, which binds V2-tetramer-REs and suppresses their activation. V2-INs use a similar approach; V2-tetramer induces a repressor, Chx10, which binds MN-hexamer-REs and blocks their activation. Thus, our study uncovers a regulatory network to segregate related cell fates, which involves reciprocal feedforward gene regulatory loops.
The neuronal architecture of the mushroom body provides a logic for associative learning
Aso, Yoshinori; Hattori, Daisuke; Yu, Yang; Johnston, Rebecca M; Iyer, Nirmala A; Ngo, Teri-TB; Dionne, Heather; Abbott, LF; Axel, Richard; Tanimoto, Hiromu; Rubin, Gerald M
2014-01-01
We identified the neurons comprising the Drosophila mushroom body (MB), an associative center in invertebrate brains, and provide a comprehensive map describing their potential connections. Each of the 21 MB output neuron (MBON) types elaborates segregated dendritic arbors along the parallel axons of ∼2000 Kenyon cells, forming 15 compartments that collectively tile the MB lobes. MBON axons project to five discrete neuropils outside of the MB and three MBON types form a feedforward network in the lobes. Each of the 20 dopaminergic neuron (DAN) types projects axons to one, or at most two, of the MBON compartments. Convergence of DAN axons on compartmentalized Kenyon cell–MBON synapses creates a highly ordered unit that can support learning to impose valence on sensory representations. The elucidation of the complement of neurons of the MB provides a comprehensive anatomical substrate from which one can infer a functional logic of associative olfactory learning and memory. DOI: http://dx.doi.org/10.7554/eLife.04577.001 PMID:25535793
Visual sensation during pecking in pigeons.
Ostheim, J
1997-10-01
During the final down-thrust of a pigeon's head, the eyes are closed gradually, a response that was thought to block visual input. This phase of pecking was therefore assumed to be under feed-forward control exclusively. Analysis of high-resolution video recordings showed that visual information collected during the down-thrust of the head could be used for 'on-line' modulation of pecks in progress. We thus concluded that the final down-thrust of the head is controlled not exclusively by feed-forward mechanisms but also by visual feedback components. We could further establish that, as a rule, the eyes are never closed completely; instead the eyelids form a slit that leaves part of the pupil uncovered. The width of the slit between the pigeon's eyelids is highly sensitive to both ambient luminance and the visual background against which seeds are offered. It was concluded that eyelid slits increase the focal depth of retinal images under extreme near-field viewing conditions. Applying pharmacological methods, we confirmed that pupil size and eyelid slit width are controlled through conjoint neuronal mechanisms. This shared neuronal network is particularly sensitive to drugs that affect dopamine receptors.
Kleberg, Florence I.; Fukai, Tomoki; Gilson, Matthieu
2014-01-01
Spike-timing-dependent plasticity (STDP) has been well established between excitatory neurons and several computational functions have been proposed in various neural systems. Despite some recent efforts, however, there is a significant lack of functional understanding of inhibitory STDP (iSTDP) and its interplay with excitatory STDP (eSTDP). Here, we demonstrate by analytical and numerical methods that iSTDP contributes crucially to the balance of excitatory and inhibitory weights for the selection of a specific signaling pathway among other pathways in a feedforward circuit. This pathway selection is based on the high sensitivity of STDP to correlations in spike times, which complements a recent proposal for the role of iSTDP in firing-rate based selection. Our model predicts that asymmetric anti-Hebbian iSTDP exceeds asymmetric Hebbian iSTDP for supporting pathway-specific balance, which we show is useful for propagating transient neuronal responses. Furthermore, we demonstrate how STDPs at excitatory–excitatory, excitatory–inhibitory, and inhibitory–excitatory synapses cooperate to improve the pathway selection. We propose that iSTDP is crucial for shaping the network structure that achieves efficient processing of synchronous spikes. PMID:24847242
Gainey, Melanie A; Aman, Joseph W; Feldman, Daniel E
2018-04-20
Rapid plasticity of layer (L) 2/3 inhibitory circuits is an early step in sensory cortical map plasticity, but its cellular basis is unclear. We show that, in mice of either sex, 1-day whisker deprivation drives rapid loss of L4-evoked feedforward inhibition and more modest loss of feedforward excitation in L2/3 pyramidal (PYR) cells, increasing the E-I conductance ratio. Rapid disinhibition was due to reduced L4-evoked spiking by L2/3 parvalbumin (PV) interneurons, caused by reduced PV intrinsic excitability. This included an elevated PV spike threshold, associated with an increase in low-threshold, voltage-activated delayed-rectifier (presumed Kv1) and A-type potassium currents. Excitatory synaptic input and unitary inhibitory output of PV cells were unaffected. Functionally, the loss of feedforward inhibition and excitation were precisely coordinated in L2/3 PYR cells, so that peak feedforward synaptic depolarization remained stable. Thus, rapid plasticity of PV intrinsic excitability offsets early weakening of excitatory circuits to homeostatically stabilize synaptic potentials in PYR cells of sensory cortex. SIGNIFICANCE STATEMENT Inhibitory circuits in cerebral cortex are highly plastic, but the cellular mechanisms and functional importance of this plasticity are incompletely understood. We show that brief (1-day) sensory deprivation rapidly weakens parvalbumin (PV) inhibitory circuits by reducing the intrinsic excitability of PV neurons. This involved a rapid increase in voltage-gated potassium conductances that control near-threshold spiking excitability. Functionally, the loss of PV-mediated feedforward inhibition in L2/3 pyramidal cells was precisely balanced with the separate loss of feedforward excitation, resulting in a net homeostatic stabilization of synaptic potentials. Thus, rapid plasticity of PV intrinsic excitability implements network-level homeostasis to stabilize synaptic potentials in sensory cortex. Copyright © 2018 the authors.
An Investigation of the Application of Artificial Neural Networks to Adaptive Optics Imaging Systems
1991-12-01
neural network, and the feedforward neural network studied is the single-layer perceptron artificial neural network. The recurrent artificial neural network input...features are the wavefront sensor slope outputs and neighboring actuator feedback commands. The feedforward artificial neural network input
Nonlinear Bayesian filtering and learning: a neuronal dynamics for perception.
Kutschireiter, Anna; Surace, Simone Carlo; Sprekeler, Henning; Pfister, Jean-Pascal
2017-08-18
The robust estimation of dynamical hidden features, such as the position of prey, based on sensory inputs is one of the hallmarks of perception. This dynamical estimation can be rigorously formulated by nonlinear Bayesian filtering theory. Recent experimental and behavioral studies have shown that animals' performance in many tasks is consistent with such a Bayesian statistical interpretation. However, it is presently unclear how a nonlinear Bayesian filter can be efficiently implemented in a network of neurons that satisfies some minimum constraints of biological plausibility. Here, we propose the Neural Particle Filter (NPF), a sampling-based nonlinear Bayesian filter, which does not rely on importance weights. We show that this filter can be interpreted as the neuronal dynamics of a recurrently connected rate-based neural network receiving feed-forward input from sensory neurons. Further, it captures properties of temporal and multi-sensory integration that are crucial for perception, and it allows for online parameter learning with a maximum likelihood approach. The NPF holds the promise to avoid the 'curse of dimensionality', and we demonstrate numerically its capability to outperform weighted particle filters in higher dimensions and when the number of particles is limited.
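Schematically (and only schematically; the exact NPF update is given in the paper), a weightless sampling filter propagates an ensemble of particles through the prior dynamics plus a feedback correction toward the current observation, instead of reweighting and resampling. A toy sketch for a 1-D Ornstein-Uhlenbeck hidden state with noisy direct observations; the fixed gain K is an assumption here, whereas the NPF learns its parameters online:

```python
import numpy as np

rng = np.random.default_rng(4)
dt, T, n_particles = 0.01, 4000, 100
K = 5.0                                           # feedback gain (assumed, not learned here)

x = 0.0                                           # hidden state: dx = -x dt + dW
particles = rng.standard_normal(n_particles)
errors = []
for t in range(T):
    x += -x * dt + np.sqrt(dt) * rng.standard_normal()
    y = x + 0.3 * rng.standard_normal()           # noisy observation of the hidden state
    # Weightless update: prior drift + feedback toward the observation + diffusion.
    particles += (-particles + K * (y - particles)) * dt \
                 + np.sqrt(dt) * rng.standard_normal(n_particles)
    if t > 500:                                   # discard the initial transient
        errors.append((particles.mean() - x) ** 2)

mse = np.mean(errors)
print(mse)                                        # well below the prior variance of 0.5
```

Because every particle carries equal weight, there is no weight degeneracy to manage, which is the property the abstract invokes against the curse of dimensionality.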
Talking back: Development of the olivocochlear efferent system.
Frank, Michelle M; Goodrich, Lisa V
2018-06-26
Developing sensory systems must coordinate the growth of neural circuitry spanning from receptors in the peripheral nervous system (PNS) to multilayered networks within the central nervous system (CNS). This breadth presents particular challenges, as nascent processes must navigate across the CNS-PNS boundary and coalesce into a tightly intermingled wiring pattern, thereby enabling reliable integration from the PNS to the CNS and back. In the auditory system, feedforward spiral ganglion neurons (SGNs) from the periphery collect sound information via tonotopically organized connections in the cochlea and transmit this information to the brainstem for processing via the VIII cranial nerve. In turn, feedback olivocochlear neurons (OCNs) housed in the auditory brainstem send projections into the periphery, also through the VIII nerve. OCNs are motor neuron-like efferent cells that influence auditory processing within the cochlea and protect against noise damage in adult animals. These aligned feedforward and feedback systems develop in parallel, with SGN central axons reaching the developing auditory brainstem around the same time that the OCN axons extend out toward the developing inner ear. Recent findings have begun to unravel the genetic and molecular mechanisms that guide OCN development, from their origins in a generic pool of motor neuron precursors to their specialized roles as modulators of cochlear activity. One recurrent theme is the importance of efferent-afferent interactions, as afferent SGNs guide OCNs to their final locations within the sensory epithelium, and efferent OCNs shape the activity of the developing auditory system. This article is categorized under: Nervous System Development > Vertebrates: Regional Development. © 2018 Wiley Periodicals, Inc.
A neural network architecture for implementation of expert systems for real time monitoring
NASA Technical Reports Server (NTRS)
Ramamoorthy, P. A.
1991-01-01
Since neural networks have the advantages of massive parallelism and simple architecture, they are good tools for implementing real time expert systems. In a rule based expert system, the antecedents of rules are in the conjunctive or disjunctive form. We constructed a multilayer feedforward type network in which neurons represent AND or OR operations of rules. Further, we developed a translator which can automatically map a given rule base into the network. Also, we proposed a new and powerful yet flexible architecture that combines the advantages of both fuzzy expert systems and neural networks. This architecture uses the fuzzy logic concepts to separate input data domains into several smaller and overlapped regions. Rule-based expert systems for time critical applications using neural networks, the automated implementation of rule-based expert systems with neural nets, and fuzzy expert systems vs. neural nets are covered.
Recurrent Network models of sequence generation and memory
Rajan, Kanaka; Harvey, Christopher D; Tank, David W
2016-01-01
Sequential activation of neurons is a common feature of network activity during a variety of behaviors, including working memory and decision making. Previous network models for sequences and memory emphasized specialized architectures in which a principled mechanism is pre-wired into their connectivity. Here, we demonstrate that starting from random connectivity and modifying a small fraction of connections, a largely disordered recurrent network can produce sequences and implement working memory efficiently. We use this process, called Partial In-Network training (PINning), to model and match cellular-resolution imaging data from the posterior parietal cortex during a virtual memory-guided two-alternative forced choice task [Harvey, Coen and Tank, 2012]. Analysis of the connectivity reveals that sequences propagate by the cooperation between recurrent synaptic interactions and external inputs, rather than through feedforward or asymmetric connections. Together our results suggest that neural sequences may emerge through learning from largely unstructured network architectures. PMID:26971945
Spin orbit torque based electronic neuron
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sengupta, Abhronil, E-mail: asengup@purdue.edu; Choday, Sri Harsha; Kim, Yusung
2015-04-06
A device based on current-induced spin-orbit torque (SOT) that functions as an electronic neuron is proposed in this work. The SOT device implements an artificial neuron's thresholding (transfer) function. In the first step of a two-step switching scheme, a charge current places the magnetization of a nano-magnet along the hard-axis, i.e., an unstable point for the magnet. In the second step, the SOT device (neuron) receives a current (from the synapses) which moves the magnetization from the unstable point to one of the two stable states. The polarity of the synaptic current encodes the excitatory and inhibitory nature of the neuron input and determines the final orientation of the magnetization. A resistive crossbar array, functioning as synapses, generates a bipolar current that is a weighted sum of the inputs. The simulation of a two layer feed-forward artificial neural network based on the SOT electronic neuron shows that it consumes ∼3× lower power than a 45 nm digital CMOS implementation, while reaching ∼80% accuracy in the classification of 100 images of handwritten digits from the MNIST dataset.
Feedforward Inhibition and Synaptic Scaling – Two Sides of the Same Coin?
Keck, Christian; Savin, Cristina; Lücke, Jörg
2012-01-01
Feedforward inhibition and synaptic scaling are important adaptive processes that control the total input a neuron can receive from its afferents. While often studied in isolation, the two have been reported to co-occur in various brain regions. The functional implications of their interactions remain unclear, however. Based on a probabilistic modeling approach, we show here that fast feedforward inhibition and synaptic scaling interact synergistically during unsupervised learning. In technical terms, we model the input to a neural circuit using a normalized mixture model with Poisson noise. We demonstrate analytically and numerically that, in the presence of lateral inhibition introducing competition between different neurons, Hebbian plasticity and synaptic scaling approximate the optimal maximum likelihood solutions for this model. Our results suggest that, beyond its conventional use as a mechanism to remove undesired pattern variations, input normalization can make typical neural interaction and learning rules optimal on the stimulus subspace defined through feedforward inhibition. Furthermore, learning within this subspace is more efficient in practice, as it helps avoid locally optimal solutions. Our results suggest a close connection between feedforward inhibition and synaptic scaling which may have important functional implications for general cortical processing. PMID:22457610
NASA Astrophysics Data System (ADS)
Wismüller, Axel; DSouza, Adora M.; Abidin, Anas Z.; Wang, Xixi; Hobbs, Susan K.; Nagarajan, Mahesh B.
2015-03-01
Echo state networks (ESN) are recurrent neural networks where the hidden layer is replaced with a fixed reservoir of neurons. Unlike feed-forward networks, neuron training in ESN is restricted to the output neurons alone thereby providing a computational advantage. We demonstrate the use of such ESNs in our mutual connectivity analysis (MCA) framework for recovering the primary motor cortex network associated with hand movement from resting state functional MRI (fMRI) data. Such a framework consists of two steps - (1) defining a pair-wise affinity matrix between different pixel time series within the brain to characterize network activity and (2) recovering network components from the affinity matrix with non-metric clustering. Here, ESNs are used to evaluate pair-wise cross-estimation performance between pixel time series to create the affinity matrix, which is subsequently subject to non-metric clustering with the Louvain method. For comparison, the ground truth of the motor cortex network structure is established with a task-based fMRI sequence. Overlap between the primary motor cortex network recovered with our model free MCA approach and the ground truth was measured with the Dice coefficient. Our results show that network recovery with our proposed MCA approach is in close agreement with the ground truth. Such network recovery is achieved without requiring low-pass filtering of the time series ensembles prior to analysis, an fMRI preprocessing step that has courted controversy in recent years. Thus, we conclude our MCA framework can allow recovery and visualization of the underlying functionally connected networks in the brain on resting state fMRI.
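The pairwise cross-estimation step described above can be illustrated with a minimal echo state network: a fixed random reservoir is driven by one time series, and only a linear readout is trained (here by ridge regression) to predict the other series; the resulting prediction error would fill one entry of the affinity matrix. The reservoir size, spectral radius, ridge penalty, and toy time series below are illustrative assumptions, not the study's parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
n_res, spectral_radius, ridge = 100, 0.9, 1e-6

# two toy "pixel" time series: the target is a delayed, scaled copy
t = np.arange(1000)
source = np.sin(0.07 * t) + 0.05 * rng.normal(size=t.size)
target = 0.5 * np.roll(source, 1)

W = rng.normal(size=(n_res, n_res))
W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
w_in = rng.normal(size=n_res)

x = np.zeros(n_res)
states = np.empty((t.size, n_res))
for i, u in enumerate(source):
    x = np.tanh(W @ x + w_in * u)    # fixed reservoir; never trained
    states[i] = x

washout = 100   # discard the initial transient (and the roll wrap-around)
S, y = states[washout:], target[washout:]
w_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ y)
mse = np.mean((S @ w_out - y) ** 2)  # one entry of the affinity matrix
print(f"cross-estimation MSE: {mse:.5f}")
```

Restricting training to the closed-form readout is what gives ESNs their computational advantage over fully trained recurrent networks in this pairwise setting.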
Gentili, Pier Luigi; Giubila, Maria Sole; Germani, Raimondo; Romani, Aldo; Nicoziani, Andrea; Spalletti, Anna; Heron, B Mark
2017-06-19
Neuromorphic engineering promises to have a revolutionary impact in our societies. A strategy to develop artificial neurons (ANs) is to use oscillatory and excitable chemical systems. Herein, we use UV and visible radiation as both excitatory and inhibitory signals for the communication among oscillatory reactions, such as the Belousov-Zhabotinsky and the chemiluminescent Orban transformations, and photo-excitable photochromic and fluorescent species. We present the experimental results and the simulations regarding pairs of ANs communicating by either one or two optical signals, and triads of ANs arranged in both feed-forward and recurrent networks. We find that the ANs, powered chemically and/or by the energy of electromagnetic radiation, can give rise to the emergent properties of in-phase, out-of-phase, anti-phase synchronizations and phase-locking, dynamically mimicking the communication among real neurons. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Decorrelation of Neural-Network Activity by Inhibitory Feedback
Einevoll, Gaute T.; Diesmann, Markus
2012-01-01
Correlations in spike-train ensembles can seriously impair the encoding of information by their spatio-temporal structure. An inevitable source of correlation in finite neural networks is common presynaptic input to pairs of neurons. Recent studies demonstrate that spike correlations in recurrent neural networks are considerably smaller than expected based on the amount of shared presynaptic input. Here, we explain this observation by means of a linear network model and simulations of networks of leaky integrate-and-fire neurons. We show that inhibitory feedback efficiently suppresses pairwise correlations and, hence, population-rate fluctuations, thereby assigning inhibitory neurons the new role of active decorrelation. We quantify this decorrelation by comparing the responses of the intact recurrent network (feedback system) and systems where the statistics of the feedback channel is perturbed (feedforward system). Manipulations of the feedback statistics can lead to a significant increase in the power and coherence of the population response. In particular, neglecting correlations within the ensemble of feedback channels or between the external stimulus and the feedback amplifies population-rate fluctuations by orders of magnitude. The fluctuation suppression in homogeneous inhibitory networks is explained by a negative feedback loop in the one-dimensional dynamics of the compound activity. Similarly, a change of coordinates exposes an effective negative feedback loop in the compound dynamics of stable excitatory-inhibitory networks. The suppression of input correlations in finite networks is explained by the population averaged correlations in the linear network model: In purely inhibitory networks, shared-input correlations are canceled by negative spike-train correlations. In excitatory-inhibitory networks, spike-train correlations are typically positive. 
Here, the suppression of input correlations is not a result of the mere existence of correlations between excitatory (E) and inhibitory (I) neurons, but a consequence of a particular structure of correlations among the three possible pairings (EE, EI, II). PMID:23133368
Heikkinen, Hanna; Sharifian, Fariba; Vigario, Ricardo; Vanni, Simo
2015-07-01
The blood oxygenation level-dependent (BOLD) response has been strongly associated with neuronal activity in the brain. However, some neuronal tuning properties are consistently different from the BOLD response. We studied the spatial extent of neural and hemodynamic responses in the primary visual cortex, where the BOLD responses spread and interact over much longer distances than the small receptive fields of individual neurons would predict. Our model shows that a feedforward-feedback loop between V1 and a higher visual area can account for the observed spread of the BOLD response. In particular, anisotropic landing of inputs onto compartmental neurons was necessary to account for the BOLD signal spread, while retaining realistic spiking responses. Our work shows that simple dendrites can separate tuning at the synapses and at the action potential output, thus bridging the BOLD signal to the neural receptive fields with high fidelity. Copyright © 2015 the American Physiological Society.
Tuckwell, Henry C
2006-01-01
The circuitry of cortical networks involves interacting populations of excitatory (E) and inhibitory (I) neurons whose relationships are now known to a large extent. Inputs to E- and I-cells may have their origins in remote or local cortical areas. We consider a rudimentary model involving E- and I-cells. One of our goals is to test an analytic approach to finding firing rates in neural networks without using a diffusion approximation, and to this end we consider in detail networks of excitatory neurons with leaky integrate-and-fire (LIF) dynamics. A simple measure of synchronization, denoted by S(q) with q between 0 and 100, is introduced. Fully connected E-networks have a large tendency to become dominated by synchronously firing groups of cells, except when inputs are relatively weak. We observed random or asynchronous firing in such networks with diverse sets of parameter values. When such firing patterns were found, the analytical approach was often able to accurately predict average neuronal firing rates. We also considered several properties of E-E networks, distinguishing several kinds of firing pattern. Included were those with silences before or after periods of intense activity or with periodic synchronization. We investigated the occurrence of synchronized firing with respect to changes in the internal excitatory postsynaptic potential (EPSP) magnitude in a network of 100 neurons with fixed values of the remaining parameters. When the internal EPSP size was less than a certain value, synchronization was absent. The amount of synchronization then increased slowly as the EPSP amplitude increased until, at a particular EPSP size, the amount of synchronization abruptly increased, with S(5) attaining the maximum value of 100%.
We also found network frequency transfer characteristics for various network sizes and found a linear dependence of firing frequency over wide ranges of the external afferent frequency, with non-linear effects at lower input frequencies. The theory may also be applied to sparsely connected networks, whose firing behaviour was found to change abruptly as the probability of a connection passed through a critical value. The analytical method was also found to be useful for a feed-forward excitatory network and a network of excitatory and inhibitory neurons.
Slow oscillations orchestrating fast oscillations and memory consolidation.
Mölle, Matthias; Born, Jan
2011-01-01
Slow-wave sleep (SWS) facilitates the consolidation of hippocampus-dependent declarative memory. Based on the standard two-stage memory model, we propose that memory consolidation during SWS represents a process of system consolidation which is orchestrated by the neocortical <1Hz electroencephalogram (EEG) slow oscillation and involves the reactivation of newly encoded representations and their subsequent redistribution from temporary hippocampal to neocortical long-term storage sites. Indeed, experimental induction of slow oscillations during non-rapid eye movement (non-REM) sleep by slowly alternating transcranial current stimulation distinctly improves consolidation of declarative memory. The slow oscillations temporally group neuronal activity into up-states of strongly enhanced neuronal activity and down-states of neuronal silence. In a feed-forward efferent action, this grouping is induced not only in the neocortex but also in other structures relevant to consolidation, namely the thalamus generating 10-15Hz spindles, and the hippocampus generating sharp wave-ripples, with the latter well known to accompany a replay of newly encoded memories taking place in hippocampal circuitries. The feed-forward synchronizing effect of the slow oscillation enables the formation of spindle-ripple events where ripples and accompanying reactivated hippocampal memory information become nested into the single troughs of spindles. Spindle-ripple events thus enable reactivated memory-related hippocampal information to be fed back to neocortical networks in the excitable slow oscillation up-state where they can induce enduring plastic synaptic changes underlying the effective formation of long-term memories. Copyright © 2011 Elsevier B.V. All rights reserved.
Decoding small surface codes with feedforward neural networks
NASA Astrophysics Data System (ADS)
Varsamopoulos, Savvas; Criger, Ben; Bertels, Koen
2018-01-01
Surface codes reach high error thresholds when decoded with known algorithms, but the decoding time will likely exceed the available time budget, especially for near-term implementations. To decrease the decoding time, we reduce the decoding problem to a classification problem that a feedforward neural network can solve. We investigate quantum error correction and fault tolerance at small code distances using neural network-based decoders, demonstrating that the neural network can generalize to inputs that were not provided during training and that they can reach similar or better decoding performance compared to previous algorithms. We conclude by discussing the time required by a feedforward neural network decoder in hardware.
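The reduction of decoding to classification can be illustrated on a toy code. The sketch below uses a 3-bit repetition code as a stand-in for a small surface code: a tiny feedforward network is trained to map the two parity-check syndromes to the most likely single-bit error (the network size and training settings are arbitrary illustrative choices, not those of the study):

```python
import numpy as np

rng = np.random.default_rng(2)

# syndromes of a 3-bit repetition code with checks s1 = b0^b1, s2 = b1^b2,
# for the four error classes: no error, flip bit 0, flip bit 1, flip bit 2
X = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
labels = np.array([0, 1, 2, 3])
Y = np.eye(4)[labels]                          # one-hot targets

W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 4)); b2 = np.zeros(4)
lr = 0.5
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)                   # hidden layer
    Z = H @ W2 + b2
    P = np.exp(Z - Z.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)          # softmax over error classes
    G = (P - Y) / len(X)                       # cross-entropy gradient
    GH = (G @ W2.T) * (1 - H ** 2)             # backprop through tanh
    W2 -= lr * H.T @ G; b2 -= lr * G.sum(0)
    W1 -= lr * X.T @ GH; b1 -= lr * GH.sum(0)

H = np.tanh(X @ W1 + b1)
pred = (H @ W2 + b2).argmax(axis=1)
accuracy = (pred == labels).mean()
print(f"decoder accuracy: {accuracy:.2f}")
```

Once trained, decoding is a single fixed-depth forward pass, which is the property that makes this attractive for tight decoding time budgets.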
Zhao, Bo; Ding, Ruoxi; Chen, Shoushun; Linares-Barranco, Bernabe; Tang, Huajin
2015-09-01
This paper introduces an event-driven feedforward categorization system, which takes data from a temporal contrast address event representation (AER) sensor. The proposed system extracts bio-inspired cortex-like features and discriminates different patterns using an AER based tempotron classifier (a network of leaky integrate-and-fire spiking neurons). One of the system's most appealing characteristics is its event-driven processing, with both input and features taking the form of address events (spikes). The system was evaluated on an AER posture dataset and compared with two recently developed bio-inspired models. Experimental results have shown that it consumes much less simulation time while still maintaining comparable performance. In addition, experiments on the Mixed National Institute of Standards and Technology (MNIST) image dataset have demonstrated that the proposed system can work not only on raw AER data but also on images (with a preprocessing step to convert images into AER events) and that it can maintain competitive accuracy even when noise is added. The system was further evaluated on the MNIST dynamic vision sensor dataset (in which data is recorded using an AER dynamic vision sensor), with testing accuracy of 88.14%.
Chen, C L; Kaber, D B; Dempsey, P G
2000-06-01
A new and improved method for feedforward neural network (FNN) development, for application to data classification problems such as the prediction of levels of low-back disorder (LBD) risk associated with industrial jobs, is presented. Background on FNN development for data classification is provided, along with discussions of previous research and neighborhood (local) solution search methods for hard combinatorial problems. An analytical study is presented that compared the prediction accuracy of a FNN based on an error-back-propagation (EBP) algorithm with the accuracy of a FNN developed by considering results of local solution search (simulated annealing) for classifying industrial jobs as posing low or high risk for LBDs. The comparison demonstrated superior performance of the FNN generated using the new method. The architecture of this FNN included fewer input (predictor) variables and hidden neurons than the FNN developed based on the EBP algorithm. Independent-variable selection methods and the phenomenon of 'overfitting' in FNN (and statistical model) generation for data classification are discussed. The results are supportive of the use of the new approach to FNN development for applications to musculoskeletal disorders and risk forecasting in other domains.
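The neighborhood (local) solution search mentioned above can be sketched with a toy simulated-annealing run over input-variable subsets. The `cost` function here is a made-up surrogate for FNN validation error (only features 0 and 2 are informative, and every included feature adds a small penalty); the study evaluates candidate networks instead:

```python
import numpy as np

rng = np.random.default_rng(6)
n_feat = 8
informative = {0, 2}

def cost(mask):
    # toy surrogate for validation error: missing an informative feature
    # hurts a lot, every included feature adds a small complexity penalty
    missing = sum(1 for f in informative if not mask[f])
    return 1.0 * missing + 0.05 * int(mask.sum())

mask = rng.random(n_feat) < 0.5          # random initial feature subset
best_mask, best_cost = mask.copy(), cost(mask)
temp = 1.0
for step in range(2000):
    cand = mask.copy()
    flip = rng.integers(n_feat)          # neighborhood move: flip one bit
    cand[flip] = ~cand[flip]
    delta = cost(cand) - cost(mask)
    # accept improvements always, worsenings with Boltzmann probability
    if delta < 0 or rng.random() < np.exp(-delta / temp):
        mask = cand
    if cost(mask) < best_cost:
        best_mask, best_cost = mask.copy(), cost(mask)
    temp *= 0.995                        # geometric cooling schedule

print(f"selected features: {np.flatnonzero(best_mask).tolist()}")
```

The annealed search settles on the minimal informative subset, mirroring how the study's method arrived at fewer predictor variables than the EBP baseline.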
Exercise-induced neuronal plasticity in central autonomic networks: role in cardiovascular control.
Michelini, Lisete C; Stern, Javier E
2009-09-01
It is now well established that brain plasticity is an inherent property not only of the developing but also of the adult brain. Numerous beneficial effects of exercise, including improved memory, cognitive function and neuroprotection, have been shown to involve an important neuroplastic component. However, whether major adaptive cardiovascular adjustments during exercise, needed to ensure proper blood perfusion of peripheral tissues, also require brain neuroplasticity, is presently unknown. This review will critically evaluate current knowledge on proposed mechanisms that are likely to underlie the continuous resetting of baroreflex control of heart rate during/after exercise and following exercise training. Accumulating evidence indicates that not only somatosensory afferents (conveyed by skeletal muscle receptors, baroreceptors and/or cardiopulmonary receptors) but also projections arising from central command neurons (in particular, peptidergic hypothalamic pre-autonomic neurons) converge into the nucleus tractus solitarii (NTS) in the dorsal brainstem, to co-ordinate complex cardiovascular adaptations during dynamic exercise. This review focuses in particular on a reciprocally interconnected network between the NTS and the hypothalamic paraventricular nucleus (PVN), which is proposed to act as a pivotal anatomical and functional substrate underlying integrative feedforward and feedback cardiovascular adjustments during exercise. Recent findings supporting neuroplastic adaptive changes within the NTS-PVN reciprocal network (e.g. remodelling of afferent inputs, structural and functional neuronal plasticity and changes in neurotransmitter content) will be discussed within the context of their role as important underlying cellular mechanisms supporting the tonic activation and improved efficacy of these central pathways in response to circulatory demand at rest and during exercise, both in sedentary and in trained individuals. 
We hope this review will stimulate more comprehensive studies aimed at understanding cellular and molecular mechanisms within CNS neuronal networks that contribute to exercise-induced neuroplasticity and cardiovascular adjustments.
Collective Dynamics for Heterogeneous Networks of Theta Neurons
NASA Astrophysics Data System (ADS)
Luke, Tanushree
Collective behavior in neural networks has often been used as an indicator of communication between different brain areas. These collective synchronization and desynchronization patterns are also considered an important feature in understanding normal and abnormal brain function. To understand the emergence of these collective patterns, I create an analytic model that identifies all such macroscopic steady-states attainable by a network of Type-I neurons. This network, whose basic unit is the model "theta" neuron, contains a mixture of excitable and spiking neurons coupled via a smooth pulse-like synapse. Applying the Ott-Antonsen reduction method in the thermodynamic limit, I obtain a low-dimensional evolution equation that describes the asymptotic dynamics of the macroscopic mean field of the network. This model can be used as a basis for understanding more complicated neuronal networks when additional dynamical features are included. From this reduced dynamical equation for the mean field, I show that the network exhibits three collective attracting steady-states. The first two are equilibrium states that both reflect partial synchronization in the network, whereas the third is a limit cycle in which the degree of network synchronization oscillates in time. In addition to a comprehensive identification of all possible attracting macro-states, this analytic model permits a complete bifurcation analysis of the collective behavior of the network with respect to three key network features: the degree of excitability of the neurons, the heterogeneity of the population, and the overall coupling strength. The network typically tends towards the two macroscopic equilibrium states when the neurons' intrinsic dynamics and the network interactions reinforce each other. In contrast, the limit cycle state, bifurcations, and multistability tend to occur when there is competition between these network features.
I also outline here an extension of the above model where the neurons' excitability now varies sinusoidally in time, thus simulating a parabolic bursting network. This time-varying excitability can lead to the emergence of macroscopic chaos and multistability in the collective behavior of the network. Finally, I expand the single-population model described above to examine a two-population neuronal network where each population has its own unique mixture of excitable and spiking neurons, as well as its own coupling strength (either excitatory or inhibitory in nature). Specifically, I consider the situation where the first population is only allowed to influence the second population without any feedback, thus effectively creating a feed-forward "driver-response" system. In this special arrangement, the driver's asymptotic macroscopic dynamics are fully explored in the comprehensive analysis of the single population. Then, in the presence of an influence from the driver, the modified dynamics of the second population, which now acts as a response population, can also be fully analyzed. As in the time-varying model, these modifications give rise to richer dynamics in the response population than those found with the single-population formalism, including multi-periodicity and chaos.
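A direct simulation of a heterogeneous theta-neuron network and its macroscopic order parameter can be sketched as below. The Lorentzian spread of excitabilities, the pulse shape, and the coupling strength are illustrative assumptions, and the analytic treatment described above uses the Ott-Antonsen reduction rather than direct simulation:

```python
import numpy as np

rng = np.random.default_rng(3)
N, dt, steps = 500, 0.01, 5000
eta0, delta, k = 0.5, 0.1, 1.0   # excitability center, spread, coupling

# Lorentzian-distributed excitabilities: a mixture of intrinsically
# spiking (eta > 0) and excitable (eta < 0) theta neurons
eta = eta0 + delta * np.tan(np.pi * (rng.random(N) - 0.5))
theta = rng.uniform(-np.pi, np.pi, N)

for _ in range(steps):
    # smooth pulse-like synaptic drive from the whole population
    drive = k * np.mean((1 - np.cos(theta)) ** 2)
    theta += dt * ((1 - np.cos(theta)) + (1 + np.cos(theta)) * (eta + drive))

# Kuramoto order parameter: 0 (asynchrony) to 1 (full synchrony)
r = np.abs(np.mean(np.exp(1j * theta)))
print(f"order parameter r = {r:.3f}")
```

Tracking r over time in such a simulation is how the partially synchronized equilibria and the oscillating (limit cycle) macro-state can be distinguished numerically.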
Aliabadi, Mohsen; Farhadian, Maryam; Darvishi, Ebrahim
2015-08-01
Prediction of hearing loss in noisy workplaces is considered an important aspect of hearing conservation programs. Artificial intelligence, as a new approach, can be used to predict complex phenomena such as hearing loss. Using artificial neural networks, this study aims to present an empirical model for predicting the hearing loss threshold among noise-exposed workers. Two hundred and ten workers employed in a steel factory were chosen, and their occupational exposure histories were collected. To determine the hearing loss threshold, an audiometric test was carried out using a calibrated audiometer. Personal noise exposure was also measured using a noise dosimeter at the workers' workstations. Finally, data on five variables that can influence hearing loss were used to develop the prediction model. Multilayer feed-forward neural networks with different structures were developed using MATLAB software. The network structures had one hidden layer containing approximately 5 to 15 neurons. The best-performing network, with one hidden layer and ten neurons, could accurately predict the hearing loss threshold with RMSE = 2.6 dB and R(2) = 0.89. The results also confirmed that neural networks could provide more accurate predictions than multiple regression. Since occupational hearing loss is frequently incurable, accurate predictions can be used by occupational health experts to modify and improve noise exposure conditions.
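The modeling setup described above (a multilayer feedforward network with one hidden layer of about ten neurons and five input variables) can be sketched on synthetic data. The data-generating relationship and training settings below are invented for illustration, not taken from the study's exposure data:

```python
import numpy as np

rng = np.random.default_rng(4)
n, n_in, n_hidden = 200, 5, 10
X = rng.normal(size=(n, n_in))
# invented exposure-to-threshold relationship with mild noise
y = 2 * X[:, 0] - X[:, 1] + 0.5 * X[:, 2] + 0.1 * rng.normal(size=n)

W1 = 0.3 * rng.normal(size=(n_in, n_hidden)); b1 = np.zeros(n_hidden)
w2 = 0.1 * rng.normal(size=n_hidden); b2 = 0.0
lr = 0.02
for _ in range(4000):
    H = np.tanh(X @ W1 + b1)
    err = H @ w2 + b2 - y                      # squared-error residual
    GH = np.outer(err, w2) * (1 - H ** 2) / n  # backprop through tanh
    w2 -= lr * H.T @ err / n; b2 -= lr * err.mean()
    W1 -= lr * X.T @ GH; b1 -= lr * GH.sum(0)

H = np.tanh(X @ W1 + b1)
err = H @ w2 + b2 - y
rmse = np.sqrt(np.mean(err ** 2))
r2 = 1 - np.sum(err ** 2) / np.sum((y - y.mean()) ** 2)
print(f"RMSE = {rmse:.3f}, R^2 = {r2:.3f}")
```

RMSE and R-squared are the same fit statistics the study reports (2.6 dB and 0.89 for its best ten-neuron network).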
Neural Correlation Is Stimulus Modulated by Feedforward Inhibitory Circuitry
Middleton, Jason W.; Omar, Cyrus; Doiron, Brent; Simons, Daniel J.
2012-01-01
Correlated variability of neural spiking activity has important consequences for signal processing. How incoming sensory signals shape correlations of population responses remains unclear. Cross-correlations between spiking of different neurons may be particularly consequential in sparsely firing neural populations such as those found in layer 2/3 of sensory cortex. In rat whisker barrel cortex, we found that pairs of excitatory layer 2/3 neurons exhibit similarly low levels of spike count correlation during both spontaneous and sensory-evoked states. The spontaneous activity of excitatory–inhibitory neuron pairs is positively correlated, while sensory stimuli actively decorrelate joint responses. Computational modeling shows how threshold nonlinearities and local inhibition form the basis of a general decorrelating mechanism. We show that inhibitory population activity maintains low correlations in excitatory populations, especially during periods of sensory-evoked coactivation. The role of feedforward inhibition has been previously described in the context of trial-averaged phenomena. Our findings reveal a novel role for inhibition to shape correlations of neural variability and thereby prevent excessive correlations in the face of feedforward sensory-evoked activation. PMID:22238086
Extraction of texture features with a multiresolution neural network
NASA Astrophysics Data System (ADS)
Lepage, Richard; Laurendeau, Denis; Gagnon, Roger A.
1992-09-01
Texture is an important surface characteristic. Many industrial materials, such as wood, textile, or paper, are best characterized by their texture. Detection of defects occurring on such materials, or classification for quality control and matching, can be carried out through careful texture analysis. A system for the classification of pieces of wood used in the furniture industry is proposed. This paper is concerned with a neural network implementation of the feature extraction and classification components of the proposed system. Texture appears different depending on the spatial scale at which it is observed. A complete description of a texture thus implies an analysis at several spatial scales. We propose a compact pyramidal representation of the input image for multiresolution analysis. The feature extraction system is implemented on a multilayer artificial neural network. Each level of the pyramid, which is a representation of the input image at a given spatial resolution scale, is mapped into a layer of the neural network. A full-resolution texture image is input at the base of the pyramid, and a representation of the texture image at multiple resolutions is generated by the feedforward pyramid structure of the neural network. The receptive field of each neuron at a given pyramid level is preprogrammed as a discrete Gaussian low-pass filter. Meaningful characteristics of the textured image must be extracted if good resolving power of the classifier is to be achieved. Local dominant orientation is the principal feature extracted from the textured image. Local edge orientation is computed with a Sobel mask at four orientation angles (multiples of π/4). The resulting intrinsic image, that is, the local dominant orientation image, is fed to the texture classification neural network. The classification network is a three-layer feedforward back-propagation neural network.
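The local-dominant-orientation feature can be sketched directly: Sobel gradients are computed on a synthetic texture and edge orientation is quantized into four bins at multiples of pi/4. The stripe test image and the exact binning convention are illustrative assumptions:

```python
import numpy as np

def sobel_orientation(img):
    # 3x3 Sobel gradients via array slicing (valid region only)
    gx = (img[:-2, 2:] + 2 * img[1:-1, 2:] + img[2:, 2:]
          - img[:-2, :-2] - 2 * img[1:-1, :-2] - img[2:, :-2])
    gy = (img[2:, :-2] + 2 * img[2:, 1:-1] + img[2:, 2:]
          - img[:-2, :-2] - 2 * img[:-2, 1:-1] - img[:-2, 2:])
    angle = np.arctan2(gy, gx)                  # gradient direction
    # quantize into four orientation bins at multiples of pi/4
    bins = np.round(angle / (np.pi / 4)).astype(int) % 4
    return gx, gy, bins

# synthetic texture: vertical stripes, so gradients point horizontally
stripes = np.tile(np.array([0.0, 0.0, 1.0, 1.0]), (16, 8))[:, :16]
gx, gy, bins = sobel_orientation(stripes)
strong = np.abs(gx) + np.abs(gy) > 0            # keep pixels with edges
dominant = np.bincount(bins[strong], minlength=4).argmax()
print(f"dominant orientation bin: {dominant}")
```

The resulting orientation image is the "intrinsic image" the abstract describes; in the full system it is what gets fed to the classification network.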
Dynamic extreme learning machine and its approximation capability.
Zhang, Rui; Lan, Yuan; Huang, Guang-Bin; Xu, Zong-Ben; Soh, Yeng Chai
2013-12-01
Extreme learning machines (ELMs) have been proposed for generalized single-hidden-layer feedforward networks which need not be neuron alike and perform well in both regression and classification applications. The problem of determining the suitable network architectures is recognized to be crucial in the successful application of ELMs. This paper first proposes a dynamic ELM (D-ELM) where the hidden nodes can be recruited or deleted dynamically according to their significance to network performance, so that not only the parameters can be adjusted but also the architecture can be self-adapted simultaneously. Then, this paper proves in theory that such D-ELM using Lebesgue p-integrable hidden activation functions can approximate any Lebesgue p-integrable function on a compact input set. Simulation results obtained over various test problems demonstrate and verify that the proposed D-ELM does a good job reducing the network size while preserving good generalization performance.
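The basic ELM mechanism underlying D-ELM can be sketched in a few lines: hidden-layer parameters are drawn at random and left untrained, and only the linear output weights are obtained in closed form by least squares. The toy regression target and all sizes are illustrative; the dynamic recruitment and deletion of hidden nodes in D-ELM is not shown:

```python
import numpy as np

rng = np.random.default_rng(5)
n_hidden = 50
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x).ravel()

A = rng.uniform(-1, 1, (1, n_hidden))          # random input weights
b = rng.uniform(-1, 1, n_hidden)               # random biases
H = np.tanh(x @ A + b)                         # fixed random hidden layer
beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # closed-form readout

rmse = np.sqrt(np.mean((H @ beta - y) ** 2))
print(f"ELM fit RMSE: {rmse:.5f}")
```

Because the readout is a single least-squares solve, adding or deleting a hidden node only requires re-solving this linear system, which is what makes the dynamic architecture adaptation in D-ELM cheap.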
Thalamic synchrony and dynamic regulation of global forebrain oscillations.
Huguenard, John R; McCormick, David A
2007-07-01
The circuitry within the thalamus creates an intrinsic oscillatory unit whose function depends critically on reciprocal synaptic connectivity between excitatory thalamocortical relay neurons and inhibitory thalamic reticular neurons along with a robust post-inhibitory rebound mechanism in relay neurons. Feedforward and feedback connections between cortex and thalamus recruit the thalamic oscillatory unit into larger thalamocortical networks to generate sleep spindles and the spike-wave discharge of generalized absence epilepsy. The degree of synchrony within the thalamic network seems to be crucial in determining whether normal (spindle) or pathological (spike-wave) oscillations occur, and recent studies show that regulation of excitability in the reticular nucleus leads to dynamic modulation of the state of the thalamic circuit and provides a basis for explaining how a variety of unrelated genetic alterations might lead to the spike-wave phenotype. In addition, given the central role of the reticular nucleus in generating spike-wave discharge, these studies have suggested specific interventions that would prevent seizures while still allowing normal spindle generation to occur. This review is part of the INMED/TINS special issue Physiogenic and pathogenic oscillations: the beauty and the beast, based on presentations at the annual INMED/TINS symposium (http://inmednet.com).
Kerr, Robert R; Grayden, David B; Thomas, Doreen A; Gilson, Matthieu; Burkitt, Anthony N
2014-01-01
The brain is able to flexibly select behaviors that adapt to both its environment and its present goals. This cognitive control is understood to occur within the hierarchy of the cortex and relies strongly on the prefrontal and premotor cortices, which sit at the top of this hierarchy. Pyramidal neurons, the principal neurons in the cortex, have been observed to exhibit much stronger responses when they receive inputs at their soma/basal dendrites that are coincident with inputs at their apical dendrites. This corresponds to inputs from both lower-order regions (feedforward) and higher-order regions (feedback), respectively. In addition to this, coherence between oscillations, such as gamma oscillations, in different neuronal groups has been proposed to modulate and route communication in the brain. In this paper, we develop a simple, but novel, neural mass model in which cortical units (or ensembles) exhibit gamma oscillations when they receive coherent oscillatory inputs from both feedforward and feedback connections. By forming these units into circuits that can perform logic operations, we identify the different ways in which operations can be initiated and manipulated by top-down feedback. We demonstrate that more sophisticated and flexible top-down control is possible when the gain of units is modulated by not only top-down feedback but by coherence between the activities of the oscillating units. With these types of units, it is possible to not only add units to, or remove units from, a higher-level unit's logic operation using top-down feedback, but also to modify the type of role that a unit plays in the operation. Finally, we explore how different network properties affect top-down control and processing in large networks. Based on this, we make predictions about the likely connectivities between certain brain regions that have been experimentally observed to be involved in goal-directed behavior and top-down attention.
NASA Astrophysics Data System (ADS)
Grytskyy, Dmytro; Diesmann, Markus; Helias, Moritz
2016-06-01
Self-organized structures in networks with spike-timing dependent synaptic plasticity (STDP) are likely to play a central role for information processing in the brain. In the present study we derive a reaction-diffusion-like formalism for plastic feed-forward networks of nonlinear rate-based model neurons with a correlation sensitive learning rule inspired by and being qualitatively similar to STDP. After obtaining equations that describe the change of the spatial shape of the signal from layer to layer, we derive a criterion for the nonlinearity necessary to obtain stable dynamics for arbitrary input. We classify the possible scenarios of signal evolution and find that close to the transition to the unstable regime metastable solutions appear. The form of these dissipative solitons is determined analytically and the evolution and interaction of several such coexistent objects is investigated.
Practical training framework for fitting a function and its derivatives.
Pukrittayakamee, Arjpolson; Hagan, Martin; Raff, Lionel; Bukkapatnam, Satish T S; Komanduri, Ranga
2011-06-01
This paper describes a practical framework for using multilayer feedforward neural networks to simultaneously fit both a function and its first derivatives. This framework involves two steps. The first step is to train the network to optimize a performance index, which includes both the error in fitting the function and the error in fitting the derivatives. The second step is to prune the network by removing neurons that cause overfitting and then to retrain it. This paper describes two novel types of overfitting that are only observed when simultaneously fitting both a function and its first derivatives. A new pruning algorithm is proposed to eliminate these types of overfitting. Experimental results show that the pruning algorithm successfully eliminates the overfitting and produces the smoothest responses and the best generalization among all the training algorithms that we have tested.
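The performance index of the first step, a weighted sum of function-fit error and derivative-fit error, can be written down for a one-hidden-layer tanh network whose input derivative is available analytically. The weighting rho and the network form are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def net(params, x):
    """One-hidden-layer tanh network: returns f(x) and its analytic derivative f'(x)."""
    w, b, v = params
    h = np.tanh(np.outer(x, w) + b)
    f = h @ v
    df = (1 - h ** 2) @ (v * w)        # d/dx tanh(w*x + b) = w * (1 - tanh^2)
    return f, df

def combined_index(params, x, y, dy, rho=1.0):
    """Performance index: mean squared function error plus rho times
    mean squared derivative error (the quantity minimized in step one)."""
    f, df = net(params, x)
    return np.mean((f - y) ** 2) + rho * np.mean((df - dy) ** 2)
```

Training would minimize `combined_index` over `params`; the pruning step would then remove hidden units that cause the derivative-specific overfitting the paper describes.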
Li, Xiumin; Wang, Wei; Xue, Fangzheng; Song, Yongduan
2018-02-01
Recently there has been steadily increasing interest in building computational models of spiking neural networks (SNNs), such as the Liquid State Machine (LSM). Biologically inspired self-organized neural networks with neural plasticity can enhance computational performance, with the characteristic features of dynamical memory and recurrent connection cycles that distinguish them from the more widely used feedforward neural networks. Although a variety of computational models for brain-like learning and information processing have been proposed, the modeling of self-organized neural networks with multiple forms of neural plasticity is still an important open challenge. The main difficulties lie in the interplay among different neural plasticity rules and in understanding how the structure and dynamics of neural networks shape computational performance. In this paper, we propose a novel approach to developing LSM models with a biologically inspired self-organizing network based on two neural plasticity learning rules. The connectivity among excitatory neurons is adapted by spike-timing-dependent plasticity (STDP) learning; meanwhile, the degree of neuronal excitability is regulated to maintain a moderate average activity level by another learning rule: intrinsic plasticity (IP). Our study shows that an LSM with STDP+IP performs better than an LSM with a random SNN or an SNN obtained by STDP alone. The noticeable improvement with the proposed method is due to better-reflected competition among neurons in the developed SNN model, as well as more effectively encoded and processed dynamic information through its learning and self-organizing mechanism. This result gives insights into the optimization of computational models of spiking neural networks with neural plasticity.
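The intrinsic-plasticity rule can be illustrated with a toy rate unit: its threshold is nudged until the firing rate sits at a homeostatic target, regardless of the drive the neuron receives. The logistic transfer function and learning rate here are assumptions for illustration; the paper's IP rule acts on the excitability of spiking neurons.

```python
import math

def ip_step(drive, threshold, target=0.2, eta=0.5):
    """One intrinsic-plasticity update (sketch): the rate comes from a logistic
    transfer of (drive - threshold); the threshold is nudged up when the rate
    exceeds the homeostatic target and down when it falls short."""
    rate = 1.0 / (1.0 + math.exp(-(drive - threshold)))
    return rate, threshold + eta * (rate - target)

# units with very different drives all settle near the same target rate
rates = {}
for drive in (-2.0, 0.0, 3.0):
    threshold = 0.0
    for _ in range(300):
        rate, threshold = ip_step(drive, threshold)
    rates[drive] = rate
```

This homeostatic convergence is what keeps the average activity level moderate while STDP reshapes the excitatory connectivity.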
A hybrid linear/nonlinear training algorithm for feedforward neural networks.
McLoone, S; Brown, M D; Irwin, G; Lightbody, A
1998-01-01
This paper presents a new hybrid optimization strategy for training feedforward neural networks. The algorithm combines gradient-based optimization of nonlinear weights with singular value decomposition (SVD) computation of linear weights in one integrated routine. It is described for the multilayer perceptron (MLP) and radial basis function (RBF) networks and then extended to the local model network (LMN), a new feedforward structure in which a global nonlinear model is constructed from a set of locally valid submodels. Simulation results are presented demonstrating the superiority of the new hybrid training scheme compared to second-order gradient methods. It is particularly effective for the LMN architecture where the linear to nonlinear parameter ratio is large.
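The linear half of the hybrid scheme can be sketched for an RBF network: with the nonlinear parameters (centers, widths) held fixed, the output-layer weights have a closed-form solution via the SVD pseudoinverse. In the full algorithm this solve would sit inside a gradient loop that updates the nonlinear parameters; the single fixed-center solve below is a simplification.

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian RBF hidden-layer outputs for inputs X and fixed centers."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def linear_weights_svd(H, y):
    """Closed-form linear output weights via the SVD pseudoinverse
    (the 'linear' half of the hybrid training routine)."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    s_inv = np.where(s > 1e-10 * s[0], 1.0 / s, 0.0)  # guard tiny singular values
    return Vt.T @ (s_inv * (U.T @ y))
```

Because the problem is linear in the output weights once the centers are fixed, the SVD solves it exactly in one step, which is what makes alternating it with gradient updates of the nonlinear weights attractive.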
Reinforcement Learning of Linking and Tracing Contours in Recurrent Neural Networks
Brosch, Tobias; Neumann, Heiko; Roelfsema, Pieter R.
2015-01-01
The processing of a visual stimulus can be subdivided into a number of stages. Upon stimulus presentation there is an early phase of feedforward processing where the visual information is propagated from lower to higher visual areas for the extraction of basic and complex stimulus features. This is followed by a later phase where horizontal connections within areas and feedback connections from higher areas back to lower areas come into play. In this later phase, image elements that are behaviorally relevant are grouped by Gestalt grouping rules and are labeled in the cortex with enhanced neuronal activity (object-based attention in psychology). Recent neurophysiological studies revealed that reward-based learning influences these recurrent grouping processes, but it is not well understood how rewards train recurrent circuits for perceptual organization. This paper examines the mechanisms for reward-based learning of new grouping rules. We derive a learning rule that can explain how rewards influence the information flow through feedforward, horizontal and feedback connections. We illustrate the efficiency with two tasks that have been used to study the neuronal correlates of perceptual organization in early visual cortex. The first task is called contour-integration and demands the integration of collinear contour elements into an elongated curve. We show how reward-based learning causes an enhancement of the representation of the to-be-grouped elements at early levels of a recurrent neural network, just as is observed in the visual cortex of monkeys. The second task is curve-tracing where the aim is to determine the endpoint of an elongated curve composed of connected image elements. If trained with the new learning rule, neural networks learn to propagate enhanced activity over the curve, in accordance with neurophysiological data. We close the paper with a number of model predictions that can be tested in future neurophysiological and computational studies. 
PMID:26496502
Recurrent Coupling Improves Discrimination of Temporal Spike Patterns
Yuan, Chun-Wei; Leibold, Christian
2012-01-01
Despite the ubiquitous presence of recurrent synaptic connections in sensory neuronal systems, their general functional purpose is not well understood. A recent conceptual advance has been achieved by theories of reservoir computing in which recurrent networks have been proposed to generate short-term memory as well as to improve neuronal representation of the sensory input for subsequent computations. Here, we present a numerical study on the distinct effects of inhibitory and excitatory recurrence in a canonical linear classification task. It is found that both types of coupling improve the ability to discriminate temporal spike patterns as compared to a purely feed-forward system, although in different ways. For a large class of inhibitory networks, the network’s performance is optimal as long as a fraction of roughly 50% of neurons per stimulus is active in the resulting population code. Thereby the contribution of inactive neurons to the neural code is found to be even more informative than that of the active neurons, generating an inherent robustness of classification performance against temporal jitter of the input spikes. Excitatory couplings are found to not only produce a short-term memory buffer but also to improve linear separability of the population patterns by evoking more irregular firing as compared to the purely inhibitory case. As the excitatory connectivity becomes more sparse, firing becomes more variable, and pattern separability improves. We argue that the proposed paradigm is particularly well-suited as a conceptual framework for processing of sensory information in the auditory pathway. PMID:22586392
Efficient transformation of an auditory population code in a small sensory system.
Clemens, Jan; Kutzki, Olaf; Ronacher, Bernhard; Schreiber, Susanne; Wohlgemuth, Sandra
2011-08-16
Optimal coding principles are implemented in many large sensory systems. They include the systematic transformation of external stimuli into a sparse and decorrelated neuronal representation, enabling a flexible readout of stimulus properties. Are these principles also applicable to size-constrained systems, which have to rely on a limited number of neurons and may only have to fulfill specific and restricted tasks? We studied this question in an insect system--the early auditory pathway of grasshoppers. Grasshoppers use genetically fixed songs to recognize mates. The first steps of neural processing of songs take place in a small three-layer feed-forward network comprising only a few dozen neurons. We analyzed the transformation of the neural code within this network. Indeed, grasshoppers create a decorrelated and sparse representation, in accordance with optimal coding theory. Whereas the neuronal input layer is best read out as a summed population, a labeled-line population code for temporal features of the song is established after only two processing steps. At this stage, information about song identity is maximal for a population decoder that preserves neuronal identity. We conclude that optimal coding principles do apply to the early auditory system of the grasshopper, despite its size constraints. The inputs, however, are not encoded in a systematic, map-like fashion as in many larger sensory systems. Already at its periphery, part of the grasshopper auditory system seems to focus on behaviorally relevant features, and is in this property more reminiscent of higher sensory areas in vertebrates.
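The contrast between a summed-population readout and a labelled-line readout can be made concrete with synthetic Poisson spike counts: when stimulus identity is carried by which neuron fires rather than by how much the population fires overall, a decoder that preserves neuronal identity succeeds where a summed decoder sits at chance. The rates and the nearest-centroid decoder below are illustrative, not the grasshopper data.

```python
import numpy as np

rng = np.random.default_rng(1)

def nearest_centroid_acc(train, test, labels):
    """Classify each test trial by its nearest class centroid; return accuracy."""
    classes = np.unique(labels)
    cents = np.array([train[labels == c].mean(axis=0) for c in classes])
    pred = classes[np.argmin(((test[:, None, :] - cents[None]) ** 2).sum(-1), axis=1)]
    return (pred == labels).mean()

def make_trials(n):
    """Two stimuli, two neurons: identity is carried by WHICH neuron fires more
    (a labelled line); the total spike count is the same for both stimuli."""
    labels = np.repeat([0, 1], n)
    rates = np.where(labels[:, None] == 0, [8.0, 2.0], [2.0, 8.0])
    return rng.poisson(rates).astype(float), labels

train, labels = make_trials(200)
test, _ = make_trials(200)
acc_labelled = nearest_centroid_acc(train, test, labels)
acc_summed = nearest_centroid_acc(train.sum(1, keepdims=True),
                                  test.sum(1, keepdims=True), labels)
```

Here the labelled-line decoder is near-perfect while the summed decoder hovers near chance, mirroring the abstract's point that information about song identity is maximal for a decoder that preserves neuronal identity.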
Brain-state invariant thalamo-cortical coordination revealed by non-linear encoders.
Viejo, Guillaume; Cortier, Thomas; Peyrache, Adrien
2018-03-01
Understanding how neurons cooperate to integrate sensory inputs and guide behavior is a fundamental problem in neuroscience. A large body of methods has been developed to study neuronal firing at the single cell and population levels, generally seeking interpretability as well as predictivity. However, these methods are usually confronted with the lack of ground-truth necessary to validate the approach. Here, using neuronal data from the head-direction (HD) system, we present evidence demonstrating how gradient boosted trees, a non-linear and supervised Machine Learning tool, can learn the relationship between behavioral parameters and neuronal responses with high accuracy by optimizing the information rate. Interestingly, and unlike other classes of Machine Learning methods, the intrinsic structure of the trees can be interpreted in relation to behavior (e.g. to recover the tuning curves) or to study how neurons cooperate with their peers in the network. We show how the method, unlike linear analysis, reveals that the coordination in thalamo-cortical circuits is qualitatively the same during wakefulness and sleep, indicating a brain-state independent feed-forward circuit. Machine Learning tools thus open new avenues for benchmarking model-based characterization of spike trains.
Integrated plasticity at inhibitory and excitatory synapses in the cerebellar circuit.
Mapelli, Lisa; Pagani, Martina; Garrido, Jesus A; D'Angelo, Egidio
2015-01-01
The way long-term potentiation (LTP) and depression (LTD) are integrated within the different synapses of brain neuronal circuits is poorly understood. In order to progress beyond the identification of specific molecular mechanisms, a system in which multiple forms of plasticity can be correlated with large-scale neural processing is required. In this paper we take as an example the cerebellar network, in which extensive investigations have revealed LTP and LTD at several excitatory and inhibitory synapses. Cerebellar LTP and LTD occur in all three main cerebellar subcircuits (granular layer, molecular layer, deep cerebellar nuclei) and correspondingly regulate the function of their three main neurons: granule cells (GrCs), Purkinje cells (PCs) and deep cerebellar nuclear (DCN) cells. All these neurons, in addition to being excited, are reached by feed-forward and feed-back inhibitory connections, in which LTP and LTD may operate either synergistically or homeostatically to control information flow through the circuit. Although the investigation of individual synaptic plasticities in vitro is essential to prove their existence and mechanisms, it is insufficient to generate a coherent view of their impact on network functioning in vivo. Recent computational models and cell-specific genetic mutations in mice are shedding light on how plasticity at multiple excitatory and inhibitory synapses might regulate neuronal activities in the cerebellar circuit and contribute to learning and memory and behavioral control.
Sun, Yuwen; Cheng, Allen C
2012-07-01
Artificial neural networks (ANNs) are a promising machine learning technique in classifying non-linear electrocardiogram (ECG) signals and recognizing abnormal patterns suggesting risks of cardiovascular diseases (CVDs). In this paper, we propose a new reusable neuron architecture (RNA) enabling a performance-efficient and cost-effective silicon implementation for ANN. The RNA architecture consists of a single layer of physical RNA neurons, each of which is designed to use minimal hardware resource (e.g., a single 2-input multiplier-accumulator is used to compute the dot product of two vectors). By carefully applying the principle of time sharing, RNA can multiplex this single layer of physical neurons to efficiently execute both feed-forward and back-propagation computations of an ANN while conserving the area and reducing the power dissipation of the silicon. A three-layer 51-30-12 ANN is implemented in RNA to perform the ECG classification for CVD detection. This RNA hardware also allows on-chip automatic training updates. A quantitative design space exploration in area, power dissipation, and execution speed between RNA and three other implementations representative of different reusable hardware strategies is presented and discussed. Compared with an equivalent software implementation in C executed on an embedded microprocessor, the RNA ASIC achieves three orders of magnitude improvements in both execution speed and energy efficiency.
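The time-sharing idea is easy to sketch in software: a single multiply-accumulate routine is reused sequentially for every neuron of every layer, which is what lets one physical layer of RNA neurons emulate a whole feedforward network. The ReLU activation below is an assumption for illustration; the hardware's actual activation function is not specified in the abstract.

```python
def mac_neuron(weights, inputs, bias=0.0):
    """One physical multiply-accumulate unit: sequential dot product, then activation."""
    acc = bias
    for w, x in zip(weights, inputs):
        acc += w * x          # one 2-input multiply-accumulate per step
    return max(0.0, acc)      # activation (ReLU chosen for simplicity)

def layer_time_shared(weight_rows, inputs, biases):
    """Reuse the single MAC to evaluate every neuron of a layer in sequence."""
    return [mac_neuron(w, inputs, b) for w, b in zip(weight_rows, biases)]

def feedforward(net, x):
    """Run all layers through the same time-shared physical layer."""
    for weight_rows, biases in net:
        x = layer_time_shared(weight_rows, x, biases)
    return x
```

In silicon, this serialization trades throughput for area and power, which is the design point the paper's 51-30-12 ECG classifier exploits.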
Spontaneously emerging direction selectivity maps in visual cortex through STDP.
Wenisch, Oliver G; Noll, Joachim; Hemmen, J Leo van
2005-10-01
It is still an open question whether, and how, direction-selective neuronal responses in primary visual cortex are generated by feedforward thalamocortical or recurrent intracortical connections, or a combination of both. Here we present an investigation that concentrates on and, only for the sake of simplicity, restricts itself to intracortical circuits, in particular, with respect to the developmental aspects of direction selectivity through spike-timing-dependent synaptic plasticity. We show that directional responses can emerge in a recurrent network model of visual cortex with spiking neurons that integrate inputs mainly from a particular direction, thus giving rise to an asymmetrically shaped receptive field. A moving stimulus that enters the receptive field from this (preferred) direction will activate a neuron most strongly because of the increased number and/or strength of inputs from this direction and since delayed isotropic inhibition will neither overlap with, nor cancel excitation, as would be the case for other stimulus directions. It is demonstrated how direction-selective responses result from spatial asymmetries in the distribution of synaptic contacts or weights of inputs delivered to a neuron by slowly conducting intracortical axonal delay lines. By means of spike-timing-dependent synaptic plasticity with an asymmetric learning window this kind of coupling asymmetry develops naturally in a recurrent network of stochastically spiking neurons in a scenario where the neurons are activated by unidirectionally moving bar stimuli and even when only intrinsic spontaneous activity drives the learning process. We also present simulation results to show the ability of this model to produce direction preference maps similar to experimental findings.
Coding the presence of visual objects in a recurrent neural network of visual cortex.
Zwickel, Timm; Wachtler, Thomas; Eckhorn, Reinhard
2007-01-01
Before we can recognize a visual object, our visual system has to segregate it from its background. This requires a fast mechanism for establishing the presence and location of objects independently of their identity. Recently, border-ownership neurons were recorded in monkey visual cortex which might be involved in this task [Zhou, H., Friedmann, H., von der Heydt, R., 2000. Coding of border ownership in monkey visual cortex. J. Neurosci. 20 (17), 6594-6611]. In order to explain the basic mechanisms required for fast coding of object presence, we have developed a neural network model of visual cortex consisting of three stages. Feed-forward and lateral connections support coding of Gestalt properties, including similarity, good continuation, and convexity. Neurons of the highest area respond to the presence of an object and encode its position, invariant of its form. Feedback connections to the lowest area facilitate orientation detectors activated by contours belonging to potential objects, and thus generate the experimentally observed border-ownership property. This feedback control acts fast and significantly improves the figure-ground segregation required for the consecutive task of object recognition.
Shen, Xu; Tian, Xinmei; Liu, Tongliang; Xu, Fang; Tao, Dacheng
2017-10-03
Dropout has been proven to be an effective algorithm for training robust deep networks because of its ability to prevent overfitting by avoiding the co-adaptation of feature detectors. Current explanations of dropout include bagging, naive Bayes, regularization, and sex in evolution. According to the activation patterns of neurons in the human brain, when faced with different situations, the firing rates of neurons are random and continuous, not binary as current dropout does. Inspired by this phenomenon, we extend the traditional binary dropout to continuous dropout. On the one hand, continuous dropout is considerably closer to the activation characteristics of neurons in the human brain than traditional binary dropout. On the other hand, we demonstrate that continuous dropout has the property of avoiding the co-adaptation of feature detectors, which suggests that we can extract more independent feature detectors for model averaging in the test stage. We introduce the proposed continuous dropout to a feedforward neural network and comprehensively compare it with binary dropout, adaptive dropout, and DropConnect on Modified National Institute of Standards and Technology, Canadian Institute for Advanced Research-10, Street View House Numbers, NORB, and ImageNet large scale visual recognition competition-12. Thorough experiments demonstrate that our method performs better in preventing the co-adaptation of feature detectors and improves test performance.
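A minimal sketch of the contrast between binary and continuous dropout: the mask is drawn from a continuous distribution rather than from {0, 1}, with scaling chosen so the expected activation is unchanged. The uniform mask distribution below is an assumption for illustration; the paper considers specific continuous distributions of its own.

```python
import numpy as np

rng = np.random.default_rng(0)

def binary_dropout(h, p=0.5):
    """Standard dropout: Bernoulli(p) keep-mask, rescaled to preserve the mean."""
    mask = (rng.random(h.shape) < p).astype(float)
    return h * mask / p

def continuous_dropout(h, mu=0.5):
    """Continuous dropout sketch: each unit is attenuated by a random factor
    drawn from U(0, 2*mu), i.e. continuously rather than all-or-none."""
    mask = rng.uniform(0.0, 2.0 * mu, size=h.shape)
    return h * mask / mu
```

Both variants preserve the expected activation, but the continuous mask never zeroes units outright, which is the property the paper links to the graded firing rates of biological neurons.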
Mejias, Jorge F; Payeur, Alexandre; Selin, Erik; Maler, Leonard; Longtin, André
2014-01-01
The control of input-to-output mappings, or gain control, is one of the main strategies used by neural networks for the processing and gating of information. Using a spiking neural network model, we studied the gain control induced by a form of inhibitory feedforward circuitry, also known as "open-loop feedback", which has been experimentally observed in a cerebellum-like structure in weakly electric fish. We found, both analytically and numerically, that this network displays three different regimes of gain control: subtractive, divisive, and non-monotonic. Subtractive gain control was obtained when noise is very low in the network. Also, it was possible to change from divisive to non-monotonic gain control by simply modulating the strength of the feedforward inhibition, which may be achieved via long-term synaptic plasticity. The particular case of divisive gain control has been previously observed in vivo in weakly electric fish. These gain control regimes were robust to the presence of temporal delays in the inhibitory feedforward pathway, which were found to linearize the input-to-output mappings (or f-I curves) via a novel variability-increasing mechanism. Our findings highlight the feedforward-induced gain control analyzed here as a highly versatile mechanism of information gating in the brain.
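The distinction between subtractive and divisive gain control can be reproduced with a toy rectified unit: deterministic feedforward inhibition shifts the f-I curve without changing its slope (subtractive), while fluctuating inhibition flattens the curve (divisive-like). The rectified-linear unit and Gaussian inhibition are simplifying assumptions; the paper uses a full spiking network.

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_rate(I, inh_mean, inh_std, theta=1.0, n=100_000):
    """Trial-averaged output of a rectified unit, r = <[I - theta - inh]_+>,
    driven by input I and noisy feedforward inhibition."""
    inh = rng.normal(inh_mean, inh_std, size=n)
    return np.maximum(0.0, I - theta - inh).mean()

def slope(inh_mean, inh_std, I0, I1):
    """Average slope of the f-I curve between inputs I0 and I1."""
    return (mean_rate(I1, inh_mean, inh_std) - mean_rate(I0, inh_mean, inh_std)) / (I1 - I0)

slope_none = slope(0.0, 1e-9, 3.0, 4.0)   # no inhibition, above threshold
slope_sub = slope(1.0, 1e-9, 3.0, 4.0)    # deterministic inhibition: curve shifts, slope kept
slope_div = slope(0.0, 2.0, 1.0, 2.0)     # fluctuating inhibition: slope is reduced
```

This matches the abstract's observation that the subtractive regime appears at very low noise, while variability in the inhibitory pathway produces divisive-like changes in gain.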
Hendrickson, Phillip J; Yu, Gene J; Song, Dong; Berger, Theodore W
2016-01-01
This paper describes a million-plus granule cell compartmental model of the rat hippocampal dentate gyrus, including excitatory, perforant path input from the entorhinal cortex, and feedforward and feedback inhibitory input from dentate interneurons. The model includes experimentally determined morphological and biophysical properties of granule cells, together with glutamatergic AMPA-like EPSP and GABAergic GABAA-like IPSP synaptic excitatory and inhibitory inputs, respectively. Each granule cell was composed of approximately 200 compartments having passive and active conductances distributed throughout the somatic and dendritic regions. Modeling excitatory input from the entorhinal cortex was guided by axonal transport studies documenting the topographical organization of projections from subregions of the medial and lateral entorhinal cortex, plus other important details of the distribution of glutamatergic inputs to the dentate gyrus. Information contained within previously published maps of this major hippocampal afferent was systematically converted to scales that allowed the topographical distribution and relative synaptic densities of perforant path inputs to be quantitatively estimated for inclusion in the current model. Results showed that when medial and lateral entorhinal cortical neurons maintained Poisson random firing, dentate granule cells expressed, throughout the million-cell network, a robust nonrandom pattern of spiking best described as a spatiotemporal "clustering." To identify the network property or properties responsible for generating such firing "clusters," we progressively eliminated from the model key mechanisms, such as feedforward and feedback inhibition, intrinsic membrane properties underlying rhythmic burst firing, and/or topographical organization of entorhinal afferents.
Findings conclusively identified topographical organization of inputs as the key element responsible for generating a spatiotemporal distribution of clustered firing. These results uncover a functional organization of perforant path afferents to the dentate gyrus not previously recognized: topography-dependent clusters of granule cell activity as "functional units" or "channels" that organize the processing of entorhinal signals. This modeling study also reveals for the first time how a global signal processing feature of a neural network can evolve from one of its underlying structural characteristics.
Hendrickson, Phillip J.; Yu, Gene J.; Song, Dong; Berger, Theodore W.
2016-01-01
Goal: This manuscript describes a million-plus granule cell compartmental model of the rat hippocampal dentate gyrus, including excitatory, perforant path input from the entorhinal cortex, and feedforward and feedback inhibitory input from dentate interneurons. Methods: The model includes experimentally determined morphological and biophysical properties of granule cells, together with glutamatergic AMPA-like EPSP and GABAergic GABAA-like IPSP synaptic excitatory and inhibitory inputs, respectively. Each granule cell was composed of approximately 200 compartments having passive and active conductances distributed throughout the somatic and dendritic regions. Modeling excitatory input from the entorhinal cortex was guided by axonal transport studies documenting the topographical organization of projections from subregions of the medial and lateral entorhinal cortex, plus other important details of the distribution of glutamatergic inputs to the dentate gyrus. Information contained within previously published maps of this major hippocampal afferent was systematically converted to scales that allowed the topographical distribution and relative synaptic densities of perforant path inputs to be quantitatively estimated for inclusion in the current model. Results: When medial and lateral entorhinal cortical neurons maintained Poisson random firing, dentate granule cells expressed, throughout the million-cell network, a robust, non-random pattern of spiking best described as spatio-temporal “clustering”. To identify the network property or properties responsible for generating such firing “clusters”, we progressively eliminated from the model key mechanisms such as feedforward and feedback inhibition, intrinsic membrane properties underlying rhythmic burst firing, and/or topographical organization of entorhinal afferents. 
Conclusion: Findings conclusively identified topographical organization of inputs as the key element responsible for generating a spatio-temporal distribution of clustered firing. These results uncover a functional organization of perforant path afferents to the dentate gyrus not previously recognized: topography-dependent clusters of granule cell activity as “functional units” or “channels” that organize the processing of entorhinal signals. This modeling study also reveals for the first time how a global signal processing feature of a neural network can evolve from one of its underlying structural characteristics. PMID:26087482
A mixed incoherent feed-forward loop contributes to the regulation of bacterial photosynthesis genes
Mank, Nils N.; Berghoff, Bork A.; Klug, Gabriele
2013-01-01
Living cells use a variety of regulatory network motifs for accurate gene expression in response to changes in their environment or during differentiation processes. In Rhodobacter sphaeroides, a complex regulatory network controls expression of photosynthesis genes to guarantee optimal energy supply on one hand and to avoid photooxidative stress on the other hand. Recently, we identified a mixed incoherent feed-forward loop comprising the transcription factor PrrA, the sRNA PcrZ and photosynthesis target genes as part of this regulatory network. This point-of-view provides a comparison to other described feed-forward loops and discusses the physiological relevance of PcrZ in more detail. PMID:23392242
Neural Sequence Generation Using Spatiotemporal Patterns of Inhibition.
Cannon, Jonathan; Kopell, Nancy; Gardner, Timothy; Markowitz, Jeffrey
2015-11-01
Stereotyped sequences of neural activity are thought to underlie reproducible behaviors and cognitive processes ranging from memory recall to arm movement. One of the most prominent theoretical models of neural sequence generation is the synfire chain, in which pulses of synchronized spiking activity propagate robustly along a chain of cells connected by highly redundant feedforward excitation. But recent experimental observations in the avian song production pathway during song generation have shown excitatory activity interacting strongly with the firing patterns of inhibitory neurons, suggesting a process of sequence generation more complex than feedforward excitation. Here we propose a model of sequence generation inspired by these observations in which a pulse travels along a spatially recurrent excitatory chain, passing repeatedly through zones of local feedback inhibition. In this model, synchrony and robust timing are maintained not through redundant excitatory connections, but rather through the interaction between the pulse and the spatiotemporal pattern of inhibition that it creates as it circulates through the network. These results suggest that spatially and temporally structured inhibition may play a key role in sequence generation.
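For contrast with the inhibition-based mechanism proposed in this paper, the classical synfire chain it departs from can be sketched in a few lines (group size, connection probability, and threshold here are invented for illustration): a synchronous pulse propagates down a chain of groups because each neuron receives many redundant feedforward inputs from the previous group.

```python
import numpy as np

rng = np.random.default_rng(0)

n_groups, group_size = 10, 50
threshold = 20   # spikes needed from the previous group for a neuron to fire
p_connect = 0.6  # probability of a feedforward connection between groups

# Random binary connection matrices between consecutive groups.
weights = [(rng.random((group_size, group_size)) < p_connect).astype(int)
           for _ in range(n_groups - 1)]

# Start with a fully synchronous pulse in group 0 and propagate it.
activity = np.ones(group_size, dtype=int)
pulse_sizes = [int(activity.sum())]
for w in weights:
    # Each neuron fires if enough neurons in the previous group project to it.
    activity = ((w @ activity) >= threshold).astype(int)
    pulse_sizes.append(int(activity.sum()))

print(pulse_sizes)
```

Because the expected input per neuron (30 spikes) sits well above threshold, the pulse survives the whole chain; thinning the connectivity or raising the threshold makes propagation fail, which is what motivates the redundancy.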
Leonard, J L
2000-05-01
Understanding how species-typical movement patterns are organized in the nervous system is a central question in neurobiology. The current explanations involve 'alphabet' models in which an individual neuron may participate in the circuit for several behaviors but each behavior is specified by a specific neural circuit. However, not all of the well-studied model systems fit the 'alphabet' model. The 'equation' model provides an alternative possibility, whereby a system of parallel motor neurons, each with a unique (but overlapping) field of innervation, can account for the production of stereotyped behavior patterns by variable circuits. That is, it is possible for such patterns to arise as emergent properties of a generalized neural network in the absence of feedback, a simple version of a 'self-organizing' behavioral system. Comparison of systems of identified neurons suggests that the 'alphabet' model may account for most observations where CPGs act to organize motor patterns. Other well-known model systems, involving architectures corresponding to feed-forward neural networks with a hidden layer, may organize patterned behavior in a manner consistent with the 'equation' model. Such architectures are found in the Mauthner and reticulospinal circuits, 'escape' locomotion in cockroaches, CNS control of the Aplysia gill, and may also be important in the coordination of sensory information and motor systems in insect mushroom bodies and the vertebrate hippocampus. The hidden layer of such networks may serve as an 'internal representation' of the behavioral state and/or body position of the animal, allowing the animal to fine-tune oriented, or particularly context-sensitive, movements to the prevalent conditions. Experiments designed to distinguish between the two models in cases where they make mutually exclusive predictions provide an opportunity to elucidate the neural mechanisms by which behavior is organized in vivo and in vitro. Copyright 2000 S. Karger AG, Basel
Dynamically stable associative learning: a neurobiologically based ANN and its applications
NASA Astrophysics Data System (ADS)
Vogl, Thomas P.; Blackwell, Kim L.; Barbour, Garth; Alkon, Daniel L.
1992-07-01
Most currently popular artificial neural networks (ANN) are based on conceptions of neuronal properties that date back to the 1940s and 50s, i.e., to the ideas of McCulloch, Pitts, and Hebb. Dystal is an ANN based on current knowledge of neurobiology at the cellular and subcellular level. Networks based on these neurobiological insights exhibit the following advantageous properties: (1) A theoretical storage capacity of bN non-orthogonal memories, where N is the number of output neurons sharing common inputs and b is the number of distinguishable (gray shade) levels. (2) The ability to learn, store, and recall associations among noisy, arbitrary patterns. (3) A local synaptic learning rule (learning depends neither on the output of the post-synaptic neuron nor on a global error term), some of whose consequences are: (4) Feed-forward, lateral, and feed-back connections (as well as time-sensitive connections) are possible without alteration of the learning algorithm; (5) Storage allocation (patch creation) proceeds dynamically as associations are learned (self-organizing); (6) The number of training set presentations required for learning is small (< 10) and does not change with pattern size or content; and (7) The network exhibits monotonic convergence, reaching equilibrium (fully trained) values without oscillating. The performance of Dystal on pattern completion tasks such as faces with different expressions and/or corrupted by noise, and on reading hand-written digits (98% accuracy) and hand-printed Japanese Kanji (90% accuracy) is demonstrated.
Multidimensional density shaping by sigmoids.
Roth, Z; Baram, Y
1996-01-01
An estimate of the probability density function of a random vector is obtained by maximizing the output entropy of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's optimization method, applied to the estimated density, yields a recursive estimator for a random variable or a random sequence. A constrained connectivity structure yields a linear estimator, which is particularly suitable for "real time" prediction. A Gaussian nonlinearity yields a closed-form solution for the network's parameters, which may also be used for initializing the optimization algorithm when other nonlinearities are employed. A triangular connectivity between the neurons and the input, which is naturally suggested by the statistical setting, reduces the number of parameters. Applications to classification and forecasting problems are demonstrated.
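The identity underlying this estimator can be sketched in one dimension (a minimal illustration of the principle, not the paper's multidimensional network): maximizing output entropy drives the sigmoidal output toward the cumulative distribution of the input, so the derivative of the output recovers the input density. For a standard Gaussian input, the ideal "sigmoid" is the Gaussian CDF:

```python
import math

def gaussian_cdf(x):
    """CDF of a standard normal: the entropy-optimal 'sigmoid' for this input."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gaussian_pdf(x):
    """Density of a standard normal, for comparison."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

# The derivative of the CDF-shaped output is the density estimate.
h = 1e-5
for x in (-1.0, 0.0, 1.5):
    numeric_density = (gaussian_cdf(x + h) - gaussian_cdf(x - h)) / (2 * h)
    print(x, numeric_density, gaussian_pdf(x))
```

In the network setting, the sigmoid's weights are tuned by entropy maximization rather than given in closed form, but the density readout via the derivative is the same.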
NASA Astrophysics Data System (ADS)
Idris, N. H.; Salim, N. A.; Othman, M. M.; Yasin, Z. M.
2018-03-01
This paper presents an Evolutionary Programming (EP) approach to optimizing the training parameters of an Artificial Neural Network (ANN) for predicting cascading collapse occurrence due to the effect of protection system hidden failure. The data were collected from simulations of a hidden-failure probability model based on historical data. The training parameters of a multilayer feedforward network with backpropagation were optimized with the objective of minimizing the Mean Square Error (MSE). The optimal training parameters, consisting of the momentum rate, the learning rate, and the numbers of neurons in the first and second hidden layers, are selected by EP-ANN. The IEEE 14-bus system was tested as a case study to validate the proposed technique. The results show reliable prediction performance, validated through the MSE and the Correlation Coefficient (R).
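The evolutionary-programming loop used above can be sketched on a stand-in objective (here a simple least-squares line fit; the paper instead evolves ANN training parameters such as momentum and learning rate against the network's MSE): mutate every parent with Gaussian noise, then keep the best half of parents plus offspring, with no crossover.

```python
import random

random.seed(1)

# Target data: y = 2x + 1; an individual is a candidate (slope, intercept).
data = [(x, 2.0 * x + 1.0) for x in range(-5, 6)]

def mse(ind):
    """Objective: mean squared error of the candidate fit on the data."""
    a, b = ind
    return sum((a * x + b - y) ** 2 for x, y in data) / len(data)

# Evolutionary programming: Gaussian mutation + truncation selection.
pop = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(20)]
for generation in range(200):
    offspring = [(a + random.gauss(0, 0.3), b + random.gauss(0, 0.3))
                 for a, b in pop]
    pop = sorted(pop + offspring, key=mse)[:20]

best = pop[0]
print(best, mse(best))
```

The elitist selection guarantees the best MSE never worsens across generations, which is the property the paper relies on when tuning expensive network trainings.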
NASA Technical Reports Server (NTRS)
Hof, P. R.; Ungerleider, L. G.; Webster, M. J.; Gattass, R.; Adams, M. M.; Sailstad, C. A.; Morrison, J. H.; Bloom, F. E. (Principal Investigator)
1996-01-01
Previous studies of the primate cerebral cortex have shown that neurofilament protein is present in pyramidal neuron subpopulations displaying specific regional and laminar distribution patterns. In order to characterize further the neurochemical phenotype of the neurons furnishing feedforward and feedback pathways in the visual cortex of the macaque monkey, we performed an analysis of the distribution of neurofilament protein in corticocortical projection neurons in areas V1, V2, V3, V3A, V4, and MT. Injections of the retrogradely transported dyes Fast Blue and Diamidino Yellow were placed within areas V4 and MT, or in areas V1 and V2, in 14 adult rhesus monkeys, and the brains of these animals were processed for immunohistochemistry with an antibody to nonphosphorylated epitopes of the medium and heavy molecular weight subunits of the neurofilament protein. Overall, there was a higher proportion of neurons projecting from areas V1, V2, V3, and V3A to area MT that were neurofilament protein-immunoreactive (57-100%), than to area V4 (25-36%). In contrast, feedback projections from areas MT, V4, and V3 exhibited a more consistent proportion of neurofilament protein-containing neurons (70-80%), regardless of their target areas (V1 or V2). In addition, the vast majority of feedback neurons projecting to areas V1 and V2 were located in layers V and VI in areas V4 and MT, while they were observed in both supragranular and infragranular layers in area V3. The laminar distribution of feedforward projecting neurons was heterogeneous. In area V1, Meynert and layer IVB cells were found to project to area MT, while neurons projecting to area V4 were particularly dense in layer III within the foveal representation. In area V2, almost all neurons projecting to areas MT or V4 were located in layer III, whereas they were found in both layers II-III and V-VI in areas V3 and V3A. 
These results suggest that neurofilament protein identifies particular subpopulations of corticocortically projecting neurons with distinct regional and laminar distribution in the monkey visual system. It is possible that the preferential distribution of neurofilament protein within feedforward connections to area MT and all feedback projections is related to other distinctive properties of these corticocortical projection neurons.
Feedback Regulation and Its Efficiency in Biochemical Networks
NASA Astrophysics Data System (ADS)
Kobayashi, Tetsuya J.; Yokota, Ryo; Aihara, Kazuyuki
2016-03-01
Intracellular biochemical networks fluctuate dynamically due to various internal and external sources of fluctuation. Dissecting the fluctuation into biologically relevant components is important for understanding how a cell controls and harnesses noise and how information is transferred over apparently noisy intracellular networks. While substantial theoretical and experimental advancement on the decomposition of fluctuation has been achieved for feedforward networks without any loop, we still lack a theoretical basis that can consistently extend such advancement to feedback networks. The main obstacle is the circulative propagation of fluctuation by feedback loops. In order to define the relevant quantity for the impact of feedback loops on fluctuation, disentanglement of the causally interlocked influences between the components is required. In addition, we also lack an approach that enables us to infer non-perturbatively the influence of feedback on fluctuation in the same way that the dual reporter system does in feedforward networks. In this work, we address these problems by extending the work on fluctuation decomposition and the dual reporter system. For a single-loop feedback network with two components, we define the feedback loop gain as a measure of feedback efficiency that is consistent with the fluctuation decomposition for feedforward networks. We then clarify the relation of the feedback efficiency to the fluctuation propagation in an open-looped feedforward network. Finally, by extending the dual reporter system, we propose a conjugate feedback and feedforward system for estimating the feedback efficiency non-perturbatively from the statistics of the system alone.
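The quantity the loop gain captures can be illustrated with a linear noise-driven update (an intentional caricature, not the paper's chemical master equation): the stronger the negative feedback, the smaller the stationary fluctuations relative to the open-loop case.

```python
import random

random.seed(42)

def stationary_variance(feedback_gain, steps=200000):
    """Simulate x_{t+1} = x_t - k*x_t + noise and return the sample variance."""
    x, samples = 0.0, []
    for _ in range(steps):
        x = x - feedback_gain * x + random.gauss(0.0, 1.0)
        samples.append(x)
    mean = sum(samples) / len(samples)
    return sum((s - mean) ** 2 for s in samples) / len(samples)

# For this AR(1) process the theory gives var = 1 / (1 - (1-k)^2):
# k = 0.1 -> var ~ 5.26, k = 0.5 -> var ~ 1.33.
weak = stationary_variance(0.1)    # weak feedback: large fluctuations
strong = stationary_variance(0.5)  # strong feedback: small fluctuations
print(weak, strong)
```

Disentangling how much of `weak - strong` is attributable to the loop itself, rather than to upstream noise propagation, is exactly the decomposition problem the paper addresses.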
Troyano-Rodriguez, Eva; Lladó-Pelfort, Laia; Santana, Noemi; Teruel-Martí, Vicent; Celada, Pau; Artigas, Francesc
2014-12-15
The neurobiological basis of action of noncompetitive N-methyl-D-aspartate receptor (NMDA-R) antagonists is poorly understood. Electrophysiological studies indicate that phencyclidine (PCP) markedly disrupts neuronal activity with an overall excitatory effect and reduces the power of low-frequency oscillations (LFO; <4 Hz) in thalamocortical networks. Because the reticular nucleus of the thalamus (RtN) provides tonic feed-forward inhibition to the rest of the thalamic nuclei, we examined the effect of PCP on RtN activity, under the working hypothesis that NMDA-R blockade in RtN would disinhibit thalamocortical networks. Drug effects (PCP followed by clozapine) on the activity of the RtN (single-unit and local field potential recordings) and prefrontal cortex (PFC; electrocorticogram) in anesthetized rats were assessed. PCP (.25-.5 mg/kg, intravenous) reduced the discharge rate of 19 of 21 RtN neurons to 37% of baseline (p < .000001) and the power of LFO in RtN and PFC to ~20% of baseline (p < .001). PCP also reduced the coherence between PFC and RtN in the LFO range. A low clozapine dose (1 mg/kg, intravenous) significantly countered the effect of PCP on LFO in PFC but not in RtN and further reduced the discharge rate of RtN neurons. However, clozapine administration partly antagonized the fall in coherence and phase-locking values produced by PCP. PCP activates thalamocortical circuits in a bottom-up manner by reducing the activity of RtN neurons, which tonically inhibit thalamic relay neurons. However, clozapine reversal of PCP effects is not driven by restoring RtN activity and may involve a cortical action. Copyright © 2014 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
Ihme, Matthias; Marsden, Alison L; Pitsch, Heinz
2008-02-01
A pattern search optimization method is applied to the generation of optimal artificial neural networks (ANNs). Optimization is performed using a mixed-variable extension to the generalized pattern search method. This method offers the advantage that categorical variables, such as neural transfer functions and nodal connectivities, can be used as parameters in optimization. When used together with a surrogate, the resulting algorithm is highly efficient for expensive objective functions. Results demonstrate the effectiveness of this method in optimizing an ANN for the number of neurons, the type of transfer function, and the connectivity among neurons. The optimization method is applied to a chemistry approximation of practical relevance. In this application, temperature and a chemical source term are approximated as functions of two independent parameters using optimal ANNs. Comparison of the performance of optimal ANNs with conventional tabulation methods demonstrates equivalent accuracy with considerable savings in memory storage. The architecture of the optimal ANN for the approximation of the chemical source term consists of a fully connected feedforward network having four nonlinear hidden layers and 117 synaptic weights. An equivalent representation of the chemical source term using tabulation techniques would require a 500 x 500 grid point discretization of the parameter space.
Neural dynamics of feedforward and feedback processing in figure-ground segregation
Layton, Oliver W.; Mingolla, Ennio; Yazdanbakhsh, Arash
2014-01-01
Determining whether a region belongs to the interior or exterior of a shape (figure-ground segregation) is a core competency of the primate brain, yet the underlying mechanisms are not well understood. Many models assume that figure-ground segregation occurs by assembling progressively more complex representations through feedforward connections, with feedback playing only a modulatory role. We present a dynamical model of figure-ground segregation in the primate ventral stream wherein feedback plays a crucial role in disambiguating a figure's interior and exterior. We introduce a processing strategy whereby jitter in RF center locations and variation in RF sizes is exploited to enhance and suppress neural activity inside and outside of figures, respectively. Feedforward projections emanate from units that model cells in V4 known to respond to the curvature of boundary contours (curved contour cells), and feedback projections from units predicted to exist in IT that strategically group neurons with different RF sizes and RF center locations (teardrop cells). Neurons (convex cells) that preferentially respond when centered on a figure dynamically balance feedforward (bottom-up) information and feedback from higher visual areas. The activation is enhanced when an interior portion of a figure is in the RF via feedback from units that detect closure in the boundary contours of a figure. Our model produces maximal activity along the medial axis of well-known figures with and without concavities, and inside algorithmically generated shapes. Our results suggest that the dynamic balancing of feedforward signals with the specific feedback mechanisms proposed by the model is crucial for figure-ground segregation. PMID:25346703
Gonchar, Yuri; Burkhalter, Andreas
2003-11-26
Processing of visual information is performed in different cortical areas that are interconnected by feedforward (FF) and feedback (FB) pathways. Although FF and FB inputs are excitatory, their influences on pyramidal neurons also depend on the outputs of GABAergic neurons, which receive FF and FB inputs. Rat visual cortex contains at least three different families of GABAergic neurons that express parvalbumin (PV), calretinin (CR), and somatostatin (SOM) (Gonchar and Burkhalter, 1997). To examine whether pathway-specific inhibition (Shao and Burkhalter, 1996) is attributable to distinct connections with GABAergic neurons, we traced FF and FB inputs to PV, CR, and SOM neurons in layers 1-2/3 of area 17 and the secondary lateromedial area in rat visual cortex. We found that in layer 2/3 maximally 2% of FF and FB inputs go to CR and SOM neurons. This contrasts with 12-13% of FF and FB inputs onto layer 2/3 PV neurons. Unlike inputs to layer 2/3, connections to layer 1, which contains CR but lacks SOM and PV somata, are pathway-specific: 21% of FB inputs go to CR neurons, whereas FF inputs to layer 1 and its CR neurons are absent. These findings suggest that FF and FB influences on layer 2/3 pyramidal neurons mainly involve disynaptic connections via PV neurons that control the spike outputs to axons and proximal dendrites. Unlike FF input, FB input in addition makes a disynaptic link via CR neurons, which may influence the excitability of distal pyramidal cell dendrites in layer 1.
NASA Technical Reports Server (NTRS)
Salu, Yehuda; Tilton, James
1993-01-01
The classification of multispectral image data obtained from satellites has become an important tool for generating ground cover maps. This study deals with the application of nonparametric pixel-by-pixel classification methods in the classification of pixels, based on their multispectral data. A new neural network, the Binary Diamond, is introduced, and its performance is compared with a nearest neighbor algorithm and a back-propagation network. The Binary Diamond is a multilayer, feed-forward neural network, which learns from examples in unsupervised, 'one-shot' mode. It recruits its neurons according to the actual training set, as it learns. The comparisons of the algorithms were done by using a realistic data base, consisting of approximately 90,000 Landsat 4 Thematic Mapper pixels. The Binary Diamond and the nearest neighbor performances were close, with some advantages to the Binary Diamond. The performance of the back-propagation network lagged behind. An efficient nearest neighbor algorithm, the binned nearest neighbor, is described. Ways for improving the performances, such as merging categories, and analyzing nonboundary pixels, are addressed and evaluated.
Learning polynomial feedforward neural networks by genetic programming and backpropagation.
Nikolaev, N Y; Iba, H
2003-01-01
This paper presents an approach to learning polynomial feedforward neural networks (PFNNs). The approach suggests, first, finding the polynomial network structure by means of a population-based search technique relying on the genetic programming paradigm, and second, further adjusting the best discovered network weights by a specially derived backpropagation algorithm for higher order networks with polynomial activation functions. These two stages of the PFNN learning process enable us to identify networks with good training as well as generalization performance. Empirical results show that this approach finds PFNNs that considerably outperform some previous constructive polynomial network algorithms on processing benchmark time series.
Faghihi, Faramarz; Moustafa, Ahmed A
2016-09-01
The separation of input patterns received from the entorhinal cortex (EC) by the dentate gyrus (DG) is a well-known critical step of information processing in the hippocampus. Although the role of interneurons in separation pattern efficiency of the DG has been theoretically known, the balance of neurogenesis of excitatory neurons and interneurons as well as its potential role in information processing in the DG is not fully understood. In this work, we study separation efficiency of the DG for different rates of neurogenesis of interneurons and excitatory neurons using a novel computational model in which we assume an increase in the synaptic efficacy between excitatory neurons and interneurons and then its decay over time. Information processing in the EC and DG was simulated as information flow in a two layer feed-forward neural network. The neurogenesis rate was modeled as the percentage of new born neurons added to the neuronal population in each time bin. The results show an important role of an optimal neurogenesis rate of interneurons and excitatory neurons in the DG in efficient separation of inputs from the EC in pattern separation tasks. The model predicts that any deviation of the optimal values of neurogenesis rates leads to different decreased levels of the separation deficits of the DG which influences its function to encode memory.
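The pattern-separation setting can be sketched as a two-layer feedforward expansion with winner-take-all sparsening (layer sizes, sparsity level, and the cosine overlap measure are illustrative choices, not the paper's model): projecting a small EC layer into a much larger DG layer and keeping only the most active units makes similar inputs more distinct.

```python
import numpy as np

rng = np.random.default_rng(3)

n_ec, n_dg, k_active = 100, 1000, 50   # EC -> DG expansion, top-k sparsity

weights = rng.normal(size=(n_dg, n_ec))

def dg_code(ec_pattern):
    """Project to the DG layer and keep only the k most active granule cells."""
    drive = weights @ ec_pattern
    code = np.zeros(n_dg)
    code[np.argsort(drive)[-k_active:]] = 1.0
    return code

def overlap(a, b):
    """Cosine similarity between two activity patterns."""
    return float(a @ b) / np.sqrt(float(a @ a) * float(b @ b))

# Two highly overlapping EC input patterns: p2 perturbs 10% of p1's units.
p1 = (rng.random(n_ec) < 0.3).astype(float)
p2 = p1.copy()
flip = rng.choice(n_ec, 10, replace=False)
p2[flip] = 1.0 - p2[flip]

print(overlap(p1, p2), overlap(dg_code(p1), dg_code(p2)))
```

The DG codes overlap less than the EC inputs do, which is the separation effect whose dependence on neurogenesis rates the paper investigates.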
Nonlinear adaptive networks: A little theory, a few applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, R.D.; Qian, S.; Barnes, C.W.
1990-01-01
We present the theory of nonlinear adaptive networks and discuss a few applications. In particular, we review the theory of feedforward backpropagation networks. We then present the theory of the Connectionist Normalized Linear Spline network in both its feedforward and iterated modes. Also, we briefly discuss the theory of stochastic cellular automata. We then discuss applications to chaotic time series, tidal prediction in the Venice Lagoon, sonar transient detection, control of nonlinear processes, balancing a double inverted pendulum, and design advice for free electron lasers. 26 refs., 23 figs.
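The feedforward backpropagation networks reviewed above can be sketched in a few lines (a minimal one-hidden-layer regression fit; the target function, layer sizes, and learning rate are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

# Fit y = sin(x) on [-pi, pi] with a 1-8-1 tanh network trained by backprop.
x = np.linspace(-np.pi, np.pi, 64).reshape(-1, 1)
y = np.sin(x)

w1, b1 = rng.normal(0, 0.5, (1, 8)), np.zeros(8)
w2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)
lr = 0.05

def forward(x):
    h = np.tanh(x @ w1 + b1)
    return h, h @ w2 + b2

_, out0 = forward(x)
initial_loss = float(np.mean((out0 - y) ** 2))

for _ in range(2000):
    h, out = forward(x)
    err = out - y                       # gradient of the squared error
    grad_w2 = h.T @ err / len(x)
    grad_b2 = err.mean(axis=0)
    back = (err @ w2.T) * (1 - h ** 2)  # backpropagate through tanh
    grad_w1 = x.T @ back / len(x)
    grad_b1 = back.mean(axis=0)
    w2 -= lr * grad_w2
    b2 -= lr * grad_b2
    w1 -= lr * grad_w1
    b1 -= lr * grad_b1

_, out = forward(x)
final_loss = float(np.mean((out - y) ** 2))
print(initial_loss, final_loss)
```

The spline network in the paper replaces the global tanh basis with localized linear splines, but the gradient-descent training loop has the same shape.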
Boundedness and convergence of online gradient method with penalty for feedforward neural networks.
Zhang, Huisheng; Wu, Wei; Liu, Fei; Yao, Mingchen
2009-06-01
In this brief, we consider an online gradient method with penalty for training feedforward neural networks. Specifically, the penalty is a term proportional to the norm of the weights. Its roles in the method are to control the magnitude of the weights and to improve the generalization performance of the network. By proving that the weights are automatically bounded during network training with penalty, we simplify the conditions required in the literature for convergence of the online gradient method. A numerical example is given to support the theoretical analysis.
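The mechanism can be sketched directly on a toy problem (a one-weight linear model, not the paper's general feedforward network): an online gradient step with a weight-decay penalty keeps the weight trajectory bounded even under noisy targets, and shrinks it below the unpenalized solution.

```python
import random

random.seed(7)

# Online gradient descent on a one-weight linear model with an L2 penalty:
#   loss_t = 0.5*(w*x_t - y_t)^2 + 0.5*lam*w^2
lam, lr = 0.1, 0.05
w = 0.0
max_abs_w = 0.0
for _ in range(5000):
    x = random.uniform(-1, 1)
    y = 3.0 * x + random.gauss(0, 0.5)  # noisy linear target, true slope 3
    grad = (w * x - y) * x + lam * w    # data gradient + penalty gradient
    w -= lr * grad
    max_abs_w = max(max_abs_w, abs(w))

print(w, max_abs_w)
```

In expectation the penalized fixed point is w = 3·E[x²]/(E[x²] + lam) ≈ 2.31 rather than 3, illustrating how the penalty both bounds the weights and biases them toward smaller magnitudes.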
Prior-knowledge-based feedforward network simulation of true boiling point curve of crude oil.
Chen, C W; Chen, D Z
2001-11-01
Theoretical results and practical experience indicate that feedforward networks can approximate a wide class of functional relationships very well. This property is exploited in modeling chemical processes. Given finite and noisy training data, it is important to encode the prior knowledge in neural networks to improve the fit precision and the prediction ability of the model. In this paper, as to the three-layer feedforward networks and the monotonic constraint, the unconstrained method, Joerding's penalty function method, the interpolation method, and the constrained optimization method are analyzed first. Then two novel methods, the exponential weight method and the adaptive method, are proposed. These methods are applied in simulating the true boiling point curve of a crude oil with the condition of increasing monotonicity. The simulation experimental results show that the network models trained by the novel methods are good at approximating the actual process. Finally, all these methods are discussed and compared with each other.
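The exponential weight method lends itself to a compact sketch (this is our reading of the idea, with invented weights and data, not the paper's crude-oil model): parameterizing each weight as an exponential forces it positive, and a network of positive weights with monotonically increasing activations is itself monotonically increasing in its input.

```python
import math

# One hidden layer; every weight is w = exp(v) > 0, so the network output
# is monotonically non-decreasing in the scalar input x by construction.
v_hidden = [-0.5, 0.2, 1.0]   # log-weights, input -> hidden
b_hidden = [0.5, -1.0, 0.0]
v_out = [0.3, -0.2, 0.1]      # log-weights, hidden -> output

def net(x):
    hidden = [math.tanh(math.exp(v) * x + b)
              for v, b in zip(v_hidden, b_hidden)]
    return sum(math.exp(v) * h for v, h in zip(v_out, hidden))

outputs = [net(x) for x in (-2.0, -1.0, 0.0, 1.0, 2.0)]
assert all(a <= b for a, b in zip(outputs, outputs[1:]))  # monotone increasing
print(outputs)
```

Training then adjusts the log-weights v freely by gradient descent while the monotonicity constraint on the fitted boiling-point curve holds at every step.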
Prediction of municipal solid waste generation using nonlinear autoregressive network.
Younes, Mohammad K; Nopiah, Z M; Basri, N E Ahmad; Basri, H; Abushammala, Mohammed F M; Maulud, K N A
2015-12-01
Most of the developing countries have solid waste management problems. Solid waste strategic planning requires accurate prediction of the quality and quantity of the generated waste. In developing countries, such as Malaysia, the solid waste generation rate is increasing rapidly, due to population growth and new consumption trends that characterize society. This paper proposes an artificial neural network (ANN) approach using feedforward nonlinear autoregressive network with exogenous inputs (NARX) to predict annual solid waste generation in relation to demographic and economic variables like population number, gross domestic product, electricity demand per capita and employment and unemployment numbers. In addition, variable selection procedures are also developed to select a significant explanatory variable. The model evaluation was performed using coefficient of determination (R(2)) and mean square error (MSE). The optimum model that produced the lowest testing MSE (2.46) and the highest R(2) (0.97) had three inputs (gross domestic product, population and employment), eight neurons and one lag in the hidden layer, and used Fletcher-Powell's conjugate gradient as the training algorithm.
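In its simplest linear form, the NARX structure reduces to regressing the next output on lagged outputs and exogenous inputs (a deliberately linear stand-in for the paper's neural version, fitted on synthetic data; the variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic series driven by an exogenous input:
#   y_t = 0.8*y_{t-1} + 0.5*x_t + noise
x = rng.random(300)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.8 * y[t - 1] + 0.5 * x[t] + rng.normal(0, 0.01)

# Linear NARX with one output lag and one exogenous input, fit by least squares.
features = np.column_stack([y[:-1], x[1:]])
coeffs, *_ = np.linalg.lstsq(features, y[1:], rcond=None)
pred = features @ coeffs
mse = float(np.mean((pred - y[1:]) ** 2))
print(coeffs, mse)
```

The neural NARX in the paper replaces this linear regression with a feedforward network over the same lagged-output and exogenous-input features (population, GDP, employment), which is what allows it to capture nonlinear generation trends.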
NASA Astrophysics Data System (ADS)
Kwon, Chung-Jin; Kim, Sung-Joong; Han, Woo-Young; Min, Won-Kyoung
2005-12-01
This paper deals with rotor position and speed estimation for a permanent-magnet synchronous motor (PMSM). By measuring the phase voltages and currents of the PMSM drive, two observers based on diagonally recurrent neural networks (DRNNs) were developed: a neural current observer and a neural velocity observer. A DRNN, which has self-feedback on its hidden neurons, ensures that its outputs contain the whole past information of the system even when its inputs are only the present states and inputs of the system. Thus the structure of a DRNN can be simpler than that of feedforward and fully recurrent neural networks. If the backpropagation method is used to train the DRNN, convergence is slow. To reduce this problem, a recursive prediction error (RPE) based learning method for the DRNN is presented. The simulation results show that the proposed approach gives a good estimation of rotor speed and position, and that RPE-based training requires a shorter computation time than backpropagation-based training.
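The diagonal recurrence described above means each hidden neuron feeds back only onto itself, so the recurrent weight matrix is diagonal and the layer keeps a memory of past inputs with far fewer parameters than a fully recurrent network. A minimal sketch of such a layer, with illustrative values of my own choosing:

```python
import numpy as np

# Diagonally recurrent layer: each hidden neuron has a single scalar
# self-feedback weight (w_rec), i.e. the recurrent matrix is diagonal.

def drnn_layer(inputs, W_in, w_rec):
    """inputs: (T, n_in); W_in: (n_in, n_hid); w_rec: (n_hid,)."""
    T, n_hid = len(inputs), len(w_rec)
    h = np.zeros(n_hid)
    out = np.zeros((T, n_hid))
    for t in range(T):
        h = np.tanh(inputs[t] @ W_in + w_rec * h)  # self-feedback only
        out[t] = h
    return out

rng = np.random.default_rng(4)
x = rng.normal(size=(10, 2))          # 10 time steps, 2 inputs
W_in = rng.normal(size=(2, 3))
w_rec = np.array([0.5, 0.5, 0.5])     # one feedback weight per neuron
out = drnn_layer(x, W_in, w_rec)
print(out.shape)                      # one hidden state per time step
```

With |w_rec| < 1 the internal state decays geometrically, which is what lets the layer summarize past inputs while staying stable.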
Asymmetric temporal integration of layer 4 and layer 2/3 inputs in visual cortex.
Hang, Giao B; Dan, Yang
2011-01-01
Neocortical neurons in vivo receive concurrent synaptic inputs from multiple sources, including feedforward, horizontal, and feedback pathways. Layer 2/3 of the visual cortex receives feedforward input from layer 4 and horizontal input from layer 2/3. Firing of the pyramidal neurons, which carries the output to higher cortical areas, depends critically on the interaction of these pathways. Here we examined synaptic integration of inputs from layer 4 and layer 2/3 in rat visual cortical slices. We found that the integration is sublinear and temporally asymmetric, with larger responses if layer 2/3 input preceded layer 4 input. The sublinearity depended on inhibition, and the asymmetry was largely attributable to the difference between the two inhibitory inputs. Interestingly, the asymmetric integration was specific to pyramidal neurons, and it strongly affected their spiking output. Thus via cortical inhibition, the temporal order of activation of layer 2/3 and layer 4 pathways can exert powerful control of cortical output during visual processing.
Burton, Shawn D.
2015-01-01
Granule cell-mediated inhibition is critical to patterning principal neuron activity in the olfactory bulb, and perturbation of synaptic input to granule cells significantly alters olfactory-guided behavior. Despite the critical role of granule cells in olfaction, little is known about how sensory input recruits granule cells. Here, we combined whole-cell patch-clamp electrophysiology in acute mouse olfactory bulb slices with biophysical multicompartmental modeling to investigate the synaptic basis of granule cell recruitment. Physiological activation of sensory afferents within single glomeruli evoked diverse modes of granule cell activity, including subthreshold depolarization, spikelets, and suprathreshold responses with widely distributed spike latencies. The generation of these diverse activity modes depended, in part, on the asynchronous time course of synaptic excitation onto granule cells, which lasted several hundred milliseconds. In addition to asynchronous excitation, each granule cell also received synchronous feedforward inhibition. This inhibition targeted both proximal somatodendritic and distal apical dendritic domains of granule cells, was reliably recruited across sniff rhythms, and scaled in strength with excitation as more glomeruli were activated. Feedforward inhibition onto granule cells originated from deep short-axon cells, which responded to glomerular activation with highly reliable, short-latency firing consistent with tufted cell-mediated excitation. Simulations showed that feedforward inhibition interacts with asynchronous excitation to broaden granule cell spike latency distributions and significantly attenuates granule cell depolarization within local subcellular compartments. Collectively, our results thus identify feedforward inhibition onto granule cells as a core feature of olfactory bulb circuitry and establish asynchronous excitation and feedforward inhibition as critical regulators of granule cell activity. 
SIGNIFICANCE STATEMENT Inhibitory granule cells are involved critically in shaping odor-evoked principal neuron activity in the mammalian olfactory bulb, yet little is known about how sensory input activates granule cells. Here, we show that sensory input to the olfactory bulb evokes a barrage of asynchronous synaptic excitation and highly reliable, short-latency synaptic inhibition onto granule cells via a disynaptic feedforward inhibitory circuit involving deep short-axon cells. Feedforward inhibition attenuates local depolarization within granule cell dendritic branches, interacts with asynchronous excitation to suppress granule cell spike-timing precision, and scales in strength with excitation across different levels of sensory input to normalize granule cell firing rates. PMID:26490853
Lin, I-Chun; Xing, Dajun; Shapley, Robert
2014-01-01
One of the reasons the visual cortex has attracted the interest of computational neuroscience is that it has well-defined inputs. The lateral geniculate nucleus (LGN) of the thalamus is the source of visual signals to the primary visual cortex (V1). Most large-scale cortical network models approximate the spike trains of LGN neurons as simple Poisson point processes. However, many studies have shown that neurons in the early visual pathway are capable of spiking with high temporal precision and their discharges are not Poisson-like. To gain an understanding of how response variability in the LGN influences the behavior of V1, we study response properties of model V1 neurons that receive purely feedforward inputs from LGN cells modeled either as noisy leaky integrate-and-fire (NLIF) neurons or as inhomogeneous Poisson processes. We first demonstrate that the NLIF model is capable of reproducing many experimentally observed statistical properties of LGN neurons. Then we show that a V1 model in which the LGN input to a V1 neuron is modeled as a group of NLIF neurons produces higher orientation selectivity than the one with Poisson LGN input. The second result implies that statistical characteristics of LGN spike trains are important for V1's function. We conclude that physiologically motivated models of V1 need to include more realistic LGN spike trains that are less noisy than inhomogeneous Poisson processes. PMID:22684587
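The NLIF input model used above can be sketched in a few lines. The dynamics below are a generic noisy leaky integrate-and-fire neuron with illustrative parameters, not the paper's fitted LGN model; with modest noise it fires more regularly than a Poisson process (ISI coefficient of variation below 1), which is the statistical property at issue.

```python
import numpy as np

# Minimal noisy leaky integrate-and-fire (NLIF) neuron. Membrane
# potential relaxes toward input I with time constant tau, plus
# Gaussian noise; a spike is emitted and V reset at threshold.

rng = np.random.default_rng(2)

def nlif_spikes(I, dt=1e-4, tau=0.02, v_th=1.0, v_reset=0.0, sigma=0.5):
    """Simulate dV = ((-V + I)/tau) dt + sigma dW; return spike times (s)."""
    v, spikes = 0.0, []
    for step in range(len(I)):
        v += (-v + I[step]) * dt / tau + sigma * np.sqrt(dt) * rng.normal()
        if v >= v_th:
            spikes.append(step * dt)
            v = v_reset
    return np.array(spikes)

I = np.full(10000, 1.5)          # 1 s of constant suprathreshold drive
spikes = nlif_spikes(I)
isi = np.diff(spikes)
cv = isi.std() / isi.mean()      # CV < 1: more regular than Poisson
print(len(spikes), round(cv, 2))
```

A Poisson process would have CV near 1; the low CV here is the extra temporal precision that, per the abstract, sharpens orientation selectivity in the V1 model.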
NASA Astrophysics Data System (ADS)
An, Soyoung; Choi, Woochul; Paik, Se-Bum
2015-11-01
Understanding the mechanism of information processing in the human brain remains a unique challenge because the nonlinear interactions between the neurons in the network are extremely complex and because controlling every relevant parameter during an experiment is difficult. Therefore, a simulation using simplified computational models may be an effective approach. In the present study, we developed a general model of neural networks that can simulate nonlinear activity patterns in the hierarchical structure of a neural network system. To test our model, we first examined whether our simulation could match the previously-observed nonlinear features of neural activity patterns. Next, we performed a psychophysics experiment for a simple visual working memory task to evaluate whether the model could predict the performance of human subjects. Our studies show that the model is capable of reproducing the relationship between memory load and performance and may contribute, in part, to our understanding of how the structure of neural circuits can determine the nonlinear neural activity patterns in the human brain.
Prediction and control of chaotic processes using nonlinear adaptive networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, R.D.; Barnes, C.W.; Flake, G.W.
1990-01-01
We present the theory of nonlinear adaptive networks and discuss a few applications. In particular, we review the theory of feedforward backpropagation networks. We then present the theory of the Connectionist Normalized Linear Spline network in both its feedforward and iterated modes. Also, we briefly discuss the theory of stochastic cellular automata. We then discuss applications to chaotic time series, tidal prediction in Venice lagoon, finite differencing, sonar transient detection, control of nonlinear processes, control of a negative ion source, balancing a double inverted pendulum and design advice for free electron lasers and laser fusion targets.
Breast cancer detection via Hu moment invariant and feedforward neural network
NASA Astrophysics Data System (ADS)
Zhang, Xiaowei; Yang, Jiquan; Nguyen, Elijah
2018-04-01
One in eight women will develop breast cancer during her lifetime. This study used Hu moment invariants and a feedforward neural network to diagnose breast cancer. With the help of K-fold cross-validation, we tested the out-of-sample accuracy of our method. We found that our method improves the accuracy of detecting breast cancer and reduces the difficulty of diagnostic judgment.
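Hu moment invariants are translation-, scale-, and rotation-invariant shape descriptors computed from normalized central moments; they are the features fed to the classifier above. The sketch below implements only the first invariant (phi_1) from scratch and checks its scale/translation invariance on a toy binary image; it is illustrative, not the study's pipeline.

```python
import numpy as np

# First Hu moment invariant phi_1 = eta(2,0) + eta(0,2), built from
# scale-normalized central moments of a binary image.

def hu1(img):
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xbar, ybar = (x * img).sum() / m00, (y * img).sum() / m00
    def mu(p, q):   # central moment (translation-invariant)
        return ((x - xbar) ** p * (y - ybar) ** q * img).sum()
    def eta(p, q):  # scale-normalized central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2)
    return eta(2, 0) + eta(0, 2)

img = np.zeros((64, 64))
img[10:30, 10:30] = 1          # a 20x20 square
big = np.zeros((64, 64))
big[5:45, 5:45] = 1            # same shape, translated and scaled 2x

print(hu1(img), hu1(big))      # near-identical, up to discretization
```

In practice libraries such as OpenCV compute all seven invariants; a feedforward classifier then maps the 7-dimensional feature vector to a benign/malignant label.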
Cummine, Jacqueline; Cribben, Ivor; Luu, Connie; Kim, Esther; Bahktiari, Reyhaneh; Georgiou, George; Boliek, Carol A
2016-05-01
The neural circuitry associated with language processing is complex and dynamic. Graphical models are useful for studying complex neural networks as this method provides information about unique connectivity between regions within the context of the entire network of interest. Here, the authors explored the neural networks during covert reading to determine the role of feedforward and feedback loops in covert speech production. Brain activity of skilled adult readers was assessed in real word and pseudoword reading tasks with functional MRI (fMRI). The authors provide evidence for activity coherence in the feedforward system (inferior frontal gyrus-supplementary motor area) during real word reading and in the feedback system (supramarginal gyrus-precentral gyrus) during pseudoword reading. Graphical models provided evidence of an extensive, highly connected, neural network when individuals read real words that relied on coordination of the feedforward system. In contrast, when individuals read pseudowords, the authors found a limited/restricted network that relied on coordination of the feedback system. Together, these results underscore the importance of considering multiple pathways and articulatory loops during language tasks and provide evidence for a print-to-speech neural network.
Garrido, Lucia; Driver, Jon; Dolan, Raymond J.; Duchaine, Bradley C.; Furl, Nicholas
2016-01-01
Face processing is mediated by interactions between functional areas in the occipital and temporal lobe, and the fusiform face area (FFA) and anterior temporal lobe play key roles in the recognition of facial identity. Individuals with developmental prosopagnosia (DP), a lifelong face recognition impairment, have been shown to have structural and functional neuronal alterations in these areas. The present study investigated how face selectivity is generated in participants with normal face processing and how functional abnormalities associated with DP arise as a function of network connectivity. Using functional magnetic resonance imaging and dynamic causal modeling, we examined effective connectivity in normal participants by assessing network models that include early visual cortex (EVC) and face-selective areas and then investigated the integrity of this connectivity in participants with DP. Results showed that a feedforward architecture from EVC to the occipital face area, EVC to FFA, and EVC to posterior superior temporal sulcus (pSTS) best explained how face selectivity arises in both controls and participants with DP. In this architecture, the DP group showed reduced connection strengths on feedforward connections carrying face information from EVC to FFA and EVC to pSTS. These altered network dynamics in DP contribute to the diminished face selectivity in the posterior occipitotemporal areas affected in DP. These findings suggest a novel view on the relevance of feedforward projection from EVC to posterior occipitotemporal face areas in generating cortical face selectivity and differences in face recognition ability. SIGNIFICANCE STATEMENT Areas of the human brain showing enhanced activation to faces compared to other objects or places have been extensively studied. However, the factors leading to this face selectivity have remained mostly unknown. 
We show that effective connectivity from early visual cortex to posterior occipitotemporal face areas gives rise to face selectivity. Furthermore, people with developmental prosopagnosia, a lifelong face recognition impairment, have reduced face selectivity in the posterior occipitotemporal face areas and left anterior temporal lobe. We show that this reduced face selectivity can be predicted by effective connectivity from early visual cortex to posterior occipitotemporal face areas. This study presents the first network-based account of how face selectivity arises in the human brain. PMID:27030766
Learning and coding in biological neural networks
NASA Astrophysics Data System (ADS)
Fiete, Ila Rani
How can large groups of neurons that locally modify their activities learn to collectively perform a desired task? Do studies of learning in small networks tell us anything about learning in the fantastically large collection of neurons that make up a vertebrate brain? What factors do neurons optimize by encoding sensory inputs or motor commands in the way they do? In this thesis I present a collection of four theoretical works: each of the projects was motivated by specific constraints and complexities of biological neural networks, as revealed by experimental studies; together, they aim to partially address some of the central questions of neuroscience posed above. We first study the role of sparse neural activity, as seen in the coding of sequential commands in a premotor area responsible for birdsong. We show that the sparse coding of temporal sequences in the songbird brain can, in a network where the feedforward plastic weights must translate the sparse sequential code into a time-varying muscle code, facilitate learning by minimizing synaptic interference. Next, we propose a biologically plausible synaptic plasticity rule that can perform goal-directed learning in recurrent networks of voltage-based spiking neurons that interact through conductances. Learning is based on the correlation of noisy local activity with a global reward signal; we prove that this rule performs stochastic gradient ascent on the reward. Thus, if the reward signal quantifies network performance on some desired task, the plasticity rule provably drives goal-directed learning in the network. To assess the convergence properties of the learning rule, we compare it with a known example of learning in the brain. Song-learning in finches is a clear example of a learned behavior, with detailed available neurophysiological data. With our learning rule, we train an anatomically accurate model birdsong network that drives a sound source to mimic an actual zebrafinch song. 
Simulation and theoretical results on the scalability of this rule show that learning with stochastic gradient ascent may be adequately fast to explain learning in the bird. Finally, we address the more general issue of the scalability of stochastic gradient learning on quadratic cost surfaces in linear systems, as a function of system size and task characteristics, by deriving analytical expressions for the learning curves.
NASA Astrophysics Data System (ADS)
Berthold, T.; Milbradt, P.; Berkhahn, V.
2018-04-01
This paper presents a model for the approximation of multiple, spatially distributed grain size distributions based on a feedforward neural network. Since a classical feedforward network does not guarantee valid cumulative distribution functions, a priori information is incorporated into the model by applying weight and architecture constraints. The model is derived in two steps. First, a model is presented that is able to produce a valid distribution function for a single sediment sample. Although initially developed for sediment samples, the model is not limited in its application; it can also be used to approximate any other multimodal continuous distribution function. In the second part, the network is extended in order to capture the spatial variation of the sediment samples, which were obtained from 48 locations in the investigation area. Results show that the model provides an adequate approximation of grain size distributions, satisfying the requirements of a cumulative distribution function.
Testolin, Alberto; De Filippo De Grazia, Michele; Zorzi, Marco
2017-01-01
The recent "deep learning revolution" in artificial neural networks had strong impact and widespread deployment for engineering applications, but the use of deep learning for neurocomputational modeling has been so far limited. In this article we argue that unsupervised deep learning represents an important step forward for improving neurocomputational models of perception and cognition, because it emphasizes the role of generative learning as opposed to discriminative (supervised) learning. As a case study, we present a series of simulations investigating the emergence of neural coding of visual space for sensorimotor transformations. We compare different network architectures commonly used as building blocks for unsupervised deep learning by systematically testing the type of receptive fields and gain modulation developed by the hidden neurons. In particular, we compare Restricted Boltzmann Machines (RBMs), which are stochastic, generative networks with bidirectional connections trained using contrastive divergence, with autoencoders, which are deterministic networks trained using error backpropagation. For both learning architectures we also explore the role of sparse coding, which has been identified as a fundamental principle of neural computation. The unsupervised models are then compared with supervised, feed-forward networks that learn an explicit mapping between different spatial reference frames. Our simulations show that both architectural and learning constraints strongly influenced the emergent coding of visual space in terms of distribution of tuning functions at the level of single neurons. Unsupervised models, and particularly RBMs, were found to more closely adhere to neurophysiological data from single-cell recordings in the primate parietal cortex. These results provide new insights into how basic properties of artificial neural networks might be relevant for modeling neural information processing in biological systems.
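The contrastive divergence training mentioned above can be shown in miniature. Below is a toy binary Restricted Boltzmann Machine trained with one-step contrastive divergence (CD-1): the weight gradient is the difference between data-driven and reconstruction-driven visible-hidden correlations. Sizes, learning rate, and data are illustrative, not from the study.

```python
import numpy as np

# Minimal binary RBM trained with CD-1 on two toy patterns.

rng = np.random.default_rng(3)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_vis, n_hid = 6, 3
W = 0.1 * rng.normal(size=(n_vis, n_hid))
a, b = np.zeros(n_vis), np.zeros(n_hid)      # visible / hidden biases

data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]], dtype=float)

lr = 0.1
for epoch in range(2000):
    v0 = data
    ph0 = sigmoid(v0 @ W + b)                  # hidden probabilities
    h0 = (rng.random(ph0.shape) < ph0) * 1.0   # sample hidden states
    pv1 = sigmoid(h0 @ W.T + a)                # reconstruct visibles
    ph1 = sigmoid(pv1 @ W + b)
    # CD-1: data correlations minus reconstruction correlations
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(data)
    a += lr * (v0 - pv1).mean(axis=0)
    b += lr * (ph0 - ph1).mean(axis=0)

recon = sigmoid(sigmoid(data @ W + b) @ W.T + a)
print(np.round(recon, 1))  # reconstructions should approach the patterns
```

The bidirectional weights (W used both ways) and the stochastic hidden samples are exactly the features that distinguish RBMs from the deterministic autoencoders the article compares them against.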
Estimating surface longwave radiative fluxes from satellites utilizing artificial neural networks
NASA Astrophysics Data System (ADS)
Nussbaumer, Eric A.; Pinker, Rachel T.
2012-04-01
A novel approach for calculating downwelling surface longwave (DSLW) radiation under all sky conditions is presented. The DSLW model (hereafter DSLW/UMD v2), like its predecessor DSLW/UMD v1, is driven with a combination of Moderate Resolution Imaging Spectroradiometer (MODIS) level-3 cloud parameters and information from the European Centre for Medium-Range Weather Forecasts (ECMWF) ERA-Interim model. To compute the clear sky component of DSLW, a two-layer feed-forward artificial neural network with sigmoid hidden neurons and linear output neurons is implemented; it is trained with simulations derived from runs of the Rapid Radiative Transfer Model (RRTM). When computing the cloud contribution to DSLW, the cloud base temperature is estimated using an independent artificial neural network of similar architecture, together with parameterizations. The cloud base temperature neural network is trained using spatially and temporally co-located MODIS and CloudSat Cloud Profiling Radar (CPR) and Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) observations. Daily average estimates of DSLW from 2003 to 2009 are compared against ground measurements from the Baseline Surface Radiation Network (BSRN), giving an overall correlation coefficient of 0.98, root mean square error (rmse) of 15.84 W m-2, and a bias of -0.39 W m-2. This is an improvement over an earlier version of the model (DSLW/UMD v1), which for the same time period has an overall correlation coefficient of 0.97, rmse of 17.27 W m-2, and bias of 0.73 W m-2.
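The three validation statistics quoted above (correlation coefficient, rmse, bias) are computed as follows; the data here are synthetic stand-ins for the model-vs-BSRN comparison, not the study's measurements.

```python
import numpy as np

# Correlation, rmse, and bias between "ground" and "model" daily
# DSLW values (W m-2). Synthetic data for illustration only.

rng = np.random.default_rng(5)
ground = 300 + 50 * rng.random(365)        # one year of daily values
model = ground + rng.normal(0, 15, 365)    # model with ~15 W m-2 error

corr = np.corrcoef(ground, model)[0, 1]
rmse = np.sqrt(np.mean((model - ground) ** 2))
bias = np.mean(model - ground)             # signed mean error
print(round(corr, 2), round(rmse, 1), round(bias, 1))
```

Note that rmse folds in both random error and bias, which is why the abstract reports bias separately: a near-zero bias with nonzero rmse indicates scatter rather than a systematic offset.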
Klink, P Christiaan; Dagnino, Bruno; Gariel-Mathis, Marie-Alice; Roelfsema, Pieter R
2017-07-05
The visual cortex is hierarchically organized, with low-level areas coding for simple features and higher areas for complex ones. Feedforward and feedback connections propagate information between areas in opposite directions, but their functional roles are only partially understood. We used electrical microstimulation to perturb the propagation of neuronal activity between areas V1 and V4 in monkeys performing a texture-segregation task. In both areas, microstimulation locally caused a brief phase of excitation, followed by inhibition. Both these effects propagated faithfully in the feedforward direction from V1 to V4. Stimulation of V4, however, caused little V1 excitation, but it did yield a delayed suppression during the late phase of visually driven activity. This suppression was pronounced for the V1 figure representation and weaker for background representations. Our results reveal functional differences between feedforward and feedback processing in texture segregation and suggest a specific modulating role for feedback connections in perceptual organization.
Neural net target-tracking system using structured laser patterns
NASA Astrophysics Data System (ADS)
Cho, Jae-Wan; Lee, Yong-Bum; Lee, Nam-Ho; Park, Soon-Yong; Lee, Jongmin; Choi, Gapchu; Baek, Sunghyun; Park, Dong-Sun
1996-06-01
In this paper, we describe a robot end-effector tracking system using sensory information from recently announced structured-pattern laser diodes, which can generate images with several different types of structured pattern. A neural network approach is employed to recognize the robot end-effector under three types of motion: translation, scaling, and rotation. Features for the neural network to detect the position of the end-effector are extracted from the preprocessed images. Artificial neural networks are used to store models and to match them against unknown input features, recognizing the position of the robot end-effector. Since a minimal number of samples are used for the different orientations of the end-effector, a neural network with generalization capability can be utilized for unknown input features. A feedforward neural network trained with backpropagation learning is used to detect the position of the end-effector. Another feedforward neural network module is used to estimate the motion from a sequence of images and to control the movements of the end-effector. Combining the two neural networks for recognizing the end-effector and estimating its motion with the preprocessing stage, the whole system tracks the robot end-effector effectively.
Dulla, Chris G.; Coulter, Douglas A.; Ziburkus, Jokubas
2015-01-01
Complex circuitry with feed-forward and feed-back systems regulate neuronal activity throughout the brain. Cell biological, electrical, and neurotransmitter systems enable neural networks to process and drive the entire spectrum of cognitive, behavioral, and motor functions. Simultaneous orchestration of distinct cells and interconnected neural circuits relies on hundreds, if not thousands, of unique molecular interactions. Even single molecule dysfunctions can be disrupting to neural circuit activity, leading to neurological pathology. Here, we sample our current understanding of how molecular aberrations lead to disruptions in networks using three neurological pathologies as exemplars: epilepsy, traumatic brain injury (TBI), and Alzheimer’s disease (AD). Epilepsy provides a window into how total destabilization of network balance can occur. TBI is an abrupt physical disruption that manifests in both acute and chronic neurological deficits. Last, in AD progressive cell loss leads to devastating cognitive consequences. Interestingly, all three of these neurological diseases are interrelated. The goal of this review, therefore, is to identify molecular changes that may lead to network dysfunction, elaborate on how altered network activity and circuit structure can contribute to neurological disease, and suggest common threads that may lie at the heart of molecular circuit dysfunction. PMID:25948650
NASA Astrophysics Data System (ADS)
Mandal, Sumantra
2006-11-01
In this paper, an artificial neural network (ANN) model is proposed to predict the constitutive flow behavior of a 15Cr-15Ni-2.2Mo-Ti modified austenitic stainless steel under hot deformation. Hot compression tests in the temperature range 850°C-1250°C and strain rate range 10⁻³-10² s⁻¹ were carried out. These tests provided the data required for training the neural network and for subsequent testing. The inputs of the neural network are strain, log strain rate, and temperature, while flow stress is obtained as output. A three-layer feed-forward network with ten neurons in a single hidden layer and a back-propagation learning algorithm has been employed. A very good correlation between experimental and predicted results has been obtained. The effect of temperature and strain rate on flow behavior has been simulated using the ANN model, and the results are consistent with the metallurgical trend. Finally, a Monte Carlo analysis has been carried out to assess the noise sensitivity of the developed model.
The application of neural networks to the SSME startup transient
NASA Technical Reports Server (NTRS)
Meyer, Claudia M.; Maul, William A.
1991-01-01
Feedforward neural networks were used to model three parameters during the Space Shuttle Main Engine startup transient. The three parameters were the main combustion chamber pressure, a controlled parameter, the high pressure oxidizer turbine discharge temperature, a redlined parameter, and the high pressure fuel pump discharge pressure, a failure-indicating performance parameter. Network inputs consisted of time windows of data from engine measurements that correlated highly to the modeled parameter. A standard backpropagation algorithm was used to train the feedforward networks on two nominal firings. Each trained network was validated with four additional nominal firings. For all three parameters, the neural networks were able to accurately predict the data in the validation sets as well as the training set.
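The "time windows of data" input scheme described above is simple to state concretely: each training row holds the last k samples of a correlated measurement, and the label is the modeled parameter at the current time step. A minimal sketch with synthetic stand-in signals (function and variable names are mine):

```python
import numpy as np

# Build sliding-window inputs for a feedforward model: row t contains
# signal[t-k+1..t]; the label is target[t].

def make_windows(signal, target, k):
    X = np.stack([signal[t - k + 1 : t + 1]
                  for t in range(k - 1, len(signal))])
    y = target[k - 1 :]
    return X, y

t = np.linspace(0, 1, 100)
sensor = np.sin(2 * np.pi * 3 * t)   # stand-in engine measurement
chamber = np.roll(sensor, 2)         # stand-in modeled parameter

X, y = make_windows(sensor, chamber, k=5)
print(X.shape, y.shape)              # (96, 5) (96,)
```

Any feedforward regressor trained on (X, y) then predicts the modeled parameter from the recent history of the correlated measurement, which is how the startup-transient models above are validated on held-out firings.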
Wang, Jinling; Belatreche, Ammar; Maguire, Liam P; McGinnity, Thomas Martin
2017-01-01
This paper presents an enhanced rank-order-based learning algorithm, called SpikeTemp, for spiking neural networks (SNNs) with a dynamically adaptive structure. The trained feed-forward SNN consists of two layers of spiking neurons: 1) an encoding layer which temporally encodes real-valued features into spatio-temporal spike patterns and 2) an output layer of dynamically grown neurons which perform spatio-temporal classification. Both Gaussian receptive fields and square cosine population encoding schemes are employed to encode real-valued features into spatio-temporal spike patterns. Unlike the rank-order-based learning approach, SpikeTemp uses the precise times of the incoming spikes for adjusting the synaptic weights such that early spikes result in a large weight change and late spikes lead to a smaller weight change. This removes the need to rank all the incoming spikes and, thus, reduces the computational cost of SpikeTemp. The proposed SpikeTemp algorithm is demonstrated on several benchmark data sets and on an image recognition task. The results show that SpikeTemp can achieve better classification performance and is much faster than the existing rank-order-based learning approach. In addition, the number of output neurons is much smaller when the square cosine encoding scheme is employed. Furthermore, SpikeTemp is benchmarked against a selection of existing machine learning algorithms, and the results demonstrate the ability of SpikeTemp to classify different data sets after just one presentation of the training samples with comparable classification performance.
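The Gaussian receptive field encoding mentioned above can be illustrated with a minimal sketch: each field's response to a real-valued feature is converted into a spike time, with stronger responses firing earlier. The field count, input range, and maximum latency below are assumptions, not the paper's settings:

```python
import numpy as np

def gaussian_rf_encode(x, n_fields=8, t_max=10.0, lo=0.0, hi=1.0):
    """Map a real value to one spike time per field: the more strongly a
    Gaussian receptive field responds, the earlier its neuron fires."""
    centers = np.linspace(lo, hi, n_fields)
    width = (hi - lo) / (n_fields - 1)
    resp = np.exp(-0.5 * ((x - centers) / width) ** 2)  # responses in (0, 1]
    return t_max * (1.0 - resp)                # strong response -> early spike

times = gaussian_rf_encode(0.5)                # spike pattern for feature 0.5
```

Fields whose centers sit near the encoded value fire first, producing the spatio-temporal pattern that the precise-spike-time weight update then operates on.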
Sodium Pumps Mediate Activity-Dependent Changes in Mammalian Motor Networks
Picton, Laurence D.; Nascimento, Filipe; Broadhead, Matthew J.; Sillar, Keith T.
2017-01-01
Ubiquitously expressed sodium pumps are best known for maintaining the ionic gradients and resting membrane potential required for generating action potentials. However, activity- and state-dependent changes in pump activity can also influence neuronal firing and regulate rhythmic network output. Here we demonstrate that changes in sodium pump activity regulate locomotor networks in the spinal cord of neonatal mice. The sodium pump inhibitor, ouabain, increased the frequency and decreased the amplitude of drug-induced locomotor bursting, effects that were dependent on the presence of the neuromodulator dopamine. Conversely, activating the pump with the sodium ionophore monensin decreased burst frequency. When more “natural” locomotor output was evoked using dorsal-root stimulation, ouabain increased burst frequency and extended locomotor episode duration, whereas monensin slowed and shortened episodes. Decreasing the time between dorsal-root stimulation, and therefore interepisode interval, also shortened and slowed activity, suggesting that pump activity encodes information about past network output and contributes to feedforward control of subsequent locomotor bouts. Using whole-cell patch-clamp recordings from spinal motoneurons and interneurons, we describe a long-duration (∼60 s), activity-dependent, TTX- and ouabain-sensitive, hyperpolarization (∼5 mV), which is mediated by spike-dependent increases in pump activity. The duration of this dynamic pump potential is enhanced by dopamine. Our results therefore reveal sodium pumps as dynamic regulators of mammalian spinal motor networks that can also be affected by neuromodulatory systems. Given the involvement of sodium pumps in movement disorders, such as amyotrophic lateral sclerosis and rapid-onset dystonia parkinsonism, knowledge of their contribution to motor network regulation also has considerable clinical importance. 
SIGNIFICANCE STATEMENT The sodium pump is ubiquitously expressed and responsible for at least half of total brain energy consumption. The pumps maintain ionic gradients and the resting membrane potential of neurons, but increasing evidence suggests that activity- and state-dependent changes in pump activity also influence neuronal firing. Here we demonstrate that changes in sodium pump activity regulate locomotor output in the spinal cord of neonatal mice. We describe a sodium pump-mediated afterhyperpolarization in spinal neurons, mediated by spike-dependent increases in pump activity, which is affected by dopamine. Understanding how sodium pumps contribute to network regulation and are targeted by neuromodulators, including dopamine, has clinical relevance due to the role of the sodium pump in diseases, including amyotrophic lateral sclerosis, parkinsonism, epilepsy, and hemiplegic migraine. PMID:28123025
Agnew, Zarinah; Nagarajan, Srikantan; Houde, John; Ivry, Richard B.
2017-01-01
The cerebellum has been hypothesized to form a crucial part of the speech motor control network. Evidence for this comes from patients with cerebellar damage, who exhibit a variety of speech deficits, as well as imaging studies showing cerebellar activation during speech production in healthy individuals. To date, the precise role of the cerebellum in speech motor control remains unclear, as it has been implicated in both anticipatory (feedforward) and reactive (feedback) control. Here, we assess both anticipatory and reactive aspects of speech motor control, comparing the performance of patients with cerebellar degeneration and matched controls. Experiment 1 tested feedforward control by examining speech adaptation across trials in response to a consistent perturbation of auditory feedback. Experiment 2 tested feedback control, examining online corrections in response to inconsistent perturbations of auditory feedback. Both male and female patients and controls were tested. The patients were impaired in adapting their feedforward control system relative to controls, exhibiting an attenuated anticipatory response to the perturbation. In contrast, the patients produced even larger compensatory responses than controls, suggesting an increased reliance on sensory feedback to guide speech articulation in this population. Together, these results suggest that the cerebellum is crucial for maintaining accurate feedforward control of speech, but relatively uninvolved in feedback control. SIGNIFICANCE STATEMENT Speech motor control is a complex activity that is thought to rely on both predictive, feedforward control as well as reactive, feedback control. While the cerebellum has been shown to be part of the speech motor control network, its functional contribution to feedback and feedforward control remains controversial. 
Here, we use real-time auditory perturbations of speech to show that patients with cerebellar degeneration are impaired in adapting feedforward control of speech but retain the ability to make online feedback corrections; indeed, the patients show an increased sensitivity to feedback. These results indicate that the cerebellum forms a crucial part of the feedforward control system for speech but is not essential for online, feedback control. PMID:28842410
Least square neural network model of the crude oil blending process.
Rubio, José de Jesús
2016-06-01
In this paper, the recursive least square algorithm is designed for the big data learning of a feedforward neural network. The proposed method, which combines the recursive least square algorithm with a feedforward neural network, obtains four advantages over either algorithm alone: it requires fewer regressors, it is fast, it has learning ability, and it is more compact. Stability, convergence, boundedness of parameters, and local minimum avoidance of the proposed technique are guaranteed. The introduced strategy is applied to the modeling of the crude oil blending process. Copyright © 2016 Elsevier Ltd. All rights reserved.
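A minimal sketch of such a combination, assuming (since the abstract does not specify the architecture) that recursive least squares estimates the linear output weights over a fixed nonlinear hidden layer, with a synthetic smooth target standing in for the crude oil blending data:

```python
import numpy as np

rng = np.random.default_rng(1)

n_in, n_hidden = 3, 64
W_hid = rng.normal(size=(n_in, n_hidden))      # fixed random hidden layer

def features(x):
    return np.tanh(x @ W_hid)                  # hidden-layer regressors

# Synthetic smooth target standing in for the blending-process data
X = rng.uniform(-1, 1, size=(400, n_in))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2]

w = np.zeros(n_hidden)                         # output weights
P = np.eye(n_hidden) * 100.0                   # inverse-covariance estimate

for xi, yi in zip(X, y):                       # one RLS update per sample
    phi = features(xi[None, :])[0]
    k = P @ phi / (1.0 + phi @ P @ phi)        # gain vector
    w += k * (yi - phi @ w)                    # correct by the innovation
    P -= np.outer(k, phi @ P)                  # rank-one covariance downdate

rls_mse = float(np.mean((features(X) @ w - y) ** 2))
```

The single-pass, per-sample update is what makes RLS attractive for big-data learning: no gradient epochs over the whole set are needed.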
Jeng, J T; Lee, T T
2000-01-01
A Chebyshev polynomial-based unified model (CPBUM) neural network is introduced and applied to control a magnetic bearing system. First, we show that the CPBUM neural network not only retains the universal approximation capability, but also learns faster than a conventional feedforward/recurrent neural network. It follows that the CPBUM neural network is more suitable for controller design than the conventional feedforward/recurrent neural network. Second, we propose an inverse system method, based on the CPBUM neural network, to control a magnetic bearing system. The proposed controller has two structures, namely, off-line and on-line learning structures. We derive a new learning algorithm for each proposed structure. The experimental results show that the proposed neural network architecture provides greater flexibility and better performance in controlling magnetic bearing systems.
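The core of a Chebyshev-polynomial-based model is a polynomial feature expansion followed by a trainable linear layer. Below is a minimal sketch of that generic functional-link construction (not the CPBUM controller itself), using the three-term recurrence for the Chebyshev polynomials:

```python
import numpy as np

def chebyshev_features(x, order):
    """Chebyshev polynomials T_0..T_order of x in [-1, 1], built with the
    recurrence T_{n+1}(x) = 2 x T_n(x) - T_{n-1}(x)."""
    T = [np.ones_like(x), x]
    for _ in range(order - 1):
        T.append(2 * x * T[-1] - T[-2])
    return np.stack(T[: order + 1], axis=-1)

x = np.linspace(-1, 1, 101)
Phi = chebyshev_features(x, 5)                 # (101, 6) design matrix
# A single linear layer on these features approximates a smooth target:
coef, *_ = np.linalg.lstsq(Phi, np.cos(np.pi * x), rcond=None)
fit_err = float(np.max(np.abs(Phi @ coef - np.cos(np.pi * x))))
```

Because the nonlinearity is moved into fixed polynomial features, only the linear layer is trained, which is one intuition for the faster learning the abstract reports.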
Wright, William J; Schlüter, Oliver M; Dong, Yan
2017-04-01
The nucleus accumbens (NAc) gates motivated behaviors through the functional output of principal medium spiny neurons (MSNs), whereas dysfunctional output of NAc MSNs contributes to a variety of psychiatric disorders. Fast-spiking interneurons (FSIs) are sparsely distributed throughout the NAc, forming local feedforward inhibitory circuits. It remains elusive how FSI-based feedforward circuits regulate the output of NAc MSNs. Here, we investigated a distinct subpopulation of NAc FSIs that express the cannabinoid receptor type-1 (CB1). Using a combination of paired electrophysiological recordings and pharmacological approaches, we characterized and compared feedforward inhibition of NAc MSNs from CB1+ FSIs and lateral inhibition from recurrent MSN collaterals. We observed that CB1+ FSIs exerted robust inhibitory control over a large percentage of nearby MSNs in contrast to local MSN collaterals that provided only sparse and weak inhibitory input to their neighboring MSNs. Furthermore, CB1+ FSI-mediated feedforward inhibition was preferentially suppressed by endocannabinoid (eCB) signaling, whereas MSN-mediated lateral inhibition was unaffected. Finally, we demonstrated that CB1+ FSI synapses onto MSNs are capable of undergoing experience-dependent long-term depression in a voltage- and eCB-dependent manner. These findings demonstrated that CB1+ FSIs are a major source of local inhibitory control of MSNs and a critical component of the feedforward inhibitory circuits regulating the output of the NAc.
Complex inhibitory microcircuitry regulates retinal signaling near visual threshold
Grimes, William N.; Zhang, Jun; Tian, Hua; Graydon, Cole W.; Hoon, Mrinalini; Rieke, Fred
2015-01-01
Neuronal microcircuits, small, localized signaling motifs involving two or more neurons, underlie signal processing and computation in the brain. Compartmentalized signaling within a neuron may enable it to participate in multiple, independent microcircuits. Each A17 amacrine cell in the mammalian retina contains within its dendrites hundreds of synaptic feedback microcircuits that operate independently to modulate feedforward signaling in the inner retina. Each of these microcircuits comprises a small (<1 μm) synaptic varicosity that typically receives one excitatory synapse from a presynaptic rod bipolar cell (RBC) and returns two reciprocal inhibitory synapses back onto the same RBC terminal. Feedback inhibition from the A17 sculpts the feedforward signal from the RBC to the AII, a critical component of the circuitry mediating night vision. Here, we show that the two inhibitory synapses from the A17 to the RBC express kinetically distinct populations of GABA receptors: rapidly activating GABAARs are enriched at one synapse while more slowly activating GABACRs are enriched at the other. Anatomical and electrophysiological data suggest that macromolecular complexes of voltage-gated (Cav) channels and Ca2+-activated K+ channels help to regulate GABA release from A17 varicosities and limit GABACR activation under certain conditions. Finally, we find that selective elimination of A17-mediated feedback inhibition reduces the signal to noise ratio of responses to dim flashes recorded in the feedforward pathway (i.e., the AII amacrine cell). We conclude that A17-mediated feedback inhibition improves the signal to noise ratio of RBC-AII transmission near visual threshold, thereby improving visual sensitivity at night. PMID:25972578
Calculation of Crystallographic Texture of BCC Steels During Cold Rolling
NASA Astrophysics Data System (ADS)
Das, Arpan
2017-05-01
BCC alloys commonly tend to develop strong fibre textures, often represented as isointensity diagrams in φ1 sections or by fibre diagrams. The alpha fibre in BCC steels is generally characterised by a <110> crystallographic axis parallel to the rolling direction. The objective of the present research is to correlate carbon content, carbide dispersion, rolling reduction, and Euler angles (ϕ) (when φ1 = 0° and φ2 = 45° along the alpha fibre) with the resulting alpha fibre texture orientation intensity. In the present research, Bayesian neural computation has been employed to correlate these variables and to compare comprehensively with the existing feed-forward neural network model. An excellent match to the measured texture data within the bounding box of the texture training data set had already been obtained by other researchers using a feed-forward neural network model. Feed-forward neural network predictions outside the bounds of the training texture data, however, showed deviations from the expected values. Here, Bayesian computation has been applied in the same way to confirm that the predictions are reasonable in the context of basic metallurgical principles; they matched better outside the bounds of the training texture data set than the reported feed-forward neural network. Bayesian computation puts error bars on predicted values and allows the significance of each individual parameter to be estimated. Additionally, Bayesian computation makes it possible to estimate the isolated influence of a particular variable, such as carbon concentration, which cannot in practice be varied independently. This shows the ability of the Bayesian neural network to examine new phenomena in situations where the data cannot be accessed through experiments.
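One simple way to obtain error bars of the kind described, used here purely as an illustrative stand-in for full Bayesian neural computation, is an ensemble of small networks: the spread of ensemble predictions widens outside the bounds of the training data, flagging extrapolation. All sizes, targets, and training settings below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def train_net(X, y, n_hidden=10, epochs=3000, lr=0.05, seed=0):
    """One small tanh network trained by full-batch gradient descent."""
    r = np.random.default_rng(seed)
    W1 = r.normal(0, 1.0, (1, n_hidden)); b1 = r.normal(0, 1.0, n_hidden)
    W2 = r.normal(0, 0.3, (n_hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)
        err = (h @ W2 + b2) - y
        dh = (err @ W2.T) * (1 - h ** 2)
        W2 -= lr * h.T @ err / len(X); b2 -= lr * err.mean(0)
        W1 -= lr * X.T @ dh / len(X); b1 -= lr * dh.mean(0)
    return lambda Z: np.tanh(Z @ W1 + b1) @ W2 + b2

X = rng.uniform(-1, 1, (60, 1))        # "training data bounds": [-1, 1]
y = np.sin(2 * X)
nets = [train_net(X, y, seed=s) for s in range(8)]

def predict(Z):
    preds = np.stack([f(Z) for f in nets])
    return preds.mean(0), preds.std(0)  # mean prediction and error bar

_, sd_in = predict(np.array([[0.0]]))   # inside the training bounds
_, sd_out = predict(np.array([[3.0]]))  # outside the training bounds
```

Inside the training range the members agree; outside it their disagreement grows, which is the behavior the abstract exploits to judge extrapolated texture predictions.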
Singer, Donald A.; Kouda, Ryoichi
1996-01-01
A feedforward neural network with one hidden layer and five neurons was trained to recognize the distance to kuroko mineral deposits. Average amounts per hole of pyrite, sericite, and gypsum plus anhydrite, as measured by X-rays in 69 drillholes, were used to train the net. Drillholes near and between the Fukazawa, Furutobe, and Shakanai mines were used. The training data were selected carefully to represent well-explored areas where some confidence of the distance to ore was assured. A logarithmic transform was applied to remove the skewness of distance, and each variable was scaled and centered by subtracting the median and dividing by the interquartile range. The learning algorithm of annealing plus conjugate gradients was used to minimize the mean squared error of the scaled distance to ore. The trained network then was applied to all of the 152 drillholes that had measured gypsum, sericite, and pyrite. A contour plot of the neural-net-predicted distance to ore shows fairly wide areas of 1 km or less to ore; each of the known deposit groups is within the 1 km contour. The high and low distances on the margins of the contoured distance plot are in part the result of boundary effects of the contouring algorithm. For example, the short distances to ore predicted west of the Shakanai (Hanaoka) deposits are in basement. However, the short distances to ore predicted northeast of Furutobe, just off the figure, coincide with the location of the Nurukawa kuroko deposit, and the Omaki deposit, south of the Shakanai-Hanaoka deposits, seems to lie on an extension of a short-distance-to-ore contour, but is beyond the 3 km limit from drillholes. Also of interest are some areas only a few kilometers from the Fukazawa and Shakanai groups of deposits that are estimated to be many kilometers from ore, apparently reflecting the network's recognition of the extreme local variability of the geology near some deposits.
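The preprocessing described above (log-transform the skewed distance, then center each variable on its median and scale by its interquartile range) can be sketched directly; the values below are illustrative, not the actual drillhole measurements:

```python
import numpy as np

def robust_scale(v):
    """Center on the median and scale by the interquartile range."""
    q1, med, q3 = np.percentile(v, [25, 50, 75])
    return (v - med) / (q3 - q1)

# Illustrative skewed distances (km); the log transform symmetrizes them
distance_km = np.array([0.2, 0.5, 1.0, 1.5, 3.0, 8.0, 20.0])
target = robust_scale(np.log(distance_km))

# Illustrative per-hole mineral measurement, scaled the same way
pyrite = np.array([12.0, 9.5, 7.0, 6.0, 4.0, 2.5, 1.0])
x_pyrite = robust_scale(pyrite)
```

Median/IQR scaling is preferred over mean/standard-deviation scaling here because skewed geochemical variables contain outliers that would otherwise dominate the scale.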
Strong Recurrent Networks Compute the Orientation-Tuning of Surround Modulation in Primate V1
Shushruth, S.; Mangapathy, Pradeep; Ichida, Jennifer M.; Bressloff, Paul C.; Schwabe, Lars; Angelucci, Alessandra
2012-01-01
In macaque primary visual cortex (V1) neuronal responses to stimuli inside the receptive field (RF) are modulated by stimuli in the RF surround. This modulation is orientation-specific. Previous studies suggested that for some cells this specificity may not be fixed, but changes with the stimulus orientation presented to the RF. We demonstrate, in recording studies, that this tuning behavior is instead highly prevalent in V1 and, in theoretical work, that it arises only if V1 operates in a regime of strong local recurrence. Strongest surround suppression occurs when the stimuli in the RF and the surround are iso-oriented, and strongest facilitation when the stimuli are cross-oriented. This is the case even when the RF is sub-optimally activated by a stimulus of non-preferred orientation, but only if this stimulus can activate the cell when presented alone. This tuning behavior emerges from the interaction of lateral inhibition (via the surround pathways), which is tuned to the RF’s preferred orientation, with weakly-tuned, but strong, local recurrent connections, causing maximal withdrawal of recurrent excitation at the feedforward input orientation. Thus, horizontal and feedback modulation of strong recurrent circuits allows the tuning of contextual effects to change with changing feedforward inputs. PMID:22219292
Perceptual learning as improved probabilistic inference in early sensory areas.
Bejjanki, Vikranth R; Beck, Jeffrey M; Lu, Zhong-Lin; Pouget, Alexandre
2011-05-01
Extensive training on simple tasks such as fine orientation discrimination results in large improvements in performance, a form of learning known as perceptual learning. Previous models have argued that perceptual learning is due to either sharpening and amplification of tuning curves in early visual areas or to improved probabilistic inference in later visual areas (at the decision stage). However, early theories are inconsistent with the conclusions of psychophysical experiments manipulating external noise, whereas late theories cannot explain the changes in neural responses that have been reported in cortical areas V1 and V4. Here we show that we can capture both the neurophysiological and behavioral aspects of perceptual learning by altering only the feedforward connectivity in a recurrent network of spiking neurons so as to improve probabilistic inference in early visual areas. The resulting network shows modest changes in tuning curves, in line with neurophysiological reports, along with a marked reduction in the amplitude of pairwise noise correlations.
Recurrent network dynamics reconciles visual motion segmentation and integration.
Medathati, N V Kartheek; Rankin, James; Meso, Andrew I; Kornprobst, Pierre; Masson, Guillaume S
2017-09-12
In sensory systems, a range of computational rules are presumed to be implemented by neuronal subpopulations with different tuning functions. For instance, in primate cortical area MT, different classes of direction-selective cells have been identified and related either to motion integration, segmentation or transparency. Still, how such different tuning properties are constructed is unclear. The dominant theoretical viewpoint based on a linear-nonlinear feed-forward cascade does not account for their complex temporal dynamics and their versatility when facing different input statistics. Here, we demonstrate that a recurrent network model of visual motion processing can reconcile these different properties. Using a ring network, we show how excitatory and inhibitory interactions can implement different computational rules such as vector averaging, winner-take-all or superposition. The model also captures ordered temporal transitions between these behaviors. In particular, depending on the inhibition regime the network can switch from motion integration to segmentation, thus being able to compute either a single pattern motion or to superpose multiple inputs as in motion transparency. We thus demonstrate that recurrent architectures can adaptively give rise to different cortical computational regimes depending upon the input statistics, from sensory flow integration to segmentation.
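A minimal rate-model sketch of the ring-network idea: direction-tuned units with local excitation and broad inhibition, where raising the inhibition strength moves the network from superposition of two motion components toward a single dominant peak. All sizes, gains, and inputs are illustrative assumptions, not the paper's model:

```python
import numpy as np

n = 64
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)   # preferred directions

def circ(d):
    """Wrap angle differences into (-pi, pi]."""
    return np.angle(np.exp(1j * d))

def ring_weights(j_exc, j_inh, sigma=0.5):
    d = circ(theta[:, None] - theta[None, :])
    return j_exc * np.exp(-d ** 2 / (2 * sigma ** 2)) / n - j_inh / n

def run(W, inp, steps=400, dt=0.1):
    r = np.zeros(n)
    for _ in range(steps):
        r += dt * (-r + np.maximum(W @ r + inp, 0.0))  # rectified rate dynamics
    return r

# Two motion components 120 degrees apart, the second slightly weaker
inp = (np.exp(-circ(theta - 1.0) ** 2 / 0.2)
       + 0.8 * np.exp(-circ(theta - 1.0 - 2 * np.pi / 3) ** 2 / 0.2))

r_weak = run(ring_weights(2.0, 0.5), inp)      # weak inhibition: superposition
r_strong = run(ring_weights(2.0, 8.0), inp)    # strong inhibition: toward WTA

i1 = int(np.argmin(np.abs(theta - 1.0)))
i2 = int(np.argmin(np.abs(theta - (1.0 + 2 * np.pi / 3))))
ratio_weak = r_weak[i2] / r_weak[i1]           # secondary/primary peak ratio
ratio_strong = r_strong[i2] / r_strong[i1]
```

With weak inhibition both peaks survive (transparency/superposition); stronger global inhibition suppresses the weaker component relative to the stronger one, the direction of the integration/segmentation switch the abstract describes.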
Lennon, William; Hecht-Nielsen, Robert; Yamazaki, Tadashi
2014-01-01
While the anatomy of the cerebellar microcircuit is well-studied, how it implements cerebellar function is not understood. A number of models have been proposed to describe this mechanism but few emphasize the role of the vast network Purkinje cells (PKJs) form with the molecular layer interneurons (MLIs)—the stellate and basket cells. We propose a model of the MLI-PKJ network composed of simple spiking neurons incorporating the major anatomical and physiological features. In computer simulations, the model reproduces the irregular firing patterns observed in PKJs and MLIs in vitro and a shift toward faster, more regular firing patterns when inhibitory synaptic currents are blocked. In the model, the time between PKJ spikes is shown to be proportional to the amount of feedforward inhibition from an MLI on average. The two key elements of the model are: (1) spontaneously active PKJs and MLIs due to an endogenous depolarizing current, and (2) adherence to known anatomical connectivity along a parasagittal strip of cerebellar cortex. We propose this model to extend previous spiking network models of the cerebellum and for further computational investigation into the role of irregular firing and MLIs in cerebellar learning and function. PMID:25520646
A path integral approach to the Hodgkin-Huxley model
NASA Astrophysics Data System (ADS)
Baravalle, Roman; Rosso, Osvaldo A.; Montani, Fernando
2017-11-01
To understand how single neurons process sensory information, it is necessary to develop suitable stochastic models to describe the response variability of the recorded spike trains. Spikes in a given neuron are produced by the synergistic action of the sodium and potassium voltage-dependent channels that open or close their gates. The Hodgkin-Huxley (HH) equations describe the ionic mechanisms underlying the initiation and propagation of action potentials through a set of nonlinear ordinary differential equations that approximate the electrical characteristics of the excitable cell. The path integral provides an adequate approach to compute quantities such as transition probabilities, and any stochastic system can be expressed in terms of this methodology. We use the technique of path integrals to determine the analytical solution driven by a non-Gaussian colored noise when considering the HH equations as a stochastic system. The different neuronal dynamics are investigated by estimating the path integral solutions driven by a non-Gaussian colored noise q. More specifically, we take into account the correlational structures of the complex neuronal signals not just by estimating the transition probability associated with the Gaussian approach to the stochastic HH equations, but instead by considering much more subtle processes accounting for the non-Gaussian noise that could be induced by the surrounding neural network and by feedforward correlations. This allows us to investigate the underlying dynamics of the neural system when different scenarios of noise correlations are considered.
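The stochastic HH system the paper treats analytically can be sketched numerically with a simple Euler-Maruyama integration of the standard HH equations. Note that plain Gaussian white current noise is used below as a stand-in for the colored, non-Gaussian noise handled in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Standard HH parameters (mS/cm^2, mV, uF/cm^2)
C = 1.0
g_na, g_k, g_l = 120.0, 36.0, 0.3
e_na, e_k, e_l = 50.0, -77.0, -54.4

def a_m(v): return 0.1 * (v + 40) / (1 - np.exp(-(v + 40) / 10))
def b_m(v): return 4.0 * np.exp(-(v + 65) / 18)
def a_h(v): return 0.07 * np.exp(-(v + 65) / 20)
def b_h(v): return 1.0 / (1 + np.exp(-(v + 35) / 10))
def a_n(v): return 0.01 * (v + 55) / (1 - np.exp(-(v + 55) / 10))
def b_n(v): return 0.125 * np.exp(-(v + 65) / 80)

dt, T = 0.01, 200.0                     # ms
i_ext, noise_sd = 10.0, 1.0             # suprathreshold drive + noise (uA/cm^2)

v, m, h, n = -65.0, 0.05, 0.6, 0.32
spikes, above = 0, False
for _ in range(int(T / dt)):
    i_ion = (g_na * m**3 * h * (v - e_na) + g_k * n**4 * (v - e_k)
             + g_l * (v - e_l))
    # Euler-Maruyama: deterministic drift plus sqrt(dt)-scaled noise
    v += dt * (i_ext - i_ion) / C + noise_sd * np.sqrt(dt) * rng.normal()
    m += dt * (a_m(v) * (1 - m) - b_m(v) * m)
    h += dt * (a_h(v) * (1 - h) - b_h(v) * h)
    n += dt * (a_n(v) * (1 - n) - b_n(v) * n)
    if v > 0 and not above:             # count upward threshold crossings
        spikes += 1
    above = v > 0
```

Replacing the Gaussian term with a colored, non-Gaussian process (e.g., an Ornstein-Uhlenbeck-type variable with q-dependent statistics) is the step the path integral treatment addresses analytically.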
Unsupervised Feature Learning With Winner-Takes-All Based STDP
Ferré, Paul; Mamalet, Franck; Thorpe, Simon J.
2018-01-01
We present a novel strategy for unsupervised feature learning in image applications inspired by the Spike-Timing-Dependent-Plasticity (STDP) biological learning rule. We show equivalence between rank order coding Leaky-Integrate-and-Fire neurons and ReLU artificial neurons when applied to non-temporal data. We apply this to images using rank-order coding, which allows us to perform a full network simulation with a single feed-forward pass using GPU hardware. Next we introduce a binary STDP learning rule compatible with training on batches of images. Two mechanisms to stabilize the training are also presented: a Winner-Takes-All (WTA) framework which selects the most relevant patches to learn from along the spatial dimensions, and a simple feature-wise normalization as homeostatic process. This learning process allows us to train multi-layer architectures of convolutional sparse features. We apply our method to extract features from the MNIST, ETH80, CIFAR-10, and STL-10 datasets and show that these features are relevant for classification. We finally compare these results with several other state-of-the-art unsupervised learning methods. PMID:29674961
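The WTA-plus-binary-STDP idea can be sketched as follows: for each input patch, the best-matching feature (the winner) moves toward a binarized version of the patch, so that "early" pre-synaptic spikes potentiate and the rest depress. Patch sizes, thresholds, and rates are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(4)

n_features, patch_dim = 8, 25               # e.g. 5x5 image patches
W = rng.normal(0, 0.1, (n_features, patch_dim))

def binary_stdp_step(patches, W, lr=0.1):
    for p in patches:
        winner = np.argmax(W @ p)                  # WTA along the feature axis
        pre_spiked = (p > p.mean()).astype(float)  # binarized "early spikes"
        # Binary STDP: inputs with an early spike pull the weight toward +1,
        # the rest toward -1; the lr mix keeps every weight bounded in [-1, 1]
        W[winner] = (1 - lr) * W[winner] + lr * (2 * pre_spiked - 1)
    return W

patches = rng.normal(0, 1, (500, patch_dim))       # stand-in for image patches
W = binary_stdp_step(patches, W)
```

Only the winning feature is updated per patch, which is the stabilizing role of the WTA step; the convex-combination update plays the part of the homeostatic bound on weights.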
Reliability analysis of C-130 turboprop engine components using artificial neural network
NASA Astrophysics Data System (ADS)
Qattan, Nizar A.
In this study, we predict the failure rate of the Lockheed C-130 engine turbine. More than thirty years of local operational field data were used for failure rate prediction and validation. The Weibull regression model and three artificial neural network models (feed-forward back-propagation, radial basis function, and multilayer perceptron) are utilized to perform this study. For this purpose, the thesis is divided into five major parts. The first part deals with the Weibull regression model, used to predict the turbine's general failure rate and the rate of failures that require overhaul maintenance. The second part covers the artificial neural network (ANN) model utilizing the feed-forward back-propagation algorithm as a learning rule. The MATLAB package is used to build and design a code to simulate the given data; the inputs to the neural network are the independent variables, and the outputs are the general failure rate of the turbine and the failures that required overhaul maintenance. In the third part, we predict the general failure rate of the turbine and the failures that require overhaul maintenance using a radial basis neural network model in the MATLAB toolbox. In the fourth part, we compare the predictions of the feed-forward back-propagation model with those of the Weibull regression model and the radial basis neural network model. The results show that the failure rate predicted by the feed-forward back-propagation artificial neural network model is in closer agreement with the actual field data, and with the radial basis neural network model, than the failure rate predicted by the Weibull model. By the end of the study, we forecast the general failure rate of the Lockheed C-130 engine turbine, the failures that required overhaul maintenance, and six categorical failures using a multilayer perceptron (MLP) neural network model in the DTREG commercial software.
The results also give an insight into the reliability of the engine turbine under actual operating conditions, which can be used by aircraft operators for assessing system and component failures and customizing the maintenance programs recommended by the manufacturer.
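For reference, the Weibull failure rate (hazard) fitted in the first part of the study has the closed form h(t) = (beta/eta) * (t/eta)**(beta - 1); a shape beta > 1 gives the increasing, wear-out failure rate typical of turbine components. The parameter values below are illustrative, not the C-130 fit:

```python
# Weibull hazard sketch: shape beta, scale eta (e.g. in flight hours)
def weibull_hazard(t, beta, eta):
    return (beta / eta) * (t / eta) ** (beta - 1)

beta, eta = 1.8, 5000.0                 # illustrative wear-out parameters
h_1000 = weibull_hazard(1000.0, beta, eta)
h_4000 = weibull_hazard(4000.0, beta, eta)   # higher: failure rate increases
```

The neural network models in the thesis predict this same failure-rate quantity directly from the field data rather than through the parametric Weibull form, which is what the fourth-part comparison evaluates.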
Parallel processing of afferent olfactory sensory information
Vaaga, Christopher E.
2016-01-01
Key points: The functional synaptic connectivity between olfactory receptor neurons and principal cells within the olfactory bulb is not well understood. One view suggests that mitral cells, the primary output neuron of the olfactory bulb, are solely activated by feedforward excitation. Using focal, single glomerular stimulation, we demonstrate that mitral cells receive direct, monosynaptic input from olfactory receptor neurons. Compared to external tufted cells, mitral cells have a prolonged afferent-evoked EPSC, which serves to amplify the synaptic input. The properties of presynaptic glutamate release from olfactory receptor neurons are similar between mitral and external tufted cells. Our data suggest that afferent input enters the olfactory bulb in a parallel fashion. Abstract: Primary olfactory receptor neurons terminate in anatomically and functionally discrete cortical modules known as olfactory bulb glomeruli. The synaptic connectivity and postsynaptic responses of mitral and external tufted cells within the glomerulus may involve both direct and indirect components. For example, it has been suggested that sensory input to mitral cells is indirect through feedforward excitation from external tufted cells. We also observed feedforward excitation of mitral cells with weak stimulation of the olfactory nerve layer; however, focal stimulation of an axon bundle entering an individual glomerulus revealed that mitral cells receive monosynaptic afferent inputs. Although external tufted cells had a 4.1-fold larger peak EPSC amplitude, integration of the evoked currents showed that the synaptic charge was 5-fold larger in mitral cells, reflecting the prolonged response in mitral cells. Presynaptic afferents onto mitral and external tufted cells had similar quantal amplitude and release probability, suggesting that the larger peak EPSC in external tufted cells was the result of more synaptic contacts.
The results of the present study indicate that the monosynaptic afferent input to mitral cells depends on the strength of odorant stimulation. The enhanced spiking that we observed in response to brief afferent input provides a mechanism for amplifying sensory information and contrasts with the transient response in external tufted cells. These parallel input paths may have discrete functions in processing olfactory sensory input. PMID:27377344
Wagatsuma, Nobuhiko; Sakai, Ko
2017-01-01
Border ownership (BO) indicates which side of a contour owns a border, and it plays a fundamental role in figure-ground segregation. The majority of neurons in areas V2 and V4 of monkeys exhibit BO selectivity. A physiological study reported that the responses of BO-selective cells show a rapid transition when a presented square is flipped about its classical receptive field (CRF) so that the opposite BO is presented, whereas the transition is significantly slower when a square with a clear BO is replaced by an ambiguous edge, e.g., when the square is greatly enlarged. The rapid transition appears to reflect the influence of feedforward processing on BO selectivity. Here, we investigated the roles of feedforward signals and cortical interactions in the time courses of BO-selective cells by modeling a visual cortical network comprising V1, V2, and posterior parietal (PP) modules. In our computational model, the recurrent pathways among these modules gradually established the BO assignments, while feedforward inputs mainly determined the activities of the modules. Surround suppression/facilitation in early-level areas modulated the activities of V2 cells to provide BO signals. Weak feedback signals from the PP module enhanced the contrast gain extracted in V1, which underlies the attentional modulation of BO signals. Model simulations exhibited time courses that depended on BO ambiguity, caused by the integration delay of V1 and V2 cells and the local inhibition therein, given the difference in input stimulus. However, our model did not fully capture the markedly slow transition observed physiologically: after replacement with the ambiguous edge, BO-selective cells recorded in vivo showed persistent activation several times longer than that of our model cells. Furthermore, the time courses of BO-selective model cells replicated the attentional modulation of response time observed in human psychophysical experiments. 
These attentional modulations of the time courses were induced by selective enhancement of early-level features due to interactions between V1 and PP. Our proposed model suggests fundamental roles for surround suppression/facilitation based on feedforward inputs, as well as for the interactions between early and parietal visual areas, in the ambiguity dependence of neural dynamics in intermediate-level vision. PMID:28163688
Gorochowski, Thomas E; Grierson, Claire S; di Bernardo, Mario
2018-03-01
Network motifs are significantly overrepresented subgraphs that have been proposed as building blocks for natural and engineered networks. Detailed functional analysis has been performed for many types of motif in isolation, but less is known about how motifs work together to perform complex tasks. To address this issue, we measure the aggregation of network motifs via methods that extract precisely how these structures are connected. Applying this approach to a broad spectrum of networked systems and focusing on the widespread feed-forward loop motif, we uncover striking differences in motif organization. The types of connection are often highly constrained, differ between domains, and clearly capture architectural principles. We show how this information can be used to effectively predict functionally important nodes in the metabolic network of Escherichia coli. Our findings have implications for understanding how networked systems are constructed from motif parts and elucidate constraints that guide their evolution.
Parrell, Benjamin; Agnew, Zarinah; Nagarajan, Srikantan; Houde, John; Ivry, Richard B
2017-09-20
The cerebellum has been hypothesized to form a crucial part of the speech motor control network. Evidence for this comes from patients with cerebellar damage, who exhibit a variety of speech deficits, as well as imaging studies showing cerebellar activation during speech production in healthy individuals. To date, the precise role of the cerebellum in speech motor control remains unclear, as it has been implicated in both anticipatory (feedforward) and reactive (feedback) control. Here, we assess both anticipatory and reactive aspects of speech motor control, comparing the performance of patients with cerebellar degeneration and matched controls. Experiment 1 tested feedforward control by examining speech adaptation across trials in response to a consistent perturbation of auditory feedback. Experiment 2 tested feedback control, examining online corrections in response to inconsistent perturbations of auditory feedback. Both male and female patients and controls were tested. The patients were impaired in adapting their feedforward control system relative to controls, exhibiting an attenuated anticipatory response to the perturbation. In contrast, the patients produced even larger compensatory responses than controls, suggesting an increased reliance on sensory feedback to guide speech articulation in this population. Together, these results suggest that the cerebellum is crucial for maintaining accurate feedforward control of speech, but relatively uninvolved in feedback control. SIGNIFICANCE STATEMENT Speech motor control is a complex activity that is thought to rely on both predictive, feedforward control as well as reactive, feedback control. While the cerebellum has been shown to be part of the speech motor control network, its functional contribution to feedback and feedforward control remains controversial. 
Here, we use real-time auditory perturbations of speech to show that patients with cerebellar degeneration are impaired in adapting feedforward control of speech but retain the ability to make online feedback corrections; indeed, the patients show an increased sensitivity to feedback. These results indicate that the cerebellum forms a crucial part of the feedforward control system for speech but is not essential for online, feedback control.
Hamker, Fred H; Wiltschut, Jan
2007-09-01
Most computational models of coding are based on a generative model according to which the feedback signal aims to reconstruct the visual scene as closely as possible. Here we explore an alternative model of feedback, derived from studies of attention and thus probably more flexible with respect to attentive processing in higher brain areas. According to this model, feedback implements a gain increase of the feedforward signal. We use a dynamic model with presynaptic inhibition and Hebbian learning to learn feedforward and feedback weights simultaneously. The weights converge to localized, oriented, bandpass filters similar to those found in V1. Due to presynaptic inhibition, the model predicts the organization of receptive fields within the feedforward pathway, whereas feedback primarily serves to tune early visual processing according to the needs of the task.
Cortical Feedback Regulates Feedforward Retinogeniculate Refinement
Thompson, Andrew D; Picard, Nathalie; Min, Lia; Fagiolini, Michela; Chen, Chinfei
2016-01-01
According to the prevailing view of neural development, sensory pathways develop sequentially in a feedforward manner, whereby each local microcircuit refines and stabilizes before directing the wiring of its downstream target. In the visual system, retinal circuits are thought to mature first and direct refinement in the thalamus, after which cortical circuits refine with experience-dependent plasticity. In contrast, we now show that feedback from cortex to thalamus critically regulates refinement of the retinogeniculate projection during a discrete window in development, beginning at postnatal day 20 in mice. Disrupting cortical activity during this window, pharmacologically or chemogenetically, increases the number of retinal ganglion cells innervating each thalamic relay neuron. These results suggest that primary sensory structures develop through the concurrent and interdependent remodeling of subcortical and cortical circuits in response to sensory experience, rather than through a simple feedforward process. Our findings also highlight an unexpected function for the corticothalamic projection. PMID:27545712
Sedlacek, Miloslav; Brenowitz, Stephan D
2014-01-01
Feed-forward inhibition (FFI) represents a powerful mechanism by which control of the timing and fidelity of action potentials in local synaptic circuits of various brain regions is achieved. In the cochlear nucleus, the auditory nerve provides excitation to both principal neurons and inhibitory interneurons. Here, we investigated the synaptic circuit associated with fusiform cells (FCs), principal neurons of the dorsal cochlear nucleus (DCN) that receive excitation from auditory nerve fibers and inhibition from tuberculoventral cells (TVCs) on their basal dendrites in the deep layer of DCN. Despite the importance of these inputs in regulating fusiform cell firing behavior, the mechanisms determining the balance of excitation and FFI in this circuit are not well understood. Therefore, we examined the timing and plasticity of auditory nerve driven FFI onto FCs. We find that in some FCs, excitatory and inhibitory components of FFI had the same stimulation thresholds indicating they could be triggered by activation of the same fibers. In other FCs, excitation and inhibition exhibit different stimulus thresholds, suggesting FCs and TVCs might be activated by different sets of fibers. In addition, we find that during repetitive activation, synapses formed by the auditory nerve onto TVCs and FCs exhibit distinct modes of short-term plasticity. Feed-forward inhibitory post-synaptic currents (IPSCs) in FCs exhibit short-term depression because of prominent synaptic depression at the auditory nerve-TVC synapse. Depression of this feedforward inhibitory input causes a shift in the balance of fusiform cell synaptic input towards greater excitation and suggests that fusiform cell spike output will be enhanced by physiological patterns of auditory nerve activity.
Central control of cardiorespiratory interactions in fish.
Taylor, Edwin W; Leite, Cleo A C; Levings, Jennifer J
2009-01-01
Fish control the relative flow rates of water and blood over the gills in order to optimise respiratory gas exchange. As both flows are markedly pulsatile, close beat-to-beat relationships can be predicted. Cardiorespiratory interactions in fish are controlled primarily by activity in the parasympathetic nervous system that has its origin in cardiac vagal preganglionic neurons. Recordings of efferent activity in the cardiac vagus include units firing in respiration-related bursts. Bursts of electrical stimuli delivered peripherally to the cardiac vagus or centrally to respiratory branches of cranial nerves can recruit the heart over a range of frequencies. So phasic, efferent activity in the cardiac vagi, which in the intact fish is respiration-related, can cause heart rate to be modulated by the respiratory rhythm. In elasmobranch fishes this phasic activity seems to arise primarily from central feed-forward interactions with respiratory motor neurons that have overlapping distributions with cardiac neurons in the brainstem. In teleost fish, it arises from increased levels of efferent vagal activity generated by reflex stimulation of chemoreceptors and mechanoreceptors in the orobranchial cavity. However, these differences are largely a matter of emphasis, as both groups show elements of feed-forward and feed-back control of cardiorespiratory interactions.
An Artificial Neural Network Controller for Intelligent Transportation Systems Applications
DOT National Transportation Integrated Search
1996-01-01
An Autonomous Intelligent Cruise Control (AICC) has been designed using a feedforward artificial neural network, as an example for utilizing artificial neural networks for nonlinear control problems arising in intelligent transportation systems appli...
Nikolaev, Anton; Zheng, Lei; Wardill, Trevor J; O'Kane, Cahir J; de Polavieja, Gonzalo G; Juusola, Mikko
2009-01-01
Retinal networks must adapt constantly to best present the ever-changing visual world to the brain. Here we test the hypothesis that adaptation is a result of different mechanisms at several synaptic connections within the network. In a companion paper (Part I), we showed that adaptation in the photoreceptors (R1-R6) and large monopolar cells (LMC) of the Drosophila eye improves sensitivity to under-represented signals in seconds by enhancing both the amplitude and frequency distribution of LMCs' voltage responses to repeated naturalistic contrast series. In this paper, we show that such adaptation needs both the light-mediated conductance and feedback-mediated synaptic conductance. A faulty feedforward pathway in histamine receptor mutant flies speeds up the LMC output, mimicking extreme light adaptation. A faulty feedback pathway from L2 LMCs to photoreceptors slows down the LMC output, mimicking dark adaptation. These results underline the importance of network adaptation for efficient coding, and as a mechanism for selectively regulating the size and speed of signals in neurons. We suggest that the concerted action of many different mechanisms and neural connections is responsible for adaptation to visual stimuli. Further, our results demonstrate the need for detailed circuit reconstructions, like that of the Drosophila lamina, to understand how networks process information.
Fuzzy logic and neural networks in artificial intelligence and pattern recognition
NASA Astrophysics Data System (ADS)
Sanchez, Elie
1991-10-01
With the use of fuzzy logic techniques, neural computing can be integrated in symbolic reasoning to solve complex real world problems. In fact, artificial neural networks, expert systems, and fuzzy logic systems, in the context of approximate reasoning, share common features and techniques. A model of Fuzzy Connectionist Expert System is introduced, in which an artificial neural network is designed to construct the knowledge base of an expert system from training examples (this model can also be used for specification of rules in fuzzy logic control). Two types of weights are associated with the synaptic connections in an AND-OR structure: primary linguistic weights, interpreted as labels of fuzzy sets, and secondary numerical weights. Cell activation is computed through min-max fuzzy equations of the weights. Learning consists of finding the (numerical) weights and the network topology. This feedforward network is described and first illustrated in a biomedical application (medical diagnosis assistance from inflammatory-syndromes/proteins profiles). Then, it is shown how this methodology can be utilized for handwritten pattern recognition (characters play the role of diagnoses): in a fuzzy neuron describing a number for example, the linguistic weights represent fuzzy sets on cross-detecting lines and the numerical weights reflect the importance (or weakness) of connections between cross-detecting lines and characters.
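The min-max activation mentioned in this abstract can be sketched in a few lines. This is a hypothetical illustration only: the functions, inputs, and numerical weights below are invented, and the linguistic (fuzzy-set-label) weights of the actual model are omitted.

```python
# Hypothetical fuzzy AND-OR cell: input membership degrees are combined
# with numerical weights through min (AND) and max (OR), i.e. min-max
# fuzzy equations. All names and values here are invented for illustration.

def fuzzy_and(memberships, weights):
    # AND node: each input is capped by its weight (min), and all
    # contributions must hold jointly, hence the overall min.
    return min(min(m, w) for m, w in zip(memberships, weights))

def fuzzy_or(memberships, weights):
    # OR node: any sufficiently supported input can activate the cell (max).
    return max(min(m, w) for m, w in zip(memberships, weights))

# Two AND sub-units feeding one OR output unit.
x = [0.9, 0.4, 0.7]                      # fuzzy membership degrees of 3 features
and1 = fuzzy_and(x, [1.0, 0.8, 0.9])
and2 = fuzzy_and(x, [0.6, 1.0, 0.3])
y = fuzzy_or([and1, and2], [1.0, 0.7])   # cell activation
```

Because the combination uses only min and max, the output stays in [0, 1] whenever the inputs and weights do, which is what makes it interpretable as a membership degree.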
GABA(B) receptor modulation of feedforward inhibition through hippocampal neurogliaform cells.
Price, Christopher J; Scott, Ricardo; Rusakov, Dmitri A; Capogna, Marco
2008-07-02
Feedforward inhibition of neurons is a fundamental component of information flow control in the brain. We studied the roles played by neurogliaform cells (NGFCs) of stratum lacunosum moleculare of the hippocampus in providing feedforward inhibition to CA1 pyramidal cells. We recorded from synaptically coupled pairs of anatomically identified NGFCs and CA1 pyramidal cells and found that, strikingly, a single presynaptic action potential evoked a biphasic unitary IPSC (uIPSC), consisting of two distinct components mediated by GABA(A) and GABA(B) receptors. A GABA(B) receptor-mediated unitary response has not previously been observed in hippocampal excitatory neurons. The decay of the GABA(A) receptor-mediated response was slow (time constant = 50 ms), and was tightly regulated by presynaptic GABA(B) receptors. Surprisingly, the GABA(B) receptor ligands baclofen and (2S)-3-{[(1S)-1-(3,4-dichlorophenyl)ethyl]amino-2-hydroxypropyl}(phenylmethyl)phosphinic acid (CGP55845), while affecting the NGFC-mediated uIPSCs, had no effect on action potential-evoked presynaptic Ca2+ signals monitored in individual axonal boutons of NGFCs with two-photon microscopy. In contrast, baclofen clearly depressed presynaptic Ca2+ transients in non-NGF interneurons. Changes in extracellular Ca2+ concentration that mimicked the effects of baclofen or CGP55845 on uIPSCs significantly altered presynaptic Ca2+ transients. Electrophysiological data suggest that GABA(B) receptors expressed by NGFCs contribute to the dynamic control of the excitatory input to CA1 pyramidal neurons from the temporoammonic path. The NGFC-CA1 pyramidal cell connection therefore provides a unique and subtle mechanism to shape the integration time domain for signals arriving via a major excitatory input to CA1 pyramidal cells.
Distributed multisensory integration in a recurrent network model through supervised learning
NASA Astrophysics Data System (ADS)
Wang, He; Wong, K. Y. Michael
Sensory integration between different modalities has been extensively studied. It has been suggested that the brain integrates signals from different modalities in a Bayesian optimal way. However, how the Bayesian rule is implemented in a neural network remains under debate. In this work we propose a biologically plausible recurrent network model which can perform Bayesian multisensory integration after being trained by supervised learning. Our model is composed of two modules, one for each modality. We assume that each module is a recurrent network whose activity represents the posterior distribution of each stimulus. The feedforward input to each module is the likelihood of each modality. The two modules are integrated through cross-links, which are feedforward connections from the other modality, and reciprocal connections, which are recurrent connections between the modules. By stochastic gradient descent, we successfully trained the feedforward and recurrent coupling matrices simultaneously, both of which resemble a Mexican hat. We also find that more than one set of coupling matrices can approximate the Bayesian rule well. Specifically, reciprocal connections and cross-links compensate for each other if one of them is removed. Even though it was trained with two inputs, the network's performance with only one input is in good accordance with what is predicted by the Bayesian rule.
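The Bayesian benchmark such a network is trained to approximate is precision-weighted cue combination. A minimal sketch, with invented stimulus values (the abstract does not specify the modalities' parameters):

```python
# Bayesian optimal integration of two independent Gaussian cues: the
# combined estimate weights each cue by its precision (inverse variance),
# and the combined variance is smaller than either cue's alone.

def integrate(mu1, var1, mu2, var2):
    w1, w2 = 1.0 / var1, 1.0 / var2           # precisions
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)    # precision-weighted mean
    var = 1.0 / (w1 + w2)                     # combined (reduced) variance
    return mu, var

# Invented example: an unreliable cue at 10 and a reliable cue at 14.
mu, var = integrate(10.0, 4.0, 14.0, 1.0)
```

The combined mean lands closer to the more reliable cue, which is the behavioral signature Bayesian-integration studies test for.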
Neural net diagnostics for VLSI test
NASA Technical Reports Server (NTRS)
Lin, T.; Tseng, H.; Wu, A.; Dogan, N.; Meador, J.
1990-01-01
This paper discusses the application of neural network pattern analysis algorithms to the IC fault diagnosis problem. A fault diagnostic is a decision rule combining what is known about an ideal circuit test response with information about how it is distorted by fabrication variations and measurement noise. The rule is used to detect fault existence in fabricated circuits using real test equipment. Traditional statistical techniques may be used to achieve this goal, but they can employ unrealistic a priori assumptions about measurement data. Our approach to this problem employs an adaptive pattern analysis technique based on feedforward neural networks. During training, a feedforward network automatically captures unknown sample distributions. This is important because distributions arising from the nonlinear effects of process variation can be more complex than is typically assumed. A feedforward network is also able to extract measurement features which contribute significantly to making a correct decision. Traditional feature extraction techniques employ matrix manipulations which can be particularly costly for large measurement vectors. In this paper we discuss a software system which we are developing that uses this approach. We also provide a simple example illustrating the use of the technique for fault detection in an operational amplifier.
Li, Cheng-Wei; Chen, Bor-Sen
2010-01-01
Cellular responses to sudden environmental stresses or physiological changes provide living organisms with the opportunity for survival and further development. Understanding protective mechanisms against environmental stresses from the viewpoint of gene and protein networks is therefore an important topic. We propose two coupled nonlinear stochastic dynamic models to reconstruct stress-activated gene and protein regulatory networks from microarray data in response to environmental stresses. In the reconstructed gene/protein networks, we find possible mutual interactions and feedforward and feedback loops that accelerate responses and filter noise in these signaling pathways. A bow-tie core network is also identified that coordinates mutual interactions, feedforward loops, feedback inhibitions, feedback activations, and cross talks to cope efficiently with a broad range of environmental stresses using limited proteins and pathways. PMID:20454442
Bichler, Olivier; Querlioz, Damien; Thorpe, Simon J; Bourgoin, Jean-Philippe; Gamrat, Christian
2012-08-01
A biologically inspired approach to learning temporally correlated patterns from a spiking silicon retina is presented. Spikes are generated from the retina in response to relative changes in illumination at the pixel level and transmitted to a feed-forward spiking neural network. Neurons become sensitive to patterns of pixels with correlated activation times, in a fully unsupervised scheme. This is achieved using a special form of Spike-Timing-Dependent Plasticity which depresses synapses that did not recently contribute to the post-synaptic spike activation, regardless of their activation time. Competitive learning is implemented with lateral inhibition. When tested with real-life data, the system is able to extract complex and overlapping temporally correlated features such as car trajectories on a freeway, after only 10 min of traffic learning. Complete trajectories can be learned with a 98% detection rate using a second layer, still with unsupervised learning, and the system may be used as a car counter. The proposed neural network is extremely robust to noise and it can tolerate a high degree of synaptic and neuronal variability with little impact on performance. Such results show that a simple biologically inspired unsupervised learning scheme is capable of generating selectivity to complex meaningful events on the basis of relatively little sensory experience.
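The STDP variant described here, depression for any synapse that did not recently contribute, can be sketched as follows. This is a hedged simplification, and the window and step constants are assumptions, not values from the paper:

```python
# Simplified rendering of the described STDP rule: at each postsynaptic
# spike, synapses whose presynaptic spike fell inside a recent window are
# potentiated; all others are depressed by a fixed amount regardless of
# when they last fired. Constants below are invented for illustration.

LTP_WINDOW = 2.0   # ms; "recently contributed" window (assumed value)
DW_PLUS = 0.05     # potentiation step (assumed value)
DW_MINUS = 0.02    # timing-independent depression step (assumed value)

def update_weights(weights, last_pre_spike, t_post):
    new = []
    for w, t_pre in zip(weights, last_pre_spike):
        if t_pre is not None and 0.0 <= t_post - t_pre <= LTP_WINDOW:
            w = min(1.0, w + DW_PLUS)    # contributed to this spike
        else:
            w = max(0.0, w - DW_MINUS)   # did not recently contribute
        new.append(w)
    return new

# Postsynaptic spike at t = 10 ms; only the first synapse fired recently.
w = update_weights([0.5, 0.5, 0.5], [9.0, 4.0, None], t_post=10.0)
```

The key property is that depression does not depend on the exact presynaptic spike time, which is what makes uncorrelated inputs fade away during unsupervised exposure.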
Prediction of pelvic organ prolapse using an artificial neural network.
Robinson, Christopher J; Swift, Steven; Johnson, Donna D; Almeida, Jonas S
2008-08-01
The objective of this investigation was to test the ability of a feedforward artificial neural network (ANN) to differentiate patients who have pelvic organ prolapse (POP) from those who retain good pelvic organ support. Following institutional review board approval, patients with POP (n = 87) and controls with good pelvic organ support (n = 368) were identified from the urogynecology research database. Historical and clinical information was extracted from the database. Data analysis included the training of a feedforward ANN, variable selection, and external validation of the model with an independent data set. Twenty variables were used. The median-performing ANN model used a median of 3 variables (interquartile range, 3 to 5) and achieved an area under the receiver operating characteristic curve of 0.90 on the external, independent validation set. Ninety percent sensitivity and 83% specificity were obtained in the external validation by ANN classification. Feedforward ANN modeling is applicable to the identification and prediction of POP.
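The general shape of such a model, a small feedforward network mapping a few clinical variables to a binary outcome, can be sketched as below. The data, architecture, and training settings here are synthetic stand-ins, not those of the study:

```python
import numpy as np

# Illustrative one-hidden-layer feedforward network for a binary outcome
# (a few predictors -> prolapse vs. good support). Data are synthetic;
# the study's actual variables and model are not reproduced here.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                    # 3 synthetic predictors
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # synthetic outcome rule

W1 = rng.normal(scale=0.5, size=(3, 5)); b1 = np.zeros(5)
W2 = rng.normal(scale=0.5, size=(5, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):                  # plain batch gradient descent
    h = np.tanh(X @ W1 + b1)           # hidden layer
    p = sigmoid(h @ W2 + b2).ravel()   # predicted probability
    err = (p - y)[:, None]             # gradient of log-loss w.r.t. logit
    dW2 = h.T @ err / len(X); db2 = err.mean(0)
    dh = err @ W2.T * (1 - h ** 2)     # backpropagate through tanh
    dW1 = X.T @ dh / len(X); db1 = dh.mean(0)
    W1 -= 1.0 * dW1; b1 -= 1.0 * db1
    W2 -= 1.0 * dW2; b2 -= 1.0 * db2

p = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2).ravel()
acc = ((p > 0.5) == (y > 0.5)).mean()  # training accuracy on synthetic data
```

A real study would, as above, hold out an independent validation set and report sensitivity, specificity, and the area under the ROC curve rather than training accuracy.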
Efficiently modeling neural networks on massively parallel computers
NASA Technical Reports Server (NTRS)
Farber, Robert M.
1993-01-01
Neural networks are a very useful tool for analyzing and modeling complex real world systems. Applying neural network simulations to real world problems generally involves large amounts of data and massive amounts of computation. To efficiently handle the computational requirements of large problems, we have implemented at Los Alamos a highly efficient neural network compiler for serial computers, vector computers, vector parallel computers, and fine grain SIMD computers such as the CM-2 connection machine. This paper describes the mapping used by the compiler to implement feed-forward backpropagation neural networks for a SIMD (Single Instruction Multiple Data) architecture parallel computer. Thinking Machines Corporation has benchmarked our code at 1.3 billion interconnects per second (approximately 3 gigaflops) on a 64,000 processor CM-2 connection machine (Singer 1990). This mapping is applicable to other SIMD computers and can be implemented on MIMD computers such as the CM-5 connection machine. Our mapping has virtually no communications overhead, with the exception of the communication required for a global summation across the processors (which has sub-linear runtime growth on the order of O(log(number of processors))). We can efficiently model very large neural networks with many neurons and interconnects, and our mapping can extend to arbitrarily large networks (within memory limitations) by merging the memory space of separate processors with fast adjacent-processor communications. This paper considers the simulation of feedforward neural networks only, although the method is extendable to recurrent networks.
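The data-parallel idea behind such a mapping, local computation everywhere, communication only in one global summation, can be sketched with simulated processors. This toy uses a one-layer linear network, not the paper's backpropagation compiler, and all sizes are invented:

```python
import numpy as np

# Toy sketch of the data-parallel mapping: each simulated "processor"
# holds a slice of the training set and computes a local gradient; the
# only communication is the global summation of local gradients (the
# O(log P) reduction discussed above). Sizes and data are invented.

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 4))
y = X @ np.array([1.0, -2.0, 0.5, 0.0])       # linear "teacher" targets
w = np.zeros(4)
P = 8                                         # simulated processors

for _ in range(200):
    local = [Xp.T @ (Xp @ w - yp)             # per-processor gradient, no comms
             for Xp, yp in zip(np.array_split(X, P), np.array_split(y, P))]
    grad = np.sum(local, axis=0) / len(X)     # the single global-summation step
    w -= 0.1 * grad

# w converges toward the teacher weights [1, -2, 0.5, 0]
```

Because each slice's gradient is computed independently, the same loop body runs on every processor in lockstep, which is what makes the scheme fit a SIMD machine.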
Augustinaite, Sigita; Heggelund, Paul
2018-05-24
Synaptic short-term plasticity (STP) regulates synaptic transmission in an activity-dependent manner and thereby has important roles in signal processing in the brain. In some synapses, a presynaptic train of action potentials elicits post-synaptic potentials that gradually increase during the train (facilitation), but in other synapses, these potentials gradually decrease (depression). We studied STP in neurons in the visual thalamic relay, the dorsal lateral geniculate nucleus (dLGN). The dLGN contains two types of neurons: excitatory thalamocortical (TC) neurons, which transfer signals from retinal afferents to visual cortex, and local inhibitory interneurons, which form an inhibitory feedforward loop that regulates the thalamocortical signal transmission. The overall STP in the retino-thalamic relay is short-term depression, but the distinct kind and characteristics of the plasticity at the different types of synapses are unknown. We studied STP in the excitatory responses of interneurons to stimulation of retinal afferents, in the inhibitory responses of TC neurons to stimulation of afferents from interneurons, and in the disynaptic inhibitory responses of TC neurons to stimulation of retinal afferents. Moreover, we studied STP at the direct excitatory input to TC neurons from retinal afferents. The STP at all types of synapses showed short-term depression. This depression can accentuate rapid changes in the stream of signals and thereby promote detectability of significant features in the sensory input. In vision, detection of edges and contours is essential for object perception, and the synaptic short-term depression in the early visual pathway provides important contributions to this detection process.
Blaesse, Peter; Goedecke, Lena; Bazelot, Michaël; Capogna, Marco; Pape, Hans-Christian; Jüngling, Kay
2015-05-13
The amygdala is a key region for the processing of information underlying fear, anxiety, and fear extinction. Within the local neuronal networks of the amygdala, a population of inhibitory, intercalated neurons (ITCs) modulates the flow of information among various nuclei of the amygdala, including the basal nucleus (BA) and the centromedial nucleus (CeM) of the amygdala. These ITCs have been shown to be important during fear extinction and are a target of a variety of neurotransmitters and neuropeptides. Here we provide evidence that the activation of μ-opioid receptors (MORs) by the specific agonist DAMGO ([D-Ala2,N-Me-Phe4,Gly5-ol]-enkephalin) hyperpolarizes medially located ITCs (mITCs) in acute brain slices of mice. Moreover, we use whole-cell patch-clamp recordings in combination with local electrical stimulation or glutamate uncaging to analyze the effect of MOR activation on local microcircuits. We show that the GABAergic transmission between mITCs and CeM neurons is attenuated by DAMGO, whereas the glutamatergic transmission onto CeM neurons and mITCs is unaffected. Furthermore, MOR activation induced by theta burst stimulation in BA suppresses plastic changes of feedforward inhibitory transmission onto CeM neurons, as revealed by the MOR antagonist CTAP (d-Phe-Cys-Tyr-d-Trp-Arg-Thr-Pen-Thr-NH2). In summary, the mITCs constitute a target for the opioid system, and therefore the activation of MORs in ITCs might play a central role in the modulation of information processing between the basolateral complex of the amygdala and the central nuclei of the amygdala.
Tensor Spectral Clustering for Partitioning Higher-order Network Structures.
Benson, Austin R; Gleich, David F; Leskovec, Jure
2015-01-01
Spectral graph theory-based methods represent an important class of tools for studying the structure of networks. Spectral methods are based on a first-order Markov chain derived from a random walk on the graph and thus they cannot take advantage of important higher-order network substructures such as triangles, cycles, and feed-forward loops. Here we propose a Tensor Spectral Clustering (TSC) algorithm that allows for modeling higher-order network structures in a graph partitioning framework. Our TSC algorithm allows the user to specify which higher-order network structures (cycles, feed-forward loops, etc.) should be preserved by the network clustering. Higher-order network structures of interest are represented using a tensor, which we then partition by developing a multilinear spectral method. Our framework can be applied to discovering layered flows in networks as well as graph anomaly detection, which we illustrate on synthetic networks. In directed networks, a higher-order structure of particular interest is the directed 3-cycle, which captures feedback loops in networks. We demonstrate that our TSC algorithm produces large partitions that cut fewer directed 3-cycles than standard spectral clustering algorithms.
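The two 3-node structures this clustering is asked to preserve are easy to confuse, so a tiny sketch distinguishing them may help. The graph below is invented for illustration, and this is not the TSC algorithm itself, just the structure-counting it builds on:

```python
# A feed-forward loop (a->b, a->c, b->c) versus a directed 3-cycle
# (x->y, y->z, z->x): the two higher-order structures discussed above.
from itertools import permutations

edges = {("a", "b"), ("a", "c"), ("b", "c"),   # feed-forward loop
         ("x", "y"), ("y", "z"), ("z", "x")}   # directed 3-cycle

def nodes(E):
    return {v for e in E for v in e}

def count_ffl(E):
    # Ordered triple (a, b, c) with a->b, b->c and the shortcut a->c.
    return sum((a, b) in E and (b, c) in E and (a, c) in E
               for a, b, c in permutations(nodes(E), 3))

def count_3cycles(E):
    n = sum((a, b) in E and (b, c) in E and (c, a) in E
            for a, b, c in permutations(nodes(E), 3))
    return n // 3   # each cycle is found once per rotation, so divide by 3

ffl, cyc = count_ffl(edges), count_3cycles(edges)
```

The feed-forward loop has a strict upstream/downstream ordering (layered flow), while the 3-cycle has none (feedback); this asymmetry is exactly what a first-order random walk cannot see and what the tensor representation recovers.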
Modeling gene regulatory network motifs using statecharts
2012-01-01
Background: Gene regulatory networks are widely used by biologists to describe the interactions among genes, proteins and other components at the intra-cellular level. Recently, a great effort has been devoted to giving gene regulatory networks a formal semantics based on existing computational frameworks. For this purpose, we consider Statecharts, which are a modular, hierarchical and executable formal model widely used to represent software systems. We use Statecharts for modeling small and recurring patterns of interactions in gene regulatory networks, called motifs. Results: We present an improved method for modeling gene regulatory network motifs using Statecharts, and we describe the successful modeling of several motifs, including those which could not be modeled, or whose models could not be distinguished, using the method of a previous proposal. We model motifs in an easy and intuitive way by taking advantage of the visual features of Statecharts. Our modeling approach is able to simulate some interesting temporal properties of gene regulatory network motifs: the delay in the activation and the deactivation of the "output" gene in the coherent type-1 feedforward loop, the pulse in the incoherent type-1 feedforward loop, the bistable nature of double-positive and double-negative feedback loops, the oscillatory behavior of the negative feedback loop, and the "lock-in" effect of positive autoregulation. Conclusions: We present a Statecharts-based approach for the modeling of gene regulatory network motifs in biological systems. The basic motifs used to build more complex networks (that is, simple regulation, reciprocal regulation, feedback loop, feedforward loop, and autoregulation) can be faithfully described and their temporal dynamics can be analyzed. PMID:22536967
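The activation delay of the coherent type-1 feedforward loop mentioned above can be reproduced with a toy discrete-time boolean simulation (a hypothetical sketch, unrelated to the paper's Statecharts models): the output Z requires X AND Y, and Y switches on only after X has been active for a few steps.

```python
# Discrete-time boolean sketch of a coherent type-1 feedforward loop:
# X activates Y with a synthesis delay, and Z requires X AND Y.

def simulate_c1_ffl(x_signal, y_delay=3):
    y, timer = 0, 0
    trace = []
    for x in x_signal:
        if x:
            timer += 1
            if timer >= y_delay:    # Y accumulates before switching on
                y = 1
        else:
            timer, y = 0, 0         # Y decays quickly once X is off
        trace.append(1 if (x and y) else 0)  # AND-gate at the Z promoter
    return trace

# X on for 8 steps, then off: Z turns on late but turns off immediately
x_signal = [1] * 8 + [0] * 4
trace = simulate_c1_ffl(x_signal)
print(trace)
```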
Development of programmable artificial neural networks
NASA Technical Reports Server (NTRS)
Meade, Andrew J.
1993-01-01
Conventionally programmed digital computers can process numbers with great speed and precision, but do not easily recognize patterns or imprecise or contradictory data. Instead of being programmed in the conventional sense, artificial neural networks are capable of self-learning through exposure to repeated examples. However, the training of an ANN can be a time consuming and unpredictable process. A general method is being developed to mate the adaptability of the ANN with the speed and precision of the digital computer. This method was successful in building feedforward networks that can approximate functions and their partial derivatives from examples in a single iteration. The general method also allows the formation of feedforward networks that can approximate the solution to nonlinear ordinary and partial differential equations to desired accuracy without the need of examples. It is believed that continued research will produce artificial neural networks that can be used with confidence in practical scientific computing and engineering applications.
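The single-iteration training claim can be illustrated with a sketch in which the hidden layer is fixed and random and only the linear output layer is learned, so fitting a feedforward approximator reduces to one least-squares solve. This construction is an assumption for illustration, not the method of the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random hidden layer; only the output layer is solved for, so
# "training" is a single linear least-squares step rather than an
# iterative descent (hypothetical stand-in for the paper's method).
n_hidden = 100
W = rng.normal(size=(1, n_hidden))
b = rng.normal(size=n_hidden)

def hidden(x):
    return np.tanh(x[:, None] * W + b)

x_train = np.linspace(-1.0, 1.0, 200)
y_train = np.sin(np.pi * x_train)

coef, *_ = np.linalg.lstsq(hidden(x_train), y_train, rcond=None)
max_err = np.max(np.abs(hidden(x_train) @ coef - y_train))
print(f"max abs error: {max_err:.2e}")
```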
Axonal synapse sorting in medial entorhinal cortex
NASA Astrophysics Data System (ADS)
Schmidt, Helene; Gour, Anjali; Straehle, Jakob; Boergens, Kevin M.; Brecht, Michael; Helmstaedter, Moritz
2017-09-01
Research on neuronal connectivity in the cerebral cortex has focused on the existence and strength of synapses between neurons, and their location on the cell bodies and dendrites of postsynaptic neurons. The synaptic architecture of individual presynaptic axonal trees, however, remains largely unknown. Here we used dense reconstructions from three-dimensional electron microscopy in rats to study the synaptic organization of local presynaptic axons in layer 2 of the medial entorhinal cortex, the site of grid-like spatial representations. We observe path-length-dependent axonal synapse sorting, such that axons of excitatory neurons sequentially target inhibitory neurons followed by excitatory neurons. Connectivity analysis revealed a cellular feedforward inhibition circuit involving wide, myelinated inhibitory axons and dendritic synapse clustering. Simulations show that this high-precision circuit can control the propagation of synchronized activity in the medial entorhinal cortex, which is known for temporally precise discharges.
A neural network device for on-line particle identification in cosmic ray experiments
NASA Astrophysics Data System (ADS)
Scrimaglio, R.; Finetti, N.; D'Altorio, L.; Rantucci, E.; Raso, M.; Segreto, E.; Tassoni, A.; Cardarilli, G. C.
2004-05-01
On-line particle identification is one of the main goals of many experiments in space, both for rare event studies and for optimizing measurements along the orbital trajectory. Neural networks can be a useful tool for signal processing and real-time data analysis in such experiments. Here we report on the performance of a programmable neural device which was developed in VLSI analog/digital technology. Neurons and synapses were implemented using Operational Transconductance Amplifier (OTA) structures. We report the results of measurements performed to verify the agreement of the characteristic curves of each elementary cell with simulations, and the device performance obtained by implementing simple neural structures on the VLSI chip. A feed-forward neural network (Multi-Layer Perceptron, MLP) was implemented on the VLSI chip and trained to identify particles by processing the signals of two-dimensional position-sensitive Si detectors. The radiation monitoring device consisted of three double-sided silicon strip detectors. Analysis of a set of simulated data found that the MLP implemented on the neural device gave results comparable with those obtained with the standard method of analysis, confirming that the implemented neural network could be employed for real-time particle identification.
Zhang, Huisheng; Zhang, Ying; Xu, Dongpo; Liu, Xiaodong
2015-06-01
It has been shown that, by adding a chaotic sequence to the weight update during the training of neural networks, the chaos injection-based gradient method (CIBGM) is superior to the standard backpropagation algorithm. This paper presents a theoretical convergence analysis of CIBGM for training feedforward neural networks, covering both batch learning and online learning. Under mild conditions, we prove weak convergence, i.e., the training error tends to a constant and the gradient of the error function tends to zero. Moreover, the strong convergence of CIBGM is also obtained with the help of an extra condition. The theoretical results are substantiated by a simulation example.
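A minimal sketch of chaos injection, assuming a logistic-map sequence and a linear model in place of the paper's feedforward networks: a decaying chaotic perturbation is added to each gradient step, and the training error still converges.

```python
import numpy as np

rng = np.random.default_rng(1)

# Gradient descent with a decaying chaotic perturbation (logistic map)
# added to each update -- an illustration of the chaos-injection idea,
# not the paper's exact CIBGM formulation.
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

w = np.zeros(3)
z = 0.37                          # logistic-map state in (0, 1)
lr, beta = 0.05, 0.1
errors = []
for t in range(200):
    grad = X.T @ (X @ w - y) / len(X)
    z = 4.0 * z * (1.0 - z)       # chaotic sequence
    w = w - lr * grad + beta / (t + 1) * (z - 0.5)
    errors.append(float(np.mean((X @ w - y) ** 2)))

print(errors[0], errors[-1])
```

Because the injected term decays like 1/t, it perturbs early updates (helping escape poor regions) without preventing the error from settling.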
Design of Neural Networks for Fast Convergence and Accuracy
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Sparks, Dean W., Jr.
1998-01-01
A novel procedure has been developed for the design and training of artificial neural networks used for rapid and efficient controls and dynamics design and analysis of flexible space systems. Artificial neural networks are employed to provide a means of evaluating the impact of design changes rapidly. Specifically, two-layer feedforward neural networks are designed to approximate the functional relationship between the component spacecraft design changes and measures of its performance. A training algorithm, based on statistical sampling theory, is presented, which guarantees that the trained networks provide a designer-specified degree of accuracy in mapping the functional relationship. Within each iteration of this statistical-based algorithm, a sequential design algorithm is used for the design and training of the feedforward network to provide rapid convergence to the network goals. Here, at each step a new network is trained to minimize the error of the previous network. The design algorithm attempts to avoid the local-minima phenomenon that hampers traditional network training. A numerical example on a spacecraft application demonstrates the feasibility of the proposed approach.
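The "each network minimizes the error of the previous network" step can be sketched as residual fitting with small random-feature nets, an assumed stand-in for the sequential design algorithm (whose details the abstract does not give):

```python
import numpy as np

rng = np.random.default_rng(2)

x = np.linspace(-1.0, 1.0, 150)
target = np.sin(2.0 * np.pi * x)

# Each stage fits a small random-feature net to the residual left by the
# sum of its predecessors, so every new network is trained on the
# previous networks' error (hedged sketch of the sequential idea).
prediction = np.zeros_like(x)
stage_rmse = []
for stage in range(5):
    residual = target - prediction
    W = 3.0 * rng.normal(size=(1, 10))       # 10 fixed random tanh features
    b = rng.normal(size=10)
    H = np.tanh(x[:, None] * W + b)
    coef, *_ = np.linalg.lstsq(H, residual, rcond=None)
    prediction = prediction + H @ coef
    stage_rmse.append(float(np.sqrt(np.mean((target - prediction) ** 2))))

print(stage_rmse)   # non-increasing: each stage can only reduce the error
```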
Anatomy of hierarchy: Feedforward and feedback pathways in macaque visual cortex
Markov, Nikola T; Vezoli, Julien; Chameau, Pascal; Falchier, Arnaud; Quilodran, René; Huissoud, Cyril; Lamy, Camille; Misery, Pierre; Giroud, Pascale; Ullman, Shimon; Barone, Pascal; Dehay, Colette; Knoblauch, Kenneth; Kennedy, Henry
2013-01-01
The laminar location of the cell bodies and terminals of interareal connections determines the hierarchical structural organization of the cortex and has been intensively studied. However, we still have only a rudimentary understanding of the connectional principles of feedforward (FF) and feedback (FB) pathways. Quantitative analysis of retrograde tracers was used to extend the notion that the laminar distribution of neurons interconnecting visual areas provides an index of hierarchical distance (percentage of supragranular labeled neurons [SLN]). We show that: 1) SLN values constrain models of cortical hierarchy, revealing previously unsuspected areal relations; 2) SLN reflects the operation of a combinatorial distance rule acting differentially on sets of connections between areas; 3) Supragranular layers contain highly segregated bottom-up and top-down streams, both of which exhibit point-to-point connectivity. This contrasts with the infragranular layers, which contain diffuse bottom-up and top-down streams; 4) Cell filling of the parent neurons of FF and FB pathways provides further evidence of compartmentalization; 5) FF pathways have higher weights, cross fewer hierarchical levels, and are less numerous than FB pathways. Taken together, the present results suggest that cortical hierarchies are built from supra- and infragranular counterstreams. This compartmentalized dual counterstream organization allows point-to-point connectivity in both bottom-up and top-down directions. PMID:23983048
Perceptual Learning via Modification of Cortical Top-Down Signals
Schäfer, Roland; Vasilaki, Eleni; Senn, Walter
2007-01-01
The primary visual cortex (V1) is pre-wired to facilitate the extraction of behaviorally important visual features. Collinear edge detectors in V1, for instance, mutually enhance each other to improve the perception of lines against a noisy background. The same pre-wiring that facilitates line extraction, however, is detrimental when subjects have to discriminate the brightness of different line segments. How is it possible to improve in one task by unsupervised practicing, without getting worse in the other task? The classical view of perceptual learning is that practicing modulates the feedforward input stream through synaptic modifications onto or within V1. However, any rewiring of V1 would deteriorate other perceptual abilities different from the trained one. We propose a general neuronal model showing that perceptual learning can modulate top-down input to V1 in a task-specific way while feedforward and lateral pathways remain intact. Consistent with biological data, the model explains how context-dependent brightness discrimination is improved by a top-down recruitment of recurrent inhibition and a top-down induced increase of the neuronal gain within V1. Both the top-down modulation of inhibition and of neuronal gain are suggested to be universal features of cortical microcircuits which enable perceptual learning. PMID:17715996
Feedforward inhibitory control of sensory information in higher-order thalamic nuclei.
Lavallée, Philippe; Urbain, Nadia; Dufresne, Caroline; Bokor, Hajnalka; Acsády, László; Deschênes, Martin
2005-08-17
Sensory stimuli evoke strong responses in thalamic relay cells, which ensure a faithful relay of information to the neocortex. However, relay cells of the posterior thalamic nuclear group in rodents, despite receiving significant trigeminal input, respond poorly to vibrissa deflection. Here we show that sensory transmission in this nucleus is impeded by fast feedforward inhibition mediated by GABAergic neurons of the zona incerta. Intracellular recordings of posterior group neurons revealed that the first synaptic event after whisker deflection is a prominent inhibition. Whisker-evoked EPSPs with fast rise time and longer onset latency are unveiled only after lesioning the zona incerta. Excitation survives barrel cortex lesion, demonstrating its peripheral origin. Electron microscopic data confirm that trigeminal axons make large synaptic terminals on the proximal dendrites of posterior group cells and on the somata of incertal neurons. Thus, the connectivity of the system allows an unusual situation in which inhibition precedes ascending excitation resulting in efficient shunting of the responses. The dominance of inhibition over excitation strongly suggests that the paralemniscal pathway is not designed to relay inputs triggered by passive whisker deflection. Instead, we propose that this pathway operates through disinhibition, and that the posterior group forwards to the cerebral cortex sensory information that is contingent on motor instructions.
Lee, Hyung-Chul; Ryu, Ho-Geol; Chung, Eun-Jin; Jung, Chul-Woo
2018-03-01
The discrepancy between predicted effect-site concentration and measured bispectral index is problematic during intravenous anesthesia with target-controlled infusion of propofol and remifentanil. We hypothesized that bispectral index during total intravenous anesthesia would be more accurately predicted by a deep learning approach. A long short-term memory (LSTM) network and a feed-forward neural network were sequenced to simulate the pharmacokinetic and pharmacodynamic parts of an empirical model, respectively, to predict intraoperative bispectral index during combined use of propofol and remifentanil. Inputs of the LSTM were infusion histories of propofol and remifentanil, which were retrieved from target-controlled infusion pumps for 1,800 s at 10-s intervals. Inputs of the feed-forward network were the outputs of the LSTM and demographic data such as age, sex, weight, and height. The final output of the feed-forward network was the bispectral index. The performance of bispectral index prediction was compared between the deep learning model and a previously reported response surface model. The model hyperparameters comprised 8 memory cells in the LSTM layer and 16 nodes in the hidden layer of the feed-forward network. Model training and testing were performed with separate data sets of 131 and 100 cases. The concordance correlation coefficient (95% CI) was 0.561 (0.560 to 0.562) in the deep learning model, significantly larger than that in the response surface model (0.265 [0.263 to 0.266], P < 0.001). The deep learning model predicted bispectral index during target-controlled infusion of propofol and remifentanil more accurately than the traditional model. The deep learning approach in anesthetic pharmacology seems promising because of its excellent performance and extensibility.
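The described two-stage architecture can be sketched at the shape level with untrained random weights, using the abstract's dimensions (two infusion inputs, 8 LSTM memory cells, 16 hidden feed-forward nodes, four demographic features). The weight scales and normalized inputs are assumptions; this shows the data flow, not a fitted model.

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_in, n_cells, n_demo, n_hidden = 2, 8, 4, 16

# LSTM parameters for the four gates (input, forget, output, candidate)
Wx = rng.normal(scale=0.1, size=(4 * n_cells, n_in))
Wh = rng.normal(scale=0.1, size=(4 * n_cells, n_cells))
b = np.zeros(4 * n_cells)

def lstm_last_state(seq):
    h = np.zeros(n_cells)
    c = np.zeros(n_cells)
    for x_t in seq:
        gates = Wx @ x_t + Wh @ h + b
        i, f, o, g = np.split(gates, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
    return h

# Feed-forward head: LSTM state concatenated with demographics
W1 = rng.normal(scale=0.1, size=(n_hidden, n_cells + n_demo))
w2 = rng.normal(scale=0.1, size=n_hidden)

def predict_bis(infusions, demographics):
    h = lstm_last_state(infusions)
    z = np.tanh(W1 @ np.concatenate([h, demographics]))
    return 100.0 * sigmoid(w2 @ z)   # BIS is reported on a 0-100 scale

# 180 steps = 1,800 s at 10-s intervals of (propofol, remifentanil) rates
infusions = rng.uniform(0, 1, size=(180, 2))
demographics = np.array([0.5, 1.0, 0.6, 0.7])   # normalized, hypothetical
bis = predict_bis(infusions, demographics)
print(bis)
```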
Kaplan, Bernhard A; Lansner, Anders
2014-01-01
Olfactory sensory information passes through several processing stages before an odor percept emerges. The question how the olfactory system learns to create odor representations linking those different levels and how it learns to connect and discriminate between them is largely unresolved. We present a large-scale network model with single and multi-compartmental Hodgkin-Huxley type model neurons representing olfactory receptor neurons (ORNs) in the epithelium, periglomerular cells, mitral/tufted cells and granule cells in the olfactory bulb (OB), and three types of cortical cells in the piriform cortex (PC). Odor patterns are calculated based on affinities between ORNs and odor stimuli derived from physico-chemical descriptors of behaviorally relevant real-world odorants. The properties of ORNs were tuned to show saturated response curves with increasing concentration as seen in experiments. On the level of the OB we explored the possibility of using a fuzzy concentration interval code, which was implemented through dendro-dendritic inhibition leading to winner-take-all like dynamics between mitral/tufted cells belonging to the same glomerulus. The connectivity from mitral/tufted cells to PC neurons was self-organized from a mutual information measure and by using a competitive Hebbian-Bayesian learning algorithm based on the response patterns of mitral/tufted cells to different odors yielding a distributed feed-forward projection to the PC. The PC was implemented as a modular attractor network with a recurrent connectivity that was likewise organized through Hebbian-Bayesian learning. We demonstrate the functionality of the model in a one-sniff-learning and recognition task on a set of 50 odorants. Furthermore, we study its robustness against noise on the receptor level and its ability to perform concentration invariant odor recognition. Moreover, we investigate the pattern completion capabilities of the system and rivalry dynamics for odor mixtures.
Serotonin increases synaptic activity in olfactory bulb glomeruli
Brill, Julia; Shao, Zuoyi; Puche, Adam C.; Wachowiak, Matt; Shipley, Michael T.
2016-01-01
Serotoninergic fibers densely innervate olfactory bulb glomeruli, the first sites of synaptic integration in the olfactory system. Acting through 5HT2A receptors, serotonin (5HT) directly excites external tufted cells (ETCs), key excitatory glomerular neurons, and depolarizes some mitral cells (MCs), the olfactory bulb's main output neurons. We further investigated 5HT action on MCs and determined its effects on the two major classes of glomerular interneurons: GABAergic/dopaminergic short axon cells (SACs) and GABAergic periglomerular cells (PGCs). In SACs, 5HT evoked a depolarizing current mediated by 5HT2C receptors but did not significantly impact spike rate. 5HT had no measurable direct effect in PGCs. Serotonin increased spontaneous excitatory and inhibitory postsynaptic currents (sEPSCs and sIPSCs) in PGCs and SACs. Increased sEPSCs were mediated by 5HT2A receptors, suggesting that they are primarily due to enhanced excitatory drive from ETCs. Increased sIPSCs resulted from elevated excitatory drive onto GABAergic interneurons and augmented GABA release from SACs. Serotonin-mediated GABA release from SACs was action potential independent and significantly increased miniature IPSC frequency in glomerular neurons. When focally applied to a glomerulus, 5HT increased MC spontaneous firing greater than twofold but did not increase olfactory nerve-evoked responses. Taken together, 5HT modulates glomerular network activity in several ways: 1) it increases ETC-mediated feed-forward excitation onto MCs, SACs, and PGCs; 2) it increases inhibition of glomerular interneurons; 3) it directly triggers action potential-independent GABA release from SACs; and 4) these network actions increase spontaneous MC firing without enhancing responses to suprathreshold sensory input. This may enhance MC sensitivity while maintaining dynamic range. PMID:26655822
Optogenetic Examination of Prefrontal-Amygdala Synaptic Development.
Arruda-Carvalho, Maithe; Wu, Wan-Chen; Cummings, Kirstie A; Clem, Roger L
2017-03-15
A brain network comprising the medial prefrontal cortex (mPFC) and amygdala plays important roles in developmentally regulated cognitive and emotional processes. However, very little is known about the maturation of mPFC-amygdala circuitry. We conducted anatomical tracing of mPFC projections and optogenetic interrogation of their synaptic connections with neurons in the basolateral amygdala (BLA) at neonatal to adult developmental stages in mice. Results indicate that mPFC-BLA projections exhibit delayed emergence relative to other mPFC pathways and establish synaptic transmission with BLA excitatory and inhibitory neurons in late infancy, events that coincide with a massive increase in overall synaptic drive. During subsequent adolescence, mPFC-BLA circuits are further modified by excitatory synaptic strengthening as well as a transient surge in feedforward inhibition. The latter was correlated with increased spontaneous inhibitory currents in excitatory neurons, suggesting that mPFC-BLA circuit maturation culminates in a period of exuberant GABAergic transmission. These findings establish a time course for the onset and refinement of mPFC-BLA transmission and point to potential sensitive periods in the development of this critical network. SIGNIFICANCE STATEMENT Human mPFC-amygdala functional connectivity is developmentally regulated and figures prominently in numerous psychiatric disorders with a high incidence of adolescent onset. However, it remains unclear when synaptic connections between these structures emerge or how their properties change with age. Our work establishes developmental windows and cellular substrates for synapse maturation in this pathway involving both excitatory and inhibitory circuits. The engagement of these substrates by early life experience may support the ontogeny of fundamental behaviors but could also lead to inappropriate circuit refinement and psychopathology in adverse situations. 
Copyright © 2017 the authors.
Neonatal Restriction of Tactile Inputs Leads to Long-Lasting Impairments of Cross-Modal Processing
Röder, Brigitte; Hanganu-Opatz, Ileana L.
2015-01-01
Optimal behavior relies on the combination of inputs from multiple senses through complex interactions within neocortical networks. The ontogeny of this multisensory interplay is still unknown. Here, we identify critical factors that control the development of visual-tactile processing by combining in vivo electrophysiology with anatomical/functional assessment of cortico-cortical communication and behavioral investigation of pigmented rats. We demonstrate that the transient reduction of unimodal (tactile) inputs during a short period of neonatal development prior to the first cross-modal experience affects feed-forward subcortico-cortical interactions by attenuating the cross-modal enhancement of evoked responses in the adult primary somatosensory cortex. Moreover, the neonatal manipulation alters cortico-cortical interactions by decreasing the cross-modal synchrony and directionality in line with the sparsification of direct projections between primary somatosensory and visual cortices. At the behavioral level, these functional and structural deficits resulted in lower cross-modal matching abilities. Thus, neonatal unimodal experience during defined developmental stages is necessary for setting up the neuronal networks of multisensory processing. PMID:26600123
Canonical microcircuits for predictive coding
Bastos, Andre M.; Usrey, W. Martin; Adams, Rick A.; Mangun, George R.; Fries, Pascal; Friston, Karl J.
2013-01-01
This review considers the influential notion of a canonical (cortical) microcircuit in light of recent theories about neuronal processing. Specifically, we conciliate quantitative studies of microcircuitry and the functional logic of neuronal computations. We revisit the established idea that message passing among hierarchical cortical areas implements a form of Bayesian inference – paying careful attention to the implications for intrinsic connections among neuronal populations. By deriving canonical forms for these computations, one can associate specific neuronal populations with specific computational roles. This analysis discloses a remarkable correspondence between the microcircuitry of the cortical column and the connectivity implied by predictive coding. Furthermore, it provides some intuitive insights into the functional asymmetries between feedforward and feedback connections and the characteristic frequencies over which they operate. PMID:23177956
Oizumi, Masafumi; Satoh, Ryota; Kazama, Hokto; Okada, Masato
2012-01-01
The Drosophila antennal lobe is subdivided into multiple glomeruli, each of which represents a unique olfactory information processing channel. In each glomerulus, feedforward input from olfactory receptor neurons (ORNs) is transformed into activity of projection neurons (PNs), which represent the output. Recent investigations have indicated that lateral presynaptic inhibitory input from other glomeruli controls the gain of this transformation. Here, we address why this gain control acts "pre"-synaptically rather than "post"-synaptically. Postsynaptic inhibition could work similarly to presynaptic inhibition with regard to regulating the firing rates of PNs depending on the stimulus intensity. We investigate the differences between pre- and postsynaptic gain control in terms of odor discriminability by simulating a network model of the Drosophila antennal lobe with experimental data. We first demonstrate that only presynaptic inhibition can reproduce the type of gain control observed in experiments. We next show that presynaptic inhibition decorrelates PN responses whereas postsynaptic inhibition does not. Due to this effect, presynaptic gain control enhances the accuracy of odor discrimination by a linear decoder while its postsynaptic counterpart only diminishes it. Our results provide the reason gain control operates "pre"-synaptically but not "post"-synaptically in the Drosophila antennal lobe.
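The pre- versus postsynaptic distinction above can be caricatured in a few lines: divisive (presynaptic-style) gain control rescales the glomerular activity pattern, whereas subtractive (postsynaptic-style) inhibition distorts it. The numbers are hypothetical and this is a schematic contrast, not the paper's fitted antennal-lobe model.

```python
import numpy as np

orn = np.array([2.0, 4.0, 8.0])           # ORN drive to three glomeruli
lateral = orn.sum()                       # total activity recruiting inhibition
s = 0.05                                  # inhibition strength (assumed)

pre = orn / (1.0 + s * lateral)           # presynaptic: divisive gain control
post = np.maximum(orn - s * lateral, 0)   # postsynaptic: subtractive shift

print(pre / pre[0])    # relative pattern preserved under divisive scaling
print(post / post[0])  # relative pattern distorted by subtraction
```

Preserving response ratios across glomeruli while compressing overall gain is one intuition for why divisive, input-side inhibition can keep odor identity decodable across stimulus intensities.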
Artificial neural networks in Space Station optimal attitude control
NASA Astrophysics Data System (ADS)
Kumar, Renjith R.; Seywald, Hans; Deshpande, Samir M.; Rahman, Zia
1992-08-01
Innovative techniques of using Artificial Neural Networks (ANNs) to improve the performance of the pitch-axis attitude control system of Space Station Freedom using Control Moment Gyros (CMGs) are investigated. The first technique uses a feedforward ANN with multilayer perceptrons to obtain an on-line controller which improves the performance of the control system via a model-following approach. The second technique uses a single-layer feedforward ANN with a modified backpropagation scheme to estimate the internal plant variations and the external disturbances separately. These estimates are then used to solve two differential Riccati equations to obtain time-varying gains which improve the control system performance in successive orbits.
Processing speed in recurrent visual networks correlates with general intelligence.
Jolij, Jacob; Huisman, Danielle; Scholte, Steven; Hamel, Ronald; Kemner, Chantal; Lamme, Victor A F
2007-01-08
Studies on the neural basis of general fluid intelligence strongly suggest that a smarter brain processes information faster. Different brain areas, however, are interconnected by both feedforward and feedback projections. Whether both types of connections or only one of the two types are faster in smarter brains remains unclear. Here we show, by measuring visual evoked potentials during a texture discrimination task, that general fluid intelligence shows a strong correlation with processing speed in recurrent visual networks, while there is no correlation with speed of feedforward connections. The hypothesis that a smarter brain runs faster may need to be refined: a smarter brain's feedback connections run faster.
NASA Technical Reports Server (NTRS)
Decker, Arthur J.
2001-01-01
Artificial neural networks have been used for a number of years to process holography-generated characteristic patterns of vibrating structures. This technology depends critically on the selection and the conditioning of the training sets. A scaling operation called folding is discussed for conditioning training sets optimally for training feed-forward neural networks to process characteristic fringe patterns. Folding allows feed-forward nets to be trained easily to detect damage-induced vibration-displacement-distribution changes as small as 10 nm. A specific application to aerospace of neural-net processing of characteristic patterns is presented to motivate the conditioning and optimization effort.
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
2016-02-01
Multiresolution analysis techniques, including the continuous wavelet transform, empirical mode decomposition, and variational mode decomposition, are tested in the context of interest rate next-day variation prediction. In particular, multiresolution analysis techniques are used to decompose the actual interest rate variation, and a feedforward neural network is used for training and prediction. The particle swarm optimization technique is adopted to optimize the network's initial weights. For comparison purposes, an autoregressive moving average model, a random walk process, and the naive model are used as the main reference models. In order to show the feasibility of the presented hybrid models that combine multiresolution analysis techniques and a feedforward neural network optimized by particle swarm optimization, we used a set of six illustrative interest rates: Moody's seasoned Aaa corporate bond yield, Moody's seasoned Baa corporate bond yield, 3-month, 6-month and 1-year treasury bills, and the effective federal funds rate. The forecasting results show that all multiresolution-based prediction systems outperform the conventional reference models on the criteria of mean absolute error, mean absolute deviation, and root mean-squared error. Therefore, it is advantageous to adopt hybrid multiresolution techniques and soft computing models to forecast interest rate daily variations, as they provide good forecasting performance.
Neural networks for feedback feedforward nonlinear control systems.
Parisini, T; Zoppoli, R
1994-01-01
This paper deals with the problem of designing feedback feedforward control strategies to drive the state of a dynamic system (in general, nonlinear) so as to track any desired trajectory joining the points of given compact sets, while minimizing a certain cost function (in general, nonquadratic). Due to the generality of the problem, conventional methods are difficult to apply. Thus, an approximate solution is sought by constraining control strategies to take on the structure of multilayer feedforward neural networks. After discussing the approximation properties of neural control strategies, a particular neural architecture is presented, which is based on what has been called the "linear-structure preserving principle". The original functional problem is then reduced to a nonlinear programming one, and backpropagation is applied to derive the optimal values of the synaptic weights. Recursive equations to compute the gradient components are presented, which generalize the classical adjoint system equations of N-stage optimal control theory. Simulation results related to nonlinear nonquadratic problems show the effectiveness of the proposed method.
Measuring the hierarchy of feedforward networks
NASA Astrophysics Data System (ADS)
Corominas-Murtra, Bernat; Rodríguez-Caso, Carlos; Goñi, Joaquín; Solé, Ricard
2011-03-01
In this paper we explore the concept of hierarchy as a quantifiable descriptor of ordered structures, departing from the definition of three conditions to be satisfied by a hierarchical structure: order, predictability, and pyramidal structure. According to these principles, we define a hierarchical index using concepts from graph and information theory. This estimator makes it possible to quantify the hierarchical character of any system that can be abstracted as a feedforward causal graph, i.e., a directed acyclic graph defined on a single connected structure. Our hierarchical index balances the predictability and pyramidal conditions through the definition of two entropies: one for the onward flow and the other for the backward reversion. We show how this index allows us to identify hierarchical, antihierarchical, and nonhierarchical structures. Our formalism reveals that, under the defined conditions for a hierarchical structure, feedforward trees and inverted tree graphs emerge as the only causal structures of maximally hierarchical and antihierarchical systems, respectively. Conversely, null values of the hierarchical index are attributed to a number of different network configurations: from linear chains, due to their lack of pyramidal structure, to fully connected feedforward graphs, where the diversity of onward pathways is canceled by the uncertainty (lack of predictability) when going backward. Some illustrative examples are provided to distinguish among these three types of hierarchical causal graphs.
Seismic activity prediction using computational intelligence techniques in northern Pakistan
NASA Astrophysics Data System (ADS)
Asim, Khawaja M.; Awais, Muhammad; Martínez-Álvarez, F.; Iqbal, Talat
2017-10-01
An earthquake prediction study is carried out for the region of northern Pakistan. The prediction methodology involves an interdisciplinary interaction of seismology and computational intelligence. Eight seismic parameters are computed based on past earthquakes. The predictive ability of these eight seismic parameters is evaluated in terms of information gain, which leads to the selection of six parameters to be used in prediction. Multiple computationally intelligent models have been developed for earthquake prediction using the selected seismic parameters. These models include a feed-forward neural network, recurrent neural network, random forest, multilayer perceptron, radial basis neural network, and support vector machine. The performance of every prediction model is evaluated, and McNemar's statistical test is applied to assess the statistical significance of the computational methodologies. The feed-forward neural network shows statistically significant predictions, with an accuracy of 75% and a positive predictive value of 78%, in the context of northern Pakistan.
Numerical solution of the nonlinear Schrodinger equation by feedforward neural networks
NASA Astrophysics Data System (ADS)
Shirvany, Yazdan; Hayati, Mohsen; Moradian, Rostam
2008-12-01
We present a method to solve boundary value problems using artificial neural networks (ANNs). A trial solution of the differential equation is written as a feed-forward neural network containing adjustable parameters (the weights and biases). From the differential equation and its boundary conditions we prepare the energy function, which is used in the back-propagation method with a momentum term to update the network parameters. We improved the energy function of the ANN, which is derived from the Schrodinger equation and the boundary conditions. With this improved energy function we can use an unsupervised training method in the ANN for solving the equation. Unsupervised training aims to minimize a non-negative energy function. We used the ANN method to solve the Schrodinger equation for a few quantum systems. Eigenfunctions and energy eigenvalues are calculated. Our numerical results are in agreement with the corresponding analytical solutions and show the efficiency of the ANN method for solving eigenvalue problems.
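The trial-solution construction described in this abstract can be sketched on a much simpler problem. The toy example below solves dy/dx = -y with y(0) = 1 (not the Schrodinger equation); the network size, collocation grid, and finite-difference optimizer are all illustrative assumptions rather than the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
H = 8  # hidden units

def unpack(p):
    return p[:H], p[H:2 * H], p[2 * H:3 * H], p[3 * H]

def net(x, p):
    # Single-hidden-layer tanh network N(x; p).
    w1, b1, w2, b2 = unpack(p)
    return np.tanh(np.outer(x, w1) + b1) @ w2 + b2

def psi(x, p):
    # Trial solution psi(x) = 1 + x * N(x) satisfies psi(0) = 1 by construction.
    return 1.0 + x * net(x, p)

xs = np.linspace(0.0, 1.0, 21)  # collocation points
h = 1e-4

def energy(p):
    # Non-negative energy: mean squared residual of dy/dx + y = 0,
    # with dy/dx approximated by central differences.
    dpsi = (psi(xs + h, p) - psi(xs - h, p)) / (2 * h)
    return np.mean((dpsi + psi(xs, p)) ** 2)

# Minimize the energy by plain finite-difference gradient descent
# (adequate for this 25-parameter toy; the paper uses backpropagation).
p = rng.normal(scale=0.1, size=3 * H + 1)
e0 = energy(p)
lr, eps = 0.05, 1e-6
for _ in range(2000):
    base = energy(p)
    g = np.empty_like(p)
    for i in range(p.size):
        q = p.copy()
        q[i] += eps
        g[i] = (energy(q) - base) / eps
    p -= lr * g

e1 = energy(p)  # residual energy after training; exact solution is y = exp(-x)
```

As the residual energy shrinks, psi(1) should approach exp(-1); the same unsupervised construction, with an energy functional built from the Schrodinger equation, is what the abstract describes.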
Ensemble learning in fixed expansion layer networks for mitigating catastrophic forgetting.
Coop, Robert; Mishtal, Aaron; Arel, Itamar
2013-10-01
Catastrophic forgetting is a well-studied attribute of most parameterized supervised learning systems. A variation of this phenomenon, in the context of feedforward neural networks, arises when nonstationary inputs lead to loss of previously learned mappings. The majority of the schemes proposed in the literature for mitigating catastrophic forgetting were not data driven and did not scale well. We introduce the fixed expansion layer (FEL) feedforward neural network, which embeds a sparsely encoding hidden layer to help mitigate forgetting of prior learned representations. In addition, we investigate a novel framework for training ensembles of FEL networks, based on exploiting an information-theoretic measure of diversity between FEL learners, to further control undesired plasticity. The proposed methodology is demonstrated on a basic classification task, clearly emphasizing its advantages over existing techniques. The architecture proposed can be enhanced to address a range of computational intelligence tasks, such as regression problems and system control.
Combined feedforward and feedback control of a redundant, nonlinear, dynamic musculoskeletal system.
Blana, Dimitra; Kirsch, Robert F; Chadwick, Edward K
2009-05-01
A functional electrical stimulation controller is presented that uses a combination of feedforward and feedback for arm control in high-level injury. The feedforward controller generates the muscle activations nominally required for desired movements, and the feedback controller corrects for errors caused by muscle fatigue and external disturbances. The feedforward controller is an artificial neural network (ANN) which approximates the inverse dynamics of the arm. The feedback loop includes a PID controller in series with a second ANN representing the nonlinear properties and biomechanical interactions of muscles and joints. The controller was designed and tested using a two-joint musculoskeletal model of the arm that includes four mono-articular and two bi-articular muscles. Its performance during goal-oriented movements of varying amplitudes and durations showed a tracking error of less than 4 degrees in ideal conditions, and less than 10 degrees even in the case of considerable fatigue and external disturbances.
Contributions of the 12 neuron classes in the fly lamina to motion vision
Tuthill, John C.; Nern, Aljoscha; Holtz, Stephen L.; Rubin, Gerald M.; Reiser, Michael B.
2013-01-01
Motion detection is a fundamental neural computation performed by many sensory systems. In the fly, local motion computation is thought to occur within the first two layers of the visual system, the lamina and medulla. We constructed specific genetic driver lines for each of the 12 neuron classes in the lamina. We then depolarized and hyperpolarized each neuron type, and quantified fly behavioral responses to a diverse set of motion stimuli. We found that only a small number of lamina output neurons are essential for motion detection, while most neurons serve to sculpt and enhance these feedforward pathways. Two classes of feedback neurons (C2 and C3), and lamina output neurons (L2 and L4), are required for normal detection of directional motion stimuli. Our results reveal a prominent role for feedback and lateral interactions in motion processing, and demonstrate that motion-dependent behaviors rely on contributions from nearly all lamina neuron classes. PMID:23849200
The cerebellar Golgi cell and spatiotemporal organization of granular layer activity
D'Angelo, Egidio; Solinas, Sergio; Mapelli, Jonathan; Gandolfi, Daniela; Mapelli, Lisa; Prestori, Francesca
2013-01-01
The cerebellar granular layer has been suggested to perform a complex spatiotemporal reconfiguration of incoming mossy fiber signals. Central to this role is the inhibitory action exerted by Golgi cells over granule cells: Golgi cells inhibit granule cells through both feedforward and feedback inhibitory loops and generate a broad lateral inhibition that extends beyond the afferent synaptic field. This characteristic connectivity has recently been investigated in great detail and been correlated with specific functional properties of these neurons. These include theta-frequency pacemaking, network entrainment into coherent oscillations, and phase resetting. Important advances have also been made in determining the membrane and synaptic properties of the neuron, and in clarifying the mechanisms of activation by input bursts. Moreover, voltage-sensitive dye imaging and multi-electrode array (MEA) recordings, combined with mathematical simulations based on realistic computational models, have improved our understanding of the impact of Golgi cell activity on granular layer circuit computations. These investigations have highlighted the critical role of Golgi cells in: generating dense clusters of granule cell activity organized in center-surround structures, implementing combinatorial operations on multiple mossy fiber inputs, regulating transmission gain and cut-off frequency, controlling spike timing and burst transmission, and determining the sign, intensity and duration of long-term synaptic plasticity at the mossy fiber-granule cell relay. This review considers recent advances in the field, highlighting the functional implications of Golgi cells for granular layer network computation and indicating new challenges for cerebellar research. PMID:23730271
McLelland, Douglas; VanRullen, Rufin
2016-10-01
Several theories have been advanced to explain how cross-frequency coupling, the interaction of neuronal oscillations at different frequencies, could enable item multiplexing in neural systems. The communication-through-coherence theory proposes that phase-matching of gamma oscillations between areas enables selective processing of a single item at a time, and a later refinement of the theory includes a theta-frequency oscillation that provides a periodic reset of the system. Alternatively, the theta-gamma neural code theory proposes that a sequence of items is processed, one per gamma cycle, and that this sequence is repeated or updated across theta cycles. In short, both theories serve to segregate representations via the temporal domain, but differ on the number of objects concurrently represented. In this study, we set out to test whether each of these theories is actually physiologically plausible, by implementing them within a single model inspired by physiological data. Using a spiking network model of visual processing, we show that each of these theories is physiologically plausible and computationally useful. Both theories were implemented within a single network architecture, with two areas connected in a feedforward manner, and gamma oscillations generated by feedback inhibition within areas. Simply increasing the amplitude of global inhibition in the lower area, equivalent to an increase in the spatial scope of the gamma oscillation, yielded a switch from one mode to the other. Thus, these different processing modes may co-exist in the brain, enabling dynamic switching between exploratory and selective modes of attention.
Tu, Junchao; Zhang, Liyan
2018-01-12
A new solution to the problem of galvanometric laser scanning (GLS) system calibration is presented. Under the machine learning framework, we build a single-hidden-layer feedforward neural network (SLFN) to represent the GLS system, which takes the digital control signal at the drives of the GLS system as input and the space vector of the corresponding outgoing laser beam as output. The training data set is obtained with the aid of a moving mechanism and a binocular stereo system. The parameters of the SLFN are efficiently solved in closed form by using the extreme learning machine (ELM). By quantitatively analyzing the regression precision with respect to the number of hidden neurons in the SLFN, we demonstrate that the proper number of hidden neurons can be safely chosen from a broad interval to guarantee good generalization performance. Compared to traditional model-driven calibration, the proposed calibration method does not need a complex modeling process and is more accurate and stable. As the output of the network is the space vectors of the outgoing laser beams, it costs much less training time and can provide a uniform solution to both laser projection and 3D reconstruction, in contrast with the existing data-driven calibration method, which only works for the laser triangulation problem. A calibration experiment, a projection experiment and a 3D reconstruction experiment are conducted to test the proposed method, and good results are obtained.
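The ELM step mentioned above — random hidden-layer weights with only the output weights solved, in closed form, by least squares — can be sketched on a toy 1-D regression task. The GLS input/output mapping, network size, and data below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def elm_fit(X, Y, n_hidden=60):
    """Extreme learning machine: random hidden layer, least-squares output layer."""
    W = rng.normal(scale=2.0, size=(X.shape[1], n_hidden))  # never trained
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)  # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# A smooth 1-D function stands in for the control-signal -> beam-vector mapping.
X = np.linspace(-1, 1, 200).reshape(-1, 1)
Y = np.sin(3 * X)
W, b, beta = elm_fit(X, Y)
max_err = np.max(np.abs(elm_predict(X, W, b, beta) - Y))
```

Because the output weights come from a single linear solve, there is no iterative training loop at all, which is what makes ELM-based calibration fast and stable.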
GABAergic Local Interneurons Shape Female Fruit Fly Response to Mating Songs.
Yamada, Daichi; Ishimoto, Hiroshi; Li, Xiaodong; Kohashi, Tsunehiko; Ishikawa, Yuki; Kamikouchi, Azusa
2018-05-02
Many animals use acoustic signals to attract a potential mating partner. In fruit flies (Drosophila melanogaster), the courtship pulse song has a species-specific interpulse interval (IPI) that activates mating. Although a series of auditory neurons in the fly brain exhibit different tuning patterns to IPIs, it is unclear how the response of each neuron is tuned. Here, we studied the neural circuitry regulating the activity of antennal mechanosensory and motor center (AMMC)-B1 neurons, key secondary auditory neurons in the excitatory neural pathway that relay song information. By performing Ca2+ imaging in female flies, we found that the IPI selectivity observed in AMMC-B1 neurons differs from that of upstream auditory sensory neurons [Johnston's organ (JO)-B]. Selective knock-down of a GABAA receptor subunit in AMMC-B1 neurons increased their response to short IPIs, suggesting that GABA suppresses AMMC-B1 activity at these IPIs. Connection mapping identified two GABAergic local interneurons that synapse with AMMC-B1 and JO-B. Ca2+ imaging combined with neuronal silencing revealed that these local interneurons, AMMC-LN and AMMC-B2, shape the response pattern of AMMC-B1 neurons at a 15 ms IPI. Neuronal silencing studies further suggested that both GABAergic local interneurons suppress the behavioral response to artificial pulse songs in flies, particularly those with a 15 ms IPI. Altogether, we identified a circuit containing two GABAergic local interneurons that affects the temporal tuning of AMMC-B1 neurons in the song relay pathway and the behavioral response to the courtship song. Our findings suggest that feedforward inhibitory pathways adjust the behavioral response to courtship pulse songs in female flies.
SIGNIFICANCE STATEMENT To understand how the brain detects time intervals between sound elements, we studied the neural pathway that relays species-specific courtship song information in female Drosophila melanogaster. We demonstrate that signal transmission from auditory sensory neurons to the key secondary auditory neurons, antennal mechanosensory and motor center (AMMC)-B1, is the first step in generating the time-interval selectivity of neurons in the song relay pathway. Two GABAergic local interneurons are suggested to shape the interval selectivity of AMMC-B1 neurons by receiving auditory inputs and in turn providing feedforward inhibition onto AMMC-B1 neurons. Furthermore, these GABAergic local interneurons suppress the song response behavior in an interval-dependent manner. Our results provide new insights into the neural circuit basis for adjusting neuronal and behavioral responses to a species-specific communication sound. Copyright © 2018 the authors.
NASA Technical Reports Server (NTRS)
Lary, David J.; Mussa, Yussuf
2004-01-01
In this study a new extended Kalman filter (EKF) learning algorithm for feed-forward neural networks (FFN) is used. With the EKF approach, the training of the FFN can be seen as state estimation for a non-linear stationary process. The EKF method gives excellent convergence performances provided that there is enough computer core memory and that the machine precision is high. Neural networks are ideally suited to describe the spatial and temporal dependence of tracer-tracer correlations. The neural network performs well even in regions where the correlations are less compact and normally a family of correlation curves would be required. For example, the CH4-N2O correlation can be well described using a neural network trained with the latitude, pressure, time of year, and CH4 volume mixing ratio (v.m.r.). The neural network was able to reproduce the CH4-N2O correlation with a correlation coefficient between simulated and training values of 0.9997. The neural network Fortran code used is available for download.
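The EKF view of training — treating the weights as the state of a stationary process and each target value as a noisy measurement of the network output — can be sketched as follows. This is a generic, textbook-style EKF weight-update loop on a toy regression task, not the paper's Fortran code; the network size, noise covariances, and data are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
H = 10  # hidden units

def net(x, w):
    # Single-hidden-layer tanh network with scalar input and output.
    w1, b1, w2, b2 = w[:H], w[H:2 * H], w[2 * H:3 * H], w[3 * H]
    return np.tanh(x * w1 + b1) @ w2 + b2

def jacobian(x, w, eps=1e-6):
    # Numerical d(output)/d(weights): the EKF measurement linearization.
    y0 = net(x, w)
    J = np.empty(w.size)
    for i in range(w.size):
        q = w.copy()
        q[i] += eps
        J[i] = (net(x, q) - y0) / eps
    return J

w = rng.normal(scale=0.5, size=3 * H + 1)  # state estimate (the weights)
P = np.eye(w.size) * 100.0                 # state covariance
R = 0.01                                   # measurement noise variance
Q = 1e-4                                   # small process noise keeps P responsive

xs = np.linspace(-1, 1, 100)
ys = np.sin(2 * xs)                        # toy target function
for _ in range(3):                         # a few passes over the data
    for x, y in zip(xs, ys):
        Hj = jacobian(x, w)
        S = Hj @ P @ Hj + R                # innovation variance
        K = (P @ Hj) / S                   # Kalman gain
        w = w + K * (y - net(x, w))        # state (weight) update
        P = P - np.outer(K, Hj @ P) + Q * np.eye(w.size)

mse = np.mean([(net(x, w) - y) ** 2 for x, y in zip(xs, ys)])
```

Each sample updates both the weights and the covariance P, which is why EKF training converges in far fewer passes than plain gradient descent, at the memory cost of storing P.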
NASA Astrophysics Data System (ADS)
Pahlavani, P.; Gholami, A.; Azimi, S.
2017-09-01
This paper presents an indoor positioning technique based on a multi-layer feed-forward (MLFF) artificial neural network (ANN). Most indoor received signal strength (RSS)-based WLAN positioning systems use the fingerprinting technique, which can be divided into two phases: the offline (calibration) phase and the online (estimation) phase. In this paper, RSSs were collected at all reference points in four directions and over two periods of time (morning and evening). Hence, RSS readings were sampled at a regular time interval and a specific orientation at each reference point. The proposed ANN-based model used the Levenberg-Marquardt algorithm for learning and fitting the network to the training data. The RSS readings at all reference points, together with the known positions of these reference points, were prepared for the training phase of the proposed MLFF neural network. Eventually, the average positioning error for this network, using 30% check and validation data, was computed to be approximately 2.20 meters.
Numerical solution of differential equations by artificial neural networks
NASA Technical Reports Server (NTRS)
Meade, Andrew J., Jr.
1995-01-01
Conventionally programmed digital computers can process numbers with great speed and precision, but do not easily recognize patterns or imprecise or contradictory data. Instead of being programmed in the conventional sense, artificial neural networks (ANN's) are capable of self-learning through exposure to repeated examples. However, the training of an ANN can be a time consuming and unpredictable process. A general method is being developed by the author to mate the adaptability of the ANN with the speed and precision of the digital computer. This method has been successful in building feedforward networks that can approximate functions and their partial derivatives from examples in a single iteration. The general method also allows the formation of feedforward networks that can approximate the solution to nonlinear ordinary and partial differential equations to desired accuracy without the need of examples. It is believed that continued research will produce artificial neural networks that can be used with confidence in practical scientific computing and engineering applications.
ERIC Educational Resources Information Center
Treurniet, William
A study applied artificial neural networks, trained with the back-propagation learning algorithm, to modelling phonemes extracted from the DARPA TIMIT multi-speaker, continuous speech data base. A number of proposed network architectures were applied to the phoneme classification task, ranging from the simple feedforward multilayer network to more…
NASA Astrophysics Data System (ADS)
Asal Kzar, Ahmed; Mat Jafri, M. Z.; Hwee San, Lim; Al-Zuky, Ali A.; Mutter, Kussay N.; Hassan Al-Saleh, Anwar
2016-06-01
There are many techniques that have been proposed for the water quality problem, but remote sensing techniques have proven their success, especially when artificial neural networks are used as mathematical models with these techniques. The Hopfield neural network is one type of artificial neural network that is common, fast, simple, and efficient, but it struggles when it deals with images that have more than two colours, such as remote sensing images. This work has attempted to solve this problem by modifying the network to deal with colour remote sensing images for water quality mapping. A Feed-forward Hopfield Neural Network Algorithm (FHNNA) was modified and used with a satellite colour image from the Thailand Earth Observation System (THEOS) for TSS mapping in the Penang Strait, Malaysia, through the classification of TSS concentrations. The new algorithm is based essentially on three modifications: using the HNN as a feed-forward network, considering the weights of bitplanes, and a non-self architecture (zero diagonal of the weight matrix); in addition, it depends on validation data. The resulting map was colour-coded for visual interpretation. The efficiency of the new algorithm was demonstrated by the high correlation coefficient (R=0.979) and the low root mean square error (RMSE=4.301) computed on the validation data, which were divided into two groups: one used for the algorithm and the other used for validating the results. The comparison was with the minimum distance classifier. Therefore, TSS mapping of polluted water in the Penang Strait, Malaysia, can be performed using FHNNA with the remote sensing technique (THEOS). It is a new and useful application of the HNN, and thus a new model for remote sensing techniques applied to water quality mapping, which is an important environmental problem.
Feed-Forward Neural Network Prediction of the Mechanical Properties of Sandcrete Materials
Asteris, Panagiotis G.; Roussis, Panayiotis C.; Douvika, Maria G.
2017-01-01
This work presents a soft-sensor approach for estimating critical mechanical properties of sandcrete materials. Feed-forward (FF) artificial neural network (ANN) models are employed for building soft-sensors able to predict the 28-day compressive strength and the modulus of elasticity of sandcrete materials. To this end, a new normalization technique for the pre-processing of data is proposed. The comparison of the derived results with the available experimental data demonstrates the capability of FF ANNs to predict with pinpoint accuracy the mechanical properties of sandcrete materials. Furthermore, the proposed normalization technique has been proven effective and robust compared to other normalization techniques available in the literature. PMID:28598400
Design of Neural Networks for Fast Convergence and Accuracy: Dynamics and Control
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Sparks, Dean W., Jr.
1997-01-01
A procedure for the design and training of artificial neural networks, used for rapid and efficient controls and dynamics design and analysis for flexible space systems, has been developed. Artificial neural networks are employed such that, once properly trained, they provide a means of rapidly evaluating the impact of design changes. Specifically, two-layer feedforward neural networks are designed to approximate the functional relationship between the component/spacecraft design changes and measures of its performance or the nonlinear dynamics of the system/components. A training algorithm, based on statistical sampling theory, is presented, which guarantees that the trained networks provide a designer-specified degree of accuracy in mapping the functional relationship. Within each iteration of this statistical algorithm, a sequential design algorithm is used for the design and training of the feedforward network to provide rapid convergence to the network goals. Here, at each sequence a new network is trained to minimize the error of the previous network. The proposed method should work for applications wherein an arbitrarily large source of training data can be generated. Two numerical examples are performed on a spacecraft application in order to demonstrate the feasibility of the proposed approach.
Random noise effects in pulse-mode digital multilayer neural networks.
Kim, Y C; Shanblatt, M A
1995-01-01
A pulse-mode digital multilayer neural network (DMNN) based on stochastic computing techniques is implemented with simple logic gates as basic computing elements. The pulse-mode signal representation and the use of simple logic gates for neural operations lead to a massively parallel yet compact and flexible network architecture, well suited for VLSI implementation. Algebraic neural operations are replaced by stochastic processes using pseudorandom pulse sequences. The distributions of the results from the stochastic processes are approximated using the hypergeometric distribution. Synaptic weights and neuron states are represented as probabilities and estimated as average pulse occurrence rates in corresponding pulse sequences. A statistical model of the noise (error) is developed to estimate the relative accuracy associated with stochastic computing in terms of mean and variance. Computational differences are then explained by comparison to deterministic neural computations. DMNN feedforward architectures are modeled in VHDL using character recognition problems as testbeds. Computational accuracy is analyzed, and the results of the statistical model are compared with the actual simulation results. Experiments show that the calculations performed in the DMNN are more accurate than those anticipated when Bernoulli sequences are assumed, as is common in the literature. Furthermore, the statistical model successfully predicts the accuracy of the operations performed in the DMNN.
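The core stochastic-computing trick described above — values encoded as pulse occurrence probabilities, with multiplication reduced to a logic gate — can be sketched in a few lines. The stream length and operand values are illustrative; real DMNN hardware uses pseudorandom pulse generators rather than a software RNG:

```python
import numpy as np

rng = np.random.default_rng(42)

def to_pulses(p, n):
    """Encode a probability p as a Bernoulli pulse stream of length n."""
    return rng.random(n) < p

def stochastic_multiply(a, b, n=100_000):
    """A bitwise AND of two independent pulse streams estimates the product a*b."""
    return np.mean(to_pulses(a, n) & to_pulses(b, n))

est = stochastic_multiply(0.6, 0.5)  # noisy estimate of 0.6 * 0.5 = 0.30
```

The estimate carries binomial noise with standard deviation sqrt(ab(1 - ab)/n), which is the kind of mean/variance accuracy model the abstract develops for the DMNN.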
On the use of harmony search algorithm in the training of wavelet neural networks
NASA Astrophysics Data System (ADS)
Lai, Kee Huong; Zainuddin, Zarita; Ong, Pauline
2015-10-01
Wavelet neural networks (WNNs) are a class of feedforward neural networks that have been used in a wide range of industrial and engineering applications to model the complex relationships between the given inputs and outputs. The training of WNNs involves the configuration of the weight values between neurons. The backpropagation training algorithm, which is a gradient-descent method, can be used for this training purpose. Nonetheless, the solutions found by this algorithm often get trapped at local minima. In this paper, a harmony search-based algorithm is proposed for the training of WNNs. The training of WNNs can thus be formulated as a continuous optimization problem, where the objective is to maximize the overall classification accuracy. Each candidate solution proposed by the harmony search algorithm represents a specific WNN architecture. In order to speed up the training process, the solution space is divided into disjoint partitions during the random initialization step of the harmony search algorithm. The proposed training algorithm is tested on three benchmark problems from the UCI machine learning repository, as well as one real-life application, namely, the classification of electroencephalography signals in the task of epileptic seizure detection. The results obtained show that the proposed algorithm outperforms the traditional harmony search algorithm in terms of overall classification accuracy.
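As a rough illustration of the metaheuristic involved (the plain algorithm, not the paper's partitioned-initialization variant), a minimal harmony search over a continuous weight vector might look like the sketch below; a toy sphere objective stands in for the classification error of a candidate WNN. All parameter values are illustrative assumptions.

```python
import random

def harmony_search(objective, dim, bounds, hms=10, hmcr=0.9, par=0.3,
                   bw=0.05, iters=2000, seed=1):
    """Minimize `objective`; each harmony is one candidate weight vector."""
    rng = random.Random(seed)
    lo, hi = bounds
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iters):
        cand = []
        for d in range(dim):
            if rng.random() < hmcr:                 # draw from harmony memory
                x = memory[rng.randrange(hms)][d]
                if rng.random() < par:              # pitch adjustment
                    x += rng.uniform(-bw, bw)
            else:                                   # random consideration
                x = rng.uniform(lo, hi)
            cand.append(min(hi, max(lo, x)))
        s = objective(cand)
        worst = max(range(hms), key=scores.__getitem__)
        if s < scores[worst]:                       # replace worst harmony
            memory[worst], scores[worst] = cand, s
    best = min(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]

# toy objective standing in for (minimized) classification error
best_w, best_err = harmony_search(lambda w: sum(x * x for x in w),
                                  dim=3, bounds=(-1.0, 1.0))
```

In the paper's setting, the objective would evaluate a WNN built from the candidate vector on the training data.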
Artificial neural network-aided image analysis system for cell counting.
Sjöström, P J; Frydel, B R; Wahlberg, L U
1999-05-01
In histological preparations containing debris and synthetic materials, it is difficult to automate cell counting using standard image analysis tools, i.e., systems that rely on boundary contours, histogram thresholding, etc. In an attempt to mimic manual cell recognition, an automated cell counter was constructed using a combination of artificial intelligence and standard image analysis methods. Artificial neural network (ANN) methods were applied on digitized microscopy fields without pre-ANN feature extraction. A three-layer feed-forward network with extensive weight sharing in the first hidden layer was employed and trained on 1,830 examples using the error back-propagation algorithm on a Power Macintosh 7300/180 desktop computer. The optimal number of hidden neurons was determined and the trained system was validated by comparison with blinded human counts. System performance at 50x and 100x magnification was evaluated. The correlation index at 100x magnification neared person-to-person variability, while 50x magnification was not useful. The system was approximately six times faster than an experienced human. ANN-based automated cell counting in noisy histological preparations is feasible. Consistent histology and computer power are crucial for system performance. The system provides several benefits, such as speed of analysis and consistency, and frees up personnel for other tasks.
Use of artificial neural network for spatial rainfall analysis
NASA Astrophysics Data System (ADS)
Paraskevas, Tsangaratos; Dimitrios, Rozos; Andreas, Benardos
2014-04-01
In the present study, the precipitation data measured at 23 rain gauge stations over the Achaia County, Greece, were used to estimate the spatial distribution of the mean annual precipitation values over a specific catchment area. The objective of this work was achieved by programming an Artificial Neural Network (ANN) that uses the feed-forward back-propagation algorithm as an alternative interpolating technique. A Geographic Information System (GIS) was utilized to process the data derived by the ANN and to create a continuous surface that represented the spatial mean annual precipitation distribution. The ANN introduced an optimization procedure that was implemented during training, adjusting the number of hidden neurons and the convergence of the ANN in order to select the best network architecture. The performance of the ANN was evaluated using three standard statistical evaluation criteria applied to the study area and showed good performance. The outcomes were also compared with the results obtained from a previous study in the research area, which used linear regression analysis to estimate the mean annual precipitation values; the ANN approach gave more accurate results. The information and knowledge gained from the present study could improve the accuracy of analysis concerning hydrology and hydrogeological models, ground water studies, flood related applications and climate analysis studies.
Shakiba, Mohammad; Parson, Nick; Chen, X.-Grant
2016-01-01
The hot deformation behavior of Al-0.12Fe-0.1Si alloys with varied amounts of Cu (0.002–0.31 wt %) was investigated by uniaxial compression tests conducted at different temperatures (400 °C–550 °C) and strain rates (0.01–10 s−1). The results demonstrated that flow stress decreased with increasing deformation temperature and decreasing strain rate, while flow stress increased with increasing Cu content for all deformation conditions studied due to the solute drag effect. Based on the experimental data, an artificial neural network (ANN) model was developed to study the relationship between chemical composition, deformation variables and high-temperature flow behavior. A three-layer feed-forward back-propagation artificial neural network with 20 neurons in a hidden layer was established in this study. The input parameters were Cu content, temperature, strain rate and strain, while the flow stress was the output. The performance of the proposed model was evaluated using the K-fold cross-validation method. The results showed excellent generalization capability of the developed model. Sensitivity analysis indicated that the strain rate is the most important parameter, while the Cu content exhibited a modest but significant influence on the flow stress. PMID:28773658
Decoding the cortical transformations for visually guided reaching in 3D space.
Blohm, Gunnar; Keith, Gerald P; Crawford, J Douglas
2009-06-01
To explore the possible cortical mechanisms underlying the 3-dimensional (3D) visuomotor transformation for reaching, we trained a 4-layer feed-forward artificial neural network to compute a reach vector (output) from the visual positions of both the hand and target viewed from different eye and head orientations (inputs). The emergent properties of the intermediate layers reflected several known neurophysiological findings, for example, gain field-like modulations and position-dependent shifting of receptive fields (RFs). We performed a reference frame analysis for each individual network unit, simulating standard electrophysiological experiments, that is, RF mapping (unit input), motor field mapping, and microstimulation effects (unit outputs). At the level of individual units (in both intermediate layers), the 3 different electrophysiological approaches identified different reference frames, demonstrating that these techniques reveal different neuronal properties and suggesting that a comparison across these techniques is required to understand the neural code of physiological networks. This analysis showed fixed input-output relationships within each layer and, more importantly, within each unit. These local reference frame transformation modules provide the basic elements for the global transformation; their parallel contributions are combined in a gain field-like fashion at the population level to implement both the linear and nonlinear elements of the 3D visuomotor transformation.
Lateral Spread of Orientation Selectivity in V1 is Controlled by Intracortical Cooperativity
Chavane, Frédéric; Sharon, Dahlia; Jancke, Dirk; Marre, Olivier; Frégnac, Yves; Grinvald, Amiram
2011-01-01
Neurons in the primary visual cortex receive subliminal information originating from the periphery of their receptive fields (RF) through a variety of cortical connections. In the cat primary visual cortex, long-range horizontal axons have been reported to preferentially bind to distant columns of similar orientation preferences, whereas feedback connections from higher visual areas provide a more diverse functional input. To understand the role of these lateral interactions, it is crucial to characterize their effective functional connectivity and tuning properties. However, the overall functional impact of cortical lateral connections, whatever their anatomical origin, is unknown since it has never been directly characterized. Using direct measurements of postsynaptic integration in cat areas 17 and 18, we performed multi-scale assessments of the functional impact of visually driven lateral networks. Voltage-sensitive dye imaging showed that local oriented stimuli evoke an orientation-selective activity that remains confined to the cortical feedforward imprint of the stimulus. Beyond a distance of one hypercolumn, the lateral spread of cortical activity gradually lost its orientation preference, decaying approximately exponentially with a space constant of about 1 mm. Intracellular recordings showed that this loss of orientation selectivity arises from the diversity of converging synaptic input patterns originating from outside the classical RF. In contrast, when the stimulus size was increased, we observed orientation-selective spread of activation beyond the feedforward imprint. We conclude that stimulus-induced cooperativity enhances the long-range orientation-selective spread. PMID:21629708
NASA Astrophysics Data System (ADS)
Laidi, Maamar; Hanini, Salah; Rezrazi, Ahmed; Yaiche, Mohamed Redha; El Hadj, Abdallah Abdallah; Chellali, Farouk
2017-04-01
In this study, a backpropagation artificial neural network (BP-ANN) model is used as an alternative approach to predict solar radiation on tilted surfaces (SRT) using a number of variables involved in the physical process. These variables are namely the latitude of the site, mean temperature and relative humidity, Linke turbidity factor and Angstrom coefficient, extraterrestrial solar radiation, solar radiation data measured on horizontal surfaces (SRH), and solar zenith angle. Experimental solar radiation data from 13 stations spread all over Algeria throughout the year 2004 were used for training/validation and testing the artificial neural networks (ANNs), and one station was used to test the interpolation of the designed ANN. The ANN model was trained, validated, and tested using 60, 20, and 20 % of all data, respectively. The configuration 8-35-1 (8 inputs, 35 hidden, and 1 output neurons) presented an excellent agreement between the prediction and the experimental data during the test stage with a determination coefficient of 0.99 and root mean squared error of 5.75 Wh/m2, considering a three-layer feedforward backpropagation neural network with Levenberg-Marquardt training algorithm, a hyperbolic tangent sigmoid and linear transfer function at the hidden and the output layer, respectively. This novel model could be used by researchers or scientists to design high-efficiency solar devices that are usually tilted at an optimum angle to increase the solar radiation incident on the surface.
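The reported 8-35-1 architecture, hyperbolic tangent sigmoid hidden units feeding a linear output, can be sketched as a plain forward pass. The weights below are random placeholders, not the trained Levenberg-Marquardt values.

```python
import math
import random

def init_layer(n_in, n_out, rng):
    """Random placeholder weights; w[0] of each row is the bias term."""
    return [[rng.uniform(-0.5, 0.5) for _ in range(n_in + 1)]
            for _ in range(n_out)]

def tansig(x):
    """Hyperbolic tangent sigmoid transfer function."""
    return math.tanh(x)

def forward(x, hidden, output):
    """tansig hidden layer, linear output layer."""
    h = [tansig(w[0] + sum(wi * xi for wi, xi in zip(w[1:], x)))
         for w in hidden]
    return [w[0] + sum(wi * hi for wi, hi in zip(w[1:], h))
            for w in output]

rng = random.Random(0)
hidden = init_layer(8, 35, rng)   # 8 inputs -> 35 hidden neurons
output = init_layer(35, 1, rng)   # 35 hidden -> 1 linear output
y = forward([0.1] * 8, hidden, output)
```

Training would then adjust these weights against the measured SRT values.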
Zhou, Li; Liu, Ming-Zhe; Li, Qing; Deng, Juan; Mu, Di; Sun, Yan-Gang
2017-03-21
Serotonergic neurons play key roles in various biological processes. However, circuit mechanisms underlying tight control of serotonergic neurons remain largely unknown. Here, we systematically investigated the organization of long-range synaptic inputs to serotonergic neurons and GABAergic neurons in the dorsal raphe nucleus (DRN) of mice with a combination of viral tracing, slice electrophysiological, and optogenetic techniques. We found that DRN serotonergic neurons and GABAergic neurons receive largely comparable synaptic inputs from six major upstream brain areas. Upon further analysis of the fine functional circuit structures, we found both bilateral and ipsilateral patterns of topographic connectivity in the DRN for the axons from different inputs. Moreover, the upstream brain areas were found to bidirectionally control the activity of DRN serotonergic neurons by recruiting feedforward inhibition or via a push-pull mechanism. Our study provides a framework for further deciphering the functional roles of long-range circuits controlling the activity of serotonergic neurons in the DRN. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.
Nucleus accumbens feedforward inhibition circuit promotes cocaine self-administration
Yu, Jun; Yan, Yijin; Li, King-Lun; Wang, Yao; Huang, Yanhua H.; Urban, Nathaniel N.; Nestler, Eric J.; Schlüter, Oliver M.; Dong, Yan
2017-01-01
The basolateral amygdala (BLA) sends excitatory projections to the nucleus accumbens (NAc) and regulates motivated behaviors partially by activating NAc medium spiny neurons (MSNs). Here, we characterized a feedforward inhibition circuit, through which BLA-evoked activation of NAc shell (NAcSh) MSNs was fine-tuned by GABAergic monosynaptic innervation from adjacent fast-spiking interneurons (FSIs). Specifically, BLA-to-NAcSh projections predominantly innervated NAcSh FSIs compared with MSNs and triggered action potentials in FSIs preceding BLA-mediated activation of MSNs. Due to these anatomical and temporal properties, activation of the BLA-to-NAcSh projection resulted in a rapid FSI-mediated inhibition of MSNs, timing-contingently dictating BLA-evoked activation of MSNs. Cocaine self-administration selectively and persistently up-regulated the presynaptic release probability of BLA-to-FSI synapses, entailing enhanced FSI-mediated feedforward inhibition of MSNs upon BLA activation. Experimentally enhancing the BLA-to-FSI transmission in vivo expedited the acquisition of cocaine self-administration. These results reveal a previously unidentified role of an FSI-embedded circuit in regulating NAc-based drug seeking and taking. PMID:28973852
Asynchronous networks: modularization of dynamics theorem
NASA Astrophysics Data System (ADS)
Bick, Christian; Field, Michael
2017-02-01
Building on the first part of this paper, we develop the theory of functional asynchronous networks. We show that a large class of functional asynchronous networks can be (uniquely) represented as feedforward networks connecting events or dynamical modules. For these networks we can give a complete description of the network function in terms of the function of the events comprising the network: the modularization of dynamics theorem. We give examples to illustrate the main results.
Biomimetic Hybrid Feedback Feedforward Neural-Network Learning Control.
Pan, Yongping; Yu, Haoyong
2017-06-01
This brief presents a biomimetic hybrid feedback feedforward neural-network learning control (NNLC) strategy inspired by the human motor learning control mechanism for a class of uncertain nonlinear systems. The control structure includes a proportional-derivative controller acting as a feedback servo machine and a radial-basis-function (RBF) NN acting as a feedforward predictive machine. Under the sufficient constraints on control parameters, the closed-loop system achieves semiglobal practical exponential stability, such that an accurate NN approximation is guaranteed in a local region along recurrent reference trajectories. Compared with the existing NNLC methods, the novelties of the proposed method include: 1) the implementation of an adaptive NN control to guarantee plant states being recurrent is not needed, since recurrent reference signals rather than plant states are utilized as NN inputs, which greatly simplifies the analysis and synthesis of the NNLC and 2) the domain of NN approximation can be determined a priori by the given reference signals, which leads to an easy construction of the RBF-NNs. Simulation results have verified the effectiveness of this approach.
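The control structure described, a PD feedback servo summed with an RBF-network feedforward term evaluated on the reference signal, can be sketched as follows. The gains, centers, widths, and weights are illustrative assumptions, not values from the paper.

```python
import math

def rbf_feedforward(r, centers, widths, weights):
    """RBF network output used as the feedforward predictive term."""
    return sum(w * math.exp(-((r - c) ** 2) / (2 * s ** 2))
               for w, c, s in zip(weights, centers, widths))

def hybrid_control(error, d_error, r, kp=2.0, kd=0.5,
                   centers=(-1.0, 0.0, 1.0), widths=(0.5, 0.5, 0.5),
                   weights=(0.1, 0.2, 0.1)):
    """PD feedback servo plus RBF-NN feedforward prediction."""
    feedback = kp * error + kd * d_error          # servo machine
    feedforward = rbf_feedforward(r, centers, widths, weights)
    return feedback + feedforward

u = hybrid_control(error=0.1, d_error=-0.05, r=0.0)
```

In the paper the RBF inputs are recurrent reference signals rather than plant states, which is what fixes the approximation domain a priori; here `r` plays that role.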
Belciug, Smaranda; Gorunescu, Florin
2018-06-08
Methods based on microarrays (MA), mass spectrometry (MS), and machine learning (ML) algorithms have evolved rapidly in recent years, allowing for early detection of several types of cancer. A pitfall of these approaches, however, is the overfitting of data due to the large number of attributes and small number of instances -- a phenomenon known as the 'curse of dimensionality'. A potentially fruitful idea to avoid this drawback is to develop algorithms that combine fast computation with a filtering module for the attributes. The goal of this paper is to propose a statistical strategy to initiate the hidden nodes of a single-hidden layer feedforward neural network (SLFN) by using both the knowledge embedded in data and a filtering mechanism for attribute relevance. In order to attest to its feasibility, the proposed model has been tested on five publicly available high-dimensional datasets: breast, lung, colon, and ovarian cancer regarding gene expression and proteomic spectra provided by cDNA arrays, DNA microarray, and MS. The novel algorithm, called adaptive SLFN (aSLFN), has been compared with four major classification algorithms: traditional ELM, radial basis function network (RBF), single-hidden layer feedforward neural network trained by backpropagation algorithm (BP-SLFN), and support vector machine (SVM). Experimental results showed that the classification performance of aSLFN is competitive with the comparison models. Copyright © 2018. Published by Elsevier Inc.
Neuronal Activity Promotes Glioma Growth through Neuroligin-3 Secretion.
Venkatesh, Humsa S; Johung, Tessa B; Caretti, Viola; Noll, Alyssa; Tang, Yujie; Nagaraja, Surya; Gibson, Erin M; Mount, Christopher W; Polepalli, Jai; Mitra, Siddhartha S; Woo, Pamelyn J; Malenka, Robert C; Vogel, Hannes; Bredel, Markus; Mallick, Parag; Monje, Michelle
2015-05-07
Active neurons exert a mitogenic effect on normal neural precursor and oligodendroglial precursor cells, the putative cellular origins of high-grade glioma (HGG). By using optogenetic control of cortical neuronal activity in a patient-derived pediatric glioblastoma xenograft model, we demonstrate that active neurons similarly promote HGG proliferation and growth in vivo. Conditioned medium from optogenetically stimulated cortical slices promoted proliferation of pediatric and adult patient-derived HGG cultures, indicating secretion of activity-regulated mitogen(s). The synaptic protein neuroligin-3 (NLGN3) was identified as the leading candidate mitogen, and soluble NLGN3 was sufficient and necessary to promote robust HGG cell proliferation. NLGN3 induced PI3K-mTOR pathway activity and feedforward expression of NLGN3 in glioma cells. NLGN3 expression levels in human HGG negatively correlated with patient overall survival. These findings indicate the important role of active neurons in the brain tumor microenvironment and identify secreted NLGN3 as an unexpected mechanism promoting neuronal activity-regulated cancer growth. Copyright © 2015 Elsevier Inc. All rights reserved.
Synthetic incoherent feedforward circuits show adaptation to the amount of their genetic template
Bleris, Leonidas; Xie, Zhen; Glass, David; Adadey, Asa; Sontag, Eduardo; Benenson, Yaakov
2011-01-01
Natural and synthetic biological networks must function reliably in the face of fluctuating stoichiometry of their molecular components. These fluctuations are caused in part by changes in relative expression efficiency and the DNA template amount of the network-coding genes. Gene product levels could potentially be decoupled from these changes via built-in adaptation mechanisms, thereby boosting network reliability. Here, we show that a mechanism based on an incoherent feedforward motif enables adaptive gene expression in mammalian cells. We modeled, synthesized, and tested transcriptional and post-transcriptional incoherent loops and found that in all cases the gene product adapts to changes in DNA template abundance. We also observed that the post-transcriptional form results in superior adaptation behavior, higher absolute expression levels, and lower intrinsic fluctuations. Our results support a previously hypothesized endogenous role in gene dosage compensation for such motifs and suggest that their incorporation in synthetic networks will improve their robustness and reliability. PMID:21811230
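The adaptation property of an incoherent feedforward motif can be illustrated with a minimal ODE sketch (hypothetical rate constants, not the paper's fitted model): the template drives both the output and a repressor of the output, so the steady-state output becomes independent of template abundance.

```python
def simulate_ifl(template, a=1.0, b=1.0, deg=0.1, dt=0.01, steps=20000):
    """Euler integration of a minimal incoherent feedforward loop:
    template -> repressor r, and (template / r) -> output y.
    At steady state r = a*template/deg, so y -> b/a regardless of dosage."""
    r, y = 1.0, 0.0
    for _ in range(steps):
        dr = a * template - deg * r
        dy = b * template / r - deg * y   # production repressed by r
        r += dr * dt
        y += dy * dt
    return y

low = simulate_ifl(template=1.0)
high = simulate_ifl(template=10.0)   # tenfold DNA template dosage
```

Despite the tenfold change in template amount, `low` and `high` converge to essentially the same output level, which is the dosage-compensation behavior the experiments test.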
Computing preimages of Boolean networks.
Klotz, Johannes; Bossert, Martin; Schober, Steffen
2013-01-01
In this paper we present an algorithm, based on the sum-product algorithm, that finds elements in the preimage of a feed-forward Boolean network given an output of the network. Our probabilistic method runs in linear time with respect to the number of nodes in the network. We evaluated our algorithm on randomly constructed Boolean networks and a regulatory network of Escherichia coli and found that it gives a valid solution in most cases.
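For a toy feed-forward Boolean network, the preimage of an output can be found by exhaustive enumeration, shown below purely for illustration; the paper's point is that its sum-product approach avoids this exponential search and runs in linear time in the number of nodes. The two-gate network here is an invented example.

```python
from itertools import product

def network_output(inputs):
    """Toy feed-forward Boolean network: two gates over three inputs."""
    x1, x2, x3 = inputs
    g1 = x1 and not x2          # AND gate with one inverted input
    g2 = g1 or x3               # OR gate fed by g1 and x3
    return (g1, g2)

def preimage(target, n_inputs=3):
    """Enumerate every input vector that maps to `target`."""
    return [bits for bits in product([False, True], repeat=n_inputs)
            if network_output(bits) == target]

solutions = preimage((False, True))
```

For the target `(False, True)` the second gate forces `x3 = True` while the first gate excludes `x1 and not x2`, leaving three of the eight input vectors.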
NASA Astrophysics Data System (ADS)
Uca; Toriman, Ekhwan; Jaafar, Othman; Maru, Rosmini; Arfan, Amal; Saleh Ahmar, Ansari
2018-01-01
Prediction of suspended sediment discharge in a catchment area is very important because it can be used to evaluate erosion hazard, to manage water resources, water quality and hydrology projects (dams, reservoirs, and irrigation), and to determine the extent of the damage that has occurred in the catchment. Multiple linear regression analysis and artificial neural networks can be used to predict the amount of daily suspended sediment discharge. The regression analysis used the least squares method, whereas the artificial neural networks used a radial basis function (RBF) network and feedforward multilayer perceptrons with three learning algorithms, namely Levenberg-Marquardt (LM), scaled conjugate gradient (SCG) and Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton. The number of neurons in the hidden layer ranged from three to sixteen, while the output layer had only one neuron because there was only one output target. Among the multiple linear regression (MLRg) models, Model 2 (6 independent input variables) had the lowest mean absolute error (MAE) and root mean square error (RMSE) (0.0000002 and 13.6039) and the highest coefficient of determination (R2) and coefficient of efficiency (CE) (0.9971 and 0.9971). When the LM, SCG, BFGS and RBF models were compared, the BFGS model with structure 3-7-1 was the most accurate for predicting suspended sediment discharge in the Jenderam catchment. Its performance in the testing process showed the smallest MAE and RMSE (13.5769 and 17.9011) and the highest R2 and CE (0.9999 and 0.9998) when compared with the other BFGS quasi-Newton models (6-3-1, 9-10-1 and 12-12-1). Based on these performance statistics, the MLRg, LM, SCG, BFGS and RBF models are suitable and accurate for prediction, modeling the non-linear, complex behavior of suspended sediment responses to rainfall, water depth and discharge.
In the comparison between the artificial neural network (ANN) models and MLRg, the MLRg Model 2 also predicted suspended sediment discharge (kg/day) in the Jenderam catchment area accurately.
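The four evaluation criteria used in this study are straightforward to compute; a minimal sketch with made-up observed/predicted values follows. Note that the coefficient of efficiency (CE, the Nash-Sutcliffe form) reduces to the same expression as the R2 shown here when both are defined against the observed mean.

```python
import math

def mae(obs, pred):
    """Mean absolute error."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def rmse(obs, pred):
    """Root mean square error."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def r2(obs, pred):
    """1 - SS_res / SS_tot against the observed mean (also the CE form)."""
    mean = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

obs, pred = [1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8]
```

A perfect model gives MAE = RMSE = 0 and R2 = CE = 1, which is why the study ranks models by the smallest errors and the highest coefficients.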
Thermosensory processing in the Drosophila brain
Liu, Wendy W.; Mazor, Ofer; Wilson, Rachel I.
2014-01-01
In Drosophila, just as in vertebrates, changes in external temperature are encoded by bidirectional opponent thermoreceptor cells: some cells are excited by warming and inhibited by cooling, whereas others are excited by cooling and inhibited by warming [1, 2]. The central circuits that process these signals are not understood. In Drosophila, a specific brain region receives input from thermoreceptor cells [2, 3]. Here we show that distinct genetically-identified projection neurons (PNs) in this brain region are excited by cooling, warming, or both. The PNs excited by cooling receive mainly feedforward excitation from cool thermoreceptors. In contrast, the PNs excited by warming (“warm-PNs”) receive both excitation from warm thermoreceptors and crossover inhibition from cool thermoreceptors via inhibitory interneurons. Notably, this crossover inhibition elicits warming-evoked excitation, because warming suppresses tonic activity in cool thermoreceptors. This in turn disinhibits warm-PNs and sums with feedforward excitation evoked by warming. Crossover inhibition could cancel non-thermal activity (noise) that is positively-correlated among warm and cool thermoreceptor cells, while reinforcing thermal activity which is anti-correlated. Our results show how central circuits can combine signals from bidirectional opponent neurons to construct sensitive and robust neural codes. PMID:25739502
ERIC Educational Resources Information Center
Alderete, John; Tupper, Paul; Frisch, Stefan A.
2013-01-01
A significant problem in computational language learning is that of inferring the content of well-formedness constraints from input data. In this article, we approach the constraint induction problem as the gradual adjustment of subsymbolic constraints in a connectionist network. In particular, we develop a multi-layer feed-forward network that…
Wei, Q; Hu, Y
2009-01-01
The major hurdle for segmenting lung lobes in computed tomographic (CT) images is to identify fissure regions, which encase lobar fissures. Accurate identification of these regions is difficult due to the variable shape and appearance of the fissures, along with the low contrast and high noise associated with CT images. This paper studies the effectiveness of two texture analysis methods - the gray level co-occurrence matrix (GLCM) and the gray level run length matrix (GLRLM) - in identifying fissure regions from isotropic CT image stacks. To classify GLCM and GLRLM texture features, we applied a feed-forward back-propagation neural network and achieved the best classification accuracy utilizing 16 quantized levels for computing the GLCM and GLRLM texture features and 64 neurons in the input/hidden layers of the neural network. Tested on isotropic CT image stacks of 24 patients with the pathologic lungs, we obtained accuracies of 86% and 87% for identifying fissure regions using the GLCM and GLRLM methods, respectively. These accuracies compare favorably with surgeons/radiologists' accuracy of 80% for identifying fissure regions in clinical settings. This shows promising potential for segmenting lung lobes using the GLCM and GLRLM methods.
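The gray level co-occurrence matrix, the first of the two texture descriptors used here, simply counts how often pairs of quantized gray levels co-occur at a fixed displacement. A minimal sketch (a toy 3-level image; real use would apply it to the 16-level quantized CT patches):

```python
def glcm(image, levels, dx=1, dy=0):
    """Gray level co-occurrence matrix for one displacement (dx, dy).
    image is a 2-D list of integer gray levels in [0, levels)."""
    h, w = len(image), len(image[0])
    m = [[0] * levels for _ in range(levels)]
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                m[image[y][x]][image[y2][x2]] += 1   # count the level pair
    return m

img = [[0, 0, 1],
       [0, 1, 1],
       [2, 2, 2]]
cooc = glcm(img, levels=3)   # horizontal right-neighbor pairs
```

Texture features such as contrast, energy, and homogeneity are then computed from the normalized matrix and fed to the classifier.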
2014-03-27
0.8.0. The virtual machine's network adapter was set to internal network only to keep any outside traffic from interfering. A MySQL-based query...primary output of Fullstats is the ARFF file format, intended for use with the WEKA Java-based data mining software developed at the University of Waikato
Event-Based Computation of Motion Flow on a Neuromorphic Analog Neural Platform
Giulioni, Massimiliano; Lagorce, Xavier; Galluppi, Francesco; Benosman, Ryad B.
2016-01-01
Estimating the speed and direction of moving objects is a crucial component of agents behaving in a dynamic world. Biological organisms perform this task by means of the neural connections originating from their retinal ganglion cells. In artificial systems the optic flow is usually extracted by comparing activity of two or more frames captured with a vision sensor. Designing artificial motion flow detectors which are as fast, robust, and efficient as the ones found in biological systems is however a challenging task. Inspired by the architecture proposed by Barlow and Levick in 1965 to explain the spiking activity of the direction-selective ganglion cells in the rabbit's retina, we introduce an architecture for robust optical flow extraction with an analog neuromorphic multi-chip system. The task is performed by a feed-forward network of analog integrate-and-fire neurons whose inputs are provided by contrast-sensitive photoreceptors. Computation is supported by the precise time of spike emission, and the extraction of the optical flow is based on time lag in the activation of nearby retinal neurons. Mimicking ganglion cells our neuromorphic detectors encode the amplitude and the direction of the apparent visual motion in their output spiking pattern. Hereby we describe the architectural aspects, discuss its latency, scalability, and robustness properties and demonstrate that a network of mismatched delicate analog elements can reliably extract the optical flow from a simple visual scene. This work shows how precise time of spike emission used as a computational basis, biological inspiration, and neuromorphic systems can be used together for solving specific tasks. PMID:26909015
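The time-lag principle the detector relies on, reading out direction from which of two neighboring retinal units spikes first, can be caricatured in a few lines. Spike times are integer sets and the fixed lag is an assumption; this is purely illustrative, not the analog neuromorphic implementation.

```python
def motion_direction(spikes_a, spikes_b, delay=2):
    """Infer direction from the time lag between two neighboring 'retina'
    pixels A and B: A-then-B coincidences vote rightward, B-then-A leftward."""
    right = sum(1 for t in spikes_a if t + delay in spikes_b)
    left = sum(1 for t in spikes_b if t + delay in spikes_a)
    if right > left:
        return "right"
    if left > right:
        return "left"
    return "ambiguous"

# an edge sweeping rightward hits pixel A two ticks before pixel B
direction = motion_direction({0, 10, 20}, {2, 12, 22})
```

In the chip, this coincidence-with-delay logic is realized by integrate-and-fire neurons whose output spike pattern encodes both the direction and amplitude of the apparent motion.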
The iso-response method: measuring neuronal stimulus integration with closed-loop experiments
Gollisch, Tim; Herz, Andreas V. M.
2012-01-01
Throughout the nervous system, neurons integrate high-dimensional input streams and transform them into an output of their own. This integration of incoming signals involves filtering processes and complex non-linear operations. The shapes of these filters and non-linearities determine the computational features of single neurons and their functional roles within larger networks. A detailed characterization of signal integration is thus a central ingredient to understanding information processing in neural circuits. Conventional methods for measuring single-neuron response properties, such as reverse correlation, however, are often limited by the implicit assumption that stimulus integration occurs in a linear fashion. Here, we review a conceptual and experimental alternative that is based on exploring the space of those sensory stimuli that result in the same neural output. As demonstrated by recent results in the auditory and visual system, such iso-response stimuli can be used to identify the non-linearities relevant for stimulus integration, disentangle consecutive neural processing steps, and determine their characteristics with unprecedented precision. Automated closed-loop experiments are crucial for this advance, allowing rapid search strategies for identifying iso-response stimuli during experiments. Prime targets for the method are feed-forward neural signaling chains in sensory systems, but the method has also been successfully applied to feedback systems. Depending on the specific question, “iso-response” may refer to a predefined firing rate, single-spike probability, first-spike latency, or other output measures. Examples from different studies show that substantial progress in understanding neural dynamics and coding can be achieved once rapid online data analysis and stimulus generation, adaptive sampling, and computational modeling are tightly integrated into experiments. PMID:23267315
Is extreme learning machine feasible? A theoretical assessment (part I).
Liu, Xia; Lin, Shaobo; Fang, Jian; Xu, Zongben
2015-01-01
An extreme learning machine (ELM) is a feedforward neural network (FNN)-like learning system whose connections with output neurons are adjustable, while the connections with and within hidden neurons are randomly fixed. Numerous applications have demonstrated the feasibility and high efficiency of ELM-like systems. It has, however, remained open whether this holds for general applications. In this two-part paper, we conduct a comprehensive feasibility analysis of ELM. In Part I, we answer the question by theoretically justifying the following: 1) for some suitable activation functions, such as polynomials, Nadaraya-Watson and sigmoid functions, ELM-like systems can attain the theoretical generalization bound of FNNs with all connections adjusted, i.e., they do not degrade the generalization capability of the FNNs even when the connections with and within hidden neurons are randomly fixed; 2) the number of hidden neurons needed for an ELM-like system to achieve the theoretical bound can be estimated; and 3) whenever the activation function is a polynomial, the resulting hidden-layer output matrix has full column rank, so the generalized-inverse technique can be applied efficiently to yield the solution of an ELM-like system; furthermore, in the nonpolynomial case, Tikhonov regularization can be applied to guarantee weak regularity without sacrificing generalization capability. In Part II, however, we reveal a different aspect of the feasibility of ELM: there also exist activation functions that make the corresponding ELM degrade the generalization capability. The obtained results underlie the feasibility and efficiency of ELM-like systems, and yield various generalizations and improvements of the systems as well.
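A minimal numerical sketch of the ELM-like scheme this abstract describes: hidden connections are drawn at random and left fixed, and only the output weights are solved for via the generalized (Moore-Penrose) inverse of the hidden-layer output matrix. The network size, activation, weight scale, and toy target below are illustrative choices, not the paper's settings.

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=3.0, size=(X.shape[1], n_hidden))  # random, fixed
    b = rng.normal(size=n_hidden)                           # random, fixed
    H = np.tanh(X @ W + b)                # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y          # only output weights are adjusted
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Fit a smooth toy target; the random hidden layer is never trained.
X = np.linspace(-1, 1, 200).reshape(-1, 1)
y = np.sin(2 * X[:, 0])
W, b, beta = elm_fit(X, y)
max_err = np.max(np.abs(elm_predict(X, W, b, beta) - y))
```

The single least-squares solve replaces iterative backpropagation entirely, which is the source of ELM's speed advantage.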
Escobar, Gina M.; Maffei, Arianna; Miller, Paul
2014-01-01
The computation of direction selectivity requires that a cell respond to joint spatial and temporal characteristics of the stimulus that cannot be separated into independent components. Direction selectivity in ferret visual cortex is not present at the time of eye opening but instead develops in the days and weeks following eye opening in a process that requires visual experience with moving stimuli. Classic Hebbian or spike timing-dependent modification of excitatory feed-forward synaptic inputs is unable to produce direction-selective cells from unselective or weakly directionally biased initial conditions because inputs eventually grow so strong that they can independently drive cortical neurons, violating the joint spatial-temporal activation requirement. Furthermore, without some form of synaptic competition, cells cannot develop direction selectivity in response to training with bidirectional stimulation, as cells in ferret visual cortex do. We show that imposing a maximum lateral geniculate nucleus (LGN)-to-cortex synaptic weight allows neurons to develop direction-selective responses that maintain the requirement for joint spatial and temporal activation. We demonstrate that a novel form of inhibitory plasticity, postsynaptic activity-dependent long-term potentiation of inhibition (POSD-LTPi), which operates in the developing cortex at the time of eye opening, can provide synaptic competition and enables robust development of direction-selective receptive fields with unidirectional or bidirectional stimulation. We propose a general model of the development of spatiotemporal receptive fields that consists of two phases: an experience-independent establishment of initial biases, followed by an experience-dependent amplification or modification of these biases via correlation-based plasticity of excitatory inputs that compete against gradually increasing feed-forward inhibition. PMID:24598528
Zhou, Miaolei; Zhang, Qi; Wang, Jingyuan
2014-01-01
As a new type of smart material, magnetic shape memory alloy offers fast response and outstanding strain capability for microdrive and micropositioning actuators. The hysteresis nonlinearity of magnetic shape memory alloy actuators, however, limits system performance and further application. Here we propose a feedforward-feedback hybrid control method to improve control precision and mitigate the effects of this hysteresis nonlinearity. First, hysteresis compensation for the magnetic shape memory alloy actuator is implemented by a feedforward controller, an inverse hysteresis model based on the Krasnosel'skii-Pokrovskii operator. Second, classical Proportional-Integral-Derivative (PID) feedback control is combined with the feedforward control to form the hybrid control system; to further enhance the adaptive performance of the system and improve control accuracy, the classical PID feedback control is replaced by a Radial Basis Function (RBF) neural network self-tuning PID controller. The self-learning ability of the RBF neural network provides Jacobian information about the magnetic shape memory alloy actuator for on-line adjustment of the PID parameters. Finally, simulation results show that the proposed hybrid control method greatly improves the control precision of the magnetic shape memory alloy actuator: the maximum tracking error is reduced from 1.1% in the open-loop system to 0.43% in the hybrid control system. PMID:24828010
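The benefit of combining a feedforward (inverse-model) term with feedback can be shown on a much simpler system than the paper's: below, a first-order linear plant and proportional-only feedback stand in for the hysteretic actuator and the RBF-tuned PID controller. The feedforward term inverts the plant's steady-state gain, removing the offset that proportional feedback alone leaves. All plant and gain values are illustrative.

```python
# Toy plant: y' = -a*y + b*u, integrated with Euler steps.
a, b = 1.0, 2.0
kp, dt, steps = 2.0, 0.01, 2000
r = 1.0                                   # step reference to track

def track(use_feedforward):
    y = 0.0
    for _ in range(steps):
        u = kp * (r - y)                  # proportional feedback
        if use_feedforward:
            u += (a / b) * r              # feedforward: inverse steady-state gain
        y += dt * (-a * y + b * u)        # Euler step of the plant
    return abs(r - y)                     # final tracking error

err_hybrid = track(True)                  # feedforward + feedback
err_fb_only = track(False)                # feedback alone: steady-state offset
```

With proportional feedback alone the output settles at b*kp*r/(a + b*kp) = 0.8, a 20% error; adding the inverse-gain feedforward term drives the error to essentially zero.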
Sailamul, Pachaya; Jang, Jaeson; Paik, Se-Bum
2017-12-01
Correlated neural activities such as synchronization can significantly alter the characteristics of spike transfer between neural layers. However, it is not clear how this synchronization-dependent spike transfer is affected by the structure of convergent feedforward wiring. To address this question, we implemented computer simulations of model neural networks: a source and a target layer connected with different types of convergent wiring rules. In the Gaussian-Gaussian (GG) model, both the connection probability and the connection strength are given as Gaussian distributions over spatial distance. In the Uniform-Constant (UC) and Uniform-Exponential (UE) models, the connection probability density is a uniform constant within a certain range, while the connection strength is set as a constant value or an exponentially decaying function, respectively. We then examined how the spike transfer function is modulated under these conditions, while static or synchronized input patterns were introduced to simulate different levels of feedforward spike synchronization. We observed that the synchronization-dependent modulation of the transfer function differed noticeably across convergence conditions: the modulation of the spike transfer function was largest in the UC model and smallest in the UE model. Our analysis showed that this difference was induced by the different spike weight distributions that were generated from convergent synapses in each model. Our results suggest that the structure of feedforward convergence is a crucial factor for correlation-dependent spike control, and thus must be considered in understanding the mechanism of information transfer in the brain.
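The three wiring rules can be sketched as a single distance-dependent connection function; the distance units, Gaussian width, and range cutoff below are hypothetical values, not the paper's parameters.

```python
import numpy as np

def connect(d, rule, rng, sigma=1.0, rmax=2.0):
    """Return the weight of one feedforward connection at distance d (0 if absent)."""
    if rule == "GG":       # Gaussian probability, Gaussian strength
        p = np.exp(-d**2 / (2 * sigma**2))
        w = np.exp(-d**2 / (2 * sigma**2))
    elif rule == "UC":     # uniform probability within rmax, constant strength
        p, w = (1.0, 1.0) if d <= rmax else (0.0, 1.0)
    else:                  # "UE": uniform probability within rmax, exponential strength
        p, w = (1.0, np.exp(-d)) if d <= rmax else (0.0, 0.0)
    return w if rng.random() < p else 0.0

rng = np.random.default_rng(0)
dists = np.linspace(0.0, 3.0, 200)
w_uc = [connect(d, "UC", rng) for d in dists]   # all-or-none weights
w_ue = [connect(d, "UE", rng) for d in dists]   # graded, decaying weights
```

The resulting weight distributions differ (all-or-none for UC, graded for UE), which is the ingredient the abstract identifies as driving the different synchronization-dependent transfer functions.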
On adaptive learning rate that guarantees convergence in feedforward networks.
Behera, Laxmidhar; Kumar, Swagat; Patnaik, Awhan
2006-09-01
This paper investigates new learning algorithms (LF I and LF II) based on Lyapunov functions for the training of feedforward neural networks. Such algorithms have an interesting parallel with the popular backpropagation (BP) algorithm, in which the fixed learning rate is replaced by an adaptive learning rate computed using a convergence theorem based on Lyapunov stability theory. LF II, a modified version of LF I, has been introduced with the aim of avoiding local minima; this modification also helps improve convergence speed in some cases. Conditions for achieving a global minimum with this kind of algorithm have been studied in detail. The performances of the proposed algorithms are compared with the BP algorithm and extended Kalman filtering (EKF) on three benchmark function approximation problems: XOR, 3-bit parity, and 8-3 encoder. The comparisons are made in terms of the number of learning iterations and the computational time required for convergence. The proposed algorithms (LF I and II) converge much faster than the other two algorithms in attaining the same accuracy. Finally, the comparison is made on a complex two-dimensional (2-D) Gabor function, and the effect of the adaptive learning rate on convergence speed is verified. In a nutshell, the investigations made in this paper help us better understand the learning procedure of feedforward neural networks in terms of adaptive learning rate, convergence speed, and local minima.
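The core idea of replacing a fixed learning rate with one chosen to guarantee descent can be sketched with a simple backtracking rule (a stand-in for the paper's Lyapunov-derived rate; the actual LF I/II updates differ). A tiny 2-2-1 tanh network on the XOR task, one of the paper's benchmarks, serves as the example; the initialization and step schedule are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0., 1., 1., 0.])
params = [rng.normal(size=(2, 2)), np.zeros(2), rng.normal(size=2), 0.0]

def loss(p):
    W1, b1, W2, b2 = p
    return np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2)

def grads(p):
    W1, b1, W2, b2 = p
    h = np.tanh(X @ W1 + b1)
    d_out = 2 * (h @ W2 + b2 - y) / len(y)
    d_h = np.outer(d_out, W2) * (1 - h ** 2)
    return [X.T @ d_h, d_h.sum(axis=0), h.T @ d_out, d_out.sum()]

lr, hist = 1.0, [loss(params)]
for _ in range(200):
    g = grads(params)
    while True:                       # shrink lr until the step decreases the loss
        trial = [p - lr * gi for p, gi in zip(params, g)]
        if loss(trial) <= hist[-1] or lr < 1e-12:
            break
        lr *= 0.5
    params = trial
    hist.append(loss(params))
    lr *= 1.5                         # try a larger rate on the next step
```

Because each accepted step is required to lower the loss, the loss sequence is non-increasing by construction, which is the practical meaning of a convergence guarantee of this kind.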
Multi-Layered Feedforward Neural Networks for Image Segmentation
1991-12-01
Feed-forward neural network model for hunger and satiety related VAS score prediction.
Krishnan, Shaji; Hendriks, Henk F J; Hartvigsen, Merete L; de Graaf, Albert A
2016-07-07
An artificial neural network approach was chosen to model the outcome of the complex signaling pathways in the gastro-intestinal tract and other peripheral organs that eventually produce the satiety feeling in the brain upon feeding. A multilayer feed-forward neural network was trained with sets of experimental data relating concentration-time courses of plasma satiety hormones to Visual Analog Scales (VAS) scores. The network successfully predicted VAS responses from sets of satiety hormone data obtained in experiments using different food compositions. The correlation coefficients for the predicted VAS responses for test sets having i) a full set of three satiety hormones, ii) a set of only two satiety hormones, and iii) a set of only one satiety hormone were 0.96, 0.96, and 0.89, respectively. The predicted VAS responses discriminated the satiety effects of high satiating food types from less satiating food types both in orally fed and ileal infused forms. From this application of artificial neural networks, one may conclude that neural network models are very suitable to describe situations where behavior is complex and incompletely understood. However, training data sets that fit the experimental conditions need to be available.
Criteria for Choosing the Best Neural Network: Part 1
1991-07-24
In both classification and statistical settings, algorithms for selecting the number of hidden layer nodes in a three-layer feedforward neural network are presented, with the aim of determining a parsimonious neural network for use in prediction/generalization based on a given fixed learning sample.
Neuronal Inputs and Outputs of Aging and Longevity
Alcedo, Joy; Flatt, Thomas; Pasyukova, Elena G.
2013-01-01
An animal’s survival strongly depends on its ability to maintain homeostasis in response to the changing quality of its external and internal environment. This is achieved through intracellular and intercellular communication within and among different tissues. One of the organ systems that plays a major role in this communication and the maintenance of homeostasis is the nervous system. Here we highlight different aspects of the neuronal inputs and outputs of pathways that affect aging and longevity. Accordingly, we discuss how sensory inputs influence homeostasis and lifespan through the modulation of different types of neuronal signals, which reflects the complexity of the environmental cues that affect physiology. We also describe feedback, compensatory, and feed-forward mechanisms in these longevity-modulating pathways that are necessary for homeostasis. Finally, we consider the temporal requirements for these neuronal processes and the potential role of natural genetic variation in shaping the neurobiology of aging. PMID:23653632
Zhang, Qiang; Pi, Jingbo; Woods, Courtney G; Andersen, Melvin E
2009-06-15
Hormetic responses to xenobiotic exposure likely occur as a result of overcompensation by the homeostatic control systems operating in biological organisms. However, the mechanisms underlying overcompensation that leads to hormesis are still unclear. A well-known homeostatic circuit in the cell is the gene induction network comprising phase I, II and III metabolizing enzymes, which are responsible for xenobiotic detoxification, and in many cases, bioactivation. By formulating a differential equation-based computational model, we investigated in this study whether hormesis can arise from the operation of this gene/enzyme network. The model consists of two feedback and one feedforward controls. With the phase I negative feedback control, xenobiotic X activates nuclear receptors to induce cytochrome P450 enzyme, which bioactivates X into a reactive metabolite X'. With the phase II negative feedback control, X' activates transcription factor Nrf2 to induce phase II enzymes such as glutathione S-transferase and glutamate cysteine ligase, etc., which participate in a set of reactions that lead to the metabolism of X' into a less toxic conjugate X''. The feedforward control involves phase I to II cross-induction, in which the parent chemical X can also induce phase II enzymes directly through the nuclear receptor and indirectly through transcriptionally upregulating Nrf2. As a result of the active feedforward control, a steady-state hormetic relationship readily arises between the concentrations of the reactive metabolite X' and the extracellular parent chemical X to which the cell is exposed. The shape of dose-response evolves over time from initially monotonically increasing to J-shaped at the final steady state-a temporal sequence consistent with adaptation-mediated hormesis. The magnitude of the hormetic response is enhanced by increases in the feedforward gain, but attenuated by increases in the bioactivation or phase II feedback loop gains. 
Our study suggests a possibly common mechanism for the hormetic responses observed with many mutagens/carcinogens whose activities require bioactivation by phase I enzymes. Feedforward control, often operating in combination with negative feedback regulation in a homeostatic system, may be a general control theme responsible for steady-state hormesis.
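A reduced algebraic cartoon (not the paper's differential-equation model) of how a feedforward term bends the dose-response: bioactivation produces the reactive metabolite X' in proportion to X, while feedforward induction of phase II enzymes by X raises the removal rate supralinearly, so the steady-state X' concentration rises at low dose and then declines. All rate forms and gain values below are hypothetical.

```python
import numpy as np

k_act, g_ff = 1.0, 1.0            # hypothetical bioactivation and feedforward gains

def xprime_ss(X):
    # production ~ k_act*X; removal rate grows as 1 + g_ff*X**2 because the
    # parent chemical X feedforwardly induces the phase II removal machinery
    return k_act * X / (1.0 + g_ff * X ** 2)

X = np.linspace(0.0, 5.0, 500)
curve = xprime_ss(X)
peak = X[np.argmax(curve)]        # interior maximum: rise, then decline
```

In this cartoon the curve peaks at X = 1 (where the derivative (1 - X^2)/(1 + X^2)^2 vanishes) and then falls, a nonmonotonic shape; raising `g_ff` sharpens the decline, mirroring the abstract's claim that the hormetic response is enhanced by the feedforward gain.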
Miceli, Francesco; Soldovieri, Maria Virginia; Ambrosino, Paolo; De Maria, Michela; Migliore, Michele; Migliore, Rosanna; Taglialatela, Maurizio
2015-03-04
Mutations in Kv7.2 (KCNQ2) and Kv7.3 (KCNQ3) genes, encoding for voltage-gated K(+) channel subunits underlying the neuronal M-current, have been associated with a wide spectrum of early-onset epileptic disorders ranging from benign familial neonatal seizures to severe epileptic encephalopathies. The aim of the present work has been to investigate the molecular mechanisms of channel dysfunction caused by voltage-sensing domain mutations in Kv7.2 (R144Q, R201C, and R201H) or Kv7.3 (R230C) recently found in patients with epileptic encephalopathies and/or intellectual disability. Electrophysiological studies in mammalian cells transfected with human Kv7.2 and/or Kv7.3 cDNAs revealed that each of these four mutations stabilized the activated state of the channel, thereby producing gain-of-function effects, which are opposite to the loss-of-function effects produced by previously found mutations. Multistate structural modeling revealed that the R201 residue in Kv7.2, corresponding to R230 in Kv7.3, stabilized the resting and nearby voltage-sensing domain states by forming an intricate network of electrostatic interactions with neighboring negatively charged residues, a result also confirmed by disulfide trapping experiments. Using a realistic model of a feedforward inhibitory microcircuit in the hippocampal CA1 region, an increased excitability of pyramidal neurons was found upon incorporation of the experimentally defined parameters for mutant M-current, suggesting that changes in network interactions rather than in intrinsic cell properties may be responsible for the neuronal hyperexcitability by these gain-of-function mutations. Together, the present results suggest that gain-of-function mutations in Kv7.2/3 currents may cause human epilepsy with a severe clinical course, thus revealing a previously unexplored level of complexity in disease pathogenetic mechanisms.
Training Feedforward Neural Networks Using Symbiotic Organisms Search Algorithm.
Wu, Haizhou; Zhou, Yongquan; Luo, Qifang; Basset, Mohamed Abdel
2016-01-01
Symbiotic organisms search (SOS) is a new robust and powerful metaheuristic algorithm, which simulates the symbiotic interaction strategies adopted by organisms to survive and propagate in the ecosystem. In the supervised learning area, it is a challenging task to present a satisfactory and efficient training algorithm for feedforward neural networks (FNNs). In this paper, SOS is employed as a new method for training FNNs. To investigate the performance of this method, eight different datasets selected from the UCI machine learning repository are used for experiments, and the results are compared across seven metaheuristic algorithms. The results show that SOS performs better than the other algorithms for training FNNs in terms of convergence speed. It is also shown that an FNN trained by SOS has better accuracy than most of the compared algorithms.
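Metaheuristic FNN training treats all network weights as one flat vector and evolves a population of such vectors against a fitness function (the network's training error). The sketch below implements only the SOS "mutualism" phase as an illustration (full SOS adds commensalism and parasitism phases); the 2-3-1 network, XOR task, and population settings are illustrative choices, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(2)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0., 1., 1., 0.])

def fitness(v):                       # v: 13 weights of a 2-3-1 tanh network
    W1, b1 = v[:6].reshape(2, 3), v[6:9]
    W2, b2 = v[9:12], v[12]
    out = np.tanh(X @ W1 + b1) @ W2 + b2
    return np.mean((out - y) ** 2)    # training MSE as fitness (lower is better)

pop = rng.normal(size=(20, 13))
fit = np.array([fitness(v) for v in pop])
best_hist = [fit.min()]
for _ in range(100):
    for i in range(len(pop)):
        j = rng.integers(len(pop))
        if j == i:
            continue
        best = pop[fit.argmin()]
        mutual = (pop[i] + pop[j]) / 2          # "mutual vector" of the pair
        bf1, bf2 = rng.integers(1, 3, size=2)   # benefit factors in {1, 2}
        new_i = pop[i] + rng.random(13) * (best - mutual * bf1)
        new_j = pop[j] + rng.random(13) * (best - mutual * bf2)
        for k, cand in ((i, new_i), (j, new_j)):
            f = fitness(cand)
            if f < fit[k]:                      # greedy acceptance
                pop[k], fit[k] = cand, f
    best_hist.append(fit.min())
```

Greedy acceptance makes the best fitness non-increasing over generations, so progress is monotone even though the search itself is stochastic.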
Gentili, Pier Luigi; Gotoda, Hiroshi; Dolnik, Milos
Forecasting of aperiodic time series is a compelling challenge for science. In this work, we analyze aperiodic spectrophotometric data, proportional to the concentrations of two forms of a thermoreversible photochromic spiro-oxazine, that are generated when a cuvette containing a solution of the spiro-oxazine undergoes photoreaction and convection due to localized ultraviolet illumination. We construct the phase space for the system using Takens' theorem and we calculate the Lyapunov exponents and the correlation dimensions to ascertain the chaotic character of the time series. Finally, we predict the time series using three distinct methods: a feed-forward neural network, fuzzy logic, and a local nonlinear predictor. We compare the performances of these three methods.
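The first analysis step named above, phase-space reconstruction via Takens' theorem, can be sketched as a delay embedding of a scalar time series. The synthetic sine signal, embedding dimension, and delay below are illustrative stand-ins for the paper's spectrophotometric data and its chosen embedding parameters.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Takens delay embedding: rows are [x(t), x(t+tau), ..., x(t+(dim-1)*tau)]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

t = np.linspace(0, 40 * np.pi, 4000)
x = np.sin(t)                         # stand-in for the measured scalar signal
emb = delay_embed(x, dim=3, tau=25)   # reconstructed 3-D phase space
```

Lyapunov exponents and correlation dimensions are then estimated from neighbor statistics in this reconstructed space, and the same delay vectors serve as inputs for the forecasting models.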
Terzenidis, Nikos; Moralis-Pegios, Miltiadis; Mourgias-Alexandris, George; Vyrsokinos, Konstantinos; Pleros, Nikos
2018-04-02
Departing from traditional server-centric data center architectures towards disaggregated systems that can offer increased resource utilization at reduced cost and energy envelopes, the use of high-port switching with highly stringent latency and bandwidth requirements becomes a necessity. We present an optical switch architecture exploiting a hybrid broadcast-and-select/wavelength routing scheme with small-scale optical feedforward buffering. The architecture is experimentally demonstrated at 10Gb/s, reporting error-free performance with a power penalty of <2.5dB. Moreover, network simulations for a 256-node system, revealed low-latency values of only 605nsec, at throughput values reaching 80% when employing 2-packet-size optical buffers, while multi-rack network performance was also investigated.
Martin, E Anne; Muralidhar, Shruti; Wang, Zhirong; Cervantes, Diégo Cordero; Basu, Raunak; Taylor, Matthew R; Hunter, Jennifer; Cutforth, Tyler; Wilke, Scott A; Ghosh, Anirvan; Williams, Megan E
2015-11-17
Synaptic target specificity, whereby neurons make distinct types of synapses with different target cells, is critical for brain function, yet the mechanisms driving it are poorly understood. In this study, we demonstrate Kirrel3 regulates target-specific synapse formation at hippocampal mossy fiber (MF) synapses, which connect dentate granule (DG) neurons to both CA3 and GABAergic neurons. Here, we show Kirrel3 is required for formation of MF filopodia; the structures that give rise to DG-GABA synapses and that regulate feed-forward inhibition of CA3 neurons. Consequently, loss of Kirrel3 robustly increases CA3 neuron activity in developing mice. Alterations in the Kirrel3 gene are repeatedly associated with intellectual disabilities, but the role of Kirrel3 at synapses remained largely unknown. Our findings demonstrate that subtle synaptic changes during development impact circuit function and provide the first insight toward understanding the cellular basis of Kirrel3-dependent neurodevelopmental disorders.
Hunger Promotes Fear Extinction by Activation of an Amygdala Microcircuit
Verma, Dilip; Wood, James; Lach, Gilliard; Herzog, Herbert; Sperk, Guenther; Tasan, Ramon
2016-01-01
Emotions control evolutionarily-conserved behavior that is central to survival in a natural environment. Imbalance within emotional circuitries, however, may result in malfunction and manifestation of anxiety disorders. Thus, a better understanding of emotional processes and, in particular, the interaction of the networks involved is of considerable clinical relevance. Although neurobiological substrates of emotionally controlled circuitries are increasingly evident, their mutual influences are not. To investigate interactions between hunger and fear, we performed Pavlovian fear conditioning in fasted wild-type mice and in mice with genetic modification of a feeding-related gene. Furthermore, we analyzed in these mice the electrophysiological microcircuits underlying fear extinction. Short-term fasting before fear acquisition specifically impaired long-term fear memory, whereas fasting before fear extinction facilitated extinction learning. Furthermore, genetic deletion of the Y4 receptor reduced appetite and completely impaired fear extinction, a phenomenon that was rescued by fasting. A marked increase in feed-forward inhibition between the basolateral and central amygdala has been proposed as a synaptic correlate of fear extinction and involves activation of the medial intercalated cells. This form of plasticity was lost in Y4KO mice. Fasting before extinction learning, however, resulted in specific activation of the medial intercalated neurons and re-established the enhancement of feed-forward inhibition in this amygdala microcircuit of Y4KO mice. Hence, consolidation of fear and extinction memories is differentially regulated by hunger, suggesting that fasting and modification of feeding-related genes could augment the effectiveness of exposure therapy and provide novel drug targets for treatment of anxiety disorders. PMID:26062787
Fujita, Masahiko
2013-06-01
A new supervised learning theory is proposed for a hierarchical neural network with a single hidden layer of threshold units, which can approximate any continuous transformation, and applied to a cerebellar function to suppress the end-point variability of saccades. In motor systems, feedback control can reduce noise effects if the noise is added in a pathway from a motor center to a peripheral effector; however, it cannot reduce noise effects if the noise is generated in the motor center itself: a new control scheme is necessary for such noise. The cerebellar cortex is well known as a supervised learning system, and a novel theory of cerebellar cortical function developed in this study can explain the capability of the cerebellum to feedforwardly reduce noise effects, such as end-point variability of saccades. This theory assumes that a Golgi-granule cell system can encode the strength of a mossy fiber input as the state of neuronal activity of parallel fibers. By combining these parallel fiber signals with appropriate connection weights to produce a Purkinje cell output, an arbitrary continuous input-output relationship can be obtained. By incorporating such flexible computation and learning ability in a process of saccadic gain adaptation, a new control scheme in which the cerebellar cortex feedforwardly suppresses the end-point variability when it detects a variation in saccadic commands can be devised. Computer simulation confirmed the efficiency of such learning and showed a reduction in the variability of saccadic end points, similar to results obtained from experimental data.
Control of cerebellar granule cell output by sensory-evoked Golgi cell inhibition
Duguid, Ian; Branco, Tiago; Chadderton, Paul; Arlt, Charlotte; Powell, Kate; Häusser, Michael
2015-01-01
Classical feed-forward inhibition involves an excitation–inhibition sequence that enhances the temporal precision of neuronal responses by narrowing the window for synaptic integration. In the input layer of the cerebellum, feed-forward inhibition is thought to preserve the temporal fidelity of granule cell spikes during mossy fiber stimulation. Although this classical feed-forward inhibitory circuit has been demonstrated in vitro, the extent to which inhibition shapes granule cell sensory responses in vivo remains unresolved. Here we combined whole-cell patch-clamp recordings in vivo and dynamic clamp recordings in vitro to directly assess the impact of Golgi cell inhibition on sensory information transmission in the granule cell layer of the cerebellum. We show that the majority of granule cells in Crus II of the cerebrocerebellum receive sensory-evoked phasic and spillover inhibition prior to mossy fiber excitation. This preceding inhibition reduces granule cell excitability and sensory-evoked spike precision, but enhances sensory response reproducibility across the granule cell population. Our findings suggest that neighboring granule cells and Golgi cells can receive segregated and functionally distinct mossy fiber inputs, enabling Golgi cells to regulate the size and reproducibility of sensory responses. PMID:26432880
Altered cortical communication in amyotrophic lateral sclerosis.
Blain-Moraes, Stefanie; Mashour, George A; Lee, Heonsoo; Huggins, Jane E; Lee, Uncheol
2013-05-24
Amyotrophic lateral sclerosis (ALS) is a disorder associated primarily with the degeneration of the motor system. More recently, functional connectivity studies have demonstrated potentially adaptive changes in ALS brain organization, but disease-related changes in cortical communication remain unknown. We recruited individuals with ALS and age-matched controls to operate a brain-computer interface while electroencephalography was recorded over three sessions. Using normalized symbolic transfer entropy, we measured directed functional connectivity from frontal to parietal (feedback connectivity) and parietal to frontal (feedforward connectivity) regions. Feedback connectivity was not significantly different between groups, but feedforward connectivity was significantly higher in individuals with ALS. This result was consistent across a broad electroencephalographic spectrum (4-35 Hz), and in theta, alpha and beta frequency bands. Feedback connectivity has been associated with conscious state and was found to be independent of ALS symptom severity in this study, which may have significant implications for the detection of consciousness in individuals with advanced ALS. We suggest that increases in feedforward connectivity represent a compensatory response to the ALS-related loss of input such that sensory stimuli have sufficient strength to cross the threshold necessary for conscious processing in the global neuronal workspace.
The 'sensory tolerance limit': A hypothetical construct determining exercise performance?
Hureau, Thomas J; Romer, Lee M; Amann, Markus
2018-02-01
Neuromuscular fatigue compromises exercise performance and is determined by central and peripheral mechanisms. Interactions between the two components of fatigue can occur via neural pathways, including feedback and feedforward processes. This brief review discusses the influence of feedback and feedforward mechanisms on exercise limitation. In terms of feedback mechanisms, particular attention is given to group III/IV sensory neurons which link limb muscle with the central nervous system. Central corollary discharge, a copy of the neural drive from the brain to the working muscles, provides a signal from the motor system to sensory systems and is considered a feedforward mechanism that might influence fatigue and consequently exercise performance. We highlight findings from studies supporting the existence of a 'critical threshold of peripheral fatigue', a previously proposed hypothesis based on the idea that a negative feedback loop operates to protect the exercising limb muscle from severe threats to homeostasis during whole-body exercise. While the threshold theory remains to be disproven within a given task, it is not generalisable across different exercise modalities. The 'sensory tolerance limit', a more theoretical concept, may address this issue and explain exercise tolerance in more global terms and across exercise modalities. The 'sensory tolerance limit' can be viewed as a negative feedback loop which accounts for the sum of all feedback (locomotor muscles, respiratory muscles, organs, and muscles not directly involved in exercise) and feedforward signals processed within the central nervous system with the purpose of regulating the intensity of exercise to ensure that voluntary activity remains tolerable.
Regulation of spatial selectivity by crossover inhibition.
Cafaro, Jon; Rieke, Fred
2013-04-10
Signals throughout the nervous system diverge into parallel excitatory and inhibitory pathways that later converge on downstream neurons to control their spike output. Converging excitatory and inhibitory synaptic inputs can exhibit a variety of temporal relationships. A common motif is feedforward inhibition, in which an increase (decrease) in excitatory input precedes a corresponding increase (decrease) in inhibitory input. The delay of inhibitory input relative to excitatory input originates from an extra synapse in the circuit shaping inhibitory input. Another common motif is push-pull or "crossover" inhibition, in which increases (decreases) in excitatory input occur together with decreases (increases) in inhibitory input. Primate On midget ganglion cells receive primarily feedforward inhibition and On parasol cells receive primarily crossover inhibition; this difference provides an opportunity to study how each motif shapes the light responses of cell types that play a key role in visual perception. For full-field stimuli, feedforward inhibition abbreviated and attenuated responses of On midget cells, while crossover inhibition, though plentiful, had surprisingly little impact on the responses of On parasol cells. Spatially structured stimuli, however, could cause excitatory and inhibitory inputs to On parasol cells to increase together, adopting a temporal relation very much like that for feedforward inhibition. In this case, inhibitory inputs substantially abbreviated a cell's spike output. Thus inhibitory input shapes the temporal stimulus selectivity of both midget and parasol ganglion cells, but its impact on responses of parasol cells depends strongly on the spatial structure of the light inputs.
Mustafa, Yasmen A; Jaid, Ghydaa M; Alwared, Abeer I; Ebrahim, Mothana
2014-06-01
The application of advanced oxidation process (AOP) in the treatment of wastewater contaminated with oil was investigated in this study. The AOP investigated is the homogeneous photo-Fenton (UV/H2O2/Fe(+2)) process. The reaction is influenced by the input concentration of hydrogen peroxide H2O2, amount of the iron catalyst Fe(+2), pH, temperature, irradiation time, and concentration of oil in the wastewater. The removal efficiency for the used system at the optimal operational parameters (H2O2 = 400 mg/L, Fe(+2) = 40 mg/L, pH = 3, irradiation time = 150 min, and temperature = 30 °C) for 1,000 mg/L oil load was found to be 72%. The study examined the implementation of an artificial neural network (ANN) for the prediction and simulation of oil degradation in aqueous solution by the photo-Fenton process. The multilayered feed-forward networks were trained by using a backpropagation algorithm; a three-layer network with 22 neurons in the hidden layer gave optimal results. The results show that the ANN model can predict the experimental results with a high correlation coefficient (R² = 0.9949). The sensitivity analysis showed that all studied variables (H2O2, Fe(+2), pH, irradiation time, temperature, and oil concentration) have a strong effect on the oil degradation. The pH was found to be the most influential parameter with a relative importance of 20.6%.
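The multilayered feed-forward network trained by backpropagation, as described above, can be sketched in miniature. The toy below uses one input instead of the paper's six process variables and 16 hidden units instead of 22, with an invented target function, so it illustrates only the training mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (stand-in for the paper's process variables).
X = rng.uniform(-1, 1, (200, 1))
Y = np.sin(2 * X)

# One hidden layer of tanh units, linear output.
H = 16
W1, b1 = rng.normal(0, 0.5, (1, H)), np.zeros(H)
W2, b2 = rng.normal(0, 0.5, (H, 1)), np.zeros(1)
lr = 0.1

def forward(X):
    A = np.tanh(X @ W1 + b1)
    return A, A @ W2 + b2

_, P0 = forward(X)
mse0 = np.mean((P0 - Y) ** 2)        # error before training

for _ in range(5000):                 # full-batch gradient descent
    A, P = forward(X)
    err = P - Y                       # dL/dP (up to a constant factor)
    gW2 = A.T @ err / len(X)
    gb2 = err.mean(0)
    dA = (err @ W2.T) * (1 - A ** 2)  # backprop through tanh
    gW1 = X.T @ dA / len(X)
    gb1 = dA.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, P1 = forward(X)
mse1 = np.mean((P1 - Y) ** 2)        # error after training
```

The weight updates are the textbook chain-rule gradients; the training error drops substantially from its initial value.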
Prediction of Weld Penetration in FCAW of HSLA steel using Artificial Neural Networks
NASA Astrophysics Data System (ADS)
Asl, Y. Dadgar; Mostafa, N. B.; Panahizadeh R., V.; Seyedkashi, S. M. H.
2011-01-01
Flux-cored arc welding (FCAW) is a semiautomatic or automatic arc welding process that requires a continuously-fed consumable tubular electrode containing a flux. The main FCAW process parameters affecting the depth of penetration are welding current, arc voltage, nozzle-to-work distance, torch angle and welding speed. Shallow depth of penetration may contribute to failure of a welded structure since penetration determines the stress-carrying capacity of a welded joint. To avoid such occurrences, the welding process parameters influencing the weld penetration must be properly selected to obtain an acceptable weld penetration and hence a high quality joint. Artificial neural networks (ANN), also called neural networks (NN), are computational models used to express complex non-linear relationships between input and output data. In this paper, the artificial neural network (ANN) method is used to predict the effects of welding current, arc voltage, nozzle-to-work distance, torch angle and welding speed on weld penetration depth in gas shielded FCAW of a grade of high strength low alloy steel. 32 experimental runs were carried out using the bead-on-plate welding technique. Weld penetrations were measured and, on the basis of these 32 sets of experimental data, a feed-forward back-propagation neural network was created. 28 sets of the experiments were used as the training data and the remaining 4 sets were used for the testing phase of the network. The ANN has one hidden layer with eight neurons and was trained for 840 iterations. The comparison between the experimental results and ANN results showed that the trained network could predict the effects of the FCAW process parameters on weld penetration adequately.
Self-growing neural network architecture using crisp and fuzzy entropy
NASA Technical Reports Server (NTRS)
Cios, Krzysztof J.
1992-01-01
The paper briefly describes the self-growing neural network algorithm, CID3, which makes decision trees equivalent to hidden layers of a neural network. The algorithm generates a feedforward architecture using crisp and fuzzy entropy measures. The results of a real-life recognition problem of distinguishing defects in a glass ribbon and of a benchmark problem of differentiating two spirals are shown and discussed.
NASA Technical Reports Server (NTRS)
Hof, P. R.; Ungerleider, L. G.; Adams, M. M.; Webster, M. J.; Gattass, R.; Blumberg, D. M.; Morrison, J. H.; Bloom, F. E. (Principal Investigator)
1997-01-01
Previous immunohistochemical studies combined with retrograde tracing in macaque monkeys have demonstrated that corticocortical projections can be differentiated by their content of neurofilament protein. The present study analyzed the distribution of nonphosphorylated neurofilament protein in callosally projecting neurons located at the V1/V2 border. All of the retrogradely labeled neurons were located in layer III at the V1/V2 border and at an immediately adjacent zone of area V2. A quantitative analysis showed that the vast majority (almost 95%) of these interhemispheric projection neurons contain neurofilament protein immunoreactivity. This observation differs from data obtained in other sets of callosal connections, including homotypical interhemispheric projections in the prefrontal, temporal, and parietal association cortices, that were found to contain uniformly low proportions of neurofilament protein-immunoreactive neurons. Comparably, highly variable proportions of neurofilament protein-containing neurons have been reported in intrahemispheric corticocortical pathways, including feedforward and feedback visual connections. These results indicate that neurofilament protein is a prominent neurochemical feature that identifies a particular population of interhemispheric projection neurons at the V1/V2 border and suggest that this biochemical attribute may be critical for the function of this subset of callosal neurons.
Bringing Interpretability and Visualization with Artificial Neural Networks
ERIC Educational Resources Information Center
Gritsenko, Andrey
2017-01-01
Extreme Learning Machine (ELM) is a training algorithm for Single-Layer Feed-forward Neural Networks (SLFN). The difference in theory between ELM and other training algorithms is the existence of an explicitly-given solution due to the immutability of the initialized weights. In practice, ELMs achieve performance similar to that of other state-of-the-art…
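The "explicitly-given solution" the abstract refers to is easy to sketch: the hidden-layer weights are drawn at random and frozen, and only the linear output layer is solved, in closed form, by least squares. A toy regression of my own choosing (not the thesis's datasets):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (300, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

# ELM: random, immutable hidden weights; closed-form output weights.
L = 100
W = rng.normal(size=(2, L))          # frozen input-to-hidden weights
b = rng.normal(size=L)               # frozen hidden biases
H = np.tanh(X @ W + b)               # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # explicit solution

pred = H @ beta
mse = np.mean((pred - y) ** 2)
```

No iterative training occurs; the entire fit is the single least-squares solve, which is why ELM training is fast and deterministic given the random initialization.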
Dordek, Yedidyah; Soudry, Daniel; Meir, Ron; Derdikman, Dori
2016-01-01
Many recent models study the downstream projection from grid cells to place cells, while recent data have pointed out the importance of the feedback projection. We thus asked how grid cells are affected by the nature of the input from the place cells. We propose a single-layer neural network with feedforward weights connecting place-like input cells to grid cell outputs. Place-to-grid weights are learned via a generalized Hebbian rule. The architecture of this network highly resembles neural networks used to perform Principal Component Analysis (PCA). Both numerical results and analytic considerations indicate that if the components of the feedforward neural network are non-negative, the output converges to a hexagonal lattice. Without the non-negativity constraint, the output converges to a square lattice. Consistent with experiments, the grid spacing ratio between the first two consecutive modules is ~1.4. Our results express a possible linkage between place cell to grid cell interactions and PCA. DOI: http://dx.doi.org/10.7554/eLife.10094.001 PMID:26952211
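The link between the generalized Hebbian (Sanger) rule and PCA can be illustrated in the single-output case, where the rule reduces to Oja's rule and the weight vector converges to the first principal component of the input. A generic 2-D toy (the paper's non-negativity constraint would be one extra projection step, w = max(w, 0), which is omitted here):

```python
import numpy as np

rng = np.random.default_rng(2)

# Correlated 2-D inputs with a known covariance structure.
C = np.array([[3.0, 1.0], [1.0, 2.0]])
Lc = np.linalg.cholesky(C)
X = rng.standard_normal((5000, 2)) @ Lc.T

w = rng.standard_normal(2) * 0.1
eta = 0.01
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)   # Oja's rule: Hebbian term with decay

# Compare the learned weights with the leading covariance eigenvector.
evals, evecs = np.linalg.eigh(np.cov(X.T))
pc1 = evecs[:, np.argmax(evals)]
align = abs(w @ pc1) / np.linalg.norm(w)   # |cosine| of the angle to PC1
```

The decay term (-y² w) keeps the weight norm bounded, so the rule performs online PCA rather than diverging as plain Hebbian learning would.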
Training feed-forward neural networks with gain constraints
Hartman
2000-04-01
Inaccurate input-output gains (partial derivatives of outputs with respect to inputs) are common in neural network models when input variables are correlated or when data are incomplete or inaccurate. Accurate gains are essential for optimization, control, and other purposes. We develop and explore a method for training feedforward neural networks subject to inequality or equality-bound constraints on the gains of the learned mapping. Gain constraints are implemented as penalty terms added to the objective function, and training is done using gradient descent. Adaptive and robust procedures are devised for balancing the relative strengths of the various terms in the objective function, which is essential when the constraints are inconsistent with the data. The approach has the virtue that the model domain of validity can be extended via extrapolation training, which can dramatically improve generalization. The algorithm is demonstrated here on artificial and real-world problems with very good results and has been advantageously applied to dozens of models currently in commercial use.
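The penalty-term idea can be shown on the smallest possible case: a linear model y = wx + b, whose input-output gain dy/dx is exactly w. An upper bound on the gain is added to the squared-error objective as a squared-violation penalty (toy data, bound, and penalty weight are of my choosing, not the paper's commercial applications):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, 200)
y = 2.0 * x + 0.1 * rng.standard_normal(200)   # data suggest a gain of 2.0

g_max = 1.5    # inequality constraint on the gain: dy/dx = w <= 1.5
lam = 10.0     # penalty strength (the paper balances this adaptively)
lr = 0.01
w, b = 0.0, 0.0
for _ in range(2000):
    pred = w * x + b
    err = pred - y
    viol = max(0.0, w - g_max)           # constraint violation
    gw = 2 * np.mean(err * x) + lam * 2 * viol   # data gradient + penalty gradient
    gb = 2 * np.mean(err)
    w -= lr * gw
    b -= lr * gb
```

Because the constraint is inconsistent with the data (the data want w near 2), training settles at a compromise just above the bound; a larger lam would pin the gain more tightly to 1.5.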
Interlocked feedforward loops control cell-type-specific Rhodopsin expression in the Drosophila eye.
Johnston, Robert J; Otake, Yoshiaki; Sood, Pranidhi; Vogt, Nina; Behnia, Rudy; Vasiliauskas, Daniel; McDonald, Elizabeth; Xie, Baotong; Koenig, Sebastian; Wolf, Reinhard; Cook, Tiffany; Gebelein, Brian; Kussell, Edo; Nakagoshi, Hideki; Desplan, Claude
2011-06-10
How complex networks of activators and repressors lead to exquisitely specific cell-type determination during development is poorly understood. In the Drosophila eye, expression patterns of Rhodopsins define at least eight functionally distinct though related subtypes of photoreceptors. Here, we describe a role for the transcription factor gene defective proventriculus (dve) as a critical node in the network regulating Rhodopsin expression. dve is a shared component of two opposing, interlocked feedforward loops (FFLs). Orthodenticle and Dve interact in an incoherent FFL to repress Rhodopsin expression throughout the eye. In R7 and R8 photoreceptors, a coherent FFL relieves repression by Dve while activating Rhodopsin expression. Therefore, this network uses repression to restrict and combinatorial activation to induce cell-type-specific expression. Furthermore, Dve levels are finely tuned to yield cell-type- and region-specific repression or activation outcomes. This interlocked FFL motif may be a general mechanism to control terminal cell-fate specification. Copyright © 2011 Elsevier Inc. All rights reserved.
Functional approximation using artificial neural networks in structural mechanics
NASA Technical Reports Server (NTRS)
Alam, Javed; Berke, Laszlo
1993-01-01
The artificial neural networks (ANN) methodology is an outgrowth of research in artificial intelligence. In this study, the feed-forward network model that was proposed by Rumelhart, Hinton, and Williams was applied to the mapping of functions that are encountered in structural mechanics problems. Several different network configurations were chosen to train the available data for problems in materials characterization and structural analysis of plates and shells. By using the recall process, the accuracy of these trained networks was assessed.
Inverse kinematics problem in robotics using neural networks
NASA Technical Reports Server (NTRS)
Choi, Benjamin B.; Lawrence, Charles
1992-01-01
In this paper, Multilayer Feedforward Networks are applied to the robot inverse kinematic problem. The networks are trained with endeffector position and joint angles. After training, performance is measured by having the network generate joint angles for arbitrary endeffector trajectories. A 3-degree-of-freedom (DOF) spatial manipulator is used for the study. It is found that neural networks provide a simple and effective way to both model the manipulator inverse kinematics and circumvent the problems associated with algorithmic solution methods.
Training Feedforward Neural Networks Using Symbiotic Organisms Search Algorithm
Wu, Haizhou; Luo, Qifang
2016-01-01
Symbiotic organisms search (SOS) is a new robust and powerful metaheuristic algorithm, which simulates the symbiotic interaction strategies adopted by organisms to survive and propagate in the ecosystem. In the supervised learning area, it is a challenging task to present a satisfactory and efficient training algorithm for feedforward neural networks (FNNs). In this paper, SOS is employed as a new method for training FNNs. To investigate the performance of the aforementioned method, eight different datasets selected from the UCI machine learning repository are employed for experiment and the results are compared among seven metaheuristic algorithms. The results show that SOS performs better than other algorithms for training FNNs in terms of converging speed. It is also proven that an FNN trained by the method of SOS has better accuracy than most of the compared algorithms. PMID:28105044
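A compact version of the three SOS phases (mutualism, commensalism, parasitism) is sketched below. For brevity the objective is a toy sphere function standing in for an FNN's training error; in the paper, each organism would instead encode the network's full weight vector:

```python
import numpy as np

rng = np.random.default_rng(4)

def loss(v):
    # Stand-in objective; in the paper this would be the FNN's training MSE.
    return np.sum(v ** 2)

dim, n_org, iters = 5, 30, 200
eco = rng.uniform(-5, 5, (n_org, dim))         # the "ecosystem" of organisms
fit = np.array([loss(o) for o in eco])

for _ in range(iters):
    best = eco[np.argmin(fit)]
    for i in range(n_org):
        j = rng.choice([k for k in range(n_org) if k != i])
        # Mutualism: i and j both move toward the best organism.
        mutual = (eco[i] + eco[j]) / 2
        bf1, bf2 = rng.integers(1, 3, 2)        # benefit factors in {1, 2}
        for idx, bf in ((i, bf1), (j, bf2)):
            cand = eco[idx] + rng.random(dim) * (best - mutual * bf)
            f = loss(cand)
            if f < fit[idx]:                    # greedy acceptance
                eco[idx], fit[idx] = cand, f
        # Commensalism: i benefits from j; j is unaffected.
        cand = eco[i] + rng.uniform(-1, 1, dim) * (best - eco[j])
        f = loss(cand)
        if f < fit[i]:
            eco[i], fit[i] = cand, f
        # Parasitism: a mutated copy of i tries to displace j.
        parasite = eco[i].copy()
        mask = rng.random(dim) < 0.5
        parasite[mask] = rng.uniform(-5, 5, mask.sum())
        f = loss(parasite)
        if f < fit[j]:
            eco[j], fit[j] = parasite, f

best_fit = fit.min()
```

Unlike gradient-based FNN training, SOS needs no derivatives of the loss, which is why it can be dropped in as a trainer for arbitrary network architectures.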
Drexel, M.; Preidt, A.P.; Kirchmair, E.; Sperk, G.
2011-01-01
The subiculum is the major output area of the hippocampus. It is closely interconnected with the entorhinal cortex and other parahippocampal areas. In animal models of temporal lobe epilepsy (TLE) and in TLE patients it exhibits increased network excitability and may crucially contribute to the propagation of limbic seizures. Using immunohistochemistry and in situ hybridization, we investigated neuropathological changes affecting parvalbumin and calretinin containing neurons in the subiculum and other parahippocampal areas after kainic acid-induced status epilepticus. We observed prominent losses in parvalbumin containing interneurons in the subiculum and entorhinal cortex, and in the principal cell layers of the pre- and parasubiculum. Degeneration of parvalbumin-positive neurons was associated with significant precipitation of parvalbumin-immunoreactive debris 24 h after kainic acid injection. In the subiculum the superficial portion of the pyramidal cell layer was more severely affected than its deep part. In the entorhinal cortex, the deep layers were more severely affected than the superficial ones. The decrease in number of parvalbumin-positive neurons in the subiculum and entorhinal cortex correlated with the number of spontaneous seizures subsequently experienced by the rats. The loss of parvalbumin neurons thus may contribute to the development of spontaneous seizures. On the other hand, surviving parvalbumin neurons revealed markedly increased expression of parvalbumin mRNA notably in the pyramidal cell layer of the subiculum and in all layers of the entorhinal cortex. This indicates increased activity of these neurons aiming to compensate for the partial loss of this functionally important neuron population. Furthermore, calretinin-positive fibers terminating in the molecular layer of the subiculum, in sector CA1 of the hippocampus proper and in the entorhinal cortex degenerated together with their presumed perikarya in the thalamic nucleus reuniens. 
In addition, a significant loss of calretinin containing interneurons was observed in the subiculum. Notably, the loss in parvalbumin positive neurons in the subiculum equaled that in human TLE. It may result in marked impairment of feed-forward inhibition of the temporo-ammonic pathway and may significantly contribute to epileptogenesis. Similarly, the loss of calretinin-positive fiber tracts originating from the nucleus reuniens thalami significantly contributes to the rearrangement of neuronal circuitries in the subiculum and entorhinal cortex during epileptogenesis. PMID:21616128
Li, Ling-Yun; Xiong, Xiaorui R; Ibrahim, Leena A; Yuan, Wei; Tao, Huizhong W; Zhang, Li I
2015-07-01
Cortical inhibitory circuits play important roles in shaping sensory processing. In auditory cortex, however, functional properties of genetically identified inhibitory neurons are poorly characterized. By two-photon imaging-guided recordings, we specifically targeted 2 major types of cortical inhibitory neuron, parvalbumin (PV) and somatostatin (SOM) expressing neurons, in superficial layers of mouse auditory cortex. We found that PV cells exhibited broader tonal receptive fields with lower intensity thresholds and stronger tone-evoked spike responses compared with SOM neurons. The latter exhibited similar frequency selectivity as excitatory neurons. The broader/weaker frequency tuning of PV neurons was attributed to a broader range of synaptic inputs and stronger subthreshold responses elicited, which resulted in a higher efficiency in the conversion of input to output. In addition, onsets of both the input and spike responses of SOM neurons were significantly delayed compared with PV and excitatory cells. Our results suggest that PV and SOM neurons engage in auditory cortical circuits in different manners: while PV neurons may provide broadly tuned feedforward inhibition for a rapid control of ascending inputs to excitatory neurons, the delayed and more selective inhibition from SOM neurons may provide a specific modulation of feedback inputs on their distal dendrites. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
De Martino, Federico; Moerel, Michelle; Ugurbil, Kamil; Goebel, Rainer; Yacoub, Essa; Formisano, Elia
2015-12-29
Columnar arrangements of neurons with similar preference have been suggested as the fundamental processing units of the cerebral cortex. Within these columnar arrangements, feed-forward information enters at middle cortical layers whereas feedback information arrives at superficial and deep layers. This interplay of feed-forward and feedback processing is at the core of perception and behavior. Here we provide in vivo evidence consistent with a columnar organization of the processing of sound frequency in the human auditory cortex. We measure submillimeter functional responses to sound frequency sweeps at high magnetic fields (7 tesla) and show that frequency preference is stable through cortical depth in primary auditory cortex. Furthermore, we demonstrate that-in this highly columnar cortex-task demands sharpen the frequency tuning in superficial cortical layers more than in middle or deep layers. These findings are pivotal to understanding mechanisms of neural information processing and flow during the active perception of sounds.
Neural attractor network for application in visual field data classification.
Fink, Wolfgang
2004-07-07
The purpose was to introduce a novel method for computer-based classification of visual field data derived from perimetric examination that may act as a 'counsellor', providing an independent 'second opinion' to the diagnosing physician. The classification system consists of a Hopfield-type neural attractor network that obtains its input data from perimetric examination results. An iterative relaxation process determines the states of the neurons dynamically. Therefore, even 'noisy' perimetric output, e.g., early stages of a disease, may eventually be classified correctly according to the predefined idealized visual field defect (scotoma) patterns, stored as attractors of the network, that are found with diseases of the eye, optic nerve and the central nervous system. Preliminary tests of the classification system on real visual field data derived from perimetric examinations have shown a classification success of over 80%. Some of the main advantages of the Hopfield-attractor-network-based approach over feed-forward type neural networks are: (1) network architecture is defined by the classification problem; (2) no training is required to determine the neural coupling strengths; (3) assignment of an auto-diagnosis confidence level is possible by means of an overlap parameter and the Hamming distance. In conclusion, the novel method for computer-based classification of visual field data, presented here, furnishes a valuable first overview and an independent 'second opinion' in judging perimetric examination results, pointing towards a final diagnosis by a physician. It should not be considered a substitute for the diagnosing physician. Thanks to the worldwide accessibility of the Internet, the classification system offers a promising perspective towards modern computer-assisted diagnosis in both medicine and tele-medicine, in particular with respect to non-ophthalmic clinics or in communities where perimetric expertise is not readily available.
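The attractor-based classification can be sketched as a standard Hopfield network: Hebbian storage of prototype patterns, asynchronous relaxation to a fixed point, and an overlap score as the confidence measure. Random ±1 prototypes stand in below for the idealized scotoma templates:

```python
import numpy as np

rng = np.random.default_rng(5)
N = 64

# Two stored prototypes (stand-ins for idealized visual-field defect patterns).
patterns = rng.choice([-1, 1], size=(2, N))
W = (patterns.T @ patterns) / N     # Hebbian outer-product storage: no training
np.fill_diagonal(W, 0)

def relax(state, sweeps=10):
    # Asynchronous updates are guaranteed to descend the network energy.
    state = state.copy()
    for _ in range(sweeps):
        changed = False
        for i in range(N):
            s = 1 if W[i] @ state >= 0 else -1
            if s != state[i]:
                state[i] = s
                changed = True
        if not changed:                 # fixed point (attractor) reached
            break
    return state

# Corrupt prototype 0 ("noisy perimetric output") and relax to an attractor.
noisy = patterns[0].copy()
flip = rng.choice(N, 5, replace=False)
noisy[flip] *= -1
recovered = relax(noisy)
overlap = (recovered @ patterns[0]) / N      # confidence measure in [-1, 1]
hamming = np.sum(recovered != patterns[0])   # distance to the stored template
```

The overlap and Hamming distance correspond to the confidence measures listed in the abstract: the noisy input settles into the basin of the stored template it most resembles.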
Hendrickson, Phillip J.; Yu, Gene J.; Song, Dong; Berger, Theodore W.
2015-01-01
This paper reports on findings from a million-cell granule cell model of the rat dentate gyrus that was used to explore the contributions of local interneuronal and associational circuits to network-level activity. The model contains experimentally derived morphological parameters for granule cells, which each contain approximately 200 compartments, and biophysical parameters for granule cells, basket cells, and mossy cells that were based both on electrophysiological data and previously published models. Synaptic input to cells in the model consisted of glutamatergic AMPA-like EPSPs and GABAergic-like IPSPs from excitatory and inhibitory neurons, respectively. The main source of input to the model was from layer II entorhinal cortical neurons. Network connectivity was constrained by the topography of the system, and was derived from axonal transport studies, which provided details about the spatial spread of axonal terminal fields, as well as how subregions of the medial and lateral entorhinal cortices project to subregions of the dentate gyrus. Results of this study show that strong feedback inhibition from the basket cell population can cause high-frequency rhythmicity in granule cells, while the strength of feedforward inhibition serves to scale the total amount of granule cell activity. Results furthermore show that the topography of local interneuronal circuits can have just as strong an impact on the development of spatio-temporal clusters in the granule cell population as the perforant path topography does, both sharpening existing clusters and introducing new ones with a greater spatial extent. Finally, results show that the interactions between the inhibitory and associational loops can cause high frequency oscillations that are modulated by a low-frequency oscillatory signal. 
These results serve to further illustrate the importance of topographical constraints on a global signal processing feature of a neural network, while also illustrating how rich spatio-temporal and oscillatory dynamics can evolve from a relatively small number of interacting local circuits. PMID:26635545
Callosal responses in a retrosplenial column.
Sempere-Ferràndez, Alejandro; Andrés-Bayón, Belén; Geijo-Barrientos, Emilio
2018-04-01
The axons forming the corpus callosum sustain the interhemispheric communication across homotopic cortical areas. We have studied how neurons throughout the columnar extension of the retrosplenial cortex integrate the contralateral input from callosal projecting neurons in cortical slices. Our results show that pyramidal neurons in layers 2/3 and the large, thick-tufted pyramidal neurons in layer 5B showed larger excitatory callosal responses than layer 5A and layer 5B thin-tufted pyramidal neurons, while layer 6 remained silent to this input. Feed-forward inhibitory currents generated by fast spiking, parvalbumin expressing interneurons recruited by callosal axons mimicked the response size distribution of excitatory responses across pyramidal subtypes, being larger in those of superficial layers and in the layer 5B thick-tufted pyramidal cells. Overall, the combination of the excitatory and inhibitory currents evoked by callosal input had strong and opposing effects in different layers of the cortex; while layer 2/3 pyramidal neurons were powerfully inhibited, the thick-tufted but not thin-tufted pyramidal neurons in layer 5 were strongly recruited. We believe that these results will help to understand the functional role of callosal connections in physiology and disease.
Machine Learning Technique to Find Quantum Many-Body Ground States of Bosons on a Lattice
NASA Astrophysics Data System (ADS)
Saito, Hiroki; Kato, Masaya
2018-01-01
We have developed a variational method to obtain many-body ground states of the Bose-Hubbard model using feedforward artificial neural networks. A fully connected network with a single hidden layer works better than a fully connected network with multiple hidden layers, and a multilayer convolutional network is more efficient than a fully connected network. AdaGrad and Adam are optimization methods that work well. Moreover, we show that many-body ground states with different numbers of particles can be generated by a single network.
Stable Odor Recognition by a neuro-adaptive Electronic Nose
Martinelli, Eugenio; Magna, Gabriele; Polese, Davide; Vergara, Alexander; Schild, Detlev; Di Natale, Corrado
2015-01-01
Sensitivity, selectivity and stability are decisive properties of sensors. In chemical gas sensors odor recognition can be severely compromised by poor signal stability, particularly in real life applications where the sensors are exposed to unpredictable sequences of odors under changing external conditions. Although olfactory receptor neurons in the nose face similar stimulus sequences under likewise changing conditions, odor recognition is very stable and odorants can be reliably identified independently from past odor perception. We postulate that appropriate pre-processing of the output signals of chemical sensors substantially contributes to the stability of odor recognition, in spite of marked sensor instabilities. To investigate this hypothesis, we use an adaptive, unsupervised neural network inspired by the glomerular input circuitry of the olfactory bulb. Essentially the model reduces the effect of the sensors' instabilities by utilizing them via an adaptive multicompartment feed-forward inhibition. We collected and analyzed responses of a 4 × 4 gas sensor array to a number of volatile compounds applied over a period of 18 months, whereby every sensor was sampled episodically. The network conferred excellent stability to the compounds' identification and was clearly superior to standard classifiers, even when one of the sensors exhibited random fluctuations or stopped working altogether. PMID:26043043
Vehicle Engine Classification Using Spectral Tone-Pitch Vibration Indexing and Neural Network
Wei, Jie; Vongsy, Karmon; Mendoza-Schrock, Olga; Liu, Chi-Him
2015-01-01
As a non-invasive and remote sensor, the Laser Doppler Vibrometer (LDV) has found a broad spectrum of applications in various areas such as civil engineering, biomedical engineering, and even security and restoration within art museums. LDV is an ideal sensor to detect threats earlier and provide better protection to society, which is of utmost importance to military and law enforcement institutions. However, the use of LDV in situational surveillance, in particular vehicle classification, is still in its infancy due to the lack of systematic investigations on its behavioral properties. In this work, as a result of the pilot project initiated by Air Force Research Laboratory, the innate features of LDV data from many vehicles are examined, beginning with an investigation of feature differences compared to human speech signals. A spectral tone-pitch vibration indexing scheme is developed to capture the engine’s periodic vibrations and the associated fundamental frequencies over the vehicles’ surface. A two-layer feed-forward neural network with 20 intermediate neurons is employed to classify vehicles’ engines based on their spectral tone-pitch indices. The classification results using the proposed approach over the complete LDV dataset collected by the project are exceedingly encouraging; consistently higher than 96% accuracies are attained for all four types of engines collected from this project. PMID:26788417
Hamaguchi, Kosuke; Mooney, Richard
2012-01-01
Complex brain functions, such as the capacity to learn and modulate vocal sequences, depend on activity propagation in highly distributed neural networks. To explore the synaptic basis of activity propagation in such networks, we made dual in vivo intracellular recordings in anesthetized zebra finches from the input (nucleus HVC) and output (lateral magnocellular nucleus of the anterior nidopallium (LMAN)) neurons of a songbird cortico-basal ganglia (BG) pathway necessary to the learning and modulation of vocal motor sequences. These recordings reveal evidence of bidirectional interactions, rather than only feedforward propagation of activity from HVC to LMAN, as had been previously supposed. A combination of dual and triple recording configurations and pharmacological manipulations was used to map out circuitry by which activity propagates from LMAN to HVC. These experiments indicate that activity travels to HVC through at least two independent ipsilateral pathways, one of which involves fast signaling through a midbrain dopaminergic cell group, reminiscent of recurrent mesocortical loops described in mammals. We then used in vivo pharmacological manipulations to establish that augmented LMAN activity is sufficient to restore high levels of sequence variability in adult birds, suggesting that recurrent interactions through highly distributed forebrain–midbrain pathways can modulate learned vocal sequences. PMID:22915110
Ahnert, S E; Fink, T M A
2016-07-01
Network motifs have been studied extensively over the past decade, and certain motifs, such as the feed-forward loop, play an important role in regulatory networks. Recent studies have used Boolean network motifs to explore the link between form and function in gene regulatory networks and have found that the structure of a motif does not strongly determine its function, if this is defined in terms of the gene expression patterns the motif can produce. Here, we offer a different, higher-level definition of the 'function' of a motif, in terms of two fundamental properties of its dynamical state space as a Boolean network. One is the basin entropy, which is a complexity measure of the dynamics of Boolean networks. The other is the diversity of cyclic attractor lengths that a given motif can produce. Using these two measures, we examine all 104 topologically distinct three-node motifs and show that the structural properties of a motif, such as the presence of feedback loops and feed-forward loops, predict fundamental characteristics of its dynamical state space, which in turn determine aspects of its functional versatility. We also show that these higher-level properties have a direct bearing on real regulatory networks, as both basin entropy and cycle length diversity show a close correspondence with the prevalence, in neural and genetic regulatory networks, of the 13 connected motifs without self-interactions that have been studied extensively in the literature. © 2016 The Authors.
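Basin entropy, as used above, can be computed exhaustively for a three-node Boolean motif by enumerating all 2^3 states under synchronous updating and measuring the Shannon entropy of the attractor basin sizes. The sketch below uses one hypothetical choice of logic (a coherent type-1 FFL with AND integration and a self-sustaining input node), not a specific motif from the paper.

```python
from itertools import product
from math import log2

# One hypothetical three-node motif under synchronous Boolean updating:
# A sustains itself, A activates B, and C = A AND B (a coherent type-1
# feed-forward loop with AND integration).
def step(state):
    a, b, c = state
    return (a, a, a and b)

def basin_entropy(update, n=3):
    def attractor(s):
        seen = []
        while s not in seen:
            seen.append(s)
            s = update(s)
        return tuple(sorted(seen[seen.index(s):]))   # canonical cycle
    basins = {}
    for s in product((0, 1), repeat=n):              # all 2**n states
        basins.setdefault(attractor(s), []).append(s)
    weights = [len(v) / 2**n for v in basins.values()]
    return -sum(w * log2(w) for w in weights)        # entropy of basin sizes

h = basin_entropy(step)   # two equal fixed-point basins -> 1 bit
```

For this update rule the state space splits into two equal basins, the all-off and all-on fixed points, giving a basin entropy of exactly 1 bit; motifs with more attractors or more uneven basins score differently.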
Top-level dynamics and the regulated gene response of feed-forward loop transcriptional motifs.
Mayo, Michael; Abdelzaher, Ahmed; Perkins, Edward J; Ghosh, Preetam
2014-09-01
Feed-forward loops are hierarchical three-node transcriptional subnetworks, wherein a top-level protein regulates the activity of a target gene via two paths: a direct-regulatory path, and an indirect route, whereby the top-level proteins act implicitly through an intermediate transcription factor. Using a transcriptional network of the model bacterium Escherichia coli, we confirmed that nearly all types of feed-forward loop were significantly overrepresented in the bacterial network. We then used mathematical modeling to study their dynamics by manipulating the rise times of the top-level protein concentration, termed the induction time, through alteration of the protein destruction rates. Rise times of the regulated proteins exhibited two qualitatively different regimes, depending on whether top-level inductions were "fast" or "slow." In the fast regime, rise times were nearly independent of rapid top-level inductions, indicative of biological robustness, and occurred when the RNA production rate limits the protein yield. Alternatively, the protein rise times were dependent upon slower top-level inductions, greater than approximately one bacterial cell cycle. An equation is given for this crossover, which depends upon three parameters of the direct-regulatory path: transcriptional cooperation at the DNA-binding site, a protein-DNA dissociation constant, and the relative magnitude of the top-level protein concentration.
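The dependence of the regulated protein's rise time on the top-level induction speed can be sketched with a toy two-step model: the top-level protein X is induced exponentially at a rate set by its destruction rate, and the target Z is produced under Hill-type activation by X. All rate constants below are illustrative assumptions, not the paper's fitted parameters.

```python
from math import exp

def z_rise_time(alpha_x, beta_z=1.0, alpha_z=1.0, K=0.5, n=2,
                t_max=50.0, dt=0.001):
    """Time for the target protein Z to reach half its steady state when
    the top-level protein X is induced as X(t) = 1 - exp(-alpha_x * t)."""
    z = 0.0
    z_ss = beta_z / (alpha_z * (1.0 + K**n))     # steady state once X = 1
    for i in range(int(t_max / dt)):
        t = i * dt
        x = 1.0 - exp(-alpha_x * t)              # top-level induction
        hill = x**n / (K**n + x**n)              # activation of Z's promoter
        z += dt * (beta_z * hill - alpha_z * z)  # Euler step for Z
        if z >= 0.5 * z_ss:
            return t
    return t_max

fast = z_rise_time(alpha_x=10.0)   # fast top-level induction
slow = z_rise_time(alpha_x=0.1)    # induction slower than Z's own kinetics
```

In this toy version, Z's rise time barely changes once the top-level induction is much faster than Z's own degradation rate, but lengthens markedly when the induction is the slower process, qualitatively echoing the two regimes described above.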
Training trajectories by continuous recurrent multilayer networks.
Leistritz, L; Galicki, M; Witte, H; Kochs, E
2002-01-01
This paper addresses the problem of training trajectories by means of continuous recurrent neural networks whose feedforward parts are multilayer perceptrons. Such networks can approximate a general nonlinear dynamic system with arbitrary accuracy. The learning process is transformed into an optimal control framework where the weights are the controls to be determined. A training algorithm based upon a variational formulation of Pontryagin's maximum principle is proposed for such networks. Computer examples demonstrating the efficiency of the given approach are also presented.
Neural network feedforward control of a closed-circuit wind tunnel
NASA Astrophysics Data System (ADS)
Sutcliffe, Peter
Accurate control of wind-tunnel test conditions can be dramatically enhanced using feedforward control architectures which allow operating conditions to be maintained at a desired setpoint through the use of mathematical models as the primary source of prediction. However, as the desired accuracy of the feedforward prediction increases, the model complexity also increases, so that an ever increasing computational load is incurred. This drawback can be avoided by employing a neural network that is trained offline using the output of a high fidelity wind-tunnel mathematical model, so that the neural network can rapidly reproduce the predictions of the model with a greatly reduced computational overhead. A novel neural network database generation method, developed through the use of fractional factorial arrays, was employed such that a neural network can accurately predict wind-tunnel parameters across a wide range of operating conditions whilst trained upon a highly efficient database. The subsequent network was incorporated into a Neural Network Model Predictive Control (NNMPC) framework to allow an optimised output schedule capable of providing accurate control of the wind-tunnel operating parameters. Facilitation of an optimised path through the solution space is achieved through the use of a chaos optimisation algorithm such that a more globally optimum solution is likely to be found with less computational expense than the gradient descent method. The parameters associated with the NNMPC such as the control horizon are determined through the use of a Taguchi methodology enabling the minimum number of experiments to be carried out to determine the optimal combination. The resultant NNMPC scheme was employed upon the Hessert Low Speed Wind Tunnel at the University of Notre Dame to control the test-section temperature such that it follows a pre-determined reference trajectory during changes in the test-section velocity. 
Experimental testing revealed that the derived NNMPC controller provided an excellent level of control over the test-section temperature in adherence to a reference trajectory even when faced with unforeseen disturbances such as rapid changes in the operating environment.
Wang, Jie-Sheng; Han, Shuang
2015-01-01
For predicting the key technology indicators (concentrate grade and tailings recovery rate) of the flotation process, a feed-forward neural network (FNN) based soft-sensor model optimized by a hybrid algorithm combining the particle swarm optimization (PSO) algorithm and the gravitational search algorithm (GSA) is proposed. Although GSA has better optimization capability, it converges slowly and easily falls into local optima. In this paper, the velocity vector and position vector of GSA are therefore adjusted by the PSO algorithm in order to improve its convergence speed and prediction accuracy. Finally, the proposed hybrid algorithm is adopted to optimize the parameters of the FNN soft-sensor model. Simulation results show that the model has better generalization and prediction accuracy for the concentrate grade and tailings recovery rate, meeting the online soft-sensor requirements of real-time control in the flotation process. PMID:26583034
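A minimal sketch of the hybrid idea, assuming the usual PSO-GSA construction (GSA's gravitational acceleration term combined with a PSO-style pull toward the global best); the decay schedule, coefficients, and sphere test function are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):                                # toy objective to minimize
    return np.sum(x**2, axis=-1)

n, dim, iters = 20, 5, 100
X = rng.uniform(-5, 5, (n, dim))              # agent positions
V = np.zeros((n, dim))                        # agent velocities
gbest = X[np.argmin(sphere(X))].copy()

for t in range(iters):
    fit = sphere(X)
    best, worst = fit.min(), fit.max()
    m = (worst - fit) / (worst - best + 1e-12)   # better fitness -> larger mass
    M = m / (m.sum() + 1e-12)
    G = np.exp(-5.0 * t / iters)                 # decaying gravitational constant
    A = np.zeros_like(X)
    for i in range(n):                           # GSA acceleration on agent i
        diff = X - X[i]
        dist = np.linalg.norm(diff, axis=1) + 1e-12
        A[i] = np.sum((G * M * rng.random(n) / dist)[:, None] * diff, axis=0)
    # PSO-adjusted velocity: random inertia + GSA acceleration + pull to gbest
    V = rng.random((n, dim)) * V + 0.5 * A + 1.5 * rng.random((n, dim)) * (gbest - X)
    X = X + V
    cand = X[np.argmin(sphere(X))]
    if sphere(cand) < sphere(gbest):
        gbest = cand.copy()

final = float(sphere(gbest))   # should end far below the initial best
```

The PSO-style attraction toward `gbest` is what counteracts GSA's slow late-stage convergence; in the paper the same hybrid is used to fit the FNN weights rather than a toy objective.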
Foo, Mathias; Kim, Jongrae; Sawlekar, Rucha; Bates, Declan G
2017-04-06
Feedback control is widely used in chemical engineering to improve the performance and robustness of chemical processes. Feedback controllers require a 'subtractor' that is able to compute the error between the process output and the reference signal. In the case of embedded biomolecular control circuits, subtractors designed using standard chemical reaction network theory can only realise one-sided subtraction, rendering standard controller design approaches inadequate. Here, we show how a biomolecular controller that allows tracking of required changes in the outputs of enzymatic reaction processes can be designed and implemented within the framework of chemical reaction network theory. The controller architecture employs an inversion-based feedforward controller that compensates for the limitations of the one-sided subtractor that generates the error signals for a feedback controller. The proposed approach requires significantly fewer chemical reactions to implement than alternative designs, and should have wide applicability throughout the fields of synthetic biology and biological engineering.
Takeda, Kosuke; Shao, Danying; Adler, Micha; Charest, Pascale G; Loomis, William F; Levine, Herbert; Groisman, Alex; Rappel, Wouter-Jan; Firtel, Richard A
2012-01-03
Adaptation in signaling systems, during which the output returns to a fixed baseline after a change in the input, often involves negative feedback loops and plays a crucial role in eukaryotic chemotaxis. We determined the dynamical response to a uniform change in chemoattractant concentration of a eukaryotic chemotaxis pathway immediately downstream from G protein-coupled receptors. The response of an activated Ras showed near-perfect adaptation, leading us to attempt to fit the results using mathematical models for the two possible simple network topologies that can provide perfect adaptation. Only the incoherent feedforward network accurately described the experimental results. This analysis revealed that adaptation in this Ras pathway is achieved through the proportional activation of upstream components and not through negative feedback loops. Furthermore, these results are consistent with a local excitation, global inhibition mechanism for gradient sensing, possibly with a Ras guanosine triphosphatase-activating protein acting as a global inhibitor.
NASA Astrophysics Data System (ADS)
Lan, Ganhui
2015-09-01
We present here the analytical relation between the gain of the eukaryotic gradient sensing network and the associated thermodynamic cost. By analyzing a general incoherent type-1 feed-forward loop, we derive the gain function (G) of the reaction network and explicitly show that G depends on the nonequilibrium factor (0 ≤ γ ≤ 1, with γ = 0 and 1 representing irreversible and equilibrium reaction systems, respectively), the Michaelis constant (KM), and the turnover ratio (rcat) of the participating enzymes. We further find that the maximum possible gain is intrinsically determined by KM: Gmax = (1/KM + 2)/4. Our model also indicates that the dissipated energy (measured by −ln γ), from the intracellular energy-bearing bioparticles (e.g., ATP), is used to generate a force field Fγ ∝ (1 − √γ) that reshapes and disables the effective potential around the zero-gain region, which leads to the ultrasensitive response to external chemical gradients.
An application of artificial neural networks to experimental data approximation
NASA Technical Reports Server (NTRS)
Meade, Andrew J., Jr.
1993-01-01
As an initial step in the evaluation of networks, a feedforward architecture is trained to approximate experimental data by the backpropagation algorithm. Several drawbacks were detected and an alternative learning algorithm was then developed to partially address the drawbacks. This noniterative algorithm has a number of advantages over the backpropagation method and is easily implemented on existing hardware.
Multidimensional adaptive evolution of a feed-forward network and the illusion of compensation
Bullaughey, Kevin
2016-01-01
When multiple substitutions affect a trait in opposing ways, they are often assumed to be compensatory, not only with respect to the trait, but also with respect to fitness. This type of compensatory evolution has been suggested to underlie the evolution of protein structures and interactions, RNA secondary structures, and gene regulatory modules and networks. The possibility for compensatory evolution results from epistasis. Yet if epistasis is widespread, then it is also possible that the opposing substitutions are individually adaptive. I term this possibility an adaptive reversal. Although possible for arbitrary phenotype-fitness mappings, it has not yet been investigated whether such epistasis is prevalent in a biologically-realistic setting. I investigate a particular regulatory circuit, the type I coherent feed-forward loop, which is ubiquitous in natural systems and is accurately described by a simple mathematical model. I show that such reversals are common during adaptive evolution, can result solely from the topology of the fitness landscape, and can occur even when adaptation follows a modest environmental change and the network was well adapted to the original environment. The possibility of adaptive reversals warrants a systems perspective when interpreting substitution patterns in gene regulatory networks. PMID:23289561
Processing oscillatory signals by incoherent feedforward loops
NASA Astrophysics Data System (ADS)
Zhang, Carolyn; Wu, Feilun; Tsoi, Ryan; Shats, Igor; You, Lingchong
From the timing of amoeba development to the maintenance of stem cell pluripotency, many biological signaling pathways exhibit the ability to differentiate between pulsatile and sustained signals in the regulation of downstream gene expression. While networks underlying this signal decoding are diverse, many are built around a common motif, the incoherent feedforward loop (IFFL), where an input simultaneously activates an output and an inhibitor of the output. With appropriate parameters, this motif can generate temporal adaptation, where the system is desensitized to a sustained input. This property serves as the foundation for distinguishing signals with varying temporal profiles. Here, we use quantitative modeling to examine another property of IFFLs, the ability to process oscillatory signals. Our results indicate that the system's ability to translate pulsatile dynamics is limited by two constraints. The kinetics of IFFL components dictate the input range for which the network can decode pulsatile dynamics. In addition, a match between the network parameters and signal characteristics is required for optimal "counting". We elucidate one potential mechanism by which information processing occurs in natural networks, with implications in the design of synthetic gene circuits for this purpose. This work was partially supported by the National Science Foundation Graduate Research Fellowship (CZ).
Processing Oscillatory Signals by Incoherent Feedforward Loops
Zhang, Carolyn; You, Lingchong
2016-01-01
From the timing of amoeba development to the maintenance of stem cell pluripotency, many biological signaling pathways exhibit the ability to differentiate between pulsatile and sustained signals in the regulation of downstream gene expression. While the networks underlying this signal decoding are diverse, many are built around a common motif, the incoherent feedforward loop (IFFL), where an input simultaneously activates an output and an inhibitor of the output. With appropriate parameters, this motif can exhibit temporal adaptation, where the system is desensitized to a sustained input. This property serves as the foundation for distinguishing input signals with varying temporal profiles. Here, we use quantitative modeling to examine another property of IFFLs—the ability to process oscillatory signals. Our results indicate that the system’s ability to translate pulsatile dynamics is limited by two constraints. The kinetics of the IFFL components dictate the input range for which the network is able to decode pulsatile dynamics. In addition, a match between the network parameters and input signal characteristics is required for optimal “counting”. We elucidate one potential mechanism by which information processing occurs in natural networks, and our work has implications in the design of synthetic gene circuits for this purpose. PMID:27623175
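The temporal-adaptation property of the IFFL described above can be reproduced with a minimal "sniffer" model, in which the input S activates both the output R and an inhibitor I of R. With the mass-action forms below (hypothetical unit rate constants, not parameters from the paper), the steady state of R is independent of S, so R responds transiently to a step in S and then returns to baseline.

```python
# Sketch of the "sniffer" incoherent feedforward loop (hypothetical unit
# rate constants): input S activates both the output R and an inhibitor I.
#   dI/dt = k1*S - k2*I
#   dR/dt = k3*S - k4*I*R
# Steady state: R_ss = (k3*k2)/(k4*k1), independent of S -> adaptation.
k1 = k2 = k3 = k4 = 1.0
dt, T = 0.001, 40.0
I, R = 1.0, 1.0                          # start at the S = 1 steady state
trace = []
for i in range(int(T / dt)):             # forward-Euler integration
    S = 1.0 if i * dt < 10.0 else 2.0    # step increase in input at t = 10
    dI = k1 * S - k2 * I
    dR = k3 * S - k4 * I * R
    I += dt * dI
    R += dt * dR
    trace.append(R)

peak = max(trace)       # transient response to the step
final = trace[-1]       # relaxes back to the pre-step baseline of 1.0
```

Because I lags S, the output R overshoots when S doubles and is then pulled back down as I catches up; this is the desensitization to a sustained input that the abstract calls temporal adaptation.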
Network feedback regulates motor output across a range of modulatory neuron activity
Spencer, Robert M.
2016-01-01
Modulatory projection neurons alter network neuron synaptic and intrinsic properties to elicit multiple different outputs. Sensory and other inputs elicit a range of modulatory neuron activity that is further shaped by network feedback, yet little is known regarding how the impact of network feedback on modulatory neurons regulates network output across a physiological range of modulatory neuron activity. Identified network neurons, a fully described connectome, and a well-characterized, identified modulatory projection neuron enabled us to address this issue in the crab (Cancer borealis) stomatogastric nervous system. The modulatory neuron modulatory commissural neuron 1 (MCN1) activates and modulates two networks that generate rhythms via different cellular mechanisms and at distinct frequencies. MCN1 is activated at rates of 5–35 Hz in vivo and in vitro. Additionally, network feedback elicits MCN1 activity time-locked to motor activity. We asked how network activation, rhythm speed, and neuron activity levels are regulated by the presence or absence of network feedback across a physiological range of MCN1 activity rates. There were both similarities and differences in responses of the two networks to MCN1 activity. Many parameters in both networks were sensitive to network feedback effects on MCN1 activity. However, for most parameters, MCN1 activity rate did not determine the extent to which network output was altered by the addition of network feedback. These data demonstrate that the influence of network feedback on modulatory neuron activity is an important determinant of network output and feedback can be effective in shaping network output regardless of the extent of network modulation. PMID:27030739
Random Wiring, Ganglion Cell Mosaics, and the Functional Architecture of the Visual Cortex
Coppola, David; White, Leonard E.; Wolf, Fred
2015-01-01
The architecture of iso-orientation domains in the primary visual cortex (V1) of placental carnivores and primates apparently follows species invariant quantitative laws. Dynamical optimization models assuming that neurons coordinate their stimulus preferences throughout cortical circuits linking millions of cells specifically predict these invariants. This might indicate that V1’s intrinsic connectome and its functional architecture adhere to a single optimization principle with high precision and robustness. To validate this hypothesis, it is critical to closely examine the quantitative predictions of alternative candidate theories. Random feedforward wiring within the retino-cortical pathway represents a conceptually appealing alternative to dynamical circuit optimization because random dimension-expanding projections are believed to generically exhibit computationally favorable properties for stimulus representations. Here, we ask whether the quantitative invariants of V1 architecture can be explained as a generic emergent property of random wiring. We generalize and examine the stochastic wiring model proposed by Ringach and coworkers, in which iso-orientation domains in the visual cortex arise through random feedforward connections between semi-regular mosaics of retinal ganglion cells (RGCs) and visual cortical neurons. We derive closed-form expressions for cortical receptive fields and domain layouts predicted by the model for perfectly hexagonal RGC mosaics. Including spatial disorder in the RGC positions considerably changes the domain layout properties as a function of disorder parameters such as position scatter and its correlations across the retina. However, independent of parameter choice, we find that the model predictions substantially deviate from the layout laws of iso-orientation domains observed experimentally. 
Considering random wiring with the currently most realistic model of RGC mosaic layouts, a pairwise interacting point process, the predicted layouts remain distinct from experimental observations and resemble Gaussian random fields. We conclude that V1 layout invariants are specific quantitative signatures of visual cortical optimization, which cannot be explained by generic random feedforward-wiring models. PMID:26575467
Lateral presynaptic inhibition mediates gain control in an olfactory circuit.
Olsen, Shawn R; Wilson, Rachel I
2008-04-24
Olfactory signals are transduced by a large family of odorant receptor proteins, each of which corresponds to a unique glomerulus in the first olfactory relay of the brain. Crosstalk between glomeruli has been proposed to be important in olfactory processing, but it is not clear how these interactions shape the odour responses of second-order neurons. In the Drosophila antennal lobe (a region analogous to the vertebrate olfactory bulb), we selectively removed most interglomerular input to genetically identified second-order olfactory neurons. Here we show that this broadens the odour tuning of these neurons, implying that interglomerular inhibition dominates over interglomerular excitation. The strength of this inhibitory signal scales with total feedforward input to the entire antennal lobe, and has similar tuning in different glomeruli. A substantial portion of this interglomerular inhibition acts at a presynaptic locus, and our results imply that this is mediated by both ionotropic and metabotropic receptors on the same nerve terminal.
Bele, Tanja; Fabbretti, Elsa
2016-08-01
P2X3 receptors, gated by extracellular ATP, are expressed by sensory neurons and are involved in peripheral nociception and pain sensitization. The ability of P2X3 receptors to transduce extracellular stimuli into neuronal signals critically depends on the dynamic molecular partnership with the calcium/calmodulin-dependent serine protein kinase (CASK). The present work used trigeminal sensory neurons to study the impact that activation of P2X3 receptors (evoked by the agonist α,β-meATP) has on the release of endogenous ATP and how CASK modulates this phenomenon. P2X3 receptor function was followed by ATP efflux via Pannexin1 (Panx1) hemichannels, a mechanism that was blocked by the P2X3 receptor antagonist A-317491, and by P2X3 silencing. ATP efflux was enhanced by nerve growth factor, a treatment known to potentiate P2X3 receptor function. Basal ATP efflux was not controlled by CASK, and carbenoxolone or Pannexin silencing reduced ATP release upon P2X3 receptor function. CASK-controlled ATP efflux followed P2X3 receptor activity, but not depolarization-evoked ATP release. Molecular biology experiments showed that CASK was essential for the transactivation of Panx1 upon P2X3 receptor activation. These data suggest that P2X3 receptor function controls a new type of feed-forward purinergic signaling on surrounding cells, with consequences at the peripheral and spinal cord levels. Thus, P2X3 receptor-mediated ATP efflux may be considered for the future development of pharmacological strategies aimed at containing neuronal sensitization. P2X3 receptors are involved in sensory transduction and associate with CASK. We have studied in primary sensory neurons the molecular mechanisms downstream of P2X3 receptor activation, namely ATP release and partnership with CASK or Panx1. Our data suggest that CASK and P2X3 receptors are part of an ATP keeper complex, with important feed-forward consequences at the peripheral and central levels. © 2016 International Society for Neurochemistry.
Tromans, James Matthew; Harris, Mitchell; Stringer, Simon Maitland
2011-01-01
Experimental studies have provided evidence that the visual processing areas of the primate brain represent facial identity and facial expression within different subpopulations of neurons. For example, in non-human primates there is evidence that cells within the inferior temporal gyrus (TE) respond primarily to facial identity, while cells within the superior temporal sulcus (STS) respond to facial expression. More recently, it has been found that the orbitofrontal cortex (OFC) of non-human primates contains some cells that respond exclusively to changes in facial identity, while other cells respond exclusively to facial expression. How might the primate visual system develop physically separate representations of facial identity and expression given that the visual system is always exposed to simultaneous combinations of facial identity and expression during learning? In this paper, a biologically plausible neural network model, VisNet, of the ventral visual pathway is trained on a set of carefully-designed cartoon faces with different identities and expressions. The VisNet model architecture is composed of a hierarchical series of four Self-Organising Maps (SOMs), with associative learning in the feedforward synaptic connections between successive layers. During learning, the network develops separate clusters of cells that respond exclusively to either facial identity or facial expression. We interpret the performance of the network in terms of the learning properties of SOMs, which are able to exploit the statistical independence between facial identity and expression.
Contribution of synchronized GABAergic neurons to dopaminergic neuron firing and bursting.
Morozova, Ekaterina O; Myroshnychenko, Maxym; Zakharov, Denis; di Volo, Matteo; Gutkin, Boris; Lapish, Christopher C; Kuznetsov, Alexey
2016-10-01
In the ventral tegmental area (VTA), interactions between dopamine (DA) and γ-aminobutyric acid (GABA) neurons are critical for regulating DA neuron activity and thus DA efflux. To provide a mechanistic explanation of how GABA neurons influence DA neuron firing, we developed a circuit model of the VTA. The model is based on feed-forward inhibition and recreates canonical features of the VTA neurons. Simulations revealed that γ-aminobutyric acid (GABA) receptor (GABAR) stimulation can differentially influence the firing pattern of the DA neuron, depending on the level of synchronization among GABA neurons. Asynchronous activity of GABA neurons provides a constant level of inhibition to the DA neuron and, when removed, produces a classical disinhibition burst. In contrast, when GABA neurons are synchronized by common synaptic input, their influence evokes additional spikes in the DA neuron, resulting in increased measures of firing and bursting. Distinct from previous mechanisms, the increases were not based on lowered firing rate of the GABA neurons or weaker hyperpolarization by the GABAR synaptic current. This phenomenon was induced by GABA-mediated hyperpolarization of the DA neuron that leads to decreases in intracellular calcium (Ca2+) concentration, thus reducing the Ca2+-dependent potassium (K+) current. In this way, the GABA-mediated hyperpolarization replaces the Ca2+-dependent K+ current; however, this inhibition is pulsatile, which allows the DA neuron to fire during the rhythmic pauses in inhibition. Our results emphasize the importance of inhibition in the VTA, which has been discussed in many studies, and suggest a novel mechanism whereby computations can occur locally. Copyright © 2016 the American Physiological Society.
Contribution of synchronized GABAergic neurons to dopaminergic neuron firing and bursting
Myroshnychenko, Maxym; Zakharov, Denis; di Volo, Matteo; Gutkin, Boris; Lapish, Christopher C.; Kuznetsov, Alexey
2016-01-01
In the ventral tegmental area (VTA), interactions between dopamine (DA) and γ-aminobutyric acid (GABA) neurons are critical for regulating DA neuron activity and thus DA efflux. To provide a mechanistic explanation of how GABA neurons influence DA neuron firing, we developed a circuit model of the VTA. The model is based on feed-forward inhibition and recreates canonical features of the VTA neurons. Simulations revealed that γ-aminobutyric acid (GABA) receptor (GABAR) stimulation can differentially influence the firing pattern of the DA neuron, depending on the level of synchronization among GABA neurons. Asynchronous activity of GABA neurons provides a constant level of inhibition to the DA neuron and, when removed, produces a classical disinhibition burst. In contrast, when GABA neurons are synchronized by common synaptic input, their influence evokes additional spikes in the DA neuron, resulting in increased measures of firing and bursting. Distinct from previous mechanisms, the increases were not based on lowered firing rate of the GABA neurons or weaker hyperpolarization by the GABAR synaptic current. This phenomenon was induced by GABA-mediated hyperpolarization of the DA neuron that leads to decreases in intracellular calcium (Ca2+) concentration, thus reducing the Ca2+-dependent potassium (K+) current. In this way, the GABA-mediated hyperpolarization replaces Ca2+-dependent K+ current; however, this inhibition is pulsatile, which allows the DA neuron to fire during the rhythmic pauses in inhibition. Our results emphasize the importance of inhibition in the VTA, which has been discussed in many studies, and suggest a novel mechanism whereby computations can occur locally. PMID:27440240
Oscillatory stimuli differentiate adapting circuit topologies
Rahi, Sahand Jamal; Larsch, Johannes; Pecani, Kresti; Katsov, Alexander Y.; Mansouri, Nahal; Tsaneva-Atanasova, Krasimira; Sontag, Eduardo D.; Cross, Frederick R.
2017-01-01
Adapting pathways consist of negative feedback loops (NFLs) or incoherent feedforward loops (IFFLs), which we show can be differentiated using oscillatory stimulation: NFLs but not IFFLs generically show ‘refractory period stabilization’ or ‘period skipping’. Using these signatures and genetic rewiring, we identified the circuit dominating cell cycle timing in yeast. In C. elegans AWA neurons we uncovered a Ca2+-NFL, difficult to find by other means, especially in wild-type, intact animals. PMID:28846089
A P2P Botnet detection scheme based on decision tree and adaptive multilayer neural networks.
Alauthaman, Mohammad; Aslam, Nauman; Zhang, Li; Alasem, Rafe; Hossain, M A
2018-01-01
In recent years, Botnets have been adopted as a popular method to carry and spread many malicious codes on the Internet. These malicious codes pave the way to execute many fraudulent activities including spam mail, distributed denial-of-service attacks and click fraud. While many Botnets are set up using centralized communication architecture, the peer-to-peer (P2P) Botnets can adopt a decentralized architecture using an overlay network for exchanging command and control data making their detection even more difficult. This work presents a method of P2P Bot detection based on an adaptive multilayer feed-forward neural network in cooperation with decision trees. A classification and regression tree is applied as a feature selection technique to select relevant features. With these features, a multilayer feed-forward neural network training model is created using a resilient back-propagation learning algorithm. A comparison of feature set selection based on the decision tree, principal component analysis and the ReliefF algorithm indicated that the neural network model with features selection based on decision tree has a better identification accuracy along with lower rates of false positives. The usefulness of the proposed approach is demonstrated by conducting experiments on real network traffic datasets. In these experiments, an average detection rate of 99.08 % with false positive rate of 0.75 % was observed.
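The feature-selection step can be illustrated with the impurity criterion a decision-tree stump uses: rank features by the information gain of their best single-threshold split, then keep the top-k. The synthetic data below is a hypothetical stand-in for network-traffic features; the paper's actual pipeline applied a classification and regression tree to real traffic datasets.

```python
import numpy as np

rng = np.random.default_rng(2)

def entropy(y):
    p = np.bincount(y, minlength=2) / len(y)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def information_gain(x, y):
    """Best single-threshold split gain on feature x (the selection
    criterion of a decision-tree stump)."""
    y_sorted = y[np.argsort(x)]
    base, best = entropy(y), 0.0
    for i in range(1, len(y)):
        cond = (i * entropy(y_sorted[:i])
                + (len(y) - i) * entropy(y_sorted[i:])) / len(y)
        best = max(best, base - cond)
    return best

# Hypothetical stand-in for traffic features: columns 0 and 1 carry the
# class signal, columns 2-5 are pure noise.
n = 400
y = rng.integers(0, 2, n)
X = rng.normal(0.0, 1.0, (n, 6))
X[:, 0] += 2.0 * y
X[:, 1] -= 1.5 * y

gains = [information_gain(X[:, j], y) for j in range(6)]
selected = sorted(int(j) for j in np.argsort(gains)[-2:])   # top-2 features
```

Only the informative columns survive the ranking; in the paper, the reduced feature set is then what trains the multilayer feed-forward network.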
Center for Neural Engineering: applications of pulse-coupled neural networks
NASA Astrophysics Data System (ADS)
Malkani, Mohan; Bodruzzaman, Mohammad; Johnson, John L.; Davis, Joel
1999-03-01
The Pulse-Coupled Neural Network (PCNN) is an oscillatory neural network model in which cells form groups, and groups link with other groups, based on the synchronicity of their oscillations; the number of cells that fire at each input presentation forms an output time series, also called an `icon'. Recent work by Johnson and others demonstrated the functional capabilities of networks containing such elements for invariant feature extraction using intensity maps. The PCNN thus presents itself as a more biologically plausible model with solid functional potential. This paper presents a summary of several projects, and their results, in which we successfully applied the PCNN. In project one, the PCNN was applied to object recognition and classification through a robotic vision system. The features (icons) generated by the PCNN were then fed into a feedforward neural network for classification. In project two, we developed techniques for sensory data fusion. The PCNN algorithm was implemented and tested on a B14 mobile robot. The PCNN-based features were extracted from images taken by the robot vision system and used in conjunction with a map generated by data fusion of the sonar and wheel encoder data for the navigation of the mobile robot. In our third project, we applied the PCNN to speaker recognition. Spectrogram images of speech signals are fed into the PCNN to produce invariant feature icons, which are then fed into a feedforward neural network for speaker identification.
Reinforcement learning of targeted movement in a spiking neuronal model of motor cortex.
Chadderdon, George L; Neymotin, Samuel A; Kerr, Cliff C; Lytton, William W
2012-01-01
Sensorimotor control has traditionally been considered from a control theory perspective, without relation to neurobiology. In contrast, here we utilized a spiking-neuron model of motor cortex and trained it to perform a simple movement task, which consisted of rotating a single-joint "forearm" to a target. Learning was based on a reinforcement mechanism analogous to that of the dopamine system. This provided a global reward or punishment signal in response to decreasing or increasing distance from hand to target, respectively. Output was partially driven by Poisson motor babbling, creating stochastic movements that could then be shaped by learning. The virtual forearm consisted of a single segment rotated around an elbow joint, controlled by flexor and extensor muscles. The model consisted of 144 excitatory and 64 inhibitory event-based neurons, each with AMPA, NMDA, and GABA synapses. Proprioceptive cell input to this model encoded the 2 muscle lengths. Plasticity was only enabled in feedforward connections between input and output excitatory units, using spike-timing-dependent eligibility traces for synaptic credit or blame assignment. Learning resulted from a global 3-valued signal: reward (+1), no learning (0), or punishment (-1), corresponding to phasic increases, lack of change, or phasic decreases of dopaminergic cell firing, respectively. Successful learning only occurred when both reward and punishment were enabled. In this case, 5 target angles were learned successfully within 180 s of simulation time, with a median error of 8 degrees. Motor babbling allowed exploratory learning, but decreased the stability of the learned behavior, since the hand continued moving after reaching the target. Our model demonstrated that a global reinforcement signal, coupled with eligibility traces for synaptic plasticity, can train a spiking sensorimotor network to perform goal-directed motor behavior.
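The credit-assignment rule described above can be sketched in a few lines. In this hedged sketch, coincident pre/post spiking leaves a decaying eligibility trace, and a global 3-valued reward signal (+1 / 0 / -1, standing in for phasic dopamine changes) converts the trace into a weight change; the learning rate, decay constant, and spike pattern are illustrative, not the paper's values.

```python
# Sketch of reward-gated plasticity with spike-timing eligibility traces.

def plasticity_step(w, trace, pre_spike, post_spike, reward,
                    lr=0.05, trace_decay=0.9):
    # coincident spiking marks the synapse as eligible
    trace = trace_decay * trace + (1.0 if (pre_spike and post_spike) else 0.0)
    # the global reward signal turns eligibility into credit or blame
    w = w + lr * reward * trace
    return w, trace

w, trace = 0.5, 0.0
for step in range(20):
    pre = post = (step % 3 == 0)   # occasional coincident spikes
    reward = 1                     # movements kept approaching the target
    w, trace = plasticity_step(w, trace, pre, post, reward)
print(w > 0.5)   # rewarded coincidences strengthen the synapse
```

With reward = -1 the same rule weakens recently eligible synapses, which is why the study finds that both reward and punishment are needed.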
Neuromimetic Circuits with Synaptic Devices Based on Strongly Correlated Electron Systems
NASA Astrophysics Data System (ADS)
Ha, Sieu D.; Shi, Jian; Meroz, Yasmine; Mahadevan, L.; Ramanathan, Shriram
2014-12-01
Strongly correlated electron systems such as the rare-earth nickelates (R NiO3 , R denotes a rare-earth element) can exhibit synapselike continuous long-term potentiation and depression when gated with ionic liquids, exploiting the extreme sensitivity of coupled charge, spin, orbital, and lattice degrees of freedom to stoichiometry. We present experimental real-time, device-level classical conditioning and unlearning using nickelate-based synaptic devices in an electronic circuit compatible with both excitatory and inhibitory neurons. We establish a physical model for the device behavior based on electric-field-driven coupled ionic-electronic diffusion that can be utilized for design of more complex systems. We use the model to simulate a variety of associative and nonassociative learning mechanisms, as well as a feedforward recurrent network for storing memory. Our circuit intuitively parallels biological neural architectures, and it can be readily generalized to other forms of cellular learning and extinction. The simulation of neural function with electronic device analogs may provide insight into biological processes such as decision making, learning, and adaptation, while facilitating advanced parallel information processing in hardware.
Network feedback regulates motor output across a range of modulatory neuron activity.
Spencer, Robert M; Blitz, Dawn M
2016-06-01
Modulatory projection neurons alter network neuron synaptic and intrinsic properties to elicit multiple different outputs. Sensory and other inputs elicit a range of modulatory neuron activity that is further shaped by network feedback, yet little is known regarding how the impact of network feedback on modulatory neurons regulates network output across a physiological range of modulatory neuron activity. Identified network neurons, a fully described connectome, and a well-characterized, identified modulatory projection neuron enabled us to address this issue in the crab (Cancer borealis) stomatogastric nervous system. The modulatory neuron modulatory commissural neuron 1 (MCN1) activates and modulates two networks that generate rhythms via different cellular mechanisms and at distinct frequencies. MCN1 is activated at rates of 5-35 Hz in vivo and in vitro. Additionally, network feedback elicits MCN1 activity time-locked to motor activity. We asked how network activation, rhythm speed, and neuron activity levels are regulated by the presence or absence of network feedback across a physiological range of MCN1 activity rates. There were both similarities and differences in responses of the two networks to MCN1 activity. Many parameters in both networks were sensitive to network feedback effects on MCN1 activity. However, for most parameters, MCN1 activity rate did not determine the extent to which network output was altered by the addition of network feedback. These data demonstrate that the influence of network feedback on modulatory neuron activity is an important determinant of network output and feedback can be effective in shaping network output regardless of the extent of network modulation. Copyright © 2016 the American Physiological Society.
Neuronal correlates of a virtual-reality-based passive sensory P300 network.
Chen, Chun-Chuan; Syue, Kai-Syun; Li, Kai-Chiun; Yeh, Shih-Ching
2014-01-01
P300, a positive event-related potential (ERP) evoked at around 300 ms after stimulus, can be elicited using an active or passive oddball paradigm. Active P300 requires a person's intentional response, whereas passive P300 does not require an intentional response. Passive P300 has been used in incommunicative patients for consciousness detection and brain computer interface. Active and passive P300 differ in amplitude, but not in latency or scalp distribution. However, no study has addressed the mechanism underlying the production of passive P300. In particular, it remains unclear whether the passive P300 shares an identical active P300 generating network architecture when no response is required. This study aims to explore the hierarchical network of passive sensory P300 production using dynamic causal modelling (DCM) for ERP and a novel virtual reality (VR)-based passive oddball paradigm. Moreover, we investigated the causal relationship of this passive P300 network and the changes in connection strength to address the possible functional roles. A classical ERP analysis was performed to verify that the proposed VR-based game can reliably elicit passive P300. The DCM results suggested that the passive and active P300 share the same parietal-frontal neural network for attentional control and, underlying the passive network, the feed-forward modulation is stronger than the feed-back one. The functional role of this forward modulation may indicate the delivery of sensory information, automatic detection of differences, and stimulus-driven attentional processes involved in performing this passive task. To our best knowledge, this is the first study to address the passive P300 network. The results of this study may provide a reference for future clinical studies on addressing the network alternations under pathological states of incommunicative patients. However, caution is required when comparing patients' analytic results with this study. 
For example, the task presented here is not applicable to incommunicative patients. PMID:25401520
A recurrent neural model for proto-object based contour integration and figure-ground segregation.
Hu, Brian; Niebur, Ernst
2017-12-01
Visual processing of objects makes use of both feedforward and feedback streams of information. However, the nature of feedback signals is largely unknown, as is the identity of the neuronal populations in lower visual areas that receive them. Here, we develop a recurrent neural model to address these questions in the context of contour integration and figure-ground segregation. A key feature of our model is the use of grouping neurons whose activity represents tentative objects ("proto-objects") based on the integration of local feature information. Grouping neurons receive input from an organized set of local feature neurons, and project modulatory feedback to those same neurons. Additionally, inhibition at both the local feature level and the object representation level biases the interpretation of the visual scene in agreement with principles from Gestalt psychology. Our model explains several sets of neurophysiological results (Zhou et al. Journal of Neuroscience, 20(17), 6594-6611 2000; Qiu et al. Nature Neuroscience, 10(11), 1492-1499 2007; Chen et al. Neuron, 82(3), 682-694 2014), and makes testable predictions about the influence of neuronal feedback and attentional selection on neural responses across different visual areas. Our model also provides a framework for understanding how object-based attention is able to select both objects and the features associated with them.
ERIC Educational Resources Information Center
Chen, Chau-Kuang
2010-01-01
Artificial Neural Network (ANN) and Support Vector Machine (SVM) approaches have been on the cutting edge of science and technology for pattern recognition and data classification. In the ANN model, classification accuracy can be achieved by using the feed-forward of inputs, back-propagation of errors, and the adjustment of connection weights. In…
On the use of ANN interconnection weights in optimal structural design
NASA Technical Reports Server (NTRS)
Hajela, P.; Szewczyk, Z.
1992-01-01
The present paper describes the use of the interconnection weights of a multilayer, feedforward network to extract information pertinent to the mapping space that the network is assumed to represent. In particular, these weights can be used to determine whether an appropriate network architecture and an adequate number of training patterns (input-output pairs) have been used for network training. The weight analysis also provides an approach to assess the influence of each input parameter on a selected output component. The paper shows the significance of this information in decomposition driven optimal design.
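One common way to read input influence off trained weights is a Garson-style relevance score built from products of absolute input-to-hidden and hidden-to-output weights. The sketch below uses this textbook variant (the paper's exact analysis may differ) with made-up weight matrices.

```python
# Hedged sketch of weight-based input-influence analysis (Garson-style):
# share of each hidden unit's |weight| mass attributed to each input,
# weighted by that hidden unit's |output weight|, then normalized.

def input_relevance(w_ih, w_ho):
    """w_ih[j][i]: input i -> hidden j weight; w_ho[j]: hidden j -> output."""
    n_in = len(w_ih[0])
    raw = [0.0] * n_in
    for j, row in enumerate(w_ih):
        denom = sum(abs(x) for x in row)
        for i, x in enumerate(row):
            raw[i] += (abs(x) / denom) * abs(w_ho[j])
    total = sum(raw)
    return [r / total for r in raw]   # relevances sum to 1

w_ih = [[2.0, 0.1], [1.5, 0.2]]   # hidden x input (illustrative values)
w_ho = [1.0, -0.8]
rel = input_relevance(w_ih, w_ho)
print([round(r, 2) for r in rel])   # input 0 dominates this toy mapping
```

An input with near-zero relevance is a candidate for removal, which is how such an analysis supports decomposition-driven design.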
Forecasting the mortality rates of Indonesian population by using neural network
NASA Astrophysics Data System (ADS)
Safitri, Lutfiani; Mardiyati, Sri; Rahim, Hendrisman
2018-03-01
A model that can represent the problem is required to conduct a forecast. One of the models acknowledged by the actuarial community for forecasting mortality rates is the Lee-Carter model. Here, the Lee-Carter model supported by a neural network is used to forecast mortality in Indonesia. The type of neural network used is a feedforward neural network trained with the backpropagation algorithm, implemented in the Python programming language. The final result of this study is the forecasted mortality rate of Indonesia for the next few years.
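The Lee-Carter structure log m(x,t) = a_x + b_x k_t can be sketched compactly. The sketch below simplifies in two places: k_t is taken as the column sums of the centred log-rates rather than the usual SVD factor, and a random-walk-with-drift extrapolation stands in for the paper's feedforward-network forecaster; the rates are toy numbers.

```python
import math

# Simplified Lee-Carter sketch: estimate the age profile a_x and a period
# index k_t from a small mortality table, then extrapolate k_t.

def lee_carter(rates):
    """rates[x][t]: mortality rate at age group x, year t."""
    log_m = [[math.log(r) for r in row] for row in rates]
    a = [sum(row) / len(row) for row in log_m]                 # age profile a_x
    centred = [[v - a[x] for v in row] for x, row in enumerate(log_m)]
    k = [sum(centred[x][t] for x in range(len(rates)))
         for t in range(len(rates[0]))]                        # period index k_t
    return a, k

def forecast_k(k, horizon):
    # random walk with drift, a stand-in for the neural-network forecaster
    drift = (k[-1] - k[0]) / (len(k) - 1)
    return [k[-1] + drift * (h + 1) for h in range(horizon)]

rates = [[0.010, 0.009, 0.008, 0.007],    # younger age group, 4 years
         [0.020, 0.019, 0.017, 0.016]]    # older age group
a, k = lee_carter(rates)
print(forecast_k(k, 2))   # improving mortality drives k_t downward
```

In the paper's setup, the extrapolation step would be replaced by a backpropagation-trained feedforward network fitted to the k_t series.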
An intelligent control system for failure detection and controller reconfiguration
NASA Technical Reports Server (NTRS)
Biswas, Saroj K.
1994-01-01
We present an architecture of an intelligent restructurable control system to automatically detect failure of system components, assess its impact on system performance and safety, and reconfigure the controller for performance recovery. Fault detection is based on neural network associative memories and pattern classifiers, and is implemented using a multilayer feedforward network. Details of the fault detection network along with simulation results on health monitoring of a dc motor have been presented. Conceptual developments for fault assessment using an expert system and controller reconfiguration using a neural network are outlined.
Mean-field equations for neuronal networks with arbitrary degree distributions.
Nykamp, Duane Q; Friedman, Daniel; Shaker, Sammy; Shinn, Maxwell; Vella, Michael; Compte, Albert; Roxin, Alex
2017-04-01
The emergent dynamics in networks of recurrently coupled spiking neurons depends on the interplay between single-cell dynamics and network topology. Most theoretical studies on network dynamics have assumed simple topologies, such as connections that are made randomly and independently with a fixed probability (Erdös-Rényi network) (ER) or all-to-all connected networks. However, recent findings from slice experiments suggest that the actual patterns of connectivity between cortical neurons are more structured than in the ER random network. Here we explore how introducing additional higher-order statistical structure into the connectivity can affect the dynamics in neuronal networks. Specifically, we consider networks in which the number of presynaptic and postsynaptic contacts for each neuron, the degrees, are drawn from a joint degree distribution. We derive mean-field equations for a single population of homogeneous neurons and for a network of excitatory and inhibitory neurons, where the neurons can have arbitrary degree distributions. Through analysis of the mean-field equations and simulation of networks of integrate-and-fire neurons, we show that such networks have potentially much richer dynamics than an equivalent ER network. Finally, we relate the degree distributions to so-called cortical motifs.
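Networks with a prescribed joint degree distribution can be generated configuration-model style: draw an (in-degree, out-degree) pair per neuron, then randomly match outgoing and incoming stubs. The sketch below uses this standard construction with illustrative degree pairs; a full implementation would also clean up self-loops and multi-edges, which this simple variant allows.

```python
import random

# Sketch: sample a directed network whose in/out degrees follow a joint
# degree distribution, by random stub matching (configuration model).

def sample_network(degree_pairs, n, seed=1):
    rng = random.Random(seed)
    degs = [rng.choice(degree_pairs) for _ in range(n)]   # (in, out) per node
    in_stubs = [i for i, (din, _) in enumerate(degs) for _ in range(din)]
    out_stubs = [i for i, (_, dout) in enumerate(degs) for _ in range(dout)]
    rng.shuffle(in_stubs)
    rng.shuffle(out_stubs)
    # pair stubs into directed edges (source, target)
    return list(zip(out_stubs, in_stubs))

# correlated degrees: a node is either low-degree (2,2) or hub-like (5,5)
edges = sample_network([(2, 2), (5, 5)], n=20)
print(len(edges))
```

Because in- and out-degree are drawn jointly, this construction can encode exactly the degree correlations that distinguish such networks from an Erdös-Rényi graph.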
Neural substrates of visuomotor learning based on improved feedback control and prediction
Grafton, Scott T.; Schmitt, Paul; Horn, John Van; Diedrichsen, Jörn
2008-01-01
Motor skills emerge from learning feedforward commands as well as improvements in feedback control. These two components of learning were investigated in a compensatory visuomotor tracking task on a trial-by-trial basis. Between trial learning was characterized with a state-space model to provide smoothed estimates of feedforward and feedback learning, separable from random fluctuations in motor performance and error. The resultant parameters were correlated with brain activity using magnetic resonance imaging. Learning related to the generation of a feedforward command correlated with activity in dorsal premotor cortex, inferior parietal lobule, supplementary motor area and cingulate motor area, supporting a role of these areas in retrieving and executing a predictive motor command. Modulation of feedback control was associated with activity in bilateral posterior superior parietal lobule as well as right ventral premotor cortex. Performance error correlated with activity in a widespread cortical and subcortical network including bilateral parietal, premotor and rostral anterior cingulate cortex as well as the cerebellar cortex. Finally, trial-by-trial changes of kinematics, as measured by mean absolute hand acceleration, correlated with activity in motor cortex and anterior cerebellum. The results demonstrate that incremental, learning dependent changes can be modeled on a trial-by-trial basis and neural substrates for feedforward control of novel motor programs are localized to secondary motor areas. PMID:18032069
Stienen, Bernard M C; Schindler, Konrad; de Gelder, Beatrice
2012-07-01
Given the presence of massive feedback loops in brain networks, it is difficult to disentangle the contribution of feedforward and feedback processing to the recognition of visual stimuli, in this case, of emotional body expressions. The aim of the work presented in this letter is to shed light on how well feedforward processing explains rapid categorization of this important class of stimuli. By means of parametric masking, it may be possible to control the contribution of feedback activity in human participants. A close comparison is presented between human recognition performance and the performance of a computational neural model that exclusively modeled feedforward processing and was engineered to fulfill the computational requirements of recognition. Results show that the longer the stimulus onset asynchrony (SOA), the closer the performance of the human participants was to the values predicted by the model, with an optimum at an SOA of 100 ms. At short SOA latencies, human performance deteriorated, but the categorization of the emotional expressions was still above baseline. The data suggest that, although theoretically, feedback arising from inferotemporal cortex is likely to be blocked when the SOA is 100 ms, human participants still seem to rely on more local visual feedback processing to equal the model's performance.
Rhythmogenic neuronal networks, emergent leaders, and k-cores.
Schwab, David J; Bruinsma, Robijn F; Feldman, Jack L; Levine, Alex J
2010-11-01
Neuronal network behavior results from a combination of the dynamics of individual neurons and the connectivity of the network that links them together. We study a simplified model, based on the proposal of Feldman and Del Negro (FDN) [Nat. Rev. Neurosci. 7, 232 (2006)], of the preBötzinger Complex, a small neuronal network that participates in the control of the mammalian breathing rhythm through periodic firing bursts. The dynamics of this randomly connected network of identical excitatory neurons differ from those of a uniformly connected one. Specifically, network connectivity determines the identity of emergent leader neurons that trigger the firing bursts. When neuronal desensitization is controlled by the number of input signals to the neurons (as proposed by FDN), the network's collective desensitization--required for successful burst termination--is mediated by k-core clusters of neurons.
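The k-core of a graph, the maximal subgraph in which every node has at least k neighbors, can be found by iteratively peeling off low-degree nodes. The sketch below implements this standard peeling algorithm on a toy undirected graph, not the preBötzinger connectivity.

```python
# k-core extraction by iterative peeling: repeatedly remove any node with
# fewer than k surviving neighbors until none remain.

def k_core(adj, k):
    nodes = set(adj)
    changed = True
    while changed:
        changed = False
        for v in list(nodes):
            if sum(1 for u in adj[v] if u in nodes) < k:
                nodes.remove(v)
                changed = True
    return nodes

adj = {
    0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2],  # a 4-clique
    4: [0, 5], 5: [4],                                        # dangling tail
}
print(sorted(k_core(adj, 3)))   # → [0, 1, 2, 3]
```

In the model, neurons surviving such a peeling form the densely interconnected cluster whose collective desensitization terminates the burst.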
NASA Technical Reports Server (NTRS)
Kiang, Richard K.
1992-01-01
Neural networks have been applied to classifications of remotely sensed data with some success. To improve the performance of this approach, an examination was made of how neural networks are applied to the optical character recognition (OCR) of handwritten digits and letters. A three-layer, feedforward network, along with techniques adopted from OCR, was used to classify Landsat-4 Thematic Mapper data. Good results were obtained. To overcome the difficulties that are characteristic of remote sensing applications and to attain significant improvements in classification accuracy, a special network architecture may be required.
Neural networks for function approximation in nonlinear control
NASA Technical Reports Server (NTRS)
Linse, Dennis J.; Stengel, Robert F.
1990-01-01
Two neural network architectures are compared with a classical spline interpolation technique for the approximation of functions useful in a nonlinear control system. A standard back-propagation feedforward neural network and a cerebellar model articulation controller (CMAC) neural network are presented, and their results are compared with a B-spline interpolation procedure that is updated using recursive least-squares parameter identification. Each method is able to accurately represent a one-dimensional test function. Tradeoffs between size requirements, speed of operation, and speed of learning indicate that neural networks may be practical for identification and adaptation in a nonlinear control environment.
A Prior for Neural Networks utilizing Enclosing Spheres for Normalization
NASA Astrophysics Data System (ADS)
v. Toussaint, U.; Gori, S.; Dose, V.
2004-11-01
Neural Networks are famous for their advantageous flexibility for problems when there is insufficient knowledge to set up a proper model. On the other hand, this flexibility can cause over-fitting and can hamper the generalization properties of neural networks. Many approaches to regularizing NNs have been suggested, but most of them are based on ad hoc arguments. Employing the principle of transformation invariance, we derive a general prior in accordance with Bayesian probability theory for a class of feedforward networks. Optimal networks are determined by Bayesian model comparison, verifying the applicability of this approach.
Kuntanapreeda, S; Fullmer, R R
1996-01-01
A training method for a class of neural network controllers is presented which guarantees closed-loop system stability. The controllers are assumed to be nonlinear, feedforward, sampled-data, full-state regulators implemented as single hidden-layer neural networks. The controlled systems must be locally controllable and observable. Stability of the closed-loop system is demonstrated by determining a Lyapunov function, which can be used to identify a finite stability region about the regulator point.
Brown, Jennifer; Pan, Wei-Xing; Dudman, Joshua Tate
2014-01-01
Dysfunction of the basal ganglia produces severe deficits in the timing, initiation, and vigor of movement. These diverse impairments suggest a control system gone awry. In engineered systems, feedback is critical for control. By contrast, models of the basal ganglia highlight feedforward circuitry and ignore intrinsic feedback circuits. In this study, we show that feedback via axon collaterals of substantia nigra projection neurons controls the gain of the basal ganglia output. Through a combination of physiology, optogenetics, anatomy, and circuit mapping, we elaborate a general circuit mechanism for gain control in a microcircuit lacking interneurons. Our data suggest that diverse tonic firing rates, weak unitary connections and a spatially diffuse collateral circuit with distinct topography and kinetics from feedforward input are sufficient to implement divisive feedback inhibition. The importance of feedback for engineered systems implies that the intranigral microcircuit, despite its absence from canonical models, could be essential to basal ganglia function. DOI: http://dx.doi.org/10.7554/eLife.02397.001 PMID:24849626
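Divisive (as opposed to subtractive) feedback inhibition rescales output rather than shifting it. The toy calculation below illustrates the distinction with invented numbers; it is a cartoon of the concept, not of the nigral circuit itself.

```python
# Divisive feedback inhibition: collateral feedback scales the output
# down multiplicatively, implementing gain control.

def divisive(drive, feedback, w=0.5):
    return drive / (1.0 + w * feedback)   # gain shrinks as feedback grows

def subtractive(drive, feedback, w=0.5):
    return max(0.0, drive - w * feedback)  # threshold shifts instead

print(divisive(10.0, 0.0))   # → 10.0 (no feedback, full gain)
print(divisive(10.0, 4.0))   # reduced gain, still graded with drive
```

Under the divisive rule, doubling the drive still doubles the output at any fixed feedback level, which is the signature of gain control rather than thresholding.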
Marek, Roger; Jin, Jingji; Goode, Travis D; Giustino, Thomas F; Wang, Qian; Acca, Gillian M; Holehonnur, Roopashri; Ploski, Jonathan E; Fitzgerald, Paul J; Lynagh, Timothy; Lynch, Joseph W; Maren, Stephen; Sah, Pankaj
2018-03-01
The medial prefrontal cortex (mPFC) has been implicated in the extinction of emotional memories, including conditioned fear. We found that ventral hippocampal (vHPC) projections to the infralimbic (IL) cortex recruited parvalbumin-expressing interneurons to counter the expression of extinguished fear and promote fear relapse. Whole-cell recordings ex vivo revealed that optogenetic activation of vHPC input to amygdala-projecting pyramidal neurons in the IL was dominated by feed-forward inhibition. Selectively silencing parvalbumin-expressing, but not somatostatin-expressing, interneurons in the IL eliminated vHPC-mediated inhibition. In behaving rats, pharmacogenetic activation of vHPC→IL projections impaired extinction recall, whereas silencing IL projectors diminished fear renewal. Intra-IL infusion of GABA receptor agonists or antagonists, respectively, reproduced these effects. Together, our findings describe a previously unknown circuit mechanism for the contextual control of fear, and indicate that vHPC-mediated inhibition of IL is an essential neural substrate for fear relapse.
Modulation of frontal effective connectivity during speech.
Holland, Rachel; Leff, Alex P; Penny, William D; Rothwell, John C; Crinion, Jenny
2016-10-15
Noninvasive neurostimulation methods such as transcranial direct current stimulation (tDCS) can elicit long-lasting, polarity-dependent changes in neocortical excitability. In a previous concurrent tDCS-fMRI study of overt picture naming, we reported significant behavioural and regionally specific neural facilitation effects in left inferior frontal cortex (IFC) with anodal tDCS applied to left frontal cortex (Holland et al., 2011). Although distributed connectivity effects of anodal tDCS have been modelled at rest, the mechanism by which 'on-line' tDCS may modulate neuronal connectivity during a task-state remains unclear. Here, we used Dynamic Causal Modelling (DCM) to determine: (i) how neural connectivity within the frontal speech network is modulated during anodal tDCS; and, (ii) how individual variability in behavioural response to anodal tDCS relates to changes in effective connectivity strength. Results showed that compared to sham, anodal tDCS elicited stronger feedback from inferior frontal sulcus (IFS) to ventral premotor (VPM) accompanied by weaker self-connections within VPM, consistent with processes of neuronal adaptation. During anodal tDCS individual variability in the feedforward connection strength from IFS to VPM positively correlated with the degree of facilitation in naming behaviour. These results provide an essential step towards understanding the mechanism of 'online' tDCS paired with a cognitive task. They also identify left IFS as a 'top-down' hub and driver for speech change. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Knowlton, Chris; Meliza, C Daniel; Margoliash, Daniel; Abarbanel, Henry D I
2014-06-01
Estimating the behavior of a network of neurons requires accurate models of the individual neurons along with accurate characterizations of the connections among them. Whereas for a single cell, measurements of the intracellular voltage are technically feasible and sufficient to characterize a useful model of its behavior, making sufficient numbers of simultaneous intracellular measurements to characterize even small networks is infeasible. This paper builds on prior work on single neurons to explore whether knowledge of the time of spiking of neurons in a network, once the nodes (neurons) have been characterized biophysically, can provide enough information to usefully constrain the functional architecture of the network: the existence of synaptic links among neurons and their strength. Using standardized voltage and synaptic gating variable waveforms associated with a spike, we demonstrate that the functional architecture of a small network of model neurons can be established.
Synchronization properties of heterogeneous neuronal networks with mixed excitability type
NASA Astrophysics Data System (ADS)
Leone, Michael J.; Schurter, Brandon N.; Letson, Benjamin; Booth, Victoria; Zochowski, Michal; Fink, Christian G.
2015-03-01
We study the synchronization of neuronal networks with dynamical heterogeneity, showing that network structures with the same propensity for synchronization (as quantified by master stability function analysis) may develop dramatically different synchronization properties when heterogeneity is introduced with respect to neuronal excitability type. Specifically, we investigate networks composed of neurons with different types of phase response curves (PRCs), which characterize how oscillating neurons respond to excitatory perturbations. Neurons exhibiting type 1 PRC respond exclusively with phase advances, while neurons exhibiting type 2 PRC respond with either phase delays or phase advances, depending on when the perturbation occurs. We find that Watts-Strogatz small world networks transition to synchronization gradually as the proportion of type 2 neurons increases, whereas scale-free networks may transition gradually or rapidly, depending upon local correlations between node degree and excitability type. Random placement of type 2 neurons results in gradual transition to synchronization, whereas placement of type 2 neurons as hubs leads to a much more rapid transition, showing that type 2 hub cells easily "hijack" neuronal networks to synchronization. These results underscore the fact that the degree of synchronization observed in neuronal networks is determined by a complex interplay between network structure and the dynamical properties of individual neurons, indicating that efforts to recover structural connectivity from dynamical correlations must in general take both factors into account.
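The type 1 / type 2 distinction can be made concrete with common textbook PRC forms (these functional forms are standard illustrations, not the paper's exact neuron models): a type 1 curve is nonnegative everywhere, so excitatory kicks only advance the phase, while a type 2 curve changes sign across the cycle.

```python
import math

# Type 1 vs type 2 phase response curves and a single pulse-coupled
# phase update; phase lives on [0, 1).

def prc_type1(phase, eps=0.1):
    return eps * (1 - math.cos(2 * math.pi * phase))   # always >= 0: advance

def prc_type2(phase, eps=0.1):
    return -eps * math.sin(2 * math.pi * phase)        # delay early, advance late

def kick(phase, prc):
    """New phase after receiving one excitatory perturbation."""
    return (phase + prc(phase)) % 1.0

print(kick(0.25, prc_type1) > 0.25)   # type 1: phase advance
print(kick(0.25, prc_type2) < 0.25)   # type 2, early in cycle: phase delay
```

The delay-or-advance behavior of type 2 neurons is what lets them pull neighbors toward a common phase, consistent with type 2 hubs "hijacking" the network to synchrony.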
Dunea, Daniel; Pohoata, Alin; Iordache, Stefania
2015-07-01
The paper presents the screening of various feedforward neural networks (FANN) and wavelet-feedforward neural networks (WFANN) applied to time series of ground-level ozone (O3), nitrogen dioxide (NO2), and particulate matter (PM10 and PM2.5 fractions) recorded at four monitoring stations located in various urban areas of Romania, to identify common configurations with optimal generalization performance. Two distinct model runs were performed as follows: data processing using hourly-recorded time series of airborne pollutants during cold months (O3, NO2, and PM10), when residential heating increases the local emissions, and data processing using 24-h daily averaged concentrations (PM2.5) recorded between 2009 and 2012. Dataset variability was assessed using statistical analysis. Time series were passed through various FANNs. Each time series was also decomposed into four time-scale components using three-level wavelets, which were passed through a FANN and recomposed into a single time series. The agreement between observed and modelled output was evaluated based on the statistical significance (r coefficient and correlation between errors and data). Use of a Daubechies db3 wavelet with an Rprop FANN (6-4-1) gave positive results for the O3 time series, improving on the exclusive use of the FANN for hourly-recorded time series. NO2 was difficult to model due to time series specificity, but wavelet integration improved FANN performance. The Daubechies db3 wavelet did not improve the FANN outputs for PM10 time series. Both models (FANN/WFANN) overestimated PM2.5 forecasted values in the last quarter of the time series. A potential improvement of the forecasted values could be the integration of a smoothing algorithm to adjust the PM2.5 model outputs.
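The decompose-model-recompose idea behind a WFANN can be shown with a one-level Haar transform (the paper uses three-level Daubechies db3 wavelets; Haar keeps the sketch self-contained and exactly invertible). Each component would be passed through its own FANN before recomposition; only the transform pair is shown.

```python
# One-level Haar wavelet decomposition/reconstruction of a time series:
# approximation = pairwise means, detail = pairwise half-differences.

def haar_decompose(x):
    approx = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return approx, detail

def haar_reconstruct(approx, detail):
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

series = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]   # toy pollutant readings
a, d = haar_decompose(series)
assert haar_reconstruct(a, d) == series   # transform is lossless
print(a)   # → [2.0, 2.5, 7.0, 4.0]
```

Modelling the smooth approximation and the noisy detail separately is what lets the wavelet front-end help on series (like NO2 here) that a single FANN handles poorly.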
NASA Astrophysics Data System (ADS)
Piretzidis, D.; Sra, G.; Sideris, M. G.
2016-12-01
This study explores new methods for identifying correlation errors in harmonic coefficients derived from monthly solutions of the Gravity Recovery and Climate Experiment (GRACE) satellite mission using pattern recognition and neural network algorithms. These correlation errors are evident in the differences between monthly solutions and can be suppressed using a de-correlation filter. In all studies so far, the implementation of the de-correlation filter starts from a specific minimum order (i.e., 11 for RL04 and 38 for RL05) until the maximum order of the monthly solution examined. This implementation method has two disadvantages, namely, the omission of filtering correlated coefficients of order less than the minimum order and the filtering of uncorrelated coefficients of order higher than the minimum order. In the first case, the filtered solution is not completely free of correlated errors, whereas the second case results in a monthly solution that suffers from loss of geophysical signal. In the present study, a new method of implementing the de-correlation filter is suggested, by identifying and filtering only the coefficients that show indications of high correlation. Several numerical and geometric properties of the harmonic coefficient series of all orders are examined. Extreme cases of both correlated and uncorrelated coefficients are selected, and their corresponding properties are used to train a two-layer feed-forward neural network. The objective of the neural network is to identify and quantify the correlation by providing the probability of an order of coefficients to be correlated. Results show good performance of the neural network, both in the validation stage of the training procedure and in the subsequent use of the trained network to classify independent coefficients. The neural network is also capable of identifying correlated coefficients even when a small number of training samples and neurons are used (e.g., 100 and 10, respectively).
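A toy version of this classification step can be sketched as follows. The two features (lag-1 autocorrelation and zero-crossing rate) and the synthetic "correlated" vs. "uncorrelated" series are illustrative stand-ins for the numerical and geometric properties of GRACE coefficient series used in the study; the two-layer feedforward network and its training loop are generic.

```python
import numpy as np

rng = np.random.default_rng(4)

def features(series):
    """Simple shape descriptors of a coefficient series (illustrative stand-ins)."""
    s = (series - series.mean()) / series.std()
    lag1 = np.mean(s[:-1] * s[1:])                        # lag-1 autocorrelation
    zero_crossings = np.mean(np.diff(np.sign(s)) != 0)    # sign-change rate
    return np.array([lag1, zero_crossings])

def make_dataset(n=200, length=60):
    X, y = [], []
    for _ in range(n):
        if rng.random() < 0.5:   # smooth random walk: a "correlated" series
            X.append(features(np.cumsum(rng.normal(size=length)))); y.append(1.0)
        else:                    # white noise: an "uncorrelated" series
            X.append(features(rng.normal(size=length))); y.append(0.0)
    return np.array(X), np.array(y)

def train(X, y, hidden=10, lr=0.5, epochs=2000):
    """One-hidden-layer feedforward net, logistic output, cross-entropy loss."""
    w1 = rng.normal(0, 1, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    w2 = rng.normal(0, 1, hidden); b2 = 0.0
    for _ in range(epochs):
        h = np.tanh(X @ w1 + b1)
        p = 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))   # probability "correlated"
        g = (p - y) / len(y)                       # cross-entropy output gradient
        w2 -= lr * (h.T @ g); b2 -= lr * g.sum()
        gh = np.outer(g, w2) * (1 - h ** 2)        # backprop through tanh layer
        w1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(0)
    return lambda Z: 1.0 / (1.0 + np.exp(-(np.tanh(Z @ w1 + b1) @ w2 + b2)))

X, y = make_dataset()
predict = train(X, y)
acc = np.mean((predict(X) > 0.5) == (y == 1))
```

The network's output plays the role the abstract describes: a probability that a given series of coefficients is correlated and should be passed to the de-correlation filter.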
Computational Account of Spontaneous Activity as a Signature of Predictive Coding
Koren, Veronika
2017-01-01
Spontaneous activity is commonly observed in a variety of cortical states. Experimental evidence suggests that neural assemblies undergo slow oscillations with Up and Down states even when the network is isolated from the rest of the brain. Here we show that these spontaneous events can be generated by the recurrent connections within the network and understood as signatures of neural circuits that are correcting their internal representation. A noiseless spiking neural network can represent its input signals most accurately when excitatory and inhibitory currents are as strong and as tightly balanced as possible. However, in the presence of realistic neural noise and synaptic delays, this may result in prohibitively large spike counts. An optimal working regime can be found by considering terms that control firing rates in the objective function from which the network is derived and then minimizing simultaneously the coding error and the cost of neural activity. In biological terms, this is equivalent to tuning neural thresholds and after-spike hyperpolarization. In suboptimal working regimes, we observe spontaneous activity even in the absence of feed-forward inputs. In an all-to-all randomly connected network, the entire population is involved in Up states. In spatially organized networks with local connectivity, Up states spread through local connections between neurons of similar selectivity and take the form of a traveling wave. Up states are observed for a wide range of parameters and have similar statistical properties in both active and quiescent states. In the optimal working regime, Up states vanish, giving way to asynchronous activity, suggesting that this working regime is a signature of maximally efficient coding. Although they result in a massive increase in the firing activity, the read-out of spontaneous Up states is in fact orthogonal to the stimulus representation, therefore interfering minimally with the network function. PMID:28114353
A biologically inspired neural net for trajectory formation and obstacle avoidance.
Glasius, R; Komoda, A; Gielen, S C
1996-06-01
In this paper we present a biologically inspired two-layered neural network for trajectory formation and obstacle avoidance. The two topographically ordered neural maps consist of analog neurons having continuous dynamics. The first layer, the sensory map, receives sensory information and builds up an activity pattern which contains the optimal solution (i.e. shortest path without collisions) for any given set of current position, target positions and obstacle positions. Targets and obstacles are allowed to move, in which case the activity pattern in the sensory map will change accordingly. The time evolution of the neural activity in the second layer, the motor map, results in a moving cluster of activity, which can be interpreted as a population vector. Through the feedforward connections between the two layers, input of the sensory map directs the movement of the cluster along the optimal path from the current position of the cluster to the target position. The smooth trajectory is the result of the intrinsic dynamics of the network only. No supervisor is required. The output of the motor map can be used for direct control of an autonomous system in a cluttered environment or for control of the actuators of a biological limb or robot manipulator. The system is able to reach a target even in the presence of an external perturbation. Computer simulations of a point robot and a multi-joint manipulator illustrate the theory.
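The shortest-path property of the sensory map can be imitated with a crude sketch: clamp the target's activity high and obstacles to zero, let activity relax diffusively over the grid, then follow the activity gradient from the current position. This is an illustrative stand-in (Jacobi relaxation on a grid), not the paper's analog neuron dynamics, and the grid, wall, and positions are made up.

```python
import numpy as np

def harmonic_activity(grid, target, iters=2000):
    """Relax activity diffusively; target clamped to 1, obstacles clamped to 0.
    grid: 2-D bool array, True = obstacle. Borders stay at 0 (absorbing)."""
    u = np.zeros(grid.shape)
    for _ in range(iters):
        # each interior "neuron" takes the average of its 4 neighbours
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:])
        u[grid] = 0.0
        u[target] = 1.0
    return u

def follow_gradient(u, start, max_steps=400):
    """Greedy ascent on the activity map; stops at the activity maximum (target)."""
    pos, path = start, [start]
    for _ in range(max_steps):
        r, c = pos
        nbrs = [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if not (dr == 0 and dc == 0)]
        nbrs = [(a, b) for a, b in nbrs
                if 0 <= a < u.shape[0] and 0 <= b < u.shape[1]]
        best = max(nbrs, key=lambda q: u[q])
        if u[best] <= u[pos]:
            break                      # no higher neighbour: target reached
        pos = best
        path.append(pos)
    return path

grid = np.zeros((20, 20), dtype=bool)
grid[5:15, 10] = True                  # a wall the path must go around
path = follow_gradient(harmonic_activity(grid, (10, 18)), start=(10, 2))
```

Because the relaxed activity has no interior local maxima, greedy ascent cannot get trapped, which is the same reason the paper's sensory map needs no supervisor; moving targets or obstacles simply reshape the activity pattern on the next relaxation.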
Simulation of Code Spectrum and Code Flow of Cultured Neuronal Networks.
Tamura, Shinichi; Nishitani, Yoshi; Hosokawa, Chie; Miyoshi, Tomomitsu; Sawai, Hajime
2016-01-01
It has been shown that, in cultured neuronal networks on a multielectrode array, pseudorandom-like sequences (codes) are detected, and they flow with some spatial decay constant. Each cultured neuronal network is characterized by a specific spectrum curve. That is, we may consider the spectrum curve as a "signature" of its associated neuronal network that is dependent on the characteristics of neurons and network configuration, including the weight distribution. In the present study, we used an integrate-and-fire model of neurons with intrinsic and instantaneous fluctuations of characteristics for performing a simulation of a code spectrum from multielectrodes on a 2D mesh neural network. We showed that it is possible to estimate the characteristics of neurons such as the distribution of number of neurons around each electrode and their refractory periods. Although this is an inverse problem whose solutions are not theoretically guaranteed, the estimated parameters appear consistent with those of real neurons. That is, the proposed neural network model may adequately reflect the behavior of a cultured neuronal network. Furthermore, we discuss the prospect that code analysis will provide a basis for understanding communication within a neural network, which may in turn form a basis of natural intelligence.
Khateb, Mohamed; Schiller, Jackie; Schiller, Yitzhak
2017-01-06
The primary vibrissae motor cortex (vM1) is responsible for generating whisking movements. In parallel, vM1 also sends information directly to the sensory barrel cortex (vS1). In this study, we investigated the effects of vM1 activation on processing of vibrissae sensory information in vS1 of the rat. To dissociate the vibrissae sensory-motor loop, we optogenetically activated vM1 and independently passively stimulated principal vibrissae. Optogenetic activation of vM1 supra-linearly amplified the response of vS1 neurons to passive vibrissa stimulation in all cortical layers measured. Maximal amplification occurred when onset of vM1 optogenetic activation preceded vibrissa stimulation by 20 ms. In addition to amplification, vM1 activation also sharpened angular tuning of vS1 neurons in all cortical layers measured. Our findings indicated that in addition to output motor signals, vM1 also sends preparatory signals to vS1 that serve to amplify and sharpen the response of neurons in the barrel cortex to incoming sensory input signals.
Velasco, Silvia; Ibrahim, Mahmoud M; Kakumanu, Akshay; Garipler, Görkem; Aydin, Begüm; Al-Sayegh, Mohamed Ahmed; Hirsekorn, Antje; Abdul-Rahman, Farah; Satija, Rahul; Ohler, Uwe; Mahony, Shaun; Mazzoni, Esteban O
2017-02-02
Direct cell programming via overexpression of transcription factors (TFs) aims to control cell fate with the degree of precision needed for clinical applications. However, the regulatory steps involved in successful terminal cell fate programming remain obscure. We have investigated the underlying mechanisms by looking at gene expression, chromatin states, and TF binding during the uniquely efficient Ngn2, Isl1, and Lhx3 motor neuron programming pathway. Our analysis reveals a highly dynamic process in which Ngn2 and the Isl1/Lhx3 pair initially engage distinct regulatory regions. Subsequently, Isl1/Lhx3 binding shifts from one set of targets to another, controlling regulatory region activity and gene expression as cell differentiation progresses. Binding of Isl1/Lhx3 to later motor neuron enhancers depends on the Ebf and Onecut TFs, which are induced by Ngn2 during the programming process. Thus, motor neuron programming is the product of two initially independent transcriptional modules that converge with a feedforward transcriptional logic. Copyright © 2017 Elsevier Inc. All rights reserved.
Tibau, Elisenda; Valencia, Miguel; Soriano, Jordi
2013-01-01
Neuronal networks in vitro are prominent systems to study the development of connections in living neuronal networks and the interplay between connectivity, activity and function. These cultured networks show a rich spontaneous activity that evolves concurrently with the connectivity of the underlying network. In this work we monitor the development of neuronal cultures, and record their activity using calcium fluorescence imaging. We use spectral analysis to characterize global dynamical and structural traits of the neuronal cultures. We first observe that the power spectrum can be used as a signature of the state of the network, for instance when inhibition is active or silent, as well as a measure of the network's connectivity strength. Second, the power spectrum identifies prominent developmental changes in the network such as the GABAA switch. And third, the analysis of the spatial distribution of the spectral density, in experiments with a controlled disintegration of the network through CNQX, an AMPA-glutamate receptor antagonist in excitatory neurons, reveals the existence of communities of strongly connected, highly active neurons that display synchronous oscillations. Our work illustrates the value of spectral analysis for the study of in vitro networks, and its potential use as a network-state indicator, for instance to compare healthy and diseased neuronal networks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Derrida, B.; Meir, R.
We consider the evolution of configurations in a layered feed-forward neural network. Exact expressions for the evolution of the distance between two configurations are obtained in the thermodynamic limit. Our results show that the distance between two arbitrarily close configurations always increases, implying chaotic behavior, even in the phase of good retrieval.
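The distance evolution is easy to reproduce numerically in the chaotic case. The sketch below uses random Gaussian couplings rather than the stored-pattern couplings analyzed in the paper, so it probes only the generic chaotic behavior (damage spreading), not the retrieval phase.

```python
import numpy as np

rng = np.random.default_rng(2)

def propagate(x, weights):
    """Propagate a +/-1 configuration through a layered feed-forward network."""
    for W in weights:
        x = np.sign(W @ x)
    return x

n, layers = 500, 10
weights = [rng.normal(size=(n, n)) / np.sqrt(n) for _ in range(layers)]

x = rng.choice([-1.0, 1.0], size=n)
y = x.copy()
flip = rng.choice(n, size=5, replace=False)   # tiny initial damage: 1% of units
y[flip] *= -1

d0 = np.mean(x != y)                          # initial distance (0.01)
d = np.mean(propagate(x, weights) != propagate(y, weights))
# in the chaotic regime the distance between two initially close
# configurations grows toward the random-overlap value (~0.5)
```

With couplings built from stored patterns, one could repeat the measurement inside the retrieval phase, where the paper shows the distance still grows despite good retrieval.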
NASA Astrophysics Data System (ADS)
Shiraishi, Yuhki; Takeda, Fumiaki
In this research, we have developed a sorting system for fishes, which comprises a conveyance part, an image capturing part, and a sorting part. In the conveyance part, we have developed an independent conveyance system in order to separate one fish from an intertwined group of fish. After the image of the separated fish is captured, a rotation-invariant feature is extracted using the two-dimensional fast Fourier transform: the mean value of the power spectrum over points at the same distance from the origin in the spectral domain. After that, the fish are classified by three-layered feed-forward neural networks. The experimental results show that the developed system classifies three kinds of fish captured at various angles with a classification ratio of 98.95% for 1044 captured images of five fish. Further experiments show a classification ratio of 90.7% for 300 fish using the 10-fold cross-validation method.
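The rotation-invariant feature can be sketched directly: take the 2-D FFT and average the power over annuli of constant radius around the spectrum's origin. Image size, bin count, and the random test image below are arbitrary choices, not the paper's settings.

```python
import numpy as np

def radial_power_spectrum(img, n_bins=32):
    """Mean power at each radius from the 2-D FFT origin (rotation invariant)."""
    f = np.fft.fftshift(np.fft.fft2(img))      # move DC to the array centre
    power = np.abs(f) ** 2
    h, w = img.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h // 2, xx - w // 2)     # distance of each bin from the origin
    bins = np.minimum((r / r.max() * n_bins).astype(int), n_bins - 1)
    return np.array([power[bins == b].mean() for b in range(n_bins)])

rng = np.random.default_rng(0)
img = rng.random((64, 64))
feat = radial_power_spectrum(img)
rot = radial_power_spectrum(np.rot90(img))
# the radial profile is (up to discretization) unchanged by rotation,
# so it can feed a feed-forward classifier regardless of fish orientation
```

In the paper's pipeline this feature vector would be the input to the three-layered feed-forward classifier.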
Storz, Gisela
2011-01-01
Hfq-binding small RNAs (sRNAs) are critical regulators that form limited base-pairing interactions with target mRNAs in bacteria. These sRNAs have been linked to diverse environmental responses, yet little is known about how Hfq-binding sRNAs participate in the regulatory networks associated with each response. We recently described how the Hfq-binding sRNA Spot 42 in Escherichia coli contributes to catabolite repression, a regulatory phenomenon that allows bacteria to consume some carbon sources over others. Spot 42 base pairs with numerous mRNAs encoding enzymes in central and secondary metabolism, redox balancing, and the uptake and consumption of non-preferred carbon sources. Many of the corresponding genes are transcriptionally activated by the Spot 42-repressor CRP, forming a regulatory circuit called a multi-output feedforward loop. We found that this loop influences both the steady-state levels and dynamics of gene regulation. In this article, we discuss how the CRP-Spot 42 feedforward loop is integrated into encompassing networks and how this loop may benefit enteric bacteria facing uncertain and changing nutrient conditions. PMID:21788732
Tian, Long; Xu, Zhongxiao; Chen, Lirong; Ge, Wei; Yuan, Haoxiang; Wen, Yafei; Wang, Shengzhi; Li, Shujing; Wang, Hai
2017-09-29
The light-matter quantum interface that can create quantum correlations or entanglement between a photon and one atomic collective excitation is a fundamental building block for a quantum repeater. The intrinsic limit is that the probability of preparing such nonclassical atom-photon correlations has to be kept low in order to suppress multiexcitation. To enhance this probability without introducing multiexcitation errors, a promising scheme is to apply multimode memories to the interface. Significant progress has been made in temporal, spectral, and spatial multiplexing memories, but the enhanced probability for generating the entangled atom-photon pair has not been experimentally realized. Here, by using six spin-wave-photon entanglement sources, a switching network, and feedforward control, we build a multiplexed light-matter interface and then demonstrate a ∼sixfold (∼fourfold) probability increase in generating entangled atom-photon (photon-photon) pairs. The measured compositive Bell parameter for the multiplexed interface is 2.49±0.03 combined with a memory lifetime of up to ∼51 μs.
An incoherent feedforward loop facilitates adaptive tuning of gene expression.
Hong, Jungeui; Brandt, Nathan; Abdul-Rahman, Farah; Yang, Ally; Hughes, Tim; Gresham, David
2018-04-05
We studied adaptive evolution of gene expression using long-term experimental evolution of Saccharomyces cerevisiae in ammonium-limited chemostats. We found repeated selection for non-synonymous variation in the DNA binding domain of the transcriptional activator, GAT1, which functions with the repressor, DAL80 in an incoherent type-1 feedforward loop (I1-FFL) to control expression of the high affinity ammonium transporter gene, MEP2. Missense mutations in the DNA binding domain of GAT1 reduce its binding to the GATAA consensus sequence. However, we show experimentally, and using mathematical modeling, that decreases in GAT1 binding result in increased expression of MEP2 as a consequence of properties of I1-FFLs. Our results show that I1-FFLs, one of the most commonly occurring network motifs in transcriptional networks, can facilitate adaptive tuning of gene expression through modulation of transcription factor binding affinities. Our findings highlight the importance of gene regulatory architectures in the evolution of gene expression. © 2018, Hong et al.
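The dynamics that make an I1-FFL useful are easy to see in a minimal ODE model: X activates both the target Z and the repressor Y, so Z first rises with X and then falls as Y accumulates. Parameters, the Hill repression term, and the step input below are illustrative, not fitted to the GAT1/DAL80/MEP2 circuit.

```python
import numpy as np

def i1_ffl(t_end=20.0, dt=0.001, k_rep=0.5):
    """Incoherent type-1 FFL: X activates Y and Z; Y represses Z.
    Returns the time course of Z after a step in X (illustrative units)."""
    n = int(t_end / dt)
    y = z = 0.0
    zs = np.empty(n)
    for i in range(n):
        x = 1.0 if i * dt > 1.0 else 0.0          # step input at t = 1
        ay = x                                     # X activates Y (linear, illustrative)
        az = x / (1.0 + (y / k_rep) ** 2)          # X activates Z, Y represses it
        y += dt * (ay - y)                         # forward-Euler integration
        z += dt * (az - z)
        zs[i] = z
    return zs

zs = i1_ffl()
# Z rises, overshoots, then adapts to a lower steady state (a pulse):
# the signature dynamics of the incoherent feedforward loop
```

In the spirit of the abstract, weakening the activator's binding would scale down `ay` more than `az` (the repressor arm is more sensitive), which raises the steady-state level of Z, the counterintuitive behavior the authors observed for MEP2.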
An H(∞) control approach to robust learning of feedforward neural networks.
Jing, Xingjian
2011-09-01
A novel H(∞) robust control approach is proposed in this study to deal with the learning problems of feedforward neural networks (FNNs). The analysis and design of a desired weight update law for the FNN is transformed into a robust controller design problem for a discrete dynamic system in terms of the estimation error. The drawbacks of some existing learning algorithms can therefore be revealed, especially in cases where the output changes rapidly with respect to the input or is corrupted by noise. Based on this approach, the optimal learning parameters can be found by utilizing the linear matrix inequality (LMI) optimization techniques to achieve a predefined H(∞) "noise" attenuation level. Several existing BP-type algorithms are shown to be special cases of the new H(∞)-learning algorithm. Theoretical analysis and several examples are provided to show the advantages of the new method. Copyright © 2011 Elsevier Ltd. All rights reserved.
Network reconfiguration and neuronal plasticity in rhythm-generating networks.
Koch, Henner; Garcia, Alfredo J; Ramirez, Jan-Marino
2011-12-01
Neuronal networks are highly plastic and reconfigure in a state-dependent manner. The plasticity at the network level emerges through multiple intrinsic and synaptic membrane properties that imbue neurons and their interactions with numerous nonlinear properties. These properties are continuously regulated by neuromodulators and homeostatic mechanisms that are critical not only to maintain network stability but also to adapt networks, over short and long timescales, to changes in behavioral, developmental, metabolic, and environmental conditions. This review provides concrete examples from neuronal networks in invertebrates and vertebrates, and illustrates that the concepts and rules that govern neuronal networks and behaviors are universal.
Sen, Alper; Gümüsay, M Umit; Kavas, Aktül; Bulucu, Umut
2008-09-25
Wireless communication networks offer subscribers the possibilities of free mobility and access to information anywhere at any time. Therefore, electromagnetic coverage calculations are important for wireless mobile communication systems, especially in Wireless Local Area Networks (WLANs). Before any propagation computation is performed, modeling of indoor radio wave propagation needs accurate geographical information in order to avoid the interruption of data transmissions. Geographic Information Systems (GIS) and spatial interpolation techniques are very efficient for performing indoor radio wave propagation modeling. This paper describes the spatial interpolation of electromagnetic field measurements using a feed-forward back-propagation neural network programmed as a tool in GIS. The accuracy of Artificial Neural Networks (ANN) and geostatistical Kriging were compared by adjusting procedures. The feedforward back-propagation ANN provides adequate accuracy for spatial interpolation, but the predictions of Kriging interpolation are more accurate than the selected ANN. The proposed GIS ensures indoor radio wave propagation model and electromagnetic coverage, the number, position and transmitter power of access points and electromagnetic radiation level. Pollution analysis in a given propagation environment was done and it was demonstrated that WLAN (2.4 GHz) electromagnetic coverage does not lead to any electromagnetic pollution due to the low power levels used. Example interpolated electromagnetic field values for WLAN system in a building of Yildiz Technical University, Turkey, were generated using the selected network architectures to illustrate the results with an ANN.
Hamlet, William R.; Lu, Yong
2016-01-01
Intrinsic plasticity has emerged as an important mechanism regulating neuronal excitability and output under physiological and pathological conditions. Here, we report a novel form of intrinsic plasticity. Using perforated patch clamp recordings, we examined the modulatory effects of group II metabotropic glutamate receptors (mGluR II) on voltage-gated potassium (KV) currents and the firing properties of neurons in the chicken nucleus laminaris (NL), the first central auditory station where interaural time cues are analyzed for sound localization. We found that activation of mGluR II by synthetic agonists resulted in a selective increase of the high threshold KV currents. More importantly, synaptically released glutamate (with reuptake blocked) also enhanced the high threshold KV currents. The enhancement was frequency-coding region dependent, being more pronounced in low frequency neurons compared to middle and high frequency neurons. The intracellular mechanism involved the Gβγ signaling pathway associated with phospholipase C and protein kinase C. The modulation strengthened membrane outward rectification, sharpened action potentials, and improved the ability of NL neurons to follow high frequency inputs. These data suggest that mGluR II provides a feedforward modulatory mechanism that may regulate temporal processing under the condition of heightened synaptic inputs. PMID:26964678
Rich, Scott; Booth, Victoria; Zochowski, Michal
2016-01-01
The plethora of inhibitory interneurons in the hippocampus and cortex play a pivotal role in generating rhythmic activity by clustering and synchronizing cell firing. Results of our simulations demonstrate that both the intrinsic cellular properties of neurons and the degree of network connectivity affect the characteristics of clustered dynamics exhibited in randomly connected, heterogeneous inhibitory networks. We quantify intrinsic cellular properties by the neuron's current-frequency relation (IF curve) and Phase Response Curve (PRC), a measure of how perturbations given at various phases of a neurons firing cycle affect subsequent spike timing. We analyze network bursting properties of networks of neurons with Type I or Type II properties in both excitability and PRC profile; Type I PRCs strictly show phase advances and IF curves that exhibit frequencies arbitrarily close to zero at firing threshold while Type II PRCs display both phase advances and delays and IF curves that have a non-zero frequency at threshold. Type II neurons whose properties arise with or without an M-type adaptation current are considered. We analyze network dynamics under different levels of cellular heterogeneity and as intrinsic cellular firing frequency and the time scale of decay of synaptic inhibition are varied. Many of the dynamics exhibited by these networks diverge from the predictions of the interneuron network gamma (ING) mechanism, as well as from results in all-to-all connected networks. Our results show that randomly connected networks of Type I neurons synchronize into a single cluster of active neurons while networks of Type II neurons organize into two mutually exclusive clusters segregated by the cells' intrinsic firing frequencies. 
Networks of Type II neurons containing the adaptation current behave similarly to networks of either Type I or Type II neurons depending on network parameters; however, the adaptation current creates differences in the cluster dynamics compared to those in networks of Type I or Type II neurons. To understand these results, we compute neuronal PRCs calculated with a perturbation matching the profile of the synaptic current in our networks. Differences in profiles of these PRCs across the different neuron types reveal mechanisms underlying the divergent network dynamics. PMID:27812323
Double-Barrier Memristive Devices for Unsupervised Learning and Pattern Recognition.
Hansen, Mirko; Zahari, Finn; Ziegler, Martin; Kohlstedt, Hermann
2017-01-01
The use of interface-based resistive switching devices for neuromorphic computing is investigated. In a combined experimental and numerical study, the important device parameters and their impact on a neuromorphic pattern recognition system are studied. The memristive cells consist of a layer sequence Al/Al2O3/NbxOy/Au and are fabricated on a 4-inch wafer. The key functional ingredients of the devices are a 1.3 nm thick Al2O3 tunnel barrier and a 2.5 nm thick NbxOy memristive layer. Voltage pulse measurements are used to study the electrical conditions for the emulation of synaptic functionality of single cells for later use in a recognition system. The results are evaluated and modeled in the framework of the plasticity model of Ziegler et al. Based on this model, which is matched to experimental data from 84 individual devices, the network performance with regard to yield, reliability, and variability is investigated numerically. As the network model, a computing scheme for pattern recognition and unsupervised learning based on the work of Querlioz et al. (2011), Sheridan et al. (2014), and Zahari et al. (2015) is employed. This is a two-layer feedforward network with a crossbar array of memristive devices, leaky integrate-and-fire output neurons including a winner-takes-all strategy, and a stochastic coding scheme for the input pattern. As input pattern, the full data set of digits from the MNIST database is used. The numerical investigation indicates that the experimentally obtained yield, reliability, and variability of the memristive cells are suitable for such a network. Furthermore, evidence is presented that their strong I-V non-linearity might avoid the need for selector devices in crossbar array structures.
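The network scheme can be caricatured in a few lines: stochastic spike coding of the input, a crossbar weight matrix standing in for the memristive conductances, leaky integrate-and-fire outputs with winner-takes-all, and a simplified potentiate/depress rule in place of the measured device plasticity. Everything here (the toy 4x4 patterns, thresholds, learning rule) is an illustrative assumption, not the model of Ziegler et al. or the MNIST setup.

```python
import numpy as np

rng = np.random.default_rng(3)

# two complementary 4x4 binary patterns, flattened
p1 = np.array([1,0,0,1, 0,1,1,0, 0,1,1,0, 1,0,0,1], float)
p2 = 1.0 - p1
patterns = [p1, p2]

n_in, n_out = 16, 2
w = rng.uniform(0.3, 0.7, size=(n_out, n_in))   # crossbar "conductances"

def present(pattern, w, lr=0.05, t_steps=20, thresh=2.0):
    """One pattern presentation: stochastic input spikes, leaky integration,
    winner-takes-all, and a simplified memristive weight update."""
    v = np.zeros(n_out)
    winner = None
    for _ in range(t_steps):
        spikes = (rng.random(n_in) < 0.5 * pattern).astype(float)  # stochastic coding
        v += w @ spikes                          # crossbar current into each output
        v *= 0.9                                 # leak
        if winner is None and v.max() > thresh:
            winner = int(np.argmax(v))           # winner-takes-all
            # potentiate active lines, depress the rest; conductances stay in [0, 1]
            w[winner] = np.clip(w[winner] + lr * (2 * pattern - 1), 0.0, 1.0)
            v[:] = 0.0
    return winner

for _ in range(200):                             # unsupervised training
    present(patterns[rng.integers(2)], w)

responses = [present(p, w, lr=0.0) for p in patterns]  # read-out without learning
```

With training, the two output neurons tend to specialize on the two complementary patterns; the paper's question is whether measured device variability still permits this specialization at MNIST scale.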
An Efficient Supervised Training Algorithm for Multilayer Spiking Neural Networks
Xie, Xiurui; Qu, Hong; Liu, Guisong; Zhang, Malu; Kurths, Jürgen
2016-01-01
The spiking neural networks (SNNs) are the third generation of neural networks and perform remarkably well in cognitive tasks such as pattern recognition. The spike emitting and information processing mechanisms found in biological cognitive systems motivate the application of the hierarchical structure and temporal encoding mechanism in spiking neural networks, which have exhibited strong computational capability. However, the hierarchical structure and temporal encoding approach require neurons to process information serially in space and time, respectively, which reduces the training efficiency significantly. For training the hierarchical SNNs, most existing methods are based on the traditional back-propagation algorithm, inheriting its drawbacks of gradient diffusion and sensitivity to parameters. To keep the powerful computation capability of the hierarchical structure and temporal encoding mechanism, but to overcome the low efficiency of the existing algorithms, a new training algorithm, the Normalized Spiking Error Back Propagation (NSEBP), is proposed in this paper. In the feedforward calculation, the output spike times are calculated by solving the quadratic function in the spike response model instead of detecting postsynaptic voltage states at all time points as in traditional algorithms. Besides, in the feedback weight modification, the computational error is propagated to previous layers by the presynaptic spike jitter instead of the gradient descent rule, which realizes layer-wise training. Furthermore, our algorithm investigates the mathematical relation between the weight variation and voltage error change, which makes the normalization in the weight modification applicable. Adopting these strategies, our algorithm outperforms traditional SNN multi-layer algorithms in terms of learning efficiency and parameter sensitivity, as demonstrated by the comprehensive experimental results in this paper. PMID:27044001
Choe, Eugenie; Lee, Tae Young; Kim, Minah; Hur, Ji-Won; Yoon, Youngwoo Bryan; Cho, Kang-Ik K; Kwon, Jun Soo
2018-03-26
It has been suggested that the mentalizing network and the mirror neuron system network support important social cognitive processes that are impaired in schizophrenia. However, the integrity and interaction of these two networks have not been sufficiently studied, and their effects on social cognition in schizophrenia remain unclear. Our study included 26 first-episode psychosis (FEP) patients and 26 healthy controls. We utilized resting-state functional connectivity to examine the a priori-defined mirror neuron system network and the mentalizing network and to assess the within- and between-network connectivities of the networks in FEP patients. We also assessed the correlation between resting-state functional connectivity measures and theory of mind performance. FEP patients showed altered within-network connectivity of the mirror neuron system network, and aberrant between-network connectivity between the mirror neuron system network and the mentalizing network. The within-network connectivity of the mirror neuron system network was noticeably correlated with theory of mind task performance in FEP patients. The integrity and interaction of the mirror neuron system network and the mentalizing network may be altered during the early stages of psychosis. Additionally, this study suggests that alterations in the integrity of the mirror neuron system network are highly related to deficient theory of mind in schizophrenia, and that this deficit is present from the early stages of psychosis. Copyright © 2018 Elsevier B.V. All rights reserved.
Segregation of feedforward and feedback projections in mouse visual cortex
Berezovskii, Vladimir K.; Nassi, Jonathan J.; Born, Richard T.
2011-01-01
Hierarchical organization is a common feature of mammalian neocortex. Neurons that send their axons from lower to higher areas of the hierarchy are referred to as “feedforward” (FF) neurons, whereas those projecting in the opposite direction are called “feedback” (FB) neurons. Anatomical, functional and theoretical studies suggest that these different classes of projections play fundamentally different roles in perception. In primates, laminar differences in projection patterns often distinguish the two projection streams. In rodents, however, these differences are less clear, despite an established hierarchy of visual areas. Thus the rodent provides a strong test of the hypothesis that FF and FB neurons form distinct populations. We tested this hypothesis by injecting retrograde tracers into two different hierarchical levels of mouse visual cortex (areas 17 and AL) and then determining the relative proportions of double-labeled FB and FF neurons in an area intermediate to them (LM). Despite finding singly labeled neurons densely intermingled with no laminar segregation, we found few double-labeled neurons (~5% of each singly labeled population). We also examined the development of FF and FB connections. FF connections were present at the earliest time-point we examined (postnatal day two, P2), while FB connections were not detectable until P11. Our findings indicate that, even in cortices without laminar segregation of FF and FB neurons, the two projection systems are largely distinct at the neuronal level and also differ with respect to the timing of their outgrowth. PMID:21618232
Brown, P Leon; Shepard, Paul D
2016-09-01
The lateral habenula, a phylogenetically conserved epithalamic structure, is activated by aversive stimuli and reward omission. Excitatory efferents from the lateral habenula predominately inhibit midbrain dopamine neuronal firing through a disynaptic, feedforward inhibitory mechanism involving the rostromedial tegmental nucleus. However, the lateral habenula also directly targets dopamine neurons within the ventral tegmental area, suggesting that opposing actions may result from increased lateral habenula activity. In the present study, we tested the effect of habenular efferent stimulation on dopamine and nondopamine neurons in the ventral tegmental area of Sprague-Dawley rats using a parasagittal brain slice preparation. Single pulse stimulation of the fasciculus retroflexus excited 48% of dopamine neurons and 51% of nondopamine neurons in the ventral tegmental area of rat pups. These proportions were not altered by excision of the rostromedial tegmental nucleus and were evident in both cortical- and striatal-projecting dopamine neurons. Glutamate receptor antagonists blocked this excitation, and fasciculus retroflexus stimulation elicited evoked excitatory postsynaptic potentials with a nearly constant onset latency, indicative of a monosynaptic, glutamatergic connection. Comparison of responses in rat pups and young adults showed no significant difference in the proportion of neurons excited by fasciculus retroflexus stimulation. Our data indicate that the well-known, indirect inhibitory effect of lateral habenula activation on midbrain dopamine neurons is complemented by a significant, direct excitatory effect. This pathway may contribute to the role of midbrain dopamine neurons in processing aversive stimuli and salience. Copyright © 2016 the American Physiological Society.
Gramatikov, Boris I
2017-04-27
Reliable detection of central fixation and eye alignment is essential in the diagnosis of amblyopia ("lazy eye"), which can lead to blindness. Our lab has previously developed and reported a pediatric vision screener that scans the retina around the fovea and analyzes changes in the polarization state of light as the scan progresses. Depending on the direction of gaze and the instrument design, the screener produces several signal frequencies that can be utilized in the detection of central fixation. The objective of this study was to compare artificial neural networks with classical statistical methods with respect to their ability to detect central fixation reliably. A classical feedforward, pattern-recognition, two-layer neural network architecture was used, consisting of one hidden layer and one output layer. The network has four inputs, representing normalized spectral powers at four signal frequencies generated during retinal birefringence scanning. The hidden layer contains four neurons. The output suggests presence or absence of central fixation. Backpropagation was used to train the network, using the gradient descent algorithm and the cross-entropy error as the performance function. The network was trained, validated and tested on a set of controlled calibration data obtained from 600 measurements from ten eyes in a previous study, and was additionally tested on a clinical set of 78 eyes, independently diagnosed by an ophthalmologist. In the first part of this study, a neural network was designed around the calibration set. With a proper architecture and training, the network provided performance that was comparable to classical statistical methods, allowing perfect separation between the central and paracentral fixation data, with both the sensitivity and the specificity of the instrument being 100%. In the second part of the study, the neural network was applied to the clinical data. It allowed reliable separation between normal subjects and affected subjects, its accuracy again matching that of the statistical methods. With a proper choice of neural network architecture and a good, uncontaminated training data set, an artificial neural network can be an efficient classification tool for detecting central fixation based on retinal birefringence scanning.
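The architecture described above (four spectral-power inputs, four hidden neurons, one sigmoid output, cross-entropy error, gradient-descent backpropagation) can be sketched in a few lines of NumPy. The data below are synthetic stand-ins for the calibration measurements, which are not public, so this illustrates the training scheme rather than the screener's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the four normalized spectral-power inputs
# (hypothetical data; the real calibration measurements are not public).
X = rng.uniform(0.0, 1.0, size=(600, 4))
y = (X.sum(axis=1) > 2.0).astype(float).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two-layer network: 4 inputs -> 4 hidden neurons -> 1 output
W1 = rng.normal(0.0, 1.0, (4, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)

lr = 1.0
for epoch in range(3000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # backpropagation: gradient of mean cross-entropy w.r.t. pre-activations
    dz2 = (p - y) / len(X)
    dW2, db2 = h.T @ dz2, dz2.sum(axis=0)
    dz1 = (dz2 @ W2.T) * h * (1.0 - h)
    dW1, db1 = X.T @ dz1, dz1.sum(axis=0)
    # gradient-descent update
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

accuracy = float(((p > 0.5) == (y > 0.5)).mean())
```

On this toy separable problem the small network fits the training set well; the real screener's perfect sensitivity/specificity of course depended on its actual calibration data.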
Computational exploration of neuron and neural network models in neurobiology.
Prinz, Astrid A
2007-01-01
The electrical activity of individual neurons and neuronal networks is shaped by the complex interplay of a large number of non-linear processes, including the voltage-dependent gating of ion channels and the activation of synaptic receptors. These complex dynamics make it difficult to understand how individual neuron or network parameters-such as the number of ion channels of a given type in a neuron's membrane or the strength of a particular synapse-influence neural system function. Systematic exploration of cellular or network model parameter spaces by computational brute force can overcome this difficulty and generate comprehensive data sets that contain information about neuron or network behavior for many different combinations of parameters. Searching such data sets for parameter combinations that produce functional neuron or network output provides insights into how narrowly different neural system parameters have to be tuned to produce a desired behavior. This chapter describes the construction and analysis of databases of neuron or neuronal network models and describes some of the advantages and downsides of such exploration methods.
NASA Astrophysics Data System (ADS)
Maghrabi, Mahmoud M. T.; Kumar, Shiva; Bakr, Mohamed H.
2018-02-01
This work introduces a powerful digital nonlinear feed-forward equalizer (NFFE) exploiting a multilayer artificial neural network (ANN). It mitigates impairments of optical communication systems arising from the nonlinearity introduced by direct photo-detection. In a direct detection system, the detection process is nonlinear because the photo-current is proportional to the absolute square of the electric field intensity. The proposed equalizer achieves high equalization performance at low computational cost, comparable to the benchmark compensation performance achieved by a maximum-likelihood sequence estimator. The equalizer trains an ANN to act as a nonlinear filter whose impulse response removes the intersymbol interference (ISI) distortions of the optical channel. Owing to its extensive training, the equalizer achieves the ultimate performance limit of any feed-forward equalizer (FFE). The performance and efficiency of the equalizer are investigated by applying it to various practical short-reach fiber-optic communication system scenarios, extracted from practical metro/media access networks and data center applications. The obtained results show that the ANN-NFFE compensates for the received BER degradation and significantly increases tolerance to chromatic dispersion distortion.
From grid cells to place cells with realistic field sizes
2017-01-01
While grid cells in the medial entorhinal cortex (MEC) of rodents have multiple, regularly arranged firing fields, place cells in the cornu ammonis (CA) regions of the hippocampus mostly have single spatial firing fields. Since there are extensive projections from MEC to the CA regions, many models have suggested that a feedforward network can transform grid cell firing into robust place cell firing. However, these models generate place fields that are consistently too small compared to those recorded in experiments. Here, we argue that it is implausible that grid cell activity alone can be transformed into place cells with robust place fields of realistic size in a feedforward network. We propose two solutions to this problem. Firstly, weakly spatially modulated cells, which are abundant throughout EC, provide input to downstream place cells along with grid cells. This simple model reproduces many place cell characteristics as well as results from lesion studies. Secondly, the recurrent connections between place cells in the CA3 network generate robust and realistic place fields. Both mechanisms could work in parallel in the hippocampal formation and this redundancy might account for the robustness of place cell responses to a range of disruptions of the hippocampal circuitry. PMID:28750005
Comparing feed-forward versus neural gas as estimators: application to coke wastewater treatment.
Machón-González, Iván; López-García, Hilario; Rodríguez-Iglesias, Jesús; Marañón-Maison, Elena; Castrillón-Peláez, Leonor; Fernández-Nava, Yolanda
2013-01-01
Numerous papers related to the estimation of wastewater parameters have used artificial neural networks. Although successful results have been reported, different problems have arisen such as overtraining, local minima and model instability. In this paper, two types of neural networks, feed-forward and neural gas, are trained to obtain a model that estimates the values of nitrates in the effluent stream of a three-step activated sludge system (two oxic and one anoxic). Placing the denitrification (anoxic) step at the head of the process can force denitrifying bacteria to use internal organic carbon. However, methanol has to be added to achieve high denitrification efficiencies in some industrial wastewaters. The aim of this paper is to compare the two networks in addition to suggesting a methodology to validate the models. The influence of the neighbourhood radius is important in the neural gas approach and must be selected correctly. Neural gas performs well due to its cooperation-competition procedure, with no problems of stability or overfitting arising in the experimental results. The neural gas model is also interesting for use as a direct plant model because of its robustness and deterministic behaviour.
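For readers unfamiliar with the neural gas approach, its core rank-based cooperation-competition update, governed by the neighbourhood radius mentioned above, can be sketched as follows. The toy 1-D data and the decay schedules are illustrative assumptions, not the paper's plant measurements or settings.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-D data: two clusters standing in for wastewater feature vectors
# (hypothetical data; the paper's measurements are not reproduced).
data = np.concatenate([rng.normal(0.0, 0.5, 200),
                       rng.normal(10.0, 0.5, 200)])[:, None]

def neural_gas(data, n_units=2, epochs=30,
               eps=(0.3, 0.01), lam=(2.0, 0.01)):
    """Neural gas codebook training with rank-based cooperation.

    eps (learning rate) and lam (neighbourhood radius) decay
    exponentially from their first to second value; these schedule
    values are tuning choices, not taken from the paper."""
    w = data[rng.choice(len(data), n_units, replace=False)].copy()
    steps = epochs * len(data)
    t = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = t / steps
            e = eps[0] * (eps[1] / eps[0]) ** frac
            l = lam[0] * (lam[1] / lam[0]) ** frac
            # rank units by distance to the sample (0 = closest)
            order = np.argsort(np.linalg.norm(w - x, axis=1))
            ranks = np.empty(n_units); ranks[order] = np.arange(n_units)
            # every unit moves toward the sample, weighted by its rank
            w += (e * np.exp(-ranks / l))[:, None] * (x - w)
            t += 1
    return w

codebook = neural_gas(data)
```

Because every unit is updated on every sample (with rank-dependent weight), the method avoids the dead-unit and instability problems the abstract attributes to purely competitive training.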
Dann, Benjamin; Michaels, Jonathan A; Schaffelhofer, Stefan; Scherberger, Hansjörg
2016-08-15
The functional communication of neurons in cortical networks underlies higher cognitive processes. Yet, little is known about the organization of the single neuron network or its relationship to the synchronization processes that are essential for its formation. Here, we show that the functional single neuron network of three fronto-parietal areas during active behavior of macaque monkeys is highly complex. The network was closely connected (small-world) and consisted of functional modules spanning these areas. Surprisingly, the importance of different neurons to the network was highly heterogeneous with a small number of neurons contributing strongly to the network function (hubs), which were in turn strongly inter-connected (rich-club). Examination of the network synchronization revealed that the identified rich-club consisted of neurons that were synchronized in the beta or low frequency range, whereas other neurons were mostly non-oscillatory synchronized. Therefore, oscillatory synchrony may be a central communication mechanism for highly organized functional spiking networks.
Neuronal avalanches of a self-organized neural network with active-neuron-dominant structure.
Li, Xiumin; Small, Michael
2012-06-01
A neuronal avalanche is a form of spontaneous neuronal activity that obeys a power-law distribution of population event sizes with an exponent of -3/2. It has been observed in the superficial layers of cortex both in vivo and in vitro. In this paper, we analyze the information transmission of a novel self-organized neural network with an active-neuron-dominant structure. Neuronal avalanches can be observed in this network at appropriate input intensities. We find that the process of network learning via spike-timing-dependent plasticity dramatically increases the complexity of the network structure, which finally self-organizes into active-neuron-dominant connectivity. Both the entropy of activity patterns and the complexity of their resulting post-synaptic inputs are maximized when the network activity propagates as neuronal avalanches. This emergent topology is beneficial for information transmission with high efficiency and could also be responsible for the large information capacity of this network compared with alternative archetypal networks with different neural connectivity.
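The -3/2 exponent cited above can be estimated from a sample of avalanche sizes by maximum likelihood. The sketch below draws synthetic sizes from a continuous power law rather than using recorded avalanches, and applies the standard continuous-MLE estimator (the Clauset-Shalizi-Newman form).

```python
import numpy as np

rng = np.random.default_rng(1)

# Draw synthetic "avalanche sizes" from a continuous power law
# P(s) ~ s^(-alpha) with alpha = 3/2, the exponent cited for
# neuronal avalanches (illustrative data, not recordings).
alpha_true, s_min, n = 1.5, 1.0, 10000
u = rng.uniform(size=n)
sizes = s_min * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))  # inverse-CDF sampling

# Maximum-likelihood estimate of the power-law exponent
alpha_hat = 1.0 + n / np.log(sizes / s_min).sum()
```

With 10,000 samples the estimate recovers the exponent to within a few thousandths; real avalanche-size data would additionally require choosing s_min and testing the power-law fit.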
Yao, Zepeng; Bennett, Amelia J; Clem, Jenna L; Shafer, Orie T
2016-12-13
In animals, networks of clock neurons containing molecular clocks orchestrate daily rhythms in physiology and behavior. However, how various types of clock neurons communicate and coordinate with one another to produce coherent circadian rhythms is not well understood. Here, we investigate clock neuron coupling in the brain of Drosophila and demonstrate that the fly's various groups of clock neurons display unique and complex coupling relationships to core pacemaker neurons. Furthermore, we find that coordinated free-running rhythms require molecular clock synchrony not only within the well-characterized lateral clock neuron classes but also between lateral clock neurons and dorsal clock neurons. These results uncover unexpected patterns of coupling in the clock neuron network and reveal that robust free-running behavioral rhythms require a coherence of molecular oscillations across most of the fly's clock neuron network. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.
Rawson, Randi L.
2017-01-01
Neural circuits balance excitatory and inhibitory activity and disruptions in this balance are commonly found in neurodevelopmental disorders. Mice lacking the intellectual disability and autism-associated gene Kirrel3 have an excitation-inhibition imbalance in the hippocampus but the precise synaptic changes underlying this functional defect are unknown. Kirrel3 is a homophilic adhesion molecule expressed in dentate gyrus (DG) and GABA neurons. It was suggested that the excitation-inhibition imbalance of hippocampal neurons in Kirrel3 knockout mice is due to loss of mossy fiber (MF) filopodia, which are DG axon protrusions thought to excite GABA neurons and thereby provide feed-forward inhibition to CA3 pyramidal neurons. Fewer filopodial structures were observed in Kirrel3 knockout mice but neither filopodial synapses nor DG en passant synapses, which also excite GABA neurons, were examined. Here, we used serial block-face scanning electron microscopy (SBEM) with 3D reconstruction to define the precise connectivity of MF filopodia and elucidate synaptic changes induced by Kirrel3 loss. Surprisingly, we discovered wildtype MF filopodia do not synapse exclusively onto GABA neurons as previously thought, but instead synapse with similar frequency onto GABA neurons and CA3 neurons. Moreover, Kirrel3 loss selectively reduces MF filopodial synapses onto GABA neurons but not those made onto CA3 neurons or en passant synapses. In sum, the selective loss of MF filopodial synapses with GABA neurons likely underlies the hippocampal activity imbalance observed in Kirrel3 knockout mice and may impact neural function in patients with Kirrel3-dependent neurodevelopmental disorders. PMID:28670619
Ito, Hidekatsu; Minoshima, Wataru; Kudoh, Suguru N
2015-08-01
To investigate relationships between neuronal network activity and electrical stimulus, we analyzed autonomous activity before and after electrical stimulus. Recordings of autonomous activity were performed using dissociated culture of rat hippocampal neurons on a multi-electrodes array (MEA) dish. Single stimulus and pared stimuli were applied to a cultured neuronal network. Single stimulus was applied every 1 min, and paired stimuli was performed by two sequential stimuli every 1 min. As a result, the patterns of synchronized activities of a neuronal network were changed after stimulus. Especially, long range synchronous activities were induced by paired stimuli. When 1 s inter-stimulus-intervals (ISI) and 1.5 s ISI paired stimuli are applied to a neuronal network, relatively long range synchronous activities expressed in case of 1.5 s ISI. Temporal synchronous activity of neuronal network is changed according to inter-stimulus-intervals (ISI) of electrical stimulus. In other words, dissociated neuronal network can maintain given information in temporal pattern and a certain type of an information maintenance mechanism was considered to be implemented in a semi-artificial dissociated neuronal network. The result is useful toward manipulation technology of neuronal activity in a brain system.
The Stratonovich formulation of quantum feedback network rules
NASA Astrophysics Data System (ADS)
Gough, John E.
2016-12-01
We express the rules for forming quantum feedback networks using the Stratonovich form of quantum stochastic calculus rather than the Itō or SLH (J. E. Gough and M. R. James, "Quantum feedback networks: Hamiltonian formulation," Commun. Math. Phys. 287, 1109 (2009); J. E. Gough and M. R. James, "The series product and its application to quantum feedforward and feedback networks," IEEE Trans. Autom. Control 54, 2530 (2009)) form. Remarkably, the feedback reduction rule implies that we obtain the Schur complement of the matrix of Stratonovich coupling operators, where we short out the internal input/output coefficients.
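The Schur complement at the heart of the feedback reduction rule can be illustrated numerically. The sketch below uses an ordinary 4x4 matrix as a scalar stand-in for the matrix of coupling operators, places the "internal" input/output channels in the lower-right block, and checks the result against the block-inverse identity.

```python
import numpy as np

# Toy coupling matrix M partitioned into blocks, with the internal
# channels to be shorted out in the lower-right block D (scalar
# analogue only; the paper's entries are quantum operators).
M = np.array([[4.0, 1.0, 0.5, 0.0],
              [1.0, 3.0, 0.0, 0.5],
              [0.5, 0.0, 2.0, 0.3],
              [0.0, 0.5, 0.3, 2.5]])
k = 2
A, B = M[:k, :k], M[:k, k:]
C, D = M[k:, :k], M[k:, k:]

# Shorting out the internal channels leaves the Schur complement of D:
S = A - B @ np.linalg.inv(D) @ C

# Sanity check via the block-inverse identity:
# the upper-left k x k block of inv(M) equals inv(S).
check = np.allclose(np.linalg.inv(M)[:k, :k], np.linalg.inv(S))
```

The identity used in the check is exactly why the Schur complement appears when internal degrees of freedom are eliminated.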
NASA Astrophysics Data System (ADS)
Nooruddin, Hasan A.; Anifowose, Fatai; Abdulraheem, Abdulazeez
2014-03-01
Soft computing techniques have recently become very popular in the oil industry. A number of computational intelligence-based predictive methods have been widely applied in the industry with high prediction capabilities, including feed-forward neural networks, radial basis function networks, generalized regression neural networks, functional networks, support vector regression and adaptive network fuzzy inference systems. A comparative study among the most popular soft computing techniques is presented using a large dataset published in the literature describing multimodal pore systems in the Arab D formation. The inputs to the models are air porosity, grain density, and Thomeer parameters obtained from mercury injection capillary pressure profiles. Corrected air permeability is the target variable. Applying the developed permeability models in a recent reservoir characterization workflow ensures consistency between micro- and macro-scale information, represented mainly by the Thomeer parameters and absolute permeability. The dataset was divided into two parts, with 80% of the data used for training and 20% for testing. As a pre-processing step, the target permeability variable was transformed to the logarithmic scale, which shows better correlations with the input variables. Statistical and graphical analyses of the results, including permeability cross-plots and detailed error measures, were performed. In general, the comparative study showed very close results among the developed models. The feed-forward neural network permeability model showed the lowest average relative error, average absolute relative error, standard deviation of error and root mean square error, making it the best model for such problems. The adaptive network fuzzy inference system also showed very good results.
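The pre-processing and error measures described above are straightforward to reproduce. A minimal sketch, with hypothetical permeability values standing in for the Arab D dataset:

```python
import numpy as np

def error_measures(y_true, y_pred):
    """Error measures of the kind reported in the study:
    bias, accuracy, spread and RMSE of the predictions."""
    rel = (y_pred - y_true) / y_true
    return {
        "avg_rel_err": rel.mean(),              # average relative error (bias)
        "avg_abs_rel_err": np.abs(rel).mean(),  # average absolute relative error
        "std_err": rel.std(),                   # standard deviation of error
        "rmse": np.sqrt(((y_pred - y_true) ** 2).mean()),
    }

# Log-transform of the permeability target, as described above
perm = np.array([0.5, 3.0, 12.0, 45.0, 160.0])  # hypothetical mD values
log_perm = np.log10(perm)

# 80/20 train/test split of the (tiny, hypothetical) dataset
n_train = int(0.8 * len(log_perm))
train, test = log_perm[:n_train], log_perm[n_train:]
```

When the target is log-transformed, the error measures are typically computed on the log scale as well, so that decades of permeability contribute evenly.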
Weick, Jason P.; Liu, Yan; Zhang, Su-Chun
2011-01-01
Whether hESC-derived neurons can fully integrate with and functionally regulate an existing neural network remains unknown. Here, we demonstrate that hESC-derived neurons receive unitary postsynaptic currents both in vitro and in vivo and adopt the rhythmic firing behavior of mouse cortical networks via synaptic integration. Optical stimulation of hESC-derived neurons expressing Channelrhodopsin-2 elicited both inhibitory and excitatory postsynaptic currents and triggered network bursting in mouse neurons. Furthermore, light stimulation of hESC-derived neurons transplanted to the hippocampus of adult mice triggered postsynaptic currents in host pyramidal neurons in acute slice preparations. Thus, hESC-derived neurons can participate in and modulate neural network activity through functional synaptic integration, suggesting they are capable of contributing to neural network information processing both in vitro and in vivo. PMID:22106298
Tool Steel Heat Treatment Optimization Using Neural Network Modeling
NASA Astrophysics Data System (ADS)
Podgornik, Bojan; Belič, Igor; Leskovšek, Vojteh; Godec, Matjaz
2016-11-01
Optimization of tool steel properties and the corresponding heat treatment is mainly based on a trial-and-error approach, which requires tremendous experimental work and resources. There is therefore a great need for tools that predict the mechanical properties of tool steels as a function of composition and heat treatment process variables. The aim of the present work was to explore the potential of artificial neural network-based modeling to select and optimize vacuum heat treatment conditions depending on the hot work tool steel composition and required properties. In the current case, training of a four-layer (8-20-20-2) feedforward neural network with error backpropagation was based on experimentally obtained tempering diagrams for ten different hot work tool steel compositions and at least two austenitizing temperatures. Results show that this type of modeling can be successfully used for detailed and multifunctional analysis of different influential parameters as well as to optimize the heat treatment process of hot work tool steels depending on composition. In terms of composition, V was found to be the most beneficial alloying element, increasing both hardness and fracture toughness of hot work tool steel; Si, Mn, and Cr increase hardness but reduce fracture toughness, while Mo has the opposite effect. An optimum concentration providing high KIc/HRC ratios would include 0.75 pct Si, 0.4 pct Mn, 5.1 pct Cr, 1.5 pct Mo, and 0.5 pct V, with the optimum heat treatment performed at lower austenitizing and intermediate tempering temperatures.
Cortico-Cortical Connections of Primary Sensory Areas and Associated Symptoms in Migraine.
Hodkinson, Duncan J; Veggeberg, Rosanna; Kucyi, Aaron; van Dijk, Koene R A; Wilcox, Sophie L; Scrivani, Steven J; Burstein, Rami; Becerra, Lino; Borsook, David
2016-01-01
Migraine is a recurring, episodic neurological disorder characterized by headache, nausea, vomiting, and sensory disturbances. These events are thought to arise from the activation and sensitization of neurons along the trigemino-vascular pathway. From animal studies, it is known that thalamocortical projections play an important role in the transmission of nociceptive signals from the meninges to the cortex. However, little is currently known about the potential involvement of cortico-cortical feedback projections from higher-order multisensory areas and/or feedforward projections from principal primary sensory areas or subcortical structures. In a large cohort of human migraine patients (N = 40) and matched healthy control subjects (N = 40), we used resting-state intrinsic functional connectivity to examine the cortical networks associated with the three main sensory perceptual modalities of vision, audition, and somatosensation. Specifically, we sought to explore the complexity of the sensory networks as they converge and become functionally coupled in multimodal systems. We also compared self-reported retrospective migraine symptoms in the same patients, examining the prevalence of sensory symptoms across the different phases of the migraine cycle. Our results show widespread and persistent disturbances in the perceptions of multiple sensory modalities. Consistent with this observation, we discovered that primary sensory areas maintain local functional connectivity but express impaired long-range connections to higher-order association areas (including regions of the default mode and salience network). We speculate that cortico-cortical interactions are necessary for the integration of information within and across the sensory modalities and, thus, could play an important role in the initiation of migraine and/or the development of its associated symptoms.
Cultured Neuronal Networks Express Complex Patterns of Activity and Morphological Memory
NASA Astrophysics Data System (ADS)
Raichman, Nadav; Rubinsky, Liel; Shein, Mark; Baruchi, Itay; Volman, Vladislav; Ben-Jacob, Eshel
The following sections are included: Cultured Neuronal Networks; Recording the Network Activity; Network Engineering; The Formation of Synchronized Bursting Events; The Characterization of the SBEs; Highly-Active Neurons; Function-Form Relations in Cultured Networks; Analyzing the SBEs Motifs; Network Repertoire; Network under Hypothermia; Summary; Acknowledgments; References.
A direct translaminar inhibitory circuit tunes cortical output
Pluta, Scott; Naka, Alexander; Veit, Julia; Telian, Gregory; Yao, Lucille; Hakim, Richard; Taylor, David; Adesnik, Hillel
2015-01-01
Anatomical and physiological experiments have outlined a blueprint for the feed-forward flow of activity in cortical circuits: signals are thought to propagate primarily from the middle cortical layer, L4, up to L2/3, and down to the major cortical output layer, L5. Pharmacological manipulations, however, have contested this model and suggested that L4 may not be critical for sensory responses of neurons in either superficial or deep layers. To address these conflicting models, we reversibly manipulated L4 activity in awake, behaving mice using cell-type specific optogenetics. In contrast to both prevailing models, we show that activity in L4 directly suppresses L5, in part by activating deep, fast-spiking inhibitory neurons. Our data suggest that the net impact of L4 activity is to sharpen the spatial representations of L5 neurons. Thus we establish a novel translaminar inhibitory circuit in the sensory cortex that acts to enhance the feature selectivity of cortical output. PMID:26414615
Bendor, Daniel
2015-01-01
In auditory cortex, temporal information within a sound is represented by two complementary neural codes: a temporal representation based on stimulus-locked firing and a rate representation, where discharge rate co-varies with the timing between acoustic events but lacks a stimulus-synchronized response. Using a computational neuronal model, we find that stimulus-locked responses are generated when sound-evoked excitation is combined with strong, delayed inhibition. In contrast to this, a non-synchronized rate representation is generated when the net excitation evoked by the sound is weak, which occurs when excitation is coincident and balanced with inhibition. Using single-unit recordings from awake marmosets (Callithrix jacchus), we validate several model predictions, including differences in the temporal fidelity, discharge rates and temporal dynamics of stimulus-evoked responses between neurons with rate and temporal representations. Together these data suggest that feedforward inhibition provides a parsimonious explanation of the neural coding dichotomy observed in auditory cortex. PMID:25879843
Clarke, Aaron M.; Herzog, Michael H.; Francis, Gregory
2014-01-01
Experimentalists tend to classify models of visual perception as being either local or global, and involving either feedforward or feedback processing. We argue that these distinctions are not as helpful as they might appear, and we illustrate these issues by analyzing models of visual crowding as an example. Recent studies have argued that crowding cannot be explained by purely local processing, but that instead, global factors such as perceptual grouping are crucial. Theories of perceptual grouping, in turn, often invoke feedback connections as a way to account for their global properties. We examined three types of crowding models that are representative of global processing models, and two of which employ feedback processing: a model based on Fourier filtering, a feedback neural network, and a specific feedback neural architecture that explicitly models perceptual grouping. Simulations demonstrate that crucial empirical findings are not accounted for by any of the models. We conclude that empirical investigations that reject a local or feedforward architecture offer almost no constraints for model construction, as there are an uncountable number of global and feedback systems. We propose that the identification of a system as being local or global and feedforward or feedback is less important than the identification of a system's computational details. Only the latter information can provide constraints on model development and promote quantitative explanations of complex phenomena. PMID:25374554
NASA Technical Reports Server (NTRS)
Decker, Arthur J. (Inventor)
2006-01-01
An artificial neural network is disclosed that processes holography-generated characteristic fringe patterns of vibrating structures along with finite-element models. The present invention provides a folding operation for conditioning training sets so as to optimally train feed-forward neural networks to process characteristic fringe patterns. The folding operation increases the sensitivity of the feed-forward network for detecting changes in the characteristic pattern. The folding routine manipulates input pixels so that they are scaled according to their location in an intensity range rather than their position in the characteristic pattern.
Neural network-based run-to-run controller using exposure and resist thickness adjustment
NASA Astrophysics Data System (ADS)
Geary, Shane; Barry, Ronan
2003-06-01
This paper describes the development of a run-to-run control algorithm using a feedforward neural network trained with the backpropagation method. The algorithm is used to predict the critical dimension of the next lot from previous lot information. It is compared to a common prediction algorithm, the exponentially weighted moving average (EWMA), and is shown in simulations to give superior prediction performance. The manufacturing implementation of the final neural network showed significantly improved process capability compared to the case where no run-to-run control was utilised.
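The EWMA baseline the neural network is compared against is simple to state: the next-lot prediction is an exponentially weighted average of past measurements. A minimal sketch, with the smoothing weight and the lot history as illustrative assumptions rather than values from the paper:

```python
def ewma_predict(observations, lam=0.5, init=None):
    """One-step-ahead EWMA forecast of the next lot's critical dimension.

    lam is the smoothing weight (a tuning assumption here, not the
    value used in the paper); init seeds the running estimate, which
    defaults to the first observation.
    """
    est = observations[0] if init is None else init
    for x in observations:
        est = lam * x + (1.0 - lam) * est
    return est

# Predict the next lot from a hypothetical history of CD measurements (nm)
history = [101.2, 100.8, 101.5, 100.9]
next_cd = ewma_predict(history, lam=0.3)
```

A small lam smooths heavily and reacts slowly to process drift; the neural network's advantage in the paper comes from exploiting more of the lot history than this single recursive average.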
Three-dimensional neural cultures produce networks that mimic native brain activity.
Bourke, Justin L; Quigley, Anita F; Duchi, Serena; O'Connell, Cathal D; Crook, Jeremy M; Wallace, Gordon G; Cook, Mark J; Kapsa, Robert M I
2018-02-01
Development of brain function is critically dependent on neuronal networks organized through three dimensions. Culture of central nervous system neurons has traditionally been limited to two dimensions, restricting growth patterns and network formation to a single plane. Here, with the use of multichannel extracellular microelectrode arrays, we demonstrate that neurons cultured in a true three-dimensional environment recapitulate native neuronal network formation and produce functional outcomes more akin to in vivo neuronal network activity. Copyright © 2017 John Wiley & Sons, Ltd.
Numerical simulation of coherent resonance in a model network of Rulkov neurons
NASA Astrophysics Data System (ADS)
Andreev, Andrey V.; Runnova, Anastasia E.; Pisarchik, Alexander N.
2018-04-01
In this paper we study the spiking behaviour of a neuronal network consisting of Rulkov elements. We find that the regularity of this behaviour is maximized at a certain level of environmental noise. This effect, referred to as coherence resonance, is demonstrated in a random complex network of Rulkov neurons. An external stimulus added to some of the neurons excites them, which in turn activates other neurons in the network. The network coherence is also maximized at a certain stimulus amplitude.
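The Rulkov map underlying each network element is a two-variable iterated map that is simple to simulate. A single-neuron sketch with optional noise on the fast variable follows; the parameter values are typical choices for the spiking/bursting regime, not necessarily those used in the paper:

```python
import numpy as np

def rulkov_trajectory(alpha=4.5, mu=0.001, sigma=-1.2, n_steps=2000,
                      noise_std=0.0, seed=0):
    """Iterate a single Rulkov map neuron:
        x[n+1] = alpha / (1 + x[n]**2) + y[n]   (fast variable)
        y[n+1] = y[n] - mu * (x[n] - sigma)     (slow variable)
    Optional Gaussian noise on the fast variable stands in for the
    stochastic environment discussed in the paper."""
    rng = np.random.default_rng(seed)
    x, y = -1.0, -3.0
    xs = np.empty(n_steps)
    for n in range(n_steps):
        x_new = alpha / (1.0 + x * x) + y + noise_std * rng.standard_normal()
        y = y - mu * (x - sigma)
        x = x_new
        xs[n] = x
    return xs

trace = rulkov_trajectory()
```

Coupling many such maps on a random graph and sweeping `noise_std` is the kind of experiment in which the coherence-resonance maximum appears.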
New control strategies for neuroprosthetic systems.
Crago, P E; Lan, N; Veltink, P H; Abbas, J J; Kantor, C
1996-04-01
The availability of techniques to artificially excite paralyzed muscles opens enormous potential for restoring both upper and lower extremity movements with neuroprostheses. Neuroprostheses must stimulate muscle, and control and regulate the artificial movements produced. Control methods to accomplish these tasks include feedforward (open-loop), feedback, and adaptive control. Feedforward control requires a great deal of information about the biomechanical behavior of the limb. For the upper extremity, an artificial motor program was developed to provide such movement program input to a neuroprosthesis. In lower extremity control, one group achieved their best results by attempting to meet naturally perceived gait objectives rather than to follow an exact joint angle trajectory. Adaptive feedforward control, as implemented in the cycle-to-cycle controller, gave good compensation for the gradual decrease in performance observed with open-loop control. A neural network controller was able to customize stimulation parameters to generate a desired output trajectory in a given individual and to maintain tracking performance in the presence of muscle fatigue. The authors believe that practical FNS control systems must exhibit many of these features of neurophysiological systems.
Jing, Jian; Sweedler, Jonathan V; Cropper, Elizabeth C; Alexeeva, Vera; Park, Ji-Ho; Romanova, Elena V.; Xie, Fang; Dembrow, Nikolai C.; Ludwar, Bjoern C.; Weiss, Klaudiusz R; Vilim, Ferdinand S
2010-01-01
Compensatory mechanisms are often used to achieve stability by reducing variance, which can be accomplished via negative feedback during homeostatic regulation. In principle, compensation can also be implemented through feedforward mechanisms where a regulator acts to offset the anticipated output variation; however, few such neural mechanisms have been demonstrated. We provide evidence that an Aplysia neuropeptide, identified using an enhanced representational difference analysis procedure, implements feedforward compensation within the feeding network. We named the novel peptide allatotropin-related peptide (ATRP) because of its similarity to insect allatotropin. Mass spectrometry confirmed the peptide's identity, and in situ hybridization and immunostaining mapped its distribution in the Aplysia CNS. ATRP is present in the higher-order cerebral-buccal interneuron (CBI), CBI-4, but not in CBI-2. Previous work showed that CBI-4-elicited motor programs have a shorter protraction duration than those elicited by CBI-2. Here we show that ATRP shortens protraction duration of CBI-2-elicited ingestive programs, suggesting a contribution of ATRP to the parametric differences between CBI-4- and CBI-2-evoked programs. Importantly, because Aplysia muscle contractions are a graded function of motoneuronal activity, one consequence of the shortening of protraction is that it can weaken protraction movements. However, this potential weakening is offset by feedforward compensatory actions exerted by ATRP. Centrally, ATRP increases the activity of protraction motoneurons. Moreover, ATRP is present in peripheral varicosities of protraction motoneurons and enhances peripheral motoneuron-elicited protraction muscle contractions. Therefore, feedforward compensatory mechanisms mediated by ATRP make it possible to generate a faster movement with an amplitude that is not greatly reduced, thereby producing stability. PMID:21147994
Liu, Peter Y; Pincus, Steven M; Keenan, Daniel M; Roelfsema, Ferdinand; Veldhuis, Johannes D
2005-02-01
The hypothalamo-pituitary-testicular and hypothalamo-pituitary-adrenal axes are prototypical coupled neuroendocrine systems. In the present study, we contrasted in vivo linkages within and between these two axes using methods without linearity assumptions. We examined 11 young (21-31 yr) and 8 older (62-74 yr) men who underwent frequent (every 2.5 min) blood sampling overnight for paired measurement of LH and testosterone and 35 adults (17 women and 18 men; 26-77 yr old) who underwent adrenocorticotropic hormone (ACTH) and cortisol measurements every 10 min for 24 h. To mirror physiological interactions, hormone secretion was first deconvolved from serial concentrations with a waveform-independent biexponential elimination model. Feedforward synchrony, feedback synchrony, and the difference in feedforward-feedback synchrony were quantified by the cross-approximate entropy (X-ApEn) statistic. These were applied in a forward (LH concentration template, examining pattern recurrence in testosterone secretion), reverse (testosterone concentration template, examining pattern recurrence in LH secretion), and differential (forward minus reverse) manner, respectively. Analogous concentration-secretion X-ApEn estimates were calculated from ACTH-cortisol pairs. X-ApEn, a scale- and model-independent measure of pattern reproducibility, disclosed 1) greater testosterone-LH feedback coordination than LH-testosterone feedforward synchrony in healthy men and significant and symmetric erosion of both feedforward and feedback linkages with aging; 2) more synchronous ACTH concentration-dependent feedforward than feedback drive of cortisol secretion, independent of gender and age; and 3) enhanced detection of bidirectional physiological regulation by in vivo pairwise concentration-secretion compared with concentration-concentration analyses. 
The linking of relevant biological input to output signals and vice versa should be useful in the dissection of the reciprocal control of neuroendocrine systems or even in the analysis of other nonendocrine networks.
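The cross-approximate entropy (X-ApEn) statistic used in the study above can be sketched in a standard form. The embedding dimension, tolerance, and z-score normalisation below are common defaults, not necessarily the exact settings of the study:

```python
import numpy as np

def cross_apen(u, v, m=1, r=0.2):
    """Cross-approximate entropy of series v conditioned on template
    series u, after z-score normalisation. Lower values indicate more
    reproducible (more synchronous) joint patterns."""
    u = (u - u.mean()) / u.std()
    v = (v - v.mean()) / v.std()
    N = len(u)

    def phi(m):
        # For each template of length m in u, count matching windows in v
        counts = []
        for i in range(N - m + 1):
            template = u[i:i + m]
            n_match = 0
            for j in range(N - m + 1):
                if np.max(np.abs(template - v[j:j + m])) <= r:
                    n_match += 1
            counts.append(n_match / (N - m + 1))
        return np.mean(np.log(np.array(counts) + 1e-12))

    return phi(m) - phi(m + 1)

t = np.arange(200)
regular = np.sin(0.3 * t)
xapen_self = cross_apen(regular, regular)
```

Swapping the roles of `u` and `v` gives the forward versus reverse (feedforward versus feedback) variants described in the abstract, and their difference gives the differential measure.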
Dann, Benjamin; Michaels, Jonathan A; Schaffelhofer, Stefan; Scherberger, Hansjörg
2016-01-01
The functional communication of neurons in cortical networks underlies higher cognitive processes. Yet, little is known about the organization of the single neuron network or its relationship to the synchronization processes that are essential for its formation. Here, we show that the functional single neuron network of three fronto-parietal areas during active behavior of macaque monkeys is highly complex. The network was closely connected (small-world) and consisted of functional modules spanning these areas. Surprisingly, the importance of different neurons to the network was highly heterogeneous with a small number of neurons contributing strongly to the network function (hubs), which were in turn strongly inter-connected (rich-club). Examination of the network synchronization revealed that the identified rich-club consisted of neurons that were synchronized in the beta or low frequency range, whereas other neurons were mostly non-oscillatory synchronized. Therefore, oscillatory synchrony may be a central communication mechanism for highly organized functional spiking networks. DOI: http://dx.doi.org/10.7554/eLife.15719.001 PMID:27525488
Solving Constraint Satisfaction Problems with Networks of Spiking Neurons
Jonke, Zeno; Habenschuss, Stefan; Maass, Wolfgang
2016-01-01
Networks of neurons in the brain apply—unlike processors in our current generation of computer hardware—an event-based processing strategy, where short pulses (spikes) are emitted sparsely by neurons to signal the occurrence of an event at a particular point in time. Such spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However, it turns out to be surprisingly difficult to design networks of spiking neurons that can solve difficult computational problems on the level of single spikes, rather than rates of spikes. We present here a new method for designing networks of spiking neurons via an energy function. Furthermore, we show how the energy function of a network of stochastically firing neurons can be shaped in a transparent manner by composing the networks of simple stereotypical network motifs. We show that this design approach enables networks of spiking neurons to produce approximate solutions to difficult (NP-hard) constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. The resulting networks employ noise as a computational resource. Nevertheless, the timing of spikes plays an essential role in their computations. Furthermore, for the Traveling Salesman Problem, networks of spiking neurons carry out a more efficient stochastic search for good solutions than stochastic artificial neural networks (Boltzmann machines) and Gibbs sampling. PMID:27065785
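The energy-function idea can be caricatured with ordinary stochastic binary units. The toy search below solves a graph 2-coloring (a small constraint satisfaction problem) by flipping one binary "neuron" at a time with Boltzmann acceptance, so that noise drives the network toward low-energy states. This mimics the spirit of noise-driven search, not the paper's spiking construction:

```python
import math
import random

def solve_two_coloring(edges, n_nodes, beta=2.0, n_steps=2000, seed=0):
    """Minimise an energy counting monochromatic edges by stochastic
    single-unit flips; noise (finite temperature 1/beta) lets the
    search escape local minima. Returns the best coloring found."""
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(n_nodes)]

    def energy(s):
        return sum(s[i] == s[j] for i, j in edges)

    best, best_energy = state[:], energy(state)
    for _ in range(n_steps):
        i = rng.randrange(n_nodes)
        proposal = state[:]
        proposal[i] ^= 1
        delta = energy(proposal) - energy(state)
        # Accept downhill moves always, uphill moves with Boltzmann prob.
        if delta <= 0 or rng.random() < math.exp(-beta * delta):
            state = proposal
            if energy(state) < best_energy:
                best, best_energy = state[:], energy(state)
    return best, best_energy

# A 4-cycle is 2-colorable, so an energy of 0 (no violations) is attainable
square = [(0, 1), (1, 2), (2, 3), (3, 0)]
coloring, violations = solve_two_coloring(square, 4)
```

In the paper's framework the analogous energy is shaped by composing stereotypical spiking motifs, and sampling emerges from the network's own stochastic firing rather than an explicit Metropolis loop.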
Hybrid Scheme for Modeling Local Field Potentials from Point-Neuron Networks.
Hagen, Espen; Dahmen, David; Stavrinou, Maria L; Lindén, Henrik; Tetzlaff, Tom; van Albada, Sacha J; Grün, Sonja; Diesmann, Markus; Einevoll, Gaute T
2016-12-01
With rapidly advancing multi-electrode recording technology, the local field potential (LFP) has again become a popular measure of neuronal activity in both research and clinical applications. Proper understanding of the LFP requires detailed mathematical modeling incorporating the anatomical and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network. Here we propose a hybrid modeling scheme combining efficient point-neuron network models with biophysical principles underlying LFP generation by real neurons. The LFP predictions rely on populations of network-equivalent multicompartment neuron models with layer-specific synaptic connectivity; the scheme can be used with an arbitrary number of point-neuron network populations and allows for a full separation of simulated network dynamics and LFPs. We apply the scheme to a full-scale cortical network model for a ∼1 mm² patch of primary visual cortex, predict laminar LFPs for different network states, assess the relative LFP contribution from different laminar populations, and investigate effects of input correlations and neuron density on the LFP. The generic nature of the hybrid scheme and its public implementation in hybridLFPy form the basis for LFP predictions from other and larger point-neuron network models, as well as extensions of the current application with additional biological detail. © The Author 2016. Published by Oxford University Press.
Computational model of electrically coupled, intrinsically distinct pacemaker neurons.
Soto-Treviño, Cristina; Rabbah, Pascale; Marder, Eve; Nadim, Farzan
2005-07-01
Electrical coupling between neurons with similar properties has been studied extensively; electrical coupling between neurons with widely different intrinsic properties also occurs, but is less well understood. Inspired by the pacemaker group of the crustacean pyloric network, we developed a multicompartment, conductance-based model of a small network of intrinsically distinct, electrically coupled neurons. In the pyloric network, a small intrinsically bursting neuron, through gap junctions, drives 2 larger, tonically spiking neurons to reliably burst in-phase with it. Each model neuron has 2 compartments, one responsible for spike generation and the other for producing a slow, large-amplitude oscillation. We illustrate how these compartments interact and determine the dynamics of the model neurons. Our model captures the dynamic oscillation range measured from the isolated and coupled biological neurons. At the network level, we explore the range of coupling strengths for which synchronous bursting oscillations are possible. The spatial segregation of ionic currents significantly enhances the ability of the 2 neurons to burst synchronously, and the oscillation range of the model pacemaker network depends not only on the strength of the electrical synapse but also on the identity of the neuron receiving inputs. We also compare the activity of the electrically coupled, distinct neurons with that of a network of coupled identical bursting neurons. For small to moderate coupling strengths, the network of identical elements, when receiving asymmetrical inputs, can have a smaller dynamic range of oscillation than that of its constituent neurons in isolation.
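A minimal caricature of electrical coupling between two intrinsically distinct cells, far simpler than the multicompartment conductance-based model above, shows how an ohmic gap-junction current I_gap = g·(V_other − V_self) pulls the two membrane potentials together; all parameter values are illustrative:

```python
import numpy as np

def coupled_pair(g_gap=0.1, n_steps=5000, dt=0.1):
    """Two leaky (non-spiking) neurons with different intrinsic drives,
    coupled by an ohmic gap-junction current. Euler integration of
        tau * dV_i/dt = (V_rest - V_i) + drive_i + g_gap*(V_j - V_i)."""
    v = np.array([-60.0, -70.0])      # mV, distinct initial states
    drive = np.array([12.0, 8.0])     # distinct intrinsic inputs
    tau, v_rest = 10.0, -65.0         # ms, mV
    trace = np.empty((n_steps, 2))
    for t in range(n_steps):
        i_gap = g_gap * (v[::-1] - v)             # current from the other cell
        v = v + dt / tau * (v_rest - v + drive + i_gap)
        trace[t] = v
    return trace

uncoupled = coupled_pair(g_gap=0.0)
coupled = coupled_pair(g_gap=1.0)
```

At steady state the voltage difference shrinks from Δdrive to Δdrive/(1 + 2·g_gap), the simplest expression of how coupling strength controls how far two distinct cells can be pulled toward a common rhythm.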
Shaping Neuronal Network Activity by Presynaptic Mechanisms
Ashery, Uri
2015-01-01
Neuronal microcircuits generate oscillatory activity, which has been linked to basic functions such as sleep, learning and sensorimotor gating. Although synaptic release processes are well known for their ability to shape the interaction between neurons in microcircuits, most computational models do not simulate the synaptic transmission process directly and hence cannot explain how changes in synaptic parameters alter neuronal network activity. In this paper, we present a novel neuronal network model that incorporates presynaptic release mechanisms, such as vesicle pool dynamics and calcium-dependent release probability, to model the spontaneous activity of neuronal networks. The model, which is based on modified leaky integrate-and-fire neurons, generates spontaneous network activity patterns, which are similar to experimental data and robust under changes in the model's primary gain parameters such as excitatory postsynaptic potential and connectivity ratio. Furthermore, it reliably recreates experimental findings and provides mechanistic explanations for data obtained from microelectrode array recordings, such as network burst termination and the effects of pharmacological and genetic manipulations. The model demonstrates how elevated asynchronous release, but not spontaneous release, synchronizes neuronal network activity and reveals that asynchronous release enhances utilization of the recycling vesicle pool to induce the network effect. The model further predicts a positive correlation between vesicle priming at the single-neuron level and burst frequency at the network level; this prediction is supported by experimental findings. Thus, the model is utilized to reveal how synaptic release processes at the neuronal level govern activity patterns and synchronization at the network level. PMID:26372048
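The vesicle-pool dynamics described above are often modeled with a depressing-synapse scheme of the following kind, where each spike releases a fraction of a finite recycling pool that recovers between spikes; the parameters are illustrative, not the paper's values:

```python
import numpy as np

def depressing_synapse(spike_times, u=0.3, tau_rec=500.0, dt=1.0, t_max=1000.0):
    """Release from a finite recycling vesicle pool: each presynaptic
    spike releases fraction u of the available resources R, which then
    recover toward 1 with time constant tau_rec (ms). Returns the
    release amount per spike."""
    n = int(t_max / dt)
    R = 1.0                                   # fraction of releasable vesicles
    spike_steps = set(int(t / dt) for t in spike_times)
    released = []
    for step in range(n):
        R += dt * (1.0 - R) / tau_rec         # recovery toward full pool
        if step in spike_steps:
            released.append(u * R)            # release depletes the pool
            R -= u * R
    return np.array(released)

# High-frequency burst (spikes every 10 ms): successive releases shrink
rel = depressing_synapse([10, 20, 30, 40, 50])
```

In a network of integrate-and-fire neurons, this pool depletion is one mechanism by which bursts self-terminate, consistent with the burst-termination results described above.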
Connexin-Dependent Neuroglial Networking as a New Therapeutic Target.
Charvériat, Mathieu; Naus, Christian C; Leybaert, Luc; Sáez, Juan C; Giaume, Christian
2017-01-01
Astrocytes and neurons dynamically interact during physiological processes, and it is now widely accepted that they are both organized in plastic and tightly regulated networks. Astrocytes are connected through connexin-based gap junction channels, with brain region specificities, and those networks modulate neuronal activities, such as those involved in sleep-wake cycle, cognitive, or sensory functions. Additionally, astrocyte domains have been involved in neurogenesis and neuronal differentiation during development; they participate in the "tripartite synapse" with both pre-synaptic and post-synaptic neurons by tuning down or up neuronal activities through the control of neuronal synaptic strength. Connexin-based hemichannels are also involved in those regulations of neuronal activities; however, this feature will not be considered in the present review. Furthermore, neuronal processes, transmitting electrical signals to chemical synapses, stringently control astroglial connexin expression and channel functions. Long-range energy trafficking toward neurons through connexin-coupled astrocytes and plasticity of those networks are hence largely dependent on neuronal activity. Such reciprocal interactions between neurons and astrocyte networks involve neurotransmitters, cytokines, endogenous lipids, and peptides released by neurons but also other brain cell types, including microglial and endothelial cells. Over the past 10 years, knowledge about neuroglial interactions has widened and now includes effects of CNS-targeting drugs such as antidepressants, antipsychotics, psychostimulants, or sedative drugs as potential modulators of connexin function and thus astrocyte networking activity. In physiological situations, neuroglial networking consequently results from a two-way interaction between astrocyte gap junction-mediated networks and those made by neurons. 
As both cell types are modulated by CNS drugs we postulate that neuroglial networking may emerge as new therapeutic targets in neurological and psychiatric disorders.
Analysis of Network Topologies Underlying Ethylene Growth Response Kinetics
Prescott, Aaron M.; McCollough, Forest W.; Eldreth, Bryan L.; Binder, Brad M.; Abel, Steven M.
2016-01-01
Most models for ethylene signaling involve a linear pathway. However, measurements of seedling growth kinetics when ethylene is applied and removed have resulted in more complex network models that include coherent feedforward, negative feedback, and positive feedback motifs. The dynamical responses of the proposed networks have not been explored in a quantitative manner. Here, we explore (i) whether any of the proposed models are capable of producing growth-response behaviors consistent with experimental observations and (ii) what mechanistic roles various parts of the network topologies play in ethylene signaling. To address this, we used computational methods to explore two general network topologies: The first contains a coherent feedforward loop that inhibits growth and a negative feedback from growth onto itself (CFF/NFB). In the second, ethylene promotes the cleavage of EIN2, with the product of the cleavage inhibiting growth and promoting the production of EIN2 through a positive feedback loop (PFB). Since few network parameters for ethylene signaling are known in detail, we used an evolutionary algorithm to explore sets of parameters that produce behaviors similar to experimental growth response kinetics of both wildtype and mutant seedlings. We generated a library of parameter sets by independently running the evolutionary algorithm many times. Both network topologies produce behavior consistent with experimental observations, and analysis of the parameter sets allows us to identify important network interactions and parameter constraints. We additionally screened these parameter sets for growth recovery in the presence of sub-saturating ethylene doses, which is an experimentally-observed property that emerges in some of the evolved parameter sets. Finally, we probed simplified networks maintaining key features of the CFF/NFB and PFB topologies. 
From this, we verified observations drawn from the larger networks about mechanisms underlying ethylene signaling. Analysis of each network topology results in predictions about changes that occur in network components that can be experimentally tested to give insights into which, if either, network underlies ethylene responses. PMID:27625669
Can simple rules control development of a pioneer vertebrate neuronal network generating behavior?
Roberts, Alan; Conte, Deborah; Hull, Mike; Merrison-Hort, Robert; al Azad, Abul Kalam; Buhl, Edgar; Borisyuk, Roman; Soffe, Stephen R
2014-01-08
How do the pioneer networks in the axial core of the vertebrate nervous system first develop? Fundamental to understanding any full-scale neuronal network is knowledge of the constituent neurons, their properties, synaptic interconnections, and normal activity. Our novel strategy uses basic developmental rules to generate model networks that retain individual neuron and synapse resolution and are capable of reproducing correct, whole animal responses. We apply our developmental strategy to young Xenopus tadpoles, whose brainstem and spinal cord share a core vertebrate plan, but at a tractable complexity. Following detailed anatomical and physiological measurements to complete a descriptive library of each type of spinal neuron, we build models of their axon growth controlled by simple chemical gradients and physical barriers. By adding dendrites and allowing probabilistic formation of synaptic connections, we reconstruct network connectivity among up to 2000 neurons. When the resulting "network" is populated by model neurons and synapses, with properties based on physiology, it can respond to sensory stimulation by mimicking tadpole swimming behavior. This functioning model represents the most complete reconstruction of a vertebrate neuronal network that can reproduce the complex, rhythmic behavior of a whole animal. The findings validate our novel developmental strategy for generating realistic networks with individual neuron- and synapse-level resolution. We use it to demonstrate how early functional neuronal connectivity and behavior may in life result from simple developmental "rules," which lay out a scaffold for the vertebrate CNS without specific neuron-to-neuron recognition.
Wyart, Claire; Ybert, Christophe; Bourdieu, Laurent; Herr, Catherine; Prinz, Christelle; Chatenay, Didier
2002-06-30
The use of ordered neuronal networks in vitro is a promising approach to study the development and the activity of small neuronal assemblies. However, in previous attempts, sufficient growth control and physiological maturation of neurons could not be achieved. Here we describe an original protocol in which polylysine patterns confine the adhesion of cellular bodies to prescribed spots and the neuritic growth to thin lines. Hippocampal neurons in these networks are maintained healthy in serum-free medium for up to 5 weeks in vitro. Electrophysiology and immunochemistry show that neurons exhibit mature excitatory and inhibitory synapses, and calcium imaging reveals spontaneous activity of neurons in isolated networks. We demonstrate that neurons in these geometrical networks form functional synapses preferentially to their first neighbors. We have, therefore, established a simple and robust protocol to constrain both the location of neuronal cell bodies and their pattern of connectivity. Moreover, the long term maintenance of the geometry and the physiology of the networks raises the possibility of new applications for systematic screening of pharmacological agents and for electronic-to-neuron devices.
2018-01-01
Abstract It is widely assumed that distributed neuronal networks are fundamental to the functioning of the brain. Consistent spike timing between neurons is thought to be one of the key principles for the formation of these networks. This can involve synchronous spiking or spiking with time delays, forming spike sequences when the order of spiking is consistent. Finding networks defined by their sequence of time-shifted spikes, denoted here as spike timing networks, is a tremendous challenge. As neurons can participate in multiple spike sequences at multiple between-spike time delays, the possible complexity of networks is prohibitively large. We present a novel approach that is capable of (1) extracting spike timing networks regardless of their sequence complexity, and (2) that describes their spiking sequences with high temporal precision. We achieve this by decomposing frequency-transformed neuronal spiking into separate networks, characterizing each network’s spike sequence by a time delay per neuron, forming a spike sequence timeline. These networks provide a detailed template for an investigation of the experimental relevance of their spike sequences. Using simulated spike timing networks, we show network extraction is robust to spiking noise, spike timing jitter, and partial occurrences of the involved spike sequences. Using rat multineuron recordings, we demonstrate the approach is capable of revealing real spike timing networks with sub-millisecond temporal precision. By uncovering spike timing networks, the prevalence, structure, and function of complex spike sequences can be investigated in greater detail, allowing us to gain a better understanding of their role in neuronal functioning. PMID:29789811
Learning representations for the early detection of sepsis with deep neural networks.
Kam, Hye Jin; Kim, Ha Young
2017-10-01
Sepsis is one of the leading causes of death in intensive care unit patients. Early detection of sepsis is vital because mortality increases as the sepsis stage worsens. This study aimed to develop detection models for the early stage of sepsis using deep learning methodologies, and to compare the feasibility and performance of the new deep learning methodology with those of the regression method with conventional temporal feature extraction. Study group selection adhered to the InSight model. The results of the deep learning-based models and the InSight model were compared. With deep feedforward networks, the areas under the ROC curve (AUC) were 0.887 and 0.915 for the InSight and the new feature sets, respectively. For the model with the combined feature set, the AUC was the same as that of the basic feature set (0.915). For the long short-term memory model, only the basic feature set was applied and the AUC improved to 0.929 compared with the existing 0.887 of the InSight model. The contributions of this paper can be summarized in three ways: (i) improved performance without feature extraction using domain knowledge, (ii) verification of the feature extraction capability of deep neural networks through comparison with reference features, and (iii) improved performance over feedforward neural networks by using long short-term memory, a neural network architecture that can learn sequential patterns. Copyright © 2017 Elsevier Ltd. All rights reserved.
Mostafa, Hesham; Pedroni, Bruno; Sheik, Sadique; Cauwenberghs, Gert
2017-01-01
Artificial neural networks (ANNs) trained using backpropagation are powerful learning architectures that have achieved state-of-the-art performance in various benchmarks. Significant effort has been devoted to developing custom silicon devices to accelerate inference in ANNs. Accelerating the training phase, however, has attracted relatively little attention. In this paper, we describe a hardware-efficient on-line learning technique for feedforward multi-layer ANNs that is based on pipelined backpropagation. Learning is performed in parallel with inference in the forward pass, removing the need for an explicit backward pass and requiring no extra weight lookup. By using binary state variables in the feedforward network and ternary errors in truncated-error backpropagation, the need for any multiplications in the forward and backward passes is removed, and memory requirements for the pipelining are drastically reduced. Further reduction in addition operations owing to the sparsity in the forward neural and backpropagating error signal paths contributes to highly efficient hardware implementation. For proof-of-concept validation, we demonstrate on-line learning of MNIST handwritten digit classification on a Spartan 6 FPGA interfacing with an external 1Gb DDR2 DRAM, that shows small degradation in test error performance compared to an equivalently sized binary ANN trained off-line using standard back-propagation and exact errors. Our results highlight an attractive synergy between pipelined backpropagation and binary-state networks in substantially reducing computation and memory requirements, making pipelined on-line learning practical in deep networks. PMID:28932180
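Error ternarization of the kind described above can be sketched as follows. The mean-absolute-value thresholding heuristic is an assumption for illustration, not the paper's exact rule:

```python
import numpy as np

def ternarize(err, threshold=None):
    """Quantize backpropagated errors to {-1, 0, +1}. Combined with
    binary activations in the forward pass, weight updates then reduce
    to sign flips and additions, with no multiplications."""
    if threshold is None:
        threshold = np.abs(err).mean()     # assumed heuristic, not the paper's
    t = np.zeros_like(err)
    t[err > threshold] = 1.0
    t[err < -threshold] = -1.0
    return t

e = np.array([0.9, -0.05, 0.2, -0.7])
te = ternarize(e)                          # large errors keep their sign
```

Sparsity falls out naturally: small errors quantize to zero and contribute no update at all, which is part of the hardware efficiency argument made above.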
Video Super-Resolution via Bidirectional Recurrent Convolutional Networks.
Huang, Yan; Wang, Wei; Wang, Liang
2018-04-01
Super resolving a low-resolution video, namely video super-resolution (SR), is usually handled by either single-image SR or multi-frame SR. Single-image SR deals with each video frame independently and ignores the intrinsic temporal dependency of video frames, which actually plays a very important role in video SR. Multi-frame SR generally extracts motion information, e.g., optical flow, to model the temporal dependency, but often incurs high computational cost. Considering that recurrent neural networks (RNNs) can model long-term temporal dependency of video sequences well, we propose a fully convolutional RNN named bidirectional recurrent convolutional network for efficient multi-frame SR. Different from vanilla RNNs, 1) the commonly-used full feedforward and recurrent connections are replaced with weight-sharing convolutional connections, which greatly reduces the number of network parameters and models the temporal dependency at a finer level, i.e., patch-based rather than frame-based, and 2) connections from input layers at previous timesteps to the current hidden layer are added by 3D feedforward convolutions, which aim to capture discriminative spatio-temporal patterns for short-term fast-varying motions in local adjacent frames. Due to the cheap convolutional operations, our model has a low computational complexity and runs orders of magnitude faster than other multi-frame SR methods. With the powerful temporal dependency modeling, our model can super resolve videos with complex motions and achieve good performance.
Interconnected network motifs control podocyte morphology and kidney function.
Azeloglu, Evren U; Hardy, Simon V; Eungdamrong, Narat John; Chen, Yibang; Jayaraman, Gomathi; Chuang, Peter Y; Fang, Wei; Xiong, Huabao; Neves, Susana R; Jain, Mohit R; Li, Hong; Ma'ayan, Avi; Gordon, Ronald E; He, John Cijiang; Iyengar, Ravi
2014-02-04
Podocytes are kidney cells with specialized morphology that is required for glomerular filtration. Diseases such as diabetes, or drug exposures, that disrupt podocyte foot process morphology result in kidney pathophysiology. Proteomic analysis of glomeruli isolated from rats with puromycin-induced kidney disease and control rats indicated that protein kinase A (PKA), which is activated by adenosine 3',5'-monophosphate (cAMP), is a key regulator of podocyte morphology and function. In podocytes, cAMP signaling activates cAMP response element-binding protein (CREB) to enhance expression of the gene encoding a differentiation marker, synaptopodin, a protein that associates with actin and promotes its bundling. We constructed and experimentally verified a β-adrenergic receptor-driven network with multiple feedback and feedforward motifs that controls CREB activity. To determine how the motifs interacted to regulate gene expression, we mapped multicompartment dynamical models, including information about protein subcellular localization, onto the network topology using Petri net formalisms. These computational analyses indicated that the juxtaposition of multiple feedback and feedforward motifs enabled the prolonged CREB activation necessary for synaptopodin expression and actin bundling. Drug-induced modulation of these motifs in diseased rats led to recovery of normal morphology and physiological function in vivo. Thus, analysis of regulatory motifs using network dynamics can provide insights into pathophysiology that enable predictions for drug intervention strategies to treat kidney disease.