Cohen, Yaniv; Wilson, Donald A.; Barkai, Edi
2015-01-01
Learning of a complex olfactory discrimination (OD) task results in acquisition of rule learning after prolonged training. Previously, we demonstrated enhanced synaptic connectivity between the piriform cortex (PC) and its ascending and descending inputs from the olfactory bulb (OB) and orbitofrontal cortex (OFC) following OD rule learning. Here, using recordings of evoked field postsynaptic potentials in behaving animals, we examined the dynamics by which these synaptic pathways are modified during rule acquisition. We show profound differences in synaptic connectivity modulation between the 2 input sources. During rule acquisition, the ascending synaptic connectivity from the OB to the anterior and posterior PC is simultaneously enhanced. Furthermore, post-training stimulation of the OB enhanced learning rate dramatically. In sharp contrast, the synaptic input in the descending pathway from the OFC was significantly reduced until training completion. Once rule learning was established, the strength of synaptic connectivity in the 2 pathways resumed its pretraining values. We suggest that acquisition of olfactory rule learning requires a transient enhancement of ascending inputs to the PC, synchronized with a parallel decrease in the descending inputs. This combined short-lived modulation enables the PC network to reorganize in a manner that enables it to first acquire and then maintain the rule. PMID:23960200
Reuveni, Iris; Lin, Longnian; Barkai, Edi
2018-06-15
Following training in a difficult olfactory-discrimination (OD) task, rats acquire the capability to perform the task easily, with little effort. This newly acquired skill of 'learning how to learn' is termed 'rule learning'. At the single-cell level, rule learning is manifested in long-term enhancement of the intrinsic excitability of piriform cortex (PC) pyramidal neurons and of the excitatory synaptic connections between these neurons. To maintain cortical stability, such a long-lasting increase in excitability must be accompanied by a parallel increase in inhibitory processes that prevents hyper-excitable activation. In this review we describe the cellular and molecular mechanisms underlying complex-learning-induced long-lasting modifications in GABAA-receptor- and GABAB-receptor-mediated synaptic inhibition. Subsequently we discuss how such modifications support the induction and preservation of long-term memories in the mammalian brain. Based on experimental results, computational analysis and modeling, we propose that rule learning is maintained by doubling the strength of synaptic inputs, excitatory as well as inhibitory, in a sub-group of neurons. This enhanced synaptic transmission, which occurs in all (or almost all) synaptic inputs onto these neurons, activates specific stored memories. At the molecular level, such rule-learning-relevant synaptic strengthening is mediated by doubling the conductance of synaptic channels, but not their number. This postsynaptic process is controlled by a whole-cell mechanism via particular second-messenger systems, which enables memory amplification when required and memory extinction when no longer relevant. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.
Modeling somatic and dendritic spike mediated plasticity at the single neuron and network level.
Bono, Jacopo; Clopath, Claudia
2017-09-26
Synaptic plasticity is thought to be the principal neuronal mechanism underlying learning. Models of plastic networks typically combine point neurons with spike-timing-dependent plasticity (STDP) as the learning rule. However, a point neuron does not capture the local non-linear processing of synaptic inputs allowed for by dendrites. Furthermore, experimental evidence suggests that STDP is not the only learning rule available to neurons. By implementing biophysically realistic neuron models, we study how dendrites enable multiple synaptic plasticity mechanisms to coexist in a single cell. In these models, we compare the conditions for STDP and for synaptic strengthening by local dendritic spikes. We also explore how the connectivity between two cells is affected by these plasticity rules and by different synaptic distributions. Finally, we show how memory retention during associative learning can be prolonged in networks of neurons by including dendrites.

Synaptic plasticity is the neuronal mechanism underlying learning. Here the authors construct biophysical models of pyramidal neurons that reproduce observed plasticity gradients along the dendrite, and show that dendritic-spike-dependent LTP, which is predominant in distal sections, can prolong memory retention.
Distributed synaptic weights in a LIF neural network and learning rules
NASA Astrophysics Data System (ADS)
Perthame, Benoît; Salort, Delphine; Wainrib, Gilles
2017-09-01
Leaky integrate-and-fire (LIF) models are mean-field limits, with a large number of neurons, used to describe neural networks. We consider inhomogeneous networks structured by a connectivity parameter (the strength of the synaptic weights), with the effect of processing the input current with different intensities. We first study the properties of the network activity depending on the distribution of synaptic weights, and in particular its discrimination capacity. Then, we consider simple learning rules and determine the synaptic weight distributions they generate. We outline the role of noise as a selection principle and the network's capacity to memorize a learned signal.
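The single-neuron dynamics underlying this class of models can be sketched in a few lines; the parameter values and the use of a scalar weight on a common input current below are illustrative choices, not taken from the paper.

```python
import numpy as np

def simulate_lif(input_current, weight, dt=1e-3, tau=0.02,
                 v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Euler-integrated leaky integrate-and-fire neuron; `weight` plays the
    role of the connectivity parameter scaling the common input."""
    v = v_rest
    spike_times = []
    for step, i_t in enumerate(input_current):
        v += (-(v - v_rest) + weight * i_t) * (dt / tau)  # leak + drive
        if v >= v_thresh:            # threshold crossing: emit a spike
            spike_times.append(step)
            v = v_reset              # and reset the membrane potential
    return spike_times

# Stronger synaptic weights transduce the same input into higher firing
# rates, which is what gives an inhomogeneous network its capacity to
# discriminate input intensities.
current = np.full(1000, 1.2)
weak = simulate_lif(current, weight=1.0)
strong = simulate_lif(current, weight=2.0)
```

The same current thus produces clearly distinct output rates for different weights, the elementary effect the weight distribution exploits at the network level.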
Butts, Daniel A; Kanold, Patrick O; Shatz, Carla J
2007-01-01
Patterned spontaneous activity in the developing retina is necessary to drive synaptic refinement in the lateral geniculate nucleus (LGN). Using perforated patch recordings from neurons in LGN slices during the period of eye segregation, we examine how such burst-based activity can instruct this refinement. Retinogeniculate synapses exhibit a novel learning rule that depends on the latencies between pre- and postsynaptic bursts on the order of one second: coincident bursts produce long-lasting synaptic enhancement, whereas non-overlapping bursts produce mild synaptic weakening. This rule is consistent with the "Hebbian" development thought to exist at this synapse, and we demonstrate computationally that such a rule can robustly use retinal waves to drive eye segregation and retinotopic refinement. Thus, by measuring plasticity induced by natural activity patterns, synaptic learning rules can be linked directly to their larger role in instructing the patterning of neural connectivity. PMID:17341130
Somato-dendritic Synaptic Plasticity and Error-backpropagation in Active Dendrites
Schiess, Mathieu; Urbanczik, Robert; Senn, Walter
2016-01-01
In the last decade dendrites of cortical neurons have been shown to nonlinearly combine synaptic inputs by evoking local dendritic spikes. It has been suggested that these nonlinearities raise the computational power of a single neuron, making it comparable to a 2-layer network of point neurons. But how these nonlinearities can be incorporated into the synaptic plasticity to optimally support learning remains unclear. We present a theoretically derived synaptic plasticity rule for supervised and reinforcement learning that depends on the timing of the presynaptic, the dendritic and the postsynaptic spikes. For supervised learning, the rule can be seen as a biological version of the classical error-backpropagation algorithm applied to the dendritic case. When modulated by a delayed reward signal, the same plasticity is shown to maximize the expected reward in reinforcement learning for various coding scenarios. Our framework makes specific experimental predictions and highlights the unique advantage of active dendrites for implementing powerful synaptic plasticity rules that have access to downstream information via backpropagation of action potentials. PMID:26841235
Mirrored STDP Implements Autoencoder Learning in a Network of Spiking Neurons.
Burbank, Kendra S
2015-12-01
The autoencoder algorithm is a simple but powerful unsupervised method for training neural networks. Autoencoder networks can learn sparse distributed codes similar to those seen in cortical sensory areas such as visual area V1, but they can also be stacked to learn increasingly abstract representations. Several computational neuroscience models of sensory areas, including Olshausen & Field's Sparse Coding algorithm, can be seen as autoencoder variants, and autoencoders have seen extensive use in the machine learning community. Despite their power and versatility, autoencoders have been difficult to implement in a biologically realistic fashion. The challenges include their need to calculate differences between two neuronal activities and their requirement for learning rules which lead to identical changes at feedforward and feedback connections. Here, we study a biologically realistic network of integrate-and-fire neurons with anatomical connectivity and synaptic plasticity that closely matches that observed in cortical sensory areas. Our choice of synaptic plasticity rules is inspired by recent experimental and theoretical results suggesting that learning at feedback connections may have a different form from learning at feedforward connections, and our results depend critically on this novel choice of plasticity rules. Specifically, we propose that plasticity rules at feedforward versus feedback connections are temporally opposed versions of spike-timing dependent plasticity (STDP), leading to a symmetric combined rule we call Mirrored STDP (mSTDP). We show that with mSTDP, our network follows a learning rule that approximately minimizes an autoencoder loss function. When trained with whitened natural image patches, the learned synaptic weights resemble the receptive fields seen in V1. Our results use realistic synaptic plasticity rules to show that the powerful autoencoder learning algorithm could be within the reach of real biological networks.
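The temporally opposed window pair at the heart of mSTDP can be sketched as follows; the amplitudes and time constant are illustrative, not the paper's fitted values.

```python
import math

A_PLUS, A_MINUS, TAU = 0.01, 0.01, 20.0  # illustrative amplitudes; tau in ms

def stdp_ff(dt):
    """Standard STDP at a feedforward synapse; dt = t_post - t_pre (ms)."""
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU)   # pre before post: potentiate
    return -A_MINUS * math.exp(dt / TAU)      # post before pre: depress

def stdp_fb(dt):
    """Mirrored (temporally reversed) STDP at the feedback synapse."""
    return stdp_ff(-dt)

# Swapping the roles of the two neurons negates dt, so a reciprocal
# feedforward/feedback pair always receives identical updates -- the
# symmetry that the autoencoder construction requires at tied weights.
```

With this mirror property, a feedforward weight and its reciprocal feedback weight change by the same amount on every spike pair, which is what lets the combined rule approximately descend an autoencoder loss.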
Yu, Qiang; Tang, Huajin; Tan, Kay Chen; Li, Haizhou
2013-01-01
A new learning rule (Precise-Spike-Driven (PSD) Synaptic Plasticity) is proposed for processing and memorizing spatiotemporal patterns. PSD is a supervised learning rule that is analytically derived from the traditional Widrow-Hoff rule and can be used to train neurons to associate an input spatiotemporal spike pattern with a desired spike train. Synaptic adaptation is driven by the error between the desired and the actual output spikes, with positive errors causing long-term potentiation and negative errors causing long-term depression. The amount of modification is proportional to an eligibility trace that is triggered by afferent spikes. The PSD rule is both computationally efficient and biologically plausible. The properties of this learning rule are investigated extensively through experimental simulations, including its learning performance, its generality to different neuron models, its robustness against noisy conditions, its memory capacity, and the effects of its learning parameters. Experimental results show that the PSD rule is capable of spatiotemporal pattern classification, and can even outperform a well studied benchmark algorithm with the proposed relative confidence criterion. The PSD rule is further validated on a practical example of an optical character recognition problem. The results again show that it can achieve a good recognition performance with a proper encoding. Finally, a detailed discussion is provided about the PSD rule and several related algorithms including tempotron, SPAN, Chronotron and ReSuMe.
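A minimal sketch of this error-driven, trace-gated scheme follows; this is our simplified reading, and the constants, the single-exponential trace, and the discrete time grid are assumptions, not the paper's exact formulation.

```python
import math

def psd_update(pre_spikes, desired, actual, T, dt=1.0, lr=0.01, tau=10.0):
    """Weight change for one synapse under an error-driven, trace-gated rule.

    pre_spikes, desired, actual: spike times on a dt-spaced grid (ms).
    A desired-but-absent output spike potentiates, a spurious one
    depresses, each in proportion to the presynaptic eligibility trace.
    """
    pre = {float(s) for s in pre_spikes}
    ltp = {float(s) for s in desired}
    ltd = {float(s) for s in actual}
    trace, dw = 0.0, 0.0
    for step in range(int(T / dt) + 1):
        t = step * dt
        trace *= math.exp(-dt / tau)   # exponential decay of the trace
        if t in pre:
            trace += 1.0               # afferent spike bumps the trace
        if t in ltp:
            dw += lr * trace           # positive error -> LTP
        if t in ltd:
            dw -= lr * trace           # negative error -> LTD
    return dw

# A desired spike at 20 ms that the neuron missed, with a presynaptic
# spike at 15 ms, yields potentiation; the mirror case yields depression.
dw_ltp = psd_update([15.0], desired=[20.0], actual=[], T=50.0)
dw_ltd = psd_update([15.0], desired=[], actual=[20.0], T=50.0)
```

Because the update is gated by the presynaptic trace, only synapses that were recently active are modified, which is what lets the rule bind an input spatiotemporal pattern to a desired output spike train.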
Implementation of a spike-based perceptron learning rule using TiO2-x memristors.
Mostafa, Hesham; Khiat, Ali; Serb, Alexander; Mayr, Christian G; Indiveri, Giacomo; Prodromakis, Themis
2015-01-01
Synaptic plasticity plays a crucial role in allowing neural networks to learn and adapt to various input environments. Neuromorphic systems need to implement plastic synapses to obtain basic "cognitive" capabilities such as learning. One promising and scalable approach for implementing neuromorphic synapses is to use nano-scale memristors as synaptic elements. In this paper we propose a hybrid CMOS-memristor system comprising CMOS neurons interconnected through TiO2-x memristors, and spike-based learning circuits that modulate the conductance of the memristive synapse elements according to a spike-based Perceptron plasticity rule. We highlight a number of advantages for using this spike-based plasticity rule as compared to other forms of spike timing dependent plasticity (STDP) rules. We provide experimental proof-of-concept results with two silicon neurons connected through a memristive synapse that show how the CMOS plasticity circuits can induce stable changes in memristor conductances, giving rise to increased synaptic strength after a potentiation episode and to decreased strength after a depression episode.
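For reference, the classical perceptron rule that the spike-based circuit approximates can be sketched as below; this is the textbook rule only, not a model of the spike-based or CMOS-memristor implementation.

```python
def perceptron_update(w, x, target, lr=0.1, theta=0.0):
    """Classical perceptron rule: potentiate when the neuron should have
    fired but did not, depress in the opposite case -- mirroring the
    potentiation and depression episodes applied to the memristor."""
    y = 1 if sum(wi * xi for wi, xi in zip(w, x)) > theta else 0
    return [wi + lr * (target - y) * xi for wi, xi in zip(w, x)], y

# Train on OR (the first input is a constant bias term).
data = [([1, 0, 0], 0), ([1, 0, 1], 1), ([1, 1, 0], 1), ([1, 1, 1], 1)]
w = [0.0, 0.0, 0.0]
for _ in range(20):
    for x, t in data:
        w, _ = perceptron_update(w, x, t)
```

The update fires only when the output is wrong, a property the paper highlights as an advantage over always-on STDP rules, since it leaves converged weights (and memristor conductances) untouched.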
The Convallis Rule for Unsupervised Learning in Cortical Networks
Yger, Pierre; Harris, Kenneth D.
2013-01-01
The phenomenology and cellular mechanisms of cortical synaptic plasticity are becoming known in increasing detail, but the computational principles by which cortical plasticity enables the development of sensory representations are unclear. Here we describe a framework for cortical synaptic plasticity termed the “Convallis rule”, mathematically derived from a principle of unsupervised learning via constrained optimization. Implementation of the rule caused a recurrent cortex-like network of simulated spiking neurons to develop rate representations of real-world speech stimuli, enabling classification by a downstream linear decoder. Applied to spike patterns used in in vitro plasticity experiments, the rule reproduced multiple results including and beyond STDP. However, STDP alone produced poorer learning performance. The mathematical form of the rule is consistent with a dual coincidence detector mechanism that has been suggested by experiments in several synaptic classes of juvenile neocortex. Based on this confluence of normative, phenomenological, and mechanistic evidence, we suggest that the rule may approximate a fundamental computational principle of the neocortex. PMID:24204224
Genetic attack on neural cryptography.
Ruttor, Andreas; Kinzel, Wolfgang; Naeh, Rivka; Kanter, Ido
2006-03-01
Different scaling properties for the complexity of bidirectional synchronization and unidirectional learning are essential for the security of neural cryptography. Incrementing the synaptic depth of the networks increases the synchronization time only polynomially, but the success of the geometric attack is reduced exponentially and it clearly fails in the limit of infinite synaptic depth. This method is improved by adding a genetic algorithm, which selects the fittest neural networks. The probability of a successful genetic attack is calculated for different model parameters using numerical simulations. The results show that scaling laws observed in the case of other attacks hold for the improved algorithm, too. The number of networks needed for an effective attack grows exponentially with increasing synaptic depth. In addition, finite-size effects caused by Hebbian and anti-Hebbian learning are analyzed. These learning rules converge to the random walk rule if the synaptic depth is small compared to the square root of the system size.
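The protocol under attack is the tree parity machine exchange; a minimal sketch of the partners' synchronization with the Hebbian rule reads as follows (the network sizes and the iteration cap are illustrative, and the attacker is not modeled).

```python
import numpy as np

rng = np.random.default_rng(0)
K, N, L = 3, 10, 3  # hidden units, inputs per unit, synaptic depth

def tpm_output(w, x):
    """Hidden-unit signs and their product (the tree parity machine output)."""
    sigma = np.sign(np.sum(w * x, axis=1))
    sigma[sigma == 0] = -1
    return sigma, int(np.prod(sigma))

def hebbian_update(w, x, sigma, tau):
    """Hebbian rule: hidden units agreeing with the output move toward
    tau * x, clipped to the synaptic depth L (the parameter whose growth
    defeats the geometric and genetic attacks)."""
    for k in range(K):
        if sigma[k] == tau:
            w[k] = np.clip(w[k] + x[k] * tau, -L, L)
    return w

# Partners A and B update only on rounds where their public outputs agree;
# bidirectional synchronization succeeds after polynomially many rounds.
wA = rng.integers(-L, L + 1, size=(K, N))
wB = rng.integers(-L, L + 1, size=(K, N))
for _ in range(100000):
    x = rng.choice([-1, 1], size=(K, N))
    sA, tA = tpm_output(wA, x)
    sB, tB = tpm_output(wB, x)
    if tA == tB:
        hebbian_update(wA, x, sA, tA)
        hebbian_update(wB, x, sB, tB)
    if np.array_equal(wA, wB):
        break
```

A unidirectional attacker running the same update cannot influence the exchanged inputs, which is the asymmetry whose scaling in L the paper quantifies.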
Supervised Learning in Spiking Neural Networks for Precise Temporal Encoding.
Gardner, Brian; Grüning, André
2016-01-01
Precise spike timing as a means to encode information in neural networks is biologically supported, and is advantageous over frequency-based codes by processing input features on a much shorter time-scale. For these reasons, much recent attention has been focused on the development of supervised learning rules for spiking neural networks that utilise a temporal coding scheme. However, despite significant progress in this area, rules that both have a theoretical basis and can be considered biologically relevant are still lacking. Here we examine the general conditions under which synaptic plasticity most effectively takes place to support the supervised learning of a precise temporal code. As part of our analysis we examine two spike-based learning methods: one relying on an instantaneous error signal to modify synaptic weights in a network (the INST rule), and the other on a filtered error signal for smoother synaptic weight modifications (the FILT rule). We test the accuracy of the solutions provided by each rule with respect to their temporal encoding precision, and then measure the maximum number of input patterns they can learn to memorise using the precise timings of individual spikes as an indication of their storage capacity. Our results demonstrate the high performance of the FILT rule in most cases, underpinned by the rule's error-filtering mechanism, which is predicted to provide smooth convergence towards a desired solution during learning. We also find the FILT rule to be most efficient at performing input pattern memorisations, and most noticeably when patterns are identified using spikes with sub-millisecond temporal precision. In comparison with existing work, we determine the performance of the FILT rule to be consistent with that of the highly efficient E-learning Chronotron rule, but with the distinct advantage that our FILT rule is also implementable as an online method for increased biological realism.
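The distinction between the two error signals can be sketched in simplified discrete time; the presynaptic trace, the constants, and the variable names below are ours, not the paper's exact formulation.

```python
import numpy as np

def weight_updates(pre_trace, desired, actual, lr=0.01, tau_f=10.0, dt=1.0):
    """INST- vs FILT-style weight changes for one synapse over one trial.

    pre_trace: presynaptic activity trace per time step; desired/actual:
    binary spike trains of the postsynaptic neuron.
    """
    err = desired.astype(float) - actual.astype(float)
    # INST-like: the raw, instantaneous error gates the update directly.
    dw_inst = lr * float(np.sum(err * pre_trace))
    # FILT-like: the error is low-pass filtered first, smoothing updates.
    filt_err = np.zeros_like(err)
    acc = 0.0
    for t in range(len(err)):
        acc = acc * np.exp(-dt / tau_f) + err[t]
        filt_err[t] = acc
    dw_filt = lr * float(np.sum(filt_err * pre_trace))
    return dw_inst, dw_filt

# One desired-but-missing output spike at step 20, presynaptic spike at 15:
T = 60
pre_trace = np.array([np.exp(-(t - 15) / 10.0) if t >= 15 else 0.0
                      for t in range(T)])
desired = np.zeros(T, dtype=int); desired[20] = 1
actual = np.zeros(T, dtype=int)
dw_inst, dw_filt = weight_updates(pre_trace, desired, actual)
```

The filtered error persists after the timing mismatch, so the FILT-style update integrates the mismatch over a window rather than at a single instant, which is the mechanism the paper credits for smoother convergence.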
Dynamic Hebbian Cross-Correlation Learning Resolves the Spike Timing Dependent Plasticity Conundrum.
Olde Scheper, Tjeerd V; Meredith, Rhiannon M; Mansvelder, Huibert D; van Pelt, Jaap; van Ooyen, Arjen
2017-01-01
Spike Timing-Dependent Plasticity has been found to assume many different forms. The classic STDP curve, with one potentiating and one depressing window, is only one of many possible curves that describe synaptic learning using the STDP mechanism. It has been shown experimentally that STDP curves may contain multiple LTP and LTD windows of variable width, and even inverted windows. The underlying STDP mechanism that is capable of producing such an extensive, and apparently incompatible, range of learning curves is still under investigation. In this paper, it is shown that STDP originates from a combination of two dynamic Hebbian cross-correlations of local activity at the synapse. The correlation of the presynaptic activity with the local postsynaptic activity is a robust and reliable indicator of the discrepancy between the presynaptic and postsynaptic neurons' activity. The second correlation, between the local postsynaptic activity and the dendritic activity, is a good indicator of matching local synaptic and dendritic activity. We show that this simple time-independent learning rule can give rise to many forms of the STDP learning curve. The rule regulates synaptic strength without the need for spike matching or other supervisory learning mechanisms. Local differences in dendritic activity at the synapse greatly affect the cross-correlation difference which determines the relative contributions of different neural activity sources. Dendritic activity due to nearby synapses, action potentials, both forward and back-propagating, as well as inhibitory synapses will dynamically modify the local activity at the synapse, and the resulting STDP learning rule. The dynamic Hebbian learning rule furthermore ensures that the resulting synaptic strength is dynamically stable, and that interactions between synapses do not result in local instabilities.
The rule clearly demonstrates that synapses function as independent localized computational entities, each contributing to the global activity, not in a simply linear fashion, but in a manner that is appropriate to achieve local and global stability of the neuron and the entire dendritic structure.
Event-Driven Random Back-Propagation: Enabling Neuromorphic Deep Learning Machines
Neftci, Emre O.; Augustine, Charles; Paul, Somnath; Detorakis, Georgios
2017-01-01
An ongoing challenge in neuromorphic computing is to devise general and computationally efficient models of inference and learning which are compatible with the spatial and temporal constraints of the brain. One increasingly popular and successful approach is to take inspiration from inference and learning algorithms used in deep neural networks. However, the workhorse of deep learning, the gradient-descent-based error backpropagation (BP) rule, often relies on the immediate availability of network-wide information stored with high-precision memory during learning, and precise operations that are difficult to realize in neuromorphic hardware. Remarkably, recent work showed that exact backpropagated gradients are not essential for learning deep representations. Building on these results, we demonstrate an event-driven random BP (eRBP) rule that uses an error-modulated synaptic plasticity for learning deep representations. Using a two-compartment Leaky Integrate & Fire (I&F) neuron, the rule requires only one addition and two comparisons for each synaptic weight, making it very suitable for implementation in digital or mixed-signal neuromorphic hardware. Our results show that using eRBP, deep representations are rapidly learned, achieving classification accuracies on permutation invariant datasets comparable to those obtained in artificial neural network simulations on GPUs, while being robust to neural and synaptic state quantizations during learning. PMID:28680387
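Stripped of the spiking dynamics, the core idea, namely fixed random feedback weights in place of the transposed forward weights, can be sketched in a rate-based form; all sizes and constants below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def erbp_step(params, x, target, lr=0.05):
    """One rate-based step of random backpropagation: the hidden 'error'
    is a fixed random projection G of the output error, not the
    transpose of the forward weights W2."""
    W1, W2, G = params["W1"], params["W2"], params["G"]
    h = np.tanh(W1 @ x)
    y = W2 @ h
    err = y - target
    hid_err = (G @ err) * (1.0 - h ** 2)  # random feedback, tanh' gate
    W2 -= lr * np.outer(err, h)           # in-place weight updates
    W1 -= lr * np.outer(hid_err, x)
    return float(np.sum(err ** 2))

params = {"W1": rng.normal(0.0, 0.5, (8, 4)),
          "W2": rng.normal(0.0, 0.5, (2, 8)),
          "G":  rng.normal(0.0, 0.5, (8, 2))}
x = np.array([1.0, -1.0, 0.5, 0.0])
target = np.array([0.5, -0.5])
losses = [erbp_step(params, x, target) for _ in range(200)]
```

Because G is fixed and random, no symmetric weight transport is needed, which is what makes the rule local enough for the per-synapse add-and-compare hardware implementation the paper describes.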
NASA Astrophysics Data System (ADS)
Li, Qiang; Wang, Zhi; Le, Yansi; Sun, Chonghui; Song, Xiaojia; Wu, Chongqing
2016-10-01
Neuromorphic engineering has a wide range of applications in the fields of machine learning, pattern recognition, adaptive control, etc. Photonics, characterized by its high speed, wide bandwidth, low power consumption and massive parallelism, is an ideal way to realize ultrafast spiking neural networks (SNNs). Synaptic plasticity is believed to be critical for learning, memory and development in neural circuits. Experimental results have shown that changes of synapses are highly dependent on the relative timing of pre- and postsynaptic spikes. Synaptic plasticity in which presynaptic spikes preceding postsynaptic spikes result in strengthening, while the opposite timing results in weakening, is called the antisymmetric spike-timing-dependent plasticity (STDP) learning rule. Synaptic plasticity with the opposite dependence under the same conditions is called the antisymmetric anti-STDP learning rule. We proposed and experimentally demonstrated an optical implementation of neural learning algorithms that can achieve both the antisymmetric STDP and anti-STDP learning rules, based on cross-gain modulation (XGM) within a single semiconductor optical amplifier (SOA). The width and height of the potentiation and depression windows can be controlled by adjusting the injection current of the SOA, to mimic the biological antisymmetric STDP and anti-STDP learning rules more realistically. As the injection current increases, the width of the depression and potentiation windows decreases and their height increases, owing to the decreasing recovery time and increasing gain under a stronger injection current. Based on the demonstrated optical STDP circuit, ultrafast learning in optical SNNs can be realized.
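The antisymmetric STDP and anti-STDP windows can be sketched as a single parameterized function, with window width and height as free parameters standing in for the injection-current dependence (an abstraction, not a model of the SOA device).

```python
import math

def stdp_window(dt, a_plus, a_minus, tau_plus, tau_minus, anti=False):
    """Weight change vs spike-timing difference dt = t_post - t_pre (ms).

    a_* set the window heights and tau_* the widths; anti=True flips the
    sign of the whole curve, giving the anti-STDP rule.
    """
    if dt >= 0:
        dw = a_plus * math.exp(-dt / tau_plus)    # pre leads: potentiation
    else:
        dw = -a_minus * math.exp(dt / tau_minus)  # post leads: depression
    return -dw if anti else dw

# Mimicking a stronger injection current: narrower (smaller tau) and
# taller (larger a_*) windows, per the trend reported in the paper.
weak_drive = stdp_window(10.0, 0.5, 0.5, 20.0, 20.0)
strong_drive = stdp_window(10.0, 1.0, 1.0, 5.0, 5.0)
```

Sweeping `a_*` and `tau_*` together then traces out the family of window shapes that the SOA realizes by varying a single physical knob, the injection current.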
Inter-synaptic learning of combination rules in a cortical network model
Lavigne, Frédéric; Avnaïm, Francis; Dumercy, Laurent
2014-01-01
Selecting responses in working memory while processing combinations of stimuli depends strongly on their relations stored in long-term memory. However, the learning of XOR-like combinations of stimuli and responses according to complex rules raises the issue of the non-linear separability of the responses within the space of stimuli. One proposed solution is to add neurons that perform a stage of non-linear processing between the stimuli and responses, at the cost of increasing the network size. Based on the non-linear integration of synaptic inputs within dendritic compartments, we propose here an inter-synaptic (IS) learning algorithm that determines the probability of potentiating/depressing each synapse as a function of the co-activity of the other synapses within the same dendrite. IS learning is effective with random connectivity and without either a priori wiring or additional neurons. Our results show that IS learning generates efficacy values that are sufficient for the processing of XOR-like combinations, on the basis of the sole correlational structure of the stimuli and responses. We analyze the types of dendrites involved in terms of the number of synapses from pre-synaptic neurons coding for the stimuli and responses. The synaptic efficacy values obtained show that different dendrites specialize in the detection of different combinations of stimuli. The resulting behavior of the cortical network model is analyzed as a function of inter-synaptic vs. Hebbian learning. Combinatorial priming effects show that the retrospective activity of neurons coding for the stimuli triggers XOR-like combination-selective prospective activity of neurons coding for the expected response. The synergistic effects of inter-synaptic learning and of mixed-coding neurons are simulated. The results show that, although each mechanism is sufficient by itself, their combined effects improve the performance of the network. PMID:25221529
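The separability issue the abstract addresses can be illustrated in a few lines: XOR is not linearly separable in the raw stimulus space, but one multiplicative feature, a stand-in for non-linear dendritic integration, makes a linear readout sufficient. This is a toy construction, not the IS algorithm itself:

```python
import numpy as np

# XOR truth table: no linear weights on (x1, x2) alone can separate it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

# Augment with a multiplicative "dendritic" feature x1*x2.
X_dendritic = np.column_stack([X, X[:, 0] * X[:, 1]])
w, *_ = np.linalg.lstsq(X_dendritic, y, rcond=None)  # linear readout fit
pred = (X_dendritic @ w > 0.5).astype(int)           # thresholded output
```

The exact solution is w = (1, 1, -2), i.e. the product term carries the XOR-specific information that no purely additive combination can.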
Energy Efficient Sparse Connectivity from Imbalanced Synaptic Plasticity Rules
Sacramento, João; Wichert, Andreas; van Rossum, Mark C. W.
2015-01-01
It is believed that energy efficiency is an important constraint in brain evolution. As synaptic transmission dominates energy consumption, energy can be saved by ensuring that only a few synapses are active. It is therefore likely that the formation of sparse codes and sparse connectivity are fundamental objectives of synaptic plasticity. In this work we study how sparse connectivity can result from a synaptic learning rule of excitatory synapses. Information is maximised when potentiation and depression are balanced according to the mean presynaptic activity level and the resulting fraction of zero-weight synapses is around 50%. However, an imbalance towards depression increases the fraction of zero-weight synapses without significantly affecting performance. We show that imbalanced plasticity corresponds to imposing a regularising constraint on the L1-norm of the synaptic weight vector, a procedure that is well-known to induce sparseness. Imbalanced plasticity is biophysically plausible and leads to more efficient synaptic configurations than a previously suggested approach that prunes synapses after learning. Our framework gives a novel interpretation to the high fraction of silent synapses found in brain regions like the cerebellum. PMID:26046817
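The correspondence between depression-biased plasticity and an L1-norm constraint can be illustrated with the L1 proximal (soft-thresholding) step on non-negative weights; the shrinkage value and the weight distribution are illustrative, not taken from the paper:

```python
import numpy as np

def soft_threshold(w, lam):
    """Proximal step for an L1 penalty on non-negative excitatory weights:
    shrink every weight toward zero by lam and clip at zero. A net
    depression bias in plasticity has the same effect, silencing the
    smallest synapses."""
    return np.maximum(w - lam, 0.0)

rng = np.random.default_rng(1)
w = rng.uniform(0.0, 1.0, 10_000)
imbalanced = soft_threshold(w, 0.3)              # depression-biased step
frac_silent = float(np.mean(imbalanced == 0.0))  # fraction of silent synapses
```

With uniform initial weights and a shrinkage of 0.3, roughly 30% of synapses end up exactly at zero, which is the sparseness mechanism the abstract identifies.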
Learning of Precise Spike Times with Homeostatic Membrane Potential Dependent Synaptic Plasticity.
Albers, Christian; Westkott, Maren; Pawelzik, Klaus
2016-01-01
Precise spatio-temporal patterns of neuronal action potentials underlie e.g. sensory representations and control of muscle activities. However, it is not known how the synaptic efficacies in the neuronal networks of the brain adapt such that they can reliably generate spikes at specific points in time. Existing activity-dependent plasticity rules like Spike-Timing-Dependent Plasticity are agnostic to the goal of learning spike times. On the other hand, the existing formal and supervised learning algorithms perform a temporally precise comparison of projected activity with the target, but there is no known biologically plausible implementation of this comparison. Here, we propose a simple and local unsupervised synaptic plasticity mechanism that is derived from the requirement of a balanced membrane potential. Since the relevant signal for synaptic change is the postsynaptic voltage rather than spike times, we call the plasticity rule Membrane Potential Dependent Plasticity (MPDP). Combining our plasticity mechanism with spike after-hyperpolarization causes a sensitivity of synaptic change to pre- and postsynaptic spike times which can reproduce Hebbian spike timing dependent plasticity for inhibitory synapses as was found in experiments. In addition, the sensitivity of MPDP to the time course of the voltage when generating a spike allows MPDP to distinguish between weak (spurious) and strong (teacher) spikes, which therefore provides a neuronal basis for the comparison of actual and target activity. For spatio-temporal input spike patterns our conceptually simple plasticity rule achieves a surprisingly high storage capacity for spike associations. The sensitivity of the MPDP to the subthreshold membrane potential during training allows robust memory retrieval after learning even in the presence of activity corrupted by noise. We propose that MPDP represents a biophysically plausible mechanism to learn temporal target activity patterns. PMID:26900845
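The homeostatic core of MPDP can be caricatured in a few lines: at active synapses the weight moves so as to pull the postsynaptic voltage back toward a balanced target. The published rule uses separate thresholded branches for depolarization and hyperpolarization, so the single linear branch below is a simplifying assumption:

```python
def mpdp_update(w, pre_active, v_post, v_target=0.0, eta=0.01):
    """Membrane Potential Dependent Plasticity, homeostatic caricature:
    depression when the postsynaptic voltage is above the balanced
    target, potentiation when it is below; silent synapses are untouched."""
    if not pre_active:
        return w
    return w - eta * (v_post - v_target)
```

Because the drive is the voltage rather than spike times, spike-timing sensitivity emerges only indirectly, via the voltage trajectory around spikes, as the abstract describes.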
Wang, Runchun M.; Hamilton, Tara J.; Tapson, Jonathan C.; van Schaik, André
2015-01-01
We present a neuromorphic implementation of multiple synaptic plasticity learning rules, which include both Spike Timing Dependent Plasticity (STDP) and Spike Timing Dependent Delay Plasticity (STDDP). We present a fully digital implementation as well as a mixed-signal implementation, both of which use a novel dynamic-assignment time-multiplexing approach and support up to 2^26 (64M) synaptic plasticity elements. Rather than implementing dedicated synapses for particular types of synaptic plasticity, we implemented a more generic synaptic plasticity adaptor array that is separate from the neurons in the neural network. Each adaptor performs synaptic plasticity according to the arrival times of the pre- and post-synaptic spikes assigned to it, and sends out a weighted or delayed pre-synaptic spike to the post-synaptic neuron in the neural network. This strategy provides great flexibility for building complex large-scale neural networks, as a neural network can be configured for multiple synaptic plasticity rules without changing its structure. We validate the proposed neuromorphic implementations with measurement results and illustrate that the circuits are capable of performing both STDP and STDDP. We argue that it is practical to scale the work presented here up to 2^36 (64G) synaptic adaptors on a current high-end FPGA platform. PMID:26041985
Learning rules for spike timing-dependent plasticity depend on dendritic synapse location.
Letzkus, Johannes J; Kampa, Björn M; Stuart, Greg J
2006-10-11
Previous studies focusing on the temporal rules governing changes in synaptic strength during spike timing-dependent synaptic plasticity (STDP) have paid little attention to the fact that synaptic inputs are distributed across complex dendritic trees. During STDP, propagation of action potentials (APs) back to the site of synaptic input is thought to trigger plasticity. However, in pyramidal neurons, backpropagation of single APs is decremental, whereas high-frequency bursts lead to generation of distal dendritic calcium spikes. This raises the question whether STDP learning rules depend on synapse location and firing mode. Here, we investigate this issue at synapses between layer 2/3 and layer 5 pyramidal neurons in somatosensory cortex. We find that low-frequency pairing of single APs at positive times leads to a distance-dependent shift to long-term depression (LTD) at distal inputs. At proximal sites, this LTD could be converted to long-term potentiation (LTP) by dendritic depolarizations suprathreshold for BAC-firing or by high-frequency AP bursts. During AP bursts, we observed a progressive, distance-dependent shift in the timing requirements for induction of LTP and LTD, such that distal synapses display novel timing rules: they potentiate when inputs are activated after burst onset (negative timing) but depress when activated before burst onset (positive timing). These findings could be explained by distance-dependent differences in the underlying dendritic voltage waveforms driving NMDA receptor activation during STDP induction. Our results suggest that synapse location within the dendritic tree is a crucial determinant of STDP, and that synapses undergo plasticity according to local rather than global learning rules.
Depression-Biased Reverse Plasticity Rule Is Required for Stable Learning at Top-Down Connections
Burbank, Kendra S.; Kreiman, Gabriel
2012-01-01
Top-down synapses are ubiquitous throughout neocortex and play a central role in cognition, yet little is known about their development and specificity. During sensory experience, lower neocortical areas are activated before higher ones, causing top-down synapses to experience a preponderance of post-synaptic activity preceding pre-synaptic activity. This timing pattern is the opposite of that experienced by bottom-up synapses, which suggests that different versions of spike-timing-dependent synaptic plasticity (STDP) rules may be required at top-down synapses. We consider a two-layer neural network model and investigate which STDP rules can lead to a distribution of top-down synaptic weights that is stable, diverse and avoids strong loops. We introduce a temporally reversed rule (rSTDP) where top-down synapses are potentiated if post-synaptic activity precedes pre-synaptic activity. Combining analytical work and integrate-and-fire simulations, we show that only depression-biased rSTDP (and not classical STDP) produces stable and diverse top-down weights. The conclusions did not change upon addition of homeostatic mechanisms, multiplicative STDP rules or weak external input to the top neurons. Our prediction for rSTDP at top-down synapses, which are distally located, is supported by recent neurophysiological evidence showing the existence of temporally reversed STDP in synapses that are distal to the post-synaptic cell body. PMID:22396630
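The depression-biased, temporally reversed window can be sketched as follows; amplitudes and the time constant are illustrative choices, with the depression amplitude set larger to reflect the bias the paper requires:

```python
import math

def rstdp_dw(dt, a_pot=1.0, a_dep=1.2, tau=20.0):
    """Depression-biased reversed STDP (rSTDP). dt = t_post - t_pre in ms:
    post-before-pre (dt < 0) potentiates, pre-before-post depresses;
    a_dep > a_pot gives the overall depression bias."""
    if dt < 0:
        return a_pot * math.exp(dt / tau)
    elif dt > 0:
        return -a_dep * math.exp(-dt / tau)
    return 0.0
```

Compared with classical STDP, the two branches are swapped and the integral of the window is negative, which is what keeps top-down weights stable in the model.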
A Model of Self-Organizing Head-Centered Visual Responses in Primate Parietal Areas
Mender, Bedeho M. W.; Stringer, Simon M.
2013-01-01
We present a hypothesis for how head-centered visual representations in primate parietal areas could self-organize through visually-guided learning, and test this hypothesis using a neural network model. The model consists of a competitive output layer of neurons that receives afferent synaptic connections from a population of input neurons with eye position gain modulated retinal receptive fields. The synaptic connections in the model are trained with an associative trace learning rule which has the effect of encouraging output neurons to learn to respond to subsets of input patterns that tend to occur close together in time. This network architecture and synaptic learning rule is hypothesized to promote the development of head-centered output neurons during periods of time when the head remains fixed while the eyes move. This hypothesis is demonstrated to be feasible, and each of the core model components described is tested and found to be individually necessary for successful self-organization. PMID:24349064
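The associative trace learning rule used in the model can be sketched as a leaky running average of postsynaptic activity driving a Hebbian update, which binds inputs that occur close together in time (e.g. during eye movements with the head fixed) onto the same output neuron. The decay constant and learning rate here are illustrative:

```python
import numpy as np

def trace_rule(inputs, w, eta=0.05, decay=0.8):
    """Associative trace learning (sketch): the postsynaptic trace is a
    leaky average of recent output, so temporally adjacent input patterns
    strengthen onto the same output neuron."""
    y_trace = 0.0
    for x in inputs:
        y = float(w @ x)                        # instantaneous response
        y_trace = decay * y_trace + (1 - decay) * y
        w = w + eta * y_trace * x               # Hebb with traced activity
    return w
```

In the full model this update runs inside a competitive output layer, so different output neurons come to represent different head-centered locations.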
Competitive STDP Learning of Overlapping Spatial Patterns.
Krunglevicius, Dalius
2015-08-01
Spike-timing-dependent plasticity (STDP) is a set of Hebbian learning rules firmly based on biological evidence. It has been demonstrated that one of the STDP learning rules is suited for learning spatiotemporal patterns. When multiple neurons are organized in a simple competitive spiking neural network, this network is capable of learning multiple distinct patterns. If patterns overlap significantly (i.e., patterns are mutually inclusive), however, competition would not preclude a trained neuron from responding to a new pattern and adjusting its synaptic weights accordingly. This letter presents a simple neural network that combines vertical inhibition and a Euclidean distance-dependent synaptic strength factor. This approach helps to solve the problem of pattern size-dependent parameter optimality and significantly reduces the probability of a neuron's forgetting an already learned pattern. For demonstration purposes, the network was trained on the first ten letters of the Braille alphabet.
Spiking neuron network Helmholtz machine.
Sountsov, Pavel; Miller, Paul
2015-01-01
An increasing amount of behavioral and neurophysiological data suggests that the brain performs optimal (or near-optimal) probabilistic inference and learning during perception and other tasks. Although many machine learning algorithms exist that perform inference and learning in an optimal way, the complete description of how one of those algorithms (or a novel algorithm) can be implemented in the brain is currently incomplete. There have been many proposed solutions that address how neurons can perform optimal inference but the question of how synaptic plasticity can implement optimal learning is rarely addressed. This paper aims to unify the two fields of probabilistic inference and synaptic plasticity by using a neuronal network of realistic model spiking neurons to implement a well-studied computational model called the Helmholtz Machine. The Helmholtz Machine is amenable to neural implementation as the algorithm it uses to learn its parameters, called the wake-sleep algorithm, uses a local delta learning rule. Our spiking-neuron network implements both the delta rule and a small example of a Helmholtz machine. This neuronal network can learn an internal model of continuous-valued training data sets without supervision. The network can also perform inference on the learned internal models. We show how various biophysical features of the neural implementation constrain the parameters of the wake-sleep algorithm, such as the duration of the wake and sleep phases of learning and the minimal sample duration. We examine the deviations from optimal performance and tie them to the properties of the synaptic plasticity rule. PMID:25954191
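The local delta rule on which the wake-sleep algorithm rests can be sketched for a single sigmoid unit. In the wake phase, recognition activity supplies the targets for the generative weights; in the sleep phase the roles reverse. The names and learning rate below are illustrative:

```python
import numpy as np

def delta_update(W, b, x, target, eta=0.5):
    """Local delta rule (sketch): each weight moves in proportion to its
    own input and the local difference between a target activity and the
    unit's prediction -- no global error signal is required."""
    pred = 1.0 / (1.0 + np.exp(-(W @ x + b)))    # sigmoid unit
    err = target - pred                           # purely local error
    return W + eta * np.outer(err, x), b + eta * err
```

Locality is the point: every quantity in the update is available at the synapse, which is what makes the rule a candidate for a spiking-neuron implementation.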
Verhoog, Matthijs B; Mansvelder, Huibert D
2011-01-01
Throughout life, activity-dependent changes in neuronal connection strength enable the brain to refine neural circuits and learn based on experience. In line with predictions made by Hebb, synapse strength can be modified depending on the millisecond timing of action potential firing, a phenomenon known as spike-timing-dependent plasticity (STDP). The sign of synaptic plasticity depends on the spike order of presynaptic and postsynaptic neurons. Ionotropic neurotransmitter receptors, such as NMDA receptors and nicotinic acetylcholine receptors, are intimately involved in setting the rules for synaptic strengthening and weakening. In addition, timing rules for STDP within synapses are not fixed. They can be altered by activation of ionotropic receptors located at, or close to, synapses. Here, we will highlight studies that uncovered how network actions control and modulate timing rules for STDP by activating presynaptic ionotropic receptors. Furthermore, we will discuss how interaction between different types of ionotropic receptors may create "timing" windows during which particular timing rules lead to synaptic changes.
Adaptive WTA with an analog VLSI neuromorphic learning chip.
Häfliger, Philipp
2007-03-01
In this paper, we demonstrate how a particular spike-based learning rule (where exact temporal relations between input and output spikes of a spiking model neuron determine the changes of the synaptic weights) can be tuned to express rate-based classical Hebbian learning behavior (where the average input and output spike rates are sufficient to describe the synaptic changes). This shift in behavior is controlled by the input statistic and by a single time constant. The learning rule has been implemented in a neuromorphic very large scale integration (VLSI) chip as part of a neurally inspired spike signal image processing system. The latter is the result of the European Union research project Convolution AER Vision Architecture for Real-Time (CAVIAR). Since it is implemented as a spike-based learning rule (which is most convenient in the overall spike-based system), even if it is tuned to show rate behavior, no explicit long-term average signals are computed on the chip. We show the rule's rate-based Hebbian learning ability in a classification task in both simulation and chip experiment, first with artificial stimuli and then with sensor input from the CAVIAR system.
A theory of local learning, the learning channel, and the optimality of backpropagation.
Baldi, Pierre; Sadowski, Peter
2016-11-01
In a physical neural system, where storage and processing are intimately intertwined, the rules for adjusting the synaptic weights can only depend on variables that are available locally, such as the activity of the pre- and post-synaptic neurons, resulting in local learning rules. A systematic framework for studying the space of local learning rules is obtained by first specifying the nature of the local variables, and then the functional form that ties them together into each learning rule. Such a framework also enables the systematic discovery of new learning rules and exploration of relationships between learning rules and group symmetries. We study polynomial local learning rules stratified by their degree and analyze their behavior and capabilities in both linear and non-linear units and networks. Stacking local learning rules in deep feedforward networks leads to deep local learning. While deep local learning can learn interesting representations, it cannot learn complex input-output functions, even when targets are available for the top layer. Learning complex input-output functions requires local deep learning where target information is communicated to the deep layers through a backward learning channel. The nature of the communicated information about the targets and the structure of the learning channel partition the space of learning algorithms. For any learning algorithm, the capacity of the learning channel can be defined as the number of bits provided about the error gradient per weight, divided by the number of required operations per weight. We estimate the capacity associated with several learning algorithms and show that backpropagation outperforms them by simultaneously maximizing the information rate and minimizing the computational cost. This result is also shown to be true for recurrent networks, by unfolding them in time.
The theory clarifies the concept of Hebbian learning, establishes the power and limitations of local learning rules, introduces the learning channel which enables a formal analysis of the optimality of backpropagation, and explains the sparsity of the space of learning rules discovered so far. Copyright © 2016 Elsevier Ltd. All rights reserved.
Jankovic, Marko; Ogawa, Hidemitsu
2003-08-01
This paper presents one possible implementation of a transformation that performs a linear mapping to a lower-dimensional subspace; the principal component subspace is the one analyzed here. The idea implemented in this paper is a generalization of the recently proposed ∞OH neural method for principal component extraction. The calculations in the newly proposed method are performed locally, a feature which is usually considered desirable from the biological point of view. Compared to some other well-known methods, the proposed synaptic efficacy learning rule requires less information about the values of the other efficacies to make a single efficacy modification. Synaptic efficacies are modified by implementation of a Modulated Hebb-type (MH) learning rule. A slightly modified MH algorithm, named the Modulated Hebb-Oja (MHO) algorithm, is also introduced. The structural similarity of the proposed network to part of the retinal circuit is presented as well.
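For context, the classical single-unit principal component learner that Modulated-Hebb-type rules build on is Oja's rule: a Hebbian term plus an implicit normalizing decay, computed locally per synapse. A minimal sketch, with illustrative parameters:

```python
import numpy as np

def oja_pc(X, eta=0.005, epochs=150, seed=0):
    """Oja's rule (sketch): iterate w += eta * y * (x - y * w), where
    y = w @ x. The -y^2 * w decay keeps the weight vector near unit norm,
    and w converges to the first principal component of the data."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x
            w = w + eta * y * (x - y * w)   # Hebb term minus Oja decay
    return w / np.linalg.norm(w)
```

Unlike explicit weight normalization, the decay term uses only quantities local to the neuron, which is the property the abstract emphasizes.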
Zanutto, B. Silvano
2017-01-01
Animals are proposed to learn the latent rules governing their environment in order to maximize their chances of survival. However, rules may change without notice, forcing animals to keep a memory of which one is currently at work. Rule switching can lead to situations in which the same stimulus/response pairing is positively and negatively rewarded in the long run, depending on variables that are not accessible to the animal. This fact raises questions on how neural systems are capable of reinforcement learning in environments where the reinforcement is inconsistent. Here we address this issue by asking which aspects of connectivity, neural excitability and synaptic plasticity are key for a very general, stochastic spiking neural network model to solve a task in which rules change without being cued, taking the serial reversal task (SRT) as paradigm. Contrary to what could be expected, we found strong limitations for biologically plausible networks to solve the SRT. In particular, we proved that no network of neurons can learn a SRT if a single neural population both integrates stimuli information and is responsible for choosing the behavioural response. This limitation is independent of the number of neurons, neuronal dynamics or plasticity rules, and arises from the fact that plasticity is locally computed at each synapse, and that synaptic changes and neuronal activity are mutually dependent processes. We propose and characterize a spiking neural network model that solves the SRT, which relies on separating the functions of stimuli integration and response selection. The model suggests that experimental efforts to understand neural function should focus on the characterization of neural circuits according to their connectivity, neural dynamics, and the degree of modulation of synaptic plasticity with reward. PMID:29077735
A new simple ∞OH neuron model as a biologically plausible principal component analyzer.
Jankovic, M V
2003-01-01
A new approach to unsupervised learning in a single-layer neural network is discussed. An algorithm for unsupervised learning based upon the Hebbian learning rule is presented. A simple neuron model is analyzed. A dynamic neural model, which contains both feed-forward and feedback connections between the input and the output, has been adopted. The proposed learning algorithm could be more correctly named self-supervised rather than unsupervised. The solution proposed here is a modified Hebbian rule, in which the modification of the synaptic strength is proportional not to pre- and postsynaptic activity, but instead to the presynaptic and averaged value of postsynaptic activity. It is shown that the model neuron tends to extract the principal component from a stationary input vector sequence. The usually accepted additional decaying terms for the stabilization of the original Hebbian rule are avoided. Implementation of the basic Hebbian scheme would not lead to unrealistic growth of the synaptic strengths, thanks to the adopted network structure.
Self-organised criticality via retro-synaptic signals
NASA Astrophysics Data System (ADS)
Hernandez-Urbina, Victor; Herrmann, J. Michael
2016-12-01
The brain is a complex system par excellence. In the last decade the observation of neuronal avalanches in neocortical circuits suggested the presence of self-organised criticality in brain networks. The occurrence of this type of dynamics implies several benefits to neural computation. However, the mechanisms that give rise to critical behaviour in these systems, and how they interact with other neuronal processes such as synaptic plasticity, are not fully understood. In this paper, we present a long-term plasticity rule based on retro-synaptic signals that allows the system to reach a critical state in which clusters of activity are distributed as a power-law, among other observables. Our synaptic plasticity rule coexists with other synaptic mechanisms such as spike-timing-dependent plasticity, which implies that the resulting synaptic modulation not only captures the temporal correlations between spike times of pre- and post-synaptic units, which have been suggested as a requirement for learning and memory in neural systems, but also drives the system to a state of optimal neural information processing.
A network model of behavioural performance in a rule learning task.
Hasselmo, Michael E; Stern, Chantal E
2018-04-19
Humans demonstrate differences in performance on cognitive rule learning tasks which could involve differences in properties of neural circuits. An example model is presented to show how gating of the spread of neural activity could underlie rule learning and the generalization of rules to previously unseen stimuli. This model uses the activity of gating units to regulate the pattern of connectivity between neurons responding to sensory input and subsequent gating units or output units. This model allows analysis of network parameters that could contribute to differences in cognitive rule learning. These network parameters include differences in the parameters of synaptic modification and presynaptic inhibition of synaptic transmission that could be regulated by neuromodulatory influences on neural circuits. Neuromodulatory receptors play an important role in cognitive function, as demonstrated by the fact that drugs that block cholinergic muscarinic receptors can cause cognitive impairments. In discussions of the links between neuromodulatory systems and biologically based traits, the issue of mechanisms through which these linkages are realized is often missing. This model demonstrates potential roles of neural circuit parameters regulated by acetylcholine in learning context-dependent rules, and demonstrates the potential contribution of variation in neural circuit properties and neuromodulatory function to individual differences in cognitive function. This article is part of the theme issue 'Diverse perspectives on diversity: multi-disciplinary approaches to taxonomies of individual differences'. © 2018 The Author(s).
Ruan, Hongyu; Yao, Wei-Dong
2017-01-25
Addictive drugs usurp neural plasticity mechanisms that normally serve reward-related learning and memory, primarily by evoking changes in glutamatergic synaptic strength in the mesocorticolimbic dopamine circuitry. Here, we show that repeated cocaine exposure in vivo does not alter synaptic strength in the mouse prefrontal cortex during an early period of withdrawal, but instead modifies a Hebbian quantitative synaptic learning rule by broadening the temporal window and lowering the induction threshold for spike-timing-dependent LTP (t-LTP). After repeated, but not single, daily cocaine injections, t-LTP in layer V pyramidal neurons is induced at +30 ms, a normally ineffective timing interval for t-LTP induction in saline-exposed mice. This cocaine-induced, extended-timing t-LTP lasts for ∼1 week after terminating cocaine and is accompanied by an increased susceptibility to potentiation by fewer pre-post spike pairs, indicating a reduced t-LTP induction threshold. Basal synaptic strength and the maximal attainable t-LTP magnitude remain unchanged after cocaine exposure. We further show that the cocaine facilitation of t-LTP induction is caused by sensitized D1-cAMP/protein kinase A dopamine signaling in pyramidal neurons, which then pathologically recruits voltage-gated L-type Ca2+ channels that synergize with GluN2A-containing NMDA receptors to drive t-LTP at extended timing. Our results illustrate a mechanism by which cocaine, acting on a key neuromodulation pathway, modifies the coincidence detection window during Hebbian plasticity to facilitate associative synaptic potentiation in prefrontal excitatory circuits. By modifying rules that govern activity-dependent synaptic plasticity, addictive drugs can derail the experience-driven neural circuit remodeling process important for executive control of reward and addiction.
It is believed that addictive drugs often render an addict's brain reward system hypersensitive, leaving the individual more susceptible to relapse. We found that repeated cocaine exposure alters a Hebbian associative synaptic learning rule that governs activity-dependent synaptic plasticity in the mouse prefrontal cortex, characterized by a broader temporal window and a lower threshold for spike-timing-dependent LTP (t-LTP), a cellular form of learning and memory. This rule change is caused by cocaine-exacerbated D1-cAMP/protein kinase A dopamine signaling in pyramidal neurons that in turn pathologically recruits L-type Ca2+ channels to facilitate coincidence detection during t-LTP induction. Our study provides novel insights into how cocaine, even with only brief exposure, may prime neural circuits for subsequent experience-dependent remodeling that may underlie certain addictive behavior. Copyright © 2017 the authors.
Matching tutors and students: effective strategies for information transfer between circuits
NASA Astrophysics Data System (ADS)
Tesileanu, Tiberiu; Balasubramanian, Vijay; Olveczky, Bence
Many neural circuits transfer learned information to downstream circuits: hippocampal-dependent memories are consolidated into long-term memories elsewhere; motor cortex is essential for skill learning but dispensable for execution; the anterior forebrain pathway (AFP) in songbirds drives short-term improvements in song that are later consolidated in pre-motor area RA. We show how to match instructive signals from tutor circuits to synaptic plasticity rules in student circuits to achieve effective two-stage learning. We focus on learning sequential patterns where a timebase is transformed into motor commands by connectivity with a 'student' area. If the sign of the synaptic change is given by the magnitude of tutor input, a good teaching strategy uses a strong (weak) tutor signal if student output is below (above) its target. If instead the timing of tutor input relative to the timebase determines the sign of synaptic modifications, a good instructive signal accumulates the errors in student output as the motor program progresses. We demonstrate song learning in a biologically-plausible model of the songbird circuit given diverse plasticity rules interpolating between those described above. The model also reproduces qualitative firing statistics of RA neurons in juveniles and adults.
Verhoog, Matthijs B.; Mansvelder, Huibert D.
2011-01-01
Throughout life, activity-dependent changes in neuronal connection strength enable the brain to refine neural circuits and learn based on experience. In line with predictions made by Hebb, synapse strength can be modified depending on the millisecond-scale timing of action potential firing, a phenomenon known as spike-timing-dependent plasticity (STDP). The sign of synaptic plasticity depends on the spike order of presynaptic and postsynaptic neurons. Ionotropic neurotransmitter receptors, such as NMDA receptors and nicotinic acetylcholine receptors, are intimately involved in setting the rules for synaptic strengthening and weakening. In addition, timing rules for STDP within synapses are not fixed. They can be altered by activation of ionotropic receptors located at, or close to, synapses. Here, we will highlight studies that uncovered how network actions control and modulate timing rules for STDP by activating presynaptic ionotropic receptors. Furthermore, we will discuss how interaction between different types of ionotropic receptors may create “timing” windows during which particular timing rules lead to synaptic changes. PMID:21941664
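The asymmetric timing window described above can be sketched as a pair of exponentials; the amplitudes and time constant below are illustrative assumptions, not values from the review:

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Weight change for a pre-post spike interval dt = t_post - t_pre (ms).

    Pre-before-post (dt > 0) potentiates; post-before-pre (dt < 0)
    depresses; the effect decays exponentially with |dt|. The amplitudes
    and time constant here are illustrative placeholders.
    """
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)
    elif dt_ms < 0:
        return -a_minus * math.exp(dt_ms / tau_ms)
    return 0.0
```

Modulation of the timing rule, as discussed in the review, would correspond to changing the window parameters (e.g. widening tau_ms or flipping the sign of one branch) rather than the weights themselves.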
Excitement and synchronization of small-world neuronal networks with short-term synaptic plasticity.
Han, Fang; Wiercigroch, Marian; Fang, Jian-An; Wang, Zhijie
2011-10-01
Excitement and synchronization of electrically and chemically coupled Newman-Watts (NW) small-world neuronal networks with a short-term synaptic plasticity described by a modified Oja learning rule are investigated. For each type of neuronal network, the variation properties of synaptic weights are examined first. Then the effects of the learning rate, the coupling strength and the shortcut-adding probability on excitement and synchronization of the neuronal network are studied. It is shown that synaptic learning suppresses over-excitement and facilitates synchronization in the electrically coupled network, but impairs synchronization in the chemically coupled one. Both the introduction of shortcuts and the increase of the coupling strength improve synchronization and are helpful in increasing the excitement for the chemically coupled network, but have little effect on the excitement of the electrically coupled one.
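For reference, the standard Oja rule that the modified rule above builds on can be sketched as follows; the learning rate, input statistics, and iteration count are illustrative assumptions (the paper's specific modification for short-term plasticity is not reproduced here):

```python
import numpy as np

def oja_step(w, x, eta=0.01):
    """One update of the standard Oja rule: dw = eta * y * (x - y * w).

    The decay term -eta*y^2*w keeps ||w|| bounded, so the weight vector
    converges toward the leading principal component of the inputs.
    """
    y = float(np.dot(w, x))
    return w + eta * y * (x - y * w)

rng = np.random.default_rng(0)
w = rng.normal(size=2)
# Inputs correlated mostly along the first axis.
for _ in range(5000):
    x = np.array([rng.normal(scale=2.0), rng.normal(scale=0.5)])
    w = oja_step(w, x)
# w approaches a unit vector aligned with the dominant input direction.
```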
Neural learning circuits utilizing nano-crystalline silicon transistors and memristors.
Cantley, Kurtis D; Subramaniam, Anand; Stiegler, Harvey J; Chapman, Richard A; Vogel, Eric M
2012-04-01
Properties of neural circuits are demonstrated via SPICE simulations and their applications are discussed. The neuron and synapse subcircuits include ambipolar nano-crystalline silicon transistor and memristor device models based on measured data. Neuron circuit characteristics and the Hebbian synaptic learning rule are shown to be similar to biology. Changes in the average firing rate learning rule depending on various circuit parameters are also presented. The subcircuits are then connected into larger neural networks that demonstrate fundamental properties including associative learning and pulse coincidence detection. Learned extraction of a fundamental frequency component from noisy inputs is demonstrated. It is then shown that if the fundamental sinusoid of one neuron input is out of phase with the rest, its synaptic connection changes differently than the others. Such behavior indicates that the system can learn to detect which signals are important in the general population, and that there is a spike-timing-dependent component of the learning mechanism. Finally, future circuit designs and considerations are discussed, including requirements for the memristive device.
Spatial features of synaptic adaptation affecting learning performance.
Berger, Damian L; de Arcangelis, Lucilla; Herrmann, Hans J
2017-09-08
Recent studies have proposed that the diffusion of messenger molecules, such as monoamines, can mediate the plastic adaptation of synapses in supervised learning of neural networks. Based on these findings we developed a model for neural learning, where the signal for plastic adaptation is assumed to propagate through the extracellular space. We investigate the conditions allowing learning of Boolean rules in a neural network. Even fully excitatory networks show very good learning performance. Moreover, investigation of the plastic-adaptation features that optimize performance suggests that learning is very sensitive to the extent of the plastic adaptation and the spatial range of synaptic connections.
A Three-Threshold Learning Rule Approaches the Maximal Capacity of Recurrent Neural Networks
Alemi, Alireza; Baldassi, Carlo; Brunel, Nicolas; Zecchina, Riccardo
2015-01-01
Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model simplicity and the locality of the synaptic update rules come at the cost of a poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully-connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns to be memorized are presented online as strong afferent currents, producing a bimodal distribution for the neuron synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. 
Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero weight synapses and the degree of symmetry of the weight matrix increase with the number of stored patterns. PMID:26291608
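The three-threshold update described in the abstract can be sketched per synapse as follows; the threshold values and learning rate are illustrative assumptions:

```python
def three_threshold_dw(h, x_active, theta_low=-1.0, theta_mid=0.0,
                       theta_high=1.0, eta=0.05):
    """Plasticity for one synapse whose presynaptic input is active.

    Following the rule in the abstract: no change when the local field h
    is above the highest or below the lowest threshold; in between,
    potentiate if h exceeds the intermediate threshold, otherwise
    depress. Threshold and rate values here are illustrative.
    """
    if not x_active:
        return 0.0  # only synapses with active inputs are modified
    if h >= theta_high or h <= theta_low:
        return 0.0  # field far from decision boundary: no plasticity
    return eta if h > theta_mid else -eta
```

Because updates depend only on the local field and the synapse's own input, the rule is local in the sense the abstract emphasizes, requiring no explicit supervisory error signal.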
A Three-Threshold Learning Rule Approaches the Maximal Capacity of Recurrent Neural Networks.
Alemi, Alireza; Baldassi, Carlo; Brunel, Nicolas; Zecchina, Riccardo
2015-08-01
Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model simplicity and the locality of the synaptic update rules come at the cost of a poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully-connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns to be memorized are presented online as strong afferent currents, producing a bimodal distribution for the neuron synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. 
Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero weight synapses and the degree of symmetry of the weight matrix increase with the number of stored patterns.
Neuromodulated Spike-Timing-Dependent Plasticity, and Theory of Three-Factor Learning Rules.
Frémaux, Nicolas; Gerstner, Wulfram
2015-01-01
Classical Hebbian learning puts the emphasis on joint pre- and postsynaptic activity, but neglects the potential role of neuromodulators. Since neuromodulators convey information about novelty or reward, the influence of neuromodulators on synaptic plasticity is useful not just for action learning in classical conditioning, but also to decide "when" to create new memories in response to a flow of sensory stimuli. In this review, we focus on timing requirements for pre- and postsynaptic activity in conjunction with one or several phasic neuromodulatory signals. While the emphasis of the text is on conceptual models and mathematical theories, we also discuss some experimental evidence for neuromodulation of Spike-Timing-Dependent Plasticity. We highlight the importance of synaptic mechanisms in bridging the temporal gap between sensory stimulation and neuromodulatory signals, and develop a framework for a class of neo-Hebbian three-factor learning rules that depend on presynaptic activity, postsynaptic variables as well as the influence of neuromodulators.
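A minimal discrete-time sketch of the neo-Hebbian three-factor idea reviewed above: a Hebbian coincidence sets a decaying eligibility trace, and a later neuromodulatory pulse converts the trace into a lasting weight change. All constants and the simulation protocol are illustrative assumptions:

```python
def three_factor_step(e, w, pre, post, dopamine, tau_e=200.0, dt=1.0, eta=0.1):
    """One time step of a neo-Hebbian three-factor learning rule (sketch).

    The pre*post coincidence feeds a decaying eligibility trace e; the
    weight only changes when the third factor ("dopamine") is nonzero,
    bridging the gap between stimulus and delayed neuromodulation.
    """
    e = e * (1.0 - dt / tau_e) + pre * post  # tag the synapse, forget slowly
    w = w + eta * dopamine * e               # third factor gates consolidation
    return e, w

# Coincident pre/post activity followed 50 time steps later by a reward pulse:
e, w = 0.0, 0.5
e, w = three_factor_step(e, w, pre=1.0, post=1.0, dopamine=0.0)  # tag, no reward
for _ in range(49):
    e, w = three_factor_step(e, w, pre=0.0, post=0.0, dopamine=0.0)
e, w = three_factor_step(e, w, pre=0.0, post=0.0, dopamine=1.0)  # delayed reward
# w has increased even though the reward arrived well after the coincidence.
```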
Novel plasticity rule can explain the development of sensorimotor intelligence
Der, Ralf; Martius, Georg
2015-01-01
Grounding autonomous behavior in the nervous system is a fundamental challenge for neuroscience. In particular, self-organized behavioral development provides more questions than answers. Are there special functional units for curiosity, motivation, and creativity? This paper argues that these features can be grounded in synaptic plasticity itself, without requiring any higher-level constructs. We propose differential extrinsic plasticity (DEP) as a new synaptic rule for self-learning systems and apply it to a number of complex robotic systems as a test case. Without specifying any purpose or goal, seemingly purposeful and adaptive rhythmic behavior is developed, displaying a certain level of sensorimotor intelligence. These surprising results require no system-specific modifications of the DEP rule. They rather arise from the underlying mechanism of spontaneous symmetry breaking, which is due to the tight brain body environment coupling. The new synaptic rule is biologically plausible and would be an interesting target for neurobiological investigation. We also argue that this neuronal mechanism may have been a catalyst in natural evolution. PMID:26504200
Ellipsoidal fuzzy learning for smart car platoons
NASA Astrophysics Data System (ADS)
Dickerson, Julie A.; Kosko, Bart
1993-12-01
A neural-fuzzy system combined supervised and unsupervised learning to find and tune the fuzzy rules. An additive fuzzy system approximates a function by covering its graph with fuzzy rules. A fuzzy rule patch can take the form of an ellipsoid in the input-output space. Unsupervised competitive learning found the statistics of data clusters. The covariance matrix of each synaptic quantization vector defined an ellipsoid centered at the centroid of the data cluster. Tightly clustered data gave smaller ellipsoids or more certain rules. Sparse data gave larger ellipsoids or less certain rules. Supervised learning tuned the ellipsoids to improve the approximation. The supervised neural system used gradient descent to find the ellipsoidal fuzzy patches. It locally minimized the mean-squared error of the fuzzy approximation. Hybrid ellipsoidal learning estimated the control surface for a smart car controller.
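The centroid-plus-covariance construction of a rule patch can be sketched as below; the function name and the one-standard-deviation scaling are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def rule_ellipsoid(cluster, n_std=1.0):
    """Fit an ellipsoidal fuzzy-rule patch to a data cluster (sketch).

    The centroid gives the ellipsoid center and the covariance matrix its
    shape: tight clusters yield small ellipsoids (more certain rules),
    sparse clusters large ones. The semi-axis lengths are the square
    roots of the covariance eigenvalues, scaled by n_std.
    """
    cluster = np.asarray(cluster, dtype=float)
    center = cluster.mean(axis=0)
    cov = np.cov(cluster, rowvar=False)
    eigvals, axes = np.linalg.eigh(cov)
    radii = n_std * np.sqrt(np.maximum(eigvals, 0.0))
    return center, radii, axes

tight = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1]]
sparse = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
_, r_tight, _ = rule_ellipsoid(tight)
_, r_sparse, _ = rule_ellipsoid(sparse)
# The sparse cluster produces a larger (less certain) rule patch.
```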
Neuromodulated Synaptic Plasticity on the SpiNNaker Neuromorphic System
Mikaitis, Mantas; Pineda García, Garibaldi; Knight, James C.; Furber, Steve B.
2018-01-01
SpiNNaker is a digital neuromorphic architecture, designed specifically for the low power simulation of large-scale spiking neural networks at speeds close to biological real-time. Unlike other neuromorphic systems, SpiNNaker allows users to develop their own neuron and synapse models as well as specify arbitrary connectivity. As a result SpiNNaker has proved to be a powerful tool for studying different neuron models as well as synaptic plasticity—believed to be one of the main mechanisms behind learning and memory in the brain. A number of Spike-Timing-Dependent Plasticity (STDP) rules have already been implemented on SpiNNaker and have been shown to be capable of solving various learning tasks in real-time. However, while STDP is an important biological theory of learning, it is a form of Hebbian or unsupervised learning and therefore does not explain behaviors that depend on feedback from the environment. Instead, learning rules based on neuromodulated STDP (three-factor learning rules) have been shown to be capable of solving reinforcement learning tasks in a biologically plausible manner. In this paper we demonstrate for the first time how a model of three-factor STDP, with the third factor representing spikes from dopaminergic neurons, can be implemented on the SpiNNaker neuromorphic system. Using this learning rule we first show how reward and punishment signals can be delivered to a single synapse before going on to demonstrate it in a larger network which solves the credit assignment problem in a Pavlovian conditioning experiment. Because of its extra complexity, we find that our three-factor learning rule requires approximately 2× as much processing time as the existing SpiNNaker STDP learning rules. However, we show that it is still possible to run our Pavlovian conditioning model with up to 1 × 10^4 neurons in real-time, opening up new research opportunities for modeling behavioral learning on SpiNNaker. PMID:29535600
Simple modification of Oja rule limits L1-norm of weight vector and leads to sparse connectivity.
Aparin, Vladimir
2012-03-01
This letter describes a simple modification of the Oja learning rule, which asymptotically constrains the L1-norm of an input weight vector instead of the L2-norm as in the original rule. This constraint is local, as opposed to commonly used instant normalizations, which require knowledge of all input weights of a neuron in order to update each one individually. The proposed rule converges to a weight vector that is sparser (has more zero weights) than the vector learned by the original Oja rule with or without the zero bound, which could explain developmental synaptic pruning.
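One natural form such a modification can take is to replace the weight in the Oja decay term by its sign; the sketch below is our reading of the idea, not a verbatim reproduction of the letter's rule:

```python
import numpy as np

def oja_l1_step(w, x, eta=0.01):
    """A sketch of an L1-constraining variant of the Oja rule.

    Standard Oja uses dw = eta*y*(x - y*w), which bounds the L2-norm of
    w. Substituting sign(w) for w in the decay term instead pushes the
    L1-norm toward a constant and penalizes all nonzero weights equally,
    which tends to zero out weak weights (a sparsity-inducing effect).
    This particular form is an assumption for illustration.
    """
    y = float(np.dot(w, x))
    return w + eta * y * (x - y * np.sign(w))

w = np.array([0.5, -0.5])
w = oja_l1_step(w, np.array([1.0, 0.0]))
# Both components move: the driven weight grows, the undriven one decays
# toward zero under the sign-based penalty.
```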
Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines.
Neftci, Emre O; Pedroni, Bruno U; Joshi, Siddharth; Al-Shedivat, Maruan; Cauwenberghs, Gert
2016-01-01
Recent studies have shown that synaptic unreliability is a robust and sufficient mechanism for inducing the stochasticity observed in cortex. Here, we introduce Synaptic Sampling Machines (S2Ms), a class of neural network models that uses synaptic stochasticity as a means to Monte Carlo sampling and unsupervised learning. Similar to the original formulation of Boltzmann machines, these models can be viewed as a stochastic counterpart of Hopfield networks, but where stochasticity is induced by a random mask over the connections. Synaptic stochasticity plays the dual role of an efficient mechanism for sampling, and a regularizer during learning akin to DropConnect. A local synaptic plasticity rule implementing an event-driven form of contrastive divergence enables the learning of generative models in an on-line fashion. S2Ms perform equally well using discrete-timed artificial units (as in Hopfield networks) or continuous-timed leaky integrate and fire neurons. The learned representations are remarkably sparse and robust to reductions in bit precision and synapse pruning: removal of more than 75% of the weakest connections followed by cursory re-learning causes a negligible performance loss on benchmark classification tasks. The spiking neuron-based S2Ms outperform existing spike-based unsupervised learners, while potentially offering substantial advantages in terms of power and complexity, and are thus promising models for on-line learning in brain-inspired hardware.
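The random connection mask described above can be sketched as a blank-out (DropConnect-style) forward pass; the weight values, transmission probability, and sample count are illustrative assumptions:

```python
import numpy as np

def stochastic_forward(x, W, p=0.5, rng=None):
    """Forward pass through stochastic blank-out synapses (sketch).

    Each synapse transmits independently with probability p, so a single
    pass is noisy while the average over many passes approaches p*W@x.
    The same stochasticity that enables sampling also acts as a
    regularizer during learning, akin to DropConnect.
    """
    if rng is None:
        rng = np.random.default_rng()
    mask = rng.random(W.shape) < p  # fresh mask per transmission event
    return (W * mask) @ x

rng = np.random.default_rng(1)
W = np.array([[1.0, -2.0], [0.5, 0.5]])
x = np.array([1.0, 1.0])
samples = np.stack([stochastic_forward(x, W, p=0.5, rng=rng)
                    for _ in range(2000)])
# The empirical mean over stochastic passes approaches 0.5 * W @ x.
```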
Huertas, Marco A; Schwettmann, Sarah E; Shouval, Harel Z
2016-01-01
The ability to maximize reward and avoid punishment is essential for animal survival. Reinforcement learning (RL) refers to the algorithms used by biological or artificial systems to learn how to maximize reward or avoid negative outcomes based on past experiences. While RL is also important in machine learning, the types of mechanistic constraints encountered by biological machinery might be different than those for artificial systems. Two major problems encountered by RL are how to relate a stimulus with a reinforcing signal that is delayed in time (temporal credit assignment), and how to stop learning once the target behaviors are attained (stopping rule). To address the first problem synaptic eligibility traces were introduced, bridging the temporal gap between a stimulus and its reward. Although these were mere theoretical constructs, recent experiments have provided evidence of their existence. These experiments also reveal that the presence of specific neuromodulators converts the traces into changes in synaptic efficacy. A mechanistic implementation of the stopping rule usually assumes the inhibition of the reward nucleus; however, recent experimental results have shown that learning terminates at the appropriate network state even in setups where the reward nucleus cannot be inhibited. In an effort to describe a learning rule that solves the temporal credit assignment problem and implements a biologically plausible stopping rule, we proposed a model based on two separate synaptic eligibility traces, one for long-term potentiation (LTP) and one for long-term depression (LTD), each obeying different dynamics and having different effective magnitudes. The model has been shown to successfully generate stable learning in recurrent networks. Although the model assumes the presence of a single neuromodulator, evidence indicates that there are different neuromodulators for expressing the different traces.
What could be the role of different neuromodulators in expressing the LTP and LTD traces? Here we expand on our previous model to include several neuromodulators, illustrate through various examples how these different neuromodulators contribute to learning reward timing within a wide set of training paradigms, and propose further roles that multiple neuromodulators can play in encoding additional information about the rewarding signal.
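The two-trace construction can be sketched as a pair of eligibility traces with different dynamics, converted into a weight change by a reward signal; the time constants, gains, and simulation protocol are illustrative assumptions, not the paper's fitted values:

```python
def two_trace_step(Tp, Td, w, hebb, reward, dt=1.0,
                   tau_p=300.0, tau_d=150.0, g_p=1.0, g_d=0.6, eta=0.05):
    """One step of a two-eligibility-trace reward-learning rule (sketch).

    Separate LTP and LTD traces are driven by the same Hebbian event but
    decay with different time constants and effective magnitudes; a
    neuromodulatory reward signal converts their difference into a
    lasting weight change.
    """
    Tp = Tp * (1.0 - dt / tau_p) + g_p * hebb  # LTP trace: slow decay
    Td = Td * (1.0 - dt / tau_d) + g_d * hebb  # LTD trace: faster decay
    w = w + eta * reward * (Tp - Td)           # reward reads out the difference
    return Tp, Td, w

# Hebbian event, then a reward pulse 100 time steps later:
Tp, Td, w = 0.0, 0.0, 1.0
Tp, Td, w = two_trace_step(Tp, Td, w, hebb=1.0, reward=0.0)
for _ in range(100):
    Tp, Td, w = two_trace_step(Tp, Td, w, hebb=0.0, reward=0.0)
Tp, Td, w = two_trace_step(Tp, Td, w, hebb=0.0, reward=1.0)
# The sign and size of the change encode the stimulus-reward delay.
```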
Whittington, James C. R.; Bogacz, Rafal
2017-01-01
To efficiently learn from feedback, cortical networks need to update synaptic weights on multiple levels of cortical hierarchy. An effective and well-known algorithm for computing such changes in synaptic weights is the error backpropagation algorithm. However, in this algorithm, the change in synaptic weights is a complex function of weights and activities of neurons not directly connected with the synapse being modified, whereas the changes in biological synapses are determined only by the activity of presynaptic and postsynaptic neurons. Several models have been proposed that approximate the backpropagation algorithm with local synaptic plasticity, but these models require complex external control over the network or relatively complex plasticity rules. Here we show that a network developed in the predictive coding framework can efficiently perform supervised learning fully autonomously, employing only simple local Hebbian plasticity. Furthermore, for certain parameters, the weight change in the predictive coding model converges to that of the backpropagation algorithm. This suggests that it is possible for cortical networks with simple Hebbian synaptic plasticity to implement efficient learning algorithms in which synapses in areas on multiple levels of hierarchy are modified to minimize the error on the output. PMID:28333583
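A minimal linear instance of this scheme can be sketched as follows: value nodes relax to minimize local prediction errors while the output is clamped to the target, after which each weight changes as (local error) × (presynaptic activity). The two-layer architecture, learning rates, and training protocol are illustrative assumptions, not the paper's exact network:

```python
import numpy as np

def pc_train_step(W1, W2, x0, target, eta=0.1, n_relax=50, dt=0.1):
    """One supervised step of a minimal linear predictive-coding net (sketch)."""
    x1 = W1 @ x0  # initialize hidden activity at its prediction
    x2 = target   # clamp the output layer to the teaching signal
    for _ in range(n_relax):
        e1 = x1 - W1 @ x0                 # local error below the hidden layer
        e2 = x2 - W2 @ x1                 # local error at the output layer
        x1 = x1 + dt * (-e1 + W2.T @ e2)  # relax hidden nodes using local signals
    e1 = x1 - W1 @ x0
    e2 = x2 - W2 @ x1
    W1 = W1 + eta * np.outer(e1, x0)      # purely local, Hebbian-style updates:
    W2 = W2 + eta * np.outer(e2, x1)      # error times presynaptic activity
    return W1, W2

rng = np.random.default_rng(0)
W1 = 0.3 * rng.normal(size=(2, 2))
W2 = 0.3 * rng.normal(size=(1, 2))
x0 = np.array([1.0, -1.0])
target = np.array([0.5])
for _ in range(2000):
    W1, W2 = pc_train_step(W1, W2, x0, target)
# After training, the plain feedforward pass approximates the target,
# even though no global error signal was ever propagated explicitly.
```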
A framework for plasticity implementation on the SpiNNaker neural architecture.
Galluppi, Francesco; Lagorce, Xavier; Stromatias, Evangelos; Pfeiffer, Michael; Plana, Luis A; Furber, Steve B; Benosman, Ryad B
2014-01-01
Many of the precise biological mechanisms of synaptic plasticity remain elusive, but simulations of neural networks have greatly enhanced our understanding of how specific global functions arise from the massively parallel computation of neurons and local Hebbian or spike-timing dependent plasticity rules. For simulating large portions of neural tissue, this has created an increasingly strong need for large scale simulations of plastic neural networks on special purpose hardware platforms, because synaptic transmissions and updates are poorly matched to the computing style supported by current architectures. Because of the great diversity of biological plasticity phenomena and the corresponding diversity of models, there is a great need for testing various hypotheses about plasticity before committing to one hardware implementation. Here we present a novel framework for investigating different plasticity approaches on the SpiNNaker distributed digital neural simulation platform. The key innovation of the proposed architecture is to exploit the reconfigurability of the ARM processors inside SpiNNaker, dedicating a subset of them exclusively to process synaptic plasticity updates, while the rest perform the usual neural and synaptic simulations. We demonstrate the flexibility of the proposed approach by showing the implementation of a variety of spike- and rate-based learning rules, including standard Spike-Timing dependent plasticity (STDP), voltage-dependent STDP, and the rate-based BCM rule. We analyze their performance and validate them by running classical learning experiments in real time on a 4-chip SpiNNaker board. The result is an efficient, modular, flexible and scalable framework, which provides a valuable tool for the fast and easy exploration of learning models of very different kinds on the parallel and reconfigurable SpiNNaker system.
Bazhenov, Maxim; Huerta, Ramon; Smith, Brian H.
2013-01-01
Nonassociative and associative learning rules simultaneously modify neural circuits. However, it remains unclear how these forms of plasticity interact to produce conditioned responses. Here we integrate nonassociative and associative conditioning within a uniform model of olfactory learning in the honeybee. Honeybees show a fairly abrupt increase in response after a number of conditioning trials. This abrupt change requires many more trials when conditioning follows nonassociative (unreinforced) exposure than with associative conditioning alone. We found that the interaction of unsupervised and supervised learning rules is critical for explaining the latent inhibition phenomenon. Associative conditioning combined with mutual inhibition between the output neurons produces an abrupt increase in performance despite smooth changes of the synaptic weights. The results show that an integrated set of learning rules implemented using fan-out connectivities together with neural inhibition can explain the broad range of experimental data on learning behaviors. PMID:23536082
E-I balance emerges naturally from continuous Hebbian learning in autonomous neural networks.
Trapp, Philip; Echeveste, Rodrigo; Gros, Claudius
2018-06-12
Spontaneous brain activity is characterized in part by a balanced asynchronous chaotic state. Cortical recordings show that excitatory (E) and inhibitory (I) drivings in the E-I balanced state are substantially larger than the overall input. We show that such a state arises naturally in fully adapting networks which are deterministic, autonomously active and not subject to stochastic external or internal drivings. Temporary imbalances between excitatory and inhibitory inputs lead to large but short-lived activity bursts that stabilize irregular dynamics. We simulate autonomous networks of rate-encoding neurons for which all synaptic weights are plastic and subject to a Hebbian plasticity rule, the flux rule, that can be derived from the stationarity principle of statistical learning. Moreover, the average firing rate is regulated individually via a standard homeostatic adaptation of the bias of each neuron's input-output non-linear function. Additionally, networks with and without short-term plasticity are considered. E-I balance may arise only when the mean excitatory and inhibitory weights are themselves balanced, modulo the overall activity level. We show that synaptic weight balance, which has hitherto been considered as given, arises naturally in autonomous neural networks when the self-limiting Hebbian synaptic plasticity rule considered here is continuously active.
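The per-neuron homeostatic mechanism described here, adapting the bias of a sigmoidal transfer function toward a target rate, can be sketched as follows. Function names and constants are my own illustrative choices, not the paper's equations:

```python
import math
import random

def adapt_bias(target_rate=0.3, eps=0.05, steps=3000, seed=1):
    """Homeostatic adaptation: shift the bias of a sigmoidal input-output
    function until the neuron's running-average activity matches a target."""
    rng = random.Random(seed)
    bias, avg = 0.0, target_rate
    for _ in range(steps):
        x = rng.gauss(0.0, 1.0)                  # fluctuating synaptic drive
        y = 1.0 / (1.0 + math.exp(-(x - bias)))  # firing rate in [0, 1]
        bias += eps * (y - target_rate)          # too active -> raise threshold
        avg += 0.01 * (y - avg)                  # running-average rate
    return avg

mean_rate = adapt_bias()
```

In the paper this regulation runs alongside the flux rule on the weights; here it is isolated to show that the bias alone suffices to pin the average rate.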
Bamford, Simeon A; Murray, Alan F; Willshaw, David J
2010-02-01
A distributed and locally reprogrammable address-event receiver has been designed, in which incoming address-events are monitored simultaneously by all synapses, allowing for arbitrarily large axonal fan-out without reducing channel capacity. Synapses can change the address of their presynaptic neuron, allowing the distributed implementation of a biologically realistic learning rule, with both synapse formation and elimination (synaptic rewiring). Probabilistic synapse formation leads to topographic map development, made possible by a cross-chip current-mode calculation of Euclidean distance. As well as synaptic plasticity in rewiring, synapses change weights using a competitive Hebbian learning rule (spike-timing-dependent plasticity). The weight plasticity allows receptive fields to be modified based on spatio-temporal correlations in the inputs, and the rewiring plasticity allows these modifications to become embedded in the network topology.
Nothing can be coincidence: synaptic inhibition and plasticity in the cerebellar nuclei
Pugh, Jason R.; Raman, Indira M.
2009-01-01
Many cerebellar neurons fire spontaneously, generating 10–100 action potentials per second even without synaptic input. This high basal activity correlates with information-coding mechanisms that differ from those of cells that are quiescent until excited synaptically. For example, in the deep cerebellar nuclei, Hebbian patterns of coincident synaptic excitation and postsynaptic firing fail to induce long-term increases in the strength of excitatory inputs. Instead, excitatory synaptic currents are potentiated by combinations of inhibition and excitation that resemble the activity of Purkinje and mossy fiber afferents that is predicted to occur during cerebellar associative learning tasks. Such results indicate that circuits with intrinsically active neurons have rules for information transfer and storage that distinguish them from other brain regions. PMID:19178955
Ultrastructure of Dendritic Spines: Correlation Between Synaptic and Spine Morphologies
Arellano, Jon I.; Benavides-Piccione, Ruth; DeFelipe, Javier; Yuste, Rafael
2007-01-01
Dendritic spines are critical elements of cortical circuits, since they establish most excitatory synapses. Recent studies have reported correlations between morphological and functional parameters of spines. Specifically, the spine head volume is correlated with the area of the postsynaptic density (PSD), the number of postsynaptic receptors and the ready-releasable pool of transmitter, whereas the length of the spine neck is proportional to the degree of biochemical and electrical isolation of the spine from its parent dendrite. Therefore, the morphology of a spine could determine its synaptic strength and learning rules. To better understand the natural variability of neocortical spine morphologies, we used a combination of gold-toned Golgi impregnations and serial thin-section electron microscopy and performed three-dimensional reconstructions of spines from layer 2/3 pyramidal cells from mouse visual cortex. We characterized the structure and synaptic features of 144 completely reconstructed spines, and analyzed their morphologies according to their positions. For all morphological parameters analyzed, spines exhibited a continuum of variability, without clearly distinguishable subtypes of spines or clear dependence of their morphologies on their distance to the soma. On average, the spine head volume was correlated strongly with PSD area and weakly with neck diameter, but not with neck length. The large morphological diversity suggests an equally large variability of synaptic strength and learning rules. PMID:18982124
A supervised learning rule for classification of spatiotemporal spike patterns.
Guo, Lilin; Wang, Zhenzhong; Adjouadi, Malek
2016-08-01
This study introduces a novel supervised algorithm for spiking neurons that takes into consideration synaptic and axonal delays associated with the weights. It can be utilized for both classification and association. The algorithm also incorporates spike-timing-dependent plasticity, as in the Remote Supervised Method (ReSuMe). This paper focuses on the classification aspect alone. Spiking neurons trained according to the proposed learning rule are capable of classifying different categories by the associated sequences of precisely timed spikes. Simulation results show that the proposed learning method greatly improves classification accuracy when compared to the Spike Pattern Association Neuron (SPAN) and the Tempotron learning rule.
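The supervised principle behind ReSuMe-style rules, potentiate toward the teacher signal and depress the actual output, each gated by a presynaptic eligibility trace, can be sketched with a simple threshold unit. This is a hypothetical reduction for illustration, not the authors' exact rule; all patterns and constants are invented:

```python
def resume_like_update(w, pre_trace, fired, target, lr=0.05):
    """ReSuMe-flavoured update: move the weight toward making the neuron
    fire when the teacher says so, gated by the presynaptic trace."""
    return w + lr * (target - fired) * pre_trace

# Train a threshold unit to fire for pattern A and stay silent for B.
pattern_a = [0.9, 0.1, 0.8, 0.2]
pattern_b = [0.1, 0.9, 0.2, 0.8]
weights = [0.3, 0.3, 0.3, 0.3]
theta = 1.0                                     # firing threshold
for _ in range(200):
    for pattern, target in ((pattern_a, 1), (pattern_b, 0)):
        drive = sum(w * x for w, x in zip(weights, pattern))
        fired = 1 if drive >= theta else 0
        weights = [resume_like_update(w, x, fired, target)
                   for w, x in zip(weights, pattern)]

drive_a = sum(w * x for w, x in zip(weights, pattern_a))
drive_b = sum(w * x for w, x in zip(weights, pattern_b))
```

With precisely timed spikes instead of static traces, the same difference-of-teacher-and-output structure drives the neuron to emit the target spike sequence.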
Neural networks and logical reasoning systems: a translation table.
Martins, J; Mendes, R V
2001-04-01
A correspondence is established between the basic elements of logic reasoning systems (knowledge bases, rules, inference and queries) and the structure and dynamical evolution laws of neural networks. The correspondence is pictured as a translation dictionary which might allow one to go back and forth between symbolic and network formulations, a desirable step in learning-oriented systems and multicomputer networks. In the framework of Horn clause logics, it is found that atomic propositions with n arguments correspond to nodes with nth order synapses, rules to synaptic intensity constraints, forward chaining to synaptic dynamics and queries either to simple node activation or to a query tensor dynamics.
Feedforward Inhibition and Synaptic Scaling – Two Sides of the Same Coin?
Keck, Christian; Savin, Cristina; Lücke, Jörg
2012-01-01
Feedforward inhibition and synaptic scaling are important adaptive processes that control the total input a neuron can receive from its afferents. While often studied in isolation, the two have been reported to co-occur in various brain regions. The functional implications of their interactions remain unclear, however. Based on a probabilistic modeling approach, we show here that fast feedforward inhibition and synaptic scaling interact synergistically during unsupervised learning. In technical terms, we model the input to a neural circuit using a normalized mixture model with Poisson noise. We demonstrate analytically and numerically that, in the presence of lateral inhibition introducing competition between different neurons, Hebbian plasticity and synaptic scaling approximate the optimal maximum likelihood solutions for this model. Our results suggest that, beyond its conventional use as a mechanism to remove undesired pattern variations, input normalization can make typical neural interaction and learning rules optimal on the stimulus subspace defined through feedforward inhibition. Furthermore, learning within this subspace is more efficient in practice, as it helps avoid locally optimal solutions. Our results suggest a close connection between feedforward inhibition and synaptic scaling which may have important functional implications for general cortical processing. PMID:22457610
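The interaction the abstract analyzes, Hebbian plasticity followed by multiplicative synaptic scaling, can be reduced to a few lines. This is a toy sketch of the normalization step only, not the paper's full Poisson mixture model; all values are illustrative:

```python
def hebb_and_scale(weights, x, lr=0.1):
    """One Hebbian update followed by multiplicative synaptic scaling
    that renormalizes the summed afferent weight to 1."""
    y = sum(w * xi for w, xi in zip(weights, x))   # linear response
    weights = [w + lr * y * xi for w, xi in zip(weights, x)]
    total = sum(weights)
    return [w / total for w in weights]

# Repeatedly presenting one (sum-normalized) input pattern drives the
# weight vector onto that pattern instead of growing without bound.
x = [0.7, 0.2, 0.1, 0.0]
weights = [0.25, 0.25, 0.25, 0.25]
for _ in range(500):
    weights = hebb_and_scale(weights, x)
```

The scaling step is what keeps plain Hebbian growth bounded; in the paper, combined with lateral competition, the same pair of operations approximates maximum-likelihood learning of the mixture components.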
Learning and coding in biological neural networks
NASA Astrophysics Data System (ADS)
Fiete, Ila Rani
How can large groups of neurons that locally modify their activities learn to collectively perform a desired task? Do studies of learning in small networks tell us anything about learning in the fantastically large collection of neurons that make up a vertebrate brain? What factors do neurons optimize by encoding sensory inputs or motor commands in the way they do? In this thesis I present a collection of four theoretical works: each of the projects was motivated by specific constraints and complexities of biological neural networks, as revealed by experimental studies; together, they aim to partially address some of the central questions of neuroscience posed above. We first study the role of sparse neural activity, as seen in the coding of sequential commands in a premotor area responsible for birdsong. We show that the sparse coding of temporal sequences in the songbird brain can, in a network where the feedforward plastic weights must translate the sparse sequential code into a time-varying muscle code, facilitate learning by minimizing synaptic interference. Next, we propose a biologically plausible synaptic plasticity rule that can perform goal-directed learning in recurrent networks of voltage-based spiking neurons that interact through conductances. Learning is based on the correlation of noisy local activity with a global reward signal; we prove that this rule performs stochastic gradient ascent on the reward. Thus, if the reward signal quantifies network performance on some desired task, the plasticity rule provably drives goal-directed learning in the network. To assess the convergence properties of the learning rule, we compare it with a known example of learning in the brain. Song-learning in finches is a clear example of a learned behavior, with detailed available neurophysiological data. With our learning rule, we train an anatomically accurate model birdsong network that drives a sound source to mimic an actual zebra finch song.
Simulation and theoretical results on the scalability of this rule show that learning with stochastic gradient ascent may be fast enough to explain learning in the bird. Finally, we address the more general issue of the scalability of stochastic gradient learning on quadratic cost surfaces in linear systems, as a function of system size and task characteristics, by deriving analytical expressions for the learning curves.
Precise Synaptic Efficacy Alignment Suggests Potentiation Dominated Learning.
Hartmann, Christoph; Miner, Daniel C; Triesch, Jochen
2015-01-01
Recent evidence suggests that parallel synapses from the same axonal branch onto the same dendritic branch have almost identical strength. It has been proposed that this alignment is only possible through learning rules that integrate activity over long time spans. However, learning mechanisms such as spike-timing-dependent plasticity (STDP) are commonly assumed to be temporally local. Here, we propose that the combination of temporally local STDP and a multiplicative synaptic normalization mechanism is sufficient to explain the alignment of parallel synapses. To address this issue, we introduce three increasingly complex models: First, we model the idealized interaction of STDP and synaptic normalization in a single neuron as a simple stochastic process and derive analytically that the alignment effect can be described by a so-called Kesten process. From this we can derive that synaptic efficacy alignment requires potentiation-dominated learning regimes. We verify these conditions in a single-neuron model with independent spiking activities but more realistic synapses. As expected, we only observe synaptic efficacy alignment for long-term potentiation-biased STDP. Finally, we explore how well the findings transfer to recurrent neural networks where the learning mechanisms interact with the correlated activity of the network. We find that due to the self-reinforcing correlations in recurrent circuits under STDP, alignment occurs for both long-term potentiation- and depression-biased STDP, because the learning will be potentiation dominated in both cases due to the potentiating events induced by correlated activity. This is in line with recent results demonstrating a dominance of potentiation over depression during waking and normalization during sleep. This leads us to predict that individual spine pairs will be more similar after sleep compared to after sleep deprivation. 
In conclusion, we show that synaptic normalization in conjunction with coordinated potentiation--in this case, from STDP in the presence of correlated pre- and post-synaptic activity--naturally leads to an alignment of parallel synapses.
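The Kesten-process argument at the heart of this paper is easy to reproduce: parallel synapses share pre- and postsynaptic spike trains, so they receive the same additive STDP events and the same multiplicative normalization factor. The constants below are illustrative, not fitted to the paper:

```python
import random

def kesten_pair(steps=2000, seed=7):
    """Two parallel synapses updated as w <- a*w + b, a Kesten process,
    with SHARED potentiation events b and SHARED normalization factor a.
    Their difference shrinks by a factor a per step, so the pair aligns
    despite very different initial efficacies."""
    rng = random.Random(seed)
    w1, w2 = 0.2, 0.9
    for _ in range(steps):
        b = 0.02 if rng.random() < 0.3 else 0.0  # shared potentiation event
        a = 0.99                                 # shared normalization
        w1, w2 = a * w1 + b, a * w2 + b
    return w1, w2

w1, w2 = kesten_pair()
```

The potentiation-dominated condition in the paper corresponds to the additive events being frequent enough that the weights settle at a positive value (here roughly b_mean/(1-a)) rather than decaying to zero.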
The Chronotron: A Neuron That Learns to Fire Temporally Precise Spike Patterns
Florian, Răzvan V.
2012-01-01
In many cases, neurons process information carried by the precise timings of spikes. Here we show how neurons can learn to generate specific temporally precise output spikes in response to input patterns of spikes having precise timings, thus processing and memorizing information that is entirely temporally coded, both as input and as output. We introduce two new supervised learning rules for spiking neurons with temporal coding of information (chronotrons), one that provides high memory capacity (E-learning), and one that has a higher biological plausibility (I-learning). With I-learning, the neuron learns to fire the target spike trains through synaptic changes that are proportional to the synaptic currents at the timings of real and target output spikes. We study these learning rules in computer simulations where we train integrate-and-fire neurons. Both learning rules allow neurons to fire at the desired timings, with sub-millisecond precision. We show how chronotrons can learn to classify their inputs, by firing identical, temporally precise spike trains for different inputs belonging to the same class. When the input is noisy, the classification also leads to noise reduction. We compute lower bounds for the memory capacity of chronotrons and explore the influence of various parameters on chronotrons' performance. The chronotrons can model neurons that encode information in the time of the first spike relative to the onset of salient stimuli or neurons in oscillatory networks that encode information in the phases of spikes relative to the background oscillation. Our results show that firing one spike per cycle optimizes memory capacity in neurons encoding information in the phase of firing relative to a background rhythm. PMID:22879876
A differential memristive synapse circuit for on-line learning in neuromorphic computing systems
NASA Astrophysics Data System (ADS)
Nair, Manu V.; Muller, Lorenz K.; Indiveri, Giacomo
2017-12-01
Spike-based learning with memristive devices in neuromorphic computing architectures typically uses learning circuits that require overlapping pulses from pre- and post-synaptic nodes. This imposes severe constraints on the length of the pulses transmitted in the network, and on the network's throughput. Furthermore, most of these circuits do not decouple the currents flowing through memristive devices from the ones stimulating the target neuron. This can be a problem when using devices with high conductance values, because of the resulting large currents. In this paper, we propose a novel circuit that decouples the current produced by the memristive device from the one used to stimulate the post-synaptic neuron, by using a novel differential scheme based on the Gilbert normalizer circuit. We show how this circuit is useful for reducing the effect of variability in the memristive devices, and how it is ideally suited for spike-based learning mechanisms that do not require overlapping pre- and post-synaptic pulses. We demonstrate the features of the proposed synapse circuit with SPICE simulations, and validate its learning properties with high-level behavioral network simulations which use a stochastic gradient descent learning rule in two benchmark classification tasks.
A Synaptic Basis for Memory Storage in the Cerebral Cortex
NASA Astrophysics Data System (ADS)
Bear, Mark F.
1996-11-01
A cardinal feature of neurons in the cerebral cortex is stimulus selectivity, and experience-dependent shifts in selectivity are a common correlate of memory formation. We have used a theoretical "learning rule," devised to account for experience-dependent shifts in neuronal selectivity, to guide experiments on the elementary mechanisms of synaptic plasticity in hippocampus and neocortex. These experiments reveal that many synapses in hippocampus and neocortex are bidirectionally modifiable, that the modifications persist long enough to contribute to long-term memory storage, and that key variables governing the sign of synaptic plasticity are the amount of NMDA receptor activation and the recent history of cortical activity.
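The theoretical rule referenced here is of the BCM type, in which the sign of plasticity depends on postsynaptic activity relative to a sliding modification threshold. The abstract gives no equations, so the following is the textbook form with invented constants:

```python
def bcm_run(x=1.0, w0=0.5, lr=0.005, tau_theta=20.0, steps=5000):
    """BCM-style bidirectional rule: dw is proportional to x*y*(y - theta),
    with the modification threshold theta sliding toward the recent
    average of y**2, which encodes the recent history of activity."""
    w, theta = w0, 0.0
    for _ in range(steps):
        y = w * x                                 # postsynaptic response
        w = max(0.0, w + lr * x * y * (y - theta))
        theta += (y * y - theta) / tau_theta      # sliding threshold
    return w, theta

w_final, theta_final = bcm_run()
```

Responses above the threshold potentiate and responses below it depress, so the same synapse is bidirectionally modifiable, and the sliding threshold captures the dependence on the recent history of cortical activity noted in the last sentence.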
Unsupervised learning in neural networks with short range synapses
NASA Astrophysics Data System (ADS)
Brunnet, L. G.; Agnes, E. J.; Mizusaki, B. E. P.; Erichsen, R., Jr.
2013-01-01
Different areas of the brain are involved in specific aspects of the information being processed both in learning and in memory formation. For example, the hippocampus is important in the consolidation of information from short-term memory to long-term memory, while emotional memory seems to be handled by the amygdala. On the microscopic scale the underlying structures in these areas differ in the kind of neurons involved, in their connectivity, or in their clustering degree but, at this level, learning and memory are attributed to neuronal synapses mediated by long-term potentiation and long-term depression. In this work we explore the properties of a short range synaptic connection network, a nearest neighbor lattice composed mostly of excitatory neurons and a fraction of inhibitory ones. The mechanism of synaptic modification responsible for the emergence of memory is Spike-Timing-Dependent Plasticity (STDP), a Hebbian-like rule, where potentiation/depression is acquired when causal/non-causal spikes happen in a synapse involving two neurons. The system is intended to store and recognize memories associated with spatial external inputs presented as simple geometrical forms. The synaptic modifications are continuously applied to excitatory connections, including a homeostasis rule and STDP. In this work we explore the different scenarios under which a network with short range connections can accomplish the task of storing and recognizing simple connected patterns.
Synaptic and nonsynaptic plasticity approximating probabilistic inference
Tully, Philip J.; Hennig, Matthias H.; Lansner, Anders
2014-01-01
Learning and memory operations in neural circuits are believed to involve molecular cascades of synaptic and nonsynaptic changes that lead to a diverse repertoire of dynamical phenomena at higher levels of processing. Hebbian and homeostatic plasticity, neuromodulation, and intrinsic excitability all conspire to form and maintain memories. But it is still unclear how these seemingly redundant mechanisms could jointly orchestrate learning in a more unified system. To this end, a Hebbian learning rule for spiking neurons inspired by Bayesian statistics is proposed. In this model, synaptic weights and intrinsic currents are adapted on-line upon arrival of single spikes, which initiate a cascade of temporally interacting memory traces that locally estimate probabilities associated with relative neuronal activation levels. Trace dynamics enable synaptic learning to readily demonstrate a spike-timing dependence, stably return to a set-point over long time scales, and remain competitive despite this stability. Beyond unsupervised learning, linking the traces with an external plasticity-modulating signal enables spike-based reinforcement learning. At the postsynaptic neuron, the traces are represented by an activity-dependent ion channel that is shown to regulate the input received by a postsynaptic cell and generate intrinsic graded persistent firing levels. We show how spike-based Hebbian-Bayesian learning can be performed in a simulated inference task using integrate-and-fire (IAF) neurons that are Poisson-firing and background-driven, similar to the preferred regime of cortical neurons. Our results support the view that neurons can represent information in the form of probability distributions, and that probabilistic inference could be a functional by-product of coupled synaptic and nonsynaptic mechanisms operating over several timescales. 
The model provides a biophysical realization of Bayesian computation by reconciling several observed neural phenomena whose functional effects are only partially understood in concert. PMID:24782758
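The probabilistic reading of Hebbian weights proposed here (the BCPNN family) sets each weight to the log ratio of joint to marginal activation probabilities. For clarity the sketch below estimates these from batch counts, whereas the paper uses cascades of on-line spike traces; the epsilon regularizer and data are my own:

```python
import math

def bcpnn_weight(events, eps=1e-3):
    """Hebbian-Bayesian weight: w = log[ P(pre, post) / (P(pre) P(post)) ],
    estimated from (pre, post) co-activation counts."""
    n = len(events)
    p_pre = (sum(pre for pre, _ in events) + eps) / (n + eps)
    p_post = (sum(post for _, post in events) + eps) / (n + eps)
    p_joint = (sum(pre * post for pre, post in events) + eps) / (n + eps)
    return math.log(p_joint / (p_pre * p_post))

# Correlated firing yields a positive (excitatory) weight,
# anti-correlated firing a negative (inhibitory) one.
w_corr = bcpnn_weight([(1, 1)] * 40 + [(0, 0)] * 60)
w_anti = bcpnn_weight([(1, 0)] * 40 + [(0, 1)] * 40 + [(0, 0)] * 20)
```

Replacing the batch counts with exponentially decaying traces recovers the spike-timing dependence and the set-point behavior described in the abstract.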
Criterion learning in rule-based categorization: Simulation of neural mechanism and new data
Helie, Sebastien; Ell, Shawn W.; Filoteo, J. Vincent; Maddox, W. Todd
2015-01-01
In perceptual categorization, rule selection consists of selecting one or several stimulus-dimensions to be used to categorize the stimuli (e.g., categorize lines according to their length). Once a rule has been selected, criterion learning consists of defining how stimuli will be grouped using the selected dimension(s) (e.g., if the selected rule is line length, define ‘long’ and ‘short’). Very little is known about the neuroscience of criterion learning, and most existing computational models do not provide a biological mechanism for this process. In this article, we introduce a new model of rule learning called Heterosynaptic Inhibitory Criterion Learning (HICL). HICL includes a biologically-based explanation of criterion learning, and we use new category-learning data to test key aspects of the model. In HICL, rule selective cells in prefrontal cortex modulate stimulus-response associations using pre-synaptic inhibition. Criterion learning is implemented by a new type of heterosynaptic error-driven Hebbian learning at inhibitory synapses that uses feedback to drive cell activation above/below thresholds representing ionic gating mechanisms. The model is used to account for new human categorization data from two experiments showing that: (1) changing rule criterion on a given dimension is easier if irrelevant dimensions are also changing (Experiment 1), and (2) changing the relevant rule dimension and learning a new criterion is more difficult, but is also facilitated by a change in the irrelevant dimension (Experiment 2). We conclude with a discussion of some of HICL’s implications for future research on rule learning. PMID:25682349
Origin of the spike-timing-dependent plasticity rule
NASA Astrophysics Data System (ADS)
Cho, Myoung Won; Choi, M. Y.
2016-08-01
A biological synapse changes its efficacy depending on the difference between pre- and post-synaptic spike timings. Formulating spike-timing-dependent interactions in terms of the path integral, we establish a neural-network model, which makes it possible to predict relevant quantities rigorously by means of standard methods in statistical mechanics and field theory. In particular, the biological synaptic plasticity rule is shown to emerge as the optimal form for minimizing the free energy. It is further revealed that maximization of the entropy of neural activities gives rise to the competitive behavior of biological learning. This demonstrates that statistical mechanics helps to understand rigorously key characteristic behaviors of a neural network, thus providing the possibility of physics serving as a useful and relevant framework for probing life.
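For reference, the asymmetric plasticity rule that the paper derives as a free-energy optimum has, in its canonical form, the standard double-exponential STDP window; the symbols below are the conventional ones and are not taken verbatim from the paper:

```latex
\Delta w(\Delta t) =
\begin{cases}
A_{+}\, e^{-\Delta t/\tau_{+}}, & \Delta t > 0 \quad \text{(pre before post: potentiation)},\\[2pt]
-A_{-}\, e^{\Delta t/\tau_{-}}, & \Delta t < 0 \quad \text{(post before pre: depression)},
\end{cases}
\qquad \Delta t = t_{\mathrm{post}} - t_{\mathrm{pre}} .
```

Here $A_{\pm}$ set the amplitudes and $\tau_{\pm}$ the temporal windows of potentiation and depression; the paper's claim is that this asymmetric shape minimizes the network free energy rather than being an independent postulate.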
A saturation hypothesis to explain both enhanced and impaired learning with enhanced plasticity
Nguyen-Vu, TD Barbara; Zhao, Grace Q; Lahiri, Subhaneil; Kimpo, Rhea R; Lee, Hanmi; Ganguli, Surya; Shatz, Carla J; Raymond, Jennifer L
2017-01-01
Across many studies, animals with enhanced synaptic plasticity exhibit either enhanced or impaired learning, raising a conceptual puzzle: how enhanced plasticity can yield opposite learning outcomes? Here, we show that the recent history of experience can determine whether mice with enhanced plasticity exhibit enhanced or impaired learning in response to the same training. Mice with enhanced cerebellar LTD, due to double knockout (DKO) of MHCI H2-Kb/H2-Db (KbDb−/−), exhibited oculomotor learning deficits. However, the same mice exhibited enhanced learning after appropriate pre-training. Theoretical analysis revealed that synapses with history-dependent learning rules could recapitulate the data, and suggested that saturation may be a key factor limiting the ability of enhanced plasticity to enhance learning. Optogenetic stimulation designed to saturate LTD produced the same impairment in WT as observed in DKO mice. Overall, our results suggest that the recent history of activity and the threshold for synaptic plasticity conspire to effect divergent learning outcomes. DOI: http://dx.doi.org/10.7554/eLife.20147.001 PMID:28234229
Cyr, André; Boukadoum, Mounir
2013-03-01
This paper presents a novel bio-inspired habituation function for robots under control by an artificial spiking neural network. This non-associative learning rule is modelled at the synaptic level and validated through robotic behaviours in reaction to different stimuli patterns in a dynamical virtual 3D world. Habituation is minimally represented to show an attenuated response after exposure to and perception of persistent external stimuli. Based on current neurosciences research, the originality of this rule includes modulated response to variable frequencies of the captured stimuli. Filtering out repetitive data from the natural habituation mechanism has been demonstrated to be a key factor in the attention phenomenon, and inserting such a rule operating at multiple temporal dimensions of stimuli increases a robot's adaptive behaviours by ignoring broader contextual irrelevant information.
Synchrony detection and amplification by silicon neurons with STDP synapses.
Bofill-i-Petit, Adria; Murray, Alan F
2004-09-01
Spike-timing dependent synaptic plasticity (STDP) is a form of plasticity driven by precise spike-timing differences between presynaptic and postsynaptic spikes. Thus, the learning rules underlying STDP are suitable for learning neuronal temporal phenomena such as spike-timing synchrony. It is well known that weight-independent STDP creates unstable learning processes resulting in balanced bimodal weight distributions. In this paper, we present a neuromorphic analog very large scale integration (VLSI) circuit that contains a feedforward network of silicon neurons with STDP synapses. The learning rule implemented can be tuned to have a moderate level of weight dependence. This helps stabilise the learning process and still generates bimodal weight distributions. From on-chip learning experiments we show that the chip can detect and amplify hierarchical spike-timing synchrony structures embedded in noisy spike trains. The weight distributions of the network emerging from learning are bimodal.
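The trade-off behind the "moderate level of weight dependence" can be seen in a single bounded synapse receiving balanced, random potentiation and depression events. This is an illustrative toy, not the chip's circuit equations: purely additive updates let the weight wander across its whole range, while weight-dependent updates pull it toward an interior equilibrium (and a strong dependence would erase the bimodality the chip is tuned to keep):

```python
import random

def stdp_drift(weight_dependent, steps=4000, seed=0):
    """Apply balanced random potentiation/depression events to one
    bounded synapse, additively or with weight-dependent step sizes."""
    rng = random.Random(seed)
    w = 0.5
    for _ in range(steps):
        if rng.random() < 0.5:
            dw = 0.02 * (1.0 - w) if weight_dependent else 0.02
        else:
            dw = -0.02 * w if weight_dependent else -0.02
        w = min(1.0, max(0.0, w + dw))
    return w

final_mult = [stdp_drift(True, seed=s) for s in range(30)]
final_add = [stdp_drift(False, seed=s) for s in range(30)]
spread_mult = max(final_mult) - min(final_mult)
spread_add = max(final_add) - min(final_add)
```

The additive walk scatters final weights across nearly the full [0, 1] range, whereas the weight-dependent rule keeps them clustered, which is why a moderate (rather than strong) dependence stabilizes learning yet still permits bimodal outcomes.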
Characterization of emergent synaptic topologies in noisy neural networks
NASA Astrophysics Data System (ADS)
Miller, Aaron James
Learned behaviors are one of the key contributors to an animal's ultimate survival. It is widely believed that the brain's microcircuitry undergoes structural changes when a new behavior is learned. In particular, motor learning, during which an animal learns a sequence of muscular movements, often requires precisely-timed coordination between muscles and becomes very natural once ingrained. Experiments show that neurons in the motor cortex exhibit precisely-timed spike activity when performing a learned motor behavior, and constituent stereotypical elements of the behavior can last several hundred milliseconds. This manuscript concerns how organized synaptic structures that produce stereotypical spike sequences emerge from random, dynamical networks. After a brief introduction in Chapter 1, we begin Chapter 2 by introducing a spike-timing-dependent plasticity (STDP) rule that defines how the activity of the network drives changes in network topology. The rule is then applied to idealized networks of leaky integrate-and-fire (LIF) neurons. These neurons are not subjected to the variability that typically characterizes neurons in vivo. In noiseless networks, synapses develop closed loops of strong connectivity that reproduce stereotypical, precisely-timed spike patterns from an initially random network. We demonstrate that the characteristics of the asymptotic synaptic configuration depend on the statistics of the initial random network. The spike timings of the neurons simulated in Chapter 2 are generated exactly by a computationally economical, nonlinear mapping which is extended to LIF neurons injected with fluctuating current in Chapter 3. Development of an economical mapping that incorporates noise provides a practical solution to the long simulation times required to produce asymptotic synaptic topologies in networks with STDP in the presence of realistic neuronal variability.
The mapping relies on generating numerical solutions to the dynamics of a LIF neuron subjected to Gaussian white noise (GWN). The system reduces to the Ornstein-Uhlenbeck first passage time problem, the solution of which we build into the mapping method of Chapter 2. We demonstrate that simulations using the stochastic mapping reduce computation time compared to traditional Runge-Kutta methods by more than a factor of 150. In Chapter 4, we use the stochastic mapping to study the dynamics of emerging synaptic topologies in noisy networks. With the addition of membrane noise, networks with dynamical synapses can admit states in which the distribution of the synaptic weights is static under spontaneous activity, but the random connectivity between neurons is dynamical. The widely cited problem of instabilities in networks with STDP is avoided with the implementation of a synaptic decay and an activation threshold on each synapse. When such networks are presented with a stimulus modeled by a focused excitatory current, chain-like networks can emerge with the addition of an axon-remodeling plasticity rule, a topological constraint on the connectivity modeling the finite resources available to each neuron. The emergent topologies are the result of an iterative stochastic process. The dynamics of the growth process suggest a strong interplay between the network topology and the spike sequences produced during development. Namely, the existence of an embedded spike sequence alters the distribution of synaptic weights throughout the entire network. The roles of model parameters that affect the interplay between network structure and activity are elucidated. Finally, we propose two complementary mathematical growth models that capture the essence of the growth dynamics observed in simulations. In Chapter 5, we present an extension of the stochastic mapping that allows the possibility of neuronal cooperation.
We demonstrate that synaptic topologies admitting stereotypical sequences can emerge at yet higher, biologically realistic levels of membrane potential variability when neurons cooperate to innervate shared targets. The structure that is most robust to this variability is the synfire chain. The principles of growth dynamics detailed in Chapter 4 are the same ones that sculpt the emergent synfire topologies. We conclude by discussing avenues for extending these results.
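The exact-update idea behind the stochastic mapping can be sketched for a single LIF membrane: between spikes the dynamics are an Ornstein-Uhlenbeck process, which can be advanced over any step size without small-dt integration error. A minimal sketch (all parameters are illustrative; the per-step threshold check below ignores within-step crossings, which is precisely the first-passage-time problem the thesis solves):

```python
import numpy as np

def ou_lif_step(v, dt, rng, tau=20.0, mu=1.2, sigma=0.5, v_th=1.0, v_reset=0.0):
    """One step of a leaky integrate-and-fire membrane driven by Gaussian white
    noise, advanced with the exact Ornstein-Uhlenbeck update for
    dV = -(V - mu)/tau dt + sigma dW (no small-dt integration error).
    Threshold crossings inside a step are ignored here; the thesis's mapping
    additionally solves the first-passage-time problem."""
    decay = np.exp(-dt / tau)
    mean = v * decay + mu * (1.0 - decay)                 # exact conditional mean
    std = sigma * np.sqrt(tau / 2.0 * (1.0 - decay ** 2)) # exact conditional std
    v_new = mean + std * rng.standard_normal()
    if v_new >= v_th:
        return v_reset, True
    return v_new, False

rng = np.random.default_rng(0)
v, spikes, n_steps, dt = 0.0, 0, 100000, 0.1
for _ in range(n_steps):
    v, s = ou_lif_step(v, dt, rng)
    spikes += s
rate = spikes / (n_steps * dt)   # spikes per ms for these made-up constants
```

Because the update is exact for any dt, the step size can be chosen as large as the threshold-crossing bookkeeping allows, which is the source of the speedup over Runge-Kutta integration.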
Brzosko, Zuzanna; Zannone, Sara; Schultz, Wolfram
2017-01-01
Spike timing-dependent plasticity (STDP) is under neuromodulatory control, which is correlated with distinct behavioral states. Previously, we reported that dopamine, a reward signal, broadens the time window for synaptic potentiation and modulates the outcome of hippocampal STDP even when applied after the plasticity induction protocol (Brzosko et al., 2015). Here, we demonstrate that sequential neuromodulation of STDP by acetylcholine and dopamine offers an efficacious model of reward-based navigation. Specifically, our experimental data in mouse hippocampal slices show that acetylcholine biases STDP toward synaptic depression, whilst subsequent application of dopamine converts this depression into potentiation. Incorporating this bidirectional neuromodulation-enabled correlational synaptic learning rule into a computational model yields effective navigation toward changing reward locations, as in natural foraging behavior. Thus, temporally sequenced neuromodulation of STDP enables associations to be made between actions and outcomes and also provides a possible mechanism for aligning the time scales of cellular and behavioral learning. DOI: http://dx.doi.org/10.7554/eLife.27756.001 PMID:28691903
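The direction of these neuromodulatory effects can be cartooned with a pair-based STDP window. The time constants, amplitudes, and the specific parameter changes below are illustrative assumptions, not the paper's measured values:

```python
import numpy as np

def stdp_dw(delta_t, a_plus=1.0, a_minus=0.5, tau_plus=20.0, tau_minus=20.0):
    """Weight change for one spike pair; delta_t = t_post - t_pre in ms."""
    if delta_t >= 0:
        return a_plus * np.exp(-delta_t / tau_plus)    # pre-before-post: potentiation
    return -a_minus * np.exp(delta_t / tau_minus)      # post-before-pre: depression

def stdp_dw_dopamine(delta_t):
    """Cartoon of the reported dopamine effect: a broader potentiation
    window and weakened depression (parameter changes are assumptions)."""
    return stdp_dw(delta_t, tau_plus=40.0, a_minus=0.1)

def stdp_dw_ach(delta_t):
    """Cartoon of the reported acetylcholine effect: the rule is biased
    toward depression, here by flipping the sign of the LTP lobe."""
    return stdp_dw(delta_t, a_plus=-0.3)
```

In the paper's sequential scheme, the acetylcholine-biased depression is retroactively converted into potentiation by subsequent dopamine; the functions above only cartoon the two window shapes, not the conversion mechanism.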
Li, Yi; Zhong, Yingpeng; Zhang, Jinjian; Xu, Lei; Wang, Qing; Sun, Huajun; Tong, Hao; Cheng, Xiaoming; Miao, Xiangshui
2014-05-09
Nanoscale inorganic electronic synapses or synaptic devices, which are capable of emulating the functions of biological synapses of brain neuronal systems, are regarded as the basic building blocks for beyond-Von Neumann computing architecture, combining information storage and processing. Here, we demonstrate a Ag/AgInSbTe/Ag structure for chalcogenide memristor-based electronic synapses. The memristive characteristics with reproducible gradual resistance tuning are utilised to mimic the activity-dependent synaptic plasticity that serves as the basis of memory and learning. Bidirectional long-term Hebbian plasticity modulation is implemented by the coactivity of pre- and postsynaptic spikes, and the sign and degree are affected by assorted factors including the temporal difference, spike rate and voltage. Moreover, synaptic saturation is observed to be an adjustment of Hebbian rules to stabilise the growth of synaptic weights. Our results may contribute to the development of highly functional plastic electronic synapses and the further construction of next-generation parallel neuromorphic computing architecture.
Siri, Benoît; Berry, Hugues; Cessac, Bruno; Delord, Bruno; Quoy, Mathias
2008-12-01
We present a mathematical analysis of the effects of Hebbian learning in random recurrent neural networks, with a generic Hebbian learning rule, including passive forgetting and different timescales, for neuronal activity and learning dynamics. Previous numerical work has reported that Hebbian learning drives the system from chaos to a steady state through a sequence of bifurcations. Here, we interpret these results mathematically and show that these effects, involving a complex coupling between neuronal dynamics and synaptic graph structure, can be analyzed using Jacobian matrices, which introduce both a structural and a dynamical point of view on neural network evolution. Furthermore, we show that sensitivity to a learned pattern is maximal when the largest Lyapunov exponent is close to 0. We discuss how neural networks may take advantage of this regime of high functional interest.
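The setting can be sketched with a discrete-time rate network: the largest Lyapunov exponent is estimated from products of Jacobian matrices along the trajectory, and Hebbian learning with passive forgetting drives the dynamics from chaos toward a steady state. All sizes and constants below are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50
g = 3.0                                     # gain well above the chaos transition
W = g * rng.standard_normal((N, N)) / np.sqrt(N)
x = rng.uniform(-1.0, 1.0, N)

def largest_lyapunov(W, x, n_steps=2000):
    """Estimate the largest Lyapunov exponent of x(t+1) = tanh(W x(t)) from
    products of Jacobians J(t) = diag(1 - tanh(Wx)^2) W along the trajectory."""
    v = np.ones(len(x)) / np.sqrt(len(x))
    lam = 0.0
    for _ in range(n_steps):
        u = W @ x
        J = (1.0 - np.tanh(u) ** 2)[:, None] * W
        v = J @ v
        nrm = np.linalg.norm(v)
        lam += np.log(nrm)
        v /= nrm
        x = np.tanh(u)
    return lam / n_steps

lam_before = largest_lyapunov(W, x)

# Hebbian learning with passive forgetting (constants are illustrative);
# it contracts the synaptic matrix and tames the chaotic dynamics
eps, forget = 0.05, 0.1
for _ in range(200):
    x = np.tanh(W @ x)
    W = (1.0 - forget) * W + eps * np.outer(x, x) / N

lam_after = largest_lyapunov(W, x)
```

The Jacobian product makes explicit the coupling the paper analyzes: the synaptic graph enters through W and the neuronal dynamics through the diagonal gain factors.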
Selective synaptic remodeling of amygdalocortical connections associated with fear memory.
Yang, Yang; Liu, Dan-Qian; Huang, Wei; Deng, Juan; Sun, Yangang; Zuo, Yi; Poo, Mu-Ming
2016-10-01
Neural circuits underlying auditory fear conditioning have been extensively studied. Here we identified a previously unexplored pathway from the lateral amygdala (LA) to the auditory cortex (ACx) and found that selective silencing of this pathway using chemo- and optogenetic approaches impaired fear memory retrieval. Dual-color in vivo two-photon imaging of mouse ACx showed pathway-specific increases in the formation of LA axon boutons, dendritic spines of ACx layer 5 pyramidal cells, and putative LA-ACx synaptic pairs after auditory fear conditioning. Furthermore, joint imaging of pre- and postsynaptic structures showed that essentially all new synaptic contacts were made by adding new partners to existing synaptic elements. Together, these findings identify an amygdalocortical projection that is important to fear memory expression and is selectively modified by associative fear learning, and unravel a distinct architectural rule for synapse formation in the adult brain.
Network Supervision of Adult Experience and Learning Dependent Sensory Cortical Plasticity.
Blake, David T
2017-06-18
The brain is capable of remodeling throughout life. The sensory cortices provide a useful preparation for studying neuroplasticity both during development and thereafter. In adulthood, sensory cortices change in the cortical area activated by behaviorally relevant stimuli, by the strength of response within that activated area, and by the temporal profiles of those responses. Evidence supports forms of unsupervised, reinforcement, and fully supervised network learning rules. Studies on experience-dependent plasticity have mostly not controlled for learning, and they find support for unsupervised learning mechanisms. Changes occur with greatest ease in neurons containing α-CamKII, which are pyramidal neurons in layers II/III and layers V/VI. These changes use synaptic mechanisms including long term depression. Synaptic strengthening at NMDA-containing synapses does occur, but its weak association with activity suggests other factors also initiate changes. Studies that control learning find support of reinforcement learning rules and limited evidence of other forms of supervised learning. Behaviorally associating a stimulus with reinforcement leads to a strengthening of cortical response strength and enlarging of response area with poor selectivity. Associating a stimulus with omission of reinforcement leads to a selective weakening of responses. In some preparations in which these associations are not as clearly made, neurons with the most informative discharges are relatively stronger after training. Studies analyzing the temporal profile of responses associated with omission of reward, or of plasticity in studies with different discriminanda but statistically matched stimuli, support the existence of limited supervised network learning. © 2017 American Physiological Society. Compr Physiol 7:977-1008, 2017. Copyright © 2017 John Wiley & Sons, Inc.
A forecast-based STDP rule suitable for neuromorphic implementation.
Davies, S; Galluppi, F; Rast, A D; Furber, S B
2012-08-01
Artificial neural networks increasingly involve spiking dynamics to permit greater computational efficiency. This becomes especially attractive for on-chip implementation using dedicated neuromorphic hardware. However, both spiking neural networks and neuromorphic hardware have historically found difficulties in implementing efficient, effective learning rules. The best-known spiking neural network learning paradigm is Spike Timing Dependent Plasticity (STDP), which adjusts the strength of a connection in response to the time difference between the pre- and post-synaptic spikes. Approaches that relate learning features to the membrane potential of the post-synaptic neuron have emerged as possible alternatives to the more common STDP rule, with various implementations and approximations. Here we use a new type of neuromorphic hardware, SpiNNaker, which represents the flexible "neuromimetic" architecture, to demonstrate a new approach to this problem. Based on the standard STDP algorithm with modifications and approximations, a new rule, called STDP TTS (Time-To-Spike), relates the membrane potential to the Long Term Potentiation (LTP) part of the basic STDP rule. Meanwhile, we use the standard STDP rule for the Long Term Depression (LTD) part of the algorithm. We show that on the basis of the membrane potential it is possible to make a statistical prediction of the time needed by the neuron to reach the threshold, and therefore the LTP part of the STDP algorithm can be triggered when the neuron receives a spike. In our system these approximations allow efficient memory access, reducing the overall computational time and the memory bandwidth required. The improvements presented here are significant for real-time applications such as the ones for which the SpiNNaker system has been designed. We present simulation results that show the efficacy of this algorithm using one or more input patterns repeated over the whole time of the simulation.
On-chip results show that the STDP TTS algorithm allows the neural network to adapt and detect the incoming pattern with improvements both in the reliability of, and the time required for, consistent output. Through the approximations we suggest in this paper, we introduce a learning rule that is easy to implement both in event-driven simulators and in dedicated hardware, reducing computational complexity relative to the standard STDP rule. Such a rule offers a promising solution, complementary to standard STDP evaluation algorithms, for real-time learning using spiking neural networks in time-critical applications. Copyright © 2012 Elsevier Ltd. All rights reserved.
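The core idea, triggering the LTP update from a forecast of the postsynaptic time-to-spike rather than waiting for the actual spike, can be sketched as follows. The linear forecast below is a simple stand-in for the statistical prediction used on SpiNNaker, and all constants are assumptions:

```python
import numpy as np

def forecast_time_to_spike(v, v_rest=-65.0, v_th=-50.0, t_max=50.0):
    """Stand-in forecast of how long the neuron needs to reach threshold:
    the closer the membrane potential v is to threshold, the shorter the
    forecast (a linear toy model, not SpiNNaker's fitted predictor)."""
    if v >= v_th:
        return 0.0
    frac = (v_th - v) / (v_th - v_rest)   # 0 at threshold, 1 at rest
    return t_max * min(frac, 1.0)

def ttp_ltp(v_post, a_plus=0.1, tau_plus=20.0):
    """LTP applied immediately when a presynaptic spike arrives, using the
    forecast time-to-spike in place of the actual post-spike time."""
    dt_hat = forecast_time_to_spike(v_post)
    return a_plus * np.exp(-dt_hat / tau_plus)

def standard_ltd(delta_t, a_minus=0.12, tau_minus=20.0):
    """Unchanged standard STDP depression for post-before-pre (delta_t < 0)."""
    return -a_minus * np.exp(delta_t / tau_minus)
```

Because the LTP update fires at presynaptic spike arrival, no post-spike history needs to be stored and replayed, which is the source of the memory-access savings described above.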
Neurons with two sites of synaptic integration learn invariant representations.
Körding, K P; König, P
2001-12-01
Neurons in mammalian cerebral cortex combine specific responses with respect to some stimulus features with invariant responses to other stimulus features. For example, in primary visual cortex, complex cells code for orientation of a contour but ignore its position to a certain degree. In higher areas, such as the inferotemporal cortex, translation-invariant, rotation-invariant, and even viewpoint-invariant responses can be observed. Such properties are of obvious interest to artificial systems performing tasks like pattern recognition. It remains to be resolved how such response properties develop in biological systems. Here we present an unsupervised learning rule that addresses this problem. It is based on a neuron model with two sites of synaptic integration, allowing qualitatively different effects of input to basal and apical dendritic trees, respectively. Without supervision, the system learns to extract invariance properties using temporal or spatial continuity of stimuli. Furthermore, top-down information can be smoothly integrated in the same framework. Thus, this model lends a physiological implementation to approaches of unsupervised learning of invariant-response properties.
How Attention Can Create Synaptic Tags for the Learning of Working Memories in Sequential Tasks
Rombouts, Jaldert O.; Bohte, Sander M.; Roelfsema, Pieter R.
2015-01-01
Intelligence is our ability to learn appropriate responses to new stimuli and situations. Neurons in association cortex are thought to be essential for this ability. During learning these neurons become tuned to relevant features and start to represent them with persistent activity during memory delays. This learning process is not well understood. Here we develop a biologically plausible learning scheme that explains how trial-and-error learning induces neuronal selectivity and working memory representations for task-relevant information. We propose that the response selection stage sends attentional feedback signals to earlier processing levels, forming synaptic tags at those connections responsible for the stimulus-response mapping. Globally released neuromodulators then interact with tagged synapses to determine their plasticity. The resulting learning rule endows neural networks with the capacity to create new working memory representations of task relevant information as persistent activity. It is remarkably generic: it explains how association neurons learn to store task-relevant information for linear as well as non-linear stimulus-response mappings, how they become tuned to category boundaries or analog variables, depending on the task demands, and how they learn to integrate probabilistic evidence for perceptual decisions. PMID:25742003
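The tag-and-modulate scheme can be sketched schematically: attentional feedback from the selected response tags the synapses responsible for the stimulus-response mapping, and a globally released neuromodulator converts tags into weight changes. Sizes and constants below are made up for illustration and are not the paper's exact model:

```python
import numpy as np

def trial(W, tags, x, feedback, rpe, tag_decay=0.8, lr=0.5):
    """One trial of attention-gated tag-and-modulate learning (schematic).
    feedback[j] is the attentional feedback reaching hidden unit j from the
    selected action; rpe is the globally broadcast reward-prediction error."""
    h = np.tanh(W @ x)
    # feedback tags the synapses that contributed to the chosen response
    tags = tag_decay * tags + np.outer(feedback * (1.0 - h ** 2), x)
    # the neuromodulator converts tags into actual weight changes
    W = W + lr * rpe * tags
    return W, tags

rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((3, 4))
tags = np.zeros_like(W)
x = np.array([1.0, 0.0, 0.5, 0.0])
fb = np.array([1.0, 0.0, 0.0])        # attention highlights hidden unit 0 only
W0 = W.copy()
W, tags = trial(W, tags, x, fb, rpe=1.0)
```

Only synapses that are both tagged by feedback and active presynaptically are modified, so credit assignment is local to the pathway that produced the response.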
Minot, Thomas; Dury, Hannah L; Eguchi, Akihiro; Humphreys, Glyn W; Stringer, Simon M
2017-03-01
We use an established neural network model of the primate visual system to show how neurons might learn to encode the gender of faces. The model consists of a hierarchy of 4 competitive neuronal layers with associatively modifiable feedforward synaptic connections between successive layers. During training, the network was presented with many realistic images of male and female faces, during which the synaptic connections are modified using biologically plausible local associative learning rules. After training, we found that different subsets of output neurons had learned to respond exclusively to either male or female faces. With the inclusion of short-range excitation within each neuronal layer to implement a self-organizing map architecture, neurons representing either male or female faces were clustered together in the output layer. This learning process is entirely unsupervised, as the gender of the face images is not explicitly labeled and provided to the network as a supervisory training signal. These simulations are extended to training the network on rotating faces. It is found that by using a trace learning rule incorporating a temporal memory trace of recent neuronal activity, neurons responding selectively to either male or female faces were also able to learn to respond invariantly over different views of the faces. This kind of trace learning has been previously shown to operate within the primate visual system by neurophysiological and psychophysical studies. The computer simulations described here predict that similar neurons encoding the gender of faces will be present within the primate visual system. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
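A minimal sketch of a trace learning rule of the kind used for view invariance: a memory trace of recent postsynaptic activity gates the Hebbian update, so temporally adjacent views of the same face are bound to the same neuron. Constants and the normalization step are illustrative, not the model's exact parameters:

```python
import numpy as np

def trace_update(w, x_seq, eta=0.05, mu=0.8):
    """Trace learning over a sequence of views x_seq: the weight update uses
    an exponential trace of recent postsynaptic activity rather than the
    instantaneous activity alone. Constants are illustrative."""
    trace = 0.0
    for x in x_seq:
        y = float(w @ x)               # postsynaptic activation
        trace = mu * trace + (1.0 - mu) * y
        w = w + eta * trace * x        # Hebbian term gated by the trace
        w = w / np.linalg.norm(w)      # weight normalization (competition)
    return w

rng = np.random.default_rng(2)
w = rng.random(8)
w /= np.linalg.norm(w)
views = [rng.random(8) for _ in range(5)]   # stand-ins for successive rotated views
w_new = trace_update(w, views)
```

With mu = 0 the rule reduces to plain normalized Hebbian learning; the temporal smoothing is what links different views presented in close succession.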
Modulated Hebb-Oja learning rule--a method for principal subspace analysis.
Jankovic, Marko V; Ogawa, Hidemitsu
2006-03-01
This paper presents an analysis of the recently proposed modulated Hebb-Oja (MHO) method, which performs a linear mapping to a lower-dimensional subspace. The principal component subspace is the case analyzed here. Compared to some other well-known methods for yielding the principal component subspace (e.g., Oja's Subspace Learning Algorithm), the proposed method has one feature that could be seen as desirable from the biological point of view: the learning rule for a synaptic efficacy does not need explicit information about the values of the other efficacies to make an individual efficacy modification. Also, the simplicity of the "neural circuits" that perform global computations, and the fact that their number does not depend on the number of input and output neurons, could be seen as good features of the proposed method.
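For contrast, Oja's Subspace Learning Algorithm, the baseline named above, can be sketched as follows. Note that the update of each efficacy involves the other efficacies through `W @ y`, which is exactly the dependence the MHO rule is designed to avoid. Data and constants are illustrative:

```python
import numpy as np

def oja_subspace(X, k, eta=0.005, epochs=100, seed=0):
    """Oja's Subspace Learning Algorithm: W converges to an (approximately)
    orthonormal basis of the principal k-dimensional subspace of the data.
    The -W @ outer(y, y) term couples every efficacy to all the others."""
    rng = np.random.default_rng(seed)
    W = 0.1 * rng.standard_normal((X.shape[1], k))
    for _ in range(epochs):
        for x in X:
            y = W.T @ x                                   # subspace outputs
            W += eta * (np.outer(x, y) - W @ np.outer(y, y))
    return W

# toy data whose variance is concentrated along the first two coordinate axes
rng = np.random.default_rng(3)
X = rng.standard_normal((500, 5)) * np.array([3.0, 2.0, 0.3, 0.3, 0.3])
W = oja_subspace(X, k=2)   # columns span roughly the principal 2-subspace
```

The MHO method replaces this cross-efficacy coupling with a global modulation signal; the sketch above only shows the baseline it is compared against.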
González-Rueda, Ana; Pedrosa, Victor; Feord, Rachael C; Clopath, Claudia; Paulsen, Ole
2018-03-21
Activity-dependent synaptic plasticity is critical for cortical circuit refinement. The synaptic homeostasis hypothesis suggests that synaptic connections are strengthened during wake and downscaled during sleep; however, it is not obvious how the same plasticity rules could explain both outcomes. Using whole-cell recordings and optogenetic stimulation of presynaptic input in urethane-anesthetized mice, which exhibit slow-wave-sleep (SWS)-like activity, we show that synaptic plasticity rules are gated by cortical dynamics in vivo. While Down states support conventional spike timing-dependent plasticity, Up states are biased toward depression such that presynaptic stimulation alone leads to synaptic depression, while connections contributing to postsynaptic spiking are protected against this synaptic weakening. We find that this novel activity-dependent and input-specific downscaling mechanism has two important computational advantages: (1) improved signal-to-noise ratio, and (2) preservation of previously stored information. Thus, these synaptic plasticity rules provide an attractive mechanism for SWS-related synaptic downscaling and circuit refinement. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
Nonlinear Hebbian Learning as a Unifying Principle in Receptive Field Formation.
Brito, Carlos S N; Gerstner, Wulfram
2016-09-01
The development of sensory receptive fields has been modeled in the past by a variety of models including normative models such as sparse coding or independent component analysis and bottom-up models such as spike-timing dependent plasticity or the Bienenstock-Cooper-Munro model of synaptic plasticity. Here we show that the above variety of approaches can all be unified into a single common principle, namely nonlinear Hebbian learning. When nonlinear Hebbian learning is applied to natural images, receptive field shapes were strongly constrained by the input statistics and preprocessing, but exhibited only modest variation across different choices of nonlinearities in neuron models or synaptic plasticity rules. Neither overcompleteness nor sparse network activity are necessary for the development of localized receptive fields. The analysis of alternative sensory modalities such as auditory models or V2 development leads to the same conclusions. In all examples, receptive fields can be predicted a priori by reformulating an abstract model as nonlinear Hebbian learning. Thus nonlinear Hebbian learning and natural statistics can account for many aspects of receptive field formation across models and sensory modalities.
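The principle can be illustrated with the simplest nonlinear Hebbian rule: a cubic output nonlinearity makes the rule seek heavy-tailed (high-kurtosis) input directions, as in independent component analysis. The sources and constants below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20000
s1 = rng.laplace(scale=1.0 / np.sqrt(2.0), size=n)  # heavy-tailed source, unit variance
s2 = rng.standard_normal(n)                         # Gaussian source, unit variance
X = np.column_stack([s1, s2])                       # uncorrelated, unit-variance inputs

w = np.array([1.0, 1.0]) / np.sqrt(2.0)             # start between the two sources
eta = 1e-3
for x in X:
    y = w @ x
    w = w + eta * x * y ** 3        # nonlinear Hebbian term, f(y) = y^3
    w = w / np.linalg.norm(w)       # multiplicative normalization (competition)
```

Both directions carry equal variance, so a linear Hebbian rule has no preference; the nonlinearity is what selects the sparse, heavy-tailed component, mirroring the paper's point that the input statistics, not the precise choice of nonlinearity, determine the outcome.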
Spike timing analysis in neural networks with unsupervised synaptic plasticity
NASA Astrophysics Data System (ADS)
Mizusaki, B. E. P.; Agnes, E. J.; Brunnet, L. G.; Erichsen, R., Jr.
2013-01-01
The synaptic plasticity rules that sculpt a neural network architecture are key elements to understanding cortical processing, as they may explain the emergence of stable, functional activity while avoiding runaway excitation. For an associative memory framework, they should be built in such a way as to enable the network to reproduce a robust spatio-temporal trajectory in response to an external stimulus. Still, how these rules may be implemented in recurrent networks, and how they relate to the network's capacity for pattern recognition, remains unclear. We studied the effects of three phenomenological unsupervised rules in sparsely connected recurrent networks for associative memory: spike-timing-dependent plasticity, short-term plasticity and a homeostatic scaling. The system stability is monitored during the learning process of the network, as the mean firing rate converges to a value determined by the homeostatic scaling. Afterwards, it is possible to measure the recovery efficiency of the activity following each initial stimulus. This is evaluated by a measure of the correlation between spike timings, and we analysed the memory separation capacity and limitations of this system.
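The interplay between a drifting plasticity process and a rate-stabilizing homeostatic scaling can be cartooned in a few lines; the rate model and every constant here are illustrative, not the paper's spiking network:

```python
import numpy as np

rng = np.random.default_rng(5)
w = 0.5 * rng.random(100)            # excitatory weights onto one readout neuron
target = 2.0                         # homeostatic set point for the firing rate

def firing_rate(w, gain=0.1):
    """Toy rate model: output rate grows with total synaptic drive."""
    return gain * w.sum()

rates = []
for _ in range(300):
    # stand-in for ongoing STDP/short-term drift: random weight perturbations
    w = np.clip(w + 0.02 * rng.standard_normal(w.size), 0.0, None)
    # homeostatic scaling: multiplicatively nudge all weights toward the set point
    r = firing_rate(w)
    w = w * (1.0 + 0.1 * (target - r) / target)
    rates.append(firing_rate(w))
```

The multiplicative correction pulls the mean rate to the set point regardless of the drift, which is the stabilizing role homeostatic scaling plays alongside the two Hebbian-type rules in the study.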
Time-Warp–Invariant Neuronal Processing
Gütig, Robert; Sompolinsky, Haim
2009-01-01
Fluctuations in the temporal durations of sensory signals constitute a major source of variability within natural stimulus ensembles. The neuronal mechanisms through which sensory systems can stabilize perception against such fluctuations are largely unknown. An intriguing instantiation of such robustness occurs in human speech perception, which relies critically on temporal acoustic cues that are embedded in signals with highly variable duration. Across different instances of natural speech, auditory cues can undergo temporal warping that ranges from 2-fold compression to 2-fold dilation without significant perceptual impairment. Here, we report that time-warp–invariant neuronal processing can be subserved by the shunting action of synaptic conductances that automatically rescales the effective integration time of postsynaptic neurons. We propose a novel spike-based learning rule for synaptic conductances that adjusts the degree of synaptic shunting to the temporal processing requirements of a given task. Applying this general biophysical mechanism to the example of speech processing, we propose a neuronal network model for time-warp–invariant word discrimination and demonstrate its excellent performance on a standard benchmark speech-recognition task. Our results demonstrate the important functional role of synaptic conductances in spike-based neuronal information processing and learning. The biophysics of temporal integration at neuronal membranes can endow sensory pathways with powerful time-warp–invariant computational capabilities. PMID:19582146
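The shunting mechanism follows from the passive membrane equation: the effective integration time is tau_eff = C / (g_L + g_syn), so scaling the total synaptic conductance rescales the integration time automatically. A one-line illustration (constants are arbitrary):

```python
C, g_L = 1.0, 0.05   # membrane capacitance and leak conductance (arbitrary units)

def tau_eff(g_syn):
    """Effective membrane time constant under synaptic shunting:
    tau_eff = C / (g_L + g_syn)."""
    return C / (g_L + g_syn)

# doubling the total conductance halves the effective integration time,
# which is the automatic rescaling exploited for time-warp invariance
slow = tau_eff(0.0)    # no synaptic shunting
fast = tau_eff(0.05)   # shunting conductance equal to the leak
```

A time-warped (e.g. 2-fold compressed) input drives proportionally larger synaptic conductances per unit time, so the postsynaptic integration window shrinks by the same factor and the neuron's response is preserved.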
Beyeler, Michael; Dutt, Nikil D; Krichmar, Jeffrey L
2013-12-01
Understanding how the human brain is able to efficiently perceive and understand a visual scene is still a field of ongoing research. Although many studies have focused on the design and optimization of neural networks to solve visual recognition tasks, most of them either lack neurobiologically plausible learning rules or decision-making processes. Here we present a large-scale model of a hierarchical spiking neural network (SNN) that integrates a low-level memory encoding mechanism with a higher-level decision process to perform a visual classification task in real-time. The model consists of Izhikevich neurons and conductance-based synapses for realistic approximation of neuronal dynamics, a spike-timing-dependent plasticity (STDP) synaptic learning rule with additional synaptic dynamics for memory encoding, and an accumulator model for memory retrieval and categorization. The full network, which comprised 71,026 neurons and approximately 133 million synapses, ran in real-time on a single off-the-shelf graphics processing unit (GPU). The network was constructed on a publicly available SNN simulator that supports general-purpose neuromorphic computer chips. The network achieved 92% correct classifications on MNIST in 100 rounds of random sub-sampling, which is comparable to other SNN approaches and provides a conservative and reliable performance metric. Additionally, the model correctly predicted reaction times from psychophysical experiments. Because of the scalability of the approach and its neurobiological fidelity, the current model can be extended to an efficient neuromorphic implementation that supports more generalized object recognition and decision-making architectures found in the brain. Copyright © 2013 Elsevier Ltd. All rights reserved.
Acetylcholine-modulated plasticity in reward-driven navigation: a computational study.
Zannone, Sara; Brzosko, Zuzanna; Paulsen, Ole; Clopath, Claudia
2018-06-21
Neuromodulation plays a fundamental role in the acquisition of new behaviours. In previous experimental work, we showed that acetylcholine biases hippocampal synaptic plasticity towards depression, and the subsequent application of dopamine can retroactively convert depression into potentiation. We also demonstrated that incorporating this sequentially neuromodulated Spike-Timing-Dependent Plasticity (STDP) rule in a network model of navigation yields effective learning of changing reward locations. Here, we employ computational modelling to further characterize the effects of cholinergic depression on behaviour. We find that acetylcholine, by allowing learning from negative outcomes, enhances exploration over the action space. We show that this results in a variety of effects, depending on the structure of the model, the environment and the task. Interestingly, sequentially neuromodulated STDP also yields flexible learning, surpassing the performance of other reward-modulated plasticity rules.
Zhang, Yong; Li, Peng; Jin, Yingyezhe; Choe, Yoonsuck
2015-11-01
This paper presents a bioinspired digital liquid-state machine (LSM) for low-power very-large-scale-integration (VLSI)-based machine learning applications. To the best of the authors' knowledge, this is the first work that employs a bioinspired spike-based learning algorithm for the LSM. With the proposed online learning, the LSM extracts information from input patterns on the fly without needing intermediate data storage as required in offline learning methods such as ridge regression. The proposed learning rule is local such that each synaptic weight update is based only upon the firing activities of the corresponding presynaptic and postsynaptic neurons without incurring global communications across the neural network. Compared with the backpropagation-based learning, the locality of computation in the proposed approach lends itself to efficient parallel VLSI implementation. We use subsets of the TI46 speech corpus to benchmark the bioinspired digital LSM. To reduce the complexity of the spiking neural network model without performance degradation for speech recognition, we study the impacts of synaptic models on the fading memory of the reservoir and hence the network performance. Moreover, we examine the tradeoffs between synaptic weight resolution, reservoir size, and recognition performance and present techniques to further reduce the overhead of hardware implementation. Our simulation results show that in terms of isolated word recognition evaluated using the TI46 speech corpus, the proposed digital LSM rivals the state-of-the-art hidden Markov-model-based recognizer Sphinx-4 and outperforms all other reported recognizers including the ones that are based upon the LSM or neural networks.
Dynamical model of long-term synaptic plasticity
Abarbanel, Henry D. I.; Huerta, R.; Rabinovich, M. I.
2002-01-01
Long-term synaptic plasticity leading to enhancement in synaptic efficacy (long-term potentiation, LTP) or decrease in synaptic efficacy (long-term depression, LTD) is widely regarded as underlying learning and memory in nervous systems. LTP and LTD at excitatory neuronal synapses are observed to be induced by precise timing of pre- and postsynaptic events. Modification of synaptic transmission in long-term plasticity is a complex process involving many pathways; for example, it is also known that both forms of synaptic plasticity can be induced by various time courses of Ca2+ introduction into the postsynaptic cell. We present a phenomenological description of a two-component process for synaptic plasticity. Our dynamical model reproduces the spike time-dependent plasticity of excitatory synapses as a function of relative timing between pre- and postsynaptic events, as observed in recent experiments. The model accounts for LTP and LTD when the postsynaptic cell is voltage clamped and depolarized (LTP) or hyperpolarized (LTD) and no postsynaptic action potentials are evoked. We are also able to connect our model with the Bienenstock, Cooper, and Munro rule. We give model predictions for changes in synaptic strength when periodic spike trains of varying frequency and Poisson distributed spike trains with varying average frequency are presented pre- and postsynaptically. When the frequency of spike presentation exceeds ≈30–40 Hz, only LTP is induced. PMID:12114531
Franosch, Jan-Moritz P; Urban, Sebastian; van Hemmen, J Leo
2013-12-01
How can an animal learn from experience? How can it train sensors, such as the auditory or tactile system, based on other sensory input such as the visual system? Supervised spike-timing-dependent plasticity (supervised STDP) is a possible answer. Supervised STDP trains one modality using input from another one as "supervisor." Quite complex time-dependent relationships between the senses can be learned. Here we prove that under very general conditions, supervised STDP converges to a stable configuration of synaptic weights leading to a reconstruction of primary sensory input.
Towards a general theory of neural computation based on prediction by single neurons.
Fiorillo, Christopher D
2008-10-01
Although there has been tremendous progress in understanding the mechanics of the nervous system, there has not been a general theory of its computational function. Here I present a theory that relates the established biophysical properties of single generic neurons to principles of Bayesian probability theory, reinforcement learning and efficient coding. I suggest that this theory addresses the general computational problem facing the nervous system. Each neuron is proposed to mirror the function of the whole system in learning to predict aspects of the world related to future reward. According to the model, a typical neuron receives current information about the state of the world from a subset of its excitatory synaptic inputs, and prior information from its other inputs. Prior information would be contributed by synaptic inputs representing distinct regions of space, and by different types of non-synaptic, voltage-regulated channels representing distinct periods of the past. The neuron's membrane voltage is proposed to signal the difference between current and prior information ("prediction error" or "surprise"). A neuron would apply a Hebbian plasticity rule to select those excitatory inputs that are the most closely correlated with reward but are the least predictable, since unpredictable inputs provide the neuron with the most "new" information about future reward. To minimize the error in its predictions and to respond only when excitation is "new and surprising," the neuron selects amongst its prior information sources through an anti-Hebbian rule. The unique inputs of a mature neuron would therefore result from learning about spatial and temporal patterns in its local environment, and by extension, the external world. 
Thus the theory describes how the structure of the mature nervous system could reflect the structure of the external world, and how the complexity and intelligence of the system might develop from a population of undifferentiated neurons, each implementing similar learning algorithms.
Bilinearity in Spatiotemporal Integration of Synaptic Inputs
Li, Songting; Liu, Nan; Zhang, Xiao-hui; Zhou, Douglas; Cai, David
2014-01-01
Neurons process information via integration of synaptic inputs from dendrites. Many experimental results demonstrate that dendritic integration can be highly nonlinear, yet few theoretical analyses have been performed to obtain a precise quantitative characterization analytically. Based on asymptotic analysis of a two-compartment passive cable model, given a pair of time-dependent synaptic conductance inputs, we derive a bilinear spatiotemporal dendritic integration rule. The summed somatic potential can be well approximated by the linear summation of the two postsynaptic potentials elicited separately, plus an additional bilinear term proportional to their product with a proportionality coefficient. The rule is valid for a pair of synaptic inputs of all types, including excitation-inhibition, excitation-excitation, and inhibition-inhibition. In addition, the rule is valid during the whole dendritic integration process for a pair of synaptic inputs with arbitrary input time differences and input locations. The coefficient is demonstrated to be nearly independent of the input strengths but is dependent on input times and input locations. This rule is then verified through simulation of a realistic pyramidal neuron model and in electrophysiological experiments of rat hippocampal CA1 neurons. The rule is further generalized to describe the spatiotemporal dendritic integration of multiple excitatory and inhibitory synaptic inputs. The integration of multiple inputs can be decomposed into the sum of all possible pairwise integrations, where each paired integration obeys the bilinear rule. This decomposition leads to a graph representation of dendritic integration, which can be viewed as functionally sparse. PMID:25521832
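A minimal numerical illustration of the bilinear rule (the alpha-function PSP shape and the coefficient value are assumed for illustration; the paper derives the coefficient from cable theory):

```python
import numpy as np

t = np.linspace(0.0, 100.0, 1001)              # time, ms

def psp(t, t0, amp, tau=10.0):
    """Alpha-shaped postsynaptic potential (illustrative shape, not
    fitted to the paper's two-compartment cable model)."""
    s = np.clip(t - t0, 0.0, None)
    return amp * (s / tau) * np.exp(1.0 - s / tau)

v1 = psp(t, t0=10.0, amp=4.0)      # EPSP elicited alone, mV
v2 = psp(t, t0=20.0, amp=-3.0)     # IPSP elicited alone, mV
k = -0.05                          # hypothetical proportionality coefficient

v_bilinear = v1 + v2 + k * v1 * v2  # the bilinear integration rule
v_linear = v1 + v2
# the correction term k*v1*v2 matters only where the two PSPs overlap
```

Because the correction vanishes wherever either PSP is zero, the rule reduces to linear summation for widely separated input times, consistent with the abstract's claim that the coefficient depends on input times and locations.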
Prospective Coding by Spiking Neurons
Brea, Johanni; Gaál, Alexisz Tamás; Senn, Walter
2016-01-01
Animals learn to make predictions, such as associating the sound of a bell with upcoming feeding or predicting a movement that a motor command is eliciting. How predictions are realized on the neuronal level and what plasticity rule underlies their learning is not well understood. Here we propose a biologically plausible synaptic plasticity rule to learn predictions on a single neuron level on a timescale of seconds. The learning rule allows a spiking two-compartment neuron to match its current firing rate to its own expected future discounted firing rate. For instance, if an originally neutral event is repeatedly followed by an event that elevates the firing rate of a neuron, the originally neutral event will eventually also elevate the neuron’s firing rate. The plasticity rule is a form of spike timing dependent plasticity in which a presynaptic spike followed by a postsynaptic spike leads to potentiation. Even if the plasticity window has a width of 20 milliseconds, associations on the time scale of seconds can be learned. We illustrate prospective coding with three examples: learning to predict a time varying input, learning to predict the next stimulus in a delayed paired-associate task and learning with a recurrent network to reproduce a temporally compressed version of a sequence. We discuss the potential role of the learning mechanism in classical trace conditioning. In the special case that the signal to be predicted encodes reward, the neuron learns to predict the discounted future reward and learning is closely related to the temporal difference learning algorithm TD(λ). PMID:27341100
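The stated relation to TD(λ) can be made concrete with a tabular sketch (a standard TD(λ) implementation on a toy chain; the paper's model is a spiking two-compartment neuron, not a lookup table):

```python
import numpy as np

def td_lambda(rewards, n_states, alpha=0.1, gamma=0.9, lam=0.8, episodes=200):
    """Tabular TD(lambda) on a linear chain: state i -> i+1, reward only
    at the end. Eligibility traces let early states acquire value across
    a long delay, analogous to the prospective code described above."""
    V = np.zeros(n_states)
    for _ in range(episodes):
        e = np.zeros(n_states)
        for s in range(n_states - 1):
            delta = rewards[s] + gamma * V[s + 1] - V[s]   # prediction error
            e[s] += 1.0                                    # accumulate trace
            V += alpha * delta * e
            e *= gamma * lam                               # decay traces
        delta = rewards[-1] - V[-1]                        # terminal step
        e[-1] += 1.0
        V += alpha * delta * e
    return V

rewards = [0.0, 0.0, 0.0, 1.0]
V = td_lambda(rewards, n_states=4)
# V approaches [0.9**3, 0.9**2, 0.9, 1.0]: discounted future reward
```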
Impaired associative learning in schizophrenia: behavioral and computational studies
Diwadkar, Vaibhav A.; Flaugher, Brad; Jones, Trevor; Zalányi, László; Ujfalussy, Balázs; Keshavan, Matcheri S.
2008-01-01
Associative learning is a central building block of human cognition and in large part depends on mechanisms of synaptic plasticity, memory capacity and fronto–hippocampal interactions. A disorder like schizophrenia is thought to be characterized by altered plasticity, and impaired frontal and hippocampal function. Understanding the expression of this dysfunction through appropriate experimental studies, and understanding the processes that may give rise to impaired behavior through biologically plausible computational models will help clarify the nature of these deficits. We present a preliminary computational model designed to capture learning dynamics in healthy control and schizophrenia subjects. Experimental data was collected on a spatial-object paired-associate learning task. The task evinces classic patterns of negatively accelerated learning in both healthy control subjects and patients, with patients demonstrating lower rates of learning than controls. Our rudimentary computational model of the task was based on biologically plausible assumptions, including the separation of dorsal/spatial and ventral/object visual streams, implementation of rules of learning, the explicit parameterization of learning rates (a plausible surrogate for synaptic plasticity), and learning capacity (a plausible surrogate for memory capacity). Reductions in learning dynamics in schizophrenia were well-modeled by reductions in learning rate and learning capacity. The synergy between experimental research and a detailed computational model of performance provides a framework within which to infer plausible biological bases of impaired learning dynamics in schizophrenia. PMID:19003486
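The explicit parameterization of learning rate and capacity can be sketched with a saturating learning curve (the functional form and parameter values here are illustrative, not the authors' fitted model):

```python
import numpy as np

def learning_curve(trials, rate, capacity):
    """Negatively accelerated learning: performance rises quickly at
    first, then saturates at 'capacity'. 'rate' is a surrogate for
    synaptic plasticity, 'capacity' for memory capacity."""
    t = np.arange(trials)
    return capacity * (1.0 - np.exp(-rate * t))

control = learning_curve(20, rate=0.35, capacity=0.95)
patient = learning_curve(20, rate=0.15, capacity=0.75)  # both reduced
# patients reach a lower asymptote more slowly, as in the reported data
```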
Energy-efficient neuron, synapse and STDP integrated circuits.
Cruz-Albrecht, Jose M; Yung, Michael W; Srinivasa, Narayan
2012-06-01
Ultra-low energy biologically-inspired neuron and synapse integrated circuits are presented. The synapse includes a spike timing dependent plasticity (STDP) learning rule circuit. These circuits have been designed, fabricated and tested using a 90 nm CMOS process. Experimental measurements demonstrate proper operation. The neuron and the synapse with STDP circuits have an energy consumption of around 0.4 pJ per spike and synaptic operation respectively.
Eguchi, Akihiro; Walters, Daniel; Peerenboom, Nele; Dury, Hannah; Fox, Elaine; Stringer, Simon
2017-03-01
[Correction Notice: An Erratum for this article was reported in Vol 85(3) of Journal of Consulting and Clinical Psychology (see record 2017-07144-002). In the article, there was an error in the Discussion section's first paragraph for Implications and Future Work. The in-text reference citation for Penton-Voak et al. (2013) was incorrectly listed as "Blumenfeld, Preminger, Sagi, and Tsodyks (2006)". All versions of this article have been corrected.] Objective: Cognitive bias modification (CBM) eliminates cognitive biases toward negative information and is efficacious in reducing depression recurrence, but the mechanisms behind the bias elimination are not fully understood. The present study investigated, through computer simulation of neural network models, the neural dynamics underlying the use of CBM in eliminating the negative biases in the way that depressed patients evaluate facial expressions. We investigated 2 new CBM methodologies using biologically plausible synaptic learning mechanisms-continuous transformation learning and trace learning-which guide learning by exploiting either the spatial or temporal continuity between visual stimuli presented during training. We first describe simulations with a simplified 1-layer neural network, and then we describe simulations in a biologically detailed multilayer neural network model of the ventral visual pathway. After training with either the continuous transformation learning rule or the trace learning rule, the 1-layer neural network eliminated biases in interpreting neutral stimuli as sad. The multilayer neural network trained with realistic face stimuli was also shown to be able to use continuous transformation learning or trace learning to reduce biases in the interpretation of neutral stimuli. The simulation results suggest 2 biologically plausible synaptic learning mechanisms, continuous transformation learning and trace learning, that may subserve CBM. 
The results are highly informative for the development of experimental protocols to produce optimal CBM training methodologies with human participants. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
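The trace rule used in these simulations can be sketched for a single output unit (parameter values and the normalization step are assumptions; the paper's networks are far larger):

```python
import numpy as np

rng = np.random.default_rng(0)

def trace_learning(stimuli, w, alpha=0.05, eta=0.8):
    """Trace rule: the postsynaptic trace y_bar carries activity across
    successive stimuli, binding temporally adjacent 'views' onto the
    same output weights. A sketch of the rule, not the full network."""
    y_bar = 0.0
    for x in stimuli:                     # x: input vector for one view
        y = float(w @ x)                  # postsynaptic rate
        y_bar = eta * y_bar + (1.0 - eta) * y
        w = w + alpha * y_bar * x         # Hebbian update with traced activity
        w = w / np.linalg.norm(w)         # keep weights bounded
    return w

views = [rng.random(10) for _ in range(5)]  # temporally contiguous views
w = trace_learning(views, rng.random(10))
```

Continuous transformation learning exploits spatial overlap between successive stimuli instead of the temporal trace, but both rules bind nearby views to a common representation, which is what eliminates the interpretation bias in the simulations.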
Endocannabinoid signaling and memory dynamics: A synaptic perspective.
Drumond, Ana; Madeira, Natália; Fonseca, Rosalina
2017-02-01
Memory acquisition is a key brain feature on which our human nature relies. Memories evolve over time. Initially after learning, memories are labile and sensitive to disruption by the interference of concurrent events. Later on, after consolidation, memories are resistant to disruption. However, reactivation of previously consolidated memories renders them again unstable and therefore susceptible to perturbation. Additionally, and depending on the characteristics of the stimuli, a parallel process may be initiated which ultimately leads to the extinction of the previously acquired response. This dynamic aspect of memory maintenance opens the possibility of updating previously acquired memories, but it also creates several conceptual challenges. What is the time window for memory updating? What determines whether reconsolidation or extinction is triggered? In this review, we re-examine the relationship between consolidation, reconsolidation and extinction, aiming for a unifying view of memory dynamics. Since cellular models of memory share common principles, we present evidence that similar rules apply to the maintenance of synaptic plasticity. Recently, a new function of the endocannabinoid (eCB) signaling system has been described for associative forms of synaptic plasticity in amygdala synapses. The eCB system has emerged as a key modulator of memory dynamics by adjusting the outcome to stimulus intensity. We propose a key function of eCB in discriminative forms of learning by restricting associative plasticity in amygdala synapses. Since many neuropsychiatric disorders are associated with a dysregulation of memory dynamics, understanding the rules underlying memory maintenance paves the way to better clinical interventions. Copyright © 2016 Elsevier Inc. All rights reserved.
The Brain as an Efficient and Robust Adaptive Learner.
Denève, Sophie; Alemi, Alireza; Bourdoukan, Ralph
2017-06-07
Understanding how the brain learns to compute functions reliably, efficiently, and robustly with noisy spiking activity is a fundamental challenge in neuroscience. Most sensory and motor tasks can be described as dynamical systems and could presumably be learned by adjusting connection weights in a recurrent biological neural network. However, this is greatly complicated by the credit assignment problem for learning in recurrent networks: the contribution of each connection to the global output error cannot be determined from quantities locally accessible to the synapse. Combining tools from adaptive control theory and efficient coding theories, we propose that neural circuits can indeed learn complex dynamic tasks with local synaptic plasticity rules as long as they combine two experimentally established neural mechanisms. First, they should receive top-down feedback driving both their activity and their synaptic plasticity. Second, inhibitory interneurons should maintain a tight balance between excitation and inhibition in the circuit. The resulting networks could learn arbitrary dynamical systems and produce irregular spike trains as variable as those observed experimentally. Yet, this variability in single neurons may hide an extremely efficient and robust computation at the population level. Copyright © 2017 Elsevier Inc. All rights reserved.
Spontaneous neuronal activity as a self-organized critical phenomenon
NASA Astrophysics Data System (ADS)
de Arcangelis, L.; Herrmann, H. J.
2013-01-01
Neuronal avalanches are a novel mode of activity in neuronal networks, experimentally found in vitro and in vivo, and exhibit a robust critical behaviour. Avalanche activity can be modelled within the self-organized criticality framework, including threshold firing, refractory period and activity-dependent synaptic plasticity. The size and duration distributions confirm that the system acts in a critical state, whose scaling behaviour is very robust. Next, we discuss the temporal organization of neuronal avalanches. This is given by the alternation between states of high and low activity, named up and down states, leading to a balance between excitation and inhibition controlled by a single parameter. During these periods both the single neuron state and the network excitability level, keeping memory of past activity, are tuned by homeostatic mechanisms. Finally, we verify whether a system with no characteristic response can ever learn in a controlled and reproducible way. Learning in the model occurs via plastic adaptation of synaptic strengths by a non-uniform negative feedback mechanism. Learning is a truly collective process and the learning dynamics exhibits universal features. Even complex rules can be learned provided that the plastic adaptation is sufficiently slow.
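The threshold-firing core of such avalanche models can be caricatured as follows (no refractory period, plasticity, or up/down states; all parameters are assumed for illustration):

```python
import random

random.seed(1)

def run_avalanche(v, thresh=1.0, fanout=2, charge=0.45):
    """Propagate one avalanche in a toy threshold-firing network: a
    random unit is driven over threshold; every spike sends charge to
    'fanout' random units, which may spike in turn. Returns avalanche
    size. Since fanout*charge < thresh, avalanches always terminate."""
    n = len(v)
    active = [random.randrange(n)]
    v[active[0]] = thresh                   # external drive
    size = 0
    while active:
        i = active.pop()
        if v[i] >= thresh:
            v[i] = 0.0                      # spike and reset
            size += 1
            for j in random.sample(range(n), fanout):
                v[j] += charge
                if v[j] >= thresh:
                    active.append(j)
    return size

v = [random.uniform(0.0, 0.9) for _ in range(500)]
sizes = [run_avalanche(v) for _ in range(200)]
# each avalanche comprises at least one spike
```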
Theory of optimal balance predicts and explains the amplitude and decay time of synaptic inhibition
Kim, Jaekyung K.; Fiorillo, Christopher D.
2017-01-01
Synaptic inhibition counterbalances excitation, but it is not known what constitutes optimal inhibition. We previously proposed that perfect balance is achieved when the peak of an excitatory postsynaptic potential (EPSP) is exactly at spike threshold, so that the slightest variation in excitation determines whether a spike is generated. Using simulations, we show that the optimal inhibitory postsynaptic conductance (IPSG) increases in amplitude and decay rate as synaptic excitation increases from 1 to 800 Hz. As further proposed by theory, we show that optimal IPSG parameters can be learned through anti-Hebbian rules. Finally, we compare our theoretical optima to published experimental data from 21 types of neurons, in which rates of synaptic excitation and IPSG decay times vary by factors of about 100 (5–600 Hz) and 50 (1–50 ms), respectively. From an infinite range of possible decay times, theory predicted experimental decay times within less than a factor of 2. Across a distinct set of 15 types of neuron recorded in vivo, theory predicted the amplitude of synaptic inhibition within a factor of 1.7. Thus, the theory can explain biophysical quantities from first principles. PMID:28281523
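The anti-Hebbian tuning of inhibition can be caricatured with a one-parameter toy (subtractive inhibition and the learning rate are assumptions; the paper optimizes IPSG amplitude and decay time in full simulations):

```python
def learn_optimal_inhibition(epsp_peaks, g_inh, theta=1.0, lr=0.05):
    """Anti-Hebbian tuning of inhibition: when the inhibited EPSP peak
    exceeds threshold (a spike), inhibition grows; when it falls short,
    inhibition shrinks. A toy one-parameter version of the idea."""
    for peak in epsp_peaks:
        v = peak - g_inh                   # crude subtractive inhibition
        if v >= theta:                     # spike -> strengthen inhibition
            g_inh += lr * (v - theta)
        else:                              # silence -> weaken inhibition
            g_inh -= lr * (theta - v)
    return g_inh

g = 0.0
for _ in range(500):
    g = learn_optimal_inhibition([2.0], g)
# at convergence the inhibited peak 2.0 - g sits exactly at threshold 1.0
```

The fixed point places the EPSP peak at spike threshold, which is the paper's definition of perfect balance: the slightest variation in excitation then decides whether a spike occurs.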
Functional requirements for reward-modulated spike-timing-dependent plasticity.
Frémaux, Nicolas; Sprekeler, Henning; Gerstner, Wulfram
2010-10-06
Recent experiments have shown that spike-timing-dependent plasticity is influenced by neuromodulation. We derive theoretical conditions for successful learning of reward-related behavior for a large class of learning rules where Hebbian synaptic plasticity is conditioned on a global modulatory factor signaling reward. We show that all learning rules in this class can be separated into a term that captures the covariance of neuronal firing and reward and a second term that represents the influence of unsupervised learning. The unsupervised term, which is, in general, detrimental for reward-based learning, can be suppressed if the neuromodulatory signal encodes the difference between the reward and the expected reward, but only if the expected reward is calculated for each task and stimulus separately. If several tasks are to be learned simultaneously, the nervous system needs an internal critic that is able to predict the expected reward for arbitrary stimuli. We show that, with a critic, reward-modulated spike-timing-dependent plasticity is capable of learning motor trajectories with a temporal resolution of tens of milliseconds. The relation to temporal difference learning, the relevance of block-based learning paradigms, and the limitations of learning with a critic are discussed.
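The role of the critic can be sketched in a minimal three-factor bandit (schematic rates instead of spikes; the task, eligibility scheme, and all parameters are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def r_stdp_with_critic(trials=2000, eta=0.05, eta_v=0.1):
    """Reward-modulated Hebbian learning with a per-stimulus critic: the
    modulatory factor is R - V(s), so the detrimental unsupervised term
    averages out. A schematic 2-stimulus, 2-action bandit."""
    w = np.zeros((2, 2))           # action preferences per stimulus
    V = np.zeros(2)                # critic: expected reward per stimulus
    for _ in range(trials):
        s = int(rng.integers(2))
        p = 1.0 / (1.0 + np.exp(-(w[s, 1] - w[s, 0])))   # P(choose a=1)
        a = int(rng.random() < p)
        R = 1.0 if a == s else 0.0           # correct action = stimulus id
        elig = np.zeros(2)
        elig[a] = 1.0                        # Hebbian eligibility
        w[s] += eta * (R - V[s]) * elig      # three-factor update
        V[s] += eta_v * (R - V[s])           # critic update
    return w, V

w, V = r_stdp_with_critic()
# each stimulus comes to favor its rewarded action
```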
Heteroassociative storage of hippocampal pattern sequences in the CA3 subregion
Recio, Renan S.; Reyes, Marcelo B.
2018-01-01
Background: Recent research suggests that the CA3 subregion of the hippocampus has properties of both an autoassociative network, due to its ability to complete partial cues, tolerate noise, and store associations between memories, and a heteroassociative one, due to its ability to store and retrieve sequences of patterns. Although there are several computational models of the CA3 as an autoassociative network, more detailed evaluations of its heteroassociative properties are missing. Methods: We developed a model of the CA3 subregion containing 10,000 integrate-and-fire neurons with both recurrent excitatory and inhibitory connections, which exhibits coupled oscillations in the gamma and theta ranges. We stored thousands of pattern sequences using a heteroassociative learning rule with competitive synaptic scaling. Results: We showed that a purely heteroassociative network model can (i) retrieve pattern sequences from partial cues with external noise and incomplete connectivity, (ii) achieve homeostasis in the number of connections per neuron when many patterns are stored using synaptic scaling, and (iii) continuously update the set of retrievable patterns, guaranteeing that the last stored patterns can be retrieved and older ones forgotten. Discussion: Heteroassociative networks with synaptic scaling rules seem sufficient to achieve many desirable features regarding connectivity homeostasis, pattern sequence retrieval, noise tolerance and updating of the set of retrievable patterns. PMID:29312826
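The storage-and-recall scheme can be sketched at small scale (outer-product storage and winner-take-all competition are simplifications of the paper's 10,000-neuron spiking model with synaptic scaling):

```python
import numpy as np

rng = np.random.default_rng(2)

N, steps = 200, 5
patterns = (rng.random((steps, N)) < 0.1).astype(float)  # sparse patterns

# heteroassociative storage: link each pattern to its successor
W = np.zeros((N, N))
for t in range(steps - 1):
    W += np.outer(patterns[t + 1], patterns[t])

def recall(cue, k):
    """Retrieve the next k patterns from a cue, keeping the most active
    units at each step (a stand-in for inhibitory competition)."""
    n_active = int(patterns[0].sum())
    out, x = [], cue
    for _ in range(k):
        h = W @ x
        x = np.zeros(N)
        x[np.argsort(h)[-n_active:]] = 1.0   # winner-take-all threshold
        out.append(x)
    return out

cue = patterns[0] * (rng.random(N) < 0.8)    # partial, noisy cue
seq = recall(cue, steps - 1)
# each recalled step overlaps its target pattern far more than the others
```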
Biologically Inspired SNN for Robot Control.
Nichols, Eric; McDaid, Liam J; Siddique, Nazmul
2013-02-01
This paper proposes a spiking-neural-network-based robot controller inspired by the control structures of biological systems. Information is routed through the network using facilitating dynamic synapses with short-term plasticity. Learning occurs through long-term synaptic plasticity which is implemented using the temporal difference learning rule to enable the robot to learn to associate the correct movement with the appropriate input conditions. The network self-organizes to provide memories of environments that the robot encounters. A Pioneer robot simulator with laser and sonar proximity sensors is used to verify the performance of the network with a wall-following task, and the results are presented.
Wilmes, Katharina Anna; Schleimer, Jan-Hendrik; Schreiber, Susanne
2017-04-01
Inhibition is known to influence the forward-directed flow of information within neurons. However, regulation of backward-directed signals, such as backpropagating action potentials (bAPs), can also enrich the functional repertoire of local circuits. Inhibitory control of bAP spread, for example, can provide a switch for the plasticity of excitatory synapses. Although such a mechanism is possible, it requires a precise timing of inhibition to annihilate bAPs without impairing forward-directed excitatory information flow. Here, we propose a specific learning rule for inhibitory synapses to automatically generate the correct timing to gate bAPs in pyramidal cells when embedded in a local circuit of feedforward inhibition. Based on computational modeling of multi-compartmental neurons with physiological properties, we demonstrate that a learning rule with anti-Hebbian shape can establish the required temporal precision. In contrast to classical spike-timing dependent plasticity of excitatory synapses, the proposed inhibitory learning mechanism does not necessarily require the definition of an upper bound of synaptic weights because of its tendency to self-terminate once annihilation of bAPs has been reached. Our study provides a functional context in which one of the many time-dependent learning rules that have been observed experimentally, specifically a learning rule with anti-Hebbian shape, is assigned a relevant role for inhibitory synapses. Moreover, the described mechanism is compatible with an upregulation of excitatory plasticity by disinhibition. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Berthet, Pierre; Hellgren-Kotaleski, Jeanette; Lansner, Anders
2012-01-01
Several studies have shown a strong involvement of the basal ganglia (BG) in action selection and dopamine dependent learning. The dopaminergic signal to striatum, the input stage of the BG, has been commonly described as coding a reward prediction error (RPE), i.e., the difference between the predicted and actual reward. The RPE has been hypothesized to be critical in the modulation of the synaptic plasticity in cortico-striatal synapses in the direct and indirect pathway. We developed an abstract computational model of the BG, with a dual pathway structure functionally corresponding to the direct and indirect pathways, and compared its behavior to biological data as well as other reinforcement learning models. The computations in our model are inspired by Bayesian inference, and the synaptic plasticity changes depend on a three-factor Hebbian–Bayesian learning rule based on co-activation of pre- and post-synaptic units and on the value of the RPE. The model builds on a modified Actor-Critic architecture and implements the direct (Go) and the indirect (NoGo) pathway, as well as the reward prediction (RP) system, acting in a complementary fashion. We investigated the performance of the model system when different configurations of the Go, NoGo, and RP system were utilized, e.g., using only the Go, NoGo, or RP system, or combinations of those. Learning performance was investigated in several types of learning paradigms, such as learning-relearning, successive learning, stochastic learning, reversal learning and a two-choice task. The RPE and the activity of the model during learning were similar to monkey electrophysiological and behavioral data. Our results, however, show that there is not a unique best way to configure this BG model to handle well all the learning paradigms tested. We thus suggest that an agent might dynamically configure its action selection mode, possibly depending on task characteristics and also on how much time is available. PMID:23060764
Suen, Jonathan Y; Navlakha, Saket
2017-05-01
Controlling the flow and routing of data is a fundamental problem in many distributed networks, including transportation systems, integrated circuits, and the Internet. In the brain, synaptic plasticity rules have been discovered that regulate network activity in response to environmental inputs, which enable circuits to be stable yet flexible. Here, we develop a new neuro-inspired model for network flow control that depends only on modifying edge weights in an activity-dependent manner. We show how two fundamental plasticity rules, long-term potentiation and long-term depression, can be cast as a distributed gradient descent algorithm for regulating traffic flow in engineered networks. We then characterize, both by simulation and analytically, how different forms of edge-weight-update rules affect network routing efficiency and robustness. We find a close correspondence between certain classes of synaptic weight update rules derived experimentally in the brain and rules commonly used in engineering, suggesting common principles to both.
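The gradient-descent reading of LTP/LTD can be sketched for parallel edges (proportional routing and the quadratic load objective are assumptions for illustration, not the paper's exact rule):

```python
def route_loads(demand, weights):
    """Traffic splits across parallel edges in proportion to edge weight."""
    total = sum(weights)
    return [demand * w / total for w in weights]

def plasticity_step(weights, loads, target, eta=0.1):
    """LTP/LTD as distributed gradient descent: each edge nudges its
    weight opposite to its local load error, using only traffic it
    observes itself (no global controller)."""
    return [max(w - eta * (l - target), 0.01) for w, l in zip(weights, loads)]

demand, weights = 3.0, [2.0, 1.0, 0.5]
for _ in range(200):
    loads = route_loads(demand, weights)
    weights = plasticity_step(weights, loads, target=demand / 3)
loads = route_loads(demand, weights)
# loads converge to the uniform target of 1.0 per edge
```

The congested edge is depressed and the under-used edge potentiated until utilization equalizes, mirroring the stability-with-flexibility property the abstract attributes to activity-dependent synaptic rules.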
Potjans, Wiebke; Morrison, Abigail; Diesmann, Markus
2010-01-01
A major puzzle in the field of computational neuroscience is how to relate system-level learning in higher organisms to synaptic plasticity. Recently, plasticity rules depending not only on pre- and post-synaptic activity but also on a third, non-local neuromodulatory signal have emerged as key candidates to bridge the gap between the macroscopic and the microscopic level of learning. Crucial insights into this topic are expected to be gained from simulations of neural systems, as these allow the simultaneous study of the multiple spatial and temporal scales that are involved in the problem. In particular, synaptic plasticity can be studied during the whole learning process, i.e., on a time scale of minutes to hours and across multiple brain areas. Implementing neuromodulated plasticity in large-scale network simulations where the neuromodulatory signal is dynamically generated by the network itself is challenging, because the network structure is commonly defined purely by the connectivity graph without explicit reference to the embedding of the nodes in physical space. Furthermore, the simulation of networks with realistic connectivity entails the use of distributed computing. A neuromodulated synapse must therefore be informed in an efficient way about the neuromodulatory signal, which is typically generated by a population of neurons located on different machines than either the pre- or post-synaptic neuron. Here, we develop a general framework to solve the problem of implementing neuromodulated plasticity in a time-driven distributed simulation, without reference to a particular implementation language, neuromodulator, or neuromodulated plasticity mechanism. We implement our framework in the simulator NEST and demonstrate excellent scaling up to 1024 processors for simulations of a recurrent network incorporating neuromodulated spike-timing dependent plasticity. PMID:21151370
Spike-Based Bayesian-Hebbian Learning of Temporal Sequences
Lindén, Henrik; Lansner, Anders
2016-01-01
Many cognitive and motor functions are enabled by the temporal representation and processing of stimuli, but it remains an open issue how neocortical microcircuits can reliably encode and replay such sequences of information. To better understand this, a modular attractor memory network is proposed in which meta-stable sequential attractor transitions are learned through changes to synaptic weights and intrinsic excitabilities via the spike-based Bayesian Confidence Propagation Neural Network (BCPNN) learning rule. We find that the formation of distributed memories, embodied by increased periods of firing in pools of excitatory neurons, together with asymmetrical associations between these distinct network states, can be acquired through plasticity. The model’s feasibility is demonstrated using simulations of adaptive exponential integrate-and-fire model neurons (AdEx). We show that the learning and speed of sequence replay depends on a confluence of biophysically relevant parameters including stimulus duration, level of background noise, ratio of synaptic currents, and strengths of short-term depression and adaptation. Moreover, sequence elements are shown to flexibly participate multiple times in the sequence, suggesting that spiking attractor networks of this type can support an efficient combinatorial code. The model provides a principled approach towards understanding how multiple interacting plasticity mechanisms can coordinate hetero-associative learning in unison. PMID:27213810
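The Bayesian-Hebbian weight at the heart of BCPNN can be sketched in its rate-based batch form (the paper uses a spike-based, trace-filtered version with intrinsic excitability terms):

```python
import math

def bcpnn_weights(patterns, eps=1e-4):
    """BCPNN-style weights from binary activity: w_ij = log(P_ij /
    (P_i * P_j)), with probabilities estimated by counting. Positive
    weights mark units that fire together more than chance."""
    n = len(patterns)
    units = len(patterns[0])
    p = [sum(x[i] for x in patterns) / n + eps for i in range(units)]
    w = [[0.0] * units for _ in range(units)]
    for i in range(units):
        for j in range(units):
            pij = sum(x[i] * x[j] for x in patterns) / n + eps
            w[i][j] = math.log(pij / (p[i] * p[j]))
    return w

# units 0 and 1 always co-active; unit 2 independent of both
pats = [[1, 1, 0], [1, 1, 1], [0, 0, 1], [0, 0, 0]]
w = bcpnn_weights(pats)
# the correlated pair gets w ~ log 2 > 0; the independent pair ~ 0
```

Sequence learning in the paper arises when such weights are made temporally asymmetric, so that activity in one attractor biases the network toward its learned successor.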
Maass, Wolfgang
2008-01-01
Reward-modulated spike-timing-dependent plasticity (STDP) has recently emerged as a candidate for a learning rule that could explain how behaviorally relevant adaptive changes in complex networks of spiking neurons could be achieved in a self-organizing manner through local synaptic plasticity. However, the capabilities and limitations of this learning rule could so far only be tested through computer simulations. This article provides tools for an analytic treatment of reward-modulated STDP, which allows us to predict under which conditions reward-modulated STDP will achieve a desired learning effect. These analytical results imply that neurons can learn through reward-modulated STDP to classify not only spatial but also temporal firing patterns of presynaptic neurons. They also can learn to respond to specific presynaptic firing patterns with particular spike patterns. Finally, the resulting learning theory predicts that even difficult credit-assignment problems, where it is very hard to tell which synaptic weights should be modified in order to increase the global reward for the system, can be solved in a self-organizing manner through reward-modulated STDP. This yields an explanation for a fundamental experimental result on biofeedback in monkeys by Fetz and Baker. In this experiment monkeys were rewarded for increasing the firing rate of a particular neuron in the cortex and were able to solve this extremely difficult credit assignment problem. Our model for this experiment relies on a combination of reward-modulated STDP with variable spontaneous firing activity. Hence it also provides a possible functional explanation for trial-to-trial variability, which is characteristic for cortical networks of neurons but has no analogue in currently existing artificial computing systems. 
In addition our model demonstrates that reward-modulated STDP can be applied to all synapses in a large recurrent neural network without endangering the stability of the network dynamics. PMID:18846203
NASA Astrophysics Data System (ADS)
Hsieh, Cheng-Chih; Roy, Anupam; Chang, Yao-Feng; Shahrjerdi, Davood; Banerjee, Sanjay K.
2016-11-01
Nanoscale metal oxide memristors have potential in the development of brain-inspired computing systems that are scalable and efficient. In such systems, memristors represent the native electronic analogues of the biological synapses. In this work, we show cerium oxide based bilayer memristors that are forming-free, low-voltage (~|0.8 V|), energy-efficient (full on/off switching at ~8 pJ with 20 ns pulses, intermediate states switching at ~fJ), and reliable. Furthermore, pulse measurements reveal the analog nature of the memristive device; that is, it can directly be programmed to intermediate resistance states. Leveraging this finding, we demonstrate spike-timing-dependent plasticity, a spike-based Hebbian learning rule. In those experiments, the memristor exhibits a marked change in the normalized synaptic strength (>30 times), when the pre- and post-synaptic neural spikes overlap. This demonstration is an important step towards the physical construction of high density and high connectivity neural networks.
Rapid, parallel path planning by propagating wavefronts of spiking neural activity
Ponulak, Filip; Hopfield, John J.
2013-01-01
Efficient path planning and navigation is critical for animals, robotics, logistics and transportation. We study a model in which spatial navigation problems can rapidly be solved in the brain by parallel mental exploration of alternative routes using propagating waves of neural activity. A wave of spiking activity propagates through a hippocampus-like network, altering the synaptic connectivity. The resulting vector field of synaptic change then guides a simulated animal to the appropriate selected target locations. We demonstrate that the navigation problem can be solved using realistic, local synaptic plasticity rules during a single passage of a wavefront. Our model can find optimal solutions for competing possible targets or learn and navigate in multiple environments. The model provides a hypothesis on the possible computational mechanisms for optimal path planning in the brain; at the same time, it is useful for neuromorphic implementations, where the parallelism of information processing proposed here can be fully harnessed in hardware. PMID:23882213
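The algorithmic core — a wave spreading from the target stamps each location with its arrival time, and the agent then descends that field — corresponds to a breadth-first flood fill on a grid. A minimal sketch of that correspondence (our simplification; the paper's network uses spiking dynamics and synaptic plasticity, not explicit distances):

```python
from collections import deque

def wavefront_distances(grid, target):
    """Propagate a wave (BFS) from the target through free cells ('.'),
    recording arrival time; walls ('#') block propagation."""
    rows, cols = len(grid), len(grid[0])
    dist = {target: 0}
    queue = deque([target])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == '.' \
                    and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return dist

def descend(dist, start):
    """Follow the arrival-time gradient (the analogue of the synaptic
    vector field) from start down to the target."""
    path = [start]
    while dist[path[-1]] > 0:
        r, c = path[-1]
        nxt = min(((r + dr, c + dc)
                   for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                   if (r + dr, c + dc) in dist), key=dist.get)
        path.append(nxt)
    return path
```

Descending the arrival-time field from any free cell yields a shortest path around obstacles after a single wave passage.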
Pattern classification by memristive crossbar circuits using ex situ and in situ training.
Alibart, Fabien; Zamanidoost, Elham; Strukov, Dmitri B
2013-01-01
Memristors are memory resistors that promise the efficient implementation of synaptic weights in artificial neural networks. Whereas demonstrations of the synaptic operation of memristors already exist, the implementation of even simple networks is more challenging and has yet to be reported. Here we demonstrate pattern classification using a single-layer perceptron network implemented with a memristive crossbar circuit and trained using the perceptron learning rule by ex situ and in situ methods. In the first case, synaptic weights, which are realized as conductances of titanium dioxide memristors, are calculated on a precursor software-based network and then imported sequentially into the crossbar circuit. In the second case, training is implemented in situ, so the weights are adjusted in parallel. Both methods work satisfactorily despite significant variations in the switching behaviour of the memristors. These results give hope for the anticipated efficient implementation of artificial neuromorphic networks and pave the way for dense, high-performance information processing systems.
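The perceptron learning rule used for both training methods fits in a few lines; in the hardware version, the weights w below would be realized as memristor conductances. A schematic software analogue (not the authors' circuit-level procedure):

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Single-layer perceptron: the rule nudges weights only on
    misclassification, by err * input (targets are 0 or 1)."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, target in samples:
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - y
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

For a linearly separable task such as AND the rule converges in a few epochs; in the in situ variant each weight update would be applied as a programming pulse to the corresponding memristor.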
A new supervised learning algorithm for spiking neurons.
Xu, Yan; Zeng, Xiaoqin; Zhong, Shuiming
2013-06-01
The purpose of supervised learning with temporal encoding for spiking neurons is to make the neurons emit a specific spike train encoded by the precise firing times of spikes. If only running time is considered, supervised learning for a spiking neuron is equivalent to distinguishing the times of desired output spikes from all other times during the running of the neuron by adjusting synaptic weights, which can be regarded as a classification problem. Based on this idea, this letter proposes a new supervised learning method for spiking neurons with temporal encoding; it first transforms the supervised learning into a classification problem and then solves the problem by using the perceptron learning rule. The experimental results show that the proposed method has higher learning accuracy and efficiency than existing learning methods, so it is more powerful for solving complex and real-time problems.
Miner, Daniel; Triesch, Jochen
2016-01-01
Understanding the structure and dynamics of cortical connectivity is vital to understanding cortical function. Experimental data strongly suggest that local recurrent connectivity in the cortex is significantly non-random, exhibiting, for example, above-chance bidirectionality and an overrepresentation of certain triangular motifs. Additional evidence suggests a significant distance dependency to connectivity over a local scale of a few hundred microns, and particular patterns of synaptic turnover dynamics, including a heavy-tailed distribution of synaptic efficacies, a power law distribution of synaptic lifetimes, and a tendency for stronger synapses to be more stable over time. Understanding how many of these non-random features simultaneously arise would provide valuable insights into the development and function of the cortex. While previous work has modeled some of the individual features of local cortical wiring, there is no model that begins to comprehensively account for all of them. We present a spiking network model of a rodent Layer 5 cortical slice which, via the interactions of a few simple biologically motivated intrinsic, synaptic, and structural plasticity mechanisms, qualitatively reproduces these non-random effects when combined with simple topological constraints. Our model suggests that mechanisms of self-organization arising from a small number of plasticity rules provide a parsimonious explanation for numerous experimentally observed non-random features of recurrent cortical wiring. Interestingly, similar mechanisms have been shown to endow recurrent networks with powerful learning abilities, suggesting that these mechanisms are central to understanding both the structure and function of cortical synaptic wiring. PMID:26866369
Information-driven self-organization: the dynamical system approach to autonomous robot behavior.
Ay, Nihat; Bernigau, Holger; Der, Ralf; Prokopenko, Mikhail
2012-09-01
In recent years, information theory has come into the focus of researchers interested in the sensorimotor dynamics of both robots and living beings. One root of these approaches is the idea that living beings are information processing systems and that the optimization of these processes should be an evolutionary advantage. Apart from these more fundamental questions, there has recently been much interest in the question of how a robot can be equipped with an internal drive for innovation or curiosity that may serve as a drive for an open-ended, self-determined development of the robot. The success of these approaches depends essentially on the choice of a convenient measure of the information. This article studies in some detail the use of the predictive information (PI), also called excess entropy or effective measure complexity, of the sensorimotor process. The PI of a process quantifies the total information of past experience that can be used for predicting future events. However, the application of information-theoretic measures in robotics is mostly restricted to the case of a finite, discrete state-action space. This article aims at applying the PI in the dynamical systems approach to robot control. We study linear systems as a first step and derive exact results for the PI together with explicit learning rules for the parameters of the controller. Interestingly, these learning rules are of Hebbian nature and local in the sense that the synaptic update is given by the product of activities available directly at the pertinent synaptic ports. The general findings are exemplified by a number of case studies. In particular, in a two-dimensional system, designed to mimic embodied systems with latent oscillatory locomotion patterns, it is shown that maximizing the PI means recognizing and amplifying the latent modes of the robotic system.
This and many other examples show that the learning rules derived from the maximum PI principle are a versatile tool for the self-organization of behavior in complex robotic systems.
[Involvement of aquaporin-4 in synaptic plasticity, learning and memory].
Wu, Xin; Gao, Jian-Feng
2017-06-25
Aquaporin-4 (AQP-4) is the predominant water channel in the central nervous system (CNS) and is primarily expressed in astrocytes. Astrocytes have generally been believed to play important roles in regulating synaptic plasticity and information processing. However, the role of AQP-4 in regulating synaptic plasticity, learning and memory, and cognitive function is only beginning to be investigated. It is well known that synaptic plasticity is the prime candidate for mediating learning and memory. Long-term potentiation (LTP) and long-term depression (LTD) are two forms of synaptic plasticity, and they share some but not all properties and mechanisms. The hippocampus is a part of the limbic system that is particularly important in the regulation of learning and memory. This article reviews recent research progress on the function of AQP-4 in synaptic plasticity, learning and memory, and proposes a possible role for AQP-4 as a new target in the treatment of cognitive dysfunction.
Garagnani, Max; Wennekers, Thomas; Pulvermüller, Friedemann
2009-01-01
Current cognitive theories postulate either localist representations of knowledge or fully overlapping, distributed ones. We use a connectionist model that closely replicates known anatomical properties of the cerebral cortex and neurophysiological principles to show that Hebbian learning in a multi-layer neural network leads to memory traces (cell assemblies) that are both distributed and anatomically distinct. Taking the example of word learning based on action-perception correlation, we document mechanisms underlying the emergence of these assemblies, especially (i) the recruitment of neurons and consolidation of connections defining the kernel of the assembly along with (ii) the pruning of the cell assembly’s halo (consisting of very weakly connected cells). We found that, whereas a learning rule mapping covariance led to significant overlap and merging of assemblies, a neurobiologically grounded synaptic plasticity rule with fixed LTP/LTD thresholds produced minimal overlap and prevented merging, exhibiting competitive learning behaviour. Our results are discussed in light of current theories of language and memory. As simulations with neurobiologically realistic neural networks demonstrate here spontaneous emergence of lexical representations that are both cortically dispersed and anatomically distinct, both localist and distributed cognitive accounts receive partial support. PMID:20396612
Higher-order neural networks, Pólya polynomials, and Fermi cluster diagrams
NASA Astrophysics Data System (ADS)
Kürten, K. E.; Clark, J. W.
2003-09-01
The problem of controlling higher-order interactions in neural networks is addressed with techniques commonly applied in the cluster analysis of quantum many-particle systems. For multineuron synaptic weights chosen according to a straightforward extension of the standard Hebbian learning rule, we show that higher-order contributions to the stimulus felt by a given neuron can be readily evaluated via Pólya's combinatoric group-theoretical approach or equivalently by exploiting a precise formal analogy with fermion diagrammatics.
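A "straightforward extension of the standard Hebbian learning rule" to multineuron weights can be written down directly, before any combinatoric machinery: third-order weights are pattern averages of triple products, and they contribute product terms to the stimulus felt by a neuron. A toy sketch for ±1 units (our illustration, not the paper's Pólya-based evaluation):

```python
from itertools import combinations

def hebb_weights(patterns):
    """Pairwise Hebbian weights plus a third-order extension: w[(i, j)] and
    w[(i, j, k)] are pattern averages of products of +/-1 unit activities."""
    P, N = len(patterns), len(patterns[0])
    w2 = {(i, j): sum(p[i] * p[j] for p in patterns) / P
          for i, j in combinations(range(N), 2)}
    w3 = {(i, j, k): sum(p[i] * p[j] * p[k] for p in patterns) / P
          for i, j, k in combinations(range(N), 3)}
    return w2, w3

def local_field(i, state, w2, w3):
    """Stimulus felt by neuron i, including the higher-order contributions."""
    h = sum(w2[tuple(sorted((i, j)))] * state[j]
            for j in range(len(state)) if j != i)
    for j, k in combinations([m for m in range(len(state)) if m != i], 2):
        h += w3[tuple(sorted((i, j, k)))] * state[j] * state[k]
    return h
```

For a single stored pattern, both the pairwise and the third-order contributions align the local field with the pattern, so the pattern is a fixed point under sign-threshold dynamics.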
Dynamically stable associative learning: a neurobiologically based ANN and its applications
NASA Astrophysics Data System (ADS)
Vogl, Thomas P.; Blackwell, Kim L.; Barbour, Garth; Alkon, Daniel L.
1992-07-01
Most currently popular artificial neural networks (ANN) are based on conceptions of neuronal properties that date back to the 1940s and 50s, i.e., to the ideas of McCulloch, Pitts, and Hebb. Dystal is an ANN based on current knowledge of neurobiology at the cellular and subcellular level. Networks based on these neurobiological insights exhibit the following advantageous properties: (1) A theoretical storage capacity of bN non-orthogonal memories, where N is the number of output neurons sharing common inputs and b is the number of distinguishable (gray shade) levels. (2) The ability to learn, store, and recall associations among noisy, arbitrary patterns. (3) A local synaptic learning rule (learning depends neither on the output of the post-synaptic neuron nor on a global error term), some of whose consequences are: (4) Feed-forward, lateral, and feed-back connections (as well as time-sensitive connections) are possible without alteration of the learning algorithm; (5) Storage allocation (patch creation) proceeds dynamically as associations are learned (self-organizing); (6) The number of training set presentations required for learning is small (< 10) and does not change with pattern size or content; and (7) The network exhibits monotonic convergence, reaching equilibrium (fully trained) values without oscillating. The performance of Dystal on pattern completion tasks such as faces with different expressions and/or corrupted by noise, and on reading hand-written digits (98% accuracy) and hand-printed Japanese Kanji (90% accuracy) is demonstrated.
Learning by stimulation avoidance: A principle to control spiking neural networks dynamics
Sinapayen, Lana; Masumori, Atsushi; Ikegami, Takashi
2017-01-01
Learning based on networks of real neurons, and learning based on biologically inspired models of neural networks, have yet to find general learning rules leading to widespread applications. In this paper, we argue for the existence of a principle that allows steering the dynamics of a biologically inspired neural network. Using carefully timed external stimulation, the network can be driven towards a desired dynamical state. We term this principle “Learning by Stimulation Avoidance” (LSA). We demonstrate through simulation that the minimal sufficient conditions leading to LSA in artificial networks are also sufficient to reproduce learning results similar to those obtained in biological neurons by Shahaf and Marom, and in addition explain synaptic pruning. We examined the underlying mechanism by simulating a small network of 3 neurons, then scaled it up to a hundred neurons. We show that LSA has a higher explanatory power than existing hypotheses about the response of biological neural networks to external stimulation, and can be used as a learning rule for an embodied application: learning of wall avoidance by a simulated robot. In other work, reinforcement learning with spiking networks can be obtained through global reward signals akin to simulating the dopamine system; we believe that this is the first project demonstrating sensory-motor learning with random spiking networks through Hebbian learning relying on environmental conditions without a separate reward system. PMID:28158309
John, Rohit Abraham; Ko, Jieun; Kulkarni, Mohit R; Tiwari, Naveen; Chien, Nguyen Anh; Ing, Ng Geok; Leong, Wei Lin; Mathews, Nripan
2017-08-01
Emulation of biological synapses is necessary for future brain-inspired neuromorphic computational systems that could look beyond the standard von Neumann architecture. Here, artificial synapses based on ionic-electronic hybrid oxide-based transistors on rigid and flexible substrates are demonstrated. The flexible transistors reported here depict a high field-effect mobility of ≈9 cm² V⁻¹ s⁻¹ with good mechanical performance. Comprehensive learning abilities/synaptic rules like paired-pulse facilitation, excitatory and inhibitory postsynaptic currents, spike-time-dependent plasticity, consolidation, superlinear amplification, and dynamic logic are successfully established, depicting concurrent processing and memory functionalities with spatiotemporal correlation. The results present a fully solution-processable approach to fabricate artificial synapses for next-generation transparent neural circuits. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Reducing the computational footprint for real-time BCPNN learning
Vogginger, Bernhard; Schüffny, René; Lansner, Anders; Cederström, Love; Partzsch, Johannes; Höppner, Sebastian
2015-01-01
The implementation of synaptic plasticity in neural simulation or neuromorphic hardware is usually very resource-intensive, often requiring a compromise between efficiency and flexibility. A versatile, but computationally-expensive plasticity mechanism is provided by the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm. Building upon Bayesian statistics, and having clear links to biological plasticity processes, the BCPNN learning rule has been applied in many fields, ranging from data classification, associative memory, reward-based learning, probabilistic inference to cortical attractor memory networks. In the spike-based version of this learning rule the pre-, postsynaptic and coincident activity is traced in three low-pass-filtering stages, requiring a total of eight state variables, whose dynamics are typically simulated with the fixed step size Euler method. We derive analytic solutions allowing an efficient event-driven implementation of this learning rule. Further speedup is achieved by first rewriting the model which reduces the number of basic arithmetic operations per update to one half, and second by using look-up tables for the frequently calculated exponential decay. Ultimately, in a typical use case, the simulation using our approach is more than one order of magnitude faster than with the fixed step size Euler method. Aiming for a small memory footprint per BCPNN synapse, we also evaluate the use of fixed-point numbers for the state variables, and assess the number of bits required to achieve same or better accuracy than with the conventional explicit Euler method. All of this will allow a real-time simulation of a reduced cortex model based on BCPNN in high performance computing. More important, with the analytic solution at hand and due to the reduced memory bandwidth, the learning rule can be efficiently implemented in dedicated or existing digital neuromorphic hardware. PMID:25657618
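The speed-up from replacing fixed-step Euler integration with the analytic solution can be illustrated on a single low-pass stage, dz/dt = -z/tau. This is a toy comparison of our own; the BCPNN rule chains three such stages over eight state variables:

```python
import math

def euler_decay(z0, tau, horizon, dt=1.0):
    """Fixed-step explicit Euler for dz/dt = -z / tau: one update per step."""
    z, t = z0, 0.0
    while t < horizon:
        z += dt * (-z / tau)
        t += dt
    return z

def event_driven_decay(z0, tau, elapsed):
    """Analytic solution, evaluated only when the trace is actually needed
    (e.g., at the next spike event): a single multiplication per event."""
    return z0 * math.exp(-elapsed / tau)
```

The event-driven form is exact no matter how much time has elapsed, while the Euler result accumulates discretization error over the 100 steps it must execute for the same interval.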
Friedmann, Simon; Frémaux, Nicolas; Schemmel, Johannes; Gerstner, Wulfram; Meier, Karlheinz
2013-01-01
In this study, we propose and analyze in simulations a new, highly flexible method of implementing synaptic plasticity in a wafer-scale, accelerated neuromorphic hardware system. The study focuses on globally modulated STDP, as a special use-case of this method. Flexibility is achieved by embedding a general-purpose processor dedicated to plasticity into the wafer. To evaluate the suitability of the proposed system, we use a reward-modulated STDP rule in a spike train learning task. A single layer of neurons is trained to fire at specific points in time with only the reward as feedback. This model is simulated to measure its performance, i.e., the increase in received reward after learning. Using this performance as a baseline, we then simulate the model with various constraints imposed by the proposed implementation and compare the performance. The simulated constraints include discretized synaptic weights, a restricted interface between analog synapses and embedded processor, and mismatch of analog circuits. We find that probabilistic updates can increase the performance of low-resolution weights, a simple interface between analog synapses and processor is sufficient for learning, and performance is insensitive to mismatch. Further, we consider communication latency between the wafer and the conventional control computer system that is simulating the environment. This latency increases the delay with which the reward is sent to the embedded processor. Because of the time-continuous operation of the analog synapses, delay can cause a deviation of the updates as compared to the non-delayed situation. We find that for highly accelerated systems latency has to be kept to a minimum. This study demonstrates the suitability of the proposed implementation to emulate the selected reward-modulated STDP learning rule. It is therefore an ideal candidate for implementation in an upgraded version of the wafer-scale system developed within the BrainScaleS project. PMID:24065877
Balanced excitation and inhibition are required for high-capacity, noise-robust neuronal selectivity
Abbott, L. F.; Sompolinsky, Haim
2017-01-01
Neurons and networks in the cerebral cortex must operate reliably despite multiple sources of noise. To evaluate the impact of both input and output noise, we determine the robustness of single-neuron stimulus selective responses, as well as the robustness of attractor states of networks of neurons performing memory tasks. We find that robustness to output noise requires synaptic connections to be in a balanced regime in which excitation and inhibition are strong and largely cancel each other. We evaluate the conditions required for this regime to exist and determine the properties of networks operating within it. A plausible synaptic plasticity rule for learning that balances weight configurations is presented. Our theory predicts an optimal ratio of the number of excitatory and inhibitory synapses for maximizing the encoding capacity of balanced networks for given statistics of afferent activations. Previous work has shown that balanced networks amplify spatiotemporal variability and account for observed asynchronous irregular states. Here we present a distinct type of balanced network that amplifies small changes in the impinging signals and emerges automatically from learning to perform neuronal and network functions robustly. PMID:29042519
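The cancellation at the heart of the balanced regime can be shown with first- and second-moment arithmetic: when inhibitory weights are scaled as g = n_exc/n_inh, the mean input cancels exactly while the input variance (the fluctuations the network computes with) remains large. A toy calculation of our own, not the paper's capacity analysis:

```python
def balanced_input_stats(n_exc, n_inh, rate, g=None):
    """Mean and variance of the summed input from n_exc excitatory (+1) and
    n_inh inhibitory (-g) afferents, each active independently with
    probability `rate` (Bernoulli)."""
    if g is None:
        g = n_exc / n_inh            # scaling that balances the mean exactly
    mean = n_exc * rate - g * n_inh * rate
    var = (n_exc + g * g * n_inh) * rate * (1 - rate)
    return mean, var
```

With 800 excitatory and 200 inhibitory afferents at rate 0.5, the mean input is exactly zero while the variance is 1000: excitation and inhibition are individually strong but largely cancel, leaving large fluctuations to carry the signal.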
Models of Acetylcholine and Dopamine Signals Differentially Improve Neural Representations
Holca-Lamarre, Raphaël; Lücke, Jörg; Obermayer, Klaus
2017-01-01
Biological and artificial neural networks (ANNs) represent input signals as patterns of neural activity. In biology, neuromodulators can trigger important reorganizations of these neural representations. For instance, pairing a stimulus with the release of either acetylcholine (ACh) or dopamine (DA) evokes long-lasting increases in the responses of neurons to the paired stimulus. The functional roles of ACh and DA in rearranging representations remain largely unknown. Here, we address this question using a Hebbian-learning neural network model. Our aim is both to gain a functional understanding of ACh and DA transmission in shaping biological representations and to explore neuromodulator-inspired learning rules for ANNs. We model the effects of ACh and DA on synaptic plasticity and confirm that stimuli coinciding with greater neuromodulator activation are over-represented in the network. We then simulate the physiological release schedules of ACh and DA. We measure the impact of neuromodulator release on the network's representation and on its performance on a classification task. We find that ACh and DA trigger distinct changes in neural representations that both improve performance. The putative ACh signal redistributes neural preferences so that more neurons encode stimulus classes that are challenging for the network. The putative DA signal adapts synaptic weights so that they better match the classes of the task at hand. Our model thus offers a functional explanation for the effects of ACh and DA on cortical representations. Additionally, our learning algorithm yields performances comparable to those of state-of-the-art optimisation methods in multi-layer perceptrons while requiring weaker supervision signals and interacting with synaptically local weight updates. PMID:28690509
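The basic modelling ingredient — a stimulus paired with greater neuromodulator release receives a larger Hebbian weight change and so becomes over-represented — reduces to gating the learning rate. A minimal sketch with invented numbers, not the authors' network model:

```python
def modulated_hebbian_step(w, pre, post, neuromod, lr=0.05):
    """One Hebbian update gated by a scalar neuromodulator level (a stand-in
    for ACh/DA release): dw_ij = lr * neuromod * post_i * pre_j."""
    return [[wij + lr * neuromod * post[i] * pre[j]
             for j, wij in enumerate(row)]
            for i, row in enumerate(w)]

# Hypothetical schedule: stimulus A paired with strong release, B with weak.
w = [[0.0, 0.0]]
w = modulated_hebbian_step(w, pre=[1.0, 0.0], post=[1.0], neuromod=1.0)  # A
w = modulated_hebbian_step(w, pre=[0.0, 1.0], post=[1.0], neuromod=0.2)  # B
```

After identical pre/post pairings, the weight onto the strongly modulated stimulus is five times larger, which is the seed of the representational bias the paper studies.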
Neuronal avalanches and learning
NASA Astrophysics Data System (ADS)
de Arcangelis, Lucilla
2011-05-01
Networks of living neurons represent one of the most fascinating systems of biology. While the physical and chemical mechanisms at the basis of the functioning of a single neuron are quite well understood, the collective behaviour of a system of many neurons is an extremely intriguing subject. A crucial ingredient of this complex behaviour is the plasticity property of the network, namely the capacity to adapt and evolve depending on the level of activity. This plastic ability is believed, nowadays, to be at the basis of learning and memory in real brains. Spontaneous neuronal activity has recently shown features in common with other complex systems. Experimental data have, in fact, shown that electrical information propagates in a cortex slice via an avalanche mode. These avalanches are characterized by a power law distribution for the size and duration, features found in other problems in the context of the physics of complex systems, and successful models have been developed to describe their behaviour. In this contribution we discuss a statistical mechanical model for the complex activity in a neuronal network. The model implements the main physiological properties of living neurons and is able to reproduce recent experimental results. We then discuss the learning abilities of this neuronal network. Learning occurs via plastic adaptation of synaptic strengths by a non-uniform negative feedback mechanism. The system is able to learn all the tested rules, in particular the exclusive OR (XOR) and a random rule with three inputs. The learning dynamics exhibits universal features as a function of the strength of plastic adaptation. Any rule can be learned provided that the plastic adaptation is sufficiently slow.
Synaptic plasticity associated with a memory engram in the basolateral amygdala.
Nonaka, Ayako; Toyoda, Takeshi; Miura, Yuki; Hitora-Imamura, Natsuko; Naka, Masamitsu; Eguchi, Megumi; Yamaguchi, Shun; Ikegaya, Yuji; Matsuki, Norio; Nomura, Hiroshi
2014-07-09
Synaptic plasticity is a cellular mechanism putatively underlying learning and memory. However, it is unclear whether learning induces synaptic modification globally or only in a subset of neurons in associated brain regions. In this study, we genetically identified neurons activated during contextual fear learning and separately recorded synaptic efficacy from recruited and nonrecruited neurons in the mouse basolateral amygdala (BLA). We found that fear learning induces presynaptic potentiation, which was reflected by an increase in the miniature EPSC frequency and by a decrease in the paired-pulse ratio. These changes occurred only in the cortical synapses targeting the BLA neurons that were recruited into the fear memory trace. Furthermore, we found that fear learning reorganizes the neuronal ensemble responsive to the conditioning context in conjunction with the synaptic plasticity. In particular, neuronal activity during learning was associated with recruitment into the context-responsive ensemble. These findings suggest that synaptic plasticity in a subset of BLA neurons contributes to fear memory expression through ensemble reorganization. Copyright © 2014 the authors 0270-6474/14/349305-05$15.00/0.
Learning through ferroelectric domain dynamics in solid-state synapses
NASA Astrophysics Data System (ADS)
Boyn, Sören; Grollier, Julie; Lecerf, Gwendal; Xu, Bin; Locatelli, Nicolas; Fusil, Stéphane; Girod, Stéphanie; Carrétéro, Cécile; Garcia, Karin; Xavier, Stéphane; Tomas, Jean; Bellaiche, Laurent; Bibes, Manuel; Barthélémy, Agnès; Saïghi, Sylvain; Garcia, Vincent
2017-04-01
In the brain, learning is achieved through the ability of synapses to reconfigure the strength by which they connect neurons (synaptic plasticity). In promising solid-state synapses called memristors, conductance can be finely tuned by voltage pulses and set to evolve according to a biological learning rule called spike-timing-dependent plasticity (STDP). Future neuromorphic architectures will comprise billions of such nanosynapses, which require a clear understanding of the physical mechanisms responsible for plasticity. Here we report on synapses based on ferroelectric tunnel junctions and show that STDP can be harnessed from inhomogeneous polarization switching. Through combined scanning probe imaging, electrical transport and atomic-scale molecular dynamics, we demonstrate that conductance variations can be modelled by the nucleation-dominated reversal of domains. Based on this physical model, our simulations show that arrays of ferroelectric nanosynapses can autonomously learn to recognize patterns in a predictable way, opening the path towards unsupervised learning in spiking neural networks.
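The STDP rule that the nanosynapses are set to follow has a standard pair-based form. The sketch below is the generic textbook window, not the measured ferroelectric response; the amplitudes and time constant are illustrative assumptions.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for spike-time difference dt = t_post - t_pre (ms)."""
    if dt > 0:
        return a_plus * np.exp(-dt / tau)    # pre before post: potentiation
    if dt < 0:
        return -a_minus * np.exp(dt / tau)   # post before pre: depression
    return 0.0

print(stdp_dw(10.0) > 0, stdp_dw(-10.0) < 0)  # → True True
```

In a memristive implementation, the sign and magnitude of `dw` would be mapped onto the amplitude and polarity of the programming voltage pulse.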
Bouchard, Kristofer E.; Ganguli, Surya; Brainard, Michael S.
2015-01-01
The majority of distinct sensory and motor events occur as temporally ordered sequences with rich probabilistic structure. Sequences can be characterized by the probability of transitioning from the current state to upcoming states (forward probability), as well as the probability of having transitioned to the current state from previous states (backward probability). Despite the prevalence of probabilistic sequencing of both sensory and motor events, the Hebbian mechanisms that mold synapses to reflect the statistics of experienced probabilistic sequences are not well understood. Here, we show through analytic calculations and numerical simulations that Hebbian plasticity (correlation, covariance, and STDP) with pre-synaptic competition can develop synaptic weights equal to the conditional forward transition probabilities present in the input sequence. In contrast, post-synaptic competition can develop synaptic weights proportional to the conditional backward probabilities of the same input sequence. We demonstrate that to stably reflect the conditional probability of a neuron's inputs and outputs, local Hebbian plasticity requires a balance between competitive learning forces that promote synaptic differentiation and homogenizing learning forces that promote synaptic stabilization. The balance between these forces dictates a prior over the distribution of learned synaptic weights, strongly influencing both the rate at which structure emerges and the entropy of the final distribution of synaptic weights. Together, these results demonstrate a simple correspondence between the biophysical organization of neurons, the site of synaptic competition, and the temporal flow of information encoded in synaptic weights by Hebbian plasticity, while highlighting the utility of balancing learning forces to accurately encode probability distributions and prior expectations over such distributions. PMID:26257637
Hussain, Shaista; Basu, Arindam
2016-01-01
The development of power-efficient neuromorphic devices presents the challenge of designing spike pattern classification algorithms which can be implemented on low-precision hardware while achieving state-of-the-art performance. In our pursuit of meeting this challenge, we present a pattern classification model which uses a sparse connection matrix and exploits the mechanism of nonlinear dendritic processing to achieve high classification accuracy. A rate-based structural learning rule for multiclass classification is proposed which modifies a connectivity matrix of binary synaptic connections by choosing the best “k” out of “d” inputs to make connections on every dendritic branch (k ≪ d). Because learning only modifies connectivity, the model is well suited for implementation in neuromorphic systems using address-event representation (AER). We develop an ensemble method which combines several dendritic classifiers to achieve enhanced generalization over individual classifiers. We have two major findings: (1) Our results demonstrate that an ensemble created with classifiers comprising a moderate number of dendrites performs better than both ensembles of perceptrons and ensembles of complex dendritic trees. (2) To determine the moderate number of dendrites required for a specific classification problem, a two-step solution is proposed. First, an adaptive approach scales the relative size of the dendritic trees of neurons for each class: it progressively adds dendrites with a fixed number of synapses to the network, thereby allocating synaptic resources according to the complexity of the given problem. Second, theoretical capacity calculations are used to convert each neuronal dendritic tree to its optimal topology, in which dendrites of each class are assigned different numbers of synapses.
The performance of the model is evaluated on classification of handwritten digits from the benchmark MNIST dataset and compared with other spike classifiers. We show that our system can achieve classification accuracy within 1–2% of other reported spike-based classifiers while using far fewer synaptic resources (only 7% of those used by other methods). Further, an ensemble classifier created with adaptively learned sizes can attain an accuracy of 96.4%, which is on par with the best reported performance of spike-based classifiers. Moreover, the proposed method achieves this using about 20% of the synapses used by other spike algorithms. We also present results of applying our algorithm to classify the MNIST-DVS dataset collected from a real spike-based image sensor and show results comparable to the best reported ones (88.1% accuracy). For VLSI implementations, we show that the reduced synaptic memory can save up to 4× area compared to conventional crossbar topologies. Finally, we also present a biologically realistic spike-based version for calculating the correlations required by the structural learning rule and demonstrate the correspondence between the rate-based and spike-based methods of learning. PMID:27065782
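A heavily simplified toy of the structural-learning idea (the synthetic data layout and the correlation score below are my assumptions, not the paper's rule): each class unit wires a dendritic branch to the best k of d inputs by input-label correlation, so only connectivity is learned, and a squaring dendritic nonlinearity reads out the result.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: 40 binary inputs; features 0-4 mark class 1, features 5-9
# mark class 0, and every input also fires as sparse background noise.
d, k, n = 40, 5, 400
X = (rng.random((n, d)) < 0.1).astype(float)
y = rng.integers(0, 2, n)
X[y == 1, :5] = (rng.random(((y == 1).sum(), 5)) < 0.8)
X[y == 0, 5:10] = (rng.random(((y == 0).sum(), 5)) < 0.8)

def learn_connections(X, target, k):
    """Structural learning: keep the best k of d inputs by label correlation."""
    score = X.T @ target - X.T @ (1 - target)
    return np.argsort(score)[-k:]

conn1 = learn_connections(X, (y == 1).astype(float), k)
conn0 = learn_connections(X, (y == 0).astype(float), k)

def branch(inputs):
    """Squaring dendritic nonlinearity applied to a branch's summed input."""
    return inputs.sum(axis=1) ** 2

pred = (branch(X[:, conn1]) > branch(X[:, conn0])).astype(int)
acc = float((pred == y).mean())
print(f"training accuracy: {acc:.2f}")
```

Because the learned state is just two index sets of size k, the "weights" are binary and the memory footprint is tiny, which is the property that makes this family of rules attractive for AER hardware.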
Spike-timing dependent plasticity in primate corticospinal connections induced during free behavior
Nishimura, Yukio; Perlmutter, Steve I.; Eaton, Ryan W.; Fetz, Eberhard E.
2014-01-01
Motor learning and functional recovery from brain damage involve changes in the strength of synaptic connections between neurons. Relevant in vivo evidence on the underlying cellular mechanisms remains limited and indirect. We found that the strength of neural connections between motor cortex and spinal cord in monkeys can be modified with an autonomous recurrent neural interface that delivers electrical stimuli in the spinal cord triggered by action potentials of corticospinal cells during free behavior. The activity-dependent stimulation modified the strength of the terminal connections of single corticomotoneuronal cells, consistent with a bidirectional spike-timing dependent plasticity rule previously derived from in vitro experiments. For some cells the changes lasted for days after the end of conditioning, but most effects eventually reverted to preconditioning levels. These results provide the first direct evidence of corticospinal synaptic plasticity in vivo at the level of single neurons induced by normal firing patterns during free behavior. PMID:24210907
NASA Astrophysics Data System (ADS)
de Arcangelis, L.; Lombardi, F.; Herrmann, H. J.
2014-03-01
Spontaneous brain activity has been recently characterized by avalanche dynamics with critical features for systems in vitro and in vivo. In this contribution we present a review of experimental results on neuronal avalanches in cortex slices, together with numerical results from a neuronal model implementing several physiological properties of living neurons. Numerical data reproduce experimental results for avalanche statistics. The temporal organization of avalanches can be characterized by the distribution of waiting times between successive avalanches. Experimental measurements exhibit a non-monotonic behaviour, not usually found in other natural processes. Numerical simulations provide evidence that this behaviour is a consequence of the alternation between states of high and low activity, leading to a balance between excitation and inhibition controlled by a single parameter. During these periods, both the single-neuron state and the network excitability level, which retain a memory of past activity, are tuned by homoeostatic mechanisms. Interestingly, the same homoeostatic balance is detected for neuronal activity at the scale of the whole brain. We finally review the learning abilities of this neuronal network. Learning occurs via plastic adaptation of synaptic strengths by a non-uniform negative feedback mechanism. The system is able to learn all the tested rules and the learning dynamics exhibits universal features as a function of the strength of plastic adaptation. Any rule could be learned provided that the plastic adaptation is sufficiently slow.
Neural Synchronization and Cryptography
NASA Astrophysics Data System (ADS)
Ruttor, Andreas
2007-11-01
Neural networks can synchronize by learning from each other. In the case of discrete weights, full synchronization is achieved in a finite number of steps. Additional networks can be trained by using the inputs and outputs generated during this process as examples. Several learning rules for both tasks are presented and analyzed. In the case of Tree Parity Machines, synchronization is much faster than learning. Scaling laws for the number of steps needed for full synchronization and successful learning are derived using analytical models. They indicate that the difference between the two processes can be controlled by changing the synaptic depth: with bidirectional interaction the synchronization time increases in proportion to the square of this parameter, but it grows exponentially if information is transmitted in one direction only. Because of this effect, neural synchronization can be used to construct a cryptographic key-exchange protocol, in which the partners benefit from mutual interaction so that a passive attacker is usually unable to learn the generated key in time. The success probabilities of different attack methods are determined by numerical simulations, and scaling laws are derived from the data. They show that the partners can reach any desired level of security by simply increasing the synaptic depth: the complexity of a successful attack then grows exponentially, while the effort needed to generate a key increases only polynomially. Further improvements in security are possible by replacing the random inputs with queries generated by the partners.
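The synchronization process is concrete enough to sketch. Below is a minimal Tree Parity Machine pair in the standard construction (K hidden units, N inputs each, integer weights bounded by the synaptic depth L; the parameter values are illustrative): both machines see the same public random inputs and apply the Hebbian rule only on steps where their outputs agree.

```python
import numpy as np

rng = np.random.default_rng(3)

K, N, L = 3, 10, 3   # hidden units, inputs per unit, synaptic depth

def tpm_output(W, X):
    sigma = np.sign((W * X).sum(axis=1))
    sigma[sigma == 0] = -1            # break ties deterministically
    return sigma, int(np.prod(sigma)) # hidden signs and parity output

def hebbian_update(W, X, sigma, tau):
    for k in range(K):
        if sigma[k] == tau:           # only units agreeing with the output learn
            W[k] = np.clip(W[k] + tau * X[k], -L, L)

A = rng.integers(-L, L + 1, (K, N))   # partner A's secret weights
B = rng.integers(-L, L + 1, (K, N))   # partner B's secret weights
steps = 0
while not np.array_equal(A, B) and steps < 100_000:
    X = rng.choice([-1, 1], (K, N))   # public random inputs
    sA, tA = tpm_output(A, X)
    sB, tB = tpm_output(B, X)
    if tA == tB:                      # only the parity outputs are exchanged
        hebbian_update(A, X, sA, tA)
        hebbian_update(B, X, sB, tB)
    steps += 1

print(f"synchronized after {steps} exchanged inputs")
```

Only the inputs and the single-bit outputs cross the channel; the synchronized weight matrix then serves as the shared key, and increasing L lengthens an attacker's task far more than the partners'.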
Webber, C J
2001-05-01
This article shows analytically that single-cell learning rules that give rise to oriented and localized receptive fields, when their synaptic weights are randomly and independently initialized according to a plausible assumption of zero prior information, will generate visual codes that are invariant under two-dimensional translations, rotations, and scale magnifications, provided that the statistics of their training images are sufficiently invariant under these transformations. Such codes span different image locations, orientations, and size scales with equal economy. Thus, single-cell rules could account for the spatial scaling property of the cortical simple-cell code. This prediction is tested computationally by training with natural scenes; it is demonstrated that a single-cell learning rule can give rise to simple-cell receptive fields spanning the full range of orientations, image locations, and spatial frequencies (except at the extreme high and low frequencies at which the scale invariance of the statistics of digitally sampled images must ultimately break down, because of the image boundary and the finite pixel resolution). Thus, no constraint on completeness, or any other coupling between cells, is necessary to induce the visual code to span wide ranges of locations, orientations, and size scales. This prediction is made using the theory of spontaneous symmetry breaking, which we have previously shown can also explain the data-driven self-organization of a wide variety of transformation invariances in neurons' responses, such as the translation invariance of complex cell response.
Zinc Transporter 3 Is Involved in Learned Fear and Extinction, but Not in Innate Fear
ERIC Educational Resources Information Center
Martel, Guillaume; Hevi, Charles; Friebely, Olivia; Baybutt, Trevor; Shumyatsky, Gleb P.
2010-01-01
Synaptically released Zn²⁺ is a potential modulator of neurotransmission and synaptic plasticity in fear-conditioning pathways. Zinc transporter 3 (ZnT3) knock-out (KO) mice are well suited to test the role of zinc in learned fear, because ZnT3 is colocalized with synaptic zinc, responsible for its transport to synaptic vesicles,…
Synaptic Ensemble Underlying the Selection and Consolidation of Neuronal Circuits during Learning.
Hoshiba, Yoshio; Wada, Takeyoshi; Hayashi-Takagi, Akiko
2017-01-01
Memories are crucial to the cognitive essence of who we are as human beings. Accumulating evidence suggests that memories are stored in subsets of neurons that fire together in the same ensemble. The formation of such cell ensembles must meet contradictory requirements: being plastic and responsive during learning, yet stable enough to maintain the memory. Although synaptic potentiation is presumed to be the cellular substrate for this process, the link between the two remains correlational. With the application of the latest optogenetic tools, it has become possible to collect direct evidence of the contribution of synaptic potentiation to the formation and consolidation of cell ensembles in a learning-task-specific manner. In this review, we summarize the current view of the causative role of synaptic plasticity as the cellular mechanism underlying the encoding and recall of memories. In particular, we focus on the latest optoprobes developed for the visualization of such "synaptic ensembles." We further discuss how a new synaptic ensemble could contribute to the formation of cell ensembles during learning and memory. With the development and application of novel research tools, future studies on synaptic ensembles will pioneer new discoveries, eventually leading to a comprehensive understanding of how the brain works.
Neural Mechanism for Stochastic Behavior During a Competitive Game
Soltani, Alireza; Lee, Daeyeol; Wang, Xiao-Jing
2006-01-01
Previous studies have shown that non-human primates can generate highly stochastic choice behavior, especially when this is required during a competitive interaction with another agent. To understand the neural mechanism of such dynamic choice behavior, we propose a biologically plausible model of decision making endowed with synaptic plasticity that follows a reward-dependent stochastic Hebbian learning rule. This model constitutes a biophysical implementation of reinforcement learning, and it reproduces salient features of behavioral data from an experiment with monkeys playing a matching pennies game. Due to interaction with an opponent and learning dynamics, the model generates quasi-random behavior robustly in spite of intrinsic biases. Furthermore, non-random choice behavior can also emerge when the model plays against a non-interactive opponent, as observed in the monkey experiment. Finally, when combined with a meta-learning algorithm, our model accounts for the slow drift in the animal’s strategy based on a process of reward maximization. PMID:17015181
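A stripped-down toy of a reward-dependent stochastic Hebbian rule in matching pennies (my simplification, not the published biophysical model): two action values act like synaptic strengths, the chosen one is pushed toward the obtained reward, choices are drawn by softmax, and an opponent that exploits the agent's empirical bias drives behavior toward near-random choice.

```python
import numpy as np

rng = np.random.default_rng(4)

w = np.zeros(2)             # "synaptic" values for the two actions
lr, beta = 0.1, 3.0
agent_counts = np.zeros(2)  # opponent's record of the agent's choices
choices = []
for t in range(5000):
    p = np.exp(beta * w) / np.exp(beta * w).sum()  # stochastic softmax choice
    a = int(rng.choice(2, p=p))
    opp = int(np.argmax(agent_counts))  # opponent plays the agent's modal action
    r = 1.0 if a != opp else 0.0        # agent is the mismatcher in this game
    w[a] += lr * (r - w[a])             # reward-gated plasticity, chosen action only
    agent_counts[a] += 1
    choices.append(a)

freq = float(np.mean(choices[-2000:]))
print(f"late frequency of action 1: {freq:.2f}")  # hovers near 0.5
```

Any sustained bias is detected and punished by the opponent, so the reward-gated rule keeps pulling the values back toward balance; this is the sense in which quasi-random behavior emerges from learning dynamics rather than from an explicit randomizer.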
The penumbra of learning: a statistical theory of synaptic tagging and capture.
Gershman, Samuel J
2014-01-01
Learning in humans and animals is accompanied by a penumbra: Learning one task benefits from learning an unrelated task shortly before or after. At the cellular level, the penumbra of learning appears when weak potentiation of one synapse is amplified by strong potentiation of another synapse on the same neuron during a critical time window. Weak potentiation sets a molecular tag that enables the synapse to capture plasticity-related proteins synthesized in response to strong potentiation at another synapse. This paper describes a computational model which formalizes synaptic tagging and capture in terms of statistical learning mechanisms. According to this model, synaptic strength encodes a probabilistic inference about the dynamically changing association between pre- and post-synaptic firing rates. The rate of change is itself inferred, coupling together different synapses on the same neuron. When the inputs to one synapse change rapidly, the inferred rate of change increases, amplifying learning at other synapses.
Fast Learning with Weak Synaptic Plasticity.
Yger, Pierre; Stimberg, Marcel; Brette, Romain
2015-09-30
New sensory stimuli can be learned with a single or a few presentations. Similarly, the responses of cortical neurons to a stimulus have been shown to increase reliably after just a few repetitions. Long-term memory is thought to be mediated by synaptic plasticity, but in vitro experiments in cortical cells typically show very small changes in synaptic strength after a pair of presynaptic and postsynaptic spikes. Thus, it is traditionally thought that fast learning requires stronger synaptic changes, possibly because of neuromodulation. Here we show theoretically that weak synaptic plasticity can, in fact, support fast learning, because of the large number of synapses N onto a cortical neuron. In the fluctuation-driven regime characteristic of cortical neurons in vivo, the size of membrane potential fluctuations grows only as √N, whereas a single output spike leads to potentiation of a number of synapses proportional to N. Therefore, the relative effect of a single spike on synaptic potentiation grows as √N. This leverage effect requires precise spike timing. Thus, the large number of synapses onto cortical neurons allows fast learning with very small synaptic changes. Significance statement: Long-term memory is thought to rely on the strengthening of coactive synapses. This physiological mechanism is generally considered to be very gradual, and yet new sensory stimuli can be learned with just a few presentations. Here we show theoretically that this apparent paradox can be solved when there is a tight balance between excitatory and inhibitory input. In this case, small synaptic modifications applied to the many synapses onto a given neuron disrupt that balance and produce a large effect even for modifications induced by a single stimulus. This effect makes fast learning possible with small synaptic changes and reconciles physiological and behavioral observations. Copyright © 2015 the authors 0270-6474/15/3513351-12$15.00/0.
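The scaling argument above reduces to one line of arithmetic: summed balanced input fluctuates with standard deviation proportional to √N, while a single output spike modifies roughly N synapses by dw each, so the relative effect is √N·dw. A quick numeric check (dw is an arbitrary illustrative value):

```python
import numpy as np

dw = 0.001  # tiny per-synapse weight change from one spike pairing
# One spike potentiates ~N synapses (total change N*dw), measured against
# membrane fluctuations of size ~sqrt(N) in the balanced regime.
ratios = {N: N * dw / np.sqrt(N) for N in (100, 10_000, 1_000_000)}
for N, r in ratios.items():
    print(f"N = {N:>9,} synapses -> relative effect {r:.2f}")
```

The relative effect grows a hundredfold as N goes from 100 to 10⁶, which is why a per-synapse change far too small to see in vitro can still produce one-shot learning at the whole-neuron level.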
ERIC Educational Resources Information Center
Cohen-Matsliah, Sivan Ida; Seroussi, Yaron; Rosenblum, Kobi; Barkai, Edi
2008-01-01
Pyramidal neurons in the piriform cortex from olfactory-discrimination (OD) trained rats undergo synaptic modifications that last for days after learning. A particularly intriguing modification is reduced paired-pulse facilitation (PPF) in the synapses interconnecting these cells; a phenomenon thought to reflect enhanced synaptic release. The…
Role of motor cortex NMDA receptors in learning-dependent synaptic plasticity of behaving mice
Hasan, Mazahir T.; Hernández-González, Samuel; Dogbevia, Godwin; Treviño, Mario; Bertocchi, Ilaria; Gruart, Agnès; Delgado-García, José M.
2013-01-01
The primary motor cortex has an important role in the precise execution of learned motor responses. During motor learning, synaptic efficacy between sensory and primary motor cortical neurons is enhanced, possibly involving long-term potentiation and N-methyl-D-aspartate (NMDA)-specific glutamate receptor function. To investigate whether NMDA receptor in the primary motor cortex can act as a coincidence detector for activity-dependent changes in synaptic strength and associative learning, here we generate mice with deletion of the Grin1 gene, encoding the essential NMDA receptor subunit 1 (GluN1), specifically in the primary motor cortex. The loss of NMDA receptor function impairs primary motor cortex long-term potentiation in vivo. Importantly, it impairs the synaptic efficacy between the primary somatosensory and primary motor cortices and significantly reduces classically conditioned eyeblink responses. Furthermore, compared with wild-type littermates, mice lacking primary motor cortex NMDA receptors show slower learning in Skinner-box tasks. Thus, primary motor cortex NMDA receptors are necessary for activity-dependent synaptic strengthening and associative learning. PMID:23978820
GRASP1 regulates synaptic plasticity and learning through endosomal recycling of AMPA receptors
Chiu, Shu-Ling; Diering, Graham Hugh; Ye, Bing; Takamiya, Kogo; Chen, Chih-Ming; Jiang, Yuwu; Niranjan, Tejasvi; Schwartz, Charles E.; Wang, Tao; Huganir, Richard L.
2017-01-01
Learning depends on experience-dependent modification of synaptic efficacy and neuronal connectivity in the brain. We provide direct evidence for physiological roles of the recycling endosome protein GRASP1 in glutamatergic synapse function and animal behavior. Mice lacking GRASP1 showed abnormal excitatory synapse number, synaptic plasticity and hippocampal-dependent learning and memory due to a failure in learning-induced synaptic AMPAR incorporation. We identified two GRASP1 point mutations from intellectual disability (ID) patients that showed convergent disruptive effects on AMPAR recycling and glutamate uncaging-induced structural and functional plasticity. Wild-type GRASP1, but not ID mutants, rescues spine loss in hippocampal CA1 neurons of Grasp1 knockout mice. Together, these results demonstrate a requirement for normal recycling endosome function in AMPAR-dependent synaptic function and neuronal connectivity in vivo, and suggest a potential role for GRASP1 in the pathophysiology of human cognitive disorders. PMID:28285821
NASA Astrophysics Data System (ADS)
Das, Mangal; Kumar, Amitesh; Singh, Rohit; Than Htay, Myo; Mukherjee, Shaibal
2018-02-01
A single synaptic device with inherent learning and memory functions is demonstrated, based on a forming-free amorphous Y2O3 (yttria) memristor fabricated by a dual ion beam sputtering system. Synaptic functions such as nonlinear transmission characteristics, long-term plasticity, short-term plasticity and ‘learning behavior (LB)’ are achieved using a single synaptic device based on a cost-effective metal-insulator-semiconductor (MIS) structure. An ‘LB’ function is demonstrated, for the first time in the literature, for a yttria-based memristor, which bears a resemblance to certain memory functions of biological systems. The realization of key synaptic functions in a cost-effective MIS structure would enable much cheaper synapses for artificial neural networks.
Garden, Derek L. F.; Rinaldi, Arianna
2016-01-01
Key points: We establish experimental preparations for optogenetic investigation of glutamatergic input to the inferior olive. Neurones in the principal olivary nucleus receive monosynaptic extra-somatic glutamatergic input from the neocortex. Glutamatergic inputs to neurones in the inferior olive generate bidirectional postsynaptic potentials (PSPs), with a fast excitatory component followed by a slower inhibitory component. Small-conductance calcium-activated potassium (SK) channels are required for the slow inhibitory component of glutamatergic PSPs and oppose temporal summation of inputs at intervals ≤ 20 ms. Active integration of synaptic input within the inferior olive may play a central role in control of olivo-cerebellar climbing fibre signals. Abstract: The inferior olive plays a critical role in motor coordination and learning by integrating diverse afferent signals to generate climbing fibre inputs to the cerebellar cortex. While it is well established that climbing fibre signals are important for motor coordination, the mechanisms by which neurones in the inferior olive integrate synaptic inputs and the roles of particular ion channels are unclear. Here, we test the hypothesis that neurones in the inferior olive actively integrate glutamatergic synaptic inputs. We demonstrate that optogenetically activated long-range synaptic inputs to the inferior olive, including projections from the motor cortex, generate rapid excitatory potentials followed by slower inhibitory potentials. Synaptic projections from the motor cortex preferentially target the principal olivary nucleus. We show that inhibitory and excitatory components of the bidirectional synaptic potentials are dependent upon AMPA (GluA) receptors, are GABAA independent, and originate from the same presynaptic axons. 
Consistent with models that predict active integration of synaptic inputs by inferior olive neurones, we find that the inhibitory component is reduced by blocking large conductance calcium‐activated potassium channels with iberiotoxin, and is abolished by blocking small conductance calcium‐activated potassium channels with apamin. Summation of excitatory components of synaptic responses to inputs at intervals ≤ 20 ms is increased by apamin, suggesting a role for the inhibitory component of glutamatergic responses in temporal integration. Our results indicate that neurones in the inferior olive implement novel rules for synaptic integration and suggest new principles for the contribution of inferior olive neurones to coordinated motor behaviours. PMID:27767209
Waddington, Amelia; Appleby, Peter A.; De Kamps, Marc; Cohen, Netta
2012-01-01
Synfire chains have long been proposed to generate precisely timed sequences of neural activity. Such activity has been linked to numerous neural functions including sensory encoding, cognitive and motor responses. In particular, it has been argued that synfire chains underlie the precise spatiotemporal firing patterns that control song production in a variety of songbirds. Previous studies have suggested that the development of synfire chains requires either initial sparse connectivity or strong topological constraints, in addition to any synaptic learning rules. Here, we show that this necessity can be removed by using a previously reported but hitherto unconsidered spike-timing-dependent plasticity (STDP) rule and activity-dependent excitability. Under this rule the network develops stable synfire chains that possess a non-trivial, scalable multi-layer structure, in which relative layer sizes appear to follow a universal function. Using computational modeling and a coarse grained random walk model, we demonstrate the role of the STDP rule in growing, molding and stabilizing the chain, and link model parameters to the resulting structure. PMID:23162457
Proposed mechanism for learning and memory erasure in a white-noise-driven sleeping cortex.
Steyn-Ross, Moira L; Steyn-Ross, D A; Sleigh, J W; Wilson, M T; Wilcocks, Lara C
2005-12-01
Understanding the structure and purpose of sleep remains one of the grand challenges of neurobiology. Here we use a mean-field linearized theory of the sleeping cortex to derive statistics for synaptic learning and memory erasure. The growth in correlated low-frequency high-amplitude voltage fluctuations during slow-wave sleep (SWS) is characterized by a probability density function that becomes broader and shallower as the transition into rapid-eye-movement (REM) sleep is approached. At transition, the Shannon information entropy of the fluctuations is maximized. If we assume Hebbian-learning rules apply to the cortex, then its correlated response to white-noise stimulation during SWS provides a natural mechanism for a synaptic weight change that will tend to shut down reverberant neural activity. In contrast, during REM sleep the weights will evolve in a direction that encourages excitatory activity. These entropy and weight-change predictions lead us to identify the final portion of deep SWS that occurs immediately prior to transition into REM sleep as a time of enhanced erasure of labile memory. We draw a link between the sleeping cortex and Landauer's dissipation theorem for irreversible computing [R. Landauer, IBM J. Res. Devel. 5, 183 (1961)], arguing that because information erasure is an irreversible computation, there is an inherent entropy cost as the cortex transits from SWS into REM sleep.
Synaptic Tagging During Memory Allocation
Rogerson, Thomas; Cai, Denise; Frank, Adam; Sano, Yoshitake; Shobe, Justin; Aranda, Manuel L.; Silva, Alcino J.
2014-01-01
There is now compelling evidence that the allocation of memory to specific neurons (neuronal allocation) and synapses (synaptic allocation) in a neurocircuit is not random and that instead specific mechanisms, such as increases in neuronal excitability and synaptic tagging and capture, determine the exact sites where memories are stored. We propose an integrated view of these processes, such that neuronal allocation, synaptic tagging and capture, spine clustering and metaplasticity reflect related aspects of memory allocation mechanisms. Importantly, the properties of these mechanisms suggest a set of rules that profoundly affect how memories are stored and recalled. PMID:24496410
Stably maintained dendritic spines are associated with lifelong memories
Yang, Guang; Pan, Feng; Gan, Wen-Biao
2016-01-01
Changes in synaptic connections are considered essential for learning and memory formation. However, it is unknown how neural circuits undergo continuous synaptic changes during learning while maintaining lifelong memories. Here we show, by following postsynaptic dendritic spines over time in the mouse cortex, that learning and novel sensory experience lead to spine formation and elimination by a protracted process. The extent of spine remodelling correlates with behavioural improvement after learning, suggesting a crucial role of synaptic structural plasticity in memory formation and storage. Importantly, a small fraction of new spines induced by novel experience, together with most spines formed early during development and surviving experience-dependent elimination, are preserved throughout the entire life of an animal. These studies indicate that learning and daily sensory experience leave minute but permanent marks on cortical connections and suggest that lifelong memories are stored in largely stably connected synaptic networks. PMID:19946265
NASA Astrophysics Data System (ADS)
Hogri, Roni; Bamford, Simeon A.; Taub, Aryeh H.; Magal, Ari; Giudice, Paolo Del; Mintz, Matti
2015-02-01
Neuroprostheses could potentially recover functions lost due to neural damage. Typical neuroprostheses connect an intact brain with the external environment, thus replacing damaged sensory or motor pathways. Recently, closed-loop neuroprostheses, bidirectionally interfaced with the brain, have begun to emerge, offering an opportunity to substitute malfunctioning brain structures. In this proof-of-concept study, we demonstrate a neuro-inspired model-based approach to neuroprostheses. A VLSI chip was designed to implement essential cerebellar synaptic plasticity rules, and was interfaced with cerebellar input and output nuclei in real time, thus reproducing cerebellum-dependent learning in anesthetized rats. Such a model-based approach does not require prior system identification, allowing for de novo experience-based learning in the brain-chip hybrid, with potential clinical advantages and limitations when compared to existing parametric "black box" models.
Influence of Synaptic Depression on Memory Storage Capacity
NASA Astrophysics Data System (ADS)
Otsubo, Yosuke; Nagata, Kenji; Oizumi, Masafumi; Okada, Masato
2011-08-01
Synaptic efficacy between neurons is known to change dynamically within a short time scale. Neurophysiological experiments show that high-frequency presynaptic inputs decrease synaptic efficacy between neurons. This phenomenon, called synaptic depression, is a form of short-term synaptic plasticity. Many researchers have investigated how synaptic depression affects the memory storage capacity; however, noise has not been taken into consideration in their analyses. By introducing "temperature", which controls the level of the noise, into an update rule of neurons, we investigate the effects of synaptic depression on the memory storage capacity in the presence of noise. We analytically compute the storage capacity by using a statistical mechanics technique called Self-Consistent Signal-to-Noise Analysis (SCSNA). We find that synaptic depression decreases the storage capacity at finite temperature, in contrast to the low-temperature limit, where the storage capacity does not change.
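The two ingredients the abstract combines, a noisy ("temperature"-controlled) binary-unit update and a depression variable that scales each synapse's efficacy, might be sketched as follows. This is a schematic illustration only, not the exact dynamics or the SCSNA analysis of the paper, and the parameter names (U, tau_rec) are assumptions borrowed from standard short-term depression models:

```python
import numpy as np

rng = np.random.default_rng(0)

def glauber_step(s, J, x, T=0.5, U=0.2, tau_rec=5.0):
    """One asynchronous update of a +/-1 network with depressing synapses.

    s: state vector (+/-1), J: weight matrix, x: per-neuron resource variable
    in (0, 1] that scales outgoing efficacy, T: 'temperature' (noise level),
    U: release fraction, tau_rec: recovery time constant.
    """
    i = rng.integers(len(s))
    h = J[i] @ (x * s)                            # depressed synapses scale the field
    p_up = 1.0 / (1.0 + np.exp(-2.0 * h / T))     # Glauber probability for s_i = +1
    s[i] = 1 if rng.random() < p_up else -1
    x += (1.0 - x) / tau_rec                      # resources recover toward 1
    x[i] -= U * x[i] * (s[i] == 1)                # firing consumes resources
    np.clip(x, 0.0, 1.0, out=x)
    return s, x
```

As T grows, p_up flattens toward 0.5 regardless of the field h, which is precisely the noise regime in which the paper finds that depression reduces capacity.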
Learning induces the translin/trax RNase complex to express activin receptors for persistent memory.
Park, Alan Jung; Havekes, Robbert; Fu, Xiuping; Hansen, Rolf; Tudor, Jennifer C; Peixoto, Lucia; Li, Zhi; Wu, Yen-Ching; Poplawski, Shane G; Baraban, Jay M; Abel, Ted
2017-09-20
Long-lasting forms of synaptic plasticity and memory require de novo protein synthesis. Yet, how learning triggers this process to form memory is unclear. Translin/trax is a candidate to drive this learning-induced memory mechanism by suppressing microRNA-mediated translational silencing at activated synapses. We find that mice lacking translin/trax display defects in synaptic tagging, which requires protein synthesis at activated synapses, and long-term memory. Hippocampal samples harvested from these mice following learning show increases in several disease-related microRNAs targeting the activin A receptor type 1C (ACVR1C), a component of the transforming growth factor-β receptor superfamily. Furthermore, the absence of translin/trax abolishes synaptic upregulation of ACVR1C protein after learning. Finally, synaptic tagging and long-term memory deficits in mice lacking translin/trax are mimicked by ACVR1C inhibition. Thus, we define a new memory mechanism by which learning reverses microRNA-mediated silencing of the novel plasticity protein ACVR1C via translin/trax.
Pedretti, G; Milo, V; Ambrogio, S; Carboni, R; Bianchi, S; Calderoni, A; Ramaswamy, N; Spinelli, A S; Ielmini, D
2017-07-13
Brain-inspired computation can revolutionize information technology by introducing machines capable of recognizing patterns (images, speech, video) and interacting with the external world in a cognitive, humanlike way. Achieving this goal requires first to gain a detailed understanding of the brain operation, and second to identify a scalable microelectronic technology capable of reproducing some of the inherent functions of the human brain, such as the high synaptic connectivity (~10^4) and the peculiar time-dependent synaptic plasticity. Here we demonstrate unsupervised learning and tracking in a spiking neural network with memristive synapses, where synaptic weights are updated via brain-inspired spike timing dependent plasticity (STDP). The synaptic conductance is updated by the local time-dependent superposition of pre- and post-synaptic spikes within a hybrid one-transistor/one-resistor (1T1R) memristive synapse. Only 2 synaptic states, namely the low resistance state (LRS) and the high resistance state (HRS), are sufficient to learn and recognize patterns. Unsupervised learning of a static pattern and tracking of a dynamic pattern of up to 4 × 4 pixels are demonstrated, paving the way for intelligent hardware technology with up-scaled memristive neural networks.
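The two-state learning scheme described here can be caricatured in a few lines. "LRS"/"HRS" follow the abstract's terminology (low/high resistance, i.e. strong/weak synapse), while the timing window and the tie-breaking at dt = 0 are assumptions of this sketch, not device specifications:

```python
def binary_stdp(state, t_pre, t_post, window=10.0):
    """Two-state (LRS/HRS) memristive synapse update for one spike pair.

    If the post spike follows the pre spike within the window, the device is
    set to the low resistance state (potentiation); if the pre spike follows
    the post spike within the window, it is reset to the high resistance
    state (depression); otherwise the state is unchanged.
    """
    dt = t_post - t_pre
    if 0 < dt <= window:
        return "LRS"
    if -window <= dt < 0:
        return "HRS"
    return state
```

The point of the paper is that even this binary weight resolution, applied with STDP timing, suffices for pattern learning once many synapses act in parallel.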
Synchronization in a noise-driven developing neural network
NASA Astrophysics Data System (ADS)
Lin, I.-H.; Wu, R.-K.; Chen, C.-M.
2011-11-01
We use computer simulations to investigate the structural and dynamical properties of a developing neural network whose activity is driven by noise. Structurally, the constructed neural networks in our simulations exhibit the small-world properties that have been observed in several neural networks. The dynamical change of neuronal membrane potential is described by the Hodgkin-Huxley model, and two types of learning rules, including spike-timing-dependent plasticity (STDP) and inverse STDP, are considered to restructure the synaptic strength between neurons. Clustered synchronized firing (SF) of the network is observed when the network connectivity (number of connections/maximal connections) is about 0.75, in which the firing rate of neurons is only half of the network frequency. At the connectivity of 0.86, all neurons fire synchronously at the network frequency. The network SF frequency increases logarithmically with the culturing time of a growing network and decreases exponentially with the delay time in signal transmission. These conclusions are consistent with experimental observations. The phase diagrams of SF in a developing network are investigated for both learning rules.
Mender, Bedeho M. W.; Stringer, Simon M.
2015-01-01
We propose and examine a model for how perisaccadic visual receptive field dynamics, observed in a range of primate brain areas such as LIP, FEF, SC, V3, V3A, V2, and V1, may develop through a biologically plausible process of unsupervised visually guided learning. These dynamics are associated with remapping, which is the phenomenon where receptive fields anticipate the consequences of saccadic eye movements. We find that a neural network model using a local associative synaptic learning rule, when exposed to visual scenes in conjunction with saccades, can account for a range of associated phenomena. In particular, our model demonstrates predictive and pre-saccadic remapping, responsiveness shifts around the time of saccades, and remapping from multiple directions. PMID:25717301
Different propagation speeds of recalled sequences in plastic spiking neural networks
NASA Astrophysics Data System (ADS)
Huang, Xuhui; Zheng, Zhigang; Hu, Gang; Wu, Si; Rasch, Malte J.
2015-03-01
Neural networks can generate spatiotemporal patterns of spike activity. Sequential activity learning and retrieval have been observed in many brain areas, and e.g. is crucial for coding of episodic memory in the hippocampus or generating temporal patterns during song production in birds. In a recent study, a sequential activity pattern was directly entrained onto the neural activity of the primary visual cortex (V1) of rats and subsequently successfully recalled by a local and transient trigger. It was observed that the speed of activity propagation in coordinates of the retinotopically organized neural tissue was constant during retrieval regardless how the speed of light stimulation sweeping across the visual field during training was varied. It is well known that spike-timing dependent plasticity (STDP) is a potential mechanism for embedding temporal sequences into neural network activity. How training and retrieval speeds relate to each other and how network and learning parameters influence retrieval speeds, however, is not well described. We here theoretically analyze sequential activity learning and retrieval in a recurrent neural network with realistic synaptic short-term dynamics and STDP. Testing multiple STDP rules, we confirm that sequence learning can be achieved by STDP. However, we found that a multiplicative nearest-neighbor (NN) weight update rule generated weight distributions and recall activities that best matched the experiments in V1. Using network simulations and mean-field analysis, we further investigated the learning mechanisms and the influence of network parameters on recall speeds. Our analysis suggests that a multiplicative STDP rule with dominant NN spike interaction might be implemented in V1 since recall speed was almost constant in an NMDA-dominant regime. Interestingly, in an AMPA-dominant regime, neural circuits might exhibit recall speeds that instead follow the change in stimulus speeds. This prediction could be tested in experiments.
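A multiplicative nearest-neighbour STDP update of the kind the authors favor can be sketched as follows, for a single nearest pre/post spike pair. The defining feature is that the update scales with the current weight (potentiation with the headroom to the maximum, depression with the weight itself); parameter values here are illustrative, not those fitted to the V1 data:

```python
import numpy as np

def mult_nn_stdp(w, t_pre, t_post, lr=0.005, tau=20.0, w_max=1.0):
    """Multiplicative nearest-neighbour STDP for one pre/post spike pair.

    Only the nearest pre/post pair contributes (nearest-neighbour interaction),
    and the magnitude of the change depends multiplicatively on the current
    weight, which pushes weights toward a unimodal, bounded distribution.
    """
    dt = t_post - t_pre
    if dt > 0:   # potentiation scales with remaining headroom to w_max
        w += lr * (w_max - w) * np.exp(-dt / tau)
    else:        # depression scales with the current weight
        w -= lr * w * np.exp(dt / tau)
    return float(np.clip(w, 0.0, w_max))
```

Compared with an additive rule, this soft-bounded form avoids the bimodal all-or-nothing weight distributions and is part of why it reproduced the recorded weight statistics best.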
Synaptic electronics: materials, devices and applications.
Kuzum, Duygu; Yu, Shimeng; Wong, H-S Philip
2013-09-27
In this paper, the recent progress of synaptic electronics is reviewed. The basics of biological synaptic plasticity and learning are described. The material properties and electrical switching characteristics of a variety of synaptic devices are discussed, with a focus on the use of synaptic devices for neuromorphic or brain-inspired computing. Performance metrics desirable for large-scale implementations of synaptic devices are illustrated. A review of recent work on targeted computing applications with synaptic devices is presented.
Sandi, Carmen; Davies, Heather A; Cordero, M Isabel; Rodriguez, Jose J; Popov, Victor I; Stewart, Michael G
2003-06-01
We examined the impact of exposing rats to two life experiences of a very different nature (stress and learning) on synaptic structures in hippocampal area CA3. Rats were subjected to either (i) chronic restraint stress for 21 days, and/or (ii) spatial training in a Morris water maze. At the behavioural level, restraint stress induced an impairment of acquisition of the spatial response. Moreover, restraint stress and water maze training had contrasting impacts on CA3 synaptic morphometry. Chronic stress induced a loss of simple asymmetric synapses [those with an unperforated postsynaptic density (PSD)], whilst water maze learning reversed this effect, promoting a rapid recovery of stress-induced synaptic loss within 2-3 days following stress. In addition, in unstressed animals a correlation was found between learning efficiency and the density of synapses with an unperforated PSD: the better the performance in the water maze, the lower the synaptic density. Water maze training increased the number of perforated synapses (those with a segmented PSD) in CA3, both in stressed and, more notably, in unstressed rats. The distinct effects of stress and learning on CA3 synapses reported here provide a neuroanatomical basis for the reported divergent effects of these experiences on hippocampal synaptic activity, i.e. stress as a suppressor and learning as a promoter of synaptic plasticity.
[Progress on metaplasticity and its role in learning and memory].
Wang, Shao-Li; Lu, Wei
2016-08-25
Long-term potentiation (LTP) and long-term depression (LTD) are two major forms of synaptic plasticity that are widely considered as important cellular models of learning and memory. Metaplasticity is defined as the plasticity of synaptic plasticity and thus is an advanced form of plasticity. The history of synaptic activity can affect the subsequent synaptic plasticity induction. Therefore, it is important to study metaplasticity to explore new mechanisms underlying various brain functions including learning and memory. Since the concept of metaplasticity was proposed, it has attracted widespread attention and prompted numerous researchers to explore it in greater detail. These new-found experimental phenomena and cellular mechanisms have established the basis of theoretical studies on metaplasticity. In recent years, researchers have found that metaplasticity can not only affect synaptic plasticity, but also regulate the neural network to encode specific content and enhance learning and memory. These findings have greatly enriched our knowledge of plasticity and opened a new route to study the mechanisms of learning and memory. In this review, we discuss the recent progress on metaplasticity in three aspects: (1) the molecular mechanisms of metaplasticity; (2) the role of metaplasticity in learning and memory; and (3) the outlook for future studies of metaplasticity.
From modulated Hebbian plasticity to simple behavior learning through noise and weight saturation.
Soltoggio, Andrea; Stanley, Kenneth O
2012-10-01
Synaptic plasticity is a major mechanism for adaptation, learning, and memory. Yet current models struggle to link local synaptic changes to the acquisition of behaviors. The aim of this paper is to demonstrate a computational relationship between local Hebbian plasticity and behavior learning by exploiting two traditionally unwanted features: neural noise and synaptic weight saturation. A modulation signal is employed to arbitrate the sign of plasticity: when the modulation is positive, the synaptic weights saturate to express exploitative behavior; when it is negative, the weights converge to average values, and neural noise reconfigures the network's functionality. This process is demonstrated through simulating neural dynamics in the autonomous emergence of fearful and aggressive navigating behaviors and in the solution to reward-based problems. The neural model learns, memorizes, and modifies different behaviors that lead to positive modulation in a variety of settings. The algorithm establishes a simple relationship between local plasticity and behavior learning by demonstrating the utility of noise and weight saturation. Moreover, it provides a new tool to simulate adaptive behavior, and contributes to bridging the gap between synaptic changes and behavior in neural computation. Copyright © 2012 Elsevier Ltd. All rights reserved.
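The modulation-gated rule described here, with noise and weight saturation as functional rather than unwanted features, might be sketched like this. The navigation and reward tasks from the paper are not modeled, and the function name, parameters, and values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def modulated_hebbian(w, pre, post, m, lr=0.1, noise=0.01, w_max=1.0):
    """Modulation-gated Hebbian update with neural noise and saturation.

    m > 0 drives correlated weights toward saturation, locking in the current
    (exploitative) behavior; m < 0 reverses the Hebbian term, and the additive
    noise then reconfigures the network so it can express a new behavior.
    """
    dw = lr * m * pre * post + noise * rng.standard_normal(np.shape(w))
    return np.clip(w + dw, 0.0, w_max)
```

The key design point from the abstract is visible directly: saturation makes positively modulated behaviors stable attractors of the weights, while noise supplies the variability needed to escape them when the modulation turns negative.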
Sieling, Fred; Bédécarrats, Alexis; Simmers, John; Prinz, Astrid A; Nargeot, Romuald
2014-05-05
Rewarding stimuli in associative learning can transform the irregularly and infrequently generated motor patterns underlying motivated behaviors into output for accelerated and stereotyped repetitive action. This transition to compulsive behavioral expression is associated with modified synaptic and membrane properties of central neurons, but establishing the causal relationships between cellular plasticity and motor adaptation has remained a challenge. We found previously that changes in the intrinsic excitability and electrical synapses of identified neurons in Aplysia's central pattern-generating network for feeding are correlated with a switch to compulsive-like motor output expression induced by in vivo operant conditioning. Here, we used specific computer-simulated ionic currents in vitro to selectively replicate or suppress the membrane and synaptic plasticity resulting from this learning. In naive in vitro preparations, such experimental manipulation of neuronal membrane properties alone increased the frequency but not the regularity of feeding motor output found in preparations from operantly trained animals. On the other hand, changes in synaptic strength alone switched the regularity but not the frequency of feeding output from naive to trained states. However, simultaneously imposed changes in both membrane and synaptic properties reproduced both major aspects of the motor plasticity. Conversely, in preparations from trained animals, experimental suppression of the membrane and synaptic plasticity abolished the increase in frequency and regularity of the learned motor output expression. These data establish direct causality for the contributions of distinct synaptic and nonsynaptic adaptive processes to complementary facets of a compulsive behavior resulting from operant reward learning. Copyright © 2014 Elsevier Ltd. All rights reserved.
Liu, Zhiqiang; Han, Jing; Jia, Lintao; Maillet, Jean-Christian; Bai, Guang; Xu, Lin; Jia, Zhengping; Zheng, Qiaohua; Zhang, Wandong; Monette, Robert; Merali, Zul; Zhu, Zhou; Wang, Wei; Ren, Wei; Zhang, Xia
2010-01-01
Drug addiction is an association of compulsive drug use with long-term associative learning/memory. Multiple forms of learning/memory are primarily subserved by activity- or experience-dependent synaptic long-term potentiation (LTP) and long-term depression (LTD). Recent studies suggest LTP expression in locally activated glutamate synapses onto dopamine neurons (local Glu-DA synapses) of the midbrain ventral tegmental area (VTA) following a single or chronic exposure to many drugs of abuse, whereas a single exposure to cannabinoid did not significantly affect synaptic plasticity at these synapses. It is unknown whether chronic exposure of cannabis (marijuana or cannabinoids), the most commonly used illicit drug worldwide, induce LTP or LTD at these synapses. More importantly, whether such alterations in VTA synaptic plasticity causatively contribute to drug addictive behavior has not previously been addressed. Here we show in rats that chronic cannabinoid exposure activates VTA cannabinoid CB1 receptors to induce transient neurotransmission depression at VTA local Glu-DA synapses through activation of NMDA receptors and subsequent endocytosis of AMPA receptor GluR2 subunits. A GluR2-derived peptide blocks cannabinoid-induced VTA synaptic depression and conditioned place preference, i.e., learning to associate drug exposure with environmental cues. These data not only provide the first evidence, to our knowledge, that NMDA receptor-dependent synaptic depression at VTA dopamine circuitry requires GluR2 endocytosis, but also suggest an essential contribution of such synaptic depression to cannabinoid-associated addictive learning, in addition to pointing to novel pharmacological strategies for the treatment of cannabis addiction. PMID:21187978
Hiratani, Naoki; Fukai, Tomoki
2016-01-01
In the adult mammalian cortex, a small fraction of spines are created and eliminated every day, and the resultant synaptic connection structure is highly nonrandom, even in local circuits. However, it remains unknown whether a particular synaptic connection structure is functionally advantageous in local circuits, and why creation and elimination of synaptic connections is necessary in addition to rich synaptic weight plasticity. To answer these questions, we studied an inference task model through theoretical and numerical analyses. We demonstrate that a robustly beneficial network structure naturally emerges by combining Hebbian-type synaptic weight plasticity and wiring plasticity. Especially in a sparsely connected network, wiring plasticity achieves reliable computation by enabling efficient information transmission. Furthermore, the proposed rule reproduces the experimentally observed correlation between spine dynamics and task performance. PMID:27303271
Learning Universal Computations with Spikes
Thalmeier, Dominik; Uhlmann, Marvin; Kappen, Hilbert J.; Memmesheimer, Raoul-Martin
2016-01-01
Providing the neurobiological basis of information processing in higher animals, spiking neural networks must be able to learn a variety of complicated computations, including the generation of appropriate, possibly delayed reactions to inputs and the self-sustained generation of complex activity patterns, e.g. for locomotion. Many such computations require previous building of intrinsic world models. Here we show how spiking neural networks may solve these different tasks. Firstly, we derive constraints under which classes of spiking neural networks lend themselves to substrates of powerful general purpose computing. The networks contain dendritic or synaptic nonlinearities and have a constrained connectivity. We then combine such networks with learning rules for outputs or recurrent connections. We show that this allows learning of even difficult benchmark tasks such as the self-sustained generation of desired low-dimensional chaotic dynamics or memory-dependent computations. Furthermore, we show how spiking networks can build models of external world systems and use the acquired knowledge to control them. PMID:27309381
Toward a Neurocentric View of Learning.
Titley, Heather K; Brunel, Nicolas; Hansel, Christian
2017-07-05
Synaptic plasticity (e.g., long-term potentiation [LTP]) is considered the cellular correlate of learning. Recent optogenetic studies on memory engram formation assign a critical role in learning to suprathreshold activation of neurons and their integration into active engrams ("engram cells"). Here we review evidence that ensemble integration may result from LTP but also from cell-autonomous changes in membrane excitability. We propose that synaptic plasticity determines synaptic connectivity maps, whereas intrinsic plasticity-possibly separated in time-amplifies neuronal responsiveness and acutely drives engram integration. Our proposal marks a move away from an exclusively synaptocentric toward a non-exclusive, neurocentric view of learning. Copyright © 2017 Elsevier Inc. All rights reserved.
Spatial Object Recognition Enables Endogenous LTD that Curtails LTP in the Mouse Hippocampus
Goh, Jinzhong Jeremy
2013-01-01
Although synaptic plasticity is believed to comprise the cellular substrate for learning and memory, limited direct evidence exists that hippocampus-dependent learning actually triggers synaptic plasticity. It is likely, however, that long-term potentiation (LTP) works in concert with its counterpart, long-term depression (LTD) in the creation of spatial memory. It has been reported in rats that weak synaptic plasticity is facilitated into persistent plasticity if afferent stimulation is coupled with a novel spatial learning event. It is not known if this phenomenon also occurs in other species. We recorded from the hippocampal CA1 of freely behaving mice and observed that novel spatial learning triggers endogenous LTD. Specifically, we observed that LTD is enabled when test-pulse afferent stimulation is given during the learning of object constellations or during a spatial object recognition task. Intriguingly, LTP is significantly impaired by the same tasks, suggesting that LTD is the main cellular substrate for this type of learning. These data indicate that learning-facilitated plasticity is not exclusive to rats and that spatial learning leads to endogenous LTD in the hippocampus, suggesting an important role for this type of synaptic plasticity in the creation of hippocampus-dependent memory. PMID:22510536
Learning and Memory, Part II: Molecular Mechanisms of Synaptic Plasticity
ERIC Educational Resources Information Center
Lombroso, Paul; Ogren, Marilee
2009-01-01
The molecular events that are responsible for strengthening synaptic connections and how these are linked to memory and learning are discussed. The laboratory preparations that allow the investigation of these events are also described.
CREB Selectively Controls Learning-Induced Structural Remodeling of Neurons
ERIC Educational Resources Information Center
Middei, Silvia; Spalloni, Alida; Longone, Patrizia; Pittenger, Christopher; O'Mara, Shane M.; Marie, Helene; Ammassari-Teule, Martine
2012-01-01
The modulation of synaptic strength associated with learning is post-synaptically regulated by changes in density and shape of dendritic spines. The transcription factor CREB (cAMP response element binding protein) is required for memory formation and in vitro dendritic spine rearrangements, but its role in learning-induced remodeling of neurons…
CACNA1C gene regulates behavioral strategies in operant rule learning.
Koppe, Georgia; Mallien, Anne Stephanie; Berger, Stefan; Bartsch, Dusan; Gass, Peter; Vollmayr, Barbara; Durstewitz, Daniel
2017-06-01
Behavioral experiments are usually designed to tap into a specific cognitive function, but animals may solve a given task through a variety of different and individual behavioral strategies, some of them not foreseen by the experimenter. Animal learning may therefore be seen more as the process of selecting among, and adapting, potential behavioral policies, rather than mere strengthening of associative links. Calcium influx through high-voltage-gated Ca2+ channels is central to synaptic plasticity, and altered expression of Cav1.2 channels and the CACNA1C gene have been associated with severe learning deficits and psychiatric disorders. Given this, we were interested in how specifically a selective functional ablation of the Cacna1c gene would modulate the learning process. Using a detailed, individual-level analysis of learning on an operant cue discrimination task in terms of behavioral strategies, combined with Bayesian selection among computational models estimated from the empirical data, we show that a Cacna1c knockout does not impair learning in general but has a much more specific effect: the majority of Cacna1c knockout mice still managed to increase reward feedback across trials but did so by adapting an outcome-based strategy, while the majority of matched controls adopted the experimentally intended cue-association rule. Our results thus point to a quite specific role of a single gene in learning and highlight that much more mechanistic insight could be gained by examining response patterns in terms of a larger repertoire of potential behavioral strategies. The results may also have clinical implications for treating psychiatric disorders. PMID:28604818
Nanoelectronic programmable synapses based on phase change materials for brain-inspired computing.
Kuzum, Duygu; Jeyasingh, Rakesh G D; Lee, Byoungil; Wong, H-S Philip
2012-05-09
Brain-inspired computing is an emerging field, which aims to extend the capabilities of information technology beyond digital logic. A compact nanoscale device, emulating biological synapses, is needed as the building block for brain-like computational systems. Here, we report a new nanoscale electronic synapse based on technologically mature phase change materials employed in optical data storage and nonvolatile memory applications. We utilize continuous resistance transitions in phase change materials to mimic the analog nature of biological synapses, enabling the implementation of a synaptic learning rule. We demonstrate different forms of spike-timing-dependent plasticity using the same nanoscale synapse with picojoule level energy consumption.
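A minimal sketch of the idea of implementing a synaptic learning rule on an analog phase-change synapse; all device constants here are invented, not measured values from the paper.

```python
# Hypothetical STDP rule mapped onto the bounded analog conductance of a
# phase-change cell: gradual crystallization potentiates, gradual
# amorphization depresses.
import math

G_MIN, G_MAX = 0.1, 1.0        # conductance limits (arbitrary units)
A_PLUS = A_MINUS = 0.05        # learning amplitudes
TAU = 20.0                     # STDP time constant (ms)

def stdp_update(g, dt):
    """Update conductance g for a spike pair with dt = t_post - t_pre (ms)."""
    if dt > 0:   # pre before post: potentiate, bounded by G_MAX
        return g + A_PLUS * math.exp(-dt / TAU) * (G_MAX - g)
    # post before pre: depress, bounded by G_MIN
    return g - A_MINUS * math.exp(dt / TAU) * (g - G_MIN)

g = 0.5
for _ in range(10):
    g = stdp_update(g, dt=5.0)   # repeated causal pairings drive g upward
print(round(g, 3))
```

The multiplicative bounds mimic the saturating resistance window of a real device.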
1990-11-30
signal flow, xi. The "learning" of such statistics could result from synaptic modification rules similar to those known to exist in the brain ... in figure 1 had been established. If the series are approximated by Gaussian processes, the information flow from X to Y can be expressed by the ... Based on this model, the information flow in different directions was calculated using eq. (1). RESULTS: Figure 2 illustrates the information flow
Zinc transporter 3 is involved in learned fear and extinction, but not in innate fear.
Martel, Guillaume; Hevi, Charles; Friebely, Olivia; Baybutt, Trevor; Shumyatsky, Gleb P
2010-11-01
Synaptically released Zn²+ is a potential modulator of neurotransmission and synaptic plasticity in fear-conditioning pathways. Zinc transporter 3 (ZnT3) knock-out (KO) mice are well suited to test the role of zinc in learned fear, because ZnT3 is colocalized with synaptic zinc, responsible for its transport to synaptic vesicles, highly enriched in the amygdala-associated neural circuitry, and ZnT3 KO mice lack Zn²+ in synaptic vesicles. However, earlier work reported no deficiency in fear memory in ZnT3 KO mice, which is surprising based on the effects of Zn²+ on amygdala synaptic plasticity. We therefore reexamined ZnT3 KO mice in various tasks for learned and innate fear. The mutants were deficient in a weak fear-conditioning protocol using single tone-shock pairing but showed normal memory when a stronger, five-pairing protocol was used. ZnT3 KO mice were deficient in memory when a tone was presented as complex auditory information in a discontinuous fashion. Moreover, ZnT3 KO mice showed abnormality in trace fear conditioning and in fear extinction. By contrast, ZnT3 KO mice had normal anxiety. Thus, ZnT3 is involved in associative fear memory and extinction, but not in innate fear, consistent with the role of synaptic zinc in amygdala synaptic plasticity.
Hippocampal LTP and contextual learning require surface diffusion of AMPA receptors.
Penn, A C; Zhang, C L; Georges, F; Royer, L; Breillat, C; Hosy, E; Petersen, J D; Humeau, Y; Choquet, D
2017-09-21
Long-term potentiation (LTP) of excitatory synaptic transmission has long been considered a cellular correlate for learning and memory. Early LTP (less than 1 h) had initially been explained either by presynaptic increases in glutamate release or by direct modification of postsynaptic AMPA (α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid) receptor function. Compelling models have more recently proposed that synaptic potentiation can occur by the recruitment of additional postsynaptic AMPA receptors (AMPARs), sourced either from an intracellular reserve pool by exocytosis or from nearby extra-synaptic receptors pre-existing on the neuronal surface. However, the exact mechanism through which synapses can rapidly recruit new AMPARs during early LTP remains unknown. In particular, direct evidence for a pivotal role of AMPAR surface diffusion as a trafficking mechanism in synaptic plasticity is still lacking. Here, using AMPAR immobilization approaches, we show that interfering with AMPAR surface diffusion markedly impairs synaptic potentiation of Schaffer collaterals and commissural inputs to the CA1 area of the mouse hippocampus in cultured slices, acute slices and in vivo. Our data also identify distinct contributions of various AMPAR trafficking routes to the temporal profile of synaptic potentiation. In addition, AMPAR immobilization in vivo in the dorsal hippocampus inhibited fear conditioning, indicating that AMPAR diffusion is important for the early phase of contextual learning. Therefore, our results provide a direct demonstration that the recruitment of new receptors to synapses by surface diffusion is a critical mechanism for the expression of LTP and hippocampal learning. Since AMPAR surface diffusion is dictated by weak Brownian forces that are readily perturbed by protein-protein interactions, we anticipate that this fundamental trafficking mechanism will be a key target for modulating synaptic potentiation and learning.
Weinmann, Oliver; Kellner, Yves; Yu, Xinzhu; Vicente, Raul; Gullo, Miriam; Kasper, Hansjörg; Lussi, Karin; Ristic, Zorica; Luft, Andreas R.; Rioult-Pedotti, Mengia; Zuo, Yi; Zagrebelsky, Marta; Schwab, Martin E.
2014-01-01
The membrane protein Nogo-A is known as an inhibitor of axonal outgrowth and regeneration in the CNS. However, its physiological functions in the normal adult CNS remain incompletely understood. Here, we investigated the role of Nogo-A in cortical synaptic plasticity and motor learning in the uninjured adult rodent motor cortex. Nogo-A and its receptor NgR1 are present at cortical synapses. Acute treatment of slices with function-blocking antibodies (Abs) against Nogo-A or against NgR1 increased long-term potentiation (LTP) induced by stimulation of layer 2/3 horizontal fibers. Furthermore, anti-Nogo-A Ab treatment increased LTP saturation levels, whereas long-term depression remained unchanged, thus leading to an enlarged synaptic modification range. In vivo, intrathecal application of Nogo-A-blocking Abs resulted in a higher dendritic spine density at cortical pyramidal neurons due to an increase in spine formation as revealed by in vivo two-photon microscopy. To investigate whether these changes in synaptic plasticity correlate with motor learning, we trained rats to learn a skilled forelimb-reaching task while receiving anti-Nogo-A Abs. Learning of this cortically controlled precision movement was improved upon anti-Nogo-A Ab treatment. Our results identify Nogo-A as an influential molecular modulator of synaptic plasticity and as a regulator for learning of skilled movements in the motor cortex. PMID:24966370
Out with the old and in with the new: Synaptic mechanisms of extinction in the amygdala
Maren, Stephen
2014-01-01
Considerable research indicates that long-term synaptic plasticity in the amygdala underlies the acquisition of emotional memories, including those learned during Pavlovian fear conditioning. Much less is known about the synaptic mechanisms involved in other forms of associative learning, including extinction, that update fear memories. Extinction learning might reverse conditioning-related changes (e.g., depotentiation) or induce plasticity at inhibitory synapses (e.g., long-term potentiation) to suppress conditioned fear responses. Either mechanism must account for fear recovery phenomena after extinction, as well as savings of extinction after fear recovery. PMID:25312830
Weighing the Evidence in Peters' Rule: Does Neuronal Morphology Predict Connectivity?
Rees, Christopher L; Moradi, Keivan; Ascoli, Giorgio A
2017-02-01
Although the importance of network connectivity is increasingly recognized, identifying synapses remains challenging relative to the routine characterization of neuronal morphology. Thus, researchers frequently employ axon-dendrite colocations as proxies of potential connections. This putative equivalence, commonly referred to as Peters' rule, has been recently studied at multiple levels and scales, fueling passionate debates regarding its validity. Our critical literature review identifies three conceptually distinct but often confused applications: inferring neuron type circuitry, predicting synaptic contacts among individual cells, and estimating synapse numbers within neuron pairs. Paradoxically, at the originally proposed cell-type level, Peters' rule remains largely untested. Leveraging Hippocampome.org, we validate and refine the relationship between axonal-dendritic colocations and synaptic circuits, clarifying the interpretation of existing and forthcoming data. Copyright © 2016 Elsevier Ltd. All rights reserved.
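A toy sketch of a Peters'-rule-style estimate of the kind discussed above (the grid, the occupancy sets, and the conversion factor are all invented): overlap axonal and dendritic occupancy on a voxel grid and treat each axon-dendrite colocation as a potential synapse.

```python
# Axonal and dendritic occupancy of two hypothetical neurons, as voxel sets.
axon_voxels = {(x, 0, 0) for x in range(10)} | {(x, 1, 0) for x in range(5)}
dendrite_voxels = {(x, y, 0) for x in range(3, 8) for y in range(3)}

# Peters'-rule proxy: colocations are candidate synaptic contacts.
colocations = axon_voxels & dendrite_voxels
p_syn_per_colocation = 0.2     # hypothetical colocation-to-synapse rate
expected_synapses = p_syn_per_colocation * len(colocations)
print(len(colocations), expected_synapses)
```

The debate reviewed above is precisely about whether such a fixed conversion factor is valid, and at which level (cell type, cell pair, or synapse count) the proxy holds.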
Hausrat, Torben J.; Muhia, Mary; Gerrow, Kimberly; Thomas, Philip; Hirdes, Wiebke; Tsukita, Sachiko; Heisler, Frank F.; Herich, Lena; Dubroqua, Sylvain; Breiden, Petra; Feldon, Joram; Schwarz, Jürgen R; Yee, Benjamin K.; Smart, Trevor G.; Triller, Antoine; Kneussel, Matthias
2015-01-01
Neurotransmitter receptor density is a major variable in regulating synaptic strength. Receptors rapidly exchange between synapses and intracellular storage pools through endocytic recycling. In addition, lateral diffusion and confinement exchanges surface membrane receptors between synaptic and extrasynaptic sites. However, the signals that regulate this transition are currently unknown. GABAA receptors containing α5-subunits (GABAAR-α5) concentrate extrasynaptically through radixin (Rdx)-mediated anchorage at the actin cytoskeleton. Here we report a novel mechanism that regulates adjustable plasma membrane receptor pools in the control of synaptic receptor density. RhoA/ROCK signalling regulates an activity-dependent Rdx phosphorylation switch that uncouples GABAAR-α5 from its extrasynaptic anchor, thereby enriching synaptic receptor numbers. Thus, the unphosphorylated form of Rdx alters mIPSCs. Rdx gene knockout impairs reversal learning and short-term memory, and Rdx phosphorylation in wild-type mice exhibits experience-dependent changes when exposed to novel environments. Our data suggest an additional mode of synaptic plasticity, in which extrasynaptic receptor reservoirs supply synaptic GABAARs. PMID:25891999
Ajemian, Robert; D’Ausilio, Alessandro; Moorman, Helene; Bizzi, Emilio
2013-01-01
During the process of skill learning, synaptic connections in our brains are modified to form motor memories of learned sensorimotor acts. The more plastic the adult brain is, the easier it is to learn new skills or adapt to neurological injury. However, if the brain is too plastic and the pattern of synaptic connectivity is constantly changing, new memories will overwrite old memories, and learning becomes unstable. This trade-off is known as the stability–plasticity dilemma. Here a theory of sensorimotor learning and memory is developed whereby synaptic strengths are perpetually fluctuating without causing instability in motor memory recall, as long as the underlying neural networks are sufficiently noisy and massively redundant. The theory implies two distinct stages of learning—preasymptotic and postasymptotic—because once the error drops to a level comparable to that of the noise-induced error, further error reduction requires altered network dynamics. A key behavioral prediction derived from this analysis is tested in a visuomotor adaptation experiment, and the resultant learning curves are modeled with a nonstationary neural network. Next, the theory is used to model two-photon microscopy data that show, in animals, high rates of dendritic spine turnover, even in the absence of overt behavioral learning. Finally, the theory predicts enhanced task selectivity in the responses of individual motor cortical neurons as the level of task expertise increases. From these considerations, a unique interpretation of sensorimotor memory is proposed—memories are defined not by fixed patterns of synaptic weights but, rather, by nonstationary synaptic patterns that fluctuate coherently. PMID:24324147
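The core claim, that redundant networks can tolerate perpetual weight fluctuation without degrading recall, can be sketched in a few lines. This is an illustrative toy, not the authors' model; the network size and noise scale are invented.

```python
# With a redundant readout y = w . x, weight fluctuations confined to the
# null space of x leave the recalled output (the "memory") intact.
import random

random.seed(0)
n = 50
x = [1.0] * n                                 # shared input pattern
w = [random.gauss(0, 1) for _ in range(n)]    # initial synaptic weights
w0 = list(w)

def output(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

y0 = output(w, x)
for _ in range(1000):
    # Zero-sum perturbation of two weights: for this x it lies in the null
    # space of the readout, so y is unchanged while the weights wander.
    i, j = random.randrange(n), random.randrange(n)
    d = random.gauss(0, 0.1)
    w[i] += d
    w[j] -= d

drift = sum((a - b) ** 2 for a, b in zip(w, w0)) ** 0.5
print(round(drift, 2), abs(output(w, x) - y0))
```

The individual weights drift substantially, yet the stored input-output mapping is preserved, mirroring the proposed picture of coherently fluctuating synaptic patterns.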
Bill, Johannes; Buesing, Lars; Habenschuss, Stefan; Nessler, Bernhard; Maass, Wolfgang; Legenstein, Robert
2015-01-01
During the last decade, Bayesian probability theory has emerged as a framework in cognitive science and neuroscience for describing perception, reasoning and learning of mammals. However, our understanding of how probabilistic computations could be organized in the brain, and how the observed connectivity structure of cortical microcircuits supports these calculations, is rudimentary at best. In this study, we investigate statistical inference and self-organized learning in a spatially extended spiking network model, that accommodates both local competitive and large-scale associative aspects of neural information processing, under a unified Bayesian account. Specifically, we show how the spiking dynamics of a recurrent network with lateral excitation and local inhibition in response to distributed spiking input, can be understood as sampling from a variational posterior distribution of a well-defined implicit probabilistic model. This interpretation further permits a rigorous analytical treatment of experience-dependent plasticity on the network level. Using machine learning theory, we derive update rules for neuron and synapse parameters which equate with Hebbian synaptic and homeostatic intrinsic plasticity rules in a neural implementation. In computer simulations, we demonstrate that the interplay of these plasticity rules leads to the emergence of probabilistic local experts that form distributed assemblies of similarly tuned cells communicating through lateral excitatory connections. The resulting sparse distributed spike code of a well-adapted network carries compressed information on salient input features combined with prior experience on correlations among them. Our theory predicts that the emergence of such efficient representations benefits from network architectures in which the range of local inhibition matches the spatial extent of pyramidal cells that share common afferent input. PMID:26284370
Operant conditioning of synaptic and spiking activity patterns in single hippocampal neurons.
Ishikawa, Daisuke; Matsumoto, Nobuyoshi; Sakaguchi, Tetsuya; Matsuki, Norio; Ikegaya, Yuji
2014-04-02
Learning is a process of plastic adaptation through which a neural circuit generates a more preferable outcome; however, at a microscopic level, little is known about how synaptic activity is patterned into a desired configuration. Here, we report that animals can generate a specific form of synaptic activity in a given neuron in the hippocampus. In awake, head-restrained mice, we applied electrical stimulation to the lateral hypothalamus, a reward-associated brain region, when whole-cell patch-clamped CA1 neurons exhibited spontaneous synaptic activity that met preset criteria. Within 15 min, the mice learned to frequently generate the excitatory synaptic input pattern that satisfied the criteria. This reinforcement learning of synaptic activity was not observed for inhibitory input patterns. When a burst unit activity pattern was conditioned in paired and nonpaired paradigms, the frequency of burst-spiking events increased and decreased, respectively. The burst reinforcement occurred in the conditioned neuron but not in other adjacent neurons; however, ripple field oscillations were concomitantly reinforced. Neural conditioning depended on activation of NMDA receptors and dopamine D1 receptors. Acutely stressed mice and depression model mice that were subjected to forced swimming failed to exhibit the neural conditioning. This learning deficit was rescued by repetitive treatment with fluoxetine, an antidepressant. Therefore, internally motivated animals are capable of routing an ongoing action potential series into a specific neural pathway of the hippocampal network.
Barmashenko, Gleb; Buttgereit, Jens; Herring, Neil; Bader, Michael; Özcelik, Cemil; Manahan-Vaughan, Denise; Braunewell, Karl H.
2014-01-01
The second messenger cyclic GMP affects synaptic transmission and modulates synaptic plasticity and certain types of learning and memory processes. The impact of the natriuretic peptide receptor B (NPR-B) and its ligand C-type natriuretic peptide (CNP), one of several cGMP producing signaling systems, on hippocampal synaptic plasticity and learning is, however, less well understood. We have previously shown that the NPR-B ligand CNP increases the magnitude of long-term depression (LTD) in hippocampal area CA1, while reducing the induction of long-term potentiation (LTP). We have extended this line of research to show that bidirectional plasticity is affected in the opposite way in rats expressing a dominant-negative mutant of NPR-B (NSE-NPR-BΔKC) lacking the intracellular guanylyl cyclase domain under control of a promoter for neuron-specific enolase. The brain cells of these transgenic rats express functional dimers of the NPR-B receptor containing the dominant-negative NPR-BΔKC mutant, and therefore show decreased CNP-stimulated cGMP-production in brain membranes. The NPR-B transgenic rats display enhanced LTP but reduced LTD in hippocampal slices. When the frequency-dependence of synaptic modification to afferent stimulation in the range of 1–100 Hz was assessed in transgenic rats, the threshold for both LTP and LTD induction was shifted to lower frequencies. In parallel, NPR-BΔKC rats exhibited an enhancement in exploratory and learning behavior. These results indicate that bidirectional plasticity and learning and memory mechanisms are affected in transgenic rats expressing a dominant-negative mutant of NPR-B. Our data substantiate the hypothesis that NPR-B-dependent cGMP signaling has a modulatory role in synaptic information storage and learning. PMID:25520616
Huang, Lianyan; Yang, Guang
2014-01-01
Background: Recent studies in rodents suggest that repeated and prolonged anesthetic exposure at early stages of development leads to cognitive and behavioral impairments later in life. However, the underlying mechanism remains unknown. In this study, we tested whether exposure to general anesthesia during early development disrupts the maturation of synaptic circuits and compromises learning-related synaptic plasticity later in life. Methods: Mice received ketamine/xylazine (20/3 mg/kg) anesthesia one or three times, starting at either early [postnatal day 14 (P14)] or late (P21) stages of development (n=105). Control mice received saline injections (n=34). At P30, mice were subjected to rotarod motor training and fear conditioning. Motor learning-induced synaptic remodeling was examined in vivo by repeatedly imaging fluorescently labeled postsynaptic dendritic spines in the primary motor cortex before and after training using two-photon microscopy. Results: Three exposures to ketamine/xylazine anesthesia between P14–18 impaired the animals' motor learning and learning-dependent dendritic spine plasticity [new spine formation, 8.4 ± 1.3% (mean ± SD) versus 13.4 ± 1.8%, P = 0.002] without affecting fear memory or cell apoptosis. One exposure at P14 or three exposures between P21–25 had no effect on the animals' motor learning or spine plasticity. Finally, enriched motor experience ameliorated the anesthesia-induced motor learning impairment and synaptic deficits. Conclusion: Our study demonstrates that repeated exposures to ketamine/xylazine during early development impair motor learning and learning-dependent dendritic spine plasticity later in life. The reduction in synaptic structural plasticity may underlie anesthesia-induced behavioral impairment. PMID:25575163
Panda, Priyadarshini; Roy, Kaushik
2017-01-01
Synaptic plasticity, the foundation for learning and memory formation in the human brain, manifests in various forms. Here, we combine the standard spike timing correlation based Hebbian plasticity with a non-Hebbian synaptic decay mechanism for training a recurrent spiking neural model to generate sequences. We show that inclusion of the adaptive decay of synaptic weights with standard STDP helps learn stable contextual dependencies between temporal sequences, while reducing the strong attractor states that emerge in recurrent models due to feedback loops. Furthermore, we show that the combined learning scheme suppresses the chaotic activity in the recurrent model substantially, thereby enhancing its ability to generate sequences consistently even in the presence of perturbations. PMID:29311774
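The combined rule can be sketched in a few lines. All constants below are invented, not the authors' parameterization: pair-based STDP supplies the Hebbian term, and an exponential relaxation toward baseline supplies the non-Hebbian decay.

```python
# Hebbian STDP plus non-Hebbian decay: a driven synapse settles at an
# elevated weight, while an idle synapse fades back toward baseline.
import math

TAU_STDP, TAU_DECAY = 20.0, 500.0   # ms
A_PLUS, A_MINUS = 0.02, 0.021
W_BASE = 0.0

def stdp(dt):
    """Hebbian kernel, dt = t_post - t_pre in ms."""
    if dt >= 0:
        return A_PLUS * math.exp(-dt / TAU_STDP)
    return -A_MINUS * math.exp(dt / TAU_STDP)

def decay(w, elapsed):
    """Non-Hebbian term: exponential relaxation toward W_BASE."""
    return W_BASE + (w - W_BASE) * math.exp(-elapsed / TAU_DECAY)

w_active = w_idle = 0.5
for _ in range(50):
    w_active = decay(w_active + stdp(8.0), 10.0)  # keeps receiving pairings
    w_idle = decay(w_idle, 10.0)                  # no spikes: weight fades
print(round(w_active, 3), round(w_idle, 3))
```

The decay prunes weights that are not reinforced by ongoing activity, which is the mechanism the abstract credits with weakening spurious attractor states.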
Dual coding with STDP in a spiking recurrent neural network model of the hippocampus.
Bush, Daniel; Philippides, Andrew; Husbands, Phil; O'Shea, Michael
2010-07-01
The firing rate of single neurons in the mammalian hippocampus has been demonstrated to encode for a range of spatial and non-spatial stimuli. It has also been demonstrated that phase of firing, with respect to the theta oscillation that dominates the hippocampal EEG during stereotype learning behaviour, correlates with an animal's spatial location. These findings have led to the hypothesis that the hippocampus operates using a dual (rate and temporal) coding system. To investigate the phenomenon of dual coding in the hippocampus, we examine a spiking recurrent network model with theta coded neural dynamics and an STDP rule that mediates rate-coded Hebbian learning when pre- and post-synaptic firing is stochastic. We demonstrate that this plasticity rule can generate both symmetric and asymmetric connections between neurons that fire at concurrent or successive theta phase, respectively, and subsequently produce both pattern completion and sequence prediction from partial cues. This unifies previously disparate auto- and hetero-associative network models of hippocampal function and provides them with a firmer basis in modern neurobiology. Furthermore, the encoding and reactivation of activity in mutually exciting Hebbian cell assemblies demonstrated here is believed to represent a fundamental mechanism of cognitive processing in the brain.
Examining Neuronal Connectivity and Its Role in Learning and Memory
NASA Astrophysics Data System (ADS)
Gala, Rohan
Learning and long-term memory formation are accompanied by changes in the patterns and weights of synaptic connections in the underlying neuronal network. However, the fundamental rules that drive connectivity changes, and the precise structure-function relationships within neuronal networks, remain elusive. Technological improvements over the last few decades have enabled the observation of large but specific subsets of neurons and their connections in unprecedented detail. Devising robust and automated computational methods is critical to distill information from ever-increasing volumes of raw experimental data. Moreover, statistical models and theoretical frameworks are required to interpret the data and assemble evidence into an understanding of brain function. In this thesis, I first describe computational methods to reconstruct connectivity based on light microscopy imaging experiments. Next, I use these methods to quantify structural changes in connectivity based on in vivo time-lapse imaging experiments. Finally, I present a theoretical model of associative learning that can explain many stereotypical features of experimentally observed connectivity.
Pontine Mechanisms of Respiratory Control
Dutschmann, Mathias; Dick, Thomas E.
2015-01-01
Pontine respiratory nuclei provide synaptic input to medullary rhythmogenic circuits to shape and adapt the breathing pattern. An understanding of this statement depends on appreciating breathing as a behavior, rather than a stereotypic rhythm. In this review, we focus on the pontine-mediated inspiratory off-switch (IOS) associated with postinspiratory glottal constriction. Further, IOS is examined in the context of pontine regulation of glottal resistance in response to multimodal sensory inputs and higher commands, which in turn governs the timing, duration, and patterning of respiratory airflow. In addition, network plasticity in respiratory control emerges during the development of the pons. Synaptic plasticity is required for dynamic and efficient modulation of the expiratory breathing pattern to cope with rapid changes from eupneic to adaptive breathing linked to exploratory (foraging and sniffing) and expulsive (vocalizing, coughing, sneezing, and retching) behaviors, as well as conveyance of basic emotions. The speed and complexity of changes in the breathing pattern of behaving animals implies that “learning to breathe” is necessary to adjust to changing internal and external states to maintain homeostasis and survival. PMID:23720253
NASA Astrophysics Data System (ADS)
Lim, Hyungkwang; Kim, Inho; Kim, Jin-Sang; Hwang, Cheol Seong; Jeong, Doo Seok
2013-09-01
Chemical synapses are important components of the large-scale neural network in the hippocampus of the mammalian brain, and a change in their weight is thought to underlie learning and memory. Thus, the realization of artificial chemical synapses is of crucial importance in achieving artificial neural networks that emulate the brain’s functionalities to some extent. This kind of research is often referred to as neuromorphic engineering. In this study, we report short-term memory behaviours of electrochemical capacitors (ECs) utilizing a TiO2 mixed ionic-electronic conductor and various reactive electrode materials, e.g. Ti, Ni, and Cr. Our experiments showed that the potentiation behaviours did not exhibit unlimited growth of synaptic weight. Instead, the behaviours exhibited limited synaptic weight growth that can be understood by means of an empirical equation similar to the Bienenstock-Cooper-Munro rule, employing a sliding threshold. The observed potentiation behaviours were analysed using the empirical equation and the differences between the different ECs were parameterized.
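The Bienenstock-Cooper-Munro-style behavior invoked above can be sketched as follows; the constants, the linear response model, and the discrete update are invented for illustration, not the paper's fitted empirical equation.

```python
# BCM-like rule with a sliding threshold: sustained potentiation raises the
# threshold, so weight growth is self-limiting rather than unbounded.
def bcm_step(w, theta, y, eta=0.01, tau=0.1):
    """dw ~ y*(y - theta); theta slides toward the running average of y**2,
    raising the bar for further potentiation as activity increases."""
    w += eta * y * (y - theta)
    theta += tau * (y * y - theta)
    return w, theta

w, theta = 0.2, 0.0
trace = []
for _ in range(200):
    y = w * 1.0                  # response to a constant unit input
    w, theta = bcm_step(w, theta, y)
    trace.append(w)
print(round(trace[-1], 3), round(theta, 3))
```

The weight grows monotonically but stays bounded, qualitatively matching the limited synaptic weight growth reported for the ECs.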
MATERNAL HYPOTHYROXINEMIA LEADS TO PERSISTENT DEFICITS IN HIPPOCAMPAL SYNAPTIC TRANSMISSION AND LEARNING IN RAT OFFSPRING. M.E. Gilbert1 and Li Sui2, Neurotoxicology Division, 1US EPA and 2National Research Council, Research Triangle Pk, NC 27711.
While severe hypothyroidis...
Learning may need only a few bits of synaptic precision
NASA Astrophysics Data System (ADS)
Baldassi, Carlo; Gerace, Federica; Lucibello, Carlo; Saglietti, Luca; Zecchina, Riccardo
2016-05-01
Learning in neural networks poses peculiar challenges when using discretized rather than continuous synaptic states. The choice of discrete synapses is motivated by biological reasoning and experiments, and possibly by hardware implementation considerations as well. In this paper we extend a previous large deviations analysis which unveiled the existence of peculiar dense regions in the space of synaptic states which account for the possibility of learning efficiently in networks with binary synapses. We extend the analysis to synapses with multiple states and generally more plausible biological features. The results clearly indicate that the overall qualitative picture is unchanged with respect to the binary case, and very robust to variation of the details of the model. We also provide quantitative results which suggest that the advantages of increasing the synaptic precision (i.e., the number of internal synaptic states) rapidly vanish after the first few bits, and therefore that, for practical applications, only a few bits may be needed for near-optimal performance, consistent with recent biological findings. Finally, we demonstrate how the theoretical analysis can be exploited to design efficient algorithmic search strategies.
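The diminishing returns of added synaptic precision can be illustrated with a toy experiment (this is not the paper's large-deviations analysis; the teacher perceptron, data sizes, and quantizer are invented): quantize a perceptron's weights to k bits and check how classification accuracy degrades.

```python
# Accuracy of a random teacher perceptron after quantizing its weights.
import random

random.seed(1)
N, P = 100, 200
X = [[random.choice([-1, 1]) for _ in range(N)] for _ in range(P)]
teacher = [random.gauss(0, 1) for _ in range(N)]
y = [1 if sum(t * xi for t, xi in zip(teacher, row)) >= 0 else -1 for row in X]

def accuracy(w):
    hits = sum((1 if sum(wi * xi for wi, xi in zip(w, row)) >= 0 else -1) == yi
               for row, yi in zip(X, y))
    return hits / P

def quantize(w, bits):
    """Round each weight to a uniform grid with step m / 2**(bits-1)."""
    m = max(abs(v) for v in w)
    scale = m / (2 ** (bits - 1))
    return [max(-m, min(m, round(v / scale) * scale)) for v in w]

acc_full = accuracy(teacher)                      # 1.0 by construction
acc = {b: accuracy(quantize(teacher, b)) for b in (1, 2, 4, 8)}
print(acc)
```

Accuracy recovers quickly as bits are added, consistent with the abstract's claim that only a few bits of precision are needed for near-optimal performance.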
Cyr, André; Boukadoum, Mounir; Thériault, Frédéric
2014-01-01
In this paper, we investigate the operant conditioning (OC) learning process within a bio-inspired paradigm, using artificial spiking neural networks (ASNN) to act as robot brain controllers. In biological agents, OC results in behavioral changes learned from the consequences of previous actions, based on progressive prediction adjustment from rewarding or punishing signals. In a neurorobotics context, virtual and physical autonomous robots may benefit from a similar learning skill when facing unknown and unsupervised environments. In this work, we demonstrate that a simple invariant micro-circuit can sustain OC in multiple learning scenarios. The motivation for this new OC implementation model stems from the relatively complex alternatives that have been described in the computational literature and recent advances in neurobiology. Our elementary kernel includes only a few crucial neurons and synaptic links, and originates from the integration of habituation and spike-timing-dependent plasticity as learning rules. Using several tasks of incremental complexity, our results show that a minimal neural component set is sufficient to realize many OC procedures. Hence, with the proposed OC module, designing learning tasks with an ASNN and a bio-inspired robot context leads to simpler neural architectures for achieving complex behaviors. PMID:25120464
Large-scale automated histology in the pursuit of connectomes.
Kleinfeld, David; Bharioke, Arjun; Blinder, Pablo; Bock, Davi D; Briggman, Kevin L; Chklovskii, Dmitri B; Denk, Winfried; Helmstaedter, Moritz; Kaufhold, John P; Lee, Wei-Chung Allen; Meyer, Hanno S; Micheva, Kristina D; Oberlaender, Marcel; Prohaska, Steffen; Reid, R Clay; Smith, Stephen J; Takemura, Shinya; Tsai, Philbert S; Sakmann, Bert
2011-11-09
How does the brain compute? Answering this question necessitates neuronal connectomes, annotated graphs of all synaptic connections within defined brain areas. Further, understanding the energetics of the brain's computations requires vascular graphs. The assembly of a connectome requires sensitive hardware tools to measure neuronal and neurovascular features in all three dimensions, as well as software and machine learning for data analysis and visualization. We present the state of the art on the reconstruction of circuits and vasculature that link brain anatomy and function. Analysis at the scale of tens of nanometers yields connections between identified neurons, while analysis at the micrometer scale yields probabilistic rules of connection between neurons and exact vascular connectivity.
Plasticity in the prefrontal cortex of adult rats
Kolb, Bryan; Gibb, Robbin
2015-01-01
We review the plastic changes of the prefrontal cortex of the rat in response to a wide range of experiences including sensory and motor experience, gonadal hormones, psychoactive drugs, learning tasks, stress, social experience, metaplastic experiences, and brain injury. Our focus is on synaptic changes (dendritic morphology and spine density) in pyramidal neurons and the relationship to behavioral changes. The most general conclusion we can reach is that the prefrontal cortex is extremely plastic, that the medial and orbital prefrontal regions frequently respond very differently to the same experience in the same brain, and that the rules governing prefrontal plasticity appear to differ from those of other cortical regions. PMID:25691857
Light-Stimulated Synaptic Devices Utilizing Interfacial Effect of Organic Field-Effect Transistors.
Dai, Shilei; Wu, Xiaohan; Liu, Dapeng; Chu, Yingli; Wang, Kai; Yang, Ben; Huang, Jia
2018-06-14
Synaptic transistors stimulated by light waves or photons may offer advantages to the devices, such as wide bandwidth, ultrafast signal transmission, and robustness. However, previously reported light-stimulated synaptic devices generally require special photoelectric properties from the semiconductors and sophisticated device architectures. In this work, a simple and effective strategy for fabricating light-stimulated synaptic transistors is provided by utilizing the interface charge trapping effect of organic field-effect transistors (OFETs). Significantly, our devices exhibited highly synapse-like behaviors, such as excitatory postsynaptic current (EPSC) and paired-pulse facilitation (PPF), and presented memory and learning ability. The EPSC decay, PPF curves, and forgetting behavior can be well expressed by mathematical equations for synaptic devices, indicating that the interfacial charge trapping effect of OFETs can be utilized as a reliable strategy to realize organic light-stimulated synapses. Therefore, this work provides a simple and effective strategy for fabricating light-stimulated synaptic transistors with both memory and learning ability, which enlightens a new direction for developing neuromorphic devices.
NASA Astrophysics Data System (ADS)
Mizusaki, Beatriz E. P.; Agnes, Everton J.; Erichsen, Rubem; Brunnet, Leonardo G.
2017-08-01
The plastic character of brain synapses is considered to be one of the foundations for the formation of memories. There are numerous kinds of such phenomena currently described in the literature, but their role in the development of information pathways in neural networks with recurrent architectures is still not completely clear. In this paper we study the role of an activity-based process, called pre-synaptic dependent homeostatic scaling, in the organization of networks that yield precise-timed spiking patterns. It encodes spatio-temporal information in the synaptic weights as it associates a learned input with a specific response. We introduce a correlation measure to evaluate the precision of the spiking patterns and explore the effects of different inhibitory interactions and learning parameters. We find that long learning periods are important for improving the network's learning capacity, and we discuss this ability in the presence of distinct inhibitory currents.
Cellular Mechanisms of Transcranial Direct Current Stimulation
2016-07-14
Section 3 (Electrical stimulation accelerates and boosts the capacity for synaptic learning): tDCS is thought to boost the learning of tasks or therapy applied at the same time. We provide a cellular mechanism for this and show that this boosting is specific to the trained task. [Aim 2] Section 4: tDCS is thought to boost learning by promoting synaptic plasticity.
Aberrant learning in Parkinson's disease: A neurocomputational study on bradykinesia.
Ursino, Mauro; Baston, Chiara
2018-05-22
Parkinson's disease (PD) is a neurodegenerative disorder characterized by a progressive decline in motor functions, such as bradykinesia, caused by the pathological denervation of nigrostriatal dopaminergic neurons within the basal ganglia (BG). It is acknowledged that dopamine (DA) directly affects the modulatory role of BG towards the cortex. However, a growing body of literature suggests that DA-induced aberrant synaptic plasticity could play a role in the core symptoms of PD, thus calling for a "reconceptualization" of the pathophysiology. The aim of this work was to investigate DA-driven aberrant learning as a concurrent cause of bradykinesia, using a comprehensive, biologically inspired neurocomputational model of action selection in the BG. The model includes the three main pathways operating in the BG circuitry, that is, the direct, indirect, and hyperdirect pathways, and uses a two-term Hebb rule to train synapses in the striatum, based on previous history of rewards and punishments. Levodopa pharmacodynamics is also incorporated. Through model simulations of the Alternate Finger Tapping motor task, we assessed the role of aberrant learning on bradykinesia. The results show that training under drug medication (levodopa) provides not only immediate but also delayed benefit lasting in time. Conversely, if performed in conditions of vanishing levodopa efficacy, training may result in dysfunctional corticostriatal synaptic plasticity, further worsening motor performances in PD subjects. This suggests that bradykinesia may result from the concurrent effects of low DA levels and dysfunctional plasticity and that training can be exploited in medicated subjects to improve levodopa treatment. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
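The reward/punishment-gated striatal training described above can be caricatured with a dopamine-gated Hebbian update on a single corticostriatal weight. This is a minimal sketch only: the functional form, gain, and weight bounds are assumptions for illustration, not the model's actual equations.

```python
import numpy as np

def hebb_update(w, pre, post, dopamine, lr=0.1):
    # dopamine above baseline (0) strengthens co-active synapses;
    # dopamine below baseline weakens them (hypothetical gated Hebb rule)
    return float(np.clip(w + lr * pre * post * dopamine, 0.0, 1.0))

w = 0.5                               # assumed initial weight
for da in (1.0, 1.0, -1.0):           # two rewarded trials, then one punished
    w = hebb_update(w, pre=1.0, post=1.0, dopamine=da)
print(round(w, 3))
```

Under vanishing levodopa efficacy, the dopamine term would sit persistently below baseline, and the same rule would erode weights on every co-active trial, giving one intuition for the "aberrant learning" the abstract discusses.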
All about running: synaptic plasticity, growth factors and adult hippocampal neurogenesis.
Vivar, Carmen; Potter, Michelle C; van Praag, Henriette
2013-01-01
Accumulating evidence from animal and human research shows exercise benefits learning and memory, which may reduce the risk of neurodegenerative diseases, and could delay age-related cognitive decline. Exercise-induced improvements in learning and memory are correlated with enhanced adult hippocampal neurogenesis and increased activity-dependent synaptic plasticity. In the present chapter we will highlight the effects of physical activity on cognition in rodents, as well as on dentate gyrus (DG) neurogenesis, synaptic plasticity, spine density, neurotransmission and growth factors, in particular brain-derived neurotrophic factor (BDNF).
Tavazoie, Saeed
2013-01-01
Here we explore the possibility that a core function of sensory cortex is the generation of an internal simulation of sensory environment in real-time. A logical elaboration of this idea leads to a dynamical neural architecture that oscillates between two fundamental network states, one driven by external input, and the other by recurrent synaptic drive in the absence of sensory input. Synaptic strength is modified by a proposed synaptic state matching (SSM) process that ensures equivalence of spike statistics between the two network states. Remarkably, SSM, operating locally at individual synapses, generates accurate and stable network-level predictive internal representations, enabling pattern completion and unsupervised feature detection from noisy sensory input. SSM is a biologically plausible substrate for learning and memory because it brings together sequence learning, feature detection, synaptic homeostasis, and network oscillations under a single unifying computational framework. PMID:23991161
Fletcher, Bonnie R; Calhoun, Michael E; Rapp, Peter R; Shapiro, Matthew L
2006-02-01
The immediate-early gene (IEG) Arc is transcribed after behavioral and physiological treatments that induce synaptic plasticity and is implicated in memory consolidation. The relative contributions of neuronal activity and learning-related plasticity to the behavioral induction of Arc remain to be defined. To differentiate the contributions of each, we assessed the induction of Arc transcription in rats with fornix lesions that impair hippocampal learning yet leave cortical connectivity and neuronal firing essentially intact. Arc expression was assessed after exploration of novel environments and performance of a novel water maze task during which normal rats learned the spatial location of an escape platform. During the same task, rats with fornix lesions learned to approach a visible platform but did not learn its spatial location. Rats with fornix lesions had normal baseline levels of hippocampal Arc mRNA, but unlike normal rats, expression was not increased in response to water maze training. The integrity of signaling pathways controlling Arc expression was demonstrated by stimulation of the medial perforant path, which induced normal synaptic potentiation and Arc in rats with fornix lesions. Together, the results demonstrate that Arc induction can be decoupled from behavior and is more likely to indicate the engagement of synaptic plasticity mechanisms than synaptic or neuronal activity per se. The results further imply that fornix lesions may impair memory in part by decoupling neuronal activity from signaling pathways required for long-lasting hippocampal synaptic plasticity.
A Neuromorphic Architecture for Object Recognition and Motion Anticipation Using Burst-STDP
Balduzzi, David; Tononi, Giulio
2012-01-01
In this work we investigate the possibilities offered by a minimal framework of artificial spiking neurons to be deployed in silico. Here we introduce a hierarchical network architecture of spiking neurons which learns to recognize moving objects in a visual environment and determine the correct motor output for each object. These tasks are learned through both supervised and unsupervised spike timing dependent plasticity (STDP). STDP is responsible for the strengthening (or weakening) of synapses in relation to pre- and post-synaptic spike times and has been described as a Hebbian paradigm taking place both in vitro and in vivo. We utilize a variation of STDP learning, called burst-STDP, which is based on the notion that, since spikes are expensive in terms of energy consumption, then strong bursting activity carries more information than single (sparse) spikes. Furthermore, this learning algorithm takes advantage of homeostatic renormalization, which has been hypothesized to promote memory consolidation during NREM sleep. Using this learning rule, we design a spiking neural network architecture capable of object recognition, motion detection, attention towards important objects, and motor control outputs. We demonstrate the abilities of our design in a simple environment with distractor objects, multiple objects moving concurrently, and in the presence of noise. Most importantly, we show how this neural network is capable of performing these tasks using a simple leaky-integrate-and-fire (LIF) neuron model with binary synapses, making it fully compatible with state-of-the-art digital neuromorphic hardware designs. As such, the building blocks and learning rules presented in this paper appear promising for scalable fully neuromorphic systems to be implemented in hardware chips. PMID:22615855
Spike Train Auto-Structure Impacts Post-Synaptic Firing and Timing-Based Plasticity
Scheller, Bertram; Castellano, Marta; Vicente, Raul; Pipa, Gordon
2011-01-01
Cortical neurons are typically driven by several thousand synapses. The precise spatiotemporal pattern formed by these inputs can modulate the response of a post-synaptic cell. In this work, we explore how the temporal structure of pre-synaptic inhibitory and excitatory inputs impacts the post-synaptic firing of a conductance-based integrate-and-fire neuron. Both the excitatory and inhibitory inputs were modeled by renewal gamma processes whose shape factor spans both regular firing and temporally random (Poisson) activity. We demonstrate that the temporal structure of mutually independent inputs affects the post-synaptic firing, while the strength of the effect depends on the firing rates of both the excitatory and inhibitory inputs. In a second step, we explore the effect of temporal structure of mutually independent inputs on a simple version of Hebbian learning, i.e., hard bound spike-timing-dependent plasticity. We explore both the equilibrium weight distribution and the speed of the transient weight dynamics for different mutually independent gamma processes. We find that both the equilibrium distribution of the synaptic weights and the speed of synaptic changes are modulated by the temporal structure of the input. Finally, we highlight that the sensitivity of both the post-synaptic firing as well as the spike-timing-dependent plasticity on the auto-structure of the input of a neuron could be used to modulate the learning rate of synaptic modification. PMID:22203800
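The renewal gamma inputs used here are easy to reproduce: a gamma process with shape k = 1 is Poisson (coefficient of variation CV = 1), while larger k gives more regular trains (CV ≈ 1/√k). A minimal sketch, with the rate and duration chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def gamma_spike_train(rate, shape, t_max):
    # Renewal process: ISIs ~ Gamma(shape, scale) with mean shape*scale = 1/rate.
    # Draw more ISIs than needed, then truncate at t_max.
    isis = rng.gamma(shape, 1.0 / (rate * shape), size=int(2 * rate * t_max) + 100)
    times = np.cumsum(isis)
    return times[times < t_max]

def cv(train):
    isi = np.diff(train)
    return float(isi.std() / isi.mean())

cvs = {}
for k in (1.0, 4.0, 16.0):
    train = gamma_spike_train(rate=20.0, shape=k, t_max=200.0)
    cvs[k] = cv(train)
    print(k, round(cvs[k], 2), round(1 / np.sqrt(k), 2))
```

Feeding such trains into an integrate-and-fire model (not shown) is how one probes the input auto-structure effects the abstract reports.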
Pannexin 1 regulates bidirectional hippocampal synaptic plasticity in adult mice.
Ardiles, Alvaro O; Flores-Muñoz, Carolina; Toro-Ayala, Gabriela; Cárdenas, Ana M; Palacios, Adrian G; Muñoz, Pablo; Fuenzalida, Marco; Sáez, Juan C; Martínez, Agustín D
2014-01-01
The threshold for bidirectional modification of synaptic plasticity is known to be controlled by several factors, including the balance between protein phosphorylation and dephosphorylation, postsynaptic free Ca(2+) concentration and NMDA receptor (NMDAR) composition of GluN2 subunits. Pannexin 1 (Panx1), a member of the integral membrane protein family, has been shown to form non-selective channels and to regulate the induction of synaptic plasticity as well as hippocampal-dependent learning. Although Panx1 channels have been suggested to play a role in excitatory long-term potentiation (LTP), it remains unknown whether these channels also modulate long-term depression (LTD) or the balance between both types of synaptic plasticity. To study how Panx1 contributes to excitatory synaptic efficacy, we examined the age-dependent effects of eliminating or blocking Panx1 channels on excitatory synaptic plasticity within the CA1 region of the mouse hippocampus. By using different protocols to induce bidirectional synaptic plasticity, Panx1 channel blockade or lack of Panx1 were found to enhance LTP, whereas both conditions precluded the induction of LTD in adults, but not in young animals. These findings suggest that Panx1 channels restrain the sliding threshold for the induction of synaptic plasticity and underlying brain mechanisms of learning and memory.
The central amygdala controls learning in the lateral amygdala
Yu, Kai; Ahrens, Sandra; Zhang, Xian; Schiff, Hillary; Ramakrishnan, Charu; Fenno, Lief; Deisseroth, Karl; Zhao, Fei; Luo, Min-Hua; Gong, Ling; He, Miao; Zhou, Pengcheng; Paninski, Liam; Li, Bo
2018-01-01
Experience-driven synaptic plasticity in the lateral amygdala (LA) is thought to underlie the formation of associations between sensory stimuli and an ensuing threat. However, how the central amygdala (CeA) participates in such learning process remains unclear. Here we show that PKC-δ-expressing CeA neurons are essential for the synaptic plasticity underlying learning in the LA, as they convey information about unconditioned stimulus to LA neurons during fear conditioning. PMID:29184202
Madroñal, Noelia; Gruart, Agnès; Sacktor, Todd C.; Delgado-García, José M.
2010-01-01
A leading candidate in the process of memory formation is hippocampal long-term potentiation (LTP), a persistent enhancement in synaptic strength evoked by the repetitive activation of excitatory synapses, either by experimental high-frequency stimulation (HFS) or, as recently shown, during actual learning. But are the molecular mechanisms for maintaining synaptic potentiation induced by HFS and by experience the same? Protein kinase Mzeta (PKMζ), an autonomously active atypical protein kinase C isoform, plays a key role in the maintenance of LTP induced by tetanic stimulation and the storage of long-term memory. To test whether the persistent action of PKMζ is necessary for the maintenance of synaptic potentiation induced after learning, the effects of ZIP (zeta inhibitory peptide), a PKMζ inhibitor, on eyeblink-conditioned mice were studied. PKMζ inhibition in the hippocampus disrupted both the correct retrieval of conditioned responses (CRs) and the experience-dependent persistent increase in synaptic strength observed at CA3-CA1 synapses. In addition, the effects of ZIP on the same associative test were examined when tetanic LTP was induced at the hippocampal CA3-CA1 synapse before conditioning. In this case, PKMζ inhibition both reversed tetanic LTP and prevented the expected LTP-mediated deleterious effects on eyeblink conditioning. Thus, PKMζ inhibition in the CA1 area is able to reverse both the expression of trace eyeblink conditioned memories and the underlying changes in CA3-CA1 synaptic strength, as well as the anterograde effects of LTP on associative learning. PMID:20454458
Bayesian Inference and Online Learning in Poisson Neuronal Networks.
Huang, Yanping; Rao, Rajesh P N
2016-08-01
Motivated by the growing evidence for Bayesian computation in the brain, we show how a two-layer recurrent network of Poisson neurons can perform both approximate Bayesian inference and learning for any hidden Markov model. The lower-layer sensory neurons receive noisy measurements of hidden world states. The higher-layer neurons infer a posterior distribution over world states via Bayesian inference from inputs generated by sensory neurons. We demonstrate how such a neuronal network with synaptic plasticity can implement a form of Bayesian inference similar to Monte Carlo methods such as particle filtering. Each spike in a higher-layer neuron represents a sample of a particular hidden world state. The spiking activity across the neural population approximates the posterior distribution over hidden states. In this model, variability in spiking is regarded not as a nuisance but as an integral feature that provides the variability necessary for sampling during inference. We demonstrate how the network can learn the likelihood model, as well as the transition probabilities underlying the dynamics, using a Hebbian learning rule. We present results illustrating the ability of the network to perform inference and learning for arbitrary hidden Markov models.
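The sampling interpretation above — each spike as a sample of a hidden state — parallels a particle filter. A minimal sketch for a hypothetical two-state hidden Markov model, comparing the particle approximation against the exact forward-algorithm posterior (all numbers are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

A = np.array([[0.9, 0.1],
              [0.2, 0.8]])          # transitions: A[i, j] = P(next = j | now = i)
E = np.array([[0.8, 0.2],
              [0.3, 0.7]])          # emissions: E[state, obs]
obs = [0, 0, 1, 1, 1, 0]

def forward(obs):
    # exact filtering posterior over the current hidden state
    p = np.array([0.5, 0.5])
    for o in obs:
        p = (p @ A) * E[:, o]
        p /= p.sum()
    return p

def particle_filter(obs, n=20000):
    # each particle plays the role of one "spike"/sample of the hidden state
    states = rng.integers(0, 2, size=n)
    for o in obs:
        states = (rng.random(n) < A[states, 1]).astype(int)   # propagate
        w = E[states, o].astype(float)
        w /= w.sum()
        states = states[rng.choice(n, size=n, p=w)]           # resample by likelihood
    return np.bincount(states, minlength=2) / n

exact = forward(obs)
approx = particle_filter(obs)
print(exact, approx)
```

The fraction of particles in each state approximates the posterior, just as the paper's population spiking activity does.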
Shi, S; Hayashi, Y; Esteban, J A; Malinow, R
2001-05-04
AMPA-type glutamate receptors (AMPA-Rs) mediate a majority of excitatory synaptic transmission in the brain. In hippocampus, most AMPA-Rs are hetero-oligomers composed of GluR1/GluR2 or GluR2/GluR3 subunits. Here we show that these AMPA-R forms display different synaptic delivery mechanisms. GluR1/GluR2 receptors are added to synapses during plasticity; this requires interactions between GluR1 and group I PDZ domain proteins. In contrast, GluR2/GluR3 receptors replace existing synaptic receptors continuously; this occurs only at synapses that already have AMPA-Rs and requires interactions by GluR2 with NSF and group II PDZ domain proteins. The combination of regulated addition and continuous replacement of synaptic receptors can stabilize long-term changes in synaptic efficacy and may serve as a general model for how surface receptor number is established and maintained.
Born, Jannis; Galeazzi, Juan M; Stringer, Simon M
2017-01-01
A subset of neurons in the posterior parietal and premotor areas of the primate brain respond to the locations of visual targets in a hand-centred frame of reference. Such hand-centred visual representations are thought to play an important role in visually-guided reaching to target locations in space. In this paper we show how a biologically plausible, Hebbian learning mechanism may account for the development of localized hand-centred representations in a hierarchical neural network model of the primate visual system, VisNet. The hand-centered neurons developed in the model use an invariance learning mechanism known as continuous transformation (CT) learning. In contrast to previous theoretical proposals for the development of hand-centered visual representations, CT learning does not need a memory trace of recent neuronal activity to be incorporated in the synaptic learning rule. Instead, CT learning relies solely on a Hebbian learning rule, which is able to exploit the spatial overlap that naturally occurs between successive images of a hand-object configuration as it is shifted across different retinal locations due to saccades. Our simulations show how individual neurons in the network model can learn to respond selectively to target objects in particular locations with respect to the hand, irrespective of where the hand-object configuration occurs on the retina. The response properties of these hand-centred neurons further generalise to localised receptive fields in the hand-centred space when tested on novel hand-object configurations that have not been explored during training. Indeed, even when the network is trained with target objects presented across a near continuum of locations around the hand during training, the model continues to develop hand-centred neurons with localised receptive fields in hand-centred space. 
With the help of principal component analysis, we provide the first theoretical framework that explains the behavior of Hebbian learning in VisNet.
Berger, Stefan M; Fernández-Lamo, Iván; Schönig, Kai; Fernández Moya, Sandra M; Ehses, Janina; Schieweck, Rico; Clementi, Stefano; Enkel, Thomas; Grothe, Sascha; von Bohlen Und Halbach, Oliver; Segura, Inmaculada; Delgado-García, José María; Gruart, Agnès; Kiebler, Michael A; Bartsch, Dusan
2017-11-17
Dendritic messenger RNA (mRNA) localization and subsequent local translation in dendrites critically contributes to synaptic plasticity and learning and memory. Little is known, however, about the contribution of RNA-binding proteins (RBPs) to these processes in vivo. To delineate the role of the double-stranded RBP Staufen2 (Stau2), we generate a transgenic rat model, in which Stau2 expression is conditionally silenced by Cre-inducible expression of a microRNA (miRNA) targeting Stau2 mRNA in adult forebrain neurons. Known physiological mRNA targets for Stau2, such as RhoA, Complexin 1, and Rgs4 mRNAs, are found to be dysregulated in brains of Stau2-deficient rats. In vivo electrophysiological recordings reveal synaptic strengthening upon stimulation, showing a shift in the frequency-response function of hippocampal synaptic plasticity to favor long-term potentiation and impair long-term depression in Stau2-deficient rats. These observations are accompanied by deficits in hippocampal spatial working memory, spatial novelty detection, and in tasks investigating associative learning and memory. Together, these experiments reveal a critical contribution of Stau2 to various forms of synaptic plasticity including spatial working memory and cognitive management of new environmental information. These findings might contribute to the development of treatments for conditions associated with learning and memory deficits.
Oscillations, Timing, Plasticity, and Learning in the Cerebellum.
Cheron, G; Márquez-Ruiz, J; Dan, B
2016-04-01
The highly stereotyped, crystal-like architecture of the cerebellum has long served as a basis for hypotheses with regard to the function(s) that it subserves. Historically, most clinical observations and experimental work have focused on the involvement of the cerebellum in motor control, with particular emphasis on coordination and learning. Two main models have been suggested to account for cerebellar functioning. According to Llinás's theory, the cerebellum acts as a control machine that uses the rhythmic activity of the inferior olive to synchronize Purkinje cell populations for fine-tuning of coordination. In contrast, the Ito-Marr-Albus theory views the cerebellum as a motor learning machine that heuristically refines synaptic weights of the Purkinje cell based on error signals coming from the inferior olive. Here, we review the role of timing of neuronal events, oscillatory behavior, and synaptic and non-synaptic influences in functional plasticity that can be recorded in awake animals in various physiological and pathological models in a perspective that also includes non-motor aspects of cerebellar function. We discuss organizational levels from genes through intracellular signaling, synaptic network to system and behavior, as well as processes from signal production and processing to memory, delegation, and actual learning. We suggest an integrative concept for control and learning based on articulated oscillation templates.
Random synaptic feedback weights support error backpropagation for deep learning
NASA Astrophysics Data System (ADS)
Lillicrap, Timothy P.; Cownden, Daniel; Tweed, Douglas B.; Akerman, Colin J.
2016-11-01
The brain processes information through multiple layers of neurons. This deep architecture is representationally powerful, but complicates learning because it is difficult to identify the responsible neurons when a mistake is made. In machine learning, the backpropagation algorithm assigns blame by multiplying error signals with all the synaptic weights on each neuron's axon and further downstream. However, this involves a precise, symmetric backward connectivity pattern, which is thought to be impossible in the brain. Here we demonstrate that this strong architectural constraint is not required for effective error propagation. We present a surprisingly simple mechanism that assigns blame by multiplying errors by even random synaptic weights. This mechanism can transmit teaching signals across multiple layers of neurons and performs as effectively as backpropagation on a variety of tasks. Our results help reopen questions about how the brain could use error signals and dispel long-held assumptions about algorithmic constraints on learning.
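The mechanism can be sketched in a few lines of NumPy: a two-layer network trained on a hypothetical linear teacher, with the output error projected backward through a fixed random matrix B rather than the transpose of the forward weights. Layer sizes, learning rate, and the toy task are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 10, 30, 5
W1 = rng.normal(0, 0.1, (n_hid, n_in))
W2 = rng.normal(0, 0.1, (n_out, n_hid))
B = rng.normal(0, 0.1, (n_hid, n_out))    # fixed random feedback weights

T = rng.normal(0, 1, (n_out, n_in))       # hypothetical linear teacher
X = rng.normal(0, 1, (n_in, 200))
Y = T @ X

def mse():
    return float(np.mean((W2 @ np.tanh(W1 @ X) - Y) ** 2))

initial = mse()
lr = 0.02
for _ in range(500):
    H = np.tanh(W1 @ X)
    E = W2 @ H - Y                        # output-layer error
    dH = (B @ E) * (1 - H ** 2)           # B replaces W2.T in the backward pass
    W2 -= lr * (E @ H.T) / X.shape[1]
    W1 -= lr * (dH @ X.T) / X.shape[1]
final = mse()
print(round(initial, 3), round(final, 3))
```

The forward weights tend to align with B over training, which is why the random projection comes to carry useful teaching signals.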
Real time unsupervised learning of visual stimuli in neuromorphic VLSI systems
NASA Astrophysics Data System (ADS)
Giulioni, Massimiliano; Corradi, Federico; Dante, Vittorio; Del Giudice, Paolo
2015-10-01
Neuromorphic chips embody, in microelectronic devices, computational principles operating in the nervous system. In this domain it is important to identify computational primitives that theory and experiments suggest as generic and reusable cognitive elements. One such element is provided by attractor dynamics in recurrent networks. Point attractors are equilibrium states of the dynamics (up to fluctuations), determined by the synaptic structure of the network; a ‘basin’ of attraction comprises all initial states leading to a given attractor upon relaxation, hence making attractor dynamics suitable to implement robust associative memory. The initial network state is dictated by the stimulus, and relaxation to the attractor state implements the retrieval of the corresponding memorized prototypical pattern. In a previous work we demonstrated that a neuromorphic recurrent network of spiking neurons and suitably chosen, fixed synapses supports attractor dynamics. Here we focus on learning: activating on-chip synaptic plasticity and using a theory-driven strategy for choosing network parameters, we show that autonomous learning, following repeated presentation of simple visual stimuli, shapes a synaptic connectivity supporting stimulus-selective attractors. Associative memory develops on chip as the result of the coupled stimulus-driven neural activity and ensuing synaptic dynamics, with no artificial separation between learning and retrieval phases.
NASA Astrophysics Data System (ADS)
Lu, Ke; Li, Yi; He, Wei-Fan; Chen, Jia; Zhou, Ya-Xiong; Duan, Nian; Jin, Miao-Miao; Gu, Wei; Xue, Kan-Hao; Sun, Hua-Jun; Miao, Xiang-Shui
2018-06-01
Memristors have emerged as promising candidates for artificial synaptic devices, serving as the building block of brain-inspired neuromorphic computing. In this letter, we developed a Pt/HfOx/Ti memristor with nonvolatile multilevel resistive switching behaviors due to the evolution of the conductive filaments and the variation in the Schottky barrier. Diverse state-dependent spike-timing-dependent-plasticity (STDP) functions were implemented with different initial resistance states. The measured STDP forms were adopted as the learning rule for a three-layer spiking neural network, which achieves 75.74% recognition accuracy on the MNIST handwritten digit dataset. This work has shown the capability of memristive synapses in spiking neural networks for pattern recognition applications.
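The generic pair-based STDP rule that such measured device curves approximate can be written as two exponential windows. A minimal sketch follows; the amplitudes and time constants are illustrative values, not the measurements reported in the letter.

```python
import math

# Hedged sketch of a standard pair-based STDP rule with exponential windows.
# The amplitudes (a_plus, a_minus) and time constants (tau_*, in ms) are
# illustrative, not the memristor's fitted parameters.
def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a spike pair with dt = t_post - t_pre (ms)."""
    if dt >= 0:   # pre fires before post -> potentiation
        return a_plus * math.exp(-dt / tau_plus)
    else:         # post fires before pre -> depression
        return -a_minus * math.exp(dt / tau_minus)
```

In a spiking network the per-pair changes are summed over pre/post spike pairs and the resulting weight is clipped to the device's available conductance range.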
Dopamine Promotes Motor Cortex Plasticity and Motor Skill Learning via PLC Activation
Rioult-Pedotti, Mengia-Seraina; Pekanovic, Ana; Atiemo, Clement Osei; Marshall, John; Luft, Andreas Rüdiger
2015-01-01
Dopaminergic neurons in the ventral tegmental area, the major midbrain nucleus projecting to the motor cortex, play a key role in motor skill learning and motor cortex synaptic plasticity. Dopamine D1 and D2 receptor antagonists exert parallel effects in the motor system: they impair motor skill learning and reduce long-term potentiation. Traditionally, D1 and D2 receptors modulate adenylyl cyclase activity and cyclic adenosine monophosphate accumulation in opposite directions via different G-proteins and bidirectionally modulate protein kinase A (PKA), leading to distinct physiological and behavioral effects. Here we show that D1 and D2 receptor activity influences motor skill acquisition and long-term synaptic potentiation via phospholipase C (PLC) activation in rat primary motor cortex. Learning a new forelimb reaching task is severely impaired in the presence of a PLC inhibitor, but not a PKA inhibitor. Similarly, long-term potentiation in motor cortex, a mechanism involved in motor skill learning, is reduced when PLC is inhibited but remains unaffected by the PKA inhibitor. Skill learning deficits and reduced synaptic plasticity caused by dopamine antagonists are prevented by co-administration of a PLC agonist. These results provide evidence for a role of intracellular PLC signaling in motor skill learning and associated cortical synaptic plasticity, challenging the traditional view of bidirectional modulation of PKA by D1 and D2 receptors. These findings reveal a novel and important action of dopamine in motor cortex that might be a future target for selective therapeutic interventions to support learning and recovery of movement resulting from injury and disease. PMID:25938462
Nie, Jingjing; Yang, Xiaosu
2017-01-01
In recent years, rehabilitation after ischemic stroke has drawn increasing attention worldwide and has been linked to changes in synaptic plasticity. Exercise training improves motor function after ischemia as well as cognition, which is associated with the formation of learning and memory. The molecular basis of learning and memory might be synaptic plasticity. Research has therefore been conducted in an attempt to relate the effects of exercise training to neuroprotection and neurogenesis adjacent to the ischemic brain injury. The present paper reviews the current literature addressing this question and discusses the possible mechanisms involved in the modulation of synaptic plasticity by exercise training. This review briefly outlines the pathological process of synaptic dysfunction in ischemia and then discusses the effects of exercise training on scaffold and regulatory protein expression. The expression of scaffold proteins generally increased after training, but the effects on regulatory proteins were mixed. Moreover, the composition of postsynaptic receptors was changed and the strength of synaptic transmission was enhanced after training. Finally, the recovery of cognition is critically associated with synaptic remodeling in the injured brain, and this remodeling occurs through a number of local regulations including mRNA translation, remodeling of the cytoskeleton, and receptor trafficking into and out of the synapse. This review thus provides a comprehensive account of the synaptic plasticity enhancement obtained by exercise training.
Ding, Juan; Xi, Yuan-Di; Zhang, Dan-Di; Zhao, Xia; Liu, Jin-Meng; Li, Chao-Qun; Han, Jing; Xiao, Rong
2013-12-01
This research aims to investigate whether soybean isoflavone (SIF) could alleviate the learning and memory deficit induced by β-amyloid peptide 1-42 (Aβ1-42) by protecting the synapses of rats. Adult male Wistar rats were randomly allocated, according to body weight, to the following groups: (1) control group; (2) Aβ1-42 group; (3) SIF group; (4) SIF + Aβ1-42 group (SIF pretreatment group). SIF (80 mg/kg/day) was administered orally by gavage to the rats in the SIF and SIF + Aβ1-42 groups. Aβ1-42 was injected into the lateral cerebral ventricle of rats in the Aβ1-42 and SIF + Aβ1-42 groups. Learning and memory ability, the ultramicrostructure of hippocampal synapses, and the expression of synaptic-related proteins were investigated. The Morris water maze results showed that escape latency and total distance were decreased in the rats of the SIF pretreatment group compared to the rats in the Aβ1-42 group. Furthermore, SIF pretreatment could alleviate the synaptic structural damage and antagonize the Aβ1-42-induced down-regulation of the following proteins: (1) mRNA and protein of synaptophysin and postsynaptic density protein 95 (PSD-95); (2) protein of calmodulin (CaM), Ca(2+)/calmodulin-dependent protein kinase II (CaMK II), and cAMP response element binding protein (CREB); (3) phosphorylation levels of CaMK II and CREB (pCaMK II, pCREB). These results suggested that SIF pretreatment could ameliorate the impairment of learning and memory ability induced in rats by Aβ1-42, and that its mechanism might be associated with the protection of synaptic plasticity by improving synaptic structure and regulating synaptic-related proteins. Copyright © 2013 Wiley Periodicals, Inc.
Nanou, Evanthia; Scheuer, Todd; Catterall, William A
2016-11-15
Many forms of short-term synaptic plasticity rely on regulation of presynaptic voltage-gated Ca2+ type 2.1 (CaV2.1) channels. However, the contribution of CaV2.1 channel regulation to other forms of neuroplasticity and to learning and memory is not known. Here we have studied mice with a mutation (IM-AA) that disrupts regulation of CaV2.1 channels by calmodulin and related calcium sensor proteins. Surprisingly, we find that long-term potentiation (LTP) of synaptic transmission at the Schaffer collateral-CA1 synapse in the hippocampus is substantially weakened, even though this form of synaptic plasticity is thought to be primarily generated postsynaptically. LTP in response to θ-burst stimulation and to 100-Hz tetanic stimulation is much reduced. However, a normal level of LTP can be generated by repetitive 100-Hz stimulation or by depolarization of the postsynaptic cell to prevent block of NMDA-specific glutamate receptors by Mg2+. The ratio of postsynaptic responses of NMDA-specific glutamate receptors to those of AMPA-specific glutamate receptors is decreased, but the postsynaptic current from activation of NMDA-specific glutamate receptors is progressively increased during trains of stimuli and exceeds wild-type levels by the end of 1-s trains. Strikingly, these impairments in long-term synaptic plasticity and the previously documented impairments in short-term synaptic plasticity in IM-AA mice are associated with pronounced deficits in spatial learning and memory in context-dependent fear conditioning and in the Barnes circular maze. Thus, regulation of CaV2.1 channels by calcium sensor proteins is required for normal short-term synaptic plasticity, LTP, and spatial learning and memory in mice.
Han, Xiao-jie; Xiao, Yong-mei; Ai, Bao-min; Hu, Xiao-xia; Wei, Qing; Hu, Qian-sheng
2014-01-01
To study the effect of organic Se on spatial learning and memory deficits induced by Pb exposure at different developmental stages, and its relationship with alterations of synaptic structural plasticity, postnatal rat pups were randomly divided into five groups: Control; Pb (Weaned pups were exposed to Pb at postnatal day (PND) 21-42); Pb-Se (Weaned pups were exposed to Se at PND 43-63 after Pb exposure); maternal Pb (mPb) (Parents were exposed to Pb from 3 weeks before mating to the weaning of pups); mPb-Se (Parents were exposed to Pb and weaned pups were exposed to Se at PND 43-63). The spatial learning and memory of rat pups was measured by Morris water maze (MWM) on PND 63. We found that rat pups in Pb-Se group performed significantly better than those in Pb group (p<0.05). However, there was no significant difference in the ability of spatial learning and memory between the groups of mPb and mPb-Se (p>0.05). We also found that, before MWM, the numbers of neurons and synapses significantly decreased in mPb group, but not in Pb group. After MWM, the number of synapses, the thickness of postsynaptic density (PSD), the length of synaptic active zone and the synaptic curvature increased significantly in Pb-Se and mPb-Se group; while the width of synaptic cleft decreased significantly (p<0.05), compared to Pb group and mPb group, respectively. However, the number of synapses in mPb-Se group was still significantly lower than that in the control group (p<0.05). Our data demonstrated that organic Se had protective effects on the impairments of spatial learning and memory as well as synaptic structural plasticity induced by Pb exposure in rats after weaning, but not by the maternal Pb exposure which reduced the numbers of neurons and synapses in the early neural development.
Pimashkin, Alexey; Gladkov, Arseniy; Mukhina, Irina; Kazantsev, Victor
2013-01-01
Learning in neuronal networks can be investigated using dissociated cultures on multielectrode arrays supplied with appropriate closed-loop stimulation. Previous studies showed that weakly respondent neurons on the electrodes can be trained to increase their evoked spiking rate within a predefined time window after the stimulus. Such neurons can be associated with weak synaptic connections in the nearby culture network. The stimulation leads to an increase in the connectivity and in the response. However, it was not possible to perform the learning protocol for neurons on electrodes with relatively strong synaptic inputs that respond at higher rates. We proposed an adaptive closed-loop stimulation protocol capable of achieving learning even for highly respondent electrodes. This means that the culture network can appropriately reorganize its synaptic connectivity to generate a desired response. We introduced an adaptive reinforcement condition accounting for the response variability in control stimulation. It significantly extended the learning protocol to a large number of responding electrodes, independently of their baseline response level. We also found that the learning effect was preserved 4–6 h after training. PMID:23745105
Hagena, Hardy; Hansen, Niels; Manahan-Vaughan, Denise
2016-01-01
Noradrenaline (NA) is a key neuromodulator for the regulation of behavioral state and cognition. It supports learning by increasing arousal and vigilance, whereby new experiences are “earmarked” for encoding. Within the hippocampus, experience-dependent information storage occurs by means of synaptic plasticity. Furthermore, novel spatial, contextual, or associative learning drives changes in synaptic strength, reflected by the strengthening of long-term potentiation (LTP) or long-term depression (LTD). NA acting on β-adrenergic receptors (β-AR) is a key determinant as to whether new experiences result in persistent hippocampal synaptic plasticity. This can even dictate the direction of change of synaptic strength. The different hippocampal subfields play different roles in encoding components of a spatial representation through LTP and LTD. Strikingly, the sensitivity of synaptic plasticity in these subfields to β-adrenergic control is very distinct (dentate gyrus > CA3 > CA1). Moreover, NA released from the locus coeruleus that acts on β-AR leads to hippocampal LTD and an enhancement of LTD-related memory processing. We propose that NA acting on hippocampal β-AR, that is graded according to the novelty or saliency of the experience, determines the content and persistency of synaptic information storage in the hippocampal subfields and therefore of spatial memories. PMID:26804338
Mahati, K; Bhagya, V; Christofer, T; Sneha, A; Shankaranarayana Rao, B S
2016-10-01
Severe depression compromises the structural and functional integrity of the brain and results in impaired learning and memory, maladaptive synaptic plasticity, and degenerative changes in the hippocampus and amygdala. The precise mechanisms underlying cognitive dysfunction in depression remain largely unknown. On the other hand, an enriched environment (EE) offers beneficial effects on cognitive function and on synaptic plasticity in the hippocampus. However, the effect of EE on cognitive dysfunction associated with endogenous depression has not been explored. Accordingly, we have attempted to address this issue by investigating behavioural, structural and synaptic plasticity mechanisms in an animal model of endogenous depression after exposure to an enriched environment. Our results demonstrate that depression is associated with impaired spatial learning and enhanced anxiety-like behaviour, which is correlated with hypotrophy of the dentate gyrus and amygdalar hypertrophy. We also observed a gross reduction in hippocampal long-term potentiation (LTP). We report a complete behavioural recovery, with reduced indices of anhedonia and behavioural despair, reduced anxiety-like behaviour and improved spatial learning, along with a complete restoration of dentate gyrus and amygdalar volumes, in depressive rats subjected to EE. Enrichment also facilitated CA3-Schaffer collateral LTP. Our study convincingly shows that depression induces learning deficits and impairs hippocampal synaptic plasticity. It also highlights the role of environmental stimuli in restoring depression-induced cognitive deficits, which might prove vital in outlining more effective strategies to treat major depressive disorders. Copyright © 2016 Elsevier Inc. All rights reserved.
Chicca, E; Badoni, D; Dante, V; D'Andreagiovanni, M; Salina, G; Carota, L; Fusi, S; Del Giudice, P
2003-01-01
Electronic neuromorphic devices with on-chip, on-line learning should be able to quickly modify the synaptic couplings to acquire information about new patterns to be stored (synaptic plasticity) and, at the same time, preserve this information on very long time scales (synaptic stability). Here, we illustrate the electronic implementation of a simple solution to this stability-plasticity problem, recently proposed and studied in various contexts. It is based on the observation that reducing the analog depth of the synapses to the extreme (bistable synapses) does not necessarily disrupt the performance of the device as an associative memory, provided that 1) the number of neurons is large enough; 2) the transitions between stable synaptic states are stochastic; and 3) learning is slow. The drastic reduction of the analog depth of the synaptic variable also makes this solution appealing from the point of view of electronic implementation and offers a simple methodological alternative to the technological solution based on floating gates. We describe the full-custom analog very large-scale integration (VLSI) realization of a small network of integrate-and-fire neurons connected by bistable deterministic plastic synapses which can implement the idea of stochastic learning. In the absence of stimuli, the memory is preserved indefinitely. During stimulation the synapse undergoes quick temporary changes through the activities of the pre- and postsynaptic neurons; those changes stochastically result in a long-term modification of the synaptic efficacy. The intentionally disordered pattern of connectivity allows the system to generate a randomness suited to drive the stochastic selection mechanism. We check by a suitable stimulation protocol that the stochastic synaptic plasticity produces the expected pattern of potentiation and depression in the electronic network.
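The stochastic bistable-synapse idea in this abstract can be sketched as a two-state update rule in which activity-driven transitions are accepted only with some probability, so learning is slow and stored patterns degrade gradually. The transition probabilities and the pre/post activity conditions below are illustrative, not the chip's circuit behavior.

```python
import random

# Sketch of a stochastic bistable synapse: only two stable efficacies (0 or 1).
# Candidate transitions triggered by pre/post activity are accepted with small
# probability (p_up, p_down); the values here are illustrative assumptions.
def update_synapse(state, pre_active, post_active,
                   p_up=0.05, p_down=0.05, rng=random):
    if pre_active and post_active and state == 0:
        if rng.random() < p_up:
            return 1            # stochastic long-term potentiation
    elif pre_active and not post_active and state == 1:
        if rng.random() < p_down:
            return 0            # stochastic long-term depression
    return state                # otherwise the bistable state persists
```

Because the synapse is binary and transitions are rare, no analog value drifts between stimulations: in the absence of activity the stored state, and hence the memory, is preserved indefinitely.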
Reumann, Rebecca; Vierk, Ricardo; Zhou, Lepu; Gries, Frederice; Kraus, Vanessa; Mienert, Julia; Romswinkel, Eva; Morellini, Fabio; Ferrer, Isidre; Nicolini, Chiara; Fahnestock, Margaret; Rune, Gabriele; Glatzel, Markus; Galliciotti, Giovanna
2017-01-01
The serine protease inhibitor neuroserpin regulates the activity of tissue-type plasminogen activator (tPA) in the nervous system. Neuroserpin expression is particularly prominent at late stages of neuronal development in most regions of the central nervous system (CNS), whereas it is restricted to regions related to learning and memory in the adult brain. The physiological expression pattern of neuroserpin, its high degree of colocalization with tPA within the CNS, together with its dysregulation in neuropsychiatric disorders, suggest a role in formation and refinement of synapses. In fact, studies in cell culture and mice point to a role for neuroserpin in dendritic branching, spine morphology, and modulation of behavior. In this study, we investigated the physiological role of neuroserpin in the regulation of synaptic density, synaptic plasticity, and behavior in neuroserpin-deficient mice. In the absence of neuroserpin, mice show a significant decrease in spine-synapse density in the CA1 region of the hippocampus, while expression of the key postsynaptic scaffold protein PSD-95 is increased in this region. Neuroserpin-deficient mice show decreased synaptic potentiation, as indicated by reduced long-term potentiation (LTP), whereas presynaptic paired-pulse facilitation (PPF) is unaffected. Consistent with altered synaptic plasticity, neuroserpin-deficient mice exhibit cognitive and sociability deficits in behavioral assays. However, although synaptic dysfunction is implicated in neuropsychiatric disorders, we do not detect alterations in expression of neuroserpin in fusiform gyrus of autism patients or in dorsolateral prefrontal cortex of schizophrenia patients. Our results identify neuroserpin as a modulator of synaptic plasticity, and point to a role for neuroserpin in learning and memory. PMID:29142062
Molecular mechanisms of memory in imprinting.
Solomonia, Revaz O; McCabe, Brian J
2015-03-01
Converging evidence implicates the intermediate and medial mesopallium (IMM) of the domestic chick forebrain in memory for a visual imprinting stimulus. During and after imprinting training, neuronal responsiveness in the IMM to the familiar stimulus exhibits a distinct temporal profile, suggesting several memory phases. We discuss the temporal progression of learning-related biochemical changes in the IMM, relative to the start of this electrophysiological profile. c-fos gene expression increases <15 min after training onset, followed by a learning-related increase in Fos expression, in neurons immunopositive for GABA, taurine and parvalbumin (not calbindin). Approximately simultaneously or shortly after, there are increases in phosphorylation level of glutamate (AMPA) receptor subunits and in releasable neurotransmitter pools of GABA and taurine. Later, the mean area of spine synapse post-synaptic densities, N-methyl-D-aspartate receptor number and phosphorylation level of further synaptic proteins are elevated. After ∼ 15 h, learning-related changes in amounts of several synaptic proteins are observed. The results indicate progression from transient/labile to trophic synaptic modification, culminating in stable recognition memory. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
Weng, Feng-Ju; Garcia, Rodrigo I; Lutzu, Stefano; Alviña, Karina; Zhang, Yuxiang; Dushko, Margaret; Ku, Taeyun; Zemoura, Khaled; Rich, David; Garcia-Dominguez, Dario; Hung, Matthew; Yelhekar, Tushar D; Sørensen, Andreas Toft; Xu, Weifeng; Chung, Kwanghun; Castillo, Pablo E; Lin, Yingxi
2018-03-07
Synaptic connections between hippocampal mossy fibers (MFs) and CA3 pyramidal neurons are essential for contextual memory encoding, but the molecular mechanisms regulating MF-CA3 synapses during memory formation and the exact nature of this regulation are poorly understood. Here we report that the activity-dependent transcription factor Npas4 selectively regulates the structure and strength of MF-CA3 synapses by restricting the number of their functional synaptic contacts without affecting the other synaptic inputs onto CA3 pyramidal neurons. Using an activity-dependent reporter, we identified CA3 pyramidal cells that were activated by contextual learning and found that MF inputs on these cells were selectively strengthened. Deletion of Npas4 prevented both contextual memory formation and this learning-induced synaptic modification. We further show that Npas4 regulates MF-CA3 synapses by controlling the expression of the polo-like kinase Plk2. Thus, Npas4 is a critical regulator of experience-dependent, structural, and functional plasticity at MF-CA3 synapses during contextual memory formation. Copyright © 2018 Elsevier Inc. All rights reserved.
Synaptic Mechanisms of Memory Consolidation during Sleep Slow Oscillations
Wei, Yina; Krishnan, Giri P.
2016-01-01
Sleep is critical for regulation of synaptic efficacy, memories, and learning. However, the underlying mechanisms of how sleep rhythms contribute to consolidating memories acquired during wakefulness remain unclear. Here we studied the role of slow oscillations, 0.2–1 Hz rhythmic transitions between Up and Down states during stage 3/4 sleep, on dynamics of synaptic connectivity in the thalamocortical network model implementing spike-timing-dependent synaptic plasticity. We found that the spatiotemporal pattern of Up-state propagation determines the changes of synaptic strengths between neurons. Furthermore, an external input, mimicking hippocampal ripples, delivered to the cortical network results in input-specific changes of synaptic weights, which persisted after stimulation was removed. These synaptic changes promoted replay of specific firing sequences of the cortical neurons. Our study proposes a neuronal mechanism on how an interaction between hippocampal input, such as mediated by sharp wave-ripple events, cortical slow oscillations, and synaptic plasticity, may lead to consolidation of memories through preferential replay of cortical cell spike sequences during slow-wave sleep. SIGNIFICANCE STATEMENT Sleep is critical for memory and learning. Replay during sleep of temporally ordered spike sequences related to a recent experience was proposed to be a neuronal substrate of memory consolidation. However, specific mechanisms of replay or how spike sequence replay leads to synaptic changes that underlie memory consolidation are still poorly understood. Here we used a detailed computational model of the thalamocortical system to report that interaction between slow cortical oscillations and synaptic plasticity during deep sleep can underlie mapping hippocampal memory traces to persistent cortical representation. This study provided, for the first time, a mechanistic explanation of how slow-wave sleep may promote consolidation of recent memory events. PMID:27076422
Stochastic lattice model of synaptic membrane protein domains.
Li, Yiwei; Kahraman, Osman; Haselwandter, Christoph A
2017-05-01
Neurotransmitter receptor molecules, concentrated in synaptic membrane domains along with scaffolds and other kinds of proteins, are crucial for signal transmission across chemical synapses. In common with other membrane protein domains, synaptic domains are characterized by low protein copy numbers and protein crowding, with rapid stochastic turnover of individual molecules. We study here in detail a stochastic lattice model of the receptor-scaffold reaction-diffusion dynamics at synaptic domains that was found previously to capture, at the mean-field level, the self-assembly, stability, and characteristic size of synaptic domains observed in experiments. We show that our stochastic lattice model yields quantitative agreement with mean-field models of nonlinear diffusion in crowded membranes. Through a combination of analytic and numerical solutions of the master equation governing the reaction dynamics at synaptic domains, together with kinetic Monte Carlo simulations, we find substantial discrepancies between mean-field and stochastic models for the reaction dynamics at synaptic domains. Based on the reaction and diffusion properties of synaptic receptors and scaffolds suggested by previous experiments and mean-field calculations, we show that the stochastic reaction-diffusion dynamics of synaptic receptors and scaffolds provide a simple physical mechanism for collective fluctuations in synaptic domains, the molecular turnover observed at synaptic domains, key features of the observed single-molecule trajectories, and spatial heterogeneity in the effective rates at which receptors and scaffolds are recycled at the cell membrane. Our work sheds light on the physical mechanisms and principles linking the collective properties of membrane protein domains to the stochastic dynamics that rule their molecular components.
Stringer, Simon M; Rolls, Edmund T
2006-12-01
A key issue is how networks in the brain learn to perform path integration, that is, update a represented position using a velocity signal. Using head direction cells as an example, we show that a competitive network could self-organize to learn to respond to combinations of head direction and angular head rotation velocity. These combination cells can then be used to drive a continuous attractor network to the next head direction based on the incoming rotation signal. An associative synaptic modification rule with a short term memory trace enables preceding combination cell activity during training to be associated with the next position in the continuous attractor network. The network accounts for the presence of neurons found in the brain that respond to combinations of head direction and angular head rotation velocity. Analogous networks in the hippocampal system could self-organize to perform path integration of place and spatial view representations.
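The path-integration scheme in this abstract can be caricatured with a discrete ring of head-direction cells whose activity bump is shifted by a rotation signal. The `np.roll` shift below stands in for the learned asymmetric connections driven by the combination cells; the ring size and bump width are illustrative assumptions.

```python
import numpy as np

# Caricature of path integration on a ring attractor: N head-direction cells
# hold a Gaussian activity bump, and a rotation signal (standing in for the
# combination cells) shifts the bump around the ring.
N = 36                                    # one cell per 10 degrees (assumed)

def make_bump(center, width=8.0):
    idx = np.arange(N)
    # circular distance from each cell to the bump center
    d = np.minimum(np.abs(idx - center), N - np.abs(idx - center))
    return np.exp(-d**2 / width)

def path_integrate(activity, rotation_cells):
    # combination cells gate a shifted copy of the bump back onto the ring
    return np.roll(activity, int(rotation_cells))

bump = make_bump(0)
for _ in range(9):                        # nine steps of +1-cell rotation
    bump = path_integrate(bump, 1)
peak = int(np.argmax(bump))               # bump has moved 9 cells (90 degrees)
```

In the actual model the shift emerges from trained asymmetric weights and continuous attractor dynamics rather than an explicit roll, but the net effect, velocity-driven displacement of a stable activity bump, is the same.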
Hippocampal Metaplasticity Is Required for the Formation of Temporal Associative Memories
Xu, Jian; Antion, Marcia D.; Nomura, Toshihiro; Kraniotis, Stephen; Zhu, Yongling
2014-01-01
Metaplasticity regulates the threshold for modification of synaptic strength and is an important regulator of learning rules; however, it is not known whether these cellular mechanisms for homeostatic regulation of synapses contribute to particular forms of learning. Conditional ablation of mGluR5 in CA1 pyramidal neurons resulted in the inability of low-frequency trains of afferent activation to prime synapses for subsequent theta burst potentiation. Priming-induced metaplasticity requires mGluR5-mediated mobilization of endocannabinoids during the priming train to induce long-term depression of inhibition (I-LTD). Mice lacking priming-induced plasticity had no deficit in spatial reference memory tasks, but were impaired in an associative task with a temporal component. Conversely, enhancing endocannabinoid signaling facilitated temporal associative memory acquisition and, after training animals in these tasks, ex vivo I-LTD was partially occluded and theta burst LTP was enhanced. Together, these results suggest a link between metaplasticity mechanisms in the hippocampus and the formation of temporal associative memories. PMID:25505329
Removal of S6K1 and S6K2 Leads to Divergent Alterations in Learning, Memory, and Synaptic Plasticity
ERIC Educational Resources Information Center
Antion, Marcia D.; Merhav, Maayan; Hoeffer, Charles A.; Reis, Gerald; Kozma, Sara C.; Thomas, George; Schuman Erin M.; Rosenblum, Kobi; Klann, Eric
2008-01-01
Protein synthesis is required for the expression of enduring memories and long-lasting synaptic plasticity. During cellular proliferation and growth, S6 kinases (S6Ks) are activated and coordinate the synthesis of de novo proteins. We hypothesized that protein synthesis mediated by S6Ks is critical for the manifestation of learning, memory, and…
Real time unsupervised learning of visual stimuli in neuromorphic VLSI systems
Giulioni, Massimiliano; Corradi, Federico; Dante, Vittorio; del Giudice, Paolo
2015-01-01
Neuromorphic chips embody, in microelectronic devices, computational principles operating in the nervous system. In this domain it is important to identify computational primitives that theory and experiments suggest as generic and reusable cognitive elements. One such element is provided by attractor dynamics in recurrent networks. Point attractors are equilibrium states of the dynamics (up to fluctuations), determined by the synaptic structure of the network; a ‘basin’ of attraction comprises all initial states leading to a given attractor upon relaxation, hence making attractor dynamics suitable to implement robust associative memory. The initial network state is dictated by the stimulus, and relaxation to the attractor state implements the retrieval of the corresponding memorized prototypical pattern. In a previous work we demonstrated that a neuromorphic recurrent network of spiking neurons and suitably chosen, fixed synapses supports attractor dynamics. Here we focus on learning: activating on-chip synaptic plasticity and using a theory-driven strategy for choosing network parameters, we show that autonomous learning, following repeated presentation of simple visual stimuli, shapes a synaptic connectivity supporting stimulus-selective attractors. Associative memory develops on chip as the result of the coupled stimulus-driven neural activity and ensuing synaptic dynamics, with no artificial separation between learning and retrieval phases. PMID:26463272
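The attractor-based retrieval this abstract describes can be illustrated off-chip with a minimal Hopfield-style rate sketch: Hebbian couplings store prototype patterns, and relaxation from a corrupted cue falls into the corresponding basin of attraction. This is the standard textbook construction, not the authors' spiking VLSI implementation; all sizes and seeds are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 3
patterns = rng.choice([-1, 1], size=(P, N))   # stored prototypes
W = (patterns.T @ patterns) / N               # Hebbian couplings
np.fill_diagonal(W, 0)                        # no self-connections

# Corrupt one stored pattern (the "stimulus"), then relax to the attractor
state = patterns[0].copy()
flip = rng.choice(N, size=15, replace=False)
state[flip] *= -1
for _ in range(10):                           # synchronous relaxation
    state = np.sign(W @ state)
    state[state == 0] = 1

overlap = float(state @ patterns[0]) / N      # 1.0 means perfect recall
print(round(overlap, 2))
```

At this low memory load (3 patterns in 100 units), relaxation recovers the stored prototype almost perfectly despite 15% of the bits being flipped.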
A correlated nickelate synaptic transistor.
Shi, Jian; Ha, Sieu D; Zhou, You; Schoofs, Frank; Ramanathan, Shriram
2013-01-01
Inspired by biological neural systems, neuromorphic devices may open up new computing paradigms to explore cognition, learning and limits of parallel computation. Here we report the demonstration of a synaptic transistor with SmNiO₃, a correlated electron system with insulator-metal transition temperature at 130°C in bulk form. Non-volatile resistance and synaptic multilevel analogue states are demonstrated by control over composition in ionic liquid-gated devices on silicon platforms. The extent of the resistance modulation can be dramatically controlled by the film microstructure. By simulating the time difference between postneuron and preneuron spikes as the input parameter of a gate bias voltage pulse, synaptic spike-timing-dependent plasticity learning behaviour is realized. The extreme sensitivity of electrical properties to defects in correlated oxides may make them a particularly suitable class of materials to realize artificial biological circuits that can be operated at and above room temperature and seamlessly integrated into conventional electronic circuits.
Synaptic long-term potentiation realized in Pavlov's dog model based on a NiOx-based memristor
NASA Astrophysics Data System (ADS)
Hu, S. G.; Liu, Y.; Liu, Z.; Chen, T. P.; Yu, Q.; Deng, L. J.; Yin, Y.; Hosaka, Sumio
2014-12-01
Synaptic long-term potentiation (LTP), a long-lasting enhancement in signal transmission between neurons, is widely considered the major cellular mechanism underlying learning and memory. In this work, a NiOx-based memristor is found to emulate synaptic LTP. The electrical conductance of the memristor is increased by electrical pulse stimulation and then spontaneously decays toward its initial state, resembling synaptic LTP. The duration of the LTP in the memristor can be estimated with a relaxation equation that describes the conductance decay well. The LTP effect of the memristor depends on the stimulation parameters, including pulse height, width, interval, and number of pulses. An artificial network consisting of three neurons and two synapses is constructed to demonstrate associative learning, and LTP behavior in the extinction of association, in Pavlov's dog experiment.
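The behavior described above can be sketched numerically: pulses increment the device conductance, which then relaxes back toward baseline. The abstract does not give the relaxation equation; a stretched exponential is assumed here as a common form for memristive decay, and all parameter values are illustrative.

```python
import numpy as np

G0, dG = 1.0, 0.05      # baseline conductance, increment per pulse (arb. units)
tau, beta = 50.0, 0.6   # assumed relaxation time constant and stretch exponent

def stimulate(G, n_pulses):
    """Each pulse adds a fixed conductance increment (a simplification)."""
    return G + n_pulses * dG

def relax(G, t):
    """Stretched-exponential decay of the potentiated component toward G0."""
    return G0 + (G - G0) * np.exp(-(t / tau) ** beta)

G = stimulate(G0, 20)                        # 20 pulses -> potentiated state
half_life = tau * np.log(2) ** (1 / beta)    # time for the excess to halve
print(relax(G, 0.0) > relax(G, 100.0) > G0)  # decays but stays above baseline
```

Fitting `tau` and `beta` to measured decay curves is how the "lasting time" of the LTP-like state would be estimated in this picture.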
Learning and memory: Steroids and epigenetics.
Colciago, Alessandra; Casati, Lavinia; Negri-Cesi, Paola; Celotti, Fabio
2015-06-01
Memory formation and utilization is a complex process involving several brain structures acting in conjunction, such as the hippocampus, the amygdala, and the adjacent cortical areas, usually defined as medial temporal lobe (MTL) structures. Memory processes depend on the formation and modulation of synaptic connectivity, affecting synaptic strength, synaptic plasticity, and synaptic consolidation. The basic neurocognitive mechanisms of learning and memory are briefly recalled in the initial section of this paper. The effects of sex hormones (estrogens, androgens, and progesterone) and of adrenocortical steroids on several aspects of memory processes are then analyzed on the basis of animal and human studies. Specific attention is devoted to the different types of steroid receptors involved (membrane or nuclear) and to local metabolic transformations where required. The review concludes with a short excursus on the steroid-activated epigenetic mechanisms involved in memory formation. Copyright © 2015 Elsevier Ltd. All rights reserved.
The Corticohippocampal Circuit, Synaptic Plasticity, and Memory
Basu, Jayeeta; Siegelbaum, Steven A.
2015-01-01
Synaptic plasticity serves as a cellular substrate for information storage in the central nervous system. The entorhinal cortex (EC) and hippocampus are interconnected brain areas supporting basic cognitive functions important for the formation and retrieval of declarative memories. Here, we discuss how information flow in the EC–hippocampal loop is organized through circuit design. We highlight recently identified corticohippocampal and intrahippocampal connections and how these long-range and local microcircuits contribute to learning. This review also describes various forms of activity-dependent mechanisms that change the strength of corticohippocampal synaptic transmission. A key point to emerge from these studies is that patterned activity and interaction of coincident inputs gives rise to associational plasticity and long-term regulation of information flow. Finally, we offer insights about how learning-related synaptic plasticity within the corticohippocampal circuit during sensory experiences may enable adaptive behaviors for encoding spatial, episodic, social, and contextual memories. PMID:26525152
ERIC Educational Resources Information Center
Hugues, Sandrine; Garcia, Rene
2007-01-01
We have previously shown that fear extinction is accompanied by an increase of synaptic efficacy in inputs from the ventral hippocampus (vHPC) and mediodorsal thalamus (MD) to the medial prefrontal cortex (mPFC) and that disrupting these changes to mPFC synaptic transmission compromises extinction processes. The aim of this study was to examine…
State-dependencies of learning across brain scales
Ritter, Petra; Born, Jan; Brecht, Michael; Dinse, Hubert R.; Heinemann, Uwe; Pleger, Burkhard; Schmitz, Dietmar; Schreiber, Susanne; Villringer, Arno; Kempter, Richard
2015-01-01
Learning is a complex brain function operating on different time scales, from milliseconds to years, which induces enduring changes in brain dynamics. The brain also undergoes continuous “spontaneous” shifts in states, which, amongst others, are characterized by rhythmic activity of various frequencies. Besides the most obvious distinct modes of waking and sleep, wake-associated brain states comprise modulations of vigilance and attention. Recent findings show that certain brain states, particularly during sleep, are essential for learning and memory consolidation. Oscillatory activity plays a crucial role on several spatial scales, for example in plasticity at a synaptic level or in communication across brain areas. However, the underlying mechanisms and computational rules linking brain states and rhythms to learning, though relevant for our understanding of brain function and therapeutic approaches in brain disease, have not yet been elucidated. Here we review known mechanisms of how brain states mediate and modulate learning by their characteristic rhythmic signatures. To understand the critical interplay between brain states, brain rhythms, and learning processes, a wide range of experimental and theoretical work in animal models and human subjects from the single synapse to the large-scale cortical level needs to be integrated. By discussing results from experiments and theoretical approaches, we illuminate new avenues for utilizing neuronal learning mechanisms in developing tools and therapies, e.g., for stroke patients and to devise memory enhancement strategies for the elderly. PMID:25767445
Super Resolution Imaging of Genetically Labeled Synapses in Drosophila Brain Tissue
Spühler, Isabelle A.; Conley, Gaurasundar M.; Scheffold, Frank; Sprecher, Simon G.
2016-01-01
Understanding synaptic connectivity and plasticity within brain circuits and their relationship to learning and behavior is a fundamental quest in neuroscience. Visualizing the fine details of synapses using optical microscopy remains however a major technical challenge. Super resolution microscopy opens the possibility to reveal molecular features of synapses beyond the diffraction limit. With direct stochastic optical reconstruction microscopy, dSTORM, we image synaptic proteins in the brain tissue of the fruit fly, Drosophila melanogaster. Super resolution imaging of brain tissue harbors difficulties due to light scattering and the density of signals. In order to reduce out of focus signal, we take advantage of the genetic tools available in the Drosophila and have fluorescently tagged synaptic proteins expressed in only a small number of neurons. These neurons form synapses within the calyx of the mushroom body, a distinct brain region involved in associative memory formation. Our results show that super resolution microscopy, in combination with genetically labeled synaptic proteins, is a powerful tool to investigate synapses in a quantitative fashion providing an entry point for studies on synaptic plasticity during learning and memory formation. PMID:27303270
Lack of Pannexin 1 Alters Synaptic GluN2 Subunit Composition and Spatial Reversal Learning in Mice.
Gajardo, Ivana; Salazar, Claudia S; Lopez-Espíndola, Daniela; Estay, Carolina; Flores-Muñoz, Carolina; Elgueta, Claudio; Gonzalez-Jamett, Arlek M; Martínez, Agustín D; Muñoz, Pablo; Ardiles, Álvaro O
2018-01-01
Long-term potentiation (LTP) and long-term depression (LTD) are two forms of synaptic plasticity that have been considered the cellular substrate of memory formation. Although LTP has received considerably more attention, recent evidence indicates that LTD also plays important roles in the acquisition and storage of novel information in the brain. Pannexin 1 (Panx1) is a membrane protein that forms non-selective channels which have been shown to modulate the induction of hippocampal synaptic plasticity. Genetic ablation of Panx1, or blockade of Panx1 channels, precludes the induction of LTD and facilitates LTP. To evaluate whether the absence of Panx1 also affects the acquisition of rapidly changing information, we trained Panx1 knockout (KO) mice and wild type (WT) littermates in a visual and a hidden version of the Morris water maze (MWM). We found that KO mice located the hidden platform similarly, although slightly more quickly, than WT animals; nonetheless, when the hidden platform was moved to the quadrant opposite the previously learned location, KO mice spent significantly more time in the previous quadrant than in the new location, indicating that the absence of Panx1 affects the reversal of a previously acquired spatial memory. Consistently, we observed changes in the content of synaptic proteins critical to LTD, such as the GluN2 subunits of N-methyl-D-aspartate receptors (NMDARs), which changed their contribution to synaptic plasticity upon Panx1 ablation. Our findings give further support to the role of Panx1 channels in modulating synaptic plasticity induction, learning, and memory processes.
Bian, Chen; Huang, Yan; Zhu, Haitao; Zhao, Yangang; Zhao, Jikai; Zhang, Jiqiang
2018-05-01
Steroids have been demonstrated to play profound roles in the regulation of hippocampal function by acting on their receptors, which need coactivators for their transcriptional activities. Previous studies have shown that steroid receptor coactivator-1 (SRC-1) is the predominant coactivator in the hippocampus, but its exact role and the underlying mechanisms remain unclear. In this study, we constructed SRC-1 RNA interference (RNAi) lentiviruses, injected them into the hippocampus of male mice, and then examined the changes in the expression of selected synaptic proteins, CA1 synapse density, postsynaptic density (PSD) thickness, and in vivo long-term potentiation (LTP). Spatial learning and memory behavior changes were investigated using the Morris water maze. We then transfected the lentiviruses into cultured hippocampal cells and examined the changes in synaptic protein and phospho-cyclic AMP response element-binding protein (pCREB) expression. The in vivo results showed that SRC-1 knockdown significantly decreased the expression of synaptic proteins and CA1 synapse density as well as PSD thickness; SRC-1 knockdown also significantly impaired in vivo LTP and disrupted spatial learning and memory. The in vitro results showed that while the expression of synaptic proteins was significantly decreased by SRC-1 knockdown, pCREB expression was also significantly decreased. The above results suggest a pivotal role of SRC-1 in the regulation of hippocampal synaptic plasticity and spatial learning and memory, strongly indicating SRC-1 may serve as a novel therapeutic target for hippocampus-dependent memory disorders. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.
Tetzlaff, Christian; Kolodziejski, Christoph; Timme, Marc; Wörgötter, Florentin
2011-01-01
Synaptic scaling is a slow process that modifies synapses, keeping the firing rate of neural circuits in specific regimes. Together with other processes, such as conventional synaptic plasticity in the form of long term depression and potentiation, synaptic scaling changes the synaptic patterns in a network, ensuring diverse, functionally relevant, stable, and input-dependent connectivity. How synaptic patterns are generated and stabilized, however, is largely unknown. Here we formally describe and analyze synaptic scaling based on results from experimental studies and demonstrate that the combination of different conventional plasticity mechanisms and synaptic scaling provides a powerful general framework for regulating network connectivity. In addition, we design several simple models that reproduce experimentally observed synaptic distributions as well as the observed synaptic modifications during sustained activity changes. These models predict that the combination of plasticity with scaling generates globally stable, input-controlled synaptic patterns, also in recurrent networks. Thus, in combination with other forms of plasticity, synaptic scaling can robustly yield neuronal circuits with high synaptic diversity, which potentially enables robust dynamic storage of complex activation patterns. This mechanism is even more pronounced when considering networks with a realistic degree of inhibition. Synaptic scaling combined with plasticity could thus be the basis for learning structured behavior even in initially random networks. PMID:22203799
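The interaction analyzed above can be sketched for a single synapse: a Hebbian term grows the weight with correlated activity, while a scaling term pulls the postsynaptic rate toward a target. The specific rule form and all parameter values below are illustrative assumptions in the spirit of the abstract, not the authors' exact model.

```python
# Assumed combined rule for one synapse driving a linear rate neuron (v = w*u):
#   dw/dt = mu * u * v                     (Hebbian, correlation-driven growth)
#         + gamma * (v_t - v) * w**2      (synaptic scaling toward target rate v_t)

mu, gamma, v_t = 0.001, 0.1, 1.0
dt = 0.1

def simulate(u, steps=20000, w0=0.1):
    """Euler-integrate the combined rule for a constant input rate u."""
    w = w0
    for _ in range(steps):
        v = w * u
        dw = mu * u * v + gamma * (v_t - v) * w ** 2
        w = max(w + dt * dw, 0.0)
    return w

w_low, w_high = simulate(u=0.5), simulate(u=2.0)
# Scaling keeps the output rate near the target for both input rates, so the
# weight settles roughly inversely to the input strength.
print(round(w_low * 0.5, 2), round(w_high * 2.0, 2))   # both near v_t
```

The stable, input-dependent weights that emerge are the single-synapse analogue of the "globally stable, input-controlled synaptic patterns" the abstract describes.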
Watabe, Ayako M; Nagase, Masashi; Hagiwara, Akari; Hida, Yamato; Tsuji, Megumi; Ochiai, Toshitaka; Kato, Fusao; Ohtsuka, Toshihisa
2016-01-01
Synapses of amphids defective (SAD)-A/B kinases control various steps in neuronal development and differentiation, such as axon specification and maturation in central and peripheral nervous systems. At mature pre-synaptic terminals, SAD-B is associated with synaptic vesicles and the active zone cytomatrix; however, how SAD-B regulates neurotransmission and synaptic plasticity in vivo remains unclear. Thus, we used SAD-B knockout (KO) mice to study the function of this pre-synaptic kinase in the brain. We found that the paired-pulse ratio was significantly enhanced at Schaffer collateral synapses in the hippocampal CA1 region in SAD-B KO mice compared with wild-type littermates. We also found that the frequency of the miniature excitatory post-synaptic current was decreased in SAD-B KO mice. Moreover, synaptic depression following prolonged low-frequency synaptic stimulation was significantly enhanced in SAD-B KO mice. These results suggest that SAD-B kinase regulates vesicular release probability at pre-synaptic terminals and is involved in vesicular trafficking and/or regulation of the readily releasable pool size. Finally, we found that hippocampus-dependent contextual fear learning was significantly impaired in SAD-B KO mice. These observations suggest that SAD-B kinase plays pivotal roles in controlling vesicular release properties and regulating hippocampal function in the mature brain. Synapses of amphids defective (SAD)-A/B kinases control various steps in neuronal development and differentiation, but their roles in mature brains were only partially known. Here, we demonstrated, at mature pre-synaptic terminals, that SAD-B regulates vesicular release probability and synaptic plasticity. Moreover, hippocampus-dependent contextual fear learning was significantly impaired in SAD-B KO mice, suggesting that SAD-B kinase plays pivotal roles in controlling vesicular release properties and regulating hippocampal function in the mature brain.
© 2015 International Society for Neurochemistry.
Slow synaptic dynamics in a network: From exponential to power-law forgetting
NASA Astrophysics Data System (ADS)
Luck, J. M.; Mehta, A.
2014-09-01
We investigate a mean-field model of interacting synapses on a directed neural network. Our interest lies in the slow adaptive dynamics of synapses, which are driven by the fast dynamics of the neurons they connect. Cooperation is modeled from the usual Hebbian perspective, while competition is modeled by an original polarity-driven rule. The emergence of a critical manifold culminating in a tricritical point is crucially dependent on the presence of synaptic competition. This leads to a universal 1/t power-law relaxation of the mean synaptic strength along the critical manifold and an equally universal 1/√t relaxation at the tricritical point, to be contrasted with the exponential relaxation that is otherwise generic. In turn, this leads to the natural emergence of long- and short-term memory from different parts of parameter space in a synaptic network, which is the most original and important result of our present investigations.
Self-organization in Balanced State Networks by STDP and Homeostatic Plasticity
Effenberger, Felix; Jost, Jürgen; Levina, Anna
2015-01-01
Structural inhomogeneities in synaptic efficacies have a strong impact on population response dynamics of cortical networks and are believed to play an important role in their functioning. However, little is known about how such inhomogeneities could evolve by means of synaptic plasticity. Here we present an adaptive model of a balanced neuronal network that combines two different types of plasticity, STDP and synaptic scaling. The plasticity rules yield both long-tailed distributions of synaptic weights and firing rates. Simultaneously, a highly connected subnetwork of driver neurons with strong synapses emerges. Coincident spiking activity of several driver cells can evoke population bursts and driver cells have similar dynamical properties as leader neurons found experimentally. Our model allows us to observe the delicate interplay between structural and dynamical properties of the emergent inhomogeneities. It is simple, robust to parameter changes and able to explain a multitude of different experimental findings in one basic network. PMID:26335425
Information and Biological Revolutions: Global Governance Challenges Summary of a Study Group
2000-01-01
Negrón-Oyarzo, Ignacio; Pérez, Miguel Ángel; Terreros, Gonzalo; Muñoz, Pablo; Dagnino-Subiabre, Alexies
2014-02-01
The prelimbic cortex and amygdala regulate the extinction of conditioned fear and anxiety, respectively. In adult rats, chronic stress affects the dendritic morphology of these brain areas, slowing extinction of learned fear and enhancing anxiety. The aim of this study was to determine whether rats subjected to chronic stress in adolescence show changes in learned fear, anxiety, and synaptic transmission in the prelimbic cortex during adulthood. Male Sprague Dawley rats were subjected to seven days of restraint stress on postnatal day forty-two (PND 42, adolescence). Afterward, the fear-conditioning paradigm was used to study conditioned fear extinction. Anxiety-like behavior was measured one day (PND 50) and twenty-one days (PND 70, adulthood) after stress using the elevated-plus maze and dark-light box tests, respectively. With another set of rats, excitatory synaptic transmission was analyzed with slices of the prelimbic cortex. Rats that had been stressed during adolescence and adulthood had higher anxiety-like behavior levels than did controls, while stress-induced slowing of learned fear extinction in adolescence was reversed during adulthood. In addition, the field excitatory postsynaptic potentials of stressed adolescent rats had significantly lower amplitudes than those of controls, although the amplitudes were higher in adulthood. Our results demonstrate that short-term stress in adolescence induces strong effects on excitatory synaptic transmission in the prelimbic cortex and extinction of learned fear, where the effect of stress on anxiety is more persistent than on the extinction of learned fear. These data contribute to the understanding of stress neurobiology. Copyright © 2013 Elsevier B.V. All rights reserved.
Li, Wen-Chang; Cooke, Tom; Sautois, Bart; Soffe, Stephen R; Borisyuk, Roman; Roberts, Alan
2007-09-10
How specific are the synaptic connections formed as neuronal networks develop and can simple rules account for the formation of functioning circuits? These questions are assessed in the spinal circuits controlling swimming in hatchling frog tadpoles. This is possible because detailed information is now available on the identity and synaptic connections of the main types of neuron. The probabilities of synapses between 7 types of identified spinal neuron were measured directly by making electrical recordings from 500 pairs of neurons. For the same neuron types, the dorso-ventral distributions of axons and dendrites were measured and then used to calculate the probabilities that axons would encounter particular dendrites and so potentially form synaptic connections. Surprisingly, synapses were found between all types of neuron but contact probabilities could be predicted simply by the anatomical overlap of their axons and dendrites. These results suggested that synapse formation may not require axons to recognise specific, correct dendrites. To test the plausibility of simpler hypotheses, we first made computational models that were able to generate longitudinal axon growth paths and reproduce the axon distribution patterns and synaptic contact probabilities found in the spinal cord. To test if probabilistic rules could produce functioning spinal networks, we then made realistic computational models of spinal cord neurons, giving them established cell-specific properties and connecting them into networks using the contact probabilities we had determined. A majority of these networks produced robust swimming activity. Simple factors such as morphogen gradients controlling dorso-ventral soma, dendrite and axon positions may sufficiently constrain the synaptic connections made between different types of neuron as the spinal cord first develops and allow functional networks to form. 
Our analysis implies that detailed cellular recognition between spinal neuron types may not be necessary for the reliable formation of functional networks to generate early behaviour like swimming.
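The paper's central idea, that contact probabilities follow from the anatomical overlap of axons and dendrites rather than specific recognition, can be sketched with a simple Monte Carlo estimate and Bernoulli wiring. All positions, extents, and sizes below are made-up illustration, not the measured tadpole distributions.

```python
import numpy as np

rng = np.random.default_rng(1)

def contact_probability(axon_lo, axon_hi, dend_lo, dend_hi, extent=20.0,
                        n=100_000):
    """Monte Carlo estimate that a uniformly placed axon position falls
    within a uniformly placed dendritic extent (1-D dorso-ventral model)."""
    axon = rng.uniform(axon_lo, axon_hi, n)
    d_lo = rng.uniform(dend_lo, dend_hi - extent, n)
    hits = (axon >= d_lo) & (axon <= d_lo + extent)
    return hits.mean()

# Hypothetical neuron types whose axons and dendrites span the same 100-um band
p = contact_probability(0.0, 100.0, 0.0, 100.0)

# Wire 30 presynaptic onto 30 postsynaptic cells with Bernoulli draws at p
A = rng.random((30, 30)) < p
print(round(float(p), 2))
```

Networks built this way carry no cell-specific recognition beyond the overlap statistics, which is exactly the hypothesis the modeling in the abstract tests.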
Intracellular GPCRs Play Key Roles in Synaptic Plasticity.
Jong, Yuh-Jiin I; Harmon, Steven K; O'Malley, Karen L
2018-02-16
The trillions of synaptic connections within the human brain are shaped by experience and neuronal activity, both of which underlie synaptic plasticity and ultimately learning and memory. G protein-coupled receptors (GPCRs) play key roles in synaptic plasticity by strengthening or weakening synapses and/or shaping dendritic spines. While most studies of synaptic plasticity have focused on cell surface receptors and their downstream signaling partners, emerging data point to a critical new role for the very same receptors to signal from inside the cell. Intracellular receptors have been localized to the nucleus, endoplasmic reticulum, lysosome, and mitochondria. From these intracellular positions, such receptors may couple to different signaling systems, display unique desensitization patterns, and/or show distinct patterns of subcellular distribution. Intracellular GPCRs can be activated at the cell surface, endocytosed, and transported to an intracellular site or simply activated in situ by de novo ligand synthesis, diffusion of permeable ligands, or active transport of non-permeable ligands. Current findings reinforce the notion that intracellular GPCRs play a dynamic role in synaptic plasticity and learning and memory. As new intracellular GPCR roles are defined, the need to selectively tailor agonists and/or antagonists to both intracellular and cell surface receptors may lead to the development of more effective therapeutic tools.
The Roles of Cortical Slow Waves in Synaptic Plasticity and Memory Consolidation.
Miyamoto, Daisuke; Hirai, Daichi; Murayama, Masanori
2017-01-01
Sleep plays important roles in sensory and motor memory consolidation. Sleep oscillations, reflecting neural population activity, involve the reactivation of learning-related neurons and regulate synaptic strength, thereby affecting memory consolidation. Among sleep oscillations, slow waves (0.5-4 Hz) are closely associated with memory consolidation. For example, slow-wave power is regulated in an experience-dependent manner and correlates with acquired memory. Furthermore, manipulating slow waves can enhance or impair memory consolidation. During slow wave sleep, inter-areal interactions between the cortex and hippocampus (HC) have been proposed to consolidate declarative memory; however, interactions for non-declarative (HC-independent) memory remain largely uninvestigated. We recently showed that the directional influence in a slow-wave range through a top-down cortical long-range circuit is involved in the consolidation of non-declarative memory. At the synaptic level, the average cortical synaptic strength is known to be potentiated during wakefulness and depressed during sleep. Moreover, learning causes plasticity in a subset of synapses, allocating memory to them. Sleep may help to differentiate synaptic strength between allocated and non-allocated synapses (i.e., improving the signal-to-noise ratio, which may facilitate memory consolidation). Herein, we offer perspectives on inter-areal interactions and synaptic plasticity for memory consolidation during sleep. PMID:29213231
Goh, Jinzhong J.; Manahan-Vaughan, Denise
2012-01-01
Persistent synaptic plasticity has been subjected to intense study in the decades since it was first described. Occurring in the form of long-term potentiation (LTP) and long-term depression (LTD), it shares many cellular and molecular properties with hippocampus-dependent forms of persistent memory. Recent reports of both LTP and LTD occurring endogenously under specific learning conditions provide further support that these forms of synaptic plasticity may comprise the cellular correlates of memory. Most studies of synaptic plasticity are performed using in vitro or in vivo preparations where patterned electrical stimulation of afferent fibers is implemented to induce changes in synaptic strength. This strategy has proven very effective in inducing LTP, even under in vivo conditions. LTD in vivo has proven more elusive: although LTD occurs endogenously under specific learning conditions in both rats and mice, its induction has not been successfully demonstrated with afferent electrical stimulation alone. In this study we screened a large spectrum of protocols that are known to induce LTD either in hippocampal slices or in the intact rat hippocampus, to clarify if LTD can be induced by sole afferent stimulation in the mouse CA1 region in vivo. Low frequency stimulation at 1, 2, 3, 5, 7, or 10 Hz given in the range of 100 through 1800 pulses produced, at best, short-term depression (STD) that lasted for up to 60 min. Varying the administration pattern of the stimuli (e.g., 900 pulses given twice at 5 min intervals), or changing the stimulation intensity did not improve the persistency of synaptic depression. LTD that lasts for at least 24 h occurs under learning conditions in mice. We conclude that a coincidence of factors, such as afferent activity together with neuromodulatory inputs, play a decisive role in the enablement of LTD under more naturalistic (e.g., learning) conditions. PMID:23355815
Wan, Chang Jin; Liu, Yang Hui; Zhu, Li Qiang; Feng, Ping; Shi, Yi; Wan, Qing
2016-04-20
In the biological nervous system, synaptic plasticity regulation is based on the modulation of ionic fluxes, and such regulation is regarded as the fundamental mechanism underlying memory and learning. Inspired by such biological strategies, indium-gallium-zinc-oxide (IGZO) electric-double-layer (EDL) transistors gated by aqueous solutions were proposed for synaptic behavior emulation. Short-term synaptic plasticity, such as paired-pulse facilitation, high-pass filtering, and orientation tuning, was experimentally emulated in these EDL transistors. Most importantly, we found that such short-term synaptic plasticity can be effectively regulated by alcohol (ethyl alcohol) and salt (potassium chloride) additives. Our results suggest that solution-gated oxide-based EDL transistors could act as platforms for short-term synaptic plasticity emulation.
NASA Astrophysics Data System (ADS)
Srinivasan, Gopalakrishnan; Sengupta, Abhronil; Roy, Kaushik
2016-07-01
Spiking Neural Networks (SNNs) have emerged as a powerful neuromorphic computing paradigm to carry out classification and recognition tasks. Nevertheless, general-purpose computing platforms and custom hardware architectures implemented using standard CMOS technology have been unable to rival the power efficiency of the human brain. Hence, there is a need for novel nanoelectronic devices that can efficiently model the neurons and synapses constituting an SNN. In this work, we propose a heterostructure composed of a Magnetic Tunnel Junction (MTJ) and a heavy metal as a stochastic binary synapse. Synaptic plasticity is achieved by the stochastic switching of the MTJ conductance states, based on the temporal correlation between the spiking activities of the interconnecting neurons. Additionally, we present a significance-driven long-term/short-term stochastic synapse comprising two unique binary synaptic elements, in order to improve the synaptic learning efficiency. We demonstrate the efficacy of the proposed synaptic configurations and the stochastic learning algorithm on an SNN trained to classify handwritten digits from the MNIST dataset, using a device-to-system-level simulation framework. The power efficiency of the proposed neuromorphic system stems from the ultra-low programming energy of the spintronic synapses.
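The stochastic binary synapse described above can be sketched in a few lines: the synapse holds one of two conductance states, and the probability of switching decays exponentially with the pre/post spike interval. This is a minimal illustration only; the constants (`a_plus`, `a_minus`, `tau`) are assumed values, not measured MTJ device parameters from the paper.

```python
import math
import random

def switching_probability(dt, a_plus=0.1, a_minus=0.05, tau=20.0):
    """Probability that the binary synapse switches state, decaying
    exponentially with the pre/post spike interval dt (ms). dt >= 0
    (pre before post) favors potentiation; dt < 0 favors depression.
    Constants are illustrative assumptions."""
    if dt >= 0:
        return a_plus * math.exp(-dt / tau)
    return a_minus * math.exp(dt / tau)

def update_synapse(state, dt, rng=random):
    """Stochastically switch a binary synapse (0 = low conductance,
    1 = high conductance) according to the timing-dependent probability."""
    if rng.random() < switching_probability(dt):
        return 1 if dt >= 0 else 0
    return state
```

Because each update is a single probabilistic switch between two states, programming energy stays low and learning emerges only statistically over many spike pairings.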
Asymmetry of Neuronal Combinatorial Codes Arises from Minimizing Synaptic Weight Change.
Leibold, Christian; Monsalve-Mercado, Mauro M
2016-08-01
Synaptic change is a costly resource, particularly for brain structures that have a high demand of synaptic plasticity. For example, building memories of object positions requires efficient use of plasticity resources since objects can easily change their location in space and yet we can memorize object locations. But how should a neural circuit ideally be set up to integrate two input streams (object location and identity) in case the overall synaptic changes should be minimized during ongoing learning? This letter provides a theoretical framework on how the two input pathways should ideally be specified. Generally the model predicts that the information-rich pathway should be plastic and encoded sparsely, whereas the pathway conveying less information should be encoded densely and undergo learning only if a neuronal representation of a novel object has to be established. As an example, we consider hippocampal area CA1, which combines place and object information. The model thereby provides a normative account of hippocampal rate remapping, that is, modulations of place field activity by changes of local cues. It may as well be applicable to other brain areas (such as neocortical layer V) that learn combinatorial codes from multiple input streams.
Ivannikov, Maxim V.; Sugimori, Mutsuyuki; Llinás, Rodolfo R.
2012-01-01
Synaptic plasticity in many regions of the central nervous system leads to the continuous adjustment of synaptic strength, which is essential for learning and memory. In this study, we show by visualizing synaptic vesicle release in mouse hippocampal synaptosomes that presynaptic mitochondria and, specifically, their capacities for ATP production are essential determinants of synaptic vesicle exocytosis and its magnitude. Total internal reflection microscopy of FM1-43 loaded hippocampal synaptosomes showed that inhibition of mitochondrial oxidative phosphorylation reduces evoked synaptic release. This reduction was accompanied by a substantial drop in synaptosomal ATP levels. However, cytosolic calcium influx was not affected. Structural characterization of stimulated hippocampal synaptosomes revealed that higher total presynaptic mitochondrial volumes were consistently associated with higher levels of exocytosis. Thus, synaptic vesicle release is linked to the presynaptic ability to regenerate ATP, which is itself a function of mitochondrial density and activity. PMID:22772899
Structure and plasticity potential of neural networks in the cerebral cortex
NASA Astrophysics Data System (ADS)
Fares, Tarec Edmond
In this thesis, we first described a theoretical framework for the analysis of spine remodeling plasticity. We provided a quantitative description of two models of spine remodeling in which the presence of a bouton is either required or not for the formation of a new synapse. We derived expressions for the density of potential synapses in the neuropil, the connectivity fraction, which is the ratio of actual to potential synapses, and the number of structurally different circuits attainable with spine remodeling. We calculated these parameters in mouse occipital cortex, rat CA1, monkey V1, and human temporal cortex. We found that on average a dendritic spine can choose among 4-7 potential targets in rodents and 10-20 potential targets in primates. The neuropil's potential for structural circuit remodeling is highest in rat CA1 (7.1-8.6 bits/μm³) and lowest in monkey V1 (1.3-1.5 bits/μm³). We also evaluated the lower bound of neuron selectivity in the choice of synaptic partners. Post-synaptic excitatory neurons in rodents make synaptic contacts with more than 21-30% of pre-synaptic axons encountered with new spine growth. Primate neurons appear to be more selective, making synaptic connections with more than 7-15% of encountered axons. We next studied the role neuron morphology plays in defining synaptic connectivity. As previously stated, it is clear that only pairs of neurons with closely positioned axonal and dendritic branches can be synaptically coupled. For excitatory neurons in the cerebral cortex, such axo-dendritic oppositions, or potential synapses, must be bridged by dendritic spines to form synaptic connections.
To explore the rules by which synaptic connections are formed within the constraints imposed by neuron morphology, we compared the distributions of the numbers of actual and potential synapses between pre- and post-synaptic neurons forming different laminar projections in rat barrel cortex. Quantitative comparison explicitly ruled out the hypothesis that individual synapses between neurons are formed independently of each other. Instead, the data are consistent with a cooperative scheme of synapse formation, where multiple synaptic connections between neurons are stabilized, while neurons that do not establish a critical number of synapses are not likely to remain synaptically coupled. In the above two projects, analysis of potential synapse numbers played an important role in shaping our understanding of connectivity and structural plasticity. In the third part of this thesis, we shift our attention to the study of the distribution of potential synapse numbers. This distribution is dependent on the details of neuron morphology and it defines synaptic connectivity patterns attainable with spine remodeling. To better understand how the distribution of potential synapse numbers is influenced by the overlap and the shapes of axonal and dendritic arbors, we first analyzed uniform disconnected arbors generated in silico. The resulting distributions are well described by binomial functions. We used a dataset of neurons reconstructed in 3D and generated the potential synapse distributions for neurons of different classes. Quantitative analysis showed that the binomial distribution is a good fit to this data as well. All distributions considered clustered into two categories, inhibitory to inhibitory and excitatory to excitatory projections. We showed that the distributions of potential synapse numbers are universally described by a family of single-parameter (p) binomial functions, where p = 0.08 for the inhibitory and p = 0.19 for the excitatory projections.
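The single-parameter binomial fit can be made concrete with a short sketch. The p values (0.08 inhibitory, 0.19 excitatory) come from the abstract; the number of axo-dendritic encounters n = 10 is an assumed value chosen only to make the distributions tangible.

```python
from math import comb

def potential_synapse_pmf(k, n, p):
    """Binomial probability of observing k potential synapses out of
    n axo-dendritic encounters, each succeeding independently with
    probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Fitted parameters from the thesis; n = 10 is an illustrative assumption.
p_inh, p_exc = 0.08, 0.19
pmf_inh = [potential_synapse_pmf(k, 10, p_inh) for k in range(11)]
pmf_exc = [potential_synapse_pmf(k, 10, p_exc) for k in range(11)]
```

With these parameters the expected number of potential synapses is n·p, i.e. 0.8 per 10 encounters for inhibitory projections versus 1.9 for excitatory ones.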
In the last part of this thesis an attempt is made to incorporate some of the biological constraints we considered thus far into an artificial neural network model. It became clear that several features of synaptic connectivity are ubiquitous among different cortical networks: (1) neural networks are predominantly excitatory, containing roughly 80% excitatory neurons and synapses; (2) neural networks are only sparsely interconnected, with probabilities of finding connected neurons always below 50%, even for neighboring cells; (3) the distribution of connection strengths has been shown to have a slow, non-exponential decay. In an attempt to understand the advantage of such network architecture for learning and memory, we analyzed the associative memory capacity of a biologically constrained perceptron-like neural network model. The artificial neural network we consider consists of robust excitatory and inhibitory McCulloch-Pitts neurons with a constant firing threshold. Our theoretical results show that the capacity for associative memory storage in such networks increases with the addition of a small fraction of inhibitory neurons, while the connection probability remains below 50%. (Abstract shortened by UMI.)
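A minimal sketch of the kind of biologically constrained perceptron described above, assuming McCulloch-Pitts threshold units with a fixed threshold and Dale's-law sign constraints on the weights (excitatory inputs stay non-negative, inhibitory stay non-positive). The learning rule and all numbers are illustrative assumptions, not the thesis model.

```python
import random

def mcculloch_pitts(x, w, theta):
    """Binary threshold unit: fires iff the weighted input reaches theta."""
    return 1 if sum(xi * wi for xi, wi in zip(x, w)) >= theta else 0

def perceptron_learn(patterns, signs, theta=1.0, lr=0.1, epochs=200, seed=0):
    """Perceptron-style learning with sign-constrained weights:
    inputs with sign +1 (excitatory) are clipped to w >= 0, inputs with
    sign -1 (inhibitory) to w <= 0. Sketch only; details are assumed."""
    rng = random.Random(seed)  # reserved for pattern shuffling variants
    n = len(signs)
    w = [0.0] * n
    for _ in range(epochs):
        errors = 0
        for x, target in patterns:
            y = mcculloch_pitts(x, w, theta)
            if y != target:
                errors += 1
                for i in range(n):
                    w[i] += lr * (target - y) * x[i]
                    # enforce the excitatory/inhibitory sign constraint
                    w[i] = max(w[i], 0.0) if signs[i] > 0 else min(w[i], 0.0)
        if errors == 0:
            break
    return w
```

Usage on a toy separable task with three excitatory and one inhibitory input: `perceptron_learn([((1,0,0,0), 1), ((0,1,0,0), 1), ((0,0,0,1), 0)], signs=[1, 1, 1, -1])` converges while keeping the inhibitory weight non-positive.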
Goal-Directed Decision Making with Spiking Neurons.
Friedrich, Johannes; Lengyel, Máté
2016-02-03
Behavioral and neuroscientific data on reward-based decision making point to a fundamental distinction between habitual and goal-directed action selection. The formation of habits, which requires simple updating of cached values, has been studied in great detail, and the reward prediction error theory of dopamine function has enjoyed prominent success in accounting for its neural bases. In contrast, the neural circuit mechanisms of goal-directed decision making, requiring extended iterative computations to estimate values online, are still unknown. Here we present a spiking neural network that provably solves the difficult online value estimation problem underlying goal-directed decision making in a near-optimal way and reproduces behavioral as well as neurophysiological experimental data on tasks ranging from simple binary choice to sequential decision making. Our model uses local plasticity rules to learn the synaptic weights of a simple neural network to achieve optimal performance and solves one-step decision-making tasks, commonly considered in neuroeconomics, as well as more challenging sequential decision-making tasks within 1 s. These decision times, and their parametric dependence on task parameters, as well as the final choice probabilities match behavioral data, whereas the evolution of neural activities in the network closely mimics neural responses recorded in frontal cortices during the execution of such tasks. Our theory provides a principled framework to understand the neural underpinning of goal-directed decision making and makes novel predictions for sequential decision-making tasks with multiple rewards. Goal-directed actions requiring prospective planning pervade decision making, but their circuit-level mechanisms remain elusive. We show how a model circuit of biologically realistic spiking neurons can solve this computationally challenging problem in a novel way. 
The synaptic weights of our network can be learned using local plasticity rules such that its dynamics devise a near-optimal plan of action. By systematically comparing our model results to experimental data, we show that it reproduces behavioral decision times and choice probabilities as well as neural responses in a rich set of tasks. Our results thus offer the first biologically realistic account for complex goal-directed decision making at a computational, algorithmic, and implementational level. Copyright © 2016 the authors 0270-6474/16/361529-18$15.00/0.
Valcarcel-Ares, Marta Noa; Tucsek, Zsuzsanna; Kiss, Tamas; Giles, Cory B; Tarantini, Stefano; Yabluchanskiy, Andriy; Balasubramanian, Priya; Gautam, Tripti; Galvan, Veronica; Ballabh, Praveen; Richardson, Arlan; Freeman, Willard M; Wren, Jonathan D; Deak, Ferenc; Ungvari, Zoltan; Csiszar, Anna
2018-06-08
There is strong evidence that obesity has deleterious effects on cognitive function of older adults. Previous preclinical studies demonstrate that obesity in aging is associated with a heightened state of systemic inflammation, which exacerbates blood brain barrier disruption, promoting neuroinflammation and oxidative stress. To test the hypothesis that synergistic effects of obesity and aging on inflammatory processes exert deleterious effects on hippocampal function, young and aged C57BL/6 mice were rendered obese by chronic feeding of a high fat diet followed by assessment of learning and memory function, measurement of hippocampal long-term potentiation (LTP), assessment of changes in hippocampal expression of genes relevant for synaptic function and determination of synaptic density. Because there is increasing evidence that altered production of lipid mediators modulate LTP, neuroinflammation and neurovascular coupling responses, the effects of obesity on hippocampal levels of relevant eicosanoid mediators were also assessed. We found that aging exacerbates obesity-induced microglia activation, which is associated with deficits in hippocampal-dependent learning and memory tests, impaired LTP, decreased synaptic density and dysregulation of genes involved in regulation of synaptic plasticity. Obesity in aging also resulted in an altered hippocampal eicosanoid profile, including decreases in vasodilator and pro-LTP epoxy-eicosatrienoic acids (EETs). Collectively, our results taken together with previous findings suggest that obesity in aging promotes hippocampal inflammation, which in turn may contribute to synaptic dysfunction and cognitive impairment.
Spontaneous Activity Drives Local Synaptic Plasticity In Vivo.
Winnubst, Johan; Cheyne, Juliette E; Niculescu, Dragos; Lohmann, Christian
2015-07-15
Spontaneous activity fine-tunes neuronal connections in the developing brain. To explore the underlying synaptic plasticity mechanisms, we monitored naturally occurring changes in spontaneous activity at individual synapses with whole-cell patch-clamp recordings and simultaneous calcium imaging in the mouse visual cortex in vivo. Analyzing activity changes across large populations of synapses revealed a simple and efficient local plasticity rule: synapses that exhibit low synchronicity with nearby neighbors (<12 μm) become depressed in their transmission frequency. Asynchronous electrical stimulation of individual synapses in hippocampal slices showed that this is due to a decrease in synaptic transmission efficiency. Accordingly, experimentally increasing local synchronicity, by stimulating synapses in response to spontaneous activity at neighboring synapses, stabilized synaptic transmission. Finally, blockade of the high-affinity proBDNF receptor p75(NTR) prevented the depression of asynchronously stimulated synapses. Thus, spontaneous activity drives local synaptic plasticity at individual synapses in an "out-of-sync, lose-your-link" fashion through proBDNF/p75(NTR) signaling to refine neuronal connectivity. VIDEO ABSTRACT. Copyright © 2015 Elsevier Inc. All rights reserved.
Tissue Plasminogen Activator Induction in Purkinje Neurons After Cerebellar Motor Learning
NASA Astrophysics Data System (ADS)
Seeds, Nicholas W.; Williams, Brian L.; Bickford, Paula C.
1995-12-01
The cerebellar cortex is implicated in the learning of complex motor skills. This learning may require synaptic remodeling of Purkinje cell inputs. An extracellular serine protease, tissue plasminogen activator (tPA), is involved in remodeling various nonneural tissues and is associated with developing and regenerating neurons. In situ hybridization showed that expression of tPA messenger RNA was increased in the Purkinje neurons of rats within an hour of their being trained for a complex motor task. Antibody to tPA also showed the induction of tPA protein associated with cerebellar Purkinje cells. Thus, the induction of tPA during motor learning may play a role in activity-dependent synaptic plasticity.
Sleep, Plasticity and Memory from Molecules to Whole-Brain Networks
Abel, Ted; Havekes, Robbert; Saletin, Jared M.; Walker, Matthew P.
2014-01-01
Despite the ubiquity of sleep across phylogeny, its function remains elusive. In this review, we consider one compelling candidate: brain plasticity associated with memory processing. Focusing largely on hippocampus-dependent memory in rodents and humans, we describe molecular, cellular, network, whole-brain and behavioral evidence establishing a role for sleep both in preparation for initial memory encoding and in the subsequent offline consolidation of memory. Sleep and sleep deprivation bidirectionally alter molecular signaling pathways that regulate synaptic strength and control plasticity-related gene transcription and protein translation. At the cellular level, sleep deprivation impairs cellular excitability necessary for inducing synaptic potentiation and accelerates the decay of long-lasting forms of synaptic plasticity. In contrast, NREM and REM sleep enhance previously induced synaptic potentiation, although synaptic de-potentiation during sleep has also been observed. Beyond single cell dynamics, large-scale cell ensembles express coordinated replay of prior learning-related firing patterns during subsequent sleep. This occurs in the hippocampus, in the cortex, and between the hippocampus and cortex, commonly in association with specific NREM sleep oscillations. At the whole-brain level, somewhat analogous learning-associated hippocampal (re)activation during NREM sleep has been reported in humans. Moreover, the same cortical NREM oscillations associated with replay in rodents also promote human hippocampal memory consolidation, and this process can be manipulated using exogenous reactivation cues during sleep. Mirroring molecular findings in rodents, specific NREM sleep oscillations before encoding refresh human hippocampal learning capacity, while deprivation of sleep conversely impairs subsequent hippocampal activity and associated encoding. 
Together, these cross-level findings demonstrate that the unique neurobiology of sleep exerts powerful effects on the molecular, cellular and network mechanisms of plasticity that govern both initial learning and subsequent long-term memory consolidation. PMID:24028961
Common mechanisms of synaptic plasticity in vertebrates and invertebrates
Glanzman, David L.
2016-01-01
Until recently, the literature on learning-related synaptic plasticity in invertebrates has been dominated by models assuming plasticity is mediated by presynaptic changes, whereas the vertebrate literature has been dominated by models assuming it is mediated by postsynaptic changes. Here I will argue that this situation does not reflect a biological reality and that, in fact, invertebrate and vertebrate nervous systems share a common set of mechanisms of synaptic plasticity. PMID:20152143
Multi-layer network utilizing rewarded spike time dependent plasticity to learn a foraging task
2017-01-01
Neural networks with a single plastic layer employing reward modulated spike time dependent plasticity (STDP) are capable of learning simple foraging tasks. Here we demonstrate advanced pattern discrimination and continuous learning in a network of spiking neurons with multiple plastic layers. The network utilized both reward modulated and non-reward modulated STDP and implemented multiple mechanisms for homeostatic regulation of synaptic efficacy, including heterosynaptic plasticity, gain control, output balancing, activity normalization of rewarded STDP and hard limits on synaptic strength. We found that addition of a hidden layer of neurons employing non-rewarded STDP created neurons that responded to the specific combinations of inputs and thus performed basic classification of the input patterns. When combined with a following layer of neurons implementing rewarded STDP, the network was able to learn, despite the absence of labeled training data, discrimination between rewarding patterns and the patterns designated as punishing. Synaptic noise allowed for trial-and-error learning that helped to identify the goal-oriented strategies which were effective in task solving. The study predicts a critical set of properties of the spiking neuronal network with STDP that was sufficient to solve a complex foraging task involving pattern classification and decision making. PMID:28961245
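The reward-modulated STDP rule with hard limits on synaptic strength can be sketched as follows: pre/post spike pairings accumulate an eligibility via the classic exponential STDP kernel, a scalar reward gates whether that eligibility is applied, and the weight is clamped to fixed bounds. The kernel shape and constants are standard textbook assumptions, not the study's fitted parameters.

```python
import math

def stdp_eligibility(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Exponential STDP kernel: pre-before-post (dt > 0, ms) yields a
    positive eligibility, post-before-pre a negative one. Constants are
    illustrative assumptions."""
    if dt >= 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

def rewarded_stdp_update(w, spike_pairs, reward, w_min=0.0, w_max=1.0):
    """Reward-modulated STDP step: summed eligibility from recent
    pre/post spike intervals is gated by a scalar reward signal, with
    hard limits on synaptic strength as a homeostatic bound."""
    e = sum(stdp_eligibility(dt) for dt in spike_pairs)
    return min(max(w + reward * e, w_min), w_max)
```

With a positive reward, causal pairings strengthen the synapse; a negative reward (punishment) weakens it for the same pairings, which is the trial-and-error signal the network exploits.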
A neuronal model of predictive coding accounting for the mismatch negativity.
Wacongne, Catherine; Changeux, Jean-Pierre; Dehaene, Stanislas
2012-03-14
The mismatch negativity (MMN) is thought to index the activation of specialized neural networks for active prediction and deviance detection. However, a detailed neuronal model of the neurobiological mechanisms underlying the MMN is still lacking, and its computational foundations remain debated. We propose here a detailed neuronal model of auditory cortex, based on predictive coding, that accounts for the critical features of the MMN. The model is entirely composed of spiking excitatory and inhibitory neurons interconnected in a layered cortical architecture with distinct input, predictive, and prediction-error units. A spike-timing-dependent learning rule, relying upon NMDA receptor synaptic transmission, allows the network to adjust its internal predictions and use a memory of recent past inputs to anticipate future stimuli based on transition statistics. We demonstrate that this simple architecture can account for the major empirical properties of the MMN. These include a frequency-dependent response to rare deviants, a response to unexpected repeats in alternating sequences (ABABAA…), a lack of consideration of the global sequence context, a response to sound omission, and a sensitivity of the MMN to NMDA receptor antagonists. Novel predictions are presented, and a new magnetoencephalography experiment in healthy human subjects is reported that validates our key hypothesis: the MMN results from active cortical prediction rather than passive synaptic habituation.
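The transition-statistics learning underlying the model's predictions can be abstracted, far above the spiking level, as a first-order successor predictor that flags a mismatch whenever the observed stimulus deviates from the most frequent successor of the previous one. This toy class is an illustrative abstraction of the predictive/prediction-error split, not the paper's spiking network.

```python
from collections import defaultdict

class TransitionPredictor:
    """Minimal first-order transition-statistics learner: predicts the
    most frequent successor of the previous stimulus and returns True
    (a mismatch / prediction-error signal) when the observation deviates."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.prev = None

    def step(self, stim):
        mismatch = False
        if self.prev is not None:
            successors = self.counts[self.prev]
            if successors:
                predicted = max(successors, key=successors.get)
                mismatch = stim != predicted
            successors[stim] += 1
        self.prev = stim
        return mismatch
```

Feeding it an alternating sequence such as ABABAB... produces no mismatch once the alternation is learned, while an unexpected repeat (...ABB) triggers one, mirroring the MMN to repeats in alternating sequences described above.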
Human θ burst stimulation enhances subsequent motor learning and increases performance variability.
Teo, James T H; Swayne, Orlando B C; Cheeran, Binith; Greenwood, Richard J; Rothwell, John C
2011-07-01
Intermittent theta burst stimulation (iTBS) transiently increases motor cortex excitability in healthy humans by a process thought to involve synaptic long-term potentiation (LTP), and this effect is enhanced by nicotine. Acquisition of a ballistic motor task is likewise accompanied by increased excitability and presumed intracortical LTP. Here, we test how iTBS and nicotine influence subsequent motor learning. Ten healthy subjects participated in a double-blinded placebo-controlled trial testing the effects of iTBS and nicotine. iTBS alone increased the rate of learning, but this increase was blocked by nicotine. We then investigated factors other than synaptic strengthening that may play a role. Behavioral analysis and modeling suggested that iTBS increased performance variability, which correlated with learning outcome. A control experiment confirmed the increase in motor output variability by showing that iTBS increased the dispersion of involuntary transcranial magnetic stimulation-evoked thumb movements. We suggest that, in addition to its effect on synaptic plasticity, iTBS may have facilitated performance by increasing motor output variability; nicotine negated this effect on variability, perhaps via increasing the signal-to-noise ratio in cerebral cortex.
The A-Current Modulates Learning via NMDA Receptors Containing the NR2B Subunit
Fontán-Lozano, Ángela; Suárez-Pereira, Irene; González-Forero, David; Carrión, Ángel Manuel
2011-01-01
Synaptic plasticity involves short- and long-term events, although the molecular mechanisms that underlie these processes are not fully understood. The transient A-type K+ current (IA) controls the excitability of the dendrites from CA1 pyramidal neurons by regulating the back-propagation of action potentials and shaping synaptic input. Here, we have studied how decreases in IA affect cognitive processes and synaptic plasticity. Using wild-type mice treated with 4-AP, an IA inhibitor, and mice lacking the DREAM protein, a transcriptional repressor and modulator of the IA, we demonstrate that impairment of IA decreases the stimulation threshold for learning and the induction of early-LTP. Hippocampal electrical recordings in both models revealed alterations in basal electrical oscillatory properties toward low-theta frequencies. In addition, we demonstrated that the facilitated learning induced by decreased IA requires the activation of NMDA receptors containing the NR2B subunit. Together, these findings point to a balance between the IA and the activity of NR2B-containing NMDA receptors in the regulation of learning. PMID:21966384
Reversing the Effects of Fragile X Syndrome
ERIC Educational Resources Information Center
Ogren, Marilee P.; Lombroso, Paul J.
2008-01-01
Research on how synaptic plasticity is abnormally regulated in fragile X syndrome, and how this abnormality can be reversed by therapeutic interventions, is presented. Fragile X syndrome is a disorder of synaptic plasticity that contributes to abnormal development and interferes with normal learning and memory.
ALTERED PHOSPHORYLATION OF MAP KINASE AFTER ACUTE EXPOSURE TO PCB153.
Long-term potentiation (LTP) is a model of synaptic plasticity believed to encompass the physiological substrate of memory. The mitogen-activated protein kinase (ERK1/2) signalling cascade contributes to synaptic plasticity and to long-term memory formation. Learning and LTP st...
Robust short-term memory without synaptic learning.
Johnson, Samuel; Marro, J; Torres, Joaquín J
2013-01-01
Short-term memory in the brain cannot in general be explained the way long-term memory can, as a gradual modification of synaptic weights, since it takes place too quickly. Theories based on some form of cellular bistability, however, do not seem able to account for the fact that noisy neurons can collectively store information in a robust manner. We show how a sufficiently clustered network of simple model neurons can be instantly induced into metastable states capable of retaining information for a short time (a few seconds). The mechanism is robust to different network topologies and kinds of neural model. This could constitute a viable means available to the brain for sensory and/or short-term memory with no need of synaptic learning. Relevant phenomena described by neurobiology and psychology, such as local synchronization of synaptic inputs and power-law statistics of forgetting avalanches, emerge naturally from this mechanism, and we suggest possible experiments to test its viability in more biological settings.
Acute and Chronic Effects of Ethanol on Learning-Related Synaptic Plasticity
Zorumski, Charles F.; Mennerick, Steven; Izumi, Yukitoshi
2014-01-01
Alcoholism is associated with acute and long-term cognitive dysfunction including memory impairment, resulting in substantial disability and cost to society. Thus, understanding how ethanol impairs cognition is essential for developing treatment strategies to dampen its adverse impact. Memory processing is thought to involve persistent, use-dependent changes in synaptic transmission, and ethanol alters the activity of multiple signaling molecules involved in synaptic processing, including modulation of the glutamate and gamma-aminobutyric acid (GABA) transmitter systems that mediate most fast excitatory and inhibitory transmission in the brain. Effects on glutamate and GABA receptors contribute to ethanol-induced changes in long-term potentiation (LTP) and long-term depression (LTD), forms of synaptic plasticity thought to underlie memory acquisition. In this paper, we review the effects of ethanol on learning-related forms of synaptic plasticity with emphasis on changes observed in the hippocampus, a brain region that is critical for encoding contextual and episodic memories. We also include studies in other brain regions as they pertain to altered cognitive and mental function. Comparison of effects in the hippocampus to other brain regions is instructive for understanding the complexities of ethanol’s acute and long-term pharmacological consequences. PMID:24447472
The roles of protein expression in synaptic plasticity and memory consolidation
Rosenberg, Tali; Gal-Ben-Ari, Shunit; Dieterich, Daniela C.; Kreutz, Michael R.; Ziv, Noam E.; Gundelfinger, Eckart D.; Rosenblum, Kobi
2014-01-01
The amount and availability of proteins are regulated by their synthesis, degradation, and transport. These processes can specifically, locally, and temporally regulate a protein or a population of proteins, thus affecting numerous biological processes in health and disease states. Accordingly, malfunction in the processes of protein turnover and localization underlies different neuronal diseases. However, as early as a century ago it was recognized that normal macromolecular synthesis is specifically required during one phase of the learning process, memory consolidation, which takes place minutes to hours following acquisition. Memory consolidation is the process by which fragile short-term memory is converted into stable long-term memory. It is accepted today that synaptic plasticity is a cellular mechanism of learning and memory processes. Interestingly, similar molecular mechanisms subserve both memory and synaptic plasticity consolidation. In this review, we survey the current view on the connection between memory consolidation processes and proteostasis, i.e., maintaining the protein contents at the neuron and the synapse. In addition, we describe the technical obstacles and possible new methods to determine neuronal and synaptic proteostasis and to better explain the process of memory and synaptic plasticity consolidation. PMID:25429258
Invariant visual object recognition: a model, with lighting invariance.
Rolls, Edmund T; Stringer, Simon M
2006-01-01
How are invariant representations of objects formed in the visual cortex? We describe a neurophysiological and computational approach which focusses on a feature hierarchy model in which invariant representations can be built by self-organizing learning based on the statistics of the visual input. The model can use temporal continuity in an associative synaptic learning rule with a short-term memory trace, and/or it can use spatial continuity in Continuous Transformation learning. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, and size and, as we show in this paper, lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects such as looming, rotation, and object-based movement. The model has been extended to incorporate top-down feedback connections to model the control of attention by biased competition in, for example, spatial and object search tasks. The model has also been extended to account for how the visual system can select single objects in complex visual scenes, and how multiple objects can be represented in a scene.
Mean-field theory of a plastic network of integrate-and-fire neurons.
Chen, Chun-Chung; Jasnow, David
2010-01-01
We consider a noise-driven network of integrate-and-fire neurons. The network evolves as a result of the activities of the neurons following spike-timing-dependent plasticity rules. We apply a self-consistent mean-field theory to the system to obtain the mean activity level for the system as a function of the mean synaptic weight, which predicts a first-order transition and hysteresis between a noise-dominated regime and a regime of persistent neural activity. Assuming Poisson firing statistics for the neurons, the plasticity dynamics of a synapse under the influence of the mean-field environment can be mapped to the dynamics of an asymmetric random walk in synaptic-weight space. Using a master equation for small steps, we predict a narrow distribution of synaptic weights that scales with the square root of the plasticity rate for the stationary state of the system given plausible physiological parameter values describing neural transmission and plasticity. The dependence of the distribution on the synaptic weight of the mean-field environment allows us to determine the mean synaptic weight self-consistently. The effects of fluctuations in the total synaptic conductance and in plasticity step sizes are also considered. Such fluctuations result in a smoothing of the first-order transition for low numbers of afferent synapses per neuron and a broadening of the synaptic-weight distribution, respectively.
Learning and memory disabilities in IUGR babies: Functional and molecular analysis in a rat model.
Camprubí Camprubí, Marta; Balada Caballé, Rafel; Ortega Cano, Juan A; Ortega de la Torre, Maria de Los Angeles; Duran Fernández-Feijoo, Cristina; Girabent-Farrés, Montserrat; Figueras-Aloy, Josep; Krauel, Xavier; Alcántara, Soledad
2017-03-01
Intrauterine growth restriction (IUGR) is the failure of the fetus to achieve its inherent growth potential, and it has frequently been associated with neurodevelopmental problems in childhood. Neurological disorders are mostly associated with IUGR babies with an abnormally high cephalization index (CI) and a brain sparing effect. However, a similar correlation has never been demonstrated in an animal model. The aim of this study was to determine the correlations between CI, functional deficits in learning and memory, and alterations in synaptic proteins in a rat model of IUGR. Utero-placental insufficiency was induced by meso-ovarian vessel cauterization (CMO) in pregnant rats at embryonic day 17 (E17). Learning performance in an aquatic learning test was evaluated over 10 days, beginning 25 days after birth. Synaptic proteins (PSD95, synaptophysin) were analyzed by Western blot and immunohistochemistry. Placental insufficiency in CMO pups was associated with spatial memory deficits, which correlated with a CI above the normal range. CMO pups presented altered levels of the synaptic proteins PSD95 and synaptophysin in the hippocampus. The results of this study suggest that learning disabilities may be associated with altered development of excitatory neurotransmission and synaptic plasticity. Although interspecific differences in fetal response to placental insufficiency should be taken into account, the translation of these data to humans suggests that both IUGR babies and babies with a normal birth weight but with intrauterine Doppler alterations and abnormal CI should be closely followed to detect neurodevelopmental alterations during the postnatal period.
Li, Li; MaBouDi, HaDi; Egertová, Michaela; Elphick, Maurice R.; Chittka, Lars; Perry, Clint J.
2017-01-01
Synaptic plasticity is considered to be a basis for learning and memory. However, the relationship between synaptic arrangements and individual differences in learning and memory is poorly understood. Here, we explored how the density of microglomeruli (synaptic complexes) within specific regions of the bumblebee (Bombus terrestris) brain relates to both visual learning and inter-individual differences in learning and memory performance on a visual discrimination task. Using whole-brain immunolabelling, we measured the density of microglomeruli in the collar region (visual association areas) of the mushroom bodies of the bumblebee brain. We found that bumblebees which made fewer errors during training in a visual discrimination task had higher microglomerular density. Similarly, bumblebees that had better retention of the learned colour-reward associations two days after training had higher microglomerular density. Further experiments indicated experience-dependent changes in neural circuitry: learning a colour-reward contingency with 10 colours (but not with two) results in changes to microglomerular density in the collar region of the mushroom bodies, and exposure to many different colours alone may do so as well. These results reveal the varying roles that visual experience, visual learning and foraging activity have on neural structure. Although our study does not provide a causal link between microglomerular density and performance, the observed positive correlations provide new insights for future studies into how neural structure may relate to inter-individual differences in learning and memory. PMID:28978727
Cabirol, Amélie; Brooks, Rufus; Groh, Claudia; Barron, Andrew B; Devaud, Jean-Marc
2017-10-01
The honey bee mushroom bodies (MBs) are brain centers required for specific learning tasks. Here, we show that environmental conditions experienced as young adults affect the maturation of MB neuropil and performance in a MB-dependent learning task. Specifically, olfactory reversal learning was selectively impaired following early exposure to an impoverished environment lacking some of the sensory and social interactions present in the hive. In parallel, the overall number of synaptic boutons increased within the MB olfactory neuropil, whose volume remained unaffected. This suggests that experience of the rich in-hive environment promotes MB maturation and the development of MB-dependent learning capacities. © 2017 Cabirol et al.; Published by Cold Spring Harbor Laboratory Press.
Introduction: Thyroid hormones (TH) influence central nervous system (CNS) function during development and in adulthood. The hippocampus, a brain area critical for learning and memory is sensitive to TH insufficiency. Synaptic transmission in the hippocampus is impaired following...
NASA Astrophysics Data System (ADS)
Grytskyy, Dmytro; Diesmann, Markus; Helias, Moritz
2016-06-01
Self-organized structures in networks with spike-timing dependent synaptic plasticity (STDP) are likely to play a central role for information processing in the brain. In the present study we derive a reaction-diffusion-like formalism for plastic feed-forward networks of nonlinear rate-based model neurons with a correlation-sensitive learning rule inspired by, and qualitatively similar to, STDP. After obtaining equations that describe the change of the spatial shape of the signal from layer to layer, we derive a criterion for the nonlinearity necessary to obtain stable dynamics for arbitrary input. We classify the possible scenarios of signal evolution and find that, close to the transition to the unstable regime, metastable solutions appear. The form of these dissipative solitons is determined analytically and the evolution and interaction of several such coexistent objects is investigated.
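The stability criterion mentioned in this abstract can be illustrated with a minimal sketch: a 1-D feed-forward chain of rate neurons coupled layer to layer by a Gaussian kernel (all parameters are hypothetical, and the correlation-sensitive learning rule itself is omitted). With a supra-unity gain, linear propagation lets the signal grow from layer to layer, while a saturating nonlinearity keeps the dynamics bounded for the same input:

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 201)
# Gaussian coupling between positions in consecutive layers
sigma = 0.5
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * sigma ** 2))
K /= K.sum(axis=1, keepdims=True)      # row-normalised feed-forward kernel

def propagate(signal, layers, gain, nonlinear=True):
    # pass a spatial activity profile through successive identical layers
    for _ in range(layers):
        signal = gain * (K @ signal)
        if nonlinear:
            signal = np.tanh(signal)   # saturating rate nonlinearity
    return signal

s0 = np.exp(-x ** 2)                   # input: a localized activity bump
lin = propagate(s0, layers=20, gain=1.5, nonlinear=False)
non = propagate(s0, layers=20, gain=1.5, nonlinear=True)

# linear propagation blows up; the saturating nonlinearity stays bounded
print(lin.max() > 10, 0 < non.max() < 1)
```

This only reproduces the boundedness argument; the paper's reaction-diffusion formalism and the dissipative-soliton solutions require the learning rule and a continuum treatment.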
Rinaldi, Arianna; Defterali, Cagla; Mialot, Antoine; Garden, Derek L F; Beraneck, Mathieu; Nolan, Matthew F
2013-01-01
Neural computations rely on ion channels that modify neuronal responses to synaptic inputs. While single cell recordings suggest diverse and neurone type-specific computational functions for HCN1 channels, their behavioural roles in any single neurone type are not clear. Using a battery of behavioural assays, including analysis of motor learning in vestibulo-ocular reflex and rotarod tests, we find that deletion of HCN1 channels from cerebellar Purkinje cells selectively impairs late stages of motor learning. Because deletion of HCN1 modifies only a subset of behaviours involving Purkinje cells, we asked whether the channel also has functional specificity at a cellular level. We find that HCN1 channels in cerebellar Purkinje cells reduce the duration of inhibitory synaptic responses but, in the absence of membrane hyperpolarization, do not affect responses to excitatory inputs. Our results indicate that manipulation of subthreshold computation in a single neurone type causes specific modifications to behaviour. PMID:24000178
Cerebellar Plasticity and Motor Learning Deficits in a Copy Number Variation Mouse Model of Autism
Piochon, Claire; Kloth, Alexander D; Grasselli, Giorgio; Titley, Heather K; Nakayama, Hisako; Hashimoto, Kouichi; Wan, Vivian; Simmons, Dana H; Eissa, Tahra; Nakatani, Jin; Cherskov, Adriana; Miyazaki, Taisuke; Watanabe, Masahiko; Takumi, Toru; Kano, Masanobu; Wang, Samuel S-H; Hansel, Christian
2014-01-01
A common feature of autism spectrum disorder (ASD) is the impairment of motor control and learning, occurring in a majority of children with autism, consistent with perturbation in cerebellar function. Here we report alterations in motor behavior and cerebellar synaptic plasticity in a mouse model (patDp/+) for the human 15q11-13 duplication, one of the most frequently observed genetic aberrations in autism. These mice show ASD-resembling social behavior deficits. We find that in patDp/+ mice delay eyeblink conditioning—a form of cerebellum-dependent motor learning—is impaired, and observe deregulation of a putative cellular mechanism for motor learning, long-term depression (LTD) at parallel fiber-Purkinje cell synapses. Moreover, developmental elimination of surplus climbing fibers—a model for activity-dependent synaptic pruning—is impaired. These findings point to deficits in synaptic plasticity and pruning as potential causes for motor problems and abnormal circuit development in autism. PMID:25418414
Self-Organization of Microcircuits in Networks of Spiking Neurons with Plastic Synapses.
Ocker, Gabriel Koch; Litwin-Kumar, Ashok; Doiron, Brent
2015-08-01
The synaptic connectivity of cortical networks features an overrepresentation of certain wiring motifs compared to simple random-network models. This structure is shaped, in part, by synaptic plasticity that promotes or suppresses connections between neurons depending on their joint spiking activity. Frequently, theoretical studies focus on how feedforward inputs drive plasticity to create this network structure. We study the complementary scenario of self-organized structure in a recurrent network, with spike timing-dependent plasticity driven by spontaneous dynamics. We develop a self-consistent theory for the evolution of network structure by combining fast spiking covariance with a slow evolution of synaptic weights. Through a finite-size expansion of network dynamics we obtain a low-dimensional set of nonlinear differential equations for the evolution of two-synapse connectivity motifs. With this theory in hand, we explore how the form of the plasticity rule drives the evolution of microcircuits in cortical networks. When potentiation and depression are in approximate balance, synaptic dynamics depend on weighted divergent, convergent, and chain motifs. For additive, Hebbian STDP these motif interactions create instabilities in synaptic dynamics that either promote or suppress the initial network structure. Our work provides a consistent theoretical framework for studying how spiking activity in recurrent networks interacts with synaptic plasticity to determine network structure.
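The motif theory sketched in this abstract rests on spiking covariances that are beyond a short example, but the additive, pair-based Hebbian STDP rule it builds on is easy to state. A minimal illustration (parameter values are hypothetical):

```python
import numpy as np

def stdp_dw(pre, post, a_plus=0.005, a_minus=0.005, tau=20.0):
    """Additive pair-based STDP: sum the exponential window over all spike pairs.
    Positive dt (pre before post) potentiates; negative dt depresses."""
    dw = 0.0
    for tpre in pre:
        for tpost in post:
            dt = tpost - tpre
            if dt > 0:
                dw += a_plus * np.exp(-dt / tau)
            elif dt < 0:
                dw -= a_minus * np.exp(dt / tau)
    return dw

# pre leads post by 5 ms -> net potentiation; reversed order -> net depression
pre = np.arange(0.0, 500.0, 50.0)
post_lag = pre + 5.0
post_lead = pre - 5.0
print(stdp_dw(pre, post_lag) > 0, stdp_dw(pre, post_lead) < 0)
```

In the paper this rule acts on spontaneously generated spiking covariance in a recurrent network, which is what drives the divergent, convergent, and chain motif dynamics; the sketch above shows only the single-synapse update.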
Cuthbert, Peter C; Stanford, Lianne E; Coba, Marcelo P; Ainge, James A; Fink, Ann E; Opazo, Patricio; Delgado, Jary Y; Komiyama, Noboru H; O'Dell, Thomas J; Grant, Seth G N
2007-03-07
Understanding the mechanisms whereby information encoded within patterns of action potentials is deciphered by neurons is central to cognitive psychology. The multiprotein complexes formed by NMDA receptors linked to synaptic membrane-associated guanylate kinase (MAGUK) proteins including synapse-associated protein 102 (SAP102) and other associated proteins are instrumental in these processes. Although humans with mutations in SAP102 show mental retardation, the physiological and biochemical mechanisms involved are unknown. Using SAP102 knock-out mice, we found specific impairments in synaptic plasticity induced by selective frequencies of stimulation that also required extracellular signal-regulated kinase signaling. This was paralleled by inflexibility and impairment in spatial learning. Improvement in spatial learning performance occurred with extra training despite continued use of a suboptimal search strategy, and, in a separate nonspatial task, the mutants again deployed a different strategy. Double-mutant analysis of postsynaptic density-95 and SAP102 mutants indicate overlapping and specific functions of the two MAGUKs. These in vivo data support the model that specific MAGUK proteins couple the NMDA receptor to distinct downstream signaling pathways. This provides a mechanism for discriminating patterns of synaptic activity that lead to long-lasting changes in synaptic strength as well as distinct aspects of cognition in the mammalian nervous system.
Fully parallel write/read in resistive synaptic array for accelerating on-chip learning
NASA Astrophysics Data System (ADS)
Gao, Ligang; Wang, I.-Ting; Chen, Pai-Yu; Vrudhula, Sarma; Seo, Jae-sun; Cao, Yu; Hou, Tuo-Hung; Yu, Shimeng
2015-11-01
A neuro-inspired computing paradigm beyond the von Neumann architecture is emerging; it generally takes advantage of massive parallelism and is aimed at complex tasks that involve intelligence and learning. The cross-point array architecture with synaptic devices has been proposed for on-chip implementation of the weighted sum and weight update in the learning algorithms. In this work, forming-free, silicon-process-compatible Ta/TaOx/TiO2/Ti synaptic devices are fabricated, in which >200 levels of conductance states could be continuously tuned by identical programming pulses. In order to demonstrate the advantages of parallelism of the cross-point array architecture, a novel fully parallel write scheme is designed and experimentally demonstrated in a small-scale crossbar array to accelerate the weight update in the training process, at a speed that is independent of the array size. Compared to the conventional row-by-row write scheme, it achieves >30× speed-up and >30× improvement in energy efficiency as projected in a large-scale array. When realistic synaptic device characteristics such as device variations are taken into account in an array-level simulation, the proposed array architecture is able to achieve ∼95% recognition accuracy of MNIST handwritten digits, which is close to the accuracy achieved by software using the ideal sparse coding algorithm.
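The two array-level operations this abstract discusses can be sketched in linear-algebra terms: the read (weighted sum) is a matrix-vector product over the conductances, and a fully parallel write is a rank-1 (outer-product) update applied to the whole array at once, so its duration is independent of array size. A toy model follows; device physics, programming nonlinearity, and variation are deliberately ignored, and all values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
rows, cols = 8, 4                        # hypothetical small crossbar
G = rng.uniform(0.2, 0.8, (rows, cols))  # conductance matrix = synaptic weights

def weighted_sum(G, v_in):
    # read: input voltages on the rows produce column currents that realize
    # the weighted sum in one step (Ohm's law + Kirchhoff's current law)
    return G.T @ v_in

def parallel_update(G, x, delta, g_min=0.0, g_max=1.0):
    # write: a rank-1 (outer-product) update applied to the whole array at
    # once, so the write time does not depend on the number of rows
    return np.clip(G + np.outer(x, delta), g_min, g_max)

x = rng.uniform(0.0, 1.0, rows)          # input pattern
err = np.array([0.1, -0.05, 0.02, 0.0])  # per-column update signal
out_before = weighted_sum(G, x)
G = parallel_update(G, x, err)
out_after = weighted_sum(G, x)

# columns with a positive update signal produce larger read currents afterwards
print(out_after[0] > out_before[0], out_after[1] < out_before[1])
```

A row-by-row write scheme would instead loop over the `rows` dimension, which is why the paper's parallel scheme projects a >30× speed-up in large arrays.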
Kwon, Jeong-Tae; Choi, June-Seek
2009-08-05
Use-dependent synaptic modifications in the lateral nucleus of the amygdala (LA) have been suggested to be the cellular analog of memory trace after pavlovian fear conditioning. However, whether neurophysiological changes in the LA are produced as a direct consequence of associative learning awaits additional proof. Using microstimulation of the medial geniculate nucleus of the thalamus as the conditioned stimulus (CS), we demonstrated that contingent pairings of the brain-stimulation CS and a footshock unconditioned stimulus lead to enhanced synaptic efficacy in the thalamic input to the LA, supporting the hypothesis that localized synaptic alterations underlie fear memory formation.
Synaptic plasticity in drug reward circuitry.
Winder, Danny G; Egli, Regula E; Schramm, Nicole L; Matthews, Robert T
2002-11-01
Drug addiction is a major public health issue worldwide. The persistence of drug craving coupled with the known recruitment of learning and memory centers in the brain has led investigators to hypothesize that the alterations in glutamatergic synaptic efficacy brought on by synaptic plasticity may play key roles in the addiction process. Here we review the present literature, examining the properties of synaptic plasticity within drug reward circuitry, and the effects that drugs of abuse have on these forms of plasticity. Interestingly, multiple forms of synaptic plasticity can be induced at glutamatergic synapses within the dorsal striatum, its ventral extension the nucleus accumbens, and the ventral tegmental area, and at least some of these forms of plasticity are regulated by behaviorally meaningful administration of cocaine and/or amphetamine. Thus, the present data suggest that regulation of synaptic plasticity in reward circuits is a tractable candidate mechanism underlying aspects of addiction.
Bennett, James E. M.; Bair, Wyeth
2015-01-01
Traveling waves in the developing brain are a prominent source of highly correlated spiking activity that may instruct the refinement of neural circuits. A candidate mechanism for mediating such refinement is spike-timing dependent plasticity (STDP), which translates correlated activity patterns into changes in synaptic strength. To assess the potential of these phenomena to build useful structure in developing neural circuits, we examined the interaction of wave activity with STDP rules in simple, biologically plausible models of spiking neurons. We derive an expression for the synaptic strength dynamics showing that, by mapping the time dependence of STDP into spatial interactions, traveling waves can build periodic synaptic connectivity patterns into feedforward circuits with a broad class of experimentally observed STDP rules. The spatial scale of the connectivity patterns increases with wave speed and STDP time constants. We verify these results with simulations and demonstrate their robustness to likely sources of noise. We show how this pattern formation ability, which is analogous to solutions of reaction-diffusion systems that have been widely applied to biological pattern formation, can be harnessed to instruct the refinement of postsynaptic receptive fields. Our results hold for rich, complex wave patterns in two dimensions and over several orders of magnitude in wave speeds and STDP time constants, and they provide predictions that can be tested under existing experimental paradigms. Our model generalizes across brain areas and STDP rules, allowing broad application to the ubiquitous occurrence of traveling waves and to wave-like activity patterns induced by moving stimuli. PMID:26308406
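The core idea of this abstract, mapping STDP's time dependence into spatial interactions, can be sketched directly. In the toy model below (parameters are hypothetical, and only a single wave and a single postsynaptic spike are considered, so the periodic patterns of the full model do not appear), a wave at speed v makes the presynaptic neuron at position x spike at t = x/v, turning the STDP time window into a spatial plasticity profile whose scale grows as v·τ:

```python
import numpy as np

def stdp_kernel(dt, a=1.0, tau=20.0):
    # classic asymmetric STDP window (dt in ms): pre-before-post potentiates
    return np.where(dt > 0, a * np.exp(-dt / tau), -a * np.exp(dt / tau))

def wave_weight_profile(x, v, t_post=0.0, tau=20.0):
    # a wave moving at speed v makes the presynaptic neuron at position x
    # spike at t = x / v; STDP then maps the time window into space
    t_pre = x / v
    return stdp_kernel(t_post - t_pre, tau=tau)

x = np.linspace(-100.0, 100.0, 2001)   # presynaptic positions (arbitrary units)
slow = wave_weight_profile(x, v=0.5)
fast = wave_weight_profile(x, v=1.0)

def half_width(profile, x):
    # spatial extent of the potentiated lobe at half its maximum
    pos = profile > 0.5 * profile.max()
    return x[pos].max() - x[pos].min()

# ratio ~2: doubling the wave speed doubles the spatial scale (v * tau)
print(half_width(fast, x) / half_width(slow, x))
```

Repeating waves and weight normalization, which the sketch omits, are what let the full model turn this spatial kernel into the periodic feedforward connectivity patterns the paper analyzes.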
Distance-dependent gradient in NMDAR-driven spine calcium signals along tapering dendrites
Walker, Alison S.; Grillo, Federico; Jackson, Rachel E.; Rigby, Mark; Lowe, Andrew S.; Vizcay-Barrena, Gema; Fleck, Roland A.; Burrone, Juan
2017-01-01
Neurons receive a multitude of synaptic inputs along their dendritic arbor, but how this highly heterogeneous population of synaptic compartments is spatially organized remains unclear. By measuring N-methyl-d-aspartic acid receptor (NMDAR)-driven calcium responses in single spines, we provide a spatial map of synaptic calcium signals along dendritic arbors of hippocampal neurons and relate this to measures of synapse structure. We find that quantal NMDAR calcium signals increase in amplitude as they approach a thinning dendritic tip end. Based on a compartmental model of spine calcium dynamics, we propose that this biased distribution in calcium signals is governed by a gradual, distance-dependent decline in spine size, which we visualized using serial block-face scanning electron microscopy. Our data describe a cell-autonomous feature of principal neurons, where tapering dendrites show an inverse distribution of spine size and NMDAR-driven calcium signals along dendritic trees, with important implications for synaptic plasticity rules and spine function. PMID:28209776
Long Term Synaptic Plasticity and Learning in Neuronal Networks
1989-01-14
Videomicroscopy and synaptic physiology of cultured hippocampal slices. Soc. Neurosci. Abstr. 14:246, 1988. Griffith, W.H., Brown, T.H. and Johnston, D. ... Chapman, P.F., Chang, V., and Brown, T.H. Videomicroscopy of acute brain slices from hippocampus and amygdala. Brain Res. Bull. 21:373-383, 1988.
Differential splicing and glycosylation of Apoer2 alters synaptic plasticity and fear learning.
Wasser, Catherine R; Masiulis, Irene; Durakoglugil, Murat S; Lane-Donovan, Courtney; Xian, Xunde; Beffert, Uwe; Agarwala, Anandita; Hammer, Robert E; Herz, Joachim
2014-11-25
Apoer2 is an essential receptor in the central nervous system that binds to the apolipoprotein ApoE. Various splice variants of Apoer2 are produced. We showed that Apoer2 lacking exon 16, which encodes the O-linked sugar (OLS) domain, altered the proteolytic processing and abundance of Apoer2 in cells and synapse number and function in mice. In cultured cells expressing this splice variant, extracellular cleavage of OLS-deficient Apoer2 was reduced, consequently preventing γ-secretase-dependent release of the intracellular domain of Apoer2. Mice expressing Apoer2 lacking the OLS domain had increased Apoer2 abundance in the brain, hippocampal spine density, and glutamate receptor abundance, but decreased synaptic efficacy. Mice expressing a form of Apoer2 lacking the OLS domain and containing an alternatively spliced cytoplasmic tail region that promotes glutamate receptor signaling showed enhanced hippocampal long-term potentiation (LTP), a phenomenon associated with learning and memory. However, these mice did not display enhanced spatial learning in the Morris water maze, and cued fear conditioning was reduced. Reducing the expression of the mutant Apoer2 allele so that the abundance of the protein was similar to that of Apoer2 in wild-type mice normalized spine density, hippocampal LTP, and cued fear learning. These findings demonstrated a role for ApoE receptors as regulators of synaptic glutamate receptor activity and established differential receptor glycosylation as a potential regulator of synaptic function and memory. Copyright © 2014, American Association for the Advancement of Science.
Remodeling of hippocampal spine synapses in the rat learned helplessness model of depression.
Hajszan, Tibor; Dow, Antonia; Warner-Schmidt, Jennifer L; Szigeti-Buck, Klara; Sallam, Nermin L; Parducz, Arpad; Leranth, Csaba; Duman, Ronald S
2009-03-01
Although it has been postulated for many years that depression is associated with loss of synapses, primarily in the hippocampus, and that antidepressants facilitate synapse growth, we still lack ultrastructural evidence that changes in depressive behavior are indeed correlated with structural synaptic modifications. We analyzed hippocampal spine synapses of male rats (n=127) with electron microscopic stereology in association with performance in the learned helplessness paradigm. Inescapable footshock (IES) caused an acute and persistent loss of spine synapses in each of CA1, CA3, and dentate gyrus, which was associated with a severe escape deficit in learned helplessness. On the other hand, IES elicited no significant synaptic alterations in motor cortex. A single injection of corticosterone reproduced both the hippocampal synaptic changes and the behavioral responses induced by IES. Treatment of IES-exposed animals for 6 days with desipramine reversed both the hippocampal spine synapse loss and the escape deficit in learned helplessness. We noted, however, that desipramine failed to restore the number of CA1 spine synapses to nonstressed levels, which was associated with a minor escape deficit compared with nonstressed control rats. Shorter, 1-day or 3-day desipramine treatments, however, had neither synaptic nor behavioral effects. These results indicate that changes in depressive behavior are associated with remarkable remodeling of hippocampal spine synapses at the ultrastructural level. Because spine synapse loss contributes to hippocampal dysfunction, this cellular mechanism may be an important component in the neurobiology of stress-related disorders such as depression.
Hanuschkin, A; Ganguli, S; Hahnloser, R H R
2013-01-01
Mirror neurons are neurons whose responses to the observation of a motor act resemble responses measured during production of that act. Computationally, mirror neurons have been viewed as evidence for the existence of internal inverse models. Such models, rooted within control theory, map desired sensory targets onto the motor commands required to generate those targets. To jointly explore both the formation of mirrored responses and their functional contribution to inverse models, we develop a correlation-based theory of interactions between a sensory and a motor area. We show that a simple eligibility-weighted Hebbian learning rule, operating within a sensorimotor loop during motor explorations and stabilized by heterosynaptic competition, naturally gives rise to mirror neurons as well as control-theoretic inverse models encoded in the synaptic weights from sensory to motor neurons. Crucially, we find that the correlational structure or stereotypy of the neural code underlying motor explorations determines the nature of the learned inverse model: random motor codes lead to causal inverses that map sensory activity patterns to their motor causes; such inverses are maximally useful, in that they allow the imitation of arbitrary sensory target sequences. By contrast, stereotyped motor codes lead to less useful predictive inverses that map sensory activity to future motor actions. Our theory generalizes previous work on inverse models by showing that such models can be learned in a simple Hebbian framework without the need for error signals or backpropagation, and it makes new conceptual connections between the causal nature of inverse models, the statistical structure of motor variability, and the time lag between sensory and motor responses of mirror neurons. Applied to birdsong learning, our theory can account for puzzling aspects of the song system, including the necessity of sensorimotor gating and the selectivity of auditory responses to bird's own song (BOS) stimuli.
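The abstract's core mechanism can be illustrated with a minimal sketch (network sizes, rates, and the identity sensorimotor mapping are our own illustrative assumptions, not the authors' implementation): during random motor exploration, each motor act evokes its sensory consequence; a Hebbian rule strengthens co-active sensory-to-motor weights, while a fixed per-neuron weight budget plays the role of heterosynaptic competition.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensory, n_motor, steps = 20, 20, 2000
W = rng.uniform(0.0, 0.1, (n_motor, n_sensory))   # sensory -> motor weights
eta, w_total = 0.01, 1.0                          # learning rate; per-neuron weight budget

for _ in range(steps):
    motor = rng.random(n_motor) < 0.1             # random, non-stereotyped motor exploration
    sensory = motor.astype(float)                 # sensorimotor loop: each act evokes its own sensory consequence
    W += eta * np.outer(motor, sensory)           # Hebbian strengthening of co-active pre/post pairs
    W *= w_total / W.sum(axis=1, keepdims=True)   # heterosynaptic competition: fixed total input weight

# W now approximates a causal inverse: sensory pattern j most strongly
# drives the motor neuron that caused it (rows peak on the diagonal).
print(np.argmax(W, axis=1)[:5])                   # prints [0 1 2 3 4]
```

With a random (uncorrelated) motor code, the diagonal entries accumulate roughly ten times more Hebbian increments than the off-diagonals, so the learned mapping converges to the causal inverse the abstract describes.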
PMID:23801941
Circadian glucocorticoid oscillations promote learning-dependent synapse formation and maintenance
Liston, Conor; Cichon, Joseph M; Jeanneteau, Freddy; Jia, Zhengping; Chao, Moses V; Gan, Wen-Biao
2013-01-01
Excessive glucocorticoid exposure during chronic stress causes synapse loss and learning impairment. Under normal physiological conditions, glucocorticoid activity oscillates in synchrony with the circadian rhythm. Whether and how endogenous glucocorticoid oscillations modulate synaptic plasticity and learning is unknown. Here we show that circadian glucocorticoid peaks promote postsynaptic dendritic spine formation in the mouse cortex after motor skill learning, whereas troughs are required for stabilizing newly formed spines that are important for long-term memory retention. Conversely, chronic and excessive exposure to glucocorticoids eliminates learning-associated new spines and disrupts previously acquired memories. Furthermore, we show that glucocorticoids promote rapid spine formation through a non-transcriptional mechanism by means of the LIM kinase–cofilin pathway and increase spine elimination through transcriptional mechanisms involving mineralocorticoid receptor activation. Together, these findings indicate that tightly regulated circadian glucocorticoid oscillations are important for learning-dependent synaptic formation and maintenance. They also delineate a new signaling mechanism underlying these effects. PMID:23624512
NASA Astrophysics Data System (ADS)
Ciszak, Marzena; Bellesi, Michele
2011-12-01
The transitions between waking and sleep states are characterized by considerable changes in neuronal firing. During waking, neurons fire tonically at irregular intervals, and desynchronized activity is observed in the electroencephalogram. This activity becomes synchronized at slow-wave-sleep onset, when neurons start to oscillate between periods of firing (up-states) and periods of silence (down-states). Recently, it has been proposed that the connections between neurons undergo potentiation during waking, whereas they weaken during slow wave sleep. Here, we propose a dynamical model to describe basic features of the autonomous transitions between such states. We consider a network of coupled neurons in which the strength of the interactions is modulated by synaptic long-term potentiation and depression, according to the spike-timing-dependent plasticity (STDP) rule. The model shows that the enhancement of synaptic strength between neurons occurring in waking increases the propensity of the network to synchronize and, conversely, desynchronization appears when the strength of the connections becomes weaker. Both transitions appear spontaneously, but the transition from sleep to waking requires a slight modification of the STDP rule: the introduction of a mechanism that becomes active during sleep and changes the proportion between potentiation and depression, in accordance with biological data. At the neuron level, transitions from desynchronization to synchronization and vice versa can be described as a bifurcation between two states whose dynamical regime is modulated by synaptic strengths, suggesting that the transition from one state to another can be determined by quantitative differences between potentiation and depression.
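The balance shift the model relies on can be sketched with a standard pair-based STDP rule (amplitudes, time constant, and the sleep scaling factor below are illustrative assumptions, not the paper's parameters): a sleep-active factor scales up depression, so the same rule yields net potentiation during waking and net depression during sleep.

```python
import numpy as np

def stdp(dt_ms, sleep=False, a_plus=0.012, a_minus=0.010, tau=20.0, sleep_scale=2.5):
    """Pair-based STDP: potentiation for pre-before-post (dt > 0),
    depression otherwise. During 'sleep' the depression amplitude is
    scaled up, tilting the potentiation/depression balance toward net
    weakening (an illustrative stand-in for the mechanism the model adds)."""
    a_minus_eff = a_minus * (sleep_scale if sleep else 1.0)
    if dt_ms > 0:
        return a_plus * np.exp(-dt_ms / tau)
    return -a_minus_eff * np.exp(dt_ms / tau)

# Net weight drift for uncorrelated pre/post spiking: average the rule
# over a symmetric distribution of spike-time lags.
lags = np.linspace(-50, 50, 1001)
wake_drift = np.mean([stdp(dt) for dt in lags])
sleep_drift = np.mean([stdp(dt, sleep=True) for dt in lags])
print(wake_drift > 0, sleep_drift < 0)   # True True: strengthen awake, weaken asleep
```

Because the exponential windows are symmetric, the sign of the average drift is set purely by the potentiation/depression amplitude ratio, which is exactly the quantity the sleep mechanism modulates.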
Hippocampal metaplasticity is required for the formation of temporal associative memories.
Xu, Jian; Antion, Marcia D; Nomura, Toshihiro; Kraniotis, Stephen; Zhu, Yongling; Contractor, Anis
2014-12-10
Metaplasticity regulates the threshold for modification of synaptic strength and is an important regulator of learning rules; however, it is not known whether these cellular mechanisms for homeostatic regulation of synapses contribute to particular forms of learning. Conditional ablation of mGluR5 in CA1 pyramidal neurons resulted in the inability of low-frequency trains of afferent activation to prime synapses for subsequent theta burst potentiation. Priming-induced metaplasticity requires mGluR5-mediated mobilization of endocannabinoids during the priming train to induce long-term depression of inhibition (I-LTD). Mice lacking priming-induced plasticity had no deficit in spatial reference memory tasks, but were impaired in an associative task with a temporal component. Conversely, enhancing endocannabinoid signaling facilitated temporal associative memory acquisition and, after training animals in these tasks, ex vivo I-LTD was partially occluded and theta burst LTP was enhanced. Together, these results suggest a link between metaplasticity mechanisms in the hippocampus and the formation of temporal associative memories. Copyright © 2014 the authors 0270-6474/14/3416762-12$15.00/0.
Movement and Learning: A Valuable Connection
ERIC Educational Resources Information Center
Stevens-Smith, Deborah
2004-01-01
In this article, the author discusses the relatedness between movement and learning for students. The process of learning involves basic nerve cells that transmit information and create numerous neural connections essential to learning. One way to increase learning is to encourage creation of more synaptic connections in the brain through…
Monje, Francisco J; Kim, Eun-Jung; Pollak, Daniela D; Cabatic, Maureen; Li, Lin; Baston, Arthur; Lubec, Gert
2012-01-01
The focal adhesion kinase (FAK) is a non-receptor tyrosine kinase abundantly expressed in the mammalian brain and highly enriched in neuronal growth cones. Inhibitory and facilitatory activities of FAK on neuronal growth have been reported and its role in neuritic outgrowth remains controversial. Unlike other tyrosine kinases, such as the neurotrophin receptors regulating neuronal growth and plasticity, the relevance of FAK for learning and memory in vivo has not been clearly defined yet. A comprehensive study aimed at determining the role of FAK in neuronal growth, neurotransmitter release and synaptic plasticity in hippocampal neurons and in hippocampus-dependent learning and memory was therefore undertaken using the mouse model. Gain- and loss-of-function experiments indicated that FAK is a critical regulator of hippocampal cell morphology. FAK mediated neurotrophin-induced neuritic outgrowth and FAK inhibition affected both miniature excitatory postsynaptic potentials and activity-dependent hippocampal long-term potentiation prompting us to explore the possible role of FAK in spatial learning and memory in vivo. Our data indicate that FAK has a growth-promoting effect, is importantly involved in the regulation of the synaptic function and mediates in vivo hippocampus-dependent spatial learning and memory. Copyright © 2011 S. Karger AG, Basel.
Martinez, L A; Tejada-Simon, Maria Victoria
2018-06-01
Behavioral intervention therapy has proven beneficial in the treatment of autism and intellectual disabilities (ID), raising the possibility of certain changes in molecular mechanisms activated by these interventions that may promote learning. Fragile X syndrome (FXS) is a neurodevelopmental disorder characterized by autistic features and intellectual disability and can serve as a model to examine mechanisms that promote learning. FXS results from mutations in the fragile X mental retardation 1 gene (Fmr1) that prevents expression of the Fmr1 protein (FMRP), a messenger RNA (mRNA) translation regulator at synapses. Among many other functions, FMRP organizes a complex with the actin cytoskeleton-regulating small Rho GTPase Rac1. As in humans, Fmr1 KO mice lacking FMRP display autistic-like behaviors and deformities of actin-rich synaptic structures in addition to impaired hippocampal learning and synaptic plasticity. These features have been previously linked to proper function of actin remodeling proteins that includes Rac1. An important step in Rac1 activation and function is its translocation to the membrane, where it can influence synaptic actin cytoskeleton remodeling during hippocampus-dependent learning. Herein, we report that Fmr1 KO mouse hippocampus exhibits increased levels of membrane-bound Rac1, which may prevent proper learning-induced synaptic changes. We also determine that increasing training intensity during fear conditioning (FC) training restores contextual memory in Fmr1 KO mice and reduces membrane-bound Rac1 in Fmr1 KO hippocampus. Increased training intensity also results in normalized long-term potentiation in hippocampal slices taken from Fmr1 KO mice. These results point to interventional treatments providing new therapeutic options for FXS-related cognitive dysfunction.
Statistical characteristics of climbing fiber spikes necessary for efficient cerebellar learning.
Kuroda, S; Yamamoto, K; Miyamoto, H; Doya, K; Kawato, M
2001-03-01
Mean firing rates (MFRs), with analogue values, have thus far been used as information carriers of neurons in most brain theories of learning. However, neurons transmit signals by spikes, which are discrete events. The climbing fibers (CFs), which are known to be essential for cerebellar motor learning, fire at ultra-low rates (around 1 Hz), and it is not yet understood theoretically how high-frequency information can be conveyed and how learning of smooth and fast movements can be achieved. Here we address whether cerebellar learning can be achieved by CF spikes instead of conventional MFRs in an eye-movement task, such as the ocular following response (OFR), and in an arm-movement task. There are two major afferents into cerebellar Purkinje cells, parallel fibers (PFs) and CFs, and the synaptic weights between PFs and Purkinje cells have been shown to be modulated by the stimulation of both types of fiber; this modulation is regulated by cerebellar synaptic plasticity. In this study we simulated cerebellar learning using CF signals as spikes instead of conventional MFRs. To generate the spikes we used four spike-generation models: (1) a Poisson model, in which the spike-interval probability follows a Poisson distribution; (2) a gamma model, in which the spike-interval probability follows a gamma distribution; (3) a max model, in which a spike is generated when the synaptic input reaches its maximum; and (4) a threshold model, in which a spike is generated when the input crosses a certain small threshold. We found that, in an OFR task with a constant visual velocity, learning was successful with the stochastic models (Poisson and gamma) but not with the deterministic models (max and threshold). In an OFR task with a stepwise velocity change and in an arm-movement task, learning could be achieved only with the Poisson model. In addition, for efficient cerebellar learning, the distribution of CF spike-occurrence times after stimulus onset must capture at least the first, second and third moments of the temporal distribution of error signals.
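Two of the four spike-generation schemes can be contrasted in a short sketch (the sinusoidal error signal, bin size, and threshold level are our own illustrative assumptions): a Poisson generator whose per-bin spike probability tracks the analogue error recovers the signal's mean even at ~1 Hz, whereas a deterministic threshold generator only reports level crossings.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T = 0.001, 2000.0                                  # 1 ms bins, 2000 s simulated
t = np.arange(0.0, T, dt)
error = 0.5 + 0.5 * np.sin(2 * np.pi * 0.25 * t)       # analogue error signal in [0, 1]
rate_hz = 1.0 * error                                  # CF rate ~1 Hz, modulated by the error

# (1) Poisson model: spike probability per bin proportional to the error
poisson_spikes = rng.random(t.size) < rate_hz * dt

# (4) threshold model: deterministic spike when the error crosses a fixed level
threshold = 0.9
crossings = (error[1:] >= threshold) & (error[:-1] < threshold)

# The stochastic train's overall rate recovers the mean of the analogue
# signal; the deterministic train only reports threshold crossings.
print(poisson_spikes.sum() / T, crossings.sum())       # ~0.5 Hz; 500 crossings
```

This is the sense in which stochastic generation lets an ultra-low-firing fiber convey a graded error: the information survives in spike statistics rather than in any single deterministic spike time.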
[How does sleeping restore our brain?].
Wigren, Henna-Kaisa; Stenberg, Tarja
2015-01-01
The central function of sleep is to keep our brain functional, but what is the restoration that sleep provides? Sleep after learning improves learning outcomes. According to the theory of synaptic homeostasis the total strength of synapses, having increased during the day, is restored during sleep, making room for the next day's experiences. According to the theory of active synaptic consolidation, repetition during sleep strengthens the synapses, and these strengthened synapses form a permanent engram. According to a recent study, removal of waste products from the brain may also be one of the functions of sleep.
Synaptic potentiation onto habenula neurons in the learned helplessness model of depression
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Bo; Piriz, Joaquin; Mirrione, Martine; Chung, ChiHye; Proulx, Christophe D.; Schulz, Daniela; Henn, Fritz; Malinow, Roberto
The cellular basis of depressive disorders is poorly understood. Recent studies in monkeys indicate that neurons in the lateral habenula (LHb), a nucleus that mediates communication between forebrain and midbrain structures, can increase their activity when an animal fails to receive an expected positive reward or receives a stimulus that predicts aversive conditions (that is, disappointment or anticipation of a negative outcome). LHb neurons project to, and modulate, dopamine-rich regions, such as the ventral tegmental area (VTA), that control reward-seeking behaviour and participate in depressive disorders. Here we show that in two learned helplessness models of depression, excitatory synapses onto LHb neurons projecting to the VTA are potentiated. Synaptic potentiation correlates with an animal's helplessness behaviour and is due to an enhanced presynaptic release probability. Depleting transmitter release by repeated electrical stimulation of LHb afferents, using a protocol that can be effective for patients who are depressed, markedly suppresses synaptic drive onto VTA-projecting LHb neurons in brain slices and can significantly reduce learned helplessness behaviour in rats. Our results indicate that increased presynaptic action onto LHb neurons contributes to the rodent learned helplessness model of depression.
Moran, Rosalyn J; Symmonds, Mkael; Dolan, Raymond J; Friston, Karl J
2014-01-01
The aging brain shows a progressive loss of neuropil, which is accompanied by subtle changes in neuronal plasticity, sensory learning and memory. Neurophysiologically, aging attenuates evoked responses--including the mismatch negativity (MMN). This is accompanied by a shift in cortical responsivity from sensory (posterior) regions to executive (anterior) regions, which has been interpreted as a compensatory response for cognitive decline. Theoretical neurobiology offers a simpler explanation for all of these effects--from a Bayesian perspective, as the brain is progressively optimized to model its world, its complexity will decrease. A corollary of this complexity reduction is an attenuation of Bayesian updating or sensory learning. Here we confirmed this hypothesis using magnetoencephalographic recordings of the mismatch negativity elicited in a large cohort of human subjects, in their third to ninth decade. Employing dynamic causal modeling to assay the synaptic mechanisms underlying these non-invasive recordings, we found a selective age-related attenuation of synaptic connectivity changes that underpin rapid sensory learning. In contrast, baseline synaptic connectivity strengths were consistently strong over the decades. Our findings suggest that the lifetime accrual of sensory experience optimizes functional brain architectures to enable efficient and generalizable predictions of the world.
Generalized Adaptive Artificial Neural Networks
NASA Technical Reports Server (NTRS)
Tawel, Raoul
1993-01-01
Mathematical model of supervised learning by artificial neural network provides for simultaneous adjustments of both temperatures of neurons and synaptic weights, and includes feedback as well as feedforward synaptic connections. Extension of mathematical model described in "Adaptive Neurons For Artificial Neural Networks" (NPO-17803). Dynamics of neural network represented in new model by less-restrictive continuous formalism.
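The brief above gives no equations, so the following toy sketch is our own illustration of the idea of adjusting neuron temperatures and synaptic weights simultaneously (the task, the tanh form y = tanh(W·x / T), and all constants are assumptions; the NPO-17803 model's equations are not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy version: a neuron y = tanh(W.x / T) whose sharpness is set by a
# trainable "temperature" T, with finite-difference gradient descent
# adjusting W and T simultaneously.
X = rng.normal(size=(200, 3))
y = np.tanh(X @ np.array([0.8, -0.5, 0.3]) / 0.7)   # targets from a hidden (W*, T*)

def loss(W, T):
    return np.mean((np.tanh(X @ W / T) - y) ** 2)

W, T, lr, eps = rng.normal(size=3), 1.5, 0.02, 1e-5
loss0 = loss(W, T)
for _ in range(3000):
    gW = np.array([(loss(W + eps * np.eye(3)[i], T) - loss(W, T)) / eps
                   for i in range(3)])
    gT = (loss(W, T + eps) - loss(W, T)) / eps
    W = W - lr * gW
    T = max(T - lr * gT, 0.2)   # keep the temperature bounded away from zero

print(loss(W, T) < loss0)       # the joint (W, T) descent reduces the error
```

Larger T flattens the activation and smaller T sharpens it, so treating T as a learnable parameter gives the descent an extra degree of freedom beyond the weights alone.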
Rothwell, Patrick E; Fuccillo, Marc V; Maxeiner, Stephan; Hayton, Scott J; Gokce, Ozgun; Lim, Byung Kook; Fowler, Stephen C; Malenka, Robert C; Südhof, Thomas C
2014-07-03
In humans, neuroligin-3 mutations are associated with autism, whereas in mice, the corresponding mutations produce robust synaptic and behavioral changes. However, different neuroligin-3 mutations cause largely distinct phenotypes in mice, and no causal relationship links a specific synaptic dysfunction to a behavioral change. Using rotarod motor learning as a proxy for acquired repetitive behaviors in mice, we found that different neuroligin-3 mutations uniformly enhanced formation of repetitive motor routines. Surprisingly, neuroligin-3 mutations caused this phenotype not via changes in the cerebellum or dorsal striatum but via a selective synaptic impairment in the nucleus accumbens/ventral striatum. Here, neuroligin-3 mutations increased rotarod learning by specifically impeding synaptic inhibition onto D1-dopamine receptor-expressing but not D2-dopamine receptor-expressing medium spiny neurons. Our data thus suggest that different autism-associated neuroligin-3 mutations cause a common increase in acquired repetitive behaviors by impairing a specific striatal synapse and thereby provide a plausible circuit substrate for autism pathophysiology. Copyright © 2014 Elsevier Inc. All rights reserved.
Luo, Sarah X; Timbang, Leah; Kim, Jae-Ick; Shang, Yulei; Sandoval, Kadellyn; Tang, Amy A; Whistler, Jennifer L; Ding, Jun B; Huang, Eric J
2016-12-20
Neural circuits involving midbrain dopaminergic (DA) neurons regulate reward and goal-directed behaviors. Although local GABAergic input is known to modulate DA circuits, the mechanism that controls excitatory/inhibitory synaptic balance in DA neurons remains unclear. Here, we show that DA neurons use autocrine transforming growth factor β (TGF-β) signaling to promote the growth of axons and dendrites. Surprisingly, removing TGF-β type II receptor in DA neurons also disrupts the balance in TGF-β1 expression in DA neurons and neighboring GABAergic neurons, which increases inhibitory input, reduces excitatory synaptic input, and alters phasic firing patterns in DA neurons. Mice lacking TGF-β signaling in DA neurons are hyperactive and exhibit inflexibility in relinquishing learned behaviors and re-establishing new stimulus-reward associations. These results support a role for TGF-β in regulating the delicate balance of excitatory/inhibitory synaptic input in local microcircuits involving DA and GABAergic neurons and its potential contributions to neuropsychiatric disorders. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.
Reelin protects against amyloid β toxicity in vivo
Lane-Donovan, Courtney; Philips, Gary T.; Wasser, Catherine R.; Durakoglugil, Murat S.; Masiulis, Irene; Upadhaya, Ajeet; Pohlkamp, Theresa; Coskun, Cagil; Kotti, Tiina; Steller, Laura; Hammer, Robert E.; Frotscher, Michael; Bock, Hans H.; Herz, Joachim
2015-01-01
Alzheimer's disease (AD) is a currently incurable neurodegenerative disorder and the most common form of dementia in people over the age of 65. The predominant genetic risk factor for AD is the ε4 allele encoding apolipoprotein E (ApoE4). The secreted glycoprotein Reelin, which is a physiological ligand for the multifunctional ApoE receptors Apolipoprotein E receptor 2 (Apoer2) and very low-density lipoprotein receptor (Vldlr), enhances synaptic plasticity. We have previously shown that the presence of ApoE4 renders neurons unresponsive to Reelin by impairing the recycling of the receptors, thereby decreasing its protective effects against amyloid β (Aβ) oligomer-induced synaptic toxicity in vitro. Here, we show that when Reelin was knocked out in adult mice, these mice behaved normally without overt learning or memory deficits. However, they were strikingly sensitive to amyloid-induced synaptic suppression, and had profound memory and learning disabilities at very low amounts of amyloid deposition. Our findings highlight the physiological importance of Reelin in protecting the brain against Aβ-induced synaptic dysfunction and memory impairment. PMID:26152694
A Cognitive Model Based on Neuromodulated Plasticity
Ruan, Xiaogang
2016-01-01
Associative learning, including classical conditioning and operant conditioning, is regarded as the most fundamental type of learning for animals and human beings. Many models have been proposed for classical conditioning or operant conditioning; however, a unified, integrated model that explains both types of conditioning has been much less studied. Here, a model based on neuromodulated synaptic plasticity is presented. The model is bio-inspired, comprising a multi-store memory module and simulated VTA dopaminergic neurons that produce a reward signal. The synaptic weights are modified according to the reward signal, which simulates the change of associative strengths in associative learning. Experimental results on real robots demonstrate the suitability and validity of the proposed model. PMID:27872638
Dynamic Observation of Brain-Like Learning in a Ferroelectric Synapse Device
NASA Astrophysics Data System (ADS)
Nishitani, Yu; Kaneko, Yukihiro; Ueda, Michihito; Fujii, Eiji; Tsujimura, Ayumu
2013-04-01
A brain-like learning function was implemented in an electronic synapse device using a ferroelectric-gate field effect transistor (FeFET). The FeFET was a bottom-gate type FET with a ZnO channel and a ferroelectric Pb(Zr,Ti)O3 (PZT) gate insulator. The synaptic weight, which is represented by the channel conductance of the FeFET, is updated by applying a gate voltage through a change in the ferroelectric polarization in the PZT. A learning function based on the symmetric spike-timing dependent synaptic plasticity was implemented in the synapse device using the multilevel weight update by applying a pulse gate voltage. The dynamic weighting and learning behavior in the synapse device was observed as a change in the membrane potential in a spiking neuron circuit.
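A symmetric STDP update with discrete conductance levels, as the device abstract describes, can be sketched in software (the curve shape, constants, and 16-level quantization below are illustrative assumptions, not measured FeFET characteristics).

```python
import numpy as np

def symmetric_stdp(dt_ms, a=0.8, tau=25.0, offset=0.25):
    """Symmetric STDP curve: the weight change depends only on |dt|,
    potentiating near-coincident spikes and depressing distant ones.
    Shape and constants are illustrative, not device measurements."""
    return a * np.exp(-abs(dt_ms) / tau) - offset

def update_weight(w, dt_ms, levels=16, w_min=0.0, w_max=1.0, gain=0.05):
    """Multilevel update: quantize the modified weight onto one of
    `levels` conductance states, mimicking a discrete set of
    polarization-controlled channel conductances."""
    step = (w_max - w_min) / (levels - 1)
    w_new = np.clip(w + gain * symmetric_stdp(dt_ms), w_min, w_max)
    return w_min + round((w_new - w_min) / step) * step

w_close = update_weight(0.5, 5.0)      # near-coincident spikes: potentiate
w_far = update_weight(0.5, 100.0)      # distant spikes: depress
print(w_close > 0.5, w_far < 0.5)      # True True
```

Quantizing to a fixed set of levels is what distinguishes this kind of multilevel analogue synapse from an ideal continuous weight: repeated small updates only take effect once they move the weight across a level boundary.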
Statistical mechanics of neocortical interactions. Derivation of short-term-memory capacity
NASA Astrophysics Data System (ADS)
Ingber, Lester
1984-06-01
A theory developed by the author to describe macroscopic neocortical interactions demonstrates that empirical values of chemical and electrical parameters of synaptic interactions establish several minima of the path-integral Lagrangian as a function of excitatory and inhibitory columnar firings. The number of possible minima, their time scales of hysteresis and probable reverberations, and their nearest-neighbor columnar interactions are all consistent with well-established empirical rules of human short-term memory. Thus, aspects of conscious experience are derived from neuronal firing patterns, using modern methods of nonlinear nonequilibrium statistical mechanics to develop realistic explicit synaptic interactions.
RhoGTPase Regulators Orchestrate Distinct Stages of Synaptic Development
Martin-Vilchez, Samuel; Whitmore, Leanna; Asmussen, Hannelore; Zareno, Jessica; Horwitz, Rick; Newell-Litwa, Karen
2017-01-01
Small RhoGTPases regulate changes in post-synaptic spine morphology and density that support learning and memory. They are also major targets of synaptic disorders, including autism. Here we sought to determine whether upstream RhoGTPase regulators, including GEFs, GAPs, and GDIs, sculpt specific stages of synaptic development. The majority of examined molecules uniquely regulate either early spine precursor formation or later maturation. Specifically, an activator of actin polymerization, the Rac1 GEF β-PIX, drives spine precursor formation, whereas both FRABIN, a Cdc42 GEF, and OLIGOPHRENIN-1, a RhoA GAP, regulate spine precursor elongation. However, in later development, a novel Rac1 GAP, ARHGAP23, and RhoGDIs inactivate actomyosin dynamics to stabilize mature synapses. Our observations demonstrate that specific combinations of RhoGTPase regulatory proteins temporally balance RhoGTPase activity during post-synaptic spine development. PMID:28114311
NASA Astrophysics Data System (ADS)
La Barbera, Selina; Vincent, Adrien F.; Vuillaume, Dominique; Querlioz, Damien; Alibart, Fabien
2016-12-01
Bio-inspired computing today represents a major challenge at several levels, ranging from materials science, for the design of innovative devices and circuits, to computer science, for understanding the key features required to process natural data. In this paper, we propose a detailed analysis of resistive switching dynamics in electrochemical metallization cells for implementing synaptic plasticity. We show how filament stability, associated with Joule heating during switching, can be used to emulate key synaptic features such as the short-term to long-term plasticity transition and spike-timing-dependent plasticity. Furthermore, an interplay between these different synaptic features is demonstrated for object motion detection in a spike-based neuromorphic circuit. System-level simulations demonstrate robust learning and promising synaptic operation, paving the way to complex bio-inspired computing systems composed of innovative memory devices.
Chau, Lily S.; Prakapenka, Alesia V.; Zendeli, Liridon; Davis, Ashley S.; Galvez, Roberto
2014-01-01
Studies utilizing general learning and memory tasks have suggested the importance of neocortical structural plasticity for memory consolidation. However, these tasks typically involve learning multiple different tasks over several days of training, making it difficult to determine the synaptic time course mediating each learning event. The current study used trace-eyeblink conditioning to determine the time course for neocortical spine modification during learning. In eyeblink conditioning, subjects are presented with a neutral, conditioned stimulus (CS) paired with a salient, unconditioned stimulus (US) that elicits an unconditioned response (UR). With multiple CS-US pairings, subjects learn to associate the CS with the US and exhibit a conditioned response (CR) when presented with the CS. In trace conditioning, a stimulus-free interval separates the CS and the US. Previous findings using trace-eyeblink conditioning with whisker stimulation as the CS (whisker-trace-eyeblink: WTEB) have shown that primary somatosensory (barrel) cortex is required for both acquisition and retention of the trace association. Additionally, prior findings demonstrated that WTEB acquisition results in an expansion of the cytochrome oxidase whisker representation and synaptic modification in layer IV of barrel cortex. To further explore these findings and determine the time course for learning-induced spine modification, the present study utilized WTEB conditioning to examine Golgi-Cox-stained neurons in layer IV of barrel cortex. Findings from this study demonstrated a training-dependent spine proliferation in layer IV of barrel cortex during trace associative learning.
Furthermore, the finding that filopodia-like spines exhibited a pattern similar to overall spine density further suggests that reorganization of synaptic contacts sets the foundation for learning-induced modifications across the neocortical layers. PMID:24760074
Cerebellar supervised learning revisited: biophysical modeling and degrees-of-freedom control.
Kawato, Mitsuo; Kuroda, Shinya; Schweighofer, Nicolas
2011-10-01
Biophysical models of spike-timing-dependent plasticity have explored dynamics with a molecular basis for such computational concepts as coincidence detection, the synaptic eligibility trace, and Hebbian learning. Overall, they support different learning algorithms in different brain areas, especially supervised learning in the cerebellum. Because a single spine is physically very small, chemical reactions within it are essentially stochastic, and a sensitivity-longevity dilemma therefore exists in synaptic memory. Here, a cascade of excitable and bistable dynamics is proposed to overcome this difficulty. Learning algorithms in all brain regions confront difficult generalization problems. To resolve this issue, control of the degrees of freedom can be realized by changing the synchronicity of neural firing. In particular, for cerebellar supervised learning, the triangular closed-loop circuit consisting of Purkinje cells, the inferior olive nucleus, and the cerebellar nucleus is proposed as a circuit that optimally controls synchronous firing and the degrees of freedom in learning. Copyright © 2011 Elsevier Ltd. All rights reserved.
Remodeling of Hippocampal Spine Synapses in the Rat Learned Helplessness Model of Depression
Hajszan, Tibor; Dow, Antonia; Warner-Schmidt, Jennifer L.; Szigeti-Buck, Klara; Sallam, Nermin L.; Parducz, Arpad; Leranth, Csaba; Duman, Ronald S.
2009-01-01
Background Although it has been postulated for many years that depression is associated with loss of synapses, primarily in the hippocampus, and that antidepressants facilitate synapse growth, we still lack ultrastructural evidence that changes in depressive behavior are indeed correlated with structural synaptic modifications. Methods We analyzed hippocampal spine synapses of male rats (n=127) with electron microscopic stereology in association with performance in the learned helplessness paradigm. Results Inescapable footshock (IES) caused an acute and persistent loss of spine synapses in each of CA1, CA3, and dentate gyrus, which was associated with a severe escape deficit in learned helplessness. On the other hand, IES elicited no significant synaptic alterations in motor cortex. A single injection of corticosterone reproduced both the hippocampal synaptic changes and the behavioral responses induced by IES. Treatment of IES-exposed animals for six days with desipramine reversed both the hippocampal spine synapse loss and the escape deficit in learned helplessness. We noted, however, that desipramine failed to restore the number of CA1 spine synapses to nonstressed levels, which was associated with a minor escape deficit compared to nonstressed controls. Shorter, one-day or three-day desipramine treatments, however, had neither synaptic nor behavioral effects. Conclusions These results indicate that changes in depressive behavior are associated with remarkable remodeling of hippocampal spine synapses at the ultrastructural level. Because spine synapse loss contributes to hippocampal dysfunction, this cellular mechanism may be an important component in the neurobiology of stress-related disorders such as depression. PMID:19006787
Optimal structure of metaplasticity for adaptive learning
2017-01-01
Learning from reward feedback in a changing environment requires a high degree of adaptability, yet the precise estimation of reward information demands slow updates. In the framework of estimating reward probability, here we investigated how this tradeoff between adaptability and precision can be mitigated via metaplasticity, i.e. synaptic changes that do not always alter synaptic efficacy. Using mean-field analysis and Monte Carlo simulations we identified ‘superior’ metaplastic models that can substantially overcome the adaptability-precision tradeoff. These models achieve both adaptability and precision by forming two separate sets of meta-states: reservoirs and buffers. Synapses in reservoir meta-states do not change their efficacy upon reward feedback, whereas those in buffer meta-states can. Rapid changes in efficacy are limited to synapses occupying buffers, creating a bottleneck that reduces noise without significantly decreasing adaptability. In contrast, more-populated reservoirs can generate a strong signal without manifesting any observable plasticity. By comparing the behavior of our model and a few competing models during a dynamic probability estimation task, we found that superior metaplastic models perform close to optimally over a wider range of model parameters. Finally, we found that metaplastic models are robust to changes in model parameters and that metaplastic transitions are crucial for adaptive learning, since replacing them with graded plastic transitions (transitions that change synaptic efficacy) reduces the ability to overcome the adaptability-precision tradeoff. Overall, our results suggest that the ubiquitous unreliability of synaptic changes reflects metaplasticity, which can provide a robust mechanism for mitigating the tradeoff between adaptability and precision and thereby support adaptive learning. PMID:28658247
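The reservoir/buffer distinction can be caricatured with a population of binary synapses whose meta-state depth gates observable plasticity. This is a toy sketch under our own simplifying assumptions (response probability, depth, and population size are illustrative), not the authors' mean-field model:

```python
import random

def simulate(p_reward, n_syn=500, depth=3, q=0.1, steps=1000, seed=1):
    """Binary synapses with meta-states -depth..-1 (weak efficacy) and
    1..depth (strong efficacy). Only shallow ('buffer') states, at +/-1,
    can flip efficacy; deeper ('reservoir') states move between
    meta-states with no observable change in efficacy."""
    rng = random.Random(seed)
    states = [rng.choice([-1, 1]) for _ in range(n_syn)]
    for _ in range(steps):
        rewarded = rng.random() < p_reward
        for i, s in enumerate(states):
            if rng.random() > q:          # each synapse responds stochastically
                continue
            if rewarded:                  # push toward strong efficacy
                states[i] = 1 if s == -1 else min(s + 1, depth)
            else:                         # push toward weak efficacy
                states[i] = -1 if s == 1 else max(s - 1, -depth)
    # the fraction of strong synapses serves as the probability readout
    return sum(s > 0 for s in states) / n_syn

assert simulate(0.8) > 0.5 > simulate(0.2)
```

Because only buffer synapses can flip, most reward events merely shuffle hidden meta-states, which is the noise-limiting bottleneck the abstract describes.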
Dynamic DNA Methylation Controls Glutamate Receptor Trafficking and Synaptic Scaling
Sweatt, J. David
2016-01-01
Hebbian plasticity, including LTP and LTD, has long been regarded as important for local circuit refinement in the context of memory formation and stabilization. However, circuit development and stabilization additionally rely on non-Hebbian, homeostatic forms of plasticity such as synaptic scaling. Synaptic scaling is induced by chronic increases or decreases in neuronal activity. It is associated with cell-wide adjustments in postsynaptic receptor density and can occur in a multiplicative manner, preserving relative synaptic strengths across the entire neuron's population of synapses. Both active DNA methylation and demethylation have been validated as crucial regulators of gene transcription during learning, and synaptic scaling is known to be transcriptionally dependent. However, it has been unclear whether homeostatic forms of plasticity such as synaptic scaling are regulated via epigenetic mechanisms. This review describes exciting recent work demonstrating a role for active changes in neuronal DNA methylation and demethylation as a controller of synaptic scaling and glutamate receptor trafficking. These findings bring together three major categories of memory-associated mechanisms that were previously considered largely separately: DNA methylation, homeostatic plasticity, and glutamate receptor trafficking. PMID:26849493
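Multiplicative scaling, as described above, adjusts total synaptic drive while preserving relative synaptic strengths; a minimal numerical sketch (the weights and target total are arbitrary illustrative values):

```python
def scale_synapses(weights, target_total):
    """Multiplicative homeostatic scaling: every weight is multiplied by
    the same factor, so ratios between synapses are preserved while the
    cell-wide total drive is renormalized."""
    factor = target_total / sum(weights)
    return [w * factor for w in weights]

w = [0.2, 0.4, 0.8]              # total drive = 1.4
scaled = scale_synapses(w, 0.7)  # chronic overactivity -> scale down
assert abs(sum(scaled) - 0.7) < 1e-9
# relative strengths unchanged: 1 : 2 : 4
assert abs(scaled[2] / scaled[0] - 4.0) < 1e-9
```

The same factor applied to every synapse is what distinguishes scaling from Hebbian plasticity, which modifies individual synapses and therefore changes their ratios.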
Nitric Oxide Is an Activity-Dependent Regulator of Target Neuron Intrinsic Excitability
Steinert, Joern R.; Robinson, Susan W.; Tong, Huaxia; Haustein, Martin D.; Kopp-Scheinpflug, Cornelia; Forsythe, Ian D.
2011-01-01
Summary Activity-dependent changes in synaptic strength are well established as mediating long-term plasticity underlying learning and memory, but modulation of target neuron excitability could complement changes in synaptic strength and regulate network activity. It is thought that homeostatic mechanisms match intrinsic excitability to the incoming synaptic drive, but evidence for involvement of voltage-gated conductances is sparse. Here, we show that glutamatergic synaptic activity modulates target neuron excitability and switches the basis of action potential repolarization from Kv3 to Kv2 potassium channel dominance, thereby adjusting neuronal signaling between low and high activity states, respectively. This nitric oxide-mediated signaling dramatically increases Kv2 currents in both the auditory brain stem and hippocampus (>3-fold) transforming synaptic integration and information transmission but with only modest changes in action potential waveform. We conclude that nitric oxide is a homeostatic regulator, tuning neuronal excitability to the recent history of excitatory synaptic inputs over intervals of minutes to hours. PMID:21791288
Nie, Jing; Tian, Yong; Zhang, Yu; Lu, Yan-Liu; Li, Li-Sheng
2016-01-01
Background Neuronal and synaptic loss is the most important risk factor for cognitive impairment. Inhibiting neuronal apoptosis and preventing synaptic loss are promising therapeutic approaches for Alzheimer’s disease (AD). In this study, we investigate the protective effects of Dendrobium alkaloids (DNLA), a Chinese medicinal herb extract, on β-amyloid peptide segment 25–35 (Aβ25–35)-induced neuronal and synaptic loss in mice. Method Aβ25–35 (10 µg) was injected into the bilateral ventricles of male mice, followed by oral administration of DNLA (40 mg/kg) for 19 days. The Morris water maze was used to evaluate spatial learning and memory in the mice. Morphological changes were examined via H&E and Nissl staining. TUNEL staining was used to assess neuronal apoptosis. Ultrastructural changes in neurons were observed under an electron microscope. Western blot was used to evaluate the protein expression levels of ciliary neurotrophic factor (CNTF), glial cell line-derived neurotrophic factor (GDNF), and brain-derived neurotrophic factor (BDNF) in the hippocampus and cortex. Results DNLA significantly attenuated Aβ25–35-induced spatial learning and memory impairments in mice. DNLA prevented Aβ25–35-induced neuronal loss in the hippocampus and cortex, increased the number of Nissl bodies, ameliorated ultrastructural injury of neurons, and increased the number of synapses. Furthermore, DNLA increased the protein expression of the neurotrophic factors BDNF, CNTF, and GDNF in the hippocampus and cortex. Conclusions DNLA can prevent neuronal apoptosis and synaptic loss, thereby improving Aβ-induced spatial learning and memory impairment in mice. This effect is mediated at least in part via increased expression of BDNF, GDNF, and CNTF in the hippocampus and cortex. PMID:27994964
Cicvaric, Ana; Yang, Jiaye; Krieger, Sigurd; Khan, Deeba; Kim, Eun-Jung; Dominguez-Rodriguez, Manuel; Cabatic, Maureen; Molz, Barbara; Acevedo Aguilar, Juan Pablo; Milicevic, Radoslav; Smani, Tarik; Breuss, Johannes M; Kerjaschki, Dontscho; Pollak, Daniela D; Uhrin, Pavel; Monje, Francisco J
2016-12-01
Podoplanin is a cell-surface glycoprotein constitutively expressed in the brain and implicated in human brain tumorigenesis. The intrinsic function of podoplanin in brain neurons remains, however, uncharacterized. Using an established podoplanin-knockout mouse model and electrophysiological, biochemical, and behavioral approaches, we investigated the neuronal role of podoplanin in the brain. Ex vivo electrophysiology showed that podoplanin deletion impairs dentate gyrus synaptic strengthening. In vivo, podoplanin deletion selectively impaired hippocampus-dependent spatial learning and memory without affecting amygdala-dependent cued fear conditioning. In vitro, neuronal overexpression of podoplanin promoted synaptic activity and neuritic outgrowth, whereas podoplanin-deficient neurons exhibited stunted outgrowth and lower levels of p-Ezrin, TrkA, and CREB in response to nerve growth factor (NGF). Surface plasmon resonance data further indicated a physical interaction between podoplanin and NGF. This work proposes podoplanin as a novel component of the neuronal machinery underlying neuritogenesis, synaptic plasticity, and hippocampus-dependent memory functions. A relevant cross-talk between podoplanin and the NGF/TrkA signaling pathway is also proposed here for the first time, providing a novel molecular complex as a target for future multidisciplinary studies of brain function in physiology and pathology. Key messages: Podoplanin, a protein linked to the promotion of human brain tumors, is required in vivo for proper hippocampus-dependent learning and memory functions. Deletion of podoplanin selectively impairs activity-dependent synaptic strengthening in the neurogenic dentate gyrus and hampers neuritogenesis and phospho-Ezrin, TrkA, and CREB protein levels upon NGF stimulation. Surface plasmon resonance data indicate a physical interaction between podoplanin and NGF. On these grounds, a relevant cross-talk between podoplanin and NGF, as well as a role for podoplanin in plasticity-related brain neuronal functions, is proposed here.
ERIC Educational Resources Information Center
Pu, Lu; Kopec, Ashley M.; Boyle, Heather D.; Carew, Thomas J.
2014-01-01
Neurotrophins are critically involved in developmental processes such as neuronal cell survival, growth, and differentiation, as well as in adult synaptic plasticity contributing to learning and memory. Our previous studies examining neurotrophins and memory formation in "Aplysia" showed that a TrkB ligand is required for MAPK…
ERIC Educational Resources Information Center
Joels, Marian; Krugers, Harm; Wiegert, Olof
2006-01-01
Stress facilitates memory formation, but only when the stressor is closely linked to the learning context. These effects are, at least in part, mediated by corticosteroid hormones. Here we demonstrate that corticosterone rapidly facilitates synaptic potentiation in the mouse hippocampal CA1 area when high levels of the hormone and high-frequency…
ERIC Educational Resources Information Center
Nagy, Vanja; Bozdagi, Ozlem; Huntley, George W.
2007-01-01
Matrix metalloproteinases (MMPs) are a family of extracellularly acting proteolytic enzymes with well-recognized roles in plasticity and remodeling of synaptic circuits during brain development and following brain injury. However, it is now becoming increasingly apparent that MMPs also function in normal, nonpathological synaptic plasticity of the…
Long-Term Exercise Is Needed to Enhance Synaptic Plasticity in the Hippocampus
ERIC Educational Resources Information Center
Patten, Anna R.; Sickmann, Helle; Hryciw, Brett N.; Kucharsky, Tessa; Parton, Roberta; Kernick, Aimee; Christie, Brian R.
2013-01-01
Exercise can have many benefits for the body, but it also benefits the brain by increasing neurogenesis, synaptic plasticity, and performance on learning and memory tasks. The period of exercise needed to realize the structural and functional benefits for the brain have not been well delineated, and previous studies have used periods of exercise…
Li, Yan; Wang, Guo-Dong; Wang, Ming-Shan; Irwin, David M; Wu, Dong-Dong; Zhang, Ya-Ping
2014-11-05
Dogs share a much closer relationship with humans than any other domesticated animal, probably due to their unique social-cognitive capabilities, which have been hypothesized to be a by-product of selection for tameness toward humans. Here, by whole-genome comparison of dogs and wolves, we demonstrate that genes involved in glutamate metabolism, which partially account for the fear response, indeed show the greatest population differentiation. However, the direction of change in their expression supports a role in increasing excitatory synaptic plasticity in dogs rather than in reducing the fear response. Because synaptic plasticity is widely believed to be a cellular correlate of learning and memory, this change may have altered the learning and memory abilities of ancient scavenging wolves, weakened their fear reaction toward humans, and prompted the initial interspecific contact. © The Author(s) 2014. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
Neural ECM proteases in learning and synaptic plasticity.
Tsilibary, Effie; Tzinia, Athina; Radenovic, Lidija; Stamenkovic, Vera; Lebitko, Tomasz; Mucha, Mariusz; Pawlak, Robert; Frischknecht, Renato; Kaczmarek, Leszek
2014-01-01
Recent studies implicate extracellular proteases in synaptic plasticity, learning, and memory. The data are especially strong for serine proteases such as thrombin, tissue plasminogen activator, neurotrypsin, and neuropsin, as well as for matrix metalloproteinases, MMP-9 in particular. The role of these enzymes in the aforementioned phenomena is supported by experimental results on expression patterns (at the levels of gene expression, protein, and enzymatic activity) and by functional studies, including knockout mice, specific inhibitors, etc. Counterintuitively, the studies have shown that extracellular proteolysis is not mainly responsible for overall degradation of the extracellular matrix (ECM) and loosening of perisynaptic structures, but rather allows the release of signaling molecules from the ECM, transsynaptic proteins, and latent forms of growth factors. Notably, there are also indications implicating these enzymes in major neuropsychiatric disorders, probably through contributions to the synaptic aberrations underlying diseases such as schizophrenia, bipolar disorder, autism spectrum disorders, and drug addiction.
Tononi, Giulio; Cirelli, Chiara
2014-01-01
Summary Sleep is universal, tightly regulated, and its loss impairs cognition. But why does the brain need to disconnect from the environment for hours every day? The synaptic homeostasis hypothesis (SHY) proposes that sleep is the price the brain pays for plasticity. During a waking episode, learning statistical regularities about the current environment requires strengthening connections throughout the brain. This increases cellular needs for energy and supplies, decreases signal-to-noise ratios, and saturates learning. During sleep, spontaneous activity renormalizes net synaptic strength and restores cellular homeostasis. Activity-dependent down-selection of synapses can also explain the benefits of sleep on memory acquisition, consolidation, and integration. This happens through the off-line, comprehensive sampling of statistical regularities incorporated in neuronal circuits over a lifetime. This review considers the rationale and evidence for SHY and points to open issues related to sleep and plasticity. PMID:24411729
Ultrastructural evidence for synaptic scaling across the wake/sleep cycle.
de Vivo, Luisa; Bellesi, Michele; Marshall, William; Bushong, Eric A; Ellisman, Mark H; Tononi, Giulio; Cirelli, Chiara
2017-02-03
It is assumed that synaptic strengthening and weakening balance throughout learning to avoid runaway potentiation and memory interference. However, energetic and informational considerations suggest that potentiation should occur primarily during wake, when animals learn, and depression should occur during sleep. We measured 6920 synapses in mouse motor and sensory cortices using three-dimensional electron microscopy. The axon-spine interface (ASI) decreased ~18% after sleep compared with wake. This decrease was proportional to ASI size, which is indicative of scaling. Scaling was selective, sparing synapses that were large and lacked recycling endosomes. Similar scaling occurred for spine head volume, suggesting a distinction between weaker, more plastic synapses (~80%) and stronger, more stable synapses. These results support the hypothesis that a core function of sleep is to renormalize overall synaptic strength increased by wake. Copyright © 2017, American Association for the Advancement of Science.
Gorkiewicz, Tomasz; Balcerzyk, Marcin; Kaczmarek, Leszek; Knapska, Ewelina
2015-01-01
It has been shown that matrix metalloproteinase 9 (MMP-9) is required for synaptic plasticity, learning and memory. In particular, MMP-9 involvement in long-term potentiation (LTP, the model of synaptic plasticity) in the hippocampus and prefrontal cortex has previously been demonstrated. Recent data suggest the role of MMP-9 in amygdala-dependent learning and memory. Nothing is known, however, about its physiological correlates in the specific pathways in the amygdala. In the present study we show that LTP in the basal and central but not lateral amygdala (LA) is affected by MMP-9 knock-out. The MMP-9 dependency of LTP was confirmed in brain slices treated with a specific MMP-9 inhibitor. The results suggest that MMP-9 plays different roles in synaptic plasticity in different nuclei of the amygdala.
An Imperfect Dopaminergic Error Signal Can Drive Temporal-Difference Learning
Potjans, Wiebke; Diesmann, Markus; Morrison, Abigail
2011-01-01
An open problem in the field of computational neuroscience is how to link synaptic plasticity to system-level learning. A promising framework in this context is temporal-difference (TD) learning. Experimental evidence that supports the hypothesis that the mammalian brain performs temporal-difference learning includes the resemblance of the phasic activity of the midbrain dopaminergic neurons to the TD error and the discovery that cortico-striatal synaptic plasticity is modulated by dopamine. However, as the phasic dopaminergic signal does not reproduce all the properties of the theoretical TD error, it is unclear whether it is capable of driving behavior adaptation in complex tasks. Here, we present a spiking temporal-difference learning model based on the actor-critic architecture. The model dynamically generates a dopaminergic signal with realistic firing rates and exploits this signal to modulate the plasticity of synapses as a third factor. The predictions of our proposed plasticity dynamics are in good agreement with experimental results with respect to dopamine, pre- and post-synaptic activity. An analytical mapping from the parameters of our proposed plasticity dynamics to those of the classical discrete-time TD algorithm reveals that the biological constraints of the dopaminergic signal entail a modified TD algorithm with self-adapting learning parameters and an adapting offset. We show that the neuronal network is able to learn a task with sparse positive rewards as fast as the corresponding classical discrete-time TD algorithm. However, the performance of the neuronal network is impaired with respect to the traditional algorithm on a task with both positive and negative rewards and breaks down entirely on a task with purely negative rewards. Our model demonstrates that the asymmetry of a realistic dopaminergic signal enables TD learning when learning is driven by positive rewards but not when driven by negative rewards. PMID:21589888
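The asymmetry result summarized here can be reproduced in caricature with a tabular TD(0) learner on a short chain, clipping the TD error from below as an extreme simplification of a dopaminergic signal with limited negative range (chain length, learning rate, and discount are illustrative assumptions, not the paper's spiking network):

```python
def td0_chain(reward, clip_floor=None, alpha=0.2, gamma=0.9,
              n_states=5, episodes=200):
    """TD(0) value learning on a deterministic chain 0 -> 1 -> ... -> end,
    with `reward` delivered on the final transition. If clip_floor is set,
    the TD error is clipped from below, caricaturing an error signal that
    cannot go (far) below baseline."""
    V = [0.0] * n_states
    for _ in range(episodes):
        for s in range(n_states - 1):
            r = reward if s == n_states - 2 else 0.0
            v_next = V[s + 1] if s + 1 < n_states - 1 else 0.0  # terminal V = 0
            delta = r + gamma * v_next - V[s]                   # TD error
            if clip_floor is not None:
                delta = max(delta, clip_floor)
            V[s] += alpha * delta
    return V

# Positive rewards: the clipped learner still converges to positive values
assert td0_chain(+1.0, clip_floor=0.0)[0] > 0.5
# Purely negative rewards: an error floor at zero blocks all learning...
assert td0_chain(-1.0, clip_floor=0.0) == [0.0] * 5
# ...while the unclipped learner encodes the negative values
assert td0_chain(-1.0)[0] < -0.5
```

With positive rewards the value estimates only rise toward their targets, so the floor never binds; with purely negative rewards every informative error is negative and the clipped learner is blind, mirroring the breakdown reported in the abstract.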
Framework and Implications of Virtual Neurorobotics
Goodman, Philip H.; Zou, Quan; Dascalu, Sergiu-Mihai
2008-01-01
Despite decades of societal investment in artificial learning systems, truly “intelligent” systems have yet to be realized. These traditional models are based on input-output pattern optimization and/or cognitive production rule modeling. One response has been social robotics, using the interaction of human and robot to capture important cognitive dynamics such as cooperation and emotion; to date, these systems still incorporate traditional learning algorithms. More recently, investigators are focusing on the core assumptions of the brain “algorithm” itself—trying to replicate uniquely “neuromorphic” dynamics such as action potential spiking and synaptic learning. Only now are large-scale neuromorphic models becoming feasible, due to the availability of powerful supercomputers and an expanding supply of parameters derived from research into the brain's interdependent electrophysiological, metabolomic and genomic networks. Personal computer technology has also led to the acceptance of computer-generated humanoid images, or “avatars”, to represent intelligent actors in virtual realities. In a recent paper, we proposed a method of virtual neurorobotics (VNR) in which the approaches above (social-emotional robotics, neuromorphic brain architectures, and virtual reality projection) are hybridized to rapidly forward-engineer and develop increasingly complex, intrinsically intelligent systems. In this paper, we synthesize our research and related work in the field and provide a framework for VNR, with wider implications for research and practical applications. PMID:18982115
Novitskaya, Yulia; Sara, Susan J.; Logothetis, Nikos K.
2016-01-01
Experience-induced replay of neuronal ensembles occurs during hippocampal high-frequency oscillations, or ripples. Post-learning increase in ripple rate is predictive of memory recall, while ripple disruption impairs learning. Ripples may thus present a fundamental component of a neurophysiological mechanism of memory consolidation. In addition to system-level local and cross-regional interactions, a consolidation mechanism involves stabilization of memory representations at the synaptic level. Synaptic plasticity within experience-activated neuronal networks is facilitated by noradrenaline release from the axon terminals of the locus coeruleus (LC). Here, to better understand interactions between the system and synaptic mechanisms underlying “off-line” consolidation, we examined the effects of ripple-associated LC activation on hippocampal and cortical activity and on spatial memory. Rats were trained on a radial maze; after each daily learning session neural activity was monitored for 1 h via implanted electrode arrays. Immediately following “on-line” detection of ripple, a brief train of electrical pulses (0.05 mA) was applied to LC. Low-frequency (20 Hz) stimulation had no effect on spatial learning, while higher-frequency (100 Hz) trains transiently blocked generation of ripple-associated cortical spindles and caused a reference memory deficit. Suppression of synchronous ripple/spindle events appears to interfere with hippocampal-cortical communication, thereby reducing the efficiency of “off-line” memory consolidation. PMID:27084931
Enhancement of learning capacity and cholinergic synaptic function by carnitine in aging rats.
Ando, S; Tadenuma, T; Tanaka, Y; Fukui, F; Kobayashi, S; Ohashi, Y; Kawabata, T
2001-10-15
The effects of a carnitine derivative, acetyl-L-carnitine (ALCAR), on the cognitive and cholinergic activities of aging rats were examined. Rats were given ALCAR (100 mg/kg) per os for 3 months and were subjected to the Hebb-Williams tasks and a new maze task, AKON-1, to assess their learning capacity. The learning capacity of the ALCAR-treated group was superior to that of the control. Cholinergic activities were determined with synaptosomes isolated from the cortices. The high-affinity choline uptake by synaptosomes, acetylcholine synthesis in synaptosomes, and acetylcholine release from synaptosomes on membrane depolarization were all enhanced in the ALCAR group. This study indicates that chronic administration of ALCAR increases cholinergic synaptic transmission and consequently enhances learning capacity as a cognitive function in aging rats. Copyright 2001 Wiley-Liss, Inc.
Synaptic Effects of Electric Fields
NASA Astrophysics Data System (ADS)
Rahman, Asif
Learning and sensory processing in the brain rely on the effective transmission of information across synapses. The strength and efficacy of synaptic transmission are modifiable through training and can be modulated with noninvasive electrical brain stimulation. Transcranial electrical stimulation (TES), specifically, induces weak, spatially diffuse electric fields in the brain. Despite being weak, electric fields modulate spiking probability and the efficacy of synaptic transmission. These effects critically depend on the direction of the electric field relative to the orientation of the neuron and on the level of endogenous synaptic activity. TES has been used to treat a wide range of neuropsychiatric indications, to support rehabilitation, and to modulate cognitive performance in diverse tasks. How can a weak and diffuse electric field, which simultaneously polarizes neurons across the brain, produce precise changes in brain function? Designing therapies to maximize desired outcomes and minimize undesired effects presents a challenging problem. A series of experiments and computational models are used to define the anatomical and functional factors leading to specificity of TES. Anatomical specificity derives from guiding current to targeted brain structures and taking advantage of the direction-sensitivity of neurons with respect to the electric field. Functional specificity originates from preferential modulation of neuronal networks that are already active. Diffuse electric fields may recruit connected brain networks involved in a training task and promote plasticity along active synaptic pathways. In vitro, electric fields boost endogenous synaptic plasticity and raise the ceiling for synaptic learning with repeated stimulation sessions. Synapses undergoing strong plasticity are preferentially modulated over weak synapses. Therefore, active circuits that are involved in a task could be more susceptible to stimulation than inactive circuits.
Moreover, stimulation polarity has asymmetric effects on synaptic strength making it easier to enhance ongoing plasticity. These results suggest that the susceptibility of brain networks to an electric field depends on the state of synaptic activity. Combining a training task, which activates specific circuits, with TES may lead to functionally-specific effects. Given the simplicity of TES and the complexity of brain function, understanding the mechanisms leading to specificity is fundamental to the rational advancement of TES.
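The direction-sensitivity described in this abstract is often approximated to first order as a somatic polarization proportional to the field component along the neuron's somato-dendritic axis. The sketch below illustrates that relationship; the polarization length `lambda_mm` (0.2 mm) and the function name are illustrative assumptions, not values taken from this work.

```python
import math

def somatic_polarization_mV(field_V_per_m, theta_rad, lambda_mm=0.2):
    """First-order estimate dV = lambda * E * cos(theta).

    field_V_per_m: electric field magnitude (V/m)
    theta_rad: angle between field and somato-dendritic axis
    lambda_mm: assumed polarization length (illustrative value)
    """
    lambda_m = lambda_mm * 1e-3            # convert mm -> m
    dv_volts = lambda_m * field_V_per_m * math.cos(theta_rad)
    return dv_volts * 1e3                  # convert V -> mV
```

With these assumed numbers, a 1 V/m field aligned with the axis yields a sub-millivolt polarization, consistent with the abstract's point that TES fields are weak yet direction-dependent.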
Ketamine Protects Gamma Oscillations by Inhibiting Hippocampal LTD
Huang, Lanting; Yang, Xiu-Juan; Huang, Ying; Sun, Eve Y.
2016-01-01
NMDA receptors have been widely reported to be involved in the regulation of synaptic plasticity through effects on long-term potentiation (LTP) and long-term depression (LTD). LTP and LTD have been implicated in learning and memory processes. Beyond synaptic plasticity, the phenomenon of gamma oscillations is known to be critical in cognitive functions. Synaptic plasticity has been widely studied; however, it is still not clear to what degree it regulates the oscillations of neuronal networks. Two NMDA receptor antagonists, ketamine and memantine, have been shown to regulate LTP and LTD, to promote cognitive functions, and have even been reported to have therapeutic effects in major depression and Alzheimer’s disease, respectively. These compounds allow us to investigate the putative interrelationship between network oscillations and synaptic plasticity and to learn more about the mechanisms of their therapeutic effects. In the present study, we have identified that ketamine and memantine could inhibit LTD, without impairing LTP in the CA1 region of mouse hippocampus, which may underlie the mechanism of these drugs’ therapeutic effects. Our results suggest that NMDA-induced LTD caused a marked loss in the gamma power, and pretreatment with 10 μM ketamine prevented the oscillatory loss via its inhibitory effect on LTD. Our study provides a new understanding of the role of NMDA receptors on hippocampal plasticity and oscillations. PMID:27467732
Synaptic Scaling Enables Dynamically Distinct Short- and Long-Term Memory Formation
Tetzlaff, Christian; Kolodziejski, Christoph; Timme, Marc; Tsodyks, Misha; Wörgötter, Florentin
2013-01-01
Memory storage in the brain relies on mechanisms acting on time scales from minutes, for long-term synaptic potentiation, to days, for memory consolidation. During such processes, neural circuits distinguish synapses relevant for forming a long-term storage, which are consolidated, from synapses of short-term storage, which fade. How time scale integration and synaptic differentiation is simultaneously achieved remains unclear. Here we show that synaptic scaling – a slow process usually associated with the maintenance of activity homeostasis – combined with synaptic plasticity may simultaneously achieve both, thereby providing a natural separation of short- from long-term storage. The interaction between plasticity and scaling provides also an explanation for an established paradox where memory consolidation critically depends on the exact order of learning and recall. These results indicate that scaling may be fundamental for stabilizing memories, providing a dynamic link between early and late memory formation processes. PMID:24204240
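The interaction described here, fast Hebbian plasticity stabilized by slow multiplicative synaptic scaling, can be caricatured in a few lines. This is a minimal toy sketch under assumed rate-based dynamics with arbitrary constants, not the authors' published model.

```python
import numpy as np

def simulate(steps=2000, n_syn=10, eta=0.01, gamma=0.0005, target=1.0, seed=0):
    """Toy interplay of fast Hebbian plasticity and slow synaptic scaling.

    All constants (rates eta, gamma; target activity) are illustrative
    assumptions chosen only to make the two time scales visible.
    """
    rng = np.random.default_rng(seed)
    w = np.full(n_syn, 0.5)                      # initial synaptic weights
    for _ in range(steps):
        x = (rng.random(n_syn) < 0.2).astype(float)  # presynaptic spikes
        y = float(w @ x)                         # postsynaptic rate (linear)
        w += eta * y * x                         # Hebbian potentiation (fast)
        w += gamma * (target - y) * w            # multiplicative scaling (slow)
        w = np.clip(w, 0.0, 2.0)                 # hard bounds on weights
    return w
```

Because scaling is multiplicative, strongly potentiated synapses keep their relative advantage while overall activity is pulled toward the homeostatic target, which is the intuition behind the short-/long-term separation argued for in the abstract.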
Structure activity relationship of synaptic and junctional neurotransmission.
Goyal, Raj K; Chaudhury, Arun
2013-06-01
Chemical neurotransmission may include transmission to local or remote sites. Locally, contact between 'bare' portions of the bulbous nerve terminal termed a varicosity and the effector cell may be in the form of either synapse or non-synaptic contact. Traditionally, all local transmissions between nerves and effector cells are considered synaptic in nature. This is particularly true for communication between neurons. However, communication between nerves and other effectors such as smooth muscles has been described as nonsynaptic or junctional in nature. Nonsynaptic neurotransmission is now also increasingly recognized in the CNS. This review focuses on the relationship between structure and function that orchestrate synaptic and junctional neurotransmissions. A synapse is a specialized focal contact between the presynaptic active zone capable of ultrafast release of soluble transmitters and the postsynaptic density that cluster ionotropic receptors. The presynaptic and the postsynaptic areas are separated by the 'closed' synaptic cavity. The physiological hallmark of the synapse is ultrafast postsynaptic potentials lasting milliseconds. In contrast, junctions are juxtapositions of nerve terminals and the effector cells without clear synaptic specializations and the junctional space is 'open' to the extracellular space. Based on the nature of the transmitters, postjunctional receptors and their separation from the release sites, the junctions can be divided into 'close' and 'wide' junctions. Functionally, the 'close' and the 'wide' junctions can be distinguished by postjunctional potentials lasting ~1s and tens of seconds, respectively. Both synaptic and junctional communications are common between neurons; however, junctional transmission is the rule at many neuro-non-neural effectors. Published by Elsevier B.V.
Aβ Damages Learning and Memory in Alzheimer's Disease Rats with Kidney-Yang Deficiency
Qi, Dongmei; Qiao, Yongfa; Zhang, Xin; Yu, Huijuan; Cheng, Bin; Qiao, Haifa
2012-01-01
Previous studies suggested that Alzheimer's disease can be regarded, in traditional terms, as a consequence of Kidney-essence deficiency; however, the mechanism underlying the symptoms remains elusive. Here we report that spatial learning and memory, escape, and swimming capacities were significantly impaired in Kidney-yang deficiency rats. Indeed, increases in both hippocampal Aβ40 and Aβ42 in Kidney-yang deficiency contribute to the learning and memory impairments, and impaired synaptic plasticity is specifically involved. We determined that these deficits were not caused by NMDA receptor internalization induced by the Aβ increase. A β-adrenergic receptor agonist rescued the impaired long-term potentiation (LTP) in Kidney-yang rats. Taken together, our results suggest that the spatial learning and memory deficits in Kidney-yang deficiency might be induced by the Aβ increase and by decreased β2 receptor function in glia. PMID:22645624
Fontán-Lozano, Angela; Suárez-Pereira, Irene; Delgado-García, José María; Carrión, Angel Manuel
2011-01-01
Aging, mental retardation, and a number of psychiatric and neurological disorders are all associated with learning and memory impairments. As the underlying causes of such conditions are very heterogeneous, manipulations that can enhance learning and memory in mice under different circumstances might be able to overcome the cognitive deficits in patients. The M-current regulates neuronal excitability and action potential firing, suggesting that its inhibition may increase cognitive capacities. We demonstrate that XE991, a specific M-current blocker, enhances learning and memory in healthy mice. This effect may be achieved by altering basal hippocampal synaptic activity and by diminishing the stimulation threshold for long-term changes in synaptic efficacy and learning-related gene expression. We also show that training sessions regulate the M-current by transiently decreasing the levels of KCNQ/Kv7.3 protein, a pivotal subunit for the M-current. Furthermore, we found that XE991 can revert the cognitive impairment associated with acetylcholine depletion and the neurodegeneration induced by kainic acid. Together, these results show that inhibition of the M-current may be useful as a general strategy to enhance cognitive capacities in healthy and aging individuals, as well as in those with neurodegenerative diseases. Copyright © 2010 Wiley-Liss, Inc.
Engineering incremental resistive switching in TaOx based memristors for brain-inspired computing
NASA Astrophysics Data System (ADS)
Wang, Zongwei; Yin, Minghui; Zhang, Teng; Cai, Yimao; Wang, Yangyuan; Yang, Yuchao; Huang, Ru
2016-07-01
Brain-inspired neuromorphic computing is expected to revolutionize the architecture of conventional digital computers and lead to a new generation of powerful computing paradigms, where memristors with analog resistive switching are considered to be potential solutions for synapses. Here we propose and demonstrate a novel approach to engineering the analog switching linearity in TaOx based memristors, that is, by homogenizing the filament growth/dissolution rate via the introduction of an ion diffusion limiting layer (DLL) at the TiN/TaOx interface. This has effectively mitigated the commonly observed two-regime conductance modulation behavior and led to more uniform filament growth (dissolution) dynamics with time, therefore significantly improving the conductance modulation linearity that is desirable in neuromorphic systems. In addition, the introduction of the DLL also served to reduce the power consumption of the memristor, and important synaptic learning rules in biological brains such as spike timing dependent plasticity were successfully implemented using these optimized devices. This study could provide general implications for continued optimizations of memristor performance for neuromorphic applications, by carefully tuning the dynamics involved in filament growth and dissolution. Electronic supplementary information (ESI) available. See DOI: 10.1039/c6nr00476h
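The spike-timing-dependent plasticity rule mentioned in this abstract is commonly written as an exponentially decaying weight update in the pre/post spike-time difference. Below is a generic pair-based STDP sketch; the amplitudes and time constant are illustrative textbook-style values, not device parameters from this study.

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Pair-based STDP weight change for dt = t_post - t_pre (ms).

    a_plus, a_minus, tau_ms are illustrative constants (assumptions).
    """
    if dt_ms > 0:
        # Pre fires before post: potentiation, decaying with the delay.
        return a_plus * math.exp(-dt_ms / tau_ms)
    elif dt_ms < 0:
        # Post fires before pre: depression, decaying with the delay.
        return -a_minus * math.exp(dt_ms / tau_ms)
    return 0.0
```

In a memristive implementation, a weight change of this shape is typically mapped onto an incremental conductance update, which is why the linearity of conductance modulation discussed above matters.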
ERIC Educational Resources Information Center
Zhang, Ming; Wang, Hongbing
2013-01-01
There is significant interest in understanding the contribution of intracellular signaling and synaptic substrates to memory flexibility, which involves new learning and suppression of obsolete memory. Here, we report that enhancement of Ca[superscript 2+]-stimulated cAMP signaling by overexpressing type 1 adenylyl cyclase (AC1) facilitated…
ERIC Educational Resources Information Center
Zheng, Fei; Zhang, Ming; Ding, Qi; Sethna, Ferzin; Yan, Lily; Moon, Changjong; Yang, Miyoung; Wang, Hongbing
2016-01-01
Mental health and cognitive functions are influenced by both genetic and environmental factors. Although having active lifestyle with physical exercise improves learning and memory, how it interacts with the specific key molecular regulators of synaptic plasticity is largely unknown. Here, we examined the effects of voluntary running on long-term…
Synaptic Modifications in the Medial Prefrontal Cortex in Susceptibility and Resilience to Stress
Wang, Minghui; Perova, Zinaida; Arenkiel, Benjamin R.
2014-01-01
When facing stress, most individuals are resilient whereas others are prone to developing mood disorders. The brain mechanisms underlying such divergent behavioral responses remain unclear. Here we used the learned helplessness procedure in mice to examine the role of the medial prefrontal cortex (mPFC), a brain region highly implicated in both clinical and animal models of depression, in adaptive and maladaptive behavioral responses to stress. We found that uncontrollable and inescapable stress induced behavioral state-dependent changes in the excitatory synapses onto a subset of mPFC neurons: those that were activated during behavioral responses as indicated by their expression of the activity reporter c-Fos. Whereas synaptic potentiation was linked to learned helplessness, a depression-like behavior, synaptic weakening was associated with resilience to stress. Notably, enhancing the activity of mPFC neurons using a chemical–genetic method was sufficient to convert the resilient behavior into helplessness. Our results provide direct evidence that mPFC dysfunction is linked to maladaptive behavioral responses to stress, and suggest that enhanced excitatory synaptic drive onto mPFC neurons may underlie the previously reported hyperactivity of this brain region in depression. PMID:24872553
Havekes, Robbert; Canton, David A.; Park, Alan J.; Huang, Ted; Nie, Ting; Day, Jonathan P.; Guercio, Leonardo A.; Grimes, Quinn; Luczak, Vincent; Gelman, Irwin H.; Baillie, George S.; Scott, John D.; Abel, Ted
2012-01-01
A-kinase anchoring proteins (AKAPs) organize compartmentalized pools of Protein Kinase A (PKA) to enable localized signaling events within neurons. However, it is unclear which of the many expressed AKAPs in neurons target PKA to signaling complexes important for long-lasting forms of synaptic plasticity and memory storage. In the forebrain, the anchoring protein gravin recruits a signaling complex containing PKA, PKC, calmodulin, and PDE4D to the β2-adrenergic receptor. Here, we show that mice lacking the α-isoform of gravin have deficits in PKA-dependent long-lasting forms of hippocampal synaptic plasticity including β2-adrenergic receptor-mediated plasticity, and selective impairments of long-term memory storage. Further, both hippocampal β2-adrenergic receptor phosphorylation by PKA, and learning-induced activation of ERK, are attenuated in the CA1 region of the hippocampus in mice lacking gravin-α. We conclude that gravin compartmentalizes a significant pool of PKA that regulates learning-induced β2-adrenergic receptor signaling and ERK activation in the hippocampus in vivo, organizing molecular interactions between glutamatergic and noradrenergic signaling pathways for long-lasting synaptic plasticity, and memory storage. PMID:23238728
Learning to learn – intrinsic plasticity as a metaplasticity mechanism for memory formation
Sehgal, Megha; Song, Chenghui; Ehlers, Vanessa L.; Moyer, James R.
2013-01-01
“Use it or lose it” is a popular adage often associated with use-dependent enhancement of cognitive abilities. Much research has focused on understanding exactly how the brain changes as a function of experience. Such experience-dependent plasticity involves both structural and functional alterations that contribute to adaptive behaviors, such as learning and memory, as well as maladaptive behaviors, including anxiety disorders, phobias, and posttraumatic stress disorder. With the advancing age of our population, understanding how use-dependent plasticity changes across the lifespan may also help to promote healthy brain aging. A common misconception is that such experience-dependent plasticity (e.g., associative learning) is synonymous with synaptic plasticity. Other forms of plasticity also play a critical role in shaping adaptive changes within the nervous system, including intrinsic plasticity – a change in the intrinsic excitability of a neuron. Intrinsic plasticity can result from a change in the number, distribution or activity of various ion channels located throughout the neuron. Here, we review evidence that intrinsic plasticity is an important and evolutionarily conserved neural correlate of learning. Intrinsic plasticity acts as a metaplasticity mechanism by lowering the threshold for synaptic changes. Thus, learning-related intrinsic changes can facilitate future synaptic plasticity and learning. Such intrinsic changes can impact the allocation of a memory trace within a brain structure, and when compromised, can contribute to cognitive decline during the aging process. This unique role of intrinsic excitability can provide insight into how memories are formed and, more interestingly, how neurons that participate in a memory trace are selected. 
Most importantly, modulation of intrinsic excitability can allow for regulation of learning ability – this can prevent or provide treatment for cognitive decline not only in patients with clinical disorders but also in the aging population. PMID:23871744
Expression of VGLUTs contributes to degeneration and acquisition of learning and memory.
Cheng, Xiao-Rui; Yang, Yong; Zhou, Wen-Xia; Zhang, Yong-Xiang
2011-03-01
Vesicular glutamate transporters (VGLUTs), which include VGLUT1, VGLUT2 and VGLUT3, are responsible for the uploading of L-glutamate into synaptic vesicles. The expression pattern of VGLUTs determines the level of synaptic vesicle filling (i.e., glutamate quantal size) and directly influences glutamate receptors and glutamatergic synaptic transmission; thus, VGLUTs may play a key role in learning and memory in the central nervous system. To determine whether VGLUTs contribute to the degeneration or acquisition of learning and memory, we used an animal model for the age-related impairment of learning and memory, senescence-accelerated mouse/prone 8 (SAMP8). KM mice were divided into groups based on their learning and memory performance in a shuttle-box test. The expression of VGLUTs and synaptophysin (Syp) mRNA and protein in the cerebral cortex and hippocampus was investigated with real-time fluorescence quantitative PCR and western blot, respectively. Our results demonstrate that, in the cerebral cortex, protein expression of VGLUT1, VGLUT2, VGLUT3 and Syp was decreased in SAMP8 with age and increased in KM mice, which displayed an enhanced capacity for learning and memory. The protein expression of VGLUT2 and Syp was decreased in the hippocampus of SAMP8 with aging. The expression levels of VGLUT1 and VGLUT2 proteins were highest in the KM mouse group with a 76-100% avoidance score in the shuttle-box test. These data demonstrate that protein expression of VGLUT1, VGLUT2 and Syp decreases age-dependently in SAMP8 and increases in a learning- and memory-dependent manner in KM mice. Correlation analysis indicated that the protein expression of VGLUT1, VGLUT2 and Syp correlates positively with learning and memory capacity. Copyright © 2011 Elsevier Inc. All rights reserved.
Palavicini, Juan Pablo; Wang, Hongjie; Minond, Dmitriy; Bianchi, Elisabetta; Xu, Shaohua; Lakshmana, Madepalli K
2014-01-01
Loss of synaptic proteins and functional synapses in the brains of patients with Alzheimer's disease (AD) as well as transgenic mouse models expressing amyloid-β protein precursor is now well established. However, the earliest age at which such loss of synapses occurs, and whether known markers of AD progression accelerate functional deficits, is completely unknown. We previously showed that RanBP9 overexpression leads to enhanced amyloid plaque burden in a mouse model of AD. In this study, we found significant reductions in the levels of synaptophysin and spinophilin, compared with wild-type controls, in both the cortex and the hippocampus of 5- and 6-month-old but not 3- or 4-month-old APΔE9/RanBP9 triple transgenic mice, and not in APΔE9 double transgenic mice, nor in RanBP9 single transgenic mice. Interestingly, amyloid plaque burden was also increased in the APΔE9/RanBP9 mice at 5-6 months. Consistent with these results, we found significant deficits in learning and memory in the APΔE9/RanBP9 mice at 5 and 6 months. These data suggest that the RanBP9-induced increase in amyloid plaques, loss of synaptic proteins, and accelerated learning and memory deficits are correlated. Most importantly, APΔE9/RanBP9 mice also showed significantly reduced levels of the phosphorylated form of cofilin in the hippocampus. Taken together these data suggest that RanBP9 overexpression down-regulates cofilin, causes early synaptic deficits and impaired learning, and accelerates accumulation of amyloid plaques in the mouse brain.
A Machine Learning Method for the Prediction of Receptor Activation in the Simulation of Synapses
Montes, Jesus; Gomez, Elena; Merchán-Pérez, Angel; DeFelipe, Javier; Peña, Jose-Maria
2013-01-01
Chemical synaptic transmission involves the release of a neurotransmitter that diffuses in the extracellular space and interacts with specific receptors located on the postsynaptic membrane. Computer simulation approaches provide fundamental tools for exploring various aspects of the synaptic transmission under different conditions. In particular, Monte Carlo methods can track the stochastic movements of neurotransmitter molecules and their interactions with other discrete molecules, the receptors. However, these methods are computationally expensive, even when used with simplified models, preventing their use in large-scale and multi-scale simulations of complex neuronal systems that may involve large numbers of synaptic connections. We have developed a machine-learning based method that can accurately predict relevant aspects of the behavior of synapses, such as the percentage of open synaptic receptors as a function of time since the release of the neurotransmitter, with considerably lower computational cost compared with the conventional Monte Carlo alternative. The method is designed to learn patterns and general principles from a corpus of previously generated Monte Carlo simulations of synapses covering a wide range of structural and functional characteristics. These patterns are later used as a predictive model of the behavior of synapses under different conditions without the need for additional computationally expensive Monte Carlo simulations. This is performed in five stages: data sampling, fold creation, machine learning, validation and curve fitting. The resulting procedure is accurate, automatic, and it is general enough to predict synapse behavior under experimental conditions that are different to the ones it has been trained on. 
Since our method efficiently reproduces the results that can be obtained with Monte Carlo simulations at a considerably lower computational cost, it is suitable for the simulation of high numbers of synapses and it is therefore an excellent tool for multi-scale simulations. PMID:23894367
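The surrogate-modeling idea summarized here — fit a cheap predictor on a corpus of expensive simulations, then reuse it instead of rerunning them — can be sketched in miniature. In the sketch below, the "simulator" is a stand-in analytic curve and the learner is a deliberately simple linear least-squares fit; the authors' actual pipeline (data sampling, fold creation, machine learning, validation, curve fitting) is richer, so every name and constant here is an illustrative assumption.

```python
import numpy as np

def fake_mc_curve(n_receptors, diffusion, t):
    """Stand-in for an expensive Monte Carlo run: open-receptor fraction vs time."""
    peak = n_receptors / (n_receptors + 50.0)
    return peak * np.exp(-diffusion * t) * (1.0 - np.exp(-5.0 * t))

def build_surrogate(n_samples=200, seed=1):
    """Fit a linear surrogate mapping (n_receptors, diffusion) -> whole time curve."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 2.0, 50)
    X, Y = [], []
    for _ in range(n_samples):                    # "corpus" of simulated synapses
        n_r = rng.uniform(10.0, 200.0)
        d = rng.uniform(0.5, 3.0)
        X.append([n_r, d])
        Y.append(fake_mc_curve(n_r, d, t))
    X, Y = np.array(X), np.array(Y)
    A = np.hstack([X, np.ones((n_samples, 1))])   # add bias column
    coef, *_ = np.linalg.lstsq(A, Y, rcond=None)  # one set of weights per time point

    def predict(n_r, d):
        return np.array([n_r, d, 1.0]) @ coef     # cheap prediction, no simulation
    return predict, t
```

Once fitted, `predict` answers in microseconds what the stand-in simulator would otherwise recompute, which is the cost argument made in the abstract.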
Diaz-Ruiz, Oscar; Zhang, Yajun; Shan, Lufei; Malik, Nasir; Hoffman, Alexander F; Ladenheim, Bruce; Cadet, Jean Lud; Lupica, Carl R; Tagliaferro, Adriana; Brusco, Alicia; Bäckman, Cristina M
2012-07-20
In the present study, we analyzed mice with a targeted deletion of β-catenin in DA neurons (DA-βcat KO mice) to address the functional significance of this molecule in the shaping of synaptic responses associated with motor learning and following exposure to drugs of abuse. Relative to controls, DA-βcat KO mice showed significant deficits in their ability to form long-term memories and displayed reduced expression of methamphetamine-induced behavioral sensitization after subsequent challenge doses with this drug, suggesting that motor learning and drug-induced learning plasticity are altered in these mice. Morphological analyses showed no changes in the number or distribution of tyrosine hydroxylase-labeled neurons in the ventral midbrain. While electrochemical measurements in the striatum determined no changes in acute DA release and uptake, a small but significant decrease in DA release was detected in mutant animals after prolonged repetitive stimulation, suggesting a possible deficit in the DA neurotransmitter vesicle reserve pool. However, electron microscopy analyses did not reveal significant differences in the content of synaptic vesicles per terminal, and striatal DA levels were unchanged in DA-βcat KO animals. In contrast, striatal mRNA levels for several markers known to regulate synaptic plasticity and DA neurotransmission were altered in DA-βcat KO mice. This study demonstrates that ablation of β-catenin in DA neurons leads to alterations of motor and reward-associated memories and to adaptations of the DA neurotransmitter system and suggests that β-catenin signaling in DA neurons is required to facilitate the synaptic remodeling underlying the consolidation of long-term memories.
Proteomic Analysis of Rat Hippocampus under Simulated Microgravity
NASA Astrophysics Data System (ADS)
Wang, Yun; Li, Yujuan; Zhang, Yongqian; Liu, Yahui; Deng, Yulin
It has been found that microgravity may lead to impairments in cognitive functions performed by the CNS. However, the exact mechanism of the effects of microgravity on learning and memory function in the animal nervous system has not yet been elucidated. Brain function is mainly mediated by membrane proteins, and their dysfunction causes degeneration of learning and memory. To induce simulated microgravity, the rat tail suspension model was established. A comparative 18O-labeling quantitative proteomic strategy was applied to detect differentially expressed proteins in rat brain hippocampus. The proteins in the membrane fraction from rat hippocampus were digested by trypsin, and the peptides were then separated by off-gel electrophoresis in the first dimension using a 24-well device encompassing the pH range 3-10. Each off-gel fraction was subjected to LC-ESI-QTOF analysis in triplicate. Preliminary results showed that nearly 77% of the identified peptides were specific to one fraction. In total, 676 proteins were identified, among which 108 were differentially expressed under simulated microgravity. Using the KOBAS server, many enriched pathways were identified, such as metabolic pathways, the synaptic vesicle cycle, endocytosis, the calcium signaling pathway, and the SNARE pathway. Furthermore, it has been found that neurotransmitter release by Ca2+-triggered synaptic vesicle fusion may play a key role in neural function, and Rab3A might inhibit membrane fusion and neurotransmitter release. The protein alterations in the synaptic vesicle cycle may further explain the effects of microgravity on learning and memory function in rats. Keywords: microgravity; proteomics; synaptic vesicle; 18O-labeling
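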
ERIC Educational Resources Information Center
Wang, Fuxing; Li, Wenjing; Mayer, Richard E.; Liu, Huashan
2018-01-01
The goal of the present study is to determine how to incorporate social cues such as gesturing in animated pedagogical agents (PAs) for online multimedia lessons in ways that promote student learning. In 3 experiments, college students learned about synaptic transmission from a multimedia narrated presentation while their eye movements were…
Reinforcement Learning Using a Continuous Time Actor-Critic Framework with Spiking Neurons
Frémaux, Nicolas; Sprekeler, Henning; Gerstner, Wulfram
2013-01-01
Animals repeat rewarded behaviors, but the physiological basis of reward-based learning has only been partially elucidated. On one hand, experimental evidence shows that the neuromodulator dopamine carries information about rewards and affects synaptic plasticity. On the other hand, the theory of reinforcement learning provides a framework for reward-based learning. Recent models of reward-modulated spike-timing-dependent plasticity have made first steps towards bridging the gap between the two approaches, but faced two problems. First, reinforcement learning is typically formulated in a discrete framework, ill-adapted to the description of natural situations. Second, biologically plausible models of reward-modulated spike-timing-dependent plasticity require precise calculation of the reward prediction error, yet it remains to be shown how this can be computed by neurons. Here we propose a solution to these problems by extending the continuous temporal difference (TD) learning of Doya (2000) to the case of spiking neurons in an actor-critic network operating in continuous time, and with continuous state and action representations. In our model, the critic learns to predict expected future rewards in real time. Its activity, together with actual rewards, conditions the delivery of a neuromodulatory TD signal to itself and to the actor, which is responsible for action choice. In simulations, we show that such an architecture can solve a Morris water-maze-like navigation task, in a number of trials consistent with reported animal performance. We also use our model to solve the acrobot and the cartpole problems, two complex motor control tasks. Our model provides a plausible way of computing reward prediction error in the brain. Moreover, the analytically derived learning rule is consistent with experimental evidence for dopamine-modulated spike-timing-dependent plasticity. PMID:23592970
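The abstract's core algorithmic idea can be illustrated without spiking neurons. Below is a minimal sketch of a linear (rate-based) actor-critic on a 1-D track, using the Euler discretization of continuous-time TD learning (Doya 2000), where the effective per-step discount is gamma = 1 - dt/tau. All names, feature shapes, and parameters are invented for illustration; this is not the paper's spiking implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

N_FEAT, TAU_R, DT = 21, 2.0, 0.1
GAMMA = 1.0 - DT / TAU_R          # Euler discretization of continuous discounting
centers = np.linspace(0.0, 1.0, N_FEAT)

def phi(x):
    """Gaussian 'place cell'-like features of position x (normalized)."""
    f = np.exp(-((x - centers) ** 2) / (2 * 0.05 ** 2))
    return f / f.sum()

wc = np.zeros(N_FEAT)             # critic weights (value function)
wa = np.zeros((2, N_FEAT))        # actor weights, actions: 0 = left, 1 = right

def run_episode(alpha=0.2, lam=0.9):
    """One episode on a 1-D track; reward on reaching x >= 0.9."""
    global wc, wa
    x, steps = 0.2, 0
    ec = np.zeros(N_FEAT)                  # critic eligibility trace
    ea = np.zeros((2, N_FEAT))             # actor eligibility trace
    while x < 0.9 and steps < 400:
        f = phi(x)
        logits = wa @ f                    # softmax action selection (actor)
        p = np.exp(logits - logits.max()); p /= p.sum()
        a = rng.choice(2, p=p)
        x2 = float(np.clip(x + (0.05 if a == 1 else -0.05), 0.0, 1.0))
        r = 1.0 if x2 >= 0.9 else 0.0
        v_next = 0.0 if r else wc @ phi(x2)
        delta = r + GAMMA * v_next - wc @ f   # TD error ("neuromodulatory" signal)
        ec = GAMMA * lam * ec + f
        ea = GAMMA * lam * ea
        ea[a] += f
        wc = wc + alpha * delta * ec       # critic learns the value function
        wa = wa + alpha * delta * ea       # actor update gated by the TD signal
        x, steps = x2, steps + 1
    return steps

lengths = [run_episode() for _ in range(200)]
```

The same TD error both trains the critic's value prediction and modulates the actor's policy update, which is the division of labor the abstract describes.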
Hippocampal ripples down-regulate synapses.
Norimoto, Hiroaki; Makino, Kenichi; Gao, Mengxuan; Shikano, Yu; Okamoto, Kazuki; Ishikawa, Tomoe; Sasaki, Takuya; Hioki, Hiroyuki; Fujisawa, Shigeyoshi; Ikegaya, Yuji
2018-03-30
The specific effects of sleep on synaptic plasticity remain unclear. We report that mouse hippocampal sharp-wave ripple oscillations serve as intrinsic events that trigger long-lasting synaptic depression. Silencing of sharp-wave ripples during slow-wave states prevented the spontaneous down-regulation of net synaptic weights and impaired the learning of new memories. The synaptic down-regulation was dependent on the N-methyl-D-aspartate receptor and selective for a specific input pathway. Thus, our findings are consistent with the role of slow-wave states in refining memory engrams by reducing recent memory-irrelevant neuronal activity and suggest a previously unrecognized function for sharp-wave ripples. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
Chen, Shanping; Cai, Diancai; Pearce, Kaycey; Sun, Philip Y-W; Roberts, Adam C; Glanzman, David L
2014-01-01
Long-term memory (LTM) is believed to be stored in the brain as changes in synaptic connections. Here, we show that LTM storage and synaptic change can be dissociated. Cocultures of Aplysia sensory and motor neurons were trained with spaced pulses of serotonin, which induces long-term facilitation. Serotonin (5HT) triggered growth of new presynaptic varicosities, a synaptic mechanism of long-term sensitization. Following 5HT training, two antimnemonic treatments—reconsolidation blockade and inhibition of PKM—caused the number of presynaptic varicosities to revert to the original, pretraining value. Surprisingly, the final synaptic structure was not achieved by targeted retraction of the 5HT-induced varicosities but, rather, by an apparently arbitrary retraction of both 5HT-induced and original synapses. In addition, we find evidence that the LTM for sensitization persists covertly after its apparent elimination by the same antimnemonic treatments that erase learning-related synaptic growth. These results challenge the idea that stable synapses store long-term memories. DOI: http://dx.doi.org/10.7554/eLife.03896.001 PMID:25402831
Lim, Chae-Seok; Hoang, Elizabeth T; Viar, Kenneth E; Stornetta, Ruth L; Scott, Michael M; Zhu, J Julius
2014-02-01
Fragile X syndrome, caused by the loss of Fmr1 gene function, is the most common form of inherited mental retardation, with no effective treatment. Using a tractable animal model, we investigated mechanisms of action of a few FDA-approved psychoactive drugs that modestly benefit the cognitive performance in fragile X patients. Here we report that compounds activating serotonin (5HT) subtype 2B receptors (5HT2B-Rs) or dopamine (DA) subtype 1-like receptors (D1-Rs) and/or those inhibiting 5HT2A-Rs or D2-Rs moderately enhance Ras-PI3K/PKB signaling input, GluA1-dependent synaptic plasticity, and learning in Fmr1 knockout mice. Unexpectedly, combinations of these 5HT and DA compounds at low doses synergistically stimulate Ras-PI3K/PKB signal transduction and GluA1-dependent synaptic plasticity and remarkably restore normal learning in Fmr1 knockout mice without causing anxiety-related side effects. These findings suggest that properly dosed and combined FDA-approved psychoactive drugs may effectively treat the cognitive impairment associated with fragile X syndrome.
ERIC Educational Resources Information Center
Sajikumar, Sreedharan; Li, Qin; Abraham, Wickliffe C.; Xiao, Zhi Cheng
2009-01-01
Activity-dependent changes in synaptic strength such as long-term potentiation (LTP) and long-term depression (LTD) are considered to be cellular mechanisms underlying learning and memory. Strengthening of a synapse for a few seconds or minutes is termed short-term potentiation (STP) and is normally unable to take part in the processes of synaptic…
ERIC Educational Resources Information Center
Zhang, Xiaoqun; Yao, Ning; Chergui, Karima
2016-01-01
Several forms of long-term depression (LTD) of glutamatergic synaptic transmission have been identified in the dorsal striatum and in the nucleus accumbens (NAc). Such experience-dependent synaptic plasticity might play important roles in reward-related learning. The GABA-A receptor agonist muscimol was recently found to trigger a…
BK Channels Mediate Synaptic Plasticity Underlying Habituation in Rats.
Zaman, Tariq; De Oliveira, Cleusa; Smoka, Mahabba; Narla, Chakravarthi; Poulter, Michael O; Schmid, Susanne
2017-04-26
Habituation is a basic form of implicit learning and represents a sensory filter that is disrupted in autism, schizophrenia, and several other mental disorders. Despite extensive research in the past decades on habituation of startle and other escape responses, the underlying neural mechanisms are still not fully understood. There is evidence from previous studies indicating that BK channels might play a critical role in habituation. We here used a wide array of approaches to test this hypothesis. We show that BK channel activation and subsequent phosphorylation of these channels are essential for synaptic depression presumably underlying startle habituation in rats, using patch-clamp recordings and voltage-sensitive dye imaging in slices. Furthermore, positive modulation of BK channels in vivo can enhance short-term habituation. Although results using different approaches do not always perfectly align, together they provide convincing evidence for a crucial role of BK channel phosphorylation in synaptic depression underlying short-term habituation of startle. We also show that this mechanism can be targeted to enhance short-term habituation and therefore to potentially ameliorate sensory filtering deficits associated with psychiatric disorders. SIGNIFICANCE STATEMENT Short-term habituation is the most fundamental form of implicit learning. Habituation also represents a filter for inundating sensory information, which is disrupted in autism, schizophrenia, and other psychiatric disorders. Habituation has been studied in different organisms and behavioral models and is thought to be caused by synaptic depression in respective pathways. The underlying molecular mechanisms, however, are poorly understood. We here identify, for the first time, a BK channel-dependent molecular synaptic mechanism leading to synaptic depression that is crucial for habituation, and we discuss the significance of our findings for potential treatments enhancing habituation. 
Diverse strategy-learning styles promote cooperation in evolutionary spatial prisoner's dilemma game
NASA Astrophysics Data System (ADS)
Liu, Run-Ran; Jia, Chun-Xiao; Rong, Zhihai
2015-11-01
Observational learning and practice learning are two important learning styles that play key roles in information acquisition. In this paper, we study a spatial evolutionary prisoner's dilemma game in which players can choose either the observational learning rule or the practice learning rule when updating their strategies. In the proposed model, a parameter p controls players' preference for the observational learning rule. We find that there exists an optimal value of p leading to the highest cooperation level, which indicates that cooperation is promoted by the two learning rules acting collaboratively and that a single learning rule alone does not favor the promotion of cooperation. By analyzing the dynamical behavior of the system, we find that the observational learning rule helps players residing in cooperative clusters more easily recognize the bad consequences of mutual defection. However, a too-high observational learning probability suppresses the formation of compact cooperative clusters. Our results highlight the importance of the strategy-updating rule, and in particular the observational learning rule, in the evolution of cooperation.
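A minimal simulation of the model class described above can be sketched as follows. All concrete choices (lattice size, weak-PD payoffs, Fermi imitation for observational learning, aspiration-based self-adjustment standing in for practice learning, and every constant) are assumptions for illustration, not the authors' exact rules.

```python
import numpy as np

rng = np.random.default_rng(1)

L, B, P_OBS, K, ASPIRATION = 20, 1.1, 0.5, 0.1, 2.0
strat = rng.integers(0, 2, size=(L, L))   # 1 = cooperate, 0 = defect

def payoffs(s):
    """Total payoff of each site against its 4 von Neumann neighbours."""
    total = np.zeros((L, L))
    for ax, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
        nb = np.roll(s, shift, axis=ax)
        # C-C -> 1, D-C -> B (temptation), C-D and D-D -> 0 (weak PD)
        total += np.where(s == 1, nb * 1.0, nb * B)
    return total

def step(s):
    pay = payoffs(s)
    new = s.copy()
    offsets = ((1, 0), (-1, 0), (0, 1), (0, -1))
    for i in range(L):
        for j in range(L):
            if rng.random() < P_OBS:
                # observational learning: Fermi imitation of a random neighbour
                di, dj = offsets[rng.integers(4)]
                ni, nj = (i + di) % L, (j + dj) % L
                if rng.random() < 1.0 / (1.0 + np.exp((pay[i, j] - pay[ni, nj]) / K)):
                    new[i, j] = s[ni, nj]
            else:
                # practice learning: switch if own payoff misses the aspiration
                if pay[i, j] < ASPIRATION and rng.random() < 0.5:
                    new[i, j] = 1 - s[i, j]
    return new

for _ in range(50):
    strat = step(strat)
coop_level = strat.mean()   # fraction of cooperators after 50 rounds
```

Sweeping `P_OBS` over [0, 1] and plotting `coop_level` is the experiment that would expose the optimal mixing probability the abstract reports.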
2009-01-01
A breakthrough for studying the neuronal basis of learning emerged when invertebrates with simple nervous systems, such as the sea slug Hermissenda crassicornis, were shown to exhibit classical conditioning. Hermissenda learns to associate light with turbulence: prior to learning, naive animals move toward light (phototaxis) and contract their foot in response to turbulence; after learning, conditioned animals delay phototaxis in response to light. The photoreceptors of the eye, which receive monosynaptic inputs from statocyst hair cells, are both sensory neurons and the first site of sensory convergence. The memory of light associated with turbulence is stored as changes in intrinsic and synaptic currents in these photoreceptors. The subcellular mechanisms producing these changes include activation of protein kinase C and MAP kinase, which act as coincidence detectors because they are activated by convergent signaling pathways. Pathways of interneurons and motor neurons, where additional changes in excitability and synaptic connections are found, contribute to delayed phototaxis. Bursting activity recorded at several points suggests the existence of small networks that produce complex spatio-temporal firing patterns. Thus, the change in behavior may be produced by a non-linear transformation of spatio-temporal firing patterns caused by plasticity of synaptic and intrinsic channels. The changes in currents and the activation of PKC and MAPK produced by associative learning are similar to those observed in hippocampal and cerebellar neurons after rabbit classical conditioning, suggesting that these represent general mechanisms of memory storage. Thus, the knowledge gained from further study of Hermissenda will continue to illuminate mechanisms of mammalian learning. PMID:16437555
Lee, Yong-Seok; Ehninger, Dan; Zhou, Miou; Oh, Jun-Young; Kang, Minkyung; Kwak, Chuljung; Ryu, Hyun-Hee; Butz, Delana; Araki, Toshiyuki; Cai, Ying; Balaji, J.; Sano, Yoshitake; Nam, Christine I.; Kim, Hyong Kyu; Kaang, Bong-Kiun; Burger, Corinna; Neel, Benjamin G.; Silva, Alcino J.
2015-01-01
In Noonan Syndrome (NS) 30% to 50% of subjects show cognitive deficits of unknown etiology and with no known treatment. Here, we report that knock-in mice expressing either of two NS-associated Ptpn11 mutations show hippocampal-dependent spatial learning impairments and deficits in hippocampal long-term potentiation (LTP). In addition, viral overexpression of the PTPN11D61G in adult hippocampus results in increased baseline excitatory synaptic function, deficits in LTP and spatial learning, which can all be reversed by a MEK inhibitor. Furthermore, brief treatment with lovastatin reduces Ras-Erk activation in the brain, and normalizes the LTP and learning deficits in adult Ptpn11D61G/+ mice. Our results demonstrate that increased basal Erk activity and corresponding baseline increases in excitatory synaptic function are responsible for the LTP impairments and, consequently, the learning deficits in mouse models of NS. These data also suggest that lovastatin or MEK inhibitors may be useful for treating the cognitive deficits in NS. PMID:25383899
MOLECULAR MECHANISMS OF FEAR LEARNING AND MEMORY
Johansen, Joshua P.; Cain, Christopher K.; Ostroff, Linnaea E.; LeDoux, Joseph E.
2011-01-01
Pavlovian fear conditioning is a useful behavioral paradigm for exploring the molecular mechanisms of learning and memory because a well-defined response to a specific environmental stimulus is produced through associative learning processes. Synaptic plasticity in the lateral nucleus of the amygdala (LA) underlies this form of associative learning. Here we summarize the molecular mechanisms that contribute to this synaptic plasticity in the context of auditory fear conditioning, the form of fear conditioning best understood at the molecular level. We discuss the neurotransmitter systems and signaling cascades that contribute to three phases of auditory fear conditioning: acquisition, consolidation, and reconsolidation. These studies suggest that multiple intracellular signaling pathways, including those triggered by activation of Hebbian processes and neuromodulatory receptors, interact to produce neural plasticity in the LA and behavioral fear conditioning. Together, this research illustrates the power of fear conditioning as a model system for characterizing the mechanisms of learning and memory in mammals, and potentially for understanding fear related disorders, such as PTSD and phobias. PMID:22036561
[Changes of the neuronal membrane excitability as cellular mechanisms of learning and memory].
Gaĭnutdinov, Kh L; Andrianov, V V; Gaĭnutdinova, T Kh
2011-01-01
This review presents literature data and the results of our own studies on the dynamics of the electrical characteristics of neurons, changes in which are involved both in the elaboration of learning and in the retention of long-term memory. The published data and our results lead to the conclusion that long-term retention of behavioral reactions during learning is accompanied not only by changes in the efficiency of synaptic transmission but also by an increase in the excitability of command neurons of the defensive reflex. This means that learning involves long-term, metabolism-dependent changes in the membrane characteristics of particular elements of the neuronal network. Such phenomena can be regarded as cellular (electrophysiological) correlates of long-term plastic modifications of behavior. Analysis of the available results demonstrates the important role of the membrane characteristics of neurons (their excitability) and of the parameters of synaptic transmission not only in the initial stage of learning but also in long-term modifications of behavior (long-term memory).
Novitskaya, Yulia; Sara, Susan J; Logothetis, Nikos K; Eschenko, Oxana
2016-05-01
Experience-induced replay of neuronal ensembles occurs during hippocampal high-frequency oscillations, or ripples. Post-learning increase in ripple rate is predictive of memory recall, while ripple disruption impairs learning. Ripples may thus present a fundamental component of a neurophysiological mechanism of memory consolidation. In addition to system-level local and cross-regional interactions, a consolidation mechanism involves stabilization of memory representations at the synaptic level. Synaptic plasticity within experience-activated neuronal networks is facilitated by noradrenaline release from the axon terminals of the locus coeruleus (LC). Here, to better understand interactions between the system and synaptic mechanisms underlying "off-line" consolidation, we examined the effects of ripple-associated LC activation on hippocampal and cortical activity and on spatial memory. Rats were trained on a radial maze; after each daily learning session neural activity was monitored for 1 h via implanted electrode arrays. Immediately following "on-line" detection of ripple, a brief train of electrical pulses (0.05 mA) was applied to LC. Low-frequency (20 Hz) stimulation had no effect on spatial learning, while higher-frequency (100 Hz) trains transiently blocked generation of ripple-associated cortical spindles and caused a reference memory deficit. Suppression of synchronous ripple/spindle events appears to interfere with hippocampal-cortical communication, thereby reducing the efficiency of "off-line" memory consolidation. © 2016 Novitskaya et al.; Published by Cold Spring Harbor Laboratory Press.
Altered gene regulation and synaptic morphology in Drosophila learning and memory mutants
Guan, Zhuo; Buhl, Lauren K.; Quinn, William G.; Littleton, J. Troy
2011-01-01
Genetic studies in Drosophila have revealed two separable long-term memory pathways defined as anesthesia-resistant memory (ARM) and long-lasting long-term memory (LLTM). ARM is disrupted in radish (rsh) mutants, whereas LLTM requires CREB-dependent protein synthesis. Although the downstream effectors of ARM and LLTM are distinct, pathways leading to these forms of memory may share the cAMP cascade critical for associative learning. Dunce, which encodes a cAMP-specific phosphodiesterase, and rutabaga, which encodes an adenylyl cyclase, both disrupt short-term memory. Amnesiac encodes a pituitary adenylyl cyclase-activating peptide homolog and is required for middle-term memory. Here, we demonstrate that the Radish protein localizes to the cytoplasm and nucleus and is a PKA phosphorylation target in vitro. To characterize how these plasticity pathways may manifest at the synaptic level, we assayed synaptic connectivity and performed an expression analysis to detect altered transcriptional networks in rutabaga, dunce, amnesiac, and radish mutants. All four mutants disrupt specific aspects of synaptic connectivity at larval neuromuscular junctions (NMJs). Genome-wide DNA microarray analysis revealed ∼375 transcripts that are altered in these mutants, suggesting defects in multiple neuronal signaling pathways. In particular, the transcriptional target Lapsyn, which encodes a leucine-rich repeat cell adhesion protein, localizes to synapses and regulates synaptic growth. This analysis provides insights into the Radish-dependent ARM pathway and novel transcriptional targets that may contribute to memory processing in Drosophila. PMID:21422168
Recommendation System Based On Association Rules For Distributed E-Learning Management Systems
NASA Astrophysics Data System (ADS)
Mihai, Gabroveanu
2015-09-01
Traditional Learning Management Systems are installed on a single server that stores the learning materials and user data. To increase performance, a Learning Management System can instead be installed on multiple servers, with learning materials and user data distributed across them, yielding a Distributed Learning Management System. In this paper, we propose a prototype recommendation system based on association rules for a Distributed Learning Management System. Information from the LMS databases is analyzed using distributed data-mining algorithms in order to extract association rules. The extracted rules are then used as inference rules to provide personalized recommendations. The quality of the recommendations is improved because the rules used for inference are more accurate: they aggregate knowledge from all the e-Learning systems included in the Distributed Learning Management System.
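The pipeline described above (mine frequent itemsets, derive association rules, then fire rules whose antecedents match a learner's history) can be sketched with a naive Apriori-style enumeration. The transactions, item names, and thresholds below are invented toy data, and real distributed mining would partition this work across LMS nodes.

```python
from itertools import combinations

# Toy course-access "transactions" (invented), pooled from all LMS nodes.
transactions = [
    {"algebra", "calculus", "statistics"},
    {"algebra", "calculus"},
    {"algebra", "statistics"},
    {"calculus", "statistics"},
    {"algebra", "calculus", "statistics"},
]
MIN_SUPPORT, MIN_CONF = 0.4, 0.7

def support(itemset):
    return sum(itemset <= t for t in transactions) / len(transactions)

# 1) frequent itemsets (naive level-wise enumeration)
items = sorted(set().union(*transactions))
frequent = {}
for k in range(1, len(items) + 1):
    found = False
    for combo in combinations(items, k):
        s = support(frozenset(combo))
        if s >= MIN_SUPPORT:
            frequent[frozenset(combo)] = s
            found = True
    if not found:
        break

# 2) rules A -> B with confidence = support(A ∪ B) / support(A)
rules = []
for itemset, s in frequent.items():
    if len(itemset) < 2:
        continue
    for r in range(1, len(itemset)):
        for antecedent in map(frozenset, combinations(sorted(itemset), r)):
            conf = s / frequent[antecedent]
            if conf >= MIN_CONF:
                rules.append((antecedent, itemset - antecedent, conf))

def recommend(seen):
    """Materials suggested by rules whose antecedent the learner has seen."""
    out = set()
    for a, b, _ in rules:
        if a <= seen:
            out |= b - seen
    return out
```

For example, `recommend({"algebra"})` suggests the materials that co-occur with "algebra" strongly enough to pass the confidence threshold.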
Smith, Caroline C.; Vedder, Lindsey C.; McMahon, Lori L.
2009-01-01
When circulating estrogen levels decline as a natural consequence of menopause and aging in women, there is an increased incidence of deficits in working memory. In many cases, these deficits are rescued by estrogen replacement therapy. These clinical data therefore highlight the importance of defining the biological pathways linking estrogen to the cellular substrates of learning and memory. It has been known for nearly two decades that estrogen enhances dendritic spine density on apical dendrites of CA1 pyramidal cells in hippocampus, a brain region required for learning. Interestingly, at synapses between CA3-CA1 pyramidal cells, estrogen has also been shown to enhance synaptic NMDA receptor current and the magnitude of long term potentiation, a cellular correlate of learning and memory. Given that synapse density, NMDAR function, and long term potentiation at CA3-CA1 synapses in hippocampus are associated with normal learning, it is likely that modulation of these parameters by estrogen facilitates the improvement in learning observed in rats, primates and humans following estrogen replacement. To facilitate the design of clinical strategies to potentially prevent or reverse the age-related decline in learning and memory during menopause, the relationship between the estrogen-induced morphological and functional changes in hippocampus must be defined and the role these changes play in facilitating learning must be elucidated. The aim of this report is to provide a summary of the proposed mechanisms by which this hormone increases synaptic function and in doing so, it briefly addresses potential mechanisms contributing to the estrogen-induced increase in synaptic morphology and plasticity, as well as important future directions. PMID:19596521
Montgomery, Karienn S; Edwards, George; Levites, Yona; Kumar, Ashok; Myers, Catherine E; Gluck, Mark A; Setlow, Barry; Bizon, Jennifer L
2016-04-01
Elevated β-amyloid and impaired synaptic function in hippocampus are among the earliest manifestations of Alzheimer's disease (AD). Most cognitive assessments employed in both humans and animal models, however, are insensitive to this early disease pathology. One critical aspect of hippocampal function is its role in episodic memory, which involves the binding of temporally coincident sensory information (e.g., sights, smells, and sounds) to create a representation of a specific learning epoch. Flexible associations can be formed among these distinct sensory stimuli that enable the "transfer" of new learning across a wide variety of contexts. The current studies employed a mouse analog of an associative "transfer learning" task that has previously been used to identify risk for prodromal AD in humans. The rodent version of the task assesses the transfer of learning about stimulus features relevant to a food reward across a series of compound discrimination problems. The relevant feature that predicts the food reward is unchanged across problems, but an irrelevant feature (i.e., the context) is altered. Experiment 1 demonstrated that C57BL6/J mice with bilateral ibotenic acid lesions of hippocampus were able to discriminate between two stimuli on par with control mice; however, lesioned mice were unable to transfer or apply this learning to new problem configurations. Experiment 2 used the APPswe PS1 mouse model of amyloidosis to show that robust impairments in transfer learning are evident in mice with subtle β-amyloid-induced synaptic deficits in the hippocampus. Finally, Experiment 3 confirmed that the same transfer learning impairments observed in APPswePS1 mice were also evident in the Tg-SwDI mouse, a second model of amyloidosis. 
Together, these data show that the ability to generalize learned associations to new contexts is disrupted even in the presence of subtle hippocampal dysfunction and suggest that, across species, this aspect of hippocampal-dependent learning may be useful for early identification of AD-like pathology. © 2015 Wiley Periodicals, Inc.
Bowling, Heather; Bhattacharya, Aditi; Klann, Eric; Chao, Moses V
2016-03-01
Brain-derived neurotrophic factor (BDNF) plays an important role in neurodevelopment, synaptic plasticity, learning and memory, and in preventing neurodegeneration. Despite decades of investigation into downstream signaling cascades and changes in cellular processes, the mechanisms by which BDNF reshapes circuits in vivo remain unclear. This informational gap arises in part from the fact that the bulk of studies into the molecular actions of BDNF have been performed in dissociated neuronal cultures, while the majority of studies on synaptic plasticity, learning and memory were performed in acute brain slices or in vivo. A recent study by Bowling-Bhattacharya et al. measured the proteomic changes in acute adult hippocampal slices following BDNF treatment and reported changes in proteins of neuronal and non-neuronal origin that may in concert modulate synaptic release and secretion in the slice. In this paper, we place these findings into the context of the existing literature and discuss how they impact our understanding of how BDNF can reshape the brain.
Feng, Jian; Zhou, Yu; Campbell, Susan L.; Le, Thuc; Li, En; Sweatt, J. David; Silva, Alcino J.; Fan, Guoping
2011-01-01
Dnmt1 and Dnmt3a, two major DNA methyltransferases, are expressed in postmitotic neurons, but their function in the central nervous system (CNS) is unclear. We generated conditional mutant mice that lack either Dnmt1, or Dnmt3a, or both exclusively in forebrain excitatory neurons and found only double knockout (DKO) mice exhibited abnormal hippocampal CA1 long-term plasticity and deficits of learning and memory. While no neuronal loss was found, the size of hippocampal neurons in DKO was smaller; furthermore, DKO neurons showed a deregulation of gene expression including class I MHC and Stat1 that are known to play a role in synaptic plasticity. In addition, we observed a significant decrease in DNA methylation in DKO neurons. We conclude that Dnmt1 and Dnmt3a are required for synaptic plasticity, learning and memory through their overlapping roles in maintaining DNA methylation and modulating neuronal gene expression in adult CNS neurons. PMID:20228804
Mechanisms of dendritic mRNA transport and its role in synaptic tagging
Doyle, Michael; Kiebler, Michael A
2011-01-01
The localization of RNAs critically contributes to many important cellular processes in an organism, such as the establishment of polarity, asymmetric division and migration during development. Moreover, in the central nervous system, the local translation of mRNAs is thought to induce plastic changes that occur at synapses triggered by learning and memory. Here, we will critically review the physiological functions of well-established dendritically localized mRNAs and their associated factors, which together form ribonucleoprotein particles (RNPs). Second, we will discuss the life of a localized transcript from transcription in the nucleus to translation at the synapse and introduce the concept of the ‘RNA signature' that is characteristic for each transcript. Finally, we present the ‘sushi belt model' of how localized RNAs within neuronal RNPs may dynamically patrol multiple synapses rather than being anchored at a single synapse. This new model integrates our current understanding of synaptic function ranging from synaptic tagging and capture to functional and structural reorganization of the synapse upon learning and memory. PMID:21878995
Morimura, Naoko; Yasuda, Hiroki; Yamaguchi, Kazuhiko; Katayama, Kei-Ichi; Hatayama, Minoru; Tomioka, Naoko H; Odagawa, Maya; Kamiya, Akiko; Iwayama, Yoshimi; Maekawa, Motoko; Nakamura, Kazuhiko; Matsuzaki, Hideo; Tsujii, Masatsugu; Yamada, Kazuyuki; Yoshikawa, Takeo; Aruga, Jun
2017-06-12
Lrfn2/SALM1 is a PSD-95-interacting synapse adhesion molecule, and human LRFN2 is associated with learning disabilities. However, its role in higher brain function and the underlying mechanisms remain unknown. Here, we show that Lrfn2 knockout mice exhibit autism-like behavioural abnormalities, including social withdrawal, decreased vocal communication, increased stereotyped activities and prepulse inhibition deficits, together with enhanced learning and memory. In the hippocampus, the levels of synaptic PSD-95 and GluA1 are decreased. The synapses are structurally and functionally immature, with spindle-shaped spines, smaller postsynaptic densities, a reduced AMPA/NMDA ratio, and enhanced LTP. In vitro experiments reveal that synaptic surface expression of AMPAR depends on the direct interaction between Lrfn2 and PSD-95. Furthermore, we detect functionally defective LRFN2 missense mutations in autism and schizophrenia patients. Together, these findings indicate that Lrfn2/LRFN2 serve as core components of excitatory synapse maturation and maintenance, and that their dysfunction causes immature/silent synapses with a pathophysiological state.
Memristive Ion Channel-Doped Biomembranes as Synaptic Mimics.
Najem, Joseph S; Taylor, Graham J; Weiss, Ryan J; Hasan, Md Sakib; Rose, Garrett; Schuman, Catherine D; Belianinov, Alex; Collier, C Patrick; Sarles, Stephen A
2018-05-22
Solid-state neuromorphic systems based on transistors or memristors have yet to achieve the interconnectivity, performance, and energy efficiency of the brain due to excessive noise, undesirable material properties, and nonbiological switching mechanisms. Here we demonstrate that an alamethicin-doped, synthetic biomembrane exhibits memristive behavior, emulates key synaptic functions including paired-pulse facilitation and depression, and enables learning and computing. Unlike state-of-the-art devices, our two-terminal, biomolecular memristor features similar structure (biomembrane), switching mechanism (ion channels), and ionic transport modality as biological synapses while operating at considerably lower power. The reversible and volatile voltage-driven insertion of alamethicin peptides into an insulating lipid bilayer creates conductive pathways that exhibit pinched current-voltage hysteresis at potentials above their insertion threshold. Moreover, the synapse-like dynamic properties of the biomolecular memristor allow for simplified learning circuit implementations. Low-power memristive devices based on stimuli-responsive biomolecules represent a major advance toward implementation of full synaptic functionality in neuromorphic hardware.
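The volatile, threshold-gated switching described above can be caricatured in a few lines. The sketch below is an illustration only, with invented constants, not the paper's alamethicin device physics: conductance grows while the applied voltage exceeds an insertion threshold and relaxes toward zero otherwise, so a second pulse arriving before relaxation completes sees a larger conductance (paired-pulse facilitation).

```python
# Toy model of a volatile, threshold-gated memristive conductance
# (illustrative; all parameters are invented, not the device's).
def simulate(voltages, dt=1e-3, v_th=0.1, tau_rise=5e-3, tau_decay=20e-3, g_max=1.0):
    """Return the conductance trace driven by a list of voltage samples."""
    g, trace = 0.0, []
    for v in voltages:
        if abs(v) > v_th:
            g += (dt / tau_rise) * (g_max - g)   # channels insert: G grows toward g_max
        else:
            g -= (dt / tau_decay) * g            # volatile: G relaxes back toward zero
        trace.append(g)
    return trace

# Two identical 10 ms pulses separated by a 10 ms gap (dt = 1 ms per sample).
pulse = [0.2] * 10 + [0.0] * 10
g = simulate(pulse + pulse)
peak1, peak2 = max(g[:20]), max(g[20:])          # facilitation: peak2 > peak1
```

Because the decay time constant exceeds the inter-pulse gap, the second pulse starts from a residual conductance and reaches a higher peak, which is the facilitation behavior the abstract refers to.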
Duran, Jordi; Saez, Isabel; Gruart, Agnès; Guinovart, Joan J; Delgado-García, José M
2013-01-01
Glycogen is the only carbohydrate reserve of the brain, but its overall contribution to brain functions remains unclear. Although it has traditionally been considered as an emergency energetic reservoir, increasing evidence points to a role of glycogen in the normal activity of the brain. To address this long-standing question, we generated a brain-specific Glycogen Synthase knockout (GYS1Nestin-KO) mouse and studied the functional consequences of the lack of glycogen in the brain under alert behaving conditions. These animals showed a significant deficiency in the acquisition of an associative learning task and in the concomitant activity-dependent changes in hippocampal synaptic strength. Long-term potentiation (LTP) evoked in the hippocampal CA3-CA1 synapse was also decreased in behaving GYS1Nestin-KO mice. These results unequivocally show a key role of brain glycogen in the proper acquisition of new motor and cognitive abilities and in the underlying changes in synaptic strength. PMID:23281428
Tunicamycin impairs olfactory learning and synaptic plasticity in the olfactory bulb.
Tong, Jia; Okutani, Fumino; Murata, Yoshihiro; Taniguchi, Mutsuo; Namba, Toshiharu; Wang, Yu-Jie; Kaba, Hideto
2017-03-06
Tunicamycin (TM) induces endoplasmic reticulum (ER) stress and inhibits N-glycosylation in cells. ER stress is associated with neuronal death in neurodegenerative disorders, such as Parkinson's disease and Alzheimer's disease, and most patients complain of the impairment of olfactory recognition. Here we examined the effects of TM on aversive olfactory learning and the underlying synaptic plasticity in the main olfactory bulb (MOB). Behavioral experiments demonstrated that the intrabulbar infusion of TM disabled aversive olfactory learning without affecting short-term memory. Histological analyses revealed that TM infusion upregulated C/EBP homologous protein (CHOP), a marker of ER stress, in the mitral and granule cell layers of MOB. Electrophysiological data indicated that TM inhibited tetanus-induced long-term potentiation (LTP) at the dendrodendritic excitatory synapse from mitral to granule cells. A low dose of TM (250 nM) abolished the late phase of LTP, and a high dose (1 μM) inhibited the early and late phases of LTP. Further, high-dose, but not low-dose, TM reduced the paired-pulse facilitation ratio, suggesting that the inhibitory effects of TM on LTP are partially mediated through the presynaptic machinery. Thus, our results support the hypothesis that TM-induced ER stress impairs olfactory learning by inhibiting synaptic plasticity via presynaptic and postsynaptic mechanisms in MOB. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
Structural Basis of Arc Binding to Synaptic Proteins: Implications for Cognitive Disease
Zhang, Wenchi; Wu, Jing; Ward, Matthew D.; ...
2015-04-09
Arc is a cellular immediate-early gene (IEG) that functions at excitatory synapses and is required for learning and memory. Here we report crystal structures of Arc subdomains that form a bi-lobar architecture remarkably similar to the capsid domain of human immunodeficiency virus (HIV) gag protein. Analysis indicates Arc originated from the Ty3/Gypsy retrotransposon family and was “domesticated” in higher vertebrates for synaptic functions. The Arc N-terminal lobe evolved a unique hydrophobic pocket that mediates intermolecular binding with synaptic proteins as resolved in complexes with TARPγ2 (Stargazin) and CaMKII peptides and is essential for Arc’s synaptic function. A consensus sequence for Arc binding identifies several additional partners that include genes implicated in schizophrenia. Arc N-lobe binding is inhibited by small chemicals, suggesting Arc’s synaptic action may be druggable. Finally, these studies reveal the remarkable evolutionary origin of Arc and provide a structural basis for understanding Arc’s contribution to neural plasticity and disease.
Kassabov, Stefan R.; Choi, Yun-Beom; Karl, Kevin A.; Vishwasrao, Harshad D.; Bailey, Craig H.; Kandel, Eric R.
2014-01-01
Neurotrophins control the development and adult plasticity of the vertebrate nervous system. Failure to identify invertebrate neurotrophin orthologs, however, has precluded studies in invertebrate models, limiting understanding of fundamental aspects of neurotrophin biology and function. We identified a neurotrophin (ApNT) and Trk receptor (ApTrk) in the mollusk Aplysia and found that they play a central role in learning-related synaptic plasticity. ApNT increases the magnitude and lowers the threshold for induction of long-term facilitation and initiates the growth of new synaptic varicosities at the monosynaptic connection between sensory and motor neurons of the gill-withdrawal reflex. Unlike vertebrate neurotrophins, ApNT has multiple coding exons and exerts distinct synaptic effects through differentially processed and secreted splice isoforms. Our findings demonstrate the existence of bona fide neurotrophin signaling in invertebrates and reveal a novel post-transcriptional mechanism regulating neurotrophin processing and the release of pro- and mature neurotrophins, which differentially modulate synaptic plasticity. PMID:23562154
Structural Basis of Arc Binding to Synaptic Proteins: Implications for Cognitive Disease
Zhang, Wenchi; Wu, Jing; Ward, Matthew D.; Yang, Sunggu; Chuang, Yang-An; Xiao, Meifang; Li, Ruojing; Leahy, Daniel J.; Worley, Paul F.
2015-01-01
Arc is a cellular immediate early gene (IEG) that functions at excitatory synapses and is required for learning and memory. We report crystal structures of Arc subdomains that form a bi-lobar architecture remarkably similar to the capsid domain of human immunodeficiency virus (HIV) gag protein. Analysis indicates Arc originated from the Ty3/Gypsy retrotransposon family and was “domesticated” in higher vertebrates for synaptic functions. The Arc N-terminal lobe evolved a unique hydrophobic pocket that mediates intermolecular binding with synaptic proteins as resolved in complexes with TARPγ2 (Stargazin) and CaMKII peptides, and is essential for Arc’s synaptic function. A consensus sequence for Arc binding identifies several additional partners that include genes implicated in schizophrenia. Arc N-lobe binding is inhibited by small chemicals, suggesting Arc’s synaptic action may be druggable. These studies reveal the remarkable evolutionary origin of Arc and provide a structural basis for understanding Arc’s contribution to neural plasticity and disease. PMID:25864631
Modulation of Neuronal Signal Transduction and Memory Formation by Synaptic Zinc
Sindreu, Carlos; Storm, Daniel R.
2011-01-01
The physiological role of synaptic zinc has remained largely enigmatic since its initial detection in hippocampal mossy fibers over 50 years ago. The past few years have witnessed a number of studies highlighting the ability of zinc ions to regulate ion channels and intracellular signaling pathways implicated in neuroplasticity, and others that shed some light on the elusive role of synaptic zinc in learning and memory. Recent behavioral studies using knock-out mice for the synapse-specific zinc transporter ZnT-3 indicate that vesicular zinc is required for the formation of memories dependent on the hippocampus and the amygdala, two brain centers that are prominently innervated by zinc-rich fibers. A common theme emerging from this research is the activity-dependent regulation of the Erk1/2 mitogen-activated-protein kinase pathway by synaptic zinc through diverse mechanisms in neurons. Here we discuss current knowledge on how synaptic zinc may play a role in cognition through its impact on neuronal signaling. PMID:22084630
Navakkode, Sheeja; Chew, Katherine C M; Tay, Sabrina Jia Ning; Lin, Qingshu; Behnisch, Thomas; Soong, Tuck Wah
2017-11-14
Long-term potentiation (LTP) is a persistent increase in the strength of synapses. However, neural networks would become saturated if there were only synaptic strengthening. Synaptic weakening can be facilitated by active processes such as long-term depression (LTD). Molecular mechanisms that facilitate the weakening of synapses, and thereby stabilize them, are also important in learning and memory. Here we show that blockade of dopaminergic D4 receptors (D4R) promoted the formation of late-LTP and transformed early-LTP into late-LTP. This effect was dependent on protein synthesis, activation of NMDA receptors and CaMKII. We also show that GABAA-receptor-mediated mechanisms are involved in the enhancement of late-LTP. Short-term plasticity and baseline synaptic transmission were unaffected by D4R inhibition. On the other hand, antagonizing D4R prevented both early and late forms of LTD, showing that activation of D4Rs triggers a dual function. Synaptic tagging experiments on LTD showed that D4Rs act as plasticity-related proteins rather than setting synaptic tags. D4R activation by PD 168077 induced a slow-onset depression that was protein synthesis-, NMDAR- and CaMKII-dependent. The D4 receptors thus exert a bidirectional modulation of CA1 pyramidal neurons by restricting synaptic strengthening and facilitating synaptic weakening.
Developmental changes in automatic rule-learning mechanisms across early childhood.
Mueller, Jutta L; Friederici, Angela D; Männel, Claudia
2018-06-27
Infants' ability to learn complex linguistic regularities from early on has been revealed by electrophysiological studies indicating that 3-month-olds, but not adults, can automatically detect non-adjacent dependencies between syllables. While different ERP responses in adults and infants suggest that both linguistic rule learning and its link to basic auditory processing undergo developmental changes, systematic investigations of the developmental trajectories are scarce. In the present study, we assessed 2- and 4-year-olds' ERP indicators of pitch discrimination and linguistic rule learning in a syllable-based oddball design. To test for the relation between auditory discrimination and rule learning, ERP responses to pitch changes were used as predictors for potential linguistic rule-learning effects. Results revealed that 2-year-olds, but not 4-year-olds, showed ERP markers of rule learning. Although 2-year-olds' rule learning did not depend on differences in pitch perception, 4-year-old children demonstrated such a dependency: those children who showed more pronounced responses to pitch changes still showed an effect of rule learning. These results narrow down the developmental decline of the ability for automatic linguistic rule learning to the age between 2 and 4 years and, moreover, point towards a strong modification of this change by auditory processes. At an age when the ability for automatic linguistic rule learning phases out, rule learning can still be observed in children with enhanced auditory responses. The observed interrelations are plausible causes for age-of-acquisition effects and inter-individual differences in language learning. © 2018 John Wiley & Sons Ltd.
Habituation based synaptic plasticity and organismic learning in a quantum perovskite.
Zuo, Fan; Panda, Priyadarshini; Kotiuga, Michele; Li, Jiarui; Kang, Mingu; Mazzoli, Claudio; Zhou, Hua; Barbour, Andi; Wilkins, Stuart; Narayanan, Badri; Cherukara, Mathew; Zhang, Zhen; Sankaranarayanan, Subramanian K R S; Comin, Riccardo; Rabe, Karin M; Roy, Kaushik; Ramanathan, Shriram
2017-08-14
A central characteristic of living beings is the ability to learn from and respond to their environment, leading to habit formation and decision making. This behavior, known as habituation, is universal among all forms of life with a central nervous system, and is also observed in single-cell organisms that do not possess a brain. Here, we report the discovery of habituation-based plasticity utilizing a perovskite quantum system by dynamical modulation of electron localization. Microscopic mechanisms and pathways that enable this organismic collective charge-lattice interaction are elucidated by first-principles theory, synchrotron investigations, ab initio molecular dynamics simulations, and in situ environmental breathing studies. We implement a learning algorithm inspired by the conductance relaxation behavior of perovskites that naturally incorporates habituation, and demonstrate learning to forget: a key feature of animal and human brains. Incorporating this elementary skill in learning boosts the capability of neural computing in a sequential, dynamic environment. Habituation is a learning mechanism that enables control over forgetting and learning. Zuo, Panda et al. demonstrate adaptive synaptic plasticity in SmNiO3 perovskites to address catastrophic forgetting in a dynamic learning environment via hydrogen-induced electron localization.
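The habituation dynamics the abstract invokes can be sketched generically: repeated presentation of the same stimulus weakens the response, while withholding the stimulus lets it recover toward baseline. This is a minimal illustration with invented constants, not the perovskite conductance model.

```python
# Toy habituation rule (illustrative; the decrement and recovery constants
# are invented, not fitted to any device or organism).
def habituate(stimuli, w0=1.0, dec=0.3, rec=0.1):
    """Return response amplitudes; w plays the role of a synaptic weight."""
    w, out = w0, []
    for s in stimuli:
        if s:                       # stimulus present: respond, then depress
            out.append(w)
            w *= (1.0 - dec)
        else:                       # no stimulus: relax back toward baseline
            out.append(0.0)
            w += rec * (w0 - w)
    return out

# Five repeated stimuli (responses shrink), a rest period, then one probe
# stimulus (response partially recovers).
r = habituate([1] * 5 + [0] * 10 + [1])
```

The shrinking responses during the stimulus train and the partial recovery after rest are the two signatures of habituation that such devices are built to emulate.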
Age-Dependent Deficits in Fear Learning in Heterozygous BDNF Knock-Out Mice
ERIC Educational Resources Information Center
Endres, Thomas; Lessmann, Volkmar
2012-01-01
Beyond its trophic function, the neurotrophin BDNF (brain-derived neurotrophic factor) is well known to crucially mediate synaptic plasticity and memory formation. Whereas recent studies suggested that acute BDNF/TrkB signaling regulates amygdala-dependent fear learning, no impairments of cued fear learning were reported in heterozygous BDNF…
Inflammation Subverts Hippocampal Synaptic Plasticity in Experimental Multiple Sclerosis
Mandolesi, Georgia; Piccinin, Sonia; Berretta, Nicola; Pignatelli, Marco; Feligioni, Marco; Musella, Alessandra; Gentile, Antonietta; Mori, Francesco; Bernardi, Giorgio; Nicoletti, Ferdinando; Mercuri, Nicola B.; Centonze, Diego
2013-01-01
Abnormal use-dependent synaptic plasticity is universally accepted as the main physiological correlate of memory deficits in neurodegenerative disorders. It is unclear whether synaptic plasticity deficits take place during neuroinflammatory diseases, such as multiple sclerosis (MS) and its mouse model, experimental autoimmune encephalomyelitis (EAE). In EAE mice, we found significant alterations of synaptic plasticity rules in the hippocampus. When compared to control mice, in fact, hippocampal long-term potentiation (LTP) induction was favored over long-term depression (LTD) in EAE, as shown by a significant rightward shift in the frequency–synaptic response function. Notably, LTP induction was also enhanced in hippocampal slices from control mice following interleukin-1β (IL-1β) perfusion, and both EAE and IL-1β inhibited GABAergic spontaneous inhibitory postsynaptic currents (sIPSC) without affecting glutamatergic transmission and AMPA/NMDA ratio. EAE was also associated with selective loss of GABAergic interneurons and with reduced gamma-frequency oscillations in the CA1 region of the hippocampus. Finally, we provided evidence that microglial activation in the EAE hippocampus was associated with IL-1β expression, and hippocampal slices from control mice incubated with activated microglia displayed alterations of GABAergic transmission similar to those seen in EAE brains, through a mechanism dependent on enhanced IL-1β signaling. These data may yield novel insights into the basis of cognitive deficits in EAE and possibly of MS. PMID:23355887
A synaptic organizing principle for cortical neuronal groups
Perin, Rodrigo; Berger, Thomas K.; Markram, Henry
2011-01-01
Neuronal circuitry is often considered a clean slate that can be dynamically and arbitrarily molded by experience. However, when we investigated synaptic connectivity in groups of pyramidal neurons in the neocortex, we found that both connectivity and synaptic weights were surprisingly predictable. Synaptic weights follow very closely the number of connections in a group of neurons, saturating after only 20% of possible connections are formed between neurons in a group. When we examined the network topology of connectivity between neurons, we found that the neurons cluster into small world networks that are not scale-free, with less than 2 degrees of separation. We found a simple clustering rule where connectivity is directly proportional to the number of common neighbors, which accounts for these small world networks and accurately predicts the connection probability between any two neurons. This pyramidal neuron network clusters into multiple groups of a few dozen neurons each. The neurons composing each group are surprisingly distributed, typically more than 100 μm apart, allowing for multiple groups to be interlaced in the same space. In summary, we discovered a synaptic organizing principle that groups neurons in a manner that is common across animals and hence, independent of individual experiences. We speculate that these elementary neuronal groups are prescribed Lego-like building blocks of perception and that acquired memory relies more on combining these elementary assemblies into higher-order constructs. PMID:21383177
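The clustering rule summarized above (connection probability proportional to the number of common neighbors) can be sketched as a simple growth process. The graph size, baseline probability, and proportionality constant below are invented for illustration, not the paper's measured values.

```python
# Sketch of a common-neighbor clustering rule: P(connect i, j) rises
# linearly with the number of partners i and j already share.
import random

def common_neighbors(adj, i, j):
    return len(adj[i] & adj[j])

def grow(n=60, attempts=2000, base_p=0.01, k=0.05, seed=1):
    """Grow an undirected graph with P(i~j) = base_p + k * common neighbors."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for _ in range(attempts):
        i, j = rng.sample(range(n), 2)
        if j in adj[i]:
            continue                               # already connected
        if rng.random() < min(1.0, base_p + k * common_neighbors(adj, i, j)):
            adj[i].add(j)
            adj[j].add(i)
    return adj

adj = grow()
n_edges = sum(len(nbrs) for nbrs in adj.values()) // 2
```

Because each new edge raises the common-neighbor count of nearby pairs, connections accumulate in clusters rather than uniformly, which is the qualitative behavior the abstract describes for interlaced neuronal groups.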
Chambers, R. Andrew; Conroy, Susan K.
2010-01-01
Apoptotic and neurogenic events in the adult hippocampus are hypothesized to play a role in cognitive responses to new contexts. Corticosteroid-mediated stress responses and other neural processes invoked by substantially novel contextual changes may regulate these processes. Using elementary three-layer neural networks that learn by incremental synaptic plasticity, we explored whether the cognitive effects of differential regimens of neuronal turnover depend on the environmental context in terms of the degree of novelty in the new information to be learned. In “adult” networks that had achieved mature synaptic connectivity upon prior learning of the Roman alphabet, imposition of apoptosis/neurogenesis before learning increasingly novel information (alternate Roman < Russian < Hebrew) reveals optimality of informatic cost benefits when rates of turnover are geared in proportion to the degree of novelty. These findings predict that flexible control of rates of apoptosis–neurogenesis within plastic, mature neural systems optimizes learning attributes under varying degrees of contextual change, and that failures in this regulation may define a role for adult hippocampal neurogenesis in novelty- and stress-responsive psychiatric disorders. PMID:17214558
Developmental hypothyroidism impairs hippocampal learning and synaptic transmission in vivo.
A number of environmental chemicals have been reported to alter thyroid hormone (TH) function. It is well established that severe hypothyroidism during critical periods of brain development leads to alterations in hippocampal structure and learning deficits, yet evaluation of ...
Reward-dependent learning in neuronal networks for planning and decision making.
Dehaene, S; Changeux, J P
2000-01-01
Neuronal network models have been proposed for the organization of evaluation and decision processes in prefrontal circuitry and their putative neuronal and molecular bases. The models all include an implementation and simulation of an elementary reward mechanism. Their central hypothesis is that tentative rules of behavior, which are coded by clusters of active neurons in prefrontal cortex, are selected or rejected based on an evaluation by this reward signal, which may be conveyed, for instance, by the mesencephalic dopaminergic neurons with which the prefrontal cortex is densely interconnected. At the molecular level, the reward signal is postulated to be a neurotransmitter such as dopamine, which exerts a global modulatory action on prefrontal synaptic efficacies, either via volume transmission or via targeted synaptic triads. Negative reinforcement has the effect of destabilizing the currently active rule-coding clusters; subsequently, spontaneous activity varies again from one cluster to another, giving the organism the chance to discover and learn a new rule. Thus, reward signals function as effective selection signals that either maintain or suppress currently active prefrontal representations as a function of their current adequacy. Simulations of this variation-selection have successfully accounted for the main features of several major tasks that depend on prefrontal cortex integrity, such as the delayed-response test, the Wisconsin card sorting test, the Tower of London test and the Stroop test. For the more complex tasks, we have found it necessary to supplement the external reward input with a second mechanism that supplies an internal reward; it consists of an auto-evaluation loop which short-circuits the reward input from the exterior. This allows for an internal evaluation of covert motor intentions without actualizing them as behaviors, by simply testing them covertly by comparison with memorized former experiences. 
This element of architecture gives access to enhanced rates of learning via an elementary process of internal or covert mental simulation. We have recently applied these ideas to a new model, developed with M. Kerszberg, which hypothesizes that prefrontal cortex and its reward-related connections contribute crucially to conscious effortful tasks. This model distinguishes two main computational spaces within the human brain: a unique global workspace composed of distributed and heavily interconnected neurons with long-range axons, and a set of specialized and modular perceptual, motor, memory, evaluative and attentional processors. We postulate that workspace neurons are mobilized in effortful tasks for which the specialized processors do not suffice; they selectively mobilize or suppress, through descending connections, the contribution of specific processor neurons. In the course of task performance, workspace neurons become spontaneously co-activated, forming discrete though variable spatio-temporal patterns subject to modulation by vigilance signals and to selection by reward signals. A computer simulation of the Stroop task shows workspace activation to increase during acquisition of a novel task, effortful execution, and after errors. This model makes predictions concerning the spatio-temporal activation patterns during brain imaging of cognitive tasks, particularly concerning the conditions of activation of dorsolateral prefrontal cortex and anterior cingulate, their relation to reward mechanisms, and their specific reaction during error processing.
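The variation-selection mechanism described above can be reduced to a few lines: rule-coding clusters compete, negative reward destabilizes the active cluster so activity hops to another candidate, and positive reward maintains it, as in the Wisconsin card sorting test. The sketch below uses invented parameters and is not the authors' simulation code.

```python
# Minimal variation-selection sketch: reward signals either maintain or
# suppress the currently active rule-coding cluster (illustration only).
import random

def learn_rule(n_rules=4, correct=2, trials=200, seed=0):
    rng = random.Random(seed)
    active = rng.randrange(n_rules)     # spontaneous initial rule
    history = []
    for _ in range(trials):
        history.append(active)
        if active != correct:               # negative reinforcement destabilizes
            active = rng.randrange(n_rules) # spontaneous variation tries another rule
        # positive reward stabilizes: the active rule is kept unchanged
    return history

h = learn_rule()
```

Once the rewarded rule is sampled it is never destabilized again, so the trial history shows a search phase followed by stable maintenance, the basic signature the model uses to account for tasks such as the Wisconsin card sorting test.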
Weiler, Nicholas C; Collman, Forrest; Vogelstein, Joshua T; Burns, Randal; Smith, Stephen J
2014-01-01
A major question in neuroscience is how diverse subsets of synaptic connections in neural circuits are affected by experience-dependent plasticity to form the basis for behavioral learning and memory. Differences in protein expression patterns at individual synapses could constitute a key to understanding both synaptic diversity and the effects of plasticity at different synapse populations. Our approach to this question leverages the immunohistochemical multiplexing capability of array tomography (ATomo) and the columnar organization of mouse barrel cortex to create a dataset comprising high-resolution volumetric images of spared and deprived cortical whisker barrels, each stained for over a dozen synaptic molecules. This dataset has been made available through the Open Connectome Project for interactive online viewing, and may also be downloaded for offline analysis using web, Matlab, and other interfaces. PMID:25977797
Cicvaric, Ana; Yang, Jiaye; Krieger, Sigurd; Khan, Deeba; Kim, Eun-Jung; Dominguez-Rodriguez, Manuel; Cabatic, Maureen; Molz, Barbara; Acevedo Aguilar, Juan Pablo; Milicevic, Radoslav; Smani, Tarik; Breuss, Johannes M.; Kerjaschki, Dontscho; Pollak, Daniela D.; Uhrin, Pavel; Monje, Francisco J.
2016-01-01
Introduction: Podoplanin is a cell-surface glycoprotein constitutively expressed in the brain and implicated in human brain tumorigenesis. The intrinsic function of podoplanin in brain neurons remains, however, uncharacterized. Materials and methods: Using an established podoplanin-knockout mouse model and electrophysiological, biochemical, and behavioral approaches, we investigated the neuronal role of podoplanin in the brain. Results: Ex vivo electrophysiology showed that podoplanin deletion impairs dentate gyrus synaptic strengthening. In vivo, podoplanin deletion selectively impaired hippocampus-dependent spatial learning and memory without affecting amygdala-dependent cued fear conditioning. In vitro, neuronal overexpression of podoplanin promoted synaptic activity and neuritic outgrowth, whereas podoplanin-deficient neurons exhibited stunted outgrowth and lower levels of p-Ezrin, TrkA, and CREB in response to nerve growth factor (NGF). Surface plasmon resonance data further indicated a physical interaction between podoplanin and NGF. Discussion: This work proposes podoplanin as a novel component of the neuronal machinery underlying neuritogenesis, synaptic plasticity, and hippocampus-dependent memory functions. The existence of a relevant cross-talk between podoplanin and the NGF/TrkA signaling pathway is also proposed here for the first time, providing a novel molecular complex as a target for future multidisciplinary studies of brain function in physiology and pathology. Key messages: Podoplanin, a protein linked to the promotion of human brain tumors, is required in vivo for proper hippocampus-dependent learning and memory functions. Deletion of podoplanin selectively impairs activity-dependent synaptic strengthening at the neurogenic dentate gyrus and hampers neuritogenesis and phospho-Ezrin, TrkA, and CREB protein levels upon NGF stimulation. Surface plasmon resonance data indicate a physical interaction between podoplanin and NGF. On these grounds, a relevant cross-talk between podoplanin and NGF, as well as a role for podoplanin in plasticity-related brain neuronal functions, is here proposed. PMID:27558977
Feduccia, Allison A.; Chatterjee, Susmita; Bartlett, Selena E.
2012-01-01
Addictive drugs can activate systems involved in normal reward-related learning, creating long-lasting memories of the drug's reinforcing effects and the environmental cues surrounding the experience. These memories significantly contribute to the maintenance of compulsive drug use as well as cue-induced relapse which can occur even after long periods of abstinence. Synaptic plasticity is thought to be a prominent molecular mechanism underlying drug-induced learning and memories. Ethanol and nicotine are both widely abused drugs that share a common molecular target in the brain, the neuronal nicotinic acetylcholine receptors (nAChRs). The nAChRs are ligand-gated ion channels that are vastly distributed throughout the brain and play a key role in synaptic neurotransmission. In this review, we will delineate the role of nAChRs in the development of ethanol and nicotine addiction. We will characterize both ethanol and nicotine's effects on nAChR-mediated synaptic transmission and plasticity in several key brain areas that are important for addiction. Finally, we will discuss some of the behavioral outcomes of drug-induced synaptic plasticity in animal models. An understanding of the molecular and cellular changes that occur following administration of ethanol and nicotine will lead to better therapeutic strategies. PMID:22876217
[Synapse maturation and autism: learning from neuroligin model mice].
Tabuchi, Katsuhiko; Chang, WenHsin; Asgar, Nur Farehan Mohamed; Pramanik, Gopal
2014-02-01
Autism is a neurodevelopmental disorder characterized by impairments in social interaction and communication, and by restricted and repetitive behavior. Synaptic defects have been implicated in autism; nevertheless, the cause is still largely unknown. A mutation that substitutes cysteine for arginine at residue 451 of Neuroligin-3 (R451C) was the first monogenic mutation identified in idiopathic autism patients. To study the relationship between this mutation and autism, we generated knock-in mice that recapitulate this mutation. The knock-in mice were born and grew up normally without any major physical phenotypes, but showed a deficit in social interaction. We studied synaptic function in layer II/III pyramidal neurons of the somatosensory cortex and found that inhibitory synaptic transmission was enhanced in the knock-in mice. Administration of a GABA receptor blocker rescued the social interaction deficit, suggesting that enhanced inhibition underlies the autistic-like behavior in these mice. Using the Morris water maze, we also found that spatial learning and memory were significantly enhanced in the knock-in mice. Electrophysiology in the CA1 region of the hippocampus revealed that LTP, the NMDA/AMPA ratio, and NR2B function were all enhanced, indicating that synaptic maturation was impaired in the knock-in mice. This may underlie both the deficit in social behavior and the extraordinary memory ability occasionally seen in autistic patients.
Circuit mechanisms of hippocampal reactivation during sleep.
Malerba, Paola; Bazhenov, Maxim
2018-05-01
The hippocampus is important for memory and learning: it is a brain site where initial memories are formed and where sharp wave-ripples (SWRs) occur, the events responsible for mapping recent memories to long-term storage during sleep-related memory replay. While this conceptual schema is well established, the specific intrinsic and network-level mechanisms driving spatio-temporal patterns of hippocampal activity during sleep, and specifically controlling offline memory reactivation, remain unknown. In this study, we discuss a model of the hippocampal CA1-CA3 network that generates spontaneous, characteristic SWR activity. Our study predicts the properties of CA3 input necessary for successful CA1 ripple generation, and the role of synaptic interactions and intrinsic excitability in spike-sequence replay during SWRs. Specifically, we found that excitatory synaptic connections promote reactivation in both CA3 and CA1, but the different dynamics of sharp waves in CA3 and ripples in CA1 result in a differential role for synaptic inhibition in modulating replay: promoting spike-sequence specificity in CA3 but not in CA1. Finally, we describe how awake learning of spatial trajectories leads to synaptic changes sufficient to drive hippocampal cells' reactivation during sleep, as required for sleep-related memory consolidation. Copyright © 2018 Elsevier Inc. All rights reserved.
3D reconstruction of synapses with deep learning based on EM Images
NASA Astrophysics Data System (ADS)
Xiao, Chi; Rao, Qiang; Zhang, Dandan; Chen, Xi; Han, Hua; Xie, Qiwei
2017-03-01
Recently, due to the rapid development of the electron microscope (EM) and its high resolution, image stacks delivered by EM can be used to analyze a variety of components that are critical to understanding brain function. Since synaptic study is essential in neurobiology and synapses can be analyzed in EM stacks, automated routines for the reconstruction of synapses from EM images can become a very useful tool for analyzing large volumes of brain tissue and for understanding the mechanisms of the brain. In this article, we propose a novel automated method, based on deep learning, to realize 3D reconstruction of synapses imaged by Automated Tape-collecting Ultramicrotome Scanning Electron Microscopy (ATUM-SEM). Unlike other reconstruction algorithms, which employ a classifier to segment synaptic clefts directly, we combine a deep learning method with a segmentation algorithm to obtain synaptic clefts and improve the accuracy of the reconstruction. The proposed method contains five parts: (1) using a modified Moving Least Squares (MLS) deformation algorithm and Scale-Invariant Feature Transform (SIFT) features to register adjacent sections, (2) adopting the Faster Region-based Convolutional Neural Network (Faster R-CNN) algorithm to detect synapses, (3) utilizing a screening method that takes context cues of synapses into consideration to reduce the false-positive rate, (4) combining a practical morphology algorithm with a suitable fitting function to segment synaptic clefts and optimize their shape, and (5) applying a FIJI plugin to show the final 3D visualization of the synapses. Experimental results on ATUM-SEM images demonstrate the effectiveness of the proposed method.
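Step (3), the context-cue screening, lends itself to a simple sketch. The abstract does not specify the screening criterion, so the following is a hypothetical illustration only: since a synaptic cleft spans several serial sections, a detection on section z is kept only if it overlaps a detection on an adjacent section; isolated boxes are treated as likely false positives. The box format, IoU measure, and threshold are all assumptions, not taken from the paper.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def screen(detections, iou_thresh=0.3):
    """detections: dict mapping section index z -> list of boxes.
    Keep a box only if it overlaps some box on section z-1 or z+1."""
    kept = {}
    for z, boxes in detections.items():
        neighbours = detections.get(z - 1, []) + detections.get(z + 1, [])
        kept[z] = [b for b in boxes
                   if any(iou(b, nb) >= iou_thresh for nb in neighbours)]
    return kept

# Toy example: one synapse persisting across sections 0-2, plus one isolated box.
dets = {
    0: [(10, 10, 30, 30)],
    1: [(12, 11, 31, 29), (80, 80, 95, 95)],   # the second box has no support nearby
    2: [(11, 12, 29, 31)],
}
print(screen(dets))   # the isolated box on section 1 is discarded
```

In the paper the screening also exploits richer context cues (e.g. surrounding membrane structure); this sketch captures only the cross-section consistency idea.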
JIP1-Mediated JNK Activation Negatively Regulates Synaptic Plasticity and Spatial Memory.
Morel, Caroline; Sherrin, Tessi; Kennedy, Norman J; Forest, Kelly H; Avcioglu Barutcu, Seda; Robles, Michael; Carpenter-Hyland, Ezekiel; Alfulaij, Naghum; Standen, Claire L; Nichols, Robert A; Benveniste, Morris; Davis, Roger J; Todorovic, Cedomir
2018-04-11
The c-Jun N-terminal kinase (JNK) signal transduction pathway is implicated in learning and memory. Here, we examined the role of JNK activation mediated by the JNK-interacting protein 1 (JIP1) scaffold protein. We compared male wild-type mice with a mouse model harboring a point mutation in the Jip1 gene that selectively blocks JIP1-mediated JNK activation. These male mutant mice exhibited increased NMDAR currents, increased NMDAR-mediated gene expression, and a lower threshold for induction of hippocampal long-term potentiation. The JIP1 mutant mice also displayed improved hippocampus-dependent spatial memory and enhanced associative fear conditioning. These results were confirmed using a second JIP1 mutant mouse model that suppresses JNK activity. Together, these observations establish that JIP1-mediated JNK activation contributes to the regulation of hippocampus-dependent, NMDAR-mediated synaptic plasticity and learning. SIGNIFICANCE STATEMENT The results of this study demonstrate that c-Jun N-terminal kinase (JNK) activation induced by the JNK-interacting protein 1 (JIP1) scaffold protein negatively regulates the threshold for induction of long-term synaptic plasticity through the NMDA-type glutamate receptor. This change in plasticity threshold influences learning. Indeed, mice with defects in JIP1-mediated JNK activation display enhanced memory in hippocampus-dependent tasks, such as contextual fear conditioning and Morris water maze, indicating that JIP1-JNK constrains spatial memory. This study identifies JIP1-mediated JNK activation as a novel molecular pathway that negatively regulates NMDAR-dependent synaptic plasticity and memory. Copyright © 2018 the authors 0270-6474/18/383708-21$15.00/0.
STRIATAL-ENRICHED PROTEIN TYROSINE PHOSPHATASE (STEP) KNOCKOUT MICE HAVE ENHANCED HIPPOCAMPAL MEMORY
Venkitaramani, Deepa V.; Moura, Paula J.; Picciotto, Marina R.; Lombroso, Paul J.
2011-01-01
STriatal-Enriched protein tyrosine Phosphatase (STEP) is a brain-specific phosphatase that opposes synaptic strengthening by regulating key synaptic signaling proteins, and previous studies suggest a possible role for STEP in learning and memory. To demonstrate the functional importance of STEP in learning and memory, we generated STEP knockout (KO) mice and examined the effect of STEP deletion on behavioral performance, as well as on the phosphorylation and expression of its substrates. Here we report that loss of STEP leads to significantly enhanced performance in hippocampus-dependent learning and memory tasks. In addition, STEP KO mice displayed greater dominance behavior, although they were normal in motivation, motor coordination, visual acuity, and social interactions. STEP KO mice displayed enhanced tyrosine phosphorylation of extracellular signal-regulated kinase 1/2 (ERK1/2), the NR2B subunit of the N-methyl-D-aspartate receptor (NMDAR), and proline-rich tyrosine kinase 2 (Pyk2), as well as increased phosphorylation of ERK1/2 substrates. Concomitant with the increased phosphorylation of NR2B, synaptosomal expression of NR1/NR2B NMDARs was increased in STEP KO mice, as was that of GluR1/GluR2-containing α-amino-3-hydroxy-5-methyl-4-isoxazole-propionic acid receptors (AMPARs), providing a potential molecular mechanism for the improved cognitive performance. The data support a role for STEP in the regulation of synaptic strengthening. The absence of STEP improves cognitive performance, and may do so by regulating downstream effectors necessary for synaptic transmission. PMID:21501258
Learning Multisensory Integration and Coordinate Transformation via Density Estimation
Sabes, Philip N.
2013-01-01
Sensory processing in the brain includes three key operations: multisensory integration—the task of combining cues into a single estimate of a common underlying stimulus; coordinate transformations—the change of reference frame for a stimulus (e.g., retinotopic to body-centered) effected through knowledge about an intervening variable (e.g., gaze position); and the incorporation of prior information. Statistically optimal sensory processing requires that each of these operations maintains the correct posterior distribution over the stimulus. Elements of this optimality have been demonstrated in many behavioral contexts in humans and other animals, suggesting that the neural computations are indeed optimal. That the relationships between sensory modalities are complex and plastic further suggests that these computations are learned—but how? We provide a principled answer, by treating the acquisition of these mappings as a case of density estimation, a well-studied problem in machine learning and statistics, in which the distribution of observed data is modeled in terms of a set of fixed parameters and a set of latent variables. In our case, the observed data are unisensory-population activities, the fixed parameters are synaptic connections, and the latent variables are multisensory-population activities. In particular, we train a restricted Boltzmann machine with the biologically plausible contrastive-divergence rule to learn a range of neural computations not previously demonstrated under a single approach: optimal integration; encoding of priors; hierarchical integration of cues; learning when not to integrate; and coordinate transformation. The model makes testable predictions about the nature of multisensory representations. PMID:23637588
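The density-estimation framing can be illustrated with a minimal restricted Boltzmann machine trained by one-step contrastive divergence (CD-1). This is a toy sketch, not the authors' model (which uses population codes for integration and coordinate transformation): here the visible units stand in for "unisensory" observations driven by a common latent stimulus, the hidden units for latent "multisensory" activity, and the weights for the learned synaptic connections. All sizes and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy "unisensory" data: six visible units echo a common binary stimulus with 10% noise.
def make_batch(n):
    s = rng.integers(0, 2, size=(n, 1))          # latent stimulus
    v = np.repeat(s, 6, axis=1).astype(float)
    flip = rng.random(v.shape) < 0.1             # sensory noise
    return np.abs(v - flip.astype(float))

nv, nh = 6, 3
W = 0.01 * rng.standard_normal((nv, nh))
b_v, b_h = np.zeros(nv), np.zeros(nh)
lr = 0.1

for step in range(2000):
    v0 = make_batch(32)
    ph0 = sigmoid(v0 @ W + b_h)                  # positive phase: hidden probabilities
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T + b_v)                # one-step reconstruction
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + b_h)                  # negative phase
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / 32     # CD-1 weight update
    b_v += lr * (v0 - v1).mean(axis=0)
    b_h += lr * (ph0 - ph1).mean(axis=0)

# After training, reconstructions should be much closer to the data than chance.
v = make_batch(200)
recon = sigmoid(sigmoid(v @ W + b_h) @ W.T + b_v)
err = np.mean((v - recon) ** 2)
print(err)
```

The paper's point is that the same unsupervised objective, applied to unisensory population codes, yields optimal integration and coordinate transformations; this sketch shows only the learning rule itself.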
Montgomery, Karienn S.; Edwards, George; Levites, Yona; Kumar, Ashok; Myers, Catherine E.; Gluck, Mark A.; Setlow, Barry; Bizon, Jennifer L.
2015-01-01
Elevated β-amyloid and impaired synaptic function in the hippocampus are among the earliest manifestations of Alzheimer’s disease (AD). Most cognitive assessments employed in both humans and animal models, however, are insensitive to this early disease pathology. One critical aspect of hippocampal function is its role in episodic memory, which involves the binding of temporally coincident sensory information (e.g., sights, smells, and sounds) to create a representation of a specific learning epoch. Flexible associations can be formed among these distinct sensory stimuli that enable the “transfer” of new learning across a wide variety of contexts. The current studies employed a mouse analog of an associative “transfer learning” task that has previously been used to identify risk for prodromal AD in humans. The rodent version of the task assesses the transfer of learning about stimulus features relevant to a food reward across a series of compound discrimination problems. The relevant feature that predicts the food reward is unchanged across problems, but an irrelevant feature (i.e., the context) is altered. Experiment 1 demonstrated that C57BL/6J mice with bilateral ibotenic acid lesions of the hippocampus were able to discriminate between two stimuli on par with control mice; however, lesioned mice were unable to transfer or apply this learning to new problem configurations. Experiment 2 used the APPswePS1 mouse model of amyloidosis to show that robust impairments in transfer learning are evident in mice with subtle β-amyloid-induced synaptic deficits in the hippocampus. Finally, Experiment 3 confirmed that the same transfer learning impairments observed in APPswePS1 mice were also evident in the Tg-SwDI mouse, a second model of amyloidosis.
Together, these data show that the ability to generalize learned associations to new contexts is disrupted even in the presence of subtle hippocampal dysfunction and suggest that, across species, this aspect of hippocampal-dependent learning may be useful for early identification of AD-like pathology. PMID:26418152
Learning Reward Uncertainty in the Basal Ganglia
Bogacz, Rafal
2016-01-01
Learning the reliability of different sources of rewards is critical for making optimal choices. However, despite the existence of detailed theory describing how the expected reward is learned in the basal ganglia, it is not known how reward uncertainty is estimated in these circuits. This paper presents a class of models that encode both the mean reward and the spread of the rewards, the former in the difference between the synaptic weights of D1 and D2 neurons, and the latter in their sum. In the models, the tendency to seek (or avoid) options with variable reward can be controlled by increasing (or decreasing) the tonic level of dopamine. The models are consistent with the physiology of and synaptic plasticity in the basal ganglia, they explain the effects of dopaminergic manipulations on choices involving risks, and they make multiple experimental predictions. PMID:27589489
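The paper's central claim can be illustrated with a toy simulation. The exact update rule and all parameter values below are illustrative assumptions, not the published model: D1 ("Go") weights are potentiated by positive dopaminergic prediction errors, D2 ("No-Go") weights by negative ones, and both decay. At equilibrium the weight difference then tracks the mean reward while the weight sum grows with the reward spread (here, the mean absolute deviation divided by the decay rate).

```python
import numpy as np

rng = np.random.default_rng(1)

alpha, lam = 0.05, 0.05   # learning rate and weight decay (hypothetical values)
G = N = 0.0               # D1 ("Go") and D2 ("No-Go") synaptic weights

for _ in range(20000):
    r = rng.choice([0.0, 2.0])                 # variable reward: mean 1, high spread
    delta = r - (G - N)                        # dopaminergic reward prediction error
    G += alpha * (max(delta, 0.0) - lam * G)   # D1 driven by positive errors
    N += alpha * (max(-delta, 0.0) - lam * N)  # D2 driven by negative errors

print(G - N)   # difference ≈ mean reward
print(G + N)   # sum grows with reward spread (≈ E|r - mean| / lam)
```

With a deterministic reward of 1 on every trial, the same rule would leave G + N near its minimum, so the sum distinguishes risky from safe options even when their means match; this is the quantity the model proposes tonic dopamine reads out when biasing risk-seeking choices.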
Arc expression identifies the lateral amygdala fear memory trace
Gouty-Colomer, L A; Hosseini, B; Marcelo, I M; Schreiber, J; Slump, D E; Yamaguchi, S; Houweling, A R; Jaarsma, D; Elgersma, Y; Kushner, S A
2016-01-01
Memories are encoded within sparsely distributed neuronal ensembles. However, the defining cellular properties of neurons within a memory trace remain incompletely understood. Using a fluorescence-based Arc reporter, we were able to visually identify the distinct subset of lateral amygdala (LA) neurons activated during auditory fear conditioning. We found that Arc-expressing neurons have enhanced intrinsic excitability and are preferentially recruited into newly encoded memory traces. Furthermore, synaptic potentiation of thalamic inputs to the LA during fear conditioning is learning-specific, postsynaptically mediated and highly localized to Arc-expressing neurons. Taken together, our findings validate the immediate-early gene Arc as a molecular marker for the LA neuronal ensemble recruited during fear learning. Moreover, these results establish a model of fear memory formation in which intrinsic excitability determines neuronal selection, whereas learning-related encoding is governed by synaptic plasticity. PMID:25802982
Mahan, John D; Stein, David S
2014-07-01
It is important in teaching adults to recognize the essential characteristics of adult learners and how these characteristics define their learning priorities and activities. The seven key premises and practices for teaching adults provide a good guide for those interested in helping adults learn. The emerging science of the neurobiology of learning provides powerful new insights into how learning occurs in the complex, integrated neural network that characterizes the adult. One key insight is the differentiation of two types of thinking: System 1 (fast, intuitive, and often emotional) and System 2 (slower, deliberate, and logical). System 1 thinking helps explain the basis for quick decisions and the human reliance on heuristics (rules of thumb), which leads to the kind of convenient thinking associated with errors of thinking and judgment. We now know that the learning experience has an objective location, in the temporal and parietal lobes, as persistent dynamic networks of neurons and neuronal connections. Learning is initially stored in transient working memory (with relatively limited capacity and time frame) and then moved, under the right conditions, to more long-lasting, stable memory (with larger capacity) that is stored for future access and development. It is clear that memories are not static and are not destined, once formed, to remain forever as stable constructs; rather, memories are dynamic, always available for modulation and alteration, and heavily invested with context, emotion, and other operant factors. The framework for such neural networks involves new neuronal connections, enhanced neuronal synaptic transmission, and neuron generation. Ten key teaching and learning concepts derived from recent neurobiology studies on learning and memory are presented.
As the neurobiology of learning is better defined, the basis for how adults best learn, and even the preferences they display, can be employed as the physiological foundation for our best methods to effectively teach adults and facilitate their learning. Copyright © 2014 Mosby, Inc. All rights reserved.
Galeazzi, Juan M.; Navajas, Joaquín; Mender, Bedeho M. W.; Quian Quiroga, Rodrigo; Minini, Loredana; Stringer, Simon M.
2016-01-01
Neurons have been found in the primate brain that respond to objects in specific locations in hand-centered coordinates. A key theoretical challenge is to explain how such hand-centered neuronal responses may develop through visual experience. In this paper we show how hand-centered visual receptive fields can develop using an artificial neural network model, VisNet, of the primate visual system when driven by gaze changes recorded from human test subjects as they completed a jigsaw. A camera mounted on the head captured images of the hand and jigsaw, while eye movements were recorded using an eye-tracking device. This combination of data allowed us to reconstruct the retinal images seen as humans undertook the jigsaw task. These retinal images were then fed into the neural network model during self-organization of its synaptic connectivity using a biologically plausible trace learning rule. A trace learning mechanism encourages neurons in the model to learn to respond to input images that tend to occur in close temporal proximity. In the data recorded from human subjects, we found that the participant’s gaze often shifted through a sequence of locations around a fixed spatial configuration of the hand and one of the jigsaw pieces. In this case, trace learning should bind these retinal images together onto the same subset of output neurons. The simulation results consequently confirmed that some cells learned to respond selectively to the hand and a jigsaw piece in a fixed spatial configuration across different retinal views. PMID:27253452
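The trace rule itself is compact enough to sketch. Below is a deterministic toy example, far simpler than VisNet's competitive network: a single output neuron, one-hot "views", and hypothetical parameter values. Because the postsynaptic trace generated by view A is still active when view B arrives moments later, the weight to view B grows even though view B alone does not initially drive the neuron, binding the two views onto the same output; inputs from a temporally separate sequence remain unlearned.

```python
import numpy as np

eta, alpha = 0.6, 0.1   # trace persistence and learning rate (hypothetical values)
w = np.zeros(4)
w[0] = 1.0              # the neuron initially responds only to view A (input 0)

# View A (input 0) of an object is always followed by view B (input 1).
# Inputs 2 and 3 belong to a different, temporally separate sequence.
seq_same_object = [np.eye(4)[0], np.eye(4)[1]]
seq_other = [np.eye(4)[2], np.eye(4)[3]]

for _ in range(50):
    for seq in (seq_same_object, seq_other):
        trace = 0.0
        for x in seq:
            y = w @ x                              # postsynaptic response
            trace = (1 - eta) * y + eta * trace    # temporally smeared ("trace") activity
            w += alpha * trace * x                 # trace rule: dw = alpha * trace * x
            w = np.clip(w, 0.0, 1.0)               # keep weights bounded

print(w.round(2))   # weight to view B approaches that of view A; inputs 2-3 stay at zero
```

In VisNet this same temporal smearing, combined with competition across many output neurons, is what lets cells respond to a hand-object configuration across different retinal views.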
Associative Memory in Three Aplysiids: Correlation with Heterosynaptic Modulation
ERIC Educational Resources Information Center
Thompson, Laura; Wright, William G.; Hoover, Brian A.; Nguyen, Hoang
2006-01-01
Much recent research on mechanisms of learning and memory focuses on the role of heterosynaptic neuromodulatory signaling. Such neuromodulation appears to stabilize Hebbian synaptic changes underlying associative learning, thereby extending memory. Previous comparisons of three related sea-hares (Mollusca, Opisthobranchia) uncovered interspecific…
Rule learning in autism: the role of reward type and social context.
Jones, E J H; Webb, S J; Estes, A; Dawson, G
2013-01-01
Learning abstract rules is central to social and cognitive development. Across two experiments, we used Delayed Non-Matching to Sample tasks to characterize the longitudinal development and nature of rule-learning impairments in children with Autism Spectrum Disorder (ASD). Results showed that children with ASD consistently experienced more difficulty learning an abstract rule from a discrete physical reward than children with developmental delay (DD). Rule learning was facilitated by the provision of more concrete reinforcement, suggesting an underlying difficulty in forming conceptual connections. Learning abstract rules about social stimuli remained challenging through late childhood, indicating the importance of testing executive functions in both social and non-social contexts.
Interplay between population firing stability and single neuron dynamics in hippocampal networks
Slomowitz, Edden; Styr, Boaz; Vertkin, Irena; Milshtein-Parush, Hila; Nelken, Israel; Slutsky, Michael; Slutsky, Inna
2015-01-01
Neuronal circuits' ability to maintain the delicate balance between stability and flexibility in changing environments is critical for normal neuronal functioning. However, to what extent individual neurons and neuronal populations maintain internal firing properties remains largely unknown. In this study, we show that distributions of spontaneous population firing rates and synchrony are subject to accurate homeostatic control following increase of synaptic inhibition in cultured hippocampal networks. Reduction in firing rate triggered synaptic and intrinsic adaptive responses operating as global homeostatic mechanisms to maintain firing macro-stability, without achieving local homeostasis at the single-neuron level. Adaptive mechanisms, while stabilizing population firing properties, reduced short-term facilitation essential for synaptic discrimination of input patterns. Thus, invariant ongoing population dynamics emerge from intrinsically unstable activity patterns of individual neurons and synapses. The observed differences in the precision of homeostatic control at different spatial scales challenge cell-autonomous theory of network homeostasis and suggest the existence of network-wide regulation rules. DOI: http://dx.doi.org/10.7554/eLife.04378.001 PMID:25556699
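The distinction between population-level ("macro") and single-neuron ("local") homeostasis can be sketched in a toy rate model. This is a deliberate simplification with hypothetical parameters, not the authors' experimental system: a single network-wide synaptic gain is adjusted until the population mean rate returns to a set point after increased inhibition, but because the neurons are heterogeneous, individual neurons settle at rates different from their original ones.

```python
import numpy as np

rng = np.random.default_rng(4)

n = 50
drive = rng.uniform(0.5, 1.5, size=n)   # heterogeneous intrinsic drive per neuron
scale = 1.0                              # single network-wide synaptic gain
target = 1.0                             # population-level set point (arbitrary units)

def rates(scale, inhibition):
    return np.maximum(drive * scale - inhibition, 0.0)

before = rates(scale, inhibition=0.0)

# Perturbation: increased synaptic inhibition lowers every neuron's firing rate.
inhibition = 0.4
for _ in range(500):
    r = rates(scale, inhibition)
    scale += 0.01 * (target - r.mean())  # global homeostatic adjustment of the gain

after = rates(scale, inhibition)
print(before.mean(), after.mean())       # population mean is restored to the set point...
print(np.abs(after - before).mean())     # ...but individual rates have shifted
```

The global controller achieves macro-stability without local homeostasis, which is the qualitative pattern the study reports; the real adaptive mechanisms (synaptic and intrinsic) are of course richer than a single gain parameter.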
Goh, Jinzhong Jeremy; Manahan-Vaughan, Denise
2013-02-01
Learning-facilitated synaptic plasticity describes the ability of hippocampal synapses to respond with persistent plasticity to afferent stimulation when coupled with a spatial learning event, whereby the same afferent stimulation produces only short-term plasticity, or no change in synaptic strength, if given in the absence of novel learning. Recently, it was reported that in the mouse hippocampus intrinsic long-term depression (LTD > 24 h) occurs when test-pulse afferent stimulation is coupled with novel spatial learning. It is not known to what extent this phenomenon shares molecular properties with synaptic plasticity that is typically induced by means of patterned electrical afferent stimulation. In previous work, we showed that a novel spatial object recognition task facilitates LTD at the Schaffer collateral-CA1 synapse of freely behaving adult mice, whereas re-exposure to the familiar spatial configuration ∼24 h later elicited no such facilitation. Here we report that treatment with the NMDA receptor antagonist (±)-3-(2-Carboxypiperazin-4-yl)-propanephosphonic acid (CPP), or antagonism of the metabotropic glutamate (mGlu) receptor mGlu5 using 2-methyl-6-(phenylethynyl) pyridine (MPEP), completely prevented LTD under the novel learning conditions. Behavioral assessment after application of the antagonists revealed that, during re-exposure, the animals did not remember the objects and treated them as if they were novel. Under these circumstances, where the acquisition of novel spatial information was involved, LTD was facilitated. Our data support the view that the endogenous LTD enabled through novel spatial learning in adult mice is critically dependent on the activation of both NMDA receptors and mGlu5. Copyright © 2012 Wiley Periodicals, Inc.
Luque, Niceto R.; Garrido, Jesús A.; Carrillo, Richard R.; D'Angelo, Egidio; Ros, Eduardo
2014-01-01
The cerebellum is known to play a critical role in learning relevant patterns of activity for adaptive motor control, but the underlying network mechanisms are only partly understood. The classical long-term synaptic plasticity between parallel fibers (PFs) and Purkinje cells (PCs), which is driven by the inferior olive (IO), can only account for limited aspects of learning. Recently, the role of additional forms of plasticity in the granular layer, molecular layer and deep cerebellar nuclei (DCN) has been considered. In particular, learning at DCN synapses allows for generalization, but convergence to a stable state requires hundreds of repetitions. In this paper we have explored the putative role of the IO-DCN connection by endowing it with adaptable weights and exploring its implications in a closed-loop robotic manipulation task. Our results show that IO-DCN plasticity accelerates convergence of learning by up to two orders of magnitude without conflicting with the generalization properties conferred by DCN plasticity. Thus, this model suggests that multiple distributed learning mechanisms provide a key for explaining the complex properties of procedural learning and open up new experimental questions for synaptic plasticity in the cerebellar network. PMID:25177290
ERIC Educational Resources Information Center
Cabirol, Amélie; Brooks, Rufus; Groh, Claudia; Barron, Andrew B.; Devaud, Jean-Marc
2017-01-01
The honey bee mushroom bodies (MBs) are brain centers required for specific learning tasks. Here, we show that environmental conditions experienced as young adults affect the maturation of MB neuropil and performance in a MB-dependent learning task. Specifically, olfactory reversal learning was selectively impaired following early exposure to an…
Contributions of Bcl-xL to acute and long term changes in bioenergetics during neuronal plasticity.
Jonas, Elizabeth A
2014-08-01
Mitochondria manufacture and release metabolites and manage calcium during neuronal activity and synaptic transmission, but whether long term alterations in mitochondrial function contribute to the neuronal plasticity underlying changes in organism behavior patterns is still poorly understood. Although normal neuronal plasticity may determine learning, in contrast a persistent decline in synaptic strength or neuronal excitability may portend neurite retraction and eventual somatic death. Anti-death proteins such as Bcl-xL not only provide neuroprotection at the neuronal soma during cell death stimuli, but also appear to enhance neurotransmitter release and synaptic growth and development. It is proposed that Bcl-xL performs these functions through its ability to regulate mitochondrial release of bioenergetic metabolites and calcium, and through its ability to rapidly alter mitochondrial positioning and morphology. Bcl-xL also interacts with proteins that directly alter synaptic vesicle recycling. Bcl-xL translocates acutely to sub-cellular membranes during neuronal activity to achieve changes in synaptic efficacy. After stressful stimuli, pro-apoptotic cleaved delta N Bcl-xL (ΔN Bcl-xL) induces mitochondrial ion channel activity leading to synaptic depression and this is regulated by caspase activation. During physiological states of decreased synaptic stimulation, loss of mitochondrial Bcl-xL and low level caspase activation occur prior to the onset of long term decline in synaptic efficacy. The degree to which Bcl-xL changes mitochondrial membrane permeability may control the direction of change in synaptic strength. The small molecule Bcl-xL inhibitor ABT-737 has been useful in defining the role of Bcl-xL in synaptic processes. Bcl-xL is crucial to the normal health of neurons and synapses and its malfunction may contribute to neurodegenerative disease. Copyright © 2013. Published by Elsevier B.V.
Wallisch, Pascal; Ostojic, Srdjan
2016-01-01
Synaptic plasticity is sensitive to the rate and the timing of presynaptic and postsynaptic action potentials. In experimental protocols inducing plasticity, the imposed spike trains are typically regular and the relative timing between every presynaptic and postsynaptic spike is fixed. This is at odds with firing patterns observed in the cortex of intact animals, where cells fire irregularly and the timing between presynaptic and postsynaptic spikes varies. To investigate synaptic changes elicited by in vivo-like firing, we used numerical simulations and mathematical analysis of synaptic plasticity models. We found that the influence of spike timing on plasticity is weaker than expected from regular stimulation protocols. Moreover, when neurons fire irregularly, synaptic changes induced by precise spike timing can be equivalently induced by a modest firing rate variation. Our findings bridge the gap between existing results on synaptic plasticity and plasticity occurring in vivo, and challenge the dominant role of spike timing in plasticity. SIGNIFICANCE STATEMENT Synaptic plasticity, the change in efficacy of connections between neurons, is thought to underlie learning and memory. The dominant paradigm posits that the precise timing of neural action potentials (APs) is central for plasticity induction. This concept is based on experiments using highly regular and stereotyped patterns of APs, in stark contrast with natural neuronal activity. Using synaptic plasticity models, we investigated how irregular, in vivo-like activity shapes synaptic plasticity. We found that synaptic changes induced by precise timing of APs are much weaker than suggested by regular stimulation protocols, and can be equivalently induced by modest variations of the AP rate alone. Our results call into question the dominant role of precise AP timing for plasticity in natural conditions. PMID:27807166
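The core comparison can be sketched with a standard all-to-all pair-based STDP model. The amplitudes and time constant below are generic textbook-style values, not taken from this study: a regular pre-before-post pairing protocol yields strong net potentiation, whereas pre- and postsynaptic trains firing irregularly at the same mean rates produce only a small net change, because potentiating and depressing spike pairs largely cancel.

```python
import numpy as np

rng = np.random.default_rng(3)

A_plus, A_minus, tau = 0.01, 0.012, 20.0   # STDP amplitudes and time constant (ms)

def stdp_dw(pre, post):
    """Total weight change over all pre/post spike pairs (all-to-all pairing)."""
    dw = 0.0
    for tp in pre:
        for tq in post:
            dt = tq - tp
            if dt > 0:
                dw += A_plus * np.exp(-dt / tau)   # pre-before-post: potentiation
            elif dt < 0:
                dw -= A_minus * np.exp(dt / tau)   # post-before-pre: depression
    return dw

T, rate = 10000.0, 0.01   # 10 s of activity at 10 Hz (0.01 spikes/ms)

# Regular protocol: the postsynaptic spike always lags the presynaptic one by +10 ms.
pre_reg = np.arange(0.0, T, 100.0)
dw_regular = stdp_dw(pre_reg, pre_reg + 10.0)

# Irregular firing: pre and post spike times vary independently at the same rates.
pre_irr = np.sort(rng.uniform(0.0, T, size=int(T * rate)))
post_irr = np.sort(rng.uniform(0.0, T, size=int(T * rate)))
dw_irregular = stdp_dw(pre_irr, post_irr)

print(dw_regular, dw_irregular)
```

Under irregular firing the residual drift is governed by the rates and the (small) asymmetry of the STDP window rather than by precise timing, which is the qualitative point of the study.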
Neuromorphic implementations of neurobiological learning algorithms for spiking neural networks.
Walter, Florian; Röhrbein, Florian; Knoll, Alois
2015-12-01
The application of biologically inspired methods in design and control has a long tradition in robotics. Unlike previous approaches in this direction, the emerging field of neurorobotics not only mimics biological mechanisms at a relatively high level of abstraction but employs highly realistic simulations of actual biological nervous systems. Even today, carrying out these simulations efficiently at appropriate timescales is challenging. Neuromorphic chip designs specially tailored to this task therefore offer an interesting perspective for neurorobotics. Unlike Von Neumann CPUs, these chips cannot be simply programmed with a standard programming language. Like real brains, their functionality is determined by the structure of neural connectivity and synaptic efficacies. Enabling higher cognitive functions for neurorobotics consequently requires the application of neurobiological learning algorithms to adjust synaptic weights in a biologically plausible way. In this paper, we therefore investigate how to program neuromorphic chips by means of learning. First, we provide an overview of selected neuromorphic chip designs and analyze them in terms of neural computation, communication systems and software infrastructure. On the theoretical side, we review neurobiological learning techniques. Based on this overview, we then examine on-die implementations of these learning algorithms on the considered neuromorphic chips. A final discussion puts the findings of this work into context and highlights how neuromorphic hardware can potentially advance the field of autonomous robot systems. The paper thus gives an in-depth overview of neuromorphic implementations of basic mechanisms of synaptic plasticity which are required to realize advanced cognitive capabilities with spiking neural networks. Copyright © 2015 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Fister, Mathew; Bickford, Paula C.; Cartford, M. Claire; Samec, Amy
2004-01-01
The neurotransmitter norepinephrine (NE) has been shown to modulate cerebellar-dependent learning and memory. Lesions of the nucleus locus coeruleus or systemic blockade of noradrenergic receptors has been shown to delay the acquisition of several cerebellar-dependent learning tasks. To date, no studies have shown a direct involvement of…
Impaired Discrimination Learning in Mice Lacking the NMDA Receptor NR2A Subunit
ERIC Educational Resources Information Center
Brigman, Jonathan L.; Feyder, Michael; Saksida, Lisa M.; Bussey, Timothy J.; Mishina, Masayoshi; Holmes, Andrew
2008-01-01
N-Methyl-D-aspartate receptors (NMDARs) mediate certain forms of synaptic plasticity and learning. We used a touchscreen system to assess NR2A subunit knockout mice (KO) for (1) pairwise visual discrimination and reversal learning and (2) acquisition and extinction of an instrumental response requiring no pairwise discrimination. NR2A KO mice…
The Roles of Protein Kinases in Learning and Memory
ERIC Educational Resources Information Center
Giese, Karl Peter; Mizuno, Keiko
2013-01-01
In the adult mammalian brain, more than 250 protein kinases are expressed, but only a few of these kinases are currently known to enable learning and memory. Based on this information it appears that learning and memory-related kinases either impact on synaptic transmission by altering ion channel properties or ion channel density, or regulate…
Del Giudice, Paolo; Fusi, Stefano; Mattia, Maurizio
2003-01-01
In this paper we review a series of works concerning models of spiking neurons interacting via spike-driven, plastic, Hebbian synapses, meant to implement stimulus-driven, unsupervised formation of working memory (WM) states. Starting from a summary of the experimental evidence emerging from delayed matching to sample (DMS) experiments, we briefly review the attractor picture proposed to underlie WM states. We then describe a general framework for a theoretical approach to learning with synapses subject to realistic constraints and outline some general requirements to be met by a mechanism of Hebbian synaptic structuring. We argue that a stochastic selection of the synapses to be updated allows for optimal memory storage, even if the number of stable synaptic states is reduced to the extreme (bistable synapses). A description follows of models of spike-driven synapses that implement the stochastic selection by exploiting the high irregularity in the pre- and post-synaptic activity. Reasons are listed why dynamic learning, that is, the process by which the synaptic structure develops under the sole guidance of neural activities, driven in turn by stimuli, is hard to accomplish. We provide a 'feasibility proof' of dynamic formation of WM states by showing how an initially unstructured network autonomously develops a synaptic structure supporting simultaneously stable spontaneous and WM states; in this context the beneficial role of short-term depression (STD) is illustrated. After summarizing heuristic indications emerging from the study performed, we conclude by briefly discussing open problems and critical issues still to be clarified.
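A minimal sketch of the stochastic-selection idea with bistable (two-state) synapses, using illustrative transition probabilities (this is our own toy example, not the reviewed models):

```python
import numpy as np

rng = np.random.default_rng(1)

def stochastic_update(w, potentiate, q_up=0.05, q_down=0.05):
    """Bistable (0/1) synapses updated stochastically: only a random
    fraction q of the candidate transitions is actually performed, which
    slows the overwriting of old memories and improves storage capacity."""
    r = rng.random(w.shape)
    w = np.where(potentiate & (r < q_up), 1, w)      # LTP candidates
    w = np.where(~potentiate & (r < q_down), 0, w)   # LTD candidates
    return w

w = np.zeros(1000, dtype=int)
# Repeatedly present one stimulus: a fixed subset of synapses is marked
# for potentiation, the rest for depression.
pattern = rng.random(1000) < 0.3
for _ in range(50):
    w = stochastic_update(w, pattern)
print(w[pattern].mean(), w[~pattern].mean())
```

Because each presentation flips only a small random subset of the eligible synapses, the pattern is imprinted gradually over repetitions rather than in a single step.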
Wang, Tao; Guan, Rui-Li; Liu, Ming-Chao; Shen, Xue-Feng; Chen, Jing Yuan; Zhao, Ming-Gao; Luo, Wen-Jing
2016-08-01
Lead (Pb) is an environmental neurotoxic metal. Pb exposure may cause neurobehavioral changes, such as learning and memory impairment and adolescent violence, among children. Previous animal models have largely focused on the effects of Pb exposure during early development (from gestation to the lactation period) on neurobehavior. In this study, we exposed Sprague-Dawley rats during the juvenile stage (from the juvenile period to the adult period). We investigated synaptic functional and structural changes and the relationship of these changes to neurobehavioral deficits in adult rats. Our results showed that juvenile Pb exposure caused fear-conditioned memory impairment and anxiety-like behavior, whereas locomotion and pain behavior were indistinguishable from the controls. Electrophysiological studies showed that long-term potentiation induction was impaired in Pb-exposed rats, probably owing to a deficit in excitatory synaptic transmission. We found that NMDA and AMPA receptor-mediated currents were inhibited, whereas GABA synaptic transmission was normal in Pb-exposed rats. NR2A and phosphorylated GluR1 expression decreased. Moreover, morphological studies showed that the density of dendritic spines declined by about 20 % in the Pb-treated group. Spines showed an immature form in Pb-exposed rats, as indicated by spine size measurements. However, the length and arborization of dendrites were unchanged. Our results suggest that juvenile Pb exposure in rats is associated with alterations in glutamate receptors, which cause synaptic functional and morphological changes in hippocampal CA1 pyramidal neurons, thereby leading to behavioral changes.
A Functional Genomic Analysis of NF1-Associated Learning Disabilities
2007-02-01
In addition, the expression of several synaptic receptor genes was altered (Supplemental Table 1), including NMDA receptor 1, AMPA receptor 4 and metabotropic glutamate receptors. Among the listed entries: glutamate receptor, ionotropic, AMPA3 (alpha 3), DOWN; Gabbr1, gamma-aminobutyric acid (GABA-B) receptor 1, DOWN; Grina, glutamate receptor, ionotropic, N-methyl-D-aspartate-associated protein 1, DOWN; Gria4, glutamate receptor, ionotropic, AMPA4 (alpha 4), UP.
Analog hardware for learning neural networks
NASA Technical Reports Server (NTRS)
Eberhardt, Silvio P. (Inventor)
1991-01-01
This is a recurrent or feedforward analog neural network processor having a multi-level neuron array and a synaptic matrix for storing weighted analog values of synaptic connection strengths which is characterized by temporarily changing one connection strength at a time to determine its effect on system output relative to the desired target. That connection strength is then adjusted based on the effect, whereby the processor is taught the correct response to training examples connection by connection.
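The serial perturb-and-measure scheme described in this patent abstract can be sketched in software (a hypothetical toy network and learning rate of our own choosing; the hardware measures the output error directly rather than computing it):

```python
import numpy as np

rng = np.random.default_rng(0)

def weight_perturbation_step(W, x, target, lr=0.1, eps=1e-3):
    """Serial weight perturbation: temporarily nudge one connection at a
    time, measure the effect on the output error relative to the target,
    then adjust that connection against the estimated gradient."""
    def loss(W):
        y = np.tanh(W @ x)            # a single-layer "neuron array"
        return np.sum((y - target) ** 2)
    base = loss(W)
    for idx in np.ndindex(W.shape):    # one connection strength at a time
        W2 = W.copy()
        W2[idx] += eps                 # temporary perturbation
        grad = (loss(W2) - base) / eps # measured effect on system output
        W[idx] -= lr * grad            # adjust based on the effect
        base = loss(W)
    return W, loss(W)

W = rng.normal(0, 0.5, (2, 3))
x = np.array([0.5, -0.2, 0.1])
target = np.array([0.3, -0.4])
for _ in range(100):
    W, err = weight_perturbation_step(W, x, target)
print(err)
```

The appeal for analog hardware is that no backpropagation circuitry is needed: only forward passes and an output-error measurement per perturbed connection.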
Hoshino, Osamu
2015-06-01
Perception of supraliminal stimuli might in general be reflected in bursts of action potentials (spikes), and their memory traces could be formed through spike-timing-dependent plasticity (STDP). Memory traces for subliminal stimuli might be formed in a different manner, because subliminal stimulation evokes a fraction (but not a burst) of spikes. Simulations of a cortical neural network model showed that a subliminal stimulus that was too brief (10 msec) to perceive transiently (more than about 500 msec) depolarized stimulus-relevant principal cells and hyperpolarized stimulus-irrelevant principal cells in a subthreshold manner. This led to a small increase or decrease in ongoing-spontaneous spiking activity frequency (less than 1 Hz). Synaptic modification based on STDP during this period effectively enhanced relevant synaptic weights, by which subliminal learning was improved. GABA transporters on GABAergic interneurons modulated local levels of ambient GABA. Ambient GABA molecules acted on extrasynaptic receptors, provided principal cells with tonic inhibitory currents, and contributed to achieving the subthreshold neuronal state. We suggest that ongoing-spontaneous synaptic alteration through STDP following subliminal stimulation may be a possible neuronal mechanism for leaving its memory trace in cortical circuitry. Regulation of local ambient GABA levels by transporter-mediated GABA import and export may be crucial for subliminal learning.
Distributed Cerebellar Motor Learning: A Spike-Timing-Dependent Plasticity Model
Luque, Niceto R.; Garrido, Jesús A.; Naveros, Francisco; Carrillo, Richard R.; D'Angelo, Egidio; Ros, Eduardo
2016-01-01
Deep cerebellar nuclei neurons receive both inhibitory (GABAergic) synaptic currents from Purkinje cells (within the cerebellar cortex) and excitatory (glutamatergic) synaptic currents from mossy fibers. These two deep cerebellar nucleus inputs are thought to be adaptive as well, embedding interesting properties in the framework of accurate movements. We show that distributed spike-timing-dependent plasticity (STDP) mechanisms located at different cerebellar sites (parallel fibers to Purkinje cells, mossy fibers to deep cerebellar nucleus cells, and Purkinje cells to deep cerebellar nucleus cells) in closed-loop simulations provide an explanation for the complex learning properties of the cerebellum in motor learning. Concretely, we propose a new mechanistic cerebellar spiking model. In this new model, deep cerebellar nuclei embed a dual functionality: acting as a gain adaptation mechanism and as a facilitator for slow memory consolidation at mossy fiber to deep cerebellar nucleus synapses. Equipping the cerebellum with excitatory (e-STDP) and inhibitory (i-STDP) mechanisms at deep cerebellar nuclei afferents allows the accommodation of synaptic memories that were formed at parallel fiber to Purkinje cell synapses and then transferred to mossy fiber to deep cerebellar nucleus synapses. These adaptive mechanisms also contribute to modulating the deep-cerebellar-nucleus output firing rate (output gain modulation toward optimizing its working range). PMID:26973504
The Demise of the Synapse As the Locus of Memory: A Looming Paradigm Shift?
Trettenbrein, Patrick C
2016-01-01
Synaptic plasticity is widely considered to be the neurobiological basis of learning and memory by neuroscientists and researchers in adjacent fields, though diverging opinions are increasingly being recognized. From the perspective of what we might call "classical cognitive science" it has always been understood that the mind/brain is to be considered a computational-representational system. Proponents of the information-processing approach to cognitive science have long been critical of connectionist or network approaches to (neuro-)cognitive architecture, pointing to the shortcomings of the associative psychology that underlies Hebbian learning as well as to the fact that synapses are practically unfit to implement symbols. Recent work on memory has been adding fuel to the fire and current findings in neuroscience now provide first tentative neurobiological evidence for the cognitive scientists' doubts about the synapse as the (sole) locus of memory in the brain. This paper briefly considers the history and appeal of synaptic plasticity as a memory mechanism, followed by a summary of the cognitive scientists' objections regarding these assertions. Next, a variety of tentative neuroscientific evidence that appears to substantiate questioning the idea of the synapse as the locus of memory is presented. On this basis, a novel way of thinking about the role of synaptic plasticity in learning and memory is proposed.
ERIC Educational Resources Information Center
Cui, Wen; Darby-King, Andrea; Grimes, Matthew T.; Howland, John G.; Wang, Yu Tian; McLean, John H.; Harley, Carolyn W.
2011-01-01
An increase in synaptic AMPA receptors is hypothesized to mediate learning and memory. AMPA receptor increases have been reported in aversive learning models, although it is not clear if they are seen with memory maintenance. Here we examine AMPA receptor changes in a cAMP/PKA/CREB-dependent appetitive learning model: odor preference learning in…
NASA Astrophysics Data System (ADS)
Hoffmann, Geoffrey W.; Benson, Maurice W.
1986-08-01
A neural network concept derived from an analogy between the immune system and the central nervous system is outlined. The theory is based on a neuron that is slightly more complicated than the conventional McCulloch-Pitts type, in that it exhibits hysteresis at the single-cell level. This added complication is compensated by the fact that a network of such neurons is able to learn without the necessity for any changes in synaptic connection strengths. The learning occurs as a natural consequence of interactions between the network and its environment, with environmental stimuli moving the system around in an N-dimensional phase space, until a point in phase space is reached such that the system's responses are appropriate for dealing with the stimuli. Due to the hysteresis associated with each neuron, the system tends to stay in the region of phase space where it is located. The theory includes a role for sleep in learning.
Matsubara, Takashi
2017-01-01
Precise spike timing is considered to play a fundamental role in communications and signal processing in biological neural networks. Understanding the mechanism of spike timing adjustment would deepen our understanding of biological systems and enable advanced engineering applications such as efficient computational architectures. However, the biological mechanisms that adjust and maintain spike timing remain unclear. Existing algorithms adopt a supervised approach, which adjusts the axonal conduction delay and synaptic efficacy until the spike timings approximate the desired timings. This study proposes a spike timing-dependent learning model that adjusts the axonal conduction delay and synaptic efficacy in both unsupervised and supervised manners. The proposed learning algorithm approximates the Expectation-Maximization algorithm, and classifies the input data encoded into spatio-temporal spike patterns. Even in the supervised classification, the algorithm requires no external spikes indicating the desired spike timings unlike existing algorithms. Furthermore, because the algorithm is consistent with biological models and hypotheses found in existing biological studies, it could capture the mechanism underlying biological delay learning. PMID:29209191
Calhoun, Michael E; Fletcher, Bonnie R; Yi, Stella; Zentko, Diana C; Gallagher, Michela; Rapp, Peter R
2008-08-01
Age-related impairments in hippocampus-dependent learning and memory tasks are not associated with a loss of hippocampal neurons, but may be related to alterations in synaptic integrity. Here we used stereological techniques to estimate spine number in hippocampal subfields using immunostaining for the spine-associated protein, spinophilin, as a marker. Quantification of the immunoreactive profiles was performed using the optical disector/fractionator technique. Aging was associated with a modest increase in spine number in the molecular layer of the dentate gyrus and CA1 stratum lacunosum-moleculare. By comparison, spinophilin protein levels in the hippocampus, measured by Western blot analysis, failed to differ as a function of age. Neither the morphological nor the protein level data were correlated with spatial learning ability across individual aged rats. The results extend current evidence on synaptic integrity in the aged brain, indicating that a substantial loss of dendritic spines and spinophilin protein in the hippocampus are unlikely to contribute to age-related impairment in spatial learning.
Rule Learning in Autism: The Role of Reward Type and Social Context
Jones, E. J. H.; Webb, S. J.; Estes, A.; Dawson, G.
2013-01-01
Learning abstract rules is central to social and cognitive development. Across two experiments, we used Delayed Non-Matching to Sample tasks to characterize the longitudinal development and nature of rule-learning impairments in children with Autism Spectrum Disorder (ASD). Results showed that children with ASD consistently experienced more difficulty learning an abstract rule from a discrete physical reward than children with DD. Rule learning was facilitated by the provision of more concrete reinforcement, suggesting an underlying difficulty in forming conceptual connections. Learning abstract rules about social stimuli remained challenging through late childhood, indicating the importance of testing executive functions in both social and non-social contexts. PMID:23311315
Learning and Tuning of Fuzzy Rules
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.
1997-01-01
In this chapter, we review some of the current techniques for learning and tuning fuzzy rules. For clarity, we refer to the process of generating rules from data as the learning problem and distinguish it from tuning an already existing set of fuzzy rules. For learning, we touch on unsupervised learning techniques such as fuzzy c-means, fuzzy decision tree systems, fuzzy genetic algorithms, and linear fuzzy rules generation methods. For tuning, we discuss Jang's ANFIS architecture, Berenji-Khedkar's GARIC architecture and its extensions in GARIC-Q. We show that the hybrid techniques capable of learning and tuning fuzzy rules, such as CART-ANFIS, RNN-FLCS, and GARIC-RB, are desirable in development of a number of future intelligent systems.
Algorithm for Training a Recurrent Multilayer Perceptron
NASA Technical Reports Server (NTRS)
Parlos, Alexander G.; Rais, Omar T.; Menon, Sunil K.; Atiya, Amir F.
2004-01-01
An improved algorithm has been devised for training a recurrent multilayer perceptron (RMLP) for optimal performance in predicting the behavior of a complex, dynamic, and noisy system multiple time steps into the future. [An RMLP is a computational neural network with self-feedback and cross-talk (both delayed by one time step) among neurons in hidden layers]. Like other neural-network-training algorithms, this algorithm adjusts network biases and synaptic-connection weights according to a gradient-descent rule. The distinguishing feature of this algorithm is a combination of global feedback (the use of predictions as well as the current output value in computing the gradient at each time step) and recursiveness. The recursive aspect of the algorithm lies in the inclusion of the gradient of predictions at each time step with respect to the predictions at the preceding time step; this recursion enables the RMLP to learn the dynamics. It has been conjectured that carrying the recursion to even earlier time steps would enable the RMLP to represent a noisier, more complex system.
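The distinguishing recursive feature, carrying the gradient of each prediction through the gradient of the preceding prediction, can be illustrated on a scalar linear recurrent predictor (our own simplified sketch with assumed data and hyperparameters, not the RMLP algorithm itself):

```python
import numpy as np

def train_recursive_predictor(u, y, lr=0.02, epochs=1500):
    """Train a scalar recurrent predictor yhat[t] = a*yhat[t-1] + b*u[t-1].
    The sensitivities da, db of each prediction are propagated recursively
    through the sensitivities of the preceding prediction, so the trainer
    can learn the system's dynamics from its own predictions."""
    a, b = 0.0, 0.0
    for _ in range(epochs):
        yhat, da, db, ga, gb = 0.0, 0.0, 0.0, 0.0, 0.0
        for t in range(1, len(y)):
            da = yhat + a * da        # d yhat[t]/d a, via d yhat[t-1]/d a
            db = u[t - 1] + a * db    # d yhat[t]/d b, via d yhat[t-1]/d b
            yhat = a * yhat + b * u[t - 1]
            e = yhat - y[t]
            ga += e * da              # accumulate gradient of squared error
            gb += e * db
        a -= lr * ga / len(y)         # gradient-descent weight updates
        b -= lr * gb / len(y)
    return a, b

# Synthetic data from a known linear system y[t] = 0.5*y[t-1] + 0.8*u[t-1]
rng = np.random.default_rng(0)
u = rng.normal(size=200)
y = np.zeros(200)
for t in range(1, 200):
    y[t] = 0.5 * y[t - 1] + 0.8 * u[t - 1]
a, b = train_recursive_predictor(u, y)
print(a, b)
```

Because the predictor feeds back its own output, dropping the recursive terms (setting da, db to their instantaneous parts) would bias the gradient; keeping them is what lets the model capture multi-step dynamics.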
AP-1/σ1B-adaptin mediates endosomal synaptic vesicle recycling, learning and memory
Glyvuk, Nataliya; Tsytsyura, Yaroslav; Geumann, Constanze; D'Hooge, Rudi; Hüve, Jana; Kratzke, Manuel; Baltes, Jennifer; Böning, Daniel; Klingauf, Jürgen; Schu, Peter
2010-01-01
Synaptic vesicle recycling involves AP-2/clathrin-mediated endocytosis, but it is not known whether the endosomal pathway is also required. Mice deficient in the tissue-specific AP-1–σ1B complex have impaired synaptic vesicle recycling in hippocampal synapses. The ubiquitously expressed AP-1–σ1A complex mediates protein sorting between the trans-Golgi network and early endosomes. Vertebrates express three σ1 subunit isoforms: A, B and C. The expressions of σ1A and σ1B are highest in the brain. Synaptic vesicle reformation in cultured neurons from σ1B-deficient mice is reduced upon stimulation, and large endosomal intermediates accumulate. The σ1B-deficient mice have reduced motor coordination and severely impaired long-term spatial memory. These data reveal a molecular mechanism for a severe human X-chromosome-linked mental retardation. PMID:20203623
Loss of Cdc42 leads to defects in synaptic plasticity and remote memory recall.
Kim, Il Hwan; Wang, Hong; Soderling, Scott H; Yasuda, Ryohei
2014-07-08
Cdc42 is a signaling protein important for reorganization of the actin cytoskeleton and morphogenesis of cells. However, the functional role of Cdc42 in synaptic plasticity and in behaviors such as learning and memory is not well understood. Here we report that conditional postnatal forebrain deletion of Cdc42 leads to deficits in synaptic plasticity and in remote memory recall. We found that deletion of Cdc42 impaired LTP at the Schaffer collateral synapses and postsynaptic structural plasticity of dendritic spines in CA1 pyramidal neurons in the hippocampus. Additionally, loss of Cdc42 did not affect memory acquisition, but instead significantly impaired remote memory recall. Together these results indicate that the postnatal functions of Cdc42 may be crucial for the synaptic plasticity of hippocampal neurons, which contributes to the capacity for remote memory recall.
Novel synaptic memory device for neuromorphic computing
NASA Astrophysics Data System (ADS)
Mandal, Saptarshi; El-Amin, Ammaarah; Alexander, Kaitlyn; Rajendran, Bipin; Jha, Rashmi
2014-06-01
This report discusses the electrical characteristics of two-terminal synaptic memory devices capable of demonstrating an analog change in conductance in response to the varying amplitude and pulse-width of the applied signal. The devices are based on Mn doped HfO2 material. The mechanism behind reconfiguration was studied and a unified model is presented to explain the underlying device physics. The model was then utilized to show the application of these devices in speech recognition. A comparison between a 20 nm × 20 nm sized synaptic memory device with that of a state-of-the-art VLSI SRAM synapse showed ~10× reduction in area and >106 times reduction in the power consumption per learning cycle.
Garrido, Jesús A.; Luque, Niceto R.; D'Angelo, Egidio; Ros, Eduardo
2013-01-01
Adaptable gain regulation is at the core of the forward controller operation performed by the cerebro-cerebellar loops and it allows the intensity of motor acts to be finely tuned in a predictive manner. In order to learn and store information about body-object dynamics and to generate an internal model of movement, the cerebellum is thought to employ long-term synaptic plasticity. LTD at the PF-PC synapse has classically been assumed to subserve this function (Marr, 1969). However, this plasticity alone cannot account for the broad dynamic ranges and time scales of cerebellar adaptation. We therefore tested the role of plasticity distributed over multiple synaptic sites (Hansel et al., 2001; Gao et al., 2012) by generating an analog cerebellar model embedded into a control loop connected to a robotic simulator. The robot used a three-joint arm and performed repetitive fast manipulations with different masses along an 8-shaped trajectory. In accordance with biological evidence, the cerebellum model was endowed with both LTD and LTP at the PF-PC, MF-DCN and PC-DCN synapses. This resulted in a network scheme whose effectiveness was extended considerably compared to one including just PF-PC synaptic plasticity. Indeed, the system including distributed plasticity reliably self-adapted to manipulate different masses and to learn the arm-object dynamics over a time course that included fast learning and consolidation, along the lines of what has been observed in behavioral tests. In particular, PF-PC plasticity operated as a time correlator between the actual input state and the system error, while MF-DCN and PC-DCN plasticity played a key role in generating the gain controller. This model suggests that distributed synaptic plasticity allows generation of the complex learning properties of the cerebellum. The incorporation of further plasticity mechanisms and of spiking signal processing will allow this concept to be extended in a more realistic computational scenario.
PMID:24130518
Cho, Hwasuk; Son, Hyunwoo; Seong, Kihwan; Kim, Byungsub; Park, Hong-June; Sim, Jae-Yoon
2018-02-01
This paper presents an IC implementation of an on-chip learning neuromorphic autoencoder unit in the form of a rate-based spiking neural network. With a current-mode signaling scheme embedded in a 500 × 500 6b SRAM-based memory, the proposed architecture achieves simultaneous processing of multiplications and accumulations. In addition, a transposable memory read for both forward and backward propagations and a virtual lookup table are also proposed to perform unsupervised learning of a restricted Boltzmann machine. The IC is fabricated in a 28-nm CMOS process and is verified in a three-layer encoder-decoder network for the training and recovery of two-dimensional pixel images. With a dataset of 50 digits, the IC shows a normalized root mean square error of 0.078. Measured energy efficiencies are 4.46 pJ per synaptic operation for inference and 19.26 pJ per synaptic weight update for learning, respectively. The learning performance is also estimated by simulations in which the proposed hardware architecture is extended to batch training on the 60 000-sample MNIST dataset.
Molecular mechanisms of fear learning and memory.
Johansen, Joshua P; Cain, Christopher K; Ostroff, Linnaea E; LeDoux, Joseph E
2011-10-28
Pavlovian fear conditioning is a particularly useful behavioral paradigm for exploring the molecular mechanisms of learning and memory because a well-defined response to a specific environmental stimulus is produced through associative learning processes. Synaptic plasticity in the lateral nucleus of the amygdala (LA) underlies this form of associative learning. Here, we summarize the molecular mechanisms that contribute to this synaptic plasticity in the context of auditory fear conditioning, the form of fear conditioning best understood at the molecular level. We discuss the neurotransmitter systems and signaling cascades that contribute to three phases of auditory fear conditioning: acquisition, consolidation, and reconsolidation. These studies suggest that multiple intracellular signaling pathways, including those triggered by activation of Hebbian processes and neuromodulatory receptors, interact to produce neural plasticity in the LA and behavioral fear conditioning. Collectively, this body of research illustrates the power of fear conditioning as a model system for characterizing the mechanisms of learning and memory in mammals and potentially for understanding fear-related disorders, such as PTSD and phobias. Copyright © 2011 Elsevier Inc. All rights reserved.
Neural principles of memory and a neural theory of analogical insight
NASA Astrophysics Data System (ADS)
Lawson, David I.; Lawson, Anton E.
1993-12-01
Grossberg's principles of neural modeling are reviewed and extended to provide a neural level theory to explain how analogies greatly increase the rate of learning and can, in fact, make learning and retention possible. In terms of memory, the key point is that the mind is able to recognize and recall when it is able to match sensory input from new objects, events, or situations with past memory records of similar objects, events, or situations. When a match occurs, an adaptive resonance is set up in which the synaptic strengths of neurons are increased; thus a long term record of the new input is formed in memory. Systems of neurons called outstars and instars are presumably the underlying units that enable this to occur. Analogies can greatly facilitate learning and retention because they activate the outstars (i.e., the cells that are sampling the to-be-learned pattern) and cause the neural activity to grow exponentially by forming feedback loops. This increased activity insures the boost in synaptic strengths of neurons, thus causing storage and retention in long-term memory (i.e., learning).
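The instar, the unit said to sample and store an input pattern, is commonly written as a weight vector drifting toward the input while the postsynaptic cell is active; a minimal sketch (our own illustration, with an assumed learning rate and pattern):

```python
import numpy as np

def instar_update(w, x, y, lr=0.2):
    """Instar rule: when the postsynaptic cell is active (y > 0), its
    incoming synaptic weights move toward the current input pattern, so
    repeated resonant activations boost the weights that store it."""
    return w + lr * y * (x - w)

rng = np.random.default_rng(0)
pattern = np.array([1.0, 0.0, 1.0, 0.0])
w = rng.random(4)                          # arbitrary initial weights
for _ in range(30):
    w = instar_update(w, pattern, y=1.0)   # resonant (active) state
print(np.round(w, 3))
```

Each resonant presentation shrinks the gap between the weight vector and the input geometrically, which mirrors the described boost in synaptic strengths that commits the pattern to long-term memory.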
Concurrence of rule- and similarity-based mechanisms in artificial grammar learning.
Opitz, Bertram; Hofmann, Juliane
2015-03-01
A current theoretical debate regards whether rule-based or similarity-based learning prevails during artificial grammar learning (AGL). Although the majority of findings are consistent with a similarity-based account of AGL it has been argued that these results were obtained only after limited exposure to study exemplars, and performance on subsequent grammaticality judgment tests has often been barely above chance level. In three experiments the conditions were investigated under which rule- and similarity-based learning could be applied. Participants were exposed to exemplars of an artificial grammar under different (implicit and explicit) learning instructions. The analysis of receiver operating characteristics (ROC) during a final grammaticality judgment test revealed that explicit but not implicit learning led to rule knowledge. It also demonstrated that this knowledge base is built up gradually while similarity knowledge governed the initial state of learning. Together these results indicate that rule- and similarity-based mechanisms concur during AGL. Moreover, it could be speculated that two different rule processes might operate in parallel; bottom-up learning via gradual rule extraction and top-down learning via rule testing. Crucially, the latter is facilitated by performance feedback that encourages explicit hypothesis testing. Copyright © 2015 Elsevier Inc. All rights reserved.
Acetyl-L-carnitine improves aged brain function.
Kobayashi, Satoru; Iwamoto, Machiko; Kon, Kazuo; Waki, Hatsue; Ando, Susumu; Tanaka, Yasukazu
2010-07-01
The effects of acetyl-L-carnitine (ALCAR), an acetyl derivative of L-carnitine, on memory and learning capacity and on brain synaptic functions of aged rats were examined. Male Fischer 344 rats were given ALCAR (100 mg/kg bodyweight) per os for 3 months and were subjected to the Hebb-Williams tasks and AKON-1 task to assess their learning capacity. Cholinergic activities were determined with synaptosomes isolated from brain cortices of the rats. Choline parameters, the high-affinity choline uptake, acetylcholine (ACh) synthesis and depolarization-evoked ACh release were all enhanced in the ALCAR group. An increment of depolarization-induced calcium ion influx into synaptosomes was also evident in rats given ALCAR. Electrophysiological studies using hippocampus slices indicated that the excitatory postsynaptic potential slope and population spike size were both increased in ALCAR-treated rats. These results indicate that ALCAR increases synaptic neurotransmission in the brain and consequently improves learning capacity in aging rats.
McKinstry, Jeffrey L; Edelman, Gerald M
2013-01-01
Animal behavior often involves a temporally ordered sequence of actions learned from experience. Here we describe simulations of interconnected networks of spiking neurons that learn to generate patterns of activity in correct temporal order. The simulation consists of large-scale networks of thousands of excitatory and inhibitory neurons that exhibit short-term synaptic plasticity and spike-timing dependent synaptic plasticity. The neural architecture within each area is arranged to evoke winner-take-all (WTA) patterns of neural activity that persist for tens of milliseconds. In order to generate and switch between consecutive firing patterns in correct temporal order, a reentrant exchange of signals between these areas was necessary. To demonstrate the capacity of this arrangement, we used the simulation to train a brain-based device to respond to visual input by autonomously generating temporal sequences of motor actions.
A Local Learning Rule for Independent Component Analysis
Isomura, Takuya; Toyoizumi, Taro
2016-01-01
Humans can separately recognize independent sources when they sense their superposition. This decomposition is mathematically formulated as independent component analysis (ICA). While a few biologically plausible learning rules, so-called local learning rules, have been proposed to achieve ICA, their performance varies depending on the parameters characterizing the mixed signals. Here, we propose a new learning rule that is both easy to implement and reliable. Both mathematical and numerical analyses confirm that the proposed rule outperforms other local learning rules over a wide range of parameters. Notably, unlike other rules, the proposed rule can separate independent sources without any preprocessing, even if the number of sources is unknown. The successful performance of the proposed rule is then demonstrated using natural images and movies. We discuss the implications of this finding for our understanding of neuronal information processing and its promising applications to neuromorphic engineering. PMID:27323661
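The ICA setting addressed here can be illustrated with a minimal sketch. The code below is not the learning rule proposed in the paper; it applies a textbook natural-gradient (infomax) ICA update with a tanh nonlinearity to whitened mixtures of two super-Gaussian sources, with an arbitrary mixing matrix and learning rate chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
S = rng.laplace(size=(2, n))                 # two independent super-Gaussian sources
A = np.array([[0.8, 0.4], [0.3, 0.9]])       # arbitrary mixing matrix
X = A @ S                                    # observed superposition

# Whiten the mixtures (zero mean, identity covariance)
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
Xw = E @ np.diag(d ** -0.5) @ E.T @ X

# Batch natural-gradient infomax ICA; fixed point: E[tanh(u) u^T] = I
W = np.eye(2)
for _ in range(500):
    U = W @ Xw
    W = W + 0.05 * (np.eye(2) - np.tanh(U) @ U.T / n) @ W

U = W @ Xw                                   # recovered sources (up to permutation/sign)
corr = np.abs(np.corrcoef(np.vstack([U, S]))[:2, 2:])
```

Each recovered component should correlate strongly with exactly one original source, illustrating the separation the abstract describes; unlike the rule studied in the paper, this classical update relies on the whitening preprocessing step.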
Drive the Car(go)s-New Modalities to Control Cargo Trafficking in Live Cells.
Mondal, Payel; Khamo, John S; Krishnamurthy, Vishnu V; Cai, Qi; Zhang, Kai
2017-01-01
Synaptic transmission is a fundamental molecular process underlying learning and memory. Successful synaptic transmission involves coupled interaction between electrical signals (action potentials) and chemical signals (neurotransmitters). Defective synaptic transmission has been reported in a variety of neurological disorders such as autism and Alzheimer's disease. A large variety of macromolecules and organelles are enriched near functional synapses. Although a portion of macromolecules can be produced locally at the synapse, a large number of synaptic components, especially membrane-bound receptors and peptide neurotransmitters, require active transport machinery to reach their sites of action. This spatial relocation is mediated by energy-consuming, motor protein-driven cargo trafficking. Properly regulated cargo trafficking is of fundamental importance to neuronal functions, including synaptic transmission. In this review, we discuss the molecular machinery of cargo trafficking with emphasis on new experimental strategies that enable direct modulation of cargo trafficking in live cells. These strategies promise to provide insights into a quantitative understanding of cargo trafficking, which could lead to new intervention strategies for the treatment of neurological diseases.
Neurobiological and Endocrine Correlates of Individual Differences in Spatial Learning Ability
Sandi, Carmen; Cordero, M. Isabel; Merino, José J.; Kruyt, Nyika D.; Regan, Ciaran M.; Murphy, Keith J.
2004-01-01
The polysialylated neural cell adhesion molecule (PSA-NCAM) has been implicated in activity-dependent synaptic remodeling and memory formation. Here, we questioned whether training-induced modulation of PSA-NCAM expression might be related to individual differences in spatial learning abilities. At 12 h posttraining, immunohistochemical analyses revealed a learning-induced up-regulation of PSA-NCAM in the hippocampal dentate gyrus that was related to the spatial learning abilities displayed by rats during training. Specifically, a positive correlation was found between latency to find the platform and subsequent activated PSA levels, indicating that greater induction of polysialylation was observed in rats with the slower acquisition curve. At posttraining times when no learning-associated activation of PSA was observed, no such correlation was found. Further experiments revealed that performance in the massed water maze training is related to a pattern of spatial learning and memory abilities, and to learning-related glucocorticoid responsiveness. Taken together, our findings suggest that the learning-related neural circuits of fast learners are better suited to solving the water maze task than those of slow learners, the latter relying more on structural reorganization to form memory, rather than the relatively economic mechanism of altering synaptic efficacy that is likely used by the former. PMID:15169853
Srinivasa, Narayan; Cho, Youngkwan
2014-01-01
A spiking neural network model is described for learning to discriminate among spatial patterns in an unsupervised manner. The network anatomy consists of source neurons that are activated by external inputs, a reservoir that resembles a generic cortical layer with an excitatory-inhibitory (EI) network and a sink layer of neurons for readout. Synaptic plasticity in the form of STDP is imposed on all the excitatory and inhibitory synapses at all times. While long-term excitatory STDP enables sparse and efficient learning of the salient features in inputs, inhibitory STDP enables this learning to be stable by establishing a balance between excitatory and inhibitory currents at each neuron in the network. The synaptic weights between source and reservoir neurons form a basis set for the input patterns. The neural trajectories generated in the reservoir due to input stimulation and lateral connections between reservoir neurons can be readout by the sink layer neurons. This activity is used for adaptation of synapses between reservoir and sink layer neurons. A new measure called the discriminability index (DI) is introduced to compute if the network can discriminate between old patterns already presented in an initial training session. The DI is also used to compute if the network adapts to new patterns without losing its ability to discriminate among old patterns. The final outcome is that the network is able to correctly discriminate between all patterns—both old and new. This result holds as long as inhibitory synapses employ STDP to continuously enable current balance in the network. The results suggest a possible direction for future investigation into how spiking neural networks could address the stability-plasticity question despite having continuous synaptic plasticity. PMID:25566045
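The plasticity rule referred to throughout the abstract above is pair-based STDP. As a generic point of reference (the amplitudes and time constant below are illustrative textbook values, not parameters from the model), the exponential STDP window can be sketched as:

```python
import numpy as np

def stdp_dw(delta_t_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Pair-based STDP weight change for a pre->post spike lag delta_t_ms
    (positive when the presynaptic spike arrives before the postsynaptic one)."""
    dt = np.asarray(delta_t_ms, dtype=float)
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau_ms),    # causal pairing: potentiate
                    -a_minus * np.exp(dt / tau_ms))   # anti-causal pairing: depress
```

Nearly coincident spike pairs produce the largest weight changes, and the sign depends only on spike order; applying the same form of rule to inhibitory synapses is what lets the model keep excitatory and inhibitory currents balanced.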
Garg, Akhil R; Obermayer, Klaus; Bhaumik, Basabi
2005-01-01
Recent experimental studies of hetero-synaptic interactions in various systems have shown the role of signaling in plasticity, challenging the conventional understanding of Hebb's rule. It has also been found that activity plays a major role in plasticity, with neurotrophins acting as molecular signals translating activity into structural changes. Furthermore, the role of synaptic efficacy in biasing the outcome of competition has also been revealed recently. Motivated by these experimental findings, we present a model for the development of simple-cell receptive field structure based on competitive hetero-synaptic interactions for neurotrophins combined with cooperative hetero-synaptic interactions in the spatial domain. We find that with a proper balance of competition and cooperation, the inputs from the two populations (ON/OFF) of LGN cells segregate starting from the homogeneous state, yielding segregated ON and OFF regions in the simple-cell receptive field. Our modeling study supports the experimental findings, suggesting roles for both synaptic efficacy and spatial signaling. Using this model we obtain simple-cell receptive fields even for positively correlated activity of ON/OFF cells. We also compare different mechanisms for computing the response of a cortical cell and study their possible role in the sharpening of orientation selectivity. We find that the degree of selectivity improvement in individual cells varies from case to case, depending on the structure of the receptive field and the type of sharpening mechanism.
Synaptic Plasticity, Dementia and Alzheimer Disease.
Skaper, Stephen D; Facci, Laura; Zusso, Morena; Giusti, Pietro
2017-01-01
Neuroplasticity is not only shaped by learning and memory but is also a mediator of responses to neuron attrition and injury (compensatory plasticity). As an ongoing process it reacts to neuronal cell activity and injury, death, and genesis, which encompasses the modulation of structural and functional processes of axons, dendrites, and synapses. The range of structural elements that comprise plasticity includes long-term potentiation (a cellular correlate of learning and memory), synaptic efficacy and remodelling, synaptogenesis, axonal sprouting and dendritic remodelling, and neurogenesis and recruitment. Degenerative diseases of the human brain continue to pose one of biomedicine's most intractable problems. Research on human neurodegeneration is now moving from descriptive to mechanistic analyses. At the same time, it is increasingly apparent that morphological lesions traditionally used by neuropathologists to confirm post-mortem clinical diagnosis might furnish us with an experimentally tractable handle to understand causative pathways. Consider the aging-dependent neurodegenerative disorder Alzheimer's disease (AD), which is characterised at the neuropathological level by deposits of insoluble amyloid β-peptide (Aβ) in extracellular plaques and aggregated tau protein, which is found largely in the intracellular neurofibrillary tangles. We now appreciate that mild cognitive impairment in early AD may be due to synaptic dysfunction caused by accumulation of non-fibrillar, oligomeric Aβ, occurring well in advance of evident widespread synaptic loss and neurodegeneration. Soluble Aβ oligomers can adversely affect synaptic structure and plasticity at extremely low concentrations, although the molecular substrates by which synaptic memory mechanisms are disrupted remain to be fully elucidated. The dendritic spine constitutes a primary locus of excitatory synaptic transmission in the mammalian central nervous system. These structures protruding from dendritic shafts undergo dynamic changes in number, size and shape in response to variations in hormonal status, developmental stage, and changes in afferent input. It is perhaps not unexpected that loss of spine density may be linked to cognitive and memory impairment in AD, although the underlying mechanism(s) remain uncertain. This article aims to present a critical overview of current knowledge on the bases of synaptic dysfunction in neurodegenerative diseases, with a focus on AD, and will cover amyloid- and non-amyloid-driven mechanisms. We will also consider emerging data dealing with potential therapeutic approaches for ameliorating the cognitive and memory deficits associated with these disorders. Copyright © Bentham Science Publishers; for any queries, please email epub@benthamscience.org.
Morimura, Naoko; Yasuda, Hiroki; Yamaguchi, Kazuhiko; Katayama, Kei-ichi; Hatayama, Minoru; Tomioka, Naoko H.; Odagawa, Maya; Kamiya, Akiko; Iwayama, Yoshimi; Maekawa, Motoko; Nakamura, Kazuhiko; Matsuzaki, Hideo; Tsujii, Masatsugu; Yamada, Kazuyuki; Yoshikawa, Takeo; Aruga, Jun
2017-01-01
Lrfn2/SALM1 is a PSD-95-interacting synapse adhesion molecule, and human LRFN2 is associated with learning disabilities. However, its role in higher brain function and the underlying mechanisms remain unknown. Here, we show that Lrfn2 knockout mice exhibit autism-like behavioural abnormalities, including social withdrawal, decreased vocal communication, increased stereotyped activities and prepulse inhibition deficits, together with enhanced learning and memory. In the hippocampus, the levels of synaptic PSD-95 and GluA1 are decreased. The synapses are structurally and functionally immature, with spindle-shaped spines, smaller postsynaptic densities, a reduced AMPA/NMDA ratio, and enhanced LTP. In vitro experiments reveal that synaptic surface expression of AMPAR depends on the direct interaction between Lrfn2 and PSD-95. Furthermore, we detect functionally defective LRFN2 missense mutations in autism and schizophrenia patients. Together, these findings indicate that Lrfn2/LRFN2 serve as core components of excitatory synapse maturation and maintenance, and that their dysfunction causes immature/silent synapses with a pathophysiological state. PMID:28604739
Spiking neural P systems with multiple channels.
Peng, Hong; Yang, Jinyu; Wang, Jun; Wang, Tao; Sun, Zhang; Song, Xiaoxiao; Luo, Xiaohui; Huang, Xiangnian
2017-11-01
Spiking neural P systems (SNP systems, in short) are a class of distributed parallel computing systems inspired from the neurophysiological behavior of biological spiking neurons. In this paper, we investigate a new variant of SNP systems in which each neuron has one or more synaptic channels, called spiking neural P systems with multiple channels (SNP-MC systems, in short). The spiking rules with channel label are introduced to handle the firing mechanism of neurons, where the channel labels indicate synaptic channels of transmitting the generated spikes. The computation power of SNP-MC systems is investigated. Specifically, we prove that SNP-MC systems are Turing universal as both number generating and number accepting devices. Copyright © 2017 Elsevier Ltd. All rights reserved.
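Formal SNP systems gate their rules with regular expressions over spike counts; a deliberately simplified, threshold-based toy can still convey the core mechanics the abstract describes, namely spikes being consumed by a rule and routed along labelled synaptic channels. The neuron names, thresholds and channel tags below are invented for illustration and this is not the formal SNP-MC semantics:

```python
def snp_step(counts, rules, synapses):
    """One synchronous step of a toy multi-channel SNP-like system.

    counts:   {neuron: spike_count}
    rules:    {neuron: [(threshold, consumed, emitted, channel), ...]}
    synapses: {(neuron, channel): [target_neuron, ...]}
    """
    emissions = []
    for neuron, count in counts.items():
        for threshold, consumed, emitted, channel in rules.get(neuron, []):
            if count >= threshold:                 # rule is applicable
                counts[neuron] -= consumed         # spikes are consumed...
                emissions.append((neuron, channel, emitted))
                break                              # at most one rule fires per neuron
    for source, channel, emitted in emissions:     # ...then delivered along the channel
        for target in synapses.get((source, channel), []):
            counts[target] += emitted
    return counts

# Neuron 'n1' holds 2 spikes and fires one spike on channel 1 (towards 'n2');
# channel 2 (towards 'n3') stays silent because no rule uses it.
state = snp_step(
    {"n1": 2, "n2": 0, "n3": 0},
    {"n1": [(2, 2, 1, 1)]},
    {("n1", 1): ["n2"], ("n1", 2): ["n3"]},
)
```

The channel label on each rule is what distinguishes SNP-MC systems from classical SNP systems: the same neuron can direct its output to different subsets of targets depending on which rule fires.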
Dang, Nguyen Tuan; Akai-Kasada, Megumi; Asai, Tetsuya; Saito, Akira; Kuwahara, Yuji; Hokkaido University Collaboration
2015-03-01
Machine learning with artificial neural networks is considered one of the best ways to understand how the human brain trains itself to process information. In this study, we successfully developed programs using a supervised machine-learning algorithm. However, these supervised learning processes for the neural network required a very powerful computing configuration. Driven by the need for greater computing ability and lower power consumption, accelerator circuits become critical. To develop such accelerator circuits for supervised machine learning, a conducting-polymer micro/nanowire growth process was realized and applied as a synaptic weight controller. In this work, high-conductivity polypyrrole (PPy) and poly(3,4-ethylenedioxythiophene) (PEDOT) wires were grown potentiostatically, bridging designated electrodes prefabricated by lithography, when square-wave AC voltages of appropriate amplitude and frequency were applied. The micro/nanowire growth process emulated the neurotransmitter-release process of synapses inside a biological neuron, and the variation of a wire's resistance during growth served as the variation of the synaptic weight in the machine-learning algorithm. This work was carried out in cooperation with the Graduate School of Information Science and Technology, Hokkaido University.
Baez, María Verónica; Cercato, Magalí Cecilia; Jerusalinsky, Diana Alicia
2018-01-01
NMDA ionotropic glutamate receptors (NMDARs) are crucial in activity-dependent synaptic changes and in learning and memory. NMDARs are composed of two essential GluN1 subunits and two regulatory subunits which define their pharmacological and physiological profile. In CNS structures involved in cognitive functions, such as the hippocampus and prefrontal cortex, GluN2A and GluN2B are major regulatory subunits; their expression is dynamic and tightly regulated, but little is known about specific changes after plasticity induction or memory acquisition. Data strongly suggest that following appropriate stimulation, there is a rapid increase in surface GluN2A-NMDAR at postsynapses, attributed to lateral receptor mobilization from adjacent locations. Whenever synaptic plasticity is induced or memory is consolidated, more GluN2A-NMDARs are assembled, likely using GluN2A from local translation and GluN1 from the local ER. Later on, NMDARs are mobilized from other pools, and there is de novo synthesis at the neuronal soma. Changes in GluN1 or NMDAR levels induced by synaptic plasticity and by spatial memory formation seem to occur in different waves of NMDAR transport/expression/degradation, with a net increase at the postsynaptic side and a rise in expression at both the spine and neuronal soma. This review aims to put together that information and the proposed hypotheses.
Marty, Vincent; Kuzmiski, J Brent; Baimoukhametova, Dinara V; Bains, Jaideep S
2011-01-01
Glutamatergic synaptic inputs onto parvocellular neurosecretory cells (PNCs) in the paraventricular nucleus of the hypothalamus (PVN) regulate the hypothalamic-pituitary-adrenal (HPA) axis responses to stress and undergo stress-dependent changes in their capacity to transmit information. In spite of their pivotal role in regulating PNCs, relatively little is known about the fundamental rules that govern transmission at these synapses. Furthermore, since salient information in the nervous system is often transmitted in bursts, it is also important to understand the short-term dynamics of glutamate transmission under basal conditions. To characterize these properties, we obtained whole-cell patch clamp recordings from PNCs in brain slices from postnatal day 21–35 male Sprague–Dawley rats and examined EPSCs. EPSCs were elicited by electrically stimulating glutamatergic afferents along the periventricular aspect. In response to a paired-pulse stimulation protocol, EPSCs generally displayed a robust short-term depression that recovered within 5 s. Similarly, trains of synaptic stimuli (5–50 Hz) resulted in a frequency-dependent depression until a near steady state was achieved. Application of inhibitors of AMPA receptor (AMPAR) desensitization or of a low-affinity, competitive AMPAR antagonist failed to affect the depression due to paired-pulse and train stimulation, indicating that this use-dependent short-term synaptic depression has a presynaptic locus of expression. We used cumulative amplitude profiles during trains of stimulation and variance–mean analysis to estimate synaptic parameters. Finally, we report that these properties contribute to hamper the efficiency with which high frequency synaptic inputs generate spikes in PNCs, indicating that these synapses operate as effective low-pass filters in basal conditions. PMID:21727221
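Paired-pulse depression with recovery over seconds, as reported above, is qualitatively what a resource-depletion (Tsodyks-Markram-style) model of a presynaptic terminal produces. The sketch below uses illustrative parameter values for the release fraction U and recovery time constant, not values fitted to these recordings:

```python
import numpy as np

def depressing_amplitudes(spike_times_s, U=0.5, tau_rec=2.0):
    """Relative EPSC amplitudes at a depressing synapse: each spike releases
    a fraction U of the available resources x, which then recover toward 1
    with time constant tau_rec (seconds)."""
    x, last_t, amps = 1.0, None, []
    for t in spike_times_s:
        if last_t is not None:
            x = 1.0 - (1.0 - x) * np.exp(-(t - last_t) / tau_rec)  # recovery
        amps.append(U * x)   # EPSC amplitude proportional to released resources
        x -= U * x           # depletion by this spike
        last_t = t
    return amps

pp_50ms = depressing_amplitudes([0.0, 0.05])   # paired pulse, 50 ms interval
pp_5s = depressing_amplitudes([0.0, 5.0])      # paired pulse, 5 s interval
```

With these parameters the second EPSC at a 50 ms interval is roughly half the first, while after a 5 s interval it has recovered to about 96% of the first, reproducing the depression-then-recovery profile; the same depletion mechanism yields the frequency-dependent steady-state depression during trains.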
Memory Maintenance in Synapses with Calcium-Based Plasticity in the Presence of Background Activity
Higgins, David; Graupner, Michael; Brunel, Nicolas
2014-01-01
Most models of learning and memory assume that memories are maintained in neuronal circuits by persistent synaptic modifications induced by specific patterns of pre- and postsynaptic activity. For this scenario to be viable, synaptic modifications must survive the ubiquitous ongoing activity present in neural circuits in vivo. In this paper, we investigate the time scales of memory maintenance in a calcium-based synaptic plasticity model that has been shown recently to be able to fit different experimental datasets from hippocampal and neocortical preparations. We find that in the presence of background activity on the order of 1 Hz, parameters that fit pyramidal layer 5 neocortical data lead to a very fast decay of synaptic efficacy, with time scales of minutes. We then identify two ways in which this memory time scale can be extended: (i) the extracellular calcium concentration in the experiments used to fit the model is larger than estimated concentrations in vivo. Lowering extracellular calcium concentration to in vivo levels leads to an increase in memory time scales of several orders of magnitude; (ii) adding a bistability mechanism so that each synapse has two stable states at sufficiently low background activity leads to a further boost in memory time scale, since memory decay is no longer described by an exponential decay from an initial state, but by an escape from a potential well. We argue that both features are expected to be present in synapses in vivo. These results are obtained first in a single synapse connecting two independent Poisson neurons, and then in simulations of a large network of excitatory and inhibitory integrate-and-fire neurons. Our results emphasise the need for studying plasticity at physiological extracellular calcium concentration, and highlight the role of synaptic bi- or multistability in the stability of learned synaptic structures. PMID:25275319
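The contrast drawn above, exponential decay from an initial state versus escape from a potential well, can be illustrated with a one-dimensional Langevin simulation of a synaptic efficacy variable under background noise. The drift functions, noise level and time scales below are arbitrary illustrations, not the paper's calcium-based model:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, steps, noise = 0.01, 20000, 0.05

def relax(w, tau=10.0):
    """Monostable drift: efficacy decays exponentially toward baseline 0."""
    return -w / tau

def double_well(w):
    """Bistable drift with stable states at w = 0 and w = 1 (barrier at 0.5)."""
    return -4.0 * w - (w - 0.5) * (w - 1.0) * 4.0 * w + 4.0 * w - 4.0 * w * (w - 0.5) * (w - 1.0)

def double_well(w):  # simplified, equivalent form of the cubic drift
    return -4.0 * w * (w - 0.5) * (w - 1.0)

def simulate(drift, w0=1.0):
    """Euler-Maruyama integration of dw = drift(w) dt + noise dW."""
    w = w0
    for _ in range(steps):
        w += drift(w) * dt + noise * np.sqrt(dt) * rng.standard_normal()
    return w

w_mono = simulate(relax)        # forgets: drifts back to baseline
w_bi = simulate(double_well)    # retains: noise too weak to cross the barrier
```

The monostable synapse relaxes to baseline on the time scale tau, whereas the bistable one stays near its potentiated state until a rare noise fluctuation carries it over the barrier, which is why bistability converts exponential forgetting into a much slower Kramers-type escape process.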