Siri, Benoît; Berry, Hugues; Cessac, Bruno; Delord, Bruno; Quoy, Mathias
2008-12-01
We present a mathematical analysis of the effects of Hebbian learning in random recurrent neural networks, with a generic Hebbian learning rule that includes passive forgetting and different timescales for neuronal activity and learning dynamics. Previous numerical work has reported that Hebbian learning drives the system from chaos to a steady state through a sequence of bifurcations. Here, we interpret these results mathematically and show that these effects, involving a complex coupling between neuronal dynamics and synaptic graph structure, can be analyzed using Jacobian matrices, which introduce both a structural and a dynamical point of view on neural network evolution. Furthermore, we show that sensitivity to a learned pattern is maximal when the largest Lyapunov exponent is close to 0. We discuss how neural networks may take advantage of this regime of high functional interest.
A new simple ∞OH neuron model as a biologically plausible principal component analyzer.
Jankovic, M V
2003-01-01
A new approach to unsupervised learning in a single-layer neural network is discussed. An algorithm for unsupervised learning based upon the Hebbian learning rule is presented. A simple neuron model is analyzed. A dynamic neural model, which contains both feed-forward and feedback connections between the input and the output, has been adopted. The proposed learning algorithm could be more correctly named self-supervised rather than unsupervised. The solution proposed here is a modified Hebbian rule, in which the modification of the synaptic strength is proportional not to pre- and postsynaptic activity, but instead to the presynaptic and averaged value of postsynaptic activity. It is shown that the model neuron tends to extract the principal component from a stationary input vector sequence. Usually accepted additional decaying terms for the stabilization of the original Hebbian rule are avoided. Implementation of the basic Hebbian scheme would not lead to unrealistic growth of the synaptic strengths, thanks to the adopted network structure.
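To make the kind of rule described above concrete, here is a minimal Python sketch of a single linear neuron whose Hebbian update multiplies the presynaptic input by an averaged value of the postsynaptic activity. It is not the ∞OH model itself: the feed-forward/feedback dynamics that stabilise the original scheme are replaced by a simple relaxation and an explicit weight renormalisation, and all constants are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    # correlated 2-D input stream whose leading principal axis is (1, 1)/sqrt(2)
    C = np.array([[3.0, 2.0], [2.0, 3.0]])
    X = rng.multivariate_normal(mean=[0.0, 0.0], cov=C, size=5000)

    w = rng.normal(size=2)
    w /= np.linalg.norm(w)
    eta = 0.01                               # learning rate (illustrative)

    for x in X:
        y, y_trace = 0.0, []
        for _ in range(10):                  # let the output settle for this input; this
            y += 0.5 * (w @ x - y)           # relaxation stands in for the model's
            y_trace.append(y)                # feed-forward/feedback dynamics
        y_avg = np.mean(y_trace)             # averaged postsynaptic activity for this input
        w += eta * x * y_avg                 # Hebbian step: presynaptic times averaged postsynaptic
        w /= np.linalg.norm(w)               # stabilisation; a stand-in for the paper's structure

    print("learned direction:", w)           # aligns (up to sign) with the leading eigenvector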
Beta Hebbian Learning as a New Method for Exploratory Projection Pursuit.
Quintián, Héctor; Corchado, Emilio
2017-09-01
In this research, a novel family of learning rules called Beta Hebbian Learning (BHL) is thoroughly investigated to extract information from high-dimensional datasets by projecting the data onto low-dimensional (typically two-dimensional) subspaces, improving on existing exploratory methods by providing a clear representation of the data's internal structure. BHL applies a family of learning rules derived from the Probability Density Function (PDF) of the residual based on the beta distribution. This family of rules may be called Hebbian in that they all use a simple multiplication of the output of the neural network with some function of the residuals after feedback. The derived learning rules can be linked to an adaptive form of Exploratory Projection Pursuit, and with artificial distributions the networks perform as the theory suggests they should: the use of different learning rules derived from different PDFs allows the identification of "interesting" dimensions (as far from the Gaussian distribution as possible) in high-dimensional datasets. This novel algorithm, BHL, has been tested over seven artificial datasets to study the behavior of BHL parameters, and was later applied successfully to four real datasets, comparing its results, in terms of performance, with other well-known exploratory projection models such as Maximum Likelihood Hebbian Learning (MLHL), Locally Linear Embedding (LLE), Curvilinear Component Analysis (CCA), Isomap and Neural Principal Component Analysis (Neural PCA).
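The abstract places BHL within the family of negative-feedback Exploratory Projection Pursuit networks, in which a Hebbian update multiplies the network output by a function of the post-feedback residual. The sketch below shows only that generic structure; the beta-PDF-derived residual function that defines BHL is given in the paper, so f_residual here is a plain placeholder, and the dimensions and learning rate are illustrative assumptions.
    import numpy as np

    def f_residual(e):
        # placeholder nonlinearity; BHL derives its own function from a beta distribution
        return e

    def epp_step(W, x, eta=1e-3):
        y = W @ x                                  # feed-forward activation (projection)
        e = x - W.T @ y                            # residual after negative feedback
        W += eta * np.outer(y, f_residual(e))      # Hebbian: output times function of residual
        return W

    rng = np.random.default_rng(1)
    X = rng.normal(size=(2000, 10))                # toy 10-D data
    W = rng.normal(scale=0.1, size=(2, 10))        # project onto a 2-D subspace
    for x in X:
        W = epp_step(W, x)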
Reward-Modulated Hebbian Plasticity as Leverage for Partially Embodied Control in Compliant Robotics
Burms, Jeroen; Caluwaerts, Ken; Dambre, Joni
2015-01-01
In embodied computation (or morphological computation), part of the complexity of motor control is offloaded to the body dynamics. We demonstrate that a simple Hebbian-like learning rule can be used to train systems with (partial) embodiment, and can be extended outside of the scope of traditional neural networks. To this end, we apply the learning rule to optimize the connection weights of recurrent neural networks with different topologies and for various tasks. We then apply this learning rule to a simulated compliant tensegrity robot by optimizing static feedback controllers that directly exploit the dynamics of the robot body. This leads to partially embodied controllers, i.e., hybrid controllers that naturally integrate the computations that are performed by the robot body into a neural network architecture. Our results demonstrate the universal applicability of reward-modulated Hebbian learning. Furthermore, they demonstrate the robustness of systems trained with the learning rule. This study strengthens our belief that compliant robots should or can be seen as computational units, instead of dumb hardware that needs a complex controller. This link between compliant robotics and neural networks is also the main reason for our search for simple universal learning rules for both neural networks and robotics. PMID:26347645
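As a rough illustration of the reward-modulated Hebbian principle invoked above (not the authors' controller or robot model), the following sketch perturbs a linear read-out with exploration noise and reinforces noise-input correlations in proportion to the reward relative to a running baseline. The toy reward function, dimensions and constants are all assumptions made for the example.
    import numpy as np

    rng = np.random.default_rng(2)
    n_in, n_out = 8, 2
    W = np.zeros((n_out, n_in))                   # static feedback controller weights
    baseline = 0.0                                # running reward baseline

    def reward(x, u):
        # toy task (assumption): drive the output towards a fixed linear target
        target = 0.1 * (np.array([[1.0] * n_in, [-1.0] * n_in]) @ x)
        return -np.sum((u - target) ** 2)

    eta, sigma, tau = 0.1, 0.1, 0.05
    for _ in range(5000):
        x = rng.normal(size=n_in)                 # sensor reading (plant state)
        noise = sigma * rng.normal(size=n_out)    # exploration noise
        u = W @ x + noise                         # perturbed motor command
        r = reward(x, u)
        W += eta * (r - baseline) * np.outer(noise, x)   # reward-modulated Hebbian step
        baseline += tau * (r - baseline)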
Chartier, Sylvain; Proulx, Robert
2005-11-01
This paper presents a new unsupervised attractor neural network, which, contrary to optimal linear associative memory models, is able to develop nonbipolar attractors as well as bipolar attractors. Moreover, the model is able to develop less spurious attractors and has a better recall performance under random noise than any other Hopfield type neural network. Those performances are obtained by a simple Hebbian/anti-Hebbian online learning rule that directly incorporates feedback from a specific nonlinear transmission rule. Several computer simulations show the model's distinguishing properties.
Nonlinear Hebbian Learning as a Unifying Principle in Receptive Field Formation.
Brito, Carlos S N; Gerstner, Wulfram
2016-09-01
The development of sensory receptive fields has been modeled in the past by a variety of models including normative models such as sparse coding or independent component analysis and bottom-up models such as spike-timing dependent plasticity or the Bienenstock-Cooper-Munro model of synaptic plasticity. Here we show that the above variety of approaches can all be unified into a single common principle, namely nonlinear Hebbian learning. When nonlinear Hebbian learning is applied to natural images, receptive field shapes were strongly constrained by the input statistics and preprocessing, but exhibited only modest variation across different choices of nonlinearities in neuron models or synaptic plasticity rules. Neither overcompleteness nor sparse network activity are necessary for the development of localized receptive fields. The analysis of alternative sensory modalities such as auditory models or V2 development lead to the same conclusions. In all examples, receptive fields can be predicted a priori by reformulating an abstract model as nonlinear Hebbian learning. Thus nonlinear Hebbian learning and natural statistics can account for many aspects of receptive field formation across models and sensory modalities.
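A minimal sketch of the core update the abstract refers to, assuming whitened inputs: the weight change is the product of the presynaptic input and a nonlinear function of the postsynaptic activity. The cubic nonlinearity and the explicit normalisation are illustrative choices, not taken from the paper.
    import numpy as np

    def nonlinear_hebbian(X, eta=1e-3, f=lambda y: y ** 3, n_steps=20000, seed=3):
        """X: (n_samples, n_dims) array of whitened inputs."""
        rng = np.random.default_rng(seed)
        w = rng.normal(size=X.shape[1])
        w /= np.linalg.norm(w)
        for _ in range(n_steps):
            x = X[rng.integers(len(X))]
            y = w @ x
            w += eta * x * f(y)          # nonlinear Hebbian step: input times f(output)
            w /= np.linalg.norm(w)       # keep the weight vector bounded
        return w

    # example usage (assumed data): w = nonlinear_hebbian(whitened_image_patches)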
Hebbian learning and predictive mirror neurons for actions, sensations and emotions
Keysers, Christian; Gazzola, Valeria
2014-01-01
Spike-timing-dependent plasticity is considered the neurophysiological basis of Hebbian learning and has been shown to be sensitive to both contingency and contiguity between pre- and postsynaptic activity. Here, we will examine how applying this Hebbian learning rule to a system of interconnected neurons in the presence of direct or indirect re-afference (e.g. seeing/hearing one's own actions) predicts the emergence of mirror neurons with predictive properties. In this framework, we analyse how mirror neurons become a dynamic system that performs active inferences about the actions of others and allows joint actions despite sensorimotor delays. We explore how this system performs a projection of the self onto others, with egocentric biases to contribute to mind-reading. Finally, we argue that Hebbian learning predicts mirror-like neurons for sensations and emotions and review evidence for the presence of such vicarious activations outside the motor system. PMID:24778372
Learning pattern recognition and decision making in the insect brain
NASA Astrophysics Data System (ADS)
Huerta, R.
2013-01-01
We revisit the current model of learning pattern recognition in the Mushroom Bodies of insects using current experimental knowledge about the location of learning, olfactory coding and connectivity. We show that it is possible to have an efficient pattern recognition device based on the architecture of the Mushroom Bodies, sparse code, mutual inhibition and Hebbian learning only in the connections from the Kenyon cells to the output neurons. We also show that, despite the conventional wisdom that artificial neural networks are the bio-inspired model of the brain, the Mushroom Bodies actually resemble Support Vector Machines (SVMs) very closely. The derived SVM learning rules are situated in the Mushroom Bodies, are nearly identical to standard Hebbian rules, and require inhibition in the output. A very particular prediction of the model is that random elimination of Kenyon cells in the Mushroom Bodies does not impair the ability to recognize odorants previously learned.
Adaptive WTA with an analog VLSI neuromorphic learning chip.
Häfliger, Philipp
2007-03-01
In this paper, we demonstrate how a particular spike-based learning rule (where exact temporal relations between input and output spikes of a spiking model neuron determine the changes of the synaptic weights) can be tuned to express rate-based classical Hebbian learning behavior (where the average input and output spike rates are sufficient to describe the synaptic changes). This shift in behavior is controlled by the input statistic and by a single time constant. The learning rule has been implemented in a neuromorphic very large scale integration (VLSI) chip as part of a neurally inspired spike signal image processing system. The latter is the result of the European Union research project Convolution AER Vision Architecture for Real-Time (CAVIAR). Since it is implemented as a spike-based learning rule (which is most convenient in the overall spike-based system), even if it is tuned to show rate behavior, no explicit long-term average signals are computed on the chip. We show the rule's rate-based Hebbian learning ability in a classification task in both simulation and chip experiment, first with artificial stimuli and then with sensor input from the CAVIAR system.
Dynamic Hebbian Cross-Correlation Learning Resolves the Spike Timing Dependent Plasticity Conundrum.
Olde Scheper, Tjeerd V; Meredith, Rhiannon M; Mansvelder, Huibert D; van Pelt, Jaap; van Ooyen, Arjen
2017-01-01
Spike Timing-Dependent Plasticity has been found to assume many different forms. The classic STDP curve, with one potentiating and one depressing window, is only one of many possible curves that describe synaptic learning using the STDP mechanism. It has been shown experimentally that STDP curves may contain multiple LTP and LTD windows of variable width, and even inverted windows. The underlying STDP mechanism that is capable of producing such an extensive, and apparently incompatible, range of learning curves is still under investigation. In this paper, it is shown that STDP originates from a combination of two dynamic Hebbian cross-correlations of local activity at the synapse. The correlation of the presynaptic activity with the local postsynaptic activity is a robust and reliable indicator of the discrepancy between the presynaptic neuron and the postsynaptic neuron's activity. The second correlation is between the local postsynaptic activity and dendritic activity, which is a good indicator of matching local synaptic and dendritic activity. We show that this simple time-independent learning rule can give rise to many forms of the STDP learning curve. The rule regulates synaptic strength without the need for spike matching or other supervisory learning mechanisms. Local differences in dendritic activity at the synapse greatly affect the cross-correlation difference, which determines the relative contributions of different neural activity sources. Dendritic activity due to nearby synapses, action potentials, both forward and back-propagating, as well as inhibitory synapses will dynamically modify the local activity at the synapse, and the resulting STDP learning rule. The dynamic Hebbian learning rule furthermore ensures that the resulting synaptic strength is dynamically stable and that interactions between synapses do not result in local instabilities. The rule clearly demonstrates that synapses function as independent localized computational entities, each contributing to the global activity, not in a simply linear fashion, but in a manner that is appropriate to achieve local and global stability of the neuron and the entire dendritic structure.
Neuromodulated Spike-Timing-Dependent Plasticity, and Theory of Three-Factor Learning Rules.
Frémaux, Nicolas; Gerstner, Wulfram
2015-01-01
Classical Hebbian learning puts the emphasis on joint pre- and postsynaptic activity, but neglects the potential role of neuromodulators. Since neuromodulators convey information about novelty or reward, the influence of neuromodulators on synaptic plasticity is useful not just for action learning in classical conditioning, but also to decide "when" to create new memories in response to a flow of sensory stimuli. In this review, we focus on timing requirements for pre- and postsynaptic activity in conjunction with one or several phasic neuromodulatory signals. While the emphasis of the text is on conceptual models and mathematical theories, we also discuss some experimental evidence for neuromodulation of Spike-Timing-Dependent Plasticity. We highlight the importance of synaptic mechanisms in bridging the temporal gap between sensory stimulation and neuromodulatory signals, and develop a framework for a class of neo-Hebbian three-factor learning rules that depend on presynaptic activity, postsynaptic variables as well as the influence of neuromodulators.
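The class of neo-Hebbian three-factor rules discussed above can be summarised schematically: a Hebbian coincidence of pre- and postsynaptic activity only charges an eligibility trace, and the actual weight change is gated by a later neuromodulatory signal. The sketch below is generic; the Hebbian term, the time constants and the meaning of the modulatory signal M are placeholders rather than a specific published model.
    import numpy as np

    def three_factor_step(W, elig, pre, post, M, dt=0.01, tau_e=0.5, eta=0.1):
        """One step of a generic three-factor rule.
        pre, post : presynaptic / postsynaptic activity vectors
        M         : scalar neuromodulatory signal (e.g. reward or novelty)"""
        hebb = np.outer(post, pre)                 # Hebbian coincidence (factors 1 and 2)
        elig = elig + dt * (hebb - elig / tau_e)   # decaying eligibility trace
        W = W + eta * M * elig * dt                # factor 3 gates the actual weight change
        return W, elig

    rng = np.random.default_rng(0)
    W = np.zeros((5, 20))
    elig = np.zeros_like(W)
    pre, post = rng.random(20), rng.random(5)
    W, elig = three_factor_step(W, elig, pre, post, M=0.0)          # coincidence charges the trace only
    W, elig = three_factor_step(W, elig, 0 * pre, 0 * post, M=1.0)  # a delayed modulator consolidates it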
Synthetic Modeling of Autonomous Learning with a Chaotic Neural Network
NASA Astrophysics Data System (ADS)
Funabashi, Masatoshi
We investigate the possible role of intermittent chaotic dynamics, called chaotic itinerancy, in interaction with unsupervised learning rules that reinforce and weaken the neural connections depending on the dynamics itself. We first performed a hierarchical stability analysis of the Chaotic Neural Network model (CNN) according to the structure of invariant subspaces. Irregular transitions between two attractor ruins with positive maximum Lyapunov exponent were triggered by the blowout bifurcation of the attractor spaces and were associated with a riddled basin structure. We then modeled two autonomous learning rules, Hebbian learning and the spike-timing-dependent plasticity (STDP) rule, and simulated their effect on the chaotic itinerancy state of the CNN. Hebbian learning increased the residence time on attractor ruins and produced novel attractors in the minimal higher-dimensional subspace. It also augmented neuronal synchrony and established uniform modularity in the chaotic itinerancy. The STDP rule reduced the residence time on attractor ruins and brought a wide range of periodicity to the emerging attractors, possibly including strange attractors. Both learning rules selectively destroyed and preserved specific invariant subspaces, depending on the neuronal synchrony of the subspace in which the orbits were situated. The computational rationale of such autonomous learning is discussed from a connectionist perspective.
Born, Jannis; Galeazzi, Juan M; Stringer, Simon M
2017-01-01
A subset of neurons in the posterior parietal and premotor areas of the primate brain respond to the locations of visual targets in a hand-centred frame of reference. Such hand-centred visual representations are thought to play an important role in visually-guided reaching to target locations in space. In this paper we show how a biologically plausible, Hebbian learning mechanism may account for the development of localized hand-centred representations in a hierarchical neural network model of the primate visual system, VisNet. The hand-centered neurons developed in the model use an invariance learning mechanism known as continuous transformation (CT) learning. In contrast to previous theoretical proposals for the development of hand-centered visual representations, CT learning does not need a memory trace of recent neuronal activity to be incorporated in the synaptic learning rule. Instead, CT learning relies solely on a Hebbian learning rule, which is able to exploit the spatial overlap that naturally occurs between successive images of a hand-object configuration as it is shifted across different retinal locations due to saccades. Our simulations show how individual neurons in the network model can learn to respond selectively to target objects in particular locations with respect to the hand, irrespective of where the hand-object configuration occurs on the retina. The response properties of these hand-centred neurons further generalise to localised receptive fields in the hand-centred space when tested on novel hand-object configurations that have not been explored during training. Indeed, even when the network is trained with target objects presented across a near continuum of locations around the hand during training, the model continues to develop hand-centred neurons with localised receptive fields in hand-centred space. With the help of principal component analysis, we provide the first theoretical framework that explains the behavior of Hebbian learning in VisNet.
Genetic attack on neural cryptography.
Ruttor, Andreas; Kinzel, Wolfgang; Naeh, Rivka; Kanter, Ido
2006-03-01
Different scaling properties for the complexity of bidirectional synchronization and unidirectional learning are essential for the security of neural cryptography. Incrementing the synaptic depth of the networks increases the synchronization time only polynomially, but the success of the geometric attack is reduced exponentially and it clearly fails in the limit of infinite synaptic depth. This method is improved by adding a genetic algorithm, which selects the fittest neural networks. The probability of a successful genetic attack is calculated for different model parameters using numerical simulations. The results show that scaling laws observed in the case of other attacks hold for the improved algorithm, too. The number of networks needed for an effective attack grows exponentially with increasing synaptic depth. In addition, finite-size effects caused by Hebbian and anti-Hebbian learning are analyzed. These learning rules converge to the random walk rule if the synaptic depth is small compared to the square root of the system size.
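For orientation, the Hebbian update commonly described for tree parity machines in neural cryptography looks roughly as follows (a sketch with illustrative parameters, not the exact protocol of the paper): weights are bounded integers, and a hidden unit learns only when its output agrees with the network output, on steps where both partners' outputs agree.
    import numpy as np

    L, K, N = 3, 3, 100                          # synaptic depth, hidden units, inputs per unit
    rng = np.random.default_rng(4)
    w = rng.integers(-L, L + 1, size=(K, N))     # one party's weights

    def tpm_output(w, x):
        sigma = np.sign(np.sum(w * x, axis=1))   # hidden unit outputs
        sigma[sigma == 0] = -1
        return sigma, int(np.prod(sigma))        # network output tau is their product

    def hebbian_update(w, x, sigma, tau):
        for k in range(K):
            if sigma[k] == tau:                  # only units agreeing with the output learn
                w[k] = np.clip(w[k] + tau * x[k], -L, L)
        return w

    x = rng.choice([-1, 1], size=(K, N))         # public random input
    sigma, tau = tpm_output(w, x)
    w = hebbian_update(w, x, sigma, tau)         # applied only on steps where tau_A == tau_B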
E-I balance emerges naturally from continuous Hebbian learning in autonomous neural networks.
Trapp, Philip; Echeveste, Rodrigo; Gros, Claudius
2018-06-12
Spontaneous brain activity is characterized in part by a balanced asynchronous chaotic state. Cortical recordings show that excitatory (E) and inhibitory (I) drivings in the E-I balanced state are substantially larger than the overall input. We show that such a state arises naturally in fully adapting networks which are deterministic, autonomously active and not subject to stochastic external or internal drivings. Temporary imbalances between excitatory and inhibitory inputs lead to large but short-lived activity bursts that stabilize irregular dynamics. We simulate autonomous networks of rate-encoding neurons for which all synaptic weights are plastic and subject to a Hebbian plasticity rule, the flux rule, that can be derived from the stationarity principle of statistical learning. Moreover, the average firing rate is regulated individually via a standard homeostatic adaptation of the bias of each neuron's input-output non-linear function. Additionally, networks with and without short-term plasticity are considered. E-I balance may arise only when the mean excitatory and inhibitory weights are themselves balanced, modulo the overall activity level. We show that synaptic weight balance, which has hitherto been considered as given, arises naturally in autonomous neural networks when the self-limiting Hebbian synaptic plasticity rule considered here is continuously active.
Hebbian based learning with winner-take-all for spiking neural networks
NASA Astrophysics Data System (ADS)
Gupta, Ankur; Long, Lyle
2009-03-01
Learning methods for spiking neural networks are not as well developed as those for traditional neural networks, which widely use back-propagation training. We propose and implement a Hebbian-based learning method with winner-take-all competition for spiking neural networks. This approach is spike-time dependent, which makes it naturally well suited for a network of spiking neurons. Homeostasis is implemented alongside Hebbian learning, which ensures stability and quicker learning. Homeostasis implies that the net sum of incoming weights associated with a neuron remains the same. Winner-take-all is also implemented for competitive learning between output neurons. We implemented this learning rule in a biologically based vision processing system that we are developing, using layers of leaky integrate-and-fire neurons. When presented with four bars (or Gabor filters) of different orientations, the network learns to recognize the bar orientations (or Gabor filters). After training, each output neuron learns to recognize a bar at a specific orientation and responds by firing more vigorously to that bar and less vigorously to others. These neurons are found to have bell-shaped tuning curves and are similar to the simple cells experimentally observed by Hubel and Wiesel in the striate cortex of cat and monkey.
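A much-simplified, rate-based sketch of the ingredients listed above: winner-take-all competition between output units, a Hebbian update applied only to the winner, and a homeostatic renormalisation that keeps each neuron's summed incoming weights constant. The spiking (leaky integrate-and-fire) dynamics and the actual bar/Gabor stimuli of the paper are not reproduced; stimuli and constants are illustrative assumptions.
    import numpy as np

    def wta_hebbian_step(W, x, eta=0.05, weight_sum=8.0):
        y = W @ x
        winner = int(np.argmax(y))               # winner-take-all competition
        W[winner] += eta * x                     # Hebbian update for the winning neuron only
        W[winner] *= weight_sum / W[winner].sum()   # homeostasis: fixed sum of incoming weights
        return W, winner

    rng = np.random.default_rng(5)
    W = rng.uniform(0.4, 0.6, size=(4, 16))      # 4 output neurons, 16 inputs (e.g. a 4x4 patch)
    for _ in range(2000):
        x = np.zeros(16)
        bar = rng.integers(4)
        x[bar * 4:(bar + 1) * 4] = 1.0           # one of four horizontal "bar" stimuli
        W, _ = wta_hebbian_step(W, x)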
How synapses can enhance sensibility of a neural network
NASA Astrophysics Data System (ADS)
Protachevicz, P. R.; Borges, F. S.; Iarosz, K. C.; Caldas, I. L.; Baptista, M. S.; Viana, R. L.; Lameu, E. L.; Macau, E. E. N.; Batista, A. M.
2018-02-01
In this work, we study the dynamic range in a neural network modelled by a cellular automaton. We consider deterministic and non-deterministic rules to simulate electrical and chemical synapses. Chemical synapses have an intrinsic time delay and are susceptible to parameter variations guided by Hebbian learning rules. The learning rules are related to neuroplasticity, which describes changes in the neural connections of the brain. Our results show that chemical synapses can abruptly enhance the sensibility of the neural network, a manifestation that can become even more predominant if learning rules of evolution are applied to the chemical synapses.
Stefanescu, Roxana A; Shore, Susan E
2017-03-01
Cholinergic modulation contributes to adaptive sensory processing by controlling spontaneous and stimulus-evoked neural activity and long-term synaptic plasticity. In the dorsal cochlear nucleus (DCN), in vitro activation of muscarinic acetylcholine receptors (mAChRs) alters the spontaneous activity of DCN neurons and interacts with N-methyl-D-aspartate (NMDA) and endocannabinoid receptors to modulate the plasticity of parallel fiber synapses onto fusiform cells by converting Hebbian long-term potentiation to anti-Hebbian long-term depression. Because noise exposure and tinnitus are known to increase spontaneous activity in fusiform cells as well as alter stimulus timing-dependent plasticity (StTDP), it is important to understand the contribution of mAChRs to in vivo spontaneous activity and plasticity in fusiform cells. In the present study, we blocked mAChR actions by infusing atropine, a mAChR antagonist, into the DCN fusiform cell layer in normal-hearing guinea pigs. Atropine delivery leads to decreased spontaneous firing rates and increased synchronization of fusiform cell spiking activity. Consistent with StTDP alterations observed in tinnitus animals, atropine infusion induced a dominant pattern of inversion of the StTDP mean population learning rule from a Hebbian to an anti-Hebbian profile. Units preserving their initial Hebbian learning rules shifted toward more excitatory changes in StTDP, whereas units with initial suppressive learning rules transitioned toward a Hebbian profile. Together, these results implicate muscarinic cholinergic modulation as a factor in controlling in vivo fusiform cell baseline activity and plasticity, suggesting a central role in the maladaptive plasticity associated with tinnitus pathology. NEW & NOTEWORTHY This study is the first to use a novel method of atropine infusion directly into the fusiform cell layer of the dorsal cochlear nucleus coupled with simultaneous recordings of neural activity to clarify the contribution of muscarinic acetylcholine receptors (mAChRs) to in vivo fusiform cell baseline activity and auditory-somatosensory plasticity. We have determined that blocking the mAChRs increases the synchronization of spiking activity across the fusiform cell population and induces a dominant pattern of inversion in their stimulus timing-dependent plasticity. These modifications are consistent with similar changes established in previous tinnitus studies, suggesting that mAChRs might have a critical contribution in mediating the maladaptive alterations associated with tinnitus pathology. Blocking mAChRs also resulted in decreased fusiform cell spontaneous firing rates, which is in contrast with their tinnitus hyperactivity, suggesting that changes in the interactions between the cholinergic and GABAergic systems might also be an underlying factor in tinnitus pathology.
A neoHebbian framework for episodic memory; role of dopamine-dependent late LTP
Grace, Anthony A.; Duzel, Emrah
2011-01-01
According to the Hebb rule, the change in the strength of a synapse depends only on the local interaction of presynaptic and postsynaptic events. Studies at many types of synapses indicate that the early phase of long-term potentiation (LTP) has Hebbian properties. However, it is now clear that the Hebb rule does not account for late LTP; this requires an additional signal that is non-local. For novel information and motivational events such as rewards, this signal at hippocampal CA1 synapses is mediated by the neuromodulator, dopamine. In this Review, we discuss recent experimental findings that support the view that this “neoHebbian” framework can account for memory behavior in a variety of learning situations. PMID:21851992
Wilmes, Katharina Anna; Schleimer, Jan-Hendrik; Schreiber, Susanne
2017-04-01
Inhibition is known to influence the forward-directed flow of information within neurons. However, regulation of backward-directed signals, such as backpropagating action potentials (bAPs), can also enrich the functional repertoire of local circuits. Inhibitory control of bAP spread, for example, can provide a switch for the plasticity of excitatory synapses. Although such a mechanism is possible, it requires a precise timing of inhibition to annihilate bAPs without impairment of forward-directed excitatory information flow. Here, we propose a specific learning rule for inhibitory synapses to automatically generate the correct timing to gate bAPs in pyramidal cells when embedded in a local circuit of feedforward inhibition. Based on computational modeling of multi-compartmental neurons with physiological properties, we demonstrate that a learning rule with anti-Hebbian shape can establish the required temporal precision. In contrast to classical spike-timing-dependent plasticity of excitatory synapses, the proposed inhibitory learning mechanism does not necessarily require the definition of an upper bound on synaptic weights because of its tendency to self-terminate once annihilation of bAPs has been reached. Our study provides a functional context in which one of the many time-dependent learning rules that have been observed experimentally, specifically a learning rule with anti-Hebbian shape, is assigned a relevant role for inhibitory synapses. Moreover, the described mechanism is compatible with an upregulation of excitatory plasticity by disinhibition.
Keysers, Christian; Perrett, David I; Gazzola, Valeria
2014-04-01
Hebbian Learning should not be reduced to contiguity, as it detects contingency and causality. Hebbian Learning accounts of mirror neurons make predictions that differ from associative learning: Through Hebbian Learning, mirror neurons become dynamic networks that calculate predictions and prediction errors and relate to ideomotor theories. The social force of imitation is important for mirror neuron emergence and suggests canalization.
Garagnani, Max; Wennekers, Thomas; Pulvermüller, Friedemann
2009-01-01
Current cognitive theories postulate either localist representations of knowledge or fully overlapping, distributed ones. We use a connectionist model that closely replicates known anatomical properties of the cerebral cortex and neurophysiological principles to show that Hebbian learning in a multi-layer neural network leads to memory traces (cell assemblies) that are both distributed and anatomically distinct. Taking the example of word learning based on action-perception correlation, we document mechanisms underlying the emergence of these assemblies, especially (i) the recruitment of neurons and consolidation of connections defining the kernel of the assembly along with (ii) the pruning of the cell assembly’s halo (consisting of very weakly connected cells). We found that, whereas a learning rule mapping covariance led to significant overlap and merging of assemblies, a neurobiologically grounded synaptic plasticity rule with fixed LTP/LTD thresholds produced minimal overlap and prevented merging, exhibiting competitive learning behaviour. Our results are discussed in light of current theories of language and memory. As simulations with neurobiologically realistic neural networks demonstrate here spontaneous emergence of lexical representations that are both cortically dispersed and anatomically distinct, both localist and distributed cognitive accounts receive partial support. PMID:20396612
Whittington, James C. R.; Bogacz, Rafal
2017-01-01
To efficiently learn from feedback, cortical networks need to update synaptic weights on multiple levels of cortical hierarchy. An effective and well-known algorithm for computing such changes in synaptic weights is the error backpropagation algorithm. However, in this algorithm, the change in synaptic weights is a complex function of weights and activities of neurons not directly connected with the synapse being modified, whereas the changes in biological synapses are determined only by the activity of presynaptic and postsynaptic neurons. Several models have been proposed that approximate the backpropagation algorithm with local synaptic plasticity, but these models require complex external control over the network or relatively complex plasticity rules. Here we show that a network developed in the predictive coding framework can efficiently perform supervised learning fully autonomously, employing only simple local Hebbian plasticity. Furthermore, for certain parameters, the weight change in the predictive coding model converges to that of the backpropagation algorithm. This suggests that it is possible for cortical networks with simple Hebbian synaptic plasticity to implement efficient learning algorithms in which synapses in areas on multiple levels of hierarchy are modified to minimize the error on the output. PMID:28333583
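A hedged two-layer sketch of the idea described above: value and error units per layer, a short relaxation of the hidden activity with the input and output clamped, followed by purely local weight updates of the form error times presynaptic rate. The network size, nonlinearity and relaxation schedule are our own choices, and the paper's analysis of convergence to backpropagation is not reproduced.
    import numpy as np

    def f(a):  return np.tanh(a)
    def fp(a): return 1.0 - np.tanh(a) ** 2

    def pc_step(W1, W2, x0, target, eta=0.01, relax_steps=100, dt=0.1):
        """One supervised example: clamp input and output, relax the hidden activity,
        then apply local (error x presynaptic rate) Hebbian-style weight updates."""
        x1 = W1 @ f(x0)                              # feed-forward initialisation of the hidden layer
        x2 = target                                  # output layer clamped to the label
        for _ in range(relax_steps):
            e1 = x1 - W1 @ f(x0)                     # prediction error at the hidden layer
            e2 = x2 - W2 @ f(x1)                     # prediction error at the output layer
            x1 += dt * (-e1 + fp(x1) * (W2.T @ e2))  # relaxation of hidden activity
        e1 = x1 - W1 @ f(x0)
        e2 = x2 - W2 @ f(x1)
        W1 += eta * np.outer(e1, f(x0))              # local update: error times presynaptic rate
        W2 += eta * np.outer(e2, f(x1))
        return W1, W2

    rng = np.random.default_rng(6)
    W1 = rng.normal(scale=0.1, size=(16, 8))         # hidden x input
    W2 = rng.normal(scale=0.1, size=(4, 16))         # output x hidden
    x0 = rng.normal(size=8)
    target = np.zeros(4); target[1] = 1.0
    W1, W2 = pc_step(W1, W2, x0, target)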
Hebbian Learning in a Random Network Captures Selectivity Properties of the Prefrontal Cortex.
Lindsay, Grace W; Rigotti, Mattia; Warden, Melissa R; Miller, Earl K; Fusi, Stefano
2017-11-08
Complex cognitive behaviors, such as context-switching and rule-following, are thought to be supported by the prefrontal cortex (PFC). Neural activity in the PFC must thus be specialized to specific tasks while retaining flexibility. Nonlinear "mixed" selectivity is an important neurophysiological trait for enabling complex and context-dependent behaviors. Here we investigate (1) the extent to which the PFC exhibits computationally relevant properties, such as mixed selectivity, and (2) how such properties could arise via circuit mechanisms. We show that PFC cells recorded from male and female rhesus macaques during a complex task show a moderate level of specialization and structure that is not replicated by a model wherein cells receive random feedforward inputs. While random connectivity can be effective at generating mixed selectivity, the data show significantly more mixed selectivity than predicted by a model with otherwise matched parameters. A simple Hebbian learning rule applied to the random connectivity, however, increases mixed selectivity and enables the model to match the data more accurately. To explain how learning achieves this, we provide analysis along with a clear geometric interpretation of the impact of learning on selectivity. After learning, the model also matches the data on measures of noise, response density, clustering, and the distribution of selectivities. Of two styles of Hebbian learning tested, the simpler and more biologically plausible option better matches the data. These modeling results provide clues about how neural properties important for cognition can arise in a circuit and make clear experimental predictions regarding how various measures of selectivity would evolve during animal training. SIGNIFICANCE STATEMENT The prefrontal cortex is a brain region believed to support the ability of animals to engage in complex behavior. How neurons in this area respond to stimuli, and in particular to combinations of stimuli ("mixed selectivity"), is a topic of interest. Even though models with random feedforward connectivity are capable of creating computationally relevant mixed selectivity, such a model does not match the levels of mixed selectivity seen in the data analyzed in this study. Adding simple Hebbian learning to the model increases mixed selectivity to the correct level and makes the model match the data on several other relevant measures. This study thus offers predictions on how mixed selectivity and other properties evolve with training.
NASA Astrophysics Data System (ADS)
Wang, Laiyuan; Wang, Zhiyong; Lin, Jinyi; Yang, Jie; Xie, Linghai; Yi, Mingdong; Li, Wen; Ling, Haifeng; Ou, Changjin; Huang, Wei
2016-10-01
Most simulations of neuroplasticity in memristors, which are potentially used to develop artificial synapses, are confined to the basic biological Hebbian rules. However, these simple rules can potentially induce excessive excitation/inhibition, and even the collapse of neural activity, because they neglect the long-term homeostasis involved in the frameworks of realistic neural networks. Here, we develop organic CuPc-based memristors whose excitatory and inhibitory conductances can implement both Hebbian rules and homeostatic plasticity, complementary to Hebbian patterns and conducive to long-term homeostasis. In another adaptive situation for homeostasis, in thicker samples, the overall excitation under periodic moderate stimuli tends to decrease and is recovered under intense inputs. Interestingly, the prototypes can be equipped with bio-inspired habituation and sensitization functions that outperform conventional simplified algorithms. These functions mutually regulate each other to attain homeostasis. Therefore, we develop a novel versatile memristor with advanced synaptic homeostasis for comprehensive neural functions.
Associative (not Hebbian) learning and the mirror neuron system.
Cooper, Richard P; Cook, Richard; Dickinson, Anthony; Heyes, Cecilia M
2013-04-12
The associative sequence learning (ASL) hypothesis suggests that sensorimotor experience plays an inductive role in the development of the mirror neuron system, and that it can play this crucial role because its effects are mediated by learning that is sensitive to both contingency and contiguity. The Hebbian hypothesis proposes that sensorimotor experience plays a facilitative role, and that its effects are mediated by learning that is sensitive only to contiguity. We tested the associative and Hebbian accounts by computational modelling of automatic imitation data indicating that MNS responsivity is reduced more by contingent and signalled than by non-contingent sensorimotor training (Cook et al. [7]). Supporting the associative account, we found that the reduction in automatic imitation could be reproduced by an existing interactive activation model of imitative compatibility when augmented with Rescorla-Wagner learning, but not with Hebbian or quasi-Hebbian learning. The work argues for an associative, but against a Hebbian, account of the effect of sensorimotor training on automatic imitation. We argue, by extension, that associative learning is potentially sufficient for MNS development.
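The contrast the abstract draws can be stated in two update rules: a contiguity-only Hebbian increment versus a contingency-sensitive Rescorla-Wagner (delta-rule) increment driven by prediction error. The forms below are the standard textbook ones with illustrative constants, not the exact equations of the model used in the paper.
    def hebbian_update(v, pre, post, eta=0.1):
        # strengthens the association whenever the two events merely co-occur
        return v + eta * pre * post

    def rescorla_wagner_update(v, pre, post, eta=0.1):
        # strengthens the association only while the outcome is unpredicted (prediction error)
        return v + eta * pre * (post - v * pre)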
Ruan, Hongyu; Yao, Wei-Dong
2017-01-25
Addictive drugs usurp neural plasticity mechanisms that normally serve reward-related learning and memory, primarily by evoking changes in glutamatergic synaptic strength in the mesocorticolimbic dopamine circuitry. Here, we show that repeated cocaine exposure in vivo does not alter synaptic strength in the mouse prefrontal cortex during an early period of withdrawal, but instead modifies a Hebbian quantitative synaptic learning rule by broadening the temporal window and lowering the induction threshold for spike-timing-dependent LTP (t-LTP). After repeated, but not single, daily cocaine injections, t-LTP in layer V pyramidal neurons is induced at +30 ms, a normally ineffective timing interval for t-LTP induction in saline-exposed mice. This cocaine-induced, extended-timing t-LTP lasts for ∼1 week after terminating cocaine and is accompanied by an increased susceptibility to potentiation by fewer pre-post spike pairs, indicating a reduced t-LTP induction threshold. Basal synaptic strength and the maximal attainable t-LTP magnitude remain unchanged after cocaine exposure. We further show that the cocaine facilitation of t-LTP induction is caused by sensitized D1-cAMP/protein kinase A dopamine signaling in pyramidal neurons, which then pathologically recruits voltage-gated L-type Ca2+ channels that synergize with GluN2A-containing NMDA receptors to drive t-LTP at extended timing. Our results illustrate a mechanism by which cocaine, acting on a key neuromodulation pathway, modifies the coincidence detection window during Hebbian plasticity to facilitate associative synaptic potentiation in prefrontal excitatory circuits. By modifying rules that govern activity-dependent synaptic plasticity, addictive drugs can derail the experience-driven neural circuit remodeling process important for executive control of reward and addiction. It is believed that addictive drugs often render an addict's brain reward system hypersensitive, leaving the individual more susceptible to relapse. We found that repeated cocaine exposure alters a Hebbian associative synaptic learning rule that governs activity-dependent synaptic plasticity in the mouse prefrontal cortex, characterized by a broader temporal window and a lower threshold for spike-timing-dependent LTP (t-LTP), a cellular form of learning and memory. This rule change is caused by cocaine-exacerbated D1-cAMP/protein kinase A dopamine signaling in pyramidal neurons that in turn pathologically recruits L-type Ca2+ channels to facilitate coincidence detection during t-LTP induction. Our study provides novel insights on how cocaine, even with only brief exposure, may prime neural circuits for subsequent experience-dependent remodeling that may underlie certain addictive behavior.
Hanuschkin, A; Ganguli, S; Hahnloser, R H R
2013-01-01
Mirror neurons are neurons whose responses to the observation of a motor act resemble responses measured during production of that act. Computationally, mirror neurons have been viewed as evidence for the existence of internal inverse models. Such models, rooted within control theory, map desired sensory targets onto the motor commands required to generate those targets. To jointly explore both the formation of mirrored responses and their functional contribution to inverse models, we develop a correlation-based theory of interactions between a sensory and a motor area. We show that a simple eligibility-weighted Hebbian learning rule, operating within a sensorimotor loop during motor explorations and stabilized by heterosynaptic competition, naturally gives rise to mirror neurons as well as control theoretic inverse models encoded in the synaptic weights from sensory to motor neurons. Crucially, we find that the correlational structure or stereotypy of the neural code underlying motor explorations determines the nature of the learned inverse model: random motor codes lead to causal inverses that map sensory activity patterns to their motor causes; such inverses are maximally useful, by allowing the imitation of arbitrary sensory target sequences. By contrast, stereotyped motor codes lead to less useful predictive inverses that map sensory activity to future motor actions. Our theory generalizes previous work on inverse models by showing that such models can be learned in a simple Hebbian framework without the need for error signals or backpropagation, and it makes new conceptual connections between the causal nature of inverse models, the statistical structure of motor variability, and the time-lag between sensory and motor responses of mirror neurons. Applied to bird song learning, our theory can account for puzzling aspects of the song system, including the necessity of sensorimotor gating and the selectivity of auditory responses to bird's own song (BOS) stimuli.
Thermodynamic efficiency of learning a rule in neural networks
NASA Astrophysics Data System (ADS)
Goldt, Sebastian; Seifert, Udo
2017-11-01
Biological systems have to build models from their sensory input data that allow them to efficiently process previously unseen inputs. Here, we study a neural network learning a binary classification rule for these inputs from examples provided by a teacher. We analyse the ability of the network to apply the rule to new inputs, that is to generalise from past experience. Using stochastic thermodynamics, we show that the thermodynamic costs of the learning process provide an upper bound on the amount of information that the network is able to learn from its teacher for both batch and online learning. This allows us to introduce a thermodynamic efficiency of learning. We analytically compute the dynamics and the efficiency of a noisy neural network performing online learning in the thermodynamic limit. In particular, we analyse three popular learning algorithms, namely Hebbian, Perceptron and AdaTron learning. Our work extends the methods of stochastic thermodynamics to a new type of learning problem and might form a suitable basis for investigating the thermodynamics of decision-making.
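For readers unfamiliar with the three algorithms compared in this analysis, the sketch below implements the standard online teacher-student perceptron setup (not the stochastic-thermodynamic calculation itself). The update rules follow the textbook forms usually attributed to Hebbian, Perceptron and AdaTron learning; the learning rate, input dimension and number of examples are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    N = 100                                  # input dimension
    teacher = rng.normal(size=N)
    teacher /= np.linalg.norm(teacher)

    def run(rule, steps=20000, eta=1.0):
        w = np.zeros(N)
        for _ in range(steps):
            x = rng.normal(size=N)
            label = np.sign(teacher @ x)     # teacher's answer
            h = w @ x                        # student's local field
            if rule == "hebbian":
                w += (eta / N) * label * x                   # always update
            elif rule == "perceptron":
                if np.sign(h) != label:
                    w += (eta / N) * label * x               # update only on error
            elif rule == "adatron":
                if label * h <= 0:
                    w += (eta / N) * abs(h) * label * x      # error-driven, field-weighted
        # Generalisation error from the teacher/student overlap (standard formula
        # for spherical Gaussian inputs).
        overlap = (w @ teacher) / (np.linalg.norm(w) + 1e-12)
        return float(np.arccos(np.clip(overlap, -1, 1)) / np.pi)

    for rule in ("hebbian", "perceptron", "adatron"):
        print(rule, "generalisation error:", round(run(rule), 3))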
Dual coding with STDP in a spiking recurrent neural network model of the hippocampus.
Bush, Daniel; Philippides, Andrew; Husbands, Phil; O'Shea, Michael
2010-07-01
The firing rate of single neurons in the mammalian hippocampus has been demonstrated to encode for a range of spatial and non-spatial stimuli. It has also been demonstrated that phase of firing, with respect to the theta oscillation that dominates the hippocampal EEG during stereotype learning behaviour, correlates with an animal's spatial location. These findings have led to the hypothesis that the hippocampus operates using a dual (rate and temporal) coding system. To investigate the phenomenon of dual coding in the hippocampus, we examine a spiking recurrent network model with theta coded neural dynamics and an STDP rule that mediates rate-coded Hebbian learning when pre- and post-synaptic firing is stochastic. We demonstrate that this plasticity rule can generate both symmetric and asymmetric connections between neurons that fire at concurrent or successive theta phase, respectively, and subsequently produce both pattern completion and sequence prediction from partial cues. This unifies previously disparate auto- and hetero-associative network models of hippocampal function and provides them with a firmer basis in modern neurobiology. Furthermore, the encoding and reactivation of activity in mutually exciting Hebbian cell assemblies demonstrated here is believed to represent a fundamental mechanism of cognitive processing in the brain.
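For readers who want the plasticity kernel in explicit form, the snippet below is the standard pair-based exponential STDP window (pre-before-post potentiates, post-before-pre depresses). The amplitudes and 20 ms time constants are generic textbook choices, not the parameters of the hippocampal model above, which additionally couples the rule to theta-phase-coded, stochastic firing.

    import numpy as np

    def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
        """Weight change for one pre/post spike pair, dt = t_post - t_pre in ms.

        Positive dt (pre before post) gives potentiation, negative dt depression.
        """
        if dt >= 0:
            return a_plus * np.exp(-dt / tau_plus)
        return -a_minus * np.exp(dt / tau_minus)

    for dt in (-40, -10, 0, 10, 40):
        print(f"dt = {dt:+4d} ms  ->  dw = {stdp_dw(dt):+.5f}")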
Synaptic and nonsynaptic plasticity approximating probabilistic inference
Tully, Philip J.; Hennig, Matthias H.; Lansner, Anders
2014-01-01
Learning and memory operations in neural circuits are believed to involve molecular cascades of synaptic and nonsynaptic changes that lead to a diverse repertoire of dynamical phenomena at higher levels of processing. Hebbian and homeostatic plasticity, neuromodulation, and intrinsic excitability all conspire to form and maintain memories. But it is still unclear how these seemingly redundant mechanisms could jointly orchestrate learning in a more unified system. To this end, a Hebbian learning rule for spiking neurons inspired by Bayesian statistics is proposed. In this model, synaptic weights and intrinsic currents are adapted on-line upon arrival of single spikes, which initiate a cascade of temporally interacting memory traces that locally estimate probabilities associated with relative neuronal activation levels. Trace dynamics enable synaptic learning to readily demonstrate a spike-timing dependence, stably return to a set-point over long time scales, and remain competitive despite this stability. Beyond unsupervised learning, linking the traces with an external plasticity-modulating signal enables spike-based reinforcement learning. At the postsynaptic neuron, the traces are represented by an activity-dependent ion channel that is shown to regulate the input received by a postsynaptic cell and generate intrinsic graded persistent firing levels. We show how spike-based Hebbian-Bayesian learning can be performed in a simulated inference task using integrate-and-fire (IAF) neurons that are Poisson-firing and background-driven, similar to the preferred regime of cortical neurons. Our results support the view that neurons can represent information in the form of probability distributions, and that probabilistic inference could be a functional by-product of coupled synaptic and nonsynaptic mechanisms operating over several timescales. The model provides a biophysical realization of Bayesian computation by reconciling several observed neural phenomena whose functional effects are only partially understood in concert. PMID:24782758
Competitive STDP Learning of Overlapping Spatial Patterns.
Krunglevicius, Dalius
2015-08-01
Spike-timing-dependent plasticity (STDP) is a set of Hebbian learning rules firmly based on biological evidence. It has been demonstrated that one of the STDP learning rules is suited for learning spatiotemporal patterns. When multiple neurons are organized in a simple competitive spiking neural network, this network is capable of learning multiple distinct patterns. If patterns overlap significantly (i.e., patterns are mutually inclusive), however, competition would not preclude a trained neuron from responding to a new pattern and adjusting its synaptic weights accordingly. This letter presents a simple neural network that combines vertical inhibition with a Euclidean distance-dependent synaptic strength factor. This approach helps to solve the problem of pattern size-dependent parameter optimality and significantly reduces the probability of a neuron's forgetting an already learned pattern. For demonstration purposes, the network was trained on the first ten letters of the Braille alphabet.
Panda, Priyadarshini; Roy, Kaushik
2017-01-01
Synaptic Plasticity, the foundation for learning and memory formation in the human brain, manifests in various forms. Here, we combine standard spike-timing-correlation-based Hebbian plasticity with a non-Hebbian synaptic decay mechanism for training a recurrent spiking neural model to generate sequences. We show that inclusion of the adaptive decay of synaptic weights with standard STDP helps learn stable contextual dependencies between temporal sequences, while reducing the strong attractor states that emerge in recurrent models due to feedback loops. Furthermore, we show that the combined learning scheme suppresses the chaotic activity in the recurrent model substantially, thereby enhancing its ability to generate sequences consistently even in the presence of perturbations. PMID:29311774
Jankovic, Marko; Ogawa, Hidemitsu
2003-08-01
This paper presents one possible implementation of a transformation that performs linear mapping to a lower-dimensional subspace; the principal component subspace is the one analyzed. The idea implemented in this paper is a generalization of the recently proposed infinity OH neural method for principal component extraction. The calculations in the newly proposed method are performed locally--a feature usually considered desirable from the biological point of view. Compared with some other well-known methods, the proposed synaptic efficacy learning rule requires less information about the values of the other efficacies to make a single efficacy modification. Synaptic efficacies are modified by implementing a Modulated Hebb-type (MH) learning rule. A slightly modified MH algorithm, named the Modulated Hebb-Oja (MHO) algorithm, is also introduced. The structural similarity of the proposed network to part of the retinal circuit is presented as well.
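Since the algorithms above generalize Oja-type Hebbian principal-component extraction, a minimal reference implementation of the classic single-neuron Oja rule may be helpful as background; it is not the MH or MHO algorithms themselves, whose modulation terms are specific to the paper, and the toy data and learning rate are assumptions.

    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic zero-mean data with an obvious dominant direction.
    cov = np.array([[3.0, 1.0],
                    [1.0, 1.0]])
    X = rng.multivariate_normal(mean=[0, 0], cov=cov, size=20000)

    w = rng.normal(size=2)
    eta = 0.005
    for x in X:
        y = w @ x
        w += eta * y * (x - y * w)   # Oja's rule: Hebbian term with implicit normalisation

    # Compare with the leading eigenvector of the covariance matrix (up to sign).
    eigvals, eigvecs = np.linalg.eigh(cov)
    pc1 = eigvecs[:, np.argmax(eigvals)]
    print("learned direction :", np.round(w / np.linalg.norm(w), 3))
    print("true 1st component:", np.round(pc1, 3))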
Criterion learning in rule-based categorization: Simulation of neural mechanism and new data
Helie, Sebastien; Ell, Shawn W.; Filoteo, J. Vincent; Maddox, W. Todd
2015-01-01
In perceptual categorization, rule selection consists of selecting one or several stimulus-dimensions to be used to categorize the stimuli (e.g., categorize lines according to their length). Once a rule has been selected, criterion learning consists of defining how stimuli will be grouped using the selected dimension(s) (e.g., if the selected rule is line length, define ‘long’ and ‘short’). Very little is known about the neuroscience of criterion learning, and most existing computational models do not provide a biological mechanism for this process. In this article, we introduce a new model of rule learning called Heterosynaptic Inhibitory Criterion Learning (HICL). HICL includes a biologically-based explanation of criterion learning, and we use new category-learning data to test key aspects of the model. In HICL, rule selective cells in prefrontal cortex modulate stimulus-response associations using pre-synaptic inhibition. Criterion learning is implemented by a new type of heterosynaptic error-driven Hebbian learning at inhibitory synapses that uses feedback to drive cell activation above/below thresholds representing ionic gating mechanisms. The model is used to account for new human categorization data from two experiments showing that: (1) changing rule criterion on a given dimension is easier if irrelevant dimensions are also changing (Experiment 1), and (2) showing that changing the relevant rule dimension and learning a new criterion is more difficult, but also facilitated by a change in the irrelevant dimension (Experiment 2). We conclude with a discussion of some of HICL’s implications for future research on rule learning. PMID:25682349
Criterion learning in rule-based categorization: simulation of neural mechanism and new data.
Helie, Sebastien; Ell, Shawn W; Filoteo, J Vincent; Maddox, W Todd
2015-04-01
In perceptual categorization, rule selection consists of selecting one or several stimulus-dimensions to be used to categorize the stimuli (e.g., categorize lines according to their length). Once a rule has been selected, criterion learning consists of defining how stimuli will be grouped using the selected dimension(s) (e.g., if the selected rule is line length, define 'long' and 'short'). Very little is known about the neuroscience of criterion learning, and most existing computational models do not provide a biological mechanism for this process. In this article, we introduce a new model of rule learning called Heterosynaptic Inhibitory Criterion Learning (HICL). HICL includes a biologically-based explanation of criterion learning, and we use new category-learning data to test key aspects of the model. In HICL, rule selective cells in prefrontal cortex modulate stimulus-response associations using pre-synaptic inhibition. Criterion learning is implemented by a new type of heterosynaptic error-driven Hebbian learning at inhibitory synapses that uses feedback to drive cell activation above/below thresholds representing ionic gating mechanisms. The model is used to account for new human categorization data from two experiments showing that: (1) changing rule criterion on a given dimension is easier if irrelevant dimensions are also changing (Experiment 1), and (2) showing that changing the relevant rule dimension and learning a new criterion is more difficult, but also facilitated by a change in the irrelevant dimension (Experiment 2). We conclude with a discussion of some of HICL's implications for future research on rule learning. Copyright © 2015 Elsevier Inc. All rights reserved.
Contingency learning is reduced for high conflict stimuli.
Whitehead, Peter S; Brewer, Gene A; Patwary, Nowed; Blais, Chris
2016-09-16
Recent theories have proposed that contingency learning occurs independent of control processes. These parallel processing accounts propose that behavioral effects originally thought to be products of control processes are in fact products solely of contingency learning. This view runs contrary to conflict-mediated Hebbian-learning models that posit control and contingency learning are parts of an interactive system. In this study we replicate the contingency learning effect and modify it to further test the veracity of the parallel processing accounts in comparison to conflict-mediated Hebbian-learning models. This is accomplished by manipulating conflict to test for an interaction, or lack thereof, between conflict and contingency learning. The results are consistent with conflict-mediated Hebbian-learning in that the addition of conflict reduces the magnitude of the contingency learning effect. Copyright © 2016 Elsevier B.V. All rights reserved.
On the asymptotic equivalence between differential Hebbian and temporal difference learning.
Kolodziejski, Christoph; Porr, Bernd; Wörgötter, Florentin
2009-04-01
In this theoretical contribution, we provide mathematical proof that two of the most important classes of network learning-correlation-based differential Hebbian learning and reward-based temporal difference learning-are asymptotically equivalent when timing the learning with a modulatory signal. This opens the opportunity to consistently reformulate most of the abstract reinforcement learning framework from a correlation-based perspective more closely related to the biophysics of neurons.
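To make the claimed correspondence concrete in the simplest possible setting, the sketch below runs a tabular TD(0) learner and a "differential Hebbian" learner (presynaptic activity multiplied by the discrete temporal derivative of a reward-augmented postsynaptic signal) on the same five-state chain. With this naive discretisation and a discount factor of 1 the two updates coincide exactly; the paper's actual result concerns asymptotic equivalence of the continuous-time, modulated rules, so treat this purely as an illustration of the relationship under assumed parameters.

    import numpy as np

    n_states = 5

    def features(s):                      # one-hot (tabular) features
        x = np.zeros(n_states)
        x[s] = 1.0
        return x

    def episode():                        # walk left-to-right along the chain
        s = 0
        while s < n_states - 1:
            s_next = s + 1
            r = 1.0 if s_next == n_states - 1 else 0.0
            yield features(s), r, features(s_next)
            s = s_next

    alpha, gamma = 0.1, 1.0
    w_td = np.zeros(n_states)             # temporal-difference learner
    w_dh = np.zeros(n_states)             # differential Hebbian learner

    for _ in range(200):
        for x, r, x_next in episode():
            # TD(0): weight change driven by the reward prediction error.
            delta = r + gamma * (w_td @ x_next) - (w_td @ x)
            w_td += alpha * delta * x
            # Differential Hebbian: presynaptic activity times the discrete
            # temporal derivative of a reward-augmented postsynaptic signal.
            post_now = w_dh @ x
            post_next = (w_dh @ x_next) + r
            w_dh += alpha * (post_next - post_now) * x

    print("TD(0) value estimates         :", np.round(w_td, 3))
    print("differential Hebbian estimates:", np.round(w_dh, 3))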
Neural learning circuits utilizing nano-crystalline silicon transistors and memristors.
Cantley, Kurtis D; Subramaniam, Anand; Stiegler, Harvey J; Chapman, Richard A; Vogel, Eric M
2012-04-01
Properties of neural circuits are demonstrated via SPICE simulations and their applications are discussed. The neuron and synapse subcircuits include ambipolar nano-crystalline silicon transistor and memristor device models based on measured data. Neuron circuit characteristics and the Hebbian synaptic learning rule are shown to be similar to biology. Changes in the average firing rate learning rule depending on various circuit parameters are also presented. The subcircuits are then connected into larger neural networks that demonstrate fundamental properties including associative learning and pulse coincidence detection. Learned extraction of a fundamental frequency component from noisy inputs is demonstrated. It is then shown that if the fundamental sinusoid of one neuron input is out of phase with the rest, its synaptic connection changes differently than the others. Such behavior indicates that the system can learn to detect which signals are important in the general population, and that there is a spike-timing-dependent component of the learning mechanism. Finally, future circuit design and considerations are discussed, including requirements for the memristive device.
Circuit mechanisms of sensorimotor learning
Makino, Hiroshi; Hwang, Eun Jung; Hedrick, Nathan G.; Komiyama, Takaki
2016-01-01
The relationship between the brain and the environment is flexible, forming the foundation for our ability to learn. Here we review the current state of our understanding of the modifications in the sensorimotor pathway related to sensorimotor learning. We divide the process into three hierarchical levels with distinct goals: 1) sensory perceptual learning, 2) sensorimotor associative learning, and 3) motor skill learning. Perceptual learning optimizes the representations of important sensory stimuli. Associative learning and the initial phase of motor skill learning are ensured by feedback-based mechanisms that permit trial-and-error learning. The later phase of motor skill learning may primarily involve feedback-independent mechanisms operating under the classic Hebbian rule. With these changes under distinct constraints and mechanisms, sensorimotor learning establishes dedicated circuitry for the reproduction of stereotyped neural activity patterns and behavior. PMID:27883902
A comparison of two neural network schemes for navigation
NASA Technical Reports Server (NTRS)
Munro, Paul W.
1989-01-01
Neural networks have been applied to tasks in several areas of artificial intelligence, including vision, speech, and language. Relatively little work has been done in the area of problem solving. Two approaches to path-finding are presented, both using neural network techniques. Both techniques require a training period. Training under the back propagation (BPL) method was accomplished by presenting representations of (current position, goal position) pairs as input and appropriate actions as output. The Hebbian/interactive activation (HIA) method uses the Hebbian rule to associate points that are nearby. A path to a goal is found by activating a representation of the goal in the network and processing until the current position is activated above some threshold level. BPL, using back-propagation learning, failed to learn except in a very trivial fashion that is equivalent to table-lookup techniques. HIA performed much better and required storage of fewer weights. In drawing a comparison, it is important to note that back propagation techniques depend critically upon the forms of representation used, and can be sensitive to parameters in the simulations; hence the BPL technique may yet yield strong results.
A comparison of two neural network schemes for navigation
NASA Technical Reports Server (NTRS)
Munro, Paul
1990-01-01
Neural networks have been applied to tasks in several areas of artificial intelligence, including vision, speech, and language. Relatively little work has been done in the area of problem solving. Two approaches to path-finding are presented, both using neural network techniques. Both techniques require a training period. Training under the back propagation (BPL) method was accomplished by presenting representations of (current position, goal position) pairs as input and appropriate actions as output. The Hebbian/interactive activation (HIA) method uses the Hebbian rule to associate points that are nearby. A path to a goal is found by activating a representation of the goal in the network and processing until the current position is activated above some threshold level. BPL, using back-propagation learning, failed to learn except in a very trivial fashion that is equivalent to table-lookup techniques. HIA performed much better and required storage of fewer weights. In drawing a comparison, it is important to note that back propagation techniques depend critically upon the forms of representation used, and can be sensitive to parameters in the simulations; hence the BPL technique may yet yield strong results.
DCS-Neural-Network Program for Aircraft Control and Testing
NASA Technical Reports Server (NTRS)
Jorgensen, Charles C.
2006-01-01
A computer program implements a dynamic-cell-structure (DCS) artificial neural network that can perform such tasks as learning selected aerodynamic characteristics of an airplane from wind-tunnel test data and computing real-time stability and control derivatives of the airplane for use in feedback linearized control. A DCS neural network is one of several types of neural networks that can incorporate additional nodes in order to rapidly learn increasingly complex relationships between inputs and outputs. In the DCS neural network implemented by the present program, the insertion of nodes is based on accumulated error. A competitive Hebbian learning rule (a supervised-learning rule in which connection weights are adjusted to minimize differences between actual and desired outputs for training examples) is used. A Kohonen-style learning rule (derived from a relatively simple training algorithm, it implements a Delaunay triangulation layout of neurons) is used to adjust node positions during training. Neighborhood topology determines which nodes are used to estimate new values. The network learns, starting with two nodes, and adds new nodes sequentially in locations chosen to maximize reductions in global error. At any given time during learning, the error becomes homogeneously distributed over all nodes.
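As a rough illustration of the kind of training loop such a dynamic-cell-structure network uses, the sketch below grows a small network on a toy 2-D input distribution: the two nearest nodes to each sample are linked (competitive Hebbian edge creation), the winner and its topological neighbours are pulled toward the sample (Kohonen-style adaptation), and a new node is periodically inserted where accumulated error is largest. All parameter values and the data distribution are illustrative assumptions; this is not the NASA flight-software implementation.

    import numpy as np

    rng = np.random.default_rng(4)
    data = rng.uniform(-1, 1, size=(4000, 2))      # toy input distribution

    nodes = [rng.uniform(-1, 1, size=2) for _ in range(2)]
    errors = [0.0, 0.0]
    edges = set()
    eps_win, eps_nbr, insert_every = 0.05, 0.005, 300

    for t, x in enumerate(data, start=1):
        d = [np.linalg.norm(x - n) for n in nodes]
        s1, s2 = np.argsort(d)[:2]                       # two nearest nodes
        edges.add(tuple(sorted((int(s1), int(s2)))))     # competitive Hebbian edge
        errors[s1] += d[s1] ** 2                         # accumulate local error
        nodes[s1] += eps_win * (x - nodes[s1])           # Kohonen-style winner update
        for a, b in edges:                               # pull topological neighbours
            if s1 in (a, b):
                j = b if a == s1 else a
                nodes[j] += eps_nbr * (x - nodes[j])
        if t % insert_every == 0:                        # grow where error is largest
            q = int(np.argmax(errors))
            nodes.append(nodes[q] + 0.1 * rng.normal(size=2))
            errors[q] *= 0.5
            errors.append(errors[q])

    print(f"{len(nodes)} nodes, {len(edges)} edges after {len(data)} samples")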
Koehler, Seth D.; Shore, Susan E.
2015-01-01
Central auditory circuits are influenced by the somatosensory system, a relationship that may underlie tinnitus generation. In the guinea pig dorsal cochlear nucleus (DCN), pairing spinal trigeminal nucleus (Sp5) stimulation with tones at specific intervals and orders facilitated or suppressed subsequent tone-evoked neural responses, reflecting spike timing-dependent plasticity (STDP). Furthermore, after noise-induced tinnitus, bimodal responses in DCN were shifted from Hebbian to anti-Hebbian timing rules with less discrete temporal windows, suggesting a role for bimodal plasticity in tinnitus. Here, we aimed to determine if multisensory STDP principles like those in DCN also exist in primary auditory cortex (A1), and whether they change following noise-induced tinnitus. Tone-evoked and spontaneous neural responses were recorded before and 15 min after bimodal stimulation in which the intervals and orders of auditory-somatosensory stimuli were randomized. Tone-evoked and spontaneous firing rates were influenced by the interval and order of the bimodal stimuli, and in sham-controls Hebbian-like timing rules predominated as was seen in DCN. In noise-exposed animals with and without tinnitus, timing rules shifted away from those found in sham-controls to more anti-Hebbian rules. Only those animals with evidence of tinnitus showed increased spontaneous firing rates, a purported neurophysiological correlate of tinnitus in A1. Together, these findings suggest that bimodal plasticity is also evident in A1 following noise damage and may have implications for tinnitus generation and therapeutic intervention across the central auditory circuit. PMID:26289461
Real-time modeling of primitive environments through wavelet sensors and Hebbian learning
NASA Astrophysics Data System (ADS)
Vaccaro, James M.; Yaworsky, Paul S.
1999-06-01
Modeling the world through sensory input necessarily provides a unique perspective for the observer. Given a limited perspective, objects and events cannot always be encoded precisely but must involve crude, quick approximations to deal with sensory information in a real-time manner. As an example, when avoiding an oncoming car, a pedestrian needs to identify the fact that a car is approaching before ascertaining the model or color of the vehicle. In our methodology, we use wavelet-based sensors with self-organized learning to encode basic sensory information in real-time. The wavelet-based sensors provide necessary transformations while a rank-based Hebbian learning scheme encodes a self-organized environment through translation, scale and orientation invariant sensors. Such a self-organized environment is made possible by combining wavelet sets which are orthonormal, log-scale with linear orientation and have automatically generated membership functions. In earlier work we used Gabor wavelet filters, rank-based Hebbian learning and an exponential modulation function to encode textural information from images. Many different types of modulation are possible, but based on biological findings the exponential modulation function provided a good approximation of first-spike coding of 'integrate and fire' neurons. These types of Hebbian encoding schemes (e.g., exponential modulation, etc.) are useful for quick response and learning, provide several advantages over contemporary neural network learning approaches, and have been found to quantize data nonlinearly. By combining wavelets with Hebbian learning we can provide a real-time front-end for modeling an intelligent process, such as the autonomous control of agents in a simulated environment.
Spriggs, M J; Sumner, R L; McMillan, R L; Moran, R J; Kirk, I J; Muthukumaraswamy, S D
2018-04-30
The Roving Mismatch Negativity (MMN) and Visual LTP paradigms are widely used as independent measures of sensory plasticity. However, the paradigms are built upon fundamentally different (and seemingly opposing) models of perceptual learning; namely, Predictive Coding (MMN) and Hebbian plasticity (LTP). The aim of the current study was to compare the generative mechanisms of the MMN and visual LTP, therefore assessing whether Predictive Coding and Hebbian mechanisms co-occur in the brain. Forty participants were presented with both paradigms during EEG recording. Consistent with Predictive Coding and Hebbian predictions, Dynamic Causal Modelling revealed that the generation of the MMN modulates forward and backward connections in the underlying network, while visual LTP only modulates forward connections. These results suggest that both Predictive Coding and Hebbian mechanisms are utilized by the brain under different task demands. This therefore indicates that both tasks provide unique insight into plasticity mechanisms, which has important implications for future studies of aberrant plasticity in clinical populations. Copyright © 2018 Elsevier Inc. All rights reserved.
Higher-order neural networks, Pólya polynomials, and Fermi cluster diagrams
NASA Astrophysics Data System (ADS)
Kürten, K. E.; Clark, J. W.
2003-09-01
The problem of controlling higher-order interactions in neural networks is addressed with techniques commonly applied in the cluster analysis of quantum many-particle systems. For multineuron synaptic weights chosen according to a straightforward extension of the standard Hebbian learning rule, we show that higher-order contributions to the stimulus felt by a given neuron can be readily evaluated via Pólya's combinatoric group-theoretical approach or equivalently by exploiting a precise formal analogy with fermion diagrammatics.
Programmed to learn? The ontogeny of mirror neurons.
Del Giudice, Marco; Manera, Valeria; Keysers, Christian
2009-03-01
Mirror neurons are increasingly recognized as a crucial substrate for many developmental processes, including imitation and social learning. Although there has been considerable progress in describing their function and localization in the primate and adult human brain, we still know little about their ontogeny. The idea that mirror neurons result from Hebbian learning while the child observes/hears his/her own actions has received remarkable empirical support in recent years. Here we add a new element to this proposal, by suggesting that the infant's perceptual-motor system is optimized to provide the brain with the correct input for Hebbian learning, thus facilitating the association between the perception of actions and their corresponding motor programs. We review evidence that infants (1) have a marked visual preference for hands, (2) show cyclic movement patterns with a frequency that could be in the optimal range for enhanced Hebbian learning, and (3) show synchronized theta EEG (also known to favour synaptic Hebbian learning) in mirror cortical areas during self-observation of grasping. These conditions, taken together, would allow mirror neurons for manual actions to develop quickly and reliably through experiential canalization. Our hypothesis provides a plausible pathway for the emergence of mirror neurons that integrates learning with genetic pre-programming, suggesting new avenues for research on the link between synaptic processes and behaviour in ontogeny.
Bamford, Simeon A; Murray, Alan F; Willshaw, David J
2010-02-01
A distributed and locally reprogrammable address-event receiver has been designed, in which incoming address-events are monitored simultaneously by all synapses, allowing for arbitrarily large axonal fan-out without reducing channel capacity. Synapses can change the address of their presynaptic neuron, allowing the distributed implementation of a biologically realistic learning rule, with both synapse formation and elimination (synaptic rewiring). Probabilistic synapse formation leads to topographic map development, made possible by a cross-chip current-mode calculation of Euclidean distance. As well as synaptic plasticity in rewiring, synapses change weights using a competitive Hebbian learning rule (spike-timing-dependent plasticity). The weight plasticity allows receptive fields to be modified based on spatio-temporal correlations in the inputs, and the rewiring plasticity allows these modifications to become embedded in the network topology.
A theory of local learning, the learning channel, and the optimality of backpropagation.
Baldi, Pierre; Sadowski, Peter
2016-11-01
In a physical neural system, where storage and processing are intimately intertwined, the rules for adjusting the synaptic weights can only depend on variables that are available locally, such as the activity of the pre- and post-synaptic neurons, resulting in local learning rules. A systematic framework for studying the space of local learning rules is obtained by first specifying the nature of the local variables, and then the functional form that ties them together into each learning rule. Such a framework enables also the systematic discovery of new learning rules and exploration of relationships between learning rules and group symmetries. We study polynomial local learning rules stratified by their degree and analyze their behavior and capabilities in both linear and non-linear units and networks. Stacking local learning rules in deep feedforward networks leads to deep local learning. While deep local learning can learn interesting representations, it cannot learn complex input-output functions, even when targets are available for the top layer. Learning complex input-output functions requires local deep learning where target information is communicated to the deep layers through a backward learning channel. The nature of the communicated information about the targets and the structure of the learning channel partition the space of learning algorithms. For any learning algorithm, the capacity of the learning channel can be defined as the number of bits provided about the error gradient per weight, divided by the number of required operations per weight. We estimate the capacity associated with several learning algorithms and show that backpropagation outperforms them by simultaneously maximizing the information rate and minimizing the computational cost. This result is also shown to be true for recurrent networks, by unfolding them in time. The theory clarifies the concept of Hebbian learning, establishes the power and limitations of local learning rules, introduces the learning channel which enables a formal analysis of the optimality of backpropagation, and explains the sparsity of the space of learning rules discovered so far. Copyright © 2016 Elsevier Ltd. All rights reserved.
Learning with incomplete information in the committee machine.
Bergmann, Urs M; Kühn, Reimer; Stamatescu, Ion-Olimpiu
2009-12-01
We study the problem of learning with incomplete information in a student-teacher setup for the committee machine. The learning algorithm combines unsupervised Hebbian learning of a series of associations with a delayed reinforcement step, in which the set of previously learnt associations is partly and indiscriminately unlearnt, to an extent that depends on the success rate of the student on these previously learnt associations. The relevant learning parameter lambda represents the strength of Hebbian learning. A coarse-grained analysis of the system yields a set of differential equations for overlaps of student and teacher weight vectors, whose solutions provide a complete description of the learning behavior. It reveals complicated dynamics showing that perfect generalization can be obtained if the learning parameter exceeds a threshold lambda_c, and if the initial value of the overlap between student and teacher weights is non-zero. In case of convergence, the generalization error exhibits a power law decay as a function of the number of examples used in training, with an exponent that depends on the parameter lambda. An investigation of the system flow in a subspace with broken permutation symmetry between hidden units reveals a bifurcation point lambda* above which perfect generalization does not depend on initial conditions. Finally, we demonstrate that cases of a complexity mismatch between student and teacher are optimally resolved in the sense that an over-complex student can emulate a less complex teacher rule, while an under-complex student reaches a state which realizes the minimal generalization error compatible with the complexity mismatch.
Li, Yi; Zhong, Yingpeng; Zhang, Jinjian; Xu, Lei; Wang, Qing; Sun, Huajun; Tong, Hao; Cheng, Xiaoming; Miao, Xiangshui
2014-05-09
Nanoscale inorganic electronic synapses or synaptic devices, which are capable of emulating the functions of biological synapses of brain neuronal systems, are regarded as the basic building blocks for beyond-Von Neumann computing architecture, combining information storage and processing. Here, we demonstrate a Ag/AgInSbTe/Ag structure for chalcogenide memristor-based electronic synapses. The memristive characteristics with reproducible gradual resistance tuning are utilised to mimic the activity-dependent synaptic plasticity that serves as the basis of memory and learning. Bidirectional long-term Hebbian plasticity modulation is implemented by the coactivity of pre- and postsynaptic spikes, and the sign and degree are affected by assorted factors including the temporal difference, spike rate and voltage. Moreover, synaptic saturation is observed to be an adjustment of Hebbian rules to stabilise the growth of synaptic weights. Our results may contribute to the development of highly functional plastic electronic synapses and the further construction of next-generation parallel neuromorphic computing architecture.
Functional requirements for reward-modulated spike-timing-dependent plasticity.
Frémaux, Nicolas; Sprekeler, Henning; Gerstner, Wulfram
2010-10-06
Recent experiments have shown that spike-timing-dependent plasticity is influenced by neuromodulation. We derive theoretical conditions for successful learning of reward-related behavior for a large class of learning rules where Hebbian synaptic plasticity is conditioned on a global modulatory factor signaling reward. We show that all learning rules in this class can be separated into a term that captures the covariance of neuronal firing and reward and a second term that represents the influence of unsupervised learning. The unsupervised term, which is, in general, detrimental for reward-based learning, can be suppressed if the neuromodulatory signal encodes the difference between the reward and the expected reward, but only if the expected reward is calculated for each task and stimulus separately. If several tasks are to be learned simultaneously, the nervous system needs an internal critic that is able to predict the expected reward for arbitrary stimuli. We show that, with a critic, reward-modulated spike-timing-dependent plasticity is capable of learning motor trajectories with a temporal resolution of tens of milliseconds. The relation to temporal difference learning, the relevance of block-based learning paradigms, and the limitations of learning with a critic are discussed.
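The family of rules analysed here shares the generic form "Hebbian eligibility trace gated by reward minus expected reward". The sketch below shows that generic form on a deliberately trivial two-synapse problem in which only the first synapse's pre/post pairing actually causes reward; with the running-average critic subtracted, only that synapse grows, whereas in this toy the plain reward-gated term alone would also drag the irrelevant synapse upward, illustrating the detrimental unsupervised component mentioned above. Rates and trial structure are illustrative assumptions, not the paper's spiking model.

    import numpy as np

    rng = np.random.default_rng(5)

    n_syn = 2
    w = np.zeros(n_syn)
    r_bar = 0.0                      # running estimate of expected reward (the "critic")
    eta, critic_rate = 0.01, 0.05

    for trial in range(2000):
        # Eligibility: 1 if this synapse saw a pre-before-post pairing on this trial.
        elig = rng.integers(0, 2, size=n_syn).astype(float)
        # Reward depends only on synapse 0's pairing; synapse 1 is irrelevant.
        reward = elig[0]
        # Reward-modulated Hebbian update: eligibility gated by the reward
        # prediction error (reward minus expected reward).
        w += eta * (reward - r_bar) * elig
        r_bar += critic_rate * (reward - r_bar)

    print("relevant synapse   w[0] =", round(float(w[0]), 2))
    print("irrelevant synapse w[1] =", round(float(w[1]), 2))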
Stereo, Shading, and Surfaces: Curvature Constraints Couple Neural Computations
2014-04-23
Bouchard, Kristofer E.; Ganguli, Surya; Brainard, Michael S.
2015-01-01
The majority of distinct sensory and motor events occur as temporally ordered sequences with rich probabilistic structure. Sequences can be characterized by the probability of transitioning from the current state to upcoming states (forward probability), as well as the probability of having transitioned to the current state from previous states (backward probability). Despite the prevalence of probabilistic sequencing of both sensory and motor events, the Hebbian mechanisms that mold synapses to reflect the statistics of experienced probabilistic sequences are not well understood. Here, we show through analytic calculations and numerical simulations that Hebbian plasticity (correlation, covariance, and STDP) with pre-synaptic competition can develop synaptic weights equal to the conditional forward transition probabilities present in the input sequence. In contrast, post-synaptic competition can develop synaptic weights proportional to the conditional backward probabilities of the same input sequence. We demonstrate that to stably reflect the conditional probability of a neuron's inputs and outputs, local Hebbian plasticity requires balance between competitive learning forces that promote synaptic differentiation and homogenizing learning forces that promote synaptic stabilization. The balance between these forces dictates a prior over the distribution of learned synaptic weights, strongly influencing both the rate at which structure emerges and the entropy of the final distribution of synaptic weights. Together, these results demonstrate a simple correspondence between the biophysical organization of neurons, the site of synaptic competition, and the temporal flow of information encoded in synaptic weights by Hebbian plasticity while highlighting the utility of balancing learning forces to accurately encode probability distributions, and prior expectations over such probability distributions. PMID:26257637
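The headline result above can be checked with a back-of-the-envelope simulation: accumulate Hebbian coactivity counts along a Markov state sequence and normalise them either across each presynaptic neuron's outgoing synapses or across each postsynaptic neuron's incoming synapses. The count-based sketch below is a static caricature of the online plasticity dynamics in the paper, using an assumed 3-state chain.

    import numpy as np

    rng = np.random.default_rng(6)

    # A 3-state Markov chain with known forward transition probabilities.
    P = np.array([[0.1, 0.6, 0.3],
                  [0.7, 0.1, 0.2],
                  [0.2, 0.3, 0.5]])

    # Generate a long state sequence.
    seq = [0]
    for _ in range(50000):
        seq.append(rng.choice(3, p=P[seq[-1]]))

    # Hebbian coactivity counts between successive states (pre -> post).
    C = np.zeros((3, 3))
    for pre, post in zip(seq[:-1], seq[1:]):
        C[pre, post] += 1.0

    forward = C / C.sum(axis=1, keepdims=True)   # presynaptic competition: rows sum to 1
    backward = C / C.sum(axis=0, keepdims=True)  # postsynaptic competition: columns sum to 1

    print("estimated forward  P(next | current):\n", np.round(forward, 2))
    print("estimated backward P(previous | current):\n", np.round(backward, 2))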
Liu, Jiajuan; Dosher, Barbara Anne; Lu, Zhong-Lin
2015-01-01
Using an asymmetrical set of vernier stimuli (−15″, −10″, −5″, +10″, +15″) together with reverse feedback on the small subthreshold offset stimulus (−5″) induces response bias in performance (Aberg & Herzog, 2012; Herzog, Eward, Hermens, & Fahle, 2006; Herzog & Fahle, 1999). These conditions are of interest for testing models of perceptual learning because the world does not always present balanced stimulus frequencies or accurate feedback. Here we provide a comprehensive model for the complex set of asymmetric training results using the augmented Hebbian reweighting model (Liu, Dosher, & Lu, 2014; Petrov, Dosher, & Lu, 2005, 2006) and the multilocation integrated reweighting theory (Dosher, Jeter, Liu, & Lu, 2013). The augmented Hebbian learning algorithm incorporates trial-by-trial feedback, when present, as another input to the decision unit and uses the observer's internal response to update the weights otherwise; block feedback alters the weights on bias correction (Liu et al., 2014). Asymmetric training with reversed feedback incorporates biases into the weights between representation and decision. The model correctly predicts the basic induction effect, its dependence on trial-by-trial feedback, and the specificity of bias to stimulus orientation and spatial location, extending the range of augmented Hebbian reweighting accounts of perceptual learning. PMID:26418382
Liu, Jiajuan; Dosher, Barbara Anne; Lu, Zhong-Lin
2015-01-01
Using an asymmetrical set of vernier stimuli (-15″, -10″, -5″, +10″, +15″) together with reverse feedback on the small subthreshold offset stimulus (-5″) induces response bias in performance (Aberg & Herzog, 2012; Herzog, Eward, Hermens, & Fahle, 2006; Herzog & Fahle, 1999). These conditions are of interest for testing models of perceptual learning because the world does not always present balanced stimulus frequencies or accurate feedback. Here we provide a comprehensive model for the complex set of asymmetric training results using the augmented Hebbian reweighting model (Liu, Dosher, & Lu, 2014; Petrov, Dosher, & Lu, 2005, 2006) and the multilocation integrated reweighting theory (Dosher, Jeter, Liu, & Lu, 2013). The augmented Hebbian learning algorithm incorporates trial-by-trial feedback, when present, as another input to the decision unit and uses the observer's internal response to update the weights otherwise; block feedback alters the weights on bias correction (Liu et al., 2014). Asymmetric training with reversed feedback incorporates biases into the weights between representation and decision. The model correctly predicts the basic induction effect, its dependence on trial-by-trial feedback, and the specificity of bias to stimulus orientation and spatial location, extending the range of augmented Hebbian reweighting accounts of perceptual learning.
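The reweighting scheme described above, in which trial-by-trial feedback acts as one more input to the decision unit and the representation-to-decision weights change by a Hebbian product of input activation and the feedback-augmented decision activation, can be caricatured as follows. The stimulus representation, task statistics and constants below are stand-ins for illustration only; they are not the published model's orientation- and frequency-tuned channels or its fitted parameters.

    import numpy as np

    rng = np.random.default_rng(7)

    dim, eta, w_feedback = 100, 0.01, 1.0
    signal = rng.normal(size=dim)
    signal /= np.linalg.norm(signal)               # the task-relevant direction

    w = np.zeros(dim)                              # representation-to-decision weights
    correct = []
    for t in range(4000):
        label = rng.choice([-1, 1])
        x = label * signal + rng.normal(size=dim)  # noisy stimulus representation
        internal = np.tanh(w @ x)                  # decision unit's own response
        correct.append(internal * label > 0)
        # Augmented Hebbian step: external feedback enters as an extra input to
        # the decision unit, and the weight change is the Hebbian product of the
        # stimulus activation with the feedback-augmented postsynaptic activation.
        post = np.tanh(w @ x + w_feedback * label)
        w += eta * post * x

    print("accuracy, first 500 trials:", round(float(np.mean(correct[:500])), 2))
    print("accuracy, last 500 trials :", round(float(np.mean(correct[-500:])), 2))

Reversing the feedback sign on a subset of stimuli in such a scheme would push the learned weights toward a biased decision boundary, which is the induction effect the model is used to explain.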
Leibo, Joel Z.; Liao, Qianli; Freiwald, Winrich A.; Anselmi, Fabio; Poggio, Tomaso
2017-01-01
The primate brain contains a hierarchy of visual areas, dubbed the ventral stream, which rapidly computes object representations that are both specific for object identity and robust against identity-preserving transformations like depth-rotations [1, 2]. Current computational models of object recognition, including recent deep learning networks, generate these properties through a hierarchy of alternating selectivity-increasing filtering and tolerance-increasing pooling operations, similar to simple-complex cells operations [3, 4, 5, 6]. Here we prove that a class of hierarchical architectures and a broad set of biologically plausible learning rules generate approximate invariance to identity-preserving transformations at the top level of the processing hierarchy. However, all past models tested failed to reproduce the most salient property of an intermediate representation of a three-level face-processing hierarchy in the brain: mirror-symmetric tuning to head orientation [7]. Here we demonstrate that one specific biologically-plausible Hebb-type learning rule generates mirror-symmetric tuning to bilaterally symmetric stimuli like faces at intermediate levels of the architecture and show why it does so. Thus the tuning properties of individual cells inside the visual stream appear to result from group properties of the stimuli they encode and to reflect the learning rules that sculpted the information-processing system within which they reside. PMID:27916522
Learning invariance from natural images inspired by observations in the primary visual cortex.
Teichmann, Michael; Wiltschut, Jan; Hamker, Fred
2012-05-01
The human visual system has the remarkable ability to largely recognize objects invariant of their position, rotation, and scale. A good interpretation of neurobiological findings involves a computational model that simulates signal processing of the visual cortex. In part, this is likely achieved step by step from early to late areas of visual perception. While several algorithms have been proposed for learning feature detectors, only a few studies cover the issue of biologically plausible learning of such invariance. In this study, a set of Hebbian learning rules based on calcium dynamics and homeostatic regulations of single neurons is proposed. Their performance is verified within a simple model of the primary visual cortex to learn so-called complex cells, based on a sequence of static images. As a result, the learned complex-cell responses are largely invariant to phase and position.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moody, Daniela Irina
An approach for land cover classification, seasonal and yearly change detection and monitoring, and identification of changes in man-made features may use a clustering of sparse approximations (CoSA) on sparse representations in learned dictionaries. A Hebbian learning rule may be used to build multispectral or hyperspectral, multiresolution dictionaries that are adapted to regional satellite image data. Sparse image representations of pixel patches over the learned dictionaries may be used to perform unsupervised k-means clustering into land cover categories. The clustering process behaves as a classifier in detecting real variability. This approach may combine spectral and spatial textural characteristics to detect geologic, vegetative, hydrologic, and man-made features, as well as changes in these features over time.
Towards a general theory of neural computation based on prediction by single neurons.
Fiorillo, Christopher D
2008-10-01
Although there has been tremendous progress in understanding the mechanics of the nervous system, there has not been a general theory of its computational function. Here I present a theory that relates the established biophysical properties of single generic neurons to principles of Bayesian probability theory, reinforcement learning and efficient coding. I suggest that this theory addresses the general computational problem facing the nervous system. Each neuron is proposed to mirror the function of the whole system in learning to predict aspects of the world related to future reward. According to the model, a typical neuron receives current information about the state of the world from a subset of its excitatory synaptic inputs, and prior information from its other inputs. Prior information would be contributed by synaptic inputs representing distinct regions of space, and by different types of non-synaptic, voltage-regulated channels representing distinct periods of the past. The neuron's membrane voltage is proposed to signal the difference between current and prior information ("prediction error" or "surprise"). A neuron would apply a Hebbian plasticity rule to select those excitatory inputs that are the most closely correlated with reward but are the least predictable, since unpredictable inputs provide the neuron with the most "new" information about future reward. To minimize the error in its predictions and to respond only when excitation is "new and surprising," the neuron selects amongst its prior information sources through an anti-Hebbian rule. The unique inputs of a mature neuron would therefore result from learning about spatial and temporal patterns in its local environment, and by extension, the external world. Thus the theory describes how the structure of the mature nervous system could reflect the structure of the external world, and how the complexity and intelligence of the system might develop from a population of undifferentiated neurons, each implementing similar learning algorithms.
Neuromodulated Synaptic Plasticity on the SpiNNaker Neuromorphic System
Mikaitis, Mantas; Pineda García, Garibaldi; Knight, James C.; Furber, Steve B.
2018-01-01
SpiNNaker is a digital neuromorphic architecture, designed specifically for the low power simulation of large-scale spiking neural networks at speeds close to biological real-time. Unlike other neuromorphic systems, SpiNNaker allows users to develop their own neuron and synapse models as well as specify arbitrary connectivity. As a result SpiNNaker has proved to be a powerful tool for studying different neuron models as well as synaptic plasticity—believed to be one of the main mechanisms behind learning and memory in the brain. A number of Spike-Timing-Dependent Plasticity (STDP) rules have already been implemented on SpiNNaker and have been shown to be capable of solving various learning tasks in real-time. However, while STDP is an important biological theory of learning, it is a form of Hebbian or unsupervised learning and therefore does not explain behaviors that depend on feedback from the environment. Instead, learning rules based on neuromodulated STDP (three-factor learning rules) have been shown to be capable of solving reinforcement learning tasks in a biologically plausible manner. In this paper we demonstrate for the first time how a model of three-factor STDP, with the third factor representing spikes from dopaminergic neurons, can be implemented on the SpiNNaker neuromorphic system. Using this learning rule we first show how reward and punishment signals can be delivered to a single synapse before going on to demonstrate it in a larger network which solves the credit assignment problem in a Pavlovian conditioning experiment. Because of its extra complexity, we find that our three-factor learning rule requires approximately 2× as much processing time as the existing SpiNNaker STDP learning rules. However, we show that it is still possible to run our Pavlovian conditioning model with up to 1 × 10^4 neurons in real-time, opening up new research opportunities for modeling behavioral learning on SpiNNaker. PMID:29535600
Brown, Stephen B R E; van Steenbergen, Henk; Kedar, Tomer; Nieuwenhuis, Sander
2014-01-01
An increasing number of empirical phenomena that were previously interpreted as a result of cognitive control turn out to reflect (in part) simple associative-learning effects. A prime example is the proportion congruency effect, the finding that interference effects (such as the Stroop effect) decrease as the proportion of incongruent stimuli increases. While this was previously regarded as strong evidence for a global conflict monitoring-cognitive control loop, recent evidence has shown that the proportion congruency effect is largely item-specific and hence must be due to associative learning. The goal of our research was to test a recent hypothesis about the mechanism underlying such associative-learning effects, the conflict-modulated Hebbian-learning hypothesis, which proposes that the effect of conflict on associative learning is mediated by phasic arousal responses. In Experiment 1, we examined in detail the relationship between the item-specific proportion congruency effect and an autonomic measure of phasic arousal: task-evoked pupillary responses. In Experiment 2, we used a task-irrelevant phasic arousal manipulation and examined the effect on item-specific learning of incongruent stimulus-response associations. The results provide little evidence for the conflict-modulated Hebbian-learning hypothesis, which requires additional empirical support to remain tenable.
Neural Mechanism for Stochastic Behavior During a Competitive Game
Soltani, Alireza; Lee, Daeyeol; Wang, Xiao-Jing
2006-01-01
Previous studies have shown that non-human primates can generate highly stochastic choice behavior, especially when this is required during a competitive interaction with another agent. To understand the neural mechanism of such dynamic choice behavior, we propose a biologically plausible model of decision making endowed with synaptic plasticity that follows a reward-dependent stochastic Hebbian learning rule. This model constitutes a biophysical implementation of reinforcement learning, and it reproduces salient features of behavioral data from an experiment with monkeys playing a matching pennies game. Due to interaction with an opponent and learning dynamics, the model generates quasi-random behavior robustly in spite of intrinsic biases. Furthermore, non-random choice behavior can also emerge when the model plays against a non-interactive opponent, as observed in the monkey experiment. Finally, when combined with a meta-learning algorithm, our model accounts for the slow drift in the animal’s strategy based on a process of reward maximization. PMID:17015181
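A stripped-down caricature of the mechanism described above is a two-action agent whose choice probabilities are driven by plastic synaptic strengths updated with a reward-dependent Hebbian step, playing against an opponent that exploits any bias in its recent choices. The softmax, update rule and opponent strategy below are illustrative assumptions, not the biophysical attractor-network model; the point is simply that reward-gated plasticity pushes the choice probabilities toward the unpredictable 50/50 equilibrium.

    import numpy as np

    rng = np.random.default_rng(8)

    c = np.array([0.5, 0.5])      # plastic synaptic strengths for the two actions
    alpha, beta = 0.1, 5.0        # plasticity rate, choice stochasticity
    choices, rewards = [], []

    for t in range(5000):
        # Softmax choice driven by the difference in synaptic strengths.
        p0 = 1.0 / (1.0 + np.exp(-beta * (c[0] - c[1])))
        a = 0 if rng.random() < p0 else 1
        # The opponent exploits any bias: it predicts the player's modal recent
        # choice, and the player is rewarded only when that prediction fails.
        recent = choices[-20:] if choices else [0]
        prediction = int(np.mean(recent) >= 0.5)
        reward = 1.0 if a != prediction else 0.0
        # Reward-dependent Hebbian update applied only to the chosen action's synapses.
        c[a] += alpha * (reward - c[a])
        choices.append(a)
        rewards.append(reward)

    print("P(choose action 0) over all trials:", round(1 - float(np.mean(choices)), 3))
    print("average reward                    :", round(float(np.mean(rewards)), 3))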
Learning the Gestalt rule of collinearity from object motion.
Prodöhl, Carsten; Würtz, Rolf P; von der Malsburg, Christoph
2003-08-01
The Gestalt principle of collinearity (and curvilinearity) is widely regarded as being mediated by the long-range connection structure in primary visual cortex. We review the neurophysiological and psychophysical literature to argue that these connections are developed from visual experience after birth, relying on coherent object motion. We then present a neural network model that learns these connections in an unsupervised Hebbian fashion with input from real camera sequences. The model uses spatiotemporal retinal filtering, which is very sensitive to changes in the visual input. We show that it is crucial for successful learning to use the correlation of the transient responses instead of the sustained ones. As a consequence, learning works best with video sequences of moving objects. The model addresses a special case of the fundamental question of what represents the necessary a priori knowledge the brain is equipped with at birth so that the self-organized process of structuring by experience can be successful.
Leibo, Joel Z; Liao, Qianli; Anselmi, Fabio; Freiwald, Winrich A; Poggio, Tomaso
2017-01-09
The primate brain contains a hierarchy of visual areas, dubbed the ventral stream, which rapidly computes object representations that are both specific for object identity and robust against identity-preserving transformations, like depth rotations [1, 2]. Current computational models of object recognition, including recent deep-learning networks, generate these properties through a hierarchy of alternating selectivity-increasing filtering and tolerance-increasing pooling operations, similar to simple-complex cells operations [3-6]. Here, we prove that a class of hierarchical architectures and a broad set of biologically plausible learning rules generate approximate invariance to identity-preserving transformations at the top level of the processing hierarchy. However, all past models tested failed to reproduce the most salient property of an intermediate representation of a three-level face-processing hierarchy in the brain: mirror-symmetric tuning to head orientation [7]. Here, we demonstrate that one specific biologically plausible Hebb-type learning rule generates mirror-symmetric tuning to bilaterally symmetric stimuli, like faces, at intermediate levels of the architecture and show why it does so. Thus, the tuning properties of individual cells inside the visual stream appear to result from group properties of the stimuli they encode and to reflect the learning rules that sculpted the information-processing system within which they reside. Copyright © 2017 Elsevier Ltd. All rights reserved.
Integrating Hebbian and homeostatic plasticity: introduction.
Fox, Kevin; Stryker, Michael
2017-03-05
Hebbian plasticity is widely considered to be the mechanism by which information can be coded and retained in neurons in the brain. Homeostatic plasticity moves the neuron back towards its original state following a perturbation, including perturbations produced by Hebbian plasticity. How then does homeostatic plasticity avoid erasing the Hebbian coded information? To understand how plasticity works in the brain, and therefore to understand learning, memory, sensory adaptation, development and recovery from injury, requires development of a theory of plasticity that integrates both forms of plasticity into a whole. In April 2016, a group of computational and experimental neuroscientists met in London at a discussion meeting hosted by the Royal Society to identify the critical questions in the field and to frame the research agenda for the next steps. Here, we provide a brief introduction to the papers arising from the meeting and highlight some of the themes to have emerged from the discussions.This article is part of the themed issue 'Integrating Hebbian and homeostatic plasticity'. © 2017 The Author(s).
Learning by stimulation avoidance: A principle to control spiking neural networks dynamics
Sinapayen, Lana; Ikegami, Takashi
2017-01-01
Learning based on networks of real neurons, and learning based on biologically inspired models of neural networks, have yet to find general learning rules leading to widespread applications. In this paper, we argue for the existence of a principle that allows the dynamics of a biologically inspired neural network to be steered. Using carefully timed external stimulation, the network can be driven towards a desired dynamical state. We term this principle “Learning by Stimulation Avoidance” (LSA). We demonstrate through simulation that the minimal sufficient conditions leading to LSA in artificial networks are also sufficient to reproduce learning results similar to those obtained in biological neurons by Shahaf and Marom, and in addition explain synaptic pruning. We examined the underlying mechanism by simulating a small network of 3 neurons, then scaled it up to a hundred neurons. We show that LSA has a higher explanatory power than existing hypotheses about the response of biological neural networks to external stimulation, and can be used as a learning rule for an embodied application: learning of wall avoidance by a simulated robot. In other work, reinforcement learning with spiking networks can be obtained through global reward signals akin to simulating the dopamine system; we believe that this is the first project demonstrating sensory-motor learning with random spiking networks through Hebbian learning relying on environmental conditions without a separate reward system. PMID:28158309
Learning by stimulation avoidance: A principle to control spiking neural networks dynamics.
Sinapayen, Lana; Masumori, Atsushi; Ikegami, Takashi
2017-01-01
Learning based on networks of real neurons, and learning based on biologically inspired models of neural networks, have yet to find general learning rules leading to widespread applications. In this paper, we argue for the existence of a principle that allows the dynamics of a biologically inspired neural network to be steered. Using carefully timed external stimulation, the network can be driven towards a desired dynamical state. We term this principle "Learning by Stimulation Avoidance" (LSA). We demonstrate through simulation that the minimal sufficient conditions leading to LSA in artificial networks are also sufficient to reproduce learning results similar to those obtained in biological neurons by Shahaf and Marom, and in addition explain synaptic pruning. We examined the underlying mechanism by simulating a small network of 3 neurons, then scaled it up to a hundred neurons. We show that LSA has a higher explanatory power than existing hypotheses about the response of biological neural networks to external stimulation, and can be used as a learning rule for an embodied application: learning of wall avoidance by a simulated robot. In other work, reinforcement learning with spiking networks can be obtained through global reward signals akin to simulating the dopamine system; we believe that this is the first project demonstrating sensory-motor learning with random spiking networks through Hebbian learning relying on environmental conditions without a separate reward system.
Willmore, Ben D.B.; Bulstrode, Harry; Tolhurst, David J.
2012-01-01
Neuronal populations in the primary visual cortex (V1) of mammals exhibit contrast normalization. Neurons that respond strongly to simple visual stimuli – such as sinusoidal gratings – respond less well to the same stimuli when they are presented as part of a more complex stimulus which also excites other, neighboring neurons. This phenomenon is generally attributed to generalized patterns of inhibitory connections between nearby V1 neurons. The Bienenstock, Cooper and Munro (BCM) rule is a neural network learning rule that, when trained on natural images, produces model neurons which, individually, have many tuning properties in common with real V1 neurons. However, when viewed as a population, a BCM network is very different from V1 – each member of the BCM population tends to respond to the same dominant features of visual input, producing an incomplete, highly redundant code for visual information. Here, we demonstrate that, by adding contrast normalization into the BCM rule, we arrive at a neurally-plausible Hebbian learning rule that can learn an efficient sparse, overcomplete representation that is a better model for stimulus selectivity in V1. This suggests that one role of contrast normalization in V1 is to guide the neonatal development of receptive fields, so that neurons respond to different features of visual input. PMID:22230381
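To make the idea of a BCM rule with contrast normalization concrete, the following is a minimal NumPy sketch under assumed forms: a sliding threshold tracked as a running average of the squared response and a simple divisive normalization over the population response. The parameter values and the exact normalization used in the paper are not reproduced here.

```python
# Minimal sketch (not the authors' code) of a BCM-style Hebbian update with an
# added divisive contrast-normalization step, assuming rate-coded linear units.
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_units = 256, 16           # e.g. 16x16 image patches, 16 model neurons
W = rng.normal(scale=0.01, size=(n_units, n_inputs))
theta = np.ones(n_units)              # sliding modification thresholds
eta, tau_theta, sigma = 1e-4, 0.01, 1e-6

def bcm_normalized_step(x):
    """One learning step on a single input patch x (shape: n_inputs,)."""
    global W, theta
    y = W @ x                                    # linear responses
    y = y / (sigma + np.sum(np.abs(y)))          # divisive contrast normalization
    # BCM rule: sign of the change depends on whether y exceeds the threshold theta
    W += eta * np.outer(y * (y - theta), x)
    # slide the threshold toward the recent mean squared response
    theta += tau_theta * (y**2 - theta)

for _ in range(1000):
    patch = rng.normal(size=n_inputs)            # stand-in for a natural-image patch
    bcm_normalized_step(patch)
```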
Anti-Hebbian long-term potentiation in the hippocampal feedback inhibitory circuit.
Lamsa, Karri P; Heeroma, Joost H; Somogyi, Peter; Rusakov, Dmitri A; Kullmann, Dimitri M
2007-03-02
Long-term potentiation (LTP), which approximates Hebb's postulate of associative learning, typically requires depolarization-dependent glutamate receptors of the NMDA (N-methyl-D-aspartate) subtype. However, in some neurons, LTP depends instead on calcium-permeable AMPA-type receptors. This is paradoxical because intracellular polyamines block such receptors during depolarization. We report that LTP at synapses on hippocampal interneurons mediating feedback inhibition is "anti-Hebbian": it is induced by presynaptic activity but prevented by postsynaptic depolarization. Anti-Hebbian LTP may occur in interneurons that are silent during periods of intense pyramidal cell firing, such as sharp waves, and lead to their altered activation during theta activity.
Watson, Richard A; Mills, Rob; Buckley, C L
2011-01-01
In some circumstances complex adaptive systems composed of numerous self-interested agents can self-organize into structures that enhance global adaptation, efficiency, or function. However, the general conditions for such an outcome are poorly understood and present a fundamental open question for domains as varied as ecology, sociology, economics, organismic biology, and technological infrastructure design. In contrast, sufficient conditions for artificial neural networks to form structures that perform collective computational processes such as associative memory/recall, classification, generalization, and optimization are well understood. Such global functions within a single agent or organism are not wholly surprising, since the mechanisms (e.g., Hebbian learning) that create these neural organizations may be selected for this purpose; but agents in a multi-agent system have no obvious reason to adhere to such a structuring protocol or produce such global behaviors when acting from individual self-interest. However, Hebbian learning is actually a very simple and fully distributed habituation or positive feedback principle. Here we show that when self-interested agents can modify how they are affected by other agents (e.g., when they can influence which other agents they interact with), then, in adapting these inter-agent relationships to maximize their own utility, they will necessarily alter them in a manner homologous with Hebbian learning. Multi-agent systems with adaptable relationships will thereby exhibit the same system-level behaviors as neural networks under Hebbian learning. For example, improved global efficiency in multi-agent systems can be explained by the inherent ability of associative memory to generalize by idealizing stored patterns and/or creating new combinations of subpatterns. Thus distributed multi-agent systems can spontaneously exhibit adaptive global behaviors in the same sense, and by the same mechanism, as with the organizational principles familiar in connectionist models of organismic learning.
Auto-programmable impulse neural circuits
NASA Technical Reports Server (NTRS)
Watula, D.; Meador, J.
1990-01-01
Impulse neural networks use pulse trains to communicate neuron activation levels. Impulse neural circuits emulate natural neurons at a more detailed level than that typically employed by contemporary neural network implementation methods. An impulse neural circuit which realizes short term memory dynamics is presented. The operation of that circuit is then characterized in terms of pulse frequency modulated signals. Both fixed and programmable synapse circuits for realizing long term memory are also described. The implementation of a simple and useful unsupervised learning law is then presented. The implementation of a differential Hebbian learning rule for a specific mean-frequency signal interpretation is shown to have a straightforward implementation using digital combinational logic with a variation of a previously developed programmable synapse circuit. This circuit is expected to be exploited for simple and straightforward implementation of future auto-adaptive neural circuits.
Autonomous learning in gesture recognition by using lobe component analysis
NASA Astrophysics Data System (ADS)
Lu, Jian; Weng, Juyang
2007-02-01
Gesture recognition is a new human-machine interface method implemented by pattern recognition (PR). In order to assure robot safety when gestures are used in robot control, the interface must be implemented reliably and accurately. As in other PR applications, 1) feature selection (or model establishment) and 2) training from samples largely determine the performance of gesture recognition. For 1), a simple model with 6 feature points at the shoulders, elbows, and hands is established. The gestures to be recognized are restricted to still arm gestures, and the movement of the arms is not considered. These restrictions reduce misrecognition and are not unreasonable. For 2), a new biological network method, called lobe component analysis (LCA), is used in unsupervised learning. Lobe components, corresponding to high-probability concentrations of the neuronal input, are orientation-selective cells that follow a Hebbian rule with lateral inhibition. Owing to the LCA method's balanced learning between global and local features, a large number of samples can be used efficiently in learning.
Decomposition of Rotor Hopfield Neural Networks Using Complex Numbers.
Kobayashi, Masaki
2018-04-01
A complex-valued Hopfield neural network (CHNN) is a multistate model of a Hopfield neural network. It has the disadvantage of low noise tolerance. Meanwhile, a symmetric CHNN (SCHNN) is a modification of a CHNN that improves noise tolerance. Furthermore, a rotor Hopfield neural network (RHNN) is an extension of a CHNN. It has twice the storage capacity of CHNNs and SCHNNs, and much better noise tolerance than CHNNs, although it requires twice as many connection parameters. In this brief, we investigate the relations between CHNNs, SCHNNs, and RHNNs; an RHNN is uniquely decomposed into a CHNN and an SCHNN. In addition, the Hebbian learning rule for RHNNs is decomposed into those for CHNNs and SCHNNs.
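For reference, here is a minimal sketch of the standard Hebbian storage rule for a complex-valued Hopfield network, the CHNN building block referred to above; the RHNN/SCHNN decomposition discussed in the brief is not reproduced, and the pattern format (K-th roots of unity) is the usual textbook choice rather than anything taken from the paper.

```python
# Minimal sketch of complex-valued Hebbian storage for a Hopfield-type network:
# weights are built from outer products of patterns with their conjugates.
import numpy as np

def complex_hebbian_weights(patterns):
    """patterns: array of shape (P, N) with unit-modulus complex entries."""
    P, N = patterns.shape
    W = np.zeros((N, N), dtype=complex)
    for z in patterns:
        W += np.outer(z, np.conj(z))     # Hebbian outer product z z^H
    np.fill_diagonal(W, 0)               # no self-connections
    return W / P

K, N, P = 8, 32, 4                        # K states per neuron, N neurons, P patterns
rng = np.random.default_rng(1)
states = np.exp(2j * np.pi * rng.integers(0, K, size=(P, N)) / K)
W = complex_hebbian_weights(states)
```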
Ong, M L; Ng, E Y K
2005-12-01
In the lower brain, body temperature is continually being regulated almost flawlessly despite huge fluctuations in ambient and physiological conditions that constantly threaten the well-being of the body. The underlying control problem defining thermal homeostasis is one of great enormity: Many systems and sub-systems are involved in temperature regulation and physiological processes are intrinsically complex and intertwined. Thus the defining control system has to take into account the complications of nonlinearities, system uncertainties, delayed feedback loops as well as internal and external disturbances. In this paper, we propose a self-tuning adaptive thermal controller based upon Hebbian feedback covariance learning where the system is to be regulated continually to best suit its environment. This hypothesis is supported in part by postulations of the presence of adaptive optimization behavior in biological systems of certain organisms which face limited resources vital for survival. We demonstrate the use of Hebbian feedback covariance learning as a possible self-adaptive controller in body temperature regulation. The model postulates an important role of Hebbian covariance adaptation as a means of reinforcement learning in the thermal controller. The passive system is based on a simplified 2-node core and shell representation of the body, where global responses are captured. Model predictions are consistent with observed thermoregulatory responses to conditions of exercise and rest, and heat and cold stress. An important implication of the model is that optimal physiological behaviors arising from self-tuning adaptive regulation in the thermal controller may be responsible for the departure from homeostasis in abnormal states, e.g., fever. This was previously unexplained using the conventional "set-point" control theory.
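As a concrete illustration of the covariance flavour of Hebbian learning invoked here, the following is a minimal sketch of a generic Hebbian covariance update (an assumed textbook form, not the authors' thermal controller): the weight change tracks the covariance between fluctuations of pre- and postsynaptic signals around their running means.

```python
# Minimal sketch of a Hebbian covariance learning rule with running-mean traces.
import numpy as np

class CovarianceHebb:
    def __init__(self, n_in, eta=1e-3, tau=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(scale=0.01, size=n_in)
        self.x_bar = np.zeros(n_in)   # running mean of presynaptic input
        self.y_bar = 0.0              # running mean of postsynaptic output
        self.eta, self.tau = eta, tau

    def step(self, x):
        y = float(self.w @ x)
        self.x_bar += self.tau * (x - self.x_bar)
        self.y_bar += self.tau * (y - self.y_bar)
        # covariance rule: correlate fluctuations around the running means
        self.w += self.eta * (x - self.x_bar) * (y - self.y_bar)
        return y
```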
Moody, Daniela I.; Brumby, Steven P.; Rowland, Joel C.; ...
2014-10-01
Neuromimetic machine vision and pattern recognition algorithms are of great interest for landscape characterization and change detection in satellite imagery in support of global climate change science and modeling. We present results from an ongoing effort to extend machine vision methods to the environmental sciences, using adaptive sparse signal processing combined with machine learning. A Hebbian learning rule is used to build multispectral, multiresolution dictionaries from regional satellite normalized band difference index data. Land cover labels are automatically generated via our CoSA algorithm: Clustering of Sparse Approximations, using a clustering distance metric that combines spectral and spatial textural characteristics to help separate geologic, vegetative, and hydrologic features. We demonstrate our method on example WorldView-2 satellite images of an Arctic region, and use CoSA labels to detect seasonal surface changes. In conclusion, our results suggest that neuroscience-based models are a promising approach to practical pattern recognition and change detection problems in remote sensing.
Improving KPCA Online Extraction by Orthonormalization in the Feature Space.
Souza Filho, Joao B O; Diniz, Paulo S R
2018-04-01
Recently, some online kernel principal component analysis (KPCA) techniques based on the generalized Hebbian algorithm (GHA) were proposed for use in large data sets, defining kernel components using concise dictionaries automatically extracted from data. This brief proposes two new online KPCA extraction algorithms, exploiting orthogonalized versions of the GHA rule. In both cases, the orthogonalization of kernel components is achieved by the inclusion of some low-complexity additional steps to the kernel Hebbian algorithm, thus not substantially affecting the computational cost of the algorithm. Results show improved convergence speed and accuracy of the components extracted by the proposed methods, as compared with state-of-the-art online KPCA extraction algorithms.
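For orientation, the linear form of the GHA (Sanger's rule) that the cited kernel methods build on can be sketched as follows; the kernelization, dictionary construction, and orthonormalization steps of the brief are not shown.

```python
# Minimal sketch of the linear Generalized Hebbian Algorithm (Sanger's rule).
import numpy as np

def gha_step(W, x, eta=1e-3):
    """W: (k, d) weight matrix, x: (d,) zero-mean sample. Returns updated W."""
    y = W @ x                                   # k component outputs
    # Hebbian term minus lower-triangular decorrelation term
    W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

rng = np.random.default_rng(0)
d, k = 10, 3
C = rng.normal(size=(d, d))
data = rng.normal(size=(5000, d)) @ C           # correlated synthetic data
data -= data.mean(axis=0)
W = rng.normal(scale=0.01, size=(k, d))
for _ in range(3):                              # a few passes over the data
    for x in data:
        W = gha_step(W, x)
# after training, rows of W tend toward the top-k principal directions of the data
```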
Nothing can be coincidence: synaptic inhibition and plasticity in the cerebellar nuclei
Pugh, Jason R.; Raman, Indira M.
2009-01-01
Many cerebellar neurons fire spontaneously, generating 10–100 action potentials per second even without synaptic input. This high basal activity correlates with information-coding mechanisms that differ from those of cells that are quiescent until excited synaptically. For example, in the deep cerebellar nuclei, Hebbian patterns of coincident synaptic excitation and postsynaptic firing fail to induce long-term increases in the strength of excitatory inputs. Instead, excitatory synaptic currents are potentiated by combinations of inhibition and excitation that resemble the activity of Purkinje and mossy fiber afferents that is predicted to occur during cerebellar associative learning tasks. Such results indicate that circuits with intrinsically active neurons have rules for information transfer and storage that distinguish them from other brain regions. PMID:19178955
Theta Coordinated Error-Driven Learning in the Hippocampus
Ketz, Nicholas; Morkonda, Srinimisha G.; O'Reilly, Randall C.
2013-01-01
The learning mechanism in the hippocampus has almost universally been assumed to be Hebbian in nature, where individual neurons in an engram join together with synaptic weight increases to support facilitated recall of memories later. However, it is also widely known that Hebbian learning mechanisms impose significant capacity constraints, and are generally less computationally powerful than learning mechanisms that take advantage of error signals. We show that the differential phase relationships of hippocampal subfields within the overall theta rhythm enable a powerful form of error-driven learning, which results in significantly greater capacity, as shown in computer simulations. In one phase of the theta cycle, the bidirectional connectivity between CA1 and entorhinal cortex can be trained in an error-driven fashion to learn to effectively encode the cortical inputs in a compact and sparse form over CA1. In a subsequent portion of the theta cycle, the system attempts to recall an existing memory, via the pathway from entorhinal cortex to CA3 and CA1. Finally the full theta cycle completes when a strong target encoding representation of the current input is imposed onto the CA1 via direct projections from entorhinal cortex. The difference between this target encoding and the attempted recall of the same representation on CA1 constitutes an error signal that can drive the learning of CA3 to CA1 synapses. This CA3 to CA1 pathway is critical for enabling full reinstatement of recalled hippocampal memories out in cortex. Taken together, these new learning dynamics enable a much more robust, high-capacity model of hippocampal learning than was available previously under the classical Hebbian model. PMID:23762019
Control of a simulated arm using a novel combination of Cerebellar learning mechanisms
NASA Technical Reports Server (NTRS)
Assad, C.; Hartmann, M.; Paulin, M. G.
2001-01-01
We present a model of cerebellar cortex that combines two types of learning: feedforward predictive association based on local Hebbian-type learning between granule cell ascending branch and parallel fiber inputs, and reinforcement learning with feedback error correction based on climbing fiber activity.
Syntactic sequencing in Hebbian cell assemblies.
Wennekers, Thomas; Palm, Günther
2009-12-01
Hebbian cell assemblies provide a theoretical framework for the modeling of cognitive processes that grounds them in the underlying physiological neural circuits. Recently we presented an extension of cell assemblies by operational components, which makes it possible to model aspects of language, rules, and complex behaviour. In the present work we study the generation of syntactic sequences using operational cell assemblies timed by unspecific trigger signals. Syntactic patterns are implemented in terms of hetero-associative transition graphs in attractor networks which cause a directed flow of activity through the neural state space. We provide parameter regimes that enable an unspecific excitatory control signal to switch reliably between attractors in accordance with the implemented syntactic rules. If several target attractors are possible in a given state, noise in the system in conjunction with a winner-takes-all mechanism can randomly choose a target. Disambiguation can also be guided by context signals or specific additional external signals. Given a permanently elevated level of external excitation the model can enter an autonomous mode, where it generates temporal grammatical patterns continuously.
Hamker, Fred H; Wiltschut, Jan
2007-09-01
Most computational models of coding are based on a generative model according to which the feedback signal aims to reconstruct the visual scene as closely as possible. Here we explore an alternative model of feedback. It is derived from studies of attention and is thus probably more flexible with respect to attentive processing in higher brain areas. According to this model, feedback implements a gain increase of the feedforward signal. We use a dynamic model with presynaptic inhibition and Hebbian learning to simultaneously learn feedforward and feedback weights. The weights converge to localized, oriented, and bandpass filters similar to those found in V1. Due to presynaptic inhibition the model predicts the organization of receptive fields within the feedforward pathway, whereas feedback primarily serves to tune early visual processing according to the needs of the task.
Walters, D M; Stringer, S M
2010-07-01
A key question in understanding the neural basis of path integration is how individual, spatially responsive, neurons may self-organize into networks that can, through learning, integrate velocity signals to update a continuous representation of location within an environment. It is of vital importance that this internal representation of position is updated at the correct speed, and in real time, to accurately reflect the motion of the animal. In this article, we present a biologically plausible model of velocity path integration of head direction that can solve this problem using neuronal time constants to effect natural time delays, over which associations can be learned through associative Hebbian learning rules. The model comprises a linked continuous attractor network and competitive network. In simulation, we show that the same model is able to learn two different speeds of rotation when implemented with two different values for the time constant, and without the need to alter any other model parameters. The proposed model could be extended to path integration of place in the environment, and path integration of spatial view.
The HTM Spatial Pooler-A Neocortical Algorithm for Online Sparse Distributed Coding.
Cui, Yuwei; Ahmad, Subutai; Hawkins, Jeff
2017-01-01
Hierarchical temporal memory (HTM) provides a theoretical framework that models several key computational principles of the neocortex. In this paper, we analyze an important component of HTM, the HTM spatial pooler (SP). The SP models how neurons learn feedforward connections and form efficient representations of the input. It converts arbitrary binary input patterns into sparse distributed representations (SDRs) using a combination of competitive Hebbian learning rules and homeostatic excitability control. We describe a number of key properties of the SP, including fast adaptation to changing input statistics, improved noise robustness through learning, efficient use of cells, and robustness to cell death. In order to quantify these properties we develop a set of metrics that can be directly computed from the SP outputs. We show how the properties are met using these metrics and targeted artificial simulations. We then demonstrate the value of the SP in a complete end-to-end real-world HTM system. We discuss the relationship with neuroscience and previous studies of sparse coding. The HTM spatial pooler represents a neurally inspired algorithm for learning sparse representations from noisy data streams in an online fashion.
Learning of Precise Spike Times with Homeostatic Membrane Potential Dependent Synaptic Plasticity.
Albers, Christian; Westkott, Maren; Pawelzik, Klaus
2016-01-01
Precise spatio-temporal patterns of neuronal action potentials underlie, e.g., sensory representations and control of muscle activities. However, it is not known how the synaptic efficacies in the neuronal networks of the brain adapt such that they can reliably generate spikes at specific points in time. Existing activity-dependent plasticity rules like Spike-Timing-Dependent Plasticity are agnostic to the goal of learning spike times. On the other hand, the existing formal and supervised learning algorithms perform a temporally precise comparison of projected activity with the target, but there is no known biologically plausible implementation of this comparison. Here, we propose a simple and local unsupervised synaptic plasticity mechanism that is derived from the requirement of a balanced membrane potential. Since the relevant signal for synaptic change is the postsynaptic voltage rather than spike times, we call the plasticity rule Membrane Potential Dependent Plasticity (MPDP). Combining our plasticity mechanism with spike after-hyperpolarization causes a sensitivity of synaptic change to pre- and postsynaptic spike times which can reproduce Hebbian spike timing dependent plasticity for inhibitory synapses as was found in experiments. In addition, the sensitivity of MPDP to the time course of the voltage when generating a spike allows MPDP to distinguish between weak (spurious) and strong (teacher) spikes, which therefore provides a neuronal basis for the comparison of actual and target activity. For spatio-temporal input spike patterns our conceptually simple plasticity rule achieves a surprisingly high storage capacity for spike associations. The sensitivity of the MPDP to the subthreshold membrane potential during training allows robust memory retrieval after learning even in the presence of activity corrupted by noise. We propose that MPDP represents a biophysically plausible mechanism to learn temporal target activity patterns.
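The following is a minimal sketch of a membrane-potential-dependent rule in the spirit of MPDP, under an assumed homeostatic form rather than the exact equations of the paper: changes are gated by a presynaptic trace, with potentiation when the postsynaptic voltage falls below a lower bound and depression when it exceeds an upper bound. The bounds, constants, and trace form are illustrative assumptions.

```python
# Minimal sketch of a voltage-driven, homeostatic plasticity update (assumed form).
import numpy as np

eta, gamma = 1e-3, 0.5
u_low, u_high = -70.0, -55.0     # illustrative bounds on the membrane potential (mV)
tau_pre, dt = 20.0, 1.0          # presynaptic trace time constant and step (ms)

def mpdp_like_step(w, pre_spike, u, x_pre):
    """Update one synaptic weight w given the current postsynaptic voltage u."""
    x_pre += dt * (-x_pre / tau_pre) + (1.0 if pre_spike else 0.0)
    # potentiate when u is below the lower bound, depress when above the upper bound
    dw = eta * x_pre * (max(u_low - u, 0.0) - gamma * max(u - u_high, 0.0))
    return w + dw, x_pre
```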
Domain-specific and domain-general constraints on word and sequence learning.
Archibald, Lisa M D; Joanisse, Marc F
2013-02-01
The relative influences of language-related and memory-related constraints on the learning of novel words and sequences were examined by comparing individual differences in performance of children with and without specific deficits in either language or working memory. Children recalled lists of words in a Hebbian learning protocol in which occasional lists repeated, yielding improved recall over the course of the task on the repeated lists. The task involved presentation of pictures of common nouns followed immediately by equivalent presentations of the spoken names. The same participants also completed a paired-associate learning task involving word-picture and nonword-picture pairs. Hebbian learning was observed for all groups. Domain-general working memory constrained immediate recall, whereas language abilities impacted recall in the auditory modality only. In addition, working memory constrained paired-associate learning generally, whereas language abilities disproportionately impacted novel word learning. Overall, all of the learning tasks were highly correlated with domain-general working memory. The learning of nonwords was additionally related to general intelligence, phonological short-term memory, language abilities, and implicit learning. The results suggest that distinct associations between language- and memory-related mechanisms support learning of familiar and unfamiliar phonological forms and sequences.
A Spiking Working Memory Model Based on Hebbian Short-Term Potentiation.
Fiebig, Florian; Lansner, Anders
2017-01-04
A dominant theory of working memory (WM), referred to as the persistent activity hypothesis, holds that recurrently connected neural networks, presumably located in the prefrontal cortex, encode and maintain WM memory items through sustained elevated activity. Reexamination of experimental data has shown that prefrontal cortex activity in single units during delay periods is much more variable than predicted by such a theory and associated computational models. Alternative models of WM maintenance based on synaptic plasticity, such as short-term nonassociative (non-Hebbian) synaptic facilitation, have been suggested but cannot account for encoding of novel associations. Here we test the hypothesis that a recently identified fast-expressing form of Hebbian synaptic plasticity (associative short-term potentiation) is a possible mechanism for WM encoding and maintenance. Our simulations using a spiking neural network model of cortex reproduce a range of cognitive memory effects in the classical multi-item WM task of encoding and immediate free recall of word lists. Memory reactivation in the model occurs in discrete oscillatory bursts rather than as sustained activity. We relate dynamic network activity as well as key synaptic characteristics to electrophysiological measurements. Our findings support the hypothesis that fast Hebbian short-term potentiation is a key WM mechanism. Working memory (WM) is a key component of cognition. Hypotheses about the neural mechanism behind WM are currently under revision. Reflecting recent findings of fast Hebbian synaptic plasticity in cortex, we test whether a cortical spiking neural network model with such a mechanism can learn a multi-item WM task (word list learning). We show that our model can reproduce human cognitive phenomena and achieve comparable memory performance in both free and cued recall while being simultaneously compatible with experimental data on structure, connectivity, and neurophysiology of the underlying cortical tissue. These findings are directly relevant to the ongoing paradigm shift in the WM field. Copyright © 2017 Fiebig and Lansner.
Associative Memory in Three Aplysiids: Correlation with Heterosynaptic Modulation
ERIC Educational Resources Information Center
Thompson, Laura; Wright, William G.; Hoover, Brian A.; Nguyen, Hoang
2006-01-01
Much recent research on mechanisms of learning and memory focuses on the role of heterosynaptic neuromodulatory signaling. Such neuromodulation appears to stabilize Hebbian synaptic changes underlying associative learning, thereby extending memory. Previous comparisons of three related sea-hares (Mollusca, Opisthobranchia) uncovered interspecific…
Moody, Daniela I.; Brumby, Steven P.; Rowland, Joel C.; ...
2014-12-09
We present results from an ongoing effort to extend neuromimetic machine vision algorithms to multispectral data using adaptive signal processing combined with compressive sensing and machine learning techniques. Our goal is to develop a robust classification methodology that will allow for automated discretization of the landscape into distinct units based on attributes such as vegetation, surface hydrological properties, and topographic/geomorphic characteristics. We use a Hebbian learning rule to build spectral-textural dictionaries that are tailored for classification. We learn our dictionaries from millions of overlapping multispectral image patches and then use a pursuit search to generate classification features. Land cover labels are automatically generated using unsupervised clustering of sparse approximations (CoSA). We demonstrate our method on multispectral WorldView-2 data from a coastal plain ecosystem in Barrow, Alaska. We explore learning from both raw multispectral imagery and normalized band difference indices. We explore a quantitative metric to evaluate the spectral properties of the clusters in order to potentially aid in assigning land cover categories to the cluster labels. In this study, our results suggest CoSA is a promising approach to unsupervised land cover classification in high-resolution satellite imagery.
Szabo, Miruna; Deco, Gustavo; Fusi, Stefano; Del Giudice, Paolo; Mattia, Maurizio; Stetter, Martin
2006-05-01
Recent experiments on behaving monkeys have shown that learning a visual categorization task makes the neurons in infero-temporal cortex (ITC) more selective to the task-relevant features of the stimuli (Sigala and Logothetis, Nature 415:318-320, 2002). We hypothesize that such a selectivity modulation emerges from the interaction between ITC and another cortical area, presumably the prefrontal cortex (PFC), where the previously learned stimulus categories are encoded. We propose a biologically inspired model of excitatory and inhibitory spiking neurons with plastic synapses, modified according to a reward-based Hebbian learning rule, to explain the experimental results and test the validity of our hypothesis. We assume that the ITC neurons, receiving feature-selective inputs, form stronger connections with the category-specific neurons to which they are consistently associated in rewarded trials. After learning, the top-down influence of PFC neurons enhances the selectivity of the ITC neurons encoding the behaviorally relevant features of the stimuli, as observed in the experiments. We conclude that the perceptual representation in visual areas like ITC can be strongly affected by the interaction with other areas which are devoted to higher cognitive functions.
Bayesian Inference and Online Learning in Poisson Neuronal Networks.
Huang, Yanping; Rao, Rajesh P N
2016-08-01
Motivated by the growing evidence for Bayesian computation in the brain, we show how a two-layer recurrent network of Poisson neurons can perform both approximate Bayesian inference and learning for any hidden Markov model. The lower-layer sensory neurons receive noisy measurements of hidden world states. The higher-layer neurons infer a posterior distribution over world states via Bayesian inference from inputs generated by sensory neurons. We demonstrate how such a neuronal network with synaptic plasticity can implement a form of Bayesian inference similar to Monte Carlo methods such as particle filtering. Each spike in a higher-layer neuron represents a sample of a particular hidden world state. The spiking activity across the neural population approximates the posterior distribution over hidden states. In this model, variability in spiking is regarded not as a nuisance but as an integral feature that provides the variability necessary for sampling during inference. We demonstrate how the network can learn the likelihood model, as well as the transition probabilities underlying the dynamics, using a Hebbian learning rule. We present results illustrating the ability of the network to perform inference and learning for arbitrary hidden Markov models.
van Hartevelt, Tim J; Cabral, Joana; Møller, Arne; FitzGerald, James J; Green, Alexander L; Aziz, Tipu Z; Deco, Gustavo; Kringelbach, Morten L
2015-01-01
It is unclear whether Hebbian-like learning occurs at the level of long-range white matter connections in humans, i.e., where measurable changes in structural connectivity (SC) are correlated with changes in functional connectivity. However, the behavioral changes observed after deep brain stimulation (DBS) suggest the existence of such Hebbian-like mechanisms occurring at the structural level with functional consequences. In this rare case study, we obtained the full network of white matter connections of one patient with Parkinson's disease (PD) before and after long-term DBS and combined it with a computational model of ongoing activity to investigate the effects of DBS-induced long-term structural changes. The results show that the long-term effects of DBS on resting-state functional connectivity is best obtained in the computational model by changing the structural weights from the subthalamic nucleus (STN) to the putamen and the thalamus in a Hebbian-like manner. Moreover, long-term DBS also significantly changed the SC towards normality in terms of model-based measures of segregation and integration of information processing, two key concepts of brain organization. This novel approach using computational models to model the effects of Hebbian-like changes in SC allowed us to causally identify the possible underlying neural mechanisms of long-term DBS using rare case study data. In time, this could help predict the efficacy of individual DBS targeting and identify novel DBS targets.
Bill, Johannes; Buesing, Lars; Habenschuss, Stefan; Nessler, Bernhard; Maass, Wolfgang; Legenstein, Robert
2015-01-01
During the last decade, Bayesian probability theory has emerged as a framework in cognitive science and neuroscience for describing perception, reasoning and learning of mammals. However, our understanding of how probabilistic computations could be organized in the brain, and how the observed connectivity structure of cortical microcircuits supports these calculations, is rudimentary at best. In this study, we investigate statistical inference and self-organized learning in a spatially extended spiking network model, that accommodates both local competitive and large-scale associative aspects of neural information processing, under a unified Bayesian account. Specifically, we show how the spiking dynamics of a recurrent network with lateral excitation and local inhibition in response to distributed spiking input, can be understood as sampling from a variational posterior distribution of a well-defined implicit probabilistic model. This interpretation further permits a rigorous analytical treatment of experience-dependent plasticity on the network level. Using machine learning theory, we derive update rules for neuron and synapse parameters which equate with Hebbian synaptic and homeostatic intrinsic plasticity rules in a neural implementation. In computer simulations, we demonstrate that the interplay of these plasticity rules leads to the emergence of probabilistic local experts that form distributed assemblies of similarly tuned cells communicating through lateral excitatory connections. The resulting sparse distributed spike code of a well-adapted network carries compressed information on salient input features combined with prior experience on correlations among them. Our theory predicts that the emergence of such efficient representations benefits from network architectures in which the range of local inhibition matches the spatial extent of pyramidal cells that share common afferent input. PMID:26284370
Unsupervised learning in neural networks with short range synapses
NASA Astrophysics Data System (ADS)
Brunnet, L. G.; Agnes, E. J.; Mizusaki, B. E. P.; Erichsen, R., Jr.
2013-01-01
Different areas of the brain are involved in specific aspects of the information being processed, both in learning and in memory formation. For example, the hippocampus is important in the consolidation of information from short-term memory to long-term memory, while emotional memory seems to be dealt with by the amygdala. On the microscopic scale the underlying structures in these areas differ in the kind of neurons involved, in their connectivity, or in their clustering degree but, at this level, learning and memory are attributed to changes at neuronal synapses mediated by long-term potentiation and long-term depression. In this work we explore the properties of a short-range synaptic connection network, a nearest-neighbor lattice composed mostly of excitatory neurons and a fraction of inhibitory ones. The mechanism of synaptic modification responsible for the emergence of memory is Spike-Timing-Dependent Plasticity (STDP), a Hebbian-like rule, where potentiation/depression is acquired when causal/non-causal spikes occur at a synapse involving two neurons. The system is intended to store and recognize memories associated with spatial external inputs presented as simple geometrical forms. The synaptic modifications are continuously applied to excitatory connections, including a homeostasis rule and STDP. We explore the different scenarios under which a network with short-range connections can accomplish the task of storing and recognizing simple connected patterns.
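For readers unfamiliar with the rule, a minimal sketch of pair-based STDP with exponential traces, of the kind used as the Hebbian-like mechanism here, is shown below; the amplitudes, time constants, and hard weight bounds are illustrative values, not the parameters of this model.

```python
# Minimal sketch of pair-based STDP with exponential pre/post traces.
import numpy as np

A_plus, A_minus = 0.01, 0.012     # potentiation / depression amplitudes
tau_plus, tau_minus = 20.0, 20.0  # trace time constants (ms)
dt = 1.0                          # simulation step (ms)

def stdp_step(w, pre_spike, post_spike, x_pre, x_post):
    """Update one synapse and its traces for a single time step."""
    x_pre += dt * (-x_pre / tau_plus) + (1.0 if pre_spike else 0.0)
    x_post += dt * (-x_post / tau_minus) + (1.0 if post_spike else 0.0)
    if post_spike:                 # pre-before-post (causal): potentiate
        w += A_plus * x_pre
    if pre_spike:                  # post-before-pre (non-causal): depress
        w -= A_minus * x_post
    return float(np.clip(w, 0.0, 1.0)), x_pre, x_post
```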
Hiratani, Naoki; Fukai, Tomoki
2016-01-01
In the adult mammalian cortex, a small fraction of spines are created and eliminated every day, and the resultant synaptic connection structure is highly nonrandom, even in local circuits. However, it remains unknown whether a particular synaptic connection structure is functionally advantageous in local circuits, and why creation and elimination of synaptic connections is necessary in addition to rich synaptic weight plasticity. To answer these questions, we studied an inference task model through theoretical and numerical analyses. We demonstrate that a robustly beneficial network structure naturally emerges by combining Hebbian-type synaptic weight plasticity and wiring plasticity. Especially in a sparsely connected network, wiring plasticity achieves reliable computation by enabling efficient information transmission. Furthermore, the proposed rule reproduces experimentally observed correlations between spine dynamics and task performance. PMID:27303271
Critical neural networks with short- and long-term plasticity.
Michiels van Kessenich, L; Luković, M; de Arcangelis, L; Herrmann, H J
2018-03-01
In recent years, self-organized critical neuronal models have provided insights regarding the origin of the experimentally observed avalanching behavior of neuronal systems. It has been shown that dynamical synapses, as a form of short-term plasticity, can cause critical neuronal dynamics. Long-term plasticity, such as Hebbian or activity-dependent plasticity, in contrast has a crucial role in shaping the network structure and endowing neural systems with learning abilities. In this work we provide a model which combines both plasticity mechanisms, acting on two different time scales. The measured avalanche statistics are compatible with experimental results for both the avalanche size and duration distributions with biologically observed percentages of inhibitory neurons. The time series of neuronal activity exhibits temporal bursts leading to 1/f decay in the power spectrum. The presence of long-term plasticity gives the system the ability to learn binary rules such as XOR, providing the foundation of future research on more complicated tasks such as pattern recognition.
Brzosko, Zuzanna; Zannone, Sara; Schultz, Wolfram
2017-01-01
Spike timing-dependent plasticity (STDP) is under neuromodulatory control, which is correlated with distinct behavioral states. Previously, we reported that dopamine, a reward signal, broadens the time window for synaptic potentiation and modulates the outcome of hippocampal STDP even when applied after the plasticity induction protocol (Brzosko et al., 2015). Here, we demonstrate that sequential neuromodulation of STDP by acetylcholine and dopamine offers an efficacious model of reward-based navigation. Specifically, our experimental data in mouse hippocampal slices show that acetylcholine biases STDP toward synaptic depression, whilst subsequent application of dopamine converts this depression into potentiation. Incorporating this bidirectional neuromodulation-enabled correlational synaptic learning rule into a computational model yields effective navigation toward changing reward locations, as in natural foraging behavior. Thus, temporally sequenced neuromodulation of STDP enables associations to be made between actions and outcomes and also provides a possible mechanism for aligning the time scales of cellular and behavioral learning. DOI: http://dx.doi.org/10.7554/eLife.27756.001 PMID:28691903
Inter-synaptic learning of combination rules in a cortical network model
Lavigne, Frédéric; Avnaïm, Francis; Dumercy, Laurent
2014-01-01
Selecting responses in working memory while processing combinations of stimuli depends strongly on their relations stored in long-term memory. However, the learning of XOR-like combinations of stimuli and responses according to complex rules raises the issue of the non-linear separability of the responses within the space of stimuli. One proposed solution is to add neurons that perform a stage of non-linear processing between the stimuli and responses, at the cost of increasing the network size. Based on the non-linear integration of synaptic inputs within dendritic compartments, we propose here an inter-synaptic (IS) learning algorithm that determines the probability of potentiating/depressing each synapse as a function of the co-activity of the other synapses within the same dendrite. The IS learning is effective with random connectivity and without either a priori wiring or additional neurons. Our results show that IS learning generates efficacy values that are sufficient for the processing of XOR-like combinations, on the basis of the sole correlational structure of the stimuli and responses. We analyze the types of dendrites involved in terms of the number of synapses from pre-synaptic neurons coding for the stimuli and responses. The synaptic efficacy values obtained show that different dendrites specialize in the detection of different combinations of stimuli. The resulting behavior of the cortical network model is analyzed as a function of inter-synaptic vs. Hebbian learning. Combinatorial priming effects show that the retrospective activity of neurons coding for the stimuli trigger XOR-like combination-selective prospective activity of neurons coding for the expected response. The synergistic effects of inter-synaptic learning and of mixed-coding neurons are simulated. The results show that, although each mechanism is sufficient by itself, their combined effects improve the performance of the network. PMID:25221529
Towards autonomous neuroprosthetic control using Hebbian reinforcement learning.
Mahmoudi, Babak; Pohlmeyer, Eric A; Prins, Noeline W; Geng, Shijia; Sanchez, Justin C
2013-12-01
Our goal was to design an adaptive neuroprosthetic controller that could learn the mapping from neural states to prosthetic actions and automatically adjust adaptation using only a binary evaluative feedback as a measure of desirability/undesirability of performance. Hebbian reinforcement learning (HRL) in a connectionist network was used for the design of the adaptive controller. The method combines the efficiency of supervised learning with the generality of reinforcement learning. The convergence properties of this approach were studied using both closed-loop control simulations and open-loop simulations that used primate neural data from robot-assisted reaching tasks. The HRL controller was able to perform classification and regression tasks using its episodic and sequential learning modes, respectively. In our experiments, the HRL controller quickly achieved convergence to an effective control policy, followed by robust performance. The controller also automatically stopped adapting the parameters after converging to a satisfactory control policy. Additionally, when the input neural vector was reorganized, the controller resumed adaptation to maintain performance. By estimating an evaluative feedback directly from the user, the HRL control algorithm may provide an efficient method for autonomous adaptation of neuroprosthetic systems. This method may enable the user to teach the controller the desired behavior using only a simple feedback signal.
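To illustrate the general idea of gating a Hebbian update with a binary evaluative signal, here is a minimal sketch of a reward-gated Hebbian update; it is an illustrative stand-in under assumed forms (linear action scoring, Gumbel-noise action selection), not the connectionist controller described in the paper.

```python
# Minimal sketch of a reward-gated Hebbian update for mapping states to actions.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_actions = 64, 4
W = rng.normal(scale=0.01, size=(n_actions, n_neurons))
eta = 0.05

def select_action(x):
    """Stochastically pick an action from the current linear scores."""
    logits = W @ x
    return int(np.argmax(logits + rng.gumbel(size=n_actions)))

def hebbian_rl_update(x, action, feedback):
    """feedback: +1 if the action was judged desirable, -1 otherwise."""
    y = np.zeros(n_actions)
    y[action] = 1.0
    W[:] += eta * feedback * np.outer(y, x)   # evaluative signal gates the Hebbian term
```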
Faghihi, Faramarz; Moustafa, Ahmed A.
2015-01-01
Synapses act as information filters through different molecular mechanisms, including retrograde messengers that affect neuronal spiking activity. One well-known effect of retrograde messengers on presynaptic neurons is a change in the probability of neurotransmitter release. Hebbian learning describes the strengthening of a synapse from a presynaptic input onto a postsynaptic neuron when both pre- and postsynaptic neurons are coactive. In this work, a theory of homeostatic regulation of neurotransmitter release by retrograde messengers and Hebbian plasticity in neuronal encoding is presented. Encoding efficiency was measured for different synaptic conditions. In order to achieve high encoding efficiency, the spiking pattern of a neuron should depend on the intensity of the input and show low levels of noise. Here, spike trains are represented as sequences of zeros and ones (no spike or spike in a time bin, respectively) grouped into words of length three. The frequency of each of the eight possible words is then measured from the spike trains. These frequencies are used to measure neuronal efficiency in different conditions and for different parameter values. Results show that neurons whose synapses act as band-pass filters encode their input most efficiently when both the Hebbian mechanism and homeostatic regulation of neurotransmitter release are present in the synapses. Specifically, integrating homeostatic regulation of feedback inhibition with the Hebbian mechanism and homeostatic regulation of neurotransmitter release leads to even higher efficiency when high stimulus intensity is presented to the neurons. However, neurons with synapses acting as high-pass filters show no notable increase in encoding efficiency for any of the simulated synaptic plasticity mechanisms. This study demonstrates the importance of the cooperation between the Hebbian mechanism and the regulation of neurotransmitter release induced by rapidly diffusing retrograde messengers, in neurons whose synapses act as low-pass and band-pass filters, for obtaining high encoding efficiency under different environmental and physiological conditions. PMID:25972786
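A small sketch of the word-frequency measure described above: a binary spike train is split into 3-bin words and the empirical frequency of each of the eight possible words is counted. The use of overlapping windows here is an assumption; the paper may use disjoint windows.

```python
# Minimal sketch: empirical frequencies of 3-bin binary words in a spike train.
import numpy as np
from collections import Counter

def word_frequencies(spikes, word_len=3):
    """spikes: 1-D array of 0s and 1s (one entry per time bin)."""
    words = ["".join(map(str, spikes[i:i + word_len]))
             for i in range(len(spikes) - word_len + 1)]
    counts = Counter(words)
    total = sum(counts.values())
    all_words = [format(k, f"0{word_len}b") for k in range(2 ** word_len)]
    return {w: counts.get(w, 0) / total for w in all_words}

rng = np.random.default_rng(0)
train = (rng.random(1000) < 0.2).astype(int)   # Bernoulli stand-in for a spike train
print(word_frequencies(train))
```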
A Neuromorphic Architecture for Object Recognition and Motion Anticipation Using Burst-STDP
Balduzzi, David; Tononi, Giulio
2012-01-01
In this work we investigate the possibilities offered by a minimal framework of artificial spiking neurons to be deployed in silico. Here we introduce a hierarchical network architecture of spiking neurons which learns to recognize moving objects in a visual environment and determine the correct motor output for each object. These tasks are learned through both supervised and unsupervised spike timing dependent plasticity (STDP). STDP is responsible for the strengthening (or weakening) of synapses in relation to pre- and post-synaptic spike times and has been described as a Hebbian paradigm taking place both in vitro and in vivo. We utilize a variation of STDP learning, called burst-STDP, which is based on the notion that, since spikes are expensive in terms of energy consumption, then strong bursting activity carries more information than single (sparse) spikes. Furthermore, this learning algorithm takes advantage of homeostatic renormalization, which has been hypothesized to promote memory consolidation during NREM sleep. Using this learning rule, we design a spiking neural network architecture capable of object recognition, motion detection, attention towards important objects, and motor control outputs. We demonstrate the abilities of our design in a simple environment with distractor objects, multiple objects moving concurrently, and in the presence of noise. Most importantly, we show how this neural network is capable of performing these tasks using a simple leaky-integrate-and-fire (LIF) neuron model with binary synapses, making it fully compatible with state-of-the-art digital neuromorphic hardware designs. As such, the building blocks and learning rules presented in this paper appear promising for scalable fully neuromorphic systems to be implemented in hardware chips. PMID:22615855
Spike-Based Bayesian-Hebbian Learning of Temporal Sequences
Lindén, Henrik; Lansner, Anders
2016-01-01
Many cognitive and motor functions are enabled by the temporal representation and processing of stimuli, but it remains an open issue how neocortical microcircuits can reliably encode and replay such sequences of information. To better understand this, a modular attractor memory network is proposed in which meta-stable sequential attractor transitions are learned through changes to synaptic weights and intrinsic excitabilities via the spike-based Bayesian Confidence Propagation Neural Network (BCPNN) learning rule. We find that the formation of distributed memories, embodied by increased periods of firing in pools of excitatory neurons, together with asymmetrical associations between these distinct network states, can be acquired through plasticity. The model’s feasibility is demonstrated using simulations of adaptive exponential integrate-and-fire model neurons (AdEx). We show that the learning and speed of sequence replay depends on a confluence of biophysically relevant parameters including stimulus duration, level of background noise, ratio of synaptic currents, and strengths of short-term depression and adaptation. Moreover, sequence elements are shown to flexibly participate multiple times in the sequence, suggesting that spiking attractor networks of this type can support an efficient combinatorial code. The model provides a principled approach towards understanding how multiple interacting plasticity mechanisms can coordinate hetero-associative learning in unison. PMID:27213810
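For context, the following is a minimal sketch of the rate-based core of a BCPNN-style synapse, in which the weight is the log-odds of co-activation estimated from exponentially filtered pre/post traces; the spike-based formulation and the multiple trace time scales used for sequence learning in the paper are not reproduced, and the constants are illustrative.

```python
# Minimal sketch of a rate-based BCPNN-style weight from filtered probabilities.
import numpy as np

eps = 1e-4          # small floor to avoid log(0)
tau = 0.05          # trace update rate per time step

class BCPNNSynapse:
    def __init__(self):
        self.p_i = self.p_j = self.p_ij = eps

    def update(self, pre, post):
        """pre, post: activity levels in [0, 1] for this time step."""
        self.p_i += tau * (pre - self.p_i)
        self.p_j += tau * (post - self.p_j)
        self.p_ij += tau * (pre * post - self.p_ij)

    @property
    def weight(self):
        # log-odds of co-activation relative to independence
        return np.log((self.p_ij + eps) / ((self.p_i + eps) * (self.p_j + eps)))

    @property
    def bias(self):
        return np.log(self.p_j + eps)
```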
Learning with incomplete information and the mathematical structure behind it.
Kühn, Reimer; Stamatescu, Ion-Olimpiu
2007-07-01
We investigate the problem of learning with incomplete information as exemplified by learning with delayed reinforcement. We study a two-phase learning scenario in which a phase of Hebbian associative learning based on momentary internal representations is supplemented by an 'unlearning' phase depending on a graded reinforcement signal. The reinforcement signal quantifies the success rate globally over a number of learning steps in phase one, and 'unlearning' is indiscriminate with respect to associations learnt in that phase. Learning according to this model is studied via simulations and analytically within a student-teacher scenario, both for single-layer networks and for a committee machine. Success and speed of learning depend on the ratio lambda of the learning rates used for the associative Hebbian learning phase and for the unlearning correction in response to the reinforcement signal, respectively. Asymptotically perfect generalization is possible only if this ratio exceeds a critical value lambda_c, in which case the generalization error exhibits a power-law decay with the number of examples seen by the student, with an exponent that depends in a non-universal manner on the parameter lambda. We find these features to be robust against a wide spectrum of modifications of microscopic modelling details. Two illustrative applications are also provided: a robot learning to navigate a field containing obstacles, and the identification of a specific component in a collection of stimuli.
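The two-phase scheme can be written compactly; the function below is a hedged sketch of one learning round (Hebbian association on the network's own decisions, followed by indiscriminate unlearning scaled by a graded success signal). The perceptron-style setup and all constants are illustrative assumptions, and the critical ratio lambda_c analyzed in the paper is not reproduced here.

```python
import numpy as np

def two_phase_update(w, X, decisions, success_rate, eta=0.01, lam=4.0):
    """One round of the two-phase scheme sketched from the abstract (illustrative only).

    Phase 1: Hebbian association between the inputs X and the network's own
             momentary decisions, whatever they were.
    Phase 2: indiscriminate 'unlearning' of those same associations, scaled by
             how poorly the whole batch went (1 - success_rate), at rate eta / lam.
             lam is the learning-rate ratio studied in the paper.
    """
    hebb = (decisions[:, None] * X).mean(axis=0)
    w = w + eta * hebb                                  # associative Hebbian phase
    w = w - (eta / lam) * (1.0 - success_rate) * hebb   # graded, indiscriminate unlearning
    return w

# toy usage with random data (no claim about generalization performance)
rng = np.random.default_rng(2)
w = np.zeros(50)
X = rng.standard_normal((20, 50))
decisions = np.sign(X @ w + rng.standard_normal(20))    # noisy internal decisions
w = two_phase_update(w, X, decisions, success_rate=0.6)
print(np.linalg.norm(w))
```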
A framework for plasticity implementation on the SpiNNaker neural architecture.
Galluppi, Francesco; Lagorce, Xavier; Stromatias, Evangelos; Pfeiffer, Michael; Plana, Luis A; Furber, Steve B; Benosman, Ryad B
2014-01-01
Many of the precise biological mechanisms of synaptic plasticity remain elusive, but simulations of neural networks have greatly enhanced our understanding of how specific global functions arise from the massively parallel computation of neurons and local Hebbian or spike-timing dependent plasticity rules. For simulating large portions of neural tissue, this has created an increasingly strong need for large scale simulations of plastic neural networks on special purpose hardware platforms, because synaptic transmissions and updates are badly matched to computing style supported by current architectures. Because of the great diversity of biological plasticity phenomena and the corresponding diversity of models, there is a great need for testing various hypotheses about plasticity before committing to one hardware implementation. Here we present a novel framework for investigating different plasticity approaches on the SpiNNaker distributed digital neural simulation platform. The key innovation of the proposed architecture is to exploit the reconfigurability of the ARM processors inside SpiNNaker, dedicating a subset of them exclusively to process synaptic plasticity updates, while the rest perform the usual neural and synaptic simulations. We demonstrate the flexibility of the proposed approach by showing the implementation of a variety of spike- and rate-based learning rules, including standard Spike-Timing dependent plasticity (STDP), voltage-dependent STDP, and the rate-based BCM rule. We analyze their performance and validate them by running classical learning experiments in real time on a 4-chip SpiNNaker board. The result is an efficient, modular, flexible and scalable framework, which provides a valuable tool for the fast and easy exploration of learning models of very different kinds on the parallel and reconfigurable SpiNNaker system.
Recurrent-neural-network-based Boolean factor analysis and its application to word clustering.
Frolov, Alexander A; Husek, Dusan; Polyakov, Pavel Yu
2009-07-01
The objective of this paper is to introduce a neural-network-based algorithm for word clustering as an extension of the neural-network-based Boolean factor analysis algorithm (Frolov, 2007). It is shown that this extended algorithm supports even the more complex model of signals that are supposed to be related to textual documents. It is hypothesized that every topic in textual data is characterized by a set of words which coherently appear in documents dedicated to a given topic. The appearance of each word in a document is coded by the activity of a particular neuron. In accordance with the Hebbian learning rule implemented in the network, sets of coherently appearing words (treated as factors) create tightly connected groups of neurons, hence revealing them as attractors of the network dynamics. The found factors are eliminated from the network memory by the Hebbian unlearning rule, facilitating the search for other factors. Topics related to the found sets of words can be identified based on the words' semantics. To make the method complete, a special technique based on a Bayesian procedure has been developed for the following purposes: first, to provide a complete description of factors in terms of component probability, and second, to enhance the accuracy of classification of signals to determine whether they contain the factor. Since it is assumed that every word may possibly contribute to several topics, the proposed method might be related to the method of fuzzy clustering. In this paper, we show that the results of Boolean factor analysis and fuzzy clustering are not contradictory, but complementary. To demonstrate the capabilities of this approach, the method is applied to two types of textual data on neural networks in two different languages. The obtained topics and corresponding words are at a good level of agreement despite the fact that identical topics in Russian and English conferences contain different sets of keywords.
From modulated Hebbian plasticity to simple behavior learning through noise and weight saturation.
Soltoggio, Andrea; Stanley, Kenneth O
2012-10-01
Synaptic plasticity is a major mechanism for adaptation, learning, and memory. Yet current models struggle to link local synaptic changes to the acquisition of behaviors. The aim of this paper is to demonstrate a computational relationship between local Hebbian plasticity and behavior learning by exploiting two traditionally unwanted features: neural noise and synaptic weight saturation. A modulation signal is employed to arbitrate the sign of plasticity: when the modulation is positive, the synaptic weights saturate to express exploitative behavior; when it is negative, the weights converge to average values, and neural noise reconfigures the network's functionality. This process is demonstrated through simulating neural dynamics in the autonomous emergence of fearful and aggressive navigating behaviors and in the solution to reward-based problems. The neural model learns, memorizes, and modifies different behaviors that lead to positive modulation in a variety of settings. The algorithm establishes a simple relationship between local plasticity and behavior learning by demonstrating the utility of noise and weight saturation. Moreover, it provides a new tool to simulate adaptive behavior, and contributes to bridging the gap between synaptic changes and behavior in neural computation. Copyright © 2012 Elsevier Ltd. All rights reserved.
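The core update described (a modulation signal arbitrating the sign of a noisy Hebbian change, with hard weight saturation) can be sketched in a few lines. The learning rate, noise level, and bounds below are assumed values, and the full behavioral architecture of the paper is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

def modulated_hebbian_step(w, pre, post, modulation,
                           lr=0.1, noise_std=0.02, w_max=1.0):
    """One step of a modulation-gated Hebbian rule with neural noise and
    weight saturation (all constants are illustrative assumptions)."""
    dw = lr * modulation * np.outer(post, pre)        # modulation sets the sign of plasticity
    dw += noise_std * rng.standard_normal(w.shape)    # ongoing noise
    return np.clip(w + dw, 0.0, w_max)                # hard saturation bounds

# toy usage: positive modulation saturates co-active pathways (exploitation);
# negative modulation erodes them while noise keeps reshaping the rest (exploration)
w = np.full((3, 4), 0.5)
pre, post = np.array([1.0, 0.0, 1.0, 0.0]), np.array([1.0, 1.0, 0.0])
for _ in range(200):
    w = modulated_hebbian_step(w, pre, post, modulation=+1.0)
print(np.round(w, 2))    # co-active entries pinned near w_max
for _ in range(200):
    w = modulated_hebbian_step(w, pre, post, modulation=-1.0)
print(np.round(w, 2))    # the same entries are now suppressed toward the lower bound
```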
On the capacity of ternary Hebbian networks
NASA Technical Reports Server (NTRS)
Baram, Yoram
1991-01-01
Networks of ternary neurons storing random vectors over the set {-1, 0, 1} by the so-called Hebbian rule are considered. It is shown that the maximal number of stored patterns that are equilibrium states of the network, with probability tending to one as N tends to infinity, is at least of order N^(2 - 1/alpha)/K, where N is the number of neurons, K is the number of nonzero elements in a pattern, and t = alpha*K, with alpha between 1/2 and 1, is the threshold in the neuron function. While, for small K, this bound is similar to that obtained for fully connected binary networks, the number of interneural connections required in the ternary case is considerably smaller. Similar bounds, incorporating error probabilities, are shown to guarantee, in the same probabilistic sense, the correction of errors in the nonzero elements and in the location of these elements.
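The storage scheme just described is easy to reproduce numerically: ternary patterns over {-1, 0, 1} with K nonzero entries, outer-product (Hebbian) weights, and a firing threshold t = alpha*K. The transfer function below (fire +1 or -1 when the local field exceeds +t or -t, stay silent otherwise) is a plausible reading of the abstract rather than the paper's exact definition.

```python
import numpy as np

rng = np.random.default_rng(4)
N, K, M, alpha = 200, 10, 15, 0.75           # neurons, nonzeros per pattern, patterns, threshold factor
t = alpha * K

def random_ternary(N, K):
    v = np.zeros(N)
    idx = rng.choice(N, K, replace=False)
    v[idx] = rng.choice([-1.0, 1.0], K)
    return v

patterns = np.array([random_ternary(N, K) for _ in range(M)])
W = patterns.T @ patterns                    # Hebbian (outer-product) storage
np.fill_diagonal(W, 0.0)

def ternary_update(s):
    h = W @ s
    out = np.zeros_like(s)
    out[h > t] = 1.0                         # fire +1 above threshold
    out[h < -t] = -1.0                       # fire -1 below -threshold
    return out                               # otherwise stay silent (0)

# check how many stored patterns are fixed points of one synchronous update
stable = sum(np.array_equal(ternary_update(p), p) for p in patterns)
print(f"{stable}/{M} stored patterns are equilibrium states")
```

With K much smaller than N, the crosstalk between sparse ternary patterns stays well below the threshold t, which is the intuition behind the capacity bound quoted in the abstract.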
Feedforward Inhibition and Synaptic Scaling – Two Sides of the Same Coin?
Keck, Christian; Savin, Cristina; Lücke, Jörg
2012-01-01
Feedforward inhibition and synaptic scaling are important adaptive processes that control the total input a neuron can receive from its afferents. While often studied in isolation, the two have been reported to co-occur in various brain regions. The functional implications of their interactions remain unclear, however. Based on a probabilistic modeling approach, we show here that fast feedforward inhibition and synaptic scaling interact synergistically during unsupervised learning. In technical terms, we model the input to a neural circuit using a normalized mixture model with Poisson noise. We demonstrate analytically and numerically that, in the presence of lateral inhibition introducing competition between different neurons, Hebbian plasticity and synaptic scaling approximate the optimal maximum likelihood solutions for this model. Our results suggest that, beyond its conventional use as a mechanism to remove undesired pattern variations, input normalization can make typical neural interaction and learning rules optimal on the stimulus subspace defined through feedforward inhibition. Furthermore, learning within this subspace is more efficient in practice, as it helps avoid locally optimal solutions. Our results suggest a close connection between feedforward inhibition and synaptic scaling which may have important functional implications for general cortical processing. PMID:22457610
NASA Astrophysics Data System (ADS)
Moody, Daniela I.; Wilson, Cathy J.; Rowland, Joel C.; Altmann, Garrett L.
2015-06-01
Advanced pattern recognition and computer vision algorithms are of great interest for landscape characterization, change detection, and change monitoring in satellite imagery, in support of global climate change science and modeling. We present results from an ongoing effort to extend neuroscience-inspired models for feature extraction to the environmental sciences, and we demonstrate our work using Worldview-2 multispectral satellite imagery. We use a Hebbian learning rule to derive multispectral, multiresolution dictionaries directly from regional satellite normalized band difference index data. These feature dictionaries are used to build sparse scene representations, from which we automatically generate land cover labels via our CoSA algorithm: Clustering of Sparse Approximations. These data adaptive feature dictionaries use joint spectral and spatial textural characteristics to help separate geologic, vegetative, and hydrologic features. Land cover labels are estimated in example Worldview-2 satellite images of Barrow, Alaska, taken at two different times, and are used to detect and discuss seasonal surface changes. Our results suggest that an approach that learns from both spectral and spatial features is promising for practical pattern recognition problems in high resolution satellite imagery.
ERP evidence for conflict in contingency learning.
Whitehead, Peter S; Brewer, Gene A; Blais, Chris
2017-07-01
The proportion congruency effect refers to the observation that the magnitude of the Stroop effect increases as the proportion of congruent trials in a block increases. Contemporary work shows that proportion effects can be driven by both context and individual items, and are referred to as context-specific proportion congruency (CSPC) and item-specific proportion congruency (ISPC) effects, respectively. The conflict-modulated Hebbian learning account posits that these effects manifest from the same mechanism, while the parallel episodic processing model posits that the ISPC can occur by simple associative learning. Our prior work showed that the neural correlate of the CSPC is an N2 over frontocentral electrode sites approximately 300 ms after stimulus onset that predicts behavioral performance. There is strong consensus in the field that this N2 signal is associated with conflict detection in the medial frontal cortex. The experiment reported here assesses whether the same qualitative electrophysiological pattern of results holds for the ISPC. We find that the spatial topography of the N2 is similar but slightly delayed with a peak onset of approximately 300 ms after stimulus onset. We argue that this provides strong evidence that a single common mechanism, conflict-modulated Hebbian learning, drives both the ISPC and CSPC. © 2017 Society for Psychophysiological Research.
Buss, Aaron T; Wifall, Tim; Hazeltine, Eliot; Spencer, John P
2014-02-01
People are typically slower when executing two tasks than when only performing a single task. These dual-task costs are initially robust but are reduced with practice. Dux et al. (2009) explored the neural basis of dual-task costs and learning using fMRI. The inferior frontal junction (IFJ) showed a larger hemodynamic response on dual-task trials compared with single-task trials early in learning. As dual-task costs were eliminated, dual-task hemodynamics in IFJ reduced to single-task levels. Dux and colleagues concluded that the reduction of dual-task costs is accomplished through increased efficiency of information processing in IFJ. We present a dynamic field theory of response selection that addresses two questions regarding these results. First, what mechanism leads to the reduction of dual-task costs and associated changes in hemodynamics? We show that a simple Hebbian learning mechanism is able to capture the quantitative details of learning at both the behavioral and neural levels. Second, is efficiency isolated to cognitive control areas such as IFJ, or is it also evident in sensory motor areas? To investigate this, we restrict Hebbian learning to different parts of the neural model. None of the restricted learning models showed the same reductions in dual-task costs as the unrestricted learning model, suggesting that efficiency is distributed across cognitive control and sensory motor processing systems.
A computational developmental model for specificity and transfer in perceptual learning.
Solgi, Mojtaba; Liu, Taosheng; Weng, Juyang
2013-01-04
How and under what circumstances the training effects of perceptual learning (PL) transfer to novel situations is critical to our understanding of generalization and abstraction in learning. Although PL is generally believed to be highly specific to the trained stimulus, a series of psychophysical studies have recently shown that training effects can transfer to untrained conditions under certain experimental protocols. In this article, we present a brain-inspired, neuromorphic computational model of the Where-What visuomotor pathways which successfully explains both the specificity and transfer of perceptual learning. The major architectural novelty is that each feature neuron has both sensory and motor inputs. The network of neurons is autonomously developed from experience, using a refined Hebbian-learning rule and lateral competition, which altogether result in neuronal recruitment. Our hypothesis is that certain paradigms of experiments trigger two-way (descending and ascending) off-task processes about the untrained condition which lead to recruitment of more neurons in lower feature representation areas as well as higher concept representation areas for the untrained condition, hence the transfer. We put forward a novel proposition that gated self-organization of the connections during the off-task processes accounts for the observed transfer effects. Simulation results showed transfer of learning across retinal locations in a Vernier discrimination task in a double-training procedure, comparable to previous psychophysical data (Xiao et al., 2008). To the best of our knowledge, this model is the first neurally-plausible model to explain both transfer and specificity in a PL setting.
NASA Astrophysics Data System (ADS)
Marshall, Jonathan A.
1992-12-01
A simple self-organizing neural network model, called an EXIN network, that learns to process sensory information in a context-sensitive manner, is described. EXIN networks develop efficient representation structures for higher-level visual tasks such as segmentation, grouping, transparency, depth perception, and size perception. Exposure to a perceptual environment during a developmental period serves to configure the network to perform appropriate organization of sensory data. A new anti-Hebbian inhibitory learning rule permits superposition of multiple simultaneous neural activations (multiple winners), while maintaining contextual consistency constraints, instead of forcing winner-take-all pattern classifications. The activations can represent multiple patterns simultaneously and can represent uncertainty. The network performs parallel parsing, credit attribution, and simultaneous constraint satisfaction. EXIN networks can learn to represent multiple oriented edges even where they intersect and can learn to represent multiple transparently overlaid surfaces defined by stereo or motion cues. In the case of stereo transparency, the inhibitory learning both implements a uniqueness constraint and permits coactivation of cells representing multiple disparities at the same image location. Thus two or more disparities can be active simultaneously without interference. This behavior is analogous to that of Prazdny's stereo vision algorithm, with the bonus that each binocular point is assigned a unique disparity. In a large implementation, such a network would also be able to represent effectively the disparities of a cloud of points at random depths, like human observers, and unlike Prazdny's method.
Hebbian self-organizing integrate-and-fire networks for data clustering.
Landis, Florian; Ott, Thomas; Stoop, Ruedi
2010-01-01
We propose a Hebbian learning-based data clustering algorithm using spiking neurons. The algorithm is capable of distinguishing between clusters and noisy background data and finds an arbitrary number of clusters of arbitrary shape. These properties render the approach particularly useful for visual scene segmentation into arbitrarily shaped homogeneous regions. We present several application examples, and in order to highlight the advantages and the weaknesses of our method, we systematically compare the results with those from standard methods such as k-means and Ward's linkage clustering. The analysis demonstrates that not only is the clustering ability of the proposed algorithm more powerful than that of the two competing methods, but its time complexity is also more modest than that of its strongest commonly used competitor.
Information-driven self-organization: the dynamical system approach to autonomous robot behavior.
Ay, Nihat; Bernigau, Holger; Der, Ralf; Prokopenko, Mikhail
2012-09-01
In recent years, information theory has come into the focus of researchers interested in the sensorimotor dynamics of both robots and living beings. One root for these approaches is the idea that living beings are information processing systems and that the optimization of these processes should be an evolutionary advantage. Apart from these more fundamental questions, there is much interest recently in the question how a robot can be equipped with an internal drive for innovation or curiosity that may serve as a drive for an open-ended, self-determined development of the robot. The success of these approaches depends essentially on the choice of a convenient measure for the information. This article studies in some detail the use of the predictive information (PI), also called excess entropy or effective measure complexity, of the sensorimotor process. The PI of a process quantifies the total information of past experience that can be used for predicting future events. However, the application of information theoretic measures in robotics mostly is restricted to the case of a finite, discrete state-action space. This article aims at applying the PI in the dynamical systems approach to robot control. We study linear systems as a first step and derive exact results for the PI together with explicit learning rules for the parameters of the controller. Interestingly, these learning rules are of Hebbian nature and local in the sense that the synaptic update is given by the product of activities available directly at the pertinent synaptic ports. The general findings are exemplified by a number of case studies. In particular, in a two-dimensional system, designed at mimicking embodied systems with latent oscillatory locomotion patterns, it is shown that maximizing the PI means to recognize and amplify the latent modes of the robotic system. This and many other examples show that the learning rules derived from the maximum PI principle are a versatile tool for the self-organization of behavior in complex robotic systems.
NASA Astrophysics Data System (ADS)
Hsieh, Cheng-Chih; Roy, Anupam; Chang, Yao-Feng; Shahrjerdi, Davood; Banerjee, Sanjay K.
2016-11-01
Nanoscale metal oxide memristors have potential in the development of brain-inspired computing systems that are scalable and efficient. In such systems, memristors represent the native electronic analogues of the biological synapses. In this work, we show cerium oxide based bilayer memristors that are forming-free, low-voltage (˜|0.8 V|), energy-efficient (full on/off switching at ˜8 pJ with 20 ns pulses, intermediate states switching at ˜fJ), and reliable. Furthermore, pulse measurements reveal the analog nature of the memristive device; that is, it can directly be programmed to intermediate resistance states. Leveraging this finding, we demonstrate spike-timing-dependent plasticity, a spike-based Hebbian learning rule. In those experiments, the memristor exhibits a marked change in the normalized synaptic strength (>30 times), when the pre- and post-synaptic neural spikes overlap. This demonstration is an important step towards the physical construction of high density and high connectivity neural networks.
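The plasticity demonstration described amounts to driving an analog conductance with a pair-based STDP window. A generic sketch of such a window and its use to update a normalized device state is given below; the amplitudes, time constants, and update scale are assumptions, not the measured device characteristics.

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.6, a_minus=0.3, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP window: relative weight change versus t_post - t_pre (ms).
    Amplitudes and time constants are illustrative, not the measured device values."""
    delta_t = np.asarray(delta_t, dtype=float)
    return np.where(delta_t >= 0,
                    a_plus * np.exp(-delta_t / tau_plus),     # pre before post: potentiation
                    -a_minus * np.exp(delta_t / tau_minus))   # post before pre: depression

# drive a bounded, normalized 'conductance' state with a sequence of spike-time differences
g = 0.5
for dt_ms in [5.0, 5.0, -12.0, 2.0]:
    g = float(np.clip(g + 0.1 * stdp_dw(dt_ms), 0.0, 1.0))
    print(round(g, 3))
```

An analog memristor plays the role of g here: because it can be programmed to intermediate resistance states, the graded window values map directly onto conductance updates.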
Spontaneous scale-free structure in adaptive networks with synchronously dynamical linking
NASA Astrophysics Data System (ADS)
Yuan, Wu-Jie; Zhou, Jian-Fang; Li, Qun; Chen, De-Bao; Wang, Zhen
2013-08-01
Inspired by the anti-Hebbian learning rule in neural systems, we study how the feedback from dynamical synchronization shapes network structure by adding new links. Through extensive numerical simulations, we find that an adaptive network spontaneously forms scale-free structure, as confirmed in many real systems. Moreover, the adaptive process produces two nontrivial power-law behaviors of deviation strength from mean activity of the network and negative degree correlation, which exists widely in technological and biological networks. Importantly, these scalings are robust to variation of the adaptive network parameters, which may have meaningful implications in the scale-free formation and manipulation of dynamical networks. Our study thus suggests an alternative adaptive mechanism for the formation of scale-free structure with negative degree correlation, which means that nodes of high degree tend to connect, on average, with others of low degree and vice versa. The relevance of the results to structure formation and dynamical property in neural networks is briefly discussed as well.
Theta Coordinated Error Driven Learning in the Hippocampus (Open Access, Publisher’s Version)
2013-06-06
assumed to be Hebbian in nature, where individual neurons in an engram join together with synaptic weight increases to support facilitated recall of...together as part of a memory or engram representation, e.g., in the central area CA3 of the hippocampus. With these connections strengthened, the ability
Resonant spatiotemporal learning in large random recurrent networks.
Daucé, Emmanuel; Quoy, Mathias; Doyon, Bernard
2002-09-01
Taking a global analogy with the structure of perceptual biological systems, we present a system composed of two layers of real-valued sigmoidal neurons. The primary layer receives stimulating spatiotemporal signals, and the secondary layer is a fully connected random recurrent network. This secondary layer spontaneously displays complex chaotic dynamics. All connections have a constant time delay. We use for our experiments a Hebbian (covariance) learning rule. This rule slowly modifies the weights under the influence of a periodic stimulus. The effect of learning is twofold: (i) it simplifies the secondary-layer dynamics, which eventually stabilizes to a periodic orbit; and (ii) it connects the secondary layer to the primary layer, and realizes a feedback from the secondary to the primary layer. This feedback signal is added to the incoming signal, and matches it (i.e., the secondary layer performs a one-step prediction of the forthcoming stimulus). After learning, a resonant behavior can be observed: the system resonates with familiar stimuli, which activates a feedback signal. In particular, this resonance allows the recognition and retrieval of partial signals, and dynamic maintenance of the memory of past stimuli. This resonance is highly sensitive to the temporal relationships and to the periodicity of the presented stimuli. When we present stimuli which do not match in time or space, the feedback remains silent. The number of different stimuli for which resonant behavior can be learned is analyzed. As with Hopfield networks, the capacity is proportional to the size of the second, recurrent layer. Moreover, the high capacity displayed allows the implementation of our model on real-time systems interacting with their environment. Such an implementation is reported in the case of a simple behavior-based recognition task on a mobile robot. Finally, we present some functional analogies with biological systems in terms of autonomy and dynamic binding, and present some hypotheses on the computational role of feedback connections.
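The Hebbian (covariance) rule referred to above can be stated in two lines: weight changes are proportional to the product of the deviations of pre- and postsynaptic rates from their mean values. The batch version below is a sketch under that reading; the learning rate and toy signals are assumptions, and the delayed recurrent connections of the model are not reproduced.

```python
import numpy as np

def covariance_hebbian(pre_rates, post_rates, eta=1e-3):
    """Batch covariance (Hebbian) rule: dW_ij ~ <(post_i - <post_i>)(pre_j - <pre_j>)>.

    pre_rates:  (T, n_pre) array of presynaptic rates over time
    post_rates: (T, n_post) array of postsynaptic rates over time
    """
    pre_c = pre_rates - pre_rates.mean(axis=0)
    post_c = post_rates - post_rates.mean(axis=0)
    return eta * (post_c.T @ pre_c) / len(pre_rates)

# toy usage: a periodic stimulus driving in-phase and anti-phase output units
t = np.linspace(0, 10, 500)
pre = np.stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)], axis=1)
post = np.stack([np.sin(2 * np.pi * t), -np.sin(2 * np.pi * t)], axis=1)
dW = covariance_hebbian(pre, post)
print(np.round(dW / np.abs(dW).max(), 2))   # positive for in-phase pairs, negative for anti-phase
```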
Addressing the Movement of a Freescale Robotic Car Using Neural Network
NASA Astrophysics Data System (ADS)
Horváth, Dušan; Cuninka, Peter
2016-12-01
This article deals with the control of a small Freescale robotic car along a predefined guide line. The direction of movement of the robot is controlled by neural networks, and the weights (memory) of the neurons are calculated by Hebbian learning from truth tables, i.e., learning with a teacher. Reflective infrared sensors serve as inputs. The results are experiments used to compare two methods of mobile robot control for line tracking.
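A minimal version of the "Hebbian learning from truth tables" idea is shown below for a hypothetical three-sensor line follower: weights are computed in one shot as the outer product of desired motor outputs and sensor inputs. The sensor layout, the truth table, and the bipolar coding are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical truth table for a 3-sensor line follower (not the paper's table):
# inputs are bipolar IR readings (+1 = line seen, -1 = no line),
# outputs are bipolar commands for (turn_left, turn_right).
X = np.array([[-1, +1, -1],    # line centered      -> go straight
              [+1, -1, -1],    # line to the left   -> turn left
              [-1, -1, +1]])   # line to the right  -> turn right
Y = np.array([[-1, -1],
              [+1, -1],
              [-1, +1]])

W = Y.T @ X                    # one-shot Hebbian (outer-product) learning from the table

def motor_command(sensors):
    return np.sign(W @ np.asarray(sensors))

print(motor_command([-1, +1, -1]))   # centered: no turn command
print(motor_command([+1, -1, -1]))   # line under the left sensor: turn left
```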
Morvan's syndrome and the sustained absence of all sleep rhythms for months or years: An hypothesis.
Touzet, Claude
2016-09-01
Despite the predation costs, sleep is ubiquitous in the animal realm. Humans spend a third of their life sleeping, and the quality of sleep has been related to co-morbidity, Alzheimer disease, etc. Excessive wakefulness induces rapid changes in cognitive performance, and it is claimed that one could die of sleep deprivation as quickly as from the absence of water. In this context, the fact that a few people are able to go without sleep for months, even years, without displaying any cognitive troubles requires explanation. Theories ascribing sleep to memory consolidation are unable to explain such observations. This is not the case for the theory of sleep as the Hebbian reinforcement of inhibitory synapses (ToS-HRIS). Hebbian learning (Long-Term Depression, LTD) guarantees that an efficient inhibitory synapse will lose its efficiency precisely because it is efficient at avoiding the activation of the post-synaptic neuron. This erosion of inhibition is replenished by Hebbian learning (Long-Term Potentiation, LTP) when pre- and post-synaptic neurons are active together, which is exactly what happens with the travelling depolarization waves of slow-wave sleep (SWS). The best documented cases of months-long insomnia are reports of patients with Morvan's syndrome. This syndrome has an autoimmune cause that impairs, among many things, the potassium channels of the post-synaptic neurons, increasing LTP and decreasing LTD. We hypothesize that the absence of erosion of inhibitory efficiency during wakefulness (thanks to a decrease of inhibitory LTD) is the cause of an absence of slow-wave sleep (SWS), which also results in the absence of REM sleep. Copyright © 2016 Elsevier Ltd. All rights reserved.
Frégnac, Yves; Pananceau, Marc; René, Alice; Huguet, Nazyed; Marre, Olivier; Levy, Manuel; Shulz, Daniel E.
2010-01-01
Spike timing-dependent plasticity (STDP) is considered as an ubiquitous rule for associative plasticity in cortical networks in vitro. However, limited supporting evidence for its functional role has been provided in vivo. In particular, there are very few studies demonstrating the co-occurrence of synaptic efficiency changes and alteration of sensory responses in adult cortex during Hebbian or STDP protocols. We addressed this issue by reviewing and comparing the functional effects of two types of cellular conditioning in cat visual cortex. The first one, referred to as the “covariance” protocol, obeys a generalized Hebbian framework, by imposing, for different stimuli, supervised positive and negative changes in covariance between postsynaptic and presynaptic activity rates. The second protocol, based on intracellular recordings, replicated in vivo variants of the theta-burst paradigm (TBS), proven successful in inducing long-term potentiation in vitro. Since it was shown to impose a precise correlation delay between the electrically activated thalamic input and the TBS-induced postsynaptic spike, this protocol can be seen as a probe of causal (“pre-before-post”) STDP. By choosing a thalamic region where the visual field representation was in retinotopic overlap with the intracellularly recorded cortical receptive field as the afferent site for supervised electrical stimulation, this protocol allowed to look for possible correlates between STDP and functional reorganization of the conditioned cortical receptive field. The rate-based “covariance protocol” induced significant and large amplitude changes in receptive field properties, in both kitten and adult V1 cortex. The TBS STDP-like protocol produced in the adult significant changes in the synaptic gain of the electrically activated thalamic pathway, but the statistical significance of the functional correlates was detectable mostly at the population level. Comparison of our observations with the literature leads us to re-examine the experimental status of spike timing-dependent potentiation in adult cortex. We propose the existence of a correlation-based threshold in vivo, limiting the expression of STDP-induced changes outside the critical period, and which accounts for the stability of synaptic weights during sensory cortical processing in the absence of attention or reward-gated supervision. PMID:21423533
A Theory of How Columns in the Neocortex Enable Learning the Structure of the World
Hawkins, Jeff; Ahmad, Subutai; Cui, Yuwei
2017-01-01
Neocortical regions are organized into columns and layers. Connections between layers run mostly perpendicular to the surface, suggesting a columnar functional organization. Some layers have long-range excitatory lateral connections, suggesting interactions between columns. Similar patterns of connectivity exist in all regions, but their exact role remains a mystery. In this paper, we propose a network model composed of columns and layers that performs robust object learning and recognition. Each column integrates its changing input over time to learn complete predictive models of observed objects. Excitatory lateral connections across columns allow the network to more rapidly infer objects based on the partial knowledge of adjacent columns. Because columns integrate input over time and space, the network learns models of complex objects that extend well beyond the receptive field of individual cells. Our network model introduces a new feature to cortical columns. We propose that a representation of location relative to the object being sensed is calculated within the sub-granular layers of each column. The location signal is provided as an input to the network, where it is combined with sensory data. Our model contains two layers and one or more columns. Simulations show that, using Hebbian-like learning rules, small single-column networks can learn to recognize hundreds of objects, with each object containing tens of features. Multi-column networks recognize objects with significantly fewer movements of the sensory receptors. Given the ubiquity of columnar and laminar connectivity patterns throughout the neocortex, we propose that columns and regions have more powerful recognition and modeling capabilities than previously assumed. PMID:29118696
Cooperation-Induced Topological Complexity: A Promising Road to Fault Tolerance and Hebbian Learning
2012-03-16
topological complexity a way to compare the efficiency of a scale-free network to the random network of Erdős and Rényi. All this is extensively discussed in...an excellent review paper by Arenas et al. (2008) showing very interesting comparisons of Erdős–Rényi networks and scale-free networks as a function
Proposed mechanism for learning and memory erasure in a white-noise-driven sleeping cortex.
Steyn-Ross, Moira L; Steyn-Ross, D A; Sleigh, J W; Wilson, M T; Wilcocks, Lara C
2005-12-01
Understanding the structure and purpose of sleep remains one of the grand challenges of neurobiology. Here we use a mean-field linearized theory of the sleeping cortex to derive statistics for synaptic learning and memory erasure. The growth in correlated low-frequency high-amplitude voltage fluctuations during slow-wave sleep (SWS) is characterized by a probability density function that becomes broader and shallower as the transition into rapid-eye-movement (REM) sleep is approached. At transition, the Shannon information entropy of the fluctuations is maximized. If we assume Hebbian-learning rules apply to the cortex, then its correlated response to white-noise stimulation during SWS provides a natural mechanism for a synaptic weight change that will tend to shut down reverberant neural activity. In contrast, during REM sleep the weights will evolve in a direction that encourages excitatory activity. These entropy and weight-change predictions lead us to identify the final portion of deep SWS that occurs immediately prior to transition into REM sleep as a time of enhanced erasure of labile memory. We draw a link between the sleeping cortex and Landauer's dissipation theorem for irreversible computing [R. Landauer, IBM J. Res. Devel. 5, 183 (1961)], arguing that because information erasure is an irreversible computation, there is an inherent entropy cost as the cortex transits from SWS into REM sleep.
Unsupervised learning in general connectionist systems.
Dente, J A; Mendes, R Vilela
1996-01-01
There is a common framework in which different connectionist systems may be treated in a unified way. The general system onto which they may all be mapped is a network which, in addition to the connection strengths, has an adaptive node parameter controlling the output intensity. In this paper we generalize two neural network learning schemes to networks with node parameters. In generalized Hebbian learning, we find improvements to the convergence rate for small eigenvalues in principal component analysis. For competitive learning, the use of node parameters also seems useful in that, by emphasizing or de-emphasizing the dominance of winning neurons, either improved robustness or discrimination is obtained.
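The standard form of generalized Hebbian learning referenced here is Sanger's rule for principal component extraction; the sketch below shows only that baseline rule, without the adaptive node parameter the paper adds (the learning rate, number of epochs, and toy data are assumptions).

```python
import numpy as np

def sangers_rule(X, n_components=2, eta=1e-3, epochs=50, seed=0):
    """Standard generalized Hebbian algorithm (Sanger's rule) for PCA.
    Per-sample update: dW = eta * (y x^T - lower_triangular(y y^T) W)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_components, X.shape[1])) * 0.01
    for _ in range(epochs):
        for x in X:
            y = W @ x
            W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

# toy usage: data with one dominant direction
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 3)) * np.array([3.0, 1.0, 0.3])
W = sangers_rule(X - X.mean(axis=0))
print(np.round(W / np.linalg.norm(W, axis=1, keepdims=True), 2))  # rows approximate the leading PCs
```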
Pattern Adaptation and Normalization Reweighting.
Westrick, Zachary M; Heeger, David J; Landy, Michael S
2016-09-21
Adaptation to an oriented stimulus changes both the gain and preferred orientation of neural responses in V1. Neurons tuned near the adapted orientation are suppressed, and their preferred orientations shift away from the adapter. We propose a model in which weights of divisive normalization are dynamically adjusted to homeostatically maintain response products between pairs of neurons. We demonstrate that this adjustment can be performed by a very simple learning rule. Simulations of this model closely match existing data from visual adaptation experiments. We consider several alternative models, including variants based on homeostatic maintenance of response correlations or covariance, as well as feedforward gain-control models with multiple layers, and we demonstrate that homeostatic maintenance of response products provides the best account of the physiological data. Adaptation is a phenomenon throughout the nervous system in which neural tuning properties change in response to changes in environmental statistics. We developed a model of adaptation that combines normalization (in which a neuron's gain is reduced by the summed responses of its neighbors) and Hebbian learning (in which synaptic strength, in this case divisive normalization, is increased by correlated firing). The model is shown to account for several properties of adaptation in primary visual cortex in response to changes in the statistics of contour orientation. Copyright © 2016 the authors 0270-6474/16/369805-12$15.00/0.
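The central mechanism proposed (normalization weights adjusted to homeostatically maintain pairwise response products) can be sketched with a simple divisive-normalization response model: each weight moves in proportion to the deviation of the current response product from a stored target. The response model, constants, and toy stimulus below are simplified assumptions rather than the paper's fitted model.

```python
import numpy as np

def normalized_response(drive, w, sigma=0.1):
    """Divisive normalization: response = drive / (sigma + summed weighted drive)."""
    return drive / (sigma + w @ drive)

def adapt_normalization(drive, w, targets, eta=0.05, steps=500):
    """Homeostatic reweighting: each normalization weight w[i, j] grows when the
    response product r_i * r_j exceeds its target, increasing mutual suppression."""
    for _ in range(steps):
        r = normalized_response(drive, w)
        w = np.maximum(w + eta * (np.outer(r, r) - targets), 0.0)
    return w

# toy usage: adapt to a stimulus that over-drives neuron 0, then probe with a uniform test input
n = 4
drive_pre = np.ones(n)
w0 = np.full((n, n), 0.25)
r_pre = normalized_response(drive_pre, w0)
targets = np.outer(r_pre, r_pre)                 # response products measured before adaptation
drive_adapt = np.array([2.0, 1.2, 1.0, 0.9])     # adapting stimulus
w_adapt = adapt_normalization(drive_adapt, w0.copy(), targets)
r_test = normalized_response(drive_pre, w_adapt)
print(np.round(r_test / r_pre, 2))               # neuron 0, over-driven by the adapter, is most suppressed
```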
Hebbian Learning of Cognitive Control: Dealing with Specific and Nonspecific Adaptation
ERIC Educational Resources Information Center
Verguts, Tom; Notebaert, Wim
2008-01-01
The conflict monitoring model of M. M. Botvinick, T. S. Braver, D. M. Barch, C. S. Carter, and J. D. Cohen (2001) triggered several research programs investigating various aspects of cognitive control. One problematic aspect of the Botvinick et al. model is that there is no clear account of how the cognitive system knows where to intervene when…
Woodward, Alexander; Froese, Tom; Ikegami, Takashi
2015-02-01
The state space of a conventional Hopfield network typically exhibits many different attractors of which only a small subset satisfies constraints between neurons in a globally optimal fashion. It has recently been demonstrated that combining Hebbian learning with occasional alterations of normal neural states avoids this problem by means of self-organized enlargement of the best basins of attraction. However, so far it is not clear to what extent this process of self-optimization is also operative in real brains. Here we demonstrate that it can be transferred to more biologically plausible neural networks by implementing a self-optimizing spiking neural network model. In addition, by using this spiking neural network to emulate a Hopfield network with Hebbian learning, we attempt to make a connection between rate-based and temporal coding based neural systems. Although further work is required to make this model more realistic, it already suggests that the efficacy of the self-optimizing process is independent from the simplifying assumptions of a conventional Hopfield network. We also discuss natural and cultural processes that could be responsible for occasional alteration of neural firing patterns in actual brains. Copyright © 2014 Elsevier Ltd. All rights reserved.
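The self-optimization process referred to (reset to random states, let the network converge, then apply Hebbian learning to the converged state so that frequently visited attractors enlarge their basins) can be sketched at the plain Hopfield level. The spiking implementation of the paper is not reproduced, the constraint matrix is a random toy problem, and whether attractor quality improves depends on the problem structure.

```python
import numpy as np

rng = np.random.default_rng(6)
N = 40

# a random symmetric constraint network (toy problem, not the paper's task)
J = rng.standard_normal((N, N))
J = (J + J.T) / 2.0
np.fill_diagonal(J, 0.0)

def converge(W, s, sweeps=20):
    """Asynchronous Hopfield updates until (approximately) settled."""
    for _ in range(sweeps):
        for i in rng.permutation(N):
            s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    return s

def energy(s):
    return -0.5 * s @ J @ s        # always evaluated on the original constraints J

def mean_attractor_energy(W, trials=30):
    return np.mean([energy(converge(W, rng.choice([-1.0, 1.0], N)))
                    for _ in range(trials)])

W = J.copy()
print("mean attractor energy before:", round(mean_attractor_energy(W), 2))
for _ in range(300):               # occasional state resets + Hebbian learning on the reached attractor
    s = converge(W, rng.choice([-1.0, 1.0], N))
    W += 0.01 * np.outer(s, s)
    np.fill_diagonal(W, 0.0)
print("mean attractor energy after: ", round(mean_attractor_energy(W), 2))
```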
Hebbian Plasticity in CPG Controllers Facilitates Self-Synchronization for Human-Robot Handshaking.
Jouaiti, Melanie; Caron, Lancelot; Hénaff, Patrick
2018-01-01
It is well-known that human social interactions generate synchrony phenomena which are often unconscious. If the interaction between individuals is based on rhythmic movements, synchronized and coordinated movements will emerge from the social synchrony. This paper proposes a plausible model of plastic neural controllers that allows the emergence of synchronized movements in physical and rhythmical interactions. The controller is designed with central pattern generators (CPG) based on rhythmic Rowat-Selverston neurons endowed with neuronal and synaptic Hebbian plasticity. To demonstrate the interest of the proposed model, the case of handshaking is considered because it is a very common, both physically and socially, but also, a very complex act in the point of view of robotics, neuroscience and psychology. Plastic CPGs controllers are implemented in the joints of a simulated robotic arm that has to learn the frequency and amplitude of an external force applied to its effector, thus reproducing the act of handshaking with a human. Results show that the neural and synaptic Hebbian plasticity are working together leading to a natural and autonomous synchronization between the arm and the external force even if the frequency is changing during the movement. Moreover, a power consumption analysis shows that, by offering emergence of synchronized and coordinated movements, the plasticity mechanisms lead to a significant decrease in the energy spend by the robot actuators thus generating a more adaptive and natural human/robot handshake.
Dordek, Yedidyah; Soudry, Daniel; Meir, Ron; Derdikman, Dori
2016-01-01
Many recent models study the downstream projection from grid cells to place cells, while recent data have pointed out the importance of the feedback projection. We thus asked how grid cells are affected by the nature of the input from the place cells. We propose a single-layer neural network with feedforward weights connecting place-like input cells to grid cell outputs. Place-to-grid weights are learned via a generalized Hebbian rule. The architecture of this network highly resembles neural networks used to perform Principal Component Analysis (PCA). Both numerical results and analytic considerations indicate that if the components of the feedforward neural network are non-negative, the output converges to a hexagonal lattice. Without the non-negativity constraint, the output converges to a square lattice. Consistent with experiments, the grid spacing ratio between the first two consecutive modules is ~1.4. Our results express a possible linkage between place cell to grid cell interactions and PCA. DOI: http://dx.doi.org/10.7554/eLife.10094.001 PMID:26952211
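The single manipulation emphasized above (a Hebbian/PCA-like learner whose feedforward weights are constrained to be non-negative) can be sketched with a one-output Oja rule and a clip at zero. The toy input below is generic; the place-cell-like input statistics that produce hexagonal versus square lattices in the paper are not reproduced.

```python
import numpy as np

def oja_nonneg(X, eta=5e-3, epochs=100, nonneg=True, seed=0):
    """Single-output Oja/Hebbian learner; optionally clip weights at zero,
    as in non-negative PCA (toy illustration, not the paper's full network)."""
    rng = np.random.default_rng(seed)
    w = np.abs(rng.standard_normal(X.shape[1])) * 0.1
    for _ in range(epochs):
        for x in X:
            y = w @ x
            w += eta * y * (x - y * w)          # Oja's Hebbian rule with implicit normalization
            if nonneg:
                w = np.maximum(w, 0.0)          # non-negativity constraint on feedforward weights
    return w

rng = np.random.default_rng(2)
X = rng.standard_normal((400, 6)) @ np.diag([2.0, 1.5, 1.0, 0.5, 0.3, 0.2])
print("unconstrained:", np.round(oja_nonneg(X, nonneg=False), 2))
print("non-negative: ", np.round(oja_nonneg(X, nonneg=True), 2))
```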
The dialectic of Hebb and homeostasis.
Turrigiano, Gina G
2017-03-05
It has become widely accepted that homeostatic and Hebbian plasticity mechanisms work hand in glove to refine neural circuit function. Nonetheless, our understanding of how these fundamentally distinct forms of plasticity complement (and under some circumstances interfere with) each other remains rudimentary. Here, I describe some of the recent progress of the field, as well as some of the deep puzzles that remain. These include unravelling the spatial and temporal scales of different homeostatic and Hebbian mechanisms, determining which aspects of network function are under homeostatic control, and understanding when and how homeostatic and Hebbian mechanisms must be segregated within neural circuits to prevent interference. This article is part of the themed issue 'Integrating Hebbian and homeostatic plasticity'. © 2017 The Author(s).
GABAa excitation and synaptogenesis after Status Epilepticus - A computational study.
França, Keite Lira de Almeida; de Almeida, Antônio-Carlos Guimarães; Saddow, Stephen E; Santos, Luiz Eduardo Canton; Scorza, Carla Alessandra; Scorza, Fulvio Alexandre; Rodrigues, Antônio Márcio
2018-03-08
The role of GABAergic neurotransmission in epileptogenesis has been the subject of speculation according to different approaches. However, it is a very complex task to specifically consider the action of the GABAa neurotransmitter, which, in its dependence on the intracellular level of Cl-, can change its effect from inhibitory to excitatory. We have developed a computational model that represents the dentate gyrus and is composed of three different populations of neurons (granule cells, interneurons and mossy cells) that are mutually interconnected. The interconnections of the neurons were based on compensation theory with Hebbian and anti-Hebbian rules. The model also incorporates non-synaptic mechanisms to control the ionic homeostasis and was able to reproduce ictal discharges. The goal of the work was to investigate the hypothesis that the observed aberrant sprouting is promoted by GABAa excitatory action. Conjointly with the abnormal sprouting of the mossy fibres, the simulations show a reduction of the mossy cell connections in the network and an increased inhibition of the interneurons as a response of the neuronal network to control the activity. This finding contributes to increasing the changes in the connectivity of the neuronal circuitry and to increasing the epileptiform activity occurrences.
AHaH computing-from metastable switches to attractors to machine learning.
Nugent, Michael Alexander; Molter, Timothy Wesley
2014-01-01
Modern computing architecture based on the separation of memory and processing leads to a well known problem called the von Neumann bottleneck, a restrictive limit on the data bandwidth between CPU and RAM. This paper introduces a new approach to computing we call AHaH computing where memory and processing are combined. The idea is based on the attractor dynamics of volatile dissipative electronics inspired by biological systems, presenting an attractive alternative architecture that is able to adapt, self-repair, and learn from interactions with the environment. We envision that both von Neumann and AHaH computing architectures will operate together on the same machine, but that the AHaH computing processor may reduce the power consumption and processing time for certain adaptive learning tasks by orders of magnitude. The paper begins by drawing a connection between the properties of volatility, thermodynamics, and Anti-Hebbian and Hebbian (AHaH) plasticity. We show how AHaH synaptic plasticity leads to attractor states that extract the independent components of applied data streams and how they form a computationally complete set of logic functions. After introducing a general memristive device model based on collections of metastable switches, we show how adaptive synaptic weights can be formed from differential pairs of incremental memristors. We also disclose how arrays of synaptic weights can be used to build a neural node circuit operating AHaH plasticity. By configuring the attractor states of the AHaH node in different ways, high level machine learning functions are demonstrated. This includes unsupervised clustering, supervised and unsupervised classification, complex signal prediction, unsupervised robotic actuation and combinatorial optimization of procedures-all key capabilities of biological nervous systems and modern machine learning algorithms with real world application.
A Learning Model for L/M Specificity in Ganglion Cells
NASA Technical Reports Server (NTRS)
Ahumada, Albert J.
2016-01-01
An unsupervised learning model for developing L/M-specific wiring at the ganglion cell level would support the research indicating such wiring (Reid and Shapley, 2002). Removing the contributions to the surround from cells of the same cone type improves the signal-to-noise ratio of the chromatic signals. The unsupervised learning model used is Hebbian associative learning, which strengthens the surround input connections according to the correlation of the output with the input. Since the surround units of the same cone type as the center are redundant with the center, their weights end up disappearing. This process can be thought of as a general mechanism for eliminating unnecessary cells in the nervous system.
Oren, Iris; Nissen, Wiebke; Kullmann, Dimitri M.; Somogyi, Peter; Lamsa, Karri P.
2009-01-01
Some interneurons of the hippocampus exhibit NMDA receptor-independent long-term potentiation (LTP) that is induced by presynaptic glutamate release when the postsynaptic membrane potential is hyperpolarized. This ‘anti-Hebbian’ form of LTP is prevented by postsynaptic depolarization or by blocking AMPA and kainate receptors. Although both AMPA and kainate receptors are expressed in hippocampal interneurons, their relative roles in anti-Hebbian LTP are not known. Because interneuron diversity potentially conceals simple rules underlying different forms of plasticity, we focus on glutamatergic synapses onto a subset of interneurons with dendrites in stratum oriens and a main ascending axon that projects to stratum lacunosum-moleculare, the O-LM cells. We show that anti-Hebbian LTP in O-LM interneurons has consistent induction and expression properties, and is prevented by selective inhibition of AMPA receptors. The majority of the ionotropic glutamatergic synaptic current in these cells is mediated by inwardly rectifying Ca2+ -permeable AMPA receptors. Although GluR5-containing kainate receptors contribute to synaptic currents at high stimulus frequency, they are not required for LTP induction. Glutamatergic synapses on O-LM cells thus behave in a homogeneous manner, and exhibit LTP dependent on Ca2+-permeable AMPA receptors. PMID:19176803
Uninformative memories will prevail: the storage of correlated representations and its consequences.
Kropff, Emilio; Treves, Alessandro
2007-11-01
Autoassociative networks were proposed in the 80's as simplified models of memory function in the brain, using recurrent connectivity with Hebbian plasticity to store patterns of neural activity that can be later recalled. This type of computation has been suggested to take place in the CA3 region of the hippocampus and at several levels in the cortex. One of the weaknesses of these models is their apparent inability to store correlated patterns of activity. We show, however, that a small and biologically plausible modification in the "learning rule" (associating to each neuron a plasticity threshold that reflects its popularity) enables the network to handle correlations. We study the stability properties of the resulting memories (in terms of their resistance to the damage of neurons or synapses), finding a novel property of autoassociative networks: not all memories are equally robust, and the most informative are also the most sensitive to damage. We relate these results to category-specific effects in semantic memory patients, where concepts related to "non-living things" are usually more resistant to brain damage than those related to "living things," a phenomenon suspected to be rooted in the correlation between representations of concepts in the cortex.
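The modification highlighted above (associating to each neuron a plasticity threshold that reflects its popularity, i.e. how often it is active across the stored patterns) can be illustrated for binary patterns as below. The one-sided subtraction of the popularity term and the k-winners-take-all retrieval dynamics are plausible simplifications, not necessarily the exact formulation of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
N, M = 200, 10

def store_with_popularity_threshold(patterns):
    """Hebbian storage for binary {0,1} patterns with a per-neuron 'popularity'
    term a_j (mean activity across patterns) subtracted on the presynaptic side,
    so that widely shared (correlated) features contribute less to the weights."""
    a = patterns.mean(axis=0)
    W = np.zeros((patterns.shape[1], patterns.shape[1]))
    for xi in patterns:
        W += np.outer(xi, xi - a)
    np.fill_diagonal(W, 0.0)
    return W

# correlated patterns: a shared 'category' core plus a few pattern-specific features
core = (rng.random(N) < 0.3).astype(float)
patterns = np.array([np.clip(core + (rng.random(N) < 0.1), 0.0, 1.0) for _ in range(M)])
W = store_with_popularity_threshold(patterns)

def recall(cue, k, steps=5):
    """Retrieval with global inhibition keeping the k most driven units active."""
    s = cue.copy()
    for _ in range(steps):
        h = W @ s
        s = np.zeros_like(s)
        s[np.argsort(h)[-k:]] = 1.0
    return s

cue = patterns[0] * (rng.random(N) < 0.8)            # degrade the cue: drop ~20% of active units
retrieved = recall(cue, k=int(patterns[0].sum()))
print("fraction of units matching the stored pattern:", (retrieved == patterns[0]).mean())
```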
Theory of optimal balance predicts and explains the amplitude and decay time of synaptic inhibition
Kim, Jaekyung K.; Fiorillo, Christopher D.
2017-01-01
Synaptic inhibition counterbalances excitation, but it is not known what constitutes optimal inhibition. We previously proposed that perfect balance is achieved when the peak of an excitatory postsynaptic potential (EPSP) is exactly at spike threshold, so that the slightest variation in excitation determines whether a spike is generated. Using simulations, we show that the optimal inhibitory postsynaptic conductance (IPSG) increases in amplitude and decay rate as synaptic excitation increases from 1 to 800 Hz. As further proposed by theory, we show that optimal IPSG parameters can be learned through anti-Hebbian rules. Finally, we compare our theoretical optima to published experimental data from 21 types of neurons, in which rates of synaptic excitation and IPSG decay times vary by factors of about 100 (5–600 Hz) and 50 (1–50 ms), respectively. From an infinite range of possible decay times, theory predicted experimental decay times within less than a factor of 2. Across a distinct set of 15 types of neuron recorded in vivo, theory predicted the amplitude of synaptic inhibition within a factor of 1.7. Thus, the theory can explain biophysical quantities from first principles. PMID:28281523
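The anti-Hebbian tuning of inhibition toward the "EPSP peak at spike threshold" optimum can be illustrated with a toy point-neuron sketch. One plausible reading of an anti-Hebbian rule for an inhibitory synapse is used here: a postsynaptic spike strengthens the co-active inhibitory input, a missed spike weakens it. The subtractive inhibition, fixed learning rate, and all numbers are assumptions for illustration, not the authors' simulations.

```python
import numpy as np

rng = np.random.default_rng(1)

threshold = 1.0          # spike threshold (arbitrary units)
g_inh = 0.0              # inhibitory amplitude to be learned
eta = 0.01               # learning rate

for trial in range(5000):
    excitation = 2.0 + 0.1 * rng.standard_normal()    # noisy synaptic excitation
    epsp_peak = excitation - g_inh                     # toy EPSP peak with subtractive inhibition
    spike = epsp_peak > threshold
    # Anti-Hebbian-style update: a postsynaptic spike strengthens the co-active
    # inhibitory input; a missed spike weakens it.
    g_inh += eta if spike else -eta

print(f"learned inhibition ~ {g_inh:.2f}; mean EPSP peak ~ {2.0 - g_inh:.2f} vs threshold {threshold}")
```

Under these assumptions the inhibitory amplitude settles where the EPSP peak straddles threshold, so the slightest variation in excitation decides whether a spike occurs, which is the balance condition described in the abstract.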
A History of Spike-Timing-Dependent Plasticity
Markram, Henry; Gerstner, Wulfram; Sjöström, Per Jesper
2011-01-01
How learning and memory is achieved in the brain is a central question in neuroscience. Key to today’s research into information storage in the brain is the concept of synaptic plasticity, a notion that has been heavily influenced by Hebb's (1949) postulate. Hebb conjectured that repeatedly and persistently co-active cells should increase connective strength among populations of interconnected neurons as a means of storing a memory trace, also known as an engram. Hebb certainly was not the first to make such a conjecture, as we show in this history. Nevertheless, literally thousands of studies into the classical frequency-dependent paradigm of cellular learning rules were directly inspired by the Hebbian postulate. But in more recent years, a novel concept in cellular learning has emerged, where temporal order instead of frequency is emphasized. This new learning paradigm – known as spike-timing-dependent plasticity (STDP) – has rapidly gained tremendous interest, perhaps because of its combination of elegant simplicity, biological plausibility, and computational power. But what are the roots of today’s STDP concept? Here, we discuss several centuries of diverse thinking, beginning with philosophers such as Aristotle, Locke, and Ribot, traversing, e.g., Lugaro’s plasticità and Rosenblatt’s perceptron, and culminating with the discovery of STDP. We highlight interactions between theoretical and experimental fields, showing how discoveries sometimes occurred in parallel, seemingly without much knowledge of the other field, and sometimes via concrete back-and-forth communication. We point out where the future directions may lie, which includes interneuron STDP, the functional impact of STDP, its mechanisms and its neuromodulatory regulation, and the linking of STDP to the developmental formation and continuous plasticity of neuronal networks. PMID:22007168
The neuroscience of learning: beyond the Hebbian synapse.
Gallistel, C R; Matzel, Louis D
2013-01-01
From the traditional perspective of associative learning theory, the hypothesis linking modifications of synaptic transmission to learning and memory is plausible. It is less so from an information-processing perspective, in which learning is mediated by computations that make implicit commitments to physical and mathematical principles governing the domains where domain-specific cognitive mechanisms operate. We compare the properties of associative learning and memory to the properties of long-term potentiation, concluding that the properties of the latter do not explain the fundamental properties of the former. We briefly review the neuroscience of reinforcement learning, emphasizing the representational implications of the neuroscientific findings. We then review more extensively findings that confirm the existence of complex computations in three information-processing domains: probabilistic inference, the representation of uncertainty, and the representation of space. We argue for a change in the conceptual framework within which neuroscientists approach the study of learning mechanisms in the brain.
Models of Acetylcholine and Dopamine Signals Differentially Improve Neural Representations
Holca-Lamarre, Raphaël; Lücke, Jörg; Obermayer, Klaus
2017-01-01
Biological and artificial neural networks (ANNs) represent input signals as patterns of neural activity. In biology, neuromodulators can trigger important reorganizations of these neural representations. For instance, pairing a stimulus with the release of either acetylcholine (ACh) or dopamine (DA) evokes long lasting increases in the responses of neurons to the paired stimulus. The functional roles of ACh and DA in rearranging representations remain largely unknown. Here, we address this question using a Hebbian-learning neural network model. Our aim is both to gain a functional understanding of ACh and DA transmission in shaping biological representations and to explore neuromodulator-inspired learning rules for ANNs. We model the effects of ACh and DA on synaptic plasticity and confirm that stimuli coinciding with greater neuromodulator activation are over represented in the network. We then simulate the physiological release schedules of ACh and DA. We measure the impact of neuromodulator release on the network's representation and on its performance on a classification task. We find that ACh and DA trigger distinct changes in neural representations that both improve performance. The putative ACh signal redistributes neural preferences so that more neurons encode stimulus classes that are challenging for the network. The putative DA signal adapts synaptic weights so that they better match the classes of the task at hand. Our model thus offers a functional explanation for the effects of ACh and DA on cortical representations. Additionally, our learning algorithm yields performances comparable to those of state-of-the-art optimisation methods in multi-layer perceptrons while requiring weaker supervision signals and interacting with synaptically-local weight updates. PMID:28690509
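The core idea that a global neuromodulatory signal scales Hebbian weight changes can be sketched as a three-factor rule. The softmax competition, weight normalization, and all parameter values below are illustrative assumptions, not the authors' network or release schedules.

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_out = 50, 10
W = rng.random((n_out, n_in)) * 0.1
eta, beta = 0.05, 5.0

def present(stimulus, neuromod):
    """One Hebbian update gated by a scalar neuromodulator level (a three-factor rule)."""
    drive = W @ stimulus
    y = np.exp(beta * drive) / np.exp(beta * drive).sum()    # soft competition among outputs
    W[:] += eta * neuromod * np.outer(y, stimulus)           # neuromodulator scales the Hebbian term
    W[:] /= np.linalg.norm(W, axis=1, keepdims=True)         # keep each weight vector bounded

stim_A = rng.random(n_in)    # stimulus paired with strong ACh/DA release
stim_B = rng.random(n_in)    # stimulus paired with weak neuromodulation
for _ in range(200):
    present(stim_A, neuromod=1.0)
    present(stim_B, neuromod=0.2)

n_prefer_A = int(np.sum(W @ stim_A > W @ stim_B))
print(f"{n_prefer_A}/{n_out} output units now respond more strongly to the strongly modulated stimulus")
```

The sketch reproduces only the qualitative effect reported in the abstract: the stimulus paired with stronger neuromodulation ends up over-represented in the output layer.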
Ursino, Mauro; Magosso, Elisa; Cuppini, Cristiano
2009-02-01
Synchronization of neural activity in the gamma band is assumed to play a significant role not only in perceptual processing, but also in higher cognitive functions. Here, we propose a neural network of Wilson-Cowan oscillators to simulate recognition of abstract objects, each represented as a collection of four features. Features are ordered in topological maps of oscillators connected via excitatory lateral synapses, to implement a similarity principle. Experience with previous objects is stored in long-range synapses connecting the different topological maps, and trained via timing-dependent Hebbian learning (previous knowledge principle). Finally, a downstream decision network detects the presence of a reliable object representation, when all features are oscillating in synchrony. Simulations performed by presenting the network with various simultaneous objects (from one to four), some with missing and/or modified properties, suggest that the network can reconstruct objects and segment them from the other simultaneously present objects, even in the case of deteriorated information, noise, and moderate correlation among the inputs (one common feature). The balance between sensitivity and specificity depends on the strength of the Hebbian learning. Achieving a correct reconstruction in all cases, however, requires ad hoc selection of the oscillation frequency. The model represents an attempt to investigate the interactions among topological maps, autoassociative memory, and gamma-band synchronization, for recognition of abstract objects.
Kaplan, Bernhard A; Lansner, Anders
2014-01-01
Olfactory sensory information passes through several processing stages before an odor percept emerges. The question how the olfactory system learns to create odor representations linking those different levels and how it learns to connect and discriminate between them is largely unresolved. We present a large-scale network model with single and multi-compartmental Hodgkin-Huxley type model neurons representing olfactory receptor neurons (ORNs) in the epithelium, periglomerular cells, mitral/tufted cells and granule cells in the olfactory bulb (OB), and three types of cortical cells in the piriform cortex (PC). Odor patterns are calculated based on affinities between ORNs and odor stimuli derived from physico-chemical descriptors of behaviorally relevant real-world odorants. The properties of ORNs were tuned to show saturated response curves with increasing concentration as seen in experiments. On the level of the OB we explored the possibility of using a fuzzy concentration interval code, which was implemented through dendro-dendritic inhibition leading to winner-take-all like dynamics between mitral/tufted cells belonging to the same glomerulus. The connectivity from mitral/tufted cells to PC neurons was self-organized from a mutual information measure and by using a competitive Hebbian-Bayesian learning algorithm based on the response patterns of mitral/tufted cells to different odors yielding a distributed feed-forward projection to the PC. The PC was implemented as a modular attractor network with a recurrent connectivity that was likewise organized through Hebbian-Bayesian learning. We demonstrate the functionality of the model in a one-sniff-learning and recognition task on a set of 50 odorants. Furthermore, we study its robustness against noise on the receptor level and its ability to perform concentration invariant odor recognition. Moreover, we investigate the pattern completion capabilities of the system and rivalry dynamics for odor mixtures.
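As a rough illustration of a Hebbian-Bayesian weight in the spirit of BCPNN, the sketch below estimates unit and pairwise activation probabilities from toy "mitral/tufted" response patterns and sets each weight to the log ratio of co-activation to chance. The batch probability estimates and toy data are assumptions, not the paper's spiking implementation or mutual-information-based connectivity.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy binary "mitral/tufted" responses to a set of odors (rows: odors, cols: units).
n_odors, n_pre, n_post = 50, 40, 20
pre = (rng.random((n_odors, n_pre)) < 0.2).astype(float)
# Each cortical unit is driven by a random subset of presynaptic units (ground truth).
drive = (rng.random((n_post, n_pre)) < 0.1).astype(float)
post = (drive @ pre.T > 0).astype(float).T            # (n_odors, n_post)

eps = 1e-3                                            # avoids log(0) for never-active pairs
p_pre = pre.mean(axis=0) + eps
p_post = post.mean(axis=0) + eps
p_joint = (post.T @ pre) / n_odors + eps              # co-activation probabilities

# Hebbian-Bayesian weight: positive when units co-occur more often than chance.
W = np.log(p_joint / np.outer(p_post, p_pre))

# Weights should be largest where a genuine feed-forward drive exists.
print("mean weight on true connections:   %.2f" % W[drive == 1].mean())
print("mean weight on absent connections: %.2f" % W[drive == 0].mean())
```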
Hodzic, Amra; Veit, Ralf; Karim, Ahmed A; Erb, Michael; Godde, Ben
2004-01-14
Perceptual learning can be induced by passive tactile coactivation without attention or reinforcement. We used functional MRI (fMRI) and psychophysics to investigate in detail the specificity of this type of learning for different tactile discrimination tasks and the underlying cortical reorganization. We found that a few hours of Hebbian coactivation evoked a significant increase of primary (SI) and secondary (SII) somatosensory cortical areas representing the stimulated body parts. The amount of plastic changes was strongly correlated with improvement in spatial discrimination performance. However, in the same subjects, frequency discrimination was impaired after coactivation, indicating that even maladaptive processes can be induced by intense passive sensory stimulation.
Enhanced detection threshold for in vivo cortical stimulation produced by Hebbian conditioning
NASA Astrophysics Data System (ADS)
Rebesco, James M.; Miller, Lee E.
2011-02-01
Normal brain function requires constant adaptation, as an organism learns to associate important sensory stimuli with the appropriate motor actions. Neurological disorders may disrupt these learned associations and require the nervous system to reorganize itself. As a consequence, neural plasticity is a crucial component of normal brain function and a critical mechanism for recovery from injury. Associative, or Hebbian, pairing of pre- and post-synaptic activity has been shown to alter stimulus-evoked responses in vivo; however, to date, such protocols have not been shown to affect the animal's subsequent behavior. We paired stimulus trains separated by a brief time delay to two electrodes in rat sensorimotor cortex, which changed the statistical pattern of spikes during subsequent behavior. These changes were consistent with strengthened functional connections from the leading electrode to the lagging electrode. We then trained rats to respond to a microstimulation cue, and repeated the paradigm using the cue electrode as the leading electrode. This pairing lowered the rat's detection threshold for intracortical microstimulation (ICMS), with the same dependence on inter-electrode time lag that we found for the functional connectivity changes. The time course of the behavioral effects was very similar to that of the connectivity changes. We propose that the behavioral changes were a consequence of strengthened functional connections from the cue electrode to other regions of sensorimotor cortex. Such paradigms might be used to augment recovery from a stroke, or to promote adaptation in a bidirectional brain-machine interface.
Cerebellar supervised learning revisited: biophysical modeling and degrees-of-freedom control.
Kawato, Mitsuo; Kuroda, Shinya; Schweighofer, Nicolas
2011-10-01
Biophysical models of spike-timing-dependent plasticity have explored the molecular basis of such computational concepts as coincidence detection, synaptic eligibility traces, and Hebbian learning. Overall, they support different learning algorithms in different brain areas, especially supervised learning in the cerebellum. Because a single spine is physically very small, chemical reactions at it are essentially stochastic, and thus a sensitivity-longevity dilemma exists in synaptic memory. Here, a cascade of excitable and bistable dynamics is proposed to overcome this difficulty. Learning algorithms in all brain regions confront difficult generalization problems. To resolve this issue, the degrees of freedom can be controlled by changing the synchronicity of neural firing. In particular, for cerebellar supervised learning, the triangle closed-loop circuit consisting of Purkinje cells, the inferior olive nucleus, and the cerebellar nucleus is proposed as a circuit to optimally control synchronous firing and the degrees of freedom in learning. Copyright © 2011 Elsevier Ltd. All rights reserved.
Dynamic DNA Methylation Controls Glutamate Receptor Trafficking and Synaptic Scaling
Sweatt, J. David
2016-01-01
Hebbian plasticity, including LTP and LTD, has long been regarded as important for local circuit refinement in the context of memory formation and stabilization. However, circuit development and stabilization additionally rely on non-Hebbian, homeostatic forms of plasticity such as synaptic scaling. Synaptic scaling is induced by chronic increases or decreases in neuronal activity. Synaptic scaling is associated with cell-wide adjustments in postsynaptic receptor density, and can occur in a multiplicative manner resulting in preservation of relative synaptic strengths across the neuron's entire population of synapses. Both active DNA methylation and demethylation have been validated as crucial regulators of gene transcription during learning, and synaptic scaling is known to be transcriptionally dependent. However, it has been unclear whether homeostatic forms of plasticity such as synaptic scaling are regulated via epigenetic mechanisms. This review describes exciting recent work that has demonstrated a role for active changes in neuronal DNA methylation and demethylation as a controller of synaptic scaling and glutamate receptor trafficking. These findings bring together three major categories of memory-associated mechanisms that were previously largely considered separately: DNA methylation, homeostatic plasticity, and glutamate receptor trafficking. PMID:26849493
Neuronal boost to evolutionary dynamics.
de Vladar, Harold P; Szathmáry, Eörs
2015-12-06
Standard evolutionary dynamics is limited by the constraints of the genetic system. A central message of evolutionary neurodynamics is that evolutionary dynamics in the brain can happen in a neuronal niche in real time, despite the fact that neurons do not reproduce. We show that Hebbian learning and structural synaptic plasticity broaden the capacity for informational replication and guided variability provided a neuronally plausible mechanism of replication is in place. The synergy between learning and selection is more efficient than the equivalent search by mutation selection. We also consider asymmetric landscapes and show that the learning weights become correlated with the fitness gradient. That is, the neuronal complexes learn the local properties of the fitness landscape, resulting in the generation of variability directed towards the direction of fitness increase, as if mutations in a genetic pool were drawn such that they would increase reproductive success. Evolution might thus be more efficient within evolved brains than among organisms out in the wild.
Molnets: An Artificial Chemistry Based on Neural Networks
NASA Technical Reports Server (NTRS)
Colombano, Silvano; Luk, Johnny; Segovia-Juarez, Jose L.; Lohn, Jason; Clancy, Daniel (Technical Monitor)
2002-01-01
The fundamental problem in the evolution of matter is to understand how structure-function relationships are formed and increase in complexity from the molecular level all the way to a genetic system. We have created a system where structure-function relationships arise naturally and without the need of ad hoc function assignments to given structures. The idea was inspired by neural networks, where the structure of the net embodies specific computational properties. In this system networks interact with other networks to create connections between the inputs of one net and the outputs of another. The newly created net then recomputes its own synaptic weights, based on anti-Hebbian rules. As a result some connections may be cut, and multiple nets can emerge as products of a 'reaction'. The idea is to study emergent reaction behaviors, based on simple rules that constitute a pseudophysics of the system. These simple rules are parameterized to produce behaviors that emulate chemical reactions. We find that these simple rules produce a gradual increase in the size and complexity of molecules. We have been building a virtual artificial chemistry laboratory for discovering interesting reactions and for testing further ideas on the evolution of primitive molecules. Some of these ideas include the potential effect of membranes and selective diffusion according to molecular size.
Modeling trial by trial and block feedback in perceptual learning
Liu, Jiajuan; Dosher, Barbara; Lu, Zhong-Lin
2014-01-01
Feedback has been shown to play a complex role in visual perceptual learning. It is necessary for performance improvement in some conditions while not others. Different forms of feedback, such as trial-by-trial feedback or block feedback, may both facilitate learning, but with different mechanisms. False feedback can abolish learning. We account for all these results with the Augmented Hebbian Reweight Model (AHRM). Specifically, three major factors in the model advance performance improvement: the external trial-by-trial feedback when available, the self-generated output as an internal feedback when no external feedback is available, and the adaptive criterion control based on the block feedback. Through simulating a comprehensive feedback study (Herzog & Fahle 1997, Vision Research, 37 (15), 2133–2141), we show that the model predictions account for the pattern of learning in seven major feedback conditions. The AHRM can fully explain the complex empirical results on the role of feedback in visual perceptual learning. PMID:24423783
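A highly simplified sketch of the Hebbian reweighting idea behind the AHRM follows: readout weights are updated with a teaching signal that equals the external feedback when it is provided and the model's own binarized output otherwise. The linear readout, the random stimulus representation, and all parameters are illustrative assumptions, not the full AHRM (which additionally includes adaptive criterion control driven by block feedback).

```python
import numpy as np

rng = np.random.default_rng(4)
n_features = 20
w_true = rng.standard_normal(n_features)            # "correct" stimulus-to-decision mapping
w = np.zeros(n_features)                            # learned readout weights
eta = 0.02

def trial(external_feedback: bool):
    x = rng.standard_normal(n_features)             # noisy stimulus representation
    correct = np.sign(w_true @ x)                   # category defined by the task
    response = np.sign(w @ x) if np.any(w) else rng.choice([-1.0, 1.0])
    # Teaching signal: external feedback when provided, otherwise the model's own output.
    teacher = correct if external_feedback else response
    w[:] += eta * teacher * x                       # Hebbian reweighting of the readout
    return response == correct

for block, fb in enumerate([True] * 5 + [False] * 5):
    acc = np.mean([trial(fb) for _ in range(200)])
    print(f"block {block} ({'with' if fb else 'no'} feedback): accuracy {acc:.2f}")
```

Run as written, the sketch shows performance improving while external feedback drives the update and being maintained by the self-generated teaching signal afterwards, which is the qualitative pattern the model is used to explain.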
Neural network regulation driven by autonomous neural firings
NASA Astrophysics Data System (ADS)
Cho, Myoung Won
2016-07-01
Biological neurons fire spontaneously owing to the presence of a noisy current. Such autonomous firings may provide a driving force for network formation because synaptic connections are modified by neural firings. Here, we study the effect of autonomous firings on network formation. Under temporally asymmetric Hebbian learning, bidirectional connections easily lose their balance and become unidirectional. Defining the difference between reciprocal connections as new variables, we can express the learning dynamics as if the new variables were Ising spins interacting with one another, as in magnetism. We present a theoretical method to estimate the interaction between the new variables in a neural system. We apply the method to several network systems and identify some tendencies of autonomous neural network regulation.
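The claim that temporally asymmetric Hebbian learning turns a balanced reciprocal connection into a unidirectional one can be illustrated with a generic pair-based STDP sketch (not the paper's formalism): neuron B tends to echo neuron A after a short delay, so the A-to-B weight is repeatedly potentiated while the B-to-A weight is depressed. All rates, time constants, and the echo construction are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

steps, dt = 200_000, 1e-3               # 200 s of autonomous firing in 1 ms bins
rate = 5.0                              # Hz spontaneous rate for neuron A
delay = 5                               # bins: B tends to fire ~5 ms after A
a_spk = rng.random(steps) < rate * dt
b_spk = np.roll(a_spk, delay) | (rng.random(steps) < rate * dt)   # echo of A plus B's own noise

w_ab = w_ba = 0.5                       # reciprocal weights, initially balanced
tau, a_plus, a_minus = 20e-3, 0.01, 0.01
tr_a = tr_b = 0.0                       # exponentially decaying spike traces

for t in range(steps):
    decay = np.exp(-dt / tau)
    tr_a *= decay
    tr_b *= decay
    if a_spk[t]:
        w_ab = min(1.0, max(0.0, w_ab - a_minus * tr_b))   # pre spike on A->B after recent B spikes: depress
        w_ba = min(1.0, max(0.0, w_ba + a_plus * tr_b))    # post spike on B->A after recent B spikes: potentiate
        tr_a += 1.0
    if b_spk[t]:
        w_ab = min(1.0, max(0.0, w_ab + a_plus * tr_a))    # post spike on A->B shortly after A: potentiate
        w_ba = min(1.0, max(0.0, w_ba - a_minus * tr_a))   # pre spike on B->A shortly after A: depress
        tr_b += 1.0

print(f"w(A->B) = {w_ab:.2f}, w(B->A) = {w_ba:.2f}  (balance broken toward A->B)")
```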
Del Giudice, Paolo; Fusi, Stefano; Mattia, Maurizio
2003-01-01
In this paper we review a series of works concerning models of spiking neurons interacting via spike-driven, plastic, Hebbian synapses, meant to implement stimulus driven, unsupervised formation of working memory (WM) states. Starting from a summary of the experimental evidence emerging from delayed matching to sample (DMS) experiments, we briefly review the attractor picture proposed to underlie WM states. We then describe a general framework for a theoretical approach to learning with synapses subject to realistic constraints and outline some general requirements to be met by a mechanism of Hebbian synaptic structuring. We argue that a stochastic selection of the synapses to be updated allows for optimal memory storage, even if the number of stable synaptic states is reduced to the extreme (bistable synapses). A description follows of models of spike-driven synapses that implement the stochastic selection by exploiting the high irregularity in the pre- and post-synaptic activity. Reasons are listed why dynamic learning, that is, the process by which the synaptic structure develops under the sole guidance of neural activities, driven in turn by stimuli, is hard to accomplish. We provide a 'feasibility proof' of dynamic formation of WM states by showing how an initially unstructured network autonomously develops a synaptic structure supporting simultaneously stable spontaneous and WM states; in this context the beneficial role of short-term depression (STD) is illustrated. After summarizing heuristic indications emerging from the study performed, we conclude by briefly discussing open problems and critical issues still to be clarified.
Rowe, Justin B; Chan, Vicky; Ingemanson, Morgan L; Cramer, Steven C; Wolbrecht, Eric T; Reinkensmeyer, David J
2017-08-01
Robots that physically assist movement are increasingly used in rehabilitation therapy after stroke, yet some studies suggest robotic assistance discourages effort and reduces motor learning. The aim was to determine the therapeutic effects of high and low levels of robotic assistance during finger training. We designed a protocol that varied the amount of robotic assistance while controlling the number, amplitude, and exerted effort of training movements. Participants (n = 30) with a chronic stroke and moderate hemiparesis (average Box and Blocks Test 32 ± 18 and upper extremity Fugl-Meyer score 46 ± 12) actively moved their index and middle fingers to targets to play a musical game similar to Guitar Hero for 3 h/wk over 3 weeks. The participants were randomized to receive high assistance (causing 82% success at hitting targets) or low assistance (55% success). Participants performed ~8000 movements during 9 training sessions. Both groups improved significantly at the 1-month follow-up on functional and impairment-based motor outcomes, on depression scores, and on self-efficacy of hand function, with no difference between groups in the primary endpoint (change in Box and Blocks). High assistance boosted motivation, as well as secondary motor outcomes (Fugl-Meyer and Lateral Pinch Strength), particularly for individuals with more severe finger motor deficits. Individuals with impaired finger proprioception at baseline benefited less from the training. Robot-assisted training can promote key psychological outcomes known to modulate motor learning and retention. Furthermore, the therapeutic effectiveness of robotic assistance appears to derive at least in part from proprioceptive stimulation, consistent with a Hebbian plasticity model.
Molecular mechanisms of fear learning and memory
Johansen, Joshua P.; Cain, Christopher K.; Ostroff, Linnaea E.; LeDoux, Joseph E.
2011-01-01
Pavlovian fear conditioning is a useful behavioral paradigm for exploring the molecular mechanisms of learning and memory because a well-defined response to a specific environmental stimulus is produced through associative learning processes. Synaptic plasticity in the lateral nucleus of the amygdala (LA) underlies this form of associative learning. Here we summarize the molecular mechanisms that contribute to this synaptic plasticity in the context of auditory fear conditioning, the form of fear conditioning best understood at the molecular level. We discuss the neurotransmitter systems and signaling cascades that contribute to three phases of auditory fear conditioning: acquisition, consolidation, and reconsolidation. These studies suggest that multiple intracellular signaling pathways, including those triggered by activation of Hebbian processes and neuromodulatory receptors, interact to produce neural plasticity in the LA and behavioral fear conditioning. Together, this research illustrates the power of fear conditioning as a model system for characterizing the mechanisms of learning and memory in mammals, and potentially for understanding fear related disorders, such as PTSD and phobias. PMID:22036561
Complementary roles for amygdala and periaqueductal gray in temporal-difference fear learning.
Cole, Sindy; McNally, Gavan P
2009-01-01
Pavlovian fear conditioning is not a unitary process. At the neurobiological level multiple brain regions and neurotransmitters contribute to fear learning. At the behavioral level many variables contribute to fear learning including the physical salience of the events being learned about, the direction and magnitude of predictive error, and the rate at which these are learned about. These experiments used a serial compound conditioning design to determine the roles of basolateral amygdala (BLA) NMDA receptors and ventrolateral midbrain periaqueductal gray (vlPAG) mu-opioid receptors (MOR) in predictive fear learning. Rats received a three-stage design, which arranged for both positive and negative prediction errors producing bidirectional changes in fear learning within the same subjects during the test stage. Intra-BLA infusion of the NR2B receptor antagonist Ifenprodil prevented all learning. In contrast, intra-vlPAG infusion of the MOR antagonist CTAP enhanced learning in response to positive predictive error but impaired learning in response to negative predictive error--a pattern similar to Hebbian learning and an indication that fear learning had been divorced from predictive error. These findings identify complementary but dissociable roles for amygdala NMDA receptors and vlPAG MOR in temporal-difference predictive fear learning.
Anticipation by multi-modal association through an artificial mental imagery process
NASA Astrophysics Data System (ADS)
Gaona, Wilmer; Escobar, Esaú; Hermosillo, Jorge; Lara, Bruno
2015-01-01
Mental imagery has become a central issue in research laboratories seeking to emulate basic cognitive abilities in artificial agents. In this work, we propose a computational model to produce anticipatory behaviour by means of a multi-modal off-line Hebbian association. Unlike the current state of the art, we propose to apply Hebbian learning during an internal sensorimotor simulation, emulating a process of mental imagery. We associate visual and tactile stimuli re-enacted by a long-term predictive simulation chain motivated by covert actions. As a result, we obtain a neural network which provides a robot with a mechanism to produce a visually conditioned obstacle avoidance behaviour. We developed our system on a physical Pioneer 3-DX robot and carried out two experiments. In the first experiment we test our model on one individual navigating in two different mazes. In the second experiment we assess the robustness of the model by testing, in a single environment, five individuals trained under different conditions. We believe that our work offers an underpinning mechanism in cognitive robotics for the study of motor control strategies based on internal simulations. These strategies can be seen as analogous to the mental imagery process known in humans, thus opening interesting pathways to the construction of upper-level grounded cognitive abilities.
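A minimal sketch of the off-line multimodal association step: visual and tactile activity vectors produced by an internal (imagined) sensorimotor simulation are bound with a Hebbian outer-product update, after which vision alone predicts the tactile consequence. The toy episode generator, dimensions, and thresholds are hypothetical, not the robot's actual sensory encoding or simulation chain.

```python
import numpy as np

rng = np.random.default_rng(6)

n_visual, n_tactile = 30, 8
W = np.zeros((n_tactile, n_visual))
eta = 0.1

def simulated_episode(obstacle_close):
    """Re-enacted (imagined) sensory pair from an internal sensorimotor simulation."""
    visual = rng.random(n_visual) * (2.0 if obstacle_close else 0.5)
    tactile = np.full(n_tactile, 1.0 if obstacle_close else 0.0)   # predicted bump
    return visual, tactile

# Off-line Hebbian association over imagined episodes.
for _ in range(500):
    v, t = simulated_episode(obstacle_close=rng.random() < 0.5)
    W += eta * np.outer(t, v)
W /= W.max() if W.max() > 0 else 1.0

# At run time, vision alone now anticipates the tactile outcome.
v_near, _ = simulated_episode(True)
v_far, _ = simulated_episode(False)
print("predicted contact (near obstacle):", float((W @ v_near).mean().round(2)))
print("predicted contact (far obstacle): ", float((W @ v_far).mean().round(2)))
```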
Savary, Etienne; Kullmann, Dimitri M.; Miles, Richard
2015-01-01
An anti-Hebbian form of LTP is observed at excitatory synapses made with some hippocampal interneurons. LTP induction is facilitated when postsynaptic interneurons are hyperpolarized, presumably because Ca2+ entry through Ca2+-permeable glutamate receptors is enhanced. The contribution of modulatory transmitters to anti-Hebbian LTP induction remains to be established. Activation of group I metabotropic receptors (mGluRs) is required for anti-Hebbian LTP induction in interneurons with cell bodies in the CA1 stratum oriens. This region receives a strong cholinergic innervation from the septum, and muscarinic acetylcholine receptors (mAChRs) share some signaling pathways and cooperate with mGluRs in the control of neuronal excitability. We therefore examined possible interactions between group I mGluRs and mAChRs in anti-Hebbian LTP at synapses which excite oriens interneurons in rat brain slices. We found that blockade of either group I mGluRs or M1 mAChRs prevented the induction of anti-Hebbian LTP by pairing presynaptic activity with postsynaptic hyperpolarization. Blocking either receptor also suppressed long-term effects of activation of the other G-protein coupled receptor on interneuron membrane potential. However, no crossed blockade was detected for mGluR or mAchR effects on interneuron after-burst potentials or on the frequency of miniature EPSPs. Paired recordings between pyramidal neurons and oriens interneurons were obtained to determine whether LTP could be induced without concurrent stimulation of cholinergic axons. Exogenous activation of mAChRs led to LTP, with changes in EPSP amplitude distributions consistent with a presynaptic locus of expression. LTP, however, required noninvasive presynaptic and postsynaptic recordings. SIGNIFICANCE STATEMENT In the hippocampus, a form of NMDA receptor-independent long-term potentiation (LTP) occurs at excitatory synapses made on some inhibitory neurons. This is preferentially induced when postsynaptic interneurons are hyperpolarized, depends on Ca2+ entry through Ca2+-permeable AMPA receptors, and has been labeled anti-Hebbian LTP. Here we show that this form of LTP also depends on activation of both group I mGluR and M1 mAChRs. We demonstrate that these G-protein coupled receptors (GPCRs) interact, because the blockade of one receptor suppresses long-term effects of activation of the other GPCR on both LTP and interneuron membrane potential. This LTP was also detected in paired recordings, although only when both presynaptic and postsynaptic recordings did not perturb the intracellular medium. Changes in EPSP amplitude distributions in dual recordings were consistent with a presynaptic locus of expression. PMID:26446209
Associative Learning in Invertebrates
Hawkins, Robert D.; Byrne, John H.
2015-01-01
This work reviews research on neural mechanisms of two types of associative learning in the marine mollusk Aplysia, classical conditioning of the gill- and siphon-withdrawal reflex and operant conditioning of feeding behavior. Basic classical conditioning is caused in part by activity-dependent facilitation at sensory neuron–motor neuron (SN–MN) synapses and involves a hybrid combination of activity-dependent presynaptic facilitation and Hebbian potentiation, which are coordinated by trans-synaptic signaling. Classical conditioning also shows several higher-order features, which might be explained by the known circuit connections in Aplysia. Operant conditioning is caused in part by a different type of mechanism, an intrinsic increase in excitability of an identified neuron in the central pattern generator (CPG) for feeding. However, for both classical and operant conditioning, adenylyl cyclase is a molecular site of convergence of the two signals that are associated. Learning in other invertebrate preparations also involves many of the same mechanisms, which may contribute to learning in vertebrates as well. PMID:25877219
Garagnani, Max; Lucchese, Guglielmo; Tomasello, Rosario; Wennekers, Thomas; Pulvermüller, Friedemann
2017-01-01
Experimental evidence indicates that neurophysiological responses to well-known meaningful sensory items and symbols (such as familiar objects, faces, or words) differ from those to matched but novel and senseless materials (unknown objects, scrambled faces, and pseudowords). Spectral responses in the high beta- and gamma-band have been observed to be generally stronger to familiar stimuli than to unfamiliar ones. These differences have been hypothesized to be caused by the activation of distributed neuronal circuits or cell assemblies, which act as long-term memory traces for learned familiar items only. Here, we simulated word learning using a biologically constrained neurocomputational model of the left-hemispheric cortical areas known to be relevant for language and conceptual processing. The 12-area spiking neural-network architecture implemented replicates physiological and connectivity features of primary, secondary, and higher-association cortices in the frontal, temporal, and occipital lobes of the human brain. We simulated elementary aspects of word learning in it, focussing specifically on semantic grounding in action and perception. As a result of spike-driven Hebbian synaptic plasticity mechanisms, distributed, stimulus-specific cell-assembly (CA) circuits spontaneously emerged in the network. After training, presentation of one of the learned “word” forms to the model correlate of primary auditory cortex induced periodic bursts of activity within the corresponding CA, leading to oscillatory phenomena in the entire network and spontaneous across-area neural synchronization. Crucially, Morlet wavelet analysis of the network's responses recorded during presentation of learned meaningful “word” and novel, senseless “pseudoword” patterns revealed stronger induced spectral power in the gamma-band for the former than the latter, closely mirroring differences found in neurophysiological data. Furthermore, coherence analysis of the simulated responses uncovered dissociated category specific patterns of synchronous oscillations in distant cortical areas, including indirectly connected primary sensorimotor areas. Bridging the gap between cellular-level mechanisms, neuronal-population behavior, and cognitive function, the present model constitutes the first spiking, neurobiologically, and anatomically realistic model able to explain high-frequency oscillatory phenomena indexing language processing on the basis of dynamics and competitive interactions of distributed cell-assembly circuits which emerge in the brain as a result of Hebbian learning and sensorimotor experience. PMID:28149276
Garagnani, Max; Wennekers, Thomas; Pulvermüller, Friedemann
2008-01-01
Meaningful familiar stimuli and senseless unknown materials lead to different patterns of brain activation. A late major neurophysiological response indexing ‘sense’ is the negative component of event-related potential peaking at around 400 ms (N400), an event-related potential that emerges in attention-demanding tasks and is larger for senseless materials (e.g. meaningless pseudowords) than for matched meaningful stimuli (words). However, the mismatch negativity (latency 100–250 ms), an early automatic brain response elicited under distraction, is larger to words than to pseudowords, thus exhibiting the opposite pattern to that seen for the N400. So far, no theoretical account has been able to reconcile and explain these findings by means of a single, mechanistic neural model. We implemented a neuroanatomically grounded neural network model of the left perisylvian language cortex and simulated: (i) brain processes of early language acquisition and (ii) cortical responses to familiar word and senseless pseudoword stimuli. We found that variation of the area-specific inhibition (the model correlate of attention) modulated the simulated brain response to words and pseudowords, producing either an N400- or a mismatch negativity-like response depending on the amount of inhibition (i.e. available attentional resources). Our model: (i) provides a unifying explanatory account, at cortical level, of experimental observations that, so far, had not been given a coherent interpretation within a single framework; (ii) demonstrates the viability of purely Hebbian, associative learning in a multilayered neural network architecture; and (iii) makes clear predictions on the effects of attention on latency and magnitude of event-related potentials to lexical items. Such predictions have been confirmed by recent experimental evidence. PMID:18215243
NASA Astrophysics Data System (ADS)
Volfson, Boris
2013-09-01
The hypothesis of transition from a chaotic Dirac Sea, via highly unstable positronium, into a Simhony Model of stable face-centered cubic lattice structure of electrons and positrons securely bound in vacuum space, is considered. 13.75 billion years ago, the new lattice, which, unlike a Dirac Sea, is permeable by photons and phonons, made the Universe detectable. Many electrons and positrons ended up annihilating each other, producing energy quanta and neutrino-antineutrino pairs. The weak force of the electron-positron crystal lattice, bombarded by the chirality-changing neutrinos, may have started capturing these neutrinos, thus transforming from cubic crystals into a quasicrystal lattice. Unlike a cubic crystal lattice, clusters of quasicrystals are "slippery," allowing the formation of centers of local torsion, where gravity condenses matter into galaxies, stars and planets. In the presence of quanta, in a quasicrystal lattice, the Majorana neutrinos' rotation flips to the opposite direction, causing natural transformations in a category comprising three components, the two others being the positron and the electron. In other words, each particle-antiparticle pair "e-" and "e+", in an individual crystal unit, could become either a quasi-component "e- ve e+" or a quasi-component "e+ -ve e-". Five to six billion years ago, continuous stimulation of the quasicrystal aetherial lattice by the same, similar, or different astronomical events could have triggered Hebbian and anti-Hebbian learning processes. The Universe may have started writing script into its own aether in a code most appropriate for the quasicrystal aether "hardware": eight three-dimensional "alphabet" characters, each corresponding to an individual quasicrystal unit shape. They could be expressed as quantum Turing machine qubits, or, alternatively, in a binary code. The code numerals could contain terminal and nonterminal symbols of the Chomsky hierarchy, wherein the showers of quanta forming the cosmic microwave background radiation may re-script a quasi-component "e- ve e+" (in the binary code case, the same as numeral "0") into a quasi-component "e+ -ve e-" (numeral "1"), or vice versa. According to both Chomsky's logic and the rules applicable to Majorana particles, terminals "e-" and "e+" cannot be changed using the rules of grammar, while nonterminals "ve" and "-ve" can. Under "quantum" showers, the quasi-unit cells re-shape, resulting in recombination of the clusters that they form, with the affected pattern becoming the same as, similar to, or different from, other pattern(s). The process of self-learning may have occurred as a natural response to various astronomical events and cosmic cataclysms: The same astronomical activity in two different areas resulted in the emission of the same energy, forming the same secondary quasicrystal pattern. Different but similar astronomical activity resulted in the emission of a similar amount of energy, forming a similar secondary quasicrystal pattern. Different astronomical activity resulted in the emission of a different amount of energy, forming a different secondary quasicrystal pattern. Since quasicrystals conduct energy in one direction and don't conduct energy in the other, the control over quanta flows allows the aether to scribe a script onto itself by changing its own quasi-patterns. The paper, as published below, is a lecture summary. The full text is published on the website www.borisvolfson.org
Acquisition of automatic imitation is sensitive to sensorimotor contingency.
Cook, Richard; Press, Clare; Dickinson, Anthony; Heyes, Cecilia
2010-08-01
The associative sequence learning model proposes that the development of the mirror system depends on the same mechanisms of associative learning that mediate Pavlovian and instrumental conditioning. To test this model, two experiments used the reduction of automatic imitation through incompatible sensorimotor training to assess whether mirror system plasticity is sensitive to contingency (i.e., the extent to which activation of one representation predicts activation of another). In Experiment 1, residual automatic imitation was measured following incompatible training in which the action stimulus was a perfect predictor of the response (contingent) or not at all predictive of the response (noncontingent). A contingency effect was observed: There was less automatic imitation indicative of more learning in the contingent group. Experiment 2 replicated this contingency effect and showed that, as predicted by associative learning theory, it can be abolished by signaling trials in which the response occurs in the absence of an action stimulus. These findings support the view that mirror system development depends on associative learning and indicate that this learning is not purely Hebbian. If this is correct, associative learning theory could be used to explain, predict, and intervene in mirror system development.
John, Rohit Abraham; Liu, Fucai; Chien, Nguyen Anh; Kulkarni, Mohit R; Zhu, Chao; Fu, Qundong; Basu, Arindam; Liu, Zheng; Mathews, Nripan
2018-06-01
Emulation of brain-like signal processing with thin-film devices can lay the foundation for building artificially intelligent learning circuitry in the future. Encompassing higher functionalities into single artificial neural elements will allow the development of robust neuromorphic circuitry emulating biological adaptation mechanisms with drastically fewer neural elements, mitigating strict process challenges and high circuit density requirements necessary to match the computational complexity of the human brain. Here, 2D transition metal dichalcogenide (MoS2) neuristors are designed to mimic intracellular ion endocytosis-exocytosis dynamics/neurotransmitter-release in chemical synapses using three approaches: (i) electronic-mode: a defect modulation approach where the traps at the semiconductor-dielectric interface are perturbed; (ii) ionotronic-mode: where electronic responses are modulated via ionic gating; and (iii) photoactive-mode: harnessing persistent photoconductivity or trap-assisted slow recombination mechanisms. Exploiting a novel multigated architecture incorporating electrical and optical biases, this incarnation not only addresses different charge-trapping probabilities to finely modulate the synaptic weights, but also amalgamates neuromodulation schemes to achieve "plasticity of plasticity" (metaplasticity) via dynamic control of Hebbian spike-timing-dependent plasticity and homeostatic regulation. Coexistence of such multiple forms of synaptic plasticity increases the efficacy of memory storage and processing capacity of artificial neuristors, enabling the design of highly efficient novel neural architectures. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Is the Role of External Feedback in Auditory Skill Learning Age Dependent?
Zaltz, Yael; Roth, Daphne Ari-Even; Kishon-Rabin, Liat
2017-12-20
The purpose of this study is to investigate the role of external feedback in auditory perceptual learning of school-age children as compared with that of adults. Forty-eight children (7-9 years of age) and 64 adults (20-35 years of age) conducted a training session using an auditory frequency discrimination (difference limen for frequency) task, with external feedback (EF) provided for half of them. Data supported the following findings: (a) Children learned the difference limen for frequency task only when EF was provided. (b) The ability of the children to benefit from EF was associated with better cognitive skills. (c) Adults showed significant learning whether EF was provided or not. (d) In children, within-session learning following training was dependent on the provision of feedback, whereas between-sessions learning occurred irrespective of feedback. EF was found beneficial for auditory skill learning of 7-9-year-old children but not for young adults. The data support the supervised Hebbian model for auditory skill learning, suggesting combined bottom-up internal neural feedback controlled by top-down monitoring. In the case of immature executive functions, EF enhanced auditory skill learning. This study has implications for the design of training protocols in the auditory modality for different age groups, as well as for special populations.
Chersi, Fabian; Ferro, Marcello; Pezzulo, Giovanni; Pirrelli, Vito
2014-07-01
A growing body of evidence in cognitive psychology and neuroscience suggests a deep interconnection between sensory-motor and language systems in the brain. Based on recent neurophysiological findings on the anatomo-functional organization of the fronto-parietal network, we present a computational model showing that language processing may have reused or co-developed organizing principles, functionality, and learning mechanisms typical of premotor circuits. The proposed model combines principles of Hebbian topological self-organization and prediction learning. Trained on sequences of either motor or linguistic units, the network develops independent neuronal chains, formed by dedicated nodes encoding only context-specific stimuli. Moreover, neurons responding to the same stimulus or class of stimuli tend to cluster together to form topologically connected areas similar to those observed in the cerebral cortex. Simulations support a unitary explanatory framework reconciling neurophysiological motor data with established behavioral evidence on lexical acquisition, access, and recall. Copyright © 2014 Cognitive Science Society, Inc.
Binding and segmentation via a neural mass model trained with Hebbian and anti-Hebbian mechanisms.
Cona, Filippo; Zavaglia, Melissa; Ursino, Mauro
2012-04-01
Synchronization of neural activity in the gamma band, modulated by a slower theta rhythm, is assumed to play a significant role in binding and segmentation of multiple objects. In the present work, a recent neural mass model of a single cortical column is used to analyze the synaptic mechanisms that can ensure synchronization and desynchronization of cortical columns during an autoassociation memory task. The model considers two distinct layers communicating via feedforward connections. The first layer receives the external input and works as an autoassociative network in the theta band, to recover a previously memorized object from incomplete information. The second realizes segmentation of different objects in the gamma band. To this end, units within both layers are connected with synapses trained on the basis of previous experience to store objects. The main model assumptions are: (i) recovery of incomplete objects is realized by excitatory synapses from pyramidal to pyramidal neurons in the same object; (ii) binding in the gamma range is realized by excitatory synapses from pyramidal neurons to fast inhibitory interneurons in the same object. These synapses (at both points i and ii) have dynamics of a few milliseconds and are trained with a Hebbian mechanism. (iii) Segmentation is realized with faster AMPA synapses, with rise times smaller than 1 ms, trained with an anti-Hebbian mechanism. Results show that the model, with the previous assumptions, can correctly reconstruct and segment three simultaneous objects, starting from incomplete knowledge. Segmentation of more objects is possible but requires an increased ratio between the theta and gamma periods.
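The contrast between Hebbian and anti-Hebbian training of the two synapse classes can be sketched on stored binary object patterns. The matrices below, and the reading of "anti-Hebbian" as weakening fast coupling between co-active units so that desynchronizing interactions survive mainly between different objects, are illustrative assumptions, not the neural mass equations.

```python
import numpy as np

n_units, n_objects = 12, 3
# Each stored object activates a distinct group of 4 feature units.
objects = np.zeros((n_objects, n_units))
for k in range(n_objects):
    objects[k, 4 * k: 4 * (k + 1)] = 1.0

eta = 1.0
W_hebb = np.zeros((n_units, n_units))    # slow excitatory synapses (binding / pattern completion)
W_anti = np.ones((n_units, n_units))     # fast synapses mediating desynchronization (segmentation)

for obj in objects:
    co_active = np.outer(obj, obj)
    W_hebb += eta * co_active            # Hebbian: strengthen within-object excitation
    W_anti -= eta * co_active            # anti-Hebbian: weaken fast coupling within an object
np.fill_diagonal(W_hebb, 0.0)
W_anti = np.clip(W_anti, 0.0, None)
np.fill_diagonal(W_anti, 0.0)

# Within-object excitation is strong and between-object excitation absent, while the
# fast (desynchronizing) coupling survives only between different objects.
same = np.outer(objects[0], objects[0]).astype(bool)
diff = np.outer(objects[0], objects[1]).astype(bool)
print(f"Hebbian  within/between: {W_hebb[same].mean():.2f} / {W_hebb[diff].mean():.2f}")
print(f"anti-Heb within/between: {W_anti[same].mean():.2f} / {W_anti[diff].mean():.2f}")
```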
Imitation, empathy, and mirror neurons.
Iacoboni, Marco
2009-01-01
There is a convergence between cognitive models of imitation, constructs derived from social psychology studies on mimicry and empathy, and recent empirical findings from the neurosciences. The ideomotor framework of human actions assumes a common representational format for action and perception that facilitates imitation. Furthermore, the associative sequence learning model of imitation proposes that experience-based Hebbian learning forms links between sensory processing of the actions of others and motor plans. Social psychology studies have demonstrated that imitation and mimicry are pervasive, automatic, and facilitate empathy. Neuroscience investigations have demonstrated physiological mechanisms of mirroring at single-cell and neural-system levels that support the cognitive and social psychology constructs. Why were these neural mechanisms selected, and what is their adaptive advantage? Neural mirroring solves the "problem of other minds" (how we can access and understand the minds of others) and makes intersubjectivity possible, thus facilitating social behavior.
Unsupervised segmentation with dynamical units.
Rao, A Ravishankar; Cecchi, Guillermo A; Peck, Charles C; Kozloski, James R
2008-01-01
In this paper, we present a novel network to separate mixtures of inputs that have been previously learned. A significant capability of the network is that it segments the components of each input object that most contribute to its classification. The network consists of amplitude-phase units that can synchronize their dynamics, so that separation is determined by the amplitude of units in an output layer, and segmentation by phase similarity between input and output layer units. Learning is unsupervised and based on a Hebbian update, and the architecture is very simple. Moreover, efficient segmentation can be achieved even when there is considerable superposition of the inputs. The network dynamics are derived from an objective function that rewards sparse coding in the generalized amplitude-phase variables. We argue that this objective function can provide a possible formal interpretation of the binding problem and that the implementation of the network architecture and dynamics is biologically plausible.
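A generic sketch of a Hebbian update on amplitude-phase (complex-valued) units follows: with a winner-take-all competition, the winning output's weights for the inputs of "its" object become phase-aligned, so that object drives it coherently with large amplitude. This illustrates the general amplitude-phase idea only; the winner-take-all rule, normalization, and toy objects are assumptions, not the paper's network dynamics or objective function.

```python
import numpy as np

rng = np.random.default_rng(7)

n_in, n_out = 16, 4
objects = [np.arange(0, 8), np.arange(8, 16)]                    # two learned input groups
W = 0.05 * np.exp(1j * 2 * np.pi * rng.random((n_out, n_in)))    # complex weights, random phases
eta = 0.1

for _ in range(300):
    group = objects[rng.integers(len(objects))]
    x = np.zeros(n_in, dtype=complex)
    x[group] = np.exp(1j * 2 * np.pi * rng.random())             # object presented at a random phase
    y = W @ x
    k = np.argmax(np.abs(y))                                     # winner-take-all on amplitude
    W[k] += eta * y[k] * np.conj(x)                              # complex Hebbian update
    W[k] /= max(1.0, np.abs(W[k]).max())                         # keep amplitudes bounded

# After learning, the winner's weights for "its" object share a common phase, so that
# object drives it coherently; coherence = |sum| / sum(|.|) is near 1 when aligned.
for k in range(n_out):
    for name, group in zip("AB", objects):
        coherence = np.abs(W[k, group].sum()) / (np.abs(W[k, group]).sum() + 1e-12)
        print(f"output {k}, object {name}: phase coherence {coherence:.2f}")
```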
Miskovic, Vladimir; Keil, Andreas
2012-01-01
The capacity to associate neutral stimuli with affective value is an important survival strategy that can be accomplished by cell assemblies obeying Hebbian learning principles. In the neuroscience laboratory, classical fear conditioning has been extensively used as a model to study learning related changes in neural structure and function. Here, we review the effects of classical fear conditioning on electromagnetic brain activity in humans, focusing on how sensory systems adapt to changing fear-related contingencies. By considering spatio-temporal patterns of mass neuronal activity we illustrate a range of cortical changes related to a retuning of neuronal sensitivity to amplify signals consistent with fear-associated stimuli at the cost of other sensory information. Putative mechanisms that may underlie fear-associated plasticity at the level of the sensory cortices are briefly considered and several avenues for future work are outlined. PMID:22891639
Slow synaptic dynamics in a network: From exponential to power-law forgetting
NASA Astrophysics Data System (ADS)
Luck, J. M.; Mehta, A.
2014-09-01
We investigate a mean-field model of interacting synapses on a directed neural network. Our interest lies in the slow adaptive dynamics of synapses, which are driven by the fast dynamics of the neurons they connect. Cooperation is modeled from the usual Hebbian perspective, while competition is modeled by an original polarity-driven rule. The emergence of a critical manifold culminating in a tricritical point is crucially dependent on the presence of synaptic competition. This leads to a universal 1/t power-law relaxation of the mean synaptic strength along the critical manifold and an equally universal 1/√t relaxation at the tricritical point, to be contrasted with the exponential relaxation that is otherwise generic. In turn, this leads to the natural emergence of long- and short-term memory from different parts of parameter space in a synaptic network, which is the most original and important result of our present investigations.
Optimizing one-shot learning with binary synapses.
Romani, Sandro; Amit, Daniel J; Amit, Yali
2008-08-01
A network of excitatory synapses trained with a conservative version of Hebbian learning is used as a model for recognizing the familiarity of thousands of once-seen stimuli from those never seen before. Such networks were initially proposed for modeling memory retrieval (selective delay activity). We show that the same framework allows the incorporation of both familiarity recognition and memory retrieval, and estimate the network's capacity. In the case of binary neurons, we extend the analysis of Amit and Fusi (1994) to obtain capacity limits based on computations of signal-to-noise ratio of the field difference between selective and non-selective neurons of learned signals. We show that with fast learning (potentiation probability approximately 1), the most recently learned patterns can be retrieved in working memory (selective delay activity). A much higher number of once-seen learned patterns elicit a realistic familiarity signal in the presence of an external field. With potentiation probability much less than 1 (slow learning), memory retrieval disappears, whereas familiarity recognition capacity is maintained at a similarly high level. This analysis is corroborated in simulations. For analog neurons, where such analysis is more difficult, we simplify the capacity analysis by studying the excess number of potentiated synapses above the steady-state distribution. In this framework, we derive the optimal constraint between potentiation and depression probabilities that maximizes the capacity.
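A minimal Python sketch of the kind of stochastic binary-synapse learning described above: once-seen ±1 patterns are imprinted on binary synapses of a single familiarity readout, and the input field for recently stored versus novel probes is compared. The network size, coding, potentiation/depression probabilities and the field-based readout are illustrative assumptions, not the parameters or the full network analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 8000                       # input neurons
P = 500                        # once-seen patterns, shown in sequence
q_plus, q_minus = 0.05, 0.05   # potentiation / depression probabilities (illustrative)

w = rng.integers(0, 2, N)                  # binary synapses onto a single familiarity readout
patterns = rng.choice([-1, 1], size=(P, N))

# one-shot, stochastic Hebbian-like learning: each pattern is seen exactly once;
# synapses with active (+1) inputs potentiate with prob q_plus, the others depress with prob q_minus;
# older traces are gradually overwritten (palimpsest), so recent patterns carry the strongest signal
for xi in patterns:
    pot = (xi == 1) & (rng.random(N) < q_plus)
    dep = (xi == -1) & (rng.random(N) < q_minus)
    w = np.where(pot, 1, np.where(dep, 0, w))

def field(x, w):
    """Input field on the readout; familiar patterns should produce a larger field."""
    return (w @ x) / np.sqrt(len(x))

stored = np.array([field(xi, w) for xi in patterns[-20:]])            # 20 most recent patterns
novel = np.array([field(rng.choice([-1, 1], N), w) for _ in range(200)])

snr = (stored.mean() - novel.mean()) / novel.std()
print(f"recent stored field {stored.mean():.2f}, novel field {novel.mean():.2f}, SNR ~ {snr:.2f}")
```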
McNaughton, Neil; Wickens, Jeff
2003-01-01
The hippocampus has been proposed as a key component of a "behavioural inhibition system". We explore the implications of this idea for the nature of associative memory, i.e., learning that is distinct from the moulding of response sequences by error correction and reinforcement. It leads to the view that all associative memory depends on purely Hebbian mechanisms. Memories involve acquisition of new goals, not the strengthening of new stimulus-response links. Critically, memories will consist of affectively positive and affectively negative associations as well as "purely cognitive" information. The hippocampus is seen as a supervisor that is normally "just checking" information about currently available goals. When one available goal is pre-eminent, there is no hippocampal output and the goal controls the response system. When two or more goals are similarly and highly primed, there is conflict. This is detected by the hippocampus, which sends output that increases the valence of affectively negative perceptions and so resolves the conflict by suppressing more aversive goals. Such conflict resolution occurs with innate as well as acquired goals and is fundamentally non-memorial. But, in memory paradigms, it can often act to suppress interference on the current trial and, through Hebbian association of the increase in negative affect, decrease the probability of interference on later trials and during consolidation. Both memory-driven and innate behaviour are made hippocampal-dependent by innate and acquired conflicting tendencies, not by the class of stimulus presented.
Norman, Kenneth A; Newman, Ehren L; Perotte, Adler J
2005-11-01
The stability-plasticity problem (i.e. how the brain incorporates new information into its model of the world, while at the same time preserving existing knowledge) has been at the forefront of computational memory research for several decades. In this paper, we critically evaluate how well the Complementary Learning Systems theory of hippocampo-cortical interactions addresses the stability-plasticity problem. We identify two major challenges for the model: finding a learning algorithm for cortex and hippocampus that enacts selective strengthening of weak memories and selective punishment of competing memories, and preventing catastrophic forgetting in the case of non-stationary environments (i.e. when items are temporarily removed from the training set). We then discuss potential solutions to these problems: First, we describe a recently developed learning algorithm that leverages neural oscillations to find weak parts of memories (so they can be strengthened) and strong competitors (so they can be punished), and we show how this algorithm outperforms other learning algorithms (CPCA Hebbian learning and Leabra) at memorizing overlapping patterns. Second, we describe how autonomous re-activation of memories (separately in cortex and hippocampus) during REM sleep, coupled with the oscillating learning algorithm, can reduce the rate of forgetting of input patterns that are no longer present in the environment. We then present a simple demonstration of how this process can prevent catastrophic interference in an AB-AC learning paradigm.
Pattern Learning, Damage and Repair within Biological Neural Networks
NASA Astrophysics Data System (ADS)
Siu, Theodore; Fitzgerald O'Neill, Kate; Shinbrot, Troy
2015-03-01
Traumatic brain injury (TBI) causes damage to neural networks, potentially leading to disability or even death. Nearly one in ten of these patients die, and most of the remainder suffer from symptoms ranging from headaches and nausea to convulsions and paralysis. In vitro studies to develop treatments for TBI have limited in vivo applicability, and in vitro therapies have even proven to worsen the outcome of TBI patients. We propose that this disconnect between in vitro and in vivo outcomes may be associated with the fact that in vitro tests assess indirect measures of neuronal health but do not investigate the actual function of neuronal networks. Therefore, in this talk, we examine both in vitro and in silico neuronal networks that actually perform a function: pattern identification. We allow the networks to execute genetic and Hebbian learning and, additionally, we examine the effects of damage and subsequent repair within our networks. We show that the length of repaired connections affects the overall pattern-learning performance of the network, and we propose therapies that may improve function following TBI in clinical settings.
Cultured Cortical Neurons Can Perform Blind Source Separation According to the Free-Energy Principle
Isomura, Takuya; Kotani, Kiyoshi; Jimbo, Yasuhiko
2015-01-01
Blind source separation is the computation underlying the cocktail party effect, whereby a partygoer can distinguish a particular talker’s voice from the ambient noise. Early studies indicated that the brain might use blind source separation as a signal processing strategy for sensory perception, and numerous mathematical models have been proposed; however, it remains unclear how neural networks extract particular sources from a complex mixture of inputs. We discovered that neurons in cultures of dissociated rat cortical cells could learn to represent particular sources while filtering out other signals. Specifically, the distinct classes of neurons in the culture learned to respond to the distinct sources after repeated training stimulation. Moreover, the neural network structures changed to reduce free energy, as predicted by the free-energy principle, a candidate unified theory of learning and memory, and by Jaynes’ principle of maximum entropy. This implicit learning can only be explained by some form of Hebbian plasticity. These results are the first in vitro (as opposed to in silico) demonstration of neural networks performing blind source separation, and the first formal demonstration of neuronal self-organization under the free-energy principle. PMID:26690814
Learning of Chunking Sequences in Cognition and Behavior
Rabinovich, Mikhail
2015-01-01
We often learn and recall long sequences in smaller segments, such as a phone number 858 534 22 30 memorized as four segments. Behavioral experiments suggest that humans and some animals employ this strategy of breaking down cognitive or behavioral sequences into chunks in a wide variety of tasks, but the dynamical principles of how this is achieved remain unknown. Here, we study the temporal dynamics of chunking for learning cognitive sequences in a chunking representation using a dynamical model of competing modes arranged to evoke hierarchical Winnerless Competition (WLC) dynamics. Sequential memory is represented as trajectories along a chain of metastable fixed points at each level of the hierarchy, and bistable Hebbian dynamics enables the learning of such trajectories in an unsupervised fashion. Using computer simulations, we demonstrate the learning of a chunking representation of sequences and their robust recall. During learning, the dynamics associates a set of modes to each information-carrying item in the sequence and encodes their relative order. During recall, hierarchical WLC guarantees the robustness of the sequence order when the sequence is not too long. The resulting patterns of activity share several features observed in behavioral experiments, such as the pauses between boundaries of chunks, their size and their duration. Failures in learning chunking sequences provide new insights into the dynamical causes of neurological disorders such as Parkinson’s disease and schizophrenia. PMID:26584306
Linear analysis of auto-organization in Hebbian neural networks.
Carlos Letelier, J; Mpodozis, J
1995-01-01
The self-organization of neurotopies where neural connections follow Hebbian dynamics is framed in terms of linear operator theory. A general and exact equation describing the time evolution of the overall synaptic strength connecting two neural laminae is derived. This linear matricial equation, which is similar to the equations used to describe oscillating systems in physics, is modified by the introduction of non-linear terms, in order to capture self-organizing (or auto-organizing) processes. The behavior of a simple and small system, that contains a non-linearity that mimics a metabolic constraint, is analyzed by computer simulations. The emergence of a simple "order" (or degree of organization) in this low-dimensionality model system is discussed.
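The paper's central object, a linear matrix equation for the inter-laminar synaptic strengths, is not reproduced here; the sketch below instead integrates a generic linear Hebbian form (dW/dt proportional to the outer product of linear post- and presynaptic laminar activities) together with a toy rescaling standing in for the metabolic constraint. Both the form of that constraint and all parameters are assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

n_pre, n_post = 8, 6
eta, dt, steps = 0.01, 0.1, 2000

u = rng.standard_normal(n_pre)                # fixed presynaptic lamina activity pattern
W = 0.1 * rng.standard_normal((n_post, n_pre))

def metabolic_constraint(W, budget=5.0):
    """Toy nonlinearity: rescale weights whenever the total synaptic resource exceeds a budget."""
    total = np.abs(W).sum()
    return W * (budget / total) if total > budget else W

for _ in range(steps):
    v = W @ u                                 # postsynaptic lamina activity (linear transfer)
    W = W + dt * eta * np.outer(v, u)         # Hebbian growth: linear in W for fixed input u
    W = metabolic_constraint(W)               # nonlinear term mimicking a metabolic limit

# rows of W collapse onto the input direction u (up to sign), a simple emergent "order"
row = W[0] / np.linalg.norm(W[0])
print("first row direction:", np.round(row, 2))
print("input direction    :", np.round(u / np.linalg.norm(u), 2))
```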
Functional model of biological neural networks.
Lo, James Ting-Ho
2010-12-01
A functional model of biological neural networks, called temporal hierarchical probabilistic associative memory (THPAM), is proposed in this paper. THPAM comprises functional models of dendritic trees for encoding inputs to neurons, a first type of neuron for generating spike trains, a second type of neuron for generating graded signals to modulate neurons of the first type, supervised and unsupervised Hebbian learning mechanisms for easy learning and retrieval, an arrangement of dendritic trees for maximizing generalization, hardwiring for rotation-translation-scaling invariance, and feedback connections with different delay durations that allow neurons to make full use of present and past information generated by neurons in the same and higher layers. These functional models and their processing operations have many functions of biological neural networks that have not been achieved by other models in the open literature, and they provide logically coherent answers to many long-standing neuroscientific questions. However, biological justification of these functional models and their processing operations is required for THPAM to qualify as a macroscopic model (or low-order approximation) of biological neural networks.
Stream-based Hebbian eigenfilter for real-time neuronal spike discrimination
2012-01-01
Background: Principal component analysis (PCA) has been widely employed for automatic neuronal spike sorting. Calculating principal components (PCs) is computationally expensive and requires complex numerical operations and large memory resources, so substantial hardware resources are needed for hardware implementations of PCA. The generalized Hebbian algorithm (GHA) was proposed for calculating PCs of neuronal spikes in our previous work, eliminating the computationally expensive covariance analysis and eigenvalue decomposition of conventional PCA algorithms. However, large memory resources are still inherently required for storing a large volume of aligned spikes for training PCs. Such large memories consume considerable hardware resources and contribute significant power dissipation, which makes GHA difficult to implement in portable or implantable multi-channel recording micro-systems. Method: In this paper, we present a new algorithm for PCA-based spike sorting based on GHA, namely the stream-based Hebbian eigenfilter, which eliminates the inherent memory requirements of GHA while preserving spike-sorting accuracy by exploiting the pseudo-stationarity of neuronal spikes. Because the large hardware storage requirement is removed, the proposed algorithm can lead to ultra-low hardware resource usage and power consumption, which is critical for future multi-channel micro-systems. Both clinical and synthetic neural recording data sets were employed to evaluate the accuracy of the stream-based Hebbian eigenfilter. The spike-sorting performance and computational complexity of the eigenfilter were rigorously evaluated and compared with conventional PCA algorithms. Field-programmable gate arrays (FPGAs) were employed to implement the proposed algorithm, evaluate the hardware implementations, and demonstrate the reduction in both power consumption and hardware memory achieved by the streaming computation. Results and discussion: Results demonstrate that the stream-based eigenfilter achieves the same accuracy and is 10 times more computationally efficient than conventional PCA algorithms. Hardware evaluations show that 90.3% of logic resources, 95.1% of power consumption and 86.8% of computing latency can be saved by the stream-based eigenfilter compared with PCA hardware. By utilizing the streaming method, 92% of memory resources and 67% of power consumption can be saved compared with a direct implementation of GHA. Conclusion: The stream-based Hebbian eigenfilter presents a novel approach enabling real-time spike sorting with reduced computational complexity and hardware cost. This design can be further utilized for multi-channel neuro-physiological experiments or chronic implants. PMID:22490725
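The update the eigenfilter builds on, the generalized Hebbian algorithm (Sanger's rule), can be run one snippet at a time without buffering data or forming a covariance matrix; the sketch below shows that streaming update on synthetic spike-like waveforms and checks the learned directions against a batch eigendecomposition. The waveform shapes, dimensions and learning rate are invented for illustration and are unrelated to the paper's datasets or hardware design.

```python
import numpy as np

rng = np.random.default_rng(2)

D, K, eta = 32, 2, 0.005        # snippet length, components tracked, learning rate

# synthetic stream of spike-like snippets (stand-ins for aligned recordings)
t = np.linspace(0, 1, D)
shapes = np.stack([np.exp(-((t - 0.3) / 0.05) ** 2),
                   -np.exp(-((t - 0.5) / 0.08) ** 2)])

def spike_stream(n):
    for _ in range(n):
        s = shapes[rng.integers(2)] * rng.uniform(0.5, 1.5)
        yield s + 0.05 * rng.standard_normal(D)

W = 0.01 * rng.standard_normal((K, D))   # rows converge to the leading eigenvectors of E[xx^T]

for x in spike_stream(20000):
    y = W @ x
    # Sanger's rule: Hebbian term minus a lower-triangular decorrelation term;
    # no covariance matrix or snippet buffer is ever stored
    W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

# sanity check against a batch eigendecomposition of the correlation matrix
X = np.array(list(spike_stream(2000)))
evals, evecs = np.linalg.eigh(X.T @ X / len(X))
top = evecs[:, ::-1][:, :K].T
for k in range(K):
    overlap = abs((W[k] / np.linalg.norm(W[k])) @ top[k])
    print(f"component {k+1}: |overlap| with batch eigenvector = {overlap:.3f}")
```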
Diverse strategy-learning styles promote cooperation in evolutionary spatial prisoner's dilemma game
NASA Astrophysics Data System (ADS)
Liu, Run-Ran; Jia, Chun-Xiao; Rong, Zhihai
2015-11-01
Observational learning and practice learning are two important learning styles and play important roles in our information acquisition. In this paper, we study a spatial evolutionary prisoner's dilemma game where players can choose either the observational learning rule or the practice learning rule when updating their strategies. In the proposed model, a parameter p controls the preference of players for the observational learning rule. We find that there exists an optimal value of p leading to the highest cooperation level, which indicates that cooperation is promoted by the two learning rules acting together and is not favored by a single learning rule alone. By analysing the dynamical behavior of the system, we find that the observational learning rule helps players residing on cooperative clusters to more easily realize the bad consequences of mutual defection. However, too high an observational learning probability prevents players from forming compact cooperative clusters. Our results highlight the importance of the strategy-updating rule, and in particular of the observational learning rule, for evolutionary cooperation.
Recommendation System Based On Association Rules For Distributed E-Learning Management Systems
NASA Astrophysics Data System (ADS)
Mihai, Gabroveanu
2015-09-01
Traditional Learning Management Systems are installed on a single server where learning materials and user data are kept. To increase performance, the Learning Management System can be installed on multiple servers, with learning materials and user data distributed across them, yielding a Distributed Learning Management System. In this paper, a prototype recommendation system based on association rules is proposed for Distributed Learning Management Systems. Information from the LMS databases is analyzed using distributed data mining algorithms in order to extract association rules, and the extracted rules are then used as inference rules to provide personalized recommendations. The quality of the recommendations is improved because the rules used to make the inferences are more accurate, since they aggregate knowledge from all e-Learning systems included in the Distributed Learning Management System.
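As a toy illustration of this pipeline, the sketch below has each "server" count item and item-pair supports locally, merges the counts, derives pair association rules above support and confidence thresholds, and uses them to recommend materials. The item names, thresholds and restriction to pair itemsets are invented simplifications, not the authors' prototype or algorithms.

```python
from collections import Counter
from itertools import combinations

# toy "per-server" transaction logs: which learning materials each student accessed
server_a = [{"intro", "quiz1"}, {"intro", "video2"}, {"intro", "quiz1", "video2"}]
server_b = [{"intro", "quiz1"}, {"video2", "quiz2"}, {"intro", "quiz1", "quiz2"}]

def local_counts(transactions):
    """Each LMS node counts items and item pairs locally (the distributed step)."""
    items, pairs = Counter(), Counter()
    for t in transactions:
        items.update(t)
        pairs.update(frozenset(p) for p in combinations(sorted(t), 2))
    return items, pairs, len(transactions)

def merge(counts):
    """Aggregate the per-server counts into global support counts."""
    items, pairs, n = Counter(), Counter(), 0
    for i, p, m in counts:
        items += i
        pairs += p
        n += m
    return items, pairs, n

items, pairs, n = merge([local_counts(server_a), local_counts(server_b)])

min_support, min_conf = 0.3, 0.6
rules = []
for pair, c in pairs.items():
    if c / n < min_support:
        continue
    a, b = tuple(pair)
    for x, y in ((a, b), (b, a)):
        conf = c / items[x]
        if conf >= min_conf:
            rules.append((x, y, round(c / n, 2), round(conf, 2)))

print("rules (antecedent -> consequent, support, confidence):")
for r in rules:
    print("  ", r)

def recommend(accessed, rules):
    """Recommend materials whose rule antecedents the student has already accessed."""
    return {y for x, y, _s, _c in rules if x in accessed and y not in accessed}

print("recommendation for a student who accessed {'intro'}:", recommend({"intro"}, rules))
```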
Emergent explosive synchronization in adaptive complex networks
NASA Astrophysics Data System (ADS)
Avalos-Gaytán, Vanesa; Almendral, Juan A.; Leyva, I.; Battiston, F.; Nicosia, V.; Latora, V.; Boccaletti, S.
2018-04-01
Adaptation plays a fundamental role in shaping the structure of a complex network and improving its functional fitting. Even though increasing the level of synchronization in a biological system is often considered the main driving force for adaptation, there is evidence of negative effects induced by excessive synchronization. This indicates that coherence alone is not enough to explain all the structural features observed in many real-world networks. In this work, we propose an adaptive network model where the dynamical evolution of the node states toward synchronization is coupled with an evolution of the link weights based on an anti-Hebbian adaptive rule, which accounts for the presence of inhibitory effects in the system. We find that the emergent networks spontaneously develop the structural conditions to sustain explosive synchronization. Our results can shed light on the shaping mechanisms at the heart of the structural and dynamical organization of some relevant biological systems, namely brain networks, for which the emergence of explosive synchronization has been observed.
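A minimal sketch of the kind of co-evolution described above, assuming Kuramoto phase oscillators and one plausible anti-Hebbian weight rule (links between phase-synchronized nodes weaken, links between desynchronized nodes strengthen); the paper's actual equations, parameters and the analysis of explosive synchronization are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

N = 50
dt, T = 0.05, 4000
eps = 0.02                               # slow adaptation rate relative to the phase dynamics

omega = rng.normal(0.0, 0.5, N)          # natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)
W = rng.uniform(0.3, 0.7, (N, N))        # weighted, initially dense coupling
np.fill_diagonal(W, 0.0)

def order_parameter(theta):
    return abs(np.exp(1j * theta).mean())

for step in range(T):
    diff = theta[None, :] - theta[:, None]                 # diff[i, j] = theta_j - theta_i
    # Kuramoto phase dynamics with weighted coupling
    theta = theta + dt * (omega + (W * np.sin(diff)).sum(axis=1) / N)
    # anti-Hebbian adaptation: in-phase pairs lose weight, out-of-phase pairs gain weight
    W = W - dt * eps * np.cos(diff)
    np.fill_diagonal(W, 0.0)
    W = np.clip(W, 0.0, 1.0)                               # keep weights bounded

print("final order parameter r =", round(order_parameter(theta), 3))
print("weight mean/std:", round(W.mean(), 3), round(W.std(), 3))
```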
Excitation-neurogenesis coupling in adult neural stem/progenitor cells.
Deisseroth, Karl; Singla, Sheela; Toda, Hiroki; Monje, Michelle; Palmer, Theo D; Malenka, Robert C
2004-05-27
A wide variety of in vivo manipulations influence neurogenesis in the adult hippocampus. It is not known, however, if adult neural stem/progenitor cells (NPCs) can intrinsically sense excitatory neural activity and thereby implement a direct coupling between excitation and neurogenesis. Moreover, the theoretical significance of activity-dependent neurogenesis in hippocampal-type memory processing networks has not been explored. Here we demonstrate that excitatory stimuli act directly on adult hippocampal NPCs to favor neuron production. The excitation is sensed via Ca(v)1.2/1.3 (L-type) Ca(2+) channels and NMDA receptors on the proliferating precursors. Excitation through this pathway acts to inhibit expression of the glial fate genes Hes1 and Id2 and increase expression of NeuroD, a positive regulator of neuronal differentiation. These activity-sensing properties of the adult NPCs, when applied as an "excitation-neurogenesis coupling rule" within a Hebbian neural network, predict significant advantages for both the temporary storage and the clearance of memories.
Synchronization and desynchronization in a network of locally coupled Wilson-Cowan oscillators.
Campbell, S; Wang, D
1996-01-01
A network of Wilson-Cowan (WC) oscillators is constructed, and its emergent properties of synchronization and desynchronization are investigated by both computer simulation and formal analysis. The network is a 2D matrix, where each oscillator is coupled only to its neighbors. We show analytically that a chain of locally coupled oscillators (the piecewise linear approximation to the WC oscillator) synchronizes, and we present a technique to rapidly entrain finite numbers of oscillators. The coupling strengths change on a fast time scale based on a Hebbian rule. A global separator is introduced which receives input from and sends feedback to each oscillator in the matrix. The global separator is used to desynchronize different oscillator groups. Unlike many other models, the properties of this network emerge from local connections that preserve spatial relationships among components and are critical for encoding Gestalt principles of feature grouping. The ability to synchronize and desynchronize oscillator groups within this network offers a promising approach for pattern segmentation and figure/ground segregation based on oscillatory correlation.
A single-cell spiking model for the origin of grid-cell patterns
Kempter, Richard
2017-01-01
Spatial cognition in mammals is thought to rely on the activity of grid cells in the entorhinal cortex, yet the fundamental principles underlying the origin of grid-cell firing are still debated. Grid-like patterns could emerge via Hebbian learning and neuronal adaptation, but current computational models have remained too abstract to allow direct confrontation with experimental data. Here, we propose a single-cell spiking model that generates grid firing fields via spike-rate adaptation and spike-timing dependent plasticity. Through rigorous mathematical analysis applicable in the linear limit, we quantitatively predict the requirements for grid-pattern formation, and we establish a direct link to classical pattern-forming systems of the Turing type. Our study lays the groundwork for biophysically realistic models of grid-cell activity. PMID:28968386
Developmental changes in automatic rule-learning mechanisms across early childhood.
Mueller, Jutta L; Friederici, Angela D; Männel, Claudia
2018-06-27
Infants' ability to learn complex linguistic regularities from early on has been revealed by electrophysiological studies indicating that 3-month-olds, but not adults, can automatically detect non-adjacent dependencies between syllables. While different ERP responses in adults and infants suggest that both linguistic rule learning and its link to basic auditory processing undergo developmental changes, systematic investigations of the developmental trajectories are scarce. In the present study, we assessed 2- and 4-year-olds' ERP indicators of pitch discrimination and linguistic rule learning in a syllable-based oddball design. To test for the relation between auditory discrimination and rule learning, ERP responses to pitch changes were used as predictors of potential linguistic rule-learning effects. Results revealed that 2-year-olds, but not 4-year-olds, showed ERP markers of rule learning. Although 2-year-olds' rule learning was not dependent on differences in pitch perception, 4-year-old children demonstrated such a dependency, in that those children who showed more pronounced responses to pitch changes still showed an effect of rule learning. These results narrow down the developmental decline of the ability for automatic linguistic rule learning to the age between 2 and 4 years and, moreover, point towards a strong modification of this change by auditory processes. At an age when the ability for automatic linguistic rule learning phases out, rule learning can still be observed in children with enhanced auditory responses. The observed interrelations are plausible causes of age-of-acquisition effects and inter-individual differences in language learning.
Acute effects of aerobic exercise promote learning
Perini, Renza; Bortoletto, Marta; Capogrosso, Michela; Fertonani, Anna; Miniussi, Carlo
2016-01-01
The benefits that physical exercise confers on cardiovascular health are well known, whereas the notion that physical exercise can also improve cognitive performance has only recently begun to be explored and has thus far yielded only controversial results. In the present study, we used a sample of young male subjects to test the effects that a single bout of aerobic exercise has on learning. Two tasks were run: the first was an orientation discrimination task involving the primary visual cortex, and the second was a simple thumb abduction motor task that relies on the primary motor cortex. Forty-four and forty volunteers participated in the first and second experiments, respectively. We found that a single bout of aerobic exercise can significantly facilitate learning mechanisms within visual and motor domains and that these positive effects can persist for at least 30 minutes following exercise. This finding suggests that physical activity, at least of moderate intensity, might promote brain plasticity. By combining physical activity–induced plasticity with specific cognitive training–induced plasticity, we favour a gradual up-regulation of a functional network due to a steady increase in synaptic strength, promoting associative Hebbian-like plasticity. PMID:27146330
Sun, Qing; Schwartz, François; Michel, Jacques; Herve, Yannick; Dalmolin, Renzo
2011-06-01
In this paper, we aim at developing an analog spiking neural network (SNN) to reinforce the performance of conventional cardiac resynchronization therapy (CRT) devices (also called biventricular pacemakers). Targeting an alternative analog solution in 0.13-μm CMOS technology, this paper proposes an approach to improve cardiac delay predictions in every cardiac period in order to help the CRT device provide real-time optimal heartbeats. The primary analog SNN architecture is proposed and its implementation is studied to fulfill the requirement of very low energy consumption. By using Hebbian learning and reinforcement learning algorithms, the intended adaptive CRT device works in different functional modes. Simulations of both learning algorithms have been carried out and demonstrate the global functionality. To improve the realism of the system, we introduce various heart behavior models (with constant or variable heart rates) that allow pathologic simulations with or without noise on the signals of the input sensors. Simulations of the global system (pacemaker models coupled with heart models) have been investigated and used to validate the analog spiking neural network implementation.
Rule learning in autism: the role of reward type and social context.
Jones, E J H; Webb, S J; Estes, A; Dawson, G
2013-01-01
Learning abstract rules is central to social and cognitive development. Across two experiments, we used Delayed Non-Matching to Sample tasks to characterize the longitudinal development and nature of rule-learning impairments in children with Autism Spectrum Disorder (ASD). Results showed that children with ASD consistently experienced more difficulty learning an abstract rule from a discrete physical reward than children with DD. Rule learning was facilitated by the provision of more concrete reinforcement, suggesting an underlying difficulty in forming conceptual connections. Learning abstract rules about social stimuli remained challenging through late childhood, indicating the importance of testing executive functions in both social and non-social contexts.
McMurray, Bob; Horst, Jessica S; Samuelson, Larissa K
2012-10-01
Classic approaches to word learning emphasize referential ambiguity: In naming situations, a novel word could refer to many possible objects, properties, actions, and so forth. To solve this, researchers have posited constraints and inference strategies, but assume that determining the referent of a novel word is isomorphic to learning. We present an alternative in which referent selection is an online process and independent of long-term learning. We illustrate this theoretical approach with a dynamic associative model in which referent selection emerges from real-time competition between referents and learning is associative (Hebbian). This model accounts for a range of findings including the differences in expressive and receptive vocabulary, cross-situational learning under high degrees of ambiguity, accelerating (vocabulary explosion) and decelerating (power law) learning, fast mapping by mutual exclusivity (and differences in bilinguals), improvements in familiar word recognition with development, and correlations between speed of processing and learning. Together, these results suggest that (a) association learning buttressed by dynamic competition can account for much of the literature; (b) familiar word recognition is subserved by the same processes that identify the referents of novel words (fast mapping); (c) online competition may allow children to leverage information available in the task to augment performance despite slow learning; (d) in complex systems, associative learning is highly multifaceted; and (e) learning and referent selection, though logically distinct, can be subtly related. This work suggests more sophisticated ways of describing the interaction between situation- and developmental-time processes and points to the need for considering such interactions as a primary determinant of development.
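A minimal sketch of the separation the authors argue for, with referent selection implemented as an online competition (a softmax over current associations among the objects present) and learning as a slow, bounded Hebbian update. The architecture, parameters and decay term are illustrative assumptions rather than the authors' implemented model.

```python
import numpy as np

rng = np.random.default_rng(4)

n_words, n_objects = 10, 10                # word i ultimately maps to object i (ground truth)
A = np.full((n_words, n_objects), 0.01)    # slowly learned associative weights
eta = 0.05                                 # deliberately slow Hebbian learning
beta = 5.0                                 # competition sharpness for referent selection

def referent_selection(word, present):
    """Online competition among the objects present in the current scene."""
    act = A[word, present]
    p = np.exp(beta * act)
    p /= p.sum()
    return rng.choice(present, p=p)

# cross-situational training: each trial names one word with several objects in view
for trial in range(4000):
    word = rng.integers(n_words)
    distractors = rng.choice(np.delete(np.arange(n_objects), word), 3, replace=False)
    present = np.concatenate(([word], distractors))
    chosen = referent_selection(word, present)
    A[word, chosen] += eta * (1.0 - A[word, chosen])   # Hebbian strengthening, bounded at 1
    A[word, present] *= 0.999                          # mild interference/decay

accuracy = (A.argmax(axis=1) == np.arange(n_words)).mean()
print("comprehension accuracy after slow associative learning:", accuracy)
```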
Rapid motor learning in the translational vestibulo-ocular reflex
NASA Technical Reports Server (NTRS)
Zhou, Wu; Weldon, Patrick; Tang, Bingfeng; King, W. M.; Shelhamer, M. J. (Principal Investigator)
2003-01-01
Motor learning was induced in the translational vestibulo-ocular reflex (TVOR) when monkeys were repeatedly subjected to a brief (0.5 sec) head translation while they tried to maintain binocular fixation on a visual target for juice rewards. If the target was world-fixed, the initial eye speed of the TVOR gradually increased; if the target was head-fixed, the initial eye speed of the TVOR gradually decreased. The rate of learning acquisition was very rapid, with a time constant of approximately 100 trials, which was equivalent to <1 min of accumulated stimulation. These learned changes were consolidated over ≥1 d without any reinforcement, indicating induction of long-term synaptic plasticity. Although the learning generalized to targets with different viewing distances and to head translations with different accelerations, it was highly specific for the particular combination of head motion and evoked eye movement associated with the training. For example, it was specific to the modality of the stimulus (translation vs rotation) and the direction of the evoked eye movement in the training. Furthermore, when one eye was aligned with the heading direction so that it remained motionless during training, learning was not expressed in this eye, but only in the other nonaligned eye. These specificities show that the learning sites are neither in the sensory nor the motor limb of the reflex but in the sensory-motor transformation stage of the reflex. The dependence of the learning on both head motion and evoked eye movement suggests that Hebbian learning may be one of the underlying cellular mechanisms.
FPGA Implementation of Generalized Hebbian Algorithm for Texture Classification
Lin, Shiow-Jyu; Hwang, Wen-Jyi; Lee, Wei-Hao
2012-01-01
This paper presents a novel hardware architecture for principal component analysis. The architecture is based on the Generalized Hebbian Algorithm (GHA) because of its simplicity and effectiveness. The architecture is separated into three portions: the weight vector updating unit, the principal computation unit and the memory unit. In the weight vector updating unit, the computation of different synaptic weight vectors shares the same circuit for reducing the area costs. To show the effectiveness of the circuit, a texture classification system based on the proposed architecture is physically implemented by Field Programmable Gate Array (FPGA). It is embedded in a System-On-Programmable-Chip (SOPC) platform for performance measurement. Experimental results show that the proposed architecture is an efficient design for attaining both high speed performance and low area costs. PMID:22778640
Pearce, Timothy C.; Karout, Salah; Rácz, Zoltán; Capurro, Alberto; Gardner, Julian W.; Cole, Marina
2012-01-01
We present a biologically-constrained neuromorphic spiking model of the insect antennal lobe macroglomerular complex that encodes concentration ratios of chemical components existing within a blend, implemented using a set of programmable logic neuronal modeling cores. Depending upon the level of inhibition and symmetry in its inhibitory connections, the model exhibits two dynamical regimes: fixed point attractor (winner-takes-all type), and limit cycle attractor (winnerless competition type) dynamics. We show that, when driven by chemosensor input in real-time, the dynamical trajectories of the model's projection neuron population activity accurately encode the concentration ratios of binary odor mixtures in both dynamical regimes. By deploying spike timing-dependent plasticity in a subset of the synapses in the model, we demonstrate that a Hebbian-like associative learning rule is able to organize weights into a stable configuration after exposure to a randomized training set comprising a variety of input ratios. Examining the resulting local interneuron weights in the model shows that each inhibitory neuron competes to represent possible ratios across the population, forming a ratiometric representation via mutual inhibition. After training the resulting dynamical trajectories of the projection neuron population activity show amplification and better separation in their response to inputs of different ratios. Finally, we demonstrate that by using limit cycle attractor dynamics, it is possible to recover and classify blend ratio information from the early transient phases of chemosensor responses in real-time more rapidly and accurately compared to a nearest-neighbor classifier applied to the normalized chemosensor data. Our results demonstrate the potential of biologically-constrained neuromorphic spiking models in achieving rapid and efficient classification of early phase chemosensor array transients with execution times well beyond biological timescales. PMID:23874265
Ising formulation of associative memory models and quantum annealing recall
NASA Astrophysics Data System (ADS)
Santra, Siddhartha; Shehab, Omar; Balu, Radhakrishnan
2017-12-01
Associative memory models, in theoretical neuro- and computer sciences, can generally store at most a linear number of memories. Recalling memories in these models can be understood as retrieval of the energy minimizing configuration of classical Ising spins, closest in Hamming distance to an imperfect input memory, where the energy landscape is determined by the set of stored memories. We present an Ising formulation for associative memory models and consider the problem of memory recall using quantum annealing. We show that allowing for input-dependent energy landscapes allows storage of up to an exponential number of memories (in terms of the number of neurons). Further, we show how quantum annealing may naturally be used for recall tasks in such input-dependent energy landscapes, although the recall time may increase with the number of stored memories. Theoretically, we obtain the radius of attractor basins R(N) and the capacity C(N) of such a scheme and their tradeoffs. Our calculations establish that, for randomly chosen memories, the capacity of our model using the Hebbian learning rule as a function of problem size can be expressed as C(N) = O(e^{C_1 N}) with C_1 ≥ 0, and recall succeeds on randomly chosen memory sets with probability 1 - e^{-C_2 N}, C_2 ≥ 0, where C_1 + C_2 = (0.5 - f)^2/(1 - f) and f = R(N)/N, 0 ≤ f ≤ 0.5, is the radius of attraction in terms of the Hamming distance of an input probe from a stored memory as a fraction of the problem size. We demonstrate the application of this scheme on a programmable quantum annealing device, the D-wave processor.
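For reference, the classical baseline the paper generalizes can be sketched directly: store random ±1 memories with the standard Hebbian rule J = (1/N) Σ_μ ξ^μ (ξ^μ)ᵀ and recall one of them from a probe at Hamming distance fN by asynchronous energy descent. The input-dependent energy landscapes and quantum-annealing recall that give the exponential capacity are not attempted here; sizes and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

N, P = 200, 10          # neurons, stored memories (well below the Hopfield capacity ~0.14 N)
f = 0.2                 # fraction of flipped bits in the input probe (radius of attraction)

xi = rng.choice([-1, 1], size=(P, N))
J = (xi.T @ xi) / N     # Hebbian couplings
np.fill_diagonal(J, 0.0)

def energy(s):
    return -0.5 * s @ J @ s

def recall(probe, sweeps=20):
    """Asynchronous zero-temperature updates: each flip lowers (or keeps) the Ising energy."""
    s = probe.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            s[i] = 1 if J[i] @ s >= 0 else -1
    return s

target = xi[0]
probe = target.copy()
flip = rng.choice(N, int(f * N), replace=False)
probe[flip] *= -1

s = recall(probe)
print(f"probe overlap {(probe @ target) / N:.2f} -> recalled overlap {(s @ target) / N:.2f}, "
      f"energy {energy(s):.1f}")
```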
Learning and Tuning of Fuzzy Rules
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.
1997-01-01
In this chapter, we review some of the current techniques for learning and tuning fuzzy rules. For clarity, we refer to the process of generating rules from data as the learning problem and distinguish it from tuning an already existing set of fuzzy rules. For learning, we touch on unsupervised learning techniques such as fuzzy c-means, fuzzy decision tree systems, fuzzy genetic algorithms, and linear fuzzy rules generation methods. For tuning, we discuss Jang's ANFIS architecture, Berenji-Khedkar's GARIC architecture and its extensions in GARIC-Q. We show that the hybrid techniques capable of learning and tuning fuzzy rules, such as CART-ANFIS, RNN-FLCS, and GARIC-RB, are desirable in development of a number of future intelligent systems.
Concurrence of rule- and similarity-based mechanisms in artificial grammar learning.
Opitz, Bertram; Hofmann, Juliane
2015-03-01
A current theoretical debate concerns whether rule-based or similarity-based learning prevails during artificial grammar learning (AGL). Although the majority of findings are consistent with a similarity-based account of AGL, it has been argued that these results were obtained only after limited exposure to study exemplars, and that performance on subsequent grammaticality judgment tests has often been barely above chance level. In three experiments, we investigated the conditions under which rule- and similarity-based learning can be applied. Participants were exposed to exemplars of an artificial grammar under different (implicit and explicit) learning instructions. The analysis of receiver operating characteristics (ROC) during a final grammaticality judgment test revealed that explicit, but not implicit, learning led to rule knowledge. It also demonstrated that this knowledge base is built up gradually, while similarity knowledge governed the initial state of learning. Together, these results indicate that rule- and similarity-based mechanisms concur during AGL. Moreover, it could be speculated that two different rule processes operate in parallel: bottom-up learning via gradual rule extraction and top-down learning via rule testing. Crucially, the latter is facilitated by performance feedback that encourages explicit hypothesis testing.
A Local Learning Rule for Independent Component Analysis
Isomura, Takuya; Toyoizumi, Taro
2016-01-01
Humans can separately recognize independent sources when they sense their superposition. This decomposition is mathematically formulated as independent component analysis (ICA). While a few biologically plausible learning rules, so-called local learning rules, have been proposed to achieve ICA, their performance varies depending on the parameters characterizing the mixed signals. Here, we propose a new learning rule that is both easy to implement and reliable. Both mathematical and numerical analyses confirm that the proposed rule outperforms other local learning rules over a wide range of parameters. Notably, unlike other rules, the proposed rule can separate independent sources without any preprocessing, even if the number of sources is unknown. The successful performance of the proposed rule is then demonstrated using natural images and movies. We discuss the implications of this finding for our understanding of neuronal information processing and its promising applications to neuromorphic engineering. PMID:27323661
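The paper's specific rule is not reproduced here; as a point of reference, the sketch below runs a classic local, Hebbian-like ICA update on whitened data (Δw ∝ ⟨x·g(wᵀx)⟩ with g(u) = u³ and renormalization, in the spirit of Hyvärinen and Oja's nonlinear Hebbian rules), which recovers one super-Gaussian source from a 2×2 mixture. The mixing setup, mini-batch averaging and learning rate are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

# two independent super-Gaussian (Laplacian) sources, linearly mixed
T = 20000
S = rng.laplace(size=(2, T))
X = rng.standard_normal((2, 2)) @ S

# whitening, assumed as preprocessing for this family of rules
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = (E @ np.diag(d ** -0.5) @ E.T) @ X

# one-unit nonlinear Hebbian update: dw ~ <x * (w.x)^3>, then renormalize;
# on whitened super-Gaussian data this climbs kurtosis and settles on one independent direction
w = rng.standard_normal(2)
w /= np.linalg.norm(w)
eta, batch = 0.05, 100
for epoch in range(5):
    for start in range(0, T, batch):
        x = Z[:, start:start + batch]
        y = w @ x
        w += eta * (x * y ** 3).mean(axis=1)
        w /= np.linalg.norm(w)

y = w @ Z
print("|correlation| of recovered signal with each source:",
      [round(abs(np.corrcoef(y, S[i])[0, 1]), 3) for i in range(2)])
```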
Two symmetry-breaking mechanisms for the development of orientation selectivity in a neural system
NASA Astrophysics Data System (ADS)
Cho, Myoung Won; Chun, Min Young
2015-11-01
Orientation selectivity is a remarkable feature of the neurons located in the primary visual cortex. Provided that visual neurons acquire orientation selectivity through activity-dependent Hebbian learning, the development process can be understood as a kind of symmetry-breaking phenomenon from the viewpoint of physics. This paper examines the key mechanisms of the orientation selectivity development process. We find that at least two different mechanisms can lead to the development of orientation selectivity by breaking the radial symmetry in receptive fields. The first is a simultaneous symmetry-breaking mechanism based on the competition between neighboring neurons, and the second is a spontaneous one based on the nonlinearity of the interactions. Only the second mechanism leads to the formation of a columnar pattern whose characteristics are in accord with those observed in animal experiments.
Wains: a pattern-seeking artificial life species.
de Buitléir, Amy; Russell, Michael; Daly, Mark
2012-01-01
We describe the initial phase of a research project to develop an artificial life framework designed to extract knowledge from large data sets with minimal preparation or ramp-up time. In this phase, we evolved an artificial life population with a new brain architecture. The agents have sufficient intelligence to discover patterns in data and to make survival decisions based on those patterns. The species uses diploid reproduction, Hebbian learning, and Kohonen self-organizing maps, in combination with novel techniques such as using pattern-rich data as the environment and framing the data analysis as a survival problem for artificial life. The first generation of agents mastered the pattern discovery task well enough to thrive. Evolution further adapted the agents to their environment by making them a little more pessimistic, and also by making their brains more efficient.
Information flow in layered networks of non-monotonic units
NASA Astrophysics Data System (ADS)
Schittler Neves, Fabio; Martim Schubert, Benno; Erichsen, Rubem, Jr.
2015-07-01
Layered neural networks are feedforward structures that yield robust parallel and distributed pattern recognition. Even though much attention has been paid to pattern retrieval properties in such systems, many aspects of their dynamics are not yet well characterized or understood. In this work we study, at different temperatures, the memory activity and information flow through layered networks in which the elements are the simplest binary odd non-monotonic units. Our results show that, under a standard Hebbian learning approach, the network information content is always maximal at the monotonic limit, even though the maximum memory capacity can be found at non-monotonic values for small enough temperatures. Furthermore, we show that such systems exhibit rich macroscopic dynamics, including not only fixed-point solutions of the iterative map, but also cyclic and chaotic attractors that also carry information.
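A zero-temperature sketch of this kind of layered Hebbian network, taking the non-monotonic unit to be the usual sign-reversal-beyond-a-threshold transfer function; the finite-temperature mean-field analysis and information measures of the paper are not reproduced, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

N, L, P = 500, 6, 25          # units per layer, layers, stored patterns (load P/N = 0.05)
theta = 2.0                   # non-monotonicity threshold (large theta ~ the monotonic limit)

# one pattern per layer per memory; feedforward Hebbian couplings between consecutive layers
xi = rng.choice([-1, 1], size=(L, P, N))
J = [np.einsum('pi,pj->ij', xi[l + 1], xi[l]) / N for l in range(L - 1)]

def g(h, theta):
    """Simplest binary odd non-monotonic unit: the sign reverses beyond |h| = theta."""
    return np.where(np.abs(h) <= theta, np.sign(h), -np.sign(h))

# corrupt the first-layer pattern of memory 0 and propagate it through the layers
s = xi[0, 0].astype(float)
flip = rng.choice(N, int(0.15 * N), replace=False)
s[flip] *= -1

print("layer 0 overlap:", round(s @ xi[0, 0] / N, 3))
for l in range(L - 1):
    s = g(J[l] @ s, theta)
    print(f"layer {l + 1} overlap:", round(s @ xi[l + 1, 0] / N, 3))
```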
Grounded understanding of abstract concepts: The case of STEM learning.
Hayes, Justin C; Kraemer, David J M
2017-01-01
Characterizing the neural implementation of abstract conceptual representations has long been a contentious topic in cognitive science. At the heart of the debate is whether the "sensorimotor" machinery of the brain plays a central role in representing concepts, or whether the involvement of these perceptual and motor regions is merely peripheral or epiphenomenal. The domain of science, technology, engineering, and mathematics (STEM) learning provides an important proving ground for sensorimotor (or grounded) theories of cognition, as concepts in science and engineering courses are often taught through laboratory-based and other hands-on methodologies. In this review of the literature, we examine evidence suggesting that sensorimotor processes strengthen learning associated with the abstract concepts central to STEM pedagogy. After considering how contemporary theories have defined abstraction in the context of semantic knowledge, we propose our own explanation for how body-centered information, as computed in sensorimotor brain regions and visuomotor association cortex, can form a useful foundation upon which to build an understanding of abstract scientific concepts, such as mechanical force. Drawing from theories in cognitive neuroscience, we then explore models elucidating the neural mechanisms involved in grounding intangible concepts, including Hebbian learning, predictive coding, and neuronal recycling. Empirical data on STEM learning through hands-on instruction are considered in light of these neural models. We conclude the review by proposing three distinct ways in which the field of cognitive neuroscience can contribute to STEM learning by bolstering our understanding of how the brain instantiates abstract concepts in an embodied fashion.
LTP saturation and spatial learning disruption: effects of task variables and saturation levels.
Barnes, C A; Jung, M W; McNaughton, B L; Korol, D L; Andreasson, K; Worley, P F
1994-10-01
The prediction that "saturation" of LTP/LTE at hippocampal synapses should impair spatial learning was reinvestigated in the light of a more specific consideration of the theory of Hebbian associative networks, which predicts a nonlinear relationship between LTP "saturation" and memory impairment. This nonlinearity may explain the variable results of studies that have addressed the effects of LTP "saturation" on behavior. The extent of LTP "saturation" in fascia dentata produced by the standard chronic LTP stimulation protocol was assessed both electrophysiologically and through the use of an anatomical marker (activation of the immediate-early gene zif268). Both methods point to the conclusion that the standard protocols used to induce LTP do not "saturate" the process at any dorsoventral level, and leave the ventral half of the hippocampus virtually unaffected. LTP-inducing, bilateral perforant path stimulation led to a significant deficit in the reversal of a well-learned spatial response on the Barnes circular platform task as reported previously, yet in the same animals produced no deficit in learning the Morris water task (for which previous results have been conflicting). The behavioral deficit was not a consequence of any after-discharge in the hippocampal EEG. In contrast, administration of maximal electroconvulsive shock led to robust zif268 activation throughout the hippocampus, enhancement of synaptic responses, occlusion of LTP produced by discrete high-frequency stimulation, and spatial learning deficits in the water task. These data provide further support for the involvement of LTP-like synaptic enhancement in spatial learning.
A BCM theory of meta-plasticity for online self-reorganizing fuzzy-associative learning.
Tan, Javan; Quek, Chai
2010-06-01
Self-organizing neurofuzzy approaches have matured in their online learning of fuzzy-associative structures under time-invariant conditions. To maximize their operative value for online reasoning, these self-sustaining mechanisms must also be able to reorganize fuzzy-associative knowledge in real-time dynamic environments. Hence, it is critical to recognize that they require self-reorganizational skills to rebuild fluid associative structures when their existing organizations fail to respond well to changing circumstances. In this light, while Hebbian theory (Hebb, 1949) is the basic computational framework for associative learning, it is less attractive for time-variant online learning because it suffers from stability limitations that impede unlearning. Instead, this paper adopts the Bienenstock-Cooper-Munro (BCM) theory of neurological learning via meta-plasticity principles (Bienenstock et al., 1982), which provides for both online associative and dissociative learning. For almost three decades, BCM theory has been shown to effectively brace physiological evidence of synaptic potentiation (association) and depression (dissociation) into a sound mathematical framework for computational learning. This paper proposes an interpretation of the BCM theory of meta-plasticity for an online self-reorganizing fuzzy-associative learning system to realize online-reasoning capabilities. Experimental findings are twofold: 1) the analysis using the S&P 500 stock index illustrated that the self-reorganizing approach could follow the trajectory shifts in the time-variant S&P 500 index over about 60 years, and 2) the benchmark profiles showed that the fuzzy-associative approach yielded results comparable with those of other fuzzy-precision models with similar online objectives.
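The BCM rule itself is compact; the sketch below applies it to a single linear unit driven by two alternating input patterns, with the sliding modification threshold tracking the recent mean of the squared response (the meta-plasticity ingredient). The patterns, rates and time constants are illustrative, and the fuzzy-associative system built on top of BCM in the paper is not attempted.

```python
import numpy as np

rng = np.random.default_rng(8)

w = np.array([0.5, 0.5])      # synaptic weights of a single linear unit
theta = 0.1                   # sliding modification threshold (the meta-plasticity variable)
eta, tau_theta = 0.005, 50.0

patterns = np.array([[1.0, 0.1],
                     [0.1, 1.0]])

for step in range(40000):
    x = patterns[rng.integers(2)]             # the two patterns appear with equal probability
    y = w @ x                                 # postsynaptic activity
    w += eta * x * y * (y - theta)            # BCM: potentiation above theta, depression below
    theta += (y ** 2 - theta) / tau_theta     # threshold slides with the recent average of y^2

resp = patterns @ w
print("responses to the two patterns:", np.round(resp, 2), " threshold:", round(theta, 2))
# the unit becomes selective: one response settles near 2 (= 1 / pattern probability) while
# the other is driven toward 0; which pattern wins depends on the random input sequence
```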
NASA Astrophysics Data System (ADS)
van Hemmen, J. Leo
Tinnitus, implying the perception of sound without the presence of any acoustical stimulus, is a chronic and serious problem for about 2% of the human population. In many cases, tinnitus is a pitch-like sensation associated with a hearing loss that confines the tinnitus frequency to an interval of the tonotopic axis. Even in patients with a normal audiogram the presence of tinnitus may be associated with damage of hair-cell function in this interval. It has been suggested that homeostatic regulation and, hence, increase of activity leads to the emergence of tinnitus. For patients with hearing loss, we present spike-timing-dependent Hebbian plasticity (STDP) in conjunction with homeostasis as a mechanism for "learning" tinnitus in a realistic neuronal network with tonotopically arranged synaptic excitation and inhibition. In so doing we use both dynamical scaling of the synaptic strengths and altering the resting potential of the cells. The corresponding simulations are robust to parameter changes. Understanding the mechanisms of tinnitus induction, such as here, may help improve therapy. Work done in collaboration with Julie Goulet and Michael Schneider. JLvH has been supported partially by BCCN - Munich.
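A toy caricature of the two ingredients named above, pair-based STDP plus a slow homeostatic scaling of the synapse toward a target firing rate, applied to a single synapse with a crude rate model of the postsynaptic cell: halfway through, the presynaptic drive is reduced to mimic hearing loss, and homeostasis then raises the synaptic gain. The rate model, parameters and the specific homeostatic rule are assumptions and do not reproduce the tonotopic network of the talk.

```python
import numpy as np

rng = np.random.default_rng(9)

dt = 1e-3                                # 1 ms time step
T = int(200 / dt)                        # 200 s of simulated time
tau_stdp = 0.020                         # 20 ms STDP window
A_plus, A_minus = 0.005, 0.00525         # slightly depression-biased pair-based STDP
tau_h, r_target = 20.0, 10.0             # homeostatic time constant (s), target rate (Hz)

w, x_pre, x_post, r_est = 1.0, 0.0, 0.0, r_target
r_pre_healthy, r_pre_deaf = 40.0, 5.0    # presynaptic drive before/after "hearing loss"

for step in range(T):
    r_pre = r_pre_healthy if step < T // 2 else r_pre_deaf
    pre = rng.random() < r_pre * dt
    r_post = 2.0 + 0.2 * w * r_pre       # toy rate model: baseline + weighted drive
    post = rng.random() < r_post * dt

    # exponential spike traces and pair-based STDP
    x_pre += -dt / tau_stdp * x_pre + pre
    x_post += -dt / tau_stdp * x_post + post
    if post:
        w += A_plus * x_pre              # pre-before-post pairs potentiate
    if pre:
        w -= A_minus * x_post            # post-before-pre pairs depress

    # homeostasis: slowly rescale the synapse so the output rate approaches the target
    r_est += dt / tau_h * ((1.0 if post else 0.0) / dt - r_est)
    w = max(w * (1.0 + dt / tau_h * (r_target - r_est) / r_target), 0.0)

    if step % int(50 / dt) == 0:
        print(f"t={step * dt:6.0f}s  w={w:5.2f}  post rate ~ {r_post:5.1f} Hz")

print(f"final w={w:.2f}, post rate ~ {2.0 + 0.2 * w * r_pre_deaf:.1f} Hz")
```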
A Mismatch-Based Model for Memory Reconsolidation and Extinction in Attractor Networks
Amaral, Olavo B.
2011-01-01
The processes of memory reconsolidation and extinction have received increasing attention in recent experimental research, as their potential clinical applications begin to be uncovered. A number of studies suggest that amnestic drugs injected after reexposure to a learning context can disrupt either of the two processes, depending on the behavioral protocol employed. Hypothesizing that reconsolidation represents updating of a memory trace in the hippocampus, while extinction represents formation of a new trace, we have built a neural network model in which either simple retrieval, reconsolidation or extinction of a stored attractor can occur upon contextual reexposure, depending on the similarity between the representations of the original learning and reexposure sessions. This is achieved by assuming that independent mechanisms mediate Hebbian-like synaptic strengthening and mismatch-driven labilization of synaptic changes, with protein synthesis inhibition preferentially affecting the former. Our framework provides a unified mechanistic explanation for experimental data showing (a) the effect of reexposure duration on the occurrence of reconsolidation or extinction and (b) the requirement of memory updating during reexposure to drive reconsolidation. PMID:21826231
Erfanian Saeedi, Nafise; Blamey, Peter J; Burkitt, Anthony N; Grayden, David B
2016-04-01
Pitch perception is important for understanding speech prosody, music perception, recognizing tones in tonal languages, and perceiving speech in noisy environments. The two principal pitch perception theories consider the place of maximum neural excitation along the auditory nerve and the temporal pattern of the auditory neurons' action potentials (spikes) as pitch cues. This paper describes a biophysical mechanism by which fine-structure temporal information can be extracted from the spikes generated at the auditory periphery. Deriving meaningful pitch-related information from spike times requires neural structures specialized in capturing synchronous or correlated activity from amongst neural events. The emergence of such pitch-processing neural mechanisms is described through a computational model of auditory processing. Simulation results show that a correlation-based, unsupervised, spike-based form of Hebbian learning can explain the development of neural structures required for recognizing the pitch of simple and complex tones, with or without the fundamental frequency. The temporal code is robust to variations in the spectral shape of the signal and thus can explain the phenomenon of pitch constancy.
Striatal degeneration impairs language learning: evidence from Huntington's disease.
De Diego-Balaguer, R; Couette, M; Dolbeau, G; Dürr, A; Youssov, K; Bachoud-Lévi, A-C
2008-11-01
Although the role of the striatum in language processing is still largely unclear, a number of recent proposals have outlined its specific contribution. Different studies report evidence converging on a picture in which the striatum may be involved in those aspects of rule application that require non-automatized behaviour. This is the main characteristic of the earliest phases of language acquisition, which require the online detection of distant dependencies and the creation of syntactic categories by means of rule learning. Learning of sequences and categorization processes in non-language domains is known to require striatal recruitment. Thus, we hypothesized that the striatum should play a prominent role in the extraction of rules when learning a language. We studied 13 pre-symptomatic gene carriers (pre-HD) and 22 early-stage Huntington's disease patients, both groups characterized by progressive degeneration of the striatum, as well as 21 late-stage Huntington's disease patients (18 stage II, two stage III and one stage IV), in whom cortical degeneration accompanies striatal degeneration. When presented with a simplified artificial language in which words and rules could be extracted, early-stage Huntington's disease patients (stage I) were impaired in the learning test, demonstrating a greater impairment in rule learning than in word learning compared to the 20 age- and education-matched controls. Huntington's disease patients at later stages were impaired in both word and rule learning. While spared in their overall performance, gene carriers who had learned a set of abstract artificial-language rules were then impaired in transferring those rules to similar artificial-language structures. Correlation analyses among several neuropsychological tests assessing executive function showed that rule learning correlated with tests requiring working memory and attentional control, while word learning correlated with a test involving episodic memory. These learning impairments correlated significantly with the bicaudate ratio. The overall results support striatal involvement in rule extraction from speech and suggest that language acquisition requires several aspects of memory and executive function for word and rule learning.
Moral empiricism and the bias for act-based rules.
Ayars, Alisabeth; Nichols, Shaun
2017-10-01
Previous studies on rule learning show a bias in favor of act-based rules, which prohibit intentionally producing an outcome but not merely allowing the outcome. Nichols, Kumar, Lopez, Ayars, and Chan (2016) found that exposure to a single sample violation in which an agent intentionally causes the outcome was sufficient for participants to infer that the rule was act-based. One explanation is that people have an innate bias to think rules are act-based. We suggest an alternative empiricist account: since most rules that people learn are act-based, people form an overhypothesis (Goodman, 1955) that rules are typically act-based. We report three studies that indicate that people can use information about violations to form overhypotheses about rules. In study 1, participants learned either three "consequence-based" rules that prohibited allowing an outcome or three "act-based" rules that prohibited producing the outcome; in a subsequent learning task, we found that participants who had learned three consequence-based rules were more likely to think that the new rule prohibited allowing an outcome. In study 2, we presented participants with either 1 consequence-based rule or 3 consequence-based rules, and we found that those exposed to 3 such rules were more likely to think that a new rule was also consequence-based. Thus, in both studies, it seems that learning 3 consequence-based rules generates an overhypothesis to expect new rules to be consequence-based. In a final study, we used a more subtle manipulation. We exposed participants to examples of act-based or accident-based (strict liability) laws and then had them learn a novel rule. We found that participants who were exposed to the accident-based laws were more likely to think a new rule was accident-based. The fact that participants' bias for act-based rules can be shaped by evidence from other rules supports the idea that the bias for act-based rules might be acquired as an overhypothesis from the preponderance of act-based rules. Copyright © 2017 Elsevier B.V. All rights reserved.
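The overhypothesis account can be given a simple formal reading. Below is a hedged sketch, not the authors' analysis, using a Beta-Binomial model in which each previously learned rule counts as evidence about whether rules in general are act-based; the prior parameters are arbitrary.

```python
def prob_next_rule_act_based(n_act, n_consequence, a=1.0, b=1.0):
    """Beta(a, b) prior over 'proportion of rules that are act-based',
    updated with counts of previously learned rules; returns the predictive
    probability that the next rule encountered is act-based."""
    return (a + n_act) / (a + b + n_act + n_consequence)

# Learner exposed to three consequence-based rules vs three act-based rules:
print("after 3 consequence-based rules:", round(prob_next_rule_act_based(0, 3), 2))
print("after 3 act-based rules:        ", round(prob_next_rule_act_based(3, 0), 2))
```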
Guo, Lilin; Wang, Zhenzhong; Cabrerizo, Mercedes; Adjouadi, Malek
2017-05-01
This study introduces a novel learning algorithm for spiking neurons, called CCDS, which is able to learn and reproduce arbitrary spike patterns in a supervised fashion, allowing the processing of spatiotemporal information encoded in the precise timing of spikes. Unlike the Remote Supervised Method (ReSuMe), synapse delays and axonal delays in CCDS are adjustable variables that are modulated together with weights during learning. The CCDS rule is both biologically plausible and computationally efficient. The properties of this learning rule are investigated extensively through experimental evaluations in terms of reliability, adaptive learning performance, generality to different neuron models, learning in the presence of noise, effects of its learning parameters, and classification performance. Results presented show that the CCDS learning method achieves learning accuracy and learning speed comparable with ReSuMe, but improves classification accuracy when compared to both the Spike Pattern Association Neuron (SPAN) learning rule and the Tempotron learning rule. The merit of the CCDS rule is further validated on a practical example involving the automated detection of interictal spikes in EEG records of patients with epilepsy. Results again show that with proper encoding, the CCDS rule achieves good recognition performance.
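To make the idea of co-adapting delays and weights concrete, here is a deliberately toy sketch, not the CCDS rule itself: a single synapse's weight and delay are nudged until an output spike lands on a target time. The firing model, learning rates, and spike times are all invented.

```python
import numpy as np

input_spike = 10.0        # ms, presynaptic spike time (assumed)
target_out = 25.0         # ms, desired output spike time
w, delay = 0.4, 5.0       # initial weight and conduction delay (ms)
eta_w, eta_d = 0.02, 0.2  # illustrative learning rates

def output_time(w, delay):
    # Toy firing model: the stronger the weight, the sooner the neuron fires
    # after the delayed input arrives.
    return input_spike + delay + 10.0 * (1.0 - w)

for _ in range(200):
    err = target_out - output_time(w, delay)   # >0: output spike is too early
    w -= eta_w * err * 0.1                     # weaker weight -> later firing
    delay += eta_d * err * 0.1                 # longer delay  -> later firing
    w = float(np.clip(w, 0.0, 1.0))

print(f"w={w:.3f}, delay={delay:.2f} ms, output={output_time(w, delay):.2f} ms")
```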
Changes in muscle coordination with training.
Carson, Richard G
2006-11-01
Three core concepts, activity-dependent coupling, the composition of muscle synergies, and Hebbian adaptation, are discussed with a view to illustrating the nature of the constraints imposed by the organization of the central nervous system on the changes in muscle coordination induced by training. It is argued that training-invoked variations in the efficiency with which motor actions can be generated influence the stability of coordination by altering the potential for activity-dependent coupling between the cortical representations of the focal muscles recruited in a movement task and brain circuits that do not contribute directly to the required behavior. The behaviors that can be generated during training are also constrained by the composition of existing intrinsic muscle synergies. In circumstances in which attempts to produce forceful or high-velocity movements would otherwise result in the generation of inappropriate actions, training designed to promote the development of control strategies specific to the desired movement outcome may be necessary to compensate for protogenic muscle recruitment patterns. Hebbian adaptation refers to processes whereby, for neurons that release action potentials at the same time, there is an increased probability that synaptic connections will be formed. Neural connectivity induced by the repetition of specific muscle recruitment patterns during training may, however, inhibit the subsequent acquisition of new skills. Consideration is given to the possibility that, in the presence of the appropriate sensory guidance, it is possible to gate Hebbian plasticity and to promote greater subsequent flexibility in the recruitment of the trained muscles in other task contexts.
Takeuchi, Naoyuki; Izumi, Shin-Ichi
2015-01-01
Motor recovery after stroke involves developing new neural connections, acquiring new functions, and compensating for impairments. These processes are related to neural plasticity. Various novel stroke rehabilitation techniques based on basic science and clinical studies of neural plasticity have been developed to aid motor recovery. Current research aims to determine whether using combinations of these techniques can synergistically improve motor recovery. When different stroke neurorehabilitation therapies are combined, the timing of each therapeutic program must be considered to enable optimal neural plasticity. Synchronizing stroke rehabilitation with voluntary neural and/or muscle activity can lead to motor recovery by targeting Hebbian plasticity. This reinforces the neural connections between paretic muscles and the residual motor area. Homeostatic metaplasticity, which stabilizes the activity of neurons and neural circuits, can either augment or reduce the synergic effect depending on the timing of combination therapy and types of neurorehabilitation that are used. Moreover, the possibility that the threshold and degree of induced plasticity can be altered after stroke should be noted. This review focuses on the mechanisms underlying combinations of neurorehabilitation approaches and their future clinical applications. We suggest therapeutic approaches for cortical reorganization and maximal functional gain in patients with stroke, based on the processes of Hebbian plasticity and homeostatic metaplasticity. Few of the possible combinations of stroke neurorehabilitation have been tested experimentally; therefore, further studies are required to determine the appropriate combination for motor recovery. PMID:26157374
Learning Problem-Solving Rules as Search Through a Hypothesis Space.
Lee, Hee Seung; Betts, Shawn; Anderson, John R
2016-07-01
Learning to solve a class of problems can be characterized as a search through a space of hypotheses about the rules for solving these problems. A series of four experiments studied how different learning conditions affected the search among hypotheses about the solution rule for a simple computational problem. Experiment 1 showed that a problem property such as computational difficulty of the rules biased the search process and so affected learning. Experiment 2 examined the impact of examples as instructional tools and found that their effectiveness was determined by whether they uniquely pointed to the correct rule. Experiment 3 compared verbal directions with examples and found that both could guide search. The final experiment tried to improve learning by using more explicit verbal directions or by adding scaffolding to the example. While both manipulations improved learning, learning still took the form of a search through a hypothesis space of possible rules. We describe a model that embodies two assumptions: (1) the instruction can bias the rules participants hypothesize rather than directly be encoded into a rule; (2) participants do not have memory for past wrong hypotheses and are likely to retry them. These assumptions are realized in a Markov model that fits all the data by estimating two sets of probabilities. First, the learning condition induced one set of Start probabilities of trying various rules. Second, should this first hypothesis prove wrong, the learning condition induced a second set of Choice probabilities of considering various rules. These findings broaden our understanding of effective instruction and provide implications for instructional design. Copyright © 2015 Cognitive Science Society, Inc.
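The Markov search process described above can be illustrated in a few lines. The sketch below assumes three hypothetical rules and made-up Start and Choice probabilities; its only purpose is to show how a memoryless resampling process yields a distribution of trials-to-solution.

```python
import numpy as np

rng = np.random.default_rng(1)
rules = ["correct_rule", "easy_wrong_rule", "other_wrong_rule"]
start_p = [0.2, 0.6, 0.2]    # Start probabilities: biased toward the easy rule
choice_p = [0.4, 0.3, 0.3]   # Choice probabilities used after an error

def trials_to_learn():
    hyp = rng.choice(rules, p=start_p)
    n = 1
    while hyp != "correct_rule":                 # wrong hypotheses keep failing
        hyp = rng.choice(rules, p=choice_p)      # no memory: old rules may be retried
        n += 1
    return n

runs = [trials_to_learn() for _ in range(10_000)]
print("mean trials to discover the rule:", round(float(np.mean(runs)), 2))
```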
Role of Prefrontal Cortex in Learning and Generalizing Hierarchical Rules in 8-Month-Old Infants.
Werchan, Denise M; Collins, Anne G E; Frank, Michael J; Amso, Dima
2016-10-05
Recent research indicates that adults and infants spontaneously create and generalize hierarchical rule sets during incidental learning. Computational models and empirical data suggest that, in adults, this process is supported by circuits linking prefrontal cortex (PFC) with striatum and their modulation by dopamine, but the neural circuits supporting this form of learning in infants are largely unknown. We used near-infrared spectroscopy to record PFC activity in 8-month-old human infants during a simple audiovisual hierarchical-rule-learning task. Behavioral results confirmed that infants adopted hierarchical rule sets to learn and generalize spoken object-label mappings across different speaker contexts. Infants had increased activity over right dorsal lateral PFC when rule sets switched from one trial to the next, a neural marker related to updating rule sets into working memory in the adult literature. Infants' eye blink rate, a possible physiological correlate of striatal dopamine activity, also increased when rule sets switched from one trial to the next. Moreover, the increase in right dorsolateral PFC activity in conjunction with eye blink rate also predicted infants' generalization ability, providing exploratory evidence for frontostriatal involvement during learning. These findings provide evidence that PFC is involved in rudimentary hierarchical rule learning in 8-month-old infants, an ability that was previously thought to emerge later in life in concert with PFC maturation. Hierarchical rule learning is a powerful learning mechanism that allows rules to be selected in a context-appropriate fashion and transferred or reused in novel contexts. Data from computational models and adults suggests that this learning mechanism is supported by dopamine-innervated interactions between prefrontal cortex (PFC) and striatum. Here, we provide evidence that PFC also supports hierarchical rule learning during infancy, challenging the current dogma that PFC is an underdeveloped brain system until adolescence. These results add new insights into the neurobiological mechanisms available to support learning and generalization in very early postnatal life, providing evidence that PFC and the frontostriatal circuitry are involved in organizing learning and behavior earlier in life than previously known. Copyright © 2016 the authors 0270-6474/16/3610314-09$15.00/0.
Oosterman, Joukje M; Heringa, Sophie M; Kessels, Roy P C; Biessels, Geert Jan; Koek, Huiberdina L; Maes, Joseph H R; van den Berg, Esther
2017-04-01
Rule induction tests such as the Wisconsin Card Sorting Test require executive control processes, but also the learning and memorization of simple stimulus-response rules. In this study, we examined the contribution of diminished learning and memorization of simple rules to complex rule induction test performance in patients with amnestic mild cognitive impairment (aMCI) or Alzheimer's dementia (AD). Twenty-six aMCI patients, 39 AD patients, and 32 control participants were included. A task was used in which the memory load and the complexity of the rules were independently manipulated. This task consisted of three conditions: a simple two-rule learning condition (Condition 1), a simple four-rule learning condition (inducing an increase in memory load, Condition 2), and a complex biconditional four-rule learning condition-inducing an increase in complexity and, hence, executive control load (Condition 3). Performance of AD patients declined disproportionately when the number of simple rules that had to be memorized increased (from Condition 1 to 2). An additional increment in complexity (from Condition 2 to 3) did not, however, disproportionately affect performance of the patients. Performance of the aMCI patients did not differ from that of the control participants. In the patient group, correlation analysis showed that memory performance correlated with Condition 1 performance, whereas executive task performance correlated with Condition 2 performance. These results indicate that the reduced learning and memorization of underlying task rules explains a significant part of the diminished complex rule induction performance commonly reported in AD, although results from the correlation analysis suggest involvement of executive control functions as well. Taken together, these findings suggest that care is needed when interpreting rule induction task performance in terms of executive function deficits in these patients.
On the fusion of tuning parameters of fuzzy rules and neural network
NASA Astrophysics Data System (ADS)
Mamuda, Mamman; Sathasivam, Saratha
2017-08-01
Learning a fuzzy rule-based system with a neural network can lead to a precise and valuable understanding of several problems. Fuzzy logic offers a simple way to reach a definite conclusion from vague, ambiguous, imprecise, noisy or missing input information. Conventional learning algorithms for tuning the parameters of fuzzy rules from training input-output data often end in a weak firing state, which weakens the fuzzy rules and makes them unreliable for a multiple-input fuzzy system. In this paper, we introduce a new learning algorithm, based on the gradient descent method, for tuning the parameters of the fuzzy rules alongside a radial basis function neural network (RBFNN) on training input-output data. The new learning algorithm addresses the problem of weak firing found with the conventional method. We illustrate the efficiency of the new learning algorithm by means of numerical examples; MATLAB R2014(a) was used to run the simulations. The results show that the new learning method has the advantage of training the fuzzy rules without tampering with the fuzzy rule table, which allows a membership function of a rule to be used more than once in the fuzzy rule base.
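As a generic illustration of gradient-descent tuning of rule parameters with an RBF-style network, not the specific algorithm proposed in the paper, the sketch below treats each Gaussian membership function as a rule whose center, width, and consequent weight are updated from input-output data.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, 200)
y = np.sin(X)                              # target input-output mapping

n_rules = 6
centers = np.linspace(-2, 2, n_rules)      # rule (membership function) centers
widths = np.full(n_rules, 0.8)
weights = np.zeros(n_rules)                # rule consequents
lr = 0.05

def firing(x):
    """Gaussian firing strength of each rule for input x."""
    return np.exp(-((x - centers) ** 2) / (2 * widths ** 2))

for _ in range(300):
    for x, t in zip(X, y):
        phi = firing(x)
        err = t - phi @ weights
        d_c = err * weights * phi * (x - centers) / widths ** 2
        d_s = err * weights * phi * (x - centers) ** 2 / widths ** 3
        weights += lr * err * phi
        centers += lr * d_c
        widths = np.clip(widths + lr * d_s, 0.2, None)   # keep rules from collapsing

test = np.array([-1.5, 0.0, 1.5])
print("model: ", np.round([firing(v) @ weights for v in test], 3))
print("target:", np.round(np.sin(test), 3))
```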
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kazantsev, Victor; Pimashkin, Alexey
We propose a two-layer architecture of an associative memory oscillatory network with directional interlayer connectivity. The network is capable of storing information in the form of phase-locked (in-phase and antiphase) oscillatory patterns. The first (input) layer takes an input pattern to be recognized, and its units are unidirectionally connected with all units of the second (control) layer. The connection strengths are weighted using the Hebbian rule. The output (retrieved) patterns appear as forced phase-locked states of the control layer. Conditions for pattern retrieval in response to an incoming stimulus are found and expressed analytically. It is shown that the system is capable of recovering patterns with a certain level of distortion or noise in their profiles. The architecture is implemented with the Kuramoto phase model and with synaptically coupled spiking neural oscillators. It is found that the spiking model is capable of retrieving patterns using the spiking phase, which translates memorized patterns into spiking phase shifts at different time scales.
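A compact sketch of the Kuramoto-with-Hebbian-weights ingredient is given below. It collapses the paper's two-layer input/control architecture into a single recurrent layer and stores one binary pattern as in-phase/antiphase relations; network size, coupling strength, and noise levels are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 32
pattern = rng.choice([-1, 1], size=N)        # stored binary pattern
W = np.outer(pattern, pattern) / N           # Hebbian connection strengths
np.fill_diagonal(W, 0.0)

# Distorted cue: flip 20% of the entries, encode +1/-1 as phase 0 / pi.
cue = pattern.copy()
cue[rng.choice(N, size=N // 5, replace=False)] *= -1
theta = np.where(cue == 1, 0.0, np.pi) + rng.normal(0, 0.1, N)

dt, K = 0.05, 4.0
for _ in range(400):                         # Kuramoto phase dynamics
    coupling = (W * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta = theta + dt * K * coupling

retrieved = np.where(np.cos(theta - theta[0]) > 0, 1, -1) * pattern[0]
print("overlap with stored pattern:", float(np.mean(retrieved == pattern)))
```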
NASA Astrophysics Data System (ADS)
Huang, Yin; Chen, Jianhua; Xiong, Shaojun
2009-07-01
Mobile-Learning (M-learning) gives many learners the advantages of both traditional learning and E-learning. Currently, Web-based Mobile-Learning Systems have created many new ways of learning and defined new relationships between educators and learners. Association rule mining is one of the most important fields in data mining and knowledge discovery in databases. Rule explosion is a serious problem that causes great concern, as conventional mining algorithms often produce too many rules for decision makers to digest. Since a Web-based Mobile-Learning System collects vast amounts of student profile data, data mining and knowledge discovery techniques can be applied to find interesting relationships between attributes of learners, assessments, the solution strategies adopted by learners, and so on. Therefore, this paper focuses on a new data-mining algorithm that combines the advantages of the genetic algorithm and the simulated annealing algorithm, called ARGSA (Association Rules based on an improved Genetic Simulated Annealing Algorithm), to mine association rules. The paper first takes advantage of a Parallel Genetic Algorithm and a Simulated Annealing Algorithm designed specifically for discovering association rules. Moreover, analysis and experiments show that the proposed method is superior to the Apriori algorithm in this Mobile-Learning system.
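The following sketch is not the ARGSA algorithm; it only illustrates the underlying idea of searching for an association rule with simulated annealing, scoring a candidate "antecedent -> consequent" rule by support times confidence over a toy set of learner-activity transactions. Item names and the cooling schedule are invented.

```python
import math, random

random.seed(0)
items = ["quiz", "video", "forum", "pdf", "chat"]            # toy learner activities
transactions = [set(random.sample(items, random.randint(2, 4))) for _ in range(200)]

def score(antecedent, consequent):
    """Support x confidence of the rule 'antecedent -> consequent'."""
    cover = [t for t in transactions if antecedent <= t]
    if not cover:
        return 0.0
    support = len(cover) / len(transactions)
    confidence = sum(consequent in t for t in cover) / len(cover)
    return support * confidence

def mutate(rule):
    antecedent, consequent = rule
    new_ant = set(antecedent)
    if (random.random() < 0.5 and len(new_ant) > 1) or len(new_ant) >= len(items) - 1:
        new_ant.discard(random.choice(sorted(new_ant)))
    else:
        new_ant.add(random.choice(items))
    new_cons = random.choice([i for i in items if i not in new_ant])
    return frozenset(new_ant), new_cons

rule, temp = (frozenset({"quiz"}), "video"), 1.0
for _ in range(2000):
    candidate = mutate(rule)
    delta = score(*candidate) - score(*rule)
    if delta > 0 or random.random() < math.exp(delta / temp):   # annealed acceptance
        rule = candidate
    temp = max(0.01, temp * 0.995)                              # cooling schedule

print("best rule found:", set(rule[0]), "->", rule[1], "| score:", round(score(*rule), 3))
```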
The Demise of the Synapse As the Locus of Memory: A Looming Paradigm Shift?
Trettenbrein, Patrick C
2016-01-01
Synaptic plasticity is widely considered to be the neurobiological basis of learning and memory by neuroscientists and researchers in adjacent fields, though diverging opinions are increasingly being recognized. From the perspective of what we might call "classical cognitive science" it has always been understood that the mind/brain is to be considered a computational-representational system. Proponents of the information-processing approach to cognitive science have long been critical of connectionist or network approaches to (neuro-)cognitive architecture, pointing to the shortcomings of the associative psychology that underlies Hebbian learning as well as to the fact that synapses are practically unfit to implement symbols. Recent work on memory has been adding fuel to the fire and current findings in neuroscience now provide first tentative neurobiological evidence for the cognitive scientists' doubts about the synapse as the (sole) locus of memory in the brain. This paper briefly considers the history and appeal of synaptic plasticity as a memory mechanism, followed by a summary of the cognitive scientists' objections regarding these assertions. Next, a variety of tentative neuroscientific evidence that appears to substantiate questioning the idea of the synapse as the locus of memory is presented. On this basis, a novel way of thinking about the role of synaptic plasticity in learning and memory is proposed.
Piron, Camille; Kase, Daisuke; Topalidou, Meropi; Goillandeau, Michel; Orignac, Hugues; N'Guyen, Tho-Haï; Rougier, Nicolas; Boraud, Thomas
2016-08-01
There is an apparent contradiction between experimental data showing that the basal ganglia are involved in goal-oriented and routine behaviors and clinical observations. Lesion or disruption by deep brain stimulation of the globus pallidus interna has been used for various therapeutic purposes ranging from the improvement of dystonia to the treatment of Tourette's syndrome. None of these approaches has reported any severe impairment in goal-oriented or automatic movement. To solve this conundrum, we trained 2 monkeys to perform a variant of a 2-armed bandit task (with different reward contingencies). In the latter we alternated blocks of trials with choices between familiar rewarded targets that elicit routine behavior and blocks with novel pairs of targets that require an intentional learning process. Bilateral inactivation of the globus pallidus interna, by injection of muscimol, prevents animals from learning new contingencies while performance remains intact, although slower, for the familiar stimuli. We replicate these data in silico by adding lateral competition and Hebbian learning in the cortical layer of the theoretical model of the cortex-basal ganglia loop that provided the framework of our experimental approach. The basal ganglia play a critical role in the deliberative process that underlies learning but are not necessary for the expression of routine movements. Our approach predicts that after pallidotomy or during stimulation, patients should have difficulty with complex decision-making processes or learning new goal-oriented behaviors. © 2016 International Parkinson and Movement Disorder Society.
A self-learning rule base for command following in dynamical systems
NASA Technical Reports Server (NTRS)
Tsai, Wei K.; Lee, Hon-Mun; Parlos, Alexander
1992-01-01
In this paper, a self-learning Rule Base for command following in dynamical systems is presented. The learning is accomplished through reinforcement learning using an associative memory called SAM. The main advantage of SAM is that it is a function approximator with explicit storage of training samples. A learning algorithm patterned after dynamic programming is proposed. Two artificially created, unstable dynamical systems are used for testing, and the Rule Base was used to generate a feedback control to improve the command following ability of the otherwise uncontrolled systems. The numerical results are very encouraging. The controlled systems exhibit a more stable behavior and a better capability to follow reference commands. The rules resulting from the reinforcement learning are explicitly stored and they can be modified or augmented by human experts. Due to the overlapping storage scheme of SAM, the stored rules are similar to fuzzy rules.
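A rough sketch of the self-learning rule-base idea follows, with a plain lookup table standing in for the SAM associative memory and one-step Q-learning standing in for the dynamic-programming-style algorithm; the unstable toy plant and all constants are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
actions = np.linspace(-1.0, 1.0, 9)            # candidate control actions
bins = np.linspace(-2.0, 2.0, 21)              # discretized tracking error
Q = np.zeros((bins.size + 1, actions.size))    # explicitly stored "rule base"
alpha, gamma, eps = 0.2, 0.95, 0.1

def bucket(err):
    return int(np.digitize(err, bins))

for _ in range(2000):                          # reinforcement-learning episodes
    x, ref = rng.uniform(-1, 1), 0.5           # follow a constant command
    s = bucket(x - ref)
    for _ in range(30):
        a = rng.integers(actions.size) if rng.random() < eps else int(Q[s].argmax())
        x = 1.1 * x + actions[a]               # unstable toy plant
        s2 = bucket(x - ref)
        r = -abs(x - ref)                      # reward: small tracking error
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

# The table reads out as rules: "if the tracking error falls in bin b, apply action a".
print("greedy action for error ~ +0.4:", actions[int(Q[bucket(0.4)].argmax())])
```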
When more is less: Feedback effects in perceptual category learning ☆
Maddox, W. Todd; Love, Bradley C.; Glass, Brian D.; Filoteo, J. Vincent
2008-01-01
Rule-based and information-integration category learning were compared under minimal and full feedback conditions. Rule-based category structures are those for which the optimal rule is verbalizable. Information-integration category structures are those for which the optimal rule is not verbalizable. With minimal feedback subjects are told whether their response was correct or incorrect, but are not informed of the correct category assignment. With full feedback subjects are informed of the correctness of their response and are also informed of the correct category assignment. An examination of the distinct neural circuits that subserve rule-based and information-integration category learning leads to the counterintuitive prediction that full feedback should facilitate rule-based learning but should also hinder information-integration learning. This prediction was supported in the experiment reported below. The implications of these results for theories of learning are discussed. PMID:18455155
Learning general phonological rules from distributional information: a computational model.
Calamaro, Shira; Jarosz, Gaja
2015-04-01
Phonological rules create alternations in the phonetic realizations of related words. These rules must be learned by infants in order to identify the phonological inventory, the morphological structure, and the lexicon of a language. Recent work proposes a computational model for the learning of one kind of phonological alternation, allophony (Peperkamp, Le Calvez, Nadal, & Dupoux, 2006). This paper extends the model to account for learning of a broader set of phonological alternations and the formalization of these alternations as general rules. In Experiment 1, we apply the original model to new data in Dutch and demonstrate its limitations in learning nonallophonic rules. In Experiment 2, we extend the model to allow it to learn general rules for alternations that apply to a class of segments. In Experiment 3, the model is further extended to allow for generalization by context; we argue that this generalization must be constrained by linguistic principles. Copyright © 2014 Cognitive Science Society, Inc.
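As a hedged sketch of the distributional intuition behind this family of models (not the authors' implementation), the code below measures how much the immediate contexts of two segments overlap in a toy word list; near-zero overlap is the kind of complementary distribution that allophony learners exploit.

```python
from collections import defaultdict

corpus = ["tipa", "tipi", "pati", "dama", "dome", "madu"]    # invented word list

contexts = defaultdict(set)
for word in corpus:
    padded = "#" + word + "#"                 # '#' marks word boundaries
    for i in range(1, len(padded) - 1):
        contexts[padded[i]].add((padded[i - 1], padded[i + 1]))

def context_overlap(a, b):
    """Fraction of shared contexts; values near 0 suggest complementary distribution."""
    union = contexts[a] | contexts[b]
    return len(contexts[a] & contexts[b]) / len(union) if union else 0.0

for pair in [("t", "d"), ("a", "i")]:
    print(pair, "overlap:", round(context_overlap(*pair), 2))
```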
Communicative signals support abstract rule learning by 7-month-old infants
Ferguson, Brock; Lew-Williams, Casey
2016-01-01
The mechanisms underlying the discovery of abstract rules like those found in natural language may be evolutionarily tuned to speech, according to previous research. When infants hear speech sounds, they can learn rules that govern their combination, but when they hear non-speech sounds such as sine-wave tones, they fail to do so. Here we show that infants’ rule learning is not tied to speech per se, but is instead enhanced more broadly by communicative signals. In two experiments, infants succeeded in learning and generalizing rules from tones that were introduced as if they could be used to communicate. In two control experiments, infants failed to learn the very same rules when familiarized to tones outside of a communicative exchange. These results reveal that infants’ attention to social agents and communication catalyzes a fundamental achievement of human learning. PMID:27150270
Hochmann, Jean-Rémy; Carey, Susan; Mehler, Jacques
2018-08-01
In two experiments, we assessed whether infants are able to learn rules predicated on two abstract relations linked by negation: same and different (not same). In an anticipatory looking paradigm, the relation between successive colored geometrical shapes predicted the location where a puppet would appear next. In Experiment 1, 7-month-olds learned and generalized a rule predicated on the relation same, but not a rule predicated on the relation different. Similarly, in Experiment 2, 12-month-olds learned a rule predicated on the relation same-shape, but not a rule predicated on the relation different-shape. Comparing our data with that from previous experiments in the speech domain, we found no effect of age, modality or rule complexity. We conclude that, in the first year of life, infants already possess a representation of the abstract relation same, which serves as input to a rule. In contrast, we find no evidence that they represent the relation different. Copyright © 2018 Elsevier B.V. All rights reserved.
Sensory memory for odors is encoded in spontaneous correlated activity between olfactory glomeruli.
Galán, Roberto F; Weidert, Marcel; Menzel, Randolf; Herz, Andreas V M; Galizia, C Giovanni
2006-01-01
Sensory memory is a short-lived persistence of a sensory stimulus in the nervous system, such as iconic memory in the visual system. However, little is known about the mechanisms underlying olfactory sensory memory. We have therefore analyzed the effect of odor stimuli on the first odor-processing network in the honeybee brain, the antennal lobe, which corresponds to the vertebrate olfactory bulb. We stained output neurons with a calcium-sensitive dye and measured across-glomerular patterns of spontaneous activity before and after a stimulus. Such a single-odor presentation changed the relative timing of spontaneous activity across glomeruli in accordance with Hebb's theory of learning. Moreover, during the first few minutes after odor presentation, correlations between the spontaneous activity fluctuations suffice to reconstruct the stimulus. As spontaneous activity is ubiquitous in the brain, modifiable fluctuations could provide an ideal substrate for Hebbian reverberations and sensory memory in other neural systems.
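A minimal sketch of the readout logic reported here: if post-stimulus spontaneous fluctuations are biased toward the odor-evoked across-glomerular pattern, that pattern can be recovered from the leading eigenvector of the correlation matrix. The generative model, the number of glomeruli, and the mixing strength below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_glomeruli, n_timepoints = 20, 2000
odor_pattern = rng.choice([1.0, -1.0], size=n_glomeruli)   # across-glomerular pattern

# Post-stimulus spontaneous activity: a shared fluctuation aligned with the
# odor pattern (Hebbian-like reverberation) plus independent noise per glomerulus.
shared = rng.normal(0, 1, n_timepoints)
activity = np.outer(shared, 0.6 * odor_pattern) + rng.normal(0, 1, (n_timepoints, n_glomeruli))

corr = np.corrcoef(activity.T)                 # pairwise correlations between glomeruli
_, eigvecs = np.linalg.eigh(corr)
reconstruction = np.sign(eigvecs[:, -1])       # leading eigenvector as the readout

match = abs(float(np.mean(reconstruction * odor_pattern)))
print("fraction of glomeruli recovered (up to sign):", round((match + 1) / 2, 2))
```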
Rule-Based and Information-Integration Category Learning in Normal Aging
ERIC Educational Resources Information Center
Maddox, W. Todd; Pacheco, Jennifer; Reeves, Maia; Zhu, Bo; Schnyer, David M.
2010-01-01
The basal ganglia and prefrontal cortex play critical roles in category learning. Both regions evidence age-related structural and functional declines. The current study examined rule-based and information-integration category learning in a group of older and younger adults. Rule-based learning is thought to involve explicit, frontally mediated…
One Giant Leap for Categorizers: One Small Step for Categorization Theory
Smith, J. David; Ell, Shawn W.
2015-01-01
We explore humans’ rule-based category learning using analytic approaches that highlight their psychological transitions during learning. These approaches confirm that humans show qualitatively sudden psychological transitions during rule learning. These transitions contribute to the theoretical literature contrasting single vs. multiple category-learning systems, because they seem to reveal a distinctive learning process of explicit rule discovery. A complete psychology of categorization must describe this learning process, too. Yet extensive formal-modeling analyses confirm that a wide range of current (gradient-descent) models cannot reproduce these transitions, including influential rule-based models (e.g., COVIS) and exemplar models (e.g., ALCOVE). It is an important theoretical conclusion that existing models cannot explain humans’ rule-based category learning. The problem these models have is the incremental algorithm by which learning is simulated. Humans descend no gradient in rule-based tasks. Very different formal-modeling systems will be required to explain humans’ psychology in these tasks. An important next step will be to build a new generation of models that can do so. PMID:26332587
Dissociable roles of medial and lateral PFC in rule learning.
Cao, Bihua; Li, Wei; Li, Fuhong; Li, Hong
2016-11-01
Although the neural basis of rule learning is of great interest to cognitive neuroscientists, the pattern of transient brain activation during rule discovery remains to be investigated. In this study, we measured event-related functional magnetic resonance imaging (fMRI) during distinct phases of rule learning. Twenty-one healthy human volunteers were presented with a series of cards, each containing a clock-like display of 12 circles numbered sequentially. Participants were instructed that a fictitious animal would move from one circle to another either in a regular pattern (according to a rule hidden in consecutive trials) or randomly. Participants were then asked to judge whether a given step followed a rule. While the rule-search phase evoked more activation in the posterior lateral prefrontal cortex (LPFC), the rule-following phase caused stronger activation in the anterior medial prefrontal cortex (MPFC). Importantly, the intermediate phase, the rule-discovery phase evoked more activations in MPFC and dorsal anterior cingulate cortex (dACC) than rule search, and more activations in LPFC than rule following. Therefore, we can conclude that the medial and lateral PFC have dissociable contributions in rule learning.
The evolution of social learning rules: payoff-biased and frequency-dependent biased transmission.
Kendal, Jeremy; Giraldeau, Luc-Alain; Laland, Kevin
2009-09-21
Humans and other animals do not use social learning indiscriminately, rather, natural selection has favoured the evolution of social learning rules that make selective use of social learning to acquire relevant information in a changing environment. We present a gene-culture coevolutionary analysis of a small selection of such rules (unbiased social learning, payoff-biased social learning and frequency-dependent biased social learning, including conformism and anti-conformism) in a population of asocial learners where the environment is subject to a constant probability of change to a novel state. We define conditions under which each rule evolves to a genetically polymorphic equilibrium. We find that payoff-biased social learning may evolve under high levels of environmental variation if the fitness benefit associated with the acquired behaviour is either high or low but not of intermediate value. In contrast, both conformist and anti-conformist biases can become fixed when environment variation is low, whereupon the mean fitness in the population is higher than for a population of asocial learners. Our examination of the population dynamics reveals stable limit cycles under conformist and anti-conformist biases and some highly complex dynamics including chaos. Anti-conformists can out-compete conformists when conditions favour a low equilibrium frequency of the learned behaviour. We conclude that evolution, punctuated by the repeated successful invasion of different social learning rules, should continuously favour a reduction in the equilibrium frequency of asocial learning, and propose that, among competing social learning rules, the dominant rule will be the one that can persist with the lowest frequency of asocial learning.
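A toy simulation, not the authors' gene-culture coevolutionary model, can show the basic tension: conformist (frequency-dependent) transmission amplifies whatever behaviour is common, which helps in a stable environment but lags behind occasional environmental change unless some asocial learning persists. All rates below are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
generations = 200
p_env_change, D, asocial_rate = 0.02, 0.3, 0.05   # D > 0: conformist bias

freq_adaptive = 0.6            # frequency of the currently adaptive behaviour
history = []
for _ in range(generations):
    if rng.random() < p_env_change:               # environment shifts: the other
        freq_adaptive = 1.0 - freq_adaptive       # behaviour becomes adaptive
    # Conformist transmission: delta p = D * p * (1 - p) * (2p - 1)
    freq_adaptive += D * freq_adaptive * (1 - freq_adaptive) * (2 * freq_adaptive - 1)
    # Asocial learners rediscover the adaptive behaviour directly.
    freq_adaptive = (1 - asocial_rate) * freq_adaptive + asocial_rate * 1.0
    freq_adaptive = float(np.clip(freq_adaptive, 0.0, 1.0))
    history.append(freq_adaptive)

print("mean frequency of the adaptive behaviour:", round(float(np.mean(history)), 2))
```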
Cohen, Yaniv; Wilson, Donald A.; Barkai, Edi
2015-01-01
Learning of a complex olfactory discrimination (OD) task results in acquisition of rule learning after prolonged training. Previously, we demonstrated enhanced synaptic connectivity between the piriform cortex (PC) and its ascending and descending inputs from the olfactory bulb (OB) and orbitofrontal cortex (OFC) following OD rule learning. Here, using recordings of evoked field postsynaptic potentials in behaving animals, we examined the dynamics by which these synaptic pathways are modified during rule acquisition. We show profound differences in synaptic connectivity modulation between the 2 input sources. During rule acquisition, the ascending synaptic connectivity from the OB to the anterior and posterior PC is simultaneously enhanced. Furthermore, post-training stimulation of the OB enhanced learning rate dramatically. In sharp contrast, the synaptic input in the descending pathway from the OFC was significantly reduced until training completion. Once rule learning was established, the strength of synaptic connectivity in the 2 pathways resumed its pretraining values. We suggest that acquisition of olfactory rule learning requires a transient enhancement of ascending inputs to the PC, synchronized with a parallel decrease in the descending inputs. This combined short-lived modulation enables the PC network to reorganize in a manner that enables it to first acquire and then maintain the rule. PMID:23960200
Sensorimotor Learning Biases Choice Behavior: A Learning Neural Field Model for Decision Making
Schöner, Gregor; Gail, Alexander
2012-01-01
According to a prominent view of sensorimotor processing in primates, selection and specification of possible actions are not sequential operations. Rather, a decision for an action emerges from competition between different movement plans, which are specified and selected in parallel. For action choices which are based on ambiguous sensory input, the frontoparietal sensorimotor areas are considered part of the common underlying neural substrate for selection and specification of action. These areas have been shown capable of encoding alternative spatial motor goals in parallel during movement planning, and show signatures of competitive value-based selection among these goals. Since the same network is also involved in learning sensorimotor associations, competitive action selection (decision making) should not only be driven by the sensory evidence and expected reward in favor of either action, but also by the subject's learning history of different sensorimotor associations. Previous computational models of competitive neural decision making used predefined associations between sensory input and corresponding motor output. Such hard-wiring does not allow modeling of how decisions are influenced by sensorimotor learning or by changing reward contingencies. We present a dynamic neural field model which learns arbitrary sensorimotor associations with a reward-driven Hebbian learning algorithm. We show that the model accurately simulates the dynamics of action selection with different reward contingencies, as observed in monkey cortical recordings, and that it correctly predicted the pattern of choice errors in a control experiment. With our adaptive model we demonstrate how network plasticity, which is required for association learning and adaptation to new reward contingencies, can influence choice behavior. The field model provides an integrated and dynamic account for the operations of sensorimotor integration, working memory and action selection required for decision making in ambiguous choice situations. PMID:23166483
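The model's central ingredient, reward-driven Hebbian learning of arbitrary sensorimotor associations, can be sketched in reduced form: the full field dynamics and competition are replaced here by a softmax choice, and the plasticity is a reward-gated update on the active cue-action synapse. Task size, learning rate, and softmax temperature are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n_cues, n_actions = 4, 4
W = np.zeros((n_cues, n_actions))             # cue -> action association weights
correct = rng.permutation(n_actions)          # hidden cue-to-action mapping
eta, beta = 0.3, 4.0

def softmax(x):
    e = np.exp(beta * (x - x.max()))
    return e / e.sum()

rewards = []
for _ in range(800):
    cue = rng.integers(n_cues)
    action = rng.choice(n_actions, p=softmax(W[cue]))   # competitive selection
    reward = 1.0 if action == correct[cue] else 0.0
    # Reward-gated Hebbian-style update on the active cue-action synapse.
    W[cue, action] += eta * (reward - W[cue, action])
    rewards.append(reward)

print("accuracy over the last 100 trials:", round(float(np.mean(rewards[-100:])), 2))
```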
NMDA receptors mediate olfactory learning and memory in Drosophila.
Xia, Shouzhen; Miyashita, Tomoyuki; Fu, Tsai-Feng; Lin, Wei-Yong; Wu, Chia-Lin; Pyzocha, Lori; Lin, Inn-Ray; Saitoe, Minoru; Tully, Tim; Chiang, Ann-Shyn
2005-04-12
Molecular and electrophysiological properties of NMDARs suggest that they may be the Hebbian "coincidence detectors" hypothesized to underlie associative learning. Because of the nonspecificity of drugs that modulate NMDAR function or the relatively chronic genetic manipulations of various NMDAR subunits from mammalian studies, conclusive evidence for such an acute role for NMDARs in adult behavioral plasticity, however, is lacking. Moreover, a role for NMDARs in memory consolidation remains controversial. The Drosophila genome encodes two NMDAR homologs, dNR1 and dNR2. When coexpressed in Xenopus oocytes or Drosophila S2 cells, dNR1 and dNR2 form functional NMDARs with several of the distinguishing molecular properties observed for vertebrate NMDARs, including voltage/Mg(2+)-dependent activation by glutamate. Both proteins are weakly expressed throughout the entire brain but show preferential expression in several neurons surrounding the dendritic region of the mushroom bodies. Hypomorphic mutations of the essential dNR1 gene disrupt olfactory learning, and this learning defect is rescued with wild-type transgenes. Importantly, we show that Pavlovian learning is disrupted in adults within 15 hr after transient induction of a dNR1 antisense RNA transgene. Extended training is sufficient to overcome this initial learning defect, but long-term memory (LTM) specifically is abolished under these training conditions. Our study uses a combination of molecular-genetic tools to (1) generate genomic mutations of the dNR1 gene, (2) rescue the accompanying learning deficit with a dNR1+ transgene, and (3) rapidly and transiently knockdown dNR1+ expression in adults, thereby demonstrating an evolutionarily conserved role for the acute involvement of NMDARs in associative learning and memory.
Meunier, Sabine; Russmann, Heike; Shamim, Ejaz; Lamy, Jean-Charles; Hallett, Mark
2012-01-01
Artificial induction of plasticity by paired associative stimulation (PAS) in healthy subjects (HV) demonstrates Hebbian-like plasticity in selected inhibitory networks as well as excitatory ones. In a group of 17 patients with focal hand dystonia and a group of 19 HV, we evaluated how PAS and the learning of a simple motor task influence the circuits supporting long-interval intracortical inhibition (LICI, reflecting activity of GABAB interneurons) and long-latency afferent inhibition (LAI, reflecting activity of somatosensory inputs to the motor cortex). In HV, PAS and motor learning induced LTP-like plasticity of excitatory networks and a lasting decrease of LAI and LICI in the motor representation of the targeted or trained muscle. The better the motor performance, the larger was the decrease of LAI. Although motor performance in the patient group was similar to that of the control group, LAI did not decrease during motor learning as it did in the control group. In contrast, LICI was normally modulated. In patients, the results after PAS did not match those obtained after motor learning: LAI was paradoxically increased and LICI did not exhibit any change. In the normal situation, decreased excitability in inhibitory circuits after induction of LTP-like plasticity may help to shape the cortical maps according to the new sensorimotor task. In patients, the abnormal or absent modulation of afferent and intracortical long-interval inhibition might indicate maladaptive plasticity that possibly contributes to the difficulty they have in learning a new sensorimotor task. PMID:22429246
NASA Astrophysics Data System (ADS)
Imada, Keita; Nakamura, Katsuhiko
This paper describes recent improvements to the Synapse system for incremental learning of general context-free grammars (CFGs) and definite clause grammars (DCGs) from positive and negative sample strings. An important feature of our approach is incremental learning, which is realized by a rule generation mechanism called "bridging", based on bottom-up parsing of positive samples and the search for rule sets. The sizes of rule sets and the computation time depend on the search strategies. In addition to the global search for synthesizing minimal rule sets and the serial search, an alternative method for synthesizing semi-optimum rule sets, we incorporate beam search into the system for synthesizing semi-minimal rule sets. The paper reports several experimental results on learning CFGs and DCGs, and we analyze the sizes of the rule sets and the computation time.
Complexity, Training Paradigm Design, and the Contribution of Memory Subsystems to Grammar Learning
Ettlinger, Marc; Wong, Patrick C. M.
2016-01-01
Although there is variability in nonnative grammar learning outcomes, the contributions of training paradigm design and memory subsystems are not well understood. To examine this, we presented learners with an artificial grammar that formed words via simple and complex morphophonological rules. Across three experiments, we manipulated training paradigm design and measured subjects' declarative, procedural, and working memory subsystems. Experiment 1 demonstrated that passive, exposure-based training boosted learning of both simple and complex grammatical rules, relative to no training. Additionally, procedural memory correlated with simple rule learning, whereas declarative memory correlated with complex rule learning. Experiment 2 showed that presenting corrective feedback during the test phase did not improve learning. Experiment 3 revealed that structuring the order of training so that subjects are first exposed to the simple rule and then the complex improved learning. The cumulative findings shed light on the contributions of grammatical complexity, training paradigm design, and domain-general memory subsystems in determining grammar learning success. PMID:27391085
Neural networks supporting switching, hypothesis testing, and rule application
Liu, Zhiya; Braunlich, Kurt; Wehe, Hillary S.; Seger, Carol A.
2015-01-01
We identified dynamic changes in recruitment of neural connectivity networks across three phases of a flexible rule learning and set-shifting task similar to the Wisconsin Card Sort Task: switching, rule learning via hypothesis testing, and rule application. During fMRI scanning, subjects viewed pairs of stimuli that differed across four dimensions (letter, color, size, screen location), chose one stimulus, and received feedback. Subjects were informed that the correct choice was determined by a simple unidimensional rule, for example “choose the blue letter.” Once each rule had been learned and correctly applied for 4-7 trials, subjects were cued via either negative feedback or visual cues to switch to learning a new rule. Task performance was divided into three phases: Switching (first trial after receiving the switch cue), hypothesis testing (subsequent trials through the last error trial), and rule application (correct responding after the rule was learned). We used both univariate analysis to characterize activity occurring within specific regions of the brain, and a multivariate method, constrained principal component analysis for fMRI (fMRI-CPCA), to investigate how distributed regions coordinate to subserve different processes. As hypothesized, switching was subserved by a limbic network including the ventral striatum, thalamus, and parahippocampal gyrus, in conjunction with cortical salience network regions including the anterior cingulate and frontoinsular cortex. Activity in the ventral striatum was associated with switching regardless of how switching was cued; visually cued shifts were associated with additional visual cortical activity. After switching, as subjects moved into the hypothesis testing phase, a broad fronto-parietal-striatal network (associated with the cognitive control, dorsal attention, and salience networks) increased in activity. This network was sensitive to rule learning speed, with greater extended activity for the slowest learning speed late in the time course of learning. As subjects shifted from hypothesis testing to rule application, activity in this network decreased and activity in the somatomotor and default mode networks increased. PMID:26197092
On-line Gibbs learning. II. Application to perceptron and multilayer networks
NASA Astrophysics Data System (ADS)
Kim, J. W.; Sompolinsky, H.
1998-08-01
In the preceding paper ("On-line Gibbs Learning. I. General Theory") we have presented the on-line Gibbs algorithm (OLGA) and studied analytically its asymptotic convergence. In this paper we apply OLGA to on-line supervised learning in several network architectures: a single-layer perceptron, two-layer committee machine, and a winner-takes-all (WTA) classifier. The behavior of OLGA for a single-layer perceptron is studied both analytically and numerically for a variety of rules: a realizable perceptron rule, a perceptron rule corrupted by output and input noise, and a rule generated by a committee machine. The two-layer committee machine is studied numerically for the cases of learning a realizable rule as well as a rule that is corrupted by output noise. The WTA network is studied numerically for the case of a realizable rule. The asymptotic results reported in this paper agree with the predictions of the general theory of OLGA presented in paper I. In all the studied cases, OLGA converges to a set of weights that minimizes the generalization error. When the learning rate is chosen as a power law with an optimal power, OLGA converges with a power law that is the same as that of batch learning.
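For orientation, the sketch below reproduces the general setting, not OLGA itself: a student perceptron learns a realizable teacher rule online with a power-law learning-rate schedule, and its generalization error is the angle between student and teacher weight vectors. Dimension, horizon, and the exponent are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)
N = 100
teacher = rng.normal(size=N); teacher /= np.linalg.norm(teacher)
student = rng.normal(size=N); student /= np.linalg.norm(student)

def gen_error(w):
    """Generalization error = angle between student and teacher, divided by pi."""
    c = np.clip(w @ teacher / np.linalg.norm(w), -1.0, 1.0)
    return np.arccos(c) / np.pi

for t in range(1, 20001):
    x = rng.normal(size=N)
    label = np.sign(teacher @ x)               # realizable rule
    eta = 2.0 / t ** 0.5                       # power-law learning-rate schedule
    if np.sign(student @ x) != label:          # perceptron: update only on mistakes
        student += eta * label * x

print("generalization error after 20000 examples:", round(gen_error(student), 4))
```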
Grimm, Lisa R; Maddox, W Todd
2013-11-01
Research has identified multiple category-learning systems with each being "tuned" for learning categories with different task demands and each governed by different neurobiological systems. Rule-based (RB) classification involves testing verbalizable rules for category membership while information-integration (II) classification requires the implicit learning of stimulus-response mappings. In the first study to directly test rule priming with RB and II category learning, we investigated the influence of the availability of information presented at the beginning of the task. Participants viewed lines that varied in length, orientation, and position on the screen, and were primed to focus on stimulus dimensions that were relevant or irrelevant to the correct classification rule. In Experiment 1, we used an RB category structure, and in Experiment 2, we used an II category structure. Accuracy and model-based analyses suggested that a focus on relevant dimensions improves RB task performance later in learning while a focus on an irrelevant dimension improves II task performance early in learning. © 2013.
The Role of Age and Executive Function in Auditory Category Learning
Reetzke, Rachel; Maddox, W. Todd; Chandrasekaran, Bharath
2015-01-01
Auditory categorization is a natural and adaptive process that allows for the organization of high-dimensional, continuous acoustic information into discrete representations. Studies in the visual domain have identified a rule-based learning system that learns and reasons via a hypothesis-testing process that requires working memory and executive attention. The rule-based learning system in vision shows a protracted development, reflecting the influence of maturing prefrontal function on visual categorization. The aim of the current study is two-fold: (a) to examine the developmental trajectory of rule-based auditory category learning from childhood through adolescence, into early adulthood; and (b) to examine the extent to which individual differences in rule-based category learning relate to individual differences in executive function. Sixty participants with normal hearing, 20 children (age range, 7–12), 21 adolescents (age range, 13–19), and 19 young adults (age range, 20–23), learned to categorize novel dynamic ripple sounds using trial-by-trial feedback. The spectrotemporally modulated ripple sounds are considered the auditory equivalent of the well-studied Gabor patches in the visual domain. Results revealed that auditory categorization accuracy improved with age, with young adults outperforming children and adolescents. Computational modeling analyses indicated that the use of the task-optimal strategy (i.e. a conjunctive rule-based learning strategy) improved with age. Notably, individual differences in executive flexibility significantly predicted auditory category learning success. The current findings demonstrate a protracted development of rule-based auditory categorization. The results further suggest that executive flexibility coupled with perceptual processes play important roles in successful rule-based auditory category learning. PMID:26491987
Bimodal Emotion Congruency Is Critical to Preverbal Infants' Abstract Rule Learning
ERIC Educational Resources Information Center
Tsui, Angeline Sin Mei; Ma, Yuen Ki; Ho, Anna; Chow, Hiu Mei; Tseng, Chia-huei
2016-01-01
Extracting general rules from specific examples is important, as we must face the same challenge displayed in various formats. Previous studies have found that bimodal presentation of grammar-like rules (e.g. ABA) enhanced 5-month-olds' capacity to acquire a rule that infants failed to learn when the rule was presented with visual presentation of…
REVIEW: Internal models in sensorimotor integration: perspectives from adaptive control theory
NASA Astrophysics Data System (ADS)
Tin, Chung; Poon, Chi-Sang
2005-09-01
Internal models and adaptive controls are empirical and mathematical paradigms that have evolved separately to describe learning control processes in brain systems and engineering systems, respectively. This paper presents a comprehensive appraisal of the correlation between these paradigms with a view to forging a unified theoretical framework that may benefit both disciplines. It is suggested that the classic equilibrium-point theory of impedance control of arm movement is analogous to continuous gain-scheduling or high-gain adaptive control within or across movement trials, respectively, and that the recently proposed inverse internal model is akin to adaptive sliding control originally for robotic manipulator applications. Modular internal models' architecture for multiple motor tasks is a form of multi-model adaptive control. Stochastic methods, such as generalized predictive control, reinforcement learning, Bayesian learning and Hebbian feedback covariance learning, are reviewed and their possible relevance to motor control is discussed. Possible applicability of a Luenberger observer and an extended Kalman filter to state estimation problems—such as sensorimotor prediction or the resolution of vestibular sensory ambiguity—is also discussed. The important role played by vestibular system identification in postural control suggests an indirect adaptive control scheme whereby system states or parameters are explicitly estimated prior to the implementation of control. This interdisciplinary framework should facilitate the experimental elucidation of the mechanisms of internal models in sensorimotor systems and the reverse engineering of such neural mechanisms into novel brain-inspired adaptive control paradigms in future.
A Flexible Mechanism of Rule Selection Enables Rapid Feature-Based Reinforcement Learning
Balcarras, Matthew; Womelsdorf, Thilo
2016-01-01
Learning in a new environment is influenced by prior learning and experience. Correctly applying a rule that maps a context to stimuli, actions, and outcomes enables faster learning and better outcomes compared to relying on strategies for learning that are ignorant of task structure. However, it is often difficult to know when and how to apply learned rules in new contexts. In our study we explored how subjects employ different strategies for learning the relationship between stimulus features and positive outcomes in a probabilistic task context. We test the hypothesis that task naive subjects will show enhanced learning of feature specific reward associations by switching to the use of an abstract rule that associates stimuli by feature type and restricts selections to that dimension. To test this hypothesis we designed a decision making task where subjects receive probabilistic feedback following choices between pairs of stimuli. In the task, trials are grouped in two contexts by blocks, where in one type of block there is no unique relationship between a specific feature dimension (stimulus shape or color) and positive outcomes, and following an un-cued transition, alternating blocks have outcomes that are linked to either stimulus shape or color. Two-thirds of subjects (n = 22/32) exhibited behavior that was best fit by a hierarchical feature-rule model. Supporting the prediction of the model mechanism these subjects showed significantly enhanced performance in feature-reward blocks, and rapidly switched their choice strategy to using abstract feature rules when reward contingencies changed. Choice behavior of other subjects (n = 10/32) was fit by a range of alternative reinforcement learning models representing strategies that do not benefit from applying previously learned rules. In summary, these results show that untrained subjects are capable of flexibly shifting between behavioral rules by leveraging simple model-free reinforcement learning and context-specific selections to drive responses. PMID:27064794
Learning General Phonological Rules from Distributional Information: A Computational Model
ERIC Educational Resources Information Center
Calamaro, Shira; Jarosz, Gaja
2015-01-01
Phonological rules create alternations in the phonetic realizations of related words. These rules must be learned by infants in order to identify the phonological inventory, the morphological structure, and the lexicon of a language. Recent work proposes a computational model for the learning of one kind of phonological alternation, allophony…
Ellipsoidal fuzzy learning for smart car platoons
NASA Astrophysics Data System (ADS)
Dickerson, Julie A.; Kosko, Bart
1993-12-01
A neural-fuzzy system combined supervised and unsupervised learning to find and tune the fuzzy rules. An additive fuzzy system approximates a function by covering its graph with fuzzy rules. A fuzzy rule patch can take the form of an ellipsoid in the input-output space. Unsupervised competitive learning found the statistics of data clusters. The covariance matrix of each synaptic quantization vector defined an ellipsoid centered at the centroid of the data cluster. Tightly clustered data gave smaller ellipsoids or more certain rules. Sparse data gave larger ellipsoids or less certain rules. Supervised learning tuned the ellipsoids to improve the approximation. The supervised neural system used gradient descent to find the ellipsoidal fuzzy patches. It locally minimized the mean-squared error of the fuzzy approximation. Hybrid ellipsoidal learning estimated the control surface for a smart car controller.
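A minimal sketch of the unsupervised stage described above, with plain k-means standing in for competitive learning: each data cluster's covariance defines an ellipsoidal rule patch, and tighter clusters yield smaller (more certain) patches. The supervised gradient-descent tuning stage is omitted, and the synthetic data and cluster count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic input-output data from a noisy 1-D control surface y = sin(x)
x = rng.uniform(-3, 3, size=400)
y = np.sin(x) + 0.1 * rng.standard_normal(400)
data = np.column_stack([x, y])                  # points in input-output space

def kmeans(data, k, iters=50):
    """Plain k-means as a stand-in for the unsupervised competitive-learning stage."""
    centers = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((data[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([data[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers, labels

centers, labels = kmeans(data, k=6)

# Each cluster's covariance defines an ellipsoidal rule patch: tight clusters give
# small ellipsoids (more certain rules), sparse clusters give large ones (less certain).
for j in range(6):
    cluster = data[labels == j]
    if len(cluster) < 3:
        continue
    cov = np.cov(cluster.T)
    area = np.pi * np.sqrt(np.linalg.det(cov))  # area of the 1-sigma ellipse in 2-D
    print(f"rule {j}: centroid={centers[j].round(2)}, 1-sigma ellipse area={area:.3f}")
```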
ERIC Educational Resources Information Center
Merrill, Paul F.; And Others
To replicate and extend the results of a previous study, this project investigated the effects of behavioral objectives and/or rules on computer-based learning task performance. The 133 subjects were randomly assigned to an example-only, objective-example, rule example, or objective-rule example group. The availability of rules and/or objectives…
When More Is Less: Feedback Effects in Perceptual Category Learning
ERIC Educational Resources Information Center
Maddox, W. Todd; Love, Bradley C.; Glass, Brian D.; Filoteo, J. Vincent
2008-01-01
Rule-based and information-integration category learning were compared under minimal and full feedback conditions. Rule-based category structures are those for which the optimal rule is verbalizable. Information-integration category structures are those for which the optimal rule is not verbalizable. With minimal feedback subjects are told whether…
Yu, Qiang; Tang, Huajin; Tan, Kay Chen; Li, Haizhou
2013-01-01
A new learning rule (Precise-Spike-Driven (PSD) Synaptic Plasticity) is proposed for processing and memorizing spatiotemporal patterns. PSD is a supervised learning rule that is analytically derived from the traditional Widrow-Hoff rule and can be used to train neurons to associate an input spatiotemporal spike pattern with a desired spike train. Synaptic adaptation is driven by the error between the desired and the actual output spikes, with positive errors causing long-term potentiation and negative errors causing long-term depression. The amount of modification is proportional to an eligibility trace that is triggered by afferent spikes. The PSD rule is both computationally efficient and biologically plausible. The properties of this learning rule are investigated extensively through experimental simulations, including its learning performance, its generality to different neuron models, its robustness against noisy conditions, its memory capacity, and the effects of its learning parameters. Experimental results show that the PSD rule is capable of spatiotemporal pattern classification, and can even outperform a well studied benchmark algorithm with the proposed relative confidence criterion. The PSD rule is further validated on a practical example of an optical character recognition problem. The results again show that it can achieve a good recognition performance with a proper encoding. Finally, a detailed discussion is provided about the PSD rule and several related algorithms including tempotron, SPAN, Chronotron and ReSuMe.
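A minimal sketch of the kind of update the abstract describes: synaptic change driven by the error between desired and actual output spikes, gated by an eligibility trace left by afferent spikes. The neuron model, trace kernel, and all constants here are assumptions for illustration, not the exact PSD derivation.

```python
import numpy as np

rng = np.random.default_rng(2)
steps, n_afferents = 200, 30
tau_trace, tau_m = 10.0, 20.0          # assumed trace / membrane time constants (dt = 1 ms)
lr, threshold = 0.01, 1.0

afferent_spikes = rng.random((steps, n_afferents)) < 0.02   # Poisson-like input spike trains
desired = np.zeros(steps)
desired[[50, 120, 180]] = 1.0                               # desired output spike train

w = rng.random(n_afferents) * 0.1
trace = np.zeros(n_afferents)
v = 0.0

for t in range(steps):
    trace += -trace / tau_trace + afferent_spikes[t]        # per-synapse eligibility trace
    v += -v / tau_m + w @ afferent_spikes[t]                # crude leaky integration
    actual = 1.0 if v >= threshold else 0.0
    if actual:
        v = 0.0                                             # reset after an output spike
    # Error-driven update: a missed desired spike (positive error) potentiates,
    # a spurious spike (negative error) depresses, gated by the eligibility trace.
    w += lr * (desired[t] - actual) * trace

print("final weights (first 5):", np.round(w[:5], 3))
```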
Supervised Learning in Spiking Neural Networks for Precise Temporal Encoding.
Gardner, Brian; Grüning, André
2016-01-01
Precise spike timing as a means to encode information in neural networks is biologically supported, and is advantageous over frequency-based codes by processing input features on a much shorter time-scale. For these reasons, much recent attention has been focused on the development of supervised learning rules for spiking neural networks that utilise a temporal coding scheme. However, despite significant progress in this area, there is still a lack of rules that have a theoretical basis and yet can be considered biologically relevant. Here we examine the general conditions under which synaptic plasticity most effectively takes place to support the supervised learning of a precise temporal code. As part of our analysis we examine two spike-based learning methods: one relies on an instantaneous error signal to modify synaptic weights in a network (the INST rule), and the other on a filtered error signal for smoother synaptic weight modifications (the FILT rule). We test the accuracy of the solutions provided by each rule with respect to their temporal encoding precision, and then measure the maximum number of input patterns they can learn to memorise using the precise timings of individual spikes as an indication of their storage capacity. Our results demonstrate the high performance of the FILT rule in most cases, underpinned by the rule's error-filtering mechanism, which is predicted to provide smooth convergence towards a desired solution during learning. We also find the FILT rule to be most efficient at performing input pattern memorisations, and most noticeably when patterns are identified using spikes with sub-millisecond temporal precision. In comparison with existing work, we determine the performance of the FILT rule to be consistent with that of the highly efficient E-learning Chronotron rule, but with the distinct advantage that our FILT rule is also implementable as an online method for increased biological realism.
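A toy contrast of the two ideas in the abstract, instantaneous versus exponentially filtered error signals driving weight updates. The spike trains are random placeholders and the filter constant is assumed; the point is only that filtering the error smooths the weight trajectory.

```python
import numpy as np

rng = np.random.default_rng(3)
steps, n_syn = 500, 10
tau_filt, lr = 20.0, 0.05                      # assumed filter time constant and learning rate

presyn = rng.random((steps, n_syn)) < 0.05     # presynaptic spike trains
desired = rng.random(steps) < 0.02             # desired postsynaptic spikes
actual = rng.random(steps) < 0.02              # placeholder "actual" output spikes

w_inst, w_filt = np.zeros(n_syn), np.zeros(n_syn)
err_filt = 0.0

for t in range(steps):
    err = float(desired[t]) - float(actual[t])  # instantaneous spike error
    err_filt += -err_filt / tau_filt + err      # exponentially filtered error
    w_inst += lr * err * presyn[t]              # INST-like: abrupt, spike-locked updates
    w_filt += lr * err_filt * presyn[t]         # FILT-like: smoother weight trajectory

print("total update magnitude  INST:", np.abs(w_inst).sum().round(3),
      " FILT:", np.abs(w_filt).sum().round(3))
```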
Myths and legends in learning classification rules
NASA Technical Reports Server (NTRS)
Buntine, Wray
1990-01-01
A discussion is presented of machine learning theory on empirically learning classification rules. Six myths are proposed in the machine learning community that address issues of bias, learning as search, computational learning theory, Occam's razor, universal learning algorithms, and interactive learning. Some of the problems raised are also addressed from a Bayesian perspective. Questions are suggested that machine learning researchers should be addressing both theoretically and experimentally.
Cognitive changes in conjunctive rule-based category learning: An ERP approach.
Rabi, Rahel; Joanisse, Marc F; Zhu, Tianshu; Minda, John Paul
2018-06-25
When learning rule-based categories, sufficient cognitive resources are needed to test hypotheses, maintain the currently active rule in working memory, update rules after feedback, and select a new rule if necessary. Prior research has demonstrated that conjunctive rules are more complex than unidimensional rules and place greater demands on executive functions like working memory. In our study, event-related potentials (ERPs) were recorded while participants performed a conjunctive rule-based category learning task with trial-by-trial feedback. In line with prior research, correct categorization responses resulted in a larger stimulus-locked late positive complex compared to incorrect responses, possibly indexing the updating of rule information in memory. Incorrect trials elicited a pronounced feedback-locked P300, which suggested a disconnect between perception and the rule-based strategy. We also examined the differential processing of stimuli that could be correctly classified by the suboptimal single-dimensional rule ("easy" stimuli) versus those that could only be correctly classified by the optimal, conjunctive rule ("difficult" stimuli). Among strong learners, a larger, late positive slow wave emerged for difficult compared with easy stimuli, suggesting differential processing of category items even though strong learners performed well on the conjunctive category set. Overall, the findings suggest that ERP combined with computational modelling can be used to better understand the cognitive processes involved in rule-based category learning.
Mirrored STDP Implements Autoencoder Learning in a Network of Spiking Neurons.
Burbank, Kendra S
2015-12-01
The autoencoder algorithm is a simple but powerful unsupervised method for training neural networks. Autoencoder networks can learn sparse distributed codes similar to those seen in cortical sensory areas such as visual area V1, but they can also be stacked to learn increasingly abstract representations. Several computational neuroscience models of sensory areas, including Olshausen & Field's Sparse Coding algorithm, can be seen as autoencoder variants, and autoencoders have seen extensive use in the machine learning community. Despite their power and versatility, autoencoders have been difficult to implement in a biologically realistic fashion. The challenges include their need to calculate differences between two neuronal activities and their requirement for learning rules which lead to identical changes at feedforward and feedback connections. Here, we study a biologically realistic network of integrate-and-fire neurons with anatomical connectivity and synaptic plasticity that closely matches that observed in cortical sensory areas. Our choice of synaptic plasticity rules is inspired by recent experimental and theoretical results suggesting that learning at feedback connections may have a different form from learning at feedforward connections, and our results depend critically on this novel choice of plasticity rules. Specifically, we propose that plasticity rules at feedforward versus feedback connections are temporally opposed versions of spike-timing dependent plasticity (STDP), leading to a symmetric combined rule we call Mirrored STDP (mSTDP). We show that with mSTDP, our network follows a learning rule that approximately minimizes an autoencoder loss function. When trained with whitened natural image patches, the learned synaptic weights resemble the receptive fields seen in V1. Our results use realistic synaptic plasticity rules to show that the powerful autoencoder learning algorithm could be within the reach of real biological networks.
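A minimal sketch of the pairing-function idea behind the mirrored rule: a standard pair-based STDP window for feedforward synapses, its temporal mirror for feedback synapses, and their symmetric combination. The exponential window shape and the constants are assumptions; this shows only the symmetry property, not the full spiking autoencoder network.

```python
import numpy as np

A_plus, A_minus = 1.0, 1.0
tau = 20.0   # assumed STDP time constant (ms)

def stdp_ff(dt):
    """Feedforward window: pre-before-post (dt = t_post - t_pre > 0) potentiates."""
    return A_plus * np.exp(-dt / tau) if dt > 0 else -A_minus * np.exp(dt / tau)

def stdp_fb(dt):
    """Feedback window: temporally opposed (mirrored) version of the feedforward one."""
    return stdp_ff(-dt)

def mirrored_stdp(dt):
    """Combined rule: symmetric in dt, so feedforward and feedback weights change identically."""
    return 0.5 * (stdp_ff(dt) + stdp_fb(dt))

for dt in (-30, -10, 10, 30):
    print(f"dt={dt:+4d} ms  ff={stdp_ff(dt):+.3f}  fb={stdp_fb(dt):+.3f}  "
          f"mirrored={mirrored_stdp(dt):+.3f}")
```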
Learning Problem-Solving Rules as Search through a Hypothesis Space
ERIC Educational Resources Information Center
Lee, Hee Seung; Betts, Shawn; Anderson, John R.
2016-01-01
Learning to solve a class of problems can be characterized as a search through a space of hypotheses about the rules for solving these problems. A series of four experiments studied how different learning conditions affected the search among hypotheses about the solution rule for a simple computational problem. Experiment 1 showed that a problem…
An Efficient VLSI Architecture for Multi-Channel Spike Sorting Using a Generalized Hebbian Algorithm
Chen, Ying-Lun; Hwang, Wen-Jyi; Ke, Chi-En
2015-01-01
A novel VLSI architecture for multi-channel online spike sorting is presented in this paper. In the architecture, the spike detection is based on nonlinear energy operator (NEO), and the feature extraction is carried out by the generalized Hebbian algorithm (GHA). To lower the power consumption and area costs of the circuits, all of the channels share the same core for spike detection and feature extraction operations. Each channel has dedicated buffers for storing the detected spikes and the principal components of that channel. The proposed circuit also contains a clock gating system supplying the clock to only the buffers of channels currently using the computation core to further reduce the power consumption. The architecture has been implemented by an application-specific integrated circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture has lower power consumption and hardware area costs for real-time multi-channel spike detection and feature extraction. PMID:26287193
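The generalized Hebbian algorithm (Sanger's rule) used for feature extraction above has the standard form dW = eta * (y x^T - LT[y y^T] W). The sketch below is a software analogue on synthetic data, not the ASIC implementation; the data dimensions and learning rate are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n_samples, dim, n_comp = 2000, 32, 3

# Synthetic "spike waveform" data with three dominant principal directions
basis = rng.standard_normal((n_comp, dim))
basis /= np.linalg.norm(basis, axis=1, keepdims=True)
coeffs = rng.standard_normal((n_samples, n_comp)) * np.array([3.0, 2.0, 1.0])
X = coeffs @ basis + 0.1 * rng.standard_normal((n_samples, dim))
X -= X.mean(axis=0)

W = rng.standard_normal((n_comp, dim)) * 0.01   # rows converge to the leading PCs
eta = 1e-3

for _ in range(20):                             # a few passes over the data
    for x in X:
        y = W @ x
        # Sanger's GHA update: Hebbian term minus a lower-triangular
        # deflation term that decorrelates successive outputs.
        W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

_, _, Vt = np.linalg.svd(X, full_matrices=False)    # batch PCA for comparison
for i in range(n_comp):
    cos = abs((W[i] / np.linalg.norm(W[i])) @ Vt[i])
    print(f"component {i}: |cosine| with batch PCA direction = {cos:.3f}")
```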
Meunier, Sabine; Russmann, Heike; Shamim, Ejaz; Lamy, Jean-Charles; Hallett, Mark
2012-03-01
Artificial induction of plasticity by paired associative stimulation (PAS) in healthy volunteers (HV) demonstrates Hebbian-like plasticity in selected inhibitory networks as well as excitatory networks. In a group of 17 patients with focal hand dystonia and a group of 19 HV, we evaluated how PAS and the learning of a simple motor task influence the circuits supporting long-interval intracortical inhibition (LICI, reflecting activity of GABA(B) interneurons) and long-latency afferent inhibition (LAI, reflecting activity of somatosensory inputs to the motor cortex). In HV, PAS and motor learning induced long-term potentiation (LTP)-like plasticity of excitatory networks and a lasting decrease of LAI and LICI in the motor representation of the targeted or trained muscle. The better the motor performance, the larger was the decrease of LAI. Although motor performance in the patient group was similar to that of the control group, LAI did not decrease during motor learning as it did in the control group. In contrast, LICI was normally modulated. In patients, the results after PAS did not match those obtained after motor learning: LAI was paradoxically increased and LICI did not exhibit any change. In the normal situation, decreased excitability in inhibitory circuits after induction of LTP-like plasticity may help to shape the cortical maps according to the new sensorimotor task. In patients, the abnormal or absent modulation of afferent and intracortical long-interval inhibition might indicate maladaptive plasticity that possibly contributes to the difficulty they have in learning a new sensorimotor task.
Students Learn by Doing: Teaching about Rules of Thumb.
ERIC Educational Resources Information Center
Cude, Brenda J.
1990-01-01
Identifies situation in which consumers are likely to substitute rules of thumb for research, reviews rules of thumb often used as substitutes, and identifies teaching activities to help students learn when substitution is appropriate. (JOW)
Mission Impossible: Learning How a Classroom Works before It's Too Late!
ERIC Educational Resources Information Center
Tattershall, Sandra
1987-01-01
The article looks at the implicit rules of classroom functioning and the importance of students learning these rules, either through osmosis or direct rule instruction, during the first few weeks of school. Speech language pathologists can help at risk students identify critical components of teacher behavior and classroom rules. (DB)
Myths and legends in learning classification rules
NASA Technical Reports Server (NTRS)
Buntine, Wray
1990-01-01
This paper is a discussion of machine learning theory on empirically learning classification rules. The paper proposes six myths in the machine learning community that address issues of bias, learning as search, computational learning theory, Occam's razor, 'universal' learning algorithms, and interactive learning. Some of the problems raised are also addressed from a Bayesian perspective. The paper concludes by suggesting questions that machine learning researchers should be addressing both theoretically and experimentally.
Learning to Learn about Uncertain Feedback
ERIC Educational Resources Information Center
Faraut, Mailys C. M.; Procyk, Emmanuel; Wilson, Charles R. E.
2016-01-01
Unexpected outcomes can reflect noise in the environment or a change in the current rules. We should ignore noise but shift strategy after rule changes. How we learn to do this is unclear, but one possibility is that it relies on learning to learn in uncertain environments. We propose that acquisition of latent task structure during learning to…
Applications of Machine Learning and Rule Induction,
1995-02-15
An important area of application for machine learning is in automating the acquisition of knowledge bases required for expert systems. In this paper, we review the major paradigms for machine learning, including neural networks, instance-based methods, genetic learning, rule induction, and analytic…
Attentional effects on rule extraction and consolidation from speech.
López-Barroso, Diana; Cucurell, David; Rodríguez-Fornells, Antoni; de Diego-Balaguer, Ruth
2016-07-01
Incidental learning plays a crucial role in the initial phases of language acquisition. However, the knowledge derived from implicit learning, which is based on prediction-based mechanisms, may become explicit. The role that attention plays in the formation of implicit and explicit knowledge of the learned material is unclear. In the present study, we investigated the role that attention plays in the acquisition of non-adjacent rule learning from speech. In addition, we tested whether the amount of attention during learning changes the representation of the learned material after a 24 h delay containing sleep. For that, we developed an experiment run on two consecutive days, consisting of exposure to an artificial language that contained non-adjacent dependencies (rules) between words, while different conditions were established to manipulate the amount of attention given to the rules (target and non-target conditions). Furthermore, we used both indirect and direct measures of learning, which are more sensitive to implicit and explicit knowledge, respectively. Whereas the indirect measures indicated that learning of the rules occurred regardless of attention, more explicit judgments after learning showed differences in the type of learning reached under the two attention conditions. Twenty-four hours later, indirect measures showed no further improvements during additional language exposure, and explicit judgments indicated that only the information more robustly learned on the previous day was consolidated.
Parameter estimation in spiking neural networks: a reverse-engineering approach.
Rostro-Gonzalez, H; Cessac, B; Vieville, T
2012-04-01
This paper presents a reverse-engineering approach for parameter estimation in spiking neural networks (SNNs). We consider the deterministic evolution of a time-discretized network of spiking neurons with delayed synaptic transmission, modeled as a neural network of the generalized integrate-and-fire type. Our approach aims at bypassing the fact that parameter estimation in SNNs becomes an NP-hard problem when delays are to be considered. Here, the estimation problem is reformulated as a linear programming (LP) problem so that it can be solved in polynomial time. Moreover, the LP formulation makes explicit that the reverse engineering of a neural network can be performed from the observation of the spike times alone. Furthermore, we point out how the LP adjustment mechanism is local to each neuron and has the same structure as a 'Hebbian' rule. Finally, we present a generalization of this approach to the design of input-output (I/O) transformations, as a practical method to 'program' a spiking network, i.e. to find a set of parameters allowing us to exactly reproduce the network output, given an input. Numerical verifications and illustrations are provided.
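A toy version of the core idea, spike-time observations becoming linear constraints on the weights, using a deliberately simplified neuron (memoryless thresholding, no delays or leak) rather than the paper's generalized integrate-and-fire model. The threshold, margin, and data are assumptions; scipy.optimize.linprog is used only to find a feasible weight vector.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(5)
N, T = 20, 200                       # synapses, observation time steps
theta = 1.0                          # assumed known firing threshold

w_true = rng.uniform(0.0, 0.2, N)    # hidden weights we would like to recover
x = rng.random((T, N))               # observed presynaptic activity
spikes = (x @ w_true) >= theta       # observed output spike train

# One linear constraint per observed time step:
#   spike at t     ->  w @ x[t] >= theta + eps
#   no spike at t  ->  w @ x[t] <= theta - eps     (small margin, toy setting)
eps = 1e-6
A_ub = np.where(spikes[:, None], -x, x)
b_ub = np.where(spikes, -(theta + eps), theta - eps)

res = linprog(c=np.zeros(N), A_ub=A_ub, b_ub=b_ub, bounds=[(-5, 5)] * N)
w_hat = res.x

print("LP feasible:", res.success)
print("observed spike train reproduced:", np.array_equal((x @ w_hat) >= theta, spikes))
```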
Cuppini, Cristiano; Ursino, Mauro; Magosso, Elisa; Ross, Lars A.; Foxe, John J.; Molholm, Sophie
2017-01-01
Failure to appropriately develop multisensory integration (MSI) of audiovisual speech may affect a child's ability to attain optimal communication. Studies have shown protracted development of MSI into late-childhood and identified deficits in MSI in children with an autism spectrum disorder (ASD). Currently, the neural basis of acquisition of this ability is not well understood. Here, we developed a computational model informed by neurophysiology to analyze possible mechanisms underlying MSI maturation, and its delayed development in ASD. The model posits that strengthening of feedforward and cross-sensory connections, responsible for the alignment of auditory and visual speech sound representations in posterior superior temporal gyrus/sulcus, can explain behavioral data on the acquisition of MSI. This was simulated by a training phase during which the network was exposed to unisensory and multisensory stimuli, and projections were crafted by Hebbian rules of potentiation and depression. In its mature architecture, the network also reproduced the well-known multisensory McGurk speech effect. Deficits in audiovisual speech perception in ASD were well accounted for by fewer multisensory exposures, compatible with a lack of attention, but not by reduced synaptic connectivity or synaptic plasticity. PMID:29163099
The Effects of Concurrent Verbal and Visual Tasks on Category Learning
ERIC Educational Resources Information Center
Miles, Sarah J.; Minda, John Paul
2011-01-01
Current theories of category learning posit separate verbal and nonverbal learning systems. Past research suggests that the verbal system relies on verbal working memory and executive functioning and learns rule-defined categories; the nonverbal system does not rely on verbal working memory and learns non-rule-defined categories (E. M. Waldron…
Learning in the Absence of Experience-Dependent Regulation of NMDAR Composition
ERIC Educational Resources Information Center
Lebel, David; Sidhu, Nishchal; Barkai, Edi; Quinlan, Elizabeth M.
2006-01-01
Olfactory discrimination (OD) learning consists of two phases: an initial N-methyl-d-aspartate (NMDA) receptor--sensitive rule-learning phase, followed by an NMDA receptor (NMDAR)--insensitive pair-learning phase. The rule-learning phase is accompanied by changes in the composition and function of NMDARs at synapses in the piriform cortex,…
Development of Maps of Simple and Complex Cells in the Primary Visual Cortex
Antolík, Ján; Bednar, James A.
2011-01-01
Hubel and Wiesel (1962) classified primary visual cortex (V1) neurons as either simple, with responses modulated by the spatial phase of a sine grating, or complex, i.e., largely phase invariant. Much progress has been made in understanding how simple-cells develop, and there are now detailed computational models establishing how they can form topographic maps ordered by orientation preference. There are also models of how complex cells can develop using outputs from simple cells with different phase preferences, but no model of how a topographic orientation map of complex cells could be formed based on the actual connectivity patterns found in V1. Addressing this question is important, because the majority of existing developmental models of simple-cell maps group neurons selective to similar spatial phases together, which is contrary to experimental evidence, and makes it difficult to construct complex cells. Overcoming this limitation is not trivial, because mechanisms responsible for map development drive receptive fields (RF) of nearby neurons to be highly correlated, while co-oriented RFs of opposite phases are anti-correlated. In this work, we model V1 as two topographically organized sheets representing cortical layer 4 and 2/3. Only layer 4 receives direct thalamic input. Both sheets are connected with narrow feed-forward and feedback connectivity. Only layer 2/3 contains strong long-range lateral connectivity, in line with current anatomical findings. Initially all weights in the model are random, and each is modified via a Hebbian learning rule. The model develops smooth, matching, orientation preference maps in both sheets. Layer 4 units become simple cells, with phase preference arranged randomly, while those in layer 2/3 are primarily complex cells. To our knowledge this model is the first explaining how simple cells can develop with random phase preference, and how maps of complex cells can develop, using only realistic patterns of connectivity. PMID:21559067
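Not the two-sheet model described above, but a minimal sketch of its basic ingredients: a Hebbian weight update combined with divisive weight normalization and a crude winner-take-most competition, starting from random weights. The input statistics, competition scheme, and constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
n_inputs, n_units, n_patterns = 64, 10, 3000
lr = 0.05

# Input patterns built as random mixtures of a few fixed "features"
features = rng.random((8, n_inputs))
patterns = rng.random((n_patterns, 8)) @ features
patterns /= patterns.max(axis=1, keepdims=True)

W = rng.random((n_units, n_inputs))
W /= W.sum(axis=1, keepdims=True)                 # divisive normalization

for x in patterns:
    y = W @ x                                     # feedforward activation
    y = np.where(y >= np.sort(y)[-3], y, 0.0)     # crude competition: keep 3 most active units
    W += lr * np.outer(y, x)                      # Hebbian: co-active pairs strengthen
    W /= W.sum(axis=1, keepdims=True)             # renormalize to prevent unbounded growth

print("weight rows still sum to 1:", np.allclose(W.sum(axis=1), 1.0))
```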
Collective Dynamics in Physical and Social Networks
NASA Astrophysics Data System (ADS)
Isakov, Alexander
We study four systems where individual units come together to display a range of collective behavior. First, we consider a physical system of phase oscillators on a network that expands the Kuramoto model to include oscillator-network interactions and the presence of noise: using a Hebbian-like learning rule, oscillators that synchronize in turn strengthen their connections to each other. We find that the average degree of connectivity strongly affects rates of flipping between aligned and anti-aligned states, and that this result persists to the case of complex networks. Turning to a fully multi-player, multi-strategy evolutionary dynamics model of cooperating bacteria that change who they give resources to and take resources from, we find several regimes that give rise to high levels of collective structure in the resulting networks. In this setting, we also explore the conditions in which an intervention that affects cooperation itself (e.g. "seeding the network with defectors") can lead to wiping out an infection. We find a non-monotonic connection between the percent of disabled cooperation and cure rate, suggesting that in some regimes a limited perturbation can lead to total population collapse. At a larger scale, we study how the locomotor system recovers after amputation in fruit flies. Through experiment and a theoretical model of multi-legged motion controlled by neural oscillators, we find that proprioception plays a role in the ability of flies to control leg forces appropriately to recover from a large initial turning bias induced by the injury. Finally, at the human scale, we consider a social network in a traditional society in Africa to understand how social ties lead to group formation for collective action (stealth raids). We identify critical and distinct roles for both leadership (important for catalyzing a group) and friendship (important for final composition). We conclude with prospects for future work.
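A minimal sketch of the first system described above: noisy Kuramoto phase oscillators whose couplings are strengthened between synchronized pairs by a Hebbian-like rule. The functional form of the plasticity, the bounds, and all constants are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
n, dt, steps = 20, 0.01, 20000
omega = rng.normal(0.0, 0.5, n)        # natural frequencies
theta = rng.uniform(0, 2 * np.pi, n)   # phases
K = rng.random((n, n)) * 0.5           # coupling matrix (the "network")
np.fill_diagonal(K, 0.0)
noise, eps, K_max = 0.1, 0.05, 2.0     # assumed noise level, plasticity rate, coupling bound

for _ in range(steps):
    dtheta = theta[None, :] - theta[:, None]              # pairwise phase differences
    coupling = (K * np.sin(dtheta)).sum(axis=1) / n
    theta += dt * (omega + coupling) + np.sqrt(dt) * noise * rng.standard_normal(n)
    # Hebbian-like rule: in-phase pairs strengthen their coupling, anti-phase pairs
    # weaken it; couplings relax toward K_max*cos(phase difference) within [0, K_max].
    K += dt * eps * (np.cos(dtheta) * K_max - K)
    np.fill_diagonal(K, 0.0)
    np.clip(K, 0.0, K_max, out=K)

order = abs(np.exp(1j * theta).mean())   # Kuramoto order parameter
print(f"order parameter r = {order:.2f}, mean coupling = {K.mean():.2f}")
```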
The effect of negative performance stereotypes on learning.
Rydell, Robert J; Rydell, Michael T; Boucher, Kathryn L
2010-12-01
Stereotype threat (ST) research has focused exclusively on how negative group stereotypes reduce performance. The present work examines whether pejorative stereotypes about women in math inhibit their ability to learn the mathematical rules and operations necessary to solve math problems. In Experiment 1, women experiencing ST had difficulty encoding math-related information into memory and, therefore, learned fewer mathematical rules and showed poorer math performance than did controls. In Experiment 2, women experiencing ST while learning modular arithmetic (MA) performed more poorly than did controls on easy MA problems; this effect was due to reduced learning of the mathematical operations underlying MA. In Experiment 3, ST reduced women's, but not men's, ability to learn abstract mathematical rules and to transfer these rules to a second, isomorphic task. This work provides the first evidence that negative stereotypes about women in math reduce their level of mathematical learning and demonstrates that reduced learning due to stereotype threat can lead to poorer performance in negatively stereotyped domains.
Rule Breaking in the Child Care Centre: Tensions for Children and Teachers
ERIC Educational Resources Information Center
Brennan, Margaret
2016-01-01
Research suggests that young children transgress conventional rules in every culture and society. In this article, the argument is made that rule teaching and learning provide insight into how children learn to be part of a group. The research question addressed is, "Why do some children transgress the rules if their actions risk jeopardising…
Rule-based mechanisms of learning for intelligent adaptive flight control
NASA Technical Reports Server (NTRS)
Handelman, David A.; Stengel, Robert F.
1990-01-01
How certain aspects of human learning can be used to characterize learning in intelligent adaptive control systems is investigated. Reflexive and declarative memory and learning are described. It is shown that model-based systems-theoretic adaptive control methods exhibit attributes of reflexive learning, whereas the problem-solving capabilities of knowledge-based systems of artificial intelligence are naturally suited for implementing declarative learning. Issues related to learning in knowledge-based control systems are addressed, with particular attention given to rule-based systems. A mechanism for real-time rule-based knowledge acquisition is suggested, and utilization of this mechanism within the context of failure diagnosis for fault-tolerant flight control is demonstrated.
Category Learning Strategies in Younger and Older Adults: Rule Abstraction and Memorization
Wahlheim, Christopher N.; McDaniel, Mark A.; Little, Jeri L.
2016-01-01
Despite the fundamental role of category learning in cognition, few studies have examined how this ability differs between younger and older adults. The present experiment examined possible age differences in category learning strategies and their effects on learning. Participants were trained on a category determined by a disjunctive rule applied to relational features. The utilization of rule- and exemplar-based strategies was indexed by self-reports and transfer performance. Based on self-reported strategies, both age groups had comparable frequencies of rule- and exemplar-based learners, but older adults had a higher frequency of intermediate learners (i.e., learners not identifying with a reliance on either rule- or exemplar-based strategies). Training performance was higher for younger than older adults regardless of the strategy utilized, showing that older adults were impaired in their ability to learn the correct rule or to remember exemplar-label associations. Transfer performance converged with strategy reports in showing higher fidelity category representations for younger adults. Younger adults with high working memory capacity were more likely to use an exemplar-based strategy, and older adults with high working memory capacity showed better training performance. Age groups did not differ in their self-reported memory beliefs, and these beliefs did not predict training strategies or performance. Overall, the present results contradict earlier findings that older adults prefer rule- to exemplar-based learning strategies, presumably to compensate for memory deficits. PMID:26950225
A Machine Learning Approach to Student Modeling.
1984-05-01
…machine learning, and describe ACM, a student modeling system that incorporates this approach. This system begins with a set of overly general rules, which it uses to search a problem space until it arrives at the same answer as the student. The ACM computer program then uses the solution path it has discovered to determine positive and negative instances of its initial rules, and employs a discrimination learning mechanism to place additional conditions on these rules. The revised rules will reproduce the solution path without search, and constitute a cognitive model of the student.
Bazhenov, Maxim; Huerta, Ramon; Smith, Brian H.
2013-01-01
Nonassociative and associative learning rules simultaneously modify neural circuits. However, it remains unclear how these forms of plasticity interact to produce conditioned responses. Here we integrate nonassociative and associative conditioning within a uniform model of olfactory learning in the honeybee. Honeybees show a fairly abrupt increase in response after a number of conditioning trials. The occurrence of this abrupt change takes many more trials after exposure to nonassociative trials than just using associative conditioning. We found that the interaction of unsupervised and supervised learning rules is critical for explaining latent inhibition phenomenon. Associative conditioning combined with the mutual inhibition between the output neurons produces an abrupt increase in performance despite smooth changes of the synaptic weights. The results show that an integrated set of learning rules implemented using fan-out connectivities together with neural inhibition can explain the broad range of experimental data on learning behaviors. PMID:23536082
Towards cortex sized artificial neural systems.
Johansson, Christopher; Lansner, Anders
2007-01-01
We propose, implement, and discuss an abstract model of the mammalian neocortex. This model is instantiated with a sparse recurrently connected neural network that has spiking leaky integrator units and continuous Hebbian learning. First we study the structure, modularization, and size of neocortex, and then we describe a generic computational model of the cortical circuitry. A characterizing feature of the model is that it is based on the modularization of neocortex into hypercolumns and minicolumns. Both a floating- and fixed-point arithmetic implementation of the model are presented along with simulation results. We conclude that an implementation on a cluster computer is not communication but computation bounded. A mouse and rat cortex sized version of our model executes in 44% and 23% of real-time respectively. Further, an instance of the model with 1.6 x 10(6) units and 2 x 10(11) connections performed noise reduction and pattern completion. These implementations represent the current frontier of large-scale abstract neural network simulations in terms of network size and running speed.
Modeling complex tone perception: grouping harmonics with combination-sensitive neurons.
Medvedev, Andrei V; Chiao, Faye; Kanwal, Jagmeet S
2002-06-01
Perception of complex communication sounds is a major function of the auditory system. To create a coherent percept of these sounds, the auditory system may instantaneously group or bind multiple harmonics within complex sounds. This perception strategy simplifies further processing of complex sounds and facilitates their meaningful integration with other sensory inputs. Based on experimental data and a realistic model, we propose that associative learning of combinations of harmonic frequencies and nonlinear facilitation of responses to those combinations, also referred to as "combination-sensitivity," are important for spectral grouping. For our model, we simulated combination sensitivity using Hebbian and associative types of synaptic plasticity in auditory neurons. We also provided a parallel tonotopic input that converges and diverges within the network. Neurons in higher-order layers of the network exhibited an emergent property of multifrequency tuning that is consistent with experimental findings. Furthermore, this network had the capacity to "recognize" the pitch or fundamental frequency of a harmonic tone complex even when the fundamental frequency itself was missing.
The transfer of category knowledge by macaques (Macaca mulatta) and humans (Homo sapiens).
Zakrzewski, Alexandria C; Church, Barbara A; Smith, J David
2018-02-01
Cognitive psychologists distinguish implicit, procedural category learning (stimulus-response associations learned outside declarative cognition) from explicit-declarative category learning (conscious category rules). These systems are dissociated by category learning tasks with either a multidimensional, information-integration (II) solution or a unidimensional, rule-based (RB) solution. In the present experiments, humans and two monkeys learned II and RB category tasks fostering implicit and explicit learning, respectively. Then they received occasional transfer trials (never directly reinforced) drawn from untrained regions of the stimulus space. We hypothesized that implicit-procedural category learning, allied to associative learning, would transfer weakly because it is yoked to the training stimuli. This result was confirmed for humans and monkeys. We hypothesized that explicit category learning, allied to abstract category rules, would transfer robustly. This result was confirmed only for humans. That is, humans displayed explicit category knowledge that transferred flawlessly. Monkeys did not. This result illuminates the distinctive abstractness, stimulus independence, and representational portability of humans' explicit category rules.
Sleep facilitates learning a new linguistic rule
Batterink, Laura J.; Oudiette, Delphine; Reber, Paul J.; Paller, Ken A.
2014-01-01
Natural languages contain countless regularities. Extraction of these patterns is an essential component of language acquisition. Here we examined the hypothesis that memory processing during sleep contributes to this learning. We exposed participants to a hidden linguistic rule by presenting a large number of two-word phrases, each including a noun preceded by one of four novel words that functioned as an article (e.g., gi rhino). These novel words (ul, gi, ro and ne) were presented as obeying an explicit rule: two words signified that the noun referent was relatively near, and two that it was relatively far. Undisclosed to participants was the fact that the novel articles also predicted noun animacy, with two of the articles preceding animate referents and the other two preceding inanimate referents. Rule acquisition was tested implicitly using a task in which participants responded to each phrase according to whether the noun was animate or inanimate. Learning of the hidden rule was evident in slower responses to phrases that violated the rule. Responses were delayed regardless of whether rule-knowledge was consciously accessible. Brain potentials provided additional confirmation of implicit and explicit rule-knowledge. An afternoon nap was interposed between two 20-min learning sessions. Participants who obtained greater amounts of both slow-wave and rapid-eye-movement sleep showed increased sensitivity to the hidden linguistic rule in the second session. We conclude that during sleep, reactivation of linguistic information linked with the rule was instrumental for stabilizing learning. The combination of slow-wave and rapid-eye-movement sleep may synergistically facilitate the abstraction of complex patterns in linguistic input. PMID:25447376
Explanation-based learning in infancy.
Baillargeon, Renée; DeJong, Gerald F
2017-10-01
In explanation-based learning (EBL), domain knowledge is leveraged in order to learn general rules from few examples. An explanation is constructed for initial exemplars and is then generalized into a candidate rule that uses only the relevant features specified in the explanation; if the rule proves accurate for a few additional exemplars, it is adopted. EBL is thus highly efficient because it combines both analytic and empirical evidence. EBL has been proposed as one of the mechanisms that help infants acquire and revise their physical rules. To evaluate this proposal, 11- and 12-month-olds (n = 260) were taught to replace their current support rule (that an object is stable when half or more of its bottom surface is supported) with a more sophisticated rule (that an object is stable when half or more of the entire object is supported). Infants saw teaching events in which asymmetrical objects were placed on a base, followed by static test displays involving a novel asymmetrical object and a novel base. When the teaching events were designed to facilitate EBL, infants learned the new rule with as few as two (12-month-olds) or three (11-month-olds) exemplars. When the teaching events were designed to impede EBL, however, infants failed to learn the rule. Together, these results demonstrate that even infants, with their limited knowledge about the world, benefit from the knowledge-based approach of EBL.
Rule Based Category Learning in Patients with Parkinson’s Disease
Price, Amanda; Filoteo, J. Vincent; Maddox, W. Todd
2009-01-01
Measures of explicit rule-based category learning are commonly used in neuropsychological evaluation of individuals with Parkinson’s disease (PD) and the pattern of PD performance on these measures tends to be highly varied. We review the neuropsychological literature to clarify the manner in which PD affects the component processes of rule-based category learning and work to identify and resolve discrepancies within this literature. In particular, we address the manner in which PD and its common treatments affect the processes of rule generation, maintenance, shifting and selection. We then integrate the neuropsychological research with relevant neuroimaging and computational modeling evidence to clarify the neurobiological impact of PD on each process. Current evidence indicates that neurochemical changes associated with PD primarily disrupt rule shifting, and may disturb feedback-mediated learning processes that guide rule selection. Although surgical and pharmacological therapies remediate this deficit, it appears that the same treatments may contribute to impaired rule generation, maintenance and selection processes. These data emphasize the importance of distinguishing between the impact of PD and its common treatments when considering the neuropsychological profile of the disease. PMID:19428385
Applying the Rule Space Model to Develop a Learning Progression for Thermochemistry
NASA Astrophysics Data System (ADS)
Chen, Fu; Zhang, Shanshan; Guo, Yanfang; Xin, Tao
2017-12-01
We used the Rule Space Model, a cognitive diagnostic model, to measure the learning progression for thermochemistry for senior high school students. We extracted five attributes and proposed their hierarchical relationships to model the construct of thermochemistry at four levels using a hypothesized learning progression. For this study, we developed 24 test items addressing the attributes of exothermic and endothermic reactions, chemical bonds and heat quantity change, reaction heat and enthalpy, thermochemical equations, and Hess's law. The test was administered to a sample base of 694 senior high school students taught in 3 schools across 2 cities. Results based on the Rule Space Model analysis indicated that (1) the test items developed by the Rule Space Model were of high psychometric quality for good analysis of difficulties, discriminations, reliabilities, and validities; (2) the Rule Space Model analysis classified the students into seven different attribute mastery patterns; and (3) the initial hypothesized learning progression was modified by the attribute mastery patterns and the learning paths to be more precise and detailed.
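A minimal sketch of the Q-matrix machinery that rule-space analyses build on: given an item-by-attribute Q-matrix and a student's attribute-mastery pattern, compute the ideal (noise-free) response pattern. The Q-matrix entries and the assumed linear attribute hierarchy below are illustrative only, not the paper's 24-item instrument.

```python
import numpy as np

# Hypothetical Q-matrix: rows are items, columns are the five attributes
# (exo/endothermic reactions, bonds and heat change, reaction heat and enthalpy,
#  thermochemical equations, Hess's law). Entries are illustrative only.
Q = np.array([
    [1, 0, 0, 0, 0],
    [1, 1, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 0, 1, 1],
    [1, 1, 1, 1, 1],
])

def ideal_response(alpha, Q):
    """A student answers item i correctly iff they master every attribute item i requires."""
    return (alpha[None, :] >= Q).all(axis=1).astype(int)

# Attribute-mastery patterns consistent with an assumed linear hierarchy A1 -> A2 -> ... -> A5
for k in range(6):
    alpha = np.array([1] * k + [0] * (5 - k))
    print(alpha, "->", ideal_response(alpha, Q))
```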
Input and Age-Dependent Variation in Second Language Learning: A Connectionist Account.
Janciauskas, Marius; Chang, Franklin
2017-07-26
Language learning requires linguistic input, but several studies have found that knowledge of second language (L2) rules does not seem to improve with more language exposure (e.g., Johnson & Newport, 1989). One reason for this is that previous studies did not factor out variation due to the different rules tested. To examine this issue, we reanalyzed grammaticality judgment scores in Flege, Yeni-Komshian, and Liu's (1999) study of L2 learners using rule-related predictors and found that, in addition to the overall drop in performance due to a sensitive period, L2 knowledge increased with years of input. Knowledge of different grammar rules was negatively associated with input frequency of those rules. To better understand these effects, we modeled the results using a connectionist model that was trained using Korean as a first language (L1) and then English as an L2. To explain the sensitive period in L2 learning, the model's learning rate was reduced in an age-related manner. By assigning different learning rates for syntax and lexical learning, we were able to model the difference between early and late L2 learners in input sensitivity. The model's learning mechanism allowed transfer between the L1 and L2, and this helped to explain the differences between different rules in the grammaticality judgment task. This work demonstrates that an L1 model of learning and processing can be adapted to provide an explicit account of how the input and the sensitive period interact in L2 learning. © 2017 The Authors. Cognitive Science - A Multidisciplinary Journal published by Wiley Periodicals, Inc.
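A toy illustration of the modeling idea mentioned above, an age-related reduction of the learning rate, with separate schedules for syntactic and lexical learning. The exponential form and all constants are assumptions, not the parameters of the connectionist model in the paper.

```python
import numpy as np

def age_scaled_lr(age_of_onset, base_lr, decay):
    """Assumed exponential reduction of plasticity with L2 age of onset (years)."""
    return base_lr * np.exp(-decay * age_of_onset)

# Hypothetical separate schedules for syntactic vs. lexical learning
for age in (3, 7, 12, 17, 25):
    syn = age_scaled_lr(age, base_lr=0.10, decay=0.15)
    lex = age_scaled_lr(age, base_lr=0.10, decay=0.03)
    print(f"onset age {age:2d}: syntax lr = {syn:.4f}, lexical lr = {lex:.4f}")
```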
A Distributed Problem-Solving Approach to Rule Induction: Learning in Distributed Artificial Intelligence Systems
Shaw, Michael I.
1990-11-01
Improving Predictions of Multiple Binary Models in ILP
2014-01-01
Despite the success of ILP systems in learning first-order rules from a small number of examples and complexly structured data in various domains, they struggle with multiclass problems. In most cases they reduce a multiclass problem to multiple black-box binary problems using the one-versus-one or one-versus-rest binarisation techniques and learn a theory for each one. When the learned theories of multiclass problems are evaluated, particularly in the one-versus-rest paradigm, the default rule introduces a bias toward the negative classes that leads to unrealistically high performance, in addition to a lack of prediction integrity between the theories. Here we discuss the problem of using the one-versus-rest binarisation technique when evaluating multiclass data and propose several methods to remedy this problem. We also illustrate the methods and highlight their links to binary trees and Formal Concept Analysis (FCA). Our methods allow learning of a simple, consistent, and reliable multiclass theory by combining the rules of the multiple one-versus-rest theories into one rule-list or rule-set theory. Empirical evaluation over a number of data sets shows that our proposed methods produce coherent and accurate rule models from the rules learned by the ILP system Aleph. PMID:24696657
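As an illustration of combining one-versus-rest rule theories into a single rule list, here is a minimal, hypothetical sketch; the rule bodies, precision values, and classes are placeholders and do not reproduce Aleph's output or the paper's specific fusion methods.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Rule:
    cls: str                          # class this rule predicts
    test: Callable[[dict], bool]      # rule body applied to one example
    precision: float                  # estimated on training data

def build_decision_list(rules: List[Rule]) -> List[Rule]:
    # Order rules by training precision so stronger rules fire first,
    # avoiding the per-theory negative "default rule" bias.
    return sorted(rules, key=lambda r: r.precision, reverse=True)

def classify(example: dict, decision_list: List[Rule], default_cls: str) -> str:
    for rule in decision_list:
        if rule.test(example):
            return rule.cls
    return default_cls          # single global default instead of one per theory

# Hand-written rule bodies stand in for induced first-order clauses.
rules = [
    Rule("mammal", lambda e: e.get("has_fur", False), 0.95),
    Rule("bird", lambda e: e.get("lays_eggs", False) and e.get("can_fly", False), 0.90),
]
decision_list = build_decision_list(rules)
print(classify({"has_fur": True}, decision_list, default_cls="unknown"))
```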
Refining Linear Fuzzy Rules by Reinforcement Learning
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.; Khedkar, Pratap S.; Malkani, Anil
1996-01-01
Linear fuzzy rules are increasingly being used in the development of fuzzy logic systems. Radial basis functions have also been used in the antecedents of the rules for clustering in product space, which can automatically generate a set of linear fuzzy rules from an input/output data set. Manual methods are usually used to refine these rules. This paper presents a method for refining the parameters of these rules using reinforcement learning, which can be applied in domains where supervised input-output data are not available and reinforcements are received only after a long sequence of actions. This is shown for a generalization of radial basis functions. The formation of fuzzy rules from data and their automatic refinement is an important step toward applying reinforcement learning methods in domains where only limited input-output data are available.
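A minimal sketch of linear fuzzy rules with radial-basis-function antecedents of the kind described above: each rule's Gaussian membership weights a linear consequent, and the output is the normalized, firing-strength-weighted combination. The centers, widths, and consequent parameters are illustrative placeholders that a reinforcement signal would adjust; this is not the paper's algorithm.

```python
import numpy as np

class LinearFuzzyRules:
    """Rules of the form: IF x is near c_i (Gaussian RBF) THEN y_i = a_i . x + b_i."""

    def __init__(self, centers, widths, a, b):
        self.centers = np.asarray(centers, dtype=float)  # (n_rules, n_inputs)
        self.widths = np.asarray(widths, dtype=float)    # (n_rules,)
        self.a = np.asarray(a, dtype=float)              # (n_rules, n_inputs)
        self.b = np.asarray(b, dtype=float)              # (n_rules,)

    def firing_strengths(self, x):
        d2 = ((x - self.centers) ** 2).sum(axis=1)
        return np.exp(-d2 / (2.0 * self.widths ** 2))

    def output(self, x):
        w = self.firing_strengths(x)
        rule_outputs = self.a @ x + self.b
        # Normalized, firing-strength-weighted combination of the linear consequents.
        return float((w * rule_outputs).sum() / (w.sum() + 1e-12))

# Two illustrative rules on a 1-D input; a reinforcement signal received after a
# sequence of actions would be used to nudge a and b (and possibly the RBFs).
frbs = LinearFuzzyRules(centers=[[0.0], [1.0]], widths=[0.5, 0.5],
                        a=[[1.0], [-1.0]], b=[0.0, 1.0])
print(frbs.output(np.array([0.3])))
```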
Fifty years of computer analysis in chest imaging: rule-based, machine learning, deep learning.
van Ginneken, Bram
2017-03-01
Half a century ago, the term "computer-aided diagnosis" (CAD) was introduced in the scientific literature. Pulmonary imaging, with chest radiography and computed tomography, has always been one of the focus areas in this field. In this study, I describe how machine learning became the dominant technology for tackling CAD in the lungs, generally producing better results than do classical rule-based approaches, and how the field is now rapidly changing: in the last few years, we have seen how even better results can be obtained with deep learning. The key differences among rule-based processing, machine learning, and deep learning are summarized and illustrated for various applications of CAD in the chest.
Ciliates learn to diagnose and correct classical error syndromes in mating strategies
Clark, Kevin B.
2013-01-01
Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by “rivals” and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell–cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via “power” or “refrigeration” cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and non-modal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in social contexts. PMID:23966987
Ensemble learning with trees and rules: supervised, semi-supervised, unsupervised
USDA-ARS?s Scientific Manuscript database
In this article, we propose several new approaches for post processing a large ensemble of conjunctive rules for supervised and semi-supervised learning problems. We show with various examples that for high dimensional regression problems the models constructed by the post processing the rules with ...
The Utility of Implicit Learning in the Teaching of Rules
ERIC Educational Resources Information Center
Saetrevik, Bjorn; Reber, Rolf; Sannum, Petter
2006-01-01
The potential impact of implicit learning on education has been repeatedly stressed, though little research has examined this connection directly. The current paper describes two experiments that, inspired by artificial grammar learning experiments, examine the utility of implicit learning as a method for teaching atomic bonding rules to 11-12…
Pavlidou, Elpis V; Williams, Joanne M
2014-07-01
We examined implicit learning in school-aged children with and without developmental dyslexia based on the proposal that implicit learning plays a significant role in mastering fluent reading. We ran two experiments with 16 typically developing children (9 to 11-years-old) and 16 age-matched children with developmental dyslexia using the artificial grammar learning (AGL) paradigm. In Experiment 1 (non-transfer task), children were trained on stimuli that followed patterns (rules) unknown to them. Subsequently, they were asked to decide from a novel set which stimuli follow the same rules (grammaticality judgments). In Experiment 2 (transfer task), training and testing stimuli differed in their superficial characteristics but followed the same rules. Again, children were asked to make grammaticality judgments. Our findings expand upon previous research by showing that children with developmental dyslexia show difficulties in implicit learning that are most likely specific to higher-order rule-like learning. These findings are discussed in relation to current theories of developmental dyslexia and of implicit learning. Copyright © 2014 Elsevier Ltd. All rights reserved.
Minda, John P.; Rabi, Rahel
2015-01-01
Considerable research on category learning has suggested that many cognitive and environmental factors can have a differential effect on the learning of rule-defined (RD) categories as opposed to the learning of non-rule-defined (NRD) categories. Prior research has also suggested that ego depletion can temporarily reduce the capacity for executive functioning and cognitive flexibility. The present study examined whether temporarily reducing participants’ executive functioning via a resource depletion manipulation would differentially impact RD and NRD category learning. Participants were either asked to write a story with no restrictions (the control condition), or without using two common letters (the ego depletion condition). Participants were then asked to learn either a set of RD categories or a set of NRD categories. Resource depleted participants performed more poorly than controls on the RD task, but did not differ from controls on the NRD task, suggesting that self regulatory resources are required for successful RD category learning. These results lend support to multiple systems theories and clarify the role of self-regulatory resources within this theory. PMID:25688220
Strategies for adding adaptive learning mechanisms to rule-based diagnostic expert systems
NASA Technical Reports Server (NTRS)
Stclair, D. C.; Sabharwal, C. L.; Bond, W. E.; Hacke, Keith
1988-01-01
Rule-based diagnostic expert systems can be used to perform many of the diagnostic chores necessary in today's complex space systems. These expert systems typically take a set of symptoms as input and produce diagnostic advice as output. The primary objective of such expert systems is to provide accurate and comprehensive advice which can be used to help return the space system in question to nominal operation. The development and maintenance of diagnostic expert systems is time and labor intensive since the services of both knowledge engineer(s) and domain expert(s) are required. The use of adaptive learning mechanisms to incrementally evaluate and refine rules promises to reduce both time and labor costs associated with such systems. This paper describes the basic adaptive learning mechanisms of strengthening, weakening, generalization, discrimination, and discovery. Next, basic strategies are discussed for adding these learning mechanisms to rule-based diagnostic expert systems. These strategies support the incremental evaluation and refinement of rules in the knowledge base by comparing the set of advice given by the expert system (A) with the correct diagnosis (C). Techniques are described for selecting those rules in the knowledge base which should participate in adaptive learning. The strategies presented may be used with a wide variety of learning algorithms. Further, these strategies are applicable to a large number of rule-based diagnostic expert systems. They may be used to provide either immediate or deferred updating of the knowledge base.
Learning and disrupting invariance in visual recognition with a temporal association rule
Isik, Leyla; Leibo, Joel Z.; Poggio, Tomaso
2012-01-01
Learning by temporal association rules such as Foldiak's trace rule is an attractive hypothesis that explains the development of invariance in visual recognition. Consistent with these rules, several recent experiments have shown that invariance can be broken at both the psychophysical and single cell levels. We show (1) that temporal association learning provides appropriate invariance in models of object recognition inspired by the visual cortex, (2) that we can replicate the “invariance disruption” experiments using these models with a temporal association learning rule to develop and maintain invariance, and (3) that despite dramatic single cell effects, a population of cells is very robust to these disruptions. We argue that these models account for the stability of perceptual invariance despite the underlying plasticity of the system, the variability of the visual world and expected noise in the biological mechanisms. PMID:22754523
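A minimal sketch of a Foldiak-style trace rule of the kind referred to above: the weight update pairs the current input with a slowly decaying trace of postsynaptic activity, so temporally adjacent inputs come to drive the same output units. The learning rate, trace constant, and rectified-linear output are illustrative choices, not the authors' model of visual cortex.

```python
import numpy as np

def trace_rule_update(W, x, y, y_trace, eta=0.05, delta=0.2):
    """One step of a Foldiak-style trace rule (illustrative parameters).

    W: (n_out, n_in) weights; x: current input; y: current output;
    y_trace: slowly decaying trace of past postsynaptic activity.
    """
    y_trace = (1.0 - delta) * y_trace + delta * y          # low-pass filter of output
    W = W + eta * np.outer(y_trace, x)                     # Hebbian term uses the trace
    W = W / (np.linalg.norm(W, axis=1, keepdims=True) + 1e-12)  # keep weights bounded
    return W, y_trace

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))
y_trace = np.zeros(3)
# Temporally adjacent inputs (e.g., successive views of one object) are bound to
# the same outputs via y_trace; scrambling temporal contiguity would instead
# associate different objects with one output ("invariance disruption").
for x in rng.random((5, 8)):
    y = np.maximum(W @ x, 0.0)
    W, y_trace = trace_rule_update(W, x, y, y_trace)
print(np.round(y_trace, 3))
```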
Information from multiple modalities helps 5-month-olds learn abstract rules.
Frank, Michael C; Slemmer, Jonathan A; Marcus, Gary F; Johnson, Scott P
2009-07-01
By 7 months of age, infants are able to learn rules based on the abstract relationships between stimuli (Marcus et al., 1999), but they are better able to do so when exposed to speech than to some other classes of stimuli. In the current experiments we ask whether multimodal stimulus information will aid younger infants in identifying abstract rules. We habituated 5-month-olds to simple abstract patterns (ABA or ABB) instantiated in coordinated looming visual shapes and speech sounds (Experiment 1), shapes alone (Experiment 2), and speech sounds accompanied by uninformative but coordinated shapes (Experiment 3). Infants showed evidence of rule learning only in the presence of the informative multimodal cues. We hypothesize that the additional evidence present in these multimodal displays was responsible for the success of younger infants in learning rules, congruent with both a Bayesian account and with the Intersensory Redundancy Hypothesis.
A fuzzy classifier system for process control
NASA Technical Reports Server (NTRS)
Karr, C. L.; Phillips, J. C.
1994-01-01
A fuzzy classifier system that discovers rules for controlling a mathematical model of a pH titration system was developed by researchers at the U.S. Bureau of Mines (USBM). Fuzzy classifier systems successfully combine the strengths of learning classifier systems and fuzzy logic controllers. Learning classifier systems resemble familiar production rule-based systems, but they represent their IF-THEN rules by strings of characters rather than in the traditional linguistic terms. Fuzzy logic is a tool that allows for the incorporation of abstract concepts into rule-based systems, thereby allowing the rules to resemble the familiar 'rules-of-thumb' commonly used by humans when solving difficult process control and reasoning problems. Like learning classifier systems, fuzzy classifier systems employ a genetic algorithm to explore and sample new rules for manipulating the problem environment. Like fuzzy logic controllers, fuzzy classifier systems encapsulate knowledge in the form of production rules. The results presented in this paper demonstrate the ability of fuzzy classifier systems to generate a fuzzy logic-based process control system.
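A minimal, hypothetical sketch of the fuzzy classifier idea described above: rule conditions are character strings over linguistic labels (with '#' as a wildcard), matched against fuzzified readings, and the matching rules' actions are combined. The membership shapes, rule strings, and pH interpretation are placeholders; the genetic-algorithm rule discovery used by the USBM system is omitted.

```python
def memberships(value):
    # Fuzzy memberships of a normalized reading in [0, 1] to Low / Medium / High.
    return {
        "L": max(0.0, 1.0 - value / 0.5),
        "M": max(0.0, 1.0 - abs(value - 0.5) / 0.5),
        "H": max(0.0, (value - 0.5) / 0.5),
    }

def match_strength(condition, readings):
    # condition: one label per input, e.g. "L#"; '#' matches any value.
    strength = 1.0
    for label, reading in zip(condition, readings):
        if label != "#":
            strength = min(strength, memberships(reading)[label])
    return strength

# Classifier rules: condition string -> control action (hypothetical pH control).
rules = {"L#": +0.8, "M#": 0.0, "H#": -0.8}

def control_action(readings):
    strengths = {cond: match_strength(cond, readings) for cond in rules}
    total = sum(strengths.values()) + 1e-12
    # Defuzzified action: strength-weighted average of the rules' actions.
    return sum(rules[c] * s for c, s in strengths.items()) / total

print(control_action([0.2, 0.7]))  # low pH reading -> positive (base-adding) action
```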
Cooke, Sam F.; Bear, Mark F.
2014-01-01
Donald Hebb chose visual learning in primary visual cortex (V1) of the rodent to exemplify his theories of how the brain stores information through long-lasting homosynaptic plasticity. Here, we revisit V1 to consider roles for bidirectional ‘Hebbian’ plasticity in the modification of vision through experience. First, we discuss the consequences of monocular deprivation (MD) in the mouse, which have been studied by many laboratories over many years, and the evidence that synaptic depression of excitatory input from the thalamus is a primary contributor to the loss of visual cortical responsiveness to stimuli viewed through the deprived eye. Second, we describe a less studied, but no less interesting form of plasticity in the visual cortex known as stimulus-selective response potentiation (SRP). SRP results in increases in the response of V1 to a visual stimulus through repeated viewing and bears all the hallmarks of perceptual learning. We describe evidence implicating an important role for potentiation of thalamo-cortical synapses in SRP. In addition, we present new data indicating that there are some features of this form of plasticity that cannot be fully accounted for by such feed-forward Hebbian plasticity, suggesting contributions from intra-cortical circuit components. PMID:24298166
Sleep facilitates learning a new linguistic rule.
Batterink, Laura J; Oudiette, Delphine; Reber, Paul J; Paller, Ken A
2014-12-01
Natural languages contain countless regularities. Extraction of these patterns is an essential component of language acquisition. Here we examined the hypothesis that memory processing during sleep contributes to this learning. We exposed participants to a hidden linguistic rule by presenting a large number of two-word phrases, each including a noun preceded by one of four novel words that functioned as an article (e.g., gi rhino). These novel words (ul, gi, ro and ne) were presented as obeying an explicit rule: two words signified that the noun referent was relatively near, and two that it was relatively far. Undisclosed to participants was the fact that the novel articles also predicted noun animacy, with two of the articles preceding animate referents and the other two preceding inanimate referents. Rule acquisition was tested implicitly using a task in which participants responded to each phrase according to whether the noun was animate or inanimate. Learning of the hidden rule was evident in slower responses to phrases that violated the rule. Responses were delayed regardless of whether rule-knowledge was consciously accessible. Brain potentials provided additional confirmation of implicit and explicit rule-knowledge. An afternoon nap was interposed between two 20-min learning sessions. Participants who obtained greater amounts of both slow-wave and rapid-eye-movement sleep showed increased sensitivity to the hidden linguistic rule in the second session. We conclude that during sleep, reactivation of linguistic information linked with the rule was instrumental for stabilizing learning. The combination of slow-wave and rapid-eye-movement sleep may synergistically facilitate the abstraction of complex patterns in linguistic input. Copyright © 2014 Elsevier Ltd. All rights reserved.
RuleML-Based Learning Object Interoperability on the Semantic Web
ERIC Educational Resources Information Center
Biletskiy, Yevgen; Boley, Harold; Ranganathan, Girish R.
2008-01-01
Purpose: The present paper aims to describe an approach for building the Semantic Web rules for interoperation between heterogeneous learning objects, namely course outlines from different universities, and one of the rule uses: identifying (in)compatibilities between course descriptions. Design/methodology/approach: As proof of concept, a rule…
Bimodal emotion congruency is critical to preverbal infants' abstract rule learning.
Tsui, Angeline Sin Mei; Ma, Yuen Ki; Ho, Anna; Chow, Hiu Mei; Tseng, Chia-huei
2016-05-01
Extracting general rules from specific examples is important, as we must face the same challenge displayed in various formats. Previous studies have found that bimodal presentation of grammar-like rules (e.g. ABA) enhanced 5-month-olds' capacity to acquire a rule that infants failed to learn when the rule was presented with visual presentation of the shapes alone (circle-triangle-circle) or auditory presentation of the syllables (la-ba-la) alone. However, the mechanisms and constraints for this bimodal learning facilitation are still unknown. In this study, we used audio-visual relation congruency between bimodal stimulation to disentangle possible facilitation sources. We exposed 8- to 10-month-old infants to an AAB sequence consisting of visual faces with affective expressions and/or auditory voices conveying emotions. Our results showed that infants were able to distinguish the learned AAB rule from other novel rules under bimodal stimulation when the affects in audio and visual stimuli were congruently paired (Experiments 1A and 2A). Infants failed to acquire the same rule when audio-visual stimuli were incongruently matched (Experiment 2B) and when only the visual (Experiment 1B) or the audio (Experiment 1C) stimuli were presented. Our results highlight that bimodal facilitation in infant rule learning is not only dependent on better statistical probability and redundant sensory information, but also the relational congruency of audio-visual information. A video abstract of this article can be viewed at https://m.youtube.com/watch?v=KYTyjH1k9RQ. © 2015 John Wiley & Sons Ltd.
Learning a New Selection Rule in Visual and Frontal Cortex.
van der Togt, Chris; Stănişor, Liviu; Pooresmaeili, Arezoo; Albantakis, Larissa; Deco, Gustavo; Roelfsema, Pieter R
2016-08-01
How do you make a decision if you do not know the rules of the game? Models of sensory decision-making suggest that choices are slow if evidence is weak, but they may only apply if the subject knows the task rules. Here, we asked how the learning of a new rule influences neuronal activity in the visual (area V1) and frontal cortex (area FEF) of monkeys. We devised a new icon-selection task. On each day, the monkeys saw 2 new icons (small pictures) and learned which one was relevant. We rewarded eye movements to a saccade target connected to the relevant icon with a curve. Neurons in visual and frontal cortex coded the monkey's choice, because the representation of the selected curve was enhanced. Learning delayed the neuronal selection signals and we uncovered the cause of this delay in V1, where learning to select the relevant icon caused an early suppression of surrounding image elements. These results demonstrate that the learning of a new rule causes a transition from fast and random decisions to a more considerate strategy that takes additional time and they reveal the contribution of visual and frontal cortex to the learning process. © The Author 2016. Published by Oxford University Press.
Learning and innovative elements of strategy adoption rules expand cooperative network topologies.
Wang, Shijun; Szalay, Máté S; Zhang, Changshui; Csermely, Peter
2008-04-09
Cooperation plays a key role in the evolution of complex systems. However, the level of cooperation varies extensively with the topology of agent networks in the widely used models of repeated games. Here we show that cooperation remains rather stable when applying Q-learning, a reinforcement learning strategy adoption rule, to a variety of random, regular, small-world, scale-free and modular network models in repeated, multi-agent Prisoner's Dilemma and Hawk-Dove games. Furthermore, we found that, in the above model systems, other long-term learning strategy adoption rules also promote cooperation, while introducing a low level of noise (as a model of innovation) to the strategy adoption rules makes the level of cooperation less dependent on the actual network topology. Our results demonstrate that long-term learning and random elements in the strategy adoption rules, when acting together, extend the range of network topologies enabling the development of cooperation at a wider range of costs and temptations. These results suggest that a balanced duo of learning and innovation may help to preserve cooperation during the re-organization of real-world networks, and may play a prominent role in the evolution of self-organizing, complex systems.
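A minimal sketch of the Q-learning strategy adoption rule in a repeated Prisoner's Dilemma, with the state taken to be the opponent's previous move; payoff values, learning parameters, and the two-agent setting (rather than the paper's networks of agents) are illustrative assumptions.

```python
import random

# Prisoner's Dilemma payoffs for (my_move, opponent_move); C = cooperate, D = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

class QLearner:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.05):
        self.q = {(s, a): 0.0 for s in ("C", "D") for a in ("C", "D")}
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:          # noise/innovation in adoption
            return random.choice(("C", "D"))
        return max(("C", "D"), key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in ("C", "D"))
        self.q[(state, action)] += self.alpha * (
            reward + self.gamma * best_next - self.q[(state, action)])

# Two Q-learning agents; each agent's state is its opponent's previous move.
a, b = QLearner(), QLearner()
state_a = state_b = "C"
for _ in range(5000):
    move_a, move_b = a.act(state_a), b.act(state_b)
    a.update(state_a, move_a, PAYOFF[(move_a, move_b)], move_b)
    b.update(state_b, move_b, PAYOFF[(move_b, move_a)], move_a)
    state_a, state_b = move_b, move_a
print(a.q)
```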
Johnen, Vanessa M; Neubert, Franz-Xaver; Buch, Ethan R; Verhagen, Lennart; O'Reilly, Jill X; Mars, Rogier B; Rushworth, Matthew F S
2015-01-01
Correlations in brain activity between two areas (functional connectivity) have been shown to relate to their underlying structural connections. We examine the possibility that functional connectivity also reflects short-term changes in synaptic efficacy. We demonstrate that paired transcranial magnetic stimulation (TMS) near ventral premotor cortex (PMv) and primary motor cortex (M1) with a short 8-ms inter-pulse interval evoking synchronous pre- and post-synaptic activity and which strengthens interregional connectivity between the two areas in a pattern consistent with Hebbian plasticity, leads to increased functional connectivity between PMv and M1 as measured with functional magnetic resonance imaging (fMRI). Moreover, we show that strengthening connectivity between these nodes has effects on a wider network of areas, such as decreasing coupling in a parallel motor programming stream. A control experiment revealed that identical TMS pulses at identical frequencies caused no change in fMRI-measured functional connectivity when the inter-pulse-interval was too long for Hebbian-like plasticity. DOI: http://dx.doi.org/10.7554/eLife.04585.001 PMID:25664941
Phonological Concept Learning.
Moreton, Elliott; Pater, Joe; Pertsova, Katya
2017-01-01
Linguistic and non-linguistic pattern learning have been studied separately, but we argue for a comparative approach. Analogous inductive problems arise in phonological and visual pattern learning. Evidence from three experiments shows that human learners can solve them in analogous ways, and that human performance in both cases can be captured by the same models. We test GMECCS (Gradual Maximum Entropy with a Conjunctive Constraint Schema), an implementation of the Configural Cue Model (Gluck & Bower, ) in a Maximum Entropy phonotactic-learning framework (Goldwater & Johnson, ; Hayes & Wilson, ) with a single free parameter, against the alternative hypothesis that learners seek featurally simple algebraic rules ("rule-seeking"). We study the full typology of patterns introduced by Shepard, Hovland, and Jenkins () ("SHJ"), instantiated as both phonotactic patterns and visual analogs, using unsupervised training. Unlike SHJ, Experiments 1 and 2 found that both phonotactic and visual patterns that depended on fewer features could be more difficult than those that depended on more features, as predicted by GMECCS but not by rule-seeking. GMECCS also correctly predicted performance differences between stimulus subclasses within each pattern. A third experiment tried supervised training (which can facilitate rule-seeking in visual learning) to elicit simple rule-seeking phonotactic learning, but cue-based behavior persisted. We conclude that similar cue-based cognitive processes are available for phonological and visual concept learning, and hence that studying either kind of learning can lead to significant insights about the other. Copyright © 2015 Cognitive Science Society, Inc.
Who Knows? Metacognitive Social Learning Strategies.
Heyes, Cecilia
2016-03-01
To make good use of learning from others (social learning), we need to learn from the right others; from agents who know better than we do. Research on social learning strategies (SLSs) has identified rules that focus social learning on the right agents, and has shown that the behaviour of many animals conforms to these rules. However, it has not asked what the rules are made of, that is, about the cognitive processes implementing SLSs. Here, I suggest that most SLSs depend on domain-general, sensorimotor processes. However, some SLSs have the characteristics tacitly ascribed to all of them. These metacognitive SLSs represent 'who knows' in a conscious, reportable way, and have the power to promote cultural evolution. Copyright © 2015 Elsevier Ltd. All rights reserved.
Instructional Variables in Meaningful Learning of Computer Programming.
ERIC Educational Resources Information Center
Mayer, Richard E.
Some 120 undergraduate students participated in experiments to learn how novice computer programmers learn to interact with the computer. Two instructional booklets were used: a "rule" booklet consisted of definitions and examples of seven modified FORTRAN statements and appropriate grammar rules; the "model" booklet was…
McDaniel, Mark A; Cahill, Michael J; Robbins, Mathew; Wiener, Chelsea
2014-04-01
We hypothesize that during training some learners may focus on acquiring the particular exemplars and responses associated with the exemplars (termed exemplar learners), whereas other learners attempt to abstract underlying regularities reflected in the particular exemplars linked to an appropriate response (termed rule learners). Supporting this distinction, after training (on a function-learning task), participants displayed an extrapolation profile reflecting either acquisition of the trained cue-criterion associations (exemplar learners) or abstraction of the function rule (rule learners; Studies 1a and 1b). Further, working memory capacity (measured by operation span [Ospan]) was associated with the tendency to rely on rule versus exemplar processes. Studies 1c and 2 examined the persistence of these learning tendencies on several categorization tasks. Study 1c showed that rule learners were more likely than exemplar learners (indexed a priori by extrapolation profiles) to resist using idiosyncratic features (exemplar similarity) in generalization (transfer) of the trained category. Study 2 showed that the rule learners but not the exemplar learners performed well on a novel categorization task (transfer) after training on an abstract coherent category. These patterns suggest that in complex conceptual tasks, (a) individuals tend to either focus on exemplars during learning or on extracting some abstraction of the concept, (b) this tendency might be a relatively stable characteristic of the individual, and (c) transfer patterns are determined by that tendency.
eFSM--a novel online neural-fuzzy semantic memory model.
Tung, Whye Loon; Quek, Chai
2010-01-01
Fuzzy rule-based systems (FRBSs) have been successfully applied to many areas. However, traditional fuzzy systems are often manually crafted, and their rule bases that represent the acquired knowledge are static and cannot be trained to improve the modeling performance. This subsequently leads to intensive research on the autonomous construction and tuning of a fuzzy system directly from the observed training data to address the knowledge acquisition bottleneck, resulting in well-established hybrids such as neural-fuzzy systems (NFSs) and genetic fuzzy systems (GFSs). However, the complex and dynamic nature of real-world problems demands that fuzzy rule-based systems and models be able to adapt their parameters and ultimately evolve their rule bases to address the nonstationary (time-varying) characteristics of their operating environments. Recently, considerable research efforts have been directed to the study of evolving Takagi-Sugeno (T-S)-type NFSs based on the concept of incremental learning. In contrast, there are very few incremental learning Mamdani-type NFSs reported in the literature. Hence, this paper presents the evolving neural-fuzzy semantic memory (eFSM) model, a neural-fuzzy Mamdani architecture with a data-driven progressively adaptive structure (i.e., rule base) based on incremental learning. Issues related to the incremental learning of the eFSM rule base are carefully investigated, and a novel parameter learning approach is proposed for the tuning of the fuzzy set parameters in eFSM. The proposed eFSM model elicits highly interpretable semantic knowledge in the form of Mamdani-type if-then fuzzy rules from low-level numeric training data. These Mamdani fuzzy rules define the computing structure of eFSM and are incrementally learned with the arrival of each training data sample. New rules are constructed upon the emergence of novel training data, and obsolete fuzzy rules that no longer describe the recently observed data trends are pruned. This enables eFSM to maintain a current and compact set of Mamdani-type if-then fuzzy rules that collectively generalizes and describes the salient associative mappings between the inputs and outputs of the underlying process being modeled. The learning and modeling performances of the proposed eFSM are evaluated using several benchmark applications and the results are encouraging.
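A hypothetical sketch of the general incremental Mamdani-style idea described above, not the eFSM algorithm itself: samples either reinforce the best-matching rule or, when nothing covers them well, spawn a new if-then rule, and rarely used rules can be pruned. All thresholds and the Gaussian antecedents are illustrative.

```python
import numpy as np

class MamdaniIncremental:
    """Illustrative incremental Mamdani-style rule base (not the eFSM algorithm)."""

    def __init__(self, coverage_threshold=0.3, width=0.5):
        self.rules = []              # each rule: {"center", "label", "usage"}
        self.tau = coverage_threshold
        self.width = width

    def _membership(self, center, x):
        return float(np.exp(-np.sum((x - center) ** 2) / (2 * self.width ** 2)))

    def observe(self, x, output_label):
        x = np.asarray(x, dtype=float)
        fits = [self._membership(r["center"], x) for r in self.rules]
        if not fits or max(fits) < self.tau:
            # Novel region of the input space: create a new if-then rule.
            self.rules.append({"center": x.copy(), "label": output_label, "usage": 1})
        else:
            # Reinforce and slightly adapt the best-matching existing rule.
            r = self.rules[int(np.argmax(fits))]
            r["center"] += 0.1 * (x - r["center"])
            r["usage"] += 1

    def prune(self, min_usage=2):
        # Drop obsolete rules that no longer describe the observed data.
        self.rules = [r for r in self.rules if r["usage"] >= min_usage]

    def explain(self, x):
        x = np.asarray(x, dtype=float)
        r = max(self.rules, key=lambda r: self._membership(r["center"], x))
        return f"IF input is near {np.round(r['center'], 2)} THEN output is {r['label']}"

m = MamdaniIncremental()
for xi, label in [([0.1, 0.2], "low"), ([0.15, 0.22], "low"), ([0.9, 0.8], "high")]:
    m.observe(xi, label)
print(m.explain([0.12, 0.21]))
```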
Electrophysiological responses to feedback during the application of abstract rules.
Walsh, Matthew M; Anderson, John R
2013-11-01
Much research focuses on how people acquire concrete stimulus-response associations from experience; however, few neuroscientific studies have examined how people learn about and select among abstract rules. To address this issue, we recorded ERPs as participants performed an abstract rule-learning task. In each trial, they viewed a sample number and two test numbers. Participants then chose a test number using one of three abstract mathematical rules they freely selected from: greater than the sample number, less than the sample number, or equal to the sample number. No one rule was always rewarded, but some rules were rewarded more frequently than others. To maximize their earnings, participants needed to learn which rules were rewarded most frequently. All participants learned to select the best rules for repeating and novel stimulus sets that obeyed the overall reward probabilities. Participants differed, however, in the extent to which they overgeneralized those rules to repeating stimulus sets that deviated from the overall reward probabilities. The feedback-related negativity (FRN), an ERP component thought to reflect reward prediction error, paralleled behavior. The FRN was sensitive to item-specific reward probabilities in participants who detected the deviant stimulus set, and the FRN was sensitive to overall reward probabilities in participants who did not. These results show that the FRN is sensitive to the utility of abstract rules and that the individual's representation of a task's states and actions shapes behavior as well as the FRN.
Concreteness Fading of Algebraic Instruction: Effects on Learning
ERIC Educational Resources Information Center
Ottmar, Erin; Landy, David
2017-01-01
Learning algebra is difficult for many students in part because of an emphasis on the memorization of abstract rules. Algebraic reasoners across expertise levels often rely on perceptual-motor strategies to make these rules meaningful and memorable. However, in many cases, rules are provided as patterns to be memorized verbally, with little overt…
ERIC Educational Resources Information Center
Zhang, Zhidong
2016-01-01
This study explored an alternative assessment procedure to examine learning trajectories of matrix multiplication. It took rule-based analytical and cognitive task analysis methods specifically to break down operation rules for a given matrix multiplication. Based on the analysis results, a hierarchical Bayesian network, an assessment model,…
Wojtusiak, Janusz; Michalski, Ryszard S; Simanivanh, Thipkesone; Baranova, Ancha V
2009-12-01
Systematic reviews and meta-analyses of published clinical datasets are an important part of medical research. By combining results of multiple studies, meta-analysis is able to increase confidence in its conclusions, validate particular study results, and sometimes lead to new findings. Extensive theory has been built on how to aggregate results from multiple studies and arrive at statistically valid conclusions. Surprisingly, very little has been done to adopt advanced machine learning methods to support meta-analysis. In this paper we describe a novel machine learning methodology that is capable of inducing accurate and easy-to-understand attributional rules from aggregated data. Thus, the methodology can be used to support traditional meta-analysis in systematic reviews. Most machine learning applications give primary attention to predictive accuracy of the learned knowledge, and lesser attention to its understandability. Here we employed attributional rules, a special form of rules that are relatively easy to interpret for medical experts who are not necessarily trained in statistics and meta-analysis. The methodology has been implemented and initially tested on a set of publicly available clinical data describing patients with metabolic syndrome (MS). The objective of this application was to determine rules describing combinations of clinical parameters used for metabolic syndrome diagnosis, and to develop rules for predicting whether particular patients are likely to develop secondary complications of MS. The aggregated clinical data were retrieved from 20 separate hospital cohorts that included 12 groups of patients with present liver disease symptoms and 8 control groups of healthy subjects. A total of 152 attributes were used, most of which, however, were measured in different studies. The twenty most common attributes were selected for the rule learning process. By applying the developed rule learning methodology we arrived at several different possible rulesets that can be used to predict the three considered complications of MS, namely nonalcoholic fatty liver disease (NAFLD), simple steatosis (SS), and nonalcoholic steatohepatitis (NASH).
Improving drivers' knowledge of road rules using digital games.
Li, Qing; Tay, Richard
2014-04-01
Although a proficient knowledge of the road rules is important to safe driving, many drivers do not retain the knowledge acquired after they have obtained their licenses. Hence, more innovative and appealing methods are needed to improve drivers' knowledge of the road rules. This study examines the effect of game based learning on drivers' knowledge acquisition and retention. We find that playing an entertaining game that is designed to impart knowledge of the road rules not only improves players' knowledge but also helps them retain such knowledge. Hence, learning by gaming appears to be a promising learning approach for driver education. Copyright © 2013 Elsevier Ltd. All rights reserved.
Category learning strategies in younger and older adults: Rule abstraction and memorization.
Wahlheim, Christopher N; McDaniel, Mark A; Little, Jeri L
2016-06-01
Despite the fundamental role of category learning in cognition, few studies have examined how this ability differs between younger and older adults. The present experiment examined possible age differences in category learning strategies and their effects on learning. Participants were trained on a category determined by a disjunctive rule applied to relational features. The utilization of rule- and exemplar-based strategies was indexed by self-reports and transfer performance. Based on self-reported strategies, the frequencies of rule- and exemplar-based learners were not significantly different between age groups, but there was a significantly higher frequency of intermediate learners (i.e., learners not identifying with a reliance on either rule- or exemplar-based strategies) in the older than younger adult group. Training performance was higher for younger than older adults regardless of the strategy utilized, showing that older adults were impaired in their ability to learn the correct rule or to remember exemplar-label associations. Transfer performance converged with strategy reports in showing higher fidelity category representations for younger adults. Younger adults with high working memory capacity were more likely to use an exemplar-based strategy, and older adults with high working memory capacity showed better training performance. Age groups did not differ in their self-reported memory beliefs, and these beliefs did not predict training strategies or performance. Overall, the present results contradict earlier findings that older adults prefer rule- to exemplar-based learning strategies, presumably to compensate for memory deficits. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
La Camera, Giancarlo; Bouret, Sebastien; Richmond, Barry J.
2018-01-01
The ability to learn and follow abstract rules relies on intact prefrontal regions including the lateral prefrontal cortex (LPFC) and the orbitofrontal cortex (OFC). Here, we investigate the specific roles of these brain regions in learning rules that depend critically on the formation of abstract concepts as opposed to simpler input-output associations. To this aim, we tested monkeys with bilateral removals of either LPFC or OFC on a rapidly learned task requiring the formation of the abstract concept of same vs. different. While monkeys with OFC removals were significantly slower than controls at both acquiring and reversing the concept-based rule, monkeys with LPFC removals were not impaired in acquiring the task, but were significantly slower at rule reversal. Neither group was impaired in the acquisition or reversal of a delayed visual cue-outcome association task without a concept-based rule. These results suggest that OFC is essential for the implementation of a concept-based rule, whereas LPFC seems essential for its modification once established. PMID:29615854
Rule-Based Category Learning in Children: The Role of Age and Executive Functioning
Rabi, Rahel; Minda, John Paul
2014-01-01
Rule-based category learning was examined in 4–11 year-olds and adults. Participants were asked to learn a set of novel perceptual categories in a classification learning task. Categorization performance improved with age, with younger children showing the strongest rule-based deficit relative to older children and adults. Model-based analyses provided insight regarding the type of strategy being used to solve the categorization task, demonstrating that the use of the task appropriate strategy increased with age. When children and adults who identified the correct categorization rule were compared, the performance deficit was no longer evident. Executive functions were also measured. While both working memory and inhibitory control were related to rule-based categorization and improved with age, working memory specifically was found to marginally mediate the age-related improvements in categorization. When analyses focused only on the sample of children, results showed that working memory ability and inhibitory control were associated with categorization performance and strategy use. The current findings track changes in categorization performance across childhood, demonstrating at which points performance begins to mature and resemble that of adults. Additionally, findings highlight the potential role that working memory and inhibitory control may play in rule-based category learning. PMID:24489658
Evolving fuzzy rules in a learning classifier system
NASA Technical Reports Server (NTRS)
Valenzuela-Rendon, Manuel
1993-01-01
The fuzzy classifier system (FCS) combines the ideas of fuzzy logic controllers (FLC's) and learning classifier systems (LCS's). It brings together the expressive powers of fuzzy logic as it has been applied in fuzzy controllers to express relations between continuous variables, and the ability of LCS's to evolve co-adapted sets of rules. The goal of the FCS is to develop a rule-based system capable of learning in a reinforcement regime, and that can potentially be used for process control.
Learning and coding in biological neural networks
NASA Astrophysics Data System (ADS)
Fiete, Ila Rani
How can large groups of neurons that locally modify their activities learn to collectively perform a desired task? Do studies of learning in small networks tell us anything about learning in the fantastically large collection of neurons that make up a vertebrate brain? What factors do neurons optimize by encoding sensory inputs or motor commands in the way they do? In this thesis I present a collection of four theoretical works: each of the projects was motivated by specific constraints and complexities of biological neural networks, as revealed by experimental studies; together, they aim to partially address some of the central questions of neuroscience posed above. We first study the role of sparse neural activity, as seen in the coding of sequential commands in a premotor area responsible for birdsong. We show that the sparse coding of temporal sequences in the songbird brain can, in a network where the feedforward plastic weights must translate the sparse sequential code into a time-varying muscle code, facilitate learning by minimizing synaptic interference. Next, we propose a biologically plausible synaptic plasticity rule that can perform goal-directed learning in recurrent networks of voltage-based spiking neurons that interact through conductances. Learning is based on the correlation of noisy local activity with a global reward signal; we prove that this rule performs stochastic gradient ascent on the reward. Thus, if the reward signal quantifies network performance on some desired task, the plasticity rule provably drives goal-directed learning in the network. To assess the convergence properties of the learning rule, we compare it with a known example of learning in the brain. Song-learning in finches is a clear example of a learned behavior, with detailed available neurophysiological data. With our learning rule, we train an anatomically accurate model birdsong network that drives a sound source to mimic an actual zebrafinch song. Simulation and theoretical results on the scalability of this rule show that learning with stochastic gradient ascent may be adequately fast to explain learning in the bird. Finally, we address the more general issue of the scalability of stochastic gradient learning on quadratic cost surfaces in linear systems, as a function of system size and task characteristics, by deriving analytical expressions for the learning curves.
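A minimal sketch of the reward-modulated plasticity idea summarized above: weights change in proportion to the correlation of each unit's own noisy fluctuation with a global reward signal, which on average ascends the gradient of expected reward (node-perturbation style). The linear rate units and toy task below are illustrative stand-ins for the thesis's spiking, conductance-based networks.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 5, 3
W = rng.normal(scale=0.1, size=(n_out, n_in))    # plastic weights
target = rng.normal(size=(n_out, n_in))          # defines the desired input-output task
eta, noise_std = 0.02, 0.2
reward_baseline = 0.0

for trial in range(10000):
    x = rng.normal(size=n_in)
    noise = rng.normal(scale=noise_std, size=n_out)   # each unit's exploratory fluctuation
    y = W @ x + noise                                  # noisy network output
    reward = -np.sum((y - target @ x) ** 2)            # single global reward signal
    # Correlate each unit's own noise with reward relative to a running baseline;
    # in expectation this follows the gradient of expected reward (node perturbation).
    W += eta * (reward - reward_baseline) * np.outer(noise, x)
    reward_baseline += 0.01 * (reward - reward_baseline)

print("mean squared weight error:", float(np.mean((W - target) ** 2)))
```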
Hotz, Christine S; Templeton, Steven J; Christopher, Mary M
2005-03-01
A rule-based expert system using CLIPS programming language was created to classify body cavity effusions as transudates, modified transudates, exudates, chylous, and hemorrhagic effusions. The diagnostic accuracy of the rule-based system was compared with that produced by 2 machine-learning methods: Rosetta, a rough sets algorithm and RIPPER, a rule-induction method. Results of 508 body cavity fluid analyses (canine, feline, equine) obtained from the University of California-Davis Veterinary Medical Teaching Hospital computerized patient database were used to test CLIPS and to test and train RIPPER and Rosetta. The CLIPS system, using 17 rules, achieved an accuracy of 93.5% compared with pathologist consensus diagnoses. Rosetta accurately classified 91% of effusions by using 5,479 rules. RIPPER achieved the greatest accuracy (95.5%) using only 10 rules. When the original rules of the CLIPS application were replaced with those of RIPPER, the accuracy rates were identical. These results suggest that both rule-based expert systems and machine-learning methods hold promise for the preliminary classification of body fluids in the clinical laboratory.
Prefrontal Contributions to Rule-Based and Information-Integration Category Learning
ERIC Educational Resources Information Center
Schnyer, David M.; Maddox, W. Todd; Ell, Shawn; Davis, Sarah; Pacheco, Jenni; Verfaellie, Mieke
2009-01-01
Previous research revealed that the basal ganglia play a critical role in category learning [Ell, S. W., Marchant, N. L., & Ivry, R. B. (2006). "Focal putamen lesions impair learning in rule-based, but not information-integration categorization tasks." "Neuropsychologia", 44(10), 1737-1751; Maddox, W. T. & Filoteo, J.…
Learning Non-Adjacent Regularities at Age 0 ; 7
ERIC Educational Resources Information Center
Gervain, Judit; Werker, Janet F.
2013-01-01
One important mechanism suggested to underlie the acquisition of grammar is rule learning. Indeed, infants aged 0 ; 7 are able to learn rules based on simple identity relations (adjacent repetitions, ABB: "wo fe fe" and non-adjacent repetitions, ABA: "wo fe wo", respectively; Marcus et al., 1999). One unexplored issue is…
Rule-Based Category Learning in Down Syndrome
ERIC Educational Resources Information Center
Phillips, B. Allyson; Conners, Frances A.; Merrill, Edward; Klinger, Mark R.
2014-01-01
Rule-based category learning was examined in youths with Down syndrome (DS), youths with intellectual disability (ID), and typically developing (TD) youths. Two tasks measured category learning: the Modified Card Sort task (MCST) and the Concept Formation test of the Woodcock-Johnson-III (Woodcock, McGrew, & Mather, 2001). In regression-based…
A Rational Analysis of Rule-Based Concept Learning
ERIC Educational Resources Information Center
Goodman, Noah D.; Tenenbaum, Joshua B.; Feldman, Jacob; Griffiths, Thomas L.
2008-01-01
This article proposes a new model of human concept learning that provides a rational analysis of learning feature-based concepts. This model is built upon Bayesian inference for a grammatically structured hypothesis space--a concept language of logical rules. This article compares the model predictions to human generalization judgments in several…
ERIC Educational Resources Information Center
Riggs, Anne E.; Young, Andrew G.
2016-01-01
What influences children's normative judgments of conventional rules at different points in development? The current study explored the effects of two contextual factors on children's normative reasoning: the way in which the rules were learned and whether the rules apply to the self or others. Peer dyads practiced a novel collaborative board game…
Cerebellar Deep Nuclei Involvement in Cognitive Adaptation and Automaticity
ERIC Educational Resources Information Center
Callu, Delphine; Lopez, Joelle; El Massioui, Nicole
2013-01-01
To determine the role of the interpositus nuclei of cerebellum in rule-based learning and optimization processes, we studied (1) successive transfers of an initially acquired response rule in a cross maze and (2) behavioral strategies in learning a simple response rule in a T maze in interpositus lesioned rats (neurotoxic or electrolytic lesions).…
ERIC Educational Resources Information Center
Mehra, Bharat; Allard, Suzie; Qayyum, M. Asim; Barclay-McLaughlin, Gina
2008-01-01
This article proposes five information-based Golden Rules in intercultural education that represent a holistic approach to creating learning corridors across geographically dispersed academic communities. The Golden Rules are generated through qualitative analysis, grounded theory application, reflective practice, and critical research to…
Developing a Learning Progression for Number Sense Based on the Rule Space Model in China
ERIC Educational Resources Information Center
Chen, Fu; Yan, Yue; Xin, Tao
2017-01-01
The current study focuses on developing the learning progression of number sense for primary school students, and it applies a cognitive diagnostic model, the rule space model, to data analysis. The rule space model analysis firstly extracted nine cognitive attributes and their hierarchy model from the analysis of previous research and the…
Module Six: Parallel Circuits; Basic Electricity and Electronics Individualized Learning System.
ERIC Educational Resources Information Center
Bureau of Naval Personnel, Washington, DC.
In this module the student will learn the rules that govern the characteristics of parallel circuits; the relationships between voltage, current, resistance and power; and the results of common troubles in parallel circuits. The module is divided into four lessons: rules of voltage and current, rules for resistance and power, variational analysis,…
Fuzzy Q-Learning for Generalization of Reinforcement Learning
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.
1996-01-01
Fuzzy Q-Learning, introduced earlier by the author, is an extension of Q-Learning into fuzzy environments. GARIC is a methodology for fuzzy reinforcement learning. In this paper, we introduce GARIC-Q, a new method for doing incremental Dynamic Programming using a society of intelligent agents which are controlled at the top level by Fuzzy Q-Learning and at the local level, each agent learns and operates based on GARIC. GARIC-Q improves the speed and applicability of Fuzzy Q-Learning through generalization of input space by using fuzzy rules and bridges the gap between Q-Learning and rule based intelligent systems.
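A minimal sketch of generic fuzzy Q-learning (not the GARIC-Q architecture): each fuzzy rule keeps Q-values for the candidate actions, the composite action and its Q-value are firing-strength-weighted combinations, and the temporal-difference error is credited back to rules in proportion to how strongly they fired. The membership functions, actions, and toy task are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
ACTIONS = [-0.1, +0.1]                        # candidate control actions
Q = np.zeros((2, len(ACTIONS)))               # one Q-value per (rule, action)
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def firing_strengths(s):
    # Two fuzzy rules over a 1-D state in [0, 1]: "s is LOW" and "s is HIGH".
    w = np.array([max(0.0, 1.0 - s), max(0.0, s)])
    return w / (w.sum() + 1e-12)

def choose(s):
    w = firing_strengths(s)
    # Each rule picks an action (epsilon-greedy on its own Q-values); the global
    # action and global Q-value are firing-strength-weighted combinations.
    picks = [rng.integers(len(ACTIONS)) if rng.random() < epsilon
             else int(np.argmax(Q[i])) for i in range(2)]
    action = sum(w[i] * ACTIONS[picks[i]] for i in range(2))
    q_global = sum(w[i] * Q[i, picks[i]] for i in range(2))
    return w, picks, action, q_global

s = 0.2
for _ in range(3000):
    w, picks, a, q_global = choose(s)
    s_next = float(np.clip(s + a, 0.0, 1.0))
    reward = -abs(s_next - 0.8)                        # goal: settle near s = 0.8
    w_next = firing_strengths(s_next)
    q_next = sum(w_next[i] * Q[i].max() for i in range(2))
    td_error = reward + gamma * q_next - q_global
    for i in range(2):                                 # credit shared by firing strength
        Q[i, picks[i]] += alpha * w[i] * td_error
    s = s_next
print(np.round(Q, 2))
```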
Kaiser, W; Faber, T S; Findeis, M
1996-01-01
The authors developed a computer program that detects myocardial infarction (MI) and left ventricular hypertrophy (LVH) in two steps: (1) by extracting parameter values from a 10-second, 12-lead electrocardiogram, and (2) by classifying the extracted parameter values with rule sets. Every disease has its dedicated set of rules. Hence, there are separate rule sets for anterior MI, inferior MI, and LVH. If at least one rule is satisfied, the disease is said to be detected. The computer program automatically develops these rule sets. A database (learning set) of healthy subjects and patients with MI, LVH, and mixed MI+LVH was used. After defining the rule type, initial limits, and expected quality of the rules (positive predictive value, minimum number of patients), the program creates a set of rules by varying the limits. The general rule type is defined as: disease = (lim_1,low < p_1 ≤ lim_1,up) and (lim_2,low < p_2 ≤ lim_2,up) and ... and (lim_n,low < p_n ≤ lim_n,up). When defining the rule types, only the parameters (p_1 ... p_n) that are known as clinical electrocardiographic criteria (amplitudes [mV] of Q, R, and T waves and ST-segment; duration [ms] of Q wave; frontal angle [degrees]) were used. This allowed for submitting the learned rule sets to an independent investigator for medical verification. It also allowed the creation of explanatory texts with the rules. These advantages are not offered by the neurons of a neural network. The learned rules were checked against a test set and the following results were obtained: MI: sensitivity 76.2%, positive predictive value 98.6%; LVH: sensitivity 72.3%, positive predictive value 90.9%. The specificity ratings for MI are better than 98%; for LVH, better than 90%.
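As an illustration of the interval-conjunction rule form stated above, the sketch below evaluates such rule sets against one set of extracted ECG parameters; the parameter names and limits are hypothetical placeholders, not the learned rule sets.

```python
# A rule is a conjunction of interval tests over named ECG parameters:
# the rule fires if lower < value <= upper holds for every listed parameter.
def rule_fires(rule, measurement):
    return all(lo < measurement[name] <= hi for name, lo, hi in rule)

def detect(rule_set, measurement):
    # The disease is reported if at least one rule in its dedicated set fires.
    return any(rule_fires(rule, measurement) for rule in rule_set)

# Hypothetical rule set for anterior MI; parameter names and limits are placeholders.
anterior_mi_rules = [
    [("Q_amplitude_mV", 0.1, 5.0), ("Q_duration_ms", 40.0, 200.0)],
    [("ST_amplitude_mV", 0.2, 5.0), ("T_amplitude_mV", -5.0, 0.0)],
]
ecg = {"Q_amplitude_mV": 0.15, "Q_duration_ms": 48.0,
       "ST_amplitude_mV": 0.05, "T_amplitude_mV": 0.3}
print(detect(anterior_mi_rules, ecg))  # True: the first rule is satisfied
```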
Learning temporal rules to forecast instability in continuously monitored patients
Dubrawski, Artur; Wang, Donghan; Hravnak, Marilyn; Clermont, Gilles; Pinsky, Michael R
2017-01-01
Inductive machine learning, and in particular extraction of association rules from data, has been successfully used in multiple application domains, such as market basket analysis, disease prognosis, fraud detection, and protein sequencing. The appeal of rule extraction techniques stems from their ability to handle intricate problems yet produce models based on rules that can be comprehended by humans, and are therefore more transparent. Human comprehension is a factor that may improve adoption and use of data-driven decision support systems clinically via face validity. In this work, we explore whether we can reliably and informatively forecast cardiorespiratory instability (CRI) in step-down unit (SDU) patients utilizing data from continuous monitoring of physiologic vital sign (VS) measurements. We use a temporal association rule extraction technique in conjunction with a rule fusion protocol to learn how to forecast CRI in continuously monitored patients. We detail our approach and present and discuss encouraging empirical results obtained using continuous multivariate VS data from the bedside monitors of 297 SDU patients spanning 29 346 hours (3.35 patient-years) of observation. We present example rules that have been learned from data to illustrate potential benefits of comprehensibility of the extracted models, and we analyze the empirical utility of each VS as a potential leading indicator of an impending CRI event. PMID:27274020
McMurray, Bob; Horst, Jessica S.; Samuelson, Larissa K.
2013-01-01
Classic approaches to word learning emphasize the problem of referential ambiguity: in any naming situation the referent of a novel word must be selected from many possible objects, properties, actions, etc. To solve this problem, researchers have posited numerous constraints, and inference strategies, but assume that determining the referent of a novel word is isomorphic to learning. We present an alternative model in which referent selection is an online process that is independent of long-term learning. This two timescale approach creates significant power in the developing system. We illustrate this with a dynamic associative model in which referent selection is simulated as dynamic competition between competing referents, and learning is simulated using associative (Hebbian) learning. This model can account for a range of findings including the delay in expressive vocabulary relative to receptive vocabulary, learning under high degrees of referential ambiguity using cross-situational statistics, accelerating (vocabulary explosion) and decelerating (power-law) learning rates, fast-mapping by mutual exclusivity (and differences in bilinguals), improvements in familiar word recognition with development, and correlations between individual differences in speed of processing and learning. Five theoretical points are illustrated. 1) Word learning does not require specialized processes – general association learning buttressed by dynamic competition can account for much of the literature. 2) The processes of recognizing familiar words are not different than those that support novel words (e.g., fast-mapping). 3) Online competition may allow the network (or child) to leverage information available in the task to augment performance or behavior despite what might be relatively slow learning or poor representations. 4) Even associative learning is more complex than previously thought – a major contributor to performance is the pruning of incorrect associations between words and referents. 5) Finally, the model illustrates that learning and referent selection/word recognition, though logically distinct, can be deeply and subtly related as phenomena like speed of processing and mutual exclusivity may derive in part from the way learning shapes the system. As a whole, this suggests more sophisticated ways of describing the interaction between situation- and developmental-time processes and points to the need for considering such interactions as a primary determinant of development and processing in children. PMID:23088341
A requirement for memory retrieval during and after long-term extinction learning
Ouyang, Ming; Thomas, Steven A.
2005-01-01
Current learning theories are based on the idea that learning is driven by the difference between expectations and experience (the delta rule). In extinction, one learns that certain expectations no longer apply. Here, we test the potential validity of the delta rule by manipulating memory retrieval (and thus expectations) during extinction learning. Adrenergic signaling is critical for the time-limited retrieval (but not acquisition or consolidation) of contextual fear. Using genetic and pharmacologic approaches to manipulate adrenergic signaling, we find that long-term extinction requires memory retrieval but not conditioned responding. Identical manipulations of the adrenergic system that do not affect memory retrieval do not alter extinction. The results provide substantial support for the delta rule of learning theory. In addition, the timing over which extinction is sensitive to adrenergic manipulation suggests a model whereby memory retrieval occurs during, and several hours after, extinction learning to consolidate long-term extinction memory. PMID:15947076
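For reference, the delta rule the authors invoke is commonly written in its Rescorla-Wagner form shown below (the textbook formulation, not an equation taken from this paper), where V is the associative strength of a cue, λ is the outcome actually experienced, ΣV is the outcome predicted by all cues present, and α and β are learning-rate parameters:

    \Delta V = \alpha \, \beta \left( \lambda - \textstyle\sum V \right)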
Applying the Rule Space Model to Develop a Learning Progression for Thermochemistry
ERIC Educational Resources Information Center
Chen, Fu; Zhang, Shanshan; Guo, Yanfang; Xin, Tao
2017-01-01
We used the Rule Space Model, a cognitive diagnostic model, to measure the learning progression for thermochemistry for senior high school students. We extracted five attributes and proposed their hierarchical relationships to model the construct of thermochemistry at four levels using a hypothesized learning progression. For this study, we…
Bayesian Learning and the Psychology of Rule Induction
ERIC Educational Resources Information Center
Endress, Ansgar D.
2013-01-01
In recent years, Bayesian learning models have been applied to an increasing variety of domains. While such models have been criticized on theoretical grounds, the underlying assumptions and predictions are rarely made concrete and tested experimentally. Here, I use Frank and Tenenbaum's (2011) Bayesian model of rule-learning as a case study to…
Characterizing Rule-Based Category Learning Deficits in Patients with Parkinson's Disease
ERIC Educational Resources Information Center
Filoteo, J. Vincent; Maddox, W. Todd; Ing, A. David; Song, David D.
2007-01-01
Parkinson's disease (PD) patients and normal controls were tested in three category learning experiments to determine if previously observed rule-based category learning impairments in PD patients were due to deficits in selective attention or working memory. In Experiment 1, optimal categorization required participants to base their decision on a…
Infant Learning Is Influenced by Local Spurious Generalizations
ERIC Educational Resources Information Center
Gerken, LouAnn; Quam, Carolyn
2017-01-01
In previous work, 11-month-old infants were able to learn rules about the relation of the consonants in CVCV words from just four examples. The rules involved phonetic feature relations (same voicing or same place of articulation), and infants' learning was impeded when pairs of words allowed alternative possible generalizations (e.g. two words…
ERIC Educational Resources Information Center
Chalies, Sebastien; Escalie, Guillaume; Bertone, Stefano; Clarke, Anthony
2012-01-01
This case study sought to determine the professional development circumstances in which a preservice teacher learned rules of practice (Wittgenstein, 1996) on practicum while interacting with a cooperating teacher and university supervisor. Borrowing from a theoretical conceptualization of teacher professional development based on the postulates…
Grouin, Cyril; Zweigenbaum, Pierre
2013-01-01
In this paper, we present a comparison of two approaches to automatically de-identify medical records written in French: a rule-based system and a machine-learning based system using a conditional random fields (CRF) formalism. Both systems have been designed to process nine identifiers in a corpus of medical records in cardiology. We performed two evaluations: first on 62 documents in cardiology, and then on 10 documents in foetopathology, produced by optical character recognition (OCR), to evaluate the robustness of our systems. We achieved a 0.843 (rule-based) and 0.883 (machine-learning) exact-match overall F-measure in cardiology. While the rule-based system allowed us to achieve good results on nominative (first and last names) and numerical data (dates, phone numbers, and zip codes), the machine-learning approach performed best on more complex categories (postal addresses, hospital names, medical devices, and towns). On the foetopathology corpus, although our systems were not designed for this corpus and despite OCR errors, we obtained promising results: a 0.681 (rule-based) and 0.638 (machine-learning) exact-match overall F-measure. This demonstrates that existing tools can be applied to process new documents of lower quality.
Self-Organization of Microcircuits in Networks of Spiking Neurons with Plastic Synapses.
Ocker, Gabriel Koch; Litwin-Kumar, Ashok; Doiron, Brent
2015-08-01
The synaptic connectivity of cortical networks features an overrepresentation of certain wiring motifs compared to simple random-network models. This structure is shaped, in part, by synaptic plasticity that promotes or suppresses connections between neurons depending on their joint spiking activity. Frequently, theoretical studies focus on how feedforward inputs drive plasticity to create this network structure. We study the complementary scenario of self-organized structure in a recurrent network, with spike timing-dependent plasticity driven by spontaneous dynamics. We develop a self-consistent theory for the evolution of network structure by combining fast spiking covariance with a slow evolution of synaptic weights. Through a finite-size expansion of network dynamics we obtain a low-dimensional set of nonlinear differential equations for the evolution of two-synapse connectivity motifs. With this theory in hand, we explore how the form of the plasticity rule drives the evolution of microcircuits in cortical networks. When potentiation and depression are in approximate balance, synaptic dynamics depend on weighted divergent, convergent, and chain motifs. For additive, Hebbian STDP these motif interactions create instabilities in synaptic dynamics that either promote or suppress the initial network structure. Our work provides a consistent theoretical framework for studying how spiking activity in recurrent networks interacts with synaptic plasticity to determine network structure.
Evolution of cooperation driven by incremental learning
NASA Astrophysics Data System (ADS)
Li, Pei; Duan, Haibin
2015-02-01
It has been shown that the details of microscopic rules in structured populations can have a crucial impact on the ultimate outcome in evolutionary games. Alternative formulations of strategies, and of the revision processes that determine how strategies are actually adopted and spread within the interaction network, therefore need to be studied. In the present work, we formulate the strategy update rule as an incremental learning process, wherein knowledge is refreshed according to one's own experience learned from the past (self-learning) and that gained from social interaction (social-learning). More precisely, we propose a continuous version of strategy update rules, by introducing the willingness to cooperate W, to better capture the flexibility of decision-making behavior. Importantly, the newly gained knowledge, including self-learning and social learning, is weighted by the parameter ω, establishing a strategy update rule with an innovative element. Moreover, we quantify the macroscopic features of the emerging patterns using six cluster characteristics to inspect the underlying mechanisms of the evolutionary process. To further support our results, we examine the time course of these characteristics. Our results might provide insights for understanding cooperative behaviors and have several important implications for understanding how individuals adjust their strategies under real-life conditions.
Adaptive structured dictionary learning for image fusion based on group-sparse-representation
NASA Astrophysics Data System (ADS)
Yang, Jiajie; Sun, Bin; Luo, Chengwei; Wu, Yuzhong; Xu, Limei
2018-04-01
Dictionary learning is the key process of sparse representation, which is one of the most widely used image representation theories in image fusion. Existing dictionary learning methods do not make good use of group-structure information or of the sparse coefficients. In this paper, we propose a new adaptive structured dictionary learning algorithm and an l1-norm maximum fusion rule that innovatively utilizes grouped sparse coefficients to merge the images. The dictionary learning algorithm requires no prior knowledge about any group structure of the dictionary. By using the characteristics of the dictionary in expressing the signal, our algorithm can automatically find the desired potential structure information hidden in the dictionary. The fusion rule exploits the physical meaning of the group-structure dictionary and makes activity-level judgements on the structure information when the images are merged. Therefore, the fused image can retain more significant information. Comparisons have been made with several state-of-the-art dictionary learning methods and fusion rules. The experimental results demonstrate that the dictionary learning algorithm and the fusion rule both outperform others in terms of several objective evaluation metrics.
The role of feedback contingency in perceptual category learning.
Ashby, F Gregory; Vucovich, Lauren E
2016-11-01
Feedback is highly contingent on behavior if it eventually becomes easy to predict, and weakly contingent on behavior if it remains difficult or impossible to predict even after learning is complete. Many studies have demonstrated that humans and nonhuman animals are highly sensitive to feedback contingency, but no known studies have examined how feedback contingency affects category learning, and current theories assign little or no importance to this variable. Two experiments examined the effects of contingency degradation on rule-based and information-integration category learning. In rule-based tasks, optimal accuracy is possible with a simple explicit rule, whereas optimal accuracy in information-integration tasks requires integrating information from 2 or more incommensurable perceptual dimensions. In both experiments, participants each learned rule-based or information-integration categories under either high or low levels of feedback contingency. The exact same stimuli were used in all 4 conditions, and optimal accuracy was identical in every condition. Learning was good in both high-contingency conditions, but most participants showed little or no evidence of learning in either low-contingency condition. Possible causes of these effects, as well as their theoretical implications, are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
A network model of behavioural performance in a rule learning task.
Hasselmo, Michael E; Stern, Chantal E
2018-04-19
Humans demonstrate differences in performance on cognitive rule learning tasks which could involve differences in properties of neural circuits. An example model is presented to show how gating of the spread of neural activity could underlie rule learning and the generalization of rules to previously unseen stimuli. This model uses the activity of gating units to regulate the pattern of connectivity between neurons responding to sensory input and subsequent gating units or output units. This model allows analysis of network parameters that could contribute to differences in cognitive rule learning. These network parameters include differences in the parameters of synaptic modification and presynaptic inhibition of synaptic transmission that could be regulated by neuromodulatory influences on neural circuits. Neuromodulatory receptors play an important role in cognitive function, as demonstrated by the fact that drugs that block cholinergic muscarinic receptors can cause cognitive impairments. In discussions of the links between neuromodulatory systems and biologically based traits, the issue of mechanisms through which these linkages are realized is often missing. This model demonstrates potential roles of neural circuit parameters regulated by acetylcholine in learning context-dependent rules, and demonstrates the potential contribution of variation in neural circuit properties and neuromodulatory function to individual differences in cognitive function. This article is part of the theme issue 'Diverse perspectives on diversity: multi-disciplinary approaches to taxonomies of individual differences'. © 2018 The Author(s).
Learning to use working memory: a reinforcement learning gating model of rule acquisition in rats
Lloyd, Kevin; Becker, Nadine; Jones, Matthew W.; Bogacz, Rafal
2012-01-01
Learning to form appropriate, task-relevant working memory representations is a complex process central to cognition. Gating models frame working memory as a collection of past observations and use reinforcement learning (RL) to solve the problem of when to update these observations. Investigation of how gating models relate to brain and behavior remains, however, at an early stage. The current study sought to explore the ability of simple RL gating models to replicate rule learning behavior in rats. Rats were trained in a maze-based spatial learning task that required animals to make trial-by-trial choices contingent upon their previous experience. Using an abstract version of this task, we tested the ability of two gating algorithms, one based on the Actor-Critic and the other on the State-Action-Reward-State-Action (SARSA) algorithm, to generate behavior consistent with the rats'. Both models produced rule-acquisition behavior consistent with the experimental data, though only the SARSA gating model mirrored faster learning following rule reversal. We also found that both gating models learned multiple strategies in solving the initial task, a property which highlights the multi-agent nature of such models and which is of importance in considering the neural basis of individual differences in behavior. PMID:23115551
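For orientation, the core update of the SARSA algorithm referred to above is the standard tabular rule sketched below; the gating of working-memory contents and the maze abstraction used by the authors are not reproduced, and ALPHA, GAMMA and EPS are illustrative parameter choices.

    import random
    from collections import defaultdict

    # Generic tabular SARSA: Q(s, a) is nudged toward r + GAMMA * Q(s', a'),
    # where a' is the action actually chosen in the next state (on-policy).
    ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
    Q = defaultdict(float)                      # (state, action) -> value estimate

    def choose_action(state, actions):
        """Epsilon-greedy choice over the current value estimates."""
        if random.random() < EPS:
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(state, a)])

    def sarsa_update(s, a, r, s_next, a_next):
        td_error = r + GAMMA * Q[(s_next, a_next)] - Q[(s, a)]
        Q[(s, a)] += ALPHA * td_error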
Discovering Fine-grained Sentiment in Suicide Notes
Wang, Wenbo; Chen, Lu; Tan, Ming; Wang, Shaojun; Sheth, Amit P.
2012-01-01
This paper presents our solution for the i2b2 sentiment classification challenge. Our hybrid system consists of machine learning and rule-based classifiers. For the machine learning classifier, we investigate a variety of lexical, syntactic and knowledge-based features, and show how much these features contribute to the performance of the classifier through experiments. For the rule-based classifier, we propose an algorithm to automatically extract effective syntactic and lexical patterns from training examples. The experimental results show that the rule-based classifier outperforms the baseline machine learning classifier using unigram features. By combining the machine learning classifier and the rule-based classifier, the hybrid system gains a better trade-off between precision and recall, and yields the highest micro-averaged F-measure (0.5038), which is better than the mean (0.4875) and median (0.5027) micro-average F-measures among all participating teams. PMID:22879770
Learning in Artificial Neural Systems
NASA Technical Reports Server (NTRS)
Matheus, Christopher J.; Hohensee, William E.
1987-01-01
This paper presents an overview and analysis of learning in Artificial Neural Systems (ANS's). It begins with a general introduction to neural networks and connectionist approaches to information processing. The basis for learning in ANS's is then described, and compared with classical Machine learning. While similar in some ways, ANS learning deviates from tradition in its dependence on the modification of individual weights to bring about changes in a knowledge representation distributed across connections in a network. This unique form of learning is analyzed from two aspects: the selection of an appropriate network architecture for representing the problem, and the choice of a suitable learning rule capable of reproducing the desired function within the given network. The various network architectures are classified, and then identified with explicit restrictions on the types of functions they are capable of representing. The learning rules, i.e., algorithms that specify how the network weights are modified, are similarly taxonomized, and where possible, the limitations inherent to specific classes of rules are outlined.
How Do Infants and Toddlers Learn the Rules? Family Discipline and Young Children
ERIC Educational Resources Information Center
Smith, Anne B.
2004-01-01
This paper examines the issue of how under three year-olds learn the rules of appropriate behaviour in the light of sociocultural, attachment, social learning, ecological theory and sociology of childhood theories. Discipline involves teaching children how to behave acceptably in their family and society, while physical punishment is the use of…
Cognitive Diffusion Model: Facilitating EFL Learning in an Authentic Environment
ERIC Educational Resources Information Center
Shadiev, Rustam; Hwang, Wu-Yuin; Huang, Yueh-Min; Liu, Tzu-Yu
2017-01-01
For this study, we designed learning activities in which students applied newly acquired knowledge to solve meaningful daily life problems in their local community--a real, familiar, and relevant environment for students. For example, students learned about signs and rules in class and then applied this new knowledge to create their own rules for…
A hybrid learning method for constructing compact rule-based fuzzy models.
Zhao, Wanqing; Niu, Qun; Li, Kang; Irwin, George W
2013-12-01
The Takagi–Sugeno–Kang-type rule-based fuzzy model has found many applications in different fields; a major challenge is, however, to build a compact model with optimized model parameters which leads to satisfactory model performance. To produce a compact model, most existing approaches mainly focus on selecting an appropriate number of fuzzy rules. In contrast, this paper considers not only the selection of fuzzy rules but also the structure of each rule premise and consequent, leading to the development of a novel compact rule-based fuzzy model. Here, each fuzzy rule is associated with two sets of input attributes, in which the first is used for constructing the rule premise and the other is employed in the rule consequent. A new hybrid learning method combining the modified harmony search method with a fast recursive algorithm is hereby proposed to determine the structure and the parameters for the rule premises and consequents. This is a hard mixed-integer nonlinear optimization problem, and the proposed hybrid method solves the problem by employing an embedded framework, leading to a significantly reduced number of model parameters and a small number of fuzzy rules with each being as simple as possible. Results from three examples are presented to demonstrate the compactness (in terms of the number of model parameters and the number of rules) and the performance of the fuzzy models obtained by the proposed hybrid learning method, in comparison with other techniques from the literature.
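A minimal sketch of Takagi-Sugeno-Kang inference in which each rule draws on different input attributes for its premise and for its consequent, mirroring the two attribute sets per rule described above; the Gaussian memberships, rule parameters and example values are illustrative assumptions, and the paper's hybrid learning procedure (modified harmony search plus fast recursive algorithm) is not reproduced.

    import numpy as np

    def gaussian_mf(x, c, sigma):
        """Gaussian membership of scalar x in a fuzzy set centred at c."""
        return np.exp(-0.5 * ((x - c) / sigma) ** 2)

    def tsk_output(x, rules):
        """x: 1-D array of input attributes.
        Each rule is a dict with:
          'premise':    list of (attr_index, centre, sigma)
          'consequent': (list of attr_indices, weight vector, bias)."""
        strengths, outputs = [], []
        for rule in rules:
            w = np.prod([gaussian_mf(x[i], c, s) for i, c, s in rule["premise"]])
            idx, coefs, bias = rule["consequent"]
            y = float(np.dot(coefs, x[idx]) + bias)
            strengths.append(w)
            outputs.append(y)
        strengths = np.asarray(strengths)
        return float(np.dot(strengths, outputs) / (strengths.sum() + 1e-12))

    # Hypothetical two-rule model over a 3-attribute input.
    rules = [
        {"premise": [(0, 0.0, 1.0)], "consequent": ([1, 2], np.array([0.5, -0.2]), 0.1)},
        {"premise": [(1, 1.0, 0.5)], "consequent": ([0], np.array([1.0]), 0.0)},
    ]
    print(tsk_output(np.array([0.2, 0.8, -0.1]), rules))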
A neural learning classifier system with self-adaptive constructivism for mobile robot control.
Hurst, Jacob; Bull, Larry
2006-01-01
For artificial entities to achieve true autonomy and display complex lifelike behavior, they will need to exploit appropriate adaptable learning algorithms. In this context adaptability implies flexibility guided by the environment at any given time and an open-ended ability to learn appropriate behaviors. This article examines the use of constructivism-inspired mechanisms within a neural learning classifier system architecture that exploits parameter self-adaptation as an approach to realize such behavior. The system uses a rule structure in which each rule is represented by an artificial neural network. It is shown that appropriate internal rule complexity emerges during learning at a rate controlled by the learner and that the structure indicates underlying features of the task. Results are presented in simulated mazes before moving to a mobile robot platform.
Interpretable Decision Sets: A Joint Framework for Description and Prediction
Lakkaraju, Himabindu; Bach, Stephen H.; Leskovec, Jure
2016-01-01
One of the most important obstacles to deploying predictive models is the fact that humans do not understand and trust them. Knowing which variables are important in a model’s prediction and how they are combined can be very powerful in helping people understand and trust automatic decision making systems. Here we propose interpretable decision sets, a framework for building predictive models that are highly accurate, yet also highly interpretable. Decision sets are sets of independent if-then rules. Because each rule can be applied independently, decision sets are simple, concise, and easily interpretable. We formalize decision set learning through an objective function that simultaneously optimizes accuracy and interpretability of the rules. In particular, our approach learns short, accurate, and non-overlapping rules that cover the whole feature space and pay attention to small but important classes. Moreover, we prove that our objective is a non-monotone submodular function, which we efficiently optimize to find a near-optimal set of rules. Experiments show that interpretable decision sets are as accurate at classification as state-of-the-art machine learning techniques. They are also three times smaller on average than rule-based models learned by other methods. Finally, results of a user study show that people are able to answer multiple-choice questions about the decision boundaries of interpretable decision sets and write descriptions of classes based on them faster and more accurately than with other rule-based models that were designed for interpretability. Overall, our framework provides a new approach to interpretable machine learning that balances accuracy, interpretability, and computational efficiency. PMID:27853627
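A minimal sketch of applying a decision set, i.e. a list of independent if-then rules each paired with a class label; the rule-learning objective described above is not reproduced, and the first-match-wins tie-breaking, default class and toy attributes below are simplifying assumptions.

    # Apply a decision set to a record represented as a dict of attributes.

    def apply_decision_set(rules, default_label, record):
        """rules: list of (predicate, label); predicate: dict -> bool."""
        for predicate, label in rules:
            if predicate(record):
                return label
        return default_label

    # Hypothetical rules over a toy record with 'age' and 'bp' attributes.
    rules = [
        (lambda r: r["age"] > 60 and r["bp"] == "high", "at-risk"),
        (lambda r: r["age"] <= 30, "low-risk"),
    ]
    print(apply_decision_set(rules, "unknown", {"age": 72, "bp": "high"}))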
Topic categorisation of statements in suicide notes with integrated rules and machine learning.
Kovačević, Aleksandar; Dehghan, Azad; Keane, John A; Nenadic, Goran
2012-01-01
We describe and evaluate an automated approach used as part of the i2b2 2011 challenge to identify and categorise statements in suicide notes into one of 15 topics, including Love, Guilt, Thankfulness, Hopelessness and Instructions. The approach combines a set of lexico-syntactic rules with a set of models derived by machine learning from a training dataset. The machine learning models rely on named entities, lexical, lexico-semantic and presentation features, as well as the rules that are applicable to a given statement. On a testing set of 300 suicide notes, the approach showed the overall best micro F-measure of up to 53.36%. The best precision achieved was 67.17% when only rules are used, whereas best recall of 50.57% was with integrated rules and machine learning. While some topics (eg, Sorrow, Anger, Blame) prove challenging, the performance for relatively frequent (eg, Love) and well-scoped categories (eg, Thankfulness) was comparatively higher (precision between 68% and 79%), suggesting that automated text mining approaches can be effective in topic categorisation of suicide notes.
Brain signatures of early lexical and morphological learning of a new language.
Havas, Viktória; Laine, Matti; Rodríguez Fornells, Antoni
2017-07-01
Morphology is an important part of language processing but little is known about how adult second language learners acquire morphological rules. Using a word-picture associative learning task, we have previously shown that a brief exposure to novel words with embedded morphological structure (suffix for natural gender) is enough for language learners to acquire the hidden morphological rule. Here we used this paradigm to study the brain signatures of early morphological learning in a novel language in adults. Behavioural measures indicated successful lexical (word stem) and morphological (gender suffix) learning. A day after the learning phase, event-related brain potentials registered during a recognition memory task revealed enhanced N400 and P600 components for stem and suffix violations, respectively. An additional effect observed with combined suffix and stem violations was an enhancement of an early N2 component, most probably related to conflict-detection processes. Successful morphological learning was also evident in the ERP responses to the subsequent rule-generalization task with new stems, where violation of the morphological rule was associated with an early (250-400ms) and late positivity (750-900ms). Overall, these findings tend to converge with lexical and morphosyntactic violation effects observed in L1 processing, suggesting that even after a short exposure, adult language learners can acquire both novel words and novel morphological rules. Copyright © 2017 Elsevier Ltd. All rights reserved.
Stobbe, Nina; Westphal-Fitch, Gesche; Aust, Ulrike; Fitch, W. Tecumseh
2012-01-01
Artificial grammar learning (AGL) provides a useful tool for exploring rule learning strategies linked to general purpose pattern perception. To be able to directly compare performance of humans with other species with different memory capacities, we developed an AGL task in the visual domain. Presenting entire visual patterns simultaneously instead of sequentially minimizes the amount of required working memory. This approach allowed us to evaluate performance levels of two bird species, kea (Nestor notabilis) and pigeons (Columba livia), in direct comparison to human participants. After being trained to discriminate between two types of visual patterns generated by rules at different levels of computational complexity and presented on a computer screen, birds and humans received further training with a series of novel stimuli that followed the same rules, but differed in various visual features from the training stimuli. Most avian and all human subjects continued to perform well above chance during this initial generalization phase, suggesting that they were able to generalize learned rules to novel stimuli. However, detailed testing with stimuli that violated the intended rules regarding the exact number of stimulus elements indicates that neither bird species was able to successfully acquire the intended pattern rule. Our data suggest that, in contrast to humans, these birds were unable to master a simple rule above the finite-state level, even with simultaneous item presentation and despite intensive training. PMID:22688635
Kleberg, Florence I.; Fukai, Tomoki; Gilson, Matthieu
2014-01-01
Spike-timing-dependent plasticity (STDP) has been well established between excitatory neurons and several computational functions have been proposed in various neural systems. Despite some recent efforts, however, there is a significant lack of functional understanding of inhibitory STDP (iSTDP) and its interplay with excitatory STDP (eSTDP). Here, we demonstrate by analytical and numerical methods that iSTDP contributes crucially to the balance of excitatory and inhibitory weights for the selection of a specific signaling pathway among other pathways in a feedforward circuit. This pathway selection is based on the high sensitivity of STDP to correlations in spike times, which complements a recent proposal for the role of iSTDP in firing-rate based selection. Our model predicts that asymmetric anti-Hebbian iSTDP exceeds asymmetric Hebbian iSTDP for supporting pathway-specific balance, which we show is useful for propagating transient neuronal responses. Furthermore, we demonstrate how STDPs at excitatory–excitatory, excitatory–inhibitory, and inhibitory–excitatory synapses cooperate to improve the pathway selection. We propose that iSTDP is crucial for shaping the network structure that achieves efficient processing of synchronous spikes. PMID:24847242
Associative plasticity in intracortical inhibitory circuits in human motor cortex.
Russmann, Heike; Lamy, Jean-Charles; Shamim, Ejaz A; Meunier, Sabine; Hallett, Mark
2009-06-01
Paired associative stimulation (PAS) is a transcranial magnetic stimulation technique inducing Hebbian-like synaptic plasticity in the human motor cortex (M1). PAS is produced by repetitive pairing of a peripheral nerve shock and a transcranial magnetic stimulus (TMS). Its effect is assessed by a change in size of a motor evoked response (MEP). MEP size results from excitatory and inhibitory influences exerted on cortical pyramidal cells, but no robust effects on inhibitory networks have been demonstrated so far. In 38 healthy volunteers, we assessed whether a PAS intervention influences three intracortical inhibitory circuits: short (SICI) and long (LICI) intracortical inhibitions reflecting activity of GABA(A) and GABA(B) interneurons, respectively, and long afferent inhibition (LAI) reflecting activity of somatosensory inputs. After PAS, MEP sizes, LICI and LAI levels were significantly changed while changes of SICI were inconsistent. The changes in LICI and LAI lasted 45 min after PAS. Their direction depended on the delay between the arrival time of the afferent volley at the cortex and the TMS-induced cortical activation during the PAS. PAS influences inhibitory circuits in M1. PAS paradigms can demonstrate Hebbian-like plasticity at selected inhibitory networks as well as excitatory networks.
Learning Object-Level and Meta-Level Knowledge in Expert Systems.
1985-11-01
usually a misdiagnosed one). 1.2.2 Efficiency Consideration: Learning becomes a complicated issue in a complex domain like medicine where there may ... Misdiagnosed cases are often due to missing rules. Therefore, we would rather view this problem as a learning problem. A strategy called "retrospective ... inspection after learning" is described in Chapter 5. With this strategy, rules that can make the misdiagnosed case diagnosed correctly are first found; then ...
Learning and tuning fuzzy logic controllers through reinforcements.
Berenji, H R; Khedkar, P
1992-01-01
A method for learning and tuning a fuzzy logic controller based on reinforcements from a dynamic system is presented. It is shown that: the generalized approximate-reasoning-based intelligent control (GARIC) architecture learns and tunes a fuzzy logic controller even when only weak reinforcement, such as a binary failure signal, is available; introduces a new conjunction operator in computing the rule strengths of fuzzy control rules; introduces a new localized mean of maximum (LMOM) method in combining the conclusions of several firing control rules; and learns to produce real-valued control actions. Learning is achieved by integrating fuzzy inference into a feedforward network, which can then adaptively improve performance by using gradient descent methods. The GARIC architecture is applied to a cart-pole balancing system and demonstrates significant improvements in terms of the speed of learning and robustness to changes in the dynamic system's parameters over previous schemes for cart-pole balancing.
Hybrid Topological Lie-Hamiltonian Learning in Evolving Energy Landscapes
NASA Astrophysics Data System (ADS)
Ivancevic, Vladimir G.; Reid, Darryn J.
2015-11-01
In this chapter, a novel bidirectional algorithm for hybrid (discrete + continuous-time) Lie-Hamiltonian evolution in an adaptive energy landscape-manifold is designed and its topological representation is proposed. The algorithm is developed within a geometrically and topologically extended framework of Hopfield's neural nets and Haken's synergetics (it is currently designed in Mathematica, although with small changes it could be implemented in Symbolic C++ or any other computer algebra system). The adaptive energy manifold is determined by the Hamiltonian multivariate cost function H, based on the user-defined vehicle-fleet configuration matrix W, which represents the pseudo-Riemannian metric tensor of the energy manifold. Search for the global minimum of H is performed using random-signal differential Hebbian adaptation. This stochastic gradient evolution is driven (or pulled down) by 'gravitational forces' defined by the second Lie derivatives of H. Topological changes of the fleet matrix W are observed during the evolution and its topological invariant is established. The evolution stops when the W-topology breaks down into several connectivity components, followed by a topology-breaking instability sequence (i.e., a series of phase transitions).
Experience-driven plasticity in binocular vision
Klink, P. Christiaan; Brascamp, Jan W.; Blake, Randolph; van Wezel, Richard J.A.
2010-01-01
Experience-driven neuronal plasticity allows the brain to adapt its functional connectivity to recent sensory input. Here we use binocular rivalry [1], an experimental paradigm where conflicting images are presented to the individual eyes, to demonstrate plasticity in the neuronal mechanisms that convert visual information from two separated retinas into single perceptual experiences. Perception during binocular rivalry tended to initially consist of alternations between exclusive representations of monocularly defined images, but upon prolonged exposure, mixture percepts became more prevalent. The completeness of suppression, reflected in the incidence of mixture percepts, plausibly reflects the strength of inhibition that likely plays a role in binocular rivalry [2]. Recovery of exclusivity was possible, but required highly specific binocular stimulation. Documenting the prerequisites for these observed changes in perceptual exclusivity, our experiments suggest experience-driven plasticity at interocular inhibitory synapses, driven by the (lack of) correlated activity of neurons representing the conflicting stimuli. This form of plasticity is consistent with a previously proposed, but largely untested, anti-Hebbian learning mechanism for inhibitory synapses in vision [3, 4]. Our results implicate experience-driven plasticity as one governing principle in the neuronal organization of binocular vision. PMID:20674360
Incremental Learning of Context Free Grammars by Parsing-Based Rule Generation and Rule Set Search
NASA Astrophysics Data System (ADS)
Nakamura, Katsuhiko; Hoshina, Akemi
This paper discusses recent improvements and extensions in the Synapse system for inductive inference of context free grammars (CFGs) from sample strings. Synapse uses incremental learning, rule generation based on bottom-up parsing, and search for rule sets. The form of production rules in the previous system is extended from Revised Chomsky Normal Form A→βγ to Extended Chomsky Normal Form, which also includes A→B, where each of β and γ is either a terminal or a nonterminal symbol. From the result of bottom-up parsing, a rule generation mechanism synthesizes the minimum production rules required for parsing positive samples. Instead of the inductive CYK algorithm used in the previous version of Synapse, the improved version uses a novel rule generation method, called "bridging," which fills in the missing part of the derivation tree for a positive string. The improved version also employs a novel search strategy, called serial search, in addition to minimum rule set search. Grammar synthesis by serial search is faster than by minimum set search in most cases. On the other hand, the generated CFGs are generally larger than those produced by minimum set search, and for some CFLs the serial search finds no appropriate grammar. The paper shows experimental results of incremental learning of several fundamental CFGs and compares the rule generation methods and search strategies.
Spike Train Auto-Structure Impacts Post-Synaptic Firing and Timing-Based Plasticity
Scheller, Bertram; Castellano, Marta; Vicente, Raul; Pipa, Gordon
2011-01-01
Cortical neurons are typically driven by several thousand synapses. The precise spatiotemporal pattern formed by these inputs can modulate the response of a post-synaptic cell. In this work, we explore how the temporal structure of pre-synaptic inhibitory and excitatory inputs impact the post-synaptic firing of a conductance-based integrate and fire neuron. Both the excitatory and inhibitory input was modeled by renewal gamma processes with varying shape factors for modeling regular and temporally random Poisson activity. We demonstrate that the temporal structure of mutually independent inputs affects the post-synaptic firing, while the strength of the effect depends on the firing rates of both the excitatory and inhibitory inputs. In a second step, we explore the effect of temporal structure of mutually independent inputs on a simple version of Hebbian learning, i.e., hard bound spike-timing-dependent plasticity. We explore both the equilibrium weight distribution and the speed of the transient weight dynamics for different mutually independent gamma processes. We find that both the equilibrium distribution of the synaptic weights and the speed of synaptic changes are modulated by the temporal structure of the input. Finally, we highlight that the sensitivity of both the post-synaptic firing as well as the spike-timing-dependent plasticity on the auto-structure of the input of a neuron could be used to modulate the learning rate of synaptic modification. PMID:22203800
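A minimal sketch of drawing a spike train from a gamma renewal process with a given rate and shape factor, the kind of input process described above (shape 1 gives Poisson-like firing, larger shapes give more regular firing); this is the generic construction, not the authors' simulation code.

    import numpy as np

    def gamma_spike_train(rate_hz, shape, duration_s, rng=None):
        """Return spike times (s) whose inter-spike intervals are gamma
        distributed with mean 1/rate_hz and shape factor `shape`."""
        rng = rng or np.random.default_rng()
        scale = 1.0 / (rate_hz * shape)       # mean ISI = shape * scale = 1/rate
        times, t = [], 0.0
        while True:
            t += rng.gamma(shape, scale)
            if t > duration_s:
                break
            times.append(t)
        return np.array(times)

    spikes = gamma_spike_train(rate_hz=10.0, shape=4.0, duration_s=5.0)
    print(len(spikes))                        # roughly rate * duration spikes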
Miskovic, Vladimir; Keil, Andreas
2015-01-01
The visual system is biased towards sensory cues that have been associated with danger or harm through temporal co-occurrence. An outstanding question about conditioning-induced changes in visuocortical processing is the extent to which they are driven primarily by top-down factors such as expectancy or by low-level factors such as the temporal proximity between conditioned stimuli and aversive outcomes. Here, we examined this question using two different differential aversive conditioning experiments: participants learned to associate a particular grating stimulus with an aversive noise that was presented either in close temporal proximity (delay conditioning experiment) or after a prolonged stimulus-free interval (trace conditioning experiment). In both experiments we probed cue-related cortical responses by recording steady-state visual evoked potentials (ssVEPs). Although behavioral ratings indicated that all participants successfully learned to discriminate between the grating patterns that predicted the presence versus absence of the aversive noise, selective amplification of population-level responses in visual cortex for the conditioned danger signal was observed only when the grating and the noise were temporally contiguous. Our findings are in line with notions purporting that changes in the electrocortical response of visual neurons induced by aversive conditioning are a product of Hebbian associations among sensory cell assemblies rather than being driven entirely by expectancy-based, declarative processes. PMID:23398582
Simple modification of Oja rule limits L1-norm of weight vector and leads to sparse connectivity.
Aparin, Vladimir
2012-03-01
This letter describes a simple modification of the Oja learning rule, which asymptotically constrains the L1-norm of an input weight vector instead of the L2-norm as in the original rule. This constraining is local as opposed to commonly used instant normalizations, which require the knowledge of all input weights of a neuron to update each one of them individually. The proposed rule converges to a weight vector that is sparser (has more zero weights) than the vector learned by the original Oja rule with or without the zero bound, which could explain the developmental synaptic pruning.
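For context, the original Oja rule referred to above is commonly written as below for input vector x, output y = w^T x and learning rate η; it asymptotically constrains the L2-norm of the weight vector. The L1-constraining modification proposed in the letter is not reproduced here.

    y = \mathbf{w}^{\top}\mathbf{x}, \qquad
    \Delta \mathbf{w} = \eta \, y \left( \mathbf{x} - y\,\mathbf{w} \right)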
Prospective Coding by Spiking Neurons
Brea, Johanni; Gaál, Alexisz Tamás; Senn, Walter
2016-01-01
Animals learn to make predictions, such as associating the sound of a bell with upcoming feeding or predicting a movement that a motor command is eliciting. How predictions are realized on the neuronal level and what plasticity rule underlies their learning is not well understood. Here we propose a biologically plausible synaptic plasticity rule to learn predictions on a single neuron level on a timescale of seconds. The learning rule allows a spiking two-compartment neuron to match its current firing rate to its own expected future discounted firing rate. For instance, if an originally neutral event is repeatedly followed by an event that elevates the firing rate of a neuron, the originally neutral event will eventually also elevate the neuron’s firing rate. The plasticity rule is a form of spike timing dependent plasticity in which a presynaptic spike followed by a postsynaptic spike leads to potentiation. Even if the plasticity window has a width of 20 milliseconds, associations on the time scale of seconds can be learned. We illustrate prospective coding with three examples: learning to predict a time varying input, learning to predict the next stimulus in a delayed paired-associate task and learning with a recurrent network to reproduce a temporally compressed version of a sequence. We discuss the potential role of the learning mechanism in classical trace conditioning. In the special case that the signal to be predicted encodes reward, the neuron learns to predict the discounted future reward and learning is closely related to the temporal difference learning algorithm TD(λ). PMID:27341100
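For orientation, the temporal difference algorithm TD(λ) that the learning rule is said to relate to can be summarized by the standard value-function updates below (the textbook form with accumulating eligibility traces, not the neuron-level plasticity rule of the paper), where V is the value estimate with parameters w, γ the discount factor, λ the trace-decay parameter, and η the learning rate:

    \delta_t = r_{t+1} + \gamma V(s_{t+1}) - V(s_t), \qquad
    \mathbf{e}_t = \gamma\lambda\,\mathbf{e}_{t-1} + \nabla_{\mathbf{w}} V(s_t), \qquad
    \Delta \mathbf{w} = \eta\,\delta_t\,\mathbf{e}_t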
Foraging Ecology Predicts Learning Performance in Insectivorous Bats
Clarin, Theresa M. A.; Ruczyński, Ireneusz; Page, Rachel A.
2013-01-01
Bats are unusual among mammals in showing great ecological diversity even among closely related species and are thus well suited for studies of adaptation to the ecological background. Here we investigate whether behavioral flexibility and simple- and complex-rule learning performance can be predicted by foraging ecology. We predict faster learning and higher flexibility in animals hunting in more complex, variable environments than in animals hunting in more simple, stable environments. To test this hypothesis, we studied three closely related insectivorous European bat species of the genus Myotis that belong to three different functional groups based on foraging habitats: M. capaccinii, an open water forager, M. myotis, a passive listening gleaner, and M. emarginatus, a clutter specialist. We predicted that M. capaccinii would show the least flexibility and slowest learning reflecting its relatively unstructured foraging habitat and the stereotypy of its natural foraging behavior, while the other two species would show greater flexibility and more rapid learning reflecting the complexity of their natural foraging tasks. We used a purposefully unnatural and thus species-fair crawling maze to test simple- and complex-rule learning, flexibility and re-learning performance. We found that M. capaccinii learned a simple rule as fast as the other species, but was slower in complex rule learning and was less flexible in response to changes in reward location. We found no differences in re-learning ability among species. Our results corroborate the hypothesis that animals’ cognitive skills reflect the demands of their ecological niche. PMID:23755146
NASA Astrophysics Data System (ADS)
Alzubaidi, Mohammad; Patel, Ameet; Panchanathan, Sethuraman; Black, John A., Jr.
2010-02-01
Radiological images constitute a special class of images that are captured (or computed) specifically for the purpose of diagnosing patients. However, because these are not "natural" images, radiologists must be trained to interpret them through a process called "perceptual learning". However, because perceptual learning is implicit, experienced radiologists may sometimes find it difficult to explicitly (i.e. verbally) train less experienced colleagues. As a result, current methods of training can take years before a new radiologist is fully competent to independently interpret medical images. We hypothesize that eye tracking technology (coupled with multimedia technology) can be used to accelerate the process of perceptual training, through a Hebbian learning process. This would be accomplished by providing a radiologist-in-training with real-time feedback as he/she is fixating on important regions of an image. Of course this requires that the training system have information about what regions of an image are important - information that could presumably be solicited from experienced radiologists. However, our previous work has suggested that experienced radiologists are not always aware of those regions of an image that attract their attention, but are not clinically significant - information that is very important to a radiologist in training. This paper discusses a study in which local entropy computations were done on scan path data, and were found to provide a quantitative measure of the moment-by-moment interest level of radiologists as they scanned chest x-rays. The results also showed a striking contrast between the moment-by-moment deployment of attention between experienced radiologists and radiologists in training.
Self-organizing adaptive map: autonomous learning of curves and surfaces from point samples.
Piastra, Marco
2013-05-01
Competitive Hebbian Learning (CHL) (Martinetz, 1993) is a simple and elegant method for estimating the topology of a manifold from point samples. The method has been adopted in a number of self-organizing networks described in the literature and has given rise to related studies in the fields of geometry and computational topology. Recent results from these fields have shown that a faithful reconstruction can be obtained using the CHL method only for curves and surfaces. Within these limitations, these findings constitute a basis for defining a CHL-based, growing self-organizing network that produces a faithful reconstruction of an input manifold. The SOAM (Self-Organizing Adaptive Map) algorithm adapts its local structure autonomously in such a way that it can match the features of the manifold being learned. The adaptation process is driven by the defects arising when the network structure is inadequate, which cause a growth in the density of units. Regions of the network undergo a phase transition and change their behavior whenever a simple, local condition of topological regularity is met. The phase transition is eventually completed across the entire structure and the adaptation process terminates. In specific conditions, the structure thus obtained is homeomorphic to the input manifold. During the adaptation process, the network also has the capability to focus on the acquisition of input point samples in critical regions, with a substantial increase in efficiency. The behavior of the network has been assessed experimentally with typical data sets for surface reconstruction, including suboptimal conditions, e.g. with undersampling and noise. Copyright © 2012 Elsevier Ltd. All rights reserved.
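A minimal sketch of the edge-building step of Competitive Hebbian Learning in the sense used above: for every input sample, the two reference units closest to it are connected by an edge. The growth, phase-transition and adaptation machinery of SOAM is not reproduced, and the random units and samples below are illustrative.

    import numpy as np

    def competitive_hebbian_edges(units, samples):
        """units: (n_units, dim) array of reference vectors;
        samples: (n_samples, dim) array of input points.
        Returns the set of undirected edges {i, j} induced by the samples."""
        edges = set()
        for x in samples:
            d = np.linalg.norm(units - x, axis=1)
            first, second = np.argsort(d)[:2]     # two nearest reference units
            edges.add(frozenset((int(first), int(second))))
        return edges

    rng = np.random.default_rng(0)
    units = rng.uniform(-1, 1, size=(20, 2))      # random reference units
    samples = rng.uniform(-1, 1, size=(500, 2))   # point samples of a square
    print(len(competitive_hebbian_edges(units, samples)))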
Assessing the uniqueness of language: Animal grammatical abilities take center stage.
Ten Cate, Carel
2017-02-01
Questions related to the uniqueness of language can only be addressed properly by referring to sound knowledge of the relevant cognitive abilities of nonhuman animals. A key question concerns the nature and extent of animal rule-learning abilities. I discuss two approaches used to assess these abilities. One is comparing the structures of animal vocalizations to linguistic ones, and another is addressing the grammatical rule- and pattern-learning abilities of animals through experiments using artificial grammars. Neither of these approaches has so far provided unambiguous evidence of advanced animal abilities. However, when we consider how animal vocalizations are analyzed, the types of stimuli and tasks that are used in artificial grammar learning experiments, the limited number of species examined, and the groups to which these belong, I argue that the currently available evidence is insufficient to arrive at firm conclusions concerning the limitations of animal grammatical abilities. As a consequence, the gap between human linguistic rule-learning abilities and those of nonhuman animals may be smaller and less clear than is currently assumed. This means that it is still an open question whether a difference in the rule-learning and rule abstraction abilities between animals and humans played the key role in the evolution of language.
Dog Is a Dog Is a Dog: Infant Rule Learning Is Not Specific to Language
ERIC Educational Resources Information Center
Saffran, Jenny R.; Pollak, Seth D.; Seibel, Rebecca L.; Shkolnik, Anna
2007-01-01
Human infants possess powerful learning mechanisms used for the acquisition of language. To what extent are these mechanisms domain specific? One well-known infant language learning mechanism is the ability to detect and generalize rule-like similarity patterns, such as ABA or ABB [Marcus, G. F., Vijayan, S., Rao, S. B., & Vishton, P. M. (1999).…
Dopamine neurons modulate pheromone responses in Drosophila courtship learning.
Keleman, Krystyna; Vrontou, Eleftheria; Krüttner, Sebastian; Yu, Jai Y; Kurtovic-Kozaric, Amina; Dickson, Barry J
2012-09-06
Learning through trial-and-error interactions allows animals to adapt innate behavioural ‘rules of thumb’ to the local environment, improving their prospects for survival and reproduction. Naive Drosophila melanogaster males, for example, court both virgin and mated females, but learn through experience to selectively suppress futile courtship towards females that have already mated. Here we show that courtship learning reflects an enhanced response to the male pheromone cis-vaccenyl acetate (cVA), which is deposited on females during mating and thus distinguishes mated females from virgins. Dissociation experiments suggest a simple learning rule in which unsuccessful courtship enhances sensitivity to cVA. The learning experience can be mimicked by artificial activation of dopaminergic neurons, and we identify a specific class of dopaminergic neuron that is critical for courtship learning. These neurons provide input to the mushroom body (MB) γ lobe, and the DopR1 dopamine receptor is required in MBγ neurons for both natural and artificial courtship learning. Our work thus reveals critical behavioural, cellular and molecular components of the learning rule by which Drosophila adjusts its innate mating strategy according to experience.
NASA Technical Reports Server (NTRS)
Hruska, S. I.; Dalke, A.; Ferguson, J. J.; Lacher, R. C.
1991-01-01
Rule-based expert systems may be structurally and functionally mapped onto a special class of neural networks called expert networks. This mapping lends itself to adaptation of connectionist learning strategies for the expert networks. A parsing algorithm to translate C Language Integrated Production System (CLIPS) rules into a network of interconnected assertion and operation nodes has been developed. The translation of CLIPS rules to an expert network and back again is illustrated. Measures of uncertainty similar to those used in MYCIN-like systems are introduced into the CLIPS system, and techniques for combining and firing nodes in the network based on rule-firing with these certainty factors in the expert system are presented. Several learning algorithms which automate the process of attaching certainty factors to rules are under study.
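The abstract states only that the uncertainty measures are similar to those of MYCIN-like systems; the exact scheme used in the expert-network work is not given. As a reference point, here is a minimal sketch of the classical MYCIN certainty-factor combination, with illustrative values.

```python
def combine_cf(cf1, cf2):
    """Classical MYCIN-style combination of two certainty factors
    (both in [-1, 1]) supporting the same hypothesis."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

# illustrative values, not taken from the CLIPS/expert-network paper
print(combine_cf(0.6, 0.4))    # 0.76: agreeing evidence reinforces
print(combine_cf(0.6, -0.4))   # ~0.33: conflicting evidence attenuates
```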
Deficits in Category Learning in Older Adults: Rule-Based Versus Clustering Accounts
2017-01-01
Memory research has long been one of the key areas of investigation for cognitive aging researchers but only in the last decade or so has categorization been used to understand age differences in cognition. Categorization tasks focus more heavily on the grouping and organization of items in memory, and often on the process of learning relationships through trial and error. Categorization studies allow researchers to more accurately characterize age differences in cognition: whether older adults show declines in the way in which they represent categories with simple rules or declines in representing categories by similarity to past examples. In the current study, young and older adults participated in a set of classic category learning problems, which allowed us to distinguish between three hypotheses: (a) rule-complexity: categories were represented exclusively with rules and older adults had differential difficulty when more complex rules were required, (b) rule-specific: categories could be represented either by rules or by similarity, and there were age deficits in using rules, and (c) clustering: similarity was mainly used and older adults constructed a less-detailed representation by lumping more items into fewer clusters. The ordinal levels of performance across different conditions argued against rule-complexity, as older adults showed greater deficits on less complex categories. The data also provided evidence against rule-specificity, as single-dimensional rules could not explain age declines. Instead, computational modeling of the data indicated that older adults utilized fewer conceptual clusters of items in memory than did young adults. PMID:28816474
A Santos, Jose C; Nassif, Houssam; Page, David; Muggleton, Stephen H; E Sternberg, Michael J
2012-07-11
There is a need for automated methods to learn general features of the interactions of a ligand class with its diverse set of protein receptors. An appropriate machine learning approach is Inductive Logic Programming (ILP), which automatically generates comprehensible rules in addition to prediction. The development of ILP systems which can learn rules of the complexity required for studies on protein structure remains a challenge. In this work we use a new ILP system, ProGolem, and demonstrate its performance on learning features of hexose-protein interactions. The rules induced by ProGolem detect interactions mediated by aromatics and by planar-polar residues, in addition to less common features such as the aromatic sandwich. The rules also reveal a previously unreported dependency for residues cys and leu. They also specify interactions involving aromatic and hydrogen bonding residues. This paper shows that Inductive Logic Programming implemented in ProGolem can derive rules giving structural features of protein/ligand interactions. Several of these rules are consistent with descriptions in the literature. In addition to confirming literature results, ProGolem's model has a 10-fold cross-validated predictive accuracy that is superior, at the 95% confidence level, to another ILP system previously used to study protein/hexose interactions and is comparable with state-of-the-art statistical learners.
Single neurons in prefrontal cortex encode abstract rules.
Wallis, J D; Anderson, K C; Miller, E K
2001-06-21
The ability to abstract principles or rules from direct experience allows behaviour to extend beyond specific circumstances to general situations. For example, we learn the 'rules' for restaurant dining from specific experiences and can then apply them in new restaurants. The use of such rules is thought to depend on the prefrontal cortex (PFC) because its damage often results in difficulty in following rules. Here we explore its neural basis by recording from single neurons in the PFC of monkeys trained to use two abstract rules. They were required to indicate whether two successively presented pictures were the same or different depending on which rule was currently in effect. The monkeys performed this task with new pictures, thus showing that they had learned two general principles that could be applied to stimuli that they had not yet experienced. The most prevalent neuronal activity observed in the PFC reflected the coding of these abstract rules.
A supervised learning rule for classification of spatiotemporal spike patterns.
Lilin Guo; Zhenzhong Wang; Adjouadi, Malek
2016-08-01
This study introduces a novel supervised algorithm for spiking neurons that takes into account the synaptic and axonal delays associated with the weights. It can be utilized for both classification and association, and it also incorporates spike-timing-dependent plasticity as in the Remote Supervised Method (ReSuMe). This paper focuses on the classification aspect alone. Spiking neurons trained according to the proposed learning rule are capable of classifying different categories by the associated sequences of precisely timed spikes. Simulation results show that the proposed learning method greatly improves classification accuracy when compared to the Spike Pattern Association Neuron (SPAN) and the Tempotron learning rule.
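The abstract does not spell out the algorithm beyond its ReSuMe-like, delay-aware character, so the following is only a hedged sketch of a generic ReSuMe-style update (error signal times a non-Hebbian term plus a filtered presynaptic trace); it omits the delay learning, and all parameter names and values are assumptions for illustration.

```python
import numpy as np

def resume_like_update(w, pre, desired, actual,
                       lr=0.01, a=0.05, A=1.0, tau=5.0):
    """One pass of a ReSuMe-style update for a single output neuron.
    pre:     (n_syn, T) binary presynaptic spike trains
    desired: (T,) binary target spike train
    actual:  (T,) binary output spike train produced by the neuron
    At each time step the error (desired - actual) multiplies a
    non-Hebbian term `a` plus an exponentially filtered presynaptic
    trace, and the product is accumulated into the weight change."""
    n_syn, T = pre.shape
    trace = np.zeros(n_syn)          # low-pass filtered presynaptic activity
    dw = np.zeros(n_syn)
    for t in range(T):
        trace = trace * np.exp(-1.0 / tau) + A * pre[:, t]
        err = desired[t] - actual[t]           # +1 missing spike, -1 extra spike
        if err != 0:
            dw += lr * err * (a + trace)
    return w + dw

# toy data, purely illustrative
rng = np.random.default_rng(1)
w = rng.normal(0.0, 0.1, size=20)
pre = (rng.random((20, 200)) < 0.05).astype(float)
desired = (rng.random(200) < 0.02).astype(float)
actual = (rng.random(200) < 0.02).astype(float)
print(resume_like_update(w, pre, desired, actual)[:5])
```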
Giraldo, Sergio I.; Ramirez, Rafael
2016-01-01
Expert musicians introduce expression in their performances by manipulating sound properties such as timing, energy, pitch, and timbre. Here, we present a data-driven computational approach to induce expressive performance rule models for note duration, onset, energy, and ornamentation transformations in jazz guitar music. We extract high-level features from a set of 16 commercial audio recordings (and corresponding music scores) of jazz guitarist Grant Green in order to characterize the expression in the pieces. We apply machine learning techniques to the resulting features to learn expressive performance rule models. We (1) quantitatively evaluate the accuracy of the induced models, (2) analyse the relative importance of the considered musical features, (3) discuss some of the learnt expressive performance rules in the context of previous work, and (4) assess their generality. The accuracies of the induced predictive models are significantly above baseline levels, indicating that the audio performances and the musical features extracted contain sufficient information to automatically learn informative expressive performance patterns. Feature analysis shows that the most important musical features for predicting expressive transformations are note duration, pitch, metrical strength, phrase position, Narmour structure, and the tempo and key of the piece. Similarities and differences between the induced expressive rules and the rules reported in the literature were found. Differences may be due to the fact that most previously studied performance data has consisted of classical music recordings. Finally, the rules' performer specificity/generality is assessed by applying the induced rules to performances of the same pieces performed by two other professional jazz guitar players. Results show a consistency in the ornamentation patterns between Grant Green and the other two musicians, which may be interpreted as a good indicator of the generality of the ornamentation rules. PMID:28066290
NASA Astrophysics Data System (ADS)
Kotelnikov, E. V.; Milov, V. R.
2018-05-01
Rule-based learning algorithms offer greater transparency and are easier to interpret than neural networks and deep learning algorithms. These properties make it possible to use such algorithms effectively for descriptive data mining tasks. The choice of an algorithm also depends on its ability to solve predictive tasks. The article compares the quality of the solutions to binary and multiclass classification problems based on experiments with six datasets from the UCI Machine Learning Repository. The authors investigate three algorithms: Ripper (rule induction), C4.5 (decision trees), and In-Close (formal concept analysis). The results of the experiments show that In-Close demonstrates the best classification quality in comparison with Ripper and C4.5; however, the latter two generate more compact rule sets.
Riggs, Anne E; Young, Andrew G
2016-08-01
What influences children's normative judgments of conventional rules at different points in development? The current study explored the effects of two contextual factors on children's normative reasoning: the way in which the rules were learned and whether the rules apply to the self or others. Peer dyads practiced a novel collaborative board game comprising two complementary roles. Dyads were either taught both the prescriptive (i.e., what to do) and proscriptive (i.e., what not to do) forms of the rules, taught only the prescriptive form of the rules, or created the rules themselves. Children then judged whether third parties were violating or conforming to the rules governing their own roles and their partner's roles. Early school-aged children's (6- to 7-year-olds; N = 60) normative judgments were strongest when they had been taught the rules (with or without the proscriptive form), but were more flexible for rules they created themselves. Preschool-aged children's (4- to 5-year-olds; N = 60) normative judgments, however, were strongest when they were taught both the prescriptive and proscriptive forms of the rules. Additionally, preschoolers exhibited stronger normative judgments when the rules governed their own roles rather than their partner's roles, whereas school-aged children treated all rules as equally normative. These results demonstrate that children's normative reasoning is contingent on contextual factors of the learning environment and, moreover, highlight 2 specific areas in which children's inferences about the normativity of conventions strengthen over development. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Tanaka, Takuma; Aoyagi, Toshio; Kaneko, Takeshi
2012-10-01
We propose a new principle for replicating the receptive field properties of neurons in the primary visual cortex. We derive a learning rule for a feedforward network, which maintains a low firing rate for the output neurons (resulting in temporal sparseness) and allows only a small subset of the neurons in the network to fire at any given time (resulting in population sparseness). Our learning rule also sets the firing rates of the output neurons at each time step to near-maximum or near-minimum levels, resulting in neuronal reliability. The learning rule is simple enough to be written in spatially and temporally local forms. After the learning stage is performed using input image patches of natural scenes, the output neurons in the model network are found to exhibit simple-cell-like receptive field properties. When the outputs of these simple-cell-like neurons are fed as input to another model layer using the same learning rule, the second-layer output neurons after learning become less sensitive to the phase of gratings than the simple-cell-like input neurons. In particular, some of the second-layer output neurons become completely phase invariant, owing to the convergence of connections from first-layer neurons with similar orientation selectivity onto second-layer neurons in the model network. We examine the parameter dependencies of the receptive field properties of the model neurons after learning and discuss their biological implications. We also show that the localized learning rule is consistent with experimental results concerning neuronal plasticity and can replicate the receptive fields of simple and complex cells.
Málková, L; Bachevalier, J; Webster, M; Mishkin, M
2000-01-01
The ability of rhesus monkeys to master the rule for delayed nonmatching-to-sample (DNMS) has a protracted ontogenetic development, reaching adult levels of proficiency around 4 to 5 years of age (Bachevalier, 1990). To test the possibility that this slow development could be due, at least in part, to immaturity of the prefrontal component of a temporo-prefrontal circuit important for DNMS rule learning (Kowalska, Bachevalier, & Mishkin, 1991; Weinstein, Saunders, & Mishkin, 1988), monkeys with neonatal lesions of the inferior prefrontal convexity were compared on DNMS with both normal controls and animals given neonatal lesions of the medial temporal lobe. Consistent with our previous results (Bachevalier & Mishkin, 1994; Málková, Mishkin, & Bachevalier, 1995), the neonatal medial temporal lesions led to marked impairment in rule learning (as well as in recognition memory with long delays and list lengths) at both 3 months and 2 years of age. By contrast, the neonatal inferior convexity lesions yielded no impairment in rule-learning at 3 months and only a mild impairment at 2 years, a finding that also contrasts sharply with the marked effects of the same lesion made in adulthood. This pattern of sparing closely resembles the one found earlier after neonatal lesions to the cortical visual area TE (Bachevalier & Mishkin, 1994; Málková et al., 1995). The functional sparing at 3 months probably reflects the fact that the temporo-prefrontal circuit is nonfunctional at this early age, resulting in a total dependency on medial temporal contributions to rule learning. With further development, however, this circuit begins to provide a supplementary route for learning.
NASA Astrophysics Data System (ADS)
Li, Qiang; Wang, Zhi; Le, Yansi; Sun, Chonghui; Song, Xiaojia; Wu, Chongqing
2016-10-01
Neuromorphic engineering has a wide range of applications in the fields of machine learning, pattern recognition, adaptive control, etc. Photonics, characterized by its high speed, wide bandwidth, low power consumption and massive parallelism, is an ideal way to realize ultrafast spiking neural networks (SNNs). Synaptic plasticity is believed to be critical for learning, memory and development in neural circuits. Experimental results have shown that changes of synapses are highly dependent on the relative timing of pre- and postsynaptic spikes. Synaptic plasticity in which presynaptic spikes preceding postsynaptic spikes result in strengthening, while the opposite timing results in weakening, is called the antisymmetric spike-timing-dependent plasticity (STDP) learning rule; plasticity with the opposite sign dependence under the same timing conditions is called the antisymmetric anti-STDP learning rule. We proposed and experimentally demonstrated an optical implementation of neural learning algorithms, which can achieve both the antisymmetric STDP and anti-STDP learning rules, based on cross-gain modulation (XGM) within a single semiconductor optical amplifier (SOA). The width and height of the potentiation and depression windows can be controlled by adjusting the injection current of the SOA, to mimic the biological antisymmetric STDP and anti-STDP learning rules more realistically. As the injection current increases, the width of the depression and potentiation windows decreases and the height increases, due to the decreasing recovery time and increasing gain under a stronger injection current. Based on the demonstrated optical STDP circuit, ultrafast learning in optical SNNs can be realized.
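For readers unfamiliar with the two windows being contrasted, a minimal numerical sketch of the antisymmetric STDP window and its sign-flipped anti-STDP counterpart is given below; the amplitudes and time constant are illustrative assumptions, not the values realized in the SOA experiment.

```python
import numpy as np

def stdp_dw(dt, A_plus=0.01, A_minus=0.012, tau=20.0, anti=False):
    """Weight change as a function of spike-timing difference
    dt = t_post - t_pre (ms). Antisymmetric STDP: pre-before-post
    (dt > 0) potentiates, post-before-pre depresses. Setting
    anti=True flips the sign, giving the anti-STDP window."""
    dw = np.where(dt > 0,
                  A_plus * np.exp(-dt / tau),    # potentiation branch
                  -A_minus * np.exp(dt / tau))   # depression branch
    return -dw if anti else dw

dts = np.array([-40.0, -10.0, 10.0, 40.0])       # toy timing differences
print(stdp_dw(dts))              # antisymmetric STDP window
print(stdp_dw(dts, anti=True))   # anti-STDP window
```

Tuning A_plus, A_minus, and tau is the software analogue of what the abstract describes doing physically by adjusting the SOA injection current.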
Miles, Sarah J; Matsuki, Kazunaga; Minda, John Paul
2014-07-01
Category learning is often characterized as being supported by two separate learning systems. A verbal system learns rule-defined (RD) categories that can be described using a verbal rule and relies on executive functions (EFs) to learn via hypothesis testing. A nonverbal system learns non-rule-defined (NRD) categories that cannot be described by a verbal rule and uses automatic, procedural learning. The verbal system is dominant in that adults tend to use it during initial learning but may switch to the nonverbal system when the verbal system is unsuccessful. The nonverbal system has traditionally been thought to operate independently of EFs, but recent studies suggest that EFs may play a role in the nonverbal system-specifically, to facilitate the transition away from the verbal system. Accordingly, continuously interfering with EFs during the categorization process, so that EFs are never fully available to facilitate the transition, may be more detrimental to the nonverbal system than is temporary EF interference. Participants learned an NRD or an RD category while EFs were untaxed, taxed temporarily, or taxed continuously. When EFs were continuously taxed during NRD categorization, participants were less likely to use a nonverbal categorization strategy than when EFs were temporarily taxed, suggesting that when EFs were unavailable, the transition to the nonverbal system was hindered. For the verbal system, temporary and continuous interference had similar effects on categorization performance and on strategy use, illustrating that EFs play an important but different role in each of the category-learning systems.
Neural plasticity and behavior - sixty years of conceptual advances.
Sweatt, J David
2016-10-01
This brief review summarizes 60 years of conceptual advances that have demonstrated a role for active changes in neuronal connectivity as a controller of behavior and behavioral change. Seminal studies in the first phase of the six-decade span of this review firmly established the cellular basis of behavior - a concept that we take for granted now, but which was an open question at the time. Hebbian plasticity, including long-term potentiation and long-term depression, was then discovered as being important for local circuit refinement in the context of memory formation and behavioral change and stabilization in the mammalian central nervous system. Direct demonstration of plasticity of neuronal circuit function in vivo, for example, hippocampal neurons forming place cell firing patterns, extended this concept. However, additional neurophysiologic and computational studies demonstrated that circuit development and stabilization additionally relies on non-Hebbian, homoeostatic, forms of plasticity, such as synaptic scaling and control of membrane intrinsic properties. Activity-dependent neurodevelopment was found to be associated with cell-wide adjustments in post-synaptic receptor density, and found to occur in conjunction with synaptic pruning. Pioneering cellular neurophysiologic studies demonstrated the critical roles of transmembrane signal transduction, NMDA receptor regulation, regulation of neural membrane biophysical properties, and back-propagating action potential in critical time-dependent coincidence detection in behavior-modifying circuits. Concerning the molecular mechanisms underlying these processes, regulation of gene transcription was found to serve as a bridge between experience and behavioral change, closing the 'nature versus nurture' divide. Both active DNA (de)methylation and regulation of chromatin structure have been validated as crucial regulators of gene transcription during learning. The discovery of protein synthesis dependence on the acquisition of behavioral change was an influential discovery in the neurochemistry of behavioral modification. Higher order cognitive functions such as decision making and spatial and language learning were also discovered to hinge on neural plasticity mechanisms. The role of disruption of these processes in intellectual disabilities, memory disorders, and drug addiction has recently been clarified based on modern genetic techniques, including in the human. The area of neural plasticity and behavior has seen tremendous advances over the last six decades, with many of those advances being specifically in the neurochemistry domain. This review provides an overview of the progress in the area of neuroplasticity and behavior over the life-span of the Journal of Neurochemistry. To organize the broad literature base, the review collates progress into fifteen broad categories identified as 'conceptual advances', as viewed by the author. The fifteen areas are delineated in the figure above. This article is part of the 60th Anniversary special issue. © 2016 International Society for Neurochemistry.
Designing boosting ensemble of relational fuzzy systems.
Scherer, Rafał
2010-10-01
A method frequently used in classification systems for improving classification accuracy is to combine the outputs of several classifiers. Among the various types of classifiers, fuzzy ones are attractive because they use intelligible fuzzy if-then rules. In the paper we build an AdaBoost ensemble of relational neuro-fuzzy classifiers. Relational fuzzy systems bind input and output fuzzy linguistic values by a binary relation; thus, compared to traditional fuzzy systems, the fuzzy rules carry additional weights - the elements of a fuzzy relation matrix. Thanks to this, the system is better adjustable to the data during learning. In the paper an ensemble of relational fuzzy systems is proposed. The problem is that such an ensemble contains separate rule bases which cannot be directly merged. As the systems are separate, we cannot treat fuzzy rules coming from different systems as rules from the same (single) system. In the paper, the problem is addressed by a novel design of the fuzzy systems constituting the ensemble, resulting in normalization of the individual rule bases during learning. The method described in the paper is tested on several known benchmarks and compared with other machine learning solutions from the literature.
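The boosting scheme named here is AdaBoost. As a generic reference point (not the paper's relational-fuzzy-specific design), one round of the standard AdaBoost weight update looks like the sketch below, with the base classifier's votes standing in for one fuzzy system in the ensemble.

```python
import numpy as np

def adaboost_round(sample_w, y_true, y_pred):
    """One AdaBoost round for binary labels in {-1, +1}.
    y_pred are the votes of the current base classifier; returns its
    voting weight alpha and the re-normalised sample weights."""
    miss = (y_true != y_pred).astype(float)
    err = np.clip(np.sum(sample_w * miss), 1e-10, 1 - 1e-10)
    alpha = 0.5 * np.log((1 - err) / err)
    new_w = sample_w * np.exp(-alpha * y_true * y_pred)
    return alpha, new_w / new_w.sum()

# toy round: one misclassified sample gains weight for the next learner
y = np.array([1, 1, -1, -1, 1])
pred = np.array([1, -1, -1, -1, 1])
alpha, w = adaboost_round(np.full(5, 0.2), y, pred)
print(alpha, w)
```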
Implementation of a spike-based perceptron learning rule using TiO2-x memristors.
Mostafa, Hesham; Khiat, Ali; Serb, Alexander; Mayr, Christian G; Indiveri, Giacomo; Prodromakis, Themis
2015-01-01
Synaptic plasticity plays a crucial role in allowing neural networks to learn and adapt to various input environments. Neuromorphic systems need to implement plastic synapses to obtain basic "cognitive" capabilities such as learning. One promising and scalable approach for implementing neuromorphic synapses is to use nano-scale memristors as synaptic elements. In this paper we propose a hybrid CMOS-memristor system comprising CMOS neurons interconnected through TiO2-x memristors, and spike-based learning circuits that modulate the conductance of the memristive synapse elements according to a spike-based Perceptron plasticity rule. We highlight a number of advantages for using this spike-based plasticity rule as compared to other forms of spike timing dependent plasticity (STDP) rules. We provide experimental proof-of-concept results with two silicon neurons connected through a memristive synapse that show how the CMOS plasticity circuits can induce stable changes in memristor conductances, giving rise to increased synaptic strength after a potentiation episode and to decreased strength after a depression episode.
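The exact rule realized in the memristive circuits is hardware-specific, but the underlying perceptron-style plasticity can be sketched abstractly as follows; the coincidence-window encoding and learning rate here are illustrative assumptions, not the paper's circuit behaviour.

```python
import numpy as np

def spike_perceptron_update(w, pre_active, should_fire, did_fire, lr=0.05):
    """Perceptron-style plasticity for one classification trial.
    pre_active marks synapses that received a spike in the relevant
    coincidence window. Weights change only on errors: active synapses
    are potentiated on a miss and depressed on a false alarm."""
    err = int(should_fire) - int(did_fire)   # +1 miss, -1 false alarm, 0 correct
    return w + lr * err * pre_active

w = np.zeros(4)
w = spike_perceptron_update(w, np.array([1, 0, 1, 0]),
                            should_fire=True, did_fire=False)
print(w)   # [0.05 0.   0.05 0.  ]
```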
Medial Prefrontal Cortex Reduces Memory Interference by Modifying Hippocampal Encoding
Guise, Kevin G.; Shapiro, Matthew L.
2017-01-01
The prefrontal cortex (PFC) is crucial for accurate memory performance when prior knowledge interferes with new learning, but the mechanisms that minimize proactive interference are unknown. To investigate these, we assessed the influence of medial PFC (mPFC) activity on spatial learning and hippocampal coding in a plus maze task that requires both structures. mPFC inactivation did not impair spatial learning or retrieval per se, but impaired the ability to follow changing spatial rules. mPFC and CA1 ensembles recorded simultaneously predicted goal choices and tracked changing rules; inactivating mPFC attenuated CA1 prospective coding. mPFC activity modified CA1 codes during learning, which in turn predicted how quickly rats adapted to subsequent rule changes. The results suggest that task rules signaled by the mPFC become incorporated into hippocampal representations and support prospective coding. By this mechanism, mPFC activity prevents interference by “teaching” the hippocampus to retrieve distinct representations of similar circumstances. PMID:28343868
Learning temporal rules to forecast instability in continuously monitored patients.
Guillame-Bert, Mathieu; Dubrawski, Artur; Wang, Donghan; Hravnak, Marilyn; Clermont, Gilles; Pinsky, Michael R
2017-01-01
Inductive machine learning, and in particular extraction of association rules from data, has been successfully used in multiple application domains, such as market basket analysis, disease prognosis, fraud detection, and protein sequencing. The appeal of rule extraction techniques stems from their ability to handle intricate problems yet produce models based on rules that can be comprehended by humans, and are therefore more transparent. Human comprehension is a factor that may improve adoption and use of data-driven decision support systems clinically via face validity. In this work, we explore whether we can reliably and informatively forecast cardiorespiratory instability (CRI) in step-down unit (SDU) patients utilizing data from continuous monitoring of physiologic vital sign (VS) measurements. We use a temporal association rule extraction technique in conjunction with a rule fusion protocol to learn how to forecast CRI in continuously monitored patients. We detail our approach and present and discuss encouraging empirical results obtained using continuous multivariate VS data from the bedside monitors of 297 SDU patients spanning 29 346 hours (3.35 patient-years) of observation. We present example rules that have been learned from data to illustrate potential benefits of comprehensibility of the extracted models, and we analyze the empirical utility of each VS as a potential leading indicator of an impending CRI event. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
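The rule-extraction algorithm itself is not given in the abstract. As a minimal illustration of the kind of quality measures a temporal association rule might carry, here is a hedged sketch of support and confidence for a rule of the form "vital-sign pattern now implies a CRI event within a horizon"; the function name, timestamps, and horizon are invented for illustration.

```python
import numpy as np

def temporal_rule_stats(antecedent_hits, event_times, horizon):
    """Support and confidence of a temporal rule
    'antecedent at time t => event within `horizon` time units'.
    antecedent_hits: times at which the pattern was detected.
    event_times:     times of observed instability events."""
    events = np.asarray(event_times)
    fired = 0
    for t in antecedent_hits:
        if np.any((events > t) & (events <= t + horizon)):
            fired += 1
    support = fired                                   # confirmed firings (absolute count)
    confidence = fired / len(antecedent_hits) if antecedent_hits else 0.0
    return support, confidence

# toy timestamps in minutes, purely illustrative
print(temporal_rule_stats([10, 50, 120], [55, 300], horizon=30))  # (1, 0.333...)
```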
Genetic reinforcement learning through symbiotic evolution for fuzzy controller design.
Juang, C F; Lin, J Y; Lin, C T
2000-01-01
An efficient genetic reinforcement learning algorithm for designing fuzzy controllers is proposed in this paper. The genetic algorithm (GA) adopted in this paper is based upon symbiotic evolution which, when applied to fuzzy controller design, complements the local mapping property of a fuzzy rule. Using this Symbiotic-Evolution-based Fuzzy Controller (SEFC) design method, the number of control trials, as well as the consumed CPU time, are considerably reduced when compared to traditional GA-based fuzzy controller design methods and other types of genetic reinforcement learning schemes. Moreover, unlike traditional fuzzy controllers, which partition the input space into a grid, SEFC partitions the input space in a flexible way, thus creating fewer fuzzy rules. In SEFC, different types of fuzzy rules whose consequent parts are singletons, fuzzy sets, or linear equations (TSK-type fuzzy rules) are allowed. Further, the free parameters (e.g., centers and widths of membership functions) and fuzzy rules are all tuned automatically. In particular, for the TSK-type fuzzy rules used with the proposed learning algorithm, only the significant input variables are selected to participate in the consequent of a rule. The proposed SEFC design method has been applied to different simulated control problems, including the cart-pole balancing system, a magnetic levitation system, and a water bath temperature control system. On these control problems, and in comparisons with some traditional GA-based fuzzy systems, the proposed SEFC has been verified to be efficient and superior.
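Since TSK-type rules are central to SEFC, a compact sketch of zero/first-order TSK inference (Gaussian antecedents, product t-norm, weighted-average defuzzification) may help; the membership shapes and parameter values below are generic assumptions, not those evolved by SEFC.

```python
import numpy as np

def tsk_output(x, centers, widths, consequents):
    """TSK fuzzy inference for one input vector x. Each rule r has
    Gaussian antecedents (centers[r], widths[r]) and a linear
    consequent y_r = consequents[r] . [1, x]. The crisp output is the
    firing-strength-weighted average of the rule consequents."""
    x = np.asarray(x, dtype=float)
    member = np.exp(-((x - centers) ** 2) / (2 * widths ** 2))  # (rules, dims)
    firing = member.prod(axis=1)                                # product t-norm
    y_rule = consequents @ np.r_[1.0, x]                        # (rules,)
    return float(np.dot(firing, y_rule) / firing.sum())

# two illustrative rules over a 2-D input
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
widths = np.ones((2, 2)) * 0.5
consequents = np.array([[0.0, 1.0, 0.0],    # rule 1: y = x1
                        [1.0, 0.0, 1.0]])   # rule 2: y = 1 + x2
print(tsk_output([0.2, 0.8], centers, widths, consequents))
```

In SEFC the GA would evolve the centers, widths, and consequent parameters; here they are fixed by hand purely to show the inference step.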
Perceptual Learning Improves Adult Amblyopic Vision Through Rule-Based Cognitive Compensation
Zhang, Jun-Yun; Cong, Lin-Juan; Klein, Stanley A.; Levi, Dennis M.; Yu, Cong
2014-01-01
Purpose. We investigated whether perceptual learning in adults with amblyopia could be enabled to transfer completely to an orthogonal orientation, which would suggest that amblyopic perceptual learning results mainly from high-level cognitive compensation, rather than plasticity in the amblyopic early visual brain. Methods. Nineteen adults (mean age = 22.5 years) with anisometropic and/or strabismic amblyopia were trained following a training-plus-exposure (TPE) protocol. The amblyopic eyes practiced contrast, orientation, or Vernier discrimination at one orientation for six to eight sessions. Then the amblyopic or nonamblyopic eyes were exposed to an orthogonal orientation via practicing an irrelevant task. Training was first performed at a lower spatial frequency (SF), then at a higher SF near the cutoff frequency of the amblyopic eye. Results. Perceptual learning was initially orientation specific. However, after exposure to the orthogonal orientation, learning transferred to an orthogonal orientation completely. Reversing the exposure and training order failed to produce transfer. Initial lower SF training led to broad improvement of contrast sensitivity, and later higher SF training led to more specific improvement at high SFs. Training improved visual acuity by 1.5 to 1.6 lines (P < 0.001) in the amblyopic eyes with computerized tests and a clinical E acuity chart. It also improved stereoacuity by 53% (P < 0.001). Conclusions. The complete transfer of learning suggests that perceptual learning in amblyopia may reflect high-level learning of rules for performing a visual discrimination task. These rules are applicable to new orientations to enable learning transfer. Therefore, perceptual learning may improve amblyopic vision mainly through rule-based cognitive compensation. PMID:24550359
The Convallis Rule for Unsupervised Learning in Cortical Networks
Yger, Pierre; Harris, Kenneth D.
2013-01-01
The phenomenology and cellular mechanisms of cortical synaptic plasticity are becoming known in increasing detail, but the computational principles by which cortical plasticity enables the development of sensory representations are unclear. Here we describe a framework for cortical synaptic plasticity termed the “Convallis rule”, mathematically derived from a principle of unsupervised learning via constrained optimization. Implementation of the rule caused a recurrent cortex-like network of simulated spiking neurons to develop rate representations of real-world speech stimuli, enabling classification by a downstream linear decoder. Applied to spike patterns used in in vitro plasticity experiments, the rule reproduced multiple results including and beyond STDP. However STDP alone produced poorer learning performance. The mathematical form of the rule is consistent with a dual coincidence detector mechanism that has been suggested by experiments in several synaptic classes of juvenile neocortex. Based on this confluence of normative, phenomenological, and mechanistic evidence, we suggest that the rule may approximate a fundamental computational principle of the neocortex. PMID:24204224
Timely Diagnostic Feedback for Database Concept Learning
ERIC Educational Resources Information Center
Lin, Jian-Wei; Lai, Yuan-Cheng; Chuang, Yuh-Shy
2013-01-01
To efficiently learn database concepts, this work adopts association rules to provide diagnostic feedback for drawing an Entity-Relationship Diagram (ERD). Using association rules and Asynchronous JavaScript and XML (AJAX) techniques, this work implements a novel Web-based Timely Diagnosis System (WTDS), which provides timely diagnostic feedback…
Bayesian learning and the psychology of rule induction
Endress, Ansgar D.
2014-01-01
In recent years, Bayesian learning models have been applied to an increasing variety of domains. While such models have been criticized on theoretical grounds, the underlying assumptions and predictions are rarely made concrete and tested experimentally. Here, I use Frank and Tenenbaum's (2011) Bayesian model of rule-learning as a case study to spell out the underlying assumptions, and to confront them with the empirical results Frank and Tenenbaum (2011) propose to simulate, as well as with novel experiments. While rule-learning is arguably well suited to rational Bayesian approaches, I show that their models are neither psychologically plausible nor ideal observer models. Further, I show that their central assumption is unfounded: humans do not always preferentially learn more specific rules, but, at least in some situations, those rules that happen to be more salient. Even when granting the unsupported assumptions, I show that all of the experiments modeled by Frank and Tenenbaum (2011) either contradict their models, or have a large number of more plausible interpretations. I provide an alternative account of the experimental data based on simple psychological mechanisms, and show that this account both describes the data better, and is easier to falsify. I conclude that, despite the recent surge in Bayesian models of cognitive phenomena, psychological phenomena are best understood by developing and testing psychological theories rather than models that can be fit to virtually any data. PMID:23454791
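The Bayesian machinery under discussion rests on the size principle, by which more specific hypotheses receive higher likelihood. Below is a toy sketch of that principle only, not Frank and Tenenbaum's actual model; the hypothesis sets and syllable strings are invented for illustration.

```python
from fractions import Fraction

def posterior(hypotheses, prior, data):
    """Toy Bayesian rule induction with the size principle: each
    hypothesis is the set of strings it licenses; the likelihood of n
    observations consistent with h is (1/|h|)^n, and 0 otherwise."""
    unnorm = {}
    for name, h in hypotheses.items():
        if all(d in h for d in data):
            unnorm[name] = prior[name] * Fraction(1, len(h)) ** len(data)
        else:
            unnorm[name] = Fraction(0)
    z = sum(unnorm.values())
    return {name: p / z for name, p in unnorm.items()}

# hypothetical ABB rule vs. a broader "any pattern" hypothesis
hypotheses = {
    "ABB": {"wo-fe-fe", "ga-ti-ti", "li-na-na"},
    "any": {"wo-fe-fe", "ga-ti-ti", "li-na-na", "wo-fe-ga", "ga-ti-li", "li-na-wo"},
}
prior = {"ABB": Fraction(1, 2), "any": Fraction(1, 2)}
print(posterior(hypotheses, prior, ["wo-fe-fe", "ga-ti-ti"]))  # ABB gets 4/5
```

The output shows the size principle at work: the more specific ABB rule is preferred, which is precisely the assumption the abstract argues does not always match human behaviour.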
Reinforcement Learning in a Nonstationary Environment: The El Farol Problem
NASA Technical Reports Server (NTRS)
Bell, Ann Maria
1999-01-01
This paper examines the performance of simple learning rules in a complex adaptive system based on a coordination problem modeled on the El Farol problem. The key features of the El Farol problem are that it typically involves a medium number of agents and that agents' pay-off functions have a discontinuous response to increased congestion. First we consider a single adaptive agent facing a stationary environment. We demonstrate that the simple learning rules proposed by Roth and Erev can be extremely sensitive to small changes in the initial conditions and that events early in a simulation can affect the performance of the rule over a relatively long time horizon. In contrast, a reinforcement learning rule based on standard practice in the computer science literature converges rapidly and robustly. The situation is reversed when multiple adaptive agents interact: the RE algorithms often converge rapidly to a stable average aggregate attendance despite the slow and erratic behavior of individual learners, while the CS-based learners frequently over-attend in the early and intermediate terms. The symmetric mixed-strategy equilibrium is unstable: all three learning rules ultimately tend towards pure strategies or stabilize in the medium term at non-equilibrium probabilities of attendance. The brittleness of the algorithms in different contexts emphasizes the importance of thorough and thoughtful examination of simulation-based results.
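For concreteness, here is a hedged sketch of the basic Roth-Erev propensity update (choice proportional to propensity, reinforcement by realized payoff, optional forgetting), driven by a toy single-agent payoff. It is not the paper's multi-agent El Farol simulation; the forgetting rate, payoff values, and crowding probability are assumptions.

```python
import numpy as np

def roth_erev_round(propensities, payoff_fn, phi=0.0, rng=None):
    """One round of basic Roth-Erev reinforcement learning.
    An action is chosen with probability proportional to its propensity;
    the chosen action's propensity is then reinforced by the realized
    payoff, after optional forgetting at rate phi."""
    if rng is None:
        rng = np.random.default_rng()
    probs = propensities / propensities.sum()
    action = rng.choice(len(propensities), p=probs)
    payoff = payoff_fn(action)
    propensities = (1 - phi) * propensities
    propensities[action] += payoff
    return action, propensities

# toy El Farol-flavoured payoff: 'go' (action 1) pays off only on uncrowded nights
rng = np.random.default_rng(2)
q = np.ones(2)                       # propensities for [stay, go]
for _ in range(200):
    crowded = rng.random() < 0.6
    _, q = roth_erev_round(
        q, lambda a: 1.0 if (a == 1 and not crowded) or (a == 0 and crowded) else 0.1,
        phi=0.01, rng=rng)
print(q / q.sum())                   # empirical choice probabilities after learning
```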
Learning and tuning fuzzy logic controllers through reinforcements
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.; Khedkar, Pratap
1992-01-01
A new method for learning and tuning a fuzzy logic controller based on reinforcements from a dynamic system is presented. In particular, our Generalized Approximate Reasoning-based Intelligent Control (GARIC) architecture: (1) learns and tunes a fuzzy logic controller even when only weak reinforcement, such as a binary failure signal, is available; (2) introduces a new conjunction operator for computing the rule strengths of fuzzy control rules; (3) introduces a new localized mean of maximum (LMOM) method for combining the conclusions of several firing control rules; and (4) learns to produce real-valued control actions. Learning is achieved by integrating fuzzy inference into a feedforward network, which can then adaptively improve performance by using gradient descent methods. We extend the AHC algorithm of Barto, Sutton, and Anderson to include the prior control knowledge of human operators. The GARIC architecture is applied to a cart-pole balancing system and has demonstrated significant improvements over previous schemes for cart-pole balancing in terms of the speed of learning and robustness to changes in the dynamic system's parameters.
Discontinuous categories affect information-integration but not rule-based category learning.
Maddox, W Todd; Filoteo, J Vincent; Lauritzen, J Scott; Connally, Emily; Hejl, Kelli D
2005-07-01
Three experiments were conducted that provide a direct examination of within-category discontinuity manipulations on the implicit, procedural-based learning and the explicit, hypothesis-testing systems proposed in F. G. Ashby, L. A. Alfonso-Reese, A. U. Turken, and E. M. Waldron's (1998) competition between verbal and implicit systems model. Discontinuous categories adversely affected information-integration but not rule-based category learning. Increasing the magnitude of the discontinuity did not lead to a significant decline in performance. The distance to the bound provides a reasonable description of the generalization profile associated with the hypothesis-testing system, whereas the distance to the bound plus the distance to the trained response region provides a reasonable description of the generalization profile associated with the procedural-based learning system. These results suggest that within-category discontinuity differentially impacts information-integration but not rule-based category learning and provides information regarding the detailed processing characteristics of each category learning system. ((c) 2005 APA, all rights reserved).
2012-01-01
Background There is a need for automated methods to learn general features of the interactions of a ligand class with its diverse set of protein receptors. An appropriate machine learning approach is Inductive Logic Programming (ILP), which automatically generates comprehensible rules in addition to prediction. The development of ILP systems which can learn rules of the complexity required for studies on protein structure remains a challenge. In this work we use a new ILP system, ProGolem, and demonstrate its performance on learning features of hexose-protein interactions. Results The rules induced by ProGolem detect interactions mediated by aromatics and by planar-polar residues, in addition to less common features such as the aromatic sandwich. The rules also reveal a previously unreported dependency for residues cys and leu. They also specify interactions involving aromatic and hydrogen bonding residues. This paper shows that Inductive Logic Programming implemented in ProGolem can derive rules giving structural features of protein/ligand interactions. Several of these rules are consistent with descriptions in the literature. Conclusions In addition to confirming literature results, ProGolem’s model has a 10-fold cross-validated predictive accuracy that is superior, at the 95% confidence level, to another ILP system previously used to study protein/hexose interactions and is comparable with state-of-the-art statistical learners. PMID:22783946
Blanco, Nathaniel J; Saucedo, Celeste L; Gonzalez-Lima, F
2017-03-01
This is the first randomized, controlled study comparing the cognitive effects of transcranial laser stimulation on category learning tasks. Transcranial infrared laser stimulation is a new non-invasive form of brain stimulation that shows promise for wide-ranging experimental and neuropsychological applications. It involves using infrared laser to enhance cerebral oxygenation and energy metabolism through upregulation of the respiratory enzyme cytochrome oxidase, the primary infrared photon acceptor in cells. Previous research found that transcranial infrared laser stimulation aimed at the prefrontal cortex can improve sustained attention, short-term memory, and executive function. In this study, we directly investigated the influence of transcranial infrared laser stimulation on two neurobiologically dissociable systems of category learning: a prefrontal cortex mediated reflective system that learns categories using explicit rules, and a striatally mediated reflexive learning system that forms gradual stimulus-response associations. Participants (n=118) received either active infrared laser to the lateral prefrontal cortex or sham (placebo) stimulation, and then learned one of two category structures-a rule-based structure optimally learned by the reflective system, or an information-integration structure optimally learned by the reflexive system. We found that prefrontal rule-based learning was substantially improved following transcranial infrared laser stimulation as compared to placebo (treatment X block interaction: F(1, 298)=5.117, p=0.024), while information-integration learning did not show significant group differences (treatment X block interaction: F(1, 288)=1.633, p=0.202). These results highlight the exciting potential of transcranial infrared laser stimulation for cognitive enhancement and provide insight into the neurobiological underpinnings of category learning. Copyright © 2017 Elsevier Inc. All rights reserved.
Social inference and social anxiety: evidence of a fear-congruent self-referential learning bias.
Button, Katherine S; Browning, Michael; Munafò, Marcus R; Lewis, Glyn
2012-12-01
Fears of negative evaluation characterise social anxiety, and preferential processing of fear-relevant information is implicated in maintaining symptoms. Little is known, however, about the relationship between social anxiety and the process of inferring negative evaluation. The ability to use social information to learn what others think about one, referred to here as self-referential learning, is fundamental for effective social interaction. The aim of this research was to examine whether social anxiety is associated with self-referential learning. 102 Females with either high (n = 52) or low (n = 50) self-reported social anxiety completed a novel probabilistic social learning task. Using trial and error, the task required participants to learn two self-referential rules, 'I am liked' and 'I am disliked'. Participants across the sample were better at learning the positive rule 'I am liked' than the negative rule 'I am disliked', β = -6.4, 95% CI [-8.0, -4.7], p < 0.001. This preference for learning positive self-referential information was strongest in the lowest socially anxious and was abolished in the most symptomatic participants. Relative to the low group, the high anxiety group were better at learning they were disliked and worse at learning they were liked, social anxiety by rule interaction β = 3.6; 95% CI [+0.3, +7.0], p = 0.03. The specificity of the results to self-referential processing requires further research. Healthy individuals show a robust preference for learning that they are liked relative to disliked. This positive self-referential bias is reduced in social anxiety in a way that would be expected to exacerbate anxiety symptoms. Copyright © 2012 Elsevier Ltd. All rights reserved.
Acute anxiety and social inference: An experimental manipulation with 7.5% carbon dioxide inhalation
Button, Katherine S; Karwatowska, Lucy; Kounali, Daphne; Munafò, Marcus R; Attwood, Angela S
2016-01-01
Background: Positive self-bias is thought to be protective for mental health. We previously found that the degree of positive bias when learning self-referential social evaluation decreases with increasing social anxiety. It is unclear whether this reduction is driven by differences in state or trait anxiety, as both are elevated in social anxiety; therefore, we examined the effects on the state of anxiety induced by the 7.5% carbon dioxide (CO2) inhalation model of generalised anxiety disorder (GAD) on social evaluation learning. Methods: For our study, 48 (24 of female gender) healthy volunteers took two inhalations (medical air and 7.5% CO2, counterbalanced) whilst learning social rules (self-like, self-dislike, other-like and other-dislike) in an instrumental social evaluation learning task. We analysed the outcomes (number of positive responses and errors to criterion) using the random effects Poisson regression. Results: Participants made fewer and more positive responses when breathing 7.5% CO2 in the other-like and other-dislike rules, respectively (gas × condition × rule interaction p = 0.03). Individuals made fewer errors learning self-like than self-dislike, and this positive self-bias was unaffected by CO2. Breathing 7.5% CO2 increased errors, but only in the other-referential rules (gas × condition × rule interaction p = 0.003). Conclusions: Positive self-bias (i.e. fewer errors learning self-like than self-dislike) seemed robust to changes in state anxiety. In contrast, learning other-referential evaluation was impaired as state anxiety increased. This suggested that the previously observed variations in self-bias arise due to trait, rather than state, characteristics. PMID:27380750
Online Pedagogical Tutorial Tactics Optimization Using Genetic-Based Reinforcement Learning
Lin, Hsuan-Ta; Lee, Po-Ming; Hsiao, Tzu-Chien
2015-01-01
Tutorial tactics are policies for an Intelligent Tutoring System (ITS) to decide the next action when there are multiple actions available. Recent research has demonstrated that, when the learning contents are controlled so as to be the same, different tutorial tactics make a difference in students' learning gains. However, the Reinforcement Learning (RL) techniques that were used in previous studies to induce tutorial tactics are insufficient when encountering large problems and hence were used in offline manners. Therefore, we introduced a Genetic-Based Reinforcement Learning (GBML) approach to induce tutorial tactics in an online-learning manner without relying on any preexisting dataset. The introduced method can learn a set of rules from the environment in a manner similar to RL. It includes a genetic-based optimizer for the rule discovery task, generating new rules from the old ones. This increases the scalability of an RL learner for larger problems. The results support our hypothesis about the capability of the GBML method to induce tutorial tactics. This suggests that the GBML method should be favorable for developing real-world ITS applications in the domain of tutorial tactics induction. PMID:26065018
Rapid Transfer of Abstract Rules to Novel Contexts in Human Lateral Prefrontal Cortex
Cole, Michael W.; Etzel, Joset A.; Zacks, Jeffrey M.; Schneider, Walter; Braver, Todd S.
2011-01-01
Flexible, adaptive behavior is thought to rely on abstract rule representations within lateral prefrontal cortex (LPFC), yet it remains unclear how these representations provide such flexibility. We recently demonstrated that humans can learn complex novel tasks in seconds. Here we hypothesized that this impressive mental flexibility may be possible due to rapid transfer of practiced rule representations within LPFC to novel task contexts. We tested this hypothesis using functional MRI and multivariate pattern analysis, classifying LPFC activity patterns across 64 tasks. Classifiers trained to identify abstract rules based on practiced task activity patterns successfully generalized to novel tasks. This suggests humans can transfer practiced rule representations within LPFC to rapidly learn new tasks, facilitating cognitive performance in novel circumstances. PMID:22125519
Karakida, Ryo; Okada, Masato; Amari, Shun-Ichi
2016-07-01
The restricted Boltzmann machine (RBM) is an essential constituent of deep learning, but it is hard to train by using maximum likelihood (ML) learning, which minimizes the Kullback-Leibler (KL) divergence. Instead, contrastive divergence (CD) learning has been developed as an approximation of ML learning and is widely used in practice. To clarify the performance of CD learning, in this paper, we analytically derive the fixed points where the ML and CDn learning rules converge in two types of RBMs: one with Gaussian visible and Gaussian hidden units and the other with Gaussian visible and Bernoulli hidden units. In addition, we analyze the stability of the fixed points. As a result, we find that the stable points of the CDn learning rule coincide with those of the ML learning rule in a Gaussian-Gaussian RBM. We also reveal that larger principal components of the input data are extracted at the stable points. Moreover, in a Gaussian-Bernoulli RBM, we find that both ML and CDn learning can extract independent components at one of the stable points. Our analysis demonstrates that the same feature components as those extracted by ML learning are extracted simply by performing CD1 learning. Expanding this study should elucidate the specific solutions obtained by CD learning in other types of RBMs or in deep networks. Copyright © 2016 Elsevier Ltd. All rights reserved.
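For concreteness, a small NumPy sketch of a single CD-1 update for a Gaussian-visible / Bernoulli-hidden RBM of the kind analyzed above, assuming unit-variance visible units; this is a generic textbook-style CD-1 step, not the authors' derivation or code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, b_v, b_h, v0, lr=1e-3, rng=np.random.default_rng(0)):
    """One CD-1 step for a Gaussian-visible / Bernoulli-hidden RBM.

    Assumes unit-variance visible units; v0 is a (batch, n_visible) array.
    """
    # positive phase: hidden probabilities given the data
    ph0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # one Gibbs step back to the visibles (mean-field reconstruction)
    v1 = h0 @ W.T + b_v
    ph1 = sigmoid(v1 @ W + b_h)
    # CD-1 gradient estimates: data statistics minus reconstruction statistics
    dW = (v0.T @ ph0 - v1.T @ ph1) / v0.shape[0]
    db_v = (v0 - v1).mean(axis=0)
    db_h = (ph0 - ph1).mean(axis=0)
    return W + lr * dW, b_v + lr * db_v, b_h + lr * db_h
```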
Ambrosino, R; Buchanan, B G; Cooper, G F; Fine, M J
1995-01-01
Cost-effective health care is at the forefront of today's important health-related issues. A research team at the University of Pittsburgh has been interested in lowering the cost of medical care by attempting to define a subset of patients with community-acquired pneumonia for whom outpatient therapy is appropriate and safe. Sensitivity and specificity requirements for this domain make it difficult to use rule-based learning algorithms with standard measures of performance based on accuracy. This paper describes the use of misclassification costs to assist a rule-based machine-learning program in deriving a decision-support aid for choosing outpatient therapy for patients with community-acquired pneumonia.
Díez-García, Andrea; Barros-Zulaica, Natali; Núñez, Ángel; Buño, Washington; Fernández de Sevilla, David
2017-01-01
According to Hebb's original hypothesis (Hebb, 1949), synapses are reinforced when presynaptic activity triggers postsynaptic firing, resulting in long-term potentiation (LTP) of synaptic efficacy. Long-term depression (LTD) is a use-dependent decrease in synaptic strength that is thought to be due to synaptic input causing a weak postsynaptic effect. Although the mechanisms that mediate long-term synaptic plasticity have been investigated for at least three decades, not all questions have yet been answered. Therefore, we aimed at determining the mechanisms that generate LTP or LTD with the simplest possible protocol. Low-frequency stimulation of basal dendrite inputs in Layer 5 pyramidal neurons of the rat barrel cortex induces LTP. This stimulation triggered an EPSP, an action potential (AP) burst, and a Ca2+ spike. The same stimulation induced LTD following manipulations that reduced the Ca2+ spike and Ca2+ signal or the AP burst. Low-frequency whisker deflections induced similar bidirectional plasticity of action potential evoked responses in anesthetized rats. These results suggest that both in vitro and in vivo similar mechanisms regulate the balance between LTP and LTD. This simple induction form of bidirectional Hebbian plasticity could be present under natural conditions to regulate the detection, flow, and storage of sensorimotor information.
Mulas, Marcello; Waniek, Nicolai; Conradt, Jörg
2016-01-01
After the discovery of grid cells, which are an essential component to understand how the mammalian brain encodes spatial information, three main classes of computational models were proposed in order to explain their working principles. Amongst them, the one based on continuous attractor networks (CAN) is promising in terms of biological plausibility and suitable for robotic applications. However, in its current formulation, it is unable to reproduce important electrophysiological findings and cannot be used to perform path integration for long periods of time. In fact, in the absence of an appropriate resetting mechanism, the accumulation of errors over time, due to the noise intrinsic to velocity estimation and neural computation, prevents CAN models from reproducing stable spatial grid patterns. In this paper, we propose an extension of the CAN model using Hebbian plasticity to anchor grid cell activity to environmental landmarks. To validate our approach, we used as input to the neural simulations both artificial data and real data recorded from a robotic setup. The additional neural mechanism can not only anchor grid patterns to external sensory cues but also recall grid patterns generated in previously explored environments. These results might be instrumental for next generation bio-inspired robotic navigation algorithms that take advantage of neural computation in order to cope with complex and dynamic environments. PMID:26924979
Butts, Daniel A; Kanold, Patrick O; Shatz, Carla J
2007-01-01
Patterned spontaneous activity in the developing retina is necessary to drive synaptic refinement in the lateral geniculate nucleus (LGN). Using perforated patch recordings from neurons in LGN slices during the period of eye segregation, we examine how such burst-based activity can instruct this refinement. Retinogeniculate synapses have a novel learning rule that depends on the latencies between pre- and postsynaptic bursts on the order of one second: coincident bursts produce long-lasting synaptic enhancement, whereas non-overlapping bursts produce mild synaptic weakening. It is consistent with “Hebbian” development thought to exist at this synapse, and we demonstrate computationally that such a rule can robustly use retinal waves to drive eye segregation and retinotopic refinement. Thus, by measuring plasticity induced by natural activity patterns, synaptic learning rules can be linked directly to their larger role in instructing the patterning of neural connectivity. PMID:17341130
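A toy illustration of a burst-timing rule with the qualitative shape reported above (coincident pre- and postsynaptic bursts on a roughly one-second scale enhance the synapse, non-overlapping bursts mildly weaken it). The time constant, amplitudes, and functional form are invented for illustration and are not fitted to the retinogeniculate data.

```python
import math

def burst_rule(dt_seconds, w, a_plus=0.05, a_minus=0.01, tau=1.0):
    """Toy burst-based rule: |dt| is the latency between pre- and postsynaptic
    bursts. Bursts overlapping on a ~1 s scale potentiate; non-overlapping
    bursts mildly depress. All constants are illustrative."""
    if abs(dt_seconds) < tau:
        overlap = math.exp(-abs(dt_seconds) / tau)   # ~1 for coincident bursts
        dw = a_plus * overlap                        # long-lasting enhancement
    else:
        dw = -a_minus                                # mild weakening
    return max(0.0, w + dw)
```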
Experiments on individual strategy updating in iterated snowdrift game under random rematching.
Qi, Hang; Ma, Shoufeng; Jia, Ning; Wang, Guangchao
2015-03-07
How people actually play iterated snowdrift games, particularly under a random rematching protocol, is far from well explored. Two sets of laboratory experiments on the snowdrift game were conducted to investigate human strategy updating rules. Four groups of subjects were modeled by experience-weighted attraction learning theory at the individual level. Three out of the four groups (75%) passed model validation. Substantial heterogeneity is observed among the players, who update their strategies in four typical types, whereas rare people behave like belief-based learners even under fixed pairing. Most subjects (63.9%) adopt reinforcement learning (or similar) rules; but, interestingly, the performance of averaged reinforcement learners suffered. It is observed that two factors seem to benefit players in competition, i.e., the sensitivity to their recent experiences and the overall consideration of forgone payoffs. Moreover, subjects with changing opponents tend to learn faster based on their own recent experience, and display more diverse strategy updating rules than they do with a fixed opponent. These findings suggest that most subjects do apply reinforcement-learning-like updating rules even under random rematching, although these rules may not improve their performance. The findings help evolutionary biology researchers to understand sophisticated human behavioral strategies in social dilemmas. Copyright © 2015 Elsevier Ltd. All rights reserved.
Code of Federal Regulations, 2011 CFR
2011-04-01
... of the transfer a partnership or fiduciary learns that a partner's or beneficiary's certification of... transfer a partnership or fiduciary learns that a corporation's statement (that an interest in the... a transfer of property in accordance with the rules of this section, then no additional tax is...
Code of Federal Regulations, 2012 CFR
2012-04-01
... of the transfer a partnership or fiduciary learns that a partner's or beneficiary's certification of... transfer a partnership or fiduciary learns that a corporation's statement (that an interest in the... a transfer of property in accordance with the rules of this section, then no additional tax is...
Code of Federal Regulations, 2014 CFR
2014-04-01
... of the transfer a partnership or fiduciary learns that a partner's or beneficiary's certification of... transfer a partnership or fiduciary learns that a corporation's statement (that an interest in the... a transfer of property in accordance with the rules of this section, then no additional tax is...
Code of Federal Regulations, 2013 CFR
2013-04-01
... of the transfer a partnership or fiduciary learns that a partner's or beneficiary's certification of... transfer a partnership or fiduciary learns that a corporation's statement (that an interest in the... a transfer of property in accordance with the rules of this section, then no additional tax is...
DOT National Transportation Integrated Search
2003-01-01
The Federal Railroad Administration (FRA) Human Factors Research and Development (R&D) Program sponsored a lessons-learned study to examine the impact of safety rules revision on safety culture, incident rates, and liability claims in the railroad in...
ERIC Educational Resources Information Center
Lee, Seong-Soo
1982-01-01
Tenth-grade students (n=144) received training on one of three processing methods: coding-mapping (simultaneous), coding only, or decision tree (sequential). The induced simultaneous processing strategy worked optimally under rule learning, while the sequential strategy was difficult to induce and/or not optimal for rule-learning operations.…
Binary translation using peephole translation rules
Bansal, Sorav; Aiken, Alex
2010-05-04
An efficient binary translator uses peephole translation rules to directly translate executable code from one instruction set to another. In a preferred embodiment, the translation rules are generated using superoptimization techniques that enable the translator to automatically learn translation rules for translating code from the source to target instruction set architecture.
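A toy sketch of the peephole idea: translation rules map short source-instruction sequences directly to equivalent target sequences and are applied longest-match-first over the instruction stream. The mnemonics and the rule table are invented; in the patent's approach such rules would be learned by superoptimization rather than written by hand.

```python
# Toy peephole translation: each rule maps a short sequence of "source ISA"
# instructions to an equivalent "target ISA" sequence. Mnemonics are invented.
RULES = {
    ("push r1", "pop r2"): ("mov r2, r1",),
    ("mov r1, 0",):        ("xor r1, r1",),
}

def translate(source, rules=RULES, max_len=2):
    out, i = [], 0
    while i < len(source):
        for n in range(max_len, 0, -1):          # longest match first
            window = tuple(source[i:i + n])
            if window in rules:
                out.extend(rules[window])
                i += n
                break
        else:                                     # no rule matched: copy through
            out.append(source[i])
            i += 1
    return out

# translate(["push r1", "pop r2", "add r2, 4"]) -> ["mov r2, r1", "add r2, 4"]
```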
Aust, Ulrike; Braunöder, Elisabeth
2015-02-01
The present experiment investigated pigeons' and humans' processing styles (local or global) in an exemplar-based visual categorization task in which category membership of every stimulus had to be learned individually, and in a rule-based task in which category membership was defined by a perceptual rule. Group Intact was trained with the original pictures (providing both intact local and global information), Group Scrambled was trained with scrambled versions of the same pictures (impairing global information), and Group Blurred was trained with blurred versions (impairing local information). Subsequently, all subjects were tested for transfer to the 2 untrained presentation modes. Humans outperformed pigeons regarding learning speed and accuracy as well as transfer performance and showed good learning irrespective of group assignment, whereas the pigeons of Group Blurred needed longer to learn the training tasks than the pigeons of Groups Intact and Scrambled. Also, whereas humans generalized equally well to any novel presentation mode, pigeons' transfer from and to blurred stimuli was impaired. Both species showed faster learning and, for the most part, better transfer in the rule-based than in the exemplar-based task, but there was no evidence that the processing mode used depended on the type of task (exemplar- or rule-based). Whereas pigeons relied on local information throughout, humans did not show a preference for either processing level. Additional tests with grayscale versions of the training stimuli, with versions that were both blurred and scrambled, and with novel instances of the rule-based task confirmed and further extended these findings. PsycINFO Database Record (c) 2015 APA, all rights reserved.
Social biases determine spatiotemporal sparseness of ciliate mating heuristics.
Clark, Kevin B
2012-01-01
Ciliates become highly social, even displaying animal-like qualities, in the joint presence of aroused conspecifics and nonself mating pheromones. Pheromone detection putatively helps trigger instinctual and learned courtship and dominance displays from which social judgments are made about the availability, compatibility, and fitness representativeness or likelihood of prospective mates and rivals. In earlier studies, I demonstrated the heterotrich Spirostomum ambiguum improves mating competence by effecting preconjugal strategies and inferences in mock social trials via behavioral heuristics built from Hebbian-like associative learning. Heuristics embody serial patterns of socially relevant action that evolve into ordered, topologically invariant computational networks supporting intra- and intermate selection. S. ambiguum employs heuristics to acquire, store, plan, compare, modify, select, and execute sets of mating propaganda. One major adaptive constraint over formation and use of heuristics involves a ciliate's initial subjective bias, responsiveness, or preparedness, as defined by Stevens' Law of subjective stimulus intensity, for perceiving the meaningfulness of mechanical pressures accompanying cell-cell contacts and additional perimating events. This bias controls durations and valences of nonassociative learning, search rates for appropriate mating strategies, potential net reproductive payoffs, levels of social honesty and deception, successful error diagnosis and correction of mating signals, use of insight or analysis to solve mating dilemmas, bioenergetics expenditures, and governance of mating decisions by classical or quantum statistical mechanics. I now report this same social bias also differentially affects the spatiotemporal sparseness, as measured with metric entropy, of ciliate heuristics. Sparseness plays an important role in neural systems through optimizing the specificity, efficiency, and capacity of memory representations. The present findings indicate sparseness performs a similar function in single aneural cells by tuning the size and density of encoded computational architectures useful for decision making in social contexts.
Flexible Vinyl and Urethane Coating and Printing: New Source Performance Standards (NSPS)
Learn about the New Source Performance Standards (NSPS) for flexible vinyl and urethane coating and printing by reading the rule summary, the rule history, the code of federal regulations subpart and related rules
Learn about the NESHAP for ethylene oxide emissions for sterilization facilities. Find the rule history information, federal register citations, legal authority, and related rules as well as a rule summary.
Ganchev, Philip; Malehorn, David; Bigbee, William L.; Gopalakrishnan, Vanathi
2013-01-01
We present a novel framework for integrative biomarker discovery from related but separate data sets created in biomarker profiling studies. The framework takes prior knowledge in the form of interpretable, modular rules, and uses them during the learning of rules on a new data set. The framework consists of two methods of transfer of knowledge from source to target data: transfer of whole rules and transfer of rule structures. We evaluated the methods on three pairs of data sets: one genomic and two proteomic. We used standard measures of classification performance and three novel measures of amount of transfer. Preliminary evaluation shows that whole-rule transfer improves classification performance over using the target data alone, especially when there is more source data than target data. It also improves performance over using the union of the data sets. PMID:21571094
Reuveni, Iris; Lin, Longnian; Barkai, Edi
2018-06-15
Following training in a difficult olfactory-discrimination (OD) task, rats acquire the capability to perform the task easily, with little effort. This newly acquired skill of 'learning how to learn' is termed 'rule learning'. At the single-cell level, rule learning is manifested in long-term enhancement of the intrinsic excitability of piriform cortex (PC) pyramidal neurons and of the excitatory synaptic connections between these neurons. To maintain cortical stability, such a long-lasting increase in excitability must be accompanied by a parallel increase in inhibitory processes that would prevent hyper-excitable activation. In this review we describe the cellular and molecular mechanisms underlying complex-learning-induced long-lasting modifications in GABAA-receptor- and GABAB-receptor-mediated synaptic inhibition. Subsequently we discuss how such modifications support the induction and preservation of long-term memories in the mammalian brain. Based on experimental results, computational analysis and modeling, we propose that rule learning is maintained by doubling the strength of synaptic inputs, excitatory as well as inhibitory, in a sub-group of neurons. This enhanced synaptic transmission, which occurs in all (or almost all) synaptic inputs onto these neurons, activates specific stored memories. At the molecular level, such rule-learning-relevant synaptic strengthening is mediated by doubling the conductance of synaptic channels, but not their numbers. This postsynaptic process is controlled by a whole-cell mechanism via particular second messenger systems. This whole-cell mechanism enables memory amplification when required and memory extinction when not relevant. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.
Deep Logic Networks: Inserting and Extracting Knowledge From Deep Belief Networks.
Tran, Son N; d'Avila Garcez, Artur S
2018-02-01
Developments in deep learning have seen the use of layerwise unsupervised learning combined with supervised learning for fine-tuning. With this layerwise approach, a deep network can be seen as a more modular system that lends itself well to learning representations. In this paper, we investigate whether such modularity can be useful for inserting background knowledge into deep networks and improving learning performance when such knowledge is available, and for extracting knowledge from trained deep networks to offer a better understanding of the representations they learn. To this end, we use a simple symbolic language, a set of logical rules that we call confidence rules, and show that it is suitable for the representation of quantitative reasoning in deep networks. We show by knowledge extraction that confidence rules can offer a low-cost representation for layerwise networks (or restricted Boltzmann machines). We also show that layerwise extraction can produce an improvement in the accuracy of deep belief networks. Furthermore, the proposed symbolic characterization of deep networks provides a novel method for the insertion of prior knowledge and training of deep networks. With the use of this method, a deep neural-symbolic system is proposed and evaluated, with the experimental results indicating that modularity through the use of confidence rules and knowledge insertion can be beneficial to network performance.
Event-Driven Random Back-Propagation: Enabling Neuromorphic Deep Learning Machines
Neftci, Emre O.; Augustine, Charles; Paul, Somnath; Detorakis, Georgios
2017-01-01
An ongoing challenge in neuromorphic computing is to devise general and computationally efficient models of inference and learning which are compatible with the spatial and temporal constraints of the brain. One increasingly popular and successful approach is to take inspiration from inference and learning algorithms used in deep neural networks. However, the workhorse of deep learning, the gradient-descent Back Propagation (BP) rule, often relies on the immediate availability of network-wide information stored with high-precision memory during learning, and precise operations that are difficult to realize in neuromorphic hardware. Remarkably, recent work showed that exact backpropagated gradients are not essential for learning deep representations. Building on these results, we demonstrate an event-driven random BP (eRBP) rule that uses an error-modulated synaptic plasticity for learning deep representations. Using a two-compartment Leaky Integrate & Fire (I&F) neuron, the rule requires only one addition and two comparisons for each synaptic weight, making it very suitable for implementation in digital or mixed-signal neuromorphic hardware. Our results show that using eRBP, deep representations are rapidly learned, achieving classification accuracies on permutation invariant datasets comparable to those obtained in artificial neural network simulations on GPUs, while being robust to neural and synaptic state quantizations during learning. PMID:28680387
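A condensed, rate-based NumPy sketch of the core idea behind eRBP: the output error is routed to hidden units through a fixed random feedback matrix and gates local weight updates. The real eRBP rule is event-driven and spiking with two-compartment I&F neurons; the dense network below, its layer sizes, and the ReLU stand-in are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 784, 200, 10

W1 = rng.normal(0, 0.05, (n_in, n_hid))
W2 = rng.normal(0, 0.05, (n_hid, n_out))
B  = rng.normal(0, 0.05, (n_out, n_hid))   # fixed random feedback matrix

def step(x, target, lr=1e-3):
    """One update of random-feedback learning in dense, rate-based form.

    The paper's eRBP is event-driven and spiking; this only illustrates the
    error-modulated random feedback idea."""
    h = np.maximum(0.0, x @ W1)            # hidden activity (ReLU stand-in)
    y = h @ W2                             # linear readout
    err = y - target                       # output error
    delta_h = (err @ B) * (h > 0)          # error routed through fixed random B
    W2[:] -= lr * np.outer(h, err)         # local, error-modulated updates
    W1[:] -= lr * np.outer(x, delta_h)
    return y
```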
Toward a dual-learning systems model of speech category learning
Chandrasekaran, Bharath; Koslov, Seth R.; Maddox, W. T.
2014-01-01
More than two decades of work in vision posits the existence of dual-learning systems of category learning. The reflective system uses working memory to develop and test rules for classifying in an explicit fashion, while the reflexive system operates by implicitly associating perception with actions that lead to reinforcement. Dual-learning systems models hypothesize that in learning natural categories, learners initially use the reflective system and, with practice, transfer control to the reflexive system. The role of reflective and reflexive systems in auditory category learning and more specifically in speech category learning has not been systematically examined. In this article, we describe a neurobiologically constrained dual-learning systems theoretical framework that is currently being developed in speech category learning and review recent applications of this framework. Using behavioral and computational modeling approaches, we provide evidence that speech category learning is predominantly mediated by the reflexive learning system. In one application, we explore the effects of normal aging on non-speech and speech category learning. Prominently, we find a large age-related deficit in speech learning. The computational modeling suggests that older adults are less likely to transition from simple, reflective, unidimensional rules to more complex, reflexive, multi-dimensional rules. In a second application, we summarize a recent study examining auditory category learning in individuals with elevated depressive symptoms. We find a deficit in reflective-optimal and an enhancement in reflexive-optimal auditory category learning. Interestingly, individuals with elevated depressive symptoms also show an advantage in learning speech categories. We end with a brief summary and description of a number of future directions. PMID:25132827
Design issues for a reinforcement-based self-learning fuzzy controller
NASA Technical Reports Server (NTRS)
Yen, John; Wang, Haojin; Dauherity, Walter
1993-01-01
Fuzzy logic controllers have some often-cited advantages over conventional techniques such as PID control: easy implementation, accommodation of natural language, the ability to cover a wider range of operating conditions, and others. One major obstacle that hinders their broader application is the lack of a systematic way to develop and modify their rules; as a result, the creation and modification of fuzzy rules often depends on trial and error or pure experimentation. One of the proposed approaches to address this issue is self-learning fuzzy logic controllers (SFLC) that use reinforcement learning techniques to learn the desirability of states and to adjust the consequent part of fuzzy control rules accordingly. Due to the different dynamics of the controlled processes, the performance of a self-learning fuzzy controller is highly contingent on its design. The design issue has not received sufficient attention. The issues related to the design of an SFLC for application to a chemical process are discussed, and its performance is compared with that of PID and self-tuning fuzzy logic controllers.
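A minimal sketch of the reinforcement-driven adjustment idea behind an SFLC: each fuzzy rule's crisp consequent is nudged in proportion to how strongly the rule fired and to a scalar reinforcement signal. The membership functions, rule base, and source of the reinforcement signal are toy assumptions, not the controller discussed in the paper.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function over [a, c] peaking at b."""
    return max(0.0, min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)))

# Toy rule base: antecedent fuzzy sets over an error signal, plus a tunable
# crisp consequent per rule (the part the reinforcement signal adjusts).
RULES = [
    {"mf": (-1.0, -0.5, 0.0), "consequent": -0.3},
    {"mf": (-0.5,  0.0, 0.5), "consequent":  0.0},
    {"mf": ( 0.0,  0.5, 1.0), "consequent":  0.3},
]

def control(error):
    """Weighted-average defuzzification; returns the action and rule strengths."""
    strengths = np.array([tri(error, *r["mf"]) for r in RULES])
    consequents = np.array([r["consequent"] for r in RULES])
    return float(strengths @ consequents / (strengths.sum() + 1e-9)), strengths

def reinforce(strengths, reinforcement, lr=0.05):
    """Nudge each rule's consequent in proportion to how strongly it fired and
    how good or bad the outcome was (reinforcement assumed in [-1, 1])."""
    for r, s in zip(RULES, strengths):
        r["consequent"] += lr * reinforcement * s
```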
Parodi, Stefano; Manneschi, Chiara; Verda, Damiano; Ferrari, Enrico; Muselli, Marco
2018-03-01
This study evaluates the performance of a set of machine learning techniques in predicting the prognosis of Hodgkin's lymphoma using clinical factors and gene expression data. Analysed samples from 130 Hodgkin's lymphoma patients included a small set of clinical variables and more than 54,000 gene features. Machine learning classifiers included three black-box algorithms (k-nearest neighbour, Artificial Neural Network, and Support Vector Machine) and two methods based on intelligible rules (Decision Tree and the innovative Logic Learning Machine method). Support Vector Machine clearly outperformed any of the other methods. Among the two rule-based algorithms, Logic Learning Machine performed better and identified a set of simple intelligible rules based on a combination of clinical variables and gene expressions. Decision Tree identified a non-coding gene (XIST) involved in the early phases of X chromosome inactivation that was overexpressed in females and in non-relapsed patients. XIST expression might be responsible for the better prognosis of female Hodgkin's lymphoma patients.
Advanced soft computing diagnosis method for tumour grading.
Papageorgiou, E I; Spyridonos, P P; Stylios, C D; Ravazoula, P; Groumpos, P P; Nikiforidis, G N
2006-01-01
To develop an advanced diagnostic method for urinary bladder tumour grading. A novel soft computing modelling methodology based on the augmentation of fuzzy cognitive maps (FCMs) with the unsupervised active Hebbian learning (AHL) algorithm is applied. One hundred and twenty-eight cases of urinary bladder cancer were retrieved from the archives of the Department of Histopathology, University Hospital of Patras, Greece. All tumours had been characterized according to the classical World Health Organization (WHO) grading system. To design the FCM model for tumour grading, three expert histopathologists defined the main histopathological features (concepts) and their impact on grade characterization. The resulting FCM model consisted of nine concepts. Eight concepts represented the main histopathological features for tumour grading. The ninth concept represented the tumour grade. To increase the classification ability of the FCM model, the AHL algorithm was applied to adjust the weights of the FCM. The proposed FCM grading model achieved a classification accuracy of 72.5%, 74.42% and 95.55% for tumours of grades I, II and III, respectively. An advanced computerized method to support tumour grade diagnosis decision was proposed and developed. The novelty of the method is based on employing the soft computing method of FCMs to represent specialized knowledge on histopathology and on augmenting FCMs ability using an unsupervised learning algorithm, the AHL. The proposed method performs with reasonably high accuracy compared to other existing methods and at the same time meets the physicians' requirements for transparency and explicability.
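A compact sketch of a fuzzy cognitive map iteration combined with a Hebbian-type adaptation of its nonzero weights, in the spirit of (but simpler than) the AHL algorithm; the nine concepts, the random initial weights, and the update constants are placeholders rather than the clinical grading model.

```python
import numpy as np

def sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + np.exp(-lam * x))

def fcm_step(A, W):
    """One FCM inference step: each concept is driven by the weighted sum of
    the others (plus its own previous value), squashed to [0, 1]."""
    return sigmoid(A + A @ W)

def hebbian_update(A, W, eta=0.05, decay=0.02):
    """Simplified Hebbian-type adaptation of the nonzero FCM weights
    (illustrative; not the exact AHL formulation)."""
    mask = (W != 0).astype(float)
    return W + mask * (eta * np.outer(A, A) - decay * W)

rng = np.random.default_rng(1)
n = 9                              # e.g. 8 histopathological concepts + grade
W = rng.uniform(-0.5, 0.5, (n, n)) * (rng.random((n, n)) < 0.3)
np.fill_diagonal(W, 0.0)
A = rng.random(n)                  # initial concept activations in [0, 1]
for _ in range(10):
    A = fcm_step(A, W)
    W = hebbian_update(A, W)
```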
Learning and transfer of category knowledge in an indirect categorization task.
Helie, Sebastien; Ashby, F Gregory
2012-05-01
Knowledge representations acquired during category learning experiments are 'tuned' to the task goal. A useful paradigm to study category representations is indirect category learning. In the present article, we propose a new indirect categorization task called the "same"-"different" categorization task. The same-different categorization task is a regular same-different task, but the question asked to the participants is about the stimulus category membership instead of stimulus identity. Experiment 1 explores the possibility of indirectly learning rule-based and information-integration category structures using the new paradigm. The results suggest that there is little learning about the category structures resulting from an indirect categorization task unless the categories can be separated by a one-dimensional rule. Experiment 2 explores whether a category representation learned indirectly can be used in a direct classification task (and vice versa). The results suggest that previous categorical knowledge acquired during a direct classification task can be expressed in the same-different categorization task only when the categories can be separated by a rule that is easily verbalized. Implications of these results for categorization research are discussed.
Group learning versus local learning: Which is prefer for public cooperation?
NASA Astrophysics Data System (ADS)
Yang, Shi-Han; Song, Qi-Qing
2018-01-01
We study the evolution of cooperation in public goods games on various graphs, focusing on the effects brought by different kinds of strategy donors. This highlights a basic feature of a public goods game: there is a remarkable difference between the players one interacts with and the players one imitates. A player can learn either from all the groups of which it is a member or from its local nearest neighbors, and the results show that group learning rules promote cooperation on many networks better than local learning rules do. Degree heterogeneity may be an effective mechanism for fostering cooperation in many cases; however, we find that heterogeneity does not necessarily imply a high frequency of cooperators in a population under group learning rules. It has been shown that cooperators hardly evolve whenever the interaction and replacement graphs do not coincide for evolutionary pairwise dilemmas, while for public goods games we find that breaking this symmetry is conducive to the survival of cooperators.
Learning and tuning fuzzy logic controllers through reinforcements
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.; Khedkar, Pratap
1992-01-01
This paper presents a new method for learning and tuning a fuzzy logic controller based on reinforcements from a dynamic system. In particular, our generalized approximate reasoning-based intelligent control (GARIC) architecture (1) learns and tunes a fuzzy logic controller even when only weak reinforcement, such as a binary failure signal, is available; (2) introduces a new conjunction operator in computing the rule strengths of fuzzy control rules; (3) introduces a new localized mean of maximum (LMOM) method in combining the conclusions of several firing control rules; and (4) learns to produce real-valued control actions. Learning is achieved by integrating fuzzy inference into a feedforward neural network, which can then adaptively improve performance by using gradient descent methods. We extend the AHC algorithm of Barto et al. (1983) to include the prior control knowledge of human operators. The GARIC architecture is applied to a cart-pole balancing system and demonstrates significant improvements in terms of the speed of learning and robustness to changes in the dynamic system's parameters over previous schemes for cart-pole balancing.
Learn about the NSPS for municipal solid waste landfills by reading the rule summary, rule history, Code of Federal Regulations text, fact sheets, background information documents, related rules and compliance information.
Teaching with Procedural Variation: A Chinese Way of Promoting Deep Understanding of Mathematics
ERIC Educational Resources Information Center
Lai, Mun Yee; Murray, Sara
2012-01-01
In mathematics education, there has been tension between deep learning and repetitive learning. Western educators often emphasize the need for students to construct a conceptual understanding of mathematical symbols and rules before they practise the rules (Li, 2006). On the other hand, Chinese learners tend to be oriented towards rote learning…
Analysis and Synthesis of Adaptive Neural Elements and Assemblies
1992-12-14
network, a learning rule (activity-dependent neuromodulation), which has been proposed as a cellular mechanism for classical conditioning, was demonstrated to support many...
ARI Basic Research Program FY 1999-2000
1999-06-01
visual cues, reinforcement, and instruction concerning abstract, general rules. In our future research, we plan to examine the learning of novel... Graduate student apprenticeship program (Consortium Research Fellows Program) with the Consortium of Metropolitan Washington Universities... do learn complex rules involving different levels of abstraction when given sufficient specific examples, but that they also benefit from explicit...
ERIC Educational Resources Information Center
Kundey, Shannon M. A.; Strandell, Brittany; Mathis, Heather; Rowan, James D.
2010-01-01
Hulse and Dorsky (1977, 1979) found that rats, like humans, learn sequences following a simple rule-based structure more quickly than those lacking a rule-based structure. Through two experiments, we explored whether two additional species--domesticated horses ("Equus callabus") and chickens ("Gallus domesticus")--would…
Quest for the Golden Rule: An Effective Social Skills Promotion and Bullying Prevention Program
ERIC Educational Resources Information Center
Rubin-Vaughan, Alice; Pepler, Debra; Brown, Steven; Craig, Wendy
2011-01-01
Everyday many students face bullying situations that they are ill equipped to manage. E-learning has recently emerged as a potentially effective tool in teaching children social skills, in addition to academic subject matter. Quest for the Golden Rule is one of the first bullying prevention e-learning programs available, designed by the…
Brain Regions Involved in the Learning and Application of Reward Rules in a Two-Deck Gambling Task
ERIC Educational Resources Information Center
Hartstra, E.; Oldenburg, J. F. E.; Van Leijenhorst, L.; Rombouts, S. A. R. B.; Crone, E. A.
2010-01-01
Decision-making involves the ability to choose between competing actions that are associated with uncertain benefits and penalties. The Iowa Gambling Task (IGT), which mimics real-life decision-making, involves learning a reward-punishment rule over multiple trials. Patients with damage to ventromedial prefrontal cortex (VMPFC) show deficits…
Supreme Court's Patent Ruling Could Spell Trouble For Blackboard and Others
ERIC Educational Resources Information Center
Carnevale, Dan
2007-01-01
Many college officials have criticized Blackboard Inc. for its patent on its course-management system, arguing that the patent is overly broad and seems to cover the entire concept of online learning. Critics of Blackboard and other companies that have patents on learning technology are welcoming a recent Supreme Court ruling that they hope may…
ERIC Educational Resources Information Center
Hinze, Scott R.; Bunting, Michael F; Pellegrino, James W.
2009-01-01
The involvement of working memory capacity (WMC) in rule-based cognitive skill acquisition is well-established, but the duration of its involvement and its role in learning strategy selection are less certain. Participants (N=610) learned four logic rules, their corresponding symbols, or logic gates, and the appropriate input-output combinations…
The effects of cumulative practice on mathematics problem solving.
Mayfield, Kristin H; Chase, Philip N
2002-01-01
This study compared three different methods of teaching five basic algebra rules to college students. All methods used the same procedures to teach the rules and included four 50-question review sessions interspersed among the training of the individual rules. The differences among methods involved the kinds of practice provided during the four review sessions. Participants who received cumulative practice answered 50 questions covering a mix of the rules learned prior to each review session. Participants who received a simple review answered 50 questions on one previously trained rule. Participants who received extra practice answered 50 extra questions on the rule they had just learned. Tests administered after each review included new questions for applying each rule (application items) and problems that required novel combinations of the rules (problem-solving items). On the final test, the cumulative group outscored the other groups on application and problem-solving items. In addition, the cumulative group solved the problem-solving items significantly faster than the other groups. These results suggest that cumulative practice of component skills is an effective method of training problem solving.
Reimagining the learned intermediary rule for the new pharmaceutical marketplace.
Hall, Timothy S
2004-01-01
For the past decade, the learned intermediary rule--the rule of tort law that provides that drug manufacturers may satisfy their duty to warn of a drug's dangers by warning the prescribing physician rather than the end user of the drug--has been the subject of vigorous academic debate. That debate has been largely moot, however, as the courts have proven reluctant to make significant inroads on the protection offered by the Rule to drug manufacturers. This Article proposes a new approach to the Rule. Part I discusses the history and overwhelming adoption of the Rule pursuant to the Restatement (Second) of Torts. Part II argues that changes in the health care delivery system have resulted in a legal system that introduces market distortions by effectively immunizing the pharmaceutical industry from the legal and social consequences of its own actions. Part III then sets forth a reconceptualization of the Rule, which preserves the Rule's benefits with respect to the drug industry, the health care system, and the goals of tort law, while also strengthening the protection the tort system offers to individuals injured by prescription drugs.
Developmental metaplasticity in neural circuit codes of firing and structure.
Baram, Yoram
2017-01-01
Firing-rate dynamics have been hypothesized to mediate inter-neural information transfer in the brain. While the Hebbian paradigm, relating learning and memory to firing activity, has put synaptic efficacy variation at the center of cortical plasticity, we suggest that the external expression of plasticity by changes in the firing-rate dynamics represents a more general notion of plasticity. Hypothesizing that time constants of plasticity and firing dynamics increase with age, and employing the filtering property of the neuron, we obtain the elementary code of global attractors associated with the firing-rate dynamics in each developmental stage. We define a neural circuit connectivity code as an indivisible set of circuit structures generated by membrane and synapse activation and silencing. Synchronous firing patterns under parameter uniformity, and asynchronous circuit firing are shown to be driven, respectively, by membrane and synapse silencing and reactivation, and maintained by the neuronal filtering property. Analytic, graphical and simulation representations of the discrete iteration maps and of the global attractor codes of neural firing rate are found to be consistent with previous empirical neurobiological findings, which have lacked, however, a specific correspondence between firing modes, time constants, circuit connectivity and cortical developmental stages. Copyright © 2016 Elsevier Ltd. All rights reserved.
Synaptic plasticity in a cerebellum-like structure depends on temporal order
NASA Astrophysics Data System (ADS)
Bell, Curtis C.; Han, Victor Z.; Sugawara, Yoshiko; Grant, Kirsty
1997-05-01
Cerebellum-like structures in fish appear to act as adaptive sensory processors, in which learned predictions about sensory input are generated and subtracted from actual sensory input, allowing unpredicted inputs to stand out [1-3]. Pairing sensory input with centrally originating predictive signals, such as corollary discharge signals linked to motor commands, results in neural responses to the predictive signals alone that are 'negative images' of the previously paired sensory responses. Adding these 'negative images' to actual sensory inputs minimizes the neural response to predictable sensory features. At the cellular level, sensory input is relayed to the basal region of Purkinje-like cells, whereas predictive signals are relayed by parallel fibres to the apical dendrites of the same cells [4]. The generation of negative images could be explained by plasticity at parallel fibre synapses [5-7]. We show here that such plasticity exists in the electrosensory lobe of mormyrid electric fish and that it has the necessary properties for such a model: it is reversible, anti-Hebbian (excitatory postsynaptic potentials (EPSPs) are depressed after pairing with a postsynaptic spike) and tightly dependent on the sequence of pre- and postsynaptic events, with depression occurring only if the postsynaptic spike follows EPSP onset within 60 ms.
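A toy version of an anti-Hebbian, temporally asymmetric rule with the qualitative shape described above: depression when the postsynaptic spike follows EPSP onset within about 60 ms, and a weak recovery term otherwise (the recovery term is an assumption added only to make the rule reversible). The window and amplitudes are illustrative.

```python
def anti_hebbian_update(w, dt_ms, window_ms=60.0,
                        a_depress=0.02, a_recover=0.005):
    """Toy anti-Hebbian timing rule: dt_ms is the delay from EPSP onset to the
    postsynaptic spike. Depression occurs only when the spike follows EPSP
    onset within the window; otherwise the weight drifts weakly back toward
    baseline (an added assumption, for reversibility). Amplitudes are
    illustrative only."""
    if 0.0 < dt_ms <= window_ms:
        w -= a_depress * w                  # associative depression
    else:
        w += a_recover * (1.0 - w)          # weak nonassociative recovery
    return min(max(w, 0.0), 1.0)
```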
A model for evolution of overlapping community networks
NASA Astrophysics Data System (ADS)
Karan, Rituraj; Biswal, Bibhu
2017-05-01
A model is proposed for the evolution of network topology in social networks with overlapping community structure. Starting from an initial community structure that is defined in terms of group affiliations, the model postulates that the subsequent growth and loss of connections is similar to Hebbian learning and unlearning in the brain and is governed by two dominant factors: the strength and frequency of interaction between the members, and the degree of overlap between different communities. The temporal evolution from an initial community structure to the current network topology can be described based on these two parameters. It is possible to quantify the growth that has occurred so far and predict the final stationary state to which the network is likely to evolve. Applications in epidemiology or the spread of an email virus in a computer network, as well as finding specific target nodes to control it, are envisaged. When collecting and analyzing large-scale time-resolved data on social groups and communities, one faces a most basic question: how do communities evolve in time? This work aims to address this issue by developing a mathematical model for the evolution of community networks and studying it through computer simulation.
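A toy sketch of the postulated Hebbian-like grow/forget dynamic: tie strengths grow when a pair interacts, more strongly when their communities overlap, and decay otherwise. The specific functional forms, rates, and data structures are assumptions, not the paper's equations.

```python
import random

def evolve(weights, interact_prob, overlap, steps=1000, gain=0.05, loss=0.01):
    """Toy evolution of tie strengths in an overlapping-community network.

    weights[(i, j)]       : current tie strength in [0, 1]
    interact_prob[(i, j)] : chance the pair interacts per step (assumed given)
    overlap[(i, j)]       : fraction of communities the pair shares, in [0, 1]

    Ties strengthen when members interact (more so when their communities
    overlap) and otherwise decay -- a Hebbian-like grow/forget dynamic."""
    for _ in range(steps):
        for pair in list(weights):
            if random.random() < interact_prob[pair]:
                weights[pair] += gain * (1.0 + overlap[pair]) * (1.0 - weights[pair])
            else:
                weights[pair] -= loss * weights[pair]
    return weights
```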
High-Degree Neurons Feed Cortical Computations
Timme, Nicholas M.; Ito, Shinya; Shimono, Masanori; Yeh, Fang-Chin; Litke, Alan M.; Beggs, John M.
2016-01-01
Recent work has shown that functional connectivity among cortical neurons is highly varied, with a small percentage of neurons having many more connections than others. Also, recent theoretical developments now make it possible to quantify how neurons modify information from the connections they receive. Therefore, it is now possible to investigate how information modification, or computation, depends on the number of connections a neuron receives (in-degree) or sends out (out-degree). To do this, we recorded the simultaneous spiking activity of hundreds of neurons in cortico-hippocampal slice cultures using a high-density 512-electrode array. This preparation and recording method combination produced large numbers of neurons recorded at temporal and spatial resolutions that are not currently available in any in vivo recording system. We utilized transfer entropy (a well-established method for detecting linear and nonlinear interactions in time series) and the partial information decomposition (a powerful, recently developed tool for dissecting multivariate information processing into distinct parts) to quantify computation between neurons where information flows converged. We found that computations did not occur equally in all neurons throughout the networks. Surprisingly, neurons that computed large amounts of information tended to receive connections from high out-degree neurons. However, the in-degree of a neuron was not related to the amount of information it computed. To gain insight into these findings, we developed a simple feedforward network model. We found that a degree-modified Hebbian wiring rule best reproduced the pattern of computation and degree correlation results seen in the real data. Interestingly, this rule also maximized signal propagation in the presence of network-wide correlations, suggesting a mechanism by which cortex could deal with common random background input. These are the first results to show that the extent to which a neuron modifies incoming information streams depends on its topological location in the surrounding functional network. PMID:27159884
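A loose sketch of what a 'degree-modified Hebbian' wiring step could look like: candidate edges are drawn with probability proportional to the co-activity of a pair times a bias toward high out-degree presynaptic neurons. This is an interpretation for illustration only; the exact rule used in the paper's feedforward model may differ.

```python
import numpy as np

def degree_modified_hebbian_wiring(rates, out_degree, n_new_edges=50,
                                   rng=np.random.default_rng(0)):
    """Draw new (pre, post) edges with probability proportional to
    (pairwise co-activity) x (1 + presynaptic out-degree).

    rates: (N,) mean firing rates; out_degree: (N,) current out-degrees."""
    N = rates.shape[0]
    coactivity = np.outer(rates, rates)            # Hebbian term, shape (N, N)
    bias = np.outer(out_degree + 1.0, np.ones(N))  # favour high out-degree sources
    score = coactivity * bias
    np.fill_diagonal(score, 0.0)                   # no self-connections
    p = score.ravel() / score.sum()
    chosen = rng.choice(N * N, size=n_new_edges, replace=False, p=p)
    return np.column_stack(np.unravel_index(chosen, (N, N)))  # (pre, post) pairs
```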
Machine learning with quantum relative entropy
NASA Astrophysics Data System (ADS)
Tsuda, Koji
2009-12-01
Density matrices are a central tool in quantum physics, but they are also used in machine learning. A positive definite matrix called the kernel matrix is used to represent the similarities between examples. Positive definiteness assures that the examples are embedded in a Euclidean space. When a positive definite matrix is learned from data, one has to design an update rule that maintains the positive definiteness. Our update rule, called the matrix exponentiated gradient update, is motivated by the quantum relative entropy. Notably, the relative entropy is an instance of Bregman divergences, which are asymmetric distance measures specifying theoretical properties of machine learning algorithms. Using the calculus commonly used in quantum physics, we prove an upper bound on the generalization error of online learning.
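The matrix exponentiated gradient update described above can be written compactly; a minimal NumPy sketch follows, assuming the caller supplies a symmetric gradient G of the loss at the current parameter W (a symmetric positive definite, unit-trace matrix).

```python
import numpy as np

def _sym_func(A, f):
    """Apply a scalar function to a symmetric matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(A)
    return (vecs * f(vals)) @ vecs.T

def matrix_exponentiated_gradient(W, G, eta=0.1):
    """One matrix exponentiated gradient step, W <- exp(log W - eta * G) / Z,
    which keeps W symmetric positive definite with unit trace.

    W: current density-matrix-like parameter (SPD, trace 1).
    G: symmetric gradient of the loss at W (assumed supplied by the caller)."""
    M = _sym_func(_sym_func(W, np.log) - eta * G, np.exp)
    return M / np.trace(M)
```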
Learning and inference in a nonequilibrium Ising model with hidden nodes.
Dunn, Benjamin; Roudi, Yasser
2013-02-01
We study inference and reconstruction of couplings in a partially observed kinetic Ising model. With hidden spins, calculating the likelihood of a sequence of observed spin configurations requires performing a trace over the configurations of the hidden ones. This, as we show, can be represented as a path integral. Using this representation, we demonstrate that systematic approximate inference and learning rules can be derived using dynamical mean-field theory. Although naive mean-field theory leads to an unstable learning rule, taking into account Gaussian corrections allows learning the couplings involving hidden nodes. It also improves learning of the couplings between the observed nodes compared to when hidden nodes are ignored.
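For the fully observed special case (no hidden spins), the exact likelihood gradient for the couplings of a kinetic Ising model has a simple form; a NumPy sketch is given below. The hidden-node case treated in the paper requires the path-integral / mean-field machinery and is not reproduced here.

```python
import numpy as np

def learning_step(J, S, lr=0.05):
    """One gradient step for couplings J of a fully observed kinetic Ising model.

    S: (T, N) array of +/-1 spin configurations over time.
    Log-likelihood gradient:
        dJ_ij = <s_i(t+1) s_j(t)>_data - <tanh(h_i(t)) s_j(t)>,
    with fields h_i(t) = sum_j J_ij s_j(t)."""
    S_prev, S_next = S[:-1], S[1:]
    steps = S_prev.shape[0]
    H = S_prev @ J.T                           # fields h_i(t), shape (T-1, N)
    data_corr = S_next.T @ S_prev / steps      # <s_i(t+1) s_j(t)>
    model_corr = np.tanh(H).T @ S_prev / steps # <tanh(h_i(t)) s_j(t)>
    return J + lr * (data_corr - model_corr)
```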
Matsuda, Eiko; Hubert, Julien; Ikegami, Takashi
2014-01-01
Vicarious trial-and-error (VTE) is a behavior observed in rat experiments that seems to suggest self-conflict. This behavior is seen mainly when the rats are uncertain about making a decision. The presence of VTE is regarded as an indicator of a deliberative decision-making process, that is, searching, predicting, and evaluating outcomes. This process is slower than automated decision-making processes, such as reflex or habituation, but it allows for flexible and ongoing control of behavior. In this study, we propose for the first time a robotic model of VTE to see if VTE can emerge just from a body-environment interaction and to show the underlying mechanism responsible for the observation of VTE and the advantages provided by it. We tried several robots with different parameters, and we found that they showed three different types of VTE: high numbers of VTE at the beginning of learning that decreased afterward (a VTE pattern similar to that seen in experiments with rats), low numbers during the whole learning period, or high numbers all the time. Therefore, we were able to reproduce the phenomenon of VTE in a model robot using only a simple dynamical neural network with Hebbian learning, which suggests that VTE is an emergent property of a plastic and embodied neural network. From a comparison of the three types of VTE, we demonstrated that 1) VTE is associated with chaotic activity of neurons in our model and 2) VTE-showing robots were robust to environmental perturbations. We suggest that the instability of neuronal activity found in VTE allows ongoing learning to rebuild its strategy continuously, which creates robust behavior. Based on these results, we suggest that VTE is caused by a similar mechanism in biology and leads to robust decision making in an analogous way.
Compensatory processing during rule-based category learning in older adults.
Bharani, Krishna L; Paller, Ken A; Reber, Paul J; Weintraub, Sandra; Yanar, Jorge; Morrison, Robert G
2016-01-01
Healthy older adults typically perform worse than younger adults at rule-based category learning, but better than patients with Alzheimer's or Parkinson's disease. To further investigate aging's effect on rule-based category learning, we monitored event-related potentials (ERPs) while younger and neuropsychologically typical older adults performed a visual category-learning task with a rule-based category structure and trial-by-trial feedback. Using these procedures, we previously identified ERPs sensitive to categorization strategy and accuracy in young participants. In addition, previous studies have demonstrated the importance of neural processing in the prefrontal cortex and the medial temporal lobe for this task. In this study, older adults showed lower accuracy and longer response times than younger adults, but there were two distinct subgroups of older adults. One subgroup showed near-chance performance throughout the procedure, never categorizing accurately. The other subgroup reached asymptotic accuracy that was equivalent to that in younger adults, although they categorized more slowly. These two subgroups were further distinguished via ERPs. Consistent with the compensation theory of cognitive aging, older adults who successfully learned showed larger frontal ERPs when compared with younger adults. Recruitment of prefrontal resources may have improved performance while slowing response times. Additionally, correlations of feedback-locked P300 amplitudes with category-learning accuracy differentiated successful younger and older adults. Overall, the results suggest that the ability to adapt one's behavior in response to feedback during learning varies across older individuals, and that the failure of some to adapt their behavior may reflect inadequate engagement of prefrontal cortex.
An agent-based model of dialect evolution in killer whales.
Filatova, Olga A; Miller, Patrick J O
2015-05-21
The killer whale is one of the few animal species with vocal dialects that arise from socially learned group-specific call repertoires. We describe a new agent-based model of killer whale populations and test a set of vocal-learning rules to assess which mechanisms may lead to the formation of dialect groupings observed in the wild. We tested a null model with genetic transmission and no learning, and ten models with learning rules that differ by template source (mother or matriline), variation type (random errors or innovations) and type of call change (no divergence from kin vs. divergence from kin). The null model without vocal learning did not produce the pattern of group-specific call repertoires we observe in nature. Learning from either mother alone or the entire matriline with calls changing by random errors produced a graded distribution of the call phenotype, without the discrete call types observed in nature. Introducing occasional innovation or random error proportional to matriline variance yielded more or less discrete and stable call types. A tendency to diverge from the calls of related matrilines provided fast divergence of loose call clusters. A pattern resembling the dialect diversity observed in the wild arose only when rules were applied in combinations and similar outputs could arise from different learning rules and their combinations. Our results emphasize the lack of information on quantitative features of wild killer whale dialects and reveal a set of testable questions that can draw insights into the cultural evolution of killer whale dialects.
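As a rough illustration of the kind of learning rules the abstract compares, here is a deliberately simplified sketch, not the paper's model: matrilines carry a one-dimensional "call phenotype," each generation copies a template with error, occasionally innovates, and is repelled from the most similar other matriline. The 1-D phenotype, the nearest-neighbour repulsion (standing in for the paper's divergence-from-kin rule without modeling relatedness), and all parameter values are assumptions made here.

```python
# Minimal sketch (not the paper's model): matrilines hold a 1-D call value; each
# generation applies copying error, rare innovation, and repulsion from the most
# similar other matriline. Clustered final values mimic discrete "dialect" groups.
# All structural choices and parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_matrilines, generations = 12, 200
calls = rng.normal(0.0, 1.0, n_matrilines)            # one call value per matriline
copy_sd, innovate_p, innovate_sd, repel = 0.05, 0.02, 1.0, 0.1

for _ in range(generations):
    new_calls = calls.copy()
    for i in range(n_matrilines):
        call = calls[i] + rng.normal(0.0, copy_sd)    # copy own template with error
        if rng.random() < innovate_p:
            call += rng.normal(0.0, innovate_sd)      # rare innovation
        others = np.delete(calls, i)                  # divergence: shift away from the
        nearest = others[np.argmin(np.abs(others - call))]   # most similar other matriline
        call += repel * np.sign(call - nearest)
        new_calls[i] = call
    calls = new_calls

print(np.round(np.sort(calls), 2))                    # gaps between values suggest clusters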
Video Self-Modeling to Teach Classroom Rules to Two Students with Asperger's
ERIC Educational Resources Information Center
Lang, Russell; Shogren, Karrie A.; Machalicek, Wendy; Rispoli, Mandy; O'Reilly, Mark; Baker, Sonia; Regester, April
2009-01-01
Classroom rules are an integral part of classroom management. Children with Asperger's may require systematic instruction to learn classroom rules, but may be placed in classrooms in which the rules are not explicitly taught. A multiple baseline design across students with probes for maintenance after the intervention ceased was used to evaluate…
A Simple Computer-Aided Three-Dimensional Molecular Modeling for the Octant Rule
ERIC Educational Resources Information Center
Kang, Yinan; Kang, Fu-An
2011-01-01
The Moffitt-Woodward-Moscowitz-Klyne-Djerassi octant rule is one of the most successful empirical rules in organic chemistry. However, the lack of a simple, effective modeling method for the octant rule over the past 50 years has made it persistently difficult for researchers, teachers, and students, particularly younger generations, to learn and…
Rules, Technique, and Practical Knowledge: A Wittgensteinian Exploration of Vocational Learning
ERIC Educational Resources Information Center
Winch, Christopher
2006-01-01
In this essay, Christopher Winch explores the relevance of Ludwig Wittgenstein's account of rule-following to vocational education with particular reference to the often-made claim that any account of an activity in terms of rule-following implies rigidity and inflexibility. He argues that most rule-following is only successful when it involves a…