Sample records for neuronal network models

  1. Computational properties of networks of synchronous groups of spiking neurons.

    PubMed

    Dayhoff, Judith E

    2007-09-01

    We demonstrate a model in which synchronously firing ensembles of neurons are networked to produce computational results. Each ensemble is a group of biological integrate-and-fire spiking neurons, with probabilistic interconnections between groups. An analogy is drawn in which each individual processing unit of an artificial neural network corresponds to a neuronal group in a biological model. The activation value of a unit in the artificial neural network corresponds to the fraction of active neurons, synchronously firing, in a biological neuronal group. Weights of the artificial neural network correspond to the product of the interconnection density between groups, the group size of the presynaptic group, and the postsynaptic potential heights in the synchronous group model. All three of these parameters can modulate connection strengths between neuronal groups in the synchronous group models. We give an example of nonlinear classification (XOR) and a function approximation example in which the capability of the artificial neural network can be captured by a neural network model with biological integrate-and-fire neurons configured as a network of synchronously firing ensembles of such neurons. We point out that the general function approximation capability proven for feedforward artificial neural networks appears to be approximated by networks of neuronal groups that fire in synchrony, where the groups comprise integrate-and-fire neurons. We discuss the advantages of this type of model for biological systems, its possible learning mechanisms, and the associated timing relationships.
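
    A minimal sketch of the correspondence drawn above (a toy illustration; names and values are ours, not the paper's): the effective ANN weight between two groups is the product of interconnection density, presynaptic group size, and postsynaptic potential height, and a unit's activation is the fraction of synchronously firing neurons.

      # Sketch of the group-to-unit mapping (illustrative values).
      def effective_weight(density, group_size, psp_height):
          # ANN weight ~ interconnection density x presynaptic group size x PSP height
          return density * group_size * psp_height

      def group_activation(n_firing, group_size):
          # ANN activation ~ fraction of neurons firing synchronously in the group
          return n_firing / group_size

      w = effective_weight(density=0.1, group_size=100, psp_height=0.5)  # 5.0
      a = group_activation(n_firing=37, group_size=100)                  # 0.37
      drive = w * a   # input this group delivers to a postsynaptic group, ANN-style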

  2. Simplicity and efficiency of integrate-and-fire neuron models.

    PubMed

    Plesser, Hans E; Diesmann, Markus

    2009-02-01

    Lovelace and Cios (2008) recently proposed a very simple spiking neuron (VSSN) model for simulations of large neuronal networks as an efficient replacement for the integrate-and-fire neuron model. We argue that the VSSN model falls behind key advances in neuronal network modeling over the past 20 years, in particular, techniques that permit simulators to compute the state of the neuron without repeated summation over the history of input spikes and to integrate the subthreshold dynamics exactly. State-of-the-art solvers for networks of integrate-and-fire model neurons are substantially more efficient than the VSSN simulator and allow routine simulations of networks of some 10^5 neurons and 10^9 connections on moderate computer clusters.
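
    The "exact integration" advance the authors invoke can be sketched as follows: because the subthreshold LIF dynamics are linear, the state is propagated between events with a closed-form exponential, with no summation over the input-spike history (a generic sketch with illustrative constants, not the authors' solver).

      import numpy as np

      # Leaky integrate-and-fire: tau dV/dt = -V + R*I. For piecewise-constant
      # input the one-step solution is exact, so no spike-history sums are needed.
      tau, R, V_th, V_reset, dt = 10.0, 1.0, 15.0, 0.0, 0.1   # ms, MOhm, mV
      P = np.exp(-dt / tau)          # exact one-step propagator, computed once

      V, spikes, I = 0.0, [], 20.0   # constant drive (nA)
      for step in range(100000):
          V = P * V + (1.0 - P) * R * I     # exact subthreshold update
          if V >= V_th:
              spikes.append(step * dt)
              V = V_reset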

  3. Shaping Neuronal Network Activity by Presynaptic Mechanisms

    PubMed Central

    Ashery, Uri

    2015-01-01

    Neuronal microcircuits generate oscillatory activity, which has been linked to basic functions such as sleep, learning and sensorimotor gating. Although synaptic release processes are well known for their ability to shape the interaction between neurons in microcircuits, most computational models do not simulate the synaptic transmission process directly and hence cannot explain how changes in synaptic parameters alter neuronal network activity. In this paper, we present a novel neuronal network model that incorporates presynaptic release mechanisms, such as vesicle pool dynamics and calcium-dependent release probability, to model the spontaneous activity of neuronal networks. The model, which is based on modified leaky integrate-and-fire neurons, generates spontaneous network activity patterns, which are similar to experimental data and robust under changes in the model's primary gain parameters such as excitatory postsynaptic potential and connectivity ratio. Furthermore, it reliably recreates experimental findings and provides mechanistic explanations for data obtained from microelectrode array recordings, such as network burst termination and the effects of pharmacological and genetic manipulations. The model demonstrates how elevated asynchronous release, but not spontaneous release, synchronizes neuronal network activity and reveals that asynchronous release enhances utilization of the recycling vesicle pool to induce the network effect. The model further predicts a positive correlation between vesicle priming at the single-neuron level and burst frequency at the network level; this prediction is supported by experimental findings. Thus, the model is utilized to reveal how synaptic release processes at the neuronal level govern activity patterns and synchronization at the network level. PMID:26372048
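
    A minimal sketch of the presynaptic ingredient described above, in the spirit of Tsodyks-Markram dynamics: a releasable vesicle pool is depleted by calcium-dependent release at each presynaptic spike and recovers over time (all names and constants here are illustrative assumptions, not the paper's parameters).

      import numpy as np

      tau_rec, tau_ca, p_base, k_ca, dt = 800.0, 100.0, 0.2, 0.5, 1.0  # ms units
      R, Ca = 1.0, 0.0               # releasable pool fraction, residual calcium

      for t in np.arange(0.0, 1000.0, dt):
          Ca *= np.exp(-dt / tau_ca)           # residual calcium decays
          if (t % 50.0) < dt:                  # presynaptic spike every 50 ms
              Ca += 0.1
              p_rel = min(1.0, p_base + k_ca * Ca)   # calcium-dependent release
              released = p_rel * R
              R -= released                    # pool depletion
              # 'released' would scale the EPSP delivered to the LIF neuron
          R += (1.0 - R) * dt / tau_rec        # recovery from the recycling pool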

  4. Hybrid Scheme for Modeling Local Field Potentials from Point-Neuron Networks.

    PubMed

    Hagen, Espen; Dahmen, David; Stavrinou, Maria L; Lindén, Henrik; Tetzlaff, Tom; van Albada, Sacha J; Grün, Sonja; Diesmann, Markus; Einevoll, Gaute T

    2016-12-01

    With rapidly advancing multi-electrode recording technology, the local field potential (LFP) has again become a popular measure of neuronal activity in both research and clinical applications. Proper understanding of the LFP requires detailed mathematical modeling incorporating the anatomical and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network. Here we propose a hybrid modeling scheme combining efficient point-neuron network models with biophysical principles underlying LFP generation by real neurons. The LFP predictions rely on populations of network-equivalent multicompartment neuron models with layer-specific synaptic connectivity, can be used with an arbitrary number of point-neuron network populations, and allow for a full separation of simulated network dynamics and LFPs. We apply the scheme to a full-scale cortical network model for a ∼1 mm² patch of primary visual cortex, predict laminar LFPs for different network states, assess the relative LFP contribution from different laminar populations, and investigate effects of input correlations and neuron density on the LFP. The generic nature of the hybrid scheme and its public implementation in hybridLFPy form the basis for LFP predictions from other and larger point-neuron network models, as well as extensions of the current application with additional biological detail.
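
    The separation at the heart of the hybrid scheme can be sketched generically: the point-neuron simulation supplies population spike counts, and the LFP is predicted afterwards by convolving each population's activity with a spatial kernel obtained separately from multicompartment models. The kernels below are placeholders; real ones would come from a tool such as hybridLFPy.

      import numpy as np

      # Stage 1 (done elsewhere): point-network yields per-population spike counts.
      rate_exc = np.random.poisson(5.0, size=1000).astype(float)   # per time bin
      rate_inh = np.random.poisson(8.0, size=1000).astype(float)

      # Stage 2: per-population LFP kernels at one recording channel (stand-ins
      # for kernels computed from layered multicompartment populations).
      t = np.arange(50.0)
      kernel_exc = -0.2 * np.exp(-t / 5.0)
      kernel_inh = 0.1 * np.exp(-t / 10.0)

      # The LFP prediction is fully decoupled from the network simulation itself.
      lfp = (np.convolve(rate_exc, kernel_exc)[:1000]
             + np.convolve(rate_inh, kernel_inh)[:1000])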

  5. Results on a binding neuron model and their implications for modified hourglass model for neuronal network.

    PubMed

    Arunachalam, Viswanathan; Akhavan-Tabatabaei, Raha; Lopez, Cristina

    2013-01-01

    Classical single-neuron models such as the Hodgkin-Huxley point neuron or the leaky integrate-and-fire neuron assume that the influence of postsynaptic potentials lasts until the neuron fires. In a refreshing departure, Vidybida (2008) proposed models of binding neurons in which the trace of an input is remembered only for a finite fixed period of time, after which it is forgotten. The binding neurons conform to the behaviour of real neurons and are applicable in constructing fast recurrent networks for computer modeling. This paper explicitly develops several useful results for a binding neuron, such as the firing time distribution and other statistical characteristics. We also discuss the applicability of the developed results in constructing a modified hourglass network model in which there are interconnected neurons with excitatory as well as inhibitory inputs. Limited simulation results of the hourglass network are presented.
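
    The binding-neuron rule is simple enough to state in a few lines: each input impulse is remembered for a fixed lifetime tau and then forgotten, and the neuron fires when the number of still-remembered impulses reaches a threshold (a minimal sketch with illustrative parameters).

      from collections import deque

      def binding_neuron(input_times, tau=5.0, threshold=3):
          """Fire when `threshold` impulses have arrived within the last `tau` ms."""
          memory, output = deque(), []
          for t in input_times:
              memory.append(t)
              while memory and memory[0] <= t - tau:   # forget expired impulses
                  memory.popleft()
              if len(memory) >= threshold:
                  output.append(t)                     # fire ...
                  memory.clear()                       # ... and reset the trace
          return output

      print(binding_neuron([0.0, 1.0, 2.0, 9.0, 9.5, 9.9, 30.0]))  # [2.0, 9.9]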

  6. Dynamical estimation of neuron and network properties III: network analysis using neuron spike times.

    PubMed

    Knowlton, Chris; Meliza, C Daniel; Margoliash, Daniel; Abarbanel, Henry D I

    2014-06-01

    Estimating the behavior of a network of neurons requires accurate models of the individual neurons along with accurate characterizations of the connections among them. Whereas for a single cell, measurements of the intracellular voltage are technically feasible and sufficient to characterize a useful model of its behavior, making sufficient numbers of simultaneous intracellular measurements to characterize even small networks is infeasible. This paper builds on prior work on single neurons to explore whether knowledge of the time of spiking of neurons in a network, once the nodes (neurons) have been characterized biophysically, can provide enough information to usefully constrain the functional architecture of the network: the existence of synaptic links among neurons and their strength. Using standardized voltage and synaptic gating variable waveforms associated with a spike, we demonstrate that the functional architecture of a small network of model neurons can be established.

  7. Computational model of electrically coupled, intrinsically distinct pacemaker neurons.

    PubMed

    Soto-Treviño, Cristina; Rabbah, Pascale; Marder, Eve; Nadim, Farzan

    2005-07-01

    Electrical coupling between neurons with similar properties is often studied. Electrical coupling between neurons with widely different intrinsic properties also occurs, but its role is less well understood. Inspired by the pacemaker group of the crustacean pyloric network, we developed a multicompartment, conductance-based model of a small network of intrinsically distinct, electrically coupled neurons. In the pyloric network, a small intrinsically bursting neuron, through gap junctions, drives 2 larger, tonically spiking neurons to reliably burst in-phase with it. Each model neuron has 2 compartments, one responsible for spike generation and the other for producing a slow, large-amplitude oscillation. We illustrate how these compartments interact and determine the dynamics of the model neurons. Our model captures the dynamic oscillation range measured from the isolated and coupled biological neurons. At the network level, we explore the range of coupling strengths for which synchronous bursting oscillations are possible. The spatial segregation of ionic currents significantly enhances the ability of the 2 neurons to burst synchronously, and the oscillation range of the model pacemaker network depends not only on the strength of the electrical synapse but also on the identity of the neuron receiving inputs. We also compare the activity of the electrically coupled, distinct neurons with that of a network of coupled identical bursting neurons. For small to moderate coupling strengths, the network of identical elements, when receiving asymmetrical inputs, can have a smaller dynamic range of oscillation than that of its constituent neurons in isolation.
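
    The coupling ingredient reduces to a single term: the gap-junction current into each cell is proportional to the voltage difference across the junction, so two intrinsically distinct model neurons exchange equal and opposite currents (a schematic sketch, not the published multicompartment model).

      # Gap-junction coupling between two neurons with distinct intrinsic
      # dynamics f1 and f2; g_c sets the electrical coupling strength.
      def step(V1, V2, f1, f2, g_c, dt):
          I_gap = g_c * (V2 - V1)        # current into neuron 1 from neuron 2
          V1 += dt * (f1(V1) + I_gap)
          V2 += dt * (f2(V2) - I_gap)    # equal and opposite into neuron 2
          return V1, V2

      # Illustrative intrinsic dynamics: a depolarizing cell and a leaky cell.
      V1, V2 = -60.0, -40.0
      for _ in range(1000):
          V1, V2 = step(V1, V2, lambda V: 0.5, lambda V: -0.1 * (V + 50.0),
                        g_c=0.05, dt=0.1)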

  8. Computational exploration of neuron and neural network models in neurobiology.

    PubMed

    Prinz, Astrid A

    2007-01-01

    The electrical activity of individual neurons and neuronal networks is shaped by the complex interplay of a large number of non-linear processes, including the voltage-dependent gating of ion channels and the activation of synaptic receptors. These complex dynamics make it difficult to understand how individual neuron or network parameters-such as the number of ion channels of a given type in a neuron's membrane or the strength of a particular synapse-influence neural system function. Systematic exploration of cellular or network model parameter spaces by computational brute force can overcome this difficulty and generate comprehensive data sets that contain information about neuron or network behavior for many different combinations of parameters. Searching such data sets for parameter combinations that produce functional neuron or network output provides insights into how narrowly different neural system parameters have to be tuned to produce a desired behavior. This chapter describes the construction and analysis of databases of neuron or neuronal network models and describes some of the advantages and downsides of such exploration methods.
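
    The brute-force exploration described here amounts to simulating every point on a parameter grid and storing a classification of the resulting activity in a searchable database; a minimal sketch (the simulate and classify bodies are placeholders for a real model).

      import itertools
      import numpy as np

      def simulate(g_na, g_k, g_syn):
          return np.random.randn(1000)     # placeholder for a full simulation

      def classify(trace):
          return "tonic" if trace.std() > 1.0 else "silent"   # placeholder labels

      grid = {"g_na": np.linspace(0.0, 200.0, 5),
              "g_k": np.linspace(0.0, 100.0, 5),
              "g_syn": np.linspace(0.0, 10.0, 5)}

      database = []                        # the model database
      for values in itertools.product(*grid.values()):
          params = dict(zip(grid.keys(), values))
          database.append((params, classify(simulate(**params))))

      # Query: which parameter combinations produce the target behavior?
      functional = [p for p, label in database if label == "tonic"]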

  9. Dynamical analysis of Parkinsonian state emulated by hybrid Izhikevich neuron models

    NASA Astrophysics Data System (ADS)

    Liu, Chen; Wang, Jiang; Yu, Haitao; Deng, Bin; Wei, Xile; Li, Huiyan; Loparo, Kenneth A.; Fietkiewicz, Chris

    2015-11-01

    Computational models play a significant role in exploring novel theories to complement the findings of physiological experiments. Various computational models have been developed to reveal the mechanisms underlying brain functions. Particularly, in the development of therapies to modulate behavioral and pathological abnormalities, computational models provide the basic foundations to exhibit transitions between physiological and pathological conditions. Considering the significant roles of the intrinsic properties of the globus pallidus and the coupling connections between neurons in determining the firing patterns and the dynamical activities of the basal ganglia neuronal network, we propose the hypothesis that pathological behaviors in the Parkinsonian state may originate from the combined effects of the intrinsic properties of globus pallidus neurons and the synaptic conductances of the whole neuronal network. To establish a computationally efficient network model, the hybrid Izhikevich neuron model is used because of its capacity to capture the dynamical characteristics of biological neuronal activity. Detailed analysis of the individual Izhikevich neuron model assists in understanding the roles of the model parameters, which then facilitates the establishment of the basal ganglia-thalamic network model and contributes to a further exploration of the mechanisms underlying the Parkinsonian state. Simulation results show that the hybrid Izhikevich neuron model is capable of capturing many of the dynamical properties of the basal ganglia-thalamic neuronal network, such as variations in firing rates and the emergence of synchronous oscillations under the Parkinsonian condition, despite the simplicity of the two-dimensional neuronal model. This suggests that the computationally efficient hybrid Izhikevich neuron model can be used to explore normal and abnormal basal ganglia function. In particular, it provides an efficient way of emulating large-scale neuronal networks and may contribute to the development of improved therapies for neurological disorders such as Parkinson's disease.
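
    For reference, the two-dimensional Izhikevich model that serves as the building block here has the standard form v' = 0.04v^2 + 5v + 140 - u + I, u' = a(bv - u), with the reset v <- c, u <- u + d after a spike; a direct sketch with the usual regular-spiking parameters:

      a, b, c, d = 0.02, 0.2, -65.0, 8.0       # regular-spiking parameters
      v, u, dt, I = -65.0, 0.2 * -65.0, 0.5, 10.0
      spikes = []
      for step in range(4000):
          v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
          u += dt * a * (b * v - u)
          if v >= 30.0:                        # spike cutoff
              spikes.append(step * dt)
              v, u = c, u + d                  # reset rule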

  10. Reducing Neuronal Networks to Discrete Dynamics

    PubMed Central

    Terman, David; Ahn, Sungwoo; Wang, Xueying; Just, Winfried

    2008-01-01

    We consider a general class of purely inhibitory and excitatory-inhibitory neuronal networks, with a general class of network architectures, and characterize the complex firing patterns that emerge. Our strategy for studying these networks is to first reduce them to a discrete model. In the discrete model, each neuron is represented by a finite number of states, and there are rules for how a neuron transitions from one state to another. In this paper, we rigorously demonstrate that the continuous neuronal model can be reduced to the discrete model if the intrinsic and synaptic properties of the cells are chosen appropriately. In a companion paper [1], we analyze the discrete model. PMID:18443649

  11. Can simple rules control development of a pioneer vertebrate neuronal network generating behavior?

    PubMed

    Roberts, Alan; Conte, Deborah; Hull, Mike; Merrison-Hort, Robert; al Azad, Abul Kalam; Buhl, Edgar; Borisyuk, Roman; Soffe, Stephen R

    2014-01-08

    How do the pioneer networks in the axial core of the vertebrate nervous system first develop? Fundamental to understanding any full-scale neuronal network is knowledge of the constituent neurons, their properties, synaptic interconnections, and normal activity. Our novel strategy uses basic developmental rules to generate model networks that retain individual neuron and synapse resolution and are capable of reproducing correct, whole animal responses. We apply our developmental strategy to young Xenopus tadpoles, whose brainstem and spinal cord share a core vertebrate plan, but at a tractable complexity. Following detailed anatomical and physiological measurements to complete a descriptive library of each type of spinal neuron, we build models of their axon growth controlled by simple chemical gradients and physical barriers. By adding dendrites and allowing probabilistic formation of synaptic connections, we reconstruct network connectivity among up to 2000 neurons. When the resulting "network" is populated by model neurons and synapses, with properties based on physiology, it can respond to sensory stimulation by mimicking tadpole swimming behavior. This functioning model represents the most complete reconstruction of a vertebrate neuronal network that can reproduce the complex, rhythmic behavior of a whole animal. The findings validate our novel developmental strategy for generating realistic networks with individual neuron- and synapse-level resolution. We use it to demonstrate how early functional neuronal connectivity and behavior may in life result from simple developmental "rules," which lay out a scaffold for the vertebrate CNS without specific neuron-to-neuron recognition.

  12. Qualitative validation of the reduction from two reciprocally coupled neurons to one self-coupled neuron in a respiratory network model.

    PubMed

    Dunmyre, Justin R

    2011-06-01

    The pre-Bötzinger complex of the mammalian brainstem is a heterogeneous neuronal network, and individual neurons within the network have varying strengths of the persistent sodium and calcium-activated nonspecific cationic currents. Individually, these currents have been the focus of modeling efforts. Previously, Dunmyre et al. (J Comput Neurosci 1-24, 2011) proposed a model and studied the interactions of these currents within one self-coupled neuron. In this work, I consider two identical, reciprocally coupled model neurons and validate the reduction to the self-coupled case. I find that all of the dynamics of the two model neuron network and the regions of parameter space where these distinct dynamics are found are qualitatively preserved in the reduction to the self-coupled case.

  13. Computational Models of Neuron-Astrocyte Interactions Lead to Improved Efficacy in the Performance of Neural Networks

    PubMed Central

    Alvarellos-González, Alberto; Pazos, Alejandro; Porto-Pazos, Ana B.

    2012-01-01

    The importance of astrocytes, one part of the glial system, for information processing in the brain has recently been demonstrated. Regarding information processing in multilayer connectionist systems, it has been shown that systems which include artificial neurons and astrocytes (Artificial Neuron-Glia Networks) have well-known advantages over identical systems including only artificial neurons. Since the actual impact of astrocytes in neural network function is unknown, we have investigated, using computational models, different astrocyte-neuron interactions for information processing; different neuron-glia algorithms have been implemented for training and validation of multilayer Artificial Neuron-Glia Networks oriented toward classification problem resolution. The results of the tests performed suggest that all the algorithms modelling astrocyte-induced synaptic potentiation improved artificial neural network performance, but their efficacy depended on the complexity of the problem. PMID:22649480

  14. Bifurcations of large networks of two-dimensional integrate and fire neurons.

    PubMed

    Nicola, Wilten; Campbell, Sue Ann

    2013-08-01

    Recently, a class of two-dimensional integrate and fire models has been used to faithfully model spiking neurons. This class includes the Izhikevich model, the adaptive exponential integrate and fire model, and the quartic integrate and fire model. The bifurcation types for the individual neurons have been thoroughly analyzed by Touboul (SIAM J Appl Math 68(4):1045-1079, 2008). However, when the models are coupled together to form networks, the networks can display bifurcations that an uncoupled oscillator cannot. For example, the networks can transition from firing with a constant rate to burst firing. This paper introduces a technique to reduce a full network of this class of neurons to a mean field model, in the form of a system of switching ordinary differential equations. The reduction uses population density methods and a quasi-steady state approximation to arrive at the mean field system. Reduced models are derived for networks with different topologies and different model neurons with biologically derived parameters. The mean field equations are able to qualitatively and quantitatively describe the bifurcations that the full networks display. Extensions and higher order approximations are discussed.
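
    The model class in question pairs a voltage equation having a model-specific nonlinearity with a linear adaptation variable and a reset rule. A sketch of one member, the adaptive exponential integrate-and-fire neuron (the equations are standard; the parameter values are illustrative):

      import numpy as np

      # AdEx: C v' = -g_L(v - E_L) + g_L*DT*exp((v - V_T)/DT) - w + I
      #       tau_w w' = a(v - E_L) - w;  on spike: v <- V_r, w <- w + b
      C, g_L, E_L, V_T, DT = 200.0, 10.0, -70.0, -50.0, 2.0   # pF, nS, mV
      a, b, tau_w, V_r, V_peak = 2.0, 60.0, 300.0, -58.0, 0.0
      v, w, dt, I = E_L, 0.0, 0.1, 500.0                      # I in pA
      for _ in range(50000):
          v += dt / C * (-g_L * (v - E_L)
                         + g_L * DT * np.exp((v - V_T) / DT) - w + I)
          if v >= V_peak:          # spike: reset before updating adaptation
              v, w = V_r, w + b    # repeated resets can yield burst firing
          w += dt / tau_w * (a * (v - E_L) - w)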

  15. A modeling comparison of projection neuron- and neuromodulator-elicited oscillations in a central pattern generating network.

    PubMed

    Kintos, Nickolas; Nusbaum, Michael P; Nadim, Farzan

    2008-06-01

    Many central pattern generating networks are influenced by synaptic input from modulatory projection neurons. The network response to a projection neuron is sometimes mimicked by bath applying the neuronally-released modulator, despite the absence of network interactions with the projection neuron. One interesting example occurs in the crab stomatogastric ganglion (STG), where bath applying the neuropeptide pyrokinin (PK) elicits a gastric mill rhythm which is similar to that elicited by the projection neuron modulatory commissural neuron 1 (MCN1), despite the absence of PK in MCN1 and the fact that MCN1 is not active during the PK-elicited rhythm. MCN1 terminals have fast and slow synaptic actions on the gastric mill network and are presynaptically inhibited by this network in the STG. These local connections are inactive in the PK-elicited rhythm, and the mechanism underlying this rhythm is unknown. We use mathematical and biophysically-realistic modeling to propose potential mechanisms by which PK can elicit a gastric mill rhythm that is similar to the MCN1-elicited rhythm. We analyze slow-wave network oscillations using simplified mathematical models and, in parallel, develop biophysically-realistic models that account for fast, action potential-driven oscillations and some spatial structure of the network neurons. Our results illustrate how the actions of bath-applied neuromodulators can mimic those of descending projection neurons through mathematically similar but physiologically distinct mechanisms.

  16. Simulation of Code Spectrum and Code Flow of Cultured Neuronal Networks.

    PubMed

    Tamura, Shinichi; Nishitani, Yoshi; Hosokawa, Chie; Miyoshi, Tomomitsu; Sawai, Hajime

    2016-01-01

    It has been shown that, in cultured neuronal networks on a multielectrode array, pseudorandom-like sequences (codes) are detected, and they flow with some spatial decay constant. Each cultured neuronal network is characterized by a specific spectrum curve. That is, we may consider the spectrum curve as a "signature" of its associated neuronal network that is dependent on the characteristics of neurons and network configuration, including the weight distribution. In the present study, we used an integrate-and-fire model of neurons with intrinsic and instantaneous fluctuations of characteristics to simulate the code spectrum from multielectrodes on a 2D mesh neural network. We showed that it is possible to estimate characteristics of the neurons, such as the distribution of the number of neurons around each electrode and their refractory periods. Although this is an inverse problem whose solutions are not theoretically guaranteed, the estimated parameters appear consistent with those of real neurons. That is, the proposed neural network model may adequately reflect the behavior of a cultured neuronal network. Furthermore, we discuss the prospect that code analysis will provide a basis for understanding communication within a neural network and, in turn, a basis for natural intelligence.

  17. Neural networks with multiple general neuron models: a hybrid computational intelligence approach using Genetic Programming.

    PubMed

    Barton, Alan J; Valdés, Julio J; Orchard, Robert

    2009-01-01

    Classical neural networks are composed of neurons whose nature is determined by a certain function (the neuron model), usually pre-specified. In this paper, a type of neural network (NN-GP) is presented in which: (i) each neuron may have its own neuron model in the form of a general function, (ii) any layout (i.e., network interconnection) is possible, and (iii) no bias nodes or weights are associated with the connections, neurons or layers. The general functions associated with a neuron are learned by searching a function space. They are not provided a priori, but are rather built as part of an Evolutionary Computation process based on Genetic Programming. The resulting network solutions are evaluated based on a fitness measure, which may, for example, be based on classification or regression errors. Two real-world examples are presented to illustrate the promising behaviour on classification problems via construction of a low-dimensional representation of a high-dimensional parameter space associated with the set of all network solutions.

  18. The effects of neuron morphology on graph theoretic measures of network connectivity: the analysis of a two-level statistical model.

    PubMed

    Aćimović, Jugoslava; Mäki-Marttunen, Tuomo; Linne, Marja-Leena

    2015-01-01

    We developed a two-level statistical model that addresses the question of how properties of neurite morphology shape large-scale network connectivity. We adopted a low-dimensional statistical description of neurites. From the neurite model description we derived the expected number of synapses, node degree, and the effective radius, the maximal distance between two neurons expected to form at least one synapse. We related these quantities to the network connectivity described using standard measures from graph theory, such as motif counts, clustering coefficient, minimal path length, and small-world coefficient. These measures are used in a neuroscience context to study phenomena from synaptic connectivity in small neuronal networks to large-scale functional connectivity in the cortex. For these measures we provide analytical solutions that clearly relate different model properties. Neurites that sparsely cover space lead to a small effective radius. If the effective radius is small compared to the overall neuron size, the obtained networks share similarities with uniform random networks, as each neuron connects to a small number of distant neurons. Large neurites with densely packed branches lead to a large effective radius. If this effective radius is large compared to the neuron size, the obtained networks have many local connections. In between these extremes, the networks maximize the variability of connection repertoires. The presented approach connects the properties of neuron morphology with large-scale network properties without requiring heavy simulations with many model parameters. The two-step procedure provides an easier interpretation of the role of each modeled parameter. The model is flexible and each of its components can be further expanded. We identified a range of model parameters that maximizes variability in network connectivity, the property that might affect network capacity to exhibit different dynamical regimes.

  19. Sustained synchronized neuronal network activity in a human astrocyte co-culture system

    PubMed Central

    Kuijlaars, Jacobine; Oyelami, Tutu; Diels, Annick; Rohrbacher, Jutta; Versweyveld, Sofie; Meneghello, Giulia; Tuefferd, Marianne; Verstraelen, Peter; Detrez, Jan R.; Verschuuren, Marlies; De Vos, Winnok H.; Meert, Theo; Peeters, Pieter J.; Cik, Miroslav; Nuydens, Rony; Brône, Bert; Verheyen, An

    2016-01-01

    Impaired neuronal network function is a hallmark of neurodevelopmental and neurodegenerative disorders such as autism, schizophrenia, and Alzheimer’s disease and is typically studied using genetically modified cellular and animal models. Weak predictive capacity and poor translational value of these models urge for better human derived in vitro models. The implementation of human induced pluripotent stem cells (hiPSCs) allows studying pathologies in differentiated disease-relevant and patient-derived neuronal cells. However, the differentiation process and growth conditions of hiPSC-derived neurons are non-trivial. In order to study neuronal network formation and (mal)function in a fully humanized system, we have established an in vitro co-culture model of hiPSC-derived cortical neurons and human primary astrocytes that recapitulates neuronal network synchronization and connectivity within three to four weeks after final plating. Live cell calcium imaging, electrophysiology and high content image analyses revealed an increased maturation of network functionality and synchronicity over time for co-cultures compared to neuronal monocultures. The cells express GABAergic and glutamatergic markers and respond to inhibitors of both neurotransmitter pathways in a functional assay. The combination of this co-culture model with quantitative imaging of network morphofunction is amenable to high throughput screening for lead discovery and drug optimization for neurological diseases. PMID:27819315

  20. Probabilistic Inference in General Graphical Models through Sampling in Stochastic Networks of Spiking Neurons

    PubMed Central

    Pecevski, Dejan; Buesing, Lars; Maass, Wolfgang

    2011-01-01

    An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows (“explaining away”) and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons. PMID:22219717

  21. Transition to subthreshold activity with the use of phase shifting in a model thalamic network

    NASA Astrophysics Data System (ADS)

    Thomas, Elizabeth; Grisar, Thierry

    1997-05-01

    Absence epilepsy involves a state of low-frequency synchronous oscillations by the involved neuronal networks. These oscillations may be either above or below threshold. In this investigation, we studied methods that could be used to transform the suprathreshold activity of neurons in the network into a subthreshold state. A model thalamic network was constructed using the Hodgkin-Huxley framework. Subthreshold activity was achieved by the application of stimuli to the network which caused phase shifts in the oscillatory activity of selected neurons in the network. In some instances the stimulus was a periodic pulse train of low frequency applied to the reticular thalamic neurons of the network, while in others it was a constant hyperpolarizing current applied to the thalamocortical neurons.

  22. A distance constrained synaptic plasticity model of C. elegans neuronal network

    NASA Astrophysics Data System (ADS)

    Badhwar, Rahul; Bagler, Ganesh

    2017-03-01

    Brain research has been driven by enquiry into the principles of brain structure organization and its control mechanisms. The neuronal wiring map of C. elegans, the only complete connectome available to date, presents an incredible opportunity to learn basic governing principles that drive the structure and function of its neuronal architecture. Despite its apparently simple nervous system, C. elegans is known to possess complex functions. The nervous system forms an important underlying framework which specifies phenotypic features associated with sensation, movement, conditioning and memory. In this study, with the help of graph theoretical models, we investigated the C. elegans neuronal network to identify network features that are critical for its control. The 'driver neurons' are associated with important biological functions such as reproduction, signalling processes and anatomical structural development. We created 1D and 2D network models of the C. elegans neuronal system to probe the role of features that confer controllability and small-world nature. The simple 1D ring model is critically poised with respect to the number of feed-forward motifs, neuronal clustering and characteristic path length in response to synaptic rewiring, indicating optimal rewiring. Using the empirically observed distance constraint in the neuronal network as a guiding principle, we created a distance-constrained synaptic plasticity model that simultaneously explains the small-world nature, the saturation of feed-forward motifs, and the observed number of driver neurons. The distance-constrained model suggests optimal long-distance synaptic connections as a key feature specifying control of the network.
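
    The distance-constrained rewiring idea can be sketched as a toy rule on a 1D ring of neurons: connections are rewired with some probability, but a new target is accepted only if it lies within a wiring-distance bound (names and constants are ours; the paper's actual model details differ).

      import random

      N, K, P_REW, D_MAX = 100, 4, 0.1, 20   # neurons, neighbors, rewiring, bound

      def ring_distance(i, j, n=N):
          return min((i - j) % n, (j - i) % n)

      # 1D ring lattice: each neuron connects to its K nearest neighbors.
      edges = {(i, (i + k) % N) for i in range(N) for k in range(1, K // 2 + 1)}

      rewired = set()
      for (i, j) in edges:
          if random.random() < P_REW:
              # Distance constraint: candidate targets within ring distance D_MAX.
              candidates = [m for m in range(N)
                            if m != i and ring_distance(i, m) <= D_MAX]
              rewired.add((i, random.choice(candidates)))
          else:
              rewired.add((i, j))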

  23. Synaptic dynamics regulation in response to high frequency stimulation in neuronal networks

    NASA Astrophysics Data System (ADS)

    Su, Fei; Wang, Jiang; Li, Huiyan; Wei, Xile; Yu, Haitao; Deng, Bin

    2018-02-01

    High-frequency stimulation (HFS) has a demonstrated ability to modulate pathological neural activities; however, its detailed mechanism is unclear. This study aims to explore the effects of HFS on neuronal network dynamics. First, two-neuron FitzHugh-Nagumo (FHN) networks with static coupling strength and small-world FHN networks with spike-timing-dependent plasticity (STDP) modulated synaptic coupling strength are constructed. Then, the multi-scale method is used to transform the network models into equivalent averaged models, where the HFS intensity is modeled as the ratio between stimulation amplitude and frequency. Results show that in static two-neuron networks, there is still synaptic current projected to the postsynaptic neuron even if the presynaptic neuron is blocked by the HFS. In the small-world networks, the effects of the STDP adjusting-rate parameter on the inactivation ratio and synchrony degree increase with increasing HFS intensity. However, only when the HFS intensity becomes very large can the STDP time-window parameter affect the inactivation ratio and synchrony index. Both simulation and numerical analysis demonstrate that the effects of HFS on neuronal network dynamics are realized through the adjustment of synaptic variables and conductances.
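
    The STDP rule referred to above is, in its standard pair-based form, an exponential time window: the weight is potentiated when the presynaptic spike precedes the postsynaptic one and depressed otherwise (a generic sketch; the adjusting rates and window constants are illustrative stand-ins for the paper's parameters).

      import numpy as np

      A_plus, A_minus = 0.01, 0.012       # adjusting-rate parameters
      tau_plus, tau_minus = 20.0, 20.0    # time-window parameters (ms)

      def stdp_dw(t_pre, t_post):
          """Pair-based STDP weight change for one pre/post spike pair."""
          dt = t_post - t_pre
          if dt > 0:                      # pre before post -> potentiation
              return A_plus * np.exp(-dt / tau_plus)
          return -A_minus * np.exp(dt / tau_minus)   # otherwise depression

      w = 0.5
      for t_pre, t_post in [(10.0, 15.0), (40.0, 35.0)]:
          w = float(np.clip(w + stdp_dw(t_pre, t_post), 0.0, 1.0))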

  24. Small Modifications to Network Topology Can Induce Stochastic Bistable Spiking Dynamics in a Balanced Cortical Model

    PubMed Central

    McDonnell, Mark D.; Ward, Lawrence M.

    2014-01-01

    Directed random graph models frequently are used successfully in modeling the population dynamics of networks of cortical neurons connected by chemical synapses. Experimental results consistently reveal that neuronal network topology is complex, however, in the sense that it differs statistically from a random network, and differs for classes of neurons that are physiologically different. This suggests that complex network models whose subnetworks have distinct topological structure may be a useful, and more biologically realistic, alternative to random networks. Here we demonstrate that the balanced excitation and inhibition frequently observed in small cortical regions can transiently disappear in otherwise standard neuronal-scale models of fluctuation-driven dynamics, solely because the random network topology was replaced by a complex clustered one, whilst not changing the in-degree of any neurons. In this network, a small subset of cells whose inhibition comes only from outside their local cluster are the cause of bistable population dynamics, where different clusters of these cells irregularly switch back and forth from a sparsely firing state to a highly active state. Transitions to the highly active state occur when a cluster of these cells spikes sufficiently often to cause strong unbalanced positive feedback to each other. Transitions back to the sparsely firing state rely on occasional large fluctuations in the amount of non-local inhibition received. Neurons in the model are homogeneous in their intrinsic dynamics and in-degrees, but differ in the abundance of various directed feedback motifs in which they participate. Our findings suggest that (i) models and simulations should take into account complex structure that varies for neuron and synapse classes; (ii) differences in the dynamics of neurons with similar intrinsic properties may be caused by their membership in distinctive local networks; (iii) it is important to identify neurons that share physiological properties and location, but differ in their connectivity. PMID:24743633

  25. Functional model of biological neural networks.

    PubMed

    Lo, James Ting-Ho

    2010-12-01

    A functional model of biological neural networks, called temporal hierarchical probabilistic associative memory (THPAM), is proposed in this paper. THPAM comprises functional models of dendritic trees for encoding inputs to neurons, a first type of neuron for generating spike trains, a second type of neuron for generating graded signals to modulate neurons of the first type, supervised and unsupervised Hebbian learning mechanisms for easy learning and retrieving, an arrangement of dendritic trees for maximizing generalization, hardwiring for rotation-translation-scaling invariance, and feedback connections with different delay durations for neurons to make full use of present and past information generated by neurons in the same and higher layers. These functional models and their processing operations have many functions of biological neural networks that have not been achieved by other models in the open literature and provide logically coherent answers to many long-standing neuroscientific questions. However, biological justification of these functional models and their processing operations is required for THPAM to qualify as a macroscopic model (or low-order approximation) of biological neural networks.

  26. Complex Rotation Quantum Dynamic Neural Networks (CRQDNN) using Complex Quantum Neuron (CQN): Applications to time series prediction.

    PubMed

    Cui, Yiqian; Shi, Junyou; Wang, Zili

    2015-11-01

    Quantum Neural Network (QNN) models have attracted great attention because they introduce a new manner of neural computing based on quantum entanglement. However, the existing QNN models are mainly based on real quantum operations, and the potential of quantum entanglement is not fully exploited. In this paper, we propose a novel quantum neuron model called the Complex Quantum Neuron (CQN) that realizes deeper quantum entanglement. We also propose a novel hybrid network model, Complex Rotation Quantum Dynamic Neural Networks (CRQDNN), based on the CQN. CRQDNN is a three-layer model with both CQN and classical neurons. An infinite impulse response (IIR) filter is embedded in the network model to provide the memory function needed to process time-series inputs. The Levenberg-Marquardt (LM) algorithm is used for fast parameter learning. The network model is developed to conduct time-series prediction. Two application studies are presented: chaotic time-series prediction and electronic remaining useful life (RUL) prediction.

  27. Dynamical state of the network determines the efficacy of single neuron properties in shaping the network activity

    PubMed Central

    Sahasranamam, Ajith; Vlachos, Ioannis; Aertsen, Ad; Kumar, Arvind

    2016-01-01

    Spike patterns are among the most common electrophysiological descriptors of neuron types. Surprisingly, it is not clear how the diversity in firing patterns of the neurons in a network affects its activity dynamics. Here, we introduce a state-dependent stochastic bursting neuron model that allows for a change in its firing patterns independent of changes in its input-output firing rate relationship. Using this model, we show that the effect of single neuron spiking on the network dynamics is contingent on the network activity state. While spike bursting can both generate and disrupt oscillations, these patterns are ineffective in large regions of the network state space in changing the network activity qualitatively. Finally, we show that when single-neuron properties are made dependent on the population activity, hysteresis-like dynamics emerge. This novel phenomenon has important implications for determining the network response to time-varying inputs and for the network sensitivity at different operating points. PMID:27212008

  28. An integrate-and-fire model for synchronized bursting in a network of cultured cortical neurons.

    PubMed

    French, D A; Gruenstein, E I

    2006-12-01

    It has been suggested that spontaneous synchronous neuronal activity is an essential step in the formation of functional networks in the central nervous system. The key features of this type of activity consist of bursts of action potentials with associated spikes of elevated cytoplasmic calcium. These features are also observed in networks of rat cortical neurons that have been formed in culture. Experimental studies of these cultured networks have led to several hypotheses for the mechanisms underlying the observed synchronized oscillations. In this paper, bursting integrate-and-fire type mathematical models for regular spiking (RS) and intrinsic bursting (IB) neurons are introduced and incorporated through a small-world connection scheme into a two-dimensional excitatory network similar to those in the cultured network. This computer model exhibits spontaneous synchronous activity through mechanisms similar to those hypothesized for the cultured experimental networks. Traces of the membrane potential and cytoplasmic calcium from the model closely match those obtained from experiments. We also consider the impact on network behavior of the IB neurons, the geometry and the small world connection scheme.

  29. Collective behavior of large-scale neural networks with GPU acceleration.

    PubMed

    Qu, Jingyi; Wang, Rubin

    2017-12-01

    In this paper, the collective behaviors of a small-world neuronal network motivated by the anatomy of a mammalian cortex are studied, based on both the Izhikevich model and the Rulkov model. The Izhikevich model can reproduce the rich behaviors of biological neurons while using only two equations and one nonlinear term. The Rulkov model takes the form of difference equations that generate a sequence of membrane potential samples in discrete moments of time, which improves computational efficiency. These two models are suitable for the construction of large-scale neural networks. By varying some key parameters, such as the connection probability and the number of nearest neighbors of each node, the coupled neurons exhibit various temporal and spatial characteristics. It is demonstrated that the GPU implementation achieves increasing acceleration over the CPU as the number of neurons and iterations grows. These two small-world network models and GPU acceleration give us a new opportunity to reproduce real biological networks containing large numbers of neurons.
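
    The Rulkov model mentioned here is a two-dimensional map producing membrane-potential samples at discrete steps, which is what makes it cheap for large networks. Its common chaotic-bursting form is x[n+1] = alpha/(1 + x[n]^2) + y[n], y[n+1] = y[n] - mu(x[n] + 1) + mu*sigma (the parameter values below are illustrative):

      # Rulkov map: fast variable x (membrane potential), slow variable y.
      alpha, mu, sigma = 4.5, 0.001, 0.14
      x, y = -1.0, -2.9
      trace = []
      for n in range(20000):
          x, y = alpha / (1.0 + x * x) + y, y - mu * (x + 1.0) + mu * sigma
          trace.append(x)   # x alternates between bursts of spikes and silence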

  30. Replicating receptive fields of simple and complex cells in primary visual cortex in a neuronal network model with temporal and population sparseness and reliability.

    PubMed

    Tanaka, Takuma; Aoyagi, Toshio; Kaneko, Takeshi

    2012-10-01

    We propose a new principle for replicating receptive field properties of neurons in the primary visual cortex. We derive a learning rule for a feedforward network, which maintains a low firing rate for the output neurons (resulting in temporal sparseness) and allows only a small subset of the neurons in the network to fire at any given time (resulting in population sparseness). Our learning rule also sets the firing rates of the output neurons at each time step to near-maximum or near-minimum levels, resulting in neuronal reliability. The learning rule is simple enough to be written in spatially and temporally local forms. After the learning stage is performed using input image patches of natural scenes, output neurons in the model network are found to exhibit simple-cell-like receptive field properties. When the output of these simple-cell-like neurons are input to another model layer using the same learning rule, the second-layer output neurons after learning become less sensitive to the phase of gratings than the simple-cell-like input neurons. In particular, some of the second-layer output neurons become completely phase invariant, owing to the convergence of the connections from first-layer neurons with similar orientation selectivity to second-layer neurons in the model network. We examine the parameter dependencies of the receptive field properties of the model neurons after learning and discuss their biological implications. We also show that the localized learning rule is consistent with experimental results concerning neuronal plasticity and can replicate the receptive fields of simple and complex cells.

  31. Phase synchronization of bursting neurons in clustered small-world networks

    NASA Astrophysics Data System (ADS)

    Batista, C. A. S.; Lameu, E. L.; Batista, A. M.; Lopes, S. R.; Pereira, T.; Zamora-López, G.; Kurths, J.; Viana, R. L.

    2012-07-01

    We investigate the collective dynamics of bursting neurons on clustered networks. The clustered network model is composed of subnetworks, each of them presenting the so-called small-world property. This model can also be regarded as a network of networks. In each subnetwork a neuron is connected to other ones with regular as well as random connections, the latter with a given intracluster probability. Moreover, in a given subnetwork each neuron has an intercluster probability to be connected to the other subnetworks. The local neuron dynamics has two time scales (fast and slow) and is modeled by a two-dimensional map. In such a small-world network, the neuron parameters are chosen to be slightly different such that, if the coupling strength is large enough, there may be synchronization of the bursting (slow) activity. We give bounds for the critical coupling strength to obtain global burst synchronization in terms of the network structure, that is, the probabilities of intracluster and intercluster connections. We find that, as the heterogeneity in the network is reduced, the network global synchronizability is improved. We show that the transitions to global synchrony may be abrupt or smooth depending on the intercluster probability.

  32. Energetic Constraints Produce Self-sustained Oscillatory Dynamics in Neuronal Networks

    PubMed Central

    Burroni, Javier; Taylor, P.; Corey, Cassian; Vachnadze, Tengiz; Siegelmann, Hava T.

    2017-01-01

    Overview: We model energy constraints in a network of spiking neurons, while exploring general questions of resource limitation on network function abstractly. Background: Metabolic states like dietary ketosis or hypoglycemia have a large impact on brain function and disease outcomes. Glia provide metabolic support for neurons, among other functions. Yet, in computational models of glia-neuron cooperation, there have been no previous attempts to explore the effects of direct realistic energy costs on network activity in spiking neurons. Currently, biologically realistic spiking neural networks assume that membrane potential is the main driving factor for neural spiking, and do not take into consideration energetic costs. Methods: We define local energy pools to constrain a neuron model, termed Spiking Neuron Energy Pool (SNEP), which explicitly incorporates energy limitations. Each neuron requires energy to spike, and resources in the pool regenerate over time. Our simulation displays an easy-to-use GUI, which can be run locally in a web browser, and is freely available. Results: Energy dependence drastically changes behavior of these neural networks, causing emergent oscillations similar to those in networks of biological neurons. We analyze the system via Lotka-Volterra equations, producing several observations: (1) energy can drive self-sustained oscillations, (2) the energetic cost of spiking modulates the degree and type of oscillations, (3) harmonics emerge with frequencies determined by energy parameters, and (4) varying energetic costs have non-linear effects on energy consumption and firing rates. Conclusions: Models of neuron function which attempt biological realism may benefit from including energy constraints. Further, we assert that observed oscillatory effects of energy limitations exist in networks of many kinds, and that these findings generalize to abstract graphs and technological applications. PMID:28289370
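
    A toy rendering of the energy-pool idea (our own minimal version, not the authors' SNEP implementation): each spike costs energy drawn from a local pool that regenerates over time, and a neuron above voltage threshold stays silent if its pool cannot pay.

      import numpy as np

      N, dt = 100, 0.1
      E_COST, E_MAX, E_RATE, V_TH = 1.0, 10.0, 0.05, 1.0   # illustrative

      v = np.zeros(N)                      # membrane potentials
      energy = np.full(N, E_MAX)           # local energy pools

      for step in range(10000):
          v += dt * (-v + 1.2 + 0.5 * np.random.randn(N))   # noisy drive
          energy = np.minimum(energy + dt * E_RATE, E_MAX)  # pools regenerate
          fired = (v >= V_TH) & (energy >= E_COST)          # can afford a spike
          v[fired] = 0.0                                    # reset spikers
          energy[fired] -= E_COST                           # pay the spike cost
          # neurons above threshold but out of energy wait until refueled,
          # which is the coupling that can drive slow network oscillations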

  33. Population coding in sparsely connected networks of noisy neurons.

    PubMed

    Tripp, Bryan P; Orchard, Jeff

    2012-01-01

    This study examines the relationship between population coding and spatial connection statistics in networks of noisy neurons. Encoding of sensory information in the neocortex is thought to require coordinated neural populations, because individual cortical neurons respond to a wide range of stimuli, and exhibit highly variable spiking in response to repeated stimuli. Population coding is rooted in network structure, because cortical neurons receive information only from other neurons, and because the information they encode must be decoded by other neurons, if it is to affect behavior. However, population coding theory has often ignored network structure, or assumed discrete, fully connected populations (in contrast with the sparsely connected, continuous sheet of the cortex). In this study, we modeled a sheet of cortical neurons with sparse, primarily local connections, and found that a network with this structure could encode multiple internal state variables with high signal-to-noise ratio. However, we were unable to create high-fidelity networks by instantiating connections at random according to spatial connection probabilities. In our models, high-fidelity networks required additional structure, with higher cluster factors and correlations between the inputs to nearby neurons.

  34. Spiking Neurons for Analysis of Patterns

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terrance

    2008-01-01

    Artificial neural networks comprising spiking neurons of a novel type have been conceived as improved pattern-analysis and pattern-recognition computational systems. These neurons are represented by a mathematical model denoted the state-variable model (SVM), which, among other things, exploits a computational parallelism inherent in spiking-neuron geometry. Networks of SVM neurons offer advantages of speed and computational efficiency, relative to traditional artificial neural networks. The SVM also overcomes some of the limitations of prior spiking-neuron models. There are numerous potential pattern-recognition, tracking, and data-reduction (data-preprocessing) applications for these SVM neural networks on Earth and in exploration of remote planets. Spiking neurons imitate biological neurons more closely than do the neurons of traditional artificial neural networks. A spiking neuron includes a central cell body (soma) surrounded by a tree-like interconnection network (dendrites). Spiking neurons are so named because they generate trains of output pulses (spikes) in response to inputs received from sensors or from other neurons. They gain their speed advantage over traditional neural networks by using the timing of individual spikes for computation, whereas traditional artificial neurons use averages of activity levels over time. Moreover, spiking neurons use the delays inherent in dendritic processing to encode the information content of incoming signals efficiently. Because traditional artificial neurons fail to capture this encoding, they have less processing capability, and so more gates are needed when implementing them in electronic circuitry. Such higher-order functions as dynamic tasking are effected by use of pools (collections) of spiking neurons interconnected by spike-transmitting fibers. The SVM includes adaptive thresholds and submodels of ion transport (in imitation of such transport in biological neurons). These features enable the neurons to adapt their responses to high-rate inputs from sensors, and to adapt their firing thresholds to mitigate noise or the effects of potential sensor failure. The mathematical derivation of the SVM starts from a prior model, known in the art as the point-soma model, which captures all of the salient properties of neuronal response while keeping the computational cost low. The point-soma latency time is modified to be an exponentially decaying function of the strength of the applied potential. For computational efficiency rather than biological fidelity, the dendrites surrounding a neuron are represented by simplified compartmental submodels with no dendritic spines. Updates to the dendritic potential, calcium-ion concentrations and conductances, and potassium-ion conductances are done by use of equations similar to those of the point soma. Diffusion processes in dendrites are modeled by averaging among nearest-neighbor compartments. Inputs to each of the dendritic compartments come from sensors; alternatively or in addition, when an affected neuron is part of a pool, inputs can come from other spiking neurons. At present, SVM neural networks are implemented by computational simulation, using algorithms that encode the SVM and its submodels. However, it should be possible to implement these neural networks in hardware: the differential equations for the dendritic and cellular processes in the SVM map to equivalent circuits that can be implemented directly in analog very-large-scale integrated (VLSI) circuitry.
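
    As a toy illustration of two ingredients named above, the sketch below implements a latency that decays exponentially with the applied potential and a diffusion step that averages among nearest-neighbor compartments. All constants, function names, and the update scheme are our own illustrative choices, not the SVM's actual equations:

```python
import numpy as np

def spike_latency(v, t0=10.0, v0=5.0):
    """Latency (ms) as an exponentially decaying function of the applied
    potential v; t0 and v0 are illustrative constants, not SVM values."""
    return t0 * np.exp(-v / v0)

def diffuse(potentials, rate=0.2):
    """One diffusion step: each dendritic compartment relaxes toward the
    average of its nearest neighbors (edge compartments are reflected)."""
    padded = np.pad(potentials, 1, mode="edge")
    neighbor_avg = 0.5 * (padded[:-2] + padded[2:])
    return potentials + rate * (neighbor_avg - potentials)

v = np.array([0.0, 0.0, 8.0, 0.0, 0.0])  # local input to one compartment
for _ in range(5):
    v = diffuse(v)
print("potentials:", v, "latency:", spike_latency(v.max()))
```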

  17. Comparison of the dynamics of neural interactions between current-based and conductance-based integrate-and-fire recurrent networks

    PubMed Central

    Cavallari, Stefano; Panzeri, Stefano; Mazzoni, Alberto

    2014-01-01

    Models of networks of Leaky Integrate-and-Fire (LIF) neurons are a widely used tool for theoretical investigations of brain function. These models have been used with both current- and conductance-based synapses. However, the differences in the dynamics expressed by these two approaches have so far been studied mainly at the single-neuron level. To investigate how these synaptic models affect network activity, we compared the single-neuron and neural-population dynamics of conductance-based networks (COBNs) and current-based networks (CUBNs) of LIF neurons. These networks were endowed with sparse excitatory and inhibitory recurrent connections, and were tested in conditions including both low- and high-conductance states. We developed a novel procedure to obtain comparable networks by properly tuning the synaptic parameters not shared by the models. The comparable networks so defined displayed an excellent and robust match of first-order statistics (average single-neuron firing rates and average frequency spectrum of network activity). However, these comparable networks showed profound differences in the second-order statistics of neural-population interactions and in the modulation of these properties by external inputs. The correlation between inhibitory and excitatory synaptic currents, and the cross-neuron correlations between synaptic inputs, membrane potentials and spike trains, were stronger and more stimulus-modulated in the COBN. Because of these properties, the spike-train correlation carried more information about the strength of the input in the COBN, although the firing rates were equally informative in both network models. Moreover, the network activity of the COBN showed stronger synchronization in the gamma band, and its spectral information about the input was higher and spread over a broader range of frequencies. These results suggest that the second-order statistics of network dynamics depend strongly on the choice of synaptic model. PMID:24634645
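
    The distinction between the two synapse models reduces to one line of the membrane equation: a current-based synapse injects a waveform that is independent of the membrane potential, whereas a conductance-based synapse injects a current proportional to the instantaneous driving force. A minimal forward-Euler sketch of this difference (the parameters are illustrative, not the tuned values used to match the networks in the paper):

```python
import numpy as np

dt, tau_m = 0.1, 20.0            # ms
v_rest, e_exc = -70.0, 0.0       # mV; e_exc is the excitatory reversal potential

t = np.arange(0.0, 200.0, dt)
g = 0.05 * np.exp(-t / 5.0)      # synaptic conductance waveform (relative units)
i_fix = 1.5 * np.exp(-t / 5.0)   # fixed current waveform of the same shape

v_cub = np.full(t.size, v_rest)  # current-based membrane
v_cob = np.full(t.size, v_rest)  # conductance-based membrane

for k in range(1, t.size):
    # CUBN: the synaptic current does not depend on the membrane potential.
    v_cub[k] = v_cub[k-1] + dt / tau_m * (v_rest - v_cub[k-1] + i_fix[k-1])
    # COBN: the synaptic current scales with the driving force (e_exc - V).
    i_syn = g[k-1] * (e_exc - v_cob[k-1])
    v_cob[k] = v_cob[k-1] + dt / tau_m * (v_rest - v_cob[k-1] + i_syn)

print(f"CUBN peak: {v_cub.max():.2f} mV   COBN peak: {v_cob.max():.2f} mV")
```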

  19. Numerical simulation of coherent resonance in a model network of Rulkov neurons

    NASA Astrophysics Data System (ADS)

    Andreev, Andrey V.; Runnova, Anastasia E.; Pisarchik, Alexander N.

    2018-04-01

    In this paper we study the spiking behaviour of a neuronal network consisting of Rulkov elements. We find that the regularity of this behaviour is maximized at a certain level of environmental noise. This effect, referred to as coherence resonance, is demonstrated in a random complex network of Rulkov neurons. An external stimulus added to some of the neurons excites them and then activates other neurons in the network. The network coherence is likewise maximized at a certain stimulus amplitude.
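
    The Rulkov element is a two-dimensional map rather than a system of differential equations, which keeps network simulations cheap. A minimal sketch of a single Rulkov neuron with additive noise on the fast variable follows; the parameter values are common choices for the chaotic Rulkov map, and the noise term and spike threshold are our own illustrative additions:

```python
import numpy as np

rng = np.random.default_rng(1)

alpha, mu, sigma = 4.1, 0.001, -0.5  # chaotic Rulkov regime (illustrative)
noise = 0.02                         # noise on the fast variable (assumed)

x, y = -1.0, -3.0
prev, spikes = x, 0
for _ in range(50_000):
    # Fast-slow map: x is the membrane-like variable, y the slow variable.
    x, y = alpha / (1.0 + x * x) + y, y - mu * (x - sigma)
    x += noise * rng.standard_normal()
    if prev <= 1.0 < x:              # upward threshold crossing = spike
        spikes += 1
    prev = x
print("spike-like events:", spikes)
```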

  20. Rhythmogenic neuronal networks, emergent leaders, and k-cores.

    PubMed

    Schwab, David J; Bruinsma, Robijn F; Feldman, Jack L; Levine, Alex J

    2010-11-01

    Neuronal network behavior results from a combination of the dynamics of individual neurons and the connectivity of the network that links them together. We study a simplified model, based on the proposal of Feldman and Del Negro (FDN) [Nat. Rev. Neurosci. 7, 232 (2006)], of the preBötzinger Complex, a small neuronal network that participates in the control of the mammalian breathing rhythm through periodic firing bursts. The dynamics of this randomly connected network of identical excitatory neurons differ from those of a uniformly connected one. Specifically, network connectivity determines the identity of emergent leader neurons that trigger the firing bursts. When neuronal desensitization is controlled by the number of input signals to the neurons (as proposed by FDN), the network's collective desensitization--required for successful burst termination--is mediated by k-core clusters of neurons.
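
    For readers unfamiliar with the graph-theoretic term: the k-core of a network is the maximal subgraph in which every node has at least k neighbors within the subgraph. A quick way to inspect the innermost core of a random graph, using networkx (the graph size and density are arbitrary, unrelated to the preBötzinger model):

```python
import networkx as nx

g = nx.gnp_random_graph(100, 0.05, seed=42)  # sparse random network
core_number = nx.core_number(g)              # largest k-core each node belongs to

k_max = max(core_number.values())
inner = nx.k_core(g, k_max)
print(f"innermost core: k={k_max}, containing {inner.number_of_nodes()} nodes")
```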

  1. Generalized Adaptive Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1993-01-01

    Mathematical model of supervised learning by artificial neural network provides for simultaneous adjustments of both temperatures of neurons and synaptic weights, and includes feedback as well as feedforward synaptic connections. Extension of mathematical model described in "Adaptive Neurons For Artificial Neural Networks" (NPO-17803). Dynamics of neural network represented in new model by less-restrictive continuous formalism.

  2. Bayesian Inference and Online Learning in Poisson Neuronal Networks.

    PubMed

    Huang, Yanping; Rao, Rajesh P N

    2016-08-01

    Motivated by the growing evidence for Bayesian computation in the brain, we show how a two-layer recurrent network of Poisson neurons can perform both approximate Bayesian inference and learning for any hidden Markov model. The lower-layer sensory neurons receive noisy measurements of hidden world states. The higher-layer neurons infer a posterior distribution over world states via Bayesian inference from inputs generated by sensory neurons. We demonstrate how such a neuronal network with synaptic plasticity can implement a form of Bayesian inference similar to Monte Carlo methods such as particle filtering. Each spike in a higher-layer neuron represents a sample of a particular hidden world state. The spiking activity across the neural population approximates the posterior distribution over hidden states. In this model, variability in spiking is regarded not as a nuisance but as an integral feature that provides the variability necessary for sampling during inference. We demonstrate how the network can learn the likelihood model, as well as the transition probabilities underlying the dynamics, using a Hebbian learning rule. We present results illustrating the ability of the network to perform inference and learning for arbitrary hidden Markov models.
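
    A minimal sketch of the sampling view of inference described above, using a generic bootstrap particle filter for a toy two-state hidden Markov model with Poisson observations; each sample plays the role of a spike-borne state hypothesis, and the fraction of samples in a state approximates the posterior. None of this code is from the paper, and the network's Hebbian learning is omitted entirely:

```python
import numpy as np

rng = np.random.default_rng(0)

trans = np.array([[0.9, 0.1],   # hidden-state transition matrix (toy values)
                  [0.2, 0.8]])
rates = np.array([2.0, 8.0])    # Poisson spike-count rate in each hidden state

n = 1000                        # number of samples ("spikes") per time step
states = rng.integers(0, 2, n)  # initial state samples

for obs in [1, 3, 9, 7, 2]:     # a short sequence of observed spike counts
    # Propagate each sample through the hidden dynamics ...
    states = np.array([rng.choice(2, p=trans[s]) for s in states])
    # ... then resample in proportion to the Poisson likelihood of obs
    # (the obs! term is omitted since it is shared by all samples).
    lik = rates[states] ** obs * np.exp(-rates[states])
    states = rng.choice(states, size=n, p=lik / lik.sum())
    print("P(state=1 | data so far):", states.mean())
```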

  3. A spatially resolved network spike in model neuronal cultures reveals nucleation centers, circular traveling waves and drifting spiral waves.

    PubMed

    Paraskevov, A V; Zendrikov, D K

    2017-03-23

    We show that in model neuronal cultures, where the probability of interneuronal connection formation decreases exponentially with increasing distance between the neurons, there exists a small number of spatial nucleation centers of a network spike, from which the synchronous spiking activity starts propagating through the network, typically in the form of circular traveling waves. The number of nucleation centers and their spatial locations are unique and unchanged for a given realization of the neuronal network, but differ between networks. In contrast, if the probability of interneuronal connection formation is independent of the distance between neurons, then nucleation centers do not arise and the synchronization of spiking activity during a network spike occurs spatially uniformly throughout the network. One can therefore conclude that spatial proximity of connections between neurons is important for the formation of nucleation centers. It is also shown that fluctuations of the spatial density of neurons under the random homogeneous distribution typical of in vitro experiments do not determine the locations of the nucleation centers. The simulation results are qualitatively consistent with the experimental observations.

  5. Towards Reproducible Descriptions of Neuronal Network Models

    PubMed Central

    Nordlie, Eilen; Gewaltig, Marc-Oliver; Plesser, Hans Ekkehard

    2009-01-01

    Progress in science depends on the effective exchange of ideas among scientists. New ideas can be assessed and criticized in a meaningful manner only if they are formulated precisely. This applies to simulation studies as well as to experiments and theories. But after more than 50 years of neuronal network simulations, we still lack a clear and common understanding of the role of computational models in neuroscience as well as established practices for describing network models in publications. This hinders the critical evaluation of network models as well as their re-use. We analyze here 14 research papers proposing neuronal network models of different complexity and find widely varying approaches to model descriptions, with regard to both the means of description and the ordering and placement of material. We further observe great variation in the graphical representation of networks and the notation used in equations. Based on our observations, we propose a good model description practice, composed of guidelines for the organization of publications, a checklist for model descriptions, templates for tables presenting model structure, and guidelines for diagrams of networks. The main purpose of this good practice is to trigger a debate about the communication of neuronal network models in a manner comprehensible to humans, as opposed to machine-readable model description languages. We believe that the good model description practice proposed here, together with a number of other recent initiatives on data-, model-, and software-sharing, may lead to a deeper and more fruitful exchange of ideas among computational neuroscientists in years to come. We further hope that work on standardized ways of describing—and thinking about—complex neuronal networks will lead the scientific community to a clearer understanding of high-level concepts in network dynamics, and will thus lead to deeper insights into the function of the brain. PMID:19662159

  6. The application of the multi-alternative approach in active neural network models

    NASA Astrophysics Data System (ADS)

    Podvalny, S.; Vasiljev, E.

    2017-02-01

    The article addresses the construction of intelligent systems based on artificial neural networks. We discuss the basic discrepancies between artificial neural networks and their biological prototypes, and show that the main reason for these discrepancies is the structural immutability of neural network models during learning, that is, their passivity. Based on the modern understanding of the biological nervous system as a structured ensemble of nerve cells, it is proposed to abandon attempts to simulate its operation at the level of elementary neuron processes and instead to reproduce the information structure of data storage and processing on the basis of sufficiently general evolutionary principles of multialternativity: multi-level structure, diversity and modularity. A method for implementing these principles is offered, using a faceted memory organization in a neural network with a rearrangeable active structure. An example is given of applying an active facet-type neural network in an intelligent decision-making system under conditions of critical-event development in an electrical distribution system.

  7. Modeling mesoscopic cortical dynamics using a mean-field model of conductance-based networks of adaptive exponential integrate-and-fire neurons.

    PubMed

    Zerlaut, Yann; Chemla, Sandrine; Chavane, Frederic; Destexhe, Alain

    2018-02-01

    Voltage-sensitive dye imaging (VSDi) has revealed fundamental properties of neocortical processing at macroscopic scales. Since for each pixel VSDi signals report the average membrane potential over hundreds of neurons, it seems natural to use a mean-field formalism to model such signals. Here, we present a mean-field model of networks of Adaptive Exponential (AdEx) integrate-and-fire neurons, with conductance-based synaptic interactions. We study a network of regular-spiking (RS) excitatory neurons and fast-spiking (FS) inhibitory neurons. We use a Master Equation formalism, together with a semi-analytic approach to the transfer function of AdEx neurons to describe the average dynamics of the coupled populations. We compare the predictions of this mean-field model to simulated networks of RS-FS cells, first at the level of the spontaneous activity of the network, which is well predicted by the analytical description. Second, we investigate the response of the network to time-varying external input, and show that the mean-field model predicts the response time course of the population. Finally, to model VSDi signals, we consider a one-dimensional ring model made of interconnected RS-FS mean-field units. We found that this model can reproduce the spatio-temporal patterns seen in VSDi of awake monkey visual cortex as a response to local and transient visual stimuli. Conversely, we show that the model allows one to infer physiological parameters from the experimentally-recorded spatio-temporal patterns.
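
    For reference, the AdEx neuron underlying these networks augments the leaky integrate-and-fire equation with an exponential spike-initiation term and an adaptation current w. A single-neuron forward-Euler sketch with the standard regular-spiking parameter set follows; the conductance-based synapses and the mean-field machinery of the paper are not reproduced here:

```python
import numpy as np

# Standard AdEx regular-spiking parameters (Brette & Gerstner, 2005).
c_m, g_l, e_l = 281.0, 30.0, -70.6   # pF, nS, mV
v_t, d_t = -50.4, 2.0                # spike threshold and slope factor (mV)
tau_w, a, b = 144.0, 4.0, 80.5       # adaptation: ms, nS, pA
v_reset, v_spike = -70.6, 0.0        # reset and numerical spike cutoff (mV)

dt, i_ext = 0.1, 800.0               # ms, pA step current (illustrative)
v, w, spike_times = e_l, 0.0, []

for step in range(int(1000.0 / dt)):                # simulate 1 s
    dv = (-g_l * (v - e_l) + g_l * d_t * np.exp((v - v_t) / d_t) - w + i_ext) / c_m
    dw = (a * (v - e_l) - w) / tau_w
    v, w = v + dt * dv, w + dt * dw
    if v >= v_spike:                                # spike: reset, bump adaptation
        v, w = v_reset, w + b
        spike_times.append(step * dt)

print(len(spike_times), "spikes in 1 s")
```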

  8. Modeling somatic and dendritic spike mediated plasticity at the single neuron and network level.

    PubMed

    Bono, Jacopo; Clopath, Claudia

    2017-09-26

    Synaptic plasticity is thought to be the principal neuronal mechanism underlying learning. Models of plastic networks typically combine point neurons with spike-timing-dependent plasticity (STDP) as the learning rule. However, a point neuron does not capture the local non-linear processing of synaptic inputs allowed for by dendrites. Furthermore, experimental evidence suggests that STDP is not the only learning rule available to neurons. By implementing biophysically realistic neuron models, we study how dendrites enable multiple synaptic plasticity mechanisms to coexist in a single cell. In these models, we compare the conditions for STDP and for synaptic strengthening by local dendritic spikes. We also explore how the connectivity between two cells is affected by these plasticity rules and by different synaptic distributions. Finally, we show how memory retention during associative learning can be prolonged in networks of neurons by including dendrites. Here the authors construct biophysical models of pyramidal neurons that reproduce observed plasticity gradients along the dendrite, and show that dendritic-spike-dependent LTP, which is predominant in distal sections, can prolong memory retention.
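
    As a point of reference for the STDP rule discussed above, the standard pair-based exponential window is sketched below; the amplitudes and time constants are conventional illustrative values, and the paper's biophysical dendritic models go well beyond this rule:

```python
import numpy as np

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a spike pair separated by dt = t_post - t_pre.
    Pre-before-post (dt > 0) potentiates; post-before-pre depresses."""
    if dt_ms > 0:
        return a_plus * np.exp(-dt_ms / tau_plus)
    return -a_minus * np.exp(dt_ms / tau_minus)

for dt in (-40.0, -10.0, 10.0, 40.0):
    print(f"dt = {dt:+.0f} ms  ->  dw = {stdp_dw(dt):+.5f}")
```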

  9. Computing the Local Field Potential (LFP) from Integrate-and-Fire Network Models.

    PubMed

    Mazzoni, Alberto; Lindén, Henrik; Cuntz, Hermann; Lansner, Anders; Panzeri, Stefano; Einevoll, Gaute T

    2015-12-01

    Leaky integrate-and-fire (LIF) network models are commonly used to study how the spiking dynamics of neural networks changes with stimuli, tasks or dynamic network states. However, neurophysiological studies in vivo often rather measure the mass activity of neuronal microcircuits with the local field potential (LFP). Given that LFPs are generated by spatially separated currents across the neuronal membrane, they cannot be computed directly from quantities defined in models of point-like LIF neurons. Here, we explore the best approximation for predicting the LFP based on standard output from point-neuron LIF networks. To search for this best "LFP proxy", we compared LFP predictions from candidate proxies based on LIF network output (e.g., firing rates, membrane potentials, synaptic currents) with "ground-truth" LFP obtained when the LIF network synaptic input currents were injected into an analogous three-dimensional (3D) network model of multi-compartmental neurons with realistic morphology, spatial distributions of somata and synapses. We found that a specific fixed linear combination of the LIF synaptic currents provided an accurate LFP proxy, accounting for most of the variance of the LFP time course observed in the 3D network for all recording locations. This proxy performed well over a broad set of conditions, including substantial variations of the neuronal morphologies. Our results provide a simple formula for estimating the time course of the LFP from LIF network simulations in cases where a single pyramidal population dominates the LFP generation, and thereby facilitate quantitative comparison between computational models and experimental LFP recordings in vivo.
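
    The recipe suggested by this result has the following shape: take the population-summed synaptic currents from the LIF simulation and combine their absolute values linearly, with a relative weight and delay fitted against the ground-truth LFP. In the schematic sketch below, the coefficient, delay, and sign conventions are placeholders, not the values fitted in the paper:

```python
import numpy as np

def lfp_proxy(i_exc, i_inh, alpha=1.0, delay_steps=60):
    """Weighted-sum proxy: combine absolute values of the population-summed
    excitatory and inhibitory synaptic currents, delaying the excitatory one.
    alpha and the delay (60 steps = 6 ms at dt = 0.1 ms) are placeholders."""
    i_exc_d = np.roll(i_exc, delay_steps)
    i_exc_d[:delay_steps] = i_exc[0]      # pad the initial transient
    return np.abs(i_exc_d) + alpha * np.abs(i_inh)

# Toy currents standing in for summed LIF network output.
rng = np.random.default_rng(0)
t = np.arange(0.0, 100.0, 0.1)
i_exc = np.sin(0.2 * t) + 0.1 * rng.standard_normal(t.size)
i_inh = -0.6 * np.sin(0.2 * t - 0.5)
print(lfp_proxy(i_exc, i_inh)[:5])
```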

  11. An Intelligent Ensemble Neural Network Model for Wind Speed Prediction in Renewable Energy Systems.

    PubMed

    Ranganayaki, V; Deepa, S N

    2016-01-01

    Various criteria are proposed for selecting the number of hidden neurons in artificial neural network (ANN) models, and based on the evolved criteria an intelligent ensemble neural network model is proposed to predict wind speed in renewable energy applications. The intelligent-ensemble-based wind speed forecasting is designed by averaging the forecast values from multiple neural network models, including the multilayer perceptron (MLP), multilayer adaptive linear neuron (Madaline), back-propagation neural network (BPN), and probabilistic neural network (PNN), so as to obtain better accuracy in wind speed prediction with minimum error. Random selection of the number of hidden neurons in an artificial neural network results in overfitting or underfitting problems, which this paper aims to avoid. The selection of the number of hidden neurons is carried out employing 102 criteria, and the evolved criteria are verified against various computed error values. The proposed criteria for fixing the number of hidden neurons are validated employing a convergence theorem. The proposed intelligent ensemble neural model is applied to wind speed prediction using real-time wind data collected from nearby locations. The simulation results substantiate that the proposed ensemble model reduces the error to a minimum and enhances the accuracy. The computed results demonstrate the effectiveness of the proposed ensemble neural network (ENN) model, with respect to the considered error factors, in comparison with earlier models in the literature.
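
    The ensemble step itself is plain averaging of member forecasts. The sketch below illustrates the idea with several differently sized scikit-learn MLP regressors standing in for the MLP, Madaline, BPN, and PNN members named above (which are not all available off the shelf); the data are synthetic:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, size=(500, 3))                        # toy weather features
y = x[:, 0] + 0.5 * np.sin(x[:, 1]) + rng.normal(0.0, 0.1, 500)  # toy wind speed

# Train several differently configured networks as ensemble members.
members = [MLPRegressor(hidden_layer_sizes=(h,), max_iter=3000, random_state=i)
           for i, h in enumerate((4, 8, 16, 32))]
for m in members:
    m.fit(x, y)

# The ensemble forecast is the average of the member forecasts.
x_new = rng.uniform(0.0, 10.0, size=(5, 3))
forecast = np.mean([m.predict(x_new) for m in members], axis=0)
print(forecast)
```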

  13. Quantitative 3D investigation of Neuronal network in mouse spinal cord model

    NASA Astrophysics Data System (ADS)

    Bukreeva, I.; Campi, G.; Fratini, M.; Spanò, R.; Bucci, D.; Battaglia, G.; Giove, F.; Bravin, A.; Uccelli, A.; Venturi, C.; Mastrogiacomo, M.; Cedola, A.

    2017-01-01

    The investigation of the neuronal network in mouse spinal cord models represents the basis for research on neurodegenerative diseases. In this framework, the quantitative analysis of the single elements in different districts is a crucial task. However, conventional 3D imaging techniques do not have enough spatial resolution and contrast to allow a quantitative investigation of the neuronal network. Exploiting the high coherence and high flux of synchrotron sources, X-ray phase-contrast multiscale tomography allows the 3D investigation of the neuronal microanatomy without any aggressive sample preparation or sectioning. We investigated healthy-mouse neuronal architecture by imaging the 3D distribution of the neuronal network with a spatial resolution of 640 nm. The high quality of the obtained images enables a quantitative study of the neuronal structure on a subject-by-subject basis. We developed and applied a spatial statistical analysis of the motor neurons to obtain quantitative information on their 3D arrangement in the healthy-mouse spinal cord. We then compared the obtained results with a mouse model of multiple sclerosis. Our approach paves the way to the creation of a “database” for the characterization of the main features of the neuronal network for comparative investigation of neurodegenerative diseases and therapies.

  14. Collective Dynamics for Heterogeneous Networks of Theta Neurons

    NASA Astrophysics Data System (ADS)

    Luke, Tanushree

    Collective behavior in neural networks has often been used as an indicator of communication between different brain areas. These collective synchronization and desynchronization patterns are also considered an important feature in understanding normal and abnormal brain function. To understand the emergence of these collective patterns, I create an analytic model that identifies all such macroscopic steady-states attainable by a network of Type-I neurons. This network, whose basic unit is the model "theta" neuron, contains a mixture of excitable and spiking neurons coupled via a smooth pulse-like synapse. Applying the Ott-Antonsen reduction method in the thermodynamic limit, I obtain a low-dimensional evolution equation that describes the asymptotic dynamics of the macroscopic mean field of the network. This model can be used as the basis for understanding more complicated neuronal networks when additional dynamical features are included. From this reduced dynamical equation for the mean field, I show that the network exhibits three collective attracting steady-states. The first two are equilibrium states that both reflect partial synchronization in the network, whereas the third is a limit cycle in which the degree of network synchronization oscillates in time. In addition to a comprehensive identification of all possible attracting macro-states, this analytic model permits a complete bifurcation analysis of the collective behavior of the network with respect to three key network features: the degree of excitability of the neurons, the heterogeneity of the population, and the overall coupling strength. The network typically tends towards the two macroscopic equilibrium states when the neurons' intrinsic dynamics and the network interactions reinforce each other. In contrast, the limit cycle state, bifurcations, and multistability tend to occur when there is competition between these network features. I also outline an extension of the above model in which the neurons' excitability varies sinusoidally in time, thus simulating a parabolic bursting network. This time-varying excitability can lead to the emergence of macroscopic chaos and multistability in the collective behavior of the network. Finally, I expand the single-population model described above to examine a two-population neuronal network in which each population has its own unique mixture of excitable and spiking neurons, as well as its own coupling strength (either excitatory or inhibitory in nature). Specifically, I consider the situation where the first population is allowed to influence the second population without receiving any feedback, thus effectively creating a feed-forward "driver-response" system. In this special arrangement, the driver's asymptotic macroscopic dynamics are fully explored in the comprehensive analysis of the single population. Then, in the presence of an influence from the driver, the modified dynamics of the second population, which now acts as a response population, can also be fully analyzed. As in the time-varying model, these modifications give rise to richer dynamics in the response population than those found in the single-population formalism, including multi-periodicity and chaos.
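
    The "theta" neuron at the base of this analysis is the normal form of a Type-I excitable system: a single phase variable on the circle, with a spike registered each time the phase crosses pi. A minimal sketch, where the sign of the excitability parameter eta separates excitable from spiking behavior (the values used are illustrative):

```python
import numpy as np

def theta_neuron(eta, t_max=200.0, dt=0.01):
    """Integrate dtheta/dt = (1 - cos(theta)) + (1 + cos(theta)) * eta.
    eta < 0 gives an excitable neuron, eta > 0 a regularly spiking one."""
    theta, spikes = -np.pi / 2, 0
    for _ in range(int(t_max / dt)):
        theta += dt * ((1.0 - np.cos(theta)) + (1.0 + np.cos(theta)) * eta)
        if theta >= np.pi:   # the phase crossed pi: register a spike and wrap
            spikes += 1
            theta -= 2.0 * np.pi
    return spikes

print("excitable (eta=-0.2):", theta_neuron(-0.2), "spikes")
print("spiking   (eta=+0.5):", theta_neuron(0.5), "spikes")
```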

  15. Lateral Information Processing by Spiking Neurons: A Theoretical Model of the Neural Correlate of Consciousness

    PubMed Central

    Ebner, Marc; Hameroff, Stuart

    2011-01-01

    Cognitive brain functions, for example, sensory perception, motor control and learning, are understood as computation by axonal-dendritic chemical synapses in networks of integrate-and-fire neurons. Cognitive brain functions may occur either consciously or nonconsciously (on “autopilot”). Conscious cognition is marked by gamma-synchrony EEG, mediated largely by dendritic-dendritic gap junctions, sideways connections in input/integration layers. Gap-junction-connected neurons define a sub-network within a larger neural network. A theoretical model (the “conscious pilot”) suggests that as gap junctions open and close, a gamma-synchronized subnetwork, or zone, moves through the brain as an executive agent, converting nonconscious “auto-pilot” cognition to consciousness, and enhancing computation by coherent processing and collective integration. In this study we implemented sideways “gap junctions” in a single-layer artificial neural network to perform figure/ground separation. The set of neurons connected through gap junctions forms a reconfigurable resistive grid or sub-network zone. In the model, outgoing spikes are temporally integrated and spatially averaged using the fixed resistive grid set up by neurons of similar function connected through gap junctions. This spatial average, essentially a feedback signal from the neuron's output, determines whether particular gap junctions between neurons will open or close. Neurons connected through open gap junctions synchronize their output spikes. We have tested our gap-junction-defined sub-network in a one-layer neural network on artificial retinal inputs using real-world images. Our system is able to perform figure/ground separation where the laterally connected sub-network of neurons represents a perceived object. Even though we only show results for visual stimuli, our approach should generalize to other modalities. The system demonstrates a moving sub-network zone of synchrony, within which the contents of perception are represented and contained. This mobile zone can be viewed as a model of the neural correlate of consciousness in the brain. PMID:22046178

  17. Simulating synchronization in neuronal networks

    NASA Astrophysics Data System (ADS)

    Fink, Christian G.

    2016-06-01

    We discuss several techniques used in simulating neuronal networks by exploring how a network's connectivity structure affects its propensity for synchronous spiking. Network connectivity is generated using the Watts-Strogatz small-world algorithm, and two key measures of network structure are described. These measures quantify structural characteristics that influence collective neuronal spiking, which is simulated using the leaky integrate-and-fire model. Simulations show that adding a small number of random connections to an otherwise lattice-like connectivity structure leads to a dramatic increase in neuronal synchronization.
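
    The experiment described here is easy to reproduce in outline: generate small-world graphs at increasing rewiring probability and track the two standard structural measures, the clustering coefficient and the mean shortest path length. A sketch using networkx (graph size and degree are arbitrary; the leaky integrate-and-fire dynamics are omitted):

```python
import networkx as nx

n, k = 200, 8                       # neurons and nearest neighbors per neuron
for p in (0.0, 0.01, 0.1, 1.0):     # rewiring probability
    g = nx.connected_watts_strogatz_graph(n, k, p, seed=1)
    print(f"p={p:<5}  clustering={nx.average_clustering(g):.3f}  "
          f"mean path length={nx.average_shortest_path_length(g):.2f}")
```

    A few random shortcuts (small p) collapse the path length while leaving clustering nearly intact, which is the structural regime in which the abstract reports a dramatic increase in synchronization.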

  18. Network Receptive Field Modeling Reveals Extensive Integration and Multi-feature Selectivity in Auditory Cortical Neurons.

    PubMed

    Harper, Nicol S; Schoppe, Oliver; Willmore, Ben D B; Cui, Zhanfeng; Schnupp, Jan W H; King, Andrew J

    2016-11-01

    Cortical sensory neurons are commonly characterized using the receptive field, the linear dependence of their response on the stimulus. In primary auditory cortex, neurons can be characterized by their spectrotemporal receptive fields, the spectral and temporal features of a sound that linearly drive a neuron. However, receptive fields do not capture the fact that the response of a cortical neuron results from the complex nonlinear network in which it is embedded. By fitting a nonlinear feedforward network model (a network receptive field) to cortical responses to natural sounds, we reveal that primary auditory cortical neurons are sensitive over a substantially larger spectrotemporal domain than is seen in their standard spectrotemporal receptive fields. Furthermore, the network receptive field, a parsimonious network consisting of 1-7 sub-receptive fields that interact nonlinearly, consistently predicts neural responses to auditory stimuli better than the standard receptive fields. The network receptive field reveals separate excitatory and inhibitory sub-fields with different nonlinear properties, and interaction of the sub-fields gives rise to important operations such as gain control and conjunctive feature detection. The conjunctive effects, where neurons respond only if several specific features are present together, enable increased selectivity for particular complex spectrotemporal structures, and may constitute an important stage in sound recognition. In conclusion, we demonstrate that fitting auditory cortical neural responses with feedforward network models expands on simple linear receptive field models in a manner that yields substantially improved predictive power and reveals key nonlinear aspects of cortical processing, while remaining easy to interpret in a physiological context.

  20. Neural Dynamics as Sampling: A Model for Stochastic Computation in Recurrent Networks of Spiking Neurons

    PubMed Central

    Buesing, Lars; Bill, Johannes; Nessler, Bernhard; Maass, Wolfgang

    2011-01-01

    The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons. PMID:22096452

  1. Improved system identification using artificial neural networks and analysis of individual differences in responses of an identified neuron.

    PubMed

    Costalago Meruelo, Alicia; Simpson, David M; Veres, Sandor M; Newland, Philip L

    2016-03-01

    Mathematical modelling is used routinely to understand the coding properties and dynamics of responses of neurons and neural networks. Here we analyse the effectiveness of Artificial Neural Networks (ANNs) as a modelling tool for motor neuron responses. We used ANNs to model the synaptic responses of an identified motor neuron, the fast extensor motor neuron of the desert locust, in response to displacement of a sensory organ, the femoral chordotonal organ, which monitors movements of the tibia relative to the femur of the leg. The aim of the study was threefold: first, to determine the potential value of ANNs as tools to model and investigate neural networks; second, to understand the generalisation properties of ANNs across individuals and across different input signals; and third, to understand individual differences in the responses of an identified neuron. A metaheuristic algorithm was developed to design the ANN architectures. The performance of the models generated by the ANNs was compared with that of previous mathematical models of the same neuron. The results suggest that ANNs are significantly better than LNL and Wiener models in predicting specific neural responses to Gaussian white noise, but not significantly different when tested with sinusoidal inputs. They are also able to predict responses of the same neuron in different individuals irrespective of which animal was used to develop the model, although notable differences between some individuals were evident. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  2. An artificial network model for estimating the network structure underlying partially observed neuronal signals.

    PubMed

    Komatsu, Misako; Namikawa, Jun; Chao, Zenas C; Nagasaka, Yasuo; Fujii, Naotaka; Nakamura, Kiyohiko; Tani, Jun

    2014-01-01

    Many previous studies have proposed methods for quantifying neuronal interactions. However, these methods evaluated the interactions between recorded signals in an isolated network. In this study, we present a novel approach for estimating interactions between observed neuronal signals by theorizing that those signals are observed from only a part of the network that also includes unobserved structures. We propose a variant of the recurrent network model that consists of both observable and unobservable units. The observable units represent recorded neuronal activity, and the unobservable units are introduced to represent activity from unobserved structures in the network. The network structures are characterized by connective weights, i.e., the interaction intensities between individual units, which are estimated from recorded signals. We applied this model to multi-channel brain signals recorded from monkeys, and obtained robust network structures with physiological relevance. Furthermore, the network exhibited common features that portrayed cortical dynamics as inversely correlated interactions between excitatory and inhibitory populations of neurons, which are consistent with the previous view of cortical local circuits. Our results suggest that the novel concept of incorporating an unobserved structure into network estimations has theoretical advantages and could provide insights into brain dynamics beyond what can be directly observed. Copyright © 2014 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.

  3. Associative memory in phasing neuron networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nair, Niketh S; Bochove, Erik J.; Braiman, Yehuda

    2014-01-01

    We studied pattern formation in a network of coupled Hindmarsh-Rose model neurons and introduced a new model for associative memory retrieval using networks of Kuramoto oscillators. Hindmarsh-Rose Neural Networks can exhibit a rich set of collective dynamics that can be controlled by their connectivity. Specifically, we showed an instance of Hebb's rule where spiking was correlated with network topology. Based on this, we presented a simple model of associative memory in coupled phase oscillators.
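
    The Kuramoto network mentioned above is compact enough to state in full: each oscillator advances at its natural frequency plus a sinusoidal coupling to every other phase, and the order parameter r measures global coherence. A minimal sketch (the coupling strength and frequency distribution are illustrative; the associative-memory pattern weighting of the paper is not included):

```python
import numpy as np

rng = np.random.default_rng(0)
n, coupling, dt = 100, 3.0, 0.01
omega = rng.normal(0.0, 1.0, n)            # natural frequencies
theta = rng.uniform(0.0, 2.0 * np.pi, n)   # initial phases

for _ in range(5000):
    # dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
    interaction = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta += dt * (omega + coupling / n * interaction)

r = abs(np.exp(1j * theta).mean())         # 0 = incoherent, 1 = fully locked
print(f"order parameter r = {r:.2f}")
```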

  4. Synchronous behaviour in network model based on human cortico-cortical connections.

    PubMed

    Protachevicz, Paulo Ricardo; Borges, Rafael Ribaski; Reis, Adriane da Silva; Borges, Fernando da Silva; Iarosz, Kelly Cristina; Caldas, Ibere Luiz; Lameu, Ewandson Luiz; Macau, Elbert Einstein Nehrer; Viana, Ricardo Luiz; Sokolov, Igor M; Ferrari, Fabiano A S; Kurths, Jürgen; Batista, Antonio Marcos

    2018-06-22

    We consider a network topology based on the cortico-cortical connection network of the human brain, where each cortical area is composed of a random network of adaptive exponential integrate-and-fire neurons. Depending on the parameters, this neuron model can exhibit spike or burst patterns. As a diagnostic tool to identify spike and burst patterns, we utilise the coefficient of variation of the neuronal inter-spike interval. In our neuronal network, we verify the existence of spike and burst synchronisation in different cortical areas. Our simulations show that the network arrangement, i.e., its rich-club organisation, plays an important role in the transition of the areas from desynchronous to synchronous behaviours. © 2018 Institute of Physics and Engineering in Medicine.
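
    The diagnostic used here is worth making concrete: the coefficient of variation of the inter-spike interval is CV = std(ISI)/mean(ISI), low for tonic spiking and high for bursting. A small helper with synthetic spike trains (the burst timing below is illustrative):

```python
import numpy as np

def isi_cv(spike_times):
    """Coefficient of variation of the inter-spike intervals."""
    isi = np.diff(np.sort(np.asarray(spike_times)))
    return isi.std() / isi.mean()

regular = np.arange(0.0, 1000.0, 10.0)   # tonic spiking: CV near 0
bursty = np.concatenate([start + np.arange(0.0, 20.0, 2.0)  # 10 spikes per burst,
                         for start in np.arange(0.0, 1000.0, 100.0)])  # every 100 ms
print(f"tonic CV = {isi_cv(regular):.2f}   bursting CV = {isi_cv(bursty):.2f}")
```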

  5. Adaptive Neurons For Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1990-01-01

    Training time decreases dramatically. In improved mathematical model of neural-network processor, temperature of neurons (in addition to connection strengths, also called weights, of synapses) varied during supervised-learning phase of operation according to mathematical formalism and not heuristic rule. Evidence that biological neural networks also process information at neuronal level.

  6. Extensive excitatory network interactions shape temporal processing of communication signals in a model sensory system.

    PubMed

    Ma, Xiaofeng; Kohashi, Tsunehiko; Carlson, Bruce A

    2013-07-01

    Many sensory brain regions are characterized by extensive local network interactions. However, we know relatively little about the contribution of this microcircuitry to sensory coding. Detailed analyses of neuronal microcircuitry are usually performed in vitro, whereas sensory processing is typically studied by recording from individual neurons in vivo. The electrosensory pathway of mormyrid fish provides a unique opportunity to link in vitro studies of synaptic physiology with in vivo studies of sensory processing. These fish communicate by actively varying the intervals between pulses of electricity. Within the midbrain posterior exterolateral nucleus (ELp), the temporal filtering of afferent spike trains establishes interval tuning by single neurons. We characterized pairwise neuronal connectivity among ELp neurons with dual whole cell recording in an in vitro whole brain preparation. We found a densely connected network in which single neurons influenced the responses of other neurons throughout the network. Similarly tuned neurons were more likely to share an excitatory synaptic connection than differently tuned neurons, and synaptic connections between similarly tuned neurons were stronger than connections between differently tuned neurons. We propose a general model for excitatory network interactions in which strong excitatory connections both reinforce and adjust tuning and weak excitatory connections make smaller modifications to tuning. The diversity of interval tuning observed among this population of neurons can be explained, in part, by each individual neuron receiving a different complement of local excitatory inputs.

  7. Chimera-like states in a neuronal network model of the cat brain

    NASA Astrophysics Data System (ADS)

    Santos, M. S.; Szezech, J. D.; Borges, F. S.; Iarosz, K. C.; Caldas, I. L.; Batista, A. M.; Viana, R. L.; Kurths, J.

    2017-08-01

    Neuronal systems have been modeled by complex networks at different description levels. Recently, it has been verified that networks can simultaneously exhibit one coherent and another incoherent domain, known as chimera states. In this work, we study the existence of chimera states in a network whose connectivity matrix is based on the cat cerebral cortex. The cerebral cortex of the cat can be separated into 65 cortical areas organised into four cognitive regions: visual, auditory, somatosensory-motor and frontolimbic. We consider a network where the local dynamics is given by the Hindmarsh-Rose model. The Hindmarsh-Rose equations are a well-known model of neuronal activity that has been used to simulate the membrane potential of neurons. Here, we analyse under which conditions chimera states are present, as well as the effects of coupling intensity on them. We observe chimera states in which the incoherent domain can be composed of desynchronised spikes or of desynchronised bursts. Moreover, we find that chimera states with desynchronised bursts are more robust to neuronal noise than those with desynchronised spikes.
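
    The Hindmarsh-Rose model that supplies the local dynamics above is a three-variable system whose slow variable z switches the neuron between spiking and bursting. A single-neuron forward-Euler sketch with the customary parameter values follows; the cat connectivity matrix and the coupling scheme are not reproduced here:

```python
import numpy as np

# Customary Hindmarsh-Rose parameters; i_ext selects spiking vs. bursting.
a, b, c, d = 1.0, 3.0, 1.0, 5.0
r, s, x_rest = 0.006, 4.0, -1.6
i_ext, dt = 3.0, 0.01

x, y, z = -1.5, 0.0, 0.0
xs = []
for _ in range(200_000):
    dx = y - a * x**3 + b * x**2 - z + i_ext   # fast membrane-like variable
    dy = c - d * x**2 - y                      # fast recovery variable
    dz = r * (s * (x - x_rest) - z)            # slow adaptation variable
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    xs.append(x)

print(f"membrane-like variable range: [{min(xs):.2f}, {max(xs):.2f}]")
```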

  8. On the continuous differentiability of inter-spike intervals of synaptically connected cortical spiking neurons in a neuronal network.

    PubMed

    Kumar, Gautam; Kothare, Mayuresh V

    2013-12-01

    We derive conditions for continuous differentiability of inter-spike intervals (ISIs) of spiking neurons with respect to parameters (decision variables) of an external stimulating input current that drives a recurrent network of synaptically connected neurons. The dynamical behavior of individual neurons is represented by a class of discontinuous single-neuron models. We report here that ISIs of neurons in the network are continuously differentiable with respect to decision variables if (1) a continuously differentiable trajectory of the membrane potential exists between consecutive action potentials with respect to time and decision variables and (2) the partial derivative of the membrane potential of spiking neurons with respect to time is not equal to the partial derivative of their firing threshold with respect to time at the time of action potentials. Our theoretical results are supported by showing fulfillment of these conditions for a class of known bidimensional spiking neuron models.

  9. Hopf bifurcation of an (n+1)-neuron bidirectional associative memory neural network model with delays.

    PubMed

    Xiao, Min; Zheng, Wei Xing; Cao, Jinde

    2013-01-01

    Recent studies on Hopf bifurcations of neural networks with delays are confined to simplified neural network models consisting of only two, three, four, five, or six neurons. It is well known that neural networks are complex and large-scale nonlinear dynamical systems, so the dynamics of the delayed neural networks are very rich and complicated. Although discussing the dynamics of networks with a few neurons may help us to understand large-scale networks, there are inevitably some complicated problems that may be overlooked if simplified networks are carried over to large-scale networks. In this paper, a general delayed bidirectional associative memory neural network model with n + 1 neurons is considered. By analyzing the associated characteristic equation, the local stability of the trivial steady state is examined, and then the existence of the Hopf bifurcation at the trivial steady state is established. By applying the normal form theory and the center manifold reduction, explicit formulae are derived to determine the direction and stability of the bifurcating periodic solution. Furthermore, the paper highlights situations where the Hopf bifurcations are particularly critical, in the sense that the amplitude and the period of oscillations are very sensitive to errors due to tolerances in the implementation of neuron interconnections. It is shown that the sensitivity is crucially dependent on the delay and also significantly influenced by the feature of the number of neurons. Numerical simulations are carried out to illustrate the main results.

  10. Effects of inhibitory neurons on the quorum percolation model and dynamical extension with the Brette-Gerstner model

    NASA Astrophysics Data System (ADS)

    Fardet, Tanguy; Bottani, Samuel; Métens, Stéphane; Monceau, Pascal

    2018-06-01

    The Quorum Percolation model (QP) has been designed in the context of neurobiology to describe the initiation of activity bursts occurring in neuronal cultures from the point of view of statistical physics rather than from a dynamical synchronization approach. This paper investigates an extension of the original QP model that takes into account the presence of inhibitory neurons in the cultures (IQP model). The first part of this paper is focused on an equivalence between the presence of inhibitory neurons and a reduction of the network connectivity. By relying on a simple topological argument, we show that the mean activation behavior of networks containing a fraction η of inhibitory neurons can be mapped onto purely excitatory networks with an appropriately modified wiring, provided that η remains in the range usually observed in neuronal cultures, namely η ⪅ 20%. Strikingly, such a mapping makes it possible to predict the evolution of the critical point of the IQP model with the fraction of inhibitory neurons. In the second part, we bridge the gap between the description of bursts in the framework of percolation and the temporal description of neural network activity by showing how dynamical simulations of bursts with an adaptive exponential integrate-and-fire model lead to a mean description of burst activation that is captured by quorum percolation.

  11. Beyond Critical Exponents in Neuronal Avalanches

    NASA Astrophysics Data System (ADS)

    Friedman, Nir; Butler, Tom; Deville, Robert; Beggs, John; Dahmen, Karin

    2011-03-01

    Neurons form a complex network in the brain, where they interact with one another by firing electrical signals. Neurons firing can trigger other neurons to fire, potentially causing avalanches of activity in the network. In many cases these avalanches have been found to be scale independent, similar to critical phenomena in diverse systems such as magnets and earthquakes. We discuss models for neuronal activity that allow for the extraction of testable, statistical predictions. We compare these models to experimental results, and go beyond critical exponents.

  12. From in silico astrocyte cell models to neuron-astrocyte network models: A review.

    PubMed

    Oschmann, Franziska; Berry, Hugues; Obermayer, Klaus; Lenk, Kerstin

    2018-01-01

    The idea that astrocytes may be active partners in synaptic information processing has recently emerged from abundant experimental reports. Because of their spatial proximity to neurons and their bidirectional communication with them, astrocytes are now considered an important third element of the synapse. Astrocytes integrate and process synaptic information and, by doing so, generate cytosolic calcium signals that are believed to reflect neuronal transmitter release. Moreover, they regulate neuronal information transmission by releasing gliotransmitters into the synaptic cleft, affecting both pre- and postsynaptic receptors. Concurrent with the first experimental reports of the astrocytic impact on neural network dynamics, computational models describing astrocytic functions have been developed. In this review, we give an overview of the published computational models of astrocytic functions, from single-cell dynamics to the tripartite synapse level and network models of astrocytes and neurons. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Biophysical synaptic dynamics in an analog VLSI network of Hodgkin-Huxley neurons.

    PubMed

    Yu, Theodore; Cauwenberghs, Gert

    2009-01-01

    We study synaptic dynamics in a biophysical network of four coupled spiking neurons implemented in an analog VLSI silicon microchip. The four neurons implement a generalized Hodgkin-Huxley model with individually configurable rate-based kinetics of opening and closing of Na+ and K+ ion channels. The twelve synapses implement a rate-based first-order kinetic model of neurotransmitter and receptor dynamics, accounting for NMDA and non-NMDA type chemical synapses. The implemented models on the chip are fully configurable by 384 parameters accounting for conductances, reversal potentials, and pre/post-synaptic voltage dependence of the channel kinetics. We describe the models and present experimental results from the chip characterizing single-neuron dynamics, single-synapse dynamics, and multi-neuron network dynamics showing phase-locking behavior as a function of synaptic coupling strength. The 3 mm x 3 mm microchip consumes 1.29 mW of power, making it promising for applications including neuromorphic modeling and neural prostheses.

  14. Constructing Neuronal Network Models in Massively Parallel Environments.

    PubMed

    Ippen, Tammo; Eppler, Jochen M; Plesser, Hans E; Diesmann, Markus

    2017-01-01

    Recent advances in the development of data structures to represent spiking neuron network models enable us to exploit the complete memory of petascale computers for a single brain-scale network simulation. In this work, we investigate how well we can exploit the computing power of such supercomputers for the creation of neuronal networks. Using an established benchmark, we divide the runtime of simulation code into the phase of network construction and the phase during which the dynamical state is advanced in time. We find that on multi-core compute nodes network creation scales well with process-parallel code but exhibits a prohibitively large memory consumption. Thread-parallel network creation, in contrast, exhibits speedup only up to a small number of threads but has little overhead in terms of memory. We further observe that the algorithms creating instances of model neurons and their connections scale well for networks of ten thousand neurons, but do not show the same speedup for networks of millions of neurons. Our work shows that the lack of scaling of thread-parallel network creation is due to inadequate memory allocation strategies and demonstrates that thread-optimized memory allocators recover excellent scaling. An analysis of the loop order used for network construction reveals that more complex tests on the locality of operations significantly improve scaling and reduce runtime by allowing construction algorithms to step through large networks more efficiently than in existing code. The combination of these techniques increases performance by an order of magnitude and harnesses the increasingly parallel compute power of the compute nodes in high-performance clusters and supercomputers.

  15. Hybrid discrete-time neural networks.

    PubMed

    Cao, Hongjun; Ibarz, Borja

    2010-11-13

    Hybrid dynamical systems combine evolution equations with state transitions. When the evolution equations are discrete-time (also called map-based), the result is a hybrid discrete-time system. A class of biological neural network models that has recently received some attention falls within this category: map-based neuron models connected by means of fast threshold modulation (FTM). FTM is a connection scheme that aims to mimic the switching dynamics of a neuron subject to synaptic inputs. The dynamic equations of the neuron adopt different forms according to the state (either firing or not firing) and type (excitatory or inhibitory) of their presynaptic neighbours. Therefore, the mathematical model of one such network is a combination of discrete-time evolution equations with transitions between states, constituting a hybrid discrete-time (map-based) neural network. In this paper, we review previous work within the context of these models, exemplifying useful techniques to analyse them. Typical map-based neuron models are low-dimensional and amenable to phase-plane analysis. In bursting models, fast-slow decomposition can be used to reduce dimensionality further, so that the dynamics of a pair of connected neurons can be easily understood. We also discuss a model that includes electrical synapses in addition to chemical synapses with FTM. Furthermore, we describe how master stability functions can predict the stability of synchronized states in these networks. The main results are extended to larger map-based neural networks.
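
    For readers unfamiliar with map-based networks, the sketch below iterates the chaotic Rulkov map, a typical low-dimensional map-based neuron, with FTM-style coupling: a synapse contributes current only while the presynaptic fast variable sits above a threshold, pushing the postsynaptic neuron toward a reversal value. This is a generic illustration of the model class, not the specific systems analyzed in the paper; names and parameter values are ours.

      import numpy as np

      def rulkov_ftm(adj, steps=2000, alpha=4.3, mu=0.001, sigma=0.1,
                     g=0.05, theta=0.0, nu=1.0, seed=0):
          """Chaotic Rulkov map neurons (fast x, slow y) with FTM coupling."""
          rng = np.random.default_rng(seed)
          n = adj.shape[0]
          x = rng.uniform(-1.0, 1.0, n)                # fast, voltage-like variable
          y = np.full(n, -2.9)                         # slow modulation variable
          trace = np.empty((steps, n))
          for t in range(steps):
              firing = x > theta                       # presynaptic state gates the synapse
              syn = g * (adj @ firing) * (nu - x)      # FTM: drive toward reversal nu
              x, y = alpha / (1.0 + x**2) + y + syn, y - mu * (x + 1.0 - sigma)
              trace[t] = x
          return trace                                 # e.g. adj = 0/1 excitatory matrix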

  16. Constructing Precisely Computing Networks with Biophysical Spiking Neurons.

    PubMed

    Schwemmer, Michael A; Fairhall, Adrienne L; Denéve, Sophie; Shea-Brown, Eric T

    2015-07-15

    While spike timing has been shown to carry detailed stimulus information at the sensory periphery, its possible role in network computation is less clear. Most models of computation by neural networks are based on population firing rates. In equivalent spiking implementations, firing is assumed to be random such that averaging across populations of neurons recovers the rate-based approach. Recently, however, Denéve and colleagues have suggested that the spiking behavior of neurons may be fundamental to how neuronal networks compute, with precise spike timing determined by each neuron's contribution to producing the desired output (Boerlin and Denéve, 2011; Boerlin et al., 2013). By postulating that each neuron fires to reduce the error in the network's output, it was demonstrated that linear computations can be performed by networks of integrate-and-fire neurons that communicate through instantaneous synapses. This left open, however, the possibility that realistic networks, with conductance-based neurons with subthreshold nonlinearity and the slower timescales of biophysical synapses, may not fit into this framework. Here, we show how the spike-based approach can be extended to biophysically plausible networks: we derive a network of neurons with standard spike-generating currents and synapses with realistic timescales that computes on the principle that the precise timing of each spike is important for the computation. We then show that our network reproduces a number of key features of cortical networks, including irregular and Poisson-like spike times and a tight balance between excitation and inhibition. Lastly, we discuss how the behavior of our model scales with network size or with the number of neurons "recorded" from a larger computing network. These results significantly increase the biological plausibility of the spike-based approach to network computation and uncover how several components of biological networks may work together to carry out computation efficiently. Copyright © 2015 the authors.

  17. Nonlinear multiplicative dendritic integration in neuron and network models

    PubMed Central

    Zhang, Danke; Li, Yuanqing; Rasch, Malte J.; Wu, Si

    2013-01-01

    Neurons receive inputs from thousands of synapses distributed across dendritic trees of complex morphology. It is known that dendritic integration of excitatory and inhibitory synapses can be highly nonlinear and can depend heavily on the exact location and spatial arrangement of inhibitory and excitatory synapses on the dendrite. Despite this known fact, most neuron models used in artificial neural networks today still describe only the membrane potential of a single somatic compartment and assume a simple linear summation of all individual synaptic inputs. Here we suggest a new biophysically motivated derivation of a single-compartment model that integrates the nonlinear effects of shunting inhibition, where an inhibitory input on the route of an excitatory input to the soma cancels or "shunts" the excitatory potential. In particular, our integration of nonlinear dendritic processing into the neuron model follows a simple multiplicative rule, suggested recently by experiments, and allows for strict mathematical treatment of network effects. Using our new formulation, we further devised a spiking network model where inhibitory neurons act as global shunting gates, and show that the network exhibits persistent activity in a low firing regime. PMID:23658543
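
    As an illustration of such a rule, the sketch below drives a leaky integrator with excitation and inhibition combined multiplicatively rather than additively. The specific algebraic form u = E + I + k*E*I and all names and parameter values are our assumption for illustration, not the paper's exact derivation.

      def shunting_lif(exc, inh, k=0.5, tau=20.0, dt=1.0, v_th=1.0, v_reset=0.0):
          """Leaky integrator with multiplicative (shunting-style) integration:
          the effective drive is u = E + I + k*E*I instead of u = E + I."""
          v, spikes = 0.0, []
          for t, (E, I) in enumerate(zip(exc, inh)):   # I is signed (inhibition < 0)
              u = E + I + k * E * I                    # multiplicative dendritic rule
              v += dt / tau * (-v + u)
              if v >= v_th:                            # somatic spike and reset
                  spikes.append(t * dt)
                  v = v_reset
          return spikes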

  18. Cortical Dynamics in Presence of Assemblies of Densely Connected Weight-Hub Neurons

    PubMed Central

    Setareh, Hesam; Deger, Moritz; Petersen, Carl C. H.; Gerstner, Wulfram

    2017-01-01

    Experimental measurements of pairwise connection probability of pyramidal neurons together with the distribution of synaptic weights have been used to construct randomly connected model networks. However, several experimental studies suggest that both wiring and synaptic weight structure between neurons show statistics that differ from random networks. Here we study a network containing a subset of neurons which we call weight-hub neurons, that are characterized by strong inward synapses. We propose a connectivity structure for excitatory neurons that contain assemblies of densely connected weight-hub neurons, while the pairwise connection probability and synaptic weight distribution remain consistent with experimental data. Simulations of such a network with generalized integrate-and-fire neurons display regular and irregular slow oscillations akin to experimentally observed up/down state transitions in the activity of cortical neurons with a broad distribution of pairwise spike correlations. Moreover, stimulation of a model network in the presence or absence of assembly structure exhibits responses similar to light-evoked responses of cortical layers in optogenetically modified animals. We conclude that a high connection probability into and within assemblies of excitatory weight-hub neurons, as it likely is present in some but not all cortical layers, changes the dynamics of a layer of cortical microcircuitry significantly. PMID:28690508

  19. Self-sustained asynchronous irregular states and Up-Down states in thalamic, cortical and thalamocortical networks of nonlinear integrate-and-fire neurons.

    PubMed

    Destexhe, Alain

    2009-12-01

    Randomly-connected networks of integrate-and-fire (IF) neurons are known to display asynchronous irregular (AI) activity states, which resemble the discharge activity recorded in the cerebral cortex of awake animals. However, it is not clear whether such activity states are specific to simple IF models, or if they also exist in networks where neurons are endowed with complex intrinsic properties similar to electrophysiological measurements. Here, we investigate the occurrence of AI states in networks of nonlinear IF neurons, such as the adaptive exponential IF (Brette-Gerstner-Izhikevich) model. This model can display intrinsic properties such as low-threshold spiking (LTS), regular spiking (RS) or fast spiking (FS). We successively investigate the oscillatory and AI dynamics of thalamic, cortical and thalamocortical networks using such models. AI states can be found in each case, sometimes in surprisingly small networks of the order of a few tens of neurons. We show that the presence of LTS neurons in cortex or in thalamus explains the robust emergence of AI states for relatively small network sizes. Finally, we investigate the role of spike-frequency adaptation (SFA). In cortical networks with strong SFA in RS cells, the AI state is transient, but when SFA is reduced, AI states can be self-sustained for long times. In thalamocortical networks, AI states are found when the cortex is itself in an AI state, but with strong SFA, the thalamocortical network displays Up and Down state transitions, similar to intracellular recordings during slow-wave sleep or anesthesia. Self-sustained Up and Down states could also be generated by two-layer cortical networks with LTS cells. These models suggest that intrinsic properties such as adaptation and low-threshold bursting activity are crucial for the genesis and control of AI states in thalamocortical networks.
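
    Like the IQP record above, this abstract rests on the adaptive exponential (Brette-Gerstner) IF model; a minimal forward-Euler integration of one AdEx neuron is sketched below with the commonly quoted regular-spiking parameter set. The function name and the usage example are ours.

      import numpy as np

      def adex(I, dt=0.1, C=281.0, gL=30.0, EL=-70.6, VT=-50.4, DT=2.0,
               tauw=144.0, a=4.0, b=80.5, Vr=-70.6, Vpeak=20.0):
          """One AdEx neuron; units pF, nS, mV, ms, pA. I is the input
          current per time step (length = number of steps)."""
          V, w, spikes = EL, 0.0, []
          for t, It in enumerate(I):
              dV = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT) - w + It) / C
              dw = (a * (V - EL) - w) / tauw
              V, w = V + dt * dV, w + dt * dw
              if V >= Vpeak:                   # spike: reset V, increment adaptation
                  spikes.append(t * dt)
                  V, w = Vr, w + b
          return spikes                        # e.g. adex(np.full(20000, 500.0))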

  1. The relevance of network micro-structure for neural dynamics.

    PubMed

    Pernice, Volker; Deger, Moritz; Cardanobile, Stefano; Rotter, Stefan

    2013-01-01

    The activity of cortical neurons is determined by the input they receive from presynaptic neurons. Many previous studies have investigated how specific aspects of the statistics of the input affect the spike trains of single neurons and neurons in recurrent networks. However, typically very simple random network models are considered in such studies. Here we use a recently developed algorithm to construct networks, based on a quasi-fractal probability measure, which are much more variable than commonly used network models and therefore promise to sample the space of recurrent networks more exhaustively than previously possible. We use the generated graphs as the underlying network topology in simulations of networks of integrate-and-fire neurons in an asynchronous and irregular state. Based on an extensive dataset of networks and neuronal simulations, we assess statistical relations between features of the network structure and the spiking activity. Our results highlight the strong influence that some details of the network structure have on the activity dynamics of both single neurons and populations, even if some global network parameters are kept fixed. We observe specific and consistent relations between activity characteristics, like spike-train irregularity or correlations, and network properties, for example the distributions of the numbers of in- and outgoing connections or clustering. Exploiting these relations, we demonstrate that it is possible to estimate structural characteristics of the network from activity data. We also assess higher-order correlations of spiking activity in the various networks considered here, and find that their occurrence strongly depends on the network structure. These results provide directions for further theoretical studies on recurrent networks, as well as new ways to interpret spike train recordings from neural circuits.

  2. The Dynamics of Networks of Identical Theta Neurons.

    PubMed

    Laing, Carlo R

    2018-02-05

    We consider finite and infinite all-to-all coupled networks of identical theta neurons. Two types of synaptic interactions are investigated: instantaneous and delayed (via first-order synaptic processing). Extensive use is made of the Watanabe/Strogatz (WS) ansatz for reducing the dimension of networks of identical sinusoidally-coupled oscillators. As well as the degeneracy associated with the constants of motion of the WS ansatz, we also find continuous families of solutions for instantaneously coupled neurons, resulting from the reversibility of the reduced model and the form of the synaptic input. We also investigate a number of similar related models. We conclude that the dynamics of networks of all-to-all coupled identical neurons can be surprisingly complicated.
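
    A minimal sketch of the model class (not of the Watanabe/Strogatz reduction itself) is an all-to-all theta-neuron network driven through a first-order synapse; all names and parameter values below are ours.

      import numpy as np

      def theta_network(n=100, eta=0.5, kappa=1.0, tau_s=1.0,
                        dt=0.001, steps=20000, seed=0):
          """All-to-all identical theta neurons, d(theta)/dt = 1 - cos(theta)
          + (1 + cos(theta)) * (eta + kappa*s), with first-order synapse s."""
          rng = np.random.default_rng(seed)
          theta = rng.uniform(-np.pi, np.pi, n)
          s, rate = 0.0, []
          for _ in range(steps):
              drive = eta + kappa * s
              new = theta + dt * ((1 - np.cos(theta)) + (1 + np.cos(theta)) * drive)
              fired = new >= np.pi                            # crossing pi is a spike
              s += dt * (-s / tau_s) + fired.sum() / (n * tau_s)
              theta = np.where(fired, new - 2 * np.pi, new)   # wrap back to (-pi, pi]
              rate.append(fired.mean() / dt)
          return np.array(rate)                               # population firing rate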

  3. A Markov model for the temporal dynamics of balanced random networks of finite size

    PubMed Central

    Lagzi, Fereshteh; Rotter, Stefan

    2014-01-01

    The balanced state of recurrent networks of excitatory and inhibitory spiking neurons is characterized by fluctuations of population activity about an attractive fixed point. Numerical simulations show that these dynamics are essentially nonlinear, and the intrinsic noise (self-generated fluctuations) in networks of finite size is state-dependent. Therefore, stochastic differential equations with additive noise of fixed amplitude cannot provide an adequate description of the stochastic dynamics. The noise model should, rather, result from a self-consistent description of the network dynamics. Here, we consider a two-state Markovian neuron model, where spikes correspond to transitions from the active state to the refractory state. Excitatory and inhibitory input to this neuron affects the transition rates between the two states. The corresponding nonlinear dependencies can be identified directly from numerical simulations of networks of leaky integrate-and-fire neurons, discretized at a time resolution in the sub-millisecond range. Deterministic mean-field equations, and a noise component that depends on the dynamic state of the network, are obtained from this model. The resulting stochastic model reflects the behavior observed in numerical simulations quite well, irrespective of the size of the network. In particular, a strong temporal correlation between the two populations, a hallmark of the balanced state in random recurrent networks, is well represented by our model. Numerical simulations of such networks show that a log-normal distribution of short-term spike counts is a previously unconsidered property of balanced random networks with fixed in-degree, and our model shares this statistical property. Furthermore, the reconstruction of the flow from simulated time series suggests that the mean-field dynamics of finite-size networks are essentially of Wilson-Cowan type. We expect that this novel nonlinear stochastic model of the interaction between neuronal populations also opens new doors to analyze the joint dynamics of multiple interacting networks. PMID:25520644
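
    The sketch below simulates one population of such two-state units; because the transition counts are binomial in the current state, the noise amplitude is automatically state-dependent, which is the point of the construction. The rate functions are hypothetical placeholders, not those identified in the paper.

      import numpy as np

      def two_state_markov(n=1000, steps=5000, dt=0.1, seed=0,
                           act=lambda a: 0.1 + 2.0 * a,   # placeholder rate (1/ms)
                           deact=lambda a: 1.0):          # placeholder escape rate
          """Population of two-state Markov neurons; a 'spike' is the
          active -> refractory transition. Returns the activity trace."""
          rng = np.random.default_rng(seed)
          n_active, trace = n // 10, []
          for _ in range(steps):
              a = n_active / n
              up = rng.binomial(n - n_active, 1.0 - np.exp(-dt * act(a)))
              down = rng.binomial(n_active, 1.0 - np.exp(-dt * deact(a)))
              n_active += up - down                      # state-dependent binomial noise
              trace.append(n_active / n)
          return np.array(trace)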

  4. Emergent properties of interacting populations of spiking neurons.

    PubMed

    Cardanobile, Stefano; Rotter, Stefan

    2011-01-01

    Dynamic neuronal networks are a key paradigm of increasing importance in brain research, concerned with the functional analysis of biological neuronal networks and, at the same time, with the synthesis of artificial brain-like systems. In this context, neuronal network models serve as mathematical tools to understand the function of brains, but they may also develop into future tools for enhancing certain functions of our nervous system. Here, we present and discuss our recent achievements in developing multiplicative point processes into a viable mathematical framework for spiking network modeling. The perspective is that the dynamic behavior of these neuronal networks is faithfully reflected by a set of nonlinear rate equations, describing all interactions on the population level. These equations are similar in structure to Lotka-Volterra equations, well known for their use in modeling predator-prey relations in population biology, although abundant applications to economic theory have also been described. We present a number of biologically relevant examples of spiking network function, which can be studied with the help of the aforementioned correspondence between spike trains and specific systems of nonlinear coupled ordinary differential equations. We claim that, enabled by the use of multiplicative point processes, we can make essential contributions to a more thorough understanding of the dynamical properties of interacting neuronal populations.
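
    A minimal sketch of the population-level picture, assuming rate equations of the Lotka-Volterra form dr_i/dt = r_i (h_i + sum_j W_ij r_j); the exact equations in the paper may differ, and all names below are ours.

      import numpy as np

      def lv_rates(W, h, r0, dt=0.001, steps=10000):
          """Forward-Euler integration of Lotka-Volterra-type rate equations."""
          W, h = np.asarray(W, float), np.asarray(h, float)
          r = np.asarray(r0, float).copy()
          out = np.empty((steps, r.size))
          for t in range(steps):
              r = np.maximum(r + dt * r * (h + W @ r), 0.0)  # rates stay non-negative
              out[t] = r
          return out

      # e.g. one excitatory and one inhibitory population:
      # lv_rates(W=[[0.0, -1.0], [1.0, 0.0]], h=[1.0, 0.2], r0=[0.1, 0.1])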

  5. Observability and synchronization of neuron models.

    PubMed

    Aguirre, Luis A; Portes, Leonardo L; Letellier, Christophe

    2017-10-01

    Observability is the property that enables recovering the state of a dynamical system from a reduced number of measured variables. In high-dimensional systems, it is therefore important to make sure that the variable recorded to perform the analysis conveys good observability of the system dynamics. The observability of a network of neuron models depends nontrivially on the observability of the node dynamics and on the topology of the network. The aim of this paper is twofold. First, to perform a study of observability using four well-known neuron models by computing three different observability coefficients. This not only clarifies observability properties of the models but also shows the limitations of applicability of each type of coefficients in the context of such models. Second, to study the emergence of phase synchronization in networks composed of neuron models. This is done performing multivariate singular spectrum analysis which, to the best of the authors' knowledge, has not been used in the context of networks of neuron models. It is shown that it is possible to detect phase synchronization: (i) without having to measure all the state variables, but only one (that provides greatest observability) from each node and (ii) without having to estimate the phase.

  6. A neuronal network model with simplified tonotopicity for tinnitus generation and its relief by sound therapy.

    PubMed

    Nagashino, Hirofumi; Kinouchi, Yohsuke; Danesh, Ali A; Pandya, Abhijit S

    2013-01-01

    Tinnitus is the perception of sound in the ears or in the head in the absence of any external source. Sound therapy is one of the most effective techniques proposed for tinnitus treatment. In order to investigate the mechanisms of tinnitus generation and the clinical effects of sound therapy, we have previously proposed conceptual and computational models with plasticity using a neural oscillator or a neuronal network model. In the present paper, we propose a neuronal network model with a simplified tonotopicity of the auditory system as a more detailed structure. In this model an integrate-and-fire neuron model is employed and homeostatic plasticity is incorporated. The computer simulation results show that the present model can reproduce the generation of oscillation and its cessation by external input. This suggests that the present framework is promising as a model of tinnitus generation and of the effects of sound therapy.

  7. Echo state networks with filter neurons and a delay&sum readout.

    PubMed

    Holzmann, Georg; Hauser, Helmut

    2010-03-01

    Echo state networks (ESNs) are a novel approach to recurrent neural network training with the advantage of a very simple and linear learning algorithm. It has been demonstrated that ESNs outperform other methods on a number of benchmark tasks. Although the approach is appealing, there are still some inherent limitations in the original formulation. Here we suggest two enhancements of this network model. First, the previously proposed idea of filters in neurons is extended to arbitrary infinite impulse response (IIR) filter neurons. This enables such networks to learn multiple attractors and signals at different timescales, which is especially important for modeling real-world time series. Second, a delay&sum readout is introduced, which adds trainable delays in the synaptic connections of output neurons and therefore vastly improves the memory capacity of echo state networks. It is shown on commonly used benchmark tasks and real-world examples that this new structure is able to significantly outperform standard ESNs and other state-of-the-art models for nonlinear dynamical system modeling. Copyright 2009 Elsevier Ltd. All rights reserved.
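
    The sketch below captures the flavor of the delay&sum idea in a deliberately crude form: instead of learning one delay per readout synapse, a ridge-regressed readout simply sees several fixed delayed copies of the reservoir state. Everything here (names, delays, sizes) is our simplification, not the authors' algorithm.

      import numpy as np

      def esn_delayed_readout(u, y, n=200, delays=(0, 5, 10), alpha=0.3,
                              rho=0.9, ridge=1e-6, seed=0):
          """Leaky ESN; readout is ridge regression on delayed state copies."""
          rng = np.random.default_rng(seed)
          W = rng.standard_normal((n, n))
          W *= rho / np.max(np.abs(np.linalg.eigvals(W)))  # set spectral radius
          w_in = rng.standard_normal(n)
          x, X = np.zeros(n), []
          for ut in u:                                     # drive the reservoir
              x = (1 - alpha) * x + alpha * np.tanh(W @ x + w_in * ut)
              X.append(x.copy())
          X = np.array(X)
          # delayed copies; np.roll wraps at t = 0, harmless for a sketch
          Z = np.hstack([np.roll(X, d, axis=0) for d in delays])
          w_out = np.linalg.solve(Z.T @ Z + ridge * np.eye(Z.shape[1]), Z.T @ y)
          return Z @ w_out                                 # readout prediction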

  8. Connectomic constraints on computation in feedforward networks of spiking neurons.

    PubMed

    Ramaswamy, Venkatakrishnan; Banerjee, Arunava

    2014-10-01

    Several efforts are currently underway to decipher the connectome, or parts thereof, in a variety of organisms. Ascertaining the detailed physiological properties of all the neurons in these connectomes, however, is out of the scope of such projects. It is therefore unclear to what extent knowledge of the connectome alone will advance a mechanistic understanding of computation occurring in these neural circuits, especially when the high-level function of the said circuit is unknown. We consider, here, the question of how the wiring diagram of neurons imposes constraints on what neural circuits can compute, when we cannot assume detailed information on the physiological response properties of the neurons. We call such constraints, which arise by virtue of the connectome, connectomic constraints on computation. For feedforward networks equipped with neurons that obey a deterministic spiking neuron model satisfying a small number of properties, we ask if, just by knowing the architecture of a network, we can rule out computations that it could be doing, no matter what response properties each of its neurons may have. We show results of this form for certain classes of network architectures. On the other hand, we also prove that with the limited set of properties assumed for our model neurons, there are fundamental limits to the constraints imposed by network structure. Thus, our theory suggests that while connectomic constraints might restrict the computational ability of certain classes of network architectures, we may require more elaborate information on the properties of neurons in the network before we can discern such results for other classes of networks.

  9. Recording axonal conduction to evaluate the integration of pluripotent cell-derived neurons into a neuronal network.

    PubMed

    Shimba, Kenta; Sakai, Koji; Takayama, Yuzo; Kotani, Kiyoshi; Jimbo, Yasuhiko

    2015-10-01

    Stem cell transplantation is a promising therapy for neurodegenerative disorders, and a number of in vitro models have been developed for studying interactions between grafted neurons and the host neuronal network in order to promote drug discovery. However, methods capable of evaluating the process by which stem cells integrate into the host neuronal network are lacking. In this study, we applied an axonal conduction-based analysis to a co-culture study of primary and differentiated neurons. Mouse cortical neurons and neuronal cells differentiated from P19 embryonal carcinoma cells, a model for early neural differentiation of pluripotent stem cells, were co-cultured in a microfabricated device. The somata of these cells were separated by the co-culture device, but their axons were able to elongate through microtunnels and then form synaptic contacts. Propagating action potentials were recorded from these axons by microelectrodes embedded at the bottom of the microtunnels and sorted into clusters representing individual axons. While the number of axons of cortical neurons increased until 14 days in vitro and then decreased, that of P19 neurons increased throughout the culture period. Network burst analysis showed that P19 neurons participated in approximately 80% of the bursting activity after 14 days in vitro. Interestingly, the axonal conduction delay of P19 neurons was significantly greater than that of cortical neurons, suggesting that there are some physiological differences between their axons. These results suggest that our method is well suited to evaluating the process by which stem cell-derived neurons integrate into a host neuronal network.

  10. Engineered 3D vascular and neuronal networks in a microfluidic platform.

    PubMed

    Osaki, Tatsuya; Sivathanu, Vivek; Kamm, Roger D

    2018-03-26

    Neurovascular coupling plays a key role in the pathogenesis of neurodegenerative disorders including motor neuron disease (MND). In vitro models provide an opportunity to understand the pathogenesis of MND, and offer the potential for drug screening. Here, we describe a new 3D microvascular and neuronal network model in a microfluidic platform to investigate interactions between these two systems. Both 3D networks were established by co-culturing human embryonic stem (ES)-derived motor neuron (MN) spheroids and endothelial cells (ECs) in microfluidic devices. Co-culture with ECs improves neurite elongation and neuronal connectivity as measured by Ca2+ oscillation. This improvement was regulated not only by paracrine signals such as brain-derived neurotrophic factor secreted by ECs but also through direct cell-cell interactions via the delta-notch pathway, promoting neuron differentiation and neuroprotection. Bi-directional signaling was observed in that the neural networks also affected vascular network formation under perfusion culture. This in vitro model could enable investigations of neurovascular coupling, essential to understanding the pathogenesis of neurodegenerative diseases including MNDs such as amyotrophic lateral sclerosis.

  11. Computational Modeling of Single Neuron Extracellular Electric Potentials and Network Local Field Potentials using LFPsim.

    PubMed

    Parasuram, Harilal; Nair, Bipin; D'Angelo, Egidio; Hines, Michael; Naldi, Giovanni; Diwakar, Shyam

    2016-01-01

    Local field potentials (LFPs) are population signals generated by complex spatiotemporal interactions of current sources and dipoles. Mathematical computation of LFPs allows the study of circuit functions and dysfunctions via simulations. This paper introduces LFPsim, a NEURON-based tool for computing population LFP activity and single-neuron extracellular potentials. LFPsim was developed to be used on existing cable compartmental neuron and network models. Point-source, line-source, and RC-based filter approximations can be used to compute extracellular activity. As a demonstration of efficient implementation, we showcase LFPs from mathematical models of electrotonically compact cerebellum granule neurons and morphologically complex neurons of the neocortical column. LFPsim reproduced neocortical LFP at 8, 32, and 56 Hz via current injection, in vitro postsynaptic N2a and N2b waves, and in vivo T-C waves in the cerebellum granular layer. LFPsim also includes a multi-electrode array simulation of LFPs in network populations, to aid computational inference between biophysical activity in neural networks and the corresponding multi-unit activity underlying extracellular and evoked LFP signals.
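
    The point-source approximation named above is compact enough to sketch: each compartment contributes phi = I / (4*pi*sigma*r) at the electrode. This is the textbook volume-conductor formula, not LFPsim's actual code; array names are ours and SI units are assumed.

      import numpy as np

      def lfp_point_source(I, src_pos, elec_pos, sigma=0.3):
          """Extracellular potential (V) at one electrode from transmembrane
          currents I (compartments x time, in A); positions in m, sigma in S/m."""
          r = np.linalg.norm(np.asarray(src_pos) - np.asarray(elec_pos), axis=1)
          return (np.asarray(I) / (4 * np.pi * sigma * r[:, None])).sum(axis=0)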

  12. Using a hybrid neuron in physiologically inspired models of the basal ganglia.

    PubMed

    Thibeault, Corey M; Srinivasa, Narayan

    2013-01-01

    Our current understanding of the basal ganglia (BG) has facilitated the creation of computational models that have contributed novel theories, explored new functional anatomy and demonstrated results complementing physiological experiments. The utility of these models, however, extends beyond these applications, particularly to neuromorphic engineering, where the basal ganglia's role in computation is important for applications such as power-efficient autonomous agents and model-based control strategies. The neurons used in existing computational models of the BG, however, are not amenable to many low-power hardware implementations. Motivated by the need for more hardware-accessible networks, we replicate four published models of the BG, spanning single neurons and small networks, replacing the more computationally expensive neuron models with an Izhikevich hybrid neuron. This begins with a network modeling action selection, in which the basal activity levels and the ability to appropriately select the most salient input are reproduced. A Parkinson's disease model is then explored under normal conditions, Parkinsonian conditions and during subthalamic nucleus deep brain stimulation (DBS). The resulting network is capable of replicating the loss of thalamic relay capabilities in the Parkinsonian state and its return under DBS. This is also demonstrated using a network capable of action selection. Finally, a study of correlation transfer under different patterns of Parkinsonian activity is presented. These networks successfully captured the significant results of the original studies. This not only creates a foundation for neuromorphic hardware implementations but may also support the development of large-scale biophysical models. The former potentially provides a way of improving the efficacy of DBS, and the latter allows for the efficient simulation of larger, more comprehensive networks.
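
    The Izhikevich hybrid neuron that the study swaps in is defined by two ODEs plus a discontinuous reset, which is what makes it cheap in hardware; a forward-Euler sketch with the standard regular-spiking parameters follows (function name and usage example ours).

      def izhikevich(I, dt=0.25, a=0.02, b=0.2, c=-65.0, d=8.0):
          """Izhikevich hybrid neuron: v' = 0.04 v^2 + 5 v + 140 - u + I,
          u' = a (b v - u); when v >= 30 mV, reset v <- c and u <- u + d."""
          v, u, spikes = c, b * c, []
          for t, It in enumerate(I):
              v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + It)
              u += dt * a * (b * v - u)
              if v >= 30.0:                  # hybrid reset replaces the spike shape
                  spikes.append(t * dt)
                  v, u = c, u + d
          return spikes                      # e.g. izhikevich([10.0] * 4000)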

  13. Data-driven inference of network connectivity for modeling the dynamics of neural codes in the insect antennal lobe

    PubMed Central

    Shlizerman, Eli; Riffell, Jeffrey A.; Kutz, J. Nathan

    2014-01-01

    The antennal lobe (AL), the olfactory processing center in insects, is able to process stimuli into distinct neural activity patterns, called olfactory neural codes. To model their dynamics, we perform multichannel recordings from the projection neurons in the AL driven by different odorants. We then derive a dynamic neuronal network from the electrophysiological data. The network consists of lateral-inhibitory neurons and excitatory neurons (modeled as firing-rate units), and is capable of producing unique olfactory neural codes for the tested odorants. To construct the network, we (1) design a projection, an odor space, for the neural recordings from the AL, which discriminates between distinct odorant trajectories; (2) characterize scent recognition, i.e., decision-making based on olfactory signals; and (3) infer the wiring of the neural circuit, the connectome of the AL. We show that the constructed model is consistent with biological observations, such as contrast enhancement and robustness to noise. The study suggests a data-driven approach to answering a key biological question: how lateral inhibitory neurons can be wired to excitatory neurons to permit robust activity patterns. PMID:25165442

  14. A reanalysis of "Two types of asynchronous activity in networks of excitatory and inhibitory spiking neurons".

    PubMed

    Engelken, Rainer; Farkhooi, Farzad; Hansel, David; van Vreeswijk, Carl; Wolf, Fred

    2016-01-01

    Neuronal activity in the central nervous system varies strongly in time and across neuronal populations. It is a longstanding proposal that such fluctuations generically arise from chaotic network dynamics. Various theoretical studies predict that the rich dynamics of rate models operating in the chaotic regime can subserve circuit computation and learning. Neurons in the brain, however, communicate via spikes, and it is a theoretical challenge to obtain similar rate fluctuations in networks of spiking neuron models. A recent study investigated spiking balanced networks of leaky integrate-and-fire (LIF) neurons and compared their dynamics to a matched rate network with identical topology, where single-unit input-output functions were chosen from isolated LIF neurons receiving Gaussian white noise input. A mathematical analogy between the chaotic instability in networks of rate units and the spiking network dynamics was proposed. Here we revisit the behavior of the spiking LIF networks and these matched rate networks. We find the expected hallmarks of a chaotic instability in the rate network: for supercritical coupling strength near the transition point, the autocorrelation time diverges; for subcritical coupling strengths, we observe critical slowing down in response to small external perturbations. In the spiking network, by contrast, we found that the timescale of the autocorrelations is insensitive to the coupling strength and that rate deviations resulting from small input perturbations rapidly decay. The decay speed even accelerates with increasing coupling strength. In conclusion, our reanalysis demonstrates fundamental differences between the behavior of pulse-coupled spiking LIF networks and rate networks with matched topology and input-output function. In particular, there is no indication of a corresponding chaotic instability in the spiking network.

  15. Optimization Methods for Spiking Neurons and Networks

    PubMed Central

    Russell, Alexander; Orchard, Garrick; Dong, Yi; Mihalaş, Ştefan; Niebur, Ernst; Tapson, Jonathan; Etienne-Cummings, Ralph

    2011-01-01

    Spiking neurons and spiking neural circuits are finding uses in a multitude of tasks such as robotic locomotion control, neuroprosthetics, visual sensory processing, and audition. The desired neural output is achieved through the use of complex neuron models, or by combining multiple simple neurons into a network. In either case, a means for configuring the neuron or neural circuit is required. Manual manipulation of parameters is both time consuming and non-intuitive due to the nonlinear relationship between parameters and the neuron’s output. The complexity rises even further as the neurons are networked and the systems often become mathematically intractable. In large circuits, the desired behavior and timing of action potential trains may be known but the timing of the individual action potentials is unknown and unimportant, whereas in single neuron systems the timing of individual action potentials is critical. In this paper, we automate the process of finding parameters. To configure a single neuron we derive a maximum likelihood method for configuring a neuron model, specifically the Mihalas–Niebur Neuron. Similarly, to configure neural circuits, we show how we use genetic algorithms (GAs) to configure parameters for a network of simple integrate and fire with adaptation neurons. The GA approach is demonstrated both in software simulation and hardware implementation on a reconfigurable custom very large scale integration chip. PMID:20959265

  16. Information-geometric measures as robust estimators of connection strengths and external inputs.

    PubMed

    Tatsuno, Masami; Fellous, Jean-Marc; Amari, Shun-Ichi

    2009-08-01

    Information geometry has been suggested to provide a powerful tool for analyzing multineuronal spike trains. Among several advantages of this approach, a significant property is the close link between information-geometric measures and neural network architectures. Previous modeling studies established that the first- and second-order information-geometric measures corresponded to the number of external inputs and the connection strengths of the network, respectively. This relationship was, however, limited to a symmetrically connected network, and the number of neurons used in the parameter estimation of the log-linear model needed to be known. Recently, simulation studies of biophysical model neurons have suggested that information geometry can estimate the relative change of connection strengths and external inputs even with asymmetric connections. Inspired by these studies, we analytically investigated the link between the information-geometric measures and the neural network structure with asymmetrically connected networks of N neurons. We focused on the information-geometric measures of orders one and two, which can be derived from the two-neuron log-linear model, because unlike higher-order measures, they can be easily estimated experimentally. Considering the equilibrium state of a network of binary model neurons that obey stochastic dynamics, we analytically showed that the corrected first- and second-order information-geometric measures provided robust and consistent approximation of the external inputs and connection strengths, respectively. These results suggest that information-geometric measures provide useful insights into the neural network architecture and that they will contribute to the study of system-level neuroscience.

  17. Towards a theory of cortical columns: From spiking neurons to interacting neural populations of finite size.

    PubMed

    Schwalger, Tilo; Deger, Moritz; Gerstner, Wulfram

    2017-04-01

    Neural population equations such as neural mass or field models are widely used to study brain activity on a large scale. However, the relation of these models to the properties of single neurons is unclear. Here we derive an equation for several interacting populations at the mesoscopic scale starting from a microscopic model of randomly connected generalized integrate-and-fire neuron models. Each population consists of 50-2000 neurons of the same type but different populations account for different neuron types. The stochastic population equations that we find reveal how spike-history effects in single-neuron dynamics such as refractoriness and adaptation interact with finite-size fluctuations on the population level. Efficient integration of the stochastic mesoscopic equations reproduces the statistical behavior of the population activities obtained from microscopic simulations of a full spiking neural network model. The theory describes nonlinear emergent dynamics such as finite-size-induced stochastic transitions in multistable networks and synchronization in balanced networks of excitatory and inhibitory neurons. The mesoscopic equations are employed to rapidly integrate a model of a cortical microcircuit consisting of eight neuron types, which allows us to predict spontaneous population activities as well as evoked responses to thalamic input. Our theory establishes a general framework for modeling finite-size neural population dynamics based on single cell and synapse parameters and offers an efficient approach to analyzing cortical circuits and computations.

  18. Self-Organized Supercriticality and Oscillations in Networks of Stochastic Spiking Neurons

    NASA Astrophysics Data System (ADS)

    Costa, Ariadne; Brochini, Ludmila; Kinouchi, Osame

    2017-08-01

    Networks of stochastic spiking neurons are interesting models in the area of theoretical neuroscience, presenting both continuous and discontinuous phase transitions. Here we study fully connected networks analytically, numerically and by computational simulations. The neurons have dynamic gains that enable the network to converge to a stationary, slightly supercritical state (self-organized supercriticality, or SOSC) in the presence of the continuous transition. We show that SOSC, which presents power laws for neuronal avalanches plus some large events, is robust as a function of the main parameter of the neuronal gain dynamics. We discuss possible applications of the idea of SOSC to biological phenomena like epilepsy and dragon-king avalanches. We also find that neuronal gains can produce collective oscillations that coexist with neuronal avalanches, with frequencies compatible with characteristic brain rhythms.

  19. A Markovian event-based framework for stochastic spiking neural networks.

    PubMed

    Touboul, Jonathan D; Faugeras, Olivier D

    2011-11-01

    In spiking neural networks, information is conveyed by the spike times, which depend on the intrinsic dynamics of each neuron, the input neurons receive, and the connections between neurons. In this article we study the Markovian nature of the sequence of spike times in stochastic neural networks, and in particular the ability to deduce from a spike train the next spike time, and therefore to produce a description of the network activity based only on the spike times, regardless of the membrane potential process. To study this question in a rigorous manner, we introduce and study an event-based description of networks of noisy integrate-and-fire neurons, i.e., one based on the computation of spike times. We show that the firing times of the neurons in the networks constitute a Markov chain, whose transition probability is related to the probability distribution of the interspike intervals of the neurons in the network. In the cases where the Markovian model can be developed, the transition probability is explicitly derived for such classical neural network cases as linear integrate-and-fire neuron models with excitatory and inhibitory interactions, for different types of synapses, possibly featuring noisy synaptic integration, transmission delays, and absolute and relative refractory periods. This covers most of the cases that have been investigated in the event-based description of spiking deterministic neural networks.

  1. Numerical Analysis of Modeling Based on Improved Elman Neural Network

    PubMed Central

    Jie, Shao

    2014-01-01

    A model based on the improved Elman neural network (IENN) is proposed to analyze nonlinear circuits with memory effects. The hidden-layer neurons are activated by a group of Chebyshev orthogonal basis functions instead of sigmoid functions in this model. The error curves of the sum of squared errors (SSE), varying with the number of hidden neurons and the iteration step, are studied to determine the number of hidden-layer neurons. Simulation results for the half-bridge class-D power amplifier (CDPA), with a two-tone signal and broadband signals as input, have shown that the proposed behavioral model can reconstruct the system of CDPAs accurately and depict their memory effect well. Compared with the Volterra-Laguerre (VL) model, the Chebyshev neural network (CNN) model, and the basic Elman neural network (BENN) model, the proposed model has better performance. PMID:25054172

  2. Orientation selectivity in inhibition-dominated networks of spiking neurons: effect of single neuron properties and network dynamics.

    PubMed

    Sadeh, Sadra; Rotter, Stefan

    2015-01-01

    The neuronal mechanisms underlying the emergence of orientation selectivity in the primary visual cortex of mammals are still elusive. In rodents, visual neurons show highly selective responses to oriented stimuli, but neighboring neurons do not necessarily have similar preferences. Instead of a smooth map, one observes a salt-and-pepper organization of orientation selectivity. Modeling studies have recently confirmed that balanced random networks are indeed capable of amplifying weakly tuned inputs and generating highly selective output responses, even in the absence of feature-selective recurrent connectivity. Here we seek to elucidate the neuronal mechanisms underlying this phenomenon by resorting to networks of integrate-and-fire neurons, which are amenable to analytic treatment. Specifically, in networks of perfect integrate-and-fire neurons, we observe that highly selective and contrast-invariant output responses emerge, very similar to networks of leaky integrate-and-fire neurons. We then demonstrate that a theory based on mean firing rates and the detailed network topology predicts the output responses, and explains the mechanisms underlying the suppression of the common mode, amplification of modulation, and contrast invariance. Increasing inhibition dominance in our networks makes the rectifying nonlinearity more prominent, which in turn adds some distortions to the otherwise essentially linear prediction. An extension of the linear theory can account for all the distortions, enabling us to compute the exact shape of every individual tuning curve in our networks. We show that this simple form of nonlinearity adds two important properties to orientation selectivity in the network, namely sharpening of tuning curves and extra suppression of the modulation. The theory can be further extended to account for the nonlinearity of the leaky model by replacing the rectifier with the appropriate smooth input-output transfer function. These results are robust and do not depend on the state of network dynamics, and they hold equally well for mean-driven and fluctuation-driven regimes of activity.

  3. Enhanced storage capacity with errors in scale-free Hopfield neural networks: An analytical study.

    PubMed

    Kim, Do-Hyun; Park, Jinha; Kahng, Byungnam

    2017-01-01

    The Hopfield model is a pioneering neural network model with associative memory retrieval. The analytical solution of the model in the mean-field limit revealed that memories can be retrieved without any error up to a finite storage capacity of O(N), where N is the system size. Beyond this threshold, they are completely lost. Since the introduction of the Hopfield model, the theory of neural networks has been further developed toward realistic neural networks using analog neurons, spiking neurons, etc. Nevertheless, those advances are based on fully connected networks, which are inconsistent with the recent experimental discovery that the number of connections of each neuron seems to be heterogeneous, following a heavy-tailed distribution. Motivated by this observation, we consider the Hopfield model on scale-free networks and obtain a pattern of associative memory retrieval different from that obtained on the fully connected network: the storage capacity becomes tremendously enhanced but with some error in the memory retrieval, which appears as the heterogeneity of the connections is increased. Moreover, the error rates are also obtained on several real neural networks and are indeed similar to those on scale-free model networks.
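
    For reference, the classical fully connected construction that the paper generalizes takes only a few lines: Hebbian weights W = (1/N) sum over patterns of xi xi^T with zero diagonal, and asynchronous sign updates for retrieval. The scale-free variant would keep W only on the edges of the graph; names below are ours.

      import numpy as np

      def hopfield_store(patterns):
          """Hebbian weights from +/-1 patterns of shape (p, N)."""
          P = np.asarray(patterns, float)
          W = P.T @ P / P.shape[1]
          np.fill_diagonal(W, 0.0)         # no self-coupling
          return W

      def hopfield_recall(W, cue, sweeps=10, seed=0):
          """Asynchronous retrieval from a noisy +/-1 cue."""
          rng = np.random.default_rng(seed)
          s = np.asarray(cue, float).copy()
          for _ in range(sweeps * s.size):
              i = rng.integers(s.size)     # update one randomly chosen unit
              s[i] = 1.0 if W[i] @ s >= 0 else -1.0
          return s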

  4. A new cross-correlation algorithm for the analysis of "in vitro" neuronal network activity aimed at pharmacological studies.

    PubMed

    Biffi, E; Menegon, A; Regalia, G; Maida, S; Ferrigno, G; Pedrocchi, A

    2011-08-15

    Modern drug discovery for central nervous system pathologies has recently focused its attention on in vitro neuronal networks as models for the study of neuronal activity. Micro-electrode arrays (MEAs), a widely recognized tool for pharmacological investigations, enable the simultaneous study of the spiking activity of discrete regions of a neuronal culture, providing insight into the dynamics of networks. Taking advantage of MEA features and making the most of cross-correlation analysis to assess internal parameters of a neuronal system, we provide an efficient method for the evaluation of comprehensive neuronal network activity. We developed an intra-network burst correlation algorithm, evaluated its sensitivity, and explored its potential use in pharmacological studies. Our results demonstrate the high sensitivity of this algorithm and the efficacy of this methodology in pharmacological dose-response studies, with the advantage of analyzing the effect of drugs on the comprehensive correlative properties of integrated neuronal networks. Copyright © 2011 Elsevier B.V. All rights reserved.
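
    The record does not spell the algorithm out, but the primitive underneath any such method is the normalized cross-correlogram of two binned spike trains; a minimal version is sketched below (the paper's intra-burst windowing and normalization choices are not reproduced, and all names are ours).

      import numpy as np

      def xcorr_spikes(a, b, max_lag=50):
          """Normalized cross-correlogram of binned spike trains a, b
          (spike counts per bin) for lags -max_lag..max_lag bins."""
          a = np.asarray(a, float); b = np.asarray(b, float)
          a = (a - a.mean()) / (a.std() + 1e-12)
          b = (b - b.mean()) / (b.std() + 1e-12)
          lags = np.arange(-max_lag, max_lag + 1)
          c = [np.dot(a[max(0, -l):a.size - max(0, l)],
                      b[max(0, l):b.size - max(0, -l)]) / a.size for l in lags]
          return lags, np.array(c)           # peak near lag 0 signals co-bursting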

  6. Reinforcement Learning of Two-Joint Virtual Arm Reaching in a Computer Model of Sensorimotor Cortex

    PubMed Central

    Neymotin, Samuel A.; Chadderdon, George L.; Kerr, Cliff C.; Francis, Joseph T.; Lytton, William W.

    2014-01-01

    Neocortical mechanisms of learning sensorimotor control involve a complex series of interactions at multiple levels, from synaptic mechanisms to cellular dynamics to network connectomics. We developed a model of sensory and motor neocortex consisting of 704 spiking model neurons. Sensory and motor populations included excitatory cells and two types of interneurons. Neurons were interconnected with AMPA/NMDA and GABAA synapses. We trained our model using spike-timing-dependent reinforcement learning to control a two-joint virtual arm to reach to a fixed target. For each of 125 trained networks, we used 200 training sessions, each involving 15 s reaches to the target from 16 starting positions. Learning altered network dynamics, with enhancements to neuronal synchrony and behaviorally relevant information flow between neurons. After learning, networks demonstrated retention of behaviorally relevant memories by using proprioceptive information to perform reach-to-target from multiple starting positions. Networks dynamically controlled which joint rotations to use to reach a target, depending on current arm position. Learning-dependent network reorganization was evident in both sensory and motor populations: learned synaptic weights showed target-specific patterning optimized for particular reach movements. Our model embodies an integrative hypothesis of sensorimotor cortical learning that could be used to interpret future electrophysiological data recorded in vivo from sensorimotor learning experiments. We used our model to make the following predictions: learning enhances synchrony in neuronal populations and behaviorally relevant information flow between them; enhanced sensory processing aids task-relevant motor performance; and the relative ease of a particular movement in vivo depends on the amount of sensory information required to complete the movement. PMID:24047323
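
    A toy version of the spike-timing-dependent reinforcement learning idea (pre/post pairings tag a synapse with an exponentially decaying eligibility trace, and a delayed scalar reward converts the trace into a weight change; the trace formulation and every parameter below are illustrative assumptions, not the authors' exact rule):

```python
import numpy as np

tau_plus, tau_minus, tau_e = 20.0, 20.0, 200.0     # STDP and eligibility (ms)
a_plus, a_minus, lr = 1.0, -1.2, 0.05              # kernel amplitudes, learning rate

pre_spikes  = np.array([10.0, 60.0, 110.0])        # presynaptic spike times (ms)
post_spikes = np.array([18.0, 55.0, 123.0])        # postsynaptic spike times (ms)
t_reward, w, elig = 150.0, 0.5, 0.0

for t_pre in pre_spikes:
    for t_post in post_spikes:
        dt = t_post - t_pre
        stdp = a_plus * np.exp(-dt / tau_plus) if dt > 0 else a_minus * np.exp(dt / tau_minus)
        # each pairing's contribution decays from the pairing until the reward arrives
        elig += stdp * np.exp(-(t_reward - max(t_pre, t_post)) / tau_e)

reward = 1.0                                       # e.g., the arm moved toward the target
w += lr * reward * elig
print(f"eligibility = {elig:.3f}, new weight = {w:.3f}")
```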

  7. Synchronization in neural nets

    NASA Technical Reports Server (NTRS)

    Vidal, Jacques J.; Haggerty, John

    1988-01-01

    The paper presents an artificial neural network concept (the Synchronizable Oscillator Networks) in which the instants of individual firings, in the form of point processes, constitute the only form of information transmitted between interconnected neurons. In the model, neurons fire spontaneously and regularly in the absence of perturbation. When interaction is present, the scheduled firings are advanced or delayed by the firing of neighboring neurons. Networks of such neurons become global oscillators which exhibit multiple synchronizing attractors. From arbitrary initial states, energy-minimization learning procedures can make the network converge to oscillatory modes that satisfy multi-dimensional constraints. Such networks can directly represent routing and scheduling problems that consist of ordering sequences of events.

  8. Neural networks with local receptive fields and superlinear VC dimension.

    PubMed

    Schmitt, Michael

    2002-04-01

    Local receptive field neurons comprise such well-known and widely used unit types as radial basis function (RBF) neurons and neurons with center-surround receptive field. We study the Vapnik-Chervonenkis (VC) dimension of feedforward neural networks with one hidden layer of these units. For several variants of local receptive field neurons, we show that the VC dimension of these networks is superlinear. In particular, we establish the bound Omega(W log k) for any reasonably sized network with W parameters and k hidden nodes. This bound is shown to hold for discrete center-surround receptive field neurons, which are physiologically relevant models of cells in the mammalian visual system, for neurons computing a difference of Gaussians, which are popular in computational vision, and for standard RBF neurons, a major alternative to sigmoidal neurons in artificial neural networks. The result for RBF neural networks is of particular interest since it answers a question that has been open for several years. The results also give rise to lower bounds for networks with fixed input dimension. Regarding constants, all bounds are larger than those known thus far for similar architectures with sigmoidal neurons. The superlinear lower bounds contrast with linear upper bounds for single local receptive field neurons also derived here.

  9. Qualitative-Modeling-Based Silicon Neurons and Their Networks

    PubMed Central

    Kohno, Takashi; Sekikawa, Munehisa; Li, Jing; Nanami, Takuya; Aihara, Kazuyuki

    2016-01-01

    Ionic conductance models of neuronal cells can finely reproduce a wide variety of complex neuronal activities. However, the complexity of these models has prompted the development of qualitative neuron models. These are described by differential equations with a reduced number of variables and low-dimensional polynomials, which retain the core mathematical structures. Such simple models form the foundation of a bottom-up approach in computational and theoretical neuroscience. We proposed a qualitative-modeling-based approach for designing silicon neuron circuits, in which the mathematical structures of the polynomial-based qualitative models are reproduced by differential equations with silicon-native expressions. This approach can realize low-power-consuming circuits that can be configured to realize various classes of neuronal cells. In this article, our qualitative-modeling-based silicon neuron circuits for analog and digital implementations are briefly reviewed. One of our CMOS analog silicon neuron circuits can realize a variety of neuronal activities with a power consumption of less than 72 nW. The square-wave bursting mode of this circuit is explained. Another circuit can realize Class I and II neuronal activities with about 3 nW. Our digital silicon neuron circuit can also realize these classes. An auto-associative memory realized on an all-to-all connected network of these silicon neurons is also reviewed, in which the neuron class plays an important role in its performance. PMID:27378842

  10. Multiscale modeling of brain dynamics: from single neurons and networks to mathematical tools.

    PubMed

    Siettos, Constantinos; Starke, Jens

    2016-09-01

    The extreme complexity of the brain naturally requires mathematical modeling approaches on a large variety of scales; the spectrum ranges from single-neuron dynamics through the behavior of groups of neurons to neuronal network activity. Thus, the connection between the microscopic scale (single-neuron activity) and macroscopic behavior (emergent behavior of the collective dynamics), and vice versa, is a key to understanding the brain in its complexity. In this work, we attempt a review of a wide range of approaches, from the modeling of single-neuron dynamics to machine learning, including biophysical as well as data-driven phenomenological models. The discussed models include Hodgkin-Huxley, FitzHugh-Nagumo, coupled oscillators (Kuramoto oscillators, Rössler oscillators, and the Hindmarsh-Rose neuron), integrate-and-fire, networks of neurons, and neural field equations. In addition to the mathematical models, important mathematical methods in multiscale modeling and reconstruction of causal connectivity are sketched. The methods include linear and nonlinear tools from statistics, data analysis, and time series analysis up to differential equations, dynamical systems, and bifurcation theory, including Granger causal connectivity analysis, phase synchronization connectivity analysis, principal component analysis (PCA), independent component analysis (ICA), manifold learning algorithms such as ISOMAP and diffusion maps, and equation-free techniques. WIREs Syst Biol Med 2016, 8:438-458. doi: 10.1002/wsbm.1348 For further resources related to this article, please visit the WIREs website. © 2016 Wiley Periodicals, Inc.
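
    Of the coupled-oscillator models listed, the Kuramoto model is the quickest to state and simulate; a minimal sketch with illustrative parameter values:

```python
import numpy as np

# Kuramoto model: d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
rng = np.random.default_rng(1)
N, K, dt, steps = 100, 2.5, 0.01, 5000
omega = rng.normal(0.0, 1.0, N)                    # natural frequencies
theta = rng.uniform(0.0, 2.0 * np.pi, N)           # initial phases

for _ in range(steps):                             # forward Euler integration
    coupling = (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta += dt * (omega + coupling)

r = abs(np.exp(1j * theta).mean())                 # order parameter: 0 incoherent, 1 synchronized
print(f"phase coherence r = {r:.2f}")
```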

  11. Synaptic dynamics and neuronal network connectivity are reflected in the distribution of times in Up states.

    PubMed

    Dao Duc, Khanh; Parutto, Pierre; Chen, Xiaowei; Epsztein, Jérôme; Konnerth, Arthur; Holcman, David

    2015-01-01

    Neuronal networks coupled by dynamic synapses can sustain long periods of depolarization lasting hundreds of milliseconds, such as the Up states recorded during sleep or anesthesia. Yet the underlying mechanism driving these periods remains unclear. We show here, within a mean-field model, that the residence time of the neuronal membrane potential in cortical Up states does not follow a Poissonian law, but presents several peaks. Furthermore, the present modeling approach allows some information about the neuronal network connectivity to be extracted from the time-distribution histogram. Based on a synaptic-depression model, we find that these peaks, which can be observed in histograms of patch-clamp recordings, are not artifacts of electrophysiological measurements, but rather an inherent property of the network dynamics. Analysis of the equations reveals a stable focus located close to the unstable limit cycle, delimiting a region that defines the Up state. The model further shows that the peaks observed in the Up-state time distribution are due to winding around the focus before escaping from the basin of attraction. Finally, using in vivo recordings of intracellular membrane potential, we recover from the peak distribution some information about the network connectivity. We conclude that it is possible to recover the network connectivity from the distribution of times that the neuronal membrane voltage spends in Up states.

  12. Long-term optical stimulation of channelrhodopsin-expressing neurons to study network plasticity

    PubMed Central

    Lignani, Gabriele; Ferrea, Enrico; Difato, Francesco; Amarù, Jessica; Ferroni, Eleonora; Lugarà, Eleonora; Espinoza, Stefano; Gainetdinov, Raul R.; Baldelli, Pietro; Benfenati, Fabio

    2013-01-01

    Neuronal plasticity produces changes in excitability, synaptic transmission, and network architecture in response to external stimuli. Network adaptation to environmental conditions takes place on time scales ranging from a few seconds to days, and modulates the entire network dynamics. To study the network response to defined long-term experimental protocols, we set up a system that combines optical and electrophysiological tools embedded in a cell incubator. Primary hippocampal neurons transduced with lentiviruses expressing channelrhodopsin-2/H134R were subjected to various photostimulation protocols over time windows on the order of days. To monitor the effects of light-induced gating of network activity, stimulated transduced neurons were simultaneously recorded using multi-electrode arrays (MEAs). The developed experimental model allows discerning short-term, long-lasting, and adaptive plasticity responses of the same neuronal network to distinct stimulation frequencies applied over different temporal windows. PMID:23970852

  14. A computational model of the respiratory network challenged and optimized by data from optogenetic manipulation of glycinergic neurons.

    PubMed

    Oku, Yoshitaka; Hülsmann, Swen

    2017-04-07

    The topology of the respiratory network in the brainstem has been addressed using different computational models, which help to understand the functional properties of the system. We tested a neural mass model by comparing the results of activation and inhibition of inhibitory neurons in silico with recently published results of optogenetic manipulation of glycinergic neurons [Sherman, et al. (2015) Nat Neurosci 18:408]. The comparison revealed that a five-cell-type model consisting of three classes of inhibitory neurons [I-DEC, E-AUG, E-DEC (PI)] and two excitatory populations, pre-I/I and I-AUG neurons, can be applied to explain experimental observations made by stimulating or inhibiting inhibitory neurons via light-sensitive ion channels. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  15. An FPGA Platform for Real-Time Simulation of Spiking Neuronal Networks

    PubMed Central

    Pani, Danilo; Meloni, Paolo; Tuveri, Giuseppe; Palumbo, Francesca; Massobrio, Paolo; Raffo, Luigi

    2017-01-01

    In recent years, the idea of dynamically interfacing biological neurons with artificial ones has become increasingly pressing, essentially because of the design of innovative neuroprostheses in which biological cell assemblies of the brain can be substituted by artificial ones. For closed-loop experiments with biological neuronal networks interfaced with in silico modeled networks, several technological challenges need to be faced, from the low-level interfacing between the living tissue and the computational model to the implementation of the latter in a suitable form for real-time processing. Field programmable gate arrays (FPGAs) can improve flexibility when simple neuronal models are required, offering good accuracy, real-time performance, and the possibility of creating a hybrid system without any custom hardware, simply by programming the device to achieve the required functionality. In this paper, this possibility is explored by presenting a modular and efficient FPGA design of an in silico spiking neural network based on the Izhikevich model. The proposed system, prototypically implemented on a Xilinx Virtex 6 device, is able to simulate a fully connected network of up to 1,440 neurons, in real time, at a sampling rate of 10 kHz, which is reasonable for small- to medium-scale extracellular closed-loop experiments. PMID:28293163
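
    The Izhikevich model implemented by the design uses only two state variables per neuron, which is what makes it attractive for FPGA pipelines. A single-neuron software sketch at the 10 kHz update rate mentioned above (regular-spiking parameters from the original model; the input current is an arbitrary choice):

```python
# Izhikevich model: v' = 0.04*v^2 + 5*v + 140 - u + I, u' = a*(b*v - u),
# with reset v -> c, u -> u + d when v reaches the 30 mV spike cutoff.
a, b, c, d = 0.02, 0.2, -65.0, 8.0       # regular-spiking parameter set
dt, T, I = 0.1, 1000.0, 10.0             # ms step (10 kHz), duration, input
v, u, spikes = -65.0, b * -65.0, []

for step in range(int(T / dt)):
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                        # spike cutoff and reset
        spikes.append(step * dt)
        v, u = c, u + d

print(f"{len(spikes)} spikes in {T:.0f} ms")
```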

  16. Mean Field Analysis of Large-Scale Interacting Populations of Stochastic Conductance-Based Spiking Neurons Using the Klimontovich Method

    NASA Astrophysics Data System (ADS)

    Gandolfo, Daniel; Rodriguez, Roger; Tuckwell, Henry C.

    2017-03-01

    We investigate the dynamics of large-scale interacting neural populations, composed of conductance-based, spiking model neurons with modifiable synaptic connection strengths, which are possibly also subjected to external noisy currents. The network dynamics is controlled by a set of neural population probability distributions (PPDs), which are constructed along the same lines as in the Klimontovich approach to the kinetic theory of plasmas. An exact, non-closed, nonlinear system of integro-partial differential equations is derived for the PPDs. As is customary, a closing procedure leads to a mean-field limit. The equations we have obtained are of the same type as those recently derived using rigorous techniques of probability theory. Numerical solutions of these so-called McKean-Vlasov-Fokker-Planck equations, which are only valid in the limit of infinite-size networks, show that the statistical measures obtained from the PPDs are in good agreement with those obtained through direct integration of the stochastic dynamical system for large but finite-size networks. Although numerical solutions have been obtained for networks of FitzHugh-Nagumo model neurons, which are often used to approximate Hodgkin-Huxley model neurons, the theory can be readily applied to networks of general conductance-based model neurons of arbitrary dimension.
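
    A hedged sketch of the finite-size side of such a comparison: an ensemble of uncoupled, noise-driven FitzHugh-Nagumo neurons integrated with Euler-Maruyama, whose empirical state distribution stands in for the population density evolved by the mean-field equations (all parameters are illustrative, and synaptic coupling is omitted for brevity):

```python
import numpy as np

# FitzHugh-Nagumo with additive noise on the voltage variable:
#   dv = (v - v^3/3 - w + I) dt + sigma dW,   dw = eps * (v + a - b*w) dt
rng = np.random.default_rng(2)
M, dt, steps = 5000, 0.01, 5000                    # ensemble size, step, steps
a, b, eps, I, sigma = 0.7, 0.8, 0.08, 0.5, 0.2     # illustrative values
v = rng.normal(-1.0, 0.1, M)
w = rng.normal(-0.5, 0.1, M)

for _ in range(steps):                             # Euler-Maruyama integration
    dv = v - v**3 / 3.0 - w + I
    dw = eps * (v + a - b * w)
    v = v + dt * dv + sigma * np.sqrt(dt) * rng.standard_normal(M)
    w = w + dt * dw

print(f"ensemble mean v = {v.mean():.3f}, std of v = {v.std():.3f}")
```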

  17. Clustering promotes switching dynamics in networks of noisy neurons

    NASA Astrophysics Data System (ADS)

    Franović, Igor; Klinshov, Vladimir

    2018-02-01

    Macroscopic variability is an emergent property of neural networks, typically manifested in spontaneous switching between the episodes of elevated neuronal activity and the quiescent episodes. We investigate the conditions that facilitate switching dynamics, focusing on the interplay between the different sources of noise and heterogeneity of the network topology. We consider clustered networks of rate-based neurons subjected to external and intrinsic noise and derive an effective model where the network dynamics is described by a set of coupled second-order stochastic mean-field systems representing each of the clusters. The model provides an insight into the different contributions to effective macroscopic noise and qualitatively indicates the parameter domains where switching dynamics may occur. By analyzing the mean-field model in the thermodynamic limit, we demonstrate that clustering promotes multistability, which gives rise to switching dynamics in a considerably wider parameter region compared to the case of a non-clustered network with sparse random connection topology.

  18. Ensembles of Spiking Neurons with Noise Support Optimal Probabilistic Inference in a Dynamically Changing Environment

    PubMed Central

    Legenstein, Robert; Maass, Wolfgang

    2014-01-01

    It has recently been shown that networks of spiking neurons with noise can emulate simple forms of probabilistic inference through “neural sampling”, i.e., by treating spikes as samples from a probability distribution of network states that is encoded in the network. Deficiencies of the existing model are its reliance on single neurons for sampling from each random variable, and the resulting limitation in representing quickly varying probabilistic information. We show that both deficiencies can be overcome by moving to a biologically more realistic encoding of each salient random variable through the stochastic firing activity of an ensemble of neurons. The resulting model demonstrates that networks of spiking neurons with noise can easily track and carry out basic computational operations on rapidly varying probability distributions, such as the odds of getting rewarded for a specific behavior. We demonstrate the viability of this new approach towards neural coding and computation, which makes use of the inherent parallelism of generic neural circuits, by showing that this model can explain experimentally observed firing activity of cortical neurons for a variety of tasks that require rapid temporal integration of sensory information. PMID:25340749

  19. Temporal neural networks and transient analysis of complex engineering systems

    NASA Astrophysics Data System (ADS)

    Uluyol, Onder

    A theory is introduced for a multi-layered Local Output Gamma Feedback (LOGF) neural network within the paradigm of Locally-Recurrent Globally-Feedforward neural networks. It is developed for the identification, prediction, and control tasks of spatio-temporal systems and allows for the representation of different time scales through the incorporation of a gamma memory. It is initially applied to the tasks of sunspot and Mackey-Glass series prediction as benchmarks, and then extended to the task of power-level control of a nuclear reactor at different fuel cycle conditions. The developed LOGF neuron model can also be viewed as a Transformed Input and State (TIS) Gamma memory for neural network architectures for temporal processing. The novel LOGF neuron model extends the static neuron model by incorporating a short-term memory structure in the form of a digital gamma filter. A feedforward neural network made up of LOGF neurons can thus be used to model dynamic systems. A learning algorithm based upon the Backpropagation-Through-Time (BTT) approach is derived, applicable to training a general L-layer LOGF neural network. The spatial and temporal weights and parameters of the network are iteratively optimized for a given problem using the derived learning algorithm.

  20. Network algorithmics and the emergence of the cortical synaptic-weight distribution

    NASA Astrophysics Data System (ADS)

    Nathan, Andre; Barbosa, Valmir C.

    2010-02-01

    When a neuron fires and the resulting action potential travels down its axon toward other neurons' dendrites, the effect on each of those neurons is mediated by the strength of the synapse that separates it from the firing neuron. This strength, in turn, is affected by the postsynaptic neuron's response through a mechanism that is thought to underlie important processes such as learning and memory. Although difficult to quantify, cortical synaptic strengths have been found to obey a long-tailed unimodal distribution peaking near the lowest values (approximately lognormal), thus confirming some of the predictive models built previously. Most of these models are causally local, in the sense that they refer to the situation in which a number of neurons all fire directly at the same postsynaptic neuron. Consequently, they necessarily embody assumptions regarding the generation of action potentials by the presynaptic neurons that have little biological interpretability. We introduce a network model of large groups of interconnected neurons and demonstrate, making none of the assumptions that characterize the causally local models, that its long-term behavior gives rise to a distribution of synaptic weights (the mathematical surrogates of synaptic strengths) with the same properties that were experimentally observed. In our model, the action potentials that create a neuron's input are, ultimately, the product of network-wide causal chains relating what happens at a neuron to the firings of others. Our model is thus of a causally global nature and predicates the emergence of the synaptic-weight distribution on network structure and function. As such, it has the potential to become instrumental also in the study of other emergent cortical phenomena.

  1. Towards a theory of cortical columns: From spiking neurons to interacting neural populations of finite size

    PubMed Central

    Gerstner, Wulfram

    2017-01-01

    Neural population equations such as neural mass or field models are widely used to study brain activity on a large scale. However, the relation of these models to the properties of single neurons is unclear. Here we derive an equation for several interacting populations at the mesoscopic scale starting from a microscopic model of randomly connected generalized integrate-and-fire neuron models. Each population consists of 50–2000 neurons of the same type but different populations account for different neuron types. The stochastic population equations that we find reveal how spike-history effects in single-neuron dynamics such as refractoriness and adaptation interact with finite-size fluctuations on the population level. Efficient integration of the stochastic mesoscopic equations reproduces the statistical behavior of the population activities obtained from microscopic simulations of a full spiking neural network model. The theory describes nonlinear emergent dynamics such as finite-size-induced stochastic transitions in multistable networks and synchronization in balanced networks of excitatory and inhibitory neurons. The mesoscopic equations are employed to rapidly integrate a model of a cortical microcircuit consisting of eight neuron types, which allows us to predict spontaneous population activities as well as evoked responses to thalamic input. Our theory establishes a general framework for modeling finite-size neural population dynamics based on single cell and synapse parameters and offers an efficient approach to analyzing cortical circuits and computations. PMID:28422957

  2. Leader neurons in leaky integrate and fire neural network simulations.

    PubMed

    Zbinden, Cyrille

    2011-10-01

    In this paper, we highlight the topological properties of leader neurons, whose existence is an experimental fact. Several experimental studies show the existence of leader neurons in population bursts of activity in 2D living neural networks (Eytan and Marom, J Neurosci 26(33):8465-8476, 2006; Eckmann et al., New J Phys 10(015011), 2008). A leader neuron is defined as a neuron which fires at the beginning of a burst (respectively, network spike) more often than expected by chance given its mean firing rate. This means that leader neurons have some burst-triggering power beyond a chance-level statistical effect. In this study, we characterize these leader neuron properties, which naturally leads us to simulate 2D neural networks. To build our simulations, we choose the leaky integrate-and-fire (lIF) neuron model (Gerstner and Kistler 2002; Cessac, J Math Biol 56(3):311-345, 2008), which allows fast simulations (Izhikevich, IEEE Trans Neural Netw 15(5):1063-1070, 2004; Gerstner and Naud, Science 326:379-380, 2009). The dynamics of our lIF model exhibits stable leader neurons in the population bursts that we simulate. These leader neurons are excitatory neurons and have a low membrane potential firing threshold. Beyond these first two properties, the conditions required for a neuron to be a leader are difficult to identify and seem to depend on several parameters involved in the simulations themselves. However, a detailed linear analysis shows a trend in the properties required for a neuron to be a leader. Our main finding is that a leader neuron sends signals to many excitatory neurons and to few inhibitory neurons, and receives signals from only a few other excitatory neurons. Our linear analysis identifies five essential properties of leader neurons, each with a different relative importance. This means that, considering a given neural network with a fixed mean number of connections per neuron, our analysis gives us a way of predicting which neurons are good leader neurons and which are not. Our prediction formula correctly assesses leadership for at least ninety percent of neurons.
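
    For readers unfamiliar with the lIF model used here, a minimal single-neuron sketch (tau * dV/dt = -(V - V_rest) + R*I, with threshold-and-reset; the parameter values are illustrative, not those of the paper):

```python
# Leaky integrate-and-fire neuron with constant current injection.
tau, v_rest, v_reset, v_th, r_m = 20.0, -70.0, -75.0, -54.0, 10.0   # ms, mV, mV, mV, MOhm
dt, T, i_ext = 0.1, 500.0, 1.8                                      # ms, ms, nA
v, spike_times = v_rest, []

for step in range(int(T / dt)):
    v += dt / tau * (-(v - v_rest) + r_m * i_ext)   # subthreshold integration
    if v >= v_th:                                   # threshold crossing
        spike_times.append(step * dt)
        v = v_reset                                 # reset after the spike

print(f"firing rate: {1000.0 * len(spike_times) / T:.1f} Hz")
```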

  3. Neuronal network model of interictal and recurrent ictal activity

    NASA Astrophysics Data System (ADS)

    Lopes, M. A.; Lee, K.-E.; Goltsev, A. V.

    2017-12-01

    We propose a neuronal network model which undergoes a saddle node on an invariant circle bifurcation as the mechanism of the transition from the interictal to the ictal (seizure) state. In the vicinity of this transition, the model captures important dynamical features of both interictal and ictal states. We study the nature of interictal spikes and early warnings of the transition predicted by this model. We further demonstrate that recurrent seizures emerge due to the interaction between two networks.

  4. Modelling Feedback Excitation, Pacemaker Properties and Sensory Switching of Electrically Coupled Brainstem Neurons Controlling Rhythmic Activity

    PubMed Central

    Hull, Michael J.; Soffe, Stephen R.; Willshaw, David J.; Roberts, Alan

    2016-01-01

    What cellular and network properties allow reliable neuronal rhythm generation or firing that can be started and stopped by brief synaptic inputs? We investigate rhythmic activity in an electrically-coupled population of brainstem neurons driving swimming locomotion in young frog tadpoles, and how activity is switched on and off by brief sensory stimulation. We build a computational model of 30 electrically-coupled conditional pacemaker neurons on one side of the tadpole hindbrain and spinal cord. Based on experimental estimates for neuron properties, population sizes, synapse strengths and connections, we show that: long-lasting, mutual, glutamatergic excitation between the neurons allows the network to sustain rhythmic pacemaker firing at swimming frequencies following brief synaptic excitation; activity persists but rhythm breaks down without electrical coupling; NMDA voltage-dependency doubles the range of synaptic feedback strengths generating sustained rhythm. The network can be switched on and off at short latency by brief synaptic excitation and inhibition. We demonstrate that a population of generic Hodgkin-Huxley type neurons coupled by glutamatergic excitatory feedback can generate sustained asynchronous firing switched on and off synaptically. We conclude that networks of neurons with NMDAR mediated feedback excitation can generate self-sustained activity following brief synaptic excitation. The frequency of activity is limited by the kinetics of the neuron membrane channels and can be stopped by brief inhibitory input. Network activity can be rhythmic at lower frequencies if the neurons are electrically coupled. Our key finding is that excitatory synaptic feedback within a population of neurons can produce switchable, stable, sustained firing without synaptic inhibition. PMID:26824331

  5. The connection-set algebra--a novel formalism for the representation of connectivity structure in neuronal network models.

    PubMed

    Djurfeldt, Mikael

    2012-07-01

    The connection-set algebra (CSA) is a novel and general formalism for the description of connectivity in neuronal network models, from small-scale to large-scale structure. The algebra provides operators to form more complex sets of connections from simpler ones and also provides parameterization of such sets. CSA is expressive enough to describe a wide range of connection patterns, including multiple types of random and/or geometrically dependent connectivity, and can serve as a concise notation for network structure in scientific writing. CSA implementations allow for scalable and efficient representation of connectivity in parallel neuronal network simulators and could even allow for avoiding explicit representation of connections in computer memory. The expressiveness of CSA makes prototyping of network structure easy. A C++ version of the algebra has been implemented and used in a large-scale neuronal network simulation (Djurfeldt et al., IBM J Res Dev 52(1/2):31-42, 2008b) and an implementation in Python has been publicly released.
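
    A toy rendering of the connection-set idea in Python, with connection sets as predicates over (source, target) pairs composed by set operators (an illustration of the formalism only, not the API of the released Python implementation; all names below are ours):

```python
import itertools

one_to_one = lambda i, j: i == j
random_p   = lambda p, seed=42: (lambda i, j: hash((seed, i, j)) % 1000 < p * 1000)
union      = lambda f, g: (lambda i, j: f(i, j) or g(i, j))
difference = lambda f, g: (lambda i, j: f(i, j) and not g(i, j))

# Compose "10% random connectivity, excluding self-connections"
cs = difference(random_p(0.1), one_to_one)
conns = [(i, j) for i, j in itertools.product(range(50), range(50)) if cs(i, j)]
print(f"{len(conns)} connections out of {50 * 50} possible pairs")
```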

  6. Firing patterns in the adaptive exponential integrate-and-fire model.

    PubMed

    Naud, Richard; Marcille, Nicolas; Clopath, Claudia; Gerstner, Wulfram

    2008-11-01

    For simulations of large spiking neuron networks, an accurate, simple, and versatile single-neuron modeling framework is required. Here we explore the versatility of a simple two-equation model: the adaptive exponential integrate-and-fire neuron. We show that this model generates multiple firing patterns depending on the choice of parameter values, and present a phase diagram describing the transition from one firing type to another. We give an analytical criterion to distinguish between continuous adaptation, initial bursting, regular bursting, and two types of tonic spiking. We also report that the deterministic model is capable of producing irregular spiking when stimulated with constant current, indicating low-dimensional chaos. Lastly, the simple model is fitted to recordings of real cortical neurons under step-current stimulation. The results support the suitability of simple models such as the adaptive exponential integrate-and-fire neuron for large network simulations.
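
    The two equations of the model in directly runnable form (the parameter set below is a commonly quoted regular-spiking regime; moving within the (a, b, Vr, tau_w) space is what switches the model between the firing patterns classified in the paper):

```python
import numpy as np

# Adaptive exponential integrate-and-fire (AdEx):
#   C dV/dt = -gL*(V - EL) + gL*DeltaT*exp((V - VT)/DeltaT) - w + I
#   tau_w dw/dt = a*(V - EL) - w,  with reset V -> Vr, w -> w + b on a spike.
C, gL, EL, VT, DeltaT = 281.0, 30.0, -70.6, -50.4, 2.0   # pF, nS, mV, mV, mV
a, tau_w, b, Vr, I = 4.0, 144.0, 80.5, -70.6, 800.0      # nS, ms, pA, mV, pA
dt, T = 0.05, 400.0                                      # ms
v, w, spikes = EL, 0.0, []

for step in range(int(T / dt)):
    dv = (-gL * (v - EL) + gL * DeltaT * np.exp((v - VT) / DeltaT) - w + I) / C
    dw = (a * (v - EL) - w) / tau_w
    v += dt * dv
    w += dt * dw
    if v >= 0.0:                                         # numerical spike cutoff
        spikes.append(step * dt)
        v, w = Vr, w + b

print(f"{len(spikes)} spikes in {T:.0f} ms")
```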

  7. Introduction to Concepts in Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Niebur, Dagmar

    1995-01-01

    This introduction to artificial neural networks summarizes some basic concepts of computational neuroscience and the resulting models of artificial neurons. The terminology of biological and artificial neurons, biological and machine learning and neural processing is introduced. The concepts of supervised and unsupervised learning are explained with examples from the power system area. Finally, a taxonomy of different types of neurons and different classes of artificial neural networks is presented.

  8. Reconfiguration of the pontomedullary respiratory network: a computational modeling study with coordinated in vivo experiments.

    PubMed

    Rybak, I A; O'Connor, R; Ross, A; Shevtsova, N A; Nuding, S C; Segers, L S; Shannon, R; Dick, T E; Dunin-Barkowski, W L; Orem, J M; Solomon, I C; Morris, K F; Lindsey, B G

    2008-10-01

    A large body of data suggests that the pontine respiratory group (PRG) is involved in respiratory phase-switching and the reconfiguration of the brain stem respiratory network. However, connectivity between the PRG and ventral respiratory column (VRC) in computational models has been largely ad hoc. We developed a network model with PRG-VRC connectivity inferred from coordinated in vivo experiments. Neurons were modeled in the "integrate-and-fire" style; some neurons had pacemaker properties derived from the model of Breen et al. We recapitulated earlier modeling results, including reproduction of activity profiles of different respiratory neurons and motor outputs, and their changes under different conditions (vagotomy, pontine lesions, etc.). The model also reproduced characteristic changes in neuronal and motor patterns observed in vivo during fictive cough and during hypoxia in non-rapid eye movement sleep. Our simulations suggested possible mechanisms for respiratory pattern reorganization during these behaviors. The model predicted that network- and pacemaker-generated rhythms could be co-expressed during the transition from gasping to eupnea, producing a combined "burst-ramp" pattern of phrenic discharges. To test this prediction, phrenic activity and multiple single neuron spike trains were monitored in vagotomized, decerebrate, immobilized, thoracotomized, and artificially ventilated cats during hypoxia and recovery. In most experiments, phrenic discharge patterns during recovery from hypoxia were similar to those predicted by the model. We conclude that under certain conditions, e.g., during recovery from severe brain hypoxia, components of a distributed network activity present during eupnea can be co-expressed with gasp patterns generated by a distinct, functionally "simplified" mechanism.

  9. Clique of Functional Hubs Orchestrates Population Bursts in Developmentally Regulated Neural Networks

    PubMed Central

    Luccioli, Stefano; Ben-Jacob, Eshel; Barzilai, Ari; Bonifazi, Paolo; Torcini, Alessandro

    2014-01-01

    It has recently been discovered that single-neuron stimulation can impact network dynamics in immature and adult neuronal circuits. Here we report a novel mechanism that can explain the peculiar role played by a few specific neurons in promoting or arresting population activity in neuronal circuits at an early stage of development. For this purpose, we consider a standard neuronal network model, with short-term synaptic plasticity, whose population activity is characterized by bursting behavior. The addition of developmentally inspired constraints and correlations in the distribution of the neuronal connectivities and excitabilities leads to the emergence of functional hub neurons, whose stimulation/deletion is critical for the network activity. Functional hubs form a clique, in which a precise sequential activation of the neurons is essential to ignite collective events without any need for a specific topological architecture. Unsupervised time-lagged firings of supra-threshold cells, in connection with coordinated entrainments of near-threshold neurons, are the key ingredients to orchestrate population activity. PMID:25255443

  10. Spiking neuron network Helmholtz machine.

    PubMed

    Sountsov, Pavel; Miller, Paul

    2015-01-01

    An increasing amount of behavioral and neurophysiological data suggests that the brain performs optimal (or near-optimal) probabilistic inference and learning during perception and other tasks. Although many machine learning algorithms exist that perform inference and learning in an optimal way, a complete description of how one of those algorithms (or a novel algorithm) could be implemented in the brain is still lacking. There have been many proposed solutions that address how neurons can perform optimal inference, but the question of how synaptic plasticity can implement optimal learning is rarely addressed. This paper aims to unify the two fields of probabilistic inference and synaptic plasticity by using a neuronal network of realistic model spiking neurons to implement a well-studied computational model called the Helmholtz machine. The Helmholtz machine is amenable to neural implementation because the algorithm it uses to learn its parameters, the wake-sleep algorithm, uses a local delta learning rule. Our spiking-neuron network implements both the delta rule and a small example of a Helmholtz machine. This neuronal network can learn an internal model of continuous-valued training data sets without supervision. The network can also perform inference on the learned internal models. We show how various biophysical features of the neural implementation constrain the parameters of the wake-sleep algorithm, such as the duration of the wake and sleep phases of learning and the minimal sample duration. We examine the deviations from optimal performance and tie them to the properties of the synaptic plasticity rule.

  12. The emergence of spontaneous activity in neuronal cultures

    NASA Astrophysics Data System (ADS)

    Orlandi, J. G.; Alvarez-Lacalle, E.; Teller, S.; Soriano, J.; Casademunt, J.

    2013-01-01

    In vitro neuronal networks of dissociated hippocampal or cortical tissues are among the most attractive model systems for the physics and neuroscience communities. Cultured neurons grow and mature, develop axons and dendrites, and quickly connect to their neighbors to establish a spontaneously active network within a week. The resulting neuronal network is characterized by a combination of excitatory and inhibitory neurons coupled through synaptic connections that interact in a highly nonlinear manner. The nonlinear behavior emerges from the dynamics of both the neurons' spiking activity and synaptic transmission, together with biological noise. These ingredients give rise to a rich repertoire of phenomena that are still poorly understood, including the emergence and maintenance of periodic spontaneous activity, avalanches, propagation of fronts, and synchronization. In this work we present an overview of the rich activity of cultured neuronal networks, and detail the minimal theoretical considerations needed to describe the experimental observations.

  13. Graph-based unsupervised segmentation algorithm for cultured neuronal networks' structure characterization and modeling.

    PubMed

    de Santos-Sierra, Daniel; Sendiña-Nadal, Irene; Leyva, Inmaculada; Almendral, Juan A; Ayali, Amir; Anava, Sarit; Sánchez-Ávila, Carmen; Boccaletti, Stefano

    2015-06-01

    Large-scale phase-contrast images taken at high resolution throughout the life of a cultured neuronal network are analyzed by a graph-based unsupervised segmentation algorithm with a very low computational cost, scaling linearly with the image size. The processing automatically retrieves the whole network structure, an object whose mathematical representation is a matrix in which nodes are identified neurons or clusters of neurons, and links are the reconstructed connections between them. The algorithm is also able to extract any other relevant morphological information characterizing neurons and neurites. More importantly, and at variance with other segmentation methods that require fluorescence imaging from immunocytochemistry techniques, our non-invasive measurements enable us to perform a longitudinal analysis during the maturation of a single culture. Such an analysis provides a way of identifying the main physical processes underlying the self-organization of the neuronal ensemble into a complex network, and guides the formulation of a phenomenological model that is nevertheless able to describe qualitatively the overall scenario observed during culture growth. © 2014 International Society for Advancement of Cytometry.

  14. Effect of acute stretch injury on action potential and network activity of rat neocortical neurons in culture.

    PubMed

    Magou, George C; Pfister, Bryan J; Berlin, Joshua R

    2015-10-22

    The basis for acute seizures following traumatic brain injury (TBI) remains unclear. Animal models of TBI have revealed acute hyperexcitability in cortical neurons that could underlie seizure activity, but studying the initiating events causing hyperexcitability is difficult in these models. In vitro models of stretch injury with cultured cortical neurons, a surrogate for TBI, allow facile investigation of cellular changes after injury, but have so far demonstrated only post-injury hypoexcitability. The goal of this study was to determine whether neuronal hyperexcitability could be triggered by in vitro stretch injury. Controlled uniaxial stretch injury was delivered to a spatially delimited region of a spontaneously active network of cultured rat cortical neurons, yielding a region of stretch-injured neurons and adjacent regions of non-stretched neurons that did not directly experience stretch injury. Spontaneous electrical activity was measured in non-stretched and stretch-injured neurons, and in control neuronal networks not subjected to stretch injury. Non-stretched neurons in stretch-injured cultures displayed a three-fold increase in action potential firing rate and bursting activity 30-60 min post-injury. Stretch-injured neurons, however, displayed dramatically lower rates of action potential firing and bursting. These results demonstrate that acute hyperexcitability can be observed in non-stretched neurons located in regions adjacent to the site of stretch injury, consistent with reports that seizure activity can arise from regions surrounding the site of localized brain injury. Thus, this in vitro procedure for localized neuronal stretch injury may provide a model to study the earliest cellular changes in neuronal function associated with acute post-traumatic seizures. Copyright © 2015. Published by Elsevier B.V.

  15. Convergent neuromodulation onto a network neuron can have divergent effects at the network level.

    PubMed

    Kintos, Nickolas; Nusbaum, Michael P; Nadim, Farzan

    2016-04-01

    Different neuromodulators often target the same ion channel. When such modulators act on different neuron types, this convergent action can enable a rhythmic network to produce distinct outputs. Less clear are the functional consequences when two neuromodulators influence the same ion channel in the same neuron. We examine the consequences of this seeming redundancy using a mathematical model of the crab gastric mill (chewing) network. This network is activated in vitro by the projection neuron MCN1, which elicits a half-center bursting oscillation between the reciprocally-inhibitory neurons LG and Int1. We focus on two neuropeptides which modulate this network, including a MCN1 neurotransmitter and the hormone crustacean cardioactive peptide (CCAP). Both activate the same voltage-gated current (IMI) in the LG neuron. However, IMI-MCN1, resulting from MCN1-released neuropeptide, has phasic dynamics in its maximal conductance due to LG presynaptic inhibition of MCN1, while IMI-CCAP retains the same maximal conductance in both phases of the gastric mill rhythm. Separation of time scales allows us to produce a 2D model from which phase plane analysis shows that, as in the biological system, IMI-MCN1 and IMI-CCAP primarily influence the durations of opposing phases of this rhythm. Furthermore, IMI-MCN1 influences the rhythmic output in a manner similar to the Int1-to-LG synapse, whereas IMI-CCAP has an influence similar to the LG-to-Int1 synapse. These results show that distinct neuromodulators which target the same voltage-gated ion channel in the same network neuron can nevertheless produce distinct effects at the network level, providing divergent neuromodulator actions on network activity.

  17. Self-organization of synchronous activity propagation in neuronal networks driven by local excitation

    PubMed Central

    Bayati, Mehdi; Valizadeh, Alireza; Abbassian, Abdolhossein; Cheng, Sen

    2015-01-01

    Many experimental and theoretical studies have suggested that the reliable propagation of synchronous neural activity is crucial for neural information processing. The propagation of synchronous firing activity in so-called synfire chains has been studied extensively in feed-forward networks of spiking neurons. However, it remains unclear how such neural activity could emerge in recurrent neuronal networks through synaptic plasticity. In this study, we investigate whether local excitation, i.e., neurons that fire at a higher frequency than the other, spontaneously active neurons in the network, can shape a network to allow for synchronous activity propagation. We use two-dimensional, locally connected and heterogeneous neuronal networks with spike-timing-dependent plasticity (STDP). We find that, in our model, local excitation drives profound network changes within seconds. In the emergent network, neural activity originates at the site of the local excitation and propagates synchronously through the network. The synchronous propagation persists even when the local excitation is removed, since it derives from the synaptic weight matrix. Importantly, once this connectivity is established, it remains stable even in the presence of spontaneous activity. Our results suggest that synfire-chain-like activity can emerge in a relatively simple way in realistic neural networks by locally exciting the desired origin of the neuronal sequence. PMID:26089794
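
    The pair-based STDP rule driving such reorganization can be written compactly; a sketch with illustrative amplitudes and time constants (the paper's exact rule and parameters may differ):

```python
import numpy as np

# Pair-based STDP kernel: potentiation when the presynaptic spike precedes
# the postsynaptic one, depression otherwise.
A_plus, A_minus, tau_plus, tau_minus = 0.01, 0.012, 20.0, 20.0   # amplitudes, ms

def stdp_dw(t_pre, t_post):
    """Weight change for a single pre/post spike pairing (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:
        return A_plus * np.exp(-dt / tau_plus)     # pre before post: LTP
    return -A_minus * np.exp(dt / tau_minus)       # post before pre: LTD

for dt in (-20, -5, 5, 20):
    print(f"dt = {dt:+d} ms -> dw = {stdp_dw(0.0, dt):+.4f}")
```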

  18. An Asynchronous Recurrent Network of Cellular Automaton-Based Neurons and Its Reproduction of Spiking Neural Network Activities.

    PubMed

    Matsubara, Takashi; Torikai, Hiroyuki

    2016-04-01

    Modeling and implementation approaches for the reproduction of input-output relationships in biological nervous tissues contribute to the development of engineering and clinical applications. However, because of their high nonlinearity, traditional modeling and implementation approaches encounter difficulties in terms of generalization ability (i.e., performance when reproducing an unknown data set) and computational resources (i.e., computation time and circuit elements). To overcome these difficulties, asynchronous cellular automaton-based neuron (ACAN) models, which are special kinds of cellular automata that can be implemented as small asynchronous sequential logic circuits, have been proposed. This paper presents a novel type of ACAN and a theoretical analysis of its excitability. It also presents a novel network of such neurons, which can mimic the input-output relationships of biological and nonlinear ordinary-differential-equation model neural networks. Numerical analyses confirm that the presented network has a higher generalization ability than other major modeling and implementation approaches. In addition, field-programmable gate array (FPGA) implementations confirm that the presented network requires lower computational resources.

  19. Functional connectivity and dynamics of cortical-thalamic networks co-cultured in a dual compartment device

    NASA Astrophysics Data System (ADS)

    Kanagasabapathi, Thirukumaran T.; Massobrio, Paolo; Barone, Rocco Andrea; Tedesco, Mariateresa; Martinoia, Sergio; Wadman, Wytse J.; Decré, Michel M. J.

    2012-06-01

    Co-cultures containing dissociated cortical and thalamic cells may provide a unique model for understanding the pathophysiology of the respective neuronal sub-circuitry. In addition, developing an in vitro dissociated co-culture model offers the possibility of studying the system without influence from other neuronal sub-populations. Here we demonstrate a dual-compartment system coupled to microelectrode arrays (MEAs) for co-culturing and recording spontaneous activities from neuronal sub-populations. Propagation of electrical activities between cortical and thalamic regions and their interdependence in connectivity are verified by means of a cross-correlation algorithm. We found that burst events originate in the cortical region and drive the bursting behavior of the entire cortical-thalamic network, while mutually weak thalamic connections play a relevant role in sustaining longer burst events in cortical cells. To support these experimental findings, a neuronal network model was developed and used to investigate the interplay between network dynamics and connectivity in the cortical-thalamic system.

  20. Intelligent Network Management and Functional Cerebellum Synthesis

    NASA Technical Reports Server (NTRS)

    Loebner, Egon E.

    1989-01-01

    Transdisciplinary modeling of the cerebellum across histology, physiology, and network engineering provides preliminary results at three organization levels: input/output links to central nervous system networks; links between the six neuron populations in the cerebellum; and computation among the neurons of the populations. Older models probably underestimated the importance and role of climbing fiber input, which seems to supply write as well as read signals, not just to Purkinje but also to basket and stellate neurons. The well-known mossy fiber-granule cell-Golgi cell system should also respond to inputs originating from climbing fibers. Corticonuclear microcomplexing might be aided by stellate and basket computation and associative processing. Technological and scientific implications of the proposed cerebellum model are discussed.

  1. Interfacing 3D Engineered Neuronal Cultures to Micro-Electrode Arrays: An Innovative In Vitro Experimental Model.

    PubMed

    Tedesco, Mariateresa; Frega, Monica; Martinoia, Sergio; Pesce, Mattia; Massobrio, Paolo

    2015-10-18

    Currently, large-scale networks derived from dissociated neurons growing and developing in vitro on extracellular micro-transducer devices are the gold-standard experimental model for studying basic neurophysiological mechanisms involved in the formation and maintenance of neuronal cell assemblies. However, in vitro studies have been limited to the recording of the electrophysiological activity generated by bi-dimensional (2D) neural networks. Nonetheless, given the intricate relationship between structure and dynamics, a significant improvement is necessary to investigate the formation and developing dynamics of three-dimensional (3D) networks. In this work, a novel experimental platform in which 3D hippocampal or cortical networks are coupled to planar Micro-Electrode Arrays (MEAs) is presented. 3D networks are realized by seeding neurons in a scaffold constituted of glass microbeads (30-40 µm in diameter) on which neurons are able to grow and form complex interconnected 3D assemblies. In this way, it is possible to design engineered 3D networks made up of 5-8 layers with an expected final cell density. The increasing complexity in the morphological organization of the 3D assembly induces an enhancement of the electrophysiological patterns displayed by this type of network. Compared with standard 2D networks, where highly stereotyped bursting activity emerges, the 3D structure alters the bursting activity in terms of duration and frequency, and also allows the observation of more random spiking activity. In this sense, the developed 3D model more closely resembles in vivo neural networks.

  3. Establishment of a Human Neuronal Network Assessment System by Using a Human Neuron/Astrocyte Co-Culture Derived from Fetal Neural Stem/Progenitor Cells.

    PubMed

    Fukushima, Kazuyuki; Miura, Yuji; Sawada, Kohei; Yamazaki, Kazuto; Ito, Masashi

    2016-01-01

    Using human cell models mimicking the central nervous system (CNS) provides a better understanding of the human CNS, and it is a key strategy for improving success rates in CNS drug development. In the CNS, neurons function as networks in which astrocytes play important roles. Thus, an assessment system of neuronal network functions in a co-culture of human neurons and astrocytes has the potential to accelerate CNS drug development. We previously demonstrated that human hippocampus-derived neural stem/progenitor cells (HIP-009 cells) were a novel tool to obtain human neurons and astrocytes in the same culture. In this study, we applied HIP-009 cells to a multielectrode array (MEA) system to detect neuronal signals as neuronal network functions. We observed spontaneous firing of HIP-009 neurons, and validated the functional formation of neuronal networks pharmacologically. By using this assay system, we investigated the effects of several reference compounds, including agonists and antagonists of glutamate and γ-aminobutyric acid receptors, as well as of sodium, potassium, and calcium channels, on neuronal network functions, using firing and burst numbers and synchrony as readouts. These results indicate that the HIP-009/MEA assay system is applicable to the pharmacological assessment of drug candidates affecting synaptic functions for CNS drug development. © 2015 Society for Laboratory Automation and Screening.

  4. Brian: a simulator for spiking neural networks in python.

    PubMed

    Goodman, Dan; Brette, Romain

    2008-01-01

    "Brian" is a new simulator for spiking neural networks, written in Python (http://brian.di.ens.fr). It is an intuitive and highly flexible tool for rapidly developing new models, especially networks of single-compartment neurons. In addition to using standard types of neuron models, users can define models by writing arbitrary differential equations in ordinary mathematical notation. Python scientific libraries can also be used for defining models and analysing data. Vectorisation techniques allow efficient simulations despite the overheads of an interpreted language. Brian will be especially valuable for working on non-standard neuron models not easily covered by existing software, and as an alternative to using Matlab or C for simulations. With its easy and intuitive syntax, Brian is also very well suited for teaching computational neuroscience.

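    As an informal companion to this record, here is a minimal sketch of the equation-oriented style Brian popularized. It uses the modern Brian 2 API (the brian2 package) rather than the 2008 release described above, and all equations, parameters and connectivity values are illustrative assumptions.

```python
# Minimal sketch of defining a model by its differential equations,
# in the style the abstract describes; uses the modern Brian 2 API
# (pip install brian2), not the 2008 release. Parameters are illustrative.
from brian2 import NeuronGroup, Synapses, SpikeMonitor, run, ms, mV

eqs = '''
dv/dt = (v_rest - v) / tau : volt (unless refractory)
v_rest : volt
tau : second
'''
G = NeuronGroup(100, eqs, threshold='v > -50*mV', reset='v = -65*mV',
                refractory=5*ms, method='exact')
G.v = -65*mV
G.v_rest = -49*mV   # rest above threshold drives slow repetitive firing
G.tau = 20*ms

S = Synapses(G, G, on_pre='v += 0.5*mV')  # simple delta synapse
S.connect(p=0.1)                          # 10% random connectivity

M = SpikeMonitor(G)
run(200*ms)
print(f"{M.num_spikes} spikes recorded")
```
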
  5. The dynamical analysis of modified two-compartment neuron model and FPGA implementation

    NASA Astrophysics Data System (ADS)

    Lin, Qianjin; Wang, Jiang; Yang, Shuangming; Yi, Guosheng; Deng, Bin; Wei, Xile; Yu, Haitao

    2017-10-01

    The complexity of neural models is increasing with the investigation of larger biological neural networks, more varied ionic channels and more detailed morphologies, and the implementation of biological neural networks is a task with huge computational complexity and power consumption. This paper presents an efficient digital design using piecewise linearization on a field programmable gate array (FPGA) to succinctly implement the reduced two-compartment model, which retains essential features of more complicated models. The design proposes an approximate neuron model composed of a set of piecewise linear equations, and it can reproduce different dynamical behaviors to depict the mechanisms of a single neuron model. The consistency of the hardware implementation is verified in terms of dynamical behaviors and bifurcation analysis, and the simulation results, including varied ion channel characteristics, coincide with the biological neuron model with high accuracy. Hardware synthesis on FPGA demonstrates that the proposed model has reliable performance and lower hardware resource usage compared with the original two-compartment model. These investigations are conducive to the scalability of biological neural networks in reconfigurable large-scale neuromorphic systems.
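
    A toy illustration of the piecewise-linearization idea the abstract describes: replace a smooth nonlinearity with straight-line segments so that hardware evaluation needs only comparisons, multiplies and adds. The cubic term and breakpoints below are invented for illustration and are not the paper's model.

```python
# Hedged sketch: approximate a smooth nonlinearity f(v) with piecewise
# linear segments, the core trick behind FPGA-friendly neuron models.
# The nonlinearity and breakpoints here are illustrative, not the paper's.
import numpy as np

def f(v):
    return v * (v - 0.1) * (1.0 - v)     # example cubic activation term

breakpoints = np.linspace(-0.2, 1.2, 8)  # segment boundaries (assumed)
slopes = np.diff(f(breakpoints)) / np.diff(breakpoints)

def f_pwl(v):
    # pick the segment each v falls in, then evaluate its line
    i = np.clip(np.searchsorted(breakpoints, v) - 1, 0, len(slopes) - 1)
    return f(breakpoints[i]) + slopes[i] * (v - breakpoints[i])

v = np.linspace(-0.2, 1.2, 1000)
print("max abs error:", np.max(np.abs(f(v) - f_pwl(v))))
```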

  6. Neuronal network models of epileptogenesis

    PubMed Central

    Abdullahi, Aminu T.; Adamu, Lawan H.

    2017-01-01

    Epilepsy is a chronic neurological condition in which, following some trigger, a normal brain is transformed into one that produces recurrent unprovoked seizures. In the search for the mechanisms that best explain the epileptogenic process, there is a growing body of evidence suggesting that the epilepsies are network-level disorders. In this review, we briefly describe the concept of neuronal networks and highlight 2 methods used to analyse such networks. The first method, graph theory, is used to describe general characteristics of a network to facilitate comparison between normal and abnormal networks. The second, dynamic causal modelling, is useful in the analysis of the pathways of seizure spread. We concluded that the end results of the epileptogenic process are best understood as abnormalities of neuronal circuitry and not simply as molecular or cellular abnormalities. The network approach promises to generate new understanding and more targeted treatment of epilepsy. PMID:28416779
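
    To make the first method concrete, here is a small, hypothetical comparison of graph metrics between a more small-world and a more random network using networkx; the graphs and parameter choices are invented and are not taken from the review.

```python
# Illustrative sketch (not from the review): comparing simple graph-theoretic
# metrics between a "normal" and a hypothetical "abnormal" network, the kind
# of comparison the review's first method supports.
import networkx as nx

normal = nx.connected_watts_strogatz_graph(100, 6, 0.1, seed=1)   # small-world
abnormal = nx.connected_watts_strogatz_graph(100, 6, 0.9, seed=1) # more random

for name, g in [("normal", normal), ("abnormal", abnormal)]:
    print(name,
          "clustering:", round(nx.average_clustering(g), 3),
          "path length:", round(nx.average_shortest_path_length(g), 3))
```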

  7. Identification of the connections in biologically inspired neural networks

    NASA Technical Reports Server (NTRS)

    Demuth, H.; Leung, K.; Beale, M.; Hicklin, J.

    1990-01-01

    We developed an identification method to find the strength of the connections between neurons from their behavior in small biologically-inspired artificial neural networks. That is, given the network external inputs and the temporal firing pattern of the neurons, we can calculate a solution for the strengths of the connections between neurons and the initial neuron activations if a solution exists. The method determines directly if there is a solution to a particular neural network problem. No training of the network is required. It should be noted that this is a first pass at the solution of a difficult problem. The neuron and network models chosen are related to biology but do not contain all of its complexities, some of which we hope to add to the model in future work. A variety of new results have been obtained. First, the method has been tailored to produce connection weight matrix solutions for networks with important features of biological neural (bioneural) networks. Second, a computationally efficient method of finding a robust central solution has been developed. This latter method also enables us to find the most consistent solution in the presence of noisy data. Prospects of applying our method to identify bioneural network connections are exciting because such connections are almost impossible to measure in the laboratory. Knowledge of such connections would facilitate an understanding of bioneural networks and would allow the construction of the electronic counterparts of bioneural networks on very large scale integrated (VLSI) circuits.
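
    The flavor of such an identification can be sketched as a least-squares problem: if the network dynamics were linear, observed states and inputs would determine the weights directly, with no training. The model below is an invented stand-in, not the authors' neuron model.

```python
# Hedged sketch of the identification idea: if neuron activations followed a
# linear update x[t+1] = W x[t] + B u[t], observing states and inputs lets
# us solve for (W, B) by least squares. Model and sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, m, T = 5, 2, 200
W_true = rng.normal(scale=0.3, size=(n, n))
B_true = rng.normal(size=(n, m))

u = rng.normal(size=(m, T))
x = np.zeros((n, T + 1))
for t in range(T):
    x[:, t + 1] = W_true @ x[:, t] + B_true @ u[:, t]

# Stack observations and solve x_next = [W B] [x; u] in one least-squares fit.
Z = np.vstack([x[:, :T], u])          # shape (n+m, T)
WB = x[:, 1:] @ np.linalg.pinv(Z)     # shape (n, n+m)
W_est, B_est = WB[:, :n], WB[:, n:]
print("max |W error|:", np.abs(W_est - W_true).max())
```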

  8. PTEN Loss Increases the Connectivity of Fast Synaptic Motifs and Functional Connectivity in a Developing Hippocampal Network.

    PubMed

    Barrows, Caitlynn M; McCabe, Matthew P; Chen, Hongmei; Swann, John W; Weston, Matthew C

    2017-09-06

    Changes in synaptic strength and connectivity are thought to be a major mechanism through which many gene variants cause neurological disease. Hyperactivation of the PI3K-mTOR signaling network, via loss of function of repressors such as PTEN, causes epilepsy in humans and animal models, and altered mTOR signaling may contribute to a broad range of neurological diseases. Changes in synaptic transmission have been reported in animal models of PTEN loss; however, the full extent of these changes, and their effect on network function, is still unknown. To better understand the scope of these changes, we recorded from pairs of mouse hippocampal neurons cultured in a two-neuron microcircuit configuration that allowed us to characterize all four major connection types within the hippocampus. Loss of PTEN caused changes in excitatory and inhibitory connectivity, and these changes were postsynaptic, presynaptic, and trans-synaptic, suggesting that disruption of PTEN has the potential to affect most connection types in the hippocampal circuit. Given the complexity of the changes at the synaptic level, we measured changes in network behavior after deleting Pten from neurons in an organotypic hippocampal slice network. Slices containing Pten-deleted neurons showed increased recruitment of neurons into network bursts. Importantly, these changes were not confined to Pten-deleted neurons, but involved the entire network, suggesting that the extensive changes in synaptic connectivity rewire the entire network in such a way that promotes a widespread increase in functional connectivity. SIGNIFICANCE STATEMENT Homozygous deletion of the Pten gene in neuronal subpopulations in the mouse serves as a valuable model of epilepsy caused by mTOR hyperactivation. To better understand how gene deletions lead to altered neuronal activity, we investigated the synaptic and network effects that occur 1 week after Pten deletion. PTEN loss increased the connectivity of all four types of hippocampal synaptic connections, including two forms of increased inhibition of inhibition, and increased network functional connectivity. These data suggest that single gene mutations that cause neurological diseases such as epilepsy may affect a surprising range of connection types. Moreover, given the robustness of homeostatic plasticity, these diverse effects on connection types may be necessary to cause network phenotypes such as increased synchrony. Copyright © 2017 the authors 0270-6474/17/378595-17$15.00/0.

  9. PTEN Loss Increases the Connectivity of Fast Synaptic Motifs and Functional Connectivity in a Developing Hippocampal Network

    PubMed Central

    McCabe, Matthew P.; Chen, Hongmei; Swann, John W.

    2017-01-01

    Changes in synaptic strength and connectivity are thought to be a major mechanism through which many gene variants cause neurological disease. Hyperactivation of the PI3K-mTOR signaling network, via loss of function of repressors such as PTEN, causes epilepsy in humans and animal models, and altered mTOR signaling may contribute to a broad range of neurological diseases. Changes in synaptic transmission have been reported in animal models of PTEN loss; however, the full extent of these changes, and their effect on network function, is still unknown. To better understand the scope of these changes, we recorded from pairs of mouse hippocampal neurons cultured in a two-neuron microcircuit configuration that allowed us to characterize all four major connection types within the hippocampus. Loss of PTEN caused changes in excitatory and inhibitory connectivity, and these changes were postsynaptic, presynaptic, and trans-synaptic, suggesting that disruption of PTEN has the potential to affect most connection types in the hippocampal circuit. Given the complexity of the changes at the synaptic level, we measured changes in network behavior after deleting Pten from neurons in an organotypic hippocampal slice network. Slices containing Pten-deleted neurons showed increased recruitment of neurons into network bursts. Importantly, these changes were not confined to Pten-deleted neurons, but involved the entire network, suggesting that the extensive changes in synaptic connectivity rewire the entire network in such a way that promotes a widespread increase in functional connectivity. SIGNIFICANCE STATEMENT Homozygous deletion of the Pten gene in neuronal subpopulations in the mouse serves as a valuable model of epilepsy caused by mTOR hyperactivation. To better understand how gene deletions lead to altered neuronal activity, we investigated the synaptic and network effects that occur 1 week after Pten deletion. PTEN loss increased the connectivity of all four types of hippocampal synaptic connections, including two forms of increased inhibition of inhibition, and increased network functional connectivity. These data suggest that single gene mutations that cause neurological diseases such as epilepsy may affect a surprising range of connection types. Moreover, given the robustness of homeostatic plasticity, these diverse effects on connection types may be necessary to cause network phenotypes such as increased synchrony. PMID:28751459

  10. Nonlinear Maps for Design of Discrete Time Models of Neuronal Network Dynamics

    DTIC Science & Technology

    2016-02-29

    Performance/Technical Report, 02-01-2016 to 02-29-2016. ...neuronal model in the form of difference equations that generates neuronal states in discrete moments of time. In this approach, time step can be made... propose to use modern DSP ideas to develop new efficient approaches to the design of such discrete-time models for studies of large-scale neuronal...
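
    Map-based neuron models of the kind this report pursues can be illustrated with the well-known Rulkov map, a two-variable difference equation that produces spiking and bursting; this is a generic example, not the report's specific design.

```python
# Illustrative map-based neuron in the spirit of the abstract's discrete-time
# approach; this is the standard Rulkov map, not the report's model.
import numpy as np

def rulkov(alpha=4.1, mu=0.001, sigma=-1.0, steps=5000):
    x, y = -1.0, -3.5
    xs = np.empty(steps)
    for n in range(steps):
        # both updates use the state at step n (simultaneous update)
        x, y = alpha / (1.0 + x * x) + y, y - mu * (x - sigma)
        xs[n] = x
    return xs

trace = rulkov()                      # produces spiking/bursting waveforms
print("min/max membrane variable:", trace.min(), trace.max())
```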

  11. Experiments in clustered neuronal networks: A paradigm for complex modular dynamics

    NASA Astrophysics Data System (ADS)

    Teller, Sara; Soriano, Jordi

    2016-06-01

    Uncovering the interplay between activity and connectivity is one of the major challenges in neuroscience. To deepen the understanding of how a neuronal circuit shapes network dynamics, neuronal cultures have emerged as remarkable systems given their accessibility and easy manipulation. An attractive configuration of these in vitro systems consists of an ensemble of interconnected clusters of neurons. Using calcium fluorescence imaging to monitor spontaneous activity in these clustered neuronal networks, we were able to draw functional maps and reveal their topological features. We also observed that these networks exhibit hierarchical modular dynamics, in which clusters fire in small groups that shape characteristic communities in the network. The structure and stability of these communities are sensitive to chemical or physical action, and therefore their analysis may serve as a proxy for network health. Indeed, the combination of all these approaches is helping to develop models to quantify damage upon network degradation, with promising applications for the study of neurological disorders in vitro.

  12. Perspectives for computational modeling of cell replacement for neurological disorders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aimone, James B.; Weick, Jason P.

    Mathematical modeling of anatomically-constrained neural networks has provided significant insights regarding the response of networks to neurological disorders or injury. Furthermore, a logical extension of these models is to incorporate treatment regimens to investigate network responses to intervention. The addition of nascent neurons from stem cell precursors into damaged or diseased tissue has been used as a successful therapeutic tool in recent decades. Interestingly, models have been developed to examine the incorporation of new neurons into intact adult structures, particularly the dentate granule neurons of the hippocampus. These studies suggest that the unique properties of maturing neurons can impact circuit behavior in unanticipated ways. In this perspective, we review the current status of models used to examine damaged CNS structures, with particular focus on cortical damage due to stroke. Secondly, we suggest that computational modeling of cell replacement therapies can be made feasible by implementing approaches taken by current models of adult neurogenesis. The development of these models is critical for generating hypotheses regarding transplant therapies and improving outcomes by tailoring transplants to desired effects.

  13. Perspectives for computational modeling of cell replacement for neurological disorders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aimone, James B.; Weick, Jason P.

    Mathematical modeling of anatomically-constrained neural networks has provided significant insights regarding the response of networks to neurological disorders or injury. A logical extension of these models is to incorporate treatment regimens to investigate network responses to intervention. The addition of nascent neurons from stem cell precursors into damaged or diseased tissue has been used as a successful therapeutic tool in recent decades. Interestingly, models have been developed to examine the incorporation of new neurons into intact adult structures, particularly the dentate granule neurons of the hippocampus. These studies suggest that the unique properties of maturing neurons can impact circuit behavior in unanticipated ways. In this perspective, we review the current status of models used to examine damaged CNS structures, with particular focus on cortical damage due to stroke. Secondly, we suggest that computational modeling of cell replacement therapies can be made feasible by implementing approaches taken by current models of adult neurogenesis. The development of these models is critical for generating hypotheses regarding transplant therapies and improving outcomes by tailoring transplants to desired effects.

  14. Inhibitory neurons promote robust critical firing dynamics in networks of integrate-and-fire neurons.

    PubMed

    Lu, Zhixin; Squires, Shane; Ott, Edward; Girvan, Michelle

    2016-12-01

    We study the firing dynamics of a discrete-state and discrete-time version of an integrate-and-fire neuronal network model with both excitatory and inhibitory neurons. When the integer-valued state of a neuron exceeds a threshold value, the neuron fires, sends out state-changing signals to its connected neurons, and returns to the resting state. In this model, a continuous phase transition from non-ceaseless firing to ceaseless firing is observed. At criticality, power-law distributions of avalanche size and duration with the previously derived exponents, -3/2 and -2, respectively, are observed. Using a mean-field approach, we show analytically how the critical point depends on model parameters. Our main result is that the combined presence of both inhibitory neurons and integrate-and-fire dynamics greatly enhances the robustness of critical power-law behavior (i.e., there is an increased range of parameters, including both sub- and supercritical values, for which several decades of power-law behavior occurs).
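
    A minimal sketch of discrete-state, discrete-time integrate-and-fire dynamics with excitatory and inhibitory neurons, in the spirit of the model described; the random graph, threshold, inhibitory fraction and external drive below are illustrative assumptions, not the paper's parameters.

```python
# Toy discrete-state integrate-and-fire network: a neuron whose integer state
# exceeds the threshold fires, sends +1/-1 signals to its targets, and resets.
import numpy as np

rng = np.random.default_rng(0)
N, K, theta = 1000, 10, 5                  # neurons, out-degree, threshold
inhib = rng.random(N) < 0.2                # 20% inhibitory neurons (assumed)
targets = [rng.choice(N, size=K, replace=False) for _ in range(N)]
state = rng.integers(0, theta, size=N)

for t in range(200):
    fired = np.where(state >= theta)[0]
    state[fired] = 0                       # return to the resting state
    for i in fired:                        # state-changing signals to targets
        state[targets[i]] += -1 if inhib[i] else 1
    state = np.maximum(state, 0)
    state[rng.integers(N)] += 1            # weak external drive

print("active fraction:", np.mean(state >= theta))
```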

  15. Mean-field models for heterogeneous networks of two-dimensional integrate and fire neurons.

    PubMed

    Nicola, Wilten; Campbell, Sue Ann

    2013-01-01

    We analytically derive mean-field models for all-to-all coupled networks of heterogeneous, adapting, two-dimensional integrate and fire neurons. The class of models we consider includes the Izhikevich, adaptive exponential and quartic integrate and fire models. The heterogeneity in the parameters leads to different moment closure assumptions that can be made in the derivation of the mean-field model from the population density equation for the large network. Three different moment closure assumptions lead to three different mean-field systems. These systems can be used for distinct purposes such as bifurcation analysis of the large networks, prediction of steady state firing rate distributions, parameter estimation for actual neurons and faster exploration of the parameter space. We use the mean-field systems to analyze adaptation induced bursting under realistic sources of heterogeneity in multiple parameters. Our analysis demonstrates that the presence of heterogeneity causes the Hopf bifurcation associated with the emergence of bursting to change from sub-critical to super-critical. This is confirmed with numerical simulations of the full network for biologically reasonable parameter values. This change decreases the plausibility of adaptation being the cause of bursting in hippocampal area CA3, an area with a sizable population of heavily coupled, strongly adapting neurons.
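
    For reference, one member of the two-dimensional integrate-and-fire class analyzed here is the Izhikevich model; the sketch below simulates a single neuron with the standard regular-spiking parameter set (a textbook illustration, not the paper's mean-field derivation).

```python
# Single Izhikevich neuron, forward-Euler steps; standard regular-spiking
# parameters (a, b, c, d) and a constant input current.
import numpy as np

a, b, c, d = 0.02, 0.2, -65.0, 8.0
v, u, dt, I = -65.0, b * -65.0, 0.5, 10.0
spikes = []
for step in range(2000):
    v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:              # spike cutoff, then reset plus adaptation jump
        spikes.append(step * dt)
        v, u = c, u + d
print(f"{len(spikes)} spikes in {2000 * dt} ms")
```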

  16. Mean-field models for heterogeneous networks of two-dimensional integrate and fire neurons

    PubMed Central

    Nicola, Wilten; Campbell, Sue Ann

    2013-01-01

    We analytically derive mean-field models for all-to-all coupled networks of heterogeneous, adapting, two-dimensional integrate and fire neurons. The class of models we consider includes the Izhikevich, adaptive exponential and quartic integrate and fire models. The heterogeneity in the parameters leads to different moment closure assumptions that can be made in the derivation of the mean-field model from the population density equation for the large network. Three different moment closure assumptions lead to three different mean-field systems. These systems can be used for distinct purposes such as bifurcation analysis of the large networks, prediction of steady state firing rate distributions, parameter estimation for actual neurons and faster exploration of the parameter space. We use the mean-field systems to analyze adaptation induced bursting under realistic sources of heterogeneity in multiple parameters. Our analysis demonstrates that the presence of heterogeneity causes the Hopf bifurcation associated with the emergence of bursting to change from sub-critical to super-critical. This is confirmed with numerical simulations of the full network for biologically reasonable parameter values. This change decreases the plausibility of adaptation being the cause of bursting in hippocampal area CA3, an area with a sizable population of heavily coupled, strongly adapting neurons. PMID:24416013

  17. Forecasting PM10 in Algiers: efficacy of multilayer perceptron networks.

    PubMed

    Abderrahim, Hamza; Chellali, Mohammed Reda; Hamou, Ahmed

    2016-01-01

    Air quality forecasting has acquired high importance in atmospheric pollution management due to the negative impacts of pollution on the environment and human health. The artificial neural network is one of the most common soft computing methods that can be applied to such a complex problem. In this paper, we used a multilayer perceptron neural network to forecast the daily averaged concentration of respirable suspended particulates with an aerodynamic diameter of not more than 10 μm (PM10) in Algiers, Algeria. The data for training and testing the network are based on data sampled from 2002 to 2006, collected by the SAMASAFIA network center at the El Hamma station. The meteorological data (air temperature, relative humidity, and wind speed) are used as network input parameters in the formation of the model. The training patterns used correspond to 41 days of data. The performance of the developed models was evaluated on the basis of the index of agreement and other statistical parameters. The overall performance of the model with 15 neurons was better than that of the models with 5 and 10 neurons: a multilayer network with as few as one hidden layer and 15 neurons gave quite reasonable results. Finally, an error of around 9% was reached.
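
    The described setup (three meteorological inputs, one hidden layer of 15 neurons, PM10 as target) can be sketched as follows; the data here are synthetic, whereas the study used the 2002-2006 El Hamma station records.

```python
# Illustrative sketch of the described architecture: an MLP with one hidden
# layer of 15 neurons mapping meteorological inputs to daily PM10.
# Synthetic data stand in for the real station records.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))            # temperature, humidity, wind speed
y = 40 + 8 * X[:, 0] - 5 * X[:, 2] + rng.normal(scale=4, size=300)  # fake PM10

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(15,), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 on held-out days:", round(model.score(X_te, y_te), 3))
```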

  18. Realistic modeling of neurons and networks: towards brain simulation.

    PubMed

    D'Angelo, Egidio; Solinas, Sergio; Garrido, Jesus; Casellato, Claudia; Pedrocchi, Alessandra; Mapelli, Jonathan; Gandolfi, Daniela; Prestori, Francesca

    2013-01-01

    Realistic modeling is a new advanced methodology for investigating brain functions. Realistic modeling is based on a detailed biophysical description of neurons and synapses, which can be integrated into microcircuits. The latter can, in turn, be further integrated to form large-scale brain networks and eventually to reconstruct complex brain systems. Here we provide a review of the realistic simulation strategy and use the cerebellar network as an example. This network has been carefully investigated at molecular and cellular level and has been the object of intense theoretical investigation. The cerebellum is thought to lie at the core of the forward controller operations of the brain and to implement timing and sensory prediction functions. The cerebellum is well described and provides a challenging field in which one of the most advanced realistic microcircuit models has been generated. We illustrate how these models can be elaborated and embedded into robotic control systems to gain insight into how the cellular properties of cerebellar neurons emerge in integrated behaviors. Realistic network modeling opens up new perspectives for the investigation of brain pathologies and for the neurorobotic field.

  19. Realistic modeling of neurons and networks: towards brain simulation

    PubMed Central

    D’Angelo, Egidio; Solinas, Sergio; Garrido, Jesus; Casellato, Claudia; Pedrocchi, Alessandra; Mapelli, Jonathan; Gandolfi, Daniela; Prestori, Francesca

    Summary Realistic modeling is a new advanced methodology for investigating brain functions. Realistic modeling is based on a detailed biophysical description of neurons and synapses, which can be integrated into microcircuits. The latter can, in turn, be further integrated to form large-scale brain networks and eventually to reconstruct complex brain systems. Here we provide a review of the realistic simulation strategy and use the cerebellar network as an example. This network has been carefully investigated at molecular and cellular level and has been the object of intense theoretical investigation. The cerebellum is thought to lie at the core of the forward controller operations of the brain and to implement timing and sensory prediction functions. The cerebellum is well described and provides a challenging field in which one of the most advanced realistic microcircuit models has been generated. We illustrate how these models can be elaborated and embedded into robotic control systems to gain insight into how the cellular properties of cerebellar neurons emerge in integrated behaviors. Realistic network modeling opens up new perspectives for the investigation of brain pathologies and for the neurorobotic field. PMID:24139652

  20. Goal-seeking neural net for recall and recognition

    NASA Astrophysics Data System (ADS)

    Omidvar, Omid M.

    1990-07-01

    Neural networks have been used to mimic cognitive processes which take place in animal brains. The learning capability inherent in neural networks makes them suitable candidates for adaptive tasks such as recall and recognition. The synaptic reinforcements create a proper condition for adaptation, which results in memorization, formation of perception, and higher order information processing activities. In this research, a model of a goal-seeking neural network is studied and the operation of the network with regard to recall and recognition is analyzed. In these analyses, recall is defined as the retrieval of stored information where little or no matching is involved. On the other hand, recognition is recall with matching; it therefore involves memorizing a piece of information with complete presentation. This research takes the generalized view of reinforcement in which all the signals are potential reinforcers. The neuronal response is considered to be the source of the reinforcement. This local approach to adaptation leads to the goal-seeking nature of the neurons as network components. In the proposed model, all the synaptic strengths are reinforced in parallel, while the reinforcement among the layers is done in a distributed fashion and in pipeline mode from the last layer inward. A model of a complex neuron with a varying threshold is developed to account for the inhibitory and excitatory behavior of real neurons. A goal-seeking model of a neural network is presented. This network is utilized to perform recall and recognition tasks. The performance of the model with regard to the assigned tasks is presented.

  1. Models and simulation of 3D neuronal dendritic trees using Bayesian networks.

    PubMed

    López-Cruz, Pedro L; Bielza, Concha; Larrañaga, Pedro; Benavides-Piccione, Ruth; DeFelipe, Javier

    2011-12-01

    Neuron morphology is crucial for neuronal connectivity and brain information processing. Computational models are important tools for studying dendritic morphology and its role in brain function. We applied a class of probabilistic graphical models called Bayesian networks to generate virtual dendrites from layer III pyramidal neurons from three different regions of the neocortex of the mouse. A set of 41 morphological variables was measured from the 3D reconstructions of real dendrites, and their probability distributions were used in a machine learning algorithm to induce the model from the data. A simulation algorithm is also proposed to obtain new dendrites by sampling values from Bayesian networks. The main advantage of this approach is that it takes into account and automatically locates the relationships between variables in the data instead of using predefined dependencies. Therefore, the methodology can be applied to any neuronal class while at the same time exploiting class-specific properties. Also, a Bayesian network was defined for each part of the dendrite, allowing the relationships to change in the different sections and to model heterogeneous developmental factors or spatial influences. Several univariate statistical tests and a novel multivariate test based on Kullback-Leibler divergence estimation confirmed that virtual dendrites were similar to real ones. The analyses of the models showed relationships that conform to current neuroanatomical knowledge and support model correctness. At the same time, studying the relationships in the models can help to identify new interactions between variables related to dendritic morphology.
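
    The simulation step (sampling virtual dendrites from a fitted Bayesian network) can be caricatured with a hand-specified chain of linear-Gaussian conditionals; the variables and coefficients below are invented for illustration and bear no relation to the learned 41-variable model.

```python
# Toy sketch of the sampling idea: draw virtual dendrite variables from a
# small hand-specified Bayesian network (root variable, then children
# conditioned on their parents). Everything here is invented.
import numpy as np

rng = np.random.default_rng(0)

def sample_virtual_dendrite():
    n_branch = max(1, int(round(rng.normal(5, 1.5))))       # root variable
    total_len = rng.normal(80 * n_branch, 20)               # child of branches
    mean_diam = rng.normal(1.2 - 0.02 * n_branch, 0.1)      # child of branches
    return {"branches": n_branch, "length_um": total_len,
            "diameter_um": mean_diam}

for dendrite in (sample_virtual_dendrite() for _ in range(3)):
    print(dendrite)
```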

  2. A biophysical observation model for field potentials of networks of leaky integrate-and-fire neurons.

    PubMed

    Beim Graben, Peter; Rodrigues, Serafim

    2012-01-01

    We present a biophysical approach for the coupling of neural network activity, as resulting from proper dipole currents of cortical pyramidal neurons, to the electric field in extracellular fluid. Starting from a reduced three-compartment model of a single pyramidal neuron, we derive an observation model for dendritic dipole currents in extracellular space and thereby for the dendritic field potential (DFP) that contributes to the local field potential (LFP) of a neural population. This work aligns with and satisfies the widespread dipole assumption that is motivated by the "open-field" configuration of the DFP around cortical pyramidal cells. Our reduced three-compartment scheme allows us to derive networks of leaky integrate-and-fire (LIF) models, which facilitates comparison with existing neural network and observation models. In particular, by means of numerical simulations we compare our approach with an ad hoc model by Mazzoni et al. (2008), and conclude that our biophysically motivated approach yields substantial improvement.
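
    As a rough illustration of the pipeline (a spiking network plus an observation model that turns its currents into a field potential), the sketch below sums the transmembrane currents of a toy LIF population as a crude LFP proxy; the paper's dipole-based observation model is more principled, and every quantity here is an assumption.

```python
# Crude sketch: simulate a small LIF population and read out a field-potential
# proxy as the population sum of transmembrane currents. The weighting and
# geometry of the real DFP observation model are not reproduced here.
import numpy as np

rng = np.random.default_rng(1)
N, dt, tau, v_th, v_reset = 200, 0.1, 10.0, 1.0, 0.0
v = rng.random(N)
lfp = []
for t in range(5000):
    I_syn = 0.12 + 0.05 * rng.normal(size=N)   # noisy external drive
    v += dt * (-(v / tau) + I_syn)
    fired = v >= v_th
    v[fired] = v_reset                          # spike-and-reset
    lfp.append(np.sum(I_syn - v / tau))         # summed currents as proxy

print("LFP proxy mean/std:", np.mean(lfp), np.std(lfp))
```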

  3. A biophysical observation model for field potentials of networks of leaky integrate-and-fire neurons

    PubMed Central

    beim Graben, Peter; Rodrigues, Serafim

    2013-01-01

    We present a biophysical approach for the coupling of neural network activity, as resulting from proper dipole currents of cortical pyramidal neurons, to the electric field in extracellular fluid. Starting from a reduced three-compartment model of a single pyramidal neuron, we derive an observation model for dendritic dipole currents in extracellular space and thereby for the dendritic field potential (DFP) that contributes to the local field potential (LFP) of a neural population. This work aligns with and satisfies the widespread dipole assumption that is motivated by the “open-field” configuration of the DFP around cortical pyramidal cells. Our reduced three-compartment scheme allows us to derive networks of leaky integrate-and-fire (LIF) models, which facilitates comparison with existing neural network and observation models. In particular, by means of numerical simulations we compare our approach with an ad hoc model by Mazzoni et al. (2008), and conclude that our biophysically motivated approach yields substantial improvement. PMID:23316157

  4. Phase-locking and bistability in neuronal networks with synaptic depression

    NASA Astrophysics Data System (ADS)

    Akcay, Zeynep; Huang, Xinxian; Nadim, Farzan; Bose, Amitabha

    2018-02-01

    We consider a recurrent network of two oscillatory neurons that are coupled with inhibitory synapses. We use the phase response curves of the neurons and the properties of short-term synaptic depression to define Poincaré maps for the activity of the network. The fixed points of these maps correspond to phase-locked modes of the network. Using these maps, we analyze the conditions that allow short-term synaptic depression to lead to the existence of bistable phase-locked, periodic solutions. We show that bistability arises when either the phase response curve of the neuron or the short-term depression profile changes steeply enough. The results apply to any Type I oscillator and we illustrate our findings using the Quadratic Integrate-and-Fire and Morris-Lecar neuron models.
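
    The map-based analysis can be caricatured with a one-dimensional phase-difference map whose coupling is scaled by a depression variable; the functional forms below are invented, and fixed points of the iteration play the role of the phase-locked modes discussed.

```python
# Illustrative sketch (invented functional forms, not the paper's model):
# iterate a map for the phase difference between two mutually coupled
# oscillators together with a short-term depression variable d; fixed points
# of the map correspond to phase-locked modes.
import numpy as np

def prc_coupling(delta):                   # assumed antisymmetric coupling term
    return np.sin(2 * np.pi * delta)

def phase_map(delta, d, g=0.05, use=0.4, T=1.0, tau_rec=2.0):
    delta_next = (delta - g * d * prc_coupling(delta)) % 1.0
    d_after = d * (1 - use)                            # depletion at each spike
    d_next = 1 - (1 - d_after) * np.exp(-T / tau_rec)  # recovery between spikes
    return delta_next, d_next

delta, d = 0.23, 1.0
for _ in range(500):
    delta, d = phase_map(delta, d)
print("locked phase difference:", round(delta, 3), "resources:", round(d, 3))
```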

  5. Statistical Neurodynamics.

    NASA Astrophysics Data System (ADS)

    Paine, Gregory Harold

    1982-03-01

    The primary objective of the thesis is to explore the dynamical properties of small nerve networks by means of the methods of statistical mechanics. To this end, a general formalism is developed and applied to elementary groupings of model neurons which are driven by either constant (steady state) or nonconstant (nonsteady state) forces. Neuronal models described by a system of coupled, nonlinear, first-order, ordinary differential equations are considered. A linearized form of the neuronal equations is studied in detail. A Lagrange function corresponding to the linear neural network is constructed which, through a Legendre transformation, provides a constant of motion. By invoking the Maximum-Entropy Principle with the single integral of motion as a constraint, a probability distribution function for the network in a steady state can be obtained. The formalism is implemented for some simple networks driven by a constant force; accordingly, the analysis focuses on a study of fluctuations about the steady state. In particular, a network composed of N noninteracting neurons, termed Free Thinkers, is considered in detail, with a view to interpretation and numerical estimation of the Lagrange multiplier corresponding to the constant of motion. As an archetypical example of a net of interacting neurons, the classical neural oscillator, consisting of two mutually inhibitory neurons, is investigated. It is further shown that in the case of a network driven by a nonconstant force, the Maximum-Entropy Principle can be applied to determine a probability distribution functional describing the network in a nonsteady state. The above examples are reconsidered with nonconstant driving forces which produce small deviations from the steady state. Numerical studies are performed on simplified models of two physical systems: the starfish central nervous system and the mammalian olfactory bulb. Discussions are given as to how statistical neurodynamics can be used to gain a better understanding of the behavior of these systems.
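
    The Maximum-Entropy step described here follows a standard pattern; as a hedged reconstruction in generic notation (not the thesis's own symbols), a single conserved quantity taken as the only constraint yields a Gibbs-like distribution:

```latex
% Sketch of the Maximum-Entropy step: maximize S = -\int p \ln p \, dx
% subject to normalization \int p \, dx = 1 and a fixed mean of the
% conserved quantity, \int p \, E(x) \, dx = \bar{E}. The result is
\[
  p(x) = \frac{1}{Z(\lambda)} \, e^{-\lambda E(x)},
  \qquad
  Z(\lambda) = \int e^{-\lambda E(x)} \, dx ,
\]
% with the Lagrange multiplier \lambda fixed by the constraint on \bar{E}.
```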

  6. Modeling fluctuations in default-mode brain network using a spiking neural network.

    PubMed

    Yamanishi, Teruya; Liu, Jian-Qin; Nishimura, Haruhiko

    2012-08-01

    Recently, numerous attempts have been made to understand the dynamic behavior of complex brain systems using neural network models. The fluctuations in blood-oxygen-level-dependent (BOLD) brain signals at less than 0.1 Hz have been observed by functional magnetic resonance imaging (fMRI) for subjects in a resting state. This phenomenon is referred to as a "default-mode brain network." In this study, we model the default-mode brain network by functionally connecting neural communities composed of spiking neurons in a complex network. Through computational simulations of the model, including transmission delays and complex connectivity, the network dynamics of the neural system and its behavior are discussed. The results show that the power spectrum of the modeled fluctuations in the neuron firing patterns is consistent with the default-mode brain network's BOLD signals when transmission delays, a characteristic property of the brain, have finite values in a given range.
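
    The kind of check the abstract reports (fluctuation power concentrated below 0.1 Hz) can be sketched with a synthetic slowly varying signal and an FFT; the signal below is low-pass-filtered noise, not the model's output.

```python
# Illustrative spectral check: low-pass filter white noise and confirm that
# most power lies below 0.1 Hz, the band of resting-state BOLD fluctuations.
import numpy as np

rng = np.random.default_rng(5)
fs, T = 2.0, 600.0                        # 2 Hz sampling, 10 minutes
n = int(fs * T)
x = rng.normal(size=n)
alpha = 0.01                               # crude exponential moving average
for i in range(1, n):
    x[i] = (1 - alpha) * x[i - 1] + alpha * x[i]

freqs = np.fft.rfftfreq(n, d=1 / fs)
power = np.abs(np.fft.rfft(x - x.mean())) ** 2
slow = freqs < 0.1
print("fraction of power below 0.1 Hz:",
      round(power[slow].sum() / power.sum(), 3))
```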

  7. Irregular behavior in an excitatory-inhibitory neuronal network

    NASA Astrophysics Data System (ADS)

    Park, Choongseok; Terman, David

    2010-06-01

    Excitatory-inhibitory networks arise in many regions throughout the central nervous system and display complex spatiotemporal firing patterns. These neuronal activity patterns (of individual neurons and/or the whole network) are closely related to the functional status of the system and differ between normal and pathological states. For example, neurons within the basal ganglia, a group of subcortical nuclei that are responsible for the generation of movement, display a variety of dynamic behaviors such as correlated oscillatory activity and irregular, uncorrelated spiking. Neither the origins of these firing patterns nor the mechanisms that underlie the patterns are well understood. We consider a biophysical model of an excitatory-inhibitory network in the basal ganglia and explore how specific biophysical properties of the network contribute to the generation of irregular spiking. We use geometric dynamical systems and singular perturbation methods to systematically reduce the model to a simpler set of equations, which is suitable for analysis. The results specify the dependence on the strengths of synaptic connections and the intrinsic firing properties of the cells in the irregular regime when applied to the subthalamopallidal network of the basal ganglia.

  8. On the performance of voltage stepping for the simulation of adaptive, nonlinear integrate-and-fire neuronal networks.

    PubMed

    Kaabi, Mohamed Ghaith; Tonnelier, Arnaud; Martinez, Dominique

    2011-05-01

    In traditional event-driven strategies, spike timings are analytically given or calculated with arbitrary precision (up to machine precision). Exact computation is possible only for simplified neuron models, mainly the leaky integrate-and-fire model. In a recent paper, Zheng, Tonnelier, and Martinez (2009) introduced an approximate event-driven strategy, named voltage stepping, that allows the generic simulation of nonlinear spiking neurons. Promising results were achieved in the simulation of single quadratic integrate-and-fire neurons. Here, we assess the performance of voltage stepping in network simulations by considering more complex neurons (quadratic integrate-and-fire neurons with adaptation) coupled with multiple synapses. To handle the discrete nature of synaptic interactions, we recast voltage stepping in a general framework, the discrete event system specification. The efficiency of the method is assessed through simulations and comparisons with a modified time-stepping scheme of the Runge-Kutta type. We demonstrated numerically that the original order of voltage stepping is preserved when simulating connected spiking neurons, independent of the network activity and connectivity.
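
    The core idea of voltage stepping can be shown for a quadratic integrate-and-fire neuron, where the time to cross each interval of a fixed voltage grid is available in closed form; the grid size and parameters below are illustrative assumptions.

```python
# Hedged sketch of voltage stepping: instead of stepping time, step the
# membrane voltage on a fixed grid and compute the time to cross each
# interval, here analytically for a QIF neuron dv/dt = v^2 + I.
import numpy as np

I, v_reset, v_spike = 1.0, -1.0, 5.0
grid = np.linspace(v_reset, v_spike, 50)        # voltage levels (assumed)

def crossing_time(v0, v1, I):
    s = np.sqrt(I)                               # exact for dv/dt = v^2 + I
    return (np.arctan(v1 / s) - np.arctan(v0 / s)) / s

times = [crossing_time(a, b, I) for a, b in zip(grid[:-1], grid[1:])]
print("time from reset to spike cutoff:", round(sum(times), 4))
```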

  9. Optimizing NEURON Simulation Environment Using Remote Memory Access with Recursive Doubling on Distributed Memory Systems.

    PubMed

    Shehzad, Danish; Bozkuş, Zeki

    2016-01-01

    The increase in complexity of neuronal network models has escalated efforts to make the NEURON simulation environment efficient. Computational neuroscientists divide the equations into subnets amongst multiple processors to achieve better hardware performance. On parallel machines for neuronal networks, interprocessor spike exchange consumes a large share of the overall simulation time. In NEURON, the Message Passing Interface (MPI) is used for communication between processors, and the MPI_Allgather collective is exercised for spike exchange after each interval across distributed memory systems. Increasing the number of processors achieves concurrency and better performance, but it adversely affects MPI_Allgather, which increases the communication time between processors. This necessitates improving the communication methodology to decrease the spike-exchange time over distributed memory systems. This work improves the MPI_Allgather method using Remote Memory Access (RMA) by moving two-sided communication to one-sided communication; the use of a recursive doubling mechanism facilitates efficient communication between the processors in a precise number of steps. This approach enhances communication concurrency and improves overall runtime, making NEURON more efficient for the simulation of large neuronal network models.
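
    The baseline exchange pattern discussed (every rank gathering every other rank's spikes each interval with MPI_Allgather) can be sketched with mpi4py as below; buffer sizes and spike contents are invented, and the paper's RMA-based recursive-doubling replacement is not reproduced here.

```python
# Hedged sketch of the all-to-all spike exchange: each rank gathers the
# fixed-size spike buffer of every other rank. This is the two-sided
# collective the paper replaces with one-sided RMA.
# Example launch (assumed filename): mpiexec -n 4 python spike_exchange.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

MAX_SPIKES = 8                                     # fixed slot per rank
local = np.full(MAX_SPIKES, -1, dtype='i')         # -1 marks an empty slot
n_local = rank + 1                                 # pretend counts differ
local[:n_local] = np.arange(n_local) + 100 * rank  # fake spike gids

all_spikes = np.empty(size * MAX_SPIKES, dtype='i')
comm.Allgather([local, MPI.INT], [all_spikes, MPI.INT])

if rank == 0:
    print("gathered spike gids:", all_spikes[all_spikes >= 0])
```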

  10. Optimizing NEURON Simulation Environment Using Remote Memory Access with Recursive Doubling on Distributed Memory Systems

    PubMed Central

    Bozkuş, Zeki

    2016-01-01

    The increase in complexity of neuronal network models has escalated efforts to make the NEURON simulation environment efficient. Computational neuroscientists divide the equations into subnets amongst multiple processors to achieve better hardware performance. On parallel machines for neuronal networks, interprocessor spike exchange consumes a large share of the overall simulation time. In NEURON, the Message Passing Interface (MPI) is used for communication between processors, and the MPI_Allgather collective is exercised for spike exchange after each interval across distributed memory systems. Increasing the number of processors achieves concurrency and better performance, but it adversely affects MPI_Allgather, which increases the communication time between processors. This necessitates improving the communication methodology to decrease the spike-exchange time over distributed memory systems. This work improves the MPI_Allgather method using Remote Memory Access (RMA) by moving two-sided communication to one-sided communication; the use of a recursive doubling mechanism facilitates efficient communication between the processors in a precise number of steps. This approach enhances communication concurrency and improves overall runtime, making NEURON more efficient for the simulation of large neuronal network models. PMID:27413363

  11. Ergodic properties of spiking neuronal networks with delayed interactions

    NASA Astrophysics Data System (ADS)

    Palmigiano, Agostina; Wolf, Fred

    The dynamical stability of neuronal networks and the possibility of chaotic dynamics in the brain pose profound questions about the mechanisms underlying perception. Here we advance the tractability of large neuronal networks of exactly solvable neuronal models with delayed pulse-coupled interactions. Pulse-coupled delayed systems with an infinite-dimensional phase space can be studied in equivalent systems of fixed and finite degrees of freedom by introducing a delayer variable for each neuron. A Jacobian of the equivalent system can be analytically obtained and numerically evaluated. We find that, depending on the action potential onset rapidness and the level of heterogeneities, the asynchronous irregular regime characteristic of balanced-state networks loses stability with increasing delays to either a slow synchronous irregular or a fast synchronous irregular state. In networks of neurons with slow action potential onset, the transition to collective oscillations leads to an increase of the exponential rate of divergence of nearby trajectories and of the entropy production rate of the chaotic dynamics. The attractor dimension, instead of increasing linearly with increasing delay as reported in many other studies, decreases until eventually the network reaches full synchrony.

  12. Characterization of emergent synaptic topologies in noisy neural networks

    NASA Astrophysics Data System (ADS)

    Miller, Aaron James

    Learned behaviors are one of the key contributors to an animal's ultimate survival. It is widely believed that the brain's microcircuitry undergoes structural changes when a new behavior is learned. In particular, motor learning, during which an animal learns a sequence of muscular movements, often requires precisely-timed coordination between muscles and becomes very natural once ingrained. Experiments show that neurons in the motor cortex exhibit precisely-timed spike activity when performing a learned motor behavior, and constituent stereotypical elements of the behavior can last several hundred milliseconds. The subject of this manuscript concerns how organized synaptic structures that produce stereotypical spike sequences emerge from random, dynamical networks. After a brief introduction in Chapter 1, we begin Chapter 2 by introducing a spike-timing-dependent plasticity (STDP) rule that defines how the activity of the network drives changes in network topology. The rule is then applied to idealized networks of leaky integrate-and-fire neurons (LIF). These neurons are not subjected to the variability that typically characterizes neurons in vivo. In noiseless networks, synapses develop closed loops of strong connectivity that reproduce stereotypical, precisely-timed spike patterns from an initially random network. We demonstrate that the characteristics of the asymptotic synaptic configuration are dependent on the statistics of the initial random network. The spike timings of the neurons simulated in Chapter 2 are generated exactly by a computationally economical, nonlinear mapping which is extended to LIF neurons injected with fluctuating current in Chapter 3. Development of an economical mapping that incorporates noise provides a practical solution to the long simulation times required to produce asymptotic synaptic topologies in networks with STDP in the presence of realistic neuronal variability. The mapping relies on generating numerical solutions to the dynamics of a LIF neuron subjected to Gaussian white noise (GWN). The system reduces to the Ornstein-Uhlenbeck first passage time problem, the solution of which we build into the mapping method of Chapter 2. We demonstrate that simulations using the stochastic mapping have reduced computation time compared to traditional Runge-Kutta methods by more than a factor of 150. In Chapter 4, we use the stochastic mapping to study the dynamics of emerging synaptic topologies in noisy networks. With the addition of membrane noise, networks with dynamical synapses can admit states in which the distribution of the synaptic weights is static under spontaneous activity, but the random connectivity between neurons is dynamical. The widely cited problem of instabilities in networks with STDP is avoided with the implementation of a synaptic decay and an activation threshold on each synapse. When such networks are presented with stimulus modeled by a focused excitatory current, chain-like networks can emerge with the addition of an axon-remodeling plasticity rule, a topological constraint on the connectivity modeling the finite resources available to each neuron. The emergent topologies are the result of an iterative stochastic process. The dynamics of the growth process suggest a strong interplay between the network topology and the spike sequences they produce during development. Namely, the existence of an embedded spike sequence alters the distribution of synaptic weights through the entire network.
The roles of model parameters that affect the interplay between network structure and activity are elucidated. Finally, we propose two mathematical growth models, which are complementary, that capture the essence of the growth dynamics observed in simulations. In Chapter 5, we present an extension of the stochastic mapping that allows the possibility of neuronal cooperation. We demonstrate that synaptic topologies admitting stereotypical sequences can emerge in yet higher, biologically realistic levels of membrane potential variability when neurons cooperate to innervate shared targets. The structure that is most robust to the variability is that of a synfire chain. The principles of growth dynamics detailed in Chapter 4 are the same that sculpt the emergent synfire topologies. We conclude by discussing avenues for extensions of these results.
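
    The pair-based STDP rule underlying this kind of study is compactly expressible; the sketch below uses generic textbook amplitudes and time constants, not the thesis's specific rule or its axon-remodeling constraint.

```python
# Pair-based STDP weight update: pre-before-post potentiates,
# post-before-pre depresses. Generic textbook constants.
import numpy as np

A_plus, A_minus = 0.01, 0.012
tau_plus, tau_minus = 20.0, 20.0       # ms

def stdp_dw(t_pre, t_post):
    dt = t_post - t_pre                # ms; positive = causal (pre first)
    if dt > 0:
        return A_plus * np.exp(-dt / tau_plus)
    return -A_minus * np.exp(dt / tau_minus)

for dt in (-40, -10, 10, 40):
    print(f"pre-to-post lag {dt:+} ms -> dw = {stdp_dw(0.0, float(dt)):+.5f}")
```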

  13. Synchronization transition in neuronal networks composed of chaotic or non-chaotic oscillators.

    PubMed

    Xu, Kesheng; Maidana, Jean Paul; Castro, Samy; Orio, Patricio

    2018-05-30

    Chaotic dynamics has been shown in the dynamics of neurons and neural networks, in experimental data and numerical simulations. Theoretical studies have proposed an underlying role of chaos in neural systems. Nevertheless, whether chaotic neural oscillators make a significant contribution to network behaviour, and whether the dynamical richness of neural networks is sensitive to the dynamics of isolated neurons, still remain open questions. We investigated synchronization transitions in heterogeneous neural networks of neurons connected by electrical coupling in a small-world topology. The nodes in our model are oscillatory neurons that, when isolated, can exhibit either chaotic or non-chaotic behaviour, depending on conductance parameters. We found that the heterogeneity of firing rates and firing patterns makes a greater contribution than chaos to the steepness of the synchronization transition curve. We also show that chaotic dynamics of the isolated neurons do not always make a visible difference in the transition to full synchrony. Moreover, macroscopic chaos is observed regardless of the dynamical nature of the neurons. However, performing a Functional Connectivity Dynamics analysis, we show that chaotic nodes can promote what is known as multi-stable behaviour, where the network dynamically switches between a number of different semi-synchronized, metastable states.
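
    Synchronization transitions of this kind are commonly quantified with the Kuramoto order parameter r; the sketch below computes r for synthetic phase distributions (in the study itself, phases would be extracted from the simulated neurons).

```python
# Kuramoto order parameter r = |<exp(i*theta)>|: near 0 for incoherent
# phases, near 1 for synchronized ones. Synthetic phases for illustration.
import numpy as np

rng = np.random.default_rng(2)

def order_parameter(phases):
    return np.abs(np.mean(np.exp(1j * phases)))

weak = rng.uniform(0, 2 * np.pi, 500)     # desynchronized network
strong = rng.normal(0.0, 0.3, 500)        # tightly clustered phases
print("r (incoherent):", round(order_parameter(weak), 3))
print("r (synchronized):", round(order_parameter(strong), 3))
```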

  14. Biological conservation law as an emerging functionality in dynamical neuronal networks.

    PubMed

    Podobnik, Boris; Jusup, Marko; Tiganj, Zoran; Wang, Wen-Xu; Buldú, Javier M; Stanley, H Eugene

    2017-11-07

    Scientists strive to understand how functionalities, such as conservation laws, emerge in complex systems. Living complex systems in particular create high-ordered functionalities by pairing up low-ordered complementary processes, e.g., one process to build and the other to correct. We propose a network mechanism that demonstrates how collective statistical laws can emerge at a macro (i.e., whole-network) level even when they do not exist at a unit (i.e., network-node) level. Drawing inspiration from neuroscience, we model a highly stylized dynamical neuronal network in which neurons fire either randomly or in response to the firing of neighboring neurons. A synapse connecting two neighboring neurons strengthens when both of these neurons are excited and weakens otherwise. We demonstrate that during this interplay between the synaptic and neuronal dynamics, when the network is near a critical point, both recurrent spontaneous and stimulated phase transitions enable the phase-dependent processes to replace each other and spontaneously generate a statistical conservation law: the conservation of synaptic strength. This conservation law is an emerging functionality selected by evolution and is thus a form of biological self-organized criticality in which the key dynamical modes are collective.
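
    The stated synaptic rule (strengthen when both neurons are excited in the same step, weaken otherwise) can be caricatured in a few lines; the firing rates, network size and decay below are invented, so this shows only the rule's shape, not the paper's critical-point behavior.

```python
# Toy version of the stated rule: synapses between co-excited neurons grow,
# all others decay. Parameters are illustrative, not the paper's.
import numpy as np

rng = np.random.default_rng(3)
N, p_fire, eps = 50, 0.2, 0.01
W = rng.random((N, N)) * 0.5

for t in range(1000):
    excited = rng.random(N) < p_fire           # stylized random firing
    both = np.outer(excited, excited)          # pairs excited together
    W += np.where(both, eps, -eps * 0.25)      # strengthen together, else decay
    W = np.clip(W, 0.0, 1.0)

print("total synaptic strength:", round(W.sum(), 2))
```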

  15. Biological conservation law as an emerging functionality in dynamical neuronal networks

    PubMed Central

    Podobnik, Boris; Tiganj, Zoran; Wang, Wen-Xu; Buldú, Javier M.

    2017-01-01

    Scientists strive to understand how functionalities, such as conservation laws, emerge in complex systems. Living complex systems in particular create high-ordered functionalities by pairing up low-ordered complementary processes, e.g., one process to build and the other to correct. We propose a network mechanism that demonstrates how collective statistical laws can emerge at a macro (i.e., whole-network) level even when they do not exist at a unit (i.e., network-node) level. Drawing inspiration from neuroscience, we model a highly stylized dynamical neuronal network in which neurons fire either randomly or in response to the firing of neighboring neurons. A synapse connecting two neighboring neurons strengthens when both of these neurons are excited and weakens otherwise. We demonstrate that during this interplay between the synaptic and neuronal dynamics, when the network is near a critical point, both recurrent spontaneous and stimulated phase transitions enable the phase-dependent processes to replace each other and spontaneously generate a statistical conservation law—the conservation of synaptic strength. This conservation law is an emerging functionality selected by evolution and is thus a form of biological self-organized criticality in which the key dynamical modes are collective. PMID:29078286

  16. Network dynamics of 3D engineered neuronal cultures: a new experimental model for in-vitro electrophysiology.

    PubMed

    Frega, Monica; Tedesco, Mariateresa; Massobrio, Paolo; Pesce, Mattia; Martinoia, Sergio

    2014-06-30

    Despite the extensive use of in-vitro models for neuroscientific investigations, and notwithstanding the growing field of network electrophysiology, all studies on cultured cells devoted to elucidating neurophysiological mechanisms and computational properties are based on 2D neuronal networks. These networks are usually grown onto specific rigid substrates (also with embedded electrodes) and lack most of the constituents of the in-vivo-like environment: cell morphology, cell-to-cell interaction and neuritic outgrowth in all directions. Cells in a brain region develop in a 3D space and interact with a complex multi-cellular environment and extracellular matrix. Under this perspective, 3D networks coupled to micro-transducer arrays represent a new and powerful in-vitro model capable of better emulating in-vivo physiology. In this work, we present a new experimental paradigm constituted by 3D hippocampal networks coupled to Micro-Electrode-Arrays (MEAs) and we show how the features of the recorded network dynamics differ from those of the corresponding 2D network model. Further development of the proposed 3D in-vitro model by adding embedded functionalized scaffolds might open new prospects for manipulating, stimulating and recording neuronal activity to elucidate neurophysiological mechanisms and to design bio-hybrid microsystems.

  17. Network dynamics of 3D engineered neuronal cultures: a new experimental model for in-vitro electrophysiology

    PubMed Central

    Frega, Monica; Tedesco, Mariateresa; Massobrio, Paolo; Pesce, Mattia; Martinoia, Sergio

    2014-01-01

    Despite the extensive use of in-vitro models for neuroscientific investigations, and notwithstanding the growing field of network electrophysiology, all studies on cultured cells devoted to elucidating neurophysiological mechanisms and computational properties are based on 2D neuronal networks. These networks are usually grown onto specific rigid substrates (also with embedded electrodes) and lack most of the constituents of the in-vivo-like environment: cell morphology, cell-to-cell interaction and neuritic outgrowth in all directions. Cells in a brain region develop in a 3D space and interact with a complex multi-cellular environment and extracellular matrix. Under this perspective, 3D networks coupled to micro-transducer arrays represent a new and powerful in-vitro model capable of better emulating in-vivo physiology. In this work, we present a new experimental paradigm constituted by 3D hippocampal networks coupled to Micro-Electrode-Arrays (MEAs) and we show how the features of the recorded network dynamics differ from those of the corresponding 2D network model. Further development of the proposed 3D in-vitro model by adding embedded functionalized scaffolds might open new prospects for manipulating, stimulating and recording neuronal activity to elucidate neurophysiological mechanisms and to design bio-hybrid microsystems. PMID:24976386

  18. Synchronization behaviors of coupled neurons under electromagnetic radiation

    NASA Astrophysics Data System (ADS)

    Ma, Jun; Wu, Fuqiang; Wang, Chunni

    2017-01-01

    Based on an improved neuronal model in which the effect of magnetic flux is considered during the fluctuation and change of ion concentration in cells, the transition of synchronization is investigated by imposing external electromagnetic radiation on coupled neurons and networks. It is found that the synchronization degree depends on the coupling intensity and the intensity of the external electromagnetic radiation. Indeed, an appropriate intensity of electromagnetic radiation can effectively realize intermittent synchronization, while a stronger intensity of electromagnetic radiation can induce disorder in the coupled neurons and network. Neurons show rhythm synchronization in their electrical activities as the coupling intensity is increased under electromagnetic radiation, and spatial patterns can form in the network at smaller synchronization factors.

  19. Emergence of network structure due to spike-timing-dependent plasticity in recurrent neuronal networks IV: structuring synaptic pathways among recurrent connections.

    PubMed

    Gilson, Matthieu; Burkitt, Anthony N; Grayden, David B; Thomas, Doreen A; van Hemmen, J Leo

    2009-12-01

    In neuronal networks, the changes of synaptic strength (or weight) performed by spike-timing-dependent plasticity (STDP) are hypothesized to give rise to functional network structure. This article investigates how this phenomenon occurs for the excitatory recurrent connections of a network with fixed input weights that is stimulated by external spike trains. We develop a theoretical framework based on the Poisson neuron model to analyze the interplay between the neuronal activity (firing rates and spike-time correlations) and the learning dynamics when the network is stimulated by correlated pools of homogeneous Poisson spike trains. STDP can lead to both a stabilization of all the neurons' firing rates (homeostatic equilibrium) and a robust weight specialization. The pattern of specialization for the recurrent weights is determined by a relationship between the input firing-rate and correlation structures, the network topology, the STDP parameters and the synaptic response properties. We find conditions for feed-forward pathways or areas with strengthened self-feedback to emerge in an initially homogeneous recurrent network.
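
    For concreteness, a minimal additive pair-based STDP update applied to two Poisson spike trains is sketched below; the window amplitudes, time constants and hard weight bounds are generic textbook choices, not the learning-window parameters analyzed in the article.

        import numpy as np

        rng = np.random.default_rng(0)

        # Additive pair-based STDP between one presynaptic and one postsynaptic
        # Poisson train. Window: +A_plus*exp(-dt/tau_plus) for pre-before-post,
        # -A_minus*exp(dt/tau_minus) for post-before-pre (dt = t_post - t_pre).
        A_plus, A_minus = 0.005, 0.00525
        tau_plus = tau_minus = 20.0          # ms

        def poisson_train(rate_hz, T_ms):
            t, spikes = 0.0, []
            while True:
                t += rng.exponential(1000.0 / rate_hz)
                if t > T_ms:
                    return np.array(spikes)
                spikes.append(t)

        pre = poisson_train(20.0, 60_000.0)
        post = poisson_train(20.0, 60_000.0)

        w = 0.5
        for tp in pre:
            for to in post:
                dt = to - tp                 # post minus pre
                if dt > 0:
                    w += A_plus * np.exp(-dt / tau_plus)
                elif dt < 0:
                    w -= A_minus * np.exp(dt / tau_minus)
        w = min(max(w, 0.0), 1.0)            # hard weight bounds
        print("weight after one minute of uncorrelated drive:", w)

    With slightly depression-dominated amplitudes, as here, uncorrelated inputs drift weights downward while correlated pools would potentiate selectively, which is the competition the paper exploits.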

  1. Autonomous Optimization of Targeted Stimulation of Neuronal Networks.

    PubMed

    Kumar, Sreedhar S; Wülfing, Jan; Okujeni, Samora; Boedecker, Joschka; Riedmiller, Martin; Egert, Ulrich

    2016-08-01

    Driven by clinical needs and progress in neurotechnology, targeted interaction with neuronal networks is of increasing importance. Yet, the dynamics of interaction between intrinsic ongoing activity in neuronal networks and their response to stimulation is unknown. Nonetheless, electrical stimulation of the brain is increasingly explored as a therapeutic strategy and as a means to artificially inject information into neural circuits. Strategies using regular or event-triggered fixed stimuli discount the influence of ongoing neuronal activity on the stimulation outcome and are therefore not optimal to induce specific responses reliably. Yet, without suitable mechanistic models, it is hardly possible to optimize such interactions, in particular when desired response features are network-dependent and are initially unknown. In this proof-of-principle study, we present an experimental paradigm using reinforcement-learning (RL) to optimize stimulus settings autonomously and evaluate the learned control strategy using phenomenological models. We asked how to (1) capture the interaction of ongoing network activity, electrical stimulation and evoked responses in a quantifiable 'state' to formulate a well-posed control problem, (2) find the optimal state for stimulation, and (3) evaluate the quality of the solution found. Electrical stimulation of generic neuronal networks grown from rat cortical tissue in vitro evoked bursts of action potentials (responses). We show that the dynamic interplay of their magnitudes and the probability to be intercepted by spontaneous events defines a trade-off scenario with a network-specific unique optimal latency maximizing stimulus efficacy. An RL controller was set to find this optimum autonomously. Across networks, stimulation efficacy increased in 90% of the sessions after learning and learned latencies strongly agreed with those predicted from open-loop experiments. Our results show that autonomous techniques can exploit quantitative relationships underlying activity-response interaction in biological neuronal networks to choose optimal actions. Simple phenomenological models can be useful to validate the quality of the resulting controllers.
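
    The closed-loop experiment cannot be reproduced here, but the control idea can be miniaturized: below is a sketch of an epsilon-greedy bandit searching a discretized latency axis against an invented, noisy single-peaked efficacy curve. The curve shape, noise level and epsilon are assumptions purely for illustration; in the real paradigm, the reward would come from the evoked-response magnitude recorded on the MEA.

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy stand-in for stimulus efficacy as a function of the latency since
        # the last network burst: response magnitude grows with latency but is
        # increasingly likely to be intercepted by a spontaneous burst.
        latencies = np.linspace(0.1, 3.0, 30)            # s since last burst

        def trial_efficacy(lat):
            mean = lat / (1.0 + np.exp(4.0 * (lat - 1.5)))
            return mean + 0.05 * rng.normal()

        # Epsilon-greedy bandit: one action per candidate latency.
        q = np.zeros_like(latencies)
        n = np.zeros_like(latencies)
        eps = 0.1
        for step in range(5000):
            if rng.random() < eps or n.min() == 0:
                a = int(n.argmin()) if n.min() == 0 else int(rng.integers(len(latencies)))
            else:
                a = int(q.argmax())
            r = trial_efficacy(latencies[a])
            n[a] += 1
            q[a] += (r - q[a]) / n[a]                    # incremental mean update
        print("learned optimal latency (s):", latencies[int(q.argmax())])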

  2. Dynamics of Competition between Subnetworks of Spiking Neuronal Networks in the Balanced State.

    PubMed

    Lagzi, Fereshteh; Rotter, Stefan

    2015-01-01

    We explore and analyze the nonlinear switching dynamics of neuronal networks with non-homogeneous connectivity. The general significance of such transient dynamics for brain function is unclear; however, decision-making processes in perception and cognition, for instance, have been linked to them. The network under study here comprises three subnetworks of either excitatory or inhibitory leaky integrate-and-fire neurons, of which two are of the same type. The synaptic weights are arranged to establish and maintain a balance between excitation and inhibition under a constant external drive. Each subnetwork is randomly connected, and all neurons belonging to a particular population have the same in-degree and the same out-degree. Neurons in different subnetworks are also randomly connected with the same probability; however, depending on the type of the pre-synaptic neuron, the synaptic weight is scaled by a factor. We observed that for a certain range of the "within" versus "between" connection weights (the bifurcation parameter), the network activation spontaneously switches between the two subnetworks of the same type. This kind of dynamics has been termed "winnerless competition", and here it also has a random component. In our model, this phenomenon is well described by a set of coupled stochastic differential equations of Lotka-Volterra type that imply a competition between the subnetworks. The associated mean-field model shows the same dynamical behavior as observed in simulations of large networks comprising thousands of spiking neurons. The deterministic phase portrait is characterized by two attractors and a saddle node; the stochastic component is essentially given by the multiplicative inherent noise of the system. We find that the dwell-time distribution of the active states is exponential, indicating that the noise drives the system randomly from one attractor to the other. A similar model for a larger number of populations might suggest a general approach to studying the dynamics of interacting populations of spiking networks.
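
    A sketch of the reduced description named in the abstract: two competing Lotka-Volterra populations with multiplicative noise, integrated with Euler-Maruyama, with dwell times recorded at each switch. Growth, competition and noise coefficients are assumed values, not those fitted to the spiking-network simulations.

        import numpy as np

        rng = np.random.default_rng(2)

        # Cross-competition stronger than self-competition makes the system
        # bistable; the multiplicative noise drives switching between winners.
        g, self_c, cross_c, sigma = 1.0, 1.0, 1.6, 0.2
        dt, T = 1e-3, 500.0
        x = np.array([0.6, 0.4])
        dwell, last_switch, winner = [], 0.0, int(x.argmax())
        for k in range(int(T / dt)):
            comp = self_c * x + cross_c * x[::-1]    # intra- plus inter-competition
            x += x * (g - comp) * dt + sigma * x * np.sqrt(dt) * rng.normal(size=2)
            x = np.clip(x, 1e-6, None)
            w = int(x.argmax())
            if w != winner:                          # record dwell time at each switch
                dwell.append(k * dt - last_switch)
                last_switch, winner = k * dt, w
        print("switches:", len(dwell),
              "mean dwell time:", np.mean(dwell) if dwell else float("nan"))

    An exponential histogram of the recorded dwell times would mirror the paper's signature of noise-driven escapes between the two attractors.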

  4. Electrophysiological models of neural processing.

    PubMed

    Nelson, Mark E

    2011-01-01

    The brain is an amazing information processing system that allows organisms to adaptively monitor and control complex dynamic interactions with their environment across multiple spatial and temporal scales. Mathematical modeling and computer simulation techniques have become essential tools in understanding diverse aspects of neural processing, ranging from sub-millisecond temporal coding in the sound localization circuitry of barn owls to long-term memory storage and retrieval in humans that can span decades. The processing capabilities of individual neurons lie at the core of these models, with the emphasis shifting upward and downward across different levels of biological organization depending on the nature of the questions being addressed. This review provides an introduction to the techniques for constructing biophysically based models of individual neurons and local networks. Topics include Hodgkin-Huxley-type models of macroscopic membrane currents, Markov models of individual ion-channel currents, compartmental models of neuronal morphology, and network models involving synaptic interactions among multiple neurons.
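
    As a concrete instance of the Hodgkin-Huxley-type models reviewed here, a single squid-axon compartment with the classic 1952 rate functions, integrated by forward Euler; the step size, drive and crude spike counter are choices of this sketch.

        import numpy as np

        # Standard Hodgkin-Huxley squid-axon model (classic parameters).
        C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3            # uF/cm^2, mS/cm^2
        ENa, EK, EL = 50.0, -77.0, -54.4                  # mV

        def rates(V):
            an = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
            bn = 0.125 * np.exp(-(V + 65.0) / 80.0)
            am = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
            bm = 4.0 * np.exp(-(V + 65.0) / 18.0)
            ah = 0.07 * np.exp(-(V + 65.0) / 20.0)
            bh = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
            return an, bn, am, bm, ah, bh

        dt, T, I_ext = 0.01, 100.0, 10.0                  # ms, uA/cm^2
        V, n, m, h = -65.0, 0.317, 0.053, 0.596           # rest-state values
        spikes, above = 0, False
        for _ in range(int(T / dt)):
            an, bn, am, bm, ah, bh = rates(V)
            n += dt * (an * (1 - n) - bn * n)
            m += dt * (am * (1 - m) - bm * m)
            h += dt * (ah * (1 - h) - bh * h)
            I_ion = gNa*m**3*h*(V - ENa) + gK*n**4*(V - EK) + gL*(V - EL)
            V += dt * (I_ext - I_ion) / C
            if V > 0.0 and not above:                     # crude spike detector
                spikes += 1
            above = V > 0.0
        print("spikes in 100 ms at 10 uA/cm^2:", spikes)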

  5. Axon and dendrite geography predict the specificity of synaptic connections in a functioning spinal cord network.

    PubMed

    Li, Wen-Chang; Cooke, Tom; Sautois, Bart; Soffe, Stephen R; Borisyuk, Roman; Roberts, Alan

    2007-09-10

    How specific are the synaptic connections formed as neuronal networks develop and can simple rules account for the formation of functioning circuits? These questions are assessed in the spinal circuits controlling swimming in hatchling frog tadpoles. This is possible because detailed information is now available on the identity and synaptic connections of the main types of neuron. The probabilities of synapses between 7 types of identified spinal neuron were measured directly by making electrical recordings from 500 pairs of neurons. For the same neuron types, the dorso-ventral distributions of axons and dendrites were measured and then used to calculate the probabilities that axons would encounter particular dendrites and so potentially form synaptic connections. Surprisingly, synapses were found between all types of neuron but contact probabilities could be predicted simply by the anatomical overlap of their axons and dendrites. These results suggested that synapse formation may not require axons to recognise specific, correct dendrites. To test the plausibility of simpler hypotheses, we first made computational models that were able to generate longitudinal axon growth paths and reproduce the axon distribution patterns and synaptic contact probabilities found in the spinal cord. To test if probabilistic rules could produce functioning spinal networks, we then made realistic computational models of spinal cord neurons, giving them established cell-specific properties and connecting them into networks using the contact probabilities we had determined. A majority of these networks produced robust swimming activity. Simple factors such as morphogen gradients controlling dorso-ventral soma, dendrite and axon positions may sufficiently constrain the synaptic connections made between different types of neuron as the spinal cord first develops and allow functional networks to form. Our analysis implies that detailed cellular recognition between spinal neuron types may not be necessary for the reliable formation of functional networks to generate early behaviour like swimming.
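
    The anatomical-overlap calculation can be sketched as follows, with placeholder Gaussian dorso-ventral density profiles standing in for the measured axon and dendrite distributions; the 5 um contact range and the independent-placement assumption are simplifications for illustration only.

        import numpy as np

        # Estimate the probability that an axon passes within contact range of
        # a dendrite, given dorso-ventral density profiles for each cell type.
        dv = np.linspace(0.0, 100.0, 1001)           # dorso-ventral axis (um)
        dx = dv[1] - dv[0]

        def density(mu, sd):
            p = np.exp(-0.5 * ((dv - mu) / sd) ** 2)
            return p / (p.sum() * dx)                # normalize to a density

        axon = density(40.0, 15.0)                   # presynaptic axon profile
        dend = density(55.0, 20.0)                   # postsynaptic dendrite profile

        # Independent placement along the axis: integrate the joint density
        # over the band |axon position - dendrite position| <= reach.
        reach = 5.0
        joint = axon[:, None] * dend[None, :]
        close = np.abs(dv[:, None] - dv[None, :]) <= reach
        p_contact = float((joint * close).sum() * dx * dx)
        print("predicted contact probability:", round(p_contact, 3))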

  6. How noise affects the synchronization properties of recurrent networks of inhibitory neurons.

    PubMed

    Brunel, Nicolas; Hansel, David

    2006-05-01

    GABAergic interneurons play a major role in the emergence of various types of synchronous oscillatory patterns of activity in the central nervous system. Motivated by these experimental facts, modeling studies have investigated mechanisms for the emergence of coherent activity in networks of inhibitory neurons. However, most of these studies have focused either on the case in which the noise in the network is absent or weak, or on the opposite situation in which it is strong. Hence, a full picture of how noise affects the dynamics of such systems is still lacking. The aim of this letter is to provide a more comprehensive understanding of the mechanisms by which the asynchronous states in large, fully connected networks of inhibitory neurons are destabilized as a function of the noise level. Three types of single-neuron models are considered: the leaky integrate-and-fire (LIF) model, the exponential integrate-and-fire (EIF) model, and conductance-based models involving sodium and potassium Hodgkin-Huxley (HH) currents. We show that in all models, the instabilities of the asynchronous state can be classified in two classes. The first consists of clustering instabilities, which exist in a restricted range of noise. These instabilities lead to synchronous patterns in which the population of neurons is broken into clusters of synchronously firing neurons; the irregularity of the neurons' firing patterns is weak. The second class of instabilities, termed oscillatory firing-rate instabilities, exists at any value of noise. They lead to a cluster state at low noise. As the noise is increased, the instability occurs at larger coupling, and the pattern of firing that emerges becomes more irregular. In the regime of high noise and strong coupling, these instabilities lead to stochastic oscillations in which neurons fire in an approximately Poisson way with a common instantaneous probability of firing that oscillates in time.
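
    A toy version of such a network, assuming a fully connected inhibitory LIF population with delayed coupling and white-noise drive; the coefficient of variation of the population activity serves as a crude synchrony probe at two noise levels. All parameters are generic choices, not those analyzed in the letter.

        import numpy as np

        rng = np.random.default_rng(7)

        # Fully connected inhibitory LIF network, normalized voltage units.
        N, tau, V_th, V_re = 200, 20.0, 1.0, 0.0    # size, ms, threshold, reset
        J, delay, mu, dt = -0.05, 2.0, 1.2, 0.1     # weight, ms, suprathreshold drive
        D = int(delay / dt)

        for sigma in (0.05, 0.5):                   # weak versus strong noise
            V = rng.uniform(0.0, 1.0, N)
            buf = [np.zeros(N, dtype=bool) for _ in range(D)]
            pop = []
            for t in range(20_000):
                rec = J * buf[t % D].sum()          # delayed recurrent inhibition
                noise = sigma * np.sqrt(dt / tau) * rng.normal(size=N)
                V += dt / tau * (mu - V) + noise + rec
                spk = V >= V_th
                V[spk] = V_re
                buf[t % D] = spk
                pop.append(spk.mean())
            pop = np.array(pop[5_000:])             # discard the transient
            print(f"sigma={sigma}: population-activity CV =",
                  round(float(pop.std() / max(pop.mean(), 1e-9)), 2))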

  7. In vitro large-scale experimental and theoretical studies for the realization of bi-directional brain-prostheses.

    PubMed

    Bonifazi, Paolo; Difato, Francesco; Massobrio, Paolo; Breschi, Gian L; Pasquale, Valentina; Levi, Timothée; Goldin, Miri; Bornat, Yannick; Tedesco, Mariateresa; Bisio, Marta; Kanner, Sivan; Galron, Ronit; Tessadori, Jacopo; Taverna, Stefano; Chiappalone, Michela

    2013-01-01

    Brain-machine interfaces (BMI) were born to control "actions from thoughts" in order to recover the motor capability of patients with impaired functional connectivity between the central and peripheral nervous system. The final goal of our studies is the development of a new proof-of-concept BMI, a neuromorphic chip for brain repair, to reproduce the functional organization of a damaged part of the central nervous system. To reach this ambitious goal, we implemented a multidisciplinary "bottom-up" approach in which in vitro networks are the paradigm for the development of an in silico model to be incorporated into a neuromorphic device. In this paper we present the overall strategy and focus on the different building blocks of our studies: (i) the experimental characterization and modeling of "finite size networks", which represent the smallest and most general self-organized circuits capable of generating spontaneous collective dynamics; (ii) the induction of lesions in neuronal networks and the whole brain preparation, with special attention to the impact on the functional organization of the circuits; (iii) the first production of a neuromorphic chip able to implement a real-time model of neuronal networks. A dynamical characterization of the finite size circuits with single cell resolution is provided. A neural network model based on Izhikevich neurons was able to replicate the experimental observations. Changes in the dynamics of the neuronal circuits induced by optical and ischemic lesions are presented, respectively, for in vitro neuronal networks and for a whole brain preparation. Finally, the implementation of a neuromorphic chip reproducing the network dynamics in quasi-real time (10 ns precision) is presented.

  8. Distal gap junctions and active dendrites can tune network dynamics.

    PubMed

    Saraga, Fernanda; Ng, Leo; Skinner, Frances K

    2006-03-01

    Gap junctions allow direct electrical communication between CNS neurons. From theoretical and modeling studies, it is well known that although gap junctions can act to synchronize network output, they can also give rise to many other dynamic patterns including antiphase and other phase-locked states. The particular network pattern that arises depends on cellular, intrinsic properties that affect firing frequencies as well as the strength and location of the gap junctions. Interneurons or GABAergic neurons in hippocampus are diverse in their cellular characteristics and have been shown to have active dendrites. Furthermore, parvalbumin-positive GABAergic neurons, also known as basket cells, can contact one another via gap junctions on their distal dendrites. Using two-cell network models, we explore how distal electrical connections affect network output. We build multi-compartment models of hippocampal basket cells using NEURON and endow them with varying amounts of active dendrites. Two-cell networks of these model cells as well as reduced versions are explored. The relationship between intrinsic frequency and the level of active dendrites allows us to define three regions based on what sort of network dynamics occur with distal gap junction coupling. Weak coupling theory is used to predict the delineation of these regions as well as examination of phase response curves and distal dendritic polarization levels. We find that a nonmonotonic dependence of network dynamic characteristics (phase lags) on gap junction conductance occurs. This suggests that distal electrical coupling and active dendrite levels can control how sensitive network dynamics are to gap junction modulation. With the extended geometry, gap junctions located at more distal locations must have larger conductances for pure synchrony to occur. Furthermore, based on simulations with heterogeneous networks, it may be that one requires active dendrites if phase-locking is to occur in networks formed with distal gap junctions.

  9. Fast numerical methods for simulating large-scale integrate-and-fire neuronal networks.

    PubMed

    Rangan, Aaditya V; Cai, David

    2007-02-01

    We discuss numerical methods for simulating large-scale, integrate-and-fire (I&F) neuronal networks. Important elements in our numerical methods are (i) a neurophysiologically inspired integrating factor which casts the solution as a numerically tractable integral equation, and allows us to obtain stable and accurate individual neuronal trajectories (i.e., voltage and conductance time-courses) even when the I&F neuronal equations are stiff, such as in strongly fluctuating, high-conductance states; (ii) an iterated process of spike-spike corrections within groups of strongly coupled neurons to account for spike-spike interactions within a single large numerical time-step; and (iii) a clustering procedure of firing events in the network to take advantage of localized architectures, such as spatial scales of strong local interactions, which are often present in large-scale computational models, for example those of the primary visual cortex. (We note that the spike-spike corrections in our methods are more involved than the correction of single-neuron spike times via polynomial interpolation, as in the modified Runge-Kutta methods commonly used in simulations of I&F neuronal networks.) Our methods can evolve networks with relatively strong local interactions in an asymptotically optimal way such that each neuron fires approximately once in [Formula: see text] operations, where N is the number of neurons in the system. We note that quantifications used in computational modeling are often statistical, since measurements in a real experiment to characterize physiological systems are typically statistical, such as firing rate, interspike interval distributions, and spike-triggered voltage distributions. We emphasize that it takes much less computational effort to resolve statistical properties of certain I&F neuronal networks than to fully resolve trajectories of each and every neuron within the system. For networks operating in realistic dynamical regimes, such as strongly fluctuating, high-conductance states, our methods are designed to achieve statistical accuracy when very large time-steps are used. Moreover, our methods can also achieve trajectory-wise accuracy when small time-steps are used.
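
    The benefit of the integrating-factor idea can be seen on a single stiff membrane equation: holding the synaptic conductance fixed over one step, the exponential update is exact and remains stable at step sizes where forward Euler diverges. Parameters are generic; the paper's full scheme (spike-spike corrections, event clustering) is not reproduced here.

        import numpy as np

        # dV/dt = -(g_L*(V - E_L) + g_e*(V - E_e))/C. With g_e constant over a
        # step, V <- V_inf + (V - V_inf)*exp(-g_tot*dt/C) is the exact update.
        C, g_L, E_L, E_e = 1.0, 0.05, -65.0, 0.0

        def step_euler(V, g_e, dt):
            return V + dt * (-(g_L * (V - E_L) + g_e * (V - E_e)) / C)

        def step_exp(V, g_e, dt):
            g_tot = g_L + g_e
            V_inf = (g_L * E_L + g_e * E_e) / g_tot
            return V_inf + (V - V_inf) * np.exp(-g_tot * dt / C)

        V_eu = V_ex = -65.0
        g_e, dt = 2.0, 1.5        # high-conductance state, large time-step
        for _ in range(20):
            V_eu = step_euler(V_eu, g_e, dt)
            V_ex = step_exp(V_ex, g_e, dt)
        print("forward Euler:", V_eu, "  integrating factor:", V_ex)

    With these numbers the Euler iterate oscillates and grows without bound, while the exponential update settles at the correct steady-state voltage, which is the stiffness problem the integrating factor removes.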

  10. The Energy Coding of a Structural Neural Network Based on the Hodgkin-Huxley Model.

    PubMed

    Zhu, Zhenyu; Wang, Rubin; Zhu, Fengyun

    2018-01-01

    Based on the Hodgkin-Huxley model, the present study established a fully connected structural neural network to simulate the neural activity and energy consumption of the network by neural energy coding theory. The numerical simulation result showed that the periodicity of the network energy distribution was positively correlated to the number of neurons and coupling strength, but negatively correlated to signal transmitting delay. Moreover, a relationship was established between the energy distribution feature and the synchronous oscillation of the neural network, which showed that when the proportion of negative energy in power consumption curve was high, the synchronous oscillation of the neural network was apparent. In addition, comparison with the simulation result of structural neural network based on the Wang-Zhang biophysical model of neurons showed that both models were essentially consistent.

  11. Modeling of synchronization behavior of bursting neurons at nonlinearly coupled dynamical networks.

    PubMed

    Çakir, Yüksel

    2016-01-01

    Synchronization behaviors of bursting neurons coupled through electrical and dynamic chemical synapses are investigated. The Izhikevich model is used with random and small-world networks of bursting neurons. Various synaptic currents, consisting of diffusive electrical and time-delayed dynamic chemical synapses, are used in the simulations to investigate the influence of synaptic currents and couplings on the synchronization behavior of bursting neurons. The effects of parameters such as time delay, inhibitory synaptic strength, and decay time on synchronization behavior are investigated. It is observed that in random networks with no delay, bursting synchrony is established with the electrical synapse alone, while single-spike synchrony is observed with hybrid coupling. In small-world networks with no delay, periodic bursting behavior with multiple spikes is observed when only chemical or only electrical synapses exist. Single-spike and multiple-spike bursting are established with hybrid couplings. With zero time delay, a decrease in the synchronization measure is observed in random networks as the decay time is increased. For synaptic delays above the active-phase period, the synchronization measure increases with increasing synaptic strength and time delay in small-world networks; in random networks, however, it increases only with increasing synaptic strength.
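
    A stripped-down sketch with two Izhikevich neurons in a bursting parameter regime and a diffusive electrical synapse; the dynamic chemical synapses, delays and network topologies studied in the paper are omitted, and the coupling strength and drive are assumptions.

        import numpy as np

        # Two Izhikevich neurons, bursting ("chattering") parameter set,
        # coupled by a diffusive electrical synapse.
        a, b, c, d = 0.02, 0.2, -50.0, 2.0
        g_el, I = 0.3, 10.0                        # coupling, external drive
        dt, T = 0.5, 2000.0                        # ms

        v = np.array([-65.0, -60.0])
        u = b * v
        sync_err, count = 0.0, 0
        for k in range(int(T / dt)):
            coupling = g_el * (v[::-1] - v)        # diffusive electrical coupling
            v += dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I + coupling)
            u += dt * a * (b * v - u)
            fired = v >= 30.0
            v[fired] = c                           # spike reset
            u[fired] += d
            if k * dt > T / 2:                     # measure after the transient
                sync_err += abs(v[0] - v[1])
                count += 1
        print("mean |v1 - v2| after transient:", sync_err / count)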

  12. Scaling Properties of Dimensionality Reduction for Neural Populations and Network Models

    PubMed Central

    Cowley, Benjamin R.; Doiron, Brent; Kohn, Adam

    2016-01-01

    Recent studies have applied dimensionality reduction methods to understand how the multi-dimensional structure of neural population activity gives rise to brain function. It is unclear, however, how the results obtained from dimensionality reduction generalize to recordings with larger numbers of neurons and trials or how these results relate to the underlying network structure. We address these questions by applying factor analysis to recordings in the visual cortex of non-human primates and to spiking network models that self-generate irregular activity through a balance of excitation and inhibition. We compared the scaling trends of two key outputs of dimensionality reduction—shared dimensionality and percent shared variance—with neuron and trial count. We found that the scaling properties of networks with non-clustered and clustered connectivity differed, and that the in vivo recordings were more consistent with the clustered network. Furthermore, recordings from tens of neurons were sufficient to identify the dominant modes of shared variability that generalize to larger portions of the network. These findings can help guide the interpretation of dimensionality reduction outputs in regimes of limited neuron and trial sampling and help relate these outputs to the underlying network structure. PMID:27926936
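
    The two outputs compared in the paper can be illustrated with factor analysis on surrogate data; model selection by cross-validation, as used by the authors, is omitted, and the surrogate's latent dimensionality and noise scale are invented.

        import numpy as np
        from sklearn.decomposition import FactorAnalysis

        rng = np.random.default_rng(3)

        # Surrogate population activity: a few shared latent modes plus
        # private noise, then read shared dimensionality and percent shared
        # variance off the fitted loadings.
        n_neurons, n_trials, n_latent = 50, 1000, 3
        L = rng.normal(size=(n_neurons, n_latent))        # ground-truth loadings
        Z = rng.normal(size=(n_trials, n_latent))
        X = Z @ L.T + rng.normal(scale=2.0, size=(n_trials, n_neurons))

        fa = FactorAnalysis(n_components=10).fit(X)
        shared = np.sum(fa.components_**2, axis=0)        # per-neuron shared variance
        percent_shared = 100.0 * shared / (shared + fa.noise_variance_)
        evals = np.linalg.eigvalsh(fa.components_.T @ fa.components_)[::-1]
        evals = np.clip(evals, 0.0, None)
        dim = int(np.searchsorted(np.cumsum(evals) / evals.sum(), 0.95) + 1)
        print("mean % shared variance:", round(float(percent_shared.mean()), 1))
        print("dimensions for 95% of shared variance:", dim)

    Re-running this while subsampling neurons or trials gives exactly the scaling curves whose trends the paper uses to discriminate clustered from non-clustered connectivity.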

  13. Structural and Functional Alterations in Neocortical Circuits after Mild Traumatic Brain Injury

    NASA Astrophysics Data System (ADS)

    Vascak, Michal

    National concern over traumatic brain injury (TBI) is growing rapidly. Recent focus is on mild TBI (mTBI), which is the most prevalent injury level in both civilian and military demographics. A preeminent sequela of mTBI is cognitive network disruption. Advanced neuroimaging of mTBI victims supports this premise, revealing alterations in the activation and structure-function of excitatory and inhibitory neuronal systems, which are essential for network processing. However, clinical neuroimaging cannot resolve the cellular and molecular substrates underlying such changes. Therefore, to understand the full scope of mTBI-induced alterations it is necessary to study cortical networks on the microscopic level, where neurons form local networks that are the fundamental computational modules supporting cognition. Recently, in a well-controlled animal model of mTBI, we demonstrated, in the excitatory pyramidal neuron system, isolated diffuse axonal injury (DAI) in concert with electrophysiological abnormalities in nearby intact (non-DAI) neurons. These findings were consistent with altered axon initial segment (AIS) intrinsic activity functionally associated with structural plasticity, and/or disturbances in extrinsic systems related to parvalbumin (PV)-expressing interneurons that form GABAergic synapses along the pyramidal neuron perisomatic/AIS domains. The AIS and perisomatic GABAergic synapses are domains critical for regulating neuronal activity and E-I balance. In this dissertation, we focus on the neocortical excitatory pyramidal neuron/inhibitory PV+ interneuron local network following mTBI. Our central hypothesis is that mTBI disrupts neuronal network structure and function, causing an imbalance of excitatory and inhibitory systems. To address this hypothesis we exploited transgenic and cre/lox mouse models of mTBI, employing approaches that couple state-of-the-art bioimaging with electrophysiology to determine the structural and functional alterations of excitatory and inhibitory systems in the neocortex.

  14. Neuronal avalanches and learning

    NASA Astrophysics Data System (ADS)

    de Arcangelis, Lucilla

    2011-05-01

    Networks of living neurons represent one of the most fascinating systems of biology. While the physical and chemical mechanisms underlying the functioning of a single neuron are quite well understood, the collective behaviour of a system of many neurons is an extremely intriguing subject. A crucial ingredient of this complex behaviour is the plasticity of the network, namely its capacity to adapt and evolve depending on the level of activity. This plastic ability is believed, nowadays, to be at the basis of learning and memory in real brains. Spontaneous neuronal activity has recently shown features in common with other complex systems. Experimental data have, in fact, shown that electrical information propagates in a cortex slice via an avalanche mode. These avalanches are characterized by power-law distributions of size and duration, features found in other problems in the physics of complex systems, and successful models have been developed to describe their behaviour. In this contribution we discuss a statistical mechanical model for the complex activity in a neuronal network. The model implements the main physiological properties of living neurons and is able to reproduce recent experimental results. We then discuss the learning abilities of this neuronal network. Learning occurs via plastic adaptation of synaptic strengths by a non-uniform negative-feedback mechanism. The system is able to learn all the tested rules, in particular the exclusive OR (XOR) and a random rule with three inputs. The learning dynamics exhibits universal features as a function of the strength of plastic adaptation. Any rule can be learned provided that the plastic adaptation is sufficiently slow.

  15. The neural representation of the gender of faces in the primate visual system: A computer modeling study.

    PubMed

    Minot, Thomas; Dury, Hannah L; Eguchi, Akihiro; Humphreys, Glyn W; Stringer, Simon M

    2017-03-01

    We use an established neural network model of the primate visual system to show how neurons might learn to encode the gender of faces. The model consists of a hierarchy of 4 competitive neuronal layers with associatively modifiable feedforward synaptic connections between successive layers. During training, the network was presented with many realistic images of male and female faces, during which the synaptic connections are modified using biologically plausible local associative learning rules. After training, we found that different subsets of output neurons have learned to respond exclusively to either male or female faces. With the inclusion of short range excitation within each neuronal layer to implement a self-organizing map architecture, neurons representing either male or female faces were clustered together in the output layer. This learning process is entirely unsupervised, as the gender of the face images is not explicitly labeled and provided to the network as a supervisory training signal. These simulations are extended to training the network on rotating faces. It is found that by using a trace learning rule incorporating a temporal memory trace of recent neuronal activity, neurons responding selectively to either male or female faces were also able to learn to respond invariantly over different views of the faces. This kind of trace learning has been previously shown to operate within the primate visual system by neurophysiological and psychophysical studies. The computer simulations described here predict that similar neurons encoding the gender of faces will be present within the primate visual system. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
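
    The trace-learning rule mentioned here has a compact form: the Hebbian update uses an exponentially decaying trace of recent output activity, binding temporally adjacent views onto the same output neuron. The sketch below uses invented input statistics and constants, and a bare winner-take-all competition in place of the full multi-layer architecture.

        import numpy as np

        rng = np.random.default_rng(8)

        # Trace rule: dW ~ alpha * ybar * x, with ybar an exponential trace of
        # the postsynaptic activity. Two "identities", each shown as a block
        # of noisy successive views.
        eta, alpha, n_in, n_out = 0.6, 0.1, 100, 10
        faces = rng.normal(size=(2, n_in))             # two face identities
        W = rng.uniform(size=(n_out, n_in))
        W /= np.linalg.norm(W, axis=1, keepdims=True)

        trace = np.zeros(n_out)
        for epoch in range(50):
            for f in (0, 1):                           # present each face as a block
                trace[:] = 0.0                         # reset trace between objects
                for view in range(10):
                    x = faces[f] + 0.3 * rng.normal(size=n_in)
                    y = np.zeros(n_out)
                    y[np.argmax(W @ x)] = 1.0          # winner-take-all competition
                    trace = (1 - eta) * y + eta * trace
                    W += alpha * np.outer(trace, x)
                    W /= np.linalg.norm(W, axis=1, keepdims=True)

        for f in (0, 1):
            winners = {int(np.argmax(W @ (faces[f] + 0.3 * rng.normal(size=n_in))))
                       for _ in range(20)}
            print(f"face {f}: winning output neurons across views: {winners}")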

  16. Multiple mechanisms switch an electrically coupled, synaptically inhibited neuron between competing rhythmic oscillators.

    PubMed

    Gutierrez, Gabrielle J; O'Leary, Timothy; Marder, Eve

    2013-03-06

    Rhythmic oscillations are common features of nervous systems. One of the fundamental questions posed by these rhythms is how individual neurons or groups of neurons are recruited into different network oscillations. We modeled competing fast and slow oscillators connected to a hub neuron with electrical and inhibitory synapses. We explore the patterns of coordination shown in the network as a function of the electrical coupling and inhibitory synapse strengths with the help of a novel visualization method that we call the "parameterscape." The hub neuron can be switched between the fast and slow oscillators by multiple network mechanisms, indicating that a given change in network state can be achieved by degenerate cellular mechanisms. These results have importance for interpreting experiments employing optogenetic, genetic, and pharmacological manipulations to understand circuit dynamics. Copyright © 2013 Elsevier Inc. All rights reserved.

  17. Synchronization and coordination of sequences in two neural ensembles

    NASA Astrophysics Data System (ADS)

    Venaille, Antoine; Varona, Pablo; Rabinovich, Mikhail I.

    2005-06-01

    There are many types of neural networks involved in the sequential motor behavior of animals. In higher species, the control and coordination of the network dynamics is a function of the higher levels of the central nervous system, in particular the cerebellum. However, in many cases, especially for invertebrates, such coordination is the result of direct synaptic connections between small circuits. We show here that even the chaotic sequential activity of small model networks can be coordinated by electrotonic synapses connecting one or several pairs of neurons that belong to two different networks. As an example, we analyzed the coordination and synchronization of the sequential activity of two statocyst model networks of the marine mollusk Clione. The statocysts are gravity sensory organs that play a key role in postural control of the animal and the generation of a complex hunting motor program. Each statocyst network was modeled by a small ensemble of neurons with Lotka-Volterra type dynamics and nonsymmetric inhibitory interactions. We studied how two such networks were synchronized by electrical coupling in the presence of an external signal that led to winnerless competition among the neurons. We found that, as a function of the number and strength of connections between the two networks, it is possible to coordinate and synchronize the sequences that each network generates with its own chaotic dynamics. In spite of the chaoticity, the coordination of the signals is established through an activation-sequence lock for those neurons that are active at a particular instant of time.

  18. The Ising Decision Maker: a binary stochastic network for choice response time.

    PubMed

    Verdonck, Stijn; Tuerlinckx, Francis

    2014-07-01

    The Ising Decision Maker (IDM) is a new formal model for speeded two-choice decision making derived from the stochastic Hopfield network or dynamic Ising model. On a microscopic level, it consists of 2 pools of binary stochastic neurons with pairwise interactions. Inside each pool, neurons excite each other, whereas between pools, neurons inhibit each other. The perceptual input is represented by an external excitatory field. Using methods from statistical mechanics, the high-dimensional network of neurons (microscopic level) is reduced to a two-dimensional stochastic process, describing the evolution of the mean neural activity per pool (macroscopic level). The IDM can be seen as an abstract, analytically tractable multiple attractor network model of information accumulation. In this article, the properties of the IDM are studied, the relations to existing models are discussed, and it is shown that the most important basic aspects of two-choice response time data can be reproduced. In addition, the IDM is shown to predict a variety of observed psychophysical relations such as Piéron's law, the van der Molen-Keuss effect, and Weber's law. Using Bayesian methods, the model is fitted to both simulated and real data, and its performance is compared to the Ratcliff diffusion model. (c) 2014 APA, all rights reserved.
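
    A two-pool caricature of the model: binary stochastic neurons with within-pool excitation and between-pool inhibition, updated with Glauber (heat-bath) probabilities, and a choice read out when one pool's mean activity crosses a threshold. All couplings, the stimulus field, the temperature and the threshold are assumptions, not fitted IDM parameters.

        import numpy as np

        rng = np.random.default_rng(4)

        # Mean-field drive to each unit: excitation from its own pool,
        # inhibition from the other pool, plus stimulus field and bias.
        N, J_exc, J_inh, bias = 200, 6.0, 8.0, -1.0
        field = np.array([0.25, 0.0])              # evidence slightly favors pool 0
        s = np.zeros((2, N))
        theta, choice, rt = 0.75, None, None
        for t in range(1, 2001):
            m = s.mean(axis=1)
            for p in (0, 1):
                h = J_exc * m[p] - J_inh * m[1 - p] + field[p] + bias
                prob_on = 1.0 / (1.0 + np.exp(-h))  # heat-bath flip probability
                s[p] = (rng.random(N) < prob_on).astype(float)
            m = s.mean(axis=1)
            if m.max() > theta:                     # decision threshold reached
                choice, rt = int(m.argmax()), t
                break
        print("choice:", choice, "response time (sweeps):", rt)

    Repeating this over trials yields choice proportions and response-time distributions, the two observables the IDM is fitted to.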

  19. Brain mechanisms for perceptual and reward-related decision-making.

    PubMed

    Deco, Gustavo; Rolls, Edmund T; Albantakis, Larissa; Romo, Ranulfo

    2013-04-01

    Phenomenological models of decision-making, including the drift-diffusion and race models, are compared with mechanistic, biologically plausible models, such as integrate-and-fire attractor neuronal network models. The attractor network models show how decision confidence is an emergent property; and make testable predictions about the neural processes (including neuronal activity and fMRI signals) involved in decision-making which indicate that the medial prefrontal cortex is involved in reward value-based decision-making. Synaptic facilitation in these models can help to account for sequential vibrotactile decision-making, and for how postponed decision-related responses are made. The randomness in the neuronal spiking-related noise that makes the decision-making probabilistic is shown to be increased by the graded firing rate representations found in the brain, to be decreased by the diluted connectivity, and still to be significant in biologically large networks with thousands of synapses onto each neuron. The stability of these systems is shown to be influenced in different ways by glutamatergic and GABAergic efficacy, leading to a new field of dynamical neuropsychiatry with applications to understanding schizophrenia and obsessive-compulsive disorder. The noise in these systems is shown to be advantageous, and to apply to similar attractor networks involved in short-term memory, long-term memory, attention, and associative thought processes. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. Network synchronization in hippocampal neurons.

    PubMed

    Penn, Yaron; Segal, Menahem; Moses, Elisha

    2016-03-22

    Oscillatory activity is widespread in dynamic neuronal networks. The main paradigm for the origin of periodicity consists of specialized pacemaking elements that synchronize and drive the rest of the network; however, other models exist. Here, we studied the spontaneous emergence of synchronized periodic bursting in a network of cultured dissociated neurons from rat hippocampus and cortex. Surprisingly, about 60% of all active neurons were self-sustained oscillators when disconnected, each with its own natural frequency. The individual neuron's tendency to oscillate and the corresponding oscillation frequency are controlled by its excitability. The single neuron intrinsic oscillations were blocked by riluzole, and are thus dependent on persistent sodium leak currents. Upon a gradual retrieval of connectivity, the synchrony evolves: Loose synchrony appears already at weak connectivity, with the oscillators converging to one common oscillation frequency, yet shifted in phase across the population. Further strengthening of the connectivity causes a reduction in the mean phase shifts until zero-lag is achieved, manifested by synchronous periodic network bursts. Interestingly, the frequency of network bursting matches the average of the intrinsic frequencies. Overall, the network behaves like other universal systems, where order emerges spontaneously by entrainment of independent rhythmic units. Although simplified with respect to circuitry in the brain, our results attribute a basic functional role for intrinsic single neuron excitability mechanisms in driving the network's activity and dynamics, contributing to our understanding of developing neural circuits.
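
    The entrainment scenario maps naturally onto coupled phase oscillators; below is a Kuramoto-style sketch in which heterogeneous intrinsic frequencies converge to a common value near the population mean as coupling grows. The phase-oscillator abstraction and all constants are choices of this illustration, not a model of the cultured cells.

        import numpy as np

        rng = np.random.default_rng(5)

        # Kuramoto mean-field model: dtheta_i = omega_i + K*r*sin(psi - theta_i).
        N = 100
        omega = 2 * np.pi * rng.normal(0.5, 0.1, N)      # intrinsic rates ~0.5 Hz
        dt, T = 0.01, 200.0

        for K in (0.0, 0.5, 4.0):                        # coupling strength
            theta = rng.uniform(0, 2 * np.pi, N)
            for _ in range(int(T / dt)):                 # relax to steady state
                z = np.exp(1j * theta).mean()
                r, psi = np.abs(z), np.angle(z)
                theta += dt * (omega + K * r * np.sin(psi - theta))
            theta0 = theta.copy()
            for _ in range(int(10.0 / dt)):              # measure effective rates
                z = np.exp(1j * theta).mean()
                r, psi = np.abs(z), np.angle(z)
                theta += dt * (omega + K * r * np.sin(psi - theta))
            freq = (theta - theta0) / (2 * np.pi * 10.0)
            print(f"K={K}: order parameter r={r:.2f}, "
                  f"frequency spread={freq.std():.4f} Hz")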

  1. Importance of being Nernst: Synaptic activity and functional relevance in stem cell-derived neurons

    PubMed Central

    Bradford, Aaron B; McNutt, Patrick M

    2015-01-01

    Functional synaptogenesis and network emergence are signature endpoints of neurogenesis. These behaviors provide higher-order confirmation that biochemical and cellular processes necessary for neurotransmitter release, post-synaptic detection and network propagation of neuronal activity have been properly expressed and coordinated among cells. The development of synaptic neurotransmission can therefore be considered a defining property of neurons. Although dissociated primary neuron cultures readily form functioning synapses and develop network behaviors in vitro, continuously cultured neurogenic cell lines have historically failed to meet these criteria. Therefore, in vitro-derived neuron models that develop synaptic transmission are critically needed for a wide array of studies, including molecular neuroscience, developmental neurogenesis, disease research and neurotoxicology. Over the last decade, neurons derived from various stem cell lines have shown varying ability to develop into functionally mature neurons. In this review, we will discuss the neurogenic potential of various stem cell populations, addressing the strengths and weaknesses of each, with particular attention to the emergence of functional behaviors. We will propose methods to functionally characterize new stem cell-derived neuron (SCN) platforms to improve their reliability as physiologically relevant models. Finally, we will review how synaptically active SCNs can be applied to accelerate research in a variety of areas. Ultimately, emphasizing the critical importance of synaptic activity and network responses as a marker of neuronal maturation is anticipated to result in in vitro findings that better translate to efficacious clinical treatments. PMID:26240679

  2. A novel enteric neuron-glia coculture system reveals the role of glia in neuronal development.

    PubMed

    Le Berre-Scoul, Catherine; Chevalier, Julien; Oleynikova, Elena; Cossais, François; Talon, Sophie; Neunlist, Michel; Boudin, Hélène

    2017-01-15

    Unlike astrocytes in the brain, the potential role of enteric glial cells (EGCs) in the formation of the enteric neuronal circuit is currently unknown. To examine the role of EGCs in the formation of the neuronal network, we developed a novel neuron-enriched culture model from embryonic rat intestine grown in indirect coculture with EGCs. We found that EGCs shape axonal complexity and synapse density in enteric neurons, through purinergic- and glial cell line-derived neurotrophic factor-dependent pathways. Using a novel and valuable culture model to study enteric neuron-glia interactions, our study identified EGCs as a key cellular actor regulating neuronal network maturation. In the nervous system, the formation of neuronal circuitry results from a complex and coordinated action of intrinsic and extrinsic factors. In the CNS, extrinsic mediators derived from astrocytes have been shown to play a key role in neuronal maturation, including dendritic shaping, axon guidance and synaptogenesis. In the enteric nervous system (ENS), the potential role of enteric glial cells (EGCs) in the maturation of the developing enteric neuronal circuit is currently unknown. A major obstacle in addressing this question is the difficulty in obtaining a valuable experimental model in which enteric neurons could be isolated and maintained without EGCs. We adapted a cell culture method previously developed for CNS neurons to establish a neuron-enriched primary culture from embryonic rat intestine which was cultured in indirect coculture with EGCs. We demonstrated that enteric neurons grown in such conditions showed several structural, phenotypic and functional hallmarks of proper development and maturation. However, when neurons were grown without EGCs, the complexity of the axonal arbour and the density of synapses were markedly reduced, suggesting that glial-derived factors contribute strongly to the formation of the neuronal circuitry. We found that these effects played by EGCs were mediated in part through purinergic P2Y1 receptor- and glial cell line-derived neurotrophic factor-dependent pathways. Using a novel and valuable culture model to study enteric neuron-glia interactions, our study identified EGCs as a key cellular actor required for neuronal network maturation. © 2016 The Authors. The Journal of Physiology © 2016 The Physiological Society.

  3. Irregular Collective Behavior of Heterogeneous Neural Networks

    NASA Astrophysics Data System (ADS)

    Luccioli, Stefano; Politi, Antonio

    2010-10-01

    We investigate a network of integrate-and-fire neurons characterized by a distribution of spiking frequencies. Upon increasing the coupling strength, the model exhibits a transition from an asynchronous regime to a nontrivial collective behavior. Numerical simulations of large systems indicate that, at variance with the Kuramoto model, (i) the macroscopic dynamics stays irregular and (ii) the microscopic (single-neuron) evolution is linearly stable.

  4. Multi-level characterization of balanced inhibitory-excitatory cortical neuron network derived from human pluripotent stem cells.

    PubMed

    Nadadhur, Aishwarya G; Emperador Melero, Javier; Meijer, Marieke; Schut, Desiree; Jacobs, Gerbren; Li, Ka Wan; Hjorth, J J Johannes; Meredith, Rhiannon M; Toonen, Ruud F; Van Kesteren, Ronald E; Smit, August B; Verhage, Matthijs; Heine, Vivi M

    2017-01-01

    The generation of neuronal cultures from induced pluripotent stem cells (hiPSCs) serves the study of human brain disorders. However, we lack neuronal networks with balanced excitatory-inhibitory activities that are suitable for single-cell analysis. We generated low-density networks of hPSC-derived GABAergic and glutamatergic cortical neurons. We used two different co-culture models with astrocytes. Using confocal microscopy, electrophysiological recordings, calcium imaging and mRNA analysis, we show that these cultures have balanced excitatory-inhibitory synaptic identities. These simple and robust protocols offer the opportunity for single-cell to multi-level analysis of patient hiPSC-derived cortical excitatory-inhibitory networks, thereby creating advanced tools to study disease mechanisms underlying neurodevelopmental disorders.

  5. A unified framework for spiking and gap-junction interactions in distributed neuronal network simulations.

    PubMed

    Hahne, Jan; Helias, Moritz; Kunkel, Susanne; Igarashi, Jun; Bolten, Matthias; Frommer, Andreas; Diesmann, Markus

    2015-01-01

    Contemporary simulators for networks of point and few-compartment model neurons come with a plethora of ready-to-use neuron and synapse models and support complex network topologies. Recent technological advancements have broadened the spectrum of application further to the efficient simulation of brain-scale networks on supercomputers. In distributed network simulations the amount of spike data that accrues per millisecond and process is typically low, such that a common optimization strategy is to communicate spikes at relatively long intervals, where the upper limit is given by the shortest synaptic transmission delay in the network. This approach is well-suited for simulations that employ only chemical synapses but it has so far impeded the incorporation of gap-junction models, which require instantaneous neuronal interactions. Here, we present a numerical algorithm based on a waveform-relaxation technique which allows for network simulations with gap junctions in a way that is compatible with the delayed communication strategy. Using a reference implementation in the NEST simulator, we demonstrate that the algorithm and the required data structures can be smoothly integrated with existing code such that they complement the infrastructure for spiking connections. To show that the unified framework for gap-junction and spiking interactions achieves high performance and delivers high accuracy in the presence of gap junctions, we present benchmarks for workstations, clusters, and supercomputers. Finally, we discuss limitations of the novel technology.
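
    The core of the waveform-relaxation idea fits in a few lines: over one communication interval, each cell is integrated against the partner's voltage trace from the previous iterate, and the interval is re-solved until the traces converge. The two-cell passive-membrane example below is generic and mirrors only the idea, not the NEST data structures or benchmarks.

        import numpy as np

        # Jacobi waveform relaxation for two passive membranes coupled by a
        # gap junction over one communication interval.
        tau, g_gap = 10.0, 0.1            # ms, coupling conductance (arbitrary)
        E_rest = (-70.0, -50.0)           # the two cells differ at rest
        dt, T = 0.1, 20.0                 # step, one communication interval
        n = int(T / dt)

        def integrate(E, V0, partner):    # solve one cell against a fixed trace
            V = np.empty(n + 1)
            V[0] = V0
            for k in range(n):
                dV = -(V[k] - E) / tau + g_gap * (partner[k] - V[k])
                V[k + 1] = V[k] + dt * dV
            return V

        traces = [np.full(n + 1, E_rest[0]), np.full(n + 1, E_rest[1])]
        for it in range(50):
            new = [integrate(E_rest[0], E_rest[0], traces[1]),
                   integrate(E_rest[1], E_rest[1], traces[0])]
            change = max(np.abs(new[0] - traces[0]).max(),
                         np.abs(new[1] - traces[1]).max())
            traces = new
            if change < 1e-9:             # waveforms converged for this interval
                break
        print("waveform-relaxation iterations:", it + 1)
        print("end-of-interval voltages:", traces[0][-1], traces[1][-1])

    Because whole waveforms, not instantaneous states, are exchanged, this iteration is compatible with communicating only once per interval, which is what lets gap junctions coexist with the delayed spike-exchange strategy.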

  7. Evolvable Neuronal Paths: A Novel Basis for Information and Search in the Brain

    PubMed Central

    Fernando, Chrisantha; Vasas, Vera; Szathmáry, Eörs; Husbands, Phil

    2011-01-01

    We propose a previously unrecognized kind of informational entity in the brain that is capable of acting as the basis for unlimited hereditary variation in neuronal networks. This unit is a path of activity through a network of neurons, analogous to a path taken through a hidden Markov model. To prove in principle the capabilities of this new kind of informational substrate, we show how a population of paths can be used as the hereditary material for a neuronally implemented genetic algorithm (the Swiss-army knife of black-box optimization techniques), which we have proposed elsewhere could operate at somatic timescales in the brain. We compare this to the same genetic algorithm using a standard 'genetic' informational substrate, i.e. non-overlapping discrete genotypes, on a range of optimization problems. A path evolution algorithm (PEA) is defined as any algorithm that implements natural selection of paths in a network substrate. A PEA is a previously unrecognized type of natural selection that is well suited for implementation by biological neuronal networks with structural plasticity. The important similarities and differences between a standard genetic algorithm and a PEA are considered. Whilst most experiments are conducted on an abstract network model, at the conclusion of the paper a slightly more realistic neuronal implementation of a PEA is outlined, based on Izhikevich spiking neurons. Finally, experimental predictions are made for the identification of such informational paths in the brain. PMID:21887266
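
    Abstracting away the spiking substrate, path evolution can be sketched as selection and mutation over walks in a directed graph; the graph, the node-reward fitness and the regrow-from-a-random-hop mutation below are invented stand-ins for the paper's neuronal implementation.

        import numpy as np

        rng = np.random.default_rng(9)

        # A population of fixed-length paths through a random digraph is
        # selected on a fitness over visited nodes; mutation regrows the path
        # suffix from a randomly chosen hop.
        n_nodes, out_deg, path_len, pop_size = 50, 5, 8, 30
        adj = [rng.choice(n_nodes, out_deg, replace=False) for _ in range(n_nodes)]
        value = rng.uniform(size=n_nodes)            # per-node reward (assumption)

        def random_path():
            p = [0]
            for _ in range(path_len - 1):
                p.append(int(rng.choice(adj[p[-1]])))
            return p

        def fitness(p):
            return float(sum(value[n] for n in set(p)))   # reward distinct nodes

        pop = [random_path() for _ in range(pop_size)]
        for gen in range(100):
            pop.sort(key=fitness, reverse=True)
            survivors = pop[: pop_size // 2]         # truncation selection
            children = []
            for p in survivors:
                cut = int(rng.integers(1, path_len))
                child = p[:cut]                      # keep prefix, regrow suffix
                while len(child) < path_len:
                    child.append(int(rng.choice(adj[child[-1]])))
                children.append(child)
            pop = survivors + children
        print("best path fitness:", fitness(max(pop, key=fitness)))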

  8. Cell Assembly Dynamics of Sparsely-Connected Inhibitory Networks: A Simple Model for the Collective Activity of Striatal Projection Neurons.

    PubMed

    Angulo-Garcia, David; Berke, Joshua D; Torcini, Alessandro

    2016-02-01

    Striatal projection neurons form a sparsely-connected inhibitory network, and this arrangement may be essential for the appropriate temporal organization of behavior. Here we show that a simplified, sparse inhibitory network of Leaky-Integrate-and-Fire neurons can reproduce some key features of striatal population activity, as observed in brain slices. In particular we develop a new metric to determine the conditions under which sparse inhibitory networks form anti-correlated cell assemblies with time-varying activity of individual cells. We find that under these conditions the network displays an input-specific sequence of cell assembly switching, that effectively discriminates similar inputs. Our results support the proposal that GABAergic connections between striatal projection neurons allow stimulus-selective, temporally-extended sequential activation of cell assemblies. Furthermore, we help to show how altered intrastriatal GABAergic signaling may produce aberrant network-level information processing in disorders such as Parkinson's and Huntington's diseases.

  9. Estimating Temporal Causal Interaction between Spike Trains with Permutation and Transfer Entropy

    PubMed Central

    Li, Zhaohui; Li, Xiaoli

    2013-01-01

    Estimating the causal interaction between neurons is very important for better understanding the functional connectivity in neuronal networks. We propose a method called normalized permutation transfer entropy (NPTE) to evaluate the temporal causal interaction between spike trains, which quantifies the fraction of ordinal information in one neuron that is present in another. The performance of this method is evaluated with spike trains generated by an Izhikevich neuronal model. Results show that the NPTE method can effectively estimate the causal interaction between two neurons without influence of data length. Considering both the precision of the estimated time delay and the robustness of the estimated information flow against neuronal firing rate, the NPTE method is superior to other information-theoretic methods, including normalized transfer entropy, symbolic transfer entropy and permutation conditional mutual information. To test the performance of NPTE on analyzing simulated biophysically realistic synapses, an Izhikevich cortical network based on this neuronal model is employed. It is found that the NPTE method is able to characterize mutual interactions and identify spurious causality exactly in a network of three neurons. We conclude that the proposed method can obtain a more reliable comparison of interactions between different pairs of neurons and is a promising tool to uncover more details on neural coding. PMID:23940662
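
    The core computation, transfer entropy over ordinal (permutation) symbols, can be sketched as follows on a pair of unidirectionally coupled AR(1) processes; the paper's normalization and its spike-train preprocessing are omitted, and the embedding dimension is an arbitrary choice.

        import numpy as np
        from collections import Counter

        rng = np.random.default_rng(6)

        # Symbolize each series by the rank order of m consecutive samples,
        # then ask how much the source's symbol improves prediction of the
        # target's next symbol.
        def symbolize(x, m=3):
            pats = [tuple(np.argsort(x[i:i + m])) for i in range(len(x) - m + 1)]
            codes = {p: c for c, p in enumerate(sorted(set(pats)))}
            return np.array([codes[p] for p in pats])

        def transfer_entropy(src, tgt):
            x, y, y1 = src[:-1], tgt[:-1], tgt[1:]
            p_xyy1 = Counter(zip(x, y, y1))
            p_xy, p_yy1, p_y = Counter(zip(x, y)), Counter(zip(y, y1)), Counter(y)
            n = len(x)
            te = 0.0
            for (xi, yi, y1i), c in p_xyy1.items():
                # TE = sum p(x,y,y') * log2[ p(y'|y,x) / p(y'|y) ]
                te += (c / n) * np.log2(c * p_y[yi] / (p_xy[xi, yi] * p_yy1[yi, y1i]))
            return te

        # Unidirectionally coupled AR(1) processes: X drives Y with a lag of 1.
        T = 20_000
        X, Y = np.zeros(T), np.zeros(T)
        for t in range(1, T):
            X[t] = 0.5 * X[t - 1] + rng.normal()
            Y[t] = 0.5 * Y[t - 1] + 0.6 * X[t - 1] + rng.normal()
        sx, sy = symbolize(X), symbolize(Y)
        print("TE X->Y:", transfer_entropy(sx, sy))      # should exceed reverse
        print("TE Y->X:", transfer_entropy(sy, sx))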

  10. Identified Serotonergic Modulatory Neurons Have Heterogeneous Synaptic Connectivity within the Olfactory System of Drosophila.

    PubMed

    Coates, Kaylynn E; Majot, Adam T; Zhang, Xiaonan; Michael, Cole T; Spitzer, Stacy L; Gaudry, Quentin; Dacks, Andrew M

    2017-08-02

    Modulatory neurons project widely throughout the brain, dynamically altering network processing based on an animal's physiological state. The connectivity of individual modulatory neurons can be complex, as they often receive input from a variety of sources and are diverse in their physiology, structure, and gene expression profiles. To establish basic principles about the connectivity of individual modulatory neurons, we examined a pair of identified neurons, the "contralaterally projecting, serotonin-immunoreactive deutocerebral neurons" (CSDns), within the olfactory system of Drosophila. Specifically, we determined the neuronal classes providing synaptic input to the CSDns within the antennal lobe (AL), an olfactory network targeted by the CSDns, and the degree to which CSDn active zones are uniformly distributed across the AL. Using anatomical techniques, we found that the CSDns received glomerulus-specific input from olfactory receptor neurons (ORNs) and projection neurons (PNs), and networkwide input from local interneurons (LNs). Furthermore, we quantified the number of CSDn active zones in each glomerulus and found that CSDn output is not uniform, but rather heterogeneous, across glomeruli and stereotyped from animal to animal. Finally, we demonstrate that the CSDns synapse broadly onto LNs and PNs throughout the AL but do not synapse upon ORNs. Our results demonstrate that modulatory neurons do not necessarily provide purely top-down input but rather receive neuron class-specific input from the networks that they target, and that even a two-cell modulatory network has a highly heterogeneous, yet stereotyped, pattern of connectivity. SIGNIFICANCE STATEMENT Modulatory neurons often project broadly throughout the brain to alter processing based on physiological state. However, the connectivity of individual modulatory neurons to their target networks is not well understood, as modulatory neuron populations are heterogeneous in their physiology, morphology, and gene expression. In this study, we use a pair of identified serotonergic neurons within the Drosophila olfactory system as a model to establish a framework for modulatory neuron connectivity. We demonstrate that individual modulatory neurons can integrate neuron class-specific input from their target network, which is often nonreciprocal. Additionally, modulatory neuron output can be stereotyped, yet nonuniform, across network regions. Our results provide new insight into the synaptic relationships that underlie network function of modulatory neurons. Copyright © 2017 the authors.

  11. Synchronization in a noise-driven developing neural network

    NASA Astrophysics Data System (ADS)

    Lin, I.-H.; Wu, R.-K.; Chen, C.-M.

    2011-11-01

    We use computer simulations to investigate the structural and dynamical properties of a developing neural network whose activity is driven by noise. Structurally, the constructed neural networks in our simulations exhibit the small-world properties that have been observed in several neural networks. The dynamical change of neuronal membrane potential is described by the Hodgkin-Huxley model, and two types of learning rules, spike-timing-dependent plasticity (STDP) and inverse STDP, are considered to restructure the synaptic strength between neurons. Clustered synchronized firing (SF) of the network is observed when the network connectivity (number of connections/maximal connections) is about 0.75, in which the firing rate of neurons is only half of the network frequency. At a connectivity of 0.86, all neurons fire synchronously at the network frequency. The network SF frequency increases logarithmically with the culturing time of a growing network and decreases exponentially with the delay time in signal transmission. These conclusions are consistent with experimental observations. The phase diagrams of SF in a developing network are investigated for both learning rules.
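
    The abstract names the plasticity rules but not their parameters; the sketch below is a minimal pair-based exponential STDP window in Python (amplitudes and time constant are illustrative), with a flag for the inverse rule obtained by flipping the sign of the weight change.

        import numpy as np

        def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0, inverse=False):
            """Weight change for a spike-time difference dt = t_post - t_pre (ms).
            Pre-before-post (dt > 0) potentiates; post-before-pre depresses."""
            dw = np.where(dt > 0, a_plus * np.exp(-dt / tau),
                          -a_minus * np.exp(dt / tau))
            return -dw if inverse else dw

    Here stdp_dw(5.0) returns a small potentiation and stdp_dw(-5.0) a small depression; with inverse=True the signs are exchanged, as in the inverse STDP rule considered in the paper.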

  12. Creation of defined single cell resolution neuronal circuits on microelectrode arrays

    NASA Astrophysics Data System (ADS)

    Pirlo, Russell Kirk

    2009-12-01

    The way cell-cell organization of neuronal networks influences activity and facilitates function is not well understood. Microelectrode arrays (MEAs) and advancing cell patterning technologies have enabled access to and control of in vitro neuronal networks, spawning much new research in neuroscience and neuroengineering. We propose that small, simple networks of neurons with defined circuitry may serve as valuable research models where every connection can be analyzed, controlled and manipulated. Towards the goal of creating such neuronal networks, we have applied microfabricated elastomeric membranes, surface modification and our unique laser cell patterning system to create defined neuronal circuits with single-cell precision on MEAs. Definition of synaptic connectivity was imposed by the 3D physical constraints of polydimethylsiloxane elastomeric membranes. The membranes had 20 μm clear-through holes and 2-3 μm deep channels which, when applied to the surface of the MEA, formed microwells that confined neurons to the electrodes and were connected via shallow tunnels that directed neurite outgrowth. Tapering and turning of channels was used to influence neurite polarity. Biocompatibility of the membranes was increased by vacuum baking, oligomer extraction, and autoclaving. Membranes were bound to the MEA by oxygen plasma treatment and heated pressure. The MEA/membrane surface was treated with oxygen plasma, poly-D-lysine and laminin to improve neuron attachment, survival and neurite outgrowth. Prior to cell patterning, the outer edge of the culture area was seeded with 5x10(5) cells per cm and incubated for 2 days. Single embryonic day 7 chick forebrain neurons were then patterned into the microwells and onto the electrodes using our laser cell patterning system. Patterned neurons successfully attached to and were confined to the electrodes. Neurites extended through the interconnecting channels and connected with adjacent neurons. These results demonstrate that neuronal circuits can be created with clearly defined circuitry and a one-to-one neuron-electrode ratio. The techniques and processes described here may be used in future research to create defined neuronal circuits to model in vivo circuits and study neuronal network processing.

  13. Estimating network parameters from combined dynamics of firing rate and irregularity of single neurons.

    PubMed

    Hamaguchi, Kosuke; Riehle, Alexa; Brunel, Nicolas

    2011-01-01

    High firing irregularity is a hallmark of cortical neurons in vivo, and modeling studies suggest a balance of excitation and inhibition is necessary to explain this high irregularity. Such a balance must be generated, at least partly, from local interconnected networks of excitatory and inhibitory neurons, but the details of the local network structure are largely unknown. The dynamics of the neural activity depends on the local network structure; this in turn suggests the possibility of estimating network structure from the dynamics of the firing statistics. Here we report a new method to estimate properties of the local cortical network from the instantaneous firing rate and irregularity (CV(2)) under the assumption that recorded neurons are part of a randomly connected sparse network. The firing irregularity, measured in monkey motor cortex, exhibits two features: many neurons show relatively stable firing irregularity in time and across different task conditions, and the time-averaged CV(2) is widely distributed from quasi-regular to irregular (CV(2) = 0.3-1.0). For each recorded neuron, we estimate the three parameters of a local network [balance of local excitation-inhibition, number of recurrent connections per neuron, and excitatory postsynaptic potential (EPSP) size] that best describe the dynamics of the measured firing rates and irregularities. Our analysis shows that optimal parameter sets form a two-dimensional manifold in the three-dimensional parameter space that is confined, for most of the neurons, to the inhibition-dominated region. High-irregularity neurons tend to be more strongly connected to the local network, either in terms of larger EPSP and inhibitory PSP size or a larger number of recurrent connections, compared with low-irregularity neurons, for a given excitatory/inhibitory balance. Incorporating either synaptic short-term depression or conductance-based synapses moves many low-CV(2) neurons into the excitation-dominated region and also increases the estimated EPSP size.
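
    The CV(2) statistic used here is, on the standard definition of Holt et al. (1996), computed from pairs of consecutive interspike intervals; assuming that definition, a minimal Python estimator is:

        import numpy as np

        def cv2(spike_times):
            """Local irregularity CV2, averaged over consecutive interspike intervals:
            CV2_i = 2|ISI_{i+1} - ISI_i| / (ISI_{i+1} + ISI_i)."""
            isi = np.diff(np.sort(spike_times))
            return np.mean(2.0 * np.abs(np.diff(isi)) / (isi[1:] + isi[:-1]))

    A perfectly regular train gives CV2 = 0 and a Poisson train gives values near 1, matching the quasi-regular-to-irregular range (0.3-1.0) reported in the abstract.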

  14. Reciprocal cholinergic and GABAergic modulation of the small ventrolateral pacemaker neurons of Drosophila's circadian clock neuron network.

    PubMed

    Lelito, Katherine R; Shafer, Orie T

    2012-04-01

    The relatively simple clock neuron network of Drosophila is a valuable model system for the neuronal basis of circadian timekeeping. Unfortunately, many key neuronal classes of this network are inaccessible to electrophysiological analysis. We have therefore adopted the use of genetically encoded sensors to address the physiology of the fly's circadian clock network. Using genetically encoded Ca(2+) and cAMP sensors, we have investigated the physiological responses of two specific classes of clock neuron, the large and small ventrolateral neurons (l- and s-LN(v)s), to two neurotransmitters implicated in their modulation: acetylcholine (ACh) and γ-aminobutyric acid (GABA). Live imaging of l-LN(v) cAMP and Ca(2+) dynamics in response to cholinergic agonist and GABA application was well aligned with published electrophysiological data, indicating that our sensors were capable of faithfully reporting acute physiological responses to these transmitters within single adult clock neuron soma. We extended these live imaging methods to s-LN(v)s, critical neuronal pacemakers whose physiological properties in the adult brain are largely unknown. Our s-LN(v) experiments revealed the predicted excitatory responses to bath-applied cholinergic agonists and the predicted inhibitory effects of GABA and established that the antagonism of ACh and GABA extends to their effects on cAMP signaling. These data support recently published but physiologically untested models of s-LN(v) modulation and lead to the prediction that cholinergic and GABAergic inputs to s-LN(v)s will have opposing effects on the phase and/or period of the molecular clock within these critical pacemaker neurons.

  15. Maximization of Learning Speed in the Motor Cortex Due to Neuronal Redundancy

    PubMed Central

    Takiyama, Ken; Okada, Masato

    2012-01-01

    Many redundancies play functional roles in motor control and motor learning. For example, kinematic and muscle redundancies contribute to stabilizing posture and impedance control, respectively. Another redundancy is the number of neurons themselves; there are overwhelmingly more neurons than muscles, and many combinations of neural activation can generate identical muscle activity. The functional roles of this neuronal redundancy remain unknown. Analysis of a redundant neural network model makes it possible to investigate these functional roles while varying the number of model neurons and holding constant the number of output units. Our analysis reveals that learning speed reaches its maximum value if and only if the model includes sufficient neuronal redundancy. This analytical result does not depend on whether the distribution of the preferred direction is uniform or skewed bimodal, both of which have been reported in neurophysiological studies. Neuronal redundancy maximizes learning speed, even if the neural network model includes recurrent connections, a nonlinear activation function, or nonlinear muscle units. Furthermore, our results do not rely on the shape of the generalization function. The results of this study suggest that one of the functional roles of neuronal redundancy is to maximize learning speed. PMID:22253586

  16. On the applicability of STDP-based learning mechanisms to spiking neuron network models

    NASA Astrophysics Data System (ADS)

    Sboev, A.; Vlasov, D.; Serenko, A.; Rybka, R.; Moloshnikov, I.

    2016-11-01

    Ways of creating a practically effective learning method for spiking neuron networks, one that would be suitable for implementation in neuromorphic hardware while remaining based on biologically plausible plasticity rules, namely STDP, are discussed. The influence of the amount of correlation between input and output spike trains on learnability under different STDP rules is evaluated. The usability of alternative combined learning schemes involving both artificial and spiking neuron models is demonstrated on the Iris benchmark task and on the practical task of gender recognition.

  17. Structure of a randomly grown 2-d network.

    PubMed

    Ajazi, Fioralba; Napolitano, George M; Turova, Tatyana; Zaurbek, Izbassar

    2015-10-01

    We introduce a growing random network on a plane as a model of a growing neuronal network. The properties of the structure of the induced graph are derived. We compare our results with available data. In particular, it is shown that, depending on the parameters of the model, the structure of the network passes through different phases over time. We conclude with a possible explanation of some empirical data on the connections between neurons. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
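
    The abstract does not give the growth kinetics, so the following Python sketch is only an illustrative planar growth rule of the same flavor (uniform node arrival and a hypothetical fixed connection radius), not the authors' model.

        import numpy as np

        rng = np.random.default_rng(0)

        def grow_network(n_nodes, radius=0.1):
            """Nodes arrive one by one, uniformly on the unit square, and each
            new node links to every earlier node closer than `radius`."""
            pos = rng.random((n_nodes, 2))
            edges = []
            for i in range(1, n_nodes):
                d = np.linalg.norm(pos[:i] - pos[i], axis=1)
                edges.extend((j, i) for j in np.flatnonzero(d < radius))
            return pos, edges

    Sweeping radius (or the number of arrivals) in such a toy model is one way to see the qualitatively different structural phases the abstract refers to.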

  18. Generalized activity equations for spiking neural network dynamics.

    PubMed

    Buice, Michael A; Chow, Carson C

    2013-01-01

    Much progress has been made in uncovering the computational capabilities of spiking neural networks. However, spiking neurons will always be more expensive to simulate than rate neurons because of the inherent disparity in time scales: the spike duration time is much shorter than the inter-spike time, which is much shorter than any learning time scale. In numerical analysis, this is a classic stiff problem. Spiking neurons are also much more difficult to study analytically. One possible approach to making spiking networks more tractable is to augment mean field activity models with some information about spiking correlations. For example, such a generalized activity model could carry information about spiking rates and correlations between spikes self-consistently. Here, we will show how this can be accomplished by constructing a complete formal probabilistic description of the network and then expanding around a small parameter such as the inverse of the number of neurons in the network. The mean field theory of the system gives a rate-like description. The first order terms in the perturbation expansion keep track of covariances.

  19. ARACHNE: A neural-neuroglial network builder with remotely controlled parallel computing

    PubMed Central

    Rusakov, Dmitri A.; Savtchenko, Leonid P.

    2017-01-01

    Creating and running realistic models of neural networks has hitherto been a task for computing professionals rather than experimental neuroscientists. This is mainly because such networks usually engage substantial computational resources, the handling of which requires specific programming skills. Here we put forward a newly developed simulation environment, ARACHNE: it enables an investigator to build and explore cellular networks of arbitrary biophysical and architectural complexity using the logic of NEURON and a simple interface on a local computer or a mobile device. The interface can control, through the internet, an optimized computational kernel installed on a remote computer cluster. ARACHNE can combine neuronal (wired) and astroglial (extracellular volume-transmission driven) network types and adopt realistic cell models from the NEURON library. The program and documentation (current version) are available at the GitHub repository https://github.com/LeonidSavtchenko/Arachne under the MIT License (MIT). PMID:28362877

  20. A review of the integrate-and-fire neuron model: II. Inhomogeneous synaptic input and network properties.

    PubMed

    Burkitt, A N

    2006-08-01

    The integrate-and-fire neuron model describes the state of a neuron in terms of its membrane potential, which is determined by the synaptic inputs and the injected current that the neuron receives. When the membrane potential reaches a threshold, an action potential (spike) is generated. This review considers the model in which the synaptic input varies periodically and is described by an inhomogeneous Poisson process, with both current and conductance synapses. The focus is on the mathematical methods that allow the output spike distribution to be analyzed, including first passage time methods and the Fokker-Planck equation. Recent interest in the response of neurons to periodic input has in part arisen from the study of stochastic resonance, which is the noise-induced enhancement of the signal-to-noise ratio. Networks of integrate-and-fire neurons behave in a wide variety of ways and have been used to model a variety of neural, physiological, and psychological phenomena. The properties of the integrate-and-fire neuron model with synaptic input described as a temporally homogeneous Poisson process are reviewed in an accompanying paper (Burkitt in Biol Cybern, 2006).
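
    As a concrete reference point for the model under review, a minimal forward-Euler leaky integrate-and-fire simulation in Python (membrane parameters are illustrative) looks as follows; the review itself analyzes this threshold-and-reset dynamics with stochastic synaptic input via first passage time methods and the Fokker-Planck equation.

        import numpy as np

        def lif(i_ext, dt=0.1, tau=20.0, v_rest=-70.0, v_th=-54.0,
                v_reset=-70.0, r_m=10.0):
            """Leaky integrate-and-fire: integrate the input current, spike and
            reset whenever the membrane potential crosses threshold."""
            v = np.full(len(i_ext), v_rest)
            spike_times = []
            for t in range(1, len(i_ext)):
                v[t] = v[t - 1] + (-(v[t - 1] - v_rest) + r_m * i_ext[t - 1]) * dt / tau
                if v[t] >= v_th:
                    spike_times.append(t * dt)
                    v[t] = v_reset
            return v, spike_times

    Replacing the deterministic i_ext with an inhomogeneous Poisson input reproduces the setting analyzed in the review.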

  1. γ-Aminobutyric Acid Type A Receptor Potentiation Inhibits Learning in a Computational Network Model.

    PubMed

    Storer, Kingsley P; Reeke, George N

    2018-04-17

    Propofol produces memory impairment at concentrations well below those abolishing consciousness. Episodic memory, mediated by the hippocampus, is most sensitive. Two potentially overlapping scenarios may explain how γ-aminobutyric acid receptor type A (GABAA) potentiation by propofol disrupts episodic memory: the first involves shifting the balance from excitation toward inhibition, while the second involves disruption of rhythmic oscillations. We use a hippocampal network model to explore these scenarios. The basis for these experiments is the proposal that the brain represents memories as groups of anatomically dispersed, strongly connected neurons. A neuronal network with connections modified by synaptic plasticity was exposed to patterned stimuli, after which spiking output demonstrated evidence of stimulus-related neuronal group development analogous to memory formation. The effect of GABAA potentiation on this memory model was studied in 100 unique networks. GABAA potentiation consistent with moderate propofol effects reduced the size of neuronal groups formed in response to a patterned stimulus by around 70%. Concurrently, the accuracy of a Bayesian classifier in identifying learned patterns in the network output was reduced. Greater potentiation led to near total failure of group formation. Theta rhythm variations had no effect on group size or classifier accuracy. Memory formation is widely thought to depend on changes in neuronal connection strengths during learning that enable neuronal groups to respond with greater facility to familiar stimuli. This experiment suggests the ability to form such groups is sensitive to alteration in the balance between excitation and inhibition, such as that resulting from administration of a γ-aminobutyric acid-mediated anesthetic agent.

  2. Three-dimensional spatial modeling of spines along dendritic networks in human cortical pyramidal neurons

    PubMed Central

    Larrañaga, Pedro; Benavides-Piccione, Ruth; Fernaud-Espinosa, Isabel; DeFelipe, Javier; Bielza, Concha

    2017-01-01

    We modeled spine distribution along the dendritic networks of pyramidal neurons in both basal and apical dendrites. To do this, we applied network spatial analysis because spines can only lie on the dendritic shaft. We expanded the existing 2D computational techniques for spatial analysis along networks to perform a 3D network spatial analysis. We analyzed five detailed reconstructions of adult human pyramidal neurons of the temporal cortex with a total of more than 32,000 spines. We confirmed that there is a spatial variation in spine density that is dependent on the distance to the cell body in all dendrites. Considering the dendritic arborizations of each pyramidal cell as a group of instances of the same observation (the neuron), we used replicated point patterns together with network spatial analysis for the first time to search for significant differences in the spine distribution of basal dendrites between different cells and between all the basal and apical dendrites. To do this, we used a recent variant of Ripley’s K function defined to work along networks. The results showed that there were no significant differences in spine distribution along basal arbors of the same neuron and along basal arbors of different pyramidal neurons. This suggests that dendritic spine distribution in basal dendritic arbors adheres to common rules. However, we did find significant differences in spine distribution along basal versus apical networks. Therefore, not only do apical and basal dendritic arborizations have distinct morphologies but they also obey different rules of spine distribution. Specifically, the results suggested that spines are more clustered along apical than in basal dendrites. Collectively, the results further highlighted that synaptic input information processing is different between these two dendritic domains. PMID:28662210

  3. Three-dimensional spatial modeling of spines along dendritic networks in human cortical pyramidal neurons.

    PubMed

    Anton-Sanchez, Laura; Larrañaga, Pedro; Benavides-Piccione, Ruth; Fernaud-Espinosa, Isabel; DeFelipe, Javier; Bielza, Concha

    2017-01-01

    We modeled spine distribution along the dendritic networks of pyramidal neurons in both basal and apical dendrites. To do this, we applied network spatial analysis because spines can only lie on the dendritic shaft. We expanded the existing 2D computational techniques for spatial analysis along networks to perform a 3D network spatial analysis. We analyzed five detailed reconstructions of adult human pyramidal neurons of the temporal cortex with a total of more than 32,000 spines. We confirmed that there is a spatial variation in spine density that is dependent on the distance to the cell body in all dendrites. Considering the dendritic arborizations of each pyramidal cell as a group of instances of the same observation (the neuron), we used replicated point patterns together with network spatial analysis for the first time to search for significant differences in the spine distribution of basal dendrites between different cells and between all the basal and apical dendrites. To do this, we used a recent variant of Ripley's K function defined to work along networks. The results showed that there were no significant differences in spine distribution along basal arbors of the same neuron and along basal arbors of different pyramidal neurons. This suggests that dendritic spine distribution in basal dendritic arbors adheres to common rules. However, we did find significant differences in spine distribution along basal versus apical networks. Therefore, not only do apical and basal dendritic arborizations have distinct morphologies but they also obey different rules of spine distribution. Specifically, the results suggested that spines are more clustered along apical than in basal dendrites. Collectively, the results further highlighted that synaptic input information processing is different between these two dendritic domains.

  4. Simultaneous stability and sensitivity in model cortical networks is achieved through anti-correlations between the in- and out-degree of connectivity

    PubMed Central

    Vasquez, Juan C.; Houweling, Arthur R.; Tiesinga, Paul

    2013-01-01

    Neuronal networks in rodent barrel cortex are characterized by stable low baseline firing rates. However, they are sensitive to the action potentials of single neurons, as suggested by recent single-cell stimulation experiments that reported quantifiable behavioral responses to short spike trains elicited in single neurons. Hence, these networks are stable against internally generated fluctuations in firing rate but at the same time remain sensitive to similarly-sized externally induced perturbations. We investigated stability and sensitivity in a simple recurrent network of stochastic binary neurons and determined numerically the effects of correlation between the number of afferent (“in-degree”) and efferent (“out-degree”) connections in neurons. The key advance reported in this work is that anti-correlation between in-/out-degree distributions increased the stability of the network in comparison to networks with no correlation or positive correlations, while being able to achieve the same level of sensitivity. The experimental characterization of degree distributions is difficult because all pre-synaptic and post-synaptic neurons have to be identified and counted. We explored whether the statistics of network motifs, which require the characterization of connections between small subsets of neurons, could be used to detect evidence for degree anti-correlations. We find that the sample frequency of the 3-neuron “ring” motif (1→2→3→1) can be used to detect degree anti-correlation for sub-networks of size 30 using about 50 samples, which is of significance because the necessary measurements are achievable experimentally in the near future. Taken together, we hypothesize that barrel cortex networks exhibit degree anti-correlations and specific network motif statistics. PMID:24223550
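
    Counting the 3-neuron ring motif mentioned in the abstract is straightforward once an adjacency matrix is available: in a directed graph without self-loops, every directed 3-cycle appears exactly three times among the closed walks of length three. A minimal Python count (the estimation from sampled sub-networks described in the paper is a separate step) is:

        import numpy as np

        def ring_motif_count(a):
            """Count directed 3-cycles i->j->k->i in a binary adjacency matrix;
            trace(A^3) counts each cycle once per starting node, hence /3."""
            a = np.asarray(a, dtype=float)
            np.fill_diagonal(a, 0.0)
            return int(round(np.trace(a @ a @ a) / 3.0))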

  5. Changes in neural network homeostasis trigger neuropsychiatric symptoms.

    PubMed

    Winkelmann, Aline; Maggio, Nicola; Eller, Joanna; Caliskan, Gürsel; Semtner, Marcus; Häussler, Ute; Jüttner, René; Dugladze, Tamar; Smolinsky, Birthe; Kowalczyk, Sarah; Chronowska, Ewa; Schwarz, Günter; Rathjen, Fritz G; Rechavi, Gideon; Haas, Carola A; Kulik, Akos; Gloveli, Tengis; Heinemann, Uwe; Meier, Jochen C

    2014-02-01

    The mechanisms that regulate the strength of synaptic transmission and intrinsic neuronal excitability are well characterized; however, the mechanisms that promote disease-causing neural network dysfunction are poorly defined. We generated mice with targeted neuron type-specific expression of a gain-of-function variant of the neurotransmitter receptor for glycine (GlyR) that is found in hippocampectomies from patients with temporal lobe epilepsy. In this mouse model, targeted expression of gain-of-function GlyR in terminals of glutamatergic cells or in parvalbumin-positive interneurons persistently altered neural network excitability. The increased network excitability associated with gain-of-function GlyR expression in glutamatergic neurons resulted in recurrent epileptiform discharge, which provoked cognitive dysfunction and memory deficits without affecting bidirectional synaptic plasticity. In contrast, decreased network excitability due to gain-of-function GlyR expression in parvalbumin-positive interneurons resulted in an anxiety phenotype, but did not affect cognitive performance or discriminative associative memory. Our animal model unveils neuron type-specific effects on cognition, formation of discriminative associative memory, and emotional behavior in vivo. Furthermore, our data identify a presynaptic disease-causing molecular mechanism that impairs homeostatic regulation of neural network excitability and triggers neuropsychiatric symptoms.

  6. Mobility timing for agent communities, a cue for advanced connectionist systems.

    PubMed

    Apolloni, Bruno; Bassis, Simone; Pagani, Elena; Rossi, Gian Paolo; Valerio, Lorenzo

    2011-12-01

    We introduce a wait-and-chase scheme that models the contact times between moving agents within a connectionist construct. The idea that elementary processors move within a network to get a proper position is borne out both by biological neurons during brain morphogenesis and by agents within social networks. From the former, we take inspiration to devise a medium-term project for new artificial neural network training procedures where mobile neurons exchange data only when they are close to one another in a proper space (are in contact). From the latter, we draw on accumulated experience with mobility tracks. We focus on the preliminary step of characterizing the elapsed time between neuron contacts, which results from a spatial process fitting in the family of random processes with memory, where chasing neurons are stochastically driven by the goal of hitting target neurons. Thus, we add an unprecedented mobility model to the literature in the field, introducing a distribution law of the intercontact times that merges features of both negative exponential and Pareto distribution laws. We give a constructive description and implementation of our model, as well as a short analytical form whose parameters are suitably estimated in terms of confidence intervals from experimental data. Numerical experiments show the model and related inference tools to be sufficiently robust to cope with two main requisites for its exploitation in a neural network: the nonindependence of the observed intercontact times and the feasibility of the model inversion problem to infer suitable mobility parameters.

  7. Inference of neuronal network spike dynamics and topology from calcium imaging data

    PubMed Central

    Lütcke, Henry; Gerhard, Felipe; Zenke, Friedemann; Gerstner, Wulfram; Helmchen, Fritjof

    2013-01-01

    Two-photon calcium imaging enables functional analysis of neuronal circuits by inferring action potential (AP) occurrence (“spike trains”) from cellular fluorescence signals. It remains unclear how experimental parameters such as signal-to-noise ratio (SNR) and acquisition rate affect spike inference and whether additional information about network structure can be extracted. Here we present a simulation framework for quantitatively assessing how well spike dynamics and network topology can be inferred from noisy calcium imaging data. For simulated AP-evoked calcium transients in neocortical pyramidal cells, we analyzed the quality of spike inference as a function of SNR and data acquisition rate using a recently introduced peeling algorithm. Given experimentally attainable values of SNR and acquisition rate, neural spike trains could be reconstructed accurately and with up to millisecond precision. We then applied statistical neuronal network models to explore how remaining uncertainties in spike inference affect estimates of network connectivity and topological features of network organization. We define the experimental conditions suitable for inferring whether the network has a scale-free structure and determine how well hub neurons can be identified. Our findings provide a benchmark for future calcium imaging studies that aim to reliably infer neuronal network properties. PMID:24399936
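
    The peeling algorithm itself is beyond the scope of this record, but the forward model that such a simulation framework needs is simple to sketch: each action potential adds a fast-rising, exponentially decaying fluorescence transient, and additive noise sets the SNR. The Python sketch below uses illustrative amplitude and decay values, not the calibrated neocortical values from the paper.

        import numpy as np

        def calcium_trace(spike_times, t_end, dt=0.02, amp=0.1,
                          tau_decay=1.0, snr=5.0):
            """Synthesize a noisy fluorescence trace from a spike train."""
            t = np.arange(0.0, t_end, dt)
            f = np.zeros_like(t)
            for ts in spike_times:
                mask = t >= ts
                f[mask] += amp * np.exp(-(t[mask] - ts) / tau_decay)
            noise = np.random.default_rng(1).normal(0.0, amp / snr, t.size)
            return t, f + noise

    Varying dt (the acquisition rate) and snr in this forward model is the kind of sweep the framework uses to benchmark spike inference quality.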

  8. The circadian rhythm induced by the heterogeneous network structure of the suprachiasmatic nucleus

    NASA Astrophysics Data System (ADS)

    Gu, Changgui; Yang, Huijie

    2016-05-01

    In mammals, the master clock is located in the suprachiasmatic nucleus (SCN), which is composed of about 20 000 nonidentical neuronal oscillators expressing different intrinsic periods. These neurons are coupled through neurotransmitters to form a network consisting of two subgroups, i.e., a ventrolateral (VL) subgroup and a dorsomedial (DM) subgroup. The VL contains about 25% of SCN neurons, which receive photic input from the retina, and the DM comprises the remaining 75% of SCN neurons, which are coupled to the VL. The synapses from the VL to the DM are evidently denser than those from the DM to the VL, so that the VL dominates the DM. Therefore, the SCN is a heterogeneous network where the neurons of the VL are linked with a large number of SCN neurons. In the present study, we mimicked the SCN network with the Goodwin model on four types of networks: an all-to-all network, a Newman-Watts (NW) small world network, an Erdös-Rényi (ER) random network, and a Barabási-Albert (BA) scale free network. We found that a circadian rhythm was induced in the BA, ER, and NW networks but was absent in the all-to-all network with weak cellular coupling; the amplitude of the circadian rhythm was largest in the BA network, which is the most heterogeneous in its structure. Our finding provides an alternative explanation for the induction or enhancement of circadian rhythm by the heterogeneity of the network structure.
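
    The Goodwin model named in the abstract is a three-variable negative-feedback oscillator; a single-cell Euler step in Python, with parameter values in the spirit of common SCN modeling studies (illustrative, not necessarily the paper's exact values), looks like this. In network versions, a coupling term driven by the mean activity of a cell's neighbors is added to the first equation, and the four topologies differ only in who those neighbors are.

        import numpy as np

        def goodwin_step(state, dt=0.01, v1=0.7, K=1.0, n=4, v2=0.35,
                         k3=0.7, v4=0.35, k5=0.7, v6=0.35):
            """One Euler step of a Goodwin oscillator (X -> Y -> Z --| X)."""
            x, y, z = state
            dx = v1 * K**n / (K**n + z**n) - v2 * x / (K + x)
            dy = k3 * x - v4 * y / (K + y)
            dz = k5 * y - v6 * z / (K + z)
            return state + dt * np.array([dx, dy, dz])

    Iterating from state = np.array([0.1, 0.1, 0.1]) should settle onto self-sustained oscillations of the clock variables when the feedback is sufficiently steep (large enough n).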

  9. Dynamic neural networking as a basis for plasticity in the control of heart rate.

    PubMed

    Kember, G; Armour, J A; Zamir, M

    2013-01-21

    A model is proposed in which the relationship between individual neurons within a neural network is dynamically changing, in effect providing a measure of "plasticity" in the control of heart rate. The neural network on which the model is based consists of three populations of neurons residing in the central nervous system, the intrathoracic extracardiac nervous system, and the intrinsic cardiac nervous system. This hierarchy of neural centers is used to challenge the classical view that the control of heart rate, a key clinical index, resides entirely in central neuronal command (spinal cord, medulla oblongata, and higher centers). Our results indicate that dynamic networking allows for the possibility of an interplay among the three populations of neurons that can alter the order of control of heart rate among them. This interplay among the three levels of control allows different neural pathways for the control of heart rate to emerge under different blood flow demands or disease conditions and, as such, it has significant clinical implications, because current understanding and treatment of heart rate anomalies are based largely on a single level of control and on neurons acting in unison as a single entity rather than individually within a (plastically) interconnected network. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. The transfer and transformation of collective network information in gene-matched networks.

    PubMed

    Kitsukawa, Takashi; Yagi, Takeshi

    2015-10-09

    Networks, such as the human society network, social and professional networks, and biological system networks, contain vast amounts of information. Information signals in networks are distributed over nodes and transmitted through intricately wired links, making the transfer and transformation of such information difficult to follow. Here we introduce a novel method for describing network information and its transfer using a model network, the Gene-matched network (GMN), in which nodes (neurons) possess attributes (genes). In the GMN, nodes are connected according to their expression of common genes. Because neurons have multiple genes, the GMN is cluster-rich. We show that, in the GMN, information transfer and transformation were controlled systematically, according to the activity level of the network. Furthermore, information transfer and transformation could be traced numerically with a vector using genes expressed in the activated neurons, the active-gene array, which was used to assess the relative activity among overlapping neuronal groups. Interestingly, this coding style closely resembles the cell-assembly neural coding theory. The method introduced here could be applied to many real-world networks, since many systems, including human society and various biological systems, can be represented as a network of this type.
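
    A minimal reading of the GMN construction can be sketched in a few lines of Python: give each node a set of genes and connect any two nodes that share at least one (the sharing threshold of a single gene is an assumption for illustration).

        import itertools

        def gene_matched_edges(gene_sets):
            """Edges between all node pairs expressing at least one common gene;
            gene_sets is a list of Python sets, one per node (neuron)."""
            return [(i, j) for i, j in
                    itertools.combinations(range(len(gene_sets)), 2)
                    if gene_sets[i] & gene_sets[j]]

    Because each neuron carries multiple genes, most nodes belong to several overlapping cliques at once, which is the cluster-rich structure the abstract describes.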

  11. Interplay of intrinsic and synaptic conductances in the generation of high-frequency oscillations in interneuronal networks with irregular spiking.

    PubMed

    Baroni, Fabiano; Burkitt, Anthony N; Grayden, David B

    2014-05-01

    High-frequency oscillations (above 30 Hz) have been observed in sensory and higher-order brain areas, and are believed to constitute a general hallmark of functional neuronal activation. Fast inhibition in interneuronal networks has been suggested as a general mechanism for the generation of high-frequency oscillations. Certain classes of interneurons exhibit subthreshold oscillations, but the effect of this intrinsic neuronal property on the population rhythm is not completely understood. We study the influence of intrinsic damped subthreshold oscillations in the emergence of collective high-frequency oscillations, and elucidate the dynamical mechanisms that underlie this phenomenon. We simulate neuronal networks composed of either Integrate-and-Fire (IF) or Generalized Integrate-and-Fire (GIF) neurons. The IF model displays purely passive subthreshold dynamics, while the GIF model exhibits subthreshold damped oscillations. Individual neurons receive inhibitory synaptic currents mediated by spiking activity in their neighbors as well as noisy synaptic bombardment, and fire irregularly at a lower rate than population frequency. We identify three factors that affect the influence of single-neuron properties on synchronization mediated by inhibition: i) the firing rate response to the noisy background input, ii) the membrane potential distribution, and iii) the shape of Inhibitory Post-Synaptic Potentials (IPSPs). For hyperpolarizing inhibition, the GIF IPSP profile (factor iii)) exhibits post-inhibitory rebound, which induces a coherent spike-mediated depolarization across cells that greatly facilitates synchronous oscillations. This effect dominates the network dynamics, hence GIF networks display stronger oscillations than IF networks. However, the restorative current in the GIF neuron lowers firing rates and narrows the membrane potential distribution (factors i) and ii), respectively), which tend to decrease synchrony. If inhibition is shunting instead of hyperpolarizing, post-inhibitory rebound is not elicited and factors i) and ii) dominate, yielding lower synchrony in GIF networks than in IF networks.

  12. Interplay of Intrinsic and Synaptic Conductances in the Generation of High-Frequency Oscillations in Interneuronal Networks with Irregular Spiking

    PubMed Central

    Baroni, Fabiano; Burkitt, Anthony N.; Grayden, David B.

    2014-01-01

    High-frequency oscillations (above 30 Hz) have been observed in sensory and higher-order brain areas, and are believed to constitute a general hallmark of functional neuronal activation. Fast inhibition in interneuronal networks has been suggested as a general mechanism for the generation of high-frequency oscillations. Certain classes of interneurons exhibit subthreshold oscillations, but the effect of this intrinsic neuronal property on the population rhythm is not completely understood. We study the influence of intrinsic damped subthreshold oscillations in the emergence of collective high-frequency oscillations, and elucidate the dynamical mechanisms that underlie this phenomenon. We simulate neuronal networks composed of either Integrate-and-Fire (IF) or Generalized Integrate-and-Fire (GIF) neurons. The IF model displays purely passive subthreshold dynamics, while the GIF model exhibits subthreshold damped oscillations. Individual neurons receive inhibitory synaptic currents mediated by spiking activity in their neighbors as well as noisy synaptic bombardment, and fire irregularly at a lower rate than population frequency. We identify three factors that affect the influence of single-neuron properties on synchronization mediated by inhibition: i) the firing rate response to the noisy background input, ii) the membrane potential distribution, and iii) the shape of Inhibitory Post-Synaptic Potentials (IPSPs). For hyperpolarizing inhibition, the GIF IPSP profile (factor iii)) exhibits post-inhibitory rebound, which induces a coherent spike-mediated depolarization across cells that greatly facilitates synchronous oscillations. This effect dominates the network dynamics, hence GIF networks display stronger oscillations than IF networks. However, the restorative current in the GIF neuron lowers firing rates and narrows the membrane potential distribution (factors i) and ii), respectively), which tend to decrease synchrony. If inhibition is shunting instead of hyperpolarizing, post-inhibitory rebound is not elicited and factors i) and ii) dominate, yielding lower synchrony in GIF networks than in IF networks. PMID:24784237

  13. Macroscopic self-oscillations and aging transition in a network of synaptically coupled quadratic integrate-and-fire neurons.

    PubMed

    Ratas, Irmantas; Pyragas, Kestutis

    2016-09-01

    We analyze the dynamics of a large network of coupled quadratic integrate-and-fire neurons, which represent the canonical model for class I neurons near the spiking threshold. The network is heterogeneous in that it includes both inherently spiking and excitable neurons. The coupling is global via synapses that take into account the finite width of synaptic pulses. Using a recently developed reduction method based on the Lorentzian ansatz, we derive a closed system of equations for the neuron's firing rate and the mean membrane potential, which are exact in the infinite-size limit. The bifurcation analysis of the reduced equations reveals a rich scenario of asymptotic behavior, the most interesting of which is the macroscopic limit-cycle oscillations. It is shown that the finite width of synaptic pulses is a necessary condition for the existence of such oscillations. The robustness of the oscillations against aging damage, which transforms spiking neurons into nonspiking neurons, is analyzed. The validity of the reduced equations is confirmed by comparing their solutions with the solutions of microscopic equations for the finite-size networks.
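
    The single-cell building block here is easy to state: the quadratic integrate-and-fire neuron obeys dv/dt = v^2 + eta, blowing up in finite time when eta > 0 (inherently spiking) and resting when eta < 0 (excitable). A minimal Python simulation, with the blow-up to infinity approximated by a finite peak (values illustrative), is below; the exact network reduction in the paper additionally requires eta to be drawn from a Lorentzian distribution across the population.

        def qif_spikes(eta, t_end, dt=1e-4, v_peak=100.0, v_reset=-100.0):
            """Quadratic integrate-and-fire: dv/dt = v^2 + eta, with reset
            from v_peak to v_reset standing in for the blow-up to +infinity."""
            v, t, spikes = 0.0, 0.0, []
            while t < t_end:
                v += dt * (v * v + eta)
                if v >= v_peak:
                    spikes.append(t)
                    v = v_reset
                t += dt
            return spikes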

  14. Continuous attractor network models of grid cell firing based on excitatory–inhibitory interactions

    PubMed Central

    Shipston‐Sharman, Oliver; Solanka, Lukas

    2016-01-01

    Neurons in the medial entorhinal cortex encode location through spatial firing fields that have a grid‐like organisation. The challenge of identifying mechanisms for grid firing has been addressed through experimental and theoretical investigations of medial entorhinal circuits. Here, we discuss evidence for continuous attractor network models that account for grid firing by synaptic interactions between excitatory and inhibitory cells. These models assume that grid‐like firing patterns are the result of computation of location from velocity inputs, with additional spatial input required to oppose drift in the attractor state. We focus on properties of continuous attractor networks that are revealed by explicitly considering excitatory and inhibitory neurons, their connectivity and their membrane potential dynamics. Models at this level of detail can account for theta‐nested gamma oscillations as well as grid firing, predict spatial firing of interneurons as well as excitatory cells, show how gamma oscillations can be modulated independently from spatial computations, reveal critical roles for neuronal noise, and demonstrate that only a subset of excitatory cells in a network need have grid‐like firing fields. Evaluating experimental data against predictions from detailed network models will be important for establishing the mechanisms mediating grid firing. PMID:27870120

  15. Synchronization from Second Order Network Connectivity Statistics

    PubMed Central

    Zhao, Liqiong; Beverlin, Bryce; Netoff, Theoden; Nykamp, Duane Q.

    2011-01-01

    We investigate how network structure can influence the tendency for a neuronal network to synchronize, or its synchronizability, independent of the dynamical model for each neuron. The synchrony analysis takes advantage of the framework of second order networks, which defines four second order connectivity statistics based on the relative frequency of two-connection network motifs. The analysis identifies two of these statistics, convergent connections and chain connections, as highly influencing the synchrony. Simulations verify that synchrony decreases with the frequency of convergent connections and increases with the frequency of chain connections. These trends persist with simulations of multiple models for the neuron dynamics and for different types of networks. Surprisingly, divergent connections, which determine the fraction of shared inputs, do not strongly influence the synchrony. The critical role of chains, rather than divergent connections, in influencing synchrony can be explained by their increasing the effective coupling strength. The decrease of synchrony with convergent connections is primarily due to the resulting heterogeneity in firing rates. PMID:21779239
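
    The four second order connectivity statistics are defined from two-connection motifs, so they can be read off an adjacency matrix directly; the Python sketch below returns raw motif counts (the paper works with frequencies relative to a random baseline, a normalization omitted here).

        import numpy as np

        def second_order_counts(a):
            """Two-edge motif counts for a binary directed adjacency matrix a
            (a[i, j] = 1 for a connection i -> j; zero diagonal assumed)."""
            a = np.asarray(a, dtype=int)
            indeg, outdeg = a.sum(axis=0), a.sum(axis=1)
            closed2 = int(np.trace(a @ a))            # ordered reciprocal pairs
            return {
                "reciprocal": closed2 // 2,                             # i <-> j
                "convergent": int((indeg * (indeg - 1) // 2).sum()),    # i -> k <- j
                "divergent":  int((outdeg * (outdeg - 1) // 2).sum()),  # i <- k -> j
                "chain":      int((indeg * outdeg).sum()) - closed2,    # i -> k -> j
            }

    On the abstract's account, networks with many chain motifs and few convergent motifs should synchronize most readily.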

  16. Geometric properties-dependent neural synchrony modulated by extracellular subthreshold electric field

    NASA Astrophysics Data System (ADS)

    Wei, Xile; Si, Kaili; Yi, Guosheng; Wang, Jiang; Lu, Meili

    2016-07-01

    In this paper, we use a reduced two-compartment neuron model to investigate the interaction between an extracellular subthreshold electric field and synchrony in small-world networks. It is observed that network synchronization is closely related to the strength of the electric field and to the geometric properties of the two-compartment model. Specifically, increasing the electric field induces a gradual improvement in network synchrony, while increasing the geometric factor results in an abrupt decrease in the synchronization of the network. In addition, increasing the electric field can drive the network from asynchronous to synchronous activity when the geometric parameter is set to a given value. Furthermore, it is demonstrated that network synchrony can also be affected by the firing frequency and dynamical bifurcation features of single neurons. These results highlight the effect of a weak field on network synchrony from the viewpoint of a biophysical model and may contribute to further understanding of how electric fields affect network activity.

  17. Distributed Bandpass Filtering and Signal Demodulation in Cortical Network Models

    NASA Astrophysics Data System (ADS)

    McDonnell, Mark D.

    Experimental recordings of cortical activity often exhibit narrowband oscillations at center frequencies on the order of 1-200 Hz. Many neuronal mechanisms are known to give rise to oscillations, but here we focus on a population effect known as sparsely synchronised oscillations. In this effect, individual neurons in a cortical network fire irregularly at slow average spike rates (1-10 Hz), but the population spike rate oscillates at gamma frequencies (greater than 40 Hz) in response to spike bombardment from the thalamus. These cortical networks form recurrent (feedback) synapses. Here we describe a model of sparsely synchronized population oscillations using the language of feedback control engineering, where we treat spiking as noisy feedback. We show, using a biologically realistic model of synaptic current that includes a delayed response to inputs, that the collective behavior of the neurons in the network is like a distributed bandpass filter acting on the network inputs. Consequently, the population response has the character of narrowband random noise, and therefore has an envelope and instantaneous frequency with lowpass characteristics. Given that there exist biologically plausible neuronal mechanisms for demodulating the envelope and instantaneous frequency, we suggest there is potential for similar effects to be exploited in nanoscale electronics implementations of engineered communications receivers.

  18. Simulator for neural networks and action potentials.

    PubMed

    Baxter, Douglas A; Byrne, John H

    2007-01-01

    A key challenge for neuroinformatics is to devise methods for representing, accessing, and integrating vast amounts of diverse and complex data. A useful approach to represent and integrate complex data sets is to develop mathematical models [Arbib (The Handbook of Brain Theory and Neural Networks, pp. 741-745, 2003); Arbib and Grethe (Computing the Brain: A Guide to Neuroinformatics, 2001); Ascoli (Computational Neuroanatomy: Principles and Methods, 2002); Bower and Bolouri (Computational Modeling of Genetic and Biochemical Networks, 2001); Hines et al. (J. Comput. Neurosci. 17, 7-11, 2004); Shepherd et al. (Trends Neurosci. 21, 460-468, 1998); Sivakumaran et al. (Bioinformatics 19, 408-415, 2003); Smolen et al. (Neuron 26, 567-580, 2000); Vadigepalli et al. (OMICS 7, 235-252, 2003)]. Models of neural systems provide quantitative and modifiable frameworks for representing data and analyzing neural function. These models can be developed and solved using neurosimulators. One such neurosimulator is simulator for neural networks and action potentials (SNNAP) [Ziv (J. Neurophysiol. 71, 294-308, 1994)]. SNNAP is a versatile and user-friendly tool for developing and simulating models of neurons and neural networks. SNNAP simulates many features of neuronal function, including ionic currents and their modulation by intracellular ions and/or second messengers, and synaptic transmission and synaptic plasticity. SNNAP is written in Java and runs on most computers. Moreover, SNNAP provides a graphical user interface (GUI) and does not require programming skills. This chapter describes several capabilities of SNNAP and illustrates methods for simulating neurons and neural networks. SNNAP is available at http://snnap.uth.tmc.edu .

  19. Exact subthreshold integration with continuous spike times in discrete-time neural network simulations.

    PubMed

    Morrison, Abigail; Straube, Sirko; Plesser, Hans Ekkehard; Diesmann, Markus

    2007-01-01

    Very large networks of spiking neurons can be simulated efficiently in parallel under the constraint that spike times are bound to an equidistant time grid. Within this scheme, the subthreshold dynamics of a wide class of integrate-and-fire-type neuron models can be integrated exactly from one grid point to the next. However, the loss in accuracy caused by restricting spike times to the grid can have undesirable consequences, which has led to interest in interpolating spike times between the grid points to retrieve an adequate representation of network dynamics. We demonstrate that the exact integration scheme can be combined naturally with off-grid spike events found by interpolation. We show that by exploiting the existence of a minimal synaptic propagation delay, the need for a central event queue is removed, so that the precision of event-driven simulation on the level of single neurons is combined with the efficiency of time-driven global scheduling. Further, for neuron models with linear subthreshold dynamics, even local event queuing can be avoided, resulting in much greater efficiency on the single-neuron level. These ideas are exemplified by two implementations of a widely used neuron model. We present a measure for the efficiency of network simulations in terms of their integration error and show that for a wide range of input spike rates, the novel techniques we present are both more accurate and faster than standard techniques.
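
    For linear subthreshold dynamics the exact grid-to-grid update is a single matrix-vector product with the matrix exponential of the system matrix. A minimal Python sketch for a leaky integrate-and-fire neuron with exponential synaptic currents (parameters illustrative; the off-grid spike-time interpolation that is the paper's main subject is not shown) is:

        import numpy as np
        from scipy.linalg import expm

        # Subthreshold dynamics:  tau_syn * di/dt = -i
        #                         tau_m  * dv/dt = -v + r_m * i
        tau_m, tau_syn, r_m, dt = 10.0, 2.0, 1.0, 0.1
        A = np.array([[-1.0 / tau_syn, 0.0],
                      [r_m / tau_m, -1.0 / tau_m]])
        P = expm(A * dt)  # exact propagator from one grid point to the next

        def step(state, spike_input=0.0):
            """Advance the state (i, v) exactly by dt; grid-aligned incoming
            spikes increment the synaptic current."""
            state = P @ state
            state[0] += spike_input
            return state

    Because P is computed once, each time step costs a constant, small number of multiply-adds per neuron regardless of how much input history has accumulated.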

  20. Bistability induces episodic spike communication by inhibitory neurons in neuronal networks.

    PubMed

    Kazantsev, V B; Asatryan, S Yu

    2011-09-01

    Bistability is one of the important features of nonlinear dynamical systems. In neurodynamics, bistability has been found in basic Hodgkin-Huxley equations describing the cell membrane dynamics. When the neuron is clamped near its threshold, the stable rest potential may coexist with the stable limit cycle describing periodic spiking. However, this effect is often neglected in network computations where the neurons are typically reduced to threshold firing units (e.g., integrate-and-fire models). We found that the bistability may induce spike communication by inhibitory coupled neurons in the spiking network. The communication is realized in the form of episodic discharges with synchronous (correlated) spikes during the episodes. A spiking phase map is constructed to describe the synchronization and to estimate basic spike phase locking modes.

  1. Structured networks support sparse traveling waves in rodent somatosensory cortex.

    PubMed

    Moldakarimov, Samat; Bazhenov, Maxim; Feldman, Daniel E; Sejnowski, Terrence J

    2018-05-15

    Neurons responding to different whiskers are spatially intermixed in the superficial layer 2/3 (L2/3) of the rodent barrel cortex, where a single whisker deflection activates a sparse, distributed neuronal population that spans multiple cortical columns. How the superficial layer of the rodent barrel cortex is organized to support such distributed sensory representations is not clear. In a computer model, we tested the hypothesis that sensory representations in L2/3 of the rodent barrel cortex are formed by activity propagation horizontally within L2/3 from a site of initial activation. The model explained the observed properties of L2/3 neurons, including the low average response probability in the majority of responding L2/3 neurons, and the existence of a small subset of reliably responding L2/3 neurons. Sparsely propagating traveling waves similar to those observed in L2/3 of the rodent barrel cortex occurred in the model only when a subnetwork of strongly connected neurons was immersed in a much larger network of weakly connected neurons.

  2. Low Frequency Activity of Cortical Networks on Microelectrode Arrays is Differentially Altered by Bicuculline and Carbaryl

    EPA Science Inventory

    Thousands of chemicals need to be characterized for their neurotoxicity potential. Neurons grown on microelectrode arrays (MEAs) are an in vitro model used to screen chemicals for functional effects on neuronal networks. Typically, after removal of low frequency components, effec...

  3. Hyperconnectivity and slow synapses during early development of medial prefrontal cortex in a mouse model for mental retardation and autism.

    PubMed

    Testa-Silva, Guilherme; Loebel, Alex; Giugliano, Michele; de Kock, Christiaan P J; Mansvelder, Huibert D; Meredith, Rhiannon M

    2012-06-01

    Neuronal theories of neurodevelopmental disorders (NDDs) of autism and mental retardation propose that abnormal connectivity underlies deficits in attentional processing. We tested this theory by studying unitary synaptic connections between layer 5 pyramidal neurons within medial prefrontal cortex (mPFC) networks in the Fmr1-KO mouse model for mental retardation and autism. In line with predictions from neurocognitive theory, we found that neighboring pyramidal neurons were hyperconnected during a critical period in early mPFC development. Surprisingly, excitatory synaptic connections between Fmr1-KO pyramidal neurons were significantly slower and failed to recover from short-term depression as quickly as wild type (WT) synapses. By 4-5 weeks of mPFC development, connectivity rates were identical for both KO and WT pyramidal neurons and synapse dynamics changed from depressing to facilitating responses with similar properties in both groups. We propose that the early alteration in connectivity and synaptic recovery are tightly linked: using a network model, we show that slower synapses are essential to counterbalance hyperconnectivity in order to maintain a dynamic range of excitatory activity. However, the slow synaptic time constants induce decreased responsiveness to low-frequency stimulation, which may explain deficits in integration and early information processing in attentional neuronal networks in NDDs.

  4. Hyperconnectivity and Slow Synapses during Early Development of Medial Prefrontal Cortex in a Mouse Model for Mental Retardation and Autism

    PubMed Central

    Testa-Silva, Guilherme; Loebel, Alex; Giugliano, Michele; de Kock, Christiaan P.J.; Mansvelder, Huibert D.; Meredith, Rhiannon M.

    2013-01-01

    Neuronal theories of neurodevelopmental disorders (NDDs) of autism and mental retardation propose that abnormal connectivity underlies deficits in attentional processing. We tested this theory by studying unitary synaptic connections between layer 5 pyramidal neurons within medial prefrontal cortex (mPFC) networks in the Fmr1-KO mouse model for mental retardation and autism. In line with predictions from neurocognitive theory, we found that neighboring pyramidal neurons were hyperconnected during a critical period in early mPFC development. Surprisingly, excitatory synaptic connections between Fmr1-KO pyramidal neurons were significantly slower and failed to recover from short-term depression as quickly as wild type (WT) synapses. By 4-5 weeks of mPFC development, connectivity rates were identical for both KO and WT pyramidal neurons and synapse dynamics changed from depressing to facilitating responses with similar properties in both groups. We propose that the early alteration in connectivity and synaptic recovery are tightly linked: using a network model, we show that slower synapses are essential to counterbalance hyperconnectivity in order to maintain a dynamic range of excitatory activity. However, the slow synaptic time constants induce decreased responsiveness to low-frequency stimulation, which may explain deficits in integration and early information processing in attentional neuronal networks in NDDs. PMID:21856714

  5. Phase synchronization motion and neural coding in dynamic transmission of neural information.

    PubMed

    Wang, Rubin; Zhang, Zhikang; Qu, Jingyi; Cao, Jianting

    2011-07-01

    To explore the dynamic characteristics of neural coding in the transmission of neural information in the brain, we propose in this paper a neural network model consisting of three neuronal populations, based on the theory of stochastic phase dynamics. With this model, phase synchronization and neural coding under spontaneous activity and under stimulation are examined for varying network structures. Our analysis shows that, under spontaneous activity, the characteristics of phase neural coding are unrelated to the number of neurons participating in firing within the neuronal populations. The numerical simulations support the existence of sparse coding within the brain, verify the crucial importance of the magnitudes of the coupling coefficients in neural information processing, and show that serial and parallel couplings yield completely different information processing capabilities in neural information transmission. The results also show that, under external stimulation, the larger the number of neurons in a neuronal population, the more the stimulation influences the phase synchronization motion and the evolution of neural coding in other neuronal populations. We verify numerically the experimental result in neurobiology that reducing the coupling coefficient between neuronal populations enhances lateral inhibition in neural networks, an enhancement equivalent to lowering the neuronal excitability threshold. The neuronal populations thus tend to react more strongly under the same stimulation, and more neurons become excited, leading to more neurons participating in neural coding and phase synchronization motion.
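
    The populations' phase dynamics are not given in closed form in this abstract, so the Python sketch below is only a generic stand-in: noisy, globally coupled phase oscillators whose Kuramoto order parameter r(t) serves as the phase-synchronization index (all parameters are illustrative).

        import numpy as np

        rng = np.random.default_rng(2)

        def order_parameter_course(n=100, k=1.0, noise=0.2, dt=0.01, steps=5000):
            """Simulate noisy mean-field phase oscillators and return r(t)."""
            omega = rng.normal(1.0, 0.1, n)        # heterogeneous frequencies
            theta = rng.uniform(0.0, 2.0 * np.pi, n)
            r = np.empty(steps)
            for s in range(steps):
                z = np.exp(1j * theta).mean()      # complex order parameter
                r[s] = abs(z)
                drift = omega + k * np.imag(z * np.exp(-1j * theta))
                theta += dt * drift + np.sqrt(dt) * noise * rng.normal(size=n)
            return r

    Raising the coupling k (or lowering the phase noise) raises the stationary value of r, the analogue of stronger phase synchronization motion in the paper's terminology.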

  6. Model of the Reticular Formation of the Brainstem Based on Glial-Neuronal Interactions.

    PubMed

    Mitterauer, Bernhard J

    A new model of the reticular formation of the brainstem is proposed. It incorporates both the neuronal and the glial cell systems and is thus biomimetically founded. The reticular formation generates modes of behavior (sleeping, eating, etc.) and commands all behavior according to the most appropriate environmental information. The reticular formation works on an abductive logic and is dominated by a redundancy of potential command. Formally, a special mode of behavior is represented by a comprehensive cycle (Hamilton loop) located in the glial network (syncytium) and embodied in gap junctional plaques. Whereas a computer simulation has already been presented for the neuronal network of the reticular formation, here the devices necessary for computation in the whole network are outlined.

  7. Theory of correlation in a network with synaptic depression

    NASA Astrophysics Data System (ADS)

    Igarashi, Yasuhiko; Oizumi, Masafumi; Okada, Masato

    2012-01-01

    Synaptic depression affects not only the mean responses of neurons but also the correlation of response variability in neural populations. Although previous studies have constructed a theory of correlation in a spiking neuron model within the mean-field theory framework, synaptic depression has not been taken into consideration. In this study, we extended the previous theoretical framework to spiking neuron models with short-term synaptic depression. On the basis of this theory, we analytically calculated neural correlations in a ring attractor network with Mexican-hat-type connectivity, which was used as a model of the primary visual cortex. The results revealed that synaptic depression reduces neural correlation, which could be beneficial for sensory coding. Furthermore, our study opens the way for theoretical studies on the effect of interaction changes on the linear response function in large stochastic networks.

  8. Stochastic IMT (Insulator-Metal-Transition) Neurons: An Interplay of Thermal and Threshold Noise at Bifurcation

    PubMed Central

    Parihar, Abhinav; Jerry, Matthew; Datta, Suman; Raychowdhury, Arijit

    2018-01-01

    Artificial neural networks can harness stochasticity in multiple ways to enable a vast class of computationally powerful models. Boltzmann machines and other stochastic neural networks have been shown to outperform their deterministic counterparts by allowing dynamical systems to escape local energy minima. Electronic implementation of such stochastic networks is currently limited to the addition of algorithmic noise to digital machines, which is inherently inefficient, although recent efforts to harness physical noise in devices for stochasticity have shown promise. To succeed in fabricating electronic neuromorphic networks, we need experimental evidence of devices with measurable and controllable stochasticity, complemented by the development of reliable statistical models of the observed stochasticity. The current research literature has sparse evidence of the former and lacks the latter entirely. This motivates the current article, in which we demonstrate a stochastic neuron using an insulator-metal-transition (IMT) device, based on an electrically induced phase transition, in series with a tunable resistance. We show that an IMT neuron has dynamics similar to a piecewise linear FitzHugh-Nagumo (FHN) neuron and incorporates all characteristics of a spiking neuron in the device phenomena. We experimentally demonstrate spontaneous stochastic spiking along with electrically controllable firing probabilities using vanadium dioxide (VO2) based IMT neurons, which show a sigmoid-like transfer function. The stochastic spiking is explained by two noise sources: thermal noise and threshold fluctuations, which act as precursors of bifurcation. As such, the IMT neuron is modeled as an Ornstein-Uhlenbeck (OU) process with a fluctuating boundary, resulting in transfer curves that closely match experiments. The moments of interspike intervals are calculated analytically by extending the first-passage-time (FPT) models for the OU process to include a fluctuating boundary. We find that the coefficient of variation of interspike intervals depends on the relative proportion of thermal and threshold noise, with threshold noise the dominant source in the current experimental demonstrations. As one of the first comprehensive studies of stochastic neuron hardware and its statistical properties, this article should enable efficient implementation of a large class of neuro-mimetic networks and algorithms. PMID:29670508
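
    The statistical picture can be reproduced in a few lines (a simulation sketch with illustrative parameters, not the paper's analytical FPT treatment): an Ornstein-Uhlenbeck membrane variable (thermal noise) is integrated by Euler-Maruyama against a threshold that itself fluctuates (threshold noise), and the interspike-interval mean and coefficient of variation are estimated from the resulting spike times.

        import numpy as np

        rng = np.random.default_rng(1)
        tau, mu, sigma = 1.0, 0.8, 0.3           # OU time constant, drift target, thermal noise
        th_mean, th_sd, tau_th = 1.0, 0.1, 0.1   # fluctuating-threshold parameters
        v_reset, dt = 0.0, 1e-3

        v, thresh = v_reset, th_mean
        isis, t_last, t = [], 0.0, 0.0
        while len(isis) < 500:
            # membrane OU process: dv = (mu - v)/tau dt + sigma dW
            v += (mu - v) / tau * dt + sigma * np.sqrt(dt) * rng.normal()
            # threshold fluctuates as its own fast OU process around th_mean
            thresh += (th_mean - thresh) / tau_th * dt + th_sd * np.sqrt(dt) * rng.normal()
            t += dt
            if v >= thresh:                      # first passage: record ISI and reset
                isis.append(t - t_last)
                t_last, v = t, v_reset

        isis = np.array(isis)
        print(f"mean ISI = {isis.mean():.3f}, CV = {isis.std() / isis.mean():.3f}")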

  9. Properties of Neurons in External Globus Pallidus Can Support Optimal Action Selection

    PubMed Central

    Bogacz, Rafal; Martin Moraud, Eduardo; Abdi, Azzedine; Magill, Peter J.; Baufreton, Jérôme

    2016-01-01

    The external globus pallidus (GPe) is a key nucleus within basal ganglia circuits that are thought to be involved in action selection. A class of computational models assumes that, during action selection, the basal ganglia compute for all actions available in a given context the probabilities that they should be selected. These models suggest that a network of GPe and subthalamic nucleus (STN) neurons computes the normalization term in Bayes’ equation. In order to perform such computation, the GPe needs to send feedback to the STN equal to a particular function of the activity of STN neurons. However, the complex form of this function makes it unlikely that individual GPe neurons, or even a single GPe cell type, could compute it. Here, we demonstrate how this function could be computed within a network containing two types of GABAergic GPe projection neuron, so-called ‘prototypic’ and ‘arkypallidal’ neurons, that have different response properties in vivo and distinct connections. We compare our model predictions with the experimentally reported connectivity and input-output functions (f-I curves) of the two populations of GPe neurons. We show that, together, these dichotomous cell types fulfil the requirements necessary to compute the function needed for optimal action selection. We conclude that, by virtue of their distinct response properties and connectivities, a network of arkypallidal and prototypic GPe neurons comprises a neural substrate capable of supporting the computation of the posterior probabilities of actions. PMID:27389780

  10. Phase-locked cluster oscillations in periodically forced integrate-and-fire-or-burst neuronal populations.

    PubMed

    Langdon, Angela J; Breakspear, Michael; Coombes, Stephen

    2012-12-01

    The minimal integrate-and-fire-or-burst neuron model succinctly describes both tonic firing and postinhibitory rebound bursting of thalamocortical cells in the sensory relay. Networks of integrate-and-fire-or-burst (IFB) neurons with slow inhibitory synaptic interactions have been shown to support stable rhythmic states, including globally synchronous and cluster oscillations, in which network-mediated inhibition cyclically generates bursting in coherent subgroups of neurons. In this paper, we introduce a reduced IFB neuronal population model to study synchronization of inhibition-mediated oscillatory bursting states to periodic excitatory input. Using numerical methods, we demonstrate the existence and stability of 1:1 phase-locked bursting oscillations in the sinusoidally forced IFB neuronal population model. Phase locking is shown to arise when periodic excitation is sufficient to pace the onset of bursting in an IFB cluster without counteracting the inhibitory interactions necessary for burst generation. Phase-locked bursting states are thus found to destabilize when periodic excitation increases in strength or frequency. Further study of the IFB neuronal population model with pulse-like periodic excitatory input illustrates that this synchronization mechanism generalizes to a broad range of n:m phase-locked bursting states across both globally synchronous and clustered oscillatory regimes.

  11. Intrinsic Neuronal Properties Switch the Mode of Information Transmission in Networks

    PubMed Central

    Gjorgjieva, Julijana; Mease, Rebecca A.; Moody, William J.; Fairhall, Adrienne L.

    2014-01-01

    Diverse ion channels and their dynamics endow single neurons with complex biophysical properties. These properties determine the heterogeneity of cell types that make up the brain, as constituents of neural circuits tuned to perform highly specific computations. How do biophysical properties of single neurons impact network function? We study a set of biophysical properties that emerge in cortical neurons during the first week of development, eventually allowing these neurons to adaptively scale the gain of their response to the amplitude of the fluctuations they encounter. During the same time period, these same neurons participate in large-scale waves of spontaneously generated electrical activity. We investigate the potential role of experimentally observed changes in intrinsic neuronal properties in determining the ability of cortical networks to propagate waves of activity. We show that such changes can strongly affect the ability of multi-layered feedforward networks to represent and transmit information on multiple timescales. With properties modeled on those observed at early stages of development, neurons are relatively insensitive to rapid fluctuations and tend to fire synchronously in response to wave-like events of large amplitude. Following developmental changes in voltage-dependent conductances, these same neurons become efficient encoders of fast input fluctuations over a few layers, but lose the ability to transmit slower, population-wide input variations across many layers. Depending on the neurons' intrinsic properties, noise plays different roles in modulating neuronal input-output curves, which can dramatically impact network transmission. The developmental change in intrinsic properties supports a transformation of a network's function from the propagation of network-wide information to one in which computations are scaled to local activity. This work underscores the significance of simple changes in conductance parameters in governing how neurons represent and propagate information, and suggests a role for background synaptic noise in switching the mode of information transmission. PMID:25474701

  12. A neuro-computational model of economic decisions.

    PubMed

    Rustichini, Aldo; Padoa-Schioppa, Camillo

    2015-09-01

    Neuronal recordings and lesion studies indicate that key aspects of economic decisions take place in the orbitofrontal cortex (OFC). Previous work identified in this area three groups of neurons encoding the offer value, the chosen value, and the identity of the chosen good. An important and open question is whether and how decisions could emerge from a neural circuit formed by these three populations. Here we adapted a biophysically realistic neural network previously proposed for perceptual decisions (Wang XJ. Neuron 36: 955-968, 2002; Wong KF, Wang XJ. J Neurosci 26: 1314-1328, 2006). The domain of economic decisions is significantly broader than that for which the model was originally designed, yet the model performed remarkably well. The input and output nodes of the network were naturally mapped onto two groups of cells in OFC. Surprisingly, the activity of interneurons in the network closely resembled that of the third group of cells, namely, chosen value cells. The model reproduced several phenomena related to the neuronal origins of choice variability. It also generated testable predictions on the excitatory/inhibitory nature of different neuronal populations and on their connectivity. Some aspects of the empirical data were not reproduced, but simple extensions of the model could overcome these limitations. These results yield a biologically credible model for the neuronal mechanisms of economic decisions. They demonstrate that choices could emerge from the activity of cells in the OFC, suggesting that chosen value cells directly participate in the decision process. Importantly, Wang's model provides a platform to investigate the implications of neuroscience results for economic theory. Copyright © 2015 the American Physiological Society.
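
    The decision mechanism of the attractor model cited above can be conveyed with a compact sketch of its reduced two-variable form (in the spirit of Wong & Wang, 2006); the parameter values below are illustrative choices, not the ones fitted in the OFC study. Two synaptic gating variables excite themselves and inhibit each other until one population wins, signaling the choice.

        import numpy as np

        def H(x, a=270.0, b=108.0, d=0.154):
            """Effective transfer function of the reduced model (rate in Hz)."""
            return (a * x - b) / (1.0 - np.exp(-d * (a * x - b)))

        J_same, J_other, I0 = 0.2609, 0.0497, 0.3255   # recurrent couplings, background (nA)
        J_ext, mu0, coh = 5.2e-4, 30.0, 0.2            # external drive and input imbalance
        gamma, tau_s, dt = 0.641, 0.1, 1e-3            # gating kinetics (s)
        rng = np.random.default_rng(2)

        S = np.array([0.1, 0.1])                       # synaptic gating variables
        for _ in range(int(2.0 / dt)):                 # 2 s of competition
            I_ext = J_ext * mu0 * np.array([1 + coh, 1 - coh])
            x = J_same * S - J_other * S[::-1] + I0 + I_ext \
                + 0.02 * rng.normal(size=2)            # simple per-step input noise
            S += dt * (-S / tau_s + (1.0 - S) * gamma * H(x))
            S = np.clip(S, 0.0, 1.0)

        print("winner:", "option 1" if S[0] > S[1] else "option 2", "| S =", np.round(S, 3))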

  13. A unifying view of synchronization for data assimilation in complex nonlinear networks

    NASA Astrophysics Data System (ADS)

    Abarbanel, Henry D. I.; Shirman, Sasha; Breen, Daniel; Kadakia, Nirag; Rey, Daniel; Armstrong, Eve; Margoliash, Daniel

    2017-12-01

    Networks of nonlinear systems contain unknown parameters and dynamical degrees of freedom that may not be observable with existing instruments. From observable state variables, we want to estimate the connectivity of a model of such a network and determine the full state of the model at the termination of a temporal observation window during which measurements transfer information to a model of the network. The model state at the termination of a measurement window acts as an initial condition for predicting the future behavior of the network. This allows the validation (or invalidation) of the model as a representation of the dynamical processes producing the observations. Once the model has been tested against new data, it may be utilized as a predictor of responses to innovative stimuli or forcing. We describe a general framework for the tasks involved in the "inverse" problem of determining properties of a model built to represent measured output from physical, biological, or other processes when the measurements are noisy, the model has errors, and the state of the model is unknown when measurements begin. This framework is called statistical data assimilation and is the best one can do in estimating model properties through the use of the conditional probability distributions of the model state variables, conditioned on observations. There is a very broad arena of applications of the methods described. These include numerical weather prediction, properties of nonlinear electrical circuitry, and determining the biophysical properties of functional networks of neurons. Illustrative examples will be given of (1) estimating the connectivity among neurons with known dynamics in a network of unknown connectivity, and (2) estimating the biophysical properties of individual neurons in vitro taken from a functional network underlying vocalization in songbirds.
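
    The synchronization view of data assimilation can be illustrated with a toy "nudging" experiment (a sketch under simplifying assumptions, not the statistical framework described above): a FitzHugh-Nagumo "data" neuron is observed only through its voltage v, and a model copy started from the wrong state is coupled to the observations through a term k(v_data - v_model); when the coupling transfers enough information, the unobserved recovery variable w converges as well.

        def fhn(v, w, I=0.5, a=0.7, b=0.8, tau=12.5):
            """FitzHugh-Nagumo right-hand side (standard parameter names)."""
            return v - v**3 / 3 - w + I, (v + a - b * w) / tau

        dt, steps, k = 0.01, 40000, 2.0      # k is the nudging (coupling) gain
        v_d, w_d = -1.0, 1.0                 # "data" system
        v_m, w_m = 1.5, -0.5                 # model copy with wrong initial state

        for _ in range(steps):
            dv_d, dw_d = fhn(v_d, w_d)
            dv_m, dw_m = fhn(v_m, w_m)
            v_m += (dv_m + k * (v_d - v_m)) * dt   # only v is observed and nudged
            w_m += dw_m * dt
            v_d += dv_d * dt
            w_d += dw_d * dt

        print(f"unobserved-state error |w_d - w_m| = {abs(w_d - w_m):.2e}")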

  14. Bifurcation of synchronous oscillations into torus in a system of two reciprocally inhibitory silicon neurons: experimental observation and modeling.

    PubMed

    Bondarenko, Vladimir E; Cymbalyuk, Gennady S; Patel, Girish; Deweerth, Stephen P; Calabrese, Ronald L

    2004-12-01

    Oscillatory activity in the central nervous system is associated with various functions, like motor control, memory formation, binding, and attention. Quasiperiodic oscillations are rarely discussed in the neurophysiological literature, yet they may play a role in the nervous system both during normal function and disease. Here we use a physical system and a model to explore scenarios for how quasiperiodic oscillations might arise in neuronal networks. An oscillatory system of two mutually inhibitory neuronal units is a ubiquitous network module found in nervous systems and is called a half-center oscillator. Previously we created a half-center oscillator of two identical oscillatory silicon (analog Very Large Scale Integration) neurons and developed a mathematical model describing its dynamics. In the mathematical model, we have shown that an in-phase limit cycle becomes unstable through a subcritical torus bifurcation. However, the existence of this torus bifurcation in the experimental silicon two-neuron system had not been rigorously demonstrated or investigated. Here we demonstrate the torus predicted by the model for the silicon implementation of a half-center oscillator using complex time series analysis, including bifurcation diagrams, mapping techniques, correlation functions, amplitude spectra, and correlation dimensions, and we investigate how the properties of the quasiperiodic oscillations depend on the strengths of coupling between the silicon neurons. The potential advantages and disadvantages of quasiperiodic oscillations (torus) for biological neural systems and artificial neural networks are discussed.

  15. Model Of Neural Network With Creative Dynamics

    NASA Technical Reports Server (NTRS)

    Zak, Michail; Barhen, Jacob

    1993-01-01

    Paper presents analysis of mathematical model of one-neuron/one-synapse neural network featuring coupled activation and learning dynamics and parametrical periodic excitation. Demonstrates self-programming, partly random behavior of suitably designed neural network; this behavior is believed to be related to spontaneity and creativity of biological neural networks.

  16. Learning alters theta amplitude, theta-gamma coupling and neuronal synchronization in inferotemporal cortex.

    PubMed

    Kendrick, Keith M; Zhan, Yang; Fischer, Hanno; Nicol, Alister U; Zhang, Xuejuan; Feng, Jianfeng

    2011-06-09

    How oscillatory brain rhythms alone, or in combination, influence cortical information processing to support learning has yet to be fully established. Local field potential and multi-unit neuronal activity recordings were made from 64-electrode arrays in the inferotemporal cortex of conscious sheep during and after visual discrimination learning of face or object pairs. A neural network model has been developed to simulate and aid functional interpretation of learning-evoked changes. Following learning, the amplitude of theta (4-8 Hz), but not gamma (30-70 Hz), oscillations was increased, as was the ratio of theta to gamma. Over 75% of electrodes showed significant coupling between theta phase and gamma amplitude (theta-nested gamma). The strength of this coupling was also increased following learning, and this was not simply a consequence of increased theta amplitude. Actual discrimination performance was significantly correlated with theta and theta-gamma coupling changes. Neuronal activity was phase-locked with theta, but learning had no effect on firing rates or on the magnitude or latencies of visual evoked potentials during stimuli. The neural network model developed showed that a combination of fast and slow inhibitory interneurons could generate theta-nested gamma. Increasing N-methyl-D-aspartate receptor sensitivity in the model produced changes similar to those seen in inferotemporal cortex after learning. The model showed that these changes could potentiate the firing of downstream neurons by a temporal desynchronization of excitatory neuron output without increasing the firing frequencies of the latter. This desynchronization effect was confirmed in IT neuronal activity following learning, and its magnitude was correlated with discrimination performance. Face discrimination learning produces significant increases in both theta amplitude and the strength of theta-gamma coupling in the inferotemporal cortex which are correlated with behavioral performance. A network model which can reproduce these changes suggests that a key function of such learning-evoked alterations in theta and theta-nested gamma activity may be increased temporal desynchronization in neuronal firing, leading to optimal timing of inputs to downstream neural networks and potentiating their responses. In this way learning can produce potentiation in neural networks simply through altering the temporal pattern of their inputs.
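
    Theta-nested gamma of the kind reported here is commonly quantified with a mean-vector-length phase-amplitude coupling measure (a generic sketch, not the authors' analysis pipeline); the synthetic signal below has a 50 Hz gamma component whose amplitude rides the phase of a 6 Hz theta rhythm.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        fs, T = 1000, 10.0
        t = np.arange(0, T, 1 / fs)
        rng = np.random.default_rng(3)

        theta = np.sin(2 * np.pi * 6 * t)
        gamma = 0.5 * (1 + theta) * np.sin(2 * np.pi * 50 * t)   # theta-nested gamma
        lfp = theta + gamma + 0.2 * rng.normal(size=t.size)

        def bandpass(x, lo, hi):
            b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
            return filtfilt(b, a, x)

        phase = np.angle(hilbert(bandpass(lfp, 4, 8)))    # theta phase
        amp = np.abs(hilbert(bandpass(lfp, 30, 70)))      # gamma amplitude envelope

        # mean vector length of amplitude-weighted phases (Canolty-style index)
        mi = np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)
        print(f"theta-gamma modulation index = {mi:.3f}")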

  17. A Rotational Motion Perception Neural Network Based on Asymmetric Spatiotemporal Visual Information Processing.

    PubMed

    Hu, Bin; Yue, Shigang; Zhang, Zhuhong

    All complex motion patterns can be decomposed into several elements, including translation, expansion/contraction, and rotational motion. In biological vision systems, scientists have found that specific types of visual neurons have specific preferences to each of the three motion elements. There are computational models on translation and expansion/contraction perceptions; however, little has been done in the past to create computational models for rotational motion perception. To fill this gap, we proposed a neural network that utilizes a specific spatiotemporal arrangement of asymmetric lateral inhibited direction selective neural networks (DSNNs) for rotational motion perception. The proposed neural network consists of two parts: presynaptic and postsynaptic. In the presynaptic part, there are a number of lateral inhibited DSNNs to extract directional visual cues. In the postsynaptic part, similar to the arrangement of the directional columns in the cerebral cortex, these direction selective neurons are arranged in a cyclic order to perceive rotational motion cues. In the postsynaptic network, the delayed excitation from each direction selective neuron is multiplied by the gathered excitation from this neuron and its unilateral counterparts depending on which rotation, clockwise (cw) or counter-cw (ccw), to perceive. Systematic experiments under various conditions and settings have been carried out and validated the robustness and reliability of the proposed neural network in detecting cw or ccw rotational motion. This research is a critical step further toward dynamic visual information processing.

  18. Bayesian Computation Emerges in Generic Cortical Microcircuits through Spike-Timing-Dependent Plasticity

    PubMed Central

    Nessler, Bernhard; Pfeiffer, Michael; Buesing, Lars; Maass, Wolfgang

    2013-01-01

    The principles by which networks of neurons compute, and how spike-timing dependent plasticity (STDP) of synaptic weights generates and maintains their computational function, are unknown. Preceding work has shown that soft winner-take-all (WTA) circuits, where pyramidal neurons inhibit each other via interneurons, are a common motif of cortical microcircuits. We show through theoretical analysis and computer simulations that Bayesian computation is induced in these network motifs through STDP in combination with activity-dependent changes in the excitability of neurons. The fundamental components of this emergent Bayesian computation are priors that result from adaptation of neuronal excitability and implicit generative models for hidden causes that are created in the synaptic weights through STDP. In fact, a surprising result is that STDP is able to approximate a powerful principle for fitting such implicit generative models to high-dimensional spike inputs: Expectation Maximization. Our results suggest that the experimentally observed spontaneous activity and trial-to-trial variability of cortical neurons are essential features of their information processing capability, since their functional role is to represent probability distributions rather than static neural codes. Furthermore, they suggest networks of Bayesian computation modules as a new model for distributed information processing in the cortex. PMID:23633941

  1. Synchronous neural networks of nonlinear threshold elements with hysteresis.

    PubMed

    Wang, L; Ross, J

    1990-02-01

    We use Hoffmann's suggestion [Hoffmann, G. W. (1986) J. Theor. Biol. 122, 33-67] of hysteresis at the single-neuron level and determine its consequences in a synchronous network made of such neurons. We show that the overall retrieval ability in the presence of noise and the memory capacity of the network in the present model are better than in conventional models without such hysteresis. Second-order interaction further improves the retrieval ability of the network and causes hysteresis in the retrieval-noise curve for any arbitrary width of the bistable region. The convergence rate is increased by the hysteresis at high noise levels but is reduced by the hysteresis at low noise levels. Explicit formulae are given for calculations of average final convergence and noise threshold as functions of the width of the bistable region. There is neurophysiological evidence for hysteresis in single neurons, and we propose optical implementations of the present model by using ZnSe interference filters to test the predictions of the theory.
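
    A synchronous Hopfield-style network with hysteretic threshold units captures the model's basic ingredient (a sketch; the Hebbian weights, network size and bistable half-width h below are illustrative): a unit switches only when its local field leaves the bistable region, and otherwise retains its previous state.

        import numpy as np

        rng = np.random.default_rng(4)
        N, P, h = 200, 10, 0.3                   # neurons, stored patterns, hysteresis half-width

        patterns = rng.choice([-1, 1], size=(P, N))
        W = (patterns.T @ patterns) / N          # Hebbian weight matrix
        np.fill_diagonal(W, 0.0)

        def recall(cue, steps=30):
            s = cue.copy()
            for _ in range(steps):               # synchronous updates
                field = W @ s
                switch = np.abs(field) > h       # only fields outside the bistable region switch
                s = np.where(switch, np.sign(field), s)
            return s

        cue = patterns[0].copy()                 # noisy cue: flip 25% of the bits
        flip_idx = rng.choice(N, N // 4, replace=False)
        cue[flip_idx] *= -1

        overlap = recall(cue) @ patterns[0] / N
        print(f"overlap with stored pattern after recall: {overlap:.2f}")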

  2. Computing by robust transience: How the fronto-parietal network performs sequential category-based decisions

    PubMed Central

    Chaisangmongkon, Warasinee; Swaminathan, Sruthi K.; Freedman, David J.; Wang, Xiao-Jing

    2017-01-01

    Decision making involves dynamic interplay between internal judgements and external perception, which has been investigated in delayed match-to-category (DMC) experiments. Our analysis of neural recordings shows that, during DMC tasks, LIP and PFC neurons demonstrate mixed, time-varying, and heterogeneous selectivity, but previous theoretical work has not established the link between these neural characteristics and population-level computations. We trained a recurrent network model to perform DMC tasks and found that the model can remarkably reproduce key features of neuronal selectivity at the single-neuron and population levels. Analysis of the trained networks elucidates that robust transient trajectories of the neural population are the key driver of sequential categorical decisions. The directions of trajectories are governed by network self-organized connectivity, defining a ‘neural landscape’, consisting of a task-tailored arrangement of slow states and dynamical tunnels. With this model, we can identify functionally relevant circuit motifs and generalize the framework to solve other categorization tasks. PMID:28334612

  3. Efficient self-organizing multilayer neural network for nonlinear system modeling.

    PubMed

    Han, Hong-Gui; Wang, Li-Dan; Qiao, Jun-Fei

    2013-07-01

    It has been shown extensively that the dynamic behaviors of a neural system are strongly influenced by the network architecture and the learning process. To establish an artificial neural network (ANN) with a self-organizing architecture and a suitable learning algorithm for nonlinear system modeling, an automatic axon-neural network (AANN) is investigated in the following respects. First, the network architecture is constructed automatically, changing both the number of hidden neurons and the topology of the neural network during the training process. The adaptive connecting-and-pruning algorithm (ACP) introduced here is a type of mixed-mode operation, equivalent to pruning or adding connections between neurons as well as inserting required neurons directly. Second, the weights are adjusted using a feedforward computation (FC) to obtain the gradient information during learning. Unlike most previous studies, AANN is able to self-organize both the architecture and the weights, improving network performance. The proposed AANN has also been tested on a number of benchmark problems, ranging from nonlinear function approximation to nonlinear system modeling. The experimental results show that AANN can perform better than some existing neural networks. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  4. A Physiological Neural Controller of a Muscle Fiber Oculomotor Plant in Horizontal Monkey Saccades

    PubMed Central

    Enderle, John D.

    2014-01-01

    A neural network model of biophysical neurons in the midbrain is presented to drive a muscle fiber oculomotor plant during horizontal monkey saccades. Neural circuitry, including omnipause neuron, premotor excitatory and inhibitory burst neurons, long lead burst neuron, tonic neuron, interneuron, abducens nucleus, and oculomotor nucleus, is developed to examine saccade dynamics. The time-optimal control strategy by realization of agonist and antagonist controller models is investigated. In consequence, each agonist muscle fiber is stimulated by an agonist neuron, while an antagonist muscle fiber is unstimulated by a pause and step from the antagonist neuron. It is concluded that the neural network is constrained by a minimum duration of the agonist pulse and that the most dominant factor in determining the saccade magnitude is the number of active neurons for the small saccades. For the large saccades, however, the duration of agonist burst firing significantly affects the control of saccades. The proposed saccadic circuitry establishes a complete model of saccade generation since it not only includes the neural circuits at both the premotor and motor stages of the saccade generator, but also uses a time-optimal controller to yield the desired saccade magnitude. PMID:24944832

  5. One-to-one neuron-electrode interfacing.

    PubMed

    Greenbaum, Alon; Anava, Sarit; Ayali, Amir; Shein, Mark; David-Pur, Moshe; Ben-Jacob, Eshel; Hanein, Yael

    2009-09-15

    The question of neuronal network development and organization is a principal one, closely related to aspects of neuronal and network form-function interactions. In-vitro two-dimensional neuronal cultures have proved to be an attractive and successful model for the study of these questions. Research is constrained, however, by the search for techniques aimed at culturing stable networks whose electrical activity can be reliably and consistently monitored. A simple approach to forming small interconnected neuronal circuits while achieving one-to-one neuron-electrode interfacing is presented. Locust neurons were cultured on a novel bio-chip consisting of carbon-nanotube multi-electrode arrays. The cells self-organized to position themselves in close proximity to the bio-chip electrodes. The organization of the cells on the electrodes was analyzed using time-lapse microscopy, fluorescence imaging and scanning electron microscopy. Electrical recordings from well-identified cells are presented and discussed. The unique properties of the bio-chip and the specific neuron-nanotube interactions, together with the use of relatively large insect ganglion cells, allowed long-term stabilization (as long as 10 days) of a predefined neural network topology as well as high-fidelity electrical recording of individual neuron firing. This novel preparation opens ample opportunity for future investigation into key neurobiological questions and principles.

  6. Unsupervised discrimination of patterns in spiking neural networks with excitatory and inhibitory synaptic plasticity

    PubMed Central

    Srinivasa, Narayan; Cho, Youngkwan

    2014-01-01

    A spiking neural network model is described for learning to discriminate among spatial patterns in an unsupervised manner. The network anatomy consists of source neurons that are activated by external inputs, a reservoir that resembles a generic cortical layer with an excitatory-inhibitory (EI) network, and a sink layer of neurons for readout. Synaptic plasticity in the form of STDP is imposed on all the excitatory and inhibitory synapses at all times. While long-term excitatory STDP enables sparse and efficient learning of the salient features in inputs, inhibitory STDP enables this learning to be stable by establishing a balance between excitatory and inhibitory currents at each neuron in the network. The synaptic weights between source and reservoir neurons form a basis set for the input patterns. The neural trajectories generated in the reservoir due to input stimulation and lateral connections between reservoir neurons can be read out by the sink layer neurons. This activity is used for adaptation of synapses between reservoir and sink layer neurons. A new measure called the discriminability index (DI) is introduced to compute whether the network can discriminate between old patterns already presented in an initial training session. The DI is also used to compute whether the network adapts to new patterns without losing its ability to discriminate among old patterns. The final outcome is that the network is able to correctly discriminate between all patterns, both old and new. This result holds as long as inhibitory synapses employ STDP to continuously enable current balance in the network. The results suggest a possible direction for future investigation into how spiking neural networks could address the stability-plasticity question despite having continuous synaptic plasticity. PMID:25566045
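
    The pair-based exponential STDP rule assumed throughout such models is compact enough to state directly (a sketch with illustrative amplitudes and time constants, not the paper's exact parameterization): a presynaptic spike preceding a postsynaptic spike potentiates the synapse, the reverse order depresses it, and the magnitude decays exponentially with the spike-time difference.

        import numpy as np

        def stdp_dw(dt_ms, A_plus=0.01, A_minus=0.012, tau_plus=20.0, tau_minus=20.0):
            """Weight change for one pre/post spike pair; dt_ms = t_post - t_pre."""
            if dt_ms >= 0:
                return A_plus * np.exp(-dt_ms / tau_plus)     # pre before post: potentiation
            return -A_minus * np.exp(dt_ms / tau_minus)       # post before pre: depression

        pre = np.array([10.0, 50.0, 90.0])     # presynaptic spike times (ms)
        post = np.array([15.0, 45.0, 95.0])    # postsynaptic spike times (ms)

        w = 0.5
        for tp in pre:                          # all-to-all spike pairing
            for tq in post:
                w += stdp_dw(tq - tp)
        print(f"weight after pairing: {w:.4f}")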

  7. Bayesian Networks Predict Neuronal Transdifferentiation.

    PubMed

    Ainsworth, Richard I; Ai, Rizi; Ding, Bo; Li, Nan; Zhang, Kai; Wang, Wei

    2018-05-30

    We employ the language of Bayesian networks to systematically construct gene-regulation topologies from deep-sequencing single-nucleus RNA-Seq data for human neurons. From the perspective of the cell-state potential landscape, we identify attractors that correspond closely to different neuron subtypes. Attractors are also recovered for cell states from an independent data set, confirming our model's accurate description of global genetic regulation across differing cell types of the neocortex (not included in the training data). Our model recovers experimentally confirmed genetic regulations, and community analysis reveals genetic associations in common pathways. Via a comprehensive scan of all theoretical three-gene perturbations of gene knockout and overexpression, we discover novel neuronal transdifferentiation recipes (including perturbations of SATB2, GAD1, POU6F2 and ADARB2) for excitatory projection neuron and inhibitory interneuron subtypes. Copyright © 2018, G3: Genes, Genomes, Genetics.

  8. Beyond blow-up in excitatory integrate and fire neuronal networks: Refractory period and spontaneous activity.

    PubMed

    Cáceres, María J; Perthame, Benoît

    2014-06-07

    The Network Noisy Leaky Integrate and Fire equation is among the simplest models allowing for a self-consistent description of neural networks, and it gives a rule to determine the probability of finding a neuron at the potential v. However, its mathematical structure is still poorly understood and very few results about its solutions are available. Among them, a recent result shows blow-up in finite time for fully excitatory networks. The intuitive explanation is that each firing neuron induces a discharge of the others; this increases the activity and consequently the discharge rate of the full network. In order to better understand the details of the phenomenon and show that the equation is more complex and fruitful than expected, we analyze the model further. We extend the finite-time blow-up result to the case in which neurons, after firing, enter a refractory state for a given period of time. We also show that spontaneous activity may occur when, additionally, randomness is included in the firing potential VF, in regimes where blow-up occurs for a fixed value of VF. Copyright © 2014 Elsevier Ltd. All rights reserved.
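
    For reference, the equation discussed above is usually written in this literature as follows (a sketch of the standard form; the diffusion coefficient a and connectivity parameter b should be checked against the paper, where a may also depend on N(t)):

        \partial_t \rho(v,t) + \partial_v\left[(-v + b\,N(t))\,\rho(v,t)\right]
            - a\,\partial^2_{vv}\,\rho(v,t) = N(t)\,\delta(v - V_R), \qquad v \le V_F,

    with the firing rate N(t) determined self-consistently by the flux at the firing potential,

        N(t) = -a\,\partial_v \rho(V_F, t), \qquad \rho(V_F, t) = 0,

    where b > 0 is the fully excitatory case in which the feedback through N(t) can become self-reinforcing and produce the finite-time blow-up described above.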

  9. A novel enteric neuron–glia coculture system reveals the role of glia in neuronal development

    PubMed Central

    Le Berre‐Scoul, Catherine; Chevalier, Julien; Oleynikova, Elena; Cossais, François; Talon, Sophie; Neunlist, Michel

    2016-01-01

    Key points: Unlike astrocytes in the brain, the potential role of enteric glial cells (EGCs) in the formation of the enteric neuronal circuit is currently unknown. To examine the role of EGCs in the formation of the neuronal network, we developed a novel neuron-enriched culture model from embryonic rat intestine grown in indirect coculture with EGCs. We found that EGCs shape axonal complexity and synapse density in enteric neurons, through purinergic- and glial cell line-derived neurotrophic factor-dependent pathways. Using a novel and valuable culture model to study enteric neuron-glia interactions, our study identified EGCs as a key cellular actor regulating neuronal network maturation. Abstract: In the nervous system, the formation of neuronal circuitry results from a complex and coordinated action of intrinsic and extrinsic factors. In the CNS, extrinsic mediators derived from astrocytes have been shown to play a key role in neuronal maturation, including dendritic shaping, axon guidance and synaptogenesis. In the enteric nervous system (ENS), the potential role of enteric glial cells (EGCs) in the maturation of the developing enteric neuronal circuit is currently unknown. A major obstacle in addressing this question is the difficulty in obtaining a valuable experimental model in which enteric neurons could be isolated and maintained without EGCs. We adapted a cell culture method previously developed for CNS neurons to establish a neuron-enriched primary culture from embryonic rat intestine, cultured in indirect coculture with EGCs. We demonstrated that enteric neurons grown in such conditions showed several structural, phenotypic and functional hallmarks of proper development and maturation. However, when neurons were grown without EGCs, the complexity of the axonal arbour and the density of synapses were markedly reduced, suggesting that glial-derived factors contribute strongly to the formation of the neuronal circuitry. We found that these effects of EGCs were mediated in part through purinergic P2Y1 receptor- and glial cell line-derived neurotrophic factor-dependent pathways. Using a novel and valuable culture model to study enteric neuron-glia interactions, our study identified EGCs as a key cellular actor required for neuronal network maturation. PMID:27436013

  10. Dynamics of Multistable States during Ongoing and Evoked Cortical Activity

    PubMed Central

    Mazzucato, Luca

    2015-01-01

    Single-trial analyses of ensemble activity in alert animals demonstrate that cortical circuit dynamics evolve through temporal sequences of metastable states. Metastability has been studied for its potential role in sensory coding, memory, and decision-making. Yet, very little is known about the network mechanisms responsible for its genesis. It is often assumed that the onset of state sequences is triggered by an external stimulus. Here we show that state sequences can be observed also in the absence of overt sensory stimulation. Analysis of multielectrode recordings from the gustatory cortex of alert rats revealed ongoing sequences of states, where single neurons spontaneously attain several firing rates across different states. This single-neuron multistability represents a challenge to existing spiking network models, where typically each neuron is at most bistable. We present a recurrent spiking network model that accounts for both the spontaneous generation of state sequences and the multistability in single-neuron firing rates. Each state results from the activation of neural clusters with potentiated intracluster connections, with the firing rate in each cluster depending on the number of active clusters. Simulations show that the model's ensemble activity hops among the different states, reproducing the ongoing dynamics observed in the data. When probed with external stimuli, the model predicts the quenching of single-neuron multistability into bistability and the reduction of trial-by-trial variability. Both predictions were confirmed in the data. Together, these results provide a theoretical framework that captures both ongoing and evoked network dynamics in a single mechanistic model. PMID:26019337

  11. Self-organization in Balanced State Networks by STDP and Homeostatic Plasticity

    PubMed Central

    Effenberger, Felix; Jost, Jürgen; Levina, Anna

    2015-01-01

    Structural inhomogeneities in synaptic efficacies have a strong impact on population response dynamics of cortical networks and are believed to play an important role in their functioning. However, little is known about how such inhomogeneities could evolve by means of synaptic plasticity. Here we present an adaptive model of a balanced neuronal network that combines two different types of plasticity, STDP and synaptic scaling. The plasticity rules yield both long-tailed distributions of synaptic weights and firing rates. Simultaneously, a highly connected subnetwork of driver neurons with strong synapses emerges. Coincident spiking activity of several driver cells can evoke population bursts and driver cells have similar dynamical properties as leader neurons found experimentally. Our model allows us to observe the delicate interplay between structural and dynamical properties of the emergent inhomogeneities. It is simple, robust to parameter changes and able to explain a multitude of different experimental findings in one basic network. PMID:26335425

  12. Black Holes as Brains: Neural Networks with Area Law Entropy

    NASA Astrophysics Data System (ADS)

    Dvali, Gia

    2018-04-01

    Motivated by the potential similarities between the underlying mechanisms of the enhanced memory storage capacity in black holes and in brain networks, we construct an artificial quantum neural network based on gravity-like synaptic connections and a symmetry structure that allows the network to be described in terms of the geometry of a d-dimensional space. We show that the network possesses a critical state in which gapless neurons emerge that appear to inhabit a (d-1)-dimensional surface, with their number given by the surface area. In the excitations of these neurons, the network can store and retrieve an exponentially large number of patterns within an arbitrarily narrow energy gap. The corresponding micro-state entropy of the brain network exhibits an area law. The neural network can be described in terms of a quantum field, by identifying the different neurons with the different momentum modes of the field and the synaptic connections among the neurons with the interactions among the corresponding momentum modes. Such a mapping makes it possible to attribute a well-defined sense of geometry to an intrinsically non-local system, such as the neural network, and, vice versa, to represent the quantum field model as a neural network.

  13. GeNN: a code generation framework for accelerated brain simulations

    NASA Astrophysics Data System (ADS)

    Yavuz, Esin; Turner, James; Nowotny, Thomas

    2016-01-01

    Large-scale numerical simulations of detailed brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. An ongoing challenge for simulating realistic models is, however, computational speed. In this paper, we present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale neuronal networks to address this challenge. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs, through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. We present performance benchmarks showing that 200-fold speedup compared to a single core of a CPU can be achieved for a network of one million conductance based Hodgkin-Huxley neurons but that for other models the speedup can differ. GeNN is available for Linux, Mac OS X and Windows platforms. The source code, user manual, tutorials, Wiki, in-depth example projects and all other related information can be found on the project website http://genn-team.github.io/genn/.

  14. Neural control of heart rate: the role of neuronal networking.

    PubMed

    Kember, G; Armour, J A; Zamir, M

    2011-05-21

    Neural control of heart rate, particularly its sympathetic component, is generally thought to reside primarily in the central nervous system, though accumulating evidence suggests that intrathoracic extracardiac and intrinsic cardiac ganglia are also involved. We propose an integrated model in which the control of heart rate is achieved via three neuronal "levels" representing three control centers instead of the conventional one. Most importantly, in this model control is effected through networking between neuronal populations within and among these layers. The results obtained indicate that networking serves to process demands for systemic blood flow before transducing them to cardiac motor neurons. This provides the heart with a measure of protection against the possibility of "overdrive" implied by the currently held centrally driven system. The results also show that localized networking instabilities can lead to sporadic low frequency oscillations that have the characteristics of the well-known Mayer waves. The sporadic nature of Mayer waves has been unexplained so far and is of particular interest in clinical diagnosis. Copyright © 2011 Elsevier Ltd. All rights reserved.

  15. An egalitarian network model for the emergence of simple and complex cells in visual cortex

    PubMed Central

    Tao, Louis; Shelley, Michael; McLaughlin, David; Shapley, Robert

    2004-01-01

    We explain how simple and complex cells arise in a large-scale neuronal network model of the primary visual cortex of the macaque. Our model consists of ≈4,000 integrate-and-fire, conductance-based point neurons, representing the cells in a small, 1-mm2 patch of an input layer of the primary visual cortex. In the model the local connections are isotropic and nonspecific, and convergent input from the lateral geniculate nucleus confers cortical cells with orientation and spatial phase preference. The balance between lateral connections and lateral geniculate nucleus drive determines whether individual neurons in this recurrent circuit are simple or complex. The model reproduces qualitatively the experimentally observed distributions of both extracellular and intracellular measures of simple and complex response. PMID:14695891

  19. [Hardware Implementation of Numerical Simulation Function of Hodgkin-Huxley Model Neurons Action Potential Based on Field Programmable Gate Array].

    PubMed

    Wang, Jinlong; Lu, Mai; Hu, Yanwen; Chen, Xiaoqiang; Pan, Qiangqiang

    2015-12-01

    The neuron is the basic unit of the biological nervous system. The Hodgkin-Huxley (HH) model is one of the most realistic neuron models with respect to the electrophysiological characteristics of neurons. Hardware implementation of neurons could provide new research ideas for the clinical treatment of spinal cord injury, for bionics, and for artificial intelligence. Based on the HH neuron model and DSP Builder technology, in the present study a single HH model neuron was implemented in a Field Programmable Gate Array (FPGA). The neuron implemented in the FPGA was stimulated with different types of current, the action potential response characteristics were analyzed, and the correlation coefficient between the numerical simulation results and the hardware implementation results was calculated. The results showed that the action potential responses of the FPGA implementation were highly consistent with the numerical simulation results. This work lays the foundation for hardware implementation of neural networks.
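
    For reference, the numerical simulation side of such a comparison can be as simple as a forward-Euler integration of the classic Hodgkin-Huxley equations; a minimal sketch with the standard squid-axon parameters (step size and stimulus amplitude are illustrative):

        import math

        # Classic Hodgkin-Huxley parameters (mV, ms, uF/cm^2, mS/cm^2)
        C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3
        ENa, EK, EL = 50.0, -77.0, -54.387

        def a_m(V): return 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
        def b_m(V): return 4.0 * math.exp(-(V + 65.0) / 18.0)
        def a_h(V): return 0.07 * math.exp(-(V + 65.0) / 20.0)
        def b_h(V): return 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
        def a_n(V): return 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
        def b_n(V): return 0.125 * math.exp(-(V + 65.0) / 80.0)

        dt, T, I_stim = 0.01, 50.0, 10.0       # ms, ms, uA/cm^2 (illustrative)
        V, m, h, n = -65.0, 0.05, 0.6, 0.32    # resting initial conditions
        for step in range(int(T / dt)):
            INa = gNa * m**3 * h * (V - ENa)   # sodium current
            IK = gK * n**4 * (V - EK)          # potassium current
            IL = gL * (V - EL)                 # leak current
            V += dt / C * (I_stim - INa - IK - IL)        # forward Euler
            m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
            h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
            n += dt * (a_n(V) * (1 - n) - b_n(V) * n)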

  20. Exact computation of the maximum-entropy potential of spiking neural-network models.

    PubMed

    Cofré, R; Cessac, B

    2014-05-01

    Understanding how stimuli and synaptic connectivity influence the statistics of spike patterns in neural networks is a central question in computational neuroscience. The maximum-entropy approach has been successfully used to characterize the statistical response of simultaneously recorded spiking neurons responding to stimuli. However, in spite of good performance in terms of prediction, the fitting parameters do not explain the underlying mechanistic causes of the observed correlations. On the other hand, mathematical models of spiking neurons (neuromimetic models) provide a probabilistic mapping between the stimulus, network architecture, and spike patterns in terms of conditional probabilities. In this paper we build an exact analytical mapping between neuromimetic and maximum-entropy models.
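
    For orientation, the maximum-entropy distribution over spike patterns referred to here has the standard Gibbs form; with observables f_k (e.g., firing rates and pairwise correlations) and Lagrange multipliers lambda_k as the fitting parameters, it reads, in LaTeX notation:

        P(\omega) = \frac{1}{Z}\exp\Big(\sum_k \lambda_k f_k(\omega)\Big),
        \qquad
        Z = \sum_{\omega'} \exp\Big(\sum_k \lambda_k f_k(\omega')\Big)

    The paper's contribution is an exact analytical expression of such a potential in terms of the conditional spike probabilities of the neuromimetic model.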

  1. Self-organized criticality occurs in non-conservative neuronal networks during `up' states

    NASA Astrophysics Data System (ADS)

    Millman, Daniel; Mihalas, Stefan; Kirkwood, Alfredo; Niebur, Ernst

    2010-10-01

    During sleep, under anaesthesia and in vitro, cortical neurons in sensory, motor, association and executive areas fluctuate between so-called up and down states, which are characterized by distinct membrane potentials and spike rates. Another phenomenon observed in preparations similar to those that exhibit up and down states-such as anaesthetized rats, brain slices and cultures devoid of sensory input, as well as awake monkey cortex-is self-organized criticality (SOC). SOC is characterized by activity `avalanches' with a branching parameter near unity and size distribution that obeys a power law with a critical exponent of about -3/2. Recent work has demonstrated SOC in conservative neuronal network models, but critical behaviour breaks down when biologically realistic `leaky' neurons are introduced. Here, we report robust SOC behaviour in networks of non-conservative leaky integrate-and-fire neurons with short-term synaptic depression. We show analytically and numerically that these networks typically have two stable activity levels, corresponding to up and down states, that the networks switch spontaneously between these states and that up states are critical and down states are subcritical.
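
    The hallmark numbers quoted here (branching parameter near unity, size exponent about -3/2) can be reproduced with a critical Galton-Watson branching process, a standard toy stand-in for SOC avalanches; a minimal NumPy sketch, not the authors' network model:

        import numpy as np

        rng = np.random.default_rng(0)

        def avalanche_size(sigma=1.0, cap=10**6):
            """Total activations in one avalanche of a branching process with
            Poisson(sigma) offspring per active unit; sigma = 1 is critical."""
            active, size = 1, 0
            while active and size < cap:
                size += active
                active = rng.poisson(sigma * active)   # offspring of this generation
            return size

        sizes = np.array([avalanche_size() for _ in range(20000)])
        # At criticality the size distribution follows P(s) ~ s**(-3/2).
        for s in (1, 2, 4, 8, 16, 32):
            print(s, np.mean(sizes == s))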

  2. Information in a Network of Neuronal Cells: Effect of Cell Density and Short-Term Depression

    PubMed Central

    Onesto, Valentina; Cosentino, Carlo; Di Fabrizio, Enzo; Cesarelli, Mario; Amato, Francesco; Gentile, Francesco

    2016-01-01

    Neurons are specialized, electrically excitable cells which use electrical-to-chemical signaling to transmit and process information. Understanding how the cooperation of a great many neurons in a grid may modify and perhaps improve the quality of information, in contrast to a few neurons in isolation, is critical for the rational design of cell-materials interfaces for applications in regenerative medicine, tissue engineering, and personalized lab-on-a-chip devices. In the present paper, we couple an integrate-and-fire model with information theory variables to analyse the extent of information in a network of nerve cells. We provide an estimate of the information in the network in bits as a function of cell density and short-term depression time. In the model, neurons are connected through a Delaunay triangulation of non-intersecting edges; in doing so, the number of connecting synapses per neuron is approximately constant, reproducing the early stage of network development in planar neural cell cultures. In simulations where the number of nodes is varied, we observe an optimal value of cell density for which information in the grid is maximized. In simulations in which the post-transmission latency time is varied, we observe that information increases as the latency time decreases and, for specific configurations of the grid, is largely enhanced in a resonance effect. PMID:27403421
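
    The Delaunay wiring rule described here is what keeps the synapse count per neuron roughly constant as density varies; a minimal sketch using SciPy, with the edge set extracted from the triangulation (point count and layout are illustrative):

        import numpy as np
        from scipy.spatial import Delaunay

        rng = np.random.default_rng(1)
        points = rng.uniform(0.0, 1.0, size=(200, 2))   # neuron somata in a unit square

        tri = Delaunay(points)
        edges = set()
        for simplex in tri.simplices:          # each simplex is a triangle (i, j, k)
            for a in range(3):
                for b in range(a + 1, 3):
                    i, j = sorted((simplex[a], simplex[b]))
                    edges.add((i, j))

        mean_degree = 2 * len(edges) / len(points)
        # Planar Delaunay graphs have mean degree close to 6 almost independently
        # of the number of points, which is the property the model exploits.
        print(len(edges), "edges, mean degree", round(mean_degree, 2))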

  3. Spiking and bursting patterns of fractional-order Izhikevich model

    NASA Astrophysics Data System (ADS)

    Teka, Wondimu W.; Upadhyay, Ranjit Kumar; Mondal, Argha

    2018-03-01

    Bursting and spiking oscillations play major roles in processing and transmitting information in the brain through cortical neurons that respond differently to the same signal. These oscillations display complex dynamics that can be produced with neuronal models by varying many model parameters. Recent studies have shown that models with fractional order can produce several types of history-dependent neuronal activity without the adjustment of several parameters. We studied the fractional-order Izhikevich model and analyzed the different kinds of oscillations that emerge from the fractional dynamics. The model produces a wide range of neuronal spike responses, including regular spiking, fast spiking, intrinsic bursting, mixed-mode oscillations, regular bursting and chattering, by adjusting only the fractional order. Both the active and silent phases of the burst lengthen as the fractional-order model deviates further from the classical model. For smaller fractional orders, the model produces memory-dependent spiking activity after the pulse signal is turned off. This special spiking activity and other properties of the fractional-order model are caused by the memory trace that emerges from the fractional-order dynamics and integrates all the past activity of the neuron. On the network level, the response of the neuronal network shifts from random to scale-free spiking. Our results suggest that the complex dynamics of spiking and bursting can be the result of the long-term dependence and interaction of intracellular and extracellular ionic currents.
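
    A sketch of how such a memory trace can enter the update rule: one common explicit Grünwald-Letnikov discretization of a fractional derivative of order alpha turns the one-step Euler update into a weighted sum over the entire voltage history. The sketch below applies it to the standard Izhikevich regular-spiking equations; the scheme and all values are illustrative, not necessarily the authors' exact method.

        import numpy as np

        # Standard Izhikevich regular-spiking parameters
        a, b, c, d, I = 0.02, 0.2, -65.0, 8.0, 10.0
        alpha, h, steps = 0.9, 0.1, 3000      # fractional order, step (ms), step count

        # Grunwald-Letnikov weights: w_0 = 1, w_j = w_{j-1} * (1 - (alpha + 1) / j);
        # for alpha = 1 all weights beyond w_1 vanish and plain Euler is recovered.
        w = np.empty(steps + 1)
        w[0] = 1.0
        for j in range(1, steps + 1):
            w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)

        v = np.full(steps + 1, -65.0)   # the full voltage history is the memory trace
        u = b * v[0]
        for n in range(steps):
            if v[n] >= 30.0:            # spike cutoff and reset
                v[n] = c
                u += d
            f = 0.04 * v[n] ** 2 + 5.0 * v[n] + 140.0 - u + I
            # Fractional update: the new state depends on all past states.
            history = np.dot(w[1:n + 2], v[n::-1])
            v[n + 1] = h ** alpha * f - history
            u += h * a * (b * v[n] - u)  # recovery variable kept first order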

  4. Multiple fMRI system-level baseline connectivity is disrupted in patients with consciousness alterations.

    PubMed

    Demertzi, Athena; Gómez, Francisco; Crone, Julia Sophia; Vanhaudenhuyse, Audrey; Tshibanda, Luaba; Noirhomme, Quentin; Thonnard, Marie; Charland-Verville, Vanessa; Kirsch, Murielle; Laureys, Steven; Soddu, Andrea

    2014-03-01

    In healthy conditions, group-level fMRI resting state analyses identify ten resting state networks (RSNs) of cognitive relevance. Here, we aim to assess the ten-network model in severely brain-injured patients suffering from disorders of consciousness and to identify those networks which are most relevant for discriminating between patients and healthy subjects. 300 fMRI volumes were obtained in 27 healthy controls and 53 patients in minimally conscious state (MCS), vegetative state/unresponsive wakefulness syndrome (VS/UWS) and coma. Independent component analysis (ICA) reduced data dimensionality. The ten networks were identified by means of a multiple template-matching procedure and were tested for neuronal properties (neuronal vs. non-neuronal) in a data-driven way. Univariate analyses detected between-group differences in the networks' neuronal properties and estimated voxel-wise functional connectivity in the networks, which were significantly less identifiable in patients. A nearest-neighbor "clinical" classifier was used to determine the networks with high between-group discriminative accuracy. Healthy controls were characterized by more neuronal components compared to patients in VS/UWS and in coma. Compared to healthy controls, fewer patients in MCS and VS/UWS showed components of neuronal origin for the left executive control network, default mode network (DMN), auditory network, and right executive control network. The "clinical" classifier indicated the DMN and auditory network with the highest accuracy (85.3%) in discriminating patients from healthy subjects. Multiple-network fMRI resting state connectivity is disrupted in severely brain-injured patients suffering from disorders of consciousness. When performing ICA, multiple-network testing and control for the neuronal properties of the identified RSNs can advance fMRI system-level characterization. Automatic data-driven patient classification is the first step towards future single-subject objective diagnostics based on fMRI resting state acquisitions. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. Application of Artificial Neural Network and Response Surface Methodology in Modeling of Surface Roughness in WS2 Solid Lubricant Assisted MQL Turning of Inconel 718

    NASA Astrophysics Data System (ADS)

    Maheshwera Reddy Paturi, Uma; Devarasetti, Harish; Abimbola Fadare, David; Reddy Narala, Suresh Kumar

    2018-04-01

    In the present paper, artificial neural networks (ANN) and response surface methodology (RSM) are used to model surface roughness in WS2 (tungsten disulphide) solid lubricant assisted minimal quantity lubrication (MQL) machining. The real-time MQL turning of Inconel 718 experimental data considered in this paper is available in the literature [1]. In the ANN modeling, performance parameters such as mean square error (MSE), mean absolute percentage error (MAPE) and average error in prediction (AEP) for the experimental data were determined based on the Levenberg-Marquardt (LM) feed-forward back-propagation training algorithm with tansig as the transfer function. The MATLAB toolbox was used for training and testing the neural network model. A neural network with three input neurons, one hidden layer with five neurons and one output neuron (3-5-1 architecture) was found to be the most reliable and optimal. The coefficients of determination (R2) for the ANN and RSM models were 0.998 and 0.982, respectively. The surface roughness predictions from the ANN and RSM models were compared with experimentally measured values and found to be in good agreement. However, the prediction accuracy of the ANN model is higher than that of the RSM model.
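
    For orientation, the 3-5-1 architecture amounts to the forward pass below (tansig is MATLAB's name for the tanh transfer function); the weights here are random placeholders, whereas the paper obtains them by Levenberg-Marquardt training, and the example input is invented:

        import numpy as np

        rng = np.random.default_rng(0)
        W1, b1 = rng.normal(size=(5, 3)), np.zeros(5)   # 3 inputs -> 5 hidden units
        W2, b2 = rng.normal(size=(1, 5)), np.zeros(1)   # 5 hidden -> 1 output (roughness)

        def predict(x):
            """3-5-1 network: tanh ('tansig') hidden layer, linear output."""
            hidden = np.tanh(W1 @ x + b1)
            return W2 @ hidden + b2

        x = np.array([60.0, 0.1, 0.5])   # e.g. speed, feed, depth of cut (illustrative)
        print(predict(x))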

  6. Impact of adaptation currents on synchronization of coupled exponential integrate-and-fire neurons.

    PubMed

    Ladenbauer, Josef; Augustin, Moritz; Shiau, LieJune; Obermayer, Klaus

    2012-01-01

    The ability of spiking neurons to synchronize their activity in a network depends on the response behavior of these neurons, as quantified by the phase response curve (PRC), and on coupling properties. The PRC characterizes the effects of transient inputs on spike timing and can be measured experimentally. Here we use the adaptive exponential integrate-and-fire (aEIF) neuron model to determine how subthreshold and spike-triggered slow adaptation currents shape the PRC. Based on that, we predict how synchrony and phase-locked states of coupled neurons change in the presence of synaptic delays and unequal coupling strengths. We find that increased subthreshold adaptation currents cause a transition of the PRC from only phase advances to phase advances and delays in response to excitatory perturbations. Increased spike-triggered adaptation currents, on the other hand, predominantly skew the PRC to the right. Both adaptation-induced changes of the PRC are modulated by spike frequency, being more prominent at lower frequencies. Applying phase reduction theory, we show that subthreshold adaptation stabilizes synchrony for pairs of coupled excitatory neurons, while spike-triggered adaptation causes locking with a small phase difference, as long as synaptic heterogeneities are negligible. For inhibitory pairs synchrony is stable and robust against conduction delays, and adaptation can mediate bistability of in-phase and anti-phase locking. We further demonstrate that stable synchrony and bistable in/anti-phase locking of pairs carry over to synchronization and clustering of larger networks. The effects of adaptation in aEIF neurons on PRCs and network dynamics qualitatively reflect those of biophysical adaptation currents in detailed Hodgkin-Huxley-based neurons, which underscores the utility of the aEIF model for investigating the dynamical behavior of networks. Our results suggest neuronal spike frequency adaptation as a mechanism for synchronizing low frequency oscillations in local excitatory networks, but indicate that inhibition rather than excitation generates coherent rhythms at higher frequencies.
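
    The aEIF model referred to here combines an exponential spike-initiation term with a slow adaptation current w whose two gain parameters (subthreshold a, spike-triggered b) are exactly the ones varied in the study; a minimal single-neuron sketch in the Brette-Gerstner form, with illustrative parameter values:

        import numpy as np

        C, gL, EL = 281.0, 30.0, -70.6       # pF, nS, mV
        VT, DeltaT = -50.4, 2.0              # threshold and slope factor (mV)
        a, b, tau_w = 4.0, 80.5, 144.0       # subthreshold (nS), spike-triggered (pA), ms
        Vr, Vcut, I = -70.6, 20.0, 800.0     # reset, numerical cutoff (mV), input (pA)

        dt, T = 0.05, 500.0
        V, w, spikes = EL, 0.0, []
        for step in range(int(T / dt)):
            dV = (-gL * (V - EL) + gL * DeltaT * np.exp((V - VT) / DeltaT) - w + I) / C
            dw = (a * (V - EL) - w) / tau_w
            V += dt * dV
            w += dt * dw
            if V >= Vcut:                    # spike: reset V, increment adaptation
                V = Vr
                w += b
                spikes.append(step * dt)
        print(len(spikes), "spikes")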

  8. Deep learning based state recognition of substation switches

    NASA Astrophysics Data System (ADS)

    Wang, Jin

    2018-06-01

    Unlike traditional methods, which recognize the state of substation switches from the operating rules of the electrical power system, this work proposes a novel convolutional neural network-based state recognition approach for substation switches. Inspired by the theory of transfer learning, we first establish a convolutional neural network model trained on the large-scale image set ILSVRC2012; a restricted Boltzmann machine is then employed to replace the fully connected layer of the network and is trained on our small image dataset of 110kV substation switches to obtain a stronger model. Experiments conducted on our image dataset of 110kV substation switches show that the proposed approach is applicable in substations, reducing running costs and enabling truly unattended operation.
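
    The transfer-learning recipe, take an ImageNet-pretrained convolutional backbone and retrain only a new classification head on the small switch dataset, looks roughly as follows in PyTorch (assuming a recent torchvision; the paper substitutes a restricted Boltzmann machine for the head, whereas this sketch uses a plain linear layer, and the two-class setup is an assumption):

        import torch
        import torch.nn as nn
        from torchvision import models

        # Backbone pretrained on ImageNet (the ILSVRC2012 training set)
        backbone = models.resnet18(weights="IMAGENET1K_V1")
        for p in backbone.parameters():
            p.requires_grad = False          # freeze the convolutional features

        # Replace the final fully connected layer; the paper uses an RBM here,
        # this sketch substitutes a linear classifier for two switch states.
        backbone.fc = nn.Linear(backbone.fc.in_features, 2)

        optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
        criterion = nn.CrossEntropyLoss()

        def train_step(images, labels):      # images: (B, 3, 224, 224), labels: (B,)
            optimizer.zero_grad()
            loss = criterion(backbone(images), labels)
            loss.backward()
            optimizer.step()
            return loss.item()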

  9. 1D-3D hybrid modeling: from multi-compartment models to full resolution models in space and time.

    PubMed

    Grein, Stephan; Stepniewski, Martin; Reiter, Sebastian; Knodel, Markus M; Queisser, Gillian

    2014-01-01

    Investigation of cellular and network dynamics in the brain by means of modeling and simulation has evolved into a highly interdisciplinary field that uses sophisticated modeling and simulation approaches to understand distinct areas of brain function. Depending on the underlying complexity, these models vary in their level of detail in order to cope with the attached computational cost. Hence, for large network simulations, single neurons are typically reduced to time-dependent signal processors, dismissing the spatial aspect of each cell. For single cells or networks with relatively small numbers of neurons, general-purpose simulators allow for space- and time-dependent simulations of electrical signal processing, based on cable equation theory. An emerging field in Computational Neuroscience adds a new level of detail by incorporating the full three-dimensional morphology of cells and organelles into three-dimensional, space- and time-dependent simulations. While every approach has its advantages and limitations, such as computational cost, integrated, methods-spanning simulation approaches could, depending on network size, establish new ways to investigate the brain. In this paper we present a hybrid simulation approach that makes use of reduced 1D-models, using e.g. the NEURON simulator, coupled to fully resolved models for simulating cellular and sub-cellular dynamics, including the detailed three-dimensional morphology of neurons and organelles. In order to couple 1D- and 3D-simulations, we present a geometry, membrane potential and intracellular concentration mapping framework, with which graph-based morphologies, e.g. in the swc- or hoc-format, are mapped to full surface and volume representations of the neuron, and computational data from 1D-simulations can be used as boundary conditions for full 3D-simulations and vice versa. Thus, established models and data based on general-purpose 1D-simulators can be directly coupled to the emerging field of fully resolved, highly detailed 3D-modeling approaches. We present the developed general framework for 1D/3D hybrid modeling and apply it to investigate electrically active neurons and their intracellular spatio-temporal calcium dynamics.

  10. Oscillations contribute to memory consolidation by changing criticality and stability in the brain

    NASA Astrophysics Data System (ADS)

    Wu, Jiaxing; Skilling, Quinton; Ognjanovski, Nicolette; Aton, Sara; Zochowski, Michal

    Oscillations are a universal feature of every level of brain dynamics and have been shown to contribute to many brain functions. To investigate the fundamental mechanism underpinning oscillatory activity, the properties of heterogeneous networks are compared in situations with and without oscillations. Our results show that both network criticality and stability change in the presence of oscillations. Criticality describes the network state of neuronal avalanches, cascades of bursts of action potential firing in a neural network. Stability measures how stable the spike-timing relationship between neuron pairs is over time. Using a detailed spiking model, we found that the branching parameter σ changes with oscillatory and structural network properties, corresponding to transitions among different critical states. Also, analysis of functional network structures shows that oscillations help to stabilize the neuronal representation of memory. Further, quantitatively similar results are observed in biological data recorded in vivo. In summary, we have observed that, by regulating the neuronal firing pattern, oscillations affect both the criticality and stability properties of the network, and thus contribute to memory formation.

  11. Modeling the emergence of circadian rhythms in a clock neuron network.

    PubMed

    Diambra, Luis; Malta, Coraci P

    2012-01-01

    Circadian rhythms in pacemaker cells persist for weeks in constant darkness, while in other types of cells the molecular oscillations that underlie circadian rhythms damp rapidly under the same conditions. Although much progress has been made in understanding the biochemical and cellular basis of circadian rhythms, the mechanisms leading to damped or self-sustained oscillations remain largely unknown. There exist many mathematical models that reproduce the circadian rhythms of a single cell of the Drosophila fly. However, not much is known about the mechanisms leading to coherent circadian oscillation in clock neuron networks. In this work we have implemented a model of a network of interacting clock neurons to describe the emergence (or damping) of circadian rhythms in the Drosophila fly in the absence of zeitgebers. Our model consists of an array of pacemakers that interact through the modulation of some parameters by a network feedback. The individual pacemakers are described by a well-known biochemical model for circadian oscillation, to which we have added degradation of PER protein by light and multiplicative noise. The network feedback is the PER protein level averaged over the whole network. In particular, we have investigated the effect of modulation of the parameters associated with (i) the control of the net entrance of PER into the nucleus and (ii) the non-photic degradation of PER. Our results indicate that the modulation of PER entrance into the nucleus allows the synchronization of clock neurons, leading to coherent circadian oscillations under constant dark conditions. On the other hand, the modulation of non-photic degradation cannot reset the phases of individual clocks subjected to intrinsic biochemical noise.

  12. Neuromorphic Computing for Temporal Scientific Data Classification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schuman, Catherine D.; Potok, Thomas E.; Young, Steven

    In this work, we apply a spiking neural network model and an associated memristive neuromorphic implementation to an application in classifying temporal scientific data. We demonstrate that the spiking neural network model achieves comparable results to a previously reported convolutional neural network model, with significantly fewer neurons and synapses required.

  13. Failure tolerance of spike phase synchronization in coupled neural networks

    NASA Astrophysics Data System (ADS)

    Jalili, Mahdi

    2011-09-01

    Neuronal synchronization plays an important role in various functions of the nervous system such as binding, cognition, information processing, and computation. In this paper, we investigated how random and intentional failures of nodes in a network influence its phase synchronization properties. We considered both artificially constructed networks, using models such as preferential attachment, Watts-Strogatz, and Erdős-Rényi, as well as a number of real neuronal networks. The failure strategy was either random or intentional, based on properties of the nodes such as degree, clustering coefficient, betweenness centrality, and vulnerability. The Hindmarsh-Rose model was used as the mathematical model for the individual neurons, and the phase synchronization of the spike trains was monitored as a function of the percentage/number of removed nodes. The numerical simulations were supplemented by considering coupled non-identical Kuramoto oscillators. Failures based on the clustering coefficient, i.e., removing the nodes with high values of the clustering coefficient, had the least effect on spike synchrony in all of the networks. This was followed by errors, in which nodes were removed randomly. However, the behavior of the other three attack strategies was not uniform across the networks, and different strategies were the most influential in different network structures.
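
    The Kuramoto part of such an analysis is easy to reproduce: integrate coupled phase oscillators, silence nodes chosen by some centrality ranking, and track the order parameter r; a minimal NumPy sketch with degree-targeted removal (network, coupling and frequencies are illustrative):

        import numpy as np

        rng = np.random.default_rng(2)
        N = 100
        A = (rng.random((N, N)) < 0.1).astype(float)
        A = np.triu(A, 1); A = A + A.T                 # undirected random adjacency
        omega = rng.normal(0.0, 0.5, N)                # non-identical frequencies

        def simulate(alive, K=0.5, dt=0.01, steps=5000):
            theta = rng.uniform(0, 2 * np.pi, N)
            mask = A * alive[None, :]                  # cut inputs from removed nodes
            for _ in range(steps):
                coupling = (mask * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
                theta[alive] += dt * (omega[alive] + K * coupling[alive])
            return np.abs(np.mean(np.exp(1j * theta[alive])))   # order parameter r

        alive = np.ones(N, dtype=bool)
        for frac in (0.0, 0.1, 0.2):
            k = int(frac * N)
            targets = np.argsort(A.sum(axis=1))[::-1][:k]   # highest-degree nodes first
            alive[:] = True
            alive[targets] = False
            print(frac, simulate(alive))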

  14. Delay selection by spike-timing-dependent plasticity in recurrent networks of spiking neurons receiving oscillatory inputs.

    PubMed

    Kerr, Robert R; Burkitt, Anthony N; Thomas, Doreen A; Gilson, Matthieu; Grayden, David B

    2013-01-01

    Learning rules, such as spike-timing-dependent plasticity (STDP), change the structure of networks of neurons based on the firing activity. A network level understanding of these mechanisms can help infer how the brain learns patterns and processes information. Previous studies have shown that STDP selectively potentiates feed-forward connections that have specific axonal delays, and that this underlies behavioral functions such as sound localization in the auditory brainstem of the barn owl. In this study, we investigate how STDP leads to the selective potentiation of recurrent connections with different axonal and dendritic delays during oscillatory activity. We develop analytical models of learning with additive STDP in recurrent networks driven by oscillatory inputs, and support the results using simulations with leaky integrate-and-fire neurons. Our results show selective potentiation of connections with specific axonal delays, which depended on the input frequency. In addition, we demonstrate how this can lead to a network becoming selective in the amplitude of its oscillatory response to this frequency. We extend this model of axonal delay selection within a single recurrent network in two ways. First, we show the selective potentiation of connections with a range of both axonal and dendritic delays. Second, we show axonal delay selection between multiple groups receiving out-of-phase, oscillatory inputs. We discuss the application of these models to the formation and activation of neuronal ensembles or cell assemblies in the cortex, and also to missing fundamental pitch perception in the auditory brainstem.
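
    The additive STDP rule behind these analytical models weighs each pre-post spike-time difference with an asymmetric exponential window; a minimal sketch (amplitudes and time constants are typical illustrative values, not the paper's):

        import numpy as np

        A_plus, A_minus = 0.005, 0.0053      # potentiation / depression amplitudes
        tau_plus, tau_minus = 17.0, 34.0     # window time constants (ms)

        def stdp_window(delta_t):
            """Additive STDP for delta_t = t_post - t_pre (ms): pre-before-post
            potentiates, post-before-pre depresses."""
            return np.where(delta_t >= 0,
                            A_plus * np.exp(-delta_t / tau_plus),
                            -A_minus * np.exp(delta_t / tau_minus))

        def weight_change(pre_spikes, post_spikes):
            """Total additive change over all pre/post spike pairings."""
            dts = np.subtract.outer(post_spikes, pre_spikes)   # t_post - t_pre
            return stdp_window(dts).sum()

        pre = np.array([10.0, 50.0, 90.0])   # an axonal delay would shift these times
        post = np.array([15.0, 48.0, 95.0])
        print(weight_change(pre, post))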

  16. Dynamic neural network models of the premotoneuronal circuitry controlling wrist movements in primates.

    PubMed

    Maier, M A; Shupe, L E; Fetz, E E

    2005-10-01

    Dynamic recurrent neural networks were derived to simulate neuronal populations generating bidirectional wrist movements in the monkey. The models incorporate anatomical connections of cortical and rubral neurons, muscle afferents, segmental interneurons and motoneurons; they also incorporate the response profiles of four populations of neurons observed in behaving monkeys. The networks were derived by gradient descent algorithms to generate the eight characteristic patterns of motor unit activations observed during alternating flexion-extension wrist movements. The resulting model generated the appropriate input-output transforms and developed connection strengths resembling those in physiological pathways. We found that this network could be further trained to simulate additional tasks, such as experimentally observed reflex responses to limb perturbations that stretched or shortened the active muscles, and scaling of response amplitudes in proportion to inputs. In the final comprehensive network, motor units are driven by the combined activity of cortical, rubral, spinal and afferent units during step tracking and perturbations. The model displayed many emergent properties corresponding to physiological characteristics. The resulting neural network provides a working model of premotoneuronal circuitry and elucidates the neural mechanisms controlling motoneuron activity. It also predicts several features to be experimentally tested, for example the consequences of eliminating inhibitory connections in cortex and red nucleus. It also reveals that co-contraction can be achieved by simultaneous activation of the flexor and extensor circuits without invoking features specific to co-contraction.

  17. Modeling extracellular fields for a three-dimensional network of cells using NEURON.

    PubMed

    Appukuttan, Shailesh; Brain, Keith L; Manchanda, Rohit

    2017-10-01

    Computational modeling of biological cells usually ignores their extracellular fields, assuming them to be inconsequential. Though such an assumption might be justified in certain cases, it is debatable for networks of tightly packed cells, such as in the central nervous system and the syncytial tissues of cardiac and smooth muscle. In the present work, we demonstrate a technique to couple the extracellular fields of individual cells within the NEURON simulation environment. The existing features of the simulator are extended by explicitly defining current balance equations, resulting in the coupling of the extracellular fields of adjacent cells. With this technique, we achieved continuity of extracellular space for a network model, thereby allowing the exploration of extracellular interactions computationally. Using a three-dimensional network model, passive and active electrical properties were evaluated under varying levels of extracellular volumes. Simultaneous intracellular and extracellular recordings for synaptic and action potentials were analyzed, and the potential of ephaptic transmission towards functional coupling of cells was explored. We have implemented a true bi-domain representation of a network of cells, with the extracellular domain being continuous throughout the entire model. This has hitherto not been achieved using NEURON, or other compartmental modeling platforms. We have demonstrated the coupling of the extracellular field of every cell in a three-dimensional model to obtain a continuous uniform extracellular space. This technique provides a framework for the investigation of interactions in tightly packed networks of cells via their extracellular fields. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Real-time computing platform for spiking neurons (RT-spike).

    PubMed

    Ros, Eduardo; Ortigosa, Eva M; Agís, Rodrigo; Carrillo, Richard; Arnold, Michael

    2006-07-01

    A computing platform is described for simulating arbitrary networks of spiking neurons in real time. A hybrid computing scheme is adopted that uses both software and hardware components to manage the tradeoff between flexibility and computational power; the neuron model is implemented in hardware and the network model and the learning are implemented in software. The incremental transition of the software components into hardware is supported. We focus on a spike response model (SRM) for a neuron where the synapses are modeled as input-driven conductances. The temporal dynamics of the synaptic integration process are modeled with a synaptic time constant that results in a gradual injection of charge. This type of model is computationally expensive and is not easily amenable to existing software-based event-driven approaches. As an alternative we have designed an efficient time-based computing architecture in hardware, where the different stages of the neuron model are processed in parallel. Further improvements occur by computing multiple neurons in parallel using multiple processing units. This design is tested using reconfigurable hardware and its scalability and performance evaluated. Our overall goal is to investigate biologically realistic models for the real-time control of robots operating within closed action-perception loops, and so we evaluate the performance of the system on simulating a model of the cerebellum where the emulation of the temporal dynamics of the synaptic integration process is important.

  19. Iterative free-energy optimization for recurrent neural networks (INFERNO).

    PubMed

    Pitti, Alexandre; Gaussier, Philippe; Quoy, Mathias

    2017-01-01

    The intra-parietal lobe coupled with the basal ganglia forms a working memory that demonstrates strong planning capabilities for generating robust yet flexible neuronal sequences. Neurocomputational models, however, often fail to control long-range neural synchrony in recurrent spiking networks due to spontaneous activity. In a novel framework based on the free-energy principle, we propose to view the problem of spike synchrony as an optimization problem over the neurons' subthreshold activity for the generation of long neuronal chains. Using stochastic gradient descent, a reinforcement signal (presumably dopaminergic) evaluates the quality of one input vector for moving the recurrent neural network toward a desired activity; depending on the error made, this input vector is either strengthened to climb the gradient or replaced to search for another solution. This vector can then be learned by an associative memory, as a model of the basal ganglia, to control the recurrent neural network. Experiments on habit learning and on sequence retrieval demonstrate the capability of the dual system to generate very long and precise spatio-temporal sequences of more than two hundred iterations. Its features are then applied to the sequential planning of arm movements. In line with neurobiological theories, we discuss the framework's relevance for modeling the cortico-basal working memory that initiates flexible, goal-directed neuronal chains of causation, and its relation to novel architectures such as Deep Networks, Neural Turing Machines and the Free-Energy Principle.

  20. Functional Interactions between Mammalian Respiratory Rhythmogenic and Premotor Circuitry

    PubMed Central

    Song, Hanbing; Hayes, John A.; Vann, Nikolas C.; Wang, Xueying; LaMar, M. Drew

    2016-01-01

    Breathing in mammals depends on rhythms that originate from the preBötzinger complex (preBötC) of the ventral medulla and a network of brainstem and spinal premotor neurons. The rhythm-generating core of the preBötC, as well as some premotor circuits, consists of interneurons derived from Dbx1-expressing precursors (Dbx1 neurons), but the structure and function of these networks remain incompletely understood. We previously developed a cell-specific detection and laser ablation system to interrogate respiratory network structure and function in a slice model of breathing that retains the preBötC, the respiratory-related hypoglossal (XII) motor nucleus and XII premotor circuits. In spontaneously rhythmic slices, cumulative ablation of Dbx1 preBötC neurons decreased XII motor output by ∼50% after ∼15 cell deletions, and then decelerated and terminated rhythmic function altogether as the tally increased to ∼85 neurons. In contrast, cumulatively deleting Dbx1 XII premotor neurons decreased motor output monotonically but neither affected frequency nor stopped XII output, regardless of the ablation tally. Here, we couple an existing preBötC model with a premotor population in several topological configurations to investigate which one may best replicate the laser ablation experiments. If the XII premotor population is a "small-world" network (rich in local connections with sparse long-range connections among constituent premotor neurons) and connected with the preBötC such that the total number of incoming synapses remains fixed, then the in silico system successfully replicates the in vitro laser ablation experiments. This study proposes a feasible configuration for circuits consisting of Dbx1-derived interneurons that generate inspiratory rhythm and motor pattern. SIGNIFICANCE STATEMENT To produce a breathing-related motor pattern, a brainstem core oscillator circuit projects to a population of premotor interneurons, but the assemblage of this network remains incompletely understood. Here we applied network modeling and numerical simulation to discover respiratory circuit configurations that successfully replicate photonic cell ablation experiments targeting either the core oscillator or the premotor network, respectively. If premotor neurons are interconnected in a so-called "small-world" network with a fixed number of incoming synapses balanced between premotor and rhythmogenic neurons, then our simulations match their experimental benchmarks. These results provide a framework of experimentally testable predictions regarding the rudimentary structure and function of respiratory rhythm- and pattern-generating circuits in the brainstem of mammals. PMID:27383596
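
    The "small-world" premotor topology invoked here (dense local wiring plus sparse long-range shortcuts) is conventionally generated with the Watts-Strogatz procedure, and the paper's key constraint is a fixed number of incoming synapses per premotor cell split between recurrent and preBötC sources; a sketch of that wiring logic with networkx (all sizes and probabilities are illustrative):

        import random
        import networkx as nx

        random.seed(3)
        n_premotor, k, p_rewire = 100, 6, 0.1

        # Watts-Strogatz: ring lattice with k neighbors, each edge rewired with
        # probability p; the 'connected' variant retries until the graph is connected.
        premotor = nx.connected_watts_strogatz_graph(n_premotor, k, p_rewire)

        # Fixed total in-degree per premotor neuron, with the remainder (after
        # recurrent premotor inputs) drawn from the rhythmogenic preBotC population.
        n_prebotc, in_degree = 300, 12
        wiring = {}
        for node in premotor.nodes:
            recurrent = list(premotor.neighbors(node))
            n_core = max(0, in_degree - len(recurrent))
            wiring[node] = recurrent + [("preBotC", i)
                                        for i in random.sample(range(n_prebotc), n_core)]

        print(nx.average_clustering(premotor),
              nx.average_shortest_path_length(premotor))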

  1. FPGA implementation of motifs-based neuronal network and synchronization analysis

    NASA Astrophysics Data System (ADS)

    Deng, Bin; Zhu, Zechen; Yang, Shuangming; Wei, Xile; Wang, Jiang; Yu, Haitao

    2016-06-01

    Motifs in complex networks play a crucial role in determining brain functions. In this paper, 13 kinds of motifs are implemented on a Field Programmable Gate Array (FPGA) to investigate the relationships between network properties and motif properties. We use a discretization method and a pipelined architecture to construct the various motifs, with the Hindmarsh-Rose (HR) neuron as the node model. We also build a small-world network based on these motifs and conduct a synchronization analysis of the motifs as well as of the constructed network. We find that the synchronization properties of a motif determine those of the motif-based small-world network, which demonstrates the effectiveness of our proposed hardware simulation platform. By imitating vital nuclei in the brain that generate normal discharges, the proposed FPGA-based artificial neuronal networks have the potential to replace injured nuclei and restore brain function in the treatment of Parkinson's disease and epilepsy.

  2. A Model of Self-Organizing Head-Centered Visual Responses in Primate Parietal Areas

    PubMed Central

    Mender, Bedeho M. W.; Stringer, Simon M.

    2013-01-01

    We present a hypothesis for how head-centered visual representations in primate parietal areas could self-organize through visually-guided learning, and test this hypothesis using a neural network model. The model consists of a competitive output layer of neurons that receives afferent synaptic connections from a population of input neurons with eye position gain modulated retinal receptive fields. The synaptic connections in the model are trained with an associative trace learning rule which has the effect of encouraging output neurons to learn to respond to subsets of input patterns that tend to occur close together in time. This network architecture and synaptic learning rule is hypothesized to promote the development of head-centered output neurons during periods of time when the head remains fixed while the eyes move. This hypothesis is demonstrated to be feasible, and each of the core model components described is tested and found to be individually necessary for successful self-organization. PMID:24349064
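
    The associative trace rule at the heart of this model low-pass filters each output neuron's activity and uses that trace, rather than the instantaneous response, in the Hebbian update, which is what binds inputs occurring close together in time; a minimal sketch (trace constant, learning rate and layer sizes are illustrative):

        import numpy as np

        eta, alpha = 0.5, 0.05               # trace mixing constant, learning rate

        def hebbian_trace_step(W, x, trace):
            """Trace rule: dW ~ trace * x, so inputs seen in close temporal
            succession (the same head-centered location viewed across eye
            movements) come to drive the same output cells."""
            y = W @ x                                   # output activation
            trace = (1.0 - eta) * y + eta * trace       # exponential memory trace
            W += alpha * np.outer(trace, x)             # associative update
            W /= np.linalg.norm(W, axis=1, keepdims=True)   # weight normalization
            return W, trace

        rng = np.random.default_rng(4)
        W = rng.random((20, 100))
        W /= np.linalg.norm(W, axis=1, keepdims=True)
        trace = np.zeros(20)
        for x in rng.random((500, 100)):     # stream of gain-modulated input patterns
            W, trace = hebbian_trace_step(W, x, trace)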

  4. Time evolution of coherent structures in networks of Hindmarsh-Rose neurons

    NASA Astrophysics Data System (ADS)

    Mainieri, M. S.; Erichsen, R.; Brunnet, L. G.

    2005-08-01

    In the regime of partial synchronization, networks of diffusively coupled Hindmarsh-Rose neurons show coherent structures developing in a region of phase space that is wider than in the corresponding single neuron. Such structures persist, without important changes, over several bursting periods. In this work, we study the time evolution of these structures and their dynamical stability under damage. This system may model the behavior of ensembles of neurons coupled through bidirectional gap junctions or, in a broader sense, it could also account for the molecular cascades involved in the formation of flash and short-term memory.

  5. Synchronised firing patterns in a random network of adaptive exponential integrate-and-fire neuron model.

    PubMed

    Borges, F S; Protachevicz, P R; Lameu, E L; Bonetti, R C; Iarosz, K C; Caldas, I L; Baptista, M S; Batista, A M

    2017-06-01

    We have studied neuronal synchronisation in a random network of adaptive exponential integrate-and-fire neurons. We study how spiking or bursting synchronous behaviour appears as a function of the coupling strength and the probability of connections, by constructing parameter spaces that identify these synchronous behaviours from measurements of the inter-spike interval and the calculation of the order parameter. Moreover, we verify the robustness of synchronisation by applying an external perturbation to each neuron. The simulations show that bursting synchronisation is more robust than spike synchronisation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. On the sample complexity of learning for networks of spiking neurons with nonlinear synaptic interactions.

    PubMed

    Schmitt, Michael

    2004-09-01

    We study networks of spiking neurons that use the timing of pulses to encode information. Nonlinear interactions model the spatial groupings of synapses on the neural dendrites and describe the computations performed at local branches. Within a theoretical framework of learning we analyze the question of how many training examples these networks must receive to be able to generalize well. Bounds for this sample complexity of learning can be obtained in terms of a combinatorial parameter known as the pseudodimension. This dimension characterizes the computational richness of a neural network and is given in terms of the number of network parameters. Two types of feedforward architectures are considered: constant-depth networks and networks of unconstrained depth. We derive asymptotically tight bounds for each of these network types. Constant depth networks are shown to have an almost linear pseudodimension, whereas the pseudodimension of general networks is quadratic. Networks of spiking neurons that use temporal coding are becoming increasingly more important in practical tasks such as computer vision, speech recognition, and motor control. The question of how well these networks generalize from a given set of training examples is a central issue for their successful application as adaptive systems. The results show that, although coding and computation in these networks is quite different and in many cases more powerful, their generalization capabilities are at least as good as those of traditional neural network models.

  7. Defects formation and spiral waves in a network of neurons in presence of electromagnetic induction.

    PubMed

    Rostami, Zahra; Jafari, Sajad

    2018-04-01

    The complex anatomical and physiological structure of an excitable tissue (e.g., cardiac tissue) in the body can produce different electrical activities through normal or abnormal behavior. Abnormalities of the excitable tissue, arising for various biological reasons, can lead to the formation of defects. Such defects can cause successive waves that may develop into additional self-organized beating behaviors such as spiral waves or target waves. In this study, the formation of defects and the resulting emitted waves in an excitable tissue are investigated. We consider a square-array network of neurons with nearest-neighbor connections to describe the excitable tissue. Fundamentally, the electrophysiological properties of ionic currents in the body are responsible for the exhibited electrical spatiotemporal patterns. More precisely, the fluctuation of accumulated ions inside and outside the cell causes variable electrical and magnetic fields. Considering the undeniable mutual effects of the electrical and magnetic fields, we propose a new Hindmarsh-Rose (HR) neuronal model for the local dynamics of each individual neuron in the network, in which the influence of magnetic flux on the membrane potential is defined. This improved model has more bifurcation parameters. Moreover, the dynamical behavior of the tissue is investigated in the quiescent, spiking, bursting and even chaotic states. The resulting spatiotemporal patterns are presented, and the time series of sampled neurons are displayed as well.

  8. Toward Petascale Biologically Plausible Neural Networks

    NASA Astrophysics Data System (ADS)

    Long, Lyle

    This talk will describe an approach to achieving petascale neural networks. Artificial intelligence has been oversold for many decades. Computers in the beginning could only do about 16,000 operations per second. Computer processing power, however, has been doubling every two years thanks to Moore's law, and growing even faster due to massively parallel architectures. Finally, 60 years after the first AI conference, we have computers on the order of the performance of the human brain (10^16 operations per second). The main issues now are algorithms, software, and learning. We have excellent models of neurons, such as the Hodgkin-Huxley model, but we do not know how the human neurons are wired together. With careful attention to efficient parallel computing, event-driven programming, table lookups, and memory minimization, massive scale simulations can be performed. The code that will be described was written in C++ and uses the Message Passing Interface (MPI). It uses the full Hodgkin-Huxley neuron model, not a simplified model. It also allows arbitrary network structures (deep, recurrent, convolutional, all-to-all, etc.). The code is scalable, and has, so far, been tested on up to 2,048 processor cores using 10^7 neurons and 10^9 synapses.

  9. Granger causality network reconstruction of conductance-based integrate-and-fire neuronal systems.

    PubMed

    Zhou, Douglas; Xiao, Yanyang; Zhang, Yaoyu; Xu, Zhiqin; Cai, David

    2014-01-01

    Reconstruction of anatomical connectivity from measured dynamical activities of coupled neurons is one of the fundamental issues in the understanding of structure-function relationship of neuronal circuitry. Many approaches have been developed to address this issue based on either electrical or metabolic data observed in experiment. The Granger causality (GC) analysis remains one of the major approaches to explore the dynamical causal connectivity among individual neurons or neuronal populations. However, it is yet to be clarified how such causal connectivity, i.e., the GC connectivity, can be mapped to the underlying anatomical connectivity in neuronal networks. We perform the GC analysis on the conductance-based integrate-and-fire (I&F) neuronal networks to obtain their causal connectivity. Through numerical experiments, we find that the underlying synaptic connectivity amongst individual neurons or subnetworks, can be successfully reconstructed by the GC connectivity constructed from voltage time series. Furthermore, this reconstruction is insensitive to dynamical regimes and can be achieved without perturbing systems and prior knowledge of neuronal model parameters. Surprisingly, the synaptic connectivity can even be reconstructed by merely knowing the raster of systems, i.e., spike timing of neurons. Using spike-triggered correlation techniques, we establish a direct mapping between the causal connectivity and the synaptic connectivity for the conductance-based I&F neuronal networks, and show the GC is quadratically related to the coupling strength. The theoretical approach we develop here may provide a framework for examining the validity of the GC analysis in other settings.
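
    Pairwise GC on time series reduces to comparing autoregressive prediction errors: y Granger-causes x if adding y's past shrinks the residual variance of x's autoregressive fit; a minimal least-squares sketch on synthetic data (model order and the toy coupling are illustrative, not the paper's estimator):

        import numpy as np

        def granger_causality(x, y, order=5):
            """GC from y to x: log ratio of restricted vs. full residual variances."""
            n = len(x)
            X_past = np.array([x[t - order:t][::-1] for t in range(order, n)])
            Y_past = np.array([y[t - order:t][::-1] for t in range(order, n)])
            target = x[order:]
            # restricted model: x's own past only
            res_r = target - X_past @ np.linalg.lstsq(X_past, target, rcond=None)[0]
            # full model: x's past plus y's past
            full = np.hstack([X_past, Y_past])
            res_f = target - full @ np.linalg.lstsq(full, target, rcond=None)[0]
            return np.log(np.var(res_r) / np.var(res_f))

        rng = np.random.default_rng(5)
        y = rng.normal(size=2000)
        x = np.zeros(2000)
        for t in range(1, 2000):             # x is driven by y with a one-step lag
            x[t] = 0.5 * x[t - 1] + 0.4 * y[t - 1] + 0.1 * rng.normal()
        print(granger_causality(x, y), granger_causality(y, x))   # large vs. near zero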

  11. Growth dynamics explain the development of spatiotemporal burst activity of young cultured neuronal networks in detail.

    PubMed

    Gritsun, Taras A; le Feber, Joost; Rutten, Wim L C

    2012-01-01

    A typical property of isolated cultured neuronal networks of dissociated rat cortical cells is synchronized spiking, called bursting, starting about one week after plating, when the dissociated cells have sufficiently extended their neurites and formed enough synaptic connections. This paper is the third in a series of three on simulation models of cultured networks. Our two previous studies [26], [27] have shown that random recurrent network activity models generate intra- and inter-burst patterns similar to experimental data. The networks were noise- or pacemaker-driven and had Izhikevich neuronal elements with only short-term plastic (STP) synapses (so no long-term potentiation, LTP, or depression, LTD, was included). However, the elevated pre-phases (burst leaders) and after-phases of burst main shapes, which usually arise during the development of the network, were not yet simulated in sufficient detail. This lack of detail may be due to the fact that the random models completely lacked network topology and a growth model. Therefore, the present paper adds, for the first time, a growth model to the activity model, to give the network a time-dependent topology and to explain burst shapes in more detail, again without LTP or LTD mechanisms. The integrated growth-activity model yielded realistic bursting patterns. The automatic adjustment of various mutually interdependent network parameters is one of the major advantages of our current approach. Spatio-temporal bursting activity was validated against experiment. Depending on network size, wave reverberation mechanisms were seen along the network boundaries, which may explain the generation of phases of elevated firing before and after the main phase of the burst shape. In summary, the results show that adding topology and growth explains burst shapes in great detail and suggests that young networks still lack, or do not need, LTP or LTD mechanisms.

  12. A neuron-astrocyte transistor-like model for neuromorphic dressed neurons.

    PubMed

    Valenza, G; Pioggia, G; Armato, A; Ferro, M; Scilingo, E P; De Rossi, D

    2011-09-01

    Experimental evidence on the role of synaptic glia as an active partner, together with the synapse, in neuronal signaling and the dynamics of neural tissue strongly suggests investigating more realistic neuron-glia models for a better understanding of human brain processing. Among the glial cells, astrocytes play a crucial role in the tripartite synapse, i.e., the dressed neuron. A well-known two-way astrocyte-neuron interaction can be found in the literature, completely revising the purely supportive role of the glia. The aim of this study is to provide a computationally efficient model for neuron-glia interaction. The neuron-glia interactions were simulated by implementing the Li-Rinzel model for an astrocyte and the Izhikevich model for a neuron. Assuming the dressed-neuron dynamics to be similar to the nonlinear input-output characteristics of a bipolar junction transistor, we derived our computationally efficient model. This model may represent the fundamental computational unit for the development of real-time artificial neuron-glia networks, opening new perspectives in pattern recognition systems and in brain neurophysiology. Copyright © 2011 Elsevier Ltd. All rights reserved.
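
    The neuronal half of this hybrid, the Izhikevich model, is compact enough to state in full; a minimal regular-spiking sketch (the Li-Rinzel astrocytic calcium dynamics and the transistor-like coupling are omitted, and the input current is illustrative):

        # Izhikevich model, regular-spiking cortical parameters
        a, b, c, d = 0.02, 0.2, -65.0, 8.0
        dt, T, I = 0.1, 1000.0, 10.0         # ms, ms, input current

        v, u = -65.0, b * -65.0
        spike_times = []
        for step in range(int(T / dt)):
            v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
            u += dt * a * (b * v - u)
            if v >= 30.0:                    # spike cutoff and reset
                spike_times.append(step * dt)
                v, u = c, u + d
        print(len(spike_times), "spikes")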

  13. Joint statistics of strongly correlated neurons via dimensionality reduction

    NASA Astrophysics Data System (ADS)

    Deniz, Taşkın; Rotter, Stefan

    2017-06-01

    The relative timing of action potentials in neurons recorded from local cortical networks often shows a non-trivial dependence, which is then quantified by cross-correlation functions. Theoretical models emphasize that such spike train correlations are an inevitable consequence of two neurons being part of the same network and sharing some synaptic input. For non-linear neuron models, however, explicit correlation functions are difficult to compute analytically, and perturbative methods work only for weak shared input. In order to treat strong correlations, we suggest here an alternative non-perturbative method. Specifically, we study the case of two leaky integrate-and-fire neurons with strong shared input. Correlation functions derived from simulated spike trains fit our theoretical predictions very accurately. Using our method, we computed the non-linear correlation transfer as well as correlation functions that are asymmetric due to inhomogeneous intrinsic parameters or unequal input.
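
    For concreteness, here is a short sketch of the empirical quantity the theory targets: a spike-train cross-correlogram estimated from two synthetic trains that share input. The construction and numbers are illustrative, not the paper's derivation.

```python
# Empirical cross-correlogram of two spike trains with a shared component.
import numpy as np

def cross_correlogram(t1, t2, window=50.0, bin_size=1.0):
    """Histogram of spike-time differences t2 - t1 within +/- window (ms)."""
    edges = np.arange(-window, window + bin_size, bin_size)
    diffs = (t2[None, :] - t1[:, None]).ravel()
    counts, _ = np.histogram(diffs[np.abs(diffs) <= window], bins=edges)
    return edges[:-1] + bin_size / 2, counts

rng = np.random.default_rng(1)
shared = np.sort(rng.uniform(0, 10_000, 200))        # shared-input spikes (ms)
t1 = np.sort(np.concatenate([shared, rng.uniform(0, 10_000, 300)]))
t2 = np.sort(np.concatenate([shared + 2.0, rng.uniform(0, 10_000, 300)]))
lags, counts = cross_correlogram(t1, t2)
print("peak lag (ms):", lags[np.argmax(counts)])     # expect roughly +2 ms
```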

  14. Spiking neural network model for memorizing sequences with forward and backward recall.

    PubMed

    Borisyuk, Roman; Chik, David; Kazanovich, Yakov; da Silva Gomes, João

    2013-06-01

    We present an oscillatory network of conductance-based spiking neurons of the Hodgkin-Huxley type as a model of memory storage and retrieval of sequences of events (or objects). The model is inspired by psychological and neurobiological evidence on sequential memories. The building block of the model is an oscillatory module which contains excitatory and inhibitory neurons with all-to-all connections. The connection architecture comprises two layers. A lower layer represents consecutive events during their storage and recall. This layer is composed of oscillatory modules. Plastic excitatory connections between the modules are implemented using an STDP-type learning rule for sequential storage. Excitatory neurons in the upper layer project star-like modifiable connections toward the excitatory lower-layer neurons. These upper-layer neurons are used to tag sequences of events represented in the lower layer. Computer simulations demonstrate good performance of the model, including difficult cases in which different sequences contain overlapping events. We show that the model with STDP-type or anti-STDP-type learning rules can simulate forward and backward replay of neural spikes, respectively. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
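
    As a reference point for the plasticity mentioned above, here is the generic pair-based STDP kernel; the paper's exact rule and constants may differ, and its backward-recall variant simply flips the sign (anti-STDP).

```python
# Generic pair-based STDP kernel (a hedged sketch, not the paper's exact rule).
import numpy as np

A_plus, A_minus = 0.01, 0.012   # potentiation / depression amplitudes
tau_plus = tau_minus = 20.0     # time constants (ms)

def stdp_dw(dt):
    """Weight change for a spike-time difference dt = t_post - t_pre (ms)."""
    if dt > 0:                                  # pre before post: potentiate
        return A_plus * np.exp(-dt / tau_plus)
    return -A_minus * np.exp(dt / tau_minus)    # post before pre: depress

# Anti-STDP flips the sign, as used for backward replay in the model.
anti_stdp_dw = lambda dt: -stdp_dw(dt)
print(stdp_dw(5.0), stdp_dw(-5.0))
```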

  15. Nonlinear Maps for Design of Discrete-Time Models of Neuronal Network Dynamics

    DTIC Science & Technology

    2016-03-31

    ...simulations is to design a neuronal model in the form of difference equations that generates neuronal states in discrete moments of time. In this... responsive firing patterns. We propose to use modern DSP ideas to develop new efficient approaches to the design of such discrete-time models for...
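
    The report's specific maps are not given in this record, so as a stand-in the sketch below iterates the Rulkov map, a well-known two-variable difference-equation neuron model of the kind described (a fast spiking variable coupled to a slow variable). Parameter values are illustrative and not tuned.

```python
# Rulkov map: a discrete-time neuron model defined purely by difference
# equations (Rulkov, 2001). Parameters here are illustrative, not the report's.
alpha, mu, sigma = 4.5, 0.001, -0.5
x, y = -1.0, -3.0
xs = []
for _ in range(5000):
    # Both updates use the current x_n, hence the simultaneous assignment.
    x, y = alpha / (1.0 + x * x) + y, y - mu * (x + 1.0) + mu * sigma
    xs.append(x)
print("range of fast variable:", min(xs), max(xs))
```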

  16. NEVESIM: event-driven neural simulation framework with a Python interface.

    PubMed

    Pecevski, Dejan; Kappel, David; Jonke, Zeno

    2014-01-01

    NEVESIM is a software package for event-driven simulation of networks of spiking neurons with a fast simulation core in C++, and a scripting user interface in the Python programming language. It supports simulation of heterogeneous networks with different types of neurons and synapses, and can be easily extended by the user with new neuron and synapse types. To enable heterogeneous networks and extensibility, NEVESIM is designed to decouple the simulation logic of communicating events (spikes) between the neurons at a network level from the implementation of the internal dynamics of individual neurons. In this paper we will present the simulation framework of NEVESIM, its concepts and features, as well as some aspects of the object-oriented design approaches and simulation strategies that were utilized to efficiently implement the concepts and functionalities of the framework. We will also give an overview of the Python user interface, its basic commands and constructs, and also discuss the benefits of integrating NEVESIM with Python. One of the valuable capabilities of the simulator is to simulate exactly and efficiently networks of stochastic spiking neurons from the recently developed theoretical framework of neural sampling. This functionality was implemented as an extension on top of the basic NEVESIM framework. Altogether, the intended purpose of the NEVESIM framework is to provide a basis for further extensions that support simulation of various neural network models incorporating different neuron and synapse types that can potentially also use different simulation strategies.

  17. NEVESIM: event-driven neural simulation framework with a Python interface

    PubMed Central

    Pecevski, Dejan; Kappel, David; Jonke, Zeno

    2014-01-01

    NEVESIM is a software package for event-driven simulation of networks of spiking neurons with a fast simulation core in C++, and a scripting user interface in the Python programming language. It supports simulation of heterogeneous networks with different types of neurons and synapses, and can be easily extended by the user with new neuron and synapse types. To enable heterogeneous networks and extensibility, NEVESIM is designed to decouple the simulation logic of communicating events (spikes) between the neurons at a network level from the implementation of the internal dynamics of individual neurons. In this paper we will present the simulation framework of NEVESIM, its concepts and features, as well as some aspects of the object-oriented design approaches and simulation strategies that were utilized to efficiently implement the concepts and functionalities of the framework. We will also give an overview of the Python user interface, its basic commands and constructs, and also discuss the benefits of integrating NEVESIM with Python. One of the valuable capabilities of the simulator is to simulate exactly and efficiently networks of stochastic spiking neurons from the recently developed theoretical framework of neural sampling. This functionality was implemented as an extension on top of the basic NEVESIM framework. Altogether, the intended purpose of the NEVESIM framework is to provide a basis for further extensions that support simulation of various neural network models incorporating different neuron and synapse types that can potentially also use different simulation strategies. PMID:25177291
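
    To make the event-driven strategy concrete (this sketches the general idea, not NEVESIM's actual API), the toy simulator below keeps spikes in a priority queue and updates each neuron's state exactly, only when an event reaches it; all parameters are invented.

```python
# Toy event-driven simulator: a ring of leaky units driven by queued spikes.
import heapq
import math

N, tau, w, delay, threshold = 5, 20.0, 1.2, 1.0, 1.0
v = [0.0] * N
last_update = [0.0] * N
events = [(0.0, 0)]                 # (time, target neuron): one seed input
spike_log = []

while events and events[0][0] < 100.0:
    t, i = heapq.heappop(events)
    # Exact exponential decay since the last event: no fixed-timestep loop.
    v[i] = v[i] * math.exp(-(t - last_update[i]) / tau) + w
    last_update[i] = t
    if v[i] >= threshold:           # spike: reset and deliver to the next neuron
        v[i] = 0.0
        spike_log.append((t, i))
        heapq.heappush(events, (t + delay, (i + 1) % N))

print(f"{len(spike_log)} spikes; first five: {spike_log[:5]}")
```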

  18. Dynamics of neuromodulatory feedback determines frequency modulation in a reduced respiratory network: a computational study.

    PubMed

    Toporikova, Natalia; Butera, Robert J

    2013-02-01

    Neuromodulators, such as amines and neuropeptides, alter the activity of neurons and neuronal networks. In this work, we investigate how neuromodulators, which activate G(q)-protein second messenger systems, can modulate the bursting frequency of neurons in a critical portion of the respiratory neural network, the pre-Bötzinger complex (preBötC). These neurons are a vital part of the ponto-medullary neuronal network, which generates a stable respiratory rhythm whose frequency is regulated by neuromodulator release from the nearby Raphe nucleus. Using a simulated 50-cell network of excitatory preBötC neurons with a heterogeneous distribution of persistent sodium conductance and Ca(2+), we determined conditions for frequency modulation in such a network by simulating the interaction between the Raphe and preBötC nuclei. We found that the positive feedback between the Raphe excitability and preBötC activity induces frequency modulation in the preBötC neurons. In addition, the frequency of the respiratory rhythm can be regulated via phasic release of excitatory neuromodulators from the Raphe nucleus. We predict that the application of a G(q) antagonist will eliminate this frequency modulation by the Raphe and keep the network frequency constant and low. In contrast, application of a G(q) agonist will result in a high frequency for all levels of Raphe stimulation. Our modeling results also suggest that the high [K(+)] requirement in respiratory brain slice experiments may serve as a compensatory mechanism for low neuromodulatory tone. Copyright © 2012 Elsevier B.V. All rights reserved.

  19. Computational modeling of spiking neural network with learning rules from STDP and intrinsic plasticity

    NASA Astrophysics Data System (ADS)

    Li, Xiumin; Wang, Wei; Xue, Fangzheng; Song, Yongduan

    2018-02-01

    Recently there has been continuously increasing interest in building computational models of spiking neural networks (SNN), such as the Liquid State Machine (LSM). Biologically inspired self-organized neural networks with neural plasticity can enhance computational performance, with the characteristic features of dynamical memory and recurrent connection cycles that distinguish them from the more widely used feedforward neural networks. Although a variety of computational models for brain-like learning and information processing have been proposed, the modeling of self-organized neural networks with multiple forms of neural plasticity is still an important open challenge. The main difficulties lie in the interplay among different neural plasticity rules and in understanding how the structure and dynamics of neural networks shape computational performance. In this paper, we propose a novel approach to developing LSM models with a biologically inspired self-organizing network based on two neural plasticity learning rules. The connectivity among excitatory neurons is adapted by spike-timing-dependent plasticity (STDP) learning; meanwhile, the degrees of neuronal excitability are regulated to maintain a moderate average activity level by another learning rule: intrinsic plasticity (IP). Our study shows that LSM with STDP+IP performs better than LSM with a random SNN or an SNN obtained by STDP alone. The noticeable improvement with the proposed method arises because the developed SNN model better reflects competition among neurons and, through its learning and self-organizing mechanisms, encodes and processes relevant dynamic information more effectively. This result gives insight into the optimization of computational models of spiking neural networks with neural plasticity.
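
    A minimal homeostatic sketch of the IP idea (the paper's actual IP rule may differ in form): each neuron adjusts its own threshold so that a running estimate of its firing rate approaches a target level.

```python
# Homeostatic intrinsic plasticity: per-neuron thresholds track a target rate.
import numpy as np

rng = np.random.default_rng(0)
n, target_rate, eta = 50, 0.1, 0.01
theta = np.ones(n)                          # per-neuron firing thresholds
rate = np.zeros(n)                          # running activity estimate
for step in range(5000):
    drive = rng.standard_normal(n) + 0.5    # stand-in for synaptic input
    spikes = (drive > theta).astype(float)
    rate = 0.99 * rate + 0.01 * spikes      # exponential rate estimate
    theta += eta * (rate - target_rate)     # too active -> raise threshold
print("mean rate:", rate.mean(), "(target", target_rate, ")")
```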

  20. Cortical network modeling: analytical methods for firing rates and some properties of networks of LIF neurons.

    PubMed

    Tuckwell, Henry C

    2006-01-01

    The circuitry of cortical networks involves interacting populations of excitatory (E) and inhibitory (I) neurons whose relationships are now known to a large extent. Inputs to E- and I-cells may have their origins in remote or local cortical areas. We consider a rudimentary model involving E- and I-cells. One of our goals is to test an analytic approach to finding firing rates in neural networks without using a diffusion approximation, and to this end we consider in detail networks of excitatory neurons with leaky integrate-and-fire (LIF) dynamics. A simple measure of synchronization, denoted S(q), with q between 0 and 100, is introduced. Fully connected E-networks have a strong tendency to become dominated by synchronously firing groups of cells, except when inputs are relatively weak. We observed random or asynchronous firing in such networks with diverse sets of parameter values. When such firing patterns were found, the analytical approach was often able to accurately predict average neuronal firing rates. We also considered several properties of E-E networks, distinguishing several kinds of firing pattern, including those with silences before or after periods of intense activity and those with periodic synchronization. We investigated the occurrence of synchronized firing with respect to changes in the internal excitatory postsynaptic potential (EPSP) magnitude in a network of 100 neurons with fixed values of the remaining parameters. When the internal EPSP size was less than a certain value, synchronization was absent. The amount of synchronization then increased slowly as the EPSP amplitude increased until, at a particular EPSP size, the amount of synchronization abruptly increased, with S(5) attaining the maximum value of 100%. We also found network frequency-transfer characteristics for various network sizes, with a linear dependence of firing frequency over wide ranges of the external afferent frequency and non-linear effects at lower input frequencies. The theory may also be applied to sparsely connected networks, whose firing behaviour was found to change abruptly as the probability of a connection passed through a critical value. The analytical method was also found to be useful for a feed-forward excitatory network and for a network of excitatory and inhibitory neurons.
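
    The abstract does not define S(q) precisely, so the following is only a measure in the same spirit: the percentage of spikes falling in time bins where at least q% of the population fires together.

```python
# An S(q)-like synchrony measure on a binned spike raster (illustrative only).
import numpy as np

def synchrony(spike_matrix, q):
    """spike_matrix: (neurons, bins) 0/1 array; q: percentage threshold."""
    active_pct = 100.0 * spike_matrix.mean(axis=0)     # % of cells firing per bin
    in_sync = spike_matrix[:, active_pct >= q].sum()   # spikes in synchronous bins
    total = spike_matrix.sum()
    return 100.0 * in_sync / total if total else 0.0

rng = np.random.default_rng(2)
async_net = (rng.random((100, 500)) < 0.02).astype(int)   # independent firing
sync_net = np.tile((rng.random(500) < 0.02).astype(int), (100, 1))
print(synchrony(async_net, 5))   # modest: few bins reach 5% by chance
print(synchrony(sync_net, 5))    # 100.0: every spike is population-wide
```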

  1. Single-hidden-layer feed-forward quantum neural network based on Grover learning.

    PubMed

    Liu, Cheng-Yi; Chen, Chein; Chang, Ching-Ter; Shih, Lun-Min

    2013-09-01

    In this paper, a novel single-hidden-layer feed-forward quantum neural network model is proposed based on concepts and principles of quantum theory. By combining the quantum mechanism with the feed-forward neural network, we define quantum hidden neurons and connected quantum weights, and use them as the fundamental information processing units in a single-hidden-layer feed-forward neural network. The quantum neurons allow a wide range of nonlinear functions to serve as activation functions in the hidden layer of the network, and the Grover search algorithm iteratively singles out the optimal parameter setting, making very efficient neural network learning possible. The quantum neurons and weights, along with Grover-search-based learning, result in a novel and efficient neural network characterized by a reduced network size, highly efficient training, and promising prospects for future application. Simulations were carried out to investigate the performance of the proposed quantum network, and the results show that it can achieve accurate learning. Copyright © 2013 Elsevier Ltd. All rights reserved.
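
    For readers unfamiliar with the search primitive this learning builds on, below is the standard Grover iteration on a toy unstructured search problem, simulated with a plain numpy amplitude vector; it sketches the primitive only, not the paper's network or training procedure.

```python
# Standard Grover amplitude amplification on a toy search space.
import numpy as np

n_items, target = 64, 13
amp = np.full(n_items, 1.0 / np.sqrt(n_items))     # uniform superposition
n_iter = int(np.floor(np.pi / 4 * np.sqrt(n_items)))
for _ in range(n_iter):
    amp[target] *= -1.0                            # oracle: flip target's sign
    amp = 2.0 * amp.mean() - amp                   # diffusion: invert about mean
print(f"P(target) after {n_iter} iterations: {amp[target]**2:.3f}")
```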

  2. Dynamics of moment neuronal networks.

    PubMed

    Feng, Jianfeng; Deng, Yingchun; Rossoni, Enrico

    2006-04-01

    A theoretical framework is developed for moment neuronal networks (MNNs). Within this framework, the behavior of the system of spiking neurons is specified in terms of the first- and second-order statistics of their interspike intervals, i.e., the mean, the variance, and the cross correlations of spike activity. Since neurons emit and receive spike trains which can be described by renewal--but generally non-Poisson--processes, we first derive a suitable diffusion-type approximation of such processes. Two approximation schemes are introduced: the usual approximation scheme (UAS) and the Ornstein-Uhlenbeck scheme. It is found that both schemes approximate well the input-output characteristics of spiking models such as the IF and the Hodgkin-Huxley models. The MNN framework is then developed according to the UAS scheme, and its predictions are tested on a few examples.

  3. Towards deep learning with segregated dendrites

    PubMed Central

    Guerguiev, Jordan; Lillicrap, Timothy P

    2017-01-01

    Deep learning has led to significant advances in artificial intelligence, in part, by adopting strategies motivated by neurophysiology. However, it is unclear whether deep learning could occur in the real brain. Here, we show that a deep learning algorithm that utilizes multi-compartment neurons might help us to understand how the neocortex optimizes cost functions. Like neocortical pyramidal neurons, neurons in our model receive sensory information and higher-order feedback in electrotonically segregated compartments. Thanks to this segregation, neurons in different layers of the network can coordinate synaptic weight updates. As a result, the network learns to categorize images better than a single layer network. Furthermore, we show that our algorithm takes advantage of multilayer architectures to identify useful higher-order representations—the hallmark of deep learning. This work demonstrates that deep learning can be achieved using segregated dendritic compartments, which may help to explain the morphology of neocortical pyramidal neurons. PMID:29205151

  4. Towards deep learning with segregated dendrites.

    PubMed

    Guerguiev, Jordan; Lillicrap, Timothy P; Richards, Blake A

    2017-12-05

    Deep learning has led to significant advances in artificial intelligence, in part, by adopting strategies motivated by neurophysiology. However, it is unclear whether deep learning could occur in the real brain. Here, we show that a deep learning algorithm that utilizes multi-compartment neurons might help us to understand how the neocortex optimizes cost functions. Like neocortical pyramidal neurons, neurons in our model receive sensory information and higher-order feedback in electrotonically segregated compartments. Thanks to this segregation, neurons in different layers of the network can coordinate synaptic weight updates. As a result, the network learns to categorize images better than a single layer network. Furthermore, we show that our algorithm takes advantage of multilayer architectures to identify useful higher-order representations-the hallmark of deep learning. This work demonstrates that deep learning can be achieved using segregated dendritic compartments, which may help to explain the morphology of neocortical pyramidal neurons.
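
    A conceptual sketch of the segregation idea (assumptions throughout; this is not the authors' learning algorithm): feedforward input drives a basal compartment, top-down feedback drives an apical compartment, and only the apical signal steers plasticity.

```python
# Two-compartment rate unit with electrotonically segregated inputs (sketch).
import numpy as np

rng = np.random.default_rng(3)
n_in, n_hidden, n_out = 20, 10, 2
W_ff = rng.standard_normal((n_hidden, n_in)) * 0.1    # bottom-up weights
W_fb = rng.standard_normal((n_hidden, n_out)) * 0.1   # top-down feedback weights

x = rng.standard_normal(n_in)          # sensory input
top = rng.standard_normal(n_out)       # higher-layer activity (teaching signal)
basal = W_ff @ x                       # basal compartment potential
apical = W_fb @ top                    # apical compartment potential
soma = np.tanh(basal)                  # somatic output from bottom-up drive only

# The apical (feedback) compartment gates plasticity without being conflated
# with the feedforward drive: the key role of segregation.
W_ff += 0.01 * np.outer(apical, x)
```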

  5. Software for Brain Network Simulations: A Comparative Study

    PubMed Central

    Tikidji-Hamburyan, Ruben A.; Narayana, Vikram; Bozkus, Zeki; El-Ghazawi, Tarek A.

    2017-01-01

    Numerical simulations of brain networks are a critical part of our efforts in understanding brain functions under pathological and normal conditions. For several decades, the community has developed many software packages and simulators to accelerate research in computational neuroscience. In this article, we select the three most popular simulators as determined by the number of models in the ModelDB database, namely NEURON, GENESIS, and BRIAN, and perform an independent evaluation of these simulators. In addition, we study NEST, one of the leading simulators of the Human Brain Project. First, we study them based on one of the most important characteristics, the range of supported models. Our investigation reveals that brain network simulators may be biased toward supporting a specific set of models. However, all simulators tend to expand the supported range of models by providing a universal environment for the computational study of individual neurons and brain networks. Next, our investigations on the characteristics of computational architecture and efficiency indicate that all simulators compile the most computationally intensive procedures into binary code, with the aim of maximizing their computational performance. However, not all simulators provide the simplest method for module development and/or guarantee efficient binary code. Third, a study of their amenability for high-performance computing reveals that NEST can almost transparently map an existing model on a cluster or multicore computer, while NEURON requires code modification if the model developed for a single computer has to be mapped on a computational cluster. Interestingly, parallelization is the weakest characteristic of BRIAN, which provides no support for cluster computations and limited support for multicore computers. Fourth, we identify the level of user support and frequency of usage for all simulators. Finally, we carry out an evaluation using two case studies: a large network with simplified neural and synaptic models and a small network with detailed models. These two case studies allow us to avoid any bias toward a particular software package. The results indicate that BRIAN provides the most concise language for both cases considered. Furthermore, as expected, NEST mostly favors large network models, while NEURON is better suited for detailed models. Overall, the case studies reinforce our general observation that simulators are biased in computational performance toward specific types of brain network models. PMID:28775687

  6. A Small World of Neuronal Synchrony

    PubMed Central

    Yu, Shan; Huang, Debin; Singer, Wolf

    2008-01-01

    A small-world network has been suggested to be an efficient solution for achieving both modular and global processing—a property highly desirable for brain computations. Here, we investigated functional networks of cortical neurons using correlation analysis to identify functional connectivity. To reconstruct the interaction network, we applied the Ising model based on the principle of maximum entropy. This allowed us to infer the interactions from measured pairwise correlations and to estimate the strength of coupling from the degree of synchrony. Visual responses were recorded in visual cortex of anesthetized cats, simultaneously from up to 24 neurons. First, pairwise correlations captured most of the patterns in the population's activity and, therefore, provided a reliable basis for the reconstruction of the interaction networks. Second, and most importantly, the resulting networks had small-world properties; the average path lengths were as short as in simulated random networks, but the clustering coefficients were larger. Neurons differed considerably with respect to the number and strength of interactions, suggesting the existence of “hubs” in the network. Notably, there was no evidence for scale-free properties. These results suggest that cortical networks are optimized for the coexistence of local and global computations: feature detection and feature integration or binding. PMID:18400792
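
    A quick sketch of the small-world comparison the authors report, using networkx: a Watts-Strogatz graph has roughly the path length of a matched random graph but a larger clustering coefficient. Graph sizes and parameters here are illustrative.

```python
# Small-world vs matched random graph: clustering and average path length.
import networkx as nx

n, k, p = 24, 4, 0.1                                # 24 nodes, as recorded
sw = nx.connected_watts_strogatz_graph(n, k, p, seed=0)
rnd = nx.gnm_random_graph(n, sw.number_of_edges(), seed=0)

for name, g in [("small-world", sw), ("random", rnd)]:
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    print(name,
          "clustering:", round(nx.average_clustering(g), 3),
          "path length:", round(nx.average_shortest_path_length(giant), 3))
```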

  7. Role of Ongoing, Intrinsic Activity of Neuronal Populations for Quantitative Neuroimaging of Functional Magnetic Resonance Imaging–Based Networks

    PubMed Central

    Herman, Peter; Sanganahalli, Basavaraju G.; Coman, Daniel; Blumenfeld, Hal; Rothman, Douglas L.

    2011-01-01

    A primary objective in neuroscience is to determine how neuronal populations process information within networks. In humans and animal models, functional magnetic resonance imaging (fMRI) is gaining increasing popularity for network mapping. Although neuroimaging with fMRI—conducted with or without tasks—is actively discovering new brain networks, current fMRI data analysis schemes disregard the importance of the total neuronal activity in a region. In task fMRI experiments, the baseline is differenced away to disclose areas of small evoked changes in the blood oxygenation level-dependent (BOLD) signal. In resting-state fMRI experiments, the spotlight is on regions revealed by correlations of tiny fluctuations in the baseline (or spontaneous) BOLD signal. Interpretation of fMRI-based networks is obscured further because the BOLD signal indirectly reflects neuronal activity, and difference/correlation maps are thresholded. Since the small changes of BOLD signal typically observed in cognitive fMRI experiments represent a minimal fraction of the total energy/activity in a given area, the relevance of fMRI-based networks is uncertain, because the majority of neuronal energy/activity is ignored. Thus, an alternative for quantitative neuroimaging of fMRI-based networks is a perspective in which the activity of a neuronal population is accounted for by the demanded oxidative energy (CMRO2). In this article, we argue that network mapping can be improved by including the baseline neuronal energy/activity together with the small differences/fluctuations of the BOLD signal. Total energy/activity information can be obtained through the use of calibrated fMRI to quantify ΔCMRO2 and through resting-state positron emission tomography/magnetic resonance spectroscopy measurements of average CMRO2. PMID:22433047

  8. Transient sequences in a hypernetwork generated by an adaptive network of spiking neurons.

    PubMed

    Maslennikov, Oleg V; Shchapin, Dmitry S; Nekorkin, Vladimir I

    2017-06-28

    We propose a model of an adaptive network of spiking neurons that gives rise to a hypernetwork of its dynamic states at the upper level of description. Left to itself, the network exhibits a sequence of transient clustering events, which corresponds to traffic in the hypernetwork in the form of a random walk. When receiving inputs, the system is able to generate reproducible sequences corresponding to stimulus-specific paths in the hypernetwork. We illustrate these basic notions with a simple network of discrete-time spiking neurons together with its FPGA realization and analyse their properties. This article is part of the themed issue 'Mathematical methods in medicine: neuroscience, cardiology and pathology'. © 2017 The Author(s).

  9. Fitting of dynamic recurrent neural network models to sensory stimulus-response data.

    PubMed

    Doruk, R Ozgur; Zhang, Kechen

    2018-06-02

    We present a theoretical study aiming at model fitting for sensory neurons. Conventional neural network training approaches are not applicable to this problem due to the lack of continuous data. Although the stimulus can be considered a smooth time-dependent variable, the associated response is a set of neural spike timings (roughly the instants of successive action potential peaks) that carry no amplitude information. A recurrent neural network model can be fitted to such a stimulus-response data pair by using the maximum likelihood estimation method, where the likelihood function is derived from the Poisson statistics of neural spiking. The universal approximation property of recurrent dynamical neural network models allows us to describe the excitatory-inhibitory characteristics of an actual sensory neural network with any desired number of neurons. The stimulus data are generated by a phased cosine Fourier series with a fixed amplitude and frequency but a randomly drawn phase. Various values of amplitude, stimulus component size, and sample size are applied in order to examine the effect of the stimulus on the identification process. Results are presented in tabular and graphical forms at the end of this text. In addition, to demonstrate the success of this research, our results are compared with those of a study involving the same model, nominal parameters, and stimulus structure, and with another study that works on different models.
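
    The core objective in this kind of fitting is the inhomogeneous-Poisson log-likelihood of the observed spike times. The sketch below evaluates it on a dense time grid for a made-up cosine rate; in the paper the rate would come from the recurrent network model itself.

```python
# Inhomogeneous-Poisson log-likelihood: sum of log(rate) at spike times
# minus the integral of the rate over the observation window.
import numpy as np

dt = 0.001                                              # grid step (s)
t = np.arange(0.0, 2.0, dt)
rate = 20.0 + 15.0 * np.cos(2 * np.pi * 1.5 * t + 0.7)  # Hz; always positive
spike_times = np.array([0.12, 0.30, 0.55, 1.10, 1.42])

def poisson_loglik(spike_times, rate, t, dt):
    idx = np.searchsorted(t, spike_times)
    return np.sum(np.log(rate[idx])) - np.sum(rate) * dt

print("log-likelihood:", poisson_loglik(spike_times, rate, t, dt))
```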

  10. A neural network model of ventriloquism effect and aftereffect.

    PubMed

    Magosso, Elisa; Cuppini, Cristiano; Ursino, Mauro

    2012-01-01

    Presenting simultaneous but spatially discrepant visual and auditory stimuli induces a perceptual translocation of the sound towards the visual input, the ventriloquism effect. The general explanation is that vision tends to dominate over audition because of its higher spatial reliability. The underlying neural mechanisms remain unclear. We address this question via a biologically inspired neural network. The model contains two layers of unimodal visual and auditory neurons, with visual neurons having higher spatial resolution than auditory ones. Neurons within each layer communicate via lateral intra-layer synapses; neurons across layers are connected via inter-layer connections. The network accounts for the ventriloquism effect, ascribing it to a positive feedback between the visual and auditory neurons, triggered by residual auditory activity at the position of the visual stimulus. The main results are: i) the less localized stimulus is strongly biased toward the more localized stimulus and not vice versa; ii) the amount of the ventriloquism effect changes with visual-auditory spatial disparity; iii) ventriloquism is a robust behavior of the network with respect to changes in parameter values. Moreover, the model implements Hebbian rules for potentiation and depression of lateral synapses to explain the ventriloquism aftereffect (that is, the enduring sound shift after exposure to spatially disparate audio-visual stimuli). By adaptively changing the weights of lateral synapses during cross-modal stimulation, the model produces post-adaptive shifts of auditory localization that agree with in-vivo observations. The model demonstrates that two reciprocally interconnected unimodal layers may explain the ventriloquism effect and aftereffect, even without the presence of any convergent multimodal area. The proposed study may provide advancement in understanding the neural architecture and mechanisms at the basis of visual-auditory integration in the spatial realm.

  11. Carbon nanotubes might improve neuronal performance by favouring electrical shortcuts.

    PubMed

    Cellot, Giada; Cilia, Emanuele; Cipollone, Sara; Rancic, Vladimir; Sucapane, Antonella; Giordani, Silvia; Gambazzi, Luca; Markram, Henry; Grandolfo, Micaela; Scaini, Denis; Gelain, Fabrizio; Casalis, Loredana; Prato, Maurizio; Giugliano, Michele; Ballerini, Laura

    2009-02-01

    Carbon nanotubes have been applied in several areas of nerve tissue engineering to probe and augment cell behaviour, to label and track subcellular components, and to study the growth and organization of neural networks. Recent reports show that nanotubes can sustain and promote neuronal electrical activity in networks of cultured cells, but the ways in which they affect cellular function are still poorly understood. Here, we show, using single-cell electrophysiology techniques, electron microscopy analysis and theoretical modelling, that nanotubes improve the responsiveness of neurons by forming tight contacts with the cell membranes that might favour electrical shortcuts between the proximal and distal compartments of the neuron. We propose the 'electrotonic hypothesis' to explain the physical interactions between the cell and nanotube, and the mechanisms of how carbon nanotubes might affect the collective electrical activity of cultured neuronal networks. These considerations offer a perspective that would allow us to predict or engineer interactions between neurons and carbon nanotubes.

  12. Untangling Basal Ganglia Network Dynamics and Function: Role of Dopamine Depletion and Inhibition Investigated in a Spiking Network Model.

    PubMed

    Lindahl, Mikael; Hellgren Kotaleski, Jeanette

    2016-01-01

    The basal ganglia are a crucial brain system for behavioral selection, and their function is disturbed in Parkinson's disease (PD), where neurons exhibit inappropriate synchronization and oscillations. We present a spiking neural model of basal ganglia including plausible details on synaptic dynamics, connectivity patterns, neuron behavior, and dopamine effects. Recordings of neuronal activity in the subthalamic nucleus and Type A (TA; arkypallidal) and Type I (TI; prototypical) neurons in globus pallidus externa were used to validate the model. Simulation experiments predict that both local inhibition in striatum and the existence of an indirect pathway are important for basal ganglia to function properly over a large range of cortical drives. The dopamine depletion-induced increase of AMPA efficacy in corticostriatal synapses to medium spiny neurons (MSNs) with dopamine receptor D2 synapses (CTX-MSN D2) and the reduction of MSN lateral connectivity (MSN-MSN) were found to contribute significantly to the enhanced synchrony and oscillations seen in PD. Additionally, reversing the dopamine depletion-induced changes to CTX-MSN D1, CTX-MSN D2, TA-MSN, and MSN-MSN couplings could improve or restore basal ganglia action selection ability. In summary, we found multiple changes of parameters for synaptic efficacy and neural excitability that could improve action selection ability and at the same time reduce oscillations. Identification of such targets could potentially generate ideas for treatments of PD and increase our understanding of the relation between network dynamics and network function.

  13. A Novel Form of Compensation in the Tg2576 Amyloid Mouse Model of Alzheimer’s Disease

    PubMed Central

    Somogyi, Attila; Katonai, Zoltán; Alpár, Alán; Wolf, Ervin

    2016-01-01

    One century after its first description, the pathology of Alzheimer’s disease (AD) is still poorly understood. Amyloid-related dendritic atrophy and membrane alterations of susceptible brain neurons in AD, and in animal models of AD, are widely recognized. However, little effort has been made to study the potential effects of combined morphological and membrane alterations on signal transfer and synaptic integration in neurons that build up affected neural networks in AD. In this study, spatial reconstructions and electrophysiological measurements of layer II/III pyramidal neurons of the somatosensory cortex from wild-type (WT) and transgenic (TG) human amyloid precursor protein (hAPP) overexpressing Tg2576 mice were used to build faithful segmental cable models of these neurons. Local synaptic activities were simulated at various points of the dendritic arbors, and properties of subthreshold dendritic impulse propagation and predictors of synaptic input pattern recognition ability were quantified and compared in modeled WT and TG neurons. Despite the widespread dendritic degeneration and membrane alterations in mutant mouse neurons, surprisingly little or no change was detected in steady-state and 50 Hz sinusoidal voltage transfers, current transfers, and local and propagation delays of PSPs traveling along dendrites of TG neurons. Synaptic input pattern recognition ability was also predicted to be unaltered in TG neurons in the two different soma-dendritic membrane models investigated. Our simulations show how subthreshold dendritic signaling and pattern recognition are preserved in TG neurons: amyloid-related membrane alterations compensate for the pathological effects that dendritic atrophy has on subthreshold dendritic signal transfer and integration in layer II/III somatosensory neurons of this hAPP mouse model for AD. Since neither propagation of single PSPs nor integration of multiple PSPs (pattern recognition) changes in TG neurons, we conclude that AD-related neuronal hyperexcitability cannot be accounted for by altered subthreshold dendritic signaling in these neurons, but rather that hyperexcitability is related to changes in active membrane properties and network connectivity. PMID:27378850

  14. A Novel Form of Compensation in the Tg2576 Amyloid Mouse Model of Alzheimer's Disease.

    PubMed

    Somogyi, Attila; Katonai, Zoltán; Alpár, Alán; Wolf, Ervin

    2016-01-01

    One century after its first description, the pathology of Alzheimer's disease (AD) is still poorly understood. Amyloid-related dendritic atrophy and membrane alterations of susceptible brain neurons in AD, and in animal models of AD, are widely recognized. However, little effort has been made to study the potential effects of combined morphological and membrane alterations on signal transfer and synaptic integration in neurons that build up affected neural networks in AD. In this study, spatial reconstructions and electrophysiological measurements of layer II/III pyramidal neurons of the somatosensory cortex from wild-type (WT) and transgenic (TG) human amyloid precursor protein (hAPP) overexpressing Tg2576 mice were used to build faithful segmental cable models of these neurons. Local synaptic activities were simulated at various points of the dendritic arbors, and properties of subthreshold dendritic impulse propagation and predictors of synaptic input pattern recognition ability were quantified and compared in modeled WT and TG neurons. Despite the widespread dendritic degeneration and membrane alterations in mutant mouse neurons, surprisingly little or no change was detected in steady-state and 50 Hz sinusoidal voltage transfers, current transfers, and local and propagation delays of PSPs traveling along dendrites of TG neurons. Synaptic input pattern recognition ability was also predicted to be unaltered in TG neurons in the two different soma-dendritic membrane models investigated. Our simulations show how subthreshold dendritic signaling and pattern recognition are preserved in TG neurons: amyloid-related membrane alterations compensate for the pathological effects that dendritic atrophy has on subthreshold dendritic signal transfer and integration in layer II/III somatosensory neurons of this hAPP mouse model for AD. Since neither propagation of single PSPs nor integration of multiple PSPs (pattern recognition) changes in TG neurons, we conclude that AD-related neuronal hyperexcitability cannot be accounted for by altered subthreshold dendritic signaling in these neurons, but rather that hyperexcitability is related to changes in active membrane properties and network connectivity.
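
    For orientation, the voltage-transfer comparisons above rest on passive cable theory; the steady-state attenuation along a semi-infinite uniform cable is the standard textbook relation below, not a finding of the paper (r_m: membrane resistance, r_i: axial resistance, per unit length):

```latex
V(x) = V_0 \, e^{-x/\lambda}, \qquad \lambda = \sqrt{r_m / r_i}
```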

  15. Emergent spatial synaptic structure from diffusive plasticity.

    PubMed

    Sweeney, Yann; Clopath, Claudia

    2017-04-01

    Some neurotransmitters can diffuse freely across cell membranes, influencing neighbouring neurons regardless of their synaptic coupling. This provides a means of neural communication, alternative to synaptic transmission, which can influence the way in which neural networks process information. Here, we ask whether diffusive neurotransmission can also influence the structure of synaptic connectivity in a network undergoing plasticity. We propose a form of Hebbian synaptic plasticity which is mediated by a diffusive neurotransmitter. Whenever a synapse is modified at an individual neuron through our proposed mechanism, similar but smaller modifications occur in synapses connecting to neighbouring neurons. The effects of this diffusive plasticity are explored in networks of rate-based neurons. This leads to the emergence of spatial structure in the synaptic connectivity of the network. We show that this spatial structure can coexist with other forms of structure in the synaptic connectivity, such as with groups of strongly interconnected neurons that form in response to correlated external drive. Finally, we explore diffusive plasticity in a simple feedforward network model of receptive field development. We show that, as widely observed across sensory cortex, the preferred stimulus identity of neurons in our network become spatially correlated due to diffusion. Our proposed mechanism of diffusive plasticity provides an efficient mechanism for generating these spatial correlations in stimulus preference which can flexibly interact with other forms of synaptic organisation. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
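
    A minimal sketch of the proposed mechanism (the kernel shape and constants are assumptions, not the paper's): a Hebbian weight change at one neuron is echoed, at reduced amplitude, in its spatial neighbours.

```python
# Diffusive Hebbian plasticity on a 1-D grid of rate-based neurons (sketch).
import numpy as np

rng = np.random.default_rng(4)
n_post, n_pre, eta, sigma = 30, 10, 0.05, 2.0
pos = np.arange(n_post, dtype=float)       # postsynaptic positions on a line
W = rng.random((n_post, n_pre)) * 0.1

pre = rng.random(n_pre)                    # presynaptic rates
post = np.tanh(W @ pre)                    # postsynaptic rates

i = 15                                     # neuron undergoing a Hebbian change
dw = eta * post[i] * pre                   # local Hebbian update (row i)
kernel = np.exp(-((pos - pos[i]) ** 2) / (2 * sigma**2))  # diffusion profile
W += kernel[:, None] * dw[None, :]         # neighbours receive a scaled copy
```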

  16. A role for the anterior insular cortex in the global neuronal workspace model of consciousness.

    PubMed

    Michel, Matthias

    2017-03-01

    According to the global neuronal workspace model of consciousness, consciousness results from the global broadcast of information throughout the brain. The global neuronal workspace is mainly constituted by a fronto-parietal network. The anterior insular cortex is part of this global neuronal workspace, but the function of this region has not yet been defined within the global neuronal workspace model of consciousness. In this review, I hypothesize that the anterior insular cortex implements a cross-modal priority map, the function of which is to determine priorities for the processing of information and its subsequent entry into the global neuronal workspace. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. The role of propriospinal neuronal network in transmitting the alternating muscular activities of flexor and extensor in parkinsonian tremor.

    PubMed

    Hao, M; He, X; Lan, N

    2012-01-01

    It has been shown that normal cyclic movements of the human arm and resting limb tremor in Parkinson's disease (PD) are associated with oscillatory neuronal activities in different cerebral networks, which are transmitted to the antagonistic muscles via the same spinal pathway. There are mono-synaptic and multi-synaptic corticospinal pathways for conveying motor commands. This study investigates the plausible role of the propriospinal neuronal (PN) network at the C3-C4 levels in the multi-synaptic transmission of cortical commands for oscillatory movements. A PN network model is constructed based on known neurophysiological connections and is hypothesized to achieve the conversion of cortical oscillations into alternating antagonistic muscle bursts. Simulations performed with a virtual arm (VA) model indicate that without the PN network, the alternating bursts of antagonistic muscle EMG could not be reliably generated, whereas with the PN network, the alternating pattern of bursts was naturally displayed in the three pairs of antagonist muscles. Thus, it is suggested that oscillations in the primary motor cortex (M1) at single and double tremor frequencies are processed by the PN network to compute the alternating burst pattern in the flexor and extensor muscles.

  18. Topographical maps as complex networks

    NASA Astrophysics Data System (ADS)

    da Fontoura Costa, Luciano; Diambra, Luis

    2005-02-01

    The neuronal networks in the mammalian cortex are characterized by the coexistence of hierarchy, modularity, short and long range interactions, spatial correlations, and topographical connections. Particularly interesting, the latter type of organization implies special demands on developing systems in order to achieve precise maps preserving spatial adjacencies, even at the expense of isometry. Although the object of intensive biological research, the elucidation of the main anatomic-functional purposes of the ubiquitous topographical connections in the mammalian brain remains an elusive issue. The present work reports on how recent results from complex network formalism can be used to quantify and model the effect of topographical connections between neuronal cells over the connectivity of the network. While the topographical mapping between two cortical modules is achieved by connecting nearest cells from each module, four kinds of network models are adopted for implementing intramodular connections, including random, preferential-attachment, short-range, and long-range networks. It is shown that, though spatially uniform and simple, topographical connections between modules can lead to major changes in the network properties in some specific cases, depending on intramodular connections schemes, fostering more effective intercommunication between the involved neuronal cells and modules. The possible implications of such effects on cortical operation are discussed.

  19. A model of metastable dynamics during ongoing and evoked cortical activity

    NASA Astrophysics Data System (ADS)

    La Camera, Giancarlo

    The dynamics of simultaneously recorded spike trains in alert animals often evolve through temporal sequences of metastable states. Little is known about the network mechanisms responsible for the genesis of such sequences, or their potential role in neural coding. In the gustatory cortex of alert rats, state sequences can also be observed in the absence of overt sensory stimulation, and thus form the basis of the so-called 'ongoing activity'. This activity is characterized by a partial degree of coordination among neurons, sharp transitions among states, and multi-stability of single neurons' firing rates. A recurrent spiking network model with clustered topology can account for both the spontaneous generation of state sequences and the (network-generated) multi-stability. In the model, each network state results from the activation of specific neural clusters with potentiated intra-cluster connections. A mean field solution of the model shows a large number of stable states, each characterized by a subset of simultaneously active clusters. The firing rate in each cluster during ongoing activity depends on the number of active clusters, so that the same neuron can have different firing rates depending on the state of the network. Because of dense intra-cluster connectivity and recurrent inhibition, in finite networks the stable states lose stability due to finite size effects. Simulations of the dynamics show that the model ensemble activity continuously hops among the different states, reproducing the ongoing dynamics observed in the data. Moreover, when probed with external stimuli, the model correctly predicts the quenching of single-neuron multi-stability into bi-stability, the reduction of dimensionality of the population activity, the reduction of trial-to-trial variability, and a potential role for metastable states in the anticipation of expected events. Altogether, these results provide a unified mechanistic model of ongoing and evoked cortical dynamics. NSF IIS-1161852, NIDCD K25-DC013557, NIDCD R01-DC010389.
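
    The clustered topology described above can be sketched as a block-structured weight matrix with potentiated intra-cluster connections; the numbers below are illustrative, not fitted to the data.

```python
# Block-structured excitatory connectivity with potentiated intra-cluster weights.
import numpy as np

rng = np.random.default_rng(5)
n, n_clusters, p, j, j_plus = 200, 5, 0.2, 0.1, 0.3
labels = np.repeat(np.arange(n_clusters), n // n_clusters)
conn = rng.random((n, n)) < p                      # sparse random connectivity
same = labels[:, None] == labels[None, :]          # same-cluster indicator
W = np.where(same, j_plus, j) * conn               # stronger within clusters
np.fill_diagonal(W, 0.0)                           # no self-connections
print("mean intra/inter weight:", W[same].mean(), W[~same].mean())
```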

  20. Development of pacemaker properties and rhythmogenic mechanisms in the mouse embryonic respiratory network

    PubMed Central

    Chevalier, Marc; Toporikova, Natalia; Simmers, John; Thoby-Brisson, Muriel

    2016-01-01

    Breathing is a vital rhythmic behavior generated by hindbrain neuronal circuitry, including the preBötzinger complex network (preBötC) that controls inspiration. The emergence of preBötC network activity during prenatal development has been described, but little is known regarding inspiratory neurons expressing pacemaker properties at embryonic stages. Here, we combined calcium imaging and electrophysiological recordings in mouse embryo brainstem slices together with computational modeling to reveal the existence of heterogeneous pacemaker oscillatory properties relying on distinct combinations of burst-generating INaP and ICAN conductances. The respective proportion of the different inspiratory pacemaker subtypes changes during prenatal development. Concomitantly, network rhythmogenesis switches from a purely INaP/ICAN-dependent mechanism at E16.5 to a combined pacemaker/network-driven process at E18.5. Our results provide the first description of pacemaker bursting properties in embryonic preBötC neurons and indicate that network rhythmogenesis undergoes important changes during prenatal development through alterations in both circuit properties and the biophysical characteristics of pacemaker neurons. DOI: http://dx.doi.org/10.7554/eLife.16125.001 PMID:27434668

  1. Hebbian learning of hand-centred representations in a hierarchical neural network model of the primate visual system.

    PubMed

    Born, Jannis; Galeazzi, Juan M; Stringer, Simon M

    2017-01-01

    A subset of neurons in the posterior parietal and premotor areas of the primate brain respond to the locations of visual targets in a hand-centred frame of reference. Such hand-centred visual representations are thought to play an important role in visually-guided reaching to target locations in space. In this paper we show how a biologically plausible, Hebbian learning mechanism may account for the development of localized hand-centred representations in a hierarchical neural network model of the primate visual system, VisNet. The hand-centered neurons developed in the model use an invariance learning mechanism known as continuous transformation (CT) learning. In contrast to previous theoretical proposals for the development of hand-centered visual representations, CT learning does not need a memory trace of recent neuronal activity to be incorporated in the synaptic learning rule. Instead, CT learning relies solely on a Hebbian learning rule, which is able to exploit the spatial overlap that naturally occurs between successive images of a hand-object configuration as it is shifted across different retinal locations due to saccades. Our simulations show how individual neurons in the network model can learn to respond selectively to target objects in particular locations with respect to the hand, irrespective of where the hand-object configuration occurs on the retina. The response properties of these hand-centred neurons further generalise to localised receptive fields in the hand-centred space when tested on novel hand-object configurations that have not been explored during training. Indeed, even when the network is trained with target objects presented across a near continuum of locations around the hand during training, the model continues to develop hand-centred neurons with localised receptive fields in hand-centred space. With the help of principal component analysis, we provide the first theoretical framework that explains the behavior of Hebbian learning in VisNet.

  2. Hebbian learning of hand-centred representations in a hierarchical neural network model of the primate visual system

    PubMed Central

    Born, Jannis; Stringer, Simon M.

    2017-01-01

    A subset of neurons in the posterior parietal and premotor areas of the primate brain respond to the locations of visual targets in a hand-centred frame of reference. Such hand-centred visual representations are thought to play an important role in visually-guided reaching to target locations in space. In this paper we show how a biologically plausible, Hebbian learning mechanism may account for the development of localized hand-centred representations in a hierarchical neural network model of the primate visual system, VisNet. The hand-centered neurons developed in the model use an invariance learning mechanism known as continuous transformation (CT) learning. In contrast to previous theoretical proposals for the development of hand-centered visual representations, CT learning does not need a memory trace of recent neuronal activity to be incorporated in the synaptic learning rule. Instead, CT learning relies solely on a Hebbian learning rule, which is able to exploit the spatial overlap that naturally occurs between successive images of a hand-object configuration as it is shifted across different retinal locations due to saccades. Our simulations show how individual neurons in the network model can learn to respond selectively to target objects in particular locations with respect to the hand, irrespective of where the hand-object configuration occurs on the retina. The response properties of these hand-centred neurons further generalise to localised receptive fields in the hand-centred space when tested on novel hand-object configurations that have not been explored during training. Indeed, even when the network is trained with target objects presented across a near continuum of locations around the hand during training, the model continues to develop hand-centred neurons with localised receptive fields in hand-centred space. With the help of principal component analysis, we provide the first theoretical framework that explains the behavior of Hebbian learning in VisNet. PMID:28562618
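
    A one-layer illustration of the CT principle (a sketch, not VisNet itself): a purely Hebbian rule with competition and weight normalization binds together successively shifted, overlapping views of the same input, without any memory trace.

```python
# Continuous-transformation (CT) learning on shifted copies of one pattern.
import numpy as np

rng = np.random.default_rng(6)
n_in, n_out, eta = 100, 20, 0.1
W = rng.random((n_out, n_in))
W /= np.linalg.norm(W, axis=1, keepdims=True)

pattern = np.zeros(n_in)
pattern[40:60] = 1.0                               # localized "hand-object" image
for shift in range(30):                            # small steps keep overlap high
    x = np.roll(pattern, shift)
    y = W @ x
    winner = np.argmax(y)                          # competition (hard WTA here)
    W[winner] += eta * y[winner] * x               # Hebbian update, no trace
    W[winner] /= np.linalg.norm(W[winner])         # weight normalization
print("winner at shift 0:", np.argmax(W @ pattern),
      "winner at shift 29:", np.argmax(W @ np.roll(pattern, 29)))
```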

  3. Probabilistic inference in discrete spaces can be implemented into networks of LIF neurons.

    PubMed

    Probst, Dimitri; Petrovici, Mihai A; Bytschok, Ilja; Bill, Johannes; Pecevski, Dejan; Schemmel, Johannes; Meier, Karlheinz

    2015-01-01

    The means by which cortical neural networks are able to efficiently solve inference problems remains an open question in computational neuroscience. Recently, abstract models of Bayesian computation in neural circuits have been proposed, but they lack a mechanistic interpretation at the single-cell level. In this article, we describe a complete theoretical framework for building networks of leaky integrate-and-fire neurons that can sample from arbitrary probability distributions over binary random variables. We test our framework for a model inference task based on a psychophysical phenomenon (the Knill-Kersten optical illusion) and further assess its performance when applied to randomly generated distributions. As the local computations performed by the network strongly depend on the interaction between neurons, we compare several types of couplings mediated by either single synapses or interneuron chains. Due to its robustness to substrate imperfections such as parameter noise and background noise correlations, our model is particularly interesting for implementation on novel, neuro-inspired computing architectures, which can thereby serve as a fast, low-power substrate for solving real-world inference problems.

  4. Probabilistic inference in discrete spaces can be implemented into networks of LIF neurons

    PubMed Central

    Probst, Dimitri; Petrovici, Mihai A.; Bytschok, Ilja; Bill, Johannes; Pecevski, Dejan; Schemmel, Johannes; Meier, Karlheinz

    2015-01-01

    The means by which cortical neural networks are able to efficiently solve inference problems remains an open question in computational neuroscience. Recently, abstract models of Bayesian computation in neural circuits have been proposed, but they lack a mechanistic interpretation at the single-cell level. In this article, we describe a complete theoretical framework for building networks of leaky integrate-and-fire neurons that can sample from arbitrary probability distributions over binary random variables. We test our framework for a model inference task based on a psychophysical phenomenon (the Knill-Kersten optical illusion) and further assess its performance when applied to randomly generated distributions. As the local computations performed by the network strongly depend on the interaction between neurons, we compare several types of couplings mediated by either single synapses or interneuron chains. Due to its robustness to substrate imperfections such as parameter noise and background noise correlations, our model is particularly interesting for implementation on novel, neuro-inspired computing architectures, which can thereby serve as a fast, low-power substrate for solving real-world inference problems. PMID:25729361
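
    At an abstract level, neural sampling draws from a distribution over binary variables by letting stochastic units flip with a sigmoidal probability of their local input; the paper realizes this with LIF dynamics, whereas the sketch below uses plain Gibbs-style units on an invented toy distribution.

```python
# Gibbs-style sampling from a Boltzmann distribution over binary variables.
import numpy as np

rng = np.random.default_rng(7)
K = 4
W = np.array([[0.0, 1.2, -0.8, 0.0], [1.2, 0.0, 0.0, 0.5],
              [-0.8, 0.0, 0.0, 0.9], [0.0, 0.5, 0.9, 0.0]])  # symmetric, zero diag
b = np.array([-0.2, 0.1, 0.0, -0.5])
z = rng.integers(0, 2, K).astype(float)

totals = np.zeros(K)
n_steps = 20_000
for step in range(n_steps):
    k = step % K                                   # update units in turn
    u = W[k] @ z + b[k]                            # local "membrane potential"
    z[k] = float(rng.random() < 1.0 / (1.0 + np.exp(-u)))
    totals += z
print("estimated marginals p(z_k = 1):", totals / n_steps)
```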

  5. Efficient spiking neural network model of pattern motion selectivity in visual cortex.

    PubMed

    Beyeler, Michael; Richert, Micah; Dutt, Nikil D; Krichmar, Jeffrey L

    2014-07-01

    Simulating large-scale models of biological motion perception is challenging, due to the required memory to store the network structure and the computational power needed to quickly solve the neuronal dynamics. A low-cost yet high-performance approach to simulating large-scale neural network models in real-time is to leverage the parallel processing capability of graphics processing units (GPUs). Based on this approach, we present a two-stage model of visual area MT that we believe to be the first large-scale spiking network to demonstrate pattern direction selectivity. In this model, component-direction-selective (CDS) cells in MT linearly combine inputs from V1 cells that have spatiotemporal receptive fields according to the motion energy model of Simoncelli and Heeger. Pattern-direction-selective (PDS) cells in MT are constructed by pooling over MT CDS cells with a wide range of preferred directions. Responses of our model neurons are comparable to electrophysiological results for grating and plaid stimuli as well as speed tuning. The behavioral response of the network in a motion discrimination task is in agreement with psychophysical data. Moreover, our implementation outperforms a previous implementation of the motion energy model by orders of magnitude in terms of computational speed and memory usage. The full network, which comprises 153,216 neurons and approximately 40 million synapses, processes 20 frames per second of a 40 × 40 input video in real-time using a single off-the-shelf GPU. To promote the use of this algorithm among neuroscientists and computer vision researchers, the source code for the simulator, the network, and analysis scripts are publicly available.

  6. Analyzing neuronal networks using discrete-time dynamics

    NASA Astrophysics Data System (ADS)

    Ahn, Sungwoo; Smith, Brian H.; Borisyuk, Alla; Terman, David

    2010-05-01

    We develop mathematical techniques for analyzing detailed Hodgkin-Huxley like models for excitatory-inhibitory neuronal networks. Our strategy for studying a given network is to first reduce it to a discrete-time dynamical system. The discrete model is considerably easier to analyze, both mathematically and computationally, and parameters in the discrete model correspond directly to parameters in the original system of differential equations. While these networks arise in many important applications, a primary focus of this paper is to better understand mechanisms that underlie temporally dynamic responses in early processing of olfactory sensory information. The models presented here exhibit several properties that have been described for olfactory codes in an insect’s Antennal Lobe. These include transient patterns of synchronization and decorrelation of sensory inputs. By reducing the model to a discrete system, we are able to systematically study how properties of the dynamics, including the complex structure of the transients and attractors, depend on factors related to connectivity and the intrinsic and synaptic properties of cells within the network.

  7. Equation-oriented specification of neural models for simulations

    PubMed Central

    Stimberg, Marcel; Goodman, Dan F. M.; Benichoux, Victor; Brette, Romain

    2013-01-01

    Simulating biological neuronal networks is a core method of research in computational neuroscience. A full specification of such a network model includes a description of the dynamics and state changes of neurons and synapses, as well as the synaptic connectivity patterns and the initial values of all parameters. A standard approach in neuronal modeling software is to build network models based on a library of pre-defined components and mechanisms; if a model component does not yet exist, it has to be defined in a special-purpose or general low-level language and potentially be compiled and linked with the simulator. Here we propose an alternative approach that allows flexible definition of models by writing textual descriptions based on mathematical notation. We demonstrate that this approach allows the definition of a wide range of models with minimal syntax. Furthermore, such explicit model descriptions allow the generation of executable code for various target languages and devices, since the description is not tied to an implementation. Finally, this approach also has advantages for readability and reproducibility, because the model description is fully explicit, and because it can be automatically parsed and transformed into formatted descriptions. The presented approach has been implemented in the Brian2 simulator. PMID:24550820
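
    A minimal example of this equation-oriented style, written against the Brian 2 API: the neuron and synapse models are given entirely as textual equations and statements in physical units (the parameter values here are arbitrary, chosen only to produce activity):

      from brian2 import (NeuronGroup, Synapses, SpikeMonitor, run,
                          mV, ms, start_scope)

      start_scope()

      # Model defined purely by a textual equation with physical units
      tau = 10*ms
      eqs = '''
      dv/dt = (v_rest - v) / tau : volt (unless refractory)
      v_rest : volt
      '''
      G = NeuronGroup(100, eqs, threshold='v > -50*mV', reset='v = -60*mV',
                      refractory=5*ms, method='exact')
      G.v = '-60*mV + 15*mV*rand()'   # randomized initial conditions
      G.v_rest = '-45*mV'             # resting level above threshold

      # Synapses are specified the same way: a textual on-spike statement
      S = Synapses(G, G, on_pre='v -= 2*mV')
      S.connect(p=0.1)

      M = SpikeMonitor(G)
      run(200*ms)
      print(f'{M.num_spikes} spikes recorded')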

  8. Network feedback regulates motor output across a range of modulatory neuron activity

    PubMed Central

    Spencer, Robert M.

    2016-01-01

    Modulatory projection neurons alter network neuron synaptic and intrinsic properties to elicit multiple different outputs. Sensory and other inputs elicit a range of modulatory neuron activity that is further shaped by network feedback, yet little is known regarding how the impact of network feedback on modulatory neurons regulates network output across a physiological range of modulatory neuron activity. Identified network neurons, a fully described connectome, and a well-characterized, identified modulatory projection neuron enabled us to address this issue in the crab (Cancer borealis) stomatogastric nervous system. The modulatory neuron modulatory commissural neuron 1 (MCN1) activates and modulates two networks that generate rhythms via different cellular mechanisms and at distinct frequencies. MCN1 is activated at rates of 5–35 Hz in vivo and in vitro. Additionally, network feedback elicits MCN1 activity time-locked to motor activity. We asked how network activation, rhythm speed, and neuron activity levels are regulated by the presence or absence of network feedback across a physiological range of MCN1 activity rates. There were both similarities and differences in responses of the two networks to MCN1 activity. Many parameters in both networks were sensitive to network feedback effects on MCN1 activity. However, for most parameters, MCN1 activity rate did not determine the extent to which network output was altered by the addition of network feedback. These data demonstrate that the influence of network feedback on modulatory neuron activity is an important determinant of network output and feedback can be effective in shaping network output regardless of the extent of network modulation. PMID:27030739

  9. Limits to high-speed simulations of spiking neural networks using general-purpose computers.

    PubMed

    Zenke, Friedemann; Gerstner, Wulfram

    2014-01-01

    To understand how the central nervous system performs computations using recurrent neuronal circuitry, simulations have become an indispensable tool for theoretical neuroscience. To study neuronal circuits and their ability to self-organize, increasing attention has been directed toward synaptic plasticity. In particular, spike-timing-dependent plasticity (STDP) creates specific demands for simulations of spiking neural networks. On the one hand, a high temporal resolution is required to capture the millisecond timescale of typical STDP windows. On the other hand, network simulations have to evolve over hours up to days to capture the timescale of long-term plasticity. To do this efficiently, fast simulation speed is the crucial ingredient rather than large neuron numbers. Using different medium-sized network models consisting of several thousands of neurons and off-the-shelf hardware, we compare the simulation speed of the simulators Brian, NEST, and NEURON, as well as that of our own simulator, Auryn. Our results show that real-time simulations of different plastic network models are possible in parallel simulations in which numerical precision is not a primary concern. Even so, the speed-up margin of parallelism is limited, and boosting simulation speeds beyond one tenth of real time is difficult. By profiling simulation code we show that the run times of typical plastic network simulations encounter a hard boundary. This limit is partly due to latencies in the inter-process communications and thus cannot be overcome by increased parallelism. Overall, these results show that to study plasticity in medium-sized spiking neural networks, adequate simulation tools are readily available which run efficiently on small clusters. However, to run simulations substantially faster than real time, special hardware is a prerequisite.

  10. Neural electrical activity and neural network growth.

    PubMed

    Gafarov, F M

    2018-05-01

    The development of the central and peripheral nervous systems depends in part on the emergence of the correct functional connectivity in their input and output pathways. It is now generally accepted that molecular factors guide neurons to establish a primary scaffold that undergoes activity-dependent refinement to build a fully functional circuit. However, a number of recent experimental results show that neuronal electrical activity also plays an important role in establishing the initial interneuronal connections. These processes are difficult to study experimentally, owing to the absence of a theoretical description and of quantitative parameters for estimating the influence of neuronal activity on growth in neural networks. In this work we propose a general framework for a theoretical description of activity-dependent neural network growth. The description incorporates a closed-loop growth model in which neural activity can affect neurite outgrowth, which in turn can affect neural activity. We carried out a detailed quantitative analysis of spatiotemporal activity patterns and studied the relationship between individual cells and the network as a whole, to explore the relationship between developing connectivity and activity patterns. The model developed in this work will allow us to develop new experimental techniques for studying and quantifying the influence of neuronal activity on growth processes in neural networks, and may lead to novel techniques for constructing large-scale neural networks by self-organization. Copyright © 2018 Elsevier Ltd. All rights reserved.
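
    The closed loop described here (activity shapes outgrowth, outgrowth shapes connectivity, connectivity shapes activity) can be sketched with a Van Ooyen-style neurite-field toy model. The specific update rules below are assumptions for illustration, not the paper's equations:

      import numpy as np

      rng = np.random.default_rng(2)

      # Each neuron has a circular "neurite field"; two neurons connect in
      # proportion to field overlap; firing rate follows the recurrent
      # drive; fields grow when activity is below a homeostatic set-point
      # and shrink above it, closing the loop.
      N, steps = 30, 2000
      pos = rng.random((N, 2)) * 5.0       # positions on a 5 x 5 sheet
      radius = np.full(N, 0.1)             # initial neurite radii
      rate = np.zeros(N)
      set_point, eps, dt = 0.5, 0.01, 0.1

      dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
      for t in range(steps):
          overlap = np.clip(radius[:, None] + radius[None, :] - dist, 0, None)
          np.fill_diagonal(overlap, 0.0)
          drive = overlap @ rate + 0.1     # recurrent drive + background
          rate += dt * (-rate + np.tanh(drive))      # activity dynamics
          radius += dt * eps * (set_point - rate)    # outgrowth feedback
          radius = np.clip(radius, 0.0, None)

      print("mean rate:", rate.mean(), "mean radius:", radius.mean())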

  11. Assessing sensory versus optogenetic network activation by combining (o)fMRI with optical Ca2+ recordings.

    PubMed

    Schmid, Florian; Wachsmuth, Lydia; Schwalm, Miriam; Prouvot, Pierre-Hugues; Jubal, Eduardo Rosales; Fois, Consuelo; Pramanik, Gautam; Zimmer, Claus; Faber, Cornelius; Stroh, Albrecht

    2016-11-01

    Encoding of sensory inputs in the cortex is characterized by sparse neuronal network activation. Optogenetic stimulation has previously been combined with fMRI (ofMRI) to probe functional networks. However, for a quantitative optogenetic probing of sensory-driven sparse network activation, the level of similarity between sensory and optogenetic network activation needs to be explored. Here, we complement ofMRI with optic fiber-based population Ca2+ recordings for a region-specific readout of neuronal spiking activity in rat brain. Comparing Ca2+ responses to the blood oxygenation level-dependent signal upon sensory stimulation with increasing frequencies showed adaptation of Ca2+ transients contrasted by an increase of blood oxygenation level-dependent responses, indicating that the optical recordings convey complementary information on neuronal network activity to the corresponding hemodynamic response. To study the similarity of optogenetic and sensory activation, we quantified the density of cells expressing channelrhodopsin-2 and modeled light propagation in the tissue. We estimated the effectively illuminated volume and numbers of optogenetically stimulated neurons, being indicative of sparse activation. At the functional level, upon either sensory or optogenetic stimulation we detected single-peak short-latency primary Ca2+ responses with similar amplitudes and found that blood oxygenation level-dependent responses showed similar time courses. These data suggest that ofMRI can serve as a representative model for functional brain mapping. © The Author(s) 2015.

  12. Biological modelling of a computational spiking neural network with neuronal avalanches.

    PubMed

    Li, Xiumin; Chen, Qing; Xue, Fangzheng

    2017-06-28

    In recent years, an increasing number of studies have demonstrated that networks in the brain can self-organize into a critical state where dynamics exhibit a mixture of ordered and disordered patterns. This critical branching phenomenon is termed neuronal avalanches. It has been hypothesized that the homeostatic level balanced between stability and plasticity of this critical state may be the optimal state for performing diverse neural computational tasks. However, the critical region for high performance is narrow and sensitive for spiking neural networks (SNNs). In this paper, we investigated the role of the critical state in neural computations based on liquid-state machines, a biologically plausible computational neural network model for real-time computing. The computational performance of an SNN when operating at the critical state and, in particular, with spike-timing-dependent plasticity for updating synaptic weights is investigated. The network is found to show the best computational performance when it is subjected to critical dynamic states. Moreover, the active-neuron-dominant structure refined from synaptic learning can remarkably enhance the robustness of the critical state and further improve computational accuracy. These results may have important implications in the modelling of spiking neural networks with optimal computational performance. This article is part of the themed issue 'Mathematical methods in medicine: neuroscience, cardiology and pathology'. © 2017 The Author(s).
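
    A standard way to quantify such critical dynamics from a spike raster is avalanche analysis in the style of Beggs and Plenz, sketched below on synthetic binned spike counts (the paper's liquid-state-machine setup is not reproduced here):

      import numpy as np

      rng = np.random.default_rng(7)

      # Bin the population spike count, define an avalanche as a run of
      # non-empty bins bounded by empty ones, and estimate the branching
      # parameter as the average ratio of counts in the second bin of an
      # avalanche to counts in its first bin (~1 at criticality).
      counts = rng.poisson(0.9, size=10000)   # synthetic binned spike counts

      avalanche_sizes, ratios = [], []
      i = 0
      while i < len(counts):
          if counts[i] == 0:
              i += 1
              continue
          j = i
          while j < len(counts) and counts[j] > 0:
              j += 1
          avalanche_sizes.append(int(counts[i:j].sum()))
          if j - i >= 2:
              ratios.append(counts[i + 1] / counts[i])
          i = j

      sigma = float(np.mean(ratios))
      print(f"{len(avalanche_sizes)} avalanches, branching parameter ~ {sigma:.2f}")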

  13. Biological modelling of a computational spiking neural network with neuronal avalanches

    NASA Astrophysics Data System (ADS)

    Li, Xiumin; Chen, Qing; Xue, Fangzheng

    2017-05-01

    In recent years, an increasing number of studies have demonstrated that networks in the brain can self-organize into a critical state where dynamics exhibit a mixture of ordered and disordered patterns. This critical branching phenomenon is termed neuronal avalanches. It has been hypothesized that the homeostatic level balanced between stability and plasticity of this critical state may be the optimal state for performing diverse neural computational tasks. However, the critical region for high performance is narrow and sensitive for spiking neural networks (SNNs). In this paper, we investigated the role of the critical state in neural computations based on liquid-state machines, a biologically plausible computational neural network model for real-time computing. The computational performance of an SNN when operating at the critical state and, in particular, with spike-timing-dependent plasticity for updating synaptic weights is investigated. The network is found to show the best computational performance when it is subjected to critical dynamic states. Moreover, the active-neuron-dominant structure refined from synaptic learning can remarkably enhance the robustness of the critical state and further improve computational accuracy. These results may have important implications in the modelling of spiking neural networks with optimal computational performance. This article is part of the themed issue 'Mathematical methods in medicine: neuroscience, cardiology and pathology'.

  14. A Component-Based Extension Framework for Large-Scale Parallel Simulations in NEURON

    PubMed Central

    King, James G.; Hines, Michael; Hill, Sean; Goodman, Philip H.; Markram, Henry; Schürmann, Felix

    2008-01-01

    As neuronal simulations approach larger scales with increasing levels of detail, the neurosimulator software represents only one part of a chain of tools ranging from setup and simulation, through interaction with virtual environments, to analysis and visualization. Previously published approaches to abstracting simulator engines have not received widespread acceptance, which in part may be due to the fact that they tried to solve the model specification problem as well. Here, we present an approach that uses a neurosimulator, in this case NEURON, to describe and instantiate the network model in the simulator's native model language, but then replaces the main integration loop with its own. Existing parallel network models are easily adapted to run in the presented framework. The presented approach is thus an extension to NEURON but uses a component-based architecture to allow for replaceable spike exchange components and pluggable components for monitoring, analysis, or control that can run in this framework alongside the simulation. PMID:19430597

  15. Neural Network Development Tool (NETS)

    NASA Technical Reports Server (NTRS)

    Baffes, Paul T.

    1990-01-01

    Artificial neural networks are formed from hundreds or thousands of simulated neurons, connected in a manner similar to that of the human brain. Such networks model learning behavior. Using NETS involves translating the problem to be solved into input/output pairs, designing the network configuration, and training the network. Written in C.

  16. Signal transfer within a cultured asymmetric cortical neuron circuit

    NASA Astrophysics Data System (ADS)

    Isomura, Takuya; Shimba, Kenta; Takayama, Yuzo; Takeuchi, Akimasa; Kotani, Kiyoshi; Jimbo, Yasuhiko

    2015-12-01

    Objective. Simplified neuronal circuits are required for investigating information representation in nervous systems and for validating theoretical neural network models. Here, we developed patterned neuronal circuits using microfabricated devices comprising a micro-well array bonded to a microelectrode-array substrate. Approach. The micro-well array consisted of micrometre-scale wells connected by tunnels, all contained within a silicone slab called a micro-chamber. The design of the micro-chamber confined somata to the wells and allowed axons to grow through the tunnels bidirectionally, but with a designed, unidirectional bias: an arrow-shaped structure guided axons toward the tunnel entrance located at its point, making that the preferred direction. Main results. When rat cortical neurons were cultured in the wells, their axons grew through the tunnels and connected to neurons in adjoining wells. Unidirectional burst transfers and other asymmetric signal-propagation phenomena were observed via the substrate-embedded electrodes. Seventy-nine percent of burst transfers were in the forward direction. We also observed rapid propagation of activity from sites of local electrical stimulation, and significant effects of inhibitory synapse blockade on bursting activity. Significance. These results suggest that this simple, substrate-controlled neuronal circuit can be applied to develop in vitro models of the function of cortical microcircuits or deep neural networks, to better elucidate the laws governing the dynamics of neuronal networks.

  17. Signal transfer within a cultured asymmetric cortical neuron circuit.

    PubMed

    Isomura, Takuya; Shimba, Kenta; Takayama, Yuzo; Takeuchi, Akimasa; Kotani, Kiyoshi; Jimbo, Yasuhiko

    2015-12-01

    Simplified neuronal circuits are required for investigating information representation in nervous systems and for validating theoretical neural network models. Here, we developed patterned neuronal circuits using microfabricated devices comprising a micro-well array bonded to a microelectrode-array substrate. The micro-well array consisted of micrometre-scale wells connected by tunnels, all contained within a silicone slab called a micro-chamber. The design of the micro-chamber confined somata to the wells and allowed axons to grow through the tunnels bidirectionally, but with a designed, unidirectional bias: an arrow-shaped structure guided axons toward the tunnel entrance located at its point, making that the preferred direction. When rat cortical neurons were cultured in the wells, their axons grew through the tunnels and connected to neurons in adjoining wells. Unidirectional burst transfers and other asymmetric signal-propagation phenomena were observed via the substrate-embedded electrodes. Seventy-nine percent of burst transfers were in the forward direction. We also observed rapid propagation of activity from sites of local electrical stimulation, and significant effects of inhibitory synapse blockade on bursting activity. These results suggest that this simple, substrate-controlled neuronal circuit can be applied to develop in vitro models of the function of cortical microcircuits or deep neural networks, to better elucidate the laws governing the dynamics of neuronal networks.

  18. A framework for analyzing the relationship between gene expression and morphological, topological, and dynamical patterns in neuronal networks.

    PubMed

    de Arruda, Henrique Ferraz; Comin, Cesar Henrique; Miazaki, Mauro; Viana, Matheus Palhares; Costa, Luciano da Fontoura

    2015-04-30

    A key question in developmental biology is how gene expression influences the morphological and dynamical patterns observed in living beings. In this work we propose a methodology for addressing this question, based on estimating the mutual information and the Pearson correlation between the intensity of gene expression and measurements of several morphological properties of the cells. A similar approach is applied to identify effects of gene expression on the system dynamics. Neuronal networks were artificially grown over a lattice by considering a reference model used to generate artificial neurons. The input parameters of the artificial neurons were determined according to two distinct patterns of gene expression, and the dynamical response was assessed by considering the integrate-and-fire model. As far as single-gene dependence is concerned, we found that the interaction between gene expression and network topology, as well as between gene expression and the dynamical response, is strongly affected by the gene expression pattern. In addition, we observed a high correlation between gene expression and some topological measurements of the neuronal network for particular patterns of gene expression. To the best of our knowledge, there are no similar analyses with which to compare. A proper understanding of the influence of gene expression requires jointly studying the morphology, topology, and dynamics of neurons. The proposed framework represents a first step towards predicting gene expression patterns from morphology and connectivity. Copyright © 2015. Published by Elsevier B.V.
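
    The two statistics at the core of the methodology can be sketched directly. The example below uses synthetic data with a quadratic dependence, which the Pearson correlation misses but a histogram estimate of mutual information detects, illustrating why both measures are used:

      import numpy as np

      rng = np.random.default_rng(3)

      def mutual_information(x, y, bins=16):
          """Histogram estimate of I(X;Y) in bits for two 1-D samples."""
          pxy, _, _ = np.histogram2d(x, y, bins=bins)
          pxy = pxy / pxy.sum()
          px = pxy.sum(axis=1, keepdims=True)
          py = pxy.sum(axis=0, keepdims=True)
          nz = pxy > 0
          return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

      # Hypothetical data: gene-expression intensity vs a morphological
      # measurement (e.g. arbor size) for 1000 artificial neurons
      expr = rng.normal(size=1000)
      morpho = 0.7 * expr**2 + 0.3 * rng.normal(size=1000)  # nonlinear link

      r = np.corrcoef(expr, morpho)[0, 1]
      print(f"Pearson r = {r:.3f} (misses the quadratic dependence)")
      print(f"MI = {mutual_information(expr, morpho):.3f} bits (captures it)")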

  19. Functional network inference of the suprachiasmatic nucleus

    PubMed Central

    Abel, John H.; Meeker, Kirsten; Granados-Fuentes, Daniel; St. John, Peter C.; Wang, Thomas J.; Bales, Benjamin B.; Doyle, Francis J.; Herzog, Erik D.; Petzold, Linda R.

    2016-01-01

    In the mammalian suprachiasmatic nucleus (SCN), noisy cellular oscillators communicate within a neuronal network to generate precise system-wide circadian rhythms. Although the intracellular genetic oscillator and intercellular biochemical coupling mechanisms have been examined previously, the network topology driving synchronization of the SCN has not been elucidated. This network has been particularly challenging to probe, due to its oscillatory components and slow coupling timescale. In this work, we investigated the SCN network at a single-cell resolution through a chemically induced desynchronization. We then inferred functional connections in the SCN by applying the maximal information coefficient statistic to bioluminescence reporter data from individual neurons while they resynchronized their circadian cycling. Our results demonstrate that the functional network of circadian cells associated with resynchronization has small-world characteristics, with a node degree distribution that is exponential. We show that hubs of this small-world network are preferentially located in the central SCN, with sparsely connected shells surrounding these cores. Finally, we used two computational models of circadian neurons to validate our predictions of network structure. PMID:27044085

  20. Neuronal replacement therapy: previous achievements and challenges ahead

    NASA Astrophysics Data System (ADS)

    Grade, Sofia; Götz, Magdalena

    2017-10-01

    Lifelong neurogenesis and the incorporation of newborn neurons into mature neuronal circuits operate in specialized niches of the mammalian brain and serve as a role model for neuronal replacement strategies. However, to what extent can the remaining brain parenchyma, which never incorporates new neurons during adulthood, be as plastic and readily accommodate neurons in networks that have suffered neuronal loss due to injury or neurological disease? Which microenvironment is permissive for neuronal replacement and synaptic integration, and which cells perform best? Can lost function be restored, and how adequate is the participation in the pre-existing circuitry? Could aberrant connections cause malfunction, especially in networks dominated by excitatory neurons, such as the cerebral cortex? These questions show how important connectivity and circuitry aspects are for regenerative medicine, which is the focus of this review. We discuss the impressive advances in neuronal replacement strategies and the successes achieved with both exogenous and endogenous cell sources. Both have seen key novel technologies, like the groundbreaking discovery of induced pluripotent stem cells and direct neuronal reprogramming, offering alternatives to the transplantation of fetal neurons, and both herald great expectations. For these to become reality, analysis of neuronal circuitry is now key. As our understanding of neuronal circuits increases, neuronal replacement therapy should fulfill the prerequisites in network structure and function, and in brain-wide input and output. Now is the time to incorporate neural circuitry research into regenerative medicine if we ever want to truly repair brain injury.

  1. Path integration of head direction: updating a packet of neural activity at the correct speed using neuronal time constants.

    PubMed

    Walters, D M; Stringer, S M

    2010-07-01

    A key question in understanding the neural basis of path integration is how individual, spatially responsive, neurons may self-organize into networks that can, through learning, integrate velocity signals to update a continuous representation of location within an environment. It is of vital importance that this internal representation of position is updated at the correct speed, and in real time, to accurately reflect the motion of the animal. In this article, we present a biologically plausible model of velocity path integration of head direction that can solve this problem using neuronal time constants to effect natural time delays, over which associations can be learned through associative Hebbian learning rules. The model comprises a linked continuous attractor network and competitive network. In simulation, we show that the same model is able to learn two different speeds of rotation when implemented with two different values for the time constant, and without the need to alter any other model parameters. The proposed model could be extended to path integration of place in the environment, and path integration of spatial view.

  2. Sequence memory based on coherent spin-interaction neural networks.

    PubMed

    Xia, Min; Wong, W K; Wang, Zhijie

    2014-12-01

    Sequence information processing, for instance sequence memory, plays an important role in many functions of the brain. In the workings of the human brain, the steady-state period is alterable. However, in existing sequence memory models based on heteroassociations, the steady-state period cannot be changed during sequence recall. In this work, a novel neural network model for sequence memory with a controllable steady-state period, based on coherent spin interaction, is proposed. In the proposed model, neurons fire collectively in a phase-coherent manner, which lets a neuron group respond differently to different patterns and also lets different neuron groups respond differently to one pattern. Simulation results demonstrating the performance of the sequence memory are presented. By introducing the coherent spin-interaction sequence memory model, the steady-state period can be controlled by the dimension parameters and by the overlap between the input pattern and the stored patterns. The sequence storage capacity is enlarged by coherent spin interaction compared with existing sequence memory models. Furthermore, the sequence storage capacity scales exponentially with the dimension of the neural network.

  3. Membrane Properties and the Balance between Excitation and Inhibition Control Gamma-Frequency Oscillations Arising from Feedback Inhibition

    PubMed Central

    Economo, Michael N.; White, John A.

    2012-01-01

    Computational studies as well as in vivo and in vitro results have shown that many cortical neurons fire in a highly irregular manner and at low average firing rates. These patterns seem to persist even when highly rhythmic signals are recorded by local field potential electrodes or other methods that quantify the summed behavior of a local population. Models of the 30–80 Hz gamma rhythm in which network oscillations arise through ‘stochastic synchrony’ capture the variability observed in the spike output of single cells while preserving network-level organization. We extend these results by constructing model networks constrained by experimental measurements and using them to probe the effect of biophysical parameters on network-level activity. We find in simulations that gamma-frequency oscillations are enabled by a high level of incoherent synaptic conductance input, similar to the barrage of noisy synaptic input that cortical neurons have been shown to receive in vivo. This incoherent synaptic input increases the emergent network frequency by shortening the time scale of the membrane in excitatory neurons and by reducing the temporal separation between excitation and inhibition due to decreased spike latency in inhibitory neurons. These mechanisms are demonstrated in simulations and in vitro current-clamp and dynamic-clamp experiments. Simulation results further indicate that the membrane potential noise amplitude has a large impact on network frequency and that the balance between excitatory and inhibitory currents controls network stability and sensitivity to external inputs. PMID:22275859

  4. Effects of Calcium Spikes in the Layer 5 Pyramidal Neuron on Coincidence Detection and Activity Propagation

    PubMed Central

    Chua, Yansong; Morrison, Abigail

    2016-01-01

    The role of dendritic spiking mechanisms in neural processing is so far poorly understood. To investigate the role of calcium spikes in the functional properties of the single neuron and recurrent networks, we investigated a three compartment neuron model of the layer 5 pyramidal neuron with calcium dynamics in the distal compartment. By performing single neuron simulations with noisy synaptic input and occasional large coincident input at either just the distal compartment or at both somatic and distal compartments, we show that the presence of calcium spikes confers a substantial advantage for coincidence detection in the former case and a lesser advantage in the latter. We further show that the experimentally observed critical frequency phenomenon, in which action potentials triggered by stimuli near the soma above a certain frequency trigger a calcium spike at distal dendrites, leading to further somatic depolarization, is not exhibited by a neuron receiving realistically noisy synaptic input, and so is unlikely to be a necessary component of coincidence detection. We next investigate the effect of calcium spikes in propagation of spiking activities in a feed-forward network (FFN) embedded in a balanced recurrent network. The excitatory neurons in the network are again connected to either just the distal, or both somatic and distal compartments. With purely distal connectivity, activity propagation is stable and distinguishable for a large range of recurrent synaptic strengths if the feed-forward connections are sufficiently strong, but propagation does not occur in the absence of calcium spikes. When connections are made to both the somatic and the distal compartments, activity propagation is achieved for neurons with active calcium dynamics at a much smaller number of neurons per pool, compared to a network of passive neurons, but quickly becomes unstable as the strength of recurrent synapses increases. Activity propagation at higher scaling factors can be stabilized by increasing network inhibition or introducing short term depression in the excitatory synapses, but the signal to noise ratio remains low. Our results demonstrate that the interaction of synchrony with dendritic spiking mechanisms can have profound consequences for the dynamics on the single neuron and network level. PMID:27499740

  5. Effects of Calcium Spikes in the Layer 5 Pyramidal Neuron on Coincidence Detection and Activity Propagation.

    PubMed

    Chua, Yansong; Morrison, Abigail

    2016-01-01

    The role of dendritic spiking mechanisms in neural processing is so far poorly understood. To investigate the role of calcium spikes in the functional properties of the single neuron and recurrent networks, we investigated a three compartment neuron model of the layer 5 pyramidal neuron with calcium dynamics in the distal compartment. By performing single neuron simulations with noisy synaptic input and occasional large coincident input at either just the distal compartment or at both somatic and distal compartments, we show that the presence of calcium spikes confers a substantial advantage for coincidence detection in the former case and a lesser advantage in the latter. We further show that the experimentally observed critical frequency phenomenon, in which action potentials triggered by stimuli near the soma above a certain frequency trigger a calcium spike at distal dendrites, leading to further somatic depolarization, is not exhibited by a neuron receiving realistically noisy synaptic input, and so is unlikely to be a necessary component of coincidence detection. We next investigate the effect of calcium spikes in propagation of spiking activities in a feed-forward network (FFN) embedded in a balanced recurrent network. The excitatory neurons in the network are again connected to either just the distal, or both somatic and distal compartments. With purely distal connectivity, activity propagation is stable and distinguishable for a large range of recurrent synaptic strengths if the feed-forward connections are sufficiently strong, but propagation does not occur in the absence of calcium spikes. When connections are made to both the somatic and the distal compartments, activity propagation is achieved for neurons with active calcium dynamics at a much smaller number of neurons per pool, compared to a network of passive neurons, but quickly becomes unstable as the strength of recurrent synapses increases. Activity propagation at higher scaling factors can be stabilized by increasing network inhibition or introducing short term depression in the excitatory synapses, but the signal to noise ratio remains low. Our results demonstrate that the interaction of synchrony with dendritic spiking mechanisms can have profound consequences for the dynamics on the single neuron and network level.

  6. Spiking, Bursting, and Population Dynamics in a Network of Growth Transform Neurons.

    PubMed

    Gangopadhyay, Ahana; Chakrabartty, Shantanu

    2018-06-01

    This paper investigates the dynamical properties of a network of neurons, each of which implements an asynchronous mapping based on polynomial growth transforms. In the first part of the paper, we present a geometric approach for visualizing the dynamics of the network, in which each neuron traverses a trajectory in a dual optimization space while the network itself traverses a trajectory in an equivalent primal optimization space. We show that as the network learns to solve basic classification tasks, different choices of primal-dual mapping produce unique but interpretable neural dynamics such as noise shaping, spiking, and bursting. While the proposed framework is general, in this paper we demonstrate its use for designing support vector machines (SVMs) that exhibit noise-shaping properties similar to those of sigma-delta modulators, and for designing SVMs that learn to encode information using spikes and bursts. It is demonstrated that the emergent switching, spiking, and burst dynamics produced by each neuron encode its respective margin of separation from a classification hyperplane whose parameters are encoded by the network population dynamics. We believe that the proposed growth transform neuron model and the underlying geometric framework could serve as an important tool for connecting well-established machine learning algorithms like SVMs to neuromorphic principles like spiking, bursting, population encoding, and noise shaping.

  7. Vulnerable Parkin Loss-of-Function Drosophila Dopaminergic Neurons Have Advanced Mitochondrial Aging, Mitochondrial Network Loss and Transiently Reduced Autophagosome Recruitment.

    PubMed

    Cackovic, Juliana; Gutierrez-Luke, Susana; Call, Gerald B; Juba, Amber; O'Brien, Stephanie; Jun, Charles H; Buhlman, Lori M

    2018-01-01

    Selective degeneration of substantia nigra dopaminergic (DA) neurons is a hallmark pathology of familial Parkinson's disease (PD). While the mechanism of degeneration is elusive, abnormalities in mitochondrial function and turnover are strongly implicated. An Autosomal Recessive-Juvenile Parkinsonism (AR-JP) Drosophila melanogaster model exhibits DA neurodegeneration as well as aberrant mitochondrial dynamics and function. Disruptions in mitophagy have been observed in parkin loss-of-function models, and changes in mitochondrial respiration have been reported in patient fibroblasts. Whether loss of parkin causes selective DA neurodegeneration in vivo as a result of lost or decreased mitophagy is unknown. This study employs fluorescent constructs expressed in Drosophila DA neurons that are functionally homologous to those of the mammalian substantia nigra. We provide evidence that degenerating DA neurons in parkin loss-of-function mutant flies have advanced mitochondrial aging, and that mitochondrial networks are fragmented and contain swollen organelles. We also found that mitophagy initiation is decreased in park (Drosophila parkin/PARK2 ortholog) homozygous mutants, but autophagosome formation is unaffected, and mitochondrial network volumes are decreased. As the fly ages, autophagosome recruitment becomes similar to control, while mitochondria continue to show signs of damage, and climbing deficits persist. Interestingly, aberrant mitochondrial morphology, aging and mitophagy initiation were not observed in DA neurons that do not degenerate. Our results suggest that parkin is important for mitochondrial homeostasis in vulnerable Drosophila DA neurons, and that loss of parkin-mediated mitophagy may play a role in degeneration of relevant DA neurons or motor deficits in this model.

  8. Familiarity Detection is an Intrinsic Property of Cortical Microcircuits with Bidirectional Synaptic Plasticity.

    PubMed

    Zhang, Xiaoyu; Ju, Han; Penney, Trevor B; VanDongen, Antonius M J

    2017-01-01

    Humans instantly recognize a previously seen face as "familiar." To deepen our understanding of familiarity-novelty detection, we simulated biologically plausible neural network models of generic cortical microcircuits consisting of spiking neurons with random recurrent synaptic connections. NMDA receptor (NMDAR)-dependent synaptic plasticity was implemented to allow for unsupervised learning and bidirectional modifications. Network spiking activity evoked by sensory inputs consisting of face images altered synaptic efficacy, which resulted in the network responding more strongly to a previously seen face than a novel face. Network size determined how many faces could be accurately recognized as familiar. When the simulated model became sufficiently complex in structure, multiple familiarity traces could be retained in the same network by forming partially-overlapping subnetworks that differ slightly from each other, thereby resulting in a high storage capacity. Fisher's discriminant analysis was applied to identify critical neurons whose spiking activity predicted familiar input patterns. Intriguingly, as sensory exposure was prolonged, the selected critical neurons tended to appear at deeper layers of the network model, suggesting recruitment of additional circuits in the network for incremental information storage. We conclude that generic cortical microcircuits with bidirectional synaptic plasticity have an intrinsic ability to detect familiar inputs. This ability does not require a specialized wiring diagram or supervision and can therefore be expected to emerge naturally in developing cortical circuits.

  9. From neurons to epidemics: How trophic coherence affects spreading processes.

    PubMed

    Klaise, Janis; Johnson, Samuel

    2016-06-01

    Trophic coherence, a measure of the extent to which the nodes of a directed network are organised in levels, has recently been shown to be closely related to many structural and dynamical aspects of complex systems, including graph eigenspectra, the prevalence or absence of feedback cycles, and linear stability. Furthermore, non-trivial trophic structures have been observed in networks of neurons, species, genes, metabolites, cellular signalling, concatenated words, P2P users, and world trade. Here, we consider two simple yet apparently quite different dynamical models, one a susceptible-infected-susceptible epidemic model adapted to include complex contagion and the other an Amari-Hopfield neural network, and show that in both cases the related spreading processes are modulated in similar ways by the trophic coherence of the underlying networks. To do this, we propose a network assembly model which can generate structures with tunable trophic coherence, whose limiting cases are perfectly stratified networks and random graphs. We find that trophic coherence can exert a qualitative change in spreading behaviour, determining whether a pulse of activity will percolate through the entire network or remain confined to a subset of nodes, and whether such activity will quickly die out or endure indefinitely. These results could be important for our understanding of phenomena such as epidemics, rumours, shocks to ecosystems, neuronal avalanches, and many other spreading processes.
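
    Trophic coherence itself is straightforward to compute: solve a linear system for the trophic levels, then take the spread of level differences across edges (the incoherence parameter q). A sketch, assuming the standard definitions from this literature (basal nodes have level 1; every other node sits one level above the mean of its in-neighbors):

      import numpy as np

      def trophic_incoherence(A):
          """Trophic levels s and incoherence parameter q for a directed
          graph with adjacency A (A[i, j] = 1 for an edge i -> j)."""
          k_in = A.sum(axis=0)
          basal = k_in == 0
          M = np.diag(np.where(basal, 1, k_in)).astype(float)
          M -= np.where(basal[:, None], 0, A.T)   # basal rows stay s = 1
          s = np.linalg.solve(M, np.where(basal, 1, k_in).astype(float))
          src, dst = np.nonzero(A)
          x = s[dst] - s[src]                      # trophic distance per edge
          # x has mean 1 by construction; q is its standard deviation
          return s, float(np.sqrt(max(np.mean(x**2) - 1.0, 0.0)))

      # Perfectly stratified 3-level graph: q should be 0
      A = np.zeros((6, 6))
      for i, j in [(0, 2), (1, 2), (1, 3), (2, 4), (3, 4), (3, 5)]:
          A[i, j] = 1
      s, q = trophic_incoherence(A)
      print("levels:", s, "q =", q)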

  10. Familiarity Detection is an Intrinsic Property of Cortical Microcircuits with Bidirectional Synaptic Plasticity

    PubMed Central

    2017-01-01

    Humans instantly recognize a previously seen face as “familiar.” To deepen our understanding of familiarity-novelty detection, we simulated biologically plausible neural network models of generic cortical microcircuits consisting of spiking neurons with random recurrent synaptic connections. NMDA receptor (NMDAR)-dependent synaptic plasticity was implemented to allow for unsupervised learning and bidirectional modifications. Network spiking activity evoked by sensory inputs consisting of face images altered synaptic efficacy, which resulted in the network responding more strongly to a previously seen face than a novel face. Network size determined how many faces could be accurately recognized as familiar. When the simulated model became sufficiently complex in structure, multiple familiarity traces could be retained in the same network by forming partially-overlapping subnetworks that differ slightly from each other, thereby resulting in a high storage capacity. Fisher’s discriminant analysis was applied to identify critical neurons whose spiking activity predicted familiar input patterns. Intriguingly, as sensory exposure was prolonged, the selected critical neurons tended to appear at deeper layers of the network model, suggesting recruitment of additional circuits in the network for incremental information storage. We conclude that generic cortical microcircuits with bidirectional synaptic plasticity have an intrinsic ability to detect familiar inputs. This ability does not require a specialized wiring diagram or supervision and can therefore be expected to emerge naturally in developing cortical circuits. PMID:28534043

  11. Network feedback regulates motor output across a range of modulatory neuron activity.

    PubMed

    Spencer, Robert M; Blitz, Dawn M

    2016-06-01

    Modulatory projection neurons alter network neuron synaptic and intrinsic properties to elicit multiple different outputs. Sensory and other inputs elicit a range of modulatory neuron activity that is further shaped by network feedback, yet little is known regarding how the impact of network feedback on modulatory neurons regulates network output across a physiological range of modulatory neuron activity. Identified network neurons, a fully described connectome, and a well-characterized, identified modulatory projection neuron enabled us to address this issue in the crab (Cancer borealis) stomatogastric nervous system. The modulatory neuron modulatory commissural neuron 1 (MCN1) activates and modulates two networks that generate rhythms via different cellular mechanisms and at distinct frequencies. MCN1 is activated at rates of 5-35 Hz in vivo and in vitro. Additionally, network feedback elicits MCN1 activity time-locked to motor activity. We asked how network activation, rhythm speed, and neuron activity levels are regulated by the presence or absence of network feedback across a physiological range of MCN1 activity rates. There were both similarities and differences in responses of the two networks to MCN1 activity. Many parameters in both networks were sensitive to network feedback effects on MCN1 activity. However, for most parameters, MCN1 activity rate did not determine the extent to which network output was altered by the addition of network feedback. These data demonstrate that the influence of network feedback on modulatory neuron activity is an important determinant of network output and feedback can be effective in shaping network output regardless of the extent of network modulation. Copyright © 2016 the American Physiological Society.

  12. A generalized analog implementation of piecewise linear neuron models using CCII building blocks.

    PubMed

    Soleimani, Hamid; Ahmadi, Arash; Bavandpour, Mohammad; Sharifipoor, Ozra

    2014-03-01

    This paper presents a set of reconfigurable analog implementations of piecewise linear spiking neuron models using second-generation current conveyor (CCII) building blocks. With the same topology and circuit elements, and without W/L modification (which is impossible after circuit fabrication), these circuits can produce different behaviors, similar to those of biological neurons, both for a single neuron and for a network of neurons, simply by tuning reference current and voltage sources. The models are investigated in terms of analog implementation feasibility and cost, targeting large-scale hardware implementations. Results show that performance, area, and accuracy can be traded off among these models to obtain the best fit for a given application. Simulation results are presented for different neuron behaviors with CMOS 350 nm technology. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. A principled dimension-reduction method for the population density approach to modeling networks of neurons with synaptic dynamics.

    PubMed

    Ly, Cheng

    2013-10-01

    The population density approach to neural network modeling has been utilized in a variety of contexts. The idea is to group many similar noisy neurons into populations and to track, for each population, the probability density function giving the proportion of neurons in a particular state, rather than simulating individual neurons (i.e., Monte Carlo). It is commonly used both for analytic insight and as a time-saving computational tool. The main shortcoming of this method is that when realistic attributes are incorporated in the underlying neuron model, the dimension of the probability density function increases, leading to intractable equations or, at best, computationally intensive simulations. Thus, developing principled dimension-reduction methods is essential for the robustness of these powerful methods, and such methods would be of great value as pragmatic tools for the larger theoretical neuroscience community. For exposition of this method, we consider a single uncoupled population of leaky integrate-and-fire neurons receiving external excitatory synaptic input only. We present a dimension-reduction method that reduces a two-dimensional partial integro-differential equation to a computationally efficient one-dimensional system and gives qualitatively accurate results in both the steady-state and nonequilibrium regimes. The method, termed the modified mean-field method, is based entirely on the governing equations and not on any auxiliary variables or parameters, and it does not require fine-tuning. The principles of the modified mean-field method have potential applicability to more realistic (i.e., higher-dimensional) neural networks.

  14. Compensation for variable intrinsic neuronal excitability by circuit-synaptic interactions

    PubMed Central

    Grashow, Rachel; Brookings, Ted; Marder, Eve

    2010-01-01

    Recent theoretical and experimental work indicates that neurons tune themselves to maintain target levels of excitation by modulating ion channel expression and synaptic strengths. As a result, functionally equivalent circuits can produce similar activity despite disparate underlying network and cellular properties. To experimentally test the extent to which synaptic and intrinsic conductances can produce target activity in the presence of variability in neuronal intrinsic properties, we used the dynamic clamp to create hybrid two-cell circuits built from four types of stomatogastric (STG) neurons coupled to the same model Morris-Lecar neuron by reciprocal inhibition. We measured six intrinsic properties (input resistance, minimum membrane potential, firing rate in response to +1 nA of injected current, slope of the FI curve, spike height and spike voltage threshold) of Dorsal Gastric (DG), Gastric Mill (GM), Lateral Pyloric (LP) and Pyloric Dilator (PD) neurons from male crabs, Cancer borealis. The intrinsic properties varied two- to seven-fold in each cell type. We coupled each biological neuron to the Morris-Lecar model with seven different values of inhibitory synaptic conductance, and also used the dynamic clamp to add seven different values of an artificial h-conductance, thus creating 49 different circuits for each biological neuron. Despite the variability in intrinsic excitability, networks formed from each neuron produced similar circuit performance at some values of synaptic and h-conductances. This work experimentally confirms results from previous modeling studies: tuning synaptic and intrinsic conductances can yield similar circuit output from neurons with variable intrinsic excitability. PMID:20610748

  15. The combination of circle topology and leaky integrator neurons remarkably improves the performance of echo state network on time series prediction.

    PubMed

    Xue, Fangzheng; Li, Qian; Li, Xiumin

    2017-01-01

    Recently, the echo state network (ESN) has attracted a great deal of attention due to its high accuracy and efficient learning performance. Compared with the traditional random structure and classical sigmoid units, simple circle topology and leaky integrator neurons have advantages for the reservoir computing of an ESN. In this paper, we propose a new ESN model with both a circle reservoir structure and leaky integrator units. Comparing the prediction capability of four ESN models (classical ESN, circle ESN, traditional leaky integrator ESN, and circle leaky integrator ESN) on the Mackey-Glass chaotic time series, we find that our circle leaky integrator ESN shows significantly better performance than the other ESNs, with roughly a two-order-of-magnitude reduction in predictive error. Moreover, this model has a stronger ability to approximate nonlinear dynamics and to resist noise than the conventional ESN and ESNs with only a simple circle structure or leaky integrator neurons. Our results show that the combination of circle topology and leaky integrator neurons can remarkably increase dynamical diversity while decreasing the correlation of reservoir states, which contributes to the significant improvement in the computational performance of echo state networks on time series prediction.
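
    The model's two ingredients are easy to state in code: a reservoir whose recurrent matrix is a simple ring of identical weights, and leaky-integrator state updates. A minimal sketch with a ridge-regression readout on a toy one-step-ahead prediction task (all hyperparameters are illustrative):

      import numpy as np

      rng = np.random.default_rng(4)

      N, T, washout, a = 200, 2000, 200, 0.3        # reservoir size, leak rate
      W = np.zeros((N, N))
      W[np.arange(N), (np.arange(N) + 1) % N] = 0.9  # circle (ring) reservoir
      W_in = rng.uniform(-0.5, 0.5, N)

      # Toy input signal; the paper uses the Mackey-Glass series instead
      u = np.sin(np.arange(T + 1) * 0.2) * np.sin(np.arange(T + 1) * 0.031)
      x = np.zeros(N)
      states = np.empty((T, N))
      for t in range(T):
          # Leaky integrator update of the reservoir state
          x = (1 - a) * x + a * np.tanh(W @ x + W_in * u[t])
          states[t] = x

      # Ridge regression of the next input value on the reservoir state
      X, y = states[washout:], u[washout + 1:T + 1]
      W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
      print("train MSE:", np.mean((X @ W_out - y) ** 2))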

  16. Artificial neural network modeling and optimization of ultrahigh pressure extraction of green tea polyphenols.

    PubMed

    Xi, Jun; Xue, Yujing; Xu, Yinxiang; Shen, Yuhong

    2013-11-01

    In this study, the ultrahigh pressure extraction of green tea polyphenols was modeled and optimized by a three-layer artificial neural network. A feed-forward neural network trained with an error back-propagation algorithm was used to evaluate the effects of pressure, liquid/solid ratio, and ethanol concentration on the total phenolic content of green tea extracts. The neural network coupled with genetic algorithms was also used to optimize the conditions needed to obtain the highest yield of tea polyphenols. The optimal architecture of the artificial neural network model was a feed-forward network with three input neurons, one hidden layer with eight neurons, and an output layer with a single neuron. The trained network gave a minimum MSE of 0.03 and a maximum R(2) of 0.9571, which implied good agreement between predicted and actual values and confirmed good generalization of the network. Based on the combination of the neural network and genetic algorithms, the optimum extraction conditions for the highest yield of green tea polyphenols were determined as follows: 498.8 MPa for pressure, 20.8 mL/g for liquid/solid ratio, and 53.6% for ethanol concentration. The total phenolic content measured under the predicted optimum extraction conditions was 582.4 ± 0.63 mg/g DW, which was well matched with the predicted value (597.2 mg/g DW). This suggests that the artificial neural network model described in this work is an efficient quantitative tool to predict the extraction efficiency of green tea polyphenols. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
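
    The workflow (fit a small feed-forward surrogate to extraction data, then search the fitted surface for the optimum) can be sketched as follows. The data here are synthetic, a dense grid search stands in for the genetic algorithm, and scikit-learn's MLPRegressor replaces the custom back-propagation network:

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(5)

      # Synthetic (pressure MPa, liquid/solid mL/g, ethanol %) -> yield data,
      # peaked near the optimum the paper reports
      X = rng.uniform([100, 10, 30], [600, 30, 90], size=(200, 3))
      y = (-((X[:, 0] - 500) / 200) ** 2 - ((X[:, 1] - 21) / 8) ** 2
           - ((X[:, 2] - 54) / 20) ** 2 + rng.normal(0, 0.05, 200))

      # 3-8-1 feed-forward surrogate, as in the paper's architecture
      net = MLPRegressor(hidden_layer_sizes=(8,), activation='logistic',
                         solver='lbfgs', max_iter=5000, random_state=0)
      net.fit(X / X.max(axis=0), y)   # crude input scaling

      # Dense grid search over the fitted surface (GA stand-in)
      grid = np.stack(np.meshgrid(np.linspace(100, 600, 30),
                                  np.linspace(10, 30, 30),
                                  np.linspace(30, 90, 30)), -1).reshape(-1, 3)
      best = grid[np.argmax(net.predict(grid / X.max(axis=0)))]
      print("predicted optimum (MPa, mL/g, % ethanol):", best)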

  17. Modeling complex tone perception: grouping harmonics with combination-sensitive neurons.

    PubMed

    Medvedev, Andrei V; Chiao, Faye; Kanwal, Jagmeet S

    2002-06-01

    Perception of complex communication sounds is a major function of the auditory system. To create a coherent percept of these sounds, the auditory system may instantaneously group or bind multiple harmonics within complex sounds. This perceptual strategy simplifies further processing of complex sounds and facilitates their meaningful integration with other sensory inputs. Based on experimental data and a realistic model, we propose that associative learning of combinations of harmonic frequencies, together with nonlinear facilitation of responses to those combinations (also referred to as "combination sensitivity"), is important for spectral grouping. In our model, we simulated combination sensitivity using Hebbian and associative types of synaptic plasticity in auditory neurons. We also provided a parallel tonotopic input that converges and diverges within the network. Neurons in higher-order layers of the network exhibited an emergent property of multifrequency tuning that is consistent with experimental findings. Furthermore, this network had the capacity to "recognize" the pitch or fundamental frequency of a harmonic tone complex even when the fundamental frequency itself was missing.

  18. Irregular synchronous activity in stochastically-coupled networks of integrate-and-fire neurons.

    PubMed

    Lin, J K; Pawelzik, K; Ernst, U; Sejnowski, T J

    1998-08-01

    We investigate the spatial and temporal aspects of firing patterns in a network of integrate-and-fire neurons arranged in a one-dimensional ring topology. The coupling is stochastic and shaped like a Mexican hat, with local excitation and lateral inhibition. With perfect precision in the couplings, the attractors of activity in the network occur at every position in the ring. Inhomogeneities in the coupling break the translational invariance of the localized attractors and lead to synchronization within highly active as well as weakly active clusters. The interspike interval variability is high, consistent with recent observations of spike time distributions in visual cortex. The robustness of our results is demonstrated with more realistic simulations on a network of McGregor neurons, which model conductance changes and after-hyperpolarization potassium currents.
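
    The stochastic Mexican-hat coupling is simple to construct: connection probabilities fall off with ring distance, with narrow excitation and broader inhibition. The profile widths and weights below are illustrative assumptions, not the paper's parameters:

      import numpy as np

      rng = np.random.default_rng(6)

      N = 100
      d = np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])
      d = np.minimum(d, N - d)                       # distance on the ring

      # Connection *probabilities* depend on distance (stochastic coupling)
      p_exc = 0.8 * np.exp(-(d / 4.0) ** 2)          # narrow excitation
      p_inh = 0.3 * np.exp(-(d / 12.0) ** 2)         # broad inhibition
      W = 1.0 * (rng.random((N, N)) < p_exc) \
        - 1.0 * (rng.random((N, N)) < p_inh)
      np.fill_diagonal(W, 0.0)

      # Expected Mexican-hat profile: net excitation nearby, net
      # inhibition farther out along the ring
      profile = np.array([W[d == k].mean() for k in range(1, N // 2)])
      print("net coupling at distance 1, 10, 30:",
            profile[0], profile[9], profile[29])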

  19. Event-driven simulations of nonlinear integrate-and-fire neurons.

    PubMed

    Tonnelier, Arnaud; Belmabrouk, Hana; Martinez, Dominique

    2007-12-01

    Event-driven strategies have been used to simulate spiking neural networks exactly. Previous work is limited to linear integrate-and-fire neurons. In this note, we extend event-driven schemes to a class of nonlinear integrate-and-fire models. Results are presented for the quadratic integrate-and-fire model with instantaneous or exponential synaptic currents. Extensions to conductance-based currents and exponential integrate-and-fire neurons are discussed.
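
    The essence of the event-driven strategy is that the quadratic integrate-and-fire equation dv/dt = v^2 + I has a closed-form solution, so both the next spike time and the state at any event can be computed exactly, with no time grid. A sketch for an all-to-all network with instantaneous synapses (parameters are arbitrary, and the paper's exponential synaptic currents are not included):

      import math

      def time_to_spike(v, I):
          """Exact time until v -> +inf for dv/dt = v**2 + I (QIF, I > 0)."""
          s = math.sqrt(I)
          return (math.pi / 2 - math.atan(v / s)) / s

      def advance(v, I, dt):
          """Exact subthreshold evolution of the QIF over an interval dt."""
          s = math.sqrt(I)
          return s * math.tan(math.atan(v / s) + s * dt)

      N, I, w, v_reset, t_end = 5, 0.5, 0.15, -1.0, 20.0
      v = [v_reset + 0.1 * i for i in range(N)]
      t, spikes = 0.0, []
      while True:
          waits = [time_to_spike(vi, I) for vi in v]
          k = min(range(N), key=waits.__getitem__)   # next neuron to fire
          if t + waits[k] > t_end:
              break
          t += waits[k]
          # Advance every other neuron exactly to the spike time, apply the
          # instantaneous synaptic jump, and reset the spiking neuron
          v = [advance(vi, I, waits[k]) + w if i != k else v_reset
               for i, vi in enumerate(v)]
          spikes.append((t, k))
      print(len(spikes), "spikes; last:", spikes[-1])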

  20. A Complex-Valued Firing-Rate Model That Approximates the Dynamics of Spiking Networks

    PubMed Central

    Schaffer, Evan S.; Ostojic, Srdjan; Abbott, L. F.

    2013-01-01

    Firing-rate models provide an attractive approach for studying large neural networks because they can be simulated rapidly and are amenable to mathematical analysis. Traditional firing-rate models assume a simple form in which the dynamics are governed by a single time constant. These models fail to replicate certain dynamic features of populations of spiking neurons, especially those involving synchronization. We present a complex-valued firing-rate model derived from an eigenfunction expansion of the Fokker-Planck equation and apply it to the linear, quadratic and exponential integrate-and-fire models. Despite being almost as simple as a traditional firing-rate description, this model can reproduce firing-rate dynamics due to partial synchronization of the action potentials in a spiking model, and it successfully predicts the transition to spike synchronization in networks of coupled excitatory and inhibitory neurons. PMID:24204236

  1. The emergence of polychronization and feature binding in a spiking neural network model of the primate ventral visual system.

    PubMed

    Eguchi, Akihiro; Isbister, James B; Ahmad, Nasir; Stringer, Simon

    2018-07-01

    We present a hierarchical neural network model, in which subpopulations of neurons develop fixed and regularly repeating temporal chains of spikes (polychronization), which respond specifically to randomized Poisson spike trains representing the input training images. The performance is improved by including top-down and lateral synaptic connections, as well as introducing multiple synaptic contacts between each pair of pre- and postsynaptic neurons, with different synaptic contacts having different axonal delays. Spike-timing-dependent plasticity thus allows the model to select the most effective axonal transmission delay between neurons. Furthermore, neurons representing the binding relationship between low-level and high-level visual features emerge through visually guided learning. This begins to provide a way forward to solving the classic feature binding problem in visual neuroscience and leads to a new hypothesis concerning how information about visual features at every spatial scale may be projected upward through successive neuronal layers. We name this hypothetical upward projection of information the "holographic principle." (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  2. Efficient digital implementation of a conductance-based globus pallidus neuron and the dynamics analysis

    NASA Astrophysics Data System (ADS)

    Yang, Shuangming; Wei, Xile; Deng, Bin; Liu, Chen; Li, Huiyan; Wang, Jiang

    2018-03-01

    Balancing the biological plausibility of dynamical activity against computational efficiency is one of the challenging problems in computational neuroscience and neural system engineering. This paper proposes a set of efficient methods for the hardware realization of conductance-based neuron models with the relevant dynamics, aiming to reproduce biological behaviors at low implementation cost on digital programmable platforms; the methods apply to a wide range of conductance-based neuron models. Modified globus pallidus (GP) neuron models suited to efficient hardware implementation are presented that reproduce reliable pallidal dynamics, which decode information from the basal ganglia and regulate movement-disorder-related voluntary activity. Implementation results on a field-programmable gate array (FPGA) demonstrate that the proposed techniques and models reduce resource costs significantly while reproducing the biological dynamics accurately. In addition, biological behaviors under weak network coupling are explored on the proposed platform, and a theoretical analysis of the biological characteristics of the structured pallidal oscillator and network is provided. The implementation techniques are an essential step toward large-scale neural networks for exploring dynamical mechanisms in real time. Furthermore, the proposed methodology makes the FPGA-based system a powerful platform for investigating neurodegenerative diseases and for real-time control of bio-inspired neuro-robotics.

  3. An ultra-low-voltage electronic implementation of inertial neuron model with nonmonotonous Liao's activation function.

    PubMed

    Kant, Nasir Ali; Dar, Mohamad Rafiq; Khanday, Farooq Ahmad

    2015-01-01

    The output of every neuron in a neural network is specified by the employed activation function (AF), which therefore forms the heart of the network. In the design of artificial neural networks (ANNs), a hardware approach is preferred over a software one because it promises full utilization of the application potential of ANNs. Besides some arithmetic blocks, designing the AF in hardware is therefore the most important step in designing an ANN. Hardware AF designs should be compatible with modern Very Large Scale Integration (VLSI) design techniques: they should be implemented purely in Metal Oxide Semiconductor (MOS) technology in order to be compatible with digital designs, provide electronic tunability, and be able to operate at ultra-low voltage. Companding is one of the promising circuit design techniques for achieving these goals. In this paper, a 0.5 V design of Liao's AF using the sinh-domain technique is introduced. The function is further tested by implementing an inertial neuron model. The performance of the AF and of the inertial neuron model has been evaluated through simulation results, using the PSPICE software with MOS transistor models from the 0.18-μm Taiwan Semiconductor Manufacturing Company Complementary Metal Oxide Semiconductor (TSMC CMOS) process.

  4. Interplay between Graph Topology and Correlations of Third Order in Spiking Neuronal Networks.

    PubMed

    Jovanović, Stojan; Rotter, Stefan

    2016-06-01

    The study of processes evolving on networks has recently become a very popular research field, not only because of the rich mathematical theory that underpins it, but also because of its many possible applications, a number of them in the field of biology. Indeed, molecular signaling pathways, gene regulation, predator-prey interactions and the communication between neurons in the brain can be seen as examples of networks with complex dynamics. The properties of such dynamics depend largely on the topology of the underlying network graph. In this work, we want to answer the following question: knowing network connectivity, what can be said about the level of third-order correlations that will characterize the network dynamics? We consider a linear point process as a model for pulse-coded, or spiking, activity in a neuronal network. Using recent results from the theory of such processes, we study third-order correlations between spike trains in such a system and explain which features of the network graph (i.e., which topological motifs) are responsible for their emergence. Comparing two different models of network topology, random networks of Erdős-Rényi type and networks with highly interconnected hubs, we find that, in random networks, the average measure of third-order correlations does not depend on the local connectivity properties, but rather on global parameters, such as the connection probability. This, however, ceases to be the case in networks with a geometric out-degree distribution, where topological specificities have a strong impact on average correlations.
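
    A minimal sketch of a linear point-process-like spiking model, here discretized in time, together with an empirical third-order cumulant between three spike trains; the connectivity and rates are placeholders, and the paper's analysis is analytical rather than simulation-based.

      import numpy as np

      # Discrete-time linear ("Hawkes-like") spiking network: each bin's
      # firing probability is a baseline plus a weighted sum of last-bin
      # spikes. Parameters are illustrative.
      rng = np.random.default_rng(2)
      N, T, mu = 50, 100_000, 0.02
      W = (rng.random((N, N)) < 0.1) * 0.05   # Erdős-Rényi-style coupling
      np.fill_diagonal(W, 0.0)

      s = np.zeros(N)
      counts = np.empty((T, N), dtype=np.int8)
      for t in range(T):
          rate = np.clip(mu + W @ s, 0.0, 1.0)  # linear rate, kept in [0, 1]
          s = (rng.random(N) < rate).astype(float)
          counts[t] = s

      def third_cumulant(x, y, z):
          """Empirical third-order central cross-moment of three trains."""
          return np.mean((x - x.mean()) * (y - y.mean()) * (z - z.mean()))

      print(third_cumulant(*(counts[:, i].astype(float) for i in (0, 1, 2))))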

  5. Bidirectional Coupling between Astrocytes and Neurons Mediates Learning and Dynamic Coordination in the Brain: A Multiple Modeling Approach

    PubMed Central

    Wade, John J.; McDaid, Liam J.; Harkin, Jim; Crunelli, Vincenzo; Kelso, J. A. Scott

    2011-01-01

    In recent years, research has suggested that astrocyte networks, in addition to their nutrient and waste processing functions, regulate both structural and synaptic plasticity. Understanding the biological mechanisms that underpin such plasticity requires the development of cell-level models that capture the mutual interaction between astrocytes and neurons. This paper presents a detailed model of bidirectional signaling between astrocytes and neurons (the astrocyte-neuron model or AN model) which yields new insights into the computational role of astrocyte-neuronal coupling. From a set of modeling studies we demonstrate two significant findings. First, spatial signaling via astrocytes can relay a "learning signal" to remote synaptic sites: results show that slow inward currents cause synchronized postsynaptic activity in remote neurons and subsequently allow Spike-Timing-Dependent Plasticity based learning to occur at the associated synapses. Second, bidirectional communication between neurons and astrocytes underpins dynamic coordination between neuron clusters. Although our composite AN model is presently applied to simplified neural structures and limited to coordination between localized neurons, the principle (which embodies structural, functional and dynamic complexity), and the modeling strategy may be extended to coordination among remote neuron clusters. PMID:22242121

  6. Soft chitosan microbeads scaffold for 3D functional neuronal networks.

    PubMed

    Tedesco, Maria Teresa; Di Lisa, Donatella; Massobrio, Paolo; Colistra, Nicolò; Pesce, Mattia; Catelani, Tiziano; Dellacasa, Elena; Raiteri, Roberto; Martinoia, Sergio; Pastorino, Laura

    2018-02-01

    The availability of 3D biomimetic in vitro neuronal networks of mammalian neurons represents a pivotal step for the development of brain-on-a-chip experimental models to study neuronal (dys)functions and particularly neuronal connectivity. The use of hydrogel-based scaffolds for 3D cell cultures has been extensively studied in recent years. However, limited work on biomimetic 3D neuronal cultures has been carried out to date. In this respect, here we investigated the use of a widely used polysaccharide, chitosan (CHI), for the fabrication of a microbead-based 3D scaffold to be coupled to primary neuronal cells. CHI microbeads were characterized by optical and atomic force microscopy. The cell/scaffold interaction was characterized in depth by transmission electron microscopy and by immunocytochemistry using confocal microscopy. Finally, a preliminary electrophysiological characterization by micro-electrode arrays was carried out.

  7. Noise effects on robust synchronization of a small pacemaker neuronal ensemble via nonlinear controller: electronic circuit design.

    PubMed

    Megam Ngouonkadi, Elie Bertrand; Fotsin, Hilaire Bertrand; Kabong Nono, Martial; Louodop Fotso, Patrick Herve

    2016-10-01

    In this paper, we report on the synchronization of a pacemaker neuronal ensemble consisting of an AB neuron electrically coupled to two PD neurons. By virtue of this electrical coupling, they can fire synchronous bursts of action potentials. An external master neuron is used to impose the desired dynamics on the whole system via a nonlinear controller, obtained by combining sliding mode and feedback control. The proposed controller is able to offset uncertainties in the synchronized systems. We show how noise affects the synchronization of the pacemaker neuronal ensemble, and briefly discuss its potential benefits for our synchronization scheme. An extended Hindmarsh-Rose neuronal model is used to represent the single-cell dynamics of the network. Numerical simulations and a PSpice implementation of the synchronization scheme are presented. We find that the proposed controller reduces the stochastic resonance of the network as its gain increases.
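
    For reference, the standard three-variable Hindmarsh-Rose equations (the paper uses an extended variant, so this is a baseline illustration only), integrated with forward Euler using the classic bursting parameter set.

      # Standard Hindmarsh-Rose neuron:
      #   dx/dt = y - a*x^3 + b*x^2 - z + I   (membrane potential)
      #   dy/dt = c - d*x^2 - y               (fast recovery)
      #   dz/dt = r*(s*(x - x_rest) - z)      (slow adaptation -> bursting)
      a, b, c, d = 1.0, 3.0, 1.0, 5.0
      r, s_hr, x_rest, I = 0.006, 4.0, -1.6, 3.0

      def hr_step(x, y, z, dt=0.01):
          dx = y - a * x**3 + b * x**2 - z + I
          dy = c - d * x**2 - y
          dz = r * (s_hr * (x - x_rest) - z)
          return x + dt * dx, y + dt * dy, z + dt * dz

      x, y, z = -1.5, 0.0, 2.0
      trace = []
      for _ in range(200_000):
          x, y, z = hr_step(x, y, z)
          trace.append(x)
      # `trace` shows square-wave bursting for these classic parameters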

  8. Neuron array with plastic synapses and programmable dendrites.

    PubMed

    Ramakrishnan, Shubha; Wunderlich, Richard; Hasler, Jennifer; George, Suma

    2013-10-01

    We describe a novel neuromorphic chip architecture that models neurons for efficient computation. Traditional neuron array chips are large-scale systems interfaced with the address-event representation (AER) protocol for implementing intra- or inter-chip connectivity. We present a chip that uses AER for inter-chip communication but fast, reconfigurable FPGA-style routing with local memory for intra-chip connectivity. We model neurons with biologically realistic channel models, synapses and dendrites. This chip is suitable for small-scale network simulations and can also be used for sequence detection, utilizing the directional selectivity properties of dendrites, ultimately for use in word recognition.

  9. Effect of Heterogeneity on Decorrelation Mechanisms in Spiking Neural Networks: A Neuromorphic-Hardware Study

    NASA Astrophysics Data System (ADS)

    Pfeil, Thomas; Jordan, Jakob; Tetzlaff, Tom; Grübl, Andreas; Schemmel, Johannes; Diesmann, Markus; Meier, Karlheinz

    2016-04-01

    High-level brain function, such as memory, classification, or reasoning, can be realized by means of recurrent networks of simplified model neurons. Analog neuromorphic hardware constitutes a fast and energy-efficient substrate for the implementation of such neural computing architectures in technical applications and neuroscientific research. The functional performance of neural networks is often critically dependent on the level of correlations in the neural activity. In finite networks, correlations are typically inevitable due to shared presynaptic input. Recent theoretical studies have shown that inhibitory feedback, abundant in biological neural networks, can actively suppress these shared-input correlations and thereby enable neurons to fire nearly independently. For networks of spiking neurons, the decorrelating effect of inhibitory feedback has so far been explicitly demonstrated only for homogeneous networks of neurons with linear subthreshold dynamics. Theory, however, suggests that the effect is a general phenomenon, present in any system with sufficient inhibitory feedback, irrespective of the details of the network structure or the neuronal and synaptic properties. Here, we investigate the effect of network heterogeneity on correlations in sparse, random networks of inhibitory neurons with nonlinear, conductance-based synapses. Emulations of these networks on the analog neuromorphic-hardware system Spikey allow us to test the efficiency of decorrelation by inhibitory feedback in the presence of hardware-specific heterogeneities. The configurability of the hardware substrate enables us to modulate the extent of heterogeneity in a systematic manner. We selectively study the effects of shared input and recurrent connections on correlations in membrane potentials and spike trains. Our results confirm that shared-input correlations are actively suppressed by inhibitory feedback also in highly heterogeneous networks exhibiting broad, heavy-tailed firing-rate distributions. In line with former studies, cell heterogeneities reduce shared-input correlations. Overall, however, correlations in the recurrent system can increase with the level of heterogeneity as a consequence of diminished effective negative feedback.

  10. Rich-Club Organization in Effective Connectivity among Cortical Neurons.

    PubMed

    Nigam, Sunny; Shimono, Masanori; Ito, Shinya; Yeh, Fang-Chin; Timme, Nicholas; Myroshnychenko, Maxym; Lapish, Christopher C; Tosi, Zachary; Hottowy, Pawel; Smith, Wesley C; Masmanidis, Sotiris C; Litke, Alan M; Sporns, Olaf; Beggs, John M

    2016-01-20

    The performance of complex networks, like the brain, depends on how effectively their elements communicate. Despite the importance of communication, it is virtually unknown how information is transferred in local cortical networks, consisting of hundreds of closely spaced neurons. To address this, it is important to record simultaneously from hundreds of neurons at a spacing that matches typical axonal connection distances, and at a temporal resolution that matches synaptic delays. We used a 512-electrode array (60 μm spacing) to record spontaneous activity at 20 kHz from up to 500 neurons simultaneously in slice cultures of mouse somatosensory cortex for 1 h at a time. We applied a previously validated version of transfer entropy to quantify information transfer. Similar to in vivo reports, we found an approximately lognormal distribution of firing rates. Pairwise information transfer strengths also were nearly lognormally distributed, similar to reports of synaptic strengths. Some neurons transferred and received much more information than others, which is consistent with previous predictions. Neurons with the highest outgoing and incoming information transfer were more strongly connected to each other than chance, thus forming a "rich club." We found similar results in networks recorded in vivo from rodent cortex, suggesting the generality of these findings. A rich-club structure has been found previously in large-scale human brain networks and is thought to facilitate communication between cortical regions. The discovery of a small, but information-rich, subset of neurons within cortical regions suggests that this population will play a vital role in communication, learning, and memory. Significance statement: Many studies have focused on communication networks between cortical brain regions. In contrast, very few studies have examined communication networks within a cortical region. This is the first study to combine such a large number of neurons (several hundred at a time) with such high temporal resolution (so we can know the direction of communication between neurons) for mapping networks within cortex. We found that information was not transferred equally through all neurons. Instead, ∼70% of the information passed through only 20% of the neurons. Network models suggest that this highly concentrated pattern of information transfer would be both efficient and robust to damage. Therefore, this work may help in understanding how the cortex processes information and responds to neurodegenerative diseases.
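
    A simplified, histogram-based estimate of delay-1 transfer entropy between two binary spike trains conveys the core of the measure; the study itself uses a previously validated multi-delay variant, so treat this as a sketch only.

      import numpy as np

      def transfer_entropy(x, y):
          """Delay-1 transfer entropy x -> y (bits) for binary trains:
          TE = sum p(y+, y, x) * log2[ p(y+ | y, x) / p(y+ | y) ]."""
          x_t, y_t, y_next = x[:-1], y[:-1], y[1:]
          te = 0.0
          for a in (0, 1):            # y_{t+1}
              for b in (0, 1):        # y_t
                  for c in (0, 1):    # x_t
                      p_abc = np.mean((y_next == a) & (y_t == b) & (x_t == c))
                      p_bc = np.mean((y_t == b) & (x_t == c))
                      p_ab = np.mean((y_next == a) & (y_t == b))
                      p_b = np.mean(y_t == b)
                      if min(p_abc, p_bc, p_ab, p_b) > 0:
                          te += p_abc * np.log2((p_abc / p_bc) / (p_ab / p_b))
          return te

      rng = np.random.default_rng(3)
      x = (rng.random(100_000) < 0.1).astype(int)
      y = np.roll(x, 1)                  # y copies x with a one-bin delay
      y[0] = 0
      print(transfer_entropy(x, y))      # ~0.47 bits; reverse direction ~0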

  12. Non-parametric directionality analysis - Extension for removal of a single common predictor and application to time series.

    PubMed

    Halliday, David M; Senik, Mohd Harizal; Stevenson, Carl W; Mason, Rob

    2016-08-01

    The ability to infer network structure from multivariate neuronal signals is central to computational neuroscience. Directed network analyses typically use parametric approaches based on auto-regressive (AR) models, where networks are constructed from estimates of AR model parameters. However, the validity of using low-order AR models for neurophysiological signals has been questioned. A recent article introduced a non-parametric approach to estimating directionality in bivariate data; non-parametric approaches are free from concerns over model validity. We extend the non-parametric framework to include measures of directed conditional independence, using scalar measures that decompose the overall partial correlation coefficient summatively by direction, and a set of functions that decompose the partial coherence summatively by direction. A time-domain partial correlation function allows both time and frequency views of the data to be constructed. The conditional independence estimates are conditioned on a single predictor. The framework is applied to simulated cortical neuron networks and mixtures of Gaussian time series data with known interactions, and to experimental data consisting of local field potential recordings from bilateral hippocampus in anaesthetised rats. The framework thus offers a novel non-parametric alternative for estimating directed interactions in multivariate neuronal recordings, with the flexibility to handle both spike train and time series data.

  13. Extraction of Inter-Aural Time Differences Using a Spiking Neuron Network Model of the Medial Superior Olive.

    PubMed

    Encke, Jörg; Hemmert, Werner

    2018-01-01

    The mammalian auditory system is able to extract temporal and spectral features from sound signals at the two ears. Inter-aural time differences (ITDs) are an important cue for localizing low-frequency sound sources in the horizontal plane; they are first analyzed in the medial superior olive (MSO) in the brainstem. Neural recordings of ITD tuning curves at various stages along the auditory pathway suggest that ITDs in the mammalian brainstem are not represented in the form of a Jeffress-type place code. An alternative is the hemispheric opponent-channel code, according to which ITDs are encoded as the difference in the responses of the MSO nuclei in the two hemispheres. In this study, we present a physiologically plausible spiking neuron network model of the mammalian MSO circuit and apply two different methods of extracting ITDs from arbitrary sound signals. The network model is driven by a functional model of the auditory periphery and physiological models of the cochlear nucleus and the MSO. Using a linear opponent-channel decoder, we show that the network is able to detect changes in ITD with a precision down to 10 μs and that the sensitivity of the decoder depends on the slope of the ITD-rate functions. A second approach uses an artificial neural network to predict ITDs directly from the spiking output of the MSO and auditory nerve fiber (ANF) models. Using this predictor, we show that the MSO network is able to reliably encode static and time-dependent ITDs over a large frequency range, also for complex signals such as speech.
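
    The linear opponent-channel readout can be sketched as follows: decode the ITD from the difference between the population rates of the two hemispheric channels via a fitted linear map. The sigmoidal ITD-rate functions below are synthetic stand-ins for the model's output, not the paper's data.

      import numpy as np

      rng = np.random.default_rng(4)
      itds = np.linspace(-500e-6, 500e-6, 41)        # seconds

      def hemisphere_rate(itd, sign):
          # Monotonic ITD-rate function; the slope near 0 us carries the
          # sensitivity, as noted in the abstract.
          return 100.0 / (1.0 + np.exp(-sign * itd / 100e-6))

      left = hemisphere_rate(itds, +1) + rng.normal(0, 2, itds.size)
      right = hemisphere_rate(itds, -1) + rng.normal(0, 2, itds.size)
      diff = left - right

      # Fit the linear decoder: ITD ~ k * (rate difference) + b
      k, b_fit = np.polyfit(diff, itds, 1)
      decoded = k * diff + b_fit
      print(np.sqrt(np.mean((decoded - itds) ** 2)))  # RMS error (seconds)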

  14. Challenges in Reproducibility, Replicability, and Comparability of Computational Models and Tools for Neuronal and Glial Networks, Cells, and Subcellular Structures.

    PubMed

    Manninen, Tiina; Aćimović, Jugoslava; Havela, Riikka; Teppola, Heidi; Linne, Marja-Leena

    2018-01-01

    The possibility to replicate and reproduce published research results is one of the biggest challenges in all areas of science. In computational neuroscience, there are thousands of models available. However, it is rarely possible to reimplement the models based on the information in the original publication, let alone rerun the models, simply because the model implementations have not been made publicly available. We evaluate and discuss the comparability of a versatile choice of simulation tools: tools for biochemical reactions and spiking neuronal networks, and relatively new tools for growth in cell cultures. The replicability and reproducibility issues are considered for computational models that are equally diverse, including the models for intracellular signal transduction of neurons and glial cells, in addition to single glial cells, neuron-glia interactions, and selected examples of spiking neuronal networks. We also address the comparability of the simulation results with one another to comprehend if the studied models can be used to answer similar research questions. In addition to presenting the challenges in reproducibility and replicability of published results in computational neuroscience, we highlight the need for developing recommendations and good practices for publishing simulation tools and computational models. Model validation and flexible model description must be an integral part of the tool used to simulate and develop computational models. Constant improvement in experimental techniques and recording protocols leads to increasing knowledge about the biophysical mechanisms in neural systems. This poses new challenges for computational neuroscience: extended or completely new computational methods and models may be required. Careful evaluation and categorization of the existing models and tools provide a foundation for these future needs, for constructing multiscale models or extending the models to incorporate additional or more detailed biophysical mechanisms. Improving the quality of publications in computational neuroscience, enabling progressive building of advanced computational models and tools, can be achieved only through adopting publishing standards that emphasize replicability and reproducibility of research results.

  16. Response sensitivity of barrel neuron subpopulations to simulated thalamic input.

    PubMed

    Pesavento, Michael J; Rittenhouse, Cynthia D; Pinto, David J

    2010-06-01

    Our goal is to examine the relationship between neuron- and network-level processing in the context of a well-studied cortical function, the processing of thalamic input by whisker-barrel circuits in rodent neocortex. Here we focus on neuron-level processing and investigate the responses of excitatory and inhibitory barrel neurons to simulated thalamic inputs applied using the dynamic clamp method in brain slices. Simulated inputs are modeled after real thalamic inputs recorded in vivo in response to brief whisker deflections. Our results suggest that inhibitory neurons require more input to reach firing threshold, but then fire earlier, with less variability, and respond to a broader range of inputs than do excitatory neurons. Differences in the responses of barrel neuron subtypes depend on their intrinsic membrane properties. Neurons with a low input resistance require more input to reach threshold but then fire earlier than neurons with a higher input resistance, regardless of the neuron's classification. Our results also suggest that the response properties of excitatory versus inhibitory barrel neurons are consistent with the response sensitivities of the ensemble barrel network. The short response latency of inhibitory neurons may serve to suppress ensemble barrel responses to asynchronous thalamic input. Correspondingly, whereas neurons acting as part of the barrel circuit in vivo are highly selective for temporally correlated thalamic input, excitatory barrel neurons acting alone in vitro are less so. These data suggest that network-level processing of thalamic input in barrel cortex depends on neuron-level processing of the same input by excitatory and inhibitory barrel neurons.

  17. Searching for collective behavior in a large network of sensory neurons.

    PubMed

    Tkačik, Gašper; Marre, Olivier; Amodei, Dario; Schneidman, Elad; Bialek, William; Berry, Michael J

    2014-01-01

    Maximum entropy models are the least structured probability distributions that exactly reproduce a chosen set of statistics measured in an interacting network. Here we use this principle to construct probabilistic models which describe the correlated spiking activity of populations of up to 120 neurons in the salamander retina as it responds to natural movies. Already in groups as small as 10 neurons, interactions between spikes can no longer be regarded as small perturbations in an otherwise independent system; for 40 or more neurons pairwise interactions need to be supplemented by a global interaction that controls the distribution of synchrony in the population. Here we show that such "K-pairwise" models--being systematic extensions of the previously used pairwise Ising models--provide an excellent account of the data. We explore the properties of the neural vocabulary by: 1) estimating its entropy, which constrains the population's capacity to represent visual information; 2) classifying activity patterns into a small set of metastable collective modes; 3) showing that the neural codeword ensembles are extremely inhomogeneous; 4) demonstrating that the state of individual neurons is highly predictable from the rest of the population, providing a capacity for error correction.
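
    The K-pairwise energy function extends the pairwise Ising form with a potential V(K) on the summed population activity K. The sketch below samples such a model by Metropolis dynamics with random placeholder parameters; in the study, h, J and V are fit to the recorded statistics.

      import numpy as np

      # K-pairwise energy over binary spike words s in {0, 1}^N:
      #   E(s) = -sum_i h_i s_i - 0.5 * sum_ij J_ij s_i s_j - V(K),
      # where K = sum_i s_i is the synchrony level.
      rng = np.random.default_rng(5)
      N = 20
      h = rng.normal(-1.0, 0.3, N)
      J = rng.normal(0.0, 0.1, (N, N))
      J = (J + J.T) / 2
      np.fill_diagonal(J, 0.0)
      V = rng.normal(0.0, 0.2, N + 1)    # one potential value per K

      def energy(s):
          K = int(s.sum())
          return -h @ s - 0.5 * s @ J @ s - V[K]

      s = (rng.random(N) < 0.2).astype(float)
      for sweep in range(10_000):
          i = rng.integers(N)
          s_new = s.copy()
          s_new[i] = 1 - s_new[i]                       # flip one neuron
          if rng.random() < np.exp(energy(s) - energy(s_new)):
              s = s_new                                 # Metropolis accept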

  18. Untangling Basal Ganglia Network Dynamics and Function: Role of Dopamine Depletion and Inhibition Investigated in a Spiking Network Model

    PubMed Central

    2016-01-01

    Abstract The basal ganglia are a crucial brain system for behavioral selection, and their function is disturbed in Parkinson’s disease (PD), where neurons exhibit inappropriate synchronization and oscillations. We present a spiking neural model of basal ganglia including plausible details on synaptic dynamics, connectivity patterns, neuron behavior, and dopamine effects. Recordings of neuronal activity in the subthalamic nucleus and Type A (TA; arkypallidal) and Type I (TI; prototypical) neurons in globus pallidus externa were used to validate the model. Simulation experiments predict that both local inhibition in striatum and the existence of an indirect pathway are important for basal ganglia to function properly over a large range of cortical drives. The dopamine depletion–induced increase of AMPA efficacy in corticostriatal synapses to medium spiny neurons (MSNs) with dopamine receptor D2 synapses (CTX-MSN D2) and the reduction of MSN lateral connectivity (MSN–MSN) were found to contribute significantly to the enhanced synchrony and oscillations seen in PD. Additionally, reversing the dopamine depletion–induced changes to CTX–MSN D1, CTX–MSN D2, TA–MSN, and MSN–MSN couplings could improve or restore basal ganglia action selection ability. In summary, we found multiple changes of parameters for synaptic efficacy and neural excitability that could improve action selection ability and at the same time reduce oscillations. Identification of such targets could potentially generate ideas for treatments of PD and increase our understanding of the relation between network dynamics and network function. PMID:28101525

  19. Orientation-selective aVLSI spiking neurons.

    PubMed

    Liu, S C; Kramer, J; Indiveri, G; Delbrück, T; Burg, T; Douglas, R

    2001-01-01

    We describe a programmable multi-chip VLSI neuronal system that can be used for exploring spike-based information processing models. The system consists of a silicon retina, a PIC microcontroller, and a transceiver chip whose integrate-and-fire neurons are connected in a soft winner-take-all architecture. The circuit on this multi-neuron chip approximates a cortical microcircuit. The neurons can be configured for different computational properties by the virtual connections of a selected set of pixels on the silicon retina. The virtual wiring between the different chips is effected by an event-driven communication protocol that uses asynchronous digital pulses, similar to spikes in a neuronal system. We used the multi-chip spike-based system to synthesize orientation-tuned neurons using both a feedforward model and a feedback model. The performance of our analog hardware spiking model matched the experimental observations and digital simulations of continuous-valued neurons. The multi-chip VLSI system has advantages over computer neuronal models in that it is real-time, and the computational time does not scale with the size of the neuronal network.

  20. Mean-field equations for neuronal networks with arbitrary degree distributions.

    PubMed

    Nykamp, Duane Q; Friedman, Daniel; Shaker, Sammy; Shinn, Maxwell; Vella, Michael; Compte, Albert; Roxin, Alex

    2017-04-01

    The emergent dynamics in networks of recurrently coupled spiking neurons depends on the interplay between single-cell dynamics and network topology. Most theoretical studies on network dynamics have assumed simple topologies, such as connections made randomly and independently with a fixed probability (an Erdős-Rényi (ER) network) or all-to-all connected networks. However, recent findings from slice experiments suggest that the actual patterns of connectivity between cortical neurons are more structured than in the ER random network. Here we explore how introducing additional higher-order statistical structure into the connectivity can affect the dynamics in neuronal networks. Specifically, we consider networks in which the number of presynaptic and postsynaptic contacts for each neuron, the degrees, are drawn from a joint degree distribution. We derive mean-field equations for a single population of homogeneous neurons and for a network of excitatory and inhibitory neurons, where the neurons can have arbitrary degree distributions. Through analysis of the mean-field equations and simulation of networks of integrate-and-fire neurons, we show that such networks have potentially much richer dynamics than an equivalent ER network. Finally, we relate the degree distributions to so-called cortical motifs.
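
    One way to realize such joint degree distributions in simulation is to draw correlated in- and out-degrees per neuron and wire them configuration-model style; the bivariate-Gaussian construction below is an illustrative choice, not the paper's derivation.

      import numpy as np

      rng = np.random.default_rng(6)
      N, mean_deg, rho = 1000, 50, 0.5   # rho: in/out degree correlation

      # Correlated Gaussians rounded to non-negative integer degrees
      cov = [[1.0, rho], [rho, 1.0]]
      z = rng.multivariate_normal([0, 0], cov, size=N)
      k_in = np.maximum(0, np.round(mean_deg + 10 * z[:, 0])).astype(int)
      k_out = np.maximum(0, np.round(mean_deg + 10 * z[:, 1])).astype(int)

      # Configuration model: match out-stubs to in-stubs at random
      out_stubs = np.repeat(np.arange(N), k_out)
      in_stubs = np.repeat(np.arange(N), k_in)
      m = min(len(out_stubs), len(in_stubs))
      edges = np.stack([rng.permutation(out_stubs)[:m],
                        rng.permutation(in_stubs)[:m]], axis=1)
      # `edges` may contain self-loops/multi-edges; prune if undesired
      print(np.corrcoef(k_in, k_out)[0, 1])   # realized degree correlation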

  2. Intrinsic and Extrinsic Neuromodulation of Olfactory Processing.

    PubMed

    Lizbinski, Kristyn M; Dacks, Andrew M

    2017-01-01

    Neuromodulation is a ubiquitous feature of neural systems, allowing flexible, context-specific control over network dynamics. Neuromodulation was first described in invertebrate motor systems and early work established a basic dichotomy for neuromodulation as having either an intrinsic origin (i.e., neurons that participate in network coding) or an extrinsic origin (i.e., neurons from independent networks). In this conceptual dichotomy, intrinsic sources of neuromodulation provide a "memory" by adjusting network dynamics based upon previous and ongoing activation of the network itself, while extrinsic neuromodulators provide the context of ongoing activity of other neural networks. Although this dichotomy has been thoroughly considered in motor systems, it has received far less attention in sensory systems. In this review, we discuss intrinsic and extrinsic modulation in the context of olfactory processing in invertebrate and vertebrate model systems. We begin by discussing presynaptic modulation of olfactory sensory neurons by local interneurons (LNs) as a mechanism for gain control based on ongoing network activation. We then discuss the cell-class-specific effects of serotonergic centrifugal neurons on olfactory processing. Finally, we briefly discuss the integration of intrinsic and extrinsic neuromodulation (metamodulation) as an effective mechanism for exerting global control over olfactory network dynamics. The heterogeneous nature of neuromodulation is a recurring theme throughout this review, as the effects of both intrinsic and extrinsic modulation are generally non-uniform.

  3. Color encoding in biologically-inspired convolutional neural networks.

    PubMed

    Rafegas, Ivet; Vanrell, Maria

    2018-05-11

    Convolutional Neural Networks have been proposed as suitable frameworks to model biological vision. Some of these artificial networks showed representational properties that rival primate performances in object recognition. In this paper we explore how color is encoded in a trained artificial network. This is done by estimating a color selectivity index for each neuron, which describes the neuron's activity in response to color input stimuli. The index allows us to classify neurons as color selective or not, and as selective to a single color or to a pair of colors. We have determined that all five convolutional layers of the network have a large number of color selective neurons. Color opponency clearly emerges in the first layer, presenting 4 main axes (Black-White, Red-Cyan, Blue-Yellow and Magenta-Green), but this is reduced and rotated as we go deeper into the network. In layer 2 we find a denser hue sampling of color neurons and opponency is reduced almost to one new main axis, the Bluish-Orangish, coinciding with the dataset bias. In layers 3, 4 and 5 color neurons are similar amongst themselves, with different types of neurons that detect specific colored objects (e.g., orangish faces), specific surrounds (e.g., blue sky) or specific colored or contrasted object-surround configurations (e.g., a blue blob in a green surround). Overall, our work concludes that color and shape representation are successively entangled through all the layers of the studied network, revealing certain parallelisms with the reported evidence from primate brains that can provide useful insight into intermediate hierarchical spatio-chromatic representations.
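
    One plausible form of a color selectivity index, not necessarily the paper's exact definition, compares a unit's responses to images and to their grayscale versions; `activation` below is a hypothetical stand-in for a forward pass returning one unit's activation.

      import numpy as np

      def to_grayscale(img):
          g = img @ np.array([0.299, 0.587, 0.114])    # luminance
          return np.repeat(g[..., None], 3, axis=-1)

      def color_selectivity(activation, images):
          """Index near 1: unit responds only when color is present;
          near 0: response is invariant to removing color."""
          a_color = np.array([activation(im) for im in images])
          a_gray = np.array([activation(to_grayscale(im)) for im in images])
          return 1.0 - a_gray.max() / max(a_color.max(), 1e-12)

      # Toy "unit" that responds to red-channel dominance
      rng = np.random.default_rng(7)
      images = [rng.random((8, 8, 3)) for _ in range(20)]
      unit = lambda im: float(np.clip(im[..., 0] - im[..., 1], 0, None).sum())
      print(color_selectivity(unit, images))   # ~1.0 for this toy unit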

  4. Biomorphic networks: approach to invariant feature extraction and segmentation for ATR

    NASA Astrophysics Data System (ADS)

    Baek, Andrew; Farhat, Nabil H.

    1998-10-01

    Invariant features in two dimensional binary images are extracted in a single layer network of locally coupled spiking (pulsating) model neurons with prescribed synapto-dendritic response. The feature vector for an image is represented as invariant structure in the aggregate histogram of interspike intervals obtained by computing time intervals between successive spikes produced from each neuron over a given period of time and combining such intervals from all neurons in the network into a histogram. Simulation results show that the feature vectors are more pattern-specific and invariant under translation, rotation, and change in scale or intensity than achieved in earlier work. We also describe an application of such networks to segmentation of line (edge-enhanced or silhouette) images. The biomorphic spiking network's capabilities in segmentation and invariant feature extraction may prove to be, when they are combined, valuable in Automated Target Recognition (ATR) and other automated object recognition systems.
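
    The aggregate interspike-interval histogram used as a feature vector can be sketched in a few lines: pool ISIs across all neurons over the observation window and bin them. Bin edges and the toy spike trains are illustrative.

      import numpy as np

      def isi_feature_vector(spike_times_per_neuron,
                             bins=np.arange(0, 51, 1.0)):
          """Pool interspike intervals from all neurons and histogram
          them into one normalized feature vector."""
          isis = np.concatenate([np.diff(np.sort(t))
                                 for t in spike_times_per_neuron
                                 if len(t) > 1])
          hist, _ = np.histogram(isis, bins=bins)
          return hist / max(hist.sum(), 1)

      rng = np.random.default_rng(8)
      # Toy network output: 100 neurons with roughly periodic spiking
      spikes = [np.cumsum(rng.normal(8.0, 1.0, 30)) for _ in range(100)]
      print(isi_feature_vector(spikes).argmax())  # peak near the 8 ms bin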

  5. Predicting neural network firing pattern from phase resetting curve

    NASA Astrophysics Data System (ADS)

    Oprisan, Sorinel; Oprisan, Ana

    2007-04-01

    Autonomous neural networks called central pattern generators (CPG) are composed of endogenously bursting neurons and produce rhythmic activities, such as flying, swimming, walking, chewing, etc. Simplified CPGs for quadrupedal locomotion and swimming are modeled by a ring of neural oscillators such that the output of one oscillator constitutes the input for the subsequent neural oscillator. The phase response curve (PRC) theory discards the detailed conductance-based description of the component neurons of a network and reduces them to "black boxes" characterized by a transfer function, which tabulates the transient change in the intrinsic period of a neural oscillator subject to external stimuli. Based on the open-loop PRC, we were able to successfully predict the phase-locked period and relative phase between neurons in a half-center network. We derived existence and stability criteria for heterogeneous ring neural networks that are in good agreement with experimental data.
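
    The PRC-based prediction reduces to iterating a one-dimensional firing-time map whose fixed points are phase-locked states, stable when the map's slope has magnitude below one. The PRC shape below is a toy choice; in practice the PRC is tabulated from open-loop measurements.

      import numpy as np

      # Toy PRC: relative change of the cycle period when an input
      # arrives at phase phi in [0, 1).
      prc = lambda phi: 0.2 * np.sin(2 * np.pi * phi) + 0.1

      def return_map(phi):
          # Each cycle the input arrives at phase phi and shifts the next
          # firing time by prc(phi); fixed points are locked states.
          return (phi - prc(phi)) % 1.0

      phi = 0.2
      for n in range(200):              # iterate until (and if) it settles
          phi = return_map(phi)
      print(phi)                        # candidate phase-locked solution

      # Local stability: |d(return_map)/d(phi)| < 1 at the fixed point
      eps = 1e-6
      slope = (return_map(phi + eps) - return_map(phi - eps)) / (2 * eps)
      print(abs(slope) < 1.0)           # True for this stable locking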

  6. The Effects of GABAergic Polarity Changes on Episodic Neural Network Activity in Developing Neural Systems.

    PubMed

    Blanco, Wilfredo; Bertram, Richard; Tabak, Joël

    2017-01-01

    Early in development, neural systems have primarily excitatory coupling, where even GABAergic synapses are excitatory. Many of these systems exhibit spontaneous episodes of activity that have been characterized through both experimental and computational studies. As development progresses, the neural system goes through many changes, including synaptic remodeling, intrinsic plasticity in ion channel expression, and a transformation of GABAergic synapses from excitatory to inhibitory. What effect each of these (and other) changes has on the network behavior is hard to know from experimental studies, since they all happen in parallel. One advantage of a computational approach is the ability to study developmental changes in isolation. Here, we examine the effects of GABAergic synapse polarity change on the spontaneous activity of both a mean field and a neural network model that has both glutamatergic and GABAergic coupling, representative of a developing neural network. We find some intuitive behavioral changes as the GABAergic neurons go from excitatory to inhibitory, shared by both models, such as a decrease in the duration of episodes. We also find some paradoxical changes in the activity that are only present in the neural network model. In particular, we find that during early development the inter-episode durations become longer on average, while later in development they become shorter. In addressing this unexpected finding, we uncover a priming effect that is particularly important for a small subset of neurons, called the "intermediate neurons." We characterize these neurons and demonstrate why they are crucial to episode initiation, and why the paradoxical behavioral changes result from priming of these neurons. The study illustrates how even arguably the simplest of the developmental changes that occur in neural systems can produce non-intuitive behaviors. It also makes predictions about neural network behavioral changes that occur during development that may be observable even in actual neural systems, where these changes are convoluted with changes in synaptic connectivity and intrinsic neural plasticity.

  7. Real-time simulation of a spiking neural network model of the basal ganglia circuitry using general purpose computing on graphics processing units.

    PubMed

    Igarashi, Jun; Shouno, Osamu; Fukai, Tomoki; Tsujino, Hiroshi

    2011-11-01

    Real-time simulation of a biologically realistic spiking neural network is necessary for evaluation of its capacity to interact with real environments. However, the real-time simulation of such a neural network is difficult due to its high computational costs that arise from two factors: (1) vast network size and (2) the complicated dynamics of biologically realistic neurons. In order to address these problems, mainly the latter, we chose to use general purpose computing on graphics processing units (GPGPUs) for simulation of such a neural network, taking advantage of the powerful computational capability of a graphics processing unit (GPU). As a target for real-time simulation, we used a model of the basal ganglia that has been developed according to electrophysiological and anatomical knowledge. The model consists of heterogeneous populations of 370 spiking model neurons, including computationally heavy conductance-based models, connected by 11,002 synapses. Simulation of the model has not yet been performed in real-time using a general computing server. By parallelization of the model on the NVIDIA Geforce GTX 280 GPU in data-parallel and task-parallel fashion, faster-than-real-time simulation was robustly realized with only one-third of the GPU's total computational resources. Furthermore, we used the GPU's full computational resources to perform faster-than-real-time simulation of three instances of the basal ganglia model; these instances consisted of 1100 neurons and 33,006 synapses and were synchronized at each calculation step. Finally, we developed software for simultaneous visualization of faster-than-real-time simulation output. These results suggest the potential power of GPGPU techniques in real-time simulation of realistic neural networks.

  8. A neuronal network model for context-dependence of pitch change perception.

    PubMed

    Huang, Chengcheng; Englitz, Bernhard; Shamma, Shihab; Rinzel, John

    2015-01-01

    Many natural stimuli have perceptual ambiguities that can be cognitively resolved by the surrounding context. In audition, preceding context can bias the perception of speech and non-speech stimuli. Here, we develop a neuronal network model that can account for how context affects the perception of pitch change between a pair of successive complex tones. We focus especially on an ambiguous comparison: listeners experience opposite percepts (either ascending or descending) for an ambiguous tone pair depending on the spectral location of preceding context tones. We developed a recurrent, firing-rate network model, which detects the frequency-change direction of successively played stimuli and successfully accounts for the context-dependent perception demonstrated in behavioral experiments. The model consists of two tonotopically organized, excitatory populations, E_up and E_down, that respond preferentially to stimuli ascending or descending in pitch, respectively. These preferences are generated by an inhibitory population that provides inhibition asymmetric in frequency to the two populations; context dependence arises from slow facilitation of inhibition. We show that contextual influence depends on the spectral distribution of preceding tones and the tuning width of inhibitory neurons. Further, we demonstrate, using phase-space analysis, how the facilitated inhibition from previous stimuli and the waning inhibition from the just-preceding tone shape the competition between the E_up and E_down populations. In sum, our model accounts for contextual influences on the pitch change perception of an ambiguous tone pair by introducing a novel decoding strategy based on direction-selective units. The model's network architecture and slow facilitating inhibition emerge as predictions of neuronal mechanisms for these perceptual dynamics. Since the model structure does not depend on the specific stimuli, we show that it generalizes to other contextual effects and stimulus types.

  9. Robust spatial memory maps in flickering neuronal networks: a topological model

    NASA Astrophysics Data System (ADS)

    Dabaghian, Yuri; Babichev, Andrey; Memoli, Facundo; Chowdhury, Samir; Rice University Collaboration; Ohio State University Collaboration

    It is widely accepted that hippocampal place cells provide a substrate for the neuronal representation of the environment, the "cognitive map". However, the hippocampal network, like any other network in the brain, is transient: thousands of hippocampal neurons die every day, and the connections formed by these cells constantly change due to various forms of synaptic plasticity. What, then, explains the remarkable reliability of our spatial memories? We propose a computational approach to answering this question based on two insights. First, we propose that the hippocampal cognitive map is fundamentally topological, and hence amenable to analysis by topological methods. We then apply several novel methods from homology theory to understand how dynamic connections between cells influence the speed and reliability of spatial learning. We simulate the rat's exploratory movements through different environments and study how topological invariants of these environments arise in a network of simulated neurons with "flickering" connectivity. We find that despite the transient connectivity, the network of place cells produces a stable representation of the topology of the environment.

  10. Exact event-driven implementation for recurrent networks of stochastic perfect integrate-and-fire neurons.

    PubMed

    Taillefumier, Thibaud; Touboul, Jonathan; Magnasco, Marcelo

    2012-12-01

    In vivo cortical recording reveals that indirectly driven neural assemblies can produce reliable and temporally precise spiking patterns in response to stereotyped stimulation. This suggests that despite being fundamentally noisy, the collective activity of neurons conveys information through temporal coding. Stochastic integrate-and-fire models delineate a natural theoretical framework to study the interplay of intrinsic neural noise and spike timing precision. However, there are inherent difficulties in simulating their networks' dynamics in silico with standard numerical discretization schemes. Indeed, the well-posedness of the evolution of such networks requires temporally ordering every neuronal interaction, whereas the order of interactions is highly sensitive to the random variability of spiking times. Here, we resolve these issues for perfect stochastic integrate-and-fire neurons by designing an exact event-driven algorithm for the simulation of recurrent networks, with delayed Dirac-like interactions. In addition to being exact from the mathematical standpoint, our proposed method is highly efficient numerically. We envision that our algorithm is especially indicated for studying the emergence of polychronized motifs in networks evolving under spike-timing-dependent plasticity with intrinsic noise.
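
    For a perfect (non-leaky) integrate-and-fire neuron driven by drift and white noise, the first-passage time to threshold is inverse-Gaussian (Wald) distributed, which is the mathematical fact that makes exact event-driven simulation possible. The sketch below draws interspike intervals for an isolated neuron; network interactions and delays are what the paper's algorithm adds on top.

      import numpy as np

      rng = np.random.default_rng(9)
      mu, sigma, theta = 1.0, 0.5, 1.0   # drift, noise amplitude, threshold

      def next_spike_interval():
          # First-passage time of drifted Brownian motion to theta:
          # inverse Gaussian with mean theta/mu and shape (theta/sigma)**2.
          return rng.wald(theta / mu, (theta / sigma) ** 2)

      isis = np.array([next_spike_interval() for _ in range(100_000)])
      # Check: mean ~ theta/mu = 1.0, variance ~ theta*sigma^2/mu^3 = 0.25
      print(isis.mean(), isis.var())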

  11. Microfluidic neurite guidance to study structure-function relationships in topologically-complex population-based neural networks.

    PubMed

    Honegger, Thibault; Thielen, Moritz I; Feizi, Soheil; Sanjana, Neville E; Voldman, Joel

    2016-06-22

    The central nervous system is a dense, layered, 3D interconnected network of populations of neurons, and thus recapitulating that complexity for in vitro CNS models requires methods that can create defined topologically-complex neuronal networks. Several three-dimensional patterning approaches have been developed but none have demonstrated the ability to control the connections between populations of neurons. Here we report a method using AC electrokinetic forces that can guide, accelerate, slow down and push up neurites in un-modified collagen scaffolds. We present a means to create in vitro neural networks of arbitrary complexity by using such forces to create 3D intersections of primary neuronal populations that are plated in a 2D plane. We report for the first time in vitro basic brain motifs that have been previously observed in vivo and show that their functional network is highly decorrelated to their structure. This platform can provide building blocks to reproduce in vitro the complexity of neural circuits and provide a minimalistic environment to study the structure-function relationship of the brain circuitry.

  12. Field coupling-induced pattern formation in two-layer neuronal network

    NASA Astrophysics Data System (ADS)

    Qin, Huixin; Wang, Chunni; Cai, Ning; An, Xinlei; Alzahrani, Faris

    2018-07-01

    The exchange of charged ions across the membrane can generate fluctuations of the membrane potential as well as complex electromagnetic induction effects. Diversity in the excitability of neurons induces different mode selection and dynamical responses to external stimuli. Based on a neuron model with electromagnetic induction, described by magnetic flux and a memristor, a two-layer network is proposed to discuss pattern control and wave propagation in the network. In each layer, gap-junction coupling connects the neurons, while field coupling is considered between the two layers of the network. The field coupling is approached by coupling of magnetic flux, which is associated with the distribution of the electromagnetic field. It is found that an appropriate intensity of field coupling can enhance wave propagation from one layer to the other, and well-formed spatial patterns emerge. When the two layers have different excitabilities, the target wave that develops in the second layer differs from the target wave triggered in the first layer. The potential mechanism could be that pacemaker-like driving from the first layer is encoded by the second layer.

  13. Self-organized criticality occurs in non-conservative neuronal networks during Up states

    PubMed Central

    Millman, Daniel; Mihalas, Stefan; Kirkwood, Alfredo; Niebur, Ernst

    2010-01-01

    During sleep, under anesthesia and in vitro, cortical neurons in sensory, motor, association and executive areas fluctuate between Up and Down states (UDS) characterized by distinct membrane potentials and spike rates [1, 2, 3, 4, 5]. Another phenomenon observed in preparations similar to those that exhibit UDS, such as anesthetized rats [6], brain slices and cultures devoid of sensory input [7], as well as awake monkey cortex [8] is self-organized criticality (SOC). This is characterized by activity "avalanches" whose size distributions obey a power law with critical exponent of about −3/2 and branching parameter near unity. Recent work has demonstrated SOC in conservative neuronal network models [9, 10], however critical behavior breaks down when biologically realistic non-conservatism is introduced [9]. We here report robust SOC behavior in networks of non-conservative leaky integrate-and-fire neurons with short-term synaptic depression. We show analytically and numerically that these networks typically have two stable activity levels corresponding to Up and Down states, that the networks switch spontaneously between them, and that Up states are critical and Down states are subcritical. PMID:21804861
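
    The −3/2 avalanche-size law can be reproduced with a critical branching process (branching parameter near unity), as in the sketch below; this toy process stands in for the full network model.

      import numpy as np

      rng = np.random.default_rng(10)

      def avalanche_size(branching=1.0, cap=10_000):
          """One avalanche: each active unit spawns Poisson(branching)
          descendants; the size is the total activity before extinction."""
          active, size = 1, 0
          while active and size < cap:
              size += active
              active = rng.poisson(branching * active)
          return size

      sizes = np.array([avalanche_size() for _ in range(20_000)])
      # Crude exponent estimate from a log-log histogram of P(size)
      bins = np.unique(np.logspace(0, 3, 30).astype(int))
      hist, edges = np.histogram(sizes, bins=bins, density=True)
      mask = hist > 0
      slope = np.polyfit(np.log(edges[:-1][mask]), np.log(hist[mask]), 1)[0]
      print(slope)   # roughly -1.5 at criticality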

  14. Microfluidic neurite guidance to study structure-function relationships in topologically-complex population-based neural networks

    NASA Astrophysics Data System (ADS)

    Honegger, Thibault; Thielen, Moritz I.; Feizi, Soheil; Sanjana, Neville E.; Voldman, Joel

    2016-06-01

    The central nervous system is a dense, layered, 3D interconnected network of populations of neurons, and thus recapitulating that complexity for in vitro CNS models requires methods that can create defined topologically-complex neuronal networks. Several three-dimensional patterning approaches have been developed, but none have demonstrated the ability to control the connections between populations of neurons. Here we report a method using AC electrokinetic forces that can guide, accelerate, slow down and push up neurites in unmodified collagen scaffolds. We present a means to create in vitro neural networks of arbitrary complexity by using such forces to create 3D intersections of primary neuronal populations that are plated in a 2D plane. We report, for the first time in vitro, basic brain motifs that have previously been observed in vivo, and show that their functional network is highly decorrelated from their structure. This platform can provide building blocks to reproduce in vitro the complexity of neural circuits and a minimalistic environment to study the structure-function relationship of brain circuitry.

  15. A stochastic-field description of finite-size spiking neural networks

    PubMed Central

    Longtin, André

    2017-01-01

    Neural network dynamics are governed by the interaction of spiking neurons. Stochastic aspects of single-neuron dynamics propagate up to the network level and shape the dynamical and informational properties of the population. Mean-field models of population activity disregard the finite-size stochastic fluctuations of network dynamics and thus offer a deterministic description of the system. Here, we derive a stochastic partial differential equation (SPDE) describing the temporal evolution of the finite-size refractory density, which represents the proportion of neurons in a given refractory state at any given time. The population activity—the density of active neurons per unit time—is easily extracted from this refractory density. The SPDE includes finite-size effects through a two-dimensional Gaussian white noise that acts both in time and along the refractory dimension. For an infinite number of neurons the standard mean-field theory is recovered. A discretization of the SPDE along its characteristic curves allows direct simulations of the activity of large but finite spiking networks; this constitutes the main advantage of our approach. Linearizing the SPDE with respect to the deterministic asynchronous state allows the theoretical investigation of finite-size activity fluctuations. In particular, analytical expressions for the power spectrum and autocorrelation of activity fluctuations are obtained. Moreover, our approach can be adapted to incorporate multiple interacting populations and quasi-renewal single-neuron dynamics. PMID:28787447

  16. Feedforward Inhibition Allows Input Summation to Vary in Recurrent Cortical Networks

    PubMed Central

    2018-01-01

    Brain computations depend on how neurons transform inputs to spike outputs. Here, to understand input-output transformations in cortical networks, we recorded spiking responses from visual cortex (V1) of awake mice of either sex while pairing sensory stimuli with optogenetic perturbation of excitatory and parvalbumin-positive inhibitory neurons. We found that V1 neurons’ average responses were primarily additive (linear). We used a recurrent cortical network model to determine whether these data, as well as past observations of nonlinearity, could be described by a common circuit architecture. Simulations showed that cortical input-output transformations can be changed from linear to sublinear with moderate (∼20%) strengthening of connections between inhibitory neurons, but this change away from linear scaling depends on the presence of feedforward inhibition. Simulating a variety of recurrent connection strengths showed that, compared with when input arrives only to excitatory neurons, networks produce a wider range of output spiking responses in the presence of feedforward inhibition. PMID:29682603
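
    The reported switch between linear and sublinear summation can be illustrated with a small supralinear-stabilized rate model. The sketch below uses illustrative parameters of our own choosing, not the study's fitted values; feedforward drive reaches both populations, and the I-to-I weight is the knob the abstract highlights:

      # E/I rate model with a power-law gain, r = k*[x]_+^n. Whether doubling
      # the feedforward input doubles the E rate depends on the operating
      # regime; illustrative parameters, not the paper's.
      import numpy as np

      k, n = 0.04, 2.0
      W = np.array([[1.25, -0.65],    # E<-E, E<-I
                    [1.20, -0.50]])   # I<-E, I<-I (strengthen to probe sublinearity)

      def steady_E_rate(h, ff_inh=1.0, dt=0.005, steps=40000):
          drive = np.array([h, ff_inh * h])   # feedforward inhibition: input also drives I
          r = np.zeros(2)
          for _ in range(steps):
              r += dt * (-r + k * np.maximum(W @ r + drive, 0.0) ** n)
          return r[0]

      for h in (2.0, 20.0):
          ratio = steady_E_rate(2 * h) / steady_E_rate(h)
          print(f"input {h:4.1f} -> x2: E rate scales by {ratio:.2f} (2.00 = linear)")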

  17. Least Square Fast Learning Network for modeling the combustion efficiency of a 300 MW coal-fired boiler.

    PubMed

    Li, Guoqiang; Niu, Peifeng; Wang, Huaibao; Liu, Yongchao

    2014-03-01

    This paper presents a novel artificial neural network with a very fast learning speed, all of whose weights and biases are determined by applying the least squares method twice; it is therefore called the Least Square Fast Learning Network (LSFLN). A further difference from conventional neural networks is that the output neurons of the LSFLN receive not only information from the hidden-layer neurons but also the external input directly from the input neurons. To test the validity of the LSFLN, it is applied to 6 classical regression problems and is also employed to build the functional relation between the combustion efficiency and operating parameters of a 300 MW coal-fired boiler. Experimental results show that, compared with other methods, the LSFLN achieves much better regression precision and generalization ability with far fewer hidden neurons and at a much faster learning speed. Copyright © 2013 Elsevier Ltd. All rights reserved.
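
    The two ingredients named here, output weights obtained in closed form by least squares plus direct input-to-output connections, can be sketched in a few lines. This is an ELM-style stand-in under our own assumptions (a single least-squares solve and fixed random hidden weights), not the published LSFLN equations:

      # Random fixed hidden layer; output layer sees [hidden, raw input, bias]
      # and its weights come from one linear least-squares solve -- no iteration.
      import numpy as np

      rng = np.random.default_rng(0)

      def fit(X, y, n_hidden=20):
          Wh = rng.normal(size=(X.shape[1], n_hidden))  # fixed random input->hidden
          A = np.hstack([np.tanh(X @ Wh), X, np.ones((len(X), 1))])
          beta, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares output weights
          return Wh, beta

      def predict(model, X):
          Wh, beta = model
          return np.hstack([np.tanh(X @ Wh), X, np.ones((len(X), 1))]) @ beta

      X = rng.uniform(-2, 2, size=(200, 1))             # toy regression benchmark
      y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 0]
      model = fit(X, y)
      print("train RMSE:", np.sqrt(np.mean((predict(model, X) - y) ** 2)))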

  18. Learning by stimulation avoidance: A principle to control spiking neural networks dynamics

    PubMed Central

    Sinapayen, Lana; Ikegami, Takashi

    2017-01-01

    Learning based on networks of real neurons, and learning based on biologically inspired models of neural networks, have yet to find general learning rules leading to widespread applications. In this paper, we argue for the existence of a principle that allows steering the dynamics of a biologically inspired neural network. Using carefully timed external stimulation, the network can be driven towards a desired dynamical state. We term this principle “Learning by Stimulation Avoidance” (LSA). We demonstrate through simulation that the minimal conditions sufficient for LSA in artificial networks are also sufficient to reproduce learning results similar to those obtained in biological neurons by Shahaf and Marom, and in addition explain synaptic pruning. We examined the underlying mechanism by simulating a small network of 3 neurons, then scaled it up to a hundred neurons. We show that LSA has a higher explanatory power than existing hypotheses about the response of biological neural networks to external stimulation, and can be used as a learning rule for an embodied application: learning of wall avoidance by a simulated robot. In other works, reinforcement learning with spiking networks has been obtained through global reward signals akin to simulating the dopamine system; we believe that this is the first project demonstrating sensory-motor learning with random spiking networks through Hebbian learning relying on environmental conditions, without a separate reward system. PMID:28158309
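
    The LSA principle lends itself to a very small toy: stimulation drives an input neuron, the downstream neuron's firing switches the stimulation off, and pair-based Hebbian STDP strengthens the connection because stimulus-driven presynaptic spikes reliably precede the postsynaptic "action". The sketch below is our own two-neuron caricature (potentiation only), not the paper's network:

      # Two-neuron LSA toy: B firing removes the stimulus on A for a while;
      # pre-before-post STDP therefore grows the A->B weight. Our caricature.
      import numpy as np

      rng = np.random.default_rng(1)
      w, w_max = 0.2, 1.0
      a_pre, v_b = 0.0, 0.0          # presynaptic STDP trace, B's membrane variable
      dt, tau = 1.0, 20.0
      stim_off_until = -1

      for t in range(2000):
          a_pre *= np.exp(-dt / tau)
          stimulated = t >= stim_off_until
          spike_a = stimulated and rng.random() < 0.2   # stimulation makes A fire
          if spike_a:
              a_pre += 1.0
          v_b = 0.9 * v_b + w * spike_a + 0.02 * rng.random()
          if v_b > 0.5:                                 # B fires: the "action"
              v_b = 0.0
              w = min(w_max, w + 0.05 * a_pre)          # pre-before-post potentiation
              stim_off_until = t + 20                   # ...and the stimulus goes away
      print(f"A->B weight after learning: {w:.2f} (started at 0.20)")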

  19. Learning by stimulation avoidance: A principle to control spiking neural networks dynamics.

    PubMed

    Sinapayen, Lana; Masumori, Atsushi; Ikegami, Takashi

    2017-01-01

    Learning based on networks of real neurons, and learning based on biologically inspired models of neural networks, have yet to find general learning rules leading to widespread applications. In this paper, we argue for the existence of a principle that allows steering the dynamics of a biologically inspired neural network. Using carefully timed external stimulation, the network can be driven towards a desired dynamical state. We term this principle "Learning by Stimulation Avoidance" (LSA). We demonstrate through simulation that the minimal conditions sufficient for LSA in artificial networks are also sufficient to reproduce learning results similar to those obtained in biological neurons by Shahaf and Marom, and in addition explain synaptic pruning. We examined the underlying mechanism by simulating a small network of 3 neurons, then scaled it up to a hundred neurons. We show that LSA has a higher explanatory power than existing hypotheses about the response of biological neural networks to external stimulation, and can be used as a learning rule for an embodied application: learning of wall avoidance by a simulated robot. In other works, reinforcement learning with spiking networks has been obtained through global reward signals akin to simulating the dopamine system; we believe that this is the first project demonstrating sensory-motor learning with random spiking networks through Hebbian learning relying on environmental conditions, without a separate reward system.

  20. Dynamics of networks of excitatory and inhibitory neurons in response to time-dependent inputs.

    PubMed

    Ledoux, Erwan; Brunel, Nicolas

    2011-01-01

    We investigate the dynamics of recurrent networks of excitatory (E) and inhibitory (I) neurons in the presence of time-dependent inputs. The dynamics is characterized by the network dynamical transfer function, i.e., how the population firing rate is modulated by sinusoidal inputs at arbitrary frequencies. Two types of networks are studied and compared: (i) a Wilson-Cowan type firing rate model; and (ii) a fully connected network of leaky integrate-and-fire (LIF) neurons, in a strong noise regime. We first characterize the region of stability of the "asynchronous state" (a state in which population activity is constant in time when external inputs are constant) in the space of parameters characterizing the connectivity of the network. We then systematically characterize the qualitative behaviors of the dynamical transfer function, as a function of the connectivity. We find that the transfer function can be either low-pass, or with a single or double resonance, depending on the connection strengths and synaptic time constants. Resonances appear when the system is close to Hopf bifurcations, that can be induced by two separate mechanisms: the I-I connectivity and the E-I connectivity. Double resonances can appear when excitatory delays are larger than inhibitory delays, due to the fact that two distinct instabilities exist with a finite gap between the corresponding frequencies. In networks of LIF neurons, changes in external inputs and external noise are shown to be able to change qualitatively the network transfer function. Firing rate models are shown to exhibit the same diversity of transfer functions as the LIF network, provided delays are present. They can also exhibit input-dependent changes of the transfer function, provided a suitable static non-linearity is incorporated.
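
    The central object here, the network dynamical transfer function, is cheap to evaluate numerically for a linearized rate model with delays. A hedged sketch with illustrative parameters of our own, not the paper's, in which a resonance shows up as a peak at nonzero frequency:

      # Linearized E/I rate model with delayed coupling:
      # tau*dr/dt = -r + W r(t - d) + h(t). For input modulated at frequency f,
      # the response is r(f) = [ (1 + i*2*pi*f*tau) I - W exp(-i*2*pi*f*d) ]^-1 h.
      import numpy as np

      tau, d = 0.010, 0.003                  # 10 ms time constant, 3 ms delay
      W = np.array([[2.0, -4.0],             # E<-E, E<-I
                    [4.0, -3.0]])            # I<-E, I<-I

      def gain(f_hz, h=np.array([1.0, 0.0])):    # modulate the E population
          w = 2 * np.pi * f_hz
          M = np.eye(2) * (1 + 1j * w * tau) - W * np.exp(-1j * w * d)
          return abs(np.linalg.solve(M, h)[0])   # E-rate modulation amplitude

      # A peak at nonzero frequency indicates a resonance; monotonic decay,
      # a low-pass transfer function. Both regimes arise as W and d vary.
      for f in (1, 10, 30, 50, 80, 120, 200):
          print(f"{f:4d} Hz: gain {gain(f):.3f}")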

  1. Self-organized Criticality in Hierarchical Brain Network

    NASA Astrophysics Data System (ADS)

    Yang, Qiu-Ying; Zhang, Ying-Yue; Chen, Tian-Lun

    2008-11-01

    It has been shown that the cortical brain network of the macaque displays a hierarchically clustered organization, while the neuron network shows small-world properties. We incorporate both factors into a model and study its dynamical behavior. We characterize the model and find that the distribution of avalanche sizes follows a power law.

  2. Critical neural networks with short- and long-term plasticity.

    PubMed

    Michiels van Kessenich, L; Luković, M; de Arcangelis, L; Herrmann, H J

    2018-03-01

    In recent years, self-organized critical neuronal models have provided insights into the origin of the experimentally observed avalanching behavior of neuronal systems. It has been shown that dynamical synapses, as a form of short-term plasticity, can cause critical neuronal dynamics, whereas long-term plasticity, such as Hebbian or activity-dependent plasticity, plays a crucial role in shaping the network structure and endowing neural systems with learning abilities. In this work we provide a model which combines both plasticity mechanisms, acting on two different time scales. The measured avalanche statistics are compatible with experimental results for both the avalanche size and duration distributions, with biologically observed percentages of inhibitory neurons. The time series of neuronal activity exhibits temporal bursts leading to 1/f decay in the power spectrum. The presence of long-term plasticity gives the system the ability to learn binary rules such as XOR, providing a foundation for future research on more complicated tasks such as pattern recognition.

  3. Critical neural networks with short- and long-term plasticity

    NASA Astrophysics Data System (ADS)

    Michiels van Kessenich, L.; Luković, M.; de Arcangelis, L.; Herrmann, H. J.

    2018-03-01

    In recent years, self-organized critical neuronal models have provided insights into the origin of the experimentally observed avalanching behavior of neuronal systems. It has been shown that dynamical synapses, as a form of short-term plasticity, can cause critical neuronal dynamics, whereas long-term plasticity, such as Hebbian or activity-dependent plasticity, plays a crucial role in shaping the network structure and endowing neural systems with learning abilities. In this work we provide a model which combines both plasticity mechanisms, acting on two different time scales. The measured avalanche statistics are compatible with experimental results for both the avalanche size and duration distributions, with biologically observed percentages of inhibitory neurons. The time series of neuronal activity exhibits temporal bursts leading to 1/f decay in the power spectrum. The presence of long-term plasticity gives the system the ability to learn binary rules such as XOR, providing a foundation for future research on more complicated tasks such as pattern recognition.

  4. Ion track based tunable device as humidity sensor: a neural network approach

    NASA Astrophysics Data System (ADS)

    Sharma, Mamta; Sharma, Anuradha; Bhattacherjee, Vandana

    2013-01-01

    Artificial neural networks (ANNs) have been applied in statistical model development, adaptive control systems, pattern recognition in data mining, and decision making under uncertainty. The nonlinear dependence of any sensor output on the input physical variable has motivated many researchers to attempt unconventional modeling techniques such as neural networks and other machine learning approaches. An ANN is a computational tool inspired by the network of neurons in the biological nervous system: a network of artificial neurons linked together with different connection weights, in which the states of the neurons as well as the weights of the connections among them evolve according to certain learning rules. In the present work we focus on the category of sensors that respond to changes in electrical properties such as impedance or capacitance. Recently, sensor materials have been embedded in etched tracks, whose nanometric dimensions and high aspect ratio give a high surface area for exposure to the sensing material. Various materials can be used for this purpose to probe physical (light intensity, temperature, etc.), chemical (humidity, ammonia gas, alcohol, etc.) or biological (germs, hormones, etc.) parameters. The present work involves the application of TEMPOS structures as humidity sensors. The sample studied was prepared using a polymer electrolyte (PEO/NH4ClO4) with CdS nanoparticles dispersed in it. We attempted to correlate the combined effects of voltage and frequency on the impedance of the humidity sensors using a neural network model; the mean absolute error of the ANN model was 3.95% for the training data and 4.65% for the validation data, against 8.28% and 8.35%, respectively, for the linear regression (LR) model. The percentage improvement of the ANN model over the LR model demonstrates the suitability of neural networks for such modeling.

  5. Extending the Cortical Grasping Network: Pre-supplementary Motor Neuron Activity During Vision and Grasping of Objects

    PubMed Central

    Lanzilotto, Marco; Livi, Alessandro; Maranesi, Monica; Gerbella, Marzio; Barz, Falk; Ruther, Patrick; Fogassi, Leonardo; Rizzolatti, Giacomo; Bonini, Luca

    2016-01-01

    Grasping relies on a network of parieto-frontal areas lying on the dorsolateral and dorsomedial parts of the hemispheres. However, the initiation and sequencing of voluntary actions also requires the contribution of mesial premotor regions, particularly the pre-supplementary motor area F6. We recorded 233 F6 neurons from 2 monkeys with chronic linear multishank neural probes during reaching–grasping visuomotor tasks. We showed that F6 neurons play a role in the control of forelimb movements and some of them (26%) exhibit visual and/or motor specificity for the target object. Interestingly, area F6 neurons form 2 functionally distinct populations, showing either visually-triggered or movement-related bursts of activity, in contrast to the sustained visual-to-motor activity displayed by ventral premotor area F5 neurons recorded in the same animals and with the same task during previous studies. These findings suggest that F6 plays a role in object grasping and extend existing models of the cortical grasping network. PMID:27733538

  6. Collective Behavior of Place and Non-place Neurons in the Hippocampal Network.

    PubMed

    Meshulam, Leenoy; Gauthier, Jeffrey L; Brody, Carlos D; Tank, David W; Bialek, William

    2017-12-06

    Discussions of the hippocampus often focus on place cells, but many neurons are not place cells in any given environment. Here we describe the collective activity in such mixed populations, treating place and non-place cells on the same footing. We start with optical imaging experiments on CA1 in mice as they run along a virtual linear track and use maximum entropy methods to approximate the distribution of patterns of activity in the population, matching the correlations between pairs of cells but otherwise assuming as little structure as possible. We find that these simple models accurately predict the activity of each neuron from the state of all the other neurons in the network, regardless of how well that neuron codes for position. Our results suggest that understanding the neural activity may require not only knowledge of the external variables modulating it but also of the internal network state. Copyright © 2017 Elsevier Inc. All rights reserved.
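
    The prediction step described here, inferring each neuron's state from all the others under a pairwise maximum entropy model, can be sketched with a pseudolikelihood fit on toy data. The data, the fitting shortcut (per-neuron logistic regression) and all parameters below are our own simplifications, not the paper's procedure:

      # Pairwise max-ent (Ising) prediction: P(s_i=1 | rest) = sigmoid(h_i + sum_j J_ij s_j),
      # with couplings fit by pseudolikelihood (one logistic regression per neuron).
      import numpy as np

      rng = np.random.default_rng(0)
      N, T = 10, 5000
      latent = rng.normal(size=(T, 1))                       # shared drive -> pairwise correlations
      spikes = (rng.normal(size=(T, N)) + 0.8 * latent > 1.0).astype(float)

      def predictability(i, lr=0.1, epochs=300):
          X, y = np.delete(spikes, i, axis=1), spikes[:, i]  # predict neuron i from the rest
          w, b = np.zeros(N - 1), 0.0
          for _ in range(epochs):
              p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
              w += lr * X.T @ (y - p) / T                    # gradient ascent on log-likelihood
              b += lr * np.mean(y - p)
          return np.mean((p > 0.5) == y)                     # training accuracy

      print("per-neuron accuracy:", np.round([predictability(i) for i in range(N)], 2))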

  7. Thermodynamics and signatures of criticality in a network of neurons.

    PubMed

    Tkačik, Gašper; Mora, Thierry; Marre, Olivier; Amodei, Dario; Palmer, Stephanie E; Berry, Michael J; Bialek, William

    2015-09-15

    The activity of a neural network is defined by patterns of spiking and silence from the individual neurons. Because spikes are (relatively) sparse, patterns of activity with increasing numbers of spikes are less probable, but, with more spikes, the number of possible patterns increases. This tradeoff between probability and numerosity is mathematically equivalent to the relationship between entropy and energy in statistical physics. We construct this relationship for populations of up to N = 160 neurons in a small patch of the vertebrate retina, using a combination of direct and model-based analyses of experiments on the response of this network to naturalistic movies. We see signs of a thermodynamic limit, where the entropy per neuron approaches a smooth function of the energy per neuron as N increases. The form of this function corresponds to the distribution of activity being poised near an unusual kind of critical point. We suggest further tests of criticality, and give a brief discussion of its functional significance.

  8. Fitting neuron models to spike trains.

    PubMed

    Rossant, Cyrille; Goodman, Dan F M; Fontaine, Bertrand; Platkiewicz, Jonathan; Magnusson, Anna K; Brette, Romain

    2011-01-01

    Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input-output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model.
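
    The fitting loop itself is easy to caricature without the Brian-based machinery the paper provides: simulate a candidate integrate-and-fire model for each parameter set and score it by spike coincidences with the recorded train. The grid search, parameters and coincidence window below are our own toy choices:

      # Recover two LIF parameters (tau, R) from a "recorded" spike train by
      # maximizing the number of model spikes within +/-2 ms of recorded spikes.
      import numpy as np

      dt, T = 0.1, 2000.0                                  # ms
      t = np.arange(0, T, dt)
      I = 1.5 + 0.5 * np.sin(2 * np.pi * t / 300.0)        # injected current (toy)

      def lif_spikes(tau, R, v_th=1.0):
          v, out = 0.0, []
          for k in range(len(t)):
              v += dt / tau * (-v + R * I[k])
              if v >= v_th:
                  v = 0.0
                  out.append(t[k])
          return np.array(out)

      target = lif_spikes(tau=12.0, R=0.9)                 # hidden "true" parameters

      def coincidences(model, window=2.0):
          return sum(np.min(np.abs(target - s)) <= window for s in model)

      best = max(((tau, R) for tau in np.arange(8.0, 17.0, 1.0)
                           for R in np.arange(0.70, 1.14, 0.05)),
                 key=lambda p: coincidences(lif_spikes(*p)))
      print("recovered (tau, R):", (best[0], round(best[1], 2)), "true: (12.0, 0.9)")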

  9. Integration of Continuous-Time Dynamics in a Spiking Neural Network Simulator.

    PubMed

    Hahne, Jan; Dahmen, David; Schuecker, Jannis; Frommer, Andreas; Bolten, Matthias; Helias, Moritz; Diesmann, Markus

    2017-01-01

    Contemporary modeling approaches to the dynamics of neural networks include two important classes of models: biologically grounded spiking neuron models and functionally inspired rate-based units. We present a unified simulation framework that supports the combination of the two for multi-scale modeling, enables the quantitative validation of mean-field approaches by spiking network simulations, and provides an increase in reliability by usage of the same simulation code and the same network model specifications for both model classes. While most spiking simulations rely on the communication of discrete events, rate models require time-continuous interactions between neurons. Exploiting the conceptual similarity to the inclusion of gap junctions in spiking network simulations, we arrive at a reference implementation of instantaneous and delayed interactions between rate-based models in a spiking network simulator. The separation of rate dynamics from the general connection and communication infrastructure ensures flexibility of the framework. In addition to the standard implementation we present an iterative approach based on waveform-relaxation techniques to reduce communication and increase performance for large-scale simulations of rate-based models with instantaneous interactions. Finally we demonstrate the broad applicability of the framework by considering various examples from the literature, ranging from random networks to neural-field models. The study provides the prerequisite for interactions between rate-based and spiking models in a joint simulation.

  10. Integration of Continuous-Time Dynamics in a Spiking Neural Network Simulator

    PubMed Central

    Hahne, Jan; Dahmen, David; Schuecker, Jannis; Frommer, Andreas; Bolten, Matthias; Helias, Moritz; Diesmann, Markus

    2017-01-01

    Contemporary modeling approaches to the dynamics of neural networks include two important classes of models: biologically grounded spiking neuron models and functionally inspired rate-based units. We present a unified simulation framework that supports the combination of the two for multi-scale modeling, enables the quantitative validation of mean-field approaches by spiking network simulations, and provides an increase in reliability by usage of the same simulation code and the same network model specifications for both model classes. While most spiking simulations rely on the communication of discrete events, rate models require time-continuous interactions between neurons. Exploiting the conceptual similarity to the inclusion of gap junctions in spiking network simulations, we arrive at a reference implementation of instantaneous and delayed interactions between rate-based models in a spiking network simulator. The separation of rate dynamics from the general connection and communication infrastructure ensures flexibility of the framework. In addition to the standard implementation we present an iterative approach based on waveform-relaxation techniques to reduce communication and increase performance for large-scale simulations of rate-based models with instantaneous interactions. Finally we demonstrate the broad applicability of the framework by considering various examples from the literature, ranging from random networks to neural-field models. The study provides the prerequisite for interactions between rate-based and spiking models in a joint simulation. PMID:28596730

  11. Network inference from functional experimental data (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Desrosiers, Patrick; Labrecque, Simon; Tremblay, Maxime; Bélanger, Mathieu; De Dorlodot, Bertrand; Côté, Daniel C.

    2016-03-01

    Functional connectivity maps of neuronal networks are critical tools to understand how neurons form circuits, how information is encoded and processed by neurons, how memory is shaped, and how these basic processes are altered under pathological conditions. Current light microscopy allows the observation of calcium or electrical activity in thousands of neurons simultaneously, yet assessing comprehensive connectivity maps directly from such data remains a non-trivial analytical task. Simple statistical methods exist, such as cross-correlation and Granger causality, but they only detect linear interactions between neurons. Other, more involved inference methods inspired by information theory, such as mutual information and transfer entropy, identify connections between neurons more accurately but also require more computational resources. We carried out a comparative study of common connectivity inference methods. The relative accuracy and computational cost of each method was determined via simulated fluorescence traces generated with realistic computational models of interacting neurons in networks of different topologies (clustered or non-clustered) and sizes (10-1000 neurons). To bridge the computational and experimental work, we observed the intracellular calcium activity of live hippocampal neuronal cultures infected with the fluorescent calcium marker GCaMP6f. The spontaneous activity of the networks, consisting of 50-100 neurons per field of view, was recorded at 20 to 50 Hz on a microscope controlled by homemade software. We implemented all connectivity inference methods in the software, which rapidly loads calcium fluorescence movies, segments the images, extracts the fluorescence traces, and assesses the functional connections (with strengths and directions) between each pair of neurons. We used this software to assess, in real time, the functional connectivity from real calcium imaging data in basal conditions, under plasticity protocols, and in epileptic conditions.
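
    Of the methods compared, lagged cross-correlation is the simplest to sketch. Below, toy spiking data with known ground-truth wiring are generated, and directed links are read off a thresholded lag-1 correlation matrix; the dynamics, threshold and sizes are our own illustrative choices:

      # Cross-correlation connectivity inference on toy data with known wiring.
      import numpy as np

      rng = np.random.default_rng(0)
      N, T = 20, 20000
      W_true = (rng.random((N, N)) < 0.1).astype(float)   # ground-truth links
      np.fill_diagonal(W_true, 0)

      x = np.zeros((T, N))
      x[0] = rng.random(N) < 0.05
      for t in range(1, T):                               # a spike raises its targets'
          p = 0.02 + 0.3 * (x[t - 1] @ W_true > 0)        # firing probability next bin
          x[t] = rng.random(N) < p

      z = (x - x.mean(0)) / x.std(0)
      C = z[:-1].T @ z[1:] / (T - 1)                      # lag-1 cross-correlation matrix
      np.fill_diagonal(C, 0)
      W_hat = C > 0.1                                     # crude threshold

      tp = int((W_hat & (W_true > 0)).sum())
      print(f"recovered {tp}/{int(W_true.sum())} true links, "
            f"{int(W_hat.sum()) - tp} false positives")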

  12. Neuroprotective Role of Gap Junctions in a Neuron Astrocyte Network Model.

    PubMed

    Huguet, Gemma; Joglekar, Anoushka; Messi, Leopold Matamba; Buckalew, Richard; Wong, Sarah; Terman, David

    2016-07-26

    A detailed biophysical model for a neuron/astrocyte network is developed to explore mechanisms responsible for the initiation and propagation of cortical spreading depolarizations and the role of astrocytes in maintaining ion homeostasis, thereby preventing these pathological waves. Simulations of the model illustrate how properties of spreading depolarizations, such as wave speed and duration of depolarization, depend on several factors, including the neuron and astrocyte Na(+)-K(+) ATPase pump strengths. In particular, we consider the neuroprotective role of astrocyte gap junction coupling. The model demonstrates that a syncytium of electrically coupled astrocytes can maintain a physiological membrane potential in the presence of an elevated extracellular K(+) concentration and efficiently distribute the excess K(+) across the syncytium. This provides an effective neuroprotective mechanism for delaying or preventing the initiation of spreading depolarizations. Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  13. Enhanced polychronization in a spiking network with metaplasticity.

    PubMed

    Guise, Mira; Knott, Alistair; Benuskova, Lubica

    2015-01-01

    Computational models of metaplasticity have usually focused on the modeling of single synapses (Shouval et al., 2002). In this paper we study the effect of metaplasticity on network behavior. Our guiding assumption is that the primary purpose of metaplasticity is to regulate synaptic plasticity, by increasing it when input is low and decreasing it when input is high. For our experiments we adopt a model of metaplasticity that demonstrably has this effect for a single synapse; our primary interest is in how metaplasticity thus defined affects network-level phenomena. We focus on a network-level phenomenon called polychronicity, which has a potential role in representation and memory. A network with polychronicity has the ability to produce non-synchronous but precisely timed sequences of neural firing events that can arise from strongly connected groups of neurons called polychronous neural groups (Izhikevich et al., 2004). Polychronous groups (PNGs) develop readily when spiking networks are exposed to repeated spatio-temporal stimuli under the influence of spike-timing-dependent plasticity (STDP), but are sensitive to changes in synaptic weight distribution. We use a technique we have recently developed called Response Fingerprinting to show that PNGs formed in the presence of metaplasticity are significantly larger than those with no metaplasticity. A potential mechanism for this enhancement is proposed that links an inherent property of integrator-type neurons called spike latency to an increase in the tolerance of PNG neurons to jitter in their inputs.

  14. Robust Adaptive Synchronization of Ring Configured Uncertain Chaotic FitzHugh–Nagumo Neurons under Direction-Dependent Coupling

    PubMed Central

    Iqbal, Muhammad; Rehan, Muhammad; Hong, Keum-Shik

    2018-01-01

    This paper exploits the dynamical modeling, behavior analysis, and synchronization of a network of four different FitzHugh–Nagumo (FHN) neurons with unknown parameters linked in a ring configuration under direction-dependent coupling. The main purpose is to investigate a robust adaptive control law for the synchronization of uncertain and perturbed neurons, communicating in a medium of bidirectional coupling. The neurons are assumed to be different and interconnected in a ring structure. The strength of the gap junctions is taken to be different for each link in the network, owing to the inter-neuronal coupling medium properties. Robust adaptive control mechanism based on Lyapunov stability analysis is employed and theoretical criteria are derived to realize the synchronization of the network of four FHN neurons in a ring form with unknown parameters under direction-dependent coupling and disturbances. The proposed scheme for synchronization of dissimilar neurons, under external electrical stimuli, coupled in a ring communication topology, having all parameters unknown, and subject to directional coupling medium and perturbations, is addressed for the first time as per our knowledge. To demonstrate the efficacy of the proposed strategy, simulation results are provided. PMID:29535622
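
    A fixed-gain (non-adaptive) version of the setup is easy to simulate: four mismatched FitzHugh-Nagumo neurons in a ring, with a different gap-junction strength on each link. The parameters below are illustrative; the paper's contribution, the robust adaptive controller, is not reproduced here:

      # Ring of four non-identical FHN neurons with per-link diffusive coupling.
      import numpy as np

      rng = np.random.default_rng(0)
      a, eps, I = 0.7, 0.08, 0.5
      b = np.array([0.78, 0.80, 0.82, 0.84])    # parameter mismatch across neurons
      g = np.array([0.5, 0.7, 0.6, 0.8])        # link-dependent coupling strengths

      v, w = rng.normal(0.0, 0.5, 4), np.zeros(4)
      dt, spread = 0.01, []
      for step in range(100000):
          # each neuron couples to both ring neighbours with link-specific gains
          coup = g * (np.roll(v, 1) - v) + np.roll(g, -1) * (np.roll(v, -1) - v)
          dv = v - v**3 / 3 - w + I + coup
          dw = eps * (v + a - b * w)
          v, w = v + dt * dv, w + dt * dw
          if step > 80000:
              spread.append(v.max() - v.min())
      print(f"late-time spread of membrane variables: {np.mean(spread):.3f}")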

  15. Functional Interaction between the Scaffold Protein Kidins220/ARMS and Neuronal Voltage-Gated Na+ Channels.

    PubMed

    Cesca, Fabrizia; Satapathy, Annyesha; Ferrea, Enrico; Nieus, Thierry; Benfenati, Fabio; Scholz-Starke, Joachim

    2015-07-17

    Kidins220 (kinase D-interacting substrate of 220 kDa)/ankyrin repeat-rich membrane spanning (ARMS) acts as a signaling platform at the plasma membrane and is implicated in a multitude of neuronal functions, including the control of neuronal activity. Here, we used the Kidins220(-/-) mouse model to study the effects of Kidins220 ablation on neuronal excitability. Multielectrode array recordings showed reduced evoked spiking activity in Kidins220(-/-) hippocampal networks, which was compatible with the increased excitability of GABAergic neurons determined by current-clamp recordings. Spike waveform analysis further indicated an increased sodium conductance in this neuronal subpopulation. Kidins220 association with brain voltage-gated sodium channels was shown by co-immunoprecipitation experiments and Na(+) current recordings in transfected HEK293 cells, which revealed dramatic alterations of kinetics and voltage dependence. Finally, an in silico interneuronal model incorporating the Kidins220-induced Na(+) current alterations reproduced the firing phenotype observed in Kidins220(-/-) neurons. These results identify Kidins220 as a novel modulator of Nav channel activity, broadening our understanding of the molecular mechanisms regulating network excitability. © 2015 by The American Society for Biochemistry and Molecular Biology, Inc.

  16. Stochastic inference with spiking neurons in the high-conductance state

    NASA Astrophysics Data System (ADS)

    Petrovici, Mihai A.; Bill, Johannes; Bytschok, Ilja; Schemmel, Johannes; Meier, Karlheinz

    2016-10-01

    The highly variable dynamics of neocortical circuits observed in vivo have been hypothesized to represent a signature of ongoing stochastic inference but stand in apparent contrast to the deterministic response of neurons measured in vitro. Based on a propagation of the membrane autocorrelation across spike bursts, we provide an analytical derivation of the neural activation function that holds for a large parameter space, including the high-conductance state. On this basis, we show how an ensemble of leaky integrate-and-fire neurons with conductance-based synapses embedded in a spiking environment can attain the correct firing statistics for sampling from a well-defined target distribution. For recurrent networks, we examine convergence toward stationarity in computer simulations and demonstrate sample-based Bayesian inference in a mixed graphical model. This points to a new computational role of high-conductance states and establishes a rigorous link between deterministic neuron models and functional stochastic dynamics on the network level.

  17. Modeling spike-wave discharges by a complex network of neuronal oscillators.

    PubMed

    Medvedeva, Tatiana M; Sysoeva, Marina V; van Luijtelaar, Gilles; Sysoev, Ilya V

    2018-02-01

    The organization of the neural networks and the mechanisms that generate the spike-wave discharges (SWDs) highly stereotypical for absence epilepsy are heavily debated. Here we describe a model that reproduces both the characteristics of SWDs and the dynamics of coupling between brain regions, relying mainly on the properties of hierarchically organized networks of a large number of neuronal oscillators. We used a two-level mesoscale model. The first level consists of three structures: the nervus trigeminus serving as an input, the thalamus, and the somatosensory cortex; the second level consists of groups of neighboring neurons belonging to one of the three modeled structures. The model reproduces the main features of the transition from normal to epileptiform activity and its spontaneous abortion: an increase in the oscillation amplitude, the emergence of the main frequency and its higher harmonics, and the ability to generate trains of seizures. The model was stable with respect to variations in the structure of the couplings and to scaling. Analysis of the interactions between the model structures from their time series using the Granger causality method showed that the model reproduced the preictal coupling increase detected previously in experimental data. SWDs can thus be generated by changes in network organization. We propose that a specific pathological architecture of couplings in the brain is necessary to allow the transition from normal to epileptiform activity, in addition to the complex intrinsic and synaptic mechanisms modeled and reported by others. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. A solution to neural field equations by a recurrent neural network method

    NASA Astrophysics Data System (ADS)

    Alharbi, Abir

    2012-09-01

    Neural field equations (NFE) are used to model the activity of neurons in the brain; they are derived starting from a single-neuron integrate-and-fire model. The neural continuum is spatially discretized for numerical studies, and the governing equations are modeled as a system of ordinary differential equations. In this article the recurrent neural network approach is used to solve this system of ODEs, via a technique developed by combining the standard numerical method of finite differences with the Hopfield neural network. The architecture of the net, the energy function, the updating equations, and the algorithms are developed for the NFE model. A Hopfield neural network is then designed to minimize the energy function modeling the NFE. Results obtained from the Hopfield-finite-differences net show excellent performance in terms of accuracy and speed. The parallel nature of the Hopfield approach may make it easier to implement on fast parallel computers, giving it a speed advantage over traditional methods.
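
    The discretization step this article builds on is compact: sample the field on a grid so the integro-differential equation becomes coupled ODEs, which the article then hands to a Hopfield-network solver. The sketch below performs only the discretization and integrates with plain Euler; the kernel and parameters are illustrative assumptions of ours:

      # 1D neural field u_t = -u + integral w(x - y) f(u(y)) dy on a grid.
      import numpy as np

      n, L = 100, 10.0
      xs = np.linspace(-L / 2, L / 2, n)
      dx = xs[1] - xs[0]
      dist = np.abs(xs[:, None] - xs[None, :])
      # Mexican-hat kernel: local excitation, surround inhibition
      W = (1.5 * np.exp(-dist**2) - 0.75 * np.exp(-dist**2 / 4)) * dx

      f = lambda u: 1.0 / (1.0 + np.exp(-8 * (u - 0.3)))   # firing-rate nonlinearity
      u = 0.5 * np.exp(-xs**2)                             # localized initial activity

      dt = 0.01
      for _ in range(5000):
          u += dt * (-u + W @ f(u))                        # Euler step of the ODE system
      print(f"final activity: peak {u.max():.2f}, active width {(u > 0.1).sum()} points")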

  19. Morin hydrate promotes inner ear neural stem cell survival and differentiation and protects cochlea against neuronal hearing loss.

    PubMed

    He, Qiang; Jia, Zhanwei; Zhang, Ying; Ren, Xiumin

    2017-03-01

    We aimed to investigate the effect of morin hydrate on neural stem cells (NSCs) isolated from the mouse inner ear and its potential in protecting against neuronal hearing loss. 3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyl-2-H-tetrazolium bromide (MTT) and bromodeoxyuridine incorporation assays were employed to assess the effect of morin hydrate on the viability and proliferation of in vitro NSC cultures. The NSCs were then differentiated into neurons, in which neurosphere formation and differentiation were evaluated, followed by neurite outgrowth and neural excitability measurements in the resulting in vitro neuronal network. Mechanotransduction in ex vivo cochlea cultures, as well as auditory brainstem response thresholds and distortion product otoacoustic emission amplitudes in a mouse ototoxicity model, were also measured following gentamicin treatment, to investigate the protective role of morin hydrate against neuronal hearing loss. Morin hydrate improved the viability, proliferation, neurosphere formation and neuronal differentiation of inner ear NSCs, and promoted in vitro neuronal network functions. In both ex vivo and in vivo ototoxicity models, morin hydrate prevented gentamicin-induced neuronal hearing loss. Morin hydrate exhibited potent properties in promoting the growth and differentiation of inner ear NSCs into functional neurons and in protecting against gentamicin ototoxicity. Our study supports its clinical potential in treating neuronal hearing loss. © 2016 The Authors. Journal of Cellular and Molecular Medicine published by John Wiley & Sons Ltd and Foundation for Cellular and Molecular Medicine.

  20. Synchronization properties of heterogeneous neuronal networks with mixed excitability type

    NASA Astrophysics Data System (ADS)

    Leone, Michael J.; Schurter, Brandon N.; Letson, Benjamin; Booth, Victoria; Zochowski, Michal; Fink, Christian G.

    2015-03-01

    We study the synchronization of neuronal networks with dynamical heterogeneity, showing that network structures with the same propensity for synchronization (as quantified by master stability function analysis) may develop dramatically different synchronization properties when heterogeneity is introduced with respect to neuronal excitability type. Specifically, we investigate networks composed of neurons with different types of phase response curves (PRCs), which characterize how oscillating neurons respond to excitatory perturbations. Neurons exhibiting type 1 PRC respond exclusively with phase advances, while neurons exhibiting type 2 PRC respond with either phase delays or phase advances, depending on when the perturbation occurs. We find that Watts-Strogatz small world networks transition to synchronization gradually as the proportion of type 2 neurons increases, whereas scale-free networks may transition gradually or rapidly, depending upon local correlations between node degree and excitability type. Random placement of type 2 neurons results in gradual transition to synchronization, whereas placement of type 2 neurons as hubs leads to a much more rapid transition, showing that type 2 hub cells easily "hijack" neuronal networks to synchronization. These results underscore the fact that the degree of synchronization observed in neuronal networks is determined by a complex interplay between network structure and the dynamical properties of individual neurons, indicating that efforts to recover structural connectivity from dynamical correlations must in general take both factors into account.
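
    The key ingredient, type 1 versus type 2 phase response curves, can be explored with pulse-coupled phase oscillators. Everything below (the PRC shapes, topology, coupling strength and heterogeneity) is an illustrative assumption of ours, not the paper's model; outcomes depend on these knobs exactly as the abstract argues:

      # Pulse-coupled phase oscillators with mixed PRC types on a random graph.
      import numpy as np

      rng = np.random.default_rng(0)
      N, K, eps, steps = 100, 10, 0.03, 30000
      A = np.zeros((N, N), bool)
      for i in range(N):                          # random K-in directed graph
          A[i, rng.choice(np.delete(np.arange(N), i), K, replace=False)] = True

      def prc(phase, type2):
          t1 = np.sin(phase / 2) ** 2             # type 1: phase advances only
          t2 = -np.sin(phase)                     # type 2: delay early, advance late
          return np.where(type2, t2, t1)

      def order_parameter(frac_type2):
          type2 = rng.random(N) < frac_type2
          phase = rng.uniform(0, 2 * np.pi, N)
          drift = 2 * np.pi / rng.normal(1000, 20, N)   # heterogeneous periods
          for _ in range(steps):
              phase += drift
              fired = phase >= 2 * np.pi
              phase[fired] -= 2 * np.pi
              phase += eps * (A @ fired.astype(float)) * prc(phase, type2)
          return abs(np.exp(1j * phase).mean())   # 1 = full synchrony

      for frac in (0.0, 0.5, 1.0):
          print(f"fraction type 2 = {frac:.1f}: order parameter = {order_parameter(frac):.2f}")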

  1. Assessing the Role of Inhibition in Stabilizing Neocortical Networks Requires Large-Scale Perturbation of the Inhibitory Population

    PubMed Central

    Mrsic-Flogel, Thomas D.

    2017-01-01

    Neurons within cortical microcircuits are interconnected with recurrent excitatory synaptic connections that are thought to amplify signals (Douglas and Martin, 2007), form selective subnetworks (Ko et al., 2011), and aid feature discrimination. Strong inhibition (Haider et al., 2013) counterbalances excitation, enabling sensory features to be sharpened and represented by sparse codes (Willmore et al., 2011). This balance between excitation and inhibition makes it difficult to assess the strength, or gain, of recurrent excitatory connections within cortical networks, which is key to understanding their operational regime and the computations that they perform. Networks that combine an unstable high-gain excitatory population with stabilizing inhibitory feedback are known as inhibition-stabilized networks (ISNs) (Tsodyks et al., 1997). Theoretical studies using reduced network models predict that ISNs produce paradoxical responses to perturbation, but experimental perturbations failed to find evidence for ISNs in cortex (Atallah et al., 2012). Here, we reexamined this question by investigating how cortical network models consisting of many neurons behave after perturbations and found that results obtained from reduced network models fail to predict responses to perturbations in more realistic networks. Our models predict that a large proportion of the inhibitory network must be perturbed to reliably detect an ISN regime robustly in cortex. We propose that wide-field optogenetic suppression of inhibition under promoters targeting a large fraction of inhibitory neurons may provide a perturbation of sufficient strength to reveal the operating regime of cortex. Our results suggest that detailed computational models of optogenetic perturbations are necessary to interpret the results of experimental paradigms. SIGNIFICANCE STATEMENT: Many useful computational mechanisms proposed for cortex require local excitatory recurrence to be very strong, such that local inhibitory feedback is necessary to avoid epileptiform runaway activity (an “inhibition-stabilized network” or “ISN” regime). However, recent experimental results suggest that this regime may not exist in cortex. We simulated activity perturbations in cortical networks of increasing realism and found that, to detect ISN-like properties in cortex, large proportions of the inhibitory population must be perturbed. Current experimental methods for inhibitory perturbation are unlikely to satisfy this requirement, implying that existing experimental observations are inconclusive about the computational regime of cortex. Our results suggest that new experimental designs targeting a majority of inhibitory neurons may be able to resolve this question. PMID:29074575
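
    The paradoxical-response signature that defines an ISN is worth spelling out, since the whole perturbation argument rests on it. In a 2-by-2 linear rate model of our own (rectification and network size ignored), extra excitatory drive onto the inhibitory population lowers its steady-state rate precisely when recurrent excitation is strong:

      # Paradoxical effect in a reduced linear rate model r = W r + h.
      import numpy as np

      def steady(W, h):
          return np.linalg.solve(np.eye(2) - W, h)    # fixed point of r = W r + h

      for w_ee, label in ((0.5, "non-ISN (weak E<-E)"), (1.5, "ISN (strong E<-E)")):
          W = np.array([[w_ee, -1.0],                 # E<-E, E<-I
                        [1.2, -0.5]])                 # I<-E, I<-I
          r0 = steady(W, np.array([1.0, 0.8]))
          r1 = steady(W, np.array([1.0, 1.0]))        # extra drive onto I only
          print(f"{label}: I-rate change under extra I drive: {r1[1] - r0[1]:+.3f}")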

  2. Large-Scale Modeling of Epileptic Seizures: Scaling Properties of Two Parallel Neuronal Network Simulation Algorithms

    DOE PAGES

    Pesce, Lorenzo L.; Lee, Hyong C.; Hereld, Mark; ...

    2013-01-01

    Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.

  3. A simplified protocol for differentiation of electrophysiologically mature neuronal networks from human induced pluripotent stem cells.

    PubMed

    Gunhanlar, N; Shpak, G; van der Kroeg, M; Gouty-Colomer, L A; Munshi, S T; Lendemeijer, B; Ghazvini, M; Dupont, C; Hoogendijk, W J G; Gribnau, J; de Vrij, F M S; Kushner, S A

    2018-05-01

    Progress in elucidating the molecular and cellular pathophysiology of neuropsychiatric disorders has been hindered by the limited availability of living human brain tissue. The emergence of induced pluripotent stem cells (iPSCs) has offered a unique alternative strategy using patient-derived functional neuronal networks. However, methods for reliably generating iPSC-derived neurons with mature electrophysiological characteristics have been difficult to develop. Here, we report a simplified differentiation protocol that yields electrophysiologically mature iPSC-derived cortical lineage neuronal networks without the need for astrocyte co-culture or specialized media. This protocol generates a consistent 60:40 ratio of neurons and astrocytes that arise from a common forebrain neural progenitor. Whole-cell patch-clamp recordings of 114 neurons derived from three independent iPSC lines confirmed their electrophysiological maturity, including resting membrane potential (-58.2±1.0 mV), capacitance (49.1±2.9 pF), action potential (AP) threshold (-50.9±0.5 mV) and AP amplitude (66.5±1.3 mV). Nearly 100% of neurons were capable of firing APs, of which 79% had sustained trains of mature APs with minimal accommodation (peak AP frequency: 11.9±0.5 Hz) and 74% exhibited spontaneous synaptic activity (amplitude, 16.03±0.82 pA; frequency, 1.09±0.17 Hz). We expect this protocol to be of broad applicability for implementing iPSC-based neuronal network models of neuropsychiatric disorders.

  4. Optogenetic dissection reveals multiple rhythmogenic modules underlying locomotion

    PubMed Central

    Hägglund, Martin; Dougherty, Kimberly J.; Borgius, Lotta; Itohara, Shigeyoshi; Iwasato, Takuji; Kiehn, Ole

    2013-01-01

    Neural networks in the spinal cord known as central pattern generators produce the sequential activation of muscles needed for locomotion. The overall locomotor network architectures in limbed vertebrates have been much debated, and no consensus exists as to how they are structured. Here, we use optogenetics to dissect the excitatory and inhibitory neuronal populations and probe the organization of the mammalian central pattern generator. We find that locomotor-like rhythmic bursting can be induced unilaterally or independently in flexor or extensor networks. Furthermore, we show that individual flexor motor neuron pools can be recruited into bursting without any activity in other nearby flexor motor neuron pools. Our experiments differentiate among several proposed models for rhythm generation in the vertebrates and show that the basic structure underlying the locomotor network has a distributed organization with many intrinsically rhythmogenic modules. PMID:23798384

  5. Serotonin targets inhibitory synapses to induce modulation of network functions

    PubMed Central

    Manzke, Till; Dutschmann, Mathias; Schlaf, Gerald; Mörschel, Michael; Koch, Uwe R.; Ponimaskin, Evgeni; Bidon, Olivier; Lalley, Peter M.; Richter, Diethelm W.

    2009-01-01

    The cellular effects of serotonin (5-HT), a neuromodulator with widespread influences in the central nervous system, have been investigated. Despite detailed knowledge about the molecular biology of cellular signalling, it is not possible to anticipate the responses of neuronal networks to a global action of 5-HT. Heterogeneous expression of various subtypes of serotonin receptors (5-HTR) in a variety of neurons differently equipped with cell-specific transmitter receptors and ion channel assemblies can provoke diverse cellular reactions resulting in various forms of network adjustment and, hence, motor behaviour. Using the respiratory network as a model for reciprocal synaptic inhibition, we demonstrate that 5-HT1AR modulation primarily affects inhibition through glycinergic synapses. Potentiation of glycinergic inhibition of both excitatory and inhibitory neurons induces a functional reorganization of the network leading to a characteristic change of motor output. The changes in network operation are robust and help to overcome opiate-induced respiratory depression. Hence, 5-HT1AR activation stabilizes the rhythmicity of breathing during opiate medication of pain. PMID:19651659

  6. Categorization and decision-making in a neurobiologically plausible spiking network using a STDP-like learning rule.

    PubMed

    Beyeler, Michael; Dutt, Nikil D; Krichmar, Jeffrey L

    2013-12-01

    Understanding how the human brain is able to efficiently perceive and understand a visual scene is still a field of ongoing research. Although many studies have focused on the design and optimization of neural networks to solve visual recognition tasks, most of them either lack neurobiologically plausible learning rules or decision-making processes. Here we present a large-scale model of a hierarchical spiking neural network (SNN) that integrates a low-level memory encoding mechanism with a higher-level decision process to perform a visual classification task in real-time. The model consists of Izhikevich neurons and conductance-based synapses for realistic approximation of neuronal dynamics, a spike-timing-dependent plasticity (STDP) synaptic learning rule with additional synaptic dynamics for memory encoding, and an accumulator model for memory retrieval and categorization. The full network, which comprised 71,026 neurons and approximately 133 million synapses, ran in real-time on a single off-the-shelf graphics processing unit (GPU). The network was constructed on a publicly available SNN simulator that supports general-purpose neuromorphic computer chips. The network achieved 92% correct classifications on MNIST in 100 rounds of random sub-sampling, which is comparable to other SNN approaches and provides a conservative and reliable performance metric. Additionally, the model correctly predicted reaction times from psychophysical experiments. Because of the scalability of the approach and its neurobiological fidelity, the current model can be extended to an efficient neuromorphic implementation that supports more generalized object recognition and decision-making architectures found in the brain. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. Functional network inference of the suprachiasmatic nucleus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abel, John H.; Meeker, Kirsten; Granados-Fuentes, Daniel

    2016-04-04

    In the mammalian suprachiasmatic nucleus (SCN), noisy cellular oscillators communicate within a neuronal network to generate precise system-wide circadian rhythms. Although the intracellular genetic oscillator and intercellular biochemical coupling mechanisms have been examined previously, the network topology driving synchronization of the SCN has not been elucidated. This network has been particularly challenging to probe, due to its oscillatory components and slow coupling timescale. In this work, we investigated the SCN network at a single-cell resolution through a chemically induced desynchronization. We then inferred functional connections in the SCN by applying the maximal information coefficient statistic to bioluminescence reporter data from individual neurons while they resynchronized their circadian cycling. Our results demonstrate that the functional network of circadian cells associated with resynchronization has small-world characteristics, with a node degree distribution that is exponential. We show that hubs of this small-world network are preferentially located in the central SCN, with sparsely connected shells surrounding these cores. Finally, we used two computational models of circadian neurons to validate our predictions of network structure.

  8. Noise focusing and the emergence of coherent activity in neuronal cultures

    NASA Astrophysics Data System (ADS)

    Orlandi, Javier G.; Soriano, Jordi; Alvarez-Lacalle, Enrique; Teller, Sara; Casademunt, Jaume

    2013-09-01

    At early stages of development, neuronal cultures in vitro spontaneously reach a coherent state of collective firing in a pattern of nearly periodic global bursts. Although understanding the spontaneous activity of neuronal networks is of chief importance in neuroscience, the origin and nature of that pulsation has remained elusive. By combining high-resolution calcium imaging with modelling in silico, we show that this behaviour is controlled by the propagation of waves that nucleate randomly in a set of points that is specific to each culture and is selected by a non-trivial interplay between dynamics and topology. The phenomenon is explained by the noise focusing effect--a strong spatio-temporal localization of the noise dynamics that originates in the complex structure of avalanches of spontaneous activity. Results are relevant to neuronal tissues and to complex networks with integrate-and-fire dynamics and metric correlations, for instance, in rumour spreading on social networks.

  9. In silico Evolutionary Developmental Neurobiology and the Origin of Natural Language

    NASA Astrophysics Data System (ADS)

    Szathmáry, Eörs; Szathmáry, Zoltán; Ittzés, Péter; Orbán, Gergő; Zachár, István; Huszár, Ferenc; Fedor, Anna; Varga, Máté; Számadó, Szabolcs

    It is justified to assume that part of our genetic endowment contributes to our language skills, yet it is impossible to tell at this moment exactly how genes affect the language faculty. We complement experimental biological studies with an in silico approach, in which we simulate the evolution of neuronal networks under selection for language-related skills. At the heart of this project is the Evolutionary Neurogenetic Algorithm (ENGA), which is deliberately biomimetic. The design of the system was inspired by important biological phenomena such as brain ontogenesis, neuron morphologies, and indirect genetic encoding. Neuronal networks were selected and allowed to reproduce as a function of their performance in the given task. The selected neuronal networks in all scenarios were able to solve the communication problem they had to face. The most striking feature of the model is that it works with highly indirect genetic encoding--just as brains do.

  10. Engineering Devices to Treat Epilepsy: A Clinical Perspective

    DTIC Science & Technology

    2001-10-25

    Research over the next three decades reinforced the idea that seizures likely spread through discrete, functional neuronal networks [2]. Over the last...15 years, researchers have demonstrated that it is possible to modulate the activity of functional neuronal networks in animal models of epilepsy by...hypothalamus [5], mamillary bodies [6], cerebellum [7], basal ganglia [8], locus ceruleus [9] and the substantia nigra [10]. At the same time some

  11. A neural network technique for remeshing of bone microstructure.

    PubMed

    Fischer, Anath; Holdstein, Yaron

    2012-01-01

    Today, there is major interest within the biomedical community in developing accurate noninvasive means for the evaluation of bone microstructure and bone quality. Recent improvements in 3D imaging technology, among them the development of micro-CT and micro-MRI scanners, allow in-vivo 3D high-resolution scanning and reconstruction of large specimens or even whole bone models. Thus, the tendency today is to evaluate bone features using 3D assessment techniques rather than traditional 2D methods. For this purpose, high-quality meshing methods are required. However, the 3D meshes produced by current commercial systems are usually of low quality with respect to analysis and rapid prototyping. 3D model reconstruction of bone is difficult due to the complexity of bone microstructure; the small bone features lead to a great deal of neighborhood ambiguity near each vertex. The relatively new neural network approach to mesh reconstruction has the potential to create or remesh 3D models accurately and quickly. A neural network (NN) is a set of interconnected neurons, each capable of making an autonomous arithmetic calculation while being influenced by its surrounding neurons through the structure of the network. This paper proposes an extension of the growing neural gas (GNG) neural network technique for remeshing a triangular manifold mesh that represents bone microstructure. This method has the advantage of reconstructing the surface of a genus-n freeform object without a priori knowledge regarding the original object, its topology, or its shape.
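
    For readers unfamiliar with the underlying algorithm, the following is a compact Python sketch of standard growing neural gas (Fritzke's formulation), which the paper extends to manifold remeshing. The 2D point-cloud input and all parameter values are illustrative stand-ins for points sampled from a bone surface, and bookkeeping such as pruning isolated nodes is omitted.

        import numpy as np

        rng = np.random.default_rng(1)
        eps_b, eps_n = 0.05, 0.005            # winner / neighbor learning rates
        a_max, lam, alpha, d = 50, 100, 0.5, 0.995

        W = [rng.random(2), rng.random(2)]    # node positions
        err = [0.0, 0.0]                      # accumulated error per node
        edges = {}                            # (i, j) with i < j -> age

        def key(i, j):
            return (min(i, j), max(i, j))

        for step in range(1, 10001):
            x = rng.random(2)                 # sample from the target manifold
            dists = [float(np.sum((w - x) ** 2)) for w in W]
            s1, s2 = np.argsort(dists)[:2]
            err[s1] += dists[s1]
            W[s1] += eps_b * (x - W[s1])      # move winner toward the input
            for (i, j) in list(edges):
                if s1 in (i, j):
                    edges[(i, j)] += 1        # age edges incident to the winner
                    other = j if i == s1 else i
                    W[other] += eps_n * (x - W[other])
            edges[key(s1, s2)] = 0            # connect (or refresh) the winner pair
            edges = {e: a for e, a in edges.items() if a <= a_max}
            if step % lam == 0:               # insert a node near the largest error
                q = int(np.argmax(err))
                nbrs = [j if i == q else i for (i, j) in edges if q in (i, j)]
                if nbrs:
                    f = max(nbrs, key=err.__getitem__)
                    W.append((W[q] + W[f]) / 2)
                    err[q] *= alpha
                    err[f] *= alpha
                    err.append(err[q])
                    new = len(W) - 1
                    edges.pop(key(q, f), None)
                    edges[key(q, new)] = 0
                    edges[key(f, new)] = 0
            err = [e * d for e in err]        # decay all errors

        print(f"{len(W)} nodes, {len(edges)} edges")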

  12. Decorrelation of Neural-Network Activity by Inhibitory Feedback

    PubMed Central

    Einevoll, Gaute T.; Diesmann, Markus

    2012-01-01

    Correlations in spike-train ensembles can seriously impair the encoding of information by their spatio-temporal structure. An inevitable source of correlation in finite neural networks is common presynaptic input to pairs of neurons. Recent studies demonstrate that spike correlations in recurrent neural networks are considerably smaller than expected based on the amount of shared presynaptic input. Here, we explain this observation by means of a linear network model and simulations of networks of leaky integrate-and-fire neurons. We show that inhibitory feedback efficiently suppresses pairwise correlations and, hence, population-rate fluctuations, thereby assigning inhibitory neurons the new role of active decorrelation. We quantify this decorrelation by comparing the responses of the intact recurrent network (feedback system) and systems where the statistics of the feedback channel is perturbed (feedforward system). Manipulations of the feedback statistics can lead to a significant increase in the power and coherence of the population response. In particular, neglecting correlations within the ensemble of feedback channels or between the external stimulus and the feedback amplifies population-rate fluctuations by orders of magnitude. The fluctuation suppression in homogeneous inhibitory networks is explained by a negative feedback loop in the one-dimensional dynamics of the compound activity. Similarly, a change of coordinates exposes an effective negative feedback loop in the compound dynamics of stable excitatory-inhibitory networks. The suppression of input correlations in finite networks is explained by the population averaged correlations in the linear network model: In purely inhibitory networks, shared-input correlations are canceled by negative spike-train correlations. In excitatory-inhibitory networks, spike-train correlations are typically positive. Here, the suppression of input correlations is not a result of the mere existence of correlations between excitatory (E) and inhibitory (I) neurons, but a consequence of a particular structure of correlations among the three possible pairings (EE, EI, II). PMID:23133368
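
    The one-dimensional negative-feedback argument can be illustrated with a minimal linear rate model: the same common input fluctuation drives the population rate with and without an inhibitory feedback term. This Python sketch is a deliberately simplified caricature of the analysis, not the paper's spiking network; parameters are arbitrary.

        import numpy as np

        rng = np.random.default_rng(2)
        T, dt, tau = 20000, 0.1, 10.0      # steps, ms, rate time constant (ms)
        noise = rng.normal(0.0, 1.0, T)    # common input fluctuation

        def population_rate(g):
            """Integrate tau * dr/dt = -(1 + g) * r + noise; g = feedback gain."""
            r = np.zeros(T)
            for t in range(1, T):
                r[t] = r[t - 1] + dt / tau * (-(1.0 + g) * r[t - 1] + noise[t])
            return r

        var_open = population_rate(0.0).var()      # no feedback (feedforward)
        var_closed = population_rate(10.0).var()   # strong inhibitory feedback
        print(f"fluctuation suppression factor: {var_open / var_closed:.1f}")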

  13. FPGA implementation of a biological neural network based on the Hodgkin-Huxley neuron model.

    PubMed

    Yaghini Bonabi, Safa; Asgharian, Hassan; Safari, Saeed; Nili Ahmadabadi, Majid

    2014-01-01

    A set of techniques for efficient implementation of a Hodgkin-Huxley-based (H-H) neural network model on FPGA (Field Programmable Gate Array) is presented. The central implementation challenge is the complexity of the H-H model, which puts limits on the network size and on the execution speed. However, the basics of the original model cannot be compromised when the effect of synaptic specifications on network behavior is the subject of study. To solve the problem, we used computational techniques such as the CORDIC (Coordinate Rotation Digital Computer) algorithm and step-by-step integration in the implementation of the arithmetic circuits. In addition, we employed techniques such as resource sharing to preserve the details of the model and to increase the network size, while keeping network execution close to real time with high precision. An implementation of a two-mini-column network with 120/30 excitatory/inhibitory neurons is provided to investigate the characteristics of our method in practice. The implementation techniques provide an opportunity to construct large FPGA-based network models for investigating the effect of different neurophysiological mechanisms, like voltage-gated channels and synaptic activities, on the behavior of a neural network in an appropriate execution time. In addition to the inherent properties of FPGAs, like parallelism and re-configurability, our approach makes the FPGA-based system a proper candidate for studies of neural control of cognitive robots and systems.
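
    As background on the arithmetic mentioned above, here is a generic textbook CORDIC routine in Python: sine and cosine emerge from a sequence of rotations by fixed angles arctan(2^-i), which reduce to shifts and adds in fixed-point hardware. This is an illustration of the algorithm, not the authors' circuit.

        import math

        ANGLES = [math.atan(2.0 ** -i) for i in range(32)]
        K = 1.0
        for i in range(32):
            K /= math.sqrt(1.0 + 2.0 ** (-2 * i))   # cumulative rotation gain

        def cordic_sin_cos(theta, iters=32):
            x, y, z = K, 0.0, theta       # start on the x-axis, pre-scaled by K
            for i in range(iters):
                d = 1.0 if z >= 0.0 else -1.0
                x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
                z -= d * ANGLES[i]        # drive the residual angle to zero
            return y, x                   # (sin, cos)

        s, c = cordic_sin_cos(0.7)
        print(abs(s - math.sin(0.7)), abs(c - math.cos(0.7)))   # both ~1e-9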

  14. Contextual Modulation is Related to Efficiency in a Spiking Network Model of Visual Cortex.

    PubMed

    Sharifian, Fariba; Heikkinen, Hanna; Vigário, Ricardo; Vanni, Simo

    2015-01-01

    In the visual cortex, stimuli outside the classical receptive field (CRF) modulate the neural firing rate, without driving the neuron by themselves. In the primary visual cortex (V1), such contextual modulation can be parametrized with an area summation function (ASF): increasing stimulus size causes first an increase and then a decrease of firing rate before reaching an asymptote. Earlier work has reported an increase in sparseness when CRF stimulation is extended to its surroundings, but there has been no clear connection between the ASF and network efficiency. Here we aimed to investigate a possible link between the two. In this study, we simulated the responses of a biomimetic spiking neural network model of the visual cortex to a set of natural images. We varied the network parameters and compared the V1 excitatory neuron spike responses to the corresponding responses predicted from earlier single-neuron data from primate visual cortex. Network efficiency was quantified with firing rate (which is directly associated with neural energy consumption), entropy per spike, and population sparseness. All three measures together provided a clear association between network efficiency and the ASF. The association was clear when varying the horizontal connectivity within V1, which influenced both the efficiency and the distance to the ASF (DAS). Given the limitations of our biophysical model, this association is qualitative, but it nevertheless suggests that an ASF-like receptive field structure can produce an efficient population response.
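
    One common parametrization with the shape described above is a difference of error functions: an excitatory center minus a broader, weaker suppressive surround, which rises with stimulus size, peaks, and then declines to an asymptote. The sketch below is illustrative and not necessarily the ASF form used in the paper.

        import numpy as np
        from scipy.special import erf

        size = np.linspace(0.0, 10.0, 200)    # stimulus diameter (deg)
        k_e, w_e = 1.0, 1.0                   # center gain and width
        k_i, w_i = 0.5, 3.0                   # broader, weaker surround
        response = k_e * erf(size / w_e) - k_i * erf(size / w_i)

        peak = size[np.argmax(response)]
        print(f"peak at {peak:.2f} deg, asymptote {response[-1]:.2f}")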

  15. Adaptive Neuron Model: An architecture for the rapid learning of nonlinear topological transformations

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul (Inventor)

    1994-01-01

    A method for the rapid learning of nonlinear mappings and topological transformations using a dynamically reconfigurable artificial neural network is presented. This fully recurrent Adaptive Neuron Model (ANM) network was applied to the highly degenerate inverse kinematics problem in robotics, and its performance evaluation is benchmarked. Once trained, the resulting neuromorphic architecture was implemented in custom analog neural network hardware and the parameters capturing the functional transformation were downloaded onto the system. This neuroprocessor, capable of 10(exp 9) ops/sec, was interfaced directly to a three-degree-of-freedom Heathkit robotic manipulator. Calculation of the hardware feed-forward pass for this mapping was benchmarked at approximately 10 microsec.

  16. Transition between Functional Regimes in an Integrate-And-Fire Network Model of the Thalamus

    PubMed Central

    Barardi, Alessandro; Mazzoni, Alberto

    2016-01-01

    The thalamus is a key brain element in the processing of sensory information. During the sleep and awake states, this brain area is characterized by the presence of two distinct dynamical regimes: in the sleep state activity is dominated by spindle oscillations (7-15 Hz) weakly affected by external stimuli, while in the awake state the activity is primarily driven by external stimuli. Here we develop a simple and computationally efficient model of the thalamus that exhibits two dynamical regimes with different information-processing capabilities, and study the transition between them. The network model includes glutamatergic thalamocortical (TC) relay neurons and GABAergic reticular (RE) neurons described by adaptive integrate-and-fire models in which spikes are induced by either depolarization or hyperpolarization rebound. We found a range of connectivity conditions under which the thalamic network composed of these neurons displays the two aforementioned dynamical regimes. Our results show that TC-RE loops generate spindle-like oscillations and that a minimum level of clustering (i.e. local connectivity density) in the RE-RE connections is necessary for the coexistence of the two regimes. We also observe that the transition between the two regimes occurs when the external excitatory input on TC neurons (mimicking sensory stimulation) is large enough to cause a significant fraction of them to switch from hyperpolarization-rebound-driven firing to depolarization-driven firing. Overall, our model gives a novel and clear description of the role that the two types of neurons and their connectivity play in the dynamical regimes observed in the thalamus, and in the transition between them. These results pave the way for the development of efficient models of the transmission of sensory information from periphery to cortex. PMID:27598260
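
    A minimal sketch of hyperpolarization-rebound firing in an adaptive integrate-and-fire neuron follows: during a hyperpolarizing pulse, the subthreshold adaptation current relaxes to a negative value, and on release it depolarizes the cell past threshold. Parameters are illustrative, not those of the paper's TC or RE neurons.

        import numpy as np

        dt, steps = 0.1, 6000              # ms per step, 600 ms total
        C, gL, EL = 100.0, 10.0, -65.0     # pF, nS, mV
        Vth, Vreset = -50.0, -60.0         # mV
        a, tau_w = 20.0, 200.0             # nS, ms (subthreshold adaptation)

        V, w, spikes = EL, 0.0, []
        for t in range(steps):
            I = -300.0 if 1000 <= t < 4000 else 0.0   # hyperpolarizing pulse (pA)
            V += dt / C * (-gL * (V - EL) - w + I)
            w += dt / tau_w * (a * (V - EL) - w)      # w tracks V below EL -> w < 0
            if V >= Vth:                              # rebound spikes after release
                spikes.append(round(t * dt, 1))
                V = Vreset
        print("spike times (ms):", spikes)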

  17. Predictive Coding of Dynamical Variables in Balanced Spiking Networks

    PubMed Central

    Boerlin, Martin; Machens, Christian K.; Denève, Sophie

    2013-01-01

    Two observations about the cortex have puzzled neuroscientists for a long time. First, neural responses are highly variable. Second, the level of excitation and inhibition received by each neuron is tightly balanced at all times. Here, we demonstrate that both properties are necessary consequences of neural networks that represent information efficiently in their spikes. We illustrate this insight with spiking networks that represent dynamical variables. Our approach is based on two assumptions: We assume that information about dynamical variables can be read out linearly from neural spike trains, and we assume that neurons only fire a spike if that improves the representation of the dynamical variables. Based on these assumptions, we derive a network of leaky integrate-and-fire neurons that is able to implement arbitrary linear dynamical systems. We show that the membrane voltage of the neurons is equivalent to a prediction error about a common population-level signal. Among other things, our approach allows us to construct an integrator network of spiking neurons that is robust against many perturbations. Most importantly, neural variability in our networks cannot be equated to noise. Despite exhibiting the same single unit properties as widely used population code models (e.g. tuning curves, Poisson distributed spike trains), balanced networks are orders of magnitude more reliable. Our approach suggests that spikes do matter when considering how the brain computes, and that the reliability of cortical representations could have been strongly underestimated. PMID:24244113
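
    The spike rule can be made concrete in a few lines: a neuron fires only if its spike reduces the squared error between the signal and a linear readout of filtered spike trains, which makes its membrane potential a prediction error. The sketch below is a simplified one-dimensional version of such a scheme with assumed weights and time constants, not the paper's full derivation.

        import numpy as np

        rng = np.random.default_rng(3)
        dt, tau, N = 0.1, 10.0, 20            # ms, readout time constant, neurons
        Gamma = rng.choice([-0.1, 0.1], N)    # linear decoding weights
        thresh = Gamma ** 2 / 2               # spike only if it reduces the error

        T = 5000
        x = np.sin(np.arange(T) * dt / 20.0)  # signal to represent
        r = np.zeros(N)                       # filtered spike trains
        mse = 0.0
        for t in range(T):
            V = Gamma * (x[t] - Gamma @ r)    # prediction-error "voltages"
            i = int(np.argmax(V - thresh))
            if V[i] > thresh[i]:              # greedy: at most one spike per step
                r[i] += 1.0
            r -= dt / tau * r                 # leaky decay of the readout
            mse += (x[t] - Gamma @ r) ** 2 / T
        print("mean squared tracking error:", mse)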

  18. Cultured networks of excitatory projection neurons and inhibitory interneurons for studying human cortical neurotoxicity

    PubMed Central

    Xu, Jin-Chong; Fan, Jing; Wang, Xueqing; Eacker, Stephen M.; Kam, Tae-In; Chen, Li; Yin, Xiling; Zhu, Juehua; Chi, Zhikai; Jiang, Haisong; Chen, Rong; Dawson, Ted M.; Dawson, Valina L.

    2017-01-01

    Translating neuroprotective treatments from discovery in cell and animal models to the clinic has proven challenging. To reduce the gap between basic studies of neurotoxicity and neuroprotection and clinically relevant therapies, we developed a human cortical neuron culture system from human embryonic stem cells (ESCs) or induced pluripotent stem cells (iPSCs) that generated both excitatory and inhibitory neuronal networks resembling the composition of the human cortex. This methodology used timed administration of retinoic acid (RA) to FOXG1 neural precursor cells, leading to differentiation of neuronal populations representative of the six cortical layers with both excitatory and inhibitory neuronal networks that were functional and homeostatically stable. In human cortical neuron cultures, excitotoxicity or ischemia due to oxygen and glucose deprivation led to cell death that was dependent on N-methyl-D-aspartate (NMDA) receptors, nitric oxide (NO), and poly(ADP-ribose) polymerase (PARP), a cell death pathway designated parthanatos to distinguish it from apoptosis, necroptosis and other forms of cell death. Neuronal cell death was attenuated by PARP inhibitors that are currently in clinical trials for cancer treatment. This culture system provides a new platform for the study of human cortical neurotoxicity and suggests that PARP inhibitors may be useful for ameliorating excitotoxic and ischemic cell death in human neurons. PMID:27053772

  19. [Network of plastic neurons capable of forming conditioned reflexes ("membrane" model of learning)].

    PubMed

    Litvinov, E G; Frolov, A A

    1978-01-01

    A simple neuronal network model is suggested that is able to form conditioned reflexes through changes in neuronal excitability. The model rests on three main assumptions: (a) formation of the conditioned reflex reduces the firing threshold of the neurons on which the conditioned and reinforcement stimuli converge; (b) a neuron's threshold has only two possible states, initial and final, identical for all cells, and the threshold may change only once, from the initial value to the final one; (c) an isomorphic relation may be introduced between any pair of arbitrary stimuli and some subset of the network's neurons, such that any two pairs differing in at least one stimulus map onto distinct subsets of convergent neurons. A stochastically organized neuronal network was used to analyze the model. The considerable information capacity of the network suggests that conditioned reflexes can be formed on the basis of changes in nerve-cell excitability. The efficiency of the model turns out to be comparable to that of well-known models in which conditioning is formed through modification of synapses.

  20. On a phase diagram for random neural networks with embedded spike timing dependent plasticity.

    PubMed

    Turova, Tatyana S; Villa, Alessandro E P

    2007-01-01

    This paper presents an original mathematical framework based on graph theory, a first attempt to investigate the dynamics of a model of neural networks with embedded spike timing dependent plasticity. The neurons correspond to integrate-and-fire units located at the vertices of a finite subset of a 2D lattice. There are two types of vertices, corresponding to the inhibitory and the excitatory neurons. The edges are directed and labelled by the discrete values of the synaptic strength. We assume that there is an initial firing pattern corresponding to a subset of units that generate a spike; the number of externally activated vertices is a small fraction of the entire network. The model presented here describes how such a pattern propagates throughout the network as a random walk on a graph. Several results are compared with computational simulations, and new data are presented for identifying critical parameters of the model.

  1. Noise focusing in neuronal tissues: Symmetry breaking and localization in excitable networks with quenched disorder

    NASA Astrophysics Data System (ADS)

    Orlandi, Javier G.; Casademunt, Jaume

    2017-05-01

    We introduce a coarse-grained stochastic model for the spontaneous activity of neuronal cultures to explain the phenomenon of noise focusing, which entails localization of the noise activity in excitable networks with metric correlations. The system is modeled as a continuum excitable medium with a state-dependent spatial coupling that accounts for the dynamics of synaptic connections. The most salient feature is the emergence at the mesoscale of a vector field V(r), which acts as an advective carrier of the noise. This entails an explicit symmetry breaking of isotropy and homogeneity that stems from the amplification of the quenched fluctuations of the network by the activity avalanches, concomitant with the excitable dynamics. We discuss the microscopic interpretation of V(r) and propose an explicit construction of it. The coarse-grained model shows excellent agreement with simulations at the network level. The generic nature of the observed phenomena is discussed.

  2. Memristor-based neural networks: Synaptic versus neuronal stochasticity

    NASA Astrophysics Data System (ADS)

    Naous, Rawan; AlShedivat, Maruan; Neftci, Emre; Cauwenberghs, Gert; Salama, Khaled Nabil

    2016-11-01

    In neuromorphic circuits, stochasticity in the cortex can be mapped onto either the synaptic or the neuronal components. The hardware emulation of such stochastic neural networks is currently being extensively studied using resistive memories, or memristors. The ionic processes involved in the underlying switching behavior of memristive elements are considered the main source of stochasticity in their operation. Building on this inherent variability, the memristor is incorporated into abstract models of stochastic neurons and synapses, and two approaches to stochastic neural networks are investigated. Beyond size and area, the main points of comparison between the two approaches, and of where the memristor best fits in, are their impact on system performance in terms of accuracy, recognition rates, and learning.
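
    The dichotomy can be illustrated abstractly: stochasticity placed on the synapses (Bernoulli transmission, standing in for stochastic memristive switching) versus on the neuron (an escape-noise firing threshold). The toy integrator below is a conceptual Python sketch of the two placements, not a device-level memristor model.

        import numpy as np

        rng = np.random.default_rng(9)
        N, steps, p_release = 100, 1000, 0.5
        w = rng.normal(0.0, 0.2, N)               # synaptic weights
        x = rng.integers(0, 2, (steps, N))        # presynaptic spike trains

        def run(stochastic_synapse, stochastic_neuron):
            V, n_out = 0.0, 0
            for t in range(steps):
                gate = (rng.random(N) < p_release) if stochastic_synapse else 1.0
                V = 0.9 * V + float(np.sum(w * gate * x[t]))
                if stochastic_neuron:
                    p_fire = 1.0 / (1.0 + np.exp(-(V - 1.0)))   # escape noise
                else:
                    p_fire = 1.0 if V > 1.0 else 0.0            # hard threshold
                if rng.random() < p_fire:
                    n_out += 1
                    V = 0.0
            return n_out

        print("synaptic stochasticity :", run(True, False), "output spikes")
        print("neuronal stochasticity :", run(False, True), "output spikes")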

  3. Noise Tolerance of Attractor and Feedforward Memory Models

    PubMed Central

    Lim, Sukbin; Goldman, Mark S.

    2017-01-01

    In short-term memory networks, transient stimuli are represented by patterns of neural activity that persist long after stimulus offset. Here, we compare the performance of two prominent classes of memory networks, feedback-based attractor networks and feedforward networks, in conveying information about the amplitude of a briefly presented stimulus in the presence of Gaussian noise. Using Fisher information as a metric of memory performance, we find that the optimal form of network architecture depends strongly on assumptions about the forms of nonlinearities in the network. For purely linear networks, we find that feedforward networks outperform attractor networks because noise is continually removed from feedforward networks when signals exit the network; as a result, feedforward networks can amplify signals they receive faster than noise accumulates over time. By contrast, attractor networks must operate in a signal-attenuating regime to avoid the buildup of noise. However, if the amplification of signals is limited by a finite dynamic range of neuronal responses or if noise is reset at the time of signal arrival, as suggested by recent experiments, we find that attractor networks can outperform feedforward ones. Under a simple model in which neurons have a finite dynamic range, we find that the optimal attractor networks are forgetful if there is no mechanism for noise reduction with signal arrival but nonforgetful (perfect integrators) in the presence of a strong reset mechanism. Furthermore, we find that the maximal Fisher information for the feedforward and attractor networks exhibits power law decay as a function of time and scales linearly with the number of neurons. These results highlight prominent factors that lead to trade-offs in the memory performance of networks with different architectures and constraints, and suggest conditions under which attractor or feedforward networks may be best suited to storing information about previous stimuli. PMID:22091664
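
    The linear-regime comparison can be sketched numerically: a feedforward chain amplifies the signal through a fixed number of stages and then stops accumulating noise, whereas a leaky attractor keeps integrating noise for as long as it holds the memory. Gains, noise level, and durations below are illustrative only.

        import numpy as np

        rng = np.random.default_rng(4)
        signal, sigma, trials = 1.0, 0.5, 5000

        # Feedforward chain: K stages, each amplifying by A and adding fresh noise.
        A, K = 1.5, 6
        ff = np.full(trials, signal)
        for _ in range(K):
            ff = A * ff + sigma * rng.normal(size=trials)

        # Leaky attractor: the signal enters once; noise is injected every step.
        lam, steps = 0.98, 60
        at = np.full(trials, signal)
        for _ in range(steps):
            at = lam * at + sigma * rng.normal(size=trials)

        for name, out in [("feedforward", ff), ("attractor", at)]:
            print(f"{name}: SNR = {out.mean() ** 2 / out.var():.2f}")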

  4. SMN is required for sensory-motor circuit function in Drosophila

    PubMed Central

    Imlach, Wendy L.; Beck, Erin S.; Choi, Ben Jiwon; Lotti, Francesco; Pellizzoni, Livio; McCabe, Brian D.

    2012-01-01

    Spinal muscular atrophy (SMA) is a lethal human disease characterized by motor neuron dysfunction and muscle deterioration due to depletion of the ubiquitous Survival Motor Neuron (SMN) protein. Drosophila SMN mutants have reduced muscle size and defective locomotion, motor rhythm and motor neuron neurotransmission. Unexpectedly, restoration of SMN in either muscles or motor neurons did not alter these phenotypes. Instead, SMN must be expressed in proprioceptive neurons and interneurons in the motor circuit to non-autonomously correct defects in motor neurons and muscles. SMN depletion disrupts the motor system subsequent to circuit development and can be mimicked by the inhibition of motor network function. Furthermore, increasing motor circuit excitability by genetic or pharmacological inhibition of K+ channels can correct SMN-dependent phenotypes. These results establish sensory-motor circuit dysfunction as the origin of motor system deficits in this SMA model and suggest that enhancement of motor neural network activity could ameliorate the disease. PMID:23063130

  5. Propagating waves can explain irregular neural dynamics.

    PubMed

    Keane, Adam; Gong, Pulin

    2015-01-28

    Cortical neurons in vivo fire quite irregularly. Previous studies about the origin of such irregular neural dynamics have given rise to two major models: a balanced excitation and inhibition model, and a model of highly synchronized synaptic inputs. To elucidate the network mechanisms underlying synchronized synaptic inputs and account for irregular neural dynamics, we investigate a spatially extended, conductance-based spiking neural network model. We show that propagating wave patterns with complex dynamics emerge from the network model. These waves sweep past neurons, to which they provide highly synchronized synaptic inputs. On the other hand, these patterns only emerge from the network with balanced excitation and inhibition; our model therefore reconciles the two major models of irregular neural dynamics. We further demonstrate that the collective dynamics of propagating wave patterns provides a mechanistic explanation for a range of irregular neural dynamics, including the variability of spike timing, slow firing rate fluctuations, and correlated membrane potential fluctuations. In addition, in our model, the distributions of synaptic conductance and membrane potential are non-Gaussian, consistent with recent experimental data obtained using whole-cell recordings. Our work therefore relates the propagating waves that have been widely observed in the brain to irregular neural dynamics. These results demonstrate that neural firing activity, although appearing highly disordered at the single-neuron level, can form dynamical coherent structures, such as propagating waves at the population level.

  6. Identification of neuronal network properties from the spectral analysis of calcium imaging signals in neuronal cultures.

    PubMed

    Tibau, Elisenda; Valencia, Miguel; Soriano, Jordi

    2013-01-01

    Neuronal networks in vitro are prominent systems to study the development of connections in living neuronal networks and the interplay between connectivity, activity and function. These cultured networks show rich spontaneous activity that evolves concurrently with the connectivity of the underlying network. In this work we monitor the development of neuronal cultures, and record their activity using calcium fluorescence imaging. We use spectral analysis to characterize global dynamical and structural traits of the neuronal cultures. We first observe that the power spectrum can be used as a signature of the state of the network, for instance when inhibition is active or silent, as well as a measure of the network's connectivity strength. Second, the power spectrum identifies prominent developmental changes in the network such as the GABAA switch. And third, the analysis of the spatial distribution of the spectral density, in experiments with a controlled disintegration of the network through CNQX, an antagonist of AMPA glutamate receptors on excitatory neurons, reveals the existence of communities of strongly connected, highly active neurons that display synchronous oscillations. Our work illustrates the value of spectral analysis for the study of in vitro networks, and its potential use as a network-state indicator, for instance to compare healthy and diseased neuronal networks.
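
    A minimal version of such an analysis, assuming scipy's Welch estimator and a synthetic trace standing in for a calcium fluorescence signal, looks like this:

        import numpy as np
        from scipy.signal import welch

        fs = 20.0                                   # imaging rate (Hz)
        t = np.arange(0, 600, 1 / fs)               # 10-minute recording
        trace = np.sin(2 * np.pi * 0.2 * t) + 0.5 * np.random.randn(t.size)

        freqs, psd = welch(trace, fs=fs, nperseg=1024)
        print(f"dominant network frequency: {freqs[np.argmax(psd)]:.2f} Hz")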

  7. Nonlinear functional approximation with networks using adaptive neurons

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1992-01-01

    A novel mathematical framework for the rapid learning of nonlinear mappings and topological transformations is presented. It is based on allowing the neuron's parameters to adapt as a function of learning. This fully recurrent adaptive neuron model (ANM) has been successfully applied to complex nonlinear function approximation problems such as the highly degenerate inverse kinematics problem in robotics.

  8. Signal propagation and logic gating in networks of integrate-and-fire neurons.

    PubMed

    Vogels, Tim P; Abbott, L F

    2005-11-16

    Transmission of signals within the brain is essential for cognitive function, but it is not clear how neural circuits support reliable and accurate signal propagation over a sufficiently large dynamic range. Two modes of propagation have been studied: synfire chains, in which synchronous activity travels through feedforward layers of a neuronal network, and the propagation of fluctuations in firing rate across these layers. In both cases, a sufficient amount of noise, which was added to previous models from an external source, had to be included to support stable propagation. Sparse, randomly connected networks of spiking model neurons can generate chaotic patterns of activity. We investigate whether this activity, which is a more realistic noise source, is sufficient to allow for signal transmission. We find that, for rate-coded signals but not for synfire chains, such networks support robust and accurate signal reproduction through up to six layers if appropriate adjustments are made in synaptic strengths. We investigate the factors affecting transmission and show that multiple signals can propagate simultaneously along different pathways. Using this feature, we show how different types of logic gates can arise within the architecture of the random network through the strengthening of specific synapses.

  9. NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors.

    PubMed

    Cheung, Kit; Schultz, Simon R; Luk, Wayne

    2015-01-01

    NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation to deliver optimized performance, such as the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current or conductance based neuronal models such as integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve a real-time performance of 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times the speed of an 8-core processor, or 2.83 times the speed of GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation.

  10. NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors

    PubMed Central

    Cheung, Kit; Schultz, Simon R.; Luk, Wayne

    2016-01-01

    NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation to deliver optimized performance, such as the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current or conductance based neuronal models such as integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve a real-time performance of 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times the speed of an 8-core processor, or 2.83 times the speed of GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation. PMID:26834542

  11. The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code.

    PubMed

    Kunkel, Susanne; Schenck, Wolfram

    2017-01-01

    NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling.

  12. The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code

    PubMed Central

    Kunkel, Susanne; Schenck, Wolfram

    2017-01-01

    NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling. PMID:28701946

  13. Network reconfiguration and neuronal plasticity in rhythm-generating networks.

    PubMed

    Koch, Henner; Garcia, Alfredo J; Ramirez, Jan-Marino

    2011-12-01

    Neuronal networks are highly plastic and reconfigure in a state-dependent manner. Plasticity at the network level emerges through multiple intrinsic and synaptic membrane properties that imbue neurons and their interactions with numerous nonlinear properties. These properties are continuously regulated by neuromodulators and homeostatic mechanisms that are critical not only to maintain network stability but also to adapt networks, in both the short and the long term, to changes in behavioral, developmental, metabolic, and environmental conditions. This review provides concrete examples from neuronal networks in invertebrates and vertebrates, and illustrates that the concepts and rules that govern neuronal networks and behaviors are universal.

  14. Contrast normalization contributes to a biologically-plausible model of receptive-field development in primary visual cortex (V1)

    PubMed Central

    Willmore, Ben D.B.; Bulstrode, Harry; Tolhurst, David J.

    2012-01-01

    Neuronal populations in the primary visual cortex (V1) of mammals exhibit contrast normalization. Neurons that respond strongly to simple visual stimuli – such as sinusoidal gratings – respond less well to the same stimuli when they are presented as part of a more complex stimulus which also excites other, neighboring neurons. This phenomenon is generally attributed to generalized patterns of inhibitory connections between nearby V1 neurons. The Bienenstock, Cooper and Munro (BCM) rule is a neural network learning rule that, when trained on natural images, produces model neurons which, individually, have many tuning properties in common with real V1 neurons. However, when viewed as a population, a BCM network is very different from V1 – each member of the BCM population tends to respond to the same dominant features of visual input, producing an incomplete, highly redundant code for visual information. Here, we demonstrate that, by adding contrast normalization into the BCM rule, we arrive at a neurally-plausible Hebbian learning rule that can learn an efficient sparse, overcomplete representation that is a better model for stimulus selectivity in V1. This suggests that one role of contrast normalization in V1 is to guide the neonatal development of receptive fields, so that neurons respond to different features of visual input. PMID:22230381
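
    For concreteness, here is one way to combine the BCM rule with divisive contrast normalization in Python: each input patch is divided by its own contrast energy before the standard BCM update, whose sliding threshold tracks the squared response. The exact formulation in the paper may differ, and all parameters are illustrative.

        import numpy as np

        rng = np.random.default_rng(5)
        D, eta, tau_theta = 16, 1e-3, 100.0
        w = rng.normal(0.0, 0.1, D)                  # synaptic weights
        theta = 1.0                                  # sliding modification threshold

        for step in range(20000):
            x = rng.normal(0.0, 1.0, D)              # stand-in for an image patch
            x /= np.linalg.norm(x) + 1e-6            # contrast normalization
            y = max(w @ x, 0.0)                      # rectified response
            w += eta * y * (y - theta) * x           # BCM: potentiate only if y > theta
            theta += (y ** 2 - theta) / tau_theta    # threshold tracks <y^2>

        print("final modification threshold:", theta)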

  15. Intrinsic Cellular Properties and Connectivity Density Determine Variable Clustering Patterns in Randomly Connected Inhibitory Neural Networks

    PubMed Central

    Rich, Scott; Booth, Victoria; Zochowski, Michal

    2016-01-01

    The plethora of inhibitory interneurons in the hippocampus and cortex plays a pivotal role in generating rhythmic activity by clustering and synchronizing cell firing. Results of our simulations demonstrate that both the intrinsic cellular properties of neurons and the degree of network connectivity affect the characteristics of clustered dynamics exhibited in randomly connected, heterogeneous inhibitory networks. We quantify intrinsic cellular properties by the neuron's current-frequency relation (IF curve) and Phase Response Curve (PRC), a measure of how perturbations given at various phases of a neuron's firing cycle affect subsequent spike timing. We analyze network bursting properties of networks of neurons with Type I or Type II properties in both excitability and PRC profile: Type I PRCs show strictly phase advances, and Type I IF curves exhibit frequencies arbitrarily close to zero at firing threshold; Type II PRCs display both phase advances and delays, and Type II IF curves have a non-zero frequency at threshold. Type II neurons whose properties arise with or without an M-type adaptation current are considered. We analyze network dynamics under different levels of cellular heterogeneity and as intrinsic cellular firing frequency and the time scale of decay of synaptic inhibition are varied. Many of the dynamics exhibited by these networks diverge from the predictions of the interneuron network gamma (ING) mechanism, as well as from results in all-to-all connected networks. Our results show that randomly connected networks of Type I neurons synchronize into a single cluster of active neurons, while networks of Type II neurons organize into two mutually exclusive clusters segregated by the cells' intrinsic firing frequencies. Networks of Type II neurons containing the adaptation current behave similarly to networks of either Type I or Type II neurons depending on network parameters; however, the adaptation current creates differences in the cluster dynamics compared to those in networks of Type I or Type II neurons. To understand these results, we compute neuronal PRCs calculated with a perturbation matching the profile of the synaptic current in our networks. Differences in the profiles of these PRCs across the different neuron types reveal mechanisms underlying the divergent network dynamics. PMID:27812323
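
    Measuring a PRC is conceptually simple, as this sketch shows for a leaky integrate-and-fire neuron driven to fire periodically: a small depolarizing kick is delivered at a chosen phase and the advance of the next spike is recorded. This toy neuron shows the Type I signature (phase advances only); all parameters are assumptions.

        import numpy as np

        dt, tau, I, Vth, Vr = 0.01, 10.0, 1.5, 1.0, 0.0

        def time_to_spike(kick_at=None, kick=0.05):
            V, t = Vr, 0.0
            while V < Vth:
                if kick_at is not None and abs(t - kick_at) < dt / 2:
                    V += kick                    # brief depolarizing perturbation
                V += dt / tau * (-V + I)
                t += dt
            return t

        T0 = time_to_spike()                     # unperturbed firing period
        phases = np.linspace(0.05, 0.95, 10)
        prc = [(T0 - time_to_spike(kick_at=p * T0)) / T0 for p in phases]
        print(np.round(prc, 3))                  # all positive: advances only (Type I)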

  16. PyNN: A Common Interface for Neuronal Network Simulators.

    PubMed

    Davison, Andrew P; Brüderle, Daniel; Eppler, Jochen; Kremkow, Jens; Muller, Eilif; Pecevski, Dejan; Perrinet, Laurent; Yger, Pierre

    2008-01-01

    Computational neuroscience has produced a diversity of software for simulations of networks of spiking neurons, with both negative and positive consequences. On the one hand, each simulator uses its own programming or configuration language, leading to considerable difficulty in porting models from one simulator to another. This impedes communication between investigators and makes it harder to reproduce and build on the work of others. On the other hand, simulation results can be cross-checked between different simulators, giving greater confidence in their correctness, and each simulator has different optimizations, so the most appropriate simulator can be chosen for a given modelling task. A common programming interface to multiple simulators would reduce or eliminate the problems of simulator diversity while retaining the benefits. PyNN is such an interface, making it possible to write a simulation script once, using the Python programming language, and run it without modification on any supported simulator (currently NEURON, NEST, PCSIM, Brian and the Heidelberg VLSI neuromorphic hardware). PyNN increases the productivity of neuronal network modelling by providing high-level abstraction, by promoting code sharing and reuse, and by providing a foundation for simulator-agnostic analysis, visualization and data-management tools. PyNN increases the reliability of modelling studies by making it much easier to check results on multiple simulators. PyNN is open-source software and is available from http://neuralensemble.org/PyNN.
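
    A minimal PyNN script has the following shape; only the import line changes between backends. The network below is a toy example, and minor API details vary between PyNN versions.

        import pyNN.nest as sim       # or pyNN.neuron, etc.

        sim.setup(timestep=0.1)

        pre = sim.Population(100, sim.SpikeSourcePoisson(rate=20.0))
        post = sim.Population(50, sim.IF_cond_exp(tau_m=20.0, v_thresh=-50.0))

        sim.Projection(pre, post,
                       sim.FixedProbabilityConnector(p_connect=0.2),
                       sim.StaticSynapse(weight=0.005, delay=1.0),
                       receptor_type='excitatory')

        post.record('spikes')
        sim.run(1000.0)               # simulate for one second (times in ms)

        segment = post.get_data().segments[0]
        print("spike trains recorded:", len(segment.spiketrains))
        sim.end()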

  17. Dynamical System Approach for Edge Detection Using Coupled FitzHugh-Nagumo Neurons.

    PubMed

    Li, Shaobai; Dasmahapatra, Srinandan; Maharatna, Koushik

    2015-12-01

    The prospect of emulating the impressive computational capabilities of biological systems has led to considerable interest in the design of analog circuits that are potentially implementable in very large scale integration CMOS technology and are guided by biologically motivated models. For example, simple image processing tasks, such as the detection of edges in binary and grayscale images, have been performed by networks of FitzHugh-Nagumo-type neurons using reaction-diffusion models. However, in these studies, the one-to-one mapping of image pixels to component neurons makes the size of the network a critical factor in any such implementation. In this paper, we develop a simplified version of the employed reaction-diffusion model in three steps. In the first step, we perform a detailed study to locate the excitation threshold of the model, using continuous Lyapunov exponents from dynamical systems theory. Furthermore, we render the diffusion in the system anisotropic, with the degree of anisotropy set by the gradients of grayscale values in each image. The final step involves a simplification of the model, achieved by eliminating the terms that couple the membrane potentials of adjacent neurons. We apply our technique to detect edges in data sets of artificially generated and real images, and we demonstrate that the performance is as good as, if not better than, that of previous methods without increasing the size of the network.
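
    For reference, a single FitzHugh-Nagumo unit with standard textbook parameters can be simulated as below; coupling neighboring units through the fast variable v (one unit per pixel, diffusively) yields the reaction-diffusion networks used for edge detection.

        import numpy as np

        a, b, eps, dt = 0.7, 0.8, 0.08, 0.05
        v, w = -1.0, -0.5
        trace = []
        for step in range(10000):
            I = 0.5 if step > 2000 else 0.0        # external stimulus current
            v += dt * (v - v ** 3 / 3 - w + I)     # fast activator variable
            w += dt * eps * (v + a - b * w)        # slow recovery variable
            trace.append(v)
        print("spiking after stimulus onset, max v:", round(max(trace), 2))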

  18. PyNN: A Common Interface for Neuronal Network Simulators

    PubMed Central

    Davison, Andrew P.; Brüderle, Daniel; Eppler, Jochen; Kremkow, Jens; Muller, Eilif; Pecevski, Dejan; Perrinet, Laurent; Yger, Pierre

    2008-01-01

    Computational neuroscience has produced a diversity of software for simulations of networks of spiking neurons, with both negative and positive consequences. On the one hand, each simulator uses its own programming or configuration language, leading to considerable difficulty in porting models from one simulator to another. This impedes communication between investigators and makes it harder to reproduce and build on the work of others. On the other hand, simulation results can be cross-checked between different simulators, giving greater confidence in their correctness, and each simulator has different optimizations, so the most appropriate simulator can be chosen for a given modelling task. A common programming interface to multiple simulators would reduce or eliminate the problems of simulator diversity while retaining the benefits. PyNN is such an interface, making it possible to write a simulation script once, using the Python programming language, and run it without modification on any supported simulator (currently NEURON, NEST, PCSIM, Brian and the Heidelberg VLSI neuromorphic hardware). PyNN increases the productivity of neuronal network modelling by providing high-level abstraction, by promoting code sharing and reuse, and by providing a foundation for simulator-agnostic analysis, visualization and data-management tools. PyNN increases the reliability of modelling studies by making it much easier to check results on multiple simulators. PyNN is open-source software and is available from http://neuralensemble.org/PyNN. PMID:19194529

  19. High-Degree Neurons Feed Cortical Computations

    PubMed Central

    Timme, Nicholas M.; Ito, Shinya; Shimono, Masanori; Yeh, Fang-Chin; Litke, Alan M.; Beggs, John M.

    2016-01-01

    Recent work has shown that functional connectivity among cortical neurons is highly varied, with a small percentage of neurons having many more connections than others. Also, recent theoretical developments now make it possible to quantify how neurons modify information from the connections they receive. Therefore, it is now possible to investigate how information modification, or computation, depends on the number of connections a neuron receives (in-degree) or sends out (out-degree). To do this, we recorded the simultaneous spiking activity of hundreds of neurons in cortico-hippocampal slice cultures using a high-density 512-electrode array. This combination of preparation and recording method produced large numbers of neurons recorded at temporal and spatial resolutions that are not currently available in any in vivo recording system. We utilized transfer entropy (a well-established method for detecting linear and nonlinear interactions in time series) and the partial information decomposition (a powerful, recently developed tool for dissecting multivariate information processing into distinct parts) to quantify computation between neurons where information flows converged. We found that computations did not occur equally in all neurons throughout the networks. Surprisingly, neurons that computed large amounts of information tended to receive connections from high out-degree neurons. However, the in-degree of a neuron was not related to the amount of information it computed. To gain insight into these findings, we developed a simple feedforward network model. We found that a degree-modified Hebbian wiring rule best reproduced the pattern of computation and degree correlation results seen in the real data. Interestingly, this rule also maximized signal propagation in the presence of network-wide correlations, suggesting a mechanism by which cortex could deal with common random background input. These are the first results to show that the extent to which a neuron modifies incoming information streams depends on its topological location in the surrounding functional network. PMID:27159884
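
    The core quantity, transfer entropy, can be computed directly from spike-train histograms. The sketch below uses a history length of one bin for brevity (the study uses richer embeddings) and checks it on a pair of binary trains where one copies the other with a one-bin delay.

        import numpy as np

        def transfer_entropy(x, y):
            """TE from x to y in bits; x, y binary arrays; history length 1."""
            joint = np.zeros((2, 2, 2))
            for yn, yp, xp in zip(y[1:], y[:-1], x[:-1]):
                joint[yn, yp, xp] += 1
            joint /= joint.sum()
            te = 0.0
            for yn in (0, 1):
                for yp in (0, 1):
                    for xp in (0, 1):
                        p = joint[yn, yp, xp]
                        if p == 0.0:
                            continue
                        p_y_given_both = p / joint[:, yp, xp].sum()
                        p_y_given_past = joint[yn, yp, :].sum() / joint[:, yp, :].sum()
                        te += p * np.log2(p_y_given_both / p_y_given_past)
            return te

        rng = np.random.default_rng(6)
        x = rng.integers(0, 2, 10000)
        y = np.roll(x, 1)                         # y copies x with one-bin delay
        print(transfer_entropy(x, y))             # ~1 bit: x drives y
        print(transfer_entropy(y, x))             # ~0 bits: no influence back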

  20. Classification capacity of a modular neural network implementing neurally inspired architecture and training rules.

    PubMed

    Poirazi, Panayiota; Neocleous, Costas; Pattichis, Costantinos S; Schizas, Christos N

    2004-05-01

    A three-layer neural network (NN) with a novel adaptive architecture has been developed. The hidden layer of the network consists of slabs of single neuron models, where neurons within a slab--but not between slabs--have the same type of activation function. The network activation functions in all three layers have adaptable parameters. The network was trained using a biologically inspired, guided-annealing learning rule on a variety of medical data. Good training/testing classification performance was obtained on all data sets tested. The performance achieved was comparable to that of SVM classifiers. It was shown that the adaptive network architecture, inspired by the modular organization often encountered in the mammalian cerebral cortex, can benefit classification performance.

  1. Aberrant within- and between-network connectivity of the mirror neuron system network and the mentalizing network in first episode psychosis.

    PubMed

    Choe, Eugenie; Lee, Tae Young; Kim, Minah; Hur, Ji-Won; Yoon, Youngwoo Bryan; Cho, Kang-Ik K; Kwon, Jun Soo

    2018-03-26

    It has been suggested that the mentalizing network and the mirror neuron system network support important social cognitive processes that are impaired in schizophrenia. However, the integrity and interaction of these two networks have not been sufficiently studied, and their effects on social cognition in schizophrenia remain unclear. Our study included 26 first-episode psychosis (FEP) patients and 26 healthy controls. We utilized resting-state functional connectivity to examine the a priori-defined mirror neuron system network and the mentalizing network and to assess the within- and between-network connectivities of the networks in FEP patients. We also assessed the correlation between resting-state functional connectivity measures and theory of mind performance. FEP patients showed altered within-network connectivity of the mirror neuron system network, and aberrant between-network connectivity between the mirror neuron system network and the mentalizing network. The within-network connectivity of the mirror neuron system network was noticeably correlated with theory of mind task performance in FEP patients. The integrity and interaction of the mirror neuron system network and the mentalizing network may be altered during the early stages of psychosis. Additionally, this study suggests that alterations in the integrity of the mirror neuron system network are highly related to deficient theory of mind in schizophrenia, and that this problem is present from the early stages of psychosis.

  2. Modularity Induced Gating and Delays in Neuronal Networks

    PubMed Central

    Shein-Idelson, Mark; Cohen, Gilad; Hanein, Yael

    2016-01-01

    Neural networks, despite their highly interconnected nature, exhibit distinctly localized and gated activation. Modularity, a distinctive feature of neural networks, has been recently proposed as an important parameter determining the manner by which networks support activity propagation. Here we use an engineered biological model, consisting of engineered rat cortical neurons, to study the role of modular topology in gating the activity between cell populations. We show that pairs of connected modules support conditional propagation (transmitting stronger bursts with higher probability), long delays and propagation asymmetry. Moreover, large modular networks manifest diverse patterns of both local and global activation. Blocking inhibition decreased activity diversity and replaced it with highly consistent transmission patterns. By independently controlling modularity and disinhibition, experimentally and in a model, we propose that modular topology is an important parameter affecting activation localization and is instrumental for population-level gating by disinhibition. PMID:27104350

  3. Hierarchical winner-take-all particle swarm optimization social network for neural model fitting.

    PubMed

    Coventry, Brandon S; Parthasarathy, Aravindakshan; Sommer, Alexandra L; Bartlett, Edward L

    2017-02-01

    Particle swarm optimization (PSO) has gained widespread use as a general mathematical programming paradigm and has seen use in a wide variety of optimization and machine learning problems. In this work, we introduce a new variant on the PSO social network and apply this method to the inverse problem of input parameter selection from recorded auditory neuron tuning curves. The topology of a PSO social network is a major contributor to optimization success. Here we propose a new social network which draws influence from winner-take-all coding found in visual cortical neurons. We show that the winner-take-all network performs exceptionally well on optimization problems with greater than 5 dimensions and runs at a lower iteration count compared to other PSO topologies. Finally, we show that this variant of PSO is able to recreate auditory frequency tuning curves and modulation transfer functions, making it a potentially useful tool for computational neuroscience models.
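
    For orientation, the standard particle swarm update is sketched below with a global-best social term; in the paper's variant, it is the winner-take-all social network that determines which particle exerts the social pull. The objective function and coefficients are illustrative.

        import numpy as np

        rng = np.random.default_rng(7)
        n, dim, iters = 30, 6, 500
        w_in, c1, c2 = 0.7, 1.5, 1.5          # inertia, cognitive, social weights

        f = lambda x: np.sum(x ** 2, axis=1)  # toy objective (sphere function)
        x = rng.uniform(-5.0, 5.0, (n, dim))
        v = np.zeros((n, dim))
        pbest, pbest_val = x.copy(), f(x)

        for _ in range(iters):
            gbest = pbest[np.argmin(pbest_val)]       # the swarm's "winner"
            r1, r2 = rng.random((2, n, dim))
            v = w_in * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = x + v
            vals = f(x)
            better = vals < pbest_val
            pbest[better], pbest_val[better] = x[better], vals[better]

        print("best objective value found:", pbest_val.min())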

  4. Dynamics of human subthalamic neuron phase-locking to motor and sensory cortical oscillations during movement.

    PubMed

    Lipski, Witold J; Wozny, Thomas A; Alhourani, Ahmad; Kondylis, Efstathios D; Turner, Robert S; Crammond, Donald J; Richardson, Robert Mark

    2017-09-01

    Coupled oscillatory activity recorded between sensorimotor regions of the basal ganglia-thalamocortical loop is thought to reflect information transfer relevant to movement. A neuronal firing-rate model of basal ganglia-thalamocortical circuitry, however, has dominated thinking about basal ganglia function for the past three decades, without knowledge of the relationship between basal ganglia single neuron firing and cortical population activity during movement itself. We recorded activity from 34 subthalamic nucleus (STN) neurons, simultaneously with cortical local field potentials and motor output, in 11 subjects with Parkinson's disease (PD) undergoing awake deep brain stimulator lead placement. STN firing demonstrated phase synchronization to both low- and high-beta-frequency cortical oscillations, and to the amplitude envelope of gamma oscillations, in motor cortex. We found that during movement, the magnitude of this synchronization was dynamically modulated in a phase-frequency-specific manner. Importantly, we found that phase synchronization was not correlated with changes in neuronal firing rate. Furthermore, we found that these relationships were not exclusive to motor cortex, because STN firing also demonstrated phase synchronization to both premotor and sensory cortex. The data indicate that models of basal ganglia function ultimately will need to account for the activity of populations of STN neurons that are bound in distinct functional networks with both motor and sensory cortices and code for movement parameters independent of changes in firing rate. NEW & NOTEWORTHY Current models of basal ganglia-thalamocortical networks do not adequately explain simple motor functions, let alone dysfunction in movement disorders. Our findings provide data that inform models of human basal ganglia function by demonstrating how movement is encoded by networks of subthalamic nucleus (STN) neurons via dynamic phase synchronization with cortex. The data also demonstrate, for the first time in humans, a mechanism through which the premotor and sensory cortices are functionally connected to the STN.
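
    Spike-field phase synchronization of this kind is typically quantified with a phase-locking value (PLV). The sketch below, on synthetic data and assuming scipy's filtering and Hilbert transform, band-passes a field signal in the beta range and measures the PLV of spike times to its instantaneous phase; it is an illustration, not the paper's exact estimator.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        fs = 1000.0
        t = np.arange(0, 30, 1 / fs)
        lfp = np.sin(2 * np.pi * 20 * t) + 0.5 * np.random.randn(t.size)

        b, a = butter(4, [13 / (fs / 2), 30 / (fs / 2)], btype="band")
        phase = np.angle(hilbert(filtfilt(b, a, lfp)))    # instantaneous beta phase

        # Synthetic spikes that prefer phases near zero (a phase-locked neuron).
        locked = (np.abs(phase) < 0.4) & (np.random.rand(t.size) < 0.01)
        spike_idx = np.where(locked)[0]
        plv = np.abs(np.mean(np.exp(1j * phase[spike_idx])))
        print(f"PLV = {plv:.2f} over {spike_idx.size} spikes")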

  5. Intrinsic and Extrinsic Neuromodulation of Olfactory Processing

    PubMed Central

    Lizbinski, Kristyn M.; Dacks, Andrew M.

    2018-01-01

    Neuromodulation is a ubiquitous feature of neural systems, allowing flexible, context specific control over network dynamics. Neuromodulation was first described in invertebrate motor systems and early work established a basic dichotomy for neuromodulation as having either an intrinsic origin (i.e., neurons that participate in network coding) or an extrinsic origin (i.e., neurons from independent networks). In this conceptual dichotomy, intrinsic sources of neuromodulation provide a “memory” by adjusting network dynamics based upon previous and ongoing activation of the network itself, while extrinsic neuromodulators provide the context of ongoing activity of other neural networks. Although this dichotomy has been thoroughly considered in motor systems, it has received far less attention in sensory systems. In this review, we discuss intrinsic and extrinsic modulation in the context of olfactory processing in invertebrate and vertebrate model systems. We begin by discussing presynaptic modulation of olfactory sensory neurons by local interneurons (LNs) as a mechanism for gain control based on ongoing network activation. We then discuss the cell-class specific effects of serotonergic centrifugal neurons on olfactory processing. Finally, we briefly discuss the integration of intrinsic and extrinsic neuromodulation (metamodulation) as an effective mechanism for exerting global control over olfactory network dynamics. The heterogeneous nature of neuromodulation is a recurring theme throughout this review as the effects of both intrinsic and extrinsic modulation are generally non-uniform. PMID:29375314

  6. Suprathreshold stochastic resonance in neural processing tuned by correlation.

    PubMed

    Durrant, Simon; Kang, Yanmei; Stocks, Nigel; Feng, Jianfeng

    2011-07-01

    Suprathreshold stochastic resonance (SSR) is examined in the context of integrate-and-fire neurons, with an emphasis on the role of correlation in the neuronal firing. We employed a model based on a network of spiking neurons which received synaptic inputs modeled by Poisson processes stimulated by a stepped input signal. The smoothed ensemble firing rate provided an output signal, and the mutual information between this signal and the input was calculated for networks with different noise levels and different numbers of neurons. It was found that an SSR effect was present in this context. We then examined a more biophysically plausible scenario where the noise was not controlled directly, but instead was tuned by the correlation between the inputs. The SSR effect remained present in this scenario with nonzero noise providing improved information transmission, and it was found that negative correlation between the inputs was optimal. Finally, an examination of SSR in the context of this model revealed its connection with more traditional stochastic resonance and showed a trade-off between suprathreshold and subthreshold components. We discuss these results in the context of existing empirical evidence concerning correlations in neuronal firing.
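
    The simplest setting that shows the effect is an ensemble of identical threshold units receiving the same input plus independent noise: the mutual information between a stepped input and the population spike count peaks at a nonzero noise level. The sketch below is that textbook configuration with illustrative parameters, not the paper's spiking network.

        import numpy as np

        rng = np.random.default_rng(8)
        N, trials = 15, 20000
        levels = np.linspace(-1.0, 1.0, 8)               # stepped input signal

        def mutual_information(sigma):
            theta = rng.choice(levels, trials)           # random input level
            noise = rng.normal(0.0, sigma, (trials, N))
            k = np.sum(theta[:, None] + noise > 0.0, axis=1)   # population count
            joint = np.zeros((levels.size, N + 1))
            for i, lev in enumerate(levels):
                joint[i] = np.bincount(k[theta == lev], minlength=N + 1)
            joint /= joint.sum()
            px = joint.sum(axis=1, keepdims=True)
            py = joint.sum(axis=0, keepdims=True)
            nz = joint > 0
            return np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz]))

        for sigma in (0.01, 0.3, 1.0, 3.0):              # MI peaks at moderate noise
            print(f"sigma={sigma}: MI = {mutual_information(sigma):.2f} bits")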

  7. Context-Dependent Encoding of Fear and Extinction Memories in a Large-Scale Network Model of the Basal Amygdala

    PubMed Central

    Vlachos, Ioannis; Herry, Cyril; Lüthi, Andreas; Aertsen, Ad; Kumar, Arvind

    2011-01-01

    The basal nucleus of the amygdala (BA) is involved in the formation of context-dependent conditioned fear and extinction memories. To understand the underlying neural mechanisms we developed a large-scale neuron network model of the BA, composed of excitatory and inhibitory leaky-integrate-and-fire neurons. Excitatory BA neurons received conditioned stimulus (CS)-related input from the adjacent lateral nucleus (LA) and contextual input from the hippocampus or medial prefrontal cortex (mPFC). We implemented a plasticity mechanism according to which CS and contextual synapses were potentiated if CS and contextual inputs temporally coincided on the afferents of the excitatory neurons. Our simulations revealed a differential recruitment of two distinct subpopulations of BA neurons during conditioning and extinction, mimicking the activation of experimentally observed cell populations. We propose that these two subgroups encode contextual specificity of fear and extinction memories, respectively. Mutual competition between them, mediated by feedback inhibition and driven by contextual inputs, regulates the activity in the central amygdala (CEA) thereby controlling amygdala output and fear behavior. The model makes multiple testable predictions that may advance our understanding of fear and extinction memories. PMID:21437238
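
    Our reading of the plasticity rule, as a schematic Python sketch (rates, bounds, and the learning rate are invented; the paper's implementation operates on spiking LIF neurons rather than binary time bins):

        import numpy as np

        rng = np.random.default_rng(1)
        steps, lr, w_max = 1000, 0.01, 1.0
        w_cs, w_ctx = 0.1, 0.1                # CS and contextual synaptic efficacies

        for _ in range(steps):
            cs_on = rng.random() < 0.2        # CS-related input from LA active?
            ctx_on = rng.random() < 0.3       # contextual input (HPC/mPFC) active?
            if cs_on and ctx_on:              # temporal coincidence -> potentiate both
                w_cs = min(w_cs + lr, w_max)
                w_ctx = min(w_ctx + lr, w_max)

        print(f"w_cs = {w_cs:.2f}, w_ctx = {w_ctx:.2f}")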

  8. Uniting functional network topology and oscillations in the fronto-parietal single unit network of behaving primates.

    PubMed

    Dann, Benjamin; Michaels, Jonathan A; Schaffelhofer, Stefan; Scherberger, Hansjörg

    2016-08-15

    The functional communication of neurons in cortical networks underlies higher cognitive processes. Yet, little is known about the organization of the single neuron network or its relationship to the synchronization processes that are essential for its formation. Here, we show that the functional single neuron network of three fronto-parietal areas during active behavior of macaque monkeys is highly complex. The network was closely connected (small-world) and consisted of functional modules spanning these areas. Surprisingly, the importance of different neurons to the network was highly heterogeneous, with a small number of neurons contributing strongly to the network function (hubs); these hub neurons were in turn strongly inter-connected with one another (rich-club). Examination of the network synchronization revealed that the identified rich-club consisted of neurons that were synchronized in the beta or low frequency range, whereas the remaining neurons were mostly synchronized in a non-oscillatory manner. Therefore, oscillatory synchrony may be a central communication mechanism for highly organized functional spiking networks.
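
    The graph-theoretic quantities named above (small-world structure, hubs, rich-club) are standard and can be computed with networkx, as in the Python sketch below; a random small-world surrogate stands in for the functional network estimated from spike data, which is not reproduced here.

        import networkx as nx

        G = nx.watts_strogatz_graph(n=200, k=8, p=0.1, seed=42)   # small-world surrogate

        print("clustering:", nx.average_clustering(G))
        print("path length:", nx.average_shortest_path_length(G))

        # Hubs: highest-degree nodes; rich-club: how densely high-degree nodes interconnect
        degrees = dict(G.degree())
        hubs = sorted(degrees, key=degrees.get, reverse=True)[:10]
        rc = nx.rich_club_coefficient(G, normalized=False)        # keyed by degree k
        print("top hubs:", hubs)
        print("rich-club coefficient at k=9:", rc.get(9))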

  9. Synchronization and Inter-Layer Interactions of Noise-Driven Neural Networks

    PubMed Central

    Yuniati, Anis; Mai, Te-Lun; Chen, Chi-Ming

    2017-01-01

    In this study, we used the Hodgkin-Huxley (HH) model of neurons to investigate the phase diagram of a developing single-layer neural network and that of a network consisting of two weakly coupled neural layers. These networks are noise driven and learn through spike-timing-dependent plasticity (STDP) or inverse-STDP rules. We described how these networks transitioned from a non-synchronous background activity state (BAS) to a synchronous firing state (SFS) as the network connectivity and the learning efficacy were varied. In particular, we studied the interaction between an SFS layer and a BAS layer, and investigated how synchronous firing dynamics was induced in the BAS layer. We further investigated the effect of the inter-layer interaction on a BAS-to-SFS repair mechanism by considering three types of neuron positioning (random, grid, and lognormal distributions) and two types of inter-layer connections (random and preferential connections). Among these scenarios, we concluded that the repair mechanism has the largest effect for a network with the lognormal neuron positioning and the preferential inter-layer connections. PMID:28197088
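
    For reference, a standard pair-based STDP rule of the kind the study varies is shown below in Python (the "inverse" rule simply flips the sign of the weight change); the time constant and amplitudes are generic textbook values, not taken from the paper.

        import numpy as np

        def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0, inverse=False):
            """Weight change for a spike-time difference dt = t_post - t_pre (ms)."""
            dw = a_plus * np.exp(-dt / tau) if dt > 0 else -a_minus * np.exp(dt / tau)
            return -dw if inverse else dw

        for dt in (-30.0, -5.0, 5.0, 30.0):
            print(f"dt={dt:+6.1f} ms  STDP: {stdp_dw(dt):+.4f}  inverse: {stdp_dw(dt, inverse=True):+.4f}")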

  10. Automatic Fitting of Spiking Neuron Models to Electrophysiological Recordings

    PubMed Central

    Rossant, Cyrille; Goodman, Dan F. M.; Platkiewicz, Jonathan; Brette, Romain

    2010-01-01

    Spiking models can accurately predict the spike trains produced by cortical neurons in response to somatically injected currents. Since the specific characteristics of the model depend on the neuron, a computational method is required to fit models to electrophysiological recordings. The fitting procedure can be very time-consuming, both in terms of computer simulations and code writing. We present algorithms to fit spiking models to electrophysiological data (time-varying input and spike trains) that can run in parallel on graphics processing units (GPUs). The model fitting library is interfaced with Brian, a neural network simulator in Python. If a GPU is present, the library uses just-in-time compilation to translate model equations into optimized code. Arbitrary models can then be defined at script level and run on the graphics card. This tool can be used to obtain empirically validated spiking models of neurons in various systems. We demonstrate its use on public data from the INCF Quantitative Single-Neuron Modeling 2009 competition by comparing the performance of a number of neuron spiking models. PMID:20224819
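
    As a deliberately simplified stand-in for the fitting task (the paper's library runs GPU-parallel optimization through Brian; here we merely grid-search two LIF parameters against a target spike train, scoring by coincidences within ±2 ms), consider the following Python sketch; every value in it is illustrative.

        import numpy as np

        dt, T = 0.1, 1000.0                             # time step and duration (ms)
        t = np.arange(0, T, dt)
        I = 1.5 + 0.5 * np.sin(2 * np.pi * t / 100.0)   # injected current (a.u.)

        def lif_spikes(tau, R, v_th=1.0):
            v, spikes = 0.0, []
            for i, ti in enumerate(t):
                v += dt * (-v + R * I[i]) / tau         # leaky integration
                if v >= v_th:                           # threshold crossing -> spike, reset
                    v = 0.0
                    spikes.append(ti)
            return np.array(spikes)

        target = lif_spikes(tau=10.0, R=0.8)            # pretend this is the recording

        def coincidences(s1, s2, win=2.0):
            return sum(np.any(np.abs(s2 - x) <= win) for x in s1)

        best = max(((tau, R) for tau in (5.0, 10.0, 20.0) for R in (0.6, 0.8, 1.0)),
                   key=lambda p: coincidences(lif_spikes(*p), target))
        print("best (tau, R):", best)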

  11. Neuronal avalanches of a self-organized neural network with active-neuron-dominant structure.

    PubMed

    Li, Xiumin; Small, Michael

    2012-06-01

    A neuronal avalanche is a form of spontaneous neuronal activity in which population event sizes obey a power-law distribution with an exponent of -3/2. It has been observed in the superficial layers of cortex both in vivo and in vitro. In this paper, we analyze the information transmission of a novel self-organized neural network with active-neuron-dominant structure. Neuronal avalanches can be observed in this network with appropriate input intensity. We find that the process of network learning via spike-timing dependent plasticity dramatically increases the complexity of the network structure, which finally self-organizes into active-neuron-dominant connectivity. Both the entropy of activity patterns and the complexity of their resulting post-synaptic inputs are maximized when the network dynamics propagate as neuronal avalanches. This emergent topology is beneficial for information transmission with high efficiency and may also account for the large information capacity of this network compared with alternative archetypal networks with different neural connectivity.
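
    A common recipe (not necessarily the authors' exact procedure) for extracting avalanches from population activity is sketched below in Python: bin the population spike counts, cut the series at empty bins, and take the summed activity of each contiguous burst as the avalanche size, whose distribution is then compared against p(s) ~ s^(-3/2). The Poisson surrogate data here will not itself produce the critical exponent; only the extraction method is illustrated.

        import numpy as np

        rng = np.random.default_rng(2)
        counts = rng.poisson(0.9, size=100_000)     # stand-in population spike counts per bin

        sizes, cur = [], 0
        for c in counts:
            if c > 0:
                cur += c                            # accumulate an ongoing avalanche
            elif cur > 0:                           # an empty bin closes the avalanche
                sizes.append(cur)
                cur = 0
        sizes = np.array(sizes)

        # crude log-log slope over the small-size range, to compare with -3/2
        s_vals, s_counts = np.unique(sizes, return_counts=True)
        mask = s_vals <= 30
        slope = np.polyfit(np.log(s_vals[mask]), np.log(s_counts[mask]), 1)[0]
        print(f"{sizes.size} avalanches, log-log slope ~ {slope:.2f}")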

  12. The Drosophila Clock Neuron Network Features Diverse Coupling Modes and Requires Network-wide Coherence for Robust Circadian Rhythms.

    PubMed

    Yao, Zepeng; Bennett, Amelia J; Clem, Jenna L; Shafer, Orie T

    2016-12-13

    In animals, networks of clock neurons containing molecular clocks orchestrate daily rhythms in physiology and behavior. However, how various types of clock neurons communicate and coordinate with one another to produce coherent circadian rhythms is not well understood. Here, we investigate clock neuron coupling in the brain of Drosophila and demonstrate that the fly's various groups of clock neurons display unique and complex coupling relationships to core pacemaker neurons. Furthermore, we find that coordinated free-running rhythms require molecular clock synchrony not only within the well-characterized lateral clock neuron classes but also between lateral clock neurons and dorsal clock neurons. These results uncover unexpected patterns of coupling in the clock neuron network and reveal that robust free-running behavioral rhythms require a coherence of molecular oscillations across most of the fly's clock neuron network. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.

  13. Brain without mind: Computer simulation of neural networks with modifiable neuronal interactions

    NASA Astrophysics Data System (ADS)

    Clark, John W.; Rafelski, Johann; Winston, Jeffrey V.

    1985-07-01

    Aspects of brain function are examined in terms of a nonlinear dynamical system of highly interconnected neuron-like binary decision elements. The model neurons operate synchronously in discrete time, according to deterministic or probabilistic equations of motion. Plasticity of the nervous system, which underlies such cognitive collective phenomena as adaptive development, learning, and memory, is represented by temporal modification of interneuronal connection strengths depending on momentary or recent neural activity. A formal basis is presented for the construction of local plasticity algorithms, or connection-modification routines, spanning a large class. To build an intuitive understanding of the behavior of discrete-time network models, extensive computer simulations have been carried out (a) for nets with fixed, quasirandom connectivity and (b) for nets with connections that evolve under one or another choice of plasticity algorithm. From the former experiments, insights are gained concerning the spontaneous emergence of order in the form of cyclic modes of neuronal activity. In the course of the latter experiments, a simple plasticity routine (“brainwashing,” or “anti-learning”) was identified which, applied to nets with initially quasirandom connectivity, creates model networks which provide more felicitous starting points for computer experiments on the engramming of content-addressable memories and on learning more generally. The potential relevance of this algorithm to developmental neurobiology and to sleep states is discussed. The model considered is at the same time a synthesis of earlier synchronous neural-network models and an elaboration upon them; accordingly, the present article offers both a focused review of the dynamical properties of such systems and a selection of new findings derived from computer simulation.
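
    The class of model described above is compact enough to state in a few lines of Python: binary neurons updated synchronously in discrete time under a probabilistic (sigmoidal) equation of motion, with a simple Hebbian-style connection modification. The constants and the particular plasticity rule below are illustrative only, not one of the paper's algorithms.

        import numpy as np

        rng = np.random.default_rng(3)
        N, steps, beta, eta = 100, 200, 2.0, 0.001
        W = rng.normal(0, 1 / np.sqrt(N), (N, N))     # quasirandom connectivity
        np.fill_diagonal(W, 0.0)
        s = (rng.random(N) < 0.5).astype(float)       # initial binary firing state

        for _ in range(steps):
            h = W @ s                                 # net input to each neuron
            p_fire = 1.0 / (1.0 + np.exp(-beta * h))  # probabilistic equation of motion
            s_new = (rng.random(N) < p_fire).astype(float)
            W += eta * np.outer(s_new, s)             # activity-dependent modification
            np.fill_diagonal(W, 0.0)
            s = s_new

        print("mean activity:", s.mean())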

  14. Relationship between inter-stimulus-intervals and intervals of autonomous activities in a neuronal network.

    PubMed

    Ito, Hidekatsu; Minoshima, Wataru; Kudoh, Suguru N

    2015-08-01

    To investigate relationships between neuronal network activity and electrical stimuli, we analyzed autonomous activity before and after stimulation. Autonomous activity was recorded from dissociated cultures of rat hippocampal neurons on a multi-electrode array (MEA) dish. Single and paired stimuli were applied to the cultured neuronal network: single stimuli were delivered every 1 min, and paired stimulation consisted of two sequential stimuli delivered every 1 min. The patterns of synchronized activity in the network changed after stimulation; in particular, long-range synchronous activities were induced by paired stimuli. When paired stimuli with inter-stimulus intervals (ISIs) of 1 s and 1.5 s were applied, relatively long-range synchronous activities emerged for the 1.5 s ISI. Thus, the temporal synchronous activity of the network changed according to the ISI of the electrical stimulus. In other words, a dissociated neuronal network can maintain given information in its temporal activity patterns, suggesting that a form of information-maintenance mechanism is implemented in a semi-artificial dissociated neuronal network. These results are a step toward technologies for manipulating neuronal activity in brain systems.

  15. Network architectures and circuit function: testing alternative hypotheses in multifunctional networks.

    PubMed

    Leonard, J L

    2000-05-01

    Understanding how species-typical movement patterns are organized in the nervous system is a central question in neurobiology. The current explanations involve 'alphabet' models in which an individual neuron may participate in the circuit for several behaviors but each behavior is specified by a specific neural circuit. However, not all of the well-studied model systems fit the 'alphabet' model. The 'equation' model provides an alternative possibility, whereby a system of parallel motor neurons, each with a unique (but overlapping) field of innervation, can account for the production of stereotyped behavior patterns by variable circuits. That is, it is possible for such patterns to arise as emergent properties of a generalized neural network in the absence of feedback, a simple version of a 'self-organizing' behavioral system. Comparison of systems of identified neurons suggests that the 'alphabet' model may account for most observations where central pattern generators (CPGs) act to organize motor patterns. Other well-known model systems, involving architectures corresponding to feed-forward neural networks with a hidden layer, may organize patterned behavior in a manner consistent with the 'equation' model. Such architectures are found in the Mauthner and reticulospinal circuits, 'escape' locomotion in cockroaches, CNS control of the Aplysia gill, and may also be important in the coordination of sensory information and motor systems in insect mushroom bodies and the vertebrate hippocampus. The hidden layer of such networks may serve as an 'internal representation' of the behavioral state and/or body position of the animal, allowing the animal to fine-tune oriented, or particularly context-sensitive, movements to the prevalent conditions. Experiments designed to distinguish between the two models in cases where they make mutually exclusive predictions provide an opportunity to elucidate the neural mechanisms by which behavior is organized in vivo and in vitro. Copyright 2000 S. Karger AG, Basel
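
    A toy Python illustration of the 'equation' model idea (entirely invented, not from the paper): with overlapping innervation fields, motor-neuron activity vectors that differ only within the null space of the innervation matrix produce the same stereotyped output, i.e., variable circuits yielding a fixed movement pattern.

        import numpy as np

        rng = np.random.default_rng(5)
        W = rng.random((3, 6))                 # 3 muscles innervated by 6 overlapping motor neurons

        base = rng.random(6)                   # one activity vector producing the pattern
        null = np.linalg.svd(W)[2][3:]         # basis of W's null space (3 directions)

        for trial in range(3):
            variable = base + null.T @ rng.normal(0, 0.5, 3)   # a different circuit realization
            print(np.round(W @ variable, 3))   # same motor output every time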

  16. A Model for the Fast Synchronous Oscillations of Firing Rate in Rat Suprachiasmatic Nucleus Neurons Cultured in a Multielectrode Array Dish

    PubMed Central

    Stepanyuk, Andrey R.; Belan, Pavel V.; Kononenko, Nikolai I.

    2014-01-01

    When dispersed and cultured in a multielectrode dish (MED), suprachiasmatic nucleus (SCN) neurons express fast oscillations of firing rate (FOFR; fast relative to the circadian cycle), with burst duration ∼10 min, and interburst interval varying from 20 to 60 min in different cells but remaining nevertheless rather regular in individual cells. In many cases, separate neurons in distant parts of the 1 mm recording area of a MED exhibited correlated FOFR. Neither the mechanism of FOFR nor the mechanism of their synchronization among neurons is known. Based on recent data implicating vasoactive intestinal polypeptide (VIP) as a key intercellular synchronizing agent, we built a model in which VIP acts as both a feedback regulator to generate FOFR in individual neurons, and a diffusible synchronizing agent to produce coherent electrical output of a neuronal network. In our model, VIP binding to its (VPAC2) receptors acts through Gs G-proteins to activate adenylyl cyclase (AC), increase intracellular cAMP, and open cyclic-nucleotide-gated (CNG) cation channels, thus depolarizing the cell and generating neuronal firing to release VIP. In parallel, slowly developing homologous desensitization and internalization of VPAC2 receptors terminates elevation of cAMP and thereby provides an interpulse silent interval. Through mathematical modeling, we show that this VIP/VPAC2/AC/cAMP/CNG-channel mechanism is sufficient for generating reliable FOFR in single neurons. When our model for FOFR is combined with a published model of VIP/VPAC2- and Per-gene-regulation-based synchronization of circadian rhythms, synchronization of circadian rhythms is significantly accelerated. These results suggest that (a) auto/paracrine regulation by VIP/VPAC2 and intracellular AC/cAMP/CNG-channels are sufficient to provide robust FOFR and synchrony among neurons in a heterogeneous network, and (b) this system may also participate in synchronization of circadian rhythms. PMID:25192180
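
    A two-variable caricature of the proposed loop, far coarser than the paper's model, can nonetheless pulse: cAMP (c) rises through receptor-driven adenylyl cyclase activity, while the sensitized receptor fraction (r) desensitizes during firing and recovers in silence. In the Python sketch below, every parameter is invented and tuned only so that the system pulses on a minutes time scale.

        import numpy as np

        def sig(c, c_half=1.0, width=0.1):           # CNG-channel / firing activation
            return 1.0 / (1.0 + np.exp(-(c - c_half) / width))

        dt, T = 0.05, 3600.0                         # time step and duration (s)
        c, r = 0.1, 1.0                              # cAMP level, sensitized receptor fraction
        trace = np.empty(int(T / dt))
        for i in range(trace.size):
            v = 0.45 + sig(c)                        # basal + activity-driven VIP drive
            dc = 2.0 * r * v - c                     # AC production minus degradation
            dr = (1.0 - r) / 600.0 - sig(c) * r / 200.0   # slow recovery vs desensitization
            c += dt * dc
            r += dt * dr
            trace[i] = sig(c)                        # proxy for firing rate

        # count pulses: upward crossings of the firing proxy through 0.5
        pulses = int(np.sum((trace[1:] > 0.5) & (trace[:-1] <= 0.5)))
        print(f"{pulses} firing-rate pulses in {T / 60:.0f} min")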

  17. Inter-synaptic learning of combination rules in a cortical network model

    PubMed Central

    Lavigne, Frédéric; Avnaïm, Francis; Dumercy, Laurent

    2014-01-01

    Selecting responses in working memory while processing combinations of stimuli depends strongly on their relations stored in long-term memory. However, the learning of XOR-like combinations of stimuli and responses according to complex rules raises the issue of the non-linear separability of the responses within the space of stimuli. One proposed solution is to add neurons that perform a stage of non-linear processing between the stimuli and responses, at the cost of increasing the network size. Based on the non-linear integration of synaptic inputs within dendritic compartments, we propose here an inter-synaptic (IS) learning algorithm that determines the probability of potentiating/depressing each synapse as a function of the co-activity of the other synapses within the same dendrite. IS learning is effective with random connectivity and requires neither a priori wiring nor additional neurons. Our results show that IS learning generates efficacy values that are sufficient for the processing of XOR-like combinations, on the basis of the sole correlational structure of the stimuli and responses. We analyze the types of dendrites involved in terms of the number of synapses from pre-synaptic neurons coding for the stimuli and responses. The synaptic efficacy values obtained show that different dendrites specialize in the detection of different combinations of stimuli. The resulting behavior of the cortical network model is analyzed as a function of inter-synaptic vs. Hebbian learning. Combinatorial priming effects show that the retrospective activity of neurons coding for the stimuli triggers XOR-like combination-selective prospective activity of neurons coding for the expected response. The synergistic effects of inter-synaptic learning and of mixed-coding neurons are simulated. The results show that, although each mechanism is sufficient by itself, their combined effects improve the performance of the network. PMID:25221529
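
    One schematic reading of the IS rule, as a Python sketch (the potentiation probability is taken to be proportional to the number of co-active synapses on the same dendrite; the paper's exact functional form and parameters may differ):

        import numpy as np

        rng = np.random.default_rng(4)
        n_dendrites, syn_per_dendrite = 20, 8
        w = np.full((n_dendrites, syn_per_dendrite), 0.5)     # initial synaptic efficacies

        for _ in range(500):
            active = rng.random(w.shape) < 0.25               # presynaptic activity pattern
            co = active.sum(axis=1, keepdims=True) - active   # co-active synapses on same dendrite
            p_pot = co / (syn_per_dendrite - 1)               # potentiation probability
            dw = np.where(rng.random(w.shape) < p_pot, 0.02, -0.01)
            w = np.clip(w + dw * active, 0.0, 1.0)            # only active synapses change

        print("mean efficacy of first dendrites:", np.round(w.mean(axis=1)[:5], 2))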

  18. Mean-field description and propagation of chaos in networks of Hodgkin-Huxley and FitzHugh-Nagumo neurons

    PubMed Central

    2012-01-01

    We derive the mean-field equations arising as the limit of a network of interacting spiking neurons, as the number of neurons goes to infinity. The neurons belong to a fixed number of populations and are represented either by the Hodgkin-Huxley model or by one of its simplified versions, the FitzHugh-Nagumo model. The synapses between neurons are either electrical or chemical. The network is assumed to be fully connected. The maximum conductances vary randomly. Under the condition that all neurons’ initial conditions are drawn independently from the same law that depends only on the population they belong to, we prove that a propagation of chaos phenomenon takes place, namely that in the mean-field limit, any finite number of neurons become independent and, within each population, have the same probability distribution. This probability distribution is a solution of a set of implicit equations, either nonlinear stochastic differential equations resembling the McKean-Vlasov equations or non-local partial differential equations resembling the McKean-Vlasov-Fokker-Planck equations. We prove the well-posedness of the McKean-Vlasov equations, i.e. the existence and uniqueness of a solution. We also show the results of some numerical experiments that indicate that the mean-field equations are a good representation of the mean activity of a finite size network, even for modest sizes. These experiments also indicate that the McKean-Vlasov-Fokker-Planck equations may be a good way to understand the mean-field dynamics through, e.g. a bifurcation analysis. Mathematics Subject Classification (2000): 60F99, 60B10, 92B20, 82C32, 82C80, 35Q80. PMID:22657695
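
    In LaTeX notation, the generic McKean-Vlasov template behind such limits can be written schematically as below; the paper's actual equations carry population indices and specific synaptic drift and diffusion terms that are omitted here.

        % Schematic McKean-Vlasov SDE and its Fokker-Planck counterpart:
        % the drift depends on the law L(V_t) of the solution itself.
        \begin{align}
          dV_t &= b\bigl(V_t,\ \mathcal{L}(V_t)\bigr)\,dt + \sigma(V_t)\,dW_t, \\
          \partial_t p(v,t) &= -\partial_v\bigl[b(v,p(\cdot,t))\,p(v,t)\bigr]
                               + \tfrac{1}{2}\,\partial_v^2\bigl[\sigma^2(v)\,p(v,t)\bigr].
        \end{align}
        % Propagation of chaos: in the infinite-size limit, any finite set of
        % neurons becomes independent, each with law p(.,t) within its population.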
